input
stringlengths
6.82k
29k
Instruction: Is prostate-specific antigen a potential new marker of androgen excess in polycystic ovary syndrome? Abstracts: abstract_id: PUBMED:17441890 Is prostate-specific antigen a potential new marker of androgen excess in polycystic ovary syndrome? Aim: To determine whether serum prostate-specific antigen (PSA) levels are increased in polycystic ovary syndrome (PCOS) and the possibility of PSA to be used as a diagnostic marker of hyperandrogenism in females. Methods: A total of 43 women with PCOS and 43 age-matched healthy females were recruited in this prospective case-control study. The subjects were compared by means of demographic parameters, hormonal and metabolic measures and serum PSA levels. The correlations between this marker and a wide variety of hormonal, biochemical, anthropometric measures were evaluated. Student's t-test, chi2-test and Spearman's correlation analysis were used for the statistical analysis where appropriate. Statistical significance was assumed with a value of P<0.05. Results: Mean body mass index, waist/hip ratio, Ferriman-Gallwey scores (FG), lutenizing hormone/follicle stimulating hormone ratio, insulin resistance, serum triglycerides and very low density lipoprotein levels were demonstrated to be significantly higher in PCOS (P=0.02, P=0.008, P<or=0.001, P<or=0.001, P=0.007, P<or=0.001, P=0.01, respectively). Total testosterone (TT), dehydroepiandrosteronesulfate (DHEAS) levels appeared to be significantly higher in PCOS (P=0.002, P<or=0.001, respectively). Serum PSA levels were found to be significantly higher in PCOS (PSA: 0.026+/-0.023 ng/mL in PCOS, PSA: 0.009+/-0.008 ng/mL in control cases, P<0.001). Sex hormone binding globulin (SHBG) was significantly lower in the subjects with PCOS (P<0.001). Additionally, positive correlations between PSA and FG scores (r: 0.417, P<or=0.001), PSA and TT (r: 0.456, P=0.03) and between PSA and DHEAS (r: 0.268, P=0.02) were found. A negative correlation between SHBG and PSA was apparent (r: -0.40, P=0.04). Conclusion: PSA is a well-established tumor marker of prostatic adenocarcinoma. It is also shown to be produced by extraprostatic tissues and fluids. As the gene expression of PSA is upregulated by the androgens and progestins in hormonally responsive tissues, hyperandrogenic syndromes such as PCOS may be associated with elevated serum PSA levels. PSA appears to be a promising marker of endogenous androgen excess in females suffering from PCOS. abstract_id: PUBMED:25883418 Reconnoitring the status of prostate specific antigen and its role in women. Prostate specific antigen is considered to be a tumour marker having maximum utility and specificity for prostate cancer since decades. After the discovery of methods to quantify different molecular fractions of prostate specific antigen (PSA), its usefulness in diagnosing early prostate cancer cases has increased tremendously. The "specificity" of PSA, is now challenged by many studies which proved that PSA, once believed to be secreted exclusively by prostatic epithelium, is also present in females. The exact biological role of extraprostatic PSA is still debatable though many theories substantiated by in vitro evidence has been put forward. With the advent of ultrasensitive analytical techniques, PSA is now quantifiable in female serum in its various molecular forms and this has led to many assumptions of it being useful as a marker in female breast cancers. In a similar scenario to prostate cancer, the ratio of free to total PSA is shown to be useful in detecting early breast cancer cases. It is also shown to be a good prognostic indicator and a predictor of response to therapy and recurrence. Apart from its role in breast cancer, it has been advocated to be a marker of hyper androgenic states in women like hirsutism and polycystic ovarian syndrome. Conflicting reports regarding the role of extra prostatic PSA is accumulating but it has been proven beyond doubt that PSA is no longer specific and confined to prostate gland. Various studies have registered that PSA is an ubiquitous molecule, secreted by hormone responsive organs and its synthesis is stimulated by androgens and progesterone but not oestrogens. In this article, a review of various literatures is done about the presence of extra prostatic PSA, its probable role in those sites as well as its utility as a tumour marker in breast cancer. abstract_id: PUBMED:27426018 Prostate specific antigen (PSA) in diagnosis of polycystic ovarian syndrome - a new insight. Introduction: Polycystic ovary syndrome (PCOS) is the commonest endocrine disorder and cause of androgen excess in women. Prostate specific antigen (PSA) could be a new marker of hyperandrogenism in PCOS. Objective: The aim of the study was to assess the concentration PSA (total PSA - TPSA and free PSA - fPSA) in 165 patients with PCOS and 40 healthy female controls, the relationship between PSA (TPSA and fPSA) and hormonal parameters and to determine the performance of PSA in diagnosis of PCOS. Results: Total PSA was higher in PCOS group versus controls. The fPSA was below the lower detection levels among all patients. The median value of FAI was 4.31 in PCOS patients versus 1.79 in controls, p < 0.001. There was a correlation of TPSA and tT (r= 0.173, p = 0.027) and TPSA and FAI (r = 0.2603, p = 0.001). AUC for FAI was 82.1%, threshold 2.56 nmol/l, for tT AUC 80.5%, threshold 0.54 ng/ml, for TPSA AUC 66.3%, threshold 0.005 ng/ml. The ROC analysis for A AUC 62.7%, threshold 3.95 ng/ml. Conclusion: PCOS women have higher serum concentration of TPSA than controls. TPSA positively correlate with T and FAI, which is the best marker for hyperandrogenic states and has better accuracy for tT and total PSA serum levels in diagnostic of PCOS. abstract_id: PUBMED:31615559 Prostatic-specific antigen (PSA) levels in patients with polycystic ovary syndrome (PCOS): a meta-analysis. Purpose: The polycystic ovary syndrome (PCOS) is a reproductive endocrine disorder, clinically characterized by oligo-ovulation/chronic anovulation, menstrual irregularities, hyperandrogenism (such as hirsutism, acne), hyperinsulinemia, and obesity. Prostatic-specific antigen (PSA) has been identified as a potential new marker in PCOS women. Although the precise role of PSA in PCOS patients still remains undetermined, PSA might serve as a useful clinical marker and might even represent a new diagnostic criterion of hyperandrogenemia in females of PCOS. Methods: A meta-analysis was performed in the study to identify the association between the polycystic ovary syndrome and prostatic-specific antigen. To identify eligible original articles, we searched a range of computerized databases, including Medline via PubMed, EMBASE, CNKI and Web of Science with a systematic searching strategy. The characteristics of each study and standard mean differences (SMD) with corresponding confidence intervals (CIs) were calculated and subgroup analysis was performed to analyze heterogeneity. Results: A total of 532 patients from seven articles were included in the meta-analysis. We identified a significant relationship between polycystic ovary syndrome and prostatic-specific antigen, with a pooled SMD of 0.81 (95% CI: 0.58 to 1.04; P < 0.01). The pooled data were calculated with the random-effects model as a moderate significant heterogeneity was found among the studies. Conclusions: The meta-analysis suggested that there was a significant association between the polycystic ovary syndrome and prostatic-specific antigen and we should not ignore the role of PSA in the PCOS patients in clinical. abstract_id: PUBMED:27259815 Prostate-Specific Antigen as a Marker of Hyperandrogenism in Women and Its Implications for Antidoping. Background: Since its discovery in the 1970s, prostate-specific antigen (PSA) has become widely known as a biomarker of prostate cancer in males but has often been overlooked in female malignancies. Although the serum concentration of PSA differs between men and women by about 1000-fold, studies have suggested that PSA concentrations drastically differ among healthy females and those who exhibit increased androgen production. Content: There have been reports of increased PSA expression in women exhibiting hyperandrogenic states, including polycystic ovary syndrome and hirsutism, as well as marked increases in a subset of breast cancer patients. These findings have not only revealed the remarkable diagnostic potential of PSA in a diverse range of clinical conditions but also point to its potential of becoming a useful biomarker of steroid hormone doping among female athletes. Recently, highly sensitive assays that can measure PSA at low limits of detection have been developed, which will aid in the discrimination of PSA between these different conditions. Summary: The overall aim of this review is to revisit the expression of PSA in hormonally-regulated tissues and in female malignancies, and to demonstrate how the regulation of PSA permits its use in antidoping initiatives. abstract_id: PUBMED:16699296 Serum PSA levels are not affected by the menstrual cycle or the menopause, but are increased in subjects with polycystic ovary syndrome. Background/aims: Prostatic specific antigen (PSA) is the most specific prostatic tumor marker in man. Recently, PSA has been detected in a variety of tissues and fluids in women, and its determination suggested as a marker of hyperandrogenism. However, precise information about the physiology of PSA in females is not available. The goal of this study was to assess serum concentrations of PSA in healthy pre-menopausal women (healthy pre-menopausal group), menopausal women (menopause group) and patients with polycystic ovary syndrome (PCOS group). Methods: PSA, androgens, LH, FSH, 17-beta-estradiol (E2), progesterone (Pg) were assessed in 40 post-menopausal women, 35 fertile controls and 35 women with PCOS. Results: No significant difference in PSA concentrations could be demonstrated in different phases of the menstrual cycle in healthy pre-menopausal group and between pre- and post-menopausal groups. No correlations could be demonstrated between serum PSA levels and the following parameters: age, body mass index (BMI), LH, FSH, E2, testosterone (T), DHEAS, and SHBG, both in pre- and post-menopausal women. Significantly higher PSA levels (median=14 pg/ml) were found in the PCOS group compared to both pre-menopausal (median=5 pg/ml) and menopausal (median= 5 pg/ml) groups (p< 0.05). Conclusions: only minor fluctuations of serum PSA concentrations are observed in healthy pre- and post-menopausal women, while serum level is higher in PCOS, and therefore PSA can be considered a suitable marker of female hyperandrogenism. abstract_id: PUBMED:11297583 Prostate-specific antigen and human glandular kallikrein 2 are markedly elevated in urine of patients with polycystic ovary syndrome. Prostate-specific antigen (PSA) is a well-established tumor marker of prostatic adenocarcinoma. Human glandular kallikrein 2 (hK2), another serine protease closely related to PSA, is also gaining ground as a promising diagnostic tool in prostate cancer. The expression of these 2 proteases is known to be regulated by androgens and progestins in hormonally responsive tissues, such as the male prostate and the female breast. Previously, we have shown that serum PSA levels in normal women are very low but still detectable by ultrasensitive PSA immunoassays. We have also demonstrated that some women with hyperandrogenic syndromes have elevated serum PSA levels. In this study, we have measured urinary PSA and urinary hK2 levels in 35 polycystic ovary syndrome (PCOS) patients and compared them to those of 41 age-matched controls. We found that urinary PSA levels were significantly higher (P < 0.0001) in PCOS patients (mean +/- SE = 820 +/- 344 ng/L) than in the controls (mean +/- SE = 4.3 +/- 1.8 ng/L). Similarly, the difference between urinary hK2 of patients (mean +/- SE = 8.2 +/- 3.1 ng/L) and controls (0.5 +/- 0.3 ng/L) was also significant (P < 0.001). A weak correlation was observed between urinary PSA and serum 3 alpha-androstanediol glucuronide (r(s) = 0.42, P = 0.03) as well as between urinary PSA and serum testosterone (r(s) = 0.40, P = 0.04). The results of this study indicate that urinary PSA, and possibly urinary hK2, are promising markers of hyperandrogenism in females suffering from PCOS. abstract_id: PUBMED:15233555 Serum prostate specific antigen levels in women with polycystic ovary syndrome and the effect of flutamide+desogestrel/ethinyl estradiol combination. Background: Prostate-specific antigen (PSA) is expressed in many female tissues and its concentrations were higher in hirsute subjects. We aimed to determine serum PSA level in hirsute women and evaluate the effect of flutamide+desogestrel/ethynil estradiol combination. Subjects And Study Design: Thirty patients with polycystic ovary syndrome (PCOS) and 30 healthy controls were studied. Hirsutism was defined by modified Ferriman-Gallwey score (FGS). Free androgen index (FAI) was used for hyperandrogenism. Patients received flutamide (500 mg/d) and oral contraceptive (desogestrel+ethinyl estradiol) for 9 months. Results: Mean FGS (p<0.0001), insulin (p<0.01), FAI (0.0001), androstenedione (p<0.0001), LH (p<0.05), and free testosterone (p<0.003) levels of patients with PCOS were higher than the control group. Mean serum total and free PSA level of PCOS patients were higher than the control group (p<0.0001 and p<0.0001). We found a positive correlation between total PSA levels and FGS (r=0.568, p<0.001), FAI and FGS (r=0.456 and p<0.01). There was also a positive correlation between FAI and total PSA (r=0.503 and p<0.005). At the end of treatment, FGS, androstenedione, free and total testosterone, FAI, serum PSA and LH levels decreased significantly [serum total PSA was 0.0208 +/- 0.0178 ng/ml at baseline and 0.0061 +/- 0.0044 ng/ml after treatment (p<0.0001)]. Conclusions: 1. Serum prostate specific antigen level is higher in patients with PCOS; 2. There is a positive correlation among FGS, FAI and PSA levels; 3. Serum PSA levels decrease with antiandrogen treatment; 4. Serum PSA measurement might be a marker for hirsutism. abstract_id: PUBMED:37931646 N6-methyladenosine demethylase FTO related to hyperandrogenism in PCOS via AKT pathway. Background: Polycystic ovary syndrome (PCOS) was known as the common endocrine disease in women, featured as hyperandrogenism, ovulation disorders, etc. Fat mass and obesity-associated protein (FTO), a m6A demethylase, is abnormal in the occurrence of ovarian diseases. However, the mechanism of FTO in the pathogenesis of PCOS is still unclear. Methods: The level of FTO in clinical samples, PCOS rat with hyperandrogenism and granulosa cells (GCs) lines effected by DHT were investigated by ELISA, qRT-PCR, WB, and IHC, while m6A RNA methylation level was studied by m6A Colorimetric and androgen level was tested through ELISA. Changes in steroid hormone synthetase and androgen receptor (AR)/prostate-specific antigen (PSA) levels in vitro were visualized by WB after transient transfection silenced FTO. The effect of DHT combined with FTO inhibitor meclofenamic acid (MA) on FTO, AR/PSA, and AKT phosphorylation were also demonstrated by WB. The co-localization of FTO and AR in KGN cells was analyzed by confocal microscopy, and the physiological interaction between FTO and AR was studied by Co-IP assay. The effect of FTO-specific inhibitor MA, AKT phosphorylation inhibitor LY294002, and the combined them on GCs proliferation and cell cycle were evaluated by drug combination index, EDU assay, and flow cytometry analysis. Results: FTO expression was upregulated in follicular fluid and GCs in PCOS patients clinically. The high FTO expression in patients was negative with the level of m6A, but positive with the level of androgen. The upregulation of FTO was accompanied with a decrease in the level of m6A in PCOS rat with hyperandrogenism. Dihydrotestosterone (DHT) promoted the FTO expression and inhibited m6A content as a dose-dependent way in vitro. In contrast, suppression of FTO with siRNA attenuated the expression of steroid hormone synthetase such as CYP11A1, CYP17A1, HSD11B1, HSD3B2 except CYP19A1 synthetase, ultimately inducing the decrease of androgen level. Suppression of FTO also decreased the biological activity of androgen through downregulation AR/PSA. MA treatment as the specific FTO antagonist decreased cell survival in time- and dose-dependent way in GCs lines. Correspondingly, MA treatment decreased the expression of FTO, AR/PSA expression, and AKT phosphorylation in the presence of DHT stimulation. Additionally, we also speculate there is a potential relation between FTO and AR according to FTO was co-localized and interacted with AR in KGN cells. Compared with AKT phosphorylation inhibitor LY294002 or MA alone, LY294002 combined with MA synergistically inhibited cell survival and increased G2/M phase arrest in GC line. Conclusions: We first evaluated the correlation of FTO and m6A in PCOS clinically, and further explored the mechanism between FTO and hyperandrogenism in PCOS animal and cell models. These findings contributed the potential therapy by targeting the FTO for hyperandrogenism in PCOS. abstract_id: PUBMED:28361781 Serum complexed and free prostate-specific antigen (PSA) for the diagnosis of the polycystic ovarian syndrome (PCOS). Background: Polycystic ovarian syndrome (PCOS) is a common cause of reproductive and metabolic dysfunction. We hypothesized that serum prostate-specific antigen (PSA) may constitute a new biomarker for hyperandrogenism in PCOS. Methods: We conducted a cross-sectional study of 45 women with PCOS and 40 controls. Serum from these women was analyzed for androgenic steroids and for complexed PSA (cPSA) and free PSA (fPSA) with a novel fifth- generation assay with a sensitivity of ~10 fg/mL for cPSA and 140 fg/mL for fPSA. Results: cPSA and fPSA levels were about three times higher in PCOS compared to controls. However, in PCOS, cPSA and fPSA did not differ according to waist-to-hip ratio, Ferriman-Gallwey score, or degree of hyperandrogenemia or oligo-ovulation. In PCOS and control women, serum cPSA and fPSA levels were highly correlated with each other, and with free and total testosterone levels, but not with other hormones. Adjusting for age, body mass index (BMI) and race, cPSA was significantly associated with PCOS, with an odds ratio (OR) of 5.67 (95% confidence interval [CI]: 1.86, 22.0). The OR of PCOS for fPSA was 7.04 (95% CI: 1.65, 40.4). A multivariate model that included age, BMI, race and cPSA yielded an area-under-the-receiver-operating-characteristic curve of 0.89. Conclusions: Serum cPSA and fPSA are novel biomarkers for hyperandrogenism in PCOS and may have value for disease diagnosis. Answer: Yes, prostate-specific antigen (PSA) is considered a potential new marker of androgen excess in polycystic ovary syndrome (PCOS). Several studies have investigated the relationship between PSA levels and PCOS, a common endocrine disorder characterized by hyperandrogenism and ovulation disorders. Research has shown that serum PSA levels are significantly higher in women with PCOS compared to age-matched healthy controls (PUBMED:17441890). This elevation in PSA levels correlates positively with other markers of hyperandrogenism, such as total testosterone (TT), dehydroepiandrosteronesulfate (DHEAS), and the Ferriman-Gallwey scores (FG), which assess the severity of hirsutism (PUBMED:17441890). Additionally, a negative correlation between sex hormone-binding globulin (SHBG) and PSA has been observed, suggesting that PSA levels may reflect the bioavailability of androgens in PCOS (PUBMED:17441890). Further studies have supported the notion that PSA, traditionally known as a male-specific tumor marker for prostate cancer, is also present in females and may serve as a marker for hyperandrogenic states such as PCOS (PUBMED:25883418). PSA levels in women with PCOS have been found to be higher than in healthy controls, and PSA positively correlates with testosterone and the free androgen index (FAI), which is a marker for hyperandrogenic states (PUBMED:27426018). A meta-analysis has also identified a significant association between PCOS and PSA, suggesting that PSA could be a useful clinical marker and might even represent a new diagnostic criterion of hyperandrogenemia in females with PCOS (PUBMED:31615559). Moreover, PSA has been implicated as a marker for hirsutism and its levels decrease with antiandrogen treatment (PUBMED:15233555). In summary, the evidence suggests that PSA is a promising marker of endogenous androgen excess in females suffering from PCOS and could potentially be used in the diagnosis and monitoring of this condition (PUBMED:17441890, PUBMED:25883418, PUBMED:27426018, PUBMED:31615559, PUBMED:15233555).
Instruction: Is blood glucose on admission a predictor of mortality in adult acute pneumonia? Abstracts: abstract_id: PUBMED:22974193 Is blood glucose on admission a predictor of mortality in adult acute pneumonia? Background: Even if hyperglycaemia is often identified as an independent risk factor for developing respiratory tract infection, only few studies have investigated this relationship. The aim of this study is to investigate if plasma glucose on admission is related with in-hospital mortality among patients with pneumonia and to identify the glycaemic range with significant reductions of mortality risks in non-intensive care patients. Methods: Data come from administrative records of 1018 non-intensive care patients hospitalised with diagnosis of pneumonia. For every patient, administrative records were linked with the plasma glucose. A multivariate logistic regression model was performed in order to evaluate the associations between in-hospital mortality and a set of demographic and clinical variables. Plasma glucose was added to the model as restricted cubic spline; risk estimates for hypoglycaemic and hyperglycaemic patients have been derived on the basis of this nonlinear model and presented with two values of odds ratio (OR). Results: The minimal risk of in-hospital mortality was found at plasma glucose levels of mean 86 mg/dL [95% confidence interval (CI) 61-102]. The adjusted OR of deaths for plasma glucose on admission for hypoglycaemic patients (below 86 mg/dL) is 0.78 (95% CI 0.62-0.98) for each 10 mg/dL of decrease, whereas for hyperglycaemic patients (above 86 mg/dL), the OR is 1.33 (95% CI 1.07-1.66) for each 10 mg/dL of increase in plasma glucose. Conclusions: Our observations suggest that in non-intensive care patients, hypoglycaemia, as hyperglycaemia, is associated with in-hospital mortality. abstract_id: PUBMED:32647915 Fasting blood glucose at admission is an independent predictor for 28-day mortality in patients with COVID-19 without previous diagnosis of diabetes: a multi-centre retrospective study. Aims/hypothesis: Hyperglycaemia is associated with an elevated risk of mortality in community-acquired pneumonia, stroke, acute myocardial infarction, trauma and surgery, among other conditions. In this study, we examined the relationship between fasting blood glucose (FBG) and 28-day mortality in coronavirus disease 2019 (COVID-19) patients not previously diagnosed as having diabetes. Methods: We conducted a retrospective study involving all consecutive COVID-19 patients with a definitive 28-day outcome and FBG measurement at admission from 24 January 2020 to 10 February 2020 in two hospitals based in Wuhan, China. Demographic and clinical data, 28-day outcomes, in-hospital complications and CRB-65 scores of COVID-19 patients in the two hospitals were analysed. CRB-65 is an effective measure for assessing the severity of pneumonia and is based on four indicators, i.e. confusion, respiratory rate (>30/min), systolic blood pressure (≤90 mmHg) or diastolic blood pressure (≤60 mmHg), and age (≥65 years). Results: Six hundred and five COVID-19 patients were enrolled, including 114 who died in hospital. Multivariable Cox regression analysis showed that age (HR 1.02 [95% CI 1.00, 1.04]), male sex (HR 1.75 [95% CI 1.17, 2.60]), CRB-65 score 1-2 (HR 2.68 [95% CI 1.56, 4.59]), CRB-65 score 3-4 (HR 5.25 [95% CI 2.05, 13.43]) and FBG ≥7.0 mmol/l (HR 2.30 [95% CI 1.49, 3.55]) were independent predictors for 28-day mortality. The OR for 28-day in-hospital complications in those with FBG ≥7.0 mmol/l and 6.1-6.9 mmol/l vs <6.1 mmol/l was 3.99 (95% CI 2.71, 5.88) or 2.61 (95% CI 1.64, 4.41), respectively. Conclusions/interpretation: FBG ≥7.0 mmol/l at admission is an independent predictor for 28-day mortality in patients with COVID-19 without previous diagnosis of diabetes. Glycaemic testing and control are important to all COVID-19 patients even where they have no pre-existing diabetes, as most COVID-19 patients are prone to glucose metabolic disorders. Graphical abstract. abstract_id: PUBMED:35871756 Association of admission blood glucose level and clinical outcomes in elderly community-acquired pneumonia patients with or without diabetes. Introduction: Community-acquired pneumonia (CAP) is the major cause of infection-related mortality worldwide. Patients with CAP frequently present with admission hyperglycemia. Objectives: The aim of this study was to evaluate the association between admission blood glucose (ABG) level and clinical outcomes in elderly CAP patients (≥80 years of age) with or without diabetes. Methods: In this single center retrospective study, 290 elderly patients diagnosed with CAP were included. Demographic and clinical information were collected and compared. The associations between admission blood glucose level and the 30-day mortality as well as intensive care unit (ICU) admission and invasive mechanical ventilation (IMV) in elderly CAP patients with or without diabetes were assessed. Results: Of the 290 eligible patients with CAP, 159 (66.5%) patients were male, and 64 (22.1%) had a known history of diabetes at hospital admission. After adjusting for age and sex, the logistic regression analysis had identified several risk factors that might be associated with clinical outcomes in elderly patients with CAP. Multivariable logistic regression analysis revealed that admission glucose level > 11.1 mmol/L was significant associated with ICU admission, IMV, and 30-day mortality both in non-diabetic and diabetic patients. Furthermore, Kaplan-Meier analysis indicated that patients with higher admission glucose level were correlated statistically significantly with 30-day mortality in patients with CAP (P < 0.001). Conclusion: Admission blood glucose is correlated with 30-day hospital mortality, ICU admission, and IMV of CAP in elderly patients with and without diabetes. Specially, admission glucose > 11.1 mmol/L was a significant risk factor for 30-day hospital mortality. abstract_id: PUBMED:30764852 Combined utility of white blood cell count and blood glucose for predicting in-hospital outcomes in acute ischemic stroke. Background: High white blood cell (WBC) count and high blood glucose level are risk factors for mortality and pneumonia after acute ischemic stroke (AIS). We investigated the combined effect of high WBC count and high blood glucose level on hospital admission and in-hospital mortality and pneumonia in acute AIS patients. Methods: A total of 3124 AIS patients enrolled from December 2013 to May 2014 across 22 hospitals in Suzhou city were included in the present study. We divided patients into four groups according to their level of WBC count and blood glucose: NWNG (normal WBC count and normal glucose), NWHG (normal WBC count and higher glucose), HWNG (higher WBC count and normal glucose), and HWHG (higher WBC count and higher glucose). Cox proportional hazard model and logistic regression model were used to estimate the combined effect of WBC count and blood glucose on all-cause in-hospital mortality and pneumonia in AIS patients. Results: HWHG was associated with a 2.22-fold increase in the risk of in-hospital mortality in comparison to NWNG (adjusted hazard ratio [HR] 2.22; 95% confidence interval [CI], 1.21-4.07; P trend = 0.003). The risk of pneumonia was significantly higher in patients with HWHG compared to those with NWNG (adjusted odds ratio [OR] 2.61; 95% CI, 1.66-4.10; P trend < 0.001). The C-statistic for the combined WBC count and blood glucose was higher than WBC count or blood glucose alone for prediction of in-hospital mortality and pneumonia (all p < 0.01). Conclusions: High WBC count combined with high blood glucose level at admission was independently associated with in-hospital mortality and pneumonia in AIS patients. Moreover, the combination of WBC count and blood glucose level appeared to be a better predictor than WBC count or blood glucose alone. abstract_id: PUBMED:27079139 The prognostic significance of admission blood glucose levels in elderly patients with pneumonia (GAP Study). Aims: Evaluate the association between admission blood glucose (ABG) and short and long-term mortality following hospitalization for pneumonia of elderly patients with and without diabetes mellitus (DM). Methods: Observational data derived from the electronic records of hospitalized patients ≥65years, admitted for pneumonia between January 2011 and December 2013. ABG levels were classified to categories: ≤70 (low), 70-110 (normal), 111-140 (mildly elevated), 141-199mg/dl (moderately elevated) and ≥200mg/dl (markedly elevated). Main outcomes were all-cause mortality rates at various time points. Results: Cohort included 2164 patients, 743 with DM (mean age 81, 53% male) and 1421 without it (mean age 83, 52% male). There was a significant interaction between DM, ABG and mortality (p≤0.05). In patients without DM, compared with normal ABG, in-hospital and 30-day mortality rates (adjusted hazard ratio, 95% CI) were higher with moderately (1.5 and 1.4, respectively, p<0.05) and markedly elevated ABG (2.7 and 1.9, respectively, p<0.05). Long-term results were similar at 12 and 36months (1.3 and 1.8, respectively, p<0.05, for moderately and markedly elevated ABG). Conclusion: In elderly non-diabetic patients hospitalized for pneumonia, moderately and markedly elevated ABG is associated with increased short- and long-term mortality. In diabetic patients there is no association between ABG and mortality. abstract_id: PUBMED:32617025 Hypercapnia at Hospital Admission as a Predictor of Mortality. Introduction: Hypercapnia is an indicator of ventilatory exhaustion. There is some disagreement regarding whether hypercapnia is also a predictor of mortality. In this prospective study, we aimed to investigate whether hypercapnia can predict in-hospital and 1-year mortality rates in patients with dyspnea or pulmonary diseases. Patients And Methods: All patients with dyspnea or pulmonary diseases underwent routine blood gas analysis at hospital admission. During the 12-month enrollment period, 2710 patients were enrolled, and 588 patients with hypercapnia at admission were identified. Of the 1626 normocapnic patients, 62 were randomly selected as controls. In-hospital and 1-year mortality rates were determined. Results: There were significant increases in mortality rate between acute hypercapnic patients and both chronic hypercapnic patients and normocapnic controls. Their in-hospital mortality rates were 17%, 6.7% and 3.2%, respectively. Their 1-year mortality rates were 32%, 20.2% and 14.5%, respectively. The 1-year mortality rates of hypercapnic patients with different underlying diseases were 24.6% (chronic obstructive pulmonary disease), 28.4% (congestive heart disease), 1.6% (obstructive sleep apnea syndrome/obesity hypoventilation syndrome), 50.9% (pneumonia), 0% (suppressed central respiratory drive, primarily due to opiate abuse) and 22.8% (other conditions). Discussion: The 1-year mortality rate of patients with acute hypercapnia at hospital admission was 32%, with significant differences compared to chronic hypercapnic patients (20.2%) and normocapnic patients (14.5%). There was a wide range of 1-year mortality rates between the hypercapnic patients with different underlying diseases. abstract_id: PUBMED:24190147 Association of hyperglycemia on admission and during hospitalization with mortality in diabetic patients admitted for pneumonia. Objective: Information available on the clinical features and outcomes of pneumonia in diabetic patients is limited. There are no data on the association between glycemic control during hospitalization and mortality in this population. The objective of this study is to examine whether the presence of hyperglycemia on admission and during hospitalization is associated with mortality in diabetic patients admitted to the hospital for pneumonia. Methods: This study is a retrospective observational cohort study of diabetic adults hospitalized for the first time for pneumonia between 2005 and 2011 in a 358-bed community hospital. Univariate and multivariate analyses were performed for 30-day all-cause hospital mortality adjusted for sex, age, type of pneumonia (community-acquired pneumonia or nursing and health care-associated pneumonia), severity of pneumonia according to the A-DROP score and various comorbidities in consideration of the serum glucose and hemoglobin A1c levels on admission and the mean plasma glucose level during hospitalization. Results: Of the 1,499 pneumonia patients evaluated, 185 (12.3%) (mean age 75 years) had diabetes mellitus. Fourteen (7.6%) of the 185 diabetic patients died within 30 days after admission. According to the univariate analysis, 30-day mortality was significantly associated with the A-DROP score (p<0.0001), the admission glucose level (p=0.01) and the mean plasma glucose level during hospitalization (p<0.0001). Even after adjusting for factors related to the severity of pneumonia, the mean plasma glucose level during hospitalization remained significantly associated with 30-day mortality (p=0.004). Conclusion: Hyperglycemia determined according to the mean plasma glucose level during hospitalization is independently associated with 30-day all-cause hospital mortality in diabetic patients admitted for pneumonia. abstract_id: PUBMED:22182876 Elevated admission systolic blood pressure after blunt trauma predicts delayed pneumonia and mortality. Background: Although avoiding hypotension is a primary focus after trauma, elevated systolic blood pressure (SBP) is frequently disregarded. The purpose of this study was to determine the association between elevated admission SBP and delayed outcomes after trauma. Methods: The Los Angeles County Trauma System Database was queried for all patients between 2003 and 2008 with blunt injuries who survived for at least 2 days after admission. Demographics and outcomes (pneumonia and mortality) were compared at various admission SBP subgroups (≥160 mm Hg, ≥170 mm Hg, ≥180 mm Hg, ≥190 mm Hg, ≥200 mm Hg, ≥210 mm Hg, and ≥220 mm Hg). Patients with moderate-to-severe traumatic brain injury (TBI), defined as head Abbreviated Injury Score ≥3, were then identified and compared with those without using multivariable logistic regression. Results: Data accessed from 14,382 blunt trauma admissions identified 2,601 patients with moderate-to-severe TBI (TBI group) and 11,781 without moderate-to-severe TBI (non-TBI group) who were hospitalized ≥2 days. Overall mortality was 2.9%, 7.1% for TBI patients, and 1.9% for non-TBI patients. Overall pneumonia was 4.6%, 9.5% for TBI patients, and 3.6% for non-TBI patients. Regression modeling determined SBP ≥160 mm Hg was a significant predictor of mortality in TBI patients (adjusted odds ratio [AOR], 1.59; confidence interval [CI], 1.10-2.29; p = 0.03) and non-TBI patients (AOR, 1.47; CI, 1.14-1.90; p = 0.003). Similarly, SBP ≥160 mm Hg was a significant predictor for increased pneumonia in TBI patients (AOR, 1.79; CI, 1.30-2.46; p = 0.0004), compared with non-TBI patients (AOR, 1.28; CI, 0.97-1.69; p = 0.08). Conclusions: In blunt trauma patients with or without TBI, elevated admission SBP was associated with worse delayed outcomes. Prospective research is necessary to determine whether algorithms that manage elevated blood pressure after trauma, especially after TBI, affect mortality or pneumonia. abstract_id: PUBMED:33054674 High Admission Glucose Is Associated With Poor Outcome After Endovascular Treatment for Ischemic Stroke. Background And Purpose: High-serum glucose on admission is a predictor of poor outcome after stroke. We assessed the association between glucose concentrations and clinical outcomes in patients who underwent endovascular treatment. Methods: From the MR CLEAN Registry, we selected consecutive adult patients with a large vessel occlusion of the anterior circulation who underwent endovascular treatment and for whom admission glucose levels were available. We assessed the association between admission glucose and the modified Rankin Scale score at 90 days, symptomatic intracranial hemorrhage and successful reperfusion rates. Hyperglycemia was defined as admission glucose ≥7.8 mmol/L. We evaluated the association between glucose and modified Rankin Scale using multivariable ordinal logistic regression and assessed whether successful reperfusion (extended Thrombolysis in Cerebral Infarction 2b-3) modified this association. Results: Of 3637 patients in the MR CLEAN Registry, 2908 were included. Median admission glucose concentration was 6.8 mmol/L (interquartile range, 5.9-8.1) and 882 patients (30%) had hyperglycemia. Hyperglycemia on admission was associated with a shift toward worse functional outcome (median modified Rankin Scale score 4 versus 3; adjusted common odds ratio, 1.69 [95% CI, 1.44-1.99]), increased mortality (40% versus 23%; adjusted odds ratio, 1.95 [95% CI, 1.60-2.38]), and an increased risk of symptomatic intracranial hemorrhage (9% versus 5%; adjusted odds ratio, 1.94 [95% CI, 1.41-2.66]) compared with nonhyperglycemic patients. The association between admission glucose levels and poor outcome (modified Rankin Scale score 3-6) was J-shaped. Hyperglycemia was not associated with the rate of successful reperfusion nor did successful reperfusion modify the association between glucose and functional outcome. Conclusions: Increased admission glucose is associated with poor functional outcome and an increased risk of symptomatic intracranial hemorrhage after endovascular treatment. abstract_id: PUBMED:35924142 Survival status and predictor of mortality among adult stroke patients in Saint Paul's hospital millennium medical college, Addis Ababa, Ethiopia. Objective: In Africa, there is up to 316 per 100,000 annual incidence rate of stroke, a prevalence of up to 1460 per 100,000, and a 3-year mortality rate higher than 80%. The incidence of stroke mortality in Ethiopia is 19.2%. Stroke is a major cause of disability and death worldwide. Early mortality in-hospital and deprived functional outcomes are common in acute stroke. This study aims to assess survival status and predictors of mortality among adult stroke patients. Methods: Institution-based retrospective cohort was conducted on patients who were admitted to Saint Paul's Hospital Millennium Medical College. A simple random sampling technique was used. Bivariate and multivariable cox regression analysis were used to identify predictors of stroke mortality. For multivariable cox regression analysis statistically, significance was declared at p-value <0.05 and 95% CI. Results: A total of 251 adult stroke patient charts were included in the study. The overall Kaplan-Meier estimate showed that the probability of in-hospital stroke patients' survival on the 41st day was nearly 50%. In this study, 77 (30.7%) of the study participants died. Decreased Glasgow Coma Scale (GCS) with adjusted hazard ratio (AHR) 4.26 (95% CI: 2.25-8.07) and p-value ⩽ 0.01 and presence of pneumonia with AHR 3.05 (95% CI: 1.29-7.21) and p-value ⩽ 0.01 are the predictor of mortality. Conclusion: The probability of survival relatively falls as follow-up time increases. The overall incidence of mortality among adult stroke patients was high. The decreased GCS and the presence of pneumonia are predictors of mortality among adult stroke patients. Answer: Yes, blood glucose on admission is a predictor of mortality in adult acute pneumonia. Studies have shown that both hyperglycemia and hypoglycemia on admission are associated with increased in-hospital mortality in patients with pneumonia. Specifically, one study found that the minimal risk of in-hospital mortality was at plasma glucose levels of mean 86 mg/dL, with an adjusted odds ratio (OR) of deaths for plasma glucose on admission for hypoglycemic patients (below 86 mg/dL) being 0.78 for each 10 mg/dL decrease, and for hyperglycemic patients (above 86 mg/dL), the OR was 1.33 for each 10 mg/dL increase in plasma glucose (PUBMED:22974193). Additionally, another study focusing on elderly community-acquired pneumonia patients, with or without diabetes, found that admission blood glucose level was correlated with 30-day hospital mortality, ICU admission, and invasive mechanical ventilation (IMV). In particular, an admission glucose level > 11.1 mmol/L was significantly associated with these outcomes in both non-diabetic and diabetic patients (PUBMED:35871756). Furthermore, in the context of COVID-19, fasting blood glucose (FBG) at admission was found to be an independent predictor for 28-day mortality in patients without a previous diagnosis of diabetes, suggesting that glycaemic testing and control are important for all COVID-19 patients, even those without pre-existing diabetes (PUBMED:32647915). In summary, blood glucose levels on admission are important predictors of mortality in adult patients with acute pneumonia, and careful monitoring and management of blood glucose may be crucial for improving outcomes in these patients.
Instruction: Is schizophrenia a neurodegenerative disease? Abstracts: abstract_id: PUBMED:17413023 Tracking Alzheimer's disease. Population-based brain mapping provides great insight into the trajectory of aging and dementia, as well as brain changes that normally occur over the human life span. We describe three novel brain mapping techniques, cortical thickness mapping, tensor-based morphometry (TBM), and hippocampal surface modeling, which offer enormous power for measuring disease progression in drug trials, and shed light on the neuroscience of brain degeneration in Alzheimer's disease (AD) and mild cognitive impairment (MCI). We report the first time-lapse maps of cortical atrophy spreading dynamically in the living brain, based on averaging data from populations of subjects with Alzheimer's disease and normal subjects imaged longitudinally with MRI. These dynamic sequences show a rapidly advancing wave of cortical atrophy sweeping from limbic and temporal cortices into higher-order association and ultimately primary sensorimotor areas, in a pattern that correlates with cognitive decline. A complementary technique, TBM, reveals the 3D profile of atrophic rates, at each point in the brain. A third technique, hippocampal surface modeling, plots the profile of shape alterations across the hippocampal surface. The three techniques provide moderate to highly automated analyses of images, have been validated on hundreds of scans, and are sensitive to clinically relevant changes in individual patients and groups undergoing different drug treatments. We compare time-lapse maps of AD, MCI, and other dementias, correlate these changes with cognition, and relate them to similar time-lapse maps of childhood development, schizophrenia, and HIV-associated brain degeneration. Strengths and weaknesses of these different imaging measures for basic neuroscience and drug trials are discussed. abstract_id: PUBMED:15176463 Single-cell gene expression analysis: implications for neurodegenerative and neuropsychiatric disorders. Technical and experimental advances in microaspiration techniques, RNA amplification, quantitative real-time polymerase chain reaction (qPCR), and cDNA microarray analysis have led to an increase in the number of studies of single-cell gene expression. In particular, the central nervous system (CNS) is an ideal structure to apply single-cell gene expression paradigms. Unlike an organ that is composed of one principal cell type, the brain contains a constellation of neuronal and noneuronal populations of cells. A goal is to sample gene expression from similar cell types within a defined region without potential contamination by expression profiles of adjacent neuronal subpopulations and noneuronal cells. The unprecedented resolution afforded by single-cell RNA analysis in combination with cDNA microarrays and qPCR-based analyses allows for relative gene expression level comparisons across cell types under different experimental conditions and disease states. The ability to analyze single cells is an important distinction from global and regional assessments of mRNA expression and can be applied to optimally prepared tissues from animal models as well as postmortem human brain tissues. This focused review illustrates the potential power of single-cell gene expression studies within the CNS in relation to neurodegenerative and neuropsychiatric disorders such as Alzheimer's disease (AD) and schizophrenia, respectively. abstract_id: PUBMED:35549603 What are the challenges with multi-targeted drug design for complex diseases? Introduction: Current findings on multifactorial diseases with a complex pathomechanism confirm that multi-target drugs are more efficient ways in treating them as opposed to single-target drugs. However, to design multi-target ligands, a number of factors and challenges must be taken into account. Areas Covered: In this perspective, we summarize the concept of application of multi-target drugs for the treatment of complex diseases such as neurodegenerative diseases, schizophrenia, diabetes, and cancer. We discuss the aspects of target selection for multifunctional ligands and the application of in silico methods in their design and optimization. Furthermore, we highlight other challenges such as balancing affinities to different targets and drug-likeness of obtained compounds. Finally, we present success stories in the design of multi-target ligands for the treatment of common complex diseases. Expert Opinion: Despite numerous challenges resulting from the design of multi-target ligands, these efforts are worth making. Appropriate target selection, activity balancing, and ligand drug-likeness belong to key aspects in the design of ligands acting on multiple targets. It should be emphasized that in silico methods, in particular inverse docking, pharmacophore modeling, machine learning methods and approaches derived from network pharmacology are valuable tools for the design of multi-target drugs. abstract_id: PUBMED:23503168 Genomic and epigenomic insights into nutrition and brain disorders. Considerable evidence links many neuropsychiatric, neurodevelopmental and neurodegenerative disorders with multiple complex interactions between genetics and environmental factors such as nutrition. Mental health problems, autism, eating disorders, Alzheimer's disease, schizophrenia, Parkinson's disease and brain tumours are related to individual variability in numerous protein-coding and non-coding regions of the genome. However, genotype does not necessarily determine neurological phenotype because the epigenome modulates gene expression in response to endogenous and exogenous regulators, throughout the life-cycle. Studies using both genome-wide analysis of multiple genes and comprehensive analysis of specific genes are providing new insights into genetic and epigenetic mechanisms underlying nutrition and neuroscience. This review provides a critical evaluation of the following related areas: (1) recent advances in genomic and epigenomic technologies, and their relevance to brain disorders; (2) the emerging role of non-coding RNAs as key regulators of transcription, epigenetic processes and gene silencing; (3) novel approaches to nutrition, epigenetics and neuroscience; (4) gene-environment interactions, especially in the serotonergic system, as a paradigm of the multiple signalling pathways affected in neuropsychiatric and neurological disorders. Current and future advances in these four areas should contribute significantly to the prevention, amelioration and treatment of multiple devastating brain disorders. abstract_id: PUBMED:37627253 Counting the Toll of Inflammation on Schizophrenia-A Potential Role for Toll-like Receptors. Toll-like receptors (TLRs) are a family of pattern recognition receptors (PRRs) that are ubiquitously expressed in the human body. They protect the brain and central nervous system from self and foreign antigens/pathogens. The immune response elicited by these receptors culminates in the release of cytokines, chemokines, and interferons causing an inflammatory response, which can be both beneficial and harmful to neurodevelopment. In addition, the detrimental effects of TLR activation have been implicated in multiple neurodegenerative diseases such as Alzheimer's, multiple sclerosis, etc. Many studies also support the theory that cytokine imbalance may be involved in schizophrenia, and a vast amount of literature showcases the deleterious effects of this imbalance on cognitive performance in the human population. In this review, we examine the current literature on TLRs, their potential role in the pathogenesis of schizophrenia, factors affecting TLR activity that contribute towards the risk of schizophrenia, and lastly, the role of TLRs and their impact on cognitive performance in schizophrenia. abstract_id: PUBMED:22007157 Modulation of GSK-3 as a Therapeutic Strategy on Tau Pathologies. Glycogen synthase kinase-3 (GSK-3) is ubiquitously expressed and unusually active in resting, non-stimulated cells. In mammals, at least three proteins (α, β1, and β2), generated from two different genes, gsk-3α and gsk-3β, are widely expressed at both the RNA and protein levels although some tissues show preferential expression of some of the three proteins. Control of GSK-3 activity occurs by complex mechanisms that depend on specific signaling pathways, often controlling the inhibition of the kinase activity. GSK-3 appears to integrate different signaling pathways from a wide selection of cellular stimuli. The unique position of GSK-3 in modulating the function of a diverse series of proteins and its association with a wide variety of human disorders has attracted significant attention as a therapeutic target and as a means to understand the molecular basis of brain disorders. Different neurodegenerative diseases including frontotemporal dementia, progressive supranuclear palsy, and Alzheimer's disease, present prominent tau pathology such as tau hyperphosphorylation and aggregation and are collectively referred to as tauopathies. GSK-3 has also been associated to different neuropsychiatric disorders, like schizophrenia and bipolar disorder. GSK-3β is the major kinase to phosphorylate tau both in vitro and in vivo and has been proposed as a target for therapeutic intervention. The first therapeutic strategy to modulate GSK-3 activity was the direct inhibition of its kinase activity. This review will focus on the signaling pathways involved in the control of GSK-3 activity and its pathological deregulation. We will highlight different alternatives of GSK-3 modulation including the direct pharmacological inhibition as compared to the modulation by upstream regulators. abstract_id: PUBMED:29554474 Impaired social behaviors and minimized oxytocin signaling of the adult mice deficient in the N-methyl-d-aspartate receptor GluN3A subunit. The N-methyl-d-aspartate receptor (NMDAR) has been implicated in the pathophysiology of neurological diseases, such as schizophrenia, autism spectrum disorders (ASD), and Alzheimer's disease (AD), whose unique clinical hallmark is a constellation of impaired social and/or cognitive behaviors. GluN3A (NR3A) is a unique inhibitory subunit in the NMDAR complex. The role of GluN3A in social behavioral activities is obscure. In this study, we sought to evaluate altered social activities in adult GluN3A knockout (KO) mice. GluN3A KO mice spent less time in reciprocal social interaction in the social interaction test compared to wild-type (WT) mice. A social approach test using a three-chamber system confirmed that mice lacking GluN3A had lower sociability and did not exhibit a preference for social novelty. GluN3A KO mice displayed abnormal food preference in the social transmission of food preference task and low social interaction activity in the five-trial social memory test, but without social memory deficits. Using a home cage monitoring system, we observed reduced social grooming behavior in GluN3A KO mice. Signaling genes that might mediate the altered social behaviors were examined in the prefrontal cortex, hippocampus, and thalamus. Among nine genes examined, the expression of the oxytocin receptor was significantly lower in the prefrontal cortex of GluN3A KO mice than that in WT mice. Oxytocin treatment rescued social activity deficits in GluN3A KO mice. These findings support a novel idea that a chronic state of moderate increases in NMDAR activities may lead to downregulation of the oxytocin signaling and impaired behavioral activities that are seen in psychiatric/neurodegenerative disorders. abstract_id: PUBMED:28596922 Noradrenergic Modulation of Cognition in Health and Disease. Norepinephrine released by the locus coeruleus modulates cellular processes and synaptic transmission in the central nervous system through its actions at a number of pre- and postsynaptic receptors. This transmitter system facilitates sensory signal detection and promotes waking and arousal, processes which are necessary for navigating a complex and dynamic sensory environment. In addition to its effects on sensory processing and waking behavior, norepinephrine is now recognized as a contributor to various aspects of cognition, including attention, behavioral flexibility, working memory, and long-term mnemonic processes. Two areas of dense noradrenergic innervation, the prefrontal cortex and the hippocampus, are particularly important with regard to these functions. Due to its role in mediating normal cognitive function, it is reasonable to expect that noradrenergic transmission becomes dysfunctional in a number of neuropsychiatric and neurodegenerative diseases characterized by cognitive deficits. In this review, we summarize the unique role that norepinephrine plays in prefrontal cortical and hippocampal function and how its interaction with its various receptors contribute to cognitive behaviors. We further assess the changes that occur in the noradrenergic system in Alzheimer's disease, Parkinson's disease, attention-deficit/hyperactivity disorder, and schizophrenia and how these changes contribute to cognitive decline in these pathologies. abstract_id: PUBMED:29626340 Neuroimmune Biomarkers in Mental Illness. Exploration of neuroimmune mechanisms is vital to the understanding of the pathogenesis and pathophysiology of mental disorders. Inflammatory and immune mechanisms are increasingly understood to underpin a number of neuropsychiatric disorders, with an ever-expanding evidence base drawn from basic science to large-scale epidemiological data. Unravelling of these mechanisms should lead to biomarker discovery and potential new avenues for therapeutics that modulate immunological mechanisms. Identification of neuroimmune biomarkers is vital to improving diagnosis, stratification and treatment of mental disorders. There is an urgent clinical need for new therapeutic approaches with poor treatment response and treatment resistance a major problem for many psychiatric disorders including depression and schizophrenia. Neurodegenerative psychiatric disorders such as Alzheimer's also have clear neuroimmune underpinnings and manifest an urgent clinical need for improvements in diagnosis and research towards transformative disease-modifying treatments. This chapter provides some background on the role of the neuroimmune system in mental illness, exploring the role for biomarkers, in addition to reviewing the current state of knowledge in this exciting field. We also reflect on the inherent challenges and methodological pitfalls faced by research in this field, including the complexity of conceptualising multidimensional mental disorders and the dynamic shifting sands of the immune system. abstract_id: PUBMED:26348872 Non-canonical actions of Nogo-A and its receptors. Nogo-A is a myelin associated protein and one of the most potent neurite growth inhibitors in the central nervous system. Interference with Nogo-A signaling has thus been investigated as therapeutic target to promote functional recovery in CNS injuries. Still, the finding that Nogo-A presents a fairly ubiquitous expression in many types of neurons in different brain regions, in the eye and even in the inner ear suggests for further functions besides the neurite growth repression. Indeed, a growing number of studies identified a variety of functions including regulation of neuronal stem cells, modulation of microglial activity, inhibition of angiogenesis and interference with memory formation. Aim of the present commentary is to draw attention on these less well-known and sometimes controversial roles of Nogo-A. Furthermore, we are addressing the role of Nogo-A in neuropathological conditions such as ischemic stroke, schizophrenia and neurodegenerative diseases. Answer: Schizophrenia is often categorized as a neuropsychiatric disorder rather than a classical neurodegenerative disease like Alzheimer's disease (AD). However, there is evidence suggesting that neurodegenerative processes may play a role in the pathophysiology of schizophrenia. For instance, the review on Toll-like receptors (TLRs) discusses the immune response in the brain and central nervous system and suggests that the detrimental effects of TLR activation have been implicated in multiple neurodegenerative diseases such as Alzheimer's, as well as in schizophrenia, indicating that inflammation could be a contributing factor to the pathology of schizophrenia (PUBMED:37627253). Additionally, the review on neuroimmune biomarkers in mental illness highlights that inflammatory and immune mechanisms are increasingly understood to underpin a number of neuropsychiatric disorders, including schizophrenia, suggesting that neuroimmune processes are relevant to its pathogenesis (PUBMED:29626340). Furthermore, the review on non-canonical actions of Nogo-A mentions that Nogo-A has been investigated in the context of neuropathological conditions such as ischemic stroke, schizophrenia, and neurodegenerative diseases, which implies that there may be overlapping mechanisms between schizophrenia and neurodegenerative conditions (PUBMED:26348872). While these findings do not definitively classify schizophrenia as a neurodegenerative disease, they do indicate that neurodegenerative-like processes, such as inflammation and immune system dysregulation, may contribute to the disorder's progression. Schizophrenia is typically characterized by a constellation of symptoms including cognitive deficits, altered social behaviors, and disruptions in thought processes, which can be associated with changes in brain structure and function over time. However, unlike classical neurodegenerative diseases where there is a clear and progressive neuronal loss, the neurodegenerative aspects of schizophrenia may be more subtle and complex, involving a range of neurobiological changes rather than a straightforward pattern of neurodegeneration.
Instruction: Surgery for degenerative lumbar disc disease. Should the black disc be grafted? Abstracts: abstract_id: PUBMED:37786570 Alkaptonuria Presenting With Lumbar Disc Herniation: A Case Report. Alkaptonuria is a rare autosomal recessive trait. Symptomatic lumbar disc herniation warranting surgical intervention is a rare scenario in alkaptonuria and only a few cases have been described in the literature. We present one such rare case of alkaptonuria in a 31-year-old female presenting with low back pain and left leg radiculopathy not relieved with conservative management. Roentgenograms of the lumbar spine revealed wafer-like disc calcifications and MRI showed a herniated disc at the L4-L5 level with deeply hypointense disc spaces in T2 suggestive of disc calcification and associated modic type 2 changes. During the surgery, the disc material removed was black in color, which raised a clinical suspicion of alkaptonuria. Postoperatively, the patient was re-examined and urine homogentisic acid was found to be raised. This, along with a histopathological examination, was diagnostic of alkaptonuria. The patient had excellent relief of symptoms postoperatively. In conclusion, if a 'black disc' is found during surgery, retrospective analysis and re-examination of patient clinical features and urine examination have to be done to diagnose alkaptonuria. While making a differential diagnosis of degenerative disc disease in patients with a calcified disc seen on radiography, a high index of suspicion for alkaptonuria has to be maintained. abstract_id: PUBMED:9881403 Surgery for degenerative lumbar disc disease. Should the black disc be grafted? Purpose Of The Study: To determine predictive factors allowing to improve the results of fusion in low back pain treatment. Material And Method: Fifty six patients were retrospectively reviewed. Average age at surgery was 42. In 29 cases, discectomy or nucleolysis had been previously performed. All patients were treated by anterior lumbar interbody fusion. Functional results were assessed by the Beaujon index, with determination of a relative improvement rate. Results were analyzed according to clinical symptoms, fused level, previous surgery, association to posterior osteosynthesis and MRI changes. MRI changes were classified according to Modic types. Results: The average relative improvement rate was 66 per cent. Pain topography, previous surgery, fused level, association with posterior osteosynthesis had not statistical effect on the functional result. Inversely, a close relation was observed between pre-operative MRI changes and the result of surgery: best results were observed in Modic I changes on adjacent vertebral end plates, with decreased signal of T1 and increased signal on T2 weighted images, suggesting inflammatory changes; poor results were observed in isolated disc degeneration without vertebral end-plates changes; poor results were observed in Modic II changes with increased signal on both T1 and T2 weighted images, suggesting degenerative changes; but among 5 non unions, 3 were observed in Modic II changes. Discussion: The authors identify a lumbar disc dysfunction syndrome characterized by mechanical pain, with disc narrowing and anterior condensation of the vertebral plates on MRI (Modic I changes). This syndrome should be differentiated from common degenerative disc disease, without vertebral plates abnormalities (the "black disc" on MRI). Conclusion: Anterior fusion is effective for the treatment of low-back pain due to degenerative disc disease, when associated to vertebral plate changes; as the pathology is mainly anterior. We prefer an anterior mini-invasive approach; furthermore, posterior elements are intact and canal exploration is unnecessary. However, an additional posterior osteosynthesis is preferable in Modic type II, as non union rate is increased by fatty degenerative involution. abstract_id: PUBMED:7846596 Experimental study of intervertebral disc allografting in the dog. Study Design: An investigation of the use of intervertebral disc allografting as a potential alternative to intervertebral fusion was performed. In 13 adult mongrel dogs, intervertebral disc units cryopreserved after slow freezing were grafted. Objectives: To evaluate the morphologic and biosynthetic damage caused by cryopreserving the canine intervertebral disc before allografting. Summary Of Background Data: Cryopreservation did not alter the normal histologic appearance. The metabolic activity, measured by 35S-sulfate incorporation, was observed to be 44% of the fresh control. Methods: Transplanted intervertebral discs were examined histologically and radiographically at 12, 24, and 48 weeks after surgery. Results: The allografted disc had preserved anular and nuclear architecture with moderate loss of chondrocytes 12 weeks after grafting. However, the intervertebral disc showed evidence of progressive disc degeneration 1 year after transplantation. Conclusions: More investigation of storage methods is needed to enhance the viability of the intervertebral disc for successful intervertebral disc allografting. abstract_id: PUBMED:25550020 Anterior cervical disc replacement for degenerative disc disease. Purpose: To review the outcomes of anterior cervical disc replacement using the Prestige LP system for degenerative disc disease. Methods: Medical records of 12 men and 23 women aged 26 to 66 (mean, 46) years who underwent 48 anterior cervical disc replacements using the Prestige LP system by a single spine surgeon were reviewed. 22 patients underwent one-level disc replacement at C5-C6 (n=13) and C6-C7 (n=9), and 13 patients underwent 2-level disc replacement at C5-C6 and C6-C7 (n=11), C4-C5 and C5-C6 (n=1), and C6-C7 and C7-T1 (n=1). Neck Disability Index (NDI) score, visual analogue scale (VAS) for pain in the neck and arm, and physical and mental component scores of the Short Form 36 were evaluated at week 6 and months 6, 12, 24, 36, and 48. Results: The NDI score, VAS score for neck and arm pain, and physical and mental component scores of the Short Form 36 improved significantly after surgery (p<0.001). 80% of patients were satisfied with the treatment. One patient developed a neck haematoma on day 1 and underwent surgical evacuation. Another patient developed Horner's syndrome and achieved partial recovery at 6 weeks and complete recovery at 6 months. No patient had implant-related complications or reoperation. Conclusion: The Prestige LP cervical disc implant was safe for one- and 2-level cervical disc replacement. abstract_id: PUBMED:33578086 Imaging characteristics of intradural disc herniation: A comparison with large disc extrusion. Objective: To compare the computed tomography (CT) and magnetic resonance imaging (MRI) findings of lumbar intradural disc herniation (IDH) and disc extrusion mimicking IDH. Materials And Methods: Between January 2015 and August 2018, 32 with surgically confirmed IDH or disc extrusion mimicking IDH were included. Age, sex, symptoms, herniated disc level, history of discectomy at the same site, and operative findings were investigated through the medical records. We evaluated the direction, type, migration, margin, and shape of disc herniation, the presence of an abrupt discontinuity of the posterior longitudinal ligament (PLL), Y-sign of ventral dura, disc material beyond the PLL, and disc calcification or ossification. In addition, maximum herniated disc diameter to central canal diameter (MHDD/CCD) ratios were calculated. Results: Twelve patients (8 males, 4 females; mean age 53.3 [21-83] years) were surgically confirmed to have lumbar IDH and 20 (11 males, 9 females; mean age 52 [19-78] years) had disc extrusion mimicking lumbar IDH. Margins and beak-like shapes of herniated discs, abrupt discontinuity of the PLL, Y-sign of ventral dura, disc material beyond the PLL, calcification or ossification of herniated discs, and MHDD/CCD ratios were significantly different in the IDH and non-IDH groups (p < 0.05). Conclusion: Imaging findings of an ill-defined margin, a beak-like shape, herniated disc calcification or ossification, abrupt PLL discontinuity, Y-sign of ventral dura, disc material beyond the PLL and a high MHDD/CCD ratio were found to predict the presence of IDH. abstract_id: PUBMED:22915917 Bryan total disc arthroplasty: a replacement disc for cervical disc disease. Total disc arthroplasty is a new option in the treatment of cervical degenerative disc disease. Several types of cervical disc prostheses currently challenge the gold-standard discectomy and fusion procedures. This review describes the Bryan Cervical Disc System and presents the Bryan prosthesis, its indications, surgical technique, complications, and outcomes, as given in the literature. abstract_id: PUBMED:25694914 Annulo-nucleoplasty using Disc-FX in the management of lumbar disc pathology: early results. Background: Back pain due to Lumbar Disc Disease is a major clinical problem. The treatment options range from physiotherapy to fusion surgery. A number of minimally invasive procedures have also been developed in the recent past for its management. Disc-FX is a new minimally invasive technique that combines percutaneous discectomy, nuclear ablation and annular modification. Literature on its role in the management of lumbar disc pathology is scarce. Methods: We included 24 consecutive patients who underwent the Disc-FX for back pain due to lumbar disc pathology non-responsive to non-operative treatment for a period of at least 6 months. Based on Magnetic Resonance Imaging (MRI) these patients fell into 2 groups - those with degenerative disc disease (DDD) (n = 12) and those with a contained lumbar disc herniation (CLDH)(n = 12). They were evaluated using the Visual analogue scale (VAS), Oswestry Disability Index (ODI) and Short Form-36 (SF-36) scores preoperatively and postoperatively. Results: The mean age was 37.9 years (21-53 years). There were 17 males and 7 females. One patient in each subgroup was excluded from the final evaluation. Significant improvement was seen in all outcome measures. The overall rate of reintervention for persistent symptoms was 18.18% (4/22); in the CLDH subgroup, it was 36.36% (4/11). Conclusions And Level Of Evidence: Early results after the Disc-FX procedure suggest that it s a reasonable treatment option for patients with back pain due to lumbar disc disease, especially for those with DDD who fail conservative treatment. It could be an alternative to procedures like fusion or disc replacement. This study presents Level IV evidence. Clinical Relevance: We feel that our study establishes Disc-FX as a modality of treating symptomatic lumbar disc disease due to DDD. However, longer term prospective studies are needed to prove this and to evaluate its role in the treatment of patients with CLDH. abstract_id: PUBMED:33460811 IL-1β promotes disc degeneration and inflammation through direct injection of intervertebral disc in a rat lumbar disc herniation model. Background Context: Lumbar intervertebral disc herniation (LDH) is a common disease that causes low back pain, radiating leg pain, and sensory impairment. Preclinical studies rely heavily upon standardized animal models of human diseases to predict clinical treatment efficacy and to identify and investigate potential adverse events in human subjects. The current method for making the LDH model involves harvesting the nucleus pulposus (NP) from autologous coccygeal discs and applying to the lumbar nerve roots just proximal to the corresponding dorsal root ganglion. However, this surgical method generates a model that exhibits very different characteristics of disc herniation than that observed in human. Purpose: To produce a rat LDH model that better resembles disc herniation in humans and a standardized and uniform LDH model using Interleukin-1 beta (IL-1β). Study Design: Experimental rat LDH model. Methods: We exposed the L5-6 disc dorsolaterally on the right side through hemi-laminectomy without nerve compression. Herniation was initiated by puncturing the exposed disc with a 30-gauge needle at a depth of 4 mm. Interleukin-1 beta (IL-1β) was injected simultaneously to heighten the pathological processes of disc degeneration, including inflammatory responses, matrix destruction, and herniation of the NP. We performed histological staining to assess morphological changes, immunohistochemistry to analyze inflammation- and pain-related expression within and around the puncture site of the L5-6 disc, and real-time polymerase chain reaction to examine expression of markers for degenerative processes. In addition, we performed locomotor tests on the rats. Results: We found that the IL-1β groups showed that the border between the annulus fibrosis and nucleus pulposus was severely interrupted compared to that of the control (puncture only) group. And, the injection of IL-1β leads to accelerated disc degeneration and inflammation in a more consistent manner in LDH model. Functional deficit was consistently induced by puncturing and injection of IL-1β in the exposed disc. Conclusions: The method proposed here can be used as an index to control the severity of disc degeneration and inflammation through the injected IL-1β concentration concurrent with surgically induced herniation. Clinical Significance: Our proposed model may facilitate research in drug development to evaluate the efficacy of potential therapeutic agents for disc herniation and neuropathic pain and may also be used for nonclinical studies to more accurately assess the effectiveness of various treatment strategies according to the severity of disc degeneration. abstract_id: PUBMED:21964454 Reversal of anterior cervical discectomy and fusion with cervical artificial disc replacement: regain motion after 9 years fusion. Study Design: Case report and review of the literature. Objective: To present a conversion of an anterior cervical discectomy and fusion (ACDF) with a cervical artificial disc replacement in a 39-year-old woman and to review the relevant literature. Summary Of Background Data: Return of functional spinal unit motion 9 years after attempted fusion is extremely rare. Methods: The patient underwent an attempted anterior cervical discectomy and fusion 9 years earlier for bilateral hand numbness and leg weakness. Most of her neurological deficits had resolved after the index operation, but returned 2 months before the second operation and were unresponsive to nonoperative treatment. Computed tomography (CT) myelography showed recurrence of cervical disc herniation at the cephalad adjacent segment, which compressed the spinal cord. There was still some osteophyte at the C5/6 level that was also causing compression to the spinal cord. A solid fusion was suspected at this level. Surgery was performed to take down the grafted region and replace both levels with artificial disks. Results: The range of motion (ROM) of the revised level at the 6-month follow-up was well preserved, there was no sign of instability at either operated level. The 6-month follow-up CT scan shows that, there was no obvious compression in the spinal canal. The remobilized facet joints of C5/6 segment have not demonstrated further degeneration. The patient's neck symptom and neurological function were significantly recovered. Conclusions: This case demonstrates application of a cervical artificial disc replacement to restore motion at a level that was previously grafted and fused. In select cases, cervical artificial disc replacement may represent a reasonable alternative to a repeated attempt at fusion. It is imperative that preoperatively lack of fusion of the facet joints be demonstrated on reconstructed CT scanning. abstract_id: PUBMED:29702310 Endoscopic Anterior Approach for Cervical Disc Disease (Disc Preserving Surgery). Objective: To report our experience of endoscopic disc removal by anterior approach for management of cervical myelopathy in 210 patients. Methods: A retrospective study of 187 cases of single- and 23 cases of double-level disc disease was performed. Cases of myelopathy with or without unilateral or bilateral radiculopathy and unilateral radiculopathy with either soft or hard disc prolapse were included. Patients with ≥3 disc levels, unstable spine, infections, trauma, significant posterior compression, congenital canal stenosis, disc extending more than half the vertebral body height, and prior surgery at the same level were excluded. Results: C5-6 (n = 119 patients), C6-7 (n = 58 patients), C4-5 (n = 49 patients), C3-4 (n = 6 patients), and C2-3 (n = 1 patient) levels were represented. Visual analog scale and Nurick grading system were used to assess severity of neck and arm pain and functional outcomes, respectively. Preoperative mean visual analog scale scores for arm and neck pain were 6.7 and 3.2, respectively, which improved to 1.7 and 1.1 at 3 months after surgery. The average preoperative Nurick grade improved from 2.64 to 0.81 at 6 months postoperatively. Follow-up was 6-54 months. Conclusions: Endoscopic anterior discectomy (disc preserving surgery) is an effective and safe alternative in cervical disc disease. Although there was reduction in disc height, clinical outcome was good at an average 29 months of follow-up. Long-term follow-up is required to assess any progressive disc degeneration and clinical results. Answer: The question of whether a "black disc" should be grafted during surgery for degenerative lumbar disc disease is addressed in the abstract from PUBMED:9881403. The study aimed to determine predictive factors that could improve the results of fusion in the treatment of low back pain. The study reviewed 56 patients who underwent anterior lumbar interbody fusion and assessed functional results using the Beaujon index. The results were analyzed according to various factors, including clinical symptoms, fused level, previous surgery, association to posterior osteosynthesis, and MRI changes classified according to Modic types. The study found that the best results were observed in cases with Modic I changes on adjacent vertebral end plates, which suggest inflammatory changes. Poor results were observed in isolated disc degeneration without vertebral end-plates changes, as well as in Modic II changes, which suggest degenerative changes. Among the non-unions, a higher rate was observed in Modic II changes. The authors concluded that anterior fusion is effective for the treatment of low-back pain due to degenerative disc disease when associated with vertebral plate changes, as the pathology is mainly anterior. They preferred an anterior mini-invasive approach and suggested that additional posterior osteosynthesis might be preferable in Modic type II changes due to the increased non-union rate associated with fatty degenerative involution. In summary, the decision to graft a "black disc" during surgery for degenerative lumbar disc disease should be informed by the type of MRI changes observed, particularly the presence of Modic type changes. The study suggests that grafting may be more effective in cases with inflammatory changes (Modic I) rather than in cases with degenerative changes (Modic II), where the non-union rate is higher and additional posterior osteosynthesis may be beneficial.
Instruction: Are asthma patients at increased risk of clinical depression? Abstracts: abstract_id: PUBMED:26313241 Are asthma patients at increased risk of clinical depression? A longitudinal cohort study. Objective: In this study, we assessed whether adult patients with asthma are more likely to be diagnosed with depression than diabetes patients or "healthy" controls during follow-up in primary care. Methods: Data from the Nijmegen Continuous Morbidity Registration were used to assess the risk for a first depression. Patients with asthma were compared with patients with diabetes and with two healthy controls matched on age, gender, socioeconomic status and attending general practice. With Cox proportional hazard analysis, we compared the risk of depression between these groups. These analyses were corrected for relevant covariates including a time-depending variable for multimorbidity. Explorative subgroup analyses were done for age, gender, socioeconomic status and multimorbidity. Results: Cumulative incidence of depression in asthma patients was 5.2%, in DM patients 4.1% and in control subjects 3.3%. The hazard ratios for a first episode of depression in the asthma patients (n = 795) compared to DM patients (n = 1033) and control subjects after correction for covariates were 1.11 (95% CI 0.60-2.04) and 1.18 (95% CI 0.78-1.79), respectively. Exploratory analyses showed that asthma patients without multimorbidity were at higher risk for a depression compared to reference groups, while asthma patients with multimorbidity were at lower risk for depression. Conclusion: Asthma patients were not more likely to be diagnosed with a first depression compared to "healthy" control subjects or diabetes patients. The influence of multimorbidity on depression risk in asthma patients warrants further study. abstract_id: PUBMED:32192212 The Risk of Depression in Patients with Pemphigus: A Nationwide Cohort Study in Taiwan. Pemphigus is a chronic dermatological disorder caused by an autoimmune response and is associated with a high proportion of comorbidities and fatalities. The aim of this study was to investigate the risk of depression in patients with pemphigus. Data were derived from the National Health Insurance Research Database recorded during the period 2000-2010 in Taiwan. Multivariate Cox proportional hazards regression models were used to analyze the data and assess the effects of pemphigus on the risk of depression after adjusting for demographic characteristics and comorbidities. Patients with pemphigus were 1.98 times more likely to suffer from depression than the control group (pemphigus, adjusted HR: 1.99, 95% CI = 1.37-2.86). People aged ≥65 years were 1.69 times more likely to suffer from depression than those aged 20-49 years (≥65 years, adjusted HR: 1.42, 95% CI = 0.92-2.21). Female and male patients with pemphigus were respectively 2.02 and 1.91 times more likely to suffer from depression than the control group (female, adjusted HR: 2.09, 95% CI = 1.24-3.54; male, adjusted HR: 1.87, 95% CI = 0.97-3.60). People with HTN, hyperlipidemia, asthma/COPD, and chronic liver disease were respectively 1.73, 2.3, 2.2, and 1.69 times more likely to suffer from depression than those without these comorbidities (HTN, adjusted HR: 0.75, 95% CI = 0.41-1.42; hyperlipidemia, adjusted HR: 1.48, 95% CI = 0.78-2.82; asthma/COPD, adjusted HR: 1.4, 95% CI = 0.72-2.69; and chronic liver disease, adjusted HR: 1.61, 95% CI = 1.07-2.43). There was a significant association between pemphigus and increased risk of depression. Female patients had a higher incidence of depression. abstract_id: PUBMED:28692781 Depression as a Risk Factor of Organic Diseases:An International Integrative Review. Purpose And Design: This integrative review offers a systematic synthesis of the international literature regarding the role of depression as a risk factor in physical illnesses and the mechanisms of this connection. Special attention is paid to those modifiable factors. Findings: Published studies of depression and physical illness and disease(N = 24) from five countries that were indexed in PubMed, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), APA PsycNET, Scopus, Dialnet, and CUIDEN were examined. Results suggest that depression is a significant risk factor for the development of physical illnesses and diseases. More commonly studied were the connections between depression and cardiovascular disease, metabolic syndrome, biochemical alterations, diabetes, dementia, cognitive impairment, Alzheimer's disease, somatization and chronic pain, asthma, arthritis, and hyperlipidemia. Less frequently studied conditions connected to depression were cancer, infections, allergies, autoimmune disease, gastric ulcer, rhinitis, thyroiditis, bronchitis, migraines, fractures, and osteoporosis. Conclusions: Mechanisms connecting depression to physical illness appear to involve alterations in the hypothalamic-pituitary axis, unhealthy lifestyle, chronic or acute stressors including posttraumatic stress, an increase in C-reactive protein (CRP) in men, taking antidepressant medication, and social and emotional loneliness. Clinical Relevance: A good patient-provider relationship can help to promote decreased acute or chronic stressors, increased family and social support, decreased loneliness, modification of unhealthy lifestyles such as smoking, obesity, physical inactivity, alcohol, control of CRP, and antidepressant medication. Nurses are well placed to help prevent physical diseases through detection and referral of patients who are depressed or undiagnosed and not receiving adequate mental health treatment. abstract_id: PUBMED:11978917 Risk of depression in patients with chronic obstructive pulmonary disease and its determinants. Background: Although it has been repeatedly suggested that chronic obstructive pulmonary disease (COPD) is associated with depression, no conclusion has so far been reached. A study was undertaken to investigate whether depression occurs more often in patients with COPD than in controls. The demographic and clinical variables associated with depression were also determined. Methods: Patients with a registered diagnosis of obstructive airway disease in general practice, aged > or=40 years, forced expiratory volume in 1 second (FEV(1)) <80% predicted, FEV(1) reversibility <or =12%, FEV(1)/VC < or = predicted - 1.64 x SD, and a history of smoking were selected. A random sample of subjects without a registered diagnosis of asthma or COPD aged 40 years or older acted as controls. Depression was assessed using the Centers for Epidemiologic Studies Depression (CES-D) scale. Results: In patients with severe COPD (FEV(1) <50% predicted), the prevalence of depression was 25.0% compared with 17.5% in controls and 19.6% in patients with mild to moderate COPD. When the results were adjusted for demographic variables and comorbidity, the risk for depression was 2.5 times greater for patients with severe COPD than for controls (OR 2.5, 95% CI 1.2 to 5.4). In patients with mild to moderate COPD this increased risk of depression was not seen. Living alone, reversibility of FEV(1) % predicted, respiratory symptoms and physical impairment were significantly associated with the scores on the CES-D scale. Conclusion: Patients with severe COPD are at increased risk of developing depression. The results of this study underscore the importance of reducing symptoms and improving physical functioning in patients with COPD. abstract_id: PUBMED:30292921 Risk of Postpartum Depression Among Women with Asthma. Background: Several epidemiological studies have suggested that the risk of depression is increased in patients with asthma, but the impact of asthma during pregnancy on postpartum depression remains unknown. Objective: To assess the association between maternal asthma and postpartum depression in a population-based cohort study retrieved from administrative databases. Methods: A cohort of 35,520 pregnancies in women with asthma during pregnancy and 197,057 pregnancies in women without asthma who delivered between 1998 and 2009 was extracted from the Quebec Asthma and Pregnancy Database. They were followed from the day of delivery up to 1 year postpartum. A generalized estimating equation model was used to estimate the adjusted odds ratios of postpartum depression with 95% CIs in women with asthma during pregnancy versus women without asthma. Results: Postpartum depression within 1 year after delivery occurred in 6.1% of women with asthma versus 2.9% of women without asthma. After adjusting for several potential confounders, including depression/postpartum depression up to 10 years before pregnancy, we found that women with asthma were 58% more likely to experience postpartum depression within 1 year after delivery than women without asthma during pregnancy (adjusted odds ratio, 1.58; 95% CI, 1.50-1.67). Conclusions: Our findings suggest that women with asthma are more likely to suffer from postpartum depression. A close monitoring of signs of depression for pregnant women with asthma is indicated, allowing prompt and efficient interventions if needed. abstract_id: PUBMED:32735934 Genetic liability to major depression and risk of childhood asthma. Objective: Major depression and asthma frequently co-occur, suggesting shared genetic vulnerability between these two disorders. We aimed to determine whether a higher genetic liability to major depression was associated with increased childhood asthma risk, and if so, whether such an association differed by sex of the child. Methods: We conducted a population-based cohort study comprising 16,687 singletons born between 1991 and 2005 in Denmark. We calculated the polygenic risk score (PRS) for major depression as a measure of genetic liability based on the summary statistics from the Major Depressive Disorder Psychiatric Genomics Consortium collaboration. The outcome was incident asthma from age 5 to 15 years, identified from the Danish National Patient Registry and the Danish National Prescription Registry. Stratified Cox regression was used to analyze the data. Results: Greater genetic liability to major depression was associated with an increased asthma risk with a hazard ratio (HR) of 1.06 (95% CI: 1.01-1.10) per standard deviation increase in PRS. Children in the highest major depression PRS quartile had a HR for asthma of 1.20 (95% CI: 1.06-1.36), compared with children in the lowest quartile. However, major depression PRS explained only 0.03% of asthma variance (Pseudo-R2). The HRs of asthma by major depression PRS did not differ between boys and girls. Conclusion: Our results suggest a shared genetic contribution to major depression and childhood asthma, and there is no evidence of a sex-specific difference in the association. abstract_id: PUBMED:25469061 Prevalence and risk factors for depression in Korean adult patients with asthma: is there a difference between elderly and non-elderly patients? Depression is an important comorbidity of asthma. However, little information is available about depression and its potential impact on asthma control in Korean adult asthma patients. We aimed to estimate the prevalence and risk factors for depression in Korean adults with persistent asthma. The 127 non-elderly (20-64 yr) and 75 elderly (≥65 yr) patients with asthma were recruited. Demographic and clinical data were extracted, and the patients completed the Asthma Specific Quality of Life (AQOL) questionnaire and asthma control test (ACT). Depression status was defined using the Korean version of the Patient Health Questionnaire-9 (PHQ-9). Depression was more prevalent in non-elderly (18.9%) than in elderly patients with asthma (13.3%). Patients with depression were significantly younger, had lower economic status, shorter disease duration, poorer asthma control, and worse AQOL scores (P<0.05). Within the non-elderly group, younger age and shorter disease duration were significantly associated with depression (P<0.05). Within the elderly group, a higher body mass index and current smoking status were significantly associated with depression (P<0.05). The PHQ-9 score was significantly correlated with worse ACT and AQOL scores in both groups. In conclusion, depression is strongly associated with poor asthma control and quality of life in Korean adult asthma patients. Our results provide important clues that used to target modifiable factors which contribute to development of depression in asthma patients. abstract_id: PUBMED:34779274 Increased Risk of Postpartum Depression in Women With Allergic Rhinitis During Pregnancy: A Population-Based Case-Control Study. Objectives: Allergic rhinitis (AR) is associated with increased risk of major depression in the general population, however, no previous study has evaluated its role among pregnant women. We aimed to investigate the potential impact of AR during pregnancy on the development of postpartum depression (PPD). Methods: This is a population-based case-control study. Data were retrieved from the National Health Insurance Research Database (NHIRD). Medical records of a total of 199 470 deliveries during 2000 and 2010 were identified. Among which, 1416 women with PPD within 12 months after delivery were classified as the case group, while 198 054 women without PPD after delivery formed the control group. Univariate and multivariate regression analyses were conducted to determine the associations between AR during pregnancies and other study variables with PPD. Results: AR during pregnancy was found in 9.53% women who developed PPD and 5.44% in women without PPD. After adjusting for age at delivery, income level, various pregnancy and delivery-related conditions, asthma, atopic dermatitis and other medical comorbidities in the multivariate analysis, AR was significantly associated with increased odds of PPD (aOR: 1.498, 95% CI: 1.222-1.836). Conclusion: AR during pregnancy was independently and significantly associated with an approximately 50% increased risk of PPD among women giving birth. Closely monitoring of AR is warranted in the future in order to optimize mother and child outcomes after delivery. abstract_id: PUBMED:32122453 Incidence of depression and its influencing factors in 387 patients with asthma from Xinjiang, China. Background: Our study aimed to investigate the incidence of depression in 387 patients with asthma. Methods: The Zung self-rating depression scale and the Hamilton depression scale were used to evaluate the depression status in patients with asthma. Results: Results of logistic regression analysis indicated that, severity of asthma symptoms, taking medicine, frequency of asthma onset, and lack of education were the major risk factors for depression in patients with asthma. Conclusion: Depression is a complication with high morbidity in patients with asthma. It largely affects disease control of asthma and the quality life in patients. Multiple factors are relevant for depression in the patient with asthma. This study provided a comprehensive horizon for clinical management and treatment of depression in patients with asthma. abstract_id: PUBMED:19351294 Depression, anxiety and their relationship with chronic diseases: a review of the epidemiology, risk and treatment evidence. Objective: To review the evidence for an association between depression and anxiety and the National Health Priority Area conditions -- heart disease, stroke, diabetes mellitus, asthma, cancer, arthritis and osteoporosis -- and for the effectiveness of treatments for depression and anxiety in these settings. Data Sources: Systematic literature search of systematic reviews, meta-analyses and evidence-based clinical practice guidelines published between 1995 and 2007, inclusive. Data Extraction: Each review was examined and summarised by two people before compilation. Data Synthesis: Depression is more common in all disease groups than in the general population; anxiety is more common in people with heart disease, stroke and cancer than in the general population. Heterogeneity of studies makes determination of risk and the direction of causal relationships difficult to determine, but there is consistent evidence that depression is a risk factor for heart disease, stroke and diabetes mellitus. Antidepressants appear to be effective for treating depression and/or anxiety in patients with heart disease, stroke, cancer and arthritis, although the number of studies in this area is small. A range of psychological and behavioural treatments are also effective in improving mood in patients with cancer and arthritis but, again, the number of studies is small. Conclusion: The evidence for the association of physical illness and depression and anxiety, and their effects on outcome, is very strong. Further research to establish the effectiveness of interventions is required. Despite the limits of current research, policy and practice still lags significantly behind best evidence-based practice. Models of integrated care need to be developed and trialled. Answer: Based on the provided abstracts, the relationship between asthma and the risk of clinical depression is nuanced and may be influenced by various factors. One study found that adult patients with asthma were not more likely to be diagnosed with a first depression compared to "healthy" control subjects or diabetes patients (PUBMED:26313241). However, this study also noted that asthma patients without multimorbidity were at higher risk for depression compared to reference groups, while those with multimorbidity were at lower risk, suggesting that the presence of other health conditions alongside asthma may influence the risk of depression. Another study indicated that women with asthma are more likely to suffer from postpartum depression, with a 58% increased likelihood compared to women without asthma during pregnancy (PUBMED:30292921). This suggests that asthma may be a risk factor for depression in specific populations, such as postpartum women. Additionally, a study on the genetic liability to major depression found a shared genetic contribution to major depression and childhood asthma, although the effect size was small (PUBMED:32735934). This points to a potential genetic link between asthma and depression. In contrast, a study on Korean adult patients with asthma found that depression was more prevalent in non-elderly patients with asthma compared to elderly patients, and depression was associated with poor asthma control and quality of life (PUBMED:25469061). This suggests that age and asthma control may be important factors in the relationship between asthma and depression. Overall, while some studies suggest an increased risk of depression among asthma patients, particularly in certain subgroups or under specific conditions, the evidence is not entirely consistent. Factors such as age, multimorbidity, genetic predisposition, and asthma control may play a role in determining the risk of depression in asthma patients. Further research is needed to fully understand the relationship between asthma and depression and to identify which asthma patients may be at higher risk.
Instruction: Reentry in an accessory atrioventricular pathway as a trigger for atrial fibrillation initiation in manifest Wolff-Parkinson-White syndrome: a matter of reflection? Abstracts: abstract_id: PUBMED:18774096 Reentry in an accessory atrioventricular pathway as a trigger for atrial fibrillation initiation in manifest Wolff-Parkinson-White syndrome: a matter of reflection? Background: Patients with an accessory pathway (AP) have an increased propensity to develop atrial fibrillation (AF), but the mechanism is unknown. Objective: The purpose of this study was to identify crucial risk factors and to test the hypothesis that reflection and/or microreentry of atrial impulses propagating into the AP triggers AF. Methods: Five hundred thirty-four patients successfully treated with radiofrequency ablation of AP at two university hospitals were evaluated. Patients were separated into those with concealed vs those with manifest AP in terms of their propensity to develop AF. To investigate AF triggering mechanisms, linear and branched two-dimensional models of atrium-to-ventricle propagation across a heterogeneous 1 x 6 AP using human ionic kinetics were simulated. Results: A history of AF was twice as common in patients with manifest AP vs concealed AP irrespective of AP location. AF was more likely to occur in older males and in patients with larger atria. There was no correlation between AF history and AP refractory measures. However, the electrophysiologic properties of APs seemed to fulfill the prerequisites for reflection and/or microreentry of atrially initiated impulses. In the linear AP model, repetitive atrial stimulation resulted in progressively larger delay of atrium-to-ventricle propagation across the passive segment. Eventually, sufficient time for repolarization of the atrial segment allowed for reflection of an impulse that activated the entire atrium and by wavefront-wavetail interaction with a new atrial stimulus AF reentry was initiated. Simulations using the branched model showed that microreentry at the ventricular insertion of the AP could also initiate AF via retrograde atrial activation as a result of unidirectional block at the AP-ventricle junction. Conclusion: Propensity for AF in patients with an AP is strongly related to preexcitation, larger atria, male gender, and older age. Reflection and microreentry at the AP may be important for AF initiation in patients with manifest (preexcited) Wolff-Parkinson-White syndrome. Similar mechanisms also may trigger AF in patients without an AP. abstract_id: PUBMED:8693318 Supraventricular tachycardias: mechanism, diagnosis and therapy Most of the paroxysmal forms of supraventricular tachycardia are reentry tachycardias in origin with either an AV-nodal reentry (AVNRT; approx. 50%) or an AV reentry circuit via accessory pathway (AVRT; approx. 30%) as the anatomical basis of tachycardia. The therapeutic options include either drug therapy or transcatheter radiofrequency ablation. In asymptomatic patients, where supra-ventricular tachycardia or WPW syndrome was documented incidentally, an expectative approach without therapy is usually recommended. However, if frequent tachycardia recurrences or severe clinical symptoms (syncope, pre-syncope make treatment mandatory, medical therapy with either betablockers, calcium antagonists or digoxin should be tried if AV-nodal reentry tachycardia is the mechanism (no evidence of WPW syndrome on the 12-lead ECG). In patients with symptomatic WPW syndrome or drug refractory AV-nodal reentry tachycardia, transcatheter radiofrequency ablation with its good results and low complication rate is the therapy of choice. The treatment modality is particularly indicated in young patients who otherwise would need lifelong drug therapy. In contrast, supraventricular tachycardias with badly defined anatomical substrate, such as multifocal atrial tachycardias or atrial fibrillation, should in the first place be treated medically and not by radiofrequency ablation. abstract_id: PUBMED:8223750 Pathophysiology of supraventricular tachycardia. Supraventricular tachycardia is due to altered automaticity, triggered activity or most commonly, reentry. Atrial flutter is most frequently due to reentry with counterclockwise activation in the right atrium with the left atrium acting as a bystander. The small wavefronts demonstrated during atrial fibrillation are less random than was previously thought. In the study of atrioventricular (AV) nodal reentrant tachycardia, recent evidence suggests that multiple pathways involving intranodal and perinodal tissue are probably involved. In the Wolff-Parkinson-White (WPW) syndrome, accessory pathways near the AV node have been demonstrated with decremental properties, suggesting conduction through accessory AV nodal tissue. The mechanism for atrial fibrillation in the WPW syndrome is poorly understood, but there is evidence that accessory pathways are branched and may themselves support micro-reentry. In Mahaim nodoventricular pathways, the atrium is not essential for reentry, unlike atriofascicular reentrant circuits. Atriofascicular pathways consist of AV node-like tissue and are usually located in the right atrial free wall. abstract_id: PUBMED:1841805 The nonpharmacological treatment of tachyarrhythmias. The surgery of supraventricular arrhythmias not due to pre-excitation Surgery can provide treatment for supraventricular tachyarrhythmias without operative risk and with a definite improvement of patients' quality of life. Ectopic atrial tachycardia is a rare but invalidating arrhythmia that may lead to cardiomyopathy: intraoperative mapping is necessary for the location of the location of the ectopic focus; in our experience, surgical, cryothermal or isolation ablation of the area were effective in 9 patients out of 9, with regression of the cardiomyopathy. Atrioventricular node reentry tachycardia is a reentry tachycardia which is often associated with Wolff-Parkinson-White syndrome and is amenable to surgical treatment. Discrete cryolesions around the Koch triangle are effective in interrupting the atrial inputs to the atrioventricular node and therefore the reentry mechanism: in our experience, 7 patients were successfully treated without operative mortality: 3 patients underwent surgical ablation of Kent bundles as well. Atrial fibrillation is not based on a well-defined mechanism, and therefore does not permit an electrically-guided surgical treatment. However, in case of atrial septal defect it is possible to isolate the enlarged right atrium in order to allow sinus rhythm to activate the left atrium and ventricles.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:8552528 Wide complex tachycardia due to automaticity in an accessory pathway. Patients with the Wolff-Parkinson-White (WPW) syndrome have preexcited tachycardia as the result of atrial arrhythmias or antidromic reentry. This article describes a patient with persistent wide complex tachycardia due to abnormal automaticity in the accessory pathway. Radiofrequency catheter ablation resulted in simultaneous elimination of accessory pathway conduction and automaticity. Accessory pathway automaticity may be an infrequent cause of preexcited tachycardia in patients with the WPW syndrome. abstract_id: PUBMED:8810791 Catheter ablation for therapy of patients with arrhythmia Catheter ablation using radiofrequency currents has been established as the safe and effective therapeutic modality of tachyarrhythmias. Most stable results were obtained in the ablation of accessory pathway of any location in WPW syndrome, and in the ablation of slow pathway in atrioventricular nodal reentry. However, in ventricular tachycardia, the efficacy is strongly affected by underlying heart diseases as well as the site of VT origin. Idiopathic VT either from the right ventricular outflow tract or left ventricular septum can be highly ablated. Common type atrial flutter is another candidate for catheter ablation. Using catheter, Maze procedure is mimicked in atrial fibrillation. abstract_id: PUBMED:10929437 Administration of adenosine for termination of atrioventricular nodal reentry tachycardia: induction of atrial fibrillation with rapid conduction over an accessory pathway and unmasking of concomitant Wolff-Parkinson-White syndrome The antiarrhythmic properties of adenosine, its ultra-short half-life and the absence of frequent serious side effects make it a front-line agent in arrhythmia management, especially in the treatment of atrioventricular nodal reentrant tachycardia. Due to a shortening of atrial refractoriness, adenosine can facilitate the induction of atrial fibrillation. Life threatening tachycardias may result from a potential rapid conduction of atrial fibrillation over an accessory pathway especially if the latter one has a short antegrade refractory period. We report a case of a 59 year old female patient in which intravenous administration of adenosine during typical atrioventricular nodal reentrant tachycardia was followed by atrial fibrillation with rapid conduction over a hitherto unknown accessory pathway. After intravenous administration of adenosine the tachycardia was terminated successfully within 38 s. After a short period of asystole, spontaneous atrial fibrillation developed unmasking an antegrade preexcitation with subsequent rapid ventricular response (210 b/min). The three-lead ECG showed a narrow QRS complex tachycardia. Because of spontaneous conversion to sinus rhythm and the absence of hemodynamic compromise there was no need for external cardioversion. During electrophysiological study an antidromic atrioventricular reentrant tachycardia was recorded over a left posteroseptal accessory pathway including antegrade conduction properties only. Because of its ultrashort half-life, serious side effects after adenosine administration are rare. The possibility of life threatening proarrhythmias after intravenous adenosine administration should be taken into consideration if the etiology of a paroxysmal supraventricular tachycardia is not clear and a concomitant Wolff-Parkinson-White syndrome cannot be excluded. As with application of all intravenous antiarrhythmic agents, the administration of adenosine should only be performed if continuous ECG monitoring and cardioversion facilities are available and possible. abstract_id: PUBMED:8454253 Atrial fibrillation in Wolff-Parkinson-White syndrome. Development and therapy In patients with Wolff-Parkinson-White syndrome the accessory pathway may participate in various tachyarrhythmias thereby influencing symptoms and prognosis. Atrial fibrillation occurs in 10 to 32% of patients and may have life-threatening consequences by precipitating ventricular fibrillation in patients with rapid conduction due to an accessory pathway with short anterograde refractory period (< 250 ms). Pathogenesis of atrial fibrillation in the WPW syndrome and therapeutic options are reviewed in this presentation. Spontaneous degeneration of atrioventricular reentrant tachycardia has been reported to represent the most frequent mode of initiation of atrial fibrillation during electrophysiologic study (up to 64% of episodes). Hemodynamic changes during tachycardia may lead to increased sympathetic tone, hypoxemia or increased tension of the atrial wall, thus, triggering atrial fibrillation. Induction of reentrant tachycardia during electrophysiologic study also has shown to be strongly correlated to its clinical prevalence and is inducible in up to 77% of patients with atrial fibrillation. The pathogenesis and high incidence of atrial fibrillation in patients with WPW syndrome is related to presence and functional properties of the accessory pathway. After surgical excision or catheter ablation more than 90% of patients are free of this arrhythmia. Anterograde conduction properties of the pathway appear to be more important than retrograde properties. High incidence of atrial fibrillation is related to short anterograde refractory periods, and of note, this arrhythmia is rare (3%) in patients with concealed pathways. With intracardiac recordings, Jackman et al. could demonstrate atrial fibrillation due to micro-reentry originating in accessory pathway networks.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:8194348 Diagnosis and management of arrhythmias associated with Wolff-Parkinson-White syndrome. Because of the emergency nature of the arrhythmias associated with WPW syndrome, nurses are often called upon for diagnosis and intervention in critical settings. In such cases the nurse's understanding of mechanisms, ECG recognition, and emergency treatment guarantees the patient the best possible outcome, not only in the critical setting, but in the long term as well. The most common arrhythmias of WPW syndrome are PSVT and atrial fibrillation. In PSVT a differential diagnosis is made on the ECG between (1) CMT using the AV node anterogradely and an accessory pathway retrogradely and (2) AV nodal reentry tachycardia. Helpful clues are location of the P' wave, presence of QRS alternans, the initiating P'R interval, and presence of aberrancy. Atrial fibrillation with an accessory pathway has the morphology of VT but is differentiated because the rhythm is irregular and the rate is more than 200 beats per minute. Emergency treatment consists of blocking the accessory pathway with procainamide. Emergency treatment for both types of PSVT consists of breaking the reentry circuit at the AV node (eg, vagal maneuver, adenosine, or verapamil). Procainamide can also be used to block the retrograde fast pathway in the AV node and to terminate CMT by blocking the accessory pathway. Symptomatic patients with accessory pathways are referred for evaluation and possible radio-frequency ablation. abstract_id: PUBMED:1557619 Endocavitary ablation: a new therapeutic approach to supraventricular tachycardia Nowadays patients suffering from supraventricular tachycardia are first treated with antiarrhythmic drugs. In refractory cases alternative treatment is antitachycardia pacemaker implantation or surgery. Recently new percutaneous catheter techniques have been developed to ablate directly the anatomical substrate responsible for the arrhythmia by delivering energy (high or low energy shock or radiofrequency) at the tip of a catheter. This article summarizes the results of clinical application of these different techniques and briefly presents our experience. The very recent studies with RF in patients with drug-refractory av nodal reentry or accessory pathway related tachycardia (WPW syndrome) report a success rate of 92-99% with a very low complication rate. If the long term success of radiofrequency is confirmed this technique will emerge in the near future as the treatment of choice in patients suffering from symptomatic supraventricular tachycardia, since it avoids expensive, longlasting drug treatment with the permanent risk of side effects. Answer: The study identified in abstract PUBMED:18774096 explores the hypothesis that reflection and/or microreentry of atrial impulses propagating into an accessory pathway (AP) may trigger atrial fibrillation (AF) in patients with manifest Wolff-Parkinson-White (WPW) syndrome. The research involved evaluating 534 patients who underwent successful radiofrequency ablation of AP at two university hospitals. The results indicated that a history of AF was twice as common in patients with manifest AP compared to those with concealed AP, regardless of the AP's location. Factors such as older age, male gender, and larger atria were associated with a higher likelihood of AF. The study's simulations suggested that the electrophysiologic properties of APs could allow for reflection and/or microreentry of atrially initiated impulses, which could initiate AF. In particular, the linear AP model showed that repetitive atrial stimulation could lead to a delay in atrium-to-ventricle propagation across the passive segment, eventually allowing for reflection of an impulse that could activate the entire atrium and initiate AF reentry. Additionally, the branched model indicated that microreentry at the ventricular insertion of the AP could initiate AF via retrograde atrial activation due to unidirectional block at the AP-ventricle junction. The study concluded that reflection and microreentry at the AP might be significant mechanisms for AF initiation in patients with manifest WPW syndrome, and similar mechanisms could also trigger AF in patients without an AP.
Instruction: Cancer patients' roles in treatment decisions: do characteristics of the decision influence roles? Abstracts: abstract_id: PUBMED:20713872 Cancer patients' roles in treatment decisions: do characteristics of the decision influence roles? Purpose: Patients with more active roles in decisions are more satisfied and may have better health outcomes. Younger and better educated patients have more active roles in decisions, but whether patients' roles in decisions differ by characteristics of the decision itself is unknown. Patients And Methods: We surveyed a large, population-based cohort of patients with recently diagnosed lung or colorectal cancer about their roles in decisions regarding surgery, radiation therapy, and/or chemotherapy. We used multinomial logistic regression to assess whether characteristics of the decision, including evidence about the treatment's benefit, whether the decision was likely preference-sensitive (palliative therapy for metastatic cancer), and treatment modality, influenced patients' roles in that decision. Results: Of 10,939 decisions made by 5,383 patients, 38.9% were patient controlled, 43.6% were shared, and 17.5% were physician controlled. When there was good evidence to support a treatment, shared control was greatest; when evidence was uncertain, patient control was greatest; and when there was no evidence for or evidence against a treatment, physician control was greatest (overall P < .001). Decisions about treatments for metastatic cancers tended to be more physician controlled than other decisions (P < .001). Conclusion: Patients making decisions about treatments for which no evidence supports benefit and decisions about noncurative treatments reported more physician control, which suggests that patients may not want the responsibility of deciding on treatments that will not cure them. Better strategies for shared decision making may be needed when there is no evidence to support benefit of a treatment or when patients have terminal illnesses that cannot be cured. abstract_id: PUBMED:35782325 Exosomes-mediated tumor treatment: One body plays multiple roles. Exosomes are vesicles secreted by a variety of living cells, containing proteins, RNA and other components, which are nanoscale capsules commonly existed in the body. Exosomes play important roles in a variety of physiological and pathological processes by participating in material and information exchange between cells, which can play multiple roles in tumor treatment. On the one hand, exosomes can be used as carriers and biomarkers, participate in the apoptosis signaling pathway and improve chemotherapy resistance, thus playing beneficial roles in tumor treatment. On the other hand, exosomes play unfavorable roles in tumor treatment. Tumor cell exosomes contain PD-L1, which is a nuclear weapon for tumor growth, metastasis, and immunosuppression. In addition, exosomes can not only promote the epithelial-mesenchymal transition process, tumor angiogenesis and chemoresistance, but also participate in the autocrine pathway. In this review, the multiple roles of exosomes and their prospects in the treatment of tumor were reviewed in detail. abstract_id: PUBMED:36054937 Images and images: Current roles of therapeutic radiographers. Introduction: Therapeutic radiography is a small profession and has adapted in response to advanced techniques. An increase in on-line adaptive MRI-guided radiotherapy (MRIgRT) will require role extension for therapeutic radiographers (TRs). This study will investigate the current role description for TRs and the activities they currently undertake with regards to MRIgRT. Method: A training needs analysis was used to ask TRs about their current roles and responsibilities and essential skills required for MRIgRT. For the purposes of this paper, the authors present the results from the demographics of the individual, their current job title with roles and responsibilities, and experience with decision making and image assessment. Descriptive statistics was used to analyse the data. Results: 261 responses were received (n = 261). Only 28% of job titles listed contained the protected title of 'therapeutic radiographer'. Advanced clinical practice roles were expressed by participants indicating that if a service need is presented, emerging roles will be created. Variation existed across the standardised roles of TRs and this discrepancy could present challenges when training for MRIgRT. TRs are pivotal in image verification and recognition on a standard linac, and skills developed there can be transferred to MRIgRT. Decision making is crucial for adaptive techniques and there are many skills within their current scope of practice that are indispensable for the MRIgRT. Conclusion: It has been demonstrated that TRs have a range of roles that cover vast areas of the oncology pathway and so it is important that TRs are recognised so the pivotal role they play is understood by all. TRs have extensive soft-tissue IGRT knowledge and experience, aiding the evolution of decision-making skills and application of off-protocol judgments, the basis of MRIgRT. Implications For Practice: Role development and changes in education for therapeutic radiographers. abstract_id: PUBMED:29101469 Triadic treatment decision-making in advanced cancer: a pilot study of the roles and perceptions of patients, caregivers, and oncologists. Purpose: The research on cancer treatment decision-making focuses on dyads; the full "triad" of patients, oncologists, and caregivers remains largely unstudied. We investigated how all members of this triad perceive and experience decisions related to treatment for advanced cancer. Methods: At an academic cancer center, we enrolled adult patients with advanced gastrointestinal or hematological malignancies, their caregivers, and their oncologists. Triad members completed a semi-structured qualitative interview and a survey measuring decisional conflict and perceived influence of the other triad members on treatment decisions. Results: Seventeen patients, 14 caregivers, and 10 oncologists completed the study. Patients and caregivers reported little decisional regret and voiced high satisfaction with their decisions, but levels of decisional conflict were high. We found sizeable disagreement among triad members' perceptions and preferences. For example, patients and oncologists disagreed about the caregiver's influence on the decision 56% of the time. In addition, many patients and caregivers preferred to defer to their oncologist about treatment decisions, felt like no true decision existed, and disagreed with their oncologist about how many treatment options had been presented. Conclusions: Patients, caregivers, and oncologists have discordant perceptions of the cancer treatment decision-making process, and bring different preferences about how they want to make decisions. These data suggest that oncologists should assess patients' and caregivers' decisional preferences, explicitly signal that a decision needs to be made whenever approaching an important crossroads in treatment and ensure that patients and caregivers understand the full range of presented options. abstract_id: PUBMED:36606192 Understanding the Dual Roles of CircHIPK3 in Tumorigenesis and Tumor Progression. CircHIPK3 is a type of endogenous circular RNA, which contains a covalently closed circular structure and cannot encode protein or polypeptide. CircHIPK3 is unusually expressed in varieties of tumors and plays dual roles of tumor promotion or tumor inhibition in tumorigenesis and development of tumors by serving as the sponge for miRNA in multiple tumors. Here, we reviewed the differential expression, the dual functions, the regulation mechanism, and the network in a variety of tumors as well as the potential value for the diagnosis and treatment of tumors, which are of great significance for our comprehensive understanding of the roles and mechanisms of circHIPK3 in tumors. abstract_id: PUBMED:33685844 Investigating opinions of, and perceptions to, advanced practice radiation therapist roles. Introduction: The demand for cancer services is growing due to increased incidence and the number of people who survive their initial diagnosis but require ongoing therapy. One method of increasing capacity in radiation oncology is to delegate tasks from one professional group to another. In the last ten years there has been increasing interest in advanced practice radiation therapist (APRT) roles. The majority of the Australian literature relates to metropolitan radiation oncology centres with a paucity of information from regional or rural settings. This study sought to explore the knowledge of, and attitudes to APRT roles of members of three professional groups in public radiation oncology centres in Tasmania. Method: Data was collected through a self-reported online survey from radiation oncologists (RO), radiation oncology medical physicists (ROMP) and radiation therapists (RT) working in Tasmanian public radiation oncology services regarding their knowledge and understanding of APRT roles, acceptance and support for the roles and where APRTs could contribute to improving patient care. The survey incorporated a combination of five point Likert scale, Yes/No/Don't know and Yes/No/Not Applicable choices. The survey was reviewed by a professional panel of RT, RO and ROMP from mainland Australian radiation oncology centres. Results: At the time of survey invitation, there were 52 RTs, 7 ROs, and 7 ROMPs working in the identified departments. The survey had an overall response rate of 48.5%with profession specific response rates of 48.1% (RT), 42.9% (RO) and 57.1% (ROMP). General agreement was found amongst survey respondents with regards to understanding of APRT roles having themes of clinical expertise, leadership, communication, collaboration and teaching. Where participants were offered a list of tasks to choose those appropriate to APRT roles, the highest agreement was with "Contour organs at risk per protocol", "Image review - soft tissue online decision making/adaptive RT" and "Principal investigator in clinical research. The notion of establishing ARPT roles was well supported, as strong agreement was found with the statements related to improvement in job satisfaction, opportunities, recruitment and retention for RTs, and that APRTs could be used to relieve workload of other professionals. Conclusion: This exploratory study found that the respondents were generally in favour of APRT roles, but that they were not clear about the tasks to be performed by APRTs. It was identified that patients, ROs, ROMPs, RTs and the department would benefit from the implementation of APRT roles. Sseveral areas of practice were identified by respondents which they perceived would improve the quality of patient care. abstract_id: PUBMED:31423493 Seekers, Providers, Welcomers, and Storytellers: Modeling Social Roles in Online Health Communities. Participants in online communities often enact different roles when participating in their communities. For example, some in cancer support communities specialize in providing disease-related information or socializing new members. This work clusters the behavioral patterns of users of a cancer support community into specific functional roles. Based on a series of quantitative and qualitative evaluations, this research identified eleven roles that members occupy, such as welcomer and story sharer. We investigated role dynamics, including how roles change over members' lifecycles, and how roles predict long-term participation in the community. We found that members frequently change roles over their history, from ones that seek resources to ones offering help, while the distribution of roles is stable over the community's history. Adopting certain roles early on predicts members' continued participation in the community. Our methodology will be useful for facilitating better use of members' skills and interests in support of community-building efforts. abstract_id: PUBMED:20860782 Dying cancer patients talk about physician and patient roles in DNR decision making. Background: Within medical and bioethical discourse, there are many models depicting the relationships between, and roles of, physician and patient in medical decision making. Contestation similarly exists over the roles of physician and patient with regard to the decision not to provide cardiopulmonary resuscitation (CPR) following cardiac arrest [the do-not-resuscitate or do-not-resuscitate (DNR) decision], but there is little analysis of patient perspectives. Objective: Analyse what patients with cancer within weeks before dying say about the decision to forego CPR and the roles of patient and physician in this decision. Design And Participants: Discursive analysis of qualitative data gathered during semi-structured interviews with 28 adult cancer patients close to death and attending palliative or oncology clinics of an Australian teaching hospital. Results: Participants' descriptions of appropriate patient or physician roles in decisions about CPR appeared related to how they conceptualized the decision: as a personal or a medical issue, with patient and doctor respectively identified as appropriate decision makers; or alternatively, both medical and personal, with various roles assigned embodying different versions of a shared decision-making process. Participants' endorsement of physicians as decision makers rested upon physicians' enactment of the rational, knowledgeable and compassionate expert, which legitimized entrusting them to make the DNR decision. Where this was called into question, physicians were positioned as inappropriate decision makers. Conclusion: When patients' and physicians' understandings of the best decision, or of the preferred role of either party, diverge, conflict may ensue. In order to elicit and negotiate with patient preferences, flexibility is required during clinical interactions about decision making. abstract_id: PUBMED:35416333 Roles of general practitioners in shared decision-making for patients with cancer: A qualitative study. Objective: The shared decision-making (SDM) process for the treatment of pancreatic and oesophageal cancer primarily takes place with healthcare professionals (HCPs) in the hospital setting. This study aims to explore the perspectives of general practitioners (GPs) on their possible roles during this SDM process, their added value and their requirements for involvement in SDM. Methods: Semi-structured interviews were conducted with 12 GPs about their views on SDM for patients with cancer. The interviews were analysed by two researchers using an inductive open coding approach. Results: Five potential roles in SDM were described by the interviewed GPs, of which the role as 'coach' of the patient was mentioned by all. GPs see their main added value as their long-standing relationship with the patient. To be able to participate optimally in SDM, GPs indicated that they need to be kept up to date during the patient's care process and should receive enough medical information about treatment options and contextual information. Conclusion: GPs see different potential roles for themselves when involved in SDM. Hospital HCPs that want to facilitate GP involvement should take the initiative, provide the GPs with enough and timely information and must be easy to consult. abstract_id: PUBMED:37394734 Family caregiver roles and challenges in assisting patients with cancer treatment decision-making: Analysis of data from a national survey. Background: We aimed to describe the roles and challenges of family caregivers involved in patients' cancer treatment decision-making. Methods: Family caregiver-reported data were analyzed from a national survey conducted in the United States by CancerCare® (2/2021-7/2021). Four select-all-that-apply caregiver roles were explored: (1) observer (patient as primary decision-maker); (2) primary decision-maker; (3) shared decision-maker with patient and (4) decision delegated to healthcare team. Roles were compared across five treatment decisions: where to get treatment, the treatment plan, second opinions, beginning treatment and stopping treatment. Ten challenges faced by caregivers (e.g., information, cost, treatment understanding) were then examined. χ2 and regression analyses were used to assess associations between roles, decision areas, challenges and caregiver sociodemographics. Results: Of 2703 caregiver respondents, 87.6% reported involvement in patient decisions about cancer treatment, including 1661 who responded to a subsection further detailing their roles and challenges with specific treatment decisions. Amongst these 1661 caregivers, 22.2% reported an observing role, 21.3% a primary decision-making role, 53.9% a shared decision-making role and 18.1% a role delegating decisions to the healthcare team. Most caregivers (60.4%) faced ≥1 challenge, the most frequent being not knowing how treatments would affect the patient's physical condition (24.8%) and quality of life (23.2%). In multivariable models, being Hispanic/Latino/a was the strongest predictor of facing at least one challenge (b = -0.581, Wald = 10.69, p < .01). Conclusions: Most caregivers were involved in patients' cancer treatment decisions. The major challenge was not understanding how treatments would impact patients' physical health and quality of life. Challenges may be more commonly faced by Hispanic/Latino/a caregivers. Patient Or Public Contribution: The CancerCare® survey was developed in partnership with caregiving services and research experts to describe the role of cancer family caregivers in patient decision-making and assess their needs for support. All survey items were reviewed by a CancerCare advisory board that included five professional patient advocates and piloted by a CancerCare social worker and other staff who provide counselling to cancer caregivers. Answer: Yes, characteristics of the decision do influence cancer patients' roles in treatment decisions. According to a study that surveyed a large cohort of patients with recently diagnosed lung or colorectal cancer, patients' roles varied depending on the evidence about the treatment's benefit, whether the decision was likely preference-sensitive (such as palliative therapy for metastatic cancer), and the treatment modality. The study found that when there was good evidence to support a treatment, shared control between patients and physicians was greatest. In contrast, when evidence was uncertain, patient control was greatest, and when there was no evidence for or evidence against a treatment, physician control was greatest. Additionally, decisions about treatments for metastatic cancers tended to be more physician-controlled than other decisions. This suggests that patients may not want the responsibility of deciding on treatments that will not cure them, indicating a need for better strategies for shared decision-making in such scenarios (PUBMED:20713872).
Instruction: Seeing and liking cigarette advertisements: is there a 'mere exposure' effect? Abstracts: abstract_id: PUBMED:22948413 Seeing and liking cigarette advertisements: is there a 'mere exposure' effect?. Aims: We aimed to explain the association between exposure to a cigarette advertisement and favorable attitudes towards the advertisement. Methods: We used data from an observational cross-sectional study with a sample of 3,415 German schoolchildren aged 10-17 years. Cigarette advertising exposure was assessed with an image of a Marlboro ad, asking for contact frequency (number of times seen the ad) and brand name. Liking of the ad was measured with two items (alpha = 0.78). Results: We found a positive linear association between exposure to the Marlboro ad and liking it. This association remained significant (standardized β = 0.09; p < 0.001) even after statistical control for smoking status, smoking of friends and parents, attitudes towards smoking in general, cigarette advertising receptivity (having a favorite cigarette ad), exposure to other advertisings, age, sex, socioeconomic status, rebelliousness and sensation seeking, self-reported school performance, and study region. Conclusions: The association between exposure to an advertisement and liking it was robust and could not be fully explained without referring to either unmeasured confounding or implicit advertising effects (e.g. mere exposure). Implicit effects have implications for prevention strategies as it may be very difficult to counteract unconscious advertising effects. abstract_id: PUBMED:26145272 Effect of sensory exposure on liking for fat- or sugar-reduced biscuits. This study investigates the effect of exposure to fat- or sugar-reduced biscuits on liking for these products. Two sets of biscuits were manufactured, each including a standard variant and 4 variants differing by the level of reduction of either fat or sugar content, to 33% of fat content or 28% of sugar content. Biscuit consumers were recruited to eat either the fat (n = 113) or the sugar-reduced set of biscuits (n = 106). They participated in 5 testing sessions, once a week, in laboratory conditions. During each session, they rated their liking of the 5 variants. At the end of each of the 4 first sessions, consumers were given 16 biscuits for their home consumption during the week. Participants were split into 3 groups of exposure: every week, a control group received the standard variant, a "direct" group received the most reduced variant and a "stepwise" group received a more and more reduced variant. After both control and stepwise exposure, almost no evolution of liking was observed. At the end of the direct exposure period to the 33% fat-reduced variant, liking for this variant significantly improved. On the contrary, after the direct exposure to the 28% sugar-reduced variant, liking only improved for 9 and 16% sugar-reduced variants. abstract_id: PUBMED:38368909 Sweet and sour sips: No effect of repeated exposure to sweet or sour-tasting sugary drinks on children's sweetness preference and liking. Health agencies advocate reducing children's sweetness exposure to lower sweetness preference or liking to ultimately lower sugar intake. However, the relationship between sweetness exposure, preference, and liking remains unclear. This work investigated the influence of exposure to a sucrose-containing sweet or sour-tasting drink on sweetness preference and liking for sweet and sour products in 4-7-year-old children (n = 65). The children were randomized into three groups with one daily exposure to either the sweet drink, sour drink, or water (control group) for 14 days. Sweetness preference was assessed at baseline (t1), day 15 (t2), and two months after the intervention (t3), using a forced-choice, paired comparison test with five beverages varying in sweetness intensity. Hedonic liking for the intervention drinks, a sweet and sour yogurt, and a sweet and sour candy was evaluated using a 5-point pictorial scale. Linear mixed models revealed a significant increase in sweetness preference from t1 to t3 (F(2) = 7.46, p < 0.001). However, ANCOVA analysis indicated that this effect was not caused by the intervention. Based on linear mixed models, we observed that children's hedonic liking for sweet and sour products remained stable from t1 to t3 and was not influenced by the intervention. These findings suggest that 14 exposures to a sucrose-containing sweet or sour-tasting drink did not affect sweetness preference or liking in 4-7-year-old children. abstract_id: PUBMED:35949004 The sweet tooth of infancy: Is sweetness exposure related to sweetness liking in infants up to 12 months of age? Infants become increasingly exposed to sweet-tasting foods in their first year of life. However, it is still unclear whether repeated exposure to sweet taste is linked to infants' sweetness liking during this period. Making use of data from the OPALINE cohort, this study aimed to examine the link between sweetness exposure and sweetness liking during two important periods in early infant feeding: at the start of complementary feeding (3-6 months) and the transition to the family table (10-12 months). Infants' sweetness exposure was assessed using 7-d food records which were completed by mothers every month (n 312), reporting daily consumption rates of formula/breast milk or complementary food and the type of formula milk and/or complementary foods for each feeding occasion. Infants' sweetness liking was studied in the laboratory at 3, 6 and 12 months of age by assessing their response to a lactose-water solution and the amount drunk of this solution compared with plain water. Linear regressions and structural equation model assessed associations between exposure to and liking for sweetness at 6 and 12 months. Neither at 6 (n 182) nor at 12 months (n 197) was sweetness exposure associated with sweetness liking. While sweetness liking at 3 months was unrelated to liking at 6 months, the latter predicted sweetness liking at 12 months. These findings demonstrate no association between sweetness exposure at 3 to 12 months and liking at 6 and 12 months despite a sharp increase in sweetness exposure in that period. However, sweetness liking at 6 and 12 months was positively associated. abstract_id: PUBMED:31838369 Dissociating wanting and anticipated liking from consummatory liking in smokers with different levels of nicotine dependence. Introduction: Incentive Sensitisation theory suggests wanting and liking are dissociable concepts, with wanting, but not liking typically increasing with repeated drug use. Wanting is associated with anticipation of reward, whereas liking relates to pleasure derived from consummatory behaviour. However, numerous studies have conceptualised liking as an anticipatory cognition. This study explores whether levels of nicotine dependence differentially effect wanting and liking responses to smoking-related cues, and whether anticipated and consummatory liking are equivalent, and dissociable from wanting. Method: Heavy (HS, mean = 16 cigarettes/day) and light non-daily (LS, mean = 2 cigarettes/day) smokers completed wanting and anticipated liking questionnaires pre-, immediately post-exposure to smoking-related and neutral cues and at session-end. Consummatory liking was measured post-session, immediately after smoking. Results: Wanting and anticipated liking responses were comparable. Smoking-related cues increased wanting and anticipated liking compared to neutral cues. This effect was maintained until session-end. No baseline differences were seen between HS and LS on wanting or anticipated liking, however after cue exposure, and at session-end, HS reported greater drug wanting and anticipated liking than LS. Conversely, HS and LS did not differ on consummatory liking. Analyses confirmed the relationship between wanting and anticipated liking was significantly stronger than wanting and consummatory liking or anticipated and consummatory liking. Conclusions: Wanting and anticipated liking appear to be overlapping constructs assessing expectations of reward, that are dissociable from consummatory liking. Furthermore, heavier smoking increases drug wanting, but not smoking pleasure. Future attempts to dissociate these concepts should ensure liking is measured during/immediately after consumption. abstract_id: PUBMED:31055010 Repeated exposure to vegetable-enriched snack bars may increase children's liking for the bars - but not for the vegetables. Most children do not consume enough vegetables and a key reason is their relatively low hedonic acceptance. The aims of this study were to test if integration of vegetables into a popular snack product might serve as a means to increase the liking for vegetables and to test how exposure frequency influences the intake and liking. Totally 256 children from 12 school classes, aged 7-10 years, participated in the study. School classes were assigned to one of six intervention groups based on the bar they were exposed to: beetroot/carrot (B/C), n = 45; spinach/Jerusalem artichoke (S/J), n = 34; pumpkin/sweet potato (P/S), n = 44; Neutral, n = 46, who received a neutral bar without vegetables; and a Boredom Group, who received the beetroot/carrot (b/c) bar at high frequency, n = 47. These groups received 8 exposures. The Boredom Group was exposed daily while the other groups were exposed every second day. A Control Group (n = 40) was not exposed. Liking for the bars was assessed at baseline and post-intervention and the intake of the bars was recorded during all exposures. Results showed increases in liking for the exposed bars in all groups except the Boredom Group, but changes were only significant in the B/C Group (p = 0.03) and the P/S Group (p = 0.02). Some increases towards unexposed bars were observed. Liking for vegetables remained stable or decreased. Intake of the bars during exposures did not differ significantly between groups, but was somewhat lower in the Boredom Group. In conclusion, repeated exposure may be a successful approach to increase liking for a vegetable bar, but does not generalize to changes in vegetable liking, and a lower exposure frequency may be advantageous. abstract_id: PUBMED:24355332 Effect modification of the association of cumulative exposure and cancer risk by intensity of exposure and time since exposure cessation: a flexible method applied to cigarette smoking and lung cancer in the SYNERGY Study. The indiscriminate use of the cumulative exposure metric (the product of intensity and duration of exposure) might bias reported associations between exposure to hazardous agents and cancer risk. To assess the independent effects of duration and intensity of exposure on cancer risk, we explored effect modification of the association of cumulative exposure and cancer risk by intensity of exposure. We applied a flexible excess odds ratio model that is linear in cumulative exposure but potentially nonlinear in intensity of exposure to 15 case-control studies of cigarette smoking and lung cancer (1985-2009). Our model accommodated modification of the excess odds ratio per pack-year of cigarette smoking by time since smoking cessation among former smokers. We observed negative effect modification of the association of pack-years of cigarette smoking and lung cancer by intensity of cigarette smoke for persons who smoked more than 20-30 cigarettes per day. Patterns of effect modification were similar across individual studies and across major lung cancer subtypes. We observed strong negative effect modification by time since smoking cessation. Application of our method in this example of cigarette smoking and lung cancer demonstrated that reducing a complex exposure history to a metric such as cumulative exposure is too restrictive. abstract_id: PUBMED:32993116 Comparison between Gradual Reduced Nicotine Content and Usual Nicotine Content Groups on Subjective Cigarette Ratings in a Randomized Double-Blind Trial. In 2018, the United States Food and Drug Administration (FDA) issued an advanced notice of proposed rulemaking to reduce nicotine in tobacco products to produce a minimally addictive or nonaddictive effect, but there was a research gap in the subjective responses of reduced-nicotine-content cigarettes. We compared the responses of the modified cigarette evaluation questionnaire (mCEQ) and cigarette-liking scale (CLS) between the gradually reduced nicotine content (RNC) group and the usual nicotine content (UNC) group. Linear mixed-effects models for repeated measures were used to analyze and compare the change over time for the mCEQ and CLS across the two treatment groups (RNC and UNC). We found that the change over time for the mCEQ and CLS was significant between the RNC and the UNC treatment groups at the beginning of visit 6 with 1.4 mg nicotine/cigarette. At visits 8 and 9, the RNC group reported significantly lower satisfaction scores compared to UNC. Subscale analysis showed that smoking satisfaction decreased in RNC while other measures, such as cigarette enjoyment, did not change. Understanding the impact of nicotine reduction on cigarette subjective responses through evaluation and liking scales would provide valuable information to the FDA on nicotine reduction policies for cigarettes. abstract_id: PUBMED:24879887 Enhancing consumer liking of low salt tomato soup over repeated exposure by herb and spice seasonings. There is strong evidence for the link between high dietary sodium and increased risk of cardiovascular disease which drives the need to reduce salt content in foods. In this study, herb and spice blends were used to enhance consumer acceptability of a low salt tomato soup (0.26% w/w). Subjects (n = 148) scored their liking of tomato soup samples over 5 consecutive days. The first and last days were pre-and post-exposure visits where all participants rated three tomato soup samples; standard, low salt and low salt with added herbs and spices. The middle 3 days were the repeated exposure phase where participants were divided into three balanced groups; consuming the standard soup, the low salt soup, or the low salt soup with added herbs and spices. Reducing salt in the tomato soup led to a significant decline in consumer acceptability, and incorporating herbs and spices did not lead to an immediate enhancement in liking. However, inclusion of herbs and spices enhanced the perception of the salty taste of the low salt soup to the same level as the standard. Repeated exposure to the herbs and spice-modified soup led to a significant increase in the overall liking and liking of flavour, texture and aftertaste of the soup, whereas no changes in liking were observed for the standard and low salt tomato soups over repeated exposure. Moreover, a positive trend in increasing the post-exposure liking of the herbs and spices soup was observed. The findings suggest that the use of herbs and spices is a useful approach to reduce salt content in foods; however, herbs and spices should be chosen carefully to complement the food as large contrasts in flavour can polarise consumer liking. abstract_id: PUBMED:24929191 Liking the odour, liking the food. Toddlers' liking of strongly flavoured foods correlates with liking of their odour. Olfaction plays a significant role in the sensing of foods. However, little information is available at any age on the relationship between the hedonic responses to given food odours and the effective liking and disliking of foods bearing these same odours. The present study aimed to assess the relationships between food odour liking and liking of the corresponding foods. This study relied on a longitudinal design involving 235 toddlers who were assessed for both their observed liking of a set of food odours and their parent-reported liking of foods at 12 and 22 months. To assess odour liking, eight odorants representing pleasant and unpleasant foods were presented in bottles along with neutral stimuli. The participants' behaviour towards the odorized and control bottles was measured in terms of mouthing, a behaviour considered to reflect attraction. For each odorant, odour-liking scores were calculated relative to the control. The participants' food liking was rated by the parents at the periods 12-15 and 21-24 months. Positive correlations were found between the odour-liking scores for some of the odours and the liking of the associated foods. These correlations concerned foods with strong, unpleasant flavours at 12 months only, suggesting that the olfactory system acts as an 'alarm' system during this period of food transition. At 22 months, no significant correlations were found, except a marginal one for green vegetables. Whatever the age, no significant correlations were found for pleasant odours. Thus, some correlations were found between the observed odour liking for food-related odours and the liking for the corresponding foods. However, these relationships are subject to developmental fluctuations depending on the hedonic nature of the odorants and the age when infants are tested. Answer: Yes, there appears to be a 'mere exposure' effect when it comes to seeing and liking cigarette advertisements. The study with German schoolchildren aged 10-17 years found a positive linear association between exposure to a Marlboro cigarette advertisement and liking it. This association remained significant even after controlling for various factors such as smoking status, smoking of friends and parents, attitudes towards smoking in general, exposure to other advertisements, and demographic variables. The robustness of this association suggests that it could not be fully explained without considering either unmeasured confounding factors or implicit advertising effects, such as the mere exposure effect (PUBMED:22948413). The mere exposure effect posits that repeated exposure to a stimulus increases an individual's preference for it, which seems to be supported by the findings of this study.
Instruction: Is emergency aortic root replacement combined with arch replacement safe? Abstracts: abstract_id: PUBMED:35753735 Application of retrograde cerebral perfusion in aortic root replacement combined with right half aortic arch replacement. Objectives: After cardiac surgery involving the aortic arch, the incidence of neurological complications remains high, therefore it is very important to take measures to protect brain. This study is to investigate the safety and effectiveness of deep hypothermic circulatory arrest and retrograde cerebral perfusion for aortic root combined with right half aortic arch replacement. Methods: Clinical data of 31 patients, who underwent aortic root and right half aortic arch replacement with deep hypothermic circulatory arrest and retrograde cerebral perfusion in Xiangya Hospital, Central South University, were retrospectively analyzed. This cohort included 23 aortic aneurysms and 8 aortic dissections. Aortic root replacement was conducted in 26 patients by Bentall procedures, and 5 patients by David procedures. Time of deep hypothermic circulatory arrest and retrograde cerebral perfusion in surgery was (21.9±5.2) min. The in-hospital mortality, postoperative neurological dysfunction and other major adverse complications were observed and recorded. Results: No in-hospital death and permanent neurological dysfunction occurred. Two patients had transient neurological dysfunction and 2 patients with aortic dissection requiring long-time ventilation due to hypoxemia, 1 patient underwent resternotomy. During 6-36 months of follow-up, all patients recovered satisfactorily. Conclusions: Deep hypothermic circulatory arrest and retrograde cerebral perfusion can be safely and effectively applied in aortic root and right half aortic arch replacement, and which can simplify the surgical procedures and be worth of clinical promotion. abstract_id: PUBMED:22569490 Is emergency aortic root replacement combined with arch replacement safe? Background: Aortic root replacement (ARR) combined with aortic arch replacement (AAR) is an invasive procedure even in elective cases. Nevertheless, such combined operations are often mandatory in acute type A aortic dissection. We examined whether emergency operation might have further incremental risks compared with elective surgery in this type of operations. Methods: Forty-six cases of ARR combined with AAR were divided into 2 groups, the emergency (EM) group and the elective (EL) group. The EM group consisted of 10 cases of acute type A aortic dissection, whereas the EL group of 36:23 of chronic aortic dissection and 13 of true aneurysm. Results: There were no statistical differences between the 2 groups in the durations of aortic crossclamp, selective cerebral perfusion and cardiopulmonary bypass. The incidences in the EM and EL groups were as follows:in-hospital death; 0 vs 3( 8%), respiratory failure; 4 (40%) vs 14 (39%), renal failure; 0 vs 6 (17%), IABP requirement; 1 (10%) vs 3 (8%), and cerebral infarction; 0 vs 1 (3%), respectively. Conclusion: Early surgical results of emergency ARR combined with AAR were almost equal to those in elective surgery. abstract_id: PUBMED:25354747 Clinical outcomes of combined aortic root reimplantation technique and total arch replacement. Objectives: The goal of this study was to evaluate early and late outcomes of combined valve-sparing aortic root replacement and total arch replacement (TAR). Methods: From October 1999 to May 2014, 195 patients underwent valve-sparing operations using the David reimplantation technique. Thirty-one patients underwent combined TAR for aortic regurgitation (AR) with extended aortic aneurysm from the aortic root to the aortic arch. Aetiologies included acute type A aortic dissection in 12 cases, chronic aortic dissection in 8 cases and non-dissecting aneurysm in 11 cases. There were 9 patients with Marfan syndrome. The preoperative severity of AR was mild in 4, moderate in 16 and severe in 11. Even though half of those were emergent operations for acute aortic dissection, preoperative haemodynamic conditions were stable in all patients. Results: No hospital deaths occurred. Postoperative complications included prolonged mechanical ventilation (>48 h) in 1 case and re-exploration for bleeding in 2 cases. Other complications, such as neurological dysfunction or low cardiac output syndrome, were not observed. At hospital discharge, 2 patients had mild AR, 22 had trace AR and 7 had no AR. During follow-up, 2 patients had moderate AR, 7 had mild AR, 18 had trace AR and 3 had no AR. Follow-up was completed in 95.1% of patients, and the mean follow-up period was 60.5 ± 9.1 months. No late death and thromboembolic complication occurred during follow-up. One patient required reoperation for AR. Freedom from reoperation at 5 and 10 years was 100 ± 0 and 83.3 ± 3.5%, respectively. Freedom from moderate or severe AR at 3 and 5 years was 83.3 ± 3.5 and 83.3 ± 3.5%, respectively. Conclusions: Early outcomes of combined aortic root reimplantation and TAR were satisfactory and provided excellent freedom from thromboembolic complication. The rate of freedom from reoperation during long-term follow-up was acceptable. Further follow-up is required to evaluate this procedure. abstract_id: PUBMED:36239233 Resternotomy aortic root and arch replacement following previous complex type A aortic dissection requiring endovascular repair for malperfusion. A redo sternotomy, aortic root, and arch replacement in a patient following previous complex surgical and endovascular type A aortic dissection repair is presented in this video case report. Shortly after having the initial type A aortic dissection repair with replacement of the ascending aorta, the patient developed severe visceral malperfusion due to a compressed distal true lumen and underwent emergency endovascular repair with ascending arch and descending thoracic aorta stents and chimney grafts for the aortic arch vessels as well as fenestration of the intimal flap of the abdominal aorta. Unfortunately, the patient developed permanent paraplegia and progressive symptomatic severe aortic regurgitation. The patient underwent a redo sternotomy, aortic root, and arch replacement with explantation of the ascending stent graft and chimney stent grafts. Antegrade cerebral perfusion was maintained throughout the procedure. The aortic arch was replaced using a Terumo Aortic Plexus multibranched graft distally anastomosed to the endovascular stent graft, and the innominate and left common carotid arteries were reimplanted onto the graft. The aortic root was replaced with a Bioconduit graft, using a modified Cabrol technique to reimplant the left coronary artery. A satisfactory postoperative course and computed tomography imaging highlight the feasibility of this highly complex aortic arch repair with careful preoperative planning. abstract_id: PUBMED:36590741 Extent of aortic replacement and operative outcome in open proximal thoracic aortic aneurysm repair. Objectives: There are few data to delineate the risk differences among open aortic procedures. We aimed to investigate the influence of the procedural types on the outcomes of proximal thoracic aortic aneurysm repair. Methods: Among 1900 patients who underwent aortic replacement in our institution between 2005 and 2019, 1132 patients with aortic aneurysm who underwent a graft replacement of proximal thoracic aorta were retrospectively reviewed. Patients were divided into 4 groups based on the extent of the aortic replacement: isolated ascending aortic replacement (n = 52); ascending aortic replacement with distal extension with hemiarch, partial arch, or total arch replacement (n = 126); ascending aortic replacement with proximal extension with aortic valve or root replacement (n = 620); and ascending aortic replacement with distal and proximal extension (n = 334). "Eventful recovery," defined as occurrence of any key complications, was used as the primary end point. Odds ratios for inability to achieve uneventful recovery in each procedure were calculated using ascending aortic replacement as a reference. Results: Overall, in-hospital mortality and stroke occurred in 16 patients (1.4%) and 24 patients (2.1%). Eventful recovery was observed in 19.7% of patients: 11.5% in those with ascending aortic replacement, 36.5% in those with partial arch or total arch replacement, 16.6% in those with proximal extension with aortic valve or root replacement, and 20.4% in those with distal and proximal extension (P < .001). With ascending aortic replacement as the reference, a multivariable logistic regression revealed partial arch or total arch replacement (odds ratio, 10.0; 95% confidence interval, 1.8-189.5) was an independent risk factor of inability to achieve uneventful recovery. Conclusions: Open proximal aneurysm repair in the contemporary era resulted in satisfactory in-hospital outcomes. Distal extension was associated with a higher risk for postoperative complications. abstract_id: PUBMED:31609491 Rapid-deployment aortic valve replacement after aortic root replacement: A safe alternative to redo root replacement. Reoperative aortic root replacement, following prior biologic or mechanical valved conduit aortic root prosthesis, presents a technical challenge. The rapid-deployment aortic valve prosthesis is an approved alternative to traditional bioprosthetic aortic valve replacement. We present three clinical cases in which rapid-deployment aortic valve prostheses were utilized in lieu of reoperative full aortic root replacement. All three patients recovered uneventfully. The rapid-deployment valve insertion in a prior surgical aortic root prosthesis is a safe option to avoid reoperative full aortic root replacement. abstract_id: PUBMED:35604783 Valve-Preserving Root Reimplantation Combined with Arch Procedure: Optimizing Patient Selection. Objective: Patients with thoracic aortic disease commonly present with concomitant multisegment pathology. We describe the patient population, analyze outcomes, and define the patient selection strategy for valve-preserving aortic root reimplantation (VPARR) combined with the arch procedure. Methods: From 2008 to 2018, 98 patients underwent VPARR combined with the aortic arch procedure (hemi-arch, 50% [n = 49, limited repair]; total arch, 50% [n = 49, complete repair] including 39 with elephant trunk). Indications for surgery were aneurysmal disease (61%) and aortic dissection (39%). The median follow-up was 17 months (IQR, 8 to 60 months). Results: There were no operative deaths or paraplegia, and 5 patients underwent re-exploration for bleeding. During follow-up, 2 patients required aortic valve replacement for severe aortic insufficiency at 1 and 5 years, and 4 patients died. In the limited repair group, 1 patient underwent reintervention for aortic arch replacement, whereas 4 patients underwent planned intervention (1 endovascular and 3 open thoracoabdominal aortic repair). In the complete repair group, 23 patients underwent planned intervention (15 endovascular and 8 open thoracoabdominal repair). Conclusions: Single-stage, complete, proximal aortic repair including VPARR combined with total aortic arch replacement is as safe and feasible to perform as limited arch repair and facilitates further intervention in carefully selected patients with diffuse aortic pathology at centers of expertise. abstract_id: PUBMED:32299699 Results of ascending aortic and arch replacement for type A aortic dissection. Objective: The outcomes of emergency surgery for type A acute aortic dissection have improved. However, ascending aortic replacement sometimes leads to dilatation of the distal aorta. The present study reviewed our outcomes of ascending aortic replacement and total arch replacement in patients with type A acute aortic dissection. Methods: A total of 253 patients with type A acute aortic dissection underwent a central repair operation. Our standard technique was ascending aortic replacement. Total arch replacement was performed only when entry existed in the major curvature of the aortic arch and the proximal descending aorta. A total of 169 patients (67%) underwent ascending aortic replacement, and 84 patients (33%) underwent total arch replacement. Hospital death due to initial surgery, dilatation of the distal aorta greater than 5 cm, new occurrence of aortic dissection, any distal aortic surgery, and aortic-related deaths were defined as distal aortic events. Results: The mortality was 7.1% in the ascending aortic replacement group and 6.0% in the total arch replacement group. Postoperative computed tomography was performed in 162 patients in the ascending aortic replacement group. The false lumen of the residual aortic arch had thrombosed and healed in 94 patients (58%) and remained present in 68 patients (42%). The distal aortic event-free rate in the ascending aortic replacement group decreased from 74% at 5 years to 51% at 9 years, and the rate in the total arch replacement group was 83% at 5 to 9 years (P < .01). For the ascending aortic replacement group, more patients with a dissected arch had a distal aortic event compared with patients with a healed arch (P < .01). Conclusions: Total arch replacement was associated with fewer distal aortic events. We may expand the indications for total arch replacement in stable patients. abstract_id: PUBMED:34511770 Treatment of aortic arch rupture grade III with hybrid arch replacement: the key role of perfusion preservation. Hybrid arch replacement is a well-accepted method for the treatment of lesions involving the aortic arch, though its benefits compared to classic surgical techniques remain controversial. Multiple surgical approaches have been analyzed in the literature for the treatment of such a challenging pathology. In this case report, we describe the surgical management of a 72-year-old man presenting with a complicated aortic arch rupture. The patient was treated urgently with a type I hybrid arch replacement in two stages, with total preservation of cerebral and systemic perfusion. Our case shows that hybrid arch methods are applicable even in emergency cases. abstract_id: PUBMED:37931796 Natural history of the distal aorta following elective root replacement in patients with Marfan syndrome. Objective: It is unclear if the addition of a prophylactic arch operation is beneficial at the time of root replacement in patients with Marfan syndrome (MFS). This project aims to understand the fate of the distal aorta following elective root replacement in patients with MFS. Methods: Between 2000 and 2019, 124 adult patients with MFS were identified as having undergone elective aortic root replacement with a clamped distal aortic anastomosis during their lifetime. Serial axial imaging was analyzed. The primary outcome was a composite of subsequent type B aortic dissection (TBAD), aneurysmal degeneration (>4 cm), and aortic reintervention. Secondary outcomes included subsequent TBAD and mortality. Results: Mean age at root replacement was 33.3 years. Median follow-up was 11.3 years. Thirty-one patients (25%) experienced the primary outcome, with no survival difference (P = .9). The crossclamp and aortic cannulation sites were stable (growth rate, 0.33 mm/year), and 2 patients (1.6%) required reintervention there. Twenty-three patients (19.8%) experienced subsequent TBAD. Patients with TBAD had a higher rate of distal degeneration (P < .001), but no significant change in survival (P = .2). Preoperative hypertension (odds ratio, 3.96; P < .05) and younger age at root replacement (odds ratio, 1.05; P < .05) increased the risk of TBAD, based on regression analysis. Conclusions: The distal aorta (including the clamp and cannulation site) overall appears stable in patients with MFS following elective root replacement without prophylactic arch operation. Development of TBAD seems to be the primary driver of distal degeneration. Factors associated with TBAD development included hypertension, underscoring the importance of strict blood pressure control in these patients. Answer: Emergency aortic root replacement combined with arch replacement is a highly invasive procedure, but the available evidence suggests that it can be performed safely, even in acute cases. According to a study examining the outcomes of emergency versus elective surgery for aortic root replacement combined with aortic arch replacement, the early surgical results of emergency procedures were almost equal to those in elective surgery, with no significant differences in in-hospital death, respiratory failure, renal failure, intra-aortic balloon pump (IABP) requirement, and cerebral infarction between the two groups (PUBMED:22569490). This indicates that emergency operations do not necessarily have further incremental risks compared to elective surgery for this type of operation. Furthermore, other studies have shown satisfactory outcomes with the use of deep hypothermic circulatory arrest and retrograde cerebral perfusion in aortic root and right half aortic arch replacement, with no in-hospital death and permanent neurological dysfunction reported, and all patients recovering satisfactorily during follow-up (PUBMED:35753735). Additionally, valve-sparing aortic root replacement combined with total arch replacement has been associated with satisfactory early outcomes and excellent freedom from thromboembolic complications, with an acceptable rate of freedom from reoperation during long-term follow-up (PUBMED:25354747). In summary, while emergency aortic root replacement combined with arch replacement is a complex and invasive procedure, the evidence suggests that it can be performed safely with outcomes comparable to elective surgery. However, careful preoperative planning and patient selection are crucial to optimize outcomes (PUBMED:35604783).
Instruction: Does pulse oximetry accurately monitor a patient's ventilation during sedated endoscopy under oxygen supplementation? Abstracts: abstract_id: PUBMED:23624448 Does pulse oximetry accurately monitor a patient's ventilation during sedated endoscopy under oxygen supplementation? Introduction: Pulse oximetry (SpO2) measures oxygen saturation but not alveolar ventilation. Its failure to detect alveolar hypoventilation during sedated endoscopy under oxygen supplementation has been reported. The aim of this study was to measure the masking effect of oxygen supplementation in SpO2 when alveolar hypoventilation develops during sedated endoscopy. Methods: A total of 70 patients undergoing sedated diagnostic colonoscopy were randomly divided into two groups - oxygen supplementation group (n = 35) and room air breathing group (n = 35). SpO2 and end-tidal carbon dioxide (etCO2) were measured by non-intubated capnography during the procedure for all the patients. Results: The rise of etCO2 caused by alveolar hypoventilation was comparable in the two groups after sedation. SpO2 was significantly higher in the oxygen supplementation group than in the room air breathing group (98.6% ± 1.4% vs. 93.1% ± 2.9%; p < 0.001) at peak etCO2, and oxygen supplementation caused SpO2 to be overestimated by greater than 5% when compared with room air. SpO2 at peak etCO2 was reduced from the baseline before sedation for the oxygen supplementation and room air breathing groups by 0.5% ± 1.1% and 4.1% ± 3.1%, respectively (p < 0.001). Conclusion: SpO2 alone is not adequate for monitoring alveolar ventilation during sedated endoscopy under oxygen supplementation due to possible delays in detecting alveolar hypoventilation in patients. Even if SpO2 decreases by only 1% during the procedure and its level remains near 100%, physicians should consider the onset of severe alveolar hypoventilation, which requires immediate intervention. abstract_id: PUBMED:7567739 Pulse oximetry monitoring during non-sedated upper gastrointestinal endoscopy. Fifty consecutive patients judged fit for non-sedated upper gastrointestinal endoscopy were monitored by pulse oximetry before, during and after the procedure. Transient hypoxia developed during intubation in five subjects (10%) but treatment was not required nor was the test halted. Only one patient with pre-existing respiratory problems became hypoxaemic to the extent that oxygen had to be given and the procedure halted. The chance of hypoxia was unrelated to age, sex, smoking, anxiety, or the duration of intubation. Routine pulse oximetry is not necessary for non-sedated gastroscopy but oximetry monitoring may be important in selected cases. abstract_id: PUBMED:8934090 Pulse oximetry and supplemental oxygen during gastrointestinal endoscopy: a critical review. A high proportion of the mortality and morbidity associated with endoscopic procedures may be of cardiopulmonary origin. For this reason, the appropriate degree of monitoring and prophylactic measures to be used in the sedated patient undergoing endoscopy has become a topic of discussion during recent years. The development of myocardial ischaemia during gastrointestinal endoscopy has traditionally been considered to be due to the simultaneous arterial hypoxaemia; however, recent investigations have suggested that tachycardia may be a more important pathogenic factor. No study has ever shown that pulse oximetry monitoring or supplemental oxygen will reduce the morbidity or mortality during gastrointestinal endoscopy. The current guidelines for monitoring and oxygen therapy are therefore not supported by scientific data. There is a need for further studies on the pathogenic mechanisms in myocardial ischaemia during endoscopy, and the influence of hypoxaemia on the outcome after upper and lower endoscopy should be clarified. abstract_id: PUBMED:11218232 Pulse oximetry saturation levels during routine unsedated diagnostic upper gastrointestinal endoscopy. Background: Diagnostic procedures account for over 90% of all upper gastrointestinal endoscopies. Pulse oximetry saturation (SpO2) levels were assessed in patients attending for routine unsedated diagnostic gastroscopy to identify factors associated with oxygen desaturation. Methods: Three hundred and thirty unsedated patients were monitored with continuous pulse oximetry. A further 154 patients who requested sedation prior to endoscopy were studied as a positive control group. Results: SpO2 levels were lower in sedated compared to unsedated patients (P < 0.0001). Six unsedated patients (2%) desaturated to 90% or less during endoscopy compared to 32 sedated patients (21%) (P < 0.0001). SpO2 levels in unsedated patients were not related to patient sex, age, cigarette smoking, endoscope diameter, basal SpO2 levels or duration of endoscopy. In contrast, examination of the pharyngeal area and epiglottis (P = 0.0002) and a longer intubation time (P = 0.0002) were associated with lower SpO2 levels. The lowest SpO2 level recorded during unsedated endoscopy was 88%. Conclusion: Pulse oximetry is not a prerequisite to performing routine unsedated diagnostic gastroscopy in patients without severe systemic disease. abstract_id: PUBMED:2606231 Arterial oxygen saturation in sedated patients undergoing gastrointestinal endoscopy and a review of pulse oximetry. The pulse oximeter is a widely used noninvasive monitor which provides a continuous reading of arterial oxygen saturation. We sought to determine if patients with cardiopulmonary disease were at higher risk of oxygen unsaturation than controls during gastrointestinal endoscopy. Patients presenting for upper gastrointestinal endoscopy and/or colonoscopy were placed into one of two groups: A. control population:patients with no history of cardiac and/or pulmonary disease (N = 11); B. test population:patients with a history of cardiac and/or pulmonary disease (N = 39). The procedure was performed by one of 14 equally experienced physicians. The Nellcor N-100 Pulse Oximeter and D-25 finger probe were used to monitor arterial oxygen saturation which was recorded at five minute intervals from commencement of intravenous sedation analgesia to 15 minutes post-procedure. In both groups, the oxygen saturation fell to statistically significant. A: 98.3% to 90.2%; B: 97.7% to 89.5%. However, the amount of unsaturation which occurred in Group B was not statistically different from Group A. Therefore, history of prior cardiac and/or pulmonary disease does not increase the degree of unsaturation from that occurring in patients with no such history. In addition, the effects of age, procedure, duration, I.V. sedation, and individual tolerance on oxygen unsaturation were studied by regression analysis and analysis of variance (ANOVA). Only the upper GI endoscopy patients showed a dependence of oxygen unsaturation upon one of the variables, specifically patient tolerance. It is concluded that patients at risk for hypoxemia during endoscopy or colonoscopy cannot be determined prior to the initiation of the procedure, and it is recommended that all of these patients be given supplemental oxygen as well as monitored with a pulse oximeter. abstract_id: PUBMED:3868307 Pulse oximetry monitoring of sedated pediatric dental patients. Hypoxemia is recognized as a major complication of sedating pediatric dental patients. Traditional methods of patient monitoring show changes only with moderate to severe hypoxemia. This study compared pulse oximetry, a new monitoring technology, to traditional techniques in their ability to detect hypoxemia in sedated children. The results demonstrated that pulse oximetry is a more sensitive monitor of mild to moderate hypoxemia than measurements of heart rate, blood pressure, respiratory rate, or visual observation for cyanosis in sedated pediatric dental patients. abstract_id: PUBMED:20387345 The role of sedation and pulse oximetry during upper gastrointestinal endoscopy. Introduction: To determine the changes in oxygen saturation, blood pressure and pulse rate during endoscopic procedure and to evaluate--oesophagogastroduodenoscopy (OGD) related discomfort assessed by the patient. Methods: It is observational case control study. Baseline pulse, blood pressure and oxygen saturation were monitored before procedure, at one minute of procedure, at five minute and soon after procedure. Patients were randomly selected according to lottery system (1: without sedation and 2: with sedation) and divided into two groups; without and with sedation. Before leaving department they were asked about level of discomfort during and after procedure. Results: Patient's mean age +/- SD: 36.65, +/- 11.42 years and 53.3% were men. Base line mean oxygen saturation among sedated patient were 96.77 +/-1.56 % and 97.23, +/- 2.26 % respectively (P = 0.358). Mild to moderate hypoxia was noted more in sedated patient than in non sedated patient. Severe hypoxia was noted in 3.3% of sedated patient. No statistically significant change was noticed in pulse and blood pressure in both the group. None or only slight discomfort was experienced by 9.4% in non sedated group and 90.6% in sedated group. Severe discomfort by 96.4% in non sedated group and 3.6% in sedated group. Conclusions: There is slight more incidence of mild hypoxia in sedated group than in non sedated group but no change in pulse and blood pressure. However, sedated patient have significant less level of discomfort than in non sedated group. Though routine use of pulse oxymeter is not necessary, routine use of sedation during endoscopy is recommended. abstract_id: PUBMED:10919420 Hypoxia during upper gastrointestinal endoscopy with and without sedation and the effect of pre-oxygenation on oxygen saturation. In Study A, the incidence of arterial oxygen desaturation was studied using pulse oximetry (SaO2) in 100 sedated and 100 nonsedated patients breathing room air who underwent diagnostic upper gastrointestinal endoscopy. Hypoxia (SaO2 92% or less of at least 15 s duration) occurred in 17% and 6% of sedated patients and nonsedated patients, respectively (p < 0.03). Mild desaturation (SaO2 94% or less and less than 15 s duration) occurred in 47% of sedated patients compared with 12% of nonsedated patients (p < 0.001). In Study B, the effects of supplementary oxygen therapy and the effects of different pre-oxygenation times on arterial oxygen saturation (SaO2) in sedated patients were studied using pulse oximetry. One hundred and twenty patients who underwent diagnostic upper gastrointestinal endoscopy with intravenous sedation were studied. Patients were randomly allocated to one of four groups: Group A (n = 30) received no supplementary oxygen while Groups B-D received supplementary oxygen at 4 1 x min(-1) via nasal cannulae. The pre-oxygenation time in Group B (n = 30) was zero minutes, Group C (n = 30) was 2 min and Group D (n = 30) was 5 min before sedation and introduction of the endoscope. Hypoxia occurred in seven of the 30 patients in Group A and none in groups B, C and D (p < 0.001). We conclude that desaturation and hypoxia is common in patients undergoing upper gastrointestinal endoscopy with and without sedation. Sedation significantly increases the incidence of desaturation and hypoxia. Supplementary nasal oxygen at 4 1 x min(-1) in sedated patients abolishes desaturation and hypoxia. Pre-oxygenation confers no additional benefit. abstract_id: PUBMED:2061775 Pulse oximetry and upper intestinal endoscopy in infants and children. Infants and children undergoing upper intestinal endoscopy were monitored by both pulse oximetry and chest EKG. Fifty-seven patients between the ages of 6 weeks and 36 months underwent 60 flexible upper intestinal endoscopies. All patients received parenteral sedation only. All procedures were successfully performed without significant complications. In 53 of the patients there was only transient mild oxygen desaturation with introduction of the endoscope into the pharynx and upper esophagus. In contrast, in seven patients, oxygen desaturation to less than 90% was noted following sedation but prior to insertion of the endoscope without overt clinical evidence of complications. Oxygen administered by nasal cannula resulted in a return of the oxygen saturation to at least the preprocedural level and allowed for safe completion of the studies. With improved monitoring, the use of smaller, more flexible endoscopes, and more experience, routine general anesthesia in children less than 3 years of age, as recommended in the past, may not be mandatory. Pulse oximetry may be particularly useful as an early indicator of poor oxygenation and may provide an objective means to assess the need for supplemental oxygen and to determine the degree of postprocedural observation. abstract_id: PUBMED:27295798 Using pulse oximetry to assess oxygen levels. Abstract Olive S (2016) Using pulse oximetry to assess oxygen levels. Detecting low oxygen levels in patients is important but not always easy; central cyanosis--when a patient's lips, tongue and mucus membranes acquire a blue tinge--can be missed, even by skilled observers, until significant hypoxaemia is present. Pulse oximetry can be undertaken to measure a patient's oxygen levels and help identify earlier when action must be taken. This article outlines the procedure and its limitations, as well as the circumstances in which it should be used. Answer: Pulse oximetry does not accurately monitor a patient's ventilation during sedated endoscopy under oxygen supplementation. While pulse oximetry (SpO2) measures oxygen saturation, it does not measure alveolar ventilation. A study found that oxygen supplementation can mask the detection of alveolar hypoventilation during sedated endoscopy, as indicated by a significant rise in end-tidal carbon dioxide (etCO2) levels that was comparable in both oxygen-supplemented and room air-breathing groups. Despite the development of alveolar hypoventilation, SpO2 remained significantly higher in the oxygen supplementation group compared to the room air breathing group, leading to an overestimation of SpO2 by more than 5% when compared with room air. This suggests that SpO2 alone is not adequate for monitoring alveolar ventilation during sedated endoscopy under oxygen supplementation, as it may delay the detection of alveolar hypoventilation in patients (PUBMED:23624448). Other studies have also indicated that pulse oximetry monitoring may not be necessary for non-sedated gastroscopy in all cases, but it may be important in selected cases, particularly those with pre-existing respiratory problems (PUBMED:7567739). Moreover, no study has conclusively shown that pulse oximetry monitoring or supplemental oxygen reduces morbidity or mortality during gastrointestinal endoscopy, and current guidelines for monitoring and oxygen therapy are not supported by scientific data (PUBMED:8934090). In summary, while pulse oximetry is a useful tool for measuring oxygen saturation, it is not a reliable indicator of ventilation, especially during sedated endoscopy with oxygen supplementation, as it may not detect hypoventilation promptly.
Instruction: Do comorbidity influences acute toxicity and outcome in elderly patients with endometrial cancer treated by adjuvant radiotherapy plus brachytherapy? Abstracts: abstract_id: PUBMED:23359175 Do comorbidity influences acute toxicity and outcome in elderly patients with endometrial cancer treated by adjuvant radiotherapy plus brachytherapy? Purpose: To correlate comorbidity and acute radiation toxicity in elderly patients treated with adjuvant external beam radiotherapy (EBRT) plus brachytherapy-high dose rate (HDR-BRT) for endometrial cancer (EC). Methods: Endometrial cancer patients over 65 were treated and evaluated for comorbidity assessment with ACE-27 and Charlson comorbidity index (CCI). EBRT total dose was 45-50.4 Gy (1.8 Gy/day). The vault vagina boost of dose was performed by HDR-BRT with 2/3 fractions with a total dose of 10-15 Gy. Results: From 2008 to 2011, 35 patients were analyzed. Eighteen patients (51.43 %) had not ACE-27 comorbidity; while 27 patients (77.14 %) had CCI lower than three. During treatment, acute toxicity was mild and not influenced by the comorbidity score. Two-year Progression Free and Overall Survival were 69 and 80 %. ACE-27 and CCI did not affect progression-free survival (p = 0.51, p = 0.3) and OS (p = 0.26, p = 0.5). Conclusion: External beam radiotherapy plus BRT-HDR are well tolerated in EC elderly with good performance status and low comorbidity profile. abstract_id: PUBMED:36150980 Adjuvant Radiotherapy in Stage II Endometrial Cancer: Selective De-intensification of Adjuvant Treatment. Aims: Risk stratification, including nodal assessment, allows for selective de-intensification of adjuvant radiotherapy in stage II endometrial cancer. Patterns of treatment and clinical outcomes, including the use of reduced volume 'mini-pelvis' radiotherapy fields, were evaluated in a population-based study. Materials And Methods: All patients diagnosed with pathological stage II endometrial cancer between 2000 and 2014, and received adjuvant radiotherapy in a regional healthcare jurisdiction were reviewed. Registry data were supplemented by a comprehensive review of patient demographics, disease characteristics and treatment details. The Charlson Comorbidity Score was calculated. Survival and recurrence data were analysed. Results: In total, 264 patients met the inclusion criteria. Most patients had endometrioid histology (83%); 41% of patients had International Federation of Gynecologists and Obstetricians grade 1 disease. Half (49%) had surgical nodal evaluation; 11% received chemotherapy. Most patients (59%) were treated with full pelvic radiotherapy fields ± brachytherapy. Seventeen per cent of patients received mini-pelvis radiotherapy ± brachytherapy, whereas 24% received brachytherapy alone. Five-year recurrence-free survival was 87% for the entire cohort, with no significant difference by adjuvant radiotherapy approach. Only one patient receiving mini-pelvis radiotherapy ± brachytherapy recurred in the pelvis but outside of the mini-pelvis field. Recorded late toxicity rates were highest for full pelvis radiotherapy + brachytherapy. Conclusion: Risk stratification in a real-world setting allowed for selective de-intensification of adjuvant radiation with equivalent outcomes for stage II endometrial cancer. Mini-pelvis radiotherapy combined with brachytherapy is effective in highly selected patients, with the potential to decrease toxicity without compromising local control. Brachytherapy should be considered in low-risk stage II patients. abstract_id: PUBMED:25911995 Adjuvant radiotherapy in Stage II endometrial carcinoma: Is brachytherapy alone sufficient for local control? Objective: To evaluate recurrence patterns and overall survival in patients treated with adjuvant radiation after surgical staging for Stage II endometrial carcinoma. Secondary goals include identification of prognostic factors for recurrence and toxicity assessment. Methods/materials: The medical records of 41 patients treated with adjuvant radiotherapy at Washington University School of Medicine after surgical staging for endometrial cancer (total abdominal hysterectomy and bilateral salpingo-oophorectomy, peritoneal cytology, lymph node dissection) were reviewed. Nineteen were treated with a combination of external beam radiotherapy and vaginal brachytherapy (VB), and 22 patients were treated with postoperative VB alone. Median followup for all patients was 41 months. Results: Median patient age was 59 years (range, 42-87 years). All tumors were of endometrioid histology. There were 20 Grade 1 tumors, 13 Grade 2 tumors, and 8 Grade 3 tumors. For all patients, the 5-year overall survival was 69.8%, and the 5-year recurrence-free survival was 89.0%. There was no statistically significant difference in overall survival (p = 0.510) or freedom from vaginal (p = 0.840), distant (p = 0.133), or any recurrence (p = 0.275) with respect to modality of treatment (external beam radiotherapy and VB vs. VB alone). There were no pelvic lymph node recurrences. In the univariate analysis, there were no risk factors influencing overall survival or recurrences. One patient experienced a toxicity requiring hospital admission. She was treated with pelvic external beam radiation plus brachytherapy. Conclusions: VB alone results in excellent local control for patients with Stage II endometrial cancer after surgical staging. Long-term toxicities are rare and more common in the group of patients who were treated with pelvic external beam plus brachytherapy. abstract_id: PUBMED:24772913 Adjuvant radiotherapy for endometrial cancer--a comparative review of radiotherapy technique with acute toxicity. Objectives: The addition of pelvic radiotherapy to brachytherapy (EBRT-BT) in early-stage endometrial cancer is controversial and may cause unnecessary toxicity. The incidence of acute toxicity of EBRT-BT will have an impact on clinical decision and patient compliance but is currently poorly understood. This study compares the acute toxicities of EBRT-BT versus BT alone. Materials And Methods: Seventy-nine patients with FIGO Stage IA-II endometrial cancer who underwent adjuvant radiotherapy, (EBRT-BT or BT alone) from 2001 to 2011 were included in the study. Medical records of these patients were reviewed retrospectively and toxicity graded according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. Patients were followed up for at least three months post-treatment to assess resolution of toxicity. Results: The mean age of the study group was 60.6 years. Median follow-up was four years. Forty patients received EBRT-BT. There was a 37% increase in Grade 1-3 diarrhea with the addition of pelvic radiotherapy (OR 18.67, p < 0.0005) and a 34% increase in lethargy (p < 0.0005). There was also an increased occurrence of genitourinary and skin toxicities. Two patients in the EBRT-BT group required hospitalisation for severe diarrhea and three patients were unable to complete the treatment. All acute toxicities had resolved by three months post treatment. Conclusion: EBRT-BT causes significantly more acute toxicities compared to BT alone. Patients should be informed of this during counselling. abstract_id: PUBMED:15275729 Pathologic stage I-II endometrial carcinoma in the elderly: radiotherapy indications and outcome. Purpose: To assess the indications for, and the outcome of, adjuvant radiotherapy (RT) in elderly patients with pathologic Stage I-II endometrial carcinoma. Methods And Materials: Between 1980 and 2001, 79 elderly (age > or =75 years) patients with pathologic Stage I-II endometrial carcinoma were seen at our institution. All underwent total abdominal hysterectomy and bilateral salpingo-oophorectomy with assessment of peritoneal cytology. Pelvic and paraaortic lymph node sampling was performed in 39 and 29 women, respectively. Patients with Stage IA and IB Grade 1-2 were designated as low risk; those with Stage IB Grade 3 and IC-IIB were designated as high risk. Thirty-four received adjuvant RT consisting of whole pelvic (n = 23) or vaginal brachytherapy (n = 3), or both (n = 8). Actuarial disease-free survival, cause-specific survival, and pelvic recurrence-free survival analyses were performed by the Kaplan-Meier method. Results: Adverse factors were common, including deep (>50%) myometrial invasion (47%), Grade 3 disease (28%), cervical involvement (15%), and unfavorable histologic features (15%). Overall, 46 patients (58%) had high-risk disease. Primarily because of concerns over toxicity, RT was administered in only 73%, 55%, and 67% of patients with deep myometrial invasion, Grade 3 disease, and cervical involvement, respectively. Thirty-one high-risk patients (67%) received adjuvant RT. At a median follow-up of 33.5 months, 19 patients had relapsed, for a 5-year actuarial disease-free survival rate of 67.7%. Ten patients (12%) had recurrence in the pelvis, 9 of whom had been patients treated with surgery alone. The 5-year pelvic recurrence-free survival rate of patients treated with and without RT was 97% and 73.1%, respectively (p = 0.02). The corresponding rates in the high-risk patients were 97% and 47% (p = 0.0001). High-risk patients treated with RT also had better 5-year actuarial disease-free survival (p = 0.0001) and cause-specific survival (p = 0.003) than those treated with surgery alone. RT was well tolerated, with all patients receiving their treatment as planned. Only 1 patient developed significant late toxicity. Conclusions: Adverse features are common in pathologic Stage I-II elderly endometrial carcinoma patients, and pelvic recurrence is high after surgery alone. Given the improvement in outcome and low incidence of toxicity, our results support the use of adjuvant RT in elderly pathologic Stage I-II patients with high-risk disease. abstract_id: PUBMED:32824243 Evaluation of Uterine Brachytherapy as Primary Treatment Option for Elderly Patients with Medically Inoperable Endometrial Cancer-A Single-Center Experience and Review of the Literature. We aimed to gain more evidence regarding the feasibility, toxicity, and oncological outcome of primary brachytherapy in patients with medically inoperable endometrial cancer. Thirteen patients receiving primary brachytherapy ± external beam radiotherapy (EBRT) for endometrial cancer due to medical inoperability were identified. The Kaplan-Meier method was used to estimate overall survival (OS), progression-free survival (PFS), and local failure-free survival (LFFS). Univariate outcome analyses were performed using the log-rank test. Peri-interventional complications, acute and chronic toxicities were evaluated. Additionally, we performed a Pubmed search and review of the literature of the last 10 years. Mean age at time of diagnosis was 73.9 years (60.4-87.1 years). Eleven patients were staged FIGO IA/B and one patient each with FIGO IIIA and IIIC. Kaplan-Meier-estimated 2-/5-year LFFS were 76.2%/56.4%, respectively. High grading correlated with a worse LFFS (p = 0.069). Kaplan-Meier-estimated 2-/5-year PFS were 76.9%/53.8% and 2-/5-year-OS were 76.9%/69.2%, respectively. No acute toxicities > grade II and only two late toxicities grade II/III occurred. We observed three peri-interventional complications. The available evidence suggests high rates of local control after definitive brachytherapy for inoperable endometrial cancer with a favorable toxicity profile. Definitive brachytherapy +/- EBRT should be considered as the preferred approach for this patient group. abstract_id: PUBMED:24144528 Comparison of clinical outcomes and toxicity in endometrial cancer patients treated with adjuvant intensity-modulated radiation therapy or conventional radiotherapy. Purpose: To evaluate the treatment outcomes and toxicity in endometrial cancer patients treated with hysterectomy and adjuvant intensity-modulated radiation therapy (IMRT) or conventional radiotherapy (CRT). Methods: There were 101 patients with stage IA-IIIC2 endometrial carcinoma treated with hysterectomy and adjuvant radiotherapy. In total, 36 patients received adjuvant CRT and 65 were treated with adjuvant IMRT. The endpoints were overall survival, local failure-free survival, and disease-free survival. Patients were assessed for acute toxicity weekly according to the Common Terminology Criteria for Adverse Events version 3.0. Late toxicity was evaluated according to the Radiation Therapy Oncology Group and the European Organization for Research and Treatment of Cancer Late Radiation Morbidity Scoring Schema. Results: The 5-year overall survival, local failure-free survival, and disease-free survival for the CRT group and the IMRT group were 82.9% versus 93.5% (p = 0.26), 93.7% versus 89.3% (p = 0.68), and 88.0% versus 82.8% (p = 0.83), respectively. Four (11.1%) patients had Grade 3 or greater acute gastrointestinal (GI) toxicity and three (8.3%) patients had Grade 3 or greater acute genitourinary (GU) toxicity in the CRT group, whereas four (6.2%) patients had Grade 3 or greater acute GI toxicity in the IMRT group and no patient had severe GU toxicity. There was one (2.8%) patient who had Grade 3 or greater late GI toxicity and one (2.8%) patient had Grade 3 or greater late GU toxicity in the CRT group, whereas no patient had severe GI or GU toxicity in the IMRT group. Conclusion: Adjuvant IMRT for endometrial cancer patients had comparable clinical outcomes with CRT and had less acute and late toxicity. abstract_id: PUBMED:34944884 Postoperative Radiotherapy for Endometrial Cancer in Elderly (≥80 Years) Patients: Oncologic Outcomes, Toxicity, and Validation of Prognostic Scores. Endometrial cancer is a common malignancy in elderly women that are more likely to suffer from limiting medical comorbidities. Given this narrower therapeutic ratio, we aimed to assess the oncologic outcomes and toxicity in the adjuvant setting. Out of a cohort of 975 women, seventy patients aged ≥ 80 years, treated with curative postoperative radiotherapy (RT) for endometrial cancer between 2005 and 2021, were identified. Outcomes were assessed using Kaplan-Meier-analysis and comorbidities using the Charlson Comorbidity Index and G8 geriatric score. The overall survival at 1-, 2- and 5-years was 94.4%, 82.6%, and 67.6%, respectively, with significant correlation to G8 score. At 1- and 5-years, the local control rates were 89.5% and 89.5% and distant control rates were 86.3% and 66.9%, respectively. Severe (≥grade 3) acute toxicity was rare with gastrointestinal (2.9%), genitourinary (1.4%), and vaginal disorders (1.4%). Univariate analysis significantly revealed inferior overall survival with lower RT dose, G8 score, hemoglobin levels and obesity, while higher grading, lymphangiosis, RT dose decrease and the omission of chemotherapy reduced distant control. Despite older age and additional comorbidities, elderly patients tolerated curative treatment well. The vast majority completed treatment as planned with very low rates of acute severe side-effects. RT offers durable local control; however, late distant failure remains an issue. abstract_id: PUBMED:24377035 Incidence of radiation toxicity in cervical cancer and endometrial cancer patients treated with radiotherapy alone versus adjuvant radiotherapy. Aim: The study was made to evaluate early and late toxicity in a diversified group of patients receiving definitive or adjuvant radiotherapy in terms of clinical diagnosis and treatment methods. Background: Radiotherapy is a standard way of treatment in cervical and endometrial cancer patients, both as definitive and adjuvant therapy. But every radiation treatment may be involved with toxicity. Materials And Methods: A detailed analysis was performed of 263 patients with gynaecological cancer treated with definitive (90 patients with cervical cancer received radiochemotherapy or radiotherapy exclusively) and adjuvant radiotherapy (38 with cervical and 135 with endometrial cancer). Results: Acute reactions were found in 51.3% and late reactions were found in 14.8% of patients. It was stated that early (p < 0.007) and late (p < 0.003) post radiation reaction appear more frequently in women treated with definitive than adjuvant radiotherapy. The analysis of the whole group revealed higher rate of toxicity, both early and late, in the gastrointestinal tract than in the urinary system (p < 0.004). Comparing the subgroups, it was found that intestinal reactions occurred more frequently in the definitive radiotherapy group than in the adjuvant one. The occurrence of side effects was associated with the prolongation of total irradiation time due to necessary interruptions of radiotherapy. The comparison of the subgroups showed that interruptions occurred more frequently in patients receiving definitive rather than adjuvant radiotherapy (17.7-2.9%). Conclusions: Definitive radiotherapy compared with adjuvant treatment may by associated with higher percentage of side effects caused by dose of therapy and correlation with chemotherapy. abstract_id: PUBMED:33897797 Adjuvant vaginal cuff brachytherapy in surgically treated endometrial carcinoma patients - in view of the recent evidence. Vaginal cuff brachytherapy is an essential component of adjuvant post-operative therapy in endometrial carcinoma. Brachytherapy boost, as a part of adjuvant pelvic radiotherapy, including concomitant chemoradiotherapy combined with four cycles carboplatin/paclitaxel chemotherapy, is used in early-stage high-risk and advanced stage disease. This strategy is widely accepted and recommended by international guidelines, despite the fact that combined therapy has never been verified in randomized trials. Brachytherapy alone is the adjuvant treatment of choice for many patients with early-stage endometrial cancer, with high-intermediate features, replacing external beam pelvic radiotherapy. It provides equivalent vaginal control with a lower risk of toxicity, and minimal impact on health-related quality of life. Available evidence did not demonstrate the superiority of sole vaginal brachytherapy combined with three cycles of carboplatin/paclitaxel chemotherapy, over the standard pelvic irradiation for patients with early-stage, high-intermediate-, and high-risk endometrial cancer. This article summarized the available evidence on the role of post-operative vaginal cuff brachytherapy in endometrial cancer patients. Additionally, the risk groups definition, some aspects of brachytherapy technique, and the importance of pathological and molecular risk factors for endometrial cancer risk stratification were presented. Furthermore, the role of brachytherapy according to the European Society of Gynecological Oncology/European Society for Radiotherapy and Oncology/European Society of Pathology 2021 guidelines for the management of patients with endometrial carcinoma was presented. Answer: Comorbidity does not appear to significantly influence acute toxicity and outcome in elderly patients with endometrial cancer treated by adjuvant radiotherapy plus brachytherapy. In a study that assessed comorbidity with the Adult Comorbidity Evaluation-27 (ACE-27) and the Charlson comorbidity index (CCI), it was found that acute toxicity during treatment was mild and not influenced by the comorbidity score. Additionally, two-year Progression Free and Overall Survival rates were 69% and 80%, respectively, and were not affected by ACE-27 and CCI scores (PUBMED:23359175). Moreover, other studies have shown that adjuvant radiotherapy, including brachytherapy, is well tolerated in elderly patients with good performance status and low comorbidity profile (PUBMED:23359175), and that brachytherapy alone can result in excellent local control for patients with Stage II endometrial cancer after surgical staging, with long-term toxicities being rare and more common in patients treated with both pelvic external beam radiotherapy and brachytherapy (PUBMED:25911995). Furthermore, in a study of elderly patients (age ≥75 years) with pathologic Stage I-II endometrial carcinoma, adjuvant radiotherapy was well tolerated, with a low incidence of toxicity, and it supported the use of adjuvant radiotherapy in elderly pathologic Stage I-II patients with high-risk disease (PUBMED:15275729). In the context of elderly patients aged ≥80 years treated with curative postoperative radiotherapy for endometrial cancer, severe acute toxicity was rare, and patients tolerated curative treatment well, with very low rates of acute severe side-effects (PUBMED:34944884). Overall, these findings suggest that comorbidity does not have a significant impact on acute toxicity or outcomes in elderly patients with endometrial cancer undergoing adjuvant radiotherapy plus brachytherapy.
Instruction: Do we know how many cancer patients have a family history of cancer? Abstracts: abstract_id: PUBMED:16189056 'Coming down the line'-- patients' understanding of their family history of common chronic disease. Purpose: The family history is becoming an increasingly important feature of health promotion and early detection of common chronic diseases in primary care. Previous studies of patients from genetics clinics suggest a divergence between how persons with a family history perceive and understand their risk and the risk information provided by health professionals. This interview study aimed to explore how patients in primary care understand and come to terms with their family history of cancer, heart disease, or diabetes and how family history might affect consultations about disease risk and management. Methods: Thirty semistructured interviews were conducted with general practice patients who had a family history of cancer, heart disease, or diabetes. The transcript data underwent a qualitative constant comparative analysis. Results: What exactly constitutes having a family history of an illness varied among participants. The development of a personal sense of vulnerability to the illness in the family depended not only on the biomedical approach of counting affected relatives but also on a sophisticated interplay of other factors. The emotional impact of witnessing the illness in the family, particularly if the illness was sudden, premature, or fatal, and the nature of personal relationships within a family that determine a sense of emotional closeness and personal likeness with the affected relative, all contributed to the perception of disease risk. Different beliefs about the contributions of nature and nurture to disease can affect patients' views on the degree of control they can exert over their risk. Conclusion: This study highlights potential differences between the way patients and medical professionals assess and understand familial risk of cancer, heart disease, and diabetes. Our previous systematic review findings are enhanced by showing that personal experience of disease and the emotional impact can also influence familial risk perceptions. Eliciting the patient's perspective when discussing risk of chronic disease, particularly in the context of a family history, could inform a more patient-centered approach to risk assessment and communication and support patients to make informed decisions about the management of their disease risk. abstract_id: PUBMED:20414939 Do we know how many cancer patients have a family history of cancer? Purpose: It has been estimated that approximately 5-10% of the general population have a family history that is indicative of hereditary cancer, predominately breast and colorectal. However, it is not precisely known how many patients have positive family history of cancer. The purpose of this study was to determine how many cancer patients have positive family history of cancer. Methods: Patients were interviewed during the first visit to Daily Chemotherapy Hospital (DCH) of the Institute for Oncology and Radiology of Serbia, Belgrade. Data about patient cancer type and cancer types among family members were recorded in the hospital chart and analyzed. Results: During an 8-month period, 677 newly diagnosed cancer patients with 9 cancer types were referred to DCH for chemotherapy. Positive family history (at least one first degree relative) for any cancer type was recorded in 163 (24.1%) patients and in 47 (6.9%) patients for the same cancer type. The highest percentage of the positive family history for the same type of cancer showed patients with breast cancer (9.9%), followed by colorectal (7.2%) and brain tumors (6.25%). Conclusion: The overall incidence of positive family cancer history was 31.0% and was higher than expected. Cancer can be more disturbing for persons who already had experience with this disease in a close family member. Those patients need special attention with more intensive and carefully preplanned psychological support. abstract_id: PUBMED:25142628 Completeness of pedigree and family cancer history for ovarian cancer patients. Objective: To investigate the completeness of pedigree and of number of pedigree analysis to know the acceptable familial history in Korean women with ovarian cancer. Methods: Interview was conducted in 50 ovarian cancer patients for obtaining familial history three times over the 6 weeks. The completeness of pedigree is estimated in terms of familial history of disease (cancer), health status (health living, disease and death), and onset age of disease and death. Results: The completion of pedigree was 79.3, 85.1, and 85.6% at the 1st, 2nd, and 3rd time of interview and the time for pedigree analysis was 34.3, 10.8, and 3.1 minutes, respectively. The factors limiting pedigree analysis were as follows: out of contact with their relatives (38%), no living ancestors who know the family history (34%), dispersed family member because of the Korean War (16%), unknown cause of death (12%), reluctance to ask medical history of relatives (10%), and concealing their ovarian cancer (10%). The percentage of cancers revealed in 1st (2%) and 2nd degree (8%) relatives were increasing through surveys, especially colorectal cancer related with Lynch syndrome (4%). Conclusion: Analysis of pedigree at least two times is acceptable in Korean woman with ovarian cancer from the first study. The completion of pedigree is increasing, while time to take family history is decreasing during three time survey. abstract_id: PUBMED:22889133 Do people really know what makes a family history of cancer? Background: Family history is often referred to as a family tree in casual everyday conservations, but it carries a different connotation in medicine. This study is the first to investigate people's understanding of 'family medical history' and the concept of 'family' in the context of inherited cancer. Methods: Three hundred and nine staff at the Faculty of Medicine and Health, University of Leeds completed an online web survey. Results: Not all respondents understood or knew what makes a family history of cancer. Only 54% knew exactly the type of information required to make a family history. Apart from blood relatives, adopted and step-siblings, step parents, in-laws, spouses, friends and colleagues were also named as 'family' for family history taking. Personal experience of living with cancer and academic qualification were not significant in influencing knowledge of family history. Conclusions: There is misunderstanding and poor knowledge of family history of cancer and the type of information required to make a family history even in a sample of people teaching and researching medicine and health issues. Public understanding of the value of family medical history in cancer prevention and management is important if informed clinical decisions and appropriate health care are to be delivered. abstract_id: PUBMED:29228613 The incidence and prognosis of nasopharyngeal carcinoma patients with family history. Purpose: Epidemiological data showed that nasopharyngeal carcinoma (NPC) was a regional malignancy. It suggested that genetic factor may play an important role in tumorigenesis of NPC. The aim was to investigate the incidence and the prognosis of NPC patients with family history. Methods: The clinical data of patients with NPC treated in Fudan University Shanghai Cancer Center from January 2008 to December 2012 were reviewed, and the patients with family history were selected. The prognosis of patients with family history was follow-up. The 5-year overall survival (OS), local recurrence-free survival (LRFS), and distant metastasis-free survival (DMFS) were analyzed by Kaplan-Meier and log-rank test. Cox proportional hazard model was used for multivariate analysis. Results: There were 3.64% (135/3706) NPC patients with family history of NPC. Eighty-three percent (112/135) patients had only one family member suffering from NPC previously, and 74.1% (100/135) patients who had family history only in first-degree family members. Excluding five patients lost to follow-up, 130 patients were eventually used to analyze the prognosis. The 5-year OS, LRFS, and DMFS rates of all patients with family history were 84.1%, 83.4%, and 83.8%, respectively. There were no significant differences of OS, LRFS and DMFS between one relative group and at least two relatives group. In addition, the degree of NPC had no association with OS, LRFS and DMFS, respectively. Conclusion: Our results showed that there was an incidence rate of 3.64% NPC patients with family history. These patients had a satisfied prognosis, and the prognosis of NPC patients with family history in different degree or numbers of relatives had no significant differences. abstract_id: PUBMED:7980235 Accuracy of reported family history of heart disease: the impact of 'don't know' responses. Background: Usual 'Yes/No' definitions of a family history of heart disease do not allow for 'Don't know' responses. In some families, misclassification of family history may result if these are not considered. Aims: To study the accuracy of family history reporting and the impact of 'Don't know' responses. Methods: We compared reported family history of coronary heart disease (CHD) and cancer among 360 deceased relatives of 59 probands with 'gold standard' diagnoses recorded on official death certificates. Results: For first degree relatives, sensitivity of reporting was 81% for CHD, 86% for any heart disease and 60% for cancer, while for second degree relatives, sensitivity was 73%, 69% and 90%, respectively. When 'Don't know' responses were classified as 'No', sensitivity in first degree relatives was not affected, but in second degree relatives, it fell to 49%, 41% and 60%, respectively. Specificities were unchanged. These data show that in families where information beyond first degree relatives is considered, failing to account for 'Don't know' responses is likely to lead to misclassification of family history of CHD. abstract_id: PUBMED:35068232 Comparison of a Focused Family Cancer History Questionnaire to Family History Documentation in the Electronic Medical Record. Introduction: Family health history can be a valuable indicator of risk to develop certain cancers. Unfortunately, patient self-reported family history often contains inaccuracies, which might change recommendations for cancer screening. We endeavored to understand the difference between a patient's self-reported family history and their electronic medical record (EMR) family history. One aim of this study was to determine if family history information contained in the EMR differs from patient-reported family history collected using a focused questionnaire. Methods: We created the Hereditary Cancer Questionnaire (HCQ) based on current guidelines and distributed to 314 patients in the Department of Family Medicine waiting room June 20 to August 1, 2018. The survey queried patients about specific cancers within their biological family to assess their risk of an inherited cancer syndrome. We used the questionnaire responses as a baseline when comparing family histories in the medical record. Results: Agreement between the EMR and the questionnaire data decreased as the patients' risk for familial cancer increased. Meaning that the more significant a patient's family cancer history, the less likely it was to be recorded accurately and consistently in the EMR. Patients with low-risk levels, or fewer instances of cancer in the family, had more consistencies between the EMR and the questionnaire. Conclusions: Given that physicians often make recommendations on incomplete information that is in the EMR, patients might not receive individualized preventive care based on a more complete family cancer history. This is especially true for individuals with more complicated and significant family history of cancer. An improved method of collecting family history, including increasing patient engagement, may help to decrease this disparity. abstract_id: PUBMED:31865627 Importance of family history and indications for genetic testing. Family history is an important cancer risk assessment tool, and it is easy to use. The family history is integral in identifying an individual's risk for primary cancer and assists in the assessment of risk for a second primary cancer. For oncology providers, the critical family history is defined as including first- and second-degree family history, maternal and paternal history, type of primary cancer, and age at diagnosis and ethnicity. Family history should be taken at diagnosis and updated periodically. Despite the importance of family history to patient care, there are significant barriers to taking a family history. We review the impact of collecting complete family history data with respect to calculation of cancer risk, recommendations for screening, and prevention strategies and referral for genetic testing. abstract_id: PUBMED:9283847 The family history in family practice: a questionnaire study. Objectives: Our aims were to investigate family medical history taking in general practice, and to evaluate the value attached to the family medical history as an aid to decision making in general practice. Method: A postal questionnaire survey was conducted among all 291 GPs working within the Calderdale and Kirklees Health Authority area. Each questionnaire was followed by a reminder. The main outcome measures were answers to questions on routine and opportunistic family history taking and a question about transmitting knowledge about genetic risk to other members of the family. Questions were also posed about the value attached to the family medical history as an aid to decision making. Results: A total of 193 GPs returned the questionnaire (response rate 66.3%). On registration, 94.3% of GPs indicated that enquiries were made about a family history of coronary heart disease. Breast and colorectal cancer were specifically asked about by 48.4% and 30.7% of GPs, respectively. One-fifth of respondents indicated that they asked a general question about family medical history. A little over one-quarter of respondents indicated that they made opportunistic enquiries about the family history or suggested that the patient should inform other members of the family about possible risks. In the scenarios highlighted in this study, the majority of respondents felt that the family medical history had value as an aid to decision making. This was particularly the case for checking a patient's cholesterol (92.1%) and for initiating referrals in younger patients with possible cancer-related symptoms (three-quarters of respondents). Conclusion: GPs value the family medical history as an aid to decision making. Unfortunately, apart from enquiries about coronary heart disease, routine or opportunistic family history taking is not occurring in practice. Mechanisms need to be sought to extract information from the family medical history so that it can be more effectively used by GPs. abstract_id: PUBMED:24978702 The prognostic value of family history among patients with urinary bladder cancer. A history of urinary bladder cancer (UBC) in first-degree relatives increases UBC risk by twofold. The influence of positive family history on UBC prognosis is unknown. Here, we investigated association of first-degree UBC family history with clinicopathological characteristics and prognosis of UBC patients. Detailed clinical data of 1,465 non-muscle-invasive bladder cancer (NMIBC) and 250 muscle-invasive or metastatic bladder cancer (MIBC) patients, diagnosed from 1995 to 2010, were collected through medical file review. Competing risk analyses were used to compare recurrence-free survival (RFS) and progression-free survival (PFS) of NMIBC patients according to self-reported UBC family history. Overall survival in MIBC patients was estimated using Kaplan-Meier analysis. The added value of family history in prediction of NMIBC prognosis was quantified with Harrell's concordance-index. Hundred (6.8%) NMIBC and 14 (5.6%) MIBC patients reported UBC in first-degree relatives. Positive family history was statistically significantly associated with smaller tumor size and non-significantly with more favorable distribution of other tumor characteristics. In univariable analyses, positive family history correlated with longer RFS (p = 0.11) and PFS (p = 0.04). Hazard ratios for positive vs. negative family history after adjustment for clinicopathological characteristics were 0.75 (95% CI = 0.53-1.07) and 0.45 (95% CI = 0.18-1.12) for RFS and PFS, respectively. Five familial and 48 sporadic MIBC patients (Kaplan-Meier 10-year risk: 41% and 25%) died within 10 years. Family history did not improve the c-index of prediction models. This study shows that a first-degree family history of UBC is not clearly associated with NMIBC prognosis. Family history does not aid in prediction of NMIBC recurrence or progression. Answer: Based on the provided abstracts, it is not precisely known how many cancer patients have a positive family history of cancer. However, one study conducted at the Daily Chemotherapy Hospital of the Institute for Oncology and Radiology of Serbia found that during an 8-month period, out of 677 newly diagnosed cancer patients, 163 (24.1%) had a positive family history (at least one first-degree relative) for any cancer type, and 47 (6.9%) had a positive family history for the same cancer type (PUBMED:20414939). This incidence was higher than expected, indicating that a significant proportion of cancer patients have a family history of the disease. However, these figures may vary depending on the population and the methodology used to collect family history data.
Instruction: Is participation in cardiac rehabilitation programs associated with better quality of life and return to work after coronary artery bypass operations? Abstracts: abstract_id: PUBMED:11433629 Is participation in cardiac rehabilitation programs associated with better quality of life and return to work after coronary artery bypass operations? The Israeli CABG Study. Objective: To explore the putative effect of cardiac rehabilitation programs on the 'health-related quality of life' and 'return to work' in pre-retirement patients one year after coronary artery bypass grafting. Methods: Of the 2,085 patients aged 45-64 who survived one year after CABG and were Israeli residents, 145 (6.9%) had participated in rehabilitation programs. Of these, 124 (83%) who answered QOL questionnaires were individually matched with 248 controls by gender, age within 5 years, and the time the questionnaire was answered. All patients had full clinical follow-up including a pre-operative interview. The Short Form-36 QOL questionnaire as well as a specific questionnaire were mailed to surviving patients one year after surgery. Study outcomes included the scores on eight scales and two summary components of the SF-36, as well as 'return to work' and 'satisfaction with medical services' from the specific questionnaire. Analysis was done for matched samples. Results: Cardiac rehabilitation participants had significantly higher SF-36 scores in general health, physical functioning, and social functioning. They had borderline significant higher scores in the physical summary component of the SF-36. The specific questionnaire revealed significantly better overall functioning, higher satisfaction with medical care, and higher rate of return to work. While participants in cardiac rehabilitation and their controls were similar in their socio-demographic and clinical profiles, participating patients tended to be more physically active and more fully employed than their controls. Conclusions: Rehabilitation participants had a self-perception of better HRQOL, most significantly in social functioning. Our findings of more frequent return to work and higher satisfaction with medical care should induce a policy to encourage participation in cardiac rehabilitation programs after CABG. abstract_id: PUBMED:316348 Return to work and quality of life after surgery for coronary artery disease. Changes in work capability and quality of life were assessed retrospectively in 130 patients with ischaemic heart disease who had undergone aortocoronary bypass operations during 1976-7 because of medically uncontrollable angina. A total of 85 patients (65.4%) reported complete relief from angina six months after operation, though 12 later suffered a recurrence. Substantially fewer patients needed drugs after the operation. Before operation 9 out of 117 men fully employed at the onset of angina were working without restriction or doing lighter, fulltime work, 38 were at work but seriously incapacitated by angina, and 70 were forced to stop work. After operation 70 were working without restriction or engaged in lighter work, 15 were at work but still restricted by angina, and only 32 were forced to stop work. This result was highly significant (P less than 0.001). These differences were even more pronounced in heavy manual workers, of whom none cobld work normally before operation, whereas 16 were working without restriction afterwards. Of patients wishing to engage in hobbies or sports, social activity, and sexual intercourse but were restricted before operation, about two-thirds could resume these activities afterwards. Coronary artery surgery provided dramatic symptomatic relief in up to 90% of patients and permitted rehabilitation and return to gainful employment irrespective of type of labour. The degree of symptomatic improvement and increase in exercise tolerance after successful surgery is usually far greater than occurs with any other form of treatment and directly improves quality of life and work capability. abstract_id: PUBMED:11326234 Sociodemographic and clinical factors associated with low quality of life one year after coronary bypass operations: the Israeli coronary artery bypass study (ISCAB). Objectives: We sought to examine the effect of sociodemographic characteristics and perioperative clinical factors 1 year after coronary bypass operations on low health-related quality of life. We also sought to assess the usefulness of an additional single question on overall health for identifying patients with low health-related quality of life. Methods: This report is part of the Israeli coronary artery bypass study of 1994, in which every patient undergoing isolated coronary bypass grafting in Israel was included. The target population for this report comprised all survivors beyond 1 year who were 45 to 65 years of age. Patients were interviewed before the operations. Self-administered questionnaires regarding health-related quality of life (SF-36) were sent to 1724 patients who were successfully located 1 year postoperatively, and 1270 questionnaires were completed. Low health-related quality of life was defined as the lowest tertile of the distribution of scores for the 2 summary components of the SF-36 and the single question on overall health. Logistic models were constructed for each of the 3 outcomes. Results: Female sex and low socioeconomic background were associated with low health-related quality of life in the logistic models. Other significant factors were symptoms of angina, sleep disturbances, hypertension, high severity of illness scores, hospital readmission, no rehabilitation, and hospitals with high perioperative mortality. Of the 3 study outcomes, the model for the single question on overall health was the most discriminating (C statistic = 0.76 vs 0.70 and 0.70, respectively). Conclusions: The study identifies patients who would most benefit from posthospitalization community support after bypass operations. Under circumstances of limited resources, these disadvantaged groups should be targeted as a priority. Encouraging participation in existing rehabilitation programs or introducing telephone hotlines could improve health-related quality of life after coronary bypass grafting without large investments. abstract_id: PUBMED:6768935 Return to work after coronary artery bypass operation. A societal cost-benefit argument has been made for the coronary artery bypass graft (CABG) operation. Most patients experience improvement in symptoms and many can return to a productive livelihood. To estimate the rate of return to work and identify the factors influencing that outcome, we analyzed the work status before and after operation in a follow-up of 105 patients undergoing CABG operations in a new teaching hospital. Overall, relief or improvement in angina was accomplished in 92% of patients, and there was a 10% net increase to th work force after operation. Of all variables studied, preoperative work status was found to be the most statistically significant predictor of the postoperative return to work; other factors associated with return to work included symptomatic relief or improvement, age, and educational level. Preoperative and postoperative means of support did not play a major role in determining work outcome. abstract_id: PUBMED:3499550 Resumption of employment following aortocoronary bypass operation Between April 1981 and May 1983, 921 patients with coronary sclerosis underwent a bypass operation. Following the operation, 327 of these patients were asked to give their professional status. These were compared with the surgical result and the clinical findings. During the 22-month follow-up period 47.1% were without work. 52.9% returned to work after an average of 3.3 months. The following factors played a significant role in the decision whether or not to return to work: 1) the age of the patient (p less than 0.001); 2) the degree of physical stress to which the patient was subjected in his job before the operation (p less than 0.001); 3) the severity of the postoperative angina pectoris symptoms (p less than 0.01); 4) the improvement in the output of the left ventricle (p less than 0.05) and 5) the participation in rehabilitation treatment (p less than 0.01). By reducing the preliminary investigation period, increasing the operation capacity and making more effective use of the rehabilitation programme, more people could return to work after the operation and this in turn would increase our patients' quality of life. abstract_id: PUBMED:10900673 Intensified after-care--a method for improving occupational reintegration after myocardial infarct and/or bypass operation Occupational reintegration after coronary heart disease of patients who are insured by the German workers pension (Arbeiterrentenversicherung) succeeds in 40-60% of the recorded cases. Patients who were not able to return to work after finishing their regular cardiac rehabilitation took part in a program called "Intensivierte Nachsorge (INA)". INA is an interdisciplinary support program, taking place twice a week for a whole day over a period of six weeks. On the remaining three days patients were either progressively reintegrated into their working place or stayed at home. The results of the INA group were compared to those of a control group. Two years after patients had terminated the cardiac rehabilitation, statistically significant effects could still be found: 70.2% of the INA group had returned to work compared to 52.6% of the control group. This is a difference of 17.6%. After the results had been corrected by considering age differences between the two groups, the control group had a recalculated return to work rate of 57.4%. A significant difference of 12.8% could still be identified with respect to the INA group. Support programs which follow regular cardiac rehabilitation seem to be specially suitable for older patients with highly perceived job strain, because our results showed that these patients had lower return to work rates. 44.2% of the INA group and 21.9% of the control group were progressively reintegrated into their work place. Two years after their regular rehabilitation 36.3% of the INA group patients took part in ambulatory heart groups compared with 10.4% of the control group. It was also found that patients of the INA group showed improvements in activities of daily life. The INA program however does not seem to have an influence on behavioral components such as eating habits, relaxation, and smoking as well as on the psychological status. The physical fitness measured in watt x min at the beginning of the INA program (T1) was 589.46 +/- 255.03 in the control group. This number increased to 598.32 +/- 276.01 six months after regular rehabilitation (T3) and continued to rise to 661.15 +/- 362.01 after two years (T4). In the INA group the numbers were as follows: 658.13 +/- 263.63 at T1, 751.83 +/- 318.15 at T3, and 717.93 +/- 336.76 at T4. The differences between the groups are significant at T1 and T3, whereas at T4 there is no significant difference. It should also be stated that the lipid parameters indicated no differences between the groups except for the triglyceride values which were significantly lower statistically in the INA group than in the control group. abstract_id: PUBMED:3263734 Return to work following myocardial infarct The aim of this retrospective study involving 471 patients was to investigate the percentage of patients who returned to work after acute myocardial infarction. In the group of 350 patients who did not undergo subsequent aortocoronary bypass operation 70% returned to work, whereby half of these patients started work again within 6 months after infarction. A significantly higher number of patients who were enrolled in a cardiac rehabilitation programme returned to work as compared with patients not participating in such a programme (72% vs. 59%, respectively; p less than 0.04). There was, however, no difference between these 2 groups with respect to either the time point of resumption of work after myocardial infarction or the duration of employment between infarction and eventual retirement. Of the 121 patients who underwent an aortocoronary bypass operation, only 38% returned to work (p less than 0.001 vs. patients without bypass surgery). abstract_id: PUBMED:19480760 Survival, clinical status and quality of life five years after coronary surgery. The ARCA study. Introduction And Objectives: Little is known about the long-term outcomes of coronary surgery and their determinants in Spain. The objectives of this study were to evaluate clinical outcomes, quality of life and survival in a cohort of patients 5 years after undergoing a first aortocoronary bypass operation without any other associated procedure. Methods: Patients who survived the operation and whose pre- and postoperative data had been collected prospectively were followed up by telephone interview after 5 years. Results: Information was available after 5 years on 1,300 (85.2%) of the 1,525 patients who survived until hospital discharge. Of these, 13.6% had died, while 24% had either died, undergone revascularization or were readmitted because of a cardiac complaint. The cumulative survival rate (excluding the period of hospitalization) was 0.87 (95% confidence interval, 0.85-0.89). Mortality varied significantly with the level of preoperative risk (i.e. the EuroSCORE), to the extent that mortality in the low-risk group was equivalent to that in the general reference population. Conclusions: Three-quarters of patients who survived until hospital discharge after coronary surgery did not experience a major cardiac event within 5 years and their level of functioning and quality of life were good. The survival rate after the immediate postoperative period varied according to the patient's preoperative risk profile and, in low-risk patients, was equivalent to that in the general reference population. abstract_id: PUBMED:1413963 The dynamics of aging and rehabilitation following coronary bypass operation The dynamics of aging encompass the topics "aging of society" and "coping with chronic illness". Both are discussed using data from a qualitative study on return to work after coronary artery bypass surgery. Data from the retrospective substudy reveal four forms of aging as biographical development toward early retirement, old-age retirement, or related stages. It emerges that the manner of aging is less influenced by medical rehabilitation or medical history, and more by the patient's occupation and socio-economic status. Case reports are used to show that social class is a facilitation context, but not a determining force, regarding the choice between postoperative retirement or return to work. The final section briefly discusses whether the findings concerning the dynamics of aging of the cohort operated on in the late 1970s apply also to that of the cohort operated on in the late 1980s; this is addressed using some data from the prospective sub-study. abstract_id: PUBMED:2642367 Cardiac transplantation in the seventh decade of life. Twenty-five patients older than 60 years of age underwent cardiac transplantation using an immunosuppression protocol with cyclosporin and azathioprine, but without routine use of oral steroids. There were 24 men and 1 woman (age range 60 to 69 years, mean 63). The etiology of heart disease was coronary artery disease in 21 and idiopathic dilated cardiomyopathy in 4. Six patients had previous coronary artery bypass operations, 1 had undergone repair of an abdominal aneurysm and 1 had pulmonary embolism. Sixteen patients were in New York Heart Association class IV and 9 in class III. Donor mean age was 30 (14 to 46) years. Hospital stay after transplantation was 10 to 90 days (median 11). Four died within 30 days and none from 5 to 59 months (mean 22). The 1-year actuarial survival was 84%. The incidence of rejection was 2.16 episodes per patient. Only 1 patient (4%) had serious infection. Six patients received antihypertensive treatment, 3 had reversible impairment of renal function, 2 had gout and 1 had drop foot. No patient had convulsions, transient ischemic attack or cerebrovascular accident. None had significant psychological problems. The 21 patients currently alive are in New York Heart Association class I. Quality of life, assessed by the Nottingham Health Profile, showed marked improvement. It is concluded that the initial results of cardiac transplantation in the seventh decade of life are encouraging. Answer: Yes, participation in cardiac rehabilitation programs is associated with better quality of life and return to work after coronary artery bypass operations. Studies have shown that patients who participate in cardiac rehabilitation programs have significantly higher scores in general health, physical functioning, and social functioning, as well as a higher rate of return to work (PUBMED:11433629). Additionally, coronary artery surgery has been found to provide dramatic symptomatic relief in up to 90% of patients, allowing for rehabilitation and return to gainful employment irrespective of the type of labor, which directly improves quality of life and work capability (PUBMED:316348). Furthermore, sociodemographic and clinical factors such as female sex, low socioeconomic background, symptoms of angina, sleep disturbances, hypertension, high severity of illness scores, hospital readmission, and no rehabilitation were associated with low health-related quality of life after coronary bypass operations. Encouraging participation in rehabilitation programs could improve health-related quality of life after coronary bypass grafting (PUBMED:11326234). Other studies have also indicated that factors such as age, the degree of physical stress in the patient's job before the operation, the severity of postoperative angina pectoris symptoms, improvement in the output of the left ventricle, and participation in rehabilitation treatment play a significant role in the decision to return to work after the operation (PUBMED:3499550). Intensified after-care programs, like the "Intensivierte Nachsorge (INA)" program, have also been shown to improve occupational reintegration after myocardial infarct and/or bypass operation (PUBMED:10900673). In summary, the evidence suggests that cardiac rehabilitation programs are beneficial for patients recovering from coronary artery bypass operations, leading to improved quality of life and increased likelihood of returning to work.
Instruction: Is it good to express uncertainty to a patient? Abstracts: abstract_id: PUBMED:19604663 Is it good to express uncertainty to a patient? Correlates and consequences for medical students in a standardized patient visit. Objective: To examine the consequences of expressions of uncertainty (EOUs) in medical student interactions, with a particular focus on the gender of the expressor. Methods: EOUs were identified in 147 videotaped interactions between third-year medical students and standardized patients enacting four medical scenarios. The encounters were also analyzed using the Roter Interaction Analysis System (RIAS). A subset of these interactions was shown to 244 analogue patients who rated satisfaction, liking, and perceived competence and confidence. Results: Female medical students used more EOUs but not when adjusted for total number of statements. The EOU rate varied across scenarios. A higher EOU rate was perceived as more affectively negative by both RIAS trained coders and analogue patients. EOU rate was inversely related to analogue patient satisfaction and liking and this relationship was stronger for males than females. Female student use of EOUs was more strongly associated with ratings of anxiety than was male EOU use. Conclusion: There was an overall negative perception of medical students who expressed uncertainty; however, the strength of the associations varied due to medical student gender. Practice Implications: The implications for medical education are discussed. abstract_id: PUBMED:12401420 Doctors expressions of uncertainty and patient confidence. Although recent work has encouraged doctors to express their uncertainty to patients as a means to improve communication the potential impact of this on patients remains unclear. The present study first explored the impact of the way in which uncertainty was expressed (behaviourally versus verbally) on doctor's and patient's beliefs about patient confidence. Second the study examined the role of the patient's personal characteristics and knowledge of their doctor as a means to address the broader context. Matched questionnaires were completed by GPs (n=66, response rate=92%) and patients (n=550, response rate=88%) from practices in the south-east of England. The results showed that the majority of GPs and patients viewed verbal expressions of uncertainty such as 'Let's see what happens' as the most potentially damaging to patient confidence and both GPs and patients believed that asking a nurse for advice would have a detrimental effect. In contrast, behaviours such as using a book or computer were seen as benign or even beneficial activities. When compared directly, GPs and patients agreed about behavioural expressions of uncertainty, but the patients rated the verbal expressions as more detrimental to their confidence than anticipated by the doctors. In terms of the context, patients who indicated that both verbal and behavioural expressions of uncertainty would have the most detrimental impact upon their confidence were younger, lower class and had known their GP for less time. To conclude, patients' reactions to uncertainty occur within the context of the patient's own background and experience, however, some consistent responses can be found. In particular, whilst behavioural expressions of uncertainty may have a positive impact upon patient confidence, verbal expressions have a consistently detrimental effect which is underestimated by GPs. abstract_id: PUBMED:28271278 Patient response to physician expressions of uncertainty: a systematic review. Background: The consequences of uncertainty are extensive with the potential to influence a multitude of issues such as patient expectations, diagnosis, treatment decisions, patient confidence and satisfaction and ultimately patient compliance. Aims: To review the relevant literature pertaining to physician expressions of uncertainty and its effects on patients. Methods: A search was conducted in July 2016 of electronic databases; PsycINFO, PsycARTICLES, MEDLINE and Embase. Key search terms included uncertainty, physician uncertainty, patient attitude, patient confidence and patient satisfaction. Other sources were obtained from reference lists. The eligibility of studies and data extracted were cross-checked with inclusion and exclusion criteria. Results: The search yielded 14 studies and 4 were included for review. These studies yielded mixed results. In some studies patients preferred disclosure of uncertainty while in others such disclosure was negatively associated with patient confidence and satisfaction. This is at odds with the belief of theorists that uncertainty disclosure to patients is the crux of a successful doctor-patient relationship. The studies were, however, united in so far as the way in which the uncertainty was communicated to the patient was pivotal to patient responses to uncertainty. Conclusions: Uncertainty is a ubiquitous feature of medicine and more research should examine how physician's expressions of uncertainty affect patients. abstract_id: PUBMED:36127538 Communication of Diagnostic Uncertainty in Primary Care and Its Impact on Patient Experience: an Integrative Systematic Review. Background: Diagnostic uncertainty is a pervasive issue in primary care where patients often present with non-specific symptoms early in the disease process. Knowledge about how clinicians communicate diagnostic uncertainty to patients is crucial to prevent associated diagnostic errors. Yet, in-depth research on the interpersonal communication of diagnostic uncertainty has been limited. We conducted an integrative systematic literature review (PROSPERO CRD42020197624, unfunded) to investigate how primary care doctors communicate diagnostic uncertainty in interactions with patients and how patients experience their care in the face of uncertainty. Methods: We searched MEDLINE, PsycINFO, and Linguistics and Language Behaviour Abstracts (LLBA) from inception to December 2021 for MeSH and keywords related to 'communication', 'diagnosis', 'uncertainty' and 'primary care' environments and stakeholders (patients and doctors), and conducted additional handsearching. We included empirical primary care studies published in English on spoken communication of diagnostic uncertainty by doctors to patients. We assessed risk of bias with the QATSDD quality assessment tool and conducted thematic and content analysis to synthesise the results. Results: Inclusion criteria were met for 19 out of 1281 studies. Doctors used two main communication strategies to manage diagnostic uncertainty: (1) patient-centred communication strategies (e.g. use of empathy), and (2) diagnostic reasoning strategies (e.g. excluding serious diagnoses). Linguistically, diagnostic uncertainty was either disclosed explicitly or implicitly through diverse lexical and syntactical constructions, or not communicated (omission). Patients' experiences of care in response to the diverse communicative and linguistic strategies were mixed. Patient-centred approaches were generally regarded positively by patients. Discussion: Despite a small number of included studies, this is the first review to systematically catalogue the diverse communication and linguistic strategies to express diagnostic uncertainty in primary care. Health professionals should be aware of the diverse strategies used to express diagnostic uncertainty in practice and the value of combining patient-centred approaches with diagnostic reasoning strategies. abstract_id: PUBMED:12472078 Patient autonomy and the challenge of clinical uncertainty. Bioethicists have articulated an ideal of shared decision making between physician and patient, but in doing so the role of clinical uncertainty has not been adequately confronted. In the face of uncertainty about the patient's prognosis and the best course of treatment, many physicians revert to a model of nondisclosure and nondiscussion, thus closing off opportunities for shared decision making. Empirical studies suggest that physicians find it more difficult to adhere to norms of disclosure in situations where there is substantial uncertainty. They may be concerned that acknowledging their own uncertainty will undermine patient trust and create additional confusion and anxiety for the patient. We argue, in contrast, that effective disclosure will protect patient trust in the long run and that patients can manage information about uncertainty. In situations where there is substantial uncertainty, extra vigilance is required to ensure that patients are given the tools and information they need to participate in cooperative decision making about their care. abstract_id: PUBMED:35111482 Improve Communicating Uncertainty in Intensive Care Unit With Patient and Family (ICU-PF). Communicating uncertainty with patients and families in the intensive care unit is challenging and requires time and skill to convey the information. This proposal aims to provide a structured path for identifying and communicating uncertainty with patients and families in the unit. The focus is to improve the quality of care and timely communication to meet the expectations and needs of families and patients. The project focuses on the first 24 hours of intensive care unit admission to improve communication of uncertainty. By utilizing the Plan-Do-Study-Act cycle, the workflow uses a screening tool to identify uncertainty and communicate using evidence-based recommendations and the mnemonic VALUE (Value family statements, Acknowledge emotions, Listen, Understand the patient as a person, Elicit questions) as the standard of care. The workflow can be incorporated during the routine rounds as part of the A-F liberation bundle. The outcome is to improve patient and family satisfaction scores using a validated Family Satisfaction with Care in the Intensive Care Unit (FS-ICU 24) questionnaire to achieve a score of 75 or more, which correlates with very good. Challenges and limitations are discussed in the proposal. abstract_id: PUBMED:32424175 Patient experience of uncertainty in cancer genomics: a systematic review. While genomics provides new clinical opportunities, its complexity generates uncertainties. This systematic review aimed to summarize what is currently known about the experience of uncertainty for adult patients undergoing cancer genomic testing. A search of five databases (2001 to 2018) yielded 6508 records. After removing duplicates, abstract/title screening, and assessment of full articles, ten studies were included for quality appraisal and data extraction. Qualitative studies were subjected to thematic analysis, and quantitative data were summarized using descriptive statistics. Cancer genomic results reduced uncertainty for patients regarding treatment decisions but did not reduce uncertainty in the risk context. Qualitative and quantitative data synthesis revealed four themes: (1) coexisting uncertainties, (2) factors influencing uncertainty, (3) outcomes of uncertainty, and (4) coping with uncertainty. Uncertainty can motivate, or be a barrier to, pursuing cancer genomic testing. Appraisal of uncertainty influences the patient experience of uncertainty, the outcome of uncertainty for patients, as well as the coping strategies utilized. While this systematic review found that appraisal of uncertainty is important to the patients' experience of uncertainty in the cancer genomic context, more mixed methods longitudinal research is needed to address the complexities that contribute to patient uncertainty across the process. abstract_id: PUBMED:34312032 Patient and clinician experiences of uncertainty in the diagnostic process: Current understanding and future directions. Objective: Uncertainty occurs throughout the diagnostic process and must be managed to facilitate accurate and timely diagnoses and treatments. Better characterization of uncertainty can inform strategies to manage it more effectively in clinical practice. We provide a comprehensive overview of current literature on diagnosis-related uncertainty describing (1) where patients and clinicians experience uncertainty within the diagnostic process, (2) how uncertainty affects the diagnostic process, (3) roots of uncertainty related to probability/risk, ambiguity, or complexity, and (4) strategies to manage uncertainty. Discussion: Each diagnostic process step involves uncertainty, including patient engagement with the healthcare system; information gathering, interpretation, and integration; formulating working diagnoses; and communicating diagnoses to patients. General management strategies include acknowledging uncertainty, obtaining more contextual information from patients (e.g., gathering occupations and family histories), creating diagnostic safety nets (e.g., informing patients what red flags to look for), engaging in worst case/best case scenario planning, and communicating diagnostic uncertainty to patients, families, and colleagues. Potential strategies tailored to various aspects of diagnostic uncertainty are also outlined. Conclusion: Scientific knowledge on diagnostic uncertainty, while previously elusive, is now becoming more clearly defined. Next steps include research to evaluate relationships between management and communication of diagnostic uncertainty and improved patient outcomes. abstract_id: PUBMED:16731728 Uncertainty, responsibility, and the evolution of the physician/patient relationship. The practice of evidence based medicine has changed the role of the physician from information dispenser to gatherer and analyser. Studies and controlled trials that may contain unknown errors, or uncertainties, are the primary sources for evidence based decisions in medicine. These sources may be corrupted by a number of means, such as inaccurate statistical analysis, statistical manipulation, population bias, or relevance to the patient in question. Regardless of whether any of these inaccuracies are apparent, the uncertainty of their presence in physician information should be disclosed to the patient. These uncertainties are not, however, shared by physicians with patients, and have caused a direct increase in patient responsibilities and mistrust. Only when disclosure of uncertainty becomes commonplace in medical practice will the physician/patient relationship evolve to a level of greater understanding and satisfaction for both the physician and patient. abstract_id: PUBMED:38224480 The Use of Social Media to Express and Manage Medical Uncertainty in Dyskeratosis Congenita: Content Analysis. Background: Social media has the potential to provide social support for rare disease communities; however, little is known about the use of social media for the expression of medical uncertainty, a common feature of rare diseases. Objective: This study aims to evaluate the expression of medical uncertainty on social media in the context of dyskeratosis congenita, a rare cancer-prone inherited bone marrow failure and telomere biology disorder (TBD). Methods: We performed a content analysis of uncertainty-related posts on Facebook and Twitter managed by Team Telomere, a patient advocacy group for this rare disease. We assessed the frequency of uncertainty-related posts, uncertainty sources, issues, and management and associations between uncertainty and social support. Results: Across all TBD social media platforms, 45.98% (1269/2760) of posts were uncertainty related. Uncertainty-related posts authored by Team Telomere on Twitter focused on scientific (306/434, 70.5%) or personal (230/434, 53%) issues and reflected uncertainty arising from probability, ambiguity, or complexity. Uncertainty-related posts in conversations among patients and caregivers in the Facebook community group focused on scientific (429/511, 84%), personal (157/511, 30.7%), and practical (114/511, 22.3%) issues, many of which were related to prognostic unknowns. Both platforms suggested uncertainty management strategies that focused on information sharing and community building. Posts reflecting response-focused uncertainty management strategies (eg, emotional regulation) were more frequent on Twitter compared with the Facebook community group (χ21=3.9; P=.05), whereas posts reflecting uncertainty-focused management strategies (eg, ordering information) were more frequent in the Facebook community group compared with Twitter (χ21=55.1; P<.001). In the Facebook community group, only 36% (184/511) of members created posts during the study period, and those who created posts did so with a low frequency (median 3, IQR 1-7 posts). Analysis of post creator characteristics suggested that most users of TBD social media are White, female, and parents of patients with dyskeratosis congenita. Conclusions: Although uncertainty is a pervasive and multifactorial issue in TBDs, our findings suggest that the discussion of medical uncertainty on TBD social media is largely limited to brief exchanges about scientific, personal, or practical issues rather than ongoing supportive conversation. The nature of uncertainty-related conversations also varied by user group: patients and caregivers used social media primarily to discuss scientific uncertainties (eg, regarding prognosis), form social connections, or exchange advice on accessing and organizing medical care, whereas Team Telomere used social media to express scientific and personal issues of uncertainty and to address the emotional impact of uncertainty. The higher involvement of female parents on TBD social media suggests a potentially greater burden of uncertainty management among mothers compared with other groups. Further research is needed to understand the dynamics of social media engagement to manage medical uncertainty in the TBD community. Answer: The question of whether it is good to express uncertainty to a patient is complex and context-dependent. The literature presents mixed findings on the consequences of physicians expressing uncertainty. On one hand, some studies suggest that expressions of uncertainty (EOUs) by medical students were perceived negatively, with higher EOU rates inversely related to patient satisfaction and liking. This negative perception was stronger for male students than female students, and female students' use of EOUs was more strongly associated with ratings of anxiety (PUBMED:19604663). Similarly, verbal expressions of uncertainty were seen as potentially damaging to patient confidence, with patients rating these expressions as more detrimental than doctors anticipated. Patients who were younger, lower class, and had known their GP for less time were more likely to view uncertainty negatively (PUBMED:12401420). However, other studies indicate that the way uncertainty is communicated is pivotal to patient responses. Some patients preferred disclosure of uncertainty, while others found it negatively associated with confidence and satisfaction. The belief that uncertainty disclosure is central to a successful doctor-patient relationship is not universally supported, and more research is needed to understand how expressions of uncertainty affect patients (PUBMED:28271278). In primary care, doctors use patient-centered communication strategies and diagnostic reasoning strategies to manage diagnostic uncertainty. Patients' experiences of care in response to these strategies were mixed, with patient-centered approaches generally regarded positively (PUBMED:36127538). Effective disclosure of uncertainty can protect patient trust in the long run, and patients can manage information about uncertainty (PUBMED:12472078). In the context of intensive care, structured communication of uncertainty aims to improve the quality of care and meet the expectations and needs of families and patients (PUBMED:35111482). In cancer genomics, uncertainty can motivate or be a barrier to pursuing testing, with appraisal of uncertainty influencing patient experience, outcomes, and coping strategies (PUBMED:32424175). Overall, while uncertainty is inherent in medicine, the literature suggests that the impact of expressing uncertainty to patients varies. It is important for healthcare professionals to be aware of the diverse strategies used to express uncertainty and to combine patient-centered approaches with diagnostic reasoning strategies. Further research is needed to evaluate the relationship between the management and communication of diagnostic uncertainty and improved patient outcomes (PUBMED:34312032, PUBMED:16731728). In summary, expressing uncertainty to patients can have both positive and negative effects, and the appropriateness of doing so may depend on factors such as the patient's background, the physician's communication style, and the context of the medical situation.
Instruction: Do depressive symptoms on hospital admission impact early functional outcome in elderly patients with hip fracture? Abstracts: abstract_id: PUBMED:24954835 Do depressive symptoms on hospital admission impact early functional outcome in elderly patients with hip fracture? Background: Depression is the most common mood disorder in elderly people and one of the most prevalent comorbidities in older people with hip fracture. While several authors have confirmed that depressive symptoms assessed at a later stage after hip fracture impact functional outcome and mortality, the role of depressive symptoms identified at an earlier stage after hip fracture remains understudied. The aim of the present study was to determine if depressive symptoms assessed on hospital admission impact early functional outcome after hip fracture surgery. Methods: We studied 112 patients who underwent surgery for hip fracture during a 6-month period. Depressive symptoms were assessed using the 30-item Geriatric Depression Scale on admission to the acute setting. Multidimensional assessment included sociodemographic characteristics, general health status, cognitive status, functional status prior to injury, and perioperative variables. The primary outcome measure was motor Functional Independence Measure at discharge. Results: Adjusted multivariate regression analysis revealed that the presence of moderate to severe depressive symptoms (Geriatric Depression Scale ≥ 20), older age, and female gender were independently related to motor Functional Independence Measure at discharge. Conclusion: Increasing levels of depressive symptoms in elderly hip fracture patients influence short-term functional outcome. We strongly support the introduction of routine assessment of this baseline comorbidity, especially in female patients. Failure to identify such patients is a missed opportunity for possible improvement of early functional outcome after hip fracture in elderly. abstract_id: PUBMED:10995051 Delirium on hospital admission in aged hip fracture patients: prediction of mortality and 2-year functional outcomes. Background: Hip fracture patients are at increased risk of confusion or delirium due to the trauma associated with the injury and the rapid progression to hospitalization and surgery, in addition to the pain and loss of function experienced. Hip fracture patients who develop delirium may require longer hospital stays, are more often discharged to long-term care, and have a generally poor prognosis for returning home or regaining function in activities of daily living (ADL). Methods: The present study examines the impact of delirium present on hospital admission in a sample of 682 non-demented, aged hip fracture patients residing in the community at the time of their fracture. In-hospital assessments designed to assess both prefracture and postfracture functioning, as well as follow-up interviews at 2, 6, 12, 18, and 24 months postfracture, were obtained from participants. Results: Analyses indicate that baseline or admission delirium is an important prognostic predictor of poor long-term outcomes in persons without known cognitive impairment, after controlling for age, gender, race, comorbidity, and functional status. Delirium at admission (i.e., prior to surgery) was associated with poorer functioning in physical, cognitive, and affective domains at 6 months postfracture and slower rates of recovery. Impairment and delays in recovery may be further exacerbated by increased depressive symptoms in confused patients over time. Delirium on hospital admission was not a significant predictor of mortality after adjustment for confounding factors. Conclusions: The present findings further emphasize the significance of immediate detection and treatment of delirium in hip fracture patients to ameliorate the short and long-term effects of acute confusion on functional outcomes. abstract_id: PUBMED:23651760 The neuropsychological sequelae of delirium in elderly patients with hip fracture three months after hospital discharge. Background: Delirium is a risk factor for long-term cognitive impairment and dementia. Yet, the nature of these cognitive deficits is unknown as is the extent to which the persistence of delirium symptoms and presence of depression at follow-up may account for the association between delirium and cognitive impairment at follow-up. We hypothesized that inattention, as an important sign of persistent delirium and/or depression, is an important feature of the cognitive profile three months after hospital discharge of patients who experienced in-hospital delirium. Methods: This was a prospective cohort study. Fifty-three patients aged 75 years and older were admitted for surgical repair of acute hip fracture. Before the surgery, baseline characteristics, depressive symptomatology, and global cognitive performance were documented. The presence of delirium was assessed daily during hospital admission and three months after hospital discharge when patients underwent neuropsychological assessment. Results: Of 27 patients with in-hospital delirium, 5 were still delirious after three months. Patients with in-hospital delirium (but free of delirium at follow-up) showed poorer performance than patients without in-hospital delirium on tests of global cognition and episodic memory, even after adjustment for age, gender, and baseline cognitive impairment. In contrast, no differences were found on tests of attention. Patients with in-hospital delirium showed an increase of depressive symptoms after three months. However, delirium remained associated with poor performance on a range of neuropsychological tests among patients with few or no signs of depression at follow-up. Conclusion: Elderly hip fracture patients with in-hospital delirium experience impairments in global cognition and episodic memory three months after hospital discharge. Our results suggest that inattention, as a cardinal sign of persistent delirium or depressive symptomatology at follow-up, cannot fully account for the poor cognitive outcome associated with delirium. abstract_id: PUBMED:23194775 Affective functioning after delirium in elderly hip fracture patients. Background: Delirium in elderly patients is associated with various long-term sequelae that include cognitive impairment and affective disturbances, although the latter is understudied. Methods: For a prospective cohort study of elderly patients undergoing hip fracture surgery, baseline characteristics and affective and cognitive functioning were assessed preoperatively. During hospital admission, presence of delirium was assessed daily. Three months after hospital discharge, affective and global cognitive functioning was evaluated again in patients free from delirium at the time of this follow-up. This study compared baseline characteristics and affective functioning between patients with and without in-hospital delirium. We investigated whether in-hospital delirium is associated with increased anxiety and depressive levels, and post-traumatic stress disorder (PTSD) symptoms three months after discharge. Results: Among 53 eligible patients, 23 (43.4%) patients experienced in-hospital delirium after hip fracture repair. Patients who had experienced in-hospital delirium showed more depressive symptoms at follow-up after three months compared to the 30 patients without in-hospital delirium. This association persisted in a multivariate model controlling for age, baseline cognition, baseline depressive symptoms, and living situation. The level of anxiety and symptoms of PTSD at follow-up did not differ between both groups. Conclusion: This study suggests that in-hospital delirium is associated with an increased burden of depressive symptoms three months after discharge in elderly patients who were admitted to the hospital for surgical repair of hip fracture. Symptoms of depression in patients with previous in-hospital delirium cannot be fully explained by persistent (sub)syndromal delirium or baseline cognitive impairment. abstract_id: PUBMED:10608650 Characterization of elderly patients in rehabilitation: stroke versus hip fracture. Purpose: To compare the characteristics of elderly patients hospitalized for rehabilitation following stroke with those following hip fracture (HF). Methods: A prospective study in a geriatrics department of a general university hospital in southern Israel. Five hundred and sixteen hospitalized elderly patients were included in the study, 221 following stroke and 295 following HF. The characteristics were compared by univariate and logistic regression analyses. Results: The mean age (+/-SD) of the stroke patients was 71.7+/-7.8 years compared to 77.4+/-7.9 for HF (p < 0.000001). Fifty-three per cent of the stroke patients were women compared to 76% of the HF patients (p < 0.000001). Stroke patients had significantly lower levels of folic acid (p = 0.00002). HF patients had more hearing and visual impairments (p = 0.008 and p = 0.017, respectively), but these were related to age differences between the groups. The Folstein Minimental test result was significantly higher in the HF group (p = 0.002). There were no differences in the symptoms of depression score as measured by geriatric depression screening scale. The Functional Independent Measure scale showed a higher pre-event functional capacity among the stroke patients (p < 0.000001), but there was no difference in this scale on admission to rehabilitation or upon discharge. Conclusions: There is a difference in the nature of the stroke and HF events. When either event involves an elderly patient with a broad range of limitations and diseases, a new medical condition develops. In this condition the symptoms of depression and the functional state at admission and upon discharge are not significantly different between these groups of patients. abstract_id: PUBMED:27530770 Depression Predicts Functional Outcome in Geriatric Inpatient Rehabilitation. Objective: To evaluate the effect of depression on functional recovery in geriatric patients who have completed an inpatient rehabilitation program. Design: Prospective cohort study. Setting: Inpatient rehabilitation unit of a university-affiliated geriatric hospital. Participants: Convenience sample of patients (N=65; mean age, 81.6y; 25 men) admitted to rehabilitation over a 10-month period. Patients >60 years of age who were proficient in English and capable of providing informed consent were eligible to participate in the study. Interventions: Depression was assessed using both the Geriatric Depression Scale-short form (GDS-15) and the Patient Health Questionnaire (9-item screen for depression) (PHQ-9). Measures of well-established predictors of rehabilitation outcome, which may interact with depression, were also obtained, and multiple regression linear modeling was used to evaluate the relation between depression and functional outcome over and above the contribution of these other factors. Main Outcome Measure: FIM (Functional Independence Measure) at discharge from the rehabilitation program. Results: Depression, as assessed by the GDS-15, but not the PHQ-9, was predictive of functional outcome (standardized beta=-.151, P=.030) after controlling for other significant predictors, which included baseline disability, pain, cognition, and educational level. Participation in recreational, but not physio- or occupational, therapy additionally contributed to a small amount of variance in the functional outcome. Conclusions: Our findings suggest that self-report of depression is an independent predictor of functional outcome in high-tolerance, short-duration geriatric rehabilitation. Routine assessment of depressive symptoms in older adults using an instrument (eg, GDS-15) may help identify those at risk for poorer outcomes in rehabilitation. abstract_id: PUBMED:22783335 Connections between the outcomes of osteoporotic hip fractures and depression, delirium or dementia in elderly patients: rationale and preliminary data from the CODE study. Background: osteoporosis, depression and other neuro-psychiatric disorders are very common after 50 years of age. Although these conditions recognize several and specific etiologic factors, they however appear to share physiologic, environmental processes and risk factors which may explain their possible association. Methods: we have built up a specific research project (the CODE study, Connections between the outcomes of osteoporotic hip fractures and depression, delirium or dementia in elderly patients), and carried out a preliminary survey on 55 hip fractured elderly patients (42 women, mean age 85 years old and 13 men, mean age 82 years old), hospitalized at SS. Annunziata hospital in Florence from July to September 2010. Results: there was a significant difference (p=0.010) in the functional recovery after surgery (as measured by Cumulated Ambulation Score, CAS) between depressed and non-depressed subjects (n=38), with a worse recovery and a lower CAS score in depressed patients (n=17). We also observed a higher prevalence of depression in the osteoporotic-fragile elderly people (69,1% of total sample). Conclusion: our preliminary survey has validated the suitability of the CODE study protocol in assessing connections between outcomes of osteoporotic hip fractures and depression in elderly patients, fostering the extension of the study (and suggesting also the inclusion of delirium and dementia) within a multicentric prospective study aimed to provide specific information and guidelines for osteoporotic fractured patients with concomitant depression or other neuro-psychiatric disorders. abstract_id: PUBMED:21098965 Depressive symptoms and one year mortality among elderly patients discharged from a rehabilitation ward after orthopaedic surgery of the lower limbs. The objective of the present prospective observational study is to evaluate the effect of depressive symptoms on 1-year mortality in a population of elderly patients discharged from a rehabilitation unit after orthopaedic surgery of the lower limbs. A total of 222 elderly inpatients were included, and stratified according to 12-months survival. 14 (6.3%) of the patients who were eligible for this study died during the 12-months period after discharge. As expected, patients who died were significantly older, lower cognitive performance, more depressive symptoms, poorer nutritional status and higher comorbidity in comparison to those who survived. Furthermore, they were generally more functionally dependent on admission to the Department, had worse functional recovery and were more disable at discharge, although a longer length of stay comparing to survived patients. In the adjusted logistic regression model, after adjustment for possible confounders and covariates, the presence of severe depressive symptoms significantly predicted a four-fold risk of death at 12 months. The only other factor associated poor 12-months survival was comorbidity, that predicted a 6-fold risk of death. In conclusions this study suggests that severe depressive symptoms on admission predicts 1-year mortality in elderly patients discharged from a post-acute care unit after orthopaedic rehabilitation. abstract_id: PUBMED:24054308 Orthopedic Multidimensional Prognostic Index (Ortho-MPI) in the elderly with hip or neck femur fracture: a pilot study. The Orthopedic Multidimensional Prognostic Index (Ortho-MPI) was performed and validated in order to ameliorate the decision-making process as regards the elderly with hip or neck femur fractures. A retrospective study was performed. 95 patients 65 years old and over with a diagnosis of hip or femur fracture were enrolled. A standardized comprehensive orthopedic geriatric assessment was performed. It included information on: depressive symptoms, functional and instrumental activities of daily living, cognitive and nutritional status, laboratory tests, risk of pressure sore, comorbidities and comorbidity. The Ortho-MPI was calculated. After six months their initial assessment, patients were recalled in order to know if they live too or not. The survival condition was associated to the prognostic capacity calculated by the Ortho-MPI. Results showed that higher Ortho-MPI Index value was associated with higher six months-later mortality. In an unvaried analysis model the Ortho-MPI index was associated with death event of the elderly patients enrolled (OR=1.05; 95% CI, 1.01-1.10; z=2.27; p=0.023). This association was also validated by considering different ages between participants (OR=1.05; 95% CI, 1.004-1.11; z=2.13; p=0.033). Furthermore, each specific index considered in the total Ortho-MPI was associated with the death event of the elderly patients. In conclusion it was shown that the Ortho-MPI Index could be used to predict outcome in the elderly with hip or femur fracture. abstract_id: PUBMED:10440975 Geriatric Depression Screening Scale (GDS) in patients hospitalized for physical rehabilitation. Objective: To determine the prevalence of symptoms of depression and the factors affecting their presence in an elderly population at the start of rehabilitation. To assess changes in the severity of these symptoms during rehabilitation and the correlation between these changes and corresponding changes in the patient's functional state. Design: A population-based prospective study. Setting: A geriatric ward in a general university hospital in southern Israel. Participants: Two hundred and seventy-six elderly patients hospitalized for physical rehabilitation, 150 following hip fracture (HF) and 126 after stroke. Measurements: Symptoms of depression were measured by the Geriatric Depression Screening Scale (GDS). The functional state was assessed using the FIM scale. A broad spectrum of clinical, functional, social and demographic variables was measured using conventional tests. The association between the GDS and these variables was tested by stepwise multiple regression. Results: One hundred and thirteen patients (41%) showed signs of depressions (GDS>10), with 12 (4%) patients having severe symptoms (GDS>20). No significant difference was found between HF and stroke patients in symptoms of depression. Only four of the 41 variables tested were found to be significantly and independently associated with the GDS: pre-event functional state (beta=-0.311, p<0.001), the self-care component of the FIM scale on admission to the hospital (beta=-0.267, p<0.001), living alone (beta=0.149, p=0.015) and impaired visual acuity (beta=0.137, p=0.026). The total variance in GDS accounted for by these four variables (adjusted R-square) was 0. 24. The severity of depression symptoms decreased significantly during rehabilitation and the GDS at discharge was significantly lower than on admission (p=0.008). This change correlated significantly with the corresponding change in functional state (R=-0.15, p=0.03). Conclusions: Symptoms of depression are common in elderly patients beginning rehabilitation. These symptoms are affected independently, and almost exclusively, by the functional state of the patient, both prior to the event and after its occurrence. The depressed condition improves towards the end of hospitalization and the degree of improvement is correlated with the corresponding change in the patient's functional state. Answer: Yes, depressive symptoms on hospital admission do impact early functional outcome in elderly patients with hip fracture. A study found that the presence of moderate to severe depressive symptoms, as assessed using the 30-item Geriatric Depression Scale on admission, was independently related to motor Functional Independence Measure at discharge, along with older age and female gender (PUBMED:24954835). This suggests that increasing levels of depressive symptoms in elderly hip fracture patients influence short-term functional outcome, and routine assessment of this baseline comorbidity is recommended, especially in female patients. Additionally, depressive symptoms have been shown to predict functional outcome in geriatric inpatient rehabilitation. Depression, as assessed by the Geriatric Depression Scale-short form (GDS-15), was predictive of functional outcome after controlling for other significant predictors, including baseline disability, pain, cognition, and educational level (PUBMED:27530770). Furthermore, depressive symptoms on admission have been associated with a four-fold risk of death at 12 months in elderly patients discharged from a post-acute care unit after orthopaedic rehabilitation (PUBMED:21098965). In the context of hip fractures, depressive symptoms have also been associated with worse functional recovery after surgery, with depressed patients showing a lower Cumulated Ambulation Score (PUBMED:22783335). Moreover, symptoms of depression are common in elderly patients beginning rehabilitation and are affected independently by the patient's functional state both prior to the event and after its occurrence (PUBMED:10440975). Overall, these findings underscore the importance of assessing and addressing depressive symptoms in elderly hip fracture patients to improve their early functional outcomes and potentially reduce the risk of mortality.
Instruction: Does practice modify the relationship between postural control and the execution of a secondary task in young and older individuals? Abstracts: abstract_id: PUBMED:15114037 Does practice modify the relationship between postural control and the execution of a secondary task in young and older individuals? Background: Numerous daily activities require performing more than one task simultaneously, such as standing while engaging in a conversation. Recent studies have shown that postural control may be degraded when individuals are asked to perform a secondary task and that this effect seems to be more pronounced in older adults. Since various types of secondary tasks have been used in postural control studies, the novelty of the tasks may partly explain why dual-task interference occurs. It is known that novel tasks require greater attentional resources and thus may interfere to a greater extent with the performance of another task. Therefore, by practicing this dual-task situation, interference could perhaps be diminished. Since the dual-tasking efficiency is reduced with aging, practice could be very beneficial to older adults. Objectives: The main goal of this study was to examine whether practice could modify the changes seen in postural sway when individuals are asked to perform a secondary task while maintaining upright stance and whether older individuals could benefit to a greater extent from practice than would young individuals. The second goal was to examine the dual-task performance in young versus older adults and to determine whether older individuals benefit equally or to a greater extent from practice as compared with young individuals. Methods: Young and older individuals were asked to stand on a force platform while performing a secondary task or no task. The secondary task condition was repeated six times to examine the effects of practice. Results: Practice did not modify the performance of postural sway, but did lead to an increase in speed of execution of the secondary task for both groups equally. In young participants, the amplitude of sway was decreased, and the frequency of sway was increased, indicating an increased stiffness when performing the cognitive task. Older participants showed increased amplitude of sway and increased frequency of sway in the mediolateral direction only. Conclusions: Since the dual-task condition was only repeated six times, it could be hypothesized that the effect of practice would be greater, if more trials were added or if more practice sessions were included. More research is needed to verify this hypothesis. abstract_id: PUBMED:28570861 Cognitive tasks promote automatization of postural control in young and older adults. Researchers looking at the effects of performing a concurrent cognitive task on postural control in young and older adults using traditional center-of-pressure measures and complexity measures found discordant results. Results of experiments showing improvements of stability have suggested the use of strategies such as automatization of postural control or stiffening strategy. This experiment aimed to confirm in healthy young and older adults that performing a cognitive task while standing leads to improvements that are due to automaticity of sway by using sample entropy. Twenty-one young adults and twenty-five older adults were asked to stand on a force platform while performing a cognitive task. There were four cognitive tasks: simple reaction time, go/no-go reaction time, equation and occurrence of a digit in a number sequence. Results demonstrated decreased sway area and variability as well as increased sample entropy for both groups when performing a cognitive task. Results suggest that performing a concurrent cognitive task promotes the adoption of an automatic postural control in young and older adults as evidenced by an increased postural stability and postural sway complexity. abstract_id: PUBMED:29310079 Differential effects of a visuospatial attention task on measures of postural control in young and older adults. The purpose of this study was to examine the influence of a visuospatial attention task on three measures of postural control in young and older adults. 20 young (19-36  years) and 20 older (67-91 years) adults performed a choice stepping response time (CSRT) task, a submaximal dorsiflexion force steadiness task, and quiet standing in 3 bilateral stances. All tasks were performed with and without a visuospatial (VS) attention task that involved visualizing a star moving within a 2 × 2 grid. CSRT increased with the addition of the VS task in both groups (p  < .001), with a larger increase for older adults than young adults (p < .001). Older adults were less steady while performing the dorsiflexion task with the VS task (p  < .001), while the VS task did not influence steadiness in young adults (p = .235). Performance during quiet standing was not influenced by the VS task in any stance (p  > .084). The findings suggest that visuospatial attention differentially affects postural control in young and older adults and the effect is task-specific. These findings suggest the need to include stepping and force control tasks to further determine what role visuospatial attention plays in postural control. abstract_id: PUBMED:30414089 Cognitive task modality influences postural control during quiet standing in healthy older adults. Background: The interstimulus interval of a cognitive task was found to have a limited effect on postural control in young adults, while visual cognitive tasks were found to improve stability compared to auditory tasks. It is of interest to investigate whether postural control in healthy older adults is sensitive to these types of cognitive task manipulations. Aims: The objectives of the present experiment were to evaluate the impact of interstimulus interval and modality of a continuous cognitive task on postural control in healthy older adults. Methods: Fifteen healthy older adults (70 ± 3.2 years, 3 male) were asked to stand with feet together on a force platform while performing auditory and visual cognitive tasks performed with interstimulus intervals of 2 and 5 s. Results: Visual tasks led to reductions in sway area and sway variability in the anterior-posterior direction compared to auditory tasks (ps ≤ 0.05). The interstimulus interval did not lead to a change in sway, except for a small change in the medial-lateral direction for the 2-s interval compared to the 5-s interval (p = 0.05). Discussion And Conclusions: Results suggest that the interstimulus interval had a very limited effect on postural sway. The modality of the cognitive task had a greater effect on postural sway, as visual cognitive tasks yielded smaller sway area and anterior-posterior sway variability than auditory conditions. Visual stimuli may have acted as an anchor, yielding reduced sway. abstract_id: PUBMED:30634135 Subjective stability perception is related to postural anxiety in older subjects. Background: Under static conditions, the objective and subjective measures of postural stability correlate well. However, age-related changes in postural control and task-related anxiety may modify the relationship between these subjective and objective measures. Ultimately, patients' symptoms represent subjective reports, thus understanding this relationship has clinical implications. Aims: This study investigates the relationship between subjective-objective measures of postural stability in dynamic conditions and whether this relationship is influenced by age or task-related anxiety. Methods: 50 healthy participants (aged 18-83 years) stood on a platform oscillating at variable amplitudes, with-without a fall-preventing harness to modulate task-related anxiety. Trunk sway path, hip velocity and foot lifts (objective measures) and subjective scores of instability and task-related anxiety were recorded. Results: The subjective perception of stability accurately matched objective body sway, following a logarithmic function profile (r2 = 0.72, p < 0.001). This function did not change significantly with age, harness or task presentation order. A strong relationship was observed between subjective measures of stability and task-related anxiety for all subjects (r = 0.81, p < 0.001). Task repetition reduced anxiety in the young, uncoupling anxiety changes from subjective instability, but not in the elderly who retained higher anxiety levels in line with subjective unsteadiness. Discussion: Subjects accurately rate their own instability during dynamic postural challenges, irrespective of age and actual fall risk. However, anxiety may selectively modulate the perception of instability in older subjects. The perception of stability relies upon the integration of sensory afferents but also recruits emotional-cognitive processes, particularly in older individuals. The use of a safety harness has no influence on subjective or objective postural stability. abstract_id: PUBMED:23352618 Intermittent use of an "anchor system" improves postural control in healthy older adults. Haptic information, provided by a non-rigid tool (i.e., an "anchor system"), can reduce body sway in individuals who perform a standing postural task. However, it was not known whether or not continuous use of the anchor system would improve postural control after its removal. Additionally, it was unclear as to whether or not frequency of use of the anchor system is related to improved control in older adults. The present study evaluated the effect of the prolonged use of the anchor system on postural control in healthy older individuals, at different frequencies of use, while they performed a postural control task (semi-tandem position). Participants were divided into three groups according to the frequency of the anchor system's use (0%, 50%, and 100%). Pre-practice phase (without anchor) was followed by a practice phase (they used the anchor system at the predefined frequency), and a post-practice phase (immediate and late-without anchor). All three groups showed a persistent effect 15min after the end of the practice phase (immediate post-practice phase). However, only the 50% group showed a persistent effect in the late post-practice phase (24h after finishing the practice phase). Older adults can improve their postural control by practicing the standing postural task, and use of the anchor system limited to half of their practice time can provide additional improvement in their postural control. abstract_id: PUBMED:27477709 Active vision task and postural control in healthy, young adults: Synergy and probably not duality. In upright stance, individuals sway continuously and the sway pattern in dual tasks (e.g., a cognitive task performed in upright stance) differs significantly from that observed during the control quiet stance task. The cognitive approach has generated models (limited attentional resources, U-shaped nonlinear interaction) to explain such patterns based on competitive sharing of attentional resources. The objective of the current manuscript was to review these cognitive models in the specific context of visual tasks involving gaze shifts toward precise targets (here called active vision tasks). The selection excluded the effects of early and late stages of life or disease, external perturbations, active vision tasks requiring head and body motions and the combination of two tasks performed together (e.g., a visual task in addition to a computation in one's head). The selection included studies performed by healthy, young adults with control and active - difficult - vision tasks. Over 174 studies found in Pubmed and Mendeley databases, nine were selected. In these studies, young adults exhibited significantly lower amplitude of body displacement (center of pressure and/or body marker) under active vision tasks than under the control task. Furthermore, the more difficult the active vision tasks were, the better the postural control was. This underscores that postural control during active vision tasks may rely on synergistic relations between the postural and visual systems rather than on competitive or dual relations. In contrast, in the control task, there would not be any synergistic or competitive relations. abstract_id: PUBMED:20022754 Effects of a secondary task on postural control in children with Tourette syndrome. Tourette syndrome (TS) is a neurodevelopmental disorder characterized by involuntary motor and vocal tics. Sub-clinical postural control anomalies have recently been reported in children with TS. The goal of the present study was to determine whether these anomalies interact with attention in postural control. Thirty-two younger (below 10 years) and 21 older (above 10 years) children with TS were compared to 13 younger and 15 older age-matched controls. Postural control was examined during standing with and without a secondary visual attention task. Sway velocity was higher in younger children than older ones and also higher in children with TS than in controls. The secondary task exacerbated the velocity anomalies in younger children with TS. The effects were independent of tic severity, medication, and attention deficit. The results suggest that postural control anomalies in TS are sensitive to attentional requirements. abstract_id: PUBMED:35513836 Effects of fixed versus variable task prioritization during short-term dual task practice on motor and cognitive task performance in young adults. Objective: It has been shown that variable compared to fixed task prioritization during dual task practice more effectively improves motor (i.e., postural control) and cognitive (i.e., memory) performance in older adults. However, it is unclear whether this finding is also valid in young adults. Thus, the present study examined the effect of fixed (allocate equal priority on both tasks) versus variable (vary priority between both tasks) priority during short-term motor-cognitive dual task practice on single and dual task performance in healthy young adults (age range: 20-30 years). Results: During two days of practice, significant improvements of motor (i.e., balance task: reduced root mean square error; p < 001, ηp2 = .72) and cognitive (i.e., arithmetic task: increased serial three subtractions; p < .001, ηp2 = .78) task performance were observed and that was irrespective of group ("fixed priority" and "variable priority"). Further, the statistical analysis of post-practice single and dual task performance revealed no significant differences between groups, irrespective of task (i.e., motor or cognitive). This indicates that in young as opposed to old adults, single and dual task performance improvements are independent of task prioritization (i.e., fixed or variable priority) during short-term motor-cognitive dual task practice. abstract_id: PUBMED:26340278 Age-related changes in postural control to the demands of a precision task. Optimal sensorimotor integration is needed to maintain the precision of a visuomotor postural task. Furthermore, cognitive resources have been suggested to be involved in maintaining balance, especially in older adults. This study investigated how older and younger adults differed in employing sensorimotor strategies in a dual-task situation. Older (age 65-84 years) and younger adults (age 19-30 years) performed a visually-based, postural tracking task in different body orientations (from 0° to 45°), which necessitated slightly different task goals. On some trials, participants performed a concurrent silent arithmetic task with the visuomotor tracking task. The results demonstrated that sensorimotor control declined with age. Older adults showed greater medial-lateral center of pressure variability compared to younger adults in the precision task. Younger adults displayed a trend to decrease anterior-posterior variability, but older adults exhibited an opposite trend when the body orientation changed from 0° to 45°. The addition of a dual-task situation decreased overall postural variability in both age groups. Age-related changes in postural control may degrade the flexible coordination of the sensory feedback and motor execution. This study suggested that medial-lateral stability may be more sensitive to this age-related decline and may be closely associated with postural instability and falls. Answer: Practice appears to have a differential impact on the relationship between postural control and the execution of a secondary task in young and older individuals. In a study examining the effects of practice on dual-task performance, it was found that practice did not modify the performance of postural sway but did lead to an increase in the speed of execution of the secondary task for both young and older groups equally. However, in young participants, the amplitude of sway decreased, and the frequency of sway increased, indicating increased stiffness when performing the cognitive task. In contrast, older participants showed increased amplitude of sway and increased frequency of sway in the mediolateral direction only (PUBMED:15114037). Another study suggested that performing a concurrent cognitive task promotes the adoption of an automatic postural control in both young and older adults, as evidenced by increased postural stability and postural sway complexity (PUBMED:28570861). This indicates that cognitive tasks can lead to improvements in postural control, potentially due to automatization. However, the effects of practice on postural control during dual-tasking may be task-specific. For example, a visuospatial attention task differentially affected postural control in young and older adults, with older adults showing less steadiness during a dorsiflexion task when the attention task was added, while young adults' performance was not influenced (PUBMED:29310079). The modality of the cognitive task also influences postural control, with visual tasks leading to reductions in sway area and sway variability compared to auditory tasks in older adults (PUBMED:30414089). Additionally, intermittent use of an "anchor system" improved postural control in healthy older adults, with the best results seen when the system was used 50% of the time (PUBMED:23352618). In summary, practice can modify the relationship between postural control and the execution of a secondary task, but the extent and nature of this modification depend on various factors, including the age of the individual, the type of secondary task, and the modality of the task. More research is needed to fully understand these relationships and how practice can be optimized to benefit postural control during dual-tasking in both young and older individuals.
Instruction: Does prior abdominal surgery influence outcomes or complications of robotic-assisted laparoscopic radical prostatectomy? Abstracts: abstract_id: PUBMED:24730750 Laparoscopic versus robotic-assisted radical prostatectomy: an Australian single-surgeon series. Background: In Australia, robotic-assisted radical prostatectomy (RARP) has steadily replaced open and laparoscopic surgery in the management of localized prostate cancer. Given the increased cost of this technology, we aimed to compare the perioperative, pathological, oncological and functional outcomes as well as short-term complications of laparoscopic and RARP. Methods: We performed a retrospective review of prospectively collected data on 200 consecutive patients during the transition of a single surgeon (DM) from pure laparoscopic (n = 100) to RARP (n = 100) between September 2007 and March 2011. Results: Median operative time and estimated blood loss were the same for both surgical approaches, 195 min (P = 0.29) and 300 mL (P = 0.88) respectively. Median length of hospital stay was shorter for RARP (P = 0.003). Complication rates were not statistically different between groups. There was no significant difference in positive surgical margin rates in pT2 (P = 0.36) or pT3 disease (0.99) or biochemical recurrence rate between groups (P = 0.14). The 12 months continence rate was improved with RARP compared with laparoscopic radical prostatectomy (93% versus 82%; P = 0.025). The potency rate was 56% and 74% at 12 months after laparoscopic radical prostatectomy and RARP respectively (P = 0.12) for a bilateral nerve sparing procedure. Conclusion: We conclude from our single-surgeon comparative series that the robotic approach results in a significantly shorter length of hospital stay and improved 12 months continence rates and demonstrated a trend towards better potency rates. Complications, positive surgical margin rates and the requirement for adjuvant therapy are all improved with the robotic approach but did not show statistically significant differences. abstract_id: PUBMED:24912809 Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins. Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margins frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy. Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients. Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback. Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins. abstract_id: PUBMED:35216946 Robotic-assisted Versus Laparoscopic Radical Prostatectomy: 12-month Outcomes of the Multicentre Randomised Controlled LAP-01 Trial. Background: Recently, our LAP-01 trial demonstrated superiority of robotic-assisted laparoscopic radical prostatectomy (RARP) over conventional laparoscopic radical prostatectomy (LRP) with respect to continence at 3 mo. Objective: To compare the continence, potency, and oncological outcomes between RARP and LRP in the 12-mo follow-up. Design, Setting, And Participants: In this multicentre, randomised, patient-blinded controlled trial, patients referred for radical prostatectomy to four hospitals in Germany were randomly assigned (3:1) to undergo either RARP or LRP. Outcome Measurements And Statistical Analysis: Continence was assessed as a patient-reported outcome through validated questionnaires. Secondary endpoints included potency and oncological outcomes. Data were statistically analysed by bivariate tests and multivariable models. Results And Limitations: At 12 mo, follow-up data were available for 701 of 782 patients. Continence at 6 and 12 mo after surgery was better in RARP patients, however no longer statistically significant (p = 0.068 and 0.38, respectively). Patients who were potent at baseline and underwent nerve-sparing surgery reported significantly higher potency after RARP, as defined by the capability to maintain an erection sufficient for intercourse at 3 (p = 0.005), 6 (p = 0.018), and 12 mo (p = 0.013). There were no statistically significant differences in oncological outcomes at 12 mo. It is a limitation that the influence of different anastomotic techniques was not investigated in this study. Conclusions: Both LRP and RARP offer a high standard of therapy for prostate cancer patients. However, robotic assistance offers better functional outcomes in specific areas such as potency and early continence in patients who are eligible for nerve-sparing RP. Patient Summary: We compared outcomes 12 mo after radical prostatectomy between robotic-assisted and conventional laparoscopy. Both methods were equivalent with respect to oncological outcomes. Better recovery of continence in patients with robotic-assisted surgery, which was observed at 3 mo, blurred up to 12 mo. A benefit of robotic-assisted surgery was also observed in potency. abstract_id: PUBMED:25422166 Robot-assisted radical prostatectomy in an initial Japanese series: the impact of prior abdominal surgery on surgical outcomes. Objectives: To evaluate the influence of prior abdominal surgery on surgical outcomes of robot-assisted radical prostatectomy in an early single center experience in Japan. Methods: We reviewed medical records of patients with localized prostate cancer who underwent robot-assisted radical prostatectomy from October 2010 to September 2013 at Tottori University Faculty of Medicine, Yonago, Tottori, Japan. Patients with prior abdominal surgery were compared with those with no prior surgery with respect to total operative time, port-insertion time, console time, positive surgical margin and perioperative complication rate. Furthermore, the number of patients requiring minimal adhesion lysis was compared between the two groups. Results: Of 150 patients who underwent robot-assisted radical prostatectomy, 94 (63%) had no prior abdominal surgery, whereas 56 patients (37%) did. The mean total operative time was 329 and 333 min (P = 0.340), mean port insertion time was 40 and 34.5 min (P = 0.003), mean console time was 255 and 238 min (P = 0.145), a positive surgical margin was observed in 17.9% and 17.0% patients (P = 0.896), and the incidence of perioperative complications was 25% and 23.4% (P = 0.825), respectively, in those with and without prior abdominal surgery. In the prior abdominal surgery group, 48 patients (80.4%) required adhesion lysis at the time of trocar placement or while operating the robotic console. Conclusion: Robot-assisted radical prostatectomy appears to be a safe approach for patients with prior abdominal surgery without increasing total operative time, robotic console time, positive surgical margin or the incidence of perioperative complications. abstract_id: PUBMED:27637506 Robotic-assisted laparoscopic radical prostatectomy: initial 15 cases in Japan. Recently, we have introduced robotic-assisted laparoscopic radical prostatectomy (RALP) in Japan. This article describes the details of a training program to shorten the learning curve in the absence of an urologist with expertise in robotic surgery. Five months after a 2-day training course of robotic surgery, RALP was first performed in Japan, and a total of 15 cases were performed in the subsequent 4 months. Our training program consisted of: (1) image training using surgical operation videos, (2) dry lab training using a sham pelvic cavity model, and (3) intraoperative mentoring. The operative procedure was divided into five consecutive stages, and time required to complete each stage was recorded. Robotic radical prostatectomy was completed in all patients without conversion to open surgery, except for the first patient in whom a restriction to a 2-h operation had been imposed by the ethics committee. The mean console time and the mean intraoperative blood loss (including urine) reduced from 264.2 min and 459.4 ml, respectively, in the first 11 cases, to 151 min and 133.3 ml, respectively, in the last three cases. With direct intraoperative guidance by the mentor during cases 13 and 14, the operation time was reduced at all five stages of the operative procedure. Our training program proved remarkably effective in reducing the learning curve of RALP in Japan, where there is no person with expertise in robotic surgery. abstract_id: PUBMED:26351883 Robotic-Assisted Laparoscopic Radical Prostatectomy. Background: The use of radical prostatectomy for the treatment of prostate cancer has been increasing during the last decade partially due to the widespread adoption of the robotic-assisted laparoscopic technique. Although no prospective, randomized controlled trials have compared open radical prostatectomy (ORP) with robotic-assisted laparoscopic radical prostatectomy (RALRP), numerous comparative studies have been retrospectively conducted. Methods: A systematic review of the literature was performed to clarify the role and advancement of RALRP. Studies comparing ORP with RALRP that measured outcomes of cancer control, urinary and sexual function, and complications were included. A nonsystematic review was utilized to describe the advancements in the techniques used for RALRP. Results: RALRP is the procedure of choice when treating localized prostate cancer. This preference is due to the observed improvement in morbidity rates, as evidenced by decreased rates of blood loss and postoperative pain and similar oncological outcomes when compared with ORP. Robotic assistance during surgery is continually being modified and the techniques advanced, as evidenced by improved nerve sparing for preserving potency and reconstruction of the bladder neck to help in the early recovery of urinary continence. Conclusions: Morbidity rates should continue to improve with the advancement of minimally invasive techniques for radical prostatectomy. The adoption of robotic assistance during surgery will continue as the applications of robotic-assisted surgery expand into other solid organ malignancies. abstract_id: PUBMED:31807424 Perioperative outcomes of robotic-assisted laparoscopic radical prostatectomy, laparoscopic radical prostatectomy and open radical prostatectomy: 10 years of cases at Ramathibodi Hospital. Background: This study aims to compare the perioperative and pathological outcomes of open radical prostatectomy (ORP), laparoscopic radical prostatectomy (LRP), and robotic-assisted laparoscopic radical prostatectomy (RALRP) at Ramathibodi Hospital within Mahidol University in Thailand. Methods: From January 2008 to July 2017, 679 RPs were performed. Patients' data were collected retrospectively to evaluate their perioperative and pathological outcomes. This data included the age, body mass index (BMI), serum prostate specific antigen (PSA) level, clinical stage, Gleason score (GS) from biopsy, operative time, estimated blood loss (EBL), perioperative complications, blood transfusion rate, adjacent organ injury rate, length of hospital stay, pathological stage, GS of the biopsy specimen, specimen weight (g), and marginal status of the patients. Results: Of the 679 RPs performed, 128 (19.28%) were ORPs, 241 (36.30%) were LRPs, and 295 (44.43%) were RALRPs. Patients who underwent a RALRP had a significant advantage in EBL (1,600, 500, and 300 mL for ORPs, LRPs, and RALRPs, respectively), overall complications, and blood transfusion rate. As they are minimally invasive techniques, LRP and RALRP presented an advantage in terms of the length of hospital stay (an average of 9, 6, and 6 days for ORPs, LRPs, and RALRPs, respectively) and adjacent organ injury rate. ORPs also had the shortest operative time (160, 210, and 200 min for ORPs, LRPs, and RALRPs, respectively). However, the specimen weight and marginal status were similar in all of the techniques. Conclusions: Minimally invasive RP techniques, such as LRPs and RALRPs, appear to be safe, have significantly better perioperative outcomes than ORPs, and have comparable pathological outcomes to those of ORPs. abstract_id: PUBMED:27472974 Using a Checklist in Robotic-Assisted Laparoscopic Radical Prostatectomy Procedures. Robotic surgical systems are relatively new in New Zealand and have been used mainly for laparoscopic radical prostatectomy. Checklists are successfully used in other industries and health care facilities, so we developed a checklist for use during robotic-assisted laparoscopic radical prostatectomy (RALRP) procedures. After a two-month trial using the checklist, we calculated the completeness of each phase of the checklist as a percentage of the number of completed checklists versus total number of compliant checklists in that phase. Operating room personnel participated in an audiotaped focus group and discussed their perceptions about using the RALRP checklist. We collected, transcribed, and reviewed the focus group discussion and thematically analyzed the responses, which confirmed that the checklist served as a guideline and reminder during the setup. Additionally, staff members associated the checklist with improved OR readiness, minimized workflow interruption, improved efficiency, and positive changes in confidence and teamwork. abstract_id: PUBMED:35991216 Left ectopic ureteral insertion into seminal vesicle detected after robotic assisted laparoscopic radical prostatectomy. Ectopic ureters are rare congenital malformations of the urinary tract, more frequent in females and most commonly associated with single collecting system in males. We report a case of a prostate cancer patient undergoing robotically assisted laparoscopic radical prostatectomy. Duplication of vas deferens was thought to be found during surgery. Postoperatively, patient developed fevers. CT showed incidental finding of duplex collecting system on the left with dilatation of the upper moiety. Percutaneous nephrostomy was placed but an attempt at antegrade insertion of ureteric stent was unsuccessful. Robotic reimplantation of the ectopic ureter was successfully performed on day six post prostatectomy. abstract_id: PUBMED:20708782 Does prior abdominal surgery influence outcomes or complications of robotic-assisted laparoscopic radical prostatectomy? Objectives: To determine whether robotic-assisted laparoscopic radical prostatectomy (RALP) in patients with prior abdominal surgery is associated with increased operating times, positive surgical margins, or complications. Methods: An institutional review board-approved retrospective review of a prospective, prostatectomy database was performed. Patients undergoing surgery between January 1, 2004, and February 29, 2008 were included. Transition from open retropubic prostatectomy to RALP took place through 2004, at which point all surgical candidates were offered RALP, regardless of prior surgical history. Learning curves from all surgeons were included. Patients with prior abdominal surgery were compared with those patients without prior surgery with respect to total operating time, robotic-assist time, surgical margin positivity, and rate of complications. Results: A total of 1083 patients underwent RALP between January 1, 2004, and February 29, 2008, at our institution; of these, 839 had sufficient data available for analysis. In all, 251 (29.9%) patients had prior abdominal surgery, whereas 588 (70.1%) had no prior abdominal surgery. Total operating times were 209 and 204 minutes (P = .20), robotic console times were 165 and 163 minutes (P = .59), and surgical margin positivity was 21.1% and 27.2% (P = .08) for patients with and without prior abdominal surgery, respectively. The incidence of complications was 14.3% and 17.3% for patients with and without prior abdominal surgery (P = .33). Conclusions: Prior abdominal surgery was not associated with a statistically significant increase in overall operating time, robotic assist time, margin positivity, or incidence of complications in patients undergoing RALP. Robotic prostatectomy can be safely and satisfactorily performed in patients who have had a wide variety of prior abdominal surgery types. Answer: Prior abdominal surgery does not appear to significantly influence the outcomes or complications of robotic-assisted laparoscopic radical prostatectomy (RALP). According to the studies reviewed, RALP can be safely performed in patients with a history of prior abdominal surgery without increasing total operative time, robotic console time, positive surgical margin rates, or the incidence of perioperative complications (PUBMED:20708782; PUBMED:25422166). One study specifically noted that in patients with prior abdominal surgery, 80.4% required adhesion lysis at the time of trocar placement or while operating the robotic console, but this did not result in increased operative times or complications (PUBMED:25422166). Another study confirmed that prior abdominal surgery was not associated with a statistically significant increase in overall operating time, robotic assist time, margin positivity, or incidence of complications in patients undergoing RALP (PUBMED:20708782). These findings suggest that RALP is a viable option for patients regardless of their history of abdominal surgeries.
Instruction: Is the Effect of Anhedonia on Smoking Cessation Greater for Women Versus Men? Abstracts: abstract_id: PUBMED:27287389 Is the Effect of Anhedonia on Smoking Cessation Greater for Women Versus Men? Introduction: Anhedonia has been recognized as a major risk factor for smoking persistence. Potential gender differences in the effect of anhedonia on smoking cessation have not been studied. Using data from a completed clinical trial of maintenance nicotine patch therapy, we hypothesized that gender would moderate the effect of anhedonia on short-term abstinence, such that anhedonic women would be less likely to achieve abstinence. Methods: Participants (N = 525; 50% female, 48.2% Black/African American, average age: 46 years) received 21mg/day nicotine patch and four brief behavior counseling sessions over 8 weeks. Participants were classified at baseline using the Snaith-Hamilton Pleasure Scale as anhedonic (scores > 2) or hedonic (scores ≤ 2). Bioverified 7-day point prevalence abstinence was measured at week 8. Using logistic regression analysis, we tested the interaction of anhedonia by gender predicting abstinence, adjusting for age, race, nicotine dependence, and baseline depressive symptomatology. Results: Seventy participants (13%) were classified as anhedonic. Men were more likely to be anhedonic than women (16.6% vs. 10.2%, p = .03). Contrary to our hypothesis, the interaction of anhedonic status (hedonic vs. anhedonic) by gender was nonsignificant (p = .18). There was a main effect of hedonic capacity, such that anhedonia predicted abstinence, odds ratio = 3.24, 95% confidence interval = 1.39-7.51, p = .006. Conclusion: Both male and female anhedonic smokers were more likely to be abstinent, which contrasts with prior research indicating that anhedonia is a risk factor for difficulty quitting. This unexpected finding may be explained by a possible selective benefit of nicotine patch therapy, which has been observed in some studies to have antidepressant effects. Implications: This is the first study to examine whether the association between pretreatment anhedonia and smoking cessation differs by gender. For both women and men, anhedonia was associated with a greater likelihood of abstinence after 8 weeks of treatment with 21mg/day nicotine patch and behavior counseling. Our findings indicate that the association between anhedonia and smoking cessation is not as clear as has been assumed and may depend in part on the type of treatment delivered. abstract_id: PUBMED:31809878 Behavioral activation and smoking cessation outcomes: The role of depressive symptoms. Introduction: Depressive symptoms are related to smoking cessation outcomes. We examined the effects of behavioral activation (BA), as part of a cognitive behavioral intervention to quit smoking, in terms of abstinence rates according to depressive symptom level. We also analyzed whether BA could differentially benefit participants with higher versus lower anhedonia. Methods: The sample was composed of 183 smokers (Mage = 45.3; 62.8% female) who participated in a randomized clinical trial assessing the effects of a BA intervention compared to a standard intervention. Smoking outcomes were biochemically confirmed point prevalence abstinence, and abstinence days after treatment during one year follow-up. The intensity of depressive symptomatology and anhedonic symptoms were assessed using the Beck Depression Inventory-II. Results: No differences in abstinence rates were found in relation to depressive symptom level. The BA condition (vs. standard condition) predicted greater abstinence rates (OR = 1.91) in participants with lower scores on depressive symptoms, whereas in participants with higher scores, it did not (OR = 1.17). Moreover, the BA condition predicted greater abstinence rates in participants with lower scores on anhedonia. When examining days of abstinence during the one-year follow-up period, a significant interaction was found between depressive symptoms and treatment condition, favoring the BA condition. Conclusion: BA implemented as part of a cognitive behavioral intervention to quit smoking improves long-term abstinence rates, especially among those with fewer depressive symptoms. abstract_id: PUBMED:30047750 Anhedonia and smoking cessation among Spanish-speaking Mexican-Americans. Objective: Smoking cessation is associated with improved health and reduced risk of disease. Understanding specific factors that are associated with smoking cessation is important both for identifying those who may have the greatest difficulty quitting smoking and tailoring smoking cessation interventions accordingly. Low positive affect/anhedonia, a key transdiagnostic symptom of several psychiatric disorders, is associated with lower levels of smoking cessation in the general population, but to date, few studies have examined factors influencing smoking cessation among Spanish-speaking Mexican-American smokers. Methods: The current study examined whether low positive affect/anhedonia was inversely related to cessation status across 3 time points among Spanish-speaking Mexican-American smokers (N = 199) who were making a smoking quit attempt. Results: Using multilevel modeling, the between-person low positive affect/anhedonia score was found to be inversely associated with smoking at quit day, 3 and 26 weeks after quit while controlling for relevant covariates (i.e., age, gender, education, income, relationship status, heaviness of smoking index) but not when controlling for other symptoms of depression. Conclusions: Contrary to prior research, the results of this study did not confirm the unique predictive role of low positive affect/anhedonia among Mexican Americans, suggesting that risk factors for this group may be different from other populations and cessation approaches may also need to differ. (PsycINFO Database Record abstract_id: PUBMED:24219183 Anhedonia, depressed mood, and smoking cessation outcome. Objective: Although the relation between lifetime depression and smoking cessation outcome has been well studied, the proposition that different symptomatic expressions of depression exert disparate predictive effects on risk of smoking cessation failure has largely gone uninvestigated. This study analyzed the individual contributions of depression's 2 hallmark affective symptoms, anhedonia (i.e., diminished interest in normally enjoyable activities) and depressed mood (i.e., elevated sadness), to the prediction of smoking cessation outcome. Method: Participants were adult daily smokers (N = 1,469; mean age = 45 years, 58% female, 84% White) enrolled in a smoking cessation treatment study. Lifetime history of anhedonia and depressed mood were classified via structured interview prior to quit day. Seven-day point prevalence smoking abstinence was assessed at 8 weeks and 6 months postquit. Results: When examined separately, both lifetime anhedonia, OR (95% CI) = 1.42 (1.16, 1.73), p = .004, and depressed mood, OR (95% CI) = 1.35 (1.11, 1.63), p = .002, predicted increased odds of relapse. These relations remained after adjusting for covariates, including lifetime depressive disorder, which did not predict outcome. After controlling for the covariation between lifetime anhedonia and depressed mood, anhedonia predicted cessation outcome, OR (95% CI) = 1.31 (1.05, 1.62), p = .02, while depressed mood did not (p = .19). Symptom duration (>2 weeks), treatment, and substance use disorder did not modify relations of lifetime anhedonia and depressed mood with cessation outcome. Conclusions: Results suggest that (1) symptoms of affective disturbance capture depression-relevant risk of cessation failure, which is not adequately demarcated by the lifetime depressive disorder diagnosis, and (2) anhedonia is a more sensitive index of this affective disturbance than depressed mood per se. Clinical attention to anhedonia may facilitate smoking cessation. abstract_id: PUBMED:18324570 Dimensions of depressive symptoms and smoking cessation. Because different psychopathologic components of depressive symptoms may have distinct etiologies, examining their differential effects on smoking cessation may elucidate mechanisms underlying the smoking-depression relationship. Negative affect (NA), somatic features (SF), low positive affect/anhedonia (PA), and interpersonal disturbance (IP) have been identified as unique dimensions of depression that can be measured using the Center for Epidemiologic Studies Depression Scale (CESD). This study examined common and unique associations between CESD subscales and baseline smoking characteristics, nicotine withdrawal, and relapse in 157 participants enrolled in a smoking cessation trial for heavy social drinkers. Each dimension was univariately associated with negative and positive reinforcement smoking motives. Only SF had unique relations with tolerance smoking motives and univariate associations with nicotine dependence severity. Only PA predicted cessation-related changes in withdrawal symptoms on quit day. Analyses predicting abstinence at 8, 16, and 26 weeks post quit date showed that NA, SF, and PA each univariately predicted relapse, ps< or =.0083. Only low PA predicted poorer outcomes incrementally to the other dimensions, even when controlling for level of nicotine dependence, smoking frequency, and history of major depression, p = .0018. Interventions targeting anhedonia and low positive affect may be useful for smokers trying to quit. abstract_id: PUBMED:32830368 Time-varying effects of 'optimized smoking treatment' on craving, negative affect and anhedonia. Aims: To identify when smoking cessation treatments affect craving, negative affect and anhedonia, and how these symptoms relate to abstinence, to help evaluate the effects of particular intervention components in multi-component treatments and accelerate treatment refinement. Design: Secondary analysis of data from a two-arm randomized controlled trial. Setting: Seven primary care clinics in Wisconsin, United States. Participants: Adult primary care patients who smoked daily (n = 574). Intervention And Comparator: Intervention was abstinence-optimized treatment (A-OT, n = 276) comprising 3 weeks of nicotine mini-lozenges pre-target quit day (TQD), 26 weeks of combination nicotine patch and mini-lozenges post-TQD and extensive psychosocial support. The comparator was recommended usual care (RUC, n = 298), comprising brief counseling and 8 weeks of nicotine patch post-TQD. Measurements: Time-varying effect models examined dynamic effects of A-OT (versus RUC) on the primary outcomes of nightly cigarette craving, negative affect and anhedonia from 1 week pre- to 2 weeks post-TQD. Exploratory models examined within-person relations between nicotine medication use and same-day symptom ratings. Secondary logistic regression analyses examined associations between post-TQD craving, negative affect and anhedonia and 1-month post-TQD abstinence. Findings: A-OT significantly suppressed pre- and post-TQD craving (β = -0.27 to -0.46 across days) and post-TQD anhedonia (β = -0.24 to -0.38 across days), relative to RUC. Within individuals, using patches was associated with lower negative affect in RUC (β = -0.42 to -0.52), but not in A-OT. Using more mini-lozenges was associated with greater craving (β = 0.04-0.07) and negative affect (β = 0.03-0.05) early, and with lower anhedonia (β = -0.06 to -0.12) later. Greater post-TQD craving (OR = 0.68) and anhedonia (OR = 0.85) predicted lower odds of abstinence 1 month post-TQD. Conclusion: Time-varying effect models showed that a multi-component treatment intervention for smoking cessation suppressed significant withdrawal symptoms more effectively than recommended usual care among daily adult smokers motivated to quit. The intervention reduced craving pre- and post-target quit day (TQD) and anhedonia post-TQD. abstract_id: PUBMED:11023009 Relationships between sensation seeking and emotional symptomatology during smoking cessation with nicotine patch therapy. This study explored relationships between the sensation-seeking trait and the development of emotional symptomatology during smoking cessation with nicotine transdermal patches. Twenty-five subjects were evaluated before they stopped smoking, on Day 8, Day 30, Day 90, and Day 120. Initial motives for smoking and the sensation-seeking personality trait were tested as possible predictors for the development of specific mood disturbances. Our subjects scored very high on sensation seeking, consistent with previous results on smokers. This may also be due to the well-known tendency of high sensation-seekers to be willing to try new experiences. The sensation-seeking trait did not predict the issue of cessation. However, it was related to emotional deficit (anhedonia, affective blunting), tiredness, and a lack of energy, before and during smoking cessation. Two different interpretations of emotional deficit are proposed. abstract_id: PUBMED:27633341 What a difference a day makes: differences in initial abstinence response during a smoking cessation attempt. Aims: To (1) identify distinct classes of smokers based on quit day withdrawal symptoms and (2) explore the relations between withdrawal classes and demographics, tobacco dependence, treatment and smoking outcomes. Design: Secondary data analysis of participants (n = 1504) in a randomized double-blind placebo-controlled multi-site smoking cessation trial who provided ecological momentary assessments of withdrawal symptoms on their quit day. Participants received smoking cessation counseling and were randomized to receive placebo or one of five active pharmacotherapies. Setting: Research offices in Madison and Milwaukee, Wisconsin, USA. Participants: Adult smokers (n = 1236; 58% female, 86% white), recruited from the community via advertisements, who abstained on their quit day. Measurements: Demographics and tobacco dependence were assessed at baseline and participants carried palmtop computers to record withdrawal symptoms (craving, negative affect, difficulty concentrating, hunger and anhedonia) on their quit day. Point-prevalence abstinence and latency to relapse were assessed at 8 weeks and 6 months post-quit. Findings: Latent class analysis identified four withdrawal classes [Akaike information criterion (AIC) = 70.09]: Moderate withdrawal (64% of sample), high craving-anhedonia (8% of sample), affective withdrawal (13% of sample) and hunger (15% of sample). The high craving-anhedonia class reported significantly higher dependence (P < 0.01), were less likely to have received combination nicotine replacement, reported lower week 8 abstinence rates and relapsed sooner than those in the moderate withdrawal class (P < 0.05). The affective withdrawal class reported higher levels of baseline negative affect and life-time psychopathology (P < 0.05) and relapsed more quickly than the moderate withdrawal class (P < 0.01). Conclusions: While the majority of smokers report typical levels of withdrawal symptoms on their quit day, more than one-third report extreme craving or extreme negative affective or extreme hunger responses to initial abstinence. These distinct quit-day withdrawal symptom patterns are related to baseline characteristics, treatment and cessation success. abstract_id: PUBMED:19571250 Relations between anhedonia and smoking motivation. Introduction: A growing literature suggests that anhedonia-an affective dimension related to the inability to experience pleasure-is associated with poor smoking cessation outcomes. Despite these findings, research of the motivational mechanisms linking anhedonia and smoking has been limited. Accordingly, the present study examined (a) relationships between anhedonia and motivationally relevant smoking characteristics and (b) whether anhedonia moderated the effects of tobacco deprivation on appetitive and aversive aspects of smoking urges. Methods: Smokers (N = 212; >or=5 cigarettes/day) first attended a baseline session during which measures of anhedonia and smoking characteristics were completed. Prior to a subsequent experimental session, a portion of participants were randomized to one of two groups: (a) 12-hr tobacco deprivation before the session (n = 51) and (b) ad libitum smoking (n = 69). Results: Smokers with higher levels of anhedonia reported a greater number of past failed quit attempts and a higher proportion of quit attempts that ended in rapid relapse within 24 hr, rs > .20, ps < .05. Anhedonia did not consistently correlate with smoking heaviness, chronicity, and dependence motives. Anhedonia significantly moderated the influence of tobacco deprivation on appetitive smoking urges, such that deprivation effects on appetitive urges were stronger in high anhedonia smokers (beta = .64) than in low anhedonia smokers (beta = .23). Anhedonia did not moderate deprivation effects on aversive smoking urges. This pattern of results remained robust when controlling for baseline negative affect. Discussion: These findings elucidate anhedonia's link with smoking relapse and could be useful for developing cessation interventions for anhedonic smokers. abstract_id: PUBMED:25396705 Dysphoria and smoking among treatment seeking smokers: the role of smoking-related inflexibility/avoidance. Background: Emerging data suggest that dysphoria is one facet of depression that is especially related to various aspects of cigarette smoking. However, it is presently unknown what emotional processes may account for these relations. Objectives: In the current cross-sectional study, the impact of avoidance and inflexibility to smoking (AIS), a smoking-specific form of experiential avoidance, was tested on the relationship of dysphoria to four specific smoking processes that are key factors in cessation: perceived barriers to cessation, severity of problems during prior quit attempts, negative reinforcement smoking expectancies, and motivation to quit smoking. Methods: Participants (n = 465) were treatment-seeking adult daily smokers. Relative indirect effects were subjected to bootstrap analyses to test direct and indirect effects of dysphoria on smoking processes. Results: After controlling for gender, nicotine dependence severity, drinking problems, cannabis use, negative affectivity, tobacco-related medical problems, and AIS, dysphoria remained directly, positively related to perceived barriers and cessation problems. Additionally, dysphoria was indirectly, positively related to perceived barriers, cessation problems, negative reinforcement smoking expectancies, and motivation to quit indirectly through higher levels of AIS. Conclusion: In the context of dysphoria, AIS may explain a wide range of clinically-relevant smoking processes. Answer: The effect of anhedonia on smoking cessation does not appear to be greater for women compared to men. A study that specifically investigated this question using data from a clinical trial of maintenance nicotine patch therapy found that the interaction of anhedonic status by gender was nonsignificant (p = .18), indicating that gender did not moderate the effect of anhedonia on short-term abstinence. Contrary to the initial hypothesis, both male and female anhedonic smokers were more likely to be abstinent, which was an unexpected finding as anhedonia is typically considered a risk factor for difficulty quitting. This result suggests that the association between anhedonia and smoking cessation may not be as clear as previously assumed and could depend on the type of treatment delivered (PUBMED:27287389). Other studies have explored the relationship between anhedonia and smoking cessation but did not specifically address gender differences. For instance, one study found that behavioral activation (BA) as part of a cognitive behavioral intervention improved long-term abstinence rates, especially among those with fewer depressive symptoms, which can include anhedonia (PUBMED:31809878). Another study indicated that low positive affect/anhedonia was inversely associated with smoking cessation among Spanish-speaking Mexican-American smokers, but this relationship was not unique when controlling for other symptoms of depression (PUBMED:30047750). Additionally, research has shown that anhedonia is a more sensitive index of affective disturbance related to smoking cessation failure than depressed mood per se (PUBMED:24219183). In summary, the available evidence does not support the idea that the effect of anhedonia on smoking cessation is greater for women compared to men. Instead, anhedonia seems to be a factor that can influence smoking cessation outcomes for both genders, and its impact may vary depending on the treatment approach and individual differences in depressive symptoms (PUBMED:27287389).
Instruction: Does proclination of maxillary incisors really affect the sagittal position of point A? Abstracts: abstract_id: PUBMED:23678998 Does proclination of maxillary incisors really affect the sagittal position of point A? Objective: To examine the effect of maxillary incisor proclination due to orthodontic treatment upon the sagittal position of point A and evaluate the effect of this possible movement of point A on sella-nasion-point A (SNA) angle. Materials And Methods: A study group was formed from 25 subjects (12 male and 13 female) who had Class II division 2 malocclusion with retroclined upper incisors, and a control group was formed from 25 subjects (12 male and 13 female) who had minor crowding in the beginning of the treatment and required no or minimal maxillary anterior tooth movement. Treatment changes in maxillary incisor inclination, sagittal position of point A, SNA angle, and movement of incisor root apex and incisal edge were calculated on pretreatment and posttreatment lateral cephalographs. Results: Maxillary incisors were significantly proclined (17.33°) in the study group and not significantly proclined (1.81°) in the control group. This proclination resulted in 2.12-mm backward movement of the root apex and 5.76-mm forward movement of the incisal edge of maxillary incisors. Point A moved 1.04 mm backward (P = .582) and 0.48 mm (P = .811) forward in the study and control groups, respectively. Incisor root apex and incisal edge almost remained stable in the control group. No significant change was observed in the value of the SNA angle in both the study and control groups. Conclusions: Proclination of maxillary incisors accompanied by backward movement of incisor root apex caused posterior movement of point A. However, this posterior movement does not significantly affect the SNA angle. abstract_id: PUBMED:25419382 The effects of incisor inclination changes on the position of point A in Class II division 2 malocclusion using three-dimensional evaluation: a long-term prospective study. Objective: The aim of this study was to evaluate change in the sagittal position of point A due to orthodontic treatment by orthodontic community-cone beam computed tomography (CBCT). Materials And Methods: 40 subjects (22 males and 18 females) who had Class II division 2 malocclusion were recruited, and 40 subjects (23 males and 17 females) who had minor crowding in the beginning of the treatment and required no or minimal maxillary anterior tooth movement were served as control. The changes in maxillary incisor inclination, sagittal position of point A, SNA angle, and movement of incisor root apex and incisal edge were calculated before and after CBCT treatment. Results: Maxillary incisors were significantly proclined in the study group but not in the control group. This proclination resulted in 2.95 mm backward movement of the root apex and 6.23 mm forward movement of the incisal edge of maxillary incisors. Point A moved 1.24 mm and 0.18 mm backward in the study and control groups, respectively. Incisor root apex and incisal edge almost remained stable in the control group. No significant change was observed in the SNA angle in both the study and control groups. However, the change in SNA between the two groups was found to be significant. Conclusions: Proclination of maxillary incisors with backward movement of incisor root apex caused posterior movement of point A. This posterior movement significantly affects the SNA angle. abstract_id: PUBMED:32802775 Cephalometric Evaluation of Maxillary Incisors Inclination, Facial, and Growth Axes in Different Vertical and Sagittal Patterns: An Original Study. Aim: The aim of this study was to evaluate the inclination of the maxillary incisors (I), facial axis (FA), and growth axis (GA) in different vertical and sagittal patterns. Materials And Methods: A total of 498 consecutive orthodontic patients, with an average age of 18.87 years (range = 5-63 years), were grouped based on their vertical and sagittal patterns. Maxillary incisors, FA, and GA axes were traced and their corresponding angles to nasion-basion and true horizontal lines were measured. The sample was divided into three groups based on the mandibular divergence (mandibular plane-MP/sella-nasion [SN]): Group 1-hypodivergent pattern (MP/SN ≤ 27; n = 30), Group 2-normodivergent pattern (27 < MP/SN <37; n = 254), and Group 3-hyperdivergent pattern (MP/SN ≥ 37; n = 214); the sample was then divided into three groups based on the sagittal pattern (ANB, angle between points A, Nasion and B): Group I-skeletal CLI (Class I) (0 <ANB <4; n = 228), Group II-skeletal CLII (ANB ≥ 4; n = 216), and Group III-skeletal CLIII (ANB ≤ 0; n = 54). Group differences were evaluated using the analysis of variance and post hoc tests. Chi-square tests were used for testing relationships between categorical variables. Results: FA/nasion-basion (NBa) and GA/NBa were different among the vertical groups (P < 0.001). FA/NBa was found significantly different in the sagittal groups, whereas GA/NBa was only different between CLII and CLIII groups. Compensation in maxillary incisors' inclination was present in the sagittal groups, but not in the vertical ones. CLI patients when stratified in vertical groups showed FA/NBa and GA/NBa to be different across the three vertical groups. Conclusion: FA/NBa was found different in the vertical and the sagittal groups. Maxillary incisors compensation was only found in the sagittal and not in the vertical groups. abstract_id: PUBMED:19705940 Maxillary incisor proclination effect on the position of point A in Class II division 2 malocclusion. Objective: To test the hypothesis that the magnitude of alteration in the position of point A is not associated with proclination of the upper incisors in Class II division 2 malocclusion. Materials And Methods: Cephalometric films were taken for 30 Class II division 2 patients (8 males and 22 females; average age, 18.3 years) before and after upper incisor proclination. The total change in the position of point A was measured by superimposing the pretreatment and postproclination lateral cephalograms on the sella-nasion line at the sella. To determine the local effect of alveolar bone remodeling associated with upper incisor proclination on the position of point A, postproclination tracing of the maxilla was superimposed on the pretreatment tracing according to the Bolton template of maxillary superimposition. Results: The total vertical displacement in Point A position was downward by 0.84 mm (P = .002), and the total horizontal displacement was forward by 0.45 mm (P = .054). Assessment of local changes in point A revealed that the position of point A had moved backward by 0.60 mm (P = .001). No significant change was observed in the value of the sella-nasion-point A angle (SNA). Conclusion: The hypothesis is rejected. The position of point A is affected by local bone remodeling associated with proclination of the upper incisor in Class II division 2 malocclusion, but this minor change does not significantly affect the SNA angle. abstract_id: PUBMED:35599206 Does the angulation between the maxillary central incisors and the nasopalatine canal differ among sagittal and vertical skeletal patterns? A CBCT study. Objectives: To assess the influence of sagittal and vertical skeletal patterns, and sex in the angulation and relationship between the maxillary central incisors and the nasopalatine canal. Materials And Methods: Cone-beam computed tomography (CBCT) exams of 172 patients were classified into sagittal (class I, II, and III) and vertical skeletal patterns (mesofacial, brachyfacial and dolichofacial). Hard palate-nasopalatine canal and hard palate-long axis of maxillary central incisors angles were measured in sagittal reconstructions. The obtained angles were used to calculate the angulation between maxillary central incisors and nasopalatine canal by triangle geometric relationship. Analysis of variance (multi-way ANOVA) and Tukey post-hoc test were used for data analysis (α=5%). Results: Sagittal and vertical skeletal patterns, and sex presented a low influence on the angulation of maxillary central incisors and nasopalatine canal. Dolichofacial class II women showed the greatest hard palate-nasopalatine canal angulations (P<0.05) and maxillary central incisors and nasopalatine canal angulations (P<0.05). Mesofacial class II and III men presented the lowest maxillary central incisors and nasopalatine canal angulations (P<0.05). Conclusions: The sagittal and vertical skeletal patterns, and sex have a low influence on the angulation of the maxillary central incisors and the nasopalatine canal. abstract_id: PUBMED:36865728 Relationship of Nasolabial Angle with Maxillary Incisor Proclination and Upper Lip Thickness in North Indian Population. Aim: The aim of the study was to evaluate the relationship between nasolabial angle (NLA) with maxillary incisor proclination (U1-NA) and upper lip thickness (ULT). Materials And Methods: Pretreatment lateral cephalometric radiographs of 120 patients were taken, and NLA, U1-NA, and basic ULT measurements were obtained for each patient. Descriptive statistics were calculated for all the variables involved in the study. The correlation was found using the Pearson correlation coefficient (r) test. p < 0.01 was considered statistically significant. Results: The mean values of NLA, upper incisor proclination, and ULT were found to be 91.38° ± 7.10°, 34.21° + 5.17°, and 15.38 ± 1.76 mm, respectively. r (r = -0.583) was found between NLA and upper incisor proclination and (r = -0.040) for NLA and ULT. Conclusion: There is a statistically significant relationship between NLA and U1-NA. How To Cite This Article: Garg H, Khundrakpam D, Saini V, et al. Relationship of Nasolabial Angle with Maxillary Incisor Proclination and Upper Lip Thickness in North Indian Population. Int J Clin Pediatr Dent 2022;15(5):489-492. abstract_id: PUBMED:33482798 Aesthetic evaluation of the labiolingual position of maxillary lateral incisors by orthodontists and laypersons. Background: The maxillary anterior teeth play a crucial role in smile aesthetics. Previous studies regarding the importance of maxillary lateral incisors for smile aesthetics concentrated on their size, incisor edge level, and inclination, etc. However, the aesthetic effect of lateral incisor movement in the spatial position has not been studied yet. Therefore, the purpose of this study was to explore the influence of the labiolingual position of maxillary lateral incisors on the aesthetic perception of smiles by orthodontists and laypersons, as well as analyze differences in this perception between male and female raters. Methods: A three-dimensional (3D) dental model was generated from the photograph of a man's smile using iOrtho7.0 software (Time Angel, Wuxi, China). Based on this model, seven images were generated with different labiolingual positions of the maxillary lateral incisors in 0.5 mm increments (+ indicating labial translation, and-indicating lingual translation). The images were evaluated by 86 orthodontists and 161 laypersons using a visual analog scale, with lower scores indicating less attractiveness. Data were analyzed using Student's t test and one-way analysis of variance with post hoc test. Results: There was no significant difference in smile ratings by males and females. Orthodontists assigned lower scores to all images than laypersons. The smile at + 1.5 mm was considered the least attractive by orthodontists, while smiles at + 1.5 mm and - 1.5 mm were regarded as the least attractive by laypersons. The smile at 0 mm was evaluated as the most attractive by all raters. Laypersons gave different scores to smiles at 0 or - 0.5 mm, but orthodontists did not. Conclusions: The labiolingual position of maxillary lateral incisors does affect the perception of smile aesthetics. Orthodontists may rate smile aesthetics more critically than laypersons. Therefore, communication and discussion between orthodontists and patients is needed to achieve better therapeutic and aesthetic outcomes. abstract_id: PUBMED:34154929 Relationship of anteroposterior position of maxillary central incisors with the forehead in an adult Iranian subpopulation: A cross-sectional study. Objective: This study aimed to assess the relationship of the anteroposterior position of maxillary central incisors with the forehead in an Iranian subpopulation residing in Kermanshah city. Materials And Methods: In this cross-sectional study, 12 orthodontists were requested to analyse full-smile profile photographs with complete maxillary incisor and forehead show in 70 patients with optimal facial harmony (35 males and 35 females) as the control group and 140 patients without optimal facial harmony (70 males and 70 females) as the test group. The inclusion criteria were (I) males and females between 18-60 years seeking orthodontic treatment and (II) willingness for participation in the study. The exclusion criteria were (I) presence of severe craniofacial anomalies, and (II) history of orthodontic treatment. The photographs were traced and analysed using Digimizer Image Analysis software 5.3.5. Data were analysed using STATA version 14.2 via independent t-test and linear regression. Results: A significant difference existed in the anteroposterior position of maxillary incisors in males between the test and control groups (P=0.002). However, this difference was not significant in females of the two groups (P=0.77). A significant difference was noted in the anteroposterior position of maxillary incisors between males and females in the test group (P=0.001) but not in the control group (P=0.80). The frontal angle had a significant correlation with gender and anteroposterior position of maxillary incisors in both groups (P<0.001). Conclusions: According to the results, the anteroposterior position of maxillary incisors was significantly different in the test and control groups. The maxillary incisors in males were more retruded than in females. Also, the frontal angle had a significant correlation with the anteroposterior position of maxillary incisors, and by 1° increase in the frontal angle, the incisors were protruded by averagely 0.307mm. abstract_id: PUBMED:27340165 Relationship of Incisive Papilla to Maxillary Incisors and Canines. Aim: The study was carried to find a relationship between the postextraction stable landmark, the incisive papilla, and the most labial position of the maxillary central incisor teeth, which occurred in Nepalese population. Materials And Methods: Casts of the subjects selected by nonprobability random sampling meeting the inclusion criteria were obtained. Each casts were standardized with respect to the occlusal plane and a photographic technique was used to measure the distance from the tangent of the labial surface of the central incisors to the posterior border of the incisive papilla. The measurements were made using Adobe Photoshop and results were analyzed by using appropriate statistical methods. Most appropriate software (SPSS) for the purpose was used to generate all desired values. Results: The data obtained suggested that the distance from the labial surface of maxillary central incisors to the posterior border of the incisive papilla ranged from 9 to 15.9 mm with a mean of 11.59 mm (SD 1.3). Various other results were also found after evaluation of the arch forms in relation to sex and race. Conclusion: Within the limitations of the study, these results suggested that there is a relationship between the maxillary central incisors and the incisive papilla aiding in the anteroposterior positioning of the anterior tooth. The clinical relevance of the study lies in application of the incisive papilla as a starting point in the preliminary location of maxillary incisors and canines during construction of the denture in absence of preextraction records. abstract_id: PUBMED:26439521 Changes in anteroposterior position and inclination of the maxillary incisors after surgical-orthodontic treatment of skeletal class III malocclusions. Purpose: The purpose of this study was to evaluate and compare the anteroposterior (AP) position and inclination of the maxillary incisors in subjects with class I normal occlusion and a harmonious profile with patients with skeletal class III malocclusions, and to investigate the changes in maxillary incisor inclination and AP position after surgical-orthodontic treatment in class III patients. Material And Methods: Sixty-five subjects (35 female and 30 male; mean age: 21.8 ± 3.89 years) with normal profiles and class I skeletal and dental patterns were selected as a control sample. Sixty-seven patients (38 female and 29 male; mean age: 21.3 ± 3.31 years) with skeletal and dental class III malocclusions who sought surgical-orthodontic treatment were used as the study sample. Subjects were asked to smile and profile photographs were taken with the head in a natural position and the maxillary central incisors and the forehead in full view; cephalograms were taken and superimposed on the profile pictures according to the outline of the forehead and nose. Forehead inclination, maxillary incisor facial inclination and the AP position of the maxillary central incisor relative to the forehead (FAFFA) were measured on the integrated images and statistical analyses were performed. Results: In both groups, there were no significant male/female differences in either the maxillary central incisor inclination or AP position. Female subjects had a significantly steeper forehead inclination compared with males (P < 0.001) in both groups. After combined surgical-orthodontic treatment, the significant labial inclination (P < 0.001) and posterior positioning (P < 0.001) of the maxillary central incisors had been corrected to close to normal range (P > 0.05). In the control group, 84.6% had the facial axial point (FA) of their maxillary central incisors positioned between lines through the forehead facial axis (FFA) point and the glabella. In the study group, however, 79.1% had the maxillary central incisors positioned posterior to the line through the FFA point and the difference with the control group was statistically significant (P < 0.001). The position of the maxillary central incisors was strongly correlated with forehead inclination in the control sample (r(2) = 0.456; P < 0.01), but only a poor correlation was detected in the class III group (r(2) = 0.177; P > 0.05). Conclusions: With the integrated radiograph-photograph method, the lateral cephalogram was reoriented, which makes it possible to accurately measure the variables on profile photographs. The AP position and inclination of the maxillary central incisors relative to the forehead plays an important role in the esthetics of the profile during smiling and could be an important variable to be considered during diagnosis and treatment planning. Answer: The proclination of maxillary incisors does indeed affect the sagittal position of point A, but the extent and significance of this effect vary according to different studies. According to the study with PUBMED:23678998, proclination of maxillary incisors accompanied by backward movement of the incisor root apex caused a posterior movement of point A. However, this movement did not significantly affect the SNA angle. In contrast, the study with PUBMED:25419382 found that while proclination of maxillary incisors with backward movement of the incisor root apex caused a posterior movement of point A, this movement significantly affected the SNA angle. Another study with PUBMED:19705940 also observed that the position of point A is affected by local bone remodeling associated with proclination of the upper incisor in Class II division 2 malocclusion, but this minor change did not significantly affect the SNA angle. These findings suggest that while there is a measurable effect of maxillary incisor proclination on the position of point A, the clinical significance of this effect in terms of altering the SNA angle may not be substantial or consistent across different studies.
Instruction: Doppler ultrasound of the hepatic artery and vein performed daily in the first two weeks after orthotopic liver transplantation. Useful for the diagnosis of acute rejection? Abstracts: abstract_id: PUBMED:8675425 Doppler ultrasound of the hepatic artery and vein performed daily in the first two weeks after orthotopic liver transplantation. Useful for the diagnosis of acute rejection? Rationale And Objectives: To analyze changes in Doppler ultrasound variables in relation to liver biopsy findings for the diagnosis of acute rejection after orthotopic liver transplantation (OLT), the authors performed in a prospective study 316 Doppler ultrasound examinations in the first 2 weeks after OLT on 23 patients. Methods: Recordings were obtained daily from the hepatic artery (resistive index [RI]) and hepatic vein (damping index [DI]). Correlations were explored between the Doppler ultrasound findings and histologic data. The chi-square test was used to analyze differences in Doppler ultrasound variables in patients with and without acute rejection. Results: Serial Doppler ultrasound examinations showed a significant increase in the RI in 11 of 22 patients (50%); the 23rd patient was excluded because of hepatic artery thrombosis. Despite an agreement in 15 of 22 patients (68%) no statistically significant correlation could be found (positive predictive value 6/11 = 55%; negative predictive value 9/11 = 82%; chi-square = 3.14; P > 0.05). A significant increase in the DI was observed in 14 of 23 patients (61%). However, no statistically significant correlation could be found as well with this parameter (positive predictive value 6/14 = 43%; negative predictive value 6/9 = 67%; chi-square = 0.00; P > 0.05). Conclusion: Serial Doppler ultrasound examinations were not helpful in predicting acute rejection. abstract_id: PUBMED:8970880 Changes in portal hemodynamics and acute rejection in the first 2 weeks after orthotopic liver transplantation. A prospective Doppler ultrasound study. Rationale And Objectives: To analyze changes in Doppler ultrasound variables of the portal vein in relation to liver biopsy findings, the authors performed a prospective study of 316 Doppler ultrasound examinations in the first 2 weeks after orthotopic liver transplantation on 23 patients. Methods: Recordings were obtained daily from the portal vein (diameter, maximum velocity, and flow). Correlations were explored between the Doppler ultrasound findings and histologic data. The chi-square test was used to analyze differences in Doppler ultrasound variables in patients with and without acute rejection. Results: In our series of 23 patients, acute rejection was diagnosed by liver biopsy in nine of them (39%). Changes in portal vein diameter, maximum velocity, and flow did not correlate consistently with liver biopsy findings, due to a multifactorial origin. Changes in portal hemodynamics were observed in patients with hepatic artery thrombosis, portal vein stenosis, acute rejection, and sepsis. Conclusions: Although routine screening using Doppler ultrasound proved to be useful for the determination of rapid changes in portal hemodynamics within a short time, serial Doppler ultrasound examinations were not helpful in predicting acute rejection. abstract_id: PUBMED:9646347 Use of the ultrasound contrast medium levovist in after-care of liver transplant patients. Improved vascular imaging in color Doppler ultrasound Introduction: A study was carried out to determine whether an improvement in the detection of vascular signals in patients after orthotopic liver transplantation can be achieved by the use of ultrasound contrast medium in colour Doppler sonography. Material And Methods: In the early postoperative follow-up of liver transplant recipients, 31 colour Doppler sonograms were obtained in 21 patients before and after intravenous injection of the ultrasound contrast agent Levovist (Schering, Berlin). A grading score with four categories was used to evaluate the sonograms with special regard to the visibility of colour and flow signals in the hepatic artery and also in the portal vein and the hepatic veins. The arterial and portal venous signals were evaluated in the hepatic portal and in the left and right lobe. Results: With contrast enhancement significantly better arterial signals were seen in 20 of 31 sonograms for the hepatic portal, in 22 for the right lobe and in 26 for the left lobe. Better portal vein signals were obtained in 17 of 31 examinations for the right lobe and in 16 for the left lobe; only little improvement was obtained for the main stem of the portal vein. For the hepatic veins there was no significant improvement. Conclusions: Early vascular complications after liver transplantation usually occur in the hepatic arteries. With the use of contrast-enhanced colour Doppler sonography, better detection of arterial and peripheral portal signals can be achieved; peripheral portal vein branches can be helpful in finding small arteries. abstract_id: PUBMED:8145614 The value of duplex sonography after orthotopic liver transplantation. Experience with 44 patients In a prospective study, 44 patients (11 women, 33 men) who had received orthotopic liver transplants underwent a total of 196 consecutive duplex Doppler ultrasound examinations. The aim of the study was to evaluate the correlation between the pulsatile flow index (PFI) and the damping index (DI) as far as complications as rejection or cholangitis were concerned. The patients were examined five times each on average. The PFI and DI were measured in the hepatic artery, the portal vein and the hepatic veins. The findings were compared with the clinical course (cholangitis, rejection) and the histomorphological diagnosis as determined in biopsy specimens. In biopsy-proven rejection episodes, the sensitivity of the PFI in the hepatic artery was 69.4%, the specificity 72.2%. The sensitivity of the DI in the hepatic vein was 89.4%, the specificity 89.1%. Combining the two, specificity was more than 90%. PFI and DI in the portal vein bore no apparent relation to clinical course or histomorphological diagnosis. We found duplex Doppler ultrasound extremely beneficial in determining the timing and indication for liver biopsy. In addition, this simple examination, which can be performed as often as desired, accurately shows the transplanted liver's response to measures taken to counter rejection. abstract_id: PUBMED:28657844 Doppler Parameters of the Hepatic Artery as Predictors of Graft Status in Pediatric Liver Transplantation. Objective: The purpose of this article is to determine whether hepatic artery (HA) Doppler parameters can be used to predict pediatric liver graft status in the immediate postoperative (< 10 days) period and in the first year after transplant. Materials And Methods: A retrospective review of the liver transplant database was performed to review operative, clinical, laboratory, histopathologic, and ultrasound findings at three time points after surgery: 3 days, 3 months, and 1 year. The association between HA velocity and resistive index (RI) with graft status was assessed. Results: Of 120 children (54 girls and 66 boys; mean age, 3 years 2 months) enrolled, 70 had satisfactory graft status at the 1-year follow-up examination. HA velocity of 50-200 cm/s at 3 days was associated with normal graft status during the immediate postoperative period (p = 0.003), at 3 months (p = 0.0653), and at the 1-year follow-up examination (p = 0.0268). Vascular and biliary complications in the immediate postoperative period were more frequent at an HA velocity of 201-300 cm/s (p = 0.0024). There was a significant association between RI at 3 days and graft status in the immediate postoperative period (p = 0.0308), with an RI less than 0.5 associated with vascular complications (p = 0.0116). Conclusion: An RI less than 0.5 is associated with vascular complications. An HA velocity of 50-200 cm/s and an RI of 0.5-0.8 are associated with normal graft status in children within the first year after transplant. Values outside this range, or in this range along with a constellation of other ultrasound findings and clinical status, support the initiation of additional more-specific investigations that may help in early treatment to prevent complications. abstract_id: PUBMED:1395375 The role of hepatic vein Doppler in diagnosing acute rejection following paediatric liver transplantation. Serial Doppler ultrasound examinations of the hepatic veins were performed on 50 consecutive paediatric liver transplants. Damping of the normally pulsatile signal was observed in 23 of the 32 biopsy-proven episodes of rejection. In 10 episodes, the reduction in hepatic vein pulsatility preceded clinical and biochemical evidence of rejection by up to 36 h. Seven cases had damped signals throughout the post-operative period which precluded assessment by this method. In two patients the hepatic vein signals remained pulsatile despite rejection, one patient having unsuspected tricuspid regurgitation, and the other a stenotic IVC anastomosis. In the 35 liver transplants with normal pulsatility, hepatic vein Doppler proved to be a valuable indicator of acute rejection during the first 2 weeks following transplantation (sensitivity 92%, specificity 100%, positive predictive value 100% and negative predictive value 83%). abstract_id: PUBMED:9663281 Liver transplant rejection: value of hepatic vein Doppler waveform analysis. Background: To determine whether abnormal hepatic vein Doppler tracings can be used to predict liver transplantation rejection. Methods: A total of 158 hepatic vein Doppler tracings were obtained on 93 postliver transplant patients (63 patients without rejection and 30 patients with biopsy-proven rejection). Hepatic vein Doppler tracings were scored according to an established grading system (0 = normal triphasic waveform, 1 = dampened waveform, with loss of flow reversal, 2 = completely flat waveform). The hepatic vein Doppler tracings were then correlated with biopsy findings. Results: In the group of 63 patients without rejection, 124 Doppler examinations were performed and graded as follows: 0 = 87 (70%), 1 = 31 (25%), and 2 = 6 (5%). In the group of 30 patients with biopsy-proven rejection, 34 Doppler examinations were performed and graded as follows: 0 = 16 (47%), 1 = 14 (41%), and 2 = 4 (12%). The sensitivity of abnormal hepatic vein Doppler tracings for detection of rejection was 53% and the specificity was 70%. The positive predictive value of an abnormal hepatic vein Doppler tracing was 33% and the negative predictive value of a normal Doppler tracing was 84%. Conclusions: Abnormal hepatic vein Doppler tracings are observed in patients with and without liver transplant rejection. Abnormal tracings cannot be used to predict liver transplant rejection. abstract_id: PUBMED:7663136 Postoperative liver allograft dysfunction: the use of quantitative duplex Doppler signal analysis in adult liver transplant patients. 40 patients after orthotopic liver transplantation were prospectively analysed by serial quantitative duplex Doppler signal analysis (DDSA) to quantify the abnormalities of Doppler waveform of the hepatic artery (HA), the portal vein (PV) and the hepatic vein (HV) under various conditions of graft dysfunction. Quantitative analysis of the HA, PV and HV was obtained before, during and after allograft dysfunction by different Doppler angle independent parameters. The results obtained later on were correlated with clinical and laboratory data, cytological and histological findings of liver core biopsy and quantitative DDSA data of healthy transplanted volunteers. The increase of the resistive index of Pourcelot calculated for the hepatic artery (HA-RI) was found to be significant in early graft reperfusion reaction (p < 0.01). No correlation was found between the HA-RI and acute allograft rejection. Different patterns of damping quantified by the damping index (DI = minimum velocity shift/maximum velocity shift) for the portal vein (PV-DI) and the hepatic vein (HV-DI) Doppler signal were observed under various conditions of allograft dysfunction. Acute rejection was identified by premature decrease of PV-DI and increase of HV-DI (p < 0.01) with a sensitivity of 75%, a specificity of 91%, a positive predictive value (ppv) of 75% and a negative predictive value (npv) of 91%. Chronic allograft rejection was not associated with an increase of HV-DI but only with a significant decrease of PV-DI (p < 0.01), with a sensitivity of 80%, a specificity of 95%, ppv of 95%, npv of 98%.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:27819194 Role of Doppler Ultrasonography in Defining Normal and Abnormal Graft Hemodynamics After Living-Donor Liver Transplant. Objectives: Our aim was to investigate the early changes that occur after graft perfusion in living-donor liver transplant by Doppler ultrasonography. Materials And Methods: We prospectively evaluated liver grafts of 30 patients who underwent living-donor liver transplant during an 18-month period and who were followed for 1 year postoperatively. The hepatic artery peak systolic velocity, resistivity index, portal vein velocity, portal vein anastomotic velocity ratio, and hepatic vein pattern were compared after excluding patients who developed vascular complications and acute rejection episodes. Results: We observed intraoperative increases in the mean hepatic artery peak systolic velocity (96.3 ± 65 cm/s), the resistivity index (0.78 ± 0.091), and the portal vein velocity (99.6 ± 48 cm/s), which started to normalize after 2 weeks. In comparing the mean portal vein velocity, portal vein anastomotic velocity ratio, hepatic artery peak systolic velocity, and resistivity index after excluding 5 patients who developed vascular complications, we observed overall significance levels of P < .001, P = .039, P < .001, and P = .040. After we excluded 9 patients who developed acute rejection, our comparison of the portal vein velocity, hepatic artery peak systolic velocity, and resistivity index showed overall significance (P < .001, P < .001, and P = .043). Conclusions: Early and transient increases in portal vein velocity, anastomotic velocity ratio, hepatic artery peak systolic velocity, and resistivity index are common after living-donor liver transplant, with significant declines in the first 2 weeks posttransplant. abstract_id: PUBMED:20412097 Non-invasive ICG-clearance: a useful tool for the management of hepatic artery thrombosis following liver transplantation. Background: The clinical presentation of hepatic artery thrombosis (HAT) post-liver transplantation (LT) varies considerably. Doppler ultrasonography (Doppler US) is the first line investigation, with a diagnostic sensitivity for HAT as high as 92%. Because indocyanine green (ICG) elimination from the blood depends among other factors on the hepatic blood flow, we hypothesized that plasma disappearance rate of indocyanine green (PDR-ICG) can be influenced by the flow in the hepatic artery. Thus, we evaluated the role of PDR-ICG measurement in HAT diagnosis in post-LT patients. Patients And Methods: Fourteen liver transplant patients with no visible flow in the hepatic artery (Doppler US) were identified. Of the 14, seven patients had HAT confirmed by CT-angiography. The PDR-ICG measurement, an investigation routinely used in our center, was performed in all 14 patients. Results: The PDR-ICG in patients with HAT was significantly lower than in patients without HAT (5.8 ± 4.3 vs. 23.8 ± 7.4%/min, p= 0.0009). In patients with HAT, after the revascularization, the PDR-ICG value increased (5.8 ± 4.3 vs. 15.6 ± 3.5%/min, p = 0.006). Conclusion: The ICG elimination may be an adjunct diagnostic tool in the management of patients with suspected HAT following LT. Answer: Based on the provided abstracts, Doppler ultrasound of the hepatic artery and vein performed daily in the first two weeks after orthotopic liver transplantation does not appear to be particularly useful for the diagnosis of acute rejection. Several studies have investigated the correlation between Doppler ultrasound findings and acute rejection with varying results: 1. One study found that serial Doppler ultrasound examinations showed a significant increase in the resistive index (RI) in 50% of patients and in the damping index (DI) in 61% of patients. However, no statistically significant correlation could be found between these Doppler ultrasound variables and acute rejection, suggesting that Doppler ultrasound was not helpful in predicting acute rejection (PUBMED:8675425). 2. Another study analyzing changes in portal vein hemodynamics also found no consistent correlation with liver biopsy findings, indicating that while Doppler ultrasound was useful for determining rapid changes in portal hemodynamics, it was not helpful in predicting acute rejection (PUBMED:8970880). 3. A study using the ultrasound contrast medium Levovist found that contrast-enhanced color Doppler sonography improved the detection of vascular signals, which could be beneficial for early detection of vascular complications after liver transplantation, but the study did not directly address the utility of Doppler ultrasound in diagnosing acute rejection (PUBMED:9646347). 4. A study evaluating the pulsatile flow index (PFI) and damping index (DI) found that combining these two indices could provide specificity of more than 90% for complications such as rejection or cholangitis, suggesting some potential utility in the timing and indication for liver biopsy (PUBMED:8145614). 5. In pediatric liver transplantation, certain hepatic artery (HA) Doppler parameters were associated with normal graft status and could predict vascular complications, indicating that Doppler ultrasound may have some predictive value in this specific patient population (PUBMED:28657844). 6. A study on pediatric liver transplants found that hepatic vein Doppler was a valuable indicator of acute rejection during the first 2 weeks following transplantation, with high sensitivity and specificity (PUBMED:1395375). 7. However, another study concluded that abnormal hepatic vein Doppler tracings cannot be used to predict liver transplant rejection, as they were observed in patients both with and without rejection (PUBMED:9663281). 8.
Instruction: Can care of elderly be measured? Abstracts: abstract_id: PUBMED:19513178 Planning elderly and palliative care in Montenegro. Introduction: Montenegro, a newly independent Balkan state with a population of 650,000, has a health care reform programme supported by the World Bank. This paper describes planning for integrated elderly and palliative care. Description: The current service is provided only through a single long-stay hospital, which has institutionalised patients and limited facilities. Broad estimates were made of current financial expenditures on elderly care. A consultation was undertaken with stakeholders to propose an integrated system linking primary and secondary health care with social care; supporting people to live, and die well, at home; developing local nursing homes for people with higher dependency; creating specialised elderly-care services within hospitals; and providing good end-of-life care for all who need it. Effectiveness may be measured by monitoring patient and carers' perceptions of the care experience. Discussion: Changes in provision of elderly care may be achieved through redirection of existing resources, but the health and social care services also need to enhance elderly care budgets. The challenges for implementation include management skills, engaging professionals and political commitment. Conclusion: Middle-income countries such as Montenegro can develop elderly and palliative care services through redirection of existing finance if accompanied by new service objectives, staff skills and integrated management. abstract_id: PUBMED:17686412 Evaluation of the care of the elderly program in frail elderly individuals with COPD in primary care centers in Sabadell (Spain) Objective: To evaluate the effectiveness of the "Care of the Elderly" program developed by primary care teams in improving the quality of life of frail elderly individuals with a diagnosis of chronic obstructive pulmonary disease (COPD) living in Sabadell (Spain) compared with that of patients receiving routine care. Method: A quasi-experimental study with an experimental and a control group was performed in the nursing consultation service of 6 randomly selected primary care centers in Sabadell. The variables studied were quality of life measured by the EuroQuol (EQ-5D), the impact of COPD on health status measured by the St. George's respiratory questionnaire (SGRQ), the number of visits to the center and the emergency department, and hospital admissions. The study was performed over a 2-year period. Results: The mean age of the 203 patients studied was 74.94 years (standard deviation [SD] = 6.11). The intervention was not effective in improving quality of life as measured by the EQ-5D, or in reducing the impact of COPD on health status measured by the SGRQ, or the number of visits to the center and hospital admissions. A statistically significant impact was found on the number of visits to the emergency department (p = 0.03). Conclusions: The "Care of the Elderly" program in frail elderly individuals with COPD was not effective. The present study is of use in identifying aspects that are ineffective so that nurses designing future programs and studies can include new aspects such as treatment adherence, smoking cessation, and patient satisfaction. abstract_id: PUBMED:29935371 Self-care profiles of the elderly institutionalized in Elderly Care Centres. The concept of self-care in the elderly has been frequently associated with autonomy, independence and personal accountability. Self-care practices are a result of individual lifestyles and paths adapted to the circumstances and expectations of the elderly. Based on the model by Backman and Hentinen (1999), the present study attempts to categorically describe the types of self-care of the elderly. This is an exploratory study, transversal, of a quantitative nature. The sample was comprised of 313 participants, randomly selected among Elderly Care Centres in the interior North of Portugal. The Portuguese version of Self-care of Home Dwelling Elderly was used for data collection. The results call for a replication of the study, using the theoretical derivation of the four self-care profiles (responsible, formally guided, independent and abandoned). abstract_id: PUBMED:38156423 The care needs of the elderly in China's elderly care institutions: a narrative synthesis. China's population is ageing, affecting trends in social development and basic national conditions. More attention must be paid to the lack of care needs assessments for the elderly in China's pension institutions. This paper discusses a systematic evaluation of the care needs of the elderly in China's elderly care institutions. Literature was collected and synthesized after a search of the Web of Science, PubMed, and other databases for works published up to August 2021. Relevant content is proposed, including the name of the first author, publication date, study area, and sample size. Exactly 18 articles were included in the literature, documents that reported on a total of 7277 elderly people. The results showed a combined demand rate of primary care needs ≥50%. The top five needs included mental/psychological (76%), tranquillity/care (73%), living/environmental (71%), medical treatment (64%), and preventive healthcare (64%). The combined demand rate of secondary care needs was ≥50%. The top five needs included 79% for room/laundry/cleaning, 77% for psychological comfort and nursing, 73% for end-of-life care, 70% for disease diagnosis and treatment, and 69% for physical examination. The health needs of older people are diverse and focus mainly on mental/psychological, tranquility/care, living/environmental (71%), pharmacotherapy, and preventive healthcare. abstract_id: PUBMED:34486428 The Care-of-the-Elderly Health Guide. Introduction: The Evidence-Based Care of the Elderly Health Guide is a clinical guide with cross-references for care recommendations. This guide is an innovative adaptation of the Rourke Baby Record to support elderly care. In 2003, the guide was published with an endorsement from the Health Care-of-the-Elderly Committee of the College of Family Physicians of Canada. Since then, physicians have used the guide as a checklist and a monitoring tool for care to elderly patients. Objective: We will update the 2003 Care-of-the-Elderly Health Guide with current published evidence-based recommendations. Methods: This was a mixed methods study consisting of (1) the creation of a list of topics and corresponding guidelines or recommendations, (2) two focus group discussions among family physicians (n = 12) to validate the list for relevance to practice, and (3) a modified Delphi technique in a group of ten experts in Care of the Elderly and geriatrics to attain consensus on whether the guidelines/recommendations represent best practice and be included. Results: The initial list contained 43 topics relevant to family practice, citing 49 published guidelines or recommendations. The focus group participants found the list of topics and guidelines potentially useful in clinical practice and emphasized the need for user-friendliness and clinical applicability. In the first online survey of the modified Delphi technique, 93% (63/66) of the references attained consensus that these represented standards of care. The other references (3/66) attained consensus in the second online survey. The final list contained 47 topics, citing 66 references. Conclusion: The Care-of-the-Elderly Health Guide is a quick reference to geriatric care, reviewed for relevance by family physicians and a panel of experts. The Guide is intended to be used in primary care practice. abstract_id: PUBMED:34498438 Five years University Practice of Elderly Care: users, care provided, costs and experiences The increasing number of elderly people living at home demands a strengthening of primary care for (vulnerable) older adults. Therefore, in 2015 the VU University Medical Centre in Amsterdam founded the Universitaire Praktijk Ouderengeneeskunde (UPO) (University Practice for Elderly Care). This practice makes the expertise of the Elderly Care physician (SO) available in primary care through a close and easy accessible collaboration between the SO and the General Practitioner. The aim of this research is an evaluation of the UPO, in order to determine the added value of this care innovation and to identify areas for improvement. This research consists of a quantitative part in which the characteristics of the 190 treated UPO patients have been assessed. Also, costs have been estimated of UPO care compared to expected usual care. The qualitative part of this study consists of 22 interviews with the most important UPO stakeholders. The results of this study show that 1) the UPO seems to meet a demand from general practitioners to support them in the care of vulnerable elderly people with mostly cognitive disorders, 2) this care seems to save costs and 3) the UPO was appreciated by most of those involved. The current results call for applying this care innovation in other regions and to evaluate it on a larger scale. abstract_id: PUBMED:23494688 Proactive and structured care for the elderly in primary care Background: Currently, primary care for the older, vulnerable patient is reactive, fragmented and does not meet patients needs. Given the expected increase of home-dwelling frail elderly people a transition is needed to proactive and integrated care. Methods: In the described study, we explore two innovative interventions in primary care. First we describe a newly developed screening and monitoring program for frail elderly patients based on routine care information in general practice. Second, we describe a multidisciplinary intervention program by trained nurses for frail elderly patients in general practice. The effectiveness of the interventions is examined in a three-armed, cluster randomized trial, taking place in 58 primary care practices in Utrecht, the Bilt and Maarsenbroek. Results: Three thousand eight patients are included. Primary outcome measure is the impact of the interventions on the daily activities, measured with the Katz questionnaire. Secondary outcomes measures are the quality of life, mortality, recording in a care or nursing home, visit to an emergency room or outpatient unit, recording in the hospital and volunteer caregivers tax. abstract_id: PUBMED:25650032 Timing of Advance Care Planning in frail elderly patients: when to start? Advance Care Planning (ACP) is the process of discussing and recording patient preferences concerning goals for end-of-life care and to facilitate decision-making. ACP is an essential element of care for frail elderly patients because frailty increases the risks of negative health outcomes and loss of function. In this article, we present three patient cases to illustrate how general practitioners (GPs) can perform ACP and to demonstrate the importance of early and iterative end-of-life discussions with frail elderly patients. Good timing is decisive for the success of the intervention. GPs are in a key position to identify and discuss ACP matters at an early stage, supported by the geriatrician if necessary. Posing the 'surprise question' has proved helpful to determine timing. Complex ACP interventions contribute to care which is better adapted to the needs of frail elderly patients. abstract_id: PUBMED:21925408 ICU and critical care outreach for the elderly. Average life expectancy has increased over the past century resulting in a shift in world population demographics. There are more elderly people alive now than throughout all of human history. The burden of comorbid disease and dependency rises with age and has been shown to independently predict need for hospitalization, institutionalization and mortality. Accordingly, there are more elderly persons living longer in more tenuous states of health. The relative proportion of patients admitted to hospital and intensive care who are elderly is considerable and recent data have suggested an increasing trend. There is likely significant selection bias amongst elderly patients triaged for access to finite critical care services. In fact, data have shown that elderly patients often receive less intensive therapy and have greater support limitations when admitted to an intensive care environment. "Chronologic" age has been an inconsistent predictor of prognosis in elderly patients who present with critical illness. However, surrogate measures of "physiologic" age are likely more relevant, such as an assessment of frailty, to aid in prognostication and informed decision-making and that ultimately correlate not only with short-term survival but additional outcomes such as functional status, institutionalization and quality of life after an episode of critical illness. There is a paucity of literature on the specific interaction of rapid response systems (RRS) and hospitalized "at-risk" elderly patients; however, the RRS may have particular application for this cohort. In particular, data have emerged to suggest mature ICU-based RRS respond commonly to elderly patients and are increasingly participating in end-of-life care discussions. In addition, another aspect of the RRS, critical care outreach (CCO), may facilitate the identification of elderly patients for timely goal-oriented advanced care planning prior to clinical deterioration. abstract_id: PUBMED:25809819 Structure and process quality as predictors of satisfaction with elderly care. The structure versus process approach to quality of care presented by Donabedian is one of the most cited ever. However, there has been a paucity of research into the empirical validity of this framework, specifically concerning the relative effects of structure and process on satisfaction with elderly care as perceived by the older persons themselves. The current research presents findings from a national survey, including a wide range of quality indicators for elderly care services, conducted in 2012 at the request of the Swedish National Board of Health and Welfare in which responses from 95,000 elderly people living in 324 municipalities and districts were obtained. The results revealed that the only structural variable which significantly predicted quality of care was staffing, measured in terms of the number of caregivers per older resident. More interestingly, process variables (e.g. respect and access to information) explained 40% and 48% of the variance in satisfaction with care, over and above the structural variables, in home care and nursing homes respectively. The findings from this large nationwide sample examining Donabedian's model suggest that quality in elderly care is primarily determined by factors pertaining to process, that is, how caregivers behave towards the older persons. This encourages a continued quality improvement in elderly care with a particular focus on process variables. Answer: Yes, care of the elderly can be measured through various methods and indicators. Effectiveness in elderly care may be assessed by monitoring patient and carers' perceptions of the care experience, as suggested in the planning of integrated elderly and palliative care in Montenegro (PUBMED:19513178). Additionally, in the evaluation of the "Care of the Elderly" program in Sabadell, Spain, quality of life was measured using the EuroQuol (EQ-5D), the impact of COPD on health status was measured by the St. George's respiratory questionnaire (SGRQ), and the number of visits to the center, emergency department, and hospital admissions were recorded (PUBMED:17686412). Furthermore, self-care profiles of the elderly can be categorized and described based on individual lifestyles and circumstances, as attempted in a study using the Self-care of Home Dwelling Elderly tool in Portugal (PUBMED:29935371). In China, a systematic evaluation of the care needs of the elderly in elderly care institutions was conducted, which included assessing primary and secondary care needs such as mental/psychological, tranquility/care, living/environmental, medical treatment, and preventive healthcare (PUBMED:38156423). The Care-of-the-Elderly Health Guide is another tool that provides evidence-based recommendations for geriatric care and is reviewed for relevance by family physicians and a panel of experts, serving as a quick reference in primary care practice (PUBMED:34486428). Additionally, the evaluation of the University Practice for Elderly Care in Amsterdam assessed the characteristics of treated patients, estimated costs, and gathered experiences through interviews with stakeholders to determine the added value of care innovations (PUBMED:34498438). Moreover, proactive and structured care interventions for the elderly in primary care have been explored, with outcomes measured by daily activities, quality of life, mortality, and other health-related factors (PUBMED:23494688). The timing of Advance Care Planning in frail elderly patients is also crucial, and general practitioners can perform ACP to ensure care is better adapted to the needs of frail elderly patients (PUBMED:25650032). Lastly, the structure and process quality as predictors of satisfaction with elderly care have been studied, with findings indicating that process variables such as respect and access to information significantly predict satisfaction with care (PUBMED:25809819). In summary, care of the elderly can be measured through various quantitative and qualitative methods, including patient quality of life assessments, care needs evaluations, satisfaction surveys, and the monitoring of health outcomes and service utilization.
Instruction: Is elective neck dissection necessary in cases of laryngeal recurrence after previous radiotherapy for early glottic cancer? Abstracts: abstract_id: PUBMED:25418930 Is elective neck dissection necessary in cases of laryngeal recurrence after previous radiotherapy for early glottic cancer? Objectives: To assess the clinical utility of elective neck dissection in node-negative recurrent laryngeal carcinoma after curative radiotherapy for initial early glottic cancer. Methods: A retrospective review was undertaken of 110 consecutive early glottic cancer patients who developed laryngeal recurrence after radiotherapy (34 recurrent T1, 36 recurrent T2, 29 recurrent T3 and 11 recurrent T4a) and received salvage laryngeal surgery between 1995 and 2005. Results: Six patients presented with laryngeal and neck recurrence and underwent salvage laryngectomy with therapeutic neck dissection, 97 patients with recurrent node-negative tumours underwent salvage laryngeal surgery without neck dissection and only 7 underwent elective neck dissection. No occult positive lymph nodes were documented in neck dissection specimens. During follow up, only three patients with neck failure were recorded, all in the group without neck dissection. There was no significant association between the irradiation field (larynx plus neck vs larynx) and the development of regional failure. A higher rate of post-operative pharyngocutaneous fistula development occurred in the neck dissection group than in the group without neck dissection (57.2 per cent vs. 13.4 per cent, p = 0.01). Multivariate logistic regression analysis showed that early (recurrent tumour-positive, node-positive) or delayed (recurrent tumour-positive, node-negative) neck relapse was not significantly related to the stage of the initial tumour or the recurrent tumour. An age of less than 60 years was significantly associated with early neck failure (recurrent tumour-positive, node-positive). Conclusion: Owing to the low occult neck disease rate and high post-operative fistula rate, elective neck dissection is not recommended for recurrent node-negative laryngeal tumours after radiation therapy if the initial tumour was an early glottic cancer. abstract_id: PUBMED:22058599 Effectiveness and pitfalls of elective neck dissection in N0 laryngeal cancer. The aim of the study was to evaluate the efficacy and potential pitfalls of selective neck dissection of levels II-IV in controlling occult neck disease in clinically negative neck (cN0) of patients with laryngeal squamous cell carcinoma. Charts of 96 consecutive cN0 laryngeal cancer patients undergoing 122 neck dissections at the University of Florence from January 2000 to December 2004 were reviewed. N0 neck was defined with contrast enhanced computed tomography scan. Occult neck disease rate was 12.5%, involvement per level was: 47.6% at level II, 38.1% at level III, 9.5% at level IV. Six patients developed neck recurrence (6.25%) after selective neck dissection of levels II-IV within the first two years after treatment. In conclusion, selective neck dissection of levels II-IV is effective in N0 laryngeal squamous cell carcinoma; posterior limits of surgical resection are missing therefore if post-operative radiation is required, the field should be extended beyond the dissected levels. The low incidence of occult neck disease indicates the need to refine treatment strategy, restricting elective neck dissection only to supraglottic T2 with epilaryngeal involvement, supraglottic T3-4 and glottic T4 tumours, and considering a "wait and see" protocol implemented with imaging techniques and cytological assessments for other lesions. abstract_id: PUBMED:18182969 Selective neck dissection (IIA, III): a rational replacement for complete functional neck dissection in patients with N0 supraglottic and glottic squamous carcinoma. Introduction: The purpose of this paper is to determine the optimal elective treatment of the neck for patients with supraglottic and glottic squamous carcinoma. During the past century, various types of necks dissection have been employed including conventional and modified radical neck dissection (MRND), selective neck dissection (SND) and various modifications of SND. Materials And Methods: A number of studies were reviewed to compare the results of MRND and SND in regional recurrence and survival of patients with supraglottic and glottic cancers, as well as the distribution of lymph node metastases in these tumors. Results: Data from seven prospective, multi-institutional, pathologic, and molecular analyses of neck dissection specimens, obtained from 272 patients with laryngeal squamous carcinoma and clinically negative necks, revealed only four patients (1.4%) with positive lymph nodes at sublevel IIB. Data was also collected from three prospective, multi-institutional, pathologic and molecular studies of neck dissection specimens which include 175 patients with laryngeal squamous carcinoma (only 2 with subglottic cancer) and clinically negative necks. Only six patients (3.4%) had positive nodes at level IV. Conclusions: SND of sublevel IIA and level III appears to be adequate for elective surgical treatment of the neck in supraglottic and glottic squamous carcinoma. Dissection of level IV lymph nodes may not be justified for elective neck dissection of stage N0 supraglottic and glottic squamous carcinoma. Bilateral neck dissection in cases of supraglottic cancer may be necessary only in patients with centrally or bilaterally located tumors. abstract_id: PUBMED:30839065 The role of elective neck dissection for cT4aN0 glottic squamous cell carcinoma. Objective: The indication for elective neck dissection for patients with clinically T4aN0 (cT4aN0) glottic cancer is not established. The objective of this study was to evaluate the role of elective neck dissection for patients with cT4aN0 glottic cancer. Methods: We assessed patients with cT4aN0 laryngeal squamous cell carcinoma who underwent total laryngectomy between 1998 and 2014 and conducted a retrospective analysis. We considered occult neck metastasis positive when confirmed by histological analysis. When patients with late neck metastases did not undergo therapeutic neck dissection, the presence of occult neck metastasis was judged on the basis of computed tomography. The validity of elective neck dissection for patients with cT4aN0 glottic cancer was assessed from comparisons the rates of occult neck metastases of supraglottic and subglottic cancers, which are generally recommended for elective neck dissection. The distribution of occult neck metastases in glottic cancer is described according to nodal levels. Results: The rate of occult neck metastasis of cT4aN0 laryngeal cancer was 36% (14/39). There were no significant differences among the rates of glottic 7/21 (33%), supraglottic 3/8 (38%) and subglottic 4/10 (40%) cancers. Patients with glottic cancer did not have bilateral Level IIB and ipsilateral Level IV metastasis. One patient had contralateral Level IV metastasis. The metastases rates of ipsilateral Levels IIA, III, and VI were >10%. Conclusions: We think that elective neck dissection is valid for cT4aN0 glottic cancer. Ipsilateral Levels IIA, III and VI should be dissected. The omission of neck dissection for bilateral Levels IIB and IV can be considered when there is a need to avoid complications. abstract_id: PUBMED:16615158 Elective lateral neck dissection for laryngeal cancer in the clinically negative neck. Background: Despite the introduction of modern imaging techniques, it is still difficult to detect microscopic disease in neck nodes. The purpose of this study is to evaluate the efficacy of the lateral neck dissection (LND) for elective treatment of the clinically node negative neck (cN0) in laryngeal squamous cell carcinoma (SCC). Methods: The clinical records of 110 cN0 patients with laryngeal SCC treated in this hospital from January 1997 to December 2002 were reviewed retrospectively. Results: One hundred ten patients received 145 elective LND. Occult metastasis was detected in 22 (20.0%) of this group of patients. The distribution of the 37 positive nodes was as follows: Level II 56.8%; Level III 37.8%; Level IV 5.4%. The 3-year neck recurrence rate estimated by the Kaplan-Meier approach for all cN0 patients (n = 110) was 5.4% [95% CI: 0.0%; 12.5%]. No significant difference in 3-year lymph node recurrence was found between node negative and node positive groups, between supraglottis and glottis groups, or between surgery alone and combined therapy groups. Conclusion: The lateral neck dissection is effective in elective treatment of the neck in patients with laryngeal carcinoma. abstract_id: PUBMED:17633152 Evolution of elective neck dissection in N0 laryngeal cancer. Management of cervical lymph node metastasis is one of the most challenging problems facing clinicians dealing with head and neck cancer. A retrospective evaluation has been made of results in N0 laryngeal cancer patients treated from 1978 to date by comparing historical data reported in related papers previously published by our institution. The medical records of 2207 consecutive patients with cN0 SCC of the larynx were used as the source of data in the present study. Together with primary treatment, 759 (34.4%) received at least unilateral elective neck dissection, while the remaining 1448 (65.6%) were admitted to a wait-and-see protocol. Overall, in the electively dissected patients (ED): 128 (16.9%) cases were submitted to classical radical neck dissection, while 403 (53.1%) cases were submitted to functional neck dissection (FND) and 228 (30.0%) cases to jugular node dissection (JND, removing Level II, III and IV). In 125 of the ED group, a neck procedure on the contralateral N0 neck was associated, of which 15 were RNDs, 35 FNDs and 75 JNDs, respectively. Based on this large series population, the change in the philosophy was evaluated concerning elective neck treatment in N0 laryngeal cancer, from RND through FND towards JND. As far as concerns the reliability as a staging procedure, no statistically significant difference was found between RND, FND and JND (p = 0.794). The 5-year neck recurrence rate, as estimated by the Kaplan Meier, method, for all ED patients, was 7.7%. No significant difference in the rate of 5-year neck recurrence was detected between RND, FND and JND groups (p = 0.178). In the survival curves, no differences, in terms of actuarial survival by Kaplan Meier analysis, were observed, in our series, as far as concerns type of elective neck dissection performed (p = 0.222). In conclusion, following a critical revision of 25 years' experience, at our Institution, in the management of cN0 necks in laryngeal cancer patients, definitive changes were observed in the surgical approach to the treatment of occult disease in cN0 cases. JND, compared to more extensive neck dissections, did not show statistically significant differences in terms of neck control (p = 0.233), in terms of impact on survival (p = 0.122) and in terms of accuracy as staging procedure (p = 0.794). abstract_id: PUBMED:16143563 Is dissection of level IV absolutely necessary in elective lateral neck dissection for clinically N0 laryngeal carcinoma? A prospective study of 73 previous untreated consecutive patients with clinically N0 laryngeal squamous cell carcinoma (SCC) from January 1997 to October 2002 was undertaken to determine whether level IV lymph nodes can be saved in elective lateral neck dissection (LND) performed as a treatment for the N0 neck. The incidence of pathological metastases to level IV lymph nodes was evaluated, as were the incidence of regional recurrence after elective LND, and postoperative complications such as chylous leakage and phrenic nerve paralysis. A total of 142 LNDs were enrolled in this prospective study. The mean number of harvested lymph nodes by level was as follows; 13.1 in level II, 7.1 in level III, and 9.2 in level IV. Pathologic examination revealed nodal involvement in 25 neck specimens (17.6%, 25 of 142). Five necks had lymph nodes which were positive for microscopic metastasis in level IV (3.5%, 5 of 142). These necks were all ipsilateral (6.8%, 5 of 73) and none of the 69 contralateral neck specimens had level IV lymph node metastasis (0%, 0 of 69). With regard to T stage, 3.3% (1 of 30) of ipsilateral necks of T2 tumors exhibited occult metastasis in level IV lymph nodes, 5.9% (2 of 34) for T3 tumors, and 33.3% (2 of 6) for T4 tumors. There were no cases of T1 (n = 3). Separate skip metastasis in level IV lymph nodes was observed in two necks (1.4%, 2 of 142). Four cases of regional recurrence (5.5%, 4 of 73) were observed. Postoperative chylous leakage and phrenic nerve paralysis occurred in four cases (5.5%, 4 of 73) and two cases (2.7%, 2 of 73), respectively. The results of the present study demonstrate the rare incidence of level IV occult lymph node metastasis, as well as infrequent nodal recurrence after elective LND in the treatment of clinically N0 laryngeal SCC. Therefore, dissection of level IV lymph node pads, especially in the ipsilateral neck of early T staged tumors or the contralateral neck, may be unnecessary for the treatment of laryngeal SCC patients with a clinically N0 neck. abstract_id: PUBMED:23832261 Elective treatment of the neck for second primary tumors of the head and neck. The aim of this study was to define the role of elective neck dissection in patients with a second N0 head and neck squamous cell carcinoma (HNSCC). We carried out a retrospective study in 74 patients with a second N0 HNSCC treated with an elective neck dissection. Thirteen patients (17.6%) had occult neck node metastases. The risk of occult neck nodes was low for patients with a second glottic tumor (0%), and for patients with non-glottic T1-T2 tumors who had received previous radiotherapy in the neck (5.3%). Patients with non-glottic locally advanced tumors (T3-T4) and non-glottic T1-T2 tumors who had not received previous radiotherapy in the neck had a risk of occult neck nodes of 28.1 and 33.3%, respectively. Elective neck dissection could be omitted in patients with glottic tumors and in patients with an early tumor (T1-T2) who had received previous radiotherapy in the neck. abstract_id: PUBMED:22217501 Is elective neck dissection necessary for the surgical management of T2N0 glottic carcinoma? Objective: Evaluation of neck metastasis incidence in surgically treated T2N0M0 glottic carcinoma patients and discussion the necessity of elective neck dissection in this subset of larynx carcinomas. Materials And Methods: The patients who were staged clinically and radiologically as T2N0M0 glottic carcinoma having surgical intervention for their primary tumor and elective neck dissection between March 1996 and July 2009 with at least 2 years of follow up were included in the study. The recordings of patients were evaluated retrospectively for primary tumor location, vocal cord mobility, type of laryngectomy and neck dissection, results of histopathological examination, number of dissected lymph nodes for each specimen and for local and regional failure during the follow up. Results: In the 13-year study period, 24 consecutive patients with a mean age of 56.4 were treated surgically with 20 frontolateral, three vertical laryngectomies and one cricohyoido-epiglottopexy; 19 lateral and 5 functional neck dissections were performed for the necks. Histopathological examinations of the neck dissection specimens revealed an average of 32 lymph nodes (8-65) and there was no metastasis in any of these specimens. After an average 58 months of follow-up, only one patient had local failure and no patients had regional failure. Conclusion: As occult metastasis was not detected in any of the neck dissection specimens in cT2N0M0 patients and no regional failure was experienced during the follow-up period, it was concluded that the neck could be followed up without performing elective neck dissection in the surgical management of cT2N0M0 glottic carcinoma patients. abstract_id: PUBMED:11000683 Elective dissection in cancer of the larynx Objective: 1) To determine if elective neck dissection (END) is more effective than surgery without END in patients with cancer and a clinically negative neck (N0). 2) To determine if selective posterolateral neck dissection is effective in these patients. Patients And Methods: The study included 74 patients with laryngeal cancer and no palpable nodes who were treated surgically at the ENT Department of the Universidad Complutense de Madrid between 1994 and 1997. Thirty-seven patients underwent surgery alone (Group A) and 37 patients underwent laryngeal surgery and elective neck dissection (Group B). No patient underwent irradiation. Minimum follow-up was 24 months. Results: Cervical recurrence was observed in 4 (11%) patients who underwent laryngeal surgery alone and in 2 (5%) patients who underwent laryngeal surgery and elective neck dissection. Conclusions: Laryngeal surgery with elective neck dissection was more effective than laryngeal surgery without END in patients with laryngeal cancer and a clinically negative neck. Selective lateral neck dissection was effective for the elective treatment of these patients. Answer: Based on the provided abstracts, elective neck dissection does not appear to be necessary in cases of laryngeal recurrence after previous radiotherapy for early glottic cancer. A retrospective review of 110 consecutive early glottic cancer patients who developed laryngeal recurrence after radiotherapy and received salvage laryngeal surgery showed that no occult positive lymph nodes were documented in neck dissection specimens. Furthermore, only three patients with neck failure were recorded, all in the group without neck dissection. The study concluded that due to the low occult neck disease rate and high post-operative fistula rate, elective neck dissection is not recommended for recurrent node-negative laryngeal tumors after radiation therapy if the initial tumor was an early glottic cancer (PUBMED:25418930). Additionally, other studies have suggested that elective neck dissection may be omitted in certain cases. For instance, a study on second primary tumors of the head and neck found that the risk of occult neck nodes was low for patients with a second glottic tumor (0%), indicating that elective neck dissection could be omitted in these patients (PUBMED:23832261). Another study evaluating neck metastasis incidence in surgically treated T2N0M0 glottic carcinoma patients found no occult metastasis in any neck dissection specimens and no regional failure during follow-up, leading to the conclusion that elective neck dissection may not be necessary in the surgical management of cT2N0M0 glottic carcinoma patients (PUBMED:22217501). Overall, the evidence suggests that elective neck dissection may not be necessary for recurrent node-negative laryngeal tumors after radiation therapy for early glottic cancer, and a more conservative "wait and see" approach could be considered, potentially avoiding the risks and complications associated with elective neck dissection.
Instruction: Isotretinoin therapy: any need for laboratory assessment? Abstracts: abstract_id: PUBMED:38186323 A novel predictive method for risk stratification in acne patients receiving isotretinoin: an analysis of laboratory abnormalities and changes in inflammatory parameters. Introduction: Isotretinoin is a widely used, effective medication for moderate to severe acne. It is typically used for several months, which necessitates regular laboratory monitoring. However, consensus on the optimal assessment frequency is lacking. Method: This is a single-center retrospective study on 1182 patients who received isotretinoin for acne at the Dermatology Clinic in Jordan University Hospital over 5 years. Results: Of the 1182 patients, 892 (76.57% females) met the inclusion criteria. An increase in the proportion of patients with abnormal triglycerides and total cholesterol levels from baseline to the sixth month was observed (p < 0.05). Conversely, differences in the number of patients with abnormal AST, ALT, and CBC were not found throughout treatment (p > 0.05). Moreover, there was a decrease in the neutrophil-to-lymphocyte ratio (NLR) ratio and systemic inflammatory index (SII) after the sixth month of isotretinoin treatment compared to the baseline (p = 0.012 and p = 0.021, respectively). Conclusions: We found that a baseline cholesterol level of 163.9 mg/dl and a baseline triglycerides level of 85.5 mg/dL are highly specific and sensitive in detecting grade 1 abnormalities at the one-month follow-up. This novel prediction approach serves as an effective risk stratification method for isotretinoin acne patients. abstract_id: PUBMED:15730082 Isotretinoin therapy: any need for laboratory assessment? Background: Recently studies showed the no need for laboratory follow up of patients on isotretinoin therapy. The aim of this study is to assess this issue. Patients And Methods: A retrospective study on-400 patients was performed to analyze the laboratory data before and after isotretinoin therapy of acne vulgaris patients. All patients received isotretinoin at a dose of 1mg/kg for 20 weeks. Results: AST was elevated in 25/400(6.25 %), ALT was elevated in 10/400(2.5 %). Cholesterol was elevated in 55/400(13.75%). Triglycerides were elevated in 20/400(5 %). Conclusion: This study supports the previous evidence for the no need for performing laboratory investigations for patients on isotretinoin therapy. abstract_id: PUBMED:27189824 Standardized laboratory monitoring with use of isotretinoin in acne. Background: Laboratory monitoring for adverse effects to isotretinoin occurs with variability. Standardization of laboratory monitoring practices represents an opportunity to improve quality of care. Objective: We sought to develop an evidence-based approach to laboratory monitoring of patients receiving isotretinoin therapy for acne. Methods: We reviewed laboratory data from 515 patients with acne undergoing 574 courses of isotretinoin from March 2003 to July 2011. Frequency, timing, and severity of abnormalities were determined. Results: Clinically insignificant leukopenia or thrombocytopenia occurred in 1.4% and 0.9% of patients, respectively. Elevated liver transaminases were detected infrequently and not significantly increased compared with baseline detection rates (1.9% vs 1.6% at baseline). Significant elevations occurred with triglyceride (19.3%) and cholesterol (22.8%) levels. The most severe abnormalities were grade 2 (moderate). Mean duration of treatment before abnormalities were detected was 56.3 days for hypertriglyceridemia, 61.9 days for alanine transaminitis, and 50.1 days for hypercholesterolemia. Limitations: This was a single-center experience examining variable isotretinoin laboratory monitoring practices. Conclusions: In healthy patients with normal baseline lipid panel and liver function test results, repeated studies should be performed after 2 months of isotretinoin therapy. If findings are normal, no further testing may be required. Routine complete blood cell count monitoring is not recommended. abstract_id: PUBMED:31228528 The clinical utility of laboratory monitoring during isotretinoin therapy for acne and changes to monitoring practices over time. Background: As a result of concerns about hypertriglyceridemia, liver enzyme abnormalities, and leukopenia during isotretinoin therapy for acne, patients are often monitored closely with routine laboratory assessments, although the value of this practice has been questioned. Methods: We conducted a cohort study of patients receiving isotretinoin for acne between January 1, 2008, and June 30, 2017, using the OptumInsights Electronic Health Record Database (Optum, Eden Prairie, MN) to evaluate the frequency of laboratory abnormalities. Poisson regression was used to evaluate for changes to the frequency of routine laboratory monitoring over time. Results: Among 1863 patients treated with isotretinoin, grade 3 or greater triglyceride and liver function testing abnormalities were noted in fewer than 1% and 0.5% of patients screened, respectively. No grade 3 or greater cholesterol or complete blood count abnormalities were observed. There were no meaningful changes in the frequency of laboratory monitoring over time. Limitations: Limitations include that we are unable to evaluate the clinical notes to understand the exact clinical decision making when clinicians encountered abnormal laboratory values. Conclusion: Although laboratory abnormalities are rare and often do not influence management, frequent laboratory monitoring remains a common practice. There are opportunities to improve the quality of care among patients being treated with isotretinoin for acne by reducing the frequency of lipid and liver function monitoring and by eliminating complete blood count monitoring. abstract_id: PUBMED:35689471 The utility of laboratory testing for pediatric patients undergoing isotretinoin treatment. Isotretinoin, the most effective treatment for severe cystic acne, involves laboratory monitoring. In this retrospective case series of 130 pediatric patients taking isotretinoin, there were significant increases in cholesterol (143.9 mg/dl to 155.3 mg/dl), triglycerides (81.8 mg/dl to 115.2 mg/dl), and low-density lipoprotein (82.0 mg/dl to 98.1 mg/dl), and a decrease in high-density lipoprotein (50.0 mg/dl to 44.7 mg/dl) from baseline to follow-up (p < .05); there were no significant changes in liver enzymes. None of the patients had clinical sequelae (triglyceride-induced pancreatitis, retinoid-induced hepatotoxicity) related to their abnormal lab values. These findings question the utility of laboratory monitoring for prevention of severe clinical sequelae in pediatric patients, and suggest testing based on individualized risk factors may be more appropriate. abstract_id: PUBMED:30836808 Minimize the regular laboratory monitoring during the systemic isotretinoin treatment: data of 704 patients with acne vulgaris. Backround: Despite extensive usage of systemic isotretinoin in patients with acne for many years, laboratory monitoring protocols for adverse effects depend on the centers and there is no standardized practice for frequency and type of laboratory testing. We aimed to decrease unnecessary monitoring and to standardize our general clinical practice of our department as well as to provide patient comfort and cost saving. Materials and methods: The medical charts of 704 acne patients treated with systemic isotretinoin were reviewed retrospectively. The National Cancer Institute common terminology criteria for adverse events v3.0 grading system was used in order to categorize the laboratory abnormalities of liver functions and lipid levels. Results: All laboratory abnormalities were grade I. Abnormal liver function was seen in 7.2% of the patients (n = 51), maximum values were ALT: 87 IU/L, AST: 97 IU/L (normal values, ALT: 5-41 IU/L, AST: 5-40 IU/L) and median time of the abnormalities in liver function tests was in the second month. Lipid profile abnormalities were detected in 58% of the patients (n = 401). Maximum values during the laboratory monitoring were Tg: 481 mg/dL CHOL: 314 mg/dL, LDL: 259 mg/dL (normal values, Tg: 0-200 mg/dL, CHOL.: 0-200 mg/dL, LDL: 0-100 mg/dL). The median time of lipid abnormalities was in the first month (1-3 months). Abnormalities related to CBC were seen in 8.2% of the patients (n = 58). The median time of CBC abnormality was the second month of treatment. Anemia, leukopenia and thrombocytopenia were seen in 3.4%, 3.7%, and 1.6% of the patients respectively. Conclusion: Clinically insignificant and reversible CBC abnormalities, mild to moderate elevation of liver transaminases and serum lipids are the most common laboratory abnormalities in patients with acne treated with oral isotretinoin. Due to these abnormalities are reversible even the isotretinoin therapy continued, and generally there is no need to discontinuation of treatment due to laboratory abnormalities, frequent biochemical monitoring is not recommended. abstract_id: PUBMED:37123305 Optimal laboratory testing protocol for patients with acne taking oral isotretinoin. Background: Isotretinoin is a widely prescribed drug for various dermatological conditions, especially acne. Blood tests are routinely performed to monitor its side effects. However, the optimal testing schedule remains controversial. Aim: To evaluate the laboratory monitoring tests and schedules of patients with acne taking isotretinoin to determine the most efficient testing regimen. Methods: We retrospectively reviewed electronic medical records from the Dermatology Department of Dongguk University Ilsan Hospital from 2005 to 2020 for patients prescribed isotretinoin for acne who underwent monthly blood tests. Results: Aspartate aminotransferase (AST), alanine aminotransferase (ALT), and triglyceride (TG) levels significantly changed between 5 and 6 mo when the total prescription period and dose variables were considered altogether. The high-density lipoprotein level also significantly changed between 5 and 6 mo. However, low-density lipoprotein (LDL) and total cholesterol levels significantly changed between 1 and 2 mo. Conclusion: We recommend testing AST, ALT, and TG levels once every 5 to 6 mo. We also suggest testing LDL and total cholesterol levels during the first and second months. abstract_id: PUBMED:32656231 Creatine Phosphokinase Values during Low Starting Dose Isotretinoin Therapy. Background: The value of creatine phosphokinase (CPK) monitoring during isotretinoin therapy is still a matter of debate because of the rarity of muscular and kidney damage in this setting. The aim of the study was to investigate the incidence of CPK elevation during a low starting dose isotretinoin regimen and to discuss the clinical approach to patients with increased CPK. Methods: We reviewed the records of the acne patients from 2015 to 2018 at the University of Ferrara and at the University of Modena and Reggio Emilia. Routine clinical and laboratory follow-up was analyzed. Results: The records of 328 patients were included in the study. Abnormal CPK levels were observed in 5.5% of cases. Revised Leeds acne scores and visual assessment scale (VAS) measurements decreased significantly after isotretinoin therapy. Discussion: The adoption of the low starting dose isotretinoin regimen is associated with low incidence of CPK elevation. The finding of CPK over the conventional value of 5 times above the limit was rarely observed and was never associated with kidney damage. Therefore, it should be interpreted as a relatively benign phenomenon that does not require the interruption of isotretinoin therapy and that should be managed mainly through the reassurance of the patient. abstract_id: PUBMED:12171680 Analysis of laboratory data in acne patients treated with isotretinoin: is there really a need to perform routine laboratory tests? Introduction: Isotretinoin has been used to treat acne since 1982. Its current indications in the package insert are limited and many physicians still feel uncomfortable prescribing it because of its side effects. Serum levels of liver enzymes and lipids are carried out as a routine in most clinics both before and during treatment. Aims: Our objective was to evaluate the effect of isotretinoin on serum lipids, liver function and other laboratory parameters in order to assess the necessity to perform routine laboratory tests. Methods: Computerized medical files of 1292 patients in private practice that received isotretinoin for acne were analyzed. Results: 907 patients completed a treatment course of 5 to 9 months. Serum levels of liver enzymes were not elevated to a degree necessitating discontinuation of treatment. Only 1.5% of the patients had serum triglyceride levels above 400 mg%. No laboratory abnormalities were a cause for discontinuation of treatment. During a 6-year follow up only 3.5% of patients received a second course of therapy with isotretinoin. Conclusions: Aside from its teratogenic effect, isotretinoin is a safe and excellent drug for acne therapy. It should be prescribed for any inflammatory acne and in our opinion there is no need for a routine laboratory follow-up in young, healthy patients aside from a pregnancy test in females. At present, isotretinoin should be considered as the drug of choice for moderate to severe acne. abstract_id: PUBMED:37252320 The Cost Utility of Nonpregnancy Laboratory Monitoring for Persons on Isotretinoin Acne Therapy. We sought to project the 1-year cost utility of nonpregnancy laboratory monitoring cessation among patients initiating isotretinoin. We conducted a model-based cost utility analysis comparing (i) current practice (CP) and (ii) cessation of nonpregnancy laboratory monitoring. Simulated 20-year-old persons initiating isotretinoin were maintained on therapy for 6 months, unless taken off because of laboratory abnormalities in CP. Model inputs included probabilities of cell-line abnormalities (0.12%/wk), early cessation of isotretinoin therapy after detection of an abnormal laboratory result (2.2%/wk, CP only), quality-adjusted life-years (0.84-0.93), and laboratory monitory costs ($5/wk). We collected adverse events, deaths, and quality-adjusted life-years and costs (2020 USD) from a health care payer perspective. Over 1 year, and for 200,000 people on isotretinoin in the United States, the CP strategy resulted in 184,730 quality-adjusted life-years (0.9236 per person), and nonpregnancy laboratory monitoring resulted in 184,770 quality-adjusted life-years (0.9238 per person). The CP and nonpregnancy laboratory monitoring strategies resulted in 0.08 and 0.09 isotretinoin-related deaths, respectively. Nonpregnancy laboratory monitoring was the dominating strategy, realizing $24 million savings per year. No variation of a single parameter across its plausible range changed our cost utility findings. Cessation of laboratory monitoring could realize savings of $24 million per year for the US health care system and improve patient outcomes, with negligible effects on adverse events. Answer: The necessity of laboratory assessment during isotretinoin therapy for acne is a topic of debate among researchers and clinicians. Some studies suggest that routine laboratory follow-up may not be necessary for all patients undergoing isotretinoin therapy. For instance, a study analyzing 400 patients found that only a small percentage showed elevated levels of AST, ALT, cholesterol, and triglycerides, leading to the conclusion that there may be no need for performing laboratory investigations for patients on isotretinoin therapy (PUBMED:15730082). Similarly, another study indicated that frequent biochemical monitoring is not recommended since the laboratory abnormalities observed were clinically insignificant, reversible, and did not generally necessitate discontinuation of treatment (PUBMED:30836808). On the other hand, some research has identified specific laboratory changes associated with isotretinoin use. A study involving 1182 patients found an increase in abnormal triglycerides and total cholesterol levels from baseline to the sixth month of treatment, although no significant differences were observed in AST, ALT, and CBC. This study also proposed a novel predictive method for risk stratification based on baseline cholesterol and triglycerides levels (PUBMED:38186323). Another study recommended that in healthy patients with normal baseline lipid panel and liver function test results, repeated studies should be performed after 2 months of isotretinoin therapy, and if findings are normal, no further testing may be required (PUBMED:27189824). Furthermore, a retrospective case series of 130 pediatric patients taking isotretinoin showed significant increases in cholesterol and triglycerides, questioning the utility of laboratory monitoring for preventing severe clinical sequelae in pediatric patients (PUBMED:35689471). A study from Dongguk University Ilsan Hospital suggested testing AST, ALT, and triglyceride levels once every 5 to 6 months, and LDL and total cholesterol levels during the first and second months (PUBMED:37123305). In summary, while some studies argue against the need for routine laboratory monitoring during isotretinoin therapy due to the low incidence of clinically significant abnormalities (PUBMED:15730082, PUBMED:30836808), others advocate for a more tailored approach based on initial laboratory values and subsequent changes during treatment (PUBMED:38186323, PUBMED:27189824, PUBMED:35689471, PUBMED:37123305). The decision to perform laboratory assessments should be individualized, taking into account the patient's baseline risk factors and the potential for adverse effects.
Instruction: Can patients with asthma feel inhaler therapy working right away? Abstracts: abstract_id: PUBMED:19995138 Can patients with asthma feel inhaler therapy working right away? Two clinical trials testing the effect of timing of assessment on patient perception. Background: Feeling a maintenance therapy work right away may provide positive reinforcement and may offer one way to improve adherence in patients with asthma. Precise measurement is required to accurately compare the presence of this effect across clinical trial treatment groups. Methods: Two randomized, controlled studies tested whether timing of assessment (daily vs weekly, study 1; and predose vs postdose, study 2) influenced patients' reports of whether they can feel a medication working right away (perception), and their satisfaction with this perception (satisfaction). These 2-week US-based multicenter double-blind, parallel-group studies included patients > or = 18 years of age with mild to moderate persistent asthma. In each, patients were randomized to one of two drugs with different onset profiles: budesonide/formoterol pressurized metered-dose inhaler (pMDI) 80/4.5 microg x 2 inhalations (160/9 microg) twice daily or budesonide pMDI 80 microg x 2 inhalations (160 microg) twice daily. Patients were further randomized to complete previously validated perception and satisfaction questions in a cross-over fashion, either daily and weekly (N = 123) or predose and postdose (N = 134). Patient surveys also assessed perceptions of the onset of effect of medication and their value of these perceptions. Results: No significant differences were observed in patients' reports of perception, either daily versus weekly or predose versus postdose. A statistically significant difference in satisfaction was found in study 1 only, favoring weekly recall (p < 0.05), with sensitivity analysis showing no difference by treatment group (p = 0.162). Across both studies, most patients (87%) who perceived their inhaler working right away (136 of 157 patients) identified positive airway sensations. Most patients reported that feeling their medication work right away is reassuring and would help them manage their asthma. Conclusion: Assessment timing has no effect on patient response to the perception of feeling a medication working right away. Differences found in satisfaction levels reported with weekly versus daily recall were consistent across treatment groups, indicating that no bias was introduced in favor of either treatment group. Patients characterized the perception of feeling a maintenance therapy working right away as easier breathing and reported this perception as beneficial to patient self-care. abstract_id: PUBMED:36964764 Open-inhaler versus single-inhaler triple therapy (long-acting muscarinic antagonist, inhaled corticosteroid, and long-acting β2-agonist) in asthma patients: a narrative review. Objective: To review the evidence for the use of open-inhaler (inhaled corticosteroid [ICS] plus long-acting β2-agonist [LABA] with separate add-on long-acting muscarinic antagonist [LAMA]) versus single-inhaler triple therapy (ICS/LABA/LAMA combination) and the merits of add-on LAMA to ICS/LABA in patients with uncontrolled asthma. Data Sources: Original research articles were identified from PubMed using the search term "triple therapy asthma." Information was also retrieved from the ClinicalTrials.gov website. Study Selections: Articles detailing the use of add-on LAMA to ICS plus LABA (open-inhaler triple therapy), and closed triple therapy compared with ICS plus LABA dual therapy, addressing patient symptoms, exacerbations, and health-related quality of life. Results: Open-inhaler triple therapy was associated with a significantly reduced incidence of hospitalizations and emergency department visits and a decrease in ICS dose, oral corticosteroids use, and antibiotics use. Exacerbations and acute respiratory events were also reduced. Single-inhaler triple therapy showed a greater improvement in lung function, asthma control, and health status and was noninferior to open-inhaler triple therapy for Asthma Quality of Life Questionnaire scores. Single-inhaler triple therapy may also lead to improved therapy adherence. Conclusion: Add-on LAMA to ICS plus LABA (open- or single-inhaler triple therapy) improves the response in patients who remain symptomatic and provides a reasonable alternative to ICS dose escalation in treatment-refractory patients. abstract_id: PUBMED:35080761 How to Choose the Right Inhaler Using a Patient-Centric Approach? There are many different inhaler devices and medications on the market for the treatment of asthma and chronic obstructive pulmonary disease, with over 230 drug-delivery system combinations available. However, despite the abundance of effective treatment options, the achieved disease control in clinical practice often remains unsatisfactory. In this context, a key determining factor is the match or mismatch of an inhalation device with the characteristics or needs of an individual patient. Indeed, to date, no ideal device exists that fits all patients, and a personalized approach needs to be considered. Several useful choice-guiding algorithms have been developed in the recent years to improve inhaler-patient matching, but a comprehensive tool that translates the multifactorial complexity of inhalation therapy into a user-friendly algorithm is still lacking. To address this, a multidisciplinary expert panel has developed an evidence-based practical treatment tool that allows a straightforward way of choosing the right inhaler for each patient. abstract_id: PUBMED:37489386 Open and Closed Triple Inhaler Therapy in Patients with Uncontrolled Asthma. Long-acting muscarinic antagonists (LAMAs) are a class of inhalers that has recently been included as add-on therapy in the GINA guidelines, either in a single inhaler device with inhaled corticosteroids plus long-acting β2-agonists (ICS + LABA) (closed triple inhaler therapy) or in a separate one (open triple inhaler therapy). This review summarizes the existing evidence on the addition of LAMAs in patients with persistently uncontrolled asthma despite ICS + LABA treatment based on clinical efficacy in the reduction of asthma symptoms and exacerbations, the improvement in lung function, and its safety profile. abstract_id: PUBMED:30532633 Inhaler technique education and asthma control among patients hospitalized for asthma in Jordan. Objectives: To investigate the effect of inhaler technique education delivered by a clinical pharmacist to patients hospitalised for asthma, on inhaler technique scores and asthma control at three months post-discharge. Methods: This pre-post interventional study in Jordan enrolled patients who had been admitted for asthma and were using controller medication by Accuhaler [Diskus] (ACC), Turbuhaler (TH) or Pressurized metered dose inhalers (pMDI). Inhaler technique was assessed using published checklists (score 0-9). Asthma symptom control was assessed by Asthma Control Test (ACT, range 5-25). Patients were assessed on admission (baseline), pre-discharge, and 3 months later. All patients received a 'Show-and-Tell' inhaler technique counseling service prior to discharge. Results: Baseline data were available for 140 patients, 71% females, mean age 52.7 (SD 16.64) years, mean ACT score 10.0 (SD 4.8). Mean inhaler score was 7.5 (SD 1.52) with no significant difference between the inhaler groups (p = 0.174). After pre-discharge training, all patients had correct technique (score 9/9). After 3 months, mean inhaler scores were significantly higher than at baseline (8.14 (SD 0.87, p < 0.001), with mean change significantly higher for TH 1.21 (SD 2.25) and ACC 0.85 (SD 0.97) than pMDI (0.16; SD 0.72), p = 0.001. Symptom control improved significantly for all patients, with a mean increase in ACT score of 7.54 (SD 8.18), with no significant difference between the inhaler device groups (p = 0.326). Conclusions: Patients hospitalized for asthma achieved correct inhaler technique after training by a pharmacist, and maintained better technique at 3 months than on admission. Significant improvements in ACT scores were documented for all inhaler groups. abstract_id: PUBMED:31435367 Evaluation of inhaler technique among patients with asthma and COPD in Yemen. Objectives: This study aimed to evaluate metered-dose inhaler (MDI) technique among Yemeni patients with asthma and chronic obstructive pulmonary disease (COPD), and to investigate factors that affect correct use of MDIs. Methods: A cross-sectional study was conducted in five pharmacies in the Yemeni capital city of Sana'a to evaluate MDI technique among adult patients with asthma and COPD and to investigate factors that influence proper application of MDI techniques. Results: Of 49 patients, 47 (95.9%) were males and two (4.1%) were females. The mean age of the patients was 52.90 ± 6 years (range: 43-64 years). This study showed that only nine patients (18.4%) demonstrated good knowledge of MDI techniques; two patients (4.1%) demonstrated moderate knowledge; the majority of patients (77.6%) showed poor knowledge regarding correct MDI technique. Previous training regarding MDI technique was the only factor that affected appropriate use of MDIs. Conclusion: The majority of patients in this study could not use their MDI appropriately. Thus, patients should be educated regarding correct MDI technique upon dispensing of inhaler medications. Training and educational intervention programs may contribute to improved MDI technique among patients. abstract_id: PUBMED:32780932 Evaluating nonadherence to preventer inhaler therapy in severe asthmatic patients receiving omalizumab. Background: Asthma is one of the most common chronic diseases worldwide and can often be controlled by existing treatments, but in the 5%-10% of patients with severe asthma, control is frequently more challenging. The aim of this paper is to evaluate the adherence to preventer inhaler therapy in severe asthmatic patients receiving omalizumab, a recombinant humanized monoclonal anti-IgE antibody indicated for moderate-to-severe asthma. Methods: This study included 29 patients who were receiving omalizumab treatment as maintenance therapy. All patients completed the six question Turkish Modified Morisky Scale. In addition, patients were classified by adherence to their prescribed preventer therapy, based on pharmaceutical and medical device institution records from the preceding one year. Results: Of the 29 enrolled patients, 24.1% had NSAID-exacerbated respiratory disease; 75.9% were female and median age was 47.2 ± 12.99 years. Severe asthma patients demonstrated high levels of motivation (98.8%, n = 28) and knowledge (86.2%, n = 22) about asthma treatment. Nevertheless, the rate of nonadherence to preventer inhaler therapy was only 34.5% (n = 10). There were no significant differences among the characteristics of the patients that might affect the adherence to inhaler therapy. Conclusion: In patients with severe asthma, suboptimal adherence to preventer inhaler therapy is common before and after omalizumab treatment. Although patients have high levels of motivation and knowledge about asthma treatment, adherence to inhaler preventer therapy declined after omalizumab therapy. Given these findings, it is important to decide whether preventer inhaler therapy should be continued as monotherapy or be discontinued inpatients exhibiting impaired adherence to biological agents, such as omalizumab. abstract_id: PUBMED:34388078 Adherence, quality of life, and satisfaction with conventional fix combined therapy versus maintenance and reliever therapy in patients with asthma after inhaler training. Objective: Treatment adherence, asthma control, inhaler satisfaction, and quality of life of patients with asthma were investigated to examine whether there were any differences between conventional fixed combined therapy and maintenance and reliever therapy after inhaler education. Methods: Cross-sectional observational follow-up of 120 patients with asthma receiving third- and fourth-step treatment was completed at zero, three, and six months. Fifty-eight patients received long-acting beta-agonists or inhaler corticosteroids (ICS) as maintenance therapy and short-acting beta-agonists (SABA) as reliever therapy (CONV) with Diskus and Ellipta devices. Sixty-two patients received ICS/formoterol as maintenance and reliever therapy (MART) with metered-dose inhaler and Turbuhaler devices. The asthma control test (ACT), feeling of satisfaction with the inhaler (FSI-10) questionnaire, Morisky Green Levine (MGL) scale, and quality of life questionnaire (AQLQ) were evaluated at each visit. Results: Significant improvement was observed in the ACT, AQLQ, FSI-10, MGL scales, and inhaler technique in both groups over time (p < 0.05). There was no difference in the MGL scale between the groups at all visits. Nonadherence was highest in the Diskus device group (55%) and lowest in the Ellipta device group at the first visit (25%). While 36% of the patients had low adherence at the first visit, this decreased to 12% by the last visit. A significant correlation was found between ACT and MGL at all visits (p < 0.001). Conclusion: Asthma control, adherence, inhaler satisfaction, and quality of life improved equally with regular follow-up and standard inhaler education in the CONV and MART groups. abstract_id: PUBMED:35752431 Adherence and Persistence to Single-Inhaler Versus Multiple-Inhaler Triple Therapy for Asthma Management. Background: Treatment guidelines recommend triple therapy for patients with asthma who remain uncontrolled on inhaled corticosteroid/long-acting β2-agonist therapy. Previously, triple therapy was only available via multiple inhalers. Single-inhaler fluticasone furoate/umeclidinium/vilanterol (FF/UMEC/VI) is approved as maintenance treatment for asthma; however, real-world information on adherence and persistence is limited. Objective: To compare adherence and persistence among adult patients with asthma receiving single-inhaler FF/UMEC/VI versus multiple-inhaler triple therapy (MITT) in the United States. Methods: This retrospective cohort study used IQVIA PharMetrics Plus data to evaluate patients with asthma who initiated once-daily FF/UMEC/VI 100/62.5/25 mcg or MITT between September 18, 2017, and September 30, 2019. Inverse probability weighting and multivariable regression adjusted for differences in characteristics between the FF/UMEC/VI and MITT cohorts. Adherence was assessed using proportion of days covered (PDC) and proportion of patients achieving PDC ≥0.8 and PDC ≥0.5. Non-persistence was identified as a >45-day gap between fills. Results: The study included 1396 FF/UMEC/VI and 5115 MITT initiators. Three months after initiation, FF/UMEC/VI users had significantly higher mean PDC versus MITT users (0.68 vs 0.59; P < .001) and 31% more likely to be adherent (PDC ≥0.8; 40.6% vs 31.3%; adjusted risk ratio [95% confidence interval (CI)]: 1.31 [1.13-1.54]; P < .001). Similar patterns were observed at 6 and 12 months post initiation. In addition, FF/UMEC/VI users were 49% more likely to persist at 12 months than MITT users (25.9% vs 15.1%, adjusted hazard ratio [95% CI]: 1.49 [1.39-1.60]; P < .001). Conclusions: Patients with asthma initiating triple therapy with FF/UMEC/VI had significantly better adherence and persistence compared with MITT initiators. abstract_id: PUBMED:31992675 Influence of Gender on Inhaler Technique. Background: This study was designed to evaluate the influence of gender on the inhaler technique of subjects on inhaler therapy and to determine the factors predicting the correct inhaler technique and a change of inhaler device. Methods: A total of 568 adult subjects (276 male, 292 female) on inhaler therapy were included in this cross-sectional, observational study. Data on sociodemographic characteristics, inhaler therapy, subject-reported difficulties, and technician-reported errors in inhaler technique were recorded. Results: A change of inhaler device was noted in 71.0% of male subjects and 77.4% of female subjects, and this was based on the physicians' decision in most cases (41.7% and 51.7%, respectively). A higher percentage of female subjects reported difficulties with using inhalers (63.7% vs 40.6%, P < .001). Overall, having received training on the inhaler technique was associated with a higher likelihood of correct inhaler technique (odds ratio 12.56, 95% CI 4.44-35.50, P < .001) and a lower risk of device change (odds ratio 0.46, 95% CI 0.27-0.77, P = .004). Conclusions: Errors in the inhaler technique, including inhalation maneuvers and device handling, were common in subjects on inhaler therapy. Subject-reported difficulties with using inhalers were more prevalent among female subjects, whereas errors in the inhaler technique identified by direct observation were similarly high in both genders. Overall, a lack of training on the inhaler technique predicted a higher likelihood of errors in the inhaler technique and a change of inhaler device. Answer: Yes, patients with asthma can feel inhaler therapy working right away. In two clinical trials, most patients (87%) who perceived their inhaler working right away identified positive airway sensations such as easier breathing. These patients reported that feeling their medication work right away is reassuring and would help them manage their asthma (PUBMED:19995138).
Instruction: Cut-off values for waist circumference in Turkish population: Is there a threshold to predict insulin resistance? Abstracts: abstract_id: PUBMED:25795939 Cut-off values of waist circumference to predict metabolic syndrome in obese adolescents. Introduction: Metabolic syndrome (MetS) is a constellation of metabolic alteration related to abdominal obesity, inflammation and insulin resistance, which increase cardiovascular disease and mortality. The aims of the present study were to identify the prevalence of comorbidities and altered parameters in obese adolescents with and without MetS, and determine cut-off points of waist circumference to predict MetS. Methods: 195 obese adolescents were recruited and divided according to MetS diagnosis based on IDF criteria. Blood analyses of glucose, lipids, liver enzymes, adiponectin and leptin were measured. Insulin resistance was assessed by HOMA-IR, QUICKI and HOMA-AD. Visceral, subcutaneous and hepatic fat were ultrasonography obtained. Body composition was estimated by BOD POD system. Results: We observed a prevalence of 25% of MetS (n=50). The MetS group presented significant higher body mass, BMI, body fat (kg), free-fat mass (kg), waist circumference, visceral fat, glucose, insulin, insulin resistance, total-cholesterol, LDL-c, VLDL-c, triglycerides, liver enzymes, non-alcoholic fatty liver disease (NAFLD) and blood pressure. Significant lower QUICKI and adiponectin in MetS group were noted. MetS girls presented significant higher leptin/adiponectin ratio compared to Non-MetS girls. Cut-off points of 111.5 cm for boys and 104.6 cm for girls of waist circumference were suggested to predict metabolic syndrome. Moreover, waist circumference was positive correlated with visceral fat and the number of metabolic syndrome parameters. Conclusion: MetS group presented significant higher metabolic alterations and inflammation compared to Non-MetS group. Waist circumference is considered an anthropometric measure predictor of metabolic syndrome in obese adolescents, being useful in clinical practice. abstract_id: PUBMED:31173313 A study of the correlation of waist circumference with metabolic risks among non-obese populations. Objective: To investigate the correlation of waist circumference with metabolic risks among non-obese populations. Patients And Methods: A retrospective analysis was conducted for the clinical information of 13,145 non-obese subjects who received physical examinations in the hospital from January 2015 to January 2018. Among them, 1,971 subjects were definitely diagnosed with diabetes mellitus, hypertension, and metabolic syndrome and set as the metabolic disorder group, with the remaining 11,174 cases as the control group. All metabolism-related indicators were compared between the patients in the two groups, the analysis was performed for the prevalence of diabetes, hypertension, and metabolic syndrome in patients with different waist circumference as well as males and females with different waist circumferences. Finally, the optimal cut-off points of waist circumference were determined. Results: According to the comparison, there were no statistically significant differences in subjects with low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), and total cholesterol between the metabolic disorder group and the control group (p&gt;0.05). The subjects in the metabolic disorder group had higher waist circumference, waist-to-hip ratio, triglyceride, fasting insulin, fasting blood glucose, and insulin resistance index than those in the control group, displaying statistically significant differences (p&lt;0.05). When waist circumference was more than 70 cm, as it increased, the prevalence rates of diabetes, hypertension, and metabolic syndrome gradually elevated. When waist circumference was 60-90 cm, hypertension took the leading place in metabolic disorders, so did the metabolic syndrome and diabetes, with a waist circumference of more than 90 cm. There were no remarkable differences in the prevalence rates of diabetes, hypertension, and metabolic syndrome among male and female patients with different waist circumferences ((p&gt;0.05). The patients with the waist circumference of over 90 cm exhibited gradually declined prevalence rates of diabetes, hypertension, and metabolic syndrome, while those in the patients with a waist circumference of 60-90 cm were gradually increased. The optimal cut-off points of waist circumference were calculated according to the prevalence rates of diabetes, hypertension, and metabolic syndrome among patients, and the calculation results were 78 cm, 79 cm, and 77 cm for male patients and 73 cm, 78 cm, and 73 cm for female patients. Based on the weight combined with the optimal cut-off points of waist circumference, the optimal waist circumference cut-off points for males and females were 78 cm and 74 cm, respectively. Conclusions: There is a correlation between the waist circumference and metabolic risks among non-obese populations, but the screening for metabolic diseases should be conducted among males with a waist circumference of over 78 cm and females with a waist circumference of over 74 cm. The treatment measures are supposed to be taken promptly to improve the prognosis. abstract_id: PUBMED:35586815 Insulin Resistance and Biological Aging: The Role of Body Mass, Waist Circumference, and Inflammation. The purpose of this investigation was to evaluate the association between insulin resistance and biological aging in a randomly selected sample of 2,596 U.S. women and men. Another key objective was to examine the extent to which the insulin resistance and biological aging association was influenced by differences in body mass, waist circumference, and systemic inflammation. Biological aging was indexed using the length of leukocyte telomeres. The homeostatic model assessment (HOMA) was employed to index insulin resistance. The body mass index (BMI) was used to represent body mass independent of height. Waist circumference was used to assess abdominal adiposity, and C-reactive protein (CRP) was measured to index body-wide inflammation. Insulin resistance and telomere length were both treated as continuous variables. Results revealed that insulin resistance was related significantly with cellular aging, after adjusting for several demographic covariates (F = 5.7, P = 0.0234). The association remained significant after controlling for multiple demographic and lifestyle covariates together (F = 4.6, P = 0.0410). However, after controlling for BMI, along with the other covariates, insulin resistance was no longer associated with biological aging (F = 2.1, P = 0.1573). After adjusting for differences in waist circumference, along with the demographic and lifestyle covariates, but not BMI, the relationship between insulin resistance and biological aging was negated further (F = 1.5, P = 0.2283). Adjusting for CRP with the demographic and lifestyle covariates, but not BMI or waist circumference, weakened the relationship (F = 4.0, P = 0.0552). Evidently, if all adults in the U.S. had the same BMI or waist circumference, there would not be a relationship between insulin resistance and telomere length. It appears that insulin resistance accounts for differences in biological aging mainly because of differences in BMI and waist circumference, especially the latter. abstract_id: PUBMED:35843386 Associations of plasma miRNAs with waist circumference and insulin resistance among women with polycystic ovary syndrome - Pilot study. Background: Insulin resistance (IR) and central obesity are common in polycystic ovary syndrome (PCOS), but pathomechanisms for IR in PCOS are not established. Circulating microRNAs (miRNAs) are non-invasive biomarkers of epigenetic regulation that may contribute to the pathogenesis of IR and central adiposity in PCOS. Methods: We conducted a pilot study to examine associations of circulating miRNAs with IR and central adiposity among women with PCOS (n = 11) using high-throughput miRNA sequencing. We fit generalized linear models examining associations of waist circumference and HOMA-IR with plasma miRNAs. We used false discovery rate (FDR)-adjusted cutoff p &lt; 0.1 to correct for multiple testing. We used miRDB's Gene Ontology (GO) tool to identify predicted pathways for top hits. Results: Mean age and BMI of participants were 27.9 years and 32.5 kg/m2, respectively. Lower levels of miR-1294 were associated with higher waist circumference (β = -0.10, FDR = 0.095). While no miRNAs were associated with HOMA-IR at our FDR cut off &lt;0.1, 11 miRNAs were associated with waist circumference and 14 miRNAs with HOMA-IR at unadjusted p &lt; 0.01, including members of the highly conserved miR-17/92 cluster and miR-1294 (β = -0.10, p &lt; 0.001). The GO analysis of miR-1294 identified 54 overrepresented pathways, including "negative regulation of insulin receptor signaling" (FDR = 0.019), and 6 underrepresented pathways. Conclusions: Plasma miR-1294 along with members of the miR-17/92 cluster and miRNAs involved in insulin signaling may be associated with central obesity and insulin resistance in PCOS. Larger studies among women with and without PCOS are needed to validate these findings. abstract_id: PUBMED:27938404 BMI and waist circumference cut-offs for corresponding levels of insulin sensitivity in a Middle Eastern immigrant versus a native Swedish population - the MEDIM population based study. Background: The aim of this study was to identify corresponding body mass index (BMI) and waist circumference cut-offs for equivalent levels of insulin sensitivity in a Middle Eastern immigrant population compared with native Swedes. Methods: Citizens of Malmö, Sweden aged 30 to 75 years, who were born in Iraq or Sweden, were in 2010-2012 invited to participate in a health examination including anthropometrics, oral glucose tolerance test, fasting samples and interviews concerning sociodemographic factors and lifestyle behaviours. Results: In total, 1176 individuals born in Iraq and 688 born in Sweden, without previously diagnosed type 2 diabetes, participated in the study. In normal weight participants (BMI &lt; 25 kg/m2), 21.2% of Iraqis vs 9.3% of Swedes were insulin resistant. Corresponding figures in participants without abdominal obesity (waist circumference, men &lt; 94 cm, women &lt; 80 cm) were 28.2% of Iraqis vs 9.4% of Swedes. The age-adjusted insulin sensitivity index (ISI) for obese Swedes (BMI 30 kg/m2) corresponded in Iraqi men with BMI of 28.5 kg/m2, and in Iraqi women with BMI of 27.5 kg/m2. The ISI level in abdominally obese Swedes corresponded with waist circumference cut-offs of 84.0 cm and 71.0 cm in Iraqi men and women, respectively. In men only, larger waist circumference (P interaction = 0.026) presented a stronger association with impaired ISI in Iraqis as compared to Swedes. Conclusions: Our data shows that the impact of BMI and waist circumference on ISI is ethnic- and gender-specific, indicating a disturbed fat metabolism in Iraqi males in particular. Our data suggests that 10 cm lower cut-off values for abdominal obesity, than is currently recommended by major organisations, should be considered when estimating diabetes risk in Middle Eastern populations. abstract_id: PUBMED:37391962 Body Mass Index versus Waist Circumference in Predicting Insulin Resistance in Infertile Women with Polycystic Ovary Syndrome. Weight management is central to the treatment of infertile women with polycystic ovary syndrome. Both body mass index and waist circumference are measures of obesity. The objective of the study was to examine the clinical importance of waist circumference and body mass index in predicting insulin resistance. The cross-sectional study involved a total of 126 consecutive infertile women with polycystic ovary syndrome attending the Infertility unit of the Department of Obstetrics and Gynaecology of Bangabandhu Sheikh Mujib Medical University (BSMMU), Dhaka, Bangladesh from January 2017 to December 2017. The anthropometric measurements like weight, height and waist circumference were made and body mass index and waist hip ratio were calculated. Fasting insulin and fasting plasma glucose were estimated in early follicular phase of menstrual cycle. Insulin resistance was calculated with HOMA-IR. Body mass index and waist circumference were examined for clinical prediction of insulin resistance by ROC curve analysis. The mean age was 25.56±3.90 years. Mean body mass index was 26.79±3.25, mean waist circumference was 90.9±9.4cm. By body mass index thresholds, 47.9% of the women were overweight, 39.7% of the women were obese. By waist circumference threshold, 80.2% of the women were centrally obese. Body mass index and waist circumference, both had significant correlation with hyperinsuliaemia. When body mass index and waist circumference were examined for predicting insulin resistance by sensitivity, specificity, positive likelihood ratio and negative likelihood ratio, waist circumference appeared to have moderate clinical importance compared to the negligible role of body mass index. Waist circumference may be a better predictor of insulin resistance than body mass index in infertile women with polycystic ovary syndrome. abstract_id: PUBMED:20676898 Preliminary data on the association between waist circumference and insulin resistance in children without a previous diagnosis. The aim of the present work was to study the association between different anthropometric parameters and insulin resistance (IR) in Spanish schoolchildren without a previous diagnosis. A total of 443 Spanish schoolchildren (9-11 years of age) were studied in this cross-sectional study. The anthropometric measurements collected were weight, height, body circumferences and skinfolds. Body mass index (BMI), waist/hip ratio, percentage body fat and fat-free body mass were determined. Overnight-fasted blood lipids, insulin and glucose levels were analysed, and estimation of IR, taking into account the homeostasis model assessment (HOMA), was calculated. The children with IR had higher serum triglycerides and insulin levels, were heavier and taller, and had a higher BMI, a larger waist circumference, a larger hip circumference, a larger waist/hip ratio and thicker bicipital and tricipital skinfolds than those who did not have IR. Age, sex, BMI and waist circumference explained 32.0% of the variance in the HOMA values; only sex, triglycerides and waist circumference independently influenced this variable. A 1-cm increase in waist circumference was associated with approximately a 3.8% increase in the mean HOMA value. The children with a waist circumference of over the p90 for their age and sex were at greater risk of showing IR as measured by the HOMA: odds ratio = 6.94 (2.01-23.91; P &lt; 0.001). In conclusion, according to these results, waist circumference is the best anthropometric parameter associated with IR in children, and those with a waist circumference of over the p90 for their age and sex would appear to be at particular risk. abstract_id: PUBMED:27050332 A comparison of the clinical usefulness of neck circumference and waist circumference in individuals with severe obesity. Purpose/Aim: Neck circumference (NC) is an emerging anthropometric parameter that has been proposed to reflect metabolic health. The aim of the current study was to compare its clinical usefulness to waist circumference (WC) in the assessment of individuals with severe obesity. Materials And Methods: A total of 255 subjects participated in the study. All anthropometric measurements were done by a single medical professional. Biochemical measurements included oral glucose-tolerance tests (OGTTs), fasting insulin, lipids, and hepatic enzymes. Results: The mean age of the participants was 49 ± 12 years with the mean body mass index (BMI) of 36.9 ± 6.2 kg/m2. Correlation analyses revealed that while WC was better associated with adiposity parameters, it was of little use in comparison to NC with regard to metabolic outcomes. In men, NC was positively associated with fasting plasma glucose, fasting insulin, FINDRISC scores. ROC analyses showed NC was better in distinguishing type 2 diabetes (AUC = 0.758; p &lt; 0.001), insulin resistance (AUC = 0.757; p = 0.001), metabolic syndrome (AUC = 0.724; p &lt; 0.001), and hypertension (AUC = 0.763; p = 0.001). Similar correlations were observed in women. Using binary logistic regression, we determined that a NC of ≥35 cm in women and ≥38 cm in men are valuable cut-off values to use in the everyday practice. Conclusion: In individuals with severe obesity, NC performs better than WC in the assessment of metabolic health. abstract_id: PUBMED:35256020 Cut-off points in the waist:height ratio for the prediction of insulin resistance, second stage of sexual maturation in Brazilian adolescents. The objective of this study was verify the best waist:height ratio (WHtR) cut-off points for the prediction of insulin resistance (IR) according to stage of sexual maturation in Brazilian adolescents. Data were analysed from 37 759 adolescents between 12 and 17 years of age, of both sexes, participating in the ERICA study on cardiovascular risk factors in adolescents ('Estudo de Riscos Cardiovasculares em Adolescentes - ERICA'). The following variables were included sex, age and sexual maturity according to the Tanner stage, waist circumference, height, WHtR, fasting insulin and fasting blood sugar so as to obtain the homeostasis model assessment-estimated insulin resistance (HOMA-IR). We constructed an Receiver Operating characteristic (ROC) curve for each group in order to determine the best WHtR cut-off points for the sample stratified by sex and stratified by sex and sexual maturation stage. The female sex corresponded to 60% of the sample, with a mean age of 14·7 years (sd ± 0·08 years) and 13·1 % had IR. The prediction of IR through the WHtR indicator was better for boys, where all the areas under the ROC curve were superior to 0·70. When stratifying by sex alone, the best WHtR cut-off points found were equal to 0·45 for girls and 0·44 for boys. When stratifying by Tanner maturation stages, the cut-off points were reduced with advancing sexual maturation in the male group, while the opposite occurred in the female group. Among Brazilian adolescents, WHtR is a good predictor of IR evaluated by HOMA-IR, but different cut-off points should be adopted according to sex and sexual maturation stage. abstract_id: PUBMED:23736362 Waist circumference to assess reversal of insulin resistance following weight reduction after bariatric surgery: cohort and cross-sectional studies. Objective: To validate the use of waist circumference to assess reversal of insulin resistance after weight loss induced by bariatric surgery. Design: In cross-sectional studies, threshold values for insulin resistance were determined with homeostasis model assessment of insulin resistance (HOMA-IR) (algorithm based on fasting plasma glucose and insulin) in 1018 lean subjects and by hyperinsulinemic euglycemic clamp (clamp) in 26 lean women. In a cohort study on 211 patients scheduled for bariatric surgery, HOMA-IR and waist circumference were measured before and 1.5-3 years after weight reduction. In a subgroup of 53 women, insulin sensitivity was also measured using clamp. Results: The threshold for insulin resistance (90th percentile) was 2.21 (mg dl(-1) fasting glucose × mU l(-1) fasting insulin divided by 405) for HOMA-IR and 6.118 (mg glucose per kg body weight per minute) for clamp. Two methods to assess reversal of insulin resistance by measuring waist circumference were used. A single cutoff value to &lt;100 cm for waist circumference was associated with reversal of insulin resistance with an odds ratio (OR) of 49; 95% confidence interval (CI)=7-373 and P=0.0002. Also, a diagram based on initial and weight loss-induced changes in waist circumference in patients turning insulin sensitive predicted reversal of insulin resistance following bariatric surgery with a very high OR (32; 95% CI=4-245; P=0.0008). Results with the clamp cohort were similar as with HOMA-IR analyses. Conclusions: Reversal of insulin resistance could either be assessed by a diagram based on initial waist circumference and reduction of waist circumference, or by using 100 cm as a single cutoff for waist circumference after weight reduction induced by bariatric surgery. Answer: The provided abstracts do not specifically mention cut-off values for waist circumference in the Turkish population to predict insulin resistance. However, they do provide insights into the relationship between waist circumference and insulin resistance in various populations and contexts, which can be informative for understanding how such thresholds might be determined. For instance, in obese adolescents, cut-off points of 111.5 cm for boys and 104.6 cm for girls were suggested to predict metabolic syndrome, which is closely related to insulin resistance (PUBMED:25795939). In non-obese populations, waist circumference was correlated with metabolic risks, and cut-off points of 78 cm for males and 74 cm for females were proposed for metabolic disease screening (PUBMED:31173313). Another study found that insulin resistance was significantly associated with cellular aging, and this relationship was influenced by differences in body mass and waist circumference (PUBMED:35586815). In women with polycystic ovary syndrome (PCOS), waist circumference was a better predictor of insulin resistance than body mass index (BMI) (PUBMED:37391962). Similarly, in a Middle Eastern immigrant population, lower BMI and waist circumference cut-offs were suggested for equivalent levels of insulin sensitivity compared to native Swedes (PUBMED:27938404). In children without a previous diagnosis, waist circumference was the best anthropometric parameter associated with insulin resistance (PUBMED:20676898). For individuals with severe obesity, neck circumference was compared to waist circumference, and it was found that neck circumference might be better in assessing metabolic health (PUBMED:27050332). In Brazilian adolescents, different waist-to-height ratio (WHtR) cut-off points were recommended according to sex and sexual maturation stage for predicting insulin resistance (PUBMED:35256020). Lastly, waist circumference was validated as a measure to assess reversal of insulin resistance after bariatric surgery, with a single cut-off value of <100 cm being associated with reversal of insulin resistance (PUBMED:23736362). While these studies provide valuable information on the relationship between waist circumference and insulin resistance, they do not offer specific cut-off values for the Turkish population. To establish such thresholds for the Turkish population, similar studies would need to be conducted, taking into account the unique demographic, genetic, and lifestyle factors that may influence the relationship between waist circumference and insulin resistance in this population.
Instruction: Does social support really help to eat a low-fat diet? Abstracts: abstract_id: PUBMED:36872251 You are what you eat? Rodent study suggests defended body weight mainly dictated by fat content of diet. N/A abstract_id: PUBMED:29016744 Effects of dapagliflozin on human epicardial adipose tissue: modulation of insulin resistance, inflammatory chemokine production, and differentiation ability. Aims: In patients with cardiovascular disease, epicardial adipose tissue (EAT) is characterized by insulin resistance, high pro-inflammatory chemokines, and low differentiation ability. As dapagliflozin reduces body fat and cardiovascular events in diabetic patients, we would like to know its effect on EAT and subcutaneous adipose tissue (SAT). Methods And Results: Adipose samples were obtained from 52 patients undergoing heart surgery. Sodium-glucose cotransporter 2 (SGLT2) expression was determined by real-time polymerase chain reaction (n = 20), western blot, and immunohistochemistry. Fat explants (n = 21) were treated with dapagliflozin and/or insulin and glucose transporters expression measured. Glucose, free fatty acid, and adipokine levels (by array) were measured in the EAT secretomes, which were then tested on human coronary endothelial cells using wound healing assays. Glucose uptake was also measured using the fluorescent glucose analogue (6NBDG) in differentiated stromal vascular cells (SVCs) from the fat pads (n = 11). Finally, dapagliflozin-induced adipocyte differentiation was assessed from the levels of fat droplets (AdipoRed staining) and of perilipin. SGLT2 was expressed in EAT. Dapagliflozin increased glucose uptake (20.95 ± 4.4 mg/dL vs. 12.97 ± 4.1 mg/dL; P &lt; 0.001) and glucose transporter type 4 (2.09 ± 0.3 fold change; P &lt; 0.01) in EAT. Moreover, dapagliflozin reduced the secretion levels of chemokines and benefited wound healing in endothelial cells (0.21 ± 0.05 vs. 0.38 ± 0.08 open wound; P &lt; 0.05). Finally, chronic treatment with dapagliflozin improved the differentiation of SVC, confirmed by AdipoRed staining [539 ± 142 arbitrary units (a.u.) vs. 473 ± 136 a.u.; P &lt; 0.01] and perilipin expression levels (121 ± 10 vs. 84 ± 11 a.u.). Conclusions: Dapagliflozin increased glucose uptake, reduced the secretion of pro-inflammatory chemokines (with a beneficial effect on the healing of human coronary artery endothelial cells), and improved the differentiation of EAT cells. These results suggest a new protective pathway for this drug on EAT from patients with cardiovascular disease. abstract_id: PUBMED:26343163 Differential behaviour of epicardial adipose tissue-secretomes with high and low orosomucoid levels from patients with cardiovascular disease in H9C2 cells. Epicardial adipose tissue releases orosomucoid (ORM), an acute phase protein with multiple modulatory and protective properties. We aimed to identify the effect of EAT-supernatants according to their ORM levels on H9C2 cells. H9C2 were cultured with EAT-secretomes or ORM protein itself on a Real-Time Cell Analyser. Secretome proteins identification was performed by LC-mass spectrometry according to their ORM levels. Two of them were validated by ELISA in EAT-supernatants from 42 patients. ORM effect on H9C2 and neonatal rat cardiomyocytes apoptosis under hypoxia with or without fatty acid treatment was determined by Annexin-V flow cytometry measurement. Caspase-3 expression levels were determined by western blot in H9C2. Our results showed a differential effect of EAT-secretomes according their ORM levels. Although additional secreted proteins can contribute to their beneficial effects, ORM reduced hypoxia-induced apoptosis through caspase-3 inhibition. Our data showed the cardioprotective role of ORM and suggest that its quantification on EAT secretomes might help us to find new secreted factors with a cardioprotective role. abstract_id: PUBMED:29573602 Similarities and differences between families who have frequent and infrequent family meals: A qualitative investigation of low-income and minority households. Introduction: Numerous quantitative studies have examined the association between family meal frequency and child/adolescent weight and weight-related behaviors. However, limited qualitative research has been conducted to identify mealtime characteristics (e.g., child behavior during meals, rules/expectations, family dynamics) that occur during family meals that may explain why some families engage in frequent family meals and others do not. This is particularly important within racially/ethnically diverse households, as these demographic groups are at higher risk for weight-related problems. The current study aimed to identify similarities and differences in mealtime characteristics between households that have frequent and infrequent family meals within a low-income and minority population. Methods: This qualitative study included 118 parents who participated in Family Meals, LIVE!, a mixed-methods, cross-sectional study. Parents (90% female; mean age = 35) were racially/ethnically diverse (62% African American, 19% White, 4% Native American, 4% Asian, 11% Mixed/Other) and from low-income (73% &lt; $35,000/yr.) households. Data were analyzed using inductive content analysis. Results: Results indicated some similar mealtime characteristics (i.e., picky eating, involving family members in meal preparation) between households having frequent and infrequent family meals. Additionally, several differences in mealtime characteristics were identified between households having frequent (i.e., importance of family meals, flexibility in the definition of family meals, family meal rules, no pressure-to-eat feeding practices) versus infrequent family meals (i.e., pressure-to-eat parent feeding practices, family meals are dinner meals only, and difficult meal time behaviors). Discussion: Study findings may be useful for developing intervention targets for low-income and racially/ethnically diverse households so more families can benefit from the protective nature of family meals. abstract_id: PUBMED:29684414 Examining variability in parent feeding practices within a low-income, racially/ethnically diverse, and immigrant population using ecological momentary assessment. Background: Current measures of parent feeding practices are typically survey-based and assessed as static/unchanging characteristics, failing to account for fluctuations in these behaviors across time and context. The current study uses ecological momentary assessment to examine variability of, and predictors of, parent feeding practices within a low-income, racially/ethnically diverse, and immigrant sample. Methods: Children ages 5-7 years old and their parents (n = 150 dyads) from six racial/ethnic groups (n = 25 from each; Black/African American, Hispanic, Hmong, Native American, Somali, White) were recruited for this mixed-methods study through primary care clinics. Results: Among parents who used restriction (49%) and pressure-to-eat (69%) feeding practices, these feeding practices were utilized about every other day. Contextual factors at the meal associated with parent feeding practices included: number of people at the meal, who prepared the meal, types of food served at meals (e.g., pre-prepared, homemade, fast food), meal setting (e.g., kitchen table, front room), and meal emotional atmosphere (p &lt; 0.05). Parents tended to restrict desserts, dairy, and vegetables and pressure children to eat fruits, vegetables, meat proteins, and refined grains (p &lt; 0.05). There were some differences by race/ethnicity across findings (p &lt; 0.01), with Hmong parents engaging in the highest levels of pressure-to-eat feeding practices. Conclusions: Parent feeding practices varied across the week, indicating feeding practices are more likely to be context-specific, or state-like than trait-like. There were some meal characteristics more strongly associated with engaging in restriction and pressure-to-eat feeding practices. Given that parent feeding practices appear to be state-like, future interventions and health care providers who work with parents and children may want to address contextual factors associated with parent feeding practices to decrease restriction and pressure-to-eat parent feeding practices. abstract_id: PUBMED:21086023 Effects of low level microwave radiation on carcinogenesis in Swiss Albino mice. This study concerns with the multiple treatment of the target site to potent carcinogen and the super imposition of low level radiofrequency and microwave radiation. Swiss albino mice (male) were used for this investigation. The study has been divided in two parts, part A: a single dose of 7,12-dimethylbenz(a)anthracene (DMBA) 100 μg/animal was applied topically on the skin of mice and were exposed to 112 MHz amplitude modulated (AM) at 16 Hz (power density 1.0 mW/cm(2), specific absorption rate (SAR) 0.75 W/kg). Similarly after a single dose of DMBA, mice were exposed to 2.45 GHz radiation (power density of 0.34 mW/cm(2), SAR, 0.1 W/kg), 2 h/day, 3 days a week for a period of 16 weeks. The two sets of experiments were carried out separately. Part B: mice were transplanted intraperitoneally (ip) with ascites 8 × 10(8) (Ehrlich-Lettre ascites, strain E) carcinoma cells per mouse. These mice were exposed to 112 MHz amplitude modulated at 16 Hz and 2.45 GHz radiation separately for a period of 14 days. There was no tumor development in mice exposed to RF and MW. Similarly a topical application of single dose of DMBA followed by RF/MW exposure also did not produce any visible extra tumor on the skin of mice. On the other hand mice were transplanted intraperitoneally with ascites (8 × 10(8) cell/ml) and subsequently exposed to above mentioned fields for 14 days showed a slight increase in the cell numbers as compared to the control group. However, the increase is insignificant. There were insignificant differences either in the mortality or cell proliferation among the control and exposed group. This results show that low level RF or MW do not alter tumor growth and development as evidenced by no observable change in tumor size. abstract_id: PUBMED:1968945 Prediction of cell survival curves from DNA double-strand break repair data for low- and high-LET radiation. A model developed previously (Ostashevsky 1989), was applied to calculate survival curves from the DNA double-strand break (dsb) repair data for Ehrlich ascites tumour (EAT) cells irradiated with X-rays or 3.4 MeV alpha-particles (Blöcher 1988). It is shown that the calculated curves are in agreement with experiments. The relationship between the low-dose-rate survival curve slopes for low- and high-LET radiations was obtained. Data for relatively radioresistant cells (EAT, normal fibroblasts, V-79 and CHO-K1) seem to be in agreement with the predicted numbers, while those for radiosensitive cells (AT fibroblasts and xrs-6) do not. Possible reasons for this discrepancy are discussed. In the framework of the model, three factors are important for the high RBE of alpha-particles: (1) an increased radiation yield of induced dsb; (2) a reduced dsb repair rate and (3) an increased probability of losing fragments from the DNA. abstract_id: PUBMED:26156827 "Do You Wanna Breathe or Eat?": Parent Perspectives on Child Health Consequences of Food Insecurity, Trade-Offs, and Toxic Stress. Objectives: This study among 51 parents of young children under age four investigated how parents that report marginal, low and very low food security characterize how trade-offs associated with food insecurity affect parents' mental health and child well-being. Methods: We carried out 51 semi-structured audio-recorded interviews after participants responded to a survey regarding food security status and maternal depressive symptoms. Each interview was transcribed. Through a content analysis, we coded "meaning units" in each manuscript and organized them by themes in ATLAS.ti. Among participants reporting both food insecurity and depressive symptoms, we identified three primary areas of concern: trade-offs, mental health, and child well-being. Results: Parents described how trade-offs associated with food insecurity have a profound relationship with their mental health and home environment that strongly affects young children. Descriptions of hardships include anxiety and depression related to overdue bills and shut-off notices, strains with housing costs, and safety. Parents described how their own frustration, anxiety, and depression related to economic hardship have a negative impact on their children's physical health, and their social and emotional development. Conclusions: Parents in food insecure households recognize that trade-offs between food and other basic necessities are associated with their personal stress and poor mental health that, in turn, affects their children's health and development. Partnerships between healthcare providers, policymakers, and parents are essential to successfully address and prevent the poor child health outcomes of toxic stress associated with food insecurity and poverty. abstract_id: PUBMED:7220806 Characterization and properties of repair of potentially lethal damage as measured with the help of beta-arabinofuranosyladenine in plateau-phase EAT cells. N/A abstract_id: PUBMED:38436133 A Flexible Mouse Model of Autoimmune Thyroiditis Induced by Immunization with an Adenovirus Containing Full-Length Thyroglobulin cDNA. The main challenge in the "post-GWAS" era is to determine the functional meaning of genetic variants and their contribution to disease pathogenesis. Development of suitable mouse models is critical because disease susceptibility is triggered by complex interactions between genetic, epigenetic, and environmental factors that cannot be modeled by in vitro models. Thyroglobulin (TG) is a key gene for autoimmune thyroid disease (AITD) and several single nucleotide polymorphisms (SNPs) in the TG coding region have been associated with AITD. The classical model of experimental autoimmune thyroiditis (EAT), based on immunization of genetically susceptible mouse strains with purified TG protein in adjuvant, does not allow testing the impact of TG sequence variants on the development of autoimmune thyroiditis. Here we describe a protocol for the induction of EAT by immunization of mice susceptible to thyroiditis with an adenovirus vector carrying full-length human TG cDNA (Ad-TG EAT). We also provide support protocols for evaluation of autoimmune thyroiditis including serological assessment of TG antibodies, in vitro splenocyte proliferation assay and cytokines secretion, thyroid histology, and evaluation of thyroid lymphocytic infiltration by immunostaining. This protocol for EAT induction allows manipulation of the TG cDNA to introduce variants associated with AITD, enabling the testing of the functional effects of susceptible variants and their haplotypes on the immunogenicity of TG. Furthermore, the Ad-TG EAT mouse model is a valuable model for studying the interactions of the TG variants with non-genetic factors influencing AITD development (e.g., cytokines, iodine exposure) or with variants of other susceptible genes (e.g., HLA-DRβ1). © 2024 Wiley Periodicals LLC. Basic Protocol: Development of a mouse model of autoimmune thyroiditis induced by immunization with adenovirus containing full-length thyroglobulin cDNA Support Protocol 1: Splenocytes isolation Support Protocol 2: T cell stimulation and carboxyfluorescein diacetate succinimidyl ester (CFSE) based cell proliferation assay Support Protocol 3: Cytokine assays: measuring levels of interferon gamma (IFNγ) and interleukins IL-2, IL-4, and IL-10 in splenocyte supernatants Support Protocol 4: Evaluating thyroid histology and infiltration with immune cells: hematoxylin-eosin staining of mice thyroid glands Support Protocol 5: Immunohistochemistry of thyroid tissues: Immunofluorescence protocol of paraffin-embedded thyroid sections Support Protocol 6: Anti-thyroglobulin antibody measurement in mice sera by enzyme-linked immunosorbent assay (ELISA). Answer: The provided abstracts do not directly address the question of whether social support helps individuals adhere to a low-fat diet. However, some of the abstracts touch on related topics that could be indirectly associated with the role of social support in dietary habits. For instance, the study on family meals within low-income and minority households (PUBMED:29573602) suggests that family dynamics, including mealtime characteristics and family involvement in meal preparation, may influence the frequency of family meals. Frequent family meals have been associated with healthier eating patterns, which could include a lower fat diet. This implies that social support within the family unit may contribute to healthier eating habits, potentially including adherence to a low-fat diet. Additionally, the study examining variability in parent feeding practices (PUBMED:29684414) indicates that feeding practices are context-specific and can vary across different situations. This suggests that the social and environmental context, which includes social support, can influence dietary choices and behaviors. While these studies do not provide a direct answer to the question, they highlight the importance of social context and family dynamics in dietary behavior, which could extend to support for eating a low-fat diet. To conclusively answer the question, research specifically investigating the impact of social support on adherence to a low-fat diet would be required.
Instruction: Perirenal fat stranding on CT: is there an association with bladder outlet obstruction? Abstracts: abstract_id: PUBMED:27123701 Perirenal fat stranding on CT: is there an association with bladder outlet obstruction? Objective: To determine the association between perirenal fat stranding (PFS) on CT and bladder outlet obstruction (BOO). Methods: CT scans from 122 patients who had undergone urodynamic study for lower urinary tract symptoms (LUTS) were registered after exclusion of patients with renal or retroperitoneal disease. Images were independently reviewed by two radiologists and compared with those of 244 age- and sex-matched control patients without LUTS. The PFS severity was scored on a four-point scale, and the interobserver agreement was assessed with kappa statistics. The severity score and incidence was compared between the groups, and the association with baseline characteristics was analyzed. For the LUTS group, an association between PFS severity and urodynamic and laboratory data was evaluated. Results: PFS was more frequent and more severe in the LUTS group than in the control group (p-value &lt; 0.001); its presence was significantly associated with male gender and older age (p-value &lt; 0.001). PFS was predominantly bilateral in both groups (80.1-93.2%). In the LUTS group, PFS severity scores were significantly correlated with the maximum flow rate, maximum detrusor pressure and estimated glomerular filtration rate (p-value &lt; 0.001). Interobserver agreements were excellent for PFS presence (κ = 0.883) and severity (κ = 0.816). Conclusion: Severe PFS was observed in older, male patients with LUTS. PFS severity was associated with the degree of BOO and impaired renal function. Advances In Knowledge: Recognition of PFS on the CT scan may warrant further evaluation of BOO and appropriate management to prevent renal impairment. abstract_id: PUBMED:2414469 Perirenal urinoma secondary to prostatic obstruction. We report a case of perirenal urinoma secondary to benign prostatic hypertrophy. This rare complication of bladder outlet obstruction was due to subsequent hydronephrosis and increased pressure in the renal cavities, leading to rupture of caliceal fornix and allowing perirenal extravasation of urine. Complete resolution of the urinoma was obtained with bladder decompression and conservative management. abstract_id: PUBMED:14579190 Urinary ascites and perirenal urinoma - a renoprotective "Complication" of posterior urethral valves We report on a male newborn with urinary ascites and perirenal urinoma due to calyceal rupture caused by bladder outlet obstruction and posterior urethral valves. Despite the presence of left-sided high-grade reflux, the "pop-off" mechanism occurred in the non-refluxing right kidney. Impressive normalization of the upper urinary tract anomalies and renal function occurred after surgical treatment of the valves and removal of obstruction. abstract_id: PUBMED:7647808 Use of injectable fat to obstruct the urethra in rabbits. Bulking agents have been injected to correct urinary incontinence for at least 15 years. The injection seeks to increase bladder outlet resistance by partially obstructing the urethra and thereby reduce urinary leakage in patients with stress urinary incontinence. Although the implant is effective and requires a shorter in-hospital stay than more traditional procedures, no ideal implant substance has been discovered. To assess the effectiveness of injected fat as a bulking agent, we injected small volumes of perivesical fat into the bladder neck in New Zealand White (NZW) rabbits and tested the effect on the bladder. In eight rabbits, we harvested perivesical fat and partially closed the urethral lumen with an initial injection. A second injection 1 month later completely closed the urethral lumen. We injected six other rabbits with similar volumes of saline as controls. Two weeks after the second injection, we measured micturition frequency, bladder weight, response to electrical field stimulation, and response to bethanecol in each group. Fat implants were present at the injection site in each case 4 weeks after the first injection. Rabbits receiving fat implants had increased micturition frequency, increased bladder weight, and increased response to bethanecol and field stimulation. Previous studies have demonstrated that these changes are characteristic of mild outlet obstruction in rabbits. Injected fat can be made to close the urethra and create bladder outlet obstruction in rabbits. The short-term success of fat as a bulking agent in this experiment is encouraging and suggests the need for longer term studies. abstract_id: PUBMED:12657357 Urethrolysis with Martius labial fat pad graft for iatrogenic bladder outlet obstruction. This article evaluates treatment outcomes of urethrolysis with the Martius labial fat pad graft for patients with outlet obstruction after incontinence surgery. A total of 23 women were diagnosed with iatrogenic bladder outlet obstruction by urinary retention, urodynamic criteria, physical examination findings, and/or temporal relation of voiding dysfunction to anti-incontinence surgery. The urodynamic definition of female outlet obstruction was a maximum flow rate &lt;12 mL/sec and a detrusor pressure at maximum flow &gt;20 cm of water. Surgical treatment consisted of urethrolysis with complete circumferential urethral mobilization. A Martius labial fat pad graft was used to circumferentially wrap the urethra. No concurrent resuspension procedures were performed. Procedure efficacy was determined by retrospective review and phone interview. Mean patient age was 55 years (range, 37 to 85 years). Mean postoperative follow-up time was 15 months (maximum, 44 months). All patients related voiding dysfunction symptoms to their anti-incontinence surgery. In all, 17 of 23 (74%) patients had preoperative urinary retention requiring catheterization, and 63% of patients met urodynamic criteria for obstruction. After urethrolysis with a Martius labial fat pad graft, 20 of 23 (87%) patients had complete resolution of their obstruction; 3 patients required persistent catheterization. Postoperative stress incontinence was reported by 6 of 23 (13%) patients. Urodynamically documented detrusor instability occurred in 6 of 23 (26%) patients with de novo detrusor instability occurring in 3 of 15 (20%) patients. abstract_id: PUBMED:5347777 Neonatal ascites: perirenal urinary extravasation with bladder outlet obstruction. N/A abstract_id: PUBMED:15373255 Urinary bladder incarceration and infarction by an intra-abdominal fat pad in a Virginia opossum (Didelphis virginiana). A 2.5-year-old, female opossum had acute stranguria. Based on radiography and ultrasonographic examination a cystic structure was identified in the caudal abdomen associated with bilateral hydroureter and hydronephrosis. This structure contained a neutrophilic fluid, determined to be urine. There was a neutrophilic leukocytosis. Serum chemistry values were within normal limits. The opossum was euthanized. An intra-abdominal fat pad incarceration of the urinary bladder above the trigone was present, resulting in complete obstruction of the urinary bladder and partial obstruction of the ureters. Vessels to the bladder were involved in the incarceration which resulted in vascular compromise and infarction of the bladder. Mild to moderate hydroureter and hydronephrosis were present. abstract_id: PUBMED:1557845 Gas-forming infections in genitourinary tract. Diabetes mellitus associated with urinary tract infections and ureteral obstruction can be predisposing factors leading to emphysematous pyelonephritis. Fever, flank pains, and a palpable renal mass, associated with dehydration and hyperglycemia, were the most frequent presenting symptoms associated with emphysematous pyelonephritis. Computerized tomography (CT) scan is the best method to identify a renal or perirenal abscess and its ramifications. Intravenous antibiotic therapy is determined by blood and urine cultures. Mortality was zero in patients treated by nephrectomy. One patient who had incision and drainage of a renal abscess died of sepsis, and 1 patient died of sepsis following incision and drainage of a prostatic abscess. Patients with cystitis emphysematosa require antibiotic therapy and relief of bladder outlet obstruction. Prostatic abscess is best treated by perineal incision and drainage. Periurethral scrotal abscesses should be incised, drained, and the overlying necrotic skin debrided. Early diagnosis and aggressive medical and surgical management of gas-forming infections of the genitourinary tract are vital. abstract_id: PUBMED:24139352 Long-term morbidity of Martius labial fat pad graft in vaginal reconstruction surgery. Objective: To assess the long-term morbidity of Martius labial fat pad (MLFP) interposition in vaginal reconstruction procedures at a tertiary institution. Materials And Methods: After institutional review board approval, medical records of consecutive women in a prospective surgical database requiring MLFP as part of their vaginal reconstruction were reviewed for demographics, indications for MLFP, complications, and outcomes, with minimum of 6 months follow-up. Three MLFP groups were compared as follows: (1) vesicovaginal fistula, (2) bladder outlet obstruction, and (3) others. Patients were contacted by mailed survey and/or structured telephonic interview with Quality of Life score, validated Female Sexual Function Index questionnaire, and a specific question addressing Martius harvest site, that is, "pain or numbness in labia". Results: Between 1996 and 2011, 122 women met inclusion criteria, with 25 excluded for lack of follow-up details or death. Mean age was 54 years (range, 19-78), with mean BMI 28 kg/m(2) (range, 19-43) and mean follow-up of 85 months (range, 6-202). Indications for MLFP included vesicovaginal fistula (20), bladder outlet obstruction (60), and others (17) (bladder neck closures, urethral diverticulum, and excision of duplicate urethra). No perioperative complications were recorded. Of the 97 women, 79 (81%) had normal sensation, with 5 (5%) reporting pain and 13 (14%) reporting numbness, respectively. Nine (7%) reported distortion of labia majora. Of the 29 women reporting sexual activity, only 26 (27%) responded to Female Sexual Function Index questionnaires with equivocal sexual function outcomes between all 3 surgical groups. Conclusion: The MLFP has minimal early and delayed morbidity at mean 7 years follow-up. abstract_id: PUBMED:12028629 Spontaneous ureteral rupture: is immediate surgical intervention always necessary? Presentation of four cases and review of the literature. Purpose: We report our experience with spontaneous ureteral rupture (SUR) managed conservatively. Case Reports: Data on three men and one woman 53 to 89 years old (mean age 73) with SUR were retrospectively evaluated. The common complaint was sudden-onset abdominal and concomitant flank pain. On physical examination, all patients had abdominal tenderness and pain with costovertebral angle tenderness on the associated side. One patient had rheumatoid arthritis treated with corticosteroids, and one had carcinoma of the prostate. All patients had urinary extravasation on CT scans. Two patients had ureteral obstruction by stones, and another had bladder outlet obstruction secondary to prostate cancer. Interestingly, irrespective of the obstruction level, the rupture was near the ureteropelvic junction in all patients. The symptoms regressed within hours after basket catheterization and ureteral stent placement in one patient and close follow-up and supportive treatment in three patients. In two patients, disappearance of extravasation and reabsorption of perirenal fluid were confirmed by second- and third-day CT scans. The patients were followed for a mean of 17 (range 14-21) months without any problems. Conclusions: We believe that SUR cases are more frequent than reported. It is the authors' opinion that if the clinical scenario is suspected in the acute phase and investigated by appropriate radiologic techniques, many more SUR cases will be diagnosed. Careful monitoring with supportive measures may be curative in the majority of the patients, obviating a surgical intervention. Answer: Yes, there is an association between perirenal fat stranding (PFS) on CT and bladder outlet obstruction (BOO). A study aimed to determine this association found that PFS was more frequent and more severe in patients with lower urinary tract symptoms (LUTS) suggestive of BOO than in a control group without LUTS. The presence of PFS was significantly associated with male gender and older age. In the LUTS group, the severity of PFS was significantly correlated with urodynamic measures such as the maximum flow rate and maximum detrusor pressure, as well as with estimated glomerular filtration rate, indicating an association with the degree of BOO and impaired renal function. The study concluded that severe PFS observed in older, male patients with LUTS was associated with the degree of BOO, and the recognition of PFS on CT scans may warrant further evaluation of BOO and appropriate management to prevent renal impairment (PUBMED:27123701). Additionally, there are case reports of perirenal urinoma secondary to prostatic obstruction, which is a rare complication of BOO due to subsequent hydronephrosis and increased pressure in the renal cavities leading to rupture of caliceal fornix and allowing perirenal extravasation of urine (PUBMED:2414469). Another case involved urinary ascites and perirenal urinoma due to calyceal rupture caused by BOO and posterior urethral valves in a newborn, where the "pop-off" mechanism occurred in the non-refluxing kidney, and normalization of upper urinary tract anomalies and renal function occurred after surgical treatment of the valves and removal of obstruction (PUBMED:14579190). These cases further support the association between PFS and BOO.
Instruction: Do SpaceLabs ambulatory non-invasive blood pressure recorders measure blood pressure consistently over several years use? Abstracts: abstract_id: PUBMED:15687874 Do SpaceLabs ambulatory non-invasive blood pressure recorders measure blood pressure consistently over several years use? Objective: To assess the measurement consistency of SpaceLabs ambulatory recorders (Spacelabs, Washington, USA) that are in regular use. Methods: A total of 14 SpaceLabs 90207 and one 90217 ambulatory recorders were tested for measurement consistency using the Dynatech CuffLink (Dynatech, Nevada, USA), a commercially available non-invasive blood pressure (NIBP) simulator. The NIBP recorders were tested at a range of pressures with 20 repeated determinations at a simulated 120/80 mmHg and five repeated determinations at simulated pressures of 80/50, 100/80, 150/100, 200/165 and 250/195 mmHg. Tests were carried out in 1998, 2002 and late 2003 or early 2004. Conclusions: All 15 SpaceLabs recorders measured consistently over the 6 years with 89.5% of the differences in average pressures, recorded by any particular device at each recorded pressure, less than 2 mmHg between successive test episodes. The maximum difference was 4.5 mmHg and 60.1% of the differences were less than 1 mmHg. The measurements for all devices were within the tolerances specified by the supplier for the device when tested with the simulator. Maintenance records also show that most devices required breakdown maintenance less than once every 3 years. The results show that the SpaceLabs devices maintain measurement consistency in the demanding conditions of ambulatory pressure recording over several years. abstract_id: PUBMED:1893652 Assessment of ambulatory blood pressure recorders: accuracy and clinical performance. There are now more than ten different manufacturers of non-invasive, portable blood pressure monitors in North America, Europe, and Japan. These ambulatory blood pressure recorders measure blood pressure by either auscultatory or oscillometric methodology. Technologic advances in the recorders have resulted in reduction in monitor size, reduction in or absence of motor noise during cuff inflation, ability to program the recorder without an external computer system, and enhanced precision. Recently, there has been concern that more structured validation protocols have not been implemented prior to the widespread marking of ambulatory blood pressure recorders. There is a need for proper assessment of recorders prior to use in clinical research or practice. Data on several existing recorders suggest that while most are reasonably accurate during resting measurements, many lose this accuracy during motion, and clinical performance may vary among the monitors. Validation studies of ambulatory recorders should include comparison with mercury column and intra-arterial determinations, resting and motion measurements, and assessment of clinical performance in hypertensive patients. abstract_id: PUBMED:3099709 Comparison of ambulatory blood pressure determinations using the Remler and Spacelabs devices Ambulatory blood pressure recorded with the Spacelabs was compared with both Remler recordings and office blood pressures in 11 normotensive and 13 hypertensive volunteers. Analysis of 324 simultaneous measurements with Remler and Spacelabs showed very high correlation for systolic and for diastolic blood pressure. For systolic blood pressure, measurements with Spacelabs recorder were lower than those with the Remler recorder by a mean of 3.6 +/- 6.5 mmHg, principally in hypertensive subjects. Spacelabs measurements were also lower than Remler measurements for diastolic blood pressure by a mean of 5.3 +/- 6.6 mmHg principally in normotensive subjects. The averages of blood pressure variability (standard deviation as index) recorded by each device were identical. Mean systolic and diastolic blood pressure differed by more than 10 and 5 mmHg respectively between the two devices in 11 and 83% of the subjects. These individual discrepancies were unpredictable. Comparison of Spacelabs records with office blood pressure measured with a standard mercury manometer showed discrepancies of more than 10 mmHg for systolic blood pressure and more than 5 mmHg for diastolic blood pressure in 46 and 42% of the 24 subjects, respectively. These discrepancies could be due, in part, to different methods of blood pressure measurements. These data demonstrate that the Spacelabs apparatus provides records for a group, reasonably close to those obtained with the Remler over a wide range of blood pressure. For the management of hypertension, the advantages--as compared to physician's measurements--of ambulatory blood pressures recorded with indirect methods,need to be weighed, taking into consideration the difference between the ambulatory record and the office blood pressure measured with the same device. abstract_id: PUBMED:1795209 Testing the accuracy of blood pressure monitoring devices in ambulatory conditions. In recent years technological progress has improved the construction of ambulatory blood pressure monitoring devices. This has resulted in devices able to measure blood pressure continuously and non-invasively, and also in lighter, less noisy and more accurate intermittent blood pressure monitors. The accuracy of monitors, however, is still tested by taking blood pressure measurements at rest, and testing against intra-arterial blood pressure values, in true ambulatory conditions, is very seldom used. When evaluated by the latter approach, devices such as SpaceLabs 5300 and the Sandoz SPS 1558 recorders can be substantially inaccurate. Newer devices such as the SpaceLabs 90202 and 90207 are also somewhat inaccurate, particularly when diastolic blood pressure is considered. However, hour-to-hour changes in blood pressure obtained by the SpaceLabs 90202 and 90207 monitors are qualitatively and quantitatively similar to those obtained by invasive methods. This makes it possible to describe the 24-h blood pressure profile more accurately. abstract_id: PUBMED:26529437 Accuracy of the Spacelabs 90217 ambulatory blood pressure monitor in a pediatric population. Background: Ambulatory blood pressure monitoring techniques provide unique advantages for diagnosing hypertension, although few devices have been independently validated in the pediatric population. Methods: We sought to validate the accuracy of ambulatory blood pressure monitoring with the Spacelabs 90217 monitor in children using a modified British Hypertension Society protocol. Results: A total of 112 children, aged between 6 and 17 years, completed the study at one of the three participating centers. Overall, the monitor earned an 'A' for systolic blood pressure and 'B' for diastolic blood pressure. It performed slightly better among 6-12 year olds (A/A) compared with 13-17 year olds (A/B). Conclusions: We conclude that the Spacelabs 90217 monitor is an appropriate monitor for use in children who are 6 years of age or older. abstract_id: PUBMED:1639465 Evaluation of noninvasive blood pressure monitoring devices Spacelabs 90202 and 90207 versus resting and ambulatory 24-hour intra-arterial blood pressure. This study evaluated the accuracy of blood pressure values provided by the Spacelabs 90202 and 90207 devices in comparison with intra-arterial recording in 19 subjects at rest and in nine subjects in ambulatory conditions (Oxford method). At rest Spacelabs monitors reflected intra-arterial systolic blood pressure values very closely but overestimated to a considerable extent intra-arterial diastolic blood pressure (Spacelabs-intra-arterial differences, -0.8 +/- 9.2, NS, and 9.1 +/- 8.8 mm Hg, p less than 0.01, for systolic and diastolic blood pressures, respectively). In ambulatory conditions Spacelabs-intra-arterial average differences in 24-hour values were +0.4 +/- 5.1 mm Hg for systolic blood pressure (NS) and +14.0 +/- 2.9 mm Hg for diastolic blood pressure (p less than 0.01) when group data were considered. The performance of both Spacelabs devices was worse when assessed in individual subjects or for each hourly interval. In spite of these differences between noninvasive and intra-arterial absolute blood pressure values, however, Spacelabs 90202 and 90207 monitors were able to faithfully reflect directional hour-to-hour changes in intra-arterial blood pressure (chi 2 = 18.2 and chi 2 = 23.1 for systolic and diastolic blood pressures, respectively, p less than 0.01). No differences were found between the performance of the two Spacelabs devices. Thus, although the absolute accuracy of blood pressure values provided by these monitors in ambulatory subjects is still limited, they seem to be suitable for studies aimed at assessing 24-hour blood pressure profiles quantitatively as well as qualitatively. abstract_id: PUBMED:8512726 Use of spreadsheet for statistical and graphical processing of records from the ambulatory blood pressure monitor Spacelabs 90207 The introduction of portable devices for non-invasive ambulatory blood-pressure measurement is recognized as an advance in the study of human arterial hypertension, allowing a significant improvement in the selection of hypertensive patients as well as in the analysis of the effects of antihypertensive drugs during clinical trials. The Spacelabs 90207 is a recent example of this kind of apparatus, possessing high levels of portability and being highly classified in validation studies. Nevertheless, the software of this apparatus (like other similar devices) has severe limitations in what concerns the calculation of the area under the curve of blood pressure during the time of measurement, as well as in the possibility of grouping several records in a database for easy statistic and graphic analysis of different groups of records. In order to overcome these difficulties, the authors describe the development of a group of programs, using Microsoft Excel v3.0 spreadsheets and macros, that allow a direct import of individual files from the Spacelabs software to a spreadsheet and its further processing in three phases. These three phases, which we designated by "conversion", "export to database" and "statistic and graphic analysis", will permit an easy and fast statistic and graphic analysis of selected groups of records. abstract_id: PUBMED:24374722 Blood pressure variability of two ambulatory blood pressure monitors. Objective: There are no data on the evaluation of blood pressure (BP) variability comparing two ambulatory blood pressure monitoring monitors worn at the same time. Hence, this study was carried out to compare variability of BP in healthy untreated adults using two ambulatory BP monitors worn at the same time over an 8-h period. Methods: An Accutorr device was used to measure office BP in the dominant and nondominant arms of 24 participants.Simultaneous 8-h BP and heart rate data were measured in 24 untreated adult volunteers by Mobil-O-Graph (worn for an additional 16 h after removing the Spacelabs monitor) and Spacelabs with both random (N=12) and nonrandom (N=12) assignment of each device to the dominant arm. Average real variability (ARV), SD, coefficient of variation, and variation independent of mean were calculated for systolic blood pressure, diastolic blood pressure, mean arterial pressure, and pulse pressure (PP). Results: Whether the Mobil-O-Graph was applied to the dominant or the nondominant arm, the ARV of mean systolic (P=0.003 nonrandomized; P=0.010 randomized) and PP (P=0.009 nonrandomized; P=0.005 randomized) remained significantly higher than the Spacelabs device, whereas the ARV of the mean arterial pressure was not significantly different. The average BP readings and ARVs for systolic blood pressure and PP obtained by the Mobil-O-Graph were considerably higher for the daytime than the night-time. Conclusion: Given the emerging interest in the effect of BP variability on health outcomes, the accuracy of its measurement is important. Our study raises concerns about the accuracy of pooling international ambulatory blood pressure monitoring variability data using different devices. abstract_id: PUBMED:7631686 Ambulatory blood pressure in pregnancy: comparison of the Spacelabs 90207 and Accutracker II monitors with intraarterial recordings. Objective: Our purpose was to compare systolic and diastolic blood pressures obtained with the Spacelabs 90207 (Spacelabs Medical Products, Dee Why, Australia) or Accutracker II (Suntech Medical Instruments, Melbourne, Australia) ambulatory blood pressure monitoring devices with intraarterial blood pressures in pregnant women. Study Design: Direct (intraarterial) and resting blood pressures with the ambulatory blood pressure monitoring device were compared in 39 pregnant women (14 Accutracker II and 25 Spacelabs 90207). Results: The Accutracker II device underestimated direct systolic pressure by -9 (-13, -3) mm Hg (median, interquartile range) (p = 0.028) but gave similar diastolic pressure. The Spacelabs 90207 device gave similar systolic pressures but overestimated direct diastolic pressure by 7 (2, 12) mm Hg (p &lt; 0.001). Variability for systolic and diastolic blood pressures within subjects was similar with the two devices. Both received poor gradings by standards of the British Hypertension Society and did not meet criteria of the Association for the Advancement of Medical Instrumentation, with intraarterial recordings used as the reference. Conclusions: The Accutracker II device significantly underestimated resting direct systolic pressure, whereas the Spacelabs 90207 device significantly overestimated resting direct diastolic pressure in pregnant women. Although poor gradings were achieved for both devices when intraarterial pressures were used as the reference, this is similar to comparisons of routine mercury sphygmomanometry with intraarterial recordings and does not mean these devices are unsuitable for use in pregnancy. abstract_id: PUBMED:8217971 Validation of the SpaceLabs 90207 ambulatory blood pressure monitor for use in pregnancy. Objective: To determine the accuracy of the SpaceLabs 90207 ambulatory blood pressure monitor in pregnant women. Design: Prospective observational study. Setting: Queen Charlotte's and Chelsea Hospital for Women. Subjects: One hundred and twenty-two pregnant women from the antenatal population of Queen Charlotte's Hospital. Methods: Evaluation was carried out according to the British Hypertension Society (BHS) Protocol. Results: The mean difference between observer and device for systolic blood pressure was 3 (SD 4) mmHg. For diastolic blood pressure the difference was 5 (SD 6) mmHg for Phase IV (muffling of Korotkoff sound), and 4 (SD 4) mmHg for Phase V (disappearance of Korotkoff sound). According to the BHS grading criteria, the device reached a B grading for systolic blood pressure. B grading was also reached for diastolic blood pressure (Phase V) but grade C for diastolic blood pressure (Phase IV). The accuracy criteria stipulated by the Association for the Advancement of Medical Instrumentation (AAMI) were also met for both systolic and diastolic blood pressure. Conclusion: The SpaceLabs 90207, when compared with trained observers, is accurate in determining both systolic and diastolic blood pressure for clinical purposes, as judged by the BHS Protocol in pregnancy. Answer: Yes, SpaceLabs ambulatory non-invasive blood pressure recorders have been shown to measure blood pressure consistently over several years of use. A study assessing the measurement consistency of 14 SpaceLabs 90207 and one 90217 ambulatory recorders found that all 15 devices measured consistently over 6 years, with 89.5% of the differences in average pressures recorded by any particular device at each recorded pressure being less than 2 mmHg between successive test episodes. The maximum difference was 4.5 mmHg, and 60.1% of the differences were less than 1 mmHg. The measurements for all devices were within the tolerances specified by the supplier for the device when tested with the simulator, and maintenance records showed that most devices required breakdown maintenance less than once every 3 years. These results indicate that SpaceLabs devices maintain measurement consistency in the demanding conditions of ambulatory pressure recording over several years (PUBMED:15687874).
Instruction: Does the early feedback of results improve reassurance following diagnostic testing? Abstracts: abstract_id: PUBMED:25133827 Does the early feedback of results improve reassurance following diagnostic testing? A randomized controlled trial in patients undergoing cardiac investigation. Objective: Providing reassurance is often a critical component of the medical consultation. An important area that has not been addressed in the literature is how delay in providing the results of medical tests affects patient reassurance. In this study we investigated whether the early provision of a normal diagnostic result immediately following medical testing improves patient reassurance compared to results provided 4 weeks later. Method: We conducted a longitudinal randomized controlled trial and 1-month follow-up. Fifty-one cardiology outpatients with no known cardiac pathology referred for an echocardiogram test were randomized following normal test results to receive their test results from a cardiologist either immediately following testing or 4 weeks later. Measures of symptoms, anxiety, and health perceptions were taken prior to diagnostic testing. Reassurance was assessed immediately after the results were provided and 1 month later. Results: Data analysis showed that the provision of early results had no impact on patient reassurance. Cardiac anxiety was strongly associated with lower reassurance; patients who were more anxious about their heart were significantly less reassured by a normal test result, both immediately following feedback and 1 month later. Conclusions: The early provision of test results had no impact on patient reassurance. The study suggests the identification and targeting of patients high in cardiac anxiety may be a better method for improving reassurance than reducing the waiting time for results following medical testing. abstract_id: PUBMED:34104563 Lack of reassurance after unexpected positive health risk feedback - an analysis of temporal dynamics. Introduction: How do people receive unexpected positive health risk information? While common motivational accounts predict acceptance, consistency accounts such as the cue-adaptive reasoning account (CARA) predict a 'lack of reassurance'. Objectives: We therefore tested (1) whether people prefer striving for positivity or retaining a sense of self-consistency ('lack of reassurance'), and (2) if there are systematic differences in short- and long-term reception, which would indicate temporal dynamics in processing. Methods: As part of a longitudinal cohort study, participants of a community health screening (N = 1,055) received their actual cholesterol readings. Feedback reception was assessed immediately, at one month and six months. Results: Processing trajectories for unexpected positive feedback showed a significant 'lack of reassurance' effect over time compared with expected positive feedback, while unexpected negative feedback was less threatening than expected negative feedback. Conclusions: The perseverance of this 'lack of reassurance' over time indicates that striving for consistency in self-views is a robust phenomenon, even if it means forfeiting a better view of one's own health. abstract_id: PUBMED:27412477 Pre-Feedback Risk Expectancies and Reception of Low-Risk Health Feedback: Absolute and Comparative Lack of Reassurance. Background: Personalised health-risk assessment is one of the most common components of health promotion programs. Previous research on responses to health risk feedback has commonly focused on the reception of bad news (high-risk feedback). The reception of low-risk feedback has been comparably neglected since it is assumed that good news is reassuring and readily received. However, field studies suggest mixed responses to low-risk health feedback. Accordingly, we examine whether pre-feedback risk expectancies can mitigate the reassuring effects of good news. Methods: In two studies (N = 187, N = 565), after assessing pre-feedback risk expectancies, participants received low-risk personalised feedback about their own risk of developing (the fictitious) Tucson Chronic Fatigue Syndrome (TCFS). Study 2 also included peer TCFS risk status feedback. Afterwards, self- and peer-related risk perception for TCFS was assessed. Results: In both studies, participants who expected to be at high risk but received good news (unexpected low-risk feedback) showed absolute lack of reassurance. Specifically, they felt at significantly greater TCFS risk than participants who received expected good news. Moreover, the unexpected low-risk group even believed that their risk was as high as (Study 1) or higher (Study 2) than that of their peers (comparative lack of reassurance). Conclusion: Results support the notion that high pre-feedback risk expectancies can mitigate absolute and comparative reassuring effects of good news. abstract_id: PUBMED:32875934 Using the consultation-based reassurance questionnaire to assess reassurance skills among physiotherapy students: reliability and responsiveness. Background: There is a robust body of evidence suggesting that communication skills during consultations are associated with patient satisfaction and adherence. Training practitioners to improve communication and reassurance delivery is therefore desirable. The Consultation-Based Reassurance Questionnaire (CRQ) for people presenting with back pain has not been tested as a tool to examine the reliability and responsiveness of communication training for practitioners. Objective: To translate and examine the reliability of the CRQ and to explore its ability to detect change (responsiveness) before and after a single session of training in communication skills. Methods: Thirty-five simulated consultations were recorded. 36 second-year physiotherapy students took part, either as a simulated patient or a simulated physiotherapist. All videotape simulations were rated independently by two trained observers, using the CRQ. Results: Correlations indicate that the two raters were significantly and highly correlated (r = 0.9, 95% CI 0.797-0.951). The responsiveness statistics were in the moderate range for the total CRQ score, with moderate responsiveness range for data-gathering and relationship-building, and acceptable responsiveness for generic and cognitive reassurance. Conclusion: The CRQ Scale shows good reliability and acceptable levels of responsiveness to detect change before and after training in communication skills in physiotherapy students. The scale requires testing in real-life settings to establish better responsiveness. abstract_id: PUBMED:2558207 A diagnostic centre for general practitioners: results of individual feedback on diagnostic actions. A diagnostic centre, managing diagnostic tests for general practice, can improve the service provided by primary health care and the communication between general practitioners and specialists. In addition, it can evaluate the use and misuse of tests. This paper describes the work of a diagnostic centre in the Netherlands serving 80 general practitioners. Following the introduction of individual feedback to general practitioners on their use of diagnostic tests there was a decrease in the number of requests for tests. abstract_id: PUBMED:36861765 Teacher feedback-based collaborative testing improves students' knowledge gaps of parasitology. Collaborative testing has been demonstrated the ability to improve students' performance, enhance students' learning, and aid in knowledge retention in many different courses. However, this examination mode lacks the process of teacher feedback. Herein, a short teacher feedback from was added immediately after the collaborative testing to improve the students' performance. A parasitology class of 121 undergraduates was randomized into two groups: group A and group B. Collaborative testing was carried out at the end of theoretical teaching. During the test, students would first answer questions as individuals for 20 minutes. Then, students from group A answered the same questions in groups (5 students in each group) for 20 minutes, while the group-testing duration was only 15 minutes in group B. Immediately after the group testing, teachers conducted a 5-minute feedback about the morphology identification according to the analysis of the answers by group B. Four weeks later, a final test was conducted in an individual test. The total scores and scores for each examination content were analyzed. The results showed that there was no significant difference in the final exam scores between both groups (t = -1.278, P = 0.204). However, the morphological and diagnostic test results of the final examination in group B were significantly higher than those of the midterm examination, while there was no significant change in group A (t = 4.333, P = 0.051). The results confirmed that the teacher feedback after the collaborative testing can effectively make up for the students' knowledge gaps.NEW &amp; NOTEWORTHY This study found that collaborative group testing is helpful for teachers to grasp students' knowledge gaps more easily and the teacher feedback after the collaborative group testing can effectively make up for the knowledge gaps of students. abstract_id: PUBMED:35830287 Current practice in diagnostic genetic testing of the epilepsies Epilepsy genetics is a rapidly developing field, in which novel disease-associated genes, novel mechanisms associated with epilepsy, and precision medicine approaches are continuously being identified. In the past decade, advances in genomic knowledge and analysis platforms have begun to make clinical genetic testing accessible for, in principle, people of all ages with epilepsy. For this reason, the Genetics Commission of the International League Against Epilepsy (ILAE) presents this update on clinical genetic testing practice, including current techniques, indications, yield of genetic testing, recommendations for pre- and post-test counseling, and follow-up after genetic testing is completed. We acknowledge that the resources vary across different settings but highlight that genetic diagnostic testing for epilepsy should be prioritized when the likelihood of an informative finding is high. Results of genetic testing, in particular the identification of causative genetic variants, are likely to improve individual care. We emphasize the importance of genetic testing for individuals with epilepsy as we enter the era of precision therapy. abstract_id: PUBMED:34417335 Improving diagnostic performance through feedback: the Diagnosis Learning Cycle. Background: Errors in reasoning are a common cause of diagnostic error. However, it is difficult to improve performance partly because providers receive little feedback on diagnostic performance. Examining means of providing consistent feedback and enabling continuous improvement may provide novel insights for diagnostic performance. Methods: We developed a model for improving diagnostic performance through feedback using a six-step qualitative research process, including a review of existing models from within and outside of medicine, a survey, semistructured interviews with individuals working in and outside of medicine, the development of the new model, an interdisciplinary consensus meeting, and a refinement of the model. Results: We applied theory and knowledge from other fields to help us conceptualise learning and comparison and translate that knowledge into an applied diagnostic context. This helped us develop a model, the Diagnosis Learning Cycle, which illustrates the need for clinicians to be given feedback about both their confidence and reasoning in a diagnosis and to be able to seamlessly compare diagnostic hypotheses and outcomes. This information would be stored in a repository to allow accessibility. Such a process would standardise diagnostic feedback and help providers learn from their practice and improve diagnostic performance. This model adds to existing models in diagnosis by including a detailed picture of diagnostic reasoning and the elements required to improve outcomes and calibration. Conclusion: A consistent, standard programme of feedback that includes representations of clinicians' confidence and reasoning is a common element in non-medical fields that could be applied to medicine. Adapting this approach to diagnosis in healthcare is a promising next step. This information must be stored reliably and accessed consistently. The next steps include testing the Diagnosis Learning Cycle in clinical settings. abstract_id: PUBMED:28895257 Do negative screening test results cause false reassurance? A systematic review. Purpose: It has been suggested that receiving a negative screening test result may cause false reassurance or have a 'certificate of health effect'. False reassurance in those receiving a negative screening test result may result in them wrongly believing themselves to be at lower risk of the disease, and consequently less likely to engage in health-related behaviours that would lower their risk. Methods: The present systematic review aimed to identify the evidence regarding false reassurance effects due to negative screening test results in adults (over 18 years) screened for the presence of a disease or its precursors, where disease or precursors are linked to lifestyle behaviours. MEDLINE and PsycINFO were searched for trials that compared a group who had received negative screening results to an unscreened control group. The following outcomes were considered as markers of false reassurance: perceived risk of disease; anxiety and worry about disease; health-related behaviours or intention to change health-related behaviours (i.e., smoking, diet, physical activity, and alcohol consumption); self-rated health status. Results: Nine unique studies were identified, reporting 55 measures in relation to the outcomes considered. Outcomes were measured at various time points from immediately following screening to up to 11 years after screening. Despite considerable variation in outcome measures used and timing of measurements, effect sizes for comparisons between participants who received negative screening test results and control participants were typically small with few statistically significant differences. There was evidence of high risk of bias, and measures of behaviours employed were often not valid. Conclusions: The limited evidence base provided little evidence of false reassurance following a negative screening test results on any of four outcomes examined. False reassurance should not be considered a significant harm of screening, but further research is warranted. Statement of contribution What is already known on this subject? It has been argued that screening for disease may cause 'false reassurance' whereby those who receive a negative screening test result wrongly interpret their result as indicating they are less likely to develop the disease in future. There is some evidence for false reassurance, but the relevant studies consider a range of diseases and possible indicators of false reassurance (i.e., risk perceptions, lifestyle behaviours, emotional outcomes, and quality of life). For these reasons, it is currently unclear that the extent to receive negative screening test results is likely to impact on participants' lifestyle behaviours, or other possible indicators of false reassurance. What does this study add? Current available evidence shows that negative screening test results are unlikely to cause false reassurance and, in particular, are unlikely to have a negative impact on lifestyle behaviours. Given the limitations of the current evidence base in terms of number of studies and study quality, future research should continue to explore this issue, where this can be done at low cost. abstract_id: PUBMED:29941829 Double Feedback Control Method for Determining Early-Age Restrained Creep of Concrete Using a Temperature Stress Testing Machine. Early-age restrained creep influences the cracking properties of concrete. However, conventional creep measurements require a large number of tests to predict the restrained creep as it is influenced by the combined effects of variable temperature, creep recovery, and varying compression and tension stresses. In this work, a double feedback control method for temperature stress testing was developed to measure the early-age restrained creep of concrete. The results demonstrate that the conventional single feedback control method neglects the effect of restrained elastic deformation, thus providing a larger-than-actual creep measurement. The tests found that the double feedback control method eliminates the influence of restrained elastic deformation. The creep results from the double feedback method match well with results from the single feedback method after compensation for the effects of restrained elastic deformation is accounted for. The difference in restrained creep between the single and double feedback methods is significant for concrete with a low modulus of elasticity but can be neglected in concrete with a high modulus of elasticity. The ratio between creep and free deformation was found to be 40⁻60% for low, moderate, and high strength concretes alike. The double feedback control method is therefore recommended for determining the restrained creep using a temperature stress testing machine. Answer: The early feedback of results does not appear to improve reassurance following diagnostic testing. A randomized controlled trial conducted on cardiology outpatients who underwent an echocardiogram test found that providing early results had no impact on patient reassurance. The study highlighted that cardiac anxiety was strongly associated with lower reassurance, suggesting that patients who were more anxious about their heart were significantly less reassured by a normal test result, both immediately following feedback and 1 month later. This suggests that targeting patients with high cardiac anxiety may be more effective in improving reassurance than reducing the waiting time for results (PUBMED:25133827). Additionally, another study on the reception of unexpected positive health risk information showed a significant 'lack of reassurance' effect over time compared with expected positive feedback. This indicates that striving for consistency in self-views is a robust phenomenon, even if it means forfeiting a better view of one's own health (PUBMED:34104563). Furthermore, research on the reception of low-risk health feedback found that pre-feedback risk expectancies can mitigate the reassuring effects of good news. Participants who expected to be at high risk but received unexpected low-risk feedback showed an absolute lack of reassurance, feeling at significantly greater risk than those who received expected good news (PUBMED:27412477). In summary, the evidence suggests that early feedback of results does not necessarily improve reassurance following diagnostic testing, and factors such as cardiac anxiety and pre-feedback risk expectancies play a significant role in how patients perceive and are reassured by their test results.
Instruction: I. Do chronic pain patients' perceptions about their preinjury jobs determine their intent to return to the same type of job post-pain facility treatment? Abstracts: abstract_id: PUBMED:8788574 I. Do chronic pain patients' perceptions about their preinjury jobs determine their intent to return to the same type of job post-pain facility treatment? Objective: To demonstrate that chronic pain patients' (CPPs') perceptions about their preinjury jobs determine their intent to return to the same type of job post pain facility treatment. Design: A total of 225 CPPs completed a series of rating scales and yes/no questions relating to their preinjury job perceptions and a question relating to intent to return to the same type of preinjury job post-pain facility treatment. The CPPs were broken down into subgroups (males, females, college males, noncollege males, college females, noncollege females), and within each subgroup those not intending to return to the same type of pre-injury job were compared to those intending to return on the preinjury job perception questions. In addition for the whole group, stepwise discriminant analysis was used to predict who planned to return to the preinjury job utilizing the job perceptions questions. Setting: Multidisciplinary Pain Center. Patients: Consecutive chronic pain patients. Results: For the whole group, CPPs not intending to return were more likely to complain of job excessive physical demands, job satisfaction, and job dislike. Job perception complaints that were significantly different between the intending and not intending to return groups differed between the subgroups. For example, noncollege males not intending to return were more likely to complain of excessive physical demands only versus satisfaction and liking as significant items for college males who did not intend to return. Within the discriminant analysis, the combination of job satisfaction, excessive physical demands, employee conflicts, job liking, job dangerousness, supervisory conflicts, job stress, and age classified 73.46% of the CPPs correctly as to intent to return to the same type of preinjury job. Conclusions: There appears to be a relationship between preinjury job perceptions and intent to return to the same type of job post pain treatment. However, subgroups of CPPs will differ by which job perceptions are important towards making that decision. abstract_id: PUBMED:8788575 II. Do chronic pain patients' perceptions about their preinjury jobs differ as a function of worker compensation and non-worker compensation status? Objectives: (1) To demonstrate a relationship between intent to return to preinjury job and preinjury job perceptions about that job; and (2) to demonstrate that worker compensation chronic pain patients (WC CPPs) would be more likely than non-worker compensation chronic pain patients (NWC CPPs) not to intend to return to a preinjury type of job because of preinjury job perceptions. Study Design: The relationship between preinjury job perceptions and intent to return to the preinjury job was investigated and compared between worker compensation (WC) and nonworker compensation (NWC) chronic pain patients (CPPs). Within the WC and NWC groups CPPs not intending to return to their preinjury type of work were compared to those CPPs intending to return on preinjury job perception. Background Data: Compensation status, being a WC CPPs or being a non-WC CPPs, has been claimed to be predictive or not predictive of return to work post pain treatment. These studies have, however, ignored the preinjury job stress perception variable as an area of research. Methods: WC CPPs were age- and sex-matched to NWC CPPs and statistically compared on their responses to rating scale and yes/no questionnaires for intent to return to work and perceived preinjury job stress. In a second analysis, both the WC and NWC groups were divided according to their intent to return to work and statistically compared on their responses to these questionnaires. Results: Both male and female WC CPPs were less likely than their counterparts to intend to return to their preinjury job. Both WC and NWC were found to complain of preinjury job complaints, and these complaints were found to differ between WC and NWC CPPs. An association between intent not to return to work and the perceptions of preinjury job dissatisfaction and job dislike was found for male and female WC CPPs and for male and female NWC CPPs. Conclusions: There may be a relationship between some preinjury job perceptions and intent to return to the preinjury type of work in some groups of CPPs. However, a specific relationship between WC status, intent not to return to the preinjury type of work, and preinjury job perceptions in comparison to NWC CPPs could not be demonstrated. abstract_id: PUBMED:9303251 Impact of chronic pain patients' job perception variables on actual return to work. Objective: To examine the impact of preinjury job perceptions on chronic pain patients (CPPs) return to work after pain facility treatment. Design: A total of 128 CPPs completed a series of rating scales and yes/no questions relating to their preinjury job perceptions and a question relating to "intent" to return to the same type of preinjury job following pain facility treatment. These CPPs were part of a grant study for prediction of return to work, and therefore their work status was determined at 1, 3, 6, 12, 18, 24, and 30 months after treatment. The preinjury job perceptions and other demographic variables were used to predict return to work. Stepwise discriminant analysis was used to predict return to work at the 1-month and final time points utilizing the above variables. Stepwise regression analysis was used to predict mean employment status (at all time points) also using the above variables. Setting: Multidisciplinary Pain Center. Patients: Consecutive CPPs. Results: For the 1-month time point, employment status was predicted by "intent," educational status, work dangerous perception complaint, job stress, job physical demands, job liking, and job role conflicts. Here, 79.49% of the CPPs were correctly classified. For the final time point, employment status was predicted by the first three predictors for the 1-month time point (in the same order) and age, job stress, and gender. Here, 74.58% of the CPPs were correctly classified. Mean employment status was predicted by "intent" and educational status. Conclusions: There is a relationship between preinjury job perceptions and actual return to work after pain facility treatment. Voiced "intent" not to return to the preinjury type of job is highly predictive of not returning to work after pain facility treatment. abstract_id: PUBMED:10382929 Prediction of "intent", "discrepancy with intent", and "discrepancy with nonintent" for the patient with chronic pain to return to work after treatment at a pain facility. Objective: We previously determined that "intent" to return to work post pain facility treatment is the strongest predictor for actual return to work. The purposes of the present study were the following: to identify variables predicting "intent"; to predict membership in the "discrepant with intent" group [those chronic pain patients (CPPs) who do intend to return to work but do not]; and to predict membership in the "discrepant with nonintent" group (those CPPs who do not intend to return to work but do). Design: A total of 128 CPPs completed a series of rating scales and yes/no questions relating to their preinjury job perceptions and a question relating to "intent" to return to the same type of preinjury job post-pain facility treatment. These CPPs were part of a grant study for prediction of return to work, and therefore their work status was determined at 1, 3, 6, 12, 18, 24, and 30 months posttreatment. Preinjury job perceptions and other demographic variables were utilized using stepwise discriminant analysis to identify variables predicting "intent" and predicting membership in the "discrepant with intent" and "discrepant with nonintent" groups. Setting: Pain facility (multidisciplinary pain center). Patients: Consecutive low back pain CPPs, mean age 41.66+/-9.54 years, with the most frequent highest educational status being high school completion (54.7%) and 60.2% being worker compensation CPPs. Results: "Intent" was predicted by (in decreasing order of probability) postinjury job availability variables, job characteristic variables, and a litigation variable. "Discrepant with intent" was predicted by (in decreasing order of probability) for the 1-month follow-up time point, postinjury job availability variables, pain variables, a litigation variable, and a function perception variable, and for the final follow-up time point, pain variables only. "Discrepant with nonintent" was predicted by (in order of decreasing probability) for the 1-month follow-up time point, a job availability variable, a demographic variable, and a functional perception variable, and for the final follow-up time point a pain variable and a job availability variable. The percentage of CPPs correctly classified by each of these analyses was as follows: "intent" 81.25%, "discrepant with intent" 87.01% (at 1-month follow-up) and 74.03% (final follow-up), "discrepant with nonintent" 92.16% (at 1-month follow-up) and 75.00% (final follow-up). Conclusions: CPPs intentions of returning to their preinjury jobs are mainly determined by job availability and job characteristic variables but surprisingly not by pain variables. However, the results with "discrepant with intent" and "discrepant with nonintent" groups indicate that actual return to work is determined by an interaction between job availability variables and pain variables with pain variables predominating for long-term outcome. abstract_id: PUBMED:8961453 "Movement" in work status after pain facility treatment. Study Design: This was a randomized prospective follow-up study of pain facility treatment of chronic pain patients with low back pain, with return to work and work capacity as the outcome measures. Objectives: To determine if after pain facility treatment chronic pain patients "move" in and out of work and in their work capacity; to determine the patterns of "movement;" and to determine the post-pain facility treatment follow-up sampling time points that would maximize the number of chronic pain patients correctly classified according to their final work and work capacity status. Summary Of Background Data: Past research and empiric observation have indicated that chronic pain patients may "move" after pain facility treatment in and out of work and in their job work capacity. Such "movement" can affect the results of outcome studies. Methods: Two hundred thirty-six consecutive chronic pain patients who fit study selection criteria were followed up at 1, 3, 6, 12, 18, 24, and 30 months after pain facility treatment for determination of work and work capacity status and separated according to the pattern of movement. Stepwise discriminant analysis was used to answer the study objectives. "Movement" in and out of work for these chronic pain patients also was compared with the US general population. Results: Chronic pain patients demonstrated eight work and four work capacity movement patterns. The 24- and 1-month time points predicted final work status correctly for 97.0% and 77.0% of the chronic pain patients, respectively, whereas the most significant predictor for correct work capacity status was the 24-month point. The annual percentage change in employment status for these chronic pain patients was more than in the US general population. Conclusions: Because chronic pain patients "move" in and out of employment and for work capacity status after pain facility treatment, future outcome studies using these measures will have to consider carefully the impact of "movement" on their results. abstract_id: PUBMED:31864589 Editorial Commentary: A Commentary on a Meta-analysis of Short-Term Outcomes. Hip pain is prevalent and costly, and young patients with hip femoroacetabular impingement have similar levels of pain and quality of life to older patients with hip osteoarthritis. Hip arthroscopic surgery for femoroacetabular impingement is increasing in prevalence, and the benefits of operative treatment compared with nonoperative treatment require consideration. Hip arthroscopy should not be a first-line treatment but can be necessary in cases in which high-quality, exercise-based nonsurgical treatment options have been exhausted. Patients should be informed that surgery is not necessarily cost-effective and may have an increased risk of comorbidities such as chronic pain and insomnia. Patient-reported outcomes may improve by up to 20% with either surgery or physical therapy but are not likely to return to those seen in patients without hip pain. Although most patients return to sport after hip arthroscopy, the ability to participate at a preinjury level is seen in fewer than a quarter of patients. Return-to-sport rates with physiotherapist-led nonoperative treatment are not known. abstract_id: PUBMED:28728063 Factors associated with non-return to work in the severely injured victims 3 years after a road accident: A prospective study. Road accidents may impact victims' physical and/or mental health and socio-occupational life, particularly the capacity to return to work. The purpose of our study is to assess modifiable medical and socio-occupational factors of non-return to work in the severely injured 3 years after a road accident. Among1,168 road accidents casualties in the Rhône administrative Département of France followed for five years, 141 of the 222 severely injured (Maximal Abbreviated Injury Scale ≥ 3) aged more than 16 years who were in work at the time of the accident, reported whether they had returned to work in the 3 years following the accident. The subgroups of those who had (n=113) and had not returned to work (n=28) were compared for socio-occupational (gender, age, educational level, marital status, socio-occupational group) accident-related medical factors (type of road user, type of journey, responsibility in the accident, initial care) and post-accident medical factors (pain intensity, post-traumatic stress disorder, physical sequelae, quality of life) by using standardized tools. Severity of initial head, face and lower-limb injury, intense persistent pain, post-traumatic stress disorder, poor self-assessed quality of life and health status at 3 years were associated with non-return to work on univariate analysis. On multivariate analysis, severity of initial head and lower-limb injury, intense persistent pain at 3 years and post-traumatic stress disorder were significantly associated with non-return to work 3 years following severe road-accident injury. Post-traumatic stress disorder and chronic pain were essential modifiable medical determinants of non-return to work in the severely injured after a road accident: early adapted management could promote return to work in the severely injured. Improve early adapted treatment of pain and PTSD in the rehabilitation team should help the severely injured return to work following a road accident. abstract_id: PUBMED:10382923 Validity of the dictionary of occupational titles residual functional capacity battery. Background Data: The Dictionary of Occupational Titles (DOT) is a U.S. government publication that defines each job in the United States according to 20 job factors. Fishbain et al. (Spine 1994;19:872-80) developed a DOT residual functional capacity (RFC) battery whose predictive validity for employment/unemployment had not been tested previously. Objectives: The purposes of this study were as follows: (a) to determine whether results of a DOT-RFC battery performed at completion of pain facility treatment predicted employment status at 30 months' follow-up and (b) to determine whether the DOT-RFC battery predicted employment capacity as determined by the DOT employment levels of the chronic pain patients' (CPPs) jobs. Study Design: This is a prospective low back pain CPP pain facility treatment study using employment status and the DOT occupational levels as outcome measures. Methods: One hundred eighty-five consecutive CPPs who fitted the selection criteria completed a DOT-RFC battery at the completion of pain facility treatment and were contacted at 1, 3, 6, 12, 18, 24, and 30 months for determination of their employment status and DOT employment level. Eight DOT job factors plus pain and worker compensation status were found to be significantly different between employed and unemployed CPPs and between those employed in different DOT employment levels. For the 10 variables, stepwise discriminant analysis was used to select final predictor variables. Sensitivity and specificity were calculated along with pain level cutpoints that separated the groups. Results: The eight DOT job factors found to be statistically significant between groups were the following: stooping, climbing, balancing, crouching, feeling shapes, handling left and right, lifting, carrying, and pain and worker compensation status. In the discriminant analysis, these variables could discriminate between the employed and unemployed categories, with a sensitivity and specificity of approximately 75%. The pain level cutpoint between employed and unemployed was 5.4 on a 10-point scale. Conclusions: We cannot as yet predict DOT-RFC employment levels. However, if a CPP can pass the above eight DOT job factors and has a pain level less than the 5.4 cutpoint, that CPP will have a 75% chance of being employed at 30 months after treatment at the pain facility. Therefore, some DOT-RFC battery job factors demonstrate a predictive validity in the "real work world." abstract_id: PUBMED:9522264 Goal setting as a predictor of return to work in a population of chronic musculoskeletal pain patients. To assess prospectively the association between personal attributes, vocational factors, and the return to work outcome for patients with chronic, nonmalignant, musculoskeletal pain, we assessed return to work (RTW) motivation though an open-format listing of treatment goals in 59 chronic pain patients admitted to a university pain management program. Patients were then followed (average of 17.9 months) in the posttreatment period to determine whether they had in fact returned to employment. Results indicated that a number of pretreatment factors predicted future employment status in this patient population. Age, marital status, education and decreased length of unemployment were predictive of RTW outcome. Overall, RTW goal was the single best predictor of return to work outcome. In contrast, increased number of premorbid jobs, compensation status, patient's race and sex were not predictive. The present study suggests that the assessment of an individual's motivation as defined by goal-setting may be a key factor in predicting a favorable outcome in this typically refractory population of patients. abstract_id: PUBMED:26560801 Illness perceptions are the main predictors of depression and anxiety symptoms in patients with chronic pain. Depression and anxiety symptoms in chronic pain are associated with adverse clinical outcomes, and appear highly related to patient's illness perceptions as well as with marital adjustment. This study aimed to investigate the predictive value of pain variables, marital adjustment and illness perceptions on depression and anxiety in patients with chronic pain. Two hundred patients were recruited from a pain unit in a public hospital in the north of Portugal. Patients completed a questionnaire that assessed illness perceptions (IPQ-Brief), marital adjustment (revised dyadic adjustment scale), depression and anxiety symptoms (hospital anxiety depression scale) and pain variables (pain intensity and pain disability index). Depression and anxiety symptoms were associated with pain intensity, pain-related disability, marital adjustment and illness perceptions. Results from hierarchical regression showed that illness perceptions contributed significantly to depression and anxiety symptoms over and above the effects of pain intensity, pain-related disability and marital adjustment, after controlling for gender. In multivariate analyses, pain intensity, pain-related disability and marital adjustment were uniquely related to depression and anxiety symptoms, whereas specific illness perceptions were uniquely related to depression symptoms (identity, treatment control, emotional response and coherence) and to anxiety symptoms (identity, emotional response and concern). Perceptions of greater symptomatology (identity) and of emotional impact, and lesser perceptions of treatment control and understanding of chronic pain (illness comprehensibility) were significantly associated with increased depression symptoms. Perceptions of greater symptomatology (identity), emotional impact and greater concern were associated with anxiety symptoms. These findings indicate that the contribution of illness perceptions was greater than that made by traditional covariates, and may therefore be a useful basis for future psychological interventions. Answer: Yes, chronic pain patients' (CPPs') perceptions about their preinjury jobs determine their intent to return to the same type of job post pain facility treatment. A study involving 225 CPPs who completed rating scales and yes/no questions relating to their preinjury job perceptions and a question relating to intent to return to the same type of preinjury job post-pain facility treatment found that CPPs not intending to return were more likely to complain of job excessive physical demands, job dissatisfaction, and job dislike. The study also utilized stepwise discriminant analysis to predict who planned to return to the preinjury job utilizing the job perceptions questions, and the analysis correctly classified 73.46% of the CPPs as to intent to return to the same type of preinjury job (PUBMED:8788574).
Instruction: Is axillary dissection necessary for breast cancer in old women? Abstracts: abstract_id: PUBMED:23621266 Is axillary dissection necessary for breast cancer in old women? A meta-analysis of randomized clinical trials. Background: We performed this meta-analysis to assess the effectiveness and safety of axillary dissection in old women. Methods: The Cochrane Library, PubMed, EMBASE and Chinese Biomedical Literature Database were searched and all randomized controlled trials of axillary dissection in old women (at least 60 years old) were considered. Meta-analyses were completed using RevMan5.1. Results: Three eligible randomized controlled trials (RCTs) including 5,337 patients were considered. There was weak evidence in favour of axillary dissection (AD) in old women. The meta-analysis showed that the overall survival (OS) after 1, 3, 5 and 7 years and the disease free survival (DFS) after 1, 3 and 5 year were not statistically significantly different between AD and no AD groups. However, there was a difference in the 7 year DFS. Conclusions: Axillary dissection did not provide survival benefit to the old women with breast cancer analysed. Therefore, axillary dissection is not well-indicated in old women with breast cancer. abstract_id: PUBMED:33664591 Dissection of Level III Axillary Lymph Nodes in Breast Cancer. Axillary lymph node dissection is an indispensable step in modified radical mastectomy for breast cancer. It is the most reliable method and the golden standard to determine the status of axillary lymph nodes. It is also of great importance to evaluate the prognosis and develop treatment plans for breast cancer patients. Axillary lymph node dissection can be anatomically divided into levels I, II, and III. Level I and Level II axillary lymph dissection is the standard clinical treatment of axillary lymph nodes positive breast cancer, whereas level III axillary lymph node dissection has been controversial. Level III axillary lymph node metastasis is one of the important factors that can easily cause distant metastasis and recurrence. It is also an important index to estimate the prognosis of breast cancer patients. To facilitate the decision of whether or not to perform level III lymph node dissection, we reviewed the indications, complications, and surgical procedures of level III lymph node dissection. abstract_id: PUBMED:25284834 Axillary Dissection in the Case of Positive Sentinel Lymph Nodes: Results of the Innsbruck Consensus Conference. The prognosis of breast cancer is most heavily influenced by the status of the axillary nodes. Until a few years ago, this knowledge was gained through radical axillary lymph node clearance. In the meantime, sentinel lymph node clearance has become an established part of the surgical treatment of breast cancer. With the development of this procedure, the morbidity caused by axillary dissection has been reduced significantly. Although comprehensive prospective, randomised data regarding the safe use of the sentinel concept are only now available, the focus currently, however, is on the question of whether in the case of positive sentinel lymph nodes, an axillary dissection can be done away with altogether without having any negative impact on the risk of loco-regional recurrence or on progression-free survival and overall survival. The results of the American ACOSOG-Z001 study have changed the fundamental perspective of this. In this study on the advantages of axillary dissection following the confirmation of tumour tissue in the sentinel lymph nodes, there were no statistically significant advantages from axillary dissection for women with a favourable overall risk profile who had received radiotherapy and systemic therapy. If this concept takes hold, the surgical treatment of node-positive breast cancer, at least in the axilla, would be reduced to a minimum, and the focus of treatment would in future lie more on the systemic treatment of this condition. As part of an interdisciplinary consensus meeting, a standardised approach for Austria with regard to this question was decided upon. abstract_id: PUBMED:31708307 Is axillary lymph node dissection necessary for positive preoperative aspiration cytology lymph node results? Introduction: Based on international guidelines, axillary lymph node dissection (ALND) is recommended in cases of breast cancer if preoperative examinations confirm axillary metastasis. We examined which set of preoperative parameters might render ALND unnecessary. Patients And Methods: Preoperative examinations (axillary ultrasound and aspiration cytology) confirmed axillary metastasis in 190 cases out of 2671 patients with breast cancer; primary ALN dissection was performed on these patients with or without prior neoadjuvant therapy. The clinicopathological results were analysed to determine which parameter might predict the presence of no more than 2 or 3 metastatic ALNs. Results: The final histological examination confirmed 1-3 metastatic lymph nodes in ALND samples in 116 cases and over 3 metastatic lymph nodes in 74 cases. For patients receiving neoadjuvant therapy (59 out of the 190 cases), if the size of the primary tumour was 2 cm or smaller and/or the metastatic ALN was 15 mm or smaller, then the patient was likely to have no more than 3 positive ALNs (stage N0-1 disease) (p &lt; 0.001). If the patient did not receive neoadjuvant therapy, stage N2 or N3 disease was very likely. No correlation was found between other clinicopathological characteristics of the tumour and involvement of the ALNs. Conclusion: Axillary lymph node dissection is not necessary for selected breast cancer patients with axillary metastasis receiving neoadjuvant therapy. In these cases, sentinel lymph node biopsy with or without radiation therapy and close follow-up may serve as adequate therapy. abstract_id: PUBMED:8956955 Breast cancer in women over 75 years: is axillary dissection always necessary? Objective: To study how the information gained from axillary dissection in women (75 years old or more) with breast cancer influenced postoperative adjuvant treatment. Design: Retrospective review of casenotes. Setting: University departments of surgery and oncology, Sweden. Subjects: 166 women (aged 75 years or more) operated on for primary breast cancer between 1980-1989. Main Outcome Measures: Type of operation and postoperative therapy given. Results: In 138/166 (83%) women axillary dissection was done, but in only 21/59 (36%) of these patients did information gained from the procedure influence the postoperative treatment according to the treatment guidelines for breast cancer. None of the 28 patients who did not undergo axillary dissection were subjected to further operations or radiotherapy for axillary nodal recurrence after a mean follow up of 47 months. Conclusions: Axillary dissection should be reserved for local control of disease and for those patients who preoperatively agree to undergo postoperative irradiation or chemotherapy if metastases are found. abstract_id: PUBMED:36702672 Axillary lymph node dissection: Dead or still alive? Although sentinel lymph node biopsy is now the primary method of axillary staging and is therapeutic for patients with limited nodal disease, axillary lymph node dissection (ALND) is still necessary for staging in groups where sentinel lymph node biopsy has not been proven to be accurate and to maintain local control in those with a heavy axillary tumor burden. Additionally, newer approaches to systemic therapy tailored to risk level sometimes necessitate knowledge of the number of involved axillary nodes which can only be obtained with ALND. Ongoing trials will address whether there are additional circumstances where radiotherapy can replace ALND. abstract_id: PUBMED:31534484 Outcomes of shoulder arthroplasty following axillary lymph node dissection. Introduction: Shoulder arthroplasty with previous axillary lymph node dissection historically has unsatisfactory outcomes. We analyzed outcomes of primary shoulder arthroplasty in patients with previous axillary lymph node dissection. Methods: Thirty-two primary shoulder arthroplasties after prior axillary lymph node dissection were performed. These patients were analyzed for patient-reported outcomes, range of motion, complications, and reoperations. Results: Average age was 70.8 ± 7.5 years old. There were 19 anatomic total shoulder arthroplasties, four hemiarthroplasties, and nine reverse total shoulder arthroplasties. Eight were performed by a superior approach while 24 were performed by a deltopectoral approach with cephalic vein preservation. There were three complications (one deltoid dehiscence, one axillary nerve palsy, and one postoperative pneumonia). There was one revision (hemiarthroplasty to reverse total shoulder arthroplasty for cuff failure at 91 weeks), two reoperations, and no infections. Patient-reported outcomes were available for 21/26 (80.1%) of the surviving shoulders at 4.8 ± 2.0 years. Average visual analog scale pain score was 7.1 ± 14.5, Simple Shoulder Test score 8.3 ± 2.6 "yes" responses, Single Assessment Numeric Evaluation score 80.2 ± 17.4, and American Shoulder and Elbow Surgeons score 83.6 ± 14.1. Conclusion: Axillary lymph node dissection is not a contraindication to shoulder arthroplasty. A deltopectoral exposure can be utilized without substantial risk of worsening lymphedema or wound complications. While a superior approach avoids cephalic vein injury, important approach-related complications (deltoid dehiscence and axillary nerve palsy) were observed.Level of evidence: Level IV-case series. abstract_id: PUBMED:22275958 Does LigaSure reduce fluid drainage in axillary dissection? A randomized prospective clinical trial. Background: Axillary lymph node dissection (ALND) is an integral part of breast cancer treatment. It is required in about 40-50% of patients. The placement of a drain in the axilla after an operation is current surgical practice. Short surgical stay programmes increase operating efficiency and reduce medical care costs, without compromising quality of care. LigaSure(™) is a new haemostatic device that uses bipolar energy to seal vessels. The aim of this study is to determine whether axillary dissection with LigaSure(™) reduces the time of wound drainage, the duration of surgical intervention and the volume of drainage after treatment. Patients And Methods: This study is a prospective randomized controlled trial. A total of 100 women with breast cancer who needed axillary dissection were randomized into the LigaSure(™) or conventional axillary dissection group. Levels I to III lymph node dissection was performed. A closed suction drain was always placed in the axilla and removed after 6-8 days or when fluid amount was &lt;60 cc in the previous 24 hours. Results: THERE WERE NO SIGNIFICANT DIFFERENCES BETWEEN THE TWO GROUPS WHEN CONSIDERING THE DURATION OF SURGICAL PROCEDURE: average duration was 70.7 ± 24.66 minutes for LigaSure(™) patients, while in the conventional dissection group the mean was 70.6 ± 22.47 minutes (p=0.98). Total amount of drained fluid was 624.49 cc in the LigaSure(™) axillary dissection group and 792.96 in the conventional ALND group; this difference did not achieve statistical significance (p=0.09); the duration of draining was also similar, with no statistical difference (p=0.15). Conclusions: The present study did not show clear advantages in LigaSure(™) use for ALND, although it represents a good haemostatic device, especially in abdominal surgery. abstract_id: PUBMED:24793627 Sentinel node invasion: is it necessary to perform axillary lymph node dissection? Randomized trial SERC Contribution of axillary lymph node dissection (ALND) is questioned for positive sentinel node (SN), micro-metastasis and isolated tumor cells but also for macro-metastasis. The aim of this work is to precise why a prospective randomized trial is necessary and the design of this trial. Why? For positive SN, the scientific level evidence appears insufficient for validation of ALND omission as a new standard. Rational is presented with non-sentinel node involved rate and number of NSL involved at complementary ALND, axillary recurrence rate, disease free survival rate and adjuvant treatment decision impact. How? The proposed Sentinelle Envahi et Randomisation du Curage (SERC) trial will randomly assign to observation only or complementary ALND with positive SN. The aim is to demonstrate the non-inferiority of ALND omission versus ALND. abstract_id: PUBMED:37102027 From Cadaveric Dissection to the Operating Room: A Unilateral Double Intercostobrachial Nerve and the Implications in Axillary Lymph Node Dissection. There are multiple treatment options for breast cancer (BC), including lumpectomy, chemo- and radiotherapy, complete mastectomy, and, when indicated, an axillary lymph node dissection. Such node dissections commonly lead the surgeon to encounter the intercostobrachial nerve (ICBN), which, if injured, leads to significant postoperative numbness of the upper arm. To assist in identifying the ICBN, we report a unilateral variation of a dual ICBN. The first ICBN (ICBN I) originates from the second intercostal space, as classically described in human anatomy. On the contrary, the second ICBN (ICBN II) originates from the second and third intercostal spaces. The anatomical knowledge of ICBN origin and its variations are crucial for axillary lymph node dissection in BC and other surgical interventions that involve the axillary region (e.g., regional nerve blocks). An iatrogenic injury of the ICBN has been associated with postoperative pain, paresthesia, and loss of upper extremity sensation in the dermatome supplied by this nerve. Therefore, maintaining the integrity of the ICBN is a worthy goal during axillary dissections in BC patients. Increasing the awareness of ICBN variants among surgeons reduces potential injuries, which would contribute to the BC patient's quality of life. Answer: Based on the provided abstracts, axillary dissection may not be necessary for all elderly women with breast cancer. A meta-analysis of randomized clinical trials indicated that axillary dissection did not provide a survival benefit to older women with breast cancer analyzed, suggesting that it is not well-indicated in this population (PUBMED:23621266). Another study focusing on women over 75 years old found that in many cases, the information gained from axillary dissection did not influence postoperative treatment, and none of the patients who did not undergo the procedure had axillary nodal recurrence after a mean follow-up of 47 months (PUBMED:8956955). Additionally, for selected breast cancer patients with axillary metastasis receiving neoadjuvant therapy, axillary lymph node dissection might not be necessary, as sentinel lymph node biopsy with or without radiation therapy and close follow-up may serve as adequate therapy (PUBMED:31708307). However, axillary lymph node dissection is still considered necessary for staging in groups where sentinel lymph node biopsy has not been proven to be accurate and to maintain local control in those with a heavy axillary tumor burden (PUBMED:36702672). The decision to perform axillary dissection, including level III lymph node dissection, can be influenced by the potential for distant metastasis and recurrence, and it remains an important index to estimate the prognosis of breast cancer patients (PUBMED:33664591). In conclusion, while axillary dissection may not be necessary for all elderly women with breast cancer, particularly those with a favorable overall risk profile or those receiving neoadjuvant therapy, it remains an important consideration for staging and local control in certain cases. The decision should be individualized based on the patient's specific clinical situation and in consultation with their healthcare provider.
Instruction: Shear-wave elastography and greyscale assessment of palpable probably benign masses: is biopsy always required? Abstracts: abstract_id: PUBMED:27007593 Shear-wave elastography and greyscale assessment of palpable probably benign masses: is biopsy always required? Objective: To establish if palpable breast masses with benign greyscale ultrasound features that are soft on shear-wave elastography (SWE) (mean stiffness &lt;50 kPa) have a low enough likelihood of malignancy to negate the need for biopsy or follow-up. Methods: The study group comprised 694 lesions in 682 females (age range 17-95 years, mean age 56 years) presenting consecutively to our institution with palpable lesions corresponding to discrete masses at ultrasound. All underwent ultrasound, SWE and needle core biopsy. Static greyscale images were retrospectively assigned Breast Imaging Reporting and Data System (BI-RADS) scores by two readers blinded to the SWE and pathology findings, but aware of the patient's age. A mean stiffness of 50 kPa was used as the SWE cut-off for calling a lesion soft or stiff. Histological findings were used to establish ground truth. Results: No cancer had benign characteristics on both modalities. 466 (99.8%) of the 467 cancers were classified BI-RADS 4a or above. The one malignant lesion classified as BI-RADS 3 was stiff on SWE. 446 (96%) of the 467 malignancies were stiff on SWE. No cancer in females under 40 years had benign SWE features. 74 (32.6%) of the 227 benign lesions were BI-RADS 3 and soft on SWE; so, biopsy could potentially have been avoided in this group. Conclusion: Lesions which appear benign on greyscale ultrasound and SWE do not require percutaneous biopsy or short-term follow-up, particularly in females under 40 years. Advances In Knowledge: None of the cancers had benign characteristics on both greyscale ultrasound and SWE, and 32% of benign lesions were BI-RADS 3 and soft on SWE; lesions that are benign on both ultrasound and SWE may not require percutaneous biopsy or short-term follow-up. abstract_id: PUBMED:28688156 Comparison and Combination of Strain and Shear Wave Elastography of Breast Masses for Differentiation of Benign and Malignant Lesions by Quantitative Assessment: Preliminary Study. Objectives: To compare the diagnostic performance of strain and shear wave elastography of breast masses for quantitative assessment in differentiating benign and malignant lesions and to evaluate the diagnostic accuracy of combined strain and shear wave elastography. Methods: Between January and February 2016, 37 women with 45 breast masses underwent both strain and shear wave ultrasound (US) elastographic examinations. The American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) final assessment on B-mode US imaging was assessed. We calculated strain ratios for strain elastography and the mean elasticity value and elasticity ratio of the lesion to fat for shear wave elastography. Diagnostic performances were compared by using the area under the receiver operating characteristic curve (AUC). Results: The 37 women had a mean age of 47.4 years (range, 20-79 years). Of the 45 lesions, 20 were malignant, and 25 were benign. The AUCs for elasticity values on strain and shear wave elastography showed no significant differences (strain ratio, 0.929; mean elasticity, 0.898; and elasticity ratio, 0.868; P &gt; .05). After selectively downgrading BI-RADS category 4a lesions based on strain and shear wave elastographic cutoffs, the AUCs for the combined sets of B-mode US and elastography were improved (B-mode + strain, 0.940; B-mode + shear wave; 0.964; and B-mode, 0.724; P &lt; .001). Combined strain and shear wave elastography showed significantly higher diagnostic accuracy than each individual elastographic modality (P = .031). Conclusions: These preliminary results showed that strain and shear wave elastography had similar diagnostic performance. The addition of strain and shear wave elastography to B-mode US improved diagnostic performance. The combination of strain and shear wave elastography results in a higher diagnostic yield than each individual elastographic modality. abstract_id: PUBMED:28138406 Inability of shear-wave elastography to distinguish malignant from benign prostate tissue - a comparison of biopsy, whole-mount sectioning and shear-wave elastography. Aim: This study was designed to assess the possible usefulness of shear-wave elastography in differentiating between benign and malignant tissue in prostate neoplasia. Patients And Methods: A total of 120 prostate tissue samples were obtained from 10 patients treated by radical prostatectomy and investigated pre-operatively by ultrasound elastography followed by directed biopsy. After resection, whole-mount sectioning and histological examination was performed. The predictions based on shear-wave elastography were compared with biopsy and histological results. Results: The comparison between the results of shear-wave elastography and those of biopsy was performed by receiver operating characteristic analysis, which suggested an optimum cut-off tissue elasticity value of 50 kPa, in agreement with earlier studies aimed at distinguishing between benign and malignant tissue. However, the diagnostic selectivity (and thus the diagnostic power) was poor (area under the curve 0.527, which hardly differs from the value of 0.500 that would correspond to a complete lack of predictive power); furthermore, application of this cut-off value to the samples led to a sensitivity of only 74% and a specificity of only 43%. An analogous comparison between the results of shear-wave elastography and those of whole-mount histology, which itself is more reliable than biopsy, gave an even poorer diagnostic selectivity (sensitivity of 62%, specificity of 35%). Meaningful association with Gleason score was not found for D'Amico risk groups (p = 0.35). Conclusions: The (negative) findings of this investigation add to the dissonance among results of studies investigating the possible value of shear-wave elastography as a diagnostic tool to identify malignant neoplasia. There is a clear need for further research to elucidate the diversity of study results and to identify the usefulness, if any, of the method in question. abstract_id: PUBMED:35173778 Can shear wave elastography be utilized as an additional tool for the assessment of non-mass breast lesions? Introduction: We aimed to describe shear wave elastography parameters of non-mass lesions of the breast and to assess the measures of diagnostic accuracy of shear wave elastography in the differentiation of non-mass lesions compared with conventional ultrasound, using histopathologic results as the reference standard. Methods: This retrospective study included breast ultrasound-detected non-mass lesions with a confirmed pathologic diagnosis during a two-year study period. B-mode ultrasound and shear wave elastography were performed for all lesions before biopsy. Ultrasound features, shear wave elastography parameters (mean elasticity and maximum stiffness color), as well as Breast Imaging-Reporting and Data System categories were recorded for each lesion. Measures of diagnostic accuracy of ultrasound and ultrasound + shear wave elastography were also assessed. Results: From a total of 567 breast lesions requiring core-needle biopsy, 49 (8.6%) were considered as non-mass lesions. Based on histopathologic reports, 32 patients (65.3%) had non-high-risk benign lesions, five (10.2%) had high-risk benign lesions, five (10.2%) had ductal carcinoma in situ, and seven (14.3%) had invasive carcinoma. There was no significant difference in patients' age and palpability between benign and malignant lesions (p = 0.16 and p = 0.12, respectively). Mean elasticity values and Breast Imaging-Reporting and Data System categories were significantly higher among malignant lesions compared with benign non-mass lesions (both p &lt; 0.001). Furthermore, the addition of shear wave elastography to grayscale ultrasound increased the specificity, positive predictive value, and diagnostic accuracy. Conclusion: The complementary use of shear wave elastography with conventional ultrasound might help in the differentiation of non-mass breast lesions and has the potential to decrease the frequency of unnecessary biopsies performed for benign non-mass lesions. abstract_id: PUBMED:38448131 Ultrasound Shear Wave Elastography in Cardiology. The advent of high-frame rate imaging in ultrasound allowed the development of shear wave elastography as a noninvasive alternative for myocardial stiffness assessment. It measures mechanical waves propagating along the cardiac wall with speeds that are related to stiffness. The use of cardiac shear wave elastography in clinical studies is increasing, but a proper understanding of the different factors that affect wave propagation is required to correctly interpret results because of the heart's thin-walled geometry and intricate material properties. The aims of this review are to give an overview of the general concepts in cardiac shear wave elastography and to discuss in depth the effects of age, hemodynamic loading, cardiac morphology, fiber architecture, contractility, viscoelasticity, and system-dependent factors on the measurements, with a focus on clinical application. It also describes how these factors should be considered during acquisition, analysis, and reporting to ensure an accurate, robust, and reproducible measurement of the shear wave. abstract_id: PUBMED:34786088 The value of contrast-enhanced ultrasound versus shear wave elastography in differentiating benign and malignant superficial lymph node lesions. Objective: To analyze the value of contrast-enhanced ultrasound (CEUS) versus shear wave elastography (SWE) in differentiating benign and malignant superficial lymph node lesions. Methods: In this retrospective study, a total of 140 superficial lymph nodes from 140 patients pathologically confirmed to have an enlargement of their superficial lymph nodes were examined using CEUS and SWE. The results and diagnostic efficacy were analyzed. Results: Among the 67 benign lymph nodes, there were 38 cases of type I, 17 of type II, and 12 of types III and IV. Among the 73 malignant lymph nodes, there were 53 cases of type III, 11 of type IV, and 9 of types I and II. Among the patients with lymph nodes &lt;1 cm, there were 20, 4, 8, and 5 cases of types I, II, III, and IV, respectively. Among the patients with 1-2 cm lymph nodes, there were 15, 10, 26 and 7 cases of types I, II, III, and IV, respectively. There were 6, 10, 27, and 2 cases of types I, II, III, and IV in the &gt;2 cm lymph nodes, respectively. The accuracy, sensitivity, and specificity of CEUS in the diagnosis of malignant lymph nodes were 85.00%, 87.67%, and 82.09%, respectively, and those of SWE were 89.29%, 80.82%, and 98.51%, respectively. SWE showed higher specificity than CEUS (P&lt;0.05). SWE showed mean shear wave velocity (SWV) values of (2.11±0.41) m/s for the benign lymph nodes and (3.22±0.79) m/s for the malignant lymph nodes (P&lt;0.05). The receiver operating characteristic (ROC) curves of the SWV values for the benign and malignant lymph nodes showed AUC=0.9948. Conclusion: Both CEUS and SWE are valuable in the differentiation of benign and malignant lymph node lesions, but SWE has a higher specificity. The SWV value of SWE is superior in the differentiation of benign and malignant lymph nodes. The combination of the two methods can achieve a higher accuracy. abstract_id: PUBMED:32386847 A Novel, Reliable Protocol to Objectively Assess Scar Stiffness Using Shear Wave Elastography. The aim of this research was to investigate the use of shear wave elastography as a novel tool to quantify and visualize scar stiffness after a burn. Increased scar stiffness is indicative of pathologic scarring which is associated with persistent pain, chronic itch and restricted range of movement. Fifty-five participants with a total of 96 scars and 69 contralateral normal skin sites were evaluated. A unique protocol was developed to enable imaging of the raised and uneven burn scars. Intra-rater and inter-rater reliability was excellent (intra-class correlation coefficient &gt;0.97), and test-retest reliability was good (intra-class correlation coefficient &gt;0.85). Shear wave elastography was able to differentiate between normal skin, pathologic scars and non-pathologic scars, with preliminary cutoff values identified. Significant correlations were found between shear wave velocity and subjective clinical scar assessment (r = 0.66). Shear wave elastography was able to provide unique information associated with pathologic scarring and shows promise as a clinical assessment and research tool. abstract_id: PUBMED:33807358 Shear Wave Elastography and Shear Wave Dispersion Imaging in the Assessment of Liver Disease in Alpha1-Antitrypsin Deficiency. Liver affection of Alpha1-antitrypsin deficiency (AATD) can lead to cirrhosis and hepatocellular carcinoma (HCC). A noninvasive severity assessment of liver disease in AATD is urgently needed since laboratory parameters may not accurately reflect the extent of liver involvement. Preliminary data exist on two-dimensional shear wave elastography (2D-SWE) being a suitable method for liver fibrosis measurement in AATD. AATD patients without HCC were examined using 2D-SWE, shear wave dispersion imaging (SWD) and transient elastography (TE). Furthermore, liver steatosis was assessed using the controlled attenuation parameter (CAP) and compared to the new method of attenuation imaging (ATI). 29 AATD patients were enrolled, of which 18 had the PiZZ genotype, eight had PiMZ, two had PiSZ and one had a PiZP-Lowell genotype. 2D-SWE (median 1.42 m/S, range 1.14-1.83 m/S) and TE (median 4.8 kPa, range 2.8-24.6 kPa) values displayed a significant correlation (R = 0.475, p &lt; 0.05). 2D-SWE, ATI (median 0.56 dB/cm/MHz, range 0.43-0.96 dB/cm/MHz) and CAP (median 249.5 dB/m, range 156-347 dB/m) values were higher in PiZZ when compared to other AATD genotypes. This study provides evidence that 2D-SWE is a suitable method for the assessment of liver disease in AATD. The newer methods of SWD and ATI require further evaluation in the context of AATD. abstract_id: PUBMED:26675385 New aspects in shear-wave elastography of prostate cancer. Aim: This study was designed to evaluate the performance of shear-wave elastography as a diagnostic tool for prostate cancer in a larger cohort of patients than previously reported. Patients And Methods: Seventy-three patients with suspected prostate carcinoma were investigated by ultrasound elastography followed by directed biopsy. The elastographic and histological results for all biopsies were compared. Results: After exclusion of invalid and non-assessable results, 794 samples were obtained for which both a histological assessment and an elastometric result (tissue stiffness in kPa) were available: according to the histology 589 were benign and 205 were malignant. Tissue elasticity was found to be weakly correlated with patient's age, PSA level and gland volume. ROC analysis showed that, for the set of results acquired, elastometry did not fulfil literature claims that it could identify malignant neoplasia with high sensitivity and specificity. However, it did show promise in distinguishing between Gleason scores ≤6 and &gt;6 when malignancy had already been identified. Unexpected observations were the finding of a smaller proportion of tumours in the lateral regions of the prostate than generally expected, and also the observation that the elasticity of benign prostate tissue is region-sensitive, the tissue being stiffest in the basal region and more elastic at the apex. Conclusions: Shear-wave elastography was found to be a poor predictor of malignancy, but for malignant lesions an elasticity cut-off of 80 kPa allowed a fairly reliable distinction between lesions with Gleason ≤6 and those with Gleason &gt;6. We demonstrate an increase in elasticity of benign prostate tissue from the basal to the apical region. abstract_id: PUBMED:38376448 Diffusion-Weighted MRI-Based Virtual Elastography and Shear-Wave Elastography for the Assessment of Breast Lesions. Background: Diffusion-weighted imaging (DWI)-based virtual MR elastography (DWI-vMRE) in the assessment of breast lesions is still in the research stage. Purpose: To investigate the usefulness of elasticity values on DWI-vMRE in the evaluation of breast lesions, and the correlation with the values calculated from shear-wave elastography (SWE). Study Type: Prospective. Population/subjects: 153 patients (mean age ± standard deviation: 55 ± 12 years) with 153 pathological confirmed breast lesions (24 benign and 129 malignant lesions). Field Strength/sequence: 1.5-T MRI, multi-b readout segmented echo planar imaging (b-values of 0, 200, 800, and 1000 sec/mm2 ). Assessment: For DWI-vMRE assessment, lesions were manually segmented using apparent diffusion coefficient (ADC0-1000 ) map, then the region of interests were copied to the map of shifted-ADC (sADC200-800 , sADC 200-1500 ). For SWE assessment, the shear modulus of the lesions was measured by US elastic modulus (μUSE ). Intraclass/interclass kappa coefficients were calculated to measure the consistency. Statistical Tests: Pearson's correlation was used to assess the relationship between sADC and μUSE . A receiver operating characteristic analysis with the area under the curve (AUC) was performed to compare the diagnostic accuracy between benign and malignant breast lesions of sADC and μUSE . A P value &lt;0.05 was considered statistically significant. Results: There were significant differences between benign and malignant breast lesions of μUSE (24.17 ± 10.64 vs. 37.20 ± 12.61), sADC200-800 (1.38 ± 0.31 vs. 0.97 ± 0.18 × 10-3 mm2 /sec), and sADC200-1500 (1.14 ± 0.30 vs. 0.78 ± 0.13 × 10-3 mm2 /sec). In all breast lesions, a moderate but significant correlation was observed between μUSE and sADC200-800 /sADC200-1500 (r = -0.49/-0.44). AUC values to differentiate benign from malignant lesions were as follows: μUSE , 0.78; sADC200-800 , 0.89; sADC200-1500 , 0.89. Data Conclusions: Both SWE and DWI-vMRE could be used for the differentiation of benign versus malignant breast lesions. Furthermore, DWI-vMRE with the use of sADC show relatively higher AUC values than SWE. Level Of Evidence: 4 TECHNICAL EFFICACY: Stage 2. Answer: Based on the provided abstracts, biopsy may not always be required for palpable breast masses that appear benign on both greyscale ultrasound and shear-wave elastography (SWE). A study indicated that no cancer had benign characteristics on both modalities, and 99.8% of cancers were classified as BI-RADS 4a or above on greyscale ultrasound. The one malignant lesion classified as BI-RADS 3 was stiff on SWE. Additionally, 96% of malignancies were stiff on SWE. It was found that no cancer in females under 40 years had benign SWE features. Therefore, in cases where lesions appear benign on both greyscale ultrasound and SWE, particularly in females under 40 years, percutaneous biopsy or short-term follow-up may not be required (PUBMED:27007593). Furthermore, combining strain and shear wave elastography with B-mode ultrasound has been shown to improve diagnostic performance, suggesting that the use of multiple modalities can enhance the accuracy of breast mass characterization and potentially reduce the need for biopsies in cases where the likelihood of malignancy is low (PUBMED:28688156). However, it is important to note that the diagnostic power of SWE can vary depending on the organ being assessed. For instance, SWE was not found to be a reliable tool for distinguishing malignant from benign prostate tissue, indicating that its effectiveness may be organ-specific (PUBMED:28138406). In the context of non-mass breast lesions, the addition of SWE to conventional ultrasound increased specificity, positive predictive value, and diagnostic accuracy, suggesting that SWE can be a useful additional tool for assessing such lesions and potentially decreasing unnecessary biopsies (PUBMED:35173778). Overall, while SWE combined with greyscale ultrasound assessment can reduce the need for biopsies in certain cases, the decision to biopsy should be based on a comprehensive assessment that includes patient age, lesion characteristics, and the results of multiple imaging modalities.
Instruction: Surgical informatics on the Internet: any improvement? Abstracts: abstract_id: PUBMED:31477548 Surgical Informatics: Defining the Role of Informatics in the Current Surgical Training Paradigm. Objective: Clinical Informatics is a large part of the current surgical practice, yet, there is no formal curriculum to teach residents about informatics. The purpose of this article is to describe the field of clinical informatics, describe the potential of informatics to improve surgical care, and propose a framework to train surgical informaticians. Design: This is a perspective summarizing the role of clinical informatics in current surgical training. Conclusions: The manuscript concisely discusses the potential impact of novel technologies and information systems on surgical care delivery. We demonstrate the importance of informatics training for surgeons and describe the relatively new fellowship program as well as the various pathways to obtain board certification in clinical informatics. abstract_id: PUBMED:35414475 A Novel Fellowship in Perioperative Administration, Quality and Informatics: A Pilot Experience for Training Surgical Leaders. Value-based, outcome-oriented care supported with innovative technology is the future of surgery. We established a novel fellowship in Perioperative Administration, Quality, and Informatics. The aim is to equip future surgeon scholars with the requisite knowledge base and skillset to serve as institutional leaders capable of transforming surgical healthcare delivery. The model was designed as a project-based, "operations-focused" education with supplemental didactics and mentored by surgical leaders and institutional executives. We describe our initial experience, successes, and challenges such that a similar model may be replicated elsewhere. abstract_id: PUBMED:26065802 Role of Informatics in Patient Safety and Quality Assurance. Quality assurance encompasses monitoring daily processes for accurate, timely, and complete reports in surgical pathology. Quality assurance also includes implementation of policies and procedures that prevent or detect errors in a timely manner. This article presents uses of informatics in quality assurance. Three main foci are critical to the general improvement of diagnostic surgical pathology. First is the application of informatics to specimen identification with lean methods for real-time statistical control of specimen receipt and processing. Second is the development of case reviews before sign-out. Third is the development of information technology in communication of results to assure treatment in a timely manner. abstract_id: PUBMED:20919518 Biomedical and health informatics for surgery. The application of biomedical and health informatics to surgery holds tremendous opportunities to enhance surgical care. Better use of information in surgical practice has the potential to streamline care, remove inefficiencies, and allow for improvements in surgical research. With greater EHR adoption, health care reform, and direct investment in HIT, an increasing opportunity exists for surgeons to access and use patient information more effectively. For this to happen, greater focus on the specific needs of surgeons is particularly important, alongside increasing the number of surgical informatics stakeholders. abstract_id: PUBMED:28679873 On Informatics Diagnostics and Informatics Therapeutics - Good Medical Informatics Research Is Needed Here. In the era of digitization some new procedures play an increasing role for diagnosis as well as for therapy: informatics diagnostics and informatics therapeutics. Challenges for such procedures are described. It is discussed, when research on such diagnostics and therapeutics can be regarded as good research. Examples are mentioned for informatics diagnostics and informatics therapeutics, which are based on health-enabling technologies. abstract_id: PUBMED:17490722 A system for sharing routine surgical pathology specimens across institutions: the Shared Pathology Informatics Network. This report presents an overview for pathologists of the development and potential applications of a novel Web enabled system allowing indexing and retrieval of pathology specimens across multiple institutions. The system was developed through the National Cancer Institute's Shared Pathology Informatics Network program with the goal of creating a prototype system to find existing pathology specimens derived from routine surgical and autopsy procedures ("paraffin blocks") that may be relevant to cancer research. To reach this goal, a number of challenges needed to be met. A central aspect was the development of an informatics system that supported Web-based searching while retaining local control of data. Additional aspects included the development of an eXtensible Markup Language schema, representation of tissue specimen annotation, methods for deidentifying pathology reports, tools for autocoding critical data from these reports using the Unified Medical Language System, and hierarchies of confidentiality and consent that met or exceeded federal requirements. The prototype system supported Web-based querying of millions of pathology reports from 6 participating institutions across the country in a matter of seconds to minutes and the ability of bona fide researchers to identify and potentially to request specific paraffin blocks from the participating institutions. With the addition of associated clinical and outcome information, this system could vastly expand the pool of annotated tissues available for cancer research as well as other diseases. abstract_id: PUBMED:14650500 Dental informatics: an emerging biomedical informatics discipline. Biomedical informatics is a maturing discipline. During the last forty years, it has developed into a research discipline of significant scale and scope. One of its subdisciplines, dental informatics, is beginning to emerge as its own entity. While there is a growing cadre of trained dental informaticians, dental faculty and administrators in general are not very familiar with dental informatics as an area of scientific inquiry. Many confuse informatics with information technology (IT), are unaware of its scientific methods and principles, and cannot relate dental informatics to biomedical informatics as a whole. This article delineates informatics from information technology and explains the types of scientific questions that dental and other informaticians typically explore. Scientific investigation in informatics centers primarily on model formulation, system development, system implementation, and the study of effects. Informatics draws its scientific methods mainly from information science, computer science, cognitive science, and telecommunications. Dental informatics shares many types of research questions and methods with its parent discipline, biomedical informatics. However, there are indications that certain research questions in dental informatics require novel solutions that have not yet been developed in other informatics fields. abstract_id: PUBMED:29721362 Career Paths of Pathology Informatics Fellowship Alumni. Background: The alumni of today's Pathology Informatics and Clinical Informatics fellowships fill diverse roles in academia, large health systems, and industry. The evolving training tracks and curriculum of Pathology Informatics fellowships have been well documented. However, less attention has been given to the posttraining experiences of graduates from informatics training programs. Here, we examine the career paths of subspecialty fellowship-trained pathology informaticians. Methods: Alumni from four Pathology Informatics fellowship training programs were contacted for their voluntary participation in the study. We analyzed various components of training, and the subsequent career paths of Pathology Informatics fellowship alumni using data extracted from alumni provided curriculum vitae. Results: Twenty-three out of twenty-seven alumni contacted contributed to the study. A majority had completed undergraduate study in science, technology, engineering, and math fields and combined track training in anatomic and clinical pathology. Approximately 30% (7/23) completed residency in a program with an in-house Pathology Informatics fellowship. Most completed additional fellowships (15/23) and many also completed advanced degrees (10/23). Common primary posttraining appointments included chief medical informatics officer (3/23), director of Pathology Informatics (10/23), informatics program director (2/23), and various roles in industry (3/23). Many alumni also provide clinical care in addition to their informatics roles (14/23). Pathology Informatics alumni serve on a variety of institutional committees, participate in national informatics organizations, contribute widely to scientific literature, and more than half (13/23) have obtained subspecialty certification in Clinical Informatics to date. Conclusions: Our analysis highlights several interesting phenomena related to the training and career trajectory of Pathology Informatics fellowship alumni. We note the long training track alumni complete in preparation for their careers. We believe flexible training pathways combining informatics and clinical training may help to alleviate the burden. We highlight the importance of in-house Pathology Informatics fellowships in promoting interest in informatics among residents. We also observe the many important leadership roles in academia, large community health systems, and industry available to early career alumni and believe this reflects a strong market for formally trained informaticians. We hope this analysis will be useful as we continue to develop the informatics fellowships to meet the future needs of our trainees and discipline. abstract_id: PUBMED:27563486 Pathology Informatics Essentials for Residents: A flexible informatics curriculum linked to Accreditation Council for Graduate Medical Education milestones. Context: Recognition of the importance of informatics to the practice of pathology has surged. Training residents in pathology informatics have been a daunting task for most residency programs in the United States because faculty often lacks experience and training resources. Nevertheless, developing resident competence in informatics is essential for the future of pathology as a specialty. Objective: The objective of the study is to develop and deliver a pathology informatics curriculum and instructional framework that guides pathology residency programs in training residents in critical pathology informatics knowledge and skills and meets Accreditation Council for Graduate Medical Education Informatics Milestones. Design: The College of American Pathologists, Association of Pathology Chairs, and Association for Pathology Informatics formed a partnership and expert work group to identify critical pathology informatics training outcomes and to create a highly adaptable curriculum and instructional approach, supported by a multiyear change management strategy. Results: Pathology Informatics Essentials for Residents (PIER) is a rigorous approach for educating all pathology residents in important pathology informatics knowledge and skills. PIER includes an instructional resource guide and toolkit for incorporating informatics training into residency programs that vary in needs, size, settings, and resources. PIER is available at http://www.apcprods.org/PIER (accessed April 6, 2016). Conclusions: PIER is an important contribution to informatics training in pathology residency programs. PIER introduces pathology trainees to broadly useful informatics concepts and tools that are relevant to practice. PIER provides residency program directors with a means to implement a standardized informatics training curriculum, to adapt the approach to local program needs, and to evaluate resident performance and progress over time. abstract_id: PUBMED:16762252 Review of the development of ophthalmic informatics Digitized information technology, which is rising and permeating into other scientific domains, is a major trend of contemporary science development. The overlap between ophthalmology and informatics creates ophthalmic informatics, a new and developing discipline. Ophthalmic informatics utilizes the computer as the primary tool and with the aid of ophthalmic knowledge, develops a system to analyze patient data and evaluates the accuracy of the system. It also provides a new method for the development of traditional ophthalmology. In order to let more ophthalmologist to handle the knowledge of ophthalmic informatics and to use it in the practice, this article carries a comprehensive review of ophthalmic informatics, with emphasis on the principle, content and method of ophthalmic informatics. Some relevant studies on ophthalmic informatics in China in recent years are enumerated, as well as the basic frame for studying ophthalmic informatics. Answer: The abstracts provided do not directly address the question of whether there has been an improvement in surgical informatics on the Internet. However, they do offer insights into the development and implementation of informatics in surgical education, practice, and research, which may suggest advancements in the field. For instance, the establishment of a novel fellowship in Perioperative Administration, Quality, and Informatics aims to equip future surgical leaders with knowledge and skills in informatics, indicating an evolving recognition of the importance of informatics in surgery (PUBMED:35414475). Similarly, the development of a flexible informatics curriculum linked to Accreditation Council for Graduate Medical Education milestones, known as Pathology Informatics Essentials for Residents (PIER), suggests efforts to standardize and improve informatics training for pathology residents (PUBMED:27563486). The Shared Pathology Informatics Network, which allows for Web-based searching and retrieval of pathology specimens across multiple institutions, demonstrates an application of informatics that could enhance surgical research and collaboration (PUBMED:17490722). Additionally, the role of informatics in patient safety and quality assurance in surgical pathology is highlighted, with a focus on specimen identification, case reviews, and timely communication of results (PUBMED:26065802). The career paths of Pathology Informatics fellowship alumni, who fill diverse roles in academia, health systems, and industry, reflect the growing demand for expertise in informatics within the surgical field (PUBMED:29721362). Moreover, the discussion of informatics diagnostics and therapeutics suggests that informatics is playing an increasing role in both diagnosis and therapy, which could be indicative of improvements in the application of informatics to surgical care (PUBMED:28679873). Overall, while the abstracts do not provide a direct answer to the question of improvement in surgical informatics on the Internet, they collectively indicate ongoing efforts to integrate informatics into surgical training, practice, and research, which may imply advancements in the field.
Instruction: Do cigarette prices vary by brand, neighborhood, and store characteristics? Abstracts: abstract_id: PUBMED:19618790 Do cigarette prices vary by brand, neighborhood, and store characteristics? Objective: We assessed the price variability of cigarettes by brand, neighborhood characteristics (racial/ethnic and youth composition, number of schools, and number of stores), and store type. Methods: Trained research staff purchased three different brands of cigarettes (premium, menthol, and discount-all produced by the same company) at 214 stores in one metropolitan area. We assessed associations between price and neighborhood/store characteristics through multivariate regression, using four price variables as dependent variables-the price of each brand of cigarettes and the mean price across the three brands. Results: We found that the price of cigarettes varied by neighborhood and store characteristics, although this variability differed by brand. For the same brand, the maximum price was 1.7 to 1.8 times higher than the lowest price. We found a positive association between the percentage of a neighborhood that was nonwhite and the price of discount and premium cigarettes as well as the overall mean price of cigarettes, but not with the price of the menthol brand. We found a negative association between the percentage of youth in a neighborhood and the price of premium cigarettes as well as the mean price, but not with the price of the other two brands. In addition, we found a greater likelihood of higher discount brand prices at independent vs. chain-operated stores. Conclusions: Our findings showed that cigarette prices do vary by brand, the youth and racial/ethnic composition in a neighborhood, and store type, suggesting that the tobacco industry might vary its marketing strategies based on brand as well as neighborhood and store characteristics. abstract_id: PUBMED:28579499 Changes in cigarette prices, affordability, and brand-tier consumption after a tobacco tax increase in Thailand: Evidence from the Global Adult Tobacco Surveys, 2009 and 2011. Despite the 2009 implementation of a tobacco tax increase in Thailand, smoking rates remained unchanged between 2009 and 2011. Prior evidence has linked cigarette tax increases to compensatory behaviours aimed at lowering the cost of smoking, such as switching to lower-priced cigarette brands. Using data from 2009 and 2011 Global Adult Tobacco Surveys in Thailand, we estimated unadjusted changes in cigarette prices paid, cigarette affordability, and consumption of cigarettes in three price categories classified as upper-, middle-, and lower-priced brand tiers (or price tertiles). We used ordered logit regression to analyse the correlates of price-tier choice and to estimate the change in price-tier consumption adjusted for demographic and region characteristics. Between 2009 and 2011, real cigarette prices increased, but the affordability of cigarettes remained unchanged overall. There was a significant reduction in the consumption of cigarette brands in the top price-tier overall, accompanied by increases in the consumption of brands in the bottom and middle price-tiers, depending on the region. Adjusted estimates from the logit models indicate that, on average, the proportion of smokers selecting brands from upper- and middle price-tiers decreased while consumption of lower price-tier brands increased during the study period. The estimated shifts in consumption from more expensive to less expensive cigarette brands and the overall lack of change in cigarette affordability in Thailand between 2009 and 2011 are both factors that may have contributed to the observed lack of change in smoking rates after the 2009 tax increase. abstract_id: PUBMED:33575516 Smokers' perceptions of different classes of cigarette brand descriptors. Introduction: Cigarette brand descriptors such as 'light' are banned in several countries and often replaced by alternative descriptors that continue to mislead smokers about the relative risk from those brands. The objective of this study was to evaluate perceptions from current brand descriptors when presented independently of cigarette packaging. Methods: Eighty-eight daily cigarette smokers attended semi-structured interviews at three US research laboratories in 2018-2019 to assess smokers' perceptions of cigarette brand descriptors in four classes of brand features: prestige, connotation, taste, and color. Participants ranked descriptors within each brand feature on perceived harm, attractiveness, and appeal (willingness to try). Ranked perceptions were described using median rankings and percentages. Chi-squared was used to assess demographic effects on perceptions. Spearman's correlation was used to assess the association between harm, attractiveness, and appeal rankings. Results: Brand descriptors influenced perceptions of harm, attractiveness, and appeal within each brand feature. Smooth was perceived as the most attractive (42.5%) and appealing (33.0%) taste descriptor. Red was perceived as the most harmful (45.4%) and least appealing (22.7%) color descriptor. Perceptions of certain descriptors differed by demographic characteristics such as sex and age. Rankings of attractiveness and appeal were more strongly correlated (r=0.63) than rankings of harm and appeal (r=0.20, p&lt;0.01). Conclusions: Cigarette manufacturers replaced banned descriptors with alternative descriptors that continue to influence perceptions of cigarettes. Regulatory agencies should closely evaluate all brand descriptors and consider implications for alternative products with fewer regulations. abstract_id: PUBMED:32019508 Store and neighborhood differences in retailer compliance with a local staple foods ordinance. Background: Policies to improve healthy food retail have been recognized as a potential means of reducing diet-related health disparities. The revised 2014 Minneapolis Staple Foods Ordinance instituted minimum stocking standards for healthy, staple foods. The objective of this study was to examine retailer compliance with the policy, and whether compliance varied by neighborhood and store characteristics. Methods: In this natural experiment, audits were conducted annually pre- and post-ordinance (2014-2017) in 155 small/nontraditional stores in Minneapolis, MN and a comparison city (St. Paul, MN). Compliance measures for 10 product categories included: (1) met requirements for ≥8 categories; (2) 10-point scale (one point for each requirement met); and (3) carried any item in each category. Store characteristics included store size and ownership status. Neighborhood characteristics included census-tract socioeconomic status and low-income/low-access status. Analyses were conducted in 2018. Results: All compliance measures increased in both Minneapolis and St. Paul from pre- to post-policy; Minneapolis increases were greater only for carrying any item in each category (p &lt; 0.01). In Minneapolis, corporate (vs. independent) stores were generally more compliant. No differences were found by neighborhood characteristics. Conclusions: Overall trends suggest broad movement among Minneapolis stores towards providing a minimum level of staple foods. Increases were greater in corporate stores. Trends do not suggest neighborhood-level disparities in compliance. Study Registration: ClinicalTrials.gov NCT02774330, retrospectively registered May 17, 2016. abstract_id: PUBMED:31908190 Cigarette Prices in Rural and Urban Ohio: Effects of Census Tract Demographics. The prevalence of smoking among rural Americans and Americans of lower socioeconomic status (SES) remains higher than among their urban and higher SES counterparts. Potential factors contributing to these disparities are area-based differences in the retail environment and tobacco control policies. We describe the association between neighborhood demographics and cigarette prices in rural and urban areas. Prices of one pack of Marlboro Reds, Newport menthols, and the cheapest cigarettes in the store were recorded from a stratified random sample of tobacco retailers in urban (N = 104) and rural (N = 109) Ohio in 2016. Associations between prices and census tract demographics (SES and race) were modeled separately in each region, controlling for store type. In the rural region, as the census tract income increased, the price of Marlboro and Newport cigarettes decreased, and the price of the cheapest pack of cigarettes increased. In the urban region, as the census tract income and percentage of White non-Hispanic people increased, the price of Marlboro decreased; there were no associations between census tract characteristics and the price of Newports or the cheapest cigarettes. Results describe a complex association between cigarette brand, prices, and area characteristics, where the cheapest brands of cigarettes can be obtained for the lowest prices in lower SES rural areas. Tobacco control policies that raise the price of cheap cigarettes, particularly minimum price laws, have the potential to reduce SES-related smoking disparities in both rural and urban populations. abstract_id: PUBMED:37819918 The relationship of single-family detached house prices with neighborhood walkability and disaster risk. People's preferences regarding their neighborhood environment can vary depending on their socioeconomic status and the cities where they live. This study aims to discern the relationship between neighborhood environment factors and single-family detached house sales by sale price and by central and noncentral cities. We analyzed sale prices in the Tokyo Metropolitan Area from 2015 to 2020. The neighborhood environment was assessed using flood/sediment risk and neighborhood walkability measured by net residential density, intersection density, and facility density (walking opportunity). Flood and sediment risk is a major concern that restricts the available land and is included as a negative aspect of the neighborhood environment, taking the topographic features into consideration. A comparison of the results showed that the preference for neighborhood walkability varies by socioeconomic status as well as by target cities. For most facility types, the number of walking opportunities within walking distance from houses was found to be positively related to the sale price of single-family detached houses in all quantiles. The relationship of house price with population and intersection density was found to vary depending on the price level, with a negative relationship with the sale price of relatively more expensive houses being exhibited. People who considered buying houses with relatively higher sale prices were found to devalue houses located in flood/sediment-hazardous areas more. However, it was also found that the negative relationship was slightly mitigated in the highest quantile of sale prices for houses in areas with a moderate flood risk (maximum flooding depth: 3-5 m). Plains near rivers with amenities offer high walkability but pose a flood risk, resulting in a trade-off between flood risk and neighborhood walkability. The findings suggest the use of indices representing diverse preferences in accordance with the target socioeconomic status when policymakers assess the neighborhood environment. abstract_id: PUBMED:24161713 The intersection of neighborhood racial segregation, poverty, and urbanicity and its impact on food store availability in the United States. Background: Food store availability may determine the quality of food consumed by residents. Neighborhood racial residential segregation, poverty, and urbanicity independently affect food store availability, but the interactions among them have not been studied. Purpose: To examine availability of supermarkets, grocery stores, and convenience stores in US census tracts according to neighborhood racial/ethnic composition, poverty, and urbanicity. Methods: Data from 2000 US Census and 2001 InfoUSA food store data were combined and multivariate negative binomial regression models employed. Results: As neighborhood poverty increased, supermarket availability decreased and grocery and convenience stores increased, regardless of race/ethnicity. At equal levels of poverty, Black census tracts had the fewest supermarkets, White tracts had the most, and integrated tracts were intermediate. Hispanic census tracts had the most grocery stores at all levels of poverty. In rural census tracts, neither racial composition nor level of poverty predicted supermarket availability. Conclusions: Neighborhood racial composition and neighborhood poverty are independently associated with food store availability. Poor predominantly Black neighborhoods face a double jeopardy with the most limited access to quality food and should be prioritized for interventions. These associations are not seen in rural areas which suggest that interventions should not be universal but developed locally. abstract_id: PUBMED:37232382 The Role of Perceived Neighborhood Cohesion in the Association between Depressive Symptoms and Cigarette Smoking. Background: Greater depression has been linked to increased smoking rates. However, the mechanisms underlying this association are not fully understood. It is possible that high perceived neighborhood cohesion may serve as one such mechanism given its associations with decreased depression and smoking. Having increased levels of depression likely impacts one's perceptions of neighborhood cohesion, which could lead to further increases in depression and a need to manage these symptoms via cigarette smoking. As a first test of this theory, the current study examined the effect of neighborhood cohesion on the association between depressive symptoms and smoking frequency and quantity among past 30-day cigarette smokers. Methods: Participants were 201 combustible cigarette smokers (Mage = 48.33, SD = 11.64; 63.2% female; 68.2% White) who completed self-report measures as part of a larger study of environmental influences on cardiac health. Results: Greater depressive symptoms were associated with lower levels of perceived neighborhood cohesion, and there was a significant indirect effect of greater depressive symptoms on heavier smoking through decreased neighborhood cohesion (b = .07, SE = .04, 95% CI [.003, .15]). There was no significant indirect effect for daily smoking. Conclusion: These results suggest that neighborhood cohesion is an important contextual factor that serves as one explanatory mechanism for the well-established relationship between depression and smoking quantity. Thus, there may be utility in implementing interventions focused on increasing neighborhood cohesion as a way to decrease smoking behavior. abstract_id: PUBMED:34444117 Cigarette Prices and Smoking Behavior in Israel: Findings from a National Study of Adults (2002-2017). Tobacco taxation and price policies are considered the most effective for lowering demand for tobacco products. While this statement is based on research from numerous countries, scant evidence exists on this topic for Israel. Accordingly, we assessed the association between cigarette prices and smoking prevalence and intensity from a national sample of adults in Israel (2002-2017). Data on smoking behavior were derived from the Israeli Knowledge Attitudes and Practices (KAP) survey, a repeated cross-sectional survey. Price information is from the Economist Intelligence Unit (EIU) since it was not collected in the KAP survey. We used the price of a pack of 20 cigarettes for Marlboro and the local brand. These two price variables were the primary independent variables, and we adjusted for inflation. The dependent variables were current smoking (yes/no) and smoking intensity, defined as the number of cigarettes smoked per week. Multivariable analysis was employed using a two-part model while adjusting for covariates. The first step of the model utilized logistic regression with current smoking as the dependent variable. The second step examining smoking intensity as the dependent variable, used OLS regression. Price elasticity was estimated as well. Analysis revealed that a one-unit increase (Israeli currency) in the price of local brand of cigarettes was related to 2.0% (OR = 0.98; 95%CI 0.98, 0.99) lower odds of being a current smoker, adjusting for covariates including household income. Moreover, a one unit increase in the price of the local brand of cigarettes was related to consuming 1.49 (95% CI -1.97, -1.00) fewer weekly cigarettes, controlling for household income and covariates. Similar results were found with the Marlboro cigarette prices. The total price elasticity of cigarette demand, given by the sum of price elasticities of smoking prevalence and intensity, showed that a 10.0% increase in the price is associated with a 4.6-9.2% lower cigarette consumption among Israeli adults. Thus, increasing cigarette prices will likely lead to a reduction in cigarette smoking thereby improving public health in Israel. abstract_id: PUBMED:30991159 Inter-brand competition in the convenience store industry, store density and healthcare utilization. We investigate the impact of access to convenience stores and competition between convenience store chains on the use of medical care in Taiwan. Using insurance claims from 0.85 million individuals and administrative data on store sales, we find that greater store density and more inter-brand competition reduced expenditures on outpatient medical services and prescription drugs. In support of these findings, we demonstrate that convenience store competition was associated with greater consumption of healthy foods and lower obesity rates. Our estimates suggest that the rise in convenience store competition from 2002 to 2012 reduced outpatient expenditures in Taiwan by 0.44 percent and prescription drug expenditures by 0.85 percent. Answer: Yes, cigarette prices do vary by brand, neighborhood, and store characteristics. Research has shown that the price of cigarettes can be influenced by the racial/ethnic and youth composition of a neighborhood, the number of schools and stores in the area, and whether the store is independently owned or part of a chain (PUBMED:19618790). Additionally, the type of store and its location, whether urban or rural, can also affect cigarette prices. For instance, in rural areas, lower socioeconomic status (SES) neighborhoods may have access to cheaper cigarette brands compared to urban areas (PUBMED:32019508). Furthermore, the presence of supermarkets, grocery stores, and convenience stores, which can vary by neighborhood racial composition and poverty levels, may also impact the availability and pricing of cigarettes (PUBMED:24161713). In Thailand, for example, after a tobacco tax increase, there was a shift in consumption from more expensive to less expensive cigarette brands, suggesting that smokers may switch to lower-priced options in response to price changes (PUBMED:28579499). Similarly, in Israel, increases in cigarette prices were associated with lower odds of being a current smoker and a reduction in the number of cigarettes smoked weekly, indicating that price can influence smoking behavior (PUBMED:34444117). Moreover, cigarette manufacturers may use different brand descriptors to influence perceptions of harm, attractiveness, and appeal, which can affect consumer choice and potentially pricing strategies (PUBMED:33575516). Lastly, the broader retail environment, including competition among convenience store chains, has been linked to health-related outcomes and behaviors, such as the consumption of healthy foods and healthcare utilization, which could indirectly relate to cigarette pricing and purchasing patterns (PUBMED:30991159).
Instruction: Do disease specific characteristics add to the explanation of mobility limitations in patients with different chronic diseases? Abstracts: abstract_id: PUBMED:9519132 Do disease specific characteristics add to the explanation of mobility limitations in patients with different chronic diseases? A study in The Netherlands. Study Objectives: To determine whether disease specific characteristics, reflecting clinical disease severity, add to the explanation of mobility limitations in patients with specific chronic diseases. Design And Setting: Cross sectional study of survey data from community dwelling elderly people, aged 55-85 years, in the Netherlands. Participants And Methods: The additional explanation of mobility limitations by disease specific characteristics was examined by logistic regression analyses on data from 2830 community dwelling elderly people. Main Results: In the total sample, chronic non-specific lung disease, cardiac disease, peripheral atherosclerosis, diabetes mellitus, stroke, arthritis and cancer (the index diseases), were all independently associated with mobility limitations. Adjusted for age, sex, comorbidity, and medical treatment disease specific characteristics that explain the association between disease and mobility mostly reflect decreased endurance capacity (shortness of breath and disturbed night rest in chronic non-specific lung disease, angina pectoris and congestive heart failure in cardiac disease), or are directly related to mobility function (stiffness and lower body complaints in arthritis). For atherosclerosis and diabetes mellitus, disease specific characteristics did not add to the explanation of mobility limitations. Conclusions: The results provide evidence that, to obtain more detailed information about the differential impact of chronic diseases on mobility, disease specific characteristics are important to take into account. abstract_id: PUBMED:26678067 Associations between neighbourhood characteristics and community mobility in older adults with chronic health conditions. Purpose: To explore associations between perceptions of neighbourhood built and social characteristics and satisfaction with community mobility in older adults with chronic health conditions. Method: Two hundred and thirty-seven community-dwelling adults aged 60 years or more with one or more of arthritis (osteoarthritis or rheumatoid arthritis), chronic obstructive pulmonary disease, diabetes or heart disease completed a cross-sectional, mailed survey. The survey addressed community mobility and 11 neighbourhood characteristics: amenities (three types), problems (six), social cohesion and safety. Analysis involved logistic regression modeling for each neighbourhood characteristic. Results: Satisfaction with community mobility was associated with perception of no traffic problems (OR = 3.0, 95% CI = 1.4-6.2, p ≤ 0.05) and neighbourhood safety (OR = 3.4, 95% CI = 1.2-9.8, p ≤ 0.05), adjusted for age, ability to walk several blocks and depressive symptoms. Conclusion: Satisfaction with community mobility is associated with neighbourhood safety and no traffic problems among older adults with chronic conditions. While further research is needed to explore these neighbourhood characteristics in more detail and to examine causation, addressing these neighbourhood characteristics in health services or community initiatives may help promote community mobility in this population. Implications for Rehabilitation Community mobility, or the ability to move about one's community, is a key aspect of participation that enables other aspects of community participation. Good community mobility is associated with perception of no traffic problems and neighbourhood safety among older adults. Considering and addressing a broad range of environmental influences has the potential to improve community mobility in older adults, beyond traditional approaches. Health professionals can work with clients to develop strategies to avoid traffic and safety problems and can work with communities to develop safe spaces within neighbourhoods, to improve community mobility in older adults. abstract_id: PUBMED:29948347 Disease and management beliefs of elderly patients with rheumatoid arthritis and comorbidity: a qualitative study. To explore in elderly patients with rheumatoid arthritis (RA) and comorbidity (1) in which order and why patients prioritize their morbidities with regard to functioning and health, (2) their beliefs about common (age-related) musculoskeletal complaints, and (3) experiences about the influence of comorbidity on medication treatment of RA. Patients between 50 and 85 years with RA and ≥ 1 comorbidity or lifestyle risk factor were invited for a semi-structured interview. Two readers coded the transcripts of the interviews, by using NVivo11 software. Fifteen patients (14 women; mean age 67 years (range 51-83 years); mean disease duration 14 years (range 1-39 years)) were interviewed. Only 3 (20%) out of 15 patients prioritized RA over their comorbidity; these patients often experienced severe functional limitations. The level of current or (perceived) future disability, risk of dependency, and the perceived lethality of a condition were considered by participants when prioritizing morbidities. Most participants had misconceptions about common age-related musculoskeletal complaints. Consequently, these participants attributed all joint complaints or even all physical complaints to RA, disregarding degenerative joint disease and physiological aging as alternative diagnoses. Half of the participants ever had to change RA medication because of comorbidity. Most of these patients had prioritized the comorbidity, sometimes even over treatment of RA disease activity. Most elderly RA patients with comorbidity prioritize the importance and treatment of comorbidity over RA. Better understanding of patients' beliefs on RA and comorbidity is essential when managing chronic conditions in elderly patients. abstract_id: PUBMED:8970491 Self-reports and general practitioner information on the presence of chronic diseases in community dwelling elderly. A study on the accuracy of patients' self-reports and on determinants of inaccuracy. Object: The object of the study is to investigate the (in)accuracy of patients' self-reports, as compared with general practitioners' information, regarding the presence of specific chronic diseases, and the influence of patient characteristics. Methods: Questionnaire data of 2380 community-dwelling elderly patients, aged 55-85 years, on the presence of chronic non-specific lung disease, cardiac disease, peripheral atherosclerosis, stroke, diabetes, malignancies, and osteoarthritis/rheumatoid arthritis were compared with data from the general practitioners, using the kappa-statistic. Associations between the accuracy of self-reports and patient characteristics were studied by multiple logistic regression analyses. Results: Kappa's ranged from 0.30 to 0.40 for osteoarthritis/rheumatoid arthritis and atherosclerosis, to 0.85 for diabetes mellitus. In the multivariate analyses, educational level, level of urbanization, deviations in cognitive function, and depressive symptomatology had no influence on the level of accuracy. An influence of gender, age, mobility limitations, and recent contact with the general practitioner was shown for specific diseases. For chronic non-specific lung disease, both "underreporting" and "overreporting" are more prevalent in males, compared to females. Furthermore, males tend to overreport stroke and underreport malignancies and arthritis, whereas females tend to overreport malignancies and arthritis. Both overreporting and underreporting of cardiac disease are more prevalent as people are older. Also, older age is associated with overreporting of stroke, and with underreporting of arthritis. The self-reported presence of mobility limitations is associated with overreporting of all specific diseases studied, except for diabetes mellitus, and its absence is associated with underreporting, except for diabetes mellitus and atherosclerosis. Recent contact with the general practitioner is associated with overreporting of cardiac disease, atherosclerosis, malignancies and arthritis, and with less frequent underreporting of diabetes and arthritis. Conclusions: Results suggest that patients' self-reports on selected chronic diseases are fairly accurate, with the exceptions of atherosclerosis and arthritis. The associations found with certain patient characteristics may be explained by the tendency of patients to label symptoms, denial by the patient, or inaccuracy of medical records. abstract_id: PUBMED:23578847 Not just specific diseases: systematic review of the association of geriatric syndromes with hospitalization or nursing home admission. To examine the association between geriatric syndromes with hospitalization or nursing home admission, we reviewed studies that examined hospitalization and nursing home admission in community-dwelling older adults with multiple morbidities, cognitive impairment, frailty, disability, sarcopenia, malnutrition, impaired homeostasis, and chronic inflammation. Studies published in English language were identified through MEDLINE (1990 through April 2010), Cochrane databases, the Centers for Disease Control and Prevention website and manual searches of reference lists from relevant publications. The study had to include general (non-disease specific) populations of adults aged 65 years or older. Using a standardized protocol, two investigators independently abstracted information on participant characteristics and adjusted measures of the association. Studies that controlled for the presence of specific diseases were further identified and analyzed. When the syndrome examined was similar from different studies, we computed the pooled risk estimates using a random-effects model. We assessed the strength of evidence following the recommended guidelines. We identified 47 eligible articles from 6 countries. Multiple morbidity, frailty, and disabilities were associated with hospitalization and nursing home admission (moderate evidence). Cognitive impairment was associated with hospitalization (low evidence) and nursing home admission (moderate evidence). Among these studies, 20 articles controlled for specific diseases. Limited evidence suggested that these geriatric syndromes are associated with hospitalization and institutionalization after controlling for the presence of specific diseases. We conclude that geriatric syndromes are associated with risk of hospitalization or nursing home admission. Efforts to prevent hospitalization or nursing home admission should target strategies to prevent and manage these syndromes. abstract_id: PUBMED:23844776 Prevalence of physical disability in patients with chronic obstructive pulmonary disease and associated risk factors. Introduction: To analyze the prevalence of disability among patients with COPD as compared to people without this lung disease and to identify sociodemographic and health factors associated with these physical limitations. Methods: We performed a cross-sectional study based on data taken from the European Health Interview Survey for Spain (EHISS) 2009. The survey included adults aged 40 to 79 years (n = 13624). We identified patients with COPD using a direct question from the survey. The main study variable was disability, including basic activities of daily living (ADLs), instrumental activities (IADLs), and mobility disability (MD). Independent variables analyzed included socio-demographic characteristics, self-perceived health status, presence of mental disease and co-morbid chronic conditions and lifestyles. Results: The overall prevalence of self-reported COPD was 7.2%. The total prevalence of ADL, IADL and MD was higher among men suffering COPD than among the population without this disease (16.39% vs. 4.91%; 27.70% vs. 12.45%; 48.94% vs. 17.46% respectively). These significant differences were also observed among women with equivalent figures of: 23.25% vs. 8.71%; 31.03% vs. 10.53%; 64.83% vs. 34.08%. Being older, having a self-perceived health status of "fair/poor/very poor" and suffering from mental disease (anxiety and/or depression) were factors associated with a higher probability of reporting any disability in both men and women suffering from COPD. Conclusions: Higher rates of disability were seen among COPD patients compared to the general population. Among COPD patients older age, depression, anxiety and worse self-rated health were associated with higher disability. abstract_id: PUBMED:29851682 Pediatric Mobility: The Development of Standard Assessments and Interventions for Pediatric Patients for Safe Patient Handling and Mobility. Due to improvements in biomedical sciences and clinical expertise, childhood mortality has decreased. Pediatric patients experience similar complications from inactivity as adults, such as hospital-acquired conditions and delirium. Interdisciplinary collaboration is imperative to improve functional and developmental outcomes of children who suffers from either a chronic illnesses or an acute illness that results in a prolonged hospitalization. Pediatric nursing assessments should include gross motor function related to mobility. Activities to mobilize pediatric patients should be based on age and developmental assessment. The purpose of this article is to describe the modification of an adult mobility assessment for the pediatric population. abstract_id: PUBMED:20198376 Priorities of elderly dizzy patients in general practice. Findings and psychometric properties of the"Dizziness Needs Assessment" (DiNA). Background: Dizziness as a geriatric syndrome needs to be assessed using a multi-dimensional, patient-centred approach in addition to a disease-orientated strategy. The aim of the study was to determine the priorities of elderly patients by a specific needs questionnaire, the"Dizziness Needs Assessment" (DiNA), and to evaluate its psychometric properties. Methods: General practitioners (GPs) distributed questionnaires containing the DiNA as well as the Patients' Intentions Questionnaire (PIQ) to patients aged at least 65 years and suffering from dizziness. Items of both questionnaires were analysed by frequencies, means and rank correlations. Factor structure was explored by principal component analysis. Results: A total of n=123 patients (mean age 76 years, 73% women) had suffered from dizziness on average for more than 3 years (57% chronic, i.e. &gt;6 months). Knowing the cause of the dizziness was rated as very important by patients, and about half of them wished that their doctor would make more effort to investigate this. Among other differences, chronically dizzy patients ranked the risk of falling significantly higher than those with acute dizziness. Factor analysis revealed four subscales: "handicap and mobility" showed a very good reliability of 0.77 (Cronbach's α), indicating a "trait", whereas the other subscales rather indicated "state" characteristics. Validation coefficients showed that PIQ assesses general patient needs compared to the more specific dizziness-related needs revealed by the DiNA. Conclusion: The DiNA proved to be a valuable instrument to assess the specific priorities of elderly patients suffering from dizziness. Regarding the limited therapy options for dizziness in old age, a doctor-guided shift of patients' attention from causes to symptom-related implications could be a promising approach. abstract_id: PUBMED:37356873 Variation in sociodemographic, clinical and functional characteristics of patients with chronic venous insufficiency in a single public health center. Objectives: Describe and analyze the sociodemographic, clinical and functional characteristics of individuals with chronic venous insufficiency (CVI) and compare the clinical severity with levels of mobility. Methods: Exploratory cross-sectional study in a single public health center. The study cohort comprised 99 individuals. Interventions: Clinical, sociodemographic and physical activity level questionnaires were applied; we also evaluated ankle amplitude and sural triceps function. Descriptive statistics, independent t test and one-way ANOVA with post hoc Least Significant Difference (LSD) were performed. Results: Ninety-nine participants (87.9% woman) with an average age with an average age of 60.6 ± 14 years who where recruited to the study. The cohort was composed of individuals were moderately active, 80% exhibited symptoms of the disease and approximately 40% reported reduced mobility. Individuals who had reduced mobility and less physical activity showed reduced ankle amplitude and muscle function (p &lt;0.05). Conclusion: CVI impacts on functional capacity, even in individuals with less severe disease. abstract_id: PUBMED:25479983 The patient work system: an analysis of self-care performance barriers among elderly heart failure patients and their informal caregivers. Human factors and ergonomics approaches have been successfully applied to study and improve the work performance of healthcare professionals. However, there has been relatively little work in "patient-engaged human factors," or the application of human factors to the health-related work of patients and other nonprofessionals. This study applied a foundational human factors tool, the systems model, to investigate the barriers to self-care performance among chronically ill elderly patients and their informal (family) caregivers. A Patient Work System model was developed to guide the collection and analysis of interviews, surveys, and observations of patients with heart failure (n = 30) and their informal caregivers (n = 14). Iterative analyses revealed the nature and prevalence of self-care barriers across components of the Patient Work System. Person-related barriers were common and stemmed from patients' biomedical conditions, limitations, knowledge deficits, preferences, and perceptions as well as the characteristics of informal caregivers and healthcare professionals. Task barriers were also highly prevalent and included task difficulty, timing, complexity, ambiguity, conflict, and undesirable consequences. Tool barriers were related to both availability and access of tools and technologies and their design, usability, and impact. Context barriers were found across three domains-physical-spatial, social-cultural, and organizational-and multiple "spaces" such as "at home," "on the go," and "in the community." Barriers often stemmed not from single factors but from the interaction of several work system components. Study findings suggest the need to further explore multiple actors, contexts, and interactions in the patient work system during research and intervention design, as well as the need to develop new models and measures for studying patient and family work. Answer: Yes, disease-specific characteristics do add to the explanation of mobility limitations in patients with different chronic diseases. A study conducted in The Netherlands on community-dwelling elderly people aged 55-85 years found that disease-specific characteristics reflecting clinical disease severity were important in explaining mobility limitations in patients with specific chronic diseases. The study showed that chronic non-specific lung disease, cardiac disease, peripheral atherosclerosis, diabetes mellitus, stroke, arthritis, and cancer were all independently associated with mobility limitations. Disease-specific characteristics that explained the association between disease and mobility mostly reflected decreased endurance capacity or were directly related to mobility function. For example, shortness of breath and disturbed night rest in chronic non-specific lung disease, angina pectoris and congestive heart failure in cardiac disease, and stiffness and lower body complaints in arthritis were significant factors. However, for atherosclerosis and diabetes mellitus, disease-specific characteristics did not add to the explanation of mobility limitations (PUBMED:9519132). This finding is consistent with the understanding that chronic diseases can have a differential impact on mobility, and considering disease-specific characteristics can provide more detailed information about this impact. It is important to note that while some disease-specific characteristics are directly related to mobility, others may reflect a more general decrease in endurance capacity that can indirectly affect mobility.
Instruction: Role of adjuvant surgery in initially unresectable pancreatic cancer after long-term chemotherapy or chemoradiation therapy: survival benefit? Abstracts: abstract_id: PUBMED:24841048 Role of adjuvant surgery in initially unresectable pancreatic cancer after long-term chemotherapy or chemoradiation therapy: survival benefit? Background: The purpose of the present study was to analyze the survival benefit and safety of adjuvant surgery in patients with initially unresectable pancreatic cancer following chemo(radio)therapy. Methods: The 130 patients with unresectable pancreatic cancer treated during 2006 to 2013 were divided into a study group (15 patients) with planned adjuvant surgery, and a control group (115 patients with locally advanced disease) without adjuvant surgery. Results: The study group of 15 patients had shrunken tumor, decreased tumor marker, and maintained performance status after 9 months (range 5-18 months) of chemo(radio)therapy. Thirteen patients had curative resection and two patients were not resected. The remaining controls of 115 patients did not undergo surgical resection due to poor response to chemo(radio)therapy or performance status. The median survival time in the study group was better than in the control group (36 vs. 9 months, P &lt; 0.001). The mortality and morbidity rates in the study group were 0% and 46% respectively, in spite of concomitant organ resections in 77%. Conclusion: Patients who had adjuvant surgery had significant improvement of survival without increase in morbidity and mortality, relative to patients with locally advanced disease. Thus, adjuvant surgery may provide the promising results in this group who responded favorably to initial chemo(radio)therapy in unresectable pancreatic cancer. abstract_id: PUBMED:21455748 Adjuvant surgical therapy for patients with initially-unresectable pancreatic cancer with long-term favorable responses to chemotherapy. Background: To evaluate the prognostic impact of surgical intervention for initially-unresectable pancreatic ductal adenocarcinomas with long-term favorable responses to chemotherapy. Method: Twelve patients with initially-unresectable pancreatic ductal carcinomas who underwent radical surgery after a favorable response to chemotherapy for six months or longer in principle were enrolled in this study. We retrospectively reviewed the charts of these 12 patients and performed a survival analysis. Results: Initially, the included patients were unable to undergo resection secondary to locally-advanced disease in eight patients and metastatic disease in four patients. The length of preoperative therapy was five to 44 months (median 12). The operative procedure included resection of the area initially involved by tumor and regional major vessels. The postoperative mortality and morbidity rates were 0% for patients with locally-advanced disease and 50% in those with metastatic disease. R0 resection was achieved in nine patients (75%) and pathological CR was seen in one patient. Estimated overall five-year survival from initial therapy was 50.0%. The survival rate (0% at 5 years) of the four patients with metastatic disease as the cause of initial unresectability was significantly worse than that (100% at 5 years) of the eight patients with locally-advanced disease (P = 0.0014). Conclusion: Surgical intervention should be considered for patients with initially-unresectable pancreatic cancers who demonstrate long-term favorable responses to chemotherapy. R0 resection may significantly contribute to cure, especially in patients with initially locally-advanced disease. Large cohort prospective studies will be necessary to demonstrate the efficacy of this strategy. abstract_id: PUBMED:23660962 Role of adjuvant surgery for patients with initially unresectable pancreatic cancer with a long-term favorable response to non-surgical anti-cancer treatments: results of a project study for pancreatic surgery by the Japanese Society of Hepato-Biliary-Pancreatic Surgery. Purpose: A multicenter survey was conducted to explore the role of adjuvant surgery for initially unresectable pancreatic cancer with a long-term favorable response to non-surgical cancer treatments. Methods: Clinical data including overall survival were retrospectively compared between 58 initially unresectable pancreatic cancer patients who underwent adjuvant surgery with a favorable response to non-surgical cancer treatments over 6 months after the initial treatment and 101 patients who did not undergo adjuvant surgery because of either unchanged unresectability, a poor performance status, and/or the patients' or surgeons' wishes. Results: Overall mortality and morbidity were 1.7 and 47 % in the adjuvant surgery group. The survival curve in the adjuvant surgery group was significantly better than in the control group (p &lt; 0.0001). The propensity score analysis revealed that adjuvant surgery was a significant independent prognostic variable with an adjusted hazard ratio (95 % confidence interval) of 0.569 (0.36-0.89). Subgroup analysis according to the time from initial treatment to surgical resection showed a significant favorable difference in the overall survival in patients who underwent adjuvant surgery over 240 days after the initial treatment. Conclusion: Adjuvant surgery for initially unresectable pancreatic cancer patients can be a safe and effective treatment. The overall survival rate from the initial treatment is extremely high, especially in patients who received non-surgical anti-cancer treatment for more than 240 days. abstract_id: PUBMED:34631515 Prognostic Factors and Effect of Adjuvant Chemoradiation Following Chemotherapy in Resected Pancreatic Cancer Patients With Lymph Node Metastasis or R1 Resection. Pancreatic ductal adenocarcinoma (PDAC) is a lethal disease with a poor prognosis. In resectable PDAC, the recurrence rate is still high even when surgery and adjuvant chemotherapy (CT) are applied. Regional lymph node metastasis and positive margins are associated with higher recurrence risk and worse survival. Adjuvant radiotherapy has been explored, but its efficacy remains controversial. In recent years, some characteristics have been reported to stratify patients who may benefit from adjuvant chemoradiation (CRT), such as lymph node metastasis and margin status. Adjuvant chemotherapy followed by chemoradiation (CT-CRT) was also proposed. A total of 266 patients with resectable PDAC who have lymph node metastasis or R1 resection after surgery were enrolled. In multivariate Cox regression analyses, pancreatic body or tail tumor location (HR 0.433, p&lt;0.0001, compared with pancreatic head) and adjuvant CT predicted a better survival, while there were no significant differences among the different CT regimens. Higher T stage indicated poor survival (stage I: reference; stage II: HR 2.178, p=0.014; stage III: HR 3.581, p=0.001). Propensity score matching was applied in 122 patients to explore the role of CRT. A cohort of 51 patients (31 and 20 patients in the CT and CT-CRT groups, respectively) was generated by matching. Further analyses revealed adjuvant CT-CRT was associated with prolonged survival compared with CT alone (HR 0.284, p=0.014) and less frequent local recurrences (56.5% vs. 21.4% in the CT and CT-CRT group, respectively). However, no significant differences in disease-free survival among these two groups were observed. abstract_id: PUBMED:28983358 Unresectable Pancreatic Adenocarcinoma: Eight Years Later. Pancreatic cancer is the fourth leading cause of cancer deaths in the United States, and is considered uniformly fatal when patients present with unresectable, advanced-stage disease at the time of diagnosis. Long-term survival of patients with advanced-stage pancreatic adenocarcinoma remains rare, despite advances in adjuvant chemoradiation protocols. A 73-year-old male presented to our emergency department with abdominal pain and a history of biopsy-proven, stage III pancreatic adenocarcinoma. His initial staging CT scan and trans-duodenal ultrasound had demonstrated a stage IIa (T3, N0, Mx) lesion. On surgical exploration, he was up-staged to stage III (T4, N0, Mx), noting encasement of the superior mesenteric vessels and involvement of the portal vein. He underwent palliative choledochojejunostomy and was treated with 4 months of oxaliplatin and capecitabine, with concurrent radiation therapy (50.4 Gy), followed by 4 months of gemcitabine. After 7 months, the patient withdrew from therapy due to treatment intolerance. He then turned to self-medication with non-traditional herbal therapies. After 3 years of surveillance, he was lost to follow-up until presenting to our facility with abdominal pain 8 years after his initial diagnosis. On diagnostic CT scan during his current presentation for abdominal pain, he was found to have no evidence of pancreatic cancer. Based on our review of the literature, we present the longest known survival of a patient with surgically unresectable pancreatic adenocarcinoma. Further study of this patient's phenotypic or genotypic characteristics may provide insight into better therapeutic agents, or a predictive subset of patients who will benefit from specific chemotherapeutic options. abstract_id: PUBMED:31236851 Patterns of Care and Outcomes of Adjuvant Chemoradiation for Node-Positive Pancreatic Adenocarcinoma. Purpose: The literature has been conflicting on the superiority of adjuvant chemoradiation over chemotherapy for node-positive adenocarcinoma of the pancreas following definitive surgery. We aimed to evaluate the patterns of care and outcomes of these two treatment options using the National Cancer Database (NCDB). Methods: Patients diagnosed with non-metastatic, node-positive adenocarcinoma of the pancreas from 2006 to 2014 who received oncologic resection with negative margins were identified in the NCDB. Patient- and clinical-related factors were compared between those who received adjuvant chemotherapy alone (aC) versus adjuvant chemoradiation (aCRT). Univariable and multivariable logistic regression was performed to assess for predictors of adjuvant chemoradiation use. The Kaplan-Meier method was used to assess overall survival (OS) and Cox regression analysis was used to assess impact of covariables on OS. Results: There were 3609 patients who met the study criteria, of which 2988 (82.8%) received chemotherapy alone and 621 (17.2%) who received chemoradiation. Median follow up for living patients was 33.8 months (IQR 22-51). On multivariable logistic regression, those who received treatment in more recent years of diagnoses (OR 0.21-0.37, p &lt; 0.001) were less likely to receive aCRT over aC. Two-year OS for those who received chemo alone was 44.9% and for chemoradiation was 42.6% (p = 0.169). This finding was sustained on multivariable survival analysis (HR 0.99, p = 0.867). Conclusions: Adjuvant chemotherapy alone for adenocarcinoma of the pancreas is the predominant treatment of choice among US hospitals. There was no overall survival benefit noted in those who were treated with adjuvant chemoradiation. abstract_id: PUBMED:35839251 The Addition of Chemoradiation to Adjuvant Chemotherapy is Associated With Improved Survival Following Upfront Surgical Resection for Pancreatic Cancer With Nodal Metastases. Background: It is unclear whether the addition of chemoradiation (CRT) to adjuvant chemotherapy (CT) following upfront resection of pancreatic ductal adenocarcinoma (PDAC) provides any benefit. While some studies have suggested a benefit to combined modality therapy (CMT) (adjuvant CT plus CRT), it is not clear if this benefit was related to increased CT usage in patients who received CMT. We sought to clarify the use of CMT in patients who underwent upfront resection of PDAC. Methods: Patients with non-metastatic PDAC were retrospectively identified from the linked SEER-Medicare database. Those who underwent upfront resection were identified and divided into two cohorts - patients who received adjuvant CT and patients who received adjuvant CMT. Cohorts were compared. Univariate analysis described patient characteristics. Kaplan-Meier and multivariable Cox proportional hazards modeling were used to estimate overall survival (OS). Results: 3555 patients were identified; 856 (24%) received CT and 573 (16%) received CMT. The median number of CT doses was 11 for both groups. Patients who received CMT were younger, diagnosed in the earlier time frame, and had fewer comorbidities. The median OS was 21 months and 18 months for those treated with CMT and CT (P &lt; .0001), respectively, but when stratified by nodal status, the association with improved OS in the CMT cohort was only observed in node-positive patients. On multivariable analysis, receipt of CMT and removal of &gt;15 lymph nodes decreased the risk of death (P &lt; .05). Discussion: Receipt of CMT following upfront resection for PDAC was associated with improved survival, which was confined to node-positive patients. The role of adjuvant CMT in PDAC with nodal metastases warrants further study. abstract_id: PUBMED:35882183 Role of Chemoradiation in the Adjuvant Treatment of Radically Resected Pancreatic Cancer Patients: A Mono-Institutional Retrospective Analysis. Introduction: Pancreatic cancer (PC) represents an unfavorable prognosis condition, even in patients with resectable disease. The aim of this series was to investigate the role of treatment intensification with adjuvant chemoradiation (CRT) in radically resected PC patients. Methods: Data from PC patients who underwent radical surgery, adjuvant chemotherapy (CT), and CRT throughout a 20-year period were retrospectively collected. Actuarial local control (LC) and the overall survival (OS) were the primary endpoints, with disease-free survival and metastasis-free survival (MFS) representing secondary endpoints. Results: The analysis included 108 PC patients treated with adjuvant CRT and CT from January 2000 to August 2019. Median age was 66 years (range: 40-83), and all patients underwent radical surgical resection with adjuvant CT (88, 81.5%) plus concomitant CRT (101, 93.5%) or radiotherapy alone (7, 6.5%). The median dose delivered to the tumor bed was 50.4 Gy (range: 45-50.6 Gy), while median dose to regional lymphatic drainage stations was 39.6 Gy (range 39.6-45 Gy). Concomitant CT was a gemcitabine-based regimen in the vast majority of patients (87, 80.6%). Median follow-up time was 21 months; the 2- and 5-year LC rates were 75.8% and 59.1%, respectively. Perineural invasion at pathological assessment was found significantly associated with LC (p = 0.028). Median OS was 40 months with 2- and 5-year OS rates of 73.9% and 41.6%, respectively. Conclusions: The outcomes of this series suggest to investigate the possible impact of adding adjuvant CRT to CT in PC patients. Timing and combination of modern CRT with new systemic therapies need to be further investigated to personalize therapy and optimize clinical advantages. abstract_id: PUBMED:11745808 Survival benefits of adjuvant chemotherapy with oral doxifluridine (5'-DFUR) following radiotherapy in patients with unresectable pancreatic cancer. Background And Objectives: The combination of 5-fluorouracil and radiotherapy is thought to be the most effective treatment for locally unresectable pancreatic carcinoma. The outcomes, however, are far from acceptable from the viewpoint of long-term survival. We assessed the survival benefits of oral adjuvant chemotherapy with doxifluridine (5'-DFUR) following radiotherapy for patients with the disease. Methods: Thirty-five consecutive patients who underwent bypass surgery and radiotherapy for localized advanced unresectable adenocarcinoma of the pancreas head were retrospectively reviewed in regard to disease progression and survival. Ten of the 35 patients underwent adjuvant chemotherapy with 5'-DFUR after radiotherapy in an outpatient setting. Results: The 1-year survival for patients treated with radiotherapy alone was 29%. The 1-, 2-, and 3-year survivals for patients treated with the adjuvant chemotherapy after radiotherapy were 50, 40, and 30%, respectively (P = 0.0069, log-rank test). The elevation of tumor markers was delayed (P = 0.0346) and local control rate was improved (P = 0.0475) in patients with chemotherapy. Multivariate analysis demonstrated that the adjuvant chemotherapy with 5'-DFUR was a significant independent prognostic factor as well as tumor size. Conclusions: The adjuvant chemotherapy with 5'-DFUR following radiotherapy led to a significant prolongation of the survival for patients with unresectable localized pancreatic cancer. abstract_id: PUBMED:30949842 Conversion surgery for initially unresectable pancreatic cancer: current status and unresolved issues. Pancreatic cancer is one of the most lethal of all malignancies. One of the reasons for the dismal prognosis is that most diagnoses are made when the disease is either locally advanced or metastatic. Recent advances in chemotherapy and chemoradiotherapy (CRT) enable "conversion surgery" to be performed for selected patients with initially unresectable pancreatic cancer following favorable responses to preoperative treatment. Using FOLFIRINOX as preoperative treatment, the resection rate was reported as 6-44% of patients with locally advanced cancer and the prognosis of these patients was favorable. Even for metastasized cancer, recent reports show the effectiveness of conversion surgery, which has achieved 27-56 months of median overall survival. However, there are many unanswered questions about conversion surgery. The optimal regimen and duration of preoperative treatment remain unclear and there is still debate regarding the safety and effectiveness of vascular resection, which is often required for curative resection of locally advanced cancer. Accumulation of more data on conversion surgery is required to establish the safety and effectiveness of this treatment. In this review, we summarize the current status and unresolved issues about conversion surgery for initially unresectable pancreatic cancer. Answer: Adjuvant surgery in initially unresectable pancreatic cancer following long-term chemotherapy or chemoradiation therapy has been associated with a survival benefit in several studies. A study analyzing 130 patients with unresectable pancreatic cancer found that those who underwent adjuvant surgery after chemo(radio)therapy had a median survival time of 36 months, compared to 9 months in the control group that did not undergo surgery (PUBMED:24841048). Another study with 12 patients who underwent radical surgery after a favorable response to chemotherapy for six months or longer reported an estimated overall five-year survival of 50.0% (PUBMED:21455748). A multicenter survey comparing 58 patients who underwent adjuvant surgery with a favorable response to non-surgical cancer treatments over 6 months after the initial treatment to 101 patients who did not undergo surgery revealed that the survival curve in the adjuvant surgery group was significantly better (PUBMED:23660962). The propensity score analysis indicated that adjuvant surgery was a significant independent prognostic variable. Furthermore, a retrospective analysis of 108 pancreatic cancer patients treated with adjuvant chemoradiation and chemotherapy suggested that treatment intensification with adjuvant chemoradiation in radically resected patients may have a positive impact on local control and overall survival, warranting further investigation (PUBMED:35882183). However, it is important to note that the decision for adjuvant surgery should be considered on a case-by-case basis, taking into account the patient's response to initial treatments, performance status, and potential surgical risks. The studies suggest that patients with initially unresectable pancreatic cancer who demonstrate long-term favorable responses to chemotherapy or chemoradiation therapy may benefit from adjuvant surgery, potentially leading to improved survival outcomes.
Instruction: Tympanometry versus direct middle ear pressure measurement in an artificial model: is tympanometry an accurate method to measure middle ear pressure? Abstracts: abstract_id: PUBMED:14600462 Tympanometry versus direct middle ear pressure measurement in an artificial model: is tympanometry an accurate method to measure middle ear pressure? Hypothesis: Indirect tympanometric pressure measurements of the middle ear (ME) do not correspond to direct pressure measurements. Background: Tympanometry is an accepted method for estimating ME pressures. These pressure values are different, however, when measurements are made directly. Materials And Methods: Negative pressures were induced by volume changes in an artificial ME model with various "mastoid" volumes. Both tympanometric and direct pressure measurements were obtained simultaneously. Results: A substantial difference was demonstrated between tympanometric and direct pressure measurements. There was a linear correlation between tympanometry and direct pressures between -46.8 and -93.6 mmH2O, with a tympanometric overestimation of 40% to 20%, respectively. The smaller the "mastoid" volume, the greater the overestimation. The correlation was not linear for direct pressures between zero and -46.8 mmH2O. Tympanometric overestimation was greatest (i.e., 4-14-times the actual pressure) in the pressure range between zero and -12 mmH2O. Conclusions: Tympanometry does not yield precise ME pressure values. In a model with a larger "mastoid" and an actual pressure range of -46.8 to -93.6 mmH2O tympanometric readings are somewhat less unreliable (error of 40%-20%). However, tympanometry is in error of more than 400% in a model with a small mastoid in which the actual ME pressure is only slightly negative (i.e., range between -3.9 and -46.8 mmH2O). Therefore, this study implies that tympanometry has little use for measuring ME pressure especially in chronic ears or their sequelae that are usually characterized by having small negative pressures and small mastoids. abstract_id: PUBMED:33395350 Evaluation of Middle Ear Function by Tympanometry and the Influence of Lower Barometric Pressure at High Altitude. Jiang, Tao, Liping Zhao, Yanbo Yin, Huiqian Yu, and Qingzhong Li. Evaluation of middle ear function by tympanometry and the influence of lower barometric pressure at high altitude. High Alt Med Biol. 22:70-76, 2021. Objective: To evaluate middle ear function in residents of high-altitude areas in comparison to sea-level participants. Design: Prospective observational study. Setting: All experiments were performed by experienced audiologists with a calibrated tympanometry machine. Participants: Young adults between the age of 17 and 23 were recruited. Seventy-five participants from Shanghai (altitude 4 m) and 133 participants from the Shigatse area (altitude 4,040 m) were recruited. Any participant with any otological disorder was excluded. Main Outcome Measure: Four indexes of the tympanogram were evaluated in the two groups from different altitudes. Results: Our results showed that the peak of static compliance for the participants in Shigatse was smaller, but the absolute compliance of tympanic membrane remained the same. Similarly, the ear canal volume (ECV) from tympanometry was also affected by the elevated altitude in Tibet. In addition, the tympanometric peak pressure was decreased in high-altitude residents, which suggests a slightly declined function of the Eustachian tube at lower barometric pressure. However, no difference was found in the tympanometric width (TW). Conclusion: Our results indicate that tympanograms were affected by decreased atmospheric pressure at high altitude. Therefore, other than pressure-related indexes, TW is better for evaluating the function of the middle ear in high-altitude regions. abstract_id: PUBMED:14587979 Tympanometry accurately measures middle ear underpressures in monkeys. Tympanometry is useful for evaluating middle ear (ME) status, but its accuracy in estimating true ME pressure has been questioned. We evaluated the accuracy of tympanometry in 6 monkeys. Direct application and measurement of ME pressure were achieved with a probe introduced into the mastoid antrum, and tympanometry was done over a large range of applied ME pressures. For all ears, tympanometric pressure was a linear function of applied pressure. At large overpressures, the tympanometric pressure was approximately 40 mm H2O greater than the applied pressure, but there was little error in the measurement for applied underpressures. The measurement error was proportional to the ME pressure multiplied by the ratio of the extant volume displacement of the tympanic membrane to ME volume. These results show that in monkeys, tympanometry provides an accurate, relatively unbiased estimate of ME underpressure and suggest that the measurement error for tympanometry can be predicted for MEs of other species. abstract_id: PUBMED:19037824 Tympanometric experiments on retracted ear drums--does tympanometry reflect the true middle ear pressure? Conclusions: Tympanometry in ears with retracted or partially atelectatic tympanic membranes does not reflect the true middle ear (ME) pressure. The position of the tympanogram peak depends on the size of a retraction pocket and the remaining ME gas volume. Thus tympanometry in such cases cannot be used for measurements of ME pressure. Objective: To prove the hypothesis that tympanometry in ears with retraction pockets or atelectasis does not measure ME pressure correctly. Materials And Methods: Tympanometry was performed in a simplified artificial ME model, in which different 'tympanic membranes' with pockets could be integrated. Results: We found a shift of the tympanogram peaks, either towards negative or positive pressures, depending on the position of the pocket: in case of an 'inward' direction (towards the ME like a retraction pocket) the peak was in the negative pressure range, in case of an 'outward' direction (bulging into the ear canal) the shift was towards positive pressures. The shift increased with decreasing volume behind the tympanic membrane. The effect cannot be simply be explained by hysteresis but by the air 'cushion' that is 'trapped' behind the membrane, limiting its excursions during the tympanometric sweep. abstract_id: PUBMED:449503 Direct continuous measurement of middle ear pressure: method and clinical results (author's transl) Method (part 1): A method for the direct measurement of the middle ear pressure for clinical use is introduced. This method allows continuous pressure registration in the normal and pathological middle ear. The determination of the parameters influencing middle ear pressure like volume of the middle ear cavity, gas resorption, tube function is possible. The influence of body movement, changes of temperature etc. on the middle ear pressure can be registrated. The precondition for this method is a tube in the tympanic membrane. Clinical results (part 2): In 42 patients pressure development in physiological tube function and dysfunction, in patulous Eustachian tubes and tube obstructions can be registrated. The volume of the middle ear cavities and the increasing negative pressure because of gas resorption can be analysed with this method. In obstructed tubes different development of negative pressure may indicate differences in the elasticity of tympanic membranes. This method gives valuable criterion on middle ear pressure changes considering parameters like gas diffusion, tube function, cavity volume and membrane elasticity in normal and pathological middle ears. abstract_id: PUBMED:8435166 Effects of inhalant anesthesia on the middle ear as measured by tympanometry. This investigation examined the effects of inhalant anesthesia, nitrous oxide and halothane, on middle ear pressure. Moreover, the effect of inhalant anesthesia on the validation procedure for tympanometry was examined. Tympanometry was used to measure middle ear pressure variations. Subjects were examined with tympanometry prior to and after the administration of inhalant anesthesia. Group 1 (N = 86) received halothane and nitrous oxide. Group 2 (N = 52) received halothane only. Group 1 data were gathered in a previous investigation. The investigation was continued with subjects in group 2. Results indicated that middle ear status prior to anesthesia was not significantly different from middle ear status under anesthesia. In addition, middle ear pressure changes due to nitrous oxide were not significantly different from middle ear pressure changes due to halothane alone. Moreover, there was no effect on the tympanometry validation procedure. These findings are consistent with previous studies that indicate nonsignificant changes in middle ear pressure associated with the use of nitrous oxide and halothane. abstract_id: PUBMED:7629643 Study on middle ear ventilation using positional tympanometry--normal ear The middle ear is a cavity surrounded by solid bones, lined with mucosa, which has a gas-filled lumen. Cavernous organs such as the ear should have their own ventilation system under atmospheric pressure. The mechanism of ventilation in the middle ear has not been sufficiently clarified. Ventilation performed in the middle ear may be classified into two types: 1) passive ventilation via the Eustachian tube, required in cases of abnormal pressure and 2) unique physiological active ventilation of the middle ear performed under atmospheric pressure and not involving the Eustachian tube. The purpose of the present study is to prove the existence of this active ventilation under atmospheric pressure. The subjects were 50 normal ears and elevation of middle ear pressure in the lateral position (determined by positional tympanometry) was studied. The change in the peak level, on tympanometry, was used as an index. The results were continuously recorded every 12 seconds. The following results were obtained. 1. Middle ear pressure was elevated by changing from the sitting to the lateral position. Venous pressure was regarded as a causative factor in this pressure elevation. 2. The elevated middle ear pressure in the lateral position suggested gas production from mastoid cells of the middle ear. The observation that the middle ear pressure was stabilized with the increase in pressure, up to a level of 85-90 mm H2O, indicated the existence of gas leakage from the Eustachian tube and a mechanism for controlling gas production from the mastoid cells of the middle ear. abstract_id: PUBMED:3835879 Middle ear pressure in patients with middle ear effusion--direct measurement by pressure microtransducer. Middle ear pressure in patients with middle ear effusion were measured using a microtransducer which developed in our laboratory. Middle ear pressure showed not only negative but also positive pressure in these patients. The findings are discussed briefly. abstract_id: PUBMED:27118817 Effects of Continuous Positive Airway Pressure on Middle Ear Pressure and Acoustic Stapedial Reflex. Objective: This study investigated the effects of continuous positive airway pressure (CPAP) on middle ear pressure and acoustic stapedial reflex and the correlation between CPAP and middle ear pressure. Study Design: Prospective cohort study. Setting: Tertiary hospitals. Subjects And Methods: Fifty patients with obstructive sleep apnea-hypopnea syndrome were assigned to the study group, and 50 healthy volunteers were assigned to the control group. The subjects underwent standard tympanometry while wearing a CPAP device (ie, simulated CPAP treatment), which was set to 0, 5, 10, and 15 cm H2O, respectively. Tympanometry was performed before and after swallowing at each pressure of CPAP treatment. Results: The mean middle ear pressures were 21.2, 22.6, 22.7, and 23.4 daPa (before swallowing) and 21.6, 42.6, 81.4, and 118.6 daPa (after swallowing) in the study group and 17.6, 18.7, 19.5, and 20.8 daPa (before swallowing) and 17.7, 44.2, 85.6, and 120.5 daPa (after swallowing) in the control group at the CPAPs of 0, 5, 10, and 15 cm H2O, respectively. While the CPAPs were at 0 and 15 cm H2O, the stapedial muscle reflex at 1.0 kHz did not have a significant difference between the 2 groups (χ(2) = 0.521, P = .470). The Pearson correlation coefficient of the CPAP pressure and the middle ear pressure after swallowing was 0.812 (P &lt; .001). Conclusion: CPAP affected middle ear pressure and was directly proportional to the pressure of the CPAP. However, CPAP treatment had no significant effect on stapedial muscle reflex. abstract_id: PUBMED:1267350 Tympanometry revealing middle ear pathology. In order to evaluate the usefulness of impedance audiometry as a screening method for school children, two different pilot studies were performed. The results indicated that impedance audiometry (tympanometry and stapedius reflex test) was more efficient than otoscopy and pure tone screening in the detection of ears with secretory otitis media (SOM). In order to gain some more experience, an extended study of 800 7-year-olds was performed. The results from this investigation showed 6.5% pathological values with pure tone screening, 13.5% pathological values with tympanometry and 32% elevated or nonelicitable stapedius reflexes. Analysis of our observations indicates that the stapedius reflex may be too sensitive a test to be used as a screening method. Tympanometry, however, is recommended as a complement to pure tone screening in screening of children. A 6-12 months follow-up was also performed on 357 ears in which an initial study had shown a middle ear pressure of less than or equal to -100 mm H2O. At the repeated test 20% had pathological pure tone screening, 40% had a middle ear pressure of less than or equal to -100 mm H2O, 57% had elevated or nonelicitable stapedius reflexes, and 14% had middle ear effusion. The high frequency of persisting abnormal middle ear pressure and effusion in these 357 ears suggests that a middle ear pressure of less than or equal to -100 mm H2O can be a predisposing factor for SOM. In an experimental study on human temporal bones it was demonstrated that the tympanogram preserves its original appearance when the water level is low in the middle ear, while a higher level gives rise to a pathological tympanogram. It is also demonstrated that there is a discrepancy between the tympanometrically, indirectly recorded middle ear pressure and the middle ear pressure as measured manometrically, directly from the middle ear. Answer: Tympanometry has been widely used to estimate middle ear (ME) pressure, but its accuracy compared to direct pressure measurements has been questioned. In an artificial model, tympanometry did not correspond precisely to direct pressure measurements. A study using an artificial ME model with varying "mastoid" volumes found a substantial difference between tympanometric and direct pressure measurements. Tympanometry overestimated the pressure by 40% to 20% in the range of -46.8 to -93.6 mmH2O, with greater overestimation in smaller "mastoid" volumes. The overestimation was even more significant, by 4-14 times, in the pressure range between zero and -12 mmH2O. This suggests that tympanometry is not a precise method for measuring ME pressure, especially in conditions characterized by small negative pressures and small mastoids, such as chronic ears or their sequelae (PUBMED:14600462). In contrast, another study on monkeys showed that tympanometry provided an accurate estimate of ME underpressure, with little error for applied underpressures. The measurement error was proportional to the ME pressure multiplied by the ratio of the extant volume displacement of the tympanic membrane to ME volume. This indicates that tympanometry can be accurate in certain conditions and species (PUBMED:14587979). However, tympanometry's reliability is further questioned in cases of retracted or partially atelectatic tympanic membranes. In such cases, the tympanogram peak does not reflect the true ME pressure, as it is influenced by the size of a retraction pocket and the remaining ME gas volume (PUBMED:19037824). In summary, while tympanometry can provide an estimate of ME pressure, it does not always accurately reflect direct pressure measurements, particularly in certain pathological conditions or when the "mastoid" volume is small. The accuracy of tympanometry may vary depending on the specific circumstances and the species being tested.
Instruction: Does sunlight exposure improve survival in patients with non-small cell lung cancer? Abstracts: abstract_id: PUBMED:24377521 Does sunlight exposure improve survival in patients with non-small cell lung cancer? Background: Some epidemiological studies reported that sunlight exposure and highvitamin D levels may decrease the morbidity and mortality related to cancer. We aimed to evaluate whether sunlight exposure has an impact on survival in patients with non small cell lung cancer. Materials And Methods: A total of 546 patients with NSCLC from two different regions (Kayseri and Adana) differing according to sunlight exposure were analysed retrospectively. Results: The median overall survival (OS) rates were 11. 6 (CI: 9.50-13.6) and 15.6 months (CI: 12.4-18.8) for Kayseri and Adana, respectively, in all patients (p=0.880). Conclusions: There were no differences between groups in terms of OS. While there is strong evidence regarding inverse relationship between cancer incidence and sunlight exposure, it is still controversial whether sunlight exposure is a good prognostic factor for survival in patients with lung cancer. abstract_id: PUBMED:18456421 Do anticoagulants improve survival in patients with cancer? Activation of haemostasis is a key factor for the progression of malignant disease. Experimental data suggest that unfractionated heparin or low molecular weight heparin (LMWH) may reduce tumour progression. LMWH have antiangiogeneic effects and inhibit the binding of cancer cell surface glycosaminoglycans with platelet and endothelial selectins. LMWH also inhibit the degradation of the intercellular matrix by tumour cells. Retrospective analysis of several randomized controlled trials evaluating LMWH in patients with venous thromboembolism suggests that LMWH may reduce cancer related mortality in these patients. More recently, randomized controlled trials have tested this hypothesis in patients with cancer who did not have associated venous thromboembolism with conflicting results. LMWH given at prophylactic dosage in patients with advanced cancer was associated with no or minor survival improvement, although some positive results were obtained with such a regimen in patients with non-small cell lung cancer. On the other hand, it has been shown that LMWH given at a higher dosage was associated with a significant and clinically important improvement on overall survival as compared to placebo in localized cancer. Patients included in those trials had however cancer from different sites and this limit the relevance of these findings. Several trials are under way in order to evaluate different LMWH preparations in patients suffering from the same cancer type. The results will help to confirm or to refute a clinical role of LMWH as adjuvant therapy in patients with cancer. abstract_id: PUBMED:17145845 Second hand smoke exposure and survival in early-stage non-small-cell lung cancer patients. Purpose: Second hand smoke (SHS) exposure is associated with higher risk of lung cancer. However, the role of SHS in lung cancer survival is not clear. Experimental Design: We examined the association between self-reported SHS exposure before diagnosis and overall survival and recurrence-free survival in 393 early-stage non-small-cell lung cancer patients. SHS exposure was analyzed by both duration and location of exposure using log-rank test and Cox proportional hazard models, adjusting for covariates including pack-years of smoking. Results: The median follow-up time was 66 months (range, 0.2-140 months). There were 135 recurrences and 213 deaths. The 5-year overall survival rates were 71% [95% confidence interval (95% CI), 62-81%], 61% (51-72%), 49% (38-60%), and 47% (37-58%), respectively, for patients with the lowest to highest quartile of SHS exposure durations (P &lt; 0.001, log-rank test), with the adjusted hazard ratio (AHR) of 1.57 (95% CI, 1.02-2.41) for the highest versus lowest quartile of SHS exposure durations (P(trend) = 0.04). For different SHS exposure locations, a stronger association was found for SHS exposure at work (AHR of the highest versus lowest quartile, 1.71; 95% CI, 1.12-2.61; P(trend) = 0.03) than for exposure at home (AHR, 1.26; 95% CI, 0.86-1.86; P(trend) = 0.20) or leisure places (AHR, 1.28; 95% CI, 0.83-1.95; P(trend) = 0.16). Similar associations were observed when SHS exposure durations were dichotomized into two or three groups and between SHS exposure and recurrence-free survival. Conclusions: SHS exposure is associated with worse survival in early-stage non-small-cell lung cancer patients, especially for SHS exposure at the work. abstract_id: PUBMED:23426076 Sunlight may increase the FDG uptake value in primary tumors of patients with non-small cell lung cancer. Currently, positron emission tomography with computerized tomography (PET-CT) is the most sensitive technique for detecting extracranial metastases in non-small cell lung cancer (NSCLC). It has been reported that there is a correlation between the maximal standardized uptake value (SUV(max)) of primary tumors and prognosis in patients with NSCLC. The effect of sunlight exposure on PET-CT SUV(max) value is not known. Therefore, we aimed to evaluate the effect of sunlight exposure on PET-CT SUV(max) value in patients with NSCLC. A total of 290 patients with NSCLC from two different regions of Turkey (Kayseri, n=168 and Adana, n=122) that have different climate and sunlight exposure intensity, were included in the study. Age, gender, histology of cancer, cancer stage, smoking status, comorbidity and SUV(max) of the primary tumor area at the time of staging were evaluated as prognostic factors. In the multivariate analysis, we detected that the region was the only independent factor affecting SUV(max) (P=0.019). We identified that warmer climate and more sunlight exposure significantly increases the SUV(max) value of the primary tumor area in patients with NSCLC. Further studies are warranted to clarify the issue. abstract_id: PUBMED:34336886 Construction of a Nomogram for Predicting Survival in Elderly Patients With Lung Adenocarcinoma: A Retrospective Cohort Study. Elderly patients with non-small-cell lung cancer (NSCLC) exhibit worse reactions to anticancer treatments. Adenocarcinoma (AC) is the predominant histologic subtype of NSCLC, is diverse and heterogeneous, and shows different outcomes and responses to treatment. The aim of this study was to establish a nomogram that includes the important prognostic factors based on the Surveillance, Epidemiology, and End Results (SEER) database from 2010 to 2015. We collected 53,694 patients of older than 60 who have been diagnosed with lung AC from the SEER database. Univariate and multivariate Cox regression analyses were used to screen the independent prognostic factors, which were used to construct a nomogram for predicting survival rates in elderly AC patients. The nomogram was evaluated using the concordance index (C-index), calibration curves, net reclassification index (NRI), integrated discrimination improvement (IDI), and decision-curve analysis (DCA). Elderly AC patients were randomly divided into a training cohort and validation cohort. The nomogram model included the following 11 prognostic factors: age, sex, race, marital status, tumor site, histologic grade, American Joint Committee for Cancer (AJCC) stage, surgery status, radiotherapy status, chemotherapy status, and insurance type. The C-indexes of the training and validation cohorts for cancer-specific survival (CSS) (0.832 and 0.832, respectively) based on the nomogram model were higher than those of the AJCC model (0.777 and 0.774, respectively). The CSS discrimination performance as indicated by the AUC was better in the nomogram model than the AJCC model at 1, 3, and 5 years in both the training cohort (0.888 vs. 0.833, 0.887 vs. 0.837, and 0.876 vs. 0.830, respectively) and the validation cohort (0.890 vs. 0.832, 0.883 vs. 0.834, and 0.880 vs. 0.831, respectively). The predicted CSS probabilities showed optimal agreement with the actual observations in nomogram calibration plots. The NRI, IDI, and DCA for the 1-, 3-, and 5-year follow-up examinations verified the clinical usability and practical decision-making effects of the new model. We have developed a reliable nomogram for determining the prognosis of elderly AC patients, which demonstrated excellent discrimination and clinical usability and more accurate prognosis predictions. The nomogram may improve clinical decision-making and prognosis predictions for elderly AC patients. abstract_id: PUBMED:27157918 Metformin exposure is associated with improved progression-free survival in diabetic patients after resection for early-stage non-small cell lung cancer. Objective: There are little clinical data assessing the antineoplastic effect of metformin in patients with non-small cell lung cancer. We hypothesized that in diabetic patients undergoing pulmonary resection for early-stage non-small cell lung cancer, metformin exposure is associated with improved survival. Methods: An institutional database was used to identify patients with stage I or II non-small cell lung cancer who underwent pulmonary resection between 2004 and 2013. Patients were divided into 3 cohorts: type II diabetic patients with metformin exposure (cohort A, n = 81), type II diabetic patients without metformin exposure (cohort B, n = 57), and nondiabetic individuals (cohort C, n = 77). Univariate, multivariate, and propensity-matched analyses were performed to assess progression-free and overall survivals between groups. Results: A total of 215 patients with stage I and II non-small cell lung cancer treated with surgical resection were identified for analysis with a median follow-up of 19.5 months. Patients in cohort A had lower T- and N-stage tumors than those in cohorts B or C. However, on multivariate analysis adjusting for age, gender, and T and N stage, progression-free survival was greater for cohort A than cohort B (hazard ratio [HR], 0.410; 95% confidence interval, 0.199-0.874; P = .022) or cohort C (HR, 0.415; 95% confidence interval, 0.201-0.887; P = .017). Likewise, when propensity-matched analyses were performed, cohort A demonstrated a trend toward improved progression-free survival compared with cohort B (P = .057; HR, 0.44; c-statistic = 0.832) and improved progression-free survival compared with cohort C (P = .02; HR, 0.41; c-statistic = 0.843). No differences were observed in overall survival. Conclusions: Metformin exposure in diabetic patients with early-stage non-small cell lung cancer may be associated with improved progression-free survival, but no effect was seen on overall survival. Further studies are warranted to evaluate if there is a therapeutic role for metformin in the treatment of non-small cell lung cancer. abstract_id: PUBMED:32395278 Development and validation of nomograms for predicting overall and cancer-specific survival in young patients with non-small cell lung cancer. Background: Young patients with non-small cell lung cancer (NSCLC) represent a distinct subgroup of patients with this disease. This study aimed to construct nomograms to predict the overall survival (OS) and cancer-specific survival (CSS) of young patients with NSCLC. Methods: NSCLC patients under 50 years old diagnosed between 2010 and 2016 were selected from the Surveillance, Epidemiology, and End Results (SEER) database and randomly divided into training (n=1,357) and validation (n=678) cohorts at a ratio of 2:1. Independent prognostic factors for OS or CSS were identified through the log-rank test, Cox proportional hazards models or competing risk model and further integrated to construct nomograms. The predictive capability of the nomogram was assessed by Harrell's concordance index (C-index), the calibration curve and risk group stratification. Results: A total of 2,035 patients were enrolled. In the training cohort, insurance, marital status, histological type, grade, T stage, N stage and surgery were identified as independent prognostic for OS and CSS. The C-index value were 0.759 [95% confidence interval (CI): 0.731-0.787] for OS and 0.810 (95% CI: 0.803-0.818) for BCSS in the training cohort and 0.751 (95% CI: 0.711-0.790) for OS and 0.807 (95% CI: 0.795-0.819) for CSS in the validation cohort. The calibration curves showed optimal agreement between the predicted and actual survival both in internal and external validation. In addition, patients in the validation cohort within different risk groups exhibited significantly different survival even in each TNM stage. Conclusions: Nomograms were developed and validated to predict OS and CSS of young patients with NSCLC in our study. A prospective study with more potential prognostic factors and the latest TNM classification is required to ameliorate this model. abstract_id: PUBMED:18670307 Wood-smoke exposure as a response and survival predictor in erlotinib-treated non-small cell lung cancer patients: an open label phase II study. Introduction: Erlotinib, a tyrosine kinase inhibitor, has improved survival and quality of life in patients with non-small cell lung cancer (NSCLC) after first- or second-line chemotherapy. Asian origin, adenocarcinoma histology, female gender, lack of tobacco use, and expression of epidermal growth factor receptor are significant independent predictors of response to Erlotinib. Although tobacco use is considered a major cause of NSCLC, other risk factors such as wood-smoke exposure (WSE) are associated. Almost 3 billion people worldwide rely on solid fuels as their primary source of domestic energy for cooking and heating. Methods: In this study, 150 consecutive unselected patients with histologically proven NSCLC with progression after prior first- or second-line chemotherapy and/or poor performance status were treated with Erlotinib 150 mg/d. Clinical and pathologic characteristics were associated with response. Results: Overall response to Erlotinib was observed in 51 patients [34%; 95% confidence interval {95% CI}, 29.9-37.6]. In multivariate analysis, clinical features associated with response to Erlotinib were adenocarcinoma (35 versus 20%; p = 0.05) and WSE (83 versus 13%; p &lt; 0.001). Factors associated with longer progression-free survival in Cox analysis included adenocarcinoma (7.9 versus 2.3 months; p = 0.009), female gender (8.4 versus 5.3 months; p = 0.04), and WSE (17.6 versus 5.3 months; p = 0.006). Conclusions: WSE is associated with better response to Erlotinib and improved progression-free survival in patients with NSCLC. Additional studies in epidermal growth factor receptor signaling pathway in WSE-associated NSCLC are warranted. abstract_id: PUBMED:31420240 Survival Disparities in Black Patients With EGFR-mutated Non-small-cell Lung Cancer. Background: Little is known about the difference between black and non-black patients with epidermal growth factor receptor (EGFR)-mutated non-small-cell lung cancer (NSCLC), particularly regarding survival. We thus characterized the EGFR expression profile, clinical characteristics, and survival outcome in these patients. Patient And Methods: We reviewed the cancer registry and patient charts at a New York-Bronx network (n = 2773) treating a large population of minority patients, for non-squamous NSCLC (n = 1986) diagnosed between 2009 and 2015. Survival was adjusted for smoking, gender, age, weight, and stage. Results: The EGFR mutation rate was 15% (98/652) in tested patients (black, 14%; non-black, 16%). There was no significant difference between the 2 cohorts with respect to age at diagnosis, gender, presenting stages, and socioeconomic status. On the other hand, weight was noted to be heavier in black patients with EGFR-mutated NSCLC than their non-black counterparts (P = .012). After adjusting for gender, age, smoking status, weight, and stage, the multivariate analysis revealed no racial disparity in survival among patients with wild-type EGFR (P = .774); However, among patients with EGFR-mutated NSCLC, black patients had shorter survival in comparison with non-black patients (P = .001), with 2-year survival rates being 33% versus 61%, respectively. Such shorter survival was also observed among EGFR-inhibitor treated patients with common EGFR mutations (P = .040). Conclusions: To our knowledge, this is the first report of inferior survival among black patients with NSCLC with EGFR mutations, relative to non-black patients. The survival disparities suggest the need of more tailored management for this patient population. abstract_id: PUBMED:22185954 Can imaging help improve the survival of cancer patients? Survival, quality-adjusted survival and mortality are important and related measures of outcome in cancer care. The impact of imaging on these outcomes can be ascertained from observational and modelling studies, frequently performed to evaluate cost-effectiveness. Examples where incorporation of imaging into cancer care can be shown to improve survival include breast cancer screening, characterization of solitary pulmonary nodules, staging of non-small cell lung cancer, treatment response assessment in Hodgkin lymphoma, postoperative surveillance of colorectal cancer and selective internal radiation therapy of colorectal liver metastases. Modelling suggests the greatest opportunities for improvements in survival through imaging detection of cancer may lie in the investigation of mildly symptomatic patients. For applications where the improvements in survival are more modest, use of imaging frequently has additional demonstrable benefits including reductions in health care expenditure. Answer: The study reported in abstract PUBMED:24377521 aimed to evaluate whether sunlight exposure impacts survival in patients with non-small cell lung cancer (NSCLC). The study analyzed 546 patients with NSCLC from two different regions in Turkey, Kayseri and Adana, which differ in sunlight exposure. The results showed that the median overall survival (OS) rates were 11.6 months for Kayseri and 15.6 months for Adana, but the difference was not statistically significant (p=0.880). The study concluded that there were no differences in OS between the groups and that while there is evidence of an inverse relationship between cancer incidence and sunlight exposure, it remains controversial whether sunlight exposure is a good prognostic factor for survival in patients with lung cancer. Therefore, based on this study, sunlight exposure does not appear to improve survival in patients with NSCLC.
Instruction: Socioeconomic factors and adolescent pregnancy outcomes: distinctions between neonatal and post-neonatal deaths? Abstracts: abstract_id: PUBMED:16042801 Socioeconomic factors and adolescent pregnancy outcomes: distinctions between neonatal and post-neonatal deaths? Background: Young maternal age has long been associated with higher infant mortality rates, but the role of socioeconomic factors in this association has been controversial. We sought to investigate the relationships between infant mortality (distinguishing neonatal from post-neonatal deaths), socioeconomic status and maternal age in a large, retrospective cohort study. Methods: We conducted a population-based cohort study using linked birth-death certificate data for Missouri residents during 1997-1999. Infant mortality rates for all singleton births to adolescent women (12-17 years, n = 10,131; 18-19 years, n = 18,954) were compared to those for older women (20-35 years, n = 28,899). Logistic regression was used to estimate adjusted odds ratios (OR) and 95% confidence intervals (CI) for all potential associations. Results: The risk of infant (OR 1.95, CI 1.54-2.48), neonatal (1.69, 1.24-2.31) and post-neonatal mortality (2.47, 1.70-3.59) were significantly higher for younger adolescent (12-17 years) than older (20-34 years) mothers. After adjusting for race, marital status, age-appropriate education level, parity, smoking status, prenatal care utilization, and poverty status (indicated by participation in WIC, food stamps or Medicaid), the risk of post-neonatal mortality (1.73, 1.14-2.64) but not neonatal mortality (1.43, 0.98-2.08) remained significant for younger adolescent mothers. There were no differences in neonatal or post-neonatal mortality risks for older adolescent (18-19 years) mothers. Conclusion: Socioeconomic factors may largely explain the increased neonatal mortality risk among younger adolescent mothers but not the increase in post-neonatal mortality risk. abstract_id: PUBMED:18066471 Risk factors for early neonatal mortality Objective: To assess risk factors for early neonatal mortality. Methods: A population-based case-control study was carried out with 146 early neonatal deaths and a sample of 313 controls obtained among survivals of the neonate period in the south region of the city of São Paulo, in the period of 8/1/2000 to 1/31/2001. Information was obtained through home interviews and hospital charts. Hierarchical assessment was performed in five groups with the following characteristics 1) socioeconomic conditions of mothers and families, 2) maternal psychosocial conditions, 3) obstetrical history and biological characteristics of mothers, 4) delivery conditions, 5) conditions of newborns Results: Risk factors for early neonate mortality were: Group 1: poor education of household head (OR=1.6; 95% CI: 1.1;2.6), household located in a slum area (OR=2.0; 95% CI: 1.2;3.5) with up to one room (OR=2.2; 95% CI: 1.1;4.2); Group 2: mothers in recent union (OR=2.0; 95% CI: 1.0;4.2), unmarried mothers (OR=1.8; 95% CI: 1.1;3.0), and presence of domestic violence (OR=2.7; 95% CI: 1;6.5); Group 3: presence of complications in pregnancy (OR=8.2; 95% CI: 5.0;13.5), previous low birth weight (OR=2.4; 95% CI: 1.2;4.5), absence of pre-natal care (OR=16.1; 95% CI: 4.7;55.4), and inadequate pre-natal care (block 3) (OR=2.1; 95% CI: 2.0;3.5); Group 4: presence of clinical problems during delivery (OR=2.9; 95% CI: 1.4;5.1), mothers who went to hospital in ambulances (OR=3.8; 95% CI: 1.4;10.7); Group 5: low birth weight (OR=17.3; 95% CI: 8.4;35.6) and preterm live births (OR=8.8; 95% CI: 4.3;17.8). Conclusions: Additionally to proximal factors (low birth weight, preterm gestations, labor complications and unfavorable clinical conditions in gestation), the variables expressing social exclusion and presence of psychosocial factors were also identified. This context may affect the development of gestation and hinder the access of women to health services. Adequate prenatal care could minimize the effect of these variables. abstract_id: PUBMED:20464075 Teenage pregnancy and other risk factors for fetal and infant mortality in the city of Rio de Janeiro, Brazil This article analyzes teenage pregnancy and other risk factors for fetal and infant mortality in the city of Rio de Janeiro, Brazil. The study included a sample of births in Rio de Janeiro, and exposure variables were organized hierarchically. For fetal death, the results showed the protective effect of adequate prenatal care, while maternal morbidity increased the risk. For neonatal death, adequate prenatal care and female gender in the newborn were protective factors, while black or brown maternal skin color, history of stillbirth, maternal morbidity, and physical aggression during the index pregnancy increased the risk. Low birth weight and prematurity were corroborated as determinants of fetal and neonatal death. Risk of post-neonatal death was highest with increased parity, intra-gestational morbidity, and low birth weight. Teenage pregnancy itself was an independent factor for post-neonatal death. The findings reaffirm the relevance of social and health policies targeting adolescents, as well as improvements in prenatal care. abstract_id: PUBMED:17111757 Determinants of neonatal mortality at Wesley Guild Hospital, Ilesa, Nigeria. Background: More than 10 million under five children die each year of preventable and easily treatable conditions in developing countries. Of these, eight million are infants, half of whom are newborns in their first month of life. A high proportion of babies die in their first month of life, many of them during their first week. The objective of this study is to assess sociodemographic and other determinants of neonatal mortality in Wesley Guild Hospital (WGH), Ilesa, Nigeria. Methods: This is a record review of 235 neonatal deaths reported at WGH from January 01 2001 to December 31 2003. Similarly, records of equal number of neonates (235) admitted to the same hospital during the same period but who were discharged alive was also reviewed for comparison. Four hundred and seventy records were reviewed. The two groups were matched for age, sex and within a 7-day period of admission. Information was collected with the aid of predesigned schedule from the patients' case notes, death registers and discharge summaries in the Records Department of the hospital. Information collected included the bio-data of the mothers, birth weight of neonates, estimated gestational age at delivery, age at death or discharge, date of admission, duration of the illness and date of discharge. Others included mode and place of delivery, maternal booking status and complications of pregnancy and birth. Data were analyzed using descriptive and inferential statistics by computer software, Epi-Info 2002. Results: Teenage pregnancy, low birth weights (LBW), prematurity and neonatal tetanus were positively associated with neonatal death. Unbooked mothers, deliveries at missions and homes and low socioeconomic status were also positively associated with neonatal death (P &lt; 0.05 in all cases). There was no statistically significant association between the sex of neonate, parity of mother and complications in pregnancy with neonatal death (P &gt; 0.05 in all cases). Conclusion: The major determinants of neonatal deaths were teenage pregnancy, prematurity, LBW, poverty and lack of skilled attendance at delivery. Addressing the basic determinants of neonatal mortality will improve newborn survival and health and this will significantly reduce mortality among under five children in developing countries. abstract_id: PUBMED:22487462 Risk of infant mortality with weekend versus weekday births: a population-based study. Aim: To examine the association between infant mortality, the day of birth and sociodemographic factors. Methods: This population-based retrospective study analyzed all singleton live births in the state of Missouri during the period 1989 to 1997. The main outcome of interest was infant survival after birth. Hazard ratios (HR) and 95% confidence intervals (CI) for the association between infant mortality and the day of birth were obtained through parametric time to event models. Results: The neonatal mortality rate was higher on weekends (3.25/1000) compared to weekdays (2.87/1000) (P = 0.042). Cesarean section delivery increased the risk of neonatal death by a magnitude of 31.5 compared to vaginal births (HR = 31.47, 95% CI: 15.79, 62.74). Adolescent females (age &lt;18) were more likely to experience neonatal (HR = 2.20, 95% CI: 1.47, 3.31), post-neonatal (HR = 2.20, 95% CI: 1.47, 3.30) and infant mortality (HR = 4.06, CI: 2.02, 8.14). Conclusions: Cesarean section delivery heightens the risk of all infant death, including neonatal and post-neonatal death, regardless of the day of birth, underscoring the need for multi-tiered strategies to reduce the occurrence of medically unnecessary cesarean sections. Furthermore, the elevated risk of infant mortality among adolescent mothers highlights the importance of enhanced preconception care and age-appropriate pregnancy prevention interventions. abstract_id: PUBMED:18538263 Increased risks of neonatal and postneonatal mortality associated with teenage pregnancy had different explanations. Objective: To determine the potential pathway of the association between teenage pregnancy and neonatal and postneonatal mortality. Study Design And Setting: We carried out a retrospective cohort study of 4,037,009 nulliparous pregnant women under 25 years old who had a live singleton birth during 1995 to 2000, based on linked birth and infant death data set of the United States. Results: Teenage pregnancy (10-19 years old) was associated with increased neonatal mortality (odds ratio [OR]: 1.20, 95% confidence interval [CI]=1.16-1.24) and postneonatal mortality (OR: 1.47, 95% CI=1.41-1.54) after adjustment for potential confounders. With further adjustment for weight gain during pregnancy, teenage pregnancy was still associated with increased risk of neonatal (OR: 1.23, 95% CI=1.19-1.28) and postneonatal mortality (OR: 1.48, 95% CI=1.42-1.55). When adjustment was made for gestational age at birth, there was no association of teenage pregnancy with neonatal mortality (OR: 0.98, 95% CI=0.95-1.02), whereas there was significant association with postneonatal mortality (OR: 1.40, 95% CI=1.34-1.46). Conclusion: The increased risk of neonatal death associated with teenage pregnancy is largely attributable to higher risk of preterm births, whereas increased postneonatal mortality is independent of the known confounders and gestational age at birth. abstract_id: PUBMED:14711857 Associations between perinatal interventions and hospital stillbirth rates and neonatal mortality. Background: Previous studies suggest that high risk and low birthweight babies have better outcomes if born in hospitals with level III neonatal intensive care units. Relations between obstetric care, particularly intrapartum interventions and perinatal outcomes, are less well understood, however. Objective: To investigate effects of obstetric, paediatric, and demographic factors on rates of hospital stillbirths and neonatal mortality. Methods: Cross sectional data on all 65 maternity units in all Thames Regions, 1994-1996, covering 540 834 live births and stillbirths. Hospital level analyses investigated associations between staffing rates (consultant/junior paediatricians, consultant/junior obstetricians, midwives), facilities (consultant obstetrician/anaesthetist sessions, delivery beds, special care baby unit, neonatal intensive care unit cots, etc), interventions (vaginal births, caesarean sections, forceps, epidurals, inductions, general anaesthetic), parental data (parity, maternal age, social class, deprivation, multiple births), and birthweight standardised stillbirth rates and neonatal mortality. Results: Unifactorial analyses showed consistent negative associations between measures of obstetric intervention and stillbirth rates. Some measures of staffing, facilities, and parental data also showed significant associations. Scores for interventional, organisational, and parental variables were derived for multifactorial analysis to overcome the statistical problems caused by high intercorrelations between variables. A higher intervention score and higher number of consultant obstetricians per 1000 births were both independently and significantly associated with lower stillbirth rates. Organisational and parental factors were not significant after adjustment. Only Townsend deprivation score was significantly associated with neonatal mortality (positive correlation). Conclusions: Birthweight adjusted stillbirth rates were significantly lower in units that took a more interventionalist approach and in those with higher levels of consultant obstetric staffing. There were no apparent associations between neonatal death rates and the hospital factors measured here. abstract_id: PUBMED:32300758 A comparison of neonatal outcomes between adolescent and adult mothers in developed countries: A systematic review and meta-analysis. Evidence suggests that adolescent pregnancies are at increased risk of adverse neonatal outcomes compared to adult pregnancies; however, there are significant inconsistencies in the literature, particularly in studies conducted in developed countries. The objective of this study therefore is to systematically review the current literature with regard to the relationship between adolescent pregnancy and neonatal outcomes. A literature search was conducted in eight electronic databases (AMED, ASSIA, Child Development and Adolescent Studies, CINAHL, Cochrane Library, Health Source: Nursing, Maternity and Infant Care, MEDLINE and Scopus. The reference lists of included studies were also hand searched. Studies were included if: they were conducted in countries with very high human development according to the United Nations Human Development Index; reported at least one comparison between adolescents (19 years or under) and adult mothers (20-34 years); and were published between January 1998 and March 2018. Studies were screened for inclusion and data extracted by one reviewer. A second reviewer independently reviewed a sub-set of studies. Disagreements were resolved by consensus. Meta-analysis was performed using RevMan 5.3 using crude counts reported in the included studies. Sub-group analyses of adolescents aged 17 and under and 18-19 were conducted. Pooled analysis of adjusted odds ratios was also undertaken in order to consider the effect of confounding factors. Meta-analysis effect estimates are reported as risk ratios (RR) and pooled association as adjusted odds ratios (aORs). Point estimates and 95% confidence intervals are presented. After removal of duplicates a total of 1791 articles were identified, of which 20 met the inclusion criteria. The results of the meta-analysis showed adolescents to have increased risk of all primary adverse outcomes investigated. Sub-group analysis suggests an increased risk of perinatal death and low birthweight for children born to adolescent mothers; 17 and under (perinatal death: RR 1.50, CI 1.32-1.71: low birthweight RR 1.43, CI 1.20-1.70); 18-19 (perinatal death RR 1.21, CI 1.06-1.37: low birthweight RR 1.10, CI 1.08-1.57). Mothers aged 17 and under were also at increased risk of preterm delivery (RR 1.64, CI 1.54-1.75). Analysis adjusted for confounders showed increased risk of preterm delivery (aOR 1.23, CI 1.09-1.38), very preterm delivery (aOR 1.22, CI 1.03-1.44) and neonatal death (aOR 1.31, CI 1.14-1.52). Findings show that young maternal age is a significant risk factor for adverse neonatal outcomes in developed countries. Adolescent maternal age therefore should be considered as a potential cause for concern in relation to neonatal health and it is recommended that health care professionals respond accordingly with increased support and monitoring. abstract_id: PUBMED:14649592 Weekend birth and higher neonatal mortality: a problem of patient acuity or quality of care? Objective: To address the differences in neonatal mortality among births to teenage mothers on weekdays and weekends. Design: A retrospective descriptive design. Rates of neonatal mortality linked to maternal risk factors, low birth weight, gestational age, day of the week of the birth, and ethnicity/race were examined. Patients/participants: The population consisted of all recorded births to teenage mothers (&lt; 20 years of age) in Texas in 1999 and 2000 (N = 111,749). These births were linked to death certificates for a subset of neonatal deaths within the same time period (n = 397). Main Outcome Measures: The outcome of interest was any death attributed to conditions originating in the perinatal period and recorded as such on the infant death certificate. Results: Neonatal mortality was higher among the births on weekends than those during the week. Maternal risks and patient acuity levels of mothers and babies were not consistently higher on weekends. However, when risk factors were present, weekend births were more dangerous for Hispanics than for other ethnic or racial groups. Conclusions: Differences in patient acuity did not satisfactorily explain higher neonatal mortality rates on weekends. Thus, quality of care indicators such as lower hospital staffing and reduced availability of services on weekends may be critical sources of unnecessary neonatal deaths. abstract_id: PUBMED:11173020 Pregnancy outcomes in young Turkish women. Study Objective: We documented adolescent pregnancies that were thought to be at high risk for increased obstetric complications. Design, Setting, Participants: This study covered 442 pregnant women who were under 19 years of age and who delivered in Cukurova University, School of Medicine, Department of Obstetrics and Gynecology between January 1, 1993 and December 31, 1997, retrospectively. Results: The patients' mean age was 18.24 years and their mean gestational age was 38.2 weeks. The newborns' mean birthweight was 3093.05 g and their mean birth height was 45.75 cm. Apgar score in the 1st minute was 6.79 and at the 5th minute 8.37. Cesarean section rate was 28.5%. The most common causes were pregnancy-induced hypertension (PIH) and breech delivery. There were 32 stillborns and 5 early neonatal deaths. The most frequent obstetric complications were PIH (14.5%), preterm delivery (7.0%), and low birthweight (&lt; 2000 g) (10.2%). The pregnant adolescents with obstetric problems (44.4%) had poor obstetric results. Conclusions: Adolescent pregnancies are considered high risk with many obstetric complications and poor obstetric results. To decrease the complications adolescent pregnancies must be followed-up as high risk pregnancies, especially in developing countries where socioeconomic factors are more pronounced. Answer: The relationship between socioeconomic factors and adolescent pregnancy outcomes, particularly distinguishing between neonatal and post-neonatal deaths, has been explored in various studies. One study found that socioeconomic factors may largely explain the increased neonatal mortality risk among younger adolescent mothers but not the increase in post-neonatal mortality risk. After adjusting for race, marital status, age-appropriate education level, parity, smoking status, prenatal care utilization, and poverty status, the risk of post-neonatal mortality remained significant for younger adolescent mothers, while there were no differences in neonatal or post-neonatal mortality risks for older adolescent mothers (18-19 years) (PUBMED:16042801). Another study identified that, in addition to proximal factors like low birth weight and preterm gestations, variables expressing social exclusion and presence of psychosocial factors were also risk factors for early neonate mortality. Adequate prenatal care could minimize the effect of these variables (PUBMED:18066471). A study in Rio de Janeiro, Brazil, found that teenage pregnancy itself was an independent factor for post-neonatal death, reaffirming the relevance of social and health policies targeting adolescents, as well as improvements in prenatal care (PUBMED:20464075). In Nigeria, teenage pregnancy, low birth weights, prematurity, and lack of skilled attendance at delivery were major determinants of neonatal deaths, with poverty also being positively associated with neonatal death (PUBMED:17111757). A study examining the association between infant mortality, the day of birth, and sociodemographic factors found that adolescent females were more likely to experience neonatal and post-neonatal mortality (PUBMED:22487462). Research in the United States showed that the increased risk of neonatal death associated with teenage pregnancy is largely attributable to higher risk of preterm births, whereas increased post-neonatal mortality is independent of the known confounders and gestational age at birth (PUBMED:18538263). In summary, socioeconomic factors, including poverty, lack of education, and inadequate prenatal care, are associated with increased risks of neonatal and post-neonatal mortality among adolescent mothers. While some of the increased risk for neonatal mortality can be explained by factors such as low birth weight and prematurity, post-neonatal mortality appears to be influenced by broader socioeconomic and psychosocial factors that persist beyond the neonatal period.
Instruction: Tonsillectomy as a day-case surgery: a safe procedure? Abstracts: abstract_id: PUBMED:17067079 Tonsillectomy as a day-case surgery: a safe procedure? Unlabelled: PROBLEM/OBJECTIVES: To assess the safety of tonsillectomy as a one-day procedure, we retrospectively evaluated complications of tonsillectomy within two weeks after surgery. Methodology: The study included 1977 patients; 85% of the tonsillectomies were performed as a one-day procedure. Tonsillectomies were performed in Ziekenhuis Oost Limburg by one staff ENT and seven ENT residents between January 1, 1996 and January 1, 2005. We also reviewed the number of tonsillectomies performed as day surgery and their complication rate for Belgium, based on the National Office for Health and Disability Insurance billing, comparing our series with the number of tonsillectomies as day-case surgeries and their complication rate for Belgium. Results: In our study, 26 patients (1.3%) consulted within two weeks of surgery for major or minor complications secondary to tonsillectomy. The only major complication was primary bleeding (bleeding within 24 hours after tonsillectomy), which occurred in five patients, all of whom received a revision for haemostasis. No patient required transfusion. Minor complications (21 patients) were secondary bleeding, fever, and anorexia. During the study period, there were 214,524 tonsillectomies performed in Belgium. Based on invoice information, 4676 cases of postoperative bleeding occurred, requiring a revision for haemostasis. In our centre and generally for Belgium, there was an increase in tonsillectomies as an outpatient procedure without an increase in incidence of subsequent bleeding. Day-case surgery tonsillectomy incidence had risen from 36% in 1996 to 64% in 2004 in Belgium. Conclusion: Considering the low bleeding and complication rates, tonsillectomy can safely be performed as a day-case surgery. abstract_id: PUBMED:23056702 Prevalence of Post-tonsillectomy Bleeding as Day-case Surgery with Combination Method; Cold Dissection Tonsillectomy and Bipolar Diathermy Hemostasis. Objective: Post-tonsillectomy hemorrhage remains an important factor in determining the safety of performing tonsillectomy as a day case procedure. The aim of this study was to determine the safety of day case tonsillectomy by using combination method, cold dissection tonsillectomy and bipolar diathermy hemostasis. Methods: A prospective randomized clinical study conducted on the patients who had undergone day case tonsillectomy (DCT). There were two groups (DCT and control group) each group consisting of 150 cases. Tonsillectomy was performed by using combination method; cold dissection and hemostasis was achieved by ligation of vessels with bipolar electerocautery. Findings: We found 3 cases of post-tonsillectomy bleeding in DCT group and 4 cases in the control group. There was no statistically significant difference in the rate of post-operative hemorrhage between the two groups. Conclusion: The findings suggest the safety of the combination of cold dissection tonsillectomy and bipolar diathermy hemostasis as day case tonsillectomy. abstract_id: PUBMED:23120704 Safety of day-stay tonsillectomy. Summary: The appropriateness of tonsillectomy as a day case procedure is not universally accepted, however, the procedure has been performed in our institution since 1996. Our policy has been to continue this practice. To assess the appropriateness of this practice in our institution, a retrospective study of 802 tonsillectomies performed over 16 months was conducted. A telephone based questionnaire was formed to assess safety of the procedure and to evaluate the experiences and attitude of the served community towards our practice. Four hundred and thirty-three (61.3%) patients/parents were contactable to answer the questionnaire. Fifty-eight percent felt that a same day discharge was favorable with 6% considering that an overnight stay would have been preferable, others had no preference. Regarding the problems at home, 76% reported no problems on the first night, with 3% needing medical assistance that night, half of which were admitted in the hospital. We conclude that with the practices in place in this center, our policy of aiming for same day discharge is appropriate, being safe and agreeable to both patients and health care providers. abstract_id: PUBMED:33845449 Economic Viability and Efficacy of Day Case Versus Inpatient Tonsillectomy in Southern Iran. Objectives: Day case tonsillectomy compared with inpatient tonsillectomy has increasingly become a trend for many patients undergoing elective surgeries. Unjustified stays of tonsillectomy can be avoided by day case surgery, which consequently reduces treatment costs. The aim of this study was therefore to determine the cost and effectiveness of day case tonsillectomy compared with inpatient tonsillectomy. Methods: This cost-effectiveness study was performed on 300 patients from May 2017 to April 2018. Patients were randomly divided into 2 groups: day case (n = 150) and inpatient (n = 150). Consequences used in model included incidence of bleeding, blood transfusion, and reoperation frequency within 2 weeks after surgery and also the patients' pain during 24 hours after surgery. Costs were collected from societal perspective, so the costs included direct medical and nonmedical costs as well as indirect costs. One-way sensitivity analysis was conducted to measure the uncertainty effects of the parameters. The collected data were analyzed using software TreeAge and Excel 2016. Results: Day case tonsillectomy was more cost-effective than inpatient. Mean total costs in day case and inpatient were $915.1 and $1227.9, respectively. Besides, the mean effectiveness was 0.921 and 0.914 percent, respectively. Also, 1-way sensitivity analysis proved the robustness of the results of study. Conclusions: Day case tonsillectomy is a cost-effective strategy and can be suggested as a good alternative for a wide range of patients after tonsillectomy. Supporting day case surgery for tonsillectomy cases can significantly reduce the financial burden. abstract_id: PUBMED:34396825 Oxford guidelines for adult day-case tonsillectomy. Oxford guidelines were developed after critically reviewing the existing literature and aim to assist anaesthetists, surgeons and allied healthcare staff in providing optimal care for patients undergoing tonsillectomy as a day-case procedure. Appropriate patient selection, provision of robust analgesia, antiemesis, perioperative warming and hydration are key factors to ensure patient comfort and allow same-day discharge. Patients can be discharged home after a minimum observation of 6h as this is the period with the greatest risk of primary haemorrhage. All patients must have a clear and safe understanding of which complications may occur and know how to seek help. A team effort and close collaboration between the anaesthetic, surgical, theatre and ward teams are essential to achieve optimum outcomes and reduce the rate of failed discharges. abstract_id: PUBMED:15488965 Day stay pediatric tonsillectomy--a safe procedure. Objective: There is a controversy about day stay pediatric tonsillectomy in the UK and Australia. New Zealand has a similar health structure and we wished to compare day stay tonsillectomy from our hospital with those reported from other centers. Methods: We performed a prospective audit of day stay tonsillectomy to determine conversion to hospital admission rate and the incidence of postoperative hemorrhage. Results: There were 4850 paediatric tonsillectomies performed with 80% of them as day stay procedures over a 9-year study period (1993-2002) in a university-affiliated tertiary children's hospital. The primary postoperative hemorrhage rate (within 24h of surgery) was 0.9% (CI 0.68-1.22%) and 83% occurred within the mandatory 4h postoperative observation period. Primary hemorrhage requiring re-operation to achieve hemostasis occurred in 18 children (0.37%, CI 0.2-0.54%). No child with a primary hemorrhage who presented after discharge following day stay surgery required re-operation or blood transfusion. Day stay surgery was planned in 4041 children and 4.7% (CI 4.1-5.4%) required conversion to hospital admission. Postoperative vomiting was the most common indication for conversion (2.65%, CI 2.2-3.1%), while hemorrhage contributed only 0.95% (CI 0.64-1.24%). Conclusions: This study was performed in a health system similar to that of Australia and the UK. Complication and conversion rates are similar to those reported in North American centers. Pediatric day stay tonsillectomy is a safe procedure when performed with strict preoperative criteria, trained day stay unit (DSU) staff, and with in-patient facilities on site. abstract_id: PUBMED:7574246 Tonsillectomy in a day care hospital. A socioeconomic study at the Robert Debré Hospital in Paris Tonsillectomy as an outpatient procedure in children is a common practice in the United States and many other countries. In French public hospitals, it remains quite rare. A 6-month prospective study was conducted in order to check the procedure's feasibility in a large Paris university children's hospital, to describe the involved population and to search for the possible benefits which could be made by a public hospital from an increase in day surgery practice for tonsillectomy. In terms of quality of postoperative outcome, this study confirms the absence of major complications related to this procedure if the proper criteria are met for day surgery. In fact, medical and social findings from the involved patients tend to limit here a large extension of this procedure. For example, in the inpatients group, more than 43% of children had a tonsillar hypertrophy with significant airway obstruction and 47% did not have lodging conditions which would allow day surgery with the required safety. Only 20% of patients could be transferred to ambulatory surgery which leads to 2-3 additional cases each week here. Finally, the cost analysis shows no benefit in human resources at this hospital because the requested differential activity threshold would not be reached. abstract_id: PUBMED:10912310 Is paediatric day case tonsillectomy desirable? The parents' perspective. Tonsillectomy is the most commonly performed otolaryngological procedure in most units. There is considerable financial pressure to perform paediatric tonsillectomy as a day case. Responding to concern that day case tonsillectomy might not be welcomed by parents or felt to be in the best interests of the patients, we carried out an audit of paediatric tonsillectomy, by means of a questionnaire for parents and nursing staff to complete while 32 consecutive patients underwent tonsillectomy as inpatients. Although eight parents (25%) felt that day surgery should be an option, none of those surveyed felt that on this occasion their child was well enough to be discharged home six hours postoperatively. Furthermore, no child in the audit fulfilled all the essential discharge criteria at six hours. In this area, at least, it appears parents do not uniformly welcome paediatric day case tonsillectomy. abstract_id: PUBMED:10354748 Tonsillectomy--day surgery or hospitalization? Most current studies show that adenotonsillectomies can be performed safely as day case surgery. We wanted to evaluate patient acceptability of ambulatory surgery. A prospective study was undertaken in order to compare (adeno)tonsillectomies performed on an inpatient and outpatient basis. 104 patients underwent day surgery (adeno)tonsillectomy. 88 patients were treated as inpatients, but would have met the selection criteria for ambulatory surgery if an exception had been made in relation to travel distance. Tonsillectomy is an operation with high postoperative morbidity. Our study shows that patients undergoing day case surgery had better sleep, better oral intake and had less pain during the first postoperative night. The majority of the inpatients considered tonsillectomy performed as day case surgery less tolerable. With adequate information and patient selection, our study showed that most patients prefer day surgery. Questioned two weeks postoperatively, 92% of the patients preferred tonsillectomy performed as a day case procedure. Use of day surgery may improve health care efficiency. At the same time, it may improve quality of care for the patients. abstract_id: PUBMED:8908961 Day case tonsillectomy in adults: how many are suitable? Day case surgery is increasing to improve health care cost efficiency. The economic benefits of day surgery depend on how many patients are suitable. Adult tonsillectomy is a common operation. This study aims to define what proportion of adults undergoing tonsillectomy are suitable for day case surgery. One hundred consecutive adults undergoing elective in-patient tonsillectomy were studied prospectively. Patients were questioned, examined and investigated for any general medical or social contra-indications to day surgery as specified by the Royal College of Surgeons of England. They were also questioned about specific contra-indications to day case tonsillectomy, including NSAID contra-indications and tonsillectomy indicated for biopsy, quinsy or with another procedure. Seventy-six per cent patients satisfied the requirements. The most common reasons for failure were social. A high proportion of adults are suitable for day case tonsillectomy and therefore significant economic benefits are possible. However, prior to such surgery further work on operation suitability is required. Answer: Tonsillectomy as a day case surgery has been evaluated for safety in various studies, and the evidence suggests that it can be a safe procedure when appropriate measures are taken. A retrospective evaluation of complications within two weeks after surgery in 1977 patients showed that only 1.3% consulted for major or minor complications, with the only major complication being primary bleeding, which occurred in five patients. No patient required transfusion, and there was no increase in the incidence of subsequent bleeding despite an increase in tonsillectomies as an outpatient procedure in Belgium (PUBMED:17067079). Another study using a combination method of cold dissection tonsillectomy and bipolar diathermy hemostasis found no statistically significant difference in the rate of post-operative hemorrhage between day case and control groups, suggesting the safety of this combination method for day case tonsillectomy (PUBMED:23056702). A retrospective study of 802 tonsillectomies and a telephone-based questionnaire indicated that same day discharge was safe and agreeable to both patients and healthcare providers, with 76% reporting no problems on the first night and only 3% needing medical assistance (PUBMED:23120704). An economic viability study in Southern Iran found day case tonsillectomy to be more cost-effective than inpatient tonsillectomy, with lower total costs and similar effectiveness, suggesting that it can reduce the financial burden on healthcare systems (PUBMED:33845449). The Oxford guidelines for adult day-case tonsillectomy emphasize appropriate patient selection, robust analgesia, antiemesis, perioperative warming, and hydration as key factors to ensure patient comfort and allow same-day discharge, further supporting the safety of the procedure (PUBMED:34396825). A prospective audit of day stay pediatric tonsillectomy in New Zealand showed that it is a safe procedure when performed with strict preoperative criteria, trained day stay unit staff, and in-patient facilities on site (PUBMED:15488965). In conclusion, the evidence from these studies supports the notion that tonsillectomy can be safely performed as a day case surgery, provided that there is careful patient selection, proper perioperative management, and appropriate postoperative care.
Instruction: Are neurology residents in the United States being taught defensive medicine? Abstracts: abstract_id: PUBMED:16040189 Are neurology residents in the United States being taught defensive medicine? Objective: To study whether and how fear of litigation and defensive medicine are communicated during residency training and to assess whether this affects residents' attitudes. Methods: Neurology residents in the US (n=25) and, as a control group, Neurology residents training in Germany (n=42) were asked to rate multiple items regarding litigation, defensive strategies and how often these issues are raised by teaching physicians. Statistic analysis was performed using nonparametric tests. Results: Residents in both countries indicated that litigation is an "important problem", although US residents stated this significantly more often (p&lt;0.001). Initiation of tests motivated mainly by fear of litigation (p=0.004) and explicit teaching of defensive strategies by teaching physicians (p&lt;0.02) were reported more often by US residents. Conclusion: Neurology residents in both the US and Germany perceive a litigational threat, but significantly less so in Germany. This difference may result at least in part from teaching of defensive strategies reported more often in US programs. abstract_id: PUBMED:27476695 International Defensive Medicine in Neurosurgery: Comparison of Canada, South Africa, and the United States. Objective: Perception of medicolegal risk has been shown to influence defensive medicine behaviors. Canada, South Africa, and the United States have 3 vastly different health care and medicolegal systems. There has been no previous study comparing defensive medicine practices internationally. Methods: An online survey was sent to 3672 neurosurgeons across Canada, South Africa, and the United States. The survey included questions on the following domains: surgeon demographics, patient characteristics, physician practice type, surgeon liability profile, defensive behavior-including questions on the frequency of ordering additional imaging, laboratory tests, and consults-and perception of the liability environment. Responses were analyzed, and multivariate logistic regression was used to examine the correlation of medicolegal risk environment and defensive behavior. Results: The response rate was 30.3% in the United States (n = 1014), 36.5% in Canada (n = 62), and 41.8% in South Africa (n = 66). Canadian neurosurgeons reported an average annual malpractice premium of $19,110 (standard deviation [SD] = $11,516), compared with $16,262 (SD = $7078) for South African respondents, $75,857 (SD = $50,775) for neurosurgeons from low-risk U.S. states, and $128,181 (SD = $79,355) for those from high-risk U.S. states. Neurosurgeons from South Africa were 2.8 times more likely to engage in defensive behaviors compared with Canadian neurosurgeons, while neurosurgeons from low-risk U.S. states were 2.6 times more likely. Neurosurgeons from high-risk U.S. states were 4.5 times more likely to practice defensively compared with Canadian neurosurgeons. Conclusions: Neurosurgeons from the United States and South Africa are more likely to practice defensively than neurosurgeons from Canada. Perception of medicolegal risk is correlated with reported neurosurgical defensive medicine within these countries. abstract_id: PUBMED:32181105 Internal Medicine Residents and the Practice of Defensive Medicine: A Pilot Study Across Three Internal Medicine Residency Programs. Background Defensive medicine is becoming increasingly prevalent in the United States and is estimated to cost billions of dollars in excess healthcare spending. There is evidence that the practice of defensive medicine starts early in the medical career. Defensive medicine has been investigated among residents in high medico-legal risk specialties, but there is a paucity of information on its prevalence among internal medicine residents. Objective To examine the prevalence and patterns of defensive medical practices among internal medicine residents. Methods We conducted an online survey among the residents of three internal medicine residency programs in the 2018-2019 academic cycle. We invited all internal medicine residents within the selected programs to participate through email and asked them to complete an electronic survey assessing defensive medical practices. Results A total of 49 out of 143 residents participated in the study (response rate: 34.3%); 55% (n = 27) of the residents who participated considered the risk of being sued during residency to be low, compared to 40.8% (n = 20) who considered it to be moderate and 4.1% (n = 2) who considered it to be high. Defensive medical practices were found to be widely prevalent (40.0-91.3%) among internal medicine residents across all three clinical training stages. Assurance defensive practices were more common than avoidance practices. Conclusion Defensive medical practices, especially of the assurance type, were widely prevalent among our sample of internal medicine residents. abstract_id: PUBMED:12784685 The neurology training in the United States and Japan The practice of neurology, like many other fields of medicine, currently faces a number of critical problems in the United States mostly by cumbersome restrictions designed to contain rapidly escalating medical costs. Fortunately, however, the residency training has maintained the spirit of the time-tested tradition, which has served the medical community well ever since its inception in 1910. The system has continued with little modifications after a switch of internship to Basic Residency Program (PG1) in the late 1960s. In comparison, the neurology training in Japan suffers from following deficiencies: 1) a paucity of bedside teaching in most medical schools, necessitating the initiation of unprepared residents into patient care; 2) an insufficient number of staff, especially at a governmental school, precluding an adequate coverage of neurological subspecialities such as electrophysiology and neuropathology; 3) absence of a tutorialship from senior to junior residents, and from interns to medical students, mostly for the lack of organization and logistics; and 4) no incentive to specialize without board certification by the governmental agencies or proper recognition by insurance providers. We must address these fundamental issues to promote neurology as an independent discipline for improved care of patients with neurological disorders. abstract_id: PUBMED:29290889 Perceptions of emergency medicine residents on the quality of residency training in the United States and Saudi Arabia. Background: We compare educational environments (i.e. physical, emotional and intellectual experiences) of emergency medicine (EM) residents training in the United States of America (USA) and Saudi Arabia (SA). Methods: A cross-sectional survey study was conducted using an adapted version of the validated Postgraduate Hospital Educational Environment Measure (PHEEM) survey instrument from April 2015 through June 2016 to compare educational environments in all emergency medicine residency programs in SA and three selected programs in the USA with a history of training Saudi physicians. Overall scores were compared across programs, and for subscales (autonomy, teaching, and social Support), using chi-squared, t-tests, and analysis of variance. Results: A total of 219 surveys were returned for 260 residents across six programs (3 SA, 3 USA), with a response rate of 84%. Program-specific response rates varied from 79%-100%. All six residencies were qualitatively rated as "more positive than negative but room for improvement". Quantitative PHEEM scores for the USA programs were significantly higher: 118.7 compared to 109.9 for SA, P=0.001. In subscales, perceptions of social support were not different between the two countries (P=0.243); however, role autonomy (P&lt;0.001) and teaching (P=0.005) were better in USA programs. There were no significant differences by post-graduate training year. Conclusion: EM residents in all three emergency medicine residency programs in SA and the three USA programs studied perceive their training as high quality in general, but with room for improvements. USA residency programs scored higher in overall quality. This was driven by more favorable perceptions of role autonomy and teaching. Understanding how residents perceive their programs may help drive targeted quality improvement efforts. abstract_id: PUBMED:33966974 A National Survey of Wilderness Medicine Curricula in United States Emergency Medicine Residencies. Introduction: Wilderness medicine (WM) is a growing subspecialty of emergency medicine. In 2018, we surveyed all 240 emergency medicine residencies in the United States to assess the scope of WM education in emergency medicine training programs in light of the nearly 30% increase in the number of residencies since 2015. Methods: A survey was e-mailed to the Council of Residency Directors in Emergency Medicine listserv and individual program directors of each of the 240 residencies. The survey included questions on educational content, format, number of hours taught, availability of conference credit, offering of an elective or fellowship, and several predefined WM curricula. We evaluated differences between 3-y and 4-y residencies using the χ2 test, where P&lt;0.05 was considered significant. Results: We had a response rate of 57% for completed surveys. Analysis showed 63% of respondent programs teach WM material. The majority (86%) partially or completely developed their curriculum, with 33% offering at least 1 of the predefined curricula. Thirteen percent taught with lecture only, 2% taught by hands-on only, and 85% used a combination of the 2. WM electives were significantly more likely to be offered by 4-y than 3-y residencies (P=0.009). Conclusions: Almost two-thirds of respondent residency programs teach WM material. Of these, only one-third teach any of the predefined curricula. Four-year residencies are more likely to offer WM electives but are otherwise comparable to 3-y programs. abstract_id: PUBMED:24711537 International issues: Obtaining an adult neurology residency position in the United States: an overview. Around the world, there are marked differences in neurology training, including training duration and degree of specialization. In the United States, adult neurology residency is composed of 1 year of internal medicine training (preliminary year) and 3 years of neurology-specific training. Child neurology, which is not the focus of this article, is 2 years of pediatrics and 3 years of neurology training. The route to adult neurology residency training in the United States is standardized and is similar to most other US specialties. Whereas US medical graduates often receive stepwise guidance from their medical school regarding application for residency training, international graduates often enter this complex process with little or no such assistance. Despite this discrepancy, about 10%-15% of residency positions in the United States are filled by international medical graduates.(1,2) In adult neurology specifically, 35% of matched positions were filled by international graduates in 2013, 75% of whom were not US citizens.(1) In an effort to provide a preliminary understanding of the application process and related terminology (table 1) and thereby encourage international residency applicants, we describe the steps necessary to apply for neurology residency in the United States. abstract_id: PUBMED:33676778 Physicians' views and experiences of defensive medicine: An international review of empirical research. This study systematically maps empirical research on physicians' views and experiences of hedging-type defensive medicine, which involves providing services (eg, tests, referrals) to reduce perceived legal risks. Such practices drive over-treatment and low value healthcare. Data sources were empirical, English-language publications in health, legal and multi-disciplinary databases. The extraction framework covered: where and when the research was conducted; what methods of data collection were used; who the study participants were; and what were the study aims, main findings in relation to hedging-type defensive practices, and proposed solutions. 79 papers met inclusion criteria. Defensive medicine has mainly been studied in the United States and European countries using quantitative surveys. Surgery and obstetrics have been key fields of investigation. Hedging-type practices were commonly reported, including: ordering unnecessary tests, treatments and referrals; suggesting invasive procedures against professional judgment; ordering hospitalisation or delaying discharge; and excessive documentation in medical records. Defensive practice was often framed around the threat of negligence lawsuits, but studies recognised other legal risks, including patient complaints and regulatory investigations. Potential solutions to defensive medicine were identified at macro (law, policy), meso (organisation, profession) and micro (physician) levels. Areas for future research include qualitative studies to investigate the behavioural drivers of defensive medicine and intervention research to determine policies and practices that work to support clinicians in de-implementing defensive, low-value care. abstract_id: PUBMED:29068439 Perception of Resources Spent on Defensive Medicine and History of Being Sued Among Hospitalists: Results from a National Survey. The United States spends substantially more per capita for healthcare than any other nation. Defensive medicine is 1 source of such spending, but its extent is unclear. Using a national survey of approximately 1500 US hospitalists, we report the estimates the US hospitalists provided of the percent of resources spent on defensive medicine and correlates of their estimates. We also ascertained how many reported being sued. Sixty-eight percent of eligible recipients responded. Overall, respondents estimated that 37.5% of healthcare costs are due to defensive medicine. Just over 25% of our respondents, including 55% of those in practice for 20 years or more, reported being sued for medical malpractice. Veterans Affairs (VA) hospital affiliation, more years practicing as a physician, being male, and being a non-Hispanic white individual were all independently associated with decreased estimates of resources spent for defensive medicine. abstract_id: PUBMED:22482090 Incidence and costs of defensive medicine among orthopedic surgeons in the United States: a national survey study. Defensive medicine is defined as medical practices that may exonerate physicians from liability without significant benefit to patients. No study has evaluated the United States national incidence of defensive medicine in the field of orthopedic surgery. In the study reported here, we investigated the practice of defensive medicine and the resultant financial implications of such behavior by orthopedic surgeons in the US. A Web-based survey was sent to 2000 orthopedic surgeons in the US. Of the 1214 respondents, 1168 (96%) reported having practiced defensive medicine by ordering imaging, laboratory tests, specialist referrals, or hospital admissions mainly to avoid possible malpractice liability. On average, 24% of all ordered tests were for defensive reasons. Mean national Medicare payment information was used to calculate the cost of defensive medicine per respondent: approximately $100,000 per year. With there being 20,400 practicing orthopedic surgeons in the US, we estimated that the national cost of defensive medicine for the specialty of orthopedic surgery is $2 billion annually. Orthopedic surgeons' defensive medicine is a significant factor in health care costs and is of marginal benefit to patients. Policies aimed at managing liability risk may be useful in containing such practices. Answer: Yes, neurology residents in the United States are being taught defensive medicine. A study involving neurology residents in the US and Germany found that US residents reported more frequent initiation of tests motivated mainly by fear of litigation and explicit teaching of defensive strategies by teaching physicians (PUBMED:16040189). This suggests that the teaching of defensive medicine practices is more prevalent in US residency programs compared to those in Germany.
Instruction: Predictors of Gleason pattern 4/5 prostate cancer on prostatectomy specimens: can high grade tumor be predicted preoperatively? Abstracts: abstract_id: PUBMED:33933420 Pattern of Biopsy Gleason Grade Group 5 (4 + 5 vs 5 + 4 vs 5 + 5) Predicts Survival After Radical Prostatectomy or External Beam Radiation Therapy. Background: Previous cancer-specific mortality (CSM) analyses for different Gleason patterns in Gleason grade group (GGG) 5 cancer were limited by sample size. Objective: To test for differences in CSM according to biopsy GG 5 patterns (4 + 5 vs 5 + 4 vs 5 + 5) among patients undergoing radical prostatectomy (RP) or external beam radiation therapy (EBRT). Design, Setting, And Participants: Patients in the Surveillance, Epidemiology and End Results database treated with RP and EBRT (2004-2016) were identified and stratified according to Gleason 4 + 5 versus 5 + 4 versus 5 + 5. Intervention: RP or EBRT. Outcome Measurements And Statistical Analyses: Kaplan-Meier and multivariable Cox regression models predicting CSM were constructed. Results And Limitations: Of 17 263 eligible patients with GG 5 cancer at biopsy (RP: n = 7208; EBRT: n = 10 055), 12 705 had Gleason 4 + 5, 3302 had Gleason 5 + 4, and 1256 had Gleason 5 + 5 disease. Median age, prostate-specific antigen (PSA) at diagnosis, and advanced cT and cN stages significantly differed by Gleason pattern (Gleason 4 + 5 vs 5 + 4 vs 5 + 5; all p &lt; 0.001). The 10-yr CSM rate was 18.2% for Gleason 4 + 5, 28.0% for Gleason 5 + 4, and 39.1% for Gleason 5 + 5 (p &lt; 0.001). In multivariable analyses for the entire cohort adjusted for PSA, age at diagnosis, and cT and cN stage, Gleason 5 + 4 and Gleason 5 + 5 were associated with 1.6- and 2.2-fold higher CSM, respectively, relative to Gleason 4 + 5. In addition, Gleason 5 + 4 and Gleason 5 + 5 were associated with 1.6- and 2.5-fold, and 1.5- and 2.1-fold higher CSM rates in the RP and EBRT subgroups, respectively, relative to Gleason 4 + 5 (all p &lt; 0.001). Conclusions: For patients with biopsy GG 5 prostate cancer treated with RP or EBRT, there are important CSM differences by Gleason pattern (4 + 5 vs 5 + 4 vs 5 + 5). Ideally, the individual Gleason pattern should be considered in pretreatment risk stratification. Patient Summary: For patients with grade 5 prostate cancer, we found differences in cancer-specific death rates according to the pattern of abnormal cells in the prostate, called the Gleason score. The highest death rate was found for a Gleason pattern score of 5 + 5, followed by Gleason 5 + 4 and then Gleason 4 + 5. These differences were observed for both patients who were treated with prostate removal and patients who underwent radiotherapy. abstract_id: PUBMED:33580599 Prognostic significance of tertiary Gleason pattern in the contemporary era of Gleason grade grouping: A narrative review. Since the Gleason score was developed in 1966 as a histological classification for prostate cancer, it has been widely used in clinical practice and has evolved over time. The concept of a "tertiary Gleason pattern" (also known as a minor Gleason pattern) was first proposed in 2000, and has been used in clinical practice since the 2005 International Society of Urological Pathology conference. The prognostic significance of a tertiary Gleason pattern has been widely validated in various settings of prostate cancer, whereas its definition has yet to be fully established. Currently, a provisional definition of tertiary Gleason pattern is "&lt;5% Gleason pattern 4 or 5 in radical prostatectomy specimens." In contrast, "Gleason grade grouping" was proposed in 2013 and came into use in clinical practice in 2016 according to the 2014 International Society of Urological Pathology conference. Although the prognostic significance of Gleason grade grouping has already been widely confirmed, it does not incorporate the concept of tertiary Gleason pattern. Recently, the 2019 International Society of Urological Pathology conference discussed how to handle tertiary Gleason pattern in the current Gleason scoring system, but no consensus was reached on the issue. This review summarizes the evidence on the prognostic significance of tertiary Gleason pattern and discusses how to deal with it in the context of the contemporary Gleason grade grouping. It also refers to reporting of the percentage of Gleason patterns 4 and 5, as well as quantitative Gleason score models incorporating tertiary Gleason pattern. abstract_id: PUBMED:27993581 The effect of limited (tertiary) Gleason pattern 5 on the new prostate cancer grade groups. The risk of recurrence for prostatic adenocarcinoma after prostatectomy, as detected by prostate-specific antigen or other modalities, is based primarily on Gleason score along with pathologic tumor stage and surgical margin status. Recent large multi-institutional data spanning the last decade have supported modification of risk of recurrence stratification based on grade groups: grade group 1 (3+3=6), grade group 2 (3+4=7), grade group 3 (4+3=7), grade group 4 (4+4=8), and grade group 5 (Gleason scores 9 and 10). Using currently accepted grading definitions of grade patterns and grading rules, this study examines how the introduction of a limited, less than 5%, Gleason pattern 5 component at prostatectomy affects prognosis and fits into the grade group schema and reporting. The aggregate data from 2 independent major academic medical centers comprising 7606 patient records were analyzed with respect to biochemical recurrence-free survival. The presence of a limited (tertiary) Gleason pattern 5 component in the context of Gleason scores 3+4=7 (grade group 2) and 4+3=7 (grade group 3) imparts an intermediate prognosis relative to the next highest grade group. As such, we suggest that an additional comment and designation to the grade groups be provided reflecting the increased risk of recurrence in such cases (such as grade group 2+ or 3+). In contrast, the presence of limited (&lt;5%) Gleason pattern 5 in the context of Gleason score 4+4=8 imparts a poor prognosis equivalent to grade group 5 and therefore should be reported as grade group 5. abstract_id: PUBMED:32357849 Quantification of the individual risk of each Gleason pattern, including tertiary Gleason pattern 5, after radical prostatectomy: development of the modified Gleason grade grouping (mGGG) model. Background: While the new Gleason grade grouping (GGG), which started in 2016, has been widely validated in prostate cancer, it does not incorporate the concept of tertiary Gleason pattern 5. Furthermore, no study has "quantified" the individual risk of each Gleason pattern, including tertiary Gleason pattern 5, after radical prostatectomy. Methods: We reviewed 1022 men with adjuvant-treatment-naïve prostate cancer who underwent radical prostatectomy between 2005 and 2017. The primary endpoint was biochemical recurrence-free survival, defined as two consecutive prostate-specific antigen measurements ≥0.2 ng/ml after surgery. The individual quantitative risk score (IQRS) of each amount (primary/secondary/tertiary) of each Gleason pattern (3/4/5) was calculated using the Cox regression model. On the basis of the IQRS, the modified Gleason grade grouping (mGGG) model was developed. As a robustness analysis of the mGGG model, salvage treatment-free survival was also assessed. Results: During a median follow-up of 45 months, 229 of 1022 (22.4%) patients developed biochemical recurrence. The IQRS of each Gleason pattern was as follows: primary 5, 1.81 points (hazard ratio [HR] 6.13); secondary 5, 1.37 points (HR 3.92); tertiary 5, 0.87 points (HR 2.39); primary 4, 1.07 points (HR 2.91); secondary 4, 0.79 points (HR 2.21); and any Gleason pattern 3, 0 points (HR 1). Based on the IQRS, the mGGG model was developed, which classified patients into the following five groups: I (3 + 3 or less); II (3 + 4); III (4 + 3); IV (3 + 4 + t5, 4 + 3 + t5, 3 + 5, 5 + 3, and 4 + 4); V (4 + 4 + t5, 4 + 5, 5 + 4, and 5 + 5). The c-index for biochemical recurrence-free survival was significantly improved from 0.655 of the original GGG model to 0.672 of the mGGG model (P &lt; 0.05). In the robustness analysis, the c-index for salvage treatment-free survival was also significantly improved from 0.619 of the original GGG model to 0.638 of the mGGG model (P &lt; 0.05). Conclusions: The quantitative risk of tertiary (&lt; 5%) Gleason pattern 5 is slightly higher than that of secondary (5-50%) Gleason pattern 4. Our newly developed mGGG model more accurately predicts outcomes after radical prostatectomy than the original GGG model. abstract_id: PUBMED:11125378 Predictors of Gleason pattern 4/5 prostate cancer on prostatectomy specimens: can high grade tumor be predicted preoperatively? Purpose: Radical prostatectomy provides excellent cancer control in men with clinically localized prostate carcinoma. However, to our knowledge preoperative parameters for distinguishing indolent from clinically significant cancer are not well characterized. In fact, recent evidence suggests that the percent of Gleason pattern 4/5 carcinoma in the complete radical prostatectomy specimen is one of the strongest predictors of prostate cancer progression and a valid measure of cancer severity. However, it is unclear whether preoperative parameters, including biopsy Gleason pattern 4/5 carcinoma, may predict radical prostatectomy Gleason pattern 4/5 disease and, thereby, distinguish indolent from clinically significant cancer. Materials And Methods: We prospectively obtained 101 consecutive radical prostatectomy specimens and processed them in whole mount fashion. In addition to total tumor volume, we determined tumor volume for each Gleason pattern. Biopsy tumor area was measured in a similar fashion. Univariate and multivariate analyses were performed to identify preoperative clinical and pathology parameters for predicting Gleason pattern 4/5 carcinoma on prostatectomy specimens. Results: Biopsy Gleason score 7 or greater, Gleason pattern 4/5 carcinoma, perineural invasion and biopsy tumor area had statistically significant associations for identifying Gleason pattern 4/5 carcinoma on prostatectomy specimens. Logistic regression models for predicting any or greater than 10% Gleason pattern 4/5 carcinoma on prostatectomy specimens revealed that an area of pattern 4/5 disease of greater than 0.01 cm.2 on biopsy was the best single predictor with odds ratios of 15.0 (95% confidence interval 3.3 to 69.0, p = 0.0005) and 3.9 (95% confidence interval 1. 4 to 10.9, p = 0.009), respectively. For predicting any pattern 4/5 carcinoma on prostatectomy specimens a biopsy area of pattern 4/5 disease of greater than 0.01 cm.2 had only 38% sensitivity but 96% specificity. Similarly for predicting significant pattern 4/5 disease on prostatectomy specimens, defined as 10% or greater pattern 4/5, sensitivity and specificity for a biopsy area of greater than 0.01 cm.2 were 34% and 88%, respectively. Therefore, due to high false-negative rates these models had limited predictive value on an individual basis. Conclusions: Biopsy parameters such as Gleason pattern 4/5 carcinoma may provide adequate specificity for predicting clinically significant cancer, as defined by high grade Gleason patterns in the corresponding radical prostatectomy specimen. However, the accuracy of these parameters for predicting indolent cancer is limited by a prohibitive rate of false-negative findings. abstract_id: PUBMED:32119879 Gleason pattern 4 with cribriform morphology on biopsy is associated with adverse clinicopathological findings in a prospective radical prostatectomy cohort. The prognostic significance of the Gleason grading system has been well established. However, individual Gleason patterns comprise heterogeneous morphologies which might add additional prognostic information. Recent evidence suggests that Gleason pattern 4 with cribriform growth pattern is associated with an adverse prognosis. To determine the association between cribriform pattern on biopsies and pathological findings on subsequent prostatectomies, we evaluated the presence of cribriform architecture in a prospective cohort of 367 men from 2014 to 2018 treated at a single institution. Cribriform architecture was present in 63.5% of all biopsies and was correlated with the overall extent of Gleason pattern 4. In addition, cribriform morphology on biopsy showed a statistically significant association with higher Gleason grade and increased pathological stage and nodal metastasis. In a subset analysis of cases with Grade Group 2 (Gleason score 3 + 4, n = 208), these associations did not reach statistical significance, but the presence of cribriform growth in this subgroup showed a trend toward increased upgrading to Grade Group 5 (Gleason score 9/10) (1 [0.5%] vs. 5 [2.4%], P = 0.06). This large prospective study comparing biopsy and prostatectomy finding of cribriform architecture demonstrates that cribriform pattern 4 is associated with adverse prognostic features and highlights the relevance for recognizing specific morphologies with distinct biological and clinical features. abstract_id: PUBMED:35613972 Prognostic significance of percentage Gleason grade 5 prostatic adenocarcinoma in needle biopsies from patients treated by radical prostatectomy. Previous studies have shown that the percentage of high grade prostatic adenocarcinoma (Gleason patterns 4 and 5) in a biopsy correlates with outcome parameters. It has also been shown that the percentage Gleason pattern 4/5 tumour correlates with biochemical failure and overall survival. There are little data relating to the prognostic significance of quantifying the percentage of Gleason pattern 5 in isolation. We investigated the prognostic predictive value of quantifying the percentage of Gleason pattern 5 tumour in needle biopsies from a series of 196 cases of Gleason score 4+5=9 prostate adenocarcinoma from patients who had also undergone radical prostatectomy. Division of cases according to the percentage of Gleason pattern 5 present (based upon the core with the highest grade) and analysing these with tumour grouped as Gleason score 4+5 with &lt;5% pattern 5 (GS 4+5 &lt;5%), Gleason score 4+5 with 5-20% pattern 5 (GS 4+5 5-20%) and Gleason score 4+5 with 21-49% pattern 5 (GS 4+5 21-49%) showed no difference in outcome determined as time interval to prostate specific antigen biochemical failure. The results showed that each of the subgroups of GS 4+5 tumours had a significantly shorter biochemical recurrence-free survival than for a control group of 179 patients with Gleason score 4+3=7 (GS 4+3) cancer. Similar results were obtained when grading was based upon percentage of Gleason pattern 5 present in all the cores taken from the same patient (case-based grade). Adverse findings at radical prostatectomy showed each of the subgroups of GS 4+5 tumours to have a higher incidence of extraprostatic extension and seminal vesicle invasion than the GS 4+3 group of controls. Further, the differences in incidence between each of the subgroups were not significant for either extraprostatic extension or seminal vesicle invasion. These observations applied to both the highest core-based grade and the case-based grade. Our study has shown that any proportion of Gleason pattern 5 tumour in a needle biopsy is associated with a worse prognosis when compared to GS4+3 tumours and that these results are similar for grading that is core- or case-based. abstract_id: PUBMED:36925071 Convolutional Neural Network Quantification of Gleason Pattern 4 and Association With Biochemical Recurrence in Intermediate-Grade Prostate Tumors. Differential classification of prostate cancer grade group (GG) 2 and 3 tumors remains challenging, likely because of the subjective quantification of the percentage of Gleason pattern 4 (%GP4). Artificial intelligence assessment of %GP4 may improve its accuracy and reproducibility and provide information for prognosis prediction. To investigate this potential, a convolutional neural network (CNN) model was trained to objectively identify and quantify Gleason pattern (GP) 3 and 4 areas, estimate %GP4, and assess whether CNN-predicted %GP4 is associated with biochemical recurrence (BCR) risk in intermediate-risk GG 2 and 3 tumors. The study was conducted in a radical prostatectomy cohort (1999-2012) of African American men from the Henry Ford Health System (Detroit, Michigan). A CNN model that could discriminate 4 tissue types (stroma, benign glands, GP3 glands, and GP4 glands) was developed using histopathologic images containing GG 1 (n = 45) and 4 (n = 20) tumor foci. The CNN model was applied to GG 2 (n = 153) and 3 (n = 62) tumors for %GP4 estimation, and Cox proportional hazard modeling was used to assess the association of %GP4 and BCR, accounting for other clinicopathologic features including GG. The CNN model achieved an overall accuracy of 86% in distinguishing the 4 tissue types. Furthermore, CNN-predicted %GP4 was significantly higher in GG 3 than in GG 2 tumors (P = 7.2 × 10-11). %GP4 was associated with an increased risk of BCR (adjusted hazard ratio, 1.09 per 10% increase in %GP4; P = .010) in GG 2 and 3 tumors. Within GG 2 tumors specifically, %GP4 was more strongly associated with BCR (adjusted hazard ratio, 1.12; P = .006). Our findings demonstrate the feasibility of CNN-predicted %GP4 estimation, which is associated with BCR risk. This objective approach could be added to the standard pathologic assessment for patients with GG 2 and 3 tumors and act as a surrogate for specialist genitourinary pathologist evaluation when such consultation is not available. abstract_id: PUBMED:36897217 Significance of the Percentage of Gleason Pattern 4 at Prostate Biopsy in Predicting Adverse Pathology on Radical Prostatectomy: Application in Active Surveillance. Objectives: To determine the prognostic significance of the maximum allowable percentage of Gleason pattern 4 (GP4) at prostate biopsy compared with adverse pathology observed at radical prostatectomy (RP) to expand active surveillance eligibility among a cohort with intermediate risk of prostate cancer. Methods: A retrospective study of patients with grade group (GG) 1 or 2 prostate cancer on prostate biopsy with subsequent RP was performed at our institution. A Fisher exact test was used to understand the relationship among GP4 subgroups (0%, ≤5%, 6%-10%, and 11%-49%) assigned at biopsy and adverse pathologic findings at RP. Additional analyses comparing the GP4 ≤5% cohort's prebiopsy prostate-specific antigen (PSA) level and GP4 length with adverse pathology at RP were also performed. Results: No statistically significant difference in adverse pathology at RP was observed between the active surveillance-eligible control (GP4 0%) and the GP4 ≤5% subgroup. In total, 68.9% of the GP4 ≤5% cohort showed favorable pathologic outcomes. A separate analysis of the GP4 ≤5% subgroup revealed that neither prebiopsy serum PSA levels nor GP4 length showed statistical correlation with adverse pathology at RP. Conclusions: Active surveillance may be a reasonable option for management of patients in the GP4 ≤5% group until long-term follow-up data become available. abstract_id: PUBMED:30909685 Gleason Pattern 5 is a Possible Pathologic Predictor for Biochemical Recurrence after Laparoscopic Radical Prostatectomy Objective: Several prognostic factors for biochemical recurrence after radical prostatectomy have been reported, including initial prostate-specific antigen level, Gleason score, positive surgical margin, and seminal vesicle invasion. Here we investigate whether Gleason pattern 5 is a predictor for biochemical recurrence. Methods: This retrospective study included 168 patients who underwent laparoscopic radical prostatectomy from 2006 to 2015. The relationship between biochemical recurrence after laparoscopic radical prostatectomy and the presence of Gleason pattern 5, even as a tertiary pattern, was investigated. Biochemical recurrence was defined when the prostate-specific antigen level rose to &gt;0.2 ng/ml after having decreased to &lt;0.1 ng/ml following laparoscopic radical prostatectomy. Biochemical recurrence-free survival was estimated by the Kaplan-Meier method. Multivariate analysis was performed using a Cox proportional hazards regression model. Results: The median age was 66 years, median initial prostate-specific antigen level was 6.9 ng/ml, and median follow-up period was 47.3 months. Biochemical recurrence was recognized in 27 patients (16.1%) after laparoscopic radical prostatectomy, and 5-year biochemical recurrence-free survival was 78.6%. Gleason pattern 5 was noted in 5 patients as the primary pattern, in 10 as the secondary pattern, and in 5 as the tertiary pattern. According to multivariate analysis, presence of Gleason pattern 5 (HR = 4.75, p=0.001) and positive surgical margin (HR = 4.66, p=0.001) were independent predictive factors for biochemical recurrence-free survival. Conclusion: Gleason pattern 5 appears to be an important predictive factor for biochemical recurrence after laparoscopic radical prostatectomy. Answer: The ability to predict high-grade tumors preoperatively is crucial for determining the appropriate treatment strategy for prostate cancer patients. Several studies have investigated the predictors of Gleason pattern 4/5 prostate cancer on prostatectomy specimens and whether these can be identified preoperatively. One study found that preoperative parameters such as biopsy Gleason score 7 or greater, Gleason pattern 4/5 carcinoma, perineural invasion, and biopsy tumor area had statistically significant associations with identifying Gleason pattern 4/5 carcinoma on prostatectomy specimens. Specifically, an area of pattern 4/5 disease greater than 0.01 cm^2 on biopsy was the best single predictor, with high specificity but limited sensitivity due to high false-negative rates. This suggests that while biopsy parameters may provide adequate specificity for predicting clinically significant cancer, their accuracy for predicting indolent cancer is limited (PUBMED:11125378). Another study emphasized the prognostic significance of tertiary Gleason pattern 5, which is defined as less than 5% Gleason pattern 4 or 5 in radical prostatectomy specimens. The presence of a limited tertiary Gleason pattern 5 component in the context of Gleason scores 3+4=7 and 4+3=7 imparts an intermediate prognosis relative to the next highest grade group. This suggests that the presence of even a small amount of Gleason pattern 5 can affect prognosis and should be considered in risk stratification (PUBMED:27993581). Furthermore, the presence of Gleason pattern 4 with cribriform morphology on biopsy is associated with adverse clinicopathological findings in a prospective radical prostatectomy cohort. This indicates that specific morphologies within Gleason pattern 4, such as cribriform growth, are linked to worse prognostic features (PUBMED:32119879). In summary, high-grade tumors can be predicted preoperatively to some extent using biopsy parameters, including Gleason score, pattern 4/5 carcinoma, and perineural invasion. However, the predictive value is limited by false negatives, and additional factors such as tertiary Gleason pattern 5 and cribriform morphology may provide further prognostic information.
Instruction: Is penetrating injury on the increase in south-western Sydney? Abstracts: abstract_id: PUBMED:8712987 Is penetrating injury on the increase in south-western Sydney? Background: Few Australian studies describe the epidemiology of penetrating trauma. This study describes the incidence and demographic features of penetrating injuries with emphasis on trends and severity analysis. Methods: Case analysis was performed utilizing data from the Liverpool Hospital Trauma Registry (1989-94), NSW Department of Health Hospital Separations (1991-93), and the NSW Bureau of Crime Statistics (1991-93) with reference to the Liverpool and Fairfield Local Government Areas (LGA). Results: The Trauma Registry revealed 251 of penetrating trauma. The median age was 26 years (interquartile range 21-33). Ninety-one per cent of the victims were male. Fourteen per cent of patients had an Injury Severity Score (ISS) &gt; 15. Sixty-five per cent of cases were stab injuries and 20% gunshot injuries with the abdomen being the most commonly injured site. Twenty-one per cent of patients underwent laparotomy, 1.6% thoracotomy and 1.2% thoracotomy and laparatomy. There were 10 (4%) deaths. Trends in incidence of penetrating trauma and violent crime involving weapons were analysed. Static trends were observed for the annual incidence of penetrating trauma from the Liverpool Hospital Trauma Registry. Separations for penetrating trauma from Liverpool and Fairfield hospitals showed a slightly increasing trend. Violent crimes involving weapons in the Liverpool and Fairfield LGA showed a static trend. Nevertheless, separations for penetrating trauma and rates of violent crimes involving weapons were higher in south-western Sydney than metropolitan Sydney or NSW. Eight per cent of the LGA population are Vietnamese but this study identified 16% of victims as being Vietnamese. Conclusions: This study found no significant increase in penetrating trauma or violent crime predisposing to penetrating injury in south-western Sydney. abstract_id: PUBMED:16398844 Playground safety in South Western Sydney. Objective: Recognition of the cost of injuries sustained in playgrounds has led to the development of Australian/New Zealand Standards for playground safety. This study assesses compliance of public playgrounds in South Western Sydney with the current Standards. Methods: Playgrounds in South Western Sydney were randomly audited for compliance with Australian/New Zealand Standards for playground equipment pertaining to fall height, type of undersurface material, adequacy of maintenance of undersurface materials and adequacy of the safe fall zone. Comparisons were made with an audit of playgrounds in New South Wales undertaken in 1995 by Kidsafe. Results: Of the 257 pieces of equipment from 87 playgrounds audited, only three (3.4%) playgrounds and 65 (25.3%) pieces of equipment complied with all criteria assessed. Compared with the 1995 study, there has been a significant improvement in equipment complying with height standards (84.4% vs 72.2%, P &lt; 0.0001) and appropriate undersurface material (66.8% vs 45.4%, P &lt; 0.0001). Pinebark is now the most common appropriate undersurface material used but is poorly maintained, particularly in high-traffic areas. Of 131 pieces of equipment with an appropriate undersurface material, only 38 (29%) had an adequate safe fall zone. Conclusions: Improvements in playground safety have occurred; however, very few playgrounds meet the Australian/New Zealand Standards. Compliance with Standards pertaining to maintenance of undersurface materials and safe fall zone is poor. Mechanisms need to be developed to support councils in the upgrading and maintenance of existing playgrounds. abstract_id: PUBMED:15796728 Trauma bypass in south-eastern Sydney: an 8-year review. Objective: The New South Wales (NSW) Health Department and the Ambulance Service of NSW introduced a trauma bypass system in Sydney on 29 March 1992. This study aims to review the outcomes of trauma bypass patients brought to St George Hospital, a major trauma service in south-eastern Sydney, and to assess the performance of the current prehospital trauma triage protocol. Methods: The St George Hospital Department of Trauma Services prospectively collected data on all trauma bypass patients for the 8-year period from 29 March 1992 to 29 March 2000. Results: A total of 1990 patients were brought to hospital on trauma bypass. The average age was 32 years, 70% were men and 66% were from road traffic accidents. The positive predictive value of the prehospital triage tool for serious injury (Injury Severity Score [ISS] &gt; 15) was 18.6% (95% CI 16.9-20.4). This is well below the benchmark previously established by the NSW Health Department Trauma System Advisory Committee. For all trauma bypass patients, 33.8% (95% CI 31.7-35.9) were discharged home from the ED. The overall death rate was 2.5% (95% CI 1.9-3.3). Conclusions: According to the proposed benchmark, current prehospital trauma triage guidelines are underperforming. This suggests that a review of the benchmarks of current local trauma systems and of the trauma triage tool is required. abstract_id: PUBMED:36202188 Penetrating buttock trauma is morbid but rarely fatal - A South African experience. Background: Penetrating injuries to the buttock are relatively rare but are associated with significant morbidity. This study aimed to review our experience in managing penetrating trauma to the buttocks to contextualize the injury, document the most common associated injuries, and generate an algorithm to assist with the management of these patients. Methods: A retrospective study was conducted at a major trauma center in South Africa over 8 years (January 2012 to January 2020). All patients presenting with a penetrating buttock injury were included. Results: Our study included 40 patients. Gunshot wounds accounted for 93% (37/40), stab wounds accounted for 5% (2/40), and 1 case was gored by a cow. The majority (98%) underwent further investigation in the form of imaging or endoscopy. Forty percent (16/40) required surgical intervention. Of these 16 cases, 14 required a laparotomy, and 2 required gluteal exploration. Fifty-six percent (9/16) required a stoma. Five percent (2/40) experienced one or more complications, both of whom had stomas. The median length of stay for all patients was 3 days, whereas for the patients with stomas was 7 days. There were no ICU admissions or mortality in this study. Only 3 of the 9 stomas were reversed, and the median time to reversal was 16 months. Conclusion: Penetrating trauma to the buttock may result in injuries to surrounding vital structures, which must be actively excluded. Rectal injury was the most common injury, and most required a defunctioning colostomy as part of the management resulting in significant morbidity. abstract_id: PUBMED:35065788 Laparotomy for penetrating gastric trauma - A South African experience. Introduction: Penetrating gastric injury (PGI) is common and although primary repair is sufficient for most injuries, several areas surrounding the peri-operative management remain contentious. This study reviews our experience in the management of PGI and review the clinical outcome at a major trauma centre in South Africa. Materials And Methods: A retrospective study was conducted from January 2012 to April 2020 at a major trauma centre in South Africa. Results: 210 cases were included (184 male, median age: 30 years). Mechanism of injury was 59% stab wounds (SWs) and 41% gunshot wounds (GSWs). The AAST grade was predominantly (92%) grade 2 for all cases. 20% (41/210) were isolated gastric injuries. All cases underwent primary repair and there were no cases of suture line failure. Eleven cases (5%) had one or more injuries not identified at the index laparotomy: 7 were unidentified gastric injuries and the remaining 4 were unidentified non gastric injuries. There was no association between unidentified injuries and mechanism of injury or outcome. Fifty-seven (27%) cases experienced one or more complications. Eighty-two cases (39%) required intensive care unit admission. The overall median length of hospital stay was 7 (IQR 4-11) days. The overall mortality was 14%. GSW injuries were more likely to have additional organ injury, higher ISS and PATI scores, longer length of hospitalization, higher likelihood of ICU stay, greater morbidity and mortality than SW injuries. There was a slight increase in the wound sepsis rate as number of associated extra gastric injuries increased but this was not statistically significant. There was no difference in wound overall sepsis rate between SW and GSW injuries (2% vs 8%, p=0.121). Conclusions: Primary repair alone is sufficient for most PGI, but laparotomy is associated with high incidence of unidentified injury and surgeons must to be cognisant of the likelihood of these occult injuries. abstract_id: PUBMED:11908509 Penetrating vascular trauma in Johannesburg, South Africa. An awareness that time crucially affects outcome underpins the principles of management of vascular injury. Patients with hard signs of vascular injury should undergo urgent exploration. Soft signs mandate investigation, and arteriography is still the standard of care. Noninvasive vascular imaging may prove its worth in the future. All patients with penetrating arterial injury should receive broad-spectrum antibiotic prophylaxis. Early repair of carotid artery injury provides the best likelihood of a neurologically intact survivor. There is a definite and emerging role of endovascular therapy both for difficult access injuries and for the later complications of vascular injury, such as false aneurysm and arteriovenous fistulas. The experimental and clinical evidence for the use of intraluminal shunts in peripheral vascular injury is compelling, and experience in their use is accumulating. Vascular trauma is complex and ideally is carried out by experts in a multidisciplinary environment; resuscitation and prompt revascularization are likely to lead to satisfactory outcomes. The major trauma load in South Africa represents an unparalleled experience in management of vascular injury, which seems likely to continue for the foreseeable future. abstract_id: PUBMED:12354266 Penetrating injuries in children: is there a message? Objectives: To determine the frequency, management and outcome of penetrating trauma in children. Methods: A retrospective review of penetrating injuries in children under 16 years of age admitted to the Children's Hospital at Westmead (CHW), and deaths reported to the New South Wales Paediatric Trauma Death (NPTD) Registry, from January 1988 to December 2000. Patient details, circumstances of trauma, injuries identified, management and outcome were recorded. Results: Thirty-four children were admitted to the CHW with penetrating injuries during the 13-year period. This represented 0.2% of all trauma admissions, but 3% of those children with major trauma. The injury typically involved a male, school-age child that fell onto a sharp object or was assaulted with a knife or firearm by a parent or person known to them. Twenty-five children (75%) required operative intervention for their injuries and 14 survivors (42%) suffered long-term morbidity. Thirty children were reported to the NPTD Registry over the same interval, accounting for 2.3% of all trauma deaths in New South Wales. Of these, a significant minority was injured by falls from a mower or a tractor towing a machine with blades. Conclusions: Penetrating injuries are uncommon, but cause serious injury in children. There are two clear groups: (i) those dead at the scene or moribund on arrival, in whom prevention must be the main aim; and (ii) those with stable vital signs. Penetrating wounds should be explored in the operating theatre to exclude major injury. Young children should not ride on mowers or tractors. abstract_id: PUBMED:26762313 Epidemiology and outcome of penetrating injuries in a Western European urban region. Purpose: Severe life-threatening injuries in Western Europe are mostly caused by blunt trauma. However, penetrating trauma might be more common in urban regions, but their characteristics have not been fully elucidated. Methods: Retrospective analysis of data from patients admitted to our urban university level I trauma center between 2008 and 2013 with suspicion of severe multiple injuries. Collection of data was performed prospectively using a PC-supported online documentation program including epidemiological, clinical and outcome parameters. Results: Out of 2095 trauma room patients admitted over the 6-year time period 194 (9.3 %) suffered from penetrating trauma. The mean Injury Severity Score (ISS) was 12.3 ± 14.1 points. In 62.4 % (n = 121) the penetrating injuries were caused by interpersonal violence or attempted suicide, 98 of these by stabbing and 23 by firearms. We observed a widespread injury pattern where mainly head, thorax and abdomen were afflicted. Subgroup analysis for self-inflicted injuries showed higher ISS (19.8 ± 21.8 points) than for blunt trauma (15.5 ± 14.6 points). In 82.5 % of all penetrating trauma a surgical treatment was performed, 43.8 % of the patients received intensive care unit treatment with mean duration of 7.4 ± 9.3 days. Immediate emergency surgical treatment had to be performed in 8.0 vs. 2.3 % in blunt trauma (p &lt; 0.001). Infectious complications of the penetrating wounds were observed in 7.8 %. Conclusions: Specific characteristics of penetrating trauma in urban regions can be identified. Compared to nationwide data, penetrating trauma was more frequent in our collective (9.3 vs. 5.0 %), which may be due to higher crime rates in urban areas. Especially, self-inflicted penetrating trauma often results in most severe injuries. abstract_id: PUBMED:35871854 The Prevent Alcohol and Risk-Related Trauma in Youth (P.A.R.T.Y) program: An interactive injury prevention initiative in South-Western Sydney. Background: The Prevent Alcohol and Risk Related Trauma in Youth (P.A.R.T.Y) program is an interactive injury prevention intervention, specifically designed for "at risk" youths aged 15 to 19 years. Emerging evidence has highlighted its positive impact on altering student attitudes towards risk-taking behaviour across several Australian and international settings. This study aims to describe the risk-taking behaviours of youths in South-Western and Greater Western Sydney, and assess the effectiveness of the Liverpool P.A.R.T.Y program to alter attitudes towards risk-taking. Methods: From 2015 to 2020, schools and youth organisations across South-Western and Greater Western Sydney were invited to participate in the Liverpool Hospital P.A.R.T.Y program. Youths aged 15 to 19 years were selected to attend by their respective teachers based on eligibility criteria. Knowledge and attitudes towards risk-taking behaviours were measured using surveys across three time points (pre-program, immediately post-program, 3-to-6 months post-program). Results: A total of 2544 participants from 50 schools and youth organisations attended the Liverpool Hospital P.A.R.T.Y program. There were 130 participants who did not record a response to a single question across all three time points and were omitted from analysis. Of the remaining 2414 participants, 49% were male, and 41% had access to a provisional driver's license or learner's permit. There were significant changes in knowledge and attitudes to risk-taking behaviours from pre-to immediately-post-program. A separate analysis across all three time points was conducted in response to a poor 3-to-6-month follow-up rate (25%). There was decay in improvements across all six questions, with the largest change seen in perceived likelihood of injury when engaging in physically risk-taking activities (52.2% to 36.9%, OR 0.44, 95% CI 0.33 - 0.60, p &lt; 0.001). Conclusion: This study demonstrated significant changes in participant attitudes towards risk-taking behaviours and their consequences, immediately after participating in the Liverpool Hospital P.A.R.T.Y program. However, the poor response rates at later follow-up highlight the need for ongoing engagement of the South-Western and Greater Western Sydney youths, to ensure these improvements are sustained. abstract_id: PUBMED:16773259 Penetrating cardiac injuries: recent experience in South Africa. Background: Penetrating cardiac injury is a dramatic and lethal form of trauma. The majority of patients will die before reaching medical care, but for those who arrive at hospital alive, the diagnostic acumen and rapid surgical intervention of physicians and surgeons can lead to successful outcomes. Materials And Methods: A structured diagnostic and management approach is used in our trauma unit to deal with penetrating cardiac injury. A retrospective study of 117 patients with penetrating trauma to the heart was conducted over 32 months; the purpose of this study was to describe our protocol and review overall outcome with this type of injury. Demographic details, mechanism of injury, and mode of presentation were recorded. Mortality and morbidity data were collated and echocardiographic follow-up was performed. Results: There were 109 male patients, mean age 29 years (range: 15-54 years) and 8 female patients, mean age 35 years (range: 23-56 years). Most patients (96/117) had sustained stab wounds, and the associated mortality was 15.6%. Twenty-one patients had sustained gunshot wounds (GSW), with a mortality of 81%. Gunshot wounds were significantly more likely to result in death (P&lt;or=0.0001). Five patients sustained multiple cardiac wounds, and three of them died. The risk of death was not different between patients sustaining single or multiple cardiac wounds (P=0.11). Eleven patients had thoracoabdominal injury, and 3/11 (27%) died; all of those that died had sustained GSW. The risk of death was not different between patients sustaining thoracoabdominal wounds and those with isolated thoracic injury (P=1.0). Among the 51 patients with stab wounds who presented with cardiac tamponade, the mortality was only 8%. Compared to the other patients with stab wounds, cardiac tamponade conferred a significant survival advantage (P=0.02). Conclusions: A high index of suspicion for penetrating cardiac injury and an understanding of the modes of presentation can lead to rapid diagnosis even by inexperienced junior staff. Such a structured approach to patients with penetrating trauma to the chest leads to early definitive management and acceptable results. Answer: No, the study described in abstract PUBMED:8712987 found no significant increase in penetrating trauma or violent crime predisposing to penetrating injury in south-western Sydney. The study analyzed data from the Liverpool Hospital Trauma Registry, NSW Department of Health Hospital Separations, and the NSW Bureau of Crime Statistics with a focus on the Liverpool and Fairfield Local Government Areas (LGA). The trends in the incidence of penetrating trauma and violent crime involving weapons were found to be static. Although the rates of violent crimes involving weapons and separations for penetrating trauma were higher in south-western Sydney than in metropolitan Sydney or NSW, there was no significant upward trend observed during the study period.
Instruction: Fitness and exercise as correlates of sleep complaints: is it all in our minds? Abstracts: abstract_id: PUBMED:19997006 Fitness and exercise as correlates of sleep complaints: is it all in our minds? Purpose: Restoring sleep is associated with psychological well-being. In contrast, poor sleep leads to impaired daily cognitive, emotional, and social functioning. Both commonplace and expert opinion hold that exercise has a favorable impact on preventing poor sleep and improving its quality. However, the scientific basis for this opinion remains limited, and results are mixed. The aim of the present study, therefore, was to explore the impact of perceived physical fitness, exercise, and a perceived lack of activity on sleep in early adulthood. Gender-related patterns were also examined. Methods: A total of 862 participants (639 females and 223 males; mean +/- SD = 24.67 +/- 5.91 yr) took part in the study. Respondents completed a series of self-report questionnaires assessing perceived physical fitness, exercise, perceived lack of physical activity, insomnia (Insomnia Severity Index), dysfunctional sleep-related thoughts (Fragebogen zur Erfassung allgemeiner Persönlichkeitsmerkmale Schlafgestörter), and quality of sleep (Pittsburgh Sleep Quality Index). Results: High perceived physical fitness, but not exercise, was associated with favorable scores for various sleep indicators. A perceived lack of physical activity was associated with poor sleep. Perceived physical fitness and exercise were moderately correlated. Compared with males, females reported more sleep difficulties and also more dysfunctional sleep-related thoughts. Conclusions: For early adulthood, findings did not support commonplace or expert opinion that exercise behavior has a favorable influence on sleep. Rather, the findings lend support to the importance of cognitive processes in the onset and maintenance of sleep complaints. abstract_id: PUBMED:36360611 Exercise Training Program Improves Subjective Sleep Quality and Physical Fitness in Severely Obese Bad Sleepers. Background: Sleep quality is an important modulator of neuroendocrine function, as sleep problems are related to metabolic and endocrine alterations. Objective: The main objective was to determine the effects of an exercise training program on the sleep quality of severely obese patients with sleep problems. The secondary objective was to determine the relationship between fitness and anthropometric parameters with sleep quality scores. Methods: Thirty severely obese patients participated in 16 weeks of PA intervention (age: 39.30 ± 11.62 y, BMI: 42.75 ± 5.27 kg/m2). Subjective sleep quality, anthropometric parameters, and fitness (i.e., handgrip strength and cardiorespiratory fitness) were measured. Results: Two groups were defined as good sleepers (n = 15, 38.06 ± 12.26, men = 1) and bad sleepers (n = 15, 40.53 ± 11.23, men = 3). The good sleeper group reported improvement in cardiorespiratory fitness (61.33 ± 68.75 m vs. 635.33 ± 98.91 m, p = 0.003) and handgrip strength (29.63 ± 9.29 kg vs. 31.86 ± 7.17 kg, p = 0.049). The bad sleeper group improved their cardiorespiratory fitness (472.66 ± 99.7 m vs. 611.33 ± 148.75 m, p = 0.001). In terms of sleep quality dimensions, the bad sleeper group improved their subjective sleep quality (p &lt; 0.001), sleep latency (p = 0.045), sleep duration (p = 0.031), and habitual sleep efficiency (p = 0.015). Comparing the changes in both groups (∆), there were differences in subjective sleep quality scores (∆ = 2.23 vs. ∆ = -3.90, p = 0.002), where 86.6% of the bad sleeper group improved sleep quality (p = 0.030). An increase in handgrip strength was correlated to improving sleep quality scores (r = -0.49, p = 0.050). Conclusions: Severely obese bad sleepers improved their subjective sleep quality, the components of sleep, and cardiorespiratory fitness through an exercise training program. Improvement in subjective sleep quality was linked to an increase in handgrip strength. abstract_id: PUBMED:8341896 Aerobic fitness, acute exercise and sleep in older men. In the current study 12 aerobically fit and 12 sedentary older men underwent two nocturnal polysomnographic (PSG) studies. A control PSG was conducted following a day without aerobic activity, whereas a postexercise PSG study was conducted following an afternoon session of exhaustive aerobic exercise. In addition to deriving usual sleep parameters, a computer scoring program was used to count the number of individual electroencephalographic (EEG) slow waves in each PSG tracing. Multivariate and univariate analyses showed that the fit subjects had shorter sleep onset latencies, less wake time after onset, fewer discrete sleep episodes, fewer sleep stage shifts during the initial portion of the night, less stage 1 sleep, a higher sleep efficiency and more total slow waves during both PSGs than did the sedentary subjects. Although no main effects were found for the acute exercise challenge, post hoc analyses showed that high levels of body heating during exercise predicted increased sleep fragmentation for both fit and sedentary subjects. These findings provide initial support for the contention that exercise and fitness may have significant effects on the sleep of older men. However, results also suggest that high levels of body heating resulting from a single exercise challenge may have adverse effects. Implications of the study are discussed and suggestions for future research are provided. abstract_id: PUBMED:36497548 Effects of Multi-Component Exercise on Sleep Quality in Middle-Aged Adults. Sleep is a crucial factor in healthy aging. However, most middle-aged adults experience high levels of sleep disorders. While previous findings have suggested exercise training could benefit the quality of sleep, the effects of multi-component exercise on sleep quality are less examined. Accordingly, the current study aimed to assess the effectiveness of a multi-component exercise program on the quality of sleep among middle-aged adults. Twenty-four middle-aged adults were randomly assigned either to a multi-component exercise (MCE) group or a control group. The participants in the MCE group attended a 90-min session per week for 12 weeks. The control group was instructed to maintain their daily routine for 12 weeks. The primary outcome was the sleep quality evaluated by the Pittsburgh Sleep Quality Index (PSQI). The secondary outcome was physical fitness, including muscular strength and endurance, balance, and flexibility. Regarding sleep quality, the global mean score (p = 028), sleep disturbances (p = 011), and sleep efficiency (p = 035) of the PSQI scores were significantly reduced in the MCE group after the 12-week intervention. Regarding physical fitness, the flexibility of the MCE group improved significantly after the intervention (p = 028), yet, no significant change was observed in the control group. Additionally, the muscular strength of the control group declined significantly after the 12-week period (p = 034). Our results revealed the effectiveness of the MCE intervention in improving sleep quality and physical fitness in middle-aged adults. Further studies using larger sample sizes, objective measures of sleep quality, different types of exercise training, as well as different populations, are warranted to extend our current findings. abstract_id: PUBMED:31354533 Fitness-Dependent Effect of Acute Aerobic Exercise on Executive Function. Cognitive gains are reported to be induced by acute aerobic exercise, but the role of fitness in the effect of acute aerobic exercise on executive function remains unknown. Therefore, we aimed to examine the effect of fitness on acute exercise-induced changes in executive function from neural mechanism approach. Twenty-four female college students were assigned to high-fitness or low-fitness groups based on their cardiovascular fitness level, and then underwent functional magnetic resonance imaging while performing N-back tasks before and after 30 min of acute exercise. The behavioral results revealed significant interaction effects of group by time in the 0-back and 1-back tasks, but not in the 2-back task. The accuracy was significantly higher in the high-fitness group than in the low-fitness group before exercise in the 1-back and 2-back tasks. At the neural level, significant interaction effects of group by time were observed in all tasks. The 0-back and 1-back tasks activated the right cerebellum while the 2-back task activated subcortical regions. Our findings suggest that fitness moderates the effect of aerobic exercise on cognitive function, and provide the first neural evidence to support the influence of fitness on exercise-induced cognitive performance. abstract_id: PUBMED:32281081 Lower Extremity Exercise Improves Functional Fitness, Physiological Indexes, Exercise Self-Efficacy, Sleep Quality, and Mental Health in Middle-Aged and Older Individuals Background: Middle-aged and older individuals suffer from skeletal muscle loss due to aging, increasing the risk of sarcopenia. Muscular dystrophy reduces lower-extremity muscle endurance. The annual incidence of falls in the community is about 30-40%. Falls contribute to disability and fractures, affect quality of life, reduce mental health, and, in severe cases, result in death. Therefore, preventing lower limb muscle weakness in middle-aged and older individuals should be taken seriously. Purpose: The purpose of this study was to promote community health with a focus on older, community-dwelling individuals. The effects of a lower-extremity exercise intervention on middle-aged and older individuals in terms of improving functional fitness, physiological indexes, exercise self-efficacy, sleep quality, and mental health were explored. Methods: This study used convenience sampling to recruit community residents over 55 years old as participants, with 50 participants assigned to the control group and to the experimental group, respectively. The experimental group participated in a 50-min lower extremity exercise intervention three times a week for 12 weeks. Differences in functional fitness, basic physiological index, exercise self-efficacy, sleep quality, and mental health variables between the two groups were assessed at the conclusion of the intervention. Results: The lower-extremity muscle exercise program significantly improved functional fitness, physiological indexes, exercise self-efficacy, sleep quality, and overall mental health status in the experimental group, as compared to the control group (p &lt; .05). Conclusions / Implications For Practice: It is recommended that the concept and application of lower extremity movement intervention should be popularized among middle-aged and older individuals to promote physical and mental health, prevent the decline and loss of lower extremity muscle strength, and help realize healthy aging goals. abstract_id: PUBMED:37379255 Does Cardiorespiratory Fitness Protect Memory from Sleep Deprivation? Introduction: Animal studies have demonstrated that physical exercise can protect memory from the effects of sleep deprivation (SD). We examined whether having a high cardiorespiratory fitness (V̇O 2peak ) is associated with an enhanced capacity to encode episodic memory after one night of SD. Methods: Twenty-nine healthy young participants were allocated into either an SD group ( n = 19) that underwent 30 h of uninterrupted wakefulness, or a sleep control (SC) group ( n = 10) that followed a regular sleep routine. Following either the SD or SC period, participants were asked to view 150 images as the encoding part of the episodic memory task. Ninety-six hours after viewing the images, participants returned to the laboratory to perform the recognition part of the episodic memory task, which required the visual discrimination of the 150 images previously presented from 75 new images introduced as distractors. Cardiorespiratory fitness (V̇O 2peak ) was assessed with a bike ergometer graded exercise test. Group differences in memory performance were assessed with independent t tests and associations between V̇O 2peak and memory with multiple linear regression. Results: The SD group showed a significant increase in subjective fatigue (mean difference [MD] [standard error {SE}] = 38.94 [8.82]; P = 0.0001) and a worse capacity to identify the original 150 images (MD [SE] = -0.18 [0.06]; P = 0.005) and discriminate them from distractors (MD [SE] = -0.78 [0.21] P = 0.001). When adjusted for fatigue, higher V̇O 2peak was significantly associated with better memory scores in the SD (R 2 = 0.41; β [SE] = 0.03 [0.01]; P = 0.015) but not in the SC group ( R2 = 0.23; β [SE] = 0.02 [0.03]; P = 0.408). Conclusions: These results confirm that SD before encoding impairs the capacity to create robust episodic memories and provide preliminary support to the hypothesis that maintaining high levels of cardiorespiratory fitness could have a protective effect against the disruptive effects of sleep loss on memory. abstract_id: PUBMED:6542495 Fitness facilitates sleep. Eight army recruits were studied at the start, middle, and end of their initial 18-week training programme. At each point the subjects were studied for four consecutive nights in the sleep laboratory. Their sleep was characterized by the means of the recordings on the last two nights. Within 2 days of the sleep recordings (but never on the same day) each subject spent 2 non-consecutive days in the exercise laboratory. On the 1st day a maximum oxygen consumption (VO2 max) measurement was performed on a treadmill and on the 2nd day a 24-min progressive exercise bicycle ergometer test was carried out with simultaneous venous sampling (for lactic acid measurements) and oxygen consumption recordings from which the lactate turn point (LTP) was calculated. LTP was used as a measure of fitness. Approximately 1 week after the above measures lean muscle mass as calculated by total body potassium estimation was obtained for each subject. Slow wave sleep (SWS) as a percentage of total sleep time increased significantly between the start and the measurements at 9 and 18 weeks, being 21.9%, 29.9%, and 28.5% respectively. Anaerobic threshold increased significantly (P less than 0.05) over the first 9 weeks and continued to increase to the end of the training period (P less than 0.001) using VO2 when lactate level was 2 mmol/l as a percentage of VO2 max. With increase in fitness, sleep onset latency and wake time during sleep decreased and sleep efficiency improved. The results suggest that as fitness increases sleep quality improves. abstract_id: PUBMED:30903972 Is improved fitness following a 12-week exercise program associated with decreased symptom severity, better wellbeing, and fewer sleep complaints in patients with major depressive disorders? A secondary analysis of a randomized controlled trial. Major depressive disorder (MDD) is one of the most burdensome disorders worldwide. While exercise training in patients with MDD contributes to clinically relevant improvements in cardiorespiratory fitness, whether and to what degree changes in cardiorespiratory fitness impact depressive symptom severity has not yet been addressed systematically in prior research. The purpose of our study was threefold. Firstly, to examine whether baseline levels and improvements in objectively assessed VO2max and subjectively perceived fitness predicted endpoint levels and change in depressive symptoms, wellbeing and sleep. Secondly, to determine whether exercise modality (sprint interval training [SIT]) versus continuous aerobic exercise training [CAT]) predicted depressive symptoms, wellbeing and sleep. Thirdly, whether the affective responses during and following exercise predicted depressive symptoms, wellbeing and sleep. All measurements were taken in a sample of inpatients diagnosed with MDD. The sample consisted of 53 participants (41 women and 12 men, Mage = 36.3 years, SD = 11.3) with unipolar depression who were randomly assigned to SIT and CAT. Data were assessed at baseline and after four weeks of exercise training (including three weekly 35 min sessions). Multiple linear regression analyses showed that improvements in VO2max were associated with fewer depressive symptoms, better mental wellbeing, and better sleep after completion of the intervention. Additionally, improvements in perceived fitness were associated with fewer dysfunctional sleep-related cognitions and higher mental toughness post-intervention. Improvements in VO2max and perceived fitness were also associated with favorable changes in depressive symptoms, mental wellbeing, and sleep. More research is needed to find out which fitness tests are most time- and cost-efficient in a clinical setting and most acceptable for psychiatric patients. abstract_id: PUBMED:22584092 Descriptive assessment of exercise program on fitness and correlates of participation. Objective: To assess health-related fitness, physical activity correlates, and completion of a half-marathon using a 3-day training program in a college community. Methods: 26 volunteers participated in a 20-week, half-marathon training program. Results: All participants completed the half-marathon. Positive changes in health-related fitness and psychological correlates associated with future exercise participation were observed. Conclusions: Incorporating strategies that address physical activity correlates and barriers may increase adherence to a program with a challenging goal such as training for a half-marathon. A low-frequency program was sufficient for increasing health-related fitness and psychological factors related to exercise participation of campus community. Answer: The relationship between fitness, exercise, and sleep complaints is complex and multifaceted. While some studies suggest that exercise and fitness can have a positive impact on sleep quality, the evidence is not entirely consistent, and cognitive processes may also play a significant role. One study found that perceived physical fitness, rather than actual exercise behavior, was associated with better sleep indicators, suggesting that cognitive perceptions about one's own fitness level might influence sleep quality (PUBMED:19997006). This study also noted that a perceived lack of physical activity was linked to poor sleep, and that females reported more sleep difficulties and dysfunctional sleep-related thoughts compared to males. Conversely, other research has shown that exercise training programs can improve subjective sleep quality and physical fitness in severely obese bad sleepers (PUBMED:36360611), and that aerobic fitness and acute exercise can affect sleep in older men, with fit individuals experiencing better sleep parameters (PUBMED:8341896). Additionally, multi-component exercise has been found to improve sleep quality in middle-aged adults (PUBMED:36497548), and lower extremity exercise has been shown to improve functional fitness, physiological indexes, exercise self-efficacy, sleep quality, and mental health in middle-aged and older individuals (PUBMED:32281081). Furthermore, cardiorespiratory fitness has been suggested to protect memory from the effects of sleep deprivation (PUBMED:37379255), and increased fitness has been associated with improved sleep quality (PUBMED:6542495). In patients with major depressive disorders, improvements in cardiorespiratory fitness were linked to fewer depressive symptoms, better mental wellbeing, and better sleep (PUBMED:30903972). In summary, while cognitive perceptions about fitness may influence sleep complaints, there is also evidence that actual improvements in fitness and participation in exercise programs can have beneficial effects on sleep quality. It is not solely a matter of perception; physical changes and improvements in fitness levels appear to play a role in sleep health as well.
Instruction: Peripheral intravenous power injection of iodinated contrast media through 22G and 20G cannulas: can high flow rates be achieved safely? Abstracts: abstract_id: PUBMED:19280548 Peripheral intravenous power injection of iodinated contrast media through 22G and 20G cannulas: can high flow rates be achieved safely? A clinical feasibility study. Purpose: Modern examination protocols for computed tomography (CT) often require high injection rates of iodinated contrast media (CM). The purpose of this study was to evaluate the maximum achievable flow rates and stability of different peripheral intravenous catheters (IVC) in vitro and to assess the feasibility of higher injection rates through small IVC in vivo. Materials And Methods: For in vitro experiments flow measurements followed by high pressure testing of different types of IVC (22, 20, and 18 gauge [G]) were performed. For the in vitro study 91 patients with already inserted 22 or 20G IVC who had been referred for CT received Iopamidol (300 mg iodine/ml) at flow rates between 2 and 5 ml/sec. Complications were documented. Results: The maximal achievable flow rate of the tested IVC in vitro ranged from 5 to 8 ml/sec. No damage was observed during in vitro testing. The initially targeted in vivo flow rate was dropped in 33 of 91 (36 %) patients because the IVC could not be flushed adequately with saline before CM injection. Extravasation of CM occurred in 2 cases. In the remaining 58 patients the standard CT protocol was performed with flow rates of 3 ml/sec through 22G IVC and 5 ml/sec through 20G IVC, respectively. In this group, the extravasation of CM was observed twice (p &gt; 0.05). Conclusion: Even with highly viscous CM, high flow rates can be applied in vitro in 22, 20, and 18G IVC without risking material damage. In vivo power injection of iodinated CM through 22G and 20G IVC seems to be safely achievable in the majority of patients with flow rates of up to 3 ml/sec and 5 ml/sec. Extravasation rates do not differ significantly between patients with high-flow or low-flow injections. abstract_id: PUBMED:19896067 Peripheral intravenous power injection of iodinated contrast media: the impact of temperature on maximum injection pressures at different cannula sizes. Rationale And Objectives: Modern computed tomographic scanners and examination protocols often require high injection rates of iodinated contrast media (CM). The purpose of this study was to investigate the maximum injection pressures (MIPs) with different CM at different temperatures in the most common intravenous cannula (IVC) sizes. Materials And Methods: Three IVC sizes, 22, 20, and 18 gauge, were evaluated. All examinations were performed with a pressure-limited (300 psi) power injector. The MIPs of three different CM (Solutrast 300, Imeron 350, and Imeron 400) were measured at room temperature (20 degrees C) and at 37 degrees C using increasing flow rates (1-9 mL/s). The intactness of the IVCs was checked after injection. Results: Heating the CM led to reductions in injection pressures (P &lt; .001). Using constant flow rates, the difference in MIP between 20-gauge and 22-gauge IVCs was higher than that between 20-gauge and 18-gauge IVCs. By heating the CM, the manufacturer's suggested operating pressure limit was exceeded at higher flow rates, such as with an 18-gauge cannula at 8 mL/s instead of 6 mL/s using warmed iomeprol 400. Even with pressures of up to 159.7 psi, none of the IVCs ruptured. Conclusions: Heating of CM effectively reduces MIPs using power injection in common IVCs. Although the manufacturer's suggested MIP was exceeded at higher flow rates, safe CM injection seems to be possible even in small cannulas using power injection. The compilation of the obtained data is meant to serve as guidance for future decisions on parameters of the power injection of iodinated CM. abstract_id: PUBMED:28616535 Effect of intravenous infusion of iodinated contrast media on the coronary blood flow in dogs. Background: Coronary computed tomography angiography (CCTA) is obtained using peripheral intravenous iodinated contrast agents (ICA) injection. There is continuing attempts to derive coronary physiological information like coronary blood flow (CBF) and/or fractional flow reserve from CCTA images. However, no data is available regarding the effect of peripheral intravenous injection of ICA on CBF. Methods: A series of 4 experiments was performed using healthy mongrel dogs. All dogs underwent anesthesia and open thoracotomy with placement of ultrasound flowmeter to one of the coronary artery to provide real time absolute CBF measurements. Different infusion protocols of Isovue-370 and Visipaque-320 were injected into a peripheral vein. Similar doses of normal saline injection were performed to be used as controls. The effect of iodinated contrast media injection on absolute coronary blood flow was monitored and recorded. Results: Injection of normal saline in the peripheral vein did not produce any significant increase in CBF. Peripheral intravenous injection of ICA resulted in a consistent increase of 40-73% in absolute CBF as recorded 5 minutes post-contrast administration. The contrast effect starts about 30 seconds and peaks at about 2 minutepost-contrast injection then slowly fades away in the following 10-15 min. The increase in the CBF was dose related. There was greater increase in the CBF to 50 ml infusion compared to 25 ml infusion of both Visipaque and Isovue. Conclusions: Peripheral venous administration of iodinated contrast-media in dogs results in a dose related, significant and prolonged increase in CBF. abstract_id: PUBMED:17375236 Immediate adverse reactions to intravenous iodinated contrast media in computed tomography. This exploratory-descriptive, non-experimental quantitative research aimed to learn about immediate adverse reactions to intravenous iodinated contrast media in hospitalized patients submitted to computed tomography at a teaching hospital in the South of Brazil. During the study period, all adverse reactions showed mild intensity, at a frequency of 12.5% with ionic iodinated contrast media, and 1% with non-ionic contrast agent. The extravasation of contrast occurred in 2.2% of the injections in a peripheral vein without complications in any of the cases. The results are within the limits cited in international literature and suggest that tomography service professionals should know their own rates of adverse reactions to iodinated contrast agent, as well as the conditions in which they occur, in order to obtain evidence to evaluate the respective care delivery processes. abstract_id: PUBMED:134955 Waldenströms macroglobulinaemia and intravenous contrast media. In vivo and in vitro studies have been performed in a group of patients with immunoproliferative diseases to evaluate the risk of serious reactions due to serum jelling after intravenous injection of iodinated contrast media. Sol-jell convertion and/or turbidimetric variations have not been observed when either sera or plasmas have been mixed with variable amounts of a methylglucamine salt of ioglycamic acid (MGI) and other compounds. In addition, no side-effects have been clinically recorded in three patients with Waldenströms macroglobulinaemia (WM) whose sera and/or plasmas had been studied in vitro, when they have been submitted to intravenous contrast examinations. The results suggest that there is not an evidence of a relationship between iodinated contrast media and fatal reactions due to sol-jell alterations in patients with WM and therefore a radiological examination using contrast media may be carried out in those patients. abstract_id: PUBMED:33386919 Hemodialysis catheter integrity during mechanical power injection of iodinated contrast medium for computed tomography angiography. Purpose: CT angiography (CTA) requires vascular access with flow rates of 5-7 mL/s. Hemodialysis (HD) is performed at 6-10 mL/s. The purpose of our study is to evaluate the structural integrity of HD catheters in the administration of contrast media via a mechanical power injector under varying conditions. Methods: Four HD catheters were evaluated in an in vitro study. Tested were contrast media type (iopamidol 300 and 370 mgI/mL), temperature (25 and 37 °C), catheter diameter (14 Fr to 16 Fr all with double-lumen capacity), catheter length (19-32 cm), and simultaneous double-lumen or single-lumen injection within each of the catheters. Peak plateau pressures (psi) were recorded with flow rates from 5 to 20 mL/s in 5 mL/s increments. In total, 864 unique injections were performed. Results: No catheter failure (bulging/rupture) was observed in 864 injections. Maximum pressure for single-lumen injection was 51.7 psi (double-lumen: 26.3 psi). Peak pressures were significantly lower in simultaneous double-lumen vs. single-lumen injections (p &lt; 0.001) and low vs. high viscosity contrast media (p &lt; 0.001). Neither larger vs. smaller diameter lumens (p = 0.221) nor single-lumen injection in arterial vs. venous (p = 0.834) were significantly different. Conclusion: HD catheters can be used to safely administer iodinated contrast media via mechanical power injection in in vitro operating conditions. Maximum peak pressure is below the manufacturer's 30 psi limit at flow rates up to 20 mL/s in double-lumen injections and up to 10 mL/s in single-lumen injections, which is higher than the usual maximum of 8 mL/s for CT angiography in clinical settings. abstract_id: PUBMED:22560565 Safety essentials: acute reactions to iodinated contrast media. The objectives of this article are to review the diagnosis and management of acute nonrenal reactions to iodinated contrast media. We will begin by discussing the types of contrast media and their correlative rates of reaction. The mechanism of contrast reactions, predisposing risk factors, and preventative measures will then be discussed. The remainder of the article will review the assessment of potential reactions, initial management, and treatment algorithms for specific reactions. abstract_id: PUBMED:17449784 Effect of i.v. injection of radiographic contrast media on human renal blood flow. Objective: The purpose of our study was to assess the effect of an i.v. injection of iodinated radiographic contrast medium on human renal blood flow using cine phase-contrast MRI. Subjects And Methods: We examined 12 healthy adult volunteers. Blood flow in one renal artery was measured using cine phase-contrast imaging (1.5-T MR system). Each volunteer received 120 mL of isotonic sodium chloride on study day 1 and 120 mL of a low-osmolar, nonionic, iodinated contrast medium (iomeprol, 400 mg I/mL) on study day 2. Repetitive measurements were performed before (up to five measurements in 5 minutes) and after (up to 13 measurements in 30 minutes) the injection was started. Results: Mean basal renal artery blood flow was 664 mL/min. In response to the injection of the test substances, we found a significantly larger decrease in average renal blood flow for contrast medium than for sodium chloride (31.9 mL/min vs 18.3 mL/min, p = 0.0481). Furthermore, in analyzing the measurements at early time points, we found a significant decrease (11.4% +/- 4.7% [SD]; Bonferroni-corrected, p &lt; 0.05) in renal blood flow 2 minutes after the injection of the contrast medium was started. Sodium chloride did not produce a significant effect at any time. Conclusion: Cine phase-contrast MRI can measure a decrease in renal blood flow in humans in response to an i.v. injection of iodinated radiographic contrast medium. Therefore, cine phase-contrast MRI can be a helpful and noninvasive tool for further investigations of contrast media-induced changes in human renal blood flow and their possible impact on the development of contrast-induced nephropathy. abstract_id: PUBMED:24436602 Adverse reactions to iodinated contrast media. The use of iodinated contrast agents for angiography dates back to the 1920s. With over 75 million contrast-requiring procedures performed annually worldwide, it is important to understand the risk factors, pathogenesis, diagnosis, prevention, and treatment of adverse reactions caused by iodinated contrast media (ICM). As contrast media have evolved from ionic, high-osmolality to nonionic, low-osmolality formulations the risk of reactions has decreased over time; however, no pretreatment protocol has been shown to eliminate all repeat reactions. Clinical alertness and early recognition of adverse reactions is of paramount importance and key for appropriate management of these patients. In this article, we review the most recent literature regarding adverse reactions to ICM and provide an insight into the pathogenesis, clinical presentation, pretreatment, and management of contrast-related reactions. abstract_id: PUBMED:15173984 Iodine delivery rate of different concentrations of iodine-containing contrast agents with rapid injection Purpose: To determine how the concentration of iodinated contrast media for computed tomography studies affects the iodine delivery rate at various conditions. Materials And Methods: Three nonionic, iodinated contrast agents that are marketed for computed tomography applications were administered through 1.1 and 1.3 mm large peripheral vein catheters into a vein phantom with room temperature and with pre-heating of the agents at 37 degrees C using a power injector. Each injection applied 40 ml of contrast medium at a flow rate of four to eight ml/s. The iodine concentration of Iopromide, Iomeprol, and Iodixanol varied between 300 and 400 mg iodine per cc. The power injector used a pressure limit of 21 bar. For each experiment, the maximum iodine deliver rate was calculated from the highest possible flow rate recorded. Results: Larger vein catheters and the higher contrast temperature yielded the highest flow rates. With the higher iodine concentrations, viscosity limited the injection speed of the achievable pressure limit. The highest iodine delivery rate was 2560 mg/s, using an agent with 320 mg/ml iodine content. With the more concentrated agent, a maximal iodine delivery rate of 2400 mg/s was achieved. Conclusion: Very high iodinated contrast agent concentrations do not increase the iodine delivery rate of rapid peripheral intravenous injections, since the high viscosity of such agents causes the injection pressure to increase more than the higher iodine concentration would compensate for. With lower injection velocities, the higher viscosity of highly concentrated contrast agents may remain without practical consequences. Answer: Yes, high flow rates can be achieved safely through 22G and 20G peripheral intravenous cannulas (IVC) when injecting iodinated contrast media (CM). A clinical feasibility study demonstrated that even with highly viscous CM, high flow rates up to 3 ml/sec for 22G IVC and 5 ml/sec for 20G IVC can be applied in vivo without significant risk of material damage or increased rates of extravasation. The study found that the extravasation rates did not differ significantly between patients with high-flow or low-flow injections, indicating that the use of smaller IVCs at these flow rates can be considered safe for the majority of patients (PUBMED:19280548). Moreover, the impact of temperature on maximum injection pressures at different cannula sizes was investigated, revealing that heating the CM led to reductions in injection pressures, allowing for the safe injection of CM even in small cannulas using power injection. The study suggested that safe CM injection seems possible in common IVC sizes, and the data can serve as guidance for future decisions on parameters of power injection of iodinated CM (PUBMED:19896067). In summary, peripheral intravenous power injection of iodinated contrast media through 22G and 20G cannulas can achieve high flow rates safely, with appropriate precautions and monitoring for potential complications such as extravasation.
Instruction: Research into smoking or nicotine and human cognitive performance: does the source of funding make a difference? Abstracts: abstract_id: PUBMED:18855829 Extended cognitive behavior therapy for cigarette smoking cessation. Unlabelled: PRIMARY AIM: Examine the effectiveness of extended cognitive behavior therapy (CBT) in promoting longer-term smoking abstinence. Design: Open-label treatment phase followed by extended treatment phase. Randomization conducted prior to entry into open-label treatment phase; analysis based on intention-to-treat to avoid threat of selection bias. Setting: Community smoking cessation clinic. Participants: A total of 304 adult smokers (&gt; or = 18 years of age; &gt; or = 10 cigarettes/day). Intervention: Open-label (8 weeks): all participants received bupropion SR, nicotine patch, CBT. Extended treatment (12 weeks): participants received either CBT + voicemail monitoring and telephone counseling or telephone-based general support. Measurements: Seven-day point prevalence abstinence, expired-air carbon monoxide. Results: At week 20 follow-up, CBT produced a higher 7-day point prevalence abstinence rate: 45% versus 29%, P = 0.006; at 52 weeks the difference in abstinence rates (31% versus 27%) was not significant. History of depression was a moderator of treatment. Those with a positive history had a better treatment response at 20 weeks when assigned to the less intensive telephone support therapy (P &lt; 0.05). Conclusion: The superiority of CBT to 20 weeks suggests that continued emphasis on the development of cognitive and behavioral strategies for maintaining non-smoking during an extended treatment phase may help smokers to maintain abstinence in the longer term. At present, the minimum duration of therapy is unknown. abstract_id: PUBMED:36645978 Pilot randomized controlled trial of mindfulness-based relapse prevention vs cognitive behavioral therapy for smoking and alcohol use. Background: The combined use of cigarettes and alcohol is associated with a synergistic increase in the risk of morbidity and mortality. Continued alcohol use during a smoking quit attempt is a considerable risk factor for smoking relapse. As such, there is a need for interventions that address both behaviors concurrently. Mindfulness-based interventions hold much promise for simultaneously addressing tobacco and alcohol use. Method: This pilot study evaluated the feasibility and acceptability of a mindfulness-based intervention using a two-arm randomized controlled trial of Mindfulness-Based Relapse Prevention for Smoking and Alcohol (MBRP-SA) vs Cognitive Behavioral Therapy (CBT). Interventions were delivered via telehealth in a group setting; all participants received a 6-week supply of the nicotine patch. Participants (N = 69) were adults who smoked cigarettes who reported binge drinking and were motivated to both quit smoking and change their alcohol use. Primary outcomes were feasibility and acceptability of MBRP-SA compared to CBT. Changes in tobacco and alcohol use are also presented. Results: Participants in MBRP-SA and CBT indicated that the treatments were highly acceptable, meeting a priori benchmarks. Feasibility was mixed with some outcomes meeting benchmarks (e.g., recruitment) and others falling below (e.g., retention). Participants in both conditions demonstrated significant reductions in tobacco and alcohol use at the end of treatment. Conclusions: In sum, MBRP-SA had comparable outcomes to CBT on all metrics measured. Future research should evaluate the efficacy of MBRP-SA on smoking abstinence and drinking reductions in a large-scale, fully powered trial. This study was registered on clinicaltrials.gov (NCT03734666). abstract_id: PUBMED:26320081 Integrating smoking cessation and alcohol use treatment in homeless populations: study protocol for a randomized controlled trial. Background: Despite progress in reducing cigarette smoking in the general U.S. population, smoking rates, cancer morbidity and related heart disease remain strikingly high among the poor and underserved. Homeless individuals' cigarette smoking rate remains an alarming 70% or greater, and this population is generally untreated with smoking cessation interventions. Furthermore, the majority of homeless smokers also abuse alcohol and other drugs, which makes quitting more difficult and magnifies the health consequences of tobacco use. Methods/design: Participants will be randomized to one of three groups, including (1) an integrated intensive smoking plus alcohol intervention using cognitive behavioral therapy (CBT), (2) intensive smoking intervention using CBT or (3) usual care (i.e., brief smoking cessation and brief alcohol counseling). All participants will receive 12-week treatment with a nicotine patch plus nicotine gum or lozenge. Counseling will include weekly individual sessions for 3 months, followed by monthly booster group sessions for 3 months. The primary smoking outcome is cotinine-verified 7-day smoking abstinence at follow-up week 52, and the primary alcohol outcome will be breathalyzer-verified 90-day alcohol abstinence at week 52. Discussion: This study protocol describes the design of the first community-based controlled trial (n = 645) designed to examine the efficacy of integrating alcohol abuse treatment with smoking cessation among homeless smokers. To further address the gap in effectiveness of evidence-based smoking cessation interventions in the homeless population, we are conducting a renewed smoking cessation clinical trial called Power to Quit among smokers experiencing homelessness. Trial Registration: ClinicalTrials.gov Identifier: NCT01932996. Date of registration: 20 November 2014. abstract_id: PUBMED:10563033 Cue exposure treatment for smoking relapse prevention: a controlled clinical trial. Aims: In an additive design, test the efficacy of cue exposure treatment for smoking relapse prevention as an adjunct to current standard cognitive behavioral and pharmacological treatments. Design: Randomized, controlled clinical trial. Setting: Outpatient behavioral medicine clinic. Participants: One hundred and twenty-nine cigarette smokers recruited through newspaper advertisements. Intervention: After receiving an initial counseling session for cessation and setting a quit day, 129 smokers were randomly assigned to one of four relapse prevention treatment conditions: (1) brief cognitive behavioral; (2) cognitive behavioral and nicorette gum; (3) cognitive behavioral and cue exposure; and (4) cognitive behavioral and cue exposure with nicorette gum. All smokers met individually with their counselor for six RP sessions. Measures: Seven-day, point-prevalence abstinence rates (CO verified) taken at 1, 3, 6 and 12-months post-treatment and time to first slip. Findings: All manipulation checks and process measures suggested that the treatments were delivered as intended. There were no significant differences between conditions in point-prevalence abstinence rates or in time to first slip. Conclusions: These results call into question the utility of cue exposure treatment for smoking relapse prevention. abstract_id: PUBMED:25566367 Culturally specific versus standard group cognitive behavioral therapy for smoking cessation among African Americans: an RCT protocol. Background: African American smokers experience disproportionately higher rates of tobacco-related illnesses compared to Caucasians. It has been suggested that interventions targeted to specific racial/ethnic groups (i.e., culturally specific) are needed; however, the literature examining the efficacy of culturally specific interventions is equivocal. Moreover, there are few descriptions of methods used to create these interventions. The main aim of this study is to test the efficacy of a culturally specific smoking cessation intervention among African Americans. Methods/design: A 2-arm randomized controlled trial (RCT) will be conducted to assess the efficacy of a culturally specific group cognitive behavioral therapy (CBT), compared to standard group CBT among treatment-seeking smokers from the community. Participants in both conditions receive the transdermal nicotine patch (TNP) for 8-weeks. We intend to randomize at least 247 adult smokers who self-identify as African American into the trial. Enrolled participants are block randomized into one of two groups: Standard group CBT (control) or a culturally specific group CBT (CS-CBT). Groups are matched for time and attention, and consist of eight sessions. The primary outcome variable is 7-day point prevalence abstinence (7-day ppa). Smoking status is assessed at the end-of-counseling (EOC), and 3, 6, and 12-month follow-ups, with self-reported abstinence verified by saliva cotinine. We hypothesize that the CS-CBT condition will produce significantly greater smoking cessation rates compared to the control condition. We also expect that this effect will be moderated by acculturation and ethnic identity, such that the CS-CBT will show the greatest effect on cessation among participants who are less acculturated and have greater ethnic identity. Discussion: Answering the fundamental question of whether culturally specific interventions lead to incremental efficacy over established, evidence-based approaches is of utmost importance. This study will have implications for the development and implementation of smoking cessation interventions among African Americans and other racial/ethnic minority groups. Trial Registration: NCT01811758. abstract_id: PUBMED:9388153 Cost-effectiveness of the clinical practice recommendations in the AHCPR guideline for smoking cessation. Agency for Health Care Policy and Research. Context: The Agency for Health Care Policy and Research (AHCPR) published the Smoking Cessation: Clinical Practice Guideline in 1996. Based on the results of meta-analyses and expert opinion, the guideline identifies efficacious interventions for primary care clinicians and smoking cessation specialty providers. Objective: To determine the cost-effectiveness of clinical recommendations in AHCPR's guideline. Design: The guideline's 15 recommended smoking cessation interventions were analyzed to determine their relative cost-effectiveness. Then, using decision probabilities, the interventions were combined into a global model of the guideline's overall cost-effectiveness. Patients: The analysis assumes that primary care clinicians screen all presenting adults for smoking status and advise and motivate all smokers to quit during the course of a routine office visit or hospitalization. Smoking cessation interventions are provided to 75% of US smokers 18 years and older who are assumed to be willing to make a quit attempt during a year's time. Intervention: Three counseling interventions for primary care clinicians and 2 counseling interventions for smoking cessation specialists were modeled with and without transdermal nicotine and nicotine gum. Main Outcome Measure: Cost (1995 dollars) per life-year or quality-adjusted life-year (QALY) saved, at a discount of 3%. Results: The guideline would cost $6.3 billion to implement in its first year. As a result, society could expect to gain 1.7 million new quitters at an average cost of $3779 per quitter, $2587 per life-year saved, and $1915 for every QALY saved. Costs per QALY saved ranged from $1108 to $4542, with more intensive interventions being more cost-effective. Group intensive cessation counseling exhibited the lowest cost per QALY saved, but only 5% of smokers appear willing to undertake this type of intervention. Conclusions: Compared with other preventive interventions, smoking cessation is extremely cost-effective. The more intensive the intervention, the lower the cost per QALY saved, which suggests that greater spending on interventions yields more net benefit. While all these clinically delivered interventions seem a reasonable societal investment, those involving more intensive counseling and the nicotine patch as adjuvant therapy are particularly meritorious. abstract_id: PUBMED:29650283 Interventions for smoking cessation in 2018 Smoking cessation treatments have been proved effective to stop smoking. For pharmacological treatments, nicotine replacement therapies (NRT) as well as bupropion allow to increase 6 month-abstinence rates by more than 80% in comparison with placebo while varenicline prescription doubles success rates in the same conditions. These results mean that for 10 smokers who quit with placebo, 18 are expected to quit with NRT or bupropion and 28 are expected to quit with varenicline. Varenicline is 50% more effective than nicotine patch and 70% more effective than nicotine gum. Nevertheless, a combination including NRT patch and oral nicotine forms is as effective as varenicline, thus leading to encourage the prescription of a combination NRT when NRT are chosen. For these three pharmacological treatments, cardiovascular as well as neuropsychiatric tolerance were not found statistically different from placebo in randomized controlled trials. Yet, bupropion prescription leads to an increasing risk of seizure (1/1000 to 1/1500). For behavioral treatment, motivational interviewing as well as cognitive behavior therapies are been proven to be effective to stop smoking but few smokers have access to this treatment. Smoking cessation mobile application and smartphone application seem to be promising in terms of effectiveness and might be useful to reach more smokers. abstract_id: PUBMED:28224877 Helping people to give up smoking; efficacy and safety of smoking cessation interventions - This review evaluates the safety and efficacy of, smoking cessation interventions.- Behavioural counselling is the cornerstone of smoking cessation programmes.- This counselling can be supplemented with written self-help materials, telephone counselling and e-health tailored to individual patient preferences.- Behavioural counselling may be combined with pharmacotherapy for increased effectiveness. First choice is a combination of a nicotine patch and a nicotine lozenge or chewing gum, mainly because of the favourable side-effect profile.- Furthermore, varenicline, bupropion and nortripyline are effective in smoking cessation; varenicline appears to be the most effective drug. These drugs have, however, more contraindications and potential side effects, which necessitates a correct diagnosis and more intensive monitoring.- The e-cigarette is not recommended as it seems to be equally efficacious as nicotine replacement therapy, but its potential side effects - such as normalising smoking - have not been sufficiently investigated. abstract_id: PUBMED:15764422 Double-blind placebo-controlled trial of fluoxetine in smoking cessation treatment including nicotine patch and cognitive-behavioral group therapy. Smoking cessation attempts are often complicated by dysphoria/depression, weight gain, craving, and other nicotine withdrawal symptoms. Fluoxetine's antidepressant and anorectant properties, along with its capacity to attenuate compulsive behavior, suggest that this medication might facilitate smoking cessation treatment. We examined the effect of fluoxetine on smoking cessation in the context of a program that included group cognitive-behavioral therapy (six weeks) and transdermal nicotine patch(ten weeks). In a double-blind randomized trial of fluoxetine for smoking cessation, 150 daily smokers were assigned to placebo (n=48), 20 mg (n=51), or 40 mg fluoxetine (n=51). Fluoxetine did not significantly improve smoking cessation rates, either for those with or without major depressive disorder(MDD)histories or elevated current depression. Our results suggest that fluoxetine may moderate withdrawal symptoms, even if that was not manifested in improved smoking cessation rates. Our results, however, clearly favor the use of fluoxetine if weight gain is a major clinical obstacle to smoking cessation. abstract_id: PUBMED:28239942 Extended treatment for cigarette smoking cessation: a randomized control trial. Aim: To test the potential benefit of extending cognitive-behavioral therapy (CBT) relative to not extending CBT on long-term abstinence from smoking. Design: Two-group parallel randomized controlled trial. Patients were randomized to receive non-extended CBT (n = 111) or extended CBT (n = 112) following a 26-week open-label treatment. Setting: Community clinic in the United States. Participants: A total of 219 smokers (mean age: 43 years; mean cigarettes/day: 18). Intervention: All participants received 10 weeks of combined CBT + bupropion sustained release (bupropion SR) + nicotine patch and were continued on CBT and either no medications if abstinent, continued bupropion + nicotine replacement therapy (NRT) if increased craving or depression scores, or varenicline if still smoking at 10 weeks. Half the participants were randomized at 26 weeks to extended CBT (E-CBT) to week 48 and half to non-extended CBT (no additional CBT sessions). Measurements: The primary outcome was expired CO-confirmed, 7-day point-prevalence (PP) at 52- and 104-week follow-up. Analyses were based on intention-to-treat. Findings: PP abstinence rates at the 52-week follow-up were comparable across non-extended CBT (40%) and E-CBT (39%) groups [odds ratio (OR) = 0.99; 95% confidence interval (CI) = 0.55, 1.78]. A similar pattern was observed across non-extended CBT (39%) and E-CBT (33%) groups at the 104-week follow-up (OR = 0.79; 95% CI= 0.44, 1.40). Conclusion: Prolonging cognitive-behavioral therapy from 26 to 48 weeks does not appear to improve long-term abstinence from smoking. Answer: The provided abstracts do not directly address the question of whether the source of funding for research into smoking or nicotine and human cognitive performance makes a difference. The abstracts focus on the effectiveness of various smoking cessation interventions, including cognitive behavior therapy (CBT), mindfulness-based relapse prevention, integrated smoking cessation and alcohol use treatment, cue exposure treatment, culturally specific group CBT, and pharmacological treatments such as nicotine replacement therapy (NRT), bupropion, varenicline, and fluoxetine (PUBMED:18855829, PUBMED:36645978, PUBMED:26320081, PUBMED:10563033, PUBMED:25566367, PUBMED:9388153, PUBMED:29650283, PUBMED:28224877, PUBMED:15764422, PUBMED:28239942). The question of whether funding sources influence research outcomes is a broader issue that pertains to potential conflicts of interest, bias, and the integrity of research findings. It is a topic that is often discussed in the context of scientific research, as funding from tobacco companies or pharmaceutical companies could potentially influence the design, conduct, or reporting of research studies. However, none of the provided abstracts discuss the source of funding or its potential impact on the research findings related to smoking cessation or nicotine's effects on cognitive performance. To answer the question about the influence of funding sources on research into smoking or nicotine and human cognitive performance, one would need to look at studies that specifically analyze the relationship between funding sources and research outcomes, which is not provided in the abstracts here.
Instruction: Can emergency medicine practitioners predict disposition of psychiatric patients based on a brief medical evaluation? Abstracts: abstract_id: PUBMED:24910963 Can emergency medicine practitioners predict disposition of psychiatric patients based on a brief medical evaluation? Objective: Emergency medicine practitioners (EMPs) often provide 'medical clearance' before evaluation by a psychiatry practitioner (PP). We set out to determine the level of agreement between EMP impression and disposition as determined by PPs. Patients And Methods: This was a prospective observational study in an urban tertiary teaching hospital emergency department. We collected data from February to April 2011. We used a convenience sample of patient encounters evaluated by EMPs and subsequent referral for psychiatric evaluation. We asked EMPs whether they thought the patients would be admitted or discharged following psychiatric evaluation, and if discharged, whether to outpatient psychiatric follow-up or to no follow-up. EMPs were asked to base their opinion upon their general impression following their brief medical evaluation. They were not given guidelines on which to base their decision. The EMPs were blind to PP decisions. The κ-statistic was used to calculate agreement between the EMP's impression and disposition decision by the PP. We excluded patients who were acutely intoxicated, in police custody, or lived in an extended care facility. Results: We included 156 patient encounters over the study period and had complete data for 152 encounters. Of these, 86 (55%) were admitted, 46 (30%) were discharged with no specific psychiatric follow-up, and 20 (13%) were discharged with a follow-up plan. EMPs predicted the exact disposition in 77/152 (51%) cases (κ=0.264, 95% confidence interval 0.77-0.333). Agreement was higher for admitted patients, with EMPs predicting inpatient admission for 57/86 (66%) of these patients. Other factors associated with higher agreement scores were years in emergency medicine practice by the EMP and suicidal ideation by the patient. Conclusion: EMPs did not reliably predict psychiatric disposition decisions based on clinical 'gestalt'. Future research will focus on clinical guidelines to help EMPs better independently assess need for emergency psychiatric services. abstract_id: PUBMED:37205042 Contemporary Practices for Medical Evaluation of the Psychiatric Patient in the Emergency Department. Although emergency department (ED) visits for patients with mental illness are frequent, medical evaluation (i.e., "medical screening") of patients presenting with psychiatric complaints is inconsistent. This may largely be related to differing goals for medical screening, which often vary according to specialty. Although emergency physicians typically focus on stabilization of life-threatening diseases, psychiatrists tend to believe that care in the ED is more comprehensive, which often places the two fields at odds. The authors discuss the concept of medical screening, review the literature on this topic, and offer a clinically oriented update to the 2017 American Association for Emergency Psychiatry consensus guidelines on medical evaluation of the adult psychiatric patient in the ED. abstract_id: PUBMED:23945897 Evaluation of emergency medical technicians intermediate prediction about their transported patients final disposition in emergency department of Imam Khomeini Hospital. This was a prospective cross-sectional study of consecutive transported patients by emergency medical service (EMS) to a referral hospital. The goal of this study was the evaluation of emergency medical technician intermediate prediction about their transported patients disposition in Emergency Department of Imam Khomeini Hospital. 2950 patients were transported to this hospital, Questionnaires were submitted in 300 of consecutive patient transports and completed data were obtained and available upon arrival at hospital for 267 of these cases. Emergency medical technicians intermediate (EMT-I) were asked to predict whether the transported patient would require admission to the hospital, and if so, what will be their prediction of patient actual disposition. Their predictions were compared with emergency specialist physicians. EMT-I predicted that 208 (78%) transports would lead to admission to the hospital, after actual disposition, 232 (%87) patients became admitted. The sensitivity of predicting any admission was 65%, with positive predictive value (PPV) of 39% and specificity of 86% with negative predictive value (NPV) of 94%. The sensitivity of predicting trauma patients (56.2% of total patients) was 55% with PPV of 38%, specificity of 86% and for Non-trauma patients' sensitivity was 80% with PPV of 40% and specificity of 82%. EMT-I in our emergency medical system have very limited ability in prediction of admission and disposition in transported patients and their prediction were better in Non-trauma patients. So in our EMS, the pre-hospital diversion and necessity of transporting policies should not be based on EMS personnel disposition. abstract_id: PUBMED:26915260 PEEC (psychiatric evaluation in emergency care) Psychiatric evaluation in emergency care (PEEC) course was developed by a surveillance committee of clinical care for suicide attempt patients of the Japanese Society of Emergency Medicine (JSEM). This course is applicable to patients with a variety of psychiatric state not only the suicide attempt person. The goals of the course are standardization of evaluation and the initial management of psychiatric symptoms in emergency care. And it is a course with an emphasis on interprofessional collaboration. It is expected that PEEC course is to bridge the emergency care and psychiatric care in each region. abstract_id: PUBMED:9893888 Evidence-based emergency medicine Evidence Based (Emergency) Medicine (EB(E)M is a term referring to the application into daily clinical practice of only those methods, procedures, medications etc. which are based on scientific evidence. Where diagnostic and therapeutic principles have not been validated on a prospective, controlled randomised basis, this should be tried out at a later time, if at all possible. This concept may allow to bridge the gap between research and clinical practice, and represents the major goal of EB(E)M. Protagonists of EBEM are at times confronted with criticism that EBEM does not constitute the only but one out of several possible approaches to quality controlled medical care. The fact that more than 50% of all emergency procedures are not evidence based give rise to the question as to whether the performance of randomised controlled studies is ethically justifiable, if control groups are included whose treatment leaves out generally recommended and recognised (though not evidence based) therapeutic and/or diagnostic principles. The following examples may enumerate some of the procedures, methods or medications, respectively, without proven scientific evidence: Medication for resuscitation of cardiac arrest victims. Medication for acute asthmatic attacks Initial treatment of uncontrolled haemorrhagic shock. Endotracheal intubation in VF/VT. The principle need for initial ventilation and the volumes of ventilation in cardiac arrest patients. Effectiveness of ACD- and VEST-CPR. A few typical examples are presented to illustrate the requirements of current study designs which have to be met before results of an evaluation are accepted by the EBEM scientists to obtain approval for application of a procedure, method or medication in clinical practice (large patient numbers, power calculations, ethical issues) as well as their benefits and drawbacks. abstract_id: PUBMED:21496139 Emergency medicine and psychiatry agreement on diagnosis and disposition of emergency department patients with behavioral emergencies. Objectives: The objective was to determine the level of agreement between emergency physicians (EPs) and consulting psychiatrists in their diagnosis and disposition of emergency department (ED) patients with behavioral emergencies. Methods: The authors conducted a prospective study at a university teaching hospital ED with an annual census of approximately 45,000 patients. During study hours, each time a psychiatric consultation was requested, the emergency medicine (EM) and consulting psychiatry residents were asked to fill out similar short questionnaires concerning their diagnoses and disposition decisions after they consulted with their attending physicians. EM and psychiatry residents were blinded to the other's assessment of the patient. Residents were asked about their evaluation of patients regarding: 1) psychiatric assessments, 2) if the patients presented a danger to themselves or others or were gravely disabled, and 3) the need for emergency psychiatric hospitalization. Results: A total of 408 resident physician pairs were enrolled in the study. Patients ranged in age from 5 to 92 years, with a median age of 31 years; 50% were female. The most common psychiatric assessments, as evaluated by either EPs, consulting psychiatrists, or both, were mood disorder (66%), suicidality (57%), drug/alcohol abuse (26%), and psychosis (25%). Seventy-three percent were admitted for acute psychiatric hospitalization. Agreement between EPs and psychiatrists was 67% for presence of mood disorder, 82% for suicidality, 82% for drug/alcohol abuse, 85% for psychosis, and 85% for grave disability. There was 67% agreement regarding patient eligibility for involuntary psychiatric hold. EPs felt confident enough to make disposition decisions 87% of the time; for these patients there was 76% agreement with consulting psychiatrists about the final disposition decision. Conclusions: The 67% agreement between EPs and consulting psychiatrists regarding need for involuntary hold, and 76% agreement regarding final disposition, demonstrate a substantial disagreement between EPs and psychiatrists regarding management and disposition of ED patients with psychiatric complaints. Further studies with patient follow-up are needed to determine the accuracy of the ED assessments by both EPs and consulting psychiatrists. abstract_id: PUBMED:8161059 Emergency medical evaluation of psychiatric patients. Study Objectives: To determine the completeness of documentation and accuracy of medical evaluation for a sample of emergency psychiatric patients. Design: Descriptive, retrospective chart review. Setting: Nine hundred-bed community teaching hospital with a voluntary psychiatric inpatient unit. Type Of Participants: Two hundred ninety-eight emergency department patients with psychiatric chief complaints, all of whom were admitted to the voluntary psychiatric unit of the same community teaching hospital. Interventions: None Measurements And Main Results: There was failure to document mental status at triage in 56% of patients. The most frequent process deficiencies in the medical evaluation were in the neurological examination. Twelve patients (4%) required acute medical treatment within 24 hours of psychiatric admission, and the ED history and physical examination should have identified an acute condition in 83%. The chart was documented "medically clear" in 80% of patients in whom medical disease should have been identified. Patients less than 55 years old had a four times greater chance of a missed medical diagnosis. Conclusion: Process deficiencies in the medical history and physical examination accounted for the vast majority of missed acute medical conditions. The statement "medically clear" is inaccurate and should be replaced by a thorough discharge note. abstract_id: PUBMED:20117904 Medical clearance of the psychiatric patient in the emergency department. Background: As part of the emergency department (ED) evaluation of patients with psychiatric complaints, emergency physicians are often asked to perform screening laboratory tests prior to admitting psychiatric patients, the value of which is questionable. Study Objective: To determine if routine screening laboratory studies performed in the ED on patients with a psychiatric chief complaint would alter ED medical clearance (evaluation, management or disposition) of such patients. Methods: In this retrospective chart review, the patient charts were reviewed for triage notes, history and physical examination, laboratory study results, and patient disposition. The study investigators subjectively determined if any of the laboratory abnormalities identified after admission would have changed ED management or disposition of the patient had they been identified in the ED. Results: Subjects were 519 consecutive adult patients (18 years of age and older) admitted to the Medical College of Georgia's inpatient psychiatric ward through the ED. There were 502 patients who met inclusion criteria, and 50 of them had completely normal laboratory studies. Laboratory studies were performed in the ED for 148 patients. The most common abnormalities identified were positive urine drug screen (n = 221), anemia (n = 136), and hyperglycemia (n = 139). There was one case (0.19%) identified in which an abnormal laboratory value would have changed ED management or disposition of the patient had it been found during the patient's ED visit. Conclusions: Patients presenting to the ED with a psychiatric chief complaint can be medically cleared for admission to a psychiatric facility by qualified emergency physicians using an appropriate history and physical examination. There is no need for routine medical screening laboratory tests. abstract_id: PUBMED:28611885 American Association for Emergency Psychiatry Task Force on Medical Clearance of Adult Psychiatric Patients. Part II: Controversies over Medical Assessment, and Consensus Recommendations. Introduction: The emergency medical evaluation of psychiatric patients presenting to United States emergency departments (ED), usually termed "medical clearance," often varies between EDs. A task force of the American Association for Emergency Psychiatry (AAEP), consisting of physicians from emergency medicine, physicians from psychiatry and a psychologist, was convened to form consensus recommendations for the medical evaluation of psychiatric patients presenting to U.S.EDs. Methods: The task force reviewed existing literature on the topic of medical evaluation of psychiatric patients in the ED and then combined this with expert consensus. Consensus was achieved by group discussion as well as iterative revisions of the written document. The document was reviewed and approved by the AAEP Board of Directors. Results: Eight recommendations were formulated. These recommendations cover various topics in emergency medical examination of psychiatric patients, including goals of medical screening in the ED, the identification of patients at low risk for co-existing medical disease, key elements in the ED evaluation of psychiatric patients including those with cognitive disorders, specific language replacing the term "medical clearance," and the need for better science in this area. Conclusion: The evidence indicates that a thorough history and physical examination, including vital signs and mental status examination, are the minimum necessary elements in the evaluation of psychiatric patients. With respect to laboratory testing, the picture is less clear and much more controversial. abstract_id: PUBMED:31558931 Assessing the knowledge of emergency medical care practitioners in the Free State, South Africa, on aspects of pre-hospital management of psychiatric emergencies. Introduction: Studies have reported that emergency medical care practitioners (EMCPs) encounter challenges when attending to psychiatric emergencies. The EMC provider's ability to understand, assess and manage psychiatric emergencies has been reported to be poor due to limited knowledge and insufficient training. In South Africa (SA), little is known about the knowledge of EMCPs on pre-hospital management of psychiatric emergencies. The objective of this study was to assess the knowledge of EMCPs working in the Free State province on aspects of pre-hospital management of psychiatric emergencies. Methods: This descriptive study used a questionnaire survey to obtain data on the knowledge of EMCPs on aspects of pre-hospital management of psychiatric emergencies. Results: Only 159 of the initial 192 questionnaires distributed were returned, giving a response rate of 82.8%. The majority (87.4%) of the participants reported inadequate knowledge of pre-hospital management of psychiatric emergencies. More than a third of the participants reported that they are not knowledgeable on how to assess a psychiatric patient (P &lt; 0.01), 64.2% and 73.6% (P &lt; 0.001 in both cases) could not perform mental status examination and lack the knowledge of crisis intervention skills for managing a psychiatric emergencies. The majority (76.7%; P &lt; 0.001) of the participants are not conversant with the Mental Health Care Act 2002 (Act no. 17 of 2002). Finally, participants (94.3% and 86.8%, respectively; P &lt; 0.001) agree that teaching and prior exposure to a psychiatric facility, as in work integrated learning, will empower EMC graduates with skills required to effectively manage psychiatric emergencies. Conclusion: EMC practitioners are often the first healthcare professionals arriving at any scene of medical emergencies including psychiatric emergencies. To avoid malpractices, which could be detrimental to patient's health, it is of utmost importance that EMCPs are well trained and equipped to manage any form of medical emergency including those involving psychiatric patients. Answer: No, emergency medicine practitioners (EMPs) cannot reliably predict the disposition of psychiatric patients based solely on a brief medical evaluation. A study found that EMPs predicted the exact disposition in only 51% of cases, with a κ-statistic of 0.264, indicating only fair agreement between EMP impressions and the disposition decisions made by psychiatry practitioners (PUBMED:24910963). The study concluded that EMPs did not reliably predict psychiatric disposition decisions based on clinical 'gestalt' and suggested that future research should focus on clinical guidelines to help EMPs better independently assess the need for emergency psychiatric services.
Instruction: Is the stomach a main landmark on the abdominal circumference? Abstracts: abstract_id: PUBMED:26142212 Is the stomach a main landmark on the abdominal circumference? Audit of 3 operators Objectives: The fetal weight estimation depends largely on the accuracy of abdominal circumference. The quality criteria are standardized to minimize variability and include visualization of the stomach. The objective of this study is to investigate the presence or absence of the stomach on the abdominal circumference for 3 different operators. Material And Methods: We conducted re-reading of 204 ultrasound images in the second and third trimester of pregnancy, performed by three operators, at the maternity Port Royal in Paris in 2013. On these images, the presence of the stomach was sought and other quality criteria were verified. Results: Among the 204 images, 166 included the stomach (81%). When studying for each of the three operators, there were 79%, 72% and 98% of the stomach into the abdominal circumference, a significant difference between operators (P=0.0029). Regarding the secondary criteria, the quality criteria found most often were the calipers and well placed ellipse (89%). Conclusion: According to the previous studies, the stomach seems to be a reference to search. Our study shows variability depending on the different operators. If a hierarchy of quality criteria is to be considered, the stomach does not seem to be the first criterion to search. abstract_id: PUBMED:9664826 The fetal stomach circumference/abdominal circumference ratio: a possible parameter in assessing fetal stomach size. Nonvisualized, dilated or even a small fetal stomach can be associated with a variety of anomalies and poor fetal outcome. Therefore, we attempted to evaluate the stomach circumference (SC)/abdominal circumference (AC) ratio to assess normal limits of fetal stomach size. A total of 363 fetuses ranging from 15 to 39 weeks' gestation were prospectively evaluated with ultrasonography. The SC was measured from a plane that is perpendicular to the fetal longitudinal axis at the level where the largest axial circumference of the stomach was obtained using a digitizer. The AC was also measured at the same section and the ratio was calculated by dividing the SC by the AC and multiplying by 100. The SC increased linearly from 15 to 24 weeks and showed fluctuations in size thereafter to 39 weeks. A strong correlation was noted between gestational age and both SC (r: 0.842, P &lt; 0.0001) and AC (r: 0.975, P &lt; 0.0001). The SC/AC ratio was normally distributed with a mean of 20.4 +/- 3.9% and ranged between 14.8% and 27.03% throughout pregnancy (r: 0.021, P &gt; 0.05). Although the fetal stomach is a dynamically changing organ, the SC/AC ratio can be considered as a potentially useful parameter in assessing fetal stomach size. abstract_id: PUBMED:23565461 Glycemic control and fetal abdominal circumference. Aim: To study about the correlation between the glycemic status and increase in fetal abdominal circumference in gestational diabetes patients and its relationship with fetal birth weight. Materials And Methods: Seventy-five gestational diabetes mellitus (GDM) patients were taken up for study with duly informed consent and suggested for anthropometry profile and glycemic profile with HbA1C. Fetal abdominal circumference was measured during routine scans. The patients were followed up till delivery and the fetal birth was noted. Inclusion Criteria: Seventy-five gestational diabetic mothers who have attended a secondary level diabetic clinic and on regular follow-up were included in the study. Exclusion Criteria: Pre-GDM mothers, patients with co-morbid disease were excluded from the study. Expected Results: Fetal abdominal circumference correlated well with fluctuating glycemic control and fetal birth weight. abstract_id: PUBMED:32117842 Reference Values for Abdominal Circumference in Premature Infants. Objectives: Abdominal distention is a common indicator of feeding intolerance in premature newborns. In the absence of a precise definition, abdominal distention and its degree are highly subjective. The aim of this study was to construct references and smoothed percentiles for abdominal circumference (AC) and AC to head circumference (HC) ratio (AC/HC) in infants born between 24 weeks and 34 weeks of gestational age. Methods: ACs and HCs were collected weekly in eutrophic premature infants without congenital abdominal or cerebral malformation. AC and HC charts were modeled using the LMS method, excluding measures associated with abdominal distention at clinical examination or intracranial abnormality at cerebral ultrasounds. Changes in AC and AC/HC over time were studied by repeated-measures analysis using mixed-effects linear models. Results: A total of 1,605 measurements were made in 373 newborns with a mean gestational age of 31 [29-33] weeks and mean birth weight of 1,540 [1,160-1,968] g. Of these measurements, 1,220 were performed in normal conditions. Gestational age, postnatal age, singleton status, and respiratory support were significantly associated with AC and AC/HC. LMS curves were generated according to gestational age groups and postnatal age, with coherent profiles. AC/HC was 0.91 [0.86-0.95] in absence of abdominal distention. It was higher in cases of abdominal distention (0.95 [0.89-1.00], p &lt; 0.001) and necrotizing enterocolitis (0.98 [0.93-1.07], p &lt; 0.001). Conclusions: References constructed for AC and AC/HC might be used to assess feeding tolerance in premature infants. AC/HC was more relevant than AC to rationalize the diagnosis of abdominal distention. abstract_id: PUBMED:23285442 The waist circumference measurement: a simple method for assessing the abdominal obesity. Introduction: Excess abdominal fat is an independent predictor of the risk factors and the morbidity of obesity related diseases such as type 2 diabetes, hypertension, dyslipidaemia and cardiovascular diseases. The Waist Circumference (WC) is positively correlated with the abdominal fat. Hence, the waist circumference is a valuable, convenient and a simple measurement method which can be used for identifying the individuals who are at an increased risk for the above mentioned diseases. Objectives: To assess the abdominal obesity by measuring the waist circumference among the women who were aged 20 years and above in an urban slum of Chennai, India.To identify the socio -demographic factors which were associated with the abdominal obesity in the above study population. Settings And Design: A community based and a cross sectional study was carried out in an urban slum of Chennai, India. Methods And Materials: The present study was undertaken in an urban slum of Chennai city, among the women who were aged 20 years and above. One slum was selected randomly and the households in the slum were sampled by a systematic random sampling method. A pre-designed and a pre-tested questionnaire was used to collect the information regarding the socio-demographic profile of the women. Their waist circumference was measured by using a flexible inch tape. As per the World Health Organization (WHO), the International Obesity Task Force (IOTF) and the International Association for the Study of Obesity (IASO)(2000), the following cut off values for the waist circumference were used to assess the abdominal obesity for women: WC&lt;80cms - normal and WC ≥ 80cms-abdominal obesity. Statistical Analysis: It was done by using the Statistical Package For Social Science (SPSS ), version 11.5. The prevalence was expressed in percentage and the Chi square test was used to find its association with the factors. Results: In the study population, the prevalence of abdominal obesity (WC ≥ 80 cms) was 29.8% (95% Confidence Interval [CI] 25.9-34 %). A significant association was found between the age, religion, a higher socio-economic status and the abdominal obesity. No significant association was noted between the educational status, occupation, marital status, type of family and the abdominal obesity. Conclusion: Abdominal obesity among the urban slum women is on the rise. The abdominal obesity was found to be significantly higher among the slum women with increasing age and in those who belonged to the muslim religion and to a higher socio-economic status. abstract_id: PUBMED:35003776 Relationship between Serum Leptin Values and Abdominal Circumference Assessed in the First Trimester of Pregnancy in Obese Women. Higher serum leptin values have been correlated with several complications of pregnancy, while the abdominal circumference (AC) is an anthropometric measurement usually used to diagnose abdominal obesity. For this study we compared serum leptin levels between two groups of pregnant women: those with abdominal obesity (AC&gt;88cm) and those without abdominal obesity (AC&lt;88cm), diagnosed at the beginning of pregnancy. Serum leptin levels were obtained through measurements performed between 11-14 and 28-32 weeks of gestation. Higher serum leptin levels were found in the group of pregnant women with abdominal obesity comparing to the group without abdominal obesity, but a positive correlation between abdominal circumference and the leptin values was found only at 11-14 weeks of pregnancy. Our study showed that a simple measurement of AC performed at the beginning of pregnancy may predict the average serum levels of leptin throughout the gestation period. A better understanding of the role of leptin during development is needed to make us able to know how maternal-placental-fetal leptin exchange influence the fetal development. abstract_id: PUBMED:27688892 Changes in waist circumference and the prevalence of abdominal obesity during 1994-2008 - cross-sectional and longitudinal results from two surveys: the Tromsø Study. Background: Abdominal obesity increases all-cause mortality and is a risk factor for a number of diseases. There are few population-based studies of the longitudinal changes of abdominal obesity. Methods: Based on data from the Tromsø Study, we studied gender- and age-specific mean waist circumference and prevalence of abdominal overweight and abdominal obesity in two surveys in 1994-1995 (Tromsø 4, 6812 men and women aged 25 to 84) and 2007-2008 (Tromsø 6, 12,493 men and women aged 30 to 87). Furthermore, we describe the longitudinal changes of waist circumference and abdominal obesity during 13 years in 3144 subjects (aged 25-69 in 1994) who attended both surveys. Results: Cross-sectional analyses found a higher mean waist circumference in men than women and a direct relationship with age in both men and women in both Tromsø 4 and in Tromsø 6. As the WHO cut-off points for abdominal obesity are gender-specific, however, the prevalence of abdominal obesity was lower in men than in women. In 2007-2008, approximately 37 and 55 % of men and women, respectively, were classified as abdominally obese. Thirteen years before, in 1994-1995, the corresponding figures were 20 and 35 %. Longitudinal analyses of changes during the 13-year period clearly demonstrated that mean waist circumference increased in all examined birth cohorts in both men (mean change 6.1 cm) and women (mean change 8.4 cm), but increased more markedly the younger the subjects were. The prevalence of abdominal obesity in men aged 25-34 increased from 5 % in 1994 to 31 % 13 years later. The prevalence of abdominal obesity more than doubled among both men and women. Conclusions: The increasing mean waist circumference is of concern. There is a need for further longitudinal studies of the changes in waist circumference. abstract_id: PUBMED:35314534 Examination of the Validity of the Abdominal Circumference Measurement Method Used for Visceral Fat Evaluation Using Computed Tomography Purpose: The effects of obesity, including metabolic syndrome, on health are of concern around the world. In Japan, if the abdominal circumference of a man is 85 cm or more and the abdominal circumference of a woman is 90 cm or more, the visceral fat area is equivalent to 100 cm2 or more. The evaluation of visceral fat by abdominal circumference is based on the influence of tissues other than visceral fat. The purposes of this study were to compare the abundance of visceral fat area and subcutaneous fat area and to examine the validity of estimating the visceral fat area by the abdominal circumference measurement method. Methods: We obtained data on visceral fat area, subcutaneous fat area, and abdominal circumference of 822 subjects. Next, we investigated the correlation between the abdominal circumference and the visceral fat area, the abdominal circumference and the subcutaneous fat area, and the visceral fat area and the subcutaneous fat area of the subjects. Result: The abdominal circumference and visceral fat area were highly correlated, but the sum of visceral fat area and subcutaneous fat area was more highly correlated. There was a low correlation between the visceral fat area and the subcutaneous fat area. When the abdominal circumference exceeded the standard value, there was no correlation between the visceral fat area and the subcutaneous fat area. We concluded that there is a correlation between abdominal circumference and BMI and visceral fat area, but when these are used for estimation of visceral fat area, they are not accurate due to many errors. abstract_id: PUBMED:21141102 Laparoscopic treatment of a large trichobezoar in the stomach with gastric perforation and abdominal wall abscess Trichobezoar represents a mass of swallowed hair inside the stomach. Here we report a 17-year-old girl who presented in our department with symptoms of gastric ulcer. Ultrasound examination followed by upper endoscopy revealed a large trichobezoar in the stomach with simultaneous gastric perforation. Laparoscopy also revealed a penetration into the anterior abdominal wall accompanied by abscess at this level. We performed a laparoscopic gastrotomy with trichobezoar extraction and laparoscopic treatment of perforation and abdominal wall abscess. The postoperative evolution was normal and the patient was discharged on the fifth postoperative day. We show that laparoscopic approach may be safely used in the treatment of the large gastric complicated trichobezoar. Several laparoscopic approaches were described for the treatment of tricobezoar and its complications but as far as we know this is the first report of laparoscopic treatment of large tricobezoar and associate gastric perforation. abstract_id: PUBMED:29936734 Epidemiological characteristics of waist circumference and abdominal obesity among Chinese children and adolescents aged 6-17 years Objective: To analyze the epidemiological characteristics of waist circumference and abdominal obesity among Chinese children and adolescents aged 6-17 years. Methods: Data was from the samples of aged 6-17 years in the China National Nutrition and Health Surveillance program in 2010-2012. P(90) (the same age, the same sex) was used as the diagnostic value for abdominal obesity. Results: The overall waist circumference of children and adolescents in all the age groups appeared higher in males than that in females (P&lt;0.000 1), higher in cities than that in the rural areas (P&lt;0.05), and higher in children with high family income than those with middle or low family incomes (P=0.000 3). The rate of abdominal obesity in children and adolescents aged 6-17 years appeared as 11.2% on average and 10.7% and 11.8% for boys and girls, respectively but with no significant difference (P&gt;0.05). Rates on abdominal obesity appeared as 13.2% and 8.5% for boys while as 12.3% and 11.2% for girls respectively, in urban or rural areas. As for the levels of family income, the abdominal obesity rates appeared as 15.8%, 11.5% and 8.8% respectively for boys while 13.5%, 11.9% and 11.6% respectively for girls, under high, middle and low levels of family income. Conclusion: The rate of abdominal obesity in boys seemed more responsive to the impact of income in urban or rural areas. Answer: The stomach is considered a reference point in the measurement of abdominal circumference, but its importance as a main landmark can vary. According to a study that investigated the presence of the stomach on the abdominal circumference for three different operators, the stomach was included in 81% of the 204 ultrasound images examined. However, there was a significant difference between operators, with the presence of the stomach ranging from 72% to 98% in the abdominal circumference measurements. This suggests that while the stomach is often included as a quality criterion in abdominal circumference measurements, there is variability depending on the operator, and it may not be the first criterion to search for if a hierarchy of quality criteria is considered (PUBMED:26142212). In the context of fetal weight estimation, the accuracy of abdominal circumference is crucial, and the quality criteria are standardized to minimize variability, which includes visualization of the stomach. However, the study mentioned above indicates that the inclusion of the stomach in these measurements is not consistent across all operators, which could impact the standardization of this practice. In summary, while the stomach is frequently used as a reference in abdominal circumference measurements, its role as a main landmark is not absolute and can be subject to operator-dependent variability.
Instruction: Lower urinary tract symptoms in patients with erectile dysfunction: is there a vascular association? Abstracts: abstract_id: PUBMED:15960225 The association between erectile function and severity of lower urinary tract symptoms. It is unclear whether the erectile dysfunction (ED) that frequently occurs with lower urinary tract symptoms (LUTS) may have a common causative factor: sympathetic overactivity. The aim of this study was to evaluate the association between ED and LUTS. From June 1998 to March 2000, 75 male patients, presenting with LUTS, enrolled into the present study. A total of 63 patients were included into the study, age ranging from 51 - 74 years (mean 61.5). Allpatients completed an American Urological Association (AUA) symptom severity index and IIEF-5 questionnaires. The results from the present study demonstrated that the AUA symptom and IIEF-5 scores do not correlate with increasing age. When the statistical analyses were performed for each age group, there were no significant differences in mean IIEF-5 values between any degree of AUA symptom score in the same age group (p &gt; 0.05). The present results indicate that there is no association between the degree of LUTS and the erectile function. Moreover, the statistical analyses of the association between any degree of erectile function and the mean A UA symptom score either for obstructive or irritative symptoms revealed no significant differences (p &gt; 0.05). The present study demonstrates that there is no association between BPH and erectile function in any age group, inconsistent with the sympathetic overactivity theory. abstract_id: PUBMED:18923414 Association between ED and LUTS in Japanese motorcyclists. Recently, the association between motorcycling and erectile dysfunction (ED) has been reported. Also, lower urinary tract symptoms (LUTS) were reported to be associated with ED. The aim of this study is to evaluate the association of ED with LUTS in motorcyclists. We investigated the prevalence and the status of ED using a 5-item version of the International Index of Erectile Function (IIEF-5) in 150 motorcyclists. ED was diagnosed when the IIEF-5 score was less than 17. The International Prostate Symptom Score (IPSS) was also applied, and the relationship between IIEF-5 and IPSS was evaluated. Of the 150 motorcyclists, 37 (25%) had ED, and 31 (21%) had moderate or severe symptoms of LUTS (IPSS &gt;or=8). The IIEF-5 was significantly associated with the severity of IPSS (P=0.002) and age (P&lt;0.0001). The IIEF-5 was also significantly associated with the scores of both voiding (P&lt;0.0001) and storage symptoms (P=0.001). On stepwise logistic regression analysis, age and storage symptoms are independent risk factors for ED in motorcyclists. LUTS seemed to be associated with ED in motorcyclists. abstract_id: PUBMED:15586029 The association between lower urinary tract symptoms and sexual dysfunction: fact or fiction? Purpose Of Review: Both lower urinary tract symptoms and sexual dysfunction are common conditions in aging men. In the past few years, increasing attention has been paid to the question of whether these conditions are associated in any way. The conventional belief of the majority of urologists so far has been that the coexistence of these two conditions is coincidental. This review was conducted to discuss the recent literature on this issue. In addition we will concisely address the effect of the different treatment modalities on sexual function. Recent Findings: Several trials were conducted recently to assess the association between lower urinary tract symptoms and sexual dysfunction. These trials all made use of self-reported questionnaires on urinary symptoms and sexual function sent to representative samples of the community. The occurrence of urinary symptoms appeared to be a risk factor for the development of erectile dysfunction, ejaculation disorder and decreased libido. Importantly, this association was independent of age and various co-morbidities. Summary: In contrast to long-established opinion, an age-independent association between lower urinary tract symptoms and sexual dysfunction was found in several recent population-based trials. The nature of this association, however, remains unclear. Future study is required to gain a better understanding of the possible underlying mechanisms. abstract_id: PUBMED:15964128 Lower urinary tract symptoms in patients with erectile dysfunction: is there a vascular association? Objective: To assess if there is an underlying vascular association between lower urinary tract symptoms (LUTS) and erectile dysfunction (ED). Materials And Methods: A total of 374 male patients with ED were enrolled in this study. Patients were interviewed for ED using the International Index of Erectile Function. Assessments for penile vasculature using color Doppler ultrasonography and rigidometer were performed. All patients were also screened for LUTS using the International Prostate Symptoms Score. All patients underwent routine laboratory investigation, plus total testosterone and prolactin assessment. Results: Mean age+/-SD was 54.4+/-8.9 years with an age range of 28-84 years. Eighty-five percent of the patients had organic causes of ED. Of the patients, 80.7% had different degrees of LUTS. There was a significant association between presence of LUTS and: arteriogenic and neurogenic causes of ED, poor response to intracorporeal injection, poor rigidity in the rigidometer, and low peak systolic velocity of the cavernosal arteries (p &lt; 0.05 for each). No significant association was found between presence of LUTS and increasing values of end diastolic velocity or decreasing values of resistive index of the cavernosal arteries (p &gt; 0.05 for each). There was a significant association between the higher degrees of LUTS and the decreasing values of peak systolic velocity (p &lt; 0.05). Conclusions: This study demonstrated that reduced peak systolic velocity of the cavernous artery does associate with LUTS in patients with erectile dysfunction. Further studies are warranted to address the underlying vascular mechanisms of ED in patients with LUTS. abstract_id: PUBMED:28390836 Prevalence of metabolic syndrome and its association with lower urinary tract symptoms and sexual function. Objectives: To estimate the frequency of metabolic syndrome (MetS) in a daily urology practice and to determine its association with lower urinary tract symptoms (LUTS) and erectile dysfunction (ED). Material And Methods: A retrospective study was conducted. Data from all male patients aged ≥40 years who attended our outpatient urology clinic from 2010 to 2011 was collected. Prevalence of MetS was determined, and LUTS and ED were assessed. A logistic model was used to determine possible associations, controlling for confounders and interaction factors. Results: A total of 616 patients were included. MetS was observed in 43.8% (95% CI 39.6-48.3). The bivariate model showed an association between MetS and LUTS (p&lt;0.01), but not between MetS and ED. The logistic model showed an association between MetS and the International Prostate Symptom Score (IPSS), while controlling for other variables. Patients exhibiting moderate LUTS had a greater risk for MetS than patients with mild LUTS (OR 1.83, 95% CI 1.14-2.94). After analyzing for individual components of MetS, positive associations were found between diabetes and severe LUTS (OR 1.3, 95% CI 1.24-7.1), and between diabetes and ED (OR 2.57, 95% CI 1.12-5.8). Conclusion: This study was able to confirm an association between MetS and LUTS, but not for ED. Specific components such as diabetes were associated to both. Geographical differences previously reported in the literature might account for these findings. Given that MetS is frequent among urological patients, it is advisable that urologists actively screen for it. abstract_id: PUBMED:17146933 Association between lower urinary tract symptoms and erectile dysfunction Lower urinary tract symptoms (LUTS) and erectile dysfunction (ED) are highly prevalent in aging men and both of the conditions have a significant impact on the quality of life. In the past few years, various epidemiological trials were conducted to assess the association between LUTS and ED. These studies showed that LUTS, particularly the voiding symptoms, nocturia and the others caused by LUTS, independently increased the incidence of ED. There are some factors involved in the link between LUTS and ED: (1) rho-kinase expression/activity increased; (2) nitric oxide release decreased and corpus cavernosum smooth muscle contraction strengthened due to endothelin-1; (3) the composition of myosin isoform altered; (4) sympathetic hyperactivity and innervation of the corpus cavernosum smooth muscles decreased. These findings concerning the relationship between LUTS and ED have offered some new insights into the evaluation and treatment of patients with these conditions. The present paper briefly reviews the recent studies of the association between LUTS and ED. abstract_id: PUBMED:26676693 Association between Metabolic Syndrome and Lower Urinary Tract Symptoms: Evidences from Epidemiological Studies. Metabolic syndrome (MS) and lower urinary tract symptoms (LUTS) are both highly prevalent problems of public health in the modern era. MS has been demonstrated to increase the risks for developing cardiovascular, kidney and liver diseases, and recent epidemiological evidences have indicated a possible link between MS and urological diseases, such as erectile dysfunction (ED) in men and LUTS in both sexes. The relationship between MS and LUTS was first described by Hammarsten et al. and concluded that men with MS risk factors had a larger prostate volume and a faster growth rate. Several consequent studies have also supported the association between MS and LUTS suggestive of benign prostatic hyperplasia (BPH) in men. However, studies have reported that the female lower urinary tract was affected by the components of MS as well. However, two recent surveys did not find a significant association between MS and LUTS. To date, this association remains unclear, and future longitudinal studies are needed to further clarify the controversy. abstract_id: PUBMED:16574595 Association between lower urinary tract symptoms and erectile dysfunction. Our aim was to assess the association between lower urinary tract symptoms (LUTS) and erectile dysfunction by means of International Prostate Symptom Score (IPSS) and Sexual Health Inventory for Men (SHIM) questionnaire. A total of 69 eligible patients who were admitted to our outpatient clinic with lower urinary tract symptoms were included in the study. A self administered questionnaire of IPSS and SHIM were given to patients. Demographics and medical history data were recorded. Any risk factor that may be associated with erectile dysfunction, including coronary artery disease, diabetes, hypertension, and smoking status, was determined in each patient. Correlation tests were used to examine the relationship between lower urinary tract symptoms and erectile dysfunction by controlling the effects of age and comorbidities. Mean age was 58.6 +/- 13.1 31-86 years. Mean SHIM and IPSS total score was 14.3 +/- 7.5 and 11.5 +/- 8.1, respectively. Spearman correlation coefficient between IPSS and SHIM scores was found to be -0.41. There was a significant negative correlation with IPSS total scores of moderate degree when both age and presence of risk factor was controlled (r = -0.31; p = 0.009). Storage symptom scores showed significant correlation with SHIM scores (r = -0.33; p = 0.000). The association between SHIM score and each item of IPSS showed significant correlation for urgency, straining and nocturia when age controlled. The degree of bother by LUTS as determined by the IPSS quality of life question was also correlated with SHIM scores; however, this correlation was not significant when age or risk factor for ED was controlled. The presence of LUTS especially storage symptoms is strongly associated with erectile dysfunction independent of age and comorbidities. abstract_id: PUBMED:21593760 Association of lower urinary tract symptoms and erectile dysfunction: pathophysiological aspects and implications for clinical management. There is strong evidence from multiple epidemological studies that lower urinary tract symptoms (LUTS) and erectile dysfunction (ED) are correlated, independent of age or comorbidities as diabetes or hypertension. Although a direct causal relationship is not established yet, four pathophysiological mechanisms can explain the relationship. These include alteration in nitric oxide bioavailability, α1-adrenergic receptor hyperactivity, pelvic atherosclerosis and sex hormones. This association has different clinical implications on the management of both disorders. Men seeking care for one condition should always be screened for complaints of the other condition. Sexual function should be assessed and discussed with the patient when choosing the appropriate management strategy for LUTS, as well as when evaluating the patient's response to treatment. Multiple large clinical trials have shown an improvement in LUTS after phosphodiesterase-5 (PDE5)-inhibitor treatment. PDE5 inhibitors show promise as a future treatment for LUTS, either in conjunction with existing therapies or as a primary treatment. There may be a potential therapeutic role for testosterone in LUTS treatment in cases of testosterone deficiency that needs to be investigated. Much further investigation is required, but it is evident that the association between LUTS and ED is fundamental for future therapies and possible preventative strategies. abstract_id: PUBMED:18266709 Benign prostatic hyperplasia evaluation, treatment and association with sexual dysfunction: practice patterns according to physician specialty. Aims: Lower urinary tract symptoms suggestive of benign prostatic hyperplasia (LUTS/BPH) are a common problem in ageing men and are accompanied by sexual dysfunction (SD) in 40-70% of men evaluated in large-scale epidemiological studies. One year after the 2003 American Urological Association (AUA) guideline on BPH management was published, a survey of US urologists (UROs) and primary care physicians (PCPs) was conducted to ascertain physician knowledge of the AUA guideline and practice patterns regarding LUTS/BPH diagnosis, treatment and association with SD. Methods: A 19-question qualitative survey, sponsored by the American Foundation of Urologic Disease, was mailed April 2004 to 7500 UROs and 17,500 PCPs, with responses collected until May 2004. Results: A total of 788 surveys were returned (437 UROs; 351 PCPs). Only 62% of PCPs were aware of and only 41% of PCPs used the AUA-Symptom Index/International Prostate Symptom Score (AUA-SI/IPSS) to assess LUTS compared with 97% and 81% of UROs respectively. Alpha-blocker monotherapy was the treatment of choice for both UROs and PCPs. Compared with UROs, PCPs reported higher rates of SD in association with LUTS or BPH (37% vs. 27%) and BPH pharmacotherapy (27% vs. 21%). UROs and PCPs reported higher rates of SD side effects [ejaculatory dysfunction (EjD) and erectile dysfunction (ED)] for tamsulosin (EjD: UROs 22%, PCPs 12%; ED: UROs 7%, PCPs 10%) and doxazosin (EjD: UROs 14%, PCPs 10%; ED: UROs 7%, PCPs 12%) than for alfuzosin (EjD: UROs 6%, PCPs 4%; ED: UROs 4%, PCPs 5%). Conclusions: The results suggest that many PCPs are not using the AUA-SI/IPSS to assess LUTS in their ageing male patients. Both UROs and PCPs appear to be underestimating the prevalence of SD in men with LUTS/BPH relative to prevalence rates reported in large-scale epidemiological studies. Answer: The association between lower urinary tract symptoms (LUTS) and erectile dysfunction (ED) has been explored in various studies, with some suggesting a potential vascular link. A study by Shafik et al. (PUBMED:15964128) aimed to assess if there is an underlying vascular association between LUTS and ED. The study enrolled 374 male patients with ED and assessed them for penile vasculature using color Doppler ultrasonography and rigidometer, in addition to screening for LUTS using the International Prostate Symptoms Score. The results indicated a significant association between the presence of LUTS and arteriogenic and neurogenic causes of ED, poor response to intracorporeal injection, poor rigidity in the rigidometer, and low peak systolic velocity of the cavernosal arteries. Specifically, there was a significant association between higher degrees of LUTS and decreasing values of peak systolic velocity. This study demonstrated that reduced peak systolic velocity of the cavernous artery does associate with LUTS in patients with ED, suggesting a vascular component to the relationship between LUTS and ED. In contrast, another study (PUBMED:15960225) found no association between the degree of LUTS and erectile function, challenging the sympathetic overactivity theory that had been proposed as a common causative factor for both conditions. This study included 63 male patients with LUTS and used the American Urological Association (AUA) symptom severity index and IIEF-5 questionnaires to evaluate the association between ED and LUTS. The results showed no significant differences in mean IIEF-5 values between any degree of AUA symptom score in the same age group, indicating no association between BPH and erectile function in any age group. Overall, while some studies suggest a vascular link between LUTS and ED, the evidence is not entirely consistent, and further research is warranted to fully understand the underlying mechanisms and associations between these two conditions.
Instruction: Do children with ADHD and/or PDD-NOS differ in reactivity of alpha/theta ERD/ERS to manipulations of cognitive load and stimulus relevance? Abstracts: abstract_id: PUBMED:19059001 Do children with ADHD and/or PDD-NOS differ in reactivity of alpha/theta ERD/ERS to manipulations of cognitive load and stimulus relevance? Objective: We examined whether the method of event-related (de-)synchronization (ERD/ERS) revealed differential effects of selective attention and working memory load in children (8-11 years) with pervasive developmental disorder - not otherwise specified (PDD-NOS) or attention-deficit/hyperactivity disorder (ADHD). Methods: Fifteen healthy controls and three equally large groups of children with symptoms of PDD-NOS, ADHD or both (PDD/HD) performed a visual selective memory search task. The EEG was recorded from which occipital alpha and frontal theta were derived. Results: The effects of the overall task manipulations of task load, relevance and target/nontarget were clearly present in the overall analyses of alpha and theta ERD/ERS. However, no significant differences with respect to these manipulations existed between any of the subject groups. Conclusions: The results supply no evidence for a distinction in information processing abilities of selective attention and working memory as reflected by alpha and theta ERD/ERS between children diagnosed with either ADHD, PDD-NOS or healthy controls. Significance: Alpha and theta ERD/ERS are sensitive to manipulations of task load, relevance and target/nontarget, but supply no additional information on possible group differences in comparison to the more frequently used method of event-related potentials. abstract_id: PUBMED:37103259 Alpha ERS-ERD Pattern during Divergent and Convergent Thinking Depends on Individual Differences on Metacontrol. The role of metacontrol in creativity is theoretically assumed, but experimental evidence is still lacking. In this study, we investigated how metacontrol affects creativity from the perspective of individual differences. Sixty participants completed the metacontrol task, which was used to divide participants into a high-metacontrol group (HMC) versus a low (LMC) group. Then, these participants performed the alternate uses task (AUT; divergent thinking) and the remote associates test (RAT; convergent thinking), while their EEG results were recorded continuously. Regarding their behavior, the HMC group showed superior creative performance in the AUT and RAT, compared with the LMC group. For the electrophysiology, the HMC group showed larger stimulus-locked P1 and P3 amplitudes than the LMC group. Furthermore, the HMC group exhibited smaller alpha desynchronization (ERD) than the LMC group at the initial stages of the AUT task, followed by a flexible switching between alpha synchronization and desynchronization (ERS-ERD) during the process of selective retention in the AUT. In addition, the HMC group evoked smaller alpha ERD during the initial retrieval and the backtracking process in the RAT, associated with cognitive control adaptability. The aforementioned results indicate that metacontrol reliably contributes to the idea generation process, and HMC individuals could flexibly adjust their cognitive control strategies according to the demand for creative idea generation. abstract_id: PUBMED:16442346 The effects of memory load and stimulus relevance on the EEG during a visual selective memory search task: an ERP and ERD/ERS study. Objective: Psychophysiological correlates of selective attention and working memory were investigated in a group of 18 healthy children using a visually presented selective memory search task. Methods: Subjects had to memorize one (load1) or 3 (load3) letters (memory set) and search for these among a recognition set consisting of 4 letters only if the letters appeared in the correct (relevant) color. Event-related potentials (ERPs) as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS) were derived from the EEG that was recorded during the task. Results: In the ERP to the memory set, a prolonged load-related positivity was found. In response to the recognition set, effects of relevance were manifested in an early frontal positivity and a later frontal negativity. Effects of load were found in a search-related negativity within the attended category and a suppression of the P3-amplitude. Theta ERS was most pronounced for the most difficult task condition during the recognition set, whereas alpha ERD showed a load-effect only during memorization. Conclusions: The manipulation of stimulus relevance and memory load affected both ERP components and ERD/ERS. Significance: The present paradigm may supply a useful method for studying processes of selective attention and working memory and can be used to examine group differences between healthy controls and children showing psychopathology. abstract_id: PUBMED:12048042 Probability interrelations between pre-/post-stimulus intervals and ERD/ERS during a memory task. Objectives: To investigate the functional relationship between oscillatory electroencephalographic (EEG) components (pre-/post-stimulus intervals) with audio-event-related desynchronization/synchronization (ERD/ERS). Methods: In an experimental study (9 subjects), the probability-classification analysis of single-trial spectral EEG changes was utilized. Results were compared with auditory ERD/ERS. Results: It was shown that (1) variability of EEG spectral patterns was considerable, (2) EEG activity was different at various task stages, (3) probability measures were different from the results of conventional frequency analysis, (4) probability of trials with alpha- and theta-patterns was characteristically different in various task stages, (5) the occurrence of alpha- and theta-trials were most probable, but not frequent enough to characterize all the trials. The results suggest that the ERD/ERS responses are influenced by EEG characteristics in the pre-stimulus interval, which also have a strong influence on the EEG in the post-stimulus interval. Conclusions: Alpha- and theta-ERD/ERS responses during memory task performance are not typical for all trials. They reflect EEG changes only in approximately 39% for alpha-activity and approximately 43% for theta-oscillations of all trials, what reflects piecewise stationary EEG structure. abstract_id: PUBMED:27536259 Cardiorespiratory Fitness Is Associated with Executive Control in Late-Middle-Aged Adults: An Event-Related (De) Synchronization (ERD/ERS) Study. The present study sought to determine whether cardiorespiratory fitness is associated with cognitive function in late-middle-aged adults from event-related desynchronization (ERD) and event-related synchronization (ERS) perspectives. Late-middle-aged adults were categorized into either the high-fitness group or the low-fitness group based on their estimated cardiorespiratory fitness values. The participants completed the Stroop Test, which is comprised of incongruent and neutral conditions, while the brain activities were recoded. The alpha ERD and ERS values based on the equation proposed by Pfurtscheller (1977) were further calculated. The results revealed that the adults with higher levels of cardiorespiratory fitness demonstrated superior Stroop performance, regardless of Stroop congruency. While these high-fitness adults had less positive upper alpha ERD values in the later epoch window compared to their lower-fitness counterparts, they had greater lower alpha ERD values in the early epoch window. Additionally, in the late epoch window, the high-fitness adults showed less positive lower alpha ERD values on neutral, but not incongruent condition, relative to their low-fitness counterparts. These findings suggest that cardiorespiratory fitness of the late-middle-aged adults is positively associated with cognitive functioning, especially the cognitive processes related to the inhibition of task-irrelevant information and those processes required the devotion of greater amounts of attentional resources to a given task. abstract_id: PUBMED:28487643 Perceptual Threshold Level for the Tactile Stimulation and Response Features of ERD/ERS-Based Specific Indices Upon Changes in High-Frequency Vibrations. This study was conducted to identify characteristics of the perceptual threshold level and electroencephalogram (EEG) responses to vibrotactile stimulations at various high frequencies, and to examine the possibility of distinguishing vibrotactile stimulations by frequency through such response characteristics. The vibrotactile stimulations of six frequencies (150, 200, 225, 250, 275 and 300 Hz) were exerted on the first joint of the right index finger. The perceptual threshold level was defined as the first minimum perceived intensity when the intensity stimulation was exerted step by step at each vibration frequency. EEG response characteristics were investigated by examining a single index corresponding to the peak or area of event-related desynchronization/synchronization (ERD/ERS) and seven specific indices derived by combining the single ERD/ERS indices. There was a significant difference in the perceptual threshold level across different frequencies. Specifically, the differences in vibration stimulus between 150 Hz and 200 Hz, and between 150 Hz and 225 Hz were significant. Of the EEG response characteristics, the single index of the peak or area of ERD/ERS did not show a significant difference by frequency. However, (ERS-ERD), ERD × (ERS-ERD), and ERS × (ERS-ERD) showed a significant difference between vibration stimulations at 150 Hz and 200 Hz, and between vibration stimulations at 150 Hz and 225 Hz, among the specific indices combined using the peak values of ERD/ERS. Furthermore, ERS × (ERS-ERD) showed a significant difference between 150 Hz and 225 Hz, and between 225 Hz and 275 Hz among the specific indices combined using the area of ERD/ERS. The perceptual threshold level and the specific indices of ERD/ERS suggested in the present study can be used as quantitative measurement indices to distinguish high-frequency vibration stimulation. abstract_id: PUBMED:36507340 Movement-related beta ERD and ERS abnormalities in neuropsychiatric disorders. Movement-related oscillations in the beta range (from 13 to 30 Hz) have been observed over sensorimotor areas with power decrease (i.e., event-related desynchronization, ERD) during motor planning and execution followed by an increase (i.e., event-related synchronization, ERS) after the movement's end. These phenomena occur during active, passive, imaged, and observed movements. Several electrophysiology studies have used beta ERD and ERS as functional indices of sensorimotor integrity, primarily in diseases affecting the motor system. Recent literature also highlights other characteristics of beta ERD and ERS, implying their role in processes not strictly related to motor function. Here we review studies about movement-related ERD and ERS in diseases characterized by motor dysfunction, including Parkinson's disease, dystonia, stroke, amyotrophic lateral sclerosis, cerebral palsy, and multiple sclerosis. We also review changes of beta ERD and ERS reported in physiological aging, Alzheimer's disease, and schizophrenia, three conditions without overt motor symptoms. The review of these works shows that ERD and ERS abnormalities are present across the spectrum of the examined pathologies as well as development and aging. They further suggest that cognition and movement are tightly related processes that may share common mechanisms regulated by beta modulation. Future studies with a multimodal approach are warranted to understand not only the specific topographical dynamics of movement-related beta modulation but also the general meaning of beta frequency changes occurring in relation to movement and cognitive processes at large. Such an approach will provide the foundation to devise and implement novel therapeutic approaches to neuropsychiatric disorders. abstract_id: PUBMED:32615476 Abnormal modulation of theta oscillations in children with attention-deficit/hyperactivity disorder. Previous studies have found that theta activities exhibit posterior lateralized modulation as well as midfrontal event-related synchronization (ERS) during covert visual attention in adults. The present study investigated whether these theta modulations existed in children and whether they were associated with attentional problems in attention-deficit/hyperactivity disorder (ADHD). Electroencephalography signals were recorded from typically developing (TD) children and children with ADHD (TD: n = 24; ADHD: n = 22) while they performed a cued covert visual attention task. The participants responded to a target following a cue designed as human eyes that gazed to the left or right visual field (70% validity). Compared with the TD children, the children with ADHD showed increased midfrontal theta ERS and significant posterior theta lateralization in response to the cues. More importantly, we found that the stronger posterior theta lateralization in the right hemisphere exhibited a positive trial-based correlation with the larger midfrontal theta ERS and predicted lower RT variability at the trial level in the children with ADHD. We suggest that ADHD may be associated with some enhanced systems in the frontal and posterior areas via theta oscillations, which may be involved in the compensatory maturation for their attention deficits in childhood, thereby promoting the stability of behavioral responses. abstract_id: PUBMED:36373033 The Effects of Different Theta and Beta Neurofeedback Training Protocols on Cognitive Control in ADHD. Neurofeedback (NF) is an important treatment for attention deficit/hyperactivity disorder (ADHD). In ADHD, cognitive control deficits pose considerable problems to patients. However, NF protocols are not yet optimized to enhance cognitive control alongside with clinical symptoms, partly because they are not driven by basic cognitive neuroscience. In this study, we evaluated different EEG theta and/or beta frequency band NF protocols designed to enhance cognitive control. Participants were n = 157 children and adolescents, n = 129 of them were patients with ADHD (n = 28 typically developing (TD) controls). Patients with ADHD were divided into five groups in the order of referral, with four of them taking part in different NF protocols systematically varying theta and beta power. The fifth ADHD group and the TD group did not undergo NF. All NF protocols resulted in reductions of ADHD symptoms. Importantly, only when beta frequencies were enhanced during NF (without any theta regulation or in combination with theta upregulation), consistent enhancing effects in both response inhibition and conflict control were achieved. The theta/beta NF protocol most widely used in clinical settings revealed comparatively limited effects. Enhancements in beta band activity are key when aiming to improve cognitive control functions in ADHD. This calls for a change in the use of theta/beta NF protocols and shows that protocols differing from the current clinical standard are effective in enhancing important facets of cognitive control in ADHD. Further studies need to examine regulation data within the neurofeedback sessions to provide more information about the mechanisms underlying the observed effects. Supplementary Information: The online version contains supplementary material available at 10.1007/s41465-022-00255-6. abstract_id: PUBMED:35553346 Cognitive control, interference inhibition, and ordering of information during working memory in younger and older healthy adults. Investigating effects of aging on neurophysiological mechanisms underlying working memory provides a better understanding of potential targets for brain intervention to prevent cognitive decline. Theta-gamma coupling (TGC) indexes the ability to order information processed during working memory tasks. Frontal theta event-related synchronization (ERS) and parietal alpha event-related desynchronization (ERD) index cognitive control and interference inhibition, respectively. Relative contributions of TGC, theta ERS, and alpha ERD in relation to stimulus presentation are not characterized. Further, differential effect of normal aging on pre- or post-stimulus processes is unknown. Electroencephalography was recorded in 66 younger and 41 older healthy participants while performing 3-back working memory task. We assessed relationships between 3-back task performance and each of post-stimulus TGC, pre-stimulus parietal alpha ERD, and pre-stimulus frontal theta ERS in each age group. While older adults performed worse on 3-back task than younger adults, TGC, alpha ERD, or theta ERS did not differ between the two groups. TGC was positively associated with 3-back performance in both age groups; pre-stimulus alpha ERD was associated with performance among younger adults; and pre-stimulus theta ERS was not associated with performance in either group. Our findings suggest that both pre-stimulus interference inhibition and post-stimulus ordering of information are important for working memory in younger adults. In contrast, performance in older adults appears to depend only on post-stimulus ordering of information. These specific contributions of neurophysiological resources may explain the poorer performance of older adults and suggest different targets to enhance working memory in age groups. Answer: Children with ADHD and/or PDD-NOS do not appear to differ in reactivity of alpha/theta ERD/ERS to manipulations of cognitive load and stimulus relevance when compared to healthy controls. A study examining whether the method of event-related (de-)synchronization (ERD/ERS) revealed differential effects of selective attention and working memory load in children with PDD-NOS, ADHD, or both, found that while the effects of the overall task manipulations of task load, relevance, and target/nontarget were clearly present in the overall analyses of alpha and theta ERD/ERS, there were no significant differences with respect to these manipulations between any of the subject groups (PUBMED:19059001). This suggests that alpha and theta ERD/ERS are sensitive to manipulations of task load, relevance, and target/nontarget, but do not supply additional information on possible group differences in comparison to the more frequently used method of event-related potentials.
Instruction: Are Italian medical societies bridging the distance from citizen and patients' associations? Abstracts: abstract_id: PUBMED:19278137 Are Italian medical societies bridging the distance from citizen and patients' associations? Results of a survey. Introduction: In the framework of PartecipaSalute--an Italian research project aimed to involve lay people, patients' associations and scientific-medical representatives on the health debate --we carried out a survey with the Italian Federation of Medical Societies. The aims of the survey were to know medical societies attitude vs. patients involvement in research activities and healthcare setting and to find out possible projects conducted in partnership with patients associations. Results: A web-questionnaire with 17 closed questions, and three open questions has been prepared on the basis of some experiences published on the literature and through the collaboration of members of the Italian Federation of Medical Societies. A total of 205 medical societies has been contacted by e-mail with a cover letter explaining the aims of the survey. At the end 74 medical societies completed the survey. Medical societies participating to the survey varied widely in terms of years of activity, number of members, and geographical distribution. Remarkably, 36 medical societies respondent organized collaborative initiatives with patients/consumers associations during the last three years. Among these, the most frequent were the preparation of written material for patients, organization of conferences or workshops, and health awareness campaigns. Moreover, 6 medical societies published documents on patients' rights but patients or their associations were involved in only 2 of these initiatives. Advantages and disadvantages reported by medical societies answering are also presented and discussed. Conclusions: In conclusion, to our knowledge this is the first survey on the value of patients' involvement conducted together with medical societies in Italy, and the results point the way to stronger collaboration in future between patients' associations and medical societies. abstract_id: PUBMED:27251686 Conflict of interest between professional medical societies and industry: a cross-sectional study of Italian medical societies' websites. Objective: To describe how Italian medical societies interact with pharmaceutical and medical device industries through an analysis of the information available on their websites. Design: Cross sectional study. Setting: Italy. Participants: 154 medical societies registered with the Italian Federation of Medical-Scientific Societies. Main Outcome Measures: Indicators of industry sponsorship (presence of industry sponsorship in the programme of the last medical societies' annual conference; presence of manufacturers' logos on the homepage; presence of industry sponsorship of satellite symposia during the last annual conference). Results: 131 Italian medical societies were considered. Of these, 4.6% had an ethical code covering relationships with industry on their websites, while 45.6% had a statute that mentioned the issue of conflict of interest and 6.1% published the annual financial report. With regard to industry sponsorship, 64.9% received private sponsorship for their last conference, 29.0% had manufacturers' logos on their webpage, while 35.9% had industry-sponsored satellite symposia at their last conference. The presence of an ethical code on the societies' websites was associated with both an increased risk of industry sponsorship of the last conference (relative risk (RR) 1.22, 95% CIs 1.01 to 1.48 after adjustment) and of conferences and/or satellite symposia (RR 1.22, 95% CIs 1.02 to 1.48 after adjustment) but not with the presence of manufacturers' logos on the websites (RR 1.79, 95% CIs 0.66 to 4.82 after adjustment). No association was observed with the other indicators of governance and transparency. Conclusions: This survey shows that industry sponsorship of Italian medical societies' conferences is common, while the presence of a structured regulatory system is not. Disclosure of the amount of industry funding to medical societies is scarce. The level of transparency therefore needs to be improved and the whole relationship between medical societies and industry should be further disciplined in order to avoid any potential for conflict of interest. abstract_id: PUBMED:26769777 Proliferation of gynaecological scientific societies and their financial transparency: an Italian survey. Objectives: To determine the number of Italian obstetrical and gynaecological societies, and to ascertain their financial transparency. Design: Internet-based national survey and website content analysis. Participants: Currently active, not privately owned, non-religious, apolitical, obstetrical and gynaecological associations. Methods: From October 2014 to June 2015, scientific societies were identified using combinations of search terms, and examining the website of the two main Italian obstetrical and gynaecological organisations. Individual societies' websites were scrutinised by two independent investigators. Primary And Secondary Outcome Measures: Number of Italian obstetrical and gynaecological associations and its variation over time; 12 information categories defining the general characteristics of the societies and their websites, and the financial transparency of the associations. Results: The initial web search yielded 56 professional obstetrical and gynaecological associations but nine were excluded for various reasons. Of the remaining 47 professional associations, 17 covered both obstetrics and gynaecology, four were specialised in obstetrics, 26 in gynaecology and 46 provided continuing medical education (CME) activities. The number of societies has quadrupled in the last 35 years, increasing at a mean rate of one additional society per year. The headquarters of the associations were located in the offices of a professional congress organiser in 15 instances, and advertisements or links to industry products were present in 12 societies' websites. Bylaws were accessible in 32 websites. No information was publicly available regarding competing interests, financial statements and quantitative external funding. Conclusions: The number of obstetrical and gynaecological societies is remarkably high in Italy, particularly in the gynaecological area. Despite CME activity provision, transparency of societies regarding financial issues and competing interests was almost non-existent. Policies addressing the interactions between medical associations and industry are available and should be implemented. abstract_id: PUBMED:6353785 History of continuing medical education within the scope of regional associations and societies In Halle the associations and societies with the aim of postgraduate medical education can look back on a tradition lasting for centuries. The beginnings which are to be proved for the 18th century led to the formation of associations and societies which besides the mediation of knowledge also dealt with problems of the medical profession. In Halle on the basis of regional societies as well as in form of clinico-scientific soirées is made allowance for the demands of modern medical postgraduate education, the latter institution existing since 1959 can at present refer to 200 arrangements performed and forms a not unimported partial complex for the regional postgraduate educational system. abstract_id: PUBMED:34785113 Glass ceilings in the scientific societies and professional associations in the field of health in Spain in 2019 Objective: To analyze the composition of the management structures of Spanish professional associations and scientific societies in the field of health in 2019 and compare them with the data obtained in 2014 and 2015. Method: descriptive cross-sectional study. Scientific societies in the health field with state representation as of May 30, 2019 and data from professional associations with provincial, regional or state representation were collected as of June 1, 2019. They are compared to 2014 and 2015. Results: 259 professional associations and 163 scientific societies were analyzed. Both professional associations and scientific societies in the field of Health in 2019 exceed the percentage of 40% of balanced composition. The positions that have increased the most in percentage of women are those of vice-secretary (20,8%) or secretaries (9,3% in companies), vice-presidencies (8,8% and 11%) and members (6,4% and 4%). Presidency (2%) and dean (6%), the least. Conclusions: It is necessary to continue with the efforts to advance in an equitable representation by sex. abstract_id: PUBMED:29239793 Medical societies, patient education initiatives, public debate and marketing of unproven stem cell interventions. Businesses marketing unproven stem cell interventions proliferate within the U.S. and in the larger global marketplace. There have been global efforts by scientists, patient advocacy groups, bioethicists, and public policy experts to counteract the uncontrolled and premature commercialization of stem cell interventions. In this commentary, we posit that medical societies and associations of health care professionals have a particular responsibility to be an active partner in such efforts. We review the role medical societies can and should play in this area through patient advocacy and awareness initiatives. abstract_id: PUBMED:29048980 The 2016 Lifetime Immunization Schedule, approved by the Italian scientific societies: A new paradigm to promote vaccination at all ages. Medical scientific societies have the core mission of producing, pooling and disseminating solid and updated scientific information. We report the successful experience of the partnership of four national Medical Scientific Societies active in Italy in producing scientific advice on vaccines and vaccination. In particular, i) the Italian Society of Hygiene, Preventive Medicine and Public Health; SitI, ii) the Italian Society of Paediatrics; SIP, iii) the "Italian Federation of General Practitioners"; FIMP, and iv) the Italian Federation of General Medicine FIMMG) have worked together since 2012 to produce shared evidence-based recommendations on vaccination schedules, namely the "Lifetime Immunization Schedule" which introduced for the first time in Italy a life-course approach to vaccination. The 2014 edition of the "Lifetime Immunization Schedule" was used as a basis to develop the 2017-2019 Italian National Prevention Plan, approved by The Italian Ministry of Health in February 2017. In this report, we present the structure, content and supporting evidence of the new 2016 "Lifetime Immunization Schedule" and we expand on the influential role of medical scientific societies in researching and advocating for effective and safe vaccination programmes' implementation at the national level. abstract_id: PUBMED:22512734 Citizens' participation in the Italian health-care system: the experience of the Mixed Advisory Committees. Background: In 1994, the region of Emilia-Romagna recognized the importance of citizens' participation in the regional health-care system and recommended the institution of Mixed Advisory Committees in the health districts and hospitals with the objective of monitoring and assessing health-care quality from the users' perspective. Design: This paper reports findings from a qualitatively based evaluation involving direct observations of the committees and 39 semi-structured interviews: 20 with representatives of patients and users' associations and 19 with health professionals and managers involved in the activities of the committees. Results: The Mixed Advisory Committees introduced for the first time in the Italian health system an ongoing deliberative approach that gave patients and users' representatives the opportunity to be involved in health service governance. The committees enabled the creation of a method of collective participation that overcame the reductive individualistic 'approach' to health-care participation. MAC participants evaluated the committees positively for their mixed composition, which integrated different cultures, experiences and professional profiles with the potential to contribute to solving health-care problems, in a consultative role. Although patients and users' representatives were able to exert some form of influence, their expectations were greater than the results obtained. The study illuminated some weak points of public consultation, such as a decline in participation by citizen representatives and a weak influence on decision making. Discussion And Conclusions: MACs developed a mechanism of participation that provides health-care users' representatives with the opportunity to contribute to the identification and analysis of critical points of the health-care system. A strength of the MACs is their level of institutionalization within the health system in the region of Emilia-Romagna. However, they contain a problem of representativeness, lacking a formal system of determining a representative composition from among patients and users' associations. The case study also illustrates that participation without the cooperation of the health service, and financial and organizational support, will result in a decline in citizen participation, given the resources required of these volunteer citizens in return for limited results. abstract_id: PUBMED:26680288 Neither bridging nor bonding: A test of socialization effects by ethnically diverse voluntary associations on participants' inter-ethnic tolerance, inter-ethnic trust and intra-ethnic belonging. The distinction between bridging and bonding associations is a cornerstone of social capital research. Nevertheless, this study is the first to provide a direct test of the socialization mechanism that supposedly causes ethnically mixed (bridging) associations to generate interethnic tolerance and trust, and homogenous (bonding) associations to cement self-affirming identities. This multilevel analysis of the Citizenship, Involvement &amp; Democracy (CID) 1999/2000 survey data on Mannheim (Germany), Enschede (the Netherlands), and Aberdeen (Scotland) covers 3166 active participants in 645 associations. The CID includes objective, exogenous measures of each association's composition and aim. Socialization and self-selection effects are pulled apart through interactions with detailed measures of associational involvement. The results display no evidence for (diverse and homogenous) associations as socializing agents. Although inter-ethnic tolerance is higher in ethnically diverse associations, this should be attributed to self-selection effects. abstract_id: PUBMED:31646774 Facing internet fake-medicine and web para-pharmacy in the total absence of official recommendations from medical societies. Purpose: Internet fake information, parapharmacy and counterfeit drugs are a market of hundreds of billion dollars. Misleading internet data decrease patients' compliance to medical care, promote use of questionable and detrimental practices, and jeopardize patient outcome. This is particularly harmful among cancer patients, especially when pain and nutritional aspects are considered. Provision of Web recommendations for the general audience (patients, relatives, general population) from official medical-providers might be useful to outweigh the detrimental internet information produced by non-medical providers. Methods: 370 oncology and anesthesiology related societies were analyzed. Our objective was to evaluate the magnitude of web-recommendation for cancer cachexia and cancer pain for the general audience provided by official medical organizations' web sites at global level. Results: Magnitude of web-recommendations at global level was surprisingly scant both for coverage and consistency. Seven official medical societies provided updated web-recommendation for cancer cachexia to their patients/family members, and 15 for cancer pain. Scantiness was unrelated by continent, developmental index, oncology tradition, economic-geographic area and society type scrutinized. Conclusions: Patients need expert advice when exposed to fake internet information largely dominated by paramedical market profits. In this era of "new media" the patients' net-education represents a new major educational challenge for medical societies. Answer: Italian medical societies have shown varying degrees of engagement with citizens and patients' associations. A survey conducted with the Italian Federation of Medical Societies revealed that 36 medical societies had organized collaborative initiatives with patients/consumers associations in the last three years, including the preparation of written material for patients, organization of conferences or workshops, and health awareness campaigns. However, only 6 medical societies published documents on patients' rights, and patients or their associations were involved in only 2 of these initiatives (PUBMED:19278137). This suggests that while there is some collaboration, the level of patient involvement in the development of such documents is low. Moreover, the presence of an ethical code on medical societies' websites was associated with an increased risk of industry sponsorship of the last conference and of conferences and/or satellite symposia, but not with the presence of manufacturers' logos on the websites. This indicates that while some societies may have ethical guidelines, the influence of industry sponsorship is still prevalent (PUBMED:27251686). In the specific field of obstetrics and gynecology, a survey found that the number of professional associations has quadrupled in the last 35 years, but transparency regarding financial issues and competing interests was almost non-existent. This suggests a need for policies addressing the interactions between medical associations and industry (PUBMED:26769777). In terms of patient education, medical societies and associations of health care professionals have a responsibility to counteract the commercialization of unproven interventions, such as stem cell therapies, and to promote patient advocacy and awareness initiatives (PUBMED:29239793). The Italian scientific societies have also been involved in producing evidence-based recommendations on vaccination schedules, demonstrating their role in advocating for effective and safe vaccination programs at the national level (PUBMED:29048980). However, the Mixed Advisory Committees in the Emilia-Romagna region, which were intended to involve patients and users' representatives in health service governance, have faced challenges such as a decline in participation and a weak influence on decision-making (PUBMED:22512734). In conclusion, while there are instances of Italian medical societies working towards bridging the distance from citizens and patients' associations, the overall engagement appears to be limited and varies widely among societies. There is room for improvement in terms of patient involvement, transparency, and influence on decision-making processes.
Instruction: Transradial artery intervention: an alternative approach for renal artery stent implantation? Abstracts: abstract_id: PUBMED:28587369 Efficacy and indications of transradial and transfemoral approaches for peripheral artery stent implantation. The transradial approach (TRA) has become an attractive alternative to the transfemoral approach (TFA) in percutaneous coronary intervention. To date, however, these two approaches have not been systematically compared in various percutaneous peripheral artery interventions (PPAIs). In the present study 258 patients with peripheral artery disease that underwent PPAI via the TRA (n=75) or the TFA (n=183) were analyzed. Clinical factors and outcomes in these two groups were compared. The puncture time was significantly longer (P&lt;0.05) and the prevalence of artery vasospasm significantly higher (P&lt;0.05) in patients who underwent PPAI via the TRA rather than via the TFA. However, the complication rate was significantly lower (P&lt;0.05) and the artery compression time (P&lt;0.05) and time bedridden (P&lt;0.05) were significantly shorter via the TRA than via the TFA. These results suggest that PPAI via the TRA was associated with a lower complication rate, and shorter artery compression time and bedridden time than PPAI via the TFA. The TRA may be preferable for bilateral vertebral artery stenosis, whereas the TFA may be preferable for interventional treatment of carotid and subclavian artery stenosis. Therefore, the catheter length, artery support and push force should be comprehensively considered before choosing the TRA or TFA in the interventional treatment of renal artery stenosis. abstract_id: PUBMED:22964334 Transradial artery intervention: an alternative approach for renal artery stent implantation? Background: Transfemoral artery access is the main approach for the interventional treatment of renal artery stenosis (RAS). This study aimed to investigate the technical feasibility of a transradial interventional (TRI) treatment of renal artery stenosis. Methods: A series of 23 patients who underwent transradial renal artery stenting from October 2010 to October 2011 were studied. Radial sheath system (Terumo, Japan) was used to get access to the radial artery. Radial tourniquet (Terumo) was used to stop bleeding. A 5Fr MPA (COOK, USA) was used to perform selective renal arteriography. Percutaneous renal artery stent systems were used to perform renal artery stenting. Results: Renal artery angiography showed that 15 patients had unilateral renal artery stenosis and eight patients had bilateral renal artery stenosis. The descending aorta could not be catheterized in one patient because of the type III aortic arch. Twenty-two patients successfully underwent transradial renal artery angiography and the technical success rate was 95.7%. There was no puncture site hematoma or pseudoaneurysm. Mean procedure time was (38.4 ± 7.2) minutes, the mean amount of contrast agent used was (93.2 ± 6.3) ml, and the mean postprocedure bleeding time was (3.2 ± 1.9) minutes. Conclusion: Transradial renal artery intervention is technically reliable with less invasion, rapid recovery, fewer complications and may become an alternative intervention approach for the treatment of renal artery stenosis. abstract_id: PUBMED:25075163 Impact of renal artery stent implantation on hypertension in patients with hemodialysis. The benefit from renal artery stent implantation to treat atherosclerotic renal artery stenosis (ARAS) is not well understood in hemodialysis patients. We sought to evaluate the effects of renal artery stenting on hypertension of hemodialysis patients. Renal artery stent implantation was successfully performed on eight hypertensive hemodialysis patients with ARAS (mean ± SD, 66 ± 10 years; men 6, women 2). Blood pressure was measured by automated oscillometric recordings just before hemodialysis. Mean values of the blood pressure, measured 12 times a month, were used for blood pressure analysis. Values of systolic blood pressure decreased at 6 months after renal artery stent implantation (162.6 ± 29.7 to 121.1 ± 23.3 mm Hg, p = 0.0015). Values of diastolic blood pressure also decreased from 77.6 ± 13.6 to 65.6 ± 7.2 mm Hg (p = 0.02). Renal artery stent implantation for ARAS had a beneficial effect on hypertension in hemodialysis patients. abstract_id: PUBMED:11747177 Transradial approach for renal artery stenting. Percutaneous interventional procedures in the renal arteries are usually performed using a femoral or brachial vascular access. The transradial approach, which has been extensively investigated for coronary angiography and intervention, could be an attractive new technique for renal artery angioplasty and stenting. In 18 patients with hemodynamically relevant unilateral renal artery stenosis (mean diameter stenosis, 83% +/- 18%; right, n = 7; left, n = 11), interventional treatment with PTA and stenting was performed using a left (n = 16) or right (n = 2) radial artery access. Indications for the transradial approach were acute aorto-renal angles or severe peripheral arterial obstructions. After engagement of the renal artery ostium with a 6 Fr Multipurpose guiding catheter (length, 125 cm; Cordis) the stenosis was passed with a 0.014" guidewire followed by stent implantation (ACS Multi-Link RX Ultra, Guidant; length, 18 mm; diameter, 5 mm). Direct stenting was successfully performed in 16 cases. Predilatations were required in two cases. In five patients, optimal stent expansion was achieved by additional postdilatations. A primary technical success (residual stenosis &lt; 30%) could be achieved in all cases. There were no periprocedural complications. According to color-coded doppler ultrasound, all access site arteries showed a normal perfusion. Clinically blood pressure control was improved in 11 patients with a significant reduction in systolic and diastolic blood pressure values. Serum creatinine values dropped from 1.81 +/- 0.3 mg/dl to 1.49 +/- 0.3 mg/dl (P &lt; 0.001). Transradial renal artery angioplasty and stenting is technically feasible and safe. Particularly in patients with unfavorable vessel anatomy, this new cranio-caudal approach is an attractive alternative technique. abstract_id: PUBMED:30186475 Percutaneous renal artery stent implantation in the treatment of atherosclerotic renal artery stenosis. We aimed to investigate the clinical curative effect of percutaneous renal artery stent implantation (PTRAS) in the treatment of atherosclerotic renal artery stenosis (ARAS), and to analyze the factors influencing the curative effect of PTRAS. A total of 230 patients with unilateral or bilateral renal artery stenosis were retrospectively analyzed. According to whether adverse cardiogenic or nephrogenic events occurred, 230 patients were divided into two groups to analyze the risk factors of adverse cardiogenic or nephrogenic events. The blood pressure of patients at each time-point after operation was decreased significantly compared with that before operation (P&lt;0.01). The levels of serum creatinine (SCr) at 24 h and 36 months after PTRAS were slightly increased compared with that before operation (P&gt;0.05). The estimated glomerular filtration rate (eGFR) at each time-point after operation was slightly decreased compared with that before operation, but the difference was not statistically significant (P&gt;0.05). Renography showed that GFR on the side of stent implantation at 36 months after PTRAS had no significant change compared with that before operation (P&gt;0.05), but GFR on the unaffected side without receiving PTRAS was significantly increased compared with that before operation (P=0.0014). During the 36-month follow-up, there were a total of 56 cases of adverse cardiogenic or nephrogenic events. Multivariate regression analysis results showed that adverse cardiogenic or nephrogenic events after PTRAS were obviously associated with age (≥65 years old), Charlson comorbidity index (CCI) score (≥2 points), diabetes mellitus, stroke and congestive heart failure (CHF) (P&lt;0.05). In conclusion, PTRAS can effectively control the blood pressure and reduce the types of antihypertensive drugs used by patients with ARAS, but it has no definitely protective effect on renal function. Age (≥65 years old), CCI score (≥2 points), diabetes mellitus, stroke and CHF are risk factors leading to adverse cardiogenic or nephrogenic events after PTRAS. abstract_id: PUBMED:11870270 Transradial renal artery angioplasty and stenting. Transradial arterial access is an alternative approach for coronary interventions. The utilization of these cardiac systems may facilitate endovascular treatment of other vascular territories. This report describes our first experience with percutaneous transluminal renal artery angioplasty and stenting (PTRAS) using the transradial approach. This case demonstrates the feasibility of the radial approach to treat severe renal artery disease safely with PTRAS. Comparison with femoral and brachial arterial access sites is beyond the scope of this paper, but the cardiac literature has demonstrated the safety and efficacy of transradial artery access. With refinement of the endovascular equipment, the radial approach could become an attractive alternative entry site for renal artery interventions. abstract_id: PUBMED:29880100 Combined Transradial and Transpedal Approach for Femoral Artery Interventions. Objectives: The purpose of this prospective study was to evaluate the acute success and complication rates of combined transradial and transpedal access for femoral artery intervention. Background: Improved equipment and techniques have resulted in transition from transfemoral to transradial access for intervention of superficial femoral artery. Methods: Between 2014 and 2016, clinical and angiographic data from 145 consecutive patients with symptomatic superficial femoral stenosis, treated via primary radial access using the 6-F SheathLess Eaucath PV guiding catheter were evaluated in a pilot study. Secondary access was achieved through the pedal or popliteal artery. The primary endpoints were major adverse events, target lesion revascularization, and rates of major and minor access-site complications. Secondary endpoints included angiographic outcome, procedural factors, crossover rate to femoral access site, and duration of hospitalization. Results: Technical success was achieved in 138 patients (95.2%). Combined radial and pedal access was obtained in 22 patients (15.1%). The crossover rate to a femoral access site was 2%. Stent implantation was necessary in 23.4% of patients. Chronic total occlusion recanalization was performed in 63 patients, with a 90.4% technical success rate. The mean contrast consumption, radiation dose, and procedure time were 112.9 ml (101.8 to 123.9 ml), 21.84 Gy/cm2 (9.95 to 33.72 Gy/cm2), and 34.9 min (31.02 to 38.77 min), respectively. The cumulative rate of access-site complications was 4.8% (0% major, 4.8% minor). The cumulative incidence rates of major adverse events at 3 and 12 months follow-up was 8.3% and 19.2%. The cumulative incidence rates of death at 3- and 12-month follow-up were 2.8% and 5.6%. Conclusions: Femoral artery intervention can be safely and effectively performed using radial and pedal access with acceptable morbidity and a high technical success rate. abstract_id: PUBMED:25848368 Transradial approach for vertebral artery stenting. Introductuion: Symptomatic severe vertebral artery (VA) stenosis may be treated safely with stent supported angioplasty via femoral access. There is limited clinical data on transradial approach for VA angioplasty in case of peripheral artery disease. Aim: To evaluate the safety and efficacy of transradial angioplasty of symptomatic VA stenosis. Material And Methods: Fifteen patients (age 66 ±7.4 years, 73% men, with VA &gt; 80% stenosis, 11 right-side, all symptomatic from posterior circulation (history of stroke, TIA, or chronic ischaemia symptoms)) with peripheral artery disease (PAD) or unsuccessful attempt via femoral approach were scheduled for VA angioplasty by radial access. Clinical and duplex ultrasound (DUS) follow-up were performed before discharge and 1, 12, and 24 months after VA angioplasty. Results: The technical success rate was 100%. In all cases VA angioplasty was performed with the use of single balloon-mounted stent (9 bare metal stents, 6 drug-eluting stents). The mean NASCET VA stenosis was reduced from 85.3% to 5.3% (p &lt; 0.001). No periprocedural death, stroke, myocardial infarction, or transient ischaemic attack occurred. During 24-months follow-up, in 12 of 15 patients chronic ischaemia symptoms release was observed, and no new acute ischaemic neurological symptoms were diagnosed in all patients. One patient died 20 months after intervention from unknown causes. There was one symptomatic borderline VA in-stent stenosis 12 months after angioplasty. Conclusions: Transradial VA stenting may be a very effective and safe procedure, and it may constitute an alternative to the femoral approach in patients with symptomatic VA stenosis. abstract_id: PUBMED:24033772 Cranio-caudal (transradial) approach for renal artery intervention. This report describes the technique of transradial renal angiography and interventions. We illustrate the approach using typical case images. We believe cranio-caudal approach to the renal artery for diagnostic and therapeutic catheterization offers advantages over the traditional femoral approach. abstract_id: PUBMED:28582049 Transradial Peripheral Arterial Procedures. Increased understanding and increased adoption of transradial catheterization across the world have led to further exploring of radial artery access for transradial endovascular interventions in peripheral artery disease (PAD). This article discusses the advantages and limitations of the transradial approach for endovascular medicine by using case examples, illustrations, and videos. The details about how to use a radial approach for PAD intervention, including and tips tricks, are discussed. Answer: Yes, transradial artery intervention (TRI) is considered an alternative approach for renal artery stent implantation. Studies have shown that TRI for renal artery stenosis is technically reliable with less invasion, rapid recovery, and fewer complications, suggesting it may become an alternative intervention approach for the treatment of renal artery stenosis (PUBMED:22964334). The transradial approach (TRA) has been compared to the transfemoral approach (TFA) and has been associated with a lower complication rate, shorter artery compression time, and reduced bedridden time, although it does have a longer puncture time and a higher prevalence of artery vasospasm (PUBMED:28587369). Transradial renal artery angioplasty and stenting has been successfully performed in patients, with technical feasibility and safety being highlighted, especially in those with unfavorable vessel anatomy or acute aorto-renal angles (PUBMED:11747177). Furthermore, the transradial approach has been used effectively for vertebral artery stenting, which may indicate its potential for broader applications in peripheral artery disease interventions (PUBMED:25848368). In the context of renal artery stent implantation, the transradial approach has been shown to be an attractive technique, particularly in patients with hemodynamically relevant unilateral renal artery stenosis (PUBMED:11747177). Additionally, the transradial approach has been described as a cranio-caudal approach for renal artery intervention, offering advantages over the traditional femoral approach (PUBMED:24033772). Overall, while the transfemoral approach remains the mainstay for renal artery stent implantation, the transradial approach is emerging as a viable and potentially preferable alternative in certain clinical scenarios, offering benefits such as reduced complications and quicker patient recovery (PUBMED:22964334; PUBMED:28587369; PUBMED:11747177; PUBMED:24033772; PUBMED:25848368).
Instruction: Does deep sedation impact the results of 48 hours catheterless pH testing? Abstracts: abstract_id: PUBMED:21455329 Does deep sedation impact the results of 48 hours catheterless pH testing? Aim: To study a cohort of patients undergoing 48 h Bravo pH testing receiving deep sedation with propofol. Methods: We retrospectively reviewed the charts of 197 patients (81 male, 116 female) who underwent Bravo esophageal pH monitoring from July 2003 to January 2008. All patients underwent Bravo pH probe placement via esophagogastroduodenoscopy (EGD) and received propofol for sedation. Patients on a proton pump inhibitor (89 patients) were excluded. Acid reflux variables measured included the total, upright, and supine fractions of time at pH &lt; 4 and DeMeester score, and were compared between day 1 and day 2. Results: Of the 108 patients that were included in the study, the most common indication for Bravo pH monitoring was heartburn, with chest pain being the second most common. A signed rank test revealed no statistically significant difference between day 1 and day 2 reflux episodes. Conclusion: Patients who received propofol for sedation for EGD with Bravo pH capsule placement did not experience any significant difference in reflux episodes from day 1 to day 2. abstract_id: PUBMED:25539736 Clinical value of wireless pH-monitoring of gastro-esophageal reflux in children before and after proton pump inhibitors. Background: Wireless pH-monitoring is an accurate method for diagnosing adults with gastroesophageal reflux disease (GERD). The aim of this study was to evaluate the use of the Bravo capsule on children investigated for GERD in terms of safety, tolerability and feasibility before and after administration of proton pump inhibitors. Methods: A Bravo capsule was inserted during upper endoscopy under general anaesthesia or deep sedation with propofol. 48-hour pH-metry was performed in 106 children (50 males, 56 females) at the median age of 11 years (range 17 months-18 years). On the second day of investigation, proton pump inhibitor (PPI) was given at a mean dose of 1.6 mg/kg (SD ±0.6 mg). The definition of GERD was set to a reflux index (RI) of ≥5% and DeMeester score (DMS) ≥14.7. Results: Application of the capsule was successful in 103 of the 106 children (97.2%) and interpretable in 99 of these 103 (96.1%). 49 of the children with interpretable results (49.5%) had GERD according to RI, while 51 (56.7%) had GERD according to DMS. After PPI was given on day 2, RI decreased from a median of 4.9% (range 0.3-63.4%) to 2.2% (0-58.0%), while DMS decreased from a median of 17.6 (range 2.2-207.6) to 8.2 (0.3-178.6), respectively (p &lt; 0.0001). No severe adverse events were reported. Conclusion: Wireless pH-metry is a safe and tolerable method when investigating children for GERD. PPI given on the second day of assessment provides additional information on response to treatment suggesting that pH-metry preferably should be extended to 48 hours. abstract_id: PUBMED:29498938 Sedation Intensity in the First 48 Hours of Mechanical Ventilation and 180-Day Mortality: A Multinational Prospective Longitudinal Cohort Study. Objectives: In the absence of a universal definition of light or deep sedation, the level of sedation that conveys favorable outcomes is unknown. We quantified the relationship between escalating intensity of sedation in the first 48 hours of mechanical ventilation and 180-day survival, time to extubation, and delirium. Design: Harmonized data from prospective multicenter international longitudinal cohort studies SETTING:: Diverse mix of ICUs. Patients: Critically ill patients expected to be ventilated for longer than 24 hours. Interventions: Richmond Agitation Sedation Scale and pain were assessed every 4 hours. Delirium and mobilization were assessed daily using the Confusion Assessment Method of ICU and a standardized mobility assessment, respectively. Measurements And Main Results: Sedation intensity was assessed using a Sedation Index, calculated as the sum of negative Richmond Agitation Sedation Scale measurements divided by the total number of assessments. We used multivariable Cox proportional hazard models to adjust for relevant covariates. We performed subgroup and sensitivity analysis accounting for immortal time bias using the same variables within 120 and 168 hours. The main outcome was 180-day survival. We assessed 703 patients in 42 ICUs with a mean (SD) Acute Physiology and Chronic Health Evaluation II score of 22.2 (8.5) with 180-day mortality of 32.3% (227). The median (interquartile range) ventilation time was 4.54 days (2.47-8.43 d). Delirium occurred in 273 (38.8%) of patients. Sedation intensity, in an escalating dose-dependent relationship, independently predicted increased risk of death (hazard ratio [95% CI], 1.29 [1.15-1.46]; p &lt; 0.001, delirium hazard ratio [95% CI], 1.25 [1.10-1.43]), p value equals to 0.001 and reduced chance of early extubation hazard ratio (95% CI) 0.80 (0.73-0.87), p value of less than 0.001. Agitation level independently predicted subsequent delirium hazard ratio [95% CI], of 1.25 (1.04-1.49), p value equals to 0.02. Delirium or mobilization episodes within 168 hours, adjusted for sedation intensity, were not associated with survival. Conclusions: Sedation intensity independently, in an ascending relationship, predicted increased risk of death, delirium, and delayed time to extubation. These observations suggest that keeping sedation level equivalent to a Richmond Agitation Sedation Scale 0 is a clinically desirable goal. abstract_id: PUBMED:24160809 Oversedation in postoperative patients requiring ventilator support greater than 48 hours: a 4-year National Surgical Quality Improvement Program-driven project. Prolonged mechanical ventilation of postoperative patients can contribute to an increase in morbidity. Every effort should be made to wean patients from the ventilator after surgery. Oversedation may prevent successful extubation. Cases identified by the National Surgical Quality Improvement Program (NSQIP) for Huntington Hospital were reviewed. Oversedation, days on the ventilator, type and duration of sedation, and cost were studied. Data were collected from the NSQIP database and patient charts. Oversedation was determined by the Richmond Agitation Sedation Score (RASS) of each patient. The hospital pharmacy provided data on propofol. Forty-three (35%) patients were oversedated. Propofol was used in 111 (90%) cases with an average use of 4.8 days. Propofol was used greater than 48 hours in 77 (62%) cases. After identifying inconsistent nurse documentation of sedation, corrective actions helped decrease oversedation, average number of days on the ventilator, number of days on propofol, hospital expenditure on propofol, and number of patients on the ventilator greater than 48 hours. Oversedation contributed to prolonged mechanical ventilation. Standardization of RASS and physician sedation order sheets contributed to improving our NSQIP rating. Sedation use decreased and fewer patients spent less time on the ventilator. NSQIP is an effective tool to identify issues with quality in surgical patients. abstract_id: PUBMED:34428460 Impact of Preoperative Testing on Patients Undergoing Ophthalmologic Surgery: A Retrospective Cohort Study. Objective: Routine medical testing is not recommended before cataract surgery, but no consensus exists about preoperative testing before general ophthalmologic surgery. We aimed to assess the impact of preoperative testing on patients undergoing ophthalmologic surgery by analyzing their surgical outcomes and complications. Methods: We retrospectively reviewed electronic health records of patients who had preoperative evaluations before cataract or noncataract ophthalmologic surgery at a tertiary care center from January 1, 2015, through December 31, 2019. Results: The cohort consisted of 2268 patients (1270 [56.0%] women). The most frequent ophthalmologic procedure was cataract extraction (n = 1450 [63.9%]). Laboratory tests results were available for 489 patients (33.7%) in the cataract group; of these, 275 results (56.2%) had abnormal values, and 18 patients (6.5%) required preoperative interventions. Preoperative test results were available for 772 out of 818 patients (94.4%) having noncataract procedures. Of these, 384 results (49.7%) had abnormal values, and 10 patients (2.6%) required additional intervention. No significant differences were observed for the rate of surgery cancellations between the cataract and noncataract patient groups (0.6% vs 1.0%; P = .24). Of the 12 patients (0.5%) who had complications, all had undergone preoperative testing. Conclusions: No differences in outcomes and complications were observed among patients who underwent cataract or noncataract surgery. It is reasonable to consider avoiding preoperative testing in patients undergoing ophthalmologic surgery. abstract_id: PUBMED:29137881 Comparison between chloral hydrate and propofol-ketamine as sedation regimens for pediatric auditory brainstem response testing. Introduction: The use of diagnostic auditory brainstem response testing under sedation is currently the "gold standard" in infants and young children who are not developmentally capable of completing the test. Objective: The aim of the study is to compare a propofol-ketamine regimen to an oral chloral hydrate regimen for sedating children undergoing auditory brainstem response testing. Methods: Patients between 4 months and 6 years who required sedation for auditory brainstem response testing were included in this retrospective study. Drugs doses, adverse effects, sedation times, and the effectiveness of the sedative regimens were reviewed. Results: 73 patients underwent oral chloral hydrate sedation, while 117 received propofol-ketamine sedation. 12% of the patients in the chloral hydrate group failed to achieve desired sedation level. The average procedure, recovery and total nursing times were significantly lower in the propofol-ketamine group. Propofol-ketamine group experienced higher incidence of transient hypoxemia. Conclusion: Both sedation regimens can be successfully used for sedating children undergoing auditory brainstem response testing. While deep sedation using propofol-ketamine regimen offers more efficiency than moderate sedation using chloral hydrate, it does carry a higher incidence of transient hypoxemia, which warrants the use of a highly skilled team trained in pediatric cardio-respiratory monitoring and airway management. abstract_id: PUBMED:23960622 Safe and effective use of conscious sedation for defibrillation threshold testing during ICD implantation. Background: Over a period of years general anesthesia has been a standard anesthetic technique for defibrillation threshold (DFT) testing at the time of implant. DFT testing without general anesthesia cover has gained limited acceptance. Use of local anesthesia combined with deep sedation for DFT testing might facilitate and simplify these procedures by reducing the procedural time, staff time, avoiding inefficient service in organizing anesthetic cover; thereby improving patient compliance. Objective: The objective of this study was to evaluate feasibility, safety and efficacy of conscious sedation for DFT testing during Implantable cardioverter defibrillators (ICD) implantation. Method: Data of 87 non-selected patients who achieved adequate sedation with titrated doses of midazolam and pethidine were analyzed retrospectively. These medications were administered by a circulating nurse under the supervision of the implanting physicians. All hemodynamic measures, treatment and complications were monitored and recorded throughout the procedure. Results: A retrospective analysis of data from 87 patients who underwent ICD implantation and DFT testing under conscious sedation at our center was reported. The mean dose of midazolam and pethidine administered was 4.9 ± 1.8 and 47.7 ± 20 mg, respectively. During the period of conscious sedation, no patient depicted episode of sustained apnea. No major complication or mortality was reported. Conclusion: Use of conscious sedation as an alternative to the use of general anesthesia for DFT testing during ICD implantation is found to be feasible, safe and effective, with an added advantage of reduced procedural time and improved patient compliance. abstract_id: PUBMED:28365219 Study of the efficacy of melatonin for auditory brainstem response (ABR) testing in children. Study Objective: To evaluate the efficacy of oral melatonin to induce sedation (natural sleep) while measuring auditory brainstem responses (ABR) in children with suspected hearing loss by calculating the ABR success rate with the use of melatonin, the percentage of children in whom sleep was induced by melatonin and the time required to perform complete ABR testing. Material And Methods: Twenty-nine children aged 1 to 6 years were included in this prospective study. ABR testing was performed as a day-case procedure in a single room ensuring suitable sleeping conditions. Five milligrams of melatonin (or 10mg in children over the age of 6 years) were administered orally before electrode positioning. An additional 5 milligrams were administered if the child failed to sleep after 30minutes. Results: ABR was performed with melatonin in 58 ears: complete ABR testing was performed in both ears in 19 children (65%), corresponding to 38 examinations; only one ear was tested in 7 children (24.2%), corresponding to 7 examinations; and ABR testing could not be performed in 3 children (10.3% failure rate). Good quality sleep was obtained in 17 children (59%), brief sleep was obtained in 4 children (14%), who nevertheless remained calm throughout the examination and 8 children (27%) did not fall asleep. The mean sleep-onset time after administration of melatonin was 41minutes with a mean sleep time of 33minutes and a mean examination time of 52minutes after the first dose of melatonin. A complete examination therefore took an average of 93minutes. Conclusion: The use of melatonin allowed partial or complete ABR testing in 90% of children. This method is effective, but it requires adapted facilities to receive the child and the child's parents. abstract_id: PUBMED:25928417 Early deep sedation is associated with decreased in-hospital and two-year follow-up survival. Introduction: There is increasing evidence that deep sedation is detrimental to critically ill patients. The aim of this study was to examine effects of deep sedation during the early period after ICU admission on short- and long-term survival. Methods: In this observational, matched-pair analysis, patients receiving mechanical ventilation that were admitted to ICUs of a tertiary university hospital in six consecutive years were grouped as either lightly or deeply sedated within the first 48 hours after ICU admission. The Richmond Agitation-Sedation Score (RASS) was used to assess sedation depth (light sedation: -2 to 0; deep: -3 or below). Multivariate Cox regression was conducted to investigate the impact of early deep sedation within the first 48 hours of admission on in-hospital and two-year follow-up survival. Results: In total, 1,884 patients met inclusion criteria out of which 27.2% (n = 513) were deeply sedated. Deeply sedated patients had longer ventilation times, increased length of stay and higher rates of mortality. Early deep sedation was associated with a hazard ratio of 1.661 (95% CI: 1.074 to 2.567; P = 0.022) for in-hospital survival and 1.866 (95% CI: 1.351 to 2.576; P &lt; 0.001) for two-year follow-up survival. Conclusions: Early deep sedation during the first 48 hours of intensive care treatment was associated with decreased in-hospital and two-year follow-up survival. Since early deep sedation is a modifiable risk factor, this data shows an urgent need for prospective clinical trials focusing on light sedation in the early phase of ICU treatment. abstract_id: PUBMED:31743173 Using Pupillary Pain Index to Assess Nociception in Sedated Critically Ill Patients. Background: Pupillary reflex dilation is a reliable indicator of response to noxious stimulation. In a proof of concept study, we investigated the performance of pupillary pain index, a new score derived from pupillary reflex dilation measurements, to predict nociceptive response to endotracheal suctioning in sedated critically ill patients. Methods: Twenty brain-injured and 20 non-brain-injured patients were studied within 48 hours of admission (T1) in the intensive care unit and at 48-72 hours later (T2). Video-based pupillometer was used to determine pupillary reflex dilation during tetanic stimulation. The tetanic stimulation (100 Hz) was applied to the skin area innervated by the ulnar nerve and was stepwise increased from 10 to 60 mA until pupil size had increased by 13% compared to baseline. The maximum intensity value allowed the determination of a pupillary pain index score ranging from 1 (no nociception) to 9 (high nociception). The Behavioral Pain Scale response to endotracheal suctioning was measured thereafter. Results: Behavioral Pain Scale responses to endotracheal suctioning and pupillary pain index scores were positively correlated at T1 and T2 (both P &lt; .01). After adjustments for repeated measurements and group of patients, the area under the receiver operating characteristic curve of pupillary pain index to predict Behavioral Pain Scale response to endotracheal suctioning was of 0.862 (95% CI, 0.714-0.954). In the combined set of patients, a pupillary pain index score of ≤4 could predict no nociceptive response to endotracheal suctioning with a sensitivity of 88% (95% CI, 68%-97%) and a specificity of 79% (95% CI, 66%-88%). By contrast with endotracheal suctioning, tetanic stimulation had no effect on intracranial pressure in the brain-injured group. Conclusions: These results are a proof of concept. The nociceptive response to endotracheal suctioning could be accurately predicted using the determination of pupillary pain index score in sedated critically ill patients whether they have brain injury or not. Answer: Based on the provided abstracts, deep sedation does not impact the results of 48 hours catheterless pH testing. Specifically, a study that retrospectively reviewed the charts of 197 patients who underwent Bravo esophageal pH monitoring with propofol sedation found no statistically significant difference in reflux episodes between day 1 and day 2 of the pH testing (PUBMED:21455329). This suggests that patients who received propofol for sedation during esophagogastroduodenoscopy (EGD) with Bravo pH capsule placement did not experience a significant difference in acid reflux variables measured, such as the total, upright, and supine fractions of time at pH < 4 and DeMeester score, from the first to the second day of monitoring. Therefore, deep sedation with propofol does not appear to impact the results of 48-hour catheterless pH testing.
Instruction: Are spirituality and religiosity resources for patients with chronic pain conditions? Abstracts: abstract_id: PUBMED:19284487 Are spirituality and religiosity resources for patients with chronic pain conditions? Objective: We studied whether or not spirituality/religiosity is a relevant resource for patients with chronic pain conditions, and to analyze interrelations between spirituality/religiosity (SpREUK Questionnaire; SpREUK is an acronym of the German translation of "Spiritual and Religious Attitudes in Dealing with Illness"), adaptive coping styles that refer to the concept of locus of disease control (AKU Questionnaire; AKU is an acronym of the German translation of "Adaptive Coping with Disease"), life satisfaction, and appraisal dimensions. Patients: In a multicenter cross-sectional study, 580 patients with chronic pain conditions were enrolled. Results: We found that the patients relied on both external powerful sources of disease control and on internal powers and virtues, while Trust in Higher Source (intrinsic religiosity) or Illness as Chance (reappraisal) were valued moderately; Search for Meaningful Support/Access (spiritual quest orientation) was of minor relevance. Stepwise regression analyses revealed that the internal sources of disease control, such as Conscious and Healthy Way of Living and Positive Attitudes, were (apart from the religious denomination) the strongest predictors of patients' reliance on spirituality/religiosity. Both behavioral styles were rated significantly lower in patients who regarded themselves as neither religious nor spiritual. Positive disease interpretations such as Challenge and Value were clearly associated with a spiritual quest orientation and intrinsic religiosity. Conclusion: The associations between spirituality/religiosity, positive appraisals. and internal adaptive coping strategies indicate that the utilization of spirituality/religiosity goes far beyond fatalistic acceptance, but can be regarded as an active coping process. The findings support the need for further research concerning the contributions of spiritual coping in adjustment to chronic pain. abstract_id: PUBMED:33484390 Relevance of Religiosity for Coping Strategies and Disability in Patients with Fibromyalgia Syndrome. Coping strategies are essential for the outcome of chronic pain. This study evaluated religiosity in a cohort of patients with fibromyalgia syndrome (FMS), its effect on pain and other symptoms, on coping and FMS-related disability. A total of 102 FMS patients were recruited who filled in questionnaires, a subgroup of 42 patients participated in a face-to-face interview, and data were evaluated by correlation and regression analyses. Few patients were traditionally religious, but the majority believed in a higher existence and described their spirituality as "transcendence conviction". The coping strategy "praying-hoping" and the ASP dimension "religious orientation" (r = 0.5, P &lt; 0.05) showed a significant relationship independent of the grade of religiosity (P &lt; 0.05). A high grade of belief in a higher existence was negatively associated with the choice of ignoring as coping strategy (r = - 0.4, P &lt; 0.05). Mood and affect-related variables had the highest impact on disability (b = 0.5, P &lt; 0.05). In this cohort, the grade of religiosity played a role in the choice of coping strategies, but had no effects on health and mood outcome. abstract_id: PUBMED:30406891 Spirituality Needs in Chronic Pain Patients: A Cross-Sectional Study in a General Hospital in Malaysia. Evidence-based practices that rely upon pain relieving medications and interventional strategies for symptom alleviation in chronic pain survivors have shown modest benefits. The recent emphasis of spiritual care as a new dimension of treatment strategy incorporated within the biopsychosocial model has inspired new hopes to mediate mental and physical health for illness coping. This study aimed to explore the factors associated with spirituality needs among chronic pain patients in a general hospital in Malaysia. An analytical cross-sectional study was conducted among 117 chronic pain patients in a general hospital in Malaysia. Clinical features and assessments were evaluated by an experienced pain physician and retrieved from patient medical records. An interviewer-administered questionnaire that consisted of items on socio-demographics, the validated 19-items spiritual needs questionnaire and the visual analog scale was utilized. Multivariate linear regression analysis was conducted to identify the factors associated with spiritual needs in chronic pain patients. Patients had higher actively giving score as compared to other spirituality need domains. Central neuropathic pain (β = 1.691, p = 0.040) predicted existential. Renal problems (β = 5.061, p = 0.019) highly predicted religiosity; followed by head pain (β = 3.774, p = 0.036), central neuropathic pain (β = 2.730, p = 0.022), heart problems (β = 1.935, p = 0.041), income (β = 0.001, p = 0.003), living arrangement (β = - 3.045, p = 0.022), face (β = - 3.223, p = 0.005) and abdominal (β = - 4.745, p = 0.0001) pains. Predictors of inner peace include renal problems (β = 3.752, p = 0.021), shoulder pain (β = 1.436, p = 0.038) and pain duration (β = - 0.012, p = 0.027). Predictors of actively giving were renal problems (β = 3.803, p = 0.001), central neuropathic pain (β = 1.448, p = 0.017), heart problems (β = 1.004, p = 0.042), income (β = 0.001, p = 0.0001), age (β = - 0.046, p = 0.004) and abdominal pain (β = - 2.617, p = 0.0001). Chronic pain patients had higher actively giving score compared to other spirituality needs. Their spirituality needs were significantly influenced by pain type, duration and site, co-existing medical conditions and socio-demographics. abstract_id: PUBMED:31535274 Does Religiosity/Spirituality Play a Role in Function, Pain-Related Beliefs, and Coping in Patients with Chronic Pain? A Systematic Review. This systematic review examined the extent to which measures of religiosity/spirituality (R/S): (1) are associated with pain, function, pain-related beliefs (beliefs), coping responses, and catastrophizing in people with chronic pain; and (2) moderate the association between beliefs, coping and catastrophizing, and pain and function. Experimental and observational studies examining at least one of these research questions in adults with chronic pain were eligible. Two reviewers independently performed eligibility screening, data extraction, and quality assessment. Twenty studies were included. Most studies focused on the association between R/S and pain or function. When significant associations emerged, those between R/S and psychological function were weak to strong and positive; those between religious/spiritual well-being and pain and physical dysfunction were negative, but weak. Few studies examined the associations between R/S and beliefs/coping/catastrophizing; none examined the moderation role of R/S. The findings suggest that R/S is associated with pain and psychological function in people with chronic pain, and that viewing oneself as being "spiritual," regardless of religion, may contribute to positive psychological adjustment. More research is needed to determine the reliability of this finding. PROSPERO registry CRD42018088803. abstract_id: PUBMED:33886455 Relationship between religiosity, spirituality and physical and mental outcomes in fibromyalgia patients. Objectives: The coping mechanisms utilised by patients with the fibromyalgia syndrome (FM) pose a crucial focus of treatment. Previous research points to the positive effects of religiosity and spirituality (R/S) as tools for coping with illness. The role of these factors in coping with chronic pain in FM has not previously been studied. The aim of this study was to evaluate the link between R/S and FM outcomes. Methods: Fifty-five FM patients (ACR criteria) attending a tertiary rheumatology clinic completed a packet of questionnaires assessing demographic data, levels of religiosity and spirituality (SpREUK) and locus of control (LOC). These variables were then individually assessed for influence on FM outcome measures, using the Fibromyalgia Impact Questionnaire (FIQ), the SF-36, and the Beck Depression Index (BDI). Results: A high score on SpREUK I (search for meaningful support) was negatively correlated with the Role-Physical (p=0.032) and Role-Emotional (p&lt;0.005) scales on SF-36. Secular patients scored higher on SF-36 domains of "Role limitation due to emotional health" and "General health" (p&lt;0.05). Employment demonstrated a positive correlation with the FIQ (p&lt;0.01), the BDI (p&lt;0.001), and the SF-36 (p&lt;0.05). Physical activity correlated positively with BDI scores (p=0.012) and better scores on SF-36: energy/fatigue (p=0.024), social-functioning (p=0.014) and physical-functioning (p&lt;0.01). No significant correlation was found between LOC (internal versus external) and FM outcomes. No significant correlation was found between SpREUK domains and the BDI. Conclusions: FM patients do not appear to benefit from high levels of R/S. Physicians should be aware of the impact of R/S on well-being in this population. abstract_id: PUBMED:34220588 Do Chronic Pain Patients Wish Spiritual Aspects to Be Integrated in Their Medical Treatment? A Cross-Sectional Study of Multiple Facilities. Background: Chronic pain is a complex, multidimensional experience. Spirituality is hypothesized to impact pain experience in various ways. Nevertheless, the role that spirituality plays in multimodal pain therapy remains controversial and, to date, quantitative data on whether and for which patients spiritual aspects should be considered in the treatment of chronic pain is lacking. The aim of this study was thus to investigate the proportion and characteristics of patients with chronic pain who wish spiritual aspects to be integrated in their treatment. Methods: Two hundred nine patients with chronic pain were recruited from five inpatient departments and outpatient clinics in the German-speaking part of Switzerland. Patients filled out validated questionnaires, such as the Hospital Anxiety and Depression Scale (HADS), the Resilience Scale (RS-11), the Spiritual and Religious Attitudes in Dealing with Illness (SpREUK), and the 12-item Spiritual Well-Being Scale (FACIT-Sp-12). Results: More than 60% (CI95%: 55.5-67.9%) of the patients wanted to address spiritual aspects in their treatment. These patients were significantly younger, had higher levels of education, and suffered from more frequent and more severe pain than patients who did not wish to address spiritual aspects. Furthermore, there were high correlations with existing spiritual resources and higher scores of spirituality. Conclusions: These results confirm that the majority of chronic pain patients wish spiritual aspects to be considered in their treatment. Additionally, the finding that these patients had more spiritual resources underlines the importance of integrating spiritual aspects in a resource-oriented, patient-centered care approach for this condition. abstract_id: PUBMED:19480361 The BENEFIT through spirituality/religiosity scale--a 6-item measure for use in health outcome studies. Objective: We intended to develop a brief and compact scale which measures the beneficial effects of spirituality/religiosity (SpR) on several dimensions of patients' life concerns, to be used in health outcome studies. Method: To attain a short measure valid for spiritual and religious dimensions, we designed a scale based on tested items of an independent item pool of the already established SpREUK inventory, which measures SpR attitudes and convictions and holds an independent data pool addressing the support of life concerns through SpR. To validate the intended BENEFIT scale, reliability and factor analyses were performed utilizing 371 individuals with different chronic diseases recruited in various medical centers in Germany (exploratory analysis), while for the confirmatory analyses we used an independent pool of 229 patients with chronic pain conditions. Results: Six items addressed the beneficial effects of SpR of the patients' concerns, and thus were chosen for the BENEFIT scale. These items had a good internal reliability (Cronbach's alpha = 0.922). Primary factor analysis pointed to a 1-factor solution, which explained 72% of variance. Correlation analyses revealed that the BENEFIT scale correlated with both uniquely religious and spiritual attitudes. Conclusions: The data demonstrate the reliability and validity of the 6-item BENEFIT scale that captures a unique aspect of SpR that can be used in epidemiological studies and clinical trials for those who wish to assess both the external and internal dimensions of SpR. This brief instrument can be easily incorporated into almost any study of mental health, physical health, or quality of life. abstract_id: PUBMED:18765136 Religiousness and spirituality in fibromyalgia and chronic pain patients. The influence of psychosocial factors on pain experience and patient response has received increasing interest and recognition. Patients with chronic pain from several sources (eg, musculoskeletal, cancer, or sickle cell) usually report that religiousness and spirituality are important in their lives. Prayer is the most used complementary therapy; religious coping is among the most common strategies used to deal with pain. Religious variables are not usually associated with pain measures, except in some studies indicating that petitionary prayer is related to higher pain levels, possibly suggesting a turning to religion due to increasing pain. The best available evidence supports a positive association between religiousness and spirituality, with higher well-being and positive affect, and a negative association with depressive and anxiety symptoms. We discuss the importance of addressing spiritual issues in clinical practice, and increasing and improving research on religiousness/spirituality in chronic pain patients. abstract_id: PUBMED:17541817 Exploring the relationship between spirituality, coping, and pain. There is growing recognition that persistent pain is a complex and multidimensional experience stemming from the interrelationship among biological, psychological, social, and spiritual factors. Chronic pain patients use a number of cognitive and behavioral strategies to cope with their pain, including religious/spiritual forms of coping, such as prayer, and seeking spiritual support to manage their pain. This article will explore the relationship between the experience of pain and religion/spirituality with the aim of understanding not only why some people rely on their faith to cope with pain, but also how religion/spirituality may impact the experience of pain and help or hinder the coping process. We will also identify future research priorities that may provide fruitful research in illuminating the relationship between religion/spirituality and pain. abstract_id: PUBMED:23870113 Spiritual needs among patients with chronic pain diseases and cancer living in a secular society. Objective: Research has shown that several patients report unmet psychosocial and spiritual needs. While most studies focus on patients with advanced stages of disease, we intended to identify unmet spiritual needs in patients with chronic pain diseases and cancer living in a secular society. Methods: In an anonymous cross-sectional study, standardized questionnaires were provided to German patients with chronic pain diseases (and cancer), i.e., Spiritual Needs Questionnaire (SpNQ), Spirituality/Religiosity and Coping (SpREUK-15), Spiritual Well-being (FACIT-Sp), Brief Multidimensional Life Satisfaction Scale, Interpretation of Illness Questionnaire, and Escape from Illness (Escape). Results: We enrolled 392 patients (67% women, mean age 56.3 ± 13.6 years; 61% Christian denomination) with chronic pain diseases (86%) and cancer (14%). Religious Needs (mean score 0.5 ± 0.8 on the scale) and Existential Needs (0.8 ± 0.8 on the scale) were low, while needs for Inner Peace (1.5 ± 0.9 on the scale) and Giving/Generativity were scored high (1.3 ± 1.0 on the scale). Regression analyses indicated that Religious Needs can be predicted best by (religious) "Trust," the illness interpretation "call for help," and living with a partner; Existential Needs can be predicted by "call for help" and to a weaker extent by (religious) "Trust." Existential Needs are influenced negatively by the illness interpretation "challenge." Needs for Inner Peace were predicted only in trend by the illness interpretation "threat," and there were no significant predictors for the Giving/Generativity needs in the respective regression model. Conclusions: Patients with chronic pain diseases predominantly report needs related to inner peace and generative relatedness on a personal level, whereas needs related to transcendent relatedness were of minor relevance. Nevertheless, even religious "skeptics" can express specific religious needs, and these should be recognized. Addressing patients' specific needs and also supporting them in their struggle with chronic illness remain a challenging task for the modern health care system. Answer: Yes, spirituality and religiosity are resources for patients with chronic pain conditions. Studies have found that patients with chronic pain often rely on spirituality and religiosity as coping mechanisms. For instance, a multicenter cross-sectional study involving 580 patients with chronic pain conditions revealed that patients valued both external powerful sources of disease control and internal powers and virtues, with Trust in Higher Source (intrinsic religiosity) and Illness as Chance (reappraisal) being moderately valued. The study also found that spirituality/religiosity was associated with positive appraisals and internal adaptive coping strategies, indicating that it can be regarded as an active coping process rather than fatalistic acceptance (PUBMED:19284487). Another study on patients with fibromyalgia syndrome (FMS) showed that religiosity played a role in the choice of coping strategies, although it did not affect health and mood outcomes (PUBMED:33484390). Similarly, a systematic review found that religiosity/spirituality (R/S) is associated with pain and psychological function in people with chronic pain, suggesting that viewing oneself as spiritual, regardless of religion, may contribute to positive psychological adjustment (PUBMED:31535274). A study in Malaysia found that chronic pain patients had higher actively giving scores compared to other spirituality needs, and their spirituality needs were significantly influenced by pain type, duration and site, co-existing medical conditions, and socio-demographics (PUBMED:30406891). In contrast, a study on fibromyalgia patients indicated that high levels of R/S did not appear to benefit the patients, suggesting that the impact of R/S on well-being in this population may vary (PUBMED:33886455). Furthermore, a cross-sectional study in Switzerland found that more than 60% of chronic pain patients wanted to address spiritual aspects in their treatment, with these patients being younger, more educated, and suffering from more frequent and severe pain than those who did not wish to address spiritual aspects (PUBMED:34220588). Overall, these studies support the notion that spirituality and religiosity are important resources for many patients with chronic pain conditions, although the extent of their benefit may vary among individuals and conditions.
Instruction: Is Obesity Associated With School Dropout? Abstracts: abstract_id: PUBMED:26331748 Is Obesity Associated With School Dropout? Key Developmental and Ethnic Differences. Background: We aimed to expand the literature on child obesity and school outcomes by examining associations between obesity and high school dropout, including the role of obesity onset and duration as well as ethnicity. Methods: Data on 5066 children obtained between 1986 and 2010 from the child cohort of the 1979 National Longitudinal Study of Youth (NLSY79) were analyzed. Group-based trajectory analysis identified obesity trajectories from 6 to 18 years. School completion information from age 14 into young adulthood was used to calculate school dropout. Chi-square and pairwise comparison tests were used to identify significant associations between obesity trajectories and school dropout. Results: Adolescents belonging to an increasing trajectory (adolescent-onset obesity) had a higher likelihood of dropping out of high school compared with those belonging to chronic, decreasing (childhood-only obesity), and nonobese trajectories. This association was particularly salient among white adolescents. Conclusions: Obesity onset during early adolescence increased risk of high school dropout. White adolescents were particularly vulnerable. Given that early adolescence is marked by significant biological and social changes, future research should seek to identify the underlying processes linking adolescent obesity and school dropout to decrease school dropout risk among this vulnerable population. abstract_id: PUBMED:33004301 Factors associated with dropout in a lifestyle modification program for weight management in children and adolescents. Background: Significant dropout rates remain a serious concern in pediatric weight control program, but few studies have identified predictors of dropout. Aims: The objective of the study is to identify factors associated with dropout from a pediatric lifestyle modification weight control program at different phases. Methods: Data on overweight and obese participants (n = 242) aged 11-18 years in the Intervention for Childhood and Adolescent Obesity via Activity and Nutrition (ICAAN) study were collected at baseline, 6-months, and 24-months through self-report and a laboratory test. Logistic regression analysis was performed for those who dropped out during the first 6-months, and multivariate generalized estimating equation analysis identified longitudinal factors associated with those who dropped out after 24 months. Results: Lower family functioning (OR = 2.30, 95% CI [1.18-4.46]), exercise group (OR = 0.36, 95% CI [0.15-0.86]), lower initial attendance rate (OR = 6.09, 95% CI [2.94-12.6]), and non-self -referral pathways (OR = 2.35, 95% CI [1.05-5.27]) were significantly associated with 6-month dropouts. For late dropout, lower family functioning (OR = 1.71, 95% CI [1.06-2.77]) and lower initial attendance rates (OR = 2.06, 95% CI [1.12-3.81]) remained significant. Conclusion: Family function and initial attendance rate were associated with lower dropout rates. Developing a supportive family environment and focusing on the early-stage factors at the intervention's outset may reduce overall dropout rates in obesity prevention intervention. abstract_id: PUBMED:24611111 Early dropout predictive factors in obesity treatment. Diet attrition and failure of long term treatment are very frequent in obese patients. This study aimed to identify pre-treatment variables determining dropout and to customise the characteristics of those most likely to abandon the program before treatment, thus making it possible to modify the therapy to increase compliance. A total of 146 outpatients were consecutively enrolled; 73 patients followed a prescriptive diet while 73 followed a novel brief group Cognitive Behavioural Treatment (CBT) in addition to prescriptive diet. The two interventions lasted for six months. Anthropometric, demographic, psychological parameters and feeding behaviour were assessed, the last two with the Italian instrument VCAO Ansisa; than, a semi-structured interview was performed on motivation to lose weight. To identify the baseline dropout risk factors among these parameters, univariate and multivariate logistic models were used. Comparison of the results in the two different treatments showed a higher attrition rate in CBT group, despite no statistically significant difference between the two treatment arms (P = 0.127). Dropout patients did not differ significantly from those who did not dropout with regards to sex, age, Body Mass Index (BMI), history of cycling, education, work and marriage. Regardless of weight loss, the most important factor that determines the dropout appears to be a high level of stress revealed by General Health Questionnaire-28 items (GHQ-28) score within VCAO test. The identification of hindering factors during the assessment is fundamental to reduce the dropout risk. For subjects at risk, it would be useful to dedicate a stress management program before beginning a dietary restriction. abstract_id: PUBMED:38397317 Treating Children and Adolescents with Obesity: Predictors of Early Dropout in Pediatric Weight-Management Programs. Background: Possible therapeutic failure of pediatric obesity is influenced by the high dropout rate. The aim of this study was to evaluate the rate of dropout and the rate of weight loss over the 24 months of follow-up. Methods: The retrospective, single-center study, involved 489 patients followed for obesity in the period 2016-2020. Patients' auxological data and blood samples were collected during the first (V1) and last visit (V2). Dropout was defined as a follow-up of less than 12 months and/or including less than one visit every 6 months. Patients were divided into two groups and compared: Group A of dropout (297 patients) and Group B of non-dropout (192 patients). Results: In the follow-up period, which had a mean duration of 24 months, the dropout rate was 60.7%. In Group A, the percentage of patients with BMI ≥ 3 SD at V2 was significantly higher than that in Group B. In Group B, the percentage of patients with pathological HOMA-IR and with fasting glucose &gt;100 mg/dL was higher than group A. The probability of dropout was positively associated with pubertal stage and negatively with impaired fasting glycemia and pathological insulinemia at V1. Conclusion: The study demonstrated a high dropout rate during follow-up, mainly among adolescents and patients with no glucometabolic alterations. abstract_id: PUBMED:24528102 Factors associated with dropout in a group weight-loss programme: a longitudinal investigation. Background: Attrition is a major limitation of most weight management intervention programmes. The present study aimed to conduct an extensive investigation of personal, sociodemographic and treatment-related factors associated with attrition at different stages of a 10-week group weight-loss programme. Methods: The present study is part of a longitudinal, clinical intervention study comparing three conditions in a 10-week group programme: Control, Behavioural Intentions and Implementation Intentions. The study included 587 participants with a mean (SD) age of 46 (11) years (range 18-78 years) and a mean (SD) body mass index (BMI) of 31.9 (5.5) kg m(-2), with 90% being female. To characterise dropout in each week separately, as well as overall dropout (dropout until week 9, the median time of dropout), we tested several logistic regression models, including multiple imputations to cope with missing data. Results: The results of the different dropout models consistently showed that a smaller reduction in BMI in the first 2 weeks of the programme was the strongest predictor of dropout. Dropout in the tenth and last week differed from the earlier weeks both in the relatively high dropout rate (56% of total dropout) and in that, in contrast to earlier weeks, the week 10 model included the reduction in BMI during the last 2 weeks before dropout but did not include the reduction in BMI at the initiation of the intervention. Conclusions: Weight-loss in the beginning of the programme is a crucial independent determinant of dropout in each week except the last one. This finding is important because it suggests a simple assessment for a major dropout risk factor in adult weight-loss intervention programmes. abstract_id: PUBMED:24086408 Adolescent health and high school dropout: a prospective cohort study of 9000 Norwegian adolescents (the Young-HUNT). Background: High school dropout is of major concern in the western world. Our aims were to estimate the risk of school dropout in adolescents following chronic somatic disease, somatic symptoms, psychological distress, concentration difficulties, insomnia or overweight and to assess to which extent the family contributes to the association between health and school dropout. Methods: A population of 8950 school-attending adolescents (13-21 years) rated their health in the Young-HUNT 1 Study (90% response rate) in 1995-1997. High school dropout or completion, was defined with the Norwegian National Education Database in the calendar year the participant turned 24 years old. Parental socioeconomic status was defined by using linkages to the National Education Database, the National Insurance Administration and the HUNT2 Survey. We used logistic regression to estimate odds ratios and risk differences of high school dropout, both in the whole population and among siblings within families differentially exposed to health problems. Results: All explored health dimensions were strongly associated with high school dropout. In models adjusted for parental socioeconomic status, the risk differences of school dropout according to health exposures varied between 3.6% (95% CI 1.7 to 5.5) for having ≥ 1 somatic disease versus none and 11.7% (6.3 to 17.0) for being obese versus normal weight. The results from the analyses comparing differentially exposed siblings, confirmed these results with the exception of weaker associations for somatic diseases and psychological distress. School dropout was strongly clustered within families (family level conditional intraclass correlation 0.42). Conclusions: Adolescent health problems are markers for high school dropout, independent of parental socioeconomic status. Although school dropout it strongly related to family-level factors, also siblings with poor health have reduced opportunity to complete high school compared to healthy siblings. Public health policy should focus on ensuring young people with poor health the best attainable education. abstract_id: PUBMED:29867305 Predictors of decline in self-reported health: addressing non-ignorable dropout in longitudinal studies of aging. Predictors of decline in health in older populations have been investigated in multiple studies before. Most longitudinal studies of aging, however, assume that dropout at follow-up is ignorable (missing at random) given a set of observed characteristics at baseline. The objective of this study was to address non-ignorable dropout in investigating predictors of declining self-reported health (SRH) in older populations (50 years or older) in Sweden, the Netherlands, and Italy. We used the SHARE panel survey, and since only 2895 out of the original 5657 participants in the survey 2004 were followed up in 2013, we studied whether the results were sensitive to the expectation that those dropping out have a higher proportion of decliners in SRH. We found that older age and a greater number of chronic diseases were positively associated with a decline in self-reported health in the three countries studies here. Maximum grip strength was associated with decline in self-reported health in Sweden and Italy, and self-reported limitations in normal activities due to health problems were associated with decline in self-reported health in Sweden. These results were not sensitive to non-ignorable dropout. On the other hand, although obesity was associated with decline in a complete case analysis, this result was not confirmed when performing a sensitivity analysis to non-ignorable dropout. The findings, thereby, contribute to the literature in understanding the robustness of longitudinal study results to non-ignorable dropout while considering three different population samples in Europe. abstract_id: PUBMED:35711533 Two, Six, and Twelve-Month Dropout Rate and Predictor Factors After a Multidisciplinary Residential Program for Obesity Treatment. A Prospective Cohort Study. Introduction: The aim of the present study was to assess the dropout rate at 2, 6, and 12 months after an inpatient multidisciplinary residential program (MRP) for the treatment of obesity. Furthermore, this study assessed anthropometric and biochemical predictors associated with the dropout. Methods: Adult and elderly patients (age 59 ± 14 years) with obesity had undergone an MRP, were followed up from 2 to 12 months. Biochemical and anthropometric markers have been assessed at the beginning of the follow-up period after the MRP. Results: The study enrolled 178 subjects, 117 women and 61 men. The overall dropout rate at 2 months was 21.3%, after 6 months was 44.4%, and after 1 year was 68.5%. There was no difference by gender recorded. Furthermore, patients under medical treatment with psychiatric disorders did not show an association with the dropout rate. Patients with a higher level of body mass index (BMI) at the discharge of MRP showed +48% of dropout at 6 months. After the MRP, the baseline values of uricemia and white blood cells (WBCs) resulted as predictors of dropout at 2 months (p &gt; 0.05). Furthermore, the excess percentage of fat mass lost during the MRP was associated with the risk of dropout at 2, 6, and 12 months (p &gt; 0.05). Conclusion: The MRP for obesity is an opportunity for losing weight for patients with established criteria. The future challenge will be addressing the best strategic plans in order to reduce the dropout rate after this intervention. Investigating deeply the main predictors could be an opportunity to improve the long-term efficacy of MRP. abstract_id: PUBMED:29939230 Predictors of dropout in the school-based multi-component intervention, 'Mexa-se'. To identify the predictors of dropout in the 'Mexa-se' intervention according to the body mass index (BMI) category. This was a controlled, non-randomized study. The intervention included: (i) increase in the intensity of physical activities (PA) in physical education (PE) classes; (ii) active recess; (iii) educational sessions on PA, nutrition and body image; and (iv) educational materials. Dropout was considered when students dropped out of intervention, or did not reach 75% attendance in PE classes. The independent variables were gender, age, study period, socioeconomic status, BMI, PA, screen time, food consumption, health perception, attitudes toward PA, self-efficacy for PA, perception of the school environment, body image and self-esteem. Binary logistic regression analysis was used. The dropout rate was 26.8%. In the total sample and among students with an adequate BMI, there was a greater probability of dropout with an increase in age. For overweight students, increased age and socioeconomic status, and studying in the afternoon period were predictors of dropout from the intervention. Socio-demographic factors were predictors of dropout from the 'Mexa-se' intervention; the associated factors differed based on the BMI category. abstract_id: PUBMED:35669034 Determinants of Dropout from and Variation in Adherence to an Exercise Intervention: The STRRIDE Randomized Trials. Purpose: This study aimed to characterize the timing and self-reported determinants of exercise dropout among sedentary adults with overweight or obesity. We also sought to explore variations in adherence among individuals who completed a 6- to 8-month structured exercise intervention. Methods: A total of 947 adults with dyslipidemia [STRRIDE I, STRRIDE AT/RT] or prediabetes [STRRIDE-PD] were enrolled to either control or to one of 10 exercise interventions, ranging from doses of 8-23 kcal/kg/week; intensities of 50%-75% V̇O2 peak; and durations of 6-8 months. Two groups included resistance training and one included dietary intervention (7% weight loss goal). Dropout was defined as an individual who withdrew from the study due a variety of determinants. Timing of intervention dropout was defined as the last session attended and categorized into phases. Exercise training adherence was calculated by dividing weekly minutes or total sets of exercise completed by weekly minutes or total sets of exercise prescribed. General linear models were used to characterize the associations between timing of dropout and determinant category. Results: Compared to exercise intervention completers (n=652), participants who dropped out (n=295) were on average non-white (98% vs. 80%, p&lt;0.01), had higher body mass index (31.0 kg/m2 vs. 30.2 kg/m2; p&lt;0.01), and were less fit at baseline (25.0 mg/kg/min vs. 26.7 ml/kg/min, p&lt;0.01). Of those who dropped out, 67% did so prior to the start of or while ramping up to the prescribed exercise volume and intensity. The most commonly reported reason for dropout was lack of time (40%). Notably, among individuals who completed the ramp training period, subsequent exercise intervention adherence did not waiver over the ensuing 6-8 months of training. Conclusion: These findings are some of the first to delineate associations between the timing of dropout and dropout determinants, providing guidance to future exercise interventions to better support individuals at-risk for dropout. Answer: Yes, obesity is associated with school dropout. Adolescents belonging to an increasing trajectory (adolescent-onset obesity) had a higher likelihood of dropping out of high school compared with those belonging to chronic, decreasing (childhood-only obesity), and nonobese trajectories. This association was particularly salient among white adolescents (PUBMED:26331748). Additionally, factors such as lower family functioning and lower initial attendance rates were associated with higher dropout rates from pediatric lifestyle modification weight control programs, suggesting that family environment and early engagement in interventions are important for reducing dropout rates in obesity prevention interventions (PUBMED:33004301).
Instruction: Is Montgomery tracheal Safe-T-Tube clinical failure induced by biofilm? Abstracts: abstract_id: PUBMED:23649498 Is Montgomery tracheal Safe-T-Tube clinical failure induced by biofilm? Objectives: Montgomery Safe-T-Tube deterioration and early biofilm colonization may explain the discomfort claimed by many patients and clinical failures. The aim of the study was to analyze the deterioration of Montgomery Safe-T-Tube morphological and mechanical properties in vivo in 16 patients by using microbiological methods, optical and electron microscopy, and engineering tests. Study Design: Prospective controlled study at a single medical center. Setting: University hospital. Subjects And Methods: The study, conducted from April 2007 to February 2012 at the "Sapienza" University of Rome, was designed to collect 2 Montgomery Safe-T-Tubes from each patient. The first was removed 3 to 15 days after insertion (group A) and the second at least 90 days after (group B). Specimens underwent microbiologic assays, electron microscopic analysis, immunocytologic analysis, and mechanical tests. Results: Microorganisms were not isolated in 2 group A cases (12%), whereas they were in all group B cases. Biofilm was identified in 11 of 16 (69%) group A samples and in 16 of 16 (100%) group B samples (P = .0149) using scanning electron microscopy. Immunohistochemistry showed monocyte-granulocyte line cells producing interleukin-1β on the external surfaces of Montgomery Safe-T-Tubes. The tensile test showed that the wear related to the longer period of use makes Montgomery Safe-T-Tubes more rigid than newer ones. Conclusion: Early biofilm colonization takes place in Montgomery Safe-T-Tubes in most cases. The mechanical decay could be justified in part by the destructive biofilm activity and by the release of inflammatory effectors and enzymes. abstract_id: PUBMED:30625769 The use of the Montgomery T-tube in postprocedural subglottic stenosis repair. The Montgomery T-tube is a device that supports the trachea and facilitates ventilation after laryngotracheal surgery. Because its use is sporadic, many anesthesiologists may not be familiar with this device, and its anesthetic management requires careful planning and discussion with surgeons. We describe the techniques of anesthetic management of exchanging endotracheal tube to Montgomery T-tube in case where upper tracheal stenosis exists. abstract_id: PUBMED:24529501 Successful management of metallic expandable stent-associated tracheal restenosis by Montgomery T-tube insertion. Objective: Tracheal stenosis can be life-threatening and has been seen with an increasing frequency. Methods: Metallic stent-associated stenosis remains a therapeutic challenge because of difficulty in tracheostomy. Results: Here, we report a case with repeated episodes of restenosis after insertion of metallic stents for benign tracheal stenosis which was successfully managed by Montgomery silicone T-tube placement. Conclusion: We further reviewed the literature of reported cases of inappropriate use of metallic stent for benign diseases. abstract_id: PUBMED:14535218 The Montgomery T-tube tracheal stent. The Montgomery T-tube is a valuable tracheal stent that provides a functional airway while supporting the tracheal mucosa. It is used in benign and malignant tracheal diseases and provides symptomatic relief to the majority of the patients. T-tubes are simple to insert and rarely cause serious complications. The use of T-tubes continues to gain popularity with the increasing incidence of benign trachea stenosis following the use of artificial airways. Physicians dealing with diseases of the airways should be familiar with the indications, contraindication, complications, and care of the Montgomery T-tubes. abstract_id: PUBMED:21286455 Airway management using laryngeal mask airway in insertion of the Montgomery tracheal tube for subglottic stenosis -A case report-. The Montgomery tracheal tube (T-tube) is a device used as a combined tracheal stent and airway after laryngotracheoplasty for patients with tracheal stenosis. This device can present various challenges to anesthesiologists during its placement, including the potential for acute loss of the airway, inadequate administration of inhalation agents, and inadequacy of controlled mechanical ventilation. The present case of successful airway management used a laryngeal mask airway under total intravenous anesthesia with propofol and remifentanil in the insertion of a Montgomery T-tube in a tracheal resection and thyrotracheal anastomosis because of severe subglottic stenosis. abstract_id: PUBMED:3830958 Use of the Montgomery T-tube in tracheal surgery. The Montgomery T-tube was used in 33 patients over a total of 100 cases undergoing tracheal surgery. Of 21 cases with inflammatory stricture the T-tube was used in 18 in association with resection and end-to-end anastomosis, while in the remaining 3 cases the T-tube was the only treatment carried out. The results were good in 48% of cases, poor in 17% and still to be assessed in 29%. In neoplastic diseases (10 cases) the use of the Montgomery T-tube ensures the patency of the tracheal lumen avoiding the possible, severe asphyxial crises. The major limits of the T-tube appear to be its ineffectiveness against extended tracheomalacia and its possible displacement which may lead to asphyxial crises. abstract_id: PUBMED:23493795 Anesthetic management of a patient with Montgomery t-tube in-situ for direct laryngoscopy. The Montgomery silicone t-tube used for post-procedural tracheal stenosis has advantage of acting as both stent and tracheostomy tube. The anesthetic management of patient with t-tube in situ poses a challenge. Safe management of such patients requires careful planning. We describe anesthetic management for direct laryngoscopy of a patient with t-tube in situ. abstract_id: PUBMED:28794964 Innovative use of a Montgomery cannula in the bronchoscopic management of tracheal stenosis and failed tracheostomy decannulation. Background: Endoprosthesis are being used in the everyday clinical practice either as a permanent solution or transient. They can be used in both benign and malignant situations. Case Presentation: We report a case in which a temporary Montgomery cannula is used in conjunction with therapeutic bronchoscopy to manage a patient with failed tracheostomy decannulation secondary to a distal complex tracheal stenosis. Conclusions: This innovative use of the Montgomery cannula allowed for successful management of the patient's tracheal stenosis and subsequent tracheostomy tube decannulation. abstract_id: PUBMED:32788060 Our 12 year experience with Montgomery T-tube in the management of acute blunt laryngotracheal trauma patients. Introduction: The Montgomery T-tube is a device used as a combined tracheal stent and tracheostomy tube to prevent post-operative tracheal stenosis. Objectives: The purpose of this retrospective study is to evaluate the outcome following Montgomery T-tube stenting performed in for neck and airway injury in patients with acute blunt laryngotracheal trauma over a period of 12 years. Methods: Between 2005 and 2017, 19 patients with acute blunt laryngotracheal trauma underwent Montgomery T-tube stenting. All 19 laryngotracheal trauma patients had undergone a preoperative tracheostomy in the emergency department by an ENT surgeon. Montgomery T-tube stenting was done later through an external approach. The follow up period ranged from 2 to 10 years. The Montgomery T-tube was removed after a period ranging from 6 months to 1½ year. Results: The majority of patients in the study were in the age group of 21-30 years. A preoperative tracheostomy was done in all 19 patients. All patients except 3 underwent successful decannulation, and experienced long-term satisfactory result. Conclusion: Management of acute blunt laryngotracheal trauma is a challenging problem that demands a multidisciplinary approach. The ideal treatment option should be individualized according to the patient's condition and characteristics of injury. According to our study we suggest that cases of acute blunt laryngotracheal trauma patients should be managed following the protocol as mentioned in our study, and we strongly emphasize that Montgomery T-tube should be left for at least 1 complete year in the airway as it results in negligible chances of post-traumatic stenosis of airway later. abstract_id: PUBMED:36637935 Case series: Montgomery T-tube placement for subglottic tracheal stenosis: a report of 3 cases. Rationale: Subglottic tracheal stenosis is stenosis of the trachea between the vocal cords and the lower margin of the cricoid cartilage. The Montgomery T-tube is used as a tracheostomy tube and a combined tracheal stent to avoid postoperative tracheal stenosis. Patient Concerns: Because the stenosis is close to the glottis, surgical treatment is complex, and many complications may arise. Diagnoses: Subglottic tracheal stenosis. Interventions: The patients underwent endotracheal intubation or tracheotomy because of acute pancreatitis, laryngeal malignancy, or cerebral hemorrhage after endotracheal intubation or tracheotomy and presented with varying degrees of tracheal stenosis and dyspnea. We relieved airway stenosis and improved dyspnea in these 3 patients by placing a Montgomery T-tube. Outcomes: None of the 3 patients had intraoperative complications. In 2 of the cases, airway secretions were stored after surgery. Lessons: Montgomery T-tube placement is safe and effective for patients with complex subglottic tracheal stenosis. Answer: Yes, the clinical failure of the Montgomery tracheal Safe-T-Tube can be induced by biofilm. A study analyzing the deterioration of Montgomery Safe-T-Tube morphological and mechanical properties in vivo found that early biofilm colonization takes place in most cases. The study, which included 16 patients, showed that biofilm was identified in 69% of the samples removed 3 to 15 days after insertion and in 100% of the samples removed at least 90 days after insertion. The presence of biofilm was confirmed using scanning electron microscopy. Additionally, the mechanical decay of the tubes could be partly justified by the destructive activity of the biofilm and the release of inflammatory effectors and enzymes (PUBMED:23649498).
Instruction: Quick starting contraception after emergency contraception: have clinical guidelines made a difference? Abstracts: abstract_id: PUBMED:24706265 Quick starting contraception after emergency contraception: have clinical guidelines made a difference? Objective: When initiating contraception after emergency contraception (EC), conventional practice had been to wait until the next menses. Since 2010, UK guidelines have endorsed quick starting (QS) contraception, namely offering immediate start when requested. We conducted an audit to assess clinical practice before and after QS guidance publication. Methods: A full cycle audit was performed on the clinical notes of women requesting EC during two 2-month periods in 2010 and 2011 in an Integrated Sexual Health Service. All case notes were identified using the National Sexual Health database of sexual health records (Scotland). Information was collated and interpreted using Microsoft Excel and SPSS V.17. Results: During January and February 2010 and 2011, 190 and 180 women, respectively, attended for EC, of whom 96 and 97 were identified as potential quick starters. Between 2010 and 2011, a statistically significant increase in QS practice was noted from 20.8% (n=20) to 37.1% (n=36) (p=0.011), with a corresponding decrease in the percentage of women traditionally started on hormonal contraception (HC): 24% (n=23) and 14.6% (n=14), respectively. There was also a decrease in those advised to return for commencement of HC [55.2% (n=53) vs 49% (n=47)]. Of those advised to return, 26.4% (n=14) and 31.9% (n=15) had no further contact with the service within at least 6 months. Conclusions: QS practice increased after the introduction of clinical guidelines. However, overall provision of HC remained low, with only around half of women prescribed a hormonal method. abstract_id: PUBMED:28663249 Quick starting hormonal contraception after using oral emergency contraception: a systematic review. Introduction: Unprotected intercourse after oral emergency contraception (EC) significantly increases pregnancy risk. This underlies the importance of promptly starting effective, ongoing contraception - known as 'quick starting'. However, theoretical concern exists that quick starting might interact with EC or hormonal contraception (HC) potentially causing adverse side effects. Method: A systematic review was conducted, evaluating quick starting HC after oral EC [levonorgestrel 1.5 mg (LNG) or ulipristal acetate 30 mg (UPA)]. PubMed, EMBASE, The Cochrane Library, ICTRP, ClinicalTrials.gov and relevant reference lists were searched in February 2016. A lack of comparable studies prevented meta-analysis. Results: Three randomised controlled trials were identified. Two biomedical studies suggested HC action was unaffected by quick starting after UPA; one study examined ovarian quiescence (OR 1.27; 95% CI 0.51-3.18) while taking combined oral contraception (COC). Another assessed cervical mucus impenetrability (OR 0.76; 95% CI 0.27-2.13) while taking progestogen-only pills (POP). Quick starting POP reduced the ability of UPA to delay ovulation (OR 0.04; 95% CI 0.01-0.37). Side effects (OR 1.22; 95% CI 0.48-3.12) and unscheduled bleeding (OR 0.53; 95% CI 0.16-1.81) were unaffected by quick starting COC after UPA. Another study reported higher self-reported contraceptive use at 8 weeks among women quick starting POP after LNG, compared with women given LNG alone (OR 6.73; 95% CI 2.14-21.20). abstract_id: PUBMED:32472012 The effectiveness of quick starting oral contraception containing nomegestrol acetate and 17-β estradiol on ovulation inhibition: A randomized controlled trial. To determine the effectiveness of quick starting combined oral contraception (COC) contain 2.5 mg nomegestrol acetate and 1.5 mg estradiol (NOMAC/E2) comparing with 0.075 mg gestodene and 0.02 mg ethinyl estradiol (GS/EE) on ovarian ovulation inhibition rate, we conducted a non-inferiority randomized controlled trial involving 69 healthy female volunteers aged 18-40 years who had normal menstrual history and were randomized at a 2:1 ratio to take one pack of COC containing either NOMAC/E2 (study group) or GS/EE (control group) starting on menstrual cycle Day7-9. The ovarian activity was assessed by using Hoogland and Skouby grading. Forty-six and 23 participants were randomized to NOMAC/E2 and GS/EE groups, respectively. Baseline characteristics were similar between groups. No significant difference was observed between the study and control groups for ovulation inhibition rate (93.4% vs. 95.6%, risk difference: -2.2%, 95% CI: -13.1, 8.8), ovarian quiescence rate (91.2% vs. 91.2%, P = 1.000), persistent cyst rate (2.2% vs. 4.4%, P = 1.000), and ovulation rate (6.6% vs. 4.4%, P = 1.000). Quick starting COC during day7-9 of menstrual cycle can inhibit ovulation for more than 90%. The quick starting NOMAC/E2 is non-inferior to GS/EE for preventing ovulation and suppressing follicular growth. abstract_id: PUBMED:30731216 Health Care Provider Attitudes about the Safety of "Quick Start" Initiation of Long-Acting Reversible Contraception for Adolescents. Study Objective: To identify characteristics associated with provider attitudes on the safety of "Quick Start" initiation of long-acting reversible contraception (LARC) for adolescents. DESIGN, SETTING, PARTICIPANTS, INTERVENTIONS, AND MAIN OUTCOME MEASURES: We conducted a cross-sectional survey of providers in public-sector health centers and office-based physicians (n = 2056) during 2013-2014. Results: Overall, the prevalence of considering "Quick Start" initiation of LARC for adolescents as safe was 70.9% for implants and 64.5% for intrauterine devices (IUDs). Among public-sector providers, those not trained in implant or IUD insertion had lower odds of perceiving the practice safe (adjusted odds ratio [aOR], 0.32; 95% confidence interval [CI], 0.25-0.41 for implants; aOR 0.42; 95% CI, 0.32-0.55 for IUDs), whereas those practicing at health centers that did not receive Title X funding had lower odds of perceiving the practice safe for IUDs (aOR, 0.77; 95% CI, 0.61-0.98). Among office-based physicians, lack of training in LARC insertion was associated with lower odds of perceiving "Quick Start" initiation to be safe for IUDs (aOR, 0.31; 95% CI, 0.12-0.77). Those specializing in adolescent medicine had higher odds of reporting "Quick Start" initiation of LARC as safe (implants: aOR, 2.21; 95% CI, 1.23-3.98; IUDs: aOR, 3.37; 95% CI, 1.39-8.21) compared with obstetrician-gynecologists. Conclusion: Approximately two-thirds of providers considered "Quick Start" initiation of LARC for adolescents safe; however, there were differences according to provider characteristics (eg, Title X funding, training in LARC insertion, specialty). Targeted LARC insertion training and dissemination of evidence-based family planning guidance and implementation into facility and practice-level policies might increase access to "Quick Start" initiation of LARC for adolescents. abstract_id: PUBMED:30392865 Health Care Provider Attitudes and Practices Related to 'Quick Start' Provision of Combined Hormonal Contraception and Depot Medroxyprogesterone Acetate to Adolescents. Purpose: Adolescents may encounter many barriers to initiating contraception. 'Quick Start' is a recommended approach for initiating contraception on the same day as a provider visit. We examined factors associated with health care provider attitudes and practices related to 'Quick Start' provision of combined hormonal contraception (CHC) and depot medroxyprogesterone acetate (DMPA) to adolescents. Methods: We analyzed weighted survey data from providers in publicly funded health centers and from office-based physicians (n = 2,056). Using multivariable logistic regression, we estimated adjusted odds ratios (aORs) and 95% confidence intervals (CIs) of the associations between provider characteristics and frequent (very often or often vs. not often or never) 'Quick Start' provision of CHC and DMPA to adolescents in the past year. Results: The prevalence of considering 'Quick Start' as safe was high for CHC (public-sector providers [87.5%]; office-based physicians [80.2%]) and DMPA (public-sector providers [80.9%]; office-based physicians [78.8%]). However, the prevalence of frequent 'Quick Start' provision was lower, particularly among office-based physicians (CHC: public-sector providers [74.2%]; office-based physicians [45.2%]; DMPA: public-sector providers [71.4%]; office-based physicians [46.9%]). Providers who considered 'Quick Start' unsafe or were uncertain about its safety had lower odds of frequent 'Quick Start' provision compared with those who considered it safe (public-sector providers: CHC aOR = 0.09 95% CI 0.06-0.13, DMPA aOR = 0.07 95% CI 0.05-0.10; office-based physicians: CHC aOR = 0.06 95% CI 0.02-0.22, DMPA aOR = 0.07 95% CI 0.02-0.20). Conclusions: While most providers reported that 'Quick Start' initiation of CHC and DMPA among adolescents is safe, fewer providers reported frequent 'Quick Start' provision in this population, particularly among office-based physicians. abstract_id: PUBMED:26598307 Contraceptive Method Initiation: Using the Centers for Disease Control and Prevention Selected Practice Guidelines. The US Selected Practice Recommendations is a companion document to the Medical Eligibility Criteria for Contraceptive Use that focuses on how providers can use contraceptive methods most effectively as well as problem-solve common issues that may arise. These guidelines serve to help clinicians provide contraception safely as well as to decrease barriers that prevent or delay a woman from obtaining a desired method. This article summarizes the Selected Practice Recommendations on timing of contraceptive initiation, examinations, and tests needed prior to starting a method and any necessary follow-up. abstract_id: PUBMED:36637987 Contraceptive methods in adolescence: a narrative review of guidelines. Purpose: Adolescent pregnancy, while recently in decline, remains a matter in need of addressing. Education and counselling are deemed crucial and this review aims at comparing published contraceptive guidelines, thus resolving any surrounding misconceptions. Materials And Methods: Recently published contraception guidelines regarding adolescent pregnancy were retrieved. In particular, guidelines and recommendations from ACOG, RCOG, SOCG, AAP, CPS, NICE, CDC, and WHO were compared and reviewed based on each guideline's method of reporting. Results: Three categories of contraceptive methods are available for adolescents and recommendations on their initiation should be made based on their efficacy, according to all guidelines. Therefore, long acting reversible contraceptives (LARCs) should be highly recommended as the most effective method (typical use failure rate: 0.05%), followed by short-acting hormonal contraceptives (typical use failure rate: 3-9%). The third contraceptive option includes contraceptives used in the moment of intercourse and displays the lowest effectiveness (typical use failure rate: 12-25%), mostly due to its dependence on personal consistency, however offers protection against STI transmission. Conclusion: Adolescents should be encouraged to initiate contraception, with LARCs being the primary choice followed by short-acting hormonal contraception. However, regardless of the chosen effective contraceptive method, the use of condom is necessary for STI prevention. abstract_id: PUBMED:18259783 European guidelines for the diagnosis and treatment of pelvic girdle pain. A guideline on pelvic girdle pain (PGP) was developed by "Working Group 4" within the framework of the COST ACTION B13 "Low back pain: guidelines for its management", issued by the European Commission, Research Directorate-General, Department of Policy, Coordination and Strategy. To ensure an evidence-based approach, three subgroups were formed to explore: (a) basic information, (b) diagnostics and epidemiology, and (c) therapeutical interventions. The progress of the subgroups was discussed at each meeting and the final report is based on group consensus. A grading system was used to denote the strength of the evidence, based on the AHCPR Guidelines (1994) and levels of evidence recommended in the method guidelines of the Cochrane Back Review group. It is concluded that PGP is a specific form of low back pain (LBP) that can occur separately or in conjunction with LBP. PGP generally arises in relation to pregnancy, trauma, arthritis and/or osteoarthritis. Uniform definitions are proposed for PGP as well as for joint stability. The point prevalence of pregnant women suffering from PGP is about 20%. Risk factors for developing PGP during pregnancy are most probably a history of previous LBP, and previous trauma to the pelvis. There is agreement that non risk factors are: contraceptive pills, time interval since last pregnancy, height, weight, smoking, and most probably age. PGP can be diagnosed by pain provocation tests (P4/thigh thrust, Patrick's Faber, Gaenslen's test, and modified Trendelenburg's test) and pain palpation tests (long dorsal ligament test and palpation of the symphysis). As a functional test, the active straight leg raise (ASLR) test is recommended. Mobility (palpation) tests, X-rays, CT, scintigraphy, diagnostic injections and diagnostic external pelvic fixation are not recommended. MRI may be used to exclude ankylosing spondylitis and in the case of positive red flags. The recommended treatment includes adequate information and reassurance of the patient, individualized exercises for pregnant women and an individualized multifactorial treatment program for other patients. We recommend medication (excluding pregnant women), if necessary, for pain relief. Recommendations are made for future research on PGP. abstract_id: PUBMED:34306970 Postpartum care indications and methodological quality:a systematic review of guidelines. Background: The coverage of postpartum care is not ideal, and has not been used very well due to not enough attention being paid to the puerperal women and newborns, especially in developing countries. Practice guidelines on postpartum care provide beneficial practice guidance and help to reduce maternal mortality. However, little is known about the credibility and consistency of those guidelines. This systematic review was conducted to summarize main postpartum care indications and appraise methodological quality of guidelines. Methods: Seven literature databases and guideline development institutions and organizations of obstetrics and maternity care were searched. Two reviewers independently assessed guideline quality using the AGREE II instrument, and synthesized consistent and non-consistent recommendations using the content analysis approach. Results: Twenty-nine guidelines were included and a total of eight postpartum care indications were identified. Most guidelines focused on care indications and interventions including exclusive breastfeeding, maternal nutrition, home visit, infant or newborn care and sexuality, contraception, and birth spacing. However, indications such as pain or weight management, pelvic floor muscle training, abdominal rehabilitation, and mental health got less attention. Additionally, the overall quality of all involving postpartum care guidelines is relatively good and acceptable. Conclusions: Guidelines developed by NICE, RANO, and WHO indicated higher methodological quality. For postpartum care indications, most guidelines are incomplete. Variation in practice guidelines for postpartum care recommendations exists. In the future, implementation research into shared decision-making, as well as further high-quality research to broaden the evidence base for postpartum care indications is recommended. Supplementary Information: The online version contains supplementary material available at 10.1007/s10389-021-01629-4. abstract_id: PUBMED:36627514 Sex and gender in hypertension guidelines. This paper reviews 11 current and previous international and some selected national hypertension guidelines regarding sex and gender-related differences. Those differences can be attributed to biological sex and to gender differences that are determined by socially constructed norms. All reviewed guidelines agree on a higher hypertension prevalence in men than in women. They also concur that evidence does not support different blood pressure thresholds and targets for treatment between men and women. Differences refer in addition to the differences in epidemiological aspects to differences in some morphometric diagnostic indices, e.g., left ventricular mass or the limits for daily alcohol intake. Concerning practical management, there are hardly any clear statements on different procedures that go beyond the consensus that blockers of the renin-angiotensin system should not be used in women of childbearing age wishing to become pregnant. Some further sex-specific aspects are related to differences in tolerability or drug-specific side effects of BP-lowering drugs. There is also a consensus about the need for blood pressure monitoring before and during the use of contraceptive pills. For management of pregnancy, several guidelines still recommend no active treatment in pregnant women without severe forms of hypertension, despite a wide consensus about the definition of hypertension in pregnancy. A disparity in treatment targets when treating severe and non-severe hypertension in pregnancy is also observed. Overall, sex-specific aspects are only very sparsely considered or documented in the evaluated guidelines highlighting an unmet need for future clinical research on this topic. Answer: The introduction of clinical guidelines endorsing quick starting (QS) contraception after emergency contraception (EC) has indeed made a difference in clinical practice. An audit conducted to assess the impact of these guidelines showed a statistically significant increase in QS practice from 20.8% to 37.1% after the guidelines were published, although the overall provision of hormonal contraception (HC) remained low, with only around half of the women being prescribed a hormonal method (PUBMED:24706265). A systematic review evaluating quick starting HC after oral EC found that quick starting did not significantly affect the action of HC or lead to adverse side effects. It also reported higher self-reported contraceptive use at 8 weeks among women who quick started progestogen-only pills (POP) after levonorgestrel (LNG) compared with women given LNG alone (PUBMED:28663249). Another study demonstrated that quick starting combined oral contraception (COC) during days 7-9 of the menstrual cycle can inhibit ovulation effectively, with the quick starting of nomegestrol acetate and 17-β estradiol (NOMAC/E2) being non-inferior to gestodene and ethinyl estradiol (GS/EE) for preventing ovulation (PUBMED:32472012). Health care provider attitudes also play a role in the adoption of QS initiation of contraception. A survey found that approximately two-thirds of providers considered "Quick Start" initiation of long-acting reversible contraception (LARC) for adolescents safe, but there were differences according to provider characteristics such as training in LARC insertion and specialty (PUBMED:30731216). Similarly, while most providers reported that 'Quick Start' initiation of CHC and depot medroxyprogesterone acetate (DMPA) among adolescents is safe, fewer providers reported frequent 'Quick Start' provision in this population, particularly among office-based physicians (PUBMED:30392865). In summary, clinical guidelines have positively influenced the practice of quick starting contraception after emergency contraception, but there is still room for improvement in terms of broader adoption and consistent application among healthcare providers.
Instruction: Can men be included in the population subjected to puerperal psychosis? Abstracts: abstract_id: PUBMED:27109325 Can men be included in the population subjected to puerperal psychosis? A case report Introduction: Puerperal psychosis (PP) is a psychiatric disorder that occurs in 1 out of 1000 pregnancies. Well known since antiquity, its symptoms have often been described in mothers, but few studies have successfully investigated a related disorder in fathers. The characteristic of this pathology is more related to its appearance than to its semiological description which is why its nosographic place is always discussed. The objective here is to focus on the definition of PP and to suggest an entity for both genders. Case Report: Our case report focused on the clinical description of an eighteen-year-old man suffering from an acute psychosis episode that occurred around the birth of his first child. Delusion followed a sudden decline in mood that lasted for a short period of time during the course of the third trimester of his wife's pregnancy. The delirium was rich with auditory and cenesthesic hallucinations, pregnancy and birth denial, feeling movements and hearing voices in his stomach. The symptoms disappeared after one month of treatment via an antipsychotic drug, risperidone. Conclusion: We can confirm that the symptomatic description of the disorder in this patient fits the classical descriptions of PP. Two elements make the PP different from other acute psychoses: the context of pregnancy and delirium focused on the child which can lead to a child murder. The absence of a framework precisely defining the PP does not improve its prevention and can lead to legal attitudes rather than medical care. Men suffering from acute psychosis in a context of pregnancy are submitted to the same risks as women. It is necessary to emphasize descriptions of PP in men to redefine the disease and consider that this entity involves both men and women. abstract_id: PUBMED:27533096 Men's experiences of having a partner who requires Mother and Baby Unit admission for first episode postpartum psychosis. Aims: The aim of this study was to explore the experiences of men during their partner's admission to a Mother and Baby Unit for first episode postpartum psychosis. Background: Postpartum psychosis can be life-changing for women, however little is known about the impact on their partners or their partners' needs. Design: An Interpretative Phenomenological Analysis approach was used to analyse interview transcripts. Methods: Semi-structured interviews were conducted with seven participants. Interviews were conducted in 2014-2015. Findings: Two main themes were identified: 'What the f*** is going on?' and 'Time to figure out how your family works'. Partners experienced shock and confusion during postpartum psychosis onset. Most felt hospital admission was beneficial, although there were barriers to accessing help and involvement in their partner's care. A process of understanding changes to roles, relationships and family identity was described. Loss was a common experience, with a potentially lasting impact. Conclusions: Men reported a range of challenging and positive experiences associated with their partner's postpartum psychosis. Improvements are needed in awareness of postpartum psychosis, access to appropriate services and support of partners. abstract_id: PUBMED:698707 Taxonomic map of the schizophrenias, with special reference to puerperal psychosis. Data collected by a single observer on 147 schizophrenic patients were subjected to clustering analysis. The results produced the hypothesis that schizophrenic illnesses directly after childbirth are a separate disease entity. This hypothesis was not disproved by experimental testing. Several disease entities may be included in the term schizophrenia. If this is so, the methods used in generating and testing the hypothesis that puerperal schizophrenia is a separate disease may provide a systematic method of classifying the various illnesses. abstract_id: PUBMED:26238989 What is in a name? Causative explanatory models of postpartum psychosis among patients and caregivers in India. Background: Explanatory models (EMs) influence decision-making related to treatment and compliance. There is little knowledge about belief systems related to postpartum psychosis in different cultures. Aims: To study EMs of illness among women with postpartum psychosis and their family members in India. Method: A total of 123 women with postpartum psychosis and their caregivers were assessed to understand their beliefs regarding causes of postpartum psychosis using Short Explanatory Model Interview (SEMI). Local names were listed and emerging themes were analysed. Results: Respondents often held more than one EM and only one-third held a biomedical EM. Other common models included stressors unique to childbirth, marital stress and supernatural causes. Local names reflected the underlying models. Conclusion: Non-biomedical EMs are common in women with postpartum psychosis. Cultural and social factors unique to childbirth appear to influence these models. There is a need to enhance awareness and knowledge about this serious disorder in the community. abstract_id: PUBMED:26243040 Pre-eclampsia and first-onset postpartum psychiatric episodes: a Danish population-based cohort study. Background: Recent evidence suggests that postpartum psychiatric episodes may share similar etiological mechanisms with immune-related disorders. Pre-eclampsia is one of the most prevalent immune-related disorders of pregnancy. Multiple clinical features are shared between pre-eclampsia and postpartum psychiatric disorders, most prominently a strong link to first pregnancies. Therefore, we aimed to study if pre-eclampsia is a risk factor for first-onset postpartum psychiatric episodes. Method: We conducted a cohort study using the Danish population registry, with a total of 400 717 primiparous women with a singleton delivery between 1995 and 2011. First-lifetime childbirth was the main exposure variable and the outcome of interest was first-onset postpartum psychiatric episodes. The main outcome measures were monthly incidence rate ratios (IRRs), with the period 11-12 months after birth as the reference category. Adjustments were made for age, calendar period, reproductive history, and perinatal maternal health including somatic and obstetric co-morbidity. Results: Primiparous women were at particularly high risk of first-onset psychiatric episodes during the first month postpartum [IRR 2.93, 95% confidence interval (CI) 2.53-3.40] and pre-eclampsia added to that risk (IRR 4.21, 95% CI 2.89-6.13). Having both pre-eclampsia and a somatic co-morbidity resulted in the highest risk of psychiatric episodes during the 3-month period after childbirth (IRR 4.81, 95% CI 2.72-8.50). Conclusions: We confirmed an association between pre-eclampsia and postpartum psychiatric episodes. The possible explanations for this association, which are not mutually exclusive, include the psychological impact of a serious medical condition such as pre-eclampsia and the neurobiological impact of pre-eclampsia-related vascular pathology and inflammation. abstract_id: PUBMED:29660641 Improvement of postpartum depression and psychosis after electroconvulsive therapy: A population-based study with a matched comparison group. Introduction: Electroconvulsive therapy (ECT) is used to treat postpartum depression and psychosis based on clinical experience and small observational studies. Aims: The primary aim was to test the hypothesis that the response rate to ECT for depression and psychosis is higher during the postpartum period than outside this period. The secondary aim was to identify predictors of a response to ECT during the postpartum period. Materials And Methods: Cases with postpartum depression and/or psychosis received ECT within 6 months of delivery. A matched comparison group with depression and/or psychosis (not within the postpartum period) was identified from the Swedish National Quality Register for ECT. The improvement 1 week after ECT was classified according to the Clinical Global Impressions Scale - Improvement scale (CGI-I) as responder (CGI-I score 1-2) or non-responder (CGI-I score 3-7). Results: 185 cases and 185 comparison group subjects were included (46% with psychosis in each groups). More cases (87.0%) than comparison group subjects (73.5%) responded to ECT (p = 0.001). Adjusted binary regression analysis revealed that more severe symptoms prior to treatment were the only statistically significant predictor of response. Limitations: There was no control group without ECT treatment. Conclusion: The response rate of those with postpartum depression and/or psychosis to ECT was high. The response rate of patients with psychosis or depression was higher during the postpartum period than outside it. This study supports the use of ECT for severe forms of postpartum depression and/or psychosis. abstract_id: PUBMED:21128087 Interventions for the prevention and treatment of postpartum psychosis: a systematic review. Postpartum psychosis is a serious disorder that can cause negative consequences for the mother, infant, and entire family. While reports of this condition date back for centuries, little is known about what interventions are most effective for this population. The purpose of this systematic review was to examine the research evidence on interventions for the prevention and treatment of postpartum psychosis. Studies were searched using CINAHL, EMBASE, MEDLINE, PsycINFO, and PubMed databases. All primary research studies published in English since 1970 that explored interventions for the prevention or treatment of postpartum psychosis were included. The search resulted in 26 studies on interventions for postpartum psychosis, with 10 focusing on prevention and 17 focusing on treatment. Studies on the prevention of postpartum psychosis have examined the effects of mood stabilizers, antipsychotics, and hormone therapy, while those examining treatment have included electroconvulsive therapy, mood stabilizers, antipsychotics, hormones, and the beta blocker propranolol. Only preliminary evidence suggests which interventions may be effective strategies to prevent (e.g., lithium) and treat (e.g., electroconvulsive therapy) postpartum psychosis. Due to methodological limitations in the studies reviewed, extensive evidence-based recommendations for the prevention and treatment of postpartum psychosis cannot be made. The known risk factors and negative consequences of postpartum psychosis point to the importance of preventative and acute treatment measures. Well-designed prospective studies are needed to determine the efficacy of prevention and treatment interventions for women who experience postpartum psychosis. abstract_id: PUBMED:31764450 Rehospitalization of Postpartum Depression and Psychosis After Electroconvulsive Therapy: A Population-Based Study With a Matched Control Group. Objectives: Electroconvulsive therapy (ECT) is used in some cases of postpartum depression (PPD) and postpartum psychosis (PPP). The risk of relapse for PPD and PPP after ECT is unknown. This study compared the relapse rate after ECT between women who had been treated for PPD and/or PPP and women who had been treated for depression and/or psychosis outside the postpartum period. Methods: The Swedish National Quality Register for ECT and the Swedish National Patient Register were used to identify women with PPD and/or PPP who had been treated with ECT within 6 months after delivery. For each case, a control (treated with ECT but not postpartum) patient was also selected. A Kaplan-Meier estimator was used to calculate the relapse rate (defined as rehospitalization or suicide) after ECT. Cox regression was used to identify variables associated with relapse. Results: A total of 180 patients were included in each group. The proportions of patients who suffered relapse after 6 months, 1 year, and 2 years were 28%, 31%, and 40% for the postpartum group and 39%, 50%, and 55% for the nonpostpartum group. Treatment with benzodiazepines, several previous psychiatric admissions, and the absence of improvement after ECT were associated with relapse. Conclusions: The risk of relapse after ECT is lower for patients with PPD and/or PPP than for patients outside the postpartum period, but the risk is nonetheless substantial in both groups. abstract_id: PUBMED:21903022 First-onset psychosis occurring in the postpartum period: a prospective cohort study. Objective: To prospectively characterize a cohort of patients for whom first lifetime episode of psychosis occurs in the postpartum period. Method: Included in the study were 51 women admitted to an inpatient facility for postpartum psychosis and a population-based control group (n = 6,969). All patients received naturalistic treatment using the sequential addition of benzodiazepines, antipsychotics, and lithium. A clinician-administered questionnaire and parallel history provided information about obstetric history, pregnancy, delivery, breastfeeding, neonatal outcomes, and onset of the disease. Clinical remission was defined as the absence of psychotic, manic, and depressive symptoms for at least 1 week. The primary outcome measure was the Clinical Global Impressions-Severity scale. The study was conducted from 2005 to 2009. Results: Compared to the general population sample, women with postpartum psychosis had a significantly higher incidence of primiparity (OR = 2.90; 95% CI, 1.49-5.67) but had no significant differences in delivery-related, lactational, or neonatal-related risk factors. The median onset of psychiatric symptoms occurred at 8 days' postpartum (interquartile range [IQR], 5-14), and median duration of episode was 40 days (IQR, 23-69). Patients with prominent depressive symptoms had a significantly later onset (P = .01) of psychosis and a longer duration of episode (P &lt; .01) than patients without depressive symptoms. Psychotic symptoms were mood-incongruent in 64.7% of patients. Conclusions: In contrast to other findings related to postpartum psychosis in bipolar patients, no delivery-related, neonatal-related, or lactational risk factors could be identified. Further, our findings of a delayed onset and mood incongruence of postpartum psychotic symptoms markedly contrasts with that of patients with a previous history of bipolar disorder. These results suggest that women with psychosis limited to the postpartum period might have a distinct risk profile and phenomenology. abstract_id: PUBMED:3737825 Postpartum psychosis in the male. The occurrence of postpartum psychosis in a male patient after the birth of a child is described in this case report. An association with the phenomenon of the couvade syndrome that is observed in all cultures is made. Also, the stressful life event of pregnancy in a partner related to the onset and development of psychotic disturbance is reported. Emotional disorders which develop in women during pregnancy are compared and contrasted with the variable responses which men manifest. Signs and symptoms heralding the onset of a more pathological pregnancy response in men are offered. The authors suggest that the experience of couvade be defined as a spectrum disorder of pathological responses in males ranging from physical symptoms to psychosis as a reaction to pregnancy. This case represents the extreme end of that spectrum and suggestions are made for early identification and treatment. Answer: Yes, men can be included in the population subjected to puerperal psychosis, as suggested by a case report that focused on an eighteen-year-old man who suffered from an acute psychosis episode around the birth of his first child. The symptoms he experienced, including delusion, auditory and cenesthesic hallucinations, pregnancy and birth denial, and feeling movements and hearing voices in his stomach, disappeared after treatment with an antipsychotic drug, risperidone. This case report confirms that the symptomatic description of the disorder in this patient fits the classical descriptions of puerperal psychosis (PP). The authors argue that men suffering from acute psychosis in the context of pregnancy are exposed to the same risks as women and emphasize the need to include descriptions of PP in men to redefine the disease as one that involves both men and women (PUBMED:27109325). The inclusion of men in the population affected by puerperal psychosis is further supported by the observation that the occurrence of postpartum psychosis in a male patient after the birth of a child has been described in a case report. This report draws an association with the couvade syndrome, which is observed in all cultures, and suggests that the experience of couvade can be defined as a spectrum disorder of pathological responses in males ranging from physical symptoms to psychosis as a reaction to pregnancy (PUBMED:3737825).
Instruction: Can genetics aggravate the health of isolated and remote populations? Abstracts: abstract_id: PUBMED:23534916 Can genetics aggravate the health of isolated and remote populations? The case of gout, hyperuricaemia and osteoarthritis in Dalmatia. Introduction: The aim of this study was to investigate whether genetics may be considered an additional risk factor for health in isolated and remote populations, compared with their populations of origin. In this study, two remote island population samples from Croatia (from the islands of Vis and the Korcula) were compared with mainland controls from the coastal city of Split. The analyses focused on gout, hyperuricaemia and osteoarthritis, as examples of complex, multifactorial diseases. Methods: A total of 3006 examinees from all three sites in Dalmatia, Croatia were included in the descriptive part of the study, within a large-scale project of 10,001 Dalmatians. Additionally, a subset of 2428 subjects was genotyped and information on three genomic loci was used in this study. All three loci belong to SLC2A9 gene, considered to have a major role in the regulation of serum uric acid concentration (rs6449213, rs1014290 and rs737267). Results: There was a much a higher prevalence of gout in the isolated populations compared with the mainland sample (3.3% in Vis, 2.2% in Korcula and 1.7% in Split, after age standardization). Furthermore, standardized prevalence of hyperuricaemia (defined as serum uric acid ≥403 mmol/L) was 9.9% in Vis, 5.6% in Korcula and 6.1% in Split. Analysis of the allele frequencies for the three loci of SLC2A9 suggested that in all three instances the prevalence of deleterious genotypes was highest in Vis, followed by Korcula, which had higher or comparable prevalence to the city of Split. Multivariate analysis, adjusted for the main confounder effects indicated that those on the island of Vis, which has the higher degree of isolation, had significantly higher odds ratio for both hyperuricaemia (odds ratio 1.90 95% confidence intervals [1.36-2.64]) and osteoarthritis, but not gout (3.37 [2.14-5.32]). The difference between Split and Korcula included only greater odds for osteoarthritis (1.92 [1.20-3.06]). Conclusions: Isolated and remote populations that maintain a sufficient level of genetic isolation may suffer not only from consequences of geographic and social isolation, but their population genetic structure may also further contribute to poorer health status and outcomes. abstract_id: PUBMED:27189761 Genetics of Type 2 Diabetes: the Power of Isolated Populations. Type 2 diabetes (T2D) affects millions of people worldwide. Improving the understanding of the underlying mechanisms and ultimately improving the treatment strategies are, thus, of great interest. To achieve this, identification of genetic variation predisposing to T2D is important. A large number of variants have been identified in large outbred populations, mainly from Europe and Asia. However, to elucidate additional variation, isolated populations have a number of advantageous properties, including increased amounts of linkage disequilibrium, and increased probability for presence of high frequency disease-associated variants due to genetic drift. Collectively, this increases the statistical power to detect association signals in isolated populations compared to large outbred populations. In this review, we elaborate on why isolated populations are a powerful resource for the identification of complex disease variants and describe their contributions to the understanding of the genetics of T2D. abstract_id: PUBMED:27297514 Adaptation in isolated populations: when does it happen and when can we tell? Isolated populations with novel phenotypes present an exciting opportunity to uncover the genetic basis of ecologically significant adaptation, and genomic scans have often, but not always, led to candidate genes directly related to an adaptive phenotype. However, in many cases these populations were established by a severe bottleneck, which can make identifying targets of selection problematic. Here, we simulate severe bottlenecks and subsequent selection on standing variation, mimicking adaptation after establishment of a new small population, such as an island or an artificial selection experiment. Using simulations of single loci under positive selection and population genetics theory, we examine how population size and age of the population isolate affect the ability of outlier scans for selection to identify adaptive alleles using both single-site measures and haplotype structure. We find and explain an optimal combination of selection strength, starting frequency, and age of the adaptive allele, which we refer to as a Goldilocks zone, where adaptation is likely to occur and yet the adaptive variants are most likely to derive from a single ancestor (a 'hard' selective sweep); in this zone, four commonly used statistics detect selection with high power. Real-world examples of both island colonization and experimental evolution studies are discussed. Our study provides concrete considerations to be made before embarking on whole-genome sequencing of differentiated populations. abstract_id: PUBMED:32974859 COVID-19 and pandemic planning in the context of rural and remote homelessness. Addressing the vulnerability and unique needs of homeless populations during pandemics has been a major component of the Canadian federal response to the COVID-19 crisis. Rural and remote communities, however, have received little to no funding to aid in their care of homeless people during the pandemic. Similarly, there has been little to no research on rural communities' pandemic preparedness in the context of homelessness. There are large numbers of homeless individuals in rural and remote Canada, including Indigenous peoples who are over-represented in homeless populations. Rural communities, including rural and remote Indigenous communities, are often isolated and more limited than urban areas in their capacity to respond to pandemics. They are particularly vulnerable due to fewer healthcare and social service resources-the lack of which has been particularly evident during the COVID-19 pandemic. In this commentary, we suggest that policy-makers need to take seriously the situation of rural homelessness in Canada, its implications for individual and community health, and consequences in the context of pandemics. Policy- and decision-makers can address these concerns through increased homelessness funding and support for rural and remote communities, policy change to recognize the unique challenges associated with rural pandemic planning and homelessness, and more research that can be translated into policy, programs, and supports for rural homelessness and pandemic planning response. abstract_id: PUBMED:26993633 Insights into metabolic disease from studying genetics in isolated populations: stories from Greece to Greenland. Over the last 10 years substantial progress has been made in our understanding of the genetic basis for type 2 diabetes and related traits. These developments have been facilitated by technological advancements that have allowed comprehensive genome-wide assessments of the impact of common genetic variation on disease risk. Current efforts are now focused on extending this to genetic variants in the rare and low-frequency spectrum by capitalising on next-generation sequencing technologies. This review discusses the important contributions that studies in isolated populations are making to this effort for diabetes and metabolic disease, drawing on specific examples from populations in Greece and Greenland. This review summarises a presentation given at the 'Exciting news in genetics of diabetes' symposium at the 2015 annual meeting of the EASD, with topics presented by Eleftheria Zeggini and Torben Hansen, and an overview by the Session Chair, Anna Gloyn. abstract_id: PUBMED:27601615 Medical Genetics and the First Studies of the Genetics of Populations in Mexico. Following World War II (WWII), there was a new emphasis within genetics on studying the genetic composition of populations. This probably had a dual source in the growing strength of evolutionary biology and the new international interest in understanding the effects of radiation on human populations, following the atomic bombings in Japan. These global concerns were shared by Mexican physicians. Indeed, Mexico was one of the leading centers of this trend in human genetics. Three leading players in this story were Mario Salazar Mallén, Adolfo Karl, and Rubén Lisker. Their trajectories and the international networks in human genetics that were established after WWII, paved the way for the establishment of medical and population genetics in Mexico. Salazar Mallén's studies on the distribution and characterization of ABO blood groups in indigenous populations were the starting point while Karl's studies on the distribution of abnormal hemoglobin in Mexican indigenous populations showed the relationships observed in other laboratories at the time. It was Lisker's studies, however, that were instrumental in the development of population genetics in the context of national public policies for extending health care services to the Mexican population. In particular, he conducted studies on Mexican indigenous groups contributing to the knowledge of the biological diversity of human populations according to international trends that focused on the variability of human populations in terms of genetic frequencies. From the start, however, Lisker was as committed to the reconstruction of shared languages and practices as he was to building networks of collaboration in order to guarantee the necessary groundwork for establishing the study of the genetics of human populations in Mexico. This study also allows us to place Mexican science within a global context in which connected narratives describe the interplay between global trends and national contexts. abstract_id: PUBMED:32754648 Health Profession Education in Remote or Geographically Isolated Settings: A Scoping Review. Remote health has been differentiated from rural health in Australia and defined as isolated, with poor service access and a relatively high proportion of Indigenous residents, necessitating different models of care. Educational strategies for remote health practice are often needs driven and the characteristics of remote health may be used to categorise remote health professional education. This scoping review aims to identify the purpose of health professional education for remote settings, the type of educational strategies implemented and the reported outcomes. A broad search of published literature available in online bibliographic databases was conducted. A total of 33 articles met the review inclusion criteria. A further 7 articles were identified for inclusion in the review through citation searches and the authors' networks giving a total of 40 articles. Six primary themes were established based on the educational purpose: (1) cultural competency; (2) social accountability; (3) rural and remote skill development for the general workforce; (4) remote specialisation; (5) specialist skills required for a remote workforce; and (6) remote teaching. These themes also reflect the philosophical change over time recognising remote health as a separate discipline and its value as a distinctive and efficacious learning environment. The concept of education for remote practice is proposed to describe this unique leaning environment which encompasses critical pedagogy to develop a sense of agency and social accountability, embedding the delivery of primary health care through service learning and developing relationships in a context which is transformative. abstract_id: PUBMED:27877205 Spatial patterns of immunogenetic and neutral variation underscore the conservation value of small, isolated American badger populations. Small and isolated populations often exhibit low genetic diversity due to drift and inbreeding, but may simultaneously harbour adaptive variation. We investigate spatial distributions of immunogenetic variation in American badger subspecies (Taxidea taxus), as a proxy for evaluating their evolutionary potential across the northern extent of the species' range. We compared genetic structure of 20 microsatellites and the major histocompatibility complex (MHC DRB exon 2) to evaluate whether small, isolated populations show low adaptive polymorphism relative to large and well-connected populations. Our results suggest that gene flow plays a prominent role in shaping MHC polymorphism across large spatial scales, while the interplay between gene flow and selection was stronger towards the northern peripheries. The similarity of MHC alleles within subspecies relative to their neutral genetic differentiation suggests that adaptive divergence among subspecies can be maintained despite ongoing gene flow along subspecies boundaries. Neutral genetic diversity was low in small relative to large populations, but MHC diversity within individuals was high in small populations. Despite reduced neutral genetic variation, small and isolated populations harbour functional variation that likely contribute to the species evolutionary potential at the northern range. Our findings suggest that conservation approaches should focus on managing adaptive variation across the species range rather than protecting subspecies per se. abstract_id: PUBMED:38392270 The Inclusion of Underrepresented Populations in Cardiovascular Genetics and Epidemiology. Novel genetic risk markers have helped us to advance the field of cardiovascular epidemiology and refine our current understanding and risk stratification paradigms. The discovery and analysis of variants can help us to tailor prognostication and management. However, populations underrepresented in cardiovascular epidemiology and cardiogenetics research may experience inequities in care if prediction tools are not applicable to them clinically. Therefore, the purpose of this article is to outline the barriers that underrepresented populations can face in participating in genetics research, to describe the current efforts to diversify cardiogenetics research, and to outline strategies that researchers in cardiovascular epidemiology can implement to include underrepresented populations. Mistrust, a lack of diverse research teams, the improper use of sensitive biodata, and the constraints of genetic analyses are all barriers for including diverse populations in genetics studies. The current work is beginning to address the paucity of ethnically diverse genetics research and has already begun to shed light on the potential benefits of including underrepresented and diverse populations. Reducing barriers for individuals, utilizing community-driven research processes, adopting novel recruitment strategies, and pushing for organizational support for diverse genetics research are key steps that clinicians and researchers can take to develop equitable risk stratification tools and improve patient care. abstract_id: PUBMED:33523369 Barriers and strategies to integrate medical genetics and primary care in underserved populations: a scoping review. Despite clinical and technological advances, serious gaps remain in delivering genetic services due to disparities in workforce distribution and lack of coverage for genetic testing and counseling. Genetic services delivery, particularly in medically underserved populations, may rely heavily on primary care providers (PCPs). This study aims to identify barriers to integrating genetic services and primary care, and strategies to support integration, by conducting a scoping review. Literature synthesis found barriers most frequently cited by PCPs including insufficient knowledge about genetics and risk assessment, lack of access to geneticists, and insufficient time to address these challenges. Telegenetics, patient-centered care, and learning communities are strategies to overcome these barriers. Telegenetics supplements face-to-face clinics by providing remote access to genetic services. It may also be used for physician consultations and education. Patient-centered care allows providers, families, and patients to coordinate services and resources. Access to expert information provides a critical resource for PCPs. Learning communities may represent a mechanism that facilitates information exchange and knowledge sharing among different providers. As PCPs often play a crucial role caring for patients with genetic disorders in underserved areas, barriers to primary care-medical genetics integration must be addressed to improve access. Strategies, such as telegenetics, promotion of evidence-based guidelines, point-of-care risk assessment tools, tailored education in genetics-related topics, and other system-level strategies, will facilitate better genetics and primary care integration, which in turn, may improve genetic service delivery to patients residing in underserved communities. Answer: Yes, genetics can aggravate the health of isolated and remote populations. The study comparing two remote island population samples from Croatia (from the islands of Vis and Korcula) with mainland controls from the coastal city of Split found a higher prevalence of gout, hyperuricaemia, and osteoarthritis in the isolated populations compared to the mainland sample. The analysis of allele frequencies for three loci of the SLC2A9 gene, which plays a major role in the regulation of serum uric acid concentration, suggested that the prevalence of deleterious genotypes was highest in the more isolated population of Vis, followed by Korcula, and then Split. Multivariate analysis indicated that those on the island of Vis had significantly higher odds ratios for both hyperuricaemia and osteoarthritis. This suggests that isolated and remote populations may suffer not only from the consequences of geographic and social isolation but also from a genetic structure that may contribute to poorer health outcomes (PUBMED:23534916). Isolated populations are known to have increased amounts of linkage disequilibrium and a higher probability for the presence of high-frequency disease-associated variants due to genetic drift. This can increase the statistical power to detect association signals in isolated populations compared to large outbred populations, which is advantageous for identifying genetic variation predisposing to diseases like Type 2 diabetes (T2D) (PUBMED:27189761). However, it is important to note that while isolated populations can exhibit low genetic diversity due to drift and inbreeding, they may simultaneously harbor adaptive variation. For example, a study on American badger subspecies found that small and isolated populations, despite reduced neutral genetic variation, harbored high functional variation within individuals, which could contribute to the species' evolutionary potential at the northern range (PUBMED:27877205). In summary, genetics can indeed aggravate the health of isolated and remote populations by contributing to a higher prevalence of certain diseases. However, these populations can also provide valuable insights into the genetics of complex diseases and hold adaptive genetic variation that is important for conservation and understanding evolutionary potential.
Instruction: Is H1N1 flu different from seasonal flu on initial plain chest films? Abstracts: abstract_id: PUBMED:21458832 Is H1N1 flu different from seasonal flu on initial plain chest films? Objectives: To determine whether there are differences in the findings on the initial plain chest films of patients with H1N1 influenza and those of patients with flu symptoms during the flu season. Material And Methods: All patients underwent plain-film chest radiography in the Emergency Department for flu symptoms; 95 patients had H1N1 influenza confirmed between July 2009 and December 2009 and 95 patients were attended for symptoms of seasonal flu in January 2009. We analyzed the views obtained, the distribution and location of the radiologic findings, and patients' age, sex, and previous disease. Results: Patients with H1N1 influenza were younger than those with seasonal flu symptoms (mean 40.2 vs 50.9 years; p&lt;0.001) and fewer had prior disease (48 vs. 63; p&lt;0.001). Plain films were acquired with patients in the standing position in 75 patients in the H1N1 group and in 77 in the seasonal flu group; pathological findings were present in nearly 50% of the patients in each group. The most common findings in the H1N1 group were multifocal patchy consolidations (41.2%; p&lt;0.001) and peribronchial-vascular opacities (16.3%), whereas in the seasonal flu group the most common finding was consolidation in a single lobe (43.9%). Conclusion: We found significant differences between the radiologic findings of patients with H1N1 influenza (severe) and those of patients with symptoms of flu during the flu season: the incidence of multifocal patchy consolidation was greater in H1N1 patients and H1N1 patients were younger. abstract_id: PUBMED:22802854 Determining symptoms for chest radiographs in patients with swine flu (H1N1). Background: The question arises about the chest X-ray findings and clinical symptoms in swine flu and about the most important clinical finding when correlated with the chest radiograph. Should physicians order a chest X-ray in each patient suspected of having swine flu? Material/methods: There were 179 patients with a high suspicion of swine flu. All 179 patients had an initial chest radiograph. As many as 65 males (representing 56% of the projected study population) had a normal chest radiograph, while 35 males (representing 55.6% of the study population) had an abnormal chest X-ray. As many as 51 females (representing 44% of the population) had a normal chest X-ray, while 20 females (representing 44% of the study population) had abnormal chest X-rays. Results: Polymerase chain reaction (PCR) was not a determining factor for normal vs. abnormal chest X-ray (CXR). Rapid antigen test was not a determining factor for normal vs. abnormal CXR. Fever was not a determining factor for normal vs. abnormal CXR. Cough appears to be a determining factor for normal vs. abnormal CXR. Sore throat appears to be a determining factor for normal vs. abnormal CXR. Chest pain was not a determining factor for normal vs. abnormal CXR. Presence of cough with PCR was statistically significant. Conclusions: In my opinion, chest radiographs in patients with suspected H1N1 should only be obtained if there is a cough or sore throat. Other symptoms associated with H1N1 do not warrant a chest radiograph unless absolutely necessary. abstract_id: PUBMED:36777774 Residential mobility predicts COVID-19 and seasonal flu vaccination behaviors in the United States. Aim: Vaccination is one of the most effective strategies to contain the transmission of infectious diseases; however, people's intentions and behavior for vaccination vary across different regions and countries around the world. It is not clear how socioecological factors such as residential mobility influence people's vaccination behaviors for infectious diseases. Methods: We analyzed public data on residential mobility and vaccination rates for COVID-19 and seasonal flu in the United States and explored how residential mobility in the previous year influenced vaccination rates for COVID-19 and seasonal flu (2011-2018) across 50 states of the US. The data were accessed and analyzed in 2021. Results: Study 1 demonstrated that collective-level residential mobility predicted COVID-19 vaccination rates across the United States (B = -168.162, 95% CI [-307.097, -29.227], adjusted R2 = 0.091, p = 0.019). Study 2 corroborated this finding by documenting that collective-level residential mobility predicted vaccination rates for seasonal flu from 2011 to 2018 across the United States (B = -0.789, 95% CI = [-1.018, -0.56], adjusted R2 = 0.222, p &lt; 0.001). The link between residential mobility and vaccination behavior was robust after controlling relevant variables, including collectivism, cultural tightness-looseness, and sociodemographic variables. Conclusions: Our research demonstrated that residential mobility is an important socioecological factor that influences people's vaccination behaviors for COVID-19 and seasonal flu. The results enrich our understanding of the socioecological factors that influence vaccination behaviors and have implications for developing tailored interventions to promote vaccination during pandemics of infectious diseases. abstract_id: PUBMED:33300998 Association between seasonal flu vaccination and COVID-19 among healthcare workers. Background: In next fall and winter, SARS-CoV-2 could circulate in parallel with seasonal influenza. The dual epidemics will result in considerable morbidity and mortality; therefore, influenza vaccination may be essential. Recent studies found increased risk of coronavirus in individuals receiving influenza vaccination. Aims: Our aim is to analyse the association between influenza vaccination and COVID-19 in a population of healthcare workers (HCWs). Methods: IgG antibodies against SARS-CoV-2 were detected in 3520 HCWs at a large hospital in Northern Italy. For each participant, we collected data on flu immunization status for the last five flu seasons. Logistic regression was used to test associations between seasonal flu vaccination status and a positive serology tests for COVID-19. Results: During the last five flu seasons, 2492 vaccinations were administered. Serology tests were negative for 3196 (91%) HCWs and residents and only 21 (1%) people had an equivocal test (12.0-15.0 AU/mL). Only 128 (4%) people received a diagnosis of COVID-19, with a positive swab test. No flu vaccinations for the last five flu seasons were specifically associated with diagnosis of COVID-19 or with positive results of serology tests. Conclusions: Flu vaccinations did not appear to be associated with SARS-CoV-2 infection. Influenza vaccination should continue to be recommended for HCWs and for individuals at increased risk for severe illness from respiratory infection. abstract_id: PUBMED:30865297 Physiologic Model for Seasonal Patterns in Flu Transmission. Seasonal patterns in flu transmission have observational validity in temperate climates. However, there is no consensus mechanism explaining the increased incidence of flu during the winter. The physiologic effects of cold weather and dry air on the upper respiratory system may contribute to immune dysfunction and increased susceptibly to flu-causing pathogens. Low temperature limits the absolute humidity of air. Persistent exposure to dry air leads to airway desiccation and failure of the mucociliary system. The resultant physiologic and histopathologic changes that occur in the airway increase susceptibility to flu-causing pathogens. Laryngoscope, 130:309-313, 2020. abstract_id: PUBMED:31027663 Seasonal Influenza (Flu). Seasonal influenza, or flu, is an airborne respiratory virus that occurs every year in the fall to early spring in the United States. The virus is highly contagious and symptoms can run from mild to severe. In some populations (very young, individuals with chronic comorbid conditions, immunocompromised individuals of all ages, pregnant women, and frail older adults), the virus can lead to increased morbidity and mortality. In a majority of cases, seasonal influenza is preventable through safe and readily available vaccinations. abstract_id: PUBMED:34938071 Parents' Willingness to Vaccinate Their Children Against Seasonal Influenza After the COVID-19 Pandemic in Saudi Arabia: A Retrospective Cross-Sectional Survey. Purpose: To explore the impact of the novel coronavirus (COVID-19) pandemic on caregivers' willingness to vaccinate their children against influenza in 2021 in Saudi Arabia and the factors influencing this decision. Patients And Methods: An online survey of 2501 caregivers in Saudi Arabia with children aged 6 months-18 years was conducted between July 15, 2021, and August 2, 2021. A convenience sample of participants that met the inclusion criteria was used as the study sample. Social Science Package Statistical (SPSS) was used for the statistical analysis. Categorical variables were reported as frequencies and percentages. The Chi-square test was used for categorical variables to assess the difference between the variables and the parents' willingness to vaccinate their children against seasonal influenza after the COVID-19 pandemic. Results: Of the 2501 respondents to the survey, 1185 (47.3%) parents plan to give their children the influenza vaccine next year, which is an increase from 745 (29.8%) in the previous year. The following were the main reasons for not giving children the vaccine: children were less likely to get seasonal flu (617, 24.7%), fear of side effects (491, 19.6%), and fear of needles and syringes (484, 19.4%). Among the parents, 1222 (48.9%) were more likely to receive the seasonal flu vaccine themselves the following year, and 1361 (54.4%), which is more than half, were concerned about their children contracting seasonal influenza. Conclusion: Parents' decision-making to vaccinate their children against seasonal influenza has been influenced by the COVID-19 pandemic. Numerous factors including a caregiver who had received the seasonal influenza vaccine themselves, who had vaccinated or planned to vaccinate their children during the previous or current seasonal influenza season, and/or who had a higher education level were associated with increased willingness to receive the flu vaccine. abstract_id: PUBMED:38487909 Relationship between the lung ultrasound and the clinical severity of the patient with seasonal flu Introduction: The seasonal flu is a very important reason for consultation every winter. Symptoms can quickly progress to severe pneumonia. Currently, few tools exist to assess the clinical severity of patients. The aim of this study is to demonstrate the role of lung ultrasound as a marker of severity in patients with influenza. Methods: 79 patients who arrived at the emergency department with flu-like symptoms were included. A pulmonary ultrasound looking for an interstitial syndrome or consolidation was performed. The qSOFA, the SOFA, the saturation, the PaO2/FiO2 ratio, the oxygen needs, the destination of the patient made it possible to establish the seriousness of the pathology of the patient. Ultrasound was then compared to these different tools. Results: The more the ultrasound became pathological, the more we observed a proportion of qSOFA (p = 0.001) and pathological SOFA (p = 0.009). Most patients with acute respiratory distress syndrome have pathological ultrasound (p &lt; 0.001). The average admission saturation is 89.2 % in the "pathological ultrasound" group compared to 95.8 % in the "normal ultrasound" group (p &lt; 0.001). Patients who required invasive therapies had pathological ultrasound (p &lt; 0.001). Of the 28 patients with pathological ultrasound, 24 required hospitalization (p &lt; 0.001). Conclusion: Lung ultrasound is a major asset for assessing the severity of the patient with seasonal flu. In addition, ultrasound allows better monitoring of the patient by being able to influence the destination of the latter towards a return home or monitoring in intensive care. abstract_id: PUBMED:29628113 Seasonal flu vaccination for older people: Evaluation of the adjuvanted vaccine. Positioning report Flu is a major public health problem, particularly for older people, and creates an important clinical and economic burden. A high mortality rate was reported in Spain during the period 2015 to 2016; 3,101 serious cases were hospitalised with a confirmed diagnosis of flu, of which 11% died (352 cases). Furthermore, financial and health costs are greatly increased by the complications of flu; people aged over 65 years represent approximately 64% of the total costs. Seasonal flu vaccination is the fundamental strategy, as demonstrated by cost-benefit and cost-effectiveness studies. A priority objective is to improve the vaccine's immune response and the search for and inclusion of adjuvants and immunostimulants in vaccines is a major line of research. This positioning report evaluates vaccination for older people and the importance of the adjuvanted vaccine in the elderly in strengthening immunogenicity, by means of a critical review of the literature based on the best evidence available on its immunogenicity and effectiveness, and an economic assessment. abstract_id: PUBMED:35661699 The Spanish flu. The Spanish flu occurred at the end of the First world war, in disastrous epidemiological conditions on populations exhausted by four years of war. At that time, there were no vaccines, no antibiotics, no oxygen and no resuscitation. It was even thought that the infectious agent was a bacterium. Humanity was poorly equipped to fight against a pandemic that caused 50-100 million deaths. The first palpable signs of the outbreak were the rapidly spreading multiple epidemics among young recruits in the American military training camps in March 1918. The flu then spread to the civilian populations and circled the globe twice, sparing no country, even the most remote islands, in tropical as well as polar climates, evolving in successive waves up until April 1919. The first was mild (lethality 0.21%), the second was lethal (lethality 2-4%), and during the third wave, lethality declined (1%), after which the flu became seasonal, with low lethality (0.1%). Between 20 and 40 years of age, patients often died within a few days of pneumonia, with respiratory distress leading to cyanosis, frequently associated with bacterial superinfection. The influenza virus, Myxovirus influenzae, was first discovered in 1931 by Richard Shope in pigs, and then in 1933 by Wilson Smith, Patrick Laidlaw and Christopher Andrews in humans during a seasonal influenza epidemic in London. In 1943, it was first observed under the electron microscope. Hemagglutinin and neuraminidase, the two main virulence factors, were discovered in the 1940s by George Hirst and Alfred Gottschalk. An RNA virus composed of 13,500 nucleotides in eight segments, it was initially sequenced in the 1980s, when Jeffrey Taubenberger determined the complete nucleotide sequence of the 1918 virus from lung tissue samples from patients who died of influenza. The 1918 H1N1 virus was found to have originated in birds. In 2005, it was successfully resuscitated in cell culture. It is 40,000 times more virulent in primates than the seasonal H1N1 virus. The lethality of the second wave could have been due to mutations in the hemagglutinin H1 gene, which would have resulted in a stronger affinity for α,2-6 galactose sialic acids, the virus' receptors on human epithelial cells. That said, the origin of the Spanish flu virus remains controversial. It probably emerged and circulated in the population before March 1918 in America, although European origin has also been evoked. The high mortality in the 20-40 age group remains an enigma. Some experts point to reduced immune response in patients previously exposed to related viral hemagglutinins during the 1889 pandemic. In any event, even though it concerns a markedly different virus, the history of the Spanish flu sheds light on the difficulties of management during today's pandemic. Answer: Yes, H1N1 flu appears to be different from seasonal flu on initial plain chest films. A study found significant differences between the radiologic findings of patients with H1N1 influenza and those of patients with symptoms of flu during the flu season. Specifically, the incidence of multifocal patchy consolidation was greater in patients with H1N1, and these patients were also generally younger. In contrast, the most common finding in the seasonal flu group was consolidation in a single lobe. These differences suggest that H1N1 influenza may present with more severe radiologic findings compared to seasonal flu (PUBMED:21458832).
Instruction: Are physician-patient communication practices slowly changing in Croatia? Abstracts: abstract_id: PUBMED:23630146 Are physician-patient communication practices slowly changing in Croatia? -- a cross-sectional questionnaire study. Aim: To explore physician-patient communication practices during the process of obtaining informed consent in a hospital setting in Croatia. Methods: Two hundred and fifty patients (response rate 78%) from five tertiary level hospitals in Zagreb, Croatia, anonymously filled in the questionnaire on informed consent and communication practices by Nemcekova et al in the period from April to December 2011. Results: Eighty five percent of patients received complete, understandable information, presented in a considerate manner. Patients in surgical departments received a higher level of information than those in internal medicine departments. Patients were informed about health risks of the proposed treatments (in 74% of cases) and procedures (76%), health consequences of refusing a medical intervention (69%), and other methods of treatment (46%). However, patients pointed out a number of problems in physician-patient communication. Conclusion: Communication practices during informed consent-obtaining process in hospitals in Zagreb are based on a model of shared decision-making, but paternalistic physician-patient relationship is still present. Our results indicate that Croatia is undergoing a transition in the physician-patient relationship and communication. abstract_id: PUBMED:18265820 The science of communication in the patient-physician relationship The authors dedicate their work to the improvement of inter-human communication within the healthcare system, mainly in the sub-system of the patient-physician relationship, with the aim of respecting human rights in general and in particular, of respecting patient rights. The combined usage of elements of medical ethics, acquired throughout professional training - university and post-university studies - and the knowledge assimilated following some last minute information relative to the science of communication is a permanent responsibility of all healthcare practitioners with the goal of improving their relationships with patients. The authors believe that this is the only way of increasing the degree of trust and satisfaction of the population towards healthcare providers. The authors are in favor of the implementation, in common medical practice, of this motto of communication: "If you do not communicate, you do not exist. If you do not know, you are at fault", as a founding principle of ethics and of the medical professional ethics, applicable equally in private and public medical practice. Effective patient-physician communication generates reciprocal trust. Its absence or poor communication can lead to distrust, suspicion, animosity and even conflicts which can cause physicians to be called before the College of Physicians of Romania or lead to legal repercussions for both physicians and patients. If it is true that, during medical assistance procedures, patients wish their right to be cared for and treated as dignified humans to be respected, it is also fair for those who care for them to evoke in turn their right to respect and dignity from the side of their patients. National legislation dedicated to issues relating to the professional patient-physician relationship contains provisions strictly in agreement with the regulations of the European Union. Once ethical and legal standards are introduced into national legislation, the next step should be towards the implementation and control of the procedure in which these desiderata are applied to life in Romanian society. abstract_id: PUBMED:22936477 Patient-centered communication in the physician-patient relationship As a result of increasing demands for more patient-centeredness in the German health care system, physician-patient communication has been subject to transformation. Physicians are being requested to take into account their patients' communicative needs, including information, shared decision making, and emotional support, more vigorously than they had been in the past. A cooperative model of the physician-patient relationship is considered most suitable for fulfilling these needs and for empowering patients to make informed decisions regarding their own health care. However, a large body of evidence exists-particularly regarding communication between cancer patients and their doctors-that shows that patients' needs are not adequately addressed or met. This potential for optimization is all the more important because targeting patients' needs during doctor-patient communication not only improves patients' satisfaction with the communication, quality of life, and well-being, but may also produce better treatment outcomes. abstract_id: PUBMED:30190769 The influence of patient-physician communication on physician loyalty and hospital loyalty of the patient. Objective: Patient-physician communication is important for an effective healthcare service and for the patient's development of loyalty to the hospital. In this regard, this study aimed to determine whether there is a relationship between the patient-physician communication, the loyalty of the patient to the physician and to the hospital. Also study aimed to determine whether there is a mediating role of the physician loyalty on the patient-physician communication effect on to the hospital loyalty. Method: Five hundred ten questionnaires were distributed to regular public patients of the government hospitals, clinics, and private clinic patients in Sakarya using a simple random sampling method. Data were analysed using descriptive statistics and Structural Equation Modelling (SEM). Results: According to the findings, there was a significant relationship between patient-physician communication and loyalty to physician and to hospital. Patient-physician communication has a significant effect on loyalty to physician and hospital. In addition, patient loyalty has a mediating role on the patient-physician communication effect on the hospital loyalty. Conclusion: According to the result of the study, physician-patient communication could be used as an important tool in creating physician loyalty and hospital loyalty. This study helps physicians and health service providers to formulate strategies and tactics that will effectively develop the loyalty of patients. abstract_id: PUBMED:20099777 Patient-physician communication. Extensive research has shown that no matter how knowledgeable the physician might be, if he/she is not able to open good communication channels with the patient, he/she may be of no help to the latter Despite this known fact and the fact that a patient-physician consultation is the most widely performed 'procedure' in a physician's professional lifetime, effective communication with the patient has been found to be sadly lacking. This review article seeks to discuss 'the what', 'the why' and 'the how' of doctor-patient communication. abstract_id: PUBMED:26830544 The relationship between physician humility, physician-patient communication, and patient health. Objective: Cultural portrayals of physicians suggest an unclear and even contradictory role for humility in the physician-patient relationship. Despite the social importance of humility, however, little empirical research has linked humility in physicians with patient outcomes or the characteristics of the doctor-patient visit. The present study investigated the relationship between physician humility, physician-patient communication, and patients' perceptions of their health during a planned medical visit. Methods: Primary care physician-patient interactions (297 patients across 100 physicians) were rated for the physician's humility and the effectiveness of the physician-patient communication. Additionally, patients reported their overall health and physicians and patients reported their satisfaction with the interaction. Results: Within-physician fluctuations in physician humility and self-reported patient health positively predicted one another, and mean-level differences in physician humility predicted effective physician-patient communication, even when controlling for the patient's and physician's satisfaction with the visit and the physician's frustration with the patient. Conclusions: The results suggest that humble, rather than paternalistic or arrogant, physicians are most effective at working with their patients. Practice Implications: Interventions to improve physician humility may promote better communication between health care providers and patients, and, in turn, better patient outcomes. abstract_id: PUBMED:15710660 Patient-physician communication: why and how. Patient-physician communication is an integral part of clinical practice. When done well, such communication produces a therapeutic effect for the patient, as has been validated in controlled studies. Formal training programs have been created to enhance and measure specific communication skills. Many of these efforts, however, focus on medical schools and early postgraduate years and, therefore, remain isolated in academic settings. Thus, the communication skills of the busy physician often remain poorly developed, and the need for established physicians to become better communicators continues. In this article, the authors briefly review the why and how of effective patient-physician communication. They begin by reviewing current data on the benefits of effective communication in the clinical context of physicians caring for patients. The authors then offer specific guidance on how to achieve effective communication in the patient-physician relationship. abstract_id: PUBMED:33035705 Measuring the quality of patient-physician communication. Patient-physician communication is an often overlooked yet a very important aspect of providing medical care. Positive patient-physician quality of communication within discourse has an influence on various aspects of a consultation such as a patient's treatment adherence to prescribed medical regimen and their medical care outcome. As few reference standards exist for exploring semantics within the patient-physician setting and its effects on personalized healthcare, this paper presents a study exploring three methods to capture, model and evaluate patient-physician communication among three distinct data-sources. We introduce, compare and contrast these methods for capturing and modeling patient-physician communication quality using relatedness between discourse content within a given consultation. Results are shown for all three data-sources and communication quality scores among physicians recorded. We found our models demonstrate the ability to capture positive communication quality between both participants within a consultation. We also evaluate these findings against self-reported questionnaires highlighting various aspects of the consultation and rank communication quality among seventeen physicians who consulted amid one-hundred and thirty-two patients. abstract_id: PUBMED:8553634 Physician-patient communication in managed care. The quality of physician-patient communication affects important health care outcomes. Managed care presents a number of challenges to physician-patient communication, including shorter visits, decreased continuity, and lower levels of trust. Good communication skills can help physicians create and maintain healthy relationships with patients in the face of these challenges. We describe 5 communication dilemmas that are common in managed care and review possible solutions suggested by recent literature on physician-patient communication. We also describe ways that managed care plans can promote more effective communication between physicians and patients. abstract_id: PUBMED:14770091 Patient-physician communication. Purpose Of Review: Patient-physician communication is a vital element of the process of care. In numerous studies, communication behaviors during the medical interaction have been significantly associated with a variety of patient satisfaction and health outcomes. The purpose of this review is to synthesize recent findings in the area of patient-physician communication, particularly as they relate to patients with rheumatic disease. Recent Findings: Although there is scant research on the medical interaction with patients with rheumatic disease specifically, there is a large body of literature examining patient-doctor communication in other chronic diseases. Because many of the findings of studies conducted in patients with other diseases can be extended to patients with rheumatic disease, this review also covers the most salient literature published in the past year on patient-doctor communication in chronic diseases in general, in addition to covering publications in the field of rheumatology. Summary: The literature reviewed shows that patients' expectations of medical encounters are not always fulfilled and that patients desire increased participation and information sharing. Establishing patient-centered care is a challenging goal, but fortunately, recent studies show that physicians can be trained, irrespective of the years in practice, to provide patient-centered care and increase patient participation and satisfaction with care. Answer: Yes, physician-patient communication practices are slowly changing in Croatia. A cross-sectional questionnaire study conducted in five tertiary level hospitals in Zagreb, Croatia, revealed that communication practices during the informed consent-obtaining process are based on a model of shared decision-making. However, the study also noted that a paternalistic physician-patient relationship is still present. The results suggest that Croatia is undergoing a transition in the physician-patient relationship and communication, indicating a shift towards more collaborative communication practices between physicians and patients (PUBMED:23630146).
Instruction: Does combination of antegrade and retrograde cardioplegia reduce coronary artery bypass grafting-related conduction defects? Abstracts: abstract_id: PUBMED:17060042 Does combination of antegrade and retrograde cardioplegia reduce coronary artery bypass grafting-related conduction defects? Background: Conduction disorders appearing after coronary artery bypass surgery (CABG) may have many different causes. In this study, we evaluated the postoperative conduction disorders after CABG with respect to the ante-grade blood cardioplegia and ante-grade plus continuous retrograde cardioplegia delivery methods. Materials And Methods: This retrospective study included 1824 patients undergoing CABG between January 2001 and December 2005. There were 694 female patients (38%) and 1130 male patients (62%). Myocardial protection was done by isothermic hyperkalemic blood cardioplegia. Patents in Group 1 (n = 704) were operated on using only intermittent antegrade cardioplegia and those in group 2 (n = 1120) were operated on using the antegrade plus retrograde continuous cardioplegia. The postoperative occurrences of a new right bundle branch block, left anterior hemiblock, left posterior hemiblock, left bundle branch block, or third-degree atrioventricular block were evaluated and compared. Results: Total mortality rate was 1.6% (29 patients) without significant difference between the groups. The preoperative and perioperative characteristics were statistically similar in the groups. The occurrence of conduction disorders was significantly higher in group 1 (P = .006, 55 versus 52 patients). The analysis of the patients with conduction disorders showed a significantly increased mortality rate (P &lt; .001) in addition to a significantly increased period of intensive care unit follow-up and duration of postoperative hospitalization (P &lt;.001). Conclusion: The present study demonstrated that the perioperative occurrence of conduction disorders after CABG was decreased by antegrade controlled and retrograde continuous combination cardioplegia. abstract_id: PUBMED:8800169 Antegrade and retrograde cardioplegia: alternate or simultaneous? Unlabelled: Neither antegrade nor retrograde cardioplegic protection provides homogeneous distribution, and a combination may be required to avoid anaerobic metabolism and depressed postoperative ventricular function. Tepid cardioplegia (29 degrees C) avoids the delayed recovery of cardiac function and metabolism associated with cold cardioplegia (15 degrees C) and reduces the anaerobic metabolism seen with warm (37 degrees C) cardioplegia. We compared two techniques that combine antegrade and retrograde tepid cardioplegia: alternate and simultaneous. Methods: Sixty patients undergoing elective isolated coronary artery bypass grafting were randomized to receive near continuous tepid retrograde and either intermittent antegrade cardioplegia (the alternate technique) or antegrade cardioplegia with the solution delivered concurrently through each completed vein graft (the simultaneous technique). Results: Myocardial lactate extraction was greater after crossclamp release following simultaneous than alternate cardioplegia. Postoperative ventricular function was better after alternate than simultaneous cardioplegia. Conclusion: Both techniques permitted rapid postoperative recovery of myocardial metabolism and ventricular function. However, simultaneous cardioplegia was simpler and did not require deairing the aortic root between antegrade infusions. abstract_id: PUBMED:7887719 Tepid antegrade and retrograde cardioplegia. To determine the optimal temperature for the combination of antegrade and retrograde cardioplegia, 42 patients undergoing coronary artery bypass grafting were randomized to receive cold (9 degrees C; n = 14), tepid (29 degrees C; n = 14), or warm (37 degrees C; n = 14) blood cardioplegia delivered continuously retrograde and intermittently antegrade. Myocardial oxygen utilization, lactate and acid metabolism, and coronary vascular resistance were measured during the operation and cardiac function was assessed postoperatively. Myocardial oxygen consumption, lactate release and acid release were greatest with warm, intermediate with tepid, and least with cold cardioplegia (p = 0.0001). However, washout of lactate and acid at the time of cross-clamp release was reduced (p = 0.022) with tepid or cold compared with warm cardioplegia. Early postoperative left ventricular function was best preserved (p = 0.01) after tepid than after cold or warm combination cardioplegia. These results suggest that tepid combination cardioplegia reduced metabolic demands but permitted immediate recovery of cardiac function. This technique may provide better myocardial protection than cold or warm combination cardioplegia. abstract_id: PUBMED:1491193 Postoperative conduction disturbances in patients with retrograde continuous cold blood cardioplegia Two hundred twenty-seven patients who underwent open heart surgery with retrograde continuous cold blood cardioplegia (RC-CBCP) were investigated to manifest the incidence of postoperative conduction disturbances (CD) and to determine factors related to the occurrence of CD. The incidence of CD in patients with RC-CBCP was 16.7%, which was lower than that (24.4%) in 41 patients with antegrade continuous cold blood cardioplegia. In addition, the CD resolved by the time of hospital discharge in 65% of the patients. There were no patients with new complete atrioventricular block requiring a pacemaker. The development of postoperative CD was related to topical cooling with slushed ice and lower myocardial temperature of the left ventricle, but not to kinds of diseases, duration of aortic cross-clamp, or the distribution of RC-CBCP evaluated from myocardial temperature at the end of initial infusion of cold cardioplegic solution. Furthermore, terminal warm blood cardioplegia reduced the occurrence of CD. We conclude that the occurrence of CD in patients with RC-CBCP is lower than that in patients with antegrade cardioplegia, and is most related to local hypothermia with slushed ice. This suggests that the most likely mechanism for the development of CD would be ischemic or reperfusion injury to the specialized conduction system with the disturbance of microcirculation because of rouleaux formation in CBCP at very low myocardial temperature. abstract_id: PUBMED:8680285 Comparison of antegrade with antegrade/retrograde cold blood cardioplegia for myocardial revascularization. There has been increasing interest in the use of retrograde coronary sinus perfusion for delivery of cardioplegic solution during myocardial revascularization. Despite evidence of improved cardiac protection, it is unclear if a combined antegrade/retrograde approach to myocardial preservation offers significant clinical benefits. One hundred twenty patients undergoing elective 1st-time coronary bypass surgery for 3-or-more-vessel disease received aortic root, antegrade cold blood cardioplegia (Group I, n=52) or combined antegrade/retrograde cardioplegia via coronary sinus cannulation (Group II, n=68). All preoperative variables were similar, including age, severity of coronary artery disease, functional status, and ejection fraction. Intraoperative and postoperative variables, including the degree of hypothermia, temperature of infusion solution, number of bypass grafts, defibrillation attempts and spontaneous return to sinus rhythm, the use of intraaortic balloon pump counterpulsation, and inotropic support during weaning from cardiopulmonary bypass, were not statistically different. Cardioplegia infusion time was longer in Group II than in Group I (2.5 +/- 0.8 vs 1.7 +/- 0.7 min, p &lt; 0.05). The postoperative cardia output, electrocardiographic and cardiac enzyme evidence of ischemia, the need for temporary pacing, and 30-day morbidity were similar for both groups. The data indicate that in this non-risk-stratified group of patients, the route of cardioplegia administration is not a determinant of clinical outcome. abstract_id: PUBMED:2366565 A new simplified method of optimizing cardioplegic delivery without right heart isolation. Antegrade/retrograde blood cardioplegia. We report our initial experience with antegrade/retrograde cardioplegia using a self-inflating/deflating balloon cannula that allows rapid transatrial retrograde cannulation of the coronary sinus (10 to 15 seconds) without right heart isolation and permits routine single venous cannulation. We subjected 141 consecutive adult patients and nine children to antegrade/retrograde cardioplegia using rapid transatrial insertion of the Retroplegia cannula (Research Medical, Inc., Salt Lake City, Utah). Single venous cannulation was used in 116 patients having coronary artery bypass grafting or aortic valve replacement, or both. Initial antegrade blood cardioplegia caused immediate arrest (less than 1 minute) and the cardioplegic dose was divided equally between antegrade and retrograde delivery. Included are 95 patients having isolated bypass grafting (34 with extending infarction, cardiogenic shock, or ejection fraction less than 20%); 19 having coronary reoperations, 42 with aortic or mitral valve procedures, or both; and nine children having repair of congenital defects (e.g., repair of ventricular septal defect, Rastelli operation, Konno operation). Septal temperature in patients with occlusion of the left anterior descending coronary artery fell to 11.5 degrees +/- 0.5 degrees C after retrograde cardioplegia versus only 16 degrees +/- 3 degrees C after antegrade cardioplegia (p less than 0.05). The overall hospital mortality rate was 2% and no complications followed transatrial retrograde cannulation of the coronary sinus. abstract_id: PUBMED:17827576 Antegrade versus antegrade with retrograde delivery of cardioplegic solution in myocardial revascularisation. A clinical study in patients with triple vessel coronary artery disease. The effects of antegrade and antegrade with retrograde delivery of cardioplegic solution were evaluated in 60 patients who underwent myocardial revascularisation. All patients had triple vessel coronary artery disease and underwent revascularisation using arterial and vein grafts. Myocardial protection consisted of administration of the St.Thomas' Hospital cardioplegic solution, topical slushed ice and systemic hypothermia (28 degrees C-30 degrees C). The patients were categorised into: group A (n=30), who received antegrade cardioplegia alone, and group B (n=30), who received antegrade and retrograde cardioplegia. With the exception of the total dose of cardioplegic solution ('p'=0.02), there was no significant difference between the two groups. Cardiac function was assessed before and after the patient was weaned from the cardio-pulmonary bypass. There was a significant increase in the right atrial pressure and a significant decrease in the mean arterial pressure from the baseline ('p'&lt;0.05), 10 minutes after cardiopulmonary bypass in group A. All patients in-group B had a spontaneous return to sinus rhythm after release of the aortic cross clamp, whereas 3 patients in group A required defibrillation to restore sinus rhythm. Intra aortic balloon pump support was necessary in 4 patients in group A, as against 1 patient in group B to terminate the cardiopulmonary bypass. The clinical outcome was similar in both groups. We conclude that the use of a combination of retrograde and antegrade cardioplegia facilitates early recovery of left ventricular function after coronary artery bypass grafting. abstract_id: PUBMED:10509991 Myocardial perfusion during warm antegrade and retrograde cardioplegia: a contrast echo study. Background: We evaluated distribution of warm antegrade and retrograde cardioplegia in patients undergoing coronary artery bypass grafting (CABG). Methods: Myocardial perfusion was evaluated pre- and post-CABG using transesophageal echocardiography with injection of sonicated albumin microbubbles (Albunex) during warm antegrade and retrograde cardioplegia. The left ventricle (LV) was evaluated in five segments and the right ventricle (RV) was evaluated in two segments. Segmental contrast enhancement was graded as absent (score = 0), suboptimal or weak (score = 1), optimal or excellent (score = 2), or excessive (score = 3). Results: Pre-CABG cardioplegic perfusion correlated weakly with severity of coronary artery stenoses (r = -0.331 and 0.276 for antegrade and retrograde cardioplegia, respectively). Antegrade cardioplegia administration resulted in 98% and 96% perfusion to the left ventricle pre- and post-CABG, respectively. Retrograde cardioplegic administration resulted in reduced LV perfusion, with 86% (p = 0.032 from antegrade) and 59% (p&lt;0.001 from antegrade) pre- and post-CABG, respectively. The average LV perfusion score (mean +/- SEM) was greater with antegrade than retrograde cardioplegia both pre-CABG (1.93+/-0.04 vs. 1.53+/-0.11, p&lt;0.001) and post-CABG (1.63+/-0.07 vs. 1.19+/-0.13, p = 0.004). RV perfusion was poor with both techniques pre-CABG, but improved significantly with antegrade cardioplegia post-CABG. Conclusions: We conclude that warm antegrade cardioplegia results in better left ventricular perfusion than warm retrograde cardioplegia. Right ventricular cardioplegic perfusion was suboptimal, but the best delivery was achieved with antegrade cardioplegia after coronary bypass. We therefore recommend construction of the saphenous vein graft to the right coronary artery early in the operative procedure. abstract_id: PUBMED:2334555 A new technique for delivering antegrade/retrograde blood cardioplegia without right heart isolation. We report our updated experience with combined antegrade/retrograde cardioplegia using a self-inflating/deflating balloon cannula that allows rapid transatrial retrograde coronary sinus cannulation (10-15 s) without right heart isolation. This permits routine single venous cannulation and optimizes myocardial protection when combined with antegrade cardioplegia. Two hundred fifty-five consecutive patients underwent antegrade/retrograde cardioplegia. Initial antegrade blood cardioplegia caused immediate arrest (less than 1 min), and the cardioplegic dose was divided equally between antegrade and retrograde delivery. Included are 173 isolated CABG patients (39 with either extending infarction, cardiogenic shock, or ejection fraction less than 20%), and 37 coronary reoperations, 67 with aortic and/or mitral valve procedures, 3 with arrhythmia surgery, and 7 children (VSD, Rastelli, Konno, etc). Septal temperature in patients with LAD occlusion fell to 11.6 degrees C +/- 0.5 after retrograde vs only 16.1 degrees C +/- 3 after antegrade cardioplegia (p less than 0.05). Overall hospital mortality was 2.8% and no complications followed transatrial retrograde coronary sinus cannulation. Antegrade/retrograde cardioplegia allowed retrograde flushing of debris in redo coronary operations, produced immediate arrest with low cardioplegic volumes, improved cardioplegic distribution during IMA grafting, allowed aortic and mitral valve procedures to proceed uninterrupted, and ensured distribution in unforeseen aortic insufficiency. Antegrade/retrograde cardioplegia is now used routinely in all adult and in many pediatric operations because of its speed, safety, and simplicity. abstract_id: PUBMED:7502281 Antegrade versus retrograde crystalloid cardioplegia: perioperative assessment of cardiac energy metabolism by means of myocardial lactate measurement. The effects of retrograde and antegrade delivery of cold St. Thomas' Hospital cardioplegia were evaluated and compared in 21 patients who underwent elective myocardial revascularization. The patients were randomly separated into two groups: the antegrade group (n = 10), and the retrograde group (n = 11). Cardiac energy metabolism was monitored by evaluation of arterial and coronary sinus (CS) lactate concentration. There was an increase of the CS lactate concentration during aortic cross-clamp period in both groups. After release of the aortic cross-clamp, there was an increase of the CS lactate concentration in the antegrade group, and a decrease of CS lactate in the retrograde group. Analysis of the patients operated with antegrade delivery of cardioplegia showed an increase of the CS lactate concentration in 9/10 patients after aortic cross-clamp release. In the retrograde group, in 8/11 patients the CS lactate concentration decreased immediately after aortic cross-clamp release. Whereas the differences in the CS lactate concentration were not significantly different, the lactate extaction immediately after aortic cross-clamp release was significantly higher for the retrograde group (p = 0.034). This can be related to a faster reconsolidation of mitochondrial oxidative phosphorylation in the retrograde group. For the other registered parameters, hemodynamic recovery of cardiac function, release of creatine kinase MB isoenzyme, and clinical outcome, there was no significant difference between the groups. Based on this study we conclude that retrograde delivery of a cold non-oxygenated cardioplegic solution results in a better preservation of myocardial energy reserve than antegrade delivery. Answer: Yes, the combination of antegrade and retrograde cardioplegia has been shown to reduce the occurrence of conduction disorders after coronary artery bypass grafting (CABG). A study demonstrated that the perioperative occurrence of conduction disorders was decreased by using a combination of antegrade controlled and retrograde continuous cardioplegia compared to using only intermittent antegrade cardioplegia (PUBMED:17060042). Another study found that the incidence of postoperative conduction disturbances (CD) in patients with retrograde continuous cold blood cardioplegia (RC-CBCP) was lower than that in patients with antegrade cardioplegia, and the CD resolved by the time of hospital discharge in 65% of the patients (PUBMED:1491193). Additionally, the use of a combination of retrograde and antegrade cardioplegia has been suggested to facilitate early recovery of left ventricular function after CABG (PUBMED:17827576). These findings indicate that the combination approach to cardioplegia delivery during CABG can be beneficial in reducing the risk of conduction defects postoperatively.
Instruction: Neonatal adrenal masses: do we have reliable criteria for differential diagnosis and expectant management? Abstracts: abstract_id: PUBMED:22555324 Neonatal adrenal masses: do we have reliable criteria for differential diagnosis and expectant management? Aim: Differential diagnosis of neonatal adrenal masses (NAM) is often based on empirical criteria. Expectant management relies on spontaneous regression, described either for NB as for adrenal hemorrhage (AH). Histology was available for biopsied cases only. Aim of the study was to correlate clinical, laboratory and imaging data of a series of NAM, to final diagnosis. Methods: Records of 23 NAM, diagnosed before or after birth, were reviewed, collecting data about: obstetrical history, clinical features, imaging, laboratory data, surgical findings, outcome. Results: M/F ratio was 1.6/1. Size of the lesion ranged from 17 to 50 mm.. Doppler sonography (PD) showed no flow in 14/23. Urinary catecholamines (UCM) were elevated in 5/23. Reduction in a mean time of 3 months occurred in 14/23. MIBG and CT scans resulted positive in 8 and 9 cases among 16 NAM, unvaried or increased at one month. Three cases developed as IV S NB; diagnosis was confirmed by CT guided biopsy; regression occurred within 20 months. Surgery was decided for 6 unchanged/increasing NAM after 3-6 months; they were all NB. Predictive value for NB was high for MIBG and CT scan and was lower for high UCM level and positive PD findings. Conclusion: Clear criteria to differentiate AH from NB are still missing. Despite spontaneous regression is thought to indirectly confirm a non neoplastic lesion, benign evolution has also been documented for NB. As histology is available only in operated or biopsied cases, we still lack a reliable set of signs for early differentiation and to reduce repeated, invasive investigations. abstract_id: PUBMED:34833425 Angular Heterotopic Pregnancy: Successful Differential Diagnosis, Expectant Management and Postpartum Care. Heterotopic pregnancy is a rare, difficult to diagnose and life-threatening pathology, which requires timely decisions made by an experienced multidisciplinary team. In this type of multiple pregnancy there are both intrauterine and ectopic pregnancies present. Its incidence increases in pregnancies conceived by assisted reproductive technology or in pregnancies with ovulation induction. This article presents an angular heterotopic pregnancy case in a 34-year-old multigravida. The patient was admitted on the 14th week of gestation due to abdominal pain on the left side with suspicion of heterotopic pregnancy. Transabdominal ultrasound and magnetic resonance imaging (MRI) were performed to confirm the diagnosis of heterotopic angular pregnancy in the left cornu of the uterus. Multidisciplinary team made a decision to keep monitoring the growth of both pregnancies by ultrasound while maternal vitals were stable. Due to intensifying abdominal pain, diagnostic laparoscopy was performed. No signs of uterine rupture were observed, and no additional surgical procedures were performed. Maternal status and ultrasonographic findings were closely monitored. The mass in the left cornu of the uterus did not change significantly and the fetal growth of the intrauterine pregnancy matched its gestational age throughout pregnancy. At the 41st week of gestation, a healthy female neonate was born via spontaneous vaginal delivery. The incidence rate of heterotopic pregnancy tends to grow due to an increased number of pregnancies after assisted reproductive technology and ovulation induction. It is important to always assess the risk factors. The main methods for diagnosing heterotopic pregnancies are ultrasonography and MRI. The main management tactics for heterotopic pregnancy include expectant management as well as surgical or medical termination of the ectopic pregnancy. Expectant management may be chosen as an option only in a limited number of cases, if the clinical situation meets the specific criteria. When applicable, expectant management may reduce the frequency of unnecessary interventions and help to prevent patients from its complications. abstract_id: PUBMED:37150282 Risks of stillbirth, neonatal mortality, and severe neonatal morbidity by birthweight centiles associated with expectant management at term. Background: Determining the optimal time of birth at term is challenging given the ongoing risks of stillbirth with increasing gestation vs the risks of significant neonatal morbidity at early-term gestations. These risks are more pronounced in small infants. Objective: This study aimed to evaluate the risks of stillbirth, neonatal mortality, and severe neonatal morbidity by comparing expectant management with delivery from 37+0 weeks of gestation. Study Design: This was a retrospective cohort study evaluating women with singleton, nonanomalous pregnancies at 37+0 to 40+6 weeks' gestation in Queensland, Australia, delivered from 2000 to 2018. Rates of stillbirth, neonatal death, and severe neonatal morbidity were calculated for &lt;3rd, 3rd to &lt;10th, 10th to &lt;25th, 25th to &lt;90th, and ≥90th birthweight centiles. The composite risk of mortality with expectant management for an additional week in utero was compared with rates of neonatal mortality and severe neonatal morbidity. Results: Of 948,895 singleton, term nonanomalous births, 813,077 occurred at 37+0 to 40+6 weeks' gestation. Rates of stillbirth increased with gestational age, with the highest rate observed in infants with birthweight below the third centile: 10.0 per 10,000 (95% confidence interval, 6.2-15.3) at 37+0 to 37+6 weeks, rising to 106.4 per 10,000 (95% confidence interval, 74.6-146.9) at 40+0 to 40+6 weeks' gestation. The rate of neonatal mortality was highest at 37+0 to 37+6 weeks for all birthweight centiles. The composite risk of expectant management rose sharply after 39+0 to 39+6 weeks, and was highest in infants with birthweight below the third centile (125.2/10,000; 95% confidence interval, 118.4-132.3) at 40+0 to 40+6 weeks' gestation. Balancing the risk of expectant management and delivery (neonatal mortality), the optimal timing of delivery for each birthweight centile was evaluated on the basis of relative risk differences. The rate of severe neonatal morbidity sharply decreased in the period between 37+0 to 37+6 and 38+0 to 38+6 weeks, particularly for infants with birthweight below the third centile. Conclusion: Our data suggest that the optimal time of birth is 37+0 to 37+6 weeks for infants with birthweight &lt;3rd centile and 38+0 to 38+6 weeks' gestation for those with birthweight between the 3rd and 10th centile and &gt;90th centile. For all other birthweight centiles, birth from 39+0 weeks is associated with the best outcomes. However, large numbers of planned births are required to prevent a single excess death. The healthcare costs and acceptability to women of potential universal policies of planned birth need to be carefully considered. abstract_id: PUBMED:35011954 Expectant Management of PPROM Improves Neonatal Outcome-A Retrospective Study of 562 Patients. Prelabor preterm rupture of the membranes (PPROM) refers to the rupture of the membranes before 37 weeks, but also before the onset of labor. Approximately 3% of pregnancies are complicated by PPROM, which is an important cause of neonatal morbidity and mortality. The aim of the study is to demonstrate the benefit of expectant management in PPROM, compared to immediate birth, defined in our study as birth in the first 48 h. We analyzed 562 pregnancies with PPROM by gestational age groups and short-term morbidities. Material And Methods: We conducted a retrospective observational analytical study, which included women with PPROM between 24 + 0 and 36 + 6 weeks. We divided the cohort into gestational age groups: group 1 gestational age (GA) between 24 and 27, group 2 GA between 28 and 31, group 3 GA between 32 and 34, group 4 GA &gt; 35 weeks. In each group, we analyzed the benefit of the latency period (established in our study as delivery after 48 h of hospitalization) in terms of short-term neonatal complications. Result: The latency period brought a significant benefit starting with GA greater than 28 weeks; therefore, in the group with GA between 28-31, the complications were significantly statistically lower, mentioning respiratory distress syndrome (no latency 100% vs. latency 85.1%) and admission to the neonatal intensive care unit (no latency 89.7% vs. latency 70.2%). In group 3, with GA between 32-34, we reached statistical significance in terms of respiratory distress syndrome (no latency 61.8% vs. latency 39%), hypoxia (no latency 50% vs. latency 31.7%) and admission to the neonatal intensive care unit (no latency 70.2% vs. latency 47.4%). Conclusion: Expectant management of pregnancies with PPROM can bring a real benefit in terms of the incidence of complications in premature infants, but this benefit depends most on the gestational age at which the membranes ruptured and the medical conduct put into practice during the latency period. abstract_id: PUBMED:33720593 Adrenal incidentaloma. Part 1. Computed tomography of adrenal incidentaloma: the possibilities and difficulties of differential diagnosis The adrenal incidentaloma is a lesion of a different etiology and found incidentally in patients who underwent a diagnostic study not about the disease of this organ. Lesions can be both hormonally inactive and hormonally active, can arise from different zones of the adrenal gland or have non-specific organ affiliation, can be benign or malignant. Computed tomography characterization of these lesions, especially the differential diagnosis of benign and malignant, is extremely important for the correct diagnosis in order to provide adequate management of the patient. The article presents the key computed tomography criteria that allow radiologist to characterize the lesion most accurately and consider appropriate diagnosis. abstract_id: PUBMED:25089043 Neonatal adrenal hematoma: various modes of presentation Neonatal adrenal hematoma is a rare condition, most frequently caused by trauma. We report three cases of adrenal hematoma admitted to the Neonatology and Neonatal Intensive Care Unit in the A. Harouchi Children's Hospital, the Ibn Rushd University Hospital in Casablanca, Morocco, over a 2-year period from January 2011 to December 2012. The average age of these patients was 5 days. The clinical presentations were diverse; the most common manifestations were intense jaundice in one case, acute adrenal insufficiency in one case, and severe anemia in the other case. Abdominal ultrasonography was used to confirm the diagnosis and monitor adrenal hemorrhage in all the patients. Analysis of clinical, laboratory, and ultrasonography data showed a favorable prognosis in all the patients. Based on these observations, we discuss the risk factors, clinical presentations, progression and management of neonatal adrenal hemorrhage. abstract_id: PUBMED:29748231 MANAGEMENT OF ENDOCRINE DISEASE: Differential diagnosis, investigation and therapy of bilateral adrenal incidentalomas. The investigation and management of unilateral adrenal incidentalomas have been extensively considered in the last decades. While bilateral adrenal incidentalomas represent about 15% of adrenal incidentalomas (AIs), they have been less frequently discussed. The differential diagnosis of bilateral incidentalomas includes metastasis, primary bilateral macronodular adrenal hyperplasia and bilateral cortical adenomas. Less frequent etiologies are bilateral pheochromocytomas, congenital adrenal hyperplasia (CAH), Cushing's disease or ectopic ACTH secretion with secondary bilateral adrenal hyperplasia, primary malignancies, myelolipomas, infections or hemorrhage. The investigation of bilateral incidentalomas includes the same hormonal evaluation to exclude excess hormone secretion as recommended in unilateral AI, but diagnosis of CAH and adrenal insufficiency should also be excluded. This review is focused on the differential diagnosis, investigation and treatment of bilateral AIs. abstract_id: PUBMED:33251165 Neonatal Adverse Outcomes of Induction and Expectant Management in Fetal Growth Restriction: A Systematic Review and Meta-Analysis. Background and Objective: Fetal growth restriction (FGR) is a pathological condition in which the fetus cannot reach its expected growth potential. When it is diagnosed as a suspected FGR, it remains an unsolved problem whether to direct induction or continue expectant management. To effectively reduce the incidence of neonatal adverse outcomes, we aimed to evaluate whether either method was associated with a lower incidence of neonatal adverse outcomes. Methods: We searched the relevant literature through the PubMed, Web of Science, and Cochrane Library from inception to January 10, 2020. We defined induction as the experimental group and expectant management as the control group. Pooled odds ratios (ORs) with 95% confidence intervals (CIs) were calculated using random-effects models owing to heterogeneity. Furthermore, we conducted a sensitivity analysis to explore the robustness of the included literature. We used the Newcastle-Ottawa scale (NOS) to evaluate the quality of the available studies. We applied the funnel plot to describe the publication bias. Additionally, subgroup analysis based on the study method, sample size, area, NOS score, Apgar score &lt;7 at 5 min, definition of suspected FGR, severity, and neonatal adverse outcomes were performed to further evaluate the differences between the induction and expectant management. Results: Our study included a total of eight articles with 6,706 patients, which consisted of four randomized controlled trials (RCTs), three retrospective cohort studies, and one prospective cohort study. The total pooled OR and 95% CI between the induction group and the expected management group was 1.38 (95% CI, 0.84-2.28) in the random model. The heterogeneity was I2 = 84%, P &lt; 0.01. The sensitivity analysis showed that the neonatal adverse outcomes of induction vs. expectant management still presented similar outcomes after omitting of any one of these studies. The funnel plot and linear regression equation showed that there was no publication bias in our study (P = 0.75). Subgroup analysis showed that induction increased the neonatal adverse outcome risks of hypoglycemia and respiratory insufficiency (ORneonatal hypoglycaemia = 8.76, 95% CI: 2.57-29.90; ORrespiratory insufficiency = 1.74, 95% CI: 1.35-2.24, respectively). However, no significant differences were observed based on the other subgroups (all P &gt; 0.05). Conclusion: Regardless of induction or expectant management of a suspected FGR, the neonatal adverse outcomes showed no obvious differences. More studies should be conducted and confounding factors should be taken into consideration to elucidate the differential outcomes of the two approaches for suspected FGR. abstract_id: PUBMED:36587447 Expectant management of advanced abdominal pregnancies: Is it justifiable? Objective: To evaluate the outcome of a case series of women with advanced abdominal pregnancies (AAP) who underwent expectant management. Study Design: A retrospective study that utilized prospective data of 46 women who were admitted for expectant management at a tertiary hospital in Durban, South Africa. All data was analyzed descriptively and presented in percentages. Results: The average period of expectant management was 27 days; thirty-three (72 %) of the 46 women were discharged from the hospital with live babies. There were 11 (24 %) neonatal deaths and two cases of stillbirths. There were no maternal deaths and morbidity was minimal. Conclusion: When diagnosed after the 24th week of gestation, an option is expectant management which includes careful patient selection, prolonged hospitalization, and close antenatal fetal and maternal surveillance to achieve fetal viability. Management by experienced clinicians and a multidisciplinary team in a tertiary institution is recommended. Informed consent needs to take into consideration the risks with interventional laparotomy, social separation from family and friends, and the guarded perinatal outcome. abstract_id: PUBMED:3326577 Adrenal haemorrhage presenting as an abdominal mass in the newborn. Adrenal haemorrhage in the newborn, an entity well recognized at autopsy and as a cause of massive haemorrhage, may present as an asymptomatic abdominal mass. Three cases of neonatal adrenal haemorrhage are described and the literature is reviewed. Most of the affected babies were of normal birthweight, full term and with no history of difficult delivery or neonatal asphyxia. Most presented with a flank mass and jaundice. The haemoglobin was not always low. Intravenous urography and sonography were the investigations of choice. The most serious condition in the differential diagnosis is neuroblastoma, but the chance of this presenting in the newborn period with normal catecholamine secretion and no evident metastases is sufficiently rare to justify expectant management. Follow-up physical examination and repeat sonography at 1 month are recommended. This interval is long enough to allow resolution of a mass caused by adrenal haemorrhage. Answer: Neonatal adrenal masses present a diagnostic challenge, as reliable criteria for differential diagnosis and expectant management are still lacking. The differential diagnosis of neonatal adrenal masses (NAM) often relies on empirical criteria, and while expectant management depends on the spontaneous regression of the mass, this phenomenon has been observed in both neuroblastoma (NB) and adrenal hemorrhage (AH), making it an unreliable indicator of the nature of the lesion (PUBMED:22555324). Histological confirmation is only available for cases that undergo biopsy or surgery, which means that many cases may not have a definitive diagnosis early on. The predictive value for NB is high for metaiodobenzylguanidine (MIBG) and computed tomography (CT) scans, but is lower for elevated urinary catecholamine levels and positive power Doppler findings (PUBMED:22555324). In the context of adrenal incidentalomas, CT imaging plays a crucial role in characterizing lesions and aiding in the differential diagnosis between benign and malignant masses (PUBMED:33720593). However, the differentiation between various causes of bilateral adrenal incidentalomas, such as metastasis, primary bilateral macronodular adrenal hyperplasia, and bilateral cortical adenomas, among others, requires careful evaluation of hormonal excess and other factors (PUBMED:29748231). For neonatal adrenal hematoma, which is a rare condition, the clinical presentations can vary, and abdominal ultrasonography is typically used to confirm the diagnosis and monitor the condition. The prognosis is generally favorable, and the management of neonatal adrenal hemorrhage involves consideration of risk factors, clinical presentations, and progression (PUBMED:25089043). In summary, while there are imaging and laboratory tests that can aid in the diagnosis of neonatal adrenal masses, there is still no universally reliable set of signs that can definitively differentiate between benign and malignant causes or between NB and AH without histological confirmation. This uncertainty often necessitates repeated and invasive investigations, and in some cases, expectant management may be chosen while closely monitoring the lesion's characteristics and the patient's clinical status (PUBMED:22555324).
Instruction: The urological personality: is it unique? Abstracts: abstract_id: PUBMED:26675409 A Spectrum of Urological Emergency Reported at a Tertiary Care Teaching Hospital: An Experience. Introduction: A substantial number of urology admissions constitute of emergency cases and sizeable proportion are urology emergency referral cases. There have been few studies conducted on this aspect but there lie geographical variations in the presentations of urological emergencies. Hence, this study was conducted to analyse various urological emergency presentations and their interventions. Objective: To estimate the proportion of urological emergencies and analyse the different type of urological emergencies with the required management. Materials And Methods: A hospital based observational descriptive study was undertaken in our institution over a period of one year. Results: A total of 11,139 cases were admitted in the urology department; of which a significant percentage (21.05%) was from emergency room. Majority of cross references came from the Department of Medicine (22.59 %). Renal colic (24.2%) happened to be the most common presentation in emergency room followed by acute urinary retention (14.7%). Among referred cases, hematuria was the leading presentation with 17.75% followed by traumatic catheterization (11.97%). Most common urological intervention in referred cases was supra pubic catheterization (27.20%) while it was percutaneous nephrostomy (32.78%) in directly admitted cases. Conclusion: Urological emergencies constitute a significant proportion of total urology admissions (27.18%). The most common non-traumatic injury was renal colic whereas traumatic was traumatic catheterization in our study. Most common surgical intervention in direct admitted emergency cases was percutaneous nephrostomy whereassuprapubic catheterization in within hospital emergency referral cases. abstract_id: PUBMED:29641868 The Indices of Common and Early Detection of Urological Pathology as Criteria of Efficiency of Urological Care Rendering The comparative statistical and logical analysis of curves of dynamics of common and early detection of urological pathology was implemented with purpose of evaluating efficiency of functioning of regional urological service in the format of developed three-level system of medical care as exemplified by the Voronezh oblast. The obtained statistical data testify a significant increasing of common detection and detection at early stages of urolithiasis, benign hyperplasia and prostate cancer. This outcome is an evidence of partial or complete realization of transition from model of medical care by appealability to model by detection at the expense of implementation of developed three-level system of medical care of population in the region. abstract_id: PUBMED:35777716 Exosomes in urological diseases - Biological functions and clinical applications. Exosomes are extracellular vesicles with a variety of biological functions that exist in various biological body fluids and exert their functions through proteins, nucleic acids, lipids, and metabolites. Recent discoveries have revealed the functional and biomarker roles of miRNAs in urological diseases, including benign diseases and malignancies. Exosomes have several uses in the diagnosis, treatment, and monitoring of urological diseases, especially cancer. Proteins and nucleic acids can be used as alternative biomarkers for detecting urological diseases. Additionally, exosomes can be detected in most body fluids, thereby avoiding pathogenesis. More importantly, for urological tumors, exosomes display a higher sensitivity than circulating tumor cells and tumor-derived DNA in body fluid biopsies because of their low immunogenicity and high stability. These advantages have made it a research hotspot in recent years. In this review, we focus on the biological characteristics and functions of exosomes and summarize their advantages and the latest progress in the diagnosis and treatment of urological diseases. abstract_id: PUBMED:34159110 The past, present, and future of urological quality improvement collaboratives. Surgical quality improvement collaboratives (QIC) have been established across the nation in numerous specialties. These QICs have shown efficacy in improving the quality, safety and value of care delivered to patients with a wide range of medical conditions. In recent years, urological QICs have emerged, including regional collaboratives such as the Michigan Urological Surgical Improvement Collaborative (MUSIC) and Pennsylvania Urologic Regional Collaborative (PURC), as well as the national American Urological Association Quality Registry Program (AQUA). These urological collaboratives, developed with an initial focus on prostate cancer, have demonstrated an ability to accurately measure prostate cancer outcomes, compare these outcomes among providers and institutions, and enact change among both patients and providers to optimize outcomes for men with prostate cancer. Physician-led regional collaboratives may be uniquely positioned to respond quickly to the rapidly-evolving healthcare landscape and enact practice and provider-level changes when appropriate. This review describes the historical background, current structure and function, and potential future directions of these urologic QICs. abstract_id: PUBMED:27819017 Trends in Inpatient Urological Surgery Practice Patterns. Introduction: As the nation's population ages and the number of practicing urologists per capita declines, characterization of practice patterns is essential to understand the current state of the urological workforce and anticipate future needs. Accordingly, we examined trends in adult inpatient urological surgery practice patterns over a five-year period. Methods: We used the Nationwide Inpatient Sample (NIS) data from 2005 through 2009 to identify both surgeons and urological surgeries. We classified the latter into 1 of 7 clinical domains (Endourology &amp; Stone Disease, Incontinence, Urogenital Reconstruction, Urologic Oncology, Benign Prostate, Renal Transplant, and Other Urological Procedures). For each urological surgeon, three parameters were determined for each year: 1) Case-diversity (the number of distinct urological clinical domains in which they performed ≥2 procedures/year); 2) Subspecialty (the predominant clinical domain of cases that each surgeon performed); and 3) Subspecialty-focus (the proportion of a surgeon's total urological cases/year that belonged to their assigned clinical domain). We examined trends in these metrics over a five-year period, and compared results between urban and rural practice settings. Results: We analyzed data for 2,237 individual surgeons performing 144,138 inpatient surgeries. Over time, urologist's practice patterns evolved toward lower case-diversity (p&lt;0.001) and greater subspecialty-focus (p&lt;0.001). These trends were more pronounced for surgeons practicing in urban versus rural practice settings (p-values &lt;0.05). Conclusions: At a national level, urologists' inpatient surgical practice patterns are narrowing, with less case-diversity and higher subspecialty-focus. These trends are even more prominent among urologists in urban, compared with rural, practice settings. abstract_id: PUBMED:26568885 The standard of urological consultation of patients qualified for renal transplant - a review. Urological consultation is an important step in the procedure of a patient's preparation before placing him/her on a waiting list for a renal transplant. Urological work-up aims to diagnose, treat, and optimize any preexisting urological disease. In the present paper we present the review of the literature together with the authors' conclusions based on literature and their experience. There is not enough data in current literature and urology manuals on the adequate sequence of the urological management with patients qualified for renal transplant and the literature needs an update. This study presents the crucial steps of the qualification and emphasizes the urge for a more standardized urological approach in patients qualified for a kidney transplantation. abstract_id: PUBMED:35967737 The application of 3D bioprinting in urological diseases. Urologic diseases are commonly diagnosed health problems affecting people around the world. More than 26 million people suffer from urologic diseases and the annual expenditure was more than 11 billion US dollars. The urologic cancers, like bladder cancer, prostate cancer and kidney cancer are always the leading causes of death worldwide, which account for approximately 22% and 10% of the new cancer cases and death, respectively. Organ transplantation is one of the major clinical treatments for urological diseases like end-stage renal disease and urethral stricture, albeit strongly limited by the availability of matching donor organs. Tissue engineering has been recognized as a highly promising strategy to solve the problems of organ donor shortage by the fabrication of artificial organs/tissue. This includes the prospective technology of three-dimensional (3D) bioprinting, which has been adapted to various cell types and biomaterials to replicate the heterogeneity of urological organs for the investigation of organ transplantation and disease progression. This review discusses various types of 3D bioprinting methodologies and commonly used biomaterials for urological diseases. The literature shows that advances in this field toward the development of functional urological organs or disease models have progressively increased. Although numerous challenges still need to be tackled, like the technical difficulties of replicating the heterogeneity of urologic organs and the limited biomaterial choices to recapitulate the complicated extracellular matrix components, it has been proved by numerous studies that 3D bioprinting has the potential to fabricate functional urological organs for clinical transplantation and in vitro disease models. abstract_id: PUBMED:26328181 Retrospective evaluation of urological admissions to emergency service of a training and research hospital. Objective: Many patients consult emergency services with urological complaints. The aim of this study was to investigate the epidemiology, clinical presentation and treatments of urological emergency cases in a training and research hospital. Material And Methods: We retrospectively evaluated urological emergency patients referred to the emergency unit between July 2012 and July 2013 according to age, gender, affected organ, radiological imaging techniques and treatment. Results: Among 141.844 emergency cases, 3.113 (2.19%) were urological emergencies and 53.2% of the patients were male (mean age: 49.1), and 46.8% of them were female (median age: 42.8). The most frequent illness was genitourinary infection constituting 41.2% of the cases followed by renal colic (36.9%). Among the urological emergencies 483 (15.5%) patients were hospitalized and 152 surgical operations were performed. The mostly performed procedure was the placement of a suprapubic catheter in 34 patients constituting (22.3%) of the cases. Totally eight patients were referred to another experienced health center due to different reasons. Conclusion: Most of the urological emergency patients do not require emergency surgical interventions however, timely identification and management of urological emergencies with in-depth clinical evaluation are important to prevent late complications. Therefore the doctors working in emergency services must be heedful of urological emergencies. abstract_id: PUBMED:23728759 Urological complications of inguinal hernia surgery. Background And Aims: A systematic review of the literature is presented with regard to urological complications resulting from inguinal hernia surgery. Considering the amount of inguinal hernia operations performed, the resulting complications, which may be urological in presentation, have potential late irreversible and medico-legal implications. Methods And Results: A Pubmed search of 'urological' 'complications' and 'inguinal hernia surgery' was carried out and clinical practice was also taken into consideration. Discussion: Approximately 75% of hernias occur in the groin; two-third of these are indirect and about one-third direct. Most of these repairs are carried out by the general surgeons and any complication, including urological, are often initially managed by the operating general surgeon. Often a urological opinion is sought late for conditions which may be reversible. We present potential urological complications, their presenting features and management. Conclusion: Recognition, timely referral and appropriate treatment of urological complications after hernia surgery are necessary to avoid potential consequences and long-term morbidity. abstract_id: PUBMED:38149615 Five historical innovations that have shaped modern urological surgery. Throughout history, many innovations have contributed to the development of modern urological surgery, improving patient outcomes and expanding the range of treatment options available to patients. This article explores five key historical innovations that have shaped modern urological surgery: External shockwave lithotripsy, transurethral resection of prostate, cystoscope, perioperative prostate-specific antigen and robotic surgery. The selection of innovations for inclusion in this article was meticulously determined through expert consensus and an extensive literature review. We will review the development, impact and significance of each innovation, highlighting their contributions to the field of urological surgery and their ongoing relevance in contemporary and perioperative practice. Answer: The provided abstracts do not directly address the concept of a "urological personality" or suggest that there is a unique personality type associated with urologists or the field of urology. The abstracts focus on various aspects of urological practice, including emergency presentations and interventions (PUBMED:26675409), the efficiency of urological care (PUBMED:29641868), the role of exosomes in urological diseases (PUBMED:35777716), quality improvement collaboratives in urology (PUBMED:34159110), trends in inpatient urological surgery (PUBMED:27819017), the standard of urological consultation for renal transplant patients (PUBMED:26568885), the application of 3D bioprinting in urological diseases (PUBMED:35967737), epidemiology and treatment of urological emergencies (PUBMED:26328181), complications from inguinal hernia surgery (PUBMED:23728759), and historical innovations in urological surgery (PUBMED:38149615). The question of whether there is a unique "urological personality" would likely require a different type of study, potentially involving psychological assessments or surveys of urologists to determine if there are common personality traits among professionals in this field. Such a study would explore aspects of personality, such as traits, behaviors, and attitudes, that might be prevalent among urologists or influenced by the nature of their work. However, none of the abstracts provided offer information on this topic. Therefore, based on the given abstracts, it is not possible to conclude whether there is a unique urological personality.
Instruction: Minimally invasive surgery for endometrial cancer: does operative start time impact surgical and oncologic outcomes? Abstracts: abstract_id: PUBMED:24945591 Minimally invasive surgery for endometrial cancer: does operative start time impact surgical and oncologic outcomes? Objective: Recent literature in ovarian cancer suggests differences in surgical outcomes depending on operative start time. We sought to examine the effects of operative start time on surgical outcomes for patients undergoing minimally invasive surgery for endometrial cancer. Methods: A retrospective review was conducted of patients undergoing minimally invasive surgery for endometrial cancer at a single institution between 2000 and 2011. Surgical and oncologic outcomes were compared between patients with an operative start time before noon and those with a surgical start time after noon. Results: A total of 380 patients were included in the study (245 with start times before noon and 135 with start times after noon). There was no difference in age (p=0.57), number of prior surgeries (p=0.28), medical comorbidities (p=0.19), or surgical complexity of the case (p=0.43). Patients with surgery starting before noon had lower median BMI than those beginning after noon, 31.2 vs. 35.3 respectively (p=0.01). No significant differences were observed for intraoperative complications (4.4% of patients after noon vs. 3.7% of patients before noon, p=0.79), estimated blood loss (median 100 cc vs. 100 cc, p=0.75), blood transfusion rates (7.4% vs. 8.2%, p=0.85), and conversion to laparotomy (12.6% vs. 7.4%, p=0.10). There was no difference in operative times between the two groups (198 min vs. 216.5 min, p=0.10). There was no association between operative start time and postoperative non-infectious complications (11.9% vs. 11.0%, p=0.87), or postoperative infections (17.8% vs. 12.3%, p=0.78). Length of hospital stay was longer for surgeries starting after noon (median 2 days vs. 1 day, p=0.005). No differences were observed in rates of cancer recurrence (12.6% vs. 8.8%, p=0.39), recurrence-free survival (p=0.97), or overall survival (p=0.94). Conclusion: Our results indicate equivalent surgical outcomes and no increased risk of postoperative complications regardless of operative start time in minimally invasive endometrial cancer staging, despite longer length of hospital stay for surgeries beginning after noon. abstract_id: PUBMED:34663530 Oncologic safety of minimally invasive surgery in non-endometrioid endometrial cancer. Objective: This study was aimed to compare the oncologic outcomes of patients with non-endometrioid endometrial cancer who underwent minimally invasive surgery with the outcomes of patients who underwent open surgery. Method: This is a retrospective, multi-institutional study of patients with non-endometrioid endometrial cancer who were surgically staged by either minimally invasive surgery or open surgery. Oncologic outcomes of the patients were compared according to surgical approach. Results: 113 patients met the inclusion and exclusion criteria. 57 underwent minimally invasive surgery and 56 underwent open surgery. Patients who underwent minimally invasive surgery had smaller tumors (median size, 3.3 vs. 5.2%, p = 0.0001) and a lower lymphovascular space invasion rate (29.8% vs. 48.2%, p = 0.045). In the overall population, the numbers and rate of recurrence were significantly higher in the open surgery group (p = 0.016). In multivariate analysis, disease stage and tumor size were associated with DFS in contrast to surgical procedure. Conclusion: Minimally invasive surgery showed similar survival outcomes when compared to open surgery in non-endometrioid endometrial cancer patients, irrespective of disease stage. When minimally invasive surgery is managed by expert surgeons, non-endometrioid histological subtypes should not be considered a contraindication for minimally invasive surgery. abstract_id: PUBMED:28336363 Does Surgical Teaching Take Time? Resident Participation in Minimally Invasive Hysterectomy for Endometrial Cancer. Study Objective: To determine the association between resident involvement and operative time for minimally invasive surgery (MIS) for endometrial cancer. Design: A retrospective cohort study (Canadian Task Force classification II-2). Setting: An integrated health care system in Northern California. Patients: A total of 1433 women who underwent MIS for endometrial cancer and endometrial intraepithelial neoplasia from January 2009 to January 2014. Interventions: Resident participation in 430 of 688 laparoscopic cases (62%) and 341 of 745 robotic cases (46%). Measurements And Main Results: The primary outcome was the impact of resident involvement on surgical time. When residents were involved in laparoscopic and robotic surgery, there was an increase of 61 minutes (median operative time, 186 vs 125 minutes; p &lt; .001) and 31 minutes (median operative time, 165 vs 134 minutes; p &lt; .001), respectively. Resident participation was associated with increased operative times in all levels of surgical complexity from hysterectomy alone to hysterectomy with pelvic and para-aortic lymph node dissection. Resident participation was also associated with increased major intraoperative complications (3.4% vs 1.8%, p = .02) as well as major postoperative complications (6.4% vs 3.8%, p = .003). Conclusion: The presence of a resident was associated with a 32% increase in operative time for minimally invasive cases in gynecologic oncology for endometrial cancer. Because of the retrospective nature, we cannot infer causality of operative outcomes because residents were also involved in more high-risk patients and complex cases. For health care systems using surgical metrics, there may be a need to allocate more time for resident involvement. abstract_id: PUBMED:34682910 Minimally Invasive Surgery and Surgical Volume-Specific Survival and Perioperative Outcome: Unmet Need for Evidence in Gynecologic Malignancy. This study examined the effect of hospital surgical volume on oncologic outcomes in minimally invasive surgery (MIS) for gynecologic malignancies. The objectives were to assess survival outcomes related to hospital surgical volume and to evaluate perioperative outcomes and examine non-gynecologic malignancies. Literature available from the PubMed, Scopus, and the Cochrane Library databases were systematically reviewed. All surgical procedures including gynecologic surgery with hospital surgical volume information were eligible for analysis. Twenty-three studies met the inclusion criteria, and nine gastro-intestinal studies, seven genitourinary studies, four gynecological studies, two hepatobiliary studies, and one thoracic study were reviewed. Of those, 11 showed a positive volume-outcome association for perioperative outcomes. A study on MIS for ovarian cancer reported lower surgical morbidity in high-volume centers. Two studies were on endometrial cancer, of which one showed lower treatment costs in high-volume centers and the other showed no association with perioperative morbidity. Another study examined robotic-assisted radical hysterectomy for cervical cancer and found no volume-outcome association for surgical morbidity. There were no gynecologic studies examining the association between hospital surgical volume and oncologic outcomes in MIS. The volume-outcome association for oncologic outcome in gynecologic MIS is understudied. This lack of evidence calls for further studies to address this knowledge gap. abstract_id: PUBMED:35142980 Peri-operative and survival outcomes analysis of patients with endometrial cancer managed by three surgical approaches: a long-term Bulgarian experience. The study aim was to assess the peri-operative, oncologic, and survival outcomes for patients with endometrial cancer (EC) managed by abdominal hysterectomy (AH), laparoscopic hysterectomy (LH), or robotic hysterectomy (RH) approaches at premier centers in Bulgaria. We analyzed histologically diagnosed EC cases operated via any of the three surgical methods during 2008-2019. Data analyses included patients and tumor characteristics, peri-operative outcomes, and disease status. We grouped FIGO stages I and II to represent early-stage EC and to investigate their survival. Kaplan-Meier and Cox regression analyses were performed to determine disease-free survival (DFS) and overall survival (OS). Consecutive 917 patients (AH = 466; LH = 60, RH = 391) formed the basis of study analyses. Most of demographics and tumor characteristics of the patients were comparable across the groups except few minor variations (e.g., LH/RH cases were younger, heavier, more stage IA, endometrioid, G1, low-risk group). LH and RH group cases had significantly lower operative time than AH (p &lt; 0.001), shorter hospital length-of-stay (p &lt; 0.001), higher post-operative Hgb (p &lt; 0.001). RH cases had fewer blood transfusions than AH or LH (p &lt; 0.001). Cox multivariate analyses indicate that OS was not influenced by the type of surgical approach. Despite the fact that the DFS in "early-stage" EC is significantly better in AH group than RH, the type of surgery (i.e., AH, LH, or RH) for "all stages" is insignificant factor for DFS. With our long-term experience, minimally invasive surgical approach resulted in superior peri-operative, oncologic, and survival outcomes. Specifically, RH is not only safe in terms of post-operative results, but also for mortality and oncologic rates. abstract_id: PUBMED:33268212 Robot-assisted surgery for women with endometrial cancer: Surgical and oncologic outcomes within a Belgium gynaecological oncology group cohort. Objective: To evaluate surgical and oncologic outcomes of patients treated by robot-assisted surgery for endometrial cancer within the Belgium Gynaecological Oncology Group (BGOG). Study Design: We performed a retrospective analysis of women with clinically Stage I endometrial cancer who underwent surgical treatment from 2007 to 2018 in five institutions of the BGOG group. Results: A total of 598 consecutive women were identified. The rate of conversion to laparotomy was low (0.8%). The mean postoperative Complication Common Comprehensive Index (CCI) score was 3.4. The rate of perioperative complications did not differ between age groups, however the disease-free survival was significantly lower in patients over 75 years compared to patients under 65 years of age (p=0.008). Per-operative complications, conversion to laparotomy rate, post-operative hospital stay, CCI score and disease-free survival were not impacted by increasing BMI. Conclusion: Robot-assisted surgery for the surgical treatment of patients suffering from early-stage endometrial cancer is associated with favourable surgical and oncologic outcomes, particularly for unfavourable groups such as elderly and obese women, thus permitting a low morbidity minimally-invasive surgical approach for the majority of patients in expert centres. abstract_id: PUBMED:34474173 Five-Year Experience in the Surgical Treatment of Endometrial Cancer: Comparing Laparotomy with Robotic and Minimally Invasive Hysterectomy. Objectives: To compare surgical and oncological outcomes in the treatment of endometrial cancer between laparotomy and minimally invasive surgery. The secondary objective was to determine which MIS approach was the most beneficial. Methods: This was a single-centre retrospective review of all endometrial cancer surgeries performed between November 1, 2012 and October 31, 2017 in a gynaecologic oncology unit of a university hospital. Descriptive statistics were used to compare histopathologic results and oncological outcomes, and Kaplan-Meier estimates were used to compare survival. Results: A total of 735 cases were reviewed. The majority of patients (77%) underwent either laparotomy (35%) or robotic-assisted hysterectomy (42%); the remaining patients underwent total laparoscopic hysterectomy (12%) or a laparoscopic-assisted vaginal hysterectomy (8.7%). There was a statistically significant overall survival benefit (P = 0.02), a shorter hospital stay (P &lt; 0.0001), and fewer early surgical complications (&lt;30 d; P = 0.0002), as well as a survival benefit in elderly patients (&gt;70 y) in the robotic-assisted hysterectomy group (P = 0.043) than the laparotomy group. Operating time was shorter in the laparotomy group (P &lt; 0.0001). Recurrence rates in stage 1 low-risk disease were similar between groups. Conclusion: Minimally invasive surgical approaches, particularly robotic surgery, do not compromise oncologic outcomes, especially for early-stage low-risk disease. In addition, these approaches are associated with fewer early surgical complications and shorter hospital stay, with significantly more same-day discharges. Overall survival and survival in a subgroup of elderly patients were significantly better in the robotic-assisted hysterectomy group. abstract_id: PUBMED:23231801 The technicity index: a useful performance indicator for minimally invasive surgery in the management of endometrial cancer. Introduction: Minimally invasive surgery is the surgical approach of choice for the management of numerous non-oncologic and oncologic conditions. The Technicity Index (TI) is defined in this study as the ratio of minimally invasive hysterectomies to abdominal hysterectomies performed during the studied time period. The TI indicates, in this context, the level of surgical technique and can thus be used as a health care indicator. The objective of this study was to evaluate whether the TI of the surgical management of endometrial cancer had significantly increased over three different time periods in our gynaecologic oncology division. Methods: The TI was calculated from the data recorded in a surgical management system from July 1999 to June 2011. The study period was divided into three equal time periods (P1: July 1999 to June 2003; P2: July 2003 to June 2007; P3: July 2007 to June 2011). The z-score was used to compare the TI in each period, with P = 0.05 indicating the level of significance. Results: The TIs in the first two time periods were comparable (P = 0.96); however, the difference between each of the first two time periods and the third (P1 vs. P3 and P2 vs. P3) was statistically significant (P &lt; 0.001 for each). Conclusion: The TI for the surgical management of endometrial cancer increased more significantly from 2007 to 2011 than in the previous eight years. The TI could be used as a performance indicator and as an incentive to promote minimally invasive surgery in the surgical management of endometrial cancer. abstract_id: PUBMED:32103792 Surgical and oncologic outcomes of minimally invasive surgery for stage I high-grade endometrial cancer. Objective: To evaluate the prevalence and outcomes of minimally invasive surgery for stage I high grade endometrial cancer. We hypothesized that route of surgery is not associated with survival. Materials: Patients diagnosed between 2010 and 2014, with stage I grade 3 endometrioid, serous, clear cell and carcinosarcoma endometrial carcinoma, who underwent hysterectomy with lymphadenectomy were drawn from the National Cancer Database. Patients converted to open surgery were excluded. Overall survival was assessed with Kaplan-Meier curves and compared with the log-rank test. A Cox model was constructed to control for confounders. Results: A total of 12852 patients were identified. The rate of minimally invasive surgery was 62.2%. An increase in the use between 2010 and 2014 was noted (p &lt; 0.001). Open surgery was associated with longer hospital stay (median 3 vs 1 day, p &lt; 0.001), higher 30-day unplanned re-admission rate (4.5% vs 2.4%, p &lt; 0.001) and 30-day mortality (0.6% vs 0.3%, p = 0.008). There was no difference in overall survival between patients who had open or minimally invasive surgery, p = 0.22; 3-yr overall survival rates were 83.7% and 84.4% respectively. After controlling for patient age, tumor histology, substage, type of insurance, type of reporting facility, receipt of radiation therapy and chemotherapy, extent of lymphadenectomy, the presence of comorbidities and personal history of another tumor, minimally invasive surgery was not associated with a worse survival (hazard ratio: 1.06, 95% confidence interval: 0.97, 1.15). Conclusions: Minimally invasive surgery for patients with stage I high grade endometrial cancer, was associated with superior short-term outcomes with no difference in overall survival noted. abstract_id: PUBMED:20144471 Surgical outcomes of robotic-assisted surgical staging for endometrial cancer are equivalent to traditional laparoscopic staging at a minimally invasive surgical center. Objective: To compare peri- and post-operative complications and outcomes of robotic-assisted surgical staging with traditional laparoscopic surgical staging for women with endometrial cancer. Methods: A retrospective chart review of cases of women undergoing minimally invasive total hysterectomy and pelvic and para-aortic lymphadenectomy by a robotic-assisted approach or traditional laparoscopic approach was conducted. Major intraoperative complications, including vascular injury, enterotomy, cystotomy, or conversion to laparotomy, were measured. Secondary outcomes including operative time, blood loss, transfusion rate, number of lymph nodes retrieved, and the length of hospitalization were also measured. Results: 275 cases were identified-102 patients with robotic-assisted staging and 173 patients with traditional laparoscopic staging. There was no significant difference in the rate of major complications between groups (p=0.13). The mean operative time was longer in cases of robotic-assisted staging (237 min vs. 178 min, p&lt;0.0001); however, blood loss was significantly lower (109 ml vs. 187 ml, p&lt;0.0001). The mean number of lymph nodes retrieved were similar between groups (p=0.32). There were no significant differences in the time to discharge, re-admission, or re-operation rates between the two groups. Conclusion: Robotic-assisted surgery is an acceptable alternative to laparoscopy for minimally invasive staging of endometrial cancer. In addition to the improved ease of operation, visualization, and range of motion of the robotic instruments, robotic surgery results in a lower mean blood loss, although longer operative time. More data are needed to determine if the rates of urinary tract injuries and other surgical complications can be reduced with the use of robotic surgery. Answer: The study identified in abstract PUBMED:24945591 investigated whether the operative start time impacts surgical and oncologic outcomes for patients undergoing minimally invasive surgery for endometrial cancer. The retrospective review included 380 patients, with 245 surgeries starting before noon and 135 after noon. The study found no significant differences in intraoperative complications, estimated blood loss, blood transfusion rates, conversion to laparotomy, operative times, postoperative non-infectious complications, or postoperative infections between the two groups. However, the length of hospital stay was longer for surgeries starting after noon. Importantly, there were no differences observed in rates of cancer recurrence, recurrence-free survival, or overall survival between the groups. The conclusion drawn from these results is that equivalent surgical outcomes and no increased risk of postoperative complications are associated with operative start time in minimally invasive endometrial cancer staging, despite the longer length of hospital stay for surgeries beginning after noon.
Instruction: Adhesive small bowel obstruction in children: should we still operate? Abstracts: abstract_id: PUBMED:33511902 Adhesive Small Bowel Obstruction and the six w's: Who, How, Why, When, What, and Where to diagnose and operate? Background And Aims: Approximately 75% of patients admitted with small bowel obstruction have intra-abdominal adhesions as their cause (adhesive small bowel obstruction). Up to 70% of adhesive small bowel obstruction cases, in the absence of strangulation and bowel ischemia, can be successfully treated with conservative management. However, emerging evidence shows that surgery performed early during the first episode of adhesive small bowel obstruction is highly effective. The objective of this narrative review is to summarize the current evidence on adhesive small bowel obstruction management strategies. Materials And Methods: A review of the literature published over the last 20 years was performed to assess Who, hoW, Why, When, What, and Where diagnose and operate on patients with adhesive small bowel obstruction. Results: Adequate patient selection through physical examination and computed tomography is the key factor of the entire management strategy, as failure to detect patients with strangulated adhesive small bowel obstruction and bowel ischemia is associated with significant morbidity and mortality. The indication for surgical exploration is usually defined as a failure to pass contrast into the ascending colon within 8-24 h. However, operative management with early adhesiolysis, defined as operative intervention on either the calendar day of admission or the calendar day after admission, has recently shown to be associated with an overall long-term survival benefit compared to conservative management. Regarding the surgical technique, laparoscopy should be used only in selected patients with an anticipated single obstructing band, and there should be a low threshold for conversion to an open procedure in cases of high risk of bowel injuries. Conclusion: Although most adhesive small bowel obstruction patients without suspicion of bowel strangulation or gangrene are currently managed nonoperatively, the long-term outcomes following this approach need to be analyzed in a more exhaustive way, as surgery performed early during the first episode of adhesive small bowel obstruction has shown to be highly effective, with a lower rate of recurrence. abstract_id: PUBMED:27022449 Adhesive small bowel adhesions obstruction: Evolutions in diagnosis, management and prevention. Intra-abdominal adhesions following abdominal surgery represent a major unsolved problem. They are the first cause of small bowel obstruction. Diagnosis is based on clinical evaluation, water-soluble contrast follow-through and computed tomography scan. For patients presenting no signs of strangulation, peritonitis or severe intestinal impairment there is good evidence to support non-operative management. Open surgery is the preferred method for the surgical treatment of adhesive small bowel obstruction, in case of suspected strangulation or after failed conservative management, but laparoscopy is gaining widespread acceptance especially in selected group of patients. "Good" surgical technique and anti-adhesive barriers are the main current concepts of adhesion prevention. We discuss current knowledge in modern diagnosis and evolving strategies for management and prevention that are leading to stratified care for patients. abstract_id: PUBMED:22253960 Role of water soluble contrast agents in assigning patients to a non-operative course in adhesive small bowel obstruction. Objectives: Adhesive small bowel obstruction (SBO) is a common surgical emergency. It is estimated that at least 60% of SBO are due to post-operative adhesions. Water soluble contrast agents (gastrografin) have been used to identify patients who might be treated non-operatively. This study aims to determine the role of gastrografin in adhesive intestinal obstruction patients. Methods: In this prospective study, 27 patients admitted between 1(st) August 2004 and 1(st) July 2006 with clinical signs suggestive of postoperative adhesive SBO met the inclusion criteria. After intravenous hydration, nasogastric tube insertion and complete suctioning of the gastric fluid, 100 ml of gastrograsfin was given and plain abdominal radiography was taken 6 hours and 24 hours if the contrast is not seen in the colon. Those in whom the contrast reached the colon in 24 hours were considered to have partial SBO and started oral intake. If gastrografin failed to reach the colon in 24 hours and the patient did not improve in the following 24 hours, laparotomy was performed. Results: Conservative treatment was successful in 31 cases (91%) and 3 (9%) required operation. Patients treated conservatively had short hospital stay (mean=4 days) and tolerated oral feeding with no morbidity or mortality. Conclusion: Oral gastrografin helps in the management of patients with postoperative adhesive SBO. abstract_id: PUBMED:37405103 Predictors for success of non-operative management of adhesive small bowel obstruction. Background: Majority of adhesive small bowel obstruction (SBO) cases can be managed non-operatively. However, a proportion of patients failed non-operative management. Aim: To evaluate the predictors of successful non-operative management in adhesive SBO. Methods: A retrospective study was performed for all consecutive cases of adhesive SBO from November 2015 to May 2018. Data collated included basic demographics, clinical presentation, biochemistry and imaging results and management outcomes. The imaging studies were independently analyzed by a radiologist who was blinded to the clinical outcomes. The patients were divided into group A operative (including those that failed initial non-operative management) and group B non-operative for analysis. Results: Of 252 patients were included in the final analysis; group A (n = 90) (35.7%) and group B (n = 162) (64.3%). There were no differences in the clinical features between both groups. Laboratory tests of inflammatory markers and lactate levels were similar in both groups. From the imaging findings, the presence of a definitive transition point [odds ratio (OR) = 2.67, 95% confidence interval (CI): 0.98-7.32, P = 0.048], presence of free fluid (OR = 2.11, 95%CI: 1.15-3.89, P = 0.015) and absence of small bowel faecal signs (OR = 1.70, 95%CI: 1.01-2.88, P = 0.047) were predictive of the need of surgical intervention. In patients that received water soluble contrast medium, the evidence of contrast in colon was 3.83 times predictive of successful non-operative management (95%CI: 1.79-8.21, P = 0.001). Conclusion: The computed tomography findings can assist clinicians in deciding early surgical intervention in adhesive SBO cases that are unlikely to be successful with non-operative management to prevent associated morbidity and mortality. abstract_id: PUBMED:30921590 Complete ileal transection: A rare complication of adhesive bowel disease-A case report. Introduction: Adhesive bowel disease is a sequel of abdominal surgeries and has come forth as the leading cause of small bowel obstruction (SBO). It may also lead to chronic abdomino-pelvic pain and infertility in women. Adhesions form as a result of peritoneal irritation, giving rise to an inflammatory process. The treatment of choice for adhesive small bowel obstruction remains controversial, and relies on the surgeon's judgment. Case Presentation: Being described is an unusual case of a young girl who developed complete ileal transection secondary to postoperative adhesive band formation. Conclusions: Adhesive small bowel disease (ASBD) and small bowel obstruction are a major cause of patient distress and financial and clinical practice burden. It is usually a diagnosis of exclusion and leads to delayed treatment and morbidity. Many preventive techniques and materials are being tested to prevent adhesion formation. A lot more needs to be done to prevent, in the first place, the formation of adhesions or treat this condition to reduce such morbid outcomes as strangulation or transection of gut. abstract_id: PUBMED:27041815 Laparoscopic Management of Adhesive Small Bowel Obstruction. Introduction: Adhesions are the reason for bowel obstruction in 80% of the cases. In well selected patients the adhesive ileus laparoscopic treatment has multiple advantages which include the shorter hospitalization period, earlier food taking, and less postoperative morbidity rate. Case Report: Here we have a patient in the age of 35 hospitalized at the clinic due to occlusive symptoms. Two years before an opened appendectomy had been performed on him. He underwent the treatment of exploration laparoscopy and laparoscopic adhesiolysis. Dilated small bowel loops connected with the anterior abdominal wall in the ileocecal region by adhesions were found intraoperatively and then resected harmonically with scalpel. One strangulation around which a small bowel loop was wrapped around was found and dissected. Postoperative course was normal. abstract_id: PUBMED:32124812 Role and place of ultrasonography in diagnostics of adhesive intestinal obstruction. Objective: The aim: To study the possibilities of ultrasound in adhesive ileus obstruction diagnosis and indications for surgical treatment determination. Patients And Methods: Materials and methods: The results of diagnosis and treatment of 60 patients with acute adhesive intestinal obstruction with comparative use of X-ray and ultrasound methods were analyzed. Results: Results: The use of a standard X-ray examination of the abdominal organs upon admission to the hospital allowed the diagnosis of acute adhesions of small bowel obstruction in only 32 (53.3%) patients. Transabdominal ultrasonography allowed the establishment of adhesive ileus during hospitalization in 53 (88.3%) patients and also in 7 (11.7%) during a dynamic examination 2-3 hours after admission. Due to the lack of positive effect from the conducted conservative therapy relying on the data of ultrasound studies, 33 (55%) patients were operated on. During X-ray examination, only 10 (30.3%) of these patients demonstrated negative X-ray dynamics, and with an ultrasound study, deterioration of echo pictures was observed in all 33 (100%) patients. Conclusion: Conclusions: Compared with the X-ray method, transabdominal ultrasonography is established to be safe, affordable, simpler, non-invasive and highly informative method for the study of acute adhesions of the small intestinal obstruction. The method allows the assessment of the ongoing conservative therapy dynamics and determination for surgical treatment indications. abstract_id: PUBMED:22279359 Postoperative adhesive intestinal obstruction: The role of intestinal stenting. Aim: Six children with adhesive obstruction in the postoperative period were treated with stenting the small bowel with long intestinal tube. Materials And Methods: In two children the stenting was done through jejunostomy, and in the other four through the base of appendix. Results: During a follow-up period of 2-14 years, there had been no recurrence. Conclusions: Use of an intraluminal tube stent in preventing recurrent small bowel obstruction due to adhesions is safe and effective when used on appropriately selected patients. abstract_id: PUBMED:36562678 Nasointestinal intubation in the treatment of acute adhesive small bowel obstruction Proximal gastrointestinal decompression is essential in the treatment of acute adhesive small bowel obstruction. Improvement of endoscopic methods, surgical instruments and radiological methods of control makes it possible to timely assess lavage and intra-intestinal therapy. The review presents current views on upper gastrointestinal decompression and other options for the treatment of adhesive small bowel obstruction. abstract_id: PUBMED:38447338 Assessing the Modified Frailty Index and Post-Operative Outcomes in Adhesive Small Bowel Obstruction: A Retrospective Cohort Study. Introduction: Studies identifying predictors of postoperative outcomes in adhesive small bowel obstruction are limited. This study investigates the efficacy of the modified frailty index (mFI)to predict postoperative morbidity and mortality among patients undergoing surgery for adhesive small bowel obstruction. Methods: A multicentre retrospective cohort study including patients undergoing surgery for adhesive small bowel obstruction after failed trial of nonoperative management between January 2015 and December 2020 was performed. Impact of frailty status using the mFI, stratified as frail (≥0.27) and robust (&lt;0.27), on postoperative morbidity, mortality, length of stay, and discharge destination was evaluated using multiple logistic regression. Results: Ninety-two robust patients (mean age 62.4 y, 68% female) and 41 frail patients (mean age 81.7 y, 63% female) were included. On simple stratification, frail patients had significantly increased 30-d morbidity (overall morbidity 80% versus 49%) and need for higher level of care on discharge (41% versus 9%). However, on multiple regression, functional dependence but not the mFI, was independently associated with worse 30-d overall morbidity (odds ratio [OR] 3.97, confidence interval [CI] 1.29-12.19) and lower likelihood of returning to preoperative disposition (OR 0.21, CI 0.05-0.91). The delay in operation beyond 5 d was independently associated with worse 30-d outcomes including overall morbidity and mortality (OR 7.54, CI 2.13-26.73) and decreased return to preoperative disposition (OR 0.14, CI 0.04-0.56). Conclusions: The mFI, although promising, was not independently predictive of outcomes following surgery for adhesive small bowel obstruction. Further adequately powered studies are required. Answer: The provided abstracts do not specifically address the management of adhesive small bowel obstruction (SBO) in children. They focus on the general population, predominantly adults, and discuss various aspects of diagnosis, management, and treatment options for adhesive SBO. The abstracts highlight that adhesive SBO is commonly caused by intra-abdominal adhesions, often following abdominal surgery (PUBMED:33511902, PUBMED:27022449). Non-operative management, including the use of water-soluble contrast agents like gastrografin, is supported by evidence for patients without signs of strangulation, peritonitis, or severe intestinal impairment (PUBMED:27022449, PUBMED:22253960). The role of imaging, particularly computed tomography (CT) scans, is emphasized in aiding the decision-making process for non-operative versus operative management (PUBMED:37405103). Laparoscopic surgery is mentioned as a treatment option for adhesive SBO, with benefits such as shorter hospital stays and earlier food intake, but it is recommended for well-selected patients and there should be a low threshold for conversion to open surgery if necessary (PUBMED:27041815, PUBMED:33511902). Ultrasonography is also presented as a useful diagnostic tool, potentially more informative than X-ray in some cases (PUBMED:32124812). One abstract discusses the use of nasointestinal intubation as part of the treatment for acute adhesive SBO, indicating the importance of proximal gastrointestinal decompression (PUBMED:36562678). Another abstract mentions the use of intestinal stenting in children with adhesive obstruction in the postoperative period, suggesting it can be safe and effective in preventing recurrence (PUBMED:22279359). The abstracts do not provide a direct answer to whether children with adhesive SBO should be operated on. However, they suggest that the decision to operate should be based on careful patient selection, the presence of specific symptoms and signs, the results of imaging studies, and the response to non-operative management. In the absence of specific pediatric-focused data, it would be prudent to apply these general principles to the management of adhesive SBO in children, with additional consideration of the unique physiological and developmental aspects of the pediatric population.
Instruction: Cochlear implants: is there any relationship between stimulation rates and adaptation? Abstracts: abstract_id: PUBMED:11568671 Cochlear implants: is there any relationship between stimulation rates and adaptation? Objective: To determine the relationship between stimulation rate and adaptation in cochlear implant users. Study Design: A prospective clinical study. Setting: This study was conducted at Hacettepe University Medical Faculty, which is a tertiary care and cochlear implant center in Ankara, Turkey. Patients: Seven patients (three women, four men) who were using Nucleus CI 24 M cochlear implants (Cochlear Corp., Lane Cove, NSW, Australia). Two of them had become deaf after developing language skills, and five had congenitally deafness. Results: Six patients demonstrated no tone decay. Only one patient, who had congenital hearing loss, demonstrated tone decay at stimulation rates of 250, 500, 720, 900 pulses per second. There is no statistically significant relationship between the stimulation rate and the selected electrode. Conclusion: The majority of the cochlear implant patients sustained the perception of a continuous electrical signal for 1 minute at 5 to 10 current levels above threshold. abstract_id: PUBMED:12204351 Neural adaptation to pulsatile acoustical stimulation in the cochlear nucleus of the rat. This study, carried out in adult Long-Evans rats, was designed to investigate the adaptive properties of the cochlear nucleus to pulsatile acoustical stimuli. To achieve this purpose, near-field evoked potentials were picked up from the ventral cochlear nucleus in awake animals. Individual auditory thresholds were measured and responses to 250 ms trains of repetitive clicks with pulse rates ranging from 100 to 2000 pulses per second were collected. The amplitude of the first negative (N(1)) component of the evoked potentials to consecutive individual pulses in the train was measured by using a subtraction method. As expected, a rapid amplitude decrement of the responses in the train was obtained and a three phase adaptation was described. The decrease of individual N(1) component amplitude was fitted for each rate of stimulation with exponential decrease equations and time constants were calculated. Such an analysis allowed us to characterize three distinct adaptive processes which were discussed. The results were comparable to those obtained in previous studies in the auditory nerve and suggest that the adaptation recorded in the ventral cochlear nucleus by using near-field evoked potentials reflects the adaptive properties of auditory nerve fibers. abstract_id: PUBMED:15084819 Matching the neural adaptation in the rat ventral cochlear nucleus produced by artificial (electric) and acoustic stimulation of the cochlea. To investigate neural adaptive properties, near-field evoked potentials were recorded from a chronically implanted electrode in the ventral cochlear nucleus in awake Long-Evans rats exposed to acoustic stimuli or receiving intracochlear electric stimulation. Stimuli were 250-ms trains of repetitive acoustic clicks (10, 30 and 50 dB SPL) or biphasic electric pulses (30, 50 and 70 microA) with intratrain pulse rates ranging from 100 to 1000 pulses per second (pps). The amplitude of the first negative (N(1)) to positive (P(1)) component of the average evoked potentials was measured for each consecutive individual pulse in the train. While a progressive exponential decrease in N(1)-P(1) amplitude was observed as a function of the position of the pulse within the train for both types of stimulation, the decrement of electric responses (adaptive pattern) was substantially less prominent than that observed for acoustic stimuli. Based on this difference, the present work was extended by modifying electric stimuli in order to try to restore normal adaptation phenomena. The results suggest the feasibility of mimicking acoustic adaptation by stimulation with exponentially decreasing electric pulse trains, which may be clinically applicable in the auditory implant field. abstract_id: PUBMED:14574431 Effects of intensity of repetitive acoustic stimuli on neural adaptation in the ventral cochlear nucleus of the rat. To study neural adaptation as a function of stimulus intensity, auditory near-field evoked potentials were recorded from the ventral cochlear nucleus in awake Long Evans rats. Responses to 250-ms trains of repetitive clicks (pulse rates ranging from 100 to 1000 pulses per second) were collected at stimulus intensities of 5, 10, 30, 50 and 70 dB SPL. The amplitude of the first negative (N1) component of the average evoked potentials to individual pulses in the train was measured by using a subtraction method. The N1 responses were normalized with respect to the highest cochlear nucleus potential observed in the train, and then plotted as a function of click position in the train. As expected, the general trend of the curves was an exponential decay reaching a plateau more or less rapidly as a function of both intensity and rate of stimulation. Fitting these curves with exponential decay equations revealed that the rapid time constant decreased for increasing stimulus intensities whereas the short-term time constant is relatively independent of intensity. The amount of adaptation (expressed as the ratio of the plateau to the first peak amplitude) was substantially less prominent at low intensities (5-10 dB SPL) and low rates (100-200 pulses per second) than at higher intensities and high rates. These results indicate that adaptation patterns obtained in the ventral cochlear nucleus by using near-field evoked potentials exhibit properties comparable to those already present at the level of the auditory nerve. abstract_id: PUBMED:27594099 A fast, stochastic, and adaptive model of auditory nerve responses to cochlear implant stimulation. Cochlear implants (CIs) rehabilitate hearing impairment through direct electrical stimulation of the auditory nerve. New stimulation strategies can be evaluated using computational models. In this study, a computationally efficient model that accurately predicts auditory nerve responses to CI pulse train input was developed. A three-dimensional volume conduction and active nerve model developed at Leiden University Medical Center was extended with stochasticity, adaptation, and accommodation. This complete model includes spatial and temporal characteristics of both the cochlea and the auditory nerve. The model was validated by comparison with experimentally measured single fiber action potential responses to pulse trains published in the literature. The effects of pulse rate and pulse amplitude on spiking patterns were investigated. The modeled neural responses to CI stimulation were very similar to the single fiber action potential measurements in animal experiments. The model's responses to pulse train stimulation with respect to spatial location were also investigated. Adaptation was stronger at the borders of the stimulated area than in the center. By combining spatial details with long-term temporal components and a broad implementation of stochasticity a comprehensive model was developed that was validated for long duration electric stimulation of a wide range of pulse rates and amplitudes. The model can be used to evaluate auditory nerve responses to cochlear implant sound coding strategies. abstract_id: PUBMED:9888192 Electrically evoked compound action potential (ECAP) of the cochlear nerve in response to pulsatile electrical stimulation of the cochlea in the rat: effects of stimulation at high rates. Some cochlear implant patients achieve better speech recognition with pulsatile electrical stimulation presented at high rates. The present study aimed to explore, in an animal model of cochlear implants, how the excitability of the cochlear nerve is affected by pulsatile electrical stimulation delivered at high rates, of up to 1,000-2,000 pulses per second (pps). Adult rats (n=23) were implanted with two or three stimulating electrodes in the left cochlea. In four of these rats, the left cochlea was deafened by local perfusion with 1 per cent or 4 per cent neomycin solutions prior to implantation. Pulsatile stimuli consisted of 20 micros electrical pulses, delivered in trains of 200 ms duration, separated by a pause of 200 ms. The pulse rates ranged from 100 to 2,000 pps (intra-train pulse rate). Electrically evoked compound action potentials (ECAPs) of the cochlear nerve were recorded either intracochlearly or from epidural electrodes (extra-cochlearly). With increasing pulse rates, the average ECAP amplitude decreased, whereas the average ECAP latency and its variability (SD) increased. For rates above 300 pps, the amplitude of the ECAP to the individual successive pulses delivered in the train progressively decreased during the initial part of the train, corresponding to a short-term adaptation of the cochlear nerve. This effect progressively increased for pulse rates ranging from 300 to 2,000 pps. In addition, there was a phenomenon of long-term adaptation, as indicated by a decrease in the amplitude of the ECAP to the first pulse of the train, indicating that the pause of 200 ms between each train was not long enough for full recovery of the cochlear nerve. This long-term adaptation was progressively more pronounced for increasing pulse rates. To characterize further the recovery in excitability of the cochlear nerve, forward masking experiments were conducted, showing a decrease of the ECAP amplitude when the interval between the first pulse (masker) and the second pulse (probe) was shorter than 2 ms. This ECAP decrease was slow for intervals between 2 and 1 ms and then abrupt for shorter intervals. The observations described above were similar for extra- and intra-cochlear recordings and were little, if at all, affected by treatment of the cochlea with neomycin. abstract_id: PUBMED:26501873 Temporal Considerations for Stimulating Spiral Ganglion Neurons with Cochlear Implants. A wealth of knowledge about different types of neural responses to electrical stimulation has been developed over the past 100 years. However, the exact forms of neural response properties can vary across different types of neurons. In this review, we survey four stimulus-response phenomena that in recent years are thought to be relevant for cochlear implant stimulation of spiral ganglion neurons (SGNs): refractoriness, facilitation, accommodation, and spike rate adaptation. Of these four, refractoriness is the most widely known, and many perceptual and physiological studies interpret their data in terms of refractoriness without incorporating facilitation, accommodation, or spike rate adaptation. In reality, several or all of these behaviors are likely involved in shaping neural responses, particularly at higher stimulation rates. A better understanding of the individual and combined effects of these phenomena could assist in developing improved cochlear implant stimulation strategies. We review the published physiological data for electrical stimulation of SGNs that explores these four different phenomena, as well as some of the recent studies that might reveal the biophysical bases of these stimulus-response phenomena. abstract_id: PUBMED:27374479 Place dependent stimulation rates improve pitch perception in cochlear implantees with single-sided deafness. In normal hearing, the pitch of an acoustic tone can theoretically be encoded by either the place of stimulation in the cochlea or the corresponding rate of vibration. Thus spectral attributes and temporal fine structure of an acoustic signal are naturally correlated. Cochlear implants (CIs), neural prosthetic devices that restore hearing in the profoundly hearing impaired, currently disregard this mechanism; electrical stimulation is provided at fixed electrode positions with default place independent stimulation rate assignments. This does not account for individual cochlear encoding depending on electrode array placement, variations in insertion depth, and the proximity to nerve fibers. Encoding pitch in such manner delivers limited tonal information. Consequently, music appraisal in CI users is often rated cacophonic while speech perception in quiet is close to normal in top performers. We hypothesize that this limitation in electric stimulation is at least partially due to the mismatch between frequency and place encoding in CIs. In the present study, we determined individual electrode locations by analysis of cochlear radiographic images obtained after surgery and calculated place dependent stimulation rates according to models of the normal tonotopic function. Pitch matching in CI users with single-sided deafness shows that place dependent stimulation rates allow thus far unparalleled restoration of tonotopic pitch perception. Collapsed data of matched pitch frequencies as a function of calculated electrical stimulation rate were well fitted by linear regression (R(2) = 0.878). Sound processing strategies incorporating place dependent stimulation rates are expected to improve pitch perception in CI users. abstract_id: PUBMED:17391881 Direct comparison between properties of adaptation of the auditory nerve and the ventral cochlear nucleus in response to repetitive clicks. The present study was designed to complete two previous reports [Loquet, G., Rouiller, E.M., 2002. Neural adaptation to pulsatile acoustical stimulation in the cochlear nucleus of the rat. Hear. Res. 171, 72-81; Loquet, G., Meyer, K., Rouiller, E.M., 2003. Effects of intensity of repetitive acoustic stimuli on neural adaptation in the ventral cochlear nucleus of the rat. Exp. Brain Res. 153, 436-442] on neural adaptation properties in the auditory system of the rat. Again, auditory near-field evoked potentials (ANEPs) were recorded in response to 250-ms trains of clicks from an electrode chronically implanted in the ventral cochlear nucleus (VCN). Up to now, our interest had focused on the adaptive behavior of the first one (N1) of the two negative ANEP components. A re-examination of our data for the second negative component (N2) was now undertaken. Results show that the adaptation time course observed for N2 displayed the same three-stage pattern previously reported for N1. Similarly, adaptation became more pronounced and occurred faster as stimulus intensity and/or repetition rate were increased. Based on latency data which suggest N1 and N2 to be mainly due to the activity of auditory-nerve (AN) fibers and cochlear nucleus neurons, respectively, it was concluded that neural adaptation assessed by gross-potentials was similar in the AN and VCN. This finding is meaningful in the context of our search to restore normal adaptation phenomena via electro-auditory hearing with an auditory brainstem implant on the same lines as our work in cochlear implants. abstract_id: PUBMED:18182489 Binaural jitter improves interaural time-difference sensitivity of cochlear implantees at high pulse rates. Interaural time difference (ITD) arises whenever a sound outside of the median plane arrives at the two ears. There is evidence that ITD in the rapidly varying fine structure of a sound is most important for sound localization and for understanding speech in noise. Cochlear implants (CIs), neural prosthetic devices that restore hearing in the profoundly deaf, are increasingly implanted to both ears to provide implantees with the advantages of binaural hearing. CI listeners have been shown to be sensitive to fine structure ITD at low pulse rates, but their sensitivity declines at higher pulse rates that are required for speech coding. We hypothesize that this limitation in electric stimulation is at least partially due to binaural adaptation associated with periodic stimulation. Here, we show that introducing binaurally synchronized jitter in the stimulation timing causes large improvements in ITD sensitivity at higher pulse rates. Our experimental results demonstrate that a purely temporal trigger can cause recovery from binaural adaptation. Thus, binaurally jittered stimulation may improve several aspects of binaural hearing in bilateral recipients of neural auditory prostheses. Answer: Yes, there is a relationship between stimulation rates and adaptation in cochlear implant users. Studies have shown that the rate of stimulation can affect the adaptation of the auditory nerve and the cochlear nucleus to electrical stimuli. For instance, one study found that with increasing pulse rates, the average amplitude of electrically evoked compound action potentials (ECAPs) decreased, while the average ECAP latency and its variability increased. This suggests that higher stimulation rates can lead to a form of short-term adaptation of the cochlear nerve, which becomes more pronounced as pulse rates increase from 300 to 2,000 pulses per second (pps) (PUBMED:9888192). Another study observed that the amplitude decrement of evoked potentials in response to repetitive clicks was fitted with exponential decrease equations, indicating distinct adaptive processes in the auditory system (PUBMED:12204351). Moreover, a study conducted on cochlear implant users at Hacettepe University Medical Faculty found that the majority of patients sustained the perception of a continuous electrical signal for 1 minute at 5 to 10 current levels above threshold, with no statistically significant relationship between the stimulation rate and the selected electrode (PUBMED:11568671). However, one patient with congenital hearing loss did demonstrate tone decay at various stimulation rates, suggesting individual variability in adaptation responses. Additionally, research has indicated that adaptation patterns can be influenced by the intensity of repetitive acoustic stimuli, with less prominent adaptation at low intensities and low rates compared to higher intensities and rates (PUBMED:14574431). Furthermore, a study on rats showed that neural adaptation in the ventral cochlear nucleus produced by electric and acoustic stimulation could be matched by modifying electric stimuli to mimic acoustic adaptation, which may have clinical implications for auditory implant strategies (PUBMED:15084819). In summary, the relationship between stimulation rates and adaptation in cochlear implants is complex and can vary among individuals. Adaptation phenomena such as refractoriness, facilitation, accommodation, and spike rate adaptation are thought to be relevant for cochlear implant stimulation of spiral ganglion neurons and are influenced by the rate and intensity of the electrical stimulation (PUBMED:26501873).
Instruction: Health practices of critical care nurses: are these nurses good role models for patients? Abstracts: abstract_id: PUBMED:9215422 Health practices of critical care nurses: are these nurses good role models for patients? Background: Few studies have explored the health practices of critical care nurses. Critical care nurses routinely teach patients about using healthy practices such as low-fat diets, exercise, and routine screening examinations. However, it may be even more important that the nurses themselves have a healthy lifestyle, thus serving as role models for patients. Nurses are selling a product, and that product is health. The best salespersons are those who are genuinely committed to their product and model its benefits. Therefore, critical care nurses' healthful practices can have a profound effect on their patients. Objectives: The purpose of this descriptive exploratory study was to examine critical care nurses' responses to three questions about health practices in their daily lives: (1) What are critical care nurses doing currently to stay healthy? (2) Do they anticipate making any changes in their lifestyle in the future? (3) Would they recommend their lifestyle to their patients? Methods: One hundred twenty-seven critical care nurses attending a midwestern critical care conference completed a two-part questionnaire designed to produce a health profile. In a man-on-the-street approach, 23 nurses participated in an interview via video camera. Descriptive statistics were used to analyze the data retrieved from the questionnaires. Interviews were transcribed verbatim and analyzed for themes with a constant comparative method. Results: More than 70% of the critical care nurses who responded engage in exercise and follow a healthy, low-fat diet. Seventy-one percent said that they anticipate making a change in their lifestyle in the future, and 70% said that they would recommend their lifestyle to their patients. Five themes emerged from the videotaped interviews: (1) Heart-healthy practices predominated the responses. (2) Incorporating a healthy lifestyle was easy for some and a struggle for others. (3) Critical care nurses readily listed barriers to healthy living. (4) The nurses had a positive attitude about their healthy lifestyles and felt optimistic about being role models for their patients. (5) Future plans were either singular in focus or limited to maintenance of current health habits. Conclusions: The majority of the nurses reported practicing a healthy lifestyle and thought that they were good role models for patients. abstract_id: PUBMED:35917236 Systematization of initiatives in sexual and reproductive health about good practices criteria in response to the COVID-19 pandemic in primary health care in Chile. Introduction The COVID- 19 pandemic discontinued sexual and reproductive health care in Chile and the world. The national focus on hospital care led primary care teams to respond in natural and diverse ways. Understanding the factors involved in this process may improve future responses from the judgment of good practices. Therefore, this study aimed to identify and systematize sexual and reproductive health initiatives raised by primary care teams in response to the COVID- 19 pandemic in Chile. Methods We systematically evaluated initiatives and practices in sexual and reproductive health in prima-ry care between June 2020 and November 2021. This study was developed in three methodological phases: a review of documents, a collection of experiences through an electronic instrument sent to the 29 health services in Chile, and in-depth interviews. According to best practice criteria, mapping and characterizing the initiatives and critical discourse analysis of narratives and interviews were carried out. Results Forty-one initiatives from 19 health services were identified, mainly from the South Central macro zone and urban areas. In these areas, care was recognized. These practices were relevant, aligned with their objectives, rapidly implemented, and used novel strategies through new technologies. However, these initiatives had little intercultural relevance or evaluation. Perceived success was related to motivation, leadership, and institutional and community resilience. The adaptability of initiatives emerged as a new need and criterion of analysis. Conclusion The lessons learned from these initiatives invite us to consider health care teams' mental health, their relationship with the community, the use of new technologies, the evaluation of practices considering satisfaction, cross- cutting approaches, and their adaptability. In all, these aspects may improve primary care response in sexual and reproductive health to new crises. abstract_id: PUBMED:23222224 Literature review: patients' and health care providers' perceptions of stressors in critical care units. The purpose of this article was to provide a historical review and analysis of the research literature focusing on patients', nurses', and health care providers' perceptions of stressors in the critical care setting. A narrative historical literature review method was used. The inclusion criteria include English-language studies of stressors as perceived by patients and health care providers from 1984 to 2011. Several studies investigated perceptions of stressors using the Environmental Stressors Questionnaire and the findings did not show any significant changes of stress perception over time since 1989 until the present. There is a need to further investigate stress perceptions in the health care setting using a mixed qualitative and quantitative approach to provide a more precise description of stressors and stress perceptions in the critical care setting. abstract_id: PUBMED:15267163 The role of the senior health care worker in critical care. This article identifies that the introduction of the support worker role in the critical care team facilitates flexibility when organizing and managing patient care. Qualified nurses' time can be used more effectively, enhancing the quality of the patient care delivered. Aspects of the qualified nurses' workload in critical care can be shared and delegated successfully to unqualified staff. It is our view that staffing levels in critical care environments need to be reviewed with more flexible working practices to meet the current and future demands of critical care. There is a need for national consensus amongst qualified nurses to clarify and define the role of the support worker and develop a critical care competency framework to standardize training. To ensure proficiency, adequate training and appropriate accountability, support workers require regulation by a nationally recognized body. abstract_id: PUBMED:1592608 Health practices of critical care nurses. Little is known about the health practices of critical care nurses (CCNs). Because their health behaviors may influence their inclinations to counsel patients, it is important that CCNs engage in a healthy lifestyle and serve as health exemplars. The purpose of this survey was to describe the health practices of 499 CCNs. Data were gathered by questionnaires that elicited information regarding smoking habits, oral health and dietary practices, energy expenditure, seat belt use, alcohol consumption, and health surveillance behaviors. This article is a sequel to a previous manuscript that reported findings relative to the smoking practices of CCNs. Results of the study suggest that the CCNs surveyed were not fulfilling their roles as health exemplars. Although some reported favorable health practices, many indicated habits that were less than desirable. These data document the need to develop strategies for improving the health behaviors of CCNs, thereby protecting their future health. Ultimately, these strategies may benefit their patients. abstract_id: PUBMED:12943134 Nursing's role in complementary and alternative therapy use in critical care. Critical care nurses can expect to encounter more patients using CAT and increasing opportunities and requests for CAT use in their critical care environments. This provides an opportunity for nurses' involvement to shape proactively how the use of these therapies will unfold in critical care. This can be accomplished in various ways. Actively ask patients and families about use of CAT. Initiate discussions with colleagues and peers about professional and personal use of therapies. Explore the knowledge and education needed to administer specific CAT. Engage in research regarding the use of CAT in critical care. Identify experts in the institution and surrounding community. Encourage critical care units and institutions to consider how CAT should be implemented across the institution. From a broader perspective, nurses may become part of professional political processes shaping patient accessibility to CAT and the use of CAT in the discipline, across disciplines, and in healthcare settings and public domains. It is crucial that nurses not relinquish their role as traditional providers of CAT in providing safe, effective, and holistic care at the bedside of critically ill patients. abstract_id: PUBMED:21481113 The critical care nurse's role in end-of-life care: issues and challenges. Aim: The purpose of this article is to discuss the challenges critical care nurses face when looking after patients needing End-of-Life (EoL) care in critical care environments. Background: Critical care nurses frequently provide care to patients who fail to respond to treatments offered to support and prolong life. The dying phase for individuals in critical care settings, commonly after withholding/withdrawing treatment, is very short posing great demands on critical care nurses to provide physical and emotional support to both patients and their families. Despite the existence of recognized care planning frameworks that may help nurses in providing EoL care, these are not used by all units and many nurses rely on experience to inform practice. A number of aspects such as communication, patient/family-centred decision-making, continuity of care, emotional/spiritual support and support for health professionals have been indicated as contributing factors towards the provision of effective EoL care. These are considered from the perspective of critical care nursing. Conclusion: Skills development in key aspects of care provision may improve the provision of EoL care for critical care patients and their families. Relevance To Clinical Practice: Critical care nurses have an essential role in the provision of effective EoL care; however, this dimension of their role needs further exploration. It is noted that educational opportunities need to be provided for critical care nurses to increase the knowledge on planning and delivering EoL care. To inform this evaluation of current EoL care provision in critical care is necessary to address a knowledge deficit of the needs of nurses who seek to support patients and their families at a critical time. abstract_id: PUBMED:33740477 Hepatitis C standards of care: A review of good practices since the advent of direct-acting antiviral therapy. Significant steps must be taken to reduce the global incidence and prevalence of hepatitis C virus (HCV) and mortality from HCV infection to achieve the WHO goal of eliminating viral hepatitis as a public health threat by 2030. Proper epidemiological surveillance of the full continuum of care is essential for monitoring progress and identifying gaps that need to be addressed. The tools required for elimination have largely been established, and the issue at hand is more how they should best be implemented in different settings around the world. Documenting good practices allows for knowledge exchange to prevent transmission and improve health outcomes for people with HCV. This review found 13 well documented HCV good practices that have become the standard of care or that should become the standard of care as soon as possible. In 2013, highly effective direct-acting antiviral therapy became available, which has cure rates of over 95%. Together with this new therapy, evidence-based good practices can help countries eliminate viral hepatitis C. abstract_id: PUBMED:19542975 HIPP lead to self-health: healthy infection prevention practices in intensive care units. Healthy infection prevention practices (HIPP) include hand hygiene, respiratory etiquette, environmental cleanliness, and use of personal protective equipment. These healthy practices are most used to protect individuals against exposure to bacterial and viral infections in the workplace, as well as in the home. Most often these infection prevention behaviors in critical care units are promoted to protect the patient from healthcare-associated infections. Yet, these practices are just as important to the health of the critical care nurse. Self-health in the workplace is essential to creating a healthy workplace environment, which is the goal of many intensive care units today. The benefits of creating a healthy work environment are improvement of patient/nurse satisfaction and nurse retention. HIPP reduce the risk of the critical care nurse's exposure to microbial pathogens such as methicillin-resistant Staphylococcus aureus and influenza. Pathogens that cause infections may contaminate the hands, the clothing, equipment, and blood, putting the nurse at risk for unhealthy hands, unhealthy flora, and unhealthy blood. The intensive care nurse is encouraged to embrace HIPP to nurture self, as well as protect the patient. abstract_id: PUBMED:33623309 A Descriptive Study to Assess the Knowledge, Attitude, Practices, Perceived Barriers, and Support Regarding Palliative Care and End-of-Life Care among Critical Care Nurses of Tertiary Care Medical Institute. Introduction: The role of nurses working in critical care setup is one of the key members for optimal provision of the palliative care and end-of-life care (EOLC). This study aims to assess knowledge, attitude, practices, and perceived barriers regarding palliative and EOLC among critical care nurses. Methods: A cross-sectional study was conducted among 386 critical care nurses of a tertiary care center, selected through convenience sampling technique. Data related to knowledge, attitude, practices, and perceived barriers were collected using a standardized tool. The collected data were analyzed using SPSS version 24. Results: We observed that mean knowledge, attitude, and practice scores of critical care nurses were 9.83 ± 2.50 of 20, 104.91 ± 13.04 of 150, and 17.61 ± 4.36 of 27, respectively. One of the major barriers in providing palliative and EOLC was being called for attending to other newly admitted patients. The mean knowledge score of nurses was found to be significantly higher among female nurses (P = 0.016). Nurses who had additional training pertaining to palliative and EOLC had significantly higher mean knowledge (P = 0.001) and attitude scores (P = 0.02) than those who did not have such training. The mean practice (P = 0.02) and attitude score (P = 0.002) was significantly higher among postgraduate nurses as compared to nurses with lower qualification. The mean attitude score was significantly higher among nurses of lower age group (P = 0.04) along with a negative correlation of age of nurses with attitude (P = 0.01). Nurses' practice was positively correlated to the attitude (P = 0.001) and knowledge (P = 0.01). Conclusion: There is a considerable gap between critical care nurses' knowledge and practices, in spite of having a good attitude regarding palliative and EOLC. This necessitates the strengthening of both preservice and in-service education for nurses. Answer: The health practices of critical care nurses have been a subject of interest, particularly in the context of whether these nurses serve as good role models for their patients. According to a descriptive exploratory study examining critical care nurses' health practices, more than 70% of the nurses who responded reported engaging in exercise and following a healthy, low-fat diet. Additionally, 71% anticipated making changes in their lifestyle in the future, and 70% would recommend their lifestyle to their patients. The study concluded that the majority of the nurses reported practicing a healthy lifestyle and believed they were good role models for patients (PUBMED:9215422). However, another study suggested that critical care nurses (CCNs) were not fulfilling their roles as health exemplars. While some reported favorable health practices, many indicated habits that were less than desirable, highlighting the need to develop strategies for improving the health behaviors of CCNs (PUBMED:1592608). In the context of specific practices, critical care nurses are also involved in promoting healthy infection prevention practices (HIPP), which include hand hygiene, respiratory etiquette, environmental cleanliness, and the use of personal protective equipment. These practices are important not only for patient protection but also for the self-health of the nurses, emphasizing the role of HIPP in creating a healthy work environment (PUBMED:19542975). Overall, while there is evidence that many critical care nurses engage in healthful practices and view themselves as role models, there is also recognition of the need for improvement in certain areas to ensure that all nurses can effectively serve as exemplars of health for their patients.
Instruction: Does insufficient access to dual-energy X-ray absorptiometry (DXA) stifle the provision of quality osteoporosis care in New Zealand? Abstracts: abstract_id: PUBMED:23361437 Does insufficient access to dual-energy X-ray absorptiometry (DXA) stifle the provision of quality osteoporosis care in New Zealand? Unlabelled: Access to dual-energy X-ray absorptiometry (DXA) scanning varies significantly throughout New Zealand with the majority of scans funded privately or through the health industry. Barriers to access need to be addressed if osteoporosis guidelines are to be implemented across the country equitably, to reduce the incidence and cost of fragility fractures in New Zealand. Purpose: This study aims (1) to estimate the number of dual-energy X-ray absorptiometry scans performed in New Zealand, (2) to determine funding sources of DXA scans and (3) to determine the level of regional variation in access. Methods: DXA scan providers in New Zealand were accessed through a nationwide database and asked to provide data on DXA scans performed in 2007. The numbers of DXA scans performed in each District Health Board (DHB) region were calculated by using a funding source and compared with DHB population estimates provided by Statistics New Zealand for 2007. Results: In New Zealand in 2007, 33,104 DXA scans were performed, with a population rate of 78.1 DXA scans per annum per 10,000 general population, significantly less than international guidelines. There were important regional differences in access to DXA scanning. Funding for scans was predominately by private and pharmaceutical industry funders. DHBs funded only 31 % of DXA scans during this time period. Conclusions: Access to DXA scan technology varies significantly throughout New Zealand, with the majority of DXA scans funded by the private sector or health industry. Barriers to access need to be addressed if osteoporosis guidelines are to be implemented across the country in an equitable fashion and so reduce the incidence and cost of fragility fractures to New Zealand. abstract_id: PUBMED:28159711 Quality in dual-energy X-ray absorptiometry scans. Dual-energy X-ray absorptiometry (DXA) is the gold standard for measuring bone mineral density (BMD), making the diagnosis of osteoporosis, and for monitoring changes in BMD over time. DXA data are also used in the determination of fracture risk. Procedural steps in DXA scanning can be broken down into scan acquisition, analysis, interpretation, and reporting. Careful attention to quality control pertaining to these procedural steps should theoretically be beneficial in patient management. Inattention to procedural steps and errors that may occur at each step has the possibility of providing information that would inform inappropriate clinical decisions, generating unnecessary healthcare expenses and ultimately causing avoidable harm to patients. This article reviews errors in DXA scanning that affect trueness and precision related to the machine, the patient, and the technologist and reviews articles which document problems with DXA quality in clinical and research settings. An understanding of DXA errors is critical for DXA quality; programs such as certification of DXA technologists and interpreters help in assuring quality bone densitometry. As DXA errors are common, pay for performance requiring DXA technologists and interpreters to be certified and follow quality indicators is indicated. abstract_id: PUBMED:32742961 Body composition with dual energy X-ray absorptiometry: from basics to new tools. Dual-energy X-ray absorptiometry (DXA) in nowadays considered one of the most versatile imaging techniques for the evaluation of metabolic bone disorders such as osteoporosis, sarcopenia and obesity. The advantages of DXA over other imaging techniques are the very low radiation dose, its accuracy and simplicity of use. In addition, fat mass (FM) and lean mass (LM) values by DXA shows very good accuracy compared to that of computed tomography and magnetic resonance imaging. In this review we will explain the technical working principles of body composition with DXA, together with the possible limitations and pitfalls that should be avoided in daily routine to produce high-quality DXA examinations. We will also cover the current clinical practical application of whole body DXA values, with particular emphasis on the use of LM indices in the diagnostic workup of reduced muscle mass, sarcopenia and osteosarcopenic obesity according to the most recent guidelines. The possible use of adipose indices will be considered, such as the fat mass index (FMI) or the android/gynoid ratio, as well as lipodystrophy indices and the evaluation of visceral adipose tissue (VAT). Whenever available, we will provide possible cut-off diagnostic values for each of these LM and FM indices, according to current literature and guidelines. abstract_id: PUBMED:32890786 Assessing underlying bone quality in spine surgery patients: a narrative review of dual-energy X-ray absorptiometry (DXA) and alternatives. Poor bone quality and low bone mineral density (BMD) have been previously tied to higher rates of postoperative mechanical complications in patients undergoing spinal fusion. These include higher rates of proximal junctional kyphosis, screw pullout, pseudoarthrosis, and interbody subsidence. For these reasons, accurate preoperative assessment of a patient's underlying bone quality is paramount for all elective procedures. Dual-energy X-ray absorptiometry (DXA) is currently considered to be the gold standard for assessing BMD. However, a growing body of research has suggested that in vivo assessments of BMD using DXA are inaccurate and have, at best, moderate correlations to postoperative mechanical complications. Consequently, there have been investigations into using alternative methods for assessing in vivo bone quality, including using computed tomography (CT) and magnetic resonance imaging (MRI) volumes that are commonly obtained as part of surgical evaluation. Here we review the data regarding the accuracy of DXA for the evaluation of spine bone quality and describe the alternative imaging modalities currently under investigation. abstract_id: PUBMED:27790018 A review of the use of dual-energy X-ray absorptiometry (DXA) in rheumatology. The principal use of dual-energy X-ray absorptiometry (DXA) is to diagnose and monitor osteoporosis and therefore reduce fracture risk, associated morbidity, and mortality. In the field of rheumatology, DXA is an essential component of patient care because of both rheumatologists' prescription of glucocorticoid treatment as well as the effects of rheumatological diseases on bone health. This review will summarize the use of DXA in the field of rheumatology, including the concern for glucocorticoid-induced osteoporosis, as well as the association of osteoporosis with a sampling of such rheumatologic conditions as rheumatoid arthritis (RA), systemic lupus erythematosus, ankylosing spondylitis, juvenile idiopathic arthritis, and scleroderma or systemic sclerosis. Medicare guidelines recognize the need to perform DXA studies in patients treated with glucocorticoids, and the World Health Organization FRAX tool uses data from DXA as well as the independent risk factors of RA and glucocorticoid use to predict fracture risk. However, patient access to DXA measurement in the US is in jeopardy as a result of reimbursement restrictions. DXA technology can simultaneously be used to discover vertebral fractures with vertebral fracture assessment and provide patients with a rapid, convenient, and low-radiation opportunity to clarify future fracture and comorbidity risks. An emerging use of DXA technology is the analysis of body composition of RA patients and thus the recognition of "rheumatoid cachexia," in which patients are noted to have a worse prognosis even when the RA appears well controlled. Therefore, the use of DXA in rheumatology is an important tool for detecting osteoporosis, reducing fracture risk and unfavorable outcomes in rheumatological conditions. The widespread use of glucocorticoids and the underlying inflammatory conditions create a need for assessment with DXA. There are complications of conditions found in rheumatology that could be prevented with more widespread patient access to DXA. abstract_id: PUBMED:31352546 Can digital X-ray radiogrammetry be an alternative for dual-energy X-ray absorptiometry in the diagnosis of secondary low bone quality in children? Bone quality in children is generally measured with dual-energy X-ray absorptiometry (DXA). Digital X-ray radiogrammetry (DXR) uses BoneXpert to measure cortical bone quality on hand radiographs. This prospective study compared DXR and DXA results in children with high probability of secondary low bone quality, defined as DXA of the lumbar spine (DXALS) Z-score ≤ - 2.0. One hundred one children underwent both DXA and DXR assessment. DXALSZ-scores were also adjusted for bone age. DXR Z-scores were compared with both DXALSZ-scores, using Pearson correlations, Bland-Altman analysis, and sensitivity-specificity analysis. Mean bone age, DXR, and both DXA Z-scores were significantly impaired. Pearson correlation coefficients were significant between DXR Z-scores and both DXALSZ-scores 0.507-0.564 (p &lt; 0.001). Bland-Altman analysis showed a mean difference of 0.05-0.48 between DXR and both DXA Z-scores and showed more than 90% similarity for both DXALSZ-scores ≤ - 2.0. DXR had a sensitivity of 67-71% and specificity of 77-83% compared to both DXALSZ-scores.Conclusion: DXR correlates well with as well DXALS as bone age-adjusted DXALSZ-scores and shows good agreement with as well DXALS as bone age-adjusted DXALSZ-scores ≤ - 2.0. DXR shows best results when compared with DXALSZ-scores. What is Known: • Digital X-ray radiogrammetry (DXR) may correlate well with dual-energy X-ray absorptiometry (DXA) in pediatric, adolescent, and adult patients. • DXR is a feasible method for assessment of bone quality in children. What is New: • This is the first prospective study in children with suspected secondary low bone quality that illustrates correlation between DXR and bone age-adjusted DXA Z-scores and that shows good agreement between DXR and DXA as bone age-adjusted DXA Z-scores ≤ -2.0. • Our results suggest DXR to be a good alternative for DXA for determining low bone quality. abstract_id: PUBMED:28432574 A critical appraisal of the quality of adult dual-energy X-ray absorptiometry guidelines in osteoporosis using the AGREE II tool: An EuroAIM initiative. Objectives: Dual energy X-ray absorptiometry (DXA) is the most widely used technique to measure bone mineral density (BMD). Appropriate and accurate use of DXA is of great importance, and several guidelines have been developed in the last years. Our aim was to evaluate the quality of published guidelines on DXA for adults. Methods: Between June and July 2016 we conducted an online search for DXA guidelines, which were evaluated by four independent readers blinded to each other using the AGREE II instrument. A fifth independent reviewer calculated scores per each domain and agreement between reviewers' scores. Results: Four out of 59 guidelines met inclusion criteria and were included. They were published between 2005 and 2014. Three out of four guidelines reached a high level of quality, having at least five domain scores higher than 60%. Domain 1 (Scope and Purpose) achieved the highest result (total score = 86.8 ± 3.7%). Domain 6 (Editorial Independence) had the lowest score (total score = 54.7 ± 12.5%). Interobserver agreement ranged from fair (0.230) to good (0.702). Conclusions: Overall, the quality of DXA guidelines is satisfactory when evaluated using the AGREE II instrument. The Editorial Independence domain was the most critical, thus deserving more attention when developing future guidelines. Main Messages: • Three of four guidelines on DXA had a high quality level (&gt;60%). • Scope/purpose had the highest score (86.8 ± 3.7%). • Editorial Independence had the lowest score (54.7 ± 12.5%). • Interobserver agreement ranged from fair (0.230) to good (0.702). abstract_id: PUBMED:27989544 Use of dual energy X-ray absorptiometry in pediatric patients. Dual Energy X-ray Absorptiometry (DXA) is a vital tool for assessing bone health in patients at risk for fragility fractures. In pediatric patients, this technology is used in conjunction with clinical fracture history to diagnosis osteoporosis and monitor treatment response. Childhood and adolescence is characterized by linear growth and bone mass accrual; thus there are important differences in the interpretation of bone measurements obtained by DXA in these young patients. This review aims to explore the current indications for DXA use and interpretation of DXA in the pediatric age group using currently available reference databases. Limitations of DXA in pediatric patients, specifically in children with short stature, will be explored. We will review several pathophysiologic mechanisms that may lead to low bone density in children, discussing representative diseases and the recommendations for monitoring bone health with DXA in these conditions. Finally, we will highlight new methods by which DXA imaging can gather additional information on bone health in children and may improve our ability to predict fractures and osteoporosis. abstract_id: PUBMED:35264829 Dual-Energy X-Ray Absorptiometry Scanning in Practice, Technical Aspects, and Precision Testing. The last three decades have witnessed considerable progress in the field of bone densitometry. Osteoporosis may be diagnosed in postmenopausal women and in men aged 50 and older if the bone mineral density (BMD) T-scores of the lumbar spine, total hip, or femoral neck are -2.5 or less. For reporting T-scores, the Hologic dual-energy X-ray absorptiometry (DXA) scanner uses the Caucasian (nonrace adjusted) female normative database for women and men of all ethnic groups although reference database used does have an impact on the categorization of BMD and must be chosen judiciously considering the regional and ethnic characteristics of the population. The quality control for DXA systems should be periodically done in accordance with manufacturer guidelines for DXA. Beyond conventional BMD assessment, DXA may also be utilized to assess the trabecular bone score, hip structural analysis, vertebral fractures, and body composition. abstract_id: PUBMED:32839098 Operator-Related Errors and Pitfalls in Dual Energy X-Ray Absorptiometry: How to Recognize and Avoid Them. Dual-energy X-ray absorptiometry (DXA) is the most common modality for quantitative measurements of bone mineral density. Nevertheless, errors related to this exam are still very common, and may significantly impact on the final diagnosis and therapy. Operator-related errors may occur during each DXA step and can be related to wrong patient positioning, error in the acquisition process or in the scan analysis. The aim of this review is to provide a practical guide on how to recognize such errors in spine and hip DXA scan and how to avoid them, also presenting some of the most common artifacts encountered in clinical practice. Answer: Yes, insufficient access to dual-energy X-ray absorptiometry (DXA) does stifle the provision of quality osteoporosis care in New Zealand. A study highlighted that access to DXA scanning varies significantly throughout New Zealand, with the majority of scans being funded privately or through the health industry. Only 31% of DXA scans were funded by District Health Boards (DHBs) during the study period, indicating a reliance on private funding for the majority of scans. The study concluded that barriers to access need to be addressed to implement osteoporosis guidelines equitably across the country and reduce the incidence and cost of fragility fractures in New Zealand (PUBMED:23361437).
Instruction: 'Isn't it all Whites? Abstracts: abstract_id: PUBMED:28716453 The effects of whiteness on the health of whites in the USA. Whites in the USA are the dominant racial group, with greater than average access to most material and social rewards. Yet, while whites have better outcomes than other racial groups on some health indicators, whites paradoxically compare poorly on other measures. Further, whites in the USA also rank poorly in international health comparisons. In this paper, we present a framework that combines the concept of whiteness-a system that socially, economically, and ideologically benefits European descendants and disadvantages people in other groups-with research from a variety of fields in order to comprehensively model the social factors that influence whites' health. The framework we present describes how whiteness and capitalism in the USA shape societal conditions, individual social characteristics and experiences, and psychosocial responses to circumstances to influence health outcomes. We detail specific examples of how social policies supported by whiteness, the narratives of whiteness, and the privileges of whiteness may positively and negatively affect whites' health. In doing so, we suggest several areas for future research that can expand our understanding of how social factors affect health and can contribute to the patterns and paradoxes of whites' health. By expanding research to include theoretically-grounded analyses of the dominant group's health, we can achieve a more complete picture of how systems of racial inequity affect health. abstract_id: PUBMED:28934128 Neuroticism Predicts Subsequent Risk of Major Depression for Whites but Not Blacks. Cultural and ethnic differences in psychosocial and medical correlates of negative affect are well documented. This study aimed to compare blacks and whites for the predictive role of baseline neuroticism (N) on subsequent risk of major depressive episodes (MDD) 25 years later. Data came from the Americans' Changing Lives (ACL) Study, 1986-2011. We used data on 1219 individuals (847 whites and 372 blacks) who had data on baseline N in 1986 and future MDD in 2011. The main predictor of interest was baseline N, measured using three items in 1986. The main outcome was 12 months MDD measured using the Composite International Diagnostic Interview (CIDI) at 2011. Covariates included baseline demographics (age and gender), socioeconomics (education and income), depressive symptoms [Center for Epidemiologic Studies Depression Scale (CES-D)], stress, health behaviors (smoking and driking), and physical health [chronic medical conditions, obesity, and self-rated health (SRH)] measured in 1986. Logistic regressions were used to test the predictive role of baseline N on subsequent risk of MDD 25 years later, net of covariates. The models were estimated in the pooled sample, as well as blacks and whites. In the pooled sample, baseline N predicted subsequent risk of MDD 25 years later (OR = 2.23, 95%CI = 1.14-4.34), net of covariates. We also found a marginally significant interaction between race and baseline N on subsequent risk of MDD (OR = 0.37, 95% CI = 0.12-1.12), suggesting a stronger effect for whites compared to blacks. In race-specific models, among whites (OR = 2.55; 95% CI = 1.22-5.32) but not blacks (OR = 0.90; 95% CI = 0.24-3.39), baseline N predicted subsequent risk of MDD. Black-white differences in socioeconomics and physical health could not explain the racial differences in the link between N and MDD. Blacks and whites differ in the salience of baseline N as a psychological determinant of MDD risk over a long period of time. This finding supports the cultural moderation hypothesis and is in line with other previously reported black-white differences in social, psychological, and medical correlates of negative affect and depression. abstract_id: PUBMED:30705780 Social Relations and Health: Comparing "Invisible" Arab Americans to Blacks and Whites. This paper establishes preliminary benchmarks by comparing average values of social relations and health among Arab Americans, Blacks and Whites. Specifically, we expand traditional racial/ethnic categories to distinguish Arab Americans, historically and legally considered White. Data come from a unique random digit dial (RDD) sample of Arab Americans (N=96), Blacks (N=102) and Whites (N=100) from metro-Detroit collected in 2011, ranging in age from 19-89. Analysis of covariance (ANCOVA) was conducted to compare health, network structure, composition and support quality. Findings established preliminary benchmarks showing that Arab Americans reported more depressive symptoms (7.6) than Whites (5.2), but no difference in physical health. Arab Americans also reported more contact frequency (4.4) than Blacks (4.1) and Whites (4.0), yet lower proportions of networks comprised of the same ethnicity (77%) compared to Blacks (96%) and Whites (97%). Unpacking the White category to identify Arab Americans in a comparative analysis identified benchmarks to show how Arab American health and social relations are distinct from Blacks and Whites, yielding unique avenues for thinking about new ways to conceptualize how race and social relations impact health disparities. abstract_id: PUBMED:34898768 Disaggregating Heterogeneity among Non-Hispanic Whites: Evidence and Implications for U.S. Racial/Ethnic Health Disparities. Research has made strides in disaggregating health data among racial/ethnic minorities, but less is known about the extent of diversity among Whites. Using logistic regression modeling applied to data on respondents aged 40+ from the 2008 to 2016 American Community Survey, we disaggregated the non-Hispanic White population by ancestry and other racial/ethnic groups (non-Hispanic Black, non-Hispanic Asian, and Hispanic) by common subgroupings and examined heterogeneity in disability. Using logistic regression models predicting six health outcome measures, we compared the spread of coefficients for each of the large racial/ethnic groups and all subgroupings within these large categories. The results revealed that health disparities within the White population are almost as large as disparities within other racial groups. In fact, when Whites were disaggregated by ancestry, mean health appeared to be more varied among Whites than between Whites and members of other racial/ethnic groups in many cases. Compositional changes in the ancestry of Whites, particularly declines in Whites of western European ancestry and increases in Whites of eastern European and Middle Eastern ancestry, contribute to this diversity. Together, these findings challenge the oft-assumed notion that Whites are a homogeneous group and indicate that the aggregate White category obscures substantial intra-ethnic heterogeneity in health. abstract_id: PUBMED:28763017 Positive and Negative Affect More Concurrent among Blacks than Whites. Background: While positive and negative affect are inversely linked, people may experience and report both positive and negative emotions simultaneously. However, it is unknown if race alters the magnitude of the association between positive and negative affect. The current study compared Black and White Americans for the association between positive and negative affect. Methods: We used data from MIDUS (Midlife in the United States), a national study of Americans with an age range of 25 to 75. A total number of 7108 individuals were followed for 10 years from 1995 to 2004. Positive and negative affect was measured at baseline (1995) and follow-up (2004). Demographic (age and gender), socioeconomic (education and income) as well as health (self-rated health, chronic medical conditions, and body mass index) factors measured at baseline were covariates. A series of linear regressions were used to test the moderating effect of race on the reciprocal association between positive and negative affect at baseline and over time, net of covariates. Results: In the pooled sample, positive and negative affect showed inverse correlation at baseline and over time, net of covariates. Blacks and Whites differed in the magnitude of the association between positive and negative affect, with weaker inverse associations among Blacks compared to Whites, beyond all covariates. Conclusion: Weaker reciprocal association between positive and negative affect in Blacks compared to Whites has implications for cross-racial measurement of affect and mood, including depression. Depression screening programs should be aware that race alters the concordance between positive and negative affect domains and that Blacks endorse higher levels of positive affect compared to Whites in the presence of high negative affect. abstract_id: PUBMED:27872599 Confirmatory Factor Analysis of the 12-Item Center for Epidemiologic Studies Depression Scale among Blacks and Whites. Background: The Center for Epidemiologic Studies Depression (CES-D) scale is one of the most widely used tools to measure depressive symptoms in epidemiological studies. Given the importance of cross-racial measurement equivalence of the CES-D scale for research, we performed confirmatory factor analysis (CFA) of the 12-item CES-D in a nationally representative sample of Black and White adults in the United States. Methods: We used data from the National Survey of American Life (NSAL), 2001-2003. A total number of 3570 Blacks (African-Americans) and 891 non-Hispanic Whites were included in the present study. CFA was carried out on the 12-item CES-D scale using multi-group structural equation modeling. Results: For both Blacks and Whites, the best fitting model was found to be the 3-factor model, indicating invariance of factor structure between Blacks and Whites. A statistically different fit of the models with and without constraints indicated lack of invariance of factor loadings between Blacks and Whites. Some of the positive (i.e., "as good" and "hopeful") and interpersonal (i.e., "people were unfriendly") items showed poor loadings, even in the 3-factor solution that allowed separate domains for positive affect, negative affect, and interpersonal problems. Despite the good fit of our final model, more items (i.e., "as good," "hopeful," "keeping mind," and "everything effort") had poorer loadings in Blacks than Whites (i.e., "as good"). Conclusion: There is invariance in factor structure but lack of invariance in factor/item loadings between Blacks and Whites. These findings have implications for cross-racial studies of depressive symptoms using CES-D scale among Blacks and Whites. Further research is warranted to scrutinize the role of socioeconomics and culture in explaining the lack of invariance of the CES-D scale between Blacks and Whites. abstract_id: PUBMED:30980610 Fifteen-year gap between oral health of blacks and whites in the USA. Aim: The purpose of the present study was to consider racial differences across three survey questions related to adult oral health in the National Oral Health Surveillance System (NOHSS) between 1999 and 2014. Methods: The NOHSS tracks population-level progress made toward improved oral health in the USA. NOHSS adult indicators of oral health data were extracted for 1999-2014, and trends were studied by race. Results: Among adults ≥18 years in 1999, 70.7% of whites and 60.3% of blacks had visited a dentist in the past year. By 2014, 68.4% of whites and 56.5% of blacks had visited a dentist in the past year. Among adults aged ≥65 years in 1999, 24.5% of whites and 33.2% of blacks had lost all natural teeth due to dental caries or gingival/periodontal disease. By 2014, 14.3% of whites and 22.1% of blacks had lost all natural teeth. Conclusions: There have been overall gains in key indicators of oral health in the USA; however, blacks remain far behind whites in the NOHSS adult oral health indicators. abstract_id: PUBMED:32133428 Neuroticism polygenic risk score predicts 20-year burden of depressive symptoms for Whites but not Blacks. Background: Black-White differences are reported in social, psychological, behavioral, medical, and biological correlates of depression. This study was conducted to compare Black and White older adults for the association between neuroticism polygenic risk score (N-PRS) and chronicity of depressive symptoms over 20 years. Methods: Data came from the Health and Retirement Study (HRS), 1990 - 2012, a nationally representative sample of Americans above age 50. Current analysis followed 9,249 individuals (7,924 Whites and 1,325 Blacks) for up to 22 years. Depressive symptoms were measured every two years between 1992 and 2012 using the 8-item Center for Epidemiological Studies-Depression Scale (CES-D-8). The independent variable was N-PRS. The dependent variable was average depressive symptoms between 1992 and 2012. Linear regression was used for data analysis. Results: In the pooled sample, higher N-PRS was associated with higher average depressive symptoms over the 20-year follow up period [b=0.01, 95%CI=0.00 to 0.04], net of all covariates. We also found an interaction between race and N-PRS [b=-0.02, 95%CI=-0.03 to 0.00], suggesting a stronger effect of N-PRS on 20-year average depressive symptoms for Whites than Blacks. Based on our race-specific linear regression models, higher N-PRS was associated with higher depressive symptoms from 1992 to 2012 for Whites [b=0.01, 95%CI=0.01 to 0.02] but not Blacks [b=0.00, 95%CI=-0.02 to 0.02]. Conclusion: Black and White older adults may differ in the salience of the existing N-PRS for depressive symptoms, which better reflects the burden of depression for Whites than Blacks. This may be because the existing PRSs are derived from mostly or exclusively White samples, limiting their applicability in other race groups. Racial variation in psychosocial, clinical, and biological correlates of depression needs further research. abstract_id: PUBMED:35951655 Alcohol use disorder among Whites and Hispanics on and off the U.S./Mexico border in California. This paper examines whether U.S./Mexico border residence in California is related to the prevalence of DSM-5 alcohol use disorder (AUD) among Whites and Hispanics. Household survey data were obtained from 1,209 adults (59.7% female) 18 to 39 years of age resident in four counties in California: Imperial on the U.S./Mexico border; and Kern, Tulare, and Madera in California's Central Valley. Households were selected using a list assisted sample, with data collected on the phone or online. Results show that AUD rates were not different between border and non-border location and between Whites and Hispanics. AUD was negatively associated with higher income ($20,000 to $60,000: AOR=.38; 95%CI=.17-.86; p&lt;.01-more than $60,000: AOR=.27; 95%CI: .09-.81; p&lt;.01) and poor risk perception (AOR=.86; 95%CI=.78-.94; p&lt;.01). AUD was positively associated with continued volume of drinking (AOR = 1.05; 95%CI = 1.01-1.09; p&lt;.01), drinking in Mexico (AOR = 4.28; 95%CI = 1.61-11.36; p&lt;.01), marijuana use (AOR = 4.11; 95%CI = 1.73-9.77; p&lt;.01), and impulsivity (AOR = 1.55; 95%CI = 1.23-1.94). Efforts to prevent AUD in the population in California, and especially among those who live close to the border with Mexico, should take into consideration factors such as impulsivity, marijuana use, border crossing to drink in Mexico, all of which increased risk of AUD. abstract_id: PUBMED:29342081 Age and Gender Differences in Psychological Distress among African Americans and Whites: Findings from the 2016 National Health Interview Survey. Previous studies report a race and mental health paradox: Whites score higher on measures of major depression compared to African Americans, but the opposite is true for psychological distress (i.e., African Americans score higher on distress measures compared to Whites). Independently, race, age, and gender outcomes for psychological distress are well documented in the literature. However, there is relatively little research on how psychological distress interferes with the lives of African Americans and Whites at the intersection of their various race, age, and gender identities. This study uses data from the 2016 National Health Interview Survey to examine age and gender differences in psychological distress and how much psychological distress interferes with the lives of African Americans and Whites. Our study findings are contrary to the paradox such that young White women (M = 3.36, SD = 1.14) and middle-aged White men (M = 2.55, SD = 3.97) experienced higher psychological distress than all other race, age, and gender groups. Psychological distress interference was relatively high among the high distress groups, except for older African American men (M = 1.73, SD = 1.05) and young African American women (M = 1.93, SD = 0.95). Implications for studies that consider cultural experiences of psychological distress, and how it impacts different demographic groups are discussed. Answer: The research suggests that the health outcomes and experiences of Whites in the USA are not uniform and that there is significant heterogeneity within this racial group. For instance, a study found that the effects of neuroticism on the risk of major depression differed between Whites and Blacks, with neuroticism being a stronger predictor for Whites (PUBMED:28934128). Another study highlighted that when disaggregating the non-Hispanic White population by ancestry, the health disparities within Whites were almost as large as those within other racial groups, challenging the notion that Whites are a homogeneous group (PUBMED:34898768). Additionally, Arab Americans, who are historically and legally considered White, were found to have distinct health and social relations compared to Blacks and Whites, further complicating the understanding of racial health disparities (PUBMED:30705780). Moreover, the association between positive and negative affect was found to be different for Blacks and Whites, with Blacks showing weaker inverse associations, which has implications for cross-racial measurement of mood and depression (PUBMED:28763017). Furthermore, confirmatory factor analysis of the CES-D scale used to measure depressive symptoms showed lack of invariance in factor loadings between Blacks and Whites, suggesting that socioeconomics and culture might play a role in how depressive symptoms are experienced and reported across races (PUBMED:27872599). The study on neuroticism polygenic risk score also indicated that the existing genetic risk scores, which are derived mostly from White samples, may not be as applicable to other racial groups, such as Blacks (PUBMED:32133428). In summary, the research indicates that there is considerable diversity within the White population in the USA, and health outcomes cannot be generalized across this group. Factors such as ancestry, socioeconomic status, culture, and genetics contribute to the heterogeneity in health experiences and outcomes among Whites (PUBMED:28716453, PUBMED:34898768, PUBMED:30705780, PUBMED:28763017, PUBMED:27872599, PUBMED:32133428).
Instruction: Does the use of a specialised paediatric retrieval service result in the loss of vital stabilisation skills among referring hospital staff? Abstracts: abstract_id: PUBMED:14500299 Does the use of a specialised paediatric retrieval service result in the loss of vital stabilisation skills among referring hospital staff? Aims: To compare the proportion of airway and vascular access procedures performed by referring hospital staff on critically ill children in two discrete time periods, before and after widespread use of a specialised paediatric retrieval service. Methods: Transport data were obtained from retrieval logs of all children for whom a paediatric retrieval team was launched in each of two time periods (October 1993 to September 1994; and October 2000 to September 2001). Results: The overall intubation rate was similar in the first and second time periods (83.9% v 79.1%). However, 31/51 (61%) retrieved children were intubated by referring hospital staff in 1993-94, compared to 227/269 (84%) in 2000-01. Referring hospital staff gained central venous access in 11% v 18% and arterial access in 22% v 19% of retrieved children in the first and second time periods respectively. This was in spite of a significant reduction in the proportion of children on whom these procedures were performed. Conclusion: Referring hospital staff are performing a greater proportion of initial airway and vascular access procedures undertaken in the stabilisation of sick children retrieved by a specialised paediatric retrieval team. The provision of this service has not resulted in the loss of vital skills at the local hospital. abstract_id: PUBMED:36033240 Paediatric admissions to a district general hospital adult intensive care unit over a six year period: A single centre retrospective case series, with review of practitioner confidence and competence in paediatric critical care. Background: Critically ill children presenting to district general hospitals (DGH) are admitted to adult intensive care units (AICUs) for stabilisation prior to transfer to paediatric intensive care units (PICUs). Current training in PICU for adult intensive care physicians is only three months. This single centre retrospective case series examines the case mix of children presenting to a DGH AICU and a multidisciplinary survey assesses confidence and previous experience, highlighting continued training needs for DGH AICU staff. Methods: all paediatric admissions to AICU and paediatric retrievals were reviewed over a 6-year period (2014-2019). Cases were identified from the Electronic Patient Record (EPR) and from data provided by the regional paediatric retrieval service. A questionnaire survey was sent to AICU doctors and nurses to assess confidence and competence in paediatric critical care. Results: Between 2014-2019, 284 children were managed by AICU. In total 35% of cases were &lt;1 y, 48% of cases were &lt;2 y and 64% of cases were &lt;5 y, and 166/284 (58%) children were retrieved. Retrieval reduced with increasing age (OR 0.49 [0.40-0.60], p &lt; 0.0001). The survey had an 82% response rate, and highlighted that only 13% of AICU nurses and 50% of doctors had received prior PICU training. Conclusion: At least one critically unwell child presents to the AICU each week. Assessment, stabilisation and management of critically unwell children are vital skills for DGH AICU staff, but confidence and competence are lacking. Formalised strategies are required to develop and maintain paediatric competencies for AICU doctors and nurses. abstract_id: PUBMED:14987782 The use of basic life support skills by hospital staff; what skills should be taught? Objectives: To assess the frequency of use of basic life support (BLS) skills among hospital staff of all disciplines. Design: Postal survey of 9600 teaching hospital staff. Participants: 3807 respondents from all disciplines. Main Outcome Measures: Frequency of attendance, and the use of BLS skills, at patients with cardiopulmonary arrest. Results: Most respondents reported having attended BLS training previously: 27.9% in the prior 6 months; 24.5% 6-12 months previously; 17.1% over 1 year ago; and 11.5% over 2 years ago. 17.1% reported never having received BLS training. 1.9% gave no valid response. Nearly half of all respondents had never attended a cardiopulmonary arrest. Among those most likely to have attended, i.e. qualified nursing and medical staff, the median frequency of attendance was less than once per year. Ventilation delivered using a pocket mask or bag-valve-mask was reported by 9.4 and 29.2% of respondents, respectively. Less than 7% reported the use of mouth-to-mouth ventilation. Only among qualified nursing (8.8%) and medical (24.7%) staff did this proportion exceed 5%. The vast majority of non-qualified nursing staff (84.9%), allied health professionals (86%) and administrative and clerical staff (98%) had used neither chest compressions nor mouth-to-mouth ventilation. Conclusions: Some skills taught during BLS training are used infrequently in the in-hospital situation. The likelihood of attendance at arrest events and of the use of BLS skills is extremely low among some identified professional groups. BLS skills teaching should be targeted at those groups most likely to actually use them in order to make best use of the resources available. abstract_id: PUBMED:19666940 Stabilisation of critically ill children at the district general hospital prior to intensive care retrieval: a snapshot of current practice. Objective: To describe current practice during stabilisation of children presenting with critical illness to the district general hospital (DGH), preceding retrieval to intensive care. Design: Observational study using prospectively collected transport data. Setting: A centralised intensive care retrieval service in England and referring DGHs. Patients: Emergency transports to intensive care during 2-month epochs from 4 consecutive years (2005-2008). Interventions: None. Main Outcome Measures: Proportion of key airway, breathing, and circulatory and neurological stabilisation procedures, such as endotracheal intubation, mechanical ventilation, vascular access, and initiation of inotropic agents, performed by referring hospital staff prior to the arrival of the retrieval team. Results: 706 emergency retrievals were examined over a 4-year period. The median age of transported children was 10 months (IQR, 18 days to 43 months). DGH staff performed the majority of endotracheal intubations (93.7%, CI 91.3% to 95.5%), initiated mechanical ventilation in 76.9% of cases (CI 73.0% to 80.4%), inserted central venous catheters frequently (67.4%, CI 61.7% to 72.6%), and initiated inotropic agents in 43.7% (CI 36.6% to 51.1%). The retrieval team was more likely to perform interventions such as reintubation for air leak, repositioning of misplaced tracheal tubes, and administration of osmotic agents for raised intracranial pressure. The performance of one or more interventions by the retrieval team was associated with severity of illness, rather than patient age, diagnostic group, or team response time (OR 3.62, 95% CI 1.47 to 8.92). Conclusions: DGH staff appropriately performs the majority of initial stabilisation procedures in critically ill children prior to retrieval. This practice has not changed significantly for the past 4 years, attesting to the crucial role played by district hospital staff in a centralised model of paediatric intensive care. abstract_id: PUBMED:31264388 Endotracheal intubation by a specialised paediatric retrieval team. Objectives: Intubation of critically ill children is an infrequent procedure, and is associated with significant risk. We set out to describe intubations carried out by the Children's Health Queensland Retrieval Service (CHQRS), with the intention to identify areas for improvement in the performance of intubation in the retrieval setting. Methods: Paediatric patients undergoing transport while intubated were identified, and cases where intubation occurred after the arrival of the CHQRS team were examined. Results: Over the study period of January 2015 to September 2018, 498 intubated retrievals were undertaken; 85 patients were intubated after the arrival of CHQRS; the age range was 1 day to 16.5 years (median 0.5, interquartile range [IQR] 0.11-3 years). The median weight was 6.2 kg (IQR 3.7-16.5 kg). The pathology requiring intubation included respiratory 36 (42.3%), sepsis 21 (24.7%), neurological 11 (12.9%) and trauma 7 (8.2%). A total of 470 of 498 (94.4%) of intubated patients were from regional referral or tertiary hospitals, 28 of 498 (5.6%) were from rural and remote facilities. Of 85 patients, 57 (67.1%) were intubated by CHQRS and 28 (32.9%) were intubated by a doctor from the referring facility. The CHQRS team was more likely to perform the intubation in smaller children (median weight 5.0 vs 9.9 kg, P = 0.03). The mean scene time was 2.8 h. The scene time was shorter if the intubation was performed by CHQRS (mean 2.6 h, median 2.5, IQR 1.8-3.3; median 3, IQR 2.2-3.9; P = 0.048). The scene time was shorter when the intubation was predicted from tasking information (2.6 vs 3.1 h; P = 0.03). Conclusion: Paediatric endotracheal intubation is an infrequent procedure in our service. An airway registry could improve documentation and gather information to identify specific training requirements and areas for practice improvement. abstract_id: PUBMED:12706736 "The worst journey of our lives": parents' experiences of a specialised paediatric retrieval service. In this retrospective study, a sample of 233 parents were surveyed, by means of a postal questionnaire, about their experience of a specialised paediatric retrieval service (median time interval after child's retrieval=10 months). Although all parents were routinely provided with written information about the retrieval service, only 46% remember receiving it. Also, although generally high, satisfaction ratings relating to the period of the child's transit were significantly lower (P&lt;0.005) than those relating to the other stages of the transfer. Two main reasons were given by parents for their dissatisfaction: distress at being separated from their critically ill child and logistic problems locating and parking at the new hospital. Implications for future service provision are considered. abstract_id: PUBMED:27473661 Which skills boost service provider confidence when managing people presenting with psychiatric emergencies? The way service seekers interact with the staff at emergency services has been shown to influence the standard of care, especially in the case of certain psychiatric manifestations. Staff reactions to psychiatric complaints have been linked to their comfort dealing with these types of service users as well as their competencies understanding the illness. It is therefore vital to understand which skills increase confidence in treating psychiatric emergencies. Twenty-six open-ended convergent interviews were conducted with staff working in a psychiatric emergency department. Thematic analysis was used to analyze the data. Participants reported several non-technical skills which developed from exclusively serving people with psychiatric emergencies: 1) Vigilance allowed staff to be sensitive to minor changes in behavior which precede psychiatric emergencies. 2) The ability to negotiate and find tangible solutions was particularly important when dealing with psychiatric complaints which may not have tangible resolutions. 3) The ability to appraise social support networks allowed staff to plan follow-up actions and ensure continuity of care when support was available. 4) The ability to self-reflect allowed participants to learn from their experience and avoid burnout, frustration, and fatigue. Participants also reported several other clinical skills which they gained during training, including teamwork, de-escalating techniques and risk assessment. Tentatively speaking, these skills improve staff's confidence when treating psychiatric emergencies. Certain skills may be generalized to staff working in medical emergency departments who frequently encounter psychiatric complaints. abstract_id: PUBMED:36815059 Antimicrobial use among paediatric inpatients in a Nigerian tertiary hospital: A three-year point prevalence survey. Aim: Antimicrobials are among the most widely prescribed therapeutic agents among paediatric population. Irrational use of these agents increases the risk of antimicrobial resistance. We described the prevalence and pattern of antimicrobial use among paediatric inpatients from 2017 to 2019. Subject And Method: The study was a repeated point prevalence survey over a 3-year period using the global point prevalence standardized tools among all paediatric inpatients. The prevalence of antimicrobial use, the prescription indicators and patterns of antimicrobial use were estimated. Result: Among 191 paediatric inpatients assessed, the 3-year period prevalence antimicrobial use was 85.9% (164/191) with prevalence of 80.6% in 2017, 94.6% in 2018, and 83.6% in 2019. Antimicrobial agents used ranged from one agent (20.1%) to five different agents (5.5%). Parenteral route (66.6%) was the preferred route of administration. The reason for the use of antimicrobial agents (92.6%) and the stop/review date (99.5%) were mostly well documented. Only 4.5% of the antimicrobial use were targeted. There were no antimicrobial guidelines or policy guiding the use of antimicrobial agents, except the national guideline on the treatment of malaria and tuberculosis. Ceftriaxone, a third generation cephalosporin was the most commonly used agent across the period under review. Community-acquired infection was the commonest indication for antimicrobial use. Conclusion: Our findings of high antimicrobial prevalence has raised the attention for the need to develop hospital-based antimicrobial guideline and antimicrobial stewardship program to protect the vulnerable children, their contacts and the environment from the impact of antimicrobial resistance. abstract_id: PUBMED:7580489 Morbidity and severity of illness during interhospital transfer: impact of a specialised paediatric retrieval team. Objective: To evaluate the morbidity and severity of illness during interhospital transfer of critically ill children by a specialised paediatric retrieval team. Design: Prospective, descriptive study. Setting: Hospitals without paediatric intensive care facilities in and around the London area, and a paediatric intensive care unit at a tertiary centre. Subjects: 51 critically ill children transferred to the paediatric intensive care unit. Main Outcome Measures: Adverse events related to equipment and physiological deterioration during transfer. Paediatric risk of mortality score before and after retrieval. Therapeutic intervention score before and after arrival of retrieval team. Results: Two (4%) patients had preventable physiological deterioration during transport. There were no adverse events related to equipment. Severity of illness decreased during stabilisation and transport by the retrieval team, suggested by the difference between risk of mortality scores before and after retrieval (P &lt; 0.001). The median (range) difference between the two scores was 3.0 (-6 to 17). Interventions during stabilisation by the retrieval team increased, demonstrated by the difference between intervention scores before and after retrieval, median (range) difference between the two scores being 6 (-8 to 38) (P &lt; 0.001). Conclusions: Our study indicates that a specialised paediatric retrieval team can rapidly deliver intensive care to critically ill children awaiting transfer. Such children can be transferred to a paediatric intensive care unit with minimal morbidity and mortality related to transport. There was no deterioration in the clinical condition of most patients during transfer. abstract_id: PUBMED:30116679 Referring Service Effect on Muscle Biopsy Diagnosis and Management in Myopathy. Introduction Requests for muscle biopsy for evaluation of myopathy originate from a variety of different specialties. It is unknown whether the specialty of the referring service affects the likelihood of diagnostic biopsy or the therapeutic usefulness of a biopsy. Methods We reviewed 106 consecutive muscle biopsies requested by healthcare providers in neurology, rheumatology, family medicine, oncology, and gastroenterology. We tested for an association between referring service and whether the biopsy yielded a definitive pathology result or provided therapeutically useful results. Results Half of the biopsies (49%) returned definitive pathology and 88% of the biopsies provided information that contributed to therapeutic decisions. The diagnostic yield for muscle biopsies referred by different services was not statistically significant (p-value 0.1344) nor was the therapeutic yield statistically significant for muscle biopsies referred by different services (p-value 0.5525). Discussion The specialty of the service that requests a muscle biopsy does not influence the likelihood of obtaining a definitive pathological diagnosis or therapeutically useful information. Other factors may be more important in determining the likelihood of obtaining a clinically useful biopsy result. Answer: The use of a specialised paediatric retrieval service does not result in the loss of vital stabilisation skills among referring hospital staff. On the contrary, referring hospital staff are performing a greater proportion of initial airway and vascular access procedures undertaken in the stabilisation of sick children retrieved by a specialised paediatric retrieval team. This is evidenced by the fact that in 1993-94, 61% of retrieved children were intubated by referring hospital staff, compared to 84% in 2000-01. Additionally, referring hospital staff gained central venous access in 11% versus 18% and arterial access in 22% versus 19% of retrieved children in the first and second time periods, respectively. This was despite a significant reduction in the proportion of children on whom these procedures were performed. Therefore, the provision of a specialised paediatric retrieval service has not resulted in the loss of vital skills at the local hospital (PUBMED:14500299). Moreover, district general hospital (DGH) staff appropriately performs the majority of initial stabilisation procedures in critically ill children prior to retrieval, which has not changed significantly over the past 4 years, attesting to the crucial role played by district hospital staff in a centralised model of paediatric intensive care (PUBMED:19666940). In summary, the evidence suggests that the introduction and use of specialised paediatric retrieval services have not led to a degradation of vital stabilisation skills among referring hospital staff. Instead, these staff members continue to perform a significant proportion of critical initial stabilisation procedures.
Instruction: Lobulated margination of liver hemangiomas: Is this a definitive feature? Abstracts: abstract_id: PUBMED:27130984 Lobulated margination of liver hemangiomas: Is this a definitive feature? Purpose: To evaluate the occurrence rate of lobulated margination of hepatic hemangiomas. Methods: The study population included 585 hemangiomas in 261 patients (161 females; mean age: 51.9years). Two readers independently reviewed all cases. Results: Hemangiomas with lobulated margins accounted for 74.7% of all lesions. The incidence of lobulated margins was significantly higher (P&lt;.001) in medium- and large-sized hemangiomas (77.6% and 99.5%, respectively). The majority of hemangiomas with type-3 enhancement showed lobulated margins (83.5%) (P&lt;.001). Conclusion: Virtually, all hemangiomas&gt;18mm show lobulated margins. The majority of hemangiomas with type-3 enhancement show lobulated margins. Lack of lobulated margins in larger lesions lowers the likelihood ratio for being hemangiomas. abstract_id: PUBMED:33181737 Giant Liver Hemangioma Mimicking a Hydronephrotic Kidney on 99mTc-DTPA Scintigraphy. We report a case of a 56-year-old woman who underwent Tc-DTPA renal scintigraphy for the evaluation of a right pelviureteric junction calculi, which incidentally showed radiotracer uptake in a giant liver hemangioma. The initial scintigraphic images showed a large lobulated lesion with peripheral uptake of radiotracer at the right abdominal region, resembling a large hydronephrotic kidney. However, the pattern of progression of the radiotracer, coupled with the clinical history and available corresponding radiologic images, confirmed the uptake to be that of a giant liver hemangioma. abstract_id: PUBMED:3900307 Nuclear magnetic resonance imaging of liver hemangiomas. Nine patients with cavernous hemangioma of the liver were examined by nuclear magnetic resonance imaging (MRI) with a 0.5 T superconductive magnet. Spin-echo technique was used with varying time to echo (TE) and repetition times (TR). Results were compared with 99mTc red blood cell (RBC) scintigraphy, computed tomography (CT), echography, and arteriography. Four illustrated cases are reported. It was possible to establish a pattern for MRI characteristics of cavernous hemangiomas; rounded or smooth lobulated shape, marked increase in T1 and T2 values as compared with normal liver values. It is concluded that, although more experience is necessary to compare the specificity with that of ultrasound and CT, MRI proved to be very sensitive for the diagnosis of liver hemangioma, especially in the case of small ones which may be missed by 99mTc-labeled RBC scintigraphy. abstract_id: PUBMED:3510934 Clinical significance of focal echogenic liver lesions. During a 4-year period, 53 focal echogenic liver lesions were demonstrated by sonography in 41 patients, in whom there was no evidence of metastatic origin. Most of the lesions were hemangiomas. One of the purposes of this study was to determine the characteristic ultrasound features for liver hemangioma. Small (less than 2 cm), homogeneous, echogenic, well-circumscribed, subcapsular lesions almost prove their hemangiomatous nature. Lesions with a diameter of more than 2 cm are usually more lobulated and heterogeneous. They are located more centrally in the liver and nearly all show a close anatomical relation with 1 of the hepatic veins. Very large lesions (greater than 5 cm) with a heterogeneous and irregular aspect suggest focal nodular hyperplasia, which must be proven by a Tc-isotopic liver scan. abstract_id: PUBMED:20598089 Living donor liver transplantation for giant cavernous hemangioma of liver in a child. A 10-yr-old girl presented with a seven-month history of upper abdominal discomfort and weight loss. Physical examination revealed an abdominal lump in the right hypochondrium and epigastrium. Ultrasound examination and a computerized tomographic scan showed a large lobulated mass arising from segments I, 1V, and VIII of liver with arteriovenous shunting and multiple small masses in segments VI and VII. An initial diagnosis of hemangioendothelioma with metastasis was made elsewhere following which she received chemotherapy. She had persistent abdominal discomfort because of which she became dependent on narcotics. The patient had fever because of tumor necrosis and also developed peripheral neuropathy. Finally, owing to progressively worsening of symptoms, she underwent left lobe living donor liver transplantation. Histopathological examination showed the mass to be a cavernous hemangioma, and the patient is now well. abstract_id: PUBMED:20437137 Sclerosed hemangioma of the liver: possible diagnostic value of diffusion-weighted magnetic resonance imaging. We present a case of a sclerosed hemangioma (SH) of the liver that showed a high apparent diffusion coefficient (ADC) value. The patient was undergoing preoperative evaluation for a metastatic breast cancer lesion when a liver mass with a diameter of 3 cm was found. It was described as a heterogeneously hyperechoic mass on ultrasonography and as a well-defined, lobulated mass with early peripheral enhancement and internal heterogeneous enhancement in the delayed phase on computed tomography. The fat-suppressed T2-weighted images demonstrated a heterogeneously hyperintense mass, which showed an ADC value of 2.01 x 10(-3) mm(2)/s. Liver metastasis and cholangiocellular carcinoma could not be excluded based on the imaging findings. After surgery, a definite diagnosis of SH was obtained. Microscopically, many hyalinized portions with poor cellular and fibrous components were observed in the tumor, and this hyalinization accompanied with liquiform degeneration, which may have been one of the causes of the high ADC value. We discuss the diagnostic value of diffusion-weighted imaging for SH of the liver. abstract_id: PUBMED:3500603 Hemangioma of the liver: heterogeneous appearance on T2-weighted images. We reviewed the gross photographs of 19 patients with hemangioma of the liver. On cut sections, all 19 tumors were inhomogeneous. In 17 cases (89%) there were areas of predominantly central fibrosis. Lobulations were present in 10 cases (53%). Areas of thrombosis (nine cases), calcification (four cases), and hemorrhage (four cases) were also present. We also reviewed 15 hemangiomas of the liver in 12 patients studied by MR imaging. Gross photographs and MR scans were available in three hemangiomas, thus allowing direct radiologic-pathologic correlation. Twelve (80%) of the hemangiomas were inhomogeneous on T2-weighted images and three (20%) were homogeneous. Other characteristics were sharp margins in all 15 cases (100%), lobulations in 11 cases (73%), peripheral location in 10 cases (67%), and internal septations in four cases (27%). We concluded that (1) hemangiomas of the liver are frequently inhomogeneous on cut sections; (2) heterogeneity in T2 images is commonly seen and should not exclude hemangioma; and (3) other signs, such as sharp margins, lobulated contour, and peripheral location, may be helpful in the MR diagnosis of hemangioma of the liver. abstract_id: PUBMED:21354897 CT image retrieval of the liver with intrahepatic lesions Objective: This paper presents a method for global feature extraction and the application of the boostmetric distance metric method for medical image retrieval. The global feature extraction method used the low frequency subband coefficient of the wavelet decomposition based on the non-tensor product coefficient for piecewise Gaussian fitting. The local features were extracted after semi-automatic segmentation of the lesion areas in the images in the database. The experimental verification of the method using 1688 CT images of the liver containing lesions of liver cancer, liver angioma, and liver cyst confirmed that this feature extraction method improved the detection rate of the lesions with good image retrieval performance. abstract_id: PUBMED:29735443 Differential diagnosis of hepatocellular carcinoma and hepatic hemangiomas based on radiomic features of gadoxetate disodium-enhanced magnetic resonance imaging Objective: To evaluate the feasibility of using radiomic features for differential diagnosis of hepatocellular carcinoma (HCC) and hepatic cavernous hemangioma (HHE). Methods: Gadoxetate disodium-enhanced magnetic resonance imaging data were collected from a total of 135 HCC and HHE lesions. The radiomic texture features of each lesion were extracted on the hepatobiliary phase images, and the performance of each feature was assessed in differentiation and classification of HCC and HHE. In multivariate analysis, the performance of 3 feature selection algorithms (namely minimum redundancy-maximum relevance, mRmR; neighborhood component analysis, NCA; and sequence forward selection, SFS) was compared. The optimal feature subset was determined according to the optimal feature selection algorithm and used for testing the 3 classifier algorithms (namely the support vector machine, RBF-SVM; linear discriminant analysis, LDA; and logistic regression). All the tests were repeated 5 times with 10-fold cross validation experiments. Results: More than 50% of the radiomic features exhibited strong distinguishing ability, among which gray level co-occurrence matrix feature S (3, -3) SumEntrp showed a good classification performance with an AUC of 0.72 (P&lt;0.01), a sensitivity of 0.83 and a specificity of 0.57. For the multivariate analysis, 15 features were selected based on the SFS algorithm, which produced better results than the other two algorithms. Testing of these 15 selected features for their average cross-validation performance with RBF-SVM classifier yielded a test accuracy of 0.82∓0.09, an AUC of 0.86∓0.12, a sensitivity of 0.88∓0.11, and a specificity of 0.76∓0.18. Conclusion: The radiomic features based on gadoxetate disodium-enhanced magnetic resonance images allow efficient differential diagnosis of HCC and HHE, and can potentially provide important assistance in clinical diagnosis of the two diseases. abstract_id: PUBMED:23887160 Anastomosing hemangioma of the liver and gastrointestinal tract: an unusual variant histologically mimicking angiosarcoma. Anastomosing hemangioma, a benign vascular neoplasm histologically simulating angiosarcoma, is newly recognized and has been described primarily in the genitourinary tract. We have encountered this lesion in the liver and gastrointestinal tract, where it can be diagnostically challenging, especially in core biopsy. Herein, we described 6 cases of anastomosing hemangioma of the liver and gastrointestinal tract. They occurred in 4 women and 2 men, ranging in age from 48 to 71 years. The tumors ranged from 0.2 to 6 cm (median, 3.1 cm) and were grossly well demarcated with a gray-brown spongy appearance. Microscopically, they had a loosely lobulated architecture. At higher magnification, lesions consisted of anastomosing sinusoidal capillary-sized vessels with scattered hobnail endothelial cells. Mild cytologic atypia occurred in all cases. Mitoses were absent. Vascular thrombi were seen in 4 cases (66.7%) without necrosis. One tumor (16.7%) featured prominent extramedullary hematopoiesis and 1 (16.7%) hyaline globules. Immunohistochemistry results were available for 3 cases, and the lesions stained with CD34 and/or CD31. Five cases had clinical follow-up information; there were no recurrences or metastases (range, 8 to 96 mo; mean, 41 mo), and 1 patient received no follow-up after a benign diagnosis on her colon polyp. In summary, anastomosing hemangioma of the liver and gastrointestinal tract is a rare distinctive vascular neoplasm displaying overlapping features with well-differentiated angiosarcoma. Despite small numbers and limited follow-up information in our series, evidence to date supports that the lesion is benign. Awareness of this entity is essential to avoid overdiagnosis and unnecessary aggressive treatment. Answer: Lobulated margination is a common characteristic of liver hemangiomas, but it is not a definitive feature for all cases. The occurrence rate of lobulated margination of hepatic hemangiomas was evaluated in a study that included 585 hemangiomas in 261 patients. The results showed that hemangiomas with lobulated margins accounted for 74.7% of all lesions. The incidence of lobulated margins was significantly higher in medium- and large-sized hemangiomas (77.6% and 99.5%, respectively). The majority of hemangiomas with type-3 enhancement also showed lobulated margins (83.5%). The study concluded that virtually all hemangiomas larger than 18mm show lobulated margins, and the majority of hemangiomas with type-3 enhancement show lobulated margins. However, the lack of lobulated margins in larger lesions lowers the likelihood ratio for being hemangiomas (PUBMED:27130984). Other studies and case reports have also described lobulated margins in liver hemangiomas. For instance, a giant liver hemangioma was incidentally discovered during Tc-DTPA renal scintigraphy, which showed a large lobulated lesion with peripheral uptake of radiotracer (PUBMED:33181737). Additionally, nuclear magnetic resonance imaging (MRI) of liver hemangiomas has established a pattern for MRI characteristics of cavernous hemangiomas, which includes a rounded or smooth lobulated shape (PUBMED:3900307). Similarly, sonography has shown that lesions larger than 2 cm are usually more lobulated and heterogeneous (PUBMED:3510934). However, it is important to note that while lobulated margins are common, they are not exclusive to hemangiomas, and not all hemangiomas will present with this feature. For example, sclerosed hemangiomas may show a well-defined, lobulated mass with early peripheral enhancement and internal heterogeneous enhancement on computed tomography (PUBMED:20437137). Moreover, hemangiomas of the liver can be inhomogeneous on T2-weighted images, and while lobulations are frequently seen, they are not present in all cases (PUBMED:3500603).
Instruction: Do stone size and impaction influence therapeutic approach to proximal ureteral stones? Abstracts: abstract_id: PUBMED:19281124 Do stone size and impaction influence therapeutic approach to proximal ureteral stones? Background/aim: Primary therapeutic approach to lumbar ureteral stones is still contraversial. The aim of the study was to investigate the influence of stone impaction and size on the effectiveness of proximal ureteral stone lithotripsy. Methods: A total of 123 patients with proximal ureteral stones were investigated in this prospective study performed in a 10-month period. The patients were divided into the group I--86 patients treated with extracorporeal shock wave lithotripsy (ESWL) and the group II--37 patients treated with "Swiss" Lithoclast. In the group I, 49 stones (57%) were classified as impacted, while 20 stones (23.3%) were larger than 100 mm2. In the group II, 26 stones (70.3%) were impacted, and 11 stones (29.7%) were larger than 100 mm2. Stones were defined as impacted by the radiographic, echosonographic as well as endoscopic findings in the group II of patients. Stone size was presented in mm2. Chemical composition of stones were almost the same in both groups of the patients. Results: Generally, there was no statistically significant difference in the treatment success between the groups. However, stones larger than 100 mm2 were statistically more successfully treated endoscopically, while there was no statistical difference in the treatment success of impacted stones between these two groups. Conclusion: ESWL can by considered as primary first therapeutic approach in treatment of all proximal ureteral stones except for stones larger than 100 mm2 that should primarily be treated endoscopically. abstract_id: PUBMED:20622268 Influence of stone size, location and impaction on the success of ureteroscopic pneumolithotripsy. This study was conducted on 336 patients that had undergone ureteroscopic pneumolithotripsy for ureteric stones between August 2005 and June 2008. Patient data were analyzed with respect to stone localization, size and impaction. All ureterorenoscopic procedures were performed under general anesthesia using a 9.5F rigid ureteroscope. A plain film of the kidney ureter bladder was performed on the first postoperative day and week 4 to evaluate treatment efficacy. Patient age was 42, 63+/-15 (14-77) yr. Mean stone size for all groups was 9.77+/-4mm (5-20 mm). A total of 336 patients were treated with ureteroscopic pneumolithotripsy, 129 female and 207 male. The overall initial success rate was 85.71%. The success rate of ureteroscopic pneumolithotripsy, proximal, middle and distal ureter, for Stone location, 76.54%, 85.48%, 90.74%, for Stone size (&gt;10 mm) 54%, 77%, 88 % and for Stone impaction; 50%, 75,6%, 85,4% respectively. There was a statistically significant difference between stone free rates with respect to the proximal ureter stone, increasing stone size (&gt;10 mm) and stone impaction (p&lt;0.05) and a positive correlation with the operative complication rate. It is concluded that it is possible to achieve high success and low complication rates in pneumolithotripsy for distal ureteric stones. Nevertheless, the success rate decreases and complication rate increases with stone locations close to the proximal ureter. Complication rate increases with increasing stone size and/or impacted ureteral stones. abstract_id: PUBMED:34552456 Is there any predictive value of the ratio of the upper to the lower diameter of the ureter for ureteral stone impaction? Background: We aimed to determine if the ratio of the upper to the lower diameter of the ureter could have any predictive value for ureteral stone impaction. Materials And Methods: Patients who had a solitary unilateral ureteric stone, determined by noncontrast computerized tomography, were assessed if they had undergone ureteroscopic lithotripsy. A total of 111 patients, 84 males (76%), and 27 females (24%), were recruited to the study. Demographic data of the patients and preoperative radiological parameters based on noncontrast computerized tomography were recorded. The impaction status was also assessed during the operation. Results: Of the 111 patients, ureteral stones in 63 (57%) patients were determined to be impacted, and ureteral stones in 48 (43%) were nonimpacted. Impacted stones were more common in older patients, female patients, and patients with an American Society of Anesthesiologists score of 2. Conclusions: Significant relationships were found between the impaction status and transverse stone length, longest stone length, upper diameter of the ureter, ratio (upper diameter of the ureter/lower diameter of the ureter), and anteroposterior diameter of the pelvis. These parameters were higher in patients with impacted stones. abstract_id: PUBMED:32883100 Can CT-Based Stone Impaction Markers Augment the Predictive Ability of Spontaneous Stone Passage? Introduction: A number of clinical and radiological predictors of either stone impaction or ureteral stone passage (SP) have been proposed. We aimed at identifying the key predictors of successful SP by using readily available CT-based tools/measurements. Methods: Patients presenting to the emergency department from February 2017 to February 2018 with an acute unilateral ureteral stone confirmed on non-contrast CT and managed conservatively were followed for SP. Patients with renal impairment, sepsis or requiring emergent intervention were excluded. Patients were followed at 1 month to confirm SP (stone collection/repeat imaging) or failure of passage. The CT variables analyzed included: Stone factors [location, size, volume, HU density (HUD)], impaction factors [ureteral HUD above and below the stone, maximal ureteral wall thickness (UWT) at the stone site, contralateral UWT, and ureteral diameter above and below the stone]. Binary logistic regression analysis was performed to identify predictors of SP. Results: Forty-nine patients met study inclusion criteria, of whom 32 (65.3%) passed the stone without further intervention. Patients with successful passage were more likely to have smaller, lower volume and less dense stones located in the distal ureter (p &lt; 0.01). Lower ureteral HUD below the stone, lower maximal UWT, and lower ureteral diameter above the stone were associated with successful passage (p &lt; 0.01). On multivariable logistic regression analysis, only maximal UWT at the stone site was a significant independent predictor of SP outcome (p = 0.01). Youden's criterion identified 2.3 mm as the optimal UWT cut-off point, which will accurately predict SP with 82.4% sensitivity and 87.5% specificity. Conclusions: Maximal UWT at the stone site was the most significant predictor of successful passage in acute unilateral ureteral stones, with an optimal cut-off point of 2.3 mm. Further prospective studies are needed to accurately predict spontaneous SP. abstract_id: PUBMED:37606748 Medial deviation of the ureter is a new sign that could predict stone impaction: a pilot study. Objective: To assess the value medial deviation of the ureter at site of ureteric stone as a sign of ureteric stone impaction. Patients And Methods: All cases with medial deviation of the ureter at site of ureteric stones in our department over 4 years were enrolled in this pilot study. All cases were diagnosed with KUB and non-contrast CT (NCCT). Radiological and ureteroscopic findings were recorded. Results: A total of 32 patients with a single impacted stone in the proximal and middle third of the ureter were included in the study. Medial deviation of the ureter at the site of the stone was detected in the upper third of the ureter in 24 (75%) cases and in the middle third in 8 (25%) cases. There were mucosal polyps and mucosal erythema (inflammatory changes) seen by ureteroscopy in all cases (100%). Ureteroscopy was successfully completed with stone fragmentation in 23 (71.8%) patients: 8 of them needed ureteric catheter and 15 required JJ stent insertion. Failure of ureteroscopy with insertion of JJ stent was done in 5 (15.6%) patients. Removal of the stent and ureteroscopy was done after 4 weeks. Conclusion: We conclude from this study that medial deviation of the ureter is a new reliable radiological sign of ureteric stone impaction. abstract_id: PUBMED:36609272 Value of preoperative ureteral wall thickness in prediction of impaction of ureteric stones stratified by size in laser ureteroscopic lithotripsy. Objectives: To evaluate the role of preoperative UWT in the prediction of impaction of ureteral stones stratified according to stone size in ureteroscopic laser lithotripsy. Patient And Methods: This study included 154 patients submitted to URSL for ureteral stones. Radiological data comprised the presence of hydronephrosis, anteroposterior pelvic diameter (PAPD), proximal ureteric diameter (PUD), and maximum UWT at the stone site. Collected stone characteristics were stone size, side, number, site, and density. Results: The study included 154 patients subjected to URSL. They comprised 74 patients (48.1%) with impacted stones and 80 (51.9%) with non-impacted stones. Patients were stratified into those with stone size ≤ 10 mm and others with stone size &gt; 10 mm. In the former group, we found that stone impaction was significantly associated with higher PAPD, PUD, and UWT. In patients with stone size &gt; 10 mm, stone impaction was related to higher UWT, more stone number, and higher frequency of stones located in the lower ureter. ROC curve analysis revealed good power of UWT in discrimination of stone impaction in all patients [AUC (95% CI) 0.65 (0.55-0.74)] at a cut-off of 3.8 mm, in patients with stone size ≤ 10 mm [AUC (95% CI) 0.76 (0.61-0.91)] at a cut-off of 4.1 mm and in patients with stone size &gt; 10 mm [AUC (95% CI) 0.72 (0.62-0.83)] at a cut-off of 3.0 mm. Conclusions: Stratifying ureteric stones according to size would render UWT a more practical and clinically-oriented approach for the preoperative prediction of stone impaction. abstract_id: PUBMED:17459573 Impact of stone size, location, composition, impaction, and hydronephrosis on the efficacy of holmium:YAG-laser ureterolithotripsy. Objectives: The impact and outcome of holmium:YAG-laser (Ho:YAG) ureterolithotripsy in treating proximal and distal ureteral stones was investigated. Methods: A total of 543 patients harbouring proximal (n=194) or distal (n=349) ureteral stones underwent semirigid Ho:YAG ureterolithotripsy. The degree of hydronephrosis; stone size, location, impaction, and composition; and complication and stone-free rates were recorded. Results: Mean stone size for proximal and distal stones was 7.1+/-3.6 mm and 6.2+/-2.5 mm, respectively. The stone-free rate on the first postoperative day was 79.4% for proximal and 96.8% for distal stones (p&lt;0.0001). For proximal stones &lt;10mm and &gt; or =10mm, respectively, the stone-free rate was 80.5% and 74.3% (p=0.4) and for distal stones 97.3% and 94.3% (p=0.2). Stone-free rates for radio-opaque versus radiolucent stones in proximal stones were 79.6% versus 77.9% (p=0.8) and 97.6% versus 96.2% in distal stones (p=0.5). Impaction correlated significantly with stone-free rates (p&lt;0.0001). Stone-free rates for nonimpacted versus impacted proximal stones were 85.8% versus 67.2% (p=0.003) and for distal stones 99.2% versus 91.4% (p&lt;0.003), respectively. The presence or degree of hydronephrosis did not correlate with treatment success (p=0.4, p=0.8). The presence of intraoperative complications correlated significantly with proximal compared to distal ureteral stone location (p=0.004). Auxiliary measures in proximal versus distal stones were performed in 20.6% versus 2.9% (p&lt;0.0001). Conclusions: In this series, stone-free rates in Ho:YAG ureterolithotripsy were significantly higher in distal and nonimpacted stones but were independent of stone size and composition and the degree of hydronephrosis. Ureterolithotripsy in proximal stones was associated with higher intraoperative complication and retreatment rates compared to distal stones. abstract_id: PUBMED:26847338 A comparison between an in vitro ureteroscopic stone size estimation and the stone size measurement with the help of a scale on stone baskets. Introduction: Endoscopic treatment of ureter stones and renal calculi relies on the surgeon's estimation of the stone size for both lithotripsy and removal of stones or stone fragments. We therefore compared precision and reliability of the endoscopic estimation of stone size by the surgeon with measurements on a scale on a stone basket. Materials And Methods: Two surgeons (one high experienced and one low experienced) first estimated, then measured the size of 12 stones differing in size and color using different stone baskets (2.5, 3.0, 4.0 Ch) each via a semirigid renoscope in an artificial ureter under water repeatedly on two different days. All together, we had 288 measurements and 288 estimations. Results: On the whole, the accuracy of the estimation diminished with bigger stones. There is an increasing underestimation with increasing stone size. Factors, which significantly influence the estimation, are the operating surgeon, the color of the stone, the time sequence, and the size of the closed basket, which was held beside the stone. The accuracy of the measurement of the stone baskets is not as good as the estimation. The small 2.5-Ch basket is the most accurate in measuring big stones (&gt;6 mm), the 3.5 Ch in intermediate stones (3-6 mm), the big basket (4.0 Ch) in small stones (&lt;3 mm). Conclusion: This first attempt at validation of a scale on stone baskets shows different results for each basket which could be systematically improved. Until now, the estimation of the surgeons is better than the measurement, but it is also influenced by factors like the surgeon or the color of the stone. abstract_id: PUBMED:33133662 Is spinal anesthesia an alternative and feasible method for proximal ureteral stone treatment? Introduction: We investigated the clinical, operational, and pain parameters of patients who underwent semirigid ureterorenoscopy (sURS) under spinal anesthesia (SA) and general anesthesia (GA) for proximal ureter stones. Material And Methods: Patients treated with sURS after diagnosis of proximal ureter stones between January 2014 and May 2017 were reviewed retrospectively. The patients were divided into two groups (the SA group and the GA group) based on the type of anesthesia used. Perioperative variables and operation results were evaluated and compared. Success was defined as the patient being stone-free as observed on low-dose non-contrast computed tomography performed in the first month postoperatively. Results: The SA and GA groups had 40 and 32 patients, respectively. There were no statistically significant differences between the groups in terms of age (p = 0.593), gender (p = 0.910), average stone size (p = 0.056), side (p = 0.958), or density (p = 0.337). Based on the Clavien classification system, complication rates between the two groups were similar. The postoperative visual pain scale in the SA group was statistically significantly lower (p &lt;0.05) than in the GA group. Success rates in the SA and GA groups were found to be 90% (36/40) and 93.7% (30/32), respectively, with no significant difference between the groups (p = 0.819). Conclusions: Ureterorenoscopy, which is performed for proximal ureter stone treatment in adult patients, is a reliable surgical method that can be performed under both SA and GA. SA offers the advantage of reduced postoperative pain as compared to GA. abstract_id: PUBMED:31650853 Peri-Calculus Ureteral Thickness on Computed Tomography Predicts Stone Impaction at Time of Surgery: A Prospective Study. Introduction and Objectives: Ureteroscopic management of impacted ureteral stones poses multiple challenges. We examined whether ureteral thickness measured on preoperative noncontrast computed tomography (CT) is predictive of stone impaction at the time of ureteroscopy (URS). Methods: Thirty-eight patients with preoperative CT scan and undergoing URS for ureteral stones were prospectively studied. Ureteral thickness directly above, around (peri-calculus ureteral thickness [P-CUT]), and below the calculus were determined on CT scan. The cross-sectional surface area around the stone (P-CUT) was also calculated. Stones were considered impacted at surgery if contrast had difficulty passing, a guidewire could not pass, and the stone was visually impacted &gt;5 on a Likert scale. The surgeon determining intraoperative impaction and the individual measuring the ureteral thickness on CT were blinded. Results: Fourteen of the 38 patients were found to have an impacted ureteral stone at the time of surgery. Patients with an intraoperative finding of an impacted stone had significantly higher ureteral thickness above, around (P-CUT), below the calculus, and P-CUT surface area with all four p-values &lt;0.001. P-CUT surface area was independently predictive of impaction at the time of surgery on multivariate analysis. A mean P-CUT of 6.1 and 1.6 mm and P-CUT surface area of 92.6 and 15.4 mm2 were noted for the impacted and nonimpacted stones, respectively. Conclusion: Ureteral thicknesses above, around (P-CUT), and below the calculus are readily measurable on CT scan. P-CUT surface area can also be calculated for a stronger representation of ureteral thickness. These four parameters are predictive of ureteral stone impaction and useful for surgical planning and patient counseling. Answer: Yes, stone size and impaction do influence the therapeutic approach to proximal ureteral stones. Studies have shown that stones larger than 100 mm^2 are statistically more successfully treated endoscopically, while there is no statistical difference in the treatment success of impacted stones between extracorporeal shock wave lithotripsy (ESWL) and endoscopic treatment (PUBMED:19281124). Additionally, the success rate of ureteroscopic pneumolithotripsy decreases with stone locations close to the proximal ureter, increasing stone size (>10 mm), and stone impaction, with a positive correlation with the operative complication rate (PUBMED:20622268). Furthermore, certain preoperative radiological parameters based on noncontrast computerized tomography, such as the ratio of the upper to the lower diameter of the ureter, have been found to have significant relationships with the impaction status, with these parameters being higher in patients with impacted stones (PUBMED:34552456). CT-based markers, such as maximal ureteral wall thickness (UWT) at the stone site, have also been identified as significant predictors of spontaneous stone passage, with an optimal UWT cut-off point of 2.3 mm for predicting successful passage (PUBMED:32883100). Moreover, new radiological signs like medial deviation of the ureter at the site of the stone have been proposed as reliable indicators of ureteric stone impaction (PUBMED:37606748). Preoperative UWT has been suggested as a practical approach for the preoperative prediction of stone impaction, with its predictive value enhanced when stratifying ureteric stones according to size (PUBMED:36609272). In terms of treatment outcomes, stone-free rates in holmium:YAG-laser (Ho:YAG) ureterolithotripsy were significantly higher in distal and nonimpacted stones but were independent of stone size and composition and the degree of hydronephrosis (PUBMED:17459573). These findings collectively suggest that both stone size and impaction are important factors that influence the choice of therapeutic approach for proximal ureteral stones.
Instruction: Automated Cellient(™) cytoblocks: better, stronger, faster? Abstracts: abstract_id: PUBMED:24943912 Automated Cellient(™) cytoblocks: better, stronger, faster? Objective: Cytoblocks (CBs), or cell blocks, provide additional morphological detail and a platform for immunocytochemistry (ICC) in cytopathology. The Cellient(™) system produces CBs in 45 minutes using methanol fixation, compared with traditional CBs, which require overnight formalin fixation. This study compares Cellient and traditional CB methods in terms of cellularity, morphology and immunoreactivity, evaluates the potential to add formalin fixation to the Cellient method for ICC studies and determines the optimal sectioning depth for maximal cellularity in Cellient CBs. Methods: One hundred and sixty CBs were prepared from 40 cytology samples (32 malignant, eight benign) using four processing methods: (A) traditional; (B) Cellient (methanol fixation); (C) Cellient using additional formalin fixation for 30 minutes; (D) Cellient using additional formalin fixation for 60 minutes. Haematoxylin and eosin-stained sections were assessed for cellularity and morphology. ICC was assessed on 14 cases with a panel of antibodies. Three additional Cellient samples were serially sectioned to determine the optimal sectioning depth. Scoring was performed by two independent, blinded reviewers. Results: For malignant cases, morphology was superior with Cellient relative to traditional CBs (P &lt; 0.001). Cellularity was comparable across all methods. ICC was excellent in all groups and the addition of formalin at any stage during the Cellient process did not influence the staining quality. Serial sectioning through Cellient CBs showed optimum cellularity at 30-40 μm with at least 27 sections obtainable. Conclusions: Cellient CBs provide superior morphology to traditional CBs and, if required, formalin fixation may be added to the Cellient process for ICC. Optimal Cellient CB cellularity is achieved at 30-40 μm, which will impact on the handling of cases in daily practice. abstract_id: PUBMED:33269981 Analysis of Vitreous Samples by the Cellient® Automated Cell Block System: A Six-year Review of Specimens in a Uveitis Population. Purpose: To further evaluate the value of a standardized method to analyze vitreous samples using the Cellient® automated cell block system in the diagnosis of unsolved uveitis. Methods: Six hundred sixty-four pure vitreous samples obtained from patients with unsolved uveitis between March 2012 and May 2018 at University Hospitals Leuven, fixed in PreservCyt® and processed by the Cellient tissue processor, were included in the study. Results: n 86.3% of the cases, sufficient material was found for diagnosis. A diagnosis of acute inflammation was made in 20.2% of the cases. In 34.0% of the cases, the diagnosis was chronic active inflammation; in 26.5%, low-grade inflammation; and in 5.6%, a malignant process. Conclusion: Our standardized protocol can be used to diagnose endophthalmitis, lymphoma, granulomatous disease, inactive and storage disease, and metastatic infiltration. Cellient analysis of vitreous specimens had a diagnostic yield of 86.3% in unsolved uveitis cases. abstract_id: PUBMED:24678031 Validation of 31 of the most commonly used immunohistochemical antibodies in cytology prepared using the Cellient(®) automated cell block system. The Cellient(®) cell block system has become available as an alternative, partially automated method to create cell blocks in cytology. We sought to show a validation method for immunohistochemical (IHC) staining on the Cellient cell block system (CCB) in comparison with the formalin fixed paraffin embedded traditional cell block (TCB). Immunohistochemical staining was performed using 31 antibodies on 38 patient samples for a total of 326 slides. Split samples were processed using both methods by following the Cellient(®) manufacturer's recommendations for the Cellient cell block (CCB) and the Histogel method for preparing the traditional cell block (TCB). Interpretation was performed by three pathologists and two cytotechnologists. Immunohistochemical stains were scored as: 0/1+ (negative) and 2/3+ (positive). Inter-rater agreement for each antibody was evaluated for CCB and TCB, as well as the intra-rater agreement between TCB and CCB between observers. Interobserver staining concordance for the TCB was obtained with statistical significance (P &lt; 0.05) in 24 of 31 antibodies. Interobserver staining concordance for the CCB was obtained with statistical significance in 27 of 31 antibodies. Intra-observer staining concordance between TCB and CCB was obtained with statistical significance in 24 of 31 antibodies tested. In conclusions, immunohistochemical stains on cytologic specimens processed by the Cellient system are reliable and concordant with stains performed on the same split samples processed via a formalin fixed-paraffin embedded (FFPE) block. The Cellient system is a welcome adjunct to cytology work-flow by producing cell block material of sufficient quality to allow the use of routine IHC. abstract_id: PUBMED:25043793 A new and standardized method to sample and analyse vitreous samples by the Cellient automated cell block system. Purpose: In this prospective study, a universal protocol for sampling and analysing vitreous material was investigated. Vitreous biopsies are difficult to handle because of the paucity of cells and the gelatinous structure of the vitreous. Histopathological analysis of the vitreous is useful in difficult uveitis cases to differentiate uveitis from lymphoma or infection and to define the type of cellular reaction. Methods: Hundred consecutive vitreous samples were analysed with the Cellient tissue processor (Hologic). This machine is a fully automated processor starting from a specified container with PreservCyt (fixative fluid) with cells to paraffin. Cytology was compared with fixatives Cytolyt (contains a mucolyticum) and PreservCyt. Routine histochemical and immunostainings were evaluated. Results: In 92% of the cases, sufficient material was found for diagnosis. In 14%, a Cytolyt wash was necessary to prevent clotting of the tubes in the Cellient due to the viscosity of the sample. In 23%, the diagnosis was an acute inflammation (presence of granulocytes); in 33%, chronic active inflammation (presence of T lymphocytes); in 33%, low-grade inflammation (presence of CD68 cells, without T lymphocytes); and in 3%, a malignant process. Conclusion: A standardized protocol for sampling and handling vitreous biopsies, fixing in PreservCyt and processing by the Cellient gives a satisfactory result in morphology, number of cells and possibility of immuno-histochemical stainings. The diagnosis can be established or confirmed in more than 90% of cases. abstract_id: PUBMED:23444168 Effective application of the methanol-based PreservCyt(™) fixative and the Cellient(™) automated cell block processor to diagnostic cytopathology, immunocytochemistry, and molecular biology. We studied the feasibility of immunocytochemistry (ICC), in situ hybridization (ISH), and polymerase chain reaction (PCR) after Cellient(™) automated cell block processing, and tested whether methanol-based PreservCyt(™) fixation could replace formalin fixation, in an attempt to eliminate toxic formaldehyde vapors. Immunostaining with 30 different antibodies was performed on cell blocks from 73 FNA specimens and 42 body cavity fluid specimens prepared by Cellient(™) automated processing that uses the methanol-based fixative (PreservCyt(™) ). For each antibody we evaluated ICC in at least three different cell block specimens and compared it with immunohistochemistry (IHC) in formalin-fixed, paraffin-embedded (FFPE) histological sections from the corresponding tumors. The quality of DNA and RNA in Cellient(™) blocks was analyzed by ISH, applying a SYT gene break-apart assay and EBER probes, respectively. Moreover, DNA quality was analyzed by PCR by using primer sets for DNA products of 100, 200, 300, 400, 500, and 600 base pairs, and evaluated by gel electrophoresis. When compared with IHC results in corresponding FFPE tumor tissue from the same patient, 24 out of 30 antibodies showed concordant ICC results. With FISH, distinctive hybridization signals were observed for SYT DNA sequences and EB virus RNA sequences. With PCR, DNA products, up to 600 base pairs in size, were readily observed after gel electrophoresis. The antibodies that showed concordant immunostaining in Cellient(™) blocks could be applied to diagnostic algorithms that proved to be helpful in the discrimination of major tumor types (carcinoma, lymphoma, melanoma, and germ cell tumors), discrimination of carcinoma subtypes, and determination of primary tumor site in cases of metastatic carcinoma. In a separate study, we found that the application of ICC to this cell block technique provided additional diagnostic and clinically important information in 24% of 100 consecutive cases. The high quality of DNA and RNA in Cellient(™) cell blocks allowed sensitive and specific molecular biologic analysis, in particular FISH and PCR. abstract_id: PUBMED:25376104 Comparison of the Cellient(™) automated cell block system and agar cell block method. Objective: To compare the Cellient(TM) automated cell block system with the agar cell block method in terms of quantity and quality of diagnostic material and morphological, histochemical and immunocytochemical features. Materials And Methods: Cell blocks were prepared from 100 effusion samples using the agar method and Cellient system, and routinely sectioned and stained for haematoxylin and eosin and periodic acid-Schiff with diastase (PASD). A preliminary immunocytochemical study was performed on selected cases (27/100 cases). Sections were evaluated using a three-point grading system to compare a set of morphological parameters. Statistical analysis was performed using Fisher's exact test. Results: Parameters assessing cellularity, presence of single cells and definition of nuclear membrane, nucleoli, chromatin and cytoplasm showed a statistically significant improvement on Cellient cell blocks compared with agar cell blocks (P &lt; 0.05). No significant difference was seen for definition of cell groups, PASD staining or the intensity or clarity of immunocytochemical staining. A discrepant immunocytochemistry (ICC) result was seen in 21% (13/63) of immunostains. Conclusion: The Cellient technique is comparable with the agar method, with statistically significant results achieved for important morphological features. It demonstrates potential as an alternative cell block preparation method which is relevant for the rapid processing of fine needle aspiration samples, malignant effusions and low-cellularity specimens, where optimal cell morphology and architecture are essential. Further investigation is required to optimize immunocytochemical staining using the Cellient method. abstract_id: PUBMED:23963870 The Cellient automated cell block system is useful in the differential diagnosis of atypical glandular cells in Papanicolaou tests. Background: Atypical glandular cells (AGC) is a very important diagnosis in gynecological cytology. In the current study, the authors investigated the usefulness of Cellient cell blocks (CB) for characterizing AGC on Papanicolaou (Pap) tests. Methods: A total of 148 patients with an AGC diagnosis based on Pap tests by cytotechnologists and referred to cytopathologists were studied. Among these patients, there were 68 patients with CB preparations and 80 patients with Pap tests only (TP-AGC group). Follow-up results by Pap tests or biopsies were obtained in 117 of 148 patients. The median follow-up was 13 months (range, 1 month-36 months). Results: Of the 68 patients with CBs, 31 (46%) were reclassified as negative for dysplasia or low-grade intraepithelial lesion; 30 patients (44%) retained a diagnosis of AGC (CB-AGC group); and 7 patients (10%) were given specific diagnoses of high-grade intraepithelial lesion (3 patients), endocervical adenocarcinoma in situ (1 patient), and invasive adenocarcinoma (3 patients). On follow-up, the CB-AGC group was found to have a significantly lower rate of negative/low-grade squamous intraepithelial lesion diagnoses compared with the TP-AGC group (55% vs 85%; P= .006). The CB-AGC group had a significantly higher rate of endocervical or endometrial adenocarcinoma compared with the TP-AGC group (36% vs 8%; P= .003) at the time of follow-up. The rates of high-grade squamous intraepithelial lesion were not found to be statistically different between these 2 groups (9% vs 7%; P= .66). Conclusions: The Cellient CB is a useful technique to further categorize a diagnosis of AGC on Pap tests. Using the Cellient CB system, the pathologist has the ability to improve the diagnostic accuracy of AGC so that unnecessary colposcopic evaluation or biopsies can be avoided. abstract_id: PUBMED:27338858 The utility of cellient cell blocks in low-cellularity thyroid fine needle aspiration biopsies. Background: Low cellularity can be problematic in thyroid fine needle aspiration (FNA) biopsies. The Cellient cell block (CB) system has been reported to improve cell recovery compared to traditional methods. Therefore, we studied the utility of Cellient CBs in the evaluation of thyroid FNAs, with an emphasis on low-cellularity specimens. Methods: ThinPrep slides were prepared from thyroid FNAs submitted in Cytolyt. After assessment using TBSRTC criteria, Cellient CBs were requested on samples with residual FNA material and an initial cytologic impression of non-diagnostic, AUS/FLUS, and on apparently benign samples with marginally adequate cellularity. The contribution of the CB findings to the final diagnosis was assessed. Results: 965 cases of paired ThinPrep and CB sections were examined. Overall, the cell block findings resulted in a change of the initial ThinPrep impression in 15% (n = 148) of cases. The vast majority of these changed cases were initially inadequate for interpretation, and specifically, 31% (n = 123) of the non-diagnostic ThinPrep samples became diagnostic with a CB. The cell block findings contributed to a change in diagnosis in 8% (n = 23) of AUS/FLUS cases, and in less than 1% of low-cellularity benign samples. Conclusion: The use of CBs in low-cellularity thyroid FNAs has not been well described. In this study, we found that the contribution of CBs in this setting varied by TBSRTC category. Specifically, the samples that benefited most were initially non-diagnostic specimens and select cases of AUS/FLUS, while low-cellularity benign samples gained very little additional information. Diagn. Cytopathol. 2016;44:737-741. © 2016 Wiley Periodicals, Inc. abstract_id: PUBMED:20949469 Cellient™ automated cell block versus traditional cell block preparation: a comparison of morphologic features and immunohistochemical staining. Traditional cell block (TCB) sections serve as an important diagnostic adjunct to cytologic smears but are also used today as a reliable preparation for immunohistochemical (IHC) studies. There are many ways to prepare a cell block and the methods continue to be revised. In this study, we compare the TCB with the Cellient™ automated cell block system. Thirty-five cell blocks were obtained from 16 benign and 19 malignant nongynecologic cytology specimens at a large university teaching hospital and prepared according to TCB and Cellient protocols. Cell block sections from both methods were compared for possible differences in various morphologic features and immunohistochemical staining patterns. In the 16 benign cases, no significant morphologic differences were found between the TCB and Cellient cell block sections. For the 19 malignant cases, some noticeable differences in the nuclear chromatin and cellularity were identified, although statistical significance was not attained. Immunohistochemical or special stains were performed on 89% of the malignant cases (17/19). Inadequate cellularity precluded full evaluation in 23% of Cellient cell block IHC preparations (4/17). Of the malignant cases with adequate cellularity (13/17), the immunohistochemical staining patterns from the different methods were identical in 53% of cases. The traditional and Cellient cell block sections showed similar morphologic and immunohistochemical staining patterns. The only significant difference between the two methods concerned the lower overall cell block cellularity identified during immunohistochemical staining in the Cellient cell block sections. abstract_id: PUBMED:24403950 Protein extraction from methanol fixed paraffin embedded tissue blocks: A new possibility using cell blocks. Background: Methanol fixed and paraffin embedded (MFPE) cellblocks are an essential cytology preparation. However, MFPE cellblocks often contain limited material and their relatively small size has caused them to be overlooked in biomarker discovery. Advances in the field of molecular biotechnology have made it possible to extract proteins from formalin fixed and paraffin embedded (FFPE) tissue blocks. In contrast, there are no established methods for extracting proteins from MFPE cellblocks. We investigated commonly available CHAPS (3-[(3-cholamidopropyl) dimethylammonio]-1-propanesulfonate) buffer, as well as two commercially available Qiagen(®) kits and compared their effectiveness on MFPE tissue for protein yields. Materials And Methods: MFPE blocks were made by Cellient™ automated system using human tissue specimens from normal and malignant specimens collected in ThinPrep™ Vials. Protein was extracted from Cellient-methanol fixed and paraffin embedded blocks with CHAPS buffer method as well as FFPE and Mammalian Qiagen(®) kits. Results: Comparison of protein yields demonstrated the effectiveness of various protein extraction methods on MFPE cellblocks. Conclusion: In the current era of minimally invasive techniques to obtain minimal amount of tissue for diagnostic and prognostic purposes, the use of commercial and lab made buffer on low weight MFPE scrapings obtained by Cellient(®) processor opens new possibilities for protein biomarker research. Answer: The studies provided offer insights into the performance and utility of the Cellient™ automated cell block system in comparison to traditional cell block methods. The Cellient system is designed to produce cell blocks in a shorter time frame, using methanol fixation, as opposed to the overnight formalin fixation required by traditional methods. According to the research, Cellient cytoblocks offer several advantages over traditional cell blocks. For instance, in the context of malignant cases, the morphology was found to be superior with Cellient cytoblocks compared to traditional cell blocks (PUBMED:24943912). Additionally, the Cellient system does not compromise cellularity or immunoreactivity, and the addition of formalin fixation at any stage during the Cellient process does not influence the staining quality (PUBMED:24943912). This suggests that the Cellient system is not only faster but also produces high-quality results that are suitable for immunocytochemistry studies. The Cellient system has also been shown to have a high diagnostic yield in the analysis of vitreous specimens from unsolved uveitis cases, with a diagnostic yield of 86.3% (PUBMED:33269981). Furthermore, the system has been validated for immunohistochemical staining, demonstrating reliable and concordant results with traditional methods (PUBMED:24678031). Moreover, the Cellient system has been found to be useful in the differential diagnosis of atypical glandular cells in Papanicolaou tests, potentially improving diagnostic accuracy and reducing unnecessary evaluations (PUBMED:23963870). It has also been beneficial in low-cellularity thyroid fine needle aspiration biopsies, particularly in initially non-diagnostic specimens and select cases of AUS/FLUS (PUBMED:27338858). In terms of protein extraction for biomarker research, methods have been investigated for extracting proteins from methanol-fixed and paraffin-embedded cell blocks made by the Cellient system, opening new possibilities for protein biomarker research (PUBMED:24403950). Overall, the evidence suggests that the Cellient automated cell block system is indeed better in terms of morphology, at least as strong in terms of cellularity and immunoreactivity, and faster compared to traditional cell block methods, making it a valuable tool in cytopathology.
Instruction: Does gender inequity increase the risk of intimate partner violence among women? Abstracts: abstract_id: PUBMED:32686152 Prevalence and risk factors of intimate partner violence among Chinese immigrant women. Aims: To investigate the prevalence and risk factors of intimate partner violence among Chinese immigrant women residing in the USA. Design: A cross-sectional study using the online survey approach. Methods: A total of 475 Chinese immigrant women were recruited online between April-June 2019. We measured past year intimate partner violence, sociodemographic, relationship-related and immigration-related factors, religion, social support, acculturation, adverse childhood experiences, and gender role beliefs. Results: Twenty-one percent of Chinese immigrant women experienced past year intimate partner violence. Younger age, cohabiting relationship, lower levels of relationship satisfaction and social support, older age at arrival, longer length of stay, religious belief, and adverse childhood experiences were associated with the occurrence of intimate partner violence. Conclusion: Chinese immigrant women experience higher rates of intimate partner violence compared with other populations of women. Future research is still needed to elucidate abused Chinese immigrant women's violence experiences, help-seeking behaviours, availability of resources and needs for services to tailor prevention and intervention programs culturally. Impact: The findings added to our understanding of intimate partner violence among this largest Asian immigrant group. Modifiable risk factors such as social support should be targeted in future preventions and interventions for abused Chinese immigrant women residing in the USA. abstract_id: PUBMED:32402008 Inversion of traditional gender roles and intimate partner violence against pregnant women. This study analyzed the association between the inversion of traditional gender roles and exclusive psychological and physical/sexual intimate partner violence, in a cross-sectional study of Brazilian pregnant women, identified through prenatal services in the municipalities of São Luís, Maranhão State (n = 992) and Ribeirão Preto, São Paulo State (n = 943). The pregnant women ranged from 12 to 45 years. Inversion of traditional gender roles was assessed by calculating differences in age, education and occupation between pregnant women and their co-residing intimate partners and identifying the largest contribution to family income. The conceptual model was tested with structural equation modeling and showed acceptable fit. The prevalence of any type of intimate partner violence was 29.8% in São Luís and 20.1% in Ribeirão Preto. In both municipalities, pregnant women were more likely to suffer exclusive psychological and physical/sexual violence when they had the highest income in the family (p &lt; 0.005). In São Luís, physical/sexual violence was more common among women who were better educated than their partners (standardized coefficient, SC = -0.466; p = 0.007). In Ribeirão Preto, exclusive psychological violence was more frequent among women who had lower status occupations than their partners (SC = 0.236; p = 0.004). Inversion of traditional gender roles is associated with exclusive psychological and physical/sexual violence against pregnant women by their co-residing intimate partners. These findings suggest that women's empowerment at an individual level does not necessarily relieve them of intimate partner abuse in social contexts where traditional gender norms persist. abstract_id: PUBMED:37142278 Prevalence and risk factors for intimate partner violence and indigenous women: A scoping review. Aim: This study aims to map intimate partner violence evidence among indigenous women and explore the prevalence, social and systemic factors contributing to this occurrence. Methods: This is a scoping review following the steps recommended by the JBI. We searched the MEDLINE/PubMed®, Web of Science™, Embase, CINAHL and LILACS databases on March 2023. Studies that addressed the intimate partner violence topic among indigenous women and risk factors, without time and language limitations, were included. Detailed information was extracted, standardized by JBI. Results: Twenty studies of different designs were included, all published in English, between 2004 and 2022. A high intimate partner violence prevalence among indigenous women was identified, associated with a great diversity of risk factors. Conclusion: The great diversity of identified factors associated with its occurrence shows the complexity of this problem and the vulnerability of indigenous women. abstract_id: PUBMED:30924708 Variations in Emotional, Sexual, and Physical Intimate Partner Violence Among Women in Uganda: A Multilevel Analysis. Evidence shows that a significant proportion of ever-partnered women suffer some form of intimate partner violence (IPV) perpetuated by male partners. The prevalence of IPV in sub-Saharan African countries is considerably higher than global estimates. Although existing studies show the effect of women's and intimate male partner's characteristics on IPV, knowledge on how these factors increase or reduce women's risk to specific types of IPV is limited. Using the 2016 Ugandan Demographic and Health Survey (UDHS), we examine regional variations in women's and intimate male partner's characteristics and their effect on emotional, sexual, and physical violence perpetuated by men and experienced by women in Uganda. The result shows that women's educational status is a significant predictor of all forms of IPV, whereas other characteristics, such as employment and housing ownership, have differential effects on specific types of IPV. Less educated women were more likely to experience emotional, sexual, and physical violence. Alcohol abuse was a significant determinant of men perpetuating all types of IPV; other male characteristics had differential effects on specific types of IPV. Male partners who abuse alcohol "often" and "sometimes" were more likely to commit acts of emotional, sexual, and physical violence against their female intimate partners. The findings also show that ~5%, ~8%, and ~2% of the variance in emotional, sexual, and physical violence (respectively; in the final models) are attributable to regional differences. The findings suggest the need for interventions aimed at increasing women's access to higher education, working with men and boys to reduce the occurrence of alcohol abuse and address harmful constructions of masculinity, and promoting gender equality among men as well as women. abstract_id: PUBMED:35725404 Spatial distribution and predictors of intimate partner violence among women in Nigeria. Background: Globally, intimate partner violence is one of the major health problems women face every day. Its consequences are enormous. However, our search of the available literature revealed that no study had examined the spatial distribution of intimate partner violence and the predictors of intimate partner violence among women in Nigeria using current nationally representative data. This study, therefore, sought to examine the spatial distribution of intimate partner violence and its predictors among women in Nigeria. Method: We sourced data from the 2018 Nigeria Demographic and Health Survey for this study. A sample size of 8,968 women was considered for this study. We employed both multilevel and spatial analyses to ascertain the factors associated with intimate partner violence and its spatial clustering. Results: The hot spot areas for intimate partner violence in Nigeria were Gombe, Bauchi, Adamawa, Plateau, Kogi, Edo, Ebonyi, and Rivers. The likelihood of experiencing intimate partner violence among women in Nigeria was high among women with primary education, those that were previously married, women currently working, women who were Yoruba, women with parity of four and above and women who were exposed to mass media while low odds of intimate partner violence was reported among women who were Muslims. Women who resided in the North East region and those who lived in communities with medium socioeconomic status were more likely to experience intimate partner violence, while women who were within the richest wealth index and those residing in the South West region were less likely to experience intimate partner violence. Conclusion: The study found regional variations in the prevalence of intimate partner violence among women in Nigeria. Therefore, policymakers should focus their attention on the hotspots for intimate partner violence in the country. There is also the need to consider the factors identified in this study to reduce intimate partner violence among women in Nigeria. Empowering women would yield a significant improvement in the fight against gender-based violence. abstract_id: PUBMED:31795808 Bidirectional Intimate Partner Violence Among Chinese Women: Patterns and Risk Factors. Bidirectional intimate partner violence (BIPV) refers to the co-occurrence of violence perpetration by both partners. BIPV has been analyzed using samples from different sociodemographic contexts but has yet to be fully explored in China. The present study employed a latent class approach to identify BIPV patterns, rates of prevalence, and associated risk factors among a sample of 1,301 heterosexual adult women in mainland China. Five distinct patterns of BIPV were identified, including (a) bidirectional psychological aggression, (b) bidirectional violence of all types, (c) multi-type victimization with psychological aggression, (d) minimal violence, and (e) bidirectional multi-types without physical violence. Marital status, education, employment status, acceptance of male dominance, and justification of intimate partner violence (IPV) were found to be predictive of different types of BIPV. Our findings suggest a need for a conceptual recognition of the heterogeneity and bidirectionality of IPV among Chinese women. Future research should extend to other diverse populations and sociocultural or clinical contexts in China. IPV assessments, research, and social programs ought to recognize the complexity of IPV and consider various IPV patterns specific to heterosexual women. abstract_id: PUBMED:35816435 Stories of Backlash in Interviews With Survivors of Intimate Partner Violence Against Women in Sweden. Despite high gender equality ratings, Sweden shows a high prevalence of intimate partner violence against women (IPVAW). Suggested factors underlying this apparent paradox include backlash effects against women's empowerment. This study explores stories of backlash in interviews with 23 IPVAW survivors in Sweden. Thematic analysis identified categories of narrative segments referring to phenomena provoking violence; the victims' resources, agency, breaking with gender norms and resistance, and the partner's feelings of subordination, while case-centered narrative analysis pointed to divergences between how these categories appear in the stories. The study underscores the complexity of links between gender (in)equality and IPVAW in Sweden. abstract_id: PUBMED:33170394 The Gender Violence - Implicit Association Test to measure attitudes toward intimate partner violence against women. Attitudes play a central role in intimate partner violence against women and are related to its origin, to the responses of women who suffer violence, and to the settings where it occurs. In fact, these attitudes are recognized as one of the risk factors linked to violent perpetration and to public, professional, and victim responses to this type of violence. However, even though available research generally shows a broad rejection of this violence, it remains a serious social and health problem that has reached epidemic proportions. This suggests that the information available about these attitudes (obtained through explicit and direct measures, i.e., self-reports) may be distorted or influenced by factors such as social desirability. In this context, the overall objective of our research project is to provide multi-method measures (explicit and implicit) of attitudes toward intimate partner violence against women, and the main goal of this paper is to propose an instrument for the implicit measurement of these attitudes. In this regard, the Implicit Association Test (IAT) is the most common procedure used, providing a superior predictive validity compared to explicit measures for socially sensitive topics. We will present an exploratory study that describes its adaptation for our purposes, and the development of the Gender Violence - Implicit Association Test (GV-IAT) to use among Spanish-speaking populations, and discuss the strengths and limitations of this proposal. abstract_id: PUBMED:32522974 Intimate partner violence in pregnant women: predisposing factors. Objective: Intimate partner violence is a serious health problem worldwide and should be a priority issue for policy makers. Having adequate information helps to develop strategies to eradicate this scourge. The objective of this study was to determine the factors associated with situations of intimate partner violence during pregnancy. Methods: During 2015 a cross-sectional study was carried out in pregnant women in southern Spain. Information on sociodemographic variables, information on pregnancy, childbirth, and the puerperium was collected through interviews and official clinical documents. To determine the situation of violence, the Woman Abuse Screening Tool (WAST) was used in its short version. Comparisons of means were made using the t test; chi square and binary regression, as well as the Mann-Whitney U test or the Kruskal-Wallis test for variables that did not follow a normal distribution were used. Results: 141 women participated in the study; the percentage of women in situations of intimate partner violence during pregnancy was 31.2% (44). A low level of education for women's partners, multiparous women, coexistence with people who require care and care, single marital status, women's partners were unemployed, women not having a permanent employment and having an unwanted pregnancy seem be predisposing factors for a situation of intimate partner violence during pregnancy (p&lt;0.05). Conclusions: Certain social factors, such as educational level and job stability, can influence the existence of intimate partner violence during pregnancy. abstract_id: PUBMED:30574828 Meaning Attribution to Intimate Partner Violence by Counselors Who Support Women With Intimate Partner Violence Experiences in Greece. The present study refers to the meaning attributed to intimate partner violence by counselors supporting women who have experienced abuse and seek for help. The main focus of the study is the investigation of the effect of the counselors' work experience in changing the meaning of intimate partner violence. The research involved 10 counselors working in social services (counseling centers and shelter services), in Greece, who support women who have suffered intimate partner violence in their relationships. The research data were collected through semi-structured interviews, and their analysis followed the interpretative phenomenological method. The qualitative analysis of the interviews showed that the counselors' work experience broadens the meaning of intimate partner violence. The broadened definition of intimate partner violence seems to affect simultaneously the counselors' sense of (their) vulnerability to violence. Answer: Yes, gender inequity does increase the risk of intimate partner violence (IPV) among women. The inversion of traditional gender roles, which can be seen as a form of gender inequity, is associated with exclusive psychological and physical/sexual violence against pregnant women. This association was observed in a study where pregnant women who had the highest income in the family or were better educated than their partners were more likely to suffer from IPV (PUBMED:32402008). This suggests that women's empowerment at an individual level does not necessarily protect them from IPV in contexts where traditional gender norms are still prevalent. Additionally, the study on Chinese immigrant women in the USA found that younger age, cohabiting relationships, lower levels of relationship satisfaction and social support, religious belief, and adverse childhood experiences were associated with the occurrence of IPV (PUBMED:32686152). While this study does not directly mention gender inequity, factors such as relationship satisfaction and social support can be influenced by gender norms and expectations, which are components of gender equity. Furthermore, the study on bidirectional intimate partner violence among Chinese women identified acceptance of male dominance and justification of IPV as predictive of different types of BIPV (PUBMED:31795808). Acceptance of male dominance is a clear indicator of gender inequity and suggests that attitudes supporting gender inequity can increase the risk of IPV. In Sweden, despite high gender equality ratings, there is a high prevalence of IPV against women, and suggested factors include backlash effects against women's empowerment (PUBMED:35816435). This indicates that even in societies with high gender equality, there can be resistance to changes in traditional gender roles, which can result in increased IPV. Overall, these studies indicate that gender inequity, whether through the inversion of traditional gender roles, acceptance of male dominance, or backlash against women's empowerment, can increase the risk of IPV among women.
Instruction: Are referral centers for non-muscle-invasive bladder cancer compliant to EAU guidelines? Abstracts: abstract_id: PUBMED:24373477 EAU guidelines on muscle-invasive and metastatic bladder cancer: summary of the 2013 guidelines. Context: The European Association of Urology (EAU) guidelines panel on Muscle-invasive and Metastatic bladder cancer (BCa) updates its guidelines yearly. This updated summary provides a synthesis of the 2013 guidelines document, with emphasis on the latest developments. Objective: To provide graded recommendations on the diagnosis and treatment of patients with muscle-invasive BCa (MIBC), linked to a level of evidence. Evidence Acquisition: For each section of the guidelines, comprehensive literature searches covering the past 10 yr in several databases were conducted, scanned, reviewed, and discussed both within the panel and with external experts. The final results are reflected in the recommendations provided. Evidence Synthesis: Smoking and work-related carcinogens remain the most important risk factors for BCa. Computed tomography (CT) and magnetic resonance imaging can be used for staging, although CT is preferred for pulmonary evaluation. Open radical cystectomy with an extended lymph node dissection (LND) remains the treatment of choice for treatment failures in non-MIBC and T2-T4aN0M0 BCa. For well-informed, well-selected, and compliant patients, however, multimodality treatment could be offered as an alternative, especially if cystectomy is not an option. Comorbidity, not age, should be used when deciding on radical cystectomy. Patients should be encouraged to actively participate in the decision-making process, and a continent urinary diversion should be offered to all patients unless there are specific contraindications. For fit patients, cisplatinum-based neoadjuvant chemotherapy should always be discussed, since it improves overall survival. For patients with metastatic disease, cisplatin-containing combination chemotherapy is recommended. For unfit patients, carboplatin combination chemotherapy or single agents can be used. Conclusions: This 2013 EAU Muscle-invasive and Metastatic BCa guidelines updated summary aims to increase the quality of care and outcome for patients with muscle-invasive or metastatic BCa. Patient Summary: In this paper we update the EAU guidelines on Muscle-invasive and Metastatic bladder cancer. We recommend that chemotherapy be administered before radical treatment and that bladder removal be the standard of care for disease confined to the bladder. abstract_id: PUBMED:21196690 Are referral centers for non-muscle-invasive bladder cancer compliant to EAU guidelines? A report from the vesical antiblastic therapy Italian study. Introduction: Adherence to international guidelines is viewed as a prerequisite for optimal medical care delivery. Previously reported surveys for non-muscle-invasive bladder cancer (NMIBC) employed mailed questionnaires to urologists or patients resulting in conflicting degrees of agreement with existing guidelines. In the current study, contemporary information on the management of NMIBC was generated from a sample of italian centers. Patients And Methods: Eight Italian referral centers for the treatment of NMIBC were asked to collect information relative to all consecutive patients with a histology-proven NMIBC undergoing a transurethral resection from January 1 to March 31, 2009. The primary study objective was to verify the level of adherence of disease management with European guidelines. Results: 344 patients resulted in being evaluable. 49.2% of high-risk patients underwent a repeat transurethral resection. Bacillus Calmette-Guérin was employed in 35% of cases, while chemotherapy was in 22%. An early single regimen was adopted in 136 patients and only in 1 out of 3 low-risk patients. High-risk NMIBC received bacillus Calmette-Guérin and chemotherapy as first-line therapy in 66 and 12.5% respectively. After 3 months, cystoscopy had been reported for 82.5% of patients with a recurrence rate of 13%. Conclusion: Adherence of Italian Institutions to EAU guidelines was optimal when reporting baseline variables. Significant degrees of discrepancy emerged in treatment choices. abstract_id: PUBMED:27375033 Updated 2016 EAU Guidelines on Muscle-invasive and Metastatic Bladder Cancer. Context: Invasive bladder cancer is a frequently occurring disease with a high mortality rate despite optimal treatment. The European Association of Urology (EAU) Muscle-invasive and Metastatic Bladder Cancer (MIBC) Guidelines are updated yearly and provides information to optimise diagnosis, treatment, and follow-up of this patient population. Objective: To provide a summary of the EAU guidelines for physicians and patients confronted with muscle-invasive and metastatic bladder cancer. Evidence Acquisition: An international multidisciplinary panel of bladder cancer experts reviewed and discussed the results of a comprehensive literature search of several databases covering all sections of the guidelines. The panel defined levels of evidence and grades of recommendation according to an established classification system. Evidence Synthesis: Epidemiology and aetiology of bladder cancer are discussed. The proper diagnostic pathway, including demands for pathology and imaging, is outlined. Several treatment options, including bladder-sparing treatments and combinations of treatment modalities (different forms of surgery, radiation therapy, and chemotherapy) are described. Sequencing of these modalities is discussed. Potential indications and contraindications, such as comorbidity, are related to treatment choice. There is a new paragraph on organ-sparing approaches, both in men and in women, and on minimal invasive surgery. Recommendations for chemotherapy in fit and unfit patients are provided including second-line options. Finally, a follow-up schedule is provided. Conclusions: The current summary of the EAU Muscle-invasive and Metastatic Bladder Cancer Guidelines provides an up-to-date overview of the available literature and evidence dealing with diagnosis, treatment, and follow-up of patients with metastatic and muscle-invasive bladder cancer. Patient Summary: Bladder cancer is an important disease with a high mortality rate. These updated guidelines help clinicians refine the diagnosis and select the appropriate therapy and follow-up for patients with metastatic and muscle-invasive bladder cancer. abstract_id: PUBMED:23827737 EAU guidelines on non-muscle-invasive urothelial carcinoma of the bladder: update 2013. Context: The first European Association of Urology (EAU) guidelines on bladder cancer were published in 2002 [1]. Since then, the guidelines have been continuously updated. Objective: To present the 2013 EAU guidelines on non-muscle-invasive bladder cancer (NMIBC). Evidence Acquisition: Literature published between 2010 and 2012 on the diagnosis and treatment of NMIBC was systematically reviewed. Previous guidelines were updated, and the levels of evidence and grades of recommendation were assigned. Evidence Synthesis: Tumours staged as Ta, T1, or carcinoma in situ (CIS) are grouped as NMIBC. Diagnosis depends on cystoscopy and histologic evaluation of the tissue obtained by transurethral resection (TUR) in papillary tumours or by multiple bladder biopsies in CIS. In papillary lesions, a complete TUR is essential for the patient's prognosis. Where the initial resection is incomplete, where there is no muscle in the specimen, or where a high-grade or T1 tumour is detected, a second TUR should be performed within 2-6 wk. The risks of both recurrence and progression may be estimated for individual patients using the EORTC scoring system and risk tables. The stratification of patients into low-, intermediate-, and high-risk groups is pivotal to recommending adjuvant treatment. For patients with a low-risk tumour, one immediate instillation of chemotherapy is recommended. Patients with an intermediate-risk tumour should receive one immediate instillation of chemotherapy followed by 1 yr of full-dose bacillus Calmette-Guérin (BCG) intravesical immunotherapy or by further instillations of chemotherapy for a maximum of 1 yr. In patients with high-risk tumours, full-dose intravesical BCG for 1-3 yr is indicated. In patients at highest risk of tumour progression, immediate radical cystectomy should be considered. Cystectomy is recommended in BCG-refractory tumours. The long version of the guidelines is available from the EAU Web site: http://www.uroweb.org/guidelines/. Conclusions: These abridged EAU guidelines present updated information on the diagnosis and treatment of NMIBC for incorporation into clinical practice. Patient Summary: The EAU Panel on Non-muscle Invasive Bladder Cancer released an updated version of their guidelines. Current clinical studies support patient selection into different risk groups; low, intermediate and high risk. These risk groups indicate the likelihood of the development of a new (recurrent) cancer after initial treatment (endoscopic resection) or progression to more aggressive (muscle-invasive) bladder cancer and are most important for the decision to provide chemo- or immunotherapy (bladder installations). Surgical removal of the bladder (radical cystectomy) should only be considered in patients who have failed chemo- or immunotherapy, or who are in the highest risk group for progression. abstract_id: PUBMED:28058776 Guideline of guidelines: non-muscle-invasive bladder cancer. Non-muscle-invasive bladder cancer (NMIBC) represents the vast majority of bladder cancer diagnoses, but this definition represents a spectrum of disease with a variable clinical course, notable for significant risk of recurrence and potential for progression. Management involves risk-adapted strategies of cystoscopic surveillance and intravesical therapy with the goal of bladder preservation when safe to do so. Multiple organizational guidelines exist to help practitioners manage this complicated disease process, but adherence to management principles among practising urologists is reportedly low. We review four major organizational guidelines on NMIBC: the American Urological Association (AUA)/Society of Urologic Oncology (SUO), European Association of Urology (EAU), National Comprehensive Cancer Network (NCCN), and National Institute for Health and Care Excellence (NICE) guidelines. abstract_id: PUBMED:27324428 EAU Guidelines on Non-Muscle-invasive Urothelial Carcinoma of the Bladder: Update 2016. Context: The European Association of Urology (EAU) panel on Non-muscle-invasive Bladder Cancer (NMIBC) released an updated version of the guidelines on Non-muscle-invasive Bladder Cancer. Objective: To present the 2016 EAU guidelines on NMIBC. Evidence Acquisition: A broad and comprehensive scoping exercise covering all areas of the NMIBC guidelines published between April 1, 2014, and May 31, 2015, was performed. Databases covered by the search included Medline, Embase, and the Cochrane Libraries. Previous guidelines were updated, and levels of evidence and grades of recommendation were assigned. Evidence Synthesis: Tumours staged as TaT1 or carcinoma in situ (CIS) are grouped as NMIBC. Diagnosis depends on cystoscopy and histologic evaluation of the tissue obtained by transurethral resection of the bladder (TURB) in papillary tumours or by multiple bladder biopsies in CIS. In papillary lesions, a complete TURB is essential for the patient's prognosis. If the initial resection is incomplete, there is no muscle in the specimen, or a high-grade or T1 tumour is detected, a second TURB should be performed within 2-6 wk. The risks of both recurrence and progression may be estimated for individual patients using the European Organisation for Research and Treatment of Cancer (EORTC) scoring system and risk tables. The stratification of patients into low-, intermediate-, and high-risk groups is pivotal to recommending adjuvant treatment. For patients with a low-risk tumour and intermediate-risk patients at a lower risk of recurrence, one immediate instillation of chemotherapy is recommended. Patients with an intermediate-risk tumour should receive 1 yr of full-dose bacillus Calmette-Guérin (BCG) intravesical immunotherapy or instillations of chemotherapy for a maximum of 1 yr. In patients with high-risk tumours, full-dose intravesical BCG for 1-3 yr is indicated. In patients at highest risk of tumour progression, immediate radical cystectomy (RC) should be considered. RC is recommended in BCG-refractory tumours. The long version of the guidelines is available at the EAU Web site (www.uroweb.org/guidelines). Conclusions: These abridged EAU guidelines present updated information on the diagnosis and treatment of NMIBC for incorporation into clinical practice. Patient Summary: The European Association of Urology has released updated guidelines on Non-muscle-invasive Bladder Cancer (NMIBC). Stratification of patients into low-, intermediate-, and high-risk groups is essential for decisions about adjuvant intravesical instillations. Risk tables can be used to estimate risks of recurrence and progression. Radical cystectomy should be considered only in case of failure of instillations or in NMIBC with the highest risk of progression. abstract_id: PUBMED:19157687 The updated EAU guidelines on muscle-invasive and metastatic bladder cancer. Context: New data regarding diagnosis and treatment of muscle-invasive and metastatic bladder cancer (MiM-BC) has emerged and led to an update of the European Association of Urology (EAU) guidelines for MiM-BC. Objective: To review the new EAU guidelines for MiM-BC. Evidence Acquisition: A comprehensive workup of the literature obtained from Medline, the Cochrane central register of systematic reviews, and reference lists in publications and review articles was developed and screened by a group of urologists, oncologists, and radiologist appointed by the EAU Guideline Committee. Previous recommendations based on the older literature on this subject were taken into account. Levels of evidence and grade of guideline recommendations were added, modified from the Oxford Centre for Evidence-based Medicine Levels of Evidence. Evidence Synthesis: The diagnosis of muscle-invasive bladder cancer (BCa) is made by transurethral resection (TUR) and following histopathologic evaluation. Patients with confirmed muscle-invasive BCa should be staged by computed tomography (CT) scans of the chest, abdomen, and pelvis, if available. Adjuvant chemotherapy is currently only advised within clinical trials. Radical cystectomy (RC) is the treatment of choice for both sexes, and lymph node dissection should be an integral part of cystectomy. An orthotopic bladder substitute should be offered to both male and female patients lacking any contraindications, such as no tumour at the level of urethral dissection. Multimodality bladder-preserving treatment in localised disease is currently regarded only as an alternative in selected, well-informed, and compliant patients for whom cystectomy is not considered for clinical or personal reasons. An appropriate schedule for disease monitoring should be based on (1) natural timing of recurrence, (2) probability of disease recurrence, (3) functional deterioration at particular sites, and (4) consideration of treatment of a recurrence. In metastatic disease, the first-line treatment for patients fit enough to sustain cisplatin is cisplatin-containing combination chemotherapy. Presently, there is no standard second-line chemotherapy. Conclusions: These EAU guidelines are a short, comprehensive overview of the updated guidelines of (MiM-BC) as recently published in the EAU guidelines and also available in the National Guideline Clearinghouse. abstract_id: PUBMED:33419683 European Association of Urology (EAU) Prognostic Factor Risk Groups for Non-muscle-invasive Bladder Cancer (NMIBC) Incorporating the WHO 2004/2016 and WHO 1973 Classification Systems for Grade: An Update from the EAU NMIBC Guidelines Panel. Background: The European Association of Urology (EAU) prognostic factor risk groups for non-muscle-invasive bladder cancer (NMIBC) are used to provide recommendations for patient treatment after transurethral resection of bladder tumor (TURBT). They do not, however, take into account the widely used World Health Organization (WHO) 2004/2016 grading classification and are based on patients treated in the 1980s. Objective: To update EAU prognostic factor risk groups using the WHO 1973 and 2004/2016 grading classifications and identify patients with the lowest and highest probabilities of progression. Design, Setting, And Participants: Individual patient data for primary NMIBC patients were collected from the institutions of the members of the EAU NMIBC guidelines panel. Intervention: Patients underwent TURBT followed by intravesical instillations at the physician's discretion. Outcome Measurements And Statistical Analysis: Multivariable Cox proportional-hazards regression models were fitted to the primary endpoint, the time to progression to muscle-invasive disease or distant metastases. Patients were divided into four risk groups: low-, intermediate-, high-, and a new, very high-risk group. The probabilities of progression were estimated using Kaplan-Meier curves. Results And Limitations: A total of 3401 patients treated with TURBT ± intravesical chemotherapy were included. From the multivariable analyses, tumor stage, WHO 1973/2004-2016 grade, concomitant carcinoma in situ, number of tumors, tumor size, and age were used to form four risk groups for which the probability of progression at 5 yr varied from &lt;1% to &gt;40%. Limitations include the retrospective collection of data and the lack of central pathology review. Conclusions: This study provides updated EAU prognostic factor risk groups that can be used to inform patient treatment and follow-up. Incorporating the WHO 2004/2016 and 1973 grading classifications, a new, very high-risk group has been identified for which urologists should be prompt to assess and adapt their therapeutic strategy when necessary. Patient Summary: The newly updated European Association of Urology prognostic factor risk groups for non-muscle-invasive bladder cancer provide an improved basis for recommending a patient's treatment and follow-up schedule. abstract_id: PUBMED:29074050 Discrepancy Between European Association of Urology Guidelines and Daily Practice in the Management of Non-muscle-invasive Bladder Cancer: Results of a European Survey. Background: The European Association of Urology (EAU) non-muscle-invasive bladder cancer (NMIBC) guidelines are meant to help minimise morbidity and improve the care of patients with NMIBC. However, there may be underuse of guideline-recommended care in this potentially curable cohort. Objective: To assess European physicians' current practice in the management of NMIBC and evaluate its concordance with the EAU 2013 guidelines. Design, Setting, And Participants: Initial 45-min telephone interviews were conducted with 20 urologists to develop a 26-item questionnaire for a 30-min online quantitative interview. A total of 498 physicians with predefined experience in treatment of NMIBC patients, from nine European countries, completed the online interviews. Outcome Measurements And Statistical Analysis: Descriptive statistics of absolute numbers and percentages of the use of diagnostic tools, risk group stratification, treatment options chosen, and follow-up regimens were used. Results And Limitations: Guidelines are used by ≥87% of physicians, with the EAU guidelines being the most used ones (71-100%). Cystoscopy (60-97%) and ultrasonography (42-95%) are the most used diagnostic techniques. Using EAU risk classification, 40-69% and 88-100% of physicians correctly identify all the prognostic factors for low- and high-risk tumours, respectively. Re-transurethral resection of the bladder tumour (re-TURB) is performed in 25-75% of low-risk and 55-98% of high-risk patients. Between 21% and 88% of patients received a single instillation of chemotherapy within 24h after TURB. Adjuvant intravesical treatment is not given to 6-62%, 2-33%, and 1-20% of the patients with low-, intermediate-, and high-risk NMIBC, respectively. Patients with low-risk NMIBC are likely to be overmonitored and those with high-risk NMIBC undermonitored. Our study is limited by the possible recall bias of the selected physicians. Conclusions: Although most European physicians claim to apply the EAU guidelines, adherence to them is low in daily practice. Patient Summary: Our survey among European physicians investigated discrepancies between guidelines and daily practice in the management of non-muscle-invasive bladder cancer (NMIBC). We conclude that the use of the recommended diagnostic tools, risk-stratification of NMIBC, and performance of re-TURB have been adopted, but adjuvant intravesical treatment and follow-up are not uniformly applied. abstract_id: PUBMED:35225148 Swedish National Guidelines on Urothelial Carcinoma: 2021 update on non-muscle invasive bladder cancer and upper tract urothelial carcinoma. Objective: To overview the updated Swedish National Guidelines on Urothelial Carcinoma 2021, with emphasis on non-muscle-invasive bladder cancer (NMIBC) and upper tract urothelial carcinoma (UTUC). Methods: A narrative review of the updated version of the Swedish National Guidelines on Urothelial Carcinoma 2021 and highlighting new treatment recommendations, with comparison to the European Association of Urology (EAU) guidelines and current literature. Results: For NMIBC the new EAU 2021 risk group stratification has been introduced for non-muscle invasive bladder cancer to predict risk of progression and the web-based application has been translated to Swedish (https://nmibc.net.). For patients with non-BCG -responsive disease treatment recommendations have been pinpointed, to guide patient counselling in this clinical situation. A new recommendation in the current version of the guidelines is the introduction of four courses of adjuvant platinum-based chemotherapy to patients with advanced disease in the nephroureterectomy specimen (pT2 or higher and/or N+). Patients with papillary urothelial neoplasms with low malignant potential (PUNLMP) can be discharged from follow-up already after 3 years based on a very low subsequent risk of further recurrences. Conclusions: The current version of the Swedish national guidelines introduces a new risk-stratification model and follow-up recommendation for NMIBC and adjuvant chemotherapy after radical surgery for UTUC. Answer: The compliance of referral centers for non-muscle invasive bladder cancer (NMIBC) to European Association of Urology (EAU) guidelines appears to be variable. A study conducted in Italian referral centers found that while adherence to EAU guidelines was optimal when reporting baseline variables, significant discrepancies emerged in treatment choices (PUBMED:21196690). Another European survey assessing the management of NMIBC among physicians revealed that although most claimed to apply the EAU guidelines, adherence to them was low in daily practice (PUBMED:29074050). The EAU guidelines provide detailed recommendations for the diagnosis, treatment, and follow-up of NMIBC, stratifying patients into low-, intermediate-, and high-risk groups to guide adjuvant treatment decisions (PUBMED:27324428; PUBMED:33419683). Despite the availability of these guidelines, the actual application in clinical practice may not always align with the recommended protocols. For instance, re-transurethral resection of the bladder tumor (re-TURB) is performed in a wide range of percentages of low-risk and high-risk patients, and adjuvant intravesical treatment is not given to a significant proportion of patients across different risk groups (PUBMED:29074050). In summary, while EAU guidelines are widely recognized and used by physicians, there is evidence of inconsistency in their application in the management of NMIBC across referral centers, with variations in the use of diagnostic tools, risk stratification, treatment, and follow-up regimens.
Instruction: Are high-volume surgeons and hospitals the most important predictors of in-hospital outcome for colon cancer resection? Abstracts: abstract_id: PUBMED:12219022 Are high-volume surgeons and hospitals the most important predictors of in-hospital outcome for colon cancer resection? Background: Although recent studies have reported that high-volume surgeons and hospitals have better outcomes for colon cancer resections, it remains unclear whether there are other factors that are more important than volume. This study aims to evaluate the importance of the volume variables relative to other factors in an attempt to target specific areas for improving outcomes. Methods: Using nationwide data from the Healthcare Cost and Utilization Program, full-model logistic regression was performed on all patients undergoing colon cancer resection. In hospital mortality was regressed against more than 30 different independent variables, including demographic factors (eg, age, gender, race, ethnicity, and socioeconomic status), burden of morbid and comorbid disease (prevalence and severity), and provider variables (eg, hospital size, location, teaching status, hospital and surgeon volume). A separate baseline probability analysis was then performed to compare the relative importance for all predictor variables. Results: The sample size for this analysis was 22,408; 622 in-hospital deaths occurred (2.8%). Average age was 70 years old, 51% of particIpants were male, and 60% had at least 1 comorbid disease. An operation was elective (64%), urgent (19%), or emergency (15%). The significant predictors for mortality (at P &lt;.05) included age, gender, comorbid disease (ie, cardiovascular, pulmonary, liver), operation severity (ie, emergency, urgent), and volume (both hospital and surgeon). The baseline probability analysis shows that the mortality for a baseline case is 12/1000. If this baseline case goes to a high-volume hospital or surgeon, the mortality will decrease to 11/1000 and 10/1000, respectively. If a patient with a baseline case of colon cancer also has coexistent liver disease or requires an emergency operation, mortality increases to 44/1000 and 45/1000, respectively. Overall, the volume variables, although statistically significant, have a relatively smaller effect on outcome compared with other factors. Conclusions: Whereas other factors have a stronger association with outcome than volume, volume is the only acutely mutable variable. Although the regionalization controversy (ie, using only high-volume surgeons or hospitals) is not solved with our findings, this study elucidates and compares the relative importance of several different factors on outcome, which is essential when considering the conclusions and implications of this type of policy-relevant outcomes research. abstract_id: PUBMED:31592096 Nationwide in-hospital mortality following colonic cancer resection according to hospital volume in Germany. Background: Colonic cancer is the most common cancer of the gastrointestinal tract. The aim of this study was to determine mortality rates following colonic cancer resection and the effect of hospital caseload on in-hospital mortality in Germany. Methods: Patients admitted with a diagnosis of colonic cancer undergoing colonic resection from 2012 to 2015 were identified from a nationwide registry using procedure codes. The outcome measure was in-hospital mortality. Hospitals were ranked according to their caseload for colonic cancer resection, and patients were categorized into five subgroups on the basis of hospital volume. Results: Some 129 196 colonic cancer resections were reviewed. The overall in-house mortality rate was 5·8 per cent, ranging from 6·9 per cent (1775 of 25 657 patients) in very low-volume hospitals to 4·8 per cent (1239 of 25 825) in very high-volume centres (P &lt; 0·001). In multivariable logistic regression analysis the risk-adjusted odds ratio for in-house mortality was 0·75 (95 per cent c.i. 0·66 to 0·84) in very high-volume hospitals performing a mean of 85·0 interventions per year, compared with that in very low-volume hospitals performing a mean of only 12·7 interventions annually, after adjustment for sex, age, co-morbidity, emergency procedures, prolonged mechanical ventilation and transfusion. Conclusion: In Germany, patients undergoing colonic cancer resections in high-volume hospitals had with improved outcomes compared with patients treated in low-volume hospitals. abstract_id: PUBMED:21279550 Elective resection of colon cancer by high-volume surgeons is associated with decreased morbidity and mortality. Background: The purpose of this study was to determine whether morbidity and mortality in patients undergoing elective resection of colon cancer are associated with surgeon or hospital volume. Methods: Using the Nationwide Inpatient Sample database, we identified all adult patients who underwent elective resection for colon cancer as their primary procedure between 2003 and 2007. Cases were divided into three groups according to the mean number of resections performed annually by each surgeon: low volume (≤4/year), intermediate volume (5-9/year), or high volume (≥10/year). Annual hospital case-load was also categorized as low volume (≤30/year), intermediate volume (31-60/year), and high volume (≥61/year). Multiple logistic regression models were used to identify differences in morbidity and mortality. Results: A total of 54,000 patients underwent resection of colon cancer by 7,313 surgeons in 1,398 hospitals. After adjusting for important covariates including hospital volume, colon cancer resection by high-volume surgeons was an independent predictor of decreased morbidity (odds ratio [OR], 0.91; 95% CI, 0.85-0.97) and mortality (OR, 0.75; 95% CI, 0.65-0.86). Mortality was lowest among patients operated on by high-volume surgeons in high-volume hospitals (2.2% vs. 3.9%; OR, 0.56; 95% CI, 0.46-0.68). Conclusions: In patients undergoing elective resection of colon cancer, procedures done by high-volume surgeons are associated with decreased morbidity and mortality. abstract_id: PUBMED:24044806 No difference between lowest and highest volume hospitals in outcome after colorectal cancer surgery in the southern Netherlands. Aim: To investigate the quality of surgical colorectal cancer (CRC) care in the southern Netherlands by evaluating differences between the five hospitals with the lowest volume and the five hospitals with the highest volume. Methods: Patients who underwent resection for primary CRC diagnosed between 2008 and 2011 in southern Netherlands were included (n = 5655). The five hospitals performing &lt;130 resections/year were classified 'low volume'; the five hospitals performing ≥ 130 resections/year 'high volume'. Differences in surgical approach, circumferential resection margins (CRM), anastomotic leakage and 30-day mortality between hospital volumes were analysed using Chi(2) tests. Expected proportions anastomotic leakage and 30-day mortality were calculated using multivariable logistic regression. Crude 3-year survival was calculated using Kaplan-Meier curves. Cox regression was used to discriminate independent risk factors for death. Results: 23% of patients with locally advanced rectal cancer (LARC) diagnosed in a low volume centre was referred to a high volume centre. Patients with colon cancer underwent less laparoscopic surgery and less urgent surgery in low compared to high volume hospitals (10% versus 32%, p &lt; 0.0001, and 8% versus 11%, p = 0.003, respectively). For rectal cancer, rates of abdominoperineal resections versus low anterior resections, and CRM were not associated with hospital volume. Anastomotic leakage, 30-day mortality, and survival did not differ between hospital volumes. Conclusion: In southern Netherlands, low volume hospitals deliver similar high quality surgical CRC care as high volume hospitals in terms of CRM, anastomotic leakage and survival, also after adjustment for casemix. However, this excludes LARC since a substantial proportion was referred to high volume hospitals. abstract_id: PUBMED:37974191 How to direct patients to high-volume hospitals: exploring the influencing drivers. Background: During the last decade, planning concentration policies have been applied in healthcare systems. Among them, attention has been given to guiding patients towards high-volume hospitals that perform better, acccording to the volume-outcome association. This paper analyses which factors drive patients to choose big or small hospitals (with respect to the international standards of volumes of activity). Methods: We examined colon cancer surgeries performed in Piedmont (Italy) between 2004 and 2018. We categorised the patient choice of the hospital as big/small, and we used this outcome as main dependent variable of descriptive statistics, tests and logistic regression models. As independent variables, we included (i) patient characteristics, (ii) characteristics of the closest big hospital (which should be perceived as the most immediate to be chosen), and (iii) territorial characteristics (i.e., characteristics of the set of hospitals among which the patient can choose). We also considered interactions among variables to examine which factors influence all or a subset of patients. Results: Our results confirm that patient personal characteristics (such as age) and hospital characteristics (such as distance) play a primary role in the patient decision process. The findings seem to support the importance of closing small hospitals when they are close to big hospitals, although differences emerge between rural and urban areas. Other interesting insights are provided by examining the interactions between factors, e.g., patients affected by comorbidities are more responsive to hospital quality even though they are distant. Conclusions: Reorganising healthcare services to concentrate them in high-volume hospitals emerged as a crucial issue more than forty years ago. Evidence suggests that concentration strategies guarantee better clinical performance. However, in healthcare systems in which patients are free to choose where to be treated, understanding patients' behaviour and what drives them towards the most effective choice is of paramount importance. Our study builds on previous research that has already analysed factors influencing patients' choices, and takes a step further to enlighten which factors drive patients to choose between a small or a big hospital (in terms of volume). The results could be used by decision makers to design the best concentration strategy. abstract_id: PUBMED:12772198 Surgeon volume compared to hospital volume as a predictor of outcome following primary colon cancer resection. Background And Objectives: A strong association between high hospital procedure volume and survival following colon cancer resection has been demonstrated. However, the importance of surgeon case volume as a determinant of outcome has been less well studied, and it is unclear whether hospital or surgeon volume is the more powerful predictor of outcomes. Methods: A retrospective population-based cohort study utilizing the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database identified 24,166 colon cancer patients aged 65 years and older who had surgery for a primary tumor diagnosed in 1991-1996 in a SEER area. Hospital and surgeon-specific procedure volume was ascertained based on the number of claims submitted over the 6-year study period. Outcome measures were mortality at 30 days and 2 years, overall survival, and the frequency of operations requiring an intestinal stoma. Age, sex, race, comorbid illness, cancer stage, socioeconomic status, emergent hospitalization, and the presence of obstruction/perforation were used to adjust for differences in case-mix. Results: After adjusting for surgeon procedure volume, high hospital procedure volume remained a strong predictor of low post-operative mortality rates (P &lt; 0.001 for each outcome with and without adjustment for surgeon procedure volume). Surgeon-specific procedure volume was also an important predictor of surgical outcomes (P = 0.002 for 30-day mortality, P = 0.001 for 2-year mortality), although this effect was attenuated after adjusting for hospital volume (P = 0.03 for 30-day mortality, P = 0.02 for 2-year mortality). Hospital volume and surgeon volume were each an important predictor of the ostomy rate. Among high volume institutions and surgeons, individual providers with unusually high ostomy rates could be identified. Conclusions: Both hospital and surgeon-specific procedure volume predict outcomes following colon cancer resection; but hospital volume may exert a stronger effect. Therefore, efforts to optimize the quality of colon cancer surgery should focus on multidisciplinary aspects of hospital care rather than solely on intraoperative technique. abstract_id: PUBMED:15798899 Impact of hospital case volume on short-term outcome after laparoscopic operation for colonic cancer. Background: High hospital case volume has been associated with improved outcome after open operation for colorectal malignancies. Methods: To assess the impact of hospital case volume on short-term outcome after laparoscopic operation for colon cancer, we conducted an analysis of patients who underwent laparoscopic colon resection within the COlon Cancer Laparoscopic or Open Resection (COLOR) trial. Results: A total of 536 patients with adenocarcinoma of the colon were included in the analysis. Median operating time was 240, 210 and 188 min in centers with low, medium, and high case volumes, respectively (p &lt; 0.001). A significant difference in conversion rate was observed among low, medium, and high case volume hospitals (24% vs 24% vs 9%; p &lt; 0.001). A higher number of lymph nodes were harvested at high case volume hospitals (p &lt; 0.001). After operation, fewer complications (p = 0.006) and a shorter hospital stay (p &lt; 0.001) were observed in patients treated at hospitals with high caseloads. Conclusions: Laparoscopic operation for colon cancer at hospitals with high caseloads appears to be associated with improved short-term results. abstract_id: PUBMED:10493487 Hospital volume can serve as a surrogate for surgeon volume for achieving excellent outcomes in colorectal resection. Objective: To examine the association of surgeon and hospital case volumes with the short-term outcomes of in-hospital death, total hospital charges, and length of stay for resection of colorectal carcinoma. Methods: The study design was a cross-sectional analysis of all adult patients who underwent resection for colorectal cancer using Maryland state discharge data from 1992 to 1996. Cases were divided into three groups based on annual surgeon case volume--low (&lt; or =5), medium (5 to 10), and high (&gt;10)--and hospital volume--low (&lt;40), medium (40 to 70), and high (&gt; or =70). Poisson and multiple linear regression analyses were used to identify differences in outcomes among volume groups while adjusting for variations in type of resections performed, cancer stage, patient comorbidities, urgency of admission, and patient demographic variables. Results: During the 5-year period, 9739 resections were performed by 812 surgeons at 50 hospitals. The majority of surgeons (81%) and hospitals (58%) were in the low-volume group. The low-volume surgeons operated on 3461 of the 9739 total patients (36%) at an average rate of 1.8 cases per year. Higher surgeon volume was associated with significant improvement in all three outcomes (in-hospital death, length of stay, and cost). Medium-volume surgeons achieved results equivalent to high-volume surgeons when they operated in high- or medium-volume hospitals. Conclusions: A skewed distribution of case volumes by surgeon was found in this study of patients who underwent resection for large bowel cancer in Maryland. The majority of these surgeons performed very few operations for colorectal cancer per year, whereas a minority performed &gt;10 cases per year. Medium-volume surgeons achieved excellent outcomes similar to high-volume surgeons when operating in medium-volume or high-volume hospitals, but not in low-volume hospitals. The results of low-volume surgeons improved with increasing hospital volume but never equaled those of the high-volume surgeons. abstract_id: PUBMED:20404284 Teaching hospital status and operative mortality in the United States: tipping point in the volume-outcome relationship following colon resections? Objective: To compare risk- and volume-adjusted outcomes of colon resections performed at teaching hospitals (THs) vs non-THs to assess whether benign disease may influence the volume-outcome effect. Design: Retrospective data analysis examining colon resections determined by International Classification of Diseases, Ninth Revision, Clinical Modification classification performed in the United States from 2001 through 2005 using the Nationwide Inpatient Sample (NIS) and the Area Resource File (2004). Patient covariates used in adjustment included age, sex, race, Charlson Index comorbidity score, and insurance status. Hospital covariates included TH status, presence of a colorectal surgery fellowship approved by the Accreditation Council for Graduate Medical Education, geographical region, institutional volume, and urban vs rural location. County-specific surgeon characteristics used in adjustment included average age of surgeons and proportion of colorectal board-certified surgeons within each county. Environmental or county covariates included median income and percentage of county residents living below the federal poverty level. Setting: A total of 1045 hospitals located in 38 states in the United States that were included in the NIS. Patients: All patients older than 18 years who had colon resection and were discharged from a hospital included in the NIS. Main Outcome Measures: Operative mortality, length of stay (LOS), and total charges. Results: A total of 115 250 patients were identified, of whom 4371 died (3.8%). The mean LOS was 10 days. Fewer patients underwent surgical resection in THs than in non-THs (46 656 vs 68 589). Teaching hospitals were associated with increased odds of death (odds ratio, 1.14) (P = .03), increased LOS (P = .003), and a nonsignificant trend toward an increase in total charges (P = .36). Conclusions: With the inclusion of benign disease, colon surgery displays a volume-outcome relationship in favor of non-THs. Inclusion of benign disease may represent a tipping point. abstract_id: PUBMED:14568853 Association of hospital procedure volume and outcomes in patients with colon cancer at high risk for recurrence. Background: Studies that use registry data have demonstrated superior long-term overall survival after curative surgical resection of colon cancer at hospitals where the volume of such surgeries is high. However, because such administrative data lack information on cancer recurrence, the true nature of this relation remains uncertain. Objective: To determine whether hospital procedure volume predicts long-term outcomes of colon cancer surgery. Design: Nested cohort study within a randomized clinical trial. Setting: Intergroup 0089 national adjuvant colon cancer study conducted between 1988 and 1992. Patients: 3161 patients with high-risk stage II and stage III colon cancer. Measurements: Overall survival and recurrence-free survival, by hospital procedure volume as defined by Medicare claims data. Results: With a median follow-up of 9.4 years, 5-year overall survival significantly differed across tertiles of hospital procedure volume (63.8% for patients who had resection at low-volume hospitals compared with 67.3% at high-volume hospitals; P = 0.04). After adjustment for other predictors of colon cancer outcome, the hazard ratio for overall mortality in patients treated at low-volume centers was 1.16 (95% CI, 1.03 to 1.32). However, the risk for cancer recurrence was not associated with hospital procedure volume. Five-year recurrence-free survival was 63.9% for patients who had resection at low-volume hospitals compared with 63.0% at high-volume hospitals (adjusted hazard ratio, 1.03 [CI, 0.89 to 1.18]). These findings did not materially change after stratification by other potential demographic and clinical predictors of outcome. Conclusions: According to prospectively recorded data from a large clinical trial, patients whose colon cancer was resected at low-volume hospitals experienced a higher risk for long-term mortality; however, this increased mortality was not attributable to differences in colon cancer recurrences. Answer: High-volume surgeons and hospitals have been associated with better outcomes for colon cancer resections, but they are not the sole or most important predictors of in-hospital outcomes. Several studies have found that while high-volume surgeons and hospitals are significant predictors of lower mortality and morbidity rates, other factors may have a stronger association with outcomes. For instance, one study using nationwide data from the Healthcare Cost and Utilization Program found that although hospital and surgeon volume were statistically significant predictors of mortality, other factors such as age, gender, comorbid disease, and operation severity (emergency, urgent) had a relatively larger effect on outcomes. The volume variables had a smaller effect on outcome compared to these other factors (PUBMED:12219022). Similarly, another study from Germany showed that patients undergoing colonic cancer resections in high-volume hospitals had improved outcomes compared with patients treated in low-volume hospitals (PUBMED:31592096). Additionally, a study using the Nationwide Inpatient Sample database indicated that high-volume surgeons were associated with decreased morbidity and mortality in patients undergoing elective resection of colon cancer (PUBMED:21279550). However, a study from the southern Netherlands found no difference in outcomes such as anastomotic leakage, 30-day mortality, and survival between the lowest and highest volume hospitals after colorectal cancer surgery (PUBMED:24044806). This suggests that in some regions, hospital volume may not be the most critical factor for outcomes. Moreover, a study examining the influence of hospital and surgeon volume on outcomes following primary colon cancer resection found that both hospital and surgeon-specific procedure volumes predict outcomes, but hospital volume may exert a stronger effect (PUBMED:12772198). In conclusion, while high-volume surgeons and hospitals are important predictors of better in-hospital outcomes for colon cancer resection, they are not the most important factors. Other demographic, clinical, and operative variables can have a more significant impact on patient outcomes. Efforts to improve colon cancer surgery outcomes should consider a multifaceted approach that includes but is not limited to increasing surgeon and hospital volumes.
Instruction: Does cannabis use lead to depression and suicidal behaviours? Abstracts: abstract_id: PUBMED:34741634 Associations between cannabis use, cannabis use disorder, and mood disorders: longitudinal, genetic, and neurocognitive evidence. Rationale: Cannabis use among people with mood disorders increased in recent years. While comorbidity between cannabis use, cannabis use disorder (CUD), and mood disorders is high, the underlying mechanisms remain unclear. Objectives: We aimed to evaluate (1) the epidemiological evidence for an association between cannabis use, CUD, and mood disorders; (2) prospective longitudinal, genetic, and neurocognitive evidence of underlying mechanisms; and (3) prognosis and treatment options for individuals with CUD and mood disorders. Methods: Narrative review of existing literature is identified through PubMed searches, reviews, and meta-analyses. Evidence was reviewed separately for depression, bipolar disorder, and suicide. Results: Current evidence is limited and mixed but suggestive of a bidirectional relationship between cannabis use, CUD, and the onset of depression. The evidence more consistently points to cannabis use preceding onset of bipolar disorder. Shared neurocognitive mechanisms and underlying genetic and environmental risk factors appear to explain part of the association. However, cannabis use itself may also influence the development of mood disorders, while others may initiate cannabis use to self-medicate symptoms. Comorbid cannabis use and CUD are associated with worse prognosis for depression and bipolar disorder including increased suicidal behaviors. Evidence for targeted treatments is limited. Conclusions: The current evidence base is limited by the lack of well-controlled prospective longitudinal studies and clinical studies including comorbid individuals. Future studies in humans examining the causal pathways and potential mechanisms of the association between cannabis use, CUD, and mood disorder comorbidity are crucial for optimizing harm reduction and treatment strategies. abstract_id: PUBMED:31780343 Moderating role of cannabis use between insight and depression in early psychosis. Background: A high level of insight in first episode psychosis (FEP) is positively correlated to important prognostic factors such as medication adherence and functional outcome but is associated with increased depression level and suicidal behavior. Aims: This is the first study questioning the potential moderating role of cannabis use in the relationship between insight and depression one year after a FEP. Method: In this prospective observational study, we enrolled 214 FEP patients who had provided informed consent and been referred to a specialized early psychosis program and followed for 36 months. A series of multivariate regression models were used. Baseline insight, medication adherence and cannabis use (level of use on a continuum) were entered as independent variables, while the PANSS (positive and negative), the MADRS and the SOFAS scores after one year were alternately selected as the dependent variable. Results: We found a three-way interaction term between cannabis use, insight and medication adherence on depression level one year after the entry into the program. A high level of insight was significantly associated with higher MADRS scores in patients with high cannabis use, while depression decreased in high-insight patients with low cannabis use. Conclusions: Cannabis use continuation during the year following a first episode psychosis may play a significant role in the development or the maintenance of post-psychotic depression in patients who present with high level of insight and adherence to medication, stressing the need for specific therapeutic strategies in this subgroup of patients. abstract_id: PUBMED:32487935 Cannabis Use as a Risk Factor for Depression, Anxiety, and Suicidality: Epidemiological Associations and Implications for Nurses. Psychiatric disorders and substance use disorders commonly co-occur and are major public health concerns given the morbidity and mortality associated with them. Globally, cannabis is among the most commonly used drugs, and cannabis use frequently begins in adolescence or emerging adulthood, both important periods of development and periods in which psychiatric symptoms and disorders frequently emerge. Thus, the relationships between cannabis use and mental illnesses are essential for nurses and other healthcare professionals and researchers to explore and understand. This literature review examines the relationships between cannabis use and depression, anxiety, and suicide. It includes a primer on the neurobiology of cannabis effects; an overview on the epidemiological evidence on the associations between cannabis use and depression, anxiety, and suicide; and a discussion of implications for nurses, particularly important given changes in the medical and recreational cannabis legislation in North America. Overall, this review found consistent evidence showing a cross-sectional association between recreational cannabis use and depression, anxiety, and suicide, despite some limitations and conflicts in the literature. In addition, most evidence from longitudinal or case control studies suggested cannabis use preceded the development of depressive symptoms and suicidal behaviors, although the quality of this evidence was mixed. Implications for future research and nursing practice are discussed. abstract_id: PUBMED:37538853 Cannabis Use Is Associated With Depression Severity and Suicidality in the National Comorbidity Survey-Adolescent Supplement. Objective: To investigate the association of cannabis use with major depression and suicidal behavior in adolescence. Method: Data are from the National Comorbidity Survey-Adolescent Supplement N=10,123, a nationally representative survey of adolescents aged 13 to 18 years. Weighted logistic regression and ordinal regression analyses of major depression and suicidal behavior outcomes were conducted on cannabis variables, incorporating sociodemographic characteristics. Results: Adolescents with lifetime cannabis use have 2.07 times higher odds of mild/moderate (adjusted odds ratio [aOR]; 95% CI=1.69, 2.53) and 3.32 times higher odds of severe major depressive disorder (MDD; aOR; 95% CI=2.31, 4.75). Cannabis use (aOR 6.90, 95% CI=4.67,10.19), mild/moderate MDD (aOR 4.10, 95% CI=2.82, 5.98), and severe MDD (aOR 13.97, 95% CI = 7.59, 25.70) were associated with higher odds of suicide attempt. Past 12-month cannabis use (aOR 3.70, 95% CI = 2.16, 6.32), mild/moderate major depressive episodes (MDE) (aOR 7.85, 95% CI=3.59, 17.17), and severe MDE (aOR 36.36, 95% CI=13.68,96.64) were associated with higher odds of suicide attempt. The frequency of past 12-month cannabis use was associated with higher odds of suicide attempt and with MDE severity, with higher odds among individuals who use cannabis 3 or more days than among individuals who use cannabis less frequently, suggesting a dose effect. Among cannabis users, older age of onset of cannabis use was associated with lower odds of suicidal behaviors. Conclusion: Cannabis use is associated with higher odds of depression and depression severity in adolescence. Furthermore, depression and cannabis use are independently associated with higher odds of suicide attempt. Diversity & Inclusion Statement: We worked to ensure sex and gender balance in the recruitment of human participants. We worked to ensure race, ethnic, and/or other types of diversity in the recruitment of human participants. The author list of this paper includes contributors from the location and/or community where the research was conducted who participated in the data collection, design, analysis, and/or interpretation of the work. One or more of the authors of this paper self-identifies as a member of one or more historically underrepresented racial and/or ethnic groups in science. We actively worked to promote sex and gender balance in our author group. We actively worked to promote inclusion of historically underrepresented racial and/or ethnic groups in science in our author group. abstract_id: PUBMED:37923416 Association Between Suicidal Behaviour and Cannabis and Tranquilizer use, Depression, Aggression and Other Borderline Personality Traits Among Students in Sincelejo, Colombia. Objective: This objective of this study is to examine the association between suicidal behaviour and substance use, depression, aggressiveness and borderline personality traits among adolescents from Sincelejo, a rural city in the north of Colombia. Methods: This cross sectional study included 352 participants selected by purposive sampling, from a public and a private school located in Sincelejo, Sucre district, in the north of Colombia. Students ages ranged from 12 to 18 years old (mean, 15.09±1.82). The participants completed three screening tools: a socio-demographic questionnaire, a screening instrument to collect information related to the frequency of use of some substances, such as tobacco and cannabis, and a self-report inventory to assess various personality and psychopathology domains. A series of t-tests, ANOVA and linear regression analyses were conducted. Results: Physical aggression (t=7.74; p &lt;0.01), cognitive depression (t=5.03; p &lt;0.01), affective depression (t=8.24; p &lt;0.01), affective instability (t=3.46; p &lt;0.01), few social relationships (t=3.36; p &lt;0.01), self-harm (t=3.45; p&lt;.01), cannabis and tranquilizer use (t=2.83; p &lt;0.05; and t=2.37; p &lt;0.05) had a significant independent relationship with suicidal behaviour. Aggression (t=2.59; p &lt;0.05), components of depression (t=9.03; p &lt;0.01) and borderline personality traits (t=4.12; p &lt;0.01) also predicted suicidal behaviour. Conclusions: More longitudinal studies are needed in this area to identify the causal relations between the factors studied and the suicidal behaviour of young people in Sincelejo. abstract_id: PUBMED:18798834 Does cannabis use lead to depression and suicidal behaviours? A population-based longitudinal study. Objective: This study investigates the relationship between cannabis use and later depression, suicidal ideation and suicide attempts in a cohort of young Norwegians. Method: Data were gathered through the Young in Norway longitudinal study, in which a population-based sample of 2033 Norwegians were followed up over a 13-year period, from their early teens to their late twenties. Data were gathered on: (a) exposure to cannabis use; and (b) depression, suicide ideation and suicide attempts. In addition, information about possible confounding factors was included. Results: In early adolescence, no associations with later depression or suicidal behaviours were observed. In the sample's twenties, we observed highly significant associations with suicide ideation and suicide attempts. When adjusting for confounders, the OR was 2.9 (95% CI 1.3-6.1) for later suicide attempts in the group who had used cannabis 11+ times during the past 12 months. Conclusion: The findings suggest that exposure to cannabis by itself does not lead to depression but that it may be associated with later suicidal thoughts and attempts. abstract_id: PUBMED:35428897 The impact of recreational cannabis legalization on youth: the Colorado experience. Over the past two decades, cannabis policies have liberalized in the United States, culminating in the legalization of recreational cannabis in Colorado and Washington state in 2012. This review will focus on the impact of recreational cannabis legalization in Colorado. While the prevalence of past-year and past-month cannabis use has remained relatively steady among adolescents post-legalization, the prevalence of young adult use has continued to rise. Cannabis products with increased potency have become more prevalent and widely used. There are concerns that the use of products with increased potency will increase risk for cannabis use and comorbid mental health disorders, particularly cannabis-induced psychosis and suicidal behavior, although more research is needed to fully understand the impact of high-potency products. Since the legalization of recreational cannabis, numerous additional public health concerns have emerged: unintentional overdoses requiring acute care visits, electronic vaping-associated lung injury, increased motor vehicle collisions related to cannabis consumption, and higher rates of cannabis use during pregnancy. Lessons learned from the first decade of cannabis legalization emphasize the need for research efforts to provide evidence-based guidelines, inform prevention strategies, and implement selective interventions for at-risk populations including youth, individuals with comorbid mental health disorders, and individuals who are pregnant. abstract_id: PUBMED:38009854 Suicide Attempts in an Italian Population with Cannabis Use Disorders: Results of a Follow-Up Study. The relationship between cannabis use and suicidal behavior is complex, with no consensus in the literature. We used electronic health records of national health services to identify individuals who received a diagnosis of Cannabis Use Disorder in the Metropolitan area of Bologna from 2009 to 2019. In this cohort we identified accesses to Emergency Departments for suicide attempts from 2009 to 2019. The Crude Suicide Rate for 1,000 Person Years was 2.5, higher in females, in patients with Alcohol Use Disorders, with any psychiatric diagnosis, within one year from the first visit, and during the COVID-19 period. The risk was over 22 times higher than in the general population. Considering the high prevalence of cannabis use in the general population and the consequent risk of Cannabis Use Disorders, these data suggest the importance of a clinical evaluation for suicidal risk. abstract_id: PUBMED:35966616 Early Age of Cannabis Initiation and Its Association With Suicidal Behaviors. Background: Suicide rates in the U.S. have continued to rise over the last 2 decades. The increased availability and broader legalization of cannabis is a public health concern, particularly among adolescents. The objective of this study was to examine the association between the age of cannabis initiation and lifetime suicidal ideations and attempts in a sample of adults aged 18 or older. Methods: Data are from the Collaborative Psychiatric Epidemiology Surveys (CPES), 2001 to 2003 (N = 15 238). The primary objective of the CPES was to collect data about the prevalence of mental disorders, impairments associated with these disorders, and their treatment patterns from representative samples of majority and minority adult populations in the U.S. Logistic regression analyses were conducted to test the association between cannabis initiation age (early ⩽14 years old; later &gt;14 years old) and outcomes of lifetime suicide ideation and attempts. Cigarette use, cannabis use, gender, income, race, education, and age were controlled for the analyses. Results: Overall, 12.5% of participants reported suicide ideation, while 4.2% reported attempt. Early cannabis use was associated with a higher risk of suicide ideation (AOR = 3.32, 95% CI [2.75, 3.80]) than later cannabis use (AOR = 2.15, 95% CI [1.92, 2.39]). Early cannabis use was associated with a higher risk of suicide attempt (AOR = 4.38, 95% CI [3.48, 5.52]) than later cannabis use (AOR = 2.56, 95% CI [2.14, 3.06]). Wald chi-squared tests revealed significant differences between the early and late initiation for both ideation (χ2 = 26.99; P &lt; .001) and attempts (χ2 = 26.02; P &lt; .001). Conclusions: Significant associations were found between early initiation of cannabis and suicide behaviors. As suicide rates continue to rise, it is recommended that clinicians, treatment providers, and other professionals consider the use of cannabis at an early age as a risk for subsequent suicide behaviors. abstract_id: PUBMED:30758486 Association of Cannabis Use in Adolescence and Risk of Depression, Anxiety, and Suicidality in Young Adulthood: A Systematic Review and Meta-analysis. Importance: Cannabis is the most commonly used drug of abuse by adolescents in the world. While the impact of adolescent cannabis use on the development of psychosis has been investigated in depth, little is known about the impact of cannabis use on mood and suicidality in young adulthood. Objective: To provide a summary estimate of the extent to which cannabis use during adolescence is associated with the risk of developing subsequent major depression, anxiety, and suicidal behavior. Data Sources: Medline, Embase, CINAHL, PsycInfo, and Proquest Dissertations and Theses were searched from inception to January 2017. Study Selection: Longitudinal and prospective studies, assessing cannabis use in adolescents younger than 18 years (at least 1 assessment point) and then ascertaining development of depression in young adulthood (age 18 to 32 years) were selected, and odds ratios (OR) adjusted for the presence of baseline depression and/or anxiety and/or suicidality were extracted. Data Extraction And Synthesis: Study quality was assessed using the Research Triangle Institute item bank on risk of bias and precision of observational studies. Two reviewers conducted all review stages independently. Selected data were pooled using random-effects meta-analysis. Main Outcomes And Measures: The studies assessing cannabis use and depression at different points from adolescence to young adulthood and reporting the corresponding OR were included. In the studies selected, depression was diagnosed according to the third or fourth editions of Diagnostic and Statistical Manual of Mental Disorders or by using scales with predetermined cutoff points. Results: After screening 3142 articles, 269 articles were selected for full-text review, 35 were selected for further review, and 11 studies comprising 23 317 individuals were included in the quantitative analysis. The OR of developing depression for cannabis users in young adulthood compared with nonusers was 1.37 (95% CI, 1.16-1.62; I2 = 0%). The pooled OR for anxiety was not statistically significant: 1.18 (95% CI, 0.84-1.67; I2 = 42%). The pooled OR for suicidal ideation was 1.50 (95% CI, 1.11-2.03; I2 = 0%), and for suicidal attempt was 3.46 (95% CI, 1.53-7.84, I2 = 61.3%). Conclusions And Relevance: Although individual-level risk remains moderate to low and results from this study should be confirmed in future adequately powered prospective studies, the high prevalence of adolescents consuming cannabis generates a large number of young people who could develop depression and suicidality attributable to cannabis. This is an important public health problem and concern, which should be properly addressed by health care policy. Answer: The relationship between cannabis use and the development of depression and suicidal behaviors is a complex and multifaceted issue, with various studies suggesting different aspects of this association. Several studies have found evidence of an association between cannabis use and an increased risk of depression and suicidal behaviors. For instance, adolescents with lifetime cannabis use have been found to have higher odds of mild/moderate and severe major depressive disorder (MDD), as well as higher odds of suicide attempts (PUBMED:37538853). Furthermore, a systematic review and meta-analysis indicated that cannabis use during adolescence is associated with an increased risk of developing major depression and suicidal behavior in young adulthood (PUBMED:30758486). Longitudinal studies have also suggested that cannabis use may precede the development of depressive symptoms and suicidal behaviors, although the quality of evidence is mixed (PUBMED:32487935). In particular, early initiation of cannabis use has been associated with a higher risk of suicidal ideation and attempts compared to later initiation (PUBMED:35966616). However, the relationship is not straightforward, as some studies have found that the association between cannabis use and depression or suicidal behaviors may be moderated by other factors. For example, a study found that a high level of insight in first episode psychosis (FEP) patients was associated with higher depression levels in patients with high cannabis use, while depression decreased in high-insight patients with low cannabis use (PUBMED:31780343). This suggests that cannabis use may interact with other variables, such as insight and medication adherence, to influence depression outcomes. Moreover, the evidence is not entirely consistent across all studies. For example, one study found no associations between cannabis use in early adolescence and later depression or suicidal behaviors, but did observe significant associations with suicide ideation and attempts in the sample's twenties (PUBMED:18798834). Overall, while there is consistent evidence showing a cross-sectional association between recreational cannabis use and depression, anxiety, and suicide, the causal nature of this relationship and the underlying mechanisms remain to be fully elucidated. Future research, including well-controlled prospective longitudinal studies, is needed to better understand the causal pathways and potential mechanisms of the association between cannabis use and mood disorder comorbidity (PUBMED:34741634).
Instruction: Insulation defects of thin high-voltage ICD leads: an underestimated problem? Abstracts: abstract_id: PUBMED:21457385 Insulation defects of thin high-voltage ICD leads: an underestimated problem? Background: Long-term lead failure is a known complication of ICD therapy. The precise incidence and sequelae of insulation defects at the tricuspid level, however, are not well characterized. Objective: This study determined the risk of lead failure, with particular emphasis on insulation defects at the level of the tricuspid valve, in a large series of consecutive ICD recipients. Methods: Data from 357 consecutive patients, who had received transvenous 7 and 8 French ICD-leads (St. Jude Medical, Riata family) and were followed at our center, formed the basis of this study. Results: During a mean follow-up of 42 ± 24 months, 30 of 357 (8%) patients required surgical intervention due to lead failure. For overall lead defects, lead access via the subclavian vein and subpectoral device placement were independent predictors of overall lead failure (OR 3.47, 95% CI 1.38-8.72, P = 0.013 and OR 3.83, 95% CI 1.77-8.27, P = 0.001, respectively). Lead insulation defects at the level of the tricuspid valve accounted for 20% of all lead failures. Diagnosis of this specific insulation defect could only be established by fluoroscopy, while electrical parameters were within normal limits in all of these patients. On univariate but not on multivariate analysis the presence of nonischemic cardiomyopathy was a predictor of this lead complication (OR 8.2, CI 1.5-46.1, P = 0.02). Conclusion: Insulation defects of 7 and 8 French ICD leads at the tricuspid level represent an important complication of device therapy. Even moderate changes in lead impedance within the normal limits at follow-up should prompt careful fluoroscopic evaluation to avoid spurious shocks. abstract_id: PUBMED:25504254 A new application area for fullerenes: voltage stabilizers for power cable insulation. Fullerenes are shown to be efficient voltage-stabilizers for polyethylene, i.e., additives that increase the dielectric strength of the insulation material. Such compounds are highly sought-after because their use in power-cable insulation may considerably enhance the transmission efficiency of tomorrow's power grids. On a molal basis, fullerenes are the most efficient voltage stabilizers reported to date. abstract_id: PUBMED:34048610 Repurposing Poly(3-hexylthiophene) as a Conductivity-Reducing Additive for Polyethylene-Based High-Voltage Insulation. Poly(3-hexylthiophene) (P3HT) is found to be a highly effective conductivity-reducing additive for low-density polyethylene (LDPE), which introduces a new application area to the field of conjugated polymers. Additives that reduce the direct-current (DC) electrical conductivity of an insulation material at high electric fields have gained a lot of research interest because they may facilitate the design of more efficient high-voltage direct-current power cables. An ultralow concentration of regio-regular P3HT of 0.0005 wt% is found to reduce the DC conductivity of LDPE threefold, which translates into the highest efficiency reported for any conductivity-reducing additive to date. The here-established approach, i.e., the use of a conjugated polymer as a mere additive, may boost demand in absolute terms beyond the quantities needed for thin-film electronics, which would turn organic semiconductors from a niche product into commodity chemicals. abstract_id: PUBMED:27581623 Ultrastable Natural Ester-Based Nanofluids for High Voltage Insulation Applications. Nanofluids for high voltage insulation systems have emerged as a potential substitute for liquid dielectrics in industrial applications. Nevertheless, the sedimentation of nanoparticles has been so far a serious barrier for their wide and effective exploitation. The present work reports on the development and in-depth characterization of colloidally ultrastable natural ester oil insulation systems containing iron oxide nanocrystals which lift the problem of sedimentation and phase separation. Compared to state-of-the-art systems, the final product is endowed with increased dielectric strength, faster thermal response, lower dielectric losses (decreased dissipation factor: tan δ), and very high endurance during discharge stressing. The developed nanofluid was studied and compared with a similar system containing commercial iron oxide nanoparticles, the latter demonstrating extensive sedimentation. Herein, the dielectric properties of the nanofluids are analyzed at various concentrations by means of breakdown voltage and dissipation factor measurements. The characterization techniques unequivocally demonstrate the high performance reliability of the reported nanofluid, which constitutes a significant breakthrough in the field of high voltage insulation technologies. abstract_id: PUBMED:30175586 Solution-Processable, Thin, and High-κ Dielectric Polyurea Gate Insulator with Strong Hydrogen Bonding for Low-Voltage Organic Thin-Film Transistors. We developed a solution-processable, thin, and high-dielectric polyurea-based organic gate insulator for low-voltage operation and high performance of organic thin-film transistors (OTFTs). A 60 nm-thick polyurea thin film exhibited a high dielectric constant of 5.82 and excellent electrical insulating properties owing to strong hydrogen bonding. The hydrogen bonding of the synthesized polyurea was confirmed using infrared spectroscopy and was quantitatively evaluated by measuring the interactive force using atomic force microscopy. Moreover, the effect of hydrogen bonding of polyurea on the insulating properties was systematically investigated through the combination of various monomers and control of the thickness of the polyurea film. The dinaphtho[2,3- b:2',3'- f]thieno[3,2- b]thiophene-based OTFTs with the polyurea gate insulator showed excellent thin-film transistor (TFT) performance with a field-effect mobility of 1.390 cm2/V·s and an on/off ratio of ∼105 at a low operation voltage below 2 V. In addition, it is possible to fabricate flexible polymer organic semiconductor (OSC)-based TFT devices using a solution process, owing to excellent solvent stability in various organic solvents. We believe that the solution-processable polyurea gate insulator with a high dielectric constant and good insulation properties is a promising candidate for low-voltage-operated OTFTs using various OSCs. abstract_id: PUBMED:36431615 Aerospace Environmental Challenges for Electrical Insulation and Recent Developments for Electrified Aircraft. The growing trend towards high voltage electrical assets and propulsion in the aeronautics and space industry pose new challenges in electrical insulation materials that cannot be overlooked. Transition to new high voltage electrified systems with unprecedented high levels of voltage, power, and efficiency must be safe and reliable. Improvements in both performance and safety of megawatt power systems is complicated because of the need for additional power transmission wiring and cabling and new safety requirements that have the potential of making the resulting systems heavier. To mitigate this issue, novel lightweight materials and system solutions are required that would result in lower specific weights in the insulator and conductor. Although reduced size and weight of system components can be achieved with new concepts, designs, and technologies, the high voltage (≥300 V) operation presents a significant challenge. This challenge is further complicated when considering the extreme operating environment that is experienced in aircraft, spacecraft, and targeted human exploration destinations. This paper reviews the extreme environmental challenges for aerospace electrical insulation and the needs associated with operating under high voltage and extreme environments. It also examines several recently developed robust lightweight electrical insulation materials that could enhance insulation performance and life. In aerospace, research must consider mass when developing new technologies. The impact of these recent developments provides a pathway which could enable next generation high altitude all electric aircraft, lightweight power transmission cables for a future sustained presence on the Moon and missions to Mars using HV propulsion, such as spacecraft with Nuclear Electric Propulsion systems. abstract_id: PUBMED:33430366 Increase in Fast Response Time of the Resistance-to-Voltage Converter When Monitoring the Cable Products' Insulation Resistance. Theoretical and experimental studies were conducted to investigate the impact of the cable capacitance during measurements of insulation resistance on the fast response time of a resistance-to-voltage converter. From a comparison of the results of simulation with the data obtained during the experiments, it was determined that the dependence characteristics of the settling time of resistance under measurement on the capacitance are identical to the analogous characteristics of electronic components of the resistance-to-voltage converter. It was experimentally proven that using T-shaped feedback in the resistance-to-voltage converter during the cable insulation resistance measurements reduces the settling time of the data by 1-3 times in comparison with a classical feedback system. Furthermore, when using the optimal parameters, the settling time of the resistance-to-voltage converter with T-shaped feedback depends to a lesser degree on the capacitance of the object under control. abstract_id: PUBMED:26983433 Insulation failure in electrosurgery instrumentation: a prospective evaluation. Background: The use of electrosurgery has expanded to a wide variety of surgical specialities, but it has also been accompanied by its share of complications, including thermal injuries to nontargeted tissues, caused by a break or defect in the insulation of the instrument's coat. The purpose of this study was to determine the prevalence and the location of insulation failures (IFs) in electrosurgical instruments, then to assess the necessity of routine IF testing. Methods: Electrosurgical instruments were visually inspected and checked for IF using a high-voltage detector. Two different detectors were used during two testing sessions: DTU-6 (Petel company) and DIATEG (Morgate company). Laparoscopic and non-laparoscopic instruments were determined to have IF if current crossed the instrument's insulation, signaled by an alarm sound. Results: A total of 489 instruments were tested. The overall prevalence of IFs was 24.1 % with only visual inspection and 37.2 % with the IF detector. Among the 489 instruments, 13.1 % were visually intact, but had an electric test failure. DTU-6 and DIATEG detectors showed comparable efficiency in detection of overall IFs and for laparoscopic and non-laparoscopic instruments. The median location of IFs was more pronounced for laparoscopic instruments (50.4 %) and the distal location for non-laparoscopic instruments (40.4 %). Conclusion: Accidental burns are a hidden problem and can lead to patient complications. In Central Sterilization Service Department, prevention currently includes only visual control of electrosurgery instrumentation, but testing campaigns are now necessary in order to identify maximum instruments' defects. abstract_id: PUBMED:31020135 Giant right atrial thrombus associated with ICD lead externalized conductors: a case report. Introduction: Narrow calibre ICD leads are prone to present insulation defects and conductor externalization. Close follow-up of these leads is recommended but as long as their electrical function is maintained, no prophyllactic replacement or extraction is advised. Although the risk of thrombus formation involving externalized conductors has been described, this risk seems considered as negligible compared with the risk of a prophylactic lead extraction. However, when an intracavitar thrombus is identified, the safest therapeutic approach remains undetermined. Case Presentation: In the present clinical vignette, we describe the case of a giant thrombus developed along the externalized portion of an electrically functional ICD lead. In this case, the thrombus was successfully treated with a systemic oral anticoagulation. Discussion: This case report supports the concept of a prolonged anticoagulation for both the diagnosis and the long-term treatment of thrombus developed along externalized ICD leads, in particular when the patient prefers to avoid or postpone the risk of a trans-venous lead extraction. abstract_id: PUBMED:30861988 Partial Discharge in Nanofluid Insulation Material with Conductive and Semiconductive Nanoparticles. This study provides a thorough investigation of partial discharge (PD) activities in nanofluid insulation material consisting of different types of nanoparticles, which are conductive and semiconductive when subjected to high voltage stress is presented. Nanofluids have become a topic of interest because they can be an alternative to liquid insulation in electrical apparatus due to their promising dielectric strength and cooling ability. However, during in-service operation, PDs can occur between conductors in the insulation system. Therefore, this study presents the behavior of PDs within nanofluid dielectric materials consisting of conductive and semiconductive nanoparticles. The results show that there is an improvement in the PD resistance and a reduction in the tan delta of nanofluids at power frequency after the incorporation of conductive or semiconductive nanoparticles in the nanofluid oil. However, the most suitable concentration of conductive and semiconductive nanoparticles in the base fluid was found to be, respectively, 0.01 g/L and 1.0 g/L at PD inception and PD steady-state conditions. The clustering of nanoparticles in a nanofluid suspension due to PD activities is also discussed in this study. Answer: Insulation defects in thin high-voltage ICD (implantable cardioverter-defibrillator) leads are indeed an important complication of device therapy that may be underestimated. A study focusing on this issue found that insulation defects at the level of the tricuspid valve accounted for 20% of all lead failures in a cohort of 357 consecutive ICD recipients. These defects could only be diagnosed by fluoroscopy since electrical parameters remained within normal limits for all patients with this specific insulation defect. The study suggests that even moderate changes in lead impedance that are still within normal limits should prompt a careful fluoroscopic evaluation to prevent inappropriate shocks (PUBMED:21457385). This finding underscores the significance of recognizing and addressing insulation defects in ICD leads, as they can have serious consequences for patients relying on these devices for cardiac rhythm management. The study's results highlight the need for vigilance in monitoring and diagnosing potential insulation issues to ensure patient safety and the effective functioning of ICDs.
Instruction: Is bariatric surgery necessary after intragastric balloon treatment? Abstracts: abstract_id: PUBMED:27350727 Current status of intragastric balloon for obesity treatment. Endoscopic bariatric therapy may be a useful alternative to pharmacological treatment for obesity, and it provides greater efficacy with lower risks than do conventional surgical procedures. Among the various endoscopic treatments for obesity, the intragastric balloon is associated with significant efficacy in body weight reduction and relief of comorbid disease symptoms. Anatomically, this treatment is based on gastric space-occupying effects that increase the feeling of satiety and may also affect gut neuroendocrine signaling. The simplicity of the intragastric balloon procedure may account for its widespread role in obesity treatment and its applicability to various degrees of obesity. However, advances in device properties and procedural techniques are still needed in order to improve its safety and cost-effectiveness. Furthermore, verification of the physiological outcomes of intragastric balloon treatment and the clinical predictive factors for treatment responses should be considered. In this article, we discuss the types, efficacy, safety, and future directions of intragastric balloon treatment. abstract_id: PUBMED:34466335 Acute Pancreatitis as a Complication of Intragastric Balloon. The intragastric balloon is a common minimally invasive procedure used prior to bariatric surgery for weight reduction. There are complications of this balloon with varying degrees of severity ranging from mild to severe life-threatening complications. Acute pancreatitis due to direct compression or catheter migration of the balloon should be considered in these patients. In the literature, there is little evidence that intragastric balloons could cause acute pancreatitis. We present two cases in which they had a history of IGB insertion complicated by acute pancreatitis. The diagnosis of acute pancreatitis due to the intragastric balloon was made after excluding other possible causes of acute pancreatitis. Both patients were hospitalized and managed conservatively. abstract_id: PUBMED:30193905 Intragastric balloon outcomes in super-obesity: a 16-year city center hospital series. Background: Intragastric balloons represent an endoscopic therapy aimed at achieving weight loss by mechanical induction of satiety. Their exact role within the bariatric armamentarium remains uncertain. Objective: Our study aimed to evaluate the use of intragastric balloon therapy alone and before definitive bariatric surgery over a 16-year period. Setting: A large city academic bariatric center for super-obese patients. Methods: Between January 2000 and February 2016, 207 patients underwent ORBERA intragastric balloon placement at esophagogastroduodenoscopy. Four surgeons performed the procedures, and data were entered prospectively into a dedicated bariatric database. Patients' weight loss data were measured through body mass index (BMI) and excess weight loss and recorded at each clinic review for up to 5 years (60 mo). Treatment arms included intragastric balloon alone with lifestyle therapy or intragastric balloon and definitive bariatric surgery: gastric banding, sleeve gastrectomy, or Roux-en-Y gastric bypass. An additional treatment arm of analysis included the overall results from intragastric balloon followed by stapled procedure. Results: One hundred twenty-nine female and 78 male patients had a mean age of 44.5 (±11.3) years and a mean BMI of 57.3 (±9.7) kg/m2. Fifty-eight percent of patients suffered from type 2 diabetes. Time from initial or first balloon insertion to definitive surgical therapy ranged between 9 and 13 months. Seventy-six patients had intragastric balloon alone, and 131 had intragastric balloon followed by definitive procedure. At 60 months postoperatively the intragastric balloon alone with lifestyle changes demonstrated an excess weight loss of 9.04% and BMI drop of 3.8; intragastric balloon with gastric banding demonstrated an excess weight loss of 32.9% and BMI drop of 8.9. Intragastric balloon and definitive stapled procedure demonstrated a BMI drop of 17.6 and an excess weight loss of 52.8%. Overall, there were 3 deaths (1.4%), 2 within 10 days due to acute gastric perforation secondary to vomiting and 1 cardiac arrest at 4 weeks postoperatively. Conclusion: Intragastric balloons can offer effective weight loss in selected super-obese patients within a dedicated bariatric center offering multidisciplinary support. Balloon insertion alone offers only short-term weight loss; however, when combined with definitive bariatric surgical approaches, durable weight loss outcomes can be achieved. A strategy of early and continual vigilance for side effects and a low threshold for removal should be implemented. Surgeon and unit experience with intragastric balloons can contribute to "kick starting" successful weight loss as a bridge to definitive therapy in an established bariatric surgical pathway. abstract_id: PUBMED:34787766 Intragastric Balloon as Bridging Therapy Prior to Bariatric Surgery for Patients with Severe Obesity (BMI ≥ 50 kg/m2): a Systematic Review and Meta-analysis. Bariatric surgery for patients with severe obesity (body mass index (BMI) ≥ 50kg/m2) is technically challenging. Intragastric balloon (IGB) has been proposed for weight loss before bariatric surgery to reduce surgical risks but its efficacy remains unclear. We conducted a systematic review and meta-analysis of the effectiveness of IGB as bridging therapy and assess potential complications. Amongst 2419 citations, 13 studies were included. IGB resulted in a BMI reduction of 6.60 kg/m2 (MD=6.60, 95% CI: 5.06-8.15; I2=72%). The total post-procedural complication rate was 8.13% (95% CI: 4.04-13.17%), with majority being balloon intolerance. Overall, IGB is effective as a bridging therapy with adequate procedural safety profile, but further study is needed to evaluate the risk reduction for bariatric surgery and long-term weight-loss outcomes. abstract_id: PUBMED:28441654 Gastric Ulcer Hemorrhage - a Potential Life-Threatening Complication of Intragastric Balloon Treatment of Obesity. Background: Some morbidly obese patients do not qualify for bariatric surgery due to general health contraindications. Intragastric balloon treatment might be a therapeutic option in the above-mentioned cases. It can prime super-obese patients with end-stage disease for bariatric surgery. As a neoadjuvant therapy before surgery, it leads to a downstage of the disease by preliminary weight reduction, to an improvement in general health and, in summary, to a reduction of the perioperative risk. It is generally considered to be a safe method. However, due to the wide range of possible complications and unusual symptoms after intragastric balloon treatment, an interdisciplinary, instead of only a surgical or endoscopic, treatment and follow-up might be recommended in these patients. Case Report: We here describe a potential life-threatening complication in the form of gastric bleeding as a consequence of intragastric balloon treatment and simultaneous aspirin taking and Helicobacter pylori infection. Conclusion: There have been reports of some complications of intragastric balloon treatment. However, to the best of our knowledge there were no reports concerning life-threatening hemorrhage from gastric ulcer. abstract_id: PUBMED:29475822 Gastric Perforation as a complication of intragastric balloon. Obesity is considered the most common nutritional disorder in Western countries and is related to multiple morbidity and mortality. There are different options for obesity treatment, including diet, behavioral therapy, medications, and surgery. If patients do not meet the criteria for bariatric surgery, intragastric balloons may be used to achieve weight reduction. Currently, the intragastric balloon is one of the most common bariatric procedures in obese patients in Europe. Gastric perforation associated with intragastric balloon is a rare but dangerous complication. We report a case of a 42-year-old female patient who presented to the emergency department with acute abdomen. Chest x-ray in an erect posture indicated free gas under the diaphragm. She had undergone placement of an intragastric adjustable balloon device 13 months earlier and was overdue for removal of the balloon. In the emergency theater, a large perforated ulcer was found in the posterior wall of the stomach, which was repaired laparoscopically. Her postoperative course was uneventful. We also review the literature on intragastric balloon-induced gastric perforation. Our case is a very rare report of late gastric perforation after adjustable intragastric balloon placement. We recommend regular follow-up and removal in proper time after insertion of the gastric balloon. abstract_id: PUBMED:16608635 Intragastric balloon in bariatric surgery Intragastric balloon (BIB) is a mechanical restrictive device formed by a silicone wrapper inserted endoscopically in the stomach, filled with 500 cc of physiological blue methyl solution and kept in situ for 6 months. In two years 98 intragastric balloon have been set in 88 patients at the average age of 37.1 years, of 41.9 of BMI and 47.6 kg of weight. Complications were intolerance, gastro-oesophageal reflux, break and/or deflation and meteorism. BIB is an effective procedure in the temporary treatment of obesity leading to satisfactory weight loss and to short-term improvement of comorbidities. abstract_id: PUBMED:33168475 Migration of an intragastric balloon may necessitate enterotomy for extraction. Intragastric balloon (IGB) placement for the treatment of obesity is presented as a non-invasive and safe technique leading to significant weight loss. We report a case of an IGB migration that led to small bowel occlusion. Radiological analysis confirmed impaction at 250cm from the angle of Treitz. Enterotomy was necessary to treat the occlusion. abstract_id: PUBMED:35257223 Impact of preoperative weight loss achieved by gastric balloon on peri- and postoperative outcomes of bariatric surgery in super-obese patients: a retrospective matched-pair analysis. Background: An intragastric balloon is used to cause weight loss in super-obese patients (BMI &gt; 60 kg/m2) prior to bariatric surgery. Whether weight loss from intragastric balloon influences that from bariatric surgery is poorly studied. Methods: In this retrospective, single-center study, the effects of intragastric balloon in 26 patients (BMI 69.26 ± 6.81) on weight loss after bariatric surgery (primary endpoint), postoperative complications within 30 days, hospital readmission, operation time, and MTL30 (secondary endpoints) were evaluated. Fifty-two matched-pair patients without intragastric balloon prior to bariatric surgery were used as controls. Results: Intragastric balloon resulted in a weight loss of 17.3 ± 14.1 kg (BMI 5.75 ± 4.66 kg/m2) with a nadir after 5 months. Surgical and postoperative outcomes including complications were comparable between both groups. Total weight loss was similar in both groups (29.0% vs. 32.2%, p = 0.362). Direct postoperative weight loss was more pronounced in the control group compared to the gastric balloon group (29.16 ± 7.53% vs 23.78 ± 9.89% after 1 year, p &lt; 0.05 and 32.13 ± 10.5% vs 22.21 ± 10.9% after 2 years, p &lt; 0.05), who experienced an earlier nadir and started to regain weight during the follow-up. Conclusion: A multi-stage therapeutic approach with gastric balloon prior to bariatric surgery in super-obese patients may be effective to facilitate safe surgery. However, with the gastric balloon, pre-treated patients experienced an attenuated postoperative weight loss with an earlier nadir and earlier body weight regain. This should be considered when choosing the appropriate therapeutic regime and managing patients' expectations. abstract_id: PUBMED:22166356 Intragastric Balloon (BIB system) in the treatment of obesity and preparation of patients for surgery - own experience and literature review. Unlabelled: Despite the recognition of bariatric surgery as the only effective method of weight reduction, we remain in search of minimally invasive methods, both for the treatment of obesity and preparation of patients for surgical procedures. The aim of the study was to determine intragastric balloon (Allergan) implantation results as the only method of treating obesity, and patient preparation for further stages of abdominal hernia operations, as well as other surgical procedures. Material And Methods: The study presented own results considering the use of Bioenteric Intragastric Balloons (BIB system) in the treatment of pathological obesity, and preparation of patients for bariatric surgery and abdominal hernia operations. The study group comprised 18 female and 7 male patients. Average patient age amounted to 50.6 and 54 years, respectively. Indications for intragastric balloon insertion were as follows: preparation for hernia (10 cases) and bariatric (5 patients) operations, and weight reduction management (8 patients). In two cases the implantation of the intragastric balloon served the preparation for gynecological and orthopedic (vertebral column) operations. One female patient was prepared for both bariatric and hernial surgery. The procedure was performed under general anesthesia. Statistical analysis considered 22 patients (one female patient was excluded from analysis, due to death, two other were excluded because of lack of sufficient data).Results. In case of two patients the intragastric balloon system was removed before 6 months had elapsed because of intolerance. One female patient died during the observation period for reasons not related to the procedure. The obtained BMI reduction ranged between 2 and 6 kg/m2, which amounted to a maximum weight loss of 24 kg. In one patient a weight gain of 2 kg was observed. Considering patients prepared for abdominal hernia operations weight reduction was greater and better maintained after the removal of the BIB system. Conclusions: According to the authors of the presented study the intragastric balloon serves its role as a bridge to bariatric procedures and weight reduction, before planned extensive postoperative hernia operations. The use of the intragastric balloon only to reduce weight has no medical and economic justification. Answer: Bariatric surgery after intragastric balloon treatment is not always necessary, but it may be recommended in certain cases. Intragastric balloon (IGB) treatment is an endoscopic therapy aimed at achieving weight loss by mechanically inducing satiety and is considered a less invasive option compared to bariatric surgery (PUBMED:27350727). It can be effective for weight loss in selected super-obese patients and may serve as a bridge to definitive bariatric surgical approaches (PUBMED:30193905). In some cases, IGB is used as a bridging therapy prior to bariatric surgery for patients with severe obesity (BMI ≥ 50 kg/m^2) to reduce surgical risks and facilitate safer surgery (PUBMED:34787766). The use of IGB alone with lifestyle changes has been shown to offer only short-term weight loss, whereas when combined with definitive bariatric surgical procedures, more durable weight loss outcomes can be achieved (PUBMED:30193905). However, it is important to note that the decision to proceed with bariatric surgery after IGB treatment depends on individual patient factors, including the degree of weight loss achieved with the balloon, the patient's overall health, and their weight loss goals. Some patients may experience significant weight loss with IGB alone and may not require further surgical intervention, while others may benefit from additional bariatric surgery to achieve and maintain weight loss (PUBMED:30193905; PUBMED:34787766). In summary, while IGB can be an effective standalone treatment for some patients, bariatric surgery may be necessary for others to achieve long-term weight loss, especially in cases of super-obesity or when IGB is used as a preparatory step before surgery (PUBMED:30193905; PUBMED:34787766).
Instruction: Interpretation of probe-based confocal laser endomicroscopy of indeterminate biliary strictures: is there any interobserver agreement? Abstracts: abstract_id: PUBMED:24344750 Pre- and post-training session evaluation for interobserver agreement and diagnostic accuracy of probe-based confocal laser endomicroscopy for biliary strictures. Background And Aim: Current diagnostic modalities for indeterminate biliary strictures offer low accuracy. Probe-based confocal laser endomicroscopy (pCLE) permits microscopic assessment of mucosal structures by obtaining real-time high-resolution images of the mucosal layers of the gastrointestinal tract. Previously, an interobserver study demonstrated poor to fair agreement even among experienced confocal endomicroscopy operators. Our objective was to assess interobserver agreement and diagnostic accuracy upon completion of a pCLE training session. Methods: Forty de-identified pCLE video clips of indeterminate biliary strictures were sent to five endoscopists at four tertiary care centers for scoring. Observers subsequently attended a teaching session by an expert pCLE user that included 20 training clips and rescored the same pCLE video clips, which were randomized and renumbered. Results: Pre-training interobserver agreement for all observers was 'fair' (Κ: 0.31, P-value: &lt;0.0001) and diagnostic accuracy was 72% (55-80%). Post-training interobserver agreement for all observers was 'substantial' (Κ: 0.74, P-value: &lt;0.0001) and diagnostic accuracy was 89% (80-95%). Using a paired t-test, we observed an increase of 17% (95% CI 7.6-26.4) in post-training diagnostic accuracy (t = 5.01, df = 4, P-value 0.007). Conclusions: Interobserver agreement and diagnostic accuracy improved after observers underwent training by an expert pCLE user with a specific sequence set. Users should participate in such training programs to maximize diagnostic accuracy of pCLE evaluation. abstract_id: PUBMED:26640347 Probe based confocal laser endomicroscopy of the pancreatobiliary system. Aim: To review applications of confocal laser endomicroscopy (CLE) in pancreatobiliary lesions and studies that assessed training and interpretation of images. Methods: A computerized literature search was performed using OVID MEDLINE, EMBASE, Cochrane library, and the ISI Web of Knowledge from 1980 to October 2014. We also searched abstracts from major meetings that included the Digestive Disease Week, Canadian Digestive Disease Week and the United European Gastroenterology Week using a combination of controlled vocabulary and text words related to pCLE, confocal, endomicroscopy, probe-based confocal laser endomicroscopy, and bile duct to identify reports of trials. In addition, recursive searches and cross-referencing was performed, and manual searches of articles identified after the initial search was also completed. We included fully published articles and those in abstract form. Given the relatively recent introduction of CLE we included randomized trials and cohort studies. Results: In the evaluation of indeterminate pancreatobiliary strictures CLE with ERCP compared to ERCP alone can increase the detection of cancerous strictures with a sensitivity of (98% vs 45%) and has a negative predictive value (97% vs 69%), but decreased the specificity (67% vs 100%) and the positive predictive value (71% vs 100%) when compared to index pathology. Modifications in the classification systems in indeterminate biliary strictures have increased the specificity of pCLE from 67% to 73%. In pancreatic cystic lesions there is a need to develop similar systems to interpret and characterize lesions based on CLE images obtained. The presence of superficial vascular network predicts serous cystadenomas accurately. Also training in acquiring and interpretation of images is feasible in those without any prior knowledge in CLE in a relatively simple manner and computer-aided diagnosis software is a promising innovation. Conclusion: The role of pCLE in the evaluation of pancreatobiliary disorders might be better suited for those with an intermediate and low probability. abstract_id: PUBMED:35885521 The Use of Confocal Laser Endomicroscopy in Diagnosing Barrett's Esophagus and Esophageal Adenocarcinoma. Confocal laser endomicroscopy (CLE) is a diagnostic technique that enables real-time microscopic imaging during microscopic examination and evaluation of epithelial structures with 1000-fold magnification. CLE can be used in the diagnosis of various pathologies, in pneumology, and in urology, and it is very widely utilized in gastroenterology, most importantly in the diagnosis of Barrett's esophagus (BE), esophageal adenocarcinoma (EAC), biliary strictures, and cystic pancreatic lesions. A literature search was made in MEDLINE/PubMed and Google Scholar databases while focusing on diagnostics using CLE of BE and EAC. We then examined randomized and observational studies, systematic reviews, and meta-analyses relating to the utilization of CLE in BE and EAC diagnostics. Here, we discuss whether CLE can be a suitable diagnostic method for surveillance of BE. Even though many studies have proven that CLE increases diagnostic accuracy in detecting neoplastic transformation of BE, CLE is still not used as a standard diagnostic tool in BE surveillance due to a deficiency of scientific evidence. More studies and data are needed if CLE is to find a place as a new technique in BE surveillance. abstract_id: PUBMED:22875310 Interpretation of probe-based confocal laser endomicroscopy of indeterminate biliary strictures: is there any interobserver agreement? Background: Probe-based confocal laser endomicroscopy (pCLE) has enabled in vivo histopathology by obtaining high resolution images of the mucosal layers of the gastrointestinal tract. For indeterminate bile duct strictures, biopsy, cytologic brushing and needle aspiration have low levels of diagnostic accuracy. Aim: The objective of this multi-center pilot study was to assess the interobserver agreement in interpretation of pCLE imaging. Methods: Twenty-five de-identified pCLE video clips of indeterminate biliary strictures were sent to 6 observers at 5 institutions. Miami Classification was used to standardize image interpretation. Seven variables were assessed for interobserver agreement using the Fleiss kappa statistic which included: presence of thick (&gt;20 μm) or thin (&lt;20 μm) dark or white bands, dark clumps, epithelium including glandular structures, interstitial fluorescein leakage, ease of interpretation, and final diagnosis. Based on operator experience, observers were categorized into 3 categories of experience (Category 1: 0-10; Category 2: 11-20; Category 3: &gt;21 cases). Results: Upon stratification, Category 1 interobserver agreement ranged from "Poor" to "Fair" (κ = 0.277, κ = -0.079, κ = -0.025, κ = -0.066, κ = 0.128, κ = 0.088), and for the final diagnosis variable, the agreement was slight (κ = 0.033). Category 2 and 3 interobserver agreement ranged from "Poor" to "Fair" (κ = 0.211, κ = 0.181, κ = 0.347, κ = 0.238, κ = -0.050, κ = 0.092), and for the final diagnosis variable, the agreement was slight (κ = 0.195). Conclusion: The overall interobserver agreement for pCLE image interpretation in indeterminate biliary strictures ranges from poor to fair. Interpretation criteria as well as training require further standardization toward improving inter-rater reliability. abstract_id: PUBMED:30394137 Individual probe based confocal laser endomicroscopy criteria in the analysis of indeterminate biliary strictures. Objectives: Evaluation of indeterminate biliary strictures remains challenging due to limited sensitivity of endoscopic tissue sampling. Biliary probe-based confocal laser endomicroscopy (pCLE) has shown promise to detect and exclude neoplasia. However, knowledge of whether individual inflammatory criteria are more prevalent in neoplasia compared to benign strictures is limited. The objective of this work is to improve diagnosis of neoplastic and inflammatory conditions using pCLE. Materials And Methods: The charts of all patients who underwent pCLE at a single referral center between 2009 and 2015 were reviewed. ERCP reports were reviewed for eleven Miami and Paris criteria. Primary outcome was the identification of neoplasia by histopathology (defined as high-grade dysplasia and/or adenocarcinoma). To model predictors of neoplasia, we fit a binary regression model incorporating data from pCLE operating criteria, pCLE impression, and PSC status. Results: 97 patients were identified. In the 27 patients with neoplasia, there was increasing number of Miami malignant criteria (Pearson r = 0.512, p &lt; .001) while inflammatory criteria were less prevalent. 10% (5/51, p &lt; .001) of patients with benign pCLE impression developed neoplasia, while 48% (22/46, p &lt; .001) with suspicious pCLE impressions developed neoplasia. The binary regression model to predict neoplasia had a sensitivity of 83.3%, specificity of 92.5%, and overall accuracy 89.7%. Conclusions: Presence of malignant criteria and absence of certain inflammatory criteria are more prevalent in patients with neoplasia. Our model, which weights individual imaging components, shows impressive sensitivity and specificity over prior prognostic efforts. Prospective studies will be required to evaluate this model. abstract_id: PUBMED:30646731 Confocal laser endomicroscopy in the diagnostics of gastrointestinal lesions literary review and personal experience. Introduction: Confocal laser endomicroscopy (CLE) is a new method enabling real-time histopathological diagnosis. Two platforms of CLE are used standardly: the so called probe-based CLE in which an endomicroscopic probe is inserted through an endoscope, and so-called needle-based CLE inserted through a needle. Methods: CLE enables evaluation of epithelial and subepithelial structures with 1000x magnification in any part of the gastrointestinal system. The main advantage over conventional biopsies can be immediate diagnosis and larger area evaluated. CLE might play a promising role in indeterminate biliary lesions where biopsies are often not sufficient and fail to allow for accurate diagnosis. CLE can also have a promising potential in pancreatic lesions where it is used along with the needle designed for tissue aspiration during endosonography. It is also possible to use CLE in other organ systems and medical specialties, e.g. in pneumology, neurosurgery, ENT and others. Conclusion: CLE is a promising diagnostic method; however, it is unlikely to replace standard biopsies currently since these are still considered a gold standard of histopathological diagnosis. High purchase price is a main disadvantage of the method, hampering its expansion in medicine. Key words: confocal laser endomicroscopy CLE biliary strictures pancreatic lesions esophageal carcinoma Rozhl Chir 2018;97:531-538. abstract_id: PUBMED:27642847 A Review of Probe-Based Confocal Laser Endomicroscopy for Pancreaticobiliary Disease. Confocal laser endomicroscopy (CLE) is a novel in vivo imaging technique that can provide real-time optical biopsies in the evaluation of pancreaticobiliary strictures and pancreatic cystic lesions (PCLs), both of which are plagued by low sensitivities of routine evaluation techniques. Compared to pathology alone, CLE is associated with a higher sensitivity and accuracy for the evaluation of indeterminate pancreaticobiliary strictures. CLE has the ability to determine the malignant potential of PCLs. As such, CLE can increase the diagnostic yield of endoscopic retrograde cholangiopancreatography and endoscopic ultrasound, reducing the need for repeat procedures. It has been shown to be safe, with an adverse event rate of ≤1%. Published literature regarding its cost-effectiveness is needed. abstract_id: PUBMED:28874959 Clinical impact of confocal laser endomicroscopy in the management of gastrointestinal lesions with an uncertain diagnosis. Aim: To evaluate the clinical impact of confocal laser endomicroscopy (CLE) in the diagnosis and management of patients with an uncertain diagnosis. Methods: A retrospective chart review was performed. Patients who underwent CLE between November 2013 and October 2015 and exhibited a poor correlation between endoscopic and histological findings were included. Baseline characteristics, indications, previous diagnostic studies, findings at the time of CLE, clinical management and histological results were analyzed. Interventions based on CLE findings were also analyzed. We compared the diagnostic accuracy of CLE and target biopsies of surgical specimens. Results: A total of 144 patients were included. Of these, 51% (74/144) were female. The mean age was 51 years old. In all, 41/144 (28.4%) lesions were neoplastic (13 bile duct, 10 gastric, 8 esophageal, 6 colonic, 1 duodenal, 1 rectal, 1 ampulloma and 1 pancreatic). The sensitivity, specificity, positive predictive value, negative predictive value, and observed agreement when CLE was used to detect N-lesions were 85.37%, 87.38%, 72.92%, 93.75% and 86.81%, respectively. Cohen's Kappa was 69.20%, thus indicating good agreement. Changes in management were observed in 54% of the cases. Conclusion: CLE is a new diagnostic tool that has a significant clinical impact on the diagnosis and treatment of patients with uncertain diagnosis. abstract_id: PUBMED:26927493 Is Seeing Really Believing? Probe-based Confocal Laser Endomicroscopy in the Evaluation of Pancreaticobiliary Disease. Confocal laser endomicroscopy for real-time diagnosis during endoscopic procedures has now been in the mainstream clinical arena for a decade. Indeterminate biliary strictures and pancreatic cysts remain 2 difficult diagnostic challenges for the gastroenterologist, and the role this technology will play in the approach to these problems is still evolving. There is now a body of literature to guide the endoscopist in the use of this imaging tool, including how it may be useful in excluding biliary malignancy, and how miniaturization has allowed for endoscopic ultrasound-guided application of the probe within cysts. Interobserver variability remains a weakness of the system. Tips for use of this tool and interpretation of the imaging data it provides are discussed. abstract_id: PUBMED:32405892 SAGES TAVAC safety and efficacy analysis confocal laser endomicroscopy. Background: Confocal laser endomicroscopy (CLE) is a novel endoscopic adjunct that allows real-time in vivo histological examination of mucosal surfaces. By using intravenous or topical fluorescent agents, CLE highlights certain mucosal elements that facilitate an optical biopsy in real time. CLE technology has been used in different organ systems including the gastrointestinal tract. There has been numerous studies evaluating this technology in gastrointestinal endoscopy, our aim was to evaluate the safety, value, and efficacy of this technology in the gastrointestinal tract. Methods: The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) Technology and Value Assessment Committee (TAVAC) performed a PubMed/Medline database search of clinical studies involving CLE in May of 2018. The literature search used combinations of the keywords: confocal laser endomicroscopy, pCLE, Cellvizio, in vivo microscopy, optical histology, advanced endoscopic imaging, and optical diagnosis. Bibliographies of key references were searched for relevant studies not covered by the PubMed search. Case reports and small case series were excluded. The manufacturer's website was also used to identify key references. The United States Food and Drug Administration (U.S. FDA) Manufacturer And User facility and Device Experience (MAUDE) database was searched for reports regarding the device malfunction or injuries. Results: The technology offers an excellent safety profile with rare adverse events related to the use of fluorescent agents. It has been shown to increase the detection of dysplastic Barrett's esophagus, gastric intraepithelial neoplasia/early gastric cancer, and dysplasia associated with inflammatory bowel disease when compared to standard screening protocols. It also aids in the differentiation and classification of colorectal polyps, indeterminate biliary strictures, and pancreatic cystic lesions. Conclusions: CLE has an excellent safety profile. CLE can increase the diagnostic accuracy in a number of gastrointestinal pathologies. Answer: The interpretation of probe-based confocal laser endomicroscopy (pCLE) for indeterminate biliary strictures has shown varying levels of interobserver agreement. Initially, an interobserver study demonstrated poor to fair agreement even among experienced confocal endomicroscopy operators (PUBMED:24344750). A multi-center pilot study assessing the interobserver agreement in the interpretation of pCLE imaging found that the overall interobserver agreement for pCLE image interpretation in indeterminate biliary strictures ranged from poor to fair (PUBMED:22875310). However, after a training session by an expert pCLE user, interobserver agreement improved from 'fair' (Κ: 0.31) to 'substantial' (Κ: 0.74), and diagnostic accuracy increased from 72% to 89% (PUBMED:24344750). This suggests that with standardized training and experience, interobserver agreement can be significantly enhanced. Moreover, modifications in the classification systems for indeterminate biliary strictures have increased the specificity of pCLE, indicating that the development of standardized interpretation criteria can further improve interobserver agreement (PUBMED:26640347). The presence of malignant criteria and the absence of certain inflammatory criteria have been found to be more prevalent in patients with neoplasia, suggesting that individual imaging components can be weighted to improve sensitivity and specificity (PUBMED:30394137). In summary, while initial studies showed poor to fair interobserver agreement in the interpretation of pCLE for indeterminate biliary strictures, subsequent training and standardization of interpretation criteria have led to substantial improvements in agreement among observers.
Instruction: Cortisol and perioperative period. Could be certified any correlation? Abstracts: abstract_id: PUBMED:25799621 Cortisol and perioperative period. Could be certified any correlation? . Aim: The purpose of this study is to investigate the correlation of general, epidural or spinal anesthesia on fluctuations of the perioperative cortisol concentrations. Material And Methods: Seventy five (75)--ASA physical status I, II--patients who were scheduled to undergo a medium severity surgery with general, spinal and epidural anesthesia, were enrolled in this study. These patients were divided in three groups. Twenty five (25) of these patients where received general anesthesia (group G) , twenty five (25) of these patients where received spinal anesthesia (group R) and twenty five (25) of these patients where received epidural anaesthesia (group Ep). Five blood samples were collected from each patient at intervals: A) 1h preoperative, B) immediately after the introduction in anesthesia and the intubation for group G, after the process of spinal anesthesia completion for group S and after the process of epidural anesthesia completion for group E, C) at the end of the surgery, D) 8h, and E) 24h after the operation. The quantitative determination of cortisol that was possible with the electro- chemiluminescence immunoassay method, detected the fluctuation of perioperative cortisol concentrations. Results: The cortisol levels (μg/dl) of group G were: A) 18.7056, B)18.8272, C)22.1384, D)25.232, E)19.4344. The cortisol levels (μg/dl) of group Ep were: A)28.8132, B)22.6876, C)23.002, D)21.92, E)16.36. The cortisol levels (μg/dl) of group R were: A)19.1688, B)16.0988, C)15.5656, D)18.5992, E)20.35. Conclusions: In conclusion we found that there is correlation between the type of anaesthesia and the fluctuation of perioperative cortisol levels. We would like to point out the fact that there is statistically significant correlation between the age and the preoperative value of cortizol concentration (Stage A) in the blood serum of the patients who participated in this study. abstract_id: PUBMED:27502794 Relief from anxiety using complementary therapies in the perioperative period: A principle-based concept analysis. Aims And Objectives: To analyze and clarify the concept of providing relief from anxiety using complementary therapies in the perioperative period utilizing the epistemological, pragmatic, linguistic and logical principles of a principle-based concept analysis to examine the state of the science. Background: The majority of patients scheduled for surgery experience anxiety in the perioperative period. Anxiety has the potential to limit a patient's ability to participate in his or her care throughout their hospitalization. Although medications are the conventional medical treatment for anxiety in the perioperative period, the addition of a complementary therapy could be an effective holistic approach to providing relief from anxiety. Design: Principle-based concept analysis. Methods: In 2015, strategic literature searches of CINHAL and PUBMED using keywords were performed. Fifty-six full text articles were assessed for eligibility. Results: Twelve studies were used in the final analysis to clarify the concept of relief from anxiety using complementary therapies in the perioperative period. Conclusion: This analysis has clarified the maturity and boundaries, within the four principles of a principle-based concept analysis, of the concept of relief from anxiety using complementary therapies in the perioperative period. A greater understanding of relief from anxiety using complimentary therapies in the perioperative period as an adjunct to conventional medicine will allow perioperative nurses and anesthesia providers to modify and specify the plan of care for their surgical patients. The use of complementary therapies for relief in the perioperative period appears to be an area of promising research and treatment for patients, families and providers. abstract_id: PUBMED:32944922 Correlation analysis of cortisol concentration in hair versus concentrations in serum, saliva, and urine. Introduction: Cortisol concentration is measured in blood, urine, and saliva samples. It has been recently proven that cortisol could also be detected in hair samples. Cortisol measurements in different samples have their own individual characteristics and clinical utility. We aimed to investigate the correlation between hair cortisol concentration and standard cortisol measurements used in clinical practice. Material And Methods: Fifty adult volunteers with a negative history of endocrine disorders were enrolled in the study. Morning serum cortisol (MSC), evening serum cortisol (ESC), evening free salivary cortisol (EFSC), urine free cortisol (UFC), and hair cortisol concentration (HCC) were analysed in all participants. Eventually, 41 volunteers were included into the study, whose cortisol concentration in the 1 mg overnight dexamethasone suppression test (1 mg ONDST) were &lt; 50 nmol/L, and cortisol levels in serum, saliva, and urine were within reference ranges. Hair cortisol concentration test was performed for 20 mg of hair strands of the proximal 1 cm hair segments. Results: Hair cortisol concentration ranged from 0.3036 to 2.65 nmol/mg, and the average value was 0.8125 ± 0.4834 nmol/mg. No significant correlations were found between HCC and MSC (rho = 0.04419, p = 0.7838), HCC and ESC (rho = -0.2071, p = 0.1938), HCC and EFSC (rho = 0.1005, p = 0.532), or HCC and UFC (rho = 0.1793, p = 0.262). Conclusions: This work is another step in the discussion on the application of HCC determinations in clinical practice. Our results have showed no correlations between HCC and single point cortisol assessment in blood, saliva, and urine in patients with reference cortisol levels. abstract_id: PUBMED:34558248 Application of transcutaneous electrical acupoint stimulation in perioperative period Transcutaneous electrical acupoint stimulation (TEAS) has the characteristics of simple operation, non-invasive, and high patient acceptability, and is widely used in clinical practice. This article summarized the effects of TEAS on analgesia, gastrointestinal tract regulation, circulation regulation, postoperative cognitive function improvement, immune function regulation, anti-inflammatory and anti-stress during the perioperative period. At the same time, this article analyzed the problems of the application of TEAS in the perioperative period, and aimed to promote its clinical application. abstract_id: PUBMED:7818044 Perioperative endothelin, ACTH and cortisol plasma concentrations in coronary bypass patients Study Objective: To follow up endothelin (ET), adrenocorticotropic hormone (ACTH), and cortisol levels in patients undergoing aortocoronary bypass grafting (CABG) and to determine whether these are extracted from plasma by the pulmonary circulation. Design: Convenience sample trial. Setting: University hospital. Patients: Eight male routine CABG patients without appreciable concomitant disease. Interventions: Patients were given anaesthesia in a strictly standardised manner using etomidate, flunitrazepam, fentanyl, and pancuronium. Nitroglycerin was administered prior to cardiopulmonary bypass (CPB) at 2 mg/h and dopamine as the only catecholamine starting from CPB weaning until the end of sampling at 3.5-5 micrograms/kg.min. Samples were drawn in rapid sequence from cannulated radial and a distal pulmonary arteries (Swan-Ganz catheter) at eight sampling times starting after induction of anaesthesia and catheter placement and finishing 22 h after the end of operation. Measurements And Results: ET levels were determined by an ET-1, 2, 3-sensitive radioimmunoassay (RIA), ACTH and cortisol by commercially available RIA kits. There was significant (P = 0.032, linear regression analysis) correlation between ET and cortisol from pulmonary arterial samples. ET was significantly (P = 0.042, two-tailed Wilcoxon test) extracted by the pulmonary circulation after induction of anaesthesia. This pulmonary-systemic arterial difference nearly disappeared intraoperatively, but tended to be restored 22 h after the end of operation at an approximately twofold increased level. Conclusions: No interrelation between ET and the hypothalamic-pituitary-adrenal axis could be established by the ET, ACTH, and cortisol plasma levels. However, the significant correlation between perioperative ET and cortisol lends further support to the hypothesis of ET release by cortisol from vascular smooth-muscle cells. There is a net pulmonary clearance of ET in patients prior to CABG that is lost intra- and early postoperatively, but tends to be restored on the 1st day thereafter at an increased level. abstract_id: PUBMED:26016707 Effect of perioperative dexmedetomidine on the endocrine modulators of stress response: a meta-analysis. This study examined the effects of perioperative dexmedetomidine treatment on physiological modulators of surgical stress response. The quality of the included studies was assessed prior to performing meta-analyses of the weighted mean differences in the changes from baseline of stress hormones and interpreted in the light of statistical heterogeneity between the studies. Nineteen studies (844 surgical subjects) data were used for this meta-analysis. Dexmedetomidine administration significantly decreased blood cortisol levels (μg/dL) postoperatively (mean difference with 95% confidence interval (CI) from controls: -18.78 (-28.45, -9.10); P &lt; 0.05). In the subgroup analysis, the mean difference between dexmedetomidine-treated and saline-treated subjects in the changes from baseline of the cortisol levels was -20.10 (-30.96, -9.25; P &lt; 0.05) but, between dexmedetomidine- and comparator-treated subjects, it was not statistically significantly different (-15.13 (-49.78, 19.52); P &lt; 0.05). Compared with controls, dexmedetomidine treatment also decreased adrenaline and noradrenaline levels significantly (mean difference in the percent changes from baseline: -90.41 (-145.79, -35.03)%; P &lt; 0.05 and -62.82 (-85.47, -0.40.17)%; P &lt; 0.05, respectively). Dexmedetomidine also decreased prolactin levels with a mean difference of -19.42 (-39.37, 0.52) μg/L (P = 0.06). In conclusion, perioperative use of dexmedetomidine reduces serum catecholamine and cortisol levels but the decrease in cortisol levels was not statistically different from the comparator anaesthetics. More data will be required to assess the effects of dexmedetomidine on corticotropin, prolactin, and growth hormone. abstract_id: PUBMED:21546886 Statin therapy in the perioperative period. Statins are frequently used as chronic therapy for reducing cardiovascular mortality and morbidity, but there has been less emphasis on the role of statins in the perioperative period. This review evaluates data regarding statin use in vascular and noncardiac surgery, the use of statins in combination with β-blockers in the perioperative period, perioperative statin use in patients already treated with statins, and the safety of statin therapy in the perioperative period. Current recommendations state that patients who are prescribed statins as chronic therapy should continue treatment in the perioperative period, but data suggest that there may be benefit from the use of perioperative statins in a wider population. abstract_id: PUBMED:30351804 The Implication of Dietary Supplements and Herbal Medicines in Perioperative Period Total sales of dietary supplements and herbal medi- cines exceed two trillion yen in Japan. Approximately 60% of Japanese use dietary supplements or herbal medicines. In general, many people believe that dietary supple- ments and herbal medicines are safe natural prod- ucts; however, they could induce serious adverse events (bleeding, myocardial infarction stroke and glucose intolerance) in perioperative period. Coagulation and cardiovascular system can be nega- tively affected by those products. Thus careful preop- erative assessment is required for the patients who need regional anesthesia (e.g. epidural anesthesia and spinal anesthesia) and have cardiovascular complica- tions. Unfortunately there is no concise guideline regarding the use of supplements and herbal medicines during perioperative period. Anesthesiologists should be familiar with the adverse effects of dietary supplements and herbal medicines and pay more attention to non-prescribed medicines in preoperative assessment. abstract_id: PUBMED:25601442 Exploiting the critical perioperative period to improve long-term cancer outcomes. Evidence suggests that the perioperative period and the excision of the primary tumour can promote the development of metastases—the main cause of cancer-related mortality. This Review first presents the assertion that the perioperative timeframe is pivotal in determining long-term cancer outcomes, disproportionally to its short duration (days to weeks). We then analyse the various aspects of surgery, and their consequent paracrine and neuroendocrine responses, which could facilitate the metastatic process by directly affecting malignant tissues, and/or through indirect pathways, such as immunological perturbations. We address the influences of surgery-related anxiety and stress, nutritional status, anaesthetics and analgesics, hypothermia, blood transfusion, tissue damage, and levels of sex hormones, and point at some as probable deleterious factors. Through understanding these processes and reviewing empirical evidence, we provide suggestions for potential new perioperative approaches and interventions aimed at attenuating deleterious processes and ultimately improving treatment outcomes. Specifically, we highlight excess perioperative release of catecholamines and prostaglandins as key deleterious mediators of surgery, and we recommend blockade of these responses during the perioperative period, as well as other low-risk, low-cost interventions. The measures described in this Review could transform the perioperative timeframe from a prominent facilitator of metastatic progression, to a window of opportunity for arresting and/or eliminating residual disease, potentially improving long-term survival rates in patients with cancer. abstract_id: PUBMED:33966407 Randomized trial of the application value of comprehensive nursing intervention in the perioperative period of ruptured bleeding of ectopic pregnancy. Background: The purpose of this study was to explore the value of comprehensive nursing intervention in the perioperative period of ruptured bleeding of ectopic pregnancy. Methods: From January 2015 to January 2020, 164 patients with rupture and bleeding of ectopic pregnancy who needed laparoscopic treatment in the department of gynecology at our hospital were selected and randomly divided into the basic nursing group and the comprehensive nursing group, with 82 cases each. During the perioperative period, comprehensive nursing intervention or basic nursing intervention were performed, and the nursing effects of the two nursing interventions were compared. Results: The disappearance time of abdominal pain, the time to get out of bed, and the length of hospitalization in the comprehensive nursing group were significantly shorter than those in the basic nursing group (P&lt;0.05). After surgery, blood sugar levels, aldosterone, cortisol, C-reactive protein (CRP), and IL-6 in the two groups were significantly higher than those before surgery (P&lt;0.05), but there was no statistically significant difference between the groups (P&gt;0.05). After the operation, the proportion of patients with Visual Analogue Scale (VAS) scores of 7-10 in the comprehensive nursing group was significantly lower than that in the basic nursing group (P&lt;0.05). Before the intervention, the Hamilton Anxiety Scale (HAMA) and Hamilton Depression Scale (HAMD) scores of the two groups were compared, and the difference was not statistically significant (P&gt;0.05). After the intervention, the HAMA and HAMD scores of the comprehensive nursing group were significantly lower than those of the basic nursing group (P&lt;0.05). The fallopian tube recanalization rate of patients in the comprehensive care group was significantly higher than that of the basic care group (P&lt;0.05), and the complication rate was significantly lower than that of the basic care group (P&lt;0.05). Conclusions: In summary, a comprehensive nursing program during the perioperative period can improve the treatment effect and significantly shorten the recovery time of patients, which is worthy of clinical promotion. Answer: Yes, there is evidence of a correlation between the type of anesthesia used during surgery and the fluctuation of perioperative cortisol levels. A study investigated the impact of general, spinal, and epidural anesthesia on perioperative cortisol concentrations in patients undergoing medium severity surgery. The results showed that cortisol levels varied significantly depending on the type of anesthesia administered. Specifically, patients who received general anesthesia exhibited an increase in cortisol levels during and after surgery, while those who received spinal anesthesia showed a decrease in cortisol levels during the surgery. Epidural anesthesia resulted in initially high cortisol levels that decreased after the anesthesia was administered. Additionally, there was a statistically significant correlation between the age of the patients and their preoperative cortisol concentration (PUBMED:25799621). Furthermore, other studies have examined the effects of perioperative interventions on cortisol levels. For instance, the use of dexmedetomidine during the perioperative period was found to significantly decrease blood cortisol levels postoperatively (PUBMED:26016707). This suggests that certain medications can modulate the stress response and cortisol levels during the perioperative period. In conclusion, the available evidence supports the existence of a correlation between the perioperative period and cortisol levels, with the type of anesthesia and perioperative interventions playing a significant role in these fluctuations.
Instruction: Are Biomarkers Predictive of Anthracycline-Induced Cardiac Dysfunction? Abstracts: abstract_id: PUBMED:27221934 Are Biomarkers Predictive of Anthracycline-Induced Cardiac Dysfunction? Background: The early detection of anthracycline- induced cardiotoxicity is very important since it might be useful in prevention of cardiac decompensation. This study was designed with the intent of assessing the usefulness of cardiac troponin T (cTnT) and NT- Pro BNP estimation in early prediction of anthracycline induced cardiotoxicity. Materials And Methods: In this prospective study histologically proven breast cancer patients who were scheduled to receive anthracycline containing combination chemotherapy as a part of multimodality treatment were enrolled. Baseline cardiac evaluation was performed by echocardiography (ECHO) and biomarkers like cardiac troponin T (cTnT) and N terminal- pro brain natriuretic peptide (NT- Pro BNP). All patients underwent cTnT and NT- Pro BNP estimation within 24 hours of each cycle of chemotherapy and were followed up after 6 months of initiation of chemotherapy. Any changes in follow up ECHO were compared to ECHO at baseline and cTnT and NT- Pro BNP levels after each cycle of anthracycline-based chemotherapy. Results: Initial data were obtained for 33 patients. Mean change in left ventricular diastolic diameter (LVDD) within 6 months was 0.154± 0 .433 cms (p value=0.049). Seven out of 33 patients had an increase in biomarker cTnT levels (p value=0.5). A significant change in baseline and follow up LVDD was observed in patients with raised cTnT levels (p value=0.026) whereas no change was seen in ejection fraction (EF) and left atrial diameters (LAD) within 6 months of chemotherapy. NT- Pro BNP levels increased in significant number of patients (p value ≤0.0001) but no statistically significant change was observed in the ECHO parameters within 6 months. Conclusions: Functional monitoring is a poorly effective method in early estimation of anthracycline induced cardiac dysfunction. Estimation of biomarkers after chemotherapy may allow stratification of patients in various risk groups, thereby opening window for interventional strategies in order to prevent permanent damage to the myocardium. abstract_id: PUBMED:18850478 The role of biomarkers in the early detection of anthracycline-induced cardiotoxicity in children: a review of the literature. Anthracycline-induced cardiotoxicity can cause serious health problems for an increasing number of children surviving childhood malignancies. Early detection of cardiac failure is critically important for the prevention and management of anthracycline-induced cardiotoxicity. The aim of this research was to determine the role of biomarkers in the early detection of anthracycline-induced cardiotoxicity in children. A literature review is presented of studies regarding the use of the biomarkers B-type natriuretic peptide (BNP), N-terminal pro-BNP (NT-pro-BNP), cardiac troponin T (cTnT), and cardiac troponin I (cTnI) in relation with anthracycline-induced cardiotoxicity in children. Six of 14 studies in children showed a significant relation between elevated biomarkers BNP, NT-pro-BNP, and cTnT and cardiac dysfunction. Six studies, although small, suggest that BNP, NT-pro-BNP, and cTnT might be useful markers in the early detection of anthracycline-induced cardiotoxicity. abstract_id: PUBMED:27079675 The Utility of Point-of-Care Biomarkers to Detect Cardiotoxicity During Anthracycline Chemotherapy: A Feasibility Study. Background: Anthracycline chemotherapy is associated with an increased risk of developing heart failure (HF). The current standard for detecting HF or cardiotoxicity during chemotherapy involves episodic cardiac imaging typically at prescribed intervals and there are limited studies examining techniques beyond measuring left ventricular (LV) function. This study explores whether cardiac biomarkers troponin I (TnI) and B-type natriuretic peptide (BNP) could be part of a screening strategy for early detection of the development of cardiotoxicity in patients undergoing anthracycline chemotherapy. Methods And Results: Patients were enrolled from a single medical center. Cardiac biomarkers (TnI, BNP) were measured before and within 24 hours after completion of anthracycline administration for each cycle of therapy. Cardiac imaging was obtained at baseline and at completion of chemotherapy (commonly at 6 or 12 months) or based on clinical suspicion of a cardiac event. Of the enrolled 109 patients, 11 (10.1%) experienced cardiac events; all of these patients had at least 1 BNP value &gt;100 pg/mL before the cardiac event. Significant reduction in LV ejection fraction as defined for cardiotoxicity occurred in only 3 of 10 patients (30%) with a cardiac event. Conclusions: The use of cardiac biomarkers, particularly BNP, may allow early detection of cardiotoxicity related to anthracycline chemotherapy. abstract_id: PUBMED:33517910 Baseline global longitudinal strain predictive of anthracycline-induced cardiotoxicity. Background: Cancer therapy-related cardiac dysfunction (CTRD) is a major source of morbidity and mortality in long-term cancer survivors. Decreased GLS predicts decreased left ventricular ejection fraction (LVEF) in patients receiving anthracyclines, but knowledge regarding the clinical utility of baseline GLS in patients at low-risk of (CTRD) is limited. Objectives: The purpose of this study was to investigate whether baseline echocardiographic assessment of global longitudinal strain (GLS) before treatment with anthracyclines is predictive of (CTRD) in a broad cohort of patients with normal baseline LVEF. Methods: Study participants comprised 188 patients at a single institution who underwent baseline 2-dimensional (2D) speckle-tracking echocardiography before treatment with anthracyclines and at least one follow-up echocardiogram 3 months after chemotherapy initiation. Patients with a baseline LVEF &lt;55% were excluded from the analysis. The primary endpoint, (CTRD), was defined as an absolute decline in LVEF &gt; 10% from baseline and an overall reduced LVEF &lt;50%. Potential and known risk factors were evaluated using univariable and multivariable Cox proportional hazards regression analysis. Results: Twenty-three patients (12.23%) developed (CTRD). Among patients with (CTRD), the mean GLS was -17.51% ± 2.77%. The optimal cutoff point for (CTRD) was -18.05%. The sensitivity was 0.70 and specificity was 0.70. The area under ROC curve was 0.70. After adjustment for cardiovascular and cancer therapy related risk factors, GLS or decreased baseline GLS ≥-18% was predictive of (CTRD) (adjusted hazards ratio 1.17, 95% confidence interval 1.00, 1.36; p = 0.044 for GLS, or hazards ratio 3.54; 95% confidence interval 1.34, 9.35; p = 0.011 for decreased GLS), along with history of tobacco use, pre-chemotherapy systolic blood pressure, and cumulative anthracycline dose. Conclusions: Baseline GLS or decreased baseline GLS was predictive of (CTRD) before anthracycline treatment in a cohort of cancer patients with a normal baseline LVEF. This data supports the implementation of strain-protocol echocardiography in cardio-oncology practice for identifying and monitoring patients who are at elevated risk of (CTRD). abstract_id: PUBMED:22664114 Biomarkers and early detection of late onset anthracycline-induced cardiotoxicity in children. Background: The main strategy for minimizing anthracycline cardiotoxicity is early detection of high-risk patients. Aim Of The Study: To investigate the role of cardiac biomarkers; cardiac troponin T (cTnT) and N-terminal probrain natriuretic peptide (NT-pro-BNP), and tissue Doppler imaging (TDI), as early predictors of chronic cardiotoxicity in survivors of acute leukemia. Patients And Methods: We carried a retrospective study on 50 asymptomatic survivors of acute leukemia who received anthracycline in their treatment protocols. All patients underwent blood sampling to determine the levels of NT-pro-BNP and cTnT along with conventional echocardiography and TDI. Results: None had abnormal cTnT levels. About 20% had abnormal NT-pro-BNP levels. Diastolic dysfunction of the left ventricle was the most significant in conventional echocardiography. TDI was superior as it detected myocardial affection in 10% more than echo. TDI demonstrated global myocardial damage with significant aberrations in peak myocardial velocities and ratios. Conclusions: NT-pro-BNP can be used as a sensitive cardiac biomarker in monitoring of anthracycline-induced cardiotoxicity. Follow up is essential to validate the role of NT-pro-BNP as an early marker for late onset anthracycline-induced cardiotoxicity. Tissue Doppler is marvelous as it could detect early cardiac dysfunction even in those with normal study by conventional echocardiography. abstract_id: PUBMED:20619024 Dexrazoxane in anthracycline induced cardiotoxicity and extravasation Cardiotoxicity and extravasation injuries are extremely serious complications of anthracycline use. Both complications are probably caused by oxidative stress. Dexrazoxane has been approved as a cardioprotective agent and as an antidote in extravasation of anthracyclines. Randomized clinical trials have shown that dexrazoxane is the only cardioprotective agent proven to be effective in the treatment of anthracycline-induced cardiotoxicity. In these clinical studies dexrazoxane decreased the incidence of cardiac events and heart failure. Possible adverse effects of dexrazoxane when administered as a cardioprotective agent are a decreased antitumor effect of anthracyclines and the onset of secondary malignancies in children. As an antidote in anthracycline extravasation, clinical studies showed dexrazoxane to be highly efficacious in preventing the need for surgical resection. Dexrazoxane can be considered as the treatment of first choice for this indication. Dexrazoxane is well tolerated in general. The most commonly reported side effects are leukopenia, thrombocytopenia and local reactions at the infusion site. abstract_id: PUBMED:37453981 The diagnostic value of global longitudinal strain combined with cardiac biomarkers on early detection of anthracycline-related cardiac dysfunction. Background: Cardiac dysfunction associated with anthracyclines is a significant side effect of chemotherapy, and early detection is crucial. We aimed to assess the diagnostic value of combining global longitudinal strain (GLS) with biomarkers for the early detection of anthracycline-related cardiac dysfunction. Methods: In a prospective cohort study, 80 consecutive adult patients (mean age 51 ± 11 years; 68.8% females) were screened and underwent 2D echocardiographic assessments and biomarker assessments [high-sensitivity troponin-I (hs-Troponin-I) and NT-pro brain natriuretic peptide (NT-proBNP)] before and after anthracycline-based chemotherapy's initial regimen. The patients were followed up for 12 weeks to monitor for the development of cardiotoxicity. Results: Ten patients (12.5%) developed cardiotoxicity at the end of the 12-week follow-up. Baseline values of hs-Troponin-I and NT-proBNP were significantly higher in patients who developed cardiotoxicity compared to those who did not, with a similar pattern observed at the 3-week follow-up. Receiver operating characteristic (ROC) curve analysis demonstrated that a cutoff value of baseline hs-Troponin-I &gt; 11 ng/L, NT-proBNP &gt; 90.1 pg/mL, 3-week left ventricular ejection fraction (LVEF) ≤ 52%, 3-week GLS ≥ - 14.5%, 3-week hs-Troponin-I &gt; 13.1 ng/L, and 3-week NT-proBNP &gt; 118.1 pg/mL predicted the occurrence of cardiotoxicity with high sensitivity (range 83-94%) and specificity (range 77-92%). Conclusion: Combination of GLS with biomarkers had a high diagnostic value in early identification of anthracycline-related cardiac dysfunction, with an estimated diagnostic accuracy of over 85%. This information could potentially help in the identification of patients at high risk of developing cardiac dysfunction, allowing for earlier management. abstract_id: PUBMED:32477593 Evaluating anthracycline cardiotoxicity associated single nucleotide polymorphisms in a paediatric cohort with early onset cardiomyopathy. Background: Anthracyclines are a mainstay of chemotherapy. However, a relatively frequent adverse outcome of anthracycline treatment is cardiomyopathy. Multiple genetic studies have begun to dissect the complex genetics underlying cardiac sensitivity to the anthracycline drug class. A number of single nucleotide polymorphisms (SNPs) have been identified to be in linkage disequilibrium with anthracycline induced cardiotoxicity in paediatric populations. Methods: Here we screened for the presence of SNPs resulting in a missense coding change in a cohort of children with early onset chemotherapy related cardiomyopathy. The SNP identity was evaluated by Sanger sequencing of PCR amplicons from genomic DNA of patients with anthracycline related cardiac dysfunction. Results: All of the published SNPs were observed within our patient group. There was no correlation between the number of missense variants an individual carried with severity of disease. Furthermore, the time to cardiac disease onset post-treatment was not greater in those individuals carrying a high load of SNPs resulting from missense variants. Conclusions: We conclude that previously identified missense SNPs are present within a paediatric cohort with early onset heart damage induced by anthracyclines. However, these SNPs require further replication cohorts and functional validation before being deployed to assess anthracycline cardiotoxicity risk in the clinic. abstract_id: PUBMED:33330656 Red Blood Cell Distribution Width Is a Predictive Factor of Anthracycline-Induced Cardiotoxicity. Background: Red blood cell distribution width (RDW) is associated with prognosis in widespread cardiovascular fields, but little is known about relationship with the onset of cancer therapeutics-related cardiac dysfunction (CTRCD). Objectives: The purpose of this study was to assess whether RDW could predict the onset of CTRCD by anthracycline. Methods: Consequential 202 cancer patients planed for anthracycline treatment were enrolled and followed up for 12 months. The patients were divided into 2 groups based on the median value of baseline RDW before chemotherapy [low RDW group, n = 98, 13.0 [12.6-13.2]; high RDW group, n = 104, 14.9 [13.9-17.0]]. Cardiac function was assessed serially by echocardiography at baseline (before chemotherapy), as well as at 3, 6, and 12 months after chemotherapy with anthracycline. Results: Baseline left ventricular end systolic volume index and ejection fraction (EF) were similar between two groups. After chemotherapy, EF decreased at 3- and 6-month in the high RDW group [baseline, 64.5% [61.9-68.9%]; 3-month, 62.6% [60.4-66.9%]; 6-month, 63.9% [60.0-67.9%]; 12-month, 64.7% [60.8-67.0%], P = 0.04], but no change was observed in low RDW group. The occurrence of CTRCD was higher in high RDW group than in low RDW group (11.5 vs. 2.0%, P = 0.008). When we set the cut-off value of RDW at 13.8, sensitivity and specificity to predict CTRCD were 84.6 and 62.0%, respectively. Multivariable logistic regression analysis revealed that baseline RDW value was an independent predictor of the development of CTRCD [odds ratio 1.390, 95% CI [1.09-1.78], P = 0.008]. The value of net reclassification index (NRI) and integrated discrimination improvement (IDI) for detecting CTRCD reached statistical significance when baseline RDW value was added to the regression model including known risk factors such as cumulative anthracycline dose, EF, albumin, and the presence of hypertension; 0.9252 (95%CI 0.4103-1.4402, P &lt; 0.001) for NRI and 0.1125 (95%CI 0.0078-0.2171, P = 0.035) for IDI. Conclusions: Baseline RDW is a novel parameter to predict anthracycline-induced CTRCD. abstract_id: PUBMED:29224368 Immune response proteins as predictive biomarkers of doxorubicin-induced cardiotoxicity in breast cancer patients. Cancer treatment with doxorubicin (DOX) can induce cumulative dose-dependent cardiotoxicity. Currently, there are no specific biomarkers that can identify patients at risk during the initial doses of chemotherapy. The aim of this study was to examine plasma cytokines/chemokines and potential cardiovascular biomarkers for the prediction of DOX-induced cardiotoxicity. Plasma samples were collected before (T0), and after the first (T1) and the second (T2) cycles of DOX-based chemotherapy of 27 breast cancer patients, including five patients who presented with &gt;10% decline of left ventricular ejection fraction (LVEF), five patients with LVEF decline of 5-10%, and 17 patients who maintained normal LVEF at the end of chemotherapy (240 mg/m2 cumulative dose of DOX from four cycles of treatment). Multiplex immunoassays were used to screen plasma samples for 40 distinct chemokines, nine matrix metalloproteinases, 33 potential markers of cardiovascular diseases, and the fourth-generation cardiac troponin T assay. The results showed that the patients with abnormal decline of LVEF (&gt;10%) had lower levels of CXCL6 and sICAM-1 and higher levels of CCL23 and CCL27 at T0; higher levels of CCL23 and lower levels of CXCL5, CCL26, CXCL6, GM-CSF, CXCL1, IFN-γ, IL-2, IL-8, CXCL11, CXCL9, CCL17, and CCL25 at T1; and higher levels of MIF and CCL23 at T2 than the patients who maintained normal LVEF. Patients with LVEF decline of 5-10% had lower plasma levels of CXCL1, CCL3, GDF-15, and haptoglobin at T0; lower levels of IL-16, FABP3, and myoglobin at T1; and lower levels of myoglobin and CCL23 at T2 as compared to the patients who maintained normal LVEF. This pilot study identified potential biomarkers that may help predict which patients are vulnerable to DOX-induced cardiotoxicity although further validation is needed in a larger cohort of patients. Impact statement Drug-induced cardiotoxicity is one of the major concerns in drug development and clinical practice. It is critical to detect potential cardiotoxicity early before onset of symptomatic cardiac dysfunction or heart failure. Currently there are no qualified clinical biomarkers for the prediction of cardiotoxicity caused by cancer treatment such as doxorubicin (DOX). By using multiplex immunoassays, we identified proteins with significantly changed plasma levels in a group of breast cancer patients who were treated with DOX-based chemotherapy and produced cardiotoxicity. These proteins were associated with immune response and were identified before DOX treatment or at early doses of treatment, thus they could be potential predictive biomarkers of cardiotoxicity although further validation is required to warrant their clinical values. Answer: Yes, biomarkers are predictive of anthracycline-induced cardiac dysfunction. Several studies have demonstrated the utility of biomarkers such as cardiac troponin T (cTnT), N-terminal pro-brain natriuretic peptide (NT-proBNP), B-type natriuretic peptide (BNP), and high-sensitivity troponin-I (hs-Troponin-I) in the early prediction of cardiotoxicity associated with anthracycline chemotherapy. For instance, a study found that an increase in cTnT levels was significantly associated with a change in left ventricular diastolic diameter (LVDD) in patients undergoing anthracycline-based chemotherapy, although no change was seen in ejection fraction (EF) and left atrial diameters (LAD) within 6 months of chemotherapy. NT-proBNP levels also increased significantly in a number of patients, but no statistically significant change was observed in the ECHO parameters within 6 months (PUBMED:27221934). Another study in children indicated that biomarkers such as BNP, NT-proBNP, and cTnT might be useful in the early detection of anthracycline-induced cardiotoxicity, with six out of fourteen studies showing a significant relation between elevated biomarkers and cardiac dysfunction (PUBMED:18850478). Furthermore, a feasibility study suggested that the use of cardiac biomarkers, particularly BNP, may allow early detection of cardiotoxicity related to anthracycline chemotherapy (PUBMED:27079675). Additionally, a prospective cohort study demonstrated that combining global longitudinal strain (GLS) with biomarkers had a high diagnostic value in early identification of anthracycline-related cardiac dysfunction, with an estimated diagnostic accuracy of over 85% (PUBMED:37453981). Other studies have also supported the predictive value of biomarkers for anthracycline-induced cardiotoxicity. For example, NT-proBNP was suggested as a sensitive cardiac biomarker for monitoring cardiotoxicity, and tissue Doppler imaging (TDI) was found to be superior in detecting myocardial affection (PUBMED:22664114). Moreover, baseline global longitudinal strain (GLS) was found to be predictive of cardiotoxicity before anthracycline treatment in patients with a normal baseline LVEF (PUBMED:33517910).
Instruction: Saline Flush Test: Can Bedside Sonography Replace Conventional Radiography for Confirmation of Above-the-Diaphragm Central Venous Catheter Placement? Abstracts: abstract_id: PUBMED:26112633 Saline Flush Test: Can Bedside Sonography Replace Conventional Radiography for Confirmation of Above-the-Diaphragm Central Venous Catheter Placement? Objectives: Resuscitation often requires rapid vascular access via central venous catheters. Chest radiography is the reference standard to confirm central venous catheter placement and exclude complications. However, radiographs are often untimely. The purpose of this study was to determine whether dynamic sonographic visualization of a saline flush in the right side of the heart after central venous catheter placement could serve as a more rapid confirmatory study for above-the-diaphragm catheter placement. Methods: A consecutive prospective enrollment study was conducted in the emergency departments of 2 major tertiary care centers. Adult patients of the study investigators who required an above-the-diaphragm central venous catheter were enrolled during the study period. Patients had a catheter placed with sonographic guidance. After placement of the catheter, thoracic sonography was performed. The times for visualization of the saline flush in the right ventricle and sonographic exclusion of ipsilateral pneumothorax were recorded. Chest radiography was performed per standard practice. Results: Eighty-one patients were enrolled; 13 were excluded. The mean catheter confirmation time by sonography was 8.80 minutes (95% confidence interval, 7.46-10.14 minutes). The mean catheter confirmation time by chest radiograph availability for viewing was 45.78 minutes (95% confidence interval, 37.03-54.54 minutes). Mean sonographic confirmation occurred 36.98 minutes sooner than radiography (P&lt; .001). No discrepancy existed between sonographic and radiographic confirmation. Conclusions: Confirmation of central venous catheter placement by dynamic sonographic visualization of a saline flush with exclusion of pneumothorax is an accurate, safe, and more efficient method than confirmation by chest radiography. It allows the central line to be used immediately, expediting patient care. abstract_id: PUBMED:20870370 Evaluation of proper above-the-diaphragm central venous catheter placement: the saline flush test. hest radiographs are routinely obtained after the insertion of above-the-diaphragm central venous catheters to confirm placement and detect mechanical complications. At times, performing the radiograph can be time-consuming and findings may be inaccurate. We present a case of a patient with difficult access who required emergent resuscitation. A central venous catheter was inserted under ultrasound guidance into the patient's right internal jugular vein. After the procedure, a 2-dimensional bedside ultrasound of the heart was performed with simultaneous flush of saline through the catheter. Ultrasound detected an immediate echogenic turbulent pattern from the flush in the right atrium and then the right ventricle. This correlated to the proper placement of the catheter tip in the superior vena cava. A bedside ultrasound of the ipsilateral lung evaluated and ruled out iatrogenic pneumothorax. Other investigators have demonstrated different methods to confirm central venous catheter placement by bedside ultrasound, utilizing multiple views, Doppler technology, or microbubble contrast. This novel technique, as described here, is most simple. Further research is needed to demonstrate accuracy and utility. abstract_id: PUBMED:26112635 The Ultrasound-Only Central Venous Catheter Placement and Confirmation Procedure. The placement of a central venous catheter remains an important intervention in the care of critically ill patients in the emergency department. We propose an ultrasound-first protocol for 3 aspects of central venous catheter placement above the diaphragm: dynamic procedural guidance, evaluation for pneumothorax, and confirmation of the catheter tip location. abstract_id: PUBMED:29201957 Saline Flush versus Chest x ray in Confirmation of Central Venous Catheter Placement; a Diagnostic Accuracy Study. Introduction: Central venous catheterization (CVC) is a commonly performed procedure in critically ill patients of emergency department. This study was designed to compare the diagnostic accuracy of saline flush with CXR in confirmation of above-the-diaphragm CVC placement. Methods: This prospective cross sectional study was conducted on adult patients in need of CVC placement in emergency department. Placement Confirmation was performed with saline flush method and CXR, then chest computed tomography (CT) was performed as the gold standard. The screening performance characteristics of the two methods were calculated and compared using SPSS 21 and STATA 11. Results: 103 cases with the mean age of 57.18±9.3 (35 -80) years were studied (52.4% male). The mean duration of procedure was 2.5±1.24 in saline flush and 32.11±5.52 minutes in CXR method (P&lt;0.001(. The area under the ROC curves for saline flush and CXR in confirmation of CVC placement were 0.90 (95%CI: 0.70 - 0.100) and 0.80 (95%CI: 0.55 - 0.100), respectively (p = 0.317). The sensitivity, specificity, positive and negative predictive value, and positive and negative likelihood ratio of saline flush were 80%, 100%, 100, 98.9%, Infinity, and 0.01, respectively. These measures were 60%, 100%, 100%, 98%, Infinity, and 0.02 for CXR, respectively. Conclusion: It seems that saline flush method could be considered as a safe, rapid, and accurate bedside method for CVC placement confirmation in emergency department. abstract_id: PUBMED:36277575 Point-of-Care Ultrasound Protocol for Insertion and Confirmation of Central Venous Catheter Placement. Central venous catheterization is a common procedure in the management of critically ill patients, in the context of medical emergencies, and before surgical interventions. Placing a central venous catheter (CVC) in the internal jugular vein (IJV) using anatomical references is associated with a high risk of complications, in particular pneumothorax and arterial puncture. Thus, the placement of CVCs with ultrasound support is recommended by several medical societies and health regulators at the international level. When compared with chest radiography, ultrasound is accessible, safe, cost-effective, and time efficient. This technical report is meant to detail a point-of-care ultrasound protocol designed for the insertion and confirmation of the correct placement of a CVC in the IJV. abstract_id: PUBMED:24439714 The FLUSH study--flush the line and ultrasound the heart: ultrasonographic confirmation of central femoral venous line placement. Study Objective: Inadvertent arterial placement of a femoral venous catheter may result in serious morbidity, including limb necrosis. The Flush the Line and Ultrasound the Heart (FLUSH) test is visualization of the heart by a subxiphoid ultrasonic view while the central catheter is flushed with agitated saline solution. We wish to determine whether the FLUSH test can verify proper femoral venous line placement. Methods: We prospectively studied a convenience sample of children undergoing cardiac catheterization, for whom both femoral venous and arterial access were part of their standard care. The cardiologist flushed manually agitated saline solution through each catheter in randomized sequence while the blinded physician sonographer recorded the presence or absence of right atrial opacification. We calculated the sensitivity and specificity of the FLUSH test relative to our reference standard, the cardiologist's fluoroscopic visualization of catheter wire placement. Results: Of the 51 subjects enrolled, the FLUSH test was 100% sensitive (95% confidence interval 95% to 100%) and 90.3% specific (95% confidence interval 81% to 96%) in confirming femoral catheter placement. In no case was an arterial flush misidentified as a femoral flush. The interrater reliability of the test was strong: κ 0.82 for all images and 0.9 for those of good quality. Conclusion: The FLUSH test is simple and reliable, and appears to accurately confirm femoral venous line placement. abstract_id: PUBMED:28660688 Use of Contrast-Enhanced Ultrasound for Confirmation of Central Venous Catheter Placement: Systematic Review and Meta-analysis. Objectives: The purpose of this meta-analysis was to determine the sensitivity, specificity, and positive and negative predictive values of contrast-enhanced ultrasound (US) for confirming the tip location and placement of central venous catheters in adult patients. Methods: A systematic review was performed using electronic databases, including MEDLINE, ClinicalTrials.gov, Cochrane, Embase, PubMed, and Scopus. Inclusion criteria were studies conducted on adult patients receiving an internal jugular or a subclavian central venous catheter in the emergency department or intensive care unit. Furthermore, the catheter tip location had to be checked with the use of the agitated saline contrast-enhanced US technique. Results: A total of 2245 articles were screened by title and abstract. Seventeen articles were retrieved and assessed for the predefined inclusion criteria. Four articles and 1 abstract were used in the final analysis. Contrast-enhanced US showed pooled sensitivity of 72% (95% confidence interval, 44%-91%), pooled specificity of 100% (95% confidence interval, 99%-100%), a positive predictive value of 92.1%, and a negative predictive value of 98.5% compared with chest radiography for confirming the placement of central venous catheters. Conclusions: In the setting of central venous catheter placement, postprocedural contrast-enhanced US imaging is a safe, efficient, and highly specific confirmatory test for the catheter tip location compared with chest radiography. abstract_id: PUBMED:31681643 Heparin flush vs. normal saline flush to maintain the patency of central venous catheter among adult patients: A systematic review and meta-analysis. Background: Around the globe, protocols for flushing the catheter to maintain the patency of central venous catheter (CVC) vary by institution to institution or by practitioner to practitioner. Therefore, this review was carried out with the aim of evaluating the efficacy of heparin flush vs. normal saline flush to maintain the patency of CVC among adult patients. Methods: We followed the guidelines of Cochrane handbook for interventions and searched in MEDLINE, Embase, Cochrane library, Clinical trials database, and reference list of related articles, which were published from Jan. 2012 to 31 Dec. 2018 in English language. We included only randomized controlled trials, and nine studies were included in this review. The pooled standard mean difference and relative risk were calculated by using Rev Man Review Manager 5. Results: We identified nine eligible studies with a total number of 3,113 participants. Consolidated results from eight studies conveyed little favorable effect to maintain patency of CVC with heparin when compared with normal saline as evident by risk ratio 0.83, 95% CI 0.50 - 1.40; P = 0.13. We also carried out analysis for secondary outcomes, and there was no evidence that heparin was better than normal saline in terms of safety except heparin-induced thrombocytopenia. Conclusions: Heparin has little favorable effects to maintain patency of catheter than normal saline but not in secondary outcomes. As the quality of evidence was very low, therefore, results should be comprehend with care. abstract_id: PUBMED:30132930 Use of Bedside Ultrasonography and Saline Flush Technique for Evaluation of Central Venous Catheter Placement in Children. Our study investigated the reliability of appearance of rapid atrial swirl flow (RASF) by ultrasonography (US) in the right atrium (RA), which occurred as a result of rapid isotonic saline infusion (RISI) into the central venous catheter (CVC), in predicting catheter tip position. This prospective observational study included 95 CVC procedures performed on 77 pediatric patients (41 boys and 36 girls) with a median age of 0.6 (0.29-1.53) years. Seventy-three (76.84%) catheter tips were found to be correctly placed, and 22 (23.15%) catheter tips were misplaced. While ultrasonographic examination revealed RASF in the RA after 93 catheterization procedures, it was not observed after two catheterization procedures. One of these two catheters was an arterial catheter, and the other was a catheter that was directed toward the subclavian vein after curling around itself. There was no significant difference between the groups with incorrect and correct positioned catheter tip in terms of the appearance of RASF by US after RISI. There was no significant difference between the groups with upward (n = 8) and downward (n = 86) positioned catheter tip in terms of the time until the first appearance of RASF after RISI and the phase of RASF (P &gt; 0.05). There was a significant difference between these two groups in terms of the disappearance time of RASF in the RA (P &lt; 0.001). The mean disappearance time of RASF was 3 (2-3) s for downward positioned catheters and 5 (4-7) s for upward positioned catheters, respectively. When the cut-off for the disappearance time of RASF was set to 3 s, US had a sensitivity of 85.71% and a specificity of 77.91% for detecting upward positioned catheters. In conclusion, the appearance of RASF in the RA in a short time by US is not a reliable finding for correct positioning of the CVC tip in the pediatric patient group. The fact that the disappearance time of RASF in the RA is longer than 3 s indicates upward positioned CVCs. These catheters must never be used without radiological confirmation. In CVCs in which the disappearance time of RASF in the RA is shorter than 3 s, we think that the catheter can be used until radiological confirmation in emergency cases. According to the available literature, our study is the first study in children. There is a need for new studies on this subject. abstract_id: PUBMED:32728038 Agitated saline contrast to delineate central venous catheter position in neonates. Objective: Central venous catheter (CVC) insertion is required for the management of sick neonates. Ultrasonography/targeted neonatal echocardiography (TNE) with/without normal saline (NS) flush is used to identify CVC position. The present study compared the visibility and safety of agitated saline (AS) with normal saline (NS) flush. Study Design: This prospective interventional study included 110 CVC insertions, both umbilical venous catheterization (UVC) and peripherally inserted central catheterization (PICC). Catheter position was monitored by real-time TNE. Results: Overall visibility of catheter tip (combined UVC and PICC) was significantly better in AS (n = 55) compared with NS group (n = 55) [48/55 (87.2%) vs. 28/55 (50.9%); p &lt; 0.0001]. Time to detect catheter tip by AS push was significantly less than that of NS push. There was no difference in the amount of saline flush required with either method. No major adverse effect was observed. Conclusions: AS push can be used as a safe method to delineate CVC position in neonates. Answer: The Saline Flush Test, which involves dynamic sonographic visualization of a saline flush in the right side of the heart after central venous catheter placement, has been studied as a potential alternative to conventional radiography for confirming above-the-diaphragm central venous catheter placement. Research suggests that this method is not only accurate but also significantly faster than chest radiography. One study found that the mean catheter confirmation time by sonography was 8.80 minutes, compared to 45.78 minutes for chest radiograph availability, with sonographic confirmation occurring 36.98 minutes sooner (PUBMED:26112633). Another study reported that the saline flush method could be considered a safe, rapid, and accurate bedside method for CVC placement confirmation in the emergency department (PUBMED:29201957). Additional studies have proposed an ultrasound-first protocol for central venous catheter placement above the diaphragm, which includes dynamic procedural guidance, evaluation for pneumothorax, and confirmation of the catheter tip location (PUBMED:26112635). The use of point-of-care ultrasound protocols for insertion and confirmation of CVC placement has been detailed as a safe, cost-effective, and time-efficient alternative to chest radiography (PUBMED:36277575). The saline flush test has been demonstrated to be simple and reliable, with one study showing 100% sensitivity and 90.3% specificity in confirming femoral venous line placement using the FLUSH test (PUBMED:24439714). Contrast-enhanced ultrasound has also been evaluated, showing high specificity for confirming the placement of central venous catheters compared with chest radiography (PUBMED:28660688). In conclusion, bedside sonography, particularly the saline flush test, appears to be a viable alternative to conventional radiography for the confirmation of above-the-diaphragm central venous catheter placement, offering advantages in terms of speed and safety.
Instruction: The use of complementary and alternative medicine by patients attending a general otolaryngology clinic: can we afford to ignore it? Abstracts: abstract_id: PUBMED:24332665 Use of complementary alternative medicine in pediatric otolaryngology patients: a survey. Objective: Limited data are available about complementary alternative medicine (CAM) use in children attending otolaryngology services. We investigated the pattern of CAM use among children and adolescents attending a pediatric otolaryngology clinic. Methods: A cross-sectional survey. Anonymous questionnaires were administered, prior to doctor's admission, to parents accompanying young patients attending the pediatric otolaryngology clinic. Parents were asked about their general attitude toward CAM and whether they had ever consulted or considered a consultation with a CAM therapist. Subsequently, CAM users were asked to provide details on CAM modalities used and on their overall satisfaction with CAM therapy. Results: Of 308 questionnaires administered, 294 parents responded (95% response rate). Ninety-four parents (32%) reported considering CAM, or previous or current CAM use. Commonly used CAM treatments were acupuncture (44%), homeopathy (36%), and naturopathy (6.7%). CAM users assessed success rate as being: successful (37%), unsuccessful (24%) or undetermined (39%). Successful treatment was described in terms of cure, improvement and better awareness of the problem being addressed. In most cases parents stated that the primary physician was aware of CAM use by the child (74%). Conclusions: CAM plays a substantial role among parents of children referred to pediatric otolaryngology consultation. The otolaryngologist awareness of parents' preference and interest may contribute to decision making regarding pediatric patients' management. Further investigations regarding CAM are warranted and clinical collaboration with CAM therapists should be considered. abstract_id: PUBMED:17714796 Use of complementary and alternative medicine in pediatric otolaryngology patients attending a tertiary hospital in the UK. Objective: Little data is available on complementary and alternative medicine (CAM) use in children attending otolaryngology services. We investigated the prevalence and pattern of CAM use among children attending the pediatric otolaryngology department in a tertiary pediatric teaching hospital in Scotland. Design: A cross-sectional survey conducted by administering an anonymous questionnaire to the parents accompanying patients attending the pediatric otolaryngology department. Elective admissions and clinic attendees were included over a 3-month period in 2005/2006. Setting: Academic tertiary care referral centre in North-East Scotland. Patients: Five hundred and fifty-four consecutive patients aged less than 16 years were eligible. The response rate was 59% (n=327). Main Outcome Measures: Prevalence of CAM use in children. Secondary measures include types of CAM used, indications for use and communication with family physicians. Results: Based on 327 responses, 93 patients (29%) had ever used CAM, 20% within the last year. Commonly used CAM preparations were cod-liver oil, echinacea, aloe vera, cranberry, primrose oil and herbal vitamin supplements. The popular non-herbal CAM included homeopathy, massage, aromatherapy, chiropractic, yoga and reiki. Nineteen percent used CAM for their admission illness. Sixty-one percent of parents thought that CAM was effective and 65% would recommend it to others. Fifty-one percent of parents stated that the family physician was unaware of CAM use by the child. Conclusions: Despite concerns regarding the efficacy, safety and cost effectiveness of complementary and alternative medicine, its use among the pediatric otolaryngology population is more common than many providers may realize. This has implications for all healthcare workers involved in their care. abstract_id: PUBMED:20015757 The use of complementary and alternative medicine by patients attending a general otolaryngology clinic: can we afford to ignore it? Background: There is growing interest in use of complementary and alternative medicine (CAM) among the general population. Little information is available, however, on CAM use in adults attending an otolaryngology outpatient clinic in the UK. Objective: The purpose of this article is to study the prevalence and pattern of CAM use among adult patients attending the ear, nose, and throat (ENT) clinic in a UK teaching hospital. Study Design: A cross-sectional study was used. Subjects And Methods: All patients on their primary visit to an ENT clinic were asked to complete an anonymous questionnaire for a 14-week period from October 2005 to January 2006. Results: Based on 1366 completed questionnaires, 53% (728/1366) were female and 47% (638/1366) were male. Twenty-nine percent (395/1366) were older than 60 years, and 56% (763/1366) were married. Twenty percent (275/1366) had a university education. Sixty-one percent (833/1366) had used CAM, almost 36% in the preceding 12 months. The popular remedies were cod liver oil (368/833), garlic (197/833), cranberry (181/833), aloe vera (176/833), primrose oil (174/833), and Echinacea (163/833). Nonherbal therapies included massage (230/833), acupuncture (186/833), aromatherapy (135/833), chiropractic (121/833), reflexology (119/833), and homeopathy (110/833). Seventeen percent (143/833) used CAM for their current illness. Only 8% (64/833) found CAM ineffective; yet, 57% (473/833) would recommend CAM to others. Fifty-one percent (421/833) failed to inform their primary physician of their CAM use. Conclusion: Despite concerns over CAM efficacy, safety, and cost effectiveness, use of CAM is popular among patients attending an ENT clinic. Their use is not generally related to their presenting illness. Otolaryngologists should be aware of current trends in CAM use when managing patients, including possible interactions with other medication. abstract_id: PUBMED:30275790 Role of Complementary and Alternative Medicine in Otolaryngologic Perioperative Care. Background: During the perioperative period for otolaryngologic surgical cases, complications and delays can occur as the result of anxiety, pain, nausea, and vomiting. Conventional methods used to treat these symptoms include medications that can be expensive and invasive or that can cause adverse effects. Because of the concerns about opioid use in the United States, providers might want to consider using complementary and alternative medicine (CAM) as adjunctive or primary treatment plans. Methods: To assess the current knowledge about the clinical effectiveness of CAM for patients undergoing otolaryngologic procedures, we searched the literature using MEDLINE, PubMed, and Google Scholar. We excluded studies published prior to 1990 and articles about surgeries that were unrelated to otolaryngology. Results: An analysis of the selected studies revealed that CAM therapies-acupuncture, aromatherapy, hypnosis, and music therapy-have been shown to be effective at reducing preoperative anxiety, postoperative pain, and postoperative nausea and vomiting. No adverse side effects were associated with CAM use in these studies. Conclusion: The use of CAM in patients undergoing otolaryngologic surgeries may relieve common perioperative symptoms. While further study is warranted, otolaryngology providers might consider implementing CAM with patients electing surgery. abstract_id: PUBMED:27863612 Complementary and alternative medicine use in children with cystic fibrosis. Purpose: To estimate the overall prevalence of complementary and alternative medicine use among children with cystic fibrosis, determine specific modalities used, predictors of use and subjective helpfulness or harm from individual modalities. Results: Of 53 children attending the cystic fibrosis clinic in London, Ontario (100% recruitment), 79% had used complementary and alternative medicine. The most commonly used modalities were air purifiers, humidifiers, probiotics, and omega-3 fatty acids. Family complementary and alternative medicine use was the only independent predictor of overall use. The majority of patients perceived benefit from specific modalities for cystic fibrosis symptoms. Conclusions: Given the high frequency and number of modalities used and lack of patient and disease characteristics predicting use, we recommend that health care providers should routinely ask about complementary and alternative medicine among all pediatric cystic fibrosis patients and assist patients in understanding the potential benefits and risks to make informed decisions about its use. abstract_id: PUBMED:34557877 Complementary Alternative Medicine (CAM) Use and Associated Factors among HIV Infected Children and Adolescents Seeking Mental Health Services in Uganda. Background: Given the limited integration of mental health services into pediatric HIV care in sub-Saharan Africa, there is limited information on the nature of mental health service use sought by caregivers of children and adolescents living with HIV/AIDS. Methods: We analyzed data from a sample of 135 children and adolescents living with HIV/AIDS and attending a pediatric HIV care service whose parents or caregivers had sought for mental health care for their emotional or behavior problems in the past year. We assessed complementary alternative medicine use, socio-demographic characteristics and types of behavioral problems using a SOCIO-DEMOGRAPHIC questionnaire and the child behavioral checklist (6 - 18) respectively. Logistic regression models were used to explore factors independently associated with specific patterns of mental health service use. Results: Of 135 parents/caregivers interviewed, 38 (28.15%) sought mental health care from only complementary and alternative medicine (CAM) providers (traditional or faith healers), 38 (28.15%) from only conventional medical providers (general physicians, general nurses, psychiatrists or HIV counselors) and 59 (43.70%) sought care from both). Severe HIV disease in the young children [PRR = 2.09, 95%CI = 1.36 to 3.2, p-value = 0.001] and thought problems [PRR 1.26, 95%CI = 0.81 to 1.72, p-value = 0.04] in adolescents were independently associated with complementary and alternative medicine use. Conclusion: Caregivers of children and adolescents with progressive HIV disease and thought problems were more likely to seek mental health services from complementary and alternative medicine providers. abstract_id: PUBMED:28718252 Use Of Complementary And Alternative Medicine Among Asthmatic Patients. Background: The prevalence of complementary and alternative medicine (CAM) use in some developed countries is on the rise as surveys conducted on large scale vouch for. This study was conducted with the aim to determine the frequency of complementary and alternative medicine (CAM) use among asthmatic patients at outpatient department of tertiary care hospitals in Peshawar, Khyber Pakhtunkhwa, Pakistan. Methods: This cross-sectional study was conducted in outpatient department of two tertiary care hospitals. A face to face interview of 423 patients was conducted through a questionnaire. Non-probability consecutive sampling method was used to select the respondents. Stata version 12.1 (StataCorp), College Station, Texas) was used to carry out the statistical analysis. Results: Overall asthmatic patients in the study were 423, in which 232 (54.85%) patients reported as using CAM. There were 177 (41.84%) males and 246 (58.16%) females. The CAM use was significantly more in older age patients, suffering from asthma for more than 5 years, severe persistent type of asthma, married, unemployed, rural and current smokers. Conclusions: The use of CAM reflects a high CAM use among asthmatic patients in Pakistan. Clinicians should be aware about their patient's use of CAM. abstract_id: PUBMED:35162621 Complementary and Alternative Medicine Use in Hospitalized Cancer Patients-Study from Silesia, Poland. Background: The use of complementary and alternative medicine (CAM) is common amongst cancer patients. The aim of the study was to investigate the use of CAM, beliefs about CAM and the purpose of using it amongst Polish cancer patients. Methods: The study included 864 cancer patients (median 63 years old), who were individually interviewed. The questionnaire was designed specifically for this study. Results: Amongst 732 patients who declared that they heard about CAM, 342 patients (46.7%) had used CAM; 91% of these patients had used it as a complementary therapy and 9% had used it as an alternative therapy. Patients younger in age, highly educated, professionally active, with longer medical history, and at more advanced cancer stages have, statistically, used CAM more often; 66% of participants could not state what the health effect of CAM is. Patients received information about CAM from the internet, friends, family and other patients. Only 18% of patients discussed using CAM with a doctor. Conclusions: CAM was popular amongst Polish cancer patients, especially in younger, educated and professionally active patients with longer cancer history at advanced stage. Patients used CAM as a complementary therapy for strengthening immune system, improving morphological and biochemical test parameters, reducing the side effects of conventional therapy and improving their well-being. abstract_id: PUBMED:25873293 Complementary and Alternative Medicine Use in Cancer Patients in Rural Australia. Aim: Numerous studies have demonstrated the high prevalence of complementary and alternative medicine (CAM) use in metropolitan cancer cohorts but few have been conducted in regional and remote populations. This study aimed to investigate the trends and regional variations in CAM use by cancer patients at a regional cancer care center in Toowoomba, South East Queensland, Australia. Methods: All English-speaking adult cancer patients attending the regional cancer care center were invited to participate. Eligible patients were provided a self-administered questionnaire that was developed based on published surveys. Ethics approval was obtained. Results: Overall 142 patients completed the questionnaire and 68% were currently or had previously used at least one form of CAM. CAM users and nonusers did not differ significantly by region, age, gender, time since diagnosis, income, town size, treatment intent, or metastases. CAM users were more likely to have a higher level of education. Concurrent CAM use with conventional treatment was reported by approximately half of respondents. The most common reason for CAM use was "to improve general physical well-being." The most common sources of CAM information were family (31%) and friends (29%). Disclosure of CAM use to either the general practitioner or specialist was reported by 46% and 33% of patients, respectively. The most common reason for nondisclosure was "doctor never asked." Conclusion: This study supports previous research that CAM use is as common in regional and remote areas as metropolitan areas. Nondisclosure of CAM use to health professionals was common. Future research needs to focus on strategies to improve communication between patients and health professionals about the use of CAM. abstract_id: PUBMED:35027078 Acceptance and use of complementary and alternative medicine among medical specialists: a 15-year systematic review and data synthesis. Background: Complementary and Alternative Medicine (CAM) has gained popularity among the general population, but its acceptance and use among medical specialists have been inconclusive. This systematic review aimed to identify relevant studies and synthesize survey data on the acceptance and use of CAM among medical specialists. Methods: We conducted a systematic literature search in PubMed and Scopus databases for the acceptance and use of CAM among medical specialists. Each article was assessed by two screeners. Only survey studies relevant to the acceptance and use of CAM among medical specialists were reviewed. The pooled prevalence estimates were calculated using random-effects meta-analyses. This review followed both PRISMA and SWiM guidelines. Results: Of 5628 articles published between 2002 and 2017, 25 fulfilled the selection criteria. Ten medical specialties were included: Internal Medicine (11 studies), Pediatrics (6 studies), Obstetrics and Gynecology (6 studies), Anesthesiology (4 studies), Surgery (3 studies), Family Medicine (3 studies), Physical Medicine and Rehabilitation (3 studies), Psychiatry and Neurology (2 studies), Otolaryngology (1 study), and Neurological Surgery (1 study). The overall acceptance of CAM was 52% (95%CI, 42-62%). Family Medicine reported the highest acceptance, followed by Psychiatry and Neurology, Neurological Surgery, Obstetrics and Gynecology, Pediatrics, Anesthesiology, Physical Medicine and Rehabilitation, Internal Medicine, and Surgery. The overall use of CAM was 45% (95% CI, 37-54%). The highest use of CAM was by the Obstetrics and Gynecology, followed by Family Medicine, Psychiatry and Neurology, Pediatrics, Otolaryngology, Anesthesiology, Internal Medicine, Physical Medicine and Rehabilitation, and Surgery. Based on the studies, meta-regression showed no statistically significant difference across geographic regions, economic levels of the country, or sampling methods. Conclusion: Acceptance and use of CAM varied across medical specialists. CAM was accepted and used the most by Family Medicine but the least by Surgery. Findings from this systematic review could be useful for strategic harmonization of CAM and conventional medicine practice. Systematic Review Registration: PROSPERO CRD42019125628. Answer: The use of complementary and alternative medicine (CAM) is indeed prevalent among patients attending general otolaryngology clinics, and it is a phenomenon that cannot be ignored by healthcare providers. A cross-sectional study conducted in a UK teaching hospital found that 61% of adult patients attending an ENT clinic had used CAM, with almost 36% using it in the preceding 12 months. The popular remedies included cod liver oil, garlic, cranberry, aloe vera, primrose oil, and Echinacea, while nonherbal therapies included massage, acupuncture, aromatherapy, chiropractic, reflexology, and homeopathy. Despite only 8% finding CAM ineffective, 57% would recommend it to others, and 51% did not inform their primary physician of their CAM use (PUBMED:20015757). This trend is not limited to adult patients; pediatric otolaryngology patients also show substantial CAM use. A survey in a pediatric otolaryngology clinic revealed that 32% of parents reported considering or using CAM for their children, with acupuncture, homeopathy, and naturopathy being the most common treatments. Parents assessed the success rate of CAM as successful (37%), unsuccessful (24%), or undetermined (39%), and in most cases, the primary physician was aware of the child's CAM use (74%) (PUBMED:24332665). Given the significant number of patients using CAM and the potential for interactions with conventional treatments, otolaryngologists and other healthcare providers should be aware of current trends in CAM use. This awareness is crucial for managing patients effectively, including discussing possible interactions with other medications and ensuring comprehensive patient care (PUBMED:20015757). Therefore, it is clear that the use of CAM in otolaryngology patients is a reality that healthcare providers cannot afford to ignore.
Instruction: Does perfluoro-n-octane use in 23G vitrectomy for retinal detachment surgery affect the integrity of the ellipsoid zone? Abstracts: abstract_id: PUBMED:27102352 Does perfluoro-n-octane use in 23G vitrectomy for retinal detachment surgery affect the integrity of the ellipsoid zone? Purpose: To examine the integrity of the ellipsoid zone after perfluoro-n-octane use for reattachment of the central retina in macula-off retinal detachment cases. Methods: Intraoperative video documentation with microscope-integrated intraoperative spectral-domain optical coherence tomography (OCT) of 10 consecutive patients with macula-off retinal detachment, scheduled for primary vitrectomy for retinal detachment repair, was retrospectively examined for defects in the integrity of the ellipsoid zone due to perfluoro-n-octane use. Results: Successful intraoperative visualization was possible in all cases, retinal detachment was reattached in all cases, but residual islands of subretinal fluid persisted directly after perfluoro-n-octane use in 80% of cases. These were resorbed in all cases in the postoperative OCTs. Disturbances in the integrity of the ellipsoid zone or intraoperative macular hole formation did not present in any of our cases. Conclusions: Perfluoro-n-octane use seems to be a safe procedure regarding integrity of the ellipsoid zone in cases scheduled for primary vitrectomy for retinal detachment repair. abstract_id: PUBMED:33195330 PFOh No: Spontaneous Globe Rupture During Injection of Perfluoro-n-Octane (PFO) During Retinal Detachment Repair. The authors describe a case of spontaneous globe rupture during instillation of perfluoro-n-octane (PFO) during retinal surgery. A 71-year-old male with a macular-involving rhegmatogenous retinal detachment underwent pars plana vitrectomy. During instillation of PFO manually on a single-bore cannula by the assisting resident, spontaneous globe rupture occurred superotemporally in an area with no visible underlying structural abnormalities. Factors that led to this complication include the use of single-bore cannula, error in judgment of resistance during PFO injection, and inexperience. This is the first report of this complication without an identifiable structural abnormality predisposing patient to perforation. abstract_id: PUBMED:38239947 Pars plana vitrectomy for retinal detachment using perfluoro-n-octane as intraoperative tamponade: a multicenter, randomized, non-inferiority trial. Aim: To evaluate the efficacy and safety of perfluoro-n-octane (PFO) for ophthalmic surgery versus F-Octane as an intraoperative tamponade in pars plana vitrectomy (PPV) in management of retinal detachment. Methods: This multicenter, prospective, randomized, double-masked, parallel-controlled, non-inferiority trial was conducted in three ophthalmology clinical centers in China. Patients with retinal detachment, who were eligible for PPV were consecutively enrolled. Participants were assigned to PFO for ophthalmic surgery or F-Octane for intraocular tamponade in a 1:1 ratio. Best-corrected visual acuity (BCVA), intraocular pressure (IOP) measurement, and dilated fundus examination were performed preoperatively and at 1, 7±1, 28±3d postoperatively. The primary outcome was complete retinal reattachment rate at postoperative day one. The non-inferiority margin was set at 9.8%. The secondary outcomes included intraoperative retinal reattachment rate, and mean changes in IOP and BCVA from baseline to 1, 7±1, 28±3d postoperatively, respectively. Safety analyses were presented for all randomly assigned participates in this study. Results: Totally 124 eligible patients completed the study between Mar. 14, 2016 and Jun. 7, 2017. Sixty of them were randomly assigned to the PFO for ophthalmic surgery group, and 64 were assigned to the F-Octane group. Baseline characteristics were comparable between the two groups. Both groups achieved 100% retinal reattachment at postoperative day one (difference 0, 95%CI: -6.21% to 5.75%, P=1). The pre-defined noninferiority criterion was met. No significant difference was observed in intraoperative retinal reattachment rate (difference 1.77%, P=0.61), mean changes in IOP (difference 0.36, -0.09, 2.22 mm Hg at 1, 7±1, 28±3d postoperatively, with all P&gt;0.05) and BCVA (difference 0.04, -0.02, 0.06 logMAR at 1, 7±1, 28±3d postoperatively, all P&gt;0.05) between the two groups. No apparent adverse events related to the utilization of PFO were reported. Conclusion: In patients with retinal detachment undergoing PPV, PFO for ophthalmic surgery is non-inferior to F-Octane as an intraocular tamponade, and both are safe and well-tolerated. abstract_id: PUBMED:25100918 Nine years of retained perfluoro-n-octane in the anterior chamber after retinal detachment repair with no signs of ocular toxicity. Perfluoro-n-octane (PFO) is commonly used in vitreoretinal surgery. In this case report, we present a 35-year-old patient with retained PFO up to 9 years after par plana vitrectomy. Post-operatively, PFO bubbles occupied 15% of the anterior chamber (AC). Follow-up over 9 years, consistently showed a quiet AC, normal intraocular pressure and endothelial cell counts remained stable. Until date, the patient has been under observation and there have been no ocular symptoms or side-effects. Residual PFO that inadvertently remains in the AC can be well- tolerated, without side-effects for up to 9 years. abstract_id: PUBMED:22068861 Perfluoro-n-octane in orbital cavity after posterior vitrectomy and suture of eye perforating injury: case report A case of a perfluoro-n-octane leakage into the orbital cavity after corneoscleral suture, scleral buckling and pars plana vitrectomy in an eye with perforating injury after trauma is reported for the first time. A previously healthy 39-year-old man was sent for ophthalmic evaluation one day after suffering a penetrating ocular trauma in his right eye while hammering a nail. On the initial evaluation, the patient presented sudden reduction of visual acuity on his right eye with a perforating corneoscleral injury, intraocular foreign body and retinal detachment. The patient was submitted to corneoscleral suture, pars plana vitrectomy with perfluoro-n-octane administration to flatten the retina and scleral buckling, when it was found transfixation of the globe by the intraocular foreign body. Postoperatively, computed tomography scans of the orbit were ordered due to proptosis, which showed the presence of hyperdense images, suggesting leakage of perfluoro-n-octane into the orbital cavity. Thus, in cases of perforating eye injury, one should be suspicious about the possibility of intraocular foreign body, as well as possible occurrence of transfixation of the globe when scheduling the surgery. abstract_id: PUBMED:10213234 Primary vitrectomy with perfluoro-n-octane use in the treatment of pseudophakic retinal detachment with undetected retinal breaks. Purpose: This report describes the results of a prospective trial to evaluate the efficacy of pars plana vitrectomy (PPV) in conjunction with perfluoro-n-octane (PFO) as initial treatment of pseudophakic retinal detachment (RD) with no breaks diagnosed preoperatively. Methods: Fourteen consecutive eyes presenting with pseudophakic RD in which retinal breaks could not be identified preoperatively underwent primary PPV, internal microsurgical identification of the retinal breaks with endoillumination and noncontact wide angle viewing system, PFO retinal reattachment, transscleral cryopexy or endolaser treatment of breaks, PFO-air exchange, and final injection of 20% sulfur hexafluoride. In five eyes, a scleral buckle was also used. Mean follow-up period was 18 months. Results: In 13 of the 14 eyes in which no breaks had been identified preoperatively, breaks were diagnosed during surgery. Perfluoro-n-octane retinal attachment facilitated accurate microscopic treatment of retinal breaks. The retina was reattached with a single operation in all eyes. Eleven eyes had final visual acuity of 20/60 or better. Complications were minimal. Conclusions: Pars plana vitrectomy in conjunction with PFO expression of subretinal fluid is effective in the initial treatment of pseudophakic RD with no preoperative diagnosis of retinal breaks. abstract_id: PUBMED:27555718 Reversible Corneal Toxicity of Retained Intracameral Perfluoro-n-octane. A 58-year-old female presented with intracameral retained perfluoro-n-octane (PFO) following previous retinal reattachment surgery. After 4 years of follow-up without related sequelae, the patient complained of a gradual decrease in vision secondary to corneal edema with whitish corneal precipitate inferiorly corresponding to the area of retained PFO. Three weeks after anterior chamber washout, corneal edema resolved and the patient obtained 20/40 visual acuity. Even though PFO considered to have a relatively good safety profile, early anterior chamber washout may prevent corneal toxicity and avoid later persistent corneal decompensation. abstract_id: PUBMED:24332376 Efficacy of short-term postoperative perfluoro-n-octane tamponade for pediatric complex retinal detachment. Purpose: To evaluate the efficacy of perfluoro-n-octane as a postoperative short-term tamponade after vitrectomy in pediatric cases with complex retinal detachment (RD) and proliferative vitreoretinopathy (PVR). Design: Prospective, noncomparative, interventional case series. Methods: The medical records of 10 eyes of 9 children (6 boys and 3 girls), whose age ranged from 3 months to 11 years, with a median of 7.5 months, were reviewed. The cause of the PVR was retinopathy of prematurity (7 eyes of 6 patients); familial exudative vitreoretinopathy; or tractional RD associated with congenital optic nerve anomalies,(1) and persistent fetal vasculature. Perfluoro-n-octane was injected into the eyes at the primary surgery in 2 eyes and at the repeat surgeries in 8 eyes. The perfluoro-n-octane was removed after 1 to 4 postoperative weeks. The patients were followed for 5 to 43 months. Results: At the last examination, the retinas were reattached in 8 eyes (80%). In the other two eyes, a retinal attachment was not obtained. Postoperatively, the best-corrected visual acuity improved from hand motion to 0.1 in 1 eye and could not be measured in the other 9 patients because of their ages. No apparent adverse events related to the use of perfluoro-n-octane were noted. Conclusions: Although cautions should be exercised regarding potential mechanical retinal injuries by heavy liquids in the eye, short-term perfluoro-n-octane tamponade was effective in pediatric cases with severe PVR in which retinal reattachment is considered to be difficult with conventional gas or silicone oil tamponade. abstract_id: PUBMED:21573090 Ocular tolerance and efficacy of short-term tamponade with double filling of polydimethyloxane and perfluoro-n-octane. Objective: The aim of the study was to evaluate the ocular tolerance and efficacy of double filling with perfluoro-n-octane (n-C8F18) (PFO) and polydimethyloxane (PDMS) as a temporary vitreous substitute in patients with retinal detachment complicated by proliferative vitreoretinopathy (PVR). Material And Methods: Tamponade was performed in 30 eyes of 30 patients by double filling with 30% PFO and 70% PDMS for an average of 23 (standard deviation 2.2) days. The follow-up visits were scheduled 1 week, 1 month, and 3 months after surgery. The main outcome measures were visual acuity, intraocular pressure (IOP), PVR reproliferation, and electrophysiological parameters. Results: The primary success rate was 80% (24/30). Fourteen patients (46.7%) had a postoperative improvement in visual acuity, 12 patients (40.0%) maintained their preoperative visual acuity, and four patients (13.3%) experienced a reduction in visual acuity. The mean postoperative IOP was 19.7 mm Hg (11-32 mm Hg); nine cases (30.0%) developed an IOP increase that was treated with topical drops and/or systemic carbonic anhydrase inhibitors. The electroretinogram (ERG) and the bright flash electroretinogram (bf ERG) parameters showed a statistically significant difference of means between 4- and 8-week follow-up visits. Conclusion: Our experience with double filling in selected cases of retinal detachment has been positive. No electroretinographic signs of retinal toxicity and a low incidence of PVR reproliferation were observed. abstract_id: PUBMED:20337323 Extended Endotamponade with Perfluoro-n-Octane in Pediatric Retinal Detachment. The use of perfluoro-n-octane (PFnO) for extended endotamponade in an infant with X-linked retinoschisis and complex tractional retinal detachments associated with proliferative vitreoretinopathy (PVR) was reported. The patient had undergone multiple vitrectomy surgeries with silicone oil tamponade. After extensive retinectomy, slippage of the retina produced folds that were flattened under extended PFnO tamponade. The retinas of both eyes remained anatomically reattached 11 months after exchange of PFnO for silicone oil. Answer: The use of perfluoro-n-octane (PFO) in 23G vitrectomy for retinal detachment surgery does not appear to affect the integrity of the ellipsoid zone. A study examining the integrity of the ellipsoid zone after PFO use for reattachment of the central retina in macula-off retinal detachment cases found that disturbances in the integrity of the ellipsoid zone or intraoperative macular hole formation did not present in any of the cases examined. Successful intraoperative visualization was possible in all cases, and while residual islands of subretinal fluid persisted directly after PFO use in 80% of cases, these were resorbed in all cases in the postoperative optical coherence tomography (OCT) scans. The study concluded that PFO use seems to be a safe procedure regarding the integrity of the ellipsoid zone in cases scheduled for primary vitrectomy for retinal detachment repair (PUBMED:27102352).
Instruction: Are people with chronic diseases interested in using telehealth? Abstracts: abstract_id: PUBMED:24811914 Are people with chronic diseases interested in using telehealth? A cross-sectional postal survey. Background: There is growing interest in telehealth-the use of technology to support the remote delivery of health care and promote self-management-as a potential alternative to face-to-face care for patients with chronic diseases. However, little is known about what precipitates interest in the use of telehealth among these patients. Objective: This survey forms part of a research program to develop and evaluate a telehealth intervention for patients with two exemplar chronic diseases: depression and raised cardiovascular disease (CVD) risk. The survey was designed to explore the key factors that influence interest in using telehealth in these patient groups. Methods: Thirty-four general practices were recruited from two different regions within England. Practice records were searched for patients with (1) depression (aged 18+ years) or (2) 10-year risk of CVD ≥20% and at least one modifiable risk factor (aged 40-74 years). Within each general practice, 54 patients in each chronic disease group were randomly selected to receive a postal questionnaire. Questions assessed five key constructs: sociodemographics, health needs, difficulties accessing health care, technology-related factors (availability, confidence using technology, perceived benefits and drawbacks of telehealth), and satisfaction with prior use of telehealth. Respondents also rated their interest in using different technologies for telehealth (phone, email and Internet, or social media). Relationships between the key constructs and interest in using the three mediums of telehealth were examined using multivariable regression models. Results: Of the 3329 patients who were sent a study questionnaire, 44.40% completed it (872/1740, 50.11% CVD risk; 606/1589, 38.14% depression). Overall, there was moderate interest in using phone-based (854/1423, 60.01%) and email/Internet-based (816/1425, 57.26%) telehealth, but very little interest in social media (243/1430, 16.99%). After adjusting for health needs, access difficulties, technology-related factors, and prior use of telehealth, interest in telehealth had largely no association with sociodemographic variables. For both patient groups and for each of the three technology mediums, the most important constructs related to interest in telehealth were having the confidence to use the associated technology, as well as perceiving greater advantages and fewer disadvantages from using telehealth. To illustrate, greater confidence using phone technologies (b=.16, 95% CI 0.002-0.33), while also perceiving more benefits (b=.31, 95% CI 0.21-0.40) and fewer drawbacks (b=-.23, 95% CI -0.28 to -0.17) to using telehealth were associated with more interest in using phone-based telehealth technologies for patients with depression. Conclusions: There is widespread interest in using phone-based and email/Internet-based telehealth among patients with chronic diseases, regardless of their health status, access difficulties, age, or many other sociodemographic factors. This interest could be increased by helping patients gain confidence using technologies and through highlighting benefits and addressing concerns about telehealth. While the same pattern exists for social media telehealth, interest in using these technologies is minimal. abstract_id: PUBMED:33083791 Satisfaction with the use of telehealth during COVID-19: An integrative review. Background: The novel coronavirus (COVID-19) pandemic has defined 2020 thus far. Businesses, social and religious gatherings, travel, and almost all forms of transportation shut down to halt the spread of COVID-19. People were ordered to quarantine in place, and the world appears to go into a standstill. In the midst of being quarantined people with acute and chronic conditions still require medical care and treatment. An alternative way for people to receive needed health care was necessary. Hence, we saw an unprecedented surge in telehealth. With this unprecedented surge in use of telehealth, there is matter of delivering quality care. Objective: Guided by the Donabedian Model, the purpose of this integrative review was to examine current evidence on patient and healthcare provider's satisfaction with the use of telehealth during the COVID-19 pandemic. Design: An integrative review of the literature. Data Sources: The literature review was conducted utilizing PubMED, CINAHL, Google Scholar and Cochrane Library databases. Inclusion criteria were studies published from January 2020 to July 11, 2020, published and translated in English language, and studies that evaluated patients and providers satisfaction with the use of telehealth or telemedicine during COVID-19. Eighteen articles were included in this review. Review Methods: An evaluation matrix was developed to collect data from the included articles. The articles were appraised using Fineout-Overholt &amp; Gallagher-Ford Rapid Critical Appraisal Checklist for Descriptive Studies and Rapid Critical Appraisal of Evidence-Based Practice Implementation or Quality Improvement Projects (Melnyk &amp; Fineout-Overholt, 2015). Authors independently appraised each article using the appropriate appraisal tools. Results: Ten of the articles (53%) included were studies conducted in the United States. Sixteen out of 18 studies evaluated patient satisfaction and five studies examined healthcare providers' satisfaction with the use of telehealth. The majority of telehealth services offered were by subspecialists. Overall, patients and healthcare providers have high level of satisfaction with the use of telehealth during COVID-19 pandemic. Many patients and healthcare providers reported willingness to continue using telehealth after the pandemic. Conclusion: This integrative review provided additional evidence on patient and healthcare provider's satisfaction with the use of telehealth. Findings in this review may not be surprising as individuals, healthcare providers, and health systems needed to adopt the use telehealth due to necessity. However, the results for telehealth are promising. abstract_id: PUBMED:28224943 Medicare Telehealth Services and Nephrology: Policies for Eligibility and Payment. The criteria for Medicare payment of telehealth nephrology services, and all other Medicare telehealth services, are set forth in section 1834(m) of the Social Security Act. There are just over 80 professional physician or practitioner services that may be furnished via telehealth and paid under Medicare Part B, when an interactive audio and video telecommunication system that permits real-time communication between a beneficiary at the originating site and the physician or practitioner at the distant site substitutes for an in-person encounter. These services include 16 nephrology billing codes for furnishing ESRD services for monthly monitoring and assessment and two billing codes for chronic kidney disease education. In recent years, many mobile health devices and other web-based tools have been developed in support of monitoring, observation, and collaboration for people living with chronic disease. This article reviews the statutory and program guidance that governs Medicare telehealth services, defines payment policy terms (e.g., originating site and distant site), and explains payment policies when telehealth services are furnished. abstract_id: PUBMED:31744701 Adding value to remote monitoring: Co-design of a health literacy intervention for older people with chronic disease delivered by telehealth - The telehealth literacy project. Objective: To co-design, test and evaluate a health literacy, chronic disease self-management and social support intervention for older people delivered by group videoconferencing into the home. Method: The Telehealth Literacy Project (THLP) was a mixed methods, quasi-experimental, non-randomised trial nested within a telehealth remote monitoring study. An intervention group (n = 52) participated in five, weekly videoconference group meetings lasting for 1.5 h and a control group (n = 60) received remote monitoring only. Outcomes were measured using the nine-scale Health Literacy Questionnaire (HLQ) and two scales of the Health Education Impact Questionnaire (heiQ). Semi-structured interviews and focus group data were thematically analysed. Result: At 3 month follow-up, univariate analysis identified small effects in the intervention group only, with improved health literacy behaviours (five HLQ scales) and self-management skills (two heiQ scales). ANOVA of HLQ scales indicated no significant differences between the two groups over time indicating a contributing effect of the remote monitoring project. Intervention participants reported improved perception of companionship, emotional and informational support. Conclusion: The THLP delivered with telemonitoring indicates potential to improve social support and some health literacy factors in older people. Practice Implications: Patient education can be delivered by group videoconferencing. abstract_id: PUBMED:37052797 Telehealth for Chronic Disease Management Among Vulnerable Populations. Chronic diseases disproportionately affect patients in low-income minority groups who traditionally use in-person healthcare services. COVID-19 disrupted their routines and limited options for people to receive care; this could exacerbate health inequities. The study examined telehealth chronic disease management among low-income minority groups. We used Florida Medicaid claims data from March to December 2020 and the American Consumer Survey to examine the study objectives. Data were analyzed using Linear and Logistic Regression. We retrieved claim records of 52,904 unique patients; 31,999 were female and 49% of the sample had at least one telehealth visit. Medicaid patients were 8% less likely to use telehealth and 21% more likely to have audio visits when compared to Medicare patients. The analyses suggest that Non-Hispanic Black patients and individuals with a lack of education experience significant health inequities. People with chronic obstructive pulmonary disease (5%) and heart failure (14%) were less likely to use telehealth than patients with diabetes. Telehealth will continue to be a health delivery option; thus we recommend that strategies are enacted to educate, and resources are provided to promote equity among Non-Hispanic Black patients. Without priority attention to people among low-income minority populations, health inequities will continue to plague this community. abstract_id: PUBMED:32596950 Dietitians Australia position statement on telehealth. It is the position of Dietitians Australia that clients can receive high-quality and effective dietetic services such as Medical Nutrition Therapy (MNT) delivered via telehealth. Outcomes of telehealth-delivered dietetic consultations are comparable to those delivered in-person, without requiring higher levels of additional training nor compromising quality of service provision. Dietitians Australia recommends that policy makers and healthcare funders broaden the recognition for telehealth-delivered dietetic consultations as a responsive and cost-effective alternative or complement to traditional in-person delivery of dietetic services. The successful implementation of telehealth can help to address health and service inequalities, improve access to effective nutrition services, and support people with chronic disease to optimise their diet-related health and well-being, regardless of their location, income or literacy level, thereby addressing current inequities. abstract_id: PUBMED:35640452 Feasibility, satisfaction, acceptability and safety of telehealth for First Nations and culturally and linguistically diverse people: a scoping review. Objectives: The COVID-19 pandemic has highlighted the importance of access to telehealth as an alternative model of service during social restrictions and for urban and remote communities alike. This study aimed to elucidate whether First Nations and culturally and linguistically diverse (CALD) patients also benefited from the resource before or during the pandemic. Study Design: This study was a scoping review. Methods: A scoping review of MEDLINE, CINAHL and PsycINFO databases from 2000 to 2021 was performed. Paired authors independently screened titles, abstracts and full texts. A narrative synthesis was undertaken after data extraction using a standard template by a team including First Nations and CALD researchers. Results: Seventeen studies (N = 4,960 participants) mostly qualitative, covering First Nations and CALD patient recipients of telehealth in the United States, Canada, Australia, and the Pacific Islands, met the inclusion criteria. Telehealth was perceived feasible, satisfactory, and acceptable for the delivery of health screening, education, and care in mental health, diabetes, cancer, and other chronic conditions for remote and linguistically isolated populations. The advantages of convenience, lower cost, and less travel promoted uptake and adherence to the service, but evidence was lacking on the wider availability of technology and engagement of target communities in informing priorities to address inequalities. Conclusions: Further studies with larger samples and higher level evidence methods involving First Nations and CALD people as co-designers will assist in filling the gap of safety and cultural competency. abstract_id: PUBMED:27789615 Economic evaluation strategies in telehealth: Obtaining a more holistic valuation of telehealth interventions. Telehealth is an emerging area of medical research. Its translation from conception, to research and into practice requires tailored research and economic evaluation methods. Due to their nature telehealth interventions exhibit a number of extra-clinical benefits that are relevant when valuing their costs and outcomes. By incorporating methods to measure societal values such as patient preference and willingness-to-pay, a more holistic value can be placed on the extra-clinical outcomes associated with telehealth and evaluations can represent new interventions more effectively. Cost-benefit analysis is a method by which relevant costs and outcomes in telehealth can be succinctly valued and compared. When health economic methods are conducted using holistic approaches such as cost-benefit analysis they can facilitate the translation of telehealth research into policy and practice. abstract_id: PUBMED:38088346 Improving telehealth for persons with multiple sclerosis - a cross-sectional study from the Australian MS longitudinal study. Purpose: The COVID-19 pandemic impacted healthcare delivery, especially for people with chronic disease. We investigated telehealth use by persons with multiple sclerosis (MS) during the COVID-19 pandemic and their suggestions to improve their telehealth experience. Methods: From persons with MS, we collected data on demographic, disease-related and social determinants of health, telehealth use during the COVID-19 pandemic, satisfaction with telehealth, and suggestions to improve telehealth. We conducted univariate log-binomial regression to establish factors associated with telehealth use versus no use, summarised experiences and suggestions for improvement with frequency tables, and conducted thematic analysis on free-text suggestions for improvement. Results: Of 1,485 participants, 69.8% used telehealth during the first phase of the COVID-19 pandemic. Only small differences were observed for demographic, disease and social health determinants between telehealth users and non-users. Most participants who used telehealth had good or very good experiences (new providers:74.3%; existing providers:78.6%). The most common suggestion for improving telehealth experience was "guidance on preparing for telehealth sessions." Participants also wanted expansion in telehealth availability and utility. Conclusion: Persons with MS in Australia commonly used telehealth during the COVID-19 pandemic and were generally satisfied with their experiences. Implementing the suggested improvements will help optimise telehealth for persons with MS. Registration: N/A. abstract_id: PUBMED:33814179 Public perceptions and disparities in access to telehealth orthopaedic services in the COVID-19 era. Background: We used online crowdsourcing to explore public perceptions and attitudes towards virtual orthopaedic care, and to identify factors associated with perceived difficulty navigating telehealth services during the COVID-19 pandemic. Methods: A modified version of the validated Telemedicine Satisfaction and Usefulness Questionnaire was completed by 816 individuals using crowd-sourcing methods. Multivariable logistic regression modelling was used to determine population characteristics associated with perceived difficulty using telehealth technology. Results: Most respondents (85%) believed that telehealth visits would be a convenient form of healthcare delivery, and 64% would prefer them over in-person office visits. The majority (92%) agreed that telehealth would save them time, but 81% had concerns regarding the lack of physical contact during a musculoskeletal examination. More respondents would feel comfortable using telehealth for routine follow-up care (81%) compared to initial assessment visits (59%) and first postoperative appointments (60%). Roughly 1 in 15 (7%) expressed difficulty with using telehealth; these respondents were more often unmarried, lower-income, and more medically infirm, and reported greater symptoms of depression. After multivariable adjustment, lower income and poor health were retained as predictors of difficulty with navigating telehealth technology (p = 0.027,p = 0.036, respectively). Conclusion: The majority of the public appears receptive to telehealth for orthopaedic care for both new patient visits and follow-up appointments. The finding that people with multiple chronic conditions and psychosocial needs struggle to engage with telehealth suggests that those who arguably stand to benefit the most from continued care are the ones being unintentionally left out of this digitization boom. Answer: Yes, people with chronic diseases are interested in using telehealth. A cross-sectional postal survey indicated that there was moderate interest in using phone-based (60.01%) and email/Internet-based (57.26%) telehealth among patients with chronic diseases, such as depression and raised cardiovascular disease (CVD) risk. However, there was very little interest in using social media for telehealth (16.99%) (PUBMED:24811914). Additionally, during the COVID-19 pandemic, patients and healthcare providers reported high levels of satisfaction with the use of telehealth, and many expressed willingness to continue using telehealth after the pandemic (PUBMED:33083791). Moreover, telehealth has been found to be feasible, satisfactory, and acceptable for delivering health screening, education, and care in various chronic conditions for remote and linguistically isolated populations, including First Nations and culturally and linguistically diverse (CALD) patients (PUBMED:35640452). In the context of Medicare services, telehealth nephrology services are among the professional physician or practitioner services that may be furnished via telehealth and paid under Medicare Part B (PUBMED:28224943). Furthermore, a health literacy intervention for older people with chronic disease delivered by telehealth showed potential to improve social support and some health literacy factors (PUBMED:31744701). For vulnerable populations, telehealth for chronic disease management has been examined, and strategies are recommended to educate and provide resources to promote equity among Non-Hispanic Black patients and other low-income minority populations (PUBMED:37052797). Dietitians Australia also supports the position that clients can receive high-quality and effective dietetic services via telehealth, which can help address health and service inequalities (PUBMED:32596950). Lastly, persons with multiple sclerosis (MS) in Australia commonly used telehealth during the COVID-19 pandemic and were generally satisfied with their experiences, suggesting that telehealth is a viable option for chronic disease management (PUBMED:38088346).
Instruction: Screening for distant metastases in patients with head and neck cancer: is chest computed tomography sufficient? Abstracts: abstract_id: PUBMED:16222201 Screening for distant metastases in patients with head and neck cancer: is chest computed tomography sufficient? Objectives/hypothesis: The detection of distant metastases during screening influences the choice of treatment in patients with head and neck squamous cell carcinoma. A previous study in the authors' institution showed that chest computed tomography (CT) scan was the most important screening technique. Different clinical risk factors in patients with head and neck squamous cell carcinoma for the development of distant metastases were identified. Study Design: Retrospective cohort study. Methods: To evaluate the authors' diagnostic strategy, the accuracy of screening for distant metastases with chest CT in 109 consecutive patients with head and neck squamous cell carcinoma with risk factors between 1997 and 2000 was retrospectively analyzed. Results: Preoperative screening with CT revealed 20 patients (18%) with lung metastases and 1 liver metastasis. Despite negative screening with chest CT, 9 (11%) patients developed distant metastases within 12 months during follow-up. Sensitivity of the chest CT was 73%; the specificity was 80%. Conclusion: Although chest CT frequently detects distant metastases, there seems to be a need for a more sensitive and whole-body screening technique. abstract_id: PUBMED:18177532 Effectiveness of X-ray and computed tomography screening for assessing pulmonary involvement in patients with head and neck squamous cell carcinoma. Objectives: To evaluate the benefits of chest computed tomography and X-ray as screening tools in patients with newly diagnosed head and neck squamous cell carcinoma, to determine the incidence of lung metastases or synchronous pulmonary lesions, and to evaluate factors associated with positive radiological findings. Design: Five-year, retrospective survey of all newly diagnosed cases of head and neck squamous cell carcinoma. Results: We included 102 patients (63 men and 39 women), with a mean age of 67 years (range 33-91 years). The incidence of pulmonary involvement was 17 per cent. The sensitivity and specificity of computed tomography were 100 and 89.8 per cent, respectively. For chest X-ray, the sensitivity was 35.7 per cent and the specificity 92.7 per cent. The accuracy of computed tomography was 91.5 per cent and that of chest X-ray 83.1 per cent. There was a clear correlation between higher nodal stage and larger tumour with the development of distant metastases. In patients with a positive chest computed tomography scan, 86 per cent had T3 or T4 tumours, in contrast to 38 per cent of those with a negative chest scan (p &lt; 0.05). In addition, 71 per cent of patients with positive findings had N2 or N3 nodal disease, compared with 29 per cent of those with negative findings (p &lt; 0.05). Conclusion: There is currently no consensus on the use of chest X-ray and computer tomography for screening newly diagnosed cases of head and neck squamous cell carcinoma. We recommend routine scanning of high-staged head and neck squamous cell carcinoma. The National Institute of Health and Clinical Excellence guidelines should be reappraised. abstract_id: PUBMED:11296049 Occult primary tumors of the head and neck: accuracy of thallium 201 single-photon emission computed tomography and computed tomography and/or magnetic resonance imaging. Objective: To determine the accuracy of thallium 201 single-photon emission computed tomography (thallium SPECT) and computed tomography and/or magnetic resonance imaging (CT/MRI) in the detection of occult primary tumors of the head and neck. Design: Study of diagnostic tests. Setting: National Cancer Institute, Amsterdam, the Netherlands. Patients And Methods: Thirty-two patients with a neck node metastasis of an epithelial tumor and negative findings by mirror examination at initial presentation were included in the study. Twenty-nine patients underwent thallium SPECT and CT/MRI before examination under general anesthesia (EUA). In 3 patients only thallium SPECT was performed before EUA. Histological confirmation of an occult primary tumor during EUA was used as the gold standard. Negative radiodiagnostic and nuclear findings in the upper aerodigestive tract in the presence of a primary carcinoma other than of the head and neck were interpreted as true-negative findings. Results: For thallium SPECT the following results were recorded: sensitivity, 67%; specificity, 69%; accuracy, 69%; positive predictive value, 33%; and negative predictive value, 90%. In 1 patient, thallium whole body scan indicated a primary carcinoma beyond the mucosal lining of the upper aerodigestive tract. The CT/MRI results were as follows: sensitivity, 71%; specificity, 73%; accuracy, 72%; positive predictive value, 45%; and negative predictive value, 89%. Conclusions: Thallium SPECT and CT/MRI showed comparable results for detection of occult primary tumors of the head and neck. A potential advantage of thallium SPECT is that it allows total body screening. abstract_id: PUBMED:8344847 Computed tomography evaluation of neck node metastases from nasopharyngeal carcinoma. Purpose: To study the role of computed tomography in the pre-therapy evaluation of nasopharyngeal carcinoma. Methods And Materials: The computed tomography of 119 new patients of nasopharyngeal carcinoma were evaluated independent of clinical findings for neck node metastases, and then compared with clinical findings. Contrast enhanced axial scans were obtained at 5 mm intervals with the infraorbitomeatal line parallel to the gantry. Scans were obtained from the supra-sellar cistern to the C5 or C6 vertebra for the evaluation of the base of skull, nasopharynx, paranasopharyngeal space and the upper and mid neck. Result: The present study confirmed the disparity of nodal extent documented by clinical palpation and computed tomography. Of the 37 patients who have no clinically palpable node (N0), computed tomography showed nodal involvement in 11 (29.7%) of them, and they were up-staged from N0 to N1. Computed tomography showed multiple or bilateral nodes in seven (58.3%) of the 12 patients with AJC N1 disease and they were hence up-staged to N2. All together, there were 28 (23.5%) patients who have no computed tomography evidence of nodal involvement by tumor. In agreement with clinical experience, the most commonly involved nodal groups were the upper internal jugular and upper spinoaccessary, followed by the lateral retropharyngeal. The percentage of nodes which were not clinically palpable was roughly the same for different regions (15-30%), except, as expected, that all the retropharyngeal nodes were not palpable. The risk of harboring retropharyngeal node was proportional to the size of the largest node in the ipsilateral neck. Conclusion: A significant proportion of patients with clinically negative neck (N0) or AJC N1 disease will be upstaged by computed tomography, thus supporting its routine use in pre-therapy evaluation of nasopharyngeal carcinoma. abstract_id: PUBMED:26490394 Comparing staging by positron emission tomography with contrast-enhanced computed tomography and by pathology in head and neck squamous cell carcinoma. Objective: This study aimed to evaluate the ability of positron emission tomography with contrast-enhanced computed tomography to correctly stage head and neck squamous cell carcinomas, in comparison with pathological staging. Methods: Positron emission tomography computed tomography was used to determine the tumour-node-metastasis classification and overall cancer stage in 85 head and neck squamous cell carcinoma patients who underwent pre-operative imaging using this modality and primary surgery between July 2010 and January 2013. Staging by positron emission tomography computed tomography was retrospectively compared with staging using pathological specimens. Agreement between imaging stage and pathological stage was examined by univariate and multivariate analysis both overall and for each primary tumour site. Results: This imaging modality was 87.5 per cent sensitive and 44.8 per cent specific in identifying regional cervical metastases, and had false positive and false negative rates of 18.8 per cent and 8.2 per cent, respectively. The positive predictive and negative predictive values were 75.4 per cent and 65.0 per cent, respectively. Univariate and multivariate analyses revealed a significant agreement between positron emission tomography computed tomography and pathological node classification in older patients and for the oral cavity primary tumour site. There was significant agreement between both methods in the overall classification only for tumours classified as T3 or greater. Conclusion: Positron emission tomography computed tomography should be used with caution for the pre-operative staging of head and neck cancers because of its high false positive and false negative rates. abstract_id: PUBMED:12239959 Staging with helical computed tomography and laparoscopy in pancreatic head cancer. Background/aims: Helical computed tomography provides valuable information about extent of pancreatic cancer. However, it remains difficult to detect small distant metastases. Laparoscopic examination is becoming standard for cancer staging. Methodology: Between 1995 and 1999, 45 patients with radiologically resectable pancreatic head cancer were analyzed retrospectively to clarify the indications for and role of staging laparoscopy. Computed tomography was examined for tumor size and spread to portal or superior mesenteric veins (PV) and celiac or superior mesenteric arteries (A). Results: There were 29 (64%) patients with resectable disease, 4 (9%) with localized unresectable, and 12 (27%) with metastatic (hepatic in 7 and peritoneal in 5). Patients with metastatic disease were more likely to present with abdominal pain, vomiting, and back pain than were patients with resectable disease (p &lt; 0.05). The mean tumor size and involvement of PV or A were greater in metastatic patients than in resectable patients (p &lt; 0.005 or p &lt; 0.01). The survival rate for patients with metastatic disease was lower than that for patients with resectable disease (p &lt; 0.0001). Conclusions: According to clinical features and computed tomography findings, laparoscopic exploration is recommended for cancer staging. Helical computed tomography and staging laparoscopy categorize patients into those with localized and those with metastatic disease which considerably correlated with survivals. abstract_id: PUBMED:2242260 Improving diagnostic accuracy of cervical metastases with computed tomography and magnetic resonance imaging. Elective neck dissection in patients with head and neck cancer continues to be controversial. The management of these patients would be greatly facilitated by improvements in predicting cervical metastases. Recent investigations have suggested that computed tomography and magnetic resonance imaging are more sensitive in detecting cervical metastases than physical examination. The Department of Otolaryngology at the Ohio State University Hospitals, Columbus, undertook a prospective study to compare the preoperative sensitivities of physical examination, computed tomography, and magnetic resonance imaging with pathologic findings in 27 patients undergoing neck dissections for head and neck cancer. The results indicate that computed tomography and magnetic resonance imaging were more sensitive (84% and 92%, respectively) than physical examination (75%), although the results did not achieve statistical significance. The sensitivity of combined computed tomography and magnetic resonance imaging was 90%. abstract_id: PUBMED:35507996 The impact of multimodality integrated positron emission tomography-computed tomography on improving the staging and management of head and neck malignancy: a cross- sectional study. Background: Clinical assessment of head and neck cancers is highly challenging owing to the complexity of regional anatomy and wide range of lesions. The diagnostic evaluation includes detailed physical examination, biopsy and imaging modalities for disease extent and staging. Appropriate imaging is done to enable determination of precise tumor extent and involvement of lymph nodes, and detection of distant metastases and second primary tumors. Objective: To evaluate the initial staging discrepancy between conventional contrasted computed tomography (CT) and 18F-fluorodeoxy-D-glucose positron emission tomography/computed tomography (18F-FDG PET/CT) and its impact on management plans for head and neck malignancies. Design And Setting: Prospective cross-sectional study in two tertiary-level hospitals. Methods: This study included 30 patients with primary head and neck malignant tumors who underwent contrasted computed tomography and whole-body 18F-FDG PET/CT assessments. The staging and treatment plans were compared with the incremental information obtained after 18F-FDG PET/CT. Results: 18F-FDG PET/CT was found to raise the stage in 33.3% of the cases and the treatment intent was altered in 43.3% of them, while there was no management change in the remaining 56.7%. 18F-FDG PET/CT had higher sensitivity (96% versus 89.2%) and accuracy (93% versus 86.7%) than conventional contrast-enhanced computed tomography. Conclusion: Our study demonstrated that 18F-FDG PET/CT had higher sensitivity and accuracy for detecting head and neck malignancy, in comparison with conventional contrast-enhanced computed tomography. 18F-FDG PET/CT improved the initial staging and substantially impacted the management strategy for head and neck malignancies. abstract_id: PUBMED:32765874 Diagnostic value of fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography in sublingual and submandibular salivary gland tumors. The aim of the present study was to compare the diagnostic accuracy of fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (F-18 FDG PET/CT) with that of conventional imaging studies (CIS), such as CT or magnetic resonance imaging (MRI), in the clinical diagnosis and staging of submandibular and sublingual salivary gland tumors. In addition, the data obtained were used to evaluate the significance of maximum standardized uptake value (SUVmax) in diagnosing benign or malignant lesions. For the present study, 18 patients with submandibular or sublingual neoplasms underwent F-18 FDG PET/CT imaging with accompanying CT or MRI. The diagnostic values from 43 F-18 FDG PET/CT scans and 28 CIS of the 18 patients were compared to the gold standard histopathological and/or cytopathological diagnosis. The results demonstrated that the diagnostic accuracy for predicting primary tumors was similar between F-18 FDG PET/CT and CIS. By contrast, PET/CT imaging was more accurate in detecting lymph node metastasis compared with CT or MRI (95.4 vs. 66.7%). F-18 FDG PET/CT had a sensitivity of 88.9% and a specificity of 97.1%, whereas CT or MRI had a sensitivity of 54.5% and a specificity of 75.0%. F-18 FDG PET/CT also enabled screening for distant metastasis, as observed in 10 cases in the present study. Furthermore, there were no significant differences in SUVmax between benign or malignant salivary gland lesions, as high glucose metabolism was also observed in benign tumors. In conclusion, F-18 FDG PET/CT provides more accurate diagnostic information for the evaluation of submandibular and sublingual salivary gland tumors as compared with CIS in terms of lymph node and distant metastasis. abstract_id: PUBMED:19245775 Diagnosis and evaluation of head and neck paragangliomas. Computed tomography and magnetic resonance imaging Cross-sectional imaging techniques [magnetic resonance imaging (MRI) and computed tomography (CT)] are essential in the presurgical evaluation of paragangliomas. These techniques provide data on tumoral hypervascularity, tumoral margins, invasion of neighboring structures, tumoral multifocality and the presence of metastasis. These imaging techniques are also essential for sequential volume assessment in non-surgical cases, postoperative evaluation and screening of familial forms. MRI is the technique of choice in paraganglioma assessment due to high contrast resolution among tissues, optimizing visualization of tumoral delimitation and the degree of local invasion. In addition, with the new dynamic MR angiography techniques, early contrast uptake can be confirmed, facilitating the diagnostic approach and demonstrating multifocal lesions. CT is required in tympanic paragangliomas and in those involving the skull base due to the efficacy of this technique in the evaluation of middle ear lesions and invasion of the skull base bone structures. Answer: Based on the provided abstracts, chest computed tomography (CT) is a commonly used screening tool for detecting distant metastases in patients with head and neck squamous cell carcinoma. However, the sufficiency of chest CT as the sole screening method for distant metastases is questioned. One study found that preoperative screening with chest CT revealed lung metastases in 18% of patients and one liver metastasis, but despite negative screening results, 11% of patients developed distant metastases within 12 months during follow-up. The sensitivity and specificity of chest CT were 73% and 80%, respectively, suggesting that there might be a need for a more sensitive and whole-body screening technique (PUBMED:16222201). Another study reported that chest CT had a sensitivity of 100% and specificity of 89.8% for detecting pulmonary involvement, which was significantly higher than chest X-ray. The study recommended routine scanning of high-staged head and neck squamous cell carcinoma and suggested that guidelines should be reappraised (PUBMED:18177532). Thallium 201 single-photon emission computed tomography (SPECT) and CT/MRI showed comparable results for the detection of occult primary tumors of the head and neck, with a potential advantage of thallium SPECT being that it allows total body screening (PUBMED:11296049). In the evaluation of neck node metastases from nasopharyngeal carcinoma, CT was found to upstage a significant proportion of patients with clinically negative neck or AJC N1 disease, supporting its routine use in pre-therapy evaluation (PUBMED:8344847). Positron emission tomography with contrast-enhanced computed tomography (PET/CT) was found to have high sensitivity but also high false positive and false negative rates for pre-operative staging of head and neck cancers, indicating that it should be used with caution (PUBMED:26490394). A study on the staging of pancreatic head cancer with helical CT and laparoscopy showed that helical CT provides valuable information, but small distant metastases may be difficult to detect without laparoscopic examination (PUBMED:12239959). Comparing the diagnostic accuracy of CT and MRI with pathologic findings, both imaging modalities were found to be more sensitive than physical examination for detecting cervical metastases (PUBMED:2242260).
Instruction: Patients with benign paroxysmal positional vertigo and cervical spine problems: is Epley's manoeuvre contraindicated, and is a proposed new manoeuvre effective and safer? Abstracts: abstract_id: PUBMED:20412614 Patients with benign paroxysmal positional vertigo and cervical spine problems: is Epley's manoeuvre contraindicated, and is a proposed new manoeuvre effective and safer? Background: Benign paroxysmal positional vertigo is one of the commonest peripheral vestibular causes of vertigo. The particle repositioning manoeuvre (Epley's manoeuvre) has become the 'gold standard' treatment for this disorder. Benign paroxysmal positional vertigo can affect any age group but is commoner in older patients. Cervical spine problems (e.g. spondylosis and disc prolapse) are commoner in this age group. Epley's manoeuvre necessitates passive neck movements. Such movements may not be wise in patients at risk of cervical spine fracture. Patients And Methods: This study included 40 patients complaining of vertigo and diagnosed as having benign paroxysmal positional vertigo. A new particle repositioning manoeuvre was designed for these patients, as an alternative to Epley's manoeuvre. Results: At one week review, 36 patients (90 per cent) reported total relief from vertigo. Three patients reported a major improvement in their vertigo, and their residual vertigo was relieved by performing the new manoeuvre again after two weeks. Further clinical reviews at one month, three months, six months and one year found that seven patients had suffered minor attacks of typical benign paroxysmal positional vertigo after three months. All seven were relieved of their symptoms after undergoing the new particle repositioning manoeuvre again. Conclusion: The proposed new manoeuvre is simple, effective and safe for treating patients with benign paroxysmal positional vertigo and cervical spine problems. abstract_id: PUBMED:37206840 Comparative Effectiveness Research: Betahistine add-on Therapy with Epley's Manoeuvre Versus Epley's Manoeuvre Alone in Treating Posterior BPPV Patients. Introduction: Benign paroxysmal positional vertigo is one of the common vestibular disorders encountered in ENT clinics with accompanying vertigo. Study to evaluate the additive effects of betahistine on Epley's manoeuvre in treating posterior BPPV patients. Material And Methods: Prospective study was conducted on 50 patients of posterior BPPV diagnosed by the Dix Hallpike test. Subjects were divided into Group A which received the Betahistine therapy along with the canalith repositioning technique (Epley's manoeuvre) and the second group (Group B) which received the Epley's manoeuvre alone. The patients were assessed at 1 week and 4 weeks by Visual Analogue Scale (VAS), Dizziness Handicap Inventory (DHI) and Short Form 36 (SF-36). Results: At the end of 4 weeks, 2 patients in group A (E + B) had positive Dix-Hallpike and 23(92%) had negative Dix-Hallpike, while 11 patients in group B (E) had positive Dix-Hallpike and 14 (56%) had negative Dix-Hallpike with P value &lt; 0.001. The Mean baseline (T0) Visual Analogue Scale (VAS) score in group A (E + B) was 8.60 ± 1.080 and in group B (E) was 8.92 ± 0.996. Post-treatment VAS score was significantly lower in both the groups and was significantly lower in group A(E + B) than in group B (E) (0.680 ± 1.930 vs. 3.96 ± 3.587, respectively, p-value &lt; 0.001). The Mean baseline (T0) Dizziness Handicap Inventory (DHI) scores were similar in groups A and B (77.36 ± 9.49 vs. 80.00 ± 8.9, respectively, p = 0.271). After treatment, the DHI values were significantly lower in both groups. Group A had a better DHI score than group B (10.56 ± 17.12 vs. 44.72 ± 27.35, p &lt; 0.001). The mean baseline (T0) Short Form 36 (SF-36) scores were also similar in groups A and B (19.53 ± 6.85 vs. 18.79 ± 5.50, p = 0.823). Post-treatment of 4 weeks, the SF-36 score significantly improved in both the groups and significantly improved in group A than group B (84.27 ± 17.28 vs. 46.53 ± 24.53, p &lt; 0.001). Conclusion: Betahistine therapy in conjunction with Epley's manoeuvre leads to better symptom control and is more effective than Epley's alone in treating BPPV patients. abstract_id: PUBMED:37206749 Epleys Versus Semonts Manoeuvre in Posterior Canal Benign Paroxysmal Positional Vertigo. Introduction: Benign paroxysmal positional vertigo (BPPV), the most common vestibular disorder is characterized by short-lived episodes of rotatory vertigo in association with rapid changes in head position. Diagnosis of BPPV is clinical. The principle of treatment of BPPV involves maneuvers comprising of head movements, in order to bring the free debris from the semicircular canal back to the utricle. In this study we tried to evaluate and compare Epleys and the Semonts manoeuvre in the treatment of Benign paroxysmal positional vertigo of the posterior semicircular canal in terms of subjective and objective improvement. Materials and methods: Study Design: This prospective randomized study included 200 vertigo patients attending ENT OPD of tertiary care centre with Dix Hallpike positive status. (September 2017 -August 2019). Objective improvement was compared in terms of Dix Hallpike positivity between both groups on weekly follow up for four weeks. Subjective improvement was compared in both the groups by dizziness Handicapp index ( DHI) on follow-ups. Results: Total 200 patients formed part of study, with 100 in each group. On comparing Dix Hallpike positivity between both groups on weekly follow ups there was no significant difference between both the groups. DHI, compared between both groups showed that Semonts Maneouver was significantly better. Conclusion: In patients of BPPV both Epleys and Semonts are equally effective objectively. However the subjective improvement was better in patients on whom Semonts manoeuvre was performed. Supplementary Information: The online version contains supplementary material available at 10.1007/s12070-023-03624-5. abstract_id: PUBMED:31275830 Vestibular Evoked Myogenic Potentials After Epleys Manoeuvre Among Individuals with Benign Paroxysmal Positional Vertigo. VEMP abnormalities in individuals with BPPV are often reported to be associated with utricle and saccule degeneration. The aim of the present study is to assess the frequency of VEMP abnormalities using vestibular evoked myogenic potentials in individuals with Posterior canal Benign Paroxysmal Positional Vertigo (BPPV) after Epley's manoeuvre. 36 individuals (36 ears) with definite posterior canal BPPV and 36 healthy controls were considered for the present study. All the them underwent otoscopic examination, Dix-Hallpike manoeuvre to diagnose posterior canal BPPV. Further Audiological Evaluation including pure tone audiometry was carried out to rule out vestibular disorders associated with hearing loss. Epley's manoeuvre was performed on all individuals with BPPV by an experienced otorhinolaryngologist. Cervical and Ocular Vestibular Evoked Myogenic Potentials (VEMP) were used to investigate the saccule and utricle functions following Epley's manoeuvre. Cervical VEMP and ocular VEMP abnormalities were observed in 8/36 (22.22%) and 18/36 (50%) affected ears with BPPV respectively. Cervical VEMP responses were reduced in amplitude among 1/36 (2.77%) and absent in 7/36 (19.44%) of affected ears with BPPV. Ocular VEMP responses were reduced in amplitude on 11/36 (30.55%), followed by absent responses in 5/36 (13.88%) ears with BPPV. Two patients with posterior canal BPPV i.e., 4/64 (5.55%) ears had bilateral absence of ocular VEMP responses. Two ears with BPPV 2/36 (5.55%) had absence of both cervical and ocular VEMP responses in BPPV affected ear. T test showed significant difference (p &lt; 0.01) in the amplitude of ocular VEMP among posterior canal BPPV individuals when compared to cervical VEMP. The present study highlights individuals with Posterior canal BPPV may have otoconia dislodgement or macular degeneration of utricle, saccule, both utricle and saccule unilaterally, or bilaterally. VEMP may be useful in evaluating degeneration of both otolith organs associated with BPPV. abstract_id: PUBMED:36742929 Epley's Manoeuvre: A Single Line Treatment for Posterior Semicircular Canal Benign Paroxysmal Positional Vertigo. Benign Paroxysmal Positional Vertigo (BPPV) is the commonest cause of peripheral vertigo. Displaced free floating otoconia in the semicircular canals are responsible for brief attacks of vertigo and nystagmus. Epley's manoeuvre relocates these particles into the utricle. Here we studied the efficacy of Epley's manoeuvre in posterior semicircular canal BPPV without labyrinthine sedatives. 120 patients presented with positional vertigo were included in the study based on positive Dix- hallpike test from August 2018 to July 2019. These patients were treated with only the Epley's manoeuvre and followed up for 6 months. Patients who were previously on labyrinthine sedatives were advised to stop them and treated with only the Epley's manoeuvre. In our study of 120 patients, mean age was 43.5 yrs. Females (52.5%) were commonly affected. Left side posterior semicircular involvement was more than the right side. Epley's manoeuvre had 90% efficacy at 1st week and 100% efficacy at 4th week. Duration of vertigo had significant association (p &lt; 0.01) with the number of sessions required. There was significant improvement in the duration of vertigo attack (p &lt; 0.0001) and frequency of attack (p &lt; 0.0001) before and after the manoeuvre. Epley's manoeuvre lead to significant (p &lt; 0.001) improvement in the quality of life of affected patients measured by DHI scoring. In our 6 months follow up, 10 recurrences occurred having significant (p &lt; 0.01) association with the duration of vertigo attack. Only Epley's manoeuvre without any labyrinthine sedatives is an effective treatment for posterior semicircular canal BPPV patients. It also improves the quality of life of affected patients. abstract_id: PUBMED:23120204 Epley's canalith-repositioning manoeuvre for benign paroxysmal positional vertigo. The efficacy of the Epley's canalith-repositioning manoeuvre in the treatment of BPPV was assessed in this prospective study of 62 patients. Patients were selected based on symptoms of positional vertigo and positive Dix-Hallpike's positional test. Patients were divided into two groups; first group comprising 34 patients underwent Epley's manoeuvre alone where as the other group comprising 28 patients underwent Epley's manoeuvre along with mastoid oscillator. At the end of 1 month patients were assessed subjectively by visual analogue scale (VAS) and objectively by Dix-Hallpike's positional test. On VAS, 85.7% patients had complete resolution of symptoms of BPPV in both the groups. Objectively 88.2% did not have positional nysfagmus after 1 month in first group whereas in the second group 86% had complete response at the end of 1 month of therapy. Follow up of 6 months could be done in 38 patients, out of which 7 (18.4%) had recurrence of their symptom of vertigo and positive Dix-Hallpike's positional test, whereas one patient continued to have no relief by Epley's manoeuvre. There was no difference in subjective and objective parameters even when CRP was performed using mastoid vibrator. abstract_id: PUBMED:30097716 Therapeutic efficacy of the Galletti-Contrino manoeuvre for benign paroxysmal positional vertigo of vertical semicircular canals in overweight subjects. Objective: To compare the Galletti-Contrino manoeuvre with the more widely used Semont-Toupet in overweight subjects presenting with benign paroxysmal positional vertigo (BPBV) of vertical semicircular canals (posterior and anterior canals). Study Design: Prospective cohort study. Patients: 204 patients (BMI range 25-30) with a diagnosis of BPPV of vertical semicircular canals were randomly divided in two groups treated with two different maneuvers: Galletti-Contrino (Group A) and Semont-Toupet manoeuvre (Group B). The results were compared with those obtained from a control group (204 non-overweight subjects with BPV of vertical semicircular canals.) INTERVENTION: Galletti Contrino/ Semont Toupet manoeuvres. Main Outcome Measure(s): Liberatory nystagmus or vertigo after maximum 2 maneuvers. Vertigo and dizziness intensity scores (Visual analogue scale VAS 0-10) from day 0 to day 5 following the repositioning manoeuvre were also recorded in responsive patients. Results: While in non-overweight subjects no significant difference comparing the effectiveness of the two manoeuvres was found, liberatory nystagmus and vertigo were more frequently observed after Galletti Contrino manoeuvre in overweight subjects; this difference was statistically significant when posterior canals were involved (P &lt; 0.03). Vertigo and dizziness VAS scores reduced significantly from day 0 to day 5 after therapy in all groups. A more significant reduction of dizziness VAS was recorded in patients undergoing Galletti-Contrino manoeuvre at days 4-5 (P &lt; 0.005). Conclusion: Galletti-Contrino manoeuvre seems to be significantly more effective than Semont-Toupet manoeuvre in the treatment of BPPV of posterior semicircular canal and may be preferential in patients with limited body movements. abstract_id: PUBMED:31750147 Comparative Study of Management of BPPV (Benign Paroxysmal Positional Vertigo) with only Drugs Versus Drugs Plus Epley Manoeuvre. Benign paroxysmal positional vertigo (BPPV) is the most common peripheral vestibular disorder, accounting for 20% of all vertigo cases. Idiopathic BPPV is most common between the ages of 50 and 70, although the condition is found in all age groups. The importance of early diagnosis and treatment can lead to a much improved quality of life for patients afflicted by this ailment. It is presently common for physicians to treat these patients mainly with benzodiazepines, antihistamines, and anticholinergic medications, especially if the history and physical is consistent with BPPV. This method of treatment has had questionable success. Several reviews of the management of vertigo have shown that no medication in current use has well established curative or prophylactic value or is suitable for long-term treatment. Epleys manoeuvre is also used in the treatment of BPPV. This manoeuvre relocates free floating particles from the affected semi-circular canals back into utricle, thus relieving the symptoms of vertigo. The purpose of this study is to compare the efficacy of Epleys manoeuvre with conventional drug therapy versus conventional therapy alone in patients who present with vertigo. The purpose of this study to evaluate and examine two methods of treatment. abstract_id: PUBMED:23120716 Evaluation of vestibular exercises in the management of benign paroxysmal positional vertigo. Aim: (1) To evaluate the role of vestibular exercises in the management of benign paroxysmal positional vertigo (BPPV). (2) To compare the three vestibular exercises (Semont's manoeuvre, Epley's manoeuvre or Brandt-Daroff exercises) in the treatment of BPPV. Settings And Design: Prospective randomised study Methods And Material: Fourty-eight patients diagnosed as BPPV in the study period November 2005-2006 were included in the study. They were randomly assigned to one of three groups, and the corresponding manoeuvre was done on them. Follow-up were at 2 weeks and 3 months. Results: Thirty-five of the 48 patients (72.91%) reported relief of symptoms at 2 weeks and no recurrence of symptoms at 3 months. Result was best for the group which was subjected to the Epley manoeuvre (87.5%). Conclusions: Performance of any of the three manoeuvres can be expected to give good results in the management of BPPV. The Epley manoeuvre appears to be better than the other two in terms of relief of symptoms and prevention of recurrence. abstract_id: PUBMED:30510098 Effectiveness of the Epley manoeuvre in posterior canal benign paroxysmal positional vertigo: a randomised clinical trial in primary care. Background: Evidence on the effectiveness of the Epley manoeuvre in primary care is scarce. Aim: To evaluate effectiveness at 1 week, 1 month, and 1 year of a single Epley manoeuvre versus a sham manoeuvre in primary care. Design And Setting: Multicentre, double-blind randomised controlled trial in two primary care practices in Spain from November 2012 to January 2015. Method: Patients were ≥18 years diagnosed with subjective or objective posterior benign paroxysmal positional vertigo (vertigo only, or vertigo and nystagmus after a Dix-Hallpike test [DHT]). The intervention group received the Epley manoeuvre, and the control group received a sham manoeuvre. Betahistine was prescribed following the same regimen in both groups. The main outcome measures were the DHT result classified as negative (neither vertigo nor nystagmus) or positive. Positive results were further divided into a positive result for both vertigo and nystagmus (positive DHT with nystagmus), and a positive result for vertigo only (positive DHT without nystagmus); self-reported resolution of vertigo; and self-reported severity of vertigo evaluated on a 10-point Likert scale (10 = worst imaginable vertigo). Results: In total, 134 patients were randomised to either the intervention group (n = 66) or the sham group (n = 68). The intervention group showed better results in the unadjusted analyses at 1 week, with a lower rate of positive DHT with nystagmus (P = 0.022). A positive baseline DHT with nystagmus was associated with a reduction in vertigo severity (marginal effect for 10-point Likert-like question -1.73, 95% confidence interval [CI] = -2.95 to -0.51) and better positive DHT rates in the intervention group (adjusted odds ratio 0.09, 95% CI = 0.01 to 0.92) in the multivariate analyses. Conclusion: A single Epley manoeuvre performed in primary care is an effective treatment for reversing a positive DHT and reducing vertigo severity in patients with baseline nystagmus in the DHT. Answer: Patients with benign paroxysmal positional vertigo (BPPV) who also have cervical spine problems may be concerned about the safety of Epley's manoeuvre, as it involves passive neck movements which could potentially be harmful in those at risk of cervical spine fracture (PUBMED:20412614). However, a study included 40 patients with BPPV and cervical spine issues and designed a new particle repositioning manoeuvre as an alternative to Epley's manoeuvre. The results showed that 90% of patients reported total relief from vertigo after one week, and those with residual vertigo experienced relief after performing the new manoeuvre again after two weeks. Further reviews up to one year found that patients who suffered minor attacks of BPPV were relieved of their symptoms after undergoing the new manoeuvre again (PUBMED:20412614). This suggests that the proposed new manoeuvre is effective and safe for treating patients with BPPV who have concurrent cervical spine problems, indicating that while Epley's manoeuvre may be contraindicated in these patients due to the risk associated with neck movements, alternative manoeuvres can be safely and effectively utilized.
Instruction: Does provision of targeted health care for the unemployed enhance re-employment? Abstracts: abstract_id: PUBMED:25416020 Does provision of targeted health care for the unemployed enhance re-employment? Background: There is increasing pressure to develop services to enhance the health of the workforce on the periphery of the labour market. Health promotion among unemployed people may improve their health but also to increase their employability. We tested whether re-employment can be enhanced with a health care intervention targeted at the unemployed. Methods: A 3-year follow-up, controlled design was used. The data were collected among unemployed people (n = 539) participating in active labour market policy measures. The baseline survey included established habitually used health questionnaires. The intervention consisted of three health check-ups and on-demand health services. Logistic regression analyses were used to obtain the odds ratios of the intervention group versus control group for being re-employed at follow-up. Health-related differences in the re-employment effects of the intervention were assessed through the significance of the interaction in the regression analyses. Results: The intervention did not serve to improve re-employment: at follow-up 50% of both the intervention group and the control group were at work. In further analyses, the odds ratios showed that the intervention tended to improve re-employment among participants in good health, whereas an opposite tendency was seen among those with poor health. The differences, however, were statistically non-significant. Conclusion: The experimental health service did not show any beneficial effects on re-employment. Nevertheless, rather than considering any particular health care as unnecessary and ineffective, we would like to stress the complexity of providing health services to match the diversity of the unemployed. abstract_id: PUBMED:35382786 Health-related selection into employment among the unemployed. Background: Successful transitions from unemployment to employment are an important concern, yet little is known about health-related selection into employment. We assessed the association of various physical and psychiatric conditions with finding employment, and employment stability. Methods: Using total population register data, we followed Finnish residents aged 30-60 with an unemployment spell during 2009-2018 (n = 814,085) for two years from the onset of unemployment. We predicted any, stable, and unstable employment by health status using Cox proportional hazards models. The data on specialized health care and prescription reimbursement were used to identify any alcohol-related conditions and poisonings, psychiatric conditions and self-harm, injuries, and physical conditions. We further separated physical conditions into cancer, diabetes, heart disease, and neurological conditions, and psychiatric conditions into depression, anxiety disorders and substance use disorders. Results: The likelihood of any employment was lower among those who had any of the assessed health conditions. It was lowest among those with alcohol-related or psychiatric conditions with an age-adjusted hazard ratio of 0.45 (95% confidence interval 0.44, 0.46) among men and 0.39 (0.38, 0.41) among women for alcohol-related and 0.64 (0.63, 0.65) and 0.66 (0.65, 0.67) for psychiatric conditions, respectively. These results were not driven by differences in socioeconomic characteristics or comorbidities. All the included conditions were detrimental to both stable and unstable employment, however alcohol-related and psychiatric conditions were more harmful for stable than for unstable employment. Conclusions: The prospects of the unemployed finding employment are reduced by poor health, particularly alcohol-related and psychiatric conditions. These two conditions may also lead to unstable career trajectories. The selection process contributes to the health differentials between employed and unemployed people. Unemployed people with health problems may therefore need additional support to improve their chances of employment. abstract_id: PUBMED:28397017 Influence of an Interdisciplinary Re-employment Programme Among Unemployed Persons with Mental Health Problems on Health, Social Participation and Paid Employment. Purpose To evaluate the influence of an interdisciplinary re-employment programme on labour force participation and perceived health among unemployed persons with common mental health problems. In addition, the influence of entering paid employment on self-rated physical health and mental health was investigated. Methods In this quasi-experimental study with 2 years follow up, 869 persons were enrolled after referral to an interdisciplinary re-employment programme (n = 380) or regular re-employment programme (n = 489). The propensity score technique was used to account for observed differences between the intervention and control group. The intervention programme was provided by an interdisciplinary team, consisting of mental health care professionals as well as employment specialists. Mental health problems were addressed through cognitive counselling and individual tailored job-search support was provided by an employment professional. Primary outcome measures were paid employment and voluntary work. Secondary outcome measures were self-rated mental and physical health, measured by the Short Form 12 Health Survey, and anxiety and depressive symptoms, measured by the Kessler Psychological Distress Scale. Changes in labour force participation and health were examined with repeated-measures logistic regression analyses by the generalized estimating equations method. Results The interdisciplinary re-employment programme did not have a positive influence on entering employment or physical or mental health among unemployed persons with mental health problems. After 2 years, 10% of the participants of the intervention programme worked fulltime, compared to 4% of the participants of the usual programmes (adjusted OR 1.65). The observed differences in labour force participation were not statistically significant. However, among persons who entered paid employment, physical health improved (+16%) and anxiety and depressive symptoms decreased (-15%), whereas health remained unchanged among persons who continued to be unemployed. Conclusions Policies to improve population health should take into account that promoting paid employment may be an effective intervention to improve health. It is recommended to invest in interdisciplinary re-employment programmes with a first place and train approach. abstract_id: PUBMED:8146505 Unemployment, re-employment and the use of primary health care services. Objective: To examine the association between re-employment and the use of primary health care services. Design: A cohort study of employed teenagers and a 'natural experiment' among adult long-term unemployed with an intervention group of re-employed, a control group of not re-employed, and a matched control group of permanently employed. Setting: An industrial town (pop. 25,000) in southwestern Finland. Participants: 84 teenage and 143 adult long-term unemployed and 82 permanently employed individuals. Main Outcome Measure: Frequency of primary health care visits. Results: Primary health care visits increased during re-employment among both teenage and adult re-employed (n = 82), but not in a control group of 61 long-term unemployed who were not employed under the re-employment scheme. A comparison with persons in regular wage employment indicated that visits increased from a low to a normal level. Conclusion: To explain the results, reference is made to the need for sickness absence certificates and to epidemiological factors. The findings also raise serious questions with regard to the ability of the health care system to reach, let alone help, unemployed citizens. abstract_id: PUBMED:31132120 Improving employment opportunities of the unemployed by health and work ability promotion in Finland. The study explored the impact of self-rated health (SRH) and work ability on employment, and examined the effects of meaningfulness, social networks and physical activity on health and work ability with a population-based cross-sectional survey data (n = 12 729) of full-time employed and unemployed workers. SRH, perceived work ability and levels of perceived meaningfulness, social networks and physical activity were all significantly better among employed workers compared to unemployed people. The results showed that work ability mediated the effect of health on employment and predicted employment. Meaningfulness, social networks and physical activity had direct positive effects on health and work ability, and also on employment. The results indicated that good health is a substantial element of good work ability, which in turn is needed to maintain or to get a job. Our findings suggested that perceiving one's life as meaningful, having a strong social network and being physically active are important predictors of good health as well as of good work ability. These could be contributing factors to re-employment among unemployed people. abstract_id: PUBMED:27878692 Medical employment growth, unemployment, and the opportunity cost of health care. This policy note examines the relationship between the growth in the share of the workforce in medical care and the shares of workers who are unemployed, working in services or government employment, or working elsewhere in the economy. These changes provide measures of the opportunity cost of higher medical care spending, the majority of which is on labor. Using state data over the period 1990-2010, we find that, in years of high economy-wide unemployment, growth in medical employment in a state reduces the unemployment rate significantly; it does not appear to displace employment in other services or government employment. In periods of low economy wide-unemployment, the growth in the medical employment share does not reduce unemployment. We argue that the opportunity cost of higher medical care employment may sometimes not be so high in terms of real labor resources, nor in terms of employment for needed government services. abstract_id: PUBMED:24004786 Muscular fitness and re-employment among unemployed job seekers in Finland: a three-year follow-up study. Background: Research suggests that health is an important determinant of re-employment. However, the association between specific physical health components, such as muscular fitness, and re-employment has not been well studied. Objective: In a three-year follow-up of unemployed people, we investigated whether muscular fitness is associated with re-employment. Methods: In 2002-2005, unemployed people (n=130) who were participants in active labour market policy measures were recruited to participate in a Career Health Care (CHC) project. Data on background characteristics and muscular fitness were collected in 2002/2003 using a questionnaire and laboratory measurements. Clientship in the CHC lasted for three years, and at its end, participants' employment status was assessed by a questionnaire. Logistic regression was used to investigate the association between baseline muscular fitness (repetitive lift, sit-ups, and squats) and re-employment after three years. Results: The probability of re-employment within three years significantly increased with improved performances on muscular fitness tests (lifting, sit-up, and squatting) after adjustment for age and gender. Conclusions: Unemployed people with good muscular fitness had increased probability for re-employment as compared to those with poor muscular fitness. abstract_id: PUBMED:31967870 Employment and Health Among Recently Incarcerated Men Before and After the Affordable Care Act (2009-2017). Objectives. To explore whether and how the Affordable Care Act (ACA) affects the relationship between employment and health insurance coverage, health care utilization, and health outcomes among recently incarcerated men aged 18 to 64 years in the United States.Methods. With data from the National Survey on Drug Use and Health (NSDUH), we used a difference-in-differences approach to compare changes in outcomes by employment status among recently incarcerated men.Results. Uninsurance declined significantly among recently incarcerated men after ACA implementation. As the uninsured rate of unemployed men fell below that of their employed counterparts, the ACA helped to fully eliminate the effect of employment on insurance coverage among recently incarcerated men. The employment gap in diabetes widened after ACA implementation as unemployed men saw significant increases in diagnosed diabetes. Employment disparities in hospital visits, diagnosed hypertension, and reported mental illness also declined in the period following ACA implementation, but these changes were not statistically significant.Conclusions. These findings highlight how the ACA, by providing a new route to health care, reduces the confounding forces associated with employment that are linked to both incarceration and health. abstract_id: PUBMED:10421716 Health and re-employment in a five-year follow-up of long-term unemployed. Objectives: To study health and re-employment in a five-year follow-up of a community sample of long-term unemployed people. Methods: A random sample registered for more than 12 weeks at the labour offices in Grenland, Norway, were given medical check-ups in 1988 and 1990, and a postal questionnaire in 1993. Results: Only 37% were employed five years after the first spell of long-term unemployment. Recovery after re-employment was demonstrated, indicating that unemployment causes poor mental health. Health-related selection to continuous unemployment was only found for medical diagnoses. The unemployed who might be selected to continuous unemployment because of poor health at the two-year follow-up seemed to deteriorate most. Conclusion: The high morbidity of mental health problems among unemployed people is mostly explained by the causal hypothesis. This is a challenge to preventive medicine. Sick unemployed people should be given special attention. abstract_id: PUBMED:31912888 Health-improving interventions for obtaining employment in unemployed job seekers. Background: Unemployment is associated with decreased health which may be a reason or a consequence of becoming unemployed. Decreased health can inhibit re-employment. Objectives: To assess the effectiveness of health-improving interventions for obtaining employment in unemployed job seekers. Search Methods: We searched (3 May 2018, updated 13 August 2019) the Cochrane Central Register of Controlled Trials, MEDLINE, Scopus, PsycINFO, CINAHL, SocINDEX, OSH Update, ClinicalTrials.gov, the WHO trials portal, and also reference lists of included studies and selected reviews. Selection Criteria: We included randomised controlled trials (RCTs) of the effectiveness of health-improving interventions for obtaining employment in unemployed job seekers. The primary outcome was re-employment reported as the number or percentage of participants who obtained employment. Our secondary outcomes were health and work ability. Data Collection And Analysis: Two authors independently screened studies, extracted outcome data, and assessed risk of bias. We pooled study results with random-effect models and reported risk ratios (RRs) with 95% confidence intervals (CIs) and assessed the overall quality of the evidence for each comparison using the GRADE approach. Main Results: We included 15 randomised controlled trials (16 interventions) with a total of 6397 unemployed participants. Eight studies evaluated therapeutic interventions such as cognitive behavioural therapy, physical exercise, and health-related advice and counselling and, in seven studies, interventions were combined using therapeutic methods and job-search training. Therapeutic interventions Therapeutic interventions compared to no intervention may increase employment at an average of 11 months follow-up but the evidence is very uncertain (RR = 1.41, 95% CI 1.07 to 1.87, n = 1142, 8 studies with 9 interventions, I² = 52%, very low-quality evidence). There is probably no difference in the effects of therapeutic interventions compared to no intervention on mental health (SMD 0.12, 95% CI -0.06 to 0.29, n = 530, 2 studies, low-quality evidence) and on general health (SMD 0.19, 95% CI -0.04 to 0.41, n = 318, 1 study, moderate-quality evidence). Combined interventions Combined interventions probably increase employment slightly compared to no intervention at an average of 10 months follow-up (RR 1.12, 95% CI 1.06 to 1.20, n = 4101, 6 studies, I² = 7%). There were no studies that measured work-ability, adverse events, or cost-effectiveness. Authors' Conclusions: Interventions combining therapeutic methods and job-search training probably have a small beneficial effect in increasing employment. Therapeutic interventions may have an effect on re-employment, but we are very uncertain. Therapeutic interventions may not improve health in unemployed job seekers. Large high-quality RCTs targeting short-term or long-term unemployed people are needed to increase the quality of the evidence. A cost-effectiveness assessment is needed of the small beneficial effects. Answer: The provision of targeted health care for the unemployed does not consistently enhance re-employment, based on the available evidence from the abstracts provided. A study with a 3-year follow-up controlled design found that an experimental health service consisting of health check-ups and on-demand health services did not show any beneficial effects on re-employment among the unemployed. At follow-up, 50% of both the intervention group and the control group were at work, and the intervention tended to improve re-employment among participants in good health, whereas an opposite tendency was seen among those with poor health, although the differences were statistically non-significant (PUBMED:25416020). Another study evaluating an interdisciplinary re-employment program among unemployed persons with mental health problems also did not find a positive influence on entering employment or on physical or mental health. However, among persons who entered paid employment, physical health improved and anxiety and depressive symptoms decreased, suggesting that promoting paid employment may be an effective intervention to improve health (PUBMED:28397017). A study on muscular fitness and re-employment found that unemployed people with good muscular fitness had an increased probability for re-employment compared to those with poor muscular fitness (PUBMED:24004786). Additionally, a study on health-improving interventions for obtaining employment in unemployed job seekers found that combined interventions of therapeutic methods and job-search training probably have a small beneficial effect in increasing employment, while therapeutic interventions alone may have an effect on re-employment, but the evidence is very uncertain (PUBMED:31912888). Overall, while some health-related interventions may have a small beneficial effect on re-employment, the evidence does not strongly support the idea that targeted health care for the unemployed consistently enhances re-employment. The complexity of providing health services to match the diversity of the unemployed and the need for more high-quality randomized controlled trials targeting this population are highlighted (PUBMED:25416020, PUBMED:31912888).
Instruction: Assessing bioequivalence of antiepileptic drugs: are the current requirements too permissive? Abstracts: abstract_id: PUBMED:24934550 Assessing bioequivalence of antiepileptic drugs: are the current requirements too permissive? Purpose: In order to evaluate the permissiveness of current bioequivalence requirements for antiepileptic drugs we investigated how accurate Cmax and AUC0-t of generic antiepileptic drugs approved in Brazil are in comparison to reference products. Methods: Data collected from assessment reports of approved bioequivalence studies archived in the Brazilian regulatory agency in 2007-2012 were: geometric mean ratios and 90% confidence intervals (CI) for Cmax and AUC0-t, intra-subject variability (CV) of Cmax and AUC0-t and number of subjects. Results: The average difference in Cmax and AUC0-t between generic and reference products was 5% and 3%, respectively. Maximum deviation from 1.00 of the CI of Cmax can achieve 15-20% (demonstrated in 27% of studies); for AUC0-t, 25% of studies showed the deviation can be &gt;10%. All studies that used adequate number of subjects for a 90% CI of 0.90-1.11 complied with it for AUC0-t, except one of carbamazepine, but only 33% complied with it for both AUC0-t and Cmax. The CV was strongly correlated to the maximum CI deviation for AUC0-t (CV of approximately 15% corresponding to deviation of 10%). Studies that presented maximum CI deviation ≤ 10 % together with CV ≤ 15% for AUC0-t represented 65% of the total. Weaker correlation was observed for Cmax and no correlation was seen between maximum CI deviation and number of subjects. Conclusions: Modification in legislation for bioequivalence of antiepileptic drugs is suggested, not only with constraint of AUC0-t 90% CI to 0.90-1.11, but also with limitation of the CV to 15%, as to assure similar variance in pharmacokinetics and diminish the risk of critical plasma-level fluctuation when switching between generic and reference formulations. Although most generics presented differences ≤ 10% in AUC0-t compared to their references, some narrow therapeutic index drugs displayed differences that could be clinically significant after product substitution. abstract_id: PUBMED:28929357 Risk-Based Bioequivalence Recommendations for Antiepileptic Drugs. Purpose Of Review: This review summarizes the current FDA practice in developing risk- and evidence-based product-specific bioequivalence guidances for antiepileptic drugs (AEDs). Recent Findings: FDA's product-specific guidance (PSG) for AEDs takes into account the therapeutic index of each AED product. Several PSGs for AEDs recommend fully replicated studies and a reference-scaled average bioequivalence (RS-ABE) approach that permit the simultaneous equivalence comparison of the mean and within-subject variability of the test and reference products. The PSGs for AEDs published by FDA reflect the agency's current thinking on the bioequivalence studies and approval standards for generics of AEDs. Bioequivalence between brand and generic AED products demonstrated in controlled studies with epilepsy patients provides strong scientific support for the soundness of FDA bioequivalence standards. abstract_id: PUBMED:25473327 Study on requirements of bioequivalence for registration of pharmaceutical products in USA, Europe and Canada. The present study was aimed to study the requirements of bioequivalence for the registration of pharmaceutical products in the USA, Europe and Canada. Before going into bioequivalence studies it is essential for the pharmaceutical industry to study the guidelines of bioequivalence for the respective country where the industry wants to market its products and thus enter into generic market. This study reviews the requirements of bioequivalence with study parameters such as study design, fasting or fed state studies, volunteers recruitment, study dose, sampling points, analytical method validation parameters, moieties to be measured in plasma, pharmacokinetic parameters, criteria for bioequivalence, GCP requirements etc, which are needed for the pharmaceutical industry to carry out bioequivalence studies and to file ANDA. Test products and reference products are needed for this study. Test products are usually manufactured by a sponsor and reference products are provided by the government laboratories of the respective countries. Sampling points also vary with respect to the regulatory guidelines of these countries. All these countries follow ICH GCP guidelines. The criterion of bioequivalence for these countries is 90% CI 80-125% for C max, AUC t , AUC0-∞. abstract_id: PUBMED:20384761 Generic products of antiepileptic drugs: a perspective on bioequivalence and interchangeability. Most antiepileptic drugs (AEDs) are currently available as generic products, yet neurologists and patients are reluctant to switch to generics. Generic AEDs are regarded as bioequivalent to brand AEDs after meeting the average bioequivalence criteria; consequently, they are considered to be interchangeable with their respective brands without loss of efficacy and safety. According to the U.S. Food and Drug Administration (FDA) the present bioequivalence requirements are already so rigorous and constrained that there is little possibility that generics that meet regulatory bioequivalence criteria could lead to therapeutic problems. So is there a scientific rationale for the concerns about switching patients with epilepsy to bioequivalent generics? Herein we discuss the assessment of bioequivalence and propose a scaled-average bioequivalence approach where scaling of bioequivalence is carried out based on brand lot-to-lot variance as an alternative to the conventional bioequivalence test as a means to determine whether switching patients to generic formulations, or vice versa, is a safe and effective therapeutic option. Meeting the proposed scaled-average bioequivalence requirements will ensure that when an individual patient is switched, he or she has fluctuations in plasma levels similar to those from lot-to-lot of the brand reference levels and thus should make these generic products safely switchable without change in efficacy and safety outcomes. abstract_id: PUBMED:32473200 The global bioequivalence harmonisation initiative: Report of EUFEPS/AAPS third conference. The European Federation of Pharmaceutical Sciences (EUFEPS) and American Association of Pharmaceutical Scientists (AAPS) have collaborated since 2015 to organize international conferences to support global harmonization of regulatory requirements for bioequivalence (BE) assessment. This collaboration has resulted in three Global Bioequivalence Harmonization Initiative (GBHI) workshops which provided a unique opportunity for scientists from academia, industry, and regulatory agencies to discuss current, complex BE issues. The 3rd GBHI workshop was held in April 2018 in Amsterdam/The Netherlands and covered the following topics: (a) the necessity of multiple-dose studies in BE testing; (b) BE of transdermal delivery systems, and (c) liposomal parenteral preparations. This report summarizes the extensive discussions that led to better understanding of the similarities and differences across the major regulatory agencies on these topics and paved the way for future international harmonization. abstract_id: PUBMED:25215170 Bioavailability and Bioequivalence in Drug Development. Bioavailability is referred to as the extent and rate to which the active drug ingredient or active moiety from the drug product is absorbed and becomes available at the site of drug action. The relative bioavailability in terms of the rate and extent of drug absorption is considered predictive of clinical outcomes. In 1984, the United States Food and Drug Administration (FDA) was authorized to approve generic drug products under the Drug Price Competition and Patent Term Restoration Act based on evidence of average bioequivalence in drug absorption through the conduct of bioavailability and bioequivalence studies. This article provides an overview (from an American point of view) of definition of bioavailability and bioequivalence, Fundamental Bioequivalence Assumption, regulatory requirements, and process for bioequivalence assessment of generic drug products. Basic considerations including criteria, study design, power analysis for sample size determination, and the conduct of bioequivalence trial, and statistical methods are provided. Practical issues such as one size-fits-all criterion, drug interchangeability and scaled average criteria for assessment of highly variable drug products are also discussed. abstract_id: PUBMED:27026878 Potential problems and recommendations regarding substitution of generic antiepileptic drugs: a systematic review of literature. Despite the availability of generic antiepileptic drugs (AEDs), still patients and neurologists hesitate to make a switch due to assorted reasons. The objectives of this review were to evaluate the risks associated with the generic substitution of AEDs. In this context, we also summarized the recommendations of various international societies to treat epileptic patients. We used a number of electronic databases to identify the relevant published studies which demonstrated the potential problems and recommendations regarding generic substitution of AEDs. Of 204 articles found initially, 153 were selected for additional review. Subsequently, 68 articles were finally selected. This review concluded that potential problems linked with the generic substitution of AEDs could be bioequivalence issues, failure of drug therapy, emergence of adverse events and increase in the frequency of seizures. The reasons could be the pharmacokinetics properties of AEDs and unique characteristics of some epilepsy patients. Consequently, the generic substitution of AEDs affects the successful treatment and quality of life of the patients. Various guidelines recommend the well-controlled epileptic patients to avoid switching from brand-to-generic products, generic-to-brand products or generic to some other generic products. abstract_id: PUBMED:30342171 Summary report of second EUFEPS/AAPS conference on global harmonization in bioequivalence. The Global Bioequivalence Harmonization Initiative (GBHI) was launched by the Network on Bioavailability and Biopharmaceutics (BABP) under the auspices of European Federation for Pharmaceutical Sciences (EUFEPS) several years ago. Since 2015, EUFEPS in collaboration with the American Association of Pharmaceutical Scientists (AAPS) has organized three international conferences to support global harmonization of regulatory requirements for bioequivalence (BE) assessment. These conferences provided an open forum for pharmaceutical scientists from academia, industry and regulatory agencies to discuss various BE topics at issue. The current report summarizes the discussion of BE issues at the 2nd GBHI conference held in 2016, Rockville, USA. Three important BE topics were discussed at the meeting: (a) prodrugs and compounds with pre-systemic extraction, (b) scaling procedures and two-stage designs, and (c) exclusion of pharmacokinetic data in BE assessment. The presentations and discussions of these issues have enhanced the mutual understanding of scientific background for BE evaluation and further facilitated harmonization of regulatory approaches for establishing BE of multisource drug products. abstract_id: PUBMED:38452875 Open forum conference on the ICH M13A bioequivalence guideline. The Network on Bioavailability and Biopharmaceutics of EUFEPS (European Federation for Pharmaceutical Sciences) had organised an Open Discussion Forum on the ICH M13A draft "Guideline on bioequivalence for immediate-release solid oral dosage forms". This conference was cosponsored by the Arbeitsgemeinschaft Pharmazeutische Verfahrenstechnik (APV) and the Frankfurt Foundation Quality of Medicines. Scientists from academia and industry attended this workshop on May 15, 2023, in Frankfurt/Germany, to discuss the suggested regulations with the European members of the ICH drafting group. The aim of this report is to summarise and highlight the main discussion points such as choice of study population (females and/or males), request for fasted and/or fed studies, consequences of differences in drug product content, handling of aberrant plasma profiles and additional requirements in case of pH-dependant solubility. During the discussion important arguments were presented for a revision of certain requirements suggested in the draft guideline. abstract_id: PUBMED:21660146 Modern methods for analysis of antiepileptic drugs in the biological fluids for pharmacokinetics, bioequivalence and therapeutic drug monitoring. Epilepsy is a chronic disease occurring in approximately 1.0% of the world's population. About 30% of the epileptic patients treated with availably antiepileptic drugs (AEDs) continue to have seizures and are considered therapy-resistant or refractory patients. The ultimate goal for the use of AEDs is complete cessation of seizures without side effects. Because of a narrow therapeutic index of AEDs, a complete understanding of its clinical pharmacokinetics is essential for understanding of the pharmacodynamics of these drugs. These drug concentrations in biological fluids serve as surrogate markers and can be used to guide or target drug dosing. Because early studies demonstrated clinical and/or electroencephalographic correlations with serum concentrations of several AEDs, It has been almost 50 years since clinicians started using plasma concentrations of AEDs to optimize pharmacotherapy in patients with epilepsy. Therefore, validated analytical method for concentrations of AEDs in biological fluids is a necessity in order to explore pharmacokinetics, bioequivalence and TDM in various clinical situations. There are hundreds of published articles on the analysis of specific AEDs by a wide variety of analytical methods in biological samples have appears over the past decade. This review intends to provide an updated, concise overview on the modern method development for monitoring AEDs for pharmacokinetic studies, bioequivalence and therapeutic drug monitoring. Answer: The assessment of bioequivalence for antiepileptic drugs (AEDs) has been a topic of discussion and concern among regulatory bodies, healthcare professionals, and patients. The question of whether the current requirements for bioequivalence are too permissive is addressed in several studies and reviews. One study that evaluated the accuracy of Cmax and AUC0-t of generic AEDs approved in Brazil found that the average difference in Cmax and AUC0-t between generic and reference products was 5% and 3%, respectively. However, the study also noted that for some narrow therapeutic index drugs, the differences could be clinically significant after product substitution. The study suggested a modification in legislation for bioequivalence of AEDs, including a constraint of AUC0-t 90% CI to 0.90-1.11 and a limitation of the CV to 15%, to assure similar variance in pharmacokinetics and reduce the risk of critical plasma-level fluctuation when switching between generic and reference formulations (PUBMED:24934550). The FDA's approach to bioequivalence for AEDs considers the therapeutic index of each product and recommends fully replicated studies and a reference-scaled average bioequivalence (RS-ABE) approach. This allows for the simultaneous equivalence comparison of the mean and within-subject variability of the test and reference products. Bioequivalence demonstrated in controlled studies with epilepsy patients provides support for the soundness of FDA bioequivalence standards (PUBMED:28929357). A systematic review of literature on potential problems and recommendations regarding substitution of generic AEDs concluded that issues could arise from bioequivalence, failure of drug therapy, adverse events, and increased seizure frequency. The review highlighted the importance of considering the pharmacokinetic properties of AEDs and the unique characteristics of some epilepsy patients, suggesting that generic substitution could affect treatment success and quality of life. Various guidelines recommend that well-controlled epileptic patients avoid switching between brand and generic products (PUBMED:27026878). In summary, while most generic AEDs meet the current bioequivalence standards, there is evidence suggesting that for certain drugs with narrow therapeutic indices, the requirements may be too permissive, potentially leading to clinically significant differences after substitution. Regulatory agencies like the FDA have implemented more stringent guidelines for AEDs, reflecting a risk-based approach to ensure safety and efficacy when switching between products.
Instruction: Are bladder neoplasms more aggresive in patients with a smoking-related second malignancy? Abstracts: abstract_id: PUBMED:24935590 Are bladder neoplasms more aggresive in patients with a smoking-related second malignancy? Background: Relationships between smoking and bladder neoplasms, one of the common malignancies, are well-known. Different smoking-related malignancies may occur together. In this study, we evaluated the stage and grade of bladder neoplasms in patients also featuring lung or larynx cancer. Materials And Methods: From January 2006 to February 2012, patients who underwent surgery for bladder neoplasms in our clinic were screened retrospectively. In the evaluation, 5 patients had larynx cancer and 20 patients have lung cancer in addition, all having been smoking for a long time. The bladder tumor stage and grade were investigated in these 25 cases. Results: Mean age of patients was 66.8 (49-78). In the evaulation, all of 5 patients who had larnyx cancer also had high grade urothelial cancer. One had T2 urothelial, and 3 T1 urothelial cancer. In the same way, all of the 20 patients with lung cancer also have high grade urothelial cancer, three T2, and 13 T1. Bladder cancer stage and grade were determined to be significantly increased in patients with concomitant bladder and lung or larynx cancer. Conclusions: In the patients who have smoking releated second malignancy, bladder cancer prognosis appears more aggressive. We now need a larger series and multi-center studies for understanding relevant pathophysiology. abstract_id: PUBMED:7933174 Bladder cancer and the risk of smoking-related cancers during followup. The risk of smoking-related secondary cancers developing in bladder cancer patients was studied. The study population consisted of 10,014 bladder cancer patients reported to the Finnish Cancer Registry between 1953 and 1989. The risk of contracting a new primary cancer was estimated as a standardized incidence ratio, defined as the ratio of the observed and expected numbers of cases. Of 660 secondary cancers (6.6%) observed (standardized incidence ratio 0.96) 44% were considered to be smoking-related. Lung cancer was the most common secondary cancer (30% overall), and it occurred significantly more often than expected (standardized incidence ratio 1.31, 95% confidence interval 1.13 to 1.50). Also, larynx cancer among men (standardized incidence ratio 1.67, 95% confidence interval 0.95 to 2.79) and kidney cancer among women (standardized incidence ratio 3.55, 95% confidence interval 1.84 to 6.20) were found more often than expected. These excess risks were observed up to 20 years after diagnosis of bladder cancer. Therefore, bladder cancer patients experience an excess risk of smoking-related new tumors, which must be acknowledged during the initial evaluation and regular followup of such patients. abstract_id: PUBMED:30613963 Trends in the risk and burden of second primary malignancy among survivors of smoking-related cancers in the United States. While there are a growing number of cancer survivors, this population is at increased risk of developing second primary malignancies (SPMs). We described the incidence, most common tumor sites, and trends in burden of SPM among survivors of the most commonly diagnosed smoking-related cancers. The current study was a population-based study of patients diagnosed with a primary malignancy from the top 10 smoking-related cancer sites between 2000 and 2014 from Surveillance, Epidemiology, and End Results data. SPM risks were quantified using standardized incidence ratios (SIRs) and excess absolute risks (EARs) per 10,000 person-years at risk (PYR). Trends in the burden of SPM were assessed using Joinpoint regression models. A cohort of 1,608,607 patients was identified, 119,980 (7.5%) of whom developed SPM (76% of the SPMs were smoking-related). The overall SIR of developing second primary malignancies was 1.51 (95% CI, 1.50-1.52) and the EAR was 73.3 cases per 10,000 PYR compared to the general population. Survivors of head and neck cancer had the highest risk of developing a SPM (SIR = 2.06) and urinary bladder cancer had the highest excess burden (EAR = 151.4 per 10,000 PYR). The excess burden of SPM for all smoking-related cancers decreased between 2000 and 2003 (annual percentage change [APC] = -13.7%; p = 0.007) but increased slightly between 2003 and 2014 (APC = 1.6%, p = 0.032). We show that 1-in-12 survivors of smoking-related cancers developed an SPM. With the significant increase in the burden of SPM from smoking-related cancers in the last decade, clinicians should be cognizant of long-term smoking-related cancer risks among these patients as part of their survivorship care plans. abstract_id: PUBMED:25023787 The effect of smoking and timing of smoking cessation on clinical outcome in non-muscle-invasive bladder cancer. Objectives: Cigarette smoking is the most important risk factor for urinary bladder cancer. The prognostic effect of cigarette smoking on disease recurrence and progression in patients with non-muscle-invasive bladder cancer (NMIBC), however, is still unclear. We evaluated the effect of smoking status and intensity, and timing of smoking cessation, on NMIBC prognosis. Methods And Materials: A population-based series of patients diagnosed with NMIBC from 1995 until 2010 was identified through the Netherlands Cancer Registry. Self-reported smoking history was obtained by a postal lifestyle questionnaire at study inclusion. Detailed clinical data concerning diagnosis, treatment, and disease course were collected retrospectively through a medical file survey. The association of smoking variables with recurrence- and progression-free survival of 963 patients with primary NMIBC was studied using cumulative incidence curves and competing risk regression analyses. Results: The study population comprised 181 never smokers (18.8%), 490 former smokers (50.9%), and 292 current smokers (30.3%) at the time of diagnosis. No statistically significant difference or trend in risk of recurrence (Ptrend = 0.47) or progression (Ptrend = 0.23) across the 3 smoking status categories was found. Moreover, no dose-response association was observed across categories of smoking quantity, duration, or cumulative exposure in relation to NMIBC prognosis. The timing of smoking cessation (i.e., ceased smoking≥10y before diagnosis,&lt;10y before diagnosis, vs. current smoker at diagnosis) did not significantly affect the risk of recurrence (Ptrend = 0.31) and progression (Ptrend = 0.19). Conclusions: Based on our study, smoking status, smoking intensity, or cessation at any time before diagnosis does not seem to alter the risks of recurrence and progression among patients with NMIBC. Patients' smoking history is not useful for the guidance of clinical management decisions. Patients should nevertheless be advised to quit considering the known beneficial effects on other non-NMIBC-related end points such as cardiovascular disease and second primary cancers. abstract_id: PUBMED:28753798 Selection Effects May Explain Smoking-related Outcome Differences After Radical Cystectomy. The impact of smoking on mortality among patients with bladder cancer is subject to controversy. We investigated 1000 patients who consecutively underwent radical cystectomy between 1993 and 2013. Proportional hazards models for competing risks were used to study the combined effects of variables on mortality. Compared to nonsmokers, current smokers were more frequently male (35.7% vs 12.0%, p&lt;0.0001), younger (63.5 vs 70.5 yr, p&lt;0.0001), had a lower body mass index (26.2 vs 27.1kg/m2, p&lt;0.0001), and suffered less frequently from cardiac insufficiency (12.7% vs 19.3%, p=0.0129). Among current smokers there was a trend towards lower bladder cancer mortality and higher competing mortality in comparison to nonsmokers. On multivariable analysis, current smoking was not a predictor of bladder cancer mortality (hazard ratio [HR] in the full model 0.76; p=0.0687) but was a predictor of competing mortality (HR in the optimal model 1.62; p=0.0044). In conclusion, this study did not confirm adverse bladder cancer-related outcome among current smokers after radical cystectomy. With a younger mean age and a male predominance, there was a trend towards lower bladder cancer mortality current smokers that was eventually neutralized by higher competing mortality, illustrating that selection effects may explain some smoking-related outcome differences after radical cystectomy. The single-center design is a study limitation. PATIENT SUMMARY: Current smokers are not at higher risk of bladder cancer after radical cystectomy but have a higher risk of competing mortality. abstract_id: PUBMED:25385740 Cigarette smoking prior to first cancer and risk of second smoking-associated cancers among survivors of bladder, kidney, head and neck, and stage I lung cancers. Purpose: Data on smoking and second cancer risk among cancer survivors are limited. We assessed associations between smoking before first cancer diagnosis and risk of second primary smoking-associated cancers among survivors of lung (stage I), bladder, kidney, and head/neck cancers. Methods: Data were pooled from 2,552 patients with stage I lung cancer, 6,386 with bladder cancer, 3,179 with kidney cancer, and 2,967 with head/neck cancer from five cohort studies. We assessed the association between prediagnostic smoking and second smoking-associated cancer risk with proportional hazards regression, and compared these estimates to those for first smoking-associated cancers in all cohort participants. Results: Compared with never smoking, current smoking of ≥ 20 cigarettes per day was associated with increased second smoking-associated cancer risk among survivors of stage I lung (hazard ratio [HR] = 3.26; 95% CI, 0.92 to 11.6), bladder (HR = 3.67; 95% CI, 2.25 to 5.99), head/neck (HR = 4.45; 95% CI, 2.56 to 7.73), and kidney cancers (HR = 5.33; 95% CI, 2.55 to 11.1). These estimates were similar to those for first smoking-associated cancer among all cohort participants (HR = 5.41; 95% CI, 5.23 to 5.61). The 5-year cumulative incidence of second smoking-associated cancers ranged from 3% to 8% in this group of cancer survivors. Conclusion: Understanding risk factors for second cancers among cancer survivors is crucial. Our data indicate that cigarette smoking before first cancer diagnosis increases second cancer risk among cancer survivors, and elevated cancer risk in these survivors is likely due to increased smoking prevalence. The high 5-year cumulative risks of smoking-associated cancers among current smoking survivors of stage I lung, bladder, kidney, and head/neck cancers highlight the importance of smoking cessation in patients with cancer. abstract_id: PUBMED:7653476 Methodology for evaluating the incidence of second primary cancers with application to smoking-related cancers from the Surveillance, Epidemiology, and End Results (SEER) program. The Surveillance, Epidemiology, and End Results (SEER) database is used to estimate the standardized incidence ratios (SIRs) of second primaries for all pairs of smoking-related cancers and to interpret the results in the context of our knowledge of the known effects of smoking on the incidence of each of the individual cancers. In evaluating the simultaneous incidence of two cancers, one must recognize the inherent duality of the two relevant SIRs linking the two cancers (e.g., A and B), namely, the SIRs of A following B and B following A. Under fairly general assumptions, the two SIRs are seen to be equal, and departures from equality suggest changes in risk status or the introduction of new risk factors after the incidence of the first primary. Based on these observations, a methodological strategy is developed. The data reveal several clear patterns. First, short-term incidence is uniformly much greater than long-term incidence. Second, the SIRs are consistently much higher for women than for men, for every pair of cancers studied. Third, the magnitudes of the SIRs are generally high and often substantially higher than would be expected on the basis of the known risks of smoking. Exceptionally high SIRs are observed between kidney and bladder cancer and between head and neck and esophageal cancer. Various influences may affect these high observed SIRs, including artifactual influences such as diagnostic, surveillance, and misclassification biases, and the effect of different exposure prevalences on subsequent SIRs, which may to some extent explain the strong sex differences. However, these artifacts do not appear to explain the magnitude of the observed SIRs, especially the very strong associations between kidney and bladder cancer and between head and neck and esophageal cancer. It seems likely that other factors play a role, including, possibly, host susceptibility factors or additional common risk factors other than smoking. Although multiple primary cancers are rare, they represent an especially fruitful population for detailed epidemiologic study. abstract_id: PUBMED:27613550 Urinary pH Levels are Strongly Associated with Bladder Recurrence After Nephroureterectomy in Upper Tract Urothelial Carcinoma Patients with a Smoking History. Purpose: Aromatic amines, well-known bladder carcinogens, derived from cigarette smoke are activated by acidic urine. We herein determined whether urinary pH levels are associated with bladder recurrence in upper tract urothelial carcinoma patients with a positive smoking history. Methods: A total of 256 upper tract urothelial carcinoma patients who were surgically treated at our institution between 1990 and 2013 were included. Urinary pH levels were defined as the median of at least two consecutive measurements within 1 month of surgery. Results: Ninety-six patients (37.5 %) had pH &lt;5.5 and 160 (62.5 %) had pH ≥5.5, and urinary pH levels were identified as one of the significant predictors for bladder recurrence in univariate but not multivariate Cox regression analysis in overall. In patients with a positive smoking history among those without a history of bladder tumor (N = 110), the 5-year bladder recurrence-free survival rate was 52.5 % in patients with pH ≥5.5, which was significantly higher than that in those with pH &lt;5.5 (25.9 %, p = 0.032). In the multivariate analysis, urinary pH &lt;5.5 (p = 0.022, HR; 1.86) was independently associated with bladder recurrence. No significant difference for bladder recurrence was observed between these two groups in patients with no smoking history among them. Conclusions: Urinary pH &lt;5.5 is associated with an increased risk of bladder recurrence in upper tract urothelial carcinoma patients with a positive smoking history among those without a history of bladder tumor. Modifications to pH for urine alkalization may prevent bladder recurrence. abstract_id: PUBMED:30320536 Occupational variation in bladder cancer in Nordic males adjusted with approximated smoking prevalence. Background: Occupational exposure has been identified as the most important risk factor for bladder cancer second to smoking. The objective of this study was to estimate the occupational variation in risk of bladder cancer that is not attributable to smoking. Material And Methods: In the Nordic Occupational Cancer study (NOCCA), 111,458 cases of bladder cancer and 208,297 cases of lung cancer cases were observed among men in Denmark, Finland, Iceland, Norway and Sweden during 1961-2005. Relative smoking prevalence in an occupation was estimated based on standardized incidence ratio (SIR) for lung cancer in the given occupation. Crude and smoking-adjusted SIRs with 95% confidence intervals (CI) for bladder cancer were calculated for each occupation. Results: The smoking-adjusted SIR for most of the occupations was closer to 1.00 than the unadjusted SIR. The highest statistically significant smoking-adjusted SIRs were observed among chimney sweeps (SIR 1.29, 95% CI 1.05-1.56), waiters (1.22, 1.07-1.38) hairdressers (1.14, 1.02-1.26), cooks and stewards (1.12, 1.01-1.25), printers (1.11, 1.04-1.18) and seamen (1.09, 1.03-1.14). Conclusions: Smoking is a strong risk factor for bladder cancer but there may also be other factors in some specific occupations in addition to smoking. The occupational variation in risk of bladder cancer is small when adjusted for smoking, but risk increasing factors are indicated in some occupations. abstract_id: PUBMED:28242577 Risk of second primary cancer associated with pre-diagnostic smoking, alcohol, and obesity in women with keratinocyte carcinoma. Keratinocyte carcinoma (KC), which includes basal-cell carcinoma (BCC) and squamous-cell cancer (SCC), has been associated with an increased risk of second primary cancers (SPCs), although the reason for this increase is unknown. We assessed the effects of smoking, alcohol, and obesity prior to the diagnosis of KC on the development of SPCs, as these are well-established risk factors for multiple cancers and may also contribute to the increased risk of SPCs among those with KC. A total of 15,628 women with self-reported KC were identified in the Nurses' Health Study. Incident SPCs were assessed throughout the follow-up until June 2012. Cox proportional hazards models were used to calculate the hazard ratios (HRs) of SPC associated with pre-diagnostic smoking, alcohol and body mass index (BMI). We also compared these risk estimates to those for first cancers in all cohort participants. During 193,695 person-years of follow-up, we recorded 2839 SPC cases. Compared with never smokers, current smokers had a significantly elevated risk for SPC overall and specifically for lung, colorectal, and bladder cancers. We also found a positive association between higher BMI and risk for SPC overall as well as for endometrial and bladder SPCs. Women with KC who consumed alcohol ≥30g/day had a marginally higher risk of SPC compared to non-drinkers. The associations between incident SPC risk among KC cases and smoking, alcohol, and obesity appeared similar to the associations between these risk factors and the incident first primary cancers in the whole cohort. Only in the heavy smoking (≥25 cigarettes/day) category was the HR for SPC after KC (2.34; 95% CI 1.98-2.76) slightly higher than that for the first cancer in the overall cohort (HR 1.86; 95% CI 1.75-1.98, Pheterogeneity=0.01). In conclusion, pre-diagnostic smoking, alcohol and obesity prior to KC diagnosis were associated with risk of SPCs. Answer: Yes, bladder neoplasms appear to be more aggressive in patients with a smoking-related second malignancy. A study that evaluated the stage and grade of bladder neoplasms in patients who also had lung or larynx cancer found that all patients with larynx cancer had high-grade urothelial cancer, with a majority presenting with T1 urothelial cancer. Similarly, all patients with lung cancer also had high-grade urothelial cancer, with a majority presenting with T1 and some with T2 urothelial cancer. The study concluded that bladder cancer stage and grade were significantly increased in patients with concomitant bladder and lung or larynx cancer, suggesting a more aggressive prognosis for bladder cancer in patients with a smoking-related second malignancy (PUBMED:24935590). This finding is consistent with other research indicating that bladder cancer patients experience an excess risk of smoking-related new tumors, which must be acknowledged during the initial evaluation and regular follow-up of such patients (PUBMED:7933174). Moreover, survivors of smoking-related cancers have been shown to have a higher risk of developing second primary malignancies (SPMs), with survivors of head and neck cancer having the highest risk of developing a SPM and urinary bladder cancer having the highest excess burden (PUBMED:30613963). Therefore, the evidence suggests that patients with a history of smoking who develop bladder cancer are at an increased risk of having more aggressive bladder neoplasms, especially if they have a concurrent smoking-related second malignancy. Clinicians should be aware of the long-term smoking-related cancer risks among these patients as part of their survivorship care plans (PUBMED:30613963).
Instruction: Do gold rings protect against articular erosion in rheumatoid arthritis? Abstracts: abstract_id: PUBMED:9306874 Do gold rings protect against articular erosion in rheumatoid arthritis? Objective: To examine the hypothesis that gold rings might delay articular erosion at the metacarpophalangeal (MCP) joint of the left ring finger in ring wearers with rheumatoid arthritis (RA). Methods: Consecutive patients with RA were recruited. They were classified as ring wearers if they had worn a gold ring on the left ring finger throughout most of the time since disease onset, or as non-ring wearers if they had never worn a gold ring. Standard hand radiographs (with rings removed, where possible) were taken and articular erosion was quantified at the MCP and proximal interphalangeal joints. Results: Thirty ring wearers (27 female) and 25 non-ring wearers (12 female) were included. The median (25th-75th centile) Larsen score in the left hand ring MCP joint of ring wearers was 1.0 (1.0-2.0), which was significantly less than in their equivalent right hand joint (1.0, 1.0-5.0, p = 0.01). It also tended to be less than the equivalent left hand joint of non-ring wearers (4.0, 1.0-5.0, p = 0.06), with a similar but significant difference observed at the adjacent middle finger MCP joint (p = 0.01). Conclusions: The results of this preliminary study suggest that there may be less articular erosion at the left hand ring, and perhaps adjacent, MCP joints observed in ring wearers with RA. These data support the hypothesis that gold could pass from a gold ring through skin and local lymphatics 'downstream' to nearby MCP joint in sufficient quantities to delay articular erosion. abstract_id: PUBMED:1125918 Treatment of persistent knee effusions with intra-articular radioactive gold. Patients with a total of 112 chronic knee effusions unresponsive to the usual conservative methods of therapy were treated with intra-articular injections of radioactive gold and followed from 6 months to 5 years. Most patients in the study had rheumatoid arthritis but others had ankylosing spondylitis, psoriatic arthritis, intermittent hydrathrosis and undiagnosed synovitis. After 6 months 81% showed improvement. This figure diminished to approximately 70% at 1 and 2 years, 60% at 3 and 4 years and 50% at 5 years. The mose beneficial results were seen in patients with intermittent hydrarthrosis. Twelve patients with a synovial cyst and/or rupture improved. Patients with thin synovia and anatomic stage I or II disease did best, but those with stage III disease also did well. After the injection 33% of the patients had a reactive synovitis. A variable amount of radioactive material escaped from the knee to the regional lymph nodes and general circulation. abstract_id: PUBMED:5439327 The treatment of persistent knee effusions with intra-articular radioactive gold: preliminary report. Eighteen chronic knee effusions unresponsive to the usual methods of therapy were treated by intra-articular injections of radioactive gold ((198)Au) and followed up for one year. Ten patients had classical rheumatoid arthritis; three intermittent hydrarthrosis (both knees were treated in one patient); two ankylosing spondylitis, and one synovitis of undetermined cause. In 12 knees the effusion completely disappeared, usually within three months. Temporary increased pain and swelling occurred during the first week in five cases. Radiation dosimetry is discussed in detail. abstract_id: PUBMED:5728414 On the intra-articular treatment of rheumatoid arthritis using radioactive gold N/A abstract_id: PUBMED:22491387 Effect of gold nanoparticles in the treatment of established collagen arthritis in rats. Background And Objective: The role of gold nanoparticles (AuNPs) in the treatment of autoimmune diseases remains vague. Therefore, the aim of this study was to determine the effect of AuNPs in the treatment of rats with established collagen-induced arthritis (CIA). Material And Methods: A total of 24 Wistar male rats with established CIA were used. AuNPs measuring 13-nm and 50-nm were prepared according to standard procedures, and their size was determined using transmission electron microscopy. These gold particles were injected intra-articularly 5 times a week, 12 injections in total. Body and organ weight, arthritic profiles based on paw swelling, histological changes in the joints and internal organs, blood indices, and serum oxidative products were investigated. Results: An examination of the course of the experimental disease and a subsequent histological analysis as well as hematological studies revealed a nontoxic effect of AuNPs on the vital organs. The treatment of the rats with established CIA by 13-nm and 50-nm gold nanoparticles decreased joint swelling by 49.7% (P&lt;0.002) and 45.03% (P&lt;0.01), respectively. That corresponded to the decrease in statistically significant histological changes in articular tissues. AuNPs showed their antioxidant effect by increasing the level of antioxidant enzyme catalase. Conclusions: The continuous intra-articular administration of AuNPs not only reduced the inflammation, joint swelling, and development of polyarthritis, but also reduced histological changes in articular tissues without toxic effects on the internal organs. The results obtained disclose the role of AuNPs as antioxidant agents. abstract_id: PUBMED:32096106 Protecting the Normal Physiological Functions of Articular and Periarticular Structures by Aurum Nanoparticle-Based Formulations: an Up-to-Date Insight. Taking the articular and periarticular structures as a litmus test for gold-based nanoformulations, the potential of gold nanoparticles in protecting the normal physiological functions of these structures particularly in geriatric patients is one of the research areas of current interest. Aside from its use to make the traditional and fashionable ornaments for human usage, the gold metal is also known for its rich therapeutic activity. This is especially true when the gold is converted from its bulk form into nanosized form before its administering into the human body. Since it is the age of nanocomponents in medical and pharmaceutical research areas, this review is therefore mainly focused on nanoparticulate systems consisting of aurum. Accumulating research reports nevertheless show concrete evidence indicating the potential of gold-based nanoformulations to manage joint syndromes such as osteoarthritis and rheumatoid arthritis. This review embarks from preparation techniques and characterization methods to therapeutical application potentials of gold-based nanoformulations. abstract_id: PUBMED:8474059 Quantitative microfocal radiography detects changes in erosion area in patients with early rheumatoid arthritis treated with myocrisine. Patients with early rheumatoid arthritis (RA) were randomly divided into those receiving gold early (n = 13) or 6 months later (n = 10). They were followed 6 monthly over 18 months. Mean erosion area in gold and delayed gold, measured from macroradiographs, was comparable at baseline and increased significantly over the first 6 months. In the second 6 months, gold showed no increase and delayed gold an insignificant increase. By the third 6 months both groups showed a decrease. On comparing the second 6 months of gold therapy in gold and delayed gold with a group of patients with RA of similar disease duration (n = 34) not receiving gold, a lower proportion (p &lt; 0.005) had erosion area progression and a higher proportion (p &lt; 0.001) erosion repair. abstract_id: PUBMED:26046639 Intra-Articular Injections of Polyphenols Protect Articular Cartilage from Inflammation-Induced Degradation: Suggesting a Potential Role in Cartilage Therapeutics. Arthritic diseases, such as osteoarthritis and rheumatoid arthritis, inflict an enormous health care burden on society. Osteoarthritis, a degenerative joint disease with high prevalence among older people, and rheumatoid arthritis, an autoimmune inflammatory disease, both lead to irreversible structural and functional damage to articular cartilage. The aim of this study was to investigate the effect of polyphenols such as catechin, quercetin, epigallocatechin gallate, and tannic acid, on crosslinking type II collagen and the roles of these agents in managing in vivo articular cartilage degradation. The thermal, enzymatic, and physical stability of bovine articular cartilage explants following polyphenolic treatment were assessed for efficiency. Epigallocatechin gallate and tannic acid-treated explants showed &gt;12 °C increase over native cartilage in thermal stability, thereby confirming cartilage crosslinking. Polyphenol-treated cartilage also showed a significant reduction in the percentage of collagen degradation and the release of glycosaminoglycans against collagenase digestion, indicating the increase physical integrity and resistance of polyphenol crosslinked cartilage to enzymatic digestion. To examine the in vivo cartilage protective effects, polyphenols were injected intra-articularly before (prophylactic) and after (therapeutic) the induction of collagen-induced arthritis in rats. The hind paw volume and histomorphological scoring was done for cartilage damage. The intra-articular injection of epigallocatechin gallate and tannic acid did not significantly influence the time of onset or the intensity of joint inflammation. However, histomorphological scoring of the articular cartilage showed a significant reduction in cartilage degradation in prophylactic- and therapeutic-groups, indicating that intra-articular injections of polyphenols bind to articular cartilage and making it resistant to degradation despite ongoing inflammation. These studies establish the value of intra-articular injections of polyphenol in stabilization of cartilage collagen against degradation and indicate the unique beneficial role of injectable polyphenols in protecting the cartilage in arthritic conditions. abstract_id: PUBMED:6537095 Lysosomal gold accumulations in pulmonary macrophages. Although soluble gold has been widely used in the treatment of rheumatoid arthritis, little is known about the distribution of gold deposits in extra-articular tissues. In the synovial lining cells and articular cartilage, the morphology of lysosomes containing gold, the aurosomes, is well documented. Because gold may cause pulmonary injury, the morphology and distribution of gold deposits in pulmonary tissue should be recognized. We found morphologically typical aurosomes with electron-dense membranes and granules, giving the spectrum of gold in electron microprobe analysis, in the interstitial and alveolar macrophages in the open lung biopsies of 3 patients who had received gold treatment but not in 12 patients who had not received gold treatment. In contrast to previous studies, aurosomes were not found in the endothelial cells. abstract_id: PUBMED:114646 Metabolism and distribution of gold compounds. The pharmacokinetics of gold in blood and urine are compared during intramuscular and oral chrysotherapy for rheumatoid arthritis. Blood gold levels are 3 to 10 fold higher with injectable aurothioglucose and gold sodium thiomalate (50 mg/week) than with oral auranofin (6 mg/day). But, the serum half-life of the parenteral compounds is significantly shorter (5.5 days) than the oral agent (14-21 days). The gold content of urine is nearly 10 times higher with the intramuscular compounds than with the oral drug. The excretory pathways and tissue distribution of gold during conventional intramuscular chrysotherapy are reviewed. Forty percent of a 50 mg injection of gold sodium thiomalate is excreted in 7 days, of which 70% is recovered in the urine. Highest gold concentrations are found in the reticuloendothelial system, adrenal glands and kidneys, while the bone marrow, liver, skin and bone contain the greatest quantities of gold. Answer: The hypothesis that gold rings might protect against articular erosion in rheumatoid arthritis (RA) was examined in a study where patients with RA were classified as ring wearers if they had worn a gold ring on the left ring finger throughout most of the time since disease onset, or as non-ring wearers if they had never worn a gold ring. The results of this preliminary study suggested that there may be less articular erosion at the left hand ring, and perhaps adjacent, MCP joints observed in ring wearers with RA. These data support the hypothesis that gold could pass from a gold ring through skin and local lymphatics 'downstream' to nearby MCP joint in sufficient quantities to delay articular erosion (PUBMED:9306874).
Instruction: Post-sternotomy chronic osteomyelitis: is sternal resection always necessary? Abstracts: abstract_id: PUBMED:22869252 Post-sternotomy chronic osteomyelitis: is sternal resection always necessary? Objectives: The goal of this study was to investigate alternative strategies to the sternal resection in the treatment of post-sternotomy osteomyelitis. We report our experience in the treatment of chronic infection of median sternotomy following open heart surgery without sternal resection. Methods: A 4-year retrospective study was performed, consisting of 70 patients affected by post-sternotomy sternocutaneous fistulas due to chronic osteomyelitis: 45 patients underwent only medical treatment and 25 underwent steel wire removal and surgical debridement (conservative surgery). Of the 25, 7 patients underwent an additional vacuum assisted closure (VAC) therapy due to widespread infected subcutaneous tissue. The diagnosis of osteomyelitis was supported via 3D CT scan images. Results: Complete wound healing was achieved in 67 patients including a patient who achieved healing after being affected by a fistula for over 24 years before coming under our observation, another, affected by mycobacteria other than tuberculosis osteomyelitis, who needed antimicrobial treatment for a period of 30 months and 2 who were affected by Aspergillus infection and needed radical cartilage removal. Fistula relapses were observed in 6 patients of the total 70, possibly due to the too short-term antibiotic therapy used in the presence of coagulase-negative Staphylococcus (CoNS) with multiple resistances and in the presence of Corynebacterium species. Conclusions: Post-sternotomy chronic osteomyelitis can be successfully treated mainly by systemic antimicrobial therapy alone, without mandatory surgical treatments, provided that accurate microbiological and radiological studies are performed. The presence of CoNS and Corynebacterium species seemed to be associated with a need for a prolonged combined antimicrobial therapy with a minimum of 6 months up to a maximum of 18 months. The CT scan and the 3D reconstruction of the sternum proved to be a good method to evaluate the status of the sternum and support the treatments. The VAC therapy was not useful in treating osteomyelitis, although, if used appropriately in the postoperative deep sternal wound infection with the sponge fitted between the sternal edges, it seems to be an effective method to eradicate the infection in the sternum and to prevent chronic osteomyelitis. abstract_id: PUBMED:34719276 Resection of the proximal femur during one-stage revision for infected hip arthroplasty : risk factors and effectiveness. Aims: One-stage revision hip arthroplasty for periprosthetic joint infection (PJI) has several advantages; however, resection of the proximal femur might be necessary to achieve higher success rates. We investigated the risk factors for resection and re-revisions, and assessed complications and subsequent re-revisions. Methods: In this single-centre, case-control study, 57 patients who underwent one-stage revision arthroplasty for PJI of the hip and required resection of the proximal femur between 2009 and 2018 were identified. The control group consisted of 57 patients undergoing one-stage revision without bony resection. Logistic regression analysis was performed to identify any correlation with resection and the risk factors for re-revisions. Rates of all-causes re-revision, reinfection, and instability were compared between groups. Results: Patients who required resection of the proximal femur were found to have a higher all-cause re-revision rate (29.8% vs 10.5%; p = 0.018), largely due to reinfection (15.8% vs 0%; p = 0.003), and dislocation (8.8% vs 10.5%; p = 0.762), and showed higher rate of in-hospital wound haematoma requiring aspiration or evacuation (p = 0.013), and wound revision (p = 0.008). The use of of dual mobility components/constrained liner in the resection group was higher than that of controls (94.7% vs 36.8%; p &lt; 0.001). The presence and removal of additional metal hardware (odds ratio (OR) = 7.2), a sinus tract (OR 4), ten years' time interval between primary implantation and index infection (OR 3.3), and previous hip revision (OR 1.4) increased the risk of proximal femoral resection. A sinus tract (OR 9.2) and postoperative dislocation (OR 281.4) were associated with increased risk of subsequent re-revisions. Conclusion: Proximal femoral resection during one-stage revision hip arthroplasty for PJI may be required to reduce the risk of of recurrent or further infection. Patients with additional metalware needing removal or transcortical sinus tracts and chronic osteomyelitis are particularly at higher risk of needing proximal femoral excision. However, radical resection is associated with higher surgical complications and increased re-revision rates. The use of constrained acetabular liners and dual mobility components maintained an acceptable dislocation rate. These results, including identified risk factors, may aid in preoperative planning, patient consultation and consent, and intraoperative decision-making. Cite this article: Bone Joint J 2021;103-B(11):1678-1685. abstract_id: PUBMED:37611921 Surgical debridement in long bone chronic osteomyelitis: is wide tumour-like resection necessary? Aims: The standard of wide tumour-like resection for chronic osteomyelitis (COM) has been challenged recently by adequate debridement. This paper reviews the evolution of surgical debridement for long bone COM, and presents the outcome of adequate debridement in a tertiary bone infection unit. Methods: We analyzed the retrospective record review from 2014 to 2020 of patients with long bone COM. All were managed by multidisciplinary infection team (MDT) protocol. Adequate debridement was employed for all cases, and no case of wide resection was included. Results: A total of 53 patients (54 bones) with median age of 45.5 years (interquartile range 31 to 55) and mean follow-up of 29 months (12 to 59) were included. In all, ten bones were Cierny-Mader type I, 39 were type III, and five were type IV. All patients were treated with single-staged management, except for one (planned two-stage stabilization). Positive microbial cultures grew in 75%. Overall, 46 cases (85%) had resolution of COM after index procedure, and 49 (90.7%) had resolution on last follow-up. Four patients (7%) underwent second surgical procedure and six patients (11%) had complications. Conclusion: We challenge the need for wide tumour-like resection in all cases of COM. Through detailed preoperative evaluation and planning with MDT approach, adequate debridement and local delivery of high concentration of antibiotic appears to provide comparable outcomes versus radical debridement. abstract_id: PUBMED:2800867 The resection site in the treatment of giant cell bone tumors The authors discuss surgical procedures (resections and amputations) in a group of 35 patients with giant-cell bone tumours who were treated at the Orthopaedic Clinic in Hradec Králové. The diagnosis was established in cooperation by the orthopaedist, pathologist and roentgenologist. Resections were made in 24 patients, i.e. in 13 patients as primary operations, in 11 patients as secondary operations. The group comprised three tumours classified as malignant. After primary resection a relapse developed once, after secondary resection there were two relapses. Twenty-one patients, i.e. 87.5%, had no relapse. On account of two extensive relapses in soft tissues the affected extremity had to be amputated. One amputation was necessary on account of chronic osteomyelitis of the graft. The follow up period of the mentioned patients is between 3 and 26 years. The described investigation tested the possibility to extend indications of resection of giant-cell bone tumours also to conditions when the tumour perforates the affected skeleton and spreads to soft tissues in a favourable direction, i.e. muscles, without affecting important nerves and blood vessels. abstract_id: PUBMED:32125944 Shorter Telomere Length in Peripheral Blood Leukocytes Is Associated with Post-Traumatic Chronic Osteomyelitis. Background: This study investigated the association between post-traumatic chronic osteomyelitis (COM) and peripheral leukocyte telomere length (PLTL) and explored factors associated with PLTL in COM. Methods: A total of 56 patients with post-traumatic COM of the extremity and 62 healthy control subjects were recruited. The PLTL was measured by real-time PCR. Binary logistic regression analysis was used to identify factors in correlation with telomere length. Sex, age, white blood cell (WBC) count, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and infection duration were included as independent variables in the logistic regression model. Results: Post-traumatic COM patients had significantly shorter PLTLs (5.39 ± 0.40) than healthy control subjects (5.69 ± 0.46; p &lt; 0.001). Binary logistic regression analysis showed that PLTL had a statistically significant association with age (B = -0.072; p = 0.013) and CRP (B = -0.061; p = 0.033). The logistic regression model was statistically significant and explained 31.4% (Nagelkerke R2) of the change in telomere length and correctly classified 69.6% of the cases. Conclusions: Patients with post-traumatic COM have shorter PLTLs than healthy subjects. The PLTL erosion of post-traumatic COM was partially explained by age and CRP. abstract_id: PUBMED:10355453 Split pectoralis major muscle flap reconstruction after clavicular-manubrial resection. Resection of the clavicular head and clavicular-manubrial junction is required in cases of chronic osteomyelitis or tumor. This article describes a technique for soft tissue coverage in an infected or irradiated area after resection using a split pectoralis major rotational muscle flap. abstract_id: PUBMED:23223970 Epiphysis preserving resection of malignant proximal tibial tumours. Purpose: The purpose of the study was to evaluate the treatment of five patients with proximal tibial epiphysis preserving resection for malignant tumours of the tibia. Method: Three patients suffered from Ewing sarcoma, two had osteosarcoma. The proximal level of the resection was in the line of the growth plate in four patients, and 2 cm below of the growth plate in one patient. The distal resection level was in the diaphysis, depending on the tumour border. The resected part of the tibia was substituted by both the ipsilateral and contralateral fibulas. The two fibulas were proximally built into the epiphysis, the contralateral, freely transplanted fibula was placed distally into the tibial diaphysis. The leg was fixed in an Ilizarov frame. Results: The follow-up period was 4.9 years on average (range, 3.5-8 years). Neither local recurrence nor metastasis was noticed in patients with tibia tumour resection during the observation time. Proper fixation of the transplanted fibula with all of the patients, both proximal to the tibia epiphysis and distal to diaphysis, was experienced. The range of motion of the knee was 122° on the average. Two patients can walk without support and three can walk with an orthosis fully weight bearing. The limb shortening of the affected side was 2.6 cm on average (range,1.5-4.5 cm). In three patients the transplanted opposite side fibula has broken. In one patient, chronic osteomyelitis developed at the distal part of the transplanted fibula. Conclusions: The difficulties of intercalary segment supplementation were present to a greater extent because of the small size of proximal epiphysis, but the preserved knee joint can produce better function for the patient in this limb-saving surgery. abstract_id: PUBMED:11073134 Chronic osteomyelitis in childhood: is surgery always indicated? Background: Presently, most children with chronic osteomyelitis undergo surgery with the inherent risk of damage to their growth plate. We demonstrate a treatment regimen based on imaging procedures focussing on antibiotics in order to reduce the rate of surgical interventions. Patients And Methods: We retrospectively evaluated all 11 patients with clinically suspected chronic osteomyelitis who were treated at our institution from 1989 to 1995. Patients underwent open biopsy and surgical treatment only if imaging procedures showed signs indistinguishable from malignancy, or if they were highly suggestive for the presence of pus, joint infection or osteonecrosis. The patients were followed up for a minimum of 3 years. Results: All five patients treated solely with antibiotics recovered completely. Three patients were subjected to open biopsy and surgical treatment since they showed radiological signs indistinguishable from malignancy, and two patients due to pus or osteonecrosis. In follow-up, there was one relapse of chronic osteomyelitis 11 months after the first treatment course with surgery and antibiotics. One patient suffering from Ewing's sarcoma as detected by open biopsy was excluded. Conclusion: Our diagnostic procedure was useful to reduce the rate of surgery. Surgical treatment of chronic osteomyelitis is not always neccessary especially in cases of missing necrosis, joint infection and abscess as demonstrated by the complete recovery of our patients treated solely with antibiotics. abstract_id: PUBMED:35936558 Case of maxillary actinomycotic osteomyelitis, a rare post COVID complication-case report. Introduction: and Importance: Maxillary actinomycosis is a persistent, very rare disease produced by Actinomyces species which may include only soft tissue or bone or the two together. Actinomycotic osteomyelitis of maxilla is very infrequent when compared to mandible. Case Presentation: Here we are conferring a case of an elderly male patient who had history of COVID-19 infection 4 months ago, with constant complaint of non-remitting vague pain in the region of maxilla with tooth loosening and extractions. He was given a provisional diagnosis of chronic osteomyelitis of maxilla which was later on proved by histopathology as actinomycotic osteomyelitis. Clinical Discussion: A saprophytic fungus causes mucor mycosis, and it is quite unusual. Strawberry gingivitis is one of the signs and symptoms. Mucormycosis and post-covid oral maxillofacial problems can be improved with early diagnosis. Oral Mucormycosis should be suspected in individuals with weakened immune systems, uncontrolled diabetes or post-covid instances. Surgery and adequate antibiotic treatment are necessary to treat actinomycosis. Infection may return after a period of inactivity, so long-term follow-up is necessary. Conclusion: We conclude a positive causal association between COVID-19 and actinomycosis. Maxillary osteomyelitis, a very rare infection, and in our case, the causative organism was Actinomyces Patients who have been infected should be tested for Actinomycin, which may masquerade as a head and neck illness. abstract_id: PUBMED:11341428 Chronic osteomyelitis. The effect of the extent of surgical resection on infection-free survival. We studied prospectively a consecutive series of 50 patients with chronic osteomyelitis. Patients were allocated to the following treatment groups: 1) wide resection, with a clearance margin of 5 mm or more; 2) marginal resection, with a clearance margin of less than 5 mm; and 3) intralesional biopsy, with debulking of the infected area. All patients had a course of antibiotics, intravenously for six weeks followed by orally for a further six weeks. No patients in group 1 had recurrence. In patients treated by marginal resection (group 2), 8 of 29 (28%) had recurrence. All patients who had debulking had a recurrence within one year of surgery. We performed a survival analysis to determine the time of the recurrence of infection. In group 2 there was a higher rate of recurrence in type-B hosts (p &lt; 0.05); no type-A hosts had recurrence. This information is of use in planning surgery for chronic osteomyelitis. Answer: Based on the provided abstracts, sternal resection is not always necessary for the treatment of post-sternotomy chronic osteomyelitis. A study reported that post-sternotomy chronic osteomyelitis could be successfully treated mainly by systemic antimicrobial therapy alone, without mandatory surgical treatments, provided that accurate microbiological and radiological studies are performed (PUBMED:22869252). The study included 70 patients affected by post-sternotomy sternocutaneous fistulas due to chronic osteomyelitis, where 45 patients underwent only medical treatment and 25 underwent conservative surgery without sternal resection. Complete wound healing was achieved in 67 patients, suggesting that systemic antimicrobial therapy can be effective without the need for sternal resection. The study also highlighted that the presence of coagulase-negative Staphylococcus (CoNS) and Corynebacterium species seemed to be associated with a need for a prolonged combined antimicrobial therapy with a minimum of 6 months up to a maximum of 18 months. The CT scan and the 3D reconstruction of the sternum proved to be good methods to evaluate the status of the sternum and support the treatments. Vacuum-assisted closure (VAC) therapy was not useful in treating osteomyelitis directly, but it could be effective in eradicating infection in the sternum and preventing chronic osteomyelitis when used appropriately in the postoperative deep sternal wound infection (PUBMED:22869252). In conclusion, while sternal resection may be necessary in some cases, especially when conservative treatments fail or are not suitable, the evidence suggests that it is not always required for the successful treatment of post-sternotomy chronic osteomyelitis. Non-surgical approaches, including systemic antimicrobial therapy and careful radiological monitoring, can be effective alternatives.
Instruction: Disease progression in melanoma patients with negative sentinel lymph node: does false-negative specimens entirely account for this phenomenon? Abstracts: abstract_id: PUBMED:21466591 Disease progression in melanoma patients with negative sentinel lymph node: does false-negative specimens entirely account for this phenomenon? Background: Sentinel lymph node (SLN) status is the most important prognostic factor for subjects with primary melanoma thicker than 1 mm. Objective: We focused our study on patients with disease progression after negative SLN biopsy (SLNB), with the aim of elucidating their clinical and histopathological characteristics, outcome and real incidence of false negative. Methods: A total of 688 melanoma patients who underwent SLNB (1 May 1998-31 December 2008) were analysed; all patients had Breslow &gt;1 mm or Breslow &lt;1 mm and at least one of the following features: regression, ulceration and/or Clark level IV-V. Results: Progression developed in 114 of 503 negative SLN patients (22.7%); the first metastatic site was regional in 64% and distant in 36% of these cases. Thirty-nine patients had nodal metastases in the SLN basin as first site of progression. High-risk melanomas (P = 0.001) and elderly patients (P = 0.0005) had an increased probability of progression. Women with a higher median age and lower limbs primary melanoma developed mainly regional skin metastases, while an increased probability of distant metastases was demonstrated in patients with primary on the trunk and axillary SLN (P = 0.003, P = 0.001 respectively). Age at diagnosis, Breslow thickness and regression showed a prognostic relevance in univariate and multivariate analyses on disease-free survival and overall survival. Conclusions: Even if SLN status remains the most important prognostic factor for melanoma patients, progressive disease after a negative SLNB is a relatively frequent event. However, in our opinion, only a part of negative SLNB patients with metastatic spreading should be considered as false negative (7.75%). abstract_id: PUBMED:30190855 Analysis of melanoma recurrence following a negative sentinel lymph node biopsy. Little attention has been paid to the characteristics and outcomes of patients who experience distant, local or regional recurrence of melanoma following a negative sentinel lymph node biopsy. This article aims to review the published literature on the topic and presents some general summaries regarding this patient population. Patients who experience a disease recurrence following a negative sentinel lymph node biopsy have a worse overall survival compared with patients with a positive sentinel lymph node biopsy. The implications and possible explanations for these findings are discussed in order to both underscore the need for in-depth investigation of local, regional or distant melanoma recurrence among patients following a true negative sentinel lymph node biopsy, as well as increased efforts to minimize the rate of false negative sentinel lymph node biopsies. abstract_id: PUBMED:17597345 Impact of false-negative sentinel lymph node biopsy on survival in patients with cutaneous melanoma. Background And Objectives: Sentinel lymph node biopsy is widely accepted as standard care in melanoma despite lack of pertinent randomized trials results. A possible pitfall of this procedure is the inaccurate identification of the sentinel lymph node leading to biopsy and analysis of a nonsentinel node. Such a technical failure may yield a different prognosis. The purpose of this study is to analyze the incidence of false negativity and its impact on clinical outcome and to try to understand its causes. Methods: The Melanoma Data Base at National Cancer Institute of Naples was analyzed comparing results between false-negative and tumor-positive sentinel node patients focusing on overall survival and prognostic factors influencing the clinical outcome. Results: One hundred fifty-one cases were diagnosed to be tumor-positive after sentinel lymph node biopsy and were subjected to complete lymph node dissection. Thirty-four (18.4%)patients with tumor-negative sentinel node subsequently developed lymph node metastases in the basin site of the sentinel procedure. With a median follow-up of 42.8 months the 5-year overall survival was 48.4% and 66.3% for false-negative and tumor-positive group respectively with significant statistical differences (P &lt; .03). Conclusions: The sensitivity of sentinel lymph node biopsy was 81.6%, and a regional nodal basin recurrence after negative-sentinel node biopsy means a worse prognosis, compared with patients submitted to complete lymph node dissection after a positive sentinel biopsy. The evidence of higher number of tumor-positive nodes after delayed lymphadenectomy in false-negative group compared with tumor-positive sentinel node cases, confirmed the importance of an early staging of lymph nodal involvement. Further data will better clarify the role of prognostic factors to identify cases with a more aggressive biological behavior of the disease. abstract_id: PUBMED:25677366 Sentinel lymph node biopsy is indicated for patients with thick clinically lymph node-negative melanoma. Background: Sentinel lymph node biopsy (SLNB) is indicated for the staging of clinically lymph node-negative melanoma of intermediate thickness, but its use is controversial in patients with thick melanoma. Methods: From 2002 to 2012, patients with melanoma measuring ≥4 mm in thickness were evaluated at a single institution. Associations between survival and clinicopathologic characteristics were explored. Results: Of 571 patients with melanomas measuring ≥4 mm in thickness and no distant metastases, the median age was 66 years and 401 patients (70.2%) were male. The median Breslow thickness was 6.2 mm; the predominant subtype was nodular (45.4%). SLNB was performed in 412 patients (72%) whereas 46 patients (8.1%) presented with clinically lymph node-positive disease and 113 patients (20%) did not undergo SLNB. A positive SLN was found in 161 of 412 patients (39.1%). For SLNB performed at the study institution, 14 patients with a negative SLNB developed disease recurrence in the mapped lymph node basin (false-negative rate, 12.3%). The median disease-specific survival (DSS), overall survival (OS), and recurrence-free survival (RFS) for the entire cohort were 62.1 months, 42.5 months, and 21.2 months, respectively. The DSS and OS for patients with a negative SLNB were 82.4 months and 53.4 months, respectively; 41.2 months and 34.7 months, respectively, for patients with positive SLNB; and 26.8 months and 22 months, respectively, for patients with clinically lymph node-positive disease (P&lt;.0001). The median RFS was 32.4 months for patients who were SLNB negative, 14.3 months for patients who were SLNB positive, and 6.8 months for patients with clinically lymph node-positive disease (P&lt;.0001). Conclusions: With an acceptably low false-negative rate, patients with thick melanoma and a negative SLNB appear to have significantly prolonged RFS, DSS, and OS compared with those with a positive SLNB. Therefore, SLNB should be considered as indicated for patients with thick, clinically lymph node-negative melanoma. abstract_id: PUBMED:32014933 Progressive Disease in Sentinel-negative Melanoma Patients: Biological Differences and Importance of Sentinel Lymph Node Biopsy. Background/aim: Among the most important prognostic factors in melanoma is the sentinel lymph node (SLN) status. Materials And Methods: Using our electronic database we identified 109 of 890 SLN-negative patients with progressive disease (PD). These patients were characterized for melanoma type, molecular type, sequence and extent of metastatic spread. Results: A total of 61 of 109 SLN-negative patients had PD in the SLN-basin indicating false-negative SLN (group-1). Forty eight of 109 patients had PD at distant sites and were therefore impossible to be identified using SLN biopsy (group-2). Despite distant spread these patients had significantly more single organ metastasis (p&lt;0.001) and significantly longer disease-free-survival (p=0.001) compared to group-1. Additionally, to significant differences on a molecular basis between the two groups (p=0.01), all lentigo maligna and spindle-cell-melanomas belonged to group-2 and all, except one lentigo maligna melanoma, had single visceral metastasis. Conclusion: Two different biological groups among SLN-negative patients with PD were demonstrated. Extravascular-migratory-metastasis, rather than hematogenous spread, might be responsible for the observed PD with single organ involvement. abstract_id: PUBMED:24561692 Sentinel lymph node mapping in melanoma: the issue of false-negative findings. Management of cutaneous melanoma has changed after introduction in the clinical routine of sentinel lymph node biopsy (SLNB) for nodal staging. By defining the nodal basin status, SLNB provides a powerful prognostic information. Nevertheless, some debate still surrounds the accuracy of this procedure in terms of false-negative rate. Several large-scale studies have reported a relatively high false-negative rate (5.6%-21%), correctly defined as the proportion of false-negative results with respect to the total number of "actual" positive lymph nodes. In this review, we identified all the technical aspects that the nuclear medicine physician, the surgeon, and the pathologist should take into account to improve accuracy of the procedure and minimize the false-negative rate. In particular, SPECT/CT imaging detects more SLNs than those found by planar lymphoscintigraphy. Furthermore, the nuclear medicine community should reach a consensus on the radioactive counting rate threshold to better guide the surgeon in identifying the lymph nodes with the highest likelihood of housing metastases ("true biologic SLNs"). Analysis of the harvested SLNs by conventional techniques is also a further potential source for error. More accurate SLN analysis (eg, molecular analysis by reverse transcriptase-polymerase chain reaction) and more extensive SLN sampling identify more positive nodes, thus reducing the false-negative rate.The clinical factors identifying patients at higher-risk local recurrence after a negative SLNB include older age at diagnosis, deeper lesions, histological ulceration, and head-neck anatomic location of the primary lesion.The clinical impact of a false-negative SLNB on the prognosis of melanoma patients remains controversial, because the majority of studies have failed to demonstrate overall statistically significant disadvantage in melanoma-specific survival for false-negative SLNB patients compared with true-positive SLNB patients.When new more effective drugs will be available in the adjuvant setting for stage III melanoma patients, the implication of an accurate staging procedure for the sentinel lymph nodes will be crucial for both patients and clinicians. Standardization and accuracy of SLN identification, removal, and analysis are required. abstract_id: PUBMED:24764188 Recurrence of cutaneous melanoma of the head and neck after negative sentinel lymph node biopsy. Background: Sentinel lymph node biopsy remains controversial for cutaneous melanoma of the head and neck (CMHN). Incidence and factors associated with recurrence after negative sentinel lymph node biopsy have not been well delineated. Methods: Characteristics of 204 patients with head and neck melanoma who had a negative sentinel lymph node biopsy were studied. Main outcomes were overall survival and disease-free survival. Results: Recurrence developed in 45 patients (22%) with a median time to recurrence of 20.1 months. Five-year overall survival was 91.8% for patients without recurrence and 57.0% for those with recurrence. The overall regional recurrence rate was 8.8% (n = 18) and was associated with Breslow thickness &gt;2 mm and scalp location of the primary tumor. The false omission rate was 3.4%. Conclusion: For CMHN, regional recurrence after negative sentinel lymph node biopsy occurs at acceptably low rates relative to other sites, although it is associated with adverse survival. abstract_id: PUBMED:21659495 False-negative sentinel lymph node biopsy in head and neck melanoma. Objective: The results of sentinel lymph node biopsy (SLNB) can be useful for staging and deciding on adjuvant treatment for patients with head and neck melanoma. False-negative SLNB can result in treatment delay. This study aimed to evaluate the characteristics and outcome of patients with false-negative SLNB in cutaneous melanoma of the head and neck. Study Design: Longitudinal cohort study using a prospective institutional tumor registry. Setting: Academic health center. Subjects And Methods: Data from 153 patients who underwent SLNB for melanoma of the head and neck were analyzed. False-negative biopsy was defined as recurrence of tumor in a previously identified negative nodal basin. Statistical analysis was performed on registry data. Results: Positive sentinel lymph nodes were identified in 19 (12.4%) patients. False-negative SLNB was noted in 9 (5.9%) patients, with a false-negative SLNB rate of 32.1%. Using multivariate regression analysis, only examination of a single sentinel lymph node was a significant predictor of false-negative SLNB (P = .01). The mean treatment delay for the false-negative SLNB group was 470 days compared with 23 days in the positive SLNB group (P &lt; .001). The 2-year overall survival of patients with false-negative SLNB was 75% compared with 84% and 98% in positive and negative SLNB groups, respectively (P = .02). Conclusions: False-negative SLNB is more likely to occur when a single sentinel lymph node is harvested. There is significant treatment delay in patients with false-negative SLNB. False-negative SLNB is associated with poor outcome in patients with melanoma of the head and neck. abstract_id: PUBMED:11840573 What doctors and patients think about false-negative sentinel lymph nodes in vulvar cancer. The sentinel lymph node procedure is a relatively new, minimally-invasive method for the assessment of nodal status in malignancies such as breast cancer, cutaneous melanoma and vulvar cancer. Although highly accurate, this new method is inevitably associated with a certain false-negative rate, possibly leading to worse survival in a small subset of patients. The clinical implementation of the sentinel lymph node procedure is therefore a matter of ongoing debate, especially among doctors. The aim of this study was to assess opinions on the acceptable false-negative rate of the sentinel lymph node procedure in patients with vulvar cancer, who in the past had undergone standard routine radical vulvectomy and complete inguinofemoral lymphadenectomy (and frequently experienced complications), and in gynecologists treating patients with vulvar cancer. Structured questionnaires were sent to both patients and gynecologists. The patients had been treated for vulvar cancer between 1985 and 1993, and were all in complete remission with a median follow-up of 118 months (range: 76-185). Questions to the patients dealt with experienced side-effects of the standard treatment and opinion on the acceptable false-negative rate of the sentinel lymph node procedure. The response rate among patients was 91% (106/117). Forty per cent of the patients experienced one or more infections in the legs (cellulitis) and 49% of the patients still experience either severe pain and/or severe lymphedema in the legs. Sixty-six per cent of the patients preferred complete inguinofemoral lymphadenectomy in preference to a 5% false-negative rate of the sentinel lymph node procedure of 5%. Their preference was not related to age or the side-effects they had experienced. The response rate among gynecologists was 80% (80/100), of whom 60% were willing to accept a 5-20% false-negative rate of the sentinel lymph node procedure. While gynecologists may consider the sentinel lymph node procedure to be a promising diagnostic tool, the majority of vulvar cancer patients, who have undergone complete inguinofemoral lymphadenectomy in the past and have frequently experienced complications, would not advise introduction of this technique because they do not want to take any risk of missing a lymph node metastasis. abstract_id: PUBMED:28650537 Predictors of false negative sentinel lymph node biopsy in trunk and extremity melanoma. Background: Nodal recurrence following negative sentinel lymph node biopsy (SLNB) for melanoma is known as false-negative (FN) SLNB. Risk factors for FN SLNB among patients with trunk and extremity melanoma have not been well-defined. Methods: After retrospective review, SLNB procedures were classified FN, true positive (TP; positive SLNB), or true negative (TN; negative SLNB without recurrence). Factors associated with high false negative rate (FNR) and low negative predictive value (NPV) were identified by comparing FNs to TPs and TNs, respectively. Survival was evaluated using Kaplan-Meier methods. Results: Of 1728 patients, 234 were TP and 37 were FN for overall FNR of 14% and NPV of 97.5%. Age ≥65 years was independently associated with high FNR (FNR 20% in this group). Breslow thickness &gt;1 mm and ulceration were independently associated with low NPV. Among patients with ulcerated tumors &gt;4 mm, NPV was 88%. Median time to recurrence for FNs was 13 months. Among patients with primary melanomas ≤2 mm in depth, overall and distant disease-free survival were significantly shorter with FN SLNB than TP SLNB. Conclusions: Older age is associated with increased FNR; patients with thick, ulcerated lesions should be considered for increased nodal surveillance after negative SLNB given low NPV in this group. Answer: Disease progression in melanoma patients with a negative sentinel lymph node (SLN) biopsy does not entirely account for false-negative specimens. While false-negative SLN biopsies do contribute to cases of disease progression, they are not the sole reason for this phenomenon. The false-negative rate for SLN biopsies in melanoma patients has been reported to vary, with some studies indicating rates as high as 7.75% to 32.1% (PUBMED:21466591, PUBMED:21659495). However, other factors also play a role in disease progression after a negative SLN biopsy. For instance, the study by PUBMED:21466591 found that 22.7% of patients with a negative SLN biopsy experienced disease progression, with only a part of these cases considered false negatives. The study also identified high-risk melanomas and older age as factors increasing the probability of progression. Similarly, PUBMED:32014933 identified two biological groups among SLN-negative patients with progressive disease (PD), suggesting that different biological mechanisms, such as extravascular migratory metastasis, might be responsible for PD in these patients. Moreover, PUBMED:30190855 highlights that patients who experience disease recurrence following a negative SLN biopsy have worse overall survival compared to those with a positive SLN biopsy, emphasizing the need for further investigation into the characteristics of melanoma recurrence in this patient population. The study by PUBMED:17597345 also notes that a regional nodal basin recurrence after a negative SLN biopsy indicates a worse prognosis compared to patients who undergo complete lymph node dissection after a positive SLN biopsy. This suggests that factors beyond false-negative results, such as the timing and extent of lymph node involvement, may influence disease progression. Furthermore, PUBMED:25677366 reports an acceptably low false-negative rate for SLN biopsy in patients with thick melanoma, indicating that a negative SLN biopsy is associated with significantly prolonged recurrence-free survival, disease-specific survival, and overall survival compared to those with a positive SLN biopsy. In summary, while false-negative SLN biopsies do contribute to disease progression in melanoma patients, they do not entirely account for this phenomenon. Other clinical and biological factors, including tumor characteristics and patient demographics, also play a significant role in disease progression after a negative SLN biopsy (PUBMED:21466591, PUBMED:30190855, PUBMED:17597345, PUBMED:25677366, PUBMED:32014933).
Instruction: Are hemodynamics surrogate end points in pulmonary arterial hypertension? Abstracts: abstract_id: PUBMED:24951771 Are hemodynamics surrogate end points in pulmonary arterial hypertension? Background: Although frequently assessed in trials and clinical practice, hemodynamic response to therapy has never been validated as a surrogate end point for clinical events in pulmonary arterial hypertension (PAH). Methods And Results: We performed a patient-level pooled analysis of 4 randomized, placebo-controlled trials to determine whether treatment-induced changes in hemodynamic values at 12 weeks accounted for the relationship between treatment assignment and the probability of early clinical events (death, lung transplantation, atrial septostomy, PAH hospitalization, withdrawal for clinical worsening, or escalation in PAH therapy). We included 1119 subjects with PAH. The median (interquartile range) age was 48 years (37-59 years), and 23% were men. A total of 656 patients (59%) received active therapy (101 [15%] iloprost, 118 [18%] sitaxsentan, 204 [31%] sildenafil, and 233 [36%] subcutaneous treprostinil). Active treatment significantly lowered right atrial pressure, mean pulmonary artery pressure, and pulmonary vascular resistance and increased cardiac output and index (P&lt;0.01 for all). Changes in hemodynamic values (except for right atrial pressure and mean pulmonary artery pressure) were significantly associated with the risk of a clinical event (P&lt;0.02 for all). Although active treatment approximately halved the odds of a clinical event compared with placebo (P&lt;0.001), changes in hemodynamics accounted for only 1.2% to 13.9% of the overall treatment effect. Conclusions: Treatment-induced changes in hemodynamics at 12 weeks only partially explain the impact of therapy on the probability of early clinical events in PAH. These findings suggest that resting hemodynamics are not valid surrogate end points for short-term events in PAH clinical trials. abstract_id: PUBMED:15459583 Surrogate end points for pulmonary arterial hypertension. Pulmonary arterial hypertension is a rare disease that is characterized by increased pulmonary vascular resistance and right heart failure. Recent advances in the understanding of pulmonary arterial hypertension have been translated into effective therapies tested in clinical trials. These trials have used surrogate end points as the primary outcomes of interest. However, it is not clear which potential surrogate end points are reliable and valid for studying pulmonary arterial hypertension. Identification of suitable end points not only would help investigators design appropriate clinical trials but would assist clinicians in caring for this patient population. Hemodynamic, echocardiographic, neurohormone, and exercise measures hold some promise as potential surrogate end points for clinical trials of therapy for pulmonary arterial hypertension. Hemodynamic measures have the most evidence to support their use. Functional studies, such as the distance walked in 6 minutes, also may be meaningful. We present the available data as well as the strengths and weaknesses of each metric. Further studies should focus on validating the most promising of these surrogate end points, so that future investigators, subjects, and patients may benefit from the advantages they confer on clinical trials and patient care. abstract_id: PUBMED:18625754 Surrogate and combined end points in pulmonary arterial hypertension. Pulmonary arterial hypertension is a rare and often devastating disease, although various effective therapies are now available. Clinical trials have used hemodynamic, cardiac imaging, laboratory, and exercise measurements as surrogate and intermediate end points in pulmonary arterial hypertension. Yet, based on the current literature, it is difficult to surmise which of these (if any) have been definitively validated. In addition, investigators have advocated the use of combined clinical end points in future clinical trials. The dependence of clinical trials and clinical management on such end points warrants a review of their use. abstract_id: PUBMED:17338929 Surrogate end points in pulmonary arterial hypertension: assessing the response to therapy. Recent discoveries in the disease pathophysiology of pulmonary arterial hypertension have been translated into effective therapies tested in clinical trials. The studies have focused on surrogate and intermediate end points, thought to reflect quantity and quality of life, respectively. The authors present the necessary requirements for establishing the reliability and validity of such end points before they may be used dependably. The authors also review the available data, strengths, and weaknesses of potential end points in pulmonary arterial hypertension. abstract_id: PUBMED:28943808 The Role of Biomarkers and Surrogate End Points in Drug Development for Neonatal Pulmonary Arterial Hypertension. Pulmonary arterial hypertension (PAH) is a rare disease in newborns, infants, and children. It is associated with significant morbidity and mortality, but has limited treatment options. Except for inhaled nitric oxide, which is approved for persistent pulmonary hypertension of the newborn (PPHN), no drug is approved for the treatment of newborns, infants, and children with PAH. The lack of developmentally appropriate pediatric efficacy end points and pediatric clinical trials contribute to this unmet medical need. The noninvasive biomarkers reported in the literature that can be used as potential surrogate end points to assess disease severity and treatment response in neonates, infants, and children with PAH are reviewed herein. In addition, the role of the US Food and Drug Administration in developing potential biomarkers as surrogate end points to facilitate drug development for the treatment of children with PPHN and PAH in children is reviewed herein. abstract_id: PUBMED:20819765 Clinical end-points and surrogate markers of pulmonary arterial hypertension in the light of evidence-based treatment Pulmonary arterial hypertension (PAH) is a rare, fatal and progressive disease. There is an acceleration in the advent of new therapies in parallel to the development of the knowledge about etiogenesis and pathogenesis of PAH. Therefore, to optimize the goals of PAH-specific treatment and to determine the time to shift from monotherapy to combination therapy, simple, objective and reproducible end-points, which may predict the disease severity, progression rate and life expectancy are needed. The adventure of end points in PAH has started with six minute walk distance and functional capacity, and continues with new parameters (biochemical marker, time to clinical worsening, echocardiography and magnetic resonance imaging etc.), which can better reflect the clinical outcome. abstract_id: PUBMED:15219011 End points in pulmonary arterial hypertension: the way forward. Pulmonary arterial hypertension is a rare disease of poor prognosis. Despite its rarity &gt;1,000 patients have been randomised in placebo-controlled trials using novel therapies, including prostacyclin analogues, endothelin receptor antagonists and, most recently, phosphodiesterase 5 inhibitors. Nearly all of these trials have used exercise capacity, measured by the unencouraged 6-min walking distance, as the primary end point and a variety of other measurements as secondary end points. This approach has been productive, leading to the licensing of a number of effective treatments. Future clinical trials, however, will probably assess drug combinations, make comparisons between drugs and include less severely ill patients. It is, therefore, timely to examine the end points used. The authors discussed the various end points that have been used in the past and possible end points that might be used in the future. End points considered included measurements of: exercise capacity, haemodynamics, quality of life, imaging of the right heart and circulation, and chemical markers of pulmonary hypertension. Many of these show promise but will have to be used in parallel and compared with conventional end points such as the 6-min walking distance before their value can be demonstrated convincingly to the regulatory authorities. abstract_id: PUBMED:15194178 End points and clinical trial designs in pulmonary arterial hypertension: clinical and regulatory perspectives. To date, randomized controlled clinical trials performed in pulmonary arterial hypertension (PAH) have been relatively short-term studies involving mainly patients with advanced disease. The primary end points in these trials have addressed exercise capacity, usually by using the 6-min walk test. Although this approach is still warranted in future trials assessing new treatments, it is likely that the focus will shift toward trials of longer duration, involving patients with less advanced disease, and that different drugs and drug-combination regimens will be compared. In such trials, it is possible that a composite of markers indicating clinical deterioration (e.g., hospitalization for right heart failure, the requirement for the introduction of an alternative treatment, and predefined indicators of worsening exercise tolerance) may be more useful as primary end points. Quality of life will become a very important issue; however, appropriate quality-of-life questionnaires for PAH have yet to be developed. In addition, hemodynamics will likely remain valuable as secondary end points, but future clinical trials should include hemodynamics obtained both during exercise and at rest. Finally, cardiopulmonary exercise testing, echocardiographic studies, and biochemical parameters, such as brain natriuretic peptide or troponin T, may also prove useful as secondary end points in the future. abstract_id: PUBMED:19555863 End points and clinical trial design in pulmonary arterial hypertension. New and emerging therapies might provide benefit in patients with pulmonary arterial hypertension. Their efficacy and safety will be compared with existing combination therapies in randomized clinical trials. Appropriate end points for these trials need to be identified: these will include exercise testing, the composite end point of time to clinical worsening, and hemodynamic markers, including advanced imaging modalities and biomarkers. Quality-of-life questionnaires are useful and important secondary end points; pulmonary arterial hypertension-specific questionnaires are currently being developed. Advantages and disadvantages of various trial designs, including placebo-controlled monotherapy or add-on trials, noninferiority studies, and withdrawal trials are also discussed. abstract_id: PUBMED:27416322 Reliable and developmentally appropriate study end points are needed to achieve drug development for treatment of pediatric pulmonary arterial hypertension. Objective: To identify suitable end points and surrogates for pediatric pulmonary arterial hypertension (PAH) as the lack of developmentally appropriate end point and clinical trials contribute to the unmet medical need. Study Design: Reviewed the efficacy end points and surrogates for all trials (1995 to 2013) that were submitted to the Food and Drug Administration (FDA) to support the approval of PAH therapy and conducted literature search. Results: An increase in the 6 min walking distance (6MWD) was used as a primary end point in 8/9 adult PAH trials. This end point is not suitable for infants and young children because of performance limitations and lack of control data. One adult PAH trial used time to the first morbidity or mortality event as a primary end point, which could potentially be used in pediatric PAH trials. In the sildenafil pediatric PAH trial, the change in pulmonary vascular resistance index or mean pulmonary artery pressure was used as a surrogate for the 6MWD to assess exercise capacity. However, two deaths and three severe adverse events during the catheterizations made this an unacceptably high-risk surrogate. The INOmax persistent pulmonary hypertension of the newborn trial used a reduction in initiation of extracorporeal membrane oxygenation treatment as a primary end point, which is not feasible for other pediatric PAH trials. A Literature review revealed none of the existing noninvasive markers are fully validated as surrogates to assess PAH efficacy and long-term safety. Conclusions: For pediatric PAH trials, clinical end points are acceptable, and novel validated surrogates would be helpful. FDA seeks collaboration with academia, industry and parents to develop other suitable and possibly more efficient efficacy end points to facilitate pediatric PAH drug development. Answer: Hemodynamics have been frequently assessed in trials and clinical practice for pulmonary arterial hypertension (PAH), but their validity as surrogate end points for clinical events in PAH has been questioned. A patient-level pooled analysis of four randomized, placebo-controlled trials revealed that while active treatment significantly altered hemodynamic values and these changes were associated with the risk of a clinical event, changes in hemodynamics accounted for only a small fraction (1.2% to 13.9%) of the overall treatment effect on the probability of early clinical events. This suggests that resting hemodynamics are not valid surrogate end points for short-term events in PAH clinical trials (PUBMED:24951771). Other studies have also discussed the use of surrogate end points in PAH, indicating that while hemodynamic measures have the most evidence to support their use, it is not clear which potential surrogate end points are reliable and valid for studying PAH (PUBMED:15459583). The literature indicates that it is difficult to determine which hemodynamic, cardiac imaging, laboratory, and exercise measurements, if any, have been definitively validated as surrogate or intermediate end points in PAH (PUBMED:18625754, PUBMED:17338929). Furthermore, the role of biomarkers and surrogate end points in drug development for neonatal PAH has been reviewed, highlighting the lack of developmentally appropriate pediatric efficacy end points and the need for validated surrogates to facilitate drug development (PUBMED:28943808). Clinical end-points and surrogate markers of PAH have been evolving, with a shift from six-minute walk distance and functional capacity to new parameters that may better reflect clinical outcomes (PUBMED:20819765). In summary, while hemodynamic measures are commonly used and have some association with clinical events in PAH, they only partially explain the impact of therapy and are not considered valid surrogate end points for short-term events in PAH clinical trials. There is a need for further validation of potential surrogate end points to assist in the design of clinical trials and patient care in PAH (PUBMED:24951771, PUBMED:15459583, PUBMED:18625754, PUBMED:17338929, PUBMED:28943808, PUBMED:20819765).
Instruction: Can we expand the Milan criteria for hepatocellular carcinoma in living donor liver transplantation? Abstracts: abstract_id: PUBMED:27011489 Living Donor Liver Transplantation Outcomes for Hepatocellular Carcinoma Beyond Milan or UCSF Criteria. Liver transplantation (LT) is the most effective treatment for hepatocellular carcinoma (HCC) that arises from cirrhosis. The Milan and the University of California, San Francisco (UCSF) selection criteria have resulted in major improvements in patient survival. We assessed our outcomes for patients with HCC that were beyond the Milan and UCSF criteria after living donor liver transplantation. We reviewed the data for 109 patients with cirrhosis and HCC who underwent living donor right lobe liver transplantation (living donor liver transplantation; LDLT) during the period from July 2004 to July 2012. Sixteen (14.7 %) patients had HCC recurrences during a mean follow-up of 35.4 ± 26.2 months (range 4-100 months). The mean time to recurrence was 11 ± 9.4 months (range 4-26 months). Survival rates were not significantly different between patients with HCC that met and were beyond the Milan and UCSF criteria (p = 0.761 and p = 0.861, respectively). The Milan and UCSF criteria were not independent risk factors for HCC recurrence or patient survival. Only poorly differentiated tumors were associated with a lower survival rate (OR = 8.656, 95 % confidence interval (CI) 2.01-37.16; p = 0.004). Survival rates for patients with HCC that were beyond conventional selection criteria should encourage reconsidering the acceptable thresholds of these criteria so that more HCC patients may undergo LT without affecting outcomes. abstract_id: PUBMED:34798847 Living donor liver transplantation for hepatocellular carcinoma beyond the Milan criteria: outcome of expanded criteria in tumor size. Background: The Milan criteria are the universal standard of liver transplantation for hepatocellular carcinoma (HCC). Numerous expanded criteria have shown outcomes as good as the Milan criteria. In Taiwan, living donor liver transplant (LDLT) accounts for the majority of transplantations due to organ shortages. Methods: We retrospectively enrolled 155 patients who underwent LDLT for HCC from July 2005 to June 2017 and were followed up for at least 2 years. Patients beyond the Milan criteria (n = 78) were grouped as recurrent or nonrecurrent, and we established new expanded criteria based on these data. Results: Patients beyond the Milan criteria with recurrence (n = 31) had a significantly larger maximal tumor diameter (4.13 ± 1.96 cm versus 6.10 ± 3.41 cm, p = 0.006) and total tumor diameter (7.19 ± 4.13 cm versus 10.21 ± 5.01 cm, p = 0.005). Therefore, we established expanded criteria involving maximal tumor diameter ≤ 6 cm and total tumor diameter &lt; 10 cm. The 5-year survival rate of patients who met these criteria (n = 134) was 77.3%, and the 5-year recurrence rate was 20.5%; both showed no significant differences from those of the Milan criteria. Under the expanded criteria, the pool of eligible recipients was 35% larger than that of the Milan criteria. Conclusion: Currently, patients with HCC who undergo LDLT can achieve good outcomes even when they are beyond the Milan criteria. Under the new expanded criteria, patients can achieve outcomes as good as those with the Milan criteria and more patients can benefit. abstract_id: PUBMED:26722079 Living Donor Liver Transplantation for Hepatocellular Carcinoma within Milan Criteria in the Present Era. Background: The aim of the present study was to clarify the outcome of living donor liver transplantation (LDLT) in patients with hepatocellular carcinoma (HCC) within Milan criteria. Patients And Methods: The study comprised of 197 adult patients. One hundred twenty-nine patients were within Milan criteria. The overall and recurrence-free survival rates after the LDLT were calculated. Results: The 1-, 5- and 10-year overall survival rates were 94.5%, 89.9% and 88.6%, respectively. The 1-, 5- and 10-year recurrence-free survival rates were 100%, 97.0% and 94.0%, respectively. Four patients had HCC recurrence. The mean neutrophil to lymphocyte ratio (NLR) (6.75 vs. 2.75, p=0.002) or alpha-fetoprotein (AFP) (3,239 vs. 197, p&lt;0.001) of these four recipients was significantly higher compared to that of 125 recipients without HCC recurrence. Conclusion: The outcome of LDLT for patients with HCC within Milan criteria was outstanding. Careful follow-up after LDLT is necessary for patients with high NLR or AFP. abstract_id: PUBMED:30166858 Expansion of the hepatocellular carcinoma Milan criteria in liver transplantation: Future directions. Milan criteria are currently the benchmark related to liver transplantation (LT) for hepatocellular carcinoma. However, several groups have proposed different expanded criteria with acceptable results. In this article, we review the current status of LT beyond the Milan criteria in three different scenarios-expanded criteria with cadaveric LT, downstaging to Milan criteria before LT, and expansion in the context of adult living donor LT. The review focuses on three main questions: what would the impact of the expansion beyond Milan criteria be on the patients on the waiting list; whether the dichotomous criteria (yes/no) currently used are appropriate for LT or continuous survival estimations, such as the one of "Metroticket" and whether it should enter into the clinical practice; and, whether the use of living donor LT in the context of expansion beyond Milan criteria is justified. abstract_id: PUBMED:28827903 Predictors of Outcome of Living Donor Liver Transplantation for Hepatocellular Carcinoma. The aim of this work is to study the different factors that affect the outcome of living donor liver transplantation for patients with hepatocellular carcinoma (HCC). Between April 2003 to November 2014, 62 patients with liver cirrhosis and HCC underwent living donor liver transplantation (LDLT) in the National Liver Institute, Menoufia University, Egypt. The preoperative, operative, and postoperative data were analyzed. After studying the pathology of explanted liver; 44 (71 %) patients were within the Milan criteria, and 18 (29 %) patients were beyond Milan; 13 (21.7 %) of patients beyond the Milan criteria were also beyond the University of California San Francisco criteria (UCSF) criteria. Preoperative ablative therapy for HCC was done in 22 patients (35.5 %), four patients had complete ablation with no residual tumor tissues. Microvascular invasion was present in ten patients (16 %) in histopathological study. Seven (11.3 %) patients had recurrent HCC post transplantation. The 1, 3, 5 years total survival was 88.7, 77.9, 67.2 %, respectively, while the tumor-free survival was 87.3, 82.5, 77.6 %, respectively. Expansion of selection criteria beyond Milan and UCSF had no increased risk effect on recurrence of HCC but had less survival rate than patients within the Milan criteria. Microvascular invasion was an independent risk factor for tumor recurrence. abstract_id: PUBMED:26430399 Can living donor liver transplantation offer similar outcomes to deceased donor liver transplantation using expanded selection criteria for hepatocellular carcinoma? Objective: To compare the outcomes of living donor liver transplantation (LDLT) versus deceased donor liver transplantation (DDLT) for patients with hepatocellular carcinoma (HCC) in different selection criteria. Methods: Data of patients with HCC who underwent liver transplantation between 2005 and 2013 at our center were reviewed. Clinical data of LDLT recipients and DDLT recipients were compared. The postoperative recurrence-free survival (RFS) rate and overall survival (OS) rate after LDLT versus DDLT were compared in the Milan recipients, the University of California, San Francisco (UCSF) recipients, the up-to-seven recipients, the Hangzhou recipients and the Chengdu recipients. Results: Data of 255 patients were retrospectively reviewed in this study. Seventeen DDLT recipient and 9 LDLT recipients died during the perioperative period. Among the remaining 229 recipients (NLDLT=66, NDDLT=163), 96 patients met the Milan criteria, 123 recipients met the UCSF criteria, 135 patients met the up-to-seven criteria, 216 patients met the Hangzhou criteria, and 229 recipients met the Chengdu criteria. The overall RFS and OS rates of the Milan recipients, the UCSF recipients, the up-to-seven recipients, the Hangzhou recipients and the Chengdu recipients after LDLT and DDLT were all similar. Conclusion: Using well-studied selection criteria, LDLT offers similar outcomes to DDLT for patient with HCC, even using expanded selection criteria. abstract_id: PUBMED:27358787 Proposal of new expanded selection criteria using total tumor size and (18)F-fluorodeoxyglucose - positron emission tomography/computed tomography for living donor liver transplantation in patients with hepatocellular carcinoma: The National Cancer Center Korea criteria. Aim: To expand the living donor liver transplantation (LT) pool of eligible patients with hepatocellular carcinoma (HCC) using new morphological and biological criteria. Methods: Patients with HCC who underwent living donor LT (LDLT) from March 2005 to May 2013 at the National Cancer Center Korea (NCCK) were enrolled. We performed the (18)F-fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) before LDLT. Overall and disease-free survival analysis was done in patients to evaluate the usefulness of new NCCK criteria using PET/CT and total tumor size (10 cm). Results: We enrolled a total of 280 patients who pathologically confirmed to have HCC and performed the PET/CT before transplantation. Among them, 164 (58.6%) patients fulfilled the NCCK criteria and 132 patients (47.1%) met the Milan criteria. Five-year overall and disease-free survival rates for patients who fulfilled the NCCK criteria showed 85.2% and 84.0%, respectively, and were significantly higher than those beyond the NCCK criteria (60.2% and 44.4%, respectively; P &lt; 0.001). The correlation analysis between preoperative imaging tests and pathologic reports using Cohen's Kappa demonstrated the better results in the NCCK criteria than those in the Milan criteria (0.850 vs 0.583). The comparison of disease-free analysis among the NCCK, Milan, and University of California, San Francisco (UCSF) criteria using the receiver operating characteristics curves revealed the similar area under the curve value criteria (NCCK vs Milan, P = 0.484; NCCK vs UCSF, P = 0.189 at 5-years). Conclusion: The NCCK criteria using hybrid concept of both morphological and biological parameters showed an excellent agreement between preoperative imaging and pathological results, and favorable survival outcomes. These new criteria might select the optimal patients with HCC waiting LDLT and expand the selection pool. abstract_id: PUBMED:32857265 Do We Need to Be Limited by Matching Milan Criteria for Survival in Living Donor Liver Transplantation? Purpose: Hepatocellular carcinoma (HCC) is the second leading cause of cancer deaths and the 7th most common cancer. It has two characteristic features: being advanced stage at diagnosis and association with liver cirrhosis. Liver transplantation (LT) offers the only curative option to treat both components of the disease. The Milan criteria have been extensively used for selecting patients with HCC for LT. However, using Milan criteria, we can only transplant 30% of the patients. The aim of the present review is to evaluate the role of LT in HCC beyond the Milan criteria. Methods: We evaluated the studies that have introduced extended criteria to select patients with HCC beyond the Milan criteria. We evaluated the outcomes in terms of disease-free survival rates and HCC recurrences. Results: There are patients with tumors that are beyond Milan criteria that could benefit from LT. Selection of these patients has paramount importance in the era of living donor liver transplantation. Current expanded criteria depend on either the bulk of the tumor or the additional surrogate markers of tumor biology such as alpha-fetoprotein (AFP) and des-gamma carboxyprothrombin (DCP). Conclusion: There is no ideal marker or an extended criterion for selecting patients with HCC beyond the Milan criteria and it needs further research to find an effective biomarker that has prognostic significance to select patients with advanced tumors. abstract_id: PUBMED:15561222 Can we expand the Milan criteria for hepatocellular carcinoma in living donor liver transplantation? Introduction: The Milan criteria, namely, tumors 5 cm or less in diameter in patients with single hepatocellular carcinoma (HCC), no more than 3 tumor nodules, and each 3 cm or less in diameter in patients with multiple tumors, are accepted for cadaveric liver allocation. However, in living donor liver transplantation (LDLT), graft donation may only depend on the donor's intention. The aim of this study was to elucidate the feasibility of Milan criteria in LDLT. Materials And Methods: From January 2001 to December 2002, 46 cases of liver transplantation (LT) for HCC included 5 hospital mortalities and 3 cadaveric transplantations, all of which were excluded. We classified the patients into Group I cases that met the Milan criteria and Group II cases that did not meet the Milan criteria. The analyses examined tumor-related risk factors affecting recurrence and survival, such as tumor size, number of tumor nodules, and presence of microvascular and macrovascular invasion. Results: Twenty-one cases belonged to Group I and 17 to Group II. There was no significant difference in the recurrence or survival rates between Groups I and II. The risk factors affecting recurrence were macrovascular invasion and tumor size (5 cm). The number of tumor nodules and microvascular invasion did not appear to affect recurrence. The risk factor affecting survival was macrovascular invasion. Conclusion: We suggest that in selected cases the Milan criteria could be extended to increase the number of tumor nodules as long as the HCC were small and did not macrovascular invasion. abstract_id: PUBMED:17960066 HCC in living donor liver transplantation: can we expand the Milan criteria? Background: The tumor biology of hepatocellular carcinoma (HCC) affects recurrence after liver transplantation (LT), but most selection guidelines are based only on tumor size and number. The aim of the study is to evaluate the possibility of expanding the selection criteria in living donor LT (LDLT) without compromising patient survival by adding alpha-fetoprotein (AFP) in selection guideline. Methods: One hundred thirty-nine patients who received LDLT with the diagnosis of HCC and survived more than 3 months were enrolled. The operability was based on Milan criteria but LT beyond the criteria was performed when requested by the patients and/or the guardian after thorough explanation. Results: The median follow-up duration was 28 months. One-, three- and five-year survival rates were 92.2, 82.6, and 79.9%. There was no survival difference between patients within or beyond Milan (p = 0.76). Serum AFP level &gt;400 ng/ml, tumor size &gt;5 cm, and vascular invasion were significant on univariate analysis, but only vascular invasion was significant on multivariate analysis (p = 0.007). Patients with &gt;3 tumor nodules had better survival compared to &lt;or=3 nodules (p = 0.196). Patient selection using tumor size &lt;or=5 cm and AFP &lt;or=400 ng/ml without limitation of tumor numbers could expand patient selection and improve patient survival. Conclusion: Application of serum AFP level to selection of HCC for LT affords better patient selection criteria. Answer: Yes, there is evidence suggesting that the Milan criteria for hepatocellular carcinoma (HCC) in living donor liver transplantation (LDLT) can be expanded. Several studies have proposed different expanded criteria with acceptable outcomes, indicating that patients with HCC beyond the Milan criteria can still achieve good survival rates after LDLT. For instance, one study found that survival rates were not significantly different between patients with HCC that met and were beyond the Milan and UCSF criteria, suggesting that the conventional selection criteria may not be independent risk factors for HCC recurrence or patient survival (PUBMED:27011489). Another study established new expanded criteria based on maximal tumor diameter ≤ 6 cm and total tumor diameter < 10 cm, which resulted in a 5-year survival rate of 77.3% and a 5-year recurrence rate of 20.5%, both showing no significant differences from those of the Milan criteria (PUBMED:34798847). Furthermore, research has shown that outcomes of LDLT for patients with HCC within Milan criteria are outstanding, but careful follow-up is necessary for patients with high neutrophil to lymphocyte ratio (NLR) or alpha-fetoprotein (AFP) (PUBMED:26722079). Other studies have proposed the use of additional markers of tumor biology, such as AFP and des-gamma carboxyprothrombin (DCP), to select patients with HCC beyond the Milan criteria (PUBMED:32857265). The National Cancer Center Korea (NCCK) criteria, which use a hybrid concept of both morphological and biological parameters, have also shown favorable survival outcomes and could expand the selection pool (PUBMED:27358787). Additionally, a study suggested that the Milan criteria could be extended to increase the number of tumor nodules as long as the HCC were small and did not have macrovascular invasion (PUBMED:15561222). In summary, expanding the Milan criteria for HCC in LDLT is supported by various studies that have identified alternative or additional criteria that could allow more patients to benefit from transplantation without compromising survival outcomes.
Instruction: Is antioxidant use protective of cognitive function in the community-dwelling elderly? Abstracts: abstract_id: PUBMED:33620005 The relationship between activity level and cognitive function in Chinese community-dwelling elderly. To study the relationship between daily activity level and cognitive function in community-dwelling elderly. We collected demographic features, cognitive function, activity level and self-rating depression scale scores in 53 community-dwelling olderly aged 60 years or above. The activity level and moderate-to-vigorous physical activity (MVPA) time were assessed by using the accelerometer for 7 consecutive days. We compared activity level, MVPA time and depression scores between cognitive impaired and normal groups. Cognitive functions were compared in groups with different MVPA level, and the correlation between cognitive function and MVPA time was analysed. Of the 53 subjects, 27 had varying degrees of cognitive impairment. Individuals with cognitive impairment shown significantly shorter MVPA time and higher depression score compared to the cognitive normal group (P &lt; 0.05). After controlling for confounding factors (age, BMI), MVPA time was associated with cognitive function (r = 0.358, P = 0.009). The memory factor score correlated with MVPA time (r = 0.357, P = 0.012) and mean activity level (r = 0.287, P = 0.046). Moderate-to-vigorous physical activity in the elderly was positively related to their cognitive function. Strengthening daily activities may beneficial for the elderly to maintain better cognitive function. abstract_id: PUBMED:15555461 Is antioxidant use protective of cognitive function in the community-dwelling elderly? Background: The role of oxidative stress in the pathogenesis of diseases such as macular degeneration, certain types of cancer, and Alzheimer's disease has received much attention. Thus, there is considerable interest in the potential contribution of antioxidants to the prevention of these diseases. Objective: The objective of this study was to determine whether use of supplemental antioxidants (vitamins A, C, or E, plus selenium or zinc) was associated with a reduced risk of development of cognitive impairment or cognitive decline in a representative sample of the community-dwelling elderly. Methods: The sample consisted of 2082 nonproxy subjects from the Duke Established Populations for Epidemiologic Studies of the Elderly who were not cognitively impaired at the 1989-1990 interview (baseline for the present analysis). Medication use was determined during in-home interviews. Cognitive function was assessed 3 and 7 years from baseline in terms of incident cognitive impairment, as measured on the Short Portable Mental Status Questionnaire (SPMSQ) using specific cut points (number of errors) based on race and education, and cognitive decline, defined as an increase of &gt; or = 2 errors on the SPMSQ. Multivariate analyses were performed using weighted data adjusted for sampling design and controlled for sociodemographic characteristics, health-related behaviors, and health status. Results: At baseline, 224 (10.8%) subjects were currently taking a supplement containing an antioxidant. During the follow-up period, 24.0% of subjects developed cognitive impairment and 34.5% experienced cognitive decline. Current antioxidant users had a 34.0% lower risk of developing cognitive impairment compared with non-antioxidant users (adjusted relative risk [RR], 0.66; 95% CI, 0.44-1.00) and a 29.0% lower risk of experiencing cognitive decline (adjusted RR, 0.71; 95% CI, 0.49-1.01). Conclusion: The results of this analysis suggest a possible beneficial effect of antioxidant use in terms of reducing cognitive decline among the community-dwelling elderly. abstract_id: PUBMED:37303887 Association between self-reported oral health and cognitive function among the community-dwelling elderly in Jinan, China: the mediating effect of life satisfaction. Background: Deterioration of self-reported oral health and decline in cognitive function are two main adverse health outcomes experienced by the older adults. Little evidence was found on the psychosocial mechanism between self-reported oral health and cognitive function. This study explores the association between self-reported oral health and cognitive function and examines the mediating effect of life satisfaction among the community-dwelling elderly in Jinan, China. Methods: A total of 512 older individuals (60+) were included in the study. Cognitive function was assessed using the Chinese version of the Mini-Mental State Examination scale (MMSE), and self-reported oral health was measured using the Chinese version of the Geriatric Oral Health Assessment Index (GOHAI). Pearson correlation analysis was used to determine the relationship between self-reported oral health, life satisfaction, and cognitive function. Multivariate linear regression analysis was conducted to explore the possible effect of covariates. Structural equation modelling and Bootstrap analyses were conducted to verify the mediating role of life satisfaction. Results: The mean MMSE score was 25.65 ± 4.42. Better self-reported oral health was significantly associated with a higher level of life satisfaction, and those with higher life satisfaction experienced better cognitive function. Age, educational level and source of living expenses were found to be cofounding variables. Life satisfaction partially mediated the effect of self-reported oral health on cognitive function (95% confidence interval [CI]: 0.010 to 0.075). The mediating effect of life satisfaction accounted for 24% of the total effect. Conclusion: The level of cognitive function was relatively high. Self-reported oral health was positively associated with cognitive function, and the mediating effect of life satisfaction was proven to exist in the community-dwelling older individuals. Early screening for oral diseases and a greater focus on life satisfaction are recommended. abstract_id: PUBMED:38326578 Pattern of cognitive impairment among community-dwelling elderly in Egypt and its relation to socioeconomic status. Background: Cognitive decline is one of the aging health problems that strongly affects daily functioning and quality of life of older adults and threatens their independence. The aim of this study was to assess the prevalence and pattern of cognitive impairment (CI) among community-dwelling elderly in Egypt and the contribution of socioeconomic status to inequality in cognitive impairment. Methods: A cross-sectional study involved 470 community-dwelling elderly aged 60 years or older living in Kafr El-Sheikh Governorate, Egypt. Subjects were recruited from home visits, geriatric clubs, and outpatient clinics. The Montreal Cognitive Assessment tools (MoCA &amp; MoCA-B) were used to assess the prevalence of cognitive impairment, Hachinski ischemic score (HIS) to investigate the type of cognitive impairment, Ain Shams Cognitive Assessment (ASCA) tool to assess the pattern of specific cognitive domain affection, and an Egyptian socioeconomic status (SES) scale to classify the SES of the study participants. Results: The prevalence of cognitive impairment was 50.2% distributed as 37.7% for mild cognitive impairment (MCI) and 12.5% for dementia. The most common type of cognitive impairment was the degenerative type (47.9%). Pattern of specific domain affection among cognitively impaired subjects ranged from 94% for visuospatial function to 12.7% for abstraction. Cognitive impairment was significantly higher with increasing age, female sex, marital status (single or widow), low education, higher number of comorbidities, and positive family history of cognitive impairment (p &lt; 0.001). Also, cognitive impairment was concentrated mainly among participants with low socioeconomic score (p &lt; 0.001). Conclusion: In Egypt, cognitive impairment is significantly prevalent and concentrated among those who are in low socioeconomic status. Patients with mild CI were more than those with dementia, and the most common type of CI was the degenerative type. Increasing educational level of low SES population and improving their access to healthcare services are highly recommended to improve the inequity of cognitive impairment. abstract_id: PUBMED:36148108 Community support and promoting cognitive function for the elderly. Proper cognitive functions are critical to the life of the elderly. With the rapid aging of the population, community support plays an important role in cognitive functioning. This study examines the association between community support and the level of cognitive functioning in the elderly, and the mediating effect of social participation in the relationship. Based on the panel data of China Longitudinal Healthy Longevity Survey (CLHLS) in 2005, 2008, 2011, 2014, and 2018, people aged 65 and over are selected as the research object (N = 35,479). The panel Logit model is used to analyze the influence of community support on their cognitive functioning. In addition, the stepwise regression and KHB decomposition methods are used to test the influence mechanism of community support on their cognitive function. The benchmark regression results show that there is a significant correlation between community support and cognitive function in the elderly (OR: 1.64, 95% CI: 1.41-1.91, p &lt; 0.01). Daily care (OR: 1.75, 95% CI: 1.33-2.29, p &lt; 0.01) has the strongest impact on the cognitive function of the elderly, followed by health care (OR: 1.70, 95% CI: 1.43-2.01, p &lt; 0.01) and legal support (OR: 1.64, 95% CI: 1.37-1.95, p &lt; 0.01), while psychological care (OR: 1.62, 95% CI: 1.31-2.01, p &lt; 0.01) has the weakest impact on the cognitive function of the elderly. The results of the mediation effect test show that social participation plays a significant intermediary role in the impact of community support on the cognitive function of the elderly (mediation percentage: 16.89%), demonstrating that community support can improve the cognitive function of the elderly by promoting the social participation of the elderly. In classified community support, social participation plays a significant intermediary role in the impact of psychological care on cognition (mediation percentage: 46.10%). abstract_id: PUBMED:36497729 Relationships of Physical Activity, Depression, and Sleep with Cognitive Function in Community-Dwelling Older Adults. This cross-sectional, observational study aimed to integrate the analyses of relationships of physical activity, depression, and sleep with cognitive function in community-dwelling older adults using a single model. To this end, physical activity, sleep, depression, and cognitive function in 864 community-dwelling older adults from the Suwon Geriatric Mental Health Center were assessed using the International Physical Activity Questionnaire, Montgomery-Asberg Depression Rating Scale, Pittsburgh Sleep Quality Index, and Mini-Mental State Examination for Dementia Screening, respectively. Their sociodemographic characteristics were also recorded. After adjusting for confounders, multiple linear regression analysis was performed to investigate the effects of physical activity, sleep, and depression on cognitive function. Models 4, 5, 7, and 14 of PROCESS were applied to verify the mediating and moderating effects of all variables. Physical activity had a direct effect on cognitive function (effect = 0.97, p &lt; 0.01) and indirect effect (effect = 0.36; confidence interval: 0.18, 0.57) through depression. Moreover, mediated moderation effects of sleep were confirmed in the pathways where physical activity affects cognitive function through depression (F-coeff = 13.37, p &lt; 0.001). Furthermore, these relationships differed with age. Thus, the associations among physical activity, depression, and sleep are important in interventions for the cognitive function of community-dwelling older adults. Such interventions should focus on different factors depending on age. abstract_id: PUBMED:31874118 Dysphagia is associated with oral, physical, cognitive and psychological frailty in Japanese community-dwelling elderly persons. Objective: To investigate which domains of frailty are associated with dysphagia in community-dwelling elderly persons. Background: Dysphagia is a common cause of aspiration pneumonia in frail elderly persons in Japan. Although frailty is considered to be multidimensional, it is unclear which domains of frailty affect dysphagia. Methods: The participants were 3475 independent Japanese elderly persons (≥65 years; 1555 men and 1920 women). A self-report questionnaire with a frailty checklist consisting of several domains (lifestyle, physical function, nutrition, oral function, homebound status, cognitive function and depressive mood) was used to determine the participants' characteristics. Dysphagia was defined as impaired swallowing in the oral function domain. To determine the associations between dysphagia and age, sex and the other domains on the frailty checklist, adjusted odds ratios (AORs) and 95% confidence intervals (95% CIs) were calculated using multiple logistic regression analysis. Results: The multiple logistic regression analysis showed that dysphagia was independently associated with female sex (AOR = 1.35; 95% CI = 1.08-1.68), chewing ability (AOR = 1.70; 95% CI = 1.32-2.18), oral dryness (AOR = 1.94; 95% CI = 1.48-2.54), physical function (AOR = 2.19; 95% CI = 1.66-2.90), cognitive function (AOR = 1.68; 95% CI = 1.34-2.12) and depressive mood (AOR = 1.82; 95% CI = 1.41-2.35). Conclusion: Dysphagia was independently associated with oral, physical, cognitive and psychological frailty. These results suggest that frailty prevention strategy including swallowing training might be useful for community-dwelling independent elderly persons aged ≥65 years. abstract_id: PUBMED:38088994 Effects of Home-Based Computerized Cognitive Training in Community-Dwelling Adults With Mild Cognitive Impairment. Objective: There is a growing importance for the home-based (HB) support services, and computerized cognitive training (CCT) has been reported as an effective intervention for cognitive impairment. However, there is still a need for further verification of the effect of HB-CCT. This study aimed to determine the effectiveness of HB-CCT on the cognitive function of community-dwelling adults with mild cognitive impairment (MCI) as well as safety in its use. Methods: Fifty community-dwelling adults with MCI were included, of which 25 each were randomized to either HB-CCT or control groups. Evaluations of comprehensive cognition, memory, attention, language, executive function, and depression were performed before and after the intervention, including three times a week for eight weeks in the intervention group and eight weeks apart with no intervention in the control group. Results: In baseline and post-evaluation comparisons, the HB-CCT group showed significant improvements, while the control group did not show significant changes. Statistically significant variations were noted between the HB-CCT and control groups in all post-intervention evaluations relative to baseline. Additionally, no side effects were observed. Conclusion: Beneficial effects on cognition and depression were noted in the intervention group compared with the control group, suggesting that HB-CCT may be a positive tool for cognitive improvement in adults with MCI. abstract_id: PUBMED:30378340 Cognitive Impairments and Associated Factors in Community-dwelling Elderly in Chengdu, Sichuan Objective: To determine the prevalence and associated factors of cognitive impairments in the community-dwelling elderly aged 60 years or older in Chengdu of Sichuan province. Methods: A random cluster sampling strategy was adopted to select 621 community-dwelling elderly. Face-to-face interviews were conducted to assess the cognitive status of the participants. 2 tests and logistic regression analyses were performed to identify factors associated with cognitive impairments. Results: About 40.9% of the participants had cognitive impairments. Those attended primary schools had a lower risk of cognitive impairments [odds ratio OR)=0.369, P&lt;0.001] compared with the illiterate ones. Older age OR=1.505 for 70-79 years, P=0.042; OR=3.069 for ≥80 years, P&lt;0.001), cerebrovascular disease OR=2.159, P=0.003) and smoking OR=2.388, P&lt;0.001) were risk factors of cognitive impairments. Men had lower risk OR=0.489, P=0.005) of cognitive impairments than women. Conclusion: The prevalence of cognitive impairments in community-dwelling elderly in Chengdu is high in comparison with those in other cities. Illiteracy, older age (over 70 years), women, smoking, and cerebrovascular disease are risk factors of cognitive impairments. abstract_id: PUBMED:34362220 Analyses of Swallowing Function and Its Related Factors in Community-Dwelling Elderly Patients: A Case-Control Study. This retrospective case-control study evaluated the prevalence of declined swallowing function and the association with oral functions and gender in community-dwelling elderly patients. Their profiles, the results of swallowing function (Eating Assessment Tool: EAT-10) and other oral functions (oral dryness, maximum occlusal force (MOF), tongue-lip motor function (oral diadochokinesis: ODK), maximum tongue pressure (MTP) and masticatory performance (MP)) were extracted for analyses. The patients were categorized into three groups according to EAT-10 score (Group 1: 0, Group 2: 1 and 2, Group 3: ≥3). In total, 242 patients were enrolled and 46 of them (19.0%) were categorized into declined swallowing function (Group 3). In two-group comparisons (Group 1, 2 versus Group 3), significant differences were identified in age and the number of remaining teeth, but they were not identified in three-group comparisons. The patients with declined swallowing function (Group 3) had significantly lower function in ODK and MTP. Multiple logistic regression analyses identified that declined swallowing function was independently associated with declined functions in ODK /ka/ (OR: 5.31, 95% CI: 1.03-27.23, p = 0.04) and in MTP (OR: 2.74, 95% CI: 1.12-6.66, p = 0.03). This study could confirm the critical role of tongue functions in swallowing in community-dwelling elderly patients. Answer: Yes, antioxidant use appears to be protective of cognitive function in the community-dwelling elderly. A study found that current antioxidant users had a 34.0% lower risk of developing cognitive impairment compared with non-antioxidant users, and a 29.0% lower risk of experiencing cognitive decline (PUBMED:15555461). This suggests a possible beneficial effect of antioxidant use in terms of reducing cognitive decline among this population.
Instruction: Dementia related aggression in the acute sector: is a Code Black really the answer? Abstracts: abstract_id: PUBMED:16594887 Dementia related aggression in the acute sector: is a Code Black really the answer? Objective: This paper presents the findings of one facet of a larger study that explored the management of people with dementia as inpatients of acute care. The aim of the study is to provide a comprehensive multiprofessional view of the experience of providing care to people with dementia in an acute setting. Design: Descriptive exploratory study. Setting: Three acute care hospitals in South Australia. Participants: 25 health professionals who regularly provided care to people admitted to hospital with concurrent dementia. Results: The information gathered provided a rich understanding of the frailty, dependency and unpredictable behaviour of people living with some form of dementia and how that creates tensions in a place that is environmentally, culturally and resourcefully bereft of what is required for truly effective care. Conclusion: The study showed that people with dementia are increasingly being admitted to hospital across almost every specialty and that many staff are ill prepared for meeting their needs. The heavily medically dominated environment of an acute care system lends itself to increased confusion and agitation in these people that may rapidly escalate to aggression and violence. Due to various shortfalls in that system, staff will often resort to restraint in managing disruptive behaviour--which is not in the best interest of the patient or the health care system. The way forward is to foster a multifactorial approach to a 'dementia friendly' system of care supported by education. abstract_id: PUBMED:1976621 Pharmacologic management of aggression in the elderly. Issues related to the epidemiology, nosology, and differential diagnosis of organic aggressive disorders in the elderly are discussed. Rating and monitoring of aggressive events using the Overt Aggression Scale are reviewed. The management of acute aggression with antipsychotic agents and benzodiazepines is presented, as well as an approach for using beta-blockers, anticonvulsant agents, and serotonin-specific antidepressants and an antianxiety agent for the treatment of chronic aggression. abstract_id: PUBMED:7999540 Dementia in acute units: aggression. This is the fourth article in a series which looks at how nurses in acute units can manage the behaviour problems presented by people with dementia. This month, the focus is on aggression, a topic which causes nurses extreme concern. The author discusses some of the precipitating factors which can cause a person with dementia to become aggressive while in hospital, then sets out some appropriate management strategies. abstract_id: PUBMED:10466910 Estrogen for dementia-related aggression in elderly men. Objective: To report the use of conjugated estrogens for aggression in two elderly men with dementia and to review the available literature relating to estrogen use for aggression in the elderly. Case Summaries: Case 1. A 78-year-old white man had probable Alzheimer dementia and aggression (verbal and physical) that was nonresponsive to antipsychotic, antidepressant, and mood stabilizer therapy. However, conjugated estrogens 1.875 mg/d reduced his physical aggression, as evidenced by a 75% decline in aggressive episodes, per 24-hour nursing reports and progress notes. Case 2. A 78-year-old African-American man with vascular dementia, physical aggression, and sexual inappropriateness was unsuccessfully managed with several antipsychotic and benzodiazepine trials. He did, however, exhibit a decline in physical and sexual aggression by 80%, as well as a 55% reduction in sexual comments, after daily treatment with conjugated estrogens 0.625 mg. Estrogen was well tolerated by both patients. Data Sources: MEDLINE (January 1966-May 1998) and PsychLit (January 1974-May 1998) database searches were performed to identify case reports, letters, or clinical trials discussing estrogen use in aggressive elderly patients. Conclusions: As with previous reports, these cases suggest that conjugated estrogens may be used to reduce physical and sexual aggression associated with dementia in elderly men. Additional study is needed to determine the exact role of this alternative pharmacologic intervention. abstract_id: PUBMED:18494535 Elderly patients with dementia-related symptoms of severe agitation and aggression: consensus statement on treatment options, clinical trials methodology, and policy. Objective: Atypical antipsychotic drugs have been used off label in clinical practice for treatment of serious dementia-associated agitation and aggression. Following reports of cerebrovascular adverse events associated with the use of atypical antipsychotics in elderly patients with dementia, the U.S. Food and Drug Administration (FDA) issued black box warnings for several atypical antipsychotics titled "Cerebrovascular Adverse Events, Including Stroke, in Elderly Patients With Dementia." Subsequently, the FDA initiated a metaanalysis of safety data from 17 registration trials across 6 antipsychotic drugs (5 atypical antipsychotics and haloperidol). In 2005, the FDA issued a black box warning regarding increased risk of mortality associated with the use of atypical antipsychotic drugs in this patient population. Participants: Geriatric mental health experts participating in a 2006 consensus conference (Bethesda, Md., June 28-29) reviewed evidence on the safety and efficacy of antipsychotics, as well as nonpharmacologic approaches, in treating dementia-related symptoms of agitation and aggression. EVIDENCE/CONSENSUS PROCESS: The participants concluded that, while problems in clinical trial designs may have been one of the contributors to the failure to find a signal of drug efficacy, the findings related to drug safety should be taken seriously by clinicians in assessing the potential risks and benefits of treatment in a frail population, and in advising families about treatment. Information provided to patients and family members should be documented in the patient's chart. Drugs should be used only when nonpharmacologic approaches have failed to adequately control behavioral disruption. Participants also agreed that there is a need for an FDA-approved medication for the treatment of severe, persistent, or recurrent dementia-related symptoms of agitation and aggression (even in the absence of psychosis) that are unresponsive to nonpharmacologic intervention. Conclusions: This article outlines methodological enhancements to better evaluate treatment approaches in future registration trials and provides an algorithm for improving the treatment of these patients in nursing home and non-nursing home settings. abstract_id: PUBMED:34025202 Measurement of aggression in older adults. Aggressive behaviors are prevalent in late-life and are associated with important consequences for older adults, caregivers, and healthcare providers. Age-related changes in the manifestation of aggression are precipitated in part by the rise of cognitive impairment. Such changes necessitate the use of psychometrically sound measures. The present article identifies existing measures of aggression for older adults, highlights the strengths and limitations of these measures, and proposes avenues for future research in this area. Five full-scale measures of aggression, as well as five subscales of aggression embedded within larger non-aggression measures in older adults were identified. Overall, measures of aggression specific to late-life are predominately observational and limited to individuals with dementia or older adults living in long-term care settings. The psychometric properties of aggression scales in late-life generally indicate adequate internal consistency, interrater reliability, and concurrent validity. In contrast, the reliability and validity of subscales of aggression contained within larger neuropsychiatric measures are more difficult to ascertain due to limited research. Future investigations would benefit from examining the psychometric properties of widely-used self-report measures of aggression among older adults, further evaluating the psychometric properties of aggression subscales, and developing additional measures which are predictive of aggressive behaviors. abstract_id: PUBMED:32402574 A review of measures of three common dementia-related behaviors: Rejection of care, aggression, and agitation. Background: Clustering of behavioral symptoms in dementia is common in dementia scales. However, lack of distinction may have negative treatment implications when a treatment response differs depending on classification of behaviors. Historically, rejection of care, aggression, and agitation have been lumped together. Yet, several studies have indicated these may be conceptually different behaviors. Objective: To examine how rejection of care, aggression, and agitation are described and operationalized in existing measures of dementia-related behaviors. Method: We identified instruments developed to measure behavioral symptoms of dementia from two existing systematic reviews. Additionally, we conducted a literature review of peer-reviewed articles published from 1980 to 2017 to identify measures that were not captured in the two previous reviews. Results: 43 instruments developed to measure behavioral symptoms of dementia were examined. Of these, 25 (58.1%) included items related to rejection of care; 32 (74.4%) included items related to aggression; and 35 (81.4%) included agitation items. Descriptions of these behaviors were highly variable across the instruments. Conclusions: The review demonstrated that rejection, aggression, and agitation are measured in most scales, yet their operationalization is highly variable, and they are not typically distinguished from each other. Future efforts should be directed at developing uniform terminology to describe dementia-related behaviors while distinguishing rejection of care, aggression, and agitation in dementia scales. abstract_id: PUBMED:33013301 Potential Pathways for Circadian Dysfunction and Sundowning-Related Behavioral Aggression in Alzheimer's Disease and Related Dementias. Patients with Alzheimer's disease (AD) and related dementias are commonly reported to exhibit aggressive behavior and other emotional behavioral disturbances, which create a tremendous caretaker burden. There has been an abundance of work highlighting the importance of circadian function on mood and emotional behavioral regulation, and recent evidence demonstrates that a specific hypothalamic pathway links the circadian system to neurons that modulate aggressive behavior, regulating the propensity for aggression across the day. Such shared circuitry may have important ramifications for clarifying the complex interactions underlying "sundowning syndrome," a poorly understood (and even controversial) clinical phenomenon in AD and dementia patients that is characterized by agitation, aggression, and delirium during the late afternoon and early evening hours. The goal of this review is to highlight the potential output and input pathways of the circadian system that may underlie circadian dysfunction and behavioral aggression associated with sundowning syndrome, and to discuss possible ways these pathways might inform specific interventions for treatment. Moreover, the apparent bidirectional relationship between chronic disruptions of circadian and sleep-wake regulation and the pathology and symptoms of AD suggest that understanding the role of these circuits in such neurobehavioral pathologies could lead to better diagnostic or even preventive measures. abstract_id: PUBMED:2664199 Impaired black physicians: a methodology for detection and rehabilitation. The author contends that the great majority of black physicians lead happy and productive lives. Regrettably, black physicians must not only cope with stress related to marriage, finances, parenting, and other daily problems, but they must also cope with institutional racism. Consequently, some black physicians become impaired. This article discusses issues related to the problem of impairment in black physicians and suggests a methodology for detection and rehabilitation. It is the author's belief that future intervention should be culturally compatible and that black physicians themselves must bolster the effort to prevent impairment. At an empirical level, the author brings together the sparse amount of epidemiological and cultural data related to the subject. abstract_id: PUBMED:32052375 Management of Dementia-Related Psychosis, Agitation and Aggression: A Review of the Pharmacology and Clinical Effects of Potential Drug Candidates. Along with cognitive decline, 90% of patients with dementia experience behavioral and psychological symptoms of dementia, such as psychosis, aggression, agitation, and depression. Atypical antipsychotics are commonly prescribed off-label to manage certain symptoms, despite warnings from the regulatory agencies regarding the increased risk of mortality associated with their use in elderly patients. Moreover, these compounds display a limited clinical efficacy, mostly owing to the fact that they were developed to treat schizophrenia, a disease characterized by neurobiological deficits. Thus, to improve clinical efficacy, it has been suggested that patients with dementia should be treated with exclusively designed and developed drugs that interact with pharmacologically relevant targets. Within this context, numerous studies have suggested druggable targets that might achieve therapeutically acceptable pharmacological profiles. Based on this, several different drug candidates have been proposed that are being investigated in clinical trials for behavioral and psychological symptoms of dementia. We highlight the recent advances toward the development of therapeutic agents for dementia-related psychosis and agitation/aggression and discuss the relationship between the relevant biological targets and their etiology. In addition, we review the compounds that are in the early stage of development (discovery or preclinical phase) and those that are currently being investigated in clinical trials for dementia-related psychosis and agitation/aggression. We also discuss the mechanism of action of these compounds and their pharmacological utility in patients with dementia. Answer: The management of dementia-related aggression in the acute sector presents significant challenges, and resorting to a Code Black, which typically refers to a security or emergency response to a threat in a healthcare setting, may not be the most effective or appropriate solution. The findings from the study presented in abstract PUBMED:16594887 suggest that the acute care environment, which is heavily medically dominated, can increase confusion and agitation in patients with dementia, potentially escalating to aggression and violence. The study indicates that many staff members are ill-prepared to meet the needs of patients with dementia, and due to system shortfalls, they often resort to restraint, which is not in the best interest of the patient or the healthcare system. The paper advocates for a multifactorial approach to creating a 'dementia-friendly' system of care supported by education. Other abstracts highlight various strategies and considerations for managing aggression in elderly patients with dementia. For instance, abstract PUBMED:1976621 discusses the pharmacologic management of aggression, including the use of antipsychotic agents, benzodiazepines, beta-blockers, anticonvulsant agents, and serotonin-specific antidepressants. Abstract PUBMED:7999540 focuses on understanding precipitating factors for aggression and outlines appropriate management strategies. Abstract PUBMED:10466910 reports on the use of conjugated estrogens for aggression in elderly men with dementia, suggesting it may reduce physical and sexual aggression. Abstract PUBMED:18494535 emphasizes the importance of considering drug safety and the need for an FDA-approved medication for treating severe dementia-related symptoms of agitation and aggression that are unresponsive to nonpharmacologic intervention. Abstract PUBMED:34025202 and PUBMED:32402574 discuss the measurement of aggression and the need for distinguishing between different dementia-related behaviors, respectively. Lastly, abstract PUBMED:33013301 explores the potential pathways for circadian dysfunction and sundowning-related behavioral aggression in dementia, which could inform specific interventions for treatment. In conclusion, while a Code Black may be a response to immediate threats, it is not a comprehensive solution to dementia-related aggression in the acute sector. A multifaceted approach that includes staff education, nonpharmacologic interventions, careful pharmacologic management, and consideration of the underlying causes of aggression is necessary to effectively address this complex issue.
Instruction: Carrier frequency of autosomal-recessive disorders in the Ashkenazi Jewish population: should the rationale for mutation choice for screening be reevaluated? Abstracts: abstract_id: PUBMED:18264947 Carrier frequency of autosomal-recessive disorders in the Ashkenazi Jewish population: should the rationale for mutation choice for screening be reevaluated? Background: Ashkenazi Jewish (AJ) population is at increased risk for several recessive inherited diseases. Therefore, carrier testing of AJ members is important in order to identify couples at risk of having offspring with an autosomal recessive disorder. Methods: In the present study, a database containing the results of 28 410 genotyping assays was screened. Ten thousand seventy eight nonselected healthy members of the AJ population were tested for carrier status for the following diseases; Gaucher disease (GD), cystic fibrosis (CF), Familial dysautonomia (FD), Alpha 1 antitrypsin (A1AT), Mucolipidosis type 4 (ML4), Fanconi anemia type C (FAC), Canavan disease (CD), Neimann-Pick type 4 (NP) and Bloom syndrome (BLM). Results: The results demonstrated that 635 members were carriers of one mutation and 30 members were found to be carriers of two mutations in the different genes related to the development of the above mentioned diseases. GD was found to have the highest carrier frequency (1:17) followed by CF (1:23), FD (1:29), A1AT (1:65), ML4 (1:67) and FAC (1:77). The carrier frequency of CD, NP and BLM was 1:82, 1:103 and 1:157, respectively. Conclusions: The frequency of the disease-causing mutations screened routinely among the AJ population indicated that there are rare mutations with very low frequencies. The screening policy of the disease-causing mutations should be reevaluated and mutations with a high frequency should be screened, while rare mutations with a lower frequency may be tested in partners of carriers. abstract_id: PUBMED:10464671 Ashkenazi Jewish population frequency of the Bloom syndrome gene 2281 delta 6ins7 mutation. Bloom syndrome is an autosomal recessive disorder characterized clinically by small size, sun-sensitive facial erythema, and immunodeficiency, and cytogenetically by increased chromosome breakage and sister chromatid exchange. Genomic instability renders Bloom syndrome patients at elevated risk for multiple cancers. Bloom syndrome occurs most commonly in the Ashkenazi Jewish population due to an apparent founder effect. The BLM gene on chromosome 15q26.1 was identified to encode a RecQ DNA helicase. Multiple mutations were identified, with Ashkenazi Jewish Bloom syndrome patients almost exclusively homozygous for a complex frameshift mutation (6-bp deletion/7-bp insertion at BLM nucleotide 2,281). This molecular genetic study seeks to verify the Ashkenazi Jewish carrier frequency of the BLM 2281 delta 6ins7 allele using semiautomated allele-specific oligonucleotide (ASO) analysis. Anonymized DNA samples from 1,016 Ashkenazi Jewish individuals and 307 non-Jewish individuals were screened. Ten Ashkenazi heterozygote carriers for the 2281 delta 6ins7 mutation were identified, giving a carrier frequency estimate of 0.98%, or approximately 1 carrier out of 102 individuals in the Ashkenazi Jewish population. These results are consistent with previous estimates, and combining our findings with the published molecular data collectively yields an Ashkenazi Jewish carrier frequency of approximately 1 in 104. Given its high population frequency and detection rate among Ashkenazi Jewish patients, the blmAsh mutation constitutes an appropriate addition to screening panels for Ashkenazi Jewish disease testing. abstract_id: PUBMED:25802884 Carrier frequencies of eleven mutations in eight genes associated with primary ciliary dyskinesia in the Ashkenazi Jewish population. Primary ciliary dyskinesia (PCD) is a genetically heterogeneous, autosomal recessive disorder that results from functional and ultrastructural abnormalities of motile cilia. Patients with PCD have diverse clinical phenotypes that include chronic upper and lower respiratory tract infections, situs inversus, heterotaxy with or without congenital heart disease, and male infertility, among others. In this report, the carrier frequencies for eleven mutations in eight PCD-associated genes (DNAI1, DNAI2, DNAH5, DNAH11, CCDC114, CCDC40, CCDC65, and C21orf59) that had been found in individuals of Ashkenazi Jewish descent were investigated in order to advise on including them in existing clinical mutation panels for this population. Results showed relatively high carrier frequencies for the DNAH5 c.7502G&gt;C mutation (0.58%), the DNAI2 c.1304G&gt;A mutation (0.50%), and the C21orf59 c.735C&gt;G mutation (0.48%), as well as lower frequencies for mutations in DNAI1, CCDC65, CCDC114, and DNAH11 (0.10-0.29%). These results suggest that several of these genes should be considered for inclusion in carrier screening panels in the Ashkenazi Jewish population. abstract_id: PUBMED:15316959 Mutation frequencies for glycogen storage disease Ia in the Ashkenazi Jewish population. Glycogen storage disease type Ia (GSDIa) is a severe autosomal recessive disorder caused by deficiency of the enzyme D-glucose-6-phosphatase (G6Pase). While numerous mutations have been found in cosmopolitan European populations, Ashkenazi Jewish (AJ) patients appear to primarily carry the R83C mutation, but possibly also the Q347X mutation found generally in Caucasians. To determine the frequency for both these mutations in the AJ population, we tested 20,719 AJ subjects for the R83C mutation and 4,290 subjects for the Q347X mutation. We also evaluated the mutation status of 30 AJ GSDIa affected subjects. From the carrier screening, we found 290 subjects with R83C, for a carrier frequency for this mutation of 1.4%. This carrier frequency translates into a predicted disease prevalence of 1 in 20,000, five times higher than for the general Caucasian population, confirming a founder effect and elevated frequency of GSDIa in the AJ population. We observed no carriers of the Q347X mutation. Among the 30 GSDIa affected AJ subjects, all were homozygous for R83C. These results indicate that R83C is the only prevalent mutation for GSDIa in the Ashkenazi population. abstract_id: PUBMED:7492758 Carrier frequency of the IVS4 + 4 A--&gt;T mutation of the Fanconi anemia gene FAC in the Ashkenazi Jewish population. Fanconi anemia (FA) is a genetically and phenotypically heterogeneous autosomal recessive disorder defined by a cellular hypersensitivity to DNA cross-linking agents. One of the FA genes, FAC, has been cloned and the genomic structure of the coding region has been characterized. We have developed amplification refractory mutation system (ARMS) assays for five known mutations in FAC, and have applied these assays to determine the carrier frequency of the IVS4 + 4 A--&gt;T (IVS4) mutation in an Ashkenazi Jewish population. We tested 3,104 Jewish individuals, primarily of Ashkenazi descent, for the two most common FAC mutations, IVS4 and 322delG. Thirty-five IVS4 carriers were identified, for a carrier frequency of 1 in 89 (1.1%; 95% confidence interval 0.79% to 1.56%); no 322delG carriers were found. To determine if the IVS4 mutation was confined to the Ashkenazi Jewish population, we tested 563 Iraqi Jews for IVS4, and no carriers were found. Because the IVS4 mutation has only been found on chromosomes of Ashkenazi Jewish origin and is the only FAC mutation found on these chromosomes, we suggest that a founder effect is responsible for the high frequency of this mutation. With a carrier frequency greater than 1% and simple testing available, the IVS4 mutation merits inclusion in the battery of tests routinely provided to the Jewish population. abstract_id: PUBMED:35943032 Expanding the phenotypic spectrum of COLEC10-Related 3MC syndrome: A glimpse into COLEC10-Related 3MC syndrome in the Ashkenazi Jewish population. Bi-allelic variants in COLEC11 and MASP1 have been associated with 3MC syndrome, a clinical entity made of up four rare autosomal recessive disorders: Carnevale, Mingarelli, Malpuech, and Michels syndromes, characterized by variable expression of facial dysmorphia, cleft lip/palate, postnatal growth deficiency, hearing loss, cognitive impairment, craniosynostosis, radioulnar synostosis, and genital and vesicorenal anomalies. More recently, bi-allelic variants in COLEC10 have been described to be associated with 3MC syndrome. Syndromic features seen in 3MC syndrome are thought to be due to disruption of the chemoattractant properties that influence neural crest cell migration. We identified nine individuals from five families of Ashkenazi Jewish descent with homozygosity of the c.311G &gt; T (p.Gly104Val) variant in COLEC10 and phenotype consistent with 3MC syndrome. Carrier frequency was calculated among 52,278 individuals of Jewish descent. Testing revealed 400 carriers out of 39,750 individuals of Ashkenazi Jewish descent, giving a carrier frequency of 1 in 99 or 1.01%. Molecular protein modeling suggested that the p.Gly104Val substitution alters local conformation. The c.311G &gt; T (p.Gly104Val) variant likely represents a founder variant, and homozygosity is associated with features of 3MC syndrome. 3MC syndrome should be in the differential diagnosis for individuals with short stature, radioulnar synostosis, cleft lip and cleft palate. abstract_id: PUBMED:29144512 Ashkenazi Jewish genomic variants: integrating data from the Israeli National Genetic Database and gnomAD. Purpose: The aim of the study was to compare the data for mutations related to clinical disorders reported among Ashkenazi Jewish patients in the Israeli National Genetic Database (INGD) with variants included in the Genome Aggregation Database (gnomAD). Methods: We extracted data for mutations claimed to cause disorders reported among Ashkenazi Jews from the INGD and searched gnomAD for each of them. We compared the allele frequency of each variant in Ashkenazi Jews with that of other delineated populations. Results: Of the 58 INGD-reported mutations related to autosomal-dominant disorders, 19 were present in gnomAD (32.8%). Of the 309 mutations related to autosomal-recessive disorders, 240 (77.7%) were variants found in gnomAD. Of these variants, 202 (84.2%) were documented among one or more Ashkenazi individuals. At this point in the INGD, there are 168 Ashkenazi assumed founder mutations in 128 different genes corresponding to 111 autosomal-recessive disorders. Conclusion: Integration of information on mutations among Ashkenazi Jews extracted from the INGD with their population frequency recorded in gnomAD is important for effective straightforward molecular diagnosis as well as for targeted carrier screening either for reproductive decision-making or for implementation of disease-modifying behavior. abstract_id: PUBMED:26334176 Expanded genetic screening panel for the Ashkenazi Jewish population. Purpose: Carrier screening programs that identify the presence of known mutations have been effective for reducing the incidence of autosomal recessive conditions in the Ashkenazi Jewish (AJ) population and other populations. Yet, these programs have not realized their full potential. Furthermore, many known autosomal recessive and dominant conditions are not screened for and the molecular basis of other conditions for which screening might be offered is unknown. Methods: Through literature review and annotation of full sequenced genomes from healthy individuals, we expanded the list of mutations. Mutations were identified in a sample of 128 fully sequenced AJ genomes that were filtered through clinical databases and curated manually for clinical validity and utility using the American College of Medical Genetics and Genomics scoring (ACMG) system. Other known mutations were identified through literature review. Results: A panel of 163 mutations was identified for 76 autosomal recessive, 24 autosomal dominant, and 3 X-linked disorders. Conclusion: Screening for a broader range of disorders not only could further reduce the incidence of autosomal recessive disorders but also could offer the benefits of early or presymptomatic diagnosis.Genet Med 18 5, 522-528. abstract_id: PUBMED:22943850 The adult polyglucosan body disease mutation GBE1 c.1076A&gt;C occurs at high frequency in persons of Ashkenazi Jewish background. Mutations of the glycogen branching enzyme gene, GBE1, result in glycogen storage disease (GSD) type IV, an autosomal recessive disorder having multiple clinical forms. One mutant allele of this gene, GBE1 c.1076A&gt;C, has been reported in Ashkenazi Jewish cases of an adult-onset form of GSD type IV, adult polyglucosan body disease (APBD), but no epidemiological analyses of this mutation have been performed. We report here the first epidemiological study of this mutation in persons of Ashkenazi Jewish background and find that this mutation has a gene frequency of 1 in 34.5 (95% CI: 0.0145-0.0512), similar to the frequency of the common mutation causing Tay-Sachs disease among Ashkenazi Jews. This finding reveals APBD to be another monogenic disorder that occurs with increased frequency in persons of Ashkenazi Jewish ancestry. abstract_id: PUBMED:21059483 The GALT rush: high carrier frequency of an unusual deletion mutation of the GALT gene in the Ashkenazi population. Classic galactosemia is an autosomal recessive disorder of galactose metabolism manifesting in the first weeks of life following exposure to a milk-based diet. Despite the benefit of avoidance of lactose, many patients suffer from long-term complications including neurological deficits and ovarian failure. To date, over 230 mutations have been described in the GALT gene resulting in galactosemia. Recently, an unusual mutation was characterized causing a 5.5 kb deletion, with a relatively high carrier rate in subjects of Ashkenazi Jewish (AJ) descent. The aim of this study was to estimate the carrier frequency of this mutation in the AJ population in Israel. For this purpose we developed a high-throughput methodology to genotype both normal and deleted alleles using a chip-based matrix-assisted laser desorption-time-of-flight (MALDI-TOF) mass spectrometer and Multiplex PCR. DNA samples of 760 anonymous AJ subjects were submitted for analysis, subsequently detecting six individuals heterozygous for the GALT deletion mutation, giving a carrier frequency of 1 in 127 (0.79%). Based on these results, we suggest that the method described here provides a basis for genetic screening and prenatal counseling and can potentially reduce the morbidity and mortality associated with delayed diagnosis of galactosemia in this patient population. Answer: The carrier frequency of autosomal recessive disorders in the Ashkenazi Jewish (AJ) population has been extensively studied, and the data suggest that the rationale for mutation choice for screening may indeed need to be reevaluated. The AJ population is at increased risk for several recessive inherited diseases, and carrier testing is important for identifying couples at risk of having offspring with an autosomal recessive disorder (PUBMED:18264947). For example, Gaucher disease (GD) was found to have the highest carrier frequency (1:17), followed by cystic fibrosis (CF) (1:23), Familial dysautonomia (FD) (1:29), Alpha 1 antitrypsin (A1AT) (1:65), Mucolipidosis type 4 (ML4) (1:67), and Fanconi anemia type C (FAC) (1:77) (PUBMED:18264947). The carrier frequency of Bloom syndrome (BLM) was found to be approximately 1 in 104 (PUBMED:10464671). Other disorders, such as primary ciliary dyskinesia, glycogen storage disease Ia (GSDIa), and Fanconi anemia, also show significant carrier frequencies in the AJ population (PUBMED:25802884, PUBMED:15316959, PUBMED:7492758). The carrier frequency of the GBE1 c.1076A>C mutation, associated with adult polyglucosan body disease (APBD), was found to be 1 in 34.5 in the AJ population, which is similar to the frequency of the common mutation causing Tay-Sachs disease among Ashkenazi Jews (PUBMED:22943850). Additionally, a high carrier frequency (1 in 127) was observed for a deletion mutation of the GALT gene, which causes classic galactosemia (PUBMED:21059483). Given the high frequency of certain disease-causing mutations and the availability of genetic screening technologies, there is a strong argument for reevaluating the screening policy. It may be more effective to focus on mutations with a high frequency while considering partner testing for rarer mutations (PUBMED:18264947). Moreover, integrating data from databases like the Israeli National Genetic Database and gnomAD can aid in refining the mutation panels used for carrier screening in the AJ population (P gnomAD (PUBMED:29144512). An expanded genetic screening panel for the AJ population identified a panel of 163 mutations for a range of autosomal recessive, autosomal dominant, and X-linked disorders, suggesting that broader screening could further reduce the incidence of these disorders and offer benefits of early or presymptomatic diagnosis (PUBMED:26334176). The carrier frequency of the COLEC10 c.311G>T (p.Gly104Val) variant, associated with 3MC syndrome, was found to be 1 in 99 or 1.01% among individuals of Ashkenazi Jewish descent, indicating that this variant likely represents a founder variant in this population (PUBMED:35943032). Overall, the data suggest that the current mutation panels used for carrier screening in the AJ population may need to be updated to include a broader range of disorders and to prioritize mutations based on their frequency and clinical impact. This could lead to more effective carrier screening programs, better reproductive decision-making, and the potential for disease-modifying interventions.