input
stringlengths 6.82k
29k
|
---|
Instruction: Can urine bound diagnostic tests replace cystoscopy in the management of bladder cancer?
Abstracts:
abstract_id: PUBMED:9598479
Can urine bound diagnostic tests replace cystoscopy in the management of bladder cancer? Purpose: We compare the diagnostic value of NMP22 and BTA stat testing, and QUANTICYT computer assisted dual parameter image analysis to cytology and cystoscopy in patients who had symptoms suggestive of transitional cell cancer or were being followed after treatment for that disease.
Materials And Methods: We prospectively evaluated voided urine and/or barbotage specimens from 291 patients a mean of 65.2 years old. All voided urine samples were evaluated by quick staining and standard cytology, the BTA stat 1-step qualitative assay (which detects a bladder tumor associated antigen) and the NMP22 test (which detects a nuclear mitotic apparatus protein). In addition, barbotage specimens were evaluated by QUANTICYT computer assisted dual parameter image analysis. All patients underwent subsequent cystoscopy and biopsy evaluation of any suspicious lesion. Sensitivity, specificity, and the predictive value of positive and negative results were determined in correlation with endoscopic and histological findings.
Results: In 91 patients with histologically proved transitional cell carcinoma overall sensitivity was 48, 57, 58, 59 and 59% for the NMP22 test, the BTA stat test, rapid staining cytology of barbotage samples, rapid staining cytology of voided urine specimens and image analysis, respectively. For histological grades 1 to 3 underlying transitional cell carcinoma sensitivity was 17, 61 and 90% for urinary cytology, 48, 58 and 63% for the BTA stat test, and 52, 45 and 50% for the NMP22 test, respectively. Specificity was 100% for cytology, 93% for image analysis, 70% for the NMP22 test and 68% for the BTA stat test.
Conclusions: Immunological markers are superior to cytological evaluation and image analysis for detecting low grade transitional cell carcinoma but they have low specificity and sensitivity in grade 3 transitional cell carcinoma. Urine bound diagnostic tools cannot replace cystoscopy.
abstract_id: PUBMED:11493772
Can biological markers replace cystoscopy? An update. Cystoscopy is currently considered the gold standard for the detection of bladder tumors. The role of urine cytology in the initial detection and follow-up of patients is under discussion. New elaborative and rapid assays are available that may circumvent the low sensitivity and poor reproducibility of urine cytology. The methods that have been tested extensively are the nuclear matrix protein (NMP22) assay, the BTA stat assay, and the BTA TRAK enzyme-linked immunosorbent assay. Both outperform cytology in the detection of low-grade lesions. The specificity of both assays, however, lags behind that of cytology. The data from retrospective analyses are insufficient to justify clinical integration, and the need to replace cystoscopy with these novel assays remains to be proven.
abstract_id: PUBMED:16153203
Urine cytology after flexible cystoscopy. Objective: To correlate urine cytology findings before and after flexible cystoscopy.
Patients And Methods: A total of 153 patients undergoing surveillance for bladder tumour provided voided urine for cytology before and immediately after flexible cystoscopy.
Results: Of the 153 patients, 116 had negative urine cytology before and after (96%) a visibly normal cystoscopy and 37 had positive urine cytology before and after cystoscopy that showed recurrent tumour.
Conclusions: Urine cytology immediately after flexible cystoscopy correlates well with results of urine cytology before cystoscopy.
abstract_id: PUBMED:30283995
Can urinary biomarkers replace cystoscopy? Purpose: Diagnosis and follow-up in patients with non-muscle invasive bladder cancer (NMIBC) rely on cystoscopy and urine cytology. The aim of this review paper is to give an update on urinary biomarkers and their diagnosis and surveillance potential. Besides FDA-approved markers, recent approaches like DNA methylation assays, mRNA gene expression assays and cell-free DNA (cfDNA) are evaluated to assess whether replacing cystoscopy with urine markers is a potential scenario for the future.
Methods: We performed a non-systematic review of current literature without time period restriction using the National Library of Medicine database ( http://ww.pubmed.gov ). The search included the following key words in different combinations: "urothelial carcinoma", "urinary marker", "hematuria", "cytology" and "bladder cancer". Further, references were extracted from identified articles. The results were evaluated regarding their clinical relevance and study quality.
Results: Currently, replacing cystoscopy with available urine markers is not recommended by international guidelines. For FDA-approved markers, prospective randomized trials are lacking. Newer approaches focusing on molecular, genomic and transcriptomic aberrations are promising with good accuracies. Furthermore, these assays may provide additional molecular information to guide individualized surveillance strategies and therapy. Currently ongoing prospective trials will determine if cystoscopy reduction is feasible.
Conclusion: Urinary markers represent a non-invasive approach for molecular characterization of the disease. Although fully replacing cystoscopy seems unrealistic in the near future, enhancing the current gold standard by additional molecular information is feasible. A reliable classification and differentiation between aggressive and nonaggressive tumors by applying DNA, mRNA, and cfDNA assays may change surveillance to help reduce cystoscopies.
abstract_id: PUBMED:22122739
Evaluation of diagnostic strategies for bladder cancer using computed tomography (CT) urography, flexible cystoscopy and voided urine cytology: results for 778 patients from a hospital haematuria clinic. Unlabelled: Study Type - Diagnostic (exploratory cohort) Level of Evidence 2b What's known on the subject? and What does the study add? Haematuria clinics with same day imaging and flexible cystoscopy are an efficient way for investigating patients with haematuria. The principal role of haematuria clinics with reference to bladder cancer is to determine which patients are 'normal' and may be discharged, and which patients are abnormal and should undergo rigid cystoscopy. It is well recognised that CT urography offers a thorough evaluation of the upper urinary tract for stones, renal masses and urothelial neoplasms but the role of CT urography for diagnosing bladder cancer is less certain. The aim of the present study was to evaluate the diagnostic accuracy of CT urography in patients with visible haematuria aged >40 years and to determine if CT urography has a role for diagnosing bladder cancer. This study shows that the optimum diagnostic strategy for investigating patients with visible haematuria aged >40 years with infection excluded is a combined strategy using CT urography and flexible cystoscopy. Patients positive for bladder cancer on CT urography should be referred directly for rigid cystoscopy and so avoid flexible cystoscopy. The number of flexible cystoscopies required therefore may be reduced by 17%. The present study also shows that the diagnostic accuracy of voided urine cytology is too low to justify its continuing use in a haematuria clinic using CT urography and flexible cystoscopy.
Objectives: To evaluate and compare the diagnostic accuracy of computed tomography (CT) urography with flexible cystoscopy and voided urine cytology for diagnosing bladder cancer. To evaluate diagnostic strategies using CT urography as: (i) an additional test or (ii) a replacement test or (iii) a triage test for diagnosing bladder cancer in patients referred to a hospital haematuria rapid diagnosis clinic.
Patients And Methods: The clinical cohort consisted of a consecutive series of 778 patients referred to a hospital haematuria rapid diagnosis clinic from 1 March 2004 to 17 December 2007. Criteria for referral were at least one episode of macroscopic haematuria, age >40 years and urinary tract infection excluded. Of the 778 patients, there were 747 with technically adequate CT urography and flexible cystoscopy examinations for analysis. On the same day, patients underwent examination by a clinical nurse specialist followed by voided urine cytology, CT urography and flexible cystoscopy. Voided urine cytology was scored using a 5-point system. CT urography was reported immediately by a uroradiologist and flexible cystoscopy performed by a urologist. Both examinations were scored using a 3-point system: 1, normal; 2, equivocal; and 3, positive for bladder cancer. The reference standard consisted of review of the hospital imaging and histopathology databases in December 2009 for all patients and reports from the medical notes for those referred for rigid cystoscopy. Follow-up was for 21-66 months.
Results: The prevalence of bladder cancer in the clinical cohort was 20% (156/778). For the diagnostic strategy using CT urography as an additional test for diagnosing bladder cancer, when scores of 1 were classified as negative and scores of 2 and 3 as positive, sensitivity was 1.0 (95% confidence interval [CI] 0.98-1.00), specificity was 0.94 (95% CI 0.91-0.95), the positive predictive value (PPV) was 0.80 (95% CI 0.73-0.85) and the negative predictive value (NPV) was 1.0 (95% CI 0.99-1.00). For the diagnostic strategy using CT urography as a replacement test for flexible cystoscopy for diagnosing bladder cancer, when scores of 1 were classified as negative and scores of 2 and 3 as positive, sensitivity was 0.95 (95% CI 0.90-0.97), specificity was 0.83 (95% CI 0.80-0.86), the PPV was 0.58 (95% CI 0.52-0.64), and the NPV was 0.98 (95% CI 0.97-0.99). Similarly using flexible cystoscopy for diagnosing bladder cancer, if scores of 1 were classified as negative and scores of 2 and 3 as positive, sensitivity was 0.98 (95% CI 0.94- 0.99), specificity was 0.94 (95% CI 0.92-0.96), the PPV was 0.80 (95% CI 0.73-0.85) and the NPV was 0.99 (95% CI 0.99-1.0). For the diagnostic strategy using CT urography and flexible cystoscopy as a triage test for rigid cystoscopy and follow-up (option 1), patients with a positive CT urography score are referred directly for rigid cystoscopy, and patients with an equivocal or normal score were referred for flexible cystoscopy. Sensitivity was 1.0 (95% CI 0.98-1.0), specificity was 0.94 (95% CI 0.91-0.95), the PPV was 0.80 (95% CI 0.73-0.85), and the NPV was 1.0 (95% CI 0.99-1.0). For the diagnostic strategy using CT urography and flexible cystoscopy as a triage test for rigid cystoscopy and follow-up (option 2), patients with a positive CT urography score are referred directly for rigid cystoscopy, patients with an equivocal score are referred for flexible cystoscopy and patients with a normal score undergo clinical follow-up. Sensitivity was 0.95 (95% CI 0.90-0.97), specificity was 0.98 (95% CI 0.97-0.99), the PPV was 0.93 (95% CI 0.87-0.96), and the NPV was 0.99 (95% CI 0.97-0.99). For voided urine cytology, if scores of 0-3 were classified as negative and 4-5 as positive for bladder cancer, sensitivity was 0.38 (95% CI 0.31-0.45), specificity was 0.98 (95% CI 0.97-0.99), the PPV was 0.82 (95% CI 0.72-0.88) and the NPV was 0.84 (95% CI 0.81-0.87).
Conclusions: There is a clear advantage for the diagnostic strategy using CT urography and flexible cystoscopy as a triage test for rigid cystoscopy and follow-up (option 1), in which patients with a positive CT urography score for bladder cancer are directly referred for rigid cystoscopy, but all other patients undergo flexible cystoscopy. Diagnostic accuracy is the same as for the additional test strategy with the advantage of a 17% reduction of the number of flexible cystoscopies performed. The sensitivity of voided urine cytology is too low to justify its continuing use in a hospital haematuria rapid diagnosis clinic using CT urography and flexible cystoscopy.
abstract_id: PUBMED:23106855
Diagnostic tests in urology: urine cytology. What's known on the subject? and What does the study add? Urine cytology is frequently used by urologists to evaluate patients with microscopic or gross haematuria. The results of urine cytology can be used as impetus to perform or triage further diagnostic studies, e.g. cystoscopy. The impact of urine cytology results on patient care warrants clarifying. This evidence-based medicine article explores how positive or negative urine cytology will impact the probability that a patient has urothelial carcinoma of the bladder before cystoscopy.
abstract_id: PUBMED:32494260
Diagnostic accuracy of NMP 22 and urine cytology for detection of transitional cell carcinoma urinary bladder taking cystoscopy as gold standard. Objective: To determine diagnostic accuracy of NMP 22 and urine cytology in the detection of transitional cell carcinoma (TCC) urinary bladder taking cystoscopy as a gold standard in patients having provisional diagnosis of bladder cancer (BC).
Methods: This cross sectional validational study enrolled 380 patients fulfilling selection criteria and was conducted at Armed Forces Institute of Urology (AFIU) Rawalpindi, Pakistan form July 2018 to July 2019. The urine sample collected underwent NMP22 and cytological analysis followed by rigid cystoscopy. Reports of all three tests divided patients into positive or negative for malignancy as per defined criteria. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and diagnostic accuracy of NMP 22, urine cytology and their combination was determined. Receiver operating characteristic (ROC) curve analysis performed and area under the curve (AUC) compared among these tests.
Results: The average age of patients was 53.08 ± 12.41 years having male to female ratio 3.75:1(300 males and 80 females). NMP 22 had better sensitivity and comparable specificity to cytology (81.9 & 81.2% vs 54 & 93.9%). Combination of NMP 22 / cytology outperformed both in terms of sensitivity (91.63 vs 81.83 vs 53.96), NPV (87.59 vs 77.46 vs 61.02) and diagnostic accuracy (85.26 vs 81.58 vs 71.32) but at the cost of specificity (76.97 vs 81.21 vs 93.94) and PPV (83.83 vs 85.02 vs 92.06). ROC curve revealed statistically significant higher AUC (0.843 vs .815 vs .73) for combination as compared to NMP 22 and Cytology (p < 0.001).
Conclusion: NMP22 is a quick, point of care test having higher sensitivity, NPV and accuracy but similar specificity and PPV to urine cytology for detection of TCC urinary bladder. Combination outperformed both in terms of sensitivity while having modest specificity.
abstract_id: PUBMED:9116739
Can the combination of bladder ultrasonography and urinary cytodiagnosis replace cystoscopy in the diagnosis and follow-up of tumors of the bladder? Objectives: Cystoscopy is currently the reference examination for the diagnosis and surveillance of bladder tumours (BT). However, this examination remains unpleasant for the patient, despite the development of flexible cystoscopes. Among the many diagnostic methods performed in combination with cystoscopy, the authors decided to evaluate the performances of the combination of ultrasonography+urine cytology in the diagnosis and follow-up of bladder tumours.
Methods: This prospective study included 124 cases in the context of postoperative surveillance of BT (86) or aetiological assessment of haematuria (38). All patients were assessed by cystoscopy, suprapubic vesical ultrasonography, and urine cytology.
Result: Cystoscopy revealed a bladder tumour in 30 patients. Urine cytology had a sensitivity of 53% and a negative predictive value (NPV) of 86%. Vesical ultrasonography had a sensitivity of 50% and an NPV of 85%. The false-positive and false-negative results of ultrasonography and urine cytology make these examinations unreliable when considered separately. The combination of ultrasonography and urine cytology had an overall sensitivity of 80% and an NPV of 93%. However, analysis of the group of patients undergoing postoperative surveillance for BT showed that although the combination of the two examinations had a diagnostic sensitivity of 100% in the case of high-grade tumour or CIS, this value was only 66% for low-grade tumours. The authors review other methods of bladder tumour diagnosis, but none of them appears to have demonstrated a sufficient reliability at the present time.
Conclusion: The diagnostic sensitivity of the combination of ultrasonography and urine cytology, accurate but not recommended in high-risk patients with a high-grade BT, does not appear to be sufficient for systematic surveillance of patients with low-grade BT, despite the low risk of recurrence.
abstract_id: PUBMED:21684071
Urine markers for detection and surveillance of non-muscle-invasive bladder cancer. Context: Bladder cancer diagnosis and surveillance includes cystoscopy and cytology. The limitation of urinary cytology is its low sensitivity for low-grade recurrences. As of now, six urine markers are commercially available to complement cystoscopy in the detection of bladder cancer. Several promising tests are under investigation.
Objective: In this nonsystematic review, we summarize the existing data on commercially available and promising investigational urine markers for the detection of bladder cancer.
Evidence Acquisition: A PubMed search was carried out. We reviewed the recent literature on urine-based markers for bladder cancer. Articles were considered between 1997 and 2011. Older studies were included selectively if historically relevant.
Evidence Synthesis: Although different studies have shown the superiority of urine markers regarding sensitivity for bladder cancer detection as compared with cytology, none of these tests is ideal and can be recommended unrestrictedly.
Conclusions: Urine markers have been studied extensively to help diagnose bladder cancer and thereby decrease the need for cystoscopy. However, no marker is available at present that can sufficiently warrant this. Several urinary markers have higher but still insufficient sensitivity compared with cytology. Urinary cytology or markers cannot safely replace cystoscopy in this setting. To identify an optimal marker that can delay cystoscopy in the diagnosis of bladder cancer, large prospective and standardized studies are needed.
abstract_id: PUBMED:27746284
Validation of a DNA Methylation-Mutation Urine Assay to Select Patients with Hematuria for Cystoscopy. Purpose: Only 3% to 28% of patients referred to the urology clinic for hematuria are diagnosed with bladder cancer. Cystoscopy leads to high diagnostic costs and a high patient burden. Therefore, to improve the selection of patients for cystoscopy and reduce costs and over testing we aimed to validate a recently developed diagnostic urine assay.
Materials And Methods: Included in study were 200 patients from a total of 3 European countries who underwent cystoscopy for hematuria, including 97 with bladder cancer and 103 with nonmalignant findings. Voided urine samples were collected prior to cystoscopy. DNA was extracted and analyzed for mutations in FGFR3, TERT and HRAS, and methylation of OTX1, ONECUT2 and TWIST1. Logistic regression was used to analyze the association between predictor variables and bladder cancer.
Results: Combining the methylation and mutation markers with age led to an AUC of 0.96 (95% CI 0.92-0.99) with 93% sensitivity and 86% specificity, and an optimism corrected AUC of 0.95. The AUC was higher for T1 or greater tumors compared to Ta tumors (0.99 vs 0.93). The AUC was also higher for high grade tumors compared to low grade tumors (1.00 vs 0.93). Overall negative predictive value was 99% based on the 5% to 10% prevalence of bladder cancer in patients with hematuria. This would lead to a 77% reduction in diagnostic cystoscopy.
Conclusions: Analyzing hematuria patients for the risk of bladder cancer using novel molecular markers may lead to a reduction in diagnostic cystoscopy. Combining methylation analysis (OTX1, ONECUT2 and TWIST1) with mutation analysis (FGFR3, TERT and HRAS) and patient age resulted in a validated accurate prediction model.
Answer: Based on the provided abstracts, urine bound diagnostic tests cannot fully replace cystoscopy in the management of bladder cancer. Several studies have evaluated the diagnostic value of urine-based tests such as NMP22, BTA stat test, QUANTICYT computer-assisted image analysis, and urine cytology in comparison to cystoscopy (PUBMED:9598479; PUBMED:11493772; PUBMED:30283995; PUBMED:22122739; PUBMED:23106855; PUBMED:32494260; PUBMED:9116739; PUBMED:21684071; PUBMED:27746284).
The sensitivity and specificity of these urine tests vary, with some showing promise in detecting low-grade lesions better than cytology (PUBMED:11493772; PUBMED:32494260). However, their specificity often lags behind that of cytology, and the data from retrospective analyses are insufficient to justify their clinical integration to replace cystoscopy (PUBMED:11493772). Additionally, the combination of NMP22 and urine cytology has shown improved sensitivity and negative predictive value but at the cost of specificity and positive predictive value (PUBMED:32494260).
Some studies suggest that urine cytology immediately after flexible cystoscopy correlates well with results before cystoscopy (PUBMED:16153203), and the combination of bladder ultrasonography and urinary cytodiagnosis does not appear to be sufficient for systematic surveillance of patients with low-grade bladder tumors (PUBMED:9116739). Moreover, the diagnostic accuracy of voided urine cytology is too low to justify its continuing use in a haematuria clinic using CT urography and flexible cystoscopy (PUBMED:22122739).
Newer approaches focusing on molecular, genomic, and transcriptomic aberrations are promising, with good accuracies, and may provide additional molecular information to guide individualized surveillance strategies and therapy (PUBMED:30283995). A DNA methylation-mutation urine assay has shown high sensitivity and specificity, potentially leading to a reduction in diagnostic cystoscopy (PUBMED:27746284). However, replacing cystoscopy with available urine markers is not currently recommended by international guidelines, and prospective randomized trials are lacking for FDA-approved markers (PUBMED:30283995).
In conclusion, while urine bound diagnostic tests have shown potential as adjuncts to cystoscopy, they cannot replace cystoscopy as the gold standard in the management of bladder cancer at this time. Cystoscopy remains essential for accurate diagnosis and surveillance of bladder cancer. |
Instruction: Effect of body mass index on outcomes after cardiac surgery: is there an obesity paradox?
Abstracts:
abstract_id: PUBMED:25215473
Body mass index as an influencing factor for outcome of trabectome surgery? Background: The aim of this study was to investigate a possible influence of body mass index (BMI) to the outcome of trabectome surgery.
Methods: 131 eyes with primary open angle glaucoma, myopia-associated glaucoma and pseudoexfoliation glaucoma were included into this retrospective study. The data were extracted from the Freiburg trabectome database from June 2009 to April 2013. We fitted a Cox proportional hazards model in order to assess the influence of the BMI on trabectome outcome.
Results: The absolute success after trabectome surgery (20 % pressure reduction without anti-glaucomatous medication) was statistically significantly better in the group with BMI > 25 kg/m(2) (p = 0.047). No statistically significant effect was observed for relative success or the rate of re-operation respectively.
Conclusion: In our patient cohort of 131 eyes, a high BMI was associated with a reduced success, as long as an absolute success is required. No difference is seen if additional anti-glaucomatous medication is acceptable (relative success).
abstract_id: PUBMED:27853687
The Effect of Body Mass Index on Outcome of Abdominoplasty Operations. Background: Increased body mass index (BMI) increase the incidence of seroma formation and wound infection rates and subsequently increases wound dehiscence and ugly scar formation following abdomenoplasty and body contour surgery and also many other aesthetic and plastic surgery. The aim of this study was to determine the effect of BMI on the outcome of abdominoplasty operation.
Methods: We carried out a prospective study of all patients who underwent abdominoplasty at our institution. Patient were divided into two groups. Group I were subjects with body mass index <30 kg/m2 while group II were patients with body mass index >30 kg/m2. Demographics and complications (minor and major) were recorded.
Results: Sixty seven patients were enrolled. Group I were 32 patients with a mean age of 35.71 and group II 35 patients with mean age of 36.26 years. Seroma formation, wound complications, prolonged hospital stay and complications were significantly more in group II.
Conclusion: We found that increased BMI significantly increased operative time, hospital stay, drainage duration and drainage amount. Our findings showed that obesity alone could increase the incidence of complications and poor outcome of abdominoplasty.
abstract_id: PUBMED:36860168
A meta-analysis of the effect of different body mass index on surgical wound infection after colorectal surgery. We conducted a meta-analysis to assess the effect of different body mass index on surgical wound infection after colorectal surgery. A systematic literature search up to November 2022 was performed and 2349 related studies were evaluated. The chosen studies comprised 15 595 colorectal surgery subjects participated in the selected studies' baseline trials; 4390 of them were obese according to the selected body mass index cut-off used to measure obesity in the selected studies, while 11 205 were nonobese. Odds ratio (OR) with 95% confidence intervals (CIs) were calculated to assess the effect of different body mass index on wound infection after colorectal surgery by the dichotomous methods with a random or fixed effect model. The body mass index ≥30 kg/m2 resulted in significantly higher surgical wound infection after colorectal surgery (OR, 1.76; 95% CI, 1.46-2.11, P < .001) compared with the body mass index <30 kg/m2 . The body mass index ≥25 kg/m2 resulted in significantly higher surgical wound infection after colorectal surgery (OR, 1.64; 95% CI, 1.40-1.92, P < .001) compared with the body mass index <25 kg/m2 . The subjects with higher body mass index had a significantly higher surgical wound infection after colorectal surgery compared with the subjects with normal body mass index.
abstract_id: PUBMED:30576662
The effect of body mass index on retropubic midurethral slings. Background: Analyzing surgical databases uses "real-life" outcomes rather than highly selected cases from randomized controlled trials. Retropubic midurethral slings are a highly effective surgical treatment for stress urinary incontinence; however, if modifiable patient characteristics alter outcomes, thereby rendering treatments less effective, patients should be informed and given the opportunity to change that characteristic.
Objective: The aim of this study was to evaluate the effect of body mass index on patient-reported outcome measures by analyzing midurethral slings from the British Society of Urogynaecology database.
Materials And Methods: The British Society of Urogynaecology approved analysis of 11,859 anonymized midurethral slings from 2007 to 2016. The primary outcome of this retrospective cohort study was to assess how body mass index affects patient-reported outcome measures. Outcomes were assessed at 6 weeks, 3 months, 6 months, or 12 months after surgery, depending on local arrangements. Outcomes were compared by body mass index groups using χ2 tests.
Results: As BMI increased, Patient Global Impression of Improvement (PGI-I) scores declined. Women with a normal body mass index (18 to <25) reported feeling better in 91.6% of cases compared to lower rates in BMI groups >30 (87.7-72%) (P < .001). Patient-reported outcome measures for stress urinary incontinence inversely correlated with body mass index, with 97% of women with normal body mass index stating that they were cured/improved compared to women in higher body mass index groups (84-94%) reporting lower rates (P < .005). Patient-reported outcome measures for overactive bladder show that as body mass index increases, patients reported higher rates of worsening symptoms (P < .05). There were higher rates of perforation at the low and high extremes of body mass index.
Conclusion: Our results suggest increased body mass index is associated with poorer outcomes after midurethral sling surgery, and that patients should be given the opportunity to change their body mass index. These data could help to develop a model to predict personalized success and complication rates, which may improve shared decision making and give an impetus to modify characteristics to improve outcomes.
abstract_id: PUBMED:24887880
Effect of body mass index on early clinical outcomes after cardiac surgery. Background: there are several reports on the outcomes of cardiac surgery in relation to body mass index. Some concluded that obesity did not increase morbidity or mortality after cardiac surgery, whereas others demonstrated that obesity was a predictor of both morbidity and mortality.
Methods: this was a retrospective study of 3370 adult patients undergoing cardiac surgery. The patients were divided into 4 groups according to body mass index. The 4 groups were compared in terms of preoperative, operative, and postoperative characteristics.
Results: obese patients had a significantly younger mean age. Diabetes, hypertension, and hyperlipidemia were significantly more common in obese patients. The crossclamp time was significantly longer in the underweight group. Reoperation for bleeding, and pulmonary, gastrointestinal, and renal complications were significantly more common in the underweight group. Wound complications were significantly more frequent in the obese group. Mortality was inversely proportional to body mass index. The adjusted odds ratios of the early clinical outcomes demonstrated a higher risk of wound complications in overweight and obese patients
Conclusion: body mass index has no effect on early clinical outcomes after cardiac surgery, except for a higher risk of wound complications in overweight and obese patients.
abstract_id: PUBMED:30023970
Effects of body mass index on cecal intubation time in women. Objective: During colonoscopy, cecal intubation time is prolonged with increase in difficulty of the procedure. Cecal intubation time may be affected by age, gender, and body structure. We investigated the relationship between body mass index and cecal intubation time in women.
Material And Methods: This prospective study included 61 women who underwent colonoscopy in the endoscopy unit of the General Surgery Clinic in Trabzon Kanuni Training and Research Hospital between January 2016 and September 2016. The colonoscopies were performed by a single surgeon. The height and weight of all the participants were measured, and their body mass index values were calculated before the procedure. The timer was activated as soon as entry was made from the anal region with colonoscope and stopped when the cecum was reached. The cecal intubation time was recorded for each subject. The results were evaluated statistically, and p<0.05 was considered to be significant.
Results: The mean body mass index was 29.6±6.8 kg/m2. The median cecal intubation time was 4 min. (minimum 2 min; maximum 8 min). A significantly strong positive correlation was found between body mass index and cecal intubation time (r:-0.891, p<0.001).
Conclusion: Cecal intubation time was found to be shorter in women whose body mass index values were high. This outcome may help to eliminate the "the colonoscopy will be difficult" preconception, which is common among endoscopists with regard to the colonoscopies for obese female patients.
abstract_id: PUBMED:37974868
Effect of Body Mass Index on Post Tonsillectomy Hemorrhages. Aims: Obesity affects adverse outcomes in patients undergoing various surgeries. The study was carried out to assess the clinical association between body mass index and post tonsillectomy hemorrhages.
Materials And Methods: This prospective study was carried out on 60 patients, age between 5 and 40 years, admitted in Department of ENT with chronic tonsillitis. Body mass index and post tonsillectomy hemorrhage were evaluated in all patients who underwent surgery. Bleeding episode were categorized according to the Austrian tonsil study.
Results: This prospective study was carried out on 60 patients (adults and children), between December 2021 and November 2022. All patients underwent tonsillectomy under general anaesthesia. It was seen that most of the patients did not have any significant bleeding i.e., Grade A1 (Dry, no clot), and A2 (Clot, but no active bleeding after clot removal) whereas 4 patients (6.7%) had Grade B1 post tonsillectomy hemorrhage (Minimal bleeding requiring minimal intervention by vasoconstriction using adrenaline swab). Post tonsillectomy hemorrhage was seen more in adults. Post tonsillectomy bleeding of Grade B1 was recored in 28.6% of underweight patients, 8% of normal weight patients and no significant bleeding occurred in any of the overweight and obese patients (p-value 0.256).
Conclusion: Overweight and obesity (higher BMI) did not increase the risk of post tonsillectomy hemorrhage in either children or adults.
abstract_id: PUBMED:29947696
TRUNK BODY MASS INDEX: A NEW REFERENCE FOR THE ASSESSMENT OF BODY MASS DISTRIBUTION. Background: Body mass index (BMI) has some limitations for nutritional diagnosis since it does not represent an accurate measure of body fat and it is unable to identify predominant fat distribution.
Aim: To develop a BMI based on the ratio of trunk mass and height.
Methods: Fifty-seven patients in preoperative evaluation to bariatric surgery were evaluated. The preoperative anthropometric evaluation assessed weight, height and BMI. The body composition was evaluated by bioimpedance, obtaining the trunk fat free mass and fat mass, and trunk height. Trunk BMI (tBMI) was calculated by the sum of the measurements of the trunk fat free mass (tFFM) and trunk fat mass (tFM) in kg, divided by the trunk height squared (m2)). The calculation of the trunk fat BMI (tfBMI) was calculated by tFM, in kg, divided by the trunk height squared (m2)). For the correction and adjustment of the tBMI and tfBMI, it was calculated the relation between trunk extension and height, multiplying by the obtained indexes.
Results: The mean data was: weight 125.3±19.5 kg, height 1.63±0.1 m, BMI was 47±5 kg/m2) and trunk height was 0.52±0,1 m, tFFM was 29.05±4,8 kg, tFM was 27.2±3.7 kg, trunk mass index was 66.6±10.3 kg/m², and trunk fat was 32.3±5.8 kg/m². In 93% of the patients there was an increase in obesity class using the tBMI. In patients with grade III obesity the tBMI reclassified to super obesity in 72% of patients and to super-super obesity in 24% of the patients.
Conclusion: The trunk BMI is simple and allows a new reference for the evaluation of the body mass distribution, and therefore a new reclassification of the obesity class, evidencing the severity of obesity in a more objectively way.
abstract_id: PUBMED:29902832
Cost and Revenue Relationship in Orthopaedic and Trauma Surgery Patients in Relation to Body Mass Index Background: Growing numbers of patients in orthopaedic and trauma surgery are obese. The risks involved are e.g. surgical complications, higher costs for longer hospital stays or special operating tables. It is a moot point whether revenues in the German DRG system cover the individual costs in relation to patients' body mass index (BMI) and in which area of hospital care potentially higher costs occur.
Material And Methods: Data related to BMI, individual costs and revenues were extracted from the hospital information system for 13,833 patients of a large hospital who were operated in 2007 to 2010 on their upper or lower extremities. We analysed differences in cost revenue relations dependent on patients' BMI and surgical site, and differences in the distribution of hospital cost areas in relation to patients' BMI by t and U tests.
Results: Individual costs of morbidly obese (BMI ≥ 40) and underweight patients (BMI < 18.5) significantly (p < 0.05) exceeded individual DRG revenues. Significantly higher cost revenue relations were detected for all operations on the lower and upper extremities except for ankle joint surgeries in which arthroscopical procedures predominate. Most of the incremental costs resulted from higher spending for nursing care, medication and special appliances. Costs for doctors and medical ancillary staff did not increase in relation to patients' BMI.
Conclusion: To avoid BMI related patient discrimination, supplementary fees to cover extra costs for morbidly obese or underweight patients with upper or lower extremities operations should raise DRG revenues. Moreover, hospitals should be organisationally prepared for these patients.
abstract_id: PUBMED:28555197
Effect of Body Mass Index on Posttonsillectomy Hemorrhage. Objective: Obesity affects adverse outcomes in patients undergoing various surgeries. Tonsillectomy is one of the most common surgical procedures and posttonsillectomy hemorrhage (PTH) is the major complication in patients with tonsillectomy. However, the effect of body mass index (BMI) on posttonsillectomy bleeding episodes is not well known. This study aimed to assess the clinical association between obesity and PTH.
Methods: A total of 98 tonsillectomies were retrospectively reviewed. Patient charts were analyzed regarding demographic data and the indication for surgery. Patients with PTH were compared with uneventful cases. Patients were divided into three groups based on BMI: normal weight (BMI < 25 kg/m2), overweight (BMI ≥ 25 and <30 kg/m2), and obese (≥30 kg/m2).
Results: PTH occurred in 13% of patients with normal weight, in 23.5% of patients with overweight, and in 50% patients with obesity. The occurrence of PTH was significantly higher in patients with obesity than in those with normal weight and overweight (p = 0.008). Multivariate analysis showed that obesity was a significant factor affecting the incidence of PTH after adjusting for confounding factors.
Conclusions: Our findings suggest that the obese condition is independently associated with the incidence of PTH.
Answer: The concept of an "obesity paradox" in cardiac surgery refers to the counterintuitive observation that, in some studies, obese patients have better outcomes after surgery than their normal-weight counterparts. However, the abstracts provided offer mixed evidence regarding the impact of body mass index (BMI) on outcomes after cardiac surgery.
According to the study in abstract PUBMED:24887880, BMI does not significantly affect early clinical outcomes after cardiac surgery, except for a higher risk of wound complications in overweight and obese patients. This suggests that while there may be an increased risk for certain complications, overall outcomes are not adversely affected by higher BMI. This could be interpreted as a form of the obesity paradox, as the increased risk of wound complications does not translate into worse overall outcomes.
On the other hand, the abstract does not provide evidence of improved outcomes for obese patients, which would be necessary to fully support the concept of an obesity paradox. The study found that mortality was inversely proportional to BMI, indicating that higher BMI was not associated with better survival rates, which contradicts the obesity paradox.
In conclusion, based on the abstract provided (PUBMED:24887880), there is no clear evidence of an obesity paradox in cardiac surgery, as BMI does not appear to have a significant effect on early clinical outcomes, except for an increased risk of wound complications in overweight and obese patients. However, the abstract does not support the notion that obesity is protective or associated with better outcomes after cardiac surgery. |
Instruction: Specialized care and primary care in the treatment of asthma: do differences exist?
Abstracts:
abstract_id: PUBMED:32597993
Use of General Primary Care, Specialized Primary Care, and Other Veterans Affairs Services Among High-Risk Veterans. Importance: Integrated health care systems increasingly focus on improving outcomes among patients at high risk for hospitalization. Examining patterns of where patients obtain care could give health care systems insight into how to develop approaches for high-risk patient care; however, such information is rarely described.
Objective: To assess use of general and specialized primary care, medical specialty, and mental health services among patients at high risk of hospitalization in the Veterans Health Administration (VHA).
Design, Setting, And Participants: This national, population-based, retrospective cross-sectional study included all veterans enrolled in any type of VHA primary care service as of September 30, 2015. Data analysis was performed from April 1, 2016, to January 1, 2019.
Exposures: Risk of hospitalization and assignment to general vs specialized primary care.
Main Outcome And Measures: High-risk veterans were defined as those who had the 5% highest risk of near-term hospitalization based on a validated risk prediction model; all others were considered low risk. Health care service use was measured by the number of encounters in general primary care, specialized primary care, medical specialty, mental health, emergency department, and add-on intensive management services (eg, telehealth and palliative care).
Results: The study assessed 4 309 192 veterans (mean [SD] age, 62.6 [16.0] years; 93% male). Male veterans (93%; odds ratio [OR], 1.11; 95% CI, 1.10-1.13), unmarried veterans (63%; OR, 2.30; 95% CI, 2.32-2.35), those older than 45 years (94%; 45-65 years of age: OR, 3.49 [95% CI, 3.44-3.54]; 66-75 years of age: OR, 3.04 [95% CI, 3.00-3.09]; and >75 years of age: OR, 2.42 [95% CI, 2.38-2.46]), black veterans (23%; OR, 1.63; 95% CI, 1.61-1.64), and those with medical comorbidities (asthma or chronic obstructive pulmonary disease: 33%; OR, 4.03 [95% CI, 4.00-4.06]; schizophrenia: 4%; OR, 5.14 [95% CI, 5.05-5.22]; depression: 42%; OR, 3.10 [95% CI, 3.08-3.13]; and alcohol abuse: 20%; OR, 4.54 [95% CI, 4.50-4.59]) were more likely to be high risk (n = 351 012). Most (308 433 [88%]) high-risk veterans were assigned to general primary care; the remaining 12% (42 579 of 363 561) were assigned to specialized primary care (eg, women's health and homelessness). High-risk patients assigned to general primary care had more frequent primary care visits (mean [SD], 6.9 [6.5] per year) than those assigned to specialized primary care (mean [SD], 6.3 [7.3] per year; P < .001). They also had more medical specialty care visits (mean [SD], 4.4 [5.9] vs 3.7 [5.4] per year; P < .001) and fewer mental health visits (mean [SD], 9.0 [21.6] vs 11.3 [23.9] per year; P < .001). Use of intensive supplementary outpatient services was low overall.
Conclusions And Relevance: The findings suggest that, in integrated health care systems, approaches to support high-risk patient care should be embedded within general primary care and mental health care if they are to improve outcomes for high-risk patient populations.
abstract_id: PUBMED:25914455
Tools for primary care management of inflammatory bowel disease: do they exist? Healthcare systems throughout the world continue to face emerging challenges associated with chronic disease management. Due to the likely increase in chronic conditions in the future it is now vital that cooperation and support between specialists, generalists and primary health care physicians is conducted. Inflammatory bowel disease (IBD) is one such chronic disease. Despite specialist care being essential, much IBD care could and probably should be delivered in primary care with continued collaboration between all stakeholders. Whilst most primary care physicians only have few patients currently affected by IBD in their caseload, the proportion of patients with IBD-related healthcare issues cared for in the primary care setting appears to be widespread. Data suggests however, that primary care physician's IBD knowledge and comfort in management is suboptimal. Current treatment guidelines for IBD are helpful but they are not designed for the primary care setting. Few non-expert IBD management tools or guidelines exist compared with those used for other chronic diseases such as asthma and scant data have been published regarding the usefulness of such tools including IBD action plans and associated supportive literature. The purpose of this review is to investigate what non-specialist tools, action plans or guidelines for IBD are published in readily searchable medical literature and compare these to those which exist for other chronic conditions.
abstract_id: PUBMED:29866987
Specialized Care without the Subspecialist: A Value Opportunity for Secondary Care. An underutilized value strategy that may reduce unnecessary subspecialty involvement in pediatric healthcare targets the high-quality care of children with common chronic conditions such as obesity, asthma, or attention deficit hyperactivity disorder within primary care settings. In this commentary, we propose that "secondary care", defined as specialized visits delivered by primary care providers, a general pediatrician, or other primary care providers, can obtain the knowledge, skill and, over time, the experience to manage one or more of these common chronic conditions by creating clinical time and space to provide condition-focused care. This care model promotes familiarity, comfort, proximity to home, and leverages the provider's expertise and connections with community-based resources. Evidence is provided to prove that, with multi-disciplinary and subspecialist support, this model of care can improve the quality, decrease the costs, and improve the provider's satisfaction with care.
abstract_id: PUBMED:9264683
Specialized care and primary care in the treatment of asthma: do differences exist? Objective: To determine whether differences exist in the monitoring, diagnosis and treatment of asthmatic patients between family doctors (FD) and pneumology specialists (PD).
Design: A descriptive crossover study, performed through an interview with the patients and a medical exploration.
Setting: Six health centres.
Patients: 195 asthmatic patients between 14 and 65, chosen by simple random sampling from among all those registered by computer in the SICAP.
Measurements And Main Results: Each patient answered a structured interview and had a spirometry test. Which doctor usually monitored their illness, social and demographic data, morbidity parameters, treatment prescribed and their compliance with it, were all determined. 66% of patients were under their FD. No differences were found in the clinical characteristics of patients treated by their FD against those treated by their PD.
Conclusions: Most adult asthmatics are under the care of FDs; however, these appear to under-treat to a considerable degree, especially respecting the use of inhaled corticosteroids. It must be emphasised strongly that Asthma is an inflammatory disease; and FDs must become better informed of the directives of the international consensus on asthma.
abstract_id: PUBMED:22935133
PELICAN: A quality of life instrument for childhood asthma: study protocol of two randomized controlled trials in primary and specialized care in the Netherlands. Background: Asthma is one of the major chronic health problems in children in the Netherlands. The Pelican is a paediatric asthma-related quality of life instrument for children with asthma from 6-11 years old, which is suitable for clinical practice in primary and specialized care. Based on this instrument, we developed a self-management treatment to improve asthma-related quality of life. The Pelican intervention will be investigated in different health care settings. Results of intervention studies are often extrapolated to other health care settings than originally investigated. Because of differences in organization, disease severity, patient characteristics and care provision between health care settings, extrapolating research results could lead to unnecessary health costs without the desired health care achievements. Therefore, interventions have to be investigated in different health care settings when possible. This study is an example of an intervention study in different health care settings. In this article, we will present the study protocol of the Pelican study in primary and specialized care.
Method/design: This study consists of two randomized controlled trials to assess the effectiveness of the Pelican intervention in primary and specialized care. The trial in primary care is a multilevel design with 170 children with asthma in 16 general practices. All children in one general practices are allocated to the same treatment group. The trial in specialized care is a multicentre trial with 100 children with asthma. Children in one outpatient clinic are randomly allocated to the intervention or usual care group. In both trials, children will visit the care provider four times during a follow-up of nine months. This study is registered and ethically approved.
Discussion: This article describes the study protocol of the Pelican study in different health care settings. If the Pelican intervention proves to be effective and efficient, implementation in primary and specialized care for paediatric asthma in the Netherlands will be recommended.
Trial Registration: This study is registered by clinicaltrial.gov (NCT01109745).
abstract_id: PUBMED:25183554
Under the same roof: co-location of practitioners within primary care is associated with specialized chronic care management. Background: International and national bodies promote interdisciplinary care in the management of people with chronic conditions. We examine one facilitative factor in this team-based approach - the co-location of non-physician disciplines within the primary care practice.
Methods: We used survey data from 330 General Practices in Ontario, Canada and New Zealand, as a part of a multinational study using The Quality and Costs of Primary Care in Europe (QUALICOPC) surveys. Logistic and linear multivariable regression models were employed to examine the association between the number of disciplines working within the practice, and the capacity of the practice to offer specialized and preventive care for patients with chronic conditions.
Results: We found that as the number of non-physicians increased, so did the availability of special sessions/clinics for patients with diabetes (odds ratio 1.43, 1.25-1.65), hypertension (1.20, 1.03-1.39), and the elderly (1.22, 1.05-1.42). Co-location was also associated with the provision of disease management programs for chronic obstructive pulmonary disease, diabetes, and asthma; the equipment available in the centre; and the extent of nursing services.
Conclusions: The care of people with chronic disease is the 'challenge of the century'. Co-location of practitioners may improve access to services and equipment that aid chronic disease management.
abstract_id: PUBMED:30453871
Models of care for severe asthma: the role of primary care. Severe asthma encompasses treatment-refractory asthma and difficult-to-treat asthma. There are a number of barriers in primary, secondary and tertiary settings which compromise optimal care for severe asthma in Australia. Guidelines recommend a multidimensional assessment of severe asthma, which includes confirming the diagnosis, severity and phenotype and identifying and treating comorbidities and risk factors. This approach has been found to improve severe asthma symptoms and quality of life and reduce exacerbations. Primary care providers can contribute significantly to the multidimensional approach for severe asthma by performing spirometry, optimising therapy and addressing risk factors such as non-adherence and smoking before referring the patient to a respiratory physician for review. Primary care practitioners are encouraged to remain engaged with the management of a patient with severe asthma following specialist review by assisting with community-based allied health referrals, managing general medical comorbidities and administering prescribed biological therapies. Specialists can support primary care by providing advice to individuals with indeterminate diagnosis, streamlining investigation and management of unrecognised risk factors and complex comorbidities, optimising treatment for severe or difficult asthma including assessment of suitability for and, if appropriate, initiating advanced therapies such as biological therapies. When discharging patients back to primary care, specialists should provide clear recommendations regarding ongoing management and should specify the indications requiring further specialist review, ideally offering a streamlined re-referral pathway.
abstract_id: PUBMED:25530290
Improving asthma care in rural primary care practices: a performance improvement project. Introduction: Rural areas are often underserviced health areas, lack specialty care services, and experience higher levels of asthma-related burden. A primary care, asthma-focused, performance improvement program was provided to a 6-county, rural-frontier region in Colorado to determine whether asthma care practices could be enhanced to become concordant with evidence-based asthma care guidelines.
Methods: A pre-post, quasi-experimental design was used. A complex, multifaceted intervention was provided to multidisciplinary primary care teams in practices serving children and adults with asthma. Intervention elements included face-to-face trainings, clinical support tools, patient education materials, a website, and clinic visits. Performance improvement and behavior change indicators were collected through chart audits and surveys from the entire health care team.
Results: Participants included three health care organizations and their staff in 13 primary care practices. Overall, all team members reported statistically significant improvements in confidence levels for providing quality asthma care. Chart reviews of asthma patient encounters completed before and after the program demonstrated statistically significant improvements in asthma care practices for asthma control assessment (1% vs 20%), provision of asthma action plans (2% vs 29%), controller prescription (39% vs 71%), inhaler technique assessment (1% vs 18%), and arrangement of follow-up appointment (20% vs 37%).
Conclusion: The asthma care-focused, multifaceted, complex, performance improvement intervention provided to rural primary health care teams lead to significant improvements in all indicators of quality asthma care provision to adults and children with asthma. However, significant barriers exist for rural practices to adopt evidence-based asthma care practices.
abstract_id: PUBMED:22249552
Assistance model for patients with asthma in the primary care. To create a program structured for the control and prevention towards asthma worsening, it is necessary to settle down actions of regionalization, planning and management. Currently, the Ministerial orders allow each municipality district to cope their needs with local initiatives, based on the search of the health indicators with University partnerships. Taking into account this context, it is feasible the implantation of an effective model through organized demand of attendance flow and physical structure, besides the withdrawal of medications and professional training. To describe the modus operandi situation currently in the Primary Health Care Units regarding these patients' reception, diagnosis, and follow-up, as well as the current situation according to the professional profile and sector. To introduce an assistance model for reception, of these patients in these primary care units. This is a bibliographical review based on the specialized literature such as scientific papers selected through the search on the SciELO and Bireme databases, from Medline and Lilacs data sources. A Committee was set up by members from the Health and Service , the Medical School, and scientific societies for discussion and planning.
abstract_id: PUBMED:26370255
Barriers to care and quality of primary care services in children with sickle cell disease. Aims: The aims of this study were: to (1) identify barriers to care in children with sickle cell disease; (2) examine the quality of primary care services received by these children and (3) examine the relationship between barriers to care and quality of primary care services in children with sickle cell disease.
Background: Effective management in children with sickle cell disease requires early access to a comprehensive range of preventive screenings, urgent care treatments for vaso-occlusive pain crisis and ongoing prophylactic treatments.
Design: A cross-sectional survey of parents of children with sickle cell disease was conducted between April-September 2011.
Methods: Parents of children with sickle cell disease completed the Barriers to Care Questionnaire and Parent's Perceptions of Primary Care.
Results: Parents of children with sickle cell disease (n = 38) reported health system barriers such as inability to contact doctors or clinics, extended wait times and inconvenient clinic hours. Some barriers were reported more frequently among children with concurrent sickle cell disease and asthma, compared with those children without a concurrent asthma condition. Parents who reported more barriers were least likely to perceive their care as accessible, comprehensive and coordinated.
Conclusions: Minimizing healthcare barriers may improve the quality of primary care services received by children with sickle cell disease and consequently prevent complications associated with sickle cell disease.
Implications For Nursing Practice: Nurses and other care providers need to identify healthcare barriers, so that access, coordination, comprehensiveness and overall quality of primary care services may be improved in children with sickle cell disease.
Answer: Yes, differences exist in the treatment of asthma between specialized care and primary care. A study aimed to determine whether differences exist in the monitoring, diagnosis, and treatment of asthmatic patients between family doctors (FD) and pneumology specialists (PD) found that most adult asthmatics are under the care of FDs. However, FDs appear to under-treat to a considerable degree, especially with respect to the use of inhaled corticosteroids. It was emphasized that asthma is an inflammatory disease and that FDs must become better informed of the directives of the international consensus on asthma (PUBMED:9264683).
Another study investigating the use of general and specialized primary care among high-risk veterans, including those with asthma, found that high-risk patients assigned to general primary care had more frequent primary care visits than those assigned to specialized primary care. They also had more medical specialty care visits and fewer mental health visits. The study concluded that approaches to support high-risk patient care, including asthma, should be embedded within general primary care and mental health care to improve outcomes for high-risk patient populations (PUBMED:32597993).
Furthermore, the study "Under the same roof: co-location of practitioners within primary care is associated with specialized chronic care management" found that co-location of non-physician disciplines within the primary care practice was associated with the availability of special sessions/clinics for patients with chronic conditions such as asthma. Co-location was also associated with the provision of disease management programs for chronic conditions, including asthma, and the extent of nursing services (PUBMED:25183554).
In summary, there are differences in the treatment of asthma between specialized care and primary care, with specialized care often providing more targeted and comprehensive management of the condition. However, primary care can play a significant role in managing asthma, especially when there is co-location of disciplines and when primary care providers are well-informed about evidence-based asthma care guidelines. |
Instruction: POP-Q, dynamic MR imaging, and perineal ultrasonography: do they agree in the quantification of female pelvic organ prolapse?
Abstracts:
abstract_id: PUBMED:19221680
POP-Q, dynamic MR imaging, and perineal ultrasonography: do they agree in the quantification of female pelvic organ prolapse? Introduction And Hypothesis: This study evaluates the agreement in prolapse staging between clinical examination, dynamic magnetic resonance (MR), imaging and perineal ultrasonography.
Methods: Anatomical landmarks in the anterior, central, and posterior compartment were assessed in relation to three reference lines on dynamic MR imaging and one reference line on dynamic ultrasonography. These measurements were compared to the according POP-Q measurements. Agreement between the three methods was analyzed with Spearman's rank correlation coefficient (r(s)) and Bland and Altman plots.
Results: Correlations were good to moderate in the anterior compartment (r(s) range = 0.49; 0.70) and moderate to poor (r(s) range = -0.03; 0.49) in the central and posterior compartment. This finding was independent of the staging method and reference lines used.
Conclusion: Pelvic organ prolapse staging with the use of POP-Q, dynamic MR imaging, and perineal ultrasonography only correlates in the anterior compartment.
abstract_id: PUBMED:31720767
Comparison of magnetic resonance defecography grading with POP-Q staging and Baden-Walker grading in the evaluation of female pelvic organ prolapse. Purpose: The physical examination and pelvic imaging with MRI are often used in the pre-operative evaluation of pelvic organ prolapse. The objective of this study was to compare grading of prolapse on defecography phase of dynamic magnetic resonance imaging (dMRI) with physical examination (PE) grading using both the Pelvic Organ Prolapse Quantification (POP-Q) staging and Baden-Walker (BW) grading systems in the evaluation of pelvic organ prolapse (POP).
Methods: We retrospectively reviewed the charts of 170 patients who underwent dMRI at our institution. BW grading and POP-Q staging were collected for anterior, apical, and posterior compartments, along with absolute dMRI values and overall grading of dMRI. For the overall grading/staging from dMRI, BW, and POP-Q, Spearman rho (ρ) was used to assess the correlation. The correlations between dMRI grading and POP-Q staging were compared to the correlations between dMRI grading and BW grading using Fisher's Z transformation.
Results: A total of 54 patients were included. dMRI grading was not significantly correlated with BW grading for anterior, apical, and posterior compartment prolapse (p > 0.15). However, overall dMRI grading demonstrated a significant (p = 0.025) and positive correlation (ρ = 0.305) with the POP-Q staging system. dMRI grading for anterior compartment prolapse also demonstrated a positive correlation (p = 0.001, ρ = 0.436) with the POP-Q staging derived from measurement locations Aa and Ba. The overall dMRI grade is better correlated with POP-Q stage than with BW grade (p = 0.024).
Conclusion: Overall and anterior compartment grading from dMRI demonstrated a significant and positive correlation with the overall POP-Q staging and anterior compartment POP-Q staging, respectively. The overall dMRI grade is better correlated with POP-Q staging than with BW grading.
abstract_id: PUBMED:37619710
Correlation between clinical examination and perineal ultrasound in women treated for pelvic organ prolapse. Introduction: Lifetime risk of surgery for female pelvic organ prolapse (FPOP) is estimated at 10 to 20%. Prolapse assessment is mostly done by clinical examination. Perineal ultrasound is easily available and performed to evaluate and stage FPOP. This study's aim is to evaluate the agreement between clinical examination by POP-Q and perineal sonography in women presenting pelvic organ prolapse.
Materials And Methods: We carried out a prospective study from December 2015 to March 2018 in the gynecologic department of a teaching hospital. Consecutive woman requiring a surgery for pelvic organ prolapse were included. All women underwent clinical examination by POP-Q, perineal ultrasound with measurements of each compartment descent, levator hiatus area and posterior perineal angle. They also answered several functional questionnaires (PFDI 20, PFIQ7, EQ-5D and PISQ12) before and after surgery. Data for clinical and sonographic assessments were compared with Spearman's test and correlation with functional questionnaires was tested.
Results: 82 women were included. We found no significant agreement between POP-Q and sonographic measures of bladder prolapse, surface of the perineal hiatus or perineal posterior angle. There was a significant improvement of most of the functional scores after surgery.
Discussion: Our study does not suggest correlation between clinical POP-Q and sonographic assessment of bladder prolapse, hiatus surface or perineal posterior angle. Ultrasound datasets were limited by an important number of missing data resulting in a lack of power.
abstract_id: PUBMED:27924376
Predicting levator avulsion from ICS POP-Q findings. Introduction And Hypothesis: Levator avulsion is a common consequence of vaginal childbirth. It is associated with symptomatic female pelvic organ prolapse and is also a predictor of recurrence after surgical correction. Skills and hardware necessary for diagnosis by imaging are, however, not universally available. Diagnosis of avulsion may benefit from an elevated index of suspicion. The aim of this study was to examine the predictive value of the International Continence Society Pelvic Organ Prolapse Quantification (ICS POP-Q) for the diagnosis of levator avulsion by tomographic 4D translabial ultrasound.
Methods: This is a retrospective analysis of data obtained in a tertiary urogynaecological unit. Subjects underwent a standardised interview, POP-Q examination and 4D translabial pelvic floor ultrasound. Avulsion of the puborectalis muscle was diagnosed by tomographic ultrasound imaging. We tested components of the ICS POP-Q associated with symptomatic prolapse and other known predictors of avulsion, including previous prolapse repair and forceps delivery with uni- and multivariate logistic regression. A risk score was constructed for clinical use.
Results: The ICS POP-Q components Ba, C, gh and pb were all significantly associated with avulsion on multivariate analysis, along with previous prolapse repair and forceps delivery. A score was assigned for each of these variables and patients were classified as low, moderate or high risk according to total score. The odds of finding an avulsion on ultrasound in patients in the "high risk" group were 12.8 times higher than in the "low risk" group.
Conclusion: Levator avulsion is associated with ICS POP-Q measures. Together with simple clinical data, it is possible to predict the risk of avulsion using a scoring system. This may be useful in clinical practice by modifying the index of suspicion for the condition.
abstract_id: PUBMED:19597719
Symptoms of pelvic floor dysfunction are poorly correlated with findings on clinical examination and dynamic MR imaging of the pelvic floor. Introduction And Hypothesis: The aim of the study was to determine whether patients' symptoms agree with findings on clinical examination and dynamic MR imaging of the pelvic floor.
Methods: Symptoms of pelvic organ dysfunction were measured with the use of three validated questionnaires. The domain scores were compared with POP-Q and dynamic MR imaging measurements. The Spearman's rank correlation coefficient (r(s)) was used to assess agreement.
Results: Only the domain score genital prolapse was significantly correlated in the positive direction with the degree of pelvic organ prolapse as assessed by POP-Q and dynamic MR imaging (r(s) = 0.64 and 0.27, respectively), whereas the domain score urinary incontinence was inversely correlated (r(s) = -0.32 and -0.35, respectively).
Conclusions: The sensation or visualization of a bulge in the vagina was the only symptom which correlated positively with the degree of pelvic organ prolapse, and clinical examination and dynamic MR imaging showed similar correlation in this respect.
abstract_id: PUBMED:37904129
Clinical application of a fixed reference line in the ultrasound quantitative diagnosis of female pelvic organ prolapse. Objective: This study explored using an improved ultrasound (US) for quantitative evaluation of the degree of pelvic organ prolapse(POP).
Design: A transluminal probe was used to standardize ultrasound imaging of pelvic floor organ displacements. A US reference line was fixed between the lower edge of the pubic symphysis and the central axis of the pubic symphysis at a 30°counterclockwise angle.
Method: Points Aa, Ba, C and Bp on pelvic organ prolapse quantification (POP-Q) were then compared with the points on pelvic floor ultrasound (PFUS).
Results: One hundred thirteen patients were included in the analysis of the standard US plane. Correlations were good in the anterior and middle compartments (PBN:Aa, ICC = 0.922; PBB:Ba, ICC = 0.923; and PC:C, ICC = 0.925), and Bland-Altman statistical maps corresponding to the average difference around the 30°horizontal line were close to 0. Correlations were poor in the posterior compartment (PRA:Bp, ICC = 0.444). However, eight (7.1%) cases of intestinal hernia and 21 (18.6%) cases of rectocele were diagnosed.
Conclusions: Introital PFUS using an intracavitary probe, which is gently placed at the introitus of the urethra and the vagina, may be accurately used to evaluate organ displacement. The application of a 30°horizontal line may improve the repeatability of the US diagnosis of POP.
abstract_id: PUBMED:24314799
The effect of episiotomy on pelvic organ prolapse assessed by pelvic organ prolapse quantification system. Objective: This study aimed to assess the association between episiotomy and measures of pelvic organ prolapse quantification system (POP-Q) in a cohort of women with vaginal parturition.
Study Design: A prospective study was conducted with 549 eligible patients with vaginal delivery history. Women who were pregnant, gave birth within the preceding 6 months period, had a known history of pre-pregnant prolapse, had a history of hysterectomy or any operation performed for pelvic organ prolapsus and stress urinary incontinence, refused to participate and to whom POP-Q examination could not be performed (due to anatomic or orthopedic problems) were excluded. Patients were categorized as women with episiotomy and without episiotomy. The degree of genital prolapse was assessed by using POP-Q system. The effect of episiotomy on overall POP-Q stage and individual POP-Q points was calculated with logistic regression.
Results: 439 patients had a history of episiotomy whereas 110 patients had no episiotomy. 38.2% of women without an episiotomy, and 32.0% of women with episiotomy had genital prolapse determined by POP-Q system. There was no statistically significant association between episiotomy and POP-Q stage (AOR, -0.24; 95% CI, -0.65-0.18, P=0.26). Episiotomy was found among the independent predictors for certain POP-Q points such as Bp, perineal body (pb) and total vaginal length (tvl). Episiotomy was negatively correlated with prolapse of Bp and with pb and tvl.
Conclusion: Episiotomy had an effect on certain POP-Q indices, but had no influence on overall POP-Q stage.
abstract_id: PUBMED:20135303
Perineal descent and patients' symptoms of anorectal dysfunction, pelvic organ prolapse, and urinary incontinence. Introduction And Hypothesis: The aim of this dynamic magnetic resonance (MR) imaging study was to assess the relation between the position and mobility of the perineum and patients' symptoms of pelvic floor dysfunction.
Methods: Patients' symptoms were measured with the use of validated questionnaires. Univariate logistic regression analyses were used to study the relationship between the questionnaires domain scores and the perineal position on dynamic MR imaging, as well as baseline characteristics (age, body mass index, and parity).
Results: Sixty-nine women were included in the analysis. Only the domain score genital prolapse was associated with the perineal position on dynamic MR imaging. This association was strongest at rest.
Conclusions: Pelvic organ prolapse symptoms were associated with the degree of descent of the perineum on dynamic MR imaging. Perineal descent was not related to anorectal and/or urinary incontinence symptoms.
abstract_id: PUBMED:25854801
Association between ICS POP-Q coordinates and translabial ultrasound findings: implications for definition of 'normal pelvic organ support'. Objectives: Female pelvic organ prolapse is quantified on clinical examination using the pelvic organ prolapse quantification system of the International Continence Society (ICS POP-Q). Pelvic organ descent on ultrasound is strongly associated with symptoms of prolapse, but associations between clinical and ultrasound findings remain unclear. This study was designed to compare clinical examination and imaging findings, especially regarding cut-offs for the distinction between normal pelvic organ support and prolapse.
Methods: This was a retrospective study using 839 archived datasets of women referred to a tertiary urogynecological center for symptoms of lower urinary tract and pelvic floor dysfunction between June 2011 and May 2013. The main outcome measures were the maximum downward displacement of the anterior vaginal wall (point Ba), the cervix (point C) and the posterior vaginal wall (point Bp), the length of the genital hiatus (Gh) and the length of the perineal body (Pb), as defined by the ICS POP-Q; explanatory parameters were measures of pelvic organ descent on translabial ultrasound, ascertained by offline volume data analysis at a later date, by an operator blinded to all other data.
Results: Full datasets were available for 825 women. On clinical examination, 646 (78.3%) were found to have prolapse of at least POP-Q Stage 2. All coordinates on clinical examination were strongly associated with the ultrasound measurements of pelvic organ descent (P < 0.001). These relationships were almost linear, especially for the anterior compartment.
Conclusions: There is a near linear relationship between sonographic and clinical measures of prolapse. Previously proposed cut-offs to define 'significant prolapse' on ultrasound and POP-Q (Ba ≥ -0.5 and cystocele ≥ 10 mm below the symphysis pubis, C ≥ -5 and uterine position of 15 mm above the symphysis pubis, Bp ≥ -0.5 and rectocele ≥ 15 mm below the symphysis pubis) are plausible and mutually consistent. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
abstract_id: PUBMED:32527108
Modern terminology and classification of female pelvic organ prolapse. Objective: The aim of this study is to clarify the news and to summarize recommended methods in the quantification of female pelvic organ prolapse (POP).
Design: Summarizing study.
Setting: Department of Obstetrics and Gynecology, Masaryk University, University Hospital Brno.
Methods: The terminology of POP was significantly reworked in last decades. It is important to use common classification system for mutual communication of specialists and for exact interpretationof research.
Results: The older classifications of POP are not exact enough for interpretation of research. International classification system Pelvic organ prolapse quantification (POP-Q) brought necessary consensus in the terminology, encompassing many parameters that exactly define individual anatomy of each female patient. This detailed terminology could be replaced with simplified but also satisfactorily detailed version called Simplified POP-Q for the regular clinical practice. Modern classification of POP is still developing and new concepts of vaginal and perineal measurements for description of operation techniques effectiveness are waiting for further validation.
Conclusion: Modern terminology and classification of POP meets the requirements of current science and research and also is usable for regular clinical practice.
Answer: The agreement between POP-Q (Pelvic Organ Prolapse Quantification), dynamic MR imaging, and perineal ultrasonography in the quantification of female pelvic organ prolapse varies depending on the compartment being assessed.
In the anterior compartment, there is good to moderate agreement between these methods, with Spearman's rank correlation coefficients (r(s)) ranging from 0.49 to 0.70 (PUBMED:19221680). Similarly, dynamic MR imaging grading for anterior compartment prolapse demonstrated a positive correlation with POP-Q staging derived from measurement locations Aa and Ba (PUBMED:31720767). However, in the central and posterior compartments, the agreement is moderate to poor, with r(s) ranging from -0.03 to 0.49, indicating less consistency between the methods in these areas (PUBMED:19221680).
A study found no significant agreement between POP-Q and sonographic measures of bladder prolapse, surface of the perineal hiatus, or perineal posterior angle (PUBMED:37619710). Another study reported that overall and anterior compartment grading from dynamic MR imaging demonstrated a significant and positive correlation with overall POP-Q staging and anterior compartment POP-Q staging, respectively, and that the overall dynamic MR imaging grade is better correlated with POP-Q staging than with Baden-Walker grading (PUBMED:31720767).
Furthermore, a study using an intracavitary probe for introital pelvic floor ultrasound found good correlations in the anterior and middle compartments when using a fixed US reference line, but poor correlation in the posterior compartment (PUBMED:37904129). This suggests that while there is some agreement between these diagnostic tools, particularly in the anterior compartment, the correlation is not consistent across all compartments of the pelvic floor.
In summary, POP-Q, dynamic MR imaging, and perineal ultrasonography show some level of agreement in the quantification of female pelvic organ prolapse, particularly in the anterior compartment, but the agreement is less consistent in the central and posterior compartments. |
Instruction: Dyspeptic patients with visceral hypersensitivity: sensitisation of pain specific or multimodal pathways?
Abstracts:
abstract_id: PUBMED:15951533
Dyspeptic patients with visceral hypersensitivity: sensitisation of pain specific or multimodal pathways? Background And Aims: Patients with functional dyspepsia who have hypersensitivity to gastric distension have more prevalent pain, suggesting the presence of hyperalgesia. It is unclear whether this reflects activation of pain specific afferent pathways or multimodal afferent pathways that also mediate non-painful sensations. In the former case, hyperalgesia should occur when intensity of non-painful sensations is still low. The aim of the study was to analyse whether the symptom profile during gastric dissentions in functional dyspepsia patients with hyperalgesia reflects sensitisation of pain specific or multimodal pathways.
Methods: Forty eight consecutive dyspeptic patients (35 female) underwent gastric sensitivity testing with a barostat balloon using a double random staircase protocol. At the end of every distending step, patients scored perception of upper abdominal sensations on a graphic 0-6 rating scale and completed visual analogue scales (VAS 0-100 mm) for pain, nausea, satiety, and fullness. The end point was a rating scale of 5 or more.
Results: Hypersensitivity was present in 20 patients (40%); gastric compliance did not differ between normo- and hypersensitive patients. At maximal distension (score 5 or more), hypersensitive patients had significantly lower distending pressures and intra-balloon volumes, but similar VAS scores for pain, nausea, satiety, and fullness compared with normosensitive patients. In both normosensitive and hypersensitive patients, elevation of pain VAS scores with increasing distending pressures paralleled the elevation in VAS scores for nausea, satiety, and fullness.
Conclusions: Hypersensitive dyspeptic patients reach the same intensity of painful and non-painful sensations as normosensitive patients but at lower distending pressures. Hyperalgesia occurs in hypersensitive dyspeptic patients at distending pressures that also induce intense non-painful sensations. These findings argue against isolated upregulation of pain specific afferents in functional dyspepsia patients with visceral hypersensitivity.
abstract_id: PUBMED:37022606
Perioperative multimodal analgesic injection for patients with adolescent idiopathic scoliosis undergoing posterior spinal fusion surgery. Purpose: This retrospective cohort study compared postoperative as-needed (PRN) opioid consumption pre and postimplementation of a perioperative multimodal analgesic injection composed of ropivacaine, epinephrine, ketorolac, and morphine in patients undergoing posterior spinal fusion (PSF) for adolescent idiopathic scoliosis (AIS). Secondary outcomes include pain score measurements, time to ambulation, length of stay, blood loss, 90-day complication rate, operating room time, nonopioid medication usage, and total inpatient medication cost before and after the initiation of this practice.
Methods: Consecutive patients weighing ≥ 20 kg who underwent PSF for a primary diagnosis of AIS between January 2017 and December 2020 were included. Data from 2018 were excluded to account for standardization of the practice. Patients treated in 2017 only received PCA. Patients treated in 2019 and 2020 only received the injection. Excluded were patients who had any diagnoses other than AIS, allergies to any of the experimental medications, or who were nonambulatory. Data were analyzed utilizing the two-sample t-test or Chi-squared test as appropriate.
Results: Results of this study show that compared with 47 patients treated postoperatively with patient-controlled analgesia (PCA), 55 patients treated with a multimodal perioperative injection have significantly less consumption of PRN morphine equivalents (0.3 mEq/kg vs. 0.5 mEq/kg; p = 0.02). Furthermore, patients treated with a perioperative injection have significantly higher rates of ambulation on postoperative day 1 compared with those treated with PCA (70.9 vs. 40.4%; p = 0.0023).
Conclusion: Administration of a perioperative injection is effective and should be considered in the perioperative protocol in patients undergoing PSF for AIS.
Level Of Evidence: Therapeutic Level III.
abstract_id: PUBMED:36548429
Layer-specific pain relief pathways originating from primary motor cortex. The primary motor cortex (M1) is involved in the control of voluntary movements and is extensively mapped in this capacity. Although the M1 is implicated in modulation of pain, the underlying circuitry and causal underpinnings remain elusive. We unexpectedly unraveled a connection from the M1 to the nucleus accumbens reward circuitry through a M1 layer 6-mediodorsal thalamus pathway, which specifically suppresses negative emotional valence and associated coping behaviors in neuropathic pain. By contrast, layer 5 M1 neurons connect with specific cell populations in zona incerta and periaqueductal gray to suppress sensory hypersensitivity without altering pain affect. Thus, the M1 employs distinct, layer-specific pathways to attune sensory and aversive-emotional components of neuropathic pain, which can be exploited for purposes of pain relief.
abstract_id: PUBMED:33528930
Allergen-specific subcutaneous immunotherapy-pain evaluation in pediatric age. Background: Allergen-specific immunotherapy is a potentially disease-modifying therapy that is effective for the treatment of patients with allergic diseases. Although the pain caused by the administration of subcutaneous immunotherapy with allergens (SCITA) is considered to be minimal, no studies assessing that pain for the treatment of only pediatric patients have been reported. Objectives: This research aimed to evaluate the pain associated with SCITA for pediatric patients followed at our Immunoallergology Department.
Methods: During four consecutive weeks, the nurse who administered the injection completed a questionnaire recording the child's assessment of the pain associated with SCITA; these questionnaires were randomized before any analyses were done. Two different pain evaluation scales were used, with the choice of scale being determined based on the child's age: the self-reporting faces scale (score: 0-10; 5 to 8 years old) and the numeric scale (score: 0-10; >8 years old). Demographic and clinical data, as well as any adverse reactions, were documented.
Results: We included 46 pediatric patients (mean age: 12.3 ± 2.6 years; 69.5% male), most of whom were suffering from rhinitis/rhinoconjunctivitis and undergoing subcutaneous immunotherapy with mites. Seven local adverse reactions were recorded, and all were mild. Ten patients did not mention any pain associated with SCITA. Of the 36 patients who mentioned some pain, 33 mentioned mild pain (scores between 1 and 3); only three mentioned moderate pain (scores between 4 and 6). For both scales, the median score obtained was 1. The maximum pain reported had a score of 6. No significant differences were observed between different groups of patients.
Conclusions: In this study, SCITA was shown to be a mildly painful procedure that is associated with only a few local reactions. Therefore, SCITA should be considered as a safe option for the treatment of most pediatric patients suffering from allergies.
abstract_id: PUBMED:17300286
Assessment of gastric sensorimotor function in paediatric patients with unexplained dyspeptic symptoms and poor weight gain. Recent studies indicate that impaired meal accommodation or hypersensitivity to distention are highly prevalent in adult functional dyspepsia (FD). Our aim was to investigate whether similar abnormalities also occur in paediatric FD. Sixteen FD patients (15 girls, 10-16 years) were studied. The severity (0-3; 0, absent; 3, severe) of eight dyspeptic symptoms (epigastric pain, fullness, bloating, early satiety, nausea, vomiting, belching and epigastric burning) and the amount of weight loss were determined by questionnaire. All children underwent a gastric barostat study after an overnight fast to determine sensitivity to distention and meal-induced accommodation, which were compared with normal values in young adults (18-22 years). On a separate day, all patients underwent a gastric emptying breath test. A mean weight loss of 4.8 +/- 0.9 kg was present in 14 children. Compared with controls, patients had lower discomfort thresholds to gastric distention (8.8 +/- 1.0 mmHg vs 13.9 +/- 1.9 mmHg, P < 0.02) and gastric accommodation (87 +/- 25 mL vs 154 +/- 20 mL P < 0.04). Hypersensitivity to distention and impaired accommodation were present in respectively nine (56%) and 11 (69%) patients. No relationship was found between barostat and gastric emptying, which was delayed in only three patients. The majority of children with unexplained epigastric symptoms have abnormalities of gastric sensorimotor function.
abstract_id: PUBMED:34561389
Tyrosine kinase type A-specific signalling pathways are critical for mechanical allodynia development and bone alterations in a mouse model of rheumatoid arthritis. Abstract: Rheumatoid arthritis is frequently associated with chronic pain that still remains difficult to treat. Targeting nerve growth factor (NGF) seems very effective to reduce pain in at least osteoarthritis and chronic low back pain but leads to some potential adverse events. Our aim was to better understand the involvement of the intracellular signalling pathways activated by NGF through its specific tyrosine kinase type A (TrkA) receptor in the pathophysiology of rheumatoid arthritis using the complete Freund adjuvant model in our knock-in TrkA/C mice. Our multimodal study demonstrated that knock-in TrkA/C mice exhibited a specific decrease of mechanical allodynia, weight-bearing deficit, peptidergic (CGRP+) and sympathetic (TH+) peripheral nerve sprouting in the joints, a reduction in osteoclast activity and bone resorption markers, and a decrease of CD68-positive cells in the joint with no apparent changes in joint inflammation compared with wild-type mice after arthritis. Finally, transcriptomic analysis shows several differences in dorsal root ganglion mRNA expression of putative mechanotransducers, such as acid-sensing ionic channel 3 and TWIK-related arachidonic acid activated K+ channel, as well as intracellular pathways, such as c-Jun, in the joint or dorsal root ganglia. These results suggest that TrkA-specific intracellular signalling pathways are specifically involved in mechanical hypersensitivity and bone alterations after arthritis using TrkA/C mice.
abstract_id: PUBMED:36900357
Behavioral Voluntary and Social Bioassays Enabling Identification of Complex and Sex-Dependent Pain-(-Related) Phenotypes in Rats with Bone Cancer. Cancer-induced bone pain (CIBP) is a common and devastating symptom with limited treatment options in patients, significantly affecting their quality of life. The use of rodent models is the most common approach to uncovering the mechanisms underlying CIBP; however, the translation of results to the clinic may be hindered because the assessment of pain-related behavior is often based exclusively on reflexive-based methods, which are only partially indicative of relevant pain in patients. To improve the accuracy and strength of the preclinical, experimental model of CIBP in rodents, we used a battery of multimodal behavioral tests that were also aimed at identifying rodent-specific behavioral components by using a home-cage monitoring assay (HCM). Rats of all sexes received an injection with either heat-deactivated (sham-group) or potent mammary gland carcinoma Walker 256 cells into the tibia. By integrating multimodal datasets, we assessed pain-related behavioral trajectories of the CIBP-phenotype, including evoked and non-evoked based assays and HCM. Using principal component analysis (PCA), we discovered sex-specific differences in establishing the CIBP-phenotype, which occurred earlier (and differently) in males. Additionally, HCM phenotyping revealed the occurrence of sensory-affective states manifested by mechanical hypersensitivity in sham when housed with a tumor-bearing cagemate (CIBP) of the same sex. This multimodal battery allows for an in-depth characterization of the CIBP-phenotype under social aspects in rats. The detailed, sex-specific, and rat-specific social phenotyping of CIBP enabled by PCA provides the basis for mechanism-driven studies to ensure robustness and generalizability of results and provide information for targeted drug development in the future.
abstract_id: PUBMED:17964216
Modulation of visceral nociceptive pathways. Increased sensitivity of visceral nociceptive pathways contributes to symptoms in an array of clinical gastrointestinal conditions, however, the search for a consistently effective pharmacological agent to treat these conditions remain elusive. Modulation of visceral nociceptive pathways can occur at peripheral, spinal and supra-spinal sites and a dizzying array of potential drug targets exists. Till date, only tricyclic anti-depressants (TCAs) such as amitriptyline and, more recently, selective serotonin reuptake inhibitors (SSRIs) such as citalopram have demonstrated convincing visceral anti-nociceptive properties and clinical benefit in a limited population of patients with visceral hypersensitivity. Unfortunately, there is an incomplete understanding of the receptors and/or primary site of action at which these compounds exert their effects and significant side effects are often encountered. There is a continuing and concerted effort underway to develop target-specific visceral analgesic/anti-hyperalgesic compounds and the aim of this article is to provide a concise update on the most recent advances in this area.
abstract_id: PUBMED:11572572
Role of autonomic dysfunction in patients with functional dyspepsia. Background: The role of autonomic dysfunction in patients with functional dyspepsia is not completely understood.
Aims: 1. to prospectively assess abnormalities of autonomic function in patients with functional dyspepsia, 2. to assess whether autonomic dysfunction in these patients is associated with a. visceral hypersensitivity or b. delayed gastric emptying or c. severity of dyspeptic symptoms.
Patients: A series of 28 patients with functional dyspepsia and 14 healthy volunteers without gastrointestinal symptoms were studied.
Methods: All patients and controls were submitted to a battery of five standard cardiovascular autonomic reflex tests, dyspeptic questionnaire, gastric barostat tests and gastric emptying tests.
Results: 1. Autonomic function tests showed that both sympathetic and parasympathetic scores of dyspeptic patients were significantly higher than in controls; 2. visceral hypersensitivity was confirmed in dyspeptics in response to proximal gastric distension, demonstrating lower pain threshold; 3. delayed gastric emptying occurred more frequently in patients with functional dyspepsia than in controls; 4. epigastric pain and epigastric burning were significantly more prevalent in patients with definite evidence of autonomic dysfunction; 5. No significant association was found between presence of autonomic dysfunction and presence of visceral hypersensitivity or presence of delayed gastric emptying in patients with functional dyspepsia.
Conclusions: We concluded that a possible role of autonomic dysfunction in eliciting dyspeptic symptoms could not be determined from alterations in visceral hypersensitivity or delayed gastric emptying. Autonomic dysfunction might not be the major explanation for symptoms associated with functional dyspepsia.
abstract_id: PUBMED:33911892
Undetected Jawbone Marrow Defects as Inflammatory and Degenerative Signaling Pathways: Chemokine RANTES/CCL5 as a Possible Link Between the Jawbone and Systemic Interactions? Background: Cytokines, especially chemokines, are of increasing interest in immunology. This study characterizes the little-known phenomenon of "bone marrow defects of the jawbone" (BMDJ) with known overexpression of the chemokine RANTES/CCL5 (R/C).
Purpose: Our investigation clarifies why BMDJ and the intensity of local R/C overexpression are challenging to detect, as examined in patients with seven different systemic immunological diseases. Specifically, we investigate whether R/C overexpression is specific to certain disease groups or if it represents a type of signal disruption found in all systemic immunological diseases.
Patients And Methods: In a total of 301 patients, BMDJ was surgically repaired during clinical practice to reduce "silent inflammation" associated with the presence of jaw-related pathologies. In each case of BMDJ, bone density was measured preoperatively (in Hounsfield units [HU]), while R/C expression was measured postoperatively. Each of the 301 patients suffered from allergies, atypical facial and trigeminal pain, or were diagnosed with neurodegenerative diseases, tumors, rheumatism, chronic fatigue syndrome, or parasympathetic disorders.
Results: In all BMDJ cases, strongly negative HU values indicated decreased bone density or osteolysis. Consistently, all cases of BMDJ showed elevated R/C expression. These findings were consistently observed in every disease group.
Discussion: BMDJ was confirmed in all patients, as verified by the HU measurements and laboratory results related to R/C expression. The hypothesis that a specific subset of the seven disease groups could be distinguished either based on the increased presence of BMDJ and by the overexpression of R/C could not be confirmed. A brief literature review confirms the importance of R/C in the etiology of each of the seven disease groups.
Conclusion: In this research, the crucial role played by BMDJ and the chemokine R/C in inflammatory and immune diseases is discussed for seven groups of patients. Each specific immune disease can be influenced or propelled by BMDJ-derived R/C inflammatory signaling pathways.
Answer: The study of dyspeptic patients with visceral hypersensitivity aimed to determine whether the symptom profile during gastric distensions reflects sensitization of pain-specific or multimodal pathways. The results showed that hypersensitive dyspeptic patients reached the same intensity of painful and non-painful sensations as normosensitive patients but at lower distending pressures. Hyperalgesia occurred in hypersensitive dyspeptic patients at distending pressures that also induced intense non-painful sensations. This suggests that the hyperalgesia in these patients does not result from isolated upregulation of pain-specific afferents but rather from a sensitization of multimodal pathways that mediate both painful and non-painful sensations (PUBMED:15951533). |
Instruction: Predictive value of the NIHSS for ADL outcome after ischemic hemispheric stroke: does timing of early assessment matter?
Abstracts:
abstract_id: PUBMED:20439108
Predictive value of the NIHSS for ADL outcome after ischemic hemispheric stroke: does timing of early assessment matter? Background And Purpose: Early prediction of future functional abilities is important for stroke management. The objective of the present study was to investigate the predictive value of the 13-item National Institutes of Health Stroke Scale (NIHSS), measured within 72 h after stroke, for the outcome in terms of activities of daily living (ADL) 6 months post stroke. The second aim was to examine if the timing of NIHSS assessment during the first days post stroke affects the accuracy of predicting ADL outcome 6 months post stroke.
Methods: Baseline characteristics including neurological deficits were measured in 188 stroke patients, using the 13-item NIHSS, within 72 h and at 5 and 9 days after a first-ever ischemic hemispheric stroke. Outcome in terms of ADL dependency was measured with the Barthel Index (BI) at 6 months post stroke. The area under the curve (AUC) from the receiver operating characteristic (ROC) was used to determine the discriminative properties of the NIHSS at days 2, 5 and 9 for outcome of the BI. In addition, at optimal cut-off odds ratio (OR), sensitivity, specificity, positive (PPV) and negative predicted values (NPV) for the different moments of NIHSS assessment post stroke were calculated.
Results: One hundred and fifty-nine of the 188 patients were assessed at a mean of 2.2 (1.3), 5.4 (1.4) and 9.0 (1.8) days after stroke. Significant Spearman rank correlation coefficients were found between BI at 6 months and NIHSS scores on days 2 (r(s)=0.549, p<0.001), 5 (r(s)=0.592, p<0.001) and 9 (r(s)=0.567, p<0.001). The AUC ranged from 0.789 (95%CI, 0.715-0.864) for measurements on day 2 to 0.804 (95%CI, 0.733-0.874) and 0.808 (95%CI, 0.739-0.877) for days 5 and 9, respectively. Odds ratio's ranged from 0.143 (95%CI, 0.069-0.295) for assessment on day 2 to a maximum of 0.148 (95%CI, 0.073-0.301) for day 5. The NPV gradually increased from 0.610 (95%CI, 0.536-0.672) for assessment on day 2 to 0.679 (95%CI, 0.578-0.765) for day 9, whereas PPV declined from 0.810 (95%CI, 0.747-0.875) for assessment on day 2 to 0.767 (95%CI, 0.712-0.814) for day 9. The overall accuracy of predictions increased from 71.7% for assessment on day 2 to 73.6% for day 9.
Conclusions: When measured within 9 days, the 13-item NIHSS is highly associated with final outcome in terms of BI at 6 months post stroke. The moment of assessment beyond 2 days post stroke does not significantly affect the accuracy of prediction of ADL dependency at 6 months. The NIHSS can therefore be used at acute hospital stroke units for early rehabilitation management during the first 9 days post stroke, as the accuracy of prediction remained about 72%, irrespective of the moment of assessment.
abstract_id: PUBMED:23936957
A clinical study of ischaemic strokes with micro-albuminuria for risk stratification, short-term predictive value and outcome. Stroke results more than 4.3 million deaths worldwide per annum and 85% of all strokes are ischaemic in nature. Besides numerous modifiable and non-modifiable known risk factors, microalbuminuria is thought to be an important marker of global endothelial dysfunction and associated with cardiovascular disease including stroke. Fifty ischaemic stroke cases and 50 (age, sex matched) control subjects were subjected to study to compare and evaluate risk stratification of micro-albuminuria, its predictive value and outcome on day 1 and day 7 among admitted ischaemic stroke cases.The result was found that micro-albuminuria was present in 66% of ischaemic stroke cases compared to only 8% of control group (p < 0.001). Most validated National Institute of Health Stroke Scale (NIHSS) score was used for evaluation and calculation of predictive value and outcome of micro-albuminuria positive patient where higher value indicates poor prognosis, and the result was mean NIHSS score 29.12 versus 18.88 between two groups of strokes ie, with and without micro-albuminuria. Out of 50 ischaemic stroke patients 33 (66%) had micro-albuminuria. Among 11 patients who died, 10 (90.9%) had micro-albuminuria and NIHSS score was 33.64 and 25.0 on day 1 and day 7. Among 39 patients who were discharged, 23 patients (58.97%) were MA positive and NIHSS score was much less than death group ie, 23.38 and 16.38 on day 1 and day 7 respectively. So this study reveals micro-albuminuria itself results higher risk for ischaemic stroke compared to control group and it shows good predictive value for early assessment of clinical severity and subsequent fatal outcome. This is also simple, cost effective and affordable.
abstract_id: PUBMED:23975559
Predictive ability of C-reactive protein for early mortality after ischemic stroke: comparison with NIHSS score. We aimed to compare the association of high-sensitivity C-reactive protein (CRP) and National Institutes of Health Stroke Scale (NIHSS) score with mortality risk and to determine the optimal threshold of CRP for prediction of mortality in ischemic-stroke patients. A series of 162 patients with first-ever ischemic-stroke admitted within 24 h after onset of symptoms was enrolled. CRP and NIHSS score were estimated on admission and their predictive abilities for mortality at 7 days were determined by logistic-regression analyses. Receiver-Operating Characteristic (ROC) curves were depicted to identify the optimal cut-off of CRP, using the maximum Youden-index and the shortest-distance methods. Deceased patients had higher levels of CRP and NIHSS on admission (8.87 ± 7.11 vs. 2.20 ± 4.71 mg/l for CRP, and 17.31 ± 6.36 vs. 8.70 ± 4.85 U for NIHSS, respectively, P < 0.01). CRP and NIHSS were correlated with each other (r (2) = 0.39, P < 0.001) and were also independently associated with increased risk of mortality [odds ratios (95 % confidence interval) of 1.16 (1.05-1.28) and 1.20 (1.07-1.35) for CRP and NIHSS, respectively, P < 0.01]. The areas under the ROC curves of CRP and NIHSS for mortality were 0.82 and 0.84, respectively. The CRP value of 2.2 mg/l was identified as the optimal cut-off value for prediction of mortality within 7 days (sensitivity: 0.81, specificity: 0.80). Thus, CRP as an independent predictor of mortality following ischemic-stroke is comparable with NIHSS and the value of 2.2 mg/l yields the optimum sensitivity and specificity for mortality prediction.
abstract_id: PUBMED:33508726
Predictive Validity of the Scale for the Assessment and Rating of Ataxia for Medium-Term Functional Status in Acute Ataxic Stroke. Objectives: This study examines the prognostic validity of the Scale for the Assessment and Rating of Ataxia for patients with acute stroke.
Materials And Methods: We enrolled 120 patients with posterior circulation stroke having ischemic or hemorrhagic lesions with ataxia who had physical therapy. We recorded the clinical stroke features and obtained the scale for the assessment and rating of ataxia and National Institutes of Health Stroke Scale scores 7 days after admission and at discharge. Predictive factors for a 3-month modified Rankin Scale score of <3 were investigated.
Results: During hospitalization, the Scale for the Assessment and Rating of Ataxia score decreased from 7.5 (interquartile range, 4.5-12.5) to 4.0 (interquartile range, 1.5-8.0) points, whereas the National Institutes of Health Stroke Scale score changed from 1 (interquartile range, 0-3) to 1 (interquartile range, 0-2) point. A significant correlation between functional outcome and the Scale for the Assessment and Rating of Ataxia scores 7 days after onset was observed. The cutoff value for the assessment and rating of ataxia for predicting favorable outcome (modified Rankin scale, 0-2) at 3 months post-onset was 14 points (0-40) at 7 days after onset.
Conclusions: The Scale for the Assessment and Rating of Ataxia scores showed good responsiveness to neurological changes in patients with acute ataxic stroke, could predict functional outcomes 3 months after onset on day 7, and could be a useful and reliable marker for patients with ataxic stroke.
abstract_id: PUBMED:32912550
Role of Non-Perfusion Factors in Mildly Symptomatic Large Vessel Occlusion Stroke. Introduction: Uncertainty regarding reperfusion of mildly-symptomatic (minor) large vessel occlusion (LVO)-strokes exists. Recently, benefits from reperfusion were suggested. However, there is still no strong data to support this. Furthermore, a proportion of those patients don't improve even after non-hemorrhagic reperfusion. Our study evaluated whether or not non-perfusion factors account for such persistent deconditioning.
Methods: Patients with identified minor LVO-strokes (NIHSS ≤ 8) from our stroke alert registry between January-2016 and May-2018 were included. Variables/ predictors of outcome were tested using univariate/multivariate logistic and linear regression analyses. Three month-modified ranking scale (mRS) was used to differentiate between favorable (mRS = 0-2) and unfavorable outcomes (mRS = 3-6).
Results: Eighty-one patients were included. Significant differences between the two outcome groups regarding admission-NIHSS and discharge-NIHSS existed (OR = 0.47, 0.49 / p = 0.0005, <0.0001 respectively).The two groups had matching perfusion measures. In the poor outcome group, discharge-NIHSS was unchanged from the admission-NIHSS while in the good outcome group, discharge-NIHSS significantly improved.
Conclusion: Admission and discharge NIHSS are independent predictors of outcome in patients with minor-LVO strokes. Unchanged discharge-NIHSS predicts worse outcomes while improved discharge-NIHSS predicts good outcomes. Unchanged NIHSS in the poor outcome group was independent of the perfusion parameters. In literature, complement activation and pro-inflammatory responses to ischemia might account for the progression of stroke symptoms in major-strokes. Our study concludes similar phenomena might be present in minor-strokes. Therefore, discharge-NIHSS may be useful as a clinical marker for future therapies.
abstract_id: PUBMED:34318373
NIHSS-the Alberta Stroke Program Early CT Score mismatch in guiding thrombolysis in patients with acute ischemic stroke. Objective: This study investigates the mismatch between the National Institutes of Health Stroke Scale (NIHSS) score and the computed tomography (CT) findings measured by the Alberta Stroke Program Early CT Score (ASPECTS) for predicting the functional outcome and safety of intravenous thrombolysis (IVT) treatment in patients with acute ischemic stroke (AIS).
Methods: This prospective observational study includes patients with AIS who underwent CT imaging within 4.5 h of the onset of symptoms. Patients were divided into the NIHSS-ASPECTS mismatch (NAM)-positive and NAM-negative groups (group P and N, respectively). The clinical outcome was assessed using the Modified Rankin Scale (mRS). Safety outcomes included progression, symptomatic intracerebral hemorrhage (sICH), intracerebral hemorrhage (ICH), adverse events, clinical adverse events, and mortality.
Results: A total of 208 patients were enrolled in the study. In group P, IVT treatment was associated with a good functional outcome at 3 months (p = 0.005) and 1 year (p = 0.001). A higher percentage of patients with favorable mRS (0-2) (p = 0.01) and excellent mRS (0-1) (p = 0.011) functional outcomes was obtained at 1 year in group P with IVT treatment. Group N did not benefit from the same treatment (p = 0.352 and p = 0.480 at 3 months and 1 year, respectively). There were no statistically significant differences in sICH, ICH, mortality rates, or other risks between the IVT and conventional treatment groups.
Conclusion: IVT treatment is associated with a good functional outcome in patients with NAM, without increasing the risks of sICH, ICH, mortality, or other negative outcomes. NAM promises to be an easily obtained indicator for guiding the treatment decisions of AIS.
abstract_id: PUBMED:16202820
Predictive value of median-SSEP in early phase of stroke: a comparison in supratentorial infarction and hemorrhage. Objective: To compare the prognostic value of median somatosensory evoked potentials (M-SSEP) changes in the early phase of supratentorial infarction and hemorrhage.
Material And Methods: This study includes 130 patients (mean age 62+/-11.4 years, 43 women, large middle cerebral artery territory infarction in 36 patients, restricted/lacunar in 55, massive supratentorial hemorrhage in 10, small/medium size hemorrhage in 31). M-SSEP were recorded early (0-7 days in ischemia, 0-21 days in hemorrhage) and patients stratified into groups with absent, abnormal, normal response. Clinical state was determined by the Medical Research Council (MRC) scale, Barthel Index and Rankin score and followed for at least 6 months.
Results: Moderate prognostic correlation was established between N20-P25 amplitudes (r=0.34, p<0.05) and N20-P25 amplitude ratio (r=0.45, p<0.01) and Barthel Index at 6 months in patients with ischemic stroke. Moderate relationship (r=-0.34, p<0.05) exists also between N20-P25 ratio and Rankin score at 6 months in patients with small/medium size hemorrhage. In large infarctions and small/medium size cerebral hemorrhages correlations with all clinical indices of outcome are weak. In massive hemorrhage, only a weak correlation (r=-0.19, p<0.05) between amplitude ratio and Rankin score was found. The combination of initial MRC and N20-P25 amplitude ratio has 10% (in hemorrhage) to 15% (in infarction) greater prognostic value (p<0.05) than initial alone.
Conclusions: M-SSEP have independent predictive value regarding functional recovery in ischemic stroke and small/medium size cerebral hemorrhage. Combined assessment of initial MRC and M-SSEP substantially improves prognosis in acute stroke.
abstract_id: PUBMED:17082505
Predictive value of ischemic lesion volume assessed with magnetic resonance imaging for neurological deficits and functional outcome poststroke: A critical review of the literature. Objective: Ischemic lesion volume is assumed to be an important predictor of poststroke neurological deficits and functional outcome. This critical review examines the methodological quality of MRI studies and the predictive value of hemispheric infarct volume for neurological deficits (at body function level) and functional outcome (at activities level).
Methods: Using Medline, PiCarta, and Embase to identify studies, 13 of the 747 identified studies met the authors' inclusion criteria. Subsequently, studies were tested for adherence to the key methodological criteria for internal, statistical, and external validity. Each criterion was weighted binary, and studies with 6 points or more were judged to be valid for assessing the predictive value of MRI for outcome.
Results: The 13 included studies had several methodological weaknesses with respect to internal validity, and none of them took lesion location into account. Only a few used outcome measures according to the International Classification of Functioning, Disability and Health and followed patients beyond 6 months. Correlation coefficients between MRI lesion volume and outcomes were higher for outcomes defined at body function level (National Institutes of Health Stroke Scale; median 0.67; range: 0.57-0.91) than for those defined at the level of activities (Barthel Index; median -0.49; range: -0.33 to -0.74).
Conclusions: Methodological shortcomings of most studies confound the prognostic value of MRI in predicting stroke outcome, and few studies have focused on functional outcome. Future studies should investigate the added value of MRI volume over clinical neurological variables in predicting functional outcome beyond 6 months poststroke.
abstract_id: PUBMED:36701939
Predictive value of computed tomography perfusion for acute ischemic stroke patients with ASPECTS < 6 in an early time window. Objective: The standard for computed tomography perfusion (CTP) assessment has not been well established in early acute ischemic stroke (AIS). We aimed to examine the prognostic factors for good outcomes in patients who received CTP, with an Alberta Stroke Program Early CT Score (ASPECTS) < 6 after endovascular thrombectomy (EVT) in the early time window (0-6 h).
Methods: We retrospectively reviewed 59 patients who met the criteria from October 2019 to April 2021. Based on the modified Rankin Score (mRS) at 90 days, the patients were divided into a good outcome group (mRS 0-2) and a poor outcome group (mRS 3-6). Baseline and procedural characteristics were collected for unilateral and multivariate regression analyses to explore the influencing factors for good outcomes.
Results: Of the 59 patients included, good outcomes were observed in 21 (35.6%). Multivariate logistic regression analysis showed that smaller ischemic core volume (odds ratio [OR]: 0.950; 95% CI: 0.908-0.994; P = 0.026), lower National Institutes of Health Stroke Scale (NIHSS) score (OR: 0.750; 95% CI: 0.593-0.949; P = 0.017) and shorter stroke onset to reperfusion time (ORT) (OR: 0.981; 95% CI: 0.966-0.996; P = 0.016) were independent predictors for good outcomes at 90 days.
Conclusion: Smaller ischemic core volume based on CTP, lower NIHSS score and shorter ORT were significant independent predictors of good outcomes in patients with ASPECTS < 6 in the early time window after EVT.
abstract_id: PUBMED:32588608
Clinical and biochemical predictors of late-outcome in patients after ischemic stroke. Objective: The aim of this study is to evaluate neurological scales, as well as biochemical and radiological parameters measured on day 10 after ischemic stroke (IS), according to their value as predictors of the long-term outcome.
Material And Methods: 45 patients were assessed according to the Barthel Index (BI) and National Institute of Health Stroke Scale (NIHSS) on day 10, and according to Modified Rankin Scale (mRS) 3 months after the onset of IS. On day 10 of IS, the serum level of C-reactive protein (CRP), albumin, D-dimers (DD), S100BB and Tau proteins was measured and the volume of ischemic focus assessed with the use of Computed Tomography (CT). The patients were divided into groups with good outcome (GO) and mRS 0-2, and with bad outcome (BO) and mRS 3-6.
Results: NIHSS and BI scores (p<0.001), the volume of ischemic focus (p<0.01), CRP (p<0.01) and albumin level (p<0.05), but not DD, S100BB and Tau protein levels evaluated on day 10, correlated with mRS after 3 months since IS onset. Patients from the BO group were observed to have lower BI (p=0.001), higher NIHSS (p<0.01) and CRP levels (p<0.05), and bigger volume of ischemic focus (p<0.05) measured on day 10 of IS. In the GO group, there were more patients with atherosclerotic etiology (p=0.02 x2=7.856). Regression analysis showed that only the BI score assessed on day 10 of IS can predict the outcome after 3 months assessed by mRS (OR=1.102, 95%, CI:1.01-1.203; p=0.001).
Conclusions: BI assessed on day 10 has a predictive value for the outcome evaluated by mRS 3 months after the onset of IS.
Answer: The predictive value of the National Institutes of Health Stroke Scale (NIHSS) for activities of daily living (ADL) outcome after ischemic hemispheric stroke has been investigated, and the timing of early assessment has been considered in relation to its accuracy. A study by PUBMED:20439108 found that the NIHSS, when measured within 72 hours after stroke and at 5 and 9 days post-stroke, is highly associated with ADL dependency outcomes at 6 months. The accuracy of predicting ADL dependency at 6 months post-stroke did not significantly change based on whether the NIHSS was assessed at 2, 5, or 9 days after the stroke. The overall accuracy of predictions increased slightly from 71.7% for assessment on day 2 to 73.6% for day 9, indicating that the timing of assessment beyond 2 days post-stroke does not significantly affect the accuracy of prediction. Therefore, the NIHSS can be used for early rehabilitation management during the first 9 days post-stroke with about 72% accuracy, irrespective of the moment of assessment.
In summary, the timing of early NIHSS assessment within the first 9 days after an ischemic hemispheric stroke does not significantly impact the predictive value for ADL outcomes at 6 months post-stroke. The scale remains a reliable tool for early rehabilitation management in acute hospital stroke units during this period. |
Instruction: Is Seprafilm valuable in infant cardiac redo procedures?
Abstracts:
abstract_id: PUBMED:25880562
Is Seprafilm valuable in infant cardiac redo procedures? Background: Morbidity and mortality are higher for cardiac reoperations than first operation due to the presence of post-operative adhesions. We retrospectively evaluated the efficacy of the bioresorbable membrane Seprafilm to prevent pericardial adhesions after cardiac surgery in a paediatric congenital heart disease population.
Methods: Seventy-one children undergoing reoperations with sternotomy redo and cardiopulmonary bypass for congenital malformations were included. Twenty-nine of these patients were reoperated after previous application of Seprafilm (treatment group). The duration of dissection, aortic cross clamping and total surgery were recorded. A tenacity score was established for each intervention from the surgeon's description in the operating report.
Results: In multivariate analysis, the duration of dissection and the tenacity score were lower in the treatment than control group (p < 0.01), independent of age and interval since preceding surgery.
Conclusion: Our results suggest that Seprafilm is effective in reducing the post-operative adhesions associated with infant cardiac surgery. We recommend the use of Seprafilm in paediatric cardiac surgery when staged surgical interventions are necessary.
abstract_id: PUBMED:33442208
The impact of advances in percutaneous catheter interventions on redo cardiac surgery. Toward the end of the twentieth century, redo cardiac surgery accounted for approximately 15-20% of total cardiac surgical volume. Major risk factors for redo cardiac surgery include young age at time of the first operation, progression of native coronary artery disease (CAD), vein graft atherosclerosis, bioprosthetic valve failure and endocarditis, and transplantation for end stage heart failure. Historically, redo coronary artery bypass grafting (CABG) alone carried a mortality risk of around 4%. Factors such as older age, female sex, comorbidities, combined procedures, hemodynamic instability, and emergency procedures contributed to even higher mortality and morbidity. These poor outcomes made it necessary to look for less invasive alternate methods of treatment. Advances in catheter-based interventions have made a major impact on redo cardiac surgeries, making it no longer the first option in a majority of cases. Percutaneous interventions for recurrence following CABG, transcutaneous aortic valve replacement (TAVR) for calcific aortic stenosis, valve in valve (VIV) implantations, device closure of paravalvular leaks (PVL), and thoracic endovascular aortic repair (TEVAR) for residual and recurrent aneurysms and mitral clip to correct mitral regurgitation (MR) in heart failure are rapidly developing or developed, obviating the need for redo cardiac surgery. Our intent is to review these advances and their impact on redo cardiac surgery.
abstract_id: PUBMED:26527452
Predictors of in-hospital mortality following redo cardiac surgery: Single center experience. Purpose: Redo cardiac operations represent one of the main challenges in heart surgery. The purpose of the study was to analyze the predictors of in-hospital mortality in patients undergoing reoperative cardiac surgery by a single surgical team.
Methods: A total of 1367 patients underwent cardiac surgical procedures and prospectively entered into a computerized database. Patients were divided into 2 groups based on the reoperative cardiac surgery (n = 109) and control group (n = 1258). Uni- and multivariate logistic regression analysis were performed to evaluate the possible predictors of hospital mortality.
Results: Mean age was 56 ± 13, and 46% were female in redo group. In-hospital mortality was 4.6 vs. 2.2%, p = 0.11. EuroSCORE (6 vs. 3; p < 0.01), cardiopulmonary bypass time (90 vs. 71 min; p < 0.01), postoperative bleeding (450 vs. 350 ml; p < 0.01), postoperative atrial fibrillation (AF) (29 vs. 16%; p < 0.01), and inotropic support (58 vs. 31%; p = 0.001) were significantly different. These variables were entered into uni- and multivariate regression analysis. Postoperative AF (OR1.76, p = 0.007) and EuroSCORE (OR 1.42, p < 0.01) were significant risk factors predicting hospital mortality.
Conclusions: Reoperative cardiac surgery can be performed under similar risks as primary operations. Postoperative AF and EuroSCORE are predictors of in-hospital mortality for redo cases.
abstract_id: PUBMED:11450118
Experimental results of the use of hyaluronic acid based materials (CV Seprafilm and CV Sepracoat) in postoperative pericardial adhesions Repeat cardiac surgical procedures are associated with increased technical difficulty and risk related to the presence of dense adhesions between the heart and the surrounding tissues. We examined the efficacy of a bioabsorbable membrane containing hyaluronic acid in the prevention of pericardial adhesions in 23 rabbits. After thoracotomy and pericardiotomy the animals were divided in three groups: Group 1 (9 animals) in which the epicardial surfaces were covered by Seprafilm membrane, Group 2 (9 animals) treated with both Seprafilm membrane and Sepracoat solution, and Group 3 (5 animals) as controls. The animals were reexplored at 10, 30 and 60 days: no intrapericardial adhesions were found in all the animals of Group 2. In 4 animals (44%) of Group 1 localized post-operative adhesions were detected, in absence of epicardial hyperplasia; in contrast, dense and diffuse adhesions were present in all the control animals. The use of the bioabsorbable membrane Seprafilm significantly reduces adhesion formation even if better results are possible with the previous intrapericardial administration of Sepracoat solution. Application of these biocompatible products could reduce the technical difficulty and risk of repeat surgical procedures.
abstract_id: PUBMED:32652793
The long-term impact of peripheral cannulation for redo cardiac surgery. Background: Redo cardiac surgery carries an inherent risk for adverse short-term outcomes and worse long-term survival. Strategies to mitigate these risks have been numerous, including initiation of cardiopulmonary bypass via peripheral cannulation before resternotomy. This study evaluated the impact of central versus peripheral cannulation on long-term survival after redo cardiac surgery.
Methods: This was an observational study of open cardiac surgeries between 2010 and 2018. Patients undergoing open cardiac surgery that utilized cardiopulmonary bypass, who also had more than equal to 1 prior cardiac surgery, were identified. Kaplan-Meier survival estimation and multivariable Cox regression analysis were performed to assess the impact of peripheral cannulation on survival. To isolate long-term survival, patients with operative mortality were excluded and survival time was counted from the date of discharge until the date of death.
Results: Of the 1660 patients with more than equal to 1 prior cardiac surgery, 91 (5.5%) received peripheral cannulation. After excluding patients with operative mortality and after multivariable risk-adjustment, the peripheral cannulation group had significantly increased hazard of death, as compared to the central cannulation group (HR 1.53, 95% CI: 1.01, 2.30, P = .044). Yet, there were no relevant differences for other postoperative outcomes, including blood product requirement, prolonged ventilation (>24 hours), pneumonia, reoperation for bleeding, stroke, sepsis, and new dialysis requirement.
Conclusions: This is the first study reporting the long-term impact of peripheral cannulation for redo cardiac surgery after excluding patients with operative mortality. These data suggest that central cannulation may to be the preferred approach to redo cardiac surgery whenever safe and possible.
abstract_id: PUBMED:26777931
Hemopatch Application for Ventricular Wall Laceration in Redo Cardiac Surgical Procedures. As survival among patients with complex congenital heart disease continues to improve, long-term survivors frequently require redo surgical procedures, with potentially escalating technical difficulty and bleeding risk. This report describes our experience with a new hemostatic pad, Hemopatch (Baxter Deutschland GmbH, Unterschleissheim, Germany) in redo cardiac surgery.
abstract_id: PUBMED:30549222
Safe and easy technique for the laparoscopic application of Seprafilm® in gynecologic surgery. Introduction: Laparoscopic surgery is a minimally invasive surgery, and the rate of postoperative adhesions is low. Although Seprafilm® helps to reduce adhesions, its application in the abdominal cavity during laparoscopic surgery is difficult because of its material. Therefore, we propose an easy method for applying this adhesion barrier.
Materials And Surgical Technique: The Seprafilm is cut into four equal pieces. The four pieces are stacked, firmly folded twice, and grasped with the forceps. The reducer sleeve is slid over the bundle of Seprafilm. The forceps with the reducer sleeve is inserted through a 12-mm trocar near the target area. The reducer sleeve is then slid down the forceps to uncover the Seprafilm. Finally, each piece of Seprafilm is applied over the suture area. In all cases, the Seprafilm was successfully applied to the intended target. There were no cases in which Seprafilm was incompletely applied or in which it could not be used because of moistening. The average application times of surgeon 1 and surgeon 2 were 4.8 min and 5.0 min, respectively; this difference was not significant. There were no postoperative complications in any case.
Discussion: It is safe and easy to use our simple technique to apply Seprafilm adhesion barrier laparoscopically. Further studies are warranted to prove Seprafilm's efficacy after such application.
abstract_id: PUBMED:34120802
Comparison of robotic and conventional sternotomy in redo mitral valve surgery. Background/purpose: Redo operation for mitral valve surgery carries higher risks than first time cardiac surgery. The adhesion between sternum and heart, and also the complexity of second time operation make the redo operation more difficult. The robotic surgery carries some benefit in terms of magnification, assisted by the scope view and precise movement of the instruments. We compared the results of our robotic redo mitral valve surgeries with those of conventional re-sternotomy.
Methods: Medical records of patients who underwent redo mitral valve surgeries between 2012 and 2019 at our hospital were retrospectively analyzed. Demographic data, patients' medical histories, presenting symptoms, image analyses, echocardiogram data, operative procedures and postoperative clinical outcomes were collected through chart review.
Results: A total of 67 redo mitral valve surgeries, including 23 robotic and 44 re-sternotomy procedures were performed. There were no differences in age, previous operation times, and intervals to previous surgery. Comorbidities of both groups were similar. There was no surgical mortality in the robotic group, and it was 9.0% in the re-sternotomy group (p = 0.287). Operation time was shorter in the robotic group (176 vs. 321 min; robotic vs. re-sternotomy, p=0.0279). Blood transfusion was lower in the robotic group (1 vs. 2 units; robotic vs. re-sternotomy, p = 0.01189). The ventilation time, ICU stay time, and recheck bleeding rate were similar in both groups.
Conclusion: In select patients, robotic redo mitral valve surgery is safe and feasible. It could offer low operative mortality. It is associated with shorter operative times, than re-sternotomy and provides equal immediate operative results.
abstract_id: PUBMED:31178024
Intraoperative Implantation of Temporary Endocardial Pacing Catheter During Thoracoscopic Redo Tricuspid Surgery. Background: The placement of a temporary epicardial pacing wire is a challenge during a minimally invasive redo cardiac operation. The aim of this study is to assess the application of temporary endocardial pacing in patients who underwent minimally invasive redo tricuspid surgery.
Methods: Perioperative data of consecutive patients who underwent thoracoscopic redo tricuspid surgery were collected. All the tricuspid surgeries and combined procedures were performed under peripheral cardiopulmonary bypass without aortic cross-clamping. A sheath was introduced into the right jugular vein beside the percutaneous superior vena cava cannula and a temporary endocardial pacing catheter was guided into the right ventricle via the sheath prior to the right atrial closure. The pacemaker was connected and run as needed during or after operation.
Results: A total of 33 patients who underwent thoracoscopic redo tricuspid surgery were enrolled. Symptomatic tricuspid valve regurgitation (93.9%) and tricuspid valvular prosthesis obstruction (6.1%) after previous cardiac operations were noted as indications for a redo surgery. The mean time from previous cardiac operation to this time redo surgery was 13.3±6.4years. Isolated tricuspid valve replacement was performed in 18 patients (54.5%) and tricuspid valve plasty combined with or without mitral valve replacement was performed in 15 patients (45.5%). A temporary endocardial pacing catheter was successfully placed in the right ventricle for all patients with good sensing and pacing. No temporary pacing related complications occurred from insertion to removal of pacing catheter in the patients.
Conclusions: This application of temporary endocardial pacing provided a safe and effective substitute for epicardial pacing in patients who underwent minimally invasive redo tricuspid surgery.
abstract_id: PUBMED:33704457
Use of carotid artery cannulation during redo sternotomy in congenital cardiac surgery: a single-centre experience. Objectives: Management of resternotomy is often a difficult challenge in patients with congenital diseases who have undergone multiple sternotomies. Our goal was to report our single-centre experience with carotid cannulation using a graft interposed during redo sternotomy in complex cardiac reintervention procedures.
Methods: We performed a retrospective review of all patients who had undergone complex redo sternotomies between January 2019 and May 2020 utilizing a cervical cannulation technique with a Gore-Tex graft interposed on the carotid artery. We classified our population study on the basis of the primary diagnosis and the type of surgery. The primary outcomes of our analysis were the evaluation of the safety of the procedure in terms of survival and freedom from neurological events.
Results: We analysed 22 patients who had undergone previous complex operations. The median age and weight at the time of reintervention were 130.35 (range 0.46-435) months and 31.5 (range 2.2-85) kg, respectively. Composite graft carotid cannulation provided adequate arterial flow in all patients with a median arterial flow of 3.5 l/min/m2 (range 0.6-6). One major cardiac injury occurred during sternotomy when emergency cardiopulmonary bypass (CPB) was initiated. Moreover, during their hospital stays, all patients had an uneventful recovery without neurological or vascular complications and no cervical wound infections.
Conclusions: Carotid cannulation using interposition of a side graft on the common carotid artery for arterial inflow is a reliable and safe method for initiation of CPB in complex redo surgeries in patients with congenital disease. Complications directly associated with this type of cannulation are uncommon and allow surgical re-entry with overall low risks.
Answer: Yes, Seprafilm appears to be valuable in infant cardiac redo procedures. A retrospective evaluation of the efficacy of Seprafilm in preventing pericardial adhesions after cardiac surgery in a pediatric congenital heart disease population showed that the duration of dissection and the tenacity score, which measures the severity of adhesions, were lower in the treatment group that used Seprafilm compared to the control group. This was independent of age and the interval since the preceding surgery. The study concluded that Seprafilm is effective in reducing post-operative adhesions associated with infant cardiac surgery and recommended its use in pediatric cardiac surgery when staged surgical interventions are necessary (PUBMED:25880562).
Additionally, experimental results in rabbits using hyaluronic acid-based materials, including Seprafilm, demonstrated a significant reduction in adhesion formation, suggesting that the application of such biocompatible products could reduce the technical difficulty and risk of repeat surgical procedures (PUBMED:11450118).
Moreover, a safe and easy technique for the laparoscopic application of Seprafilm in gynecologic surgery was described, indicating that the material can be successfully applied to the intended target without complications, which may further support its use in a variety of surgical contexts, including potentially in pediatric cardiac surgery (PUBMED:30549222).
These findings suggest that Seprafilm has a valuable role in reducing the morbidity associated with adhesions in infant cardiac redo procedures. |
Instruction: Non-tuberculous pulmonary infections in Scotland: a cluster in Lothian?
Abstracts:
abstract_id: PUBMED:7701462
Non-tuberculous pulmonary infections in Scotland: a cluster in Lothian? Background: A retrospective study was carried out to confirm the clinical impression that, in Lothian, non-tuberculous mycobacterial infections are as common as pulmonary tuberculosis.
Methods: All pulmonary isolates of Mycobacterium tuberculosis/bovis and non-tuberculous mycobacteria in Scotland from April 1990 to March 1993, and the notes of all patients with M malmoense isolates in Lothian, were reviewed. Information on mycobacterial culture procedures in Scottish laboratories was obtained as part of an audit project.
Results: Of all pulmonary isolates of mycobacteria in Lothian 53% (108/205) were non-tuberculous strains compared with 18% (140/800) for Scotland outside Lothian. Although comparable in population size and laboratory techniques, Lothian (108) had almost twice as many isolates of non-tuberculous mycobacteria as Glasgow (56), but the proportions of M malmoense and M avium intracellulare complex were similar in both areas. Of 41 patients with M malmoense isolates in Lothian 30 (75%) had clinically significant lung disease; only one was HIV positive.
Conclusions: Non-tuberculous mycobacteria pose an increasing clinical problem in Scotland as a cause of pulmonary disease. There is a cluster of cases with M malmoense infection in Lothian which cannot be attributed to the high local prevalence of HIV.
abstract_id: PUBMED:18329141
Non tuberculous mycobacterial infections Purpose: Non tuberculous mycobacterial (NTM) infections, also called atypical mycobacterial infections, are caused by environmental mycobacteria and usually occur in cases of general or local immunosupression. These infections usually concern the lungs, the lymphatic system, the skin or the bones tissues. They are sometimes disseminated. In spite of new efficient antibiotics, including macrolides, therapeutic failures are common and favoured by long treatments with their potential adverse effects and drug interactions.
Current Knowledge And Key Points: The prevalence of atypical mycobacterial infections is increasing and is also observed in internal medicine and geriatric wards. Their clinical expression can be varied. Nowadays, these infections are more and more frequent in non-infected HIV patients, whether immunosupressed or not. Concerning other localisations of atypical mycobacterial infections, iatrogenic causes seem to be increasing and cases of nosocomial transmissions have also been described. When a NTM is found in a sample, its role in the cause of an infection must be assessed with criterias distinguishing infection from colonisation.
Future Prospects And Projects: For those who are not locally or generally immunosupressed, it is important to search for an immunological deficiency. Indeed, patients having congenital deficiencies occurring in the interferon and interleukine pathways can develop repeated NTM infections. Therefore, for pulmonary infections in treatment failure and for disseminated infections, an adjuvant treatment by interferon gamma could be proposed. New molecules have recently been tested and can be used in some atypical mycobacterial infections.
abstract_id: PUBMED:35317085
Mycobacterium szulgai: A Rare Cause of Non-Tuberculous Mycobacteria Disseminated Infection. Mycobacterium szulgai (MS) is a rare and slow-growing type of non-tuberculous mycobacteria (NTM), with a human isolation prevalence of less than 0.2% of all NTM cases. MS may cause pulmonary infection, extra-pulmonary localized disease involving the skin, lymph nodes, bone, synovial tissue or kidneys and disseminated infection, when two or more organs are affected. When disseminated infection is present, the patients usually have an underlying immunosuppressive condition. The authors report the case of a 25-year-old patient with systemic lupus erythematosus, presenting with recurrent fever, non-productive coughing, weight loss and asthenia, as well as two violaceous plaques with superficial ulceration in the gluteal region. MS was isolated from the bronchial lavage and skin biopsy cultures, confirming the rare disseminated form of MS infection. After 10 months of follow-up on isoniazid, rifampin, ethambutol and pyrazinamide, no signs of relapse were evident. To date, only 16 other cases of MS disseminated disease have been reported.
abstract_id: PUBMED:27854334
Characterizing Non-Tuberculous Mycobacteria Infection in Bronchiectasis. Chronic airway infection is a key aspect of the pathogenesis of bronchiectasis. A growing interest has been raised on non-tuberculous mycobacteria (NTM) infection. We aimed at describing the clinical characteristics, diagnostic process, therapeutic options and outcomes of bronchiectasis patients with pulmonary NTM (pNTM) disease. This was a prospective, observational study enrolling 261 adult bronchiectasis patients during the stable state at the San Gerardo Hospital, Monza, Italy, from 2012 to 2015. Three groups were identified: pNTM disease; chronic P. aeruginosa infection; chronic infection due to bacteria other than P. aeruginosa. NTM were isolated in 32 (12%) patients, and among them, a diagnosis of pNTM disease was reached in 23 cases. When compared to chronic P. aeruginosa infection, patients with pNTM were more likely to have cylindrical bronchiectasis and a "tree-in-bud" pattern, a history of weight loss, a lower disease severity and a lower number of pulmonary exacerbations. Among pNTM patients who started treatment, 68% showed a radiological improvement, and 37% achieved culture conversion without recurrence, while 21% showed NTM isolation recurrence. NTM isolation seems to be a frequent event in bronchiectasis patients, and few parameters might help to suspect NTM infection. Treatment indications and monitoring still remain an important area for future research.
abstract_id: PUBMED:25013527
Diffuse Pulmonary Uptake of Tc-99m Methylene Diphosphonate in a Patient with Non-tuberculosis Mycobacterial Infection. Extra-osseous uptake of bone-seeking radiopharmaceuticals has been reported at various sites and it is known to be induced by various causes. Diffuse pulmonary infection, such as tuberculosis, can be a cause of lung uptake of bone-scan agent. Here we report on a patient with non-tuberculosis mycobacterial infection (NTM) who demonstrated diffuse pulmonary uptake on Tc-99m MDP bone scan. After medical treatment for NTM, the patient's lung lesions improved. Extraskeletal lung Tc-99m MDP uptake on bone scan may suggest lung parenchymal damage associated with disease activity.
abstract_id: PUBMED:38094879
A Rare Case of Co-existing Non-small Cell Lung Carcinoma and Non-tuberculous Mycobacteria. A solitary pulmonary mass is commonly associated with malignancy; however, the possibility of co-existence with a pulmonary infection is rarely considered. Here, we present an extraordinary case, underscoring the importance of considering the possibility of concurrent lung cancer even when a bronchoscopy examination and bronchial lavage yield a positive mycobacterium culture result.
abstract_id: PUBMED:32372587
Primary Immunodeficiency Disorders in children with Non-Cystic Fibrosis Bronchiectasis. Summary: Introduction. Primary immunodeficiency diseases (PID) are common in patients with non-cystic fibrosis bronchiectasis (NCFB). Our objective was to determine ratio/types of PID in NCFB. Methods. Seventy NCFB patients followed up in a two-year period were enrolled. Results. Median age was 14 years (min-max: 6-30). Male/female ratio was 39/31; parental consanguinity, 38.6%. Most patients with NCFB (84.28%) had their first pulmonary infection within the first year of their lives. Patients had their first pulmonary infection at a median age of 6 months (min-max: 0.5-84), were diagnosed with bronchiectasis at about 9 years (114 months, min-max: 2-276). PID, primary ciliary dyskinesia (PCD), bronchiolitis obliterans, rheumatic/autoimmune diseases, severe congenital heart disease and tuberculosis were evaluated as the most common causes of NCFB. About 40% of patients (n=16) had bronchial hyperreactivity (BH) and asthma. Twenty-nine patients (41.4%) had a PID, and nearly all (n=28) had primary antibody deficiency, including patients with combined T and B cell deficiency. PID and non-PID groups did not differ according to gender, parental consanguinity, age at first pneumonia, age of onset of chronic pulmonary symptoms, bronchiectasis, presence of gastroesophageal reflux disease (GERD), BH and asthma (p greater-than 0.05). Admission to immunology clinic was about 3 years later in PID compared with non-PID group (p less-than 0.001). Five patients got molecular diagnosis, X-linked agammaglobulinemia (n=2), LRBA deficiency (n=1), RASGRP1 deficiency (n=1), MHC Class II deficiency (n=1). They were given monthly IVIG and HSCT was performed for three patients. Conclusions. PID accounted for about 40% of NCFB. Early diagnosis/appropriate treatment have impact on clinical course of a PID patient. Thus, follow-up in also immunology clinics should be a routine for patients who experience pneumonia in the first year of their lives and those with NCFB.
abstract_id: PUBMED:26976549
Update on pulmonary disease due to non-tuberculous mycobacteria. Non-tuberculous mycobacteria (NTM) are emerging worldwide as significant causes of chronic pulmonary infection, posing a number of challenges for both clinicians and researchers. While a number of studies worldwide have described an increasing prevalence of NTM pulmonary disease over time, population-based data are relatively sparse and subject to ascertainment bias. Furthermore, the disease is geographically heterogeneous. While some species are commonly implicated worldwide (Mycobacterium avium complex, Mycobacterium abscessus), others (e.g., Mycobacterium malmoense, Mycobacterium xenopi) are regionally important. Thoracic computed tomography, microbiological testing with identification to the species level, and local epidemiology must all be taken into account to accurately diagnose NTM pulmonary disease. A diagnosis of NTM pulmonary disease does not necessarily imply that treatment is required; a patient-centered approach is essential. When treatment is required, multidrug therapy based on appropriate susceptibility testing for the species in question should be used. New diagnostic and therapeutic modalities are needed to optimize the management of these complicated infections.
abstract_id: PUBMED:23991261
Non-contiguous genome sequence of Mycobacterium simiae strain DSM 44165(T.). Mycobacterium simiae is a non-tuberculosis mycobacterium causing pulmonary infections in both immunocompetent and imunocompromized patients. We announce the draft genome sequence of M. simiae DSM 44165(T). The 5,782,968-bp long genome with 65.15% GC content (one chromosome, no plasmid) contains 5,727 open reading frames (33% with unknown function and 11 ORFs sizing more than 5000 -bp), three rRNA operons, 52 tRNA, one 66-bp tmRNA matching with tmRNA tags from Mycobacterium avium, Mycobacterium tuberculosis, Mycobacterium bovis, Mycobacterium microti, Mycobacterium marinum, and Mycobacterium africanum and 389 DNA repetitive sequences. Comparing ORFs and size distribution between M. simiae and five other Mycobacterium species M. simiae clustered with M. abscessus and M. smegmatis. A 40-kb prophage was predicted in addition to two prophage-like elements, 7-kb and 18-kb in size, but no mycobacteriophage was seen after the observation of 10(6) M. simiae cells. Fifteen putative CRISPRs were found. Three genes were predicted to encode resistance to aminoglycosides, betalactams and macrolide-lincosamide-streptogramin B. A total of 163 CAZYmes were annotated. M. simiae contains ESX-1 to ESX-5 genes encoding for a type-VII secretion system. Availability of the genome sequence may help depict the unique properties of this environmental, opportunistic pathogen.
abstract_id: PUBMED:38314101
Pneumonia due to Achromobacter xylosoxidans with a chronic course resembling non-tuberculous mycobacterial infection. Achromobacter xylosoxidans is a common bacterium that rarely causes pneumonia. Determining whether A. xylosoxidans is the cause of lung infection in patients suspected of having chronic infectious lung disease is challenging because it can present with colonization. We report a case of a 56-year-old immunocompetent woman suspected of having non-tuberculous mycobacteria (NTM) infection on imaging examination and monitored for 3 years. Sputum examinations revealed A. xylosoxidans several times, and it was determined to be a colonization. A. xylosoxidans was isolated from bronchial lavage fluid and aspirated sputum, but no evidence of NTM was observed. She was diagnosed with A. xylosoxidans infection and given ceftazidime for 2 weeks. Her symptoms and imaging findings improved rapidly after treatment, without recurrences. A. xylosoxidans rarely causes chronic lower respiratory tract infections similar to NTM in immunocompetent patients. A. xylosoxidans may be a target for treatment when detected in lower respiratory tract specimens.
Answer: The retrospective study conducted to assess the prevalence of non-tuberculous mycobacterial infections in Scotland revealed a notable cluster of cases in the Lothian region. In Lothian, 53% (108/205) of all pulmonary isolates of mycobacteria were non-tuberculous strains, which was significantly higher than the 18% (140/800) observed in the rest of Scotland. This finding was particularly striking given that Lothian and Glasgow are comparable in population size and laboratory techniques, yet Lothian had almost twice as many isolates of non-tuberculous mycobacteria as Glasgow. Among the patients with M malmoense isolates in Lothian, 75% (30 out of 41) had clinically significant lung disease, and only one was HIV positive. This suggests that the high prevalence of non-tuberculous mycobacterial infections in Lothian cannot be attributed to the local prevalence of HIV, indicating that non-tuberculous mycobacteria pose an increasing clinical problem in Scotland as a cause of pulmonary disease (PUBMED:7701462). |
Instruction: Is ultrasound really helpful in the detection of rib fractures?
Abstracts:
abstract_id: PUBMED:15135274
Is ultrasound really helpful in the detection of rib fractures? Objective: To determine the usefulness of ultrasound in the detection of rib fractures.
Patients And Methods: A prospective study was performed over a 3-month period. Patients presenting with a high clinical suspicion of rib fracture(s) to the Accident and Emergency Department were referred for radiological work-up with a PA chest radiograph, an oblique rib view and a chest ultrasound. Associated lesions, e.g. pleural effusion, splenic laceration and pneumothorax were recorded.
Results: Fourteen patients were radiologically assessed. The mean patient age was 31 years (range 16-55 years) and the M:F ratio 3.7:1 (11 men and 3 women). Ten patients displayed a total of 15 broken ribs. Chest radiography detected 11, oblique rib views 13 and ultrasound 14 broken ribs. Ultrasound findings included discontinuity of cortical alignment in 12 fractures, an acoustic linear edge shadow in nine and a reverberation artifact in six. Concordance with plain film findings, and especially oblique rib views, was good, though better when the rib fractures fragments were markedly displaced. One splenic laceration was detected with an associated small pleural effusion. There were no pneumothoraces. The average time of ultrasound examination was 13 min.
Conclusion: Ultrasound does not significantly increase the detection rate of rib fractures, may be uncomfortable for the patient and is too time-consuming to justify its routine use to detect rib fractures.
abstract_id: PUBMED:30470688
Comparison of the use of lung ultrasound and chest radiography in the diagnosis of rib fractures: a systematic review. Introduction: It is well-recognised that the detection of rib fractures is unreliable using chest radiograph. The aim of this systematic review was to investigate whether the use of lung ultrasound is superior in accuracy to chest radiography, in the diagnosis of rib fractures following blunt chest wall trauma.
Methods: The search filter was used for international online electronic databases including MEDLINE, EMBASE, Cochrane and ScienceDirect, with no imposed time or language limitations. Grey literature was searched. Two review authors completed study selection, data extraction and data synthesis/analysis process. Quality assessment using the Quality Assessment of Diagnostic Accuracy Studies Tool (QUADAS-2) was completed.
Results: 13 studies were included. Overall, study results demonstrated that the use of lung ultrasound in the diagnosis of rib fractures in blunt chest wall trauma patients appears superior compared with chest radiograph. All studies were small, single centre and considered to be at risk of bias on quality assessment. Meta-analysis was not possible due to high levels of heterogeneity, lack of appropriate reference standard and poor study quality.
Discussion: The results demonstrate that lung ultrasound may be superior to chest radiography, but the low quality of the studies means that no definitive statement can be made.
abstract_id: PUBMED:26855659
The usefulness of ultrasound in the diagnosis of patients after chest trauma - two case reports. The effectiveness of ultrasound in diagnosing fractures of the ribs and sternum has been confirmed in the literature. The aim of our study was to present two case reports of patients with chest trauma history in whom ultrasound examination proved useful in the diagnostic process. The role of thoracic ultrasound in the diagnosis of ribs and sternal fractures is discussed as well. The authors conclude the following: 1) the examination was easy to perform and assess, and provided clinically useful conclusions; 2) due to the mobility of the ultrasound machine, the examination may be carried out outside of radiology departments, e.g. by the patient's bedside - in departments of surgery; 3) ultrasound should be the examination of choice after chest trauma and can be performed successfully by non-radiologist physicians.
abstract_id: PUBMED:35692975
Ultrasound Instead of X-Ray to Diagnose Neonatal Fractures: A Feasibility Study Based on a Case Series. Background: Fracture is a common birth injury in neonates, and its diagnosis mainly depends on chest X-ray examination, while ultrasound is typically not included in the diagnostic work-up of neonatal fractures. The aim of this study was to investigate the feasibility of using ultrasound to replace X-rays for the diagnosis of fractures in newborns and to determine the ultrasound characteristics of such fractures.
Methods: Bedside ultrasound with an appropriate probe and scanning angle was performed on 52 newborn infants with suspected fractures based on physical examination findings, and the ultrasound results were compared with the X-ray examination results.
Results: All 52 infants (100%) showed typical signs of fracture on ultrasound, including 46 cases of clavicle fracture, 3 cases of skull fracture, 2 cases of rib fracture, and 1 case of humerus fracture. Ultrasound was able to detect interrupted cortical continuity, displacement or angulation at the broken end, and callus formation during the recovery period. Chest X-ray examination was performed on 30 patients and identified 96.7% (29/30) of fractures, and the coincidence rate between ultrasound and X-ray was 100%. However, the sensitivity of ultrasound was higher than that of X-ray.
Conclusion: Ultrasound diagnosis of neonatal fracture is accurate, reliable, simple, and feasible. Therefore, it can replace X-ray examinations for the routine diagnosis of common types of neonatal bone fractures.
abstract_id: PUBMED:32175160
The effect of low-intensity pulsed ultrasound on rib fracture: An experimental study. Background: In this study, we aimed to investigate the effects of lowintensity pulsed ultrasound on rib fracture healing in a rat model.
Methods: A total of 72 male Wistar-Albino rats were randomly divided into three equal groups. To induce a rib fracture, right thoracotomy was performed under general anesthesia and a 0.5-cm segment was removed from the fourth and fifth ribs. After 24 h of surgery, low-intensity pulsed ultrasound was implemented according to the groups. Group 1 served as the control group for the observation of normal bone healing. Low-intensity pulsed ultrasound was applied at a dose of 20% (2 msn pulse-8 msn pause) 100 mW/cm2 and 50% (5 msn pulse-5 msn pause) 200 mW/cm2 for six min, respectively in Group 2 and Group 3. All subjects were followed for six weeks. Eight animals from each group were sacrificed at two, four, and six weeks for further assessment. Histological alterations in the bone were examined.
Results: Although there was no statistically significant difference in osteoblasts, osteoclasts, new bone formation, and lymphocyte count among the groups, histological consolidation was significantly increased by low-intensity pulsed ultrasound. While low-intensity pulsed ultrasound induced osteoblastic, osteoclastic, and new bone formation, it inhibited lymphocyte infiltration.
Conclusion: Low-intensity pulsed ultrasound, either at low or high doses, induced the histological consolidation of rib fractures and inhibited lymphocyte infiltration. This effect was more prominent in the long-term and at higher dose with increased daily and total administration time. We, therefore, believe that accelerating the natural healing process in patients with rib fractures would enable to treat more effectively in short-term.
abstract_id: PUBMED:25890918
Basic lung ultrasound. Part 1. Normal lung ultrasound and diseases of the chest wall and the pleura Lung ultrasound has become part of the diagnostic armamentarium in Resuscitation and Recovery Units with an enormous potential due to its many advantages: capacity to diagnose more precisely than conventional radiology, earlier diagnosis, convenience due to being able to performed at the bedside, possibility of being performed by one person, absence of ionising radiation, and, due to its dynamic character, is capable of transforming into physiological processes that were once static images. However, lung ultrasound also has its limitations and has a learning curve. The aim of this review is to provide sufficient information that may help the specialist starting in this field to approach the technique with good possibilities of success. To do this, the review is structured into two parts. In the first, the normal ultrasound of the chest wall is presented, as well as the pleura, diaphragm, and lung parenchyma, and the most important pathologies of the chest wall (rib fractures and hematomas), the pleura (pleural effusion and its different types, and pneumothorax), and the diaphragm (hypokinesia and paralysis). In the second part, parenchymal diseases will be approached and will include, atelectasis, pneumonia and abscess, lung oedema, respiratory distress, and pulmonary thromboembolism.
abstract_id: PUBMED:32292684
The Challenges of Ultrasound-guided Thoracic Paravertebral Blocks in Rib Fracture Patients. Thoracic paravertebral blocks (TPVBs) provide an effective pain relief modality in conditions where thoracic epidurals are contraindicated. Historically, TPVBs were placed relying solely on the landmark-based technique, but the availability of ultrasound imaging makes it a valuable and practical tool during the placement of these blocks. TPVBs also provide numerous advantages over thoracic epidurals, namely, minimal hypotension, absence of urinary retention, lack of motor weakness, and remote risk of an epidural hematoma. Utilization of both landmark-based and ultrasound-guided techniques may increase the successful placement of a TPVB. This article reviews relevant sonoanatomy as it pertains to TPVBs. However, certain patient-related issues, including pneumothoraces, surgical emphysema, body habitus, and transverse process fractures, all may make imaging with ultrasound challenging. The changes noted on ultrasound imaging as a result of these issues will be further described in this review.
abstract_id: PUBMED:30450203
The use of a portable ultrasound system in the surgical assessment of rib fractures in an elderly patient. Introduction: Portable ultrasound is a modality of medical ultrasonography that utilizes small and light devices, and is an established diagnostic method used in clinical settings such as Cardiology, Vascular Surgery, Radiology, Endocrinology, Pediatric and Obstetric & Gynecology.
Presentation Of Cases: We present a case report of 86-years old patient who underwent surgical rib fixation for multiple rib fractures followed by falling from standing height and our management experience.
Discussion: The use of portable ultrasound device in operation theatre demonstrates several advantages.We believe that Portable color doppler ultrasound system would be necessary in the management of rib fracture.
Conclusion: This study demonstrates that the portable ultrasound system is a valuable method of imaging in the assessment of rib fractures, and which can save time, economically affordable for many patients, and allow surgeons to make a minor incision in order to avoid complications such as infection, particularly in this group of vulnerable patients.
abstract_id: PUBMED:37842508
Exploring a Cadaver-Based Model for Teaching Emergency Medicine Residents Ultrasound-Guided Serratus Anterior Plane Blocks. Background Ultrasound-guided regional anesthesia (USGRA) is increasingly being incorporated into ED clinical practice to provide pain control for a variety of traumatic injuries. The serratus anterior plane block (SAPB) has been shown to be effective at reducing intravenous opioid use and improving pulmonary function for patients with rib fractures, but there is limited prior research about how to safely teach this procedure to emergency medicine (EM) residents. Our goal was to examine the effect of a cadaver-based education model on EM residents' confidence in performing USGRA and provide a review of commonly encountered errors. Methods EM residents participated in a half-day cadaver-based education session that included a variety of less-commonly performed procedures including SAPB and fascia iliaca compartment block (FICB) USGRA. Residents received a didactic lecture and hands-on simulation practice during the month prior to the session. During the session, residents performed a SAPB and FICB on the cadaver patient using the same nerve block kit and ultrasound machine they would use for a living patient, with 1:1 supervision by an emergency ultrasound fellowship-trained physician who provided real-time feedback during the procedure. Representative ultrasound video clips were saved and reviewed. Surveys that were completed by residents after the session were analyzed. Results There were 23 residents who participated, and most had not performed any FICB (74%) or SAPB (87%) previously. The percentage of residents comfortable with general USGRA increased from 8.7% to 91.3% (p<0.001). Comfort with FICB increased from 9.1% to 77.3% (p<0.001), and comfort with SAPB increased from 9.1% to 77.3% (p<0.001). Instructors identified and corrected several common errors, such as overly aggressive needle insertion, instillation of air, and instillation of anesthetic into muscle rather than the fascial plane. Conclusion We found that a cadaver-based education model for teaching EM residents the SAPB and the FICB was associated with significant increases in resident confidence in performing the procedure and facilitated identification and correction of common procedural errors that may otherwise have gone undetected.
abstract_id: PUBMED:19497469
Emergency bedside ultrasound for the diagnosis of rib fractures. The scope of clinical applications of emergency ultrasonography continues to expand. Bedside ultrasound has been typically described as an "extension of the palpating hand" and a "visual stethoscope" during physical examination. One of the relatively newer applications is its use in the detection of rib fractures. The author discusses the technique, applicability, limitations, and role of emergency ultrasound in the diagnosis of rib fractures.
Answer: The usefulness of ultrasound in the detection of rib fractures has been explored in various studies, with mixed results. One study found that ultrasound did not significantly increase the detection rate of rib fractures compared to chest radiography and oblique rib views, and it was deemed too time-consuming to justify its routine use for this purpose (PUBMED:15135274). However, a systematic review suggested that lung ultrasound appears superior to chest radiography in the diagnosis of rib fractures following blunt chest wall trauma, although the included studies were small, single-center, and at risk of bias (PUBMED:30470688).
Case reports have indicated that ultrasound can be useful in diagnosing rib and sternal fractures, emphasizing its ease of use, the ability to perform it at the bedside, and its suitability for use by non-radiologist physicians (PUBMED:26855659). In neonates, ultrasound has been shown to be accurate, reliable, and feasible for diagnosing fractures, with a higher sensitivity than X-ray, suggesting it could replace X-ray examinations for routine diagnosis of common neonatal bone fractures (PUBMED:35692975).
While ultrasound has its limitations and a learning curve, it is increasingly being used in various clinical settings due to its advantages, such as the absence of ionizing radiation and the ability to perform it at the bedside (PUBMED:25890918). In the context of rib fractures, ultrasound-guided thoracic paravertebral blocks have been used for pain relief, although certain patient-related issues may make imaging with ultrasound challenging (PUBMED:32292684).
The use of a portable ultrasound system has been demonstrated to be valuable in the surgical assessment of rib fractures, offering advantages such as time-saving, economic affordability, and the potential to minimize complications (PUBMED:30450203). Additionally, a cadaver-based education model has been explored to teach emergency medicine residents ultrasound-guided regional anesthesia techniques, which has been shown to increase their confidence in performing such procedures (PUBMED:37842508).
In summary, while there is evidence to support the use of ultrasound in the detection of rib fractures, its effectiveness may vary depending on the context, the skill of the operator, and the specific patient population. Some studies advocate for its use, especially in settings where it offers clear advantages over traditional radiography, while others suggest that it may not offer significant benefits over other imaging modalities in all cases (PUBMED:15135274, PUBMED:30470688, PUBMED:26855659, PUBMED:35692975, PUBMED:25890918, PUBMED:32292684, PUBMED:30450203, PUBMED:37842508). |
Instruction: W-plasty technique in tracheal reconstruction: a new technique?
Abstracts:
abstract_id: PUBMED:37956496
Refining macrostomia correction: Case series applying square flap technique and Z/W-plasty skin closure for enhanced aesthetic and functional outcome. Introduction And Importance: Macrostomia is a congenital deformity found in Tessier no. 7 facial clefts defined as an enlargement of the mouth at the oral commissure. Several techniques are described in literature to achieve optimal functional and aesthetic results, with varying results and surgeon preferences. In this case series we report surgical repair of macrostomia with a vermillion square flap method for the oral commissure combined with either Z-plasty or W-plasty closure for the skin.
Cases Presentation: A retrospective case analysis of 12 patients with macrostomia operated over the past 7 years at our plastic surgery division was performed (by two different operators; 11 cases by A.S. and 1 case by R.S.). Clinical features of the patients were analyzed through photography documentation, and patient description such as age of operation, operation technique, and complications were obtained through patient records. Macrostomia was corrected with a vermillion square flap method for commissure, overlapping muscle closure, along with either Z-plasty or W-plasty closure for the skin. Quality of lip commissure position, symmetry, thickness of vermillion, and scar result were recorded.
Clinical Discussion: In all twelve patients repaired with the overlapping muscle closure and square flap, the lip commissures were formed with satisfactory shape, position, and thickness with no commissure contracture during the follow up period. The Z-plasty was a simpler method compared to the W-plasty, and resulted in comparable scars. One patient (adult with hemifacial macrostomia and W-plasty skin closure) underwent revision surgery for more accurate symmetry and position of the oral commissure.
Conclusion: There are many varieties of surgical repair for macrostomia, and each method should be adjusted and combined according to each patient. Overall, macrostomia repair with this technique combination produced satisfactory aesthetic and functional results in all twelve patients. Z-plasty for skin closure after muscle and vermillion closure was a simpler technique and resulted in comparable scars than W-pasty closure in this case series.
abstract_id: PUBMED:18802354
W-plasty technique in tracheal reconstruction: a new technique? An experimental study. Background: Tracheal stenosis and dehiscence of anastomosis due to excessive tension are well-known problems after long-segment tracheal resections. The aim of this study was to evaluate the efficacy of the W-plasty technique to prevent these two complications.
Methods: Animals were divided into a study and a control group. Each group consisted of 6 animals. In the control group, we performed a 5-cm tracheal segment resection, and then reconstruction was performed with an interrupted technique with 6/0 Prolene sutures. In the study group, we used the W-plasty technique with 6/0 Prolene interrupted sutures. The animals were sacrificed on the 30th day postoperatively and tracheal resection including the entire anastomosis site was performed. The traction and pullout test was applied to each specimen and all the specimens were analysed histopathologically. The intraluminal diameter and the thickness of the tracheal wall at the level of anastomoses were measured by using a micrometer. The pattern of the reaction and localization were recorded.
Results: The traction and pullout test results were 131.6 +/- 4.3 g and 187.5 +/- 6.4 g in the control and the study group, respectively, which was a significant difference (p = 0.004). The intraluminal diameters were 3.3 +/- 1.2 mm and 4.3 +/- 0.9 mm in the control and study group, respectively (p = 0.134). In contrast to the control group, early inflammatory and late fibroblastic reactions were negative in the study group.
Conclusion: Considering the outcomes of this study, we think that the W-plasty technique has much more advantages than the standard techniques in terms of anastomosis durability and development of stenosis.
abstract_id: PUBMED:34651426
Continuous tension reduction technique in facial scar management: A comparison of W-plasty and straight-line closure on aesthetic effects in Asian patients. W-plasty is a very popular scar excisional revision technique. The core of the technique is to break up the scar margins into small triangular components, so as to cause light scattering and make the scar less noticeable. However, due to skin tension, facial incision scars tend to spread. Applying W-plasty alone cannot achieve the ideal repair effect of facial scars. In this study, we proposed a scar revision technique combined W-plasty with continuous tension-reduction (CTR) technique to improve the appearance of facial scars. Sixty patients with facial scar were comprised in this retrospective study. Scars were assessed independently using the scar scale before and at 12-month follow-up. Clinical results showed a significant difference in scar appearance between different groups at 12-month follow-up. Vancouver scar scale (VSS), visual analogue scale (VAS) scores, and patient satisfaction were significant better in W-plasty and CTR than other groups at 12-month follow-up. No severe complications were reported. The application of the tension offloading device provides an environment where the tension is continuously reduced, which could greatly decrease tension on the surgical incision. Combined with W-plasty, this technique could significantly improve the scar's aesthetic appearance.
abstract_id: PUBMED:24265877
Prevention of tracheal cartilage injury with modified Griggs technique during percutaneous tracheostomy - Randomized controlled cadaver study. Introduction: Tracheal stenosis is the most common severe late complication of percutaneous tracheostomy causing significant decrease in quality of life. Applying modified Griggs technique reduced the number of late tracheal stenoses observed in our clinical study. The aim of this study was to investigate the mechanism of this relationship.
Materials And Methods: Forty-six cadavers were randomized into two groups according to the mode of intervention during 2006-2008. Traditional versus modified Griggs technique was applied in the two groups consequently. Wider incision, surgical preparation, and bidirectional forceps dilation of tracheal wall were applied in modified technique. Injured cartilages were inspected by sight and touch consequently. Age, gender, level of intervention, and number of injured tracheal cartilages were registered.
Results: Significantly less frequent tracheal cartilage injury was observed after modified (9%) than original (91%) Griggs technique (p < 0.001). A moderate association between cartilage injury and increasing age was observed, whereas the level of intervention (p = 0.445) and to gender (p = 0.35) was not related to injury. Risk of cartilage injury decreased significantly (OR: 0.0264, 95%, CI: 0.005-0.153) with modified Griggs technique as determined in adjusted logistic regression model.
Discussion: Modified Griggs technique decreased the risk of tracheal cartilage injury significantly in our cadaver study. This observation may explain the decreased number of late tracheal stenosis after application of the modified Griggs method.
abstract_id: PUBMED:2593221
Revision of a vertical tracheotomy scar using the W-plasty technique. Repair of a vertical tracheotomy scar using a W-plasty technique is discussed, and an illustrative case is presented. The indications and technique of the W-plasty repair are reviewed. The advantages over the classical running Z-plasty in this type of scar repair are outlined.
abstract_id: PUBMED:2723225
Vermilionectomy using the W-plasty technique. We describe a variation of the conventional vermilionectomy consisting of incisions using the W-plasty technique, which allows a more distensible scar with better cosmetic and functional results. Its primary indications are actinic cheilitis and leukoplakia of the lower lip. It is a more complicated technique than the classic one, and requires a larger loss of skin, but the benefits largely outweigh the drawbacks.
abstract_id: PUBMED:31819844
Outcomes of Coccygectomy Using the "Z" Plasty Technique of Wound Closure. Study Design: Technical note.
Objectives: Coccygectomy for chronic coccydynia has a high rate of successful clinical outcome. However, the procedure is associated with increased incidence of wound dehiscence and surgical site infection. The main objective was to evaluate the clinical outcomes of coccygectomy using the Z plasty technique of wound closure.
Methods: Patients with chronic coccydynia refractory to conservative treatment underwent coccygectomy followed by Z plasty technique of wound closure between January 2013 and February 2018. Primary outcome measure was evaluation of the wound healing in the postoperative period and at follow-up; secondary outcome measure included visual analogue scale (VAS) score for coccygeal pain.
Results: Ten patients (male:female 6:4) fulfilled the inclusion criteria. The mean age of patients was 40.78 years (range 19-55 years). The mean follow-up was 1.75 years (range 6 months to 5 years). All wounds healed well with no incidence of wound dehiscence or surgical site infections. The mean VAS improved from 7.33 ± 0.5 to 2.11 ± 1.2 (P < .05). Nine patients reported excellent outcomes and 1 patient reported poor outcome with regards to relief from coccydynia.
Conclusion: Z plasty technique of wound closure is recommended as procedure of choice to avoid wound healing problems and surgical site infections associated with coccygectomy. Coccygectomy remains a successful treatment modality for chronic coccydynia.
abstract_id: PUBMED:28587949
Usefulness of direct W-plasty application to wound debridement for minimizing scar formation in the ED. Purpose: A suture line placed with the same direction as the relaxed skin tension line leads to good healing, but a suture line with over 30 degrees of angle from the relaxed skin tension line leads to longer healing time and more prominent scarring. W-plasty is widely used to change the direction of the scar or to divide it into several split scars. In this study, we applied W-plasty to patients with facial lacerations in the emergency department.
Methods: From June 2012 to December 2014, 35 patients underwent simple repair or W-plasty for facial lacerations. Patients in the simple repair group underwent resection following a thermal margin, and the W-plasty group was resected within a pre-designed margin of W-shaped laceration. We assessed prognosis using the Stony Brook Scar Evaluation Scale (SBSES) after 10 days (short-term) and six months (long-term), respectively, following suture removal.
Results: Among 35 patients, 15 (42.9%) underwent simple debridement and 20 (57.1%) underwent W-plasty. In the W-plasty group, there was no difference between short-term and long-term follow-up showing high SBSES, but in the simple debridement group, long-term follow-up SBSES significantly decreased. W-plasty SBSES was higher than simple debridement at short-term as well as long-term follow-up.
Conclusion: We experienced good results of direct W-plasty application at six-month long-term follow-up. Therefore, W-plasty application is more effective in reducing scar appearance than a simple debridement method for facial laceration patients with an angle of 30 degrees or more to the relaxed skin tension line.
abstract_id: PUBMED:36292193
Expiratory Technique versus Tracheal Suction to Obtain Good-Quality Sputum from Patients with Suspected Lower Respiratory Tract Infection: A Randomized Controlled Trial. Microbiological diagnostics of good-quality sputum samples are fundamental for infection control and targeted treatment of lower respiratory tract infections (LRTI). This study aims to compare the expiratory technique and tracheal suction on the quality of sputa from adults acutely hospitalized with suspected LRTI. We performed an open-label, randomized controlled trial. Patients were randomized to sputum sampling by tracheal suction (standard care) or the expiratory technique. The primary outcome was quality of sputum evaluated by microscopy and was analysed in the intention-to-treat population. The secondary outcomes were adverse events and patients experience. In total, 280 patients were assigned to tracheal suction (n = 141, 50.4%) or the expiratory technique (n = 139, 49.6%). Sputum samples were collected from 122 (86.5%) patients with tracheal suction and 67 (48.2%) patients with expiratory technique. Good-quality sputa were obtained more often with tracheal suction than with expiratory technique (odds ratio 1.83 [95% CI 1.05 to 3.19]; p = 0.035). There was no statistical difference in adverse events (IRR 1.21 [95% CI, 0.94 to 1.66]; p = 0.136), but patient experience was better in the expiratory technique group (p < 0.0001). In conclusion, tracheal suction should be considered a routine procedure in emergency departments for patients with suspected LRTI.
abstract_id: PUBMED:35932352
Overlapping-plasty technique for pulmonary artery aneurysm. Massive pulmonary artery aneurysms, while extremely rare, might require surgical intervention. Most previous cases have been repaired either by pulmonary artery plication or synthetic graft replacement. We report a case of massive pulmonary artery aneurysm that was successfully repaired using an 'overlapping-plasty' technique with the help of 3D image simulation. This specially designed procedure might be useful as a surgical option for pulmonary artery aneurysms.
Answer: The W-plasty technique is not described as a new technique in tracheal reconstruction within the provided abstracts. However, it is mentioned as a method used in various other surgical procedures to improve aesthetic and functional outcomes. For instance, the W-plasty technique has been applied in the revision of facial scars to minimize scar formation and improve aesthetic effects (PUBMED:34651426), in the repair of vertical tracheotomy scars (PUBMED:2593221), and in vermilionectomy for actinic cheilitis and leukoplakia of the lower lip (PUBMED:2723225). Additionally, the W-plasty technique has been used in an experimental study to evaluate its efficacy in preventing tracheal stenosis and dehiscence of anastomosis after long-segment tracheal resections, suggesting its potential advantages over standard techniques in terms of anastomosis durability and development of stenosis (PUBMED:18802354).
In the context of tracheal reconstruction, the abstracts do not directly refer to W-plasty as a new technique but rather discuss its use in experimental settings or in related surgical procedures. The abstracts also mention other techniques such as the Z-plasty technique in wound closure for coccygectomy (PUBMED:31819844) and the modified Griggs technique to prevent tracheal cartilage injury during percutaneous tracheostomy (PUBMED:24265877).
In summary, while the W-plasty technique is not explicitly described as a new technique for tracheal reconstruction in the provided abstracts, it is recognized for its utility in other surgical applications that could potentially be adapted for use in tracheal reconstruction. |
Instruction: Are antibiotics necessary after 48 hours of improvement in infected/septic abortions?
Abstracts:
abstract_id: PUBMED:21195382
Are antibiotics necessary after 48 hours of improvement in infected/septic abortions? A randomized controlled trial followed by a cohort study. Objective: We sought to investigate whether oral antibiotics are necessary, after 48 hours of clinical improvement, in uncomplicated septic abortion.
Study Design: In a randomized double-blind clinical trial, 56 women with uncomplicated septic abortion were treated with intravenous antibiotics, followed by uterine evacuation. On hospital discharge (day 1), patients were randomized to receive either oral doxycycline plus metronidazole or placebo, until completing 10 days of treatment. Clinical cure was defined by the absence of fever (<37.7°C), reduced vaginal bleeding, and minimal or no pelvic pain.
Results: Cure was observed in all 56 patients. The institutional review board stopped the treatment arm as it was adding risk with no further benefit to the patients. An observational cohort with additional 75 cases was followed up in the no treatment arm and no failure was identified (probability of an adverse event, 0%; 95% confidence interval, 0-0.03).
Conclusion: After 48 hours of clinical improvement, antibiotics may not be necessary.
abstract_id: PUBMED:31443937
Assessing the effectiveness of empiric antimicrobial regimens in cases of septic/infected abortions. Introduction: Infected abortion is a life-threatening condition that requires immediate surgical and medical interventions. We aimed to assess the common pathogens associated with infected abortion and to test the microbial coverage of various empiric antimicrobial regimens based on the bacteriological susceptibility results in women with infected abortions.
Methods: A retrospective study in a single university-affiliated tertiary hospital. Electronic records were searched for clinical course, microbial characteristics, and antibiotic susceptibility of all patients diagnosed with an infected abortion. The effectiveness of five antibiotic regimens was analyzed according to bacteriological susceptibility results.
Results: Overall, 84 patients were included in the study. The mean age of patients was 32.3(SD ± 5.8) years, and the median gestational age was 15 (IQR 8-19) weeks. Risk factors for infection were identified in 23 patients (27.3%), and included lack of medical insurance (n = 12), recent amniocentesis/chorionic villus sampling or fetal reduction due to multifetal pregnancies (n = 10). The most common pathogens isolated were Enterobacteriaceae (35%), Streptococci (31%), Staphylococci (9%) and Enterococci (9%). The combination of intravenous ampicillin, gentamicin and metronidazole showed significant superiority over all the other tested regimens according to the susceptibility test results. Piperacillin-tazobactam as an empiric single-agent drug of choice and provided a superior microbial coverage, with a coverage rate of 93.3%.
Conclusions: A combination of ampicillin, gentamicin, and metronidazole had a better spectrum of coverage as a first-line empiric choice for patients with infected abortion.
abstract_id: PUBMED:1085832
Corynebacterium vaginale (Hemophilus vaginalis) bacteremia: clinical study of 29 cases. Twenty-nine patients with bacteremia due to Corynebacterium vaginale, an inhabitant of the female genital tract, are described. Four were newborn babies. Nineteen were healthy young women delivered at full term by an operative procedure, cesarean section, or episiotomy. Within 48 hours fever and bacteremia developed. While receiving antibiotics the fever returned to normal, usually within 48 hours. The remaining cases were associated with septic abortion, tubal pregnancy, an intrauterine device, hydatidiform mole, and cellulitis. None of the adults showed evidence of brain abscess, meningitis, or endocarditis. Corynebacterium vaginale is an opportunistic minor pathogen that apparently gains access to the blood stream via an exposed vascular bed rather than as the result of immunosupression.
abstract_id: PUBMED:27364644
Antibiotics for treating septic abortion. Background: A septic abortion refers to any abortion (spontaneous or induced) complicated by upper genital tract infection including endometritis or parametritis. The mainstay of treatment of septic abortion is antibiotic therapy alone or in combination with evacuation of retained products of conception. Regimens including broad-spectrum antibiotics are routinely recommended for treatment. However, there is no consensus on the most effective antibiotics alone or in combination to treat septic abortion. This review aimed to bridge this gap in knowledge to inform policy and practice.
Objectives: To review the effectiveness of various individual antibiotics or antibiotic regimens in the treatment of septic abortion.
Search Methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, LILACS, and POPLINE using the following keywords: 'Abortion', 'septic abortion', 'Antibiotics', 'Infected abortion', 'postabortion infection'. We also searched the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) and ClinicalTrials.gov for ongoing trials on 19 April, 2016.
Selection Criteria: We considered for inclusion randomised controlled trials (RCTs) and non-RCTs that compared antibiotic(s) to another antibiotic(s), irrespective of route of administration, dosage, and duration as well as studies comparing antibiotics alone with antibiotics in combination with other interventions such as dilation and curettage (D&C).
Data Collection And Analysis: Two review authors independently extracted data from included trials. We resolved disagreements through consultation with a third author. One review author entered extracted data into Review Manager 5.3, and a second review author cross-checked the entry for accuracy.
Main Results: We included 3 small RCTs involving 233 women that were conducted over 3 decades ago.Clindamycin did not differ significantly from penicillin plus chloramphenicol in reducing fever in all women (mean difference (MD) -12.30, 95% confidence interval (CI) -25.12 to 0.52; women = 77; studies = 1). The evidence for this was of moderate quality. "Response to treatment was evaluated by the patient's 'fever index' expressed in degree-hour and defined as the total quantity of fever under the daily temperature curve with 99°F (37.2°C) as the baseline".There was no difference in duration of hospitalisation between clindamycin and penicillin plus chloramphenicol. The mean duration of hospital stay for women in each group was 5 days (MD 0.00, 95% CI -0.54 to 0.54; women = 77; studies = 1).One study evaluated the effect of penicillin plus chloramphenicol versus cephalothin plus kanamycin before and after D&C. Response to therapy was evaluated by "the time from start of antibiotics until fever lysis and time from D&C until patients become afebrile". Low-quality evidence suggested that the effect of penicillin plus chloramphenicol on fever did not differ from that of cephalothin plus kanamycin (MD -2.30, 95% CI -17.31 to 12.71; women = 56; studies = 1). There was no significant difference between penicillin plus chloramphenicol versus cephalothin plus kanamycin when D&C was performed during antibiotic therapy (MD -1.00, 95% CI -13.84 to 11.84; women = 56; studies = 1). The quality of evidence was low.A study with unclear risk of bias showed that the time for fever resolution (MD -5.03, 95% CI -5.77 to -4.29; women = 100; studies = 1) as well as time for resolution of leukocytosis (MD -4.88, 95% CI -5.98 to -3.78; women = 100; studies = 1) was significantly lower with tetracycline plus enzymes compared with intravenous penicillin G.Treatment failure and adverse events occurred infrequently, and the difference between groups was not statistically significant.
Authors' Conclusions: We found no strong evidence that intravenous clindamycin alone was better than penicillin plus chloramphenicol for treating women with septic abortion. Similarly, available evidence did not suggest that penicillin plus chloramphenicol was better than cephalothin plus kanamycin for the treatment of women with septic abortion. Tetracyline enzyme antibiotic appeared to be more effective than intravenous penicillin G in reducing the time to fever defervescence, but this evidence was provided by only one study at low risk of bias.There is a need for high-quality RCTs providing reliable evidence for treatments of septic abortion with antibiotics that are currently in use. The three included studies were carried out over 30 years ago. There is also a need to include institutions in low-resource settings, such as sub-Saharan Africa, Latin America and the Caribbean, and South Asia, with a high burden of abortion and health systems challenges.
abstract_id: PUBMED:12997549
Septic abortion and antibiotics N/A
abstract_id: PUBMED:3047888
Hysterectomy for septic abortion--is bilateral salpingo-oophorectomy necessary? Ovarian conservation at the time of hysterectomy for complicated septic abortion is important in this young population group. In a retrospective study, the histological evaluation of the ovaries of 25 patients were compared with the macroscopic description in the operation reports. In 72.3% of the ovaries examined there was no infection. None of the ovaries described clinically as normal at laparotomy showed histological signs of infection. The clinical assessment of infected ovaries was false-positive in 40% of cases but there was no false-negative decision-making. It is concluded that ovaries which appear normal at hysterectomy for septic abortion should be conserved.
abstract_id: PUBMED:25932831
Treating spontaneous and induced septic abortions. Worldwide, abortion accounts for approximately 14% of pregnancy-related deaths, and septic abortion is a major cause of the deaths from abortion. Today, septic abortion is an uncommon event in the United States. The most critical treatment of septic abortion remains the prompt removal of infected tissue. Antibiotic administration and fluid resuscitation provide necessary secondary levels of treatment. Most young physicians have never treated septic abortion. Many obstetrician-gynecologists experience, or plan to experience, global health activities and will likely care for women with septic abortion. Thus, updated knowledge of the pathophysiology, clinical presentation, microbes, and proper treatment is needed to optimally treat this emergency condition when it exists. The pathophysiology of septic abortion involves infection of the placenta, especially the maternal villous space that leads to a high frequency of bacteremia. Symptoms and signs range from mild to severe. The microbes involved are usually common vaginal bacteria, including anaerobes, but occasionally potentially very serious and lethal infection is caused by bacteria that produce toxins. The primary treatment is early curettage to remove infected and devitalized tissue even in the face of continued fetal heart tones. Important secondary treatments are the administration of fluids and antibiotics. Updated references of sepsis and septic shock are reviewed.
abstract_id: PUBMED:23705080
Clostridium perfringens Sepsis and Fetal Demise after Genetic Amniocentesis. Clostridium perfringens is a rare cause of intrauterine infection. There have been five case reports concerning infection associated with invasive procedures. We report a woman who underwent a genetic amniocentesis due to her history of chronic granulomatous disease. She presented to the hospital ∼38 hours after the amniocentesis complaining of fever and chills. Due to acute decompensation, she underwent an emergent dilatation and evacuation. During her stay, blood cultures came back positive for C. perfringens. Gradual improvement with intensive monitoring led to hospital discharge 4 days after the procedure. Uterine infection due to C. perfringens leading to maternal sepsis is associated with a high morbidity and mortality rate. Our patient was able to survive without a hysterectomy due to the rapid administration of antibiotics and surgical intervention while being evaluated.
abstract_id: PUBMED:13949112
Management of infected abortion; an analysis of 342 cases. N/A
abstract_id: PUBMED:799994
Efficacy of amikacin in septic abortion: serum and urine antibiotic concentrations. Septic abortion is a frequent cause of maternal death in our patient population. The great majority of bacteria isolated were enterobacteria. Since antibiotics capable of destroying the cell wall release endotoxins (reducing vasoconstriction and tissue hypoxia) antibiotics which do not destroy bacteria like amikacin are preferred. Amikacin, 500 mg intramuscularly every twelve hours, was used concomitantly with uterine curettage and fluid and electrolyte restoration in thirty septic abortion patients. All the patients responded, obviating the need for more heroic surgical treatment such as hysterectomy. Previously used antibiotics have not been as effective in our patient population.
Answer: Based on the available abstracts, there is evidence suggesting that antibiotics may not be necessary after 48 hours of clinical improvement in cases of uncomplicated septic abortion. A randomized controlled trial followed by a cohort study found that after 48 hours of clinical improvement, all 56 patients in the trial observed a cure, and the treatment arm was stopped by the institutional review board as it was adding risk with no further benefit to the patients. An additional observational cohort with 75 cases followed up in the no-treatment arm also identified no failures, indicating a probability of an adverse event of 0% (95% confidence interval, 0-0.03) (PUBMED:21195382). This suggests that continuing antibiotics beyond 48 hours of clinical improvement may not provide additional benefits in uncomplicated cases.
However, it is important to note that the management of septic abortion should be individualized based on the clinical scenario, and the decision to discontinue antibiotics should be made cautiously. Other studies have emphasized the importance of appropriate antibiotic therapy in the treatment of septic abortion, with one study highlighting the effectiveness of a combination of ampicillin, gentamicin, and metronidazole as a first-line empiric choice for patients with infected abortion (PUBMED:31443937). Another study mentioned that the mainstay of treatment for septic abortion is antibiotic therapy, possibly in combination with evacuation of retained products of conception (PUBMED:27364644).
Therefore, while the evidence from one study suggests that antibiotics may not be necessary after 48 hours of clinical improvement in uncomplicated septic abortion (PUBMED:21195382), the overall approach to treating septic abortion should be comprehensive and consider the severity of infection, the presence of any complications, and the individual patient's response to initial treatment. It is also important to consider that the evidence is based on a specific study population and may not be generalizable to all cases of septic abortion. |
Instruction: Providing patient progress feedback and clinical support tools to therapists: is the therapeutic process of patients on-track to recovery enhanced in psychosomatic in-patient therapy under the conditions of routine practice?
Abstracts:
abstract_id: PUBMED:24840143
Providing patient progress feedback and clinical support tools to therapists: is the therapeutic process of patients on-track to recovery enhanced in psychosomatic in-patient therapy under the conditions of routine practice? Objectives: In previous studies of patients on-track to recovery (OT) involving therapists receiving only patient progress feedback without clinical support tools (CST) inconsistent results were found. Possible effects of combining patient progress feedback with CST on OT patients remain unclear.
Methods: At intake (t1), 252 patients of two in-patient psychosomatic clinics were randomized either into the experimental group (EG) or the treatment-as-usual control group (CG). Both groups were monitored weekly using the self-report instruments "Outcome Questionnaire" (OQ-45) and "Assessment of Signal Cases" (ASC). Therapists received weekly patient progress feedback (OQ-45) and CST feedback (ASC) only for EG patients starting at the week following intake (t2). Patients who did not deviate negatively from expected recovery curves by at least one standard deviation were considered OT patients (N=209; NEG=111; NCG=98). Since therapists received feedback at t2 for the first time, different patterns of change (OQ-45 scales) between the groups from t1 to t2, t2 to t3 (intake+two weeks), t2 to t4 (intake+three weeks), and t2 to t5 (last available OQ-45 score) were evaluated by multilevel models.
Results: Merely from t2 to t3, the EG improved significantly more on the OQ-45 symptom distress scale than the CG (p<0.05; g=0.12).
Conclusion: Providing patient progress feedback and CST to therapists did not substantially surpass treatment-as-usual for OT patients in this explorative study except for a very small time-limited enhancement of symptom change.
abstract_id: PUBMED:23972415
Feedback on patient progress and clinical support tools for therapists: improved outcome for patients at risk of treatment failure in psychosomatic in-patient therapy under the conditions of routine practice. Objectives: Although psychosomatic in-patient treatment is effective, 5-10% of the patients deteriorate. Providing patient progress feedback and clinical support tools to therapists improves the outcome for patients at risk of deterioration in counseling, outpatient psychotherapy, and substance abuse treatment. This study investigated the effects of feedback on psychosomatically treated in-patients at risk of treatment failure.
Methods: At intake, all patients of two psychosomatic clinics were randomized either into the experimental group or the treatment-as-usual control group. Both groups were tracked weekly with the "Outcome Questionnaire" (OQ-45) measuring patient progress and with the clinical support tool "Assessment of Signal Cases" (ASC). Therapists received feedback from both instruments for all their experimental group patients. "Patients at risk" were defined as patients who deviated from expected recovery curves by at least one standard deviation. Of 252 patients, 43 patients were at risk: 23 belonged to the experimental group, 20 to the control group. The feedback effect was analyzed using a level-2-model for discontinuous change, effect size (d), reliable change index (RCI), and odds ratio for reliable deterioration.
Results: For patients at risk, the experimental group showed an improved outcome on the OQ-45 total scale compared to the control group (p<0.05, d=0.54). By providing feedback, the rate of reliably deteriorated patients at risk was reduced from 25.0% (control group) to 8.7% (experimental group) - odds ratio=0.29. All reliably improved patients at risk belonged to the experimental group.
Conclusion: Feedback improves the outcome of patients at risk undergoing psychosomatic in-patient treatment.
abstract_id: PUBMED:36931228
Routine Outcome Monitoring (ROM) and Feedback: Research Review and Recommendations. Objective: To provide a research review of the components and outcomes of routine outcome monitoring (ROM) and recommendations for research and therapeutic practice.
Method: A narrative review of the three phases of ROM - data collection, feeding back data, and adapting therapy - and an overview of patient outcomes from 11 meta-analytic studies.
Results: Patients support ROM when its purpose is clear and integrated within therapy. Greater frequency of data collection is more important for shorter-term therapies, and use of graphs, greater specificity of feedback, and alerts are helpful. Overall effects on patient outcomes are statistically significant (g ≈ 0.15) and increase when clinical support tools (CSTs) are used for not-on-track cases (g ≈ 0.36-0.53). Effects are additive to standard effects of psychological therapies. Organizational, personnel, and resource issues remain the greatest obstacles to the successful adoption of ROM.
Conclusion: ROM offers a low-cost method for enhancing patient outcomes, on average resulting in an ≈ 8% advantage (success rate difference; SRD) over standard care. CSTs are particularly effective for not-on-track patients (SRD between ≈ 20% and 29%), but ROM does not work for all patients and successful implementation is a major challenge, along with securing appropriate cultural adaptations.
abstract_id: PUBMED:28166849
Assessing Patient Progress in Psychological Therapy Through Feedback in Supervision: the MeMOS* Randomized Controlled Trial (*Measuring and Monitoring clinical Outcomes in Supervision: MeMOS). Background: Psychological therapy services are often required to demonstrate their effectiveness and are implementing systematic monitoring of patient progress. A system for measuring patient progress might usefully 'inform supervision' and help patients who are not progressing in therapy.
Aims: To examine if continuous monitoring of patient progress through the supervision process was more effective in improving patient outcomes compared with giving feedback to therapists alone in routine NHS psychological therapy.
Method: Using a stepped wedge randomized controlled design, continuous feedback on patient progress during therapy was given either to the therapist and supervisor to be discussed in clinical supervison (MeMOS condition) or only given to the therapist (S-Sup condition). If a patient failed to progress in the MeMOS condition, an alert was triggered and sent to both the therapist and supervisor. Outcome measures were completed at beginning of therapy, end of therapy and at 6-month follow-up and session-by-session ratings.
Results: No differences in clinical outcomes of patients were found between MeMOS and S-Sup conditions. Patients in the MeMOS condition were rated as improving less, and more ill. They received fewer therapy sessions.
Conclusions: Most patients failed to improve in therapy at some point. Patients' recovery was not affected by feeding back outcomes into the supervision process. Therapists rated patients in the S-Sup condition as improving more and being less ill than patients in MeMOS. Those patients in MeMOS had more complex problems.
abstract_id: PUBMED:32673272
Feasibility of a Mobile Health App for Routine Outcome Monitoring and Feedback in Mutual Support Groups Coordinated by SMART Recovery Australia: Protocol for a Pilot Study. Background: Despite the importance and popularity of mutual support groups, there have been no systematic attempts to implement and evaluate routine outcome monitoring (ROM) in these settings. Unlike other mutual support groups for addiction, trained facilitators lead all Self-Management and Recovery Training (SMART Recovery) groups, thereby providing an opportunity to implement ROM as a routine component of SMART Recovery groups.
Objective: This study protocol aims to describe a stage 1 pilot study designed to explore the feasibility and acceptability of a novel, purpose-built mobile health (mHealth) ROM and feedback app (Smart Track) in SMART Recovery groups coordinated by SMART Recovery Australia (SRAU) The secondary objectives are to describe Smart Track usage patterns, explore psychometric properties of the ROM items (ie, internal reliability and convergent and divergent validity), and provide preliminary evidence for participant reported outcomes (such as alcohol and other drug use, self-reported recovery, and mental health).
Methods: Participants (n=100) from the SMART Recovery groups across New South Wales, Australia, will be recruited to a nonrandomized, prospective, single-arm trial of the Smart Track app. There are 4 modes of data collection: (1) ROM data collected from group participants via the Smart Track app, (2) data analytics summarizing user interactions with Smart Track, (3) quantitative interview and survey data of group participants (baseline, 2-week follow-up, and 2-month follow-up), and (4) qualitative interviews with group participants (n=20) and facilitators (n=10). Feasibility and acceptability (primary objectives) will be analyzed using descriptive statistics, a cost analysis, and a qualitative evaluation.
Results: At the time of submission, 13 sites (25 groups per week) had agreed to be involved. Funding was awarded on August 14, 2017, and ethics approval was granted on April 26, 2018 (HREC/18/WGONG/34; 2018/099). Enrollment is due to commence in July 2019. Data collection is due to be finalized in October 2019.
Conclusions: To the best of our knowledge, this study is the first to use ROM and tailored feedback within a mutual support group setting for addictive behaviors. Our study design will provide an opportunity to identify the acceptability of a novel mHealth ROM and feedback app within this setting and provide detailed information on what factors promote or hinder ROM usage within this context. This project aims to offer a new tool, should Smart Track prove feasible and acceptable, that service providers, policy makers, and researchers could use in the future to understand the impact of SMART Recovery groups.
Trial Registration: Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12619000686101; https://anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377336.
International Registered Report Identifier (irrid): PRR1-10.2196/15113.
abstract_id: PUBMED:33522465
Understanding routine outcome monitoring and clinical feedback in context: Introduction to the special section. The practice of routine outcome monitoring and providing clinical feedback has been widely studied within psychotherapy. Nevertheless, there are many outstanding questions regarding this practice. Is it an evidence-based adjunct to ongoing psychotherapies, or an ineffective complication of treatment? If it is effective, through what mechanism(s) does it act? Is it effective with all patient populations, treatment types, and service delivery mechanisms, or does its impact vary across context? What choices in the implementation process affect the utility of patient-reported data feedback on psychotherapy outcomes? The studies in this special section explore these questions using a wide variety of methods and significantly expand the reach of studies on feedback. Together, these studies represent a snapshot of a maturing field of study: Initial discoveries are developed into more robust theories and applied in a wider range of contexts, while the limits of that theory are tested. They also signal directions for future clinical and research work that may improve patient care in psychosocial interventions into the future.
abstract_id: PUBMED:34612829
Feasibility of a Mobile Health App for Routine Outcome Monitoring and Feedback in SMART Recovery Mutual Support Groups: Stage 1 Mixed Methods Pilot Study. Background: Mutual support groups are an important source of long-term help for people impacted by addictive behaviors. Routine outcome monitoring (ROM) and feedback are yet to be implemented in these settings. SMART Recovery mutual support groups focus on self-empowerment and use evidence-based techniques (eg, motivational and behavioral strategies). Trained facilitators lead all SMART Recovery groups, providing an opportunity to implement ROM.
Objective: The aim of this stage 1 pilot study is to explore the feasibility, acceptability, and preliminary outcomes of a novel, purpose-built mobile health ROM and feedback app (SMART Track) in mutual support groups coordinated by SMART Recovery Australia (SRAU) over 8 weeks.
Methods: SMART Track was developed during phase 1 of this study using participatory design methods and an iterative development process. During phase 2, 72 SRAU group participants were recruited to a nonrandomized, prospective, single-arm trial of the SMART Track app. Four modes of data collection were used: ROM data directly entered by participants into the app; app data analytics captured by Amplitude Analytics (number of visits, number of unique users, visit duration, time of visit, and user retention); baseline, 2-, and 8-week follow-up assessments conducted through telephone; and qualitative telephone interviews with a convenience sample of study participants (20/72, 28%) and facilitators (n=8).
Results: Of the 72 study participants, 68 (94%) created a SMART Track account, 64 (88%) used SMART Track at least once, and 42 (58%) used the app for more than 5 weeks. During week 1, 83% (60/72) of participants entered ROM data for one or more outcomes, decreasing to 31% (22/72) by the end of 8 weeks. The two main screens designed to provide personal feedback data (Urges screen and Overall Progress screen) were the most frequently visited sections of the app. Qualitative feedback from participants and facilitators supported the acceptability of SMART Track and the need for improved integration into the SRAU groups. Participants reported significant reductions between the baseline and 8- week scores on the Severity of Dependence Scale (mean difference 1.93, SD 3.02; 95% CI 1.12-2.73) and the Kessler Psychological Distress Scale-10 (mean difference 3.96, SD 8.31; 95% CI 1.75-6.17), but no change on the Substance Use Recovery Evaluator (mean difference 0.11, SD 7.97; 95% CI -2.02 to 2.24) was reported.
Conclusions: Findings support the feasibility, acceptability, and utility of SMART Track. Given that sustained engagement with mobile health apps is notoriously difficult to achieve, our findings are promising. SMART Track offers a potential solution for ROM and personal feedback, particularly for people with substance use disorders who attend mutual support groups.
Trial Registration: Australian New Zealand Clinical Trials Registry ACTRN12619000686101; https://anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377336.
International Registered Report Identifier (irrid): RR2-10.2196/15113.
abstract_id: PUBMED:22755547
Providing patient progress information and clinical support tools to therapists: effects on patients at risk of treatment failure. The current study examined the effects of providing treatment progress information and problem-solving tools to both patients and therapists during the course of psychotherapy. Three hundred and seventy patients were randomly assigned to one of two treatment groups: treatment-as-usual, or an experimental condition based on the use of patient/therapist feedback and clinical decision-support tools. Patients in the feedback condition were significantly more improved at termination than the patients in the treatment-as-usual condition. Treatment effects were not a consequence of different amounts of psychotherapy received by experimental and control clients. These findings are consistent with past research on these approaches although the effect size was smaller in this study. Not all therapists were aided by the feedback intervention.
abstract_id: PUBMED:28523962
Patients' experiences with routine outcome monitoring and clinical feedback systems: A systematic review and synthesis of qualitative empirical literature. Routine outcome monitoring (ROM) and clinical feedback (CF) systems have become important tools for psychological therapies, but there are challenges for their successful implementation.
Objective: To overcome these challenges, a greater understanding is needed about how patients experience the use of ROM/CF.
Method: We conducted a systematic literature search of qualitative studies on patient experiences with the use of ROM/CF in mental health services.
Results: The findings from 16 studies were synthesized, resulting in four meta-themes: (1) Suspicion towards service providers, (2) Flexibility and support to capture complexity, (3) Empowering patients, and (4) Developing collaborative practice.
Conclusions: We discuss the implications of these meta-themes for further development and implementation of ROM/CF into clinical practice, acknowledging the limitations of our review and suggesting avenues for further research. Clinical or methodological significance of this article: This article provides useful and actionable knowledge about the patient perspective on ROM/CF, an important discussion on the current state of research in this area, and useful and concrete suggestions for further avenues of research.
abstract_id: PUBMED:33721605
Using progress feedback to improve outcomes and reduce drop-out, treatment duration, and deterioration: A multilevel meta-analysis. Progress feedback is an intervention aimed at enhancing patient outcomes in routine clinical practice. This study reports a comprehensive multilevel meta-analysis on the effectiveness of progress feedback in psychological treatments in curative care. The short- and long-term effects of feedback on symptom reduction were investigated using 58 (randomized and non-randomized) studies, analyzing 110 effect sizes in a total of 21,699 patients. Effects of feedback on dropout rate, percentage of deteriorated cases, and treatment duration were also examined. Moderation analyses were conducted for study and feedback characteristics. A small significant effect of progress feedback on symptom reduction (d = 0.15, 95% CI: [0.10, 0.20]) was found, compared to control groups. This was also true for not-on-track cases (d = 0.17, 95% CI: [0.11, 0.22]). In addition, feedback had a small favorable effect on dropout rates (OR = 1.19, 95% CI: [1.03, 1.38]). The moderation analyses identified several potentially interesting variables for further research, including feedback instrument, outcome instrument, type of feedback, feedback frequency, treatment intensity, and country in which the study was conducted. Future studies should report on these variables more consistently so that we can obtain a better understanding of when and why feedback improves outcomes.
Answer: The therapeutic process of patients on track to recovery in psychosomatic inpatient therapy under routine practice conditions may be enhanced by providing patient progress feedback and clinical support tools to therapists, but the evidence suggests that the effects are not consistently substantial.
In one study, providing patient progress feedback and clinical support tools (CST) to therapists showed a very small time-limited enhancement of symptom change for patients on track (OT) to recovery, but it did not substantially surpass treatment-as-usual except for a short period from the second to the third week of treatment (PUBMED:24840143).
However, another study found that for patients at risk of treatment failure, the use of feedback significantly improved outcomes on the Outcome Questionnaire (OQ-45) total scale compared to the control group, and the rate of reliably deteriorated patients at risk was reduced by providing feedback (PUBMED:23972415).
A research review on routine outcome monitoring (ROM) and feedback indicated that ROM, particularly when combined with CSTs for not-on-track cases, offers a low-cost method for enhancing patient outcomes, with statistically significant effects on patient outcomes (PUBMED:36931228).
In contrast, a randomized controlled trial examining the effect of continuous monitoring of patient progress through the supervision process found no differences in clinical outcomes of patients between the condition where feedback was given to both the therapist and supervisor and the condition where feedback was given to therapists alone (PUBMED:28166849).
Furthermore, a study on the feasibility of a mobile health app for routine outcome monitoring and feedback in mutual support groups found that the app was feasible, acceptable, and useful, with participants reporting significant reductions in substance dependence and psychological distress (PUBMED:34612829).
Overall, while there is evidence to suggest that providing patient progress feedback and CST to therapists can have positive effects on patient outcomes, particularly for those at risk of treatment failure, the impact may vary and is not guaranteed to be substantial for all patients on track to recovery in psychosomatic inpatient therapy (PUBMED:22755547, PUBMED:28523962, PUBMED:33721605). |
Instruction: Metalinguistics, stress accuracy, and word reading: does dialect matter?
Abstracts:
abstract_id: PUBMED:22562865
Metalinguistics, stress accuracy, and word reading: does dialect matter? Purpose: The authors examined the influence of demographic variables on nonmainstream American English (NMAE) use; the differences between NMAE speakers and mainstream American English (MAE) speakers on measures of metalinguistics, single-word reading, and a new measure of morphophonology; and the differences between the 2 groups in the relationships among the measures.
Method: Participants were typically developing 3rd graders from Memphis, TN, including 21 MAE and 21 NMAE speakers. Children received a battery of tests measuring phonological and morphological awareness (PA and MA), morphophonology (i.e., accurately produced lexical stress in derived words), decoding, and word identification (WID).
Results: Controlling for socioeconomic status, measures of PA, decoding, and WID were higher for MAE than for NMAE speakers. There was no difference in stress accuracy between the dialect groups. Only for the NMAE group were PA and MA significantly related to decoding and WID. Stress accuracy was correlated with word reading for the NMAE speakers and with all measures for the MAE speakers.
Conclusion: Stress accuracy was consistently related to reading measures, even when PA and MA were not. Morphophonology involving suprasegmental factors may be an area of convergence between language varieties because of its consistent relationship to word reading.
abstract_id: PUBMED:25346708
Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation. Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition.
abstract_id: PUBMED:28088677
The relationship between children's sensitivity to dominant and non-dominant patterns of lexical stress and reading accuracy. This study reports on a new task for assessing children's sensitivity to lexical stress for words with different stress patterns and demonstrates that this task is useful in examining predictors of reading accuracy during the elementary years. In English, polysyllabic words beginning with a strong syllable exhibit the most common or dominant pattern of lexical stress (e.g., "coconut"), whereas polysyllabic words beginning with a weak syllable exhibit a less common non-dominant pattern (e.g., "banana"). The new Aliens Talking Underwater task assesses children's ability to match low-pass filtered recordings of words to pictures of objects. Via filtering, phonetic detail is removed but prosodic contour information relating to lexical stress is retained. In a series of two-alternative forced choice trials, participants see a picture and are asked to choose which of two filtered recordings matches the name of that picture; one recording exhibits the correct lexical stress of the target word, and the other recording reverses the pattern of stress over the initial two syllables of the target word rendering it incorrect. Target words exhibit either dominant stress or non-dominant stress. Analysis of data collected from 192 typically developing children aged 5 to 12years revealed that sensitivity to non dominant lexical stress was a significant predictor of reading accuracy even when age and phonological awareness were taken into account. A total of 76.3% of variance in children's reading accuracy was explained by these variables.
abstract_id: PUBMED:33892529
Lexical analyses of the function and phonology of Papuan Malay word stress. The existence of word stress in Indonesian languages has been controversial. Recent acoustic analyses of Papuan Malay suggest that this language has word stress, counter to other studies and unlike closely related languages. The current study further investigates Papuan Malay by means of lexical (non-acoustic) analyses of two different aspects of word stress. In particular, this paper reports two distribution analyses of a word corpus, 1) investigating the extent to which stress patterns may help word recognition and 2) exploring the phonological factors that predict the distribution of stress patterns. The facilitating role of stress patterns in word recognition was investigated in a lexical analysis of word embeddings. The results show that Papuan Malay word stress (potentially) helps to disambiguate words. As for stress predictors, a random forest analysis investigated the effect of multiple morpho-phonological factors on stress placement. It was found that the mid vowels /ɛ/ and /ɔ/ play a central role in stress placement, refining the conclusions of previous work that mainly focused on /ɛ/. The current study confirms that non-acoustic research on stress can complement acoustic research in important ways. Crucially, the combined findings on stress in Papuan Malay so far give rise to an integrated perspective to word stress, in which phonetic, phonological and cognitive factors are considered.
abstract_id: PUBMED:33395954
How the speed of word finding depends on ventral tract integrity in primary progressive aphasia. Primary progressive aphasia (PPA) is a clinical neurodegenerative syndrome with word finding problems as a core clinical symptom. Many aspects of word finding have been clarified in psycholinguistics using picture naming and a picture-word interference (PWI) paradigm, which emulates naming under contextual noise. However, little is known about how word finding depends on white-matter tract integrity, in particular, the atrophy of tracts located ventrally to the Sylvian fissure. To elucidate this question, we examined word finding in individuals with PPA and healthy controls employing PWI, tractography, and computer simulations using the WEAVER++ model of word finding. Twenty-three individuals with PPA and twenty healthy controls named pictures in two noise conditions. Mixed-effects modelling was performed on naming accuracy and reaction time (RT) and fixel-based tractography analyses were conducted to assess the relation between ventral white-matter integrity and naming performance. Naming RTs were longer for individuals with PPA compared to controls and, critically, individuals with PPA showed a larger noise effect compared to controls. Moreover, this difference in noise effect was differentially related to tract integrity. Whereas the noise effect did not depend much on tract integrity in controls, a lower tract integrity was related to a smaller noise effect in individuals with PPA. Computer simulations supported an explanation of this paradoxical finding in terms of reduced propagation of noise when tract integrity is low. By using multimodal analyses, our study indicates the significance of the ventral pathway for naming and the importance of RT measurement in the clinical assessment of PPA.
abstract_id: PUBMED:34539378
Differential Associations of White Matter Brain Age With Language-Related Mechanisms in Word-Finding Ability Across the Adult Lifespan. Research on cognitive aging has established that word-finding ability declines progressively in late adulthood, whereas semantic mechanism in the language system is relatively stable. The aim of the present study was to investigate the associations of word-finding ability and language-related components with brain aging status, which was quantified by using the brain age paradigm. A total of 616 healthy participants aged 18-88 years from the Cambridge Centre for Ageing and Neuroscience databank were recruited. The picture-naming task was used to test the participants' language-related word retrieval ability through word-finding and word-generation processes. The naming response time (RT) and accuracy were measured under a baseline condition and two priming conditions, namely phonological and semantic priming. To estimate brain age, we established a brain age prediction model based on white matter (WM) features and estimated the modality-specific predicted age difference (PAD). Mass partial correlation analyses were performed to test the associations of WM-PAD with the cognitive performance measures under the baseline and two priming conditions. We observed that the domain-specific language WM-PAD and domain-general WM-PAD were significantly correlated with general word-finding ability. The phonological mechanism, not the semantic mechanism, in word-finding ability was significantly correlated with the domain-specific WM-PAD. In contrast, all behavioral measures of the conditions in the picture priming task were significantly associated with chronological age. The results suggest that chronological aging and WM aging have differential effects on language-related word retrieval functions, and support that cognitive alterations in word-finding functions involve not only the domain-specific processing within the frontotemporal language network but also the domain-general processing of executive functions in the fronto-parieto-occipital (or multi-demand) network. The findings further indicate that the phonological aspect of word retrieval ability declines as cerebral WM ages, whereas the semantic aspect is relatively resilient or unrelated to WM aging.
abstract_id: PUBMED:37425183
ERP evidence for Slavic and German word stress cue sensitivity in English. Word stress is demanding for non-native learners of English, partly because speakers from different backgrounds weight perceptual cues to stress like pitch, intensity, and duration differently. Slavic learners of English and particularly those with a fixed stress language background like Czech and Polish have been shown to be less sensitive to stress in their native and non-native languages. In contrast, German English learners are rarely discussed in a word stress context. A comparison of these varieties can reveal differences in the foreign language processing of speakers from two language families. We use electroencephalography (EEG) to explore group differences in word stress cue perception between Slavic and German learners of English. Slavic and German advanced English speakers were examined in passive multi-feature oddball experiments, where they were exposed to the word impact as an unstressed standard and as deviants stressed on the first or second syllable through higher pitch, intensity, or duration. The results revealed a robust Mismatch Negativity (MMN) component of the event-related potential (ERP) in both language groups in response to all conditions, demonstrating sensitivity to stress changes in a non-native language. While both groups showed higher MMN responses to stress changes to the second than the first syllable, this effect was more pronounced for German than for Slavic participants. Such group differences in non-native English word stress perception from the current and previous studies are argued to speak in favor of customizable language technologies and diversified English curricula compensating for non-native perceptual variation.
abstract_id: PUBMED:26792367
The role of metrical information in apraxia of speech. Perceptual and acoustic analyses of word stress. Several factors are known to influence speech accuracy in patients with apraxia of speech (AOS), e.g., syllable structure or word length. However, the impact of word stress has largely been neglected so far. More generally, the role of prosodic information at the phonetic encoding stage of speech production often remains unconsidered in models of speech production. This study aimed to investigate the influence of word stress on error production in AOS. Two-syllabic words with stress on the first (trochees) vs. the second syllable (iambs) were compared in 14 patients with AOS, three of them exhibiting pure AOS, and in a control group of six normal speakers. The patients produced significantly more errors on iambic than on trochaic words. A most prominent metrical effect was obtained for segmental errors. Acoustic analyses of word durations revealed a disproportionate advantage of the trochaic meter in the patients relative to the healthy controls. The results indicate that German apraxic speakers are sensitive to metrical information. It is assumed that metrical patterns function as prosodic frames for articulation planning, and that the regular metrical pattern in German, the trochaic form, has a facilitating effect on word production in patients with AOS.
abstract_id: PUBMED:24808879
Processing word prosody-behavioral and neuroimaging evidence for heterogeneous performance in a language with variable stress. In the present behavioral and fMRI study, we investigated for the first time interindividual variability in word stress processing in a language with variable stress position (German) in order to identify behavioral predictors and neural correlates underlying these differences. It has been argued that speakers of languages with variable stress should perform relatively well in tasks tapping into the representation and processing of word stress, given that this is a relevant feature of their language. Nevertheless, in previous studies on word stress processing large degrees of interindividual variability have been observed but were ignored or left unexplained. Twenty-five native speakers of German performed a sequence recall task using both segmental and suprasegmental stimuli. In general, the suprasegmental condition activated a subcortico-cortico-cerebellar network including, amongst others, bilateral inferior frontal gyrus, insula, precuneus, cerebellum, the basal ganglia, pre-SMA and SMA, which has been suggested to be dedicated to the processing of temporal aspects of speech. However, substantial interindividual differences were observed. In particular, main effects of group were observed in the left middle temporal gyrus (below vs. above average performance in stress processing) and in the left precuneus (above vs. below average). Moreover, condition (segmental vs. suprasegmental) and group (above vs. below average) interacted in the right hippocampus and cerebellum. At the behavioral level, differences in word stress processing could be partly explained by individual performance in basic auditory perception including duration discrimination and by working memory performance (WM). We conclude that even in a language with variable stress, interindividual differences in behavioral performance and in the neuro-cognitive foundations of stress processing can be observed which may partly be traced back to individual basic auditory processing and WM performance.
abstract_id: PUBMED:33719759
The role of orthographic cues to stress in Italian visual word recognition. In this study, stress diacritics were used to investigate the processing of stress information in lexical decision. We ran two experiments in Italian, a language in which stress position is not predictable by rule and only final stress-that is, the less common pattern-is orthographically marked with a diacritic. In Experiment 1, a lexical decision task, two factors were manipulated: the stress pattern of words-antepenultimate (nondominant) and penultimate (dominant)-and the presence/absence of the diacritics, signalling the stress position. Participants were faster to categorise stimuli as words when they bear dominant than nondominant stress. However, the advantage disappeared when the diacritic was used. In Experiment 2, a same-different verification task was used in which participants had to decide if a referent word and a target were same (carota-CAROTA, /ka'rɔta/; tavolo-TAVOLO, /'tavolo/) or different. We compared two conditions requiring a "different" response, in which referent and target with dominant and nondominant stress were congruent (caròta-CAROTA; tàvolo-TAVOLO) or incongruent (càrota-CAROTA; tavòlo-TAVOLO) with the word's stress. For words with dominant stress, "different" responses were faster in the incongruent condition than the congruent condition. This congruency effect was not observed for words with nondominant stress pattern. Overall, the data suggest that stress information is based on lexical phonology, and the stress dominance effect has a lexical base in word recognition.
Answer: The influence of dialect on metalinguistics, stress accuracy, and word reading has been examined in various studies. One study found that when controlling for socioeconomic status, mainstream American English (MAE) speakers outperformed nonmainstream American English (NMAE) speakers on measures of phonological awareness (PA), decoding, and word identification (WID). However, there was no difference in stress accuracy between the two dialect groups. Interestingly, for NMAE speakers, PA and morphological awareness (MA) were significantly related to decoding and WID, and stress accuracy was correlated with word reading. This suggests that morphophonology involving suprasegmental factors may be an area of convergence between language varieties due to its consistent relationship with word reading (PUBMED:22562865).
Another study focusing on children's sensitivity to lexical stress patterns found that sensitivity to non-dominant lexical stress was a significant predictor of reading accuracy, even when age and phonological awareness were considered. This indicates that awareness of stress patterns in words can be an important factor in reading proficiency (PUBMED:28088677).
In contrast, a study on Papuan Malay, a language with controversial word stress, showed that word stress potentially helps to disambiguate words, suggesting that stress patterns may play a role in word recognition (PUBMED:33892529).
Overall, these findings suggest that dialect can influence metalinguistic abilities and word reading, with stress accuracy playing a consistent role across dialects in relation to word reading. However, the relationship between these factors can be complex and may be influenced by other variables such as socioeconomic status, age, and phonological awareness. |
Instruction: Are there radiologically identifiable prodromal changes in Thoroughbred racehorses with parasagittal fractures of the proximal phalanx?
Abstracts:
abstract_id: PUBMED:23663185
Are there radiologically identifiable prodromal changes in Thoroughbred racehorses with parasagittal fractures of the proximal phalanx? Reasons For Performing Study: Fractures of the proximal phalanx are generally considered to result from monotonic supraphysiological loads, but radiological observations from clinical cases suggest there may be a stress-related aetiology.
Objectives: To determine whether there are radiologically identifiable prodromal changes in Thoroughbred racehorses with confirmed parasagittal fractures of the proximal phalanx.
Study Design: Retrospective cross-sectional study.
Methods: Case records and radiographs of Thoroughbred racehorses with parasagittal fractures of the proximal phalanx were analysed. Thickness of the subchondral bone plate was measured in fractured and contralateral limbs, and additional radiological features consistent with prodromal fracture pathology documented.
Results: The subchondral bone plate was significantly thicker in affected than in contralateral limbs. Evidence of additional prodromal fracture pathology was observed in 15/110 (14%) limbs with parasagittal fractures, and in 4% of contralateral limbs.
Conclusions: The results of this study are not consistent with monotonic loading as a cause of fracture in at least a proportion of cases, but suggest a stress-related aetiology. Increased thickness of the subchondral bone plate may reflect (failed) adaptive changes that precede fracture.
Potential Relevance: Better understanding of the aetiology of fractures of the proximal phalanx may help develop strategies to reduce the risk of fracture.
abstract_id: PUBMED:23663221
Radiographic configuration and healing of 121 fractures of the proximal phalanx in 120 Thoroughbred racehorses (2007-2011). Reasons For Performing Study: Although fractures of the proximal phalanx are one of the most common long bone fractures of Thoroughbred horses in training, limited details on variations in morphology and radiological progression have been published.
Objectives: To describe in detail the configuration of parasagittal fractures of the proximal phalanx in a group of Thoroughbred racehorses, to report fracture distribution within this group of horses and to document radiological progression of fracture healing in cases treated by internal fixation.
Study Design: Restrospective case series.
Methods: Case records and radiographs of Thoroughbred racehorses with parasagittal fractures of the proximal phalanx admitted to Newmarket Equine Hospital between 2007 and 2011 were analysed.
Results: One hundred and twenty-one fractures were identified in 120 Thoroughbred racehorses. Fractures were frequently more complex than was appreciated immediately following injury; a feature that has not been reported previously. There was seasonality of fractures in 2- and 3-year-old horses, but not in older horses.
Conclusions And Potential Relevance: Fractures of the proximal phalanx may be more complex than recognised previously, although often their complexity cannot be identified radiographically immediately following injury. The seasonality observed in 2- and 3-year-old horses is most likely to be a consequence of the timing of the turf-racing season in the UK. The Summary is available in Chinese - see Supporting information.
abstract_id: PUBMED:30828037
Bone marrow oedema-type signal in the proximal phalanx of Thoroughbred racehorses. This study focused on 8 Thoroughbred racehorses showing bone marrow oedema-type signal in the proximal sagittal groove of the proximal phalanx, with the aim of understanding its clinical significance. Standing magnetic resonance imaging played an important role in assessing osseous abnormalities that were not radiographically identifiable. Further, a histopathological result from one of the cases showed there was oedema surrounding adipose tissues with increase in density of trabecular scaffolding. This may indicate presence of osseous injury within the area of decreased elasticity due to subchondral bone modeling. This study suggests that detection of osseous abnormality based on bone marrow oedema-type signal, and application of appropriate care following injury would contribute to prevent deterioration of stress-related fractures of the proximal phalanx.
abstract_id: PUBMED:34944142
Imaging and Gross Pathological Appearance of Changes in the Parasagittal Grooves of Thoroughbred Racehorses. (1) Background: Parasagittal groove (PSG) changes are often present on advanced imaging of racing Thoroughbred fetlocks and have been suggested to indicate increased fracture risk. Currently, there is limited evidence differentiating the imaging appearance of prodromal changes in horses at risk of fracture from horses with normal adaptive modelling in response to galloping. This study aims to investigate imaging and gross PSG findings in racing Thoroughbreds and the comparative utility of different imaging modalities to detect PSG changes. (2) Methods: Cadaver limbs were collected from twenty deceased racing/training Thoroughbreds. All fetlocks of each horse were examined with radiography, low-field magnetic resonance imaging (MRI), computed tomography (CT), contrast arthrography and gross pathology. (3) Results: Horses with fetlock fracture were more likely to have lateromedial PSG sclerosis asymmetry and/or lateral PSG lysis. PSG lysis was not readily detected using MRI. PSG subchondral bone defects were difficult to differentiate from cartilage defects on MRI and were not associated with fractures. The clinical relevance of PSG STIR hyperintensity remains unclear. Overall, radiography was poor for detecting PSG changes. (4) Conclusions: Some PSG changes in Thoroughbred racehorses are common; however, certain findings are more prevalent in horses with fractures, possibly indicating microdamage accumulation. Bilateral advanced imaging is recommended in racehorses with suspected fetlock pathology.
abstract_id: PUBMED:28556936
Parasagittal fractures of the proximal phalanx in Thoroughbred racehorses in the UK: Outcome of repaired fractures in 113 cases (2007-2011). Background: Thirty years have elapsed since the last published review of outcome following fracture of the proximal phalanx in Thoroughbred racehorses in the UK and contemporary results are needed to be able to advise of expected outcome.
Objectives: Collect and analyse outcome data following repair of fractures of the proximal phalanx in Thoroughbred racehorses in the UK.
Study Design: Retrospective case series.
Methods: Case records of all Thoroughbred racehorses admitted to Newmarket Equine Hospital for evaluation of a parasagittal fracture of the proximal phalanx during a 5 years period were reviewed. Follow-up data regarding racing careers was collected for horses that underwent repair. Following exclusion of outliers, cases with incomplete data sets and comminuted fractures, mixed effect logistic regression was used to identify variables affecting returning to racing and odds ratios and confidence intervals calculated.
Results: Of 113 repaired cases, fracture configurations included short incomplete parasagittal (n = 12), long incomplete parasagittal (n = 86), complete parasagittal (n = 12) and comminuted (n = 3). A total of 54 (48%) cases raced after surgery. Horses that fractured at 2 years of age had increased odds of racing following surgery than those older than 2 years of age (OR 1.34; 95% CI 1.13-1.59, P = 0.002). Horses sustaining short incomplete parasagittal fractures had increased odds of racing following surgery compared with those with complete parasagittal fractures (OR 2.62; 95% CI 1.36-5.07, P = 0.006). No horses with comminuted fractures returned to racing.
Main Limitations: Data are relevant only to Thoroughbred racehorses in the UK.
Conclusions: Approximately half of the cases in this series raced following surgical repair. More 2-year-old horses raced following surgery, but this likely reflects horses, specifically older horses, passing out of training from unrelated factors. Fracture configuration affects odds of racing, which is relevant to owners when deciding on treatment.
abstract_id: PUBMED:20156254
Clinical and imaging features of suspected prodromal fracture of the proximal phalanx in three Thoroughbred racehorses. Sagittal fracture of the proximal phalanx (P1) is an important musculoskeletal injury of the performance horse. Although widely considered to be monotonic in nature, there is emerging evidence that some P1 fractures may have stress-injury aetiology. Three cases are described in which imaging features found were suggestive of prodromal bone injury. All cases returned to full performance use after a period of rest. The authors conclude that it is possible that some P1 fractures in the Thoroughbred racehorse may develop through stress/fatigue injury pathways. It is proposed that intervention prior to overt fracture may be possible in some cases.
abstract_id: PUBMED:32885508
Arthroscopic evaluation of the metacarpophalangeal and metatarsophalangeal joints in horses with parasagittal fractures of the proximal phalanx. Background: Fractures of the proximal phalanx are one of the most common long bone fractures of Thoroughbred racehorses. Although the degree of disruption and damage to the articular surface is generally considered a major prognostic determinant, associated arthroscopic findings have not previously been reported.
Objectives: To describe the metacarpo/metatarsophalangeal (MCP/MTP) joint lesions associated with parasagittal fractures of the proximal phalanx arthroscopically identified at the time of fracture repair and compare radiographic and arthroscopic appearance of complete fractures.
Study Design: Retrospective case series.
Methods: Case records and arthroscopic images of horses with parasagittal fractures of the proximal phalanx admitted to Newmarket Equine Hospital from 2007 to 2017 were analysed.
Results: 81 MCP/MTP joints in 78 horses underwent arthroscopic evaluation concurrent to parasagittal fracture repair. Tears of the joint capsule and dorsal synovial plica were noted in 43 cases. Arthroscopy identified articular incongruity in three horses where fracture displacement was not predicted at all on pre-operative radiographs, and incongruity in additional plane(s) to the radiographic displacement in 14 horses. Concurrent osteochondral fragmentation and disruption of cartilage were present in some cases.
Main Limitations: As a retrospective study, the arthroscopic data available for review were variable. Arthroscopic assessment of fracture reduction and joint congruency was evaluated in all cases but there was variation in the completeness of evaluation of the entire dorsal joint space of the fetlock joint. This may have led to the underestimation of soft tissue lesions in these cases.
Conclusions: Some horses suffering from parasagittal proximal phalanx fractures have concurrent tearing of the joint capsule and/or dorsal plica, which may have relevance in the acute course of events resulting in the development of fractures. Fracture displacement and incongruency at the articular surface cannot confidently be excluded pre-operatively by radiographs alone.
abstract_id: PUBMED:28710894
Short frontal plane fractures involving the dorsoproximal articular surface of the proximal phalanx: Description of the injury and a technique for repair. Background: Chip fractures of the dorsoproximal articular margin of the proximal phalanx are common injuries in racehorses. Large fractures can extend distal to the joint capsule insertion and have been described as dorsal frontal fractures.
Objectives: To report the location and morphology of short frontal plane fractures involving the dorsoproximal articular surface of the proximal phalanx and describe a technique for repair under arthroscopic and radiographic guidance.
Study Design: Single centre retrospective case study.
Methods: Case records of horses with frontal plane fractures restricted to the dorsoproximal epiphysis and metaphysis of the proximal phalanx referred to Newmarket Equine Hospital were retrieved, images reviewed and lesion morphology described. A technique for repair and the results obtained are reported.
Results: A total of 22 fractures in 21 horses commencing at the proximal articular surface exited the dorsal cortex of the proximal phalanx distal to the metacarpophalangeal/metatarsophalangeal joint capsule in 17 hind- and five forelimbs. All were in Thoroughbred racehorses. In 16 cases these were acute racing or training injuries; 20 fractures were medial, one lateral and one was midline. All were repaired with a single lag screw using arthroscopic and radiographically determined landmarks. A total of 16 horses raced after surgery with performance data similar to their preinjury levels.
Main Limitations: The study demonstrates substantial morphological similarities between individual lesions supporting a common pathophysiology, but does not identify precise causation. There are no cases managed differently that might permit assessment of the comparative efficacy of the treatment described.
Conclusions: Short frontal plane fractures involving the dorsoproximal margin of the proximal phalanx that exit the bone distal to the metacarpophalangeal/metatarsophalangeal joint capsule have substantial morphological similarities, are amenable to minimally invasive repair and carry a good prognosis for return to training and racing.
abstract_id: PUBMED:32367546
Arthroscopic debridement of short frontal plane proximal phalanx fractures preserves racing performance. Background: Outcomes have been reported for a limited number of short frontal plane fractures of the proximal phalanx following nonsurgical treatment and internal fixation.
Objectives: To describe a new approach, arthroscopic debridement, of short frontal plane fractures of the proximal phalanx in flat-racing Thoroughbreds and post-operative racing outcome.
Study Design: Retrospective case-control study.
Methods: Medical records of 81 Thoroughbred racehorses treated with arthroscopic debridement for frontal plane fractures of the proximal phalanx were reviewed. Diagnostic images and operative reports were used to characterise lesions and a technique for arthroscopic treatment was described. Post injury racing career length, starts, earnings and race quality are compared with matched controls.
Results: Of 81 treated horses, 74 (91%) raced post-operatively. Treated horses had fewer post-operative starts compared with controls (median 12, 95% CI 9-16 vs median 19, 95% CI 15-23; P < .001), but there was no difference in post-operative earnings (median $51 465, 95% CI $39 868-$85 423 vs median $68 017, 95% CI $54 247-$87 870, P = .7) or career length (median 7 quarters, 95% CI 5-8 vs median 9 quarters, 95% CI 8-10, P = .1).
Main Limitations: Retrospective studies prevent prospective control of sampling bias and limit selection of matched controls.
Conclusions: Treatment of frontal plane fractures of the proximal phalanx by arthroscopic debridement results in racing performance comparable to uninjured controls with respect to longevity and earnings.
abstract_id: PUBMED:33244781
Microstructural properties of the proximal sesamoid bones of Thoroughbred racehorses in training. Background: Proximal sesamoid bone fractures are common catastrophic injuries in racehorses. Understanding the response of proximal sesamoid bones to race training can inform fracture prevention strategies.
Objectives: To describe proximal sesamoid bone microstructure of racehorses and to investigate the associations between microstructure and racing histories.
Study Design: Cross-sectional.
Methods: Proximal sesamoid bones from 63 Thoroughbred racehorses were imaged using micro-computed tomography. Bone volume fraction (BVTV) and bone material density (BMD) of the whole bone and four regions (apical, midbody dorsal, midbody palmar and basilar) were determined. Generalised linear regression models were used to identify the associations between bone parameters and race histories of the horses.
Results: The mean sesamoid BVTV was 0.79 ± 0.08 and BMD was 806.02 ± 24.66 mg HA/ccm. BVTV was greater in medial sesamoids compared with lateral sesamoids (0.80 ± 0.07 vs 0.79 ± 0.08; P < .001) predominantly due to differences in the apical region (medial-0.76 ± 0.08 vs lateral-0.72 ± 0.07; P < .001). BVTV in the midbody dorsal region (0.86 ± 0.06) was greater than other regions (midbody palmar-0.79 ± 0.07, basilar-0.78 ± 0.06 and apical-0.74 ± 0.08; P < .001). BVTV was greater in sesamoids with more microcracks on their articular surface (Coef. 0.005; 95% CI 0.001, 0.009; P = .01), greater extent of bone resorption on their abaxial surface (Grade 2-0.82 ± 0.05 vs Grade 1-0.80 ± 0.05 or Grade 0-0.79 ± 0.06; P = .006), in horses with a low (0.82 ± 0.07) or mid handicap rating (0.78 ± 0.08) compared with high rating (0.76 ± 0.07; P < .001), in 2- to 5-year-old horses (0.81 ± 0.07) compared with younger (0.68 ± 0.08) or older horses (0.77 ± 0.08; P < .001) and in horses that commenced their racing career at less than 4 years of age (0.79 ± 0.08 vs 0.77 ± 0.77; P < .001). BMD was greater in the midbody dorsal (828.6 ± 19.6 mg HA/ccm) compared with other regions (apical-805.8 ± 21.8, midbody palmar-804.7 ± 18.4 and basilar-785.0 ± 17.1; P < .001), in horses with a handicap rating (low-812.1 ± 20.0, mid-821.8 ± 21.3 and high-814.6 ± 19.4) compared with those with no rating (791.08 ± 24.4, P < .001), in females (806.7 ± 22.0) and geldings (812.2 ± 22.4) compared with entires (792.7 ± 26.2; P = .02) and in older horses (<2-year-old-763.7 ± 24.8 vs 2- to 5-year-old-802.7 ± 23.4, and 6- to 12-year-old-817.8 ± 20.0; P = .002).
Main Limitations: Data were cross-sectional.
Conclusions: Densification of the proximal sesamoid bones is associated with the commencement of racing in younger horses and the presence of bone fatigue-related pathology. Lower sesamoid BVTV was associated with longevity and better performance.
Answer: Yes, there are radiologically identifiable prodromal changes in Thoroughbred racehorses with parasagittal fractures of the proximal phalanx. A study found that the subchondral bone plate was significantly thicker in affected limbs than in contralateral limbs, and evidence of additional prodromal fracture pathology was observed in 14% of limbs with parasagittal fractures. These findings suggest a stress-related aetiology rather than monotonic loading as a cause of fracture in at least a proportion of cases (PUBMED:23663185). Additionally, bone marrow oedema-type signal in the proximal phalanx, which may indicate the presence of osseous injury, can be detected using standing magnetic resonance imaging, contributing to the prevention of stress-related fractures (PUBMED:30828037). Imaging studies have also shown that fractures of the proximal phalanx may be more complex than recognized immediately following injury, and the complexity of these fractures may not be identifiable radiographically right after the injury occurs (PUBMED:23663221). |
Instruction: Can health utility measures be used in lupus research?
Abstracts:
abstract_id: PUBMED:10381044
Can health utility measures be used in lupus research? A comparative validation and reliability study of 4 utility indices. Objective: To assess validity and reliability of 4 utility indices in patients with systemic lupus erythematosus (SLE).
Methods: Twenty-five patients with stable SLE underwent assessment of disease activity [Systemic Lupus Disease Activity Measure (SLAM-R) and SLE Disease Activity Index (SLEDAI)] and damage [Systemic Lupus Collaborating Clinics/American College of Rheumatology Damage Index (SLICC/ACR DI)] and completed a health survey [Medical Outcome Survey Short Form-36 (SF-36)] and 4 utility measures: the visual analog scale (VAS), the time trade-off (TTO), the standard gamble (SG), and the McMaster Health Utilities Index Mark 2 (HUI2). To assess validity, Pearson's correlations were calculated between the SF-36 subscales and the utility measures. To assess reliability, intraclass correlations or kappa coefficients were calculated between first and second assessments, performed from 2 to 4 weeks apart, in patients without important clinical change in disease activity.
Results: Disease activity from a SLAM-R varied from 0 to 14 (median = 4) and SLEDAI from 0 to 18 (median = 0). All subscales of the SF-36 correlated well with the VAS [lowest r = 0.56, 95% confidence interval (CI) (0.17, 0.80)] and poorly with the SG [maximum r = 0.41, CI (-0.01, 0.70); minimum r = 0.10, CI (-0.32, 0.50)]. The subscales of bodily pain (r = 0.56), mental health (r = 0.45), physical functioning (r = 0.62), role-emotional (r = 0.47), social functioning (r = 0.49) and vitality (r = 0.44) correlated significantly with TTO. All subscales correlated significantly [lowest r = 0.48, CI (0.09, 0.75)] with the HUI2 index of pain. Intraclass correlations for the VAS (ICC = 0.67) and TTO (ICC = 0.60) were good. They were fair for the SG (ICC = 0.45). The kappa coefficient was poor (0.32) for the HUI attribute of pain, but varied from fair (0.46) to excellent (0.88) for the remaining attributes. Regression analysis showed that a model incorporating the SLAM-R value and SF-36 subset of mental health was a good predictor of VAS and TTO utility measures.
Conclusion: The VAS, TTO, and to some extent, the HUI2, when compared with the SF-36 health survey, are valid and reliable measures to assess health related quality of life in a group of patients with SLE and may be useful for future research in this population. On the basis of these results the usefulness of the SG is questionable in these patients.
abstract_id: PUBMED:9972993
Outcome measures to be used in clinical trials in systemic lupus erythematosus. The optimal outcome measures to be employed in clinical trials of systemic lupus erythematosus (SLE) have yet to be determined. Useful instruments should assess disease outcome in terms of all organ system involvement, as well as measures important to the patient. This article reviews those outcome measures that have been utilized in cohort studies in SLE, as well as their limited use in randomized clinical trials (RCT). Six disease activity measures have been developed: British Isles Lupus Assessment Group Scale (BILAG), European Consensus Lupus Activity Measure (ECLAM), Lupus Activity Index (LAI), National Institutes of Health SLE Index Score (SIS), Systemic Lupus Activity Measure (SLAM), and Systemic Lupus Erythematosus Disease Activity Index (SLEDAI). They have been validated in cohort studies as reflecting change in disease activity, and against each other. RCT utilizing SLAM, SLEDAI, BILAG, ECLAM, SIS, SLAM, SLEDAI are ongoing. It is recommended that the disease activity index of choice be selected; but simultaneous computer generation of multiple indices will facilitate comparisons across therapeutic interventions. A damage index has been developed and validated as the Systemic Lupus International Cooperating Clinics (SLICC)/American College of Rheumatology (ACR) Damage Index or SDI. In several cohort studies it has been shown sensitive to change over time, and to reflect cumulative disease activity. There is no health status or disability instrument specific to SLE. The Medical Outcomes Survey (SF-20) captures health status/health related quality of life (HRQOL) better than the Health Assessment Questionnaire (HAQ) in patients with SLE, but does not adequately reflect fatigue. The SF-36 does assess fatigue, and correlates closely with the SF-20. These data indicate that any individual measure of clinical response to a therapeutic intervention in SLE may reflect only a portion of what might be termed the "true outcome." Based on this work, the way is now paved to attempt to develop consensus on the important domains to be measured in clinical trials in SLE, the most appropriate instruments to use and the minimal clinically important differences in their results.
abstract_id: PUBMED:23264550
Comparison of the psychometric properties of health-related quality of life measures used in adults with systemic lupus erythematosus: a review of the literature. Objective: A review of the literature was undertaken to evaluate the development and psychometric properties of health-related quality of life (HRQoL) measures used in adults with SLE. This information will help clinicians make an informed choice about the measures most appropriate for research and clinical practice.
Methods: Using the key words lupus and quality of life, full original papers in English were identified from six databases: OVID MEDLINE, EMBASE, Allied and Complementary Medicine, Psychinfo, Web of Science and Health and Psychosocial Instruments. Only studies describing the validation of HRQoL measures in adult SLE patients were retrieved.
Results: Thirteen papers were relevant; five evaluated generic instruments [QOLS-S (n = 1), EQ-5D/SF-6D (n = 1), SF-36 (n = 3)] and eight evaluated disease-specific measures [L-QOL (n = 1), LupusQoL (UK) (n = 1), LupusQoL (US) (n = 1), SSC (n = 2), SLEQOL (n = 3)]. For the generic measures, there is moderate evidence of good content validity and internal consistency, whereas there is strong evidence for both these psychometric properties in disease-specific measures. There is limited to moderate evidence to support the construct validity and test-retest reliability for the disease-specific measures. Responsiveness and floor/ceiling effects have not been adequately investigated in any of the measures.
Conclusions: Direct comparison of the psychometric properties was difficult because of the different methodologies employed in the development and evaluation of the different HRQoL measures. However, there is supportive evidence that multidimensional disease-specific measures are the most suitable in terms of content and internal reliability for use in studies of adult patients with SLE.
abstract_id: PUBMED:37537705
Meeting report: the ALPHA project: a stakeholder meeting on lupus clinical trial outcome measures and the patient perspective. Drug development in lupus has improved over the past 10 years but still lags behind that of other rheumatic disease areas. Assessment of prospective lupus therapies in clinical trials has proved challenging for reasons that are multifactorial including the heterogeneity of the disease, study design limitations and a lack of validated biomarkers which greatly impacts regulatory decision-making. Moreover, most composite outcome measures currently used in trials do not include patient-reported outcomes. Given these factors, the Addressing Lupus Pillars for Health Advancement Global Advisory Committee members who serve on the drug development team identified an opportunity to convene a meeting to facilitate information sharing on completed and existing outcome measure development efforts. This meeting report highlights information presented during the meeting as well as a discussion on how the lupus community may work together with regulatory agencies to simplify and standardise outcome measures to accelerate development of lupus therapeutics.
abstract_id: PUBMED:36008224
Valuing Health Gain from Composite Response Endpoints for Multisystem Diseases. Objectives: This study aimed to demonstrate how to estimate the value of health gain after patients with a multisystem disease achieve a condition-specific composite response endpoint.
Methods: Data from patients treated in routine practice with an exemplar multisystem disease (systemic lupus erythematosus) were extracted from a national register (British Isles Lupus Assessment Group Biologics Register). Two bespoke composite response endpoints (Major Clinical Response and Improvement) were developed in advance of this study. Difference-in-differences regression compared health utility values (3-level version of EQ-5D; UK tariff) over 6 months for responders and nonresponders. Bootstrapped regression estimated the incremental quality-adjusted life-years (QALYs), probability of QALY gain after achieving the response criteria, and population monetary benefit of response.
Results: Within the sample (n = 171), 18.2% achieved Major Clinical Response and 49.1% achieved Improvement at 6 months. Incremental health utility values were 0.0923 for Major Clinical Response and 0.0454 for Improvement. Expected incremental QALY gain at 6 months was 0.020 for Major Clinical Response and 0.012 for Improvement. Probability of QALY gain after achieving the response criteria was 77.6% for Major Clinical Response and 72.7% for Improvement. Population monetary benefit of response was £1 106 458 for Major Clinical Response and £649 134 for Improvement.
Conclusions: Bespoke composite response endpoints are becoming more common to measure treatment response for multisystem diseases in trials and observational studies. Health technology assessment agencies face a growing challenge to establish whether these endpoints correspond with improved health gain. Health utility values can generate this evidence to enhance the usefulness of composite response endpoints for health technology assessment, decision making, and economic evaluation.
abstract_id: PUBMED:9604367
Health services research and systemic lupus erythematosus: a reciprocal relationship. Many other statistical and epidemiologic approaches have been used in studies of systemic lupus erythematosus. Many of these, such as multivariate analysis and cost-benefit analysis, have substantially added to our understanding of the disease. However, in these instances it has been a clear application of the technique to a clinical problem. In the four areas I described, there was a convergence of clinical dilemmas and methodologic improvements, with the clinical problem actually contributing to the development of the methodologic technique used to address the question. In conclusion, major methodologic advancements in health services research and clinical epidemiology have developed pari passu with studies of the natural history of systemic lupus erythematosus. In some circumstances the clinical questions drove methodologic innovation and in other cases the methodologic studies were rapidly adopted and adapted to clinical investigation. It is unlikely that this was happenstance. We can anticipate that future investigations in lupus will utilize innovative techniques in health services research.
abstract_id: PUBMED:34215371
Systemic Lupus Erythematosus Outcome Measures for Systemic Lupus Erythematosus Clinical Trials. The assessment of systemic lupus erythematosus (SLE) disease activity in clinical trials has been challenging. This is related to the wide spectrum of SLE manifestations and the heterogeneity of the disease trajectory. Currently, composite outcome measures are most commonly used as a primary endpoint while organ-specific measures are often used as secondary outcomes. In this article, we review the outcome measures and endpoints used in most recent clinical trials and explore potential avenues for further development of new measures and the refinement of existing tools.
abstract_id: PUBMED:30167315
Patient-reported outcome measures for use in clinical trials of SLE: a review. Inclusion of patient-reported outcomes is important in SLE clinical trials as they allow capture of the benefits of a proposed intervention in areas deemed pertinent by patients. We aimed to compare the measurement properties of health-related quality of life (HRQoL) measures used in adults with SLE and to evaluate their responsiveness to interventions in randomised controlled trials (RCTs). A systematic review was undertaken using full original papers in English identified from three databases: MEDLINE, EMBASE and PubMed. Studies describing the validation of HRQoL measures in English-speaking adult patients with SLE and SLE drug RCTs that used an HRQoL measure were retrieved. Twenty-five validation papers and 26 RCTs were included in the indepth review evaluating the measurement properties of 4 generic (Medical Outcomes Study Short-Form 36 (SF36), Patient Reported Outcomes Measurement Information System (PROMIS) item-bank, EuroQol-5D, and Functional Assessment of Chronic Illness Therapy-Fatigue) and 3 disease-specific (Lupus Quality of Life (LupusQoL), Lupus Patient Reported Outcomes, Lupus Impact Tracker (LIT)) instruments. All measures had good convergent and discriminant validity. PROMIS provided the strongest evidence for known-group validity and reliability among generic instruments; however, data on its responsiveness have not been published. Across measures, standardised response means were generally indicative of poor-moderate sensitivity to longitudinal change. In RCTs, clinically important improvements were reported in SF36 scores from baseline; however, between-arm differences were frequently non-significant and non-important. SF36, PROMIS, LupusQoL and LIT had the strongest evidence for acceptable measurement properties, but few measures aside from the SF36 have been incorporated into clinical trials. This review highlights the importance of incorporating a broader range of SLE-specific HRQoL measures in RCTs and warrants further research that focuses on longitudinal responsiveness of newer instruments.
abstract_id: PUBMED:29152588
The Delivery Science Rapid Analysis Program: A Research and Operational Partnership at Kaiser Permanente Northern California. Introduction: Health care researchers and delivery system leaders share a common mission to improve health care quality and outcomes. However, differing timelines, incentives, and priorities are often a barrier to research and operational partnerships. In addition, few funding mechanisms exist to generate and solicit analytic questions that are of interest to both research and to operations within health care settings, and provide rapid results that can be used to improve practice and outcomes.
Methods: The Delivery Science Rapid Analysis Program (RAP) was formed in 2013 within the Kaiser Permanente Northern California Division of Research, sponsored by The Permanente Medical Group. A Steering Committee consisting of both researchers and clinical leaders solicits and reviews proposals for rapid analytic projects that will use existing data and are feasible within 6 months and with up to $30,000 (approximately 25-50% full-time equivalent) of programmer/analyst effort. Review criteria include the importance of the analytic question for both research and operations, and the potential for the project to have a significant impact on care delivery within 12 months of completion.
Results: The RAP funded 5 research and operational analytic projects between 2013 and 2017. These projects spanned a wide range of clinical areas, including lupus, pediatric obesity, diabetes, e-cigarette use, and hypertension. The hypertension RAP project, which focused on optimizing thiazide prescribing in Black/African-American patients with hypertension, led to new insights that inform an equitable care quality metric designed to reduce blood pressure control disparities throughout the KPNC region.
Conclusions: Programs that actively encourage research and operational analytic partnerships have significant potential to improve care, enhance research collaborations, and contribute to the building and sustaining of learning health care systems.
abstract_id: PUBMED:28662595
Fatigue and depression predict reduced health-related quality of life in childhood-onset lupus. We aimed to identify risk factors for persistently reduced health-related quality of life in childhood-onset lupus and describe a risk profile for persistently reduced health-related quality of life. At a tertiary rheumatology clinic, 50 childhood onset lupus patients were assessed twice, approximately six months apart. Measures of disease activity and patient-reported measures of health-related quality of life, pain, depressive symptoms, anxiety and disability were collected at each visit. At visits 1 and 2, respectively, clinically relevant fatigue was present in 66% and 56% of patients; clinically significant depressive symptoms in 26% and 24%; and clinically significant anxiety in 34% and 28%. Poorer health-related quality of life at follow-up was significantly predicted by higher fatigue and depressive symptoms at the initial visit. Using clinically relevant cut-offs for fatigue and depressive symptoms, patients were assigned to Low ( n = 27) or High Risk ( n = 23) groups. A profile of significantly greater pain, anxiety and coping difficulties was seen in the High Risk group. Routine assessment of fatigue and mood symptoms in youth with childhood-onset lupus could be helpful in identifying those at risk for persistently poor health-related quality of life. Integration of behavioral interventions to address fatigue and mood symptoms into medical care for such patients may be beneficial, but more research in this area is needed.
Answer: Yes, health utility measures can be used in lupus research. A comparative validation and reliability study of four utility indices in patients with systemic lupus erythematosus (SLE) found that the visual analog scale (VAS), the time trade-off (TTO), and the McMaster Health Utilities Index Mark 2 (HUI2) are valid and reliable measures to assess health-related quality of life in a group of patients with SLE. These measures may be useful for future research in this population (PUBMED:10381044). Additionally, a review of the literature evaluating the development and psychometric properties of health-related quality of life (HRQoL) measures used in adults with SLE supports the evidence that multidimensional disease-specific measures are the most suitable in terms of content and internal reliability for use in studies of adult patients with SLE (PUBMED:23264550).
Moreover, the ALPHA project stakeholder meeting on lupus clinical trial outcome measures and the patient perspective highlighted the importance of including patient-reported outcomes in trials to capture the benefits of interventions in areas deemed pertinent by patients (PUBMED:37537705). Furthermore, a study aimed to demonstrate how to estimate the value of health gain after patients with a multisystem disease achieve a condition-specific composite response endpoint, using systemic lupus erythematosus as an exemplar, suggests that health utility values can generate evidence to enhance the usefulness of composite response endpoints for health technology assessment, decision making, and economic evaluation (PUBMED:36008224).
In conclusion, health utility measures are not only applicable but also valuable in lupus research for assessing the impact of the disease and treatments on patients' quality of life, and for informing clinical trials and health technology assessments. |
Instruction: Does chronic hepatitis B increase Staphylococcus nasal carriage?
Abstracts:
abstract_id: PUBMED:17447373
Does chronic hepatitis B increase Staphylococcus nasal carriage? Objectives: To determine the prevalence of Staphylococcus aureus nasal carriage in patients with chronic hepatitis B virus infection.
Patients And Methods: The prevalence of S. aureus nasal carriage was determined in patients with chronic hepatitis B virus infection and compared with the prevalence of S. aureus nasal carriage among control patients.
Results: Between February 2003 and November 2004, 70 chronic hepatitis B patients and 70 control patients were enrolled in the study. S. aureus nasal carriage was shown in 15 (12%) of the patients with chronic hepatitis B and 13 (19%) of the control group (P > 0.05). There was no difference in nasal colonization between the cases and controls when analysed by age, sex, frequency of skin infection, prior use of antibiotics and hospital admission in the preceding six months.
Conclusion: The results of our study show that chronic hepatitis B virus infection is not associated with S. aureus nasal carriage.
abstract_id: PUBMED:24944714
L-asparaginase-induced severe acute pancreatitis in an adult with extranodal natural killer/T-cell lymphoma, nasal type: A case report and review of the literature. L-asparaginase (L-Asp)-associated pancreatitis (AAP) occurs occasionally; however, this side-effect has predominantly been observed among pediatric patients. Usually, it is not life-threatening and generally responds to intensive medical therapy. The present study presents a rare case of lethal AAP in an adult. The patient was recently diagnosed with extranodal natural killer/T-cell lymphoma (ENKTL), nasal type, and the chronic hepatitis B virus (HBV) infection and was receiving L-Asp as part of a chemotherapy regimen. Severe acute pancreatitis occurred and the patient succumbed 72 h after completion of chemotherapy. The HBV infection and lipid disorders may have been potential risk factors for the development of severe acute pancreatitis in the patient.
abstract_id: PUBMED:18988936
Case report: Staphylococcus gallinarum bacteremia in a patient with chronic hepatitis B virus infection. An unusual staphylococcal isolate was recovered from blood cultures in a patient with chronic hepatitis B virus infection, who presented with low grade fever accompanied by increased upper abdominal pain, nausea and weakness. The isolate was identified as Staphylococcus gallinarum based on biochemical characteristics and 16S rRNA gene sequence analyses.
abstract_id: PUBMED:20819529
Coagulase-negative staphylococcus and enterococcus as predominant pathogens in liver transplant recipients with Gram-positive coccal bacteremia. Background: Gram-positive bacteria such as Staphylococcus aureus have been a common cause of infection among liver transplant (LT) recipients in recent decades. The understanding of local epidemiology and its evolving trends with regard to pathogenic spectra and antibiotic susceptibility is beneficial to prophylactic and empiric treatment for LT recipients. This study aimed to investigate etiology, timing, antibiotic susceptibility and risk factors for multidrug resistant (MDR) Gram-positive coccal bacteremia after LT.
Methods: A cohort analysis of prospectively recorded data was performed to investigate etiologies, timing, antibiotic susceptibility and risk factors for MDR Gram-positive coccal bacteremia in 475 LT recipients.
Results: In 475 LT recipients in the first six months after LT, there were a total of 98 episodes of bacteremia caused by Gram-positive cocci in 82 (17%) patients. Seventy-five (77%) bacteremic episodes occurred in the first post-LT month. The most frequent Gram-positive cocci were methicillin-resistant coagulase-negative staphylococcus (CoNS, 46 isolates), methicillin-resistant Staphylococcus aureus (MRSA, 13) and enterococcus (34, E. faecium 30, E. faecalis 4). In all Gram-positive bacteremic isolates, 59 of 98 (60%) were MDR. Gram-positive coccal bacteremia and MDR Gram-positive coccal bacteremia predominantly occurred in patients with acute severe exacerbation of chronic hepatitis B and with fulminant/subfulminant hepatitis. Four independent risk factors for development of bacteremia caused by MDR Gram-positive coccus were: LT candidates with encephalopathy grades II - IV (P = 0.013, OR: 16.253, 95%CI: 1.822 - 144.995), pre-LT use of empirical antibiotics (P = 0.018, OR: 1.029, 95%CI: 1.002 - 1.057), post-LT urinary tract infections (P < 0.001, OR: 20.340, 95%CI: 4.135 - 100.048) and abdominal infection (P = 0.004, OR: 2.820, 95%CI: 1.122 - 10.114). The main infectious manifestations were coinfections due to gram-positive cocci and gram-negative bacilli.
Conclusions: Methicillin-resistant CoNS and enterococci are predominant pathogens among LT recipients with Gram-positive coccal bacteremia. Occurrences of Gram-positive coccal bacteremia may be associated with the severity of illness in the perioperative stage.
abstract_id: PUBMED:29149205
KIR content genotypes associate with carriage of hepatitis B surface antigen, e antigen and HBV viral load in Gambians. Background: Hepatocellular carcinoma (HCC) causes over 800,000 deaths worldwide annually, mainly in low income countries, and incidence is rising rapidly in the developed world with the spread of hepatitis B (HBV) and C (HCV) viruses. Natural Killer (NK) cells protect against viral infections and tumours by killing abnormal cells recognised by Killer-cell Immunoglobulin-like Receptors (KIR). Thus genes and haplotypes encoding these receptors may be important in determining both outcome of initial hepatitis infection and subsequent chronic liver disease and tumour formation. HBV is highly prevalent in The Gambia and the commonest cause of liver disease. The Gambia Liver Cancer Study was a matched case-control study conducted between September 1997 and January 2001 where cases with liver disease were identified in three tertiary referral hospitals and matched with out-patient controls with no clinical evidence of liver disease.
Methods: We typed 15 KIR genes using the polymerase chain reaction with sequence specific primers (PCR-SSP) in 279 adult Gambians, 136 with liver disease (HCC or Cirrhosis) and 143 matched controls. We investigated effects of KIR genotypes and haplotypes on HBV infection and associations with cirrhosis and HCC.
Results: Homozygosity for KIR group A gene-content haplotype was associated with HBsAg carriage (OR 3.7, 95% CI 1.4-10.0) whilst telomeric A genotype (t-AA) was associated with reduced risk of e antigenaemia (OR 0.2, 95% CI 0.0-0.6) and lower viral loads (mean log viral load 5.2 vs. 6.9, pc = 0.022). One novel telomeric B genotype (t-ABx2) containing KIR3DS1 (which is rare in West Africa) was also linked to e antigenaemia (OR 8.8, 95% CI 1.3-60.5). There were no associations with cirrhosis or HCC.
Conclusion: Certain KIR profiles may promote clearance of hepatitis B surface antigen whilst others predispose to e antigen carriage and high viral load. Larger studies are necessary to quantify the effects of individual KIR genes, haplotypes and KIR/HLA combinations on long-term viral carriage and risk of liver cancer. KIR status could potentially inform antiviral therapy and identify those at increased risk of complications for enhanced surveillance.
abstract_id: PUBMED:38105251
Chronic Rhinosinusitis and Premorbid Gastrointestinal Tract Diseases: A Population-Based Case-Control Study. Objectives: The primary aim was to determine the prevalence of gastrointestinal diseases in patients with chronic rhinosinusitis (CRS), utilizing the National Health Insurance Research Database (NHIRD) in Taiwan. Several studies have supported the existence of distinct immune patterns between the Asian and Western populations in CRS patients. Through the population-based case-control study, we could compare the differences between various regions and provide further treatment strategies for subsequent studies in Asian CRS patients. The secondary aim was to assess whether different types of CRS influence the correlation with specific GI diseases. Understanding how different phenotypes or endotypes of CRS may relate to distinct GI disease patterns could provide valuable insights into the underlying mechanisms and potential shared pathways between these conditions. Methods: We use the NHIRD in Taiwan. Newly diagnosed patients with CRS were selected between January 1, 2001 and December 31, 2017 as the case group, and the controls were defined as individuals without a history of CRS. Patients with CRS were divided into two groups: with nasal polyps and without nasal polyps. We also separated GI tract diseases into four groups based on their different pathophysiologies. Results: This study included 356,245 participants (CRS: 71,249 and control: 284,996). The results showed that CRS was significantly associated with some specific GI tract diseases, including acute/chronic hepatitis B, gastroesophageal reflux disease (GERD) with/without esophagitis, achalasia of cardia, peptic/gastrojejunal ulcer, Crohn's disease, and ulcerative colitis. In addition, when CRS was subcategorized into chronic rhinosinusitis with nasal polyps (CRSwNP) and chronic rhinosinusitis without nasal polyps (CRSsNP), GERD with esophagitis and peptic ulcer were significantly associated with CRSsNP. Conclusions: A significant association between CRS and premorbid GI tract diseases has been identified. Remarkably, GERD with esophagitis and peptic ulcer were significantly associated with CRSsNP. The underlying mechanisms require further investigation and may lead to new treatments for CRS. Researchers can further investigate the mechanisms by referring to our classification method to determine the implications for diagnosis and treatment.
abstract_id: PUBMED:15479440
Development of a nasal vaccine for chronic hepatitis B infection that uses the ability of hepatitis B core antigen to stimulate a strong Th1 response against hepatitis B surface antigen. There are estimated to be 350 million chronic carriers of hepatitis B infection worldwide. Patients with chronic hepatitis B are at risk of liver cirrhosis with associated mortality because of hepatocellular carcinoma and other complications. An important goal, therefore, is the development of an effective therapeutic vaccine against chronic hepatitis B virus (HBV). A major barrier to the development of such a vaccine is the impaired immune response to HBV antigens observed in the T cells of affected patients. One strategy to overcome these barriers is to activate mucosal T cells through the use of nasal vaccination because this may overcome the systemic immune downregulation that results from HBV infection. In addition, it may be beneficial to present additional HBV epitopes beyond those contained in the traditional hepatitis B surface antigen (HbsAg) vaccine, for example, by using the hepatitis B core antigen (HBcAg). This is advantageous because HBcAg has a unique ability to act as a potent Th1 adjuvant to HbsAg, while also serving as an immunogenic target. In this study we describe the effect of coadministration of HBsAg and HBcAg as part of a strategy to develop a more potent and effective HBV therapeutic vaccine.
abstract_id: PUBMED:35062707
The Safety and Efficacy of a Therapeutic Vaccine for Chronic Hepatitis B: A Follow-Up Study of Phase III Clinical Trial. The objective of the present study was to assess the safety and efficacy of a therapeutic vaccine containing both HBsAg and HBcAg (NASVAC) in patients with chronic hepatitis B (CHB) three years after the end of treatment (EOT) as a follow-up of a phase III clinical trial. NASVAC was administered ten times by the nasal route and five times by subcutaneous injection. A total of 59 patients with CHB were enrolled. Adverse events were not seen in any of the patients. Out of the 59 CHB patients, 54 patients exhibited a reduction in HBV DNA, compared with their basal levels. Although all the patients had alanine transaminase (ALT) above the upper limit of normal (>42 IU/L) before the commencement of therapy, the levels of ALT were within the ULN level in 42 patients. No patient developed cirrhosis of the liver. The present study, showing the safety and efficacy of NASVAC 3 years after the EOT, is the first to report follow-up data of an immune therapeutic agent against CHB. NASVAC represents a unique drug against CHB that is safe, of finite duration, can be administered by the nasal route, is capable of reducing HBV DNA and normalizing ALT, and contains hepatic fibrosis.
abstract_id: PUBMED:29458131
Inhibition of hepatitis B virus replication via HBV DNA cleavage by Cas9 from Staphylococcus aureus. Chronic hepatitis B virus (HBV) infection is difficult to cure due to the presence of covalently closed circular DNA (cccDNA). Accumulating evidence indicates that the CRISPR/Cas9 system effectively disrupts HBV genome, including cccDNA, in vitro and in vivo. However, efficient delivery of CRISPR/Cas9 system to the liver or hepatocytes using an adeno-associated virus (AAV) vector remains challenging due to the large size of Cas9 from Streptococcus pyogenes (Sp). The recently identified Cas9 protein from Staphylococcus aureus (Sa) is smaller than SpCas9 and thus is able to be packaged into the AAV vector. To examine the efficacy of SaCas9 system on HBV genome destruction, we designed 5 guide RNAs (gRNAs) that targeted different HBV genotypes, 3 of which were shown to be effective. The SaCas9 system significantly reduced HBV antigen expression, as well as pgRNA and cccDNA levels, in Huh7, HepG2.2.15 and HepAD38 cells. The dual expression of gRNAs/SaCas9 in these cell lines resulted in more efficient HBV genome cleavage. In the mouse model, hydrodynamic injection of gRNA/SaCas9 plasmids resulted in significantly lower levels of HBV protein expression. We also delivered the SaCas9 system into mice with persistent HBV replication using an AAV vector. Both the AAV vector and the mRNA of Cas9 could be detected in the C3H mouse liver cells. Decreased hepatitis B surface antigen (HBsAg), HBV DNA and pgRNA levels were observed when a higher titer of AAV was injected, although this decrease was not significantly different from the control. In summary, the SaCas9 system accurately and efficiently targeted the HBV genome and inhibited HBV replication both in vitro and in vivo. The system was delivered by an AAV vector and maybe used as a novel therapeutic strategy against chronic HBV infection.
abstract_id: PUBMED:29510155
Nasal route favors the induction of CD4+ T cell responses in the liver of HBV-carrier mice immunized with a recombinant hepatitis B surface- and core-based therapeutic vaccine. Immunization routes and number of doses remain largely unexplored in therapeutic vaccination. The aim of the present work is to evaluate their impact on immune responses in naïve and hepatitis B virus (HBV)-carrier mouse models following immunization with a non-adjuvanted recombinant vaccine comprising the hepatitis B surface (HBsAg) and core (HBcAg) antigens. Mice were immunized either by intranasal (i.n.), subcutaneous (s.c.) or simultaneous (i.n. + s.c.) routes. Humoral immunity was detected in all the animal models with the induction of a potent antibody (Ab) response against HBcAg, which was stronger than the anti-HBs response. In the HBV-carrier mouse model, the anti-HBs response was predominantly subtype-specific and preferentially induced by the i.n. route. However, the Ab titers were not sufficient to clear the high concentration of HBsAg present in the sera of these mice. The i.n. route was the most efficacious at inducing cellular immune responses, in particular CD4+ T cells. In naïve mice, cellular responses in spleen were strong and mainly due to CD4+ T cells whereas the CD8+ T-cell response was low. In HBV-carrier mice, high frequencies of HBs-specific CD4+ T cells secreting interferon (IFN)-γ, interleukin (IL)-2 and tumor necrosis factor (TNF)-α were found in liver only after i.n. immunization. Increased frequencies of CD4+ T cells expressing the integrin CD49a in liver suggest a role of nasal route in the cellular homing process. Multiple dose schedules appear to be a prerequisite for protein-based immunization in order to overcome immunotolerance in HBV-carrier mice. These findings provide new avenues for further preclinical and clinical development.
Answer: No, chronic hepatitis B does not appear to increase Staphylococcus nasal carriage. A study aimed at determining the prevalence of Staphylococcus aureus nasal carriage in patients with chronic hepatitis B virus infection found that there was no significant difference in nasal colonization between chronic hepatitis B patients and control patients. The study enrolled 70 chronic hepatitis B patients and 70 control patients, and S. aureus nasal carriage was shown in 15 (21%) of the patients with chronic hepatitis B and 13 (19%) of the control group, which was not statistically significant (P > 0.05). The study concluded that chronic hepatitis B virus infection is not associated with S. aureus nasal carriage (PUBMED:17447373). |
Instruction: Is an integrative laboratory algorithm more effective in detecting alpha-1-antitrypsin deficiency in patients with premature chronic obstructive pulmonary disease than AAT concentration based screening approach?
Abstracts:
abstract_id: PUBMED:24969923
Is an integrative laboratory algorithm more effective in detecting alpha-1-antitrypsin deficiency in patients with premature chronic obstructive pulmonary disease than AAT concentration based screening approach? Introduction: Alpha-1-antitrypsin deficiency (AATD), genetic risk factor for premature chronic obstructive pulmonary disease (COPD), often remains undetected. The aim of our study was to analyse the effectiveness of an integrative laboratory algorithm for AATD detection in patients diagnosed with COPD by the age of 45 years, in comparison with the screening approach based on AAT concentration measurement alone.
Subjects And Methods: 50 unrelated patients (28 males/22 females, age 52 (24-75 years) diagnosed with COPD before the age of 45 years were enrolled. Immunonephelometric assay for alpha-1-antitrypsin (AAT) and PCR-reverse hybridization for Z and S allele were first-line, and isoelectric focusing and DNA sequencing (ABI Prism BigDye) were reflex tests.
Results: AATD associated genotypes were detected in 7 patients (5 ZZ, 1 ZMmalton, 1 ZQ0amersfoort), 10 were heterozygous carriers (8 MZ and 2 MS genotypes) and 33 were without AATD (MM genotype). Carriers and patients without AATD had comparable AAT concentrations (P = 0.125). In majority of participants (48) first line tests were sufficient to analyze AATD presence. In two remaining cases reflex tests identified rare alleles, Mmalton and Q0amersfoort, the later one being reported for the first time in Serbian population. Detection rate did not differ between algorithm and screening both for AATD (P = 0.500) and carriers (P = 0.063).
Conclusion: There is a high prevalence of AATD affected subjects and carriers in a group of patients with premature COPD. The use of integrative laboratory algorithm does not improve the effectiveness of AATD detection in comparison with the screening based on AAT concentration alone.
abstract_id: PUBMED:24713750
Challenging identification of a novel PiISF and the rare PiMmaltonZ α1-antitrypsin deficiency variants in two patients. Objectives: α1-Antitrypsin (AAT) deficiency is associated with an increased risk for lung and liver disease. Identification of AAT deficiency as the underlying cause of these diseases is important in correct patient management.
Methods: AAT deficiency is commonly diagnosed by demonstrating low concentrations of AAT followed by genotype and/or phenotype testing. However, this algorithm may miss novel AAT phenotypes.
Results: We report two cases of AAT deficiency in two patients: a case of the novel phenotype PiISF, misclassified as PiII by phenotyping, and a case of the rare phenotype PiMmaltonZ misclassified as PiM2Z.
Conclusions: These cases highlight the importance of understanding the limitations of a commonly used diagnostic algorithm, use of further gene sequencing in applicable cases, and the potential for underdiagnosis of AAT deficiency in patients with chronic obstructive pulmonary disease.
abstract_id: PUBMED:30374448
A Novel Approach to Screening for Alpha-1 Antitrypsin Deficiency: Inpatient Testing at a Teaching Institution. Chronic obstructive pulmonary disease (COPD) currently affects more than 16 million Americans and it is estimated that roughly 100,000 Americans have undiagnosed, severe alpha-1 antitrypsin deficiency (AATD) (Chest. 2005;128[3]:1179-1186) (Chest. 2002;122[5]:1818-1829). Patients with AATD have an accelerated rate of decline of lung function caused by proteolytic enzymes. The morbidity associated with this inherited disorder is preventable due to the availability of augmentation therapy. Appropriate inpatient screening of patients with COPD for AATD is lacking and most screening is exclusively limited to outpatient pulmonary clinics. Between May 2016 and February 2017, genetic screening was completed on 54 individuals who were admitted with either a former diagnosis of COPD or active COPD exacerbation to Arnot Ogden Medical Center (AOMC) in Elmira, New York. The incorporation of inpatient genetic screening by resident physicians for AATD in COPD patients led to a high rate of screened and newly diagnosed AATD carriers with a variety of AATD genotypes. It is recommended that there should be an expansion of screening for AATD in hospitalized patients with COPD, regardless of age or smoking history.
abstract_id: PUBMED:12550013
Screening program for alpha-1 antitrypsin deficiency in patients with chronic obstructive pulmonary disease, using dried blood spots on filter paper Alpha-1 antitrypsin (AAT) deficiency is an under-diagnosed disease and screening programs have therefore been recommended for patients with chronic obstructive pulmonary disease (COPD). We present the results of the pilot phase of a screening program for AAT deficiency in order to evaluate the technique used, the procedures for transporting samples and the results obtained. Over a period of one month, five centers collected samples from all COPD patients for whom plasma concentrations of AAT or Pi phenotype had not yet been determined. Capillary blood spots were dried on filter paper and then sent by surface mail to a central laboratory for study. An immunonephelometric assay was used to determine AAT and DNA phenotyping was done by use of a Light Cycler. Samples were analyzed from 86 COPD patients (76 men, 10 women) with a mean age of 68.2 years. AAT deficiency was ruled out for 74 patients (86%) who had concentrations above the cutoff established, although one of them was MZ heterozygote by genotype. Among the 12 remaining patients (13.9%), only two also had a Z allele. The rest were individuals with concentrations below the established threshold and no evidence of a Z allele (10 patients, 11.6%). The Z allele frequency observed (3/172; 1.74%) was very similar to that found in the general population. The results of this pilot study allowed us to confirm that the method used to collect samples worked well. The sampling method is applicable, easy and well-accepted by participating physicians. It allowed AAT concentrations and Z allele deficiency to be determined. The method correlates well with standard techniques used for samples in whole blood.
abstract_id: PUBMED:18722101
Screening for alpha1-antitrypsin deficiency in Lithuanian patients with COPD. Background: Alpha1-antitrypsin (AAT) deficiency is an under-diagnosed condition in patients with chronic obstructive pulmonary disease (COPD). The objective of the present screening was to estimate the AAT gene frequency and prevalence and to identify AAT deficiency cases in a large cohort of Lithuanian patients with COPD.
Methods: A nationwide program of AAT deficiency was conducted in 1167 COPD patients, defined according to the GOLD criteria. Patients were collected from outpatient clinics in five different Lithuanian regions (Kaunas, Vilnius, Siauliai, Klaipeda and Alytus). AAT serum concentrations were measured by nephelometry; PI-phenotypes characterized by isoelectric-focusing.
Results: Mean age and FEV(1) were 62.0 (10.3) and 54.7% (10.9), respectively. Ninety-one AAT deficiency genotypes (40 MZ, 39 MS, 1 SS, 3 SZ and 8 ZZ) were identified. Calculated PI(*)S and PI(*)Z frequencies, expressed in per 1000, were 18.8 (95% CI: 13.9-25) and 25.3 (95% CI: 19.4-32.7), respectively. The calculated AAT gene prevalence (Hardy-Weinberg principle) was: 1/1.09 for MM, 1/28 for MS, 1/2814 for SS, 1/20 for MZ, 1/1049 for SZ and 1/1565 for ZZ. Calculated Odds ratio (OR) for PI(*)Z in COPD vs. Lithuanian healthy people was of 1.87 (P=0.004).
Conclusion: The OR for each genotypic class demonstrated a significant increase of MZ, SZ and ZZ genotypes in COPD patients. The results of the present study, with a significant number of ZZ individuals detected, support the general concept of targeted screening for AAT deficiency in countries like Lithuania, with a large population of COPD patients and low awareness among care-givers about this genetic condition.
abstract_id: PUBMED:30775428
Intravenous Alpha-1 Antitrypsin Therapy for Alpha-1 Antitrypsin Deficiency: The Current State of the Evidence. Alpha-1 antitrypsin deficiency (AATD) is a largely monogenetic disorder associated with a high risk for the development of chronic obstructive pulmonary disease (COPD) and cirrhosis. Intravenous alpha-1 antitrypsin (AAT) therapy has been available for the treatment of individuals with AATD and COPD since the late 1980s. Initial Food and Drug Administration (FDA) approval was granted based on biochemical efficacy. Following its approval, the FDA, scientific community and third-party payers encouraged manufacturers of AAT therapy to determine its clinical efficacy. This task has proved challenging because AATD is a rare, orphan disorder comprised of individuals who are geographically dispersed and infrequently identified. In addition, robust clinical trial outcomes have been lacking until recently. This review provides an update on the evidence for the clinical efficacy of intravenous AAT therapy for patients with AATD-related emphysema.
abstract_id: PUBMED:33116457
Trends in Diagnosis of Alpha-1 Antitrypsin Deficiency Between 2015 and 2019 in a Reference Laboratory. Background: Alpha-1 antitrypsin deficiency (AATD) remains largely underdiagnosed despite recommendations of healthcare institutions and programmes designed to increase awareness. The objective was to analyse the trends in AATD diagnosis during the last 5 years in a Spanish AATD reference laboratory.
Methods: This was a retrospective revision of all alpha-1 antitrypsin (AAT) determinations undertaken in our laboratory from 2015 to 2019. We analysed the number of AAT determinations performed and described the characteristics of the individuals tested, as well as the medical specialties and the reasons for requesting AAT determination.
Results: A total of 3507 determinations were performed, of which 5.5% corresponded to children. A significant increase in the number of AAT determinations was observed from 349 in 2015 to 872 in 2019. Among the samples, 57.6% carried an intermediate AATD (50-119 mg/dL) and 2.4% severe deficiency (<50 mg/dL). The most frequent phenotype in severe AATD individuals was PI*ZZ (78.5%), and aminotransferase levels were above normal in around 43% of children and 30% of adults. Respiratory specialists requested the highest number of AAT determinations (31.5%) followed by digestive diseases and internal medicine (27.5%) and primary care physicians (19.7%). The main reason for AAT determination in severe AATD adults was chronic obstructive pulmonary disease (41.7%), but reasons for requesting AAT determination were not reported in up to 41.7% of adults and 58.3% of children.
Conclusion: There is an increase in the frequency of AATD testing despite the rate of AAT determination remaining low. Awareness about AAT is probably increasing, but the reason for testing is not always clear.
abstract_id: PUBMED:25800328
Alpha-1 Antitrypsin Deficiency in COPD Patients: A Cross-Sectional Study. Introduction: Alpha-1 antitrypsin deficiency (AATD) is a genetic disorder associated with early onset chronic obstructive pulmonary disease (COPD) and liver disease. It is also a highly under-diagnosed condition. As early diagnosis could prompt specific interventions such as smoking cessation, testing of family members, genetic counselling and use of replacement therapy, screening programs are needed to identify affected patients.
Objective: To estimate the prevalence of severe AATD in COPD patients by routine dried blood spot testing and subsequent genotyping in patients with alpha-1 antitrypsin (AAT) levels below an established threshold.
Materials And Methods: Cross-sectional study of adult COPD patients attending the Hospital Dr. Antonio Cetrángolo (Buenos Aires, Argentina) between 2009 and 2012. The study consisted of capillary blood collection via finger stick to determine AAT levels, clinical evaluation and lung function tests. Genotype was determined in AAT-deficient patients.
Results: A total of 1,002 patients were evaluated, of whom 785 (78.34%) had normal AAT levels, while low AAT levels were found in 217 (21.66%). Subsequent genotyping of the latter sub-group found: 15 (1.5%, 95% CI 0.75-2.25) patients with a genotype associated with severe AATD, of whom 12 were ZZ (1.2%, 95% CI 0.52-1.87) and 3 SZ (0.3%, 95% CI 0-0.64). The remaining 202 patients were classified as: 29 Z heterozygotes (2.89%, 95% CI 1.86-3.93), 25 S heterozygotes (2.5%, 95% CI 1.53-3.46) and 4 SS (0.4%, 95% CI 0.01-0.79). A definitive diagnosis could not be reached in 144 patients (14.37%, 95% CI 12.2-16.54).
Conclusion: The strategy using an initial serum AAT level obtained by dried blood spot testing and subsequent genotyping was a satisfactory initial approach to a screening program for severe AAT, as a definitive diagnosis was achieved in 87% of patients. However, results were not obtained for logistical reasons in the remaining 13%. This major obstacle may be overcome by the use of dried blood spot phenotyping techniques. We believe this approach for detecting AATD in COPD patients, in compliance with national and international guidelines, is supported by our results.
abstract_id: PUBMED:27282198
Results from a large targeted screening program for alpha-1-antitrypsin deficiency: 2003 - 2015. Background: Alpha-1-antitrypsin deficiency (AATD) is an autosomal codominant inherited disease that is significantly underdiagnosed. We have previously shown that the combination of an awareness campaign with the offer of free diagnostic testing results in the detection of a relevant number of severely deficient AATD patients. The present study provides an update on the results of our targeted screening program (German AAT laboratory, University of Marburg) covering a period from August 2003 to May 2015.
Methods: Diagnostic AATD detection test kits were offered free of charge. Dried blood samples were sent to our laboratory and used for the semiquantitative measurement of the AAT-level (nephelometry) and the detection of the S- or Z-allele (PCR). Isoelectric focusing was performed when either of the initial tests was indicative for at least one mutation. Besides, we evaluated the impact of additional screening efforts and the changes of the detection rate over time, and analysed the relevance of clinical parameters in the prediction of severe AATD.
Results: Between 2003 and 2015, 18,638 testing kits were analysed. 6919 (37.12 %) carried at least one mutation. Of those, we identified 1835 patients with severe AATD (9.82 % of the total test population) including 194 individuals with rare genotypes. Test initiatives offered to an unselected population resulted in a dramatically decreased detection rate. Among clinical characteristics, a history of COPD, emphysema, and bronchiectasis were significant predictors for Pi*ZZ, whereas a history of asthma, cough and phlegm were predictors of not carrying the genotype Pi*ZZ.
Conclusion: A targeted screening program, combining measures to increase awareness with cost-free diagnostic testing, resulted in a high rate of AATD detection. The clinical data suggest that testing should be primarily offered to patients with COPD, emphysema, and/or bronchiectasis.
abstract_id: PUBMED:24661570
Role of alpha-1 antitrypsin in human health and disease. Alpha-1 antitrypsin (AAT) deficiency is an under-recognized hereditary disorder associated with the premature onset of chronic obstructive pulmonary disease, liver cirrhosis in children and adults, and less frequently, relapsing panniculitis, systemic vasculitis and other inflammatory, autoimmune and neoplastic diseases. Severe AAT deficiency mainly affects Caucasian individuals and has its highest prevalence (1 : 2000-1 : 5000 individuals) in Northern, Western and Central Europe. In the USA and Canada, the prevalence is 1: 5000-10 000. Prevalence is five times lower in Latin American countries and is rare or nonexistent in African and Asian individuals. The key to successful diagnosis is by measuring serum AAT, followed by the determination of the phenotype or genotype if low concentrations are found. Case detection allows implementation of genetic counselling and, in selected cases, the application of augmentation therapy. Over the past decade, it has been demonstrated that AAT is a broad-spectrum anti-inflammatory, immunomodulatory, anti-infective and tissue-repair molecule. These new capacities are promoting an increasing number of clinical studies, new pharmacological formulations, new patent applications and the search for alternative sources of AAT (including transgenic and recombinant AAT) to meet the expected demand for treating a large number of diseases, inside and outside the context of AAT deficiency.
Answer: Based on the information provided in the abstracts, an integrative laboratory algorithm does not appear to be more effective in detecting alpha-1-antitrypsin deficiency (AATD) in patients with premature chronic obstructive pulmonary disease (COPD) than a screening approach based solely on AAT concentration measurement. A study analyzing the effectiveness of an integrative laboratory algorithm for AATD detection in patients diagnosed with COPD by the age of 45 years, compared with screening based on AAT concentration alone, found that the use of the integrative laboratory algorithm did not improve the effectiveness of AATD detection (PUBMED:24969923).
The integrative algorithm included immunonephelometric assay for AAT and PCR-reverse hybridization for Z and S allele as first-line tests, with isoelectric focusing and DNA sequencing as reflex tests. The majority of participants (48 out of 50) were sufficiently analyzed for AATD presence with the first-line tests. In two cases, reflex tests identified rare alleles. However, the detection rate did not differ between the algorithm and screening for both AATD and carriers, suggesting that the additional complexity of the integrative algorithm did not confer a significant advantage over AAT concentration measurement alone in this study population.
It is important to note that while the integrative algorithm may not have shown increased effectiveness in this particular study, other studies have highlighted the importance of understanding the limitations of commonly used diagnostic algorithms and the potential for underdiagnosis of AATD in patients with COPD (PUBMED:24713750). Therefore, while the integrative approach may not be superior in terms of detection rates, it could still play a role in identifying complex or rare cases that might be missed by AAT concentration screening alone. |
Instruction: Can dysfunctional HDL explain high coronary artery disease risk in South Asians?
Abstracts:
abstract_id: PUBMED:18255168
Can dysfunctional HDL explain high coronary artery disease risk in South Asians? Background: Coronary artery disease (CAD) is the leading cause of mortality and morbidity in United States, and South Asian immigrants (SAIs) have a higher risk for CAD compare to Caucasians. Traditional risk factors do not completely explain high risk, and some of the unknown risk factors need to be explored. We assessed dysfunctional pro-inflammatory high density lipoprotein (HDL) in SAIs and assessed its association with sub-clinical CAD using carotid intima-media thickness (IMT) as a surrogate marker for atherosclerosis.
Methods: Cross-sectional study on SAIs aged 40-65 years. Sub-clinical CAD was measured using carotid intima media thickness (IMT) as a surrogate marker of atherosclerosis. Dysfunctional or pro-inflammatory HDL was determined by novel cell free assay and HDL inflammatory Index.
Results: Dysfunctional HDL was found in the 50% participants, with HDL-inflammatory index of >or=1.00, suggesting pro-inflammatory HDL (95% CI, 0.8772-1.4333). The prevalence of sub-clinical CAD using carotid IMT (>or=0.80 mm) was seen in 41.4% (95% CI, 0.2347-0.5933). On logistic regression analysis, positive carotid IMT was found to be associated with dysfunctional HDL after adjusting for age, family history of cardiovascular disease, and hypertension (p=0.030).
Conclusions: The measurement of HDL level as well as functionality plays an important role in CAD risk assessment. Those SAIs with dysfunctional HDL and without known CAD can be a high risk group requiring treatment with lipid lowering drugs to reduce future risk of CAD. Further large studies are required to explore association of dysfunctional HDL with CAD and identify additional CAD risk caused by dysfunctional HDL.
abstract_id: PUBMED:25395937
Carotid intima media thickness and low high-density lipoprotein (HDL) in South Asian immigrants: could dysfunctional HDL be the missing link? Introduction: South Asian immigrants (SAIs) in the US exhibit higher prevalence of coronary artery disease (CAD) and its risk factors compared with other ethnic populations. Conventional CAD risk factors do not explain the excess CAD risk; therefore there is a need to identify other markers that can predict future risk of CAD in high-risk SAIs. The objective of the current study is to assess the presence of sub-clinical CAD using common carotid artery intima-media thickness (CCA-IMT), and its association with metabolic syndrome (MS) and pro-inflammatory/dysfunctional HDL (Dys-HDL).
Material And Methods: A community-based study was conducted on 130 first generation SAIs aged 35-65 years. Dys-HDL was determined using the HDL inflammatory index. Analysis was completed using logistic regression and Fisher's exact test.
Results: Sub-clinical CAD using CCA-IMT ≥ 0.8 mm (as a surrogate marker) was seen in 31.46%. Age and gender adjusted CCA-IMT was significantly associated with type 2 diabetes (p = 0.008), hypertension (p = 0.012), high-sensitivity C-reactive protein (p < 0.001) and homocysteine (p = 0.051). Both the presence of MS and Dys-HDL was significantly correlated with CCA-IMT, even after age and gender adjustment. The odds of having Dys-HDL with CCA-IMT were 5 times (95% CI: 1.68, 10.78).
Conclusions: There is a need to explore and understand non-traditional CAD risk factors with a special focus on Dys-HDL, knowing that SAIs have low HDL levels. This information will not only help to stratify high-risk asymptomatic SAI groups, but will also be useful from a disease management point of view.
abstract_id: PUBMED:21804640
Atherothrombosis in South asians: implications of atherosclerotic and inflammatory markers. South Asian immigrants (SAIs) have a higher prevalence of cardiovascular (CV) morbidity and mortality compared with other populations. The major challenge associated with primary prevention of cardiovascular to coronary artery diseases (CAD) in SAIs involves early and accurate detection of CAD in asymptomatic individuals at high cardiovascular risk. Inflammatory processes are now recognized to play a central role in the pathogenesis of atherosclerosis and are found to be associated with future CV risk in a variety of clinical settings. Imaging measures, such as common carotid artery intima-media thickness (CCA-IMT), are being applied as surrogate markers for end-points, such as myocardial infarction (MI) and death in clinical trials. Considering high CAD risk in SAIs and knowing that conventional risk factors may not fully explain the excess CAD risk in this group, studies on the role of CCA-IMT in CAD prediction have been discussed. Also, C-reactive protein (CRP) validity in risk prediction, the role of dysfunctional high density lipoprotein (HDL) as a CAD risk marker in SAIs have been presented.
abstract_id: PUBMED:19183743
Excess coronary artery disease risk in South Asian immigrants: can dysfunctional high-density lipoprotein explain increased risk? Background: Coronary artery disease (CAD) is the leading cause of mortality and morbidity in the United States (US), and South Asian immigrants (SAIs) have a higher risk of CAD compared to Caucasians. Traditional risk factors may not completely explain high risk, and some of the unknown risk factors need to be explored. This short review is mainly focused on the possible role of dysfunctional high-density lipoprotein (HDL) in causing CAD and presents an overview of available literature on dysfunctional HDL.
Discussion: The conventional risk factors, insulin resistance parameters, and metabolic syndrome, although important in predicting CAD risk, may not sufficiently predict risk in SAIs. HDL has antioxidant, antiinflammatory, and antithrombotic properties that contribute to its function as an antiatherogenic agent. Recent Caucasian studies have shown HDL is not only ineffective as an antioxidant but, paradoxically, appears to be prooxidant, and has been found to be associated with CAD. Several causes have been hypothesized for HDL to become dysfunctional, including Apo lipoprotein A-I (Apo A-I) polymorphisms. New risk factors and markers like dysfunctional HDL and genetic polymorphisms may be associated with CAD.
Conclusions: More research is required in SAIs to explore associations with CAD and to enhance early detection and prevention of CAD in this high risk group.
abstract_id: PUBMED:27022456
Lipoprotein abnormalities in South Asians and its association with cardiovascular disease: Current state and future directions. South Asians have a high prevalence of coronary heart disease (CHD) and suffer from early-onset CHD compared to other ethnic groups. Conventional risk factors may not fully explain this increased CHD risk in this population. Indeed, South Asians have a unique lipid profile which may predispose them to premature CHD. Dyslipidemia in this patient population seems to be an important contributor to the high incidence of coronary atherosclerosis. The dyslipidemia in South Asians is characterized by elevated levels of triglycerides, low levels of high-density lipoprotein (HDL) cholesterol, elevated lipoprotein(a) levels, and a higher atherogenic particle burden despite comparable low-density lipoprotein cholesterol levels compared with other ethnic subgroups. HDL particles also appear to be smaller, dysfunctional, and proatherogenic in South Asians. Despite the rapid expansion of the current literature with better understanding of the specific lipid abnormalities in this patient population, studies with adequate sample sizes are needed to assess the significance and contribution of a given lipid parameter on overall cardiovascular risk in this population. Specific management goals and treatment thresholds do not exist for South Asians because of paucity of data. Current treatment recommendations are mostly extrapolated from Western guidelines. Lastly, large, prospective studies with outcomes data are needed to assess cardiovascular benefit associated with various lipid-lowering therapies (including combination therapy) in this patient population.
abstract_id: PUBMED:20300285
Can novel Apo A-I polymorphisms be responsible for low HDL in South Asian immigrants? Coronary artery disease (CAD) is the leading cause of death in the world. Even though its rates have decreased worldwide over the past 30 years, event rates are still high in South Asians. South Asians are known to have low high-density lipoprotein (HDL) levels. The objective of this study was to identify Apolipoprotein A-I (Apo A-I) polymorphisms, the main protein component of HDL and explore its association with low HDL levels in South Asians. A pilot study on 30 South Asians was conducted and 12-h fasting samples for C-reactive protein, total cholesterol, HDL, low-density lipoprotein (LDL), triglycerides, Lipoprotein (a), Insulin, glucose levels, DNA extraction, and sequencing of Apo A-I gene were done. DNA sequencing revealed six novel Apo A-I single nucleotide polymorphisms (SNPs) in South Asians, one of which (rs 35293760, C938T) was significantly associated with low (<40 mg/dl) HDL levels (P = 0.004). The association was also seen with total cholesterol (P = 0.026) and LDL levels (P = 0.032). This pilot work has highlighted some of the gene-environment associations that could be responsible for low HDL and may be excess CAD in South Asians. Further larger studies are required to explore and uncover these associations that could be responsible for excess CAD risk in South Asians.
abstract_id: PUBMED:21305840
South Asians and risk of cardiovascular disease: current insights and trends. Patients from the Indian subcontinent have a distinct cardiovascular risk profile with profound health consequences. South Asians tend to develop more severe coronary artery disease at a younger age, and may also suffer from earlier myocardial infarction and heart failure. The genesis of this risk is multi-factorial. One important culprit is increased insulin resistance, possibly due to recently identified genetic polymorphisms. Another possible explanation is subclinical inflammation and a prothrombotic environment, as evidenced by increased levels of homocysteine, plasminogen activator inhibitor-1, and fibrinogen. The lipid profile of South Asians may play a role, as this population is known to have elevated levels of lipoprotein (a), as well as lower levels of HDL. In addition, this HDL may be dysfunctional, as this population may have a higher prevalence of low levels of HDL2b, as well as an increased preponderance of smaller HDL. Current guidelines for primary and secondary prevention have not reflected our growing insight into the unique characteristics of the South Asian population, and may need to evolve to reflect our knowledge.
abstract_id: PUBMED:33833849
Lipids in South Asians: Epidemiology and Management. Purpose Of Review: This review focuses on lipoprotein abnormalities in South Asians (SA) and addresses risk stratification and management strategies to lower atherosclerotic cardiovascular disease (ASCVD) in this high-risk population.
Recent Findings: South Asians (SAs) are the fastest growing ethnic group in the United States (U.S) and have an increased risk of premature coronary artery disease (CAD). While the etiology may be multifactorial, lipoprotein abnormalities play a key role. SAs have lower low-density lipoprotein cholesterol (LDL-C) compared with Whites and at any given LDL-C level, SA ethnicity poses a higher risk of myocardial infarction (MI) and coronary artery disease (CAD) compared with other non-Asian groups. SAs have lower high-density lipoprotein cholesterol (HDL-C) with smaller particle sizes of HDL-C compared with Whites. SAs also have higher triglycerides than Whites which is strongly related to the high prevalence of metabolic syndrome in SAs. Lipoprotein a (Lp(a)) levels are also higher in SAs compared with many other ethnic groups. This unique lipoprotein profile plays a vital role in the elevated ASCVD risk in SAs. Studies evaluating dietary patterns of SAs in the U.S show high consumption of carbohydrates and saturated fats.
Summary: SAs have a high-risk lipoprotein profile compared with other ethnicities. Lipid abnormalities play a central role in the pathogenesis of CAD in SAs. More studies are needed to understand the true impact of the various lipoproteins and their contribution to increasing ASCVD in SAs. Aggressive lowering of LDL-C in high-risk groups using medications, such as statins, and lifestyle modification including dietary changes is essential in overall CAD risk reduction.
abstract_id: PUBMED:29374801
Premature Coronary Heart Disease in South Asians: Burden and Determinants. Purpose Of Review: While the burden of cardiovascular disease (CVD) is on the decline globally, it is on the rise among South Asians. South Asians are also believed to present early with coronary artery disease (CAD) compared with other ethnicities.
Recent Findings: South Asians have demonstrated a higher burden of premature CAD (PCAD) compared with other ethnicities. These findings are not limited to non-immigrant South Asians but have also been found in immigrant South Asians settled around the world. In this article, we first discuss studies evaluating PCAD among South Asians residing in South Asia and among South Asian immigrants in other countries. We then discuss several traditional risk factors that could explain PCAD in South Asians (diabetes, hypertension, dietary factors, obesity) and lipoprotein-associated risk (low HDL-C levels, higher triglycerides, and elevated apolipoprotein B levels). We then discuss several emerging areas of research among South Asians including the role of dysfunctional HDL, elevated lipoprotein(a), genetics, and epigenetics. Although various risk markers and risk factors of CAD have been identified in South Asians, how they impact therapy is not well-known. PCAD is prevalent in the South Asian population. Large-scale studies are needed to identify how this information can be rationally utilized for early identification of risk among South Asians, and how currently available therapies can mitigate this increased risk.
abstract_id: PUBMED:28462120
High-density lipoprotein cholesterol (HDL-C) in cardiovascular disease: effect of exercise training. Decreases in high-density lipoprotein cholesterol (HDL-C) levels are associated with an increased risk of coronary artery disease (CAD), whereas increased HDL-C levels are related to a decreased risk of CAD and myocardial infarction. Although HDL prevents the oxidation of low-density lipoprotein under normal conditions, it triggers a structural change, inhibiting antiarteriosclerotic and anti-inflammatory functions, under pathological conditions such as oxidative stress, inflammation, and diabetes. HDL can transform into various structures based on the quantitative reduction and deformation of apolipoprotein A1 and is the primary cause of increased levels of dysfunctional HDL, which can lead to an increased risk of CAD. Therefore, analyzing the structure and components of HDL rather than HDL-C after the application of an exercise training program may be useful for understanding the effects of HDL.
Answer: Yes, dysfunctional HDL can potentially explain the high coronary artery disease (CAD) risk in South Asians. Studies have indicated that South Asian immigrants (SAIs) have a higher prevalence of CAD and its risk factors compared to other ethnic populations, and conventional CAD risk factors do not fully account for this excess risk (PUBMED:25395937; PUBMED:21804640; PUBMED:19183743). Dysfunctional or pro-inflammatory high-density lipoprotein (HDL), determined by the HDL inflammatory index, has been associated with sub-clinical CAD in SAIs, as measured by carotid intima-media thickness (IMT), a surrogate marker for atherosclerosis (PUBMED:18255168; PUBMED:25395937).
The presence of metabolic syndrome (MS) and dysfunctional HDL was significantly correlated with carotid artery intima-media thickness (CCA-IMT), even after adjusting for age and gender, suggesting that dysfunctional HDL could be a contributing factor to the increased CAD risk in this population (PUBMED:25395937). Furthermore, SAIs are known to have low HDL levels, and novel Apo A-I polymorphisms have been identified that may be responsible for low HDL levels in this group, which could contribute to their excess CAD risk (PUBMED:20300285).
Additionally, South Asians have a unique lipid profile characterized by elevated levels of triglycerides, low levels of HDL cholesterol, and smaller, dysfunctional, and proatherogenic HDL particles (PUBMED:27022456; PUBMED:21305840). This altered lipid profile, including the presence of dysfunctional HDL, is thought to play a key role in the elevated atherosclerotic cardiovascular disease (ASCVD) risk in South Asians (PUBMED:33833849).
In summary, the evidence suggests that dysfunctional HDL, along with other lipid abnormalities, may be an important factor in explaining the high CAD risk observed in South Asians. Further research is needed to fully understand the impact of these lipoprotein abnormalities and to develop targeted prevention and treatment strategies for this high-risk population (PUBMED:29374801). |
Instruction: Do neck incisions influence nerve deficits after carotid endarterectomy?
Abstracts:
abstract_id: PUBMED:8024456
Do neck incisions influence nerve deficits after carotid endarterectomy? Objectives: To determine whether transverse neck incisions for carotid endarterectomy were associated with a similar or greater incidence of cranial nerve complications when compared with vertical skin incisions, and to assess the patient's perception of the appearance of the incision.
Design: Prospective, but not randomized.
Setting: A university-affiliated tertiary care hospital.
Patients/interventions: Eighty-five consecutive carotid endarterectomy procedures were evaluated prospectively in 80 patients. Although patients were not randomly assigned, consideration was given to having approximately the same number of patients who had carotid endarterectomy performed through transverse neck incision as through vertical neck incision. Forty-four carotid endarterectomies were performed with a vertical incision and 41 procedures were performed with a transverse incision.
Main Outcome Measure: To determine the incidence of cranial nerve dysfunction (primarily nerves VII and XII) after operation.
Results: The incidence of palsies of cranial nerves VII and XII in the two groups was similar; there was no statistical significance (the seventh nerve palsy, 32% transverse vs 25% vertical; the 12th nerve palsy, 15% transverse vs 20% vertical). Seventy-two percent of the deficits had disappeared by the 3- to 6-month follow-up. Patients expressed a clear preference for the transverse incision (P = .04).
Conclusions: Although surgical exposure was simpler with the vertical incision, adequate exposure with the transverse incision was always possible. The incidence of mostly temporary deficits of cranial nerves VII and XII was similar. Patients favored the transverse incision.
abstract_id: PUBMED:11280832
Cranial and neck nerve injuries following carotid endarterectomy intervention. Review of the literature The aim of the study was to establish the operative techniques and findings that can influence the reported incidence of cranial and cervical nerve injuries. Eight main studies comprising 1,616 carotid endarterectomies and published over the period from 1990 to October 2000 were reviewed. There were no statistically significant differences between neck incision (vertical or transverse) and number of injuries. In one study, multiple deficits were observed most frequently in patients treated by the eversion technique (P = 0.2). Additional prospective trials are needed in large numbers of patients to assess the incidence of cranial and cervical nerve injuries. Most injuries are transient and involve the vagus and hypoglossal nerves. A number of factors related to the operation, such as general anaesthesia, eversion technique and the surgeon's experience, may influence the incidence of such injuries. Repeat endarterectomy is associated with a high incidence of cranial and/or cervical nerve injuries. This is extremely important for establishing the real advantage of endovascular angioplasty or stenting of the carotid artery.
abstract_id: PUBMED:37062645
Neck dissection for head and neck malignancies with concurrent carotid endarterectomy. Head and neck malignancies share similar risk factors as carotid artery stenosis and these can often present together. Patients who require external beam radiotherapy are at a higher risk of developing significant worsening stenosis. The workup of the oncologic patient often includes computed tomography, which can reveal underlying carotid artery stenosis, offering an opportunity to address both conditions in one operation and prevent the need for a complicated carotid endarterectomy (CEA) in irradiated and previously operated tissue. It was postulated that these two operations can be combined safely. The surgical protocol, surgical technique, and outcomes of a case series of four patients with head and neck cancer who underwent neck dissection and CEA for carotid artery stenosis during the same operation is presented. CEA was performed safely, simultaneously with neck dissection. CEA did not affect the surgical outcomes or postoperative course of the patients, and no minor or major complications were observed related to this procedure. Carotid endarterectomy performed by a vascular surgeon can be safely combined with oncologic neck dissection in the same procedure to avoid future complications in head and neck cancer patients.
abstract_id: PUBMED:19215658
Carotid artery-hypoglossal nerve relationships in the neck: an anatomical work. Objective: To review the surgical anatomy of the hypoglossal nerve in the neck, analyse its relationship to surrounding structures and offer landmarks to identify the nerve during carotid endarterectomy.
Method: The carotid bifurcation, external carotid artery, internal carotid artery, extracranial part of the hypoglossal nerve, occipital artery, sternocleidomastoid artery and surrounding neurovascular structures were dissected and studied on 15 formalin-fixed adult cadaver heads (30 sides and 15 pairs) via a surgical microscope. Landmarks for the hypoglossal nerve and measurements of its distance from the carotid bifurcation are described. The relationship between the sternocleidomastoid artery and the occipital artery is also described.
Results: The distance from the carotid bifurcation to the point at which the hypoglossal nerve crosses over the internal carotid artery was variable, ranging from 3.89 to 37.03 mm (mean, 20.95 +/- 7.78 mm). The distance from the bifurcation to the point at which the hypoglossal nerve crosses over the external carotid artery ranged from 2.63 to 29.43 mm (mean, 15.33 +/- 7.86 mm; Table 1). The sternocleidomastoid artery had a very characteristic course and close relationship with the hypoglossal nerve. Ascending for a short distance in a cranial direction, it crossed over the hypoglossal nerve and then descended toward the sternocleidomastoid muscle. The sternocleidomastoid artery originated from the occipital artery (33.4%), the external carotid artery-internal carotid artery junction (30%), the external carotid artery itself (30%) or even the lingual artery (6.6%).
Conclusion: The relationship between the hypoglossal nerve and the carotid bifurcation is quite variable, and this explains the vulnerability of the nerve during carotid endarterectomy. The sternocleidomastoid artery is a good landmark for identifying the hypoglossal nerve. If there is exact anatomical knowledge about the relationship between the sternocleidomastoid artery and the hypoglossal nerve, the incidence of nerve injuries during carotid endarterectomy can be minimized.
abstract_id: PUBMED:3278413
Bilateral hypoglossal nerve injury after bilateral carotid endarterectomy. A case of severe bilateral injury to the hypoglossal nerves after two-stage carotid endarterectomy is described. Injury to the hypoglossal nerve occurs in up to 20% of patients undergoing carotid endarterectomy and may result in mild or unnoticed deficits. These injuries must be carefully searched for in patients who will undergo a similar procedure on the opposite side since a bilateral deficit of the hypoglossal nerve is poorly tolerated, causing potentially serious impairment of speech and risk of aspiration.
abstract_id: PUBMED:9707234
Neck dissection with simultaneous carotid endarterectomy. Background: Patients with metastatic neck disease from upper aerodigestive tract carcinomas have an extensive history of tobacco and alcohol abuse. These patients are predisposed to develop atherosclerotic vascular disease.
Objective: An increased incidence and severity of carotid stenosis in patients receiving radiotherapy for head and neck cancers is known. Management of patients with severe carotid stenosis who require surgical treatment of their neck disease has not been described. The authors describe our experience with simultaneous carotid endarterectomy and neck dissection.
Study Design: Prospective data collection.
Methods: From 1991 to 1997 at West Virginia University Hospitals, Morgantown, West Virginia, and State University of New York (SUNY) at Buffalo, three patients with severe carotid stenosis required surgery for metastatic neck disease. Preoperative evaluation revealed a bilateral carotid stenosis greater than 90% in all patients. All patients underwent modified radical neck dissections and simultaneous carotid endarterectomies with saphenous vein grafting. Two patients, one undergoing partial pharyngectomy and laryngectomy and the other a laryngectomy and neck dissection, had coverage of the carotid artery with the myogenous component of a pectoral major graft. One patient had only a neck dissection.
Results: Two patients healed with no local morbidity, no neck recurrence, and a patent carotid artery by Doppler. No strokes were encountered. One patient died of a myocardial infarction.
Conclusion: Severe carotid stenosis that requires revascularization may have endarterectomy performed simultaneously with treatment of head and neck primary with no increase in morbidity.
abstract_id: PUBMED:16425786
Permanent local nerve injuries after carotid endarterectomy Unlabelled: Functional assessment of nerves, especially motor rami of cranial nerves, in patients at postoperative period after carotid endarterectomy (CEA), is particularly important in case of necessity of contralateral carotid artery surgery. Bilateral damage to recurrent laryngeal or hypoglossal nerve is a potentially life-threatening complication. Sensory disturbances due to intraoperative injuries of cervical plexus branches may cause residual discomfort in numerous patients. The aim of this study was the assessment and comparison of frequency of persistent (for more than 12 months postoperatively) manifestations of cranial and cervical nerves injuries in patients after CEA performed either in the standard or eversion technique. A prospective study evaluating cranial and cervical nerves dysfunction after carotid endarterectomies in 144 out of 193 patients operated on from January 1999 until June 2001 was undertaken at the Department of General and Vascular Surgery, Pomeranian Medical University in Szczecin, Poland. CEA was performed in the standard way (i.e. by primary closure) in 92 patients, while 52 others were operated on by eversion technique. Neurological examination with careful functional assessment of cranial nerves: V, VII, IX, X, XII and cervical plexus, was performed according to a standard protocol within two follow-up periods: 3 to 6 and 12 to 18 months after discharge from the hospital.
Results: Dysfunction of recurrent laryngeal nerve and hypoglossal nerve were registered 12 to 18 months after CEA with similar incidence of 1.4%. There was no sign of residual damage to other cranial nerves. Sensory disturbances in the area supplied by cervical plexus, mainly transverse cervical and greater auricular nerve, were diagnosed in 26% of patients. There were no statistically significant differences in local neurological complication rates between patients operated on according to standard and eversion procedures.
Conclusions: 1. Permanent damage of cranial nerves refers to small group of patients after carotid endarterectomy and concern predominantly recurrent laryngeal nerve and hypoglossal nerve. 2. Majority of local neurological complications are injuries to cervical plexus branches. 3. Eversion carotid endarterectomy is not related to higher incidence of local neurological deficits compared to standard procedure.
abstract_id: PUBMED:36348507
High Mini-Skin Incision during Carotid Endarterectomy for Carotid Stenosis. Background: Carotid endarterectomy (CEA) is used to treat carotid stenosis, which is associated with cerebral infarction and may result in neurologic deficits such as stroke, transient ischemic attack (TIA), and local nerve injury. To decrease surgery-related complications and improve patient satisfaction with esthetic outcomes, efforts have been made to minimize incision size instead of using a standard longitudinal incision.
Methods: We performed a retrospective analysis of 151 cases of CEA, of which 110 used conventional incisions and 41 used high mini-skin incisions (HMIs), from March 2015 to December 2021 at a single institution. Short-term (30-day) postoperative results were evaluated for rates of mortality, stroke, TIA, and cranial/cervical nerve injuries. Risk factors for nerve injury were also assessed.
Results: The HMI group showed significantly (p<0.01) shorter operative and clamp times than the conventional group. The HMI group also had significantly shorter incision lengths (5.3±0.9 cm) than the conventional group (11.5±2.8 cm). The rates of stroke, TIA, and death at 30 days were not significantly different between the 2 groups. There was no significant difference in the rate of cranial and cervical nerve injuries, and all injuries were transient. A high lesion level (odds ratio [OR], 9.56; 95% confidence interval [CI], 3.21-28.42; p<0.01) and the clamp time (OR, 1.07; 95% CI, 1.03-1.12; p<0.01) were found to be risk factors for nerve injuries.
Conclusion: Use of the HMI in CEA for carotid stenosis was advantageous for its shorter operative time, shorter internal carotid artery clamp time, reduced neurologic complications, and improved esthetics.
abstract_id: PUBMED:17405594
Literature review of cranial nerve injuries during carotid endarterectomy. Objective: In the recent prospective randomised trials on carotid endarterectomy (CEA), the incidence of cranial nerve injuries (CNI) are reported to be higher than in previously published studies. The objective of this study is to review the incidence of post CEA cranial nerve injury and to discover whether it has changed in the last 25 years after many innovations in vascular surgery.
Methods: Generic terms including carotid endarterectomy, cranial nerve injuries, post CEA complications and cranial nerve deficit after neck surgery were used to search a variety of electronic databases. Based on selection criteria, decisions regarding inclusion and exclusion of primary studies were made. The incidence of CNI before and after 1995 was compared.
Results: We found 31 eligible studies from the literature. Patients who underwent CEA through any approach were included in the study. All patients had cranial nerves examined both before and after surgery. The total number of patients who had CEA before 1995 was 3521 with 10.6% CNI (352 patients) and after 1995, 7324 patients underwent CEA with 8.3% CNI (614 patients). Cranial nerves XII, X and VII were most commonly involved (rarely IX and XI). Statistical analysis showed that the incidence of CNI has decreased (X(2) = 5.89 + 0.74 = 6.63 => p-value = 0.0100).
Conclusion: CNI is still a significant postoperative complication of carotid endarterectomy. Despite increasing use of CEA, the incidence of CNI has decreased probably because of increased awareness of the possibility of cranial nerve damage.
abstract_id: PUBMED:27141598
The role of neck wrinkles for cosmetic effect during carotid artery endarterectomy. Introduction: Minimally invasive procedures in carotid endarterectomy are currently preferred. Smaller skin incisions (transverse or longitudinal) and the mapping of carotid artery bifurcation with Doppler ultrasound are promoted. There is a lack of papers about cosmetic effects and patients' preferences. The aim of this study was to qualify natural skin wrinkles as potential anatomical markers of carotid artery bifurcation and scar camouflage.
Material And Methods: 50 patients underwent carotid endarterectomy in 2013-2014 in the Vascular Surgery Department of the Pomeranian Medical University in Szczecin. The positions of wrinkles were compared with the anatomical location of carotid artery bifurcation with the use of Doppler - Duplex Ultrasound. Patients' preferences and cosmetic effect were also assessed.
Results: 50% of patients declared the cosmetic effect important. Nevertheless, every patient preferred a transverse incision along a wrinkle line. The wound scar was invisible in 76% of cases 3-6 months after surgery. Wrinkles were close, within 1 centimeter, to carotid artery bifurcation in 80% of cases. It was mainly 4-5 cm from the angle of the mandible. Surgical access along wrinkles located 3, 4, 5, 6, 7 cm from the gonial angle was technically easy in 55%, 77%, 79%, 45%, and 21% of cases, respectively. Using ultrasound before surgery allows the planning of the optimal incision in 98% of cases.
Conclusions: Transverse incision hidden in the wrinkles of the neck gives a good cosmetic effect and allows carotid endarterectomy in every case when carotid artery bifurcation is marked prior to surgery.
Answer: Neck incisions do not appear to significantly influence the incidence of nerve deficits after carotid endarterectomy. Studies have shown that there are no statistically significant differences in the incidence of cranial nerve injuries, such as those affecting the hypoglossal and recurrent laryngeal nerves, between different types of neck incisions (vertical or transverse) used during the procedure (PUBMED:8024456, PUBMED:11280832). The incidence of cranial nerve dysfunction, primarily nerves VII (facial nerve) and XII (hypoglossal nerve), was found to be similar between patients who had carotid endarterectomy performed through transverse neck incision and those with vertical neck incision (PUBMED:8024456). Additionally, the majority of these deficits were temporary and resolved by the 3- to 6-month follow-up (PUBMED:8024456).
Moreover, the literature review indicates that the incidence of cranial nerve injuries has decreased over time, possibly due to increased awareness and improved surgical techniques, rather than the type of incision used (PUBMED:17405594). It is also noted that most injuries are transient and involve the vagus and hypoglossal nerves, and factors such as general anesthesia, eversion technique, and the surgeon's experience may influence the incidence of such injuries (PUBMED:11280832).
Furthermore, the use of high mini-skin incisions (HMIs) during carotid endarterectomy has been shown to be advantageous for shorter operative time, reduced neurologic complications, and improved esthetics without a significant difference in the rate of cranial and cervical nerve injuries compared to conventional incisions (PUBMED:36348507). Patients have also expressed a clear preference for transverse incisions, which are associated with better cosmetic outcomes and can be hidden within natural neck wrinkles (PUBMED:8024456, PUBMED:27141598).
In conclusion, the type of neck incision does not seem to significantly impact the incidence of nerve deficits following carotid endarterectomy, and patient preference and cosmetic outcomes may be considered when choosing the incision type. |
Instruction: Community-based stroke information for clients with stroke and their carers: is there congruency between actual and recommended practice?
Abstracts:
abstract_id: PUBMED:18782734
Community-based stroke information for clients with stroke and their carers: is there congruency between actual and recommended practice? Purpose: Information provision is an integral part of poststroke care, and there is a need to identify how to provide it most effectively. Intervention details, such as content, delivery style, format, and timing, are infrequently reported in the literature. This project describes in detail the provision of information to clients with stroke and their carers by community services in Brisbane, Australia, and compares these to current recommendations in the literature.
Method: Fifty-seven metropolitan-based community services were surveyed regarding the content, delivery style, format, and timing of information available to clients with stroke and their carers, using a telephone-administered questionnaire designed for this study.
Results: Services provided information using a range of formats and delivery styles. The most frequently provided topics were information on services and benefits available and practical management strategies. Less than 75% of services provided written information to most of their clients and/or carers. Less than 40% of services considered client and carer input when designing written information materials.
Conclusion: Community services surveyed in this study demonstrated congruency with some, but not all, of the current content, format, and delivery style recommendations in the literature. Areas for improvement are discussed.
abstract_id: PUBMED:21673196
Development of the Champlain primary care cardiovascular disease prevention and management guideline: tailoring evidence to community practice. Problem Addressed: A well documented gap remains between evidence and practice for clinical practice guidelines in cardiovascular disease (CVD) care.
Objective Of Program: As part of the Champlain CVD Prevention Strategy, practitioners in the Champlain District of Ontario launched a large quality-improvement initiative that focused on increasing the uptake in primary care practice settings of clinical guidelines for heart disease, stroke, diabetes, and CVD risk factors.
Program Description: The Champlain Primary Care CVD Prevention and Management Guideline is a desktop resource for primary care clinicians working in the Champlain District. The guideline was developed by more than 45 local experts to summarize the latest evidence-based strategies for CVD prevention and management, as well as to increase awareness of local community-based programs and services.
Conclusion: Evidence suggests that tailored strategies are important when implementing specific practice guidelines. This article describes the process of creating an integrated clinical guideline for improvement in the delivery of cardiovascular care.
abstract_id: PUBMED:20795828
Developing theory and practice: creation of a Community of Practice through Action Research produced excellence in stroke care. Much emphasis is placed on expert knowledge like evidence-based stroke guidelines, with insufficient attention paid to processes required to translate this into delivery of everyday good care. This paper highlights the worth of creating a Community of Practice (CoP) as a means to achieve this. Drawing on findings from a study conducted in 2000-2002 of processes involved in establishing a nationally lauded high quality Stroke Unit, it demonstrates how successful development of a new service was linked to creation of a CoP. Recent literature suggests CoPs have a key in implementing evidence-based practice; this study supports this claim whilst revealing for the first time the practical knowledge and skills required to develop this style of working. Findings indicate that participatory and democratic characteristics of Action Research are congruent with the collaborative approach required for developing a CoP. The study is an exemplar of how practitioner researchers can capture learning from changing practice, thus contributing to evidence-based healthcare with theoretical and practical knowledge. Findings are relevant to those developing stroke services globally but also to those interested in evidence-based practice.
abstract_id: PUBMED:20854588
Information provision to clients with stroke and their carers: self-reported practices of occupational therapists. Background: The literature promotes the use of a wide range of educational materials for teaching and training clients with chronic conditions such as stroke. Client education is a valuable tool used by occupational therapists to facilitate client and carer ability to manage the stroke-affected upper limb. The aim of this study was to identify what information was provided to clients and carers, how this information was delivered, when the information was delivered and the client factors that influenced the method of information provision.
Methods: Convenience and snowball sampling was used to recruit occupational therapists working in stroke. Twenty-eight participants completed the study questionnaire anonymously and their responses were summarised descriptively.
Results: There was a clinically important trend for carers to receive less information than clients. Written and/or verbal information was the favoured method for delivering information related to handling (57%), soft-tissue injury minimisation (46.4%) and oedema management (50%). Information was delivered with decreasing frequency from admission (86%) to discharge (64%). More than 90% of participants indicated that the client's cognitive ability, visual ability, level of communication, primary language and perceptual ability were considered prior to the delivery of information.
Discussion: Participants regularly conveyed information to clients and carers with respect to management of the stroke-affected upper limb. However, an increased emphasis on the development of practical self-management skills, awareness of the impact of personal factors and a timeline for information provision may prove useful.
abstract_id: PUBMED:36141980
Perceived Facilitators and Barriers for Actual Arm Use during Everyday Activities in Community Dwelling Individuals with Chronic Stroke. Background: Our aim was to gain a deeper understanding of perceived predictors for actual arm use during daily functional activities.
Methods: Qualitative study. Semi-structured interview data collected from individuals with chronic stroke living in the community. Codebook thematic analysis used for the data analysis.
Results: Six participants 5-18 years post stroke with moderate to severe UE impairment. Three domains were identified: Person, Context, and Task. Themes for the Person domain included mental (cognitive effort, lack of acceptance), behavioral (routines/habits, self-evaluation), and physical (stiffness/fatigue). Themes for the Context domain included social environment (being in public, presence, and actions of others) and time constraints (being in a hurry). Themes for the task domain included necessity to complete bilateral and unilateral tasks, and safety (increased risk of accidents).
Conclusion: Actual arm use is a complex construct related to the characteristics of the person, contextual environment, and the nature of the task. Facilitators included cognitive effort, routines/habits, self-evaluation, and the perceived necessity. Barriers included in lack of acceptance, stiffness/fatigue, being in public, being in a hurry, and risk of ac-cidents. Social support was both a facilitator and a barrier. Our results support the growing call to adopt a broader biopsychosocial framework into rehabilitation delivery.
abstract_id: PUBMED:19753416
Interprofessional, practice-driven research: reflections of one "community of inquiry" based in acute stroke. Research is often scholarship driven and the findings are then channelled into the practice community on the assumption that it is utilising an evidence-based approach in its service delivery. Because of persisting difficulties in bridging the practice-evidence gap in health care, there has been a call for more active links between researchers and practitioners. The authors were part of an interprofessional research initiative which originated from within an acute stroke clinical community. This research initiative aimed to encourage active participation of health professionals employed in the clinical setting and active collaboration across departments and institutions. On reflection, it appeared that in setting up an interprofessional, practice-driven research collaborative, achievements included the instigation of a community of inquiry and the affording of opportunities for allied health professionals to be actively involved in research projects directly related to their clinical setting. Strategies were put in place to overcome the challenges faced which included managing a demanding and frequently changing workplace, and overcoming differences in professional knowledge, skills and expertise. From our experience, we found that interprofessional, practice-driven research can encourage allied health professionals to bridge the practice-evidence gap, and is a worthwhile experience which we would encourage others to consider.
abstract_id: PUBMED:32269542
Practice Motions Performed During Preperformance Preparation Drive the Actual Motion of Golf Putting. Of the various types of preperformance preparatory behavior that are acquired during motor learning, the effect of a practice motion performed just prior to execution of an actual motion is not yet fully understood. Thus, the present study employed a golf putting task to investigate how a practice motion in the preparation phase would affect the accuracy of motor control in the execution phase and how proficiency would influence this relationship. To examine the impacts on kinematics and final ball position, the velocities of practice strokes made by tour professional and amateur golfers were experimentally manipulated in the following three conditions: the equal condition, which presented a target that was at the same distance during the practice strokes and the actual stroke; the confusing condition, which had two different distances during the practice and actual strokes; and the no condition, which did not include a practice stroke. The results, based on final ball position, indicated that practice strokes in the equal condition were linked with the highest accuracy levels during the actual stroke in both professionals and amateurs. In the confusing condition, regardless of skill level, the velocity of the actual stroke was influenced by a faster or slower stroke during the pre-shot phase. These relationships between the practice and actual strokes imply that the golfers effectively utilized kinesthetic information obtained during the practice strokes as a reference for the actual stroke. Furthermore, the differences in proficiency level indicated that the club head velocity of amateurs in the no condition was significantly faster than in the equal condition. Therefore, the present results imply that the role of a practice stroke may differ between professionals and amateurs.
abstract_id: PUBMED:24079302
Synthesising practice guidelines for the development of community-based exercise programmes after stroke. Background: Multiple guidelines are often available to inform practice in complex interventions. Guidance implementation may be facilitated if it is tailored to particular clinical issues and contexts. It should also aim to specify all elements of interventions that may mediate and modify effectiveness, including both their content and delivery. We conducted a focused synthesis of recommendations from stroke practice guidelines to produce a structured and comprehensive account to facilitate the development of community-based exercise programmes after stroke.
Methods: Published stroke clinical practice guidelines were searched for recommendations relevant to the content and delivery of community-based exercise interventions after stroke. These were synthesised using a framework based on target intervention outcomes, personal and programme proximal objectives, and recommended strategies.
Results: Nineteen guidelines were included in the synthesis (STRIDES; STroke Rehabilitation Intervention-Development Evidence Synthesis). Eight target outcomes, 14 proximal objectives, and 94 recommended strategies were identified. The synthesis was structured to present best practice recommendations in a format that could be used by intervention programme developers. It addresses both programme content and context, including personal factors, service standards and delivery issues. Some recommendations relating to content, and many relating to delivery and other contextual issues, were based on low level evidence or expert opinion. Where opinion varied, the synthesis indicates the range of best practice options suggested in guidelines.
Conclusions: The synthesis may assist implementation of best practice by providing a structured intervention description that focuses on a particular clinical application, addresses practical issues involved in programme development and provision, and illustrates the range of best-practice options available to users where robust evidence is lacking. The synthesis approach could be applied to other areas of stroke rehabilitation or to other complex interventions.
abstract_id: PUBMED:17239062
A quasi-experimental study on a community-based stroke prevention programme for clients with minor stroke. Aim: The aim of this study was to determine the effectiveness of a community-based stroke prevention programme in (1) improving knowledge about stroke; (2) improving self-health-monitoring practice; (3) maintaining behavioural changes when adopting a healthy lifestyle for stroke prevention.
Background: People with minor stroke (or transient ischaemic attack) tend to under-estimate the long-term impact of this on their health. The challenge for nurses is to prevent subsequent strokes by finding ways to promote and sustain appropriate behaviours. Educational intervention is of paramount importance in equipping those at risk with relevant knowledge and self-care strategies for secondary stroke prevention.
Design: This study adopted a quasi-experimental design.
Method: One hundred and ninety subjects were recruited, of whom 147 (77 in the intervention group and 70 in the control group) completed the study. Data were obtained at three time points: baseline (T0); one week after (T1) and three months after (T2) the intervention. The intervention programme consisted of eight weekly two-hour sessions, with the aims of improving the participants' awareness of their own health signals and of actively involving them in self-care management of their own health for secondary stroke prevention.
Results: Significant positive changes were found among participants of the intervention group in the knowledge on stroke warning signs (P < 0.001); treatment seeking response in case of a stroke (P < 0.001); medication compliance (P < 0.001); self blood pressure monitoring (P < 0.001) as well as lifestyle modification of dietary habits (reduction in salted food intake, P = 0.004). No significant improvement was found in walking exercise participation in the intervention group, yet a significant decrease was detected among the control group.
Conclusion: This study found a three-month-sustained effect of positive changes in knowledge and skill from participants who undertook a nurse-led community-based stroke prevention programme.
Relevance To Clinical Practice: Effective educational intervention by professional nurses helped clients integrate their learned knowledge into their real-life practice. This empowering, that is, the taking of responsibility by clients for their own self-care management on a daily basis, affirms that patient education has moved beyond teaching people facts.
abstract_id: PUBMED:30567968
Community-Based Stroke Recognition Education and Response: An Evidence-Based Intervention Project. Background: Stroke has a significant impact on mortality and disability in the United States. This led the aforementioned master's students to create a community-based educational intervention using stroke curriculum from the American Heart Association/American Stroke Association (AHA/ASA).
Purpose: The purpose of this evidence-based intervention project was to examine the effectiveness of public stroke education utilizing the AHA/ASA's Face, Arm, Speech, Time (FAST) curriculum for stroke symptom recognition and response (Jauch, et al., 2013) at three central Connecticut senior centers.
Design: This evidence-based intervention project was based on the theoretical framework of Dorothea Orem's Self-Care Deficit Theory (Petiprin, 2016). Nurses can provide stroke based education to older adults in the community, ultimately empowering participants to recognize and respond to stroke symptoms.
Methods: An educational session on the AHA/ASA FAST curriculum was presented by master's students to groups of senior adults at three different senior centers, followed by a post teach-back session conducted by the students in smaller groups to assess learning. The total number of participants was 62 (n = 62).
Results: The majority of participants (87%) were able to accurately teach back the four components of the FAST curriculum after the educational intervention.
Conclusions: FAST was simple to teach and engaging for participants. Using the FAST curriculum in senior center populations increased the participants' awareness of stroke signs and symptoms, and their willingness to activate the 911 emergency system.
Implications For Practice: Educational interventions in the community with older adults can produce increased awareness of stroke symptoms and willingness to respond when necessary.
Answer: The provision of community-based stroke information for clients with stroke and their carers does not fully align with recommended practices. A study surveying 57 metropolitan-based community services in Brisbane, Australia, found that while services provided information using various formats and delivery styles, there were areas for improvement. Less than 75% of services provided written information to most clients and carers, and less than 40% considered client and carer input when designing written information materials. This indicates a lack of congruency with some of the current content, format, and delivery style recommendations in the literature (PUBMED:18782734).
Moreover, the development of the Champlain Primary Care CVD Prevention and Management Guideline, which was tailored to local practice in the Champlain District of Ontario, suggests that tailored strategies are important when implementing specific practice guidelines. This approach may help bridge the gap between evidence and practice (PUBMED:21673196).
The creation of a Community of Practice (CoP) through Action Research has been highlighted as a means to translate expert knowledge, like evidence-based stroke guidelines, into everyday good care. The participatory and democratic characteristics of Action Research are congruent with the collaborative approach required for developing a CoP, which is essential for implementing evidence-based practice (PUBMED:20795828).
Occupational therapists self-reported that they regularly conveyed information to clients and carers regarding the management of the stroke-affected upper limb. However, there was a trend for carers to receive less information than clients, and the timing of information delivery decreased from admission to discharge. This suggests a need for increased emphasis on the development of practical self-management skills and awareness of the impact of personal factors (PUBMED:20854588).
In summary, while there are efforts to provide stroke information to clients and their carers in community settings, there is a discrepancy between actual practice and recommended guidelines. Improvements are needed in the areas of information content, consideration of client and carer input, and the development of practical self-management skills to achieve better congruency with recommended practices. |
Instruction: Is informal caregiving independently associated with poor health?
Abstracts:
abstract_id: PUBMED:32326797
Accentuate the Positive: The Association Between Informal and Formal Supports and Caregiving Gains. Background: To promote resilience among caregivers for persons living with dementia (PLWDs), we examine how formal and informal supports are linked to caregiving gains, and whether gender moderates the association between supports and gains. Method: Using the National Health and Aging Trends Study and associated National Study of Caregiving, sources of informal (emotional support, practical support, and help with the PLWD) and formal support (respite care, training program, support group) are considered as predictors of caregiving gains, with gender as a moderator of these associations. The sample included 707 caregivers for 502 PLWDs. Results: Greater caregiving gains were significantly associated with emotional support from friends/family (β = 0.14, SE = 0.09, p = .03). Furthermore, attending a caregiver training program was only associated with increased caregiving gains among men (β = 0.11, SE = 0.08, p = .02). Conclusion: Emotional support from family/friends appears particularly consequential for caregiving gains, and male caregivers may benefit most from programs that emphasize skill building.
abstract_id: PUBMED:22875077
Is informal caregiving independently associated with poor health? A population-based study. Background: Providing informal care has been linked with poor health but has not previously been studied across a whole population. We aimed to study the association between informal care provision and self-reported poor health.
Method: We used data from the UK 2001 Census. The relationship between informal caregiving and poor health was modelled using logistic regression, adjusting for age, sex, marital status, ethnicity, economic activity and educational attainment.
Results: We included 44,465,833 individuals free from permanent sickness or disability. 5,451,902 (12.3%) participants reported providing informal care to another person. There was an association between provision of informal caregiving and self-reported poor health; OR 1.100, 95% CI 1.096 to 1.103. This association remained after adjustment for age, sex, ethnic group, marital status, economic activity and educational attainment. The association also increased with the amount of care provided (hours per week).
Conclusions: Around one in eight of the UK population reports that he or she is an informal caregiver. This activity is associated with poor health, particularly in those providing over 20 h care per week.
abstract_id: PUBMED:33407201
Factors associated with caregiving self-efficacy among primary informal caregivers of persons with dementia in Singapore. Background: Informal caregivers of persons with dementia (PWD) are often associated with negative health outcomes. Self-efficacy in dementia caregiving has been reported to have protective effects on caregiver's health. This study aims to examine the factors associated with the domains of caregiving self-efficacy among informal caregivers in Singapore, a country with a rapidly aging population and a 10% prevalence of dementia among older adults.
Methods: Two hundred eighty-two informal caregivers were recruited and data including participant's caregiving self-efficacy, sociodemographic information, perceived social support, positive aspects of caregiving, knowledge of dementia, as well as behavioral and memory problems of care recipients were collected. A confirmatory factor analysis (CFA) was performed for the 3-factor model of the Revised Scale for Caregiving Self-Efficacy (RSCSE), and multiple linear regressions were conducted using the RSCSE subscales as dependent variables.
Results: Our CFA found that the RSCSE 3-factor model proposed by the original scale developer was an acceptable fit among informal caregivers in Singapore. Having established that the 3-factor model of the RSCSE was compatible among our sample, a series of multiple regressions were conducted using each of the factors as a dependent variable. Regressions revealed several factors that were significantly associated with caregiving self-efficacy. Importantly, outlook on life was positively associated to all 3 domains of the RSCSE, while social support was positively associated with self-efficacy in obtaining respite and controlling upsetting thoughts.
Conclusion: The 3-factor model of the RSCSE was found to be an appropriate fit for our sample. Findings from this study elucidated important novel insights into the factors that influences caregiving self-efficacy amongst informal caregivers in Singapore. Crucially, caregivers' outlook on life and social support should be improved in order to enhance their caregiving self-efficacy.
abstract_id: PUBMED:28389976
Distinct impacts of high intensity caregiving on caregivers' mental health and continuation of caregiving. Although high-intensity caregiving has been found to be associated with a greater prevalence of mental health problems, little is known about the specifics of this relationship. This study clarified the burden of informal caregivers quantitatively and provided policy implications for long-term care policies in countries with aging populations. Using data collected from a nationwide five-wave panel survey in Japan, I examined two causal relationships: (1) high-intensity caregiving and mental health of informal caregivers, and (2) high-intensity caregiving and continuation of caregiving. Considering the heterogeneity in high-intensity caregiving among informal caregivers, control function model which allows for heterogeneous treatment effects was used.This study uncovered three major findings. First, hours of caregiving was found to influence the continuation of high-intensity caregiving among non-working informal caregivers and irregular employees. Specifically, caregivers who experienced high-intensity caregiving (20-40 h) tended to continue with it to a greater degree than did caregivers who experienced ultra-high-intensity caregiving (40 h or more). Second, high-intensity caregiving was associated with worse mental health among non-working caregivers, but did not have any effect on the mental health of irregular employees. The control function model revealed that caregivers engaging in high-intensity caregiving who were moderately mentally healthy in the past tended to have serious mental illness currently. Third, non-working caregivers did not tend to continue high-intensity caregiving for more than three years, regardless of co-residential caregiving. This is because current high-intensity caregiving was not associated with the continuation of caregiving when I included high-intensity caregiving provided during the previous period in the regression. Overall, I noted distinct impacts of high-intensity caregiving on the mental health of informal caregivers and that such caregiving is persistent among non-working caregivers who experienced it for at least a year. Supporting non-working intensive caregivers as a public health issue should be considered a priority.
abstract_id: PUBMED:32472979
Caregiving burden among informal caregivers of people with disability. Objective: Chinese informal caregivers experience burden due to their caregiving responsibilities that violate their belief of reciprocal parent-child relationship, but little is known about this burden and coping processes among Chinese. It is believed that internal coping (i.e., self-reliance) and external coping (i.e., seeking help from others) better captured cultural characteristics of coping styles observed among Chinese. Thus, the aim of this study was to estimate the prevalence of mental ill health, identify correlates, investigate the impact of caregiving burden on mental health, and explore the potentially moderating role of two coping strategies.
Design: A purposive sample of 234 informal caregivers of family with intellectual or mental disability in Macao (SAR), China, from August to September 2018 was investigated.
Methods: DASS-21, Caregiving Burden Inventory (CBI), Perceived Difficulty Scale (PD), and a modified Chinese Coping Scale were used. Multiple regression analyses were conducted.
Results: CBI and PD were associated with depression, anxiety, and stress. Whereas internal coping buffered the effect of PD on depression and anxiety, external coping exacerbated the effect of PD on anxiety and the effect of CBI on depression and anxiety.
Conclusion: Poor mental health among caregivers is associated with greater caregiving challenges and burdens. Internal coping helped to buffer but external coping worsened the effect of burdens on mental health outcomes. Interventions that improve internal coping and mental health might be helpful for ageing informal caregivers.
abstract_id: PUBMED:32620034
Factors associated with caregiving appraisal of informal caregivers: A systematic review. Aims And Objectives: To identify factors associated with the caregiving appraisal of informal caregivers.
Background: Caregiving appraisal, the cognitive evaluation of the caregiving situation, is an essential factor in determining positive or negative caregiving outcomes. Identifying factors associated with appraisal is fundamental for designing effective health promotion strategies.
Design: A systematic review.
Methods: PubMed, EMBASE, CINAHL, PsycINFO, Social Sciences Citation Index, Scopus, CNKI and Wanfang Database were searched for papers published from 1984 to December 2018. Keywords related to informal caregivers' caregiving appraisal were used. Cross-sectional and cohort studies were included. The Quality Assessment and Validity Tool for Correlational Studies, and the CASP Cohort Study Checklist were used for quality assessment. Descriptive and narrative synthesis were used to analyse data. Social ecological model was used for classifying the associated factors into different levels. The PRISMA checklist was followed.
Results: Forty studies were included. The quality of the studies was moderate to high. Data were organised into three levels (individual, interpersonal and community level) and categorised into modifiable factors (e.g. patient behavioural problems, caregiver self-efficacy and social support) and nonmodifiable factors (e.g. caregiving duration, gender and education). The majority of studies have investigated the factors at the individual level.
Conclusion: There are inconsistencies in the understanding of caregiving appraisal, and consensus is needed for conceptual clarity. Caregiving appraisal is associated with three levels of factors. These modifiable factors provide evidence for designing evidence-based interventions, and the nonmodifiable factors help identify confounding factors in assessment and appraisal.
Relevance To Clinical Practice: Nurses are the best-placed healthcare professionals to support informal caregivers. The three levels of associated factors and the interactive approaches provide direction for informing clinical nursing practice. They also provide evidence for healthcare researchers and policymakers to develop interventions and theoretical perspectives and to better allocate healthcare resources.
abstract_id: PUBMED:36353526
Longitudinal association between informal unpaid caregiving and mental health amongst working age adults in high-income OECD countries: A systematic review. Background: Informal unpaid caregivers provide most of the world's care needs, experiencing numerous health and wealth penalties as a result. As the COVID-19 pandemic has highlighted, informal care is highly gendered. Longitudinal evidence is needed to assess the causal effect of caregiving on mental health. This review addresses a gap by summarising and appraising the longitudinal evidence examining the association between unpaid caregiving and mental health among working age adults in high-income Organisation for Economic Co-operation and Development (OECD) countries and examining gender differences.
Methods: Six databases were searched (Medline, PsycInfo, EMBASE, Scopus, Web of Science, Econlit) from Jan 1, 2000 to April 1, 2022. Population-based, peer-reviewed quantitative studies using any observational design were included. Population of interest was working age adults. Exposure was any unpaid caregiving, and studies must have had a non-caregiving comparator for inclusion. Mental health outcomes (depression, anxiety, psychological distress/wellbeing) were measurable by validated self-report tools or professional diagnosis. Screening, data extraction and quality assessment (ROBINS-E) were conducted by two reviewers. The study was prospectively registered with PROSPERO (CRD42022312401).
Findings: Of the 4536 records screened; 13 eligible studies (133,426 participants) were included. Overall quality of evidence was moderate. Significant between-study heterogeneity precluded meta-analysis, so albatross and effect-direction plots complement the narrative synthesis. Results indicate a negative association between informal unpaid care and mental health in adults of working age. Importantly, all included studies were longitudinal in design. Where studies were stratified by gender, caregiving had a consistently negative impact on the mental health of women. Few studies examined men but revealed a negative effect where an association was found.
Interpretation: Our review highlights the need to mitigate the mental health risks of caregiving in working age adults. Whilst men need to be included in further scholarship, reducing the disproportionate caregiving load on women is a crucial requirement for policy development.
Funding: Melbourne School of Population and Global Health, Targeted Research Support Grant.
abstract_id: PUBMED:38012592
Physical and mental health of informal caregivers before and during the COVID-19 pandemic in the United States. Background: Informal caregiving, a common form of social support, can be a chronic stressor with health consequences for caregivers. It is unclear how varying restrictions during the COVID-19 pandemic affected caregivers' physical and mental health. This study explores pre-post March 2020 differences in reported days of poor physical and mental health among informal caregivers.
Methods: Data from the 2019/2020 Behavioral Risk Factor Surveillance System survey were used to match, via propensity scores, informal caregivers who provided care during COVID-19 restrictions to those who provided care before the pandemic. Negative binomial weighted regression models estimated incidence rate ratios (IRRs) and differences by demographics of reporting days of poor physical and mental health. A sensitivity analysis including multiple imputation was also performed.
Results: The sample included 9,240 informal caregivers, of whom 861 provided care during the COVID-19 pandemic. The incidence rate for days of poor physical health was 26% lower (p = 0.001) for those who provided care during the COVID-19 pandemic, though the incidence rates for days of poor mental health were not statistically different between groups. Informal caregivers with low educational attainment experienced significantly higher IRRs for days of poor physical and mental health. Younger informal caregivers had a significantly lower IRR for days of poor physical health, but higher IRR for days of poor mental health.
Conclusions: This study contends that the physical and mental health burden associated with informal caregiving in a period of great uncertainty may be heightened among certain populations. Policymakers should consider expanding access to resources through institutional mechanisms for informal caregivers, who may be likely to incur a higher physical and mental health burden during public health emergencies, especially those identified as higher risk.
abstract_id: PUBMED:36186787
Use of dementia and caregiving-related internet resources by informal caregivers: A cross-sectional study. Informal dementia caregivers are at greater risk of experiencing physical and mental health issues as compared to the general population. Internet-based resources may provide accessible opportunities to backing informal dementia caregivers by addressing their information and support needs. This cross-sectional study aims to characterize the use of dementia and caregiving-related internet resources by caregivers and identify variables associated with such use. Primary data were collected through a web-based survey (N = 158). Linear regression models were used to assess the associations of predisposing, enabling, and need variables with the frequency of using the internet for caregiving-related purposes. Most caregivers (93%) have ever used the internet to gather general information about dementia. The frequency of using internet resources was, however, moderate. The multivariable linear regression model suggests that being younger (β = -0.110, p = 0.009), not having a source of support to provide care (β = -2.554, p = 0.012), having used a face-to-face psychosocial intervention at some point (β = 2.731, p = 0.003), being employed (β = 2.558, p = 0.013), and appraising one's own physical health negatively (vs. appraising it as similar; β = 3.591, p < 0.001), are associated with a higher frequency of using caregiving-related internet resources. Our findings confirmed the association of age and perceived health status with caregiving-related internet use reported in other studies. The role of enabling variables as lifetime access to psychosocial interventions and having a usual source of support to provide care was a new addition. This study informs the design and deployment of information and support to dementia caregivers.
abstract_id: PUBMED:33956360
Rural-urban differences in informal caregiving and health-related quality of life. Objectives: Rural-urban health disparities are pervasive among older adults. Rural US locations have a disproportionately high population of older adults, have reduced access to services, and are therefore more reliant on family and friends for care. However, little is known about rural-urban disparities among the 40+ million informal caregivers nationwide. There is a critical need to understand how rural-urban disparities impact caregiving experiences and health-related quality of life (HRQoL). The objectives of this study were to assess rural-urban differences in informal caregiving status, caregiving intensity (caregiving hours/week and types of care provided), and caregiver HRQoL.
Methods: Data were abstracted from the 2018 Behavioral Risk Factor Surveillance System. The primary measure of rural-urban status was "metropolitan status." Associations between rural-urban status and caregiving and rural-urban differences in caregiving intensity and HRQoL were examined using generalized linear models, controlling for confounding and accounting for complex sampling.
Results: Rural respondents were more likely to be caregivers than urban respondents (OR = 1.17, 95% CI: 1.02-1.34). After adjustment for confounders, rural caregivers were more likely than urban caregivers to provide 20 or more hours of caregiving per week (OR 1.38, 95% CI: 1.07-1.77), although the findings for health and HRQoL were somewhat mixed.
Conclusion: Results indicate that rural older adults offer more care than urban counterparts, which may have implications for caregiver health, well-being, and quality of life. These results can inform policies designed to improve caregiver health, and facilitate the translation and adaptation of existing policies, programs, and interventions to address rural caregivers' needs.
Answer: Yes, informal caregiving is independently associated with poor health. A population-based study using data from the UK 2001 Census found that providing informal care is associated with self-reported poor health, with an odds ratio of 1.100, indicating a 10% increase in the likelihood of reporting poor health among caregivers. This association remained significant even after adjusting for various demographic and socioeconomic factors, and the association increased with the amount of care provided (hours per week) (PUBMED:22875077).
Furthermore, informal caregivers of persons with dementia (PWD) are often associated with negative health outcomes, and caregiving self-efficacy has been reported to have protective effects on caregivers' health (PUBMED:33407201). High-intensity caregiving has been found to be associated with a greater prevalence of mental health problems among informal caregivers (PUBMED:28389976). Caregiving burden among informal caregivers of people with disability has also been associated with poor mental health outcomes, and coping strategies such as internal coping can help buffer the effect of caregiving burdens on mental health (PUBMED:32472979).
A systematic review also identified that caregiving appraisal, the cognitive evaluation of the caregiving situation, is associated with both modifiable and nonmodifiable factors that can influence positive or negative caregiving outcomes, further indicating the relationship between caregiving and health (PUBMED:32620034). Additionally, a longitudinal study found a negative association between informal unpaid caregiving and mental health among working-age adults in high-income OECD countries, with caregiving having a consistently negative impact on the mental health of women (PUBMED:36353526).
Lastly, a study exploring pre-post March 2020 differences in reported days of poor physical and mental health among informal caregivers during the COVID-19 pandemic found that the incidence rate for days of poor physical health was lower for those who provided care during the pandemic, but the incidence rates for days of poor mental health were not statistically different between groups. However, certain populations, such as those with low educational attainment and younger caregivers, experienced a higher burden of poor physical and mental health (PUBMED:38012592). |
Instruction: Are changes in worry associated with treatment response in cognitive behavioral therapy for insomnia?
Abstracts:
abstract_id: PUBMED:24215302
Are changes in worry associated with treatment response in cognitive behavioral therapy for insomnia? Aim: Little is known about why some patients respond to cognitive behavioral therapy for insomnia, whereas other patients do not. To understand differences in treatment response, there is a dire need to examine processes of change. The purpose was to investigate the long-term association between insomnia-related worry and outcomes following cognitive behavior therapy for insomnia.
Methods: Sixty patients with early insomnia (3-12 months duration) received group cognitive behavioral therapy for insomnia. At pretreatment and at a 1-year follow-up, the patients completed questionnaires indexing two domains of insomnia-related worry (sleeplessness and health), insomnia severity, anxiety, and depression as well as sleep diaries.
Results: Decreases in the two worry domains were associated with improvements in all of the outcomes, except for sleep onset latency (SOL), at a medium to large level. Reductions in insomnia-related worry were associated with improvements in insomnia severity, wake after sleep onset (WASO), total sleep time (TST), and depression, but not in SOL or anxiety. While reductions in worry for sleeplessness were related to improvements in insomnia severity and TST, decreases in worry for health were associated with enhancements in WASO and depression.
Conclusion: The findings suggest that reductions in insomnia-related worry might be one process route in which cognitive behavioral therapy operates to improve insomnia symptomatology. The results are discussed in relation to theory, clinical implications, and future research.
abstract_id: PUBMED:35260292
Long-Term Effects of Cognitive-Behavioral Therapy and Yoga for Worried Older Adults. Objectives: Cognitive-behavioral therapy (CBT) and yoga decrease worry and anxiety. There are no long-term data comparing CBT and yoga for worry, anxiety, and sleep in older adults. The impact of preference and selection on these outcomes is unknown. In this secondary data analysis, we compared long-term effects of CBT by telephone and yoga on worry, anxiety, sleep, depressive symptoms, fatigue, physical function, social participation, and pain; and examined preference and selection effects.
Design: In this randomized preference trial, participants (N = 500) were randomized to a: 1) randomized controlled trial (RCT) of CBT or yoga (n = 250); or 2) preference trial (selected CBT or yoga; n = 250). Outcomes were measured at baseline and Week 37.
Setting: Community.
Participants: Community-dwelling older adults (age 60+ years).
Interventions: CBT (by telephone) and yoga (in-person group classes).
Measurements: Penn State Worry Questionnaire - Abbreviated (worry);1,2 Insomnia Severity Index (sleep);3 PROMIS Anxiety Short Form v1.0 (anxiety);4,5 Generalized Anxiety Disorder Screener (generalized anxiety);6,7 and PROMIS-29 (depression, fatigue, physical function, social participation, pain).8,9 RESULTS: Six months after intervention completion, CBT and yoga RCT participants reported sustained improvements from baseline in worry, anxiety, sleep, depressive symptoms, fatigue, and social participation (no significant between-group differences). Using data combined from the randomized and preference trials, there were no significant preference or selection effects. Long-term intervention effects were observed at clinically meaningful levels for most of the study outcomes.
Conclusions: CBT and yoga both demonstrated maintained improvements from baseline on multiple outcomes six months after intervention completion in a large sample of older adults.
Trial Registration: www.
Clinicaltrials: gov Identifier NCT02968238.
abstract_id: PUBMED:33107666
Comparison of cognitive-behavioral therapy and yoga for the treatment of late-life worry: A randomized preference trial. Background: The purpose of this study was to compare the effects of cognitive-behavioral therapy (CBT) and yoga on late-life worry, anxiety, and sleep; and examine preference and selection effects on these outcomes.
Methods: A randomized preference trial of CBT and yoga was conducted in community-dwelling adults 60 years or older, who scored 26 or above on the Penn State Worry Questionnaire-Abbreviated (PSWQ-A). CBT consisted of 10 weekly telephone sessions. Yoga consisted of 20 biweekly group yoga classes. The primary outcome was worry (PSWQ-A); the secondary outcomes were anxiety (PROMIS-Anxiety) and sleep (Insomnia Severity Index [ISI]). We examined both preference effects (average effect for those who received their preferred intervention [regardless of whether it was CBT or yoga] minus the average for those who did not receive their preferred intervention [regardless of the intervention]) and selection effect (which addresses the question of whether there is a benefit to getting to select one intervention over the other, and measures the effect on outcomes of self-selection to a specific intervention).
Results: Five hundred older adults were randomized to the randomized trial (125 each in CBT and yoga) or the preference trial (120 chose CBT; 130 chose yoga). In the randomized trial, the intervention effect of yoga compared with CBT adjusted for baseline psychotropic medication use, gender, and race was 1.6 (-0.2, 3.3), p = .08 for the PSWQ-A. Similar results were observed with PROMIS-Anxiety (adjusted intervention effect: 0.3 [-1.5, 2.2], p = .71). Participants randomized to CBT experienced a greater reduction in the ISI compared with yoga (adjusted intervention effect: 2.4 [1.2, 3.7], p < .01]). Estimated in the combined data set (N = 500), the preference and selection effects were not significant for the PSWQ-A, PROMIS-Anxiety, and ISI. Of the 52 adverse events, only two were possibly related to the intervention. None of the 26 serious adverse events were related to the study interventions.
Conclusions: CBT and yoga were both effective at reducing late-life worry and anxiety. However, a greater impact was seen for CBT compared with yoga for improving sleep. Neither preference nor selection effects was found.
abstract_id: PUBMED:24831251
Speed and trajectory of changes of insomnia symptoms during acute treatment with cognitive-behavioral therapy, singly and combined with medication. Objectives: To examine the speed and trajectory of changes in sleep/wake parameters during short-term treatment of insomnia with cognitive-behavioral therapy (CBT) alone versus CBT combined with medication; and to explore the relationship between early treatment response and post-treatment recovery status.
Methods: Participants were 160 adults with insomnia (mean age, 50.3 years; 97 women, 63 men) who underwent a six-week course of CBT, singly or combined with 10 mg zolpidem nightly. The main dependent variables were sleep onset latency, wake after sleep onset, total sleep time, sleep efficiency, and sleep quality, derived from sleep diaries completed daily by patients throughout the course of treatment.
Results: Participants treated with CBT plus medication exhibited faster sleep improvements as evidenced by the first week of treatment compared to those receiving CBT alone. Optimal sleep improvement was reached on average after only one week for the combined treatment compared to two to three weeks for CBT alone. Early treatment response did not reliably predict post-treatment recovery status.
Conclusions: Adding medication to CBT produces faster sleep improvement than CBT alone. However, the magnitude of early treatment response is not predictive of final response after the six-week therapy. Additional research is needed to examine mechanisms involved in this early treatment augmentation effect and its impact on long-term outcome.
abstract_id: PUBMED:33719795
Examining Patient Feedback and the Role of Cognitive Arousal in Treatment Non-response to Digital Cognitive-behavioral Therapy for Insomnia during Pregnancy. Objective: Insomnia affects over half of pregnant and postpartum women. Early evidence indicates that cognitive-behavioral therapy for insomnia (CBTI) improves maternal sleep and mood. However, standard CBTI may be less efficacious in perinatal women than the broader insomnia population. This study sought to identify patient characteristics in a perinatal sample associated with poor response to CBTI, and characterize patient feedback to identify areas of insomnia therapy to tailor for the perinatal experience.
Participants: Secondary analysis of 46 pregnant women with insomnia symptoms who were treated with digital CBTI in a randomized controlled trial.
Methods: We assessed insomnia, cognitive arousal, and depression before and after prenatal treatment, then 6 weeks postpartum. Patients provided feedback on digital CBTI.
Results: Residual cognitive arousal after treatment was the most robust factor associated with treatment non-response. Critically, CBTI responders and non-responders differed on no other sociodemographic or pretreatment metrics. After childbirth, short sleep (<6 hrs/night) was associated with maternal reports of poor infant sleep quality. Patient feedback indicated that most patients preferred online treatment to in-person treatment. Although women described digital CBTI as convenient and helpful, many patients indicated that insomnia therapy would be improved if it addressed sleep challenges unique to pregnancy and postpartum. Patients requested education on maternal and infant sleep, flexibility in behavioral sleep strategies, and guidance to manage infant sleep.
Conclusions: Modifying insomnia therapy to better alleviate refractory cognitive arousal and address the changing needs of women as they progress through pregnancy and early parenting may increase efficacy for perinatal insomnia.Name: Insomnia and Rumination in Late Pregnancy and the Risk for Postpartum DepressionURL: clinicaltrials.govRegistration: NCT03596879.
abstract_id: PUBMED:36764782
Acupuncture as an Adjunct Treatment to Cognitive-Behavioral Therapy for Insomnia. Cognitive-behavioral therapy for insomnia (CBT-I) is the main recommended treatment for patients presenting with insomnia; however, the treatment is not equally effective for all, and several factors can contribute to a diminished treatment response. The rationale for combining CBT-I treatment with acupuncture is explored, and evidence supporting its use in treating insomnia and related comorbidities is discussed. Practical, regulatory, and logistical issues with implementing a combined treatment are examined, and future directions for research are made. Growing evidence supports the effectiveness of acupuncture in treating insomnia and comorbid conditions, and warrants further investigation of acupuncture as an adjunct to CBT-I.
abstract_id: PUBMED:36764788
Acceptance and Commitment Therapy as an Adjunct or Alternative Treatment to Cognitive Behavioral Therapy for Insomnia. Although cognitive behavioral therapy for insomnia (CBT-I) is an effective treatment of insomnia, difficulties exist with adherence to recommendations and premature discontinuation of treatment does occur. The current article aims to review existing research on acceptance and commitment therapy (ACT)-based interventions, demonstrate differences and similarities between ACT for insomnia and CBT-I, and describe treatment components and mechanisms of ACT that can be used to treat insomnia disorder.
abstract_id: PUBMED:31317911
Cognitive-behavioral therapy and pharmacotherapy for chronic insomnia Cognitive-behavioral therapy for insomnia (CBT-I) is the treatment of choice for chronic insomnia. Together with advantages it has such limitations like shortage of trained staff and low response rate. That is why the alternative methods of CBT-I induce high interest: bibliotherapy, phone psychotherapy, brief behavioral therapy and online-CBT-I. Hypnotics administration is recommended as adjuvant to extent the CBT-I effect. It may also be used as monotherapy when CBT-I is unavailable.
abstract_id: PUBMED:30512815
Cognitive-behavioral approach to treating insomnia Cognitive-Behavioral approach to treating insomnia Cognitive behavioral therapy (CBT) for insomnia is a brief and structured therapeutic intervention aimed at changing maladaptive sleep habits and unhelpful sleep-related beliefs and attitudes that perpetuate insomnia. The main therapeutic components include restriction of time spent in bed, stimulus control procedures, cognitive restructuring of beliefs and perceptions related to insomnia and its perceived consequences, and sleep hygiene education. Based upon a solid conceptual framework and supported by strong empirical evidence, CBT is now recognized and endorsed in clinical practice guidelines of medical and sleep societies as the first-line treatment for adults with chronic insomnia. Although it requires more time both from patients and clinicians, CBT produces clinically meaningful and more durable changes in sleep and associated insomnia symptoms than those obtained with medication treatment used singly.
abstract_id: PUBMED:29157588
Sleep spindles may predict response to cognitive-behavioral therapy for chronic insomnia. Background: While cognitive-behavioral therapy for insomnia constitutes the first-line treatment for chronic insomnia, only few reports have investigated how sleep architecture relates to response to this treatment. In this pilot study, we aimed to determine whether pre-treatment sleep spindle density predicts treatment response to cognitive-behavioral therapy for insomnia.
Methods: Twenty-four participants with chronic primary insomnia participated in a 6-week cognitive-behavioral therapy for insomnia performed in groups of 4-6 participants. Treatment response was assessed using the Pittsburgh Sleep Quality Index and the Insomnia Severity Index measured at pre- and post-treatment, and at 3- and 12-months' follow-up assessments. Secondary outcome measures were extracted from sleep diaries over 7 days and overnight polysomnography, obtained at pre- and post-treatment. Spindle density during stage N2-N3 sleep was extracted from polysomnography at pre-treatment. Hierarchical linear modeling analysis assessed whether sleep spindle density predicted response to cognitive-behavioral therapy.
Results: After adjusting for age, sex, and education level, lower spindle density at pre-treatment predicted poorer response over the 12-month follow-up, as reflected by a smaller reduction in Pittsburgh Sleep Quality Index over time. Reduced spindle density also predicted lower improvements in sleep diary sleep efficiency and wake after sleep onset immediately after treatment. There were no significant associations between spindle density and changes in the Insomnia Severity Index or polysomnography variables over time.
Conclusion: These preliminary results suggest that inter-individual differences in sleep spindle density in insomnia may represent an endogenous biomarker predicting responsiveness to cognitive-behavioral therapy. Insomnia with altered spindle activity might constitute an insomnia subtype characterized by a neurophysiological vulnerability to sleep disruption associated with impaired responsiveness to cognitive-behavioral therapy.
Answer: Yes, changes in worry are associated with treatment response in cognitive behavioral therapy for insomnia (CBT-I). Research has shown that decreases in insomnia-related worry are associated with improvements in various outcomes following CBT-I. Specifically, reductions in worry about sleeplessness and health were linked to improvements in insomnia severity, wake after sleep onset (WASO), total sleep time (TST), and depression, although not in sleep onset latency (SOL) or anxiety (PUBMED:24215302). This suggests that a reduction in insomnia-related worry might be a process through which CBT-I operates to improve insomnia symptoms.
Furthermore, other studies have indicated that both CBT-I and other interventions, such as yoga, can lead to sustained improvements in worry, anxiety, sleep, and other related symptoms in older adults (PUBMED:35260292; PUBMED:33107666). These findings support the notion that addressing worry and cognitive arousal is an important component of effective insomnia treatment. However, it is also noted that standard CBT-I may be less efficacious in certain populations, such as perinatal women, and that modifications to the therapy that address unique challenges and cognitive arousal may increase its efficacy (PUBMED:33719795).
Additionally, while CBT-I is effective, it is not equally effective for all individuals, and factors such as sleep spindle density may predict treatment response, with lower spindle density predicting poorer response (PUBMED:29157588). This highlights the potential for individual differences in treatment response and the need for personalized approaches to CBT-I. |
Instruction: Does brain temperature correlate with intracranial pressure?
Abstracts:
abstract_id: PUBMED:18362771
Does brain temperature correlate with intracranial pressure? Objective: A positive correlation between brain temperature and intracranial pressure (ICP) has been proposed for patients under intensive care conditions.
Design And Methods: Data were recorded at 5-minute intervals in patients under ICP monitoring conditions. Brain temperature: combined ICP/temperature probe (Raumedic), core temperature: indwelling urinary catheter with temperature probe (Rüsch). The correlation between brain temperature and ICP was assessed by computing an estimated mean correlation coefficient (re) and by a time series analysis.
Patients: Forty consecutive neurosurgical patients receiving intensive care therapy for trauma, cerebrovascular malformation, and spontaneous hemorrhage were studied. A total of 48,892 measurements (9778 h) were analyzed. No additional interventions were performed.
Results: The median ICP was 14 mm Hg (range: -13 to 167). The brain temperature (median 38 degrees C; range 23.2 to 42.1) was 0.3 degrees C (range: -3.6 to 2.6) higher than the core temperature (median 37.7 degrees C; range 16.6 to 42.0), P<0.001. The mean Pearson correlation between ICP and brain temperature in all patients was re=0.13 (P<0.05); the time series analysis (assuming a possible lagged correlation between ICP and brain temperature) revealed a mean correlation of 0.05+/-0.25 (P<0.05). Both correlation coefficients indicate that any relationship between brain temperature and ICP accounts for less than 2% of the variability [coefficient of determination (r)<0.02].
Conclusions: These data do not support the notion of a clinically useful correlation between brain temperature and ICP.
abstract_id: PUBMED:29862680
Development of the Intracranial Pressure and Intracranial Temperature Monitor and Experimental Research on Trauma Objectives: A set of intracranial pressure and intracranial temperature monitor was developed. Moreover, it was verified to be effective in the monitoring of intracranial parameters by designed experiments.
Methods: The intracranial pressure and intracranial temperature monitor was tested in the water bath comparing with the Codman intracranial pressure monitor and mercury thermometers. As well, the monitor was applied in the monitoring of rat brain edema in vivo.
Results: The maximum error is less than 266.64 Pa in the intracranial pressure measurement compared to the Codman intracranial pressure monitor, and the maximum error is less than 0.3 oC in the temperature measurement according to mercury thermometers. Furthermore, the monitor could real-time obtain the intracranial pressure and intracranial temperature in the brain edema in vivo.
Conclusions: The intracranial pressure and intracranial temperature monitor realizes the real-time in vivo monitoring of intracranial pressure and intracranial temperature. The measurement accuracy meets the acquirement of doctors. The instrument has potential for clinical use.
abstract_id: PUBMED:34331210
Brain Temperature Influences Intracranial Pressure and Cerebral Perfusion Pressure After Traumatic Brain Injury: A CENTER-TBI Study. Background: After traumatic brain injury (TBI), fever is frequent. Brain temperature (BT), which is directly linked to body temperature, may influence brain physiology. Increased body and/or BT may cause secondary brain damage, with deleterious effects on intracranial pressure (ICP), cerebral perfusion pressure (CPP), and outcome.
Methods: Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI), a prospective multicenter longitudinal study on TBI in Europe and Israel, includes a high resolution cohort of patients with data sampled at a high frequency (from 100 to 500 Hz). In this study, simultaneous BT, ICP, and CPP recordings were investigated. A mixed-effects linear model was used to examine the association between different BT levels and ICP. We additionally focused on changes in ICP and CPP during the episodes of BT changes (Δ BT ≥ 0.5 °C lasting from 15 min to 3 h) up or downward. The significance of ICP and CPP variations was estimated with the paired samples Wilcoxon test (also known as Wilcoxon signed-rank test).
Results: Twenty-one patients with 2,435 h of simultaneous BT and ICP monitoring were studied. All patients reached a BT of 38 °C and experienced at least one episode of ICP above 20 mm Hg. The linear mixed-effects model revealed an association between BT above 37.5 °C and higher ICP levels that was not confirmed for lower BT. We identified 149 episodes of BT changes. During BT elevations (n = 79) ICP increased, whereas CPP was reduced; opposite ICP and CPP variations occurred during episodes of BT reduction (n = 70). All these changes were of moderate clinical relevance (increase of ICP of 4.5 and CPP decrease of 7.5 mm Hg for BT rise, and ICP reduction of 1.7 and CPP elevation of 3.7 mm Hg during BT defervescence), even if statistically significant (p < 0.0001). It has to be noted, however, that a number of therapeutic interventions against intracranial hypertension was documented during those episodes.
Conclusions: Patients after TBI usually develop BT > 38 °C soon after the injury. BT may influence brain physiology, as reflected by ICP and CPP. An association between BT exceeding 37.5 °C and a higher ICP was identified but not confirmed for lower BT ranges. The relationship between BT, ICP, and CPP become clearer during rapid temperature changes. During episodes of temperature elevation, BT seems to have a significant impact on ICP and CPP.
abstract_id: PUBMED:18186416
The impact of brain temperature and core temperature on intracranial pressure and cerebral perfusion pressure. Hyperthermia has been demonstrated to increase neuronal injury when present during or after an acute brain injury. The assumption that core temperature equals brain temperature exists. If the temperature of an injured brain is higher than core temperature, episodes of neural hyperthermia may go undetected. The objectives of this study were to (1) determine if differences exist between brain temperature and core temperature in subjects with acute neurological injuries in both normothermic and febrile states and (2) investigate the impact of brain and core temperatures on intracranial pressure (ICP) and cerebral perfusion pressure (CPP). The study was conducted through a retrospective chart audit of patients age 18 years or older admitted to a level I trauma center with a diagnosis of brain injury whose condition warranted placement of a pulmonary artery catheter (which measured core temperature) and an intraventricular catheter (which measured brain temperature). Thirty-one charts contained complete data; nine charts provided partial data. Mean brain temperature (100.8 degrees F, SD = 0.69) was found to be significantly higher than mean core temperature (100.2 degrees F, SD = 0.74; p = .00). Brain temperature means were hyperthermic (> or = 100.9 degrees F), while matching core temperatures were normothermic in almost one-third of the subjects. There was no significant difference found between hyperthermic ICP or CPP and normothermic ICP or CPP determined by brain or core temperature. No significant correlation was found between temperature and intracranial dynamics. Future research is needed with prospectively collected data of adequate sample size to continue to investigate the impact of core and brain temperature on the intracranial dynamics of ICP and CPP.
abstract_id: PUBMED:12493106
Optimal temperature for the management of severe traumatic brain injury: effect of hypothermia on intracranial pressure, systemic and intracranial hemodynamics, and metabolism. Objective: We studied the effect of hypothermia on intracranial pressure, systemic and intracranial hemodynamics, and metabolism in patients with severe traumatic brain injury to clarify the optimal temperature for hypothermia, with a view toward establishing the proper management techniques for such patients.
Methods: The study was performed in 31 patients with severe head injury (Glasgow Coma Scale score as high as 5). All patients were sedated, paralyzed, ventilated, and cooled to 33 degrees C. Brain temperature, core temperature, intracranial pressure, cerebral perfusion pressure, jugular venous oxygen saturation, mixed venous oxygen saturation, cardiac output, oxygen delivery, oxygen consumption, and resting energy expenditure were monitored continuously.
Results: Intracranial pressure decreased significantly at brain temperatures below 37 degrees C and decreased more sharply at temperatures 35 to 36 degrees C, but no differences were observed at temperatures below 35 degrees C. Cerebral perfusion pressure peaked at 35.0 to 35.9 degrees C and decreased with further decreases in temperature. Jugular venous oxygen saturation and mixed venous oxygen saturation remained in the normal range during hypothermia. Resting energy expenditure and cardiac output decreased progressively with hypothermia. Oxygen delivery and oxygen consumption decreased to abnormally low levels at rectal temperatures below 35 degrees C, and the correlation between them became less significant at less than 35 degrees C than that when temperatures were 35 degrees C or higher. Brain temperature was consistently higher than rectal temperature by 0.5 +/- 0.3 degrees C.
Conclusion: These results suggest that, after traumatic brain injury, decreasing body temperature to 35 to 35.5 degrees C can reduce intracranial hypertension while maintaining sufficient cerebral perfusion pressure without cardiac dysfunction or oxygen debt. Thus, 35 to 35.5 degrees C seems to be the optimal temperature at which to treat patients with severe traumatic brain injury.
abstract_id: PUBMED:3956641
Effects of temperature and elevated intracranial pressure on peripheral and brain stem auditory responses in dogs. Far-field recordings of central (P2 through P4) and peripheral (cochlear microphonic; and compound action potential of the eighth nerve) auditory responses were used to assess changes in auditory function resulting from elevated intracranial pressure. Normative data for eight dogs were obtained. The relationship between response latency and core temperature was examined. A mean slope of -0.17 ms/degrees C resulted for the temperature range of 35.0 to 40.0 degrees C. Systemic arterial pressure was measured in order to identify the cerebral ischemic response. Responses were not altered significantly unless the intracranial pressure approached within 15 to 30 mm Hg of mean systemic arterial pressure. Changes in the response consisted of both enhancement and deterioration during intracranial pressure elevation and were accompanied by increases in systemic arterial pressure during that elevation. Supernormal amplitudes of the action potential also occurred during recovery periods. Results suggest that: (i) during elevated intracranial pressure, changes in both central and peripheral auditory function result from ischemia rather than pressure-induced distortion of the cochlea or central neural assemblies. (ii) Far-field auditory responses may include an O2-dependent cochlear microphonic. (iii) An unknown process causing enhancement of central and peripheral neural responses exists and operates in connection with intracranial hypertension. Possible mechanisms underlying enhancement of response components are discussed.
abstract_id: PUBMED:16098539
A mathematical model of intracranial pressure dynamics for brain hypothermia treatment. Brain hypothermia treatment is used as a neuroprotectant to decompress the elevated intracranial pressure (ICP) in acute neuropatients. However, a quantitative relationship between decompression and brain hypothermia is still unclear, this makes medical treatment difficult and ineffective. The objective of this paper is to develop a general mathematical model integrating hemodynamics and biothermal dynamics to enable a quantitative prediction of transient responses of elevated ICP to ambient cooling temperature. The model consists of a lumped-parameter compartmental representation of the body, and is based on two mechanisms of temperature dependence encountered in hypothermia, i.e. the van't Hoff's effect of metabolism and the Arrhenius' effect of capillary filtration. Model parameters are taken from the literature. The model is verified by comparing the simulation results to population-averaged data and clinical evidence of brain hypothermia treatment. It is possible to assign special model inputs to mimic clinical maneuvers, and to adjust model parameters to simulate pathophysiological states of intracranial hypertension. Characteristics of elevated ICP are quantitatively estimated by using linear approximation of step response with respect to ambient cooling temperature. Gain of about 4.9 mmHg degrees C(-1), dead time of about 1.0 h and a time constant of about 9.8h are estimated for the hypothermic decompression. Based on the estimated characteristics, a feedback control of elevated ICP is introduced in a simulated intracranial hypertension of vasogenic brain edema. Simulation results suggest the possibility of an automatic control of the elevated ICP in brain hypothermia treatment.
abstract_id: PUBMED:32195898
Automated Pupillary Measurements Inversely Correlate With Increased Intracranial Pressure in Pediatric Patients With Acute Brain Injury or Encephalopathy. Objectives: The purpose of this study was to determine correlation and temporal association between automated pupillary measurements and intracranial pressure in pediatric patients with brain injury or encephalopathy requiring intracranial pressure monitoring. We hypothesized that abnormal pupillary measurements would precede increases in intracranial pressure.
Design: A prospective cohort study was performed. Automated pupillometry measurements were obtained at the same frequency as the patients' neurologic assessments with concurrent measurement of intracranial pressure, for up to 72 hours. Pupillary measurements and the Neurologic Pupil index, an algorithmic score that combines measures of pupillary reactivity, were assessed for correlation with concurrent and future intracranial pressure measurements.
Setting: Single-center pediatric quaternary ICU, from July 2017 to October 2018.
Patients: Pediatric patients 18 years or younger with a diagnosis of acute brain injury or encephalopathy requiring an intracranial pressure monitor.
Interventions: None.
Measurements And Main Results: Twenty-eight patients were analyzed with a total of 1,171 intracranial pressure measurements. When intracranial pressure was elevated, the Neurologic Pupil index, percent change in pupillary size, constriction velocity, and dilation velocity were significantly lower than when intracranial pressure was within normal range (p < 0.001 for all). There were mild to moderate negative correlations between concurrent intracranial pressure and pupillary measurements. However, there was an inconsistent pattern of abnormal pupillary measurements preceding increases in intracranial pressure; some patients had a negative association, while others had a positive relationship or no relationship between Neurologic Pupil index and intracranial pressure.
Conclusions: Our data indicate automated assessments of pupillary reactivity inversely correlate with intracranial pressure, demonstrating that pupillary reactivity decreases as intracranial pressure increases. However, a temporal association in which abnormal pupillary measurements precede increases in intracranial pressure was not consistently observed. This work contributes to limited data available regarding automated pupillometry in neurocritically ill patients, and the even more restricted subset available in pediatrics.
abstract_id: PUBMED:9740935
Temperature of the cerebrovenous blood in a model of increased intracranial pressure. Unlabelled: Hypothermia has a considerable protective effect during brain ischemia. On the other hand small increases of brain temperature have a remarkable effect on the exacerbation of neurological damage following an ischemic event. Hyperthermia of the brain tissue after severe head injury is described. The effect of acutely increased intracranial pressure on cerebrovenous blood temperature is not described yet. The aim of this study was to investigate the relationship between temperature in the cerebrovenous compartment (Tcv) and changes of the CPP in an animal model of raised intracranial pressure.
Methods: A thermocouple was inserted in the sagittal sinus in 9 pigs under general anesthesia. By stepwise inflating a supracerebral and infratentorial placed balloon catheter intracranial pressure (ICP) was increased and CPP concomitantly decreased. The central body temperature was measured simultaneously in the abdominal aorta (Ta) with a second thermocouple.
Results: In our model th Tcv was lower than Ta at the beginning of the ICP increase. The mean difference between Ta and Tcv, (delta Ta-cv) was 0.86 degree C (+/- 0.44) prior to ICP increase and 1.19 degrees C (0.58) at the maximum ICP increase. Thus, delta Tav increased during CPP reduction. This relation was represented by an adjusted R(square) of r2 = 0.89 (p < 0.001).
Conclusions: The CPP decrease, caused by an increasing ICP, results in changes of the cerebrovenous blood temperature. Interpreting the present results the experimental situation of a relative colder cerebral compartment in comparison to the central body temperature has to be considered. However, the results imply, that simultaneous temperature monitoring of the central body temperature and the cerebrovenous blood temperature is an additional source of information about relative changes of the CBF.
abstract_id: PUBMED:11561026
Brain temperature, body core temperature, and intracranial pressure in acute cerebral damage. Objectives: To assess the frequency of hyperthermia in a population of acute neurosurgical patients; to assess the relation between brain temperature (ICT) and core temperature (Tc); to investigate the effect of changes in brain temperature on intracranial pressure (ICP).
Methods: The study involved 20 patients (10 severe head injury, eight subarachnoid haemorrhage, two neoplasms) with median Glasgow coma score (GCS) 6. ICP and ICT were monitored by an intraventricular catheter coupled with a thermistor. Internal Tc was measured in the pulmonary artery by a Swan-Ganz catheter.
Results: Mean ICT was 38.4 (SD 0.8) and mean Tc 38.1 (SD 0.8) degrees C; 73% of ICT and 57.5% of Tc measurements were > or =38 degrees C. The mean difference between ICT and Tc was 0.3 (SD 0.3) degrees C (range -0.7 to 2.3 degrees C) (p=0. 0001). Only in 12% of patients was Tc higher than ICT. The main reason for the differences between ICT and Tc was body core temperature: the difference between ICT and Tc increased significantly with body core temperature and fell significantly when this was lowered. The mean gradient between ICT and Tc was 0.16 (SD 0.31) degrees C before febrile episodes (ICT being higher than Tc), and 0.41 (SD 0.38) degrees C at the febrile peak (p<0.05). When changes in temperature were considered, ICT had a profound influence on ICP. Increases in ICT were associated with a significant rise in ICP, from 14.9 (SD 7.9) to 22 (SD 10.4) mm Hg (p<0.05). As the fever ebbed there was a significant decrease in ICP, from 17.5 (SD 8.62) to 16 (SD 7.76) mm Hg (p=0.02).
Conclusions: Fever is extremely frequent during acute cerebral damage and ICT is significantly higher than Tc. Moreover, Tc may underestimate ICT during the phases when temperature has the most impact on the intracranial system because of the close association between increases in ICT and ICP.
Answer: The relationship between brain temperature and intracranial pressure (ICP) is complex and varies across studies. Some studies suggest a correlation, while others do not find a clinically useful relationship.
One study found that brain temperature was typically 0.3 degrees C higher than core temperature and that there was a very weak positive correlation between ICP and brain temperature, with a Pearson correlation coefficient of 0.13, indicating that any relationship between the two accounted for less than 2% of the variability (PUBMED:18362771). This suggests that there is not a strong or clinically useful correlation between brain temperature and ICP.
Another study reported that after traumatic brain injury (TBI), brain temperature (BT) may influence brain physiology, as reflected by ICP and cerebral perfusion pressure (CPP). The study found an association between BT above 37.5 °C and higher ICP levels, but this was not confirmed for lower BT. During episodes of BT elevation, ICP increased and CPP was reduced, indicating that BT seems to have a significant impact on ICP and CPP during rapid temperature changes (PUBMED:34331210).
A different study found no significant difference between hyperthermic ICP or CPP and normothermic ICP or CPP determined by brain or core temperature, and no significant correlation was found between temperature and intracranial dynamics (PUBMED:18186416).
Research on the optimal temperature for the management of severe traumatic brain injury found that hypothermia could reduce intracranial hypertension while maintaining sufficient CPP without cardiac dysfunction or oxygen debt, suggesting that a temperature of 35 to 35.5 degrees C might be optimal for treating patients with severe TBI (PUBMED:12493106).
In summary, while there is some evidence that brain temperature can influence ICP, particularly during rapid temperature changes or in the context of TBI, the correlation is not consistently strong across different studies and conditions. Some studies suggest that managing brain temperature could be beneficial for controlling ICP, but the relationship is not straightforward and may not be clinically significant in all cases. |
Instruction: Does waiting time affect the outcome of larynx cancer treated by radiotherapy?
Abstracts:
abstract_id: PUBMED:10944052
Does waiting time for radiotherapy affect local control of T1N0M0 glottic laryngeal carcinoma? This is a retrospective study of 362 patients with a T1N0M0 glottic laryngeal carcinoma treated by radiotherapy. Waiting time was defined as time from the day of histopathological diagnosis to the first day of radiotherapy. The Cox regression model was used to analyse the influence of waiting time for radiotherapy on the incidence of recurrence. The median follow-up time was 4.4 years. The median waiting time for radiotherapy was 43 days. Local recurrences were found in 58 patients. There was no significant correlation (P= 0.88) between waiting time and the outcome of early glottic cancer as analysed by Cox regression. This retrospective study did not demonstrate an effect of waiting time for radiotherapy on the outcome of early glottic laryngeal cancer.
abstract_id: PUBMED:9288841
Does waiting time affect the outcome of larynx cancer treated by radiotherapy? Aim: To determine the impact of waiting for radiotherapy on local control in early larynx cancer treated by radiotherapy alone.
Methods: Records of patients with T1 and T2, N0-2 larynx cancer were examined at three radiotherapy centres. Waiting time was defined in three ways, (1) time from biopsy to radiotherapy, (2) time from presentation to radiation department to start of radiotherapy and (3) the minimum of (1) and (2). Time to relapse was the major end point.
Results: There were 581 patients with a median follow-up of 6.8 years. Stage distribution was as follows: T1, 370; T2a, 106; T2b, 94; T2 unspecified, 11; N0, 563; N+, 18. Median times from biopsy, presentation and minimum time to treatment were 24, 16 and 15 days, respectively. Ninety percent of minimum waiting times were < or = 31 days. The median dose was 61 Gy in a median of 30 fractions over a median 46 days. Local recurrence occurred in 126 patients. The actuarial recurrence free rate at 5 years was 77% (SE 2%). In a multivariate analysis the significant predictors of relapse were higher T stage, longer treatment duration and increasing field area. Waiting time was not significantly associated with local relapse.
Conclusion: This study did not show longer waiting time to be a significant predictor of relapse in early larynx cancer. Other end-points which are relevant, such as quality of life, have not been examined. Longer treatment times were significantly associated with relapse.
abstract_id: PUBMED:8083117
Waiting for radiotherapy in Ontario. Purpose: Waiting lists for radiotherapy are a fact of life at many Canadian cancer centers. The purpose of this study was to provide a detailed description of the magnitude of the problem in Ontario.
Methods And Materials: The interval between diagnosis and initiation of radiation treatment was calculated for all patients receiving primary radiotherapy for carcinoma of the larynx, cervix, lung, and prostate at seven Ontario cancer centers between 1982 and 1991. The interval between surgery and initiation of postoperative radiotherapy for breast cancer was also calculated over the same period. The intervals between diagnosis and referral (t1), between referral and consultation (t2), and between consultation and initiation of radiotherapy (t3), were analyzed separately to determine where delay occurred.
Results: Median waiting times between diagnosis and initiation of radical treatment for carcinoma of the larynx, carcinoma of the cervix, nonsmall cell lung cancer, and carcinoma of the prostate were 30.3 days, 27.2 days, 27.3 days, and 93.3 days, respectively. The exceptional interval between diagnosis and treatment of prostate cancer was due to much longer delays between diagnosis and referral. The median waiting time between diagnosis and initiation of postoperative radiotherapy for breast cancer was 61.4 days and the median time between the completion of surgery and initiation of postoperative radiotherapy was 57.8 days. There were significant intercenter variations in median waiting times, but in every situation the median waiting time in Ontario as a whole increased steadily between 1982 and 1991. Median waiting times from diagnosis to the start of curative treatment for laryngeal cancer, cervical cancer, nonsmall cell lung cancer, and prostate cancer increased by 178.7%, 105.6%, 158.3%, and 62.9%, respectively. Waiting time from completion of surgery to initiation of postoperative radiotherapy for breast cancer increased by 102.7%. Most of the increase in treatment delay was found in the interval between consultation and initiation of radiotherapy.
Conclusions: The Committee on Standards of the Canadian Association of Radiation Oncologists recommends that the interval between referral and consultation should not exceed 2 weeks and that the interval between consultation and initiation of radiotherapy should also not exceed 2 weeks. The majority of patients treated in Ontario met both those standards in 1982, but by 1991 few patients received care within the prescribed intervals.
abstract_id: PUBMED:24659653
Does treatment interruption and baseline hemoglobin affect overall survival in early laryngeal cancer treated with radical radiotherapy? 10 years follow up. Purpose: In this retrospective study we assessed different factors affecting the outcome of early laryngeal cancer, focusing on the impact of the pretreatment hemoglobin (Hb) level, time interval between diagnosis and start of radiotherapy, as well as treatment interruption during the course of radiotherapy.
Methods: We reviewed the hospital records, oncology database and radiotherapy treatment sheets of 88 patients with T1-T3 N0M0 squamous cell carcinoma of the larynx who had been treated with radical radiotherapy at Northamptonshire Centre for Oncology during the period from 1st January 1996 till 31st December 2002 inclusive. Patients were followed up for 10 years.
Results: There were no significant overall survival differences with regard to sex , stage, radiotherapy dose received, treatment interruption for 1 to 2 days , as well as the delay to start radiotherapy (mean delay 57 days). However, there was statistically significant adverse overall survival outcome with increasing age (p<0.001). On the other hand, patients with pretreatment Hb level >12 g/dl had significant statistical overall survival benefit over those with ≤12 g/dl (p=0.018).
Conclusion: Pretreatment Hb level had a significant impact on overall survival in patients with early laryngeal carcinoma treated with radical radiotherapy. Time to start radiation treatment, treatment interruption for 1 or 2 days and different dose / fractionations did not affect the overall survival.
abstract_id: PUBMED:15708258
Duration of symptoms: impact on outcome of radiotherapy in glottic cancer patients. Purpose: To study the relationship between the durations of symptoms before the start of radiotherapy and treatment outcome in Stage I-III glottic cancer.
Methods And Materials: From 1965 to 1997, 611 glottic cancer patients from the Southern Region of Denmark were treated with primary radiotherapy. A total of 544 patients fulfilled the criteria for inclusion to the study (Stage I-III glottic cancer, a duration of symptoms less than or equal to 36 months, primary radiotherapy with at least 50 Gy and sufficient data for analysis). The total radiation dose ranged from 50.0 to 71.6 Gy in 22 to 42 fractions, and the median dose per fraction was 2.00 Gy (range, 1.56-2.29 Gy). All patients had 5 years of follow-up, and the 5-year recurrence-free survival rate was used as the primary endpoint.
Results: The 5-year recurrence-free survival rate was 74%. In a multivariate Cox regression analysis, duration of symptoms was a significant factor (p < 0.0001) with a hazard ratio of 1.045 (95% CI 1.023, 1.069). Other significant factors included tumor stage and radiation dose, whereas duration of treatment time was borderline significant (p = 0.06).
Conclusions: The duration of symptoms was statistically significantly related to a decrease in recurrence-free survival. One-month delay from onset of symptoms to start of radiotherapy was equivalent to a 4.5% decrease in recurrence-free survival.
abstract_id: PUBMED:9027934
Adverse effect of treatment gaps in the outcome of radiotherapy for laryngeal cancer. Background And Purpose: A correlation has been demonstrated between unplanned prolongation of radiotherapy and increased local relapse. This review was performed to assess the importance of overall time on the outcome of curative radiotherapy of larynx cancer.
Materials And Methods: Retrospective analysis was performed of 383 patients with laryngeal cancer managed by elective radiotherapy between 1976-1988 in the Department of Clinical Oncology, University of Edinburgh, Western General Hospital, Edinburgh All cancers were confirmed histologically to be squamous cell carcinomas. All subjects received radiotherapy in 20 daily fractions (except Saturdays and Sundays), employing individual beam direction techniques and computer dose distribution calculations. Main outcome measures were complete resolution of the cancer in the irradiated volume; local relapse; survival and cause-specific survival rates.
Results: Radiotherapy was completed without any unplanned interruption (28 +/- 2 days) in 230/383 (60%) of patients. A statistically significant two-fold increase in local relapse rates was observed when treatment was given in 31 days or more. There also was a statistically significant four-fold increase in laryngeal cancer deaths when the treatment time exceeded 30 days.
Conclusions: In patients with laryngeal cancer, accelerated repopulation of cancer cells probably occurs after the start of radiotherapy. When the overall treatment time is 4 weeks or less, gaps at weekends are not detrimental. However, long holiday periods or gaps in treatment longer than 4 days increase the risk of laryngeal cancer relapse and cancer-related mortality. Significant gaps in treatment should be avoided. If treatment has to be prolonged, additional radiation dose should be prescribed to compensate for increased tumour cell proliferation.
abstract_id: PUBMED:11173149
Interaction between potential doubling time and TP53 mutation: predicting radiotherapy outcome in squamous cell carcinoma of the head and neck. Purpose: To investigate the correlation between tumor potential doubling time, Tpot, and mutations in the p53 gene, TP53, and the potential of these parameters to predict outcome of head and neck cancer patients treated with radiotherapy.
Methods And Materials: Data from two independent studies on Tpot and TP53 mutations were combined, including 58 patients with squamous cell carcinoma of the head and neck. Tpot was estimated on biopsies obtained 6-9 h after infusion of iododeoxyuridine by combined flow cytometry and immunohistology. TP53 mutations were detected using DGGE and sequenced. All patients received primary radiotherapy alone.
Results: The predictive value of Tpot alone was of borderline significance. However, in TP53 wild-type tumors, Tpot was a strong predictor of outcome, whereas Tpot in TP53 mutant tumors failed to provide any information. Tpot and TP53 were not associated with nodal control; however, there was a strong relationship with control in the T-position, disease-specific survival, and overall survival.
Conclusion: Tpot can to be a relevant parameter for predicting outcome of radiotherapy in head and neck cancer but only in the subset of patients without mutations in the p53 gene.
abstract_id: PUBMED:18392632
The impact of treatment center on the outcome of patients with laryngeal cancer treated with surgery and radiotherapy. For laryngeal cancer, surgical excision of the primary tumor should be undertaken with the aim of achieving tumor-free margins. Adequate pathological assessment of the specimen and the competency of the treatment center play a crucial role in achieving cure. The present study aimed to analyze the significance of place of surgery on the outcome of patients with laryngeal cancer who underwent surgical operation in other centers and were subsequently referred to Doküz Eylul University Head and Neck Tumour Group (DEHNTG) for postoperative irradiation. Patients were divided into three groups according to their place of surgery. The first group (Group I) consisted of patients who had their surgical operation at DEUH. Patients in the second group (Group II) were referred from centers with oncological surgical experience. The third group (Group III) consisted of patients referred from hospitals with no surgical teams experienced in head and neck cancer treatment. The clinical and pathological features of patients in these three groups were analyzed to assess the impact of place of surgery on clinical outcome as well as the prognostic factors for survival. The study population consisted of 253 patients who were treated between 1991 and 2006 with locally advanced laryngeal cancer according to the protocol of DEHNTG. The median follow-up was 48 (3-181) months. The 5 years overall, loco-regional disease-free and distant disease-free survivals were 66, 88 and 91%, respectively. When patients' clinical and histopathological features were analyzed for the impact of place of surgery, surgical margin positivity rates were found to be higher in Group III (P = 0.032), although the other two groups had more advanced clinical and pathological N stage disease (P = 0.012, P = 0.001). In multivariate analysis, older age (P < 0.0001), presence of perinodal invasion (P = 0.012), time interval between surgery and radiotherapy longer than 6 weeks (P = 0.003) and tumor grade (P = 0.049) were the most significant factors. For loco-regional failure-free survival, advanced clinical stage (P = 0.002), place of surgery (P = 0.031) and presence of clinical subglottic invasion (P = 0.029) were shown to be important prognostic factors. For distant metastasis-free survival, only pathological (+) lymph node status (P = 0.046) was a significant factor in multivariate analysis. The significance of place of surgery as well as other well-known prognostic factors underlines the importance of an experienced multidisciplinary treatment team if best results are to be obtained for the patient.
abstract_id: PUBMED:14587383
Time factors in postoperative radiotherapy in the years 1986-19990 and 2000-2002 Postoperative radiotherapy plays an important role in management of advanced laryngeal cancer. Numerous retrospective data has given strong evidence that prolongation of treatment time of radiotherapy has negative influence on treatment results. Generally, it might be concluded that each additional day of irradiation is connected with drop of locoregional control in range from 1% to 2%. In this paper, the duration time and frequency of gap which occurred during postoperative radiotherapy of the laryngeal cancer in the following two time periods were compared: 1986-1990 (group I) and 2000-2002 (group II). An analysis indicated that in the I group of patients gaps during the course of postoperative radiotherapy were noted in 52% (160/311 patients) and in the group II in 26% (71/270 patients). The median time of the gap dropped from 8.3 days to 4.2 days. In the same time the percentage of gaps due to acute side effects was reduced from 30% (94/311) to 9% (24/270) and in addition the reduction of the time of gaps by 4.1 days was observed. The significant reduction of the time of gaps due to breakdown of therapeutic machine was observed, but the number of gaps with this reason was generally low. In the II group of patients treated in years 2000-2002 the percentage of gaps caused by holidays increased from 10% (33/3110) to 14% (39/270). The median time of gap in the same time was lowered by 1 day. In conclusion, during the last decade, the length of the postoperative radiotherapy for patients after total laryngectomy due to advanced laryngeal cancer was reduced. Still is a place for additional time reduction of gap by intensification of patient's medical care during irradiation and logistic improvement in patient's care.
abstract_id: PUBMED:12162018
Timing and duration of postoperative radiotherapy in patients with laryngeal cancer Aim of this was an evaluation of the impact of prolongation of combined treatment time on the local-regional control laryngeal cancer patients treated with surgery and postoperative radiotherapy. Study was based on retrospective analysis of 254 patients with T3/T4 and N0-N2 laryngeal cancer who were treated between January 1993 and December 1996 with surgery and postoperative radiotherapy (RT). Median age of investigated group was 56.3 years. Surgery consisted of total laryngectomy in all cases and with selective neck dissection. RT began 22 to 78 days postoperatively (median 45 days) and continued for a median of 47 days (range, 40-74 days). The primary tumor bed and all lymph neck nodes were treated in all patients. The total dose to the primary tumor bed was about 60 Gy, fraction dose was 2 Gy. All lymph nodes were irradiated with a dose of 50 Gy. The factors studied for prognostic importance for a local-regional failure included: overall combined treatment time, interval between surgery and the start of radiotherapy, radiotherapy treatment time, age, sex, T and N categories. The 5-year actuarial disease-free local-regional survival was 50%. On univariate analysis, using log rank test, only N status, sex were predicted for the locoregional control of postoperative radiotherapy. Overall treatment time longer than 90 days and especially 100 days was correlated with worse locoregional results of combined treatment respectively (p = 0.003 and p = 0.0007). Also prolongation of interval time between surgery and postoperative radiotherapy beyond 50 days was connected with higher rate of failures (p = 0.02). The same trend was observed when time of irradiation was longer than 47 days (p = 0.01). This analysis indicated that the prolongation of the overall treatment time of combined modality, the the time between surgery, and radiotherapy and time of postoperative radiotherapy were correlated with increased of local regional failures.
Answer: The impact of waiting time on the outcome of larynx cancer treated by radiotherapy has been investigated in several studies, with varying results. Some studies did not find a significant correlation between waiting time and the outcome of early glottic laryngeal cancer (PUBMED:10944052; PUBMED:9288841). Specifically, one study with a median follow-up of 4.4 years and a median waiting time of 43 days found no significant correlation between waiting time and the incidence of recurrence in patients with T1N0M0 glottic laryngeal carcinoma (PUBMED:10944052). Another study with a median follow-up of 6.8 years also did not show longer waiting time to be a significant predictor of relapse in early larynx cancer (PUBMED:9288841).
However, other studies have suggested that time factors can influence the outcome of radiotherapy in laryngeal cancer. For instance, one study found that treatment interruptions and longer treatment durations were significantly associated with relapse (PUBMED:9288841). Another study indicated that a one-month delay from the onset of symptoms to the start of radiotherapy was equivalent to a 4.5% decrease in recurrence-free survival (PUBMED:15708258). Additionally, gaps in treatment longer than 4 days were found to increase the risk of laryngeal cancer relapse and cancer-related mortality (PUBMED:9027934).
Moreover, the timing and duration of postoperative radiotherapy have been shown to be important, with prolongation of combined treatment time, the interval between surgery and the start of radiotherapy, and the duration of radiotherapy itself being correlated with increased local regional failures (PUBMED:12162018).
In summary, while some studies have not found a significant impact of waiting time on the outcome of larynx cancer treated by radiotherapy, other research suggests that treatment interruptions, longer treatment durations, and delays in starting treatment can negatively affect patient outcomes. It is important to consider that the impact of waiting time may also be influenced by other factors such as the stage of the cancer, treatment interruptions, and the overall treatment duration. |
Instruction: Are house dust mite allergen levels influenced by cold winter weather?
Abstracts:
abstract_id: PUBMED:15969691
Are house dust mite allergen levels influenced by cold winter weather? Background: Moisture is vitally important for house dust mites and they cannot survive in cold or hot-dry climates.
Aims Of The Study: To investigate the influence of two extraordinarily cold and dry winters in 1995/1996 and 1996/1997 on house dust mite levels in German homes.
Methods: Dust samples were collected between June 1995 and December 2001 on the mattresses of 655 adults and 454 schoolchildren living in five different areas of Germany. We compared house dust mite allergen Dermatophagoides pteronyssinus (Der p 1) levels before and during the winters of 1995/1996 and 1996/1997 with levels after these winters.
Results: D. pteronyssinus (Der p 1) levels in samples taken after the cold winters of 1995/1996 and 1996/1997 were approximately two times lower than Der p 1 levels in dust samples collected before or during these respective winters (Geometric means: Erfurt 89 vs 33 ng/g; Hamburg 333 vs 219 ng/g; Bitterfeld, Hettstedt, and Zerbst 296 vs 180 ng/g). Except for Hamburg, the decrease in Der p 1 levels was statistically significant. D. pteronyssinus levels measured in dust samples collected in 2001 (i.e. 3 years after the two cold winters) show a statistically non-significant increase (Geometric means: Erfurt 33 vs 39 ng/g; Hamburg 219 vs 317 ng/g), suggesting that it may take a long time for mite allergen levels to increase again after a sudden decrease.
Conclusion: We conclude that Der p 1 levels in German mattress dust samples have been approximately reduced by a factor of three to four by the two consecutive cold winters of 1995/1996 and 1996/1997.
abstract_id: PUBMED:23115731
Standardization of house dust mite extracts in Korea. Purpose: House dust mites are the most important cause of respiratory allergy in Korea. Standardization of allergen extracts is essential for improving diagnostics and immunotherapeutics. This study was undertaken to evaluate the allergenicity of standardized house dust mite allergen extracts from Korean house dust mite isolates.
Methods: Allergen extracts were prepared from cultured Korean house dust mites (Dermatophagoides farinae and D. pteronyssinus). Allergenic activities of Korean house dust mite extracts were compared to standardized extracts from a company in the United States whose allergen concentrations were expressed as Allergy Units (AUs). Specifically, we compared group 1 and 2 major allergens using two-site enzyme-linked immunosorbent assay (ELISA) kits and an in vivo intradermal test.
Results: Major allergen concentrations were 17.0 µg/mg (5.0 µg/mg of Der f 1 and 12.0 µg/mg of Der f 2) for a D. farinae extract and 24.0 µg/mg (11.6 µg/mg of Der p 1 and 12.4 µg/mg of Der p 2) for a D. pteronyssinus extract. Using chloramphenicol (CAP) inhibition assays, AUs were 12.5 AU/µg for a D. farinae extract and 12.8 AU/µg for a D. pteronyssinus extract. Allergenic activities were 3- to 4-fold stronger when assessed by intradermal skin tests for in vivo standardization.
Conclusions: Allergen extracts were prepared from Korean house dust mites and the allergenicities of the extracts were estimated using AU measurements. House dust mite extracts prepared in this study could be utilized as a reference material, which will be useful for the development of diagnostic and immunotherapeutic reagents in Korea.
abstract_id: PUBMED:36040279
Role of ventilation and cleaning for controlling house dust mite allergen infestation: A study on associations of house dust mite allergen concentrations with home environment and life styles in Tianjin area, China. House dust mites produce well-known allergens for asthma and allergy among children. To study house dust mite allergen exposure level in northeast China and characterize its association with indoor environmental factors and cleaning habits, we inspected 399 homes in Tianjin area and collected dust from mattresses. Dermatophagoides farinae (Der f) and Dermatophagoides pteronyssinus (Der p) were detected by the enzyme-linked immunosorbent assay (ELISA) method. The medians of total allergen concentrations for spring, summer, autumn, and winter were 524 ng/g, 351 ng/g, 1022 ng/g, and 1010 ng/g. High indoor air relative humidity (RH), low air change rate, indoor dampness, and frequent changing of quilt cover/bedsheet/pillow case were significantly associated with high house dust mite allergen concentration (relative risk [RR]: RH, 1.18-1.34; air change rate, 0.97-1.00; dampness, 2.92-3.83; changing quilt cover/bedsheet/pillow case, 0.66-0.75). The decrease in the absolute humidity gradient between indoors and outdoors that occurs with increased air change rate may explain why a high ventilation reduces house dust mite allergen concentration. The findings of this study show the importance of ventilation and cleaning for controlling house dust mite allergens. We found that the decrease in additional absolute humidity (e.g., humidity indoor -humidity outdoor ) with increased air change rate may be the main reason that a high ventilation rate reduces house dust mite allergen concentration. Ventilation and cleaning should be both considered for creating a healthy home environment.
abstract_id: PUBMED:32144500
House Dust Mite-Shrimp Allergen Interrelationships. Purpose Of Review: Focusing on the strict relationship between house dust mites and crustaceans from the allergenic point of view.
Recent Findings: The well-known tropomyosin was considered for years as the cross-reacting allergen between shrimp and house dust mites. In the last few years, several allergens not only in shrimps but also in house dust mite have been identified and other molecules other than tropomyosin have been shown to cross-react between crustaceans and mites. The present review investigates the very complex allergen sources in shrimp and mites, giving a satisfactorily complete picture of the interrelationships between common allergens. Several minor HDM allergens are homologous to major and minor shrimp allergens; tropomyosin is not the only cross-reactive allergen between shrimp and mites.
abstract_id: PUBMED:32561997
Update on House Dust Mite Allergen Avoidance Measures for Asthma. Purpose Of Review: To critically review the evidence in favor or against the use of house dust mite (HDM) allergen avoidance measures in patients with asthma.
Recent Findings: Systematic reviews and meta-analyses suggested no positive effect of mite allergen avoidance strategies on asthma outcomes, resulting in a lack of consensus regarding the utility of these measures. However, such analyses have a number limitations and might not be the most adequate tool to evaluate current evidence and to derive clinical recommendations regarding mite allergen avoidance in asthmatic patients. We should not disproportionately rely on the results of meta-analyses and systematic reviews to inform clinical practice and asthma guidelines in this area. Recent high-quality evidence from randomized controlled trial in children confirmed that mite allergen-impermeable bed encasings reduce emergency hospital attendance with acute severe asthma exacerbations. Until better evidence is available, we suggest that physicians should adopt a pragmatic approach to mite allergen avoidance and advise sensitized patients to implement a multifaceted set of measures to achieve as great a reduction in exposure as possible. Potential predictors of positive response (e.g., patient's sensitization and exposure status) can pragmatically be evaluated using the size of skin test wheal or the titer of allergen-specific IgE. Finally, the intervention should be started as early as possible.
abstract_id: PUBMED:23115727
House dust mite allergy in Korea: the most important inhalant allergen in current and future. The house-dust mite (HDM), commonly found in human dwellings, is an important source of inhalant and contact allergens. In this report, the importance of HDM allergy in Korea and the characteristics of allergens from dust mite are reviewed with an emphasis on investigations performed in Korea. In Korea, Dermatophagoides farinae is the dominant species of HDM, followed by D. pteronyssinus. Tyrophagus putrescentiae is also found in Korea, but its role in respiratory allergic disease in Korea is controversial. The relatively low densities of mite populations and concentrations of mite major allergens in dust samples from Korean homes, compared to westernized countries, are thought to reflect not only different climatic conditions, but also cultural differences, such as the use of 'ondol' under-floor heating systems in Korean houses. HDM are found in more than 90% of Korean houses, and the level of exposure to HDM is clinically significant. About 40%-60% of Korean patients suffering from respiratory allergies, and more than 40% of patients suffering from atopic dermatitis, are sensitized to HDM. Mite allergens can be summarized according to their inherent auto-adjuvant activities and/or their binding affinities to the adjuvant-like substances: proteolytic enzymes, lipid binding proteins, chitin binding proteins, and allergens not associated with adjuvant-like activity. In general, allergens with a strong adjuvant-like activity or adjuvant-binding activity elicit potent IgE reactivity. In Korea, Der f 2 is the most potent allergen, followed by Der f 1. Immune responses are modulated by the properties of the allergen itself and by the adjuvant-like substances that are concomitantly administered with the antigens. Characterization of allergenic molecules and elucidation of mechanisms by which adjuvant-like molecules modulate allergic reactions, not only in Korea but also worldwide, will provide valuable information on allergic diseases, and are necessary for the development of diagnostic tools and therapeutic strategies.
abstract_id: PUBMED:36039254
House Dust Mite and Grass Pollen Allergen Extracts for Seasonal Allergic Rhinitis Treatment: A Systematic Review. Background: The treatment of allergic rhinitis is important due to the burden that the disease causes globally. The objective of this review is to explore the efficiency of house dust mite and grass pollen extracts in allergic rhinitis treatment.
Methods: We performed research in electronic databases and searched relevant articles on PubMed, CINAHL, OVID, ScienceDirect, Cochrane CENTRAL, and MEDLINE. We used keywords such as 'allergic rhinitis', 'sublingual immunotherapy', 'randomized controlled trials', 'grass pollen', 'allergen immunotherapy', and 'house dust mite'. We included nine randomized controlled trials (RCTs). Quality assessment of included studies was performed independently by two authors.
Results: We included nine eligible RCTs in this review. Five RCTs were about grass pollen extracts and four RCTs were about house dust mite extracts. Most of the studies reported positive results and suggested further evaluation of sublingual immunotherapy (SLIT) treatment. Grass pollen extracts mostly used were Dactylis glomerata, Poa pratensis, Lolium perenne, Anthoxanthum odoratum, Phleum pratense, and Parietaria. House dust mite extracts used were from Dermatophagoides pteronyssinus and Dermatophagoides farina. According to the quality assessment, no bias was observed in the included studies.
Conclusions: Although sublingual allergen immunotherapy shows a benefit compared to placebo in the treatment of allergic rhinitis and rhino-conjunctivitis in adults, the results are interpreted with caution due to the high heterogenicity among studies in treatment protocols and dosing. More standardization among studies is needed.
abstract_id: PUBMED:31890153
Association between component-resolved diagnosis of house dust mite and efficacy of allergen immunotherapy in allergic rhinitis patients. Data regarding clinical relevance of house dust mite (HDM) components over allergen immunotherapy (AIT) for allergic rhinitis (AR) are lacking. 18 adult AR patients receiving HDM-AIT for 52 weeks were followed up to assess serum levels of sIgE and sIgG4 to HDM components. The study showed that Der p1, p2, p23, Der f1 and f2, are important sensitizing components of HDM, of which Der p1 appears to be the most clinically relevant allergenic component for effective AIT.
abstract_id: PUBMED:21432080
Comparative study of simple semiquantitative dust mite allergen tests. Objective: Two simple, commercially available and semiquantitative dust mite allergen tests, namely, the Acarex test(®) and Mitey Checker(®), were compared using 2 and 10 μg of Der 1 allergen per gram of dust, as evaluated by enzyme-linked immunosorbent assay (ELISA), to clarify which method is better suited for practical use.
Methods: Mite allergen exposure levels of 106 floor, bed and sofa surfaces were evaluated by the Acarex test(®), Mitey Checker(®), and ELISA. A template of 100 cm×100cm was placed on the same surfaces to identify the examined areas. A dust collection filter was attached to a vacuum cleaner, and the area in the template (1 m(2)) was vacuumed. Then, to evaluate the other two tests, samples from the two other areas in the template (1 m(2)) that neighbored each other and did not overlap were vacuumed.
Results: To predict Der 1 levels of 2 μg/g dust or higher, the sensitivity and specificity of the Acarex test(®) were 100% and 13.3%, and those, of Mitey Checker(®) were 91.8% and 71.1%, respectively. To predict Der 1 levels of 10 μg/g dust or higher, the sensitivity and specificity of the Acarex test(®) were 50.0% and 96.2%, and those of Mitey Checker(®) were 85.7% and 79.5%, respectively. Compared with Der 1<2.0, 2.0-9.9.≥10.0 (μg/g dust), the percent agreement and kappa of the Acarex test(®) were 47.2% and 0.234, and those of Mitey Checker(®) were 70.0% and 0.505, respectively.
Conclusion: To evaluate mite allergen exposure level for practical use in Japanese living environments, Mitey Checker(®) is better than the Acarex test(®) because of its higher sensitivity and specificity.
abstract_id: PUBMED:37119758
House dust mite allergy: The importance of house dust mite allergens for diagnosis and immunotherapy. House dust mite (HDM) allergy belongs to the most important allergies and affects approximately 65-130 million people worldwide. Additionally, untreated HDM allergy may lead to the development of severe disease manifestations such as atopic dermatitis or asthma. Diagnosis and immunotherapy of HDM allergic patients are well established but are often hampered by the use of mite extracts that are of bad quality and lack important allergens. The use of individual allergens seems to be a promising alternative to natural allergen extracts, since they represent well-defined components that can easily be produced and quantified. However, a thorough characterization of the individual allergens is required to determine their clinical relevance and to identify those allergens that are required for correct diagnosis of HDM allergy and for successful immunotherapy. This review gives an update on the individual HDM allergens and their benefits for diagnosis and immunotherapy of HDM allergic patients.
Answer: Yes, house dust mite allergen levels are influenced by cold winter weather. A study investigating the influence of two extraordinarily cold and dry winters in 1995/1996 and 1996/1997 on house dust mite levels in German homes found that house dust mite allergen Dermatophagoides pteronyssinus (Der p 1) levels in samples taken after these cold winters were approximately two times lower than Der p 1 levels in dust samples collected before or during these respective winters. The decrease in Der p 1 levels was statistically significant in all areas except for Hamburg. This suggests that cold winter weather can significantly reduce house dust mite allergen levels in homes (PUBMED:15969691). |
Instruction: Does controlled ovarian stimulation prior to chemotherapy increase primordial follicle loss and diminish ovarian reserve?
Abstracts:
abstract_id: PUBMED:18854408
Does controlled ovarian stimulation prior to chemotherapy increase primordial follicle loss and diminish ovarian reserve? An animal study. Background: Storage of embryos for fertility preservation before chemotherapy is widely practiced. For multiple oocyte collection, the ovaries are hyperstimulated with gonadotrophins that significantly alter ovarian physiology. The effects of ovarian stimulation prior to chemotherapy on future ovarian reserve were investigated in an animal model.
Methods: Cyclophosphamide (Cy) in doses of 0, 50 or 100 mg/kg was administered to 38 adult mice (control, unstimulated). A second group of 12 mice were superovulated with equine chorionic gonadotrophin (eCG, 10 IU on Day 0) before Cy administration; hCG (10 IU) was administered (Day 2) followed by 0, 50 or 100 mg/kg Cy (Day 4). In both groups ovaries were removed, serially sectioned (7-day post-Cy), primordial follicles were counted and differences between groups evaluated.
Results: Follicle number dropped from 469 +/- 24 (mean +/- SE) to 307 +/- 27 and 234 +/- 19 with 50 or 100 mg/kg Cy, respectively (P < 0.0001). In the eCG pretreated group, follicle count dropped from 480 +/- 31 to 345 +/- 16 and 211 +/- 26 when 50 or 100 mg/kg Cy were administered (P < 0.0001). There were no significant differences in follicle count between the pretreated eCG group and controls for each chemotherapy dose.
Conclusions: This animal study indicates that ovarian stimulation before administration of Cy does not adversely affect ovarian reserve post-treatment. These results provide support for the safety of fertility preservation using ovarian stimulation and IVF-embryo cryopreservation procedures prior to chemotherapy.
abstract_id: PUBMED:32514568
Unraveling the mechanisms of chemotherapy-induced damage to human primordial follicle reserve: road to developing therapeutics for fertility preservation and reversing ovarian aging. Among the investigated mechanisms of chemotherapy-induced damage to human primordial follicle reserve are induction of DNA double-strand breaks (DSBs) and resultant apoptotic death, stromal-microvascular damage and follicle activation. Accumulating basic and translational evidence suggests that acute exposure to gonadotoxic chemotherapeutics, such as cyclophosphamide or doxorubicin, induces DNA DSBs and triggers apoptotic death of primordial follicle oocytes within 12-24 h, resulting in the massive loss of ovarian reserve. Evidence also indicates that chemotherapeutic agents can cause microvascular and stromal damage, induce hypoxia and indirectly affect ovarian reserve. While it is possible that the acute reduction of the primordial follicle reserve by massive apoptotic losses may result in delayed activation of some primordial follicles, this is unlikely to be a predominant mechanism of loss in humans. Here, we review these mechanisms of chemotherapy-induced ovarian reserve depletion and the potential reasons for the discrepancies among the studies. Based on the current literature, we propose an integrated hypothesis that explains both the acute and delayed chemotherapy-induced loss of primordial follicle reserve in the human ovary.
abstract_id: PUBMED:38003481
Overactivation or Apoptosis: Which Mechanisms Affect Chemotherapy-Induced Ovarian Reserve Depletion? Dormant primordial follicles (PMF), which constitute the ovarian reserve, are recruited continuously into the cohort of growing follicles in the ovary throughout female reproductive life. Gonadotoxic chemotherapy was shown to diminish the ovarian reserve pool, to destroy growing follicle population, and to cause premature ovarian insufficiency (POI). Three primary mechanisms have been proposed to account for this chemotherapy-induced PMF depletion: either indirectly via over-recruitment of PMF, by stromal damage, or through direct toxicity effects on PMF. Preventative pharmacological agents intervening in these ovotoxic mechanisms may be ideal candidates for fertility preservation (FP). This manuscript reviews the mechanisms that disrupt follicle dormancy causing depletion of the ovarian reserve. It describes the most widely studied experimental inhibitors that have been deployed in attempts to counteract these affects and prevent follicle depletion.
abstract_id: PUBMED:35718464
Rapamycin maintains the primordial follicle pool and protects ovarian reserve against cyclophosphamide-induced damage. Any abnormal activation of primordial follicles and subsequent depletion can irreversibly diminish the ovarian reserve, which is one of the major chemotherapy-induced adverse effects in young patients with cancer. Herein, we investigated the effects of rapamycin on the activation and development of ovarian follicles to evaluate its fertility-sparing therapeutic value in a cyclophosphamide (CTX)-treated mouse model. Based on ovarian histomorphological changes and follicle counting in 50 SPF female C57BL/6 mice, daily administration of 5 mg/kg rapamycin for 30 days was deemed an ideal dosage and duration for administration in subsequent experiments. Compared with the control group, rapamycin treatment inhibited the activation of quiescent primordial follicles, with no obvious side effects observed. Finally, 48 mice were randomly divided into four groups: control, rapamycin-treated, cyclophosphamide-treated, and rapamycin intervention. Body weight, ovarian histomorphological changes, number of primordial follicles, DDX4/MVH expression, apoptosis of follicular cells, and expression of apoptosis protease-activating factor (APAF)-1, cleaved caspase 3, and caspase 3 were monitored. Co-administration of rapamycin reduced primordial follicle loss and the development of follicular cell apoptosis, thereby rescuing the ovarian reserve after CTX treatment. On analyzing the mTOR signaling pathway, we observed that rapamycin significantly decreased CTX-mediated overactivation of mTOR and its downstream molecules. These findings suggest that rapamycin exhibits potential as an ovarian-protective agent that could maintain the ovarian primordial follicle pool and preserve fertility in young female patients with cancer undergoing chemotherapy.
abstract_id: PUBMED:31437492
Doxorubicin obliterates mouse ovarian reserve through both primordial follicle atresia and overactivation. Ovarian toxicity and infertility are major side effects of cancer therapy in young female cancer patients. We and others have previously demonstrated that doxorubicin (DOX), one of the most widely used chemotherapeutic chemicals, has a dose-dependent toxicity on growing follicles. However, it is not fully understood if the primordial follicles are the direct or indirect target of DOX. Using both prepubertal and young adult female mouse models, we comprehensively investigated the effect of DOX on all developmental stages of follicles, determined the impact of DOX on primordial follicle survival, activation, and development, as well as compared the impact of age on DOX-induced ovarian toxicity. Twenty-one-day-old CD-1 female mice were intraperitoneally injected with PBS or clinically relevant dose of DOX at 10 mg/kg once. Results indicated that DOX primarily damaged granulosa cells in growing follicles and oocytes in primordial follicles and DOX-induced growing follicle apoptosis was associated with the primordial follicle overactivation. Using the 5-day-old female mice with a more uniform primordial follicle population, our data revealed that DOX also directly promoted primordial follicle death and the DNA damage-TAp63α-C-CASP3 pathway was involved in DOX-induced primordial follicle oocyte apoptosis. Compared to 21-day- and 8-week-old female mice that were treated with the same dose of DOX, the 5-day-old mice had the most severe primordial follicle loss as well as the least degree of primordial follicle overactivation. Taken together, these results demonstrate that DOX obliterates mouse ovarian reserve through both primordial follicle atresia and overactivation and the DOX-induced ovarian toxicity is age dependent.
abstract_id: PUBMED:32651677
Altered expression of activator proteins that control follicle reserve after ovarian tissue cryopreservation/transplantation and primordial follicle loss prevention by rapamycin. Purpose: We investigated whether expression of activator proteins that control follicle reserve and growth change after ovarian tissue vitrification and re-transplantation. Moreover, we assessed whether inhibition of mTOR signaling pathway by rapamycin would protect primordial follicle reserve after ovarian tissue freezing/thawing and re-transplantation.
Methods: Fresh control, frozen/thawed, fresh-transplanted, frozen/thawed and transplanted, rapamycin/control, rapamycin fresh-transplanted, and rapamycin frozen-thawed and transplanted groups were established in rats. After freezing and thawing process, two ovaries were transplanted into the back muscle of the same rat. After 2 weeks, grafts were harvested, fixed, and embedded into paraffin block. Normal and atretic primordial/growing follicle count was performed in all groups. Ovarian tissues were evaluated for the dynamic expressions of Gdf-9, Bmp-15, KitL, Lif, Fgf-2, and p-s6K using immunohistochemistry, and H-score analyses were done.
Results: Primordial follicle reserve reduced almost 50% after ovarian tissue re-transplantation. Expression of Gdf-9 and Lif increased significantly in primordial and growing follicles in frozen-thawed, fresh-transplanted, and frozen/thawed and transplanted groups, whereas expression of Bmp-15, KitL, and Fgf-2 decreased in primordial follicles. Freezing and thawing of ovarian tissue solely significantly increased p-s6K expression in primordial follicles, and on the other hand, suppression of mTORC1 pathway using rapamycin preserved the primordial follicle pool.
Conclusion: Altered expressions of activator proteins that regulate primordial follicle reserve and growth may lead to primordial follicle loss and rapamycin treatment can protect ovarian reserve after ovarian tissue cryopreservation/transplantation.
abstract_id: PUBMED:36329711
The impact of oocyte death on mouse primordial follicle formation and ovarian reserve. Background: Ovaries, the source of oocytes, maintain the numbers of primordial follicles, develop oocytes for fertilization and embryonic development. Although it is well known that about two-thirds of oocytes are lost during the formation of primordial follicles through cyst fragmentation and the aggregation of oocytes within the cyst, the mechanism responsible for this remains unclear.
Methods: We provide an overview of cell death that is associated with the oocyte cyst breakdown and primordial follicle assembly along with our recent findings for mice that had been treated with a TNFα ligand inhibitor.
Main Findings: It is generally accepted that apoptosis is the major mechanism responsible for the depletion of germ cells. In fact, a gene deficiency or the overexpression of apoptosis regulators can have a great effect on follicle numbers and/or fertility. Apoptosis, however, may not be the only cause of the large-scale oocyte attrition during oocyte cyst breakdown, and other mechanisms, such as aggregation, may also be involved in this process.
Conclusion: The continued study of oocyte death during primordial follicle formation could lead to the development of novel strategies for manipulating the primordial follicle pool, leading to improved fertility by enhancing the ovarian reserve.
abstract_id: PUBMED:36430860
DNA Damage Stress Response and Follicle Activation: Signaling Routes of Mammalian Ovarian Reserve. Chemotherapy regimens and radiotherapy are common strategies to fight cancer. In women, these therapies may cause side effects such as premature ovarian insufficiency (POI) and infertility. Clinical strategies to protect the ovarian reserve from the lethal effect of cancer therapies needs better understanding of the mechanisms underlying iatrogenic loss of follicle reserve. Recent reports demonstrate a critical role for p53 and CHK2 in the oocyte response to different DNA stressors, which are commonly used to treat cancer. Here we review the molecular mechanisms underlying the DNA damage stress response (DDR) and discuss crosstalk between DDR and signaling pathways implicated in primordial follicle activation.
abstract_id: PUBMED:37661919
ROCK1 is a multifunctional factor maintaining the primordial follicle reserve and follicular development in mice. The follicle is the basic structural and functional unit of the ovary in female mammals. The excessive depletion of follicles will lead to diminished ovarian reserve or even premature ovarian failure, resulting in diminished ovarian oogenesis and endocrine function. Excessive follicular depletion is mainly due to loss of primordial follicles. Our analysis of published human ovarian single-cell sequencing results by others revealed a significant increase in rho-associated protein kinase 1 (ROCK1) expression during primordial follicle development. However, the role of ROCK1 in primordial follicle development and maintenance is not clear. This study revealed a gradual increase in ROCK1 expression during primordial follicle activation. Inhibition of ROCK1 resulted in reduced primordial follicle activation, decreased follicular reserve, and delayed development of growing follicles. This effect may be achieved through the HIPPO pathway. The present study indicates that ROCK1 is a key molecule for primordial follicular reserve and follicular development.NEW & NOTEWORTHY ROCK1, one of the Rho GTPases, plays an important role in primordial follicle reserve and follicular development. ROCK1 was primarily expressed in the cytoplasm of oocytes and granulosa cell in mice. Inhibition of ROCK1 significantly reduced the primordial follicle reserve and delayed growing follicle development. ROCK1 regulates primordial follicular reserve and follicle development through the HIPPO signaling pathway. These findings shed new lights on the physiology of sustaining female reproduction.
abstract_id: PUBMED:28130407
A putative role for anti-Müllerian hormone (AMH) in optimising ovarian reserve expenditure. The mammalian ovary has a finite supply of oocytes, which are contained within primordial follicles where they are arrested in a dormant state. The number of primordial follicles in the ovary at puberty is highly variable between females of the same species. Females that enter puberty with a small ovarian reserve are at risk of a shorter reproductive lifespan, as their ovarian reserve is expected to be depleted faster. One of the roles of anti-Müllerian hormone (AMH) is to inhibit primordial follicle activation, which slows the rate at which the ovarian reserve is depleted. A simple interpretation is that the function of AMH is to conserve ovarian reserve. However, the females with the lowest ovarian reserve and the greatest risk of early reserve depletion have the lowest levels of AMH. In contrast, AMH apparently strongly inhibits primordial follicle activation in females with ample ovarian reserve, for reasons that remain unexplained. The rate of primordial follicle activation determines the size of the developing follicle pool, which in turn, determines how many oocytes are available to be selected for ovulation. This review discusses the evidence that AMH regulates the size of the developing follicle pool by altering the rate of primordial follicle activation in a context-dependent manner. The expression patterns of AMH across life are also consistent with changing requirements for primordial follicle activation in the ageing ovary. A potential role of AMH in the fertility of ageing females is proposed herein.
Answer: Controlled ovarian stimulation prior to chemotherapy does not appear to increase primordial follicle loss or diminish ovarian reserve, according to an animal study. In this study, mice were either left unstimulated or were superovulated with equine chorionic gonadotrophin (eCG) before being administered cyclophosphamide (Cy), a chemotherapeutic agent. The results showed that there were no significant differences in the number of primordial follicles between the eCG pretreated group and the control group for each dose of chemotherapy. This indicates that ovarian stimulation before the administration of Cy does not adversely affect ovarian reserve post-treatment (PUBMED:18854408).
These findings provide support for the safety of fertility preservation strategies that involve ovarian stimulation and in vitro fertilization (IVF)-embryo cryopreservation procedures prior to chemotherapy. However, it is important to note that this conclusion is based on an animal model, and further research may be needed to fully understand the implications for human fertility preservation. |
Instruction: Do adult disease severity subclassifications predict use of cyclophosphamide in children with ANCA-associated vasculitis?
Abstracts:
abstract_id: PUBMED:22859342
Do adult disease severity subclassifications predict use of cyclophosphamide in children with ANCA-associated vasculitis? An analysis of ARChiVe study treatment decisions. Objective: To determine whether adult disease severity subclassification systems for antineutrophil cytoplasmic antibody-associated vasculitis (AAV) are concordant with the decision to treat pediatric patients with cyclophosphamide (CYC).
Methods: We applied the European Vasculitis Study (EUVAS) and Wegener's Granulomatosis Etanercept Trial (WGET) disease severity subclassification systems to pediatric patients with AAV in A Registry for Childhood Vasculitis (ARChiVe). Modifications were made to the EUVAS and WGET systems to enable their application to this cohort of children. Treatment was categorized into 2 groups, "cyclophosphamide" and "no cyclophosphamide." Pearson's chi-square and Kendall's rank correlation coefficient statistical analyses were used to determine the relationship between disease severity subgroup and treatment at the time of diagnosis.
Results: In total, 125 children with AAV were studied. Severity subgroup was associated with treatment group in both the EUVAS (chi-square 45.14, p < 0.001, Kendall's tau-b 0.601, p < 0.001) and WGET (chi-square 59.33, p < 0.001, Kendall's tau-b 0.689, p < 0.001) systems; however, 7 children classified by both systems as having less severe disease received CYC, and 6 children classified as having severe disease by both systems did not receive CYC.
Conclusion: In this pediatric AAV cohort, the EUVAS and WGET adult severity subclassification systems had strong correlation with physician choice of treatment. However, a proportion of patients received treatment that was not concordant with their assigned severity subclass.
abstract_id: PUBMED:26231832
Adult onset Still's disease with small vessel vasculitis This article presents a particularly severe case of adult onset Still's disease aggravated by small vessel vasculitis. A satisfactory therapy was concluded 1.5 years after onset of the disease. The small vessel vasculitis was difficult to treat: methotrexate (MTX), cyclophosphamide and rituximab were not sufficiently effective. Tocilizumab in combination with intravenous immunoglobulin (IVIG) induced remission and maintenance therapy was carried out with tocilizumab.
abstract_id: PUBMED:21928092
Severity-based treatment for Japanese patients with MPO-ANCA-associated vasculitis: the JMAAV study. We (JMAAV [Japanese patients with MPO-ANCA-associated vasculitis] Study Group) performed a prospective, open-label, multi-center trial to evaluate the usefulness of severity-based treatment in Japanese patients with myeloperoxidase-anti-neutrophil cytoplasmic antibodies (MPO-ANCA)-associated vasculitis. Patients with MPO-ANCA-associated vasculitis received a severity-based regimen according to the appropriate protocol: low-dose corticosteroid and, if necessary, cyclophosphamide or azathioprine in patients with mild form; high-dose corticosteroid and cyclophosphamide in those with severe form; and the severe-form regimen plus plasmapheresis in those with the most severe form. We followed up the patients for 18 months. The primary end points were the induction of remission, death, and end-stage renal disease (ESRD). Fifty-two patients were registered, and 48 patients were enrolled in this study (mild form, n = 23; severe form, n = 23; most severe form, n = 2). Among the 47 patients who received the predefined therapies, 42 achieved remission within 6 months, 5 died, and 1 developed ESRD. Disease flared up in 8 of the 42 patients with remission during the 18-month follow-up period. The JMAAV trial is the first prospective trial for MPO-ANCA-associated vasculitis to be performed in Japan. The remission and death rates were comparable to those in several previous clinical trials performed in western counties. The regimen employed in this trial was tailor-made based on patients' disease severity and disease type, and it seems that standardization can be consistent with treatment choices made according to severity.
abstract_id: PUBMED:28784150
Clinical practice variation and need for pediatric-specific treatment guidelines among rheumatologists caring for children with ANCA-associated vasculitis: an international clinician survey. Background: Because pediatric antineutrophil cytoplasmic antibody-associated vasculitis is rare, management generally relies on adult data. We assessed treatment practices, uptake of existing clinical assessment tools, and interest in pediatric treatment protocols among rheumatologists caring for children with granulomatosis with polyangiitis (GPA) and microscopic polyangiitis (MPA).
Methods: A needs-assessment survey developed by an international working group of pediatric rheumatologists and two nephrologists was circulated internationally. Data were summarized with descriptive statistics. Pearson's chi-square tests were used in inferential univariate analyses.
Results: The 209 respondents from 36 countries had collectively seen ~1600 children with GPA/MPA; 144 had seen more than two in the preceding 5 years. Standardized and validated clinical assessment tools to score disease severity, activity, and damage were used by 59, 63, and 36%, respectively; barriers to use included lack of knowledge and limited perceived utility. Therapy varied significantly: use of rituximab rather than cyclophosphamide was more common among respondents from the USA (OR = 2.7 [1.3-5.5], p = 0.0190, n = 139), those with >5 years of independent practice experience (OR = 3.8 [1.3-12.5], p = 0.0279, n = 137), and those who had seen >10 children with GPA/MPA in their careers (OR = 4.39 [2.1-9.1], p = 0.0011, n = 133). Respondents who had treated >10 patients were also more likely to continue maintenance therapy for at least 24 months (OR = 3.0 [1.4-6.4], p = 0.0161, n = 127). Ninety six percent of respondents believed in a need for pediatric-specific treatment guidelines; 46% supported adaptation of adult guidelines while 69% favoured guidelines providing a limited range of treatment options to allow comparison of effectiveness through a registry.
Conclusions: These data provide a rationale for developing pediatric-specific consensus treatment guidelines for GPA/MPA. While pediatric rheumatologist uptake of existing clinical tools has been limited, guideline uptake may be enhanced if outcomes of consensus-derived treatment options are evaluated within the framework of an international registry.
abstract_id: PUBMED:25986390
Rituximab for treatment of severe renal disease in ANCA associated vasculitis. Background: Rituximab (RTX) is approved for remission induction in ANCA associated vasculitis (AAV). However, data on use of RTX in patients with severe renal disease is lacking.
Methods: We conducted a retrospective multi-center study to evaluate the efficacy and safety of RTX with glucocorticoids (GC) with and without use of concomitant cyclophosphamide (CYC) for remission induction in patients presenting with e GFR less than 20 ml/min/1.73 m(2). We evaluated outcomes of remission at 6 months (6 M), renal recovery after acute dialysis at diagnosis, e-GFR rise at 6 M, patient and renal survival and adverse events.
Results: A total 37 patients met the inclusion criteria. The median age was 61 years. (55-73), 62 % were males, 78 % had new diagnosis and 59 % were MPO ANCA positive. The median (IQR) e-GFR at diagnosis was 13 ml/min/1.73 m(2) (7-16) and 15 required acute dialysis. Eleven (30 %) had alveolar hemorrhage. Twelve (32 %) received RTX with GC, 25 (68 %) received RTX with GC and CYC and seventeen (46 %) received plasma exchange. The median (IQR) follow up was 973 (200-1656) days. Thirty two of 33 patients (97 %) achieved remission at 6 M and 10 of 15 patients (67 %) requiring dialysis recovered renal function. The median prednisone dose at 6 M was 6 mg/day. The mean (SD) increase in e-GFR at 6 months was 14.5 (22) ml/min/m(2). Twelve patients developed ESRD during follow up. There were 3 deaths in the first 6 months. When stratified by use of concomitant CYC, there were no differences in baseline e GFR, use of plasmapheresis, RTX dosing regimen or median follow up days between the groups. No differences in remission, renal recovery ESRD or death were observed.
Conclusions: This study of AAV patients with severe renal disease demonstrates that the outcomes appear equivalent when treated with RTX and GC with or without concomitant CYC.
abstract_id: PUBMED:30203375
Biologics for childhood systemic vasculitis. Recent advances have allowed better understanding of vasculitis pathogenesis and led to more targeted therapies. Two pivotal randomized controlled trials, RITUXVAS and rituximab in ANCA-associated vasculitis (RAVE), provide high-quality evidence demonstrating rituximab (RTX) is efficacious in inducing remission in adult ANCA-associated vasculitis (AAV) patients compared with cyclophosphamide (CYC). RAVE also demonstrated superiority of RTX to oral CYC for induction of remission in relapsing disease. Disappointingly, the RTX regimen was not associated with reduction in early serious adverse events. At least nine randomized trials are in progress, aiming to further delineate optimal dosing and duration of RTX therapy in AAV. In particular, the 6-month interim results of the PEPRS trial provide encouraging data specific to children. Due to special concerns related to growth, preservation of fertility, and potential for high cumulative medication doses, children with AAV should be considered as candidates for RTX even as a first-line remission induction therapy. Two randomized clinical trials have defined the role of infliximab in Kawasaki disease (KD), which appears to be as an alternative to a second infusion of intravenous immunoglobulin (IVIG) for treatment-resistant disease. Support for other biologics in the treatment of AAV or for biologics in the treatment of other vasculidities is largely lacking due to either unimpressive trial results or lack of trials. Except for the KD trials and the PEPRS, trials enrolling children remain scant. This review touches on the key trials and case series with biologics in the treatment of vasculitis that have influenced practice and shaped current thinking.
abstract_id: PUBMED:34054846
Rituximab Associated Hypogammaglobulinemia in Autoimmune Disease. Objective: To evaluate the characteristics of patients with autoimmune disease with hypogammaglobulinemia following rituximab (RTX) and describe their long-term outcomes, including those who commenced immunoglobulin replacement therapy.
Methods: Patients received RTX for autoimmune disease between 2003 and 2012 with immunoglobulin G (IgG) <7g/L were included in this retrospective series. Hypogammaglobulinemia was classified by nadir IgG subgroups of 5 to <7g/L (mild), 3 to <5g/L (moderate) and <3g/L (severe). Characteristics of patients were compared across subgroups and examined for factors associated with greater likelihood of long term hypogammaglobulinemia or immunoglobulin replacement.
Results: 142 patients were included; 101 (71%) had anti-neutrophil cytoplasm antibody (ANCA) associated vasculitis (AAV), 18 (13%) systemic lupus erythematosus (SLE) and 23 (16%) other conditions. Mean follow-up was 97.2 months from first RTX. Hypogammaglobulinemia continued to be identified during long-term follow-up. Median time to IgG <5g/L was 22.5 months. Greater likelihood of moderate hypogammaglobulinemia (IgG <5g/L) and/or use of immunoglobulin replacement therapy at 60 months was observed in patients with prior cyclophosphamide exposure (odds ratio (OR) 3.60 [95% confidence interval (CI) 1.03 - 12.53], glucocorticoid use at 12 months [OR 7.48 (95% CI 1.28 - 43.55], lower nadir IgG within 12 months of RTX commencement [OR 0.68 (95% CI 0.51 - 0.90)] and female sex [OR 8.57 (95% CI 2.07 - 35.43)]. Immunoglobulin replacement was commenced in 29/142 (20%) and associated with reduction in infection rates, but not severe infection rates.
Conclusion: Hypogammaglobulinemia continues to occur in long-term follow-up post-RTX. In patients with recurrent infections, immunoglobulin replacement reduced rates of non-severe infections.
abstract_id: PUBMED:18021517
Disease-specific quality indicators, guidelines, and outcome measures in vasculitis. Measuring quality of care in the anti neutrophil cytoplasm antibody (ANCA) associated vasculitides (AAV) has become more complex, because the introduction of immunosuppressive therapy has resulted in a substantial improvement in survival. Early diagnosis remains a problem, because many patients are seen by non-specialists who may not recognize vasculitis or fail to initiate therapy promptly. A comprehensive assessment to determine the pattern and severity of organ involvement allows a specialist to plan a therapeutic regimen, and to manage co-morbidity effectively. Recent guidelines from the European League Against Rheumatism (EULAR) address the conduct of high-quality clinical trials in vasculitis. Risk factors for poor outcome in vasculitis are probably similar in the different forms of AAV. The risk factors are discussed in the context of failing to achieve remission, relapse, organ failure, and death. Factors indicating a poor prognosis include: the presence of high disease activity at diagnosis (which increases mortality risk even though it is associated with a greater likelihood of response to therapy); the pattern of organ involvement, for example with cardiac features carrying an adverse outcome in Wegener's granulomatosis; significant damage; renal impairment; persistence of ANCA; elderly age at diagnosis; under-use of cyclophosphamide and glucocorticoids in the first 3 months of treatment; persistent nasal carriage of Staphylococcus aureus; and the increased risk of bladder cancer in patients who are given large amounts of cyclophosphamide.
abstract_id: PUBMED:31414912
Treatment of systemic necrotizing vasculitides: recent advances and important clinical considerations. Introduction: Primary systemic necrotizing vasculitides (SNVs) include polyarteritis nodosa, Kawasaki disease, ANCA-associated vasculitides, IgA vasculitis, and cryoglobulinemic vasculitis. All are rare but potentially severe, life-threatening conditions. Evidence-based treatments are well established, but continue to evolve and management requires some expertise. Areas covered: The objectives of this review are to outline results of the main recent therapeutic studies for SNV, which have led to the establishment of current treatment strategies and significant improvement in patients' outcomes, and to describe knowledge gaps that ongoing research hopes to bridge. Expert opinion: Therapy is mainly dictated by diagnosis, disease extent, and severity. In ANCA-associated vasculitis, an initial induction phase consists of tapering glucocorticoids combined with specific immunosuppressants. Maintenance therapy begins after 3 to 6 months, once all evidence of active disease has resolved, and can require years of therapy to prevent relapse. Results from ongoing and future trials for vasculitis will likely impact these treatment approaches. Entirely avoiding GC may become possible, perhaps even the next gold standard, if medications such as avacopan are confirmed to be safe and effective. New combination strategies, more individualized for each patient, may also prove to be more effective, faster.
abstract_id: PUBMED:26044574
Treatment of severe renal disease in ANCA positive and negative small vessel vasculitis with rituximab. Background/aims: Rituximab and glucocorticoids are a non-inferior alternative to cyclophosphamide and glucocorticoid therapy for induction of remission in antineutrophil cytoplasmic antibody (ANCA) associated vasculitis (AAV) patients with moderate renal disease. The efficacy and safety of this approach in patients with severe renal impairment are unknown. We report the outcomes and safety profile of rituximab and glucocorticoid therapy for induction of remission in patients with AAV and ANCA-negative vasculitis presenting with severe renal disease.
Methods: A multicenter, retrospective, cohort study was conducted between 2005 and 2014. Patients with new or relapsing disease with an estimated glomerular filtration rate (eGFR) of ≤20 ml/min/1.73 m(2) treated with rituximab and glucocorticoid induction with or without plasmapheresis were included. Fourteen patients met the inclusion criteria. The primary outcomes were rate of remission and dialysis independence at 6 months. The secondary outcomes were eGFR at 6 months, end-stage renal disease (ESRD), survival rates and adverse events.
Results: All patients were Caucasian, and 57% were male. The mean eGFR was 12 ml/min/1.73 m(2) at diagnosis. All patients achieved remission with a median time to remission of 55 days. Seven patients required dialysis at presentation of which 5 patients recovered renal function and discontinued dialysis by 6-month follow-up. The mean eGFR for the 11 patients without ESRD who completed 6-month follow-up was 33 ml/min/1.73 m(2). Four patients ultimately developed ESRD, and one died during the follow-up period.
Conclusion: Patients with AAV and severe renal disease achieve high rates of remission and dialysis independence when treated with rituximab and glucocorticoids without cyclophosphamide.
Answer: The study analyzed in PUBMED:22859342 aimed to determine if adult disease severity subclassifications could predict the use of cyclophosphamide (CYC) in children with ANCA-associated vasculitis (AAV). The researchers applied the European Vasculitis Study (EUVAS) and Wegener's Granulomatosis Etanercept Trial (WGET) disease severity subclassification systems to pediatric patients with AAV from the A Registry for Childhood Vasculitis (ARChiVe). They found that the severity subgroup was associated with the treatment group in both the EUVAS and WGET systems, indicating a strong correlation with physician choice of treatment. However, there were instances where the treatment received by the patients was not concordant with their assigned severity subclass, with some children classified as having less severe disease receiving CYC and some classified as having severe disease not receiving CYC. This suggests that while adult severity subclassifications can be indicative of treatment decisions in pediatric AAV, they do not always predict the use of cyclophosphamide in children with AAV. |
Instruction: Does early emotional distress predict later child involvement in gambling?
Abstracts:
abstract_id: PUBMED:20723278
Does early emotional distress predict later child involvement in gambling? Objective: Younger people are engaging in gambling, with some showing excessive involvement. Although a consequence of gambling could be anxiety and depression, emotional distress could be a precursor to gambling involvement. This could reflect developmental proneness toward problem behaviour. We assessed whether early emotional distress directly influences later gambling or if it operates through an indirect pathway.
Methods: Using a prospective longitudinal design, an intentional subsample of children from the 1999 kindergarten cohort of the Montreal Longitudinal Preschool Study (Quebec) from intact families were retraced in 2005 for follow-up in Grade 6. Consenting parents and children were separately interviewed. Key child variables and sources included kindergarten teacher ratings of emotional distress and impulsivity and self-reported parent and child gambling.
Results: Higher levels of teacher-rated emotional distress in kindergarten significantly predicted a higher propensity toward later gambling behaviour. Impulsivity, a factor often comorbidly present with emotional distress, completely explained this predictive relation above and beyond potential child- and family-related confounds, including parental gambling.
Conclusions: Children with higher levels of emotional distress at kindergarten were more inclined toward child gambling behaviour in Grade 6. The influence of early emotional distress completely vanished when behaviours reflecting impulsivity were considered when predicting later child gambling behaviour. The relation between emotional distress and child gambling involvement in children was thus explained by its comorbidity with early impulsivity. This study does not rule out the possibility that emotional distress could become a correlate or consequence of excessive involvement in gambling activities at a later developmental period.
abstract_id: PUBMED:33391090
The Roles of Fluid Intelligence and Emotional Intelligence in Affective Decision-Making During the Transition to Early Adolescence. The current study mainly explored the influence of fluid intelligence (IQ) and emotional intelligence (EI) on affective decision-making from a developmental perspective, specifically, during the transition from childhood into early adolescence. Meanwhile, their age-related differences in affective decision-making were explored. A total of 198 participants aged 8-12 completed the Iowa Gambling Task (IGT), the Cattell's Culture Fair Intelligence Test and the Trait Emotional Intelligence Questionnaire-Child Form. Based on the net scores of IGT, the development of affective decision-making ability did not increase monotonically with age, and there was a developmental trend of an impaired IGT performance in early adolescence (aged 11-12), especially in the early learning phase (first 40 trials) of the IGT. More importantly, IQ and EI played different roles for children and early adolescents: IQ and EI jointly predicted the IGT performance for 8-10 years old children, whereas only EI contributed to the IGT performance of 11-12 years old early adolescents. The present study extends the evidence how cognitive processing and emotional processing interact in affective decision-making from the developmental perspective. Furthermore, it provides insights of future research and intervention with early adolescents' poor affective decision-making.
abstract_id: PUBMED:28458113
Maternal psychological distress and child decision-making. Background: There is much research to suggest that maternal psychological distress is associated with many adverse outcomes in children. This study examined, for the first time, if it is related to children's affective decision-making.
Methods: Using data from 12,080 families of the Millennium Cohort Study, we modelled the effect of trajectories of maternal psychological distress in early-to-middle childhood (3-11 years) on child affective decision-making, measured with a gambling task at age 11.
Results: Latent class analysis showed four longitudinal types of maternal psychological distress (chronically high, consistently low, moderate-accelerating and moderate-decelerating). Maternal distress typology predicted decision-making but only in girls. Specifically, compared to girls growing up in families with never-distressed mothers, those exposed to chronically high maternal psychological distress showed more risk-taking, bet more and exhibited poorer risk-adjustment, even after correction for confounding. Most of these effects on girls' decision-making were not robust to additional controls for concurrent internalising and externalising problems, but chronically high maternal psychological distress was associated positively with risk-taking even after this adjustment. Importantly, this association was similar for those who had reached puberty and those who had not.
Limitations: Given the study design, causality cannot be inferred. Therefore, we cannot propose that treating chronic maternal psychological distress will reduce decision-making pathology in young females.
Conclusions: Our study suggests that young daughters of chronically distressed mothers tend to be particularly reckless decision-makers.
abstract_id: PUBMED:27592413
Decision making, cognitive distortions and emotional distress: A comparison between pathological gamblers and healthy controls. Background And Objectives: The etiology of problem gambling is multifaceted and complex. Among others factors, poor decision making, cognitive distortions (i.e., irrational beliefs about gambling), and emotional factors (e.g., negative mood states) appear to be among the most important factors in the development and maintenance of problem gambling. Although empirical evidence has suggested that cognitive distortions facilitate gambling and negative emotions are associated with gambling, the interplay between cognitive distortions, emotional states, and decision making in gambling remains unexplored.
Methods: Pathological gamblers (N = 54) and healthy controls (N = 54) completed the South Oaks Gambling Screen (SOGS), the Iowa Gambling Task (IGT), the Gambling Related Cognitions Scale (GRCS), and the Depression Anxiety Stress Scale (DASS-21).
Results: Compared to healthy controls, pathological gamblers showed poorer decision making and reported higher scores on measures assessing cognitive distortions and emotional distress. All measures were positively associated with gambling severity. A significant negative correlation between decision making and cognitive distortions was also observed. No associations were found between poor decision making and emotional distress. Logistic regression analysis indicated that cognitive distortions, emotional distress, and poor decision making were significant predictors of problem gambling.
Limitations: The use of self-report measures and the absence of female participants limit the generalizability of the reported findings.
Conclusions: The present study is the first to demonstrate the mutual influence between irrational beliefs and poor decision making, as well as the role of cognitive bias, emotional distress, and poor decision making in gambling disorder.
abstract_id: PUBMED:20480423
Iowa Gambling Task performance and emotional distress interact to predict risky sexual behavior in individuals with dual substance and HIV diagnoses. HIV+ substance-dependent individuals (SDIs) show emotional distress and executive impairment, but in isolation these poorly predict sexual risk. We hypothesized that an executive measure sensitive to emotional aspects of judgment (Iowa Gambling Task; IGT) would identify HIV+ SDIs whose sexual risks were influenced by emotional distress. We assessed emotional distress and performance on several executive tasks in 190 HIV+ SDIs. IGT performance interacted significantly with emotional distress, such that only in better performers were distress and risk related. Our results are interpreted using the somatic marker hypothesis and indicate that the IGT identifies HIV+ SDIs for whom psychological distress influences HIV risk.
abstract_id: PUBMED:33333137
Neural substrates of the interplay between cognitive load and emotional involvement in bilingual decision making. Prior work has reported that foreign language influences decision making by either reducing access to emotion or imposing additional cognitive demands. In this fMRI study, we employed a cross-task design to assess at the neural level whether and how the interaction between cognitive load and emotional involvement is affected by language (native L1 vs. foreign L2). Participants completed a Lexico-semantic task where in each trial they were presented with a neutrally or a negatively valenced word either in L1 or L2, either under cognitive load or not. We manipulated cognitive load by varying the difficulty of the task: to increase cognitive demands, we used traditional characters instead of simplified ones in L1 (Chinese), and words with capital letters instead of lowercase letters in L2 (English). After each trial, participants decided whether to take a risky decision in a gambling game. During the Gamling task, left amygdala and right insula were more activated after having processed a negative word under cognitive load in the Lexico-semantic task. However, this was true for L1 but not for L2. In particular, in L1, cognitive load facilitated rather than hindered access to emotion. Further suggesting that cognitive load can enhance emotional sensitivity in L1 but not in L2, we found that functional connectivity between reward-related striatum and right insula increased under cognitive load only in L1. Overall, results suggest that cognitive load in L1 can favor access to emotion and lead to impulsive decision making, whereas cognitive load in L2 can attenuate access to emotion and lead to more rational decisions.
abstract_id: PUBMED:32951057
Children's Road-Crossing Behavior: Emotional Decision Making and Emotion-Based Temperamental Fear and Anger. Objective: Child pedestrian injuries represent a global public health burden. To date, most research on psychosocial factors affecting children's risk of pedestrian injury focused on cognitive aspects of children's functioning in traffic. Recent evidence suggests, however, that emotional aspects such as temperament-based fear and anger/frustration, as well as executive function-based emotional decision making, may also affect children's safety in traffic. This study examined the role of emotions on children's pedestrian behavior. Three hypotheses were considered: (a) emotion-based temperament factors of fear and anger/frustration will predict children's risky decisions and behaviors; (b) emotional decision making will predict risky pedestrian decisions and behaviors; and (c) children's pedestrian decision making will mediate relations between emotion and risky pedestrian behavior. The role of gender was also considered.
Methods: In total, 140 6- to 7-year-old children (M = 6.7 years, SD = 0.39; 51% girls) participated. Parent-report subscales of Child Behavior Questionnaire measured temperamental fear and anger/frustration. The Hungry Donkey Task, a modified version of Iowa Gambling Task for children, measured children's emotional decision making, and a mobile virtual reality pedestrian environment measured child pedestrian behavior.
Results: Greater anger/frustration, lesser fear, and more emotional decision making all predicted poorer pedestrian decision making. The mediational model demonstrated that pedestrian decision making, as assessed by delays entering safe traffic gaps, mediated the relation between emotion and risky pedestrian behavior. Analyses stratified by gender showed stronger mediation results for girls than for boys.
Conclusions: These results support the influence of emotions on child pedestrian behavior and reinforce the need to incorporate emotion regulation training into child pedestrian education programs.
abstract_id: PUBMED:31084279
Intensive online videogame involvement: A new global idiom of wellness and distress. Extending classic anthropological "idioms of distress" research, we argue that intensive online videogame involvement is better conceptualized as a new global idiom, not only of distress but also of wellness, especially for emerging adults (late teens through the 20s). Drawing on cognitive anthropological cultural domain interviews conducted with a small sample of U.S. gamers (N = 26 free-list and 34 pile-sort respondents) (Study 1) and a large sample of survey data on gaming experience (N = 3629) (Study 2), we discuss the cultural meaning and social context of this new cultural idiom of wellness and distress. Our analysis suggests that the "addiction" frame provides a means for gamers to communicate their passion and commitment to online play, even furthering their enthusiasm for the hobby and community in the process, but also a way for players to express and even resolve life distress such as depression and loneliness. The American Psychiatric Association (APA) has recently included "Internet gaming disorder" (IGD) as a possible behavioral addiction, akin to gambling, warranting further consideration for eventual formal inclusion in the next iteration of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Our study leads us to suggest that clinicians only sparingly use IGD as a clinical category, given that medical and gamer understandings of "addictive" play differ so markedly. This includes better distinguishing positive online gaming involvement-also sometimes framed by gamers as "addictive"-from other play patterns more clearly entailing distress and dysfunction.
abstract_id: PUBMED:27995842
Maternal depression and trajectories of child internalizing and externalizing problems: the roles of child decision making and working memory. Background: Maternal depression may affect the emotional/behavioural outcomes of children with normal neurocognitive functioning less severely than it does those without. To guide prevention and intervention efforts, research must specify which aspects of a child's cognitive functioning both moderate the effect of maternal depression and are amenable to change. Working memory and decision making may be amenable to change and are so far unexplored as moderators of this effect.
Method: Our sample was 17 160 Millennium Cohort Study children. We analysed trajectories of externalizing (conduct and hyperactivity) and internalizing (emotional and peer) problems, measured with the Strengths and Difficulties Questionnaire at the ages 3, 5, 7 and 11 years, using growth curve models. We characterized maternal depression, also time-varying at these ages, by a high score on the K6. Working memory was measured with the Cambridge Neuropsychological Test Automated Battery Spatial Working Memory Task, and decision making (risk taking and quality of decision making) with the Cambridge Gambling Task, both at age 11 years.
Results: Maternal depression predicted both the level and the growth of problems. Risk taking and poor-quality decision making were related positively to externalizing and non-significantly to internalizing problems. Poor working memory was related to both problem types. Neither decision making nor working memory explained the effect of maternal depression on child internalizing/externalizing problems. Importantly, risk taking amplified the effect of maternal depression on internalizing problems, and poor working memory that on internalizing and conduct problems.
Conclusions: Impaired decision making and working memory in children amplify the adverse effect of maternal depression on, particularly, internalizing problems.
abstract_id: PUBMED:32656746
Parental Gambling as a Moderator of the Child Delinquency-Gambling Relationship: Does Having a Role Model in the Home Make a Difference? The relationship between delinquency and gambling has drawn significant attention from researchers in the behavioral and social sciences, yet there are aspects of this relationship that remain largely unexplored. The role of "third variables" in moderating the connection between child delinquency and gambling involvement is one such aspect. Accordingly, the current study set out to examine the impact of parent gambling involvement on the child delinquency-gambling relationship in a sample of 3089 adolescents (1576 males, 1513 females) from Cohort K of the Longitudinal Study of Australian Children (LSAC-K). Using cross-sectional data from Wave 7 of the LSAC, the current study tested the possibility that gambling involvement in parents may moderate the relationship between delinquency and gambling in their offspring. Results obtained from a regression analysis revealed that the link between delinquency, conceptualized as an early marker of general deviance, and gambling variety, as measured across ten different forms of wagering (e.g., Casino gambling, sports betting, horse and dog racing, scratch tickets), changed as a function of parental involvement in these same ten behaviors. In seeking clarity on this relationship, it was noted that the association between child delinquency and gambling involvement grew in proportion to the strength of parental involvement in gaming activities. Hence, having a gambling role model in the home may increase opportunities and incentive for gambling in children predisposed to antisocial behavior or general deviance.
Answer: Yes, early emotional distress does predict later child involvement in gambling. A study with a prospective longitudinal design found that higher levels of teacher-rated emotional distress in kindergarten significantly predicted a higher propensity toward later gambling behavior in Grade 6. However, this predictive relation was completely explained by impulsivity, a factor often comorbid with emotional distress, when considering potential child- and family-related confounds, including parental gambling. This suggests that while children with higher levels of emotional distress at kindergarten were more inclined toward child gambling behavior later on, the influence of early emotional distress on later child gambling behavior was explained by its comorbidity with early impulsivity (PUBMED:20723278). |
Instruction: Are anxiety disorders more frequent in subjects with eating disorders?
Abstracts:
abstract_id: PUBMED:14593310
Are anxiety disorders more frequent in subjects with eating disorders? Objective: We designed a controlled study comparing referred women with an eating disorder (ED) to a matched normal control group to answer the following questions: what are the frequencies of anxiety disorders (AD) in anorexia nervosa (AN) and bulimia nervosa (BN), according to DSM-IV criteria? Are AD significantly more frequent among women with an ED than among women from the community?
Method: We assessed frequencies of six specific AD among 271 women with a current diagnosis of AN or BN and 271 controls, using the Mini International Neuropsychiatric Interview (MINI), French DSM-IV version.
Results: Seventy-one percent of both the AN and the BN subjects had a lifetime comorbidity with at least one AD, significantly more (p<0.001) than the percentage of controls with an AD. Prevalence was significantly higher in the ED groups than in controls for most types of AD, and between 41.8% and 53.3% of comorbid cases had an AD preceding the onset of the ED.
Conclusion: Evidence that AD are significantly more frequent in subjects with ED than in the community has important etiological and therapeutic implications.
abstract_id: PUBMED:16142042
Are anxiety or depressive disorders more frequent among one of the anorexia or bulimia nervosa subtype? Unlabelled: Our objective was to answer the following question: are there differences between diagnostic groups of eating disorders (ED) for the prevalence of depressive and anxiety disorders, when clinical differences between the groups are taken into account (ie age of subjects, ED duration, inpatient or outpatient status, and Body Mass Index)?
Method: We evaluated the frequency of anxiety disorders and depressive disorders in 271 subjects presenting with a diagnosis of either anorexia nervosa or bulimia, using the Mini International Neuropsychiatric Interview (MINI), DSM IV version. We compared the prevalences between sub-groups of anorexics (AN-R and AN-BN), between sub-groups of bulimics (BN-P and BN-NP) and between anorexics and bulimics while adjusting for the variables defined below.
Results: Current or lifetime comorbidity of anxiety and depressive disorders did not differ between AN-Rs and AN-BNs, nor between BN-Ps and BN-NPs. Only current diagnoses of agoraphobia and obsessive-compulsive disorder were significantly more frequent in anorexics than in bulimics.
Conclusion: The greater frequency of comorbidity between obsessive-compulsive disorder and AN compared to BN, already well documented, is not questioned. The remaining anxiety disorders are equally frequent among all the diagnostic types of ED.
abstract_id: PUBMED:32458276
Psychiatric symptoms are frequent in idiopathic intracranial hypertension patients. Idiopathic intracranial hypertension (IIH) is a rare disease with an incidence rate of 0.5-2.0/100,000/year. Characteristic symptoms are headache and several degrees of visual impairment. Psychiatric symptoms in association with IIH are usually poorly described and underestimated. In this study, we evaluated IIH subjects to determine the association with psychiatric symptoms. We evaluated thirty consecutive patients with IIH submitted to neurosurgery from January 2017 to January 2020 in two Brazilian tertiary hospitals. They underwent clinical evaluation, obtaining medical history, comorbidities, body mass index (BMI-kg/m2), and applying Neuropsychiatric Inventory Questionnaire (NPI-Q). There were 28 females and 2 males. Ages ranged from 18 to 66 years old, with mean age of 37.97 ± 12.78. Twenty-five (83%) presented comorbidities, being obese and having arterial hypertension the most frequent. Body mass index ranged from 25 to 35 kg/m2 and mean value was 31 ± 3.42. After application of Neuropsychiatric Interview, 26 of 30 presented psychiatric symptoms (86%). Depression-anxiety syndromes were reported in 25 patients (83%). Nighttime disturbances were reported by 14 subjects (46%). Appetite and eating disorders were described by 23 (76%). Psychiatric symptoms in association with IIH are usually poorly described and underestimated. In our sample, twenty-six out of 30 (86%) reported psychiatric symptoms. We highlight the high prevalence of psychiatric symptoms among IIH patients and the need of managing these patients with a multidisciplinary team, including psychiatrists.
abstract_id: PUBMED:12686367
Anxiety disorders in subjects seeking treatment for eating disorders: a DSM-IV controlled study. Women who were referred with an eating disorder (ED) were compared with a matched normal control group to answer the following questions: What are the frequencies of anxiety disorders in cases of anorexia and bulimia nervosa diagnosed according to DSM-IV criteria? Are anxiety disorders significantly more frequent among women with an eating disorder than among women from the community? We assessed the frequencies of six specific anxiety disorders among 271 women with a current diagnosis of anorexia or bulimia nervosa and 271 controls, using the Mini-International Neuropsychiatric Interview, French DSM-IV version. A lifetime comorbidity with at least one anxiety disorder was found in 71% of both the anorexic and the bulimic subjects, significantly higher than the percentage of controls with an anxiety disorder. The prevalence was significantly higher in the eating disorder groups than in controls for most types of anxiety disorder, and between 41.8 and 53.3% of comorbid cases had an anxiety disorder preceding the onset of the eating disorder. Anxiety disorders are significantly more frequent in subjects with eating disorders than in volunteers from the community, a finding that has important etiological and therapeutic implications.
abstract_id: PUBMED:20537213
High frequency of psychopathology in subjects wishing to lose weight: an observational study in Italian subjects. Objective: To investigate the frequency of psychiatric disorders in subjects wishing to lose weight categorized according to BMI.
Design: Cross-sectional study.
Setting: An academic outpatient clinical nutrition service in Italy.
Subjects: A total of 207 subjects (thirty-nine men and 168 women; mean age: 38·7 (sd 14·1) years) consecutively attending the study centre for the first time between January 2003 and December 2006.
Results: In the entire study group, eighty-three (40 %) subjects had a psychiatric disorder according to criteria of the Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revision. Eating disorders were the most prevalent psychiatric condition (thirty-six subjects, 17·4 %), followed by mood and anxiety disorders (9·7 % and 8·7 %, respectively). The frequency of psychiatric disorders among different BMI categories was as follows: 75·0 % in underweight, 50·0 % in normal weight, 33·3 % in overweight and 33·3 % in obese subjects.
Conclusions: Psychiatric disorders may be frequently found in subjects wishing to lose weight. Our results highlight the importance of psychiatric assessment especially in underweight and normal-weight subjects.
abstract_id: PUBMED:7581415
An age-matched comparison of subjects with binge eating disorder and bulimia nervosa. The purpose of this study was to compare data from a group of obese subjects with binge eating disorder (BED) with data from a group of normal weight bulimia nervosa (BN) subjects. Subjects were compared using the Eating Disorder Questionnaire (EDQ), the Eating Disorder Inventory (EDI), the Personality Disorders Questionnaire for DSM-III-R (PDQ-R), the Hamilton Anxiety and Depression Rating Scales, and the Beck Depression Inventory. A group of 35 age-matched subjects were selected retrospectively from treatment study subjects. The EDQ findings indicated that members of the BN group desired a lower body mass index, were more afraid of becoming fat, and more uncomfortable with their binge eating behavior than the BED group members. The BED subjects had a younger age of onset of binge eating behavior (14.3) than the BN subjects (19.8), even though both groups started dieting at a similar age (BED = 15.0, BN = 16.2). The EDI results showed BN subjects had more eating and weight-related pathology, with significantly higher scores on five of the eight subscales. On the PDQ-R more BN subjects endorsed Axis II impairment (BN = 69%, BED = 40%). While demonstrating greater eating pathology in the BN group, this study also found significant pathology and distress in BED subjects.
abstract_id: PUBMED:11407271
Is cocoa a psychotropic drug? Psychopathologic study of a population of subjects self-identified as chocolate addicts The aim of this work was to search for eating disorders, DSM III-R Axis I mental disorders, personality disorders, and addictive behavior, in self-labeled "chocolate addicts". Subjects were recruited through advertisements placed in a university and a hospital. Fifteen subjects were included, 3 men and 12 women aged between 18 and 49. Most of them were not overweight, although 7 thought they had a weight problem. They consumed an average of 50 g per day of pure cacao and, for 13 subjects, this consumption was lasting since childhood or adolescence. The psychological effects of chocolate, as indicated by the subjects, consisted in feelings of increased energy or increased concentration ability, and in an anxiolytic effect during stress. Seven subjects described minor withdrawal symptoms. None of the subjects reached the thresholds for eating disorders on the EAT and BULIT scales. The structured interview (MINI) identified an important ratio of subjects with a history of major depressive episode (13/15), and one woman was currently experiencing a major depressive episode. Four people suffered, or had suffered from anxiety disorders. Although only one subject satisfied all criteria for a personality disorder on the DIP-Q, seven displayed some pathological personality features. The self-labeled "chocoholics" do not seem to suffer from eating disorders, but may represent a population of psychologically vulnerable and depression--or anxiety--prone people. They seem to use chocolate as a light psychotropic drug able to relieve some of their distress. The amount of cacao consumed, although very chronically, remains moderate, and they rarely display other addictive behaviors.
abstract_id: PUBMED:10463064
Psychopathological characteristics of recovered bulimics who have a history of physical or sexual abuse. We sought to clarify the influence of a history of sexual or physical abuse on a variety of psychopathologies in subjects with bulimia nervosa (BN). To avoid confounding effects, the presence of a history of sexual or physical abuse, lifetime axis I disorders, and personality disorders were assessed through direct structured interviews in 44 subjects recovered from BN for at least 1 year. Twenty abused subjects (45%) were significantly more likely than 24 subjects without abuse to have severe general psychopathology and eating disturbance. Compared with nonabused subjects, abused subjects showed a trend toward more frequent lifetime diagnoses of posttraumatic stress disorder and substance dependence. These results suggest that abusive experiences may be associated with some psychopathology of BN, particularly related to anxiety, substance abuse, and more severe core eating disorder pathology.
abstract_id: PUBMED:18555056
Eating disorder psychopathology does not predict the overweight severity in subjects seeking weight loss treatment. Background: Many obese subjects show relevant psychological distress. The aims of this study were to assess the psychopathological and clinical features of a sample of overweight or obese subjects seeking weight loss treatment and to evaluate the possible, significant associations between the levels of overweight and the specific and general eating disorder psychopathology.
Methods: A total of 397 consecutive overweight (body mass index > or =25 kg/m(2)) patients seeking treatment for weight loss at the Outpatient Clinic for Obesity of the University of Florence were studied. The prevalence of binge eating disorder was assessed using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria. All subjects were assessed through the self-report version of the Eating Disorder Examination Questionnaire, the Beck Depression Inventory, and the State-Trait Anxiety Inventory.
Results: The current prevalence of binge eating disorder was 24.2%; 35% of the subjects were overweight during childhood. High prevalence rates of clinical significant depressive (38%) and anxious (71.5%) symptoms were observed. Binge eating disorder, the severity of specific eating disorder psychopathology, and depressive and anxious symptoms were not associated with the severity of overweight.
Conclusions: The severity of the specific and general eating disorder psychopathology does not predict the levels of overweight. A positive association between severe eating disorder psychopathology and clinical depression was observed.
abstract_id: PUBMED:32913825
Clinical presentation and need for treatment of a cohort of subjects accessing to a mental illness prevention service. Outreaching activities decrease prognostic accuracy of at-risk mental state defining tools, over-attracting subjects who are not at increased risk of mental illness. The setting was a mental illness primary indicated prevention outpatients service embedded within the Psychiatry Unit of Padua University Hospital, Italy. Help-seeking patients accessing the service between January 2018 and December 2018 were evaluated with validated tools assessing functioning, at-risk mental state, schizotypal personality features, depressive and anxious symptoms, together with medical and family history collection. The primary outcome was the prevalence of drop in functioning at presentation according to the Social and Occupational Functioning Assessment Scale (SOFAS). Secondary outcomes were diagnoses according to DSM-5 criteria and meeting criteria for at-risk mental state. Fifty-nine patients accessed the service, mean age was 18.8 (2.12) years old, 54.2% were females. Virtually all subjects (97.7%) had a drop in functioning. Baseline primary diagnoses were depressive episode in 33%, anxiety disorder in 21%, personality disorder in 17%, adjustment disorder 9%, conduct disorder 7%, schizophrenia spectrum disorder 5%, bipolar disorder 5%, eating disorder in 1.7%, dissociative disorder 1.7%. Overall, 59.1% met at-risk mental state criteria. Lower functioning was associated with anxious symptoms (p=0.031), a family history of mental illness (p=0.045) and of suicide (p=0.042), and schizotypal personality traits (p=0.036). Subjects accessing a prevention service embedded within the mental health department already present a trans-diagnostic drop in functioning, mainly due to a non-psychotic mental disorder, with at-risk mental state in one patient out of two, and schizophrenia or bipolar disorder already present in only 10% of subjects. Prevention service within mental health facility setting appears to properly detect subjects in need of treatment with a drop in functioning, at risk of developing severe mental illness, without any outreaching activity in the general population.
Answer: Yes, anxiety disorders are more frequent in subjects with eating disorders. Studies have shown that a significant proportion of individuals with eating disorders, such as anorexia nervosa (AN) and bulimia nervosa (BN), also have comorbid anxiety disorders (AD). A controlled study comparing women with an eating disorder to a matched normal control group found that 71% of both the AN and BN subjects had a lifetime comorbidity with at least one AD, which was significantly more than the percentage of controls with an AD (PUBMED:14593310). The prevalence of most types of AD was significantly higher in the eating disorder groups than in controls, and between 41.8% and 53.3% of comorbid cases had an AD preceding the onset of the eating disorder (PUBMED:14593310; PUBMED:12686367).
Furthermore, the study also indicated that the greater frequency of comorbidity between obsessive-compulsive disorder and AN compared to BN is well documented, although the remaining anxiety disorders are equally frequent among all the diagnostic types of ED (PUBMED:16142042). This evidence suggests that anxiety disorders are indeed more frequent in subjects with eating disorders, which has important implications for understanding the etiology and treatment of these conditions. |
Instruction: Does bleach disinfection of syringes protect against hepatitis C infection among young adult injection drug users?
Abstracts:
abstract_id: PUBMED:12410020
Does bleach disinfection of syringes protect against hepatitis C infection among young adult injection drug users? Background: Hepatitis C virus (HCV) has emerged as a major public health problem among injection drug users. In this analysis we examine whether disinfection of syringes with bleach has a potentially protective effect on anti-HCV seroconversion.
Methods: We conducted a nested case-control study comparing 78 anti-HCV seroconverters with 390 persistently anti-HCV seronegative injection drug users. These data come from the Second Collaborative Injection Drug Users Study, a prospective cohort study that recruited injection drug users from five U.S. cities between 1997 and 1999. We used conditional logistic regression to determine the effect of bleach disinfection of syringes on anti-HCV seroconversion.
Results: Participants who reported using bleach all the time had an odds ratio (OR) for anti-HCV seroconversion of 0.35 (95% confidence interval = 0.08-1.62), whereas those reporting bleach use only some of the time had an odds ratio of 0.76 (0.21-2.70), when compared with those reporting no bleach use.
Conclusions: These results suggest that bleach disinfection of syringes, although not a substitute for use of sterile needles or cessation of injection, may help to prevent HCV infection among injection drug users.
abstract_id: PUBMED:24920342
Individual and socio-environmental factors associated with unsafe injection practices among young adult injection drug users in San Diego. Unsafe injection practices significantly increase the risk of hepatitis C virus (HCV) and human immunodeficiency virus (HIV) infection among injection drug users (IDUs). We examined individual and socio-environmental factors associated with unsafe injection practices in young adult IDUs in San Diego, California. Of 494 IDUs, 46.9 % reported receptive syringe sharing and 68.8 % sharing drug preparation paraphernalia in the last 3 months. Unsafe injection practices were associated with increased odds of having friends who injected drugs with used syringes, injecting with friends or sexual partners, and injecting heroin. Perceived high susceptibility to HIV and perceived barriers to obtaining sterile syringes were associated with increased odds of receptive syringe sharing, but not with sharing injection paraphernalia. Over half the IDUs reported unsafe injection practices. Our results suggest that personal relationships might influence IDUs' perceptions that dictate behavior. Integrated interventions addressing individual and socio-environmental factors are needed to promote safe injection practices in this population.
abstract_id: PUBMED:20726768
Survival of hepatitis C virus in syringes: implication for transmission among injection drug users. Background: We hypothesized that the high prevalence of hepatitis C virus (HCV) among injection drug users might be due to prolonged virus survival in contaminated syringes.
Methods: We developed a microculture assay to examine the viability of HCV. Syringes were loaded with blood spiked with HCV reporter virus (Jc1/GLuc2A) to simulate 2 scenarios of residual volumes: low void volume (2 microL) for 1-mL insulin syringes and high void volume (32 microL) for 1-mL tuberculin syringes. Syringes were stored at 4 degrees C, 22 degrees C, and 37 degrees C for up to 63 days before testing for HCV infectivity by using luciferase activity.
Results: The virus decay rate was biphasic (t1/2alpha= 0.4 h and t1/2beta = 28 hh). Insulin syringes failed to yield viable HCV beyond day 1 at all storage temperatures except 4 degrees , in which 5% of syringes yielded viable virus on day 7. Tuberculin syringes yielded viable virus from 96%, 71%, and 52% of syringes after storage at 4 degrees, 22 degrees, and 37 degrees for 7 days, respectively, and yielded viable virus up to day 63.
Conclusions: The high prevalence of HCV among injection drug users may be partly due to the resilience of the virus and the syringe type. Our findings may be used to guide prevention strategies.
abstract_id: PUBMED:23908999
Prescription drug misuse and risk behaviors among young injection drug users. Prescription drug misuse among young adults, especially opioids, is a substantial public health problem in the United States. Although risks associated with injection of illicit drugs are well established, injection and sexual risks associated with misuse of prescription drugs are under-studied. Forty young injection drug users aged 16 to 25 who reported injection of a prescription drug were recruited in 2008-09 in Los Angeles and New York City. Descriptive quantitative and qualitative data were analyzed to illustrate risky injection and sexual behaviors reported in this sample. Over half of participants engaged in risky injection behavior, three-quarters engaged in risky sexual behavior, nearly half reported both risky behaviors, and five did not report either risk behavior while misusing a prescription drug. Prescription opioids, tranquilizers, and stimulants were misused in the context of risky sexual behaviors while only opioids were misused in the context of injection risk behaviors. Access to clean syringes, attitudes and beliefs regarding hepatitis C, and risk reduction through partner selection were identified as key themes that contextualized risk behaviors. Although these findings help identify areas to target educational campaigns, such as prevention of sexually transmitted infections, risk behaviors specifically associated with prescription drug misuse warrant further study.
abstract_id: PUBMED:26034767
Disinfection of syringes contaminated with hepatitis C virus by rinsing with household products. Background. Hepatitis C virus (HCV) transmission among people who inject drugs (PWID) is associated with the sharing of injection paraphernalia. People who inject drugs often "disinfect" used syringes with household products when new syringes are unavailable. We assessed the effectiveness of these products in disinfecting HCV-contaminated syringes. Methods. A genotype-2a reporter virus assay was used to assess HCV infectivity in syringes postrinsing. Hepatitis C virus-contaminated 1 mL insulin syringes with fixed needles and 1 mL tuberculin syringes with detachable needles were rinsed with water, Clorox bleach, hydrogen peroxide, ethanol, isopropanol, Lysol, or Dawn Ultra at different concentrations. Syringes were either immediately tested for viable virus or stored at 4°C, 22°C, and 37°C for up to 21 days before viral infectivity was determined. Results. Most products tested reduced HCV infectivity to undetectable levels in insulin syringes. Bleach eliminated HCV infectivity in both syringes. Other disinfectants produced virus recovery ranging from high (5% ethanol, 77% ± 12% HCV-positive syringes) to low (1:800 Dawn Ultra, 7% ± 7% positive syringes) in tuberculin syringes. Conclusions. Household disinfectants tested were more effective in fixed-needle syringes (low residual volume) than in syringes with detachable needles (high residual volume). Bleach was the most effective disinfectant after 1 rinse, whereas other diluted household products required multiple rinses to eliminate HCV. Rinsing with water, 5% ethanol (as in beer), and 20% ethanol (as in fortified wine) was ineffective and should be avoided. Our data suggest that rinsing of syringes with household disinfectants may be an effective tool in preventing HCV transmission in PWID when done properly.
abstract_id: PUBMED:25347412
Association of opioid agonist therapy with lower incidence of hepatitis C virus infection in young adult injection drug users. Importance: Injection drug use is the primary mode of transmission for hepatitis C virus (HCV) infection. Prior studies suggest opioid agonist therapy may reduce the incidence of HCV infection among injection drug users; however, little is known about the effects of this therapy in younger users.
Objective: To evaluate whether opioid agonist therapy was associated with a lower incidence of HCV infection in a cohort of young adult injection drug users.
Design, Setting, And Participants: Observational cohort study conducted from January 3, 2000, through August 21, 2013, with quarterly interviews and blood sampling. We recruited young adult (younger than 30 years) injection drug users who were negative for anti-HCV antibody and/or HCV RNA.
Exposures: Substance use treatment within the past 3 months, including non-opioid agonist forms of treatment, opioid agonist (methadone hydrochloride or buprenorphine hydrochloride) detoxification or maintenance therapy, or no treatment.
Main Outcomes And Measures: Incident HCV infection documented with a new positive result for HCV RNA and/or HCV antibodies. Cumulative incidence rates (95% CI) of HCV infection were calculated assuming a Poisson distribution. Cox proportional hazards regression models were fit adjusting for age, sex, race, years of injection drug use, homelessness, and incarceration.
Results: Baseline characteristics of the sample (n = 552) included median age of 23 (interquartile range, 20-26) years; 31.9% female; 73.1% white; 39.7% who did not graduate from high school; and 69.2% who were homeless. During the observation period of 680 person-years, 171 incident cases of HCV infection occurred (incidence rate, 25.1 [95% CI, 21.6-29.2] per 100 person-years). The rate ratio was significantly lower for participants who reported recent maintenance opioid agonist therapy (0.31 [95% CI, 0.14-0.65]; P = .001) but not for those who reported recent non-opioid agonist forms of treatment (0.63 [95% CI, 0.37-1.08]; P = .09) or opioid agonist detoxification (1.45 [95% CI, 0.80-2.69]; P = .23). After adjustment for other covariates, maintenance opioid agonist therapy was associated with lower relative hazards for acquiring HCV infection over time (adjusted hazard ratio, 0.39 [95% CI, 0.18-0.87]; P = .02).
Conclusions And Relevance: In this cohort of young adult injection drug users, recent maintenance opioid agonist therapy was associated with a lower incidence of HCV infection. Maintenance treatment with methadone or buprenorphine for opioid use disorders may be an important strategy to prevent the spread of HCV infection among young injection drug users.
abstract_id: PUBMED:12489619
Injection drug users report good access to pharmacy sale of syringes. Objective: To examine injection drug users (IDUs) opinions and behavior regarding purchase of sterile syringes from pharmacies.
Design: Focus groups.
Setting: Urban and rural sites in Colorado, Connecticut, Kentucky, and Missouri.
Patients Or Other Participants: Eight focus groups, with 4 to 15 IDU participants per group.
Interventions: Transcripts of focus group discussions were evaluated for common themes by the authors and through the use of the software program NUD*IST.
Main Outcome Measures: Knowledge of human immunodeficiency virus (HIV), pharmacy use, barriers to access from pharmacies, high-risk and risk-reducing behavior, and rural/urban difference.
Results: Almost all participants knew the importance of using sterile syringes for disease prevention and reported buying syringes from pharmacies more than from any other source. Two IDUs believed pharmacists knew the syringes were being used for injecting drugs and perceived pharmacists' sales of syringes to be an attempt to contribute to HIV prevention. Most IDUs reported that sterile syringes were relativity easy to buy from pharmacies, but most also reported barriers to access, such as having to buy in packs of 50 or 100, being made to sign a book, having to make up a story about being diabetic, or having the feeling that the pharmacists were demeaning them. While the majority of IDUs reported properly cleaning or not sharing syringes and safely disposing of them, others reported inadequate cleaning of syringes and instances of sharing syringes or of improper disposal. There were few differences in IDUs' reported ability to buy syringes among states or between urban and rural sites, although the data suggest that IDUs could buy syringes more easily in the urban settings.
Conclusion: For the most part, participants understood the need for sterile syringes in order to protect themselves from HIV, hepatitis B virus, and hepatitis C virus and saw pharmacies as the best source of sterile syringes. Although these data are not generalizable, they suggest that pharmacists can and do serve as HIV-prevention service providers in their communities.
abstract_id: PUBMED:17145000
Factors associated with sharing syringes among street-recruited injecting drug users Background And Objective: To estimate the prevalence of risk behaviors related to drug use and to identify factors associated with of accepting and passing on used syringes among intravenous drug users (IDU) recruited in Barcelona city and other surrounding areas in 2004.
Subjects And Method: A cross-sectional study of IDU recruited from the streets by ex-IDU interviewers. A standardized and anonymous questionnaire which explored behaviors in the previous 6 months was used. Saliva samples were collected to determine human immunodeficiency virus (HIV) prevalence. Logistic regression models were used to identify determinants of accepting and passing on used syringes.
Results: Of the 300 participants, 17.7% and 13.3% accepted and passed on used syringes, respectively. 74.8% practiced front-backloading (to prepare the drug solution in a syringe and then divide it up into other syringes) and 77.9% shared other equipment. The prevalence of HIV was 57.7%. The predictors of accepting used syringes were using more than 4 drugs (odds ratio [OR] = 5.6), having a positive hepatitis C virus status (OR = 7.3), practising front/backloading (OR = 12.6) and having an IDU steady partner (OR = 2.9); and with passing on used syringes were practicing front/backloading (OR = 4.9), having an IDU steady partner (OR = 5.8), and having sexual risk behaviors with casual partners (OR = 4.0). Starting to inject drugs older than 15 years of age was a protective factor (OR = 0.2).
Conclusions: The prevalence of risk behaviors related to drug use remains high, especially indirect sharing, just as the prevalence of HIV and hepatitis C virus. Prevention programs should be targeted to IDU, especially to young IDU, polydrug users and those who have an IDU steady partner.
abstract_id: PUBMED:26080690
HIV, Hepatitis C, and Abstinence from Alcohol Among Injection and Non-injection Drug Users. Individuals using illicit drugs are at risk for heavy drinking and infection with human immunodeficiency virus (HIV) and/or hepatitis C virus (HCV). Despite medical consequences of drinking with HIV and/or HCV, whether drug users with these infections are less likely to drink is unclear. Using samples of drug users in treatment with lifetime injection use (n = 1309) and non-injection use (n = 1996) participating in a large, serial, cross-sectional study, we investigated the associations between HIV and HCV with abstinence from alcohol. About half of injection drug users (52.8 %) and 26.6 % of non-injection drug users abstained from alcohol. Among non-injection drug users, those with HIV were less likely to abstain [odds ratio (OR) 0.55; adjusted odds ratio (AOR) 0.58] while those with HCV were more likely to abstain (OR 1.46; AOR 1.34). In contrast, among injection drug users, neither HIV nor HCV was associated with drinking. However, exploratory analyses suggested that younger injection drug users with HIV or HCV were more likely to drink, whereas older injection drug users with HIV or HCV were more likely to abstain. In summary, individuals using drugs, especially non-injection users and those with HIV, are likely to drink. Age may modify the risk of drinking among injection drug users with HIV and HCV, a finding requiring replication. Alcohol intervention for HIV and HCV infected drug users is needed to prevent further harm.
abstract_id: PUBMED:27285847
Event-level analyses of sex-risk and injection-risk behaviors among nonmedical prescription opioid users. Background: Nonmedical prescription opioid use has been linked to hepatitis C virus (HCV) infection among people who inject drugs and with using high dead space syringes that retain more blood and transfer more HIV if shared. Little is known regarding its effects on sex-risk behaviors.
Objectives: This paper examines event-level associations between nonmedical prescription opioid use and sharing high dead space syringes (injection risk) and unprotected intercourse (sex risk) behaviors.
Methods: We recruited 1,985 participants from two overlapping risk groups-drug users and men who have sex with men (MSM)-and their sex partners. Participants completed an interview that included event-level sex questions with recent sex partners and injection questions with recent injection partners. We used multivariable generalized estimating equations (GEE) to assess the associations between nonmedical prescription opioid use and unprotected intercourse during sexual encounters and sharing syringes during injection episodes, while adjusting for within-person correlations.
Results: When both partners used nonmedical prescription opioids, its use was independently associated with unprotected intercourse in sexual encounters (OR = 2.24; 95% CI = 1.12, 4.49). The use of nonmedical prescription opioids was also associated with sharing high dead space syringes during injection episodes (OR = 6.57; 95% CI = 1.63, 26.51).
Conclusion: Nonmedical prescription opioid use is associated with an increase in the risk of unprotected sex and sharing high dead space syringes. HIV and HCV prevention interventions for nonmedical prescription opioid users should address sex-risk behaviors and encourage the use of acceptable low dead space needles and syringes.
Answer: Yes, bleach disinfection of syringes has a potentially protective effect against hepatitis C infection among young adult injection drug users. A study examining the effect of bleach disinfection on anti-HCV seroconversion found that participants who reported using bleach all the time had a lower odds ratio (OR) for anti-HCV seroconversion (OR = 0.35, 95% confidence interval = 0.08-1.62) compared to those who did not use bleach. Although not a substitute for using sterile needles or stopping injection altogether, bleach disinfection may help to prevent HCV infection among this population (PUBMED:12410020). |
Instruction: Day and night surgery: is there any influence in the patient postoperative period of urgent colorectal intervention?
Abstracts:
abstract_id: PUBMED:26744066
Day and night surgery: is there any influence in the patient postoperative period of urgent colorectal intervention? Background: Medical activity performed outside regular work hours may increase risk for patients and professionals. There is few data with respect to urgent colorectal surgery. The aim of this work was to evaluate the impact of daytime versus nighttime surgery on postoperative period of patients with acute colorectal disease.
Methods: A retrospective study was conducted in a sample of patients with acute colorectal disease who underwent urgent surgery at the General Surgery Unit of Braga Hospital, between January 2005 and March 2013. Patients were stratified by operative time of day into a daytime group (surgery between 8:00 and 20:59) and the nighttime group (21:00-7:59) and compared for clinical and surgical parameters. A questionnaire was distributed to surgeons, covering aspects related to the practice of urgent colorectal surgery and fatigue.
Results: A total of 330 patients were included, with 214 (64.8%) in the daytime group and 116 (35.2%) in the nighttime group. Colorectal cancer was the most frequent pathology. Waiting time (p < 0.001) and total length of hospital stay (p = 0.008) were significantly longer in the daytime group. There were no significant differences with respect to early or late complications. However, 100% of surgeons reported that they are less proficient during nighttime.
Conclusions: Among patients with acute colorectal disease subjected to urgent surgery, there was no significant association between nighttime surgery and the presence of postoperative medical and surgical morbidities. Patients who were subjected to daytime surgery had longer length of stay at the hospital.
abstract_id: PUBMED:38114879
The impact of patient activation on the effectiveness of digital health remote post-discharge follow-up and same-day-discharge after elective colorectal surgery. Background: Low patient activation (PA) is associated with worse postoperative outcomes, however, its impact on the effectiveness of digital health interventions is unknown. We sought to determine the impact of PA on the effectiveness of digital health application for remote post-discharge follow-up for patients undergoing elective colectomy.
Methods: Data analysis included a control cohort (CC) of patients undergoing elective colorectal surgery from 10/2017 to 04/2018 without the digital health intervention and a digital application cohort (DAC) that received a smart phone application for remote post-discharge follow-up from 03/2021 to 08/2022, including a subset of same-day discharge (SDD) patients. PA was measured using the Patient Activation Measure (PAM; score 0-100) and categorized into low (< 55.1) and high (≥ 55.1). The PAM was administered 4-6 weeks before surgery in the DAC group and on postoperative day (POD) 1 in the CC group. The main outcome measure was 30-day emergency department (ED) visits.
Results: A total of 164 patients were included (89DAC with 50 SDD, 75CC), with no differences in patient characteristics other than more stoma closures in the DAC group. Overall, 77% of patients had high PA level, with no difference between CC and DAC (77% vs. 81%, p = 0.25). There was no difference in ED visits between CC and DAC (19% vs. 18%, p = 0.90). Overall, low PA was associated more ED visits (29% vs 14%, p = 0.04). In the SDD subgroup, low PA patients had more ED visits (38% vs. 7%, p = 0.015). PA level did not affect app usage metrics. On multiple regression, only low PA remained independently associated with ED visits (OR 3.42, 95%CI 1.27, 9.24).
Conclusion: Low PA remains an important predictor of surgical outcomes after elective colorectal surgery regardless of the use of a digital health application for remote post-discharge follow-up. This suggests that improving PA levels may improve postoperative outcomes.
abstract_id: PUBMED:24697968
Outcomes in patients undergoing urgent colorectal surgery. Background: Urgent surgery for acute intestinal presentations is generally associated with worse outcomes than elective procedures. This study assessed the outcomes of patients undergoing urgent colorectal surgery.
Methods: Patients were identified from a prospective database. Surgery was classified as urgent when performed as soon as possible after resuscitation and usually within 24 h. Outcome measures included 30 days mortality, return to theatre, anastomotic leak and overall survival.
Results: Two hundred forty-nine patients were included in the analysis. Median age was 65 years (interquartile range 48-74). The most common presentations were obstruction (52.2%) and perforation (23.6%). Cancer was the disease process responsible for presentation in 47.8% of patients. Thirty-day mortality was 6.8%. Age (odds ratio 1.08 95% confidence interval (CI) 1.02-1.15; P = 0.01), American Society of Anesthesiologists 4 (odds ratio 7.14 95% CI 1.67-30.4; P = 0.008) and cancer (odds ratio 6.61 95% CI 1.53-28.45; P = 0.011) were independent predictors of 30 days mortality. Relaparotomy was required in six (2.4%) cases. A primary anastomosis was performed in 156 (62.6%) patients. Anastomotic leak occurred in four (2.5%) patients. In patients with cancer, overall 5-year survival was 28% (95% CI 19-37), corresponding to 54% (95% CI 35-70) for stages I and II, 50% (95% CI 24-71) for stage III and 6% (95% CI 1-17) for stage IV disease. Urgent surgery was independently associated with worse overall survival (hazard ratio 2.65; 95% CI 1.76-3.99; P < 0.001).
Conclusion: In patients undergoing an urgent resection within a colorectal unit, performing a primary anastomosis is feasible and safe in the majority, relaparotomies are required in a minority and urgent surgery is an important predictor of worse prognosis in those with colorectal cancer.
abstract_id: PUBMED:33523269
Laparoscopic colorectal resection for deep infiltrating endometriosis: can we reliably predict anastomotic leakage and major postoperative complications in the early postoperative period? Background: Anastomotic leakage (AL) and major complications after colorectal resection for deep infiltrating endometriosis (DIE) have a remarkable impact on patient outcomes. The aim of this study is to assess the predictive value of C-reactive protein (CRP), procalcitonin (PCT), white blood cell count (WBCs) and the Dutch Leakage Score (DLS) as reliable markers in the early diagnosis of AL and major complications after laparoscopic colorectal resection for DIE.
Methods: 262 consecutive women undergoing laparoscopic colorectal resection for DIE between September 2017 and September 2018 were prospectively enrolled. WBCs, CRP, PCT and DLS were recorded at baseline and on postoperative day (POD) 2, 3 and 6 then statistically analyzed as predictors of AL and severe postoperative complications.
Results: The AL rate was 3.2%. The major morbidity rate was 11.2%. No postoperative mortality was recorded. The postoperative trend of DLS and serum levels of CRP and PCT, but not WBCs, were significantly higher in women developing AL and severe complications. DLS had better sensitivity and specificity than biomarkers on all postoperative days as a predictor of AL and major complications. CRP and PCT have a low positive predictive value (PPV) and a high negative predictive value (NPV) for AL and major complications on POD3 and POD6. The risk of malnutrition was significantly related to AL.
Conclusions: The combination of DLS as a standardized postoperative clinical monitoring system and CRP and PCT as serum biomarkers, allows the exclusion of AL and major complications in the early postoperative period after laparoscopic colorectal resection for DIE, thus ensuring a safe patient discharge.
abstract_id: PUBMED:35579864
The patient perspective on the preoperative colorectal cancer care pathway and preparedness for surgery and postoperative recovery-a qualitative interview study. Background And Objectives: This study aimed to explore colorectal cancer (CRC) patients' perspectives and experiences regarding the preoperative surgical care pathway and their subsequent preparedness for surgery and postoperative recovery.
Methods: CRC patients were recruited using purposive sampling and were interviewed three times (preoperatively, and 6 weeks and 3 months postoperatively) using semistructured telephone interviews. Interviews were audiotaped, transcribed verbatim and analysed independently by two researchers using thematic analysis with open coding.
Results: Data saturation was achieved after including 18 patients. Preoperative factors that contributed to a feeling of preparedness for surgery and recovery were patient-centred- and professional healthcare organization, sincere and personal guidance, and thorough information provision. Postoperatively, patients with complications or physical complaints experienced unmet information needs regarding the impact of complications and what to expect from postoperative recovery.
Conclusions: The preoperative period is a vital period to prepare patients for surgery and recovery in which patients most value personalized information, personal guidance and professionalism. According to CRC patients, the feeling of preparedness for surgery and recovery can be improved by continually providing dosed information. This information should provide the patient with patient-tailored perspectives regarding the impact of (potential) complications and what to expect during recovery.
abstract_id: PUBMED:28180953
Colorectal ESD in day surgery. Background: Colorectal endoscopic submucosal dissection (ESD) was developed in Japan and is growing in popularity in Europe. Patients undergoing a colorectal ESD procedure in Japan are hospitalized for several days. In this study, we investigated the feasibility of colorectal ESD as an outpatient procedure in a European setting.
Methods: A prospective cohort of all patients undergoing colorectal ESD at Danderyds Hospital, Stockholm, Sweden from April 2014 to December 2015 were studied. Data on patient demographics, procedural outcome and 30-day readmissions were studied. Data are presented as median (range), mean ± SD or true numbers as appropriate.
Results: A total of 182 patients underwent a colorectal ESD during the study period. Of the 182 these, 11 were scheduled for an in-hospital procedure and of 171 patients scheduled for a day-procedure and 15 were admitted for observation. The remaining 156 patients were discharged after 2-4 h of observation and comprise the study cohort. Mean age was 69 years. Median lesion size was 28 (10-120) mm, and median resection time was 65 (10-360) min. Lesions were located as follows: anal canal 1 (0.6%), rectum 52 (33.3%), sigmoid 17 (10.9%), descending 3 (1.9%), transverse 24 (15.4%), ascending 29 (18.6%), and cecum 30 (19.2%). Eight (5.1%) of the 156 day surgery patients returned for medical attention during the postoperative 30-day period. Three of them were admitted for in-hospital observation. None of the day surgery patients required any surgical intervention.
Conclusion: Uncomplicated colorectal ESD can safely be carried out in a day surgery setting.
abstract_id: PUBMED:34403857
Urgent Inpatient Colectomy Carries a Higher Morbidity and Mortality Than Elective Surgery. Background: Emergency colorectal surgery confers a higher risk of adverse outcomes compared to elective surgery. Few studies have examined the outcomes after urgent colectomies, typically defined as those performed at the index admission, but not performed at admission in an emergency fashion. The aim of this study is to evaluate the risk of adverse outcomes following urgent inpatient colorectal surgery.
Materials And Methods: All adult patients undergoing colectomy between 2013 and 2017 in the ACS NSQIP were included in the analysis. Patients were grouped into Elective, Urgent and Emergency groups. The Urgent group was further stratified by time from admission to surgery. Baseline characteristics and 30 day outcomes were compared between the Elective, Urgent and Emergency groups using univariable and multivariable analyses.
Results: 104,486 patients underwent elective colorectal resection. 23,179 underwent urgent while 22,241 had emergency resections. Patients undergoing urgent colectomy presented with increased comorbidities, and experienced higher mortality (2.5-4.1%, AOR 2.3 (1.9 - 2.8)) compared to elective surgery (0.4%). Urgent colectomy was an independent risk factor for the majority of short term complications documented in NSQIP. Moreover, patients undergoing urgent colectomy more than a week following admission had an increased risk of bleeding, deep venous thrombosis, pulmonary embolism, urinary tract infection, and prolonged hospitalization.
Conclusion: Urgent colectomies are associated with a greater risk of adverse outcomes compared to elective surgery. Urgent status is an independent risk factor for post operative mortality and morbidity. Further characterization of this patient population and their specific challenges may help ameliorate these adverse events.
abstract_id: PUBMED:36131162
Same day discharge following elective, minimally invasive, colorectal surgery : A review of enhanced recovery protocols and early outcomes by the SAGES Colorectal Surgical Committee with recommendations regarding patient selection, remote monitoring, and successful implementation. Background: As enhanced recovery programs (ERPs) have continued to evolve, the length of hospitalization (LOS) following elective minimally invasive colorectal surgery has continued to decline. Further refinements in multimodal perioperative pain management strategies have resulted in reduced opioid consumption. The interest in ambulatory colectomy has dramatically accelerated during the COVID-19 pandemic. Severe restrictions in hospital capacity and fear of COVID transmission forced surgical teams to rethink strategies to further reduce length of inpatient stay.
Methods: Members of the SAGES Colorectal Surgery Committee began reviewing the emergence of SDD protocols and early publications for SDD in 2019. The authors met at regular intervals during 2020-2022 period reviewing SDD protocols, safe patient selection criteria, surrogates for postoperative monitoring, and early outcomes.
Results: Early experience with SDD protocols for elective, minimally invasive colorectal surgery suggests that SDD is feasible and safe in well-selected patients and procedures. SDD protocols are associated with reduced opioid use and prescribing. Patient perception and experience with SDD is favourable. For early adopters, SDD has been the natural evolution of well-developed ERPs. Like all ERPs, SDD begins in the office setting, identifying the correct patient and procedure, aligning goals and objectives, and the perioperative education of the patient and their supporting significant others. A thorough discussion with the patient regarding expected activity levels, oral intake, and pain control post operatively lays the foundation for a successful application of SDD programs. These observations may not apply to all patient populations, institutions, practice types, or within the scope of an existing ERP. However, if the underlying principles of SDD can be incorporated into an existing institutional ERP, it may further reduce the incidence of post operative ileus, prolonged LOS, and improve the effectiveness of oral analgesia for postoperative pain management and reduced opioid use and prescribing.
Conclusions: The SAGES Colorectal Surgery Committee has performed a comprehensive review of the early experience with SDD. This manuscript summarizes SDD early results and considerations for safe and stepwise implementation of SDD with a specific focus on ERP evolution, patient selection, remote monitoring, and other relevant considerations based on hospital settings and surgical practices.
abstract_id: PUBMED:31206242
Fluid management for critical patients undergoing urgent colectomy. Rationale: The present study aimed to define thresholds for perioperative fluids and weight gain after urgent colectomies.
Method: Consecutive urgent colonic resections within an enhanced recovery pathway (2011-2017) were included. Primary outcomes were postoperative complications, stratified as overall (I-V) and major (IIIb-V) according to Clavien scale. Fluid-management-related thresholds were identified through receiver operating characteristics (ROC) analysis. Outcomes were compared for patients above vs below threshold, and multivariable logistic regression was performed to identify risk factors for overall complications.
Results: Overall, complications were observed in 133 out of 224 patients (59%), severe complications in 43 patients (19%). For overall complications, area under ROC (AUROC) was 0.71, identifying a critical cut-off of 3 L of total IV fluid administration at the day of surgery (negative predictive value [NPV]: 90%). Further, a critical cut-off for postoperative weight gain of 2.3 kg at postoperative day (POD) 2 was identified (AUROC 0.7, NPV 92%). Multivariable analysis identified fluid administration of >3 L (OR 5.33; 95% CI, 2.36-12.02) and weight gain of >2.3 kg at POD 2 (OR 2.5; 95% CI, 1.13-5.53) as independent predictors for overall complications. Median length of stay was 7 (5-10) days in patients receiving <3 L at POD 0 and 13 (9-19) days in patients receiving >3 L (P < .001).
Conclusions: Fluid administration of 3 L at the day of surgery and weight gain of 2.3 kg at POD 2 may represent critical thresholds for adverse outcomes after urgent colectomy. The suggested thresholds need to be confirmed through independent validation.
abstract_id: PUBMED:20345455
The lived experience of the early postoperative period after colorectal cancer surgery. Colorectal cancer is one of the most common cancer diagnoses and undergoing colorectal cancer surgery is reported to be associated with physical symptoms and psychological reactions. Social support is described as important during the postoperative period. The purpose of this paper was to describe how patients experience the early postoperative period after colorectal cancer surgery. Interviews according a phenomenological approach were performed with 13 adult participants, within 1 week after discharge from hospital. Data were collected from August 2006 to February 2007. Analysis of the interview transcripts was conducted according to Giorgi. The essence of the phenomenon was to regain control over ones body in the early postoperative period after colorectal cancer surgery. Lack of control, fear of wound and anastomosis rupture, insecurity according to complications was prominent findings. When caring for these patients it is a challenge to be sensitive, encourage and promote patients to express their feelings and needs. One possibility to empower the patients and give support could be a follow up phone call within a week after discharge.
Answer: According to the study conducted on the impact of daytime versus nighttime surgery on the postoperative period of patients with acute colorectal disease, there was no significant association between nighttime surgery and the presence of postoperative medical and surgical morbidities. However, patients who underwent daytime surgery had longer lengths of stay at the hospital. Despite the lack of significant differences in early or late complications between the two groups, it is noteworthy that all surgeons reported feeling less proficient during nighttime surgeries (PUBMED:26744066). This suggests that while the timing of surgery (day vs. night) may not directly influence postoperative complications, it could affect other aspects of patient care, such as the length of hospital stay and potentially the performance of the surgical team. |
Instruction: Does pulmonary artery venting decrease the incidence of postoperative atrial fibrillation after conventional aortocoronary bypass surgery?
Abstracts:
abstract_id: PUBMED:24370797
Does pulmonary artery venting decrease the incidence of postoperative atrial fibrillation after conventional aortocoronary bypass surgery? Objectives: In this study, we tested the hypothesis that pulmonary artery venting would decrease the incidence of atrial fibrillation after coronary artery bypass surgery.
Methods: This prospective study included 301 patients who underwent complete myocardial revascularization with cardiopulmonary bypass in our department during a 2-year period. The patients were randomly divided into 2 groups: group I included 151 patients who underwent aortic root venting and group II included 150 patients who underwent pulmonary arterial venting for decompression of the left heart. Pre-, peri-, and postoperative risk factors for atrial fibrillation were assessed in both groups.
Results: The mean age was similar in the 2 groups. The mean number of anastomoses was significantly higher in group I (2.8 ± 0.8) than in group II (2.4 ± 0.8) (P = 0.001). The mean cross-clamp time was 42.7 ± 17.4 minutes in group I and 54.1 ± 23.8 minutes in group II (P = 0.001). The mean cardiopulmonary bypass time was 66.4 ± 46.1 minutes in group I and 77.4 ± 28.6 minutes in group II (P = 0.08). The incidence of atrial fibrillation was 14.5% (n = 21) in group I and 6.5% (n = 10) in group II (P = 0.02). Multivariate regression analysis showed that pulmonary artery venting decreased the postoperative incidence of atrial fibrillation by 17.6%.
Conclusions: Pulmonary arterial venting may be used as an alternative to aortic root venting during on-pump coronary bypass surgery, especially in patients at high risk of postoperative atrial fibrillation.
abstract_id: PUBMED:25595923
Postoperative atrial fibrillation is not pulmonary vein dependent: results from a randomized trial. Background: Although often short-lived and self-limiting, postoperative atrial fibrillation (POAF) is a well-recognized postoperative complication of cardiac surgery and is associated with a 2-fold increase in cardiovascular mortality and morbidity.
Objective: Our aim was to determine whether intraoperative bilateral pulmonary vein radiofrequency ablation decreases the incidence of POAF in patients undergoing coronary artery bypass grafting (CABG).
Methods: A total of 175 patients undergoing CABG was prospectively randomized to undergo adjuvant bilateral radiofrequency pulmonary vein ablation in addition to CABG (group A; n = 89) or CABG alone (group B; n = 86). Intraoperative pulmonary vein isolation was confirmed by the inability to pace the heart via the pulmonary veins after ablation. All patients received postoperative β-blocker.
Results: There was no difference in the incidence of POAF in the treatment group who underwent adjuvant pulmonary vein ablation (group A; 37.1%) compared with the control group who did not (group B; 36.1%) (P = .887). There were no differences in postoperative inotropic support, antiarrhythmic drug use, need for oral anticoagulation, and complication rates. The mean length of postoperative hospital stay was 8.2 ± 6.5 days in the ablation group and 6.7 ± 4.6 days in the control group (P < .001).
Conclusion: Adjuvant pulmonary vein isolation does not decrease the incidence of POAF or its clinical impact but increases the mean length of stay in the hospital. The mechanism of POAF does not appear to depend on the pulmonary veins.
abstract_id: PUBMED:29615335
Does left atrial appendage ligation during coronary bypass surgery decrease the incidence of postoperative stroke? Objective: The study objective was to evaluate the association between surgical left atrial appendage ligation and in-hospital stroke incidence after coronary artery bypass grafting among patients with atrial fibrillation.
Methods: A retrospective cohort study was performed by using the Nationwide Inpatient Sample between 2008 and 2014. All atrial fibrillation patients who underwent coronary artery bypass graft were included and categorized as left atrial appendage ligation or control group. Propensity score-weighted regression analyses were performed to assess the impact of left atrial appendage ligation on stroke incidence.
Results: A total of 234,642 patients were identified, among whom 20,664 (8.81%) received concomitant left atrial appendage ligation. The national postoperative stroke incidence was 0.92%. Results of the propensity-weighted regression analysis showed no significant association between LAA ligation and control with regard to postoperative stroke (odds ratio [OR], 0.83; confidence interval [CI], 0.57-1.22; P = .35), pericardial complications (OR, 1.15; CI, 0.88-1.49; P = .31), hemorrhage and/or hematoma (OR, 1.08; CI, 0.99-1.17; P = .07), mortality (OR, 1.29; CI, 0.99-1.68; P = .06), and length of stay (coefficient -0.21; CI, -0.44-0.02; P = .08). There was no specific CHA2DS2VASC score cutoff above which left atrial appendage ligation was demonstrated to have lower postoperative stroke incidence.
Conclusions: The postoperative stroke risk after coronary artery bypass grafting was low at approximately 1% among patients with atrial fibrillation in the United States. Concomitant left atrial appendage ligation was not associated with lower postoperative stroke risk.
abstract_id: PUBMED:34519375
Perioperative risk factors for new-onset postoperative atrial fibrillation among patients after isolated coronary artery bypass grafting: A retrospective study. Aims: Incidence of atrial fibrillation is considerably high after open heart surgery, which may prolong hospitalization and increase mortality. The aim of the present study is to investigate the perioperative risk factors for the occurrence of new-onset atrial fibrillation following isolated coronary artery bypass grafting.
Design: A retrospective study.
Methods: A total of 327 Korean patients recorded to have undergone first-time isolated coronary artery bypass grafting and no preoperative history of atrial fibrillation were included. The data were obtained from electronic health record from January 2010 to December 2019 at a tertiary care hospital. Predictors of new-onset atrial fibrillation after the surgery were identified by multivariate logistic regression analysis.
Results: The incidence rate of new-onset atrial fibrillation after coronary artery bypass grafting was approximately 28.4%, and the highest occurrence rate was 44.1% on postoperative day 2. Our main finding showed that advanced age was the strongest predictor of atrial fibrillation after coronary artery bypass grafting. In addition, history of stroke and depression, chronic obstructive pulmonary disease and intraoperative use of intra-aortic balloon pump were shown to be the risk factors.
Conclusion: Our findings showed that approximately 28% patients had new-onset atrial fibrillation after the surgery. Healthcare professionals should proactively assess risk factors for postoperative atrial fibrillation and focus more on older adults with pre-existing comorbidities, such as stroke, depression and chronic obstructive pulmonary disease.
Impact: Older adults with history of stroke, depression and comorbid chronic obstructive pulmonary disease should be carefully monitored closely during perioperative period. The study highlights that early assessment of new-onset postoperative atrial fibrillation can contribute to promote the quality of nursing care and frontline nurses may be a vital role in timely detection of atrial fibrillation after surgery. Prospective studies are required to identify the mechanisms connecting perioperative risk factors for atrial fibrillation after cardiac surgery.
abstract_id: PUBMED:16967324
Does totally endoscopic access for off-pump cardiac surgery influence the incidence of postoperative atrial fibrillation in coronary artery bypass grafting? A preliminary report. Background: The occurrence rate of atrial fibrillation (AF) after coronary artery bypass grafting, quoted in the literature, is wide ranging from 5% to over 40%. It is speculated that, off-pump coronary artery bypass grafting (OPCAB) and also minimally invasive cardiac surgery reduces the incidence of postoperative AF due to reduced trauma, ischemia, and inflammation. Current data, however, do not clearly answer the question, whether the incidence of postoperative AF is reduced in using minimally invasive techniques, ideally resulting in the combination of both small access and off-pump surgery. The aim of this study was to evaluate the incidence of postoperative AF in patients undergoing totally endoscopic off-pump coronary artery bypass grafting (TECAB).
Methods: A retrospective analysis of 72 patients undergoing myocardial revascularization was performed. Early postoperative incidence of AF was compared between three groups of patients: 24 after conventional coronary artery bypass grafting (CABG), 24 after OPCAB, and 24 after totally endoscopic off-pump CABG. Clinical profile of the patients, including factors having potential influence on postoperative AF was matched for groups.
Results: Postoperative AF occurred in 25% of the patients in the CABG group, in 16% of the patients in the OPCAB group, and in 16% of the patients in the TECAB group. This difference has no statistical significance. Risk factors and incidence of postoperative complications were comparable in all groups excepting the number of distal anastomoses. There was a statistical significance between CABG group and TECAB group.
Conclusion: Avoiding cardiopulmonary bypass and minimizing surgical trauma did not reduce the incidence of postoperative AF in this patient collective. It remains an attractive hypothesis that postoperative AF is reduced by off-pump myocardial revascularisation and minimizing surgical trauma but more robust data are required.
abstract_id: PUBMED:12587081
Thoracic epidural anesthesia does not influence the incidence of postoperative atrial fibrillation after beating heart surgery. Background: At least 20 - 30 % of patients undergoing coronary artery bypass graft surgery (CABG) or beating-heart surgery develop postoperative atrial fibrillation (AF). We evaluated the effect of thoracic epidural anesthesia (TEA) on the occurrence of postoperative AF in patients submitted to CABG without cardiopulmonary bypass (OPCABG).
Methods: We performed a retrospective analysis of 125 patients undergoing myocardial revascularization. Early postoperative incidence of AF was compared between three groups of patients - 50 after conventional CABG, 45 after OPCABG, and 30 after OPCABG combined with TEA intraoperatively and postoperatively. Clinical profile of the patients, including factors with a potential influence on postoperative AF was matched for groups.
Results: Postoperative AF occurred in 13.3 % of the TEA-treated patients, in 17.7 % of the patients in the OPCABG group, and in 26 % of the patients in the CABG group. This difference did not carry any statistical significance. Risk factors and incidence of postoperative complications were comparable in all groups.
Conclusion: TEA has no effect on the incidence of postoperative AF in patients undergoing beating-heart surgery.
abstract_id: PUBMED:26188198
Can posterior pericardiotomy reduce the incidence of postoperative atrial fibrillation after coronary artery bypass grafting?†. Objectives: Atrial fibrillation (AF) is a common complication that increases the morbidity after open heart surgery. The pathophysiology is uncertain, and its prevention remains suboptimal. The aim of this study was to assess the efficiency of posterior pericardiotomy in decreasing the incidence of pericardial effusion and postoperative AF.
Methods: This multicentre randomized prospective study included 200 patients who underwent open heart surgery; coronary artery bypass grafting procedure between June 2010 and May 2012. A posterior pericardiotomy incision was done in Group I (n = 100). A longitudinal incision, 4-cm long and 2-cm width, was made parallel and posterior to the left phrenic nerve, extending from the left inferior pulmonary vein to the diaphragm. Group II constituted the control group (n = 100). Postoperative pericardial effusion was assessed by echocardiography and rhythm follow-up was monitored daily.
Results: The incidence of postoperative AF was significantly lower in the posterior pericardiotomy group than in the control group (13 vs 30%, P = 0.01). The number of patients with remarkable postoperative pericardial effusion was significantly lower in the posterior pericardiotomy group (15 vs 50 patients, P = 0.04). Tamponade developed in 3 patients in Group II (P = 0.07). There was a significantly higher incidence of chest drainage in the posterior pericardiotomy group than in the control group (1041 ± 549 vs 911 ± 122 ml; P = 0.04). There was no significant difference between the two groups regarding hospital stay (8 vs 9 days, P > 0.05).
Conclusions: Posterior pericardiotomy is a simple, safe and effective method for reducing the incidence of postoperative pericardial effusion and related atrial fibrillation by improving pericardial drainage after coronary artery bypass grafting.
abstract_id: PUBMED:18294494
Preoperative statin therapy is not associated with a decrease in the incidence of postoperative atrial fibrillation in patients undergoing cardiac surgery. Background: Atrial fibrillation (AF) after cardiac surgery is associated with significant morbidity. We investigated whether preoperative statin therapy was associated with decreased incidence of postoperative AF in patients undergoing cardiac surgery, including isolated valve surgery and patients with low ejection fraction (EF).
Methods: A retrospective study of consecutive patients without history of AF (n = 4044) who underwent cardiac surgeries at St. Luke's Episcopal Hospital (Houston, TX), from January 1, 2003, through April 30, 2006, was conducted. Postoperative AF was assessed by continuous telemetry monitoring during hospitalization for cardiac surgery.
Results: A total of 2096 patients (52%) received preoperative statins. Atrial fibrillation occurred in 1270 patients (31.4% in both the statin and nonstatin groups). In multivariate regression analysis, age >65 years, history of valvular heart disease, rheumatic disease, pulmonary disease, and New York Heart Association class III/IV were independent predictors of increased risk, whereas female sex was associated with decreased risk. Preoperative statin therapy was not associated with decreased risk in the entire cohort (odds ratio [OR] 1.13, 95% confidence interval [CI] 0.98-1.31) or in subgroups undergoing isolated coronary artery bypass grafting (OR 1.16, 95% CI 0.97-1.43), isolated valve surgery (OR 1.09, 95% CI 0.81-1.46), or both (OR 1.09, 95% CI 0.72-1.65), or the subgroup with EF <35% (OR 1.23, 95% CI 0.84-1.82). After propensity score analysis (n = 867 patients in each group), preoperative statin therapy was not associated with decreased AF incidence (OR 1.14, 95% CI 0.92-1.41).
Conclusions: Preoperative statin therapy was not associated with decreased incidence of postoperative AF in patients undergoing cardiac surgery, including patients with low EF.
abstract_id: PUBMED:8656542
Atrial fibrillation following coronary artery bypass graft surgery: predictors, outcomes, and resource utilization. MultiCenter Study of Perioperative Ischemia Research Group. Objective: To determine the incidence, predictors, and cost of atrial fibrillation and flutter (AFIB) following coronary artery bypass graft (CABG) surgery.
Design: Prospective observational study (MultiCenter Study of Perioperative Ischemia).
Setting: Twenty-four university-affiliated hospitals in the United States from 1991 to 1993.
Subjects: A total of 2417 patients undergoing CABG with or without concurrent valvular surgery selected using a systematic sampling interval.
Measurements: Detailed preoperative, intraoperative, and postoperative data collected on standardized reporting forms.
Results: The overall incidence of postoperative AFIB was 27 percent. Independent predictors of postoperative AFIB included advanced age (odds ratio [OR], 1.24 per 5-year increase; 95 percent confidence interval [CI], 1.18-1.31); male sex (OR, 1.41; 95 percent CI, 1.09-1.81); a history of AFIB (OR, 2.28; 95 percent CI, 1.74-3.00); a history of congestive heart failure (OR, 1.31; 95 percent CI, 1.04-1.64); and a precardiopulmonary bypass heart rate of more than 100 beats per minute (OR, 1.59; 95 percent CI, 1.00-2.55). Surgical practices such as pulmonary vein venting (OR, 1.44; 95 percent CI, 1.13-1.83); bicaval venous cannulation (OR, 1.40; 95 percent CI, 1.04-1.89); postoperative atrial pacing (OR, 1.27; 95 percent CI, 1.00-1.62); and longer cross-clamp times (OR, 1.06 per 15 minutes; 95 percent CI, 1.00-1.11) also were identified as independent predictors of postoperative AFIB. Patients with postoperative AFIB remained an average of 13 hours longer in the intensive care unit and 2.0 days longer in the ward when compared with patients without AFIB.
Conclusion: Postoperative AFIB is common after CABG surgery and has a significant effect on both intensive care unit and overall hospital length of stay. In addition to expected demographic factors, certain surgical practices increase the risk of postoperative AFIB. Randomized controlled trials are necessary to determine if modification of these surgical practices, especially in patients at high risk, would decrease the incidence of postoperative AFIB.
abstract_id: PUBMED:26917198
Perioperative ascorbic acid supplementation does not reduce the incidence of postoperative atrial fibrillation in on-pump coronary artery bypass graft patients. Background: Atrial fibrillation is the most common arrhythmia following cardiac surgery. It is associated with increased hemodynamic instability, systemic embolization, and complications linked to anticoagulant therapy. Oxidative stress and consequent electrophysiological remodeling have been proposed as a cause of postoperative atrial fibrillation. Ascorbic acid supplementation was suggested as a novel and effective preventive agent. The aim of this study was to evaluate the capability of ascorbic acid to reduce the incidence of postoperative atrial fibrillation in coronary artery bypass grafting (CABG) patients.
Methods: A prospective randomized single-center trial was conducted in patients scheduled for an elective on-pump CABG surgery. Subjects in the ascorbic acid group received 2g of ascorbic acid 24h and 2h before the surgery and 1g twice a day for five days after the surgery. Postoperatively, the patients were monitored for atrial fibrillation and other complications.
Results: The ascorbic acid group consisted of 52 patients and the control group included 53 patients. The groups were well matched for baseline demographics, preoperative medications, comorbidities, and had similar intraoperative characteristics. The incidence of atrial fibrillation in the ascorbic acid group was 13.5% and 18.9% in the control group (p=0.314). No difference was found between groups in the time of occurrence of atrial fibrillation (3.71±1.89 vs. 2.91±1.58 days after the surgery; p=0.342). There was also no difference in the other observed postoperative complications.
Conclusions: The results of this study do not support the effectiveness of ascorbic acid supplementation in reducing the incidence of postoperative atrial fibrillation in elective on-pump CABG patients.
Answer: Yes, according to the study reported in abstract PUBMED:24370797, pulmonary artery venting does decrease the incidence of postoperative atrial fibrillation after conventional aortocoronary bypass surgery. The prospective study included 301 patients who underwent complete myocardial revascularization with cardiopulmonary bypass. Patients were randomly divided into two groups: group I with aortic root venting and group II with pulmonary arterial venting for decompression of the left heart. The incidence of atrial fibrillation was 14.5% in group I and 6.5% in group II (P = 0.02). Multivariate regression analysis showed that pulmonary artery venting decreased the postoperative incidence of atrial fibrillation by 17.6%. The study concluded that pulmonary arterial venting may be used as an alternative to aortic root venting during on-pump coronary bypass surgery, especially in patients at high risk of postoperative atrial fibrillation. |
Instruction: Fatal meningitis in a previously healthy young adult caused by Streptococcus pneumoniae serotype 38: an emerging serotype?
Abstracts:
abstract_id: PUBMED:15943886
Fatal meningitis in a previously healthy young adult caused by Streptococcus pneumoniae serotype 38: an emerging serotype? Background: In December 2001, a fatal case of pneumococcal meningitis in a Marine Corps recruit was identified. As pneumococcal vaccine usage in recruit populations is being considered, an investigation was initiated into the causative serotype.
Case Presentation: Traditional and molecular methods were utilized to determine the serotype of the infecting pneumococcus. The pneumococcal isolate was identified as serotype 38 (PS38), a serotype not covered by current vaccine formulations. The global significance of this serotype was explored in the medical literature, and found to be a rare but recognized cause of carriage and invasive disease.
Conclusion: The potential of PS38 to cause severe disease is documented in this report. Current literature does not support the hypothesis that this serotype is increasing in incidence. However, as we monitor the changing epidemiology of pneumococcal illness in the US in this conjugate era, PS38 might find a more prominent and concerning niche as a replacement serotype.
abstract_id: PUBMED:31312315
Meningitis caused by Streptococcus pneumoniae serotype 7a in an infant vaccinated with two doses of 13-valent pneumococcal conjugate vaccine: a case study Pneumococcal meningitis is a global scourge. It is a major cause of morbidity and mortality. In Morocco, 13-valent pneumococcal conjugate vaccine (PCV13) was introduced into the National Immunization Program in October 2010 according to the immunization schedule 2 + 1 and replaced by PCV10 in July 2012, according to the same schedule. Despite the use of the PCV13, which is essential in the fight against pneumococcal disease, the emergence of new non-vaccine serotypes always results in meningitis in children, causing serious sequelae. We report the case of an infant vaccinated with two doses of PCV13 with meningitis caused by Streptococcus pneumoniae serotype 7a. The peculiarity of this case study lies in pneumococcal meningitis due to Streptococcus pneumoniae serotype 7a not included in the PCV13 in an infant immunized by 2 doses of PCV13. We here insist on the need and the importance of an observatory for pneumococcal meningitis and of a wide epidemiological study in order to determine the serotypes in Morocco after the introduction of PCV13 and then of PCV10.
abstract_id: PUBMED:37581129
Streptococcus pneumoniae Serotype 23B Causing Asymptomatic Sinusitis Complicated by Endocarditis and Meningitis: Sequela of a Non-vaccine Serotype. We describe a rare case of a Streptococcus pneumoniae (S. pneumoniae) infection causing mitral valve endocarditis and bacterial meningitis in a previously healthy young adult male in his 20s who presented with altered mentation. Though our patient did not endorse any respiratory issues, we suspected the paranasal sinuses to have been the cryptic primary source of disseminated infection into the respiratory system and meninges due to incidental mucosal thickening being found on imaging. Blood and cerebrospinal fluid analyses and cultures revealed the proliferation of S. pneumoniae serotype 23B, despite our patient having previously received appropriate pneumococcal vaccinations in his childhood without delinquency. Ultimately, surgical replacement of the mitral valve, as well as a course of ceftriaxone, was indicated for this patient, in which full resolution of symptoms was achieved upon discharge.
abstract_id: PUBMED:28417006
First report of serotype 23B Streptococcus pneumoniae isolated from an adult patient with invasive infection in Japan. Serotype 23B Streptococcus pneumoniae was isolated from a 67-year-old Japanese patient with meningitis. This isolate was susceptible to penicillin G, while genotyped as gPISP with a mutation in a penicillin-binding motif in PBP2b. The 23B isolate was assigned to ST11996 that is related to CC439, a dominant group among serotype 23B.
abstract_id: PUBMED:24374499
Characterization of Streptococcus pneumoniae invasive serotype 19A isolates recovered in Colombia. The aim of this study was to determine the molecular characterization of invasive penicillin non-susceptible Streptococcus pneumoniae serotype 19A isolates, collected in Colombia between 1994 and 2012. A total of 115 isolates serotype 19A were analyzed. Genetic relationship of 80 isolates with minimal inhibitory concentration (MIC) to penicillin ≥0.125 μg/was determined by pulsed-field gel electrophoresis (PFGE) and selected strains were studied by multilocus sequence typing (MLST). Among the 115 isolates, resistance to penicillin in meningitis was 64.2%, in non-meningitis 32.2% were intermediate and 1.1% were high resistance. The most frequent sequence types were ST320 (33.7%), ST276 (21.5%), and ST1118 (11.2%). Five isolates were associated with the Spain(9V)-ST156 clone, and two isolates were related to Colombia(23F)-ST338 clone. S. pneumoniae serotype 19A increased in Colombia was associated with the spread of isolates genetically related to ST320 and ST276, and emergence of capsular variants of worldwide-disseminated clones.
abstract_id: PUBMED:35095809
Serotype Distribution, Antimicrobial Susceptibility, Multilocus Sequencing Type and Virulence of Invasive Streptococcus pneumoniae in China: A Six-Year Multicenter Study. Background:Streptococcus pneumoniae is an important human pathogen that can cause severe invasive pneumococcal diseases (IPDs). The aim of this multicenter study was to investigate the serotype and sequence type (ST) distribution, antimicrobial susceptibility, and virulence of S. pneumoniae strains causing IPD in China. Methods: A total of 300 invasive S. pneumoniae isolates were included in this study. The serotype, ST, and antimicrobial susceptibility of the strains, were determined by the Quellung reaction, multi-locus sequence typing (MLST) and broth microdilution method, respectively. The virulence level of the strains in the most prevalent serotypes was evaluated by a mouse sepsis model, and the expression level of well-known virulence genes was measured by RT-PCR. Results: The most common serotypes in this study were 23F, 19A, 19F, 3, and 14. The serotype coverages of PCV7, PCV10, PCV13, and PPV23 vaccines on the strain collection were 42.3, 45.3, 73.3 and 79.3%, respectively. The most common STs were ST320, ST81, ST271, ST876, and ST3173. All strains were susceptible to ertapenem, levofloxacin, moxifloxacin, linezolid, and vancomycin, but a very high proportion (>95%) was resistant to macrolides and clindamycin. Based on the oral, meningitis and non-meningitis breakpoints, penicillin non-susceptible Streptococcus pneumoniae (PNSP) accounted for 67.7, 67.7 and 4.3% of the isolates, respectively. Serotype 3 strains were characterized by high virulence levels and low antimicrobial-resistance rates, while strains of serotypes 23F, 19F, 19A, and 14, exhibited low virulence and high resistance rates to antibiotics. Capsular polysaccharide and non-capsular virulence factors were collectively responsible for the virulence diversity of S. pneumoniae strains. Conclusion: Our study provides a comprehensive insight into the epidemiology and virulence diversity of S. pneumoniae strains causing IPD in China.
abstract_id: PUBMED:32636995
Surgical wound infection caused by a multi drug resistant Streptococcus pneumoniae Serotype 19A after a total coloproctectomy with ileostomy. Streptococcus pneumoniae (S. pneumoniae) colonizes asymptomatically the human nasopharynx. This pathogen is responsible for sinusitis, otitis media, pneumonia, bacteremia and meningitis. We report the case of a 35-year-old female patient who developed a surgical wound infection by a multi drug resistant S. pneumoniae serotype 19A after a total coloprotectomy. This first found in Morocco shows the implication of multidrug resistant S. pneumoniae in surgical wound infections.
abstract_id: PUBMED:19116604
Association of serotype of Streptococcus pneumoniae with risk of severe and fatal outcome. Background: Invasive pneumococcal disease (IPD) in children may manifest as bacteremia/sepsis, bacteremic pneumonia, or meningitis, with serious outcomes that include hospitalization, neurologic sequelae, or death. The risk of severe or fatal outcome of disease is associated with host-related factors, such as age or comorbid conditions. Furthermore, there is an ongoing discussion about organism-related factors, such as the pneumococcal serotype.
Methods: Data on 494 children aged <16 years hospitalized for IPD between 1997 and 2003 in pediatric hospitals in Germany were analyzed. Serotype specific case-fatality rates and rates of severe outcome were compared using standardized mortality ratios (SMR). The risk of severe or fatal outcome for the serotype with the highest case-fatality rate was further analyzed using multivariate logistic regression adjusting for age younger than 1 year, meningitis, sex, and immunocompromised status as potential confounders.
Results: The overall case-fatality rate was 5.3% and the rate of severe outcome was 17.0%. Serotype 7F had the highest case-fatality rate (14.8%, SMR 3.1), followed by serotypes 23F (8.3%, SMR 1.7) and 3 (8.3%, SMR 1.7). The highest rate of severe outcome was also observed for 7F (40.7%, SMR 2.4). Multivariate analysis showed an odds ratio of 4.3 (1.3-14.7) for fatal outcome and 4.0 (1.6-10.4) for severe outcome comparing 7F to all other serotypes.
Conclusions: In this study population, serotype 7F accounted for a higher risk of severe and fatal outcome than other serotypes of Streptococcus pneumoniae. In describing the epidemiology of IPD, the serotype-specific risk for severe or fatal outcome is an important complement to other serotype-specific aspects like incidence and antibiotic resistance pattern.
abstract_id: PUBMED:8757071
Purulent meningitis in an adult patient caused by multi-drug resistant Streptococcus pneumoniae 19A Unlabelled: This is the first report in Hungary about meningitis caused by multiply-resistant Streptococcus pneumoniae in a 54 year old woman. The Streptococcus pneumoniae serotype 19A was highly resistant to penicillin, ampicillin, chloramphenicol, cefuroxime and intermediate resistant to ceftriaxone. The antibiotic treatment was started with penicillin and ampicillin. The antibiotic treatment was changed to vancomycin + ceftriaxone, vancomycin + rifampicin and vancomycin + imipenem on the 2nd, 4th and 11th hospital days, respectively. She died on the 29th day with symptoms of sepsis. Necropsy and microscopic examination of the brain revealed localised inspissed layer of purulent exsudat over the convexities. Under this area the small wessels and capillaries were thrombotic and were surrounded by severe degeneration and necrosis in the white matter.
Recommendations: Streptococcus pneumoniae has to be considered penicillin resistant until the organism is proved to be susceptible to penicillin. The authors advise the administration of dexamethasone, based on their own favourable 7 years experience.
abstract_id: PUBMED:32974581
Isolation and identification of Streptococcus pneumoniae serotype 6B from a patient with bacterial meningitis infection in Jakarta, Indonesia. CNS infection is a life-threatening condition in developing countries and Streptococcus pneumoniae has been reported as the most common cause of bacterial meningitis; however, there is limited data on pneumococcal meningitis in Indonesia. This cross-sectional study aimed to isolate and identity S. pneumoniae strains from cerebrospinal fluid (CSF) specimens collected as part of routine testing from patients with clinically diagnosed central nervous system infection at a national referral hospital in Jakarta, Indonesia in 2017. S. pneumoniae isolation and identification were performed using conventional culture and molecular tools. Antibiotic susceptibility patterns were monitored through minimum inhibitory concentration testing. From 147 CSF specimens, one S. pneumoniae strain was identified from a patient with bacterial meningitis symptoms. The isolate was serotype 6B (ST5661) and susceptible to 18 antimicrobial agents tested, including penicillin, tetracycline, and the macrolide group. Our data provide insights into the epidemiology of invasive pneumococcal disease in Indonesia.
Answer: The case of fatal meningitis in a previously healthy young adult caused by Streptococcus pneumoniae serotype 38 (PS38) is documented in the literature. The investigation into this case revealed that the pneumococcal isolate was identified as serotype 38, which is not covered by current vaccine formulations (PUBMED:15943886). The global significance of this serotype was explored, and it was found to be a rare but recognized cause of carriage and invasive disease. The current literature does not support the hypothesis that this serotype is increasing in incidence. However, there is a concern that as the epidemiology of pneumococcal illness changes in the US during the conjugate vaccine era, PS38 might find a more prominent and concerning niche as a replacement serotype (PUBMED:15943886).
This case highlights the ongoing challenge of pneumococcal disease and the potential for non-vaccine serotypes to cause severe disease. Other cases of pneumococcal meningitis caused by non-vaccine serotypes have been reported, such as serotype 7a in an infant vaccinated with two doses of 13-valent pneumococcal conjugate vaccine (PCV13) (PUBMED:31312315), and serotype 23B causing asymptomatic sinusitis complicated by endocarditis and meningitis in a young adult who had received appropriate pneumococcal vaccinations in childhood (PUBMED:37581129). These cases underscore the need for continuous surveillance and possibly the development of vaccines that cover a broader range of pneumococcal serotypes to prevent invasive pneumococcal diseases effectively. |
Instruction: Organ-specific autoantibodies in coeliac disease: do they represent an epiphenomenon or the expression of associated autoimmune disorders?
Abstracts:
abstract_id: PUBMED:9265573
Organ-specific autoantibodies in coeliac disease: do they represent an epiphenomenon or the expression of associated autoimmune disorders? Background And Aims: The occurrence of autoimmune disorders and organ-specific autoantibodies has been reported in coeliac disease. We assessed the prevalence of organ-specific autoantibodies in coeliac patients and evaluated whether their finding is an expression of associated autoimmune diseases.
Methods: Sera from 70 coeliac disease patients were tested for thyroid microsomal, gastric parietal cell, adrenal cortex and pancreatic islet cell antibodies by indirect immunofluorescence on O blood group human tissues.
Results: Eighteen coeliacs (26%) were positive for at least one of the autoantibodies studied; thyroid microsomal antibodies showed a higher prevalence (21%) than parietal cell (11%), adrenal cortex (4%) and islet cell antibodies (3%). In 15 (21%) of the 70 coeliacs studied an association with autoimmune diseases was found, including insulin dependent diabetes mellitus (6 cases), autoimmune hepatitis (3 cases), hypothyroidism (4 cases), thyrotoxicosis (1 case) and dermatomyositis (1 case). One or more organ-specific autoantibodies were positive in 12 (80%) of the 15 coeliacs with autoimmune disorders in comparison with their positivity in 6 (11%) of the 55 coeliacs without autoimmune diseases (p < 0.0001).
Conclusions: The finding of organ-specific autoantibodies in coeliac patients discloses the coexistence of a wide spectrum of immunological diseases.
abstract_id: PUBMED:37303388
Prevalence of Organ-Specific Autoimmunity in Patients With Type 1 Diabetes Mellitus. Introduction: Type 1 diabetes mellitus (T1DM) is associated with other autoimmune disorders that are characterized by presence of organ-specific autoantibodies. The present study was undertaken to assess the prevalence of organ-specific autoantibodies among newly diagnosed T1DM subjects of India and to study its relationship with glutamic acid decarboxylase antibody (GADA). We also compared the clinical and biochemical parameters in GADA-positive and -negative T1DM subjects.
Methods: In a hospital-based cross-sectional study, we studied 61 patients with newly diagnosed T1DM ≤ 30 years of age. T1DM was diagnosed on the basis of acute onset of osmotic symptoms with or without ketoacidosis, severe hyperglycaemia [blood glucose > 13.9 mmol/l (>250 mg/dl)] and insulin requirement from the onset of diabetes. Subjects were screened for autoimmune thyroid disease (thyroid peroxidase antibody [TPOAb]), celiac disease (tissue transglutaminase antibody [tTGAb]), and gastric autoimmunity (parietal cell antibody [PCA]).
Results: Of the 61 subjects, more than one-third (38%) had at least one positive organ-specific autoantibody. In particular, 13 (21.3%) were found to be positive for TPOAb, nine (14.8%) were positive for tTGAb and 11 (18%) were positive for PCA. GADA was positive in 15 (25%) subjects. The frequency of TPOAb tended to be higher in patients who had GADA positivity compared with those with no circulating GADA (40% vs. 15.2%; p=0.07). Subjects positive for GADA were also more likely to be PCA positive compared with those who were GADA negative (40 vs.10.9%, p=0.02). There were no differences in frequency of diabetic ketoacidosis, body mass index, hemoglobin A1C (HbA1c), insulin requirement or fasting C-peptide in GADA-positive and -negative patients.
Conclusion: We support the recommendation for regular screening of organ-specific autoantibodies, in particular TPOAb, tTGAb and PCA in all patients with T1DM. Detection of these autoantibodies at onset may prevent complications associated with delayed diagnosis of these disorders. We also conclude that there is higher frequency of TPOAb and PCA in GADA-positive T1DM patients as compared to negative ones. However, patients with positive GADA had similar clinical and biochemical parameters compared to GADA-negative subjects. Lastly, low GADA positivity in our study cohort as compared to Western populations suggests the heterogenous nature of T1DM in the Indian population.
abstract_id: PUBMED:15793184
Autoantibody "subspecificity" in type 1 diabetes: risk for organ-specific autoimmunity clusters in distinct groups. Objective: Autoimmune thyroid disease (AIT), celiac disease, and Addison's disease are characterized by the presence of autoantibodies: thyroid peroxidase antibody (TPOAb) and thyroglobulin antibody (TGAb) in AIT, tissue transglutaminase antibody (TTGAb) in celiac disease, and 21-hydroxylase antibody (21-OHAb) in Addison's disease. The objective of this study was to define the prevalence of these autoantibodies and clinical disease in a population with type 1 diabetes.
Research Design And Methods: We screened 814 individuals with type 1 diabetes for TPOAb, TGAb, TTGAb, and 21-OHAb. Clinical disease was defined by chart review. Factors related to the presence of autoimmunity and clinical disease including age at onset of type 1 diabetes, duration of diabetes, age at screening, sex, and the presence of autoantibodies were reviewed.
Results: The most common autoantibodies expressed were TPOAb and/or TGAb (29%), followed by TTGAb (10.1%) and 21-OHAb (1.6%). Specific HLA DR/DQ genotypes were associated with the highest risk for expression of 21-OHAb (DRB1*0404-DQ8, DR3-DQ2) and TTGAb (DR3-DQ2- DR3-DQ2). The expression of thyroid autoantibodies was related to 21-OHAb but not to TTGAb. The presence of autoantibodies was associated with and predictive of disease.
Conclusions: In this large cohort of individuals with type 1 diabetes, the expression of organ-specific autoantibodies was very high. The grouping of autoantibody expression suggests common factors contributing to the clustering.
abstract_id: PUBMED:22100310
The role of gender and organ specific autoimmunity. Autoimmunity is influenced by multiple factors including gender and sex hormones. A definite female predominance is found in many autoimmune diseases. Gender is also associated with differences in clinical presentation, onset, progression and outcome of autoimmune diseases. Sex hormones might influence the target organ's vulnerability to an autoimmune response. Gender differences also exist in organ specific autoimmune diseases such as multiple sclerosis, Guillain-Barré syndrome, Crohn's disease and celiac disease. Nevertheless, other organ specific autoimmune diseases (i.e. ulcerative colitis) are seemingly characterized with similar prevalence in both males and females. The reason for gender differences in certain autoimmune diseases remains unknown, but may be attributed to sex hormone influence, fetal microchimerism, X chromosome inactivation, and X chromosome abnormalities. Sex hormones have been found to have immune modulating properties, as well as providing cellular protection following tissue damage in certain circumstances. Sex hormones also influence innate and adaptive immune cells, number of B and T cells, antigen presentation and cytokine secretion. Herein, we review the influence of gender on organ-specific autoimmune diseases affecting the heart, blood vessels, central nervous system and gastrointestinal tract. It appears that sex hormones may have a therapeutic potential in several autoimmune conditions, although further research is required before therapeutic recommendations can be made.
abstract_id: PUBMED:18176869
Organ-specific autoantibodies in patients with rheumatoid arthritis treated with adalimumab: a prospective long-term follow-up. Background: Rheumatoid arthritis (RA) is frequently associated with organ or non-organ-specific autoantibodies or overt autoimmune disorders. Aim of our study was to assess the prevalence and concentration of a panel of organ-specific autoantibodies in patients with RA and to evaluate their relationship with clinical manifestations and treatment efficacy.
Methods: Clinical and serological data from 20 patients with active RA (3M/17F), aged from 28 to 80 years and 50 healthy controls were analyzed. All patients fulfilled the 1987 American College of Rheumatology (ACR) classification criteria for RA and were treated with adalimumab and methotrexate. At baseline and after 6 months of therapy we tested anti-thyroid antibodies for thyroperoxidase (TPOAb) and thyroglobulin (TgAb) using an automated immunochemiluminescence assay (Immulite 2000, DPC, Los Angeles, CA), and anti-tissue transglutaminase (anti-tTG) using the ELISA assay (Phadia, Freiburg, Germany). Anti-smooth muscle (SMA), anti-liver kidney microsome (LKM), anti-parietal cells (APCA), anti-mitochondrial (AMA) and anti-liver cytosolic protein type 1 (LC1), anti-adrenal gland (ACA), anti-pancreatic islet (ICA) and anti-steroid-producing cell (stCA) antibodies were analyzed using a commercially available indirect immunofluorescence methods. Statistics were performed by the SPSS statistical software for Windows, using non parametric tests.
Results: At baseline 6 out of 20 (30%) patients were positive for TPOAb and 8 (40%) for TgAb. After 6 months of treatment 5 (25%) patients had TPOAb and 8 (40%) TgAb. At baseline and after 6 months of treatment only 1 (5%) patient tested positive for IgA anti-tTG (celiac disease was confirmed by intestinal biopsy), and no patients had IgG anti-tTG. However, in RA patients IgG anti-tTG levels significantly increased during treatment (p = 0.017) and were higher than in healthy individuals both at baseline (p = 0.028) and after 6 months of treatment (p = 0.001). Only 1 (5%) patient was positive for APCA and no patient was positive for the other anti-organ-specific antibodies either at baseline or after 6 months of treatment.
Conclusion: The prevalence of organ-specific antibodies does not seem to change during anti-TNF treatment in RA patients. However, a slight and probably irrelevant increase of IgG anti-tTG antibody levels was observed.
abstract_id: PUBMED:31213469
Age, HLA, and Sex Define a Marked Risk of Organ-Specific Autoimmunity in First-Degree Relatives of Patients With Type 1 Diabetes. Objective: Autoimmune diseases can be diagnosed early through the detection of autoantibodies. The aim of this study was to determine the risk of organ-specific autoimmunity in individuals with a family history of type 1 diabetes.
Research Design And Methods: The study cohort included 2,441 first-degree relatives of patients with type 1 diabetes who were prospectively followed from birth to a maximum of 29.4 years (median 13.2 years). All were tested regularly for the development of autoantibodies associated with type 1 diabetes (islet), celiac disease (transglutaminase), or thyroid autoimmunity (thyroid peroxidase). The outcome was defined as an autoantibody-positive status on two consecutive samples.
Results: In total, 394 relatives developed one (n = 353) or more (n = 41) of the three disease-associated autoantibodies during follow-up. The risk by age 20 years was 8.0% (95% CI 6.8-9.2%) for islet autoantibodies, 6.3% (5.1-7.5%) for transglutaminase autoantibodies, 10.7% (8.9-12.5%) for thyroid peroxidase autoantibodies, and 21.5% (19.5-23.5%) for any of these autoantibodies. Each of the three disease-associated autoantibodies was defined by distinct HLA, sex, genetic, and age profiles. The risk of developing any of these autoantibodies was 56.5% (40.8-72.2%) in relatives with HLA DR3/DR3 and 44.4% (36.6-52.2%) in relatives with HLA DR3/DR4-DQ8.
Conclusions: Relatives of patients with type 1 diabetes have a very high risk of organ-specific autoimmunity. Appropriate counseling and genetic and autoantibody testing for multiple autoimmune diseases may be warranted for relatives of patients with type 1 diabetes.
abstract_id: PUBMED:22261392
Organ-specific autoantibodies and autoimmune diseases in juvenile systemic lupus erythematosus and juvenile dermatomyositis patients. Objectives: To our knowledge, no study assessed simultaneously a variety of organ-specific autoantibodies and the prevalence of organ-specific autoimmune diseases in juvenile systemic lupus erythematosus (JSLE) and juvenile dermatomyositis (JDM). Therefore, the purpose of this study was to evaluate organ-specific autoantibodies and autoimmune diseases in JSLE and JDM patients.
Methods: Forty-one JSLE and 41 JDM patients were investigated for autoantibodies associated with autoimmune hepatitis, primary biliary cirrhosis, type 1 diabetes mellitus (T1DM), autoimmune thyroiditis (AT), autoimmune gastritis and coeliac disease (CD). Patients with positive antibodies were investigated for the respective organ-specific autoimmune diseases.
Results: Mean age at diagnosis was higher in JSLE compared to JDM patients (10.3±3.4 vs. 7.3±3.1years, p=0.0001). The frequencies of organ-specific autoantibodies were similar in JSLE and JDM patients (p>0.05). Of note, a high prevalence of T1DM and AT autoantibodies was observed in both groups (20% vs. 15%, p=0.77 and 24% vs. 15%, p=0.41; respectively). Higher frequencies of ANA (93% vs. 59%, p=0.0006), anti-dsDNA (61% vs. 2%, p<0.0001), anti-Ro, anti-Sm, anti-RNP, anti-La and IgG-aCL were observed in JSLE (p<0.05). Organ-specific autoimmune diseases were evidenced only in JSLE patients (24% vs. 0%, p=0.13). Two JSLE patients had T1DM associated with Hashimoto thyroiditis and another had subclinical thyroiditis. Another JSLE patient had CD diagnosis based on iron deficiency anaemia, anti-endomysial antibody, duodenal biopsy compatible to CD and response to a gluten-free diet.
Conclusions: Organ-specific diseases were observed solely in JSLE patients and required specific therapy. The presence of these antibodies recommends the evaluation of organ-specific diseases and a rigorous follow-up.
abstract_id: PUBMED:17656810
Prevalence and clinical significance of organ-specific autoantibodies in type 1 diabetes mellitus. As diabetes mellitus type 1 (DM1) is associated with other autoimmune diseases, clinical tools are needed to diagnose and predict the occurrence of other autoimmune diseases in DM1. We performed a systematic search of the literature on the prevalence, and the diagnostic and prognostic significance of organ-specific autoantibodies in DM1, focusing on the most prevalent autoimmune diseases in DM1: Hashimoto's disease, autoimmune gastric disease, Addison's disease and coeliac disease. We found 163 articles that fulfilled our selection criteria. We analysed and compared the prevalence of autoantibodies in DM1 and control populations, studied the relation between antibody prevalence and age, gender, race and DM1 duration and studied the relation between the presence of autoantibodies and organ dysfunction. Because of the large variation in population characteristics and study design, a uniform conclusion on the relation of these autoantibody prevalences with age, gender, race, DM1 duration and target organ failure cannot be drawn easily. In addition, most studies reviewed used a cross-sectional design. Therefore, few data on the predictive value of the organ-specific antibodies in DM1 populations are present in these studies. Obviously, prospective studies are needed to fill this gap in knowledge. Despite these restrictions, the general picture from the present review is that the prevalence of the organ-specific autoantibodies is significantly higher in DM1 than in control populations. Given the relevant risk for organ failure in DM1 patients with autoantibodies against thyroid, gastric, adrenal and intestinal antigens, we recommend checking these autoantibodies in these patients at least once, for instance at the diagnosis of DM1. For detailed advice on assessing the different organ autoantibodies and function we refer to the summaries in the results section.
abstract_id: PUBMED:18085442
The predictive significance of autoantibodies in organ-specific autoimmune diseases. Many organ-specific autoimmune diseases are preceded by a long pre-clinical phase, and several longitudinal cohort studies have shown that patients may carry autoantibodies many years before they manifest clinical symptoms. Detecting these antibodies in serum has been shown to have strong predictive value, depending on the particular autoantibody, test method, and disease at issue. This review examines the predictive value of various autoantibodies that are found in organ-specific autoimmune diseases, such as primary biliary cirrhosis, Addison's disease, Hashimoto's thyroiditis, type-1 diabetes, celiac disease, and Crohn's disease.
abstract_id: PUBMED:32393142
Type 1 diabetes, thyroid, gastric and adrenal humoral autoantibodies are present altogether in almost one third of adult celiac patients at diagnosis, with a higher frequency than children and adolescent celiac patients. Background: No data are available on the frequency of organ-specific humoral autoimmunity at diagnosis of adult celiac disease (CD).Aim: To evaluate the humoral immunoreactivities specific of type 1 diabetes (T1D), thyroid (THD), atrophic-gastritis (AG) and Addison's (AD) diseases in 92 adult CD patients at diagnosis and 237 adult healthy subjects (CTRL).Methods: T1D, THD and AD specific autoantibodies were analyzed by radioimmunoprecipitation assays. AG autoantibodies were detected by enzyme-linked immunosorbent assay.Results: Of 92 CD patients, 31.5% were positive for at least one of the organ-specific autoantibodies investigated (p < .0001 vs CTRL). Thyroid, diabetes, gastric and adrenal-autoantibodies, that increase with age at diagnosis, were detected in 12.0%, 10.9%, 10.9%, 2.2% of CD patients, respectively. Gastric- and diabetes- rather than thyroid- and adrenal-autoimmunity seem to be specifically related to presence of CD.Conclusions: One third of adult CD patients at diagnosis is target of at least one organ-specific autoantibody. A systematic organ-specific autoantibody screening in these patients might be of value to promptly identify, prevent or treat the relative diseases.
Answer: The presence of organ-specific autoantibodies in coeliac disease patients is not merely an epiphenomenon but rather indicates the coexistence of a wide spectrum of immunological diseases (PUBMED:9265573). The prevalence of organ-specific autoantibodies in coeliac disease patients is significant, with thyroid microsomal antibodies showing a higher prevalence than parietal cell, adrenal cortex, and islet cell antibodies. Moreover, the association with autoimmune diseases such as insulin-dependent diabetes mellitus, autoimmune hepatitis, hypothyroidism, thyrotoxicosis, and dermatomyositis has been found in a substantial proportion of coeliac patients (PUBMED:9265573).
The occurrence of organ-specific autoantibodies in coeliac disease patients suggests a broader underlying autoimmune diathesis rather than a random occurrence. This is supported by the observation that coeliac patients with autoimmune disorders have a higher positivity rate for organ-specific autoantibodies compared to those without autoimmune diseases (PUBMED:9265573). Additionally, the presence of these autoantibodies may have implications for the clinical management of coeliac disease, as their detection can help in the early diagnosis and prevention of complications associated with the associated autoimmune disorders (PUBMED:9265573).
In conclusion, the finding of organ-specific autoantibodies in coeliac disease patients is indicative of the expression of associated autoimmune disorders, rather than being a mere epiphenomenon. This underscores the importance of screening for these autoantibodies in coeliac patients to identify and manage potential coexisting autoimmune conditions. |
Instruction: Is posterior synovial plica excision necessary for refractory lateral epicondylitis of the elbow?
Abstracts:
abstract_id: PUBMED:22965262
Is posterior synovial plica excision necessary for refractory lateral epicondylitis of the elbow? Background: Arthroscopic treatments for lateral epicondylitis including débridement of the extensor carpi radialis brevis (ECRB) origin (Baker technique) or resection of the radiocapitellar synovial plica reportedly improve symptoms. However the etiology of the disease and the role of the plica remain unclear.
Questions/purposes: We asked if posterior radiocapitellar synovial plica excision made any additional improvement in pain or function after arthroscopic ECRB release.
Methods: We retrospectively reviewed 38 patients who had arthroscopic treatment for refractory lateral epicondylitis between November 2003 and October 2009. Twenty patients (Group A) underwent the Baker technique and 18 patients (Group B) underwent a combination of the Baker technique and posterior synovial plica excision. The minimum followup was 36 months (mean, 46 months; range, 36-72 months) for Group A and 25 months (mean, 30 months; range, 25-36 months) for Group B. Postoperatively we obtained VAS pain and DASH scores for each group.
Results: Two years postoperatively, we found no differences in the VAS pain score or DASH: the mean VAS pain scores were 0.3 points in Group A and 0.4 points in Group B, and the DASH scores were 5.1 points and 6.1 points respectively.
Conclusions: The addition of débridement of the posterior synovial fold did not appear to enhance either pain relief or function compared with the classic Baker technique without decortication.
abstract_id: PUBMED:37071147
Evaluation of lateral epicondylopathy, posterior interosseous nerve compression, and plica syndrome as co-existing causes of chronic tennis elbow. Purpose: A great number of patients that suffer from lateral epicondylitis, commonly called tennis elbow (TE), are not successfully treated, meaning, not getting adequate therapeutic effects and the main origin of the pain not being handled appropriately. The hypothesis of the present study is that the inefficiency of the treatment of the chronic TE may often be due to underdiagnosis of posterior interosseous nerve (PIN) entrapment or and plica syndrome, as the authors believe that those pathologies can often occur simultaneously.
Methods: A prospective cross sectional study was conducted. A total of 31 patients met the required criteria.
Results: Thirteen (40.7%) of the patients had more than one source of the lateral elbow pain. Five patients (15.6%) had all three examined pathologies. Six patients (18.8%) had TE and PIN syndrome. Two patients (6.3%) had TE and plica syndrome.
Conclusion: The present study demonstrated concomitant potential sources of lateral elbow pain in patients diagnosed with chronic TE. Our analysis shows how important it is to systematically diagnose patients that present with lateral elbow pain. The clinical characteristics of the three most common causes of chronic lateral elbow pain, meaning, TE, PIN compression, and plicae syndrome were also analyzed. Having adequate knowledge about the clinical aspects of these pathologies can help with a more effective differentiation of the etiology of chronic lateral elbow pain, and with that, a more efficient and cost-effective treatment plan.
abstract_id: PUBMED:36687491
A Rare Intra-articular Abnormality in the Posterior Radiocapitellar Joint: A Case Report. Introduction: Posterior radiocapitellar synovial plica excision is sometimes performed for lateral epicondylitis after debridement of the extensor carpi radialis brevis (ECRB) tendon. We describe a rare intra-articular abnormality of the posterior radiocapitellar joint diagnosed on posterior arthroscopic observation.
Case Report: A 48-year-old man presented with posterolateral pain and discomfort in his left elbow. A diagnosis of lateral epicondylitis was made, and arthroscopic debridement of the ECRB tendon was performed. Posterior arthroscopic examination revealed a tendon-like abnormality running longitudinally along the articular surface of the capitulum of the humerus. The abnormality was resected using a shaver, and symptoms improved postoperatively.
Conclusion: In patients with posterolateral pain and discomfort or catching of the elbow, posterior arthroscopic confirmation of the intra-articular structure is recommended after debridement of the ECRB tendon.
abstract_id: PUBMED:30016689
Prominent synovial plicae in radiocapitellar joints as a potential cause of lateral elbow pain: clinico-radiologic correlation. Background: Thickened synovial plicae in the radiocapitellar joint have been reported as a cause of lateral elbow pain. However, few reports regarding diagnosis based on detailed physical examination and magnetic resonance imaging (MRI) findings are available. The aims of this study were to characterize the clinical manifestations of this syndrome and to investigate the clinical outcomes of arthroscopic surgery.
Methods: We analyzed 20 patients who received a diagnosis of plica syndrome and underwent arthroscopic débridement between 2006 and 2011. The diagnosis was based on physical examination and MRI findings. Elbow symptoms were assessed using a visual analog scale for pain; the Mayo Elbow Performance Index; and the Disabilities of the Arm, Shoulder and Hand score at a minimum of 2 years after surgery. The thickness of plicae on MRI was compared with the normal data in the literature.
Results: Plicae were located on the anterior side in 1 patient, on the posterior side in 15, and on both sides in 4. Radiocapitellar joint tenderness and pain with terminal extension were observed in 65% of patients. MRI showed enlarged plicae consistent with intraoperative findings. The mean plica thickness on MRI was 3.7 ± 1.0 mm, which was significantly thicker than the normal value. The mean lengths (mediolateral length, 9.4 ± 1.6 mm; anteroposterior length, 8.2 ± 1.7 mm) were also greater than the normal values. The visual analog scale score for pain decreased from 6.3 to 1.0 after surgery. The Mayo Elbow Performance Index and Disabilities of the Arm, Shoulder and Hand scores improved from 66 to 89 and from 26 to 14, respectively.
Conclusions: Specific findings of the physical examination and MRI provide clues for the diagnosis of plica syndrome. Painful symptoms were successfully relieved after arthroscopic débridement.
abstract_id: PUBMED:19089437
Snapping elbow caused by hypertrophic synovial plica in the radiohumeral joint: a report of three cases and review of literature. The snapping elbow caused by hypertrophic synovial radiohumeral plica is a rare form of lateral elbow impingement. In this article we report on hypertrophic radiohumeral synovial folds in three male patients, aged 54, 65 and 27 years. All three patients suffered isolated lateral elbow pain, painful snapping and unsuccessful conservative treatment over at least 5 months (range 5-9 months, mean 7.7 months) prior to surgical treatment. None of the patients had lateral epicondylitis, instability, osteochondrosis dissecans, loose bodies, arthritis or neurological disorders. Upon clinical examination the range of motion in the respective painful elbows was found to be normal in all three cases, but a painful snapping occurred between 80 degrees and 100 degrees of flexion with the forearm in pronation. While there were no pathologic findings in standard radiographs, magnetic resonance imaging (MRI) revealed hypertrophic synovial plicae in the radiohumeral joints associated with effusion in each of the diseased elbows. Arthroscopic examinations confirmed the presence of a hypertrophic synovial plica in all three radiocapitellar joints, and revealed a transient interposition and compression of the folds in the articulation from extension until 90 degrees -100 degrees elbow flexion, with replacement beyond 90 degrees elbow flexion with a visible jump. Surgical management in all three cases comprised arthroscopic diagnosis confirmation and removal of the synovial plicae, leading to excellent outcomes at 6-12 months follow-up.
abstract_id: PUBMED:35315424
Radiocapitellar plica: a narrative review. Radiocapitellar plica is a vestigial lateral portion of elbow synovial fold which may cause pain and snap in some cases. Plica is a difficult and misleading diagnosis and it could be easily confused with a common lateral epicondylitis however, they are different conditions. Pathology full understanding and proper diagnosis is essential to achieve patient's pain relief and functional recovery therefore, we reviewed the most relevant literature about radiocapitaller plica. The aim of this study is to provide the best and current concepts about: clinical evaluation, imaging findings and surgical treatments of radiocapitellar plica.
abstract_id: PUBMED:32647733
Arthroscopic Modified Bosworth Procedure for Refractory Lateral Elbow Pain With Radiocapitellar Joint Snapping. Background: Radiocapitellar joint snapping due to the presence of synovial plica has been described as a contributory intra-articular pathology of lateral epicondylitis (LE).
Hypothesis: The arthroscopic modified Bosworth technique can provide a safe and favorable outcome for refractory LE with radiocapitellar snapping.
Study Design: Case series; Level of evidence, 4.
Methods: Patients treated with the arthroscopic modified Bosworth procedure for refractory LE with radiocapitellar joint snapping were included in this study. The sequential surgical procedures included excision of the upper portion of the anterolateral annular ligament, removal of the synovial plicae, and release of the extensor carpi radialis brevis for all patients. Clinical outcomes were measured at a minimum 1-year follow-up.
Results: A total of 22 patients with a mean ± SD age of 51.2 ± 10.4 years were included in this study. The mean follow-up was 29.4 ± 7.7 months (range, 21-42 months). The overall visual analog scale score (from preoperative to final follow-up) was 7.5 ± 1.2 vs 2.5 ± 1.8 (P < .001); flexion-extension motion arc was 133.8° ± 11.2° vs 146.4° ± 7.1° (P = .001); pronation-supination motion arc was 101.8° ± 9.2° vs 141.7° ± 10.2° (P = .001); Disabilities of the Arm, Shoulder and Hand score was 54.5 ± 13.2 vs 3.6 ± 4.1 (P < .001); and Mayo Elbow Performance Score was 51.9 ± 12.2 vs 84.3 ± 10.3 (P < .001).
Conclusion: Radiocapitellar joint snapping may coexist with LE as a disease spectrum. The arthroscopic modified Bosworth technique provides safe and favorable outcomes for patients with refractory LE associated with radiocapitellar joint snapping.
abstract_id: PUBMED:32458355
A Comprehensive Review of Radiohumeral Synovial Plicae for a Correct Clinical Interpretation in Intractable Lateral Epicondylitis. Purpose Of Review: Radiohumeral synovial plicae (RHSP) have been studied by different authors in different ways; in spite of this, the evidence is poor and the results are controversial and inconclusive even when it comes to referring to this elbow structure. The aim of this article is to review the embryologic development, anatomy and histology, pathophysiologic features, clinical manifestations, physical examination, imaging findings, and treatment of radiohumeral synovial plicae, for their correct clinical interpretation in patients with intractable lateral epicondylitis.
Recent Findings: Radiohumeral synovial plicae syndrome (RHSPS) can cause intractable lateral epicondylitis and can be easily confused with other clinical conditions affecting the elbow. Many clinicians are not familiar with radiohumeral synovial plica syndrome since there are not many studies about it and previous reports do not seem to reach a consensus. Although its role in elbow injuries and epicondylitis is accepted and its surgical treatment is effective, there is no clear consensus about clinically relevant aspects. RHSP are remnants of normal embryo development of the articular synovial membrane with different anatomical locations, size and shape. Traumatism or overuse can turn RHSP into symptomatic structures at any age and can be compressed between the radial and humeral heads during movement. This compression can cause pain and other symptoms such as snapping, catching, mobility restriction, pitching, clicking, locking, blockage, popping and swelling. Radiohumeral synovial plica syndrome (RHSPS) may be an isolated condition or it can be associated with other elbow abnormalities. The findings on physical examination and imaging diagnosis are multiple and variable. Nowadays, RHSPS are quite unknown and previous reports do not seem to agree, leading to misdiagnoses as epicondylitis and making this structure the main cause of some cases of "intractable lateral epicondylitis". The outcomes of surgical treatments are quite promising although more, higher quality research is needed. Taking this into account, this review is meant to be a starting point for new anatomical and clinical studies.
abstract_id: PUBMED:16365372
Arthroscopic treatment of posterolateral elbow impingement from lateral synovial plicae in throwing athletes and golfers. Background: Although elbow pain is common in throwing athletes and golfers, posterolateral impingement from a hypertrophic synovial plica is a rare but possibly underdiagnosed condition.
Purpose: To evaluate the clinical results of arthroscopic treatment of symptomatic lateral elbow plicae in this athletic population.
Study Design: Case series; Level of evidence, 4.
Methods: Twelve patients, 9 male and 3 female, whose mean age was 21.6 years (range, 17-33 years), were reviewed. There were 7 baseball pitchers, 2 softball players, and 3 golfers. All patients had diagnosed isolated lateral elbow plica; none had lateral epicondylitis, instability, osteochondritis dissecans, arthritis, loose bodies, or nerve conditions. The mean time from onset of symptoms to treatment was 9.25 months (range, 3-24.5 months). At a mean follow-up of 33.8 months (range, 24-65.5 months), patients were evaluated with a questionnaire and examination, based on the American Shoulder and Elbow Surgeons standardized elbow assessment and previously reported elbow outcome score.
Results: Posterolateral elbow pain was present in all patients. Fifty-eight percent (7 of 12 patients) complained of clicking or catching, whereas only 25% (3 of 12) experienced swelling or effusion. At elbow arthroscopy, a thickened synovial lateral plica was debrided in all cases. Ninety-two percent (11 of 12 patients) reported an excellent outcome with a mean elbow score of 92.5 points (maximum, 100 points). Return to competitive play averaged 4.8 months (range, 3-9.5 months). One patient with a fair outcome developed medial elbow instability that later required reconstructive surgery.
Conclusion: Posterolateral elbow impingement from hypertrophic synovial plicae can cause significant pain and disability in throwing athletes and golfers. With careful diagnosis and exclusion of other elbow problems, treatment with arthroscopic debridement and focused rehabilitation is highly successful and allows these athletes to return to their previous level of play.
abstract_id: PUBMED:16679240
The posterolateral plica: a cause of refractory lateral elbow pain. Lateral epicondylitis is one of the most common upper extremity pain syndromes. We report the results of patients in whom conservative treatment was unsuccessful and who were finally treated arthroscopically for symptomatic plicae. Ten patients (mean age, 40 years [range, 18-60 years]) who were misdiagnosed as having lateral epicondylitis were included in this study. Examination revealed the site of maximal tenderness to be posterior to the lateral epicondyle and centered at the posterior radiocapitellar joint. Preoperatively, all patients received conservative treatment (physical therapy or corticosteroid injections [or both]). The mean follow-up was 25 months (range, 6-68 months). The mean score on the Disabilities of the Arm, Shoulder and Hand questionnaire was 9 (range, 0-37). Preoperatively, 7 patients had full elbow range of motion; however, in 3 patients, there was a loss of extension at the elbow ranging from 7 degrees to 20 degrees preoperatively. The range of elbow motion was full in all patients postoperatively. No patient demonstrated posterolateral pain after the operation. Synovial plicae of the elbow may be the cause of lateral elbow pain in patients with vague clinical symptoms. Arthroscopic management may provide a successful treatment option for such patients.
Answer: The necessity of posterior synovial plica excision for refractory lateral epicondylitis of the elbow is not clearly established. A study by PUBMED:22965262 found that the addition of débridement of the posterior synovial fold did not appear to enhance either pain relief or function compared with the classic Baker technique without decortication. This suggests that posterior synovial plica excision may not be necessary for all patients with refractory lateral epicondylitis.
However, other studies have indicated that synovial plicae in the radiocapitellar joint can be a source of lateral elbow pain and that arthroscopic débridement can relieve painful symptoms (PUBMED:30016689). In particular, patients with posterolateral elbow pain and discomfort or catching of the elbow may benefit from posterior arthroscopic confirmation of the intra-articular structure after debridement of the ECRB tendon (PUBMED:36687491).
Furthermore, the presence of synovial plica has been associated with radiocapitellar joint snapping, which can coexist with lateral epicondylitis as a disease spectrum. The arthroscopic modified Bosworth technique, which includes excision of the synovial plicae, has been shown to provide favorable outcomes for patients with refractory lateral epicondylitis associated with radiocapitellar joint snapping (PUBMED:32647733).
In conclusion, while posterior synovial plica excision may not be necessary for all cases of refractory lateral epicondylitis, it can be beneficial in certain situations, particularly when other intra-articular pathologies such as radiocapitellar joint snapping are present. Clinicians should consider the individual patient's symptoms and diagnostic findings when deciding on the necessity of posterior synovial plica excision (PUBMED:35315424, PUBMED:32458355, PUBMED:16365372, PUBMED:16679240). |
Instruction: Does partial surgical tumour removal influence the response to octreotide-LAR in acromegalic patients previously resistant to the somatostatin analogue?
Abstracts:
abstract_id: PUBMED:17555503
Does partial surgical tumour removal influence the response to octreotide-LAR in acromegalic patients previously resistant to the somatostatin analogue? Objective: To compare the intrapatient response to the same dose of slow-release octreotide (OCT-LAR) before and after noncurative surgery in acromegalic patients who did not attain disease control after primary treatment with OCT-LAR.
Design: Prospective clinical study.
Patients: Eleven acromegalic patients (eight men, aged 42.45 +/- 11.15 years, 10 macroadenomas) received OCT-LAR (20 mg, n = 1; 30 mg, n = 10) every 28 days as the primary treatment (1stOCT-LAR) for 11.3 +/- 4.2 months, without IGF-I normalization. They were subsequently submitted to surgery without cure and were then treated with the same dose of OCT-LAR for 8.0 +/- 6.5 months (2ndOCT-LAR).
Measurements: GH and IGF-I serum concentrations were obtained under basal conditions as well as during treatment. Pituitary tumour volume was assessed by magnetic resonance imaging (MRI) of the sella. IGF-I was also expressed as a percentage of the upper limit of the normal age- and sex-matched range (%ULNR IGF-I).
Results: After 1stOCT-LAR, there was a decrease in GH levels (P = 0.003) and %ULNR IGF-I (P = 0.009) compared to baseline (B), but no IGF-I normalization. Tumour shrinkage was observed in eight of 10 patients with macroadenomas (median 63.7%, range 24.5-75.5%). After surgery, mean levels of GH and %ULNR IGF-I were lower than those at baseline (P = 0.0004 and P = 0.003, respectively), but not when compared to values during 1stOCT-LAR (P = 1.000 and P = 0.957, respectively). MRI confirmed surgical tumour removal (median 64%, range 4.9-96.6%) in eight of the 10 patients. Comparing the 2ndOCT-LAR results with postsurgical results, there were no significant decrease in %ULNR IGF-I (P = 0.061) and GH levels (P = 0.414). Nine patients (82%) achieved IGF-I normalization. The degree of surgical tumour reduction did not correlate with IGF-I normalization (P = 0.794). When comparing the results between 1stOCT-LAR and 2ndOCT-LAR, there was a decrease, albeit not statistically significant, in serum GH levels (P = 0.059) and a significant decrease in %ULNR IGF-I (P = 0.011).
Conclusions: Using strict criteria (same patient, same drug, same dose) our results strongly suggest that the surgical reduction of tumour mass can improve the outcome of OCT-LAR treatment in acromegalic patients resistant to primary therapy with SA.
abstract_id: PUBMED:19884028
Growth hormone isoforms in acromegalic patients before and after treatment with octreotide LAR. Background: Human growth hormone (hGH) circulates as a mixture of different isoforms. It has been previously reported that the ratio of 20kDa to 20kDa plus 22kDa (%20kDa-hGH) is increased in patients with active acromegaly.
Objectives: To evaluate the GH isoforms (20kDa- and 22kDa-hGH) in acromegalic patients before and after six months of treatment with octreotide LAR, and to compare the results with those in healthy controls. In addition, the relationships between the %20kDa-hGH, tumor size and biochemical measurements were also investigated.
Design: Random serum samples from 23 acromegalic patients evaluated before and after six months of treatment with octreotide LAR and from 23 matched healthy controls were studied. Growth hormone, IGF-I and prolactin (PRL) were measured by chemiluminescence immunometric assay and the 20kDa- and 22kDa-hGH isoforms were measured by specific time-resolved fluorescence immunoassays.
Results: In acromegalic patients before treatment, there was a significantly higher median %20Da-hGH in comparison to healthy controls (14.31% vs. 9.59%, p<0.001). After six months of treatment, the median %20kDa-hGH was similar to the baseline values. Patients with GH<2.5ng/mL after six months of treatment had already lower GH and %20kDa-hGH at baseline (p<0.01). The IGF-I (SD-scores) was positively correlated to total GH levels in acromegalic patients after treatment. There was no correlation between the %20kDa-hGH and PRL levels or tumor size.
Conclusions: Our study confirmed that acromegalic patients have an increased proportion of circulating 20kDa-hGH isoform. Consequently, the use of a 22kDa-hGH specific assay may underestimate the tumor production of total GH. Although octreotide LAR promoted a significant decrease in the GH and IGF-I levels, it did not normalize the GH isoforms composition and suggests that the secretion of GH isoforms is equally inhibited by somatostatin analogues and that it is the disease control that normalizes the GH isoforms composition in acromegaly.
abstract_id: PUBMED:19798622
The efficacy of octreotide LAR as firstline therapy for patients with newly diagnosed acromegaly is independent of tumor extension: predictive factors of tumor and biochemical response. Surgical outcome of acromegaly depends on the preoperatory tumor size and extension. Somatostatin analogues are also a highly effective treatment for acromegalic patients. Nevertheless, the response of GH-secreting adenomas to primary medical therapy is variable. The aim of the present study was to evaluate the efficacy of octreotide LAR as primary therapy for acromegalic patients as a function of initial tumor extension. We performed a multicentre, prospective, observational and analytical study recruiting 19 "naive" acromegalic patients (5 microadenomas, 10 intrasellar, and 4 extrasellar macroadenomas). All of them were treated with octreotide LAR for 12 months. Basal GH and fasting IGF-I concentrations, and tumor volume were measured at baseline and after 6 and 12 months of treatment. Six patients withdrew the study. The patients who completed the protocol showed a significant reduction of tumor volume (25+/-23%, Wilk's lambda=0.506, F=4.400, p=0.046) independently of tumor extension at study entry (Wilk's lambda=0.826, F=0.452, p=0.769). A shrinkage >25% of baseline tumor volume was achieved in 8 (42%) patients with no differences between tumor extension subgroups. Basal GH levels (76+/-18%) and fasting IGF-I (52+/-31%) decreased throughout the study. Six (46%) patients normalized their IGF-I levels. Octreotide LAR is an effective first-line treatment for a large group of acromegalic patients independent of initial tumor extension.
abstract_id: PUBMED:23148190
The efficacy of octreotide LAR in acromegalic patients as primary or secondary therapy. Objective: The objective of this study was to investigate the efficacy of octreotide therapy in acromegalic patients as primary or secondary therapy.
Methods: Ten acromegalic patients diagnosed at the Endocrinology Clinic in Sarajevo (seven females and three males, mean age 55.2 ± 7.2 years, age range 40-65 years, five patients with microadenoma and five patients with macroadenoma) were treated with octreotide. Among them, 60% of patients were operated on and the majority of the procedures were performed transnasaly (90%). That group of patients had recidivism of disease (pituitary adenoma and acromegaly). The concentration of human growth hormone (HGH) and insulin-like growth factor 1 (IGF-1) was evaluated at 0, 6 and 12 months, while magnetic resonance imaging (MRI) was taken before the treatment and 12 months after. Eight patients received octreotide 30 mg/28 days, one patient received a dose of 20 mg and the other received 60 mg/28 days.
Results: Before treatment growth hormone (GH) levels were 50.87 ± 10.56 ng/ml (range: 26-64.9), IGF-1 were 776.66 ± 118.40 ng/ml (range: 526-934). Four patients (40%) were treated with primary octreotide treatment and six patients (60%) with secondary somatostatin analog treatment. At the beginning of therapy, there were no differences in terms of age, HGH levels and IGF-1 levels between primary and secondary treatment groups (p > 0.05). The difference between groups was only in regard to the size of tumors (p = 0.01). After 6 and 12 months the GH levels decreased to 1.61 ± 0.86 ng/ml (range: 0.7-2.65) and 1.85 ± 2.40 ng/ml (range: 0.0-8.3), respectively, while the IGF-1 became 305.90 ± 43.19 ng/ml after 6 months of treatment (range: 240-376) and 256.99 ± 71.43 ng/ml after 12 months of octreotide treatment (range: 126-325), respectively. The pituitary adenomas size prior to treatment was 9.57 mm, while after 12 months of treatment, the size decreased to 8.0 mm. After therapy, a GH decrease to less than 2.5 ng/ml was achieved in 90% of cases; tumor size decrease was achieved in 60% while normalization of IGF-1 was achieved in 100% of the patients, respectively. All differences about HGH and IGF-1 in each group were statistically significant (p < 0.05). In the group of acromegalic patients treated with octreotide LAR as primary therapy, the difference was more significant for GH and IGF-1 than for adenomas size.
Conclusions: Octreotide treatment of acromegaly not only decreases GH and IGF-1 concentrations, but also appears to diminish the size of the tumor in about 60% of cases. The somatostatin analogs are more efficient in the primary treatment of acromegalic patients, due to the fact that primary therapy is as effective as secondary therapy but primary therapy has small advantages when compared with secondary octreotide therapy because no surgical treatment is required before.
abstract_id: PUBMED:14674719
The treatment of de novo acromegalic patients with octreotide-LAR: efficacy, tolerability and cardiovascular effects. Aim: Somatostatin analogues are normally used as adjunctive therapy to surgery and radiotherapy in management of acromegaly. We studied the effects of de novo OCT-LAR treatment on growth hormone (GH) suppression, tumour size, cardiovascular function, clinical symptoms, signs and quality of life in 9 newly diagnosed acromegalic patients.
Methods: Patients commenced OCT-LAR 20 mg IM monthly for 2 months. Dose increased to 30 mg monthly if mean serum GH (MGH) > 5 mU/l (2 microg/litre) (7 patients). Treatment continued for 6 months. Cardiac function assessed by echocardiography at baseline and day 169. Left ventricular (LV) mass and ejection fraction (EF) calculated from 2D M-mode studies.
Results: Serum GH demonstrated suppression in 8/9 patients (mean suppression 64.9% +/- 29.7%, range; 4-95.2%). MGH suppressed < 5 mU/ (2 microg/litre) in 3 (33%) patients. IGF-I and IGFBP3 normalised in 1 (12.5%) and 3 (38%) patients respectively. Tumour shrinkage seen in 30% patients. Eight patients were assessed by echocardiography. At baseline, 7 patients demonstrated abnormalities in LV mass and EF. At day 169, 6 patients demonstrated a fall and 1 an increase in LV mass. Overall there was no significant change in LV mass. A significant increase in EF was observed (p = 0.02). There were significant improvements in health perception (p = 0.01), fatigue (p < 0.05) and perspiration (p = 0.0039).
Conclusions: These data demonstrate OCT-LAR provides adequate control of acromegaly in a proportion of patients treated over 6 months. This is associated with improved LV function, evidenced by increased EF. Improved results are expected with longer-term treatment. OCT-LAR may be considered as primary treatment for acromegaly in selected patients.
abstract_id: PUBMED:22240890
Growth of an aggressive tumor during pregnancy in an acromegalic patient. Pregnancy in acromegalic patients is a rare event, but is usually uneventful, with stable GH and IGF-I levels and no tumor enlargement. Medical treatment can usually be withdrawn without problems and although no major adverse event has been reported, the suspension of drug treatments is generally recommended. No case report exists in the literature regarding evolution of a somatotropinoma with invasiveness markers throughout pregnancy. We report a case of an acromegalic patient who was submitted to surgery and treated with octreotide LAR maintaining a stable residual tumor and an IGF-I close to the normal levels. Her tumor presented with a high Ki-67 (11.6%) and a low aryl hydrocarbon receptor-interacting protein (AIP) expression. When she became pregnant, octreotide LAR was withdrawn, and despite remaining asymptomatic during pregnancy, tumor growth occurred with compression of surrounding structures. In conclusion, pregnancy in acromegalic patients has usually a favorable prognosis with no tumor growth. However, in the presence of high Ki-67 labeling index and low AIP expression, tumor enlargement may occur and somatostatin analogue treatment throughout the pregnancy should be considered.
abstract_id: PUBMED:10946911
Growth hormone receptor antagonist therapy in acromegalic patients resistant to somatostatin analogs. Transsphenoidal surgical resection is the primary therapy for acromegaly caused by GH secreting pituitary adenomas. Medical therapy for patients not controlled by surgery includes primarily somatostatin analogs and secondarily dopamine agonists, both of which inhibit pituitary growth hormone secretion. A novel GH receptor antagonist (pegvisomant) binds to hepatic GH receptors and inhibits peripheral insulin-like growth factor-1 generation. Six patients resistant to maximal doses of octreotide therapy received pegvisomant - three received placebo or pegvisomant 30 mg or 80 mg weekly for 6 weeks and three received placebo and pegvisomant 10-20 mg/d for 12 weeks. Thereafter, all patients received daily pegvisomant injections of doses determined by titrating IGF-1 levels. Serum total IGF-1 levels were normalized in all six acromegalic patients previously shown to be resistant to somatostatin analogs via a novel mechanism of peripheral GH receptor antagonism. The GH receptor antagonist is a useful treatment for patients harboring GH-secreting tumors who are resistant to octreotide.
abstract_id: PUBMED:16060910
Treatment of acromegaly with octreotide-LAR: extensive experience in a Brazilian institution. Objective: Somatostatin analogues have become the mainstay of the medical treatment of acromegaly. The aim of our study was to evaluate the efficacy and tolerability of octreotide-LAR (OCT-LAR) treatment in acromegalic patients.
Design: Prospective open trial.
Patients And Methods: Eighty acromegalic patients (46 women; 18-80 years) were treated with OCT-LAR. Mean +/- SD duration of follow-up was 16.6 +/- 6.6 months (6-24 months). Twenty-eight patients received OCT-LAR as primary treatment. The target was to achieve normal IGF-I levels. Clinical activity was evaluated by symptom score and fasting samples for GH and IGF-I serum concentrations, obtained under basal conditions as well as during treatment. Pituitary tumour volume was assessed by magnetic resonance imaging of the sella. A tumour volume reduction of at least 25% was considered significant.
Results: Clinical improvement was attained in most patients. Fifty-nine (74%) of them attained mean GH < 2.5 ng/ml and 33 (41%) achieved normal IGF-I by the 24th month of treatment. GH and IGF-I control increased throughout treatment. Regarding the 46 patients treated for at least 12 months there was a significant decrease of GH and IGF-I levels by the third month compared to basal levels, persisting with no subsequently variation. In the patient group that achieved normal serum IGF-1 during treatment (controlled group: n = 43) 20 patients maintained normal levels up to the latest follow-up, whereas 23 of them once again showed altered serum IGF-1-values of some measurements during follow-up, despite dose maintenance or elevation. Baseline percentage of the upper limit of IGF-I normal range, GH levels by the third month and length of treatment were predictive factors of IGF-I normalization. Tumour shrinkage occurred in 76% of primary patients. Among 21 diabetic patients, four worsened and five improved glycaemic control, based on glycated haemoglobin. One previously intolerant patient progressed to overt diabetes. Nine patients developed gall bladder sludge, other nine patients acquired microlithiasis and one patient developed gallstone pancreatitis.
Conclusion: OCT-LAR is an effective agent in alleviating symptoms, suppressing GH, normalizing IGF-I and inducing tumour shrinkage in many acromegalic patients. Overall, OCT-LAR is well tolerated and should be recommended for nonsurgically cured acromegalics, and also be considered as primary therapy for selected cases, mainly for those with a low probability of surgical cure.
abstract_id: PUBMED:30851160
AIP-mutated acromegaly resistant to first-generation somatostatin analogs: long-term control with pasireotide LAR in two patients. Acromegaly is a rare disease due to chronic excess growth hormone (GH) and IGF-1. Aryl hydrocarbon receptor interacting protein (AIP) mutations are associated with an aggressive, inheritable form of acromegaly that responds poorly to SST2-specific somatostatin analogs (SSA). The role of pasireotide, an SSA with affinity for multiple SSTs, in patients with AIP mutations has not been reported. We studied two AIP mutation positive acromegaly patients with early-onset, invasive macroadenomas and inoperable residues after neurosurgery. Patient 1 came from a FIPA kindred and had uncontrolled GH/IGF-1 throughout 10 years of octreotide/lanreotide treatment. When switched to pasireotide LAR, he rapidly experienced hormonal control which was associated with marked regression of his tumor residue. Pasireotide LAR was stopped after >10 years due to low IGF-1 and he maintained hormonal control without tumor regrowth for >18 months off pasireotide LAR. Patient 2 had a pituitary adenoma diagnosed when aged 17 that was not cured by surgery. Chronic pasireotide LAR therapy produced hormonal control and marked tumor shrinkage but control was lost when switched to octreotide. Tumor immunohistochemistry showed absent AIP and SST2 staining and positive SST5. Her AIP mutation positive sister developed a 2.5 cm follicular thyroid carcinoma aged 21 with tumoral loss of heterozygosity at the AIP locus and absent AIP staining. Patients 1 and 2 required multi-modal therapy to control diabetes. On stopping pasireotide LAR after >10 years of treatment, Patient 1's glucose metabolism returned to baseline levels. Long-term pasireotide LAR therapy can be beneficial in some AIP mutation positive acromegaly patients that are resistant to first-generation SSA.
abstract_id: PUBMED:26290466
The effects of somatostatin analogue therapy on pituitary tumor volume in patients with acromegaly. Introduction: In nearly all cases, acromegaly is caused by excess GH from a pituitary adenoma, resulting in elevated circulating levels of GH and, subsequently, IGF-1. Treatment goals are to eliminate morbidity and restore the increased mortality to normal rates. Therapeutic strategies aim to minimize tumor mass and normalize GH and IGF-1 levels. Somatostatin analogues are the medical treatment of choice in acromegaly, as first-line or post-surgical therapy, and have proven efficacy in pituitary tumor volume reduction (TVR).
Methods: Here we review the effects of somatostatin analogue therapy on pituitary tumor volume in patients with acromegaly.
Results: TVR with somatostatin analogues may be mediated by direct anti-proliferative effects via activation of somatostatin receptors, or by indirect effects, such as angiogenesis inhibition, and is more pronounced when they are administered as first-line therapy. Various studies of first-line treatment with octreotide LAR have shown significant TVR in ≥73% of patients. First-line treatment with lanreotide Autogel has shown evidence of TVR, although more studies are needed. In a recent randomized, double-blind, 12-month trial in 358 medical-treatment-naïve acromegaly patients, significant TVR was achieved by 81% of patients administered pasireotide LAR and 77% administered octreotide LAR. Pre-operative somatostatin analogue therapy may also induce TVR and improve post-operative disease control compared with surgery alone. TVR is progressive with prolonged treatment, and decreased IGF-1 levels may be its best predictor, followed by age and degree of GH decrease. However, TVR does not always correlate with degree of biochemical control.
Conclusion: Somatostatin analogues (first- or second-line treatment) are the mainstay of medical therapy and, as first-line medical therapy, are associated with significant pituitary TVR in most patients.
Answer: Yes, partial surgical tumor removal does influence the response to octreotide LAR in acromegalic patients who were previously resistant to the somatostatin analogue. A prospective clinical study involving eleven acromegalic patients who did not achieve disease control with primary treatment using octreotide LAR showed that after noncurative surgery and subsequent treatment with the same dose of octreotide LAR, nine patients (82%) achieved IGF-I normalization. The study suggests that surgical reduction of tumor mass can improve the outcome of octreotide LAR treatment in acromegalic patients resistant to primary therapy with somatostatin analogues (PUBMED:17555503). |
Instruction: Can we reduce episodes of haemoglobin desaturation in full-term babies restrained in car seats?
Abstracts:
abstract_id: PUBMED:18052992
Can we reduce episodes of haemoglobin desaturation in full-term babies restrained in car seats? Objectives: To determine whether episodes of haemoglobin oxygen (SpO2) desaturation in full-term infants restrained in car seats can be reduced by a simple foam plastic infant car seat insert designed to push the body forward, with space for the protuberant occiput to lie behind the spine, and so reduce flexion of the infant's head on the trunk.
Methods: Eighteen healthy full-term babies were evaluated while restrained in an infant car safety seat with, and without, the foam insert. Infants were monitored in each position for 30 min with continuous polygraphic recording of respiratory and heart rate, nasal airflow and SpO2.
Results: Placement of the insert in the car seat was associated with a significant reduction in the rate of apneas with a fall in SpO2 >5% (median, interquartile range: 4.4 (0, 10.6) vs. 9.2 (5.4, 15.2) events per hour, p=0.03). The one clinically severe episode of apnea, with a fall in SpO2 of more than 30%, occurred in the car seat without the insert.
Conclusions: A car seat insert that allows the newborn's head to lie in a neutral position during sleep may reduce the frequency of mild episodes of reduced SpO2 in some full-term newborn babies.
abstract_id: PUBMED:23858423
Randomized controlled trial of a car safety seat insert to reduce hypoxia in term infants. Objective: To test the hypothesis that a foam plastic insert that allows the infant head to rest in a neutral position in sleep may prevent obstruction of the upper airway and thus reduce episodes of reduced oxygenation in term infants in car seats.
Methods: Healthy full-term babies were randomized to be studied during sleep while restrained in an infant car safety seat either with or without the insert, with continuous polysomnographic recordings with sleep video.
Results: Seventy-eight infants (39 in each group) had polysomnogram recordings at a mean of 8 days of age. Both groups showed a small fall in mean hemoglobin oxygen saturation (SpO2) over the first hour of sleep. There was no difference between insert and no insert in the rate of moderate desaturations (a fall in SpO2 ≥ 4% lasting for ≥ 10 seconds, mean ± SEM, 17.0 ± 1.5 vs 17.2 ± 1.5/hour), or mean SpO2 during sleep. The insert was associated with a significant reduction in the rate of obstructive apnea (0.3 ± 0.1 vs 0.9 ± 1.5/hour, P < .03), the severity of desaturation events (minimum SpO2 82% ± 1% vs 74% ± 2%, P < .001), and time with SpO2 <85% (0.6% ± 0.3% vs 1.8% ± 1.4%, P = .03).
Conclusions: In full-term newborn infants, a car seat insert that helps the head to lie in a neutral position was associated with reduced severity of desaturation events but not the overall rate of moderate desaturations.
abstract_id: PUBMED:7630686
Oxygen desaturation of selected term infants in car seats. Objectives: Premature infants are known to be at risk for oxygen (O2) desaturation and/or apnea in car seats. Since 1990, the American Academy of Pediatrics has recommended a period of monitoring in car seats before hospital discharge for infants born at < 37 weeks gestation. The objective of this report is to determine if selected term infants are also at risk for O2 desaturation, apnea, or bradycardia while in an infant car seat.
Methods: MetroWest Medical Center is a community hospital with a level II neonatal unit. Term infants who in the judgment of their pediatrician were felt to be at risk for O2 desaturation or apnea were monitored for a 90-minute period in a car seat and observed for transcutaneous O2 desaturation, apnea, or bradycardia. In addition, several infants who were admitted to the pediatric inpatient unit after discharge from the nursery were monitored in a similar fashion.
Results: Eight of 28 monitored infants (28.6%) had a period of O2 desaturation < 90%. In addition, five of 28 monitored infants (17.8%) had borderline results (O2 saturation, 90 to 93%). All four infants monitored because of genetic syndromes had abnormal results. O2 desaturation was also observed in two term infants who had been observed to be apneic by a parent after discharge from the nursery.
Conclusions: In selected circumstances (eg, genetic disorders or observed apnea) term infants may be at risk for O2 desaturation in an upright car seat and monitoring these infants in car seats before nursery discharge should be considered. Because not all infants at risk for O2 desaturation can be identified at birth, an alternative approach would be to recommend, unless medically contraindicated (eg, gastroesphogeal reflux when supine), that infants should routinely be transported in a supine position car seat in the early months of life.
abstract_id: PUBMED:31041205
Parent's knowledge, attitude, and practice about children car seats at Unaizah city, KSA. Background: Motor vehicle collision (MVC) is a major cause of death in children worldwide. Using children car seats will stabilize them during accidents and decrease the morbidity and mortality from MVC dramatically. There is no study in Saudi Arabia about car seat use and relationship between using it and children morbidity and mortality following a car accident.
Objectives: To assess knowledge, attitude, and practice of children car seats among parents at Unaizah city, KSA, to assess the level of awareness regarding the children car safety system, to determine the parent level of education, socioeconomic status, and other factors affecting their behavior regarding car seats, to determine the prevalence of car seat use among parents in Unaizah city, and to assess the effectiveness of car seat policies on parents' behavior.
Design: Cross-sectional study.
Settings: Public and private pediatric clinics at Unaizah city in Qassim region.
Materials And Methods: The study was conducted from May to June 2018, among parents with child ≤7 years old. Anyone who could not complete the questionnaire for any reason was excluded from our study. SPSS version 20 has been used to analyze all data.
Main Outcome Measures: To assess knowledge, attitude, and practice of children car seats among parents at Unaizah city, KSA.
Sample Size: 350.
Results: There were 350 participants who were included in this study of which females were dominant 77.1%. The age range of parents was 25-35 years old. Most of them complied with the seatbelt policy (56.7%). Among these numbers, 130 participants use a seatbelt for security reason while others were to protect from irregularities. More parents do not put baby seat in the car (57.3%) while 57 participants use child seat every time the child rides in the car.
Conclusion: The overall knowledge, attitude, and practices toward children car safety seat in this study was relatively low. This signifies the need of parents to step up their awareness to safe guard their children while on the road.
Limitations: Small sample size and limited to pediatrics clinics visitors.
abstract_id: PUBMED:32532359
Child passenger safety education in the emergency department: teen driving, car seats, booster seats, and more. Background: The leading cause of death in children less than 19 years old is motor vehicle crashes (MVC). Non-use or improper use of motor vehicle car seats significantly adds to the morbidity and mortality. Emergency department (ED) encounters provide an opportunity for caregiver education. Our objective was to determine the effect of an educational intervention on knowledge and counseling behaviors of pediatric ED nurses regarding child passenger safety (CPS).
Methods: A pre/post educational intervention study was conducted with nursing staff in an urban ED. Responses to CPS related knowledge and counseling behaviors were collected using surveys administered before and after the intervention. The ED nurse education intervention was a one-hour lecture based on the American Academy of Pediatrics (AAP) CPS guidelines and Alabama state law regarding ages for each car seat type and teen driving risky behaviors. Individual data from pre and post surveys were matched, and nominal variables in pre-post matched pairs were analyzed using McNemar's test. To compare categorical variables within pre or post test data, we used the Chi-square test.
Results: Pretests were administered to 83/110 ED nurses; 64 nurses received the educational intervention and posttest. On the pretests, nurses reported "never" or "occasionally" counseling about CPS for the following: 56% car seats, 62% booster seat, 56% teen driving, 32% seat belts. When comparing the pretest CPS knowledge between nurses working 0-1 year vs. ≥ 2 years there was no statistically significant difference. Two CPS knowledge questions did not show significance due to a high correct baseline knowledge rate (> 98%), including baseline knowledge of MVC being the leading cause of death. Of the remaining 7 knowledge questions, 5 questions showed statistically significant improvement in knowledge: age when children can sit in front seat, state GDL law details, seat belt state law for back seat riders, age for booster seat, and rear facing car seat age. All four counseling behavior questions showed increases in intent to counsel families; however, only intent to counsel regarding teen driving reached statistical significance.
Conclusions: Educational efforts improved pediatric ED nursing knowledge regarding CPS. Intent to counsel was also improved following the education.
abstract_id: PUBMED:30377536
Overcoming barriers to use of child car seats in an urban Aboriginal community-formative evaluation of a program for Aboriginal Community Controlled Health Services. Background: Little is known about the barriers to use of child car seats in Australian Aboriginal communities, or the acceptability of programs to increase appropriate car seat use. This formative evaluation sought to consult and partner with Aboriginal Community Controlled Health Services (ACCHS) to develop and evaluate the feasibility and acceptability of a program intended to improve optimal use of child car seats.
Methods: Focus groups were conducted with parents and carers of Aboriginal children to identify the barriers and facilitating factors for child car seat use, and staff of two ACCHS were interviewed to inform program development. Following the implementation of the resulting multi-faceted program, consisting of staff training, education, hands-on demonstrations and a subsidised car seat distribution scheme, interviews were conducted to assess process issues and acceptability with 13 staff members.
Results: Parents and carers in the focus groups reported a lack of awareness of child car seat use, confusion about the right car seats for different aged children but agreed about the importance of safety and community responsibility to keep children safe in cars. Interviews with service staff informed an approach to deliver relevant information. Information and resources were delivered to families, while the car seat distribution scheme supplied 33 families with child car seats. Following the conclusion of the program, staff reported that the program was relevant to their role. They also valued the car seat distribution scheme. Staff training in selection and installation of car seats increased confidence in staff knowledge.
Conclusions: We developed a program to promote child car seat use in ACCHS, which focused on developing capacity, made use of existing infrastructure and developed resources for use in this setting. The program shows promise as a means to promote child car seat use in Aboriginal communities; however, the impact on child car seat use will need to be evaluated in a larger scale prospective trial.
abstract_id: PUBMED:11982873
Effects of child seats on the cardiorespiratory function of newborns. Background: This study aims to determine the effect of differently positioned infant car seats on cardio-respiratory parameters in healthy full-term newborns.
Methods: We examined 15 healthy term newborns for respiratory compromise due to normal restraint in a recommended infant car seat. There are currently two types of car seats available in Japan: a chair-shaped car seat and a bed-shaped car seat. Using a sleep apnea recorder, we simultaneously monitored heart rate, percutaneous oxygen saturation, chest impedance and nasal airflow in infants placed in each of the car seats and also placed in the supine position on a nursery cot. Episodes of oxygen desaturation below 95% and longer than 10 s (mild desaturation) and below 90% longer and than 10 s (moderate desaturation) were evaluated over 30 min observation period.
Results: The amount of time infants spent in a sleep state was significantly longer in the car seats than it was on the cot (P = 0.0015 for bed-shaped, P = 0.0012 for chair-shaped) and there was no difference in this measure between the two types of car safety seats. Mean of oxygen saturation with the chair-shaped car seat (95.8%) was significantly lower than that with the bed-shaped car seat (98.8%) (P = 0.0008). Newborn infants laid on the cot showed no episodes of desaturation. Newborn infants placed in the chair-shaped car seat had significantly more episodes of mild desaturation (mean, 7.33 times in nine of 15 infants), whereas in the bed-shaped seat observed only once each in two infants (P = 0.008). Moderate desaturation was observed in four of 15 infants in the chair-shaped car seat, whereas not observed in the bed-shaped car seat (P = 0.068).
Conclusion: The results suggest that prior to discharge the degree of oxygen desaturation that occurs when an infant is placed in a chair-style car seat should be checked.
abstract_id: PUBMED:36096214
High levels of synthetic antioxidants and ultraviolet filters in children's car seats. Forty-seven compounds among synthetic phenolic and amino antioxidants and ultraviolet filters, three suites of widely used chemical additives, were measured in eighteen popular children's car seats (fabric, foam, and laminated composites of both layers) marketed in the United States in 2018. Significantly higher levels of target compounds were found in foam and composite samples than in fabric samples. Median total concentrations of phenolic antioxidants and their transformation products ranged from 8.11 μg/g in fabric to 213 μg/g in foam In general, isooctyl 3-(3,5-di-tert-butyl-4-hydroxyphenyl) propionate (AO-1135) and 2,4-di-tert-butylphenol (24-DBP) were the most abundant among all target compounds with maximum levels of526 μg/g in composite and 13.7 μg/g, respectively. The total concentrations of amino antioxidants and their transformation products and of ultraviolet filters were at least one order of magnitude lower than those of phenolic antioxidants, with medians of 0.15-37.1 μg/g and 0.29-1.81 μg/g, respectively, in which the predominant congeners were 4-tert-butyl diphenylamine (BDPA), 4,4'-di-tert-butyl diphenylamine (DBDPA), 4-tert-octyl diphenylamine (ODPA), 2,4-dihydroxybenzophenone (BP-1), 2-hydroxy-4-methoxybenzophenone (BP-3), and 2-(2-benzotriazol-2-yl)-4-methylphenol (UV-P). Large variabilities in usage of these chemicals resulted in different compositional patterns among the car seats. These results suggest that these compounds are major polymeric additives in children's car seats as they are present at greater levels than previously measured groups of chemicals like brominated flame retardants and per- and polyfluoroalkyl substances. Given the documented toxic potentials of synthetic antioxidants and ultraviolet filters, their abundances in children products are a cause for concern.
abstract_id: PUBMED:2769505
Ventilatory changes in convalescent infants positioned in car seats. Because premature infants have been shown to be at risk for hypoxia and bradycardia when positioned in standard car seats, this study was done to confirm this finding in a larger sample, to investigate convalescent term infants in the neonatal intensive care unit for respiratory compromise in car seats, and to determine the physiologic mechanism or mechanisms responsible. Extensive multichannel polygraph recordings were obtained and pulmonary function tests were performed on 50 convalescent infants from the neonatal intensive care unit before, during, and after placement in a Cosco-Peterson First Ride car seat. Mean total dynamic compliance, total pulmonary resistance, and work of breathing improved in the car seat. Thirty percent of premature infants experienced hypoxia, bradycardia, or both in a car seat; in this group, tidal volume was lower (p = 0.02). In 11 of 16 infants with abnormal findings, oxygen desaturation was temporally related to episodes of short and mixed apnea. No term convalescent infant experienced respiratory difficulty in a car seat regardless of primary diagnosis. We conclude that premature infants may have respiratory compromise of a multifactorial nature when in car seats. Further development of car seats is necessary if such respiratory problems are to be avoided.
abstract_id: PUBMED:11533331
Respiratory instability of term and near-term healthy newborn infants in car safety seats. Objective: Premature infants who are discharged from intensive care nurseries are known to be at increased risk for apnea, bradycardia, and oxygen desaturation while in the upright position. These small infants also do not fit securely in standard infant car seats. Because of these problems, the American Academy of Pediatrics recommends a period of observation in a car seat for all infants who are born at <37 weeks' gestation. It is not clear whether this recommendation should apply to the minimally preterm infants (born at 35-36 weeks' gestation) who are healthy at birth and are hospitalized in the normal newborn nursery. The objective of this study was to evaluate the respiratory stability and safety requirements of healthy, minimally preterm infants in car seats compared with term infants.
Methods: Fifty healthy, nonmonitored, preterm infants (mean gestational age: 35.8 +/- 0.6 weeks) and 50 term infants (mean gestational age: 39.5 +/- 1.4 weeks) were recruited from a level I newborn nursery in a community hospital. Appropriateness of car seat fit was documented for each infant. Heart rate, respiratory rate, and pulse oximetry were evaluated while infants were supine and in their car seats. Apneic and bradycardic events were recorded in addition to a continuous recording of oxygen saturation values.
Results: Twenty-four percent of preterm and 4% of term newborn infants did not fit securely into suitable car seats despite the use of blanket rolls. Mean oxygen saturation values declined significantly in both preterm and term infants from 97% in the supine position (range: 92%-100%) to 94% after 60 minutes in their car seats (range: 87%-100%). Seven infants (3 preterm and 4 term) had oxygen saturation values of <90% for longer than 20 minutes in their car seats. Twelve percent of the preterm infants (95% confidence interval: 4.5%-24.3%) but no term infants had apneic or bradycardic events in their car seats.
Conclusions: Our data support the current American Academy of Pediatrics recommendations that all infants who are born at <37 weeks' gestation, including those who are admitted to level I community hospitals, be observed for respiratory instability and secure fit in their car seats before hospital discharge. Because lowering of oxygen saturation values was seen uniformly in all newborn infants, car seats should be used only for travel, and travel should be minimized during the first months of life.
Answer: Yes, episodes of haemoglobin desaturation in full-term babies restrained in car seats can be reduced. Studies have shown that using a simple foam plastic infant car seat insert designed to push the body forward and allow the newborn's head to lie in a neutral position can reduce the frequency of mild episodes of reduced SpO2 in some full-term newborn babies (PUBMED:18052992). Additionally, a randomized controlled trial found that such an insert was associated with a significant reduction in the rate of obstructive apnea, the severity of desaturation events, and the time with SpO2 <85%, although it did not reduce the overall rate of moderate desaturations (PUBMED:23858423). These findings suggest that car seat inserts that help maintain the infant's head in a neutral position can be beneficial in reducing the severity and frequency of desaturation events in full-term infants. |
Instruction: Does integrating nonurgent, clinically significant radiology alerts within the electronic health record impact closed-loop communication and follow-up?
Abstracts:
abstract_id: PUBMED:26335982
Does integrating nonurgent, clinically significant radiology alerts within the electronic health record impact closed-loop communication and follow-up? Objective: To assess whether integrating critical result management software--Alert Notification of Critical Results (ANCR)--with an electronic health record (EHR)-based results management application impacts closed-loop communication and follow-up of nonurgent, clinically significant radiology results by primary care providers (PCPs).
Materials And Methods: This institutional review board-approved study was conducted at a large academic medical center. Postintervention, PCPs could acknowledge nonurgent, clinically significant ANCR-generated alerts ("alerts") within ANCR or the EHR. Primary outcome was the proportion of alerts acknowledged via EHR over a 24-month postintervention. Chart abstractions for a random sample of alerts 12 months preintervention and 24 months postintervention were reviewed, and the follow-up rate of actionable alerts (eg, performing follow-up imaging, administering antibiotics) was estimated. Pre- and postintervention rates were compared using the Fisher exact test. Postintervention follow-up rate was compared for EHR-acknowledged alerts vs ANCR.
Results: Five thousand nine hundred and thirty-one alerts were acknowledged by 171 PCPs, with 100% acknowledgement (consistent with expected ANCR functionality). PCPs acknowledged 16% (688 of 4428) of postintervention alerts in the EHR, with the remaining in ANCR. Follow-up was documented for 85 of 90 (94%; 95% CI, 88%-98%) preintervention and 79 of 84 (94%; 95% CI, 87%-97%) postintervention alerts (P > .99). Postintervention, 11 of 14 (79%; 95% CI, 52%-92%) alerts were acknowledged via EHR and 68 of 70 (97%; 95% CI, 90%-99%) in ANCR had follow-up (P = .03).
Conclusions: Integrating ANCR and EHR provides an additional workflow for acknowledging nonurgent, clinically significant results without significant change in rates of closed-loop communication or follow-up of alerts.
abstract_id: PUBMED:25796594
Linking acknowledgement to action: closing the loop on non-urgent, clinically significant test results in the electronic health record. Failure to follow-up nonurgent, clinically significant test results (CSTRs) is an ambulatory patient safety concern. Tools within electronic health records (EHRs) may facilitate test result acknowledgment, but their utility with regard to nonurgent CSTRs is unclear. We measured use of an acknowledgment tool by 146 primary care physicians (PCPs) at 13 network-affiliated practices that use the same EHR. We then surveyed PCPs to assess use of, satisfaction with, and desired enhancements to the acknowledgment tool. The rate of acknowledgment of non-urgent CSTRs by PCPs was 78%. Of 73 survey respondents, 72 reported taking one or more actions after reviewing a CSTR; fewer (40-75%) reported that using the acknowledgment tool was helpful for a specific purpose. Forty-six (64%) were satisfied with the tool. Both satisfied and nonsatisfied PCPs reported that enhancements linking acknowledgment to routine actions would be useful. EHR vendors should consider enhancements to acknowledgment functionality to ensure follow-up of nonurgent CSTRs.
abstract_id: PUBMED:31775085
Radiology report alerts - are emailed 'Fail-Safe' alerts acknowledged and acted upon? Background: Guidelines from the Royal College of Radiologists and National Patient Safety Agency highlight the crucial importance of "fail-safe" alert systems for the communication of critical and significant clinically unexpected results between imaging departments and referring clinicians. Electronic alert systems are preferred, to minimise errors, increase workflow efficiency and improve auditability. To date there is a paucity of evidence on the utility of such systems. We investigated i) how often emailed radiology alerts were acknowledged by referring clinicians, ii) how frequently follow-up imaging was requested when indicated and iii) whether practise improved after an educational intervention.
Methods: 100 cases were randomly selected before and after an educational intervention at a tertiary referral centre in London, where the email-based 'RadAlert' system (Rivendale Systems, UK) has been in operation since May 2017.
Results: Following educational intervention, 'accepted' alerts increased from 39% to 56%, 'abandoned' alerts reduced from 55% to 37% and 'declined' alerts decreased from 5% to 3%. There was evidence to confirm that, when indicated, further imaging had been requested for 78% of all alerts, 78% of 'accepted' alerts and 76% of 'abandoned' alerts both before and after educational intervention.
Conclusions: Acknowledgment of report alerts by referring clinicians increased after departmental education / governance meetings. However, a proportion of email alerts remained unacknowledged. It is incumbent on reporting radiologists to be aware that electronic alert systems cannot be solely relied upon and to take the necessary steps to ensure significant and clinically unsuspected findings are relayed to referring clinical teams in a timely manner.
abstract_id: PUBMED:36287625
Predictors of Completion of Clinically Necessary Radiologist-Recommended Follow-Up Imaging: Assessment Using an Automated Closed-Loop Communication and Tracking Tool. BACKGROUND. Patients with adverse social determinants of health may be at increased risk of not completing clinically necessary follow-up imaging. OBJECTIVE. The purpose of this study was to use an automated closed-loop communication and tracking tool to identify patient-, referrer-, and imaging-related factors associated with lack of completion of radiologist-recommended follow-up imaging. METHODS. This retrospective study was performed at a single academic health system. A tool for automated communication and tracking of radiologist-recommended follow-up imaging was embedded in the PACS and electronic health record. The tool prompted referrers to record whether they deemed recommendations to be clinically necessary and assessed whether clinically necessary follow-up imaging was pursued. If imaging was not performed within 1 month after the intended completion date, the tool prompted a safety net team to conduct further patient and referrer follow-up. The study included patients for whom a follow-up imaging recommendation deemed clinically necessary by the referrer was entered with the tool from October 21, 2019, through June 30, 2021. The electronic health record was reviewed for documentation of eventual completion of the recommended imaging at the study institution or an outside institution. Multivariable logistic regression analysis was performed to identify factors associated with completion of follow-up imaging. RESULTS. Of 5856 recommendations entered during the study period, the referrer agreed with 4881 recommendations in 4599 patients (2929 women, 1670 men; mean age, 61.3 ± 15.6 years), who formed the study sample. Follow-up was completed for 74.8% (3651/4881) of recommendations. Independent predictors of lower likelihood of completing follow-up imaging included living in a socioeconomically disadvantaged neighborhood according to the area deprivation index (odds ratio [OR], 0.67 [95% CI, 0.54-0.84]), inpatient (OR, 0.25 [95% CI, 0.20-0.32]) or emergency department (OR, 0.09 [95% CI, 0.05-0.15]) care setting, and referrer surgical specialty (OR, 0.70 [95% CI, 0.58-0.84]). Patient age, race and ethnicity, primary language, and insurance status were not independent predictors of completing follow-up (p > .05). CONCLUSION. Socioeconomically disadvantaged patients are at increased risk of not completing recommended follow-up imaging that referrers deem clinically necessary. CLINICAL IMPACT. Initiatives for ensuring completion of follow-up imaging should be aimed at the identified patient groups to reduce disparities in missed and delayed diagnoses.
abstract_id: PUBMED:30779667
Adoption of a Closed-Loop Communication Tool to Establish and Execute a Collaborative Follow-Up Plan for Incidental Pulmonary Nodules. OBJECTIVE. The purpose of this study is to assess radiologists' adoption of a closed-loop communication and tracking system, Result Alert and Development of Automated Resolution (RADAR), for incidental pulmonary nodules and to measure its effect on the completeness of radiologists' follow-up recommendations. MATERIALS AND METHODS. This retrospective study was performed at a tertiary academic center that performs more than 600,000 radiology examinations annually. Before RADAR, the institution's standard of care was for radiologists to generate alerts for newly discovered incidental pulmonary nodules using a previously described PACS-embedded software tool. RADAR is a new closed-loop communication tool embedded in the PACS and enterprise provider workflow that enables establishing a collaborative follow-up plan between a radiologist and referring provider and helps automate collaborative follow-up plan tracking and execution. We assessed RADAR adoption for incidental pulmonary nodules, the primary outcome, in our thoracic radiology division (study period March 9, 2018, through August 2, 2018). The secondary outcome was the completeness of follow-up recommendation for incidental pulmonary nodules, defined as explicit imaging modality and time frame for follow-up. RESULTS. After implementation, 106 of 183 (58%) incidental pulmonary nodules alerts were generated using RADAR. RADAR adoption increased by 75% during the study period (40% in the first 3 weeks vs 70% in the last 3 weeks; p < 0.001 test for trend). All RADAR alerts had explicit documentation of imaging modality and follow-up time frame, compared with 71% for non-RADAR alerts for incidental pulmonary nodules (p < 0.001). CONCLUSION. A closed-loop communication system that enables establishing and executing a collaborative follow-up plan for incidental pulmonary nodules can be adopted and improves the quality of radiologists' follow-up recommendations.
abstract_id: PUBMED:37681820
Closed-Loop Medication Management with an Electronic Health Record System in U.S. and Finnish Hospitals. Many medication errors in the hospital setting are due to manual, error-prone processes in the medication management system. Closed-loop Electronic Medication Management Systems (EMMSs) use technology to prevent medication errors by replacing manual steps with automated, electronic ones. As Finnish Helsinki University Hospital (HUS) establishes its first closed-loop EMMS with the new Epic-based Electronic Health Record system (APOTTI), it is helpful to consider the history of a more mature system: that of the United States. The U.S. approach evolved over time under unique policy, economic, and legal circumstances. Closed-loop EMMSs have arrived in many U.S. hospital locations, with myriad market-by-market manifestations typical of the U.S. healthcare system. This review describes and compares U.S. and Finnish hospitals' EMMS approaches and their impact on medication workflows and safety. Specifically, commonalities and nuanced differences in closed-loop EMMSs are explored from the perspectives of the care/nursing unit and hospital pharmacy operations perspectives. As the technologies are now fully implemented and destined for evolution in both countries, perhaps closed-loop EMMSs can be a topic of continued collaboration between the two countries. This review can also be used for benchmarking in other countries developing closed-loop EMMSs.
abstract_id: PUBMED:37307897
Impact of an Automated Closed-Loop Communication and Tracking Tool on the Rate of Recommendations for Additional Imaging in Thoracic Radiology Reports. Objective: Assess the effects of feedback reports and implementing a closed-loop communication system on rates of recommendations for additional imaging (RAIs) in thoracic radiology reports.
Methods: In this retrospective, institutional review board-approved study at an academic quaternary care hospital, we analyzed 176,498 thoracic radiology reports during a pre-intervention (baseline) period from April 1, 2018, to November 30, 2018; a feedback report only period from December 1, 2018, to September 30, 2019; and a closed-loop communication system plus feedback report (IT intervention) period from October 1, 2019, to December 31, 2020, promoting explicit documentation of rationale, time frame, and imaging modality for RAI, defined as complete RAI. A previously validated natural language processing tool was used to classify reports with an RAI. Primary outcome of rate of RAI was compared using a control chart. Multivariable logistic regression determined factors associated with likelihood of RAI. We also estimated the completeness of RAI in reports comparing IT intervention to baseline using χ2 statistic.
Results: The natural language processing tool classified 3.2% (5,682 of 176,498) reports as having an RAI; 3.5% (1,783 of 51,323) during the pre-intervention period, 3.8% (2,147 of 56,722) during the feedback report only period (odds ratio: 1.1, P = .03), and 2.6% (1,752 of 68,453) during the IT intervention period (odds ratio: 0.60, P < .001). In subanalysis, the proportion of incomplete RAI decreased from 84.0% (79 of 94) during the pre-intervention period to 48.5% (47 of 97) during the IT intervention period (P < .001).
Discussion: Feedback reports alone increased RAI rates, and an IT intervention promoting documentation of complete RAI in addition to feedback reports led to significant reductions in RAI rate, incomplete RAI, and improved overall completeness of the radiology recommendations.
abstract_id: PUBMED:29874687
Semiautomated System for Nonurgent, Clinically Significant Pathology Results. Background: Failure of timely test result follow-up has consequences including delayed diagnosis and treatment, added costs, and potential patient harm. Closed-loop communication is key to ensure clinically significant test results (CSTRs) are acknowledged and acted upon appropriately. A previous implementation of the Alert Notification of Critical Results (ANCR) system to facilitate closed-loop communication of imaging CSTRs yielded improved communication of critical radiology results and enhanced adherence to institutional CSTR policies.
Objective: This article extends the ANCR application to pathology and evaluates its impact on closed-loop communication of new malignancies, a common and important type of pathology CSTR.
Materials And Methods: This Institutional Review Board-approved study was performed at a 150-bed community, academically affiliated hospital. ANCR was adapted for pathology CSTRs. Natural language processing was used on 30,774 pathology reports 13 months pre- and 13 months postintervention, identifying 5,595 reports with malignancies. Electronic health records were reviewed for documented acknowledgment for a random sample of reports. Percent of reports with documented acknowledgment within 15 days assessed institutional policy adherence. Time to acknowledgment was compared pre- versus postintervention and postintervention with and without ANCR alerts. Pathologists were surveyed regarding ANCR use and satisfaction.
Results: Acknowledgment within 15 days was documented for 98 of 107 (91.6%) pre- and 89 of 103 (86.4%) postintervention reports (p = 0.2294). Median time to acknowledgment was 7 days (interquartile range [IQR], 3, 11) preintervention and 6 days (IQR, 2, 10) postintervention (p = 0.5083). Postintervention, median time to acknowledgment was 2 days (IQR, 1, 6) for reports with ANCR alerts versus 6 days (IQR, 2.75, 9) for reports without alerts (p = 0.0351). ANCR alerts were sent on 15 of 103 (15%) postintervention reports. All pathologists reported that the ANCR system positively impacted their workflow; 75% (three-fourths) felt that the ANCR system improved efficiency of communicating CSTRs.
Conclusion: ANCR expansion to facilitate closed-loop communication of pathology CSTRs was favorably perceived and associated with significant improved time to documented acknowledgment for new malignancies. The rate of adherence to institutional policy did not improve.
abstract_id: PUBMED:25467899
Radiology reporting: a closed-loop cycle from order entry to results communication. With the increasing prevalence of PACS over the past decade, face-to-face image review among health care providers has become a rarity. This change has resulted in increasing dependence on fast and accurate communication in radiology. Turnaround time expectations are now conveyed in minutes rather than hours or even days. Ideal modern radiology communication is a closed-loop cycle with multiple interoperable applications contributing to the final product. The cycle starts with physician order entry, now often performed through the electronic medical record, with clinical decision support to ensure that the most effective imaging study is ordered. Radiology reports are now almost all in electronic format. The majority are produced using speech recognition systems. Optimization of this software use can alleviate some, if not all, of the inherent user inefficiencies in this type of reporting. Integrated third-party software applications that provide data mining capability are extremely helpful in both academic and clinical settings. The closed-loop ends with automated communication of imaging results. Software products for this purpose should facilitate use of levels of alert, automated escalation to providers, and recording of audit trails of reports received. The multiple components of reporting should be completely interoperable with each other, as well as with the PACS, the RIS, and the electronic medical record. This integration will maximize radiologist efficiency and minimize the possibility of communication error.
abstract_id: PUBMED:34374592
Electronic Health Record Closed-Loop Communication Program for Unexpected Nonemergent Findings. Background Reliance on examination reporting of unexpected imaging findings does not ensure receipt of findings or appropriate follow-up. A closed-loop communication system should include provider and patient notifications and be auditable through the electronic health record (EHR). Purpose To report the initial design of and results from using an EHR-integrated unexpected findings navigator (UFN) program that ensures closed-loop communication of unexpected nonemergent findings. Materials and Methods An EHR-integrated UFN program was designed to enable identification and communication of unexpected findings and aid in next steps in findings management. Three navigators (with prior training as radiologic technologists and sonographers) facilitated communication and documentation of results to providers and patients. Twelve months (October 2019 to October 2020) of results were retrospectively reviewed to evaluate patient demographics and program metrics. Descriptive statistics and correlation analysis were performed by using commercially available software. Results A total of 3542 examinations were reported within 12 months, representing 0.5% of all examinations performed (total of 749 649); the median patient age was 62 years (range, 1 day to 98 years; interquartile range, 23 years). Most patients were female (2029 of 3542 [57%]). Almost half of the examinations submitted were from chest radiography and CT (1618 of 3542 [46%]), followed by MRI and CT of the abdomen and pelvis (1123 of 3542 [32%]). The most common unexpected findings were potential neoplasms (391 of 3542 [11%]). The median time between examination performance and patient notification was 12 days (range, 0-136 days; interquartile range, 13 days). A total of 2127 additional imaging studies were performed, and 1078 patients were referred to primary care providers and specialists. Most radiologists (89%, 63 of 71 respondents) and providers (65%, 28 of 43 respondents) found the system useful and used it most frequently during regular business hours. Conclusion An electronic health record-integrated, navigator-facilitated, closed-loop communication program for unexpected radiologic findings led to near-complete success in notification of providers and patients and facilitated the next steps in findings management. © RSNA, 2021 See also the editorial by Safdar in this issue.
Answer: Integrating nonurgent, clinically significant radiology alerts within the electronic health record (EHR) does impact closed-loop communication and follow-up, but the extent of the impact varies based on the specific system and context in which it is implemented.
For instance, a study assessing the integration of critical result management software (Alert Notification of Critical Results, or ANCR) with an EHR-based results management application found that providing an additional workflow for acknowledging nonurgent, clinically significant results did not significantly change the rates of closed-loop communication or follow-up of alerts. Primary care providers (PCPs) acknowledged 16% of postintervention alerts in the EHR, with the remaining in ANCR. Follow-up was documented for 94% of alerts both preintervention and postintervention, indicating that the integration provided an additional acknowledgment workflow without significantly affecting follow-up rates (PUBMED:26335982).
Another study on the use of an acknowledgment tool within an EHR for nonurgent, clinically significant test results (CSTRs) found that the rate of acknowledgment by PCPs was 78%. The study suggested that enhancements linking acknowledgment to routine actions would be useful, as both satisfied and nonsatisfied PCPs reported that such enhancements could ensure follow-up of nonurgent CSTRs (PUBMED:25796594).
Furthermore, the implementation of an automated closed-loop communication and tracking tool was shown to identify patient-, referrer-, and imaging-related factors associated with the lack of completion of radiologist-recommended follow-up imaging. This tool prompted referrers to record whether they deemed recommendations to be clinically necessary and assessed whether clinically necessary follow-up imaging was pursued (PUBMED:36287625).
In summary, integrating nonurgent, clinically significant radiology alerts within the EHR can provide additional workflows for acknowledgment and has the potential to support closed-loop communication and follow-up. However, the effectiveness of such integrations may depend on the specific features of the system, the context of its use, and whether it includes functionalities that link acknowledgment to action. |
Instruction: Validity of the finger tapping test in Parkinson's disease, elderly and young healthy subjects: is there a role for central fatigue?
Abstracts:
abstract_id: PUBMED:22560636
Validity of the finger tapping test in Parkinson's disease, elderly and young healthy subjects: is there a role for central fatigue? Objective: The main goal of this work is to evaluate the validity of the finger tapping test (FT) to detect alterations in rhythm formation.
Methods: We use FT to study the alterations in motor rhythm in three different groups: Parkinson's patients, elderly healthy controls, and young healthy control subjects (HY). The test was performed in COMFORT and FAST tapping modes and repeated on two different days.
Results: For the variables analyzed (frequency and variability) both modes were repeatable in all groups. Also, intra-class correlation coefficients showed excellent levels of consistency between days. The test clearly differentiated the groups in both FAST and COMFORT modes. However, when fatigue was analyzed, a decrease in the tapping frequency was observed in HY during the FAST mode only. The amplitude of motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS) was early-potentiated but not delayed-depressed, both for COMFORT and FAST modes. This suggests that fatigue was not of cortico-spinal origin. Other forms of central fatigue are discussed.
Conclusions: FT at FAST mode is not a valid test to detect differences in rhythm formation across the groups studied; fatigue is a confounding variable in some groups if the test is performed as fast as possible.
Significance: COMFORT mode is recommended in protocols including the FT for evaluating rhythm formation.
abstract_id: PUBMED:24332155
A computer vision framework for finger-tapping evaluation in Parkinson's disease. Objectives: The rapid finger-tapping test (RFT) is an important method for clinical evaluation of movement disorders, including Parkinson's disease (PD). In clinical practice, the naked-eye evaluation of RFT results in a coarse judgment of symptom scores. We introduce a novel computer-vision (CV) method for quantification of tapping symptoms through motion analysis of index-fingers. The method is unique as it utilizes facial features to calibrate tapping amplitude for normalization of distance variation between the camera and subject.
Methods: The study involved 387 video footages of RFT recorded from 13 patients diagnosed with advanced PD. Tapping performance in these videos was rated by two clinicians between the symptom severity levels ('0: normal' to '3: severe') using the unified Parkinson's disease rating scale motor examination of finger-tapping (UPDRS-FT). Another set of recordings in this study consisted of 84 videos of RFT recorded from 6 healthy controls. These videos were processed by a CV algorithm that tracks the index-finger motion between the video-frames to produce a tapping time-series. Different features were computed from this time series to estimate speed, amplitude, rhythm and fatigue in tapping. The features were trained in a support vector machine (1) to categorize the patient group between UPDRS-FT symptom severity levels, and (2) to discriminate between PD patients and healthy controls.
Results: A new representative feature of tapping rhythm, 'cross-correlation between the normalized peaks' showed strong Guttman correlation (μ2=-0.80) with the clinical ratings. The classification of tapping features using the support vector machine classifier and 10-fold cross validation categorized the patient samples between UPDRS-FT levels with an accuracy of 88%. The same classification scheme discriminated between RFT samples of healthy controls and PD patients with an accuracy of 95%.
Conclusion: The work supports the feasibility of the approach, which is presumed suitable for PD monitoring in the home environment. The system offers advantages over other technologies (e.g. magnetic sensors, accelerometers, etc.) previously developed for objective assessment of tapping symptoms.
abstract_id: PUBMED:31084200
Cancer-Related Fatigue: Perception of Effort or Task Failure? Context: Patient's rating of perceived effort (RPE) is used to assess central fatigue. Cancer-related fatigue (CRF) is believed to be of central origin. The increased RPE with a motor task, such as the Finger-Tapping Test (FTT), can easily be measured in the clinical setting.
Objectives: To correlate the FTT, RPE and the Brief Fatigue Inventory (BFI) rated fatigue severity in patients with cancer.
Methods: Subjective fatigue was assessed in adult patients with cancer by the BFI. Participants performed a modified FTT with the index finger of the dominant hand: 15 seconds × 2, 30 seconds × 2, and 60 seconds × 2 with 1 minute of rest between each time trial. Rating of perceived effort at the end of task was measured by the Borg 10 scale.
Exclusions: Brain metastasis, history of brain radiation, Parkinson disease, Huntington Chorea, multiple sclerosis, delirium, and depression. Pearson correlation coefficients were used to describe the relationships between BFI, FTT, and Borg 10 scale.
Results: Thirty patients participated. Mean age was 56.2. Sixteen were females (53.3%). The mean BFI mean was 4.1, median 4.4. Tapping rate did not correlate with fatigue severity. The RPE correlated with the mean BFI: rs 0.438, P = .0155. These correlations persisted after adjustment for age.
Conclusion: An increased RPE in the absence of task failure suggests that the origin of CRF is central. The performance of an FTT with RPE helps to improve our understanding of fatigue in the clinical setting.
abstract_id: PUBMED:19526228
Rapid slowing of maximal finger movement rate: fatigue of central motor control? Exploring the limits of the motor system can provide insights into the mechanisms underlying performance deterioration, such as force loss during fatiguing isometric muscle contraction, which has been shown to be due to both peripheral and central factors. However, the role of central factors in performance deterioration during dynamic tasks has received little attention. We studied index finger flexion/extension movement performed at maximum voluntary rate (MVR) in ten healthy subjects, measuring movement rate and amplitude over time, and performed measures of peripheral fatigue. During 20 s finger movements at MVR, there was a decline in movement rate beginning at 7-9 s and continuing until the end of the task, reaching 73% of baseline (P < 0.001), while amplitude remained unchanged. Isometric maximum voluntary contraction force and speed of single ballistic flexion and extension finger movements remained unchanged after the task, indicating a lack of peripheral fatigue. The timing of finger flexor and extensor EMG burst activity changed during the task from an alternating flexion/extension pattern to a less effective co-contraction pattern. Overall, these findings suggest a breakdown of motor control rather than failure of muscle force generation during an MVR task, and therefore that the mechanisms underlying the early decline in movement rate are central in origin.
abstract_id: PUBMED:24351667
Automatic and objective assessment of alternating tapping performance in Parkinson's disease. This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.
abstract_id: PUBMED:36524614
Determination of five times-sit-to-stand test performance in patients with multiple sclerosis: validity and reliability. Purpose/aim: Although Five Times-Sit-To-Stand test (FTSST) performance is known to be a valid and reliable method in people with chronic stroke, Parkinson's disease, and balance disorder, it has not been widely studied in patients with Multiple sclerosis (MS). The main aim of this study was to evaluate validity and reliability of the FTSST in patients with MS.
Methods: The first outcome measure of the study was the FTSST, which was conducted by two different researchers. Secondary outcome measures were Biodex Stability System (BSS), 10-meter walk test, time up go test (TUG), EDSS scoring, Fatigue Severity Scale (FSS), Barthel Index, Quadriceps Muscle strength test, Functional Reach test. Intraclass correlation coefficient (ICC) was used for the validity and reliability of the FTSST, which was made by two different researchers, and Pearson Correlation Analysis was used to determine its relationship with other measurements.
Results: Interrater and test-retest reliability for the FTSST were excellent (Intraclass correlation coefficients of 0.98 and 0.99, respectively). A statistically significant correlation was found between all secondary outcome measures and FTSST (p < 0.05).
Conclusion: FTSST is considered to be a valid, reliable, easy, and rapid method for evaluating lower extremity muscle strength and balance in patients with MS.
abstract_id: PUBMED:30617451
The Parkinson fatigue scale: an evaluation of its validity and reliability in Greek Parkinson's disease patients. Objective: Fatigue is one of the most frequent and important nonmotor symptoms of patients with Parkinson disease (PD), affecting quality of life. Although, in some cases, it may be a severe and debilitating complaint, it remains relatively unexplored. The PFS-16 is a fatigue measure, specifically designed for PD patients. The aim of this study was to investigate the psychometric properties of Parkinson fatigue scale (PFS-16) in Greek PD patients.
Methods: In total, 99 patients with PD were assessed. The following psychometric properties were tested: data quality, floor/ceiling effects, reliability (internal consistency, test-retest reliability), and construct validity. Construct validity was evaluated by examining correlations with other variables including other fatigue measures such as Fatigue Severity Scale (FSS) and the vitality scale (SF-VT) of SF-36. Moreover, assumptions were explored about "known" groups concerning fatigue.
Results: The mean score for the PFS-16 was 2.95 (± 0.91); acceptability was good with negligible floor and ceiling effects. Results showed high internal consistency (Cronbach's alpha, 0.96) and test-retest reliability (ICC, 0.93). Strong correlations were observed between the PFS-16 and other fatigue (FFS and SF-VT) measures (rs = 0.77 and - 0.70, p < 0.001), revealing appropriate validity. Furthermore, predictions for "known" groups validity were verified.
Conclusion: The Greek version of the PFS-16 showed satisfactory reliability and validity and thus can be regarded as a useful tool in assessing fatigue in PD.
abstract_id: PUBMED:19620846
Using modafinil to treat fatigue in Parkinson disease: a double-blind, placebo-controlled pilot study. Background: Fatigue is a major nonmotor symptom in Parkinson disease(PD). It is associated with reduced activity and lower quality of life.
Objective: To determine if modafinil improves subjective fatigue and physical fatigability in PD.
Methods: Nineteen PD patients who reported significant fatigue in the Multidimensional Fatigue Inventory (MFI) participated in this 8-week study. Subjects took their regular medications and were randomly assigned to the treatment group (9 subjects, modafinil 100-mg capsule BID) or placebo group (10 subjects). We used the MFI to measure subjective fatigue and used finger tapping and intermittent force generation to evaluate physical fatigability. Subjects also completed the Epworth Sleepiness Scale (ESS) and the Center of Epidemiological Study-Depression Scale.
Results: There were no significant differences at baseline and at 1 month in finger tapping and ESS between the modafinil and placebo groups. At 2 months, the modafinil group had a higher tapping frequency (P<0.05), shorter dwell time (P<0.05), and less fatigability in finger tapping and tended to have lower ESS scores (P<0.12) than the placebo group. However, there was no difference between groups over time for any dimension of the MFI .
Conclusions: This small study demonstrated that although modafinil may be effective in reducing physical fatigability in PD, it did not improve fatigue symptoms.
abstract_id: PUBMED:31453519
Turkish version of Parkinson Fatigue Scale: Validity and reliability study of binary scoring method. Objectives: The aim of the present study was to translate and cross-culturally adapt the Parkinson Fatigue Scale (PFS) into Turkish and to evaluate its reliability and validity.
Patients And Methods: Between September 2015 and May 2016, a total of 138 patients (84 males, 54 females; mean age 62.8±9.3 years; range, 42 to 83 years) with Parkinson's disease (PD) were included in this study. The Turkish version of the PFS was analyzed for data quality, scaling assumptions, acceptability, reliability, and validity. We used the binary scoring method of the Parkinson Fatigue Scale.
Results: The data quality for the Turkish version of the PFS was excellent. The scaling assumption was acceptable. The scale provided an acceptable internal consistency (Cronbach's alpha was 0.955 for a test and 0.941 for a retest, and corrected item-to-total correlations were ranged from 0.478 to 0.849. The test-retest reliability (correlation coefficients were ranged from 0.650 to 0.875) was adequate. Although the total binary score of the PFS was not associated with demographic and clinical data, it was significantly correlated with some of the clinical rating scale scores, including the Unified Parkinson's Disease Rating Scale, Schwab & England Activities of Daily Living Scale, Hospital Anxiety and Depression Scale, Epworth Sleepiness Scale, Pittsburg Sleep Quality Index, 36-item Short Form Health Survey, 39-item Parkinson's Disease Questionnaire, and Fatigue Severity Scale.
Conclusion: The Turkish version of the PFS is an acceptable, valid, and reliable tool for the assessment of fatigue in PD patients.
abstract_id: PUBMED:10634240
Quantitative digitography (QDG): a sensitive measure of digital motor control in idiopathic Parkinson's disease. This study introduces a new method for studying, quantitatively, the dynamics of finger movement using data obtained from sequences of key strikes on a computer-interfaced piano keyboard. We have called this quantitative digitography (QDG). This initial article introduces the method in a group of patients with Parkinson's disease and in a group of healthy subjects using simple, repetitive, alternating finger-tapping for 60 seconds. Patients with idiopathic Parkinson's disease (IPD) were studied "ON" and "OFF" dopaminergic medication before and after pallidotomy. Customized software allowed the independent analysis of key strike velocity, duration of key strike, and frequency of tapping along with a quantitative measure of the regularity of performance. Quantitative measures of the improvement in performance after medication are presented for each parameter of movement. The technique also reveals correlates of some clinical phenomena of the temporal disturbances of repetitive motion in IPD, such as fatigue, tremor, freezing, and festination. We demonstrate that the performance of 60 seconds of alternating finger tapping on a computerized keyboard yields objective measures of motor performance that are significantly different in patients with IPD "OFF" when compared with "ON" medication and when compared with healthy subjects. This is the first time that such a method has been used in the measurement of specific kinematics of digital motion in Parkinson's disease. The equipment is inexpensive and portable and the data are rapidly and easily collected, making it suitable for the outpatient setting.
Answer: The finger tapping test (FT) has been evaluated for its validity in detecting alterations in motor rhythm among different groups, including Parkinson's disease (PD) patients, elderly healthy controls, and young healthy controls. The test was performed in both COMFORT and FAST tapping modes and was found to be repeatable in all groups, with excellent levels of consistency between days. The FT differentiated the groups in both modes, but when fatigue was analyzed, a decrease in tapping frequency was observed in young healthy controls during the FAST mode only. The amplitude of motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS) was potentiated early but not depressed later, suggesting that fatigue was not of cortico-spinal origin. The study concluded that FT at FAST mode is not a valid test to detect differences in rhythm formation across the groups studied due to fatigue being a confounding variable in some groups if the test is performed as fast as possible. Therefore, the COMFORT mode is recommended for evaluating rhythm formation in protocols including the FT (PUBMED:22560636).
Additionally, a computer vision framework for finger-tapping evaluation in Parkinson's disease was introduced, which showed strong correlation with clinical ratings and high accuracy in categorizing patient samples between UPDRS-FT levels and discriminating between PD patients and healthy controls (PUBMED:24332155). Another study on cancer-related fatigue suggested that an increased rating of perceived effort (RPE) in the absence of task failure indicates that the origin of fatigue is central, which could be relevant to understanding fatigue in PD as well (PUBMED:31084200). Furthermore, a study on rapid slowing of maximal finger movement rate suggested that the mechanisms underlying the early decline in movement rate are central in origin (PUBMED:19526228).
In conclusion, the validity of the finger tapping test in Parkinson's disease, elderly, and young healthy subjects does seem to be influenced by central fatigue, particularly when the test is performed at a fast pace. The COMFORT mode of the FT is recommended to minimize the confounding effects of fatigue and to better evaluate rhythm formation (PUBMED:22560636). |
Instruction: Can community health officer-midwives effectively integrate skilled birth attendance in the community-based health planning and services program in rural Ghana?
Abstracts:
abstract_id: PUBMED:25518900
Can community health officer-midwives effectively integrate skilled birth attendance in the community-based health planning and services program in rural Ghana? Background: The burden of maternal mortality in sub-Saharan Africa is very high. In Ghana maternal mortality ratio was 380 deaths per 100,000 live births in 2013. Skilled birth attendance has been shown to reduce maternal mortality and morbidity, yet in 2010 only 68 percent of mothers in Ghana gave birth with the assistance of skilled birth attendants. In 2005, the Ghana Health Service piloted a strategy that involved using the integrated Community-based Health Planning and Services (CHPS) program and training Community Health Officers (CHOs) as midwives to address the gap in skilled attendance in rural Upper East Region (UER). The study assesses the feasibility of and extent to which the skilled delivery program has been implemented as an integrated component of the existing CHPS, and documents the benefits and challenges of the integrated program.
Methods: We employed an intrinsic case study design with a qualitative methodology. We conducted 41 in-depth interviews with health professionals and community stakeholders. We used a purposive sampling technique to identify and interview our respondents.
Results: The CHO-midwives provide integrated services that include skilled delivery in CHPS zones. The midwives collaborate with District Assemblies, Non-Governmental Organizations (NGOs) and communities to offer skilled delivery services in rural communities. They refer pregnant women with complications to district hospitals and health centers for care, and there has been observed improvement in the referral system. Stakeholders reported community members' access to skilled attendants at birth, health education, antenatal attendance and postnatal care in rural communities. The CHO-midwives are provided with financial and non-financial incentives to motivate them for optimal work performance. The primary challenges that remain include inadequate numbers of CHO-midwives, insufficient transportation, and infrastructure weaknesses.
Conclusions: Our study demonstrates that CHOs can successfully be trained as midwives and deployed to provide skilled delivery services at the doorsteps of rural households. The integration of the skilled delivery program with the CHPS program appears to be an effective model for improving access to skilled birth attendance in rural communities of the UER of Ghana.
abstract_id: PUBMED:25113017
Is there any role for community involvement in the community-based health planning and services skilled delivery program in rural Ghana? Background: In Ghana, between 1,400 and 3,900 women and girls die annually due to pregnancy related complications and an estimated two-thirds of these deaths occur in late pregnancy through to 48 hours after delivery. The Ghana Health Service piloted a strategy that involved training Community Health Officers (CHOs) as midwives to address the gap in skilled attendance in rural Upper East Region (UER). CHO-midwives collaborated with community members to provide skilled delivery services in rural areas. This paper presents findings from a study designed to assess the extent to which community residents and leaders participated in the skilled delivery program and the specific roles they played in its implementation and effectiveness.
Methods: We employed an intrinsic case study design with a qualitative methodology. We conducted 29 in-depth interviews with health professionals and community stakeholders. We used a random sampling technique to select the CHO-midwives in three Community-based Health Planning and Services (CHPS) zones for the interviews and a purposive sampling technique to identify and interview District Directors of Health Services from the three districts, the Regional Coordinator of the CHPS program and community stakeholders.
Results: Community members play a significant role in promoting skilled delivery care in CHPS zones in Ghana. We found that community health volunteers and traditional birth attendants (TBAs) helped to provide health education on skilled delivery care, and they also referred or accompanied their clients for skilled attendants at birth. The political authorities, traditional leaders, and community members provide resources to promote the skilled delivery program. Both volunteers and TBAs are given financial and non-financial incentives for referring their clients for skilled delivery. However, inadequate transportation, infrequent supply of drugs, attitude of nurses remains as challenges, hindering women accessing maternity services in rural areas.
Conclusions: Mutual collaboration and engagement is possible between health professionals and community members for the skilled delivery program. Community leaders, traditional and political leaders, volunteers, and TBAs have all been instrumental to the success of the CHPS program in the UER, each in their unique way. However, there are problems confronting the program and we have provided recommendations to address these challenges.
abstract_id: PUBMED:24721385
Using the community-based health planning and services program to promote skilled delivery in rural Ghana: socio-demographic factors that influence women utilization of skilled attendants at birth in northern Ghana. Background: The burden of maternal mortality in sub-Saharan Africa is enormous. In Ghana the maternal mortality ratio was 350 per 100,000 live births in 2010. Skilled birth attendance has been shown to reduce maternal deaths and disabilities, yet in 2010 only 68% of mothers in Ghana gave birth with skilled birth attendants. In 2005, the Ghana Health Service piloted an enhancement of its Community-Based Health Planning and Services (CHPS) program, training Community Health Officers (CHOs) as midwives, to address the gap in skilled attendance in rural Upper East Region (UER). The study determined the extent to which CHO-midwives skilled delivery program achieved its desired outcomes in UER among birthing women.
Methods: We conducted a cross-sectional household survey with women who had ever given birth in the three years prior to the survey. We employed a two stage sampling techniques: In the first stage we proportionally selected enumeration areas, and the second stage involved random selection of households. In each household, where there is more than one woman with a child within the age limit, we interviewed the woman with the youngest child. We collected data on awareness of the program, use of the services and factors that are associated with skilled attendants at birth.
Results: A total of 407 households/women were interviewed. Eighty three percent of respondents knew that CHO-midwives provided delivery services in CHPS zones. Seventy nine percent of the deliveries were with skilled attendants; and over half of these skilled births (42% of total) were by CHO-midwives. Multivariate analyses showed that women of the Nankana ethnic group and those with uneducated husbands were less likely to access skilled attendants at birth in rural settings.
Conclusions: The implementation of the CHO-midwife program in UER appeared to have contributed to expanded skilled delivery care access and utilization for rural women. However, women of the Nankana ethnic group and uneducated men must be targeted with health education to improve women utilizing skilled delivery services in rural communities of the region.
abstract_id: PUBMED:33951049
Assessing selection procedures and roles of Community Health Volunteers and Community Health Management Committees in Ghana's Community-based Health Planning and Services program. Background: Community participation in health care delivery will ensure service availability and accessibility and guarantee community ownership of the program. Community-based strategies such as the involvement of Community Health Volunteers (CHVs) and Community Health Management Committees (CHMCs) are likely to advance primary healthcare in general, but the criteria for selecting CHVs, CHMCs and efforts to sustain these roles are not clear 20 years after implementing the Community-based Health Planning Services program. We examined the process of selecting these cadres of community health workers and their current role within Ghana's flagship program for primary care-the Community-based Health Planning and Services program.
Methods: This was an exploratory study design using qualitative methods to appraise the health system and stakeholder participation in Community-based Health Planning and Services program implementation in the Upper East region of Ghana. We conducted 51 in-depth interviews and 33 focus group discussions with health professionals and community members.
Results: Community Health Volunteers and Community Health Management Committees are the representatives of the community in the routine implementation of the Community-based Health Planning and Services program. They are selected, appointed, or nominated by their communities. Some inherit the position through apprenticeship and others are recruited through advertisement. The selection is mostly initiated by the health providers and carried out by community members. Community Health Volunteers lead community mobilization efforts, support health providers in health promotion activities, manage minor illnesses, and encourage pregnant women to use maternal health services. Community Health Volunteers also translate health messages delivered by health providers to the people in their local languages. Community Health Management Committees mobilize resources for the development of Community-based Health Planning and Services program compounds. They play a mediatory role between health providers in the health compounds and the community members. Volunteers are sometimes given non-financial incentives but there are suggestions to include financial incentives.
Conclusion: Community Health Volunteers and Community Health Management Committees play a critical role in primary health care. The criteria for selecting Community Health Volunteers and Community Health Management Committees vary but need to be standardized to ensure that only self-motivated individuals are selected. Thus, CHVs and CHMCs should contest for their positions and be endorsed by their community members and assigned roles by health professionals in the CHPS zones. Efforts to sustain them within the health system should include the provision of financial incentives.
abstract_id: PUBMED:30443928
The influence of the Community-based Health Planning and Services (CHPS) program on community health sustainability in the Upper West Region of Ghana. Ghana introduced Community-based Health Planning and Services (CHPS) to improve primary health care in rural areas. The extension of health care services to rural areas has the potential to increase sustainability of community health. Drawing on the capitals framework, this study aims to understand the contribution of CHPS to the sustainability of community health in the Upper West Region of Ghana-the poorest region in the country. We conducted in-depth interviews with community members (n = 25), key informant interviews with health officials (n = 8), and focus group discussions (n = 12: made up of six to eight participants per group) in six communities from two districts. Findings show that through their mandate of primary health care provision, CHPS contributed directly to improvement in community health (eg, access to family planning services) and indirectly through strengthening social, human, and economic capital and thereby improving social cohesion, awareness of health care needs, and willingness to take action at the community level. Despite the current contributions of CHPS in improving the sustainability of community health, there are several challenges, based on which we recommend, that government should increase staffing and infrastructure in order to strengthen and maintain the functionality of CHPS.
abstract_id: PUBMED:34763699
Challenges to the utilization of Community-based Health Planning and Services: the views of stakeholders in Yendi Municipality, Ghana. Background: The Community-based Health Planning and Services (CHPS) is a national health reform programme that provides healthcare at the doorsteps of rural community members, particularly, women and children. It seeks to reduce health inequalities and promote equity of health outcomes. The study explored implementation and utilization challenges of the CHPS programme in the Northern Region of Ghana.
Methods: This was an observational study that employed qualitative methods to interview key informants covering relevant stakeholders. The study was guided by the systems theory. In all, 30 in-depth interviews were conducted involving 8 community health officers, 8 community volunteers, and 14 women receiving postnatal care in four (4) CHPS zones in the Yendi Municipality. The data were thematically analysed using Atlas.ti.v.7 software and manual coding system.
Results: The participants reported poor clinical attendance including delays in seeking health care, low antenatal and postnatal care visits. The barriers of the CHPS utilization include lack of transportation, poor road network, cultural beliefs (e.g. taboos of certain foods), proof of women's faithfulness to their husbands and absence of health workers. Other challenges were poor communication networks during emergencies, and inaccessibility of ambulance service. In seeking health care, insured members of the national health insurance scheme (NHIS) still pay for services that are covered by the NHIS. We found that the CHPS compounds lack the capacity to sterilize some of their equipment, lack of incentives for Community Health Officers and Community Health Volunteers and inadequate infrastructures such as potable water and electricity. The study also observed poor coordination of interventions, inadequate equipment and poor community engagement as setbacks to the progress of the CHPS policy.
Conclusions: Clinical attendance, timing and number of antenatal and postnatal care visits, remain major concerns for the CHPS programme in the study setting. The CHPS barriers include transportation, poor road network, cost of referrals, cultural beliefs, inadequate equipment, lack of incentives and poor community engagement. There is an urgent need to address these challenges to improve the utilization of CHPS compounds and to contribute to achieving the sustainable development goals.
abstract_id: PUBMED:25789874
Evaluating the impact of the community-based health planning and services initiative on uptake of skilled birth care in Ghana. Background: The Community-based Health Planning and Services (CHPS) initiative is a major government policy to improve maternal and child health and accelerate progress in the reduction of maternal mortality in Ghana. However, strategic intelligence on the impact of the initiative is lacking, given the persistant problems of patchy geographical access to care for rural women. This study investigates the impact of proximity to CHPS on facilitating uptake of skilled birth care in rural areas.
Methods And Findings: Data from the 2003 and 2008 Demographic and Health Survey, on 4,349 births from 463 rural communities were linked to georeferenced data on health facilities, CHPS and topographic data on national road-networks. Distance to nearest health facility and CHPS was computed using the closest facility functionality in ArcGIS 10.1. Multilevel logistic regression was used to examine the effect of proximity to health facilities and CHPS on use of skilled care at birth, adjusting for relevant predictors and clustering within communities. The results show that a substantial proportion of births continue to occur in communities more than 8 km from both health facilities and CHPS. Increases in uptake of skilled birth care are more pronounced where both health facilities and CHPS compounds are within 8 km, but not in communities within 8 km of CHPS but lack access to health facilities. Where both health facilities and CHPS are within 8 km, the odds of skilled birth care is 16% higher than where there is only a health facility within 8km.
Conclusion: Where CHPS compounds are set up near health facilities, there is improved access to care, demonstrating the facilitatory role of CHPS in stimulating access to better care at birth, in areas where health facilities are accessible.
abstract_id: PUBMED:28521740
Going to scale: design and implementation challenges of a program to increase access to skilled birth attendants in Nigeria. Background: The lack of availability of skilled providers in low- and middle- income countries is considered to be an important barrier to achieving reductions in maternal and child mortality. However, there is limited research on programs increasing the availability of skilled birth attendants in developing countries. We study the implementation of the Nigeria Midwives Service Scheme, a government program that recruited and deployed nearly 2,500 midwives to rural primary health care facilities across Nigeria in 2010. An outcome evaluation carried out by this team found only a modest impact on the use of antenatal care and no measurable impact on skilled birth attendance. This paper draws on perspectives of policymakers, program midwives, and community residents to understand why the program failed to have the desired impact.
Methods: We conducted semi-structured interviews with federal, state and local government policy makers and with MSS midwives. We also conducted focus groups with community stakeholders including community leaders and male and female residents.
Results: Our data reveal a range of design, implementation and operational challenges ranging from insufficient buy-in by key stakeholders at state and local levels, to irregular and in some cases total non-provision of agreed midwife benefits that likely contributed to the program's lack of impact. These challenges not only created a deep sense of dissatisfaction with the program but also had practical impacts on service delivery likely affecting households' uptake of services.
Conclusion: This paper highlights the challenge of effectively scaling up maternal and child health interventions. Our findings emphasize the critical importance of program design, particularly when programs are implemented at scale; the need to identify and involve key stakeholders during planning and implementation; the importance of clearly defining lines of authority and responsibility that align with existing structures; and the necessity for multi-faceted interventions that address multiple barriers at the same time.
abstract_id: PUBMED:17401450
Strategies of immunization in Ghana: the experience of a "community-based" health planning in a rural country Access to immunization of children and to prevention services is a relevant issue in poor-resource settings like rural areas of Western Africa. Ghana government has launched the "Community-based Health Planning and Services initiative-CHPS", a programme that by the institution of Local Clinics in small villages, the activity of their nurse, and the involvement of local communities and traditional institutions improves the access of population to primary care and prevention. Our survey in Jomoro district has confirmed that this model is effective in determining higher coverage rate for all the immunizations of the children.
abstract_id: PUBMED:28874157
Male involvement in maternal healthcare through Community- based Health Planning and Services: the views of the men in rural Ghana. Background: The need to promote maternal health in Ghana has committed the government to extend maternal healthcare services to the door steps of rural families through the community-based Health Planning and Services. Based on the concerns raised in previous studies that male spouses were indifferent towards maternal healthcare, this study sought the views of men on their involvement in maternal healthcare in their respective communities and at the household levels in the various Community-based Health Planning and Services zones in Awutu-Senya West District in the Central Region of Ghana.
Methods: A qualitative method was employed. Focus groups and individual interviews were conducted with married men, community health officers, community health volunteers and community leaders. The participants were selected using purposive, quota and snowball sampling techniques. The study used thematic analysis for analysing the data.
Results: The study shows varying involvement of men, some were directly involved in feminine gender roles; others used their female relatives and co-wives to perform the women's roles that did not have space for them. They were not necessarily indifferent towards maternal healthcare, rather, they were involved in the spaces provided by the traditional gender division of labour. Amongst other things, the perpetuation and reinforcement of traditional gender norms around pregnancy and childbirth influenced the nature and level of male involvement.
Conclusions: Sustenance of male involvement especially, husbands and CHVs is required at the household and community levels for positive maternal outcomes. Ghana Health Service, health professionals and policy makers should take traditional gender role expectations into consideration in the planning and implementation of maternal health promotion programmes.
Answer: Yes, community health officer midwives can effectively integrate skilled birth attendance in the community-based health planning and services (CHPS) program in rural Ghana. The study conducted in the Upper East Region (UER) of Ghana demonstrated that Community Health Officers (CHOs) trained as midwives could successfully provide skilled delivery services at the doorsteps of rural households. The integration of the skilled delivery program with the CHPS program was found to be an effective model for improving access to skilled birth attendance in rural communities (PUBMED:25518900).
The CHO-midwives collaborated with District Assemblies, Non-Governmental Organizations (NGOs), and communities to offer skilled delivery services in rural areas. They also referred pregnant women with complications to district hospitals and health centers for care, which led to an observed improvement in the referral system. Stakeholders reported increased access to skilled attendants at birth, health education, antenatal attendance, and postnatal care in rural communities. The CHO-midwives received financial and non-financial incentives to motivate them for optimal work performance. However, challenges such as inadequate numbers of CHO-midwives, insufficient transportation, and infrastructure weaknesses remained (PUBMED:25518900).
Community members played a significant role in promoting skilled delivery care in CHPS zones. Community health volunteers and traditional birth attendants (TBAs) helped provide health education on skilled delivery care and referred or accompanied clients for skilled attendants at birth. Political authorities, traditional leaders, and community members provided resources to promote the skilled delivery program. Both volunteers and TBAs received financial and non-financial incentives for referring clients for skilled delivery. However, challenges like inadequate transportation, infrequent supply of drugs, and the attitude of nurses were barriers to accessing maternity services in rural areas (PUBMED:25113017).
In conclusion, the integration of skilled birth attendance into the CHPS program is feasible and has shown effectiveness in improving access to skilled care during childbirth in rural Ghana. However, to sustain and enhance this integration, addressing the existing challenges is crucial. |
Instruction: Pediatric adrenocortical neoplasms: can imaging reliably discriminate adenomas from carcinomas?
Abstracts:
abstract_id: PUBMED:25794486
Pediatric adrenocortical neoplasms: can imaging reliably discriminate adenomas from carcinomas? Background: There is a paucity of literature describing and comparing the imaging features of adrenocortical adenomas and carcinomas in children and adolescents.
Objective: To document the CT and MRI features of adrenocortical neoplasms in a pediatric population and to determine whether imaging findings (other than metastatic disease) can distinguish adenomas from carcinomas.
Materials And Methods: We searched institutional medical records to identify pediatric patients with adrenocortical neoplasms. Pre-treatment CT and MRI examinations were reviewed by two radiologists in consensus, and pertinent imaging findings were documented. We also recorded relevant histopathological, demographic, clinical follow-up and survival data. We used the Student's t-test and Wilcoxon rank sum test to compare parametric and nonparametric continuous data, and the Fisher exact test to compare proportions. We used receiver operating characteristic (ROC) curve analyses to evaluate the diagnostic performances of tumor diameter and volume for discriminating carcinoma from adenoma. A P-value ≤0.05 was considered statistically significant.
Results: Among the adrenocortical lesions, 9 were adenomas, 15 were carcinomas, and 1 was of uncertain malignant potential. There were no differences in mean age, gender or sidedness between adenomas and carcinomas. Carcinomas were significantly larger than adenomas based on mean estimated volume (581 ml, range 16-2,101 vs. 54 ml, range 3-197 ml; P-value = 0.003; ROC area under the curve = 0.92) and mean maximum transverse plane diameter (9.9 cm, range 3.0-14.9 vs. 4.4 cm, range 1.9-8.2 cm; P-value = 0.0001; ROC area under the curve = 0.92). Carcinomas also were more heterogeneous than adenomas on post-contrast imaging (13/14 vs. 2/9; odds ratio [OR] = 45.5; P-value = 0.001). Six of 13 carcinomas and 1 of 8 adenomas contained calcification at CT (OR = 6.0; P-value = 0.17). Seven of 15 children with carcinomas exhibited metastatic disease at diagnosis, and three had inferior vena cava invasion. Median survival for carcinomas was 27 months.
Conclusion: In our experience, pediatric adrenocortical carcinomas are larger, more heterogeneous, and more often calcified than adenomas, although there is overlap in their imaging appearances.
abstract_id: PUBMED:21585435
CD56 immunohistochemistry does not discriminate between cortisol-producing and aldosterone-producing adrenal cortical adenomas. N/A
abstract_id: PUBMED:20390424
Anti-CD10 (56C6) is expressed variably in adrenocortical tumors and cannot be used to discriminate clear cell renal cell carcinomas. In the evaluation of retroperitoneal masses, the practicing pathologist faces a dilemma when making a diagnosis based on histology given the often overlapping morphologic appearances of the adrenocortical carcinoma, renal cell carcinoma (RCC), and hepatocellular carcinoma (HCC). CD10 is expressed in a membranous fashion in the vast majority of clear cell RCCs; therefore, it is widely used for distinction from its mimics. However, its expression is not well-investigated in adrenal cortical tumors. We examined CD10 expression in 47 surgically resected adrenocortical tumors (26 adenomas and 21 carcinomas) and compared with 20 clear cell RCCs and 25 HCCs. Twenty HCCs (80%), 18 RCCs (90%), 11 adrenocortical carcinomas (52%), and 18 adrenocortical adenomas (69%) were positive for CD10. HCCs were characterized by a canalicular staining, and clear cell RCCs exhibited membranous or mixed membranous-cytoplasmic staining. Adrenocortical tumors displayed mainly cytoplasmic staining. Four adrenocortical carcinomas and one adenoma also displayed the membranous staining pattern. Despite the relatively small number of samples, our preliminary results revealed that adrenocortical tumors may express CD10 (Clone: 56C6). The most important point from this paper is the fact that anti-CD10 expression has not been previously reported in adrenocortical carcinomas. This suggests that CD10 does not seem to be a useful marker for discriminating clear cell RCCs from adrenocortical tumors since CD10 expression does not rule out the possibility of adrenocortical tumors. This feature should be kept in mind when constructing an antibody panel for an epithelial tumor that involves the adrenal gland and kidney, especially in small biopsy specimens.
abstract_id: PUBMED:23908452
Adrenocortical tumours: high CT attenuation value correlates with eosinophilia but does not discriminate lipid-poor adenomas from malignancy. Background: Characterisation of adrenal tumours is an important clinical problem. Unenhanced CT is the primary imaging modality to assess the nature of these lesions.
Aims: To study the correlation between unenhanced CT attenuation value and the specific histopathology, as well as the proportion of lipid-poor eosinophilic cells in adrenocortical tumours.
Methods: We studied retrospectively primary adrenocortical tumours that had been operated on at Helsinki University Central Hospital between 2002 and 2008. Of 171 tumours, 79 had appropriate preoperative CT scans and were included in the study. We evaluated the unenhanced CT attenuation values (Hounsfield units, HU) of these tumours and determined their histopathological diagnosis by the Weiss scoring system. We also assessed the proportion of lipid-poor eosinophilic cells for each tumour.
Results: Unenhanced CT attenuation value (HU) in adrenocortical tumours correlated well with the proportion of lipid-poor eosinophilic cells (rs=0.750, p<0.001). HU and Weiss score also had a correlation (rs=0.582, p<0.001).
Conclusions: Unenhanced CT attenuation value correlates well with the percentage of lipid-poor eosinophilic cells, but unenhanced CT attenuation value fails to differentiate between benign lipid-poor adenomas and malignant adrenocortical tumours. All adrenocortical tumours with unenhanced CT attenuation value ≤10 HU are histologically benign lipid-rich tumours.
abstract_id: PUBMED:3485307
Cushing's syndrome 1985: current views and possibilities According to the current view a semiautonomously ACTH-producing pituitary microadenoma is the true cause of pituitary-dependent Cushing's syndrome in most instances. Only exceptionally does the disease seem to be caused by a functional pituitary or hypothalamic disturbance of cortisol regulation. A newly discovered rare etiology is ectopic production of CRF. Cushing's syndrome is still most reliably diagnosed by abnormal adrenocortical function tests based on corticosteroid determinations. However, determination of plasma ACTH concentrations and computer-assisted tomography of the pituitary or adrenal glands have become useful tools in differentiating the various forms of Cushing's syndrome. Although a considerable number of available drugs provide effective chemotherapy for Cushing's syndrome, surgical elimination of ACTH-producing or cortisol-producing tumors is still the therapy of choice in most cases.
abstract_id: PUBMED:1708604
Histology, immunocytochemistry and DNA cytophotometry of adrenocortical tumors--a clinicomorphological study of 72 tumors Surgical specimens of 72 adrenocortical tumours were investigated by conventional histology, immunocytochemistry and DNA-cytophotometry. Histologically, 57 tumours were classified as adenomas and 15 as carcinomas. Nine adenomas weighed more, 2 carcinomas less than 50g. Only in 9 of the latter cases were distant metastases and/or lethal outcome of disease recorded, while the clinical course of the remaining patients was uneventful. No significant differences in DNA content were found between adenomas and carcinomas or between carcinomas with aggressive and indolent behaviour. Neither could immunocytochemistry discriminate between these conditions. Immunostaining with the monoclonal antibody D 11 proved to be the only effective means to definitely type adrenocortical neoplasia. Thirty-one cases exhibited positivity upon immunostaining with a polyclonal antiserum against synaptophysin. This phenomenon has so far not been encountered in non-neuroendocrine neoplasia.
abstract_id: PUBMED:1539453
Immunocytochemistry in adrenocortical tumours: a clinicomorphological study of 72 neoplasms. Surgical specimens of 72 adrenocortical tumours (ACTs) were investigated. Histologically, 57 tumours were classified as adenomas and 15 as carcinomas. In 9 of the latter cases, distant metastases and/or lethal outcome of disease was recorded. Immunocyto-chemistry showed only 2 ACTs to be positive for cytokeratin and 6 for vimentin. None of the 72 tumours showed argyrophilia or immunoreactivity for epithelial membrane antigen (EMA), S-100 protein, chromogranin A, Leu 7 or Leu-M1, while 31 cases exhibited positivity on immunostaining with a polyclonal antiserum against synaptophysin. All 72 ACTs were immunoreactive with the recently described antibody D11. Thus the panel of antibodies described here could not discriminate between adenomas and carcinomas or between carcinomas with aggressive and indolent behaviour. Immunostaining with D11 and for EMA and Leu-M1 may help to distinguish ACTs from phenotypically similar lesions of different histogenesis.
abstract_id: PUBMED:24713984
Adrenocortical carcinoma: review and update. Adrenocortical carcinoma is a rare endocrine tumor with a poor prognosis. These tumors can be diagnostically challenging, and diagnostic algorithms and criteria continue to be suggested. Myxoid and oncocytic variants are important to recognize to not confuse with other tumors. In addition, the diagnostic criteria are different for oncocytic adrenal carcinomas than conventional carcinomas. Adrenocortical carcinomas usually occur in adults, but can also occur in children. In children these tumors are diagnostically challenging as the histologic features of malignancy seen in an adult tumor may not be associated with aggressive disease in a child. Adrenocortical carcinomas occur with increased frequency in Beckwith-Wiedemann and Li-Fraumeni syndromes, but most occur sporadically. Gene expression profiling by transcriptome analysis can discriminate adrenocortical carcinomas from adenomas and divide carcinomas into prognostic groups. The increasing understanding of the pathogenesis of these tumors may provide increasing treatment targets for this aggressive tumor.
abstract_id: PUBMED:15347821
Indicators of malignancy of canine adrenocortical tumors: histopathology and proliferation index. Tumors of the adrenal cortex account for 10-20% of the naturally occurring Cushing's syndrome diagnosed in dogs. Differentiating between adrenocortical adenoma and carcinomas is often difficult. The purposes of this study were to determine which histopathologic criteria can be used as markers for malignancy in canine adrenocortical tumors and the relevance of the proliferation marker, Ki-67, for differentiation between cortical adenomas and carcinomas. Twenty-six adrenocortical carcinomas, 23 adenomas, and 11 normal adrenal glands were examined. Morphologic criteria significantly associated with adrenocortical carcinomas included a size larger than 2 cm in diameter, peripheral fibrosis, capsular invasion, trabecular growth pattern, hemorrhage, necrosis, and single-cell necrosis, whereas hematopoiesis, fibrin thombi, and cytoplasmic vacuolation were significantly associated with adrenocortical adenomas. The mean (+/- SD) proliferation index, measured by immunohistochemistry for the Ki-67 antigen, was 9.3 +/- 6.3% in carcinomas, 0.76 +/- 0.83% in adenomas, and 0.58 +/- 0.57% in normal adrenal glands. The Ki-67 proliferation index was significantly higher in carcinomas compared with adenomas and normal adrenal glands. A threshold value of the proliferation index of 2.4% reliably separated carcinomas from adenomas. Based on these results, it appears that thorough evaluation of morphologic features combined with immunohistochemical assessment of the proliferation index is extremely useful for differentiating between adrenocortical adenomas and carcinomas in dogs.
abstract_id: PUBMED:9692828
Vascular patterns in the normal and pathological human adrenal cortex. The vasculature of the adrenal gland has been studied by microinjection techniques in a variety of species. While there is general agreement about the overall patterns, some uncertainty still exists over the structure of medullary arteries and the connections between the sinusoids of the cortex and medulla. We have taken a new approach to these problems by applying immunohistochemical techniques to the human adrenal gland, identifying overall vascular patterns by endothelial expression of CD34 and muscular channels by smooth muscle actin. We have also examined adrenal nodules, adenomas and carcinomas to see whether these can be differentiated on the basis of their vascular patterns. The general pattern in the normal gland was similar to that found in injection studies, but there appeared to be more connections between sinusoids of the zona fasciculata than previously reported. There was direct continuity between cortical and medullary sinusoids. Medullary arteries were demonstrated as thin-walled vessels. Immunopositivity for smooth muscle actin was present in sinusoids, apparently in endothelial cells, suggesting that they may express this protein and thus have a contractile function. Macronodules and adenomas could not be reliably distinguished, both showing a rich network of sinusoidal vessels. Carcinomas showed marked disorganization, with large-calibre vessels interspersed with irregular networks of vessels of very small calibre.
Answer: Imaging features of pediatric adrenocortical neoplasms have been studied to determine if they can reliably discriminate between adenomas and carcinomas. The results indicate that while there are some differences in imaging characteristics between the two types of lesions, there is still an overlap that makes it challenging to distinguish them solely based on imaging.
In a study that reviewed pre-treatment CT and MRI examinations of pediatric patients with adrenocortical neoplasms, it was found that carcinomas were significantly larger than adenomas based on mean estimated volume and mean maximum transverse plane diameter. Carcinomas also appeared more heterogeneous than adenomas on post-contrast imaging and were more likely to contain calcification at CT. However, the presence of calcification was not statistically significant as a distinguishing feature. Metastatic disease at diagnosis and inferior vena cava invasion were observed in some cases of carcinomas, which would be indicative of malignancy. Despite these findings, the study concluded that there is an overlap in imaging appearances between adenomas and carcinomas, making it difficult to rely solely on imaging for differentiation (PUBMED:25794486).
Other studies have explored the use of immunohistochemistry and histopathology to differentiate between adrenocortical tumors. For instance, CD56 and CD10 immunohistochemistry were found not to discriminate between cortisol-producing and aldosterone-producing adrenal cortical adenomas or between adrenocortical tumors and clear cell renal cell carcinomas (PUBMED:21585435, PUBMED:20390424). Additionally, unenhanced CT attenuation values correlated with the proportion of lipid-poor eosinophilic cells but failed to differentiate between benign lipid-poor adenomas and malignant adrenocortical tumors (PUBMED:23908452).
In conclusion, while certain imaging features such as size, heterogeneity, and the presence of metastatic disease can suggest malignancy, imaging alone cannot reliably discriminate between pediatric adrenocortical adenomas and carcinomas. Additional diagnostic tools, including histopathological examination and possibly molecular profiling, are necessary for accurate differentiation (PUBMED:25794486). |
Instruction: Generic drug prices and policy in Australia: room for improvement?
Abstracts:
abstract_id: PUBMED:24480618
Generic drug prices and policy in Australia: room for improvement? a comparative analysis with England. Objective: To assess the degree to which reimbursement prices in Australia and England differ for a range of generic drugs, and to analyse the supply- and demand-side factors that may contribute to these differences.
Methods: Australian and English reimbursement prices were compared for a range of generic drugs using pricing information obtained from government websites. Next, a literature review was conducted to identify supply- and demand-side factors that could affect generic prices in Australia and England. Various search topics were identified addressing potential supply-side (e.g. market approval, intellectual property protection of patented drugs, generic pricing policy, market size, generic supply chain and discounting practices) and demand-side (consumers, prescribers and pharmacists) factors. Related terms were searched in academic databases, official government websites, national statistical databases and internet search engines.
Results: Analysis of drug reimbursement prices for 15 generic molecules (representing 45 different drug presentations) demonstrated that Australian prices were on average over 7-fold higher than in England. Significant supply-side differences included aspects of pricing policy, the relative size of the generics markets and the use of clawback policies. Major differences in demand-side policies related to generic prescribing, pharmacist substitution and consumer incentives.
Conclusions: Despite recent reforms, the Australian Government continues to pay higher prices than its English counterpart for many generic medications. The results suggest that particular policy areas may benefit from review in Australia, including the length of the price-setting process, the frequency of subsequent price adjustments, the extent of price competition between originators and generics, medical professionals' knowledge about generic medicines and incentives for generic prescribing. WHAT IS KNOWN ABOUT THE TOPIC? Prices of generic drugs have been the subject of much scrutiny over recent years. From 2005 to 2010 the Australian Government responded to observations that Pharmaceutical Benefits Scheme prices for many generics were higher than in numerous comparable countries by instituting several reforms aimed at reducing the prices of generics. Despite this, several studies have demonstrated that prices for generic statins (one class of cholesterol-lowering drug) are higher in Australia compared with England and many other developed countries, and prices of numerous other generics remain higher than in the USA and New Zealand. Recently there has been increasing interest in why these differences exist. WHAT DOES THIS PAPER ADD? By including a much larger range of commonly used and costly generic drugs, this paper builds significantly on the limited previous investigations of generic drug prices in Australia and England. Additionally, this is the first comprehensive investigation of multiple supply- and, in particular, demand-side factors that may explain any price differences between these countries. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? Practitioners may contribute to the higher prices of generic medications in Australia compared with England through relatively low rates of generic prescribing. There are also significant implications for health policy makers, as this paper demonstrates that if Australia achieved the same prices as England for many generic drugs there could be substantial savings for the Pharmaceutical Benefits Scheme.
abstract_id: PUBMED:28895227
Comparing Generic Drug Markets in Europe and the United States: Prices, Volumes, and Spending. Policy Points: Our study indicates that there are opportunities for cost savings in generic drug markets in Europe and the United States. Regulators should make it easier for generic drugs to reach the market. Regulators and payers should apply measures to stimulate price competition among generic drugmakers and to increase generic drug use. To meaningfully evaluate policy options, it is important to analyze historical context and understand why similar initiatives failed previously.
Context: Rising drug prices are putting pressure on health care budgets. Policymakers are assessing how they can save money through generic drugs.
Methods: We compared generic drug prices and market shares in 13 European countries, using data from 2013, to assess the amount of variation that exists between countries. To place these results in context, we reviewed evidence from recent studies on the prices and use of generics in Europe and the United States. We also surveyed peer-reviewed studies, gray literature, and books published since 2000 to (1) outline existing generic drug policies in European countries and the United States; (2) identify ways to increase generic drug use and to promote price competition among generic drug companies; and (3) explore barriers to implementing reform of generic drug policies, using a historical example from the United States as a case study.
Findings: The prices and market shares of generics vary widely across Europe. For example, prices charged by manufacturers in Switzerland are, on average, more than 2.5 times those in Germany and more than 6 times those in the United Kingdom, based on the results of a commonly used price index. The proportion of prescriptions filled with generics ranges from 17% in Switzerland to 83% in the United Kingdom. By comparison, the United States has historically had low generic drug prices and high rates of generic drug use (84% in 2013), but has in recent years experienced sharp price increases for some off-patent products. There are policy solutions to address issues in Europe and the United States, such as streamlining the generic drug approval process and requiring generic prescribing and substitution where such policies are not yet in place. The history of substitution laws in the United States provides insights into the economic, political, and cultural issues influencing the adoption of generic drug policies.
Conclusions: Governments should apply coherent supply- and demand-side policies in generic drug markets. An immediate priority is to convince more physicians, pharmacists, and patients that generic drugs are bioequivalent to branded products. Special-interest groups continue to obstruct reform in Europe and the United States.
abstract_id: PUBMED:28359273
A comparison of generic drug prices in seven European countries: a methodological analysis. Background: Policymakers and researchers frequently compare the prices of medicines between countries. Such comparisons often serve as barometers of how pricing and reimbursement policies are performing. The aim of this study was to examine methodological challenges to comparing generic drug prices.
Methods: We calculated all commonly used price indices based on 2013 IMS Health data on sales of 3156 generic drugs in seven European countries.
Results: There were large differences in generic drug prices between countries. However, the results varied depending on the choice of index, base country, unit of volume, method of currency conversion, and therapeutic category. The results also differed depending on whether one looked at the prices charged by manufacturers or those charged by pharmacists.
Conclusions: Price indices are a useful statistical approach for comparing drug prices across countries, but researchers and policymakers should interpret price indices with caution given their limitations. Price-index results are highly sensitive to the choice of method and sample. More research is needed to determine the drivers of price differences between countries. The data suggest that some governments should aim to reduce distribution costs for generic drugs.
abstract_id: PUBMED:30442275
Predictors of Drug Shortages and Association with Generic Drug Prices: A Retrospective Cohort Study. Background: Prescription drug shortages can disrupt essential patient care and drive up drug prices.
Objective: To evaluate some predictors of shortages within a large cohort of generic drugs in the United States and to determine the association between drug shortages and changes in generic drug prices.
Methods: This was a retrospective cohort study. Outpatient prescription claims from commercial health plans between 2008 and 2014 were analyzed. Seven years of data were divided into fourteen 6-month periods; the first period was designated as the baseline period. The first model estimated the probability of experiencing a drug shortage using drug-specific competition levels, market sizes, formulations (e.g., capsules), and drug prices as predictors. The second model estimated the percentage change in drug prices from baseline on the basis of drug shortage duration.
Results: From 1.3 billion prescription claims, a cohort of 1114 generic drugs was identified. Low-priced generic drugs were at a higher risk for drug shortages compared with medium- and high-priced generic drugs, with odds ratios of 0.60 (95% confidence interval [CI] 0.44-0.82) and 0.72 (95% CI 0.52-0.99), respectively. Compared with periods of no shortage, drug shortages lasting less than 6 months, 6 to 12 months, 12 to 18 months, and at least 18 months had corresponding price increases of 6.0% (95% CI 4.7-7.4), 10.9% (95% CI 8.5-13.4), 14.2% (95% CI 10.6-17.9), and 14.0% (95% CI 9.1-19.2), respectively.
Conclusions: Study findings may not be generalizable to drugs that became generic after 2008 or those commonly used in an inpatient setting. The lowest priced drugs are at a substantially elevated risk of experiencing a drug shortage. Periods of drug shortages were associated with modest increases in drug prices.
abstract_id: PUBMED:37727389
Generic cardiology drug prices: the potential benefits of the Marc Cuban cost plus drug company model. Introduction: Generic pharmaceuticals account for the majority of the $359 billion US pharmaceutical market, including for cardiology drugs. Amidst a lack of price transparency and administrative inefficiencies, generic drug prices are high, causing an undue burden on patients. Methods: We identified the 50 most used generic cardiology drugs by volume per the 2020 Medicare Part D spending data. We extracted cost per dose of each drug from the Marc Cuban Cost Plus Drug Company (MCCPDC) website and estimated the aggregate cost savings if MCCPDC were employed on a national scale by calculating the difference between this cost and Medicare spending. Results: Medicare spent $7.7 billion on the 50 most used generic cardiology drugs by volume in 2020 according to Medicare Part D data. Pharmacy and shipping costs accounted for a substantial portion of expenditures. Per our most conservative estimate, $1.3 billion (17% of total) savings were available on 16 of 50 drugs. A slightly less conservative estimate suggested $2.9 billion (38%) savings for 35 of 50 drugs. Discussion: There is enormous potential for cost savings in the US market for generic cardiology drugs. By encouraging increased competition, decreasing administrative costs, and advocating for our patients to compare prices between the MCCPDC and other generic pharmaceutical dispensers, we have the potential to improve access to care and corresponding outcomes for cardiology patients.
abstract_id: PUBMED:16832537
Generic drug policy implementation in Brazil A generic drug policy has been implemented in Brazil since 1999. Several political and administrative stages transpired between enactment of the legislation and the actual marketing and consumption of these drugs. This article describes the policy implementation process and examines the country's generic drug legislation, approved from 1999 to 2002. To contextualize these measures, the study compares articles published by two national periodicals and interviews with a government representative involved in drafting the legislation and a representative from the pharmaceutical industry. Generic drugs quickly gained considerable space in the Brazilian pharmaceutical market. Ongoing adaptation of the legislation, media support, and the government's involvement in spreading the policy were key success factors. The population's access to medicines did not increase significantly, but people can now purchase medicines at more affordable prices and with quality assurance and interchangeability.
abstract_id: PUBMED:37304112
An empirical study of the impact of generic drug competition on drug market prices in China. Introduction: Generic substitution is encouraged to reduce pharmaceutical spending in China, and with incentive policies, the market size of the generic drug continues to rise. To find out how the generic competition affects drug price in this area, this study examines how the quantity of generic drug manufacturers can influence average drug price in the Chinese market.
Methods: This study uses a rigorous selection of drugs from the 2021 China's National Reimbursement Drug List (NRDL), and uses drug-level fixed effects regressions to estimate the relationship between competition and price within each drug.
Results: We note that drug prices decline with increasing competition in the Chinese market, but not in a perfectly linear manner, with marginal price declines decreasing after the fourth entrant and "rebounding" at subsequent entrants, especially the sixth.
Discussion: The findings suggest the importance of maintaining effective competition between suppliers to control prices, and that the government needs to further control generic pricing, especially for late entry generics, to ensure effective competition in the Chinese market.
abstract_id: PUBMED:34904207
Effect of Competition on Generic Drug Prices. Background: Promoting substitution of lower priced generics for brand drugs once the market exclusivity period for the latter expires is a key component of the US strategy for achieving value in prescription drugs.
Objective: This study examines the effect of generic competition on drug prices by estimating the effect of entry of generic drugs, following a brand's loss-of-exclusivity (LOE), on the average price of competing drugs.
Methods: Using the Medicare Part D drug event (PDE) data from 2007 to 2018, we utilize both fixed effects and random effects at the drug level to estimate the relationship of competitors and prices within each drug while controlling for factors across drugs. We follow a drug 24 months and 36 months after first generic entry to examine whether the relationship between number of suppliers and price would change over time. We also test the hypothesis that drugs with more recent LOE might face less competition than those with earlier LOE.
Results: We find that drug prices fall with increasing number of competitors. Prices decline by 20% in markets with about three competitors (the expected price ratio of current generic to pre-generic entry brand average prices is 80%). Prices continue to decline by 80% relative to the pre-generic entry price in markets of ten or more competitors (the expected price ratio is about 30% following 2 years after entry, dropping to 20% following 3 years after entry). We also find that the impact of competition on relative prices is similar for generic drugs first entering the market in either 2007-11 or 2012-15.
Conclusion: Promoting generic entry and maintaining effective provider competition are effective methods for containing drug prices.
abstract_id: PUBMED:29544742
Increased topical generic prices by manufacturers. Background: There is limited data regarding generic medication prices. Recent studies have shown price changes at the retail level, but much is not known about the pharmaceutical supply chain or price changes at the manufacturer level.
Objective: We sought to examine the extent of price changes for topical generic medications.
Methods: A comprehensive review of average wholesale prices (AWPs) and manufacturers of topical generics and available corresponding branded medications was conducted for 2005 and 2016.
Results: A total of 51 topical chemical entities were examined. Between 2005 and 2016, the AWP of topical generic medications increased by 273% and the AWP of topical branded medications increased by 379%. The topical generic with the greatest price change increased by 2529%. Eight of the top 20 topical generic medications with the greatest increases in AWP also had an increase in the number of manufacturers.
Limitations: These findings are not generalizable to medications used in other areas of medicine.
Conclusion: Topical generic prices are rapidly increasing at the manufacturer level.
abstract_id: PUBMED:16097838
Pharmaceutical policy regarding generic drugs in Belgium. Pressure to control pharmaceutical expenditure and price competition among pharmaceutical companies are fuelling the development of generic drug markets in EU countries. However, in Belgium, the market for generic drugs is underdeveloped compared with other countries. To promote the use of generic drugs, the government introduced a reference pricing (RP) scheme in 2001. The aim of this paper is to discuss Belgian pharmaceutical policy regarding generic drugs and to analyse how the Belgian drug market has evolved following initiation of the RP scheme. The market share held by generic drugs increased following implementation of the RP scheme. Focusing on volume, average market share (by semester) for generic drugs amounted to 2.05% of the total pharmaceutical market from January 1998 to June 2001, compared with 6.11% from July 2001 to December 2003. As new generic drugs are introduced, their market share tends to increase in the first couple of months, after which it levels off. Faced with increasing generic competition, some manufacturers have launched new variants of their original drug, thereby effectively extending the period of patent protection. Strategies consisting of price reductions in return for the abolition of prescribing conditions and the launch of new dosages or formulations appear to have been successful in maintaining the market share of original drugs. Nevertheless, the introduction of the RP scheme was associated with savings amounting to 1.8% of pharmaceutical expenditure by the third-party payer in 2001 and 2.1% in 2002. The findings of this paper indicate that the RP scheme has stimulated the Belgian generic drug market. However, existing policy has largely failed to take into account the role that physicians and pharmacists can play in stimulating generic drug use. Therefore, further development of the Belgian generic drug market seems to hinge on the creation of appropriate incentives for physicians to prescribe, and for pharmacists to dispense, generic drugs. With respect to incentives to advance generic drug use, EU countries have experimented with various forms of budget constraints for physicians, generic substitution by pharmacists and RP schemes, although more evidence is needed of their impact on consumption and prices of generic drugs, pharmaceutical expenditure and health outcomes.
Answer: Yes, there is room for improvement in generic drug prices and policy in Australia. A comparative analysis with England has shown that Australian prices for generic drugs were on average over 7-fold higher than those in England (PUBMED:24480618). This suggests that despite recent reforms, the Australian Government continues to pay higher prices for many generic medications. The study identified several supply- and demand-side factors that may contribute to these differences, including aspects of pricing policy, the relative size of the generics markets, and the use of clawback policies. On the demand side, differences were noted in generic prescribing, pharmacist substitution, and consumer incentives.
The paper suggests that policy areas such as the length of the price-setting process, the frequency of subsequent price adjustments, and the extent of price competition between originators and generics could benefit from review in Australia. Additionally, it highlights the potential role of medical professionals in contributing to higher prices through relatively low rates of generic prescribing (PUBMED:24480618).
The findings imply that if Australia were to achieve the same prices as England for many generic drugs, there could be substantial savings for the Pharmaceutical Benefits Scheme. This indicates that there is significant potential for policy improvement to reduce generic drug prices in Australia, which could lead to cost savings and potentially improve access to medications for patients. |
Instruction: Does vitamin D mediate the protective effects of time outdoors on myopia?
Abstracts:
abstract_id: PUBMED:25406278
Does vitamin D mediate the protective effects of time outdoors on myopia? Findings from a prospective birth cohort. Purpose: More time outdoors is associated with a lesser risk of myopia, but the underlying mechanism is unclear. We tested the hypothesis that 25-hydroxyvitamin D (vitamin D) mediates the protective effects of time outdoors against myopia.
Methods: We analyzed data for children participating in the Avon Longitudinal Study of Parents and Children (ALSPAC) population-based birth cohort: noncycloplegic autorefraction at age 7 to 15 years; maternal report of time outdoors at age 8 years and serum vitamin D2 and D3 at age 10 years. A survival analysis hazard ratio (HR) for incident myopia was calculated for children spending a high- versus low-time outdoors, before and after controlling for vitamin D level (N = 3677).
Results: Total vitamin D and D3, but not D2, levels were higher in children who spent more time outdoors (mean [95% confidence interval (CI)] vitamin D in nmol/L: Total, 60.0 [59.4-60.6] vs. 56.9 [55.0-58.8], P = 0.001; D3, 55.4 [54.9-56.0] vs. 53.0 [51.3-54.9], P = 0.014; D2, 5.7 [5.5-5.8] vs. 5.4 [5.1-5.8], P = 0.23). In models including both time outdoors and sunlight-exposure-related vitamin D, there was no independent association between vitamin D and incident myopia (Total, HR = 0.83 [0.66-1.04], P = 0.11; D3, HR = 0.89 [0.72-1.10], P = 0.30), while time outdoors retained the same strong negative association with incident myopia as in unadjusted models (HR = 0.69 [0.55-0.86], P = 0.001).
Conclusions: Total vitamin D and D3 were biomarkers for time spent outdoors, however there was no evidence they were independently associated with future myopia.
abstract_id: PUBMED:23644222
Time outdoors and the prevention of myopia. Recent epidemiological evidence suggests that children who spend more time outdoors are less likely to be, or to become myopic, irrespective of how much near work they do, or whether their parents are myopic. It is currently uncertain if time outdoors also blocks progression of myopia. It has been suggested that the mechanism of the protective effect of time outdoors involves light-stimulated release of dopamine from the retina, since increased dopamine release appears to inhibit increased axial elongation, which is the structural basis of myopia. This hypothesis has been supported by animal experiments which have replicated the protective effects of bright light against the development of myopia under laboratory conditions, and have shown that the effect is, at least in part, mediated by dopamine, since the D2-dopamine antagonist spiperone reduces the protective effect. There are some inconsistencies in the evidence, most notably the limited inhibition by bright light under laboratory conditions of lens-induced myopia in monkeys, but other proposed mechanisms possibly associated with time outdoors such as relaxed accommodation, more uniform dioptric space, increased pupil constriction, exposure to UV light, changes in the spectral composition of visible light, or increased physical activity have little epidemiological or experimental support. Irrespective of the mechanisms involved, clinical trials are now underway to reduce the development of myopia in children by increasing the amount of time they spend outdoors. These trials would benefit from more precise definition of thresholds for protection in terms of intensity and duration of light exposures. These can be investigated in animal experiments in appropriate models, and can also be determined in epidemiological studies, although more precise measurement of exposures than those currently provided by questionnaires is desirable.
abstract_id: PUBMED:27921098
Time outdoors, blood vitamin D status and myopia: a review. Background: Myopia is a major public health concern throughout the world and the prevalence has been increasing rapidly in recent years, especially in urban Asia. The "vitamin D hypothesis" has been raised recently because vitamin D may be a link between less time outdoors and increased risk of myopia.
Methods: We reviewed all studies published in English which examined the association of time outdoors and blood vitamin D status with myopia.
Results: The protective effect of time spent outdoors on the risk of myopia onset has been well-established with numerous observational studies and three trials published. Five studies reporting the association between the blood vitamin D status and the risk of myopia and two studies examining the variations in the vitamin D receptor as potential risk factors for myopia development were identified. Most of the current evidence was cross-sectional in nature and had not properly controlled important confounders in its analyses. The evidence supporting that vitamin D played a role in myopia development is weak and the mechanisms are unclear.
Conclusions: At the current stage, it is still unclear whether blood vitamin D status regulates the onset or progression of myopia. Blood vitamin D status may only serve as a biomarker of outdoor exposure, which is the real protective factor for myopia.
abstract_id: PUBMED:33423400
Time spent outdoors through childhood and adolescence - assessed by 25-hydroxyvitamin D concentration - and risk of myopia at 20 years. Purpose: To investigate the relationship between time spent outdoors, at particular ages in childhood and adolescence, and myopia status in young adulthood using serum 25-hydroxyvitamin D [25(OH)D] concentration as a biomarker of time spent outdoors.
Methods: Participants of the Raine Study Generation 2 cohort had 25(OH)D concentrations measured at the 6-, 14-, 17- and 20-year follow-ups. Participants underwent cycloplegic autorefraction at age 20 years, and myopia was defined as a mean spherical equivalent -0.50 dioptres or more myopic. Logistic regression was used to analyse the association between risk of myopia at age 20 years and age-specific 25(OH)D concentrations. Linear mixed-effects models were used to analyse trajectory of 25(OH)D concentrations from 6 to 20 years.
Results: After adjusting for sex, race, parental myopia, body mass index and studying status, myopia at 20 years was associated with lower 25(OH)D concentration at 20 years (per 10 nmol/L decrease, odds ratio (aOR)=1.10, 95% CI: 1.02, 1.18) and a low vitamin D status [25(OH)D < 50 nmol/L] at 17 years (aOR = 1.71, 95% CI: 1.06, 2.76) and 20 years (aOR = 1.71, 95% CI: 1.14, 2.56), compared to those without low vitamin D status. There were no associations between 25(OH)D at younger ages and myopia. Individuals who were myopic at 20 years had a 25(OH)D concentration trajectory that declined, relative to non-myopic peers, with increasing age. Differences in 25(OH)D trajectory between individuals with and without myopia were greater among non-Caucasians compared to Caucasians.
Conclusions: Myopia in young adulthood was most strongly associated with recent 25(OH)D concentrations, a marker of time spent outdoors.
abstract_id: PUBMED:31854216
Protective effects of increased outdoor time against myopia: a review. Myopia has become a major cause for concern globally, particularly in East Asian countries. The increasing prevalence of myopia has been associated with a high socioeconomic burden owing to severe ocular complications that may occur with progressive myopia. There is an urgent need to identify effective and safe measures to address the growing number of people with myopia in the general population. Among the numerous strategies implemented to slow the progression of myopia, longer time spent outdoors has come to be recognized as a protective factor against this disorder. Although our understanding of the protective effects of outdoor time has increased in the past decade, considerably more research is needed to understand the mechanisms of action. Here, we summarize the main potential factors associated with the protective effects against myopia of increased outdoor time, namely, exposure to elevated levels and shorter wavelengths of light, and increased dopamine and vitamin D levels. In this review, we aimed to identify safe and effective therapeutic interventions to prevent myopia-related complications and vision loss.
abstract_id: PUBMED:28660095
Development of the FitSight Fitness Tracker to Increase Time Outdoors to Prevent Myopia. Purpose: To develop a fitness tracker (FitSight) to encourage children to increase time spent outdoors. To evaluate the wear pattern for this tracker and outdoor time pattern by estimating light illumination levels among children.
Methods: The development of the FitSight fitness tracker involved the designing of two components: (1) the smartwatch with custom-made FitSight watch application (app) to log the instant light illuminance levels the wearer is exposed to, and (2) a companion smartphone app that synchronizes the time outdoors recorded by the smartwatch to smartphone via Bluetooth communication. Smartwatch wear patterns and tracker-recorded daily light illuminance levels data were gathered over 7 days from 23 Singapore children (mean ± standard deviation age: 9.2 ± 1.4 years). Feedback about the tracker was obtained from 14 parents using a three-level rating scale: very poor/poor/good.
Results: Of the 14 parents, 93% rated the complete "FitSight fitness tracker" as good and 64% rated its wearability as good. While 61% of 23 children wore the watch on all study days (i.e., 0 nonwear days), 26% had 1 nonwear day, and 4.5% children each had 3, 4, and 5 nonwear days, respectively. On average, children spent approximately 1 hour in light levels greater than 1000 lux on weekdays and 1.3 hours on weekends (60 ± 46 vs. 79 ± 53 minutes, P = 0.19). Mean number of outdoor "spurts" (light illuminance levels >1000 lux) per day was 8 ± 3 spurts with spurt duration of 34 ± 32 minutes.
Conclusion: The FitSight tracker with its novel features may motivate children to increase time outdoors and play an important role in supplementing community outdoor programs to prevent myopia.
Translational Relevance: If the developed noninvasive, wearable, smartwatch-based fitness tracker, FitSight, promotes daytime outdoor activity among children, it will be beneficial in addressing the epidemic of myopia.
abstract_id: PUBMED:31187504
Ocular biometry, refraction and time spent outdoors during daylight in Irish schoolchildren. Background: Previous studies have investigated the relationship between ocular biometry and spherical equivalent refraction in children. This is the first such study in Ireland. The effect of time spent outdoors was also investigated.
Methods: Examination included cycloplegic autorefraction and non-contact ocular biometric measures of axial length, corneal radius and anterior chamber depth from 1,626 children in two age groups: six to seven years and 12 to 13 years, from 37 schools. Parents/guardians completed a participant questionnaire detailing time spent outdoors during daylight in summer and winter.
Results: Ocular biometric data were correlated with spherical equivalent refraction (axial length: r = -0.64, corneal radius: r = 0.07, anterior chamber depth: r = -0.33, axial length/corneal radius ratio: r = -0.79, all p < 0.0001). Participants aged 12-13 years had a longer axial length (6-7 years 22.53 mm, 12-13 years 23.50 mm), deeper anterior chamber (6-7 years 3.40 mm, 12-13 years 3.61 mm), longer corneal radius (6-7 years 7.81 mm, 12-13 years 7.87 mm) and a higher axial length/corneal radius ratio (6-7 years 2.89, 12-13 years 2.99), all p < 0.0001. Controlling for age: axial length was longer in boys (boys 23.32 mm, girls 22.77 mm), and non-White participants (non-White 23.21 mm, White 23.04 mm); corneal radius was longer in boys (boys 7.92 mm, girls 7.75 mm); anterior chamber was deeper in boys (boys 3.62 mm, girls 3.55 mm, p < 0.0001), and axial length/corneal radius ratios were higher in non-White participants (non-White 2.98, White 2.94, p < 0.0001). Controlling for age and ethnicity, more time outdoors in summer was associated with a less myopic refraction, shorter axial length, and lower axial length/corneal radius ratio. Non-White participants reported spending significantly less time outdoors than White participants (p < 0.0001).
Conclusion: Refractive error variance in schoolchildren in Ireland was best explained by variation in the axial length/corneal radius ratio with higher values associated with a more myopic refraction. Time spent outdoors during daylight in summer was associated with shorter axial lengths and a less myopic spherical equivalent refraction in White participants. Strategies to promote daylight exposure in wintertime is a study recommendation.
abstract_id: PUBMED:31722876
How does spending time outdoors protect against myopia? A review. Myopia is an increasingly common condition that is associated with significant costs to individuals and society. Moreover, myopia is associated with increased risk of glaucoma, retinal detachment and myopic maculopathy, which in turn can lead to blindness. It is now well established that spending more time outdoors during childhood lowers the risk of developing myopia and may delay progression of myopia. There has been great interest in further exploring this relationship and exploiting it as a public health intervention aimed at preventing myopia in children. However, spending more time outdoors can have detrimental effects, such as increased risk of melanoma, cataract and pterygium. Understanding how spending more time outdoors prevents myopia could advance development of more targeted interventions for myopia. We reviewed the evidence for and against eight facets of spending time outdoors that may protect against myopia: brighter light, reduced peripheral defocus, higher vitamin D levels, differing chromatic spectrum of light, higher physical activity, entrained circadian rhythms, less near work and greater high spatial frequency (SF) energies. There is solid evidence that exposure to brighter light can reduce risk of myopia. Peripheral defocus is able to regulate eye growth but whether spending time outdoors substantially changes peripheral defocus patterns and how this could affect myopia risk is unclear. Spectrum of light, circadian rhythms and SF characteristics are plausible factors, but there is a lack of solid evidence from human studies. Vitamin D, physical activity and near work appear unlikely to mediate the relationship between time spent outdoors and myopia.
abstract_id: PUBMED:21258262
Blood levels of vitamin D in teens and young adults with myopia. Purpose: Longitudinal data suggest that time outdoors may be protective against myopia onset. We evaluated the hypothesis that time outdoors might create differences in circulating levels of vitamin D between myopes and non-myopes.
Methods: Subjects provided 200 μl of peripheral blood in addition to survey information about dietary intakes and time spent in indoor or outdoor activity. The 22 subjects ranged in age from 13 to 25 years. Myopes (n = 14) were defined as having at least -0.75 diopter of myopia in each principal meridian and non-myopes (n = 8) had +0.25 diopter or more hyperopia in each principal meridian. Blood level of vitamin D was measured using liquid chromatography/mass spectroscopy.
Results: Unadjusted blood levels of vitamin D were not significantly different between myopes (13.95 ± 3.75 ng/ml) and non-myopes (16.02 ± 5.11 ng/ml, p = 0.29) nor were the hours spent outdoors (myopes = 12.9 ± 7.8 h; non-myopes = 13.6 ± 5.8 h; p = 0.83). In a multiple regression model, total sugar and folate from food were negatively associated with blood vitamin D, whereas theobromine and calcium were positively associated with blood vitamin D. Myopes had lower levels of blood vitamin D by an average of 3.4 ng/ml compared with non-myopes when adjusted for age and dietary intakes (p = 0.005 for refractive error group, model R = 0.76). Gender, time outdoors, and dietary intake of vitamin D were not significant in this model.
Conclusions: The hypothesis that time outdoors might create differences in vitamin D could not be evaluated fully because time outdoors was not significantly related to myopia in this small sample. However, adjusted for differences in the intake of dietary variables, myopes appear to have lower average blood levels of vitamin D than non-myopes. Although consistent with the hypothesis above, replication in a larger sample is needed.
abstract_id: PUBMED:27350182
The use of conjunctival ultraviolet autofluorescence (CUVAF) as a biomarker of time spent outdoors. Purpose: Conjunctival ultraviolet autofluorescence (CUVAF) has been used in previous Southern Hemisphere myopia research as a marker for time spent outdoors. The validity of CUVAF as an indicator of time spent outdoors is yet to be explored in the Northern Hemisphere. It is unclear if CUVAF represents damage attributed to UV exposure or dry eye. This cross-sectional study investigated the association between CUVAF measures, self-reported time spent outdoors and measures of dry eye.
Methods: Participants were recruited from University staff and students (n = 50, 19-64 years; mean 41). None were using topical ocular medications (with the exception of dry eye treatments). Sun exposure and dry eye questionnaires (Ocular Surface Disease Index and McMonnies) were completed by the participant. Dryness was also assessed using slit lamp biomicroscopy and invasive tear break up time. Images of the temporal and nasal conjunctiva from the right and left eye were captured using a bespoke photography system. The total CUVAF area, average CUVAF pixel intensity per mm(2) and total CUVAF pixel intensity were analysed using MATLAB R2013a (The MathWorks Inc).
Results: Of the 50 participants, 42% were classified as having dry eye. Self-reported sunglasses use was negatively associated with all CUVAF measures (Kruskal Wallis total CUVAF area, p = 0.04, ptrend = 0.03, average CUVAF pixel intensity p = 0.02, ptrend = 0.02, total CUVAF pixel intensity: p = 0.04, ptrend = 0.02). Time spent outdoors was positively associated with all CUVAF measures (Spearman's correlation coefficients, total CUVAF area: r = 0.37, p = 0.01, average CUVAF pixel intensity: r = 0.36, p = 0.01, total CUVAF pixel intensity: r = 0.37, p = 0.01) and remained significant when sunglasses use was controlled for (partial correlation, total CUVAF area: r = 0.32, p = 0.03, average CUVAF pixel intensity: r = 0.39, p = 0.01, total CUVAF pixel intensity: r = 0.39, p = 0.03). Neither CUVAF area nor intensity measures were associated with any dry eye measure (Ocular Surface Disease Index: all p ≥ 0.41, corneal staining: all p ≥ 0.38, McMonnies: all r ≤ 0.09 all p ≥ 0.52, slit lamp biomicroscopy: all r ≤ 0.20 all p ≥ 0.17, invasive tear break up time: all r ≤ -0.07 all p ≥ 0.31).
Conclusions: CUVAF area and intensity were not associated with clinical measures of dry eye. Greater CUVAF area and intensity were associated with wearing sunglasses less frequently and spending more time outdoors. If sunglass wear is accounted for, CUVAF may be a useful biomarker of time spent outdoors in future myopia studies.
Answer: The relationship between vitamin D levels and the protective effects of time outdoors on myopia has been explored in various studies, but the evidence does not conclusively support that vitamin D mediates these protective effects.
In the Avon Longitudinal Study of Parents and Children (ALSPAC) cohort, while children who spent more time outdoors had higher levels of total vitamin D and D3, there was no independent association between vitamin D levels and incident myopia after controlling for time outdoors. Time outdoors retained a strong negative association with incident myopia, suggesting that factors other than vitamin D might be responsible for the protective effect against myopia (PUBMED:25406278).
Similarly, a review of studies examining the association between time outdoors, blood vitamin D status, and myopia concluded that the evidence supporting a role for vitamin D in myopia development is weak. The review suggested that blood vitamin D status might only serve as a biomarker for outdoor exposure, which is the actual protective factor against myopia (PUBMED:27921098).
Another study using the Raine Study Generation 2 cohort found that myopia in young adulthood was most strongly associated with recent 25-hydroxyvitamin D concentrations, a marker of time spent outdoors. However, the study did not establish a direct causal relationship between vitamin D levels and myopia, but rather used vitamin D as an indicator of outdoor activity (PUBMED:33423400).
Overall, while vitamin D levels may correlate with time spent outdoors, the current evidence does not strongly support the hypothesis that vitamin D mediates the protective effects of time outdoors on myopia. Instead, time outdoors itself appears to be the critical factor, with vitamin D levels potentially serving as a biomarker of outdoor activity rather than an independent protective agent against myopia development (PUBMED:25406278, PUBMED:27921098, PUBMED:33423400). |
Instruction: Is childhood abuse or neglect associated with symptom reports and physiological measures in women with irritable bowel syndrome?
Abstracts:
abstract_id: PUBMED:21196423
Is childhood abuse or neglect associated with symptom reports and physiological measures in women with irritable bowel syndrome? Purpose: Early childhood traumatic experiences (e.g., abuse or neglect) may contribute to sleep disturbances as well as to other indicators of arousal in patients with irritable bowel syndrome (IBS). This study compared women with IBS positive for a history of childhood abuse and/or neglect to women with IBS without this history on daily gastrointestinal (GI), sleep, somatic, and psychological symptom distress, polysomnographic sleep, urine catecholamines (CAs) and cortisol, and nocturnal heart rate variability (HRV).
Methods: Adult women with IBS recruited from the community were divided into two groups: 21 with abuse/neglect and 19 without abuse/neglect based on responses to the Childhood Trauma Questionnaire (CTQ; physical, emotional, sexual abuse, or neglect). Women were interviewed, maintained a 30-day symptom diary, and slept in a sleep laboratory. Polysomnographic and nocturnal HRV data were obtained. First-voided urine samples were assayed for cortisol and CA levels.
Results: Women with IBS positive for abuse/neglect history were older than women without this history. Among GI symptoms, only heartburn and nausea were significantly higher in women with abuse/neglect. Sleep, somatic, and psychological symptoms were significantly higher in women in the abuse/neglect group. With the exception of percentage of time in rapid eye movement (REM) sleep, there were few differences in sleep-stage variables and urine hormone levels. Mean heart rate interval and the natural log of the standard deviation of RR intervals for the entire sleep interval (Ln SDNN) values were lower in those who experienced childhood abuse/neglect.
Conclusion: Women with IBS who self-report childhood abuse/neglect are more likely to report disturbed sleep, somatic symptoms, and psychological distress. Women with IBS should be screened for adverse childhood events including abuse/neglect.
abstract_id: PUBMED:34499948
The importance of child abuse and neglect in adult medicine. The risk for adverse consequences and disease due to the trauma of child abuse or neglect is easily assessed using the self-administered modified ACEs questionnaire. Exposure to child maltreatment is endemic and common. At least one out of every ten USA adults has a significant history of childhood maltreatment. This is a review of the literature documenting that a past history of childhood abuse and neglect (CAN) makes substantial contributions to physical disease in adults, including asthma, chronic obstructive pulmonary disease, lung cancer, hypertension, stroke, kidney disease, hepatitis, obesity, diabetes, coronary artery disease, pelvic pain, endometriosis, chronic fatigue syndrome, irritable bowel syndrome, fibromyalgia, and auto immune diseases. Adults who have experienced child maltreatment have a shortened life expectancy. The contribution of CAN trauma to these many pathologies remains largely underappreciated and neglected compared to the attention given to the array of mental illnesses associated with child maltreatment. Specific pathophysiolologic pathways have yet to be defined. Clinical recognition of the impact of past CAN trauma will contribute to the healing process in any disease but identifying specific effective therapies based on this insight remains to be accomplished. Recommendations are made for managing these patients in the clinic. It is important to incorporate screening for CAN throughout adult medical practice now.
abstract_id: PUBMED:19785241
Effect of sexual and physical abuse on symptom experiences in women with irritable bowel syndrome. Background: Irritable Bowel Syndrome (IBS) is a common chronic functional bowel disorder characterized by alterations in bowel patterns and abdominal pain. One factor that is conjectured to contribute to the onset of IBS is sexual and/or physical abuse in childhood or as an adult. This conjecture is supported by the increased prevalence of abuse experiences in persons with IBS when compared to healthy controls or those with organically-defined gastrointestinal (GI) disorders.
Objectives: The purposes of the present study were to (a) compare the history of sexual and physical abuse in a sample of women with IBS to a sample of women without IBS and (b) to compare women with IBS who had sexual and physical abusive experiences to those who had not on GI symptoms, psychological distress, healthcare-seeking behavior, and physiological measures.
Methods: Data were collected from two samples of women (ages 18-40 years) with IBS and controls were recruited through community advertisements and letters from a health maintenance organization. Participants completed questionnaires (i.e., Sexual and Physical Abuse, Bowel Disease Questionnaire, Symptom Checklist-90-R) during an in-person interview and completed a symptom diary each night across one menstrual cycle. Cortisol and catecholamine levels were determined in morning urine samples on 6 days across the menstrual cycle.
Results: More women in the IBS group reported unwanted sexual contact during childhood relative to control women. Within the IBS group, minimal differences were found between those who had experienced abuse and those who had not. Women with IBS who had experienced abuse reported greater impact of GI symptoms on activity.
Conclusions: The prevalence of a history of childhood sexual abuse experiences is elevated among women with IBS. However, within women with IBS, those with a history of abuse do not appear to be different from those with no history of abuse on GI symptoms, psychological symptoms, or physiological arousal indicators.
abstract_id: PUBMED:27061107
Adverse childhood experiences are associated with irritable bowel syndrome and gastrointestinal symptom severity. Background: Early adverse life events (EALs) are associated with irritable bowel syndrome (IBS). Exposure to EALs as assessed by the Adverse Childhood Experiences (ACE) questionnaire is associated with greater disease prevalence, but ACE has not been studied in gastrointestinal disorders. Study aims were to: (i) Estimate the prevalence of EALs in the IBS patients using the ACE questionnaire; (ii) Determine correlations between ACE and Early Trauma Inventory Self Report-Short Form (ETI-SR) scores to confirm its validity in IBS; and (iii) Correlate ACE scores with IBS symptom severity.
Methods: A total of 148 IBS (73% women, mean age = 31 years) and 154 HCs (59% women, mean age = 30 years) completed the ACE and ETI-SR between June 2010 and April 2015. These surveys measured EALs before age 18 in the domains of physical, sexual, and emotional abuse, and general trauma. IBS and abdominal pain severity was measured by a 20-point scale (0 = none, 20 = worst symptoms).
Key Results: The ACE score increased the odds of having IBS (odds ratio [OR] = 2.05, 95% confidence interval [CI]: 1.21-3.48, p = 0.008). Household mental illness (p < 0.001), emotional abuse (p = 0.004), and incarcerated household member (p = 0.019) were significant predictors of IBS. Adverse childhood experiences and ETI-SR scores were strongly correlated (r = 0.59, p < 0.001). ACE, but not ETI-SR, modestly correlated with IBS severity (r = 0.17, p = 0.036) and abdominal pain (r = 0.20, p = 0.015).
Conclusions & Inferences: The ACE questionnaire is a useful instrument to measure EALs in IBS based on its use in large studies, its ability to measure prevalence across different EAL domains, and its correlation with symptom severity.
abstract_id: PUBMED:35812574
Adverse Childhood Experiences and Their Effect on Irritable Bowel Syndrome Among Saudi Arabian Adults. Background Adverse childhood experiences (ACEs) are traumatic events that occur before 18 years of age. ACEs have been associated with many negative health problems, including the development of chronic diseases, such as irritable bowel syndrome (IBS), a functional gastrointestinal disorder characterized by abdominal pain. We investigated the prevalence of ACEs among patients with IBS, identified the types of ACEs commonly related to patients with IBS, and further assessed the impact of ACEs on IBS severity. Methodology A cross-sectional study was performed. The study targeted patients with IBS aged ≥ 18 years who were recruited from gastroenterology outpatient clinics at King Abdulaziz University Hospital. Adults were contacted and invited to take part in the study by completing a survey. Data were collected using two validated questionnaires, the ACE questionnaire for adults and the IBS symptom severity scoring system. Results The study included 109 patients with IBS (59.6% females). The prevalence of ACEs (patients with IBS exposed to at least one ACE) was 63.3%. The most prevalent type was emotional abuse (34.9%), followed by both physical abuse and emotional neglect (28.4%). Females reported significantly more ACEs (p = 0.035) than males. The overall IBS symptoms (r = 0.195, p = 0.043) and abdominal pain (r = 0.240, p = 0.012) severity were significantly correlated with total ACEs score. Conclusions Our findings point to a probable association between ACEs exposure and IBS, demonstrating their long-term impacts on symptoms severity. Further studies are needed to acquire a better understanding of the potential impact of ACEs on IBS.
abstract_id: PUBMED:12021556
Childhood abuse and later medical disorders in women. An epidemiological study. Background: There have been many studies documenting adverse psychiatric consequences for people who have experienced childhood and adult sexual and physical abuse. These include posttraumatic stress disorder, anxiety, depression, substance abuse, eating disorders and probably some personality disorders or trait abnormalities. Much less is known about the links between abuse and physical/psychosomatic conditions in adult life. Hints of causal links are evident in the literature discussing headache, lower back pain, pelvic pain and irritable bowel syndrome. These studies are not definitive as they use clinic-based samples.
Methods: This study used interview data with a random community sample of New Zealand women, half of whom reported childhood sexual abuse and half who did not. Details about childhood physical abuse and adult abuse were also collected in a two-phase study.
Results: Complex relationships were found, as abuses tended to co-occur. Seven of 18 potentially relevant medical conditions emerged as significantly increased in women with one or more types of abuse. These were chronic fatigue, bladder problems, headache including migraine, asthma, diabetes and heart problems. Several of these associations with abuse are previously unreported.
Conclusions: In this random community sample, a number of chronic physical conditions were found more often in women who reported different types of sexual and physical abuse, both in childhood and in adult life. The causal relationships cannot be studied in a cross-sectional retrospective design, but immature coping strategies and increased rates of dissociation appeared important only in chronic fatigue and headache, suggesting that these are not part of the causal pathway between abuse experiences and the other later physical health problems. This finding and the low co-occurrence of the identified physical conditions suggest relative specificity rather than a general vulnerability to psychosomatic conditions in women who have suffered abuses. Each condition may require separate further study.
abstract_id: PUBMED:26155376
IRRITABLE BOWEL SYNDROME: Relationships with Abuse in Childhood. Irritable bowel syndrome is allegedly the most common gastrointestinal diagnosis in the United States. The etiology of this syndrome appears to entail the interaction of both genes and the environment. One potential environmental contributory factor to irritable bowel syndrome is abuse in childhood. Of the various forms of abuses previously examined, sexual abuse in childhood appears to be the most patent contributor. However, both emotional and physical abuses may also contribute to irritable bowel syndrome, although less distinctly. Studies examining a combined childhood-abuse variable (i.e., sexual, emotional, and/or physical abuses) in relationship to irritable bowel syndrome also indicate inconsistent results. Given the presence of childhood abuse as a potential factor in the development of irritable bowel syndrome, a number of pathophysiological events are postulated to explain this relationship, including alterations in norepinephrine and serotonin levels as well as dysregulation of the hypothalamic-pituitary-adrenal axis. Only future research will clarify the specific abuse elements (i.e., further clarification of the individual types of abuse, duration of abuse, roles of the perpetrator/victim) and the pathophysiological changes that culminate in irritable bowel syndrome.
abstract_id: PUBMED:26938439
Paradise Lost: The Neurobiological and Clinical Consequences of Child Abuse and Neglect. In the past two decades, much evidence has accumulated unequivocally demonstrating that child abuse and neglect is associated with a marked increase in risk for major psychiatric disorders (major depression, bipolar disorder, post-traumatic stress disorder [PTSD], substance and alcohol abuse, and others) and medical disorders (cardiovascular disease, diabetes, irritable bowel syndrome, asthma, and others). Moreover, the course of psychiatric disorders in individuals exposed to childhood maltreatment is more severe. Recently, the biological substrates underlying this diathesis to medical and psychiatric morbidity have been studied. This Review summarizes many of the persistent biological alterations associated with childhood maltreatment including changes in neuroendocrine and neurotransmitter systems and pro-inflammatory cytokines in addition to specific alterations in brain areas associated with mood regulation. Finally, I discuss several candidate gene polymorphisms that interact with childhood maltreatment to modulate vulnerability to major depression and PTSD and epigenetic mechanisms thought to transduce environmental stressors into disease vulnerability.
abstract_id: PUBMED:34469265
Association between abuse and neglect with functional constipation and irritable bowel syndrome in adolescents. Objectives: To evaluate the association between violence exposure, abuse, and neglect victimization with functional constipation and irritable bowel syndrome in adolescents.
Methods: Observational cross-sectional case-control study conducted with adolescents from two public schools in the municipality of Osasco, metropolitan region of São Paulo, Brazil. A self-administered questionnaire validated for Brazilian Portuguese Child Abuse Screening Tools - Children's version (ICAST-C) was used to screen the different types of violence. The definition of functional constipation and irritable bowel syndrome was performed using the Rome IV criteria for adolescents. Parents or legal guardians completed the questionnaire for socioeconomic assessment and signed the informed consent form.
Results: 265 students aged 11-17 years, 157 females, were evaluated. Functional constipation and irritable bowel syndrome were found in 74 (27.9%) of the 265 adolescents. Violence exposure was found in 82.6% of the 265 screened adolescents, physical abuse in 91.3%, psychological abuse in 93.2%, sexual abuse in 12.1%, and neglect in 53.6%. The multiple logistic regression analysis showed an association (p < .05) between functional constipation and irritable bowel syndrome with violence exposure (OR = 2.77), physical abuse (OR = 2.17), psychological abuse (OR = 2.95), and neglect (OR= 2.31). There was no association with sexual abuse.
Conclusions: Functional constipation and irritable bowel syndrome were associated with violence exposure, physical abuse, psychological abuse, and neglect in adolescent students from public schools. No association was found with sexual abuse. Further studies are necessary to investigate the causal relationship between violence and functional gastrointestinal disorders.
abstract_id: PUBMED:19845780
Childhood maltreatment and migraine (part III). Association with comorbid pain conditions. Objective: To evaluate in a headache clinic population the relationship of childhood maltreatment on the prevalence of pain conditions comorbid with migraine.
Background: Childhood maltreatment is highly prevalent and has been frequently associated with recurrent headache. The relationship of maltreatment and pain has, however, been a subject of some debate.
Methods: Cross-sectional data on self-reported physician-diagnosed pain conditions were electronically collected from persons with migraine (diagnosed according to International Classification of Headache Disorders-2), seeking treatment in headache clinics at 11 centers across the US and Canada. These included irritable bowel syndrome (IBS), chronic fatigue syndrome (CFS), fibromyalgia (FM), interstitial cystitis (IC), arthritis, endometriosis, and uterine fibroids. Other information included demographics, migraine characteristics (frequency, headache-related disability), remote and current depression (The Patient Health Questionnaire-9), and remote and current anxiety (The Beck Anxiety Inventory). Patients also completed the Childhood Trauma Questionnaire regarding sexual, emotional, and physical abuse, and emotional and physical neglect under the age of 18 years old. Statistical analyses accounted for the survey design and appropriate procedures in SAS such as surveymeans, surveyfreq, and surveylogistic were applied to the weighted data.
Results: A total of 1348 migraineurs (88% women) were included in this study (mean age 41 years). Based on physician diagnosis or validated criteria, 31% had IBS, 16% had CFS, and 10% had FM. Diagnosis of IC was reported by 6.5%, arthritis by 25%, and in women, endometriosis was reported by 15% and uterine fibroids by 14%. At least 1 comorbid pain condition was reported by 61%, 2 conditions by 18%, and 3 or more by 13%. Childhood maltreatment was reported by 58% of the patients. Emotional abuse was associated with increased prevalence of IBS, CFS, arthritis, and physical neglect with arthritis. In women, physical abuse was associated with endometriosis and physical neglect with uterine fibroids. Emotional abuse, and physical abuse and neglect (P < .0001 for all) were also associated with increased total number of comorbid conditions. In ordinal logistic regression models, adjusted for sociodemographics and current depression (prevalence 28%) and anxiety (prevalence 56%), emotional abuse (odds ratios [OR] = 1.69, 95% confidence intervals [CI]: 1.224-2.33) and physical neglect (OR = 1.73, 95% CI: 1.22-2.46) were independently associated with an increased number of pain conditions. The cohort of women, similarly, had associations of emotional abuse (OR = 1.94, 95% CI: 1.40-2.72) and physical neglect (OR = 1.90, 95% CI: 1.34-2.68) with an increased number of pain comorbidities.
Conclusion: The association of childhood maltreatment and pain was stronger in those reporting multiple pain conditions and multiple maltreatment types. This finding suggests that in migraineurs childhood maltreatment may be a risk factor for development of comorbid pain disorders.
Answer: Yes, childhood abuse or neglect is associated with symptom reports and physiological measures in women with irritable bowel syndrome (IBS). Studies have found that women with IBS who report a history of childhood abuse or neglect are more likely to report disturbed sleep, somatic symptoms, and psychological distress (PUBMED:21196423). Additionally, exposure to childhood maltreatment is linked to a range of physical diseases in adults, including IBS (PUBMED:34499948). More women with IBS report unwanted sexual contact during childhood compared to control women, and those with a history of abuse report a greater impact of GI symptoms on activity (PUBMED:19785241). Adverse childhood experiences (ACEs) are associated with IBS and gastrointestinal symptom severity, with household mental illness, emotional abuse, and incarcerated household member being significant predictors of IBS (PUBMED:27061107). Furthermore, the prevalence of ACEs among patients with IBS is notable, and the severity of overall IBS symptoms and abdominal pain correlates significantly with the total ACEs score (PUBMED:35812574).
In a broader context, childhood abuse has been linked to a number of chronic physical conditions, including chronic fatigue, bladder problems, headache including migraine, asthma, diabetes, and heart problems (PUBMED:12021556). The relationship between childhood abuse and IBS may involve pathophysiological changes such as alterations in norepinephrine and serotonin levels, as well as dysregulation of the hypothalamic-pituitary-adrenal axis (PUBMED:26155376). Moreover, childhood maltreatment is associated with a marked increase in risk for major psychiatric disorders and medical disorders, with persistent biological alterations including changes in neuroendocrine and neurotransmitter systems, pro-inflammatory cytokines, and specific brain areas associated with mood regulation (PUBMED:26938439).
Additionally, an association has been found between abuse and neglect with functional constipation and IBS in adolescents, with violence exposure, physical abuse, psychological abuse, and neglect being associated with these conditions (PUBMED:34469265). Lastly, childhood maltreatment has been associated with an increased prevalence of pain conditions comorbid with migraine, including IBS (PUBMED:19845780). |
Instruction: Does a volar locking plate provide equivalent stability as a dorsal nonlocking plate in a dorsally comminuted distal radius fracture?
Abstracts:
abstract_id: PUBMED:18827589
Does a volar locking plate provide equivalent stability as a dorsal nonlocking plate in a dorsally comminuted distal radius fracture?: a biomechanical study. Objectives: The purpose of this study was to compare the fixation afforded by a dorsal nonlocking plate with a volar locking plate in a fracture model simulating an extra-articular distal radius fracture with dorsal comminution (OTA [Orthopaedic Trauma Association] type 23-A3.2).
Methods: In 10 matched pairs of fresh-frozen cadaveric arms, a comminuted extra-articular dorsally unstable distal radius fracture (OTA type 23-A3.2) was created. The fractures were fixed with either dorsally placed nonlocking T-plate or volarly placed locking plate within matched pairs. The precycling stiffness with axial and torsional loading of the specimens was determined. The specimens were then loaded axially for 5000 cycles, and postcycling axial and torsional stiffness and load to failure were determined.
Results: The mean axial and torsional stiffness before and after cyclic loading of fractures stabilized with dorsal nonlocking plate was not significantly different than fractures fixed with volar locking plate. Although the mean load to failure was greater for the volar locking plate group than dorsal nonlocking plate group, the difference was not significant.
Conclusions: This study suggests that the fixation obtained with volar locking plates is as stable as fixation with a dorsal plate in acute healing period and can withstand the functional demands of the immediate postoperative period in dorsally comminuted unstable extra-articular distal radius fractures. Elimination of dorsal tendinopathy by using volar locking plates may lead to fewer long-term complications. Locking plates provided better stability in specimens with osteoporosis.
abstract_id: PUBMED:16039369
Locking versus nonlocking T-plates for dorsal and volar fixation of dorsally comminuted distal radius fractures: a biomechanical study. Purpose: To see if locking volar plates approach the strength of dorsal plates on a dorsally comminuted distal radius fracture model. Volar plates have been associated with fewer tendon complications than dorsal plates but are thought to have mechanical disadvantages in dorsally comminuted distal radius fractures. Locking plates may increase construct strength and stiffness. This study compares dorsal and volar locking and nonlocking plates in a dorsally comminuted distal radius fracture model.
Methods: Axial loading was used to test 14 pairs of embalmed radii after an osteotomy simulating dorsal comminution and plating in 1 of 4 configurations: a standard nonlocking 3.5-mm compression T-plate or a 3.5-mm locking compression T-plate applied either dorsally or volarly. Failure was defined as the point of initial load reduction caused by bone breakage or substantial plate bending.
Results: No significant differences in stiffness or failure strength were found between volar locked and nonlocked constructs. Although not significant, the stiffness of dorsal locked constructs was 51% greater than that of the nonlocked constructs. Locked or nonlocked dorsal constructs were more than 2 times stiffer than volar constructs. The failure strength of dorsal constructs was 53% higher than that of volar constructs. Failure for both volar locked and nonlocked constructs occurred by plate bending through the unfilled hole at the osteotomy site. Failure for both dorsal locked and nonlocked constructs occurred by bone breakage.
Conclusions: Locking plates failed to increase the stiffness or strength of dorsally comminuted distal radius fractures compared with nonlocking plates. Failure strength and stiffness are greater for locked or nonlocked dorsal constructs than for either locked or nonlocked volar constructs. Whether the lower stiffness and failure strength are of clinical significance is unknown. The unfilled hole at the site of comminution or osteotomy is potentially a site of weakness in both volar locked and nonlocked plates.
abstract_id: PUBMED:17079398
Internal fixation of dorsally displaced fractures of the distal part of the radius. A biomechanical analysis of volar plate fracture stability. Background: Volar plate fixation with use of either a locking plate or a neutralization plate has become increasingly popular among surgeons for the treatment of dorsally comminuted extra-articular distal radial fractures. The purpose of the present study was to compare the relative stability of five distal radial plates (four volar and one dorsal), all of which are commonly used for the treatment of dorsally comminuted extra-articular distal radial fractures, under loading conditions simulating the physiologic forces that are experienced during early active rehabilitation.
Methods: With use of a previously validated Sawbones fracture model, a dorsally comminuted extra-articular distal radial fracture was created. The fracture fixation stability of four volar plates (an AO T-plate, an AO 3.5-mm small-fragment plate, an AO 3.5-mm small-fragment locking plate, and the Hand Innovations DVR locking plate) were compared under axial compression loading and dorsal and volar bending simulating the in vivo stresses that are generated at the fracture site during early unopposed active motion of the wrist and digits. A single dorsal plate (an AO pi plate) was used for comparison, with and without simulated volar cortical comminution. The construct stiffness was measured to assess the resistance to fracture gap motion, and comparisons were made among the implants.
Results: The volar AO locking and DVR plates had greater resistance to fracture gap motion (greater stiffness) compared with the volar AO nonlocking and AO T-plates under axial and dorsal loading conditions (p < 0.01), with no significant difference between the AO volar locking and DVR plates. The volar AO locking plate had greater resistance to fracture gap motion than did the volar AO nonlocking plate under axial loading and dorsal bending forces (p < 0.01). The dorsal pi plate had the greatest resistance to fracture gap motion under axial loading and volar and dorsal bending forces (p < 0.01). However, the pi plate was significantly less stable to axial load and dorsal bending forces when the volar cortex was comminuted (p < 0.01).
Conclusions: In this model of dorsally comminuted extra-articular distal radial fractures, dorsal pi-plate fixation demonstrated better resistance to fracture gap motion than did the four types of volar plate fixation. The AO volar locking and DVR plates conferred the greatest resistance to fracture gap motion among the four volar plates tested. Volar locking technology conferred a significant increase in resistance to fracture gap motion as compared with nonlocking plate technology.
abstract_id: PUBMED:34674899
Comminuted Dorsal Ulnar Fragment in Distal Radius Fractures Treated Using the Integrated Compression Screw With a Mini-Plate. Stabilization for displaced dorsoulnar fragments in distal radius fractures is challenging to treat with conventional volar locking plates alone. The integrated compression screw combined with a volar locking plate has been introduced as an additional tool to stabilize the dorsoulnar fragment and has been reported to work effectively. However, the compression screw is unable to stabilize a comminuted dorsal ulnar fragment; therefore, it is necessary to consider using an additional dorsal plate. We have developed a modified surgical technique to stabilize a comminuted dorsal intra-articular fragment by combining the integrated compression screw with a mini-plate as a washer or a buttress.
abstract_id: PUBMED:17481999
Mechanical characteristics of locking and compression plate constructs applied dorsally to distal radius fractures. Purpose: Locking plates are thought to have many advantages such as a decreased incidence of loss of reduction secondary to screw toggling and improved bone healing due to an increased periosteal blood supply. We hypothesized that locking plates will also provide increased stiffness and increased load to failure when they are applied dorsally to stabilize dorsally comminuted distal radius fractures. This study compared the stiffness and strength of dorsally applied locking and standard (nonlocking) T-plates applied to a dorsally comminuted distal radius fracture model.
Methods: Sixteen pairs of embalmed cadaveric human radii were potted, and a standard wedge osteotomy was performed simulating a dorsally comminuted distal radius fracture. The radii were randomized into 2 groups, so that 8 pairs received a 3.5-mm dorsal locking T-plate over the osteotomy on the right radius and 8 pairs received the same on the left radius. A dorsal 3.5-mm standard T-plate was placed over the osteotomy on the contralateral radius in each group. An axial load was used to test the strength and stiffness of each construct. Paired t tests were then used to compare the strength and stiffness of the locking plate with those of the standard plate.
Results: A significant difference was found in both the stiffness and the strength between the locking and standard nonlocking plates. The locking T-plate was 33% stiffer than the standard T-plate. The locking T-plate had a 91% increase in the load to failure. Failure for both locking and standard T-plates occurred via volar cortex bone fracture.
Conclusions: Locking T-plates increased both the stiffness and strength of dorsally comminuted distal radius fractures compared with standard nonlocking T-plates by a statistically significant margin.
abstract_id: PUBMED:21549526
Dorsal and volar 2.4-mm titanium locking plate fixation for AO type C3 dorsally comminuted distal radius fractures. Purpose: In this retrospective, nonrandomized, single-surgeon study, we evaluated the clinical outcomes of dorsal and volar locking plate fixation for AO type C3 dorsally comminuted distal radius fractures.
Methods: We treated 41 consecutive patients who had sustained AO C3 dorsally comminuted fractures of the distal radius with 2.4-mm titanium locking plates between 2006 and 2008. Patients in group 1 (n = 22) were treated with dorsal locking plates, and those in group 2 (n = 19) with volar locking plates. We evaluated clinical outcomes at an average of 37 months and performed statistical analysis using the Mann-Whitney U test and Fisher's exact test.
Results: No significant difference was noted between the 2 groups in terms of radial inclination, volar tilt, and ulnar variance. At the 3- and 6-month follow-up, group 1 showed better clinical results with respect to wrist extension, grip strength, and Gartland and Werley score, whereas group 2 showed better wrist flexion during this period. The range of motion and grip strength progressively leveled out between the 2 groups, and no significant differences were observed at the 9- and 12-month assessments. One patient in group 1 had short-term complex regional pain syndrome, and 4 patients in group 2 had temporary median nerve numbness.
Conclusions: Treatment with dorsal or volar locking plates can provide satisfactory radiographic and functional outcomes for AO type C3 dorsal comminuted distal radius fractures. The dorsal plate group showed an earlier recovery of wrist extension, grip strength, and functional score at the 3- and 6-month follow-up owing to direct reduction as well as fragmental-specific fixation of the dorsal fracture fragments.
Type Of Study/level Of Evidence: Therapeutic IV.
abstract_id: PUBMED:26064356
Efficacy of volar and dorsal plate fixation for unstable dorsal distal radius fractures. Objective: To compare the efficacy of volar and dorsal plate fixation for unstable dorsal distal radius fractures.
Methods: Forty-seven cases were selected from patients undergoing surgical reduction and internal fixation treatment in our hospital from August 2006 to October 2010, with 21 males and 26 females, aged 39-73 years old. Patients were divided into two groups: volar plate fixation group (Group A) which has 32 cases, including 27 cases with locking plate, 5 cases with ordinary T plate, and 4 cases combined with dorsal Kirschner wire fixation; dorsal plate fixation group (Group B) which has 15 cases, including 7 cases with locking plate. The efficacy of the two fixation methods were compared in terms of postoperative wrist function, X-ray score, and postoperative complications.
Results: Compared with those of preoperative groups, the volar tilt, ulnar deviation and radial styloid height in both group A and B were significantly improved one week after surgery as shown by X-ray imaging. Comparison of X-ray images one week after surgery with those of six months after surgery showed no significant changes in volar tilt, ulnar deviation or radial styloid height. 87.5% of patients in group A and 86.7% of patients in group B got "excellent" in their wrist function assessment, and there was no significant difference between the two groups (X(2)=0.825, P=1.000). But patients in group A hax significantly lower incidence rate of postoperative complications than group B (X(2)=4.150, P=0.042).
Conclusion: For unstable distal radius fractures with dorsal displacement, volar plate fixation can achieve satisfactory reduction results, and cause less tendon damage or other complications than dorsal plate fixation.
abstract_id: PUBMED:28648329
Biomechanical Properties of 3-Dimensional Printed Volar Locking Distal Radius Plate: Comparison With Conventional Volar Locking Plate. Purpose: This study evaluated the biomechanical properties of a new volar locking plate made by 3-dimensional printing using titanium alloy powder and 2 conventional volar locking plates under static and dynamic loading conditions that were designed to replicate those seen during fracture healing and early postoperative rehabilitation.
Methods: For all plate designs, 12 fourth-generation synthetic composite radii were fitted with volar locking plates according to the manufacturers' technique after segmental osteotomy. Each specimen was first preloaded 10 N and then was loaded to 100 N, 200 N, and 300 N in phases at a rate of 2 N/s. Each construct was then dynamically loaded for 2,000 cycles of fatigue loading in each phase for a total 10,000 cycles. Finally, the constructs were loaded to a failure at a rate of 5 mm/min.
Results: All 3 plates showed increasing stiffness at higher loads. The 3-dimensional printed volar locking plate showed significantly higher stiffness at all dynamic loading tests compared with the 2 conventional volar locking plates. The 3-dimensional printed volar locking plate had the highest yield strength, which was significantly higher than those of 2 conventional volar locking plates.
Conclusions: A 3-dimensional printed volar locking plate has similar stiffness to conventional plates in an experimental model of a severely comminuted distal radius fracture in which the anterior and posterior metaphyseal cortex are involved.
Clinical Relevance: These results support the potential clinical utility of 3-dimensional printed volar locking plates in which design can be modified according the fracture configuration and the anatomy of the radius.
abstract_id: PUBMED:35601514
Appropriately Matched Fixed-Angle Locking Plates Improve Stability in Volar Distal Radius Fixation. Purpose: Size options for volar locking plates may provide value for distal radius fixation. We compared excessively narrow plates with plates that were appropriately matched in width for fixation of an multifragmented distal radius fracture model.
Methods: Eighteen matched pairs (right and left wrists) of large, cadaveric male distal radii specimens, prepared with a simulated Arbeitsgemeinschaft für Osteosynthesefragen type C-3 distal radius fractures, were tested. One specimen from each matched pair was randomized to receive a plate that was appropriately matched in width to the distal radius. The contralateral limb received a narrow plate, which in all cases was undersized in width. Fixation stability was tested and compared to the contralateral matched specimen. Specimens were preloaded at 50 N for 30 seconds before cyclic loading from 50-250 N at 1 Hz for 5000 cycles then loaded to failure.
Results: Loss of fixation under cyclic loading was significantly greater in the specimens fixed with excessively narrow plates compared with plates of appropriate width. When loaded to failure, the plates of appropriate width were stiffer, with higher force at failure and compressive strength than narrow plates. The primary mode of failure was displacement of the distal lunate facet fragment.
Conclusions: These findings suggest that optimally matching the volar locking plate width to the radius may provide advantages for stability of the fixation construct and fragment capture. This may be due to reduced stress concentration from the distribution of forces across a larger surface area.
Clinical Relevance: Optimizing the plate width to the radial width may improve fracture stability and may carry additional importance in comminuted fractures, where narrow plates may not completely capture small bone fragments.
abstract_id: PUBMED:35415519
Volar Locking Plate Fixation for Distal Radius Fractures by Intraoperative Computed Tomographic-Guided Navigation. Purpose: Unstable distal radius intra-articular fractures require restoration of alignment. Exact fixation of intra-articular fragments is ideal. Here, we employed intraoperative computed tomography (CT) navigation to insert screws accurately in the intra-articular dorsal fragments during treatment with a volar locking plate for distal radius intra-articular fractures. The main purposes of this study were to evaluate the accuracy of this procedure and the postoperative stability of the articular fragments through CT findings, as well as to assess clinical outcomes.
Methods: This study included 26 patients with distal radius fractures, who were treated with a volar locking plate using intraoperative CT navigation with a minimum follow-up of 12 months. Mean patient age was 63 years and mean follow-up was 16 months. We examined the position of the inserted distal screws and articular displacement on preoperative, intraoperative, and post-bone union CT images. The 3 distal ulnar screw positions that influence the stability of the dorsoulnar articular fragment were evaluated. The Mayo wrist score and Disabilities of the Arm, Shoulder, and Hand score were also clinically evaluated.
Results: Computed tomography evaluation revealed that the distal locking screws were appropriately inserted at the subchondral position, with sufficient length to stabilize the dorsal fragments, and reduction and stability of the articular fragment were acceptable. At the final follow-up, mean Mayo wrist score was 90.8 and mean Disabilities of the Arm, Shoulder, and Hand score was 9.6.
Conclusions: Intraoperative CT navigation was successfully used for volar locking plate fixation of intra-articular distal radius fractures. Computed tomography evaluation revealed that the screws were precisely inserted for articular fragments and bone union was achieved, maintaining good intra-articular alignment. The findings demonstrate the accuracy of volar locking plate fixation assisted by intraoperative CT navigation and the good clinical outcomes of this procedure.
Type Of Study/level Of Evidence: Therapeutic IV.
Answer: A volar locking plate provides equivalent stability to a dorsal nonlocking plate in a dorsally comminuted distal radius fracture model. This is supported by a biomechanical study that compared the fixation afforded by a dorsal nonlocking plate with a volar locking plate in a fracture model simulating an extra-articular distal radius fracture with dorsal comminution. The study found that the mean axial and torsional stiffness before and after cyclic loading of fractures stabilized with a dorsal nonlocking plate was not significantly different than fractures fixed with a volar locking plate. Additionally, the mean load to failure was greater for the volar locking plate group than the dorsal nonlocking plate group, although the difference was not significant. The study concluded that volar locking plates can withstand the functional demands of the immediate postoperative period in dorsally comminuted unstable extra-articular distal radius fractures and may lead to fewer long-term complications due to the elimination of dorsal tendinopathy (PUBMED:18827589).
Another study comparing locking and nonlocking T-plates for dorsal and volar fixation of dorsally comminuted distal radius fractures found no significant differences in stiffness or failure strength between volar locked and nonlocked constructs. However, dorsal constructs, whether locked or nonlocked, were more than twice as stiff as volar constructs, and the failure strength of dorsal constructs was 53% higher than that of volar constructs (PUBMED:16039369).
A separate biomechanical analysis compared the stability of five distal radial plates (four volar and one dorsal) under loading conditions simulating physiologic forces experienced during early active rehabilitation. The study found that dorsal pi-plate fixation demonstrated better resistance to fracture gap motion than the four types of volar plate fixation. Among the volar plates, the AO volar locking and DVR plates conferred the greatest resistance to fracture gap motion. Volar locking technology conferred a significant increase in resistance to fracture gap motion compared with nonlocking plate technology (PUBMED:17079398).
In summary, while volar locking plates provide stability comparable to dorsal nonlocking plates, certain dorsal plate constructs may offer greater stiffness and failure strength. However, the clinical significance of these differences in biomechanical properties remains to be fully understood, and the choice of fixation method may also be influenced by factors such as the potential for tendon complications and the specific characteristics of the fracture. |
Instruction: Positivity in illness: self-delusion or existential growth?
Abstracts:
abstract_id: PUBMED:15125802
Positivity in illness: self-delusion or existential growth? Objectives: This study investigated the relationship between a measure of positivity in illness, the Silver Lining Questionnaire (SLQ), and measures of personality and spirituality/religious beliefs as a way of determining whether positivity in illness is a delusion or existential growth.
Method: This is a cross-sectional study comparing response to the SLQ, to the Eysenck Personality Questionnaire (EPQ-R), breathlessness, illness type, and spiritual and religious beliefs in a final total sample of 194 respiratory outpatients.
Results: The SLQ was associated positively with extraversion (r =.16, p<.05), unrelated to neuroticism (r =.11, n.s.) and repression (r =.10, n.s.) and was positively associated with spiritual and religious beliefs, F(2; 187) = 7.12, p < 001, as predicted by the existential growth but not the delusion interpretation. There was no relationship between positivity and age, r(194) =.09, n.s., or between positivity and gender t(192) = -1.27, n.s., and nor were there relationships with type of illness, F(4, 188) = 2.17, n.s., or breathlessness, F (5, 173) = 0.42, n.s.
Conclusions: The results suggest that positivity in illness is associated with existential growth, though the cross-sectional nature of the study precludes a conclusion of causal direction. The non-significant correlation between the SLQ and neuroticism is in the opposite direction predicted by the delusion explanation, but the non-significant relationship between the SLQ and repression is in the predicted direction. We cannot rule out the possibility that some positivity is delusion.
abstract_id: PUBMED:32446197
An existential support program for people with cancer: Development and qualitative evaluation. Purpose: To describe the development process of an existential support program and to explore participants' evaluation of supportive/unsupportive processes of change.
Method: A five-day existential support program called "Energy for life" was designed including three main elements: 1. existential group counseling, 2. art therapy and 3. interaction with nature and aesthetic surroundings. The program was implemented at two different study sites. Focus group interviews were conducted to evaluate the program.
Results: 40 subjects were recruited (20 for each one of the two study sites) and 36 completed the study (31 women, five men) in the age range from 31 to 76 years and living with cancer across all stages and types. The program resulted in supportive processes of "existential sharing". The existential group counseling included a sharing process which led to an increased awareness and acceptance of one's existential situation and a preparation for the next steps in one's life. Art therapy offered a respite from the illness or the opportunity to express and share difficult thoughts and feelings connected to the illness experience. The interaction with nature/surroundings induced feelings of calmness and peace, increasing self-worth and spiritual belonging. Unsupportive processes of change related to the organization of the existential counseling groups, feelings of discomfort with creative engagement and feelings of distress provoked by a hospital environment.
Conclusion: Through "Energy for life" existential concerns and distress were shared, contained and transformed. Knowledge has been gained about how an existential support program can be designed that explicitly focuses on alleviating patients' existential distress.
abstract_id: PUBMED:35153958
Suffering a Healthy Life-On the Existential Dimension of Health. This paper examines the existential context of physical and mental health. Hans Georg Gadamer and The World Health Organization's conceptualizations are discussed, and current medicalized and idealized views on health are critically examined. The existential dimension of health is explored in the light of theories of selfhood consisting of different parts, Irvin Yalom's approach to "ultimate concerns" and Martin Heidegger's conceptualization of "existentials." We often become aware of health as an existential concern during times of illness, and health and illness can co-exist. The paper discusses how existential suffering in Western culture is described, to an increasing degree, as disorders or psychological deficits, and perfectionistic health goals easily can become a problem. We seek to avoid suffering rather than relate to it, with all the tension that may create. The paper argues that suffering is an unavoidable aspect of people's experience of their lives, and actively relating to suffering must be regarded as a fundamental aspect of health. The need and usefulness of a concept of "existential health" is discussed.
abstract_id: PUBMED:30307088
Existential distress in cancer: Alleviating suffering from fundamental loss and change. A severe life threatening illness can challenge fundamental expectations about security, interrelatedness with others, justness, controllability, certainty, and hope for a long and fruitful life. That distress and suffering but also growth and mastery may arise from confrontation with an existentially threatening stressor is a long-standing idea. But only recently have researchers studied existential distress more rigorously and begun to identify its distinct impact on health care outcomes. Operationalizations of existential distress have included fear of cancer recurrence, death anxiety, demoralization, hopelessness, dignity-related distress, and the desire for hastened death. These focus in varying emphasis on fear of death, concern about autonomy, suffering, or being a burden to others; a sense of profound loneliness, pointlessness or hopelessness; grief, regret, or embitterment about what has been missed in life; and shame if dignity is lost or expectations about coping are not met. We provide an overview of conceptual issues, diagnostic approaches, and treatments to alleviate existential distress. Although the two meta-analyses featured in this special issue indicate the progress that has been made, many questions remain unresolved. We suggest how the field may move forward through defining a threshold for clinically significant existential distress, investigating its comorbidity with other psychiatric conditions, and inquiring into adjustment processes and mechanisms underlying change in existential interventions. We hope that this special issue may inspire progress in this promising area of research to improve recognition and management of a central psychological state in cancer care.
abstract_id: PUBMED:25050872
A cognitive-existential intervention to improve existential and global quality of life in cancer patients: A pilot study. Objective: We developed a specific cognitive-existential intervention to improve existential distress in nonmetastatic cancer patients. The present study reports the feasibility of implementing and evaluating this intervention, which involved 12 weekly sessions in both individual and group formats, and explores the efficacy of the intervention on existential and global quality of life (QoL) measures.
Method: Some 33 nonmetastatic cancer patients were randomized between the group intervention, the individual intervention, and the usual condition of care. Evaluation of the intervention on the existential and global QoL of patients was performed using the existential well-being subscale and the global scale of the McGill Quality of Life (MQoL) Questionnaire.
Results: All participants agreed that their participation in the program helped them deal with their illness and their personal life. Some 88.9% of participants agreed that this program should be proposed for all cancer patients, and 94.5% agreed that this intervention helped them to reflect on the meaning of their life. At post-intervention, both existential and psychological QoL improved in the group intervention versus usual care (p = 0.086 and 0.077, respectively). At the three-month follow-up, global and psychological QoL improved in the individual intervention versus usual care (p = 0.056 and 0.047, respectively).
Significance Of Results: This pilot study confirms the relevance of the intervention and the feasibility of the recruitment and randomization processes. The data strongly suggest a potential efficacy of the intervention for existential and global quality of life, which will have to be confirmed in a larger study.
abstract_id: PUBMED:31040052
A concept analysis of the existential experience of adults with advanced cancer. Background: Attention to the existential dimension of an individual's experience during serious illness is important. However, existential concerns continue to be poorly defined in literature, leading to neglect in the clinical realm.
Purpose: This concept analysis seeks to clarify the concept of the existential experience within the context of adults with advanced cancer.
Methods: Rodgers' evolutionary method of concept analysis was used.
Discussion: Existential experience in adults with advanced cancer is a dynamic state, preceded by confronting mortality, defined by diverse reactions to shared existential challenges related to the parameters of existence (body, time, others, and death), resulting in a dialectical movement between existential suffering and existential health, with capacity for personal growth. Personal factors and the ability to cope appear to influence this experience.
Conclusion: These findings can drive future research and enhance clinician ability to attend to the existential domain, thereby improving patient experience at end-of-life.
abstract_id: PUBMED:37859430
"What it is like to be human": The existential dimension of care as perceived by professionals caring for people approaching death. Objectives: Existential/spiritual questions often arise when a person suffers from a serious and/or life-threatening illness. "Existential" can be seen as a broad inclusive term for issues surrounding people's experience and way of thinking about life. To be able to meet patients' existential needs, knowledge is needed about what the existential dimension includes. The aim of this study was to investigate how professionals caring for people with life-threatening disease perceive the existential dimension of care.
Methods: This study is based on a mixed method design utilizing a digital survey with open- and closed-ended questions. Descriptive statistics were applied to closed-ended questions and a qualitative descriptive approach was used for the responses to the open-ended questions. Healthcare professionals at specialized palliative care units, an oncology clinic and municipal healthcare within home care and a nursing home in Sweden answered the survey.
Results: Responses from 77 professionals expressed a broad perspective on existential questions such as thoughts about life and death. Identifying existential needs and performing existential care was considered a matter of attitude and responsiveness and thus a possible task for any professional. Existential needs centered around the opportunity to communicate, share thoughts and experiences, and be seen and heard. Existential care was connected to communication, sharing moments in the present without doing anything and was sometimes described as embedded in professionals' ordinary care interventions. The existential dimension was considered important by the majority of respondents.
Significance Of Results: This study indicates that with the right attitude and responsiveness, all professionals can potentially contribute to existential care, and that existential care can be embedded in all care. The existential dimension of care can also be considered very important by health professionals in a country that is considered secular.
abstract_id: PUBMED:35236538
Impact of an education program to facilitate nurses' discussions of existential issues in neurological care. Objectives: Discussing existential issues is integral to caring for people with acute, progressive, or life-limiting neurological illness, but there is a lack of research examining how nurses approach existential issues with this patient group and their family members. The purpose was to examine the experiential impact of an educational program for nurses designed to facilitate discussions of existential issues with patients and family members in neurological wards.
Method: Nurses in inpatient and outpatient care at a neurological clinic in Sweden were invited to participate in an education program about discussing existential issues with patients and their family members as related to neurological conditions. The evaluation of the program and of the nurses' view of discussing existential issues was conducted through focus groups before and after participation. The data were analyzed by qualitative content analysis.
Results: The program gave nurses a deeper understanding of existential issues and how to manage these conversations with patients and their family members. Both internal and external barriers remained after education, with nurses experiencing insecurity and fear, and a sense of being inhibited by the environment. However, they were more aware of the barriers after the education, and it was easier to find strategies to manage the conversations. They demonstrated support for each other in the team both before and after participating in the program.
Significance Of Results: The educational program gave nurses strategies for discussing existential issues with patients and family members. The knowledge that internal and external barriers impede communication should compel organizations to work on making conditions more conducive, for example, by supporting nurses to learn strategies to more easily manage conversations about existential issues and by reviewing the physical environment and the context in which they are conducted.
abstract_id: PUBMED:25964883
Existential well-being and meaning making in the context of primary brain tumor: conceptualization and implications for intervention. When faced with a significant threat to life, people tend to reflect more intensely upon existential issues, such as the meaning and purpose of one's life. Brain tumor poses a serious threat to a person's life, functioning, and personhood. Although recognized as an important dimension of quality of life, existential well-being is not well understood and reflects an overlooked area of support for people with brain tumor. This perspective article reviews the historical underpinnings of the concept of existential well-being and integrates this discussion with theoretical perspectives and research on meaning making and psychological adjustment to primary brain tumor. We then provide an overview of psychosocial support interventions for people with brain tumor and describe the findings of a recently published psychotherapy trial targeting existential well-being. Overall, this article highlights the importance of assessing the existential support needs of people with primary brain tumor and their family members, and providing different avenues of support to facilitate the meaning-making process across the illness trajectory.
abstract_id: PUBMED:34930220
Palliative care provider attitudes toward existential distress and treatment with psychedelic-assisted therapies. Background: Existential distress is a significant source of suffering for patients facing life-threatening illness. Psychedelic-Assisted Therapies (PAT) are novel treatments that have shown promise in treating existential distress, but openness to providing PAT may be limited by stigma surrounding psychedelics and the paucity of education regarding their medical use. How PAT might be integrated into existing treatments for existential distress within palliative care remains underexplored.
Methods: The present study aimed to elucidate the attitudes of palliative care clinicians regarding treatments for existential distress, including PAT. We recruited palliative care physicians, advanced practice nurses, and spiritual and psychological care providers from multiple US sites using purposive and snowball sampling methods. Attitudes toward PAT were unknown prior to study involvement. Semi-structured interviews targeted at current approaches to existential distress and attitudes toward PAT were analyzed for thematic content.
Results: Nineteen respondents (seven physicians, four advanced practice nurses, four chaplains, three social workers, and one psychologist) were interviewed. Identified themes were 1) Existential distress is a common experience that is frequently insufficiently treated within the current treatment framework; 2) Palliative care providers ultimately see existential distress as a psychosocial-spiritual problem that evades medicalized approaches; 3) Palliative care providers believe PAT hold promise for treating existential distress but that a stronger evidence base is needed; 4) Because PAT do not currently fit existing models of existential distress treatment, barriers remain.
Conclusions: PAT is seen as a potentially powerful tool to treat refractory existential distress. Larger clinical trials and educational outreach are needed to clarify treatment targets and address safety concerns. Further work to adapt PAT to palliative care settings should emphasize collaboration with spiritual care as well as mental health providers and seek to address unresolved concerns about equitable access.
Answer: The question of whether positivity in illness represents self-delusion or existential growth has been explored in various studies. The Silver Lining Questionnaire (SLQ) study found that positivity in illness was associated with extraversion and spiritual and religious beliefs, which supports the notion of existential growth rather than delusion. The lack of significant correlation between the SLQ and neuroticism, which would be expected if positivity were a delusion, further supports this interpretation. However, the study acknowledges that some positivity could still be delusional, as the cross-sectional nature of the study does not allow for a definitive conclusion on causal direction (PUBMED:15125802).
Other research has focused on the development of support programs that address existential concerns in patients with cancer. For instance, the "Energy for life" program, which includes existential group counseling, art therapy, and interaction with nature, has been shown to support existential sharing and help patients increase awareness and acceptance of their existential situation. This suggests that such programs can facilitate existential growth by providing patients with tools to cope with their illness (PUBMED:32446197).
The concept of existential health has also been discussed, with the argument that suffering is an unavoidable aspect of life and that actively relating to suffering is a fundamental aspect of health. This perspective implies that existential growth involves acknowledging and engaging with suffering rather than denying it (PUBMED:35153958).
Furthermore, existential distress in cancer patients has been recognized as a significant issue, with interventions aimed at alleviating this distress showing promise in improving health care outcomes. This indicates that addressing existential concerns can lead to growth and mastery, rather than being a form of self-delusion (PUBMED:30307088).
In conclusion, the evidence suggests that positivity in illness is more aligned with existential growth than with self-delusion. Interventions and support programs that address existential concerns can facilitate this growth, helping patients find meaning and cope with their illness in a constructive way. |
Instruction: Do rural consumers expect a prescription from their GP visit?
Abstracts:
abstract_id: PUBMED:15720314
Do rural consumers expect a prescription from their GP visit? Investigation of patients' expectations for a prescription and doctors' prescribing decisions in rural Australia. Objective: To assess patients' expectation for receiving a prescription and GPs' perceptions of patient expectation for a prescription.
Design: Matched questionnaire study completed by patients and GPs.
Setting: Seven general practices in rural Queensland, Australia.
Subjects: The subjects were 481 patients consulting 17 GPs.
Main Outcome Measures: Patients' expectation for receiving a prescription and GPs' perceptions of patients' expectation.
Results: Ideal expectation (hope) for a prescription was expressed by 57% (274/481) of patients. Sixty-six per cent (313/481) thought it was likely that the doctor would actually give them a prescription. Doctors accurately predicted hope or lack of hope for a prescription in 65% (314/481) of consultations, but were inaccurate in 19% (93/481). A prescription was written in 55% of consultations. No increase in patients' expectation, doctors' perceptions of expectation, or decision to prescribe were detected for patients living a greater distance from the doctors.
Conclusions: Rural patients demonstrated similar rates of hope for a prescription to those found in previous urban studies. Rural doctors seem to be similarly 'accurate' and 'inaccurate' in determining patients' expectations. Rates of prescribing were comparable to urban rates. Distance was not found to increase the level of patient expectation, affect the doctors' perception or to influence the decision to prescribe.
abstract_id: PUBMED:23876926
Between the idea and the reality: GP liaison in a rural setting. Objectives: To describe the organisational, clinical and pragmatic features of a GP liaison service established by the Division of Mental Health in the Darling Downs Hospital and Health Service catchment to facilitate the care of rural patients and improve communication between primary and specialist care.
Conclusions: The GP liaison service was created using funding from the Commonwealth STP initiative to provide weekly registrar clinics to primary care providers in the Darling Downs. The service was eagerly accepted by providers who saw patient benefits outweighing financial considerations. Expectations of a greater level of care than the assessment and advice provided reflects the large unmet need for mental health services in rural areas. GPs expressed enthusiasm for true collaborative care, such as case management overseen by the public mental health service but based at GP offices.
abstract_id: PUBMED:35206024
Rural Ties and Consumption of Rural Provenance Food Products-Evidence from the Customers of Urban Specialty Stores in Portugal. Consumers' food preferences increasingly meet concerns of authenticity, health, origin, and sustainability, altogether attributes embodied in rural provenance food products. The dynamics of production, commercialization, and availability of these products in urban centers are growing stronger. This study aims to explore rural provenance food consumption and underlying motivations, the consumers' images of products and provenance areas, and the influence of rural ties in consumption. Data from a survey directed to 1554 consumers of 24 urban specialty stores located in three Portuguese cities were analyzed. The analysis is based on the differences between frequent and sporadic consumers of Portuguese rural provenance food products. The two groups significantly differ in the reasons provided to acquire the products. Those who buy and consume these products more frequently especially value sensorial features, convenience, national provenance, and the impacts on rural development. Additionally, the motivations to choose rural provenance foods tend to pair with positive images of those products and of their territories of origin. This is intrinsically connected with familiarity, a nuclear notion that encompasses the symbolic images of the products and their origins as actual connections (familiar and otherwise) to rural contexts.
abstract_id: PUBMED:25528572
Which dimensions of access are most important when rural residents decide to visit a general practitioner for non-emergency care? Objective: Access to primary healthcare (PHC) services is key to improving health outcomes in rural areas. Unfortunately, little is known about which aspect of access is most important. The objective of this study was to determine the relative importance of different dimensions of access in the decisions of rural Australians to utilise PHC provided by general practitioners (GP).
Methods: Data were collected from residents of five communities located in 'closely' settled and 'sparsely' settled rural regions. A paired-comparison methodology was used to quantify the relative importance of availability, distance, affordability (cost) and acceptability (preference) in relation to respondents' decisions to utilise a GP service for non-emergency care.
Results: Consumers reported that preference for a GP and GP availability are far more important than distance to and cost of the service when deciding to visit a GP for non-emergency care. Important differences in rankings emerged by geographic context, gender and age.
Conclusions: Understanding how different dimensions of access influence the utilisation of PHC services is critical in planning the provision of PHC services. This study reports how consumers 'trade-off' the different dimensions of access when accessing GP care in rural Australia. The results show that ensuring 'good' access requires that policymakers and planners should consider other dimensions of access to services besides geography.
abstract_id: PUBMED:26110147
Utilization of Rural Primary Care Physicians' Visit Services for Diabetes Management of Public Health in Southwestern China: A Cross-Sectional Study from Patients' View. Background: Primary care physicians' visit services for diabetes management are now widely delivered in China's rural public health care. Current studies mainly focus on supply but risk factors from patients' view have not been previously explored. This study aims to present the utilization of rural primary care physicians' visit services for diabetes management in the last 12 months in southwestern China, and to explore risk factors from patients' view.
Methods: This cross sectional study selected six towns at random and all 385 diabetics managed by primary care physicians were potential participants. Basing on the inclusion and exclusion criteria, 374 diabetics were taken as valid subjects and their survey responses formed the data resource of analyses. Descriptive indicators, χ2 contingency table analyses and Logistic regression were used.
Results: 54.8% respondents reported the utilization of visit services. According to the multivariate analysis, the positive factors mainly associated with utilization of visit services include disease duration (OR=1.654), use of diabetic drugs (OR=1.869), consulting diabetes care knowledge (OR=1.602), recognition of diabetic complications (OR=1.662), needs of visit services (OR=2.338).
Conclusion: The utilization of rural primary care physicians' visit services still remains unsatisfactory. Mass rural health policy awareness, support, and emphasis are in urgent need and possible risk factors including disease duration, use of diabetic drugs, consulting diabetes care knowledge, recognition of diabetic complications and needs of visit services should be taken into account when making rural health policy of visit services for diabetes management in China and many other low- and middle-income countries.
abstract_id: PUBMED:34066610
Differences in the Pattern of Non-Recreational Sharing of Prescription Analgesics among Patients in Rural and Urban Areas. Introduction: This study aimed to analyze differences in sharing of prescription analgesics between rural and urban populations.
Methods: We surveyed 1000 participants in outpatient family medicine settings in Croatia. We used a 35-item questionnaire to analyze patients' characteristics, pain intensity, prescription analgesic sharing behavior, and perception of risks regarding sharing prescription medications.
Results: Prescription analgesic sharing was significantly more frequent in the rural (64%) than in the urban population 55% (p = 0.01). Participants from rural areas more commonly asked for verbal or written information than those from urban areas when taking others' prescription analgesics (p < 0.001) or giving such analgesics (p < 0.001). Participants from rural areas more commonly informed their physician about such behavior compared to those from urban areas (p < 0.01), and they were significantly more often asked about such behavior by their physician (p < 0.01). Perceptions about risks associated with sharing prescription medication were similar between rural and urban populations.
Conclusions: There are systematic differences in the frequency of prescription analgesics and associated behaviors between patients in family medicine who live in rural and urban areas. Patients from rural areas were more prone to share prescription analgesics. Future studies should examine reasons for differences in sharing prescription analgesics between rural and urban areas.
abstract_id: PUBMED:35035183
The role of novel instruments of brand communication and brand image in building consumers' brand preference and intention to visit wineries. This research aims to analyze brand communication and brand image as specific drivers of wine brand preference and their influence on wine consumers' intention to visit associated wineries. Specifically, this paper enhances the understanding of the roles of advertising-promotion, sponsorship-public relations, corporate social responsibility, and social media in brand communication, as well as functional, emotional and reputation components in brand image development in the context of wine tourism industry. Data was collected through a structured and self-administered questionnaire from 486 visitors to wineries in Spain. Partial least squares regression was used to evaluate the measurement model and the hypotheses. The empirical analysis shows that brand communication and brand image have similar positive effects on brand preference, and that brand image mediates the relationship between brand communication and brand preference. This research suggests implications for theory and practice relative to brand management in terms of communication and image; and it proposes insights into novel communication tools and marketing activities for the winery tourism industry. Firms should employ a holistic evaluation of brand communication to involve the whole organization, which would enhance the strategic role that brand communication plays.
Supplementary Information: The online version contains supplementary material available at 10.1007/s12144-021-02656-w.
abstract_id: PUBMED:34625971
Sensory profile and acceptance of maize tortillas by rural and urban consumers in Mexico. Background: Maize tortillas are the staple food of Mexico and their consumption contributes to preserving the gastronomic patrimony and food security of the population. The aim of the present study was to generate a reference sensory profile for different types of tortillas and to evaluate the effect that these sensory characteristics have on consumer liking and how this influences their consumption preferences and purchase intent. Three types of maize tortillas were analyzed: traditional (T1), combined (T2) and industrialized (T3). The samples were characterized using the modified flash profile method. Sensory acceptability and preference tests were conducted on 240 urban and rural consumers.
Results: The judges characterized 19 attributes in the tortilla samples, eight of which were also identified by consumers. In the case of traditional tortillas, the matching attributes were maize flavor, color, thickness and moisture. Only rural consumers were able to perceive significant differences between the samples in terms of aroma and taste/flavor. The study has contributed to understanding the complex mechanisms of sensory acceptance through the use of tools that combine qualitative and quantitative data.
Conclusion: Although 56% of rural and urban consumers prefer traditional tortillas for their sensory characteristics, purchase intent is also affected by socioeconomic, cultural and microbiological factors. © 2021 Society of Chemical Industry.
abstract_id: PUBMED:24853143
Source of prescription drugs used nonmedically in rural and urban populations. Background: Unintentional overdose deaths due to nonmedical use of prescription drugs disproportionately impact rural over urban settings in the United States. Sources of these prescriptions may play a factor.
Objective: This study examines the relationships between rurality and source of prescription drugs used nonmedically.
Methods: Using data from the National Survey on Drug Use and Health 2008-2010 (n = 10 693), we examined bivariate and multivariate associations of socio-demographic and clinical correlates and source (physician or non-physician) of prescription drugs (opioid, sedative, tranquilizer, or stimulant) used nonmedically among urban and rural residents. We also examined the type of prescription drugs used nonmedically among urban and rural residents by source.
Results: Among respondents reporting past year nonmedical use of prescription medications, 18.9% of urban residents and 17.5% of rural residents had a physician source for drugs used nonmedically. Likelihood of physician source for urban and rural residents was increased in Hispanic, insured individuals and those with any prescription drug dependence. Rural residents who had a non-physician source were more likely than urban residents with a non-physician source to use hydrocodone (49% vs. 43.3%, p = 0.05), morphine (17.7% vs. 14.0%, p = 0.04), tramadol products (18.1% vs. 11.2%, p = 0.003), and methadone (17.2% vs. 10.9%, p = 0.001).
Conclusion: Urban and rural individuals with nonmedical use of prescription drugs are equally likely to obtain these prescriptions from a physician source. Some factors associated with obtaining prescription drugs from a physician source differed between these populations, which may have implications for public health approaches to addressing problems such as overdose.
abstract_id: PUBMED:32867750
How does GP training impact rural and remote underserved communities? Exploring community and professional perceptions. Background: Substantial government funding has been invested to support the training of General Practitioners (GPs) in Australia to serve rural communities. However, there is little data on the impact of this expanded training on smaller communities, particularly for smaller rural and more remote communities. Improved understanding of the impact of training on underserved communities will assist in addressing this gap and inform ongoing investment by governments and communities.
Method: A purposive sample of GP supervisors, GP registrars, practice managers and health services staff, and community members (n = 40) from previously identified areas of workforce need in rural and remote North-West Queensland were recruited for this qualitative study. Participants had lived in their communities for periods ranging from a few months to 63 years (Median = 12 years). Semi-structured interviews and a focus group were conducted to explore how establishing GP training placements impacts underserved communities from a health workforce, health outcomes, economic and social perspective. The data were then analysed using thematic analysis.
Results: Participants reported they perceived GP training to improve communities' health services and health status (accessibility, continuity of care, GP workforce, health status, quality of health care and sustainable health care), some social factors (community connectedness and relationships), cultural factors (values and identity), financial factors (economy and employment) and education (rural pathway). Further, benefits to the registrars (breadth of training, community-specific knowledge, quality of training, and relationships with the community) were reported that also contributed to community development.
Conclusion: GP training and supervision is possible in smaller and more remote underserved communities and is perceived positively. Training GP registrars in smaller, more remote communities, matches their training more closely with the comprehensive primary care services needed by these communities.
Answer: Yes, rural consumers do expect a prescription from their GP visit. A study conducted in rural Queensland, Australia, found that 57% of patients expressed an ideal expectation (hope) for a prescription, and 66% thought it was likely that the doctor would actually give them a prescription. However, doctors accurately predicted hope or lack of hope for a prescription in only 65% of consultations, indicating some level of inaccuracy in understanding patient expectations. The study concluded that rural patients demonstrated similar rates of hope for a prescription to those found in previous urban studies, and the rates of prescribing were comparable to urban rates. Distance from the doctors did not increase patients' expectation, affect the doctors' perception, or influence the decision to prescribe (PUBMED:15720314). |
Instruction: Are Obese Residents More Likely to Be Admitted to Nursing Homes That Have More Deficiencies in Care?
Abstracts:
abstract_id: PUBMED:27225360
Are Obese Residents More Likely to Be Admitted to Nursing Homes That Have More Deficiencies in Care? Objectives: To determine whether obese older adults who qualify for nursing home (NH) placement are as likely as nonobese adults to be admitted to NHs that provide adequate quality of care.
Design: Retrospective study.
Setting: NHs in New York State.
Participants: Individuals aged 65 and older newly admitted to a NH in New York State in 2006-07.
Measurements: Total and healthcare-related deficiency citations for each facility were obtained from the Online Survey, Certification, and Reporting file. Bivariate and multivariate regression analyses were used to assess the association between obesity (body mass index (BMI) 30.0-39.9 kg/m(2) ) and morbid obesity (BMI ≥ 40.0 kg/m(2) ) separately and admission to facilities with more deficiencies.
Results: NHs that admitted a higher proportion of morbidly obese residents were more likely to have more deficiencies, whether total or healthcare related. These NHs also had greater odds of having severe deficiencies, or falling in the top quartile ranking of total deficiencies. After sequentially controlling for the choice of facilities within the inspection region, resident characteristics, and facility covariates, the association between morbid obesity and admission to higher-deficiency NHs persisted.
Conclusion: Residents with morbid obesity were more likely to be admitted to NHs of poorer quality based on deficiency citations. The factors driving these disparities and their impact on the care of obese NH residents require further elucidation.
abstract_id: PUBMED:34919836
Characteristics of Working-Age Adults With Schizophrenia Newly Admitted to Nursing Homes. Objectives: Persons aged <65 years account for a considerable proportion of US nursing home residents with schizophrenia. Because they are often excluded from psychiatric and long-term care studies, a contemporary understanding of the characteristics and management of working-age adults (22-64 years old) with schizophrenia living in nursing homes is lacking. This study describes characteristics of working-age adults with schizophrenia admitted to US nursing homes in 2015 and examines variations in these characteristics by age and admission location. Factors associated with length of stay and discharge destination were also explored.
Design: This is a cross-sectional study using the Minimum Data Set 3.0 merged to Nursing Home Compare.
Setting And Participants: This study examines working-age (22-64 years) adults with schizophrenia at admission to a nursing home.
Methods: Descriptive statistics of resident characteristics (sociodemographic, clinical comorbidities, functional status, and treatments) and facility characteristics (ownership, geography, size, and star ratings) were examined overall, stratified by age and by admission location. Generalized estimating equation models were used to explore the associations of age, discharge to the community, and length of stay with relevant resident and facility characteristics. Coefficient estimates, adjusted odds ratios, and 95% CIs are presented.
Results: Overall, many of the 28,330 working-age adults with schizophrenia had hypertension, diabetes, and obesity. Those in older age subcategories tended to have physical functional dependencies, cognitive impairments, and clinical comorbidities. Those in younger age subcategories tended to exhibit higher risk of psychiatric symptoms.
Conclusions And Implications: Nursing home admission is likely inappropriate for many nursing home residents with schizophrenia aged <65 years, especially those in younger age categories. Future psychiatric and long-term care research should include these residents to better understand the role of nursing homes in their care and should explore facility-level characteristics that may impact quality of care.
abstract_id: PUBMED:38011172
High Medicaid Nursing Homes: Contextual Factors Associated with the Availability of Specialized Resources Required to Care for Obese Residents. Obesity is an increasingly important concern in the delivery of high-quality nursing home care. Obese nursing home residents require specialized equipment and resources. As high Medicaid nursing homes have limited financial ability, they may lack the necessary resources to address the needs of obese residents. Moreover, there are variations in the availability of obesity-related specialized resources across these facilities. This study aims to investigate the organizational and market factors associated with the availability of obesity-related specialized resources in high-Medicaid nursing homes. Survey and secondary data sources for the study period 2017-2018 were utilized. The survey data were merged with Brown University's Long Term Care Focus (LTCFocus), Nursing Home Compare, and Area Health Resource File datasets. The dependent variable was the composite score of obesity-related specialized resources, ranging from 0-19. An ordinary least square regression with propensity score weights (to adjust for potential survey non-response bias), along with appropriate organizational/market level control variables were used for our analysis. Our results suggest that payer-mix (>Medicare residents) and a higher proportion of obese residents were positively associated with the availability of obesity-related specialized resources. Policymakers should consider implementing incentives, such as increased Medicaid payments, to assist high Medicaid nursing homes in addressing the specific needs of obese residents.
abstract_id: PUBMED:25918773
The high price of obesity in nursing homes. This article provides a commentary on the costs of obese nursing home patients. We conducted a comprehensive literature search, which found 46 relevant articles on obesity in older adults and effects on nursing home facilities. This review indicated obesity is increasing globally for all age groups and older adults are facing increased challenges with chronic diseases associated with obesity more than ever before. With medical advances comes greater life expectancy, but obese adults often experience more disabilities, which require nursing home care. In the United States, the prevalence of obesity in adults aged 60 years and older increased from 9.9 million (23.6%) to 22.2 million (37.0%) in 2010. Obese older adults are twice as likely to be admitted to a nursing home. Many obese adults have comorbidities such as Type 2 diabetes; patients with diabetes incurred 1 in every 4 nursing home days. Besides the costs of early entrance into nursing facilities, caring for obese residents is different than caring for nonobese residents. Obese residents have more care needs for additional equipment, supplies, and staff costs. Unlike emergency rooms and hospitals, nursing homes do not have federal requirements that require them to serve all patients. Currently, some nursing homes are not prepared to deal with very obese patients. This is a public health concern because there are more obese people than ever in history before and the future appears to have even a heavier generation moving forward. Policymakers need to become aware of this serious gap in nursing home care.
abstract_id: PUBMED:30689774
The Increasing Prevalence of Obesity in Residents of U.S. Nursing Homes: 2005-2015. Background: Obesity prevalence has been increasing over decades among the U.S. population. This study analyzed trends in obesity prevalence among long-stay nursing home residents from 2005 to 2015.
Methods: Data came from the Minimum Data Sets (2005-2015). The study population was limited to long-stay residents (ie, those residing in a nursing home ≥100 days in a year). Residents were stratified into body mass index (BMI)-based groups: underweight (BMI < 18.5), normal weight (18.5 ≤ BMI < 25), overweight (25 ≤ BMI < 30), and obese (BMI ≥ 30); residents with obesity were further categorized as having Class I (30 ≤ BMI < 35), Class II (35 ≤ BMI < 40), or Class III (BMI ≥ 40) obesity. Minimum Data Sets assessments for 2015 were used to compare clinical and functional characteristics across these groups.
Results: Obesity prevalence increased from 22.4% in 2005 to 28.0% in 2015. The prevalence of Class III obesity increased from 4.0% to 6.2%. The prevalence of underweight, normal weight, and overweight decreased from 8.5% to 7.2%, from 40.3% to 37.1%, and from 28.9% to 27.8%, respectively. In 2015, compared with residents with normal weight, residents with obesity were younger, were less likely to be cognitively impaired, had high levels of mobility impairment, and were more likely to have important medical morbidities.
Conclusions And Relevance: There was a steady upward trend in obesity prevalence among nursing home residents for 2005-2015. Medical and functional characteristics of these residents may affect the type and level of care required, putting financial and staffing pressure on nursing homes.
abstract_id: PUBMED:33129263
Impact of morbidity on care need increase and mortality in nursing homes: a retrospective longitudinal study using administrative claims data. Background: A growing number of older people are care dependent and live in nursing homes, which accounts for the majority of long-term-care spending. Specific medical conditions and resident characteristics may serve as risk factors predicting negative health outcomes. We investigated the association between the risk of increasing care need and chronic medical conditions among nursing home residents, allowing for the competing risk of mortality.
Methods: In this retrospective longitudinal study based on health insurance claims data, we investigated 20,485 older adults (≥65 years) admitted to German nursing homes between April 2007 and March 2014 with care need level 1 or 2 (according to the three level classification of the German long-term care insurance). This classification is based on required daily time needed for assistance. The outcome was care level change. Medical conditions were determined according to 31 Charlson and Elixhauser conditions. Competing risks analyses were applied to identify chronic medical conditions associated with risk of care level change and mortality.
Results: The probability for care level change and mortality acted in opposite directions. Dementia was associated with increased probability of care level change compared to other conditions. Patients who had cancer, myocardial infarction, congestive heart failure, cardiac arrhythmias, renal failure, chronic pulmonary disease, weight loss, or recent hospitalization were more likely to die, as well as residents with paralysis and obesity when admitted with care level 2.
Conclusion: This paper identified risk groups of nursing home residents which are particularly prone to increasing care need or mortality. This enables focusing on these risk group to offer prevention or special treatment. Moreover, residents seemed to follow specific trajectories depending on their medical conditions. Some were more prone to increased care need while others had a high risk of mortality instead. Several conditions were neither related to increased care need nor mortality, e.g., valvular, cerebrovascular or liver disease, peripheral vascular disorder, blood loss anemia, depression, drug abuse and psychosis. Knowledge of functional status trajectories of residents over time after nursing home admission can help decision-makers when planning and preparing future care provision strategies (e.g., planning of staffing, physical equipment and financial resources).
abstract_id: PUBMED:22811294
Temporal and structural differences in the care of obese and non-obese people in nursing homes Obesity is a common disease in Germany. Although care facilities are confronted with an increasing number of obese people, the care of them in nursing homes is barely investigated. The present study examines the amount of work using the example of the activity of dressing obese and non-obese nursing home residents and discloses with its temporal and structural differences. In five nursing homes in Berlin a fully structured observational study based on a convenience sample was conducted. 48 nurses were observed while performing the activity of dressing 70 residents aged 65 years and older. The residents' demographic data and medical diagnoses were taken from the nursing records. Information about the functional/cognitive status and pain events were collected by using the interRAI Contact Assessment. Further data regarding the nurses were obtained through face-to-face interviews. The results show a significant correlation between Body Mass Index and the required time of dressing. No correlations exist between age, qualifications and nurses' level of education and the time of dressing. Structural differences in the care of obese and non-obese residents appear by changes of, single activity sequences. The care of the obese residents is associated with increased time requirements and structurally differs from the care of the non-obese residents. This should lead to further research because it has implications for staffing in nursing homes.
abstract_id: PUBMED:16078966
Obesity in nursing homes: an escalating problem. Objectives: To estimate trends in the prevalence of obesity in nursing homes, to characterize the obese nursing home population, and to evaluate the extent to which estimates of the prevalence of obesity varied by facility and geographic location.
Design: Cross-sectional.
Setting: One thousand six hundred twenty-five nursing homes in Kansas, Maine, Mississippi, New York, and South Dakota from 1992 to 2002; 16,110 nursing homes in the United States in 2002.
Participants: Newly admitted residents between 1992 and 2002 (n=847,601) in selected states and 1,448,046 residents newly admitted to a U.S. nursing home in 2002 with height and weight documented on the Minimum Data Set (MDS) assessment.
Measurements: Data were from the Systematic Assessment of Geriatric Drug Use via Epidemiology database. Residents were classified as having a body mass index of less than 18.5 kg/m2, 18.5 to 24.9 kg/m2, 25.0 to 29.9 kg/m2, 30 to 34.9 kg/m2, or 35.0 kg/m2 or greater.
Results: Adjusting for sociodemographics, in Kansas, Maine, Mississippi, New York, and South Dakota, fewer than 15% of newly admitted residents were obese in 1992, rising to more than 25% in 2002. In U.S. nursing homes, the distribution of obese residents is not shared equally across facilities. Nearly 30% of residents with a BMI of 35 kg/m2 or greater are younger than 65, and a disproportionate percentage of obese residents are non-Hispanic black. Residents identified as obese had a higher likelihood of comorbid conditions (e.g., diabetes mellitus, arthritis, hypertension, depression, and allergies).
Conclusion: Increasing prevalence of obesity in nursing homes and substantial variation of obesity prevalence within facilities raise concerns about nursing home preparedness and access.
abstract_id: PUBMED:16837398
Weighing the relevant issues: obesity in nursing homes. The obesity epidemic has touched all segments of society and every health care sector including the nursing home industry. Understanding the issues related to the care of obese nursing home residents is important in light of this epidemic. Such issues include the difficulties related to access to nursing homes, the structural preparedness of nursing homes to care for obese patients, and the potential for staff injuries. Policies regarding strategies to improve nursing home access for obese patients, mitigate disparities in quality of care and health outcomes owing to obesity, and better prepare for the growing obesity epidemic are needed.
abstract_id: PUBMED:18519112
Personal care assistance needs of obese elders entering nursing homes. Objectives: To assess the personal care assistance needs of obese (body mass index [BMI] >or=30) elders (>or=65 years) entering nursing homes in Arkansas between 1999 and 2004.
Design: Retrospective cross-sectional study, using descriptive statistics and inferential statistics including multivariate logistic regression.
Setting: The United States.
Participants: Participants were 47,932 unduplicated elders (>or=65 years) upon first admission to a nursing home in Arkansas from 1999 through 2004.
Measurements: Clinical assessment of height and weight and personal care assistance needs reported on the initial Minimum Data Set (MDS) admission assessment.
Results: Obese elders had significantly (P < .01) greater odds (1.18 to 1.70) of needing extensive assistance to perform the selected activities of daily living (ADLs) compared to non-obese (BMI 16 to 29) elders.
Conclusions: Excessive body weight of newly admitted elderly nursing home residents is associated with greater use of personal care staff to perform selected ADLs. Other research has shown that increased ADL dependency is associated with increased cost, which coupled with the current finding suggests rising rates of obesity among the elderly and those entering nursing homes may strain the long-term care (LTC) system in terms of staffing and cost. Interventions to address these potential impacts should therefore be developed and tested.
Answer: Yes, obese residents are more likely to be admitted to nursing homes that have more deficiencies in care. A study found that nursing homes that admitted a higher proportion of morbidly obese residents were more likely to have more deficiencies, whether total or healthcare related. These nursing homes also had greater odds of having severe deficiencies or falling in the top quartile ranking of total deficiencies. Even after controlling for various factors such as the choice of facilities within the inspection region, resident characteristics, and facility covariates, the association between morbid obesity and admission to higher-deficiency nursing homes persisted (PUBMED:27225360). |
Instruction: Probability of mediastinal involvement in non-small-cell lung cancer: a statistical definition of the clinical target volume for 3-dimensional conformal radiotherapy?
Abstracts:
abstract_id: PUBMED:16226394
Probability of mediastinal involvement in non-small-cell lung cancer: a statistical definition of the clinical target volume for 3-dimensional conformal radiotherapy? Purpose: Conformal irradiation (3D-CRT) of non-small-cell lung carcinoma (NSCLC) is largely based on precise definition of the nodal clinical target volume (CTVn). A reduction of the number of nodal stations to be irradiated would facilitate tumor dose escalation. The aim of this study was to design a mathematical tool based on documented data to predict the risk of metastatic involvement for each nodal station.
Methods And Materials: We reviewed the large surgical series published in the literature to identify the main pretreatment parameters that modify the risk of nodal invasion. The probability of involvement for the 17 nodal stations described by the American Thoracic Society (ATS) was computed from all these publications. Starting with the primary site of the tumor as the main characteristic, we built a probabilistic tree for each nodal station representing the risk distribution as a function of each tumor feature. Statistical analysis used the inversion of probability trees method described by Weinstein and Feinberg. Validation of the software based on 134 patients from two different populations was performed by receiver operator characteristic (ROC) curves and multivariate logistic regression.
Results: Analysis of all of the various parameters of pretreatment staging relative to each level of the ATS map results in 20,000 different combinations. The first parameters included in the tree, depending on tumor site, were histologic classification, metastatic stage, nodal stage weighted as a function of the sensitivity and specificity of the diagnostic examination used (positron emission tomography scan, computed tomography scan), and tumor stage. Software is proposed to compute a predicted probability of involvement of each nodal station for any given clinical presentation. Double cross validation confirmed the methodology. A 10% cutoff point was calculated from ROC and logistic model giving the best prediction of mediastinal lymph node involvement.
Conclusion: To more accurately define the CTVn in NSCLC three-dimensional conformal radiotherapy, we propose a software that evaluates the risk of mediastinal lymph node involvement from easily accessible individual pretreatment parameters.
abstract_id: PUBMED:11797293
Estimation of the probability of mediastinal involvement: a statistical definition of the clinical target volume for 3-dimensional conformal radiotherapy in non-small-cell lung cancer? Purpose: Conformal irradiation of non-small cell lung carcinoma (NSCLC) is largely based on a precise definition of the nodal clinical target volume (CTVn). The reduction of the number of nodal stations to be irradiated would render tumor dose escalation more achievable. The aim of this work was to design an mathematical tool based on documented data, that would predict the risk of metastatic involvement for each nodal station.
Methods And Material: From the large surgical series published in the literature we looked at the main pre-treatment parameters that modify the risk of nodal invasion. The probability of involvement for the 17 nodal stations described by the American Thoracic Society (ATS) was computed from all these publications and then weighted according to the French epidemiological data. Starting from the primitive location of the tumour as the main characteristic, we built a probabilistic tree for each nodal station representing the risk distribution as a function of each tumor feature. From the statistical point of view, we used the inversion of probability trees method described by Weinstein and Feinberg.
Results: Taking into account all the different parameters of the pre-treatment staging relative to each level of the ATS map brings up to 20,000 different combinations. The first chosen parameters in the tree were, depending on the tumour location, the histological classification, the metastatic stage, the nodal stage weighted in function of the sensitivity and specificity of the diagnostic examination used (PET scan, CAT scan) and the tumoral stage. A software is proposed to compute a predicted probability of involvement of each nodal station for any given clinical presentation.
Conclusion: To better define the CTVn in NSCLC 3DRT, we propose a software that evaluates the mediastinal nodal involvement risk from easily accessible individual pre-treatment parameters.
abstract_id: PUBMED:9849380
Target volume definition for three-dimensional conformal radiation therapy of lung cancer. Three-dimensional conformal radiation therapy (3DCRT) is a mode of high precision radiotherapy which has the potential to improve the therapeutic ratio of radiation therapy for locally advanced non-small cell lung cancer. The preliminary clinical experience with 3DCRT has been promising and justifies further endeavour to refine its clinical application and ultimately test its role in randomized trials. There are several steps to be taken before 3DCRT evolves into an effective single modality for the treatment of lung cancer and before it is effectively integrated with chemotherapy. This article addresses core issues in the process of target volume definition for the application of 3DCRT technology to lung cancer. The International Commission on Radiation Units and Measurements Report no. 50 definitions of target volumes are used to identify the factors influencing target volumes in lung cancer. The rationale for applying 3DCRT to lung cancer is based on the frequency of failure to eradicate gross tumour with conventional approaches. It may therefore be appropriate to ignore subclinical or microscopic extensions when designing a clinical target volume, thereby restricting target volume size and allowing dose escalation. When the clinical target volume is expanded to a planning target volume, an optimized margin would result in homogeneous irradiation to the highest dose feasible within normal tissue constraints. To arrive at such optimized margins, multiple factors, including data acquisition, data transfer, patient movement, treatment reproducibility, and internal organ and target volume motion, must be considered. These factors may vary significantly depending on technology and techniques, and published quantitative analyses are no substitute for meticulous attention to detail and audit of performance.
abstract_id: PUBMED:11715317
Gross tumor volume and clinical target volume in radiotherapy: lung cancer Radiotherapy plays a major role as a curative treatment of various stages non-small cell lung cancers (NSCLC): as an exclusive treatment in curative attempt for patients with unresectable stages I and II; as a preoperative treatment, which is often associated with chemotherapy, for patients with surgically stage IIIA NSCLC in clinical trials; in association with chemotherapy for unresectable stages IIIA and IIIB patients. Currently, three-dimensional conformal radiotherapy allows for some dose escalation, increasing radiation quality. However, the high inherent conformality of this radiotherapy technique requires a rigorous approach and an optimal quality of the preparation throughout the treatment procedure and specifically of the accurate definition of the safety margins (GTV, CTV...). Different questions remain specific to lung cancers: 1) Despite the absence of randomized trials, the irradiated lymph nodes volume should be only, for the majority of the authors, the visible macroscopically involved lymph nodal regions. However, local control remains low and solid arguments suggest the poor local control is due to an insufficient delivered dose. Therefore the goal of radiotherapy, in this particular location, is to improve local control by increasing the dose until the maximum normal tissue tolerance is achieved, which essentially depends on the dose to the organs at risk (OAR) and specifically for the lung, the esophagus and the spinal cord. For this reason, the irradiated volume should be as tiny as possible, leading to not including the macroscopically uninvolved lymph nodes regions in prophylactic view in the target volume; 2) The lung is one of the rare organs with extensive motion within the body, making lung tumors difficult to treat. This particular point is not specifically considered in the GTV and CTV definitions but it is important enough to be noted; 3) When radiation therapy starts after a good response to chemotherapy, the residual tumoral volume should be defined as the target volume in place of the initial tumor volume. These different elements are discussed in this paper.
abstract_id: PUBMED:14570090
Three-dimensional conformal radiotherapy in the radical treatment of non-small cell lung cancer. Patients with locally advanced, inoperable, non-small cell lung cancer (NSCLC) have a poor prognosis mainly due to failure of local control after treatment with radical radiotherapy. This overview addresses the role of three-dimensional conformal radiotherapy (3D CRT) in trying to improve survival and reduce toxicity for patients with NSCLC. Current techniques of 3D CRT are analysed and discussed. They include imaging, target volume definition, optimisation of the delivery of radiotherapy through improvement of set-up inaccuracy and reduction of organ motion, dosimetry and implementation and verification issues; the overview concludes with the clinical results of 3D CRT.
abstract_id: PUBMED:23988437
Target volume margins for lung cancer: internal target volume/clinical target volume The aim of this study was to carry out a review of margins that should be used for the delineation of target volumes in lung cancer, with a focus on margins from gross tumour volume (GTV) to clinical target volume (CTV) and internal target volume (ITV) delineation. Our review was based on a PubMed literature search with, as a cornerstone, the 2010 European Organisation for Research and Treatment of Cancer (EORTC) recommandations by De Ruysscher et al. The keywords used for the search were: radiotherapy, lung cancer, clinical target volume, internal target volume. The relevant information was categorized under the following headings: gross tumour volume definition (GTV), CTV-GTV margin (first tumoural CTV then nodal CTV definition), in field versus elective nodal irradiation, metabolic imaging role through the input of the PET scanner for tumour target volume and limitations of PET-CT imaging for nodal target volume definition, postoperative radiotherapy target volume definition, delineation of target volumes after induction chemotherapy; then the internal target volume is specified as well as tumoural mobility for lung cancer and respiratory gating techniques. Finally, a chapter is dedicated to planning target volume definition and another to small cell lung cancer. For each heading, the most relevant and recent clinical trials and publications are mentioned.
abstract_id: PUBMED:12587391
Non-small-cell bronchial cancers: improvement of survival probability by conformal radiotherapy The conformal radiotherapy approach, three-dimensional conformal radiotherapy (3DCRT) and intensity-modulated radiotherapy (IMRT), is based on modern imaging modalities, efficient 3D treatment planning systems, sophisticated immobilization devices and demanding quality assurance and treatment verification. The main goal of conformal radiotherapy is to ensure a high dose distribution tailored to the limits of the target volume while reducing exposure of healthy tissues. These techniques would then allow a further dose escalation increasing local control and survival. Non-small cell lung cancer (NSCLC) is one of the most difficult malignant tumors to be treated. It combines geometrical difficulties due to respiratory motion, and number of low tolerance neighboring organs, and dosimetric difficulties because of the presence of huge inhomogeneities. This localization is an attractive and ambitious example for the evaluation of new techniques. However, the published clinical reports in the last years described very heterogeneous techniques and, in the absence of prospective randomized trials, it is somewhat difficult at present to evaluate the real benefits drawn from those conformal radiotherapy techniques. After reviewing the rationale for 3DCRT for NSCLC, this paper will describe the main studies of 3DCRT, in order to evaluate its impact on lung cancer treatment. Then, the current state-of-the-art of IMRT and the last technical and therapeutic innovations in NSCLC will be discussed.
abstract_id: PUBMED:17182373
Multimodalities imaging for target volume definition in radiotherapy Modern radiotherapy delivery nowadays relies on tridimensional, conformal techniques. The aim is to better target the tumor while decreasing the dose administered to surrounding normal tissues. Gold standard imaging modality remains computed-tomography (CT) scanner. However, the intrinsic lack of contrast between soft tissues leads to high variabilities in target definition. The risks are : a geographical miss with tumor underirradiation on the one hand, and a tumor overestimation with undue normal tissues irradiation on the other hand. Alternative imaging modalities like magnetic resonance imaging and functional positron emission tomography could theoretically overcome the lack of soft tissues contrast of CT. However, the fusion of the different imaging modalities images requires the use of sophisticated computer algorithms. We will briefly review them. We will then review the different clinical results reported with multi-modalities imaging for tumors of the head, neck, lung, esophagus, cervix and lymphomas. Finally, we will briefly give practical recommendations for multi-modality imaging in radiotherapy treatment planning process.
abstract_id: PUBMED:26865754
Dosimetric comparison of three-dimensional conformal radiotherapy, intensity modulated radiotherapy, and helical tomotherapy for lung stereotactic body radiotherapy. To compare the treatment plans generated with three-dimensional conformal radiation therapy (3DCRT), intensity modulated radiotherapy (IMRT), and helical tomotherapy (HT) for stereotactic body radiotherapy of lung, twenty patients with medically inoperable (early nonsmall cell lung cancer) were retrospectively reviewed for dosimetric evaluation of treatment delivery techniques (3DCRT, IMRT, and HT). A dose of 6 Gy per fraction in 8 fractions was prescribed to deliver 95% of the prescription dose to 95% volume of planning target volume (PTV). Plan quality was assessed using conformity index (CI) and homogeneity index (HI). Doses to critical organs were assessed. Mean CI with 3DCRT, IMRT, and HT was 1.19 (standard deviation [SD] 0.13), 1.18 (SD 0.11), and 1.08 (SD 0.04), respectively. Mean HI with 3DCRT, IMRT, and HT was 1.14 (SD 0.05), 1.08 (SD 0.02), and 1.07 (SD 0.04), respectively. Mean R50% values for 3DCRT, IMRT, and HT was 8.5 (SD 0.35), 7.04 (SD 0.45), and 5.43 (SD 0.29), respectively. D2cm was found superior with IMRT and HT. Significant sparing of critical organs can be achieved with highly conformal techniques (IMRT and HT) without compromising the PTV conformity and homogeneity.
abstract_id: PUBMED:12044245
Three-dimensional conformal radiotherapy for lung cancer: promises and pitfalls. Lung cancer represents a major source of morbidity and mortality. Despite recent advances, long-term survival remains elusive in most patients with locally advanced cancer. A substantial proportion of these patients experience a relapse at the original site of disease within the thorax, making radiotherapy an important component of treatment. Of several approaches investigated to improve the therapeutic ratio in radiotherapy, three-dimensional conformal radiotherapy holds the most promise, primarily because it allows higher doses to be delivered to the target by improved shaping of radiation portals and conformal avoidance of normal structures. The rationale and evolution of this technology and its potential pitfalls are presented in this review.
Answer: The probability of mediastinal involvement in non-small cell lung cancer (NSCLC) is a critical factor in defining the clinical target volume (CTV) for three-dimensional conformal radiotherapy (3D-CRT). Studies have aimed to create mathematical tools to predict the risk of metastatic involvement for each nodal station, which is essential for precise CTV definition and could facilitate tumor dose escalation by reducing the number of nodal stations that need to be irradiated (PUBMED:16226394; PUBMED:11797293).
The prediction models are based on a review of large surgical series and consider various pretreatment parameters that modify the risk of nodal invasion. These parameters include the primary site of the tumor, histologic classification, metastatic stage, nodal stage, and tumor stage, with the sensitivity and specificity of diagnostic examinations like PET and CT scans also factored in (PUBMED:16226394; PUBMED:11797293).
The proposed software computes the predicted probability of involvement for each of the 17 nodal stations described by the American Thoracic Society (ATS), taking into account up to 20,000 different combinations of pretreatment staging parameters (PUBMED:16226394; PUBMED:11797293). The methodology has been validated through receiver operator characteristic (ROC) curves and multivariate logistic regression, with a 10% cutoff point calculated to give the best prediction of mediastinal lymph node involvement (PUBMED:16226394).
In conclusion, to more accurately define the CTV in NSCLC 3D-CRT, software has been proposed that evaluates the risk of mediastinal lymph node involvement using individual pretreatment parameters that are easily accessible. This approach aims to improve the precision of radiotherapy planning and potentially enhance treatment outcomes by allowing for higher radiation doses to be delivered to the tumor while minimizing exposure to non-involved lymph nodes (PUBMED:16226394; PUBMED:11797293). |
Instruction: Is there any association between retroperitoneal lymphadenectomy and survival benefit in advanced stage epithelial ovarian carcinoma patients?
Abstracts:
abstract_id: PUBMED:32660444
The relationship between retroperitoneal lymphadenectomy and survival in advanced ovarian cancer patients. Background: Systematic retroperitoneal lymphadenectomy has been widely used in the surgical treatment of advanced ovarian cancer patients. Nevertheless, the corresponding therapeutic may not provide a survival benefit. The aim of this study was to assess the effect of systematic retroperitoneal lymphadenectomy in such patients.
Methods: Patients with advanced ovarian cancer (stage III-IV, according to the classification presented by the International Federation of Gynecology and Obstetrics) who were admitted and treated in Zhejiang Cancer Hospital from January 2004 to December 2013 were enrolled and reviewed retrospectively. All patients were optimally or suboptimally debulked (absent or residual tumor < 1 cm) and divided into two groups. Group A (no-lymphadenectomy group, n = 170): patients did not undergo lymph node resection; lymph nodes resection or biopsy were selective. Group B (n = 240): patients underwent systematic retroperitoneal lymphadenectomy.
Results: A total of 410 eligible patients were enrolled in the study. The patients' median age was 51 years old (range, 28-72 years old). The 5-year overall survival (OS) and 2-year progression-free survival (PFS) rates were 78 and 24% in the no-lymphadenectomy group and 76 and 26% in the lymphadenectomy group (P = 0.385 and 0.214, respectively). Subsequently, there was no significant difference in 5-year OS and 2-year PFS between the two groups stratified to histological types (serous type or non-serous type), the clinical evaluation of negative lymph nodes or with macroscopic peritoneal metastasis beyond pelvic (IIIB-IV). Multivariate Cox regression analysis indicated that systematic retroperitoneal lymphadenectomy was not a significant factor influencing the patients' survival. Patients in the lymphadenectomy group had a higher incidence of postoperative complications (incidence of infection treated with antibiotics was 21.7% vs. 12.9% [P = 0.027]; incidence of lymph cysts was 20.8% vs. 2.4% [P < 0.001]).
Conclusions: Our study showed that systematic retroperitoneal lymphadenectomy did not significantly improve survival of advanced ovarian cancer patients with residual tumor < 1 cm or absent after cytoreductive surgery, and were associated with a higher incidence of postoperative complications.
abstract_id: PUBMED:22568659
Is there any association between retroperitoneal lymphadenectomy and survival benefit in advanced stage epithelial ovarian carcinoma patients? Aim: The effect of systematic retroperitoneal lymphadenectomy (SRL) remains controversial in patients with advanced epithelial ovarian cancer (aEOC) who are optimally debulked.
Material And Methods: Demographic and clinicopathologic data were obtained from the Tokai Ovarian Tumor Study Group between 1986 and 2009. All patients were divided into two groups. Group A (n = 93): (i) patients did not undergo SRL; and (ii) lymph node exploration or sampling was optional. Group B (n = 87): patients underwent SRL. Survival curves were calculated using the Kaplan-Meier method. Differences in survival rates were analyzed using the log-rank test.
Results: All pT3-4 aEOC patients were optimally debulked (residual tumor <1 cm). The median age was 55 years (range: 18-84). The 5-year progression-free survival (PFS) rates of groups A and B were 46.7 and 41.9%, respectively (P = 0.658). In addition, the 5-year overall survival (OS) rates were 62.9 and 59.0%, respectively (P = 0.853). Subsequently, there was no significant difference in OS and PFS in the two groups stratified to histological type (serous or non-serous type). Furthermore, there was no significant difference in recurrence rates in retroperitoneal lymph nodes regardless of completion of lymphadenectomy.
Conclusion: Our data suggest that aEOC patients with optimal cytoreduction who underwent SRL did not show a significant improvement in survival irrespective of each histological type.
abstract_id: PUBMED:12875728
Effect of retroperitoneal lymphadenectomy on prognosis of patients with epithelial ovarian cancer. Objective: To evaluate prognostic factors which have an influence on overall survival and to assess the rational application of retroperitoneal lymphadenectomy in patients with epithelial ovarian cancer.
Methods: The data of 131 patients treated between January 1990 and December 1998 in Union Hospital and Tongji Hospital were analyzed retrospectively. Survival was calculated using the Kaplan-Meier method and comparisons were performed using Log-rank test. Independent prognostic factors were identified by the Cox proportional hazards regression model.
Results: Univariate analysis showed that age, general conditions, menopausal status, stage, pathological types, location of the tumor, residual tumor and retroperitoneal lymphadenectomy were prognostic factors. Multivariate analysis showed that age, stage, residual tumor, retroperitoneal lymphadenectomy and the number of courses of chemotherapy were the most important prognostic factors. The survival rate could not be improved through retroperitoneal lymphadenectomy in the patients in early stage, advanced stage with residual tumor > 2 cm or those with mucinous adenocarcinoma (P > 0.05). Among patients in advanced stage cancer with a residual tumor </= 2 cm, 5-year survival was 65% and 30% for patients who did and did not undergo lymphadenectomy, respectively (P < 0.01). Among patients with serous adenocarcinoma, 5-year survival was 61% and 31% for patients who did and did not undergo lymphadenectomy, respectively (P < 0.01).
Conclusions: The prognosis of the patients with epithelial ovarian cancer may be influenced by age, stage, residual tumor, retroperitoneal lymphadenectomy and the number of courses of chemotherapy. Although retroperitoneal lymphadenectomy could improve the survival rate, it should be carried out selectively.
abstract_id: PUBMED:38332458
Real-world study of lymphadenectomy in patients with advanced epithelial ovarian cancer. Background: The evidence on the role of retroperitoneal lymphadenectomy is limited to less common histology subtypes of epithelial advanced ovarian cancer.
Methods: This retrospective cohort study utilized data from the Surveillance, Epidemiology, and End Results Program from January 1, 2010, to December 31, 2019. Patients with stage III-IV epithelial ovarian cancer were included and divided into two groups based on whether they received retroperitoneal lymphadenectomy. The primary outcomes are overall survival (OS) and cause-specific survival (CSS).
Results: Among the 10 184 included patients, 5472 patients underwent debulking surgery with retroperitoneal lymphadenectomy, while 4712 patients only underwent debulking surgery. No differences were found in the baseline information between the two groups after propensity score matching. Retroperitoneal lymphadenectomy during debulking surgery was associated with improved 5-year OS (43.41% vs. 37.49%, p < 0.001) and 5-year CSS (46.43% vs. 41.79%, p < 0.001). Subgroup analysis further validate the retroperitoneal lymphadenectomy increased the 5-year OS and CSS in patients with high-grade serous cancer. Although the results were not validated in the less common ovarian cancer (including endometrial cancer, mucinous cancer, low-grade serous cancer, and clear cell cancer), the tendency showed patients with the above four subtypes may benefit from the lymphadenectomy which is restricted for small sample size after propensity score matching.
Conclusions: This study revealed that retroperitoneal lymphadenectomy could further improve the survival outcome during debulking surgery in patients with advanced epithelial ovarian cancer. The conclusion was affected by the histology subtypes of ovarian cancer and further studies are needed to validate the conclusion in less common ovarian cancer.
abstract_id: PUBMED:21482037
Retroperitoneal lymphadenectomy and survival of patients treated for an advanced ovarian cancer: the CARACO trial The standard management for advanced-stage epithelial ovarian cancer is optimum cytoreductive surgery followed by platinum based chemotherapy. However, retroperitoneal lymph node resection remains controversial. The multiple directions of the lymph drainage pathway in ovarian cancer have been recognized. The incidence and pattern of lymph node involvement depends on the extent of the disease and the histological type. Several published cohorts suggest the survival benefit of pelvic and para-aortic lymphadenectomy. A recent large randomized trial have demonstrated the potential benefit for surgical removal of bulky lymph nodes in term of progression-free survival but failed to show any overall survival benefit because of a critical methodology. Further randomised trials are needed to balance risks and benefits of systematic lymphadenectomy in advanced-stage disease. CARACO is a French ongoing trial, built to bring a reply to this important question. A huge effort for inclusion of the patients, and involving new teams, are mandatory.
abstract_id: PUBMED:11263195
A rational selection of retroperitoneal lymphadenectomy for advanced epithelial ovarian cancer Objective: To evaluate the rational application of retroperitoneal lymphadenectomy to advanced epithelial ovarian cancer.
Methods: 42 patients of advanced epithelial ovarian cancer were treated by retroperitoneal lymphadenectomy. Two groups were divided according to the residual disease post-operation. A: 26 patients with the residual disease < 2 cm; B: 16 patients > or = 2 cm. The regime of combined chemotherapy was the same in the two groups after operation. Clinical stage and pathologic grade showed no difference.
Results: The 5-year survival rate was 53.8% (14/26) in A and 12.5% (2/16) in B. There was a significant difference between the two groups (P < 0.001).
Conclusions: The survival rate could be greatly improved for advanced epithelial ovarian cancer through retroperitoneal lymphadenectomy when the residual disease was smaller than 2 cm. This procedure would not be performed if the residual disease was larger than or equal to 2 cm.
abstract_id: PUBMED:19995689
Management of retroperitoneal lymphadenectomy in advanced epithelial ovarian cancer The standard management for advanced-stage epithelial ovarian cancer is optimum cytoreductive surgery followed by aggressive cytotoxic chemotherapy. However retroperitoneal remains controversial. The multiple directions of the lymph drainage pathway in ovarian cancer have been recognized. The incidence and pattern of lymph node involvement depends on the extent of disease progression and the histological type. Thus, it is difficult to specify a single node as the sentinel node. In this chapter, we review and discuss the actual benefits of lymph node dissection in patients with ovarian cancer, analysing previously reported and ongoing trials. A recent large randomized trial in patients with advanced ovarian cancer revealed that systemic lymphadenectomy had no impact on survival compared with removing only macroscopic lymph nodes but improves progression-free survival significantly. Further studies are needed to balance risks and benefits of systematic lymphadenectomy in advanced-stage disease.
abstract_id: PUBMED:12783692
Effect of retroperitoneal lymphadenectomy on survival of patients with epithelial ovarian cancer Objective: The purpose of this study was to determine prognostic factors that have an impact on overall survival and to assess the rational application of retroperitoneal lymphadenectomy in patients with epithelial ovarian cancer.
Methods: A retrospective review was performed of 131 patients treated between Jan.1990 and Dec.1998 in Union Hospital and Tongji Hospital. Survival was calculated by Kaplan-Meier method and comparison was performed using Log-rank test. Independent prognostic factors were identified by the COX proportional hazards regression model.
Results: Multivariate analysis showed that the age, stage, residual tumor, retroperitoneal lymphadenectomy and the number of courses of chemotherapy were the most important prognostic factors. The overall 5-year survival was 66% and 41% for patients who did and did not undergo lymphadenectomy, respectively (P < 0.01). But the survival rate could not be improved through retroperitoneal lymphadenectomy in the patients with early stage, advanced stage whose residual tumor > 2 cm and those with mucinous adenocarcinoma (P > 0.05). Among patients with advanced stage whose residual tumor < or = 2 cm, 5-year survival was 65% and 30% for patients who did and did not undergo lymphadenectomy, respectively (P < 0.01). Among patients with serous adenocarcinoma, 5-year survival was 61% and 31% for patients who did and did not undergo lymphadenectomy, respectively (P < 0.01).
Conclusions: The prognosis of the patients with epithelial ovarian cancer may be influenced by age, stage, residual tumor, retroperitoneal lymphadenectomy and the number of courses of chemotherapy. Although retroperitoneal lymphadenectomy could improve the survival rate, it should be carried out selectively.
abstract_id: PUBMED:37667358
Influence of lymphadenectomy on survival and recurrence in patients with early-stage epithelial ovarian cancer: a meta-analysis. Background: This meta-analysis aimed to evaluate the effectiveness of lymphadenectomy on survival and recurrence in patients with early-stage epithelial ovarian cancer (eEOC).
Methods: Relevant studies were searched from four online databases. Hazard ratios (HRs) with 95% confidence intervals (CIs) or risk ratios (RRs) with 95% CIs were used to evaluate the effects of lymphadenectomy on overall survival (OS), progression-free survival (PFS), and recurrence rates. A subgroup analysis was performed to explore the sources of heterogeneity, followed by sensitivity and publication bias assessments.
Results: Fourteen articles involving 22,178 subjects were included. Meta-analysis revealed that lymphadenectomy was significantly associated with improved OS (HR = 0.72; 95% CI:0.61, 0.84; P < 0.001), improved PFS (HR = 0.74; 95% CI: 0.67, 0.80; P < 0.001), and reduced recurrence rates (RR = 0.72; 95% CI: 0.60, 0.85; P < 0.001). Subgroup analysis showed that factors including area, histology, and source of the control group were significantly related to improved OS and PFS in patients with eEOC. Sensitivity analysis showed that the combined results were stable and reliable, and no significant publication bias was observed.
Conclusions: Patients with eEOC can benefit from lymphadenectomy, with improved survival outcomes (OS and PFS) and a lower recurrence rate.
abstract_id: PUBMED:30149960
Association of lymphadenectomy and survival in epithelial ovarian cancer. Purpose: Lymph node metastasis has a significant contribution to the prognosis of epithelial ovarian cancer but the role of lymph node dissection in treatment is not clear. In this study, we aimed to retrospectively determine the effect of the number and localization of lymph nodes removed and the number of metastatic lymph nodes on survival.
Methods: In this study, we retrospectively reviewed the data of 378 patients (210 patients with lymph node dissection and 168 patients with no dissection) who underwent primary surgery between 2004 and 2014 in various centers with epithelial ovarian cancer diagnosis and followed up in our medical oncology clinic. Demographic and histopathologic features, stage, Ca 125 levels, chemotherapy responses of these patients were examined and survival analyzes were performed.
Results: The median age of the patients was 52 years (range 16-89) and median follow-up duration was 39 months (range 1-146). During the analysis, 156 patients (41%) died and 222 patients (59%) were alive. Patients who underwent lymphadenectomy had significantly improved progression free survival (PFS) (18 vs 31 months, P < 0.05) and overall survival (OS) (57 vs 92 months, P < 0.05). OS was longer in patients with >10 lymph nodes removed compared to patients with 1-10 lymph nodes removed (P = 0.005). Survival was found to be longer in patients with pelvic and paraaortic lymph node dissection compared to patients with only pelvic lymph node dissection (P < 0.05). Patients in stage I-II had no difference in PFS and OS. Patients in stage III-IV had no difference in PFS but there was a significant difference in OS (P = 0.02).
Conclusion: It may be a therapeutic effect of lymphadenectomy in advanced stage ovarian cancer. The number of lymph nodes removed and the removal of the paraaortic lymph nodes may also contribute to the treatment.
Answer: The association between retroperitoneal lymphadenectomy and survival benefit in advanced stage epithelial ovarian carcinoma patients is a subject of ongoing debate and research. Several studies have investigated this relationship with varying results:
- Some studies have found no significant difference in overall survival (OS) and progression-free survival (PFS) between patients who underwent systematic retroperitoneal lymphadenectomy and those who did not, suggesting that lymphadenectomy may not provide a survival benefit in advanced ovarian cancer patients who are optimally debulked (PUBMED:32660444, PUBMED:22568659).
- Conversely, other studies have indicated that retroperitoneal lymphadenectomy could improve survival rates, particularly in patients with advanced stage cancer with a residual tumor ≤ 2 cm and in patients with serous adenocarcinoma (PUBMED:12875728, PUBMED:12783692, PUBMED:11263195).
- A real-world study using data from the Surveillance, Epidemiology, and End Results Program found that retroperitoneal lymphadenectomy during debulking surgery was associated with improved 5-year OS and cause-specific survival (CSS) in patients with advanced epithelial ovarian cancer, particularly in those with high-grade serous cancer (PUBMED:38332458).
- The CARACO trial and other ongoing trials are mentioned as efforts to further clarify the potential benefits of lymphadenectomy in advanced-stage disease, acknowledging that previous studies have had methodological limitations and that more research is needed to balance risks and benefits (PUBMED:21482037).
- A meta-analysis on early-stage epithelial ovarian cancer suggested that lymphadenectomy was associated with improved OS, PFS, and reduced recurrence rates, although this pertains to early-stage rather than advanced-stage disease (PUBMED:37667358).
- Another study found that patients who underwent lymphadenectomy had significantly improved PFS and OS, and that the number of lymph nodes removed and the removal of paraaortic lymph nodes may contribute to treatment (PUBMED:30149960).
In summary, the association between retroperitoneal lymphadenectomy and survival benefit in advanced stage epithelial ovarian carcinoma patients is complex and may depend on various factors, including the extent of disease, histological type, and the amount of residual tumor after surgery. While some studies suggest no significant survival benefit, others indicate potential improvements in survival outcomes, particularly in certain subgroups of patients. |
Instruction: Mismatch negativity: a tool for studying morphosyntactic processing?
Abstracts:
abstract_id: PUBMED:20430695
Mismatch negativity: a tool for studying morphosyntactic processing? Objective: Mismatch negativity (MMN) was originally shown in a passive auditory oddball paradigm to be generated by any acoustical change. More recently, it has been applied to the study of higher order linguistic levels including the morphosyntactic level in spoken language comprehension. In this study, we present two MMN experiments to determine whether morphosyntactic features are involved in the representations underlying the morphosyntactic processing.
Methods: We reported two MMN experiments in passive auditory oddball paradigm with pairs of French words, a pronoun and a verb, differing in agreement grammaticality. These two experiments differed in the number of morphosyntactic features producing agreement violations, i.e. either of person and number features or of person feature.
Results: We observed no effect of grammaticality on the MMN response for these two experiments.
Conclusions: Our studies highlight the difficulties encountered in studying morphosyntactic level with the passive auditory oddball paradigm.
Significance: The reasons for our inability to replicate previous studies are presented, and methodological changes in the passive auditory oddball paradigm are proposed to better tap into the morphosyntactic level.
abstract_id: PUBMED:26550957
The role of nonverbal working memory in morphosyntactic processing by school-aged monolingual and bilingual children. The current study examined the relationship between nonverbal working memory and morphosyntactic processing in monolingual native speakers of English and bilingual speakers of English and Spanish. We tested 42 monolingual children and 42 bilingual children between the ages of 8 and 10years matched on age and nonverbal IQ. Children were administered an auditory Grammaticality Judgment task in English to measure morphosyntactic processing and a visual N-Back task and Corsi Blocks task to measure nonverbal working memory capacity. Analyses revealed that monolinguals were more sensitive to English morphosyntactic information than bilinguals, but the groups did not differ in reaction times or response bias. Furthermore, higher nonverbal working memory capacity was associated with greater sensitivity to morphosyntactic violations in bilinguals but not in monolinguals. The findings suggest that nonverbal working memory skills link more tightly to syntactic processing in populations with lower levels of language knowledge.
abstract_id: PUBMED:30519208
Isolating the Effects of Word's Emotional Valence on Subsequent Morphosyntactic Processing: An Event-Related Brain Potentials Study. Emotional information significantly affects cognitive processes, as proved by research in the past decades. Recently, emotional effects on language comprehension and, particularly, syntactic processing, have been reported. However, more research is needed, as this is yet very scarce. The present paper focuses on the effects of emotion-laden linguistic material (words) on subsequent morphosyntactic processing, by using Event-Related brain Potentials (ERP). The main aim of this paper is to clarify whether the effects previously reported remain when positive, negative and neutral stimuli are equated in arousal levels and whether they remain long-lasting. In addition, we aimed at testing whether these effects vary as a function of the task performed with the emotion-laden words, to assess their robustness across variations in attention and cognitive load during the processing of the emotional words. In this regard, two different tasks were performed: a reading aloud (RA) task, where participants simply read aloud the words, written in black on white background, and an Emotional Stroop (ES) task, where participants named the colors in which the emotional words were shown. After these words, neutral sentences followed, that had to be evaluated for grammaticality while recording ERPs (50% containing a morphosyntactic anomaly). ERP analyses showed main effects of valence across tasks on the two components reflecting morphosyntactic processing: The Left Anterior Negativity (LAN) is increased by previous emotional words (more by negative than positive) relative to neutral ones, while the P600 is similarly decreased. No interactions between task and valence were found. As a result, an emotion-laden word preceding a sentence can modulate the syntactic processing of the latter, independently of the arousal and processing conditions of the emotional word.
abstract_id: PUBMED:37473640
Processing of complex morphosyntactic structures in French: ERP evidence from native speakers. This event-related brain potentials (ERP) study investigated the neurocognitive mechanisms underlying the auditory processing of verbal complexity in French illustrated by the prescriptive present subjunctive mode. Using a violation paradigm, ERPs of 32 French native speakers were continuously recorded while they listened to 200 ecological French sentences selected from the INTEFRA oral corpus (2006). Participants performed an offline acceptability judgement task on each sentence, half of which contained a correct present subjunctive verbal agreement (reçoive) and the other half an incorrect present indicative one (peut). Critically, the present subjunctive mode was triggered either by verbs (Ma mère desire que j'apprenneMy mother wants me to learn) or by subordinating conjunctions (Pour qu'elle reçoiveSo that she receives). We found a delayed anterior negativity (AN) due to the length of the verbal forms and a P600 that were larger for incongruent than for congruent verbal agreement in the same time window. While the two effects were left lateralized for subordinating conjunctions, they were right lateralized for both structures with a larger effect for subordinating conjunctions than for verbs. Moreover, our data revealed that the AN/P600 pattern was larger in late position than in early ones. Taken together, these results suggest that morphosyntactic complexity conveyed by the French subjunctive involves at least two neurocognitive processes thought to support an initial morphosyntactic analysis (AN) and a syntactic revision and repair (posterior P600). These two processes may be modulated as a function of both the element (i.e., subordinating conjunction vs verb) that triggers the subjunctive mode and the moment at which this element is used while sentence processing.
abstract_id: PUBMED:35911026
The Role of Working Memory, Short-Term Memory, Speed of Processing, Education, and Locality in Verb-Related Morphosyntactic Production: Evidence From Greek. This study investigates the relationship between verb-related morphosyntactic production (VRMP) and locality (i.e., critical cue being adjacent to the target or not), verbal Working Memory (vWM), nonverbal/visuospatial WM (nvWM), verbal short-term memory (vSTM), nonverbal/visuospatial STM (nvSTM), speed of processing, and education. Eighty healthy middle-aged and older Greek-speaking participants were administered a sentence completion task tapping into production of subject-verb Agreement, Time Reference/Tense, and grammatical Aspect in local and nonlocal configurations, and cognitive tasks tapping into vSTM, nvSTM, vWM, nvWM, and speed of processing. Aspect elicited worse performance than Time Reference and Agreement, and Time Reference elicited worse performance than Agreement. There were main effects of vSTM, vWM, education, and locality: the greater the participants' vSTM/vWM capacity, and the higher their educational level, the better their VRMP; nonlocal configurations elicited worse performance on VRMP than local configurations. Moreover, vWM affected Aspect and Time Reference/Tense more than Agreement, and education affected VRMP more in local than in nonlocal configurations. Lastly, locality affected Agreement and Aspect (with nonlocal configurations eliciting more agreement and aspect errors than local configurations) but not Time Reference. That vSTM/vWM (but not nvSTM/nvWM) were found to subserve VRMP suggests that VRMP is predominantly supported by domain-specific, not by domain-general, memory resources. The main effects of vWM and vSTM suggest that both the processing and storage components of WM are relevant to VRMP. That vWM (but not vSTM) interacts with production of Aspect, Time Reference, and Agreement suggests that Aspect and Time Reference are computationally more demanding than Agreement. These findings are consistent with earlier findings that, in individuals with aphasia, vWM interacts with production of Aspect, Time Reference, and Agreement. The differential effect of education on VRMP in local vs. nonlocal configurations could be accounted for by assuming that education is a proxy for an assumed procedural memory system that is sensitive to frequency patterns in language and better supports VRMP in more frequent than in less frequent configurations. In the same vein, the interaction between locality and the three morphosyntactic categories might reflect the statistical distribution of local vs. nonlocal Aspect, Agreement, and Time Reference/Tense in Greek.
abstract_id: PUBMED:32540159
Language comprehension in the social brain: Electrophysiological brain signals of social presence effects during syntactic and semantic sentence processing. Although, evolutionarily, language emerged predominantly for social purposes, much has yet to be uncovered regarding how language processing is affected by social context. Social presence research studies the ways in which the presence of a conspecific affects processing, but has yet to be thoroughly applied to language processes. The principal aim of this study was to see how syntactic and semantic language processing might be subject to mere social presence effects by studying Event-Related brain Potentials (ERP). In a sentence correctness task, participants read sentences with a semantic or syntactic anomaly while being either alone or in the mere presence of a confederate. Compared to the alone condition, the presence condition was associated with an enhanced N400 component and a more centro-posterior LAN component (interpreted as an N400). The results seem to imply a boosting of heuristic language processing strategies, proper of lexico-semantic operations, which actually entails a shift in the strategy to process morphosyntactic violations, typically based on algorithmic or rule-based strategies. The effects cannot be related to increased arousal levels. The apparent enhancement of the activity in the precuneus while in presence of another person suggests that the effects conceivably relate to social cognitive and attentional factors. The present results suggest that understanding language comprehension would not be complete without considering the impact of social presence effects, inherent to the most natural and fundamental communicative scenarios.
abstract_id: PUBMED:34975607
First Event-Related Potentials Evidence of Auditory Morphosyntactic Processing in a Subject-Object-Verb Nominative-Accusative Language (Farsi). While most studies on neural signals of online language processing have focused on a few-usually western-subject-verb-object (SVO) languages, corresponding knowledge on subject-object-verb (SOV) languages is scarce. Here we studied Farsi, a language with canonical SOV word order. Because we were interested in the consequences of second-language acquisition, we compared monolingual native Farsi speakers and equally proficient bilinguals who had learned Farsi only after entering primary school. We analyzed event-related potentials (ERPs) to correct and morphosyntactically incorrect sentence-final syllables in a sentence correctness judgment task. Incorrect syllables elicited a late posterior positivity at 500-700 ms after the final syllable, resembling the P600 component, as previously observed for syntactic violations at sentence-middle positions in SVO languages. There was no sign of a left anterior negativity (LAN) preceding the P600. Additionally, we provide evidence for a real-time discrimination of phonological categories associated with morphosyntactic manipulations (between 35 and 135 ms), manifesting the instantaneous neural response to unexpected perturbations. The L2 Farsi speakers were indistinguishable from L1 speakers in terms of performance and neural signals of syntactic violations, indicating that exposure to a second language at school entry may results in native-like performance and neural correlates. In nonnative (but not native) speakers verbal working memory capacity correlated with the late posterior positivity and performance accuracy. Hence, this first ERP study of morphosyntactic violations in a spoken SOV nominative-accusative language demonstrates ERP effects in response to morphosyntactic violations and the involvement of executive functions in non-native speakers in computations of subject-verb agreement.
abstract_id: PUBMED:27259194
The dissociability of lexical retrieval and morphosyntactic processes for nouns and verbs: A functional and anatomoclinical study. Nouns and verbs can dissociate following brain damage, at both lexical retrieval and morphosyntactic processing levels. In order to document the range and the neural underpinnings of behavioral dissociations, twelve aphasics with disproportionate difficulty naming objects or actions were asked to apply phonologically identical morphosyntactic transformations to nouns and verbs. Two subjects with poor object naming and 2/10 with poor action naming made no morphosyntactic errors at all. Six of 10 subjects with poor action naming showed disproportionate or no morphosyntactic difficulties for verbs. Morphological errors on nouns and verbs correlated at the group level, but in individual cases a selective impairment of verb morphology was observed. Poor object and action naming with spared morphosyntax were associated with non-overlapping lesions (inferior occipitotemporal and fronto-temporal, respectively). Poor verb morphosyntax was observed with frontal-temporal lesions affecting white matter tracts deep to the insula, possibly disrupting the interaction of nodes in a fronto-temporal network.
abstract_id: PUBMED:19664766
Neural correlates of morphosyntactic and verb-argument structure processing: an EfMRI study. In the current study, we investigated the processing of ungrammatical sentences containing morphosyntactic and verb-argument structure violations in an fMRI paradigm. In the morphosyntactic condition, participants listened to German perfect tense sentences with morphosyntactic violations which were neither related to finiteness nor to agreement but which were based on a syntactic feature mismatch between two verbal elements. When compared to correct sentences, morphosyntactically ungrammatical sentences elicited an increase in brain activity in the left middle to posterior superior temporal gyrus (STG). In the verb-argument structure condition, sentences were either correct or contained an intransitive verb with an unlicensed direct object. Ungrammatical sentences of this type elicited brain activations in the left inferior frontal gyrus (IFG) (BA 44). Thus, we found evidence for different brain activity patterns as a function of violation type. The left posterior STG, an area known to support lexical-syntactic integration was strongly implicated in morphosyntactic processing whereas the left dorsal IFG (BA 44) was seen to be involved in the processing of verb-argument structure. Our results suggest that lexical, syntactic and semantic features of verbal stimuli interact in a complex fashion during language comprehension.
abstract_id: PUBMED:36840629
Methodologies for assessing morphosyntactic ability in people with Alzheimer's disease. Background: The detection and description of language impairments in neurodegenerative diseases like Alzheimer's Disease (AD) play an important role in research, clinical diagnosis and intervention. Various methodological protocols have been implemented for the assessment of morphosyntactic abilities in AD; narrative discourse elicitation tasks and structured experimental tasks for production, offline and online structured experimental tasks for comprehension. Very few studies implement and compare different methodological protocols; thus, little is known about the advantages and disadvantages of each methodology.
Aims: To discuss and compare the main behavioral methodological approaches and tasks that have been used in psycholinguistic research to assess different aspects of morphosyntactic production and comprehension in individuals with AD at the word and sentence levels.
Methods: A narrative review was conducted through searches in the scientific databases Google Scholar, Scopus, Science Direct, MITCogNet, PubMed. Only studies written in English, that reported quantitative data and were published in peer-reviewed journals were considered with respect to their methodological protocol. Moreover, we considered studies that reported research on all stages of the disease and we included only studies that also reported results of a healthy control group. Studies that implemented standardized assessment tools were not considered in this review.
Outcomes & Results: The main narrative discourse elicitation tasks implemented for the assessment of morphosyntactic production include interviews, picture-description and story narration, whereas the main structured experimental tasks include sentence completion, constrained sentence production, sentence repetition and naming. Morphosyntactic comprehension in AD has been assessed with the use of structured experimental tasks, both offline (sentence-picture matching, grammaticality judgment) and online (cross-modal naming,speeded sentence acceptability judgment, auditory moving window, word detection, reading). For each task we considered studies that reported results from different morphosyntactic structures and phenomena in as many different languages as possible.
Conclusions & Implications: Our review revealed strengths and weaknesses of these methods but also directions for future research. Narrative discourse elicitation tasks as well as structured experimental tasks have been used in a variety of languages, and have uncovered preserved morphosyntactic production but also deficits in people with AD. A combination of narrative discourse elicitation and structured production tasks for the assessment of the same morphosyntactic structure has been rarely used. Regarding comprehension, offline tasks have been implemented in various languages, whereas online tasks have been mainly used in English. Offline and online experimental paradigms have often produced contradictory results even within the same study. The discrepancy between the two paradigms has been attributed to the different working memory demands they impose to the comprehender or to the different parsing processes they tap. Strengths and shortcomings of each methodology are summarized in the paper, and comparisons between different tasks are attempted when this is possible. Thus, the paper may serve as a methodological guide for the study of morphosyntax in AD and possibly in other neurodegenerative diseases.
What This Paper Adds: What is already known on this subject For the assessment of morphosyntactic abilities in AD, various methodological paradigms have been implemented: narrative discourse elicitation tasks and structured experimental tasks for production, and offline and online structured experimental tasks for comprehension. Very few studies implement and compare different methodological protocols; thus, little is known about the advantages and disadvantages of each methodology. What this paper adds to existing knowledge The paper presents an overview of methodologies that have been used to assess morphosyntactic production and comprehension of people with AD at the word and sentence levels. The paper summarizes the strengths and shortcomings of each methodology, providing both the researcher and the clinician with some directions in their endeavour of investigating language in AD. Also, the paper highlights the need for further research that will implement carefully scrutinized tasks from various experimental paradigms and will explore distinct aspects of the AD patients' morphosyntactic abilities in typologically different languages. What are the potential or actual clinical implications of this work? The paper may serve as a reference point for (psycho-)linguists who wish to study morphosyntactic abilities in AD, and for speech and language therapists who might need to apply morphosyntactic protocols to their patients in order to assess them or design appropriate therapeutic interventions for production and comprehension deficits.
Answer: Mismatch negativity (MMN) has been explored as a potential tool for studying morphosyntactic processing in language comprehension. MMN is an event-related potential (ERP) component that is elicited in response to any acoustical change in a passive auditory oddball paradigm. It has been applied to higher order linguistic levels, including morphosyntactic processing (PUBMED:20430695). However, the effectiveness of MMN in this domain has been met with challenges.
In one study, two MMN experiments were conducted using pairs of French words that differed in agreement grammaticality. The results showed no effect of grammaticality on the MMN response, highlighting difficulties in studying the morphosyntactic level with the passive auditory oddball paradigm (PUBMED:20430695). The authors suggested that methodological changes might be necessary to better tap into morphosyntactic processing using MMN.
Despite these challenges, MMN and other ERP components have been used to investigate morphosyntactic processing in various contexts. For instance, ERP studies have shown that emotional valence of words can affect subsequent morphosyntactic processing (PUBMED:30519208), and complex morphosyntactic structures have been examined in native speakers of French, revealing neurocognitive mechanisms underlying the processing of verbal complexity (PUBMED:37473640). Additionally, the role of working memory in morphosyntactic processing has been studied, with findings suggesting that nonverbal working memory skills are more closely linked to syntactic processing in populations with lower levels of language knowledge (PUBMED:26550957).
Overall, while MMN has potential as a tool for studying morphosyntactic processing, its application in this area is not straightforward and may require careful consideration of experimental design and methodology. Other ERP components and methodologies may also provide valuable insights into the neural correlates of morphosyntactic processing (PUBMED:36840629). |
Instruction: Rheumatology outpatient nurse clinics: a valuable addition?
Abstracts:
abstract_id: PUBMED:29484828
Measuring person-centred care in nurse-led outpatient rheumatology clinics. Background: Measurement of person-centred care (PCC) outcomes is underdeveloped owing to the complexity of the concept and lack of conceptual clarity. A framework conceptualizing outpatient PCC in rheumatology nurse-led clinics has therefore been suggested and operationalized into the PCC instrument for outpatient care in rheumatology (PCCoc/rheum).
Objective: The aim of the present study was to test the extent to which the PCCoc/rheum represents the underpinning conceptual outpatient PCC framework, and to assess its measurement properties as applied in nurse-led outpatient rheumatology clinics.
Methods: The 24-item PCCoc/rheum was administered to 343 persons with rheumatoid arthritis from six nurse-led outpatient rheumatology clinics. Its measurement properties were tested by Rasch measurement theory.
Results: Ninety-two per cent of individuals (n = 316) answered the PCCoc/rheum. Items successfully operationalized a quantitative continuum from lower to higher degrees of perceived PCC. Model fit was generally good, including lack of differential item functioning (DIF), and the PCCoc/rheum was able to separate individuals with a reliability of 0.88. The four response categories worked as intended, with the exception of one item. Item ordering provided general empirical support of a priori expectations, with the exception of three items that were omitted owing to multidimensionality, dysfunctional response categories and unexpected ordering. The 21-item PCCoc/rheum showed good accordance with the conceptual framework, improved fit, functioning response categories and no DIF, and its reliability was 0.86.
Conclusion: We found general support for the appropriateness of the PCCoc/rheum as an outcome measure of patient-perceived PCC in nurse-led outpatient rheumatology clinics. While in need of further testing, the 21-item PCCoc/rheum has the potential to evaluate outpatient PCC from a patient perspective.
abstract_id: PUBMED:29417713
Person-centred care in nurse-led outpatient rheumatology clinics: Conceptualization and initial development of a measurement instrument. Background: Person-centred care (PCC) is considered a key component of effective illness management and high-quality care. However, the PCC concept is underdeveloped in outpatient care. In rheumatology, PCC is considered an unmet need and its further development and evaluation is of high priority. The aim of the present study was to conceptualize and operationalize PCC, in order to develop an instrument for measuring patient-perceived PCC in nurse-led outpatient rheumatology clinics.
Methods: A conceptual outpatient PCC framework was developed, based on the experiences of people with rheumatoid arthritis (RA), person-centredness principles and existing PCC frameworks. The resulting framework was operationalized into the PCC instrument for outpatient care in rheumatology (PCCoc/rheum), which was tested for acceptability and content validity among 50 individuals with RA attending a nurse-led outpatient clinic.
Results: The conceptual framework focuses on the meeting between the person with RA and the nurse, and comprises five interrelated domains: social environment, personalization, shared decision-making, empowerment and communication. Operationalization of the domains into a pool of items generated a preliminary PCCoc/rheum version, which was completed in a mean (standard deviation) of 5.3 (2.5) min. Respondents found items easy to understand (77%) and relevant (93%). The Content Validity Index of the PCCoc/rheum was 0.94 (item level range, 0.87-1.0). About 80% of respondents considered some items redundant. Based on these results, the PCCoc/rheum was revised into a 24-item questionnaire.
Conclusions: A conceptual outpatient PCC framework and a 24-item questionnaire intended to measure PCC in nurse-led outpatient rheumatology clinics were developed. The extent to which the questionnaire represents a measurement instrument remains to be tested.
abstract_id: PUBMED:11409670
Rheumatology outpatient nurse clinics: a valuable addition? Objectives: "Transmural rheumatology nurse clinics," where nursing care is provided under the joint responsibility of a home care organization and a hospital, were recently introduced into Dutch health care. This article gives insight into outcomes of the transmural rheumatology nurse clinics.
Methods: Patients with rheumatologic conditions who attended a transmural nurse clinic, in addition to receiving regular care, were compared with patients with rheumatologic conditions who received regular care only. The main outcome measures were the need for rheumatology-related information, the use of aids and adaptations, the use of health care services, and daily functioning.
Results: Attending a transmural nurse clinic does not influence patients' need for information, the application of practical aids and adaptations, or daily functioning. However, attending a transmural nurse clinic does result in more contacts with rheumatologists and occupational therapists.
Conclusions: Attending transmural nurse clinics does not result in major differences in outcomes compared with regular care. Further studies are needed to appreciate the long-term effects of transmural nurse clinics.
abstract_id: PUBMED:34645352
Examining the impact of video-based outpatient education on patient demand for a rheumatology CNS service. Background: Patient demand for education and access to the clinical nurse specialists (CNSs) during the rheumatology clinic at one hospital in Ireland was increasing. Alternative methods of providing patient education had to be examined.
Aims: To explore the efficacy of video-based outpatient education, and its impact on demand for the CNSs.
Methods: A video was produced to play in a rheumatology outpatient department. A representative sample of 240 patients (120 non-exposed and 120 exposed to the video) attending the clinic was selected to complete a questionnaire exploring the effect of the video. Data were analysed using chi-square tests with Yates' continuity correction.
Findings: Demand for the CNSs was six times higher in the non-exposed group compared with the exposed group (non-exposed: 25%, exposed: 4.8%) (χ2=15.7, P=0.00007), representing a significant decrease in resource demand.
Conclusion: High-quality educational videos on view in the rheumatology outpatient department provide patients with information sufficient to meet their educational needs, thus releasing CNS resources.
abstract_id: PUBMED:22745012
The experience of care at nurse-led rheumatology clinics. Objective: To describe how people with rheumatoid arthritis (RA) experience the care provided by Swedish nurse-led rheumatology outpatient clinics.
Methods: Eighteen adult people with a diagnosis of RA who had had at least three documented contact sessions with a nurse-led clinic were interviewed. The interviews were analysed with qualitative content analysis.
Results: Care was expressed in three categories: social environment, professional approach and value-adding measures. A social environment including a warm encounter, a familial atmosphere and pleasant premises was desired and contributed to a positive experience of care. The nurses' professional approach was experienced as empathy, knowledge and skill, as well as support. The care was described as person centred and competent, as it was based on the individual's unique experience of his/her disease and needs. The nurses' specialist knowledge of rheumatology and rheumatology care was highly valued. The offered care represented added value for the participants, instilling security, trust, hope and confidence. It was perceived as facilitating daily life and creating positive emotions. The nurse-led clinics were reported to be easily accessible and provided continuity of the care. These features were presented as fundamental guarantees for health care safety.
Conclusion: The experiences emphasized the need for a holistic approach to care. In this process, the organization of care and the role and skills of the nurse should be focused on the individual's needs and perspectives. The social environment, professional approach and value-adding measures are particularly relevant for optimal care at nurse-led rheumatology outpatient clinics.
abstract_id: PUBMED:1489682
A nurse practitioner rheumatology clinic. The author describes the background to the establishment of nurse practitioner clinics at Leeds General Infirmary, which combine the skills of nurses with medical input to provide care for rheumatology patients. The model used is based on patient knowledge, and the author stresses that one of the primary functions of the nurse practitioner is that of educator. She also highlights the importance of research within the nurse practitioner role and concludes that nurses can deliver high quality care from nurse-led rheumatology clinics.
abstract_id: PUBMED:31682204
Tele-Rheumatology to Regional Hospital Outpatient Clinics: Patient Perspectives on a New Model of Care. Background: Telehealth has the potential to improve access to specialist rheumatology services. The timely and appropriate delivery of care to those living with rheumatological diseases is crucial to ensuring excellent long-term outcomes. Introduction: The outcomes of a tele-rheumatology service delivered to regional hospital outpatient clinics were evaluated with patient perspectives and acceptability analyzed. Materials and Methods: A tele-rheumatology clinic was commenced in Australia from a metropolitan hospital to five regional clinics. The model of care included a trained nurse at the spoke site linked to a rheumatologist from the hospital hub site for follow-up consultations of stable review patients using videoconferencing. Surveys assessing perspectives on the tele-rheumatology encounter were completed and a subsample participated in focus groups to further explore acceptability. Results: Forty-eight patients with a diverse range of conditions participated. Patient travel was reduced on average by 95 km and 42% avoided time off work. Eighty-eight to 100% of participants agreed/strongly agreed with statements relating to acceptability, quality of physician-patient interaction, and nurse involvement. Twenty-nine percent expressed the need for a physical examination by a specialist rheumatologist and 25% felt that an in-person consultation would establish better patient-physician rapport. Qualitatively, participants viewed tele-rheumatology as equivalent to in-person care after an initial adjustment period. Discussion: Tele-rheumatology through videoconferencing for follow-up of patients with established disease is acceptable to patients and demonstrates the potential to improve time, travel, and cost burdens placed on patients who live remotely compared with traditional, face-to-face rheumatology care. Conclusions: Implementation of sustainable and patient acceptable models of tele-rheumatology care may allow timely access to all patients living with rheumatological conditions.
abstract_id: PUBMED:17042020
Rheumatology nurse practitioners' perceptions of their role. Objectives: To identify the current practices of rheumatology nurse practitioners and ascertain their perceptions of how their role could be enhanced.
Method: A cross-sectional questionnaire study of currently employed nurse practitioners in rheumatology in the United Kingdom (UK) was undertaken.
Results: 200 questionnaires were distributed and 118 nurses responded. Ninety-five respondents met the inclusion criteria for undertaking an advanced nursing role. Typical conditions dealt with included: rheumatoid arthritis (96.8%); psoriatic arthritis (95.8%); osteoarthritis (63.2%); ankylosing spondylitis (62.8%); systemic lupus erythematosus (51.6%); and scleroderma (34.7%). Drug monitoring, education, counselling of patients and arranging basic investigations were routinely performed by more than 80% of respondents. A smaller proportion performed an extended role that included dealing with referrals, research and audit, the administration of intra-articular injections, and admission of patients. Specific attributes identified as being necessary for competence were: knowledge and understanding of rheumatic diseases (48.4%); drug therapy (33.7%); good communication skills (35.8%); understanding of the roles of the team (27.4%); working effectively (23.2%) as part of a multidisciplinary team; assessment of patients by physical examination (28.4%); teaching (26.3%), research (17.9%); organizational skills (14.7%); and the interpretation of investigations (9.5%). Factors that could enhance their role included: attendance at postgraduate courses (30.5%); obtaining further qualifications (13.7%); active participation in the delivery of medical education (41.1%); training in practical procedures (31.6%); protected time and resources for audit and research (11.6%); formal training in counselling (11.6%); and implementation of nurse prescribing (10.5%).
Conclusion: Nurse practitioners already have a wide remit and play an invaluable part in the delivery of modern rheumatology services. An extended role could improve patient care and enhance nursing career pathways in rheumatology.
abstract_id: PUBMED:9044010
Patient satisfaction in a nurse-led rheumatology clinic. Patient satisfaction with a nurse-led rheumatology clinic was tested using the Leeds Satisfaction Questionnaire (LSQ), which was specially developed and shown to be both reliable (Cronbach's alpha, 0.96) and stable (test-re-test r = 0.83). A total of 70 patients with rheumatoid arthritis, aged 22-75 years were randomly allocated to either a nurse's or a rheumatologist's clinic and seen on six occasions over a year. They completed the LSQ on entry and on completion of the study. At week 0 both groups were satisfied with their care and there were no significant differences between them. By week 48 the medical cohort showed significantly increased satisfaction with access and continuity (P < 0.05) but no change in overall satisfaction. The patients in the nurse-led clinic recorded significant increases in overall satisfaction and in all subscales (P < 0.0001). Between-group comparison at week 48 showed the nurse's patients to be significantly more satisfied than those of the rheumatologist.
abstract_id: PUBMED:38181233
Evaluation of nurse practitioners' roles and competencies among rheumatology nurses: Results of a national survey. To investigate nurse practitioners' roles and competencies among rheumatology nurses in China, an online, cross-sectional survey was conducted between July 7 and 14, 2020 among the national cooperation group of nursing experts on management of rheumatic and immune diseases. A total of 796 valid questionnaires were returned and participants' mean total scores on the Nurse Practitioners' Roles and Competencies Scale (NPRCS) was 2.51 (SD = 0.55), indicating a medium level. Medical assistance, leadership reform, and clinical research had low scores in the six dimensions of the NPRCS. Nurse practitioners' roles and competencies in the area of rheumatology require improvement. Training should focus on medical assistance, leadership reform, and clinical research. The current study can provide a reference for an improved training framework of nursing practitioners in the rheumatology field.
Answer: Rheumatology outpatient nurse clinics have been introduced as a valuable addition to healthcare services for patients with rheumatologic conditions. These clinics, often led by nurse practitioners or clinical nurse specialists (CNSs), aim to provide person-centred care (PCC), improve patient education, and enhance the overall quality of care for individuals with rheumatoid arthritis (RA) and other rheumatic diseases.
The introduction of nurse-led clinics has been associated with various benefits. For instance, a study found that the implementation of video-based outpatient education in a rheumatology clinic significantly reduced patient demand for CNS services, suggesting that such educational interventions can effectively meet patients' informational needs and free up specialist resources (PUBMED:34645352). Additionally, patients have reported positive experiences with the care provided at nurse-led rheumatology clinics, emphasizing the importance of a holistic approach, a warm social environment, a professional approach by the nurses, and value-adding measures that facilitate daily life and instill security, trust, hope, and confidence (PUBMED:22745012).
Moreover, the development of the PCC instrument for outpatient care in rheumatology (PCCoc/rheum) has been instrumental in measuring patient-perceived PCC in nurse-led outpatient rheumatology clinics. This instrument has shown good measurement properties and the potential to evaluate outpatient PCC from a patient perspective (PUBMED:29484828). The conceptual framework for PCC in these settings includes domains such as social environment, personalization, shared decision-making, empowerment, and communication, which are crucial for delivering high-quality care (PUBMED:29417713).
Tele-rheumatology services have also been introduced, offering an alternative model of care that can reduce travel and time burdens for patients living in remote areas. Patients have generally accepted this service, finding it equivalent to in-person care after an initial adjustment period (PUBMED:31682204).
However, it is important to note that while nurse-led clinics offer many advantages, their impact on certain outcomes such as the need for rheumatology-related information, the use of aids and adaptations, and daily functioning may not differ significantly from regular care (PUBMED:11409670). Nonetheless, nurse practitioners play an invaluable role in the delivery of modern rheumatology services, and their roles and competencies are recognized as essential for patient care (PUBMED:17042020).
In conclusion, rheumatology outpatient nurse clinics are a valuable addition to healthcare services, providing person-centred, holistic care that is highly valued by patients. These clinics have the potential to improve patient education, satisfaction, and access to care while also addressing the unique needs of individuals with rheumatic diseases. |
Instruction: Does resident participation influence otolaryngology-head and neck surgery morbidity and mortality?
Abstracts:
abstract_id: PUBMED:27010505
Does resident participation influence otolaryngology-head and neck surgery morbidity and mortality? Objectives/hypothesis: Patients may perceive resident procedural participation as detrimental to their outcome. Our objective is to investigate whether otolaryngology-head and neck surgery (OHNS) housestaff participation is associated with surgical morbidity and mortality.
Study Design: Case-control study.
Methods: OHNS patients were analyzed from the American College of Surgeons National Surgical Quality Improvement Program 2006 to 2013 databases. We compared the incidence of 30-day postoperative morbidity, mortality, readmissions, and reoperations in patients operated on by resident surgeons with attending supervision (AR) with patients operated on by an attending surgeon alone (AO) using cross-tabulations and multivariable regression.
Results: There were 27,018 cases with primary surgeon data available, with 9,511 AR cases and 17,507 AO cases. Overall, 3.62% of patients experienced at least one postoperative complication. The AR cohort had a higher complication rate of 5.73% than the AO cohort at 2.48% (P < .001). After controlling for all other variables, there was no significant difference in morbidity (odds ratio [OR] = 1.05 [0.89 to 1.24]), mortality (OR = 0.91 [0.49 to 1.70]), readmission (OR = 1.29 [0.92 to 1.81]), or reoperation (OR = 1.28 [0.91 to 1.80]) for AR compared to AO cases. There was no difference between postgraduate year levels for adjusted 30-day morbidity or mortality.
Conclusions: There is an increased incidence of morbidity, mortality, readmission, and reoperation in OHNS surgical cases with resident participation, which appears related to increased comorbidity with AR patients. After controlling for other variables, resident participation was not associated with an increase in 30-day morbidity, mortality, readmission, or reoperation odds. These data suggest that OHNS resident participation in surgical cases is not associated with poorer short-term outcomes.
Level Of Evidence: 3b Laryngoscope, 126:2263-2269, 2016.
abstract_id: PUBMED:24812080
Vascular Anomalies in Otolaryngology-Head and Neck Surgery Resident Education. The evaluation and treatment of vascular anomalies is rapidly evolving. In recent years, improved imaging, medical therapies, interventional radiology procedures, and technical advances have led to improved functional and aesthetic outcomes with reduced morbidity. With management of vascular anomalies becoming increasingly complex, we wanted to assess the opinions of otolaryngology-head and neck surgery resident trainees regarding education in this evolving subspecialty. The results of our survey show that a significant majority of trainees feel that vascular anomalies are best managed by a multidisciplinary team, consistent with practice in large vascular anomalies centers. While training in this area does not seem to be deficient, it may be helpful to identify those otolaryngology residents who are interested in gaining exposure to patients with vascular anomalies, so that they may seek additional subspecialty experiences to complement their otolaryngology training.
abstract_id: PUBMED:6750674
What is otolaryngology--head and neck surgery? Under the influence of significant developments in the area of otolaryngology over the past two decades, this specialty has evolved into a comprehensive discipline of medicine and surgery of the head and neck region. The official name change in 1980 to "Otolaryngology--Head and Neck Surgery" indicates this shift (and expansion) in emphasis. Problems that may now be considered to fall into the jurisdiction of the otolaryngologist--head and neck surgeon are discussed.
abstract_id: PUBMED:38091101
Application of organoids in otolaryngology: head and neck surgery. Purpose: The purpose of this review is to systematically summarize the application of organoids in the field of otolaryngology and head and neck surgery. It aims to shed light on the current advancements and future potential of organoid technology in these areas, particularly in addressing challenges like hearing loss, cancer research, and organ regeneration.
Methods: Review of current literature regrading organoids in the field of otolaryngology and head and neck surgery.
Results: The review highlights several advancements in the field. In otology, the development of organoid replacement therapies offers new avenues for treating hearing loss. In nasal science, the creation of specific organoid models aids in studying nasopharyngeal carcinoma and respiratory viruses. In head and neck surgery, innovative approaches for squamous cell carcinoma prediction and thyroid regeneration using organoids have been developed.
Conclusion: Organoid research in otolaryngology-head and neck surgery is still at an early stage. This review underscores the potential of this technology in advancing our understanding and treatment of various conditions, predicting a transformative impact on future medical practices in these fields.
abstract_id: PUBMED:22319688
Accuracy of references in Indian journal of otolaryngology and head & neck surgery. This study was done to observe the accuracy of references in articles published in Indian Journal of Otolaryngology and Head & Neck Surgery. There were 63 references randomly selected from different issues of Indian Journal of Otolaryngology and Head & Neck Surgery (IJOHNS). It includes: Volume 61, Number 4, December 2009 and Volume 62, Number 1, January 2010. References were examined in details by dividing them into six elements and they were compared with the original for accuracy. References not cited from indexed journals were excluded. Statistical analysis was done by using frequency and percentage. Results show that 30.1% references in Indian Journal of Otolaryngology and Head & Neck Surgery were incorrect. Most common errors were author's name and journal name. Author's names were found to be incorrect in 11.1% references while journal name were found to be incorrect in 6.3%. Errors in citing the references are also found in the Indian Journal of Otolaryngology and Head & Neck Surgery. The quoted error in this study is comparable to other international literatures. The majority of errors are avoidable. So, the authors, editors and the reviewers have to check for any errors seriously before publication in the journal.
abstract_id: PUBMED:18021845
Certification and maintenance of certification in otolaryngology-head and neck surgery. The American Board of Otolaryngology is the organization responsible for certifying physicians who have met the Board's professional standards of training and knowledge in otolaryngology-head and neck surgery. The American Board of Otolaryngology monitors the progress of residents through training and conducts examinations for board certification. Quality of care initiatives throughout medicine have stimulated the Board to develop a maintenance of certification process with a 10-year, time-limited certification. Maintaining certification requires participation in the Board's process, which includes evaluation of professional standing, continuing education and self-assessment, cognitive expertise, and performance in practice. The ultimate goal of the American Board of Otolaryngology's activities is improved patient care.
abstract_id: PUBMED:25574567
Impact of resident participation on morbidity and mortality in neurosurgical procedures: an analysis of 16,098 patients. Object: The authors sought to determine the impact of resident participation on overall 30-day morbidity and mortality following neurosurgical procedures.
Methods: The American College of Surgeons National Surgical Quality Improvement Program database was queried for all patients who had undergone neurosurgical procedures between 2006 and 2012. The operating surgeon(s), whether an attending only or attending plus resident, was assessed for his or her influence on morbidity and mortality. Multivariate logistic regression, was used to estimate odds ratios for 30-day postoperative morbidity and mortality outcomes for the attending-only compared with the attending plus resident cohorts (attending group and attending+resident group, respectively).
Results: The study population consisted of 16,098 patients who had undergone elective or emergent neurosurgical procedures. The mean patient age was 56.8 ± 15.0 years, and 49.8% of patients were women. Overall, 15.8% of all patients had at least one postoperative complication. The attending+resident group demonstrated a complication rate of 20.12%, while patients with an attending-only surgeon had a statistically significantly lower complication rate at 11.70% (p < 0.001). In the total population, 263 patients (1.63%) died within 30 days of surgery. Stratified by operating surgeon status, 162 patients (2.07%) in the attending+resident group died versus 101 (1.22%) in the attending group, which was statistically significant (p < 0.001). Regression analyses compared patients who had resident participation to those with only attending surgeons, the referent group. Following adjustment for preoperative patient characteristics and comorbidities, multivariate regression analysis demonstrated that patients with resident participation in their surgery had the same odds of 30-day morbidity (OR = 1.05, 95% CI 0.94-1.17) and mortality (OR = 0.92, 95% CI 0.66-1.28) as their attending only counterparts.
Conclusions: Cases with resident participation had higher rates of mortality and morbidity; however, these cases also involved patients with more comorbidities initially. On multivariate analysis, resident participation was not an independent risk factor for postoperative 30-day morbidity or mortality following elective or emergent neurosurgical procedures.
abstract_id: PUBMED:32326972
Knowledge and confidence in managing obstructive sleep apnea patients in Canadian otolaryngology - head and neck surgery residents: a cross sectional survey. Background: Obstructive sleep apnea is an expected competency for Otolaryngology - Head and Neck surgery residents and tested on the Royal College of Physicians and Surgeons examination. Our objective was to evaluate the knowledge, attitudes and confidence of Canadian Otolaryngology - Head and Neck surgery residents in managing Obstructive Sleep Apnea (OSA) patients.
Methods: An anonymous, online, cross-sectional survey was distributed to all current Canadian Otolaryngology-Head and Neck surgery residents according to the Dillman Tailored Design Method in English and French. The previously validated OSA Knowledge and Attitudes (OSAKA) questionnaire was administered, along with questions exploring resident confidence levels with performing OSA surgeries. Descriptive statistics, Wilcoxon Rank Sum and unpaired Student's t tests were calculated in Excel.
Results: Sixty-six (38.4%) out of 172 residents responded (60.6% male; 80.3% English-speaking). Median OSAKA knowledge score was 16/18 (88.9%; Interquartile range: 14-16). Although all respondents believed that OSA was an important clinical disorder, only 45.5% of residents felt confident in managing OSA patients, while only 15.2% were confident in managing continuous positive airway pressure therapy (CPAP). Senior residents were more confident than junior residents in identifying OSA patients (96.7% vs 69.4%; p < 0.005) and managing the disease (60.0% vs. 33.3%; p = 0.03), including CPAP (26.7% vs. 5.6%; p = 0.01). Residents had lowest confidence levels in performing tongue base suspension (1.5%), transpalatal advancement pharyngoplasty (3.0%), and laser assisted uvulopalatoplasty (6.1%). Highest confidence levels were described in performing septoplasty (56.1%), adult tonsillectomy (75.8%), and tracheotomy (77.3%).
Conclusions: Otolaryngology - Head and Neck surgery residents' knowledge of OSA was very good; however, confidence levels for managing OSA and performing OSA surgeries were varied. Several areas of perceived strengths and weaknesses in OSA training were identified by Canadian Otolaryngology - Head and Neck surgery residents.
abstract_id: PUBMED:18021842
Fellowship training in otolaryngology-head and neck surgery. Fellowship training in otolaryngology-head and neck surgery is a relatively new phenomenon that reflects the increasing complexity of clinical medicine. Despite an expansion of fellowship opportunities over the last three decades, there is still incomplete regulation and standardization of fellowship training. Approximately one third of residency graduates obtain some form of fellowship training. This article describes the fellowship opportunities that are currently available within the field of otolaryngology. Fellowship training serves as an avenue to gain further credentials and certification, although only the fields of neurotology, pediatric otolaryngology, and sleep medicine are accredited by the ACGME.
abstract_id: PUBMED:2383388
The resident selection process in otolaryngology-head and neck surgery. This study was undertaken to characterize the objective information available about applicants to otolaryngology-head and neck surgery residency positions, and to determine the influence of these factors on obtaining a residency position. Applicants to the otolaryngology-head and neck surgery programs at University of Utah, Salt Lake City, University of Texas Medical Branch, Galveston, and University of Louisville (Ky) were studied. Thirty-one variables were examined, representing demographic and academic factors. Excellent academic performance in medical school (as represented by medical school grade-point average greater than 3.4, National Board part I score greater than 650, class percentile rank greater than 85th percentile, honors grades in both junior medicine and surgery clinical clerkships, or [5] election to Alpha Omega Alpha) was significantly correlated with success in obtaining an otolaryngology-head and neck surgery residency position.
Answer: Resident participation in otolaryngology-head and neck surgery (OHNS) does not appear to negatively influence morbidity and mortality rates. A study analyzing OHNS patients from the American College of Surgeons National Surgical Quality Improvement Program databases between 2006 and 2013 found that while the cohort with resident participation initially showed a higher complication rate compared to the cohort operated on by attending surgeons alone, after controlling for other variables, there was no significant difference in morbidity, mortality, readmissions, or reoperations between the two groups (PUBMED:27010505). This suggests that the presence of residents during surgical procedures, under the supervision of attending surgeons, does not lead to poorer short-term outcomes.
Moreover, the study found no difference in adjusted 30-day morbidity or mortality between different postgraduate year levels of residents, indicating that the level of resident training did not significantly impact these outcomes either (PUBMED:27010505). This aligns with findings in neurosurgery, where a study showed that although cases with resident participation had higher rates of mortality and morbidity, these cases also involved patients with more comorbidities initially. After adjusting for preoperative characteristics and comorbidities, resident participation was not an independent risk factor for postoperative 30-day morbidity or mortality (PUBMED:25574567).
Therefore, based on the available evidence, resident participation in OHNS procedures, when supervised by attending surgeons, does not independently influence the rates of morbidity and mortality. |
Instruction: Is heart rate variability better than routine vital signs for prehospital identification of major hemorrhage?
Abstracts:
abstract_id: PUBMED:25534122
Is heart rate variability better than routine vital signs for prehospital identification of major hemorrhage? Objective: During initial assessment of trauma patients, metrics of heart rate variability (HRV) have been associated with high-risk clinical conditions. Yet, despite numerous studies, the potential of HRV to improve clinical outcomes remains unclear. Our objective was to evaluate whether HRV metrics provide additional diagnostic information, beyond routine vital signs, for making a specific clinical assessment: identification of hemorrhaging patients who receive packed red blood cell (PRBC) transfusion.
Methods: Adult prehospital trauma patients were analyzed retrospectively, excluding those who lacked a complete set of reliable vital signs and a clean electrocardiogram for computation of HRV metrics. We also excluded patients who did not survive to admission. The primary outcome was hemorrhagic injury plus different PRBC transfusion volumes. We performed multivariate regression analysis using HRV metrics and routine vital signs to test the hypothesis that HRV metrics could improve the diagnosis of hemorrhagic injury plus PRBC transfusion vs routine vital signs alone.
Results: As univariate predictors, HRV metrics in a data set of 402 subjects had comparable areas under receiver operating characteristic curves compared with routine vital signs. In multivariate regression models containing routine vital signs, HRV parameters were significant (P<.05) but yielded areas under receiver operating characteristic curves with minimal, nonsignificant improvements (+0.00 to +0.05).
Conclusions: A novel diagnostic test should improve diagnostic thinking and allow for better decision making in a significant fraction of cases. Our findings do not support that HRV metrics add value over routine vital signs in terms of prehospital identification of hemorrhaging patients who receive PRBC transfusion.
abstract_id: PUBMED:26508581
Development of a prehospital vital signs chart sharing system. Objective: Physiological parameters are crucial for the caring of trauma patients. There is a significant loss of prehospital vital signs data of patients during handover between prehospital and in-hospital teams. Effective strategies for reducing the loss remain a challenging research area. We tested whether the newly developed electronic automated prehospital vital signs chart sharing system would increase the amount of prehospital vital signs data shared with a remote trauma center prior to hospital arrival.
Methods: Fifty trauma patients, transferred to a level I trauma center in Japan, were studied. The primary outcome variable was the number of prehospital vital signs shared with the trauma center prior to hospital arrival.
Results: The prehospital vital signs chart sharing system significantly increased the number of prehospital vital signs, including blood pressure, heart rate, and oxygen saturation, shared with the in-hospital team at a remote trauma center prior to patient arrival at the hospital (P < .0001). There were significant differences in prehospital vital signs during ambulance transfer between patients who had severe bleeding and non-severe bleeding within 24 hours after injury onset.
Conclusions: Vital signs data collected during ambulance transfer via patient monitors could be automatically converted to easily visible patient charts and effectively shared with the remote trauma center prior to hospital arrival. The prehospital vital signs chart sharing system increased the number of precise vital signs shared prior to patient arrival at the hospital, which can potentially contribute to better trauma care without increasing labor and reduce information loss during clinical handover.
abstract_id: PUBMED:22421006
The association between vital signs and major hemorrhagic injury is significantly improved after controlling for sources of measurement variability. Purpose: Measurement error and transient variability affect vital signs. These issues are inconsistently considered in published reports and clinical practice. We investigated the association between major hemorrhagic injury and vital signs, successively applying analytic techniques that excluded unreliable measurements, reduced transient variation, and then controlled for ambiguity in individual vital signs through multivariate analysis.
Methods: Vital sign data from 671 adult prehospital trauma patients were analyzed retrospectively. Computer algorithms were used to identify and exclude unreliable data and to apply time averaging. An ensemble classifier was developed and tested by cross-validation. Primary outcome was hemorrhagic injury plus red cell transfusion. Areas under receiver operating characteristic curves (ROC AUCs) were compared by the test of DeLong et al.
Results: Of initial vital signs, systolic blood pressure (BP) had the highest ROC AUC of 0.71 (95% confidence interval, 0.64-0.78). The ROC AUCs improved after excluding unreliable data, significantly for heart rate and respiratory rate but not significantly for BP. Time averaging to reduce temporal variability further increased AUCs, significantly for BP and not significantly for heart rate and respiratory rate. The ensemble classifier yielded a final ROC AUC of 0.84 (95% confidence interval, 0.80-0.89) in cross-validation.
Conclusions: Techniques to reduce variability in vital sign data can lead to significantly improved diagnostic performance. Failure to consider such variability could significantly reduce clinical effectiveness or confound research investigations.
abstract_id: PUBMED:33797486
Prehospital continuous vital signs predict need for resuscitative endovascular balloon occlusion of the aorta and resuscitative thoracotomy prehospital continuous vital signs predict resuscitative endovascular balloon occlusion of the aorta. Background: Rapid triage and intervention to control hemorrhage are key to survival following traumatic injury. Patients presenting in hemorrhagic shock may undergo resuscitative thoracotomy (RT) or resuscitative endovascular balloon occlusion of the aorta (REBOA) as adjuncts to rapidly control bleeding. We hypothesized that machine learning along with automated calculation of continuously measured vital signs in the prehospital setting would accurately predict need for REBOA/RT and inform rapid lifesaving decisions.
Methods: Prehospital and admission data from 1,396 patients transported from the scene of injury to a Level I trauma center via helicopter were analyzed. Utilizing machine learning and prehospital autonomous vital signs, a Bleeding Risk Index (BRI) based on features from pulse oximetry and electrocardiography waveforms and blood pressure (BP) trends was calculated. Demographics, Injury Severity Score and BRI were compared using Mann-Whitney-Wilcox test. Area under the receiver operating characteristic curve (AUC) was calculated and AUC of different scores compared using DeLong's method.
Results: Of the 1,396 patients, median age was 45 years and 68% were men. Patients who underwent REBOA/RT were more likely to have a penetrating injury (24% vs. 7%, p < 0.001), higher Injury Severity Score (25 vs. 10, p < 0.001) and higher mortality (44% vs. 7%, p < 0.001). Prehospital they had lower BP (96 [70-130] vs. 134 [117-152], p < 0.001) and higher heart rate (106 [82-118] vs. 90 [76-106], p < 0.001). Bleeding risk index calculated using the entire prehospital period was 10× higher in patients undergoing REBOA/RT (0.5 [0.42-0.63] vs. 0.05 [0.02-0.21], p < 0.001) with an AUC of 0.93 (95% confidence interval [95% CI], 0.90-0.97). This was similarly predictive when calculated from shorter periods of transport: BRI initial 10 minutes prehospital AUC of 0.89 (95% CI, 0.83-0.94) and initial 5 minutes AUC of 0.90 (95% CI, 0.85-0.94).
Conclusion: Automated prehospital calculations based on vital sign features and trends accurately predict the need for the emergent REBOA/RT. This information can provide essential time for team preparedness and guide trauma triage and disaster management.
Level Of Evidence: Therapeutic/care management, Level IV.
abstract_id: PUBMED:2712210
The sensitivity of vital signs in identifying major thoracoabdominal hemorrhage. Prehospital and emergency room recordings of hemodynamic vital signs frequently play a major role in the evaluation and treatment of trauma victims. Guidelines for resuscitation and treatment are affected by absolute cutoffs in hemodynamic parameters. To determine the sensitivity of various strata of systolic blood pressure and heart rate in identifying patients with major thoracoabdominal hemorrhage, a 1-year retrospective review was conducted. A third of all patients presented to the emergency department with a normal blood pressure and over three-quarters attained a normal blood pressure during the emergency department evaluation. Although the sensitivity of vital signs in identifying this group of patients improved as the variance from normal increased, standard cutoffs were relatively insensitive. We conclude that normal postinjury vital signs do not predict the absence of potentially life-threatening hemorrhage and abnormal vital signs at any point after injury require investigation to rule out significant blood loss.
abstract_id: PUBMED:24124656
Clinical relevance of routinely measured vital signs in hospitalized patients: a systematic review. Background: Conflicting evidence exists on the effectiveness of routinely measured vital signs on the early detection of increased probability of adverse events.
Purpose: To assess the clinical relevance of routinely measured vital signs in medically and surgically hospitalized patients through a systematic review.
Data Sources: MEDLINE, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Cumulative Index to Nursing and Allied Health Literature, and Meta-analysen van diagnostisch onderzoek (in Dutch; MEDION) were searched to January 2013.
Study Selection: Prospective studies evaluating routine vital sign measurements of hospitalized patients, in relation to mortality, septic or circulatory shock, intensive care unit admission, bleeding, reoperation, or infection.
Data Extraction: Two reviewers independently assessed potential bias and extracted data to calculate likelihood ratios (LRs) and predictive values.
Data Synthesis: Fifteen studies were performed in medical (n = 7), surgical (n = 4), or combined patient populations (n = 4; totaling 42,565 participants). Only three studies were relatively free from potential bias. For temperature, the positive LR (LR+) ranged from 0 to 9.88 (median 1.78; n = 9 studies); heart rate 0.82 to 6.79 (median 1.51; n = 5 studies); blood pressure 0.72 to 4.7 (median 2.97; n = 4 studies); oxygen saturation 0.65 to 6.35 (median 1.74; n = 2 studies); and respiratory rate 1.27 to 1.89 (n = 3 studies). Overall, three studies reported area under the Receiver Operator Characteristic (ROC) curve (AUC) data, ranging from 0.59 to 0.76. Two studies reported on combined vital signs, in which one study found an LR+ of 47.0, but in the other the AUC was not influenced.
Conclusions: Some discriminative LR+ were found, suggesting the clinical relevance of routine vital sign measurements. However, the subject is poorly studied, and many studies have methodological flaws. Further rigorous research is needed specifically intended to investigate the clinical relevance of routinely measured vital signs.
Clinical Relevance: The results of this research are important for clinical nurses to underpin daily routine practices and clinical decision making.
abstract_id: PUBMED:26743804
Muscle Oxygen Saturation Improves Diagnostic Association Between Initial Vital Signs and Major Hemorrhage: A Prospective Observational Study. Objectives: During initial assessment of trauma patients, vital signs do not identify all patients with life-threatening hemorrhage. We hypothesized that a novel vital sign, muscle oxygen saturation (SmO2 ), could provide independent diagnostic information beyond routine vital signs for identification of hemorrhaging patients who require packed red blood cell (RBC) transfusion.
Methods: This was an observational study of adult trauma patients treated at a Level I trauma center. Study staff placed the CareGuide 1100 tissue oximeter (Reflectance Medical Inc., Westborough, MA), and we analyzed average values of SmO2 , systolic blood pressure (sBP), pulse pressure (PP), and heart rate (HR) during 10 minutes of early emergency department evaluation. We excluded subjects without a full set of vital signs during the observation interval. The study outcome was hemorrhagic injury and RBC transfusion ≥ 3 units in 24 hours (24-hr RBC ≥ 3). To test the hypothesis that SmO2 added independent information beyond routine vital signs, we developed one logistic regression model with HR, sBP, and PP and one with SmO2 in addition to HR, sBP, and PP and compared their areas under receiver operating characteristic curves (ROC AUCs) using DeLong's test.
Results: We enrolled 487 subjects; 23 received 24-hr RBC ≥ 3. Compared to the model without SmO2 , the regression model with SmO2 had a significantly increased ROC AUC for the prediction of ≥ 3 units of 24-hr RBC volume, 0.85 (95% confidence interval [CI], 0.75-0.91) versus 0.77 (95% CI, 0.66-0.86; p < 0.05 per DeLong's test). Results were similar for ROC AUCs predicting patients (n = 11) receiving 24-hr RBC ≥ 9.
Conclusions: SmO2 significantly improved the diagnostic association between initial vital signs and hemorrhagic injury with blood transfusion. This parameter may enhance the early identification of patients who require blood products for life-threatening hemorrhage.
abstract_id: PUBMED:28069417
Standing shock index: An alternative to orthostatic vital signs. Objective The lack of a sensitive, practical bedside test for hypovolemia has rekindled interest in the shock index (heart rate divided by systolic blood pressure). Here, we compare the effect of blood donation on standing shock index values with its effect on values for the supine shock index and orthostatic change in shock indicies (OCSI).
Methods: This is a re-analysis of data collected for an earlier report. Data were available from 292 adults below age 65 and 44 adults ages 65 and over, donating 450mL of blood. We obtained supine and standing vital signs before and after donation and then calculated 95% confidence intervals for differences based on the t-distribution.
Results: Blood donation resulted in a mean increase in the standing shock index of 0.09 [95% CI, 0.08-0.11] in younger adults and 0.08 [95% CI, 0.05-0.11] in older adults. These changes were similar to those noted for OCSI (young, 95% CI, 0.08-0.10; old, 95% CI, 0.04-0.10). Supine shock index values did not change with donation in younger donors (mean difference 0.0 [95% CI, 0.0-0.01]) or older donors (mean difference 0.0 [95% CI, -0.01-0.03]).
Conclusion: Blood donation does not affect the supine shock index, but it does result in changes in standing shock index that are similar to changes in more complicated orthostatic vital signs.
abstract_id: PUBMED:27457863
Vital Signs Strongly Predict Massive Transfusion Need in Geriatric Trauma Patients. Early recognition of massive transfusion (MT) requirement in geriatric trauma patients presents a challenge, as older patients present with vital signs outside of traditional thresholds for hypotension and tachycardia. Although many systems exist to predict MT need in trauma patients, none have specifically evaluated the geriatric population. We sought to evaluate the predictive value of presenting vital signs in geriatric trauma patients for prediction of MT. We retrospectively reviewed geriatric trauma patients presenting to our Level I trauma center from 2010 to 2013 requiring full trauma team activation. The area under the receiver operating characteristic curve was calculated to assess discrimination of arrival vital signs for MT prediction. Ideal cutoffs with high sensitivity and specificity were identified. A total of 194 patients with complete data were analyzed. Of these, 16 patients received MT. There was no difference between the MT and non-MT groups in sex, age, or mechanism. Systolic blood pressure, pulse pressure, diastolic blood pressure, and shock index all were strongly predictive of MT need. Interestingly, we found that heart rate does not predict MT. MT in geriatric trauma patients can be reliably and simply predicted by arrival vital signs. Heart rate may not reflect serious hemorrhage in this population.
abstract_id: PUBMED:35623625
The Predictive Value of Vital Signs for Morbidity in Pregnancy: Evaluating and Optimizing Maternal Early Warning Systems. Objective: Vital sign scoring systems that alert providers of clinical deterioration prior to critical illness have been proposed as a means of reducing maternal risk. This study examined the predictive ability of established maternal early warning systems (MEWS)-as well as their component vital sign thresholds-for different types of maternal morbidity, to discern an optimal early warning system.
Study Design: This retrospective cohort study analyzed all patients admitted to the obstetric services of a four-hospital urban academic system in 2018. Three sets of published MEWS criteria were evaluated. Maternal morbidity was defined as a composite of hemorrhage, infection, acute cardiac disease, and acute respiratory disease ascertained from the electronic medical record data warehouse and administrative data. The test characteristics of each MEWS, as well as for heart rate, blood pressure, and oxygen saturation were compared.
Results: Of 14,597 obstetric admissions, 2,451 patients experienced the composite morbidity outcome (16.8%) including 980 cases of hemorrhage (6.7%), 1,337 of infection (9.2%), 362 of acute cardiac disease (2.5%), and 275 of acute respiratory disease (1.9%) (some patients had multiple types of morbidity). The sensitivities (15.3-64.8%), specificities (56.8-96.1%), and positive predictive values (22.3-44.5%) of the three MEWS criteria ranged widely for overall morbidity, as well as for each morbidity subcategory. Of patients with any morbidity, 28% met criteria for the most liberal vital sign combination, while only 2% met criteria for the most restrictive parameters, compared with 14 and 1% of patients without morbidity, respectively. Sensitivity for all combinations was low (maximum 28.2%), while specificity for all combinations was high, ranging from 86.1 to 99.3%.
Conclusion: Though all MEWS criteria demonstrated poor sensitivity for maternal morbidity, permutations of the most abnormal vital signs have high specificity, suggesting that MEWS may be better implemented as a trigger tool for morbidity reduction strategies in the highest risk patients, rather than a general screen.
Key Points: · MEWS have poor sensitivity for maternal morbidity.. · MEWS can be optimized for high specificity using modified criteria.. · MEWS could be better used as a trigger tool..
Answer: Based on the provided abstracts, heart rate variability (HRV) does not appear to be better than routine vital signs for prehospital identification of major hemorrhage. In a study that evaluated whether HRV metrics provide additional diagnostic information beyond routine vital signs for identifying hemorrhaging patients who receive packed red blood cell (PRBC) transfusion, the findings did not support that HRV metrics add value over routine vital signs. The HRV parameters were significant in multivariate regression models containing routine vital signs, but they yielded areas under receiver operating characteristic curves with minimal, nonsignificant improvements (+0.00 to +0.05) (PUBMED:25534122).
Other studies have focused on the importance of vital signs in various clinical settings. For instance, one study found that prehospital vital signs chart sharing systems significantly increased the number of prehospital vital signs shared with the in-hospital team at a remote trauma center prior to patient arrival at the hospital, which can potentially contribute to better trauma care (PUBMED:26508581). Another study suggested that techniques to reduce variability in vital sign data can lead to significantly improved diagnostic performance, indicating the importance of considering measurement error and transient variability in vital signs (PUBMED:22421006).
Moreover, continuous monitoring of vital signs in the prehospital setting using machine learning was found to accurately predict the need for emergent procedures such as resuscitative thoracotomy (RT) or resuscitative endovascular balloon occlusion of the aorta (REBOA), which are used to rapidly control bleeding (PUBMED:33797486). This suggests that while HRV alone may not be superior, the continuous monitoring and analysis of vital signs can be crucial in prehospital trauma care.
In summary, the evidence from the abstracts does not support the superiority of HRV over routine vital signs for the prehospital identification of major hemorrhage. Instead, the focus seems to be on the effective use and interpretation of routine vital signs, possibly enhanced by technological advancements and data analysis methods. |
Instruction: Can intra-operative GH measurement in acromegalic subjects predict completeness of surgery?
Abstracts:
abstract_id: PUBMED:9797846
Can intra-operative GH measurement in acromegalic subjects predict completeness of surgery? Objective: Results of trans-sphenoidal pituitary surgery, in terms of long-term cure, vary considerably between centres. Additional techniques, which can assist the neurosurgeon in deciding whether surgery is complete or not, might therefore be important. One such potential tool is the intra-operative measurement of GH and calculating the plasma half-life from the plasma samples obtained after the presumed complete resection of the adenoma.
Methods: GH half-life was calculated from 5-10 min plasma samples after adenomectomy in 20 patients. GH was measured with a sensitive and rapid IFMA, and the results could be reported within 30 min, but were not used in this study for per-operative decisions. Cure was defined by a glucose suppressed plasma GH concentration below 1 mU/l (0.38 microgram/l) during follow-up studies and a normal plasma IGFI concentration.
Results: In 13 cured patients the plasma half-life was 22.2 +/- 1.9 min (range 14-40.6). In three non-cured patients the plasma half-life could not be calculated, and in four other patients the plasma half-life was 35.8 +/- 5.9 min (range 25.8-51 min). By applying 25 min as the upper normal limit for the GH plasma half-life, the sensitivity was 77%, specificity 100%, and positive predictive value 100%.
Conclusion: Per-operative plasma GH monitoring is a potentially useful tool for determining the completeness of trans-sphenoidal surgery in acromegaly.
abstract_id: PUBMED:31630821
Completeness of operative reports for rectal cancer surgery. Introduction: Synoptic operative reporting has been shown to improve completeness and consistency in surgical documentation. We sought to determine whether operative reports contain the key elements recommended by the National Accreditation Program for Rectal Cancer.
Methods: Rectal cancer operative reports from June-December 2018 were submitted from ten hospitals in Michigan. These reports were analyzed to identify key elements in the synoptic operative template and assessed for completeness.
Results: In total, 110 operative reports were reviewed. Thirty-one (28%) reports contained all 24 elements; all of these reports used a synoptic template. Overall, 62 (56%) reports used a synoptic template and 48 (44%) did not. Using a synoptic template significantly improved documentation, as these reports contained 92% of required elements, compared to 39% for narrative reports (p < 0.001).
Conclusions/discussion: Narrative operative reports inconsistently document rectal cancer resection. This study provides evidence that synoptic reporting will improve quality of documentation for rectal cancer surgery.
abstract_id: PUBMED:23930065
A case of an acromegalic patient resistant to the recommended maximum GH receptor antagonist dosage. Background: The competitive GH receptor antagonist pegvisomant is reported to normalise IGF-1 levels in up to 97 % of acromegalic patients at a maximum dosage of 40 mg/d. Description of Case: We present an acromegalic patient resistant to the recommended maximum GH receptor antagonist dosage. The 60-year-old male patient presenting with typical clinical signs of acromegaly has underwent multiple transsphenoidal surgeries and pituitary irradiation, while currently available pharmacological therapies for acromegaly have been exhausted.
Results: Biochemical control of the disease could only be achieved until uptitration of pegvisomant to 60 mg/d which was tolerated well.
Conclusions: The current treatment algorithm for acromegaly should be modified to treat cases of persistent and uncontrolled disease.
abstract_id: PUBMED:32001904
Relationship between intra-operative hypotension and post-operative complications in traumatic hip surgery. Background And Aims: The relationship between intra-operative hypotension and post-operative complications has been recently studied in non-cardiac surgery. Little is known about this relationship in traumatic hip surgery. Our study aimed to investigate this relationship.
Methods: A retrospective study was conducted on patients who underwent surgical correction of traumatic hip fracture between 2010 and 2015. We reviewed the perioperative blood pressure readings and the episodes of intra-operative hypotension. Hypotension was defined as ≥30% decrease in the pre-induction systolic blood pressure sustained for ≥10 min. The relationship between intra-operative hypotension and post-operative complications was evaluated. Post-operative complications were defined as new events or diseases that required post-operative treatment for 48 h. Factors studied included type of anaesthesia, blood transfusion rate, pre-operative comorbidities and delay in surgery. We used the Statistical Package for Social Sciences (SPSS, IBM 25) to perform descriptive and non-parametric statistics.
Results: A total of 502 patients underwent various types of traumatic hip surgery during the study period. Intra-operative hypotension developed in 91 patients (18.1%) and 42 patients (8.4%) developed post-operative complications. Significantly more patients with hypotension developed post-operative complications compared to patients with stable vitals (18.7% vs. 6.1; P < 0.001). There was no statistically significant difference in the incidence of post-operative complication in patients receiving general or spinal anaesthesia. Pre-operative comorbidities had no significant relationship with post-operative complications. Intra-operative blood transfusion was related to both intra-operative hypotension and post-operative complications.
Conclusion: There was an association between intra-operative hypotension and post-operative complications in patients undergoing traumatic hip surgery.
abstract_id: PUBMED:24082601
Familial isolated hyperparathyroidism: role of intra operative parathormone assay. Primary Hyperparathyroidism (PHPT) has been reported to occur in members of same family either alone or in syndromic association. We report a family of patients with multi glandular disease wherein we have successfully used intra operative PTH (IOPTH) to assess the completeness of resection. Three members of one family were affected (One male and two females). Two of them had symptomatic disease whereas one was asymptomatic. Since the genetic studies are not available we used a combination of radiological, clinical and laboratory findings to rule out the other components of MEN syndromes. Extent of surgery in familial isolated hyperparathyroidism (FIHP) is controversial. Hence for confirmation of adequate parathyroid tissue resection we used IOPTH. Role of IOPTH has been well established in sporadic PHPT but controversial in multi glandular syndromes. IOPTH was successfully used to confirm the excision and establish cure.
abstract_id: PUBMED:29732427
Do intra-operative neurophysiological changes predict functional outcome following decompressive surgery for lumbar spinal stenosis? A prospective study. Background: To analyse the relation between immediate intraoperative neurophysiological changes during decompression and clinical outcome in a series of patients with lumbar spinal stenosis (LSS) undergoing surgery.
Methods: Twenty-four patients with neurogenic intermittent claudication (NIC) due to LSS undergoing decompressive surgery were prospectively studied. Intra operative trans-cranial motor evoked potentials (tcMEPs) were recorded before and immediately after surgical decompression. Lower limb normalised tcMEP improvement was used as primary neurophysiological outcome. Clinical outcome was assessed using the Zurich Claudication Questionnaire (ZCQ) self-assessment score, before surgery (baseline) and at an average of 8 and 29 months post-operatively.
Results: We found a moderate positive correlation between tcMEP changes and ZCQ at early follow-up (R=0.36). At late follow-up no correlation was found between intra-operative tcMEP and ZCQ changes. Dichotomizing the data showed a statistically significant relationship between tcMEP improvement and better functional outcome at early follow-up (P=0.013) but not at later follow-up (P=1).
Conclusions: Our findings suggest that intra-operative neurophysiological improvement during decompressive surgery may predict a better clinical outcome at early follow-up although this is not applicable to late follow-up possibly due to the observed erosion of functional improvement with time.
abstract_id: PUBMED:30662762
Intra-operative imaging in trauma surgery. The reconstruction of anatomical joint surfaces, limb alignment and rotational orientation are crucial in the treatment of fractures in terms of preservation of function and range of motion. To assess reduction and implant position intra-operatively, mobile C-arms are mandatory to immediately and continuously control these parameters.Usually, these devices are operated by OR staff or radiology technicians and assessed by the surgeon who is performing the procedure. Moreover, due to special objectives in the intra-operative setting, the situation cannot be compared with standard radiological image acquisition. Thus, surgeons need to be trained and educated to ensure correct technical conduct and interpretation of radiographs.It is essential to know the standard views of the joints and long bones and how to position the patient and C-arm in order to acquire these views. Additionally, the operating field must remain sterile, and the radiation exposure of the patient and staff must be kept as low as possible.In some situations, especially when reconstructing complex joint fractures or spinal injuries, complete evaluation of critical aspects of the surgical results is limited in two-dimensional views and fluoroscopy. Intra-operative three-dimensional imaging using special C-arms offers a valuable opportunity to improve intra-operative assessment and thus patient outcome.In this article, common fracture situations in trauma surgery as well as special circumstances that the surgeon may encounter are addressed. Cite this article: EFORT Open Rev 2018;3:541-549. DOI: 10.1302/2058-5241.3.170074.
abstract_id: PUBMED:36415246
The role of intra-operative parathyroid hormone assay in non-localized adenoma. The incidence of primary hyperparathyroidism (PHPT) is increasing in trend due to more common practice of routine blood investigations especially in the elderly. Surgery is the only curative therapy in symptomatic patients. We present a case of a 63-year-old lady with generalised body weakness associated with occasional muscle cramps. Her biochemical results were consistent with PHPT. As a result of persistent severe hypercalcemia, surgery was planned. However, the pre-operative anatomical and functional radiological imaging (neck ultrasonography, 99mTc-MIBI and FDG-PET scans) failed to identify the abnormal parathyroid gland. Therefore, bilateral neck exploration with intra-operative parathyroid hormone (io-PTH) measurement was performed. The nodular left thyroid and adenomatous right superior parathyroid glands were removed. Possible causes of negative localization and incorporation of io-PTH in under-resourced countries to ensure successful surgery are discussed.
abstract_id: PUBMED:29643880
Intra-Operative Predictors of difficult cholecystectomy and Conversion to Open Cholecystectomy - A New Scoring System. Objective: To evaluate the intra-operative scoring system to predict difficult cholecystectomy and conversion to open surgery.
Methods: This descriptive study was conducted from March 2016 to August, 2016 in the Department of Surgery, Shalimar Hospital. The study recruited 120 patients of either gender, age greater than 18 years and indicated for laparoscopic cholecystectomy (LC). Intra-operatively all patients were evaluated using the new scoring system. The scoring system included five aspects; appearance and adhesion of Gall Bladder (GB), distension or contracture degree of GB, ease in access, local or septic complications, and time required for cystic artery and duct identification. The scoring system ranges from 0 to 10, classified as score of <2 being considered easy, 2 to 4 moderate, 5-7 very difficult, and 8 to 10, extreme. Patient demographic data (i.e. age, gender), co-morbidities, intra-operative scores using the scoring system and conversion to open were recorded. The data was analysed using statistical analysis software SPSS (IBM).
Results: Among one hundred and twenty participants, sixty seven percent were females and the mean age (years) was 43.05 ± 14.16. Co-morbidities were present in twenty percent patients with eleven diagnosed with diabetes, six with hypertension and five with both hypertension and diabetes. The conversion rate to open surgery was 6.7%. The overall mean intra-operative scores were 3.52 ± 2.23; however significant difference was seen in mean operative score of converted to open and those not converted to open (8.00 ± 0.92 Vs. 3.20 V 1.92; p-value = 0.001). Among eight cases converted to open, three (37.5%) were in very difficult category while five (62.5%) were in extreme category. Moreover, age greater than 40 years and being diabetic were also the risk factors for conversion to open surgery.
Conclusion: The new intra-operative scoring system is a valuable assessment tool to predict difficult laparoscopic cholecystectomy and conversion parameters to open surgery and its utility could improve patient's clinical outcome indicated for laparoscopic cholecystectomy.
abstract_id: PUBMED:1362848
Cholinergic modulation of GH secretion in acromegalic patients before and after pituitary surgery. Cholinergic neurotransmission exerts a physiological control on GH secretion. Pirenzepine (Pz), an antagonist of muscarinic receptors, by enhancing hypothalamic somatostatin release, inhibits stimulated GH secretion in normal subjects but not in acromegalic patients. To address the hypothesis that a feedback effect of GH hypersecretion can be involved in this condition, GH responses to GHRH 1-29, 1 microgram/kg iv, with and without administration of Pz, 40 mg iv before tests, were investigated in eight acromegalic patients, before and 20-30 days after transsphenoidal adenomectomy. Pz diminished (p < 0.001) the incremental area under the curve (AUC) of GH responses to GHRH in seven normal controls. In contrast, GHRH responsiveness in untreated acromegalic patients was not affected by Pz. Postoperative basal GH levels decreased by 62.4 +/- 14.9% (p < 0.01). Pz inhibited GH responses to GHRH (p < 0.01). Furthermore, a direct relationship (r = 0.73, p < 0.01) between basal concentrations and the AUC of GH responses following Pz plus GHRH-test was found. The finding that muscarinic receptor activity recovered after the reduction of serum GH basal levels by pituitary surgery lends support to the proposed pathophysiological role of GH excess as a possible determinant factor in cholinergic-somatostatinergic dysfunction in acromegaly.
Answer: Yes, intra-operative GH measurement in acromegalic subjects can predict the completeness of surgery. A study conducted on 20 patients undergoing trans-sphenoidal pituitary surgery for acromegaly found that calculating the plasma half-life of GH from samples taken after the presumed complete resection of the adenoma could be a useful tool. In the cured patients, the plasma half-life of GH was significantly shorter (22.2 +/- 1.9 min) compared to those who were not cured, where the half-life was longer (35.8 +/- 5.9 min) or could not be calculated. By setting an upper normal limit for the GH plasma half-life at 25 minutes, the sensitivity of the test was 77%, specificity was 100%, and the positive predictive value was 100%. These results suggest that per-operative plasma GH monitoring could assist neurosurgeons in determining whether the surgical removal of the adenoma is complete (PUBMED:9797846). |
Instruction: Nuclear imaging techniques in the assessment of myocardial perfusion and function after CABG: does it correlate with CK-MB elevation?
Abstracts:
abstract_id: PUBMED:38286569
Effect of ischaemic postconditioning on markers of myocardial injury in ST-elevation myocardial infarction: a meta-analysis. Objectives: This study aimed to perform a meta-analysis of the short-term impact of ischaemic postconditioning (IPoC) on myocardial injury in ST elevation myocardial infarction (STEMI) using surrogate cardiac biomarkers.
Methods: Eligible studies were identified using several article databases. Randomised controlled trials published between 1 January 2000 and 1 December 2021 comparing IPoC to standard of therapy in STEMI patients were included in the search. Outcomes included surrogates of myocardial injury, specifically peak troponin, creatine-kinase (CK) and CK myoglobin binding (CK-MB) enzyme levels.
Results: 11 articles involving 1273 patients reported on CK-MB and 8 studies involving 505 patients reported on CK. Few studies used troponin as an outcome, thus, a subanalysis of troponin dynamics was not performed. Meta-regression analysis demonstrated no significant effect of IPoC on peak CK-MB (effect size -0.41, 95% CI -1.15 to 0.34) or peak CK (effect size -0.42, 95% CI -1.20 to 0.36). Linear regression analysis demonstrated a significant correlation between a history of smoking and CK-MB in the IPoC group (p=0.038).
Conclusions: IPoC does not seem to protect against myocardial injury in STEMI, except possibly in smokers. These results resonate with some studies using imaging techniques to ascertain myocardial damage. More research using troponin and cardiac imaging should be pursued to better assess the effects of IPoC on cardiovascular outcomes in STEMI.
abstract_id: PUBMED:30328026
Impact of tissue protrusion after coronary stenting in patients with ST-segment elevation myocardial infarction. Clinical impact of tissue protrusion (TP) after coronary stenting is still controversial, especially in patients with ST-segment elevation myocardial infarction (STEMI). A total of 104 STEMI patients without previous MI who underwent primary percutaneous coronary intervention (PCI) under intravascular ultrasound (IVUS)-guidance were included. Post-stenting grayscale IVUS analysis was performed, and the patients were classified according to the presence or absence of post-stenting TP on IVUS. Coronary angiography and single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) with 99mTc tetrofosmin were analyzed. Major adverse cardiac events were defined as cardiovascular death, myocardial infarction, heart failure hospitalization, and target vessel revascularization. TP on IVUS was detected in 62 patients (60%). Post-PCI coronary flow was more impaired, and peak creatine kinase-myoglobin binding level was higher in patients with TP compared to those without. SPECT MPI was performed in 77 out of 104 patients (74%) at 35.4 ± 7.7 days after primary PCI. In patients with TP, left ventricular ejection fraction was significantly reduced (47.5 ± 12.0% vs. 57.6 ± 11.2%, p < 0.001), and infarct size was larger [17% (8-25) vs. 4% (0-14), p = 0.002] on SPECT MPI. During a median follow-up of 14 months after primary PCI, Kaplan-Meier analysis demonstrated a significantly higher incidence of major adverse cardiac events in patients with TP compared to those without. TP on IVUS after coronary stenting was associated with poor outcomes in patients with STEMI.
abstract_id: PUBMED:19540571
Release of necrosis markers and cardiovascular magnetic resonance-derived microvascular perfusion in reperfused ST-elevation myocardial infarction. Introduction: The association of the temporal evolution of cardiac necrosis marker release with cardiovascular magnetic resonance-derived microvascular perfusion after ST-elevation myocardial infarction is unknown.
Methods: We analyzed 163 patients with a first ST-elevation myocardial infarction and a patent infarct-related artery treated with thrombolysis (67%) or primary angioplasty (33%). Using first-pass perfusion CMR, abnormal perfusion was defined as a lack of contrast arrival into the infarct area in >1 segment. Troponin I, creatine kinase MB and myoglobin were measured upon arrival and at 6, 12, 24, 48 and 96 hours after reperfusion.
Results: Abnormal perfusion was detected in 75 patients (46%) and was associated with a larger release of all 3 necrosis markers after reperfusion and higher peak values. This association was observed in the whole group and separately in patients treated with thrombolysis and primary angioplasty. Out of the 3 markers, troponin levels at 6 hours after reperfusion yielded the largest area under the receiver operating characteristic curve for prediction of abnormal perfusion (troponin: 0.69, creatine kinase MB: 0.65 and myoglobin: 0.58). In a comprehensive multivariate analysis, adjusted for clinical, angiographic, cardiovascular magnetic resonance parameters and all necrosis markers, high troponin levels at 6 hours after reperfusion (>median) independently predicted abnormal microvascular perfusion (OR 2.6 95%CI [1.2 - 5.5], p = .012).
Conclusions: In ST-elevation myocardial infarction, a larger release of cardiac necrosis markers soon after reperfusion therapy relates to abnormal perfusion. Troponin appears as the most reliable necrosis marker for an early detection of cardiovascular magnetic resonance-derived abnormal microvascular reperfusion.
abstract_id: PUBMED:11489766
Admission troponin T level predicts clinical outcomes, TIMI flow, and myocardial tissue perfusion after primary percutaneous intervention for acute ST-segment elevation myocardial infarction. Background: In ST-segment elevation myocardial infarction, a troponin T >/=0.1 microg/L on admission indicates poorer prognosis despite early reperfusion. To evaluate the underlying reason, we studied the value of cardiac troponin T (cTnT) for prediction of outcomes, epicardial blood flow, and myocardial reperfusion after primary percutaneous intervention.
Methods And Results: Patients (n=140) admitted within 12 hours after onset of symptoms were stratified by admission cTnT. Epicardial and myocardial reperfusion were graded by the TIMI score and by measurement of relative increases of myoglobin, cTnT, and creatine kinase (CK)-MB 60 minutes after recanalization, respectively. cTnT was positive in 64 patients (45.7%) and was associated with longer median time intervals to admission (5.5 versus 3.5 hours, P<0.001) and higher mortality rates after 30 days (12.5% versus 3.9%, P=0.06) and 9 months (14% versus 3.9%, P=0.005). cTnT independently predicted a 3.2-fold risk for incomplete epicardial reperfusion (P=0.03). In addition, cTnT >/=0.1 microg/L was associated with more severely impaired myocardial perfusion despite normal epicardial flow, as indicated by lower 60-minute ratios of myoglobin (2.6 versus 7.6, P=0.007), cTnT (6.6 versus 29.2, P<0.001), and CK-MB (3.5 versus 21.4, P=0.002) and a tendency for less resolution of ST-segment elevations (54% versus 60%, P=0.08).
Conclusions: cTnT predicts poorer clinical outcomes, lower rates of postprocedural TIMI 3 flow, and more severely compromised myocardial perfusion despite normal epicardial flow. Thus, a cTnT-positive patient may require more aggressive adjunctive therapy when treated by percutaneous coronary intervention. The impact of preexisting or evolving microvascular dysfunction and the effect of therapies that target myocardial perfusion require further prospective evaluation.
abstract_id: PUBMED:10608583
Comparison of acute rest myocardial perfusion imaging and serum markers of myocardial injury in patients with chest pain syndromes. Background: Newer diagnostic modalities such as serum markers and acute rest myocardial perfusion imaging (MPI) have been evaluated diagnostically in patients with chest pain in the emergency department (ED), but never concurrently. We compared these two modalities in distinguishing patients in the ED with symptomatic myocardial ischemia from those with non-cardiac causes.
Methods: Serum markers and acute technetium-99m sestamibi/tetrofosmin rest MPI were obtained in 75 patients admitted to the ED with chest pain and nondiagnostic electrocardiograms. Venous samples were drawn at admission and 8 to 24 hours later for total creatine kinase, CK-MB fraction, troponin T, troponin I, and myoglobin. Three nuclear cardiologists performed blinded image interpretation. Coronary artery disease (CAD) was confirmed either by diagnostic testing or by the occurrence of myocardial infarction (MI).
Results: Acute rest MPI results were abnormal in all 9 patients with MI. An additional 26 patients had objective evidence of CAD confirmed by diagnostic testing. The sensitivity of acute rest MPI for objective evidence of CAD was 73%. Serum troponin T and troponin I were highly specific for acute MI but had low sensitivity at presentation. Individual serum markers had very low sensitivity for symptomatic myocardial ischemia alone. In the multivariate regression model, only acute rest MPI and diabetes were independently predictive of CAD.
Conclusion: At the time of presentation and 8 to 24 hours later, acute rest MPI has a better sensitivity and similar specificity for patients with objective evidence of CAD when compared with serum markers.
abstract_id: PUBMED:17584670
Update on ACC/ESC criteria for acute ST-elevation myocardial infarction. Disruption of vulnerable or high-risk plaques is the common pathophysiological mechanism of acute coronary syndromes with or without ST elevation. The reflection of the same pathophysiological mechanism differs in non-ST-elevation acute coronary syndromes and ST-elevation myocardial infarction (STEMI) in terms of clinical presentation, prognosis and therapeutic approach. Diagnostic and therapeutic evolution had come along together from the beginning of the acute myocardial infarction (MI) concept. Pathological appearance of acute MI is classified as acute, healing and healed phases as a time related phenomenon. Clinical presentation of STEMI, is different than the other ischaemic cardiac events with the sudden onset, the duration and the severity of chest pain or discomfort. Although the old markers creatine kinase and the MB fraction, lactate dehydrogenase are also used for the diagnosis of acute MI, cardiac troponins are very sensitive and specific, and myoglobin is an early marker for acute MI. In electrocardiogram; new or presumed new ST segment elevation at the J point in two or more contiguous leads or Q wave in established MI are typical changes. Echocardiographic or nuclear techniques have been used widely to rule out or confirm STEMI. In conclusion; all clinical, pathological, biochemical, electrocardiographic analysis methods and new imaging techniques have their own unique contribution for evaluating STEMI.
abstract_id: PUBMED:3535001
The value of serum CK-MB and myoglobin measurements for assessing perioperative myocardial infarction after cardiac surgery. In 41 patients who underwent coronary bypass surgery, creatine kinase (CK)-MB mass concentration was repeatedly measured in serum during and after the intervention using a new two-site immunoenzymetric assay (IEMA). Serum CK-MB activity was determined with the use of four different techniques: immunoinhibition, immunoinhibition-immunoprecipitation, column chromatography and electrophoresis. Myoglobin (Mb) was also measured in each specimen by radioimmunoassay. In the 33 patients who followed a completely uneventful postoperative course, the cumulated CK-MB release was, on the average, 12.2-fold less than after acute myocardial infarction. The CK-MB peak concentrations using the IEMA were 33 +/- 3 micrograms/l (X +/- SEM) and occurred 6.4 +/- 0.5 h after the intervention was started; CK-MB levels had decreased to 2.9 +/- 0.4 micrograms/l at the end of the first postoperative day. The evolution of the CK-MB concentration was parallel to that of the enzyme activity. The serum Mb maximum concentrations (518 +/- 39 micrograms/l) were reached after 3.3 +/- 0.1 h. The other eight patients developed perioperative myocardial infarction (PMI); in this group, the cumulated CK-MB release was higher, and the serum CK-MB postoperative curves were of three different types. The patients with delayed CK-MB peaks (type I pattern) or sustained elevations (type III) of this isoenzyme also showed increased serum Mb levels at the end of the first postoperative day. The PMI patients with early (10 h) CK-MB elevations (type II) did not demonstrate abnormal serum Mb levels.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:10073813
Early diagnosis of acute myocardial infarction in patients without ST-segment elevation. Early identification of acute myocardial infarction (AMI) is necessary to initiate appropriate treatment. In patients presenting without ST-segment elevation, diagnosis is often dependent on the presence of elevated myocardial markers. This study examines the ability of serial MB mass alone and in combination with myoglobin in diagnosing AMI in patients without ST-segment elevation within 3 hours of presentation. In all, 2,093 patients were admitted and underwent serial marker analysis using myoglobin, creatine kinase (CK), and CK-MB at 0, 3, 6, and 8 hours. AMI was diagnosed by a CK-MB > or =8.0 ng/ml and a relative index (RI) (CK-MB x 100/total CK) > or =4.0. A total of 186 patients (9%) were diagnosed with AMI. The optimal diagnostic strategy was an elevated CK-MB + RI on the initial or 3-hour sample or at least a twofold increase in CK-MB without exceeding the upper range of normal over the 3-hour time period (sensitivity 93%, specificity 98%). The combination of an elevated CK-MB + RI or myoglobin on the initial or 3-hour sample had a sensitivity of 94%, although specificity was significantly lower, at 86%. Sensitivities and specificities after exclusion of the 242 patients with ischemic electrocardiographic changes were essentially unchanged. We conclude that most patients with AMI presenting with nondiagnostic electrocardiograms can be diagnosed within 3 hours of presentation.
abstract_id: PUBMED:9274441
Rhabdomyolysis and renal function impairment after isolated limb perfusion--comparison between the effects of perfusion with rhTNF alpha and a 'triple-drug' regimen. The aim of this study was to monitor serum and perfusate levels of myoglobin (MB) and creatine kinase (CK) during isolated limb perfusion (ILP) in order to identify those at risk of renal failure. We investigated the release of MB and CK in 40 patients who underwent ILP for melanoma (n = 15) or sarcoma (n = 25) using rhTNF alpha/melphalan (n = 28) or a triple-drug regimen (n = 12). Serial determinations of CK and MB were performed in both perfusate and systemic circulation during and after ILP and renal function was assessed. A significant increase of MB could be detected in the perfusate during ILP. After ILP, an up to 100-fold increase with a double peak of MB at 4 h and 24 h postoperatively was observed. The maximum elevation of serum activity of CK was at 30 h. The increase for both proteins was highly significant (P < 0.001). ILP with rhTNF alpha/melphalan yielded significantly (P < 0.001) higher serum values of MB and CK and also the impairment of the renal function was more pronounced. The peak values of MB after ILP occur early and allow the patients most at risk of developing renal failure to be identified. Rhabdomyolysis can be detected early by determination of MB from the perfusate. Further measurements twice daily for 2-3 days post ILP from serum samples as well as daily assessment of MB in the urine is helpful for detecting myoglobinuria and imminent renal failure.
abstract_id: PUBMED:9581854
Diagnostic application of CK-MB mass determination. Recent advances in analytic techniques have increased the diagnostic value of creatine kinase MB (CK-MB), enabling earlier and more sensitive results. The CK-MB mass immunoassays, that utilise the monoclonal anti-CK-MB in conjunction with anti-M or anti-B antibodies, are able to measure accurately small changes during the early hours after myocardial infarction (MI). CK-MB has two main limitations in diagnosing MI neither of which however undermines its established clinical value: CK-MB is not perfectly specific to cardiac injury, with increase occurring also during massive musculoskeletal injury; furthermore, the early release pattern of CK-MB limits its value for the late MI diagnosis. For the foreseeable future evidence is compelling for greater access to rapid testing capabilities in emergency situations, using protocols incorporating CK-MB mass evaluation together with other biochemical markers, i.e. myoglobin and troponins.
Answer: The provided abstracts do not directly address the correlation between nuclear imaging techniques in the assessment of myocardial perfusion and function after coronary artery bypass grafting (CABG) and creatine kinase MB (CK-MB) elevation. However, some abstracts discuss the use of nuclear imaging techniques and the measurement of CK-MB in the context of myocardial injury and infarction, which may provide indirect insights.
For instance, abstract PUBMED:30328026 discusses the use of single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) to assess myocardial perfusion and function after ST-segment elevation myocardial infarction (STEMI). It reports that patients with tissue protrusion (TP) after coronary stenting, which can be detected by intravascular ultrasound (IVUS), had impaired post-percutaneous coronary intervention (PCI) coronary flow and higher peak CK-MB levels. Additionally, these patients had reduced left ventricular ejection fraction and larger infarct size on SPECT MPI, suggesting a correlation between CK-MB levels and myocardial perfusion/function as assessed by nuclear imaging.
Abstract PUBMED:10608583 compares acute rest myocardial perfusion imaging (MPI) with serum markers of myocardial injury, including CK-MB, in patients with chest pain syndromes. It found that acute rest MPI had better sensitivity and similar specificity for patients with objective evidence of coronary artery disease (CAD) when compared with serum markers.
While these abstracts do not specifically address the correlation between nuclear imaging and CK-MB elevation post-CABG, they do suggest that nuclear imaging techniques such as SPECT MPI are valuable in assessing myocardial perfusion and function in the context of myocardial injury and that there may be an association between CK-MB levels and the findings of nuclear imaging. Further research would be needed to directly answer the question regarding the correlation post-CABG. |
Instruction: Do oral flora colonize the nasal floor of patients with oronasal fistulae?
Abstracts:
abstract_id: PUBMED:11420021
Do oral flora colonize the nasal floor of patients with oronasal fistulae? Objective: To determine if oral bacteria colonize the cleft nasal floor in patients with unilateral oronasal fistula when compared with the unaffected nasal floor and whether the results obtained would be of benefit in assessing oronasal fistulae in the clinic.
Design: Prospective study of 26 patients with cleft palate and unilateral oronasal fistula. Microbiological culture swabs were taken from the mouth and nasal floors of patients. The unaffected nasal floor was used as a control. Bacterial isolates were identified and compared in the laboratory by a senior microbiologist.
Main Outcomes Measure: A significant growth of oral bacteria from the cleft nasal floor when compared with the unaffected nasal floor.
Results: Four patients were excluded because no growth was found on any culture plate. In the remaining 22 cases, a light growth of oral flora was found in the cleft nasal floor in only 3 patients. No statistical correlation between culture of oral bacteria and the cleft nasal floor could be found (p =.12).
Conclusions: The relative lack of colonization of the cleft nasal floor by oral bacteria may reflect poor transmission of bacteria through the fistula, competition with commensal nasal flora, or an inability of oral bacteria to survive in a saliva-depleted area. The investigation is not helpful in the assessment of oronasal fistulae in the clinic.
abstract_id: PUBMED:12846609
Colonization of the cleft nasal floor by anaerobic oral flora in patients with oronasal fistulae. Objectives: Aerobic oral bacteria only rarely colonize the cleft nasal floor in patients with patent oronasal fistula. There are no studies that have investigated whether anaerobic oral flora colonize this site and whether attempting to culture them is useful for assessing the patency of oronasal fistulae in the clinic.
Design: A prospective study of 13 patients with cleft with patent unilateral oronasal fistulae. Microbiological culture swabs were taken from the oral cavity and both nasal floors, with the unaffected side being used as a control. Following aerobic and anaerobic culture, bacterial isolates were identified and compared.
Main Outcome Measure: A significant growth of anaerobic oral bacteria from the cleft nasal floor when compared with the unaffected side.
Results: Aerobic oral flora was cultured from the oral cavity in all 13 patients. A light growth of aerobic oral flora was found in the cleft nasal floor in two patients, and anaerobic oral flora was cultured from the cleft nasal floor in the same two patients. No statistical correlation was found between growth of anaerobic flora and the cleft nasal floor (p =.48).
Conclusions: Like aerobic oral flora, anaerobic oral bacteria would appear to only rarely colonize the cleft nasal floor in patients with oronasal fistulae. This additional investigation does not appear to be helpful in the assessment of oronasal fistulae in the clinic.
abstract_id: PUBMED:29733474
Computed tomographic description of the highly variable imaging features of equine oromaxillary sinus and oronasal fistulae. Oronasal and oromaxillary sinus fistulae are well-documented complications following removal or loss of a maxillary cheek tooth. Diagnosis is currently based on a combination of oral examination, videoendoscopy, radiography, and computed tomography (CT). The objective of this retrospective, case series study was to describe the CT characteristics of confirmed oronasal and oromaxillary sinus fistulae in a group of horses. Inclusion criteria were a head CT acquired at the authors' hospital during the period of 2012-2017, a CT diagnosis of oronasal or oromaxillary sinus fistulae, and a confirmed diagnosis based on a method other than CT. Signalment, clinical findings, oral examination findings, presence of a confirmed fistula, and method for confirmation of the diagnosis were recorded. A veterinary radiologist reviewed CT studies for all included horses and recorded characteristics of the fistulae. Seventeen horses were sampled. Fourteen oromaxillary sinus fistulae and three oronasal fistulae were identified. All fistulae appeared as variably sized focal defects in the alveolar bone. Defects frequently contained a linear tract of heterogeneous material interspersed with gas bubbles, considered consistent with food. Computed tomographic attenuation of the material (Hounsfield units, HU) varied widely within and between cases. In 16 of 17 cases, there was evidence of concurrent dental disease in addition to the fistulae. Although the gold standard diagnostic test remains identification of feed material within the sinus or nasal passages, findings from the current study support the use of CT as an adjunctive diagnostic test for assessing the extent of involvement and presurgical planning.
abstract_id: PUBMED:29719157
Oronasal Transfixion Suture to Prevent Uplifted Nasal Floor Deformity in Cleft Lip and Palate Patients: A 5-Year Follow-Up. Objective: In unilateral cleft lip and palate, the reconstructed nasal floor is sometimes uplifted regardless of the reconstructive method used. We used a 5-0 absorbable anchoring suture, the oronasal transfixion suture (ONT suture), to fasten the reconstructed nasal floor to the orbicularis oris muscle to prevent this deformity. This study was performed to evaluate the effects of the ONT suture.
Design: Blind retrospective study of photography and chart review.
Setting: Shinshu University Hospital, tertiary care, Nagano, Japan. Private practice.
Patients: Ninety-three consecutive patients with unilateral complete cleft lip and palate who had undergone primary nasolabial repair in our department and affiliated hospitals between 1999 and 2011 participated in this study. Finally, 45 patients were included.
Interventions: The ONT suture was put in place at the time of primary nasolabial repair.
Main Outcome Measure: The height of the nasal floor was evaluated on submental view photographs at 5 years old.
Results: The ONT suture was applied in 21 patients. The height of the nasal floor on the cleft side was significantly closer to that on the noncleft side with the ONT suture than without the ONT suture ( P = .008).
Conclusions: The ONT suture is effective to prevent uplifted nasal floor deformity on the cleft side// in unilateral complete cleft lip and palate at the time of primary nasolabial repair.
abstract_id: PUBMED:31196804
Persistent symptomatic anterior oronasal fistulae in patients with Veau type III and IV clefts: A therapeutic protocol and outcomes. Background: The anterior oronasal fistulae neighboring the alveolar cleft could persist or reappear after the alveolar reconstruction with cancellous bone grafting. The persistent symptomatic anterior oronasal fistulae need to be repaired, but surgery remains a challenge in cleft care. Surprisingly, this issue has rarely been reported in the literature. The purpose of this long-term study was to report a single surgeon experience with a therapeutic protocol for persistent symptomatic anterior oronasal fistula repair.
Methods: This is a retrospective study of consecutive patients with Veau type III and IV clefts and persistent symptomatic anterior oronasal fistulae managed according to a therapeutic protocol from 1997 to 2018. Depending on fistula size, patients were treated with local flaps associated with an interpositional graft or two-stage tongue flaps (small/medium or large fistulae, respectively). The surgical outcomes were classified as "good" (complete fistula closure with no symptoms), "fair" (asymptomatic narrow fistula remained), or "poor" (failure with persistent symptoms).
Results: Forty-four patients with persistent symptomatic anterior oronasal fistulae were reconstructed with local flaps associated with interpositional fascia or dermal fat grafting (52.3%) or two-stage tongue flaps (47.7%). Most of patients (93.2%) presented "good" outcomes, ranging from 87% to 100% (local and tongue flaps, respectively). Three (6.8%) patients presented symptomatic residual fistula ("poor" outcomes).
Conclusions: For the repair of persistent symptomatic anterior oronasal fistulae, this therapeutic protocol provided satisfactory outcome with low fistula recurrence rate.
abstract_id: PUBMED:27448430
Oronasal Masks Require a Higher Pressure than Nasal and Nasal Pillow Masks for the Treatment of Obstructive Sleep Apnea. Study Objectives: Oronasal masks are frequently used for continuous positive airway pressure (CPAP) treatment in patients with obstructive sleep apnea (OSA). The aim of this study was to (1) determine if CPAP requirements are higher for oronasal masks compared to nasal mask interfaces and (2) assess whether polysomnography and patient characteristics differed among mask preference groups.
Methods: Retrospective analysis of all CPAP implementation polysomnograms between July 2013 and June 2014. Prescribed CPAP level, polysomnography results and patient data were compared according to mask type (n = 358).
Results: Oronasal masks were used in 46%, nasal masks in 35% and nasal pillow masks in 19%. There was no difference according to mask type for baseline apnea-hypopnea index (AHI), body mass index (BMI), waist or neck circumference. CPAP level was higher for oronasal masks, 12 (10-15.5) cm H2O compared to nasal pillow masks, 11 (8-12.5) cm H2O and nasal masks, 10 (8-12) cm H2O, p < 0.0001 (Median [interquartile range]). Oronasal mask type, AHI, age, and BMI were independent predictors of a higher CPAP pressure (p < 0.0005, adjusted R(2) = 0.26.). For patients with CPAP ≥ 15 cm H2O, there was an odds ratio of 4.5 (95% CI 2.5-8.0) for having an oronasal compared to a nasal or nasal pillow mask. Residual median AHI was higher for oronasal masks (11.3 events/h) than for nasal masks (6.4 events/h) and nasal pillows (6.7 events/h), p < 0.001.
Conclusions: Compared to nasal mask types, oronasal masks are associated with higher CPAP pressures (particularly pressures ≥ 15 cm H2O) and a higher residual AHI. Further evaluation with a randomized control trial is required to definitively establish the effect of mask type on pressure requirements.
Commentary: A commentary on this article appears in this issue on page 1209.
abstract_id: PUBMED:31238041
Transmission of Oral Pressure Compromises Oronasal CPAP Efficacy in the Treatment of OSA. Background: An oronasal mask is frequently used to treat OSA. In contrast to nasal CPAP, the effectiveness of oronasal CPAP varies by unknown mechanisms. We hypothesized that oral breathing and pressure transmission through the mouth compromises oronasal CPAP efficacy.
Methods: Thirteen patients with OSA, well adapted to oronasal CPAP, were monitored by full polysomnography, pharyngeal pressure catheter, and nasoendoscope. Patients slept with low doses of midazolam, using an oronasal mask with sealed nasal and oral compartments. CPAP was titrated during administration by the oronasal and nasal routes, and was then reduced to induce stable flow limitation and abruptly switched to the alternate route. In addition, tape sealing the mouth was used to block pressure transmission to the oral cavity.
Results: Best titrated CPAP was significantly higher by the oronasal route rather than the nasal route (P = .005), and patients with > 25% oral breathing (n = 5) failed to achieve stable breathing during oronasal CPAP. During stable flow limitation, inspiratory peak flow was lower, driving pressure was higher, upper airway inspiratory resistance was higher, and retropalatal and retroglossal area were smaller by the oronasal rather than nasal route (P < .05 for all comparisons). Differences were observed even among patients with no oral flow and were abolished when tape sealing the mouth was used (n = 6).
Conclusions: Oral breathing and transmission of positive pressure through the mouth compromise oronasal CPAP.
abstract_id: PUBMED:25043897
The ontogeny of nasal floor shape variation in extant humans. Variation in nasal floor topography has generated both neontological and paleontological interest. Three categories of nasal floor shape (Franciscus: J Hum Evol 44 (2003) 699-727) have been used when analyzing this trait in extant humans and fossil Homo: flat, sloped, and depressed (or "bi-level"). Variation in the frequency of these configurations within and among extant and fossil humans has been well-documented (Franciscus: J Hum Evol 44 (2003) 699-727; Wu et al.: Anthropol Sci 120 (2012) 217-226). However, variation in this trait in Homo has been observed primarily in adults, with comparatively small subadult sample sizes and/or large age gradients that may not sufficiently track key ontogenetic changes. In this study, we investigate the ontogeny of nasal floor shape in a relatively large cross-sectional age sample of extant humans (n = 382) ranging from 4.0 months fetal to 21 years post-natal. Results indicate that no fetal or young infant individuals possess a depressed nasal floor, and that a depressed nasal floor, when present (ca. 21% of the sample), does not occur until 3.0 years postnatal. A canonical variates analysis of maxillary shape revealed that individuals with depressed nasal floors were also characterized by relatively taller anterior alveolar regions. This suggests that palate remodeling at about 3.0-3.5 years after birth, under the influence of tooth development, strongly influences nasal floor variation, and that various aspects of dental development, including larger crown/root size, may contribute to the development of a depressed nasal floor. These results in extant humans may help explain the high frequency of this trait found in Neandertal and other archaic Homo maxillae.
abstract_id: PUBMED:31797217
Nasal vs. oronasal mask during PAP treatment: a comparative DISE study. Purpose: The present study evaluated the upper airway pattern of obstruction in individuals undergoing drug-induced sleep endoscopy (DISE) exam with positive airway pressure (PAP), and compared this effect through a nasal or oronasal mask.
Methods: Prospective study. Patients requiring PAP due to obstructive sleep apnea (OSA) were evaluated through DISE at three different moments: (1) a baseline condition (without PAP); (2) PAP treatment with a nasal mask; and (3) PAP with an oronasal mask at the same pressure. The conditions were compared intra-individually, following VOTE classification. A TOTAL VOTE score (the sum of VOTE scores observed for each anatomical site) was also applied to compare intra-individual results.
Results: Thirteen patients were enrolled in the study. All patients presented multi-level pharyngeal obstruction at baseline condition. In six patients, the pattern of obstruction differed according to the mask. Nasal mask significantly decreased the obstruction score when compared with baseline condition both in velum (P value < 0.05) and oropharynx regions (P value < 0.005). TOTAL VOTE score was also significantly lower during nasal mask evaluation when compared with basal condition (P value < 0.005). Remarkably, oronasal mask with the same pressure was not as effective as nasal masks. Obstruction levels observed at the tongue base or epiglottis levels were more resistant to PAP treatment.
Conclusions: Collapse in velum and oropharyngeal sites is more compliant to PAP than obstruction at lower levels of the pharynx, either with nasal or oronasal masks. Nasal mask is superior to prevent pharyngeal collapse than oronasal devices under the same pressure.
abstract_id: PUBMED:21731145
Equivalence of nasal and oronasal masks during initial CPAP titration for obstructive sleep apnea syndrome. Study Objective: Continuous positive airway pressure (CPAP) titration studies are commonly performed using a nasal mask but some patients may prefer a full-face or oronasal mask. There is little evidence regarding the equivalence of different mask interfaces used to initiate treatment. We hypothesized that oronasal breathing when using an oronasal mask increases upper airway collapsibility and that a higher pressure may be required to maintain airway patency. We also assessed patient preferences for the 2 mask interfaces.
Design: Prospective, randomized, cross-over design with 2 consecutive CPAP titration nights.
Setting: Accredited laboratory in a university hospital.
Patients Or Participants: Twenty-four treatment-naive subjects with obstructive sleep apnea syndrome and respiratory disturbance index of greater than 15 events per hour.
Interventions: CPAP titration was performed using an auto-titrating machine with randomization to a nasal or oronasal mask, followed by a second titration night using the alternate mask style.
Measurements And Results: There was no significant difference in the mean pressures determined between nasal and oronasal masks, although 43% of subjects had nasal-to-oronasal mask-pressure differences of 2 cm H(2)O or more. Residual respiratory events, arousals, and measured leak were all greater with the oronasal mask. Seventy-nine percent of subjects preferred the nasal mask.
Conclusions: Patients with obstructive sleep apnea syndrome can generally switch between nasal and oronasal masks without changing machine pressure, although there are individual differences that may be clinically significant. Measured leak is greater with the oronasal mask. Most patients with obstructive sleep apnea syndrome prefer a nasal mask as the interface for initiation of CPAP.
Clinical Trial Registration: Australian New Zealand Clinical Trials Registry (ANZCTR). ACTRN: ACTRN12611000243910. URL: http://www.ANZCTR.org.au/ACTRN12611000243910.aspx
Answer: The studies investigating the colonization of the nasal floor by oral flora in patients with oronasal fistulae suggest that oral bacteria do not commonly colonize the cleft nasal floor. In a prospective study of 26 patients with cleft palate and unilateral oronasal fistula, only a light growth of oral flora was found in the cleft nasal floor in 3 patients out of the 22 cases that could be evaluated, with no statistical correlation between culture of oral bacteria and the cleft nasal floor (p =.12) (PUBMED:11420021). Similarly, another study involving 13 patients with cleft with patent unilateral oronasal fistulae found that aerobic oral flora was cultured from the oral cavity in all patients, but only a light growth of aerobic oral flora and anaerobic oral flora was found in the cleft nasal floor in two patients, with no statistical correlation between growth of anaerobic flora and the cleft nasal floor (p =.48) (PUBMED:12846609). These findings indicate that both aerobic and anaerobic oral bacteria rarely colonize the cleft nasal floor in patients with oronasal fistulae, and the additional investigation does not appear to be helpful in the assessment of oronasal fistulae in the clinic. |
Instruction: Does a mineral wristband affect balance?
Abstracts:
abstract_id: PUBMED:26113281
Does a mineral wristband affect balance? A randomized, controlled, double-blind study. Background: Having good balance is a facilitating factor in the performance of everyday activities. Good balance is also essential in various sport activities in order to both get results and prevent injury. A common measure of balance is postural sway, which can be measured both antero-posteriorly and medio-laterally. There are several companies marketing wristbands whose intended function is to improve balance, strength and flexibility. Randomized controlled trials have shown that wristbands with holograms have no effect on balance but studies on wristbands with minerals seem to be lacking.
Objective: The aim of this study was to investigate if the mineral wristband had any effect on postural sway in a group of healthy individuals.
Study Design: Randomized, controlled, double-blind study.
Material/methods: The study group consisted of 40 healthy persons. Postural sway was measured antero-posteriorly and medio-laterally on a force plate, to compare: the mineral wristband, a placebo wristband, and without any wristband. The measurements were performed for 30 s, in four situations: with open eyes and closed eyes, standing on a firm surface and on foam. Analyses were made with multilevel technique.
Results: The use of wristband with or without minerals did not alter postural sway. Closed eyes and standing on foam both prolonged the dependent measurement, irrespective if it was medio-lateral or antero-posterior. Wearing any wristband (mineral or placebo) gave a small (0.22-0.36 mm/s) but not statistically significant reduction of postural sway compared to not wearing wristband.
Conclusion: This study showed no effect on postural sway by using the mineral wristband, compared with a placebo wristband or no wristband. Wearing any wristband at all (mineral or placebo) gave a small but not statistically significant reduction in postural sway, probably caused by sensory input.
abstract_id: PUBMED:24611240
The effect of a silicone wristband in dynamic balance. The effect of a wristband on the dynamic balance of young adults was assessed. Twenty healthy young adults wore a commercial Power BalanceT or fake silicone wristband. A 3D accelerometer was attached to their lumbar region to measure body sway. They played the video game Tightrope (Wii video game console) with and without a wristband; body sway acceleration was measured. Mean balance sway acceleration and its variability were the same in all conditions, so silicone wristbands do not modify dynamic balance control.
abstract_id: PUBMED:28723821
Can a Balance Wristband Influence Postural Control? Eichhorn, S, Foerster, S, Friemert, B, Willy, C, Riesner, H-J, and Palm, H-G. Can a balance wristband influence postural control? J Strength Cond Res 34(12): 3416-3422, 2020-Top sports performances cannot be achieved without a high level of postural control. Balance wristbands purport to improve the mental and physical balance of the wearer. It is still unclear, however, whether these wristbands can indeed enhance postural control. Our aim was to ascertain through computerized dynamic posturography whether balance wristbands can improve postural stability. In this randomized controlled single-blind clinical study, posturography was used to assess postural control in 179 healthy subjects with or without a balance wristband. Tests were also performed with the subjects blinded to whether they were wearing an intact or a defective wristband. Analysis of variance (ANOVA) was used to detect significant differences (p ≤ 0.05). Stability indexes did not reveal significant differences in postural control between wearing and not wearing a wristband. Our study did not provide evidence of an improvement in postural stability. Because the single-blind trials too revealed no significant differences, a placebo effect could be ruled out.
abstract_id: PUBMED:33574950
Mediating Role of Resilience in the Relationships Between Fear of Happiness and Affect Balance, Satisfaction With Life, and Flourishing. The present study was a first attempt to examine the mediating role of resilience in the relationships between fear of happiness and affect balance, satisfaction with life, and flourishing. Participants consisted of 256 Turkish adults (174 males and 82 females) and aged between 18 and 62 years (M = 36.97, SD = 9.02). Participants completed measures assessing fear of happiness, affect balance, satisfaction with life, and flourishing. The results showed that fear of happiness was negatively correlated with resilience, affect balance, satisfaction with life, and flourishing, while resilience was positively correlated with affect balance, satisfaction with life, and flourishing. The results of mediation analysis showed that (a) resilience fully mediated the effect of fear of happiness upon flourishing, and satisfaction with life, (b) partially mediated the effect of fear of happiness upon affect balance. These findings suggest that resilience helps to explain the associations between fear of happiness and affect balance, satisfaction with life, and flourishing. This study elucidates the potential mechanism behind the association between fear of happiness and indicators of well-being.
abstract_id: PUBMED:37683782
Silicone wristband as a sampling tool for insecticide exposure assessment of vegetable farmers. The use of passive sampling devices (PSDs) as an appropriate alternative to conventional methods of assessing human exposure to environmental toxicants was studied. One-time purposive sampling by a silicone wristband was used to measure insecticide residues in 35 volunteer pepper farmers in the Vea irrigation scheme in the Guinea savannah and the Weija irrigation scheme in the coastal savannah ecological zones of Ghana. A GC-MS/MS method was developed and validated for quantifying 18 insecticides used by farmers in Ghana. Limits of detection (LODs) and quantitation (LOQs) ranged from 0.64 to 67 and 2.2-222 ng per wristband, respectively. The selected insecticides showed a range of concentrations in the various silicone wristbands from not detected to 27 μg/wristband. The concentrations of 13 insecticides were above their LOQs. Chlorpyrifos had the highest detection frequencies and concentrations, followed by cyhalothrin and then allethrin. This study shows that silicone wristbands can be used to detect individual insecticide exposures, providing a valuable tool for future exposure studies. Ghanaian vegetable farmers are substantially exposed to insecticides. Hence, the use of appropriate personal protective equipment is recommended.
abstract_id: PUBMED:31392731
Allergic contact dermatitis caused by 1,6-hexanediol diacrylate in a hospital wristband. Background: 1,6-Hexanediol diacrylate (1,6-HDDA) is a multifunctional acrylate and a potent sensitizer.
Objectives: To report a case of allergic contact dermatitis caused by 1,6-HDDA in a hospital wristband.
Methods: A male patient presented with eczema on his wrist where he had worn a hospital wristband. Patch testing was performed with our extended European baseline series, additional series, and pieces of the hospital wristband. Thin-layer chromatography (TLC) was performed with extracts from the wristband and gas chromatography-mass spectrometry was used for chemical analysis.
Results: Positive reactions were found to pieces of the wristband, including adhesive rim (+++), inside (+++), and outside (++); to multiple allergens in the (meth)acrylates series; and to extracts of the wristband in acetone and ethanol. Chemical analysis of the ethanol extract showed presence of lauryl acrylate and 1,6-HDDA. Patch testing with TLC strips and subsequent chemical analysis showed that the substance causing the strongest reaction was 1,6-HDDA, to which the patient had a confirmed positive patch test reaction.
Conclusion: 1,6-HDDA was identified as the culprit allergen responsible for allergic contact dermatitis caused by the hospital wristband.
abstract_id: PUBMED:22762695
Efficacy of the Power Balance Silicone Wristband: a single-blind, randomized, triple placebo-controlled study. Introduction: The Power Balance Silicone Wristband (Power Balance LLC, Laguna Niguel, CA) (power balance band; PBB) consists of a silicone wristband, incorporating two holograms, which is meant to confer improvements in balance on the wearer. Despite its popularity, the PBB has become somewhat controversial, with a number of articles being published in the news media regarding its efficacy. The PBB has not been formally evaluated but remains popular, largely based on anecdotal evidence. This study subjectively and objectively measured the effects of the PBB on balance in normal participants.
Methods: A prospective, single-blind, randomized, triple placebo-controlled crossover study was undertaken. Twenty participants underwent measurement using the modified Test of Sensory Interaction on Balance (mCTSIB) and gave subjective feedback (visual analogue scale [VAS]) for each of four band conditions: no band, a silicone band, a deactivated PBB, and the PBB. Participants acted as their own controls.
Results: The mean of the four mCTSIB conditions (eyes open and closed on both firm and compliant surfaces) was calculated. This mean value and condition 4 of the mCTSIB were compared between band conditions using path length (PL) and root mean square (RMS) as outcome measures. No significant differences were found between band conditions for PL (p = .91 and p = .94, respectively) and RMS (p = .85 and p = .96, respectively). VASs also showed no difference between bands (p = .25).
Conclusion: The PBB appears to have no effect on mCTSIB or VAS measurements of balance.
abstract_id: PUBMED:31910939
Validity and Reliability of the Wristband Activity Monitor in Free-living Children Aged 10-17 Years. Objective: In this study we aimed to examine the reliability and validity of the wristband activity monitor against the accelerometer for children..
Methods: A total of 99 children (mean age = 13.0 ± 2.5 y) wore the two monitors in a free-living context for 7 days. Reliability was measured by intraclass correlation to evaluate consistency over time. Repeated-measures analyses of variance was used to detect differences across days. Spearman's correlation coefficient (rho), median of absolute percentage error, and Bland-Altman analyses were performed to assess the validity of the wristband against the ActiGraph accelerometer. The optimal number of repeated measures for the wristband was calculated by using the Spearman-Brown prophecy formula.
Results: The wristband had high reliability for all variables, although physical activity data were different across 7 days. A strong correlation for steps (rho: 0.72, P < 0.001), and moderate correlations for time spent on total physical activity (rho: 0.63, P < 0.001) and physical activity energy expenditure (rho: 0.57, P < 0.001) were observed between the wristband and the accelerometer. For different intensities of physical activity, weak to moderate correlations were found (rho: 0.38 to 0.55, P < 0.001).
Conclusion: The wristband activity monitor seems to be reliable and valid for measurement of overall children's physical activity, providing a feasible objective method of physical activity surveillance in children.
abstract_id: PUBMED:24408751
Affect Balance and Relationship With Well-Being in Nursing Home Residents With Dementia. The purpose of this exploratory study was to determine whether the balance of positive to negative affect can discriminate states of well-being in nursing home residents with dementia and whether affect balance is associated with activity engagement. Baseline data from a randomized clinical trial were used in this secondary analysis. Participants were 128 residents recruited from 9 nursing homes located in Pennsylvania. Participants were primarily female, Caucasian, and had a mean age of 86 years. Measures of agitation and affect were taken from video recordings of 10 observational sessions. Measures of engagement and self-reported mood were taken in real time. Ten percent of participants were categorized as having high well-being. These participants had a mean ratio of positive-negative affect of 2.21 (±0.50), a ratio significantly different than that of participants with moderate or low well-being. Affect balance was related to greater engagement in activity (r= .23;P= .008).
abstract_id: PUBMED:34253099
12-month trajectory and predictors of affect balance in nursing home residents living with dementia. Objectives:Emotional expressions in late-stage dementia have traditionally been studied within a deficit paradigm. Moving the narrative of the dementia trajectory from a solely negative pathological experience to one that acknowledges the potential for positive experiences aligns with international recommendations for living well with dementia. The purpose of this study was to extend prior research by examining the pattern of well-being using affect balance, the ratio of positive to negative affect, in nursing home residents living with dementia over 12 months and its association to factors that could potentially influence resident well-being.Method:This study was a secondary analysis of baseline, 4 and 12-month data from a pragmatic clinical trial. A total of 536 residents with moderate to severe cognitive impairments from 55 nursing homes were included in the multivariable linear mixed model regression analyses.Results:Resident function, the number of registered nurse hours devoted to care in the facility, and the quality of staff interaction predicted higher affect balance over time after controlling for other variables.Conclusion:The findings provide support for the utility of affect balance as a meaningful outcome measure of well-being for persons living with dementia. In addition, results point to specific interventions (i.e. maintaining/improving resident function, providing adequate nurse staffing levels, and improving staff communications skills) that can serve as the focus for both research and practice to help residents live well with dementia. Clinicaltrials.gov (NCT03014570).
Answer: Based on the provided abstracts, the consensus is that a mineral wristband does not affect balance. A randomized, controlled, double-blind study found that the use of a mineral wristband did not alter postural sway when compared with a placebo wristband or no wristband at all (PUBMED:26113281). Similarly, another study assessing the effect of a silicone wristband on dynamic balance found that the wristband did not modify dynamic balance control in young adults (PUBMED:24611240). Furthermore, a randomized controlled single-blind clinical study using computerized dynamic posturography to assess postural control with or without a balance wristband found no evidence of improvement in postural stability, ruling out even a placebo effect (PUBMED:28723821). Lastly, a study evaluating the efficacy of the Power Balance Silicone Wristband, which is meant to improve balance, found no effect on balance measurements (PUBMED:22762695). Therefore, the evidence from these studies suggests that mineral wristbands do not have an effect on balance. |
Instruction: The response of paroxysmal supraventricular tachycardia to overdrive atrial and ventricular pacing: can it help determine the tachycardia mechanism?
Abstracts:
abstract_id: PUBMED:8269296
The response of paroxysmal supraventricular tachycardia to overdrive atrial and ventricular pacing: can it help determine the tachycardia mechanism? Introduction: Standard electrophysiologic techniques generally allow discrimination among mechanisms of paroxysmal supraventricular tachycardia. The purpose of this study was to determine whether the response of paroxysmal supraventricular tachycardia to atrial and ventricular overdrive pacing can help determine the tachycardia mechanism.
Methods And Results: Fifty-three patients with paroxysmal supraventricular tachycardia were studied. Twenty-two patients had the typical form of atrioventricular (AV) junctional (nodal) reentry, 18 patients had orthodromic AV reentrant tachycardia, 10 patients had atrial tachycardia, and 3 patients had the atypical form of AV nodal reentrant tachycardia. After paroxysmal supraventricular tachycardia was induced, 15-beat trains were introduced in the high right atrium and right ventricular apex sequentially with cycle lengths beginning 10 msec shorter than the spontaneous tachycardia cycle length. The pacing cycle length was shortened in successive trains until a cycle of 200 msec was reached or until tachycardia was terminated. Several responses of paroxysmal supraventricular tachycardia to overdrive pacing were useful in distinguishing atrial tachycardia from other mechanisms of paroxysmal supraventricular tachycardia. During decremental atrial overdrive pacing, the curve relating the pacing cycle length to the VA interval on the first beat following the cessation of atrial pacing was flat or upsloping in patients with AV junctional reentry or AV reentrant tachycardia, but variable in patients with atrial tachycardia. AV reentry and AV junctional reentry could always be terminated by overdrive ventricular pacing whereas atrial tachycardia was terminated in only one of ten patients (P < 0.001). The curve relating the ventricular pacing cycle length to the VA interval on the first postpacing beat was flat or upsloping in patients with AV junctional reentry and AV reentry, but variable in patients with atrial tachycardia. The typical form of AV junctional reentry could occasionally be distinguished from other forms of paroxysmal supraventricular tachycardia by the shortening of the AH interval following tachycardia termination during constant rate atrial pacing.
Conclusions: Atrial and ventricular overdrive pacing can rapidly and reliably distinguish atrial tachycardia from other mechanisms of paroxysmal supraventricular tachycardia and occasionally assist in the diagnosis of other tachycardia mechanisms. In particular, the ability to exclude atrial tachycardia as a potential mechanism for paroxysmal supraventricular tachycardia has important implications for the use of catheter ablation techniques to cure paroxysmal supraventricular tachycardia.
abstract_id: PUBMED:17537206
Utility of atrial and ventricular cycle length variability in determining the mechanism of paroxysmal supraventricular tachycardia. Introduction: No prior studies have systematically investigated the diagnostic value of cycle length (CL) variability in differentiating the mechanism of paroxysmal supraventricular tachycardia (PSVT).
Methods And Results: We studied 173 consecutive patients with PSVT; 86 typical atrioventricular nodal reentrant tachycardia (AVNRT), 11 atypical AVNRT, 47 orthodromic reciprocating tachycardia (ORT), and 29 with atrial tachycardia (AT). Two consecutive atrial cycles that displayed the most CL variability were selected for analysis. One hundred and twenty-six patients (73%) had > or = 15 msec variability in tachycardia CL. The change in atrial CL predicted the change in subsequent ventricular CL in six of eight patients (75%) with atypical AVNRT, 18 of 21 patients (86%) with AT, in none of 66 patients with typical AVNRT, and in 32 patients with ORT. The change in atrial CL was predicted by the change in preceding ventricular CL in 55 of 66 patients (83%) with typical AVNRT, no patient with atypical AVNRT, 27 of 31 patients (87%) with ORT, and one of 21 patients (5%) with AT. The sensitivity, specificity, and positive and negative predictive values of a change in atrial CL predicting the change in ventricular CL for AT or atypical AVNRT were 83%, 100%, 100%, and 95%, respectively. The corresponding values for the change in atrial CL being predicted by the change in the preceding ventricular CL for typical AVNRT or ORT were 85%, 97%, 99%, and 65%, respectively.
Conclusion: Tachycardia CL variability > or = 15 msec is common in PSVT. A change in atrial CL that predicts the change in subsequent ventricular CL strongly favors AT or atypical AVNRT. A change in atrial CL that is predicted by the change in the preceding ventricular CL favors typical AVNRT or ORT.
abstract_id: PUBMED:10080480
A technique for the rapid diagnosis of atrial tachycardia in the electrophysiology laboratory. Objective: The purpose of this study was to determine if the atrial response upon cessation of ventricular pacing associated with 1:1 ventriculoatrial conduction during paroxysmal supraventricular tachycardia is a useful diagnostic maneuver in the electrophysiology laboratory.
Background: Despite various maneuvers, it can be difficult to differentiate atrial tachycardia from other forms of paroxysmal supraventricular tachycardia.
Methods: The response upon cessation of ventricular pacing associated with 1:1 ventriculoatrial conduction was studied during four types of tachycardia: 1) atrioventricular nodal reentry (n = 102), 2) orthodromic reciprocating tachycardia (n = 43), 3) atrial tachycardia (n = 19) and 4) atrial tachycardia simulated by demand atrial pacing in patients with inducible atrioventricular nodal reentry or orthodromic reciprocating tachycardia (n = 32). The electrogram sequence upon cessation of ventricular pacing was, categorized as "atrial-ventricular" (A-V) or "atrial-atrial-ventricular" (A-A-V).
Results: The A-V response was observed in all cases of atrioventricular nodal reentrant and orthodromic reciprocating tachycardia. In contrast, the A-A-V response was observed in all cases of atrial tachycardia and simulated atrial tachycardia, even in the presence of dual atrioventricular nodal pathways or a concealed accessory atrioventricular pathway.
Conclusions: In conclusion, an A-A-V response upon cessation of ventricular pacing associated with 1:1 ventriculoatrial conduction is highly sensitive and specific for the identification of atrial tachycardia in the electrophysiology laboratory.
abstract_id: PUBMED:1373411
Triggered activity as the proposed mechanism of left atrial tachycardia induced by premature ventricular beats. In a 57-year-old woman with complex ventricular ectopy, a paroxysmal supraventricular tachycardia initiated by premature ventricular beats is presented. She underwent an electrophysiologic study. The tachycardia origin was localised to the left atrium. At the presence of retrograde dual atrioventricular nodal pathway, the atrial tachycardia was induced by programmed ventricular stimulation. Triggered activity was shown to be the likely mechanism of both atrial and ventricular arrhythmias.
abstract_id: PUBMED:2144947
Plasma prohormone atrial natriuretic peptides 1-98 and 31-67 increase with supraventricular and ventricular arrhythmias. Recently two peptides consisting of amino acids (AA) 1-30 and 31-67 of the N-terminus of the 126 AA prohormone of atrial natriuretic factor (pro ANF) as well as atrial natriuretic factor (ANF, AA 99-126; C-terminus) were found to have vasodilatory and natriuretic properties. These peptides as well as ANF circulate in man as part of the N-terminus of the prohormone. To determine if the polyuria, associated with both ventricular and supraventricular arrhythmias, is associated with increased circulating concentrations of the N-terminus and C-terminus of the ANF prohormone, 20 individuals with spontaneous arrhythmias, including ten persons with atrial fibrillation, six with paroxysmal supraventricular tachycardia, and four with ventricular tachycardia, were evaluated before and after conversion to sinus rhythm. In all 20 patients, the circulating concentrations of the whole N-terminus (ie, AA 1-98), the midportion of the N-terminus (pro ANF 31-67) that circulates as a distinct 3900 molecular weight peptide after being proteolytically cleaved from the N-terminus, and the C-terminus were significantly higher (p less than 0.001) than their concentration in 54 persons with sinus rhythm. With conversion to sinus rhythm, the plasma C-terminus concentration of these 20 arrhythmia patients decreased to the level of persons with sinus rhythm within 30 minutes.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:3915712
Use of electrical pacemakers in the treatment of ventricular tachycardia and ventricular fibrillation. Significant advances have been made in the therapy of ventricular arrhythmias. Many new antiarrhythmic drugs have expanded the medical armamentarium to treat ventricular tachycardia and ventricular fibrillation, and the use of intracardiac electrophysiologic studies has aided in predicting long-term drug efficacy. Major advances have also been made in the surgical treatment of arrhythmias. However, there remain a number of patients in whom ventricular arrhythmias remain a major therapeutic problem and, in some of these, electrical devices may aid in treatment. Overdrive pacing may prevent certain cases of ventricular arrhythmias, and antitachycardia devices may be useful in terminating paroxysmal ventricular tachycardia. In certain circumstances, internal cardioversion or defibrillation may be an alternative. At present, antitachycardia pacing and internal countershock must be considered as forms of therapy to be used when medical and surgical therapy are impractical or have failed. Careful selection is necessary to delineate patients in whom these forms of therapy may be indicated.
abstract_id: PUBMED:18299309
A novel pacing manoeuvre to diagnose atrial tachycardia. Aims: Currently used diagnostic manoeuvres at the electrophysiology study do not always allow for consistent identification of atrial tachycardia (AT), either because of inapplicability of the technique or because of low predictive value and specificity. The aim of this study was to determine whether overdrive atrial pacing during paroxysmal supraventricular tachycardia (SVT) with the same cycle length from both the high right atrium and the coronary sinus can accurately identify or exclude AT by examining the difference between the V-A intervals of the first returning beat of tachycardia between the two pacing sites.
Methods And Results: Fifty-two patients were included; 24 patients with atrioventricular nodal re-entry tachycardia (AVNRT), 13 patients with atrioventricular re-entry tachycardia (AVRT), and 15 patients with AT. Comparing the 37 non-AT patients with the 15 AT patients, there was a highly significant difference between the mean V-A interval difference, (delta V-A) 2.1 +/- 1.8 ms (range 0-9 ms) vs. 79.1 +/- 42 (range 22-267 ms) (P < 0.001), respectively. None of the patients in the non-AT group had a delta V-A > 10 ms. In contrast, all 15 patients with AT had a delta V-A interval >10 ms. Thus, the diagnostic accuracy of the delta V-A interval cut-off of >10 ms was 100%, with a 95% confidence interval of 93.1-100% for AT. In 11 (73%) of the 15 AT patients, the standard ventricular overdrive pacing manoeuvre was not possible. In 14 of the 15 patients (93%) in the AT group, standard atrial overdrive pacing showed variable V-A intervals, correctly diagnosing AT. In all 52 patients, this measurement was repeated during pacing from the other location. In five patients from the AT group, the result of the second attempt was different from the result of the first attempt.
Conclusion: We found that atrial differential pacing during paroxysmal SVT without termination of tachycardia and the finding of variable returning V-A interval was highly sensitive and specific for the diagnosis of AT. The manoeuvre can be easily performed in all patients with SVT and is highly reproducible. It is a useful adjunct to the currently available ventricular and atrial pacing manoeuvres.
abstract_id: PUBMED:20497352
Doubling of the ventricular rate by interpolated junctional extrasystoles resembling supraventricular tachycardia. In a study of seven cases of paroxysmal supraventricular tachycardia, it was noted that the fast rate was not caused by the mechanism of rapid firing, reentry, or dual atrioventricular nodal conduction but by an abrupt doubling of the rate by interpolation of junctional extrasystoles between adjacent sinus beats while the sinus mechanism remained undisturbed. Dual ventricular response to a single atrial depolarization was seriously considered in each case. The intervals separating the junctional extrasystoles tended to be quite fixed, thus conforming to the pattern of junctional parasystole with an intrinsic rate very close to the rate of the dominant sinus rhythm. The paroxysms of tachycardia were transient, lasting a few seconds to 3.5 minutes. The onset and termination of the paroxysms were completely unpredictable and appeared unrelated to any change in the basic sinus rate or other identifiable mechanism. In only one case, case 7, the concept of dual ventricular response appeared tenable. However, as will be discussed later, the mechanism of junctional parasystole was found to be physiologically more acceptable.
abstract_id: PUBMED:7069321
Rate-related accelerating (autodecremental) atrial pacing for reversion of paroxysmal supraventricular tachycardia. Twenty consecutive patients with paroxysmal intra A-V nodal or atrio-ventricular tachycardia had a new tachycardia reversion pacing modality evaluated during routine electrophysiological study. The pacing was controlled by a micropressor interfaced with a stimulator connected to a right atrial pacing electrode. On detection of tachycardia the first pacing cycle interval is equal to the tachycardia cycle length minus a decrement value D. Each subsequent pacing cycle is further reduced by the same value of D, thus accelerating the pacing burst until a plateau of 100 beats/min faster than tachycardia (with an absolute lower limit of 275 beats/min) is reached. Seven different values of D (2, 4, 8, 16, 24, 34, 50 msec) were assessed in combination with three different durations of pacing P (500, 5000 msec). With P:500, only 2/20 tachycardias were terminated, but with P:1000, 16/20 were terminated. With P:5000 all were terminated and the combination successful in all patients was P:5000 and D:16. No unwanted arrhythmias were induced. In contrast, competitive constant rate overdrive atrial pacing accomplished tachycardia termination in all cases, but in four instances resulted in atrial flutter or fibrillation. Autodecremental pacing, which tends to avoid stimulation in the vulnerable period, allowed safe and successful termination of all tachycardias evaluated in this study.
abstract_id: PUBMED:2946210
Plasma levels of immunoreactive atrial natriuretic factor increase during supraventricular tachycardia. A significant diuretic and natriuretic response occurs during paroxysmal supraventricular tachycardia (SVT). Although the diuresis may be secondary to suppression of vasopressin secretion, the etiology of the natriuresis remains unexplained. To determine if atrial natriuretic factor (ANF) could contribute to the polyuric response during SVT, 10 patients were studied: five during spontaneous SVT and five during simulated SVT produced by rapid simultaneous atrial and ventricular pacing. Plasma immunoreactive ANF (IR-ANF) levels measured by radioimmunoassay were obtained at baseline (before and/or 24 to 48 hours after SVT) and after at least 15 minutes of SVT in all patients. During spontaneous and simulated SVT, IR-ANF was significantly elevated (mean +/- SE; 275 +/- 68 pmol/L) compared to baseline (28 +/- 7 pmol/L; p = 0.0036). Similar increases in IR-ANF were noted during both simulated and spontaneous SVT. To determine if this IR-ANF release was related to the increase in heart rate or the rise in right atrial pressure during SVT, IR-ANF levels were also measured in five patients with sinus tachycardia and in six patients with congestive heart failure. IR-ANF was significantly related to right atrial pressure (r = 0.93; p = 0.0009) but not to heart rate (r = 0.46). Thus, IR-ANF is elevated during SVT and may contribute to the natriuretic response. The stimulus to IR-ANF secretion during SVT appears to be related to the rise in right atrial pressure rather than to the increase in heart rate.
Answer: Yes, the response of paroxysmal supraventricular tachycardia (PSVT) to overdrive atrial and ventricular pacing can help determine the tachycardia mechanism. A study involving 53 patients with PSVT found that several responses to overdrive pacing were useful in distinguishing atrial tachycardia from other mechanisms of PSVT. For example, during decremental atrial overdrive pacing, the curve relating the pacing cycle length to the VA interval on the first beat following the cessation of atrial pacing was flat or upsloping in patients with AV junctional reentry or AV reentrant tachycardia, but variable in patients with atrial tachycardia. AV reentry and AV junctional reentry could always be terminated by overdrive ventricular pacing, whereas atrial tachycardia was terminated in only one of ten patients. These findings suggest that atrial and ventricular overdrive pacing can rapidly and reliably distinguish atrial tachycardia from other mechanisms of PSVT and occasionally assist in the diagnosis of other tachycardia mechanisms (PUBMED:8269296).
Another study involving 173 consecutive patients with PSVT investigated the diagnostic value of cycle length (CL) variability in differentiating the mechanism of PSVT. The study found that a change in atrial CL that predicts the change in subsequent ventricular CL strongly favors atrial tachycardia (AT) or atypical AVNRT, while a change in atrial CL that is predicted by the change in the preceding ventricular CL favors typical AVNRT or orthodromic reciprocating tachycardia (ORT) (PUBMED:17537206).
Additionally, a technique that examines the atrial response upon cessation of ventricular pacing associated with 1:1 ventriculoatrial conduction during PSVT was found to be highly sensitive and specific for the identification of atrial tachycardia in the electrophysiology laboratory. The A-A-V response upon cessation of ventricular pacing was observed in all cases of atrial tachycardia and simulated atrial tachycardia, while the A-V response was observed in all cases of atrioventricular nodal reentrant and orthodromic reciprocating tachycardia (PUBMED:10080480).
In conclusion, the response of PSVT to overdrive atrial and ventricular pacing is a valuable tool in determining the mechanism of the tachycardia, which has important implications for the use of catheter ablation techniques to cure PSVT. |
Instruction: Does oral health contribute to post-transplant complications in kidney allograft recipients?
Abstracts:
abstract_id: PUBMED:22943293
Does oral health contribute to post-transplant complications in kidney allograft recipients? Objective: The significant number of complications in kidney graft recipients can not be easily explained. The paper assesses whether poor oral health increases the risk of acute rejections and hospitalizations in kidney allograft recipients.
Materials And Methods: Ninety-one kidney transplant recipients were divided into three sub-groups according to post-transplant time (< 1, 1-5 and > 5 years). Dental examination evaluated oral hygiene index (OHI-S) and Community Periodontal Index of Treatment Needs (CPITN), which were correlated with the occurrence of post-transplant complications.
Results: Within the first year after transplantation the indicators of the increased risk of hospitalizations and acute rejection episodes was the OHI-S (hazard ratio 1.02 and 1.11, respectively), also CPITN score correlated with acute rejections (R = 0.82, p < 0.01).
Conclusion: The neglect in oral health is associated with the increased risk of clinical complications within first year after kidney transplantation.
abstract_id: PUBMED:25116310
Current management and care issues in kidney transplant recipients Kidney transplantation is one strategy for treating end-stage renal disease. Recent advances in perioperative management and immunosuppressive agents as well as improved understanding of transplant immunology have improved the post-surgery quality of life of kidney recipients dramatically. However, lifelong monitoring of renal functions and potential complications is essential to ensure optimal medical outcomes. Furthermore, the self-care competency of transplant recipients is a significant factor affecting survival of the graft and the patient over the long term. All kidney transplant recipients should comply with the self-care instructions provided by transplantation medical personnel and work to improve their self-care abilities in order to prevent / detect post-transplant complications such as rejection, infection, and medical comorbidities as early as possible. The purpose of this study is to explore the current management and care issues faced by kidney transplant recipients.
abstract_id: PUBMED:38369626
Impact of nonspecific allograft biopsy findings in symptomatic kidney transplant recipients. A for-cause biopsy is performed to diagnose the cause of allograft dysfunction in kidney transplantation. We occasionally encounter ambiguous biopsy results in symptomatic kidney transplant recipients. Yet, the allograft survival outcome in symptomatic recipients with nonspecific allograft biopsy findings remains unclear. The purpose of this study was to analyze the impact of nonspecific for-cause biopsy findings in symptomatic kidney transplant recipients. We retrospectively collected records from 773 kidney transplant recipients between January 2008 and October 2021. The characteristics of transplant recipients with nonspecific findings in the first for-cause biopsy were analyzed. Nonspecific allograft biopsy findings were defined as other biopsy findings excluding rejection, borderline rejection, calcineurin inhibitor toxicity, infection, glomerulonephritis, and diabetic nephropathy. The graft outcome was compared between recipients who had never undergone a for-cause biopsy and those who had a first for-cause biopsy with nonspecific findings. The graft survival in recipients with nonspecific for-cause biopsy findings was comparable to that in recipients who did not require the for-cause biopsy before and after propensity score matching. Even in symptomatic kidney transplant recipients, nonspecific allograft biopsy findings might not be a poor prognostic factor for allograft survival compared to recipients who did not require the for-cause biopsy.
abstract_id: PUBMED:34159855
Predictors and Complications of Post Kidney Transplant Leukopenia. Background: Leukopenia occurs frequently following kidney transplantation and is associated with adverse clinical outcomes including increased infectious risk. In this study we sought to characterize the causes and complications of leukopenia following kidney transplantation.
Methods: In a cohort of adult patients (≥18 years) who underwent kidney transplant from Jan 2006-Dec 2017, we used univariable Cox proportional Hazards models to identify predictors of post-transplant leukopenia (WBC < 3500 mm3). Factors associated with post-transplant leukopenia were then included in a multivariable backwards stepwise selection process to create a prediction model for the outcome of interest. Cox regression analyses were subsequently used to determine if post-transplant leukopenia was associated with complications.
Results: Of 388 recipients, 152 (39%) developed posttransplant leukopenia. Factors associated with leukopenia included antithymocyte globulin as induction therapy (HR 3.32, 95% CI 2.25-4.91), valganciclovir (HR 1.84, 95% CI 1.25-2.70), tacrolimus (HR 3.05, 95% CI 1.08-8.55), prior blood transfusion (HR 1.17 per unit, 95% CI 1.09- 1.25), and donor age (HR 1.02 per year, 95% CI 1.00-1.03). Cytomegalovirus infection occurred in 26 patients with leukopenia (17.1%). Other than cytomegalovirus, leukopenia was not associated with posttransplant complications.
Conclusion: Leukopenia commonly occurred posttransplant and was associated with modifiable and non-modifiable pretransplant factors.
abstract_id: PUBMED:33328065
Pregnancy in Kidney Transplant Recipients. Women with end-stage kidney disease commonly have difficulty conceiving through spontaneous pregnancy, and many suffer from infertility. Kidney transplantation restores the impairment in fertility and increases the possibility of pregnancy. In addition, the number of female kidney transplant recipients of reproductive age has been increasing. Thus, preconception counseling, contraceptive management, and family planning are of great importance in the routine care of this population. Pregnancy in kidney transplant recipients is complicated by underlying maternal comorbidities, kidney allograft function, the effect of pregnancy on the transplanted kidney, and the effect of the maternal health on the fetus, in addition to immunosuppressive medications and their potential teratogenesis. Given the potential maternal and fetal risks, and possible complications during pregnancy, pretransplant and prepregnancy counseling for women of reproductive age are crucial, including delivery of information regarding contraception and timing for pregnancy, fertility and pregnancy rates, the risk of immunosuppression on the fetus, the risk of kidney allograft, and other maternal complications. In this article, we discuss aspects related to pregnancy among kidney transplant recipients and their management.
abstract_id: PUBMED:37545440
Evolving challenges with long-term care of liver transplant recipients. The number of liver transplants (LT) performed worldwide continues to rise, and LT recipients are living longer post-transplant. This has led to an increasing number of LT recipients requiring lifelong care. Optimal care post-LT requires careful attention to both the allograft and systemic issues that are more common after organ transplantation. Common causes of allograft dysfunction include rejection, biliary complications, and primary disease recurrence. While immunosuppression prevents rejection and reduces incidences of some primary disease recurrence, it has detrimental systemic effects. Most commonly, these include increased incidences of metabolic syndrome, various malignancies, and infections. Therefore, it is of utmost importance to optimize immunosuppression regimens to prevent allograft dysfunction while also decreasing the risk of systemic complications. Institutional protocols to screen for systemic disease and heightened clinical suspicion also play an important role in providing optimal long-term post-LT care. In this review, we discuss these common complications of LT as well as unique considerations when caring for LT recipients in the years after transplant.
abstract_id: PUBMED:32639652
Patterns of emergency department utilization between transplant and non-transplant centers and impact on clinical outcomes in kidney recipients. There is a high rate of Emergency Department (ED) utilization in kidney recipients post-transplant; ED visits are associated with readmission rates and lower survival rates. However, utilization within and outside transplant centers may lead to different outcomes. The objective was to analyze ED utilization patterns at transplant and non-transplant centers as well as common etiologies of ED visits and correlation with hospitalization, graft, and patient outcomes. This was a longitudinal, retrospective, single-center cohort study in kidney transplant recipients evaluating ED utilization. Comparator groups were determined by ED location, time from transplant, and disposition/readmission from ED visit. 1,106 kidney recipients were included in the study. ED utilization dropped at the transplant center after the 1st year (P < .001), while remaining at a similar rate at non-transplant centers (0.22 vs 1.06 VPPY). Infection and allograft complications were the most common causes of ED visits. In multivariable Cox modeling, an ED visit due to allograft complication at a non-transplant center >1 year post-transplant was associated with higher risk for graft loss and death (aHR 2.93 and aHR 1.75, P < .0001). The results of this study demonstrate an increased risk of graft loss among patients who utilize non-transplant center emergency departments. Improved communication and coordination between transplant centers and non-transplant centers may contribute to better long-term outcomes.
abstract_id: PUBMED:32640109
Enteric dysbiosis in liver and kidney transplant recipients: a systematic review. Several factors mediate intestinal microbiome (IM) alterations in transplant recipients, including immunosuppressive (IS) and antimicrobial drugs. Studies on the structure and function of the IM in the post-transplant scenario and its role in the development of metabolic abnormalities, infection, and cancer are limited. We conducted a systematic review to study the taxonomic changes in liver (LT) and kidney (KT) transplantation, and their potential contribution to post-transplant complications. The review also includes pre-transplant taxa, which may play a critical role in microbial alterations post-transplant. Two reviewers independently screened articles, and assessed risk of bias. The review identified 13 clinical studies, which focused on adult kidney and liver transplant recipients. Patient characteristics and methodologies varied widely between studies. Ten studies reported increased an abundance of opportunistic pathogens (Enterobacteriaceae, Enterococcaceae, Fusobacteriaceae, and Streptococcaceae) followed by butyrate-producing bacteria (Lachnospiraceae and Ruminococcaceae) in nine studies in post-transplant conditions. The current evidence is mostly based on observational data and studies with no proof of causality. Therefore, further studies exploring the bacterial gene functions rather than taxonomic changes alone are in demand to better understand the potential contribution of the IM in post-transplant complications.
abstract_id: PUBMED:37675004
Value and limitations of sonography in kidney transplant recipients with special attention to the resistive index - An update. Kidney transplantation has become the standard treatment for end-stage renal disease. Even though the success rates are high, early and late post-transplant complications remain a major clinical problem due to the risk of graft failure. Therefore, it is of highest interest to early diagnose post-transplant complications. Ultrasound with color coded Duplex analysis plays a crucial role in imaging mechanical and vascular complications. In this article, we give an update of the visualizable complications in kidney transplant recipients and discuss the value of resistive index (RI) measurement with its limitations in allograft rejection.
abstract_id: PUBMED:34021949
Course of renal allograft function after diagnosis and treatment of post-transplant lymphoproliferative disorders in pediatric kidney transplant recipients. Background: Post-transplant lymphoproliferative disease (PTLD) is a life-threatening complication in renal transplant recipients. Immunomodulatory and chemotherapeutic treatment potentially affect allograft function. The aim of this study was to evaluate graft function of pediatric kidney transplant recipients following diagnosis and standardized treatment of PTLD.
Methods: Patients were identified from the German Ped-PTLD registry, and data on renal function were retrospectively retrieved from patient charts. For PTLD treatment, immunosuppressive therapy was reduced and all children received rituximab (375 mg/m2 ) for up to six doses. Two patients required additional low-dose chemotherapy. Renal allograft function was monitored by consecutive measurements of estimated glomerular filtration rate (eGFR) at defined time points. Follow-up was up to 60 months after PTLD.
Results: Twenty patients were included in this cohort analysis. Median time from transplantation to PTLD was 2.4 years. Histopathology showed monomorphic lesions in 16 and polymorphic in 4 patients. Two patients experienced PTLD relapse after 2 and 14 months. Range-based analysis of variance showed stable allograft function in 17 of 20 patients (85%). Mean eGFR increased during early treatment phase. One patient experienced graft rejection 5.3 years after diagnosis of PTLD. Another patient developed recurrence of primary renal disease (focal-segmental glomerulosclerosis) and lost his renal allograft 3.8 years post-transplant (2.0 years after PTLD diagnosis).
Conclusion: Treatment of PTLD with rituximab with or without low-dose chemotherapy in combination with reduced immunosuppression, mostly comprising of an mTOR inhibitor-based, calcineurin inhibitor-free regimen, is associated with stable graft function and favorable graft survival in pediatric renal transplant patients.
Answer: Yes, oral health does contribute to post-transplant complications in kidney allograft recipients. A study found that poor oral health, as indicated by higher Oral Hygiene Index (OHI-S) and Community Periodontal Index of Treatment Needs (CPITN) scores, was associated with an increased risk of hospitalizations and acute rejection episodes within the first year after kidney transplantation (PUBMED:22943293). |
Instruction: Should Medical Errors Be Disclosed to Pediatric Patients?
Abstracts:
abstract_id: PUBMED:26329017
Should Medical Errors Be Disclosed to Pediatric Patients? Pediatricians' Attitudes Toward Error Disclosure. Objective: Limited data exist on medical error disclosure in pediatrics. We sought to assess physicians' attitudes toward error disclosure to parents and pediatric patients.
Methods: An anonymous survey was distributed to 1200 members of the American Academy of Pediatrics. Surveys included 1 of 4 possible cases that only varied by patient age (16 or 9 years old) and by whether the medical error resulted in reversible or irreversible harm. Statistical analyses included chi-square, Bonferroni-adjusted P values, Fisher's exact test, Wilcoxon signed rank test, and logistic regressions including key demographic factors, patient age, and error reversibility.
Results: The response rate was 40% (474 of 1186). Overall, 98% of respondents believed it was very important to disclose medical errors to parents versus 57% to pediatric patients (P < .0001). Respondents believed that medical errors could be disclosed to developmentally appropriate pediatric patients at a mean age of 12.15 years old (SD 3.33), but not below a mean age of 10.25 years old (SD 3.55). Most respondents (72%) believed that physicians and parents should jointly decide whether to disclose to pediatric patients. When disclosing to pediatric patients, 88% of respondents believed that physicians should disclose with the parents present. Logistic regressions found only patient age (odds ratio 18.65, 95% confidence interval 9.20-37.8) and error reversibility (odds ratio 2.90, 95% confidence interval 1.73-4.86) to affect attitudes toward disclosure to pediatric patients. Respondent sex, year of medical school graduation, and area of practice had no effect on disclosure attitudes.
Conclusions: Most respondents endorse disclosing medical errors to parents and older pediatric patients, particularly when irreversible harm occurs.
abstract_id: PUBMED:26770701
Medical errors in hospitalized pediatric trauma patients with chronic health conditions. Objective: This study compares medical errors in pediatric trauma patients with and without chronic conditions.
Methods: The 2009 Kids' Inpatient Database, which included 123,303 trauma discharges, was analyzed. Medical errors were identified by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes. The medical error rates per 100 discharges and per 1000 hospital days were calculated and compared between inpatients with and without chronic conditions.
Results: Pediatric trauma patients with chronic conditions experienced a higher medical error rate compared with patients without chronic conditions: 4.04 (95% confidence interval: 3.75-4.33) versus 1.07 (95% confidence interval: 0.98-1.16) per 100 discharges. The rate of medical error differed by type of chronic condition. After controlling for confounding factors, the presence of a chronic condition increased the adjusted odds ratio of medical error by 37% if one chronic condition existed (adjusted odds ratio: 1.37, 95% confidence interval: 1.21-1.5), and 69% if more than one chronic condition existed (adjusted odds ratio: 1.69, 95% confidence interval: 1.48-1.53). In the adjusted model, length of stay had the strongest association with medical error, but the adjusted odds ratio for chronic conditions and medical error remained significantly elevated even when accounting for the length of stay, suggesting that medical complexity has a role in medical error. Higher adjusted odds ratios were seen in other subgroups.
Conclusion: Chronic conditions are associated with significantly higher rate of medical errors in pediatric trauma patients. Future research should evaluate interventions or guidelines for reducing the risk of medical errors in pediatric trauma patients with chronic conditions.
abstract_id: PUBMED:15342846
Use of incident reports by physicians and nurses to document medical errors in pediatric patients. Objectives: To describe the proportion and types of medical errors that are stated to be reported via incident report systems by physicians and nurses who care for pediatric patients and to determine attitudes about potential interventions for increasing error reports.
Methods: A survey on use of incident reports to document medical errors was sent to a random sample of 200 physicians and nurses at a large children's hospital. Items on the survey included proportion of medical errors that were reported, reasons for underreporting medical errors, and attitudes about potential interventions for increasing error reports. In addition, the survey contained scenarios about hypothetical medical errors; the physicians and nurses were asked how likely they were to report each of the events described. Differences in use of incident reports for documenting medical errors between nurses and physicians were assessed with chi(2) tests. Logistic regression was used to determine the association between health care profession type and likelihood of reporting medical errors.
Results: A total of 140 surveys were returned, including 74 from physicians and 66 by nurses. Overall, 34.8% of respondents indicated that they had reported <20% of their perceived medical errors in the previous 12 months, and 32.6% had reported <40% of perceived errors committed by colleagues. After controlling for potentially confounding variables, nurses were significantly more likely to report >or=80% of their own medical errors than physicians (odds ratio: 2.8; 95% confidence interval: 1.3-6.0). Commonly listed reasons for underreporting included lack of certainty about what is considered an error (indicated by 40.7% of respondents) and concerns about implicating others (37%). Potential interventions that would lead to increased reporting included education about which errors should be reported (listed by 65.4% of respondents), feedback on a regular basis about the errors reported (63.8%) and about individual events (51.2%), evidence of system changes because of reports of errors (55.4%), and an electronic format for reports (44.9%). Although virtually all respondents would likely report a 10-fold overdose of morphine leading to respiratory depression in a child, only 31.7% would report an event in which a supply of breast milk is inadvertently connected to a venous catheter but is discovered before any breast milk goes into the catheter.
Conclusions: Medical errors in pediatric patients are significantly underreported in incident report systems, particularly by physicians. Some types of errors are less likely to be reported than others. Information in incident reports is not a representative sample of errors committed in a children's hospital. Specific changes in the incident report system could lead to more reporting by physicians and nurses who care for pediatric patients.
abstract_id: PUBMED:29558835
Views of children, parents, and health-care providers on pediatric disclosure of medical errors. Despite the prevalence of medical errors in pediatrics, little research examines stakeholder perspectives on the disclosure of adverse events, particularly in the case of children's own perspectives. Stakeholder perspectives, however, are integral to informing processes for pediatric disclosure. Building on a systematic review of the literature, this article presents findings from a series of focus groups with key pediatric stakeholders where perspectives were sought on the disclosure of medical errors. Focus groups were conducted with three stakeholder groups. Participants included child members of the Children's Council from a large pediatric hospital (n = 14), parents of children with chronic medical conditions (n = 5), and health-care providers including physicians, nurses, and patient safety professionals (n = 27). Children acknowledged various disclosure approaches while citing the importance of children's right to know about errors. Parents generally identified the need for full disclosure and the uncovering of hidden errors. Health-care providers were concerned about the process of disclosure and whether it always served the best interest of the child or family. While some health-care providers addressed the need for more clarity in pediatric policies, most stakeholders agreed that a case-by-case approach was necessary for supporting variations in how medical errors are disclosed.
abstract_id: PUBMED:15548104
Pediatric resident education about medical errors. Background: National organizations have called for patient safety curricula to help reduce the incidence of errors. Little is known about what trainees are taught about medical errors.
Objective: 1) To determine the amount and type of training that pediatric residents have about medical errors and 2) to assess pediatric chief resident knowledge about medical errors.
Methods: We surveyed chief residents from a national sample of 51 pediatric training programs by selecting every fourth program from the American Council on Graduate Medical Education list of accredited programs. The 21-item telephone survey was developed with patient safety specialists and piloted on several chief residents. It asked about patient-safety training sessions and awareness and knowledge about medical errors.
Results: The 51 chief residents helped teach 2176 residents, approximately one third of all pediatric residents. One third of programs had no lectures about medical errors and 23% did not have morbidity and mortality rounds. Sixty-one percent of respondents stated that outpatient medical errors were rarely discussed. Informal teaching was most often reported as the primary method for educating residents about medical errors. Although 58% of respondents did not know that a systemic change should be made in response to a medical error, 83% felt that residents are adequately trained to deal with a medical error.
Discussion: Pediatric resident education about medical errors varies widely. Attention by pediatric residency training programs to this important issue seems limited.
abstract_id: PUBMED:18501764
Medical errors affecting the pediatric intensive care patient: incidence, identification, and practical solutions. The complexity of patient care and the potential for medical error make the pediatric ICU environment a key target for improvement of outcomes in hospitalized children. This article describes several event-specific errors as well as proven and potential solutions. Analysis of pediatric intensive care staffing, education, and administration systems, although a less "traditional" manner of thinking about medical error, may reveal further opportunities for improved pediatric ICU outcome.
abstract_id: PUBMED:22106082
Improving reporting of outpatient pediatric medical errors. Objective: Limited information exists about medical errors in ambulatory pediatrics and on effective strategies for improving their reporting. We aimed to implement nonpunitive error reporting, describe errors, and use a team-based approach to promote patient safety in an academic pediatric practice.
Patients And Methods: The setting was an academic general pediatric practice in Charlotte, North Carolina, that has ∼26 000 annual visits and primarily serves a diverse, low-income, Medicaid-insured population. We assembled a multidisciplinary patient safety team to detect and analyze ambulatory medical errors by using a reporter-anonymous nonpunitive process. The team used systems analysis and rapid redesign to evaluate each error report and recommend changes to prevent patient harm.
Results: In 30 months, 216 medical errors were reported, compared with 5 reports in the year before the project. Most reports originated from nurses, physicians, and midlevel providers. The most frequently reported errors were misfiled or erroneously entered patient information (n = 68), laboratory tests delayed or not performed (n = 27), errors in medication prescriptions or dispensing (n = 24), vaccine errors (n = 21), patient not given requested appointment or referral (n = 16), and delay in office care (n = 15), which together comprised 76% of the reports. Many recommended changes were implemented.
Conclusions: A voluntary, nonpunitive, multidisciplinary team approach was effective in improving error reporting, analyzing reported errors, and implementing interventions with the aim of reducing patient harm in an outpatient pediatric practice.
abstract_id: PUBMED:36211673
Gender differences in medical errors among older patients and inequalities in medical compensation compared with younger adults. Background: Despite growing evidence focusing on health inequalities in older adults, inequalities in medical compensation compared with younger adults and gender disparities of medical errors among older patients have received little attention. This study aimed to disclose the aforementioned inequalities and examine the disparities in medical errors among older patients.
Methods: First, available litigation documents were searched on "China Judgment Online" using keywords including medical errors. Second, we compiled a database with 5,072 disputes. After using systematic random sampling to retain half of the data, we removed 549 unrelated cases. According to the age, we identified 424 and 1,563 cases related to older and younger patients, respectively. Then, we hired two frontline physicians to review the documents and independently judge the medical errors and specialties involved. A third physician further considered the divergent results. Finally, we compared the medical compensation between older and younger groups and medical errors and specialties among older patients.
Results: Older patients experienced different medical errors in divergent specialties. The medical error rate of male older patients was over 4% higher than that of females in the departments of general surgery and emergency. Female older patients were prone to adverse events in respiratory medicine departments and primary care institutes. The incidence of insufficient implementation of consent obligation among male older patients was 5.18% higher than that of females. However, females were more likely to suffer adverse events at the stages of diagnosis, therapy, and surgical operation. The total amount of medical compensation obtained by younger patients was 41.47% higher than that of older patients.
Conclusions: Except for the common medical errors and departments involved, additional attention should be paid to older patients of different genders according to the incidence of medical errors. Setting up the department of geriatrics or specialist hospitals is also an important alternative to improve patient safety for older people. Furthermore, there may be inequality in medical compensation in older patients due to the tort liability law of China.
abstract_id: PUBMED:34366133
Pediatric surgical errors: A systematic scoping review. Background: Medical errors were largely concealed prior to the landmark report "To Err Is Human". The purpose of this systematic scoping review was to determine the extent pediatric surgery defines and studies errors, and to explore themes among papers focused on errors in pediatric surgery.
Methods: The methodological framework used to conduct this scoping study has been outlined by Arksey and O'Malley. In January 2020, PubMed, the Cochrane Database of Systematic Reviews, and the Cochrane Central Register of Controlled Trials were searched. Oxford Level of Evidence was assigned to each study; only studies rated Level 3 or higher were included.
Results: Of 3,064 initial studies, 12 were included in the final analysis: 4 cohort studies, and 8 outcome/audit studies. This data represented 5,442,000 aggregate patients and 8,893 errors. There were 6 different error definitions and 5 study methods. Common themes amongst the studies included a systems-focused approach, an increase in errors seen with increased complexity, and studies exploring the relationship between error and adverse events.
Conclusions: This study revealed multiple error definitions, multiple error study methods, and common themes described in the pediatric surgical literature. Opportunities exist to improve the safety of surgical care of children by reducing errors. Original Scientific Research Type of Study: Systematic Scoping Review Level of Evidence Rating: 1.
abstract_id: PUBMED:18695280
Medical errors in pediatric practice. This prospective study was conducted in a teaching hospital to identify and analyze medical errors in pediatric practice. All admitted children underwent surveillance for medical errors. Of 457 errors identified in 1286 children, medication errors were 313 (68.5%), those related to treatment procedures were 62 (13.6%) and to clerical procedures 82 (17.9%). Physiological factors accounted for 125 (27.3%) of errors, equipment failures in 68 (14.9%), clerical mistakes 118 (25.8%), carelessness 98(21.4%) and lack of training for 48 (10.5%). Morbidity was nil in 375 (82%), mild in 49 (10.7%), moderate in 22 (4.8%) and severe in 11 (2.4%) errors.
Answer: Based on the provided abstracts, the consensus among pediatricians appears to favor disclosing medical errors to parents and, in certain circumstances, to older pediatric patients, particularly when irreversible harm occurs (PUBMED:26329017). Pediatricians believe that medical errors can be disclosed to developmentally appropriate pediatric patients at a mean age of 12.15 years old, but not below a mean age of 10.25 years old. Most respondents (72%) believe that physicians and parents should jointly decide whether to disclose to pediatric patients, and when disclosing, 88% believe that physicians should do so with the parents present (PUBMED:26329017).
The attitudes of children, parents, and healthcare providers towards the disclosure of medical errors also indicate that children recognize their right to know about errors, parents generally desire full disclosure, and healthcare providers are concerned about the process of disclosure and whether it always serves the best interest of the child or family. Stakeholders agree that a case-by-case approach is necessary for supporting variations in how medical errors are disclosed (PUBMED:29558835).
In conclusion, while there is a strong endorsement for disclosing medical errors to parents, the decision to disclose to pediatric patients should consider the patient's developmental stage, the nature of the error, and should involve a joint decision-making process with the parents. A case-by-case approach is recommended to ensure that the disclosure process is tailored to the individual circumstances and best interests of the child and family. |
Instruction: Are Risk Indices Derived From CGM Interchangeable With SMBG-Based Indices?
Abstracts:
abstract_id: PUBMED:26275643
Are Risk Indices Derived From CGM Interchangeable With SMBG-Based Indices? Background: The risk of hypo- and hyperglycemia has been assessed for years by computing the well-known low blood glucose index (LBGI) and high blood glucose index (HBGI) on sparse self-monitoring blood glucose (SMBG) readings. These metrics have been shown to be predictive of future glycemic events and clinically relevant cutoff values to classify the state of a patient have been defined, but their application to continuous glucose monitoring (CGM) profiles has not been validated yet. The aim of this article is to explore the relationship between CGM-based and SMBG-based LBGI/HBGI, and provide a guideline to follow when these indices are computed on CGM time series.
Methods: Twenty-eight subjects with type 1 diabetes mellitus (T1DM) were monitored in daily-life conditions for up to 4 weeks with both SMBG and CGM systems. Linear and nonlinear models were considered to describe the relationship between risk indices evaluated on SMBG and CGM data.
Results: LBGI values obtained from CGM did not match closely SMBG-based values, with clear underestimation especially in the low risk range, and a linear transformation performed best to match CGM-based LBGI to SMBG-based LBGI. For HBGI, a linear model with unitary slope and no intercept was reliable, suggesting that no correction is needed to compute this index from CGM time series.
Conclusions: Alternate versions of LBGI and HBGI adapted to the characteristics of CGM signals have been proposed that enable extending results obtained for SMBG data and using clinically relevant cutoff values previously defined to promptly classify the glycemic condition of a patient.
abstract_id: PUBMED:19325815
RNA:DNA ratio and other nucleic acid derived indices in marine ecology. Some of most used indicators in marine ecology are nucleic acid-derived indices. They can be divided by target levels in three groups: 1) at the organism level as ecophysiologic indicators, indicators such as RNA:DNA ratios, DNA:dry weight and RNA:protein, 2) at the population level, indicators such as growth rate, starvation incidence or fisheries impact indicators, and 3) at the community level, indicators such as trophic interactions, exergy indices and prey identification. The nucleic acids derived indices, especially RNA:DNA ratio, have been applied with success as indicators of nutritional condition, well been and growth in marine organisms. They are also useful as indicators of natural or anthropogenic impacts in marine population and communities, such as upwelling or dredge fisheries, respectively. They can help in understanding important issues of marine ecology such as trophic interactions in marine environment, fish and invertebrate recruitment failure and biodiversity changes, without laborious work of counting, measuring and identification of small marine organisms. Besides the objective of integrate nucleic acid derived indices across levels of organization, the paper will also include a general characterization of most used nucleic acid derived indices in marine ecology and also advantages and limitations of them. We can conclude that using indicators, such RNA:DNA ratios and other nucleic acids derived indices concomitantly with organism and ecosystems measures of responses to climate change (distribution, abundance, activity, metabolic rate, survival) will allow for the development of more rigorous and realistic predictions of the effects of anthropogenic climate change on marine systems.
abstract_id: PUBMED:23876067
Systematic review and meta-analysis of the effectiveness of continuous glucose monitoring (CGM) on glucose control in diabetes. Diabetes mellitus is a chronic disease that necessitates continuing treatment and patient self-care education. Monitoring of blood glucose to near normal level without hypoglycemia becomes a challenge in the management of diabetes. Although self monitoring of blood glucose (SMBG) can provide daily monitoring of blood glucose level and help to adjust therapy, it cannot detect hypoglycemic unawareness and nocturnal hypoglycemia which occurred mostly in T1DM pediatrics. Continuous glucose monitoring (CGM) offers continuous glucose data every 5 minutes to adjust insulin therapy especially for T1DM patients and to monitor lifestyle intervention especially for T2DM patients by care providers or even patients themselves. The main objective of this study was to assess the effects of continuous glucose monitoring (CGM) on glycemic control in Type 1 diabetic pediatrics and Type 2 diabetic adults by collecting randomized controlled trials from MEDLINE (pubmed), SCOPUS, CINAHL, Web of Science and The Cochrane Library up to May 2013 and historical search through the reference lists of relevant articles. There are two types of CGM device: real-time CGM and retrospective CGM and both types of the device were included in the analysis. In T1DM pediatrics, CGM use was no more effective than SMBG in reducing HbA1c [mean difference - 0.13% (95% CI -0.38% to 0.11%,]. This effect was independent of HbA1c level at baseline. Subgroup analysis indicated that retrospective CGM was not superior to SMBG [mean difference -0.05% (95% CI -0.46% to 0.35%)]. In contrast, real-time CGM revealed better effect in lowering HbA1c level compared with SMBG [mean difference -0.18% (95% CI -0.35% to -0.02%, p = 0.02)]. In T2DM adults, significant reduction in HbA1c level was detected with CGM compared with SMBG [mean difference - 0.31% (95% CI -0.6% to -0.02%, p = 0.04)]. This systematic review and meta-analysis suggested that real-time CGM can be more effective than SMBG in T1DM pediatrics, though retrospective CGM was not. CGM provided better glycemic control in T2DM adults compared with SMBG.
abstract_id: PUBMED:33322930
Continuous Glucose Monitoring for the Detection of Hypoglycemia in Patients With Diabetes of the Exocrine Pancreas. Background: Detailed evaluations of hypoglycemia and associated indices based on continuous glucose monitoring (CGM) are limited in patients with diabetes of the exocrine pancreas. Our study sought to evaluate the frequency and pattern of hypoglycemic events and to investigate hypoglycemia-specific indices in this population.
Methods: This was a cross-sectional study comprising 83 participants with diabetes of the exocrine pancreas. CGM and self-monitoring of blood glucose (SMBG) were performed on all participants for a minimum period of 72 hours. The frequency and pattern of hypoglycemic events, as well as hypoglycemia-related indices, were evaluated.
Results: Hypoglycemia was detected in 90.4% of patients using CGM and 38.5% of patients using SMBG. Nocturnal hypoglycemic events were more frequent (1.9 episodes/patient) and prolonged (142 minutes) compared with day-time events (1.1 episodes/patient; 82.8 minutes, P < 0.05). The mean low blood glucose index was 2.1, and glycemic risk assessment diabetes equation hypoglycemia was 9.1%. The mean time spent below (TSB) <70 mg/dL was 9.2%, and TSB <54 mg/dL was 3.7%. The mean area under curve (AUC) <70 mg/dL was 1.7 ± 2.5 mg/dL/hour and AUC <54 mg/dL was 0.6 ± 1.3 mg/dL/hour. All of the CGM-derived hypoglycemic indices were significantly more deranged at night compared with during the day (P < 0.05).
Conclusion: Patients with diabetes of the exocrine pancreas have a high frequency of hypoglycemic episodes that are predominantly nocturnal. CGM is superior to SMBG in the detection of nocturnal and asymptomatic hypoglycemic episodes. CGM-derived hypoglycemic indices are beneficial in estimating the risk of hypoglycemia.
abstract_id: PUBMED:37646226
Prognostic Value of Multiple Complete Blood Count-Derived Indices in Intermediate Coronary Lesions. Complete blood count (CBC)-derived indices have been proposed as reliable inflammatory biomarkers to predict outcomes in the context of coronary artery disease. These indices have yet to be thoroughly validated in patients with intermediate coronary stenosis. Our study included 1527 patients only with intermediate coronary stenosis. The examined variables were neutrophil-lymphocyte ratio (NLR), derived NLR, monocyte-lymphocyte ratio (MLR), platelet-lymphocyte ratio (PLR), systemic immune inflammation index (SII), system inflammation response index (SIRI), and aggregate index of systemic inflammation (AISI). The primary endpoint was the composite of major adverse cardiovascular events (MACEs), including all-cause death, non-fatal myocardial infarction, and unplanned revascularization. Over a follow-up of 6.11 (5.73-6.55) years, MACEs occurred in 189 patients. Receiver operator characteristic curve analysis showed that SIRI outperformed other indices with the most significant area under the curve. In the multivariable analysis, SIRI (hazard ratio [HR] 1.588, 95% confidence interval [CI] 1.138-2.212) and AISI (HR 1.673, 95% CI 1.217-2.300) were the most important prognostic factors among all the indices. The discrimination ability of each index was strengthened in patients with less burden of modifiable cardiovascular risk factors. SIRI also exhibited the best incremental value beyond the traditional cardiovascular risk model.
abstract_id: PUBMED:35707068
Some dominance indices to determine market concentration. This study intends to provide a new insight into the concentration and dominance indices as the concerns grow about the increasing concentration in the markets around the world. Most of the studies attempting to measure concentration or dominance in a market employ the popular concentration/dominance indices like Herfindahl-Hirschmann, Hannah-Kay, Rosenbluth-Hall-Tidemann and Concentration ratio. On the other hand, measures of qualitative variation are closely related to entropy, diversity and concentration/dominance measures. In this study, two normalized dominance measures that can be derived from the work of Wilcox on qualitative variation are proposed. The limiting distributions of these normalized dominance measures are formulated. By some simulations, asymptotic behaviors of these indices are analyzed under some assumptions about the market structure. In the end, by an application on the Turkish car sales in 2019, it is determined that the values of dominance indices vary in a considerably large range. Thus one of the dominance indices is determined to have the advantage of having less error in estimation, less sensitivity to smaller market shares, and less sampling variability.
abstract_id: PUBMED:35011061
DXA-Derived Indices in the Characterisation of Sarcopenia. Sarcopenia is linked with increased risk of falls, osteoporosis and mortality. No consensus exists about a gold standard "dual-energy X-ray absorptiometry (DXA) index for muscle mass determination" in sarcopenia diagnosis. Thus, many indices exist, but data on sarcopenia diagnosis agreement are scarce. Regarding sarcopenia diagnosis reliability, the impact of influencing factors on sarcopenia prevalence, diagnosis agreement and reliability are almost completely missing. For nine DXA-derived muscle mass indices, we aimed to evaluate sarcopenia prevalence, diagnosis agreement and diagnosis reliability, and investigate the effects of underlying parameters, presence or type of adjustment and cut-off values on all three outcomes. The indices were analysed in the BioPersMed cohort (58 ± 9 years), including 1022 asymptomatic subjects at moderate cardiovascular risk. DXA data from 792 baselines and 684 follow-up measurements (for diagnosis agreement and reliability determination) were available. Depending on the index and cut-off values, sarcopenia prevalence varied from 0.6 to 36.3%. Height-adjusted parameters, independent of underlying parameters, showed a relatively high level of diagnosis agreement, whereas unadjusted and adjusted indices showed low diagnosis agreement. The adjustment type defines which individuals are recognised as sarcopenic in terms of BMI and sex. The investigated indices showed comparable diagnosis reliability in follow-up examinations.
abstract_id: PUBMED:35592392
Prognostic Impact of Multiple Lymphocyte-Based Inflammatory Indices in Acute Coronary Syndrome Patients. Background: The aim of this study was to evaluate the prognostic values of five lymphocyte-based inflammatory indices (platelet-lymphocyte ratio [PLR], neutrophil-lymphocyte ratio [NLR], monocyte-lymphocyte ratio [MLR], systemic immune inflammation index [SII], and system inflammation response index [SIRI]) in patients with acute coronary syndrome (ACS).
Methods: A total of 1,701 ACS patients who underwent percutaneous coronary intervention (PCI) were included in this study and followed up for major adverse cardiovascular events (MACE) including all-cause death, non-fatal ischemic stroke, and non-fatal myocardial infarction. The five indices were stratified by the optimal cutoff value for comparison. The association between each of the lymphocyte-based inflammatory indices and MACE was assessed by the Cox proportional hazards regression analysis.
Results: During the median follow-up of 30 months, 107 (6.3%) MACE were identified. The multivariate COX analysis showed that all five indices were independent predictors of MACE, and SIRI seemingly performed best (Hazard ratio [HR]: 3.847; 95% confidence interval [CI]: [2.623-5.641]; p < 0.001; C-statistic: 0.794 [0.731-0.856]). The addition of NLR, MLR, SII, or SIRI to the Global Registry of Acute Coronary Events (GRACE) risk score, especially SIRI (C-statistic: 0.699 [0.646-0.753], p < 0.001; net reclassification improvement [NRI]: 0.311 [0.209-0.407], p < 0.001; integrated discrimination improvement [IDI]: 0.024 [0.010-0.046], p < 0.001), outperformed the GRACE risk score alone in the risk predictive performance.
Conclusion: Lymphocyte-based inflammatory indices were significantly and independently associated with MACE in ACS patients who underwent PCI. SIRI seemed to be better than the other four indices in predicting MACE, and the combination of SIRI with the GRACE risk score could predict MACE more accurately.
abstract_id: PUBMED:10500443
The problem of therapeutic efficacy indices. 2. Description of the indices The four indices for a binary outcome or therapeutic objective are: the odds ratio, the relative risk, the absolute benefit and the number of patients to treat. For a continuous outcome, the effect size is the best choice. The odds ratio approximates the relative risk. The difference may be large in some instances. The number of patients to treat is the reciprocal of the absolute benefit. Although they are built on the same two quantities, they are not interchangeable and should not be considered in the same way. Moreover, their meaning is not straightforward and they can be misused.
abstract_id: PUBMED:35444477
Construction of a Personalized Insulin Resistance Risk Assessment Tool in Xinjiang Kazakhs Based on Lipid- and Obesity-Related Indices. Purpose: This study aimed to explore the relationship between obesity- and lipid-related indices and insulin resistance (IR) and construct a personalized IR risk model for Xinjiang Kazakhs based on representative indices.
Methods: This cross-sectional study was performed from 2010 to 2012. A total of 2170 Kazakhs from Xinyuan County were selected as research subjects. IR was estimated using the homeostasis model assessment of insulin resistance. Multivariable logistic regression analysis, least absolute shrinkage and selection operator penalized regression analysis, and restricted cubic spline were applied to evaluate the association between lipid- and obesity-related indices and IR. The risk model was developed based on selected representative variables and presented using a nomogram. The model performance was assessed using the area under the ROC curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, and decision curve analysis (DCA).
Results: After screening out 25 of the variables, the final risk model included four independent risk factors: smoking, sex, triglyceride-glucose (TyG) index, and body mass index (BMI). A linear dose-response relationship was observed for the BMI and TyG indices against IR risk. The AUC of the risk model was 0.720 based on an independent test and 0.716 based on a 10-fold cross-validation. Calibration curves showed good consistency between actual and predicted IR risks. The DCA demonstrated that the risk model was clinically effective.
Conclusion: The TyG index and BMI had the strongest association with IR among all obesity- and lipid-related indices, and the developed model was useful for predicting IR risk among Kazakh individuals.
Answer: No, risk indices derived from Continuous Glucose Monitoring (CGM) are not directly interchangeable with those based on Self-Monitoring of Blood Glucose (SMBG). A study exploring the relationship between CGM-based and SMBG-based low blood glucose index (LBGI) and high blood glucose index (HBGI) found that LBGI values obtained from CGM did not closely match SMBG-based values, particularly underestimating in the low-risk range. A linear transformation was necessary to better align CGM-based LBGI with SMBG-based LBGI. However, for HBGI, a linear model with a unitary slope and no intercept was reliable, suggesting that no correction is needed to compute this index from CGM time series. Therefore, alternate versions of LBGI and HBGI adapted to the characteristics of CGM signals have been proposed to enable the use of clinically relevant cutoff values previously defined for SMBG data to classify the glycemic condition of a patient (PUBMED:26275643). |
Instruction: Procedures required to accomplish complete cytoreduction of ovarian cancer: is there a correlation with "biological aggressiveness" and survival?
Abstracts:
abstract_id: PUBMED:11520137
Procedures required to accomplish complete cytoreduction of ovarian cancer: is there a correlation with "biological aggressiveness" and survival? Objective: The aim of this study was to determine if the necessity of using specific procedures to attain complete cytoreduction in ovarian cancer correlates with innate biologic aggressiveness and independently influences survival.
Methods: Between 1990 and 2000, 213 patients with Stage IIIC epithelial ovarian cancer underwent complete cytoreduction before initiation of systemic platinum-based combination chemotherapy. Survival was stratified and analyzed (log rank and Cox regression) on the basis of whether extrapelvic bowel resection, diaphragm stripping, full-thickness diaphragm resection, modified posterior pelvic exenteration, peritoneal implant ablation and/or aspiration, and excision of grossly involved retroperitoneal lymph nodes were necessary to attain a visibly disease-free cytoreductive outcome.
Results: The median and estimated 5-year survival for the cohort were 75.8 months and 54%, respectively. Survival was influenced (log rank) by the requirement of diaphragm stripping (required, median 42 months vs not required, median 79 months; P = 0.03) and the extent of mesenteric and serosal implants that required removal (none, median not reached, vs 1-50 implants, median not reached, vs >50 implants, median 40 months; P = 0.002). Survival was independently influenced (Cox regression) only by the extent of peritoneal metastatic implants that required removal (P = 0.01). The other investigated procedures and type of chemotherapy used did not influence survival.
Conclusions: The need to remove a large number of peritoneal implants correlates with biological aggressiveness and diminished survival, but not significantly enough to preclude long-term survival or justify abbreviation of the operative effort. The need to use the other investigated procedures had minimal or no observed influence on survival.
abstract_id: PUBMED:27008588
Surgical Complexity Impact on Survival After Complete Cytoreductive Surgery for Advanced Ovarian Cancer. Introduction: The direct relationship between surgical radicality to compensate biologic behavior and improvement of patient outcome at the time of primary or interval cytoreduction remains unclear.
Objective: The aim of this study was to evaluate the impact of disease extension and surgical complexity on survival after complete macroscopic resection for stage IIIC-IV ovarian cancer.
Materials And Methods: Medical records from seven referral centers in France were reviewed to identify all patients who had complete cytoreductive surgery for stage IIIC-IV epithelial ovarian, fallopian, or primary peritoneal cancer. All patients had at least six cycles of carboplatin and paclitaxel combination therapy.
Results: From the 374 consecutive patients with complete cytoreduction who were included in this study, stage, grade, upper abdominal disease, surgical complexity, and carcinomatosis extent were significantly associated with disease-free survival (DFS) at univariate analysis. Stage IV and the need for ultra-radical procedures were significantly associated with lower overall survival (OS). On multivariate analysis, radical surgery, including more than two visceral resections, was significantly associated with decreased DFS and OS.
Conclusions: Patients who need complex surgical procedures involving two or more visceral resections in order to achieve successful complete cytoreduction have worse outcome than patients with less extensive procedures. The negative impact of surgical complexity was not significant in patients who underwent upfront procedures. Tumor volume and extension were associated with decreased DFS in patients undergoing a primary surgical approach. This adds to the evidence that, even though complete cytoreduction is currently the objective of surgery, tumor load remains an independent poor prognostic factor and probably reflects a more aggressive behavior.
abstract_id: PUBMED:16876853
"Optimal" cytoreduction for advanced epithelial ovarian cancer: a commentary. Objective: To derive the most appropriate threshold to classify primary cytoreductive operations as "optimal" and address the clinical significance of this issue.
Methods: Criteria used to classify primary cytoreductive outcomes are reviewed. Survival outcomes are analyzed to address relative influences of the completeness of cytoreduction and "biological aggressiveness", as manifested by the extent of intra-abdominal metastases.
Results: Most cohorts analyzing relative influences of metastatic tumor burden and the dimension of residual disease on survival report completeness of cytoreduction to influence the prognosis more significantly than tumor burden, with necessity to perform various procedures having minimal or no influence. Equivalent survival is reported for completely cytoreduced patients with stage III disease whether substages IIIa/b (smaller tumor burden) are excluded or included. However, some stage IIIc series report more favorable median and 5-year survivals for small fractions of completely cytoreduced patients than series with a large visibly disease-free fraction. Increasing fractions of complete cytoreduction are reported in recent cohorts, without increase in morbidity.
Conclusions: Complete primary cytoreduction improves the prognosis for survival significantly more than a small dimension of residual disease. Although prospective randomized trials addressing surgical issues have not been undertaken and numerous variables may reflect "biological aggressiveness" by influencing the prognosis, available data justify elimination of macroscopic disease to be the most appropriate objective of primary cytoreductive surgery. Stratification of survival by dimensions of residual disease in an investigational setting should include a visibly disease-free subgroup and if used, the term "optimal" should be applied to patients undergoing complete cytoreduction.
abstract_id: PUBMED:32132055
Impact of Bevacizumab-containing Primary Treatment on Outcome of Recurrent Ovarian Cancer: An Italian Study. Background/aim: The aim of the study was to assess the outcome of advanced ovarian cancer patients who i) underwent primary surgery followed by carboplatin/paclitaxel-based chemotherapy with or without bevacizumab, ii) were in complete response after chemotherapy, iii) and subsequently recurred.
Patients And Methods: The hospital records of 138 complete responders after chemotherapy with (n=58) or without (n=80) bevacizumab were reviewed.
Results: Both survival after recurrence and overall survival were related to age (≤61 vs. >61 years, p=0.002 and p=0.0001), performance status (0 vs. ≥1, p=0.002 and p=0.001), histotype (serous vs. non serous, p=0.005 and p=0.01), time to recurrence (≥12 vs. <12 months, p<0.0001 and p<0.0001) and treatment at recurrence (surgery plus chemotherapy vs. chemotherapy, p=0.01 and p=0.004), but not to first-line treatment.
Conclusion: This investigation failed to detect a more aggressive behavior of recurrent ovarian cancer after bevacizumab-containing primary treatment.
abstract_id: PUBMED:10926799
Complete cytoreduction: is epithelial ovarian cancer confined to the pelvis biologically different from bulky abdominal disease? Objectives: The aim of this study was to determine whether site and size of tumor masses prior to complete surgical cytoreduction affect outcome survival.
Methods: A retrospective review was performed of 53 women with stage II and III epithelial ovarian cancer following complete surgical cytoreduction.
Results: Fifteen cases (28%) were classified as stage II and the remaining 38 cases (72%) as stage III. The overall median survival was 58 months with overall 2- and 5-year survivals of 76 and 42%, respectively. On univariate analysis, women with well differentiated tumors did significantly better than those with moderately or poorly differentiated tumours (P = 0.0009). FIGO stage did not reach statistical significance (P = 0.066). On multivariate analysis, comparing patient's age, previous history of pelvic surgery, previous history of malignancy, performance of lymphadenectomy for visibly/palpably enlarged nodes, performance of bowel resection, presence of concomitant tumors, positive pelvic and/or para-aortic lymph nodes, histological type, histological grade, and FIGO stage, only histological grade remained an independent variable affecting outcome survival (P = 0.0004; FIGO stage, P = 0.22) (hazard ratio = 6.5: well versus poor differentiation, 95% confidence interval, 1.7-25.5).
Conclusion: When surgical cytoreduction to no visible disease has been achieved in women with stage II and III epithelial ovarian cancer, FIGO stage, i.e., site and size of tumor masses prior to surgical cytoreduction, does not appear to influence outcome survival. The aggressiveness of the remaining microscopic disease would seem to be determined largely by histological grade. Bearing in mind the retrospective nature of this study and the relatively small cohort of patients, the results would appear to suggest that it is unlikely that there are any other significant parameters (hidden factors) affecting tumor biology which are independent of tumor grade in these patients. A possible implication of this result is that complete surgical cytoreduction confers a survival benefit by producing a biologically more homogeneous tumor.
abstract_id: PUBMED:30029471
One-Carbon Metabolism: Biological Players in Epithelial Ovarian Cancer. Metabolism is deeply involved in cell behavior and homeostasis maintenance, with metabolites acting as molecular intermediates to modulate cellular functions. In particular, one-carbon metabolism is a key biochemical pathway necessary to provide carbon units required for critical processes, including nucleotide biosynthesis, epigenetic methylation, and cell redox-status regulation. It is, therefore, not surprising that alterations in this pathway may acquire fundamental importance in cancer onset and progression. Two of the major actors in one-carbon metabolism, folate and choline, play a key role in the pathobiology of epithelial ovarian cancer (EOC), the deadliest gynecological malignancy. EOC is characterized by a cholinic phenotype sustained via increased activity of choline kinase alpha, and via membrane overexpression of the alpha isoform of the folate receptor (FRα), both of which are known to contribute to generating regulatory signals that support EOC cell aggressiveness and proliferation. Here, we describe in detail the main biological processes associated with one-carbon metabolism, and the current knowledge about its role in EOC. Moreover, since the cholinic phenotype and FRα overexpression are unique properties of tumor cells, but not of normal cells, they can be considered attractive targets for the development of therapeutic approaches.
abstract_id: PUBMED:18348300
Indicators of survival duration in ovarian cancer and implications for aggressiveness of care. Background: Ovarian cancer patients frequently receive chemotherapy near the end of life. The purpose of the current study was to develop indicators that characterize those ovarian cancer patients who have a short life span.
Methods: The medical charts of deceased epithelial ovarian cancer patients were retrospectively reviewed from 2000 through 2006. All patients received primary debulking surgery and adjuvant chemotherapy. Aggressiveness of cancer care within the last month of life was measured by chemotherapy regimens, emergency room visits, and hospitalizations. Significant clinical events (SCE) were defined as ascites, bowel obstruction, and pleural effusion. Survival quartiles were compared using chi-square and Student t test statistics. Multiple regression analysis was performed using survival duration as a dependent variable.
Results: In all, 113 patients with epithelial ovarian cancer were reviewed. Patients had increased hospitalizations (P < .001) and SCE (P < .001) as they approached the end of life. There was no difference in the pattern of hospitalizations and SCE between the top and bottom survival quartiles. Patients with a shorter survival time had a trend toward increased chemotherapy during their last 3 months of life (P = .057) and had increased overall aggressiveness of care (P = .013). In patients with a disease remission, the length of initial remission time was found to be significant in predicting survival (P < .01). Time to second disease recurrence was also significant in predicting survival time (P < 0.01).
Conclusions: Patients who received aggressive care did not have improvement in survival. Short disease remissions and increasing hospitalizations with SCE should be indicators of the appropriateness of reducing cure-oriented therapies and increasing palliative interventions.
abstract_id: PUBMED:11535982
Surgery of advanced malignant epithelial tumours of the ovary. Surgery is still the cornerstone in the management of advanced epithelial ovarian cancer (AEOC) patients. It involves: i. establishment of diagnosis and staging; ii. primary cytoreduction; iii. interval cytoreduction, interval debulking surgery (IDS) or surgery after neoadjuvant chemotherapy; iv. secondary cytoreduction during the assessment of the status of the disease at the end of primary chemotherapy - second look; v. surgery for recurrence; vi. palliation. Substantial evidence exists to demonstrate that if surgery is performed by gynaecologists with a special training in gynaecological oncology, a survival advantage can be achieved when compared with that obtained when general surgeons are primarily treating AEOC. Primary surgery with diagnostic and cytoreductive intent should be performed in accordance with the European Guidelines of Staging in Ovarian Cancer. Whether or not cytoreduction should systematically include lymphadenectomy is still a controversial issue. The strong correlation between chemosensitivity, successful debulking surgery and survival strongly support the concept that it is the biological characteristic of the disease rather than the aggressiveness of the surgeon to allow a successful cytoreduction to the real optimal disease status. It should be now recognised as the complete absence of disease at the end of the surgical procedure. Both IDS and neoadjuvant chemotherapy represent a strong effort to achieve such a status through less morbidity and a better quality of life for the patient. Surgery for recurrence and palliation need to be optimised both in terms of patient selection and a better integration with chemotherapy and ancillary management.
abstract_id: PUBMED:8760635
Massive transfusion in cancer surgery. A study of the survival of 21 patients In a retrospective study over 5 years, the evolution of 21 patients who received a massive blood transfusion during a carcinological surgery was analyzed. In this type of surgery, the frequency of massive blood transfusion is 0.28% and affects 2.5% of the patients transfused. In half of the cases, surgery was performed to resect an ovarian cancer. The overall death rate, related to the importance of the blood transfusion, is high: 38% (8 patients out of 21). Among the patients who survived (13), the actuarial survival is low: 10 deaths from local or metastatic disease with a mean survival of 12 months, suggesting that the use of massive blood transfusion was related to the tumoral aggressiveness and to the severity of the deepseated malignant disease.
abstract_id: PUBMED:31882477
Overexpression of Annexin A1 Is an Independent Predictor of Longer Overall Survival in Epithelial Ovarian Cancer. Background: Epithelial ovarian cancer (EOC) is the major gynecological cause of cancer deaths. Annexin A1 (ANXA1) protein has been implicated in the aggressiveness of several cancer types.
Materials And Methods: This study retrospectively assessed ANXA1 expression in epithelial cells of 156 pre-chemotherapy EOC samples and 34 normal ovarian samples from patients treated at Salah Azaiez Institute. Using immunohistochemistry, ANXA1 expression was compared in normal versus cancer samples; correlations with clinicopathological features, including overall survival, were sought.
Results: Fifty-two percent of tumor samples showed epithelial ANXA1 staining versus only 26% of normal samples (Fisher's exact test, p=0.00794). Epithelial ANXA1 expression was correlated with better overall survival in both univariate and multivariate analyses.
Conclusion: The possible contribution of ANXA1 overexpression to EOC outcome may be relevant to therapeutic strategies.
Answer: The studies provided offer insights into the relationship between the procedures required for complete cytoreduction of ovarian cancer, the biological aggressiveness of the disease, and patient survival outcomes.
One study found that the need to remove a large number of peritoneal implants correlates with biological aggressiveness and diminished survival, but this was not significant enough to preclude long-term survival or justify abbreviation of the operative effort. The necessity of other investigated procedures had minimal or no observed influence on survival (PUBMED:11520137).
Another study indicated that patients who require complex surgical procedures involving two or more visceral resections to achieve complete cytoreduction have worse outcomes than patients with less extensive procedures. Tumor volume and extension were associated with decreased disease-free survival in patients undergoing a primary surgical approach, suggesting that tumor load is an independent poor prognostic factor and may reflect more aggressive behavior (PUBMED:27008588).
Further analysis suggested that complete primary cytoreduction significantly improves prognosis for survival more than a small dimension of residual disease. The completeness of cytoreduction influences prognosis more significantly than tumor burden, with the necessity to perform various procedures having minimal or no influence (PUBMED:16876853).
Another study found that the site and size of tumor masses prior to complete surgical cytoreduction do not appear to influence outcome survival when complete cytoreduction is achieved. The aggressiveness of the remaining microscopic disease seems to be largely determined by histological grade (PUBMED:10926799).
In summary, while complete cytoreduction is a critical objective in the management of ovarian cancer, the extent and complexity of the procedures required to achieve this may reflect the biological aggressiveness of the disease and influence survival outcomes. However, the necessity for extensive procedures alone does not preclude the possibility of long-term survival, and tumor load and histological grade are important factors in determining patient prognosis. |
Instruction: Children's friendships and learning in school: cognitive enhancement through social interaction?
Abstracts:
abstract_id: PUBMED:33666343
Psychological and social adjustment in refugee adolescents: The role of parents' and adolescents' friendships. Friendships have important implications for adolescents' psychological and social adjustment. However, there is still limited evidence on how different same-ethnic and cross-ethnic friendships are formed and regarding their role in refugee adjustment. Therefore, the present study was designed to examine the interplay of parents' and adolescents' same-ethnic and cross-ethnic friendships with adolescents' own psychological and social adjustment among refugee adolescents. Participants were 208 Syrian refugees (71.2% female; Mage = 15.35, SDage = 0.95) in the Southeastern Anatolia Region of Turkey. Findings indicated that refugee parents' same-ethnic and cross-ethnic friendships were positively related to their children's same-ethnic and cross-ethnic friendships. Moreover, parents' same-ethnic friendships were negatively linked with adolescents' social well-being, whereas parents' cross-ethnic friendships were positively related to adolescents' social well-being. Furthermore, adolescents' same-ethnic and cross-ethnic friendships were both positively related to adolescents' social well-being, and cross-ethnic friendships were also positively associated with psychological well-being. These findings suggest that adolescents' cross-ethnic friendships mediated the positive associations of parents' cross-ethnic friendships with adolescents' social and psychological well-being. Overall, our study provides novel insights into the protective roles of diverse friendships for refugee adolescents.
abstract_id: PUBMED:25309488
Learning in friendship groups: developing students' conceptual understanding through social interaction. The role that student friendship groups play in learning was investigated here. Employing a critical realist design, two focus groups on undergraduates were conducted to explore their experience of studying. Data from the "case-by-case" analysis suggested student-to-student friendships produced social contexts which facilitated conceptual understanding through discussion, explanation, and application to "real life" contemporary issues. However, the students did not conceive this as a learning experience or suggest the function of their friendships involved learning. These data therefore challenge the perspective that student groups in higher education are formed and regulated for the primary function of learning. Given these findings, further research is needed to assess the role student friendships play in developing disciplinary conceptual understanding.
abstract_id: PUBMED:36818094
Changes in social interaction, social relatedness, and friendships in Education Outside the Classroom: A social network analysis. Introduction: Social interaction is associated with many effects on the psychological level of children such as mental health, self-esteem, and executive functions. Education Outside the Classroom (EOtC) describes regular curricular classes/lessons outside the school building, often in natural green and blue environments. Applied as a long-term school concept, EOtC has the potential to enable and promote social interaction. However, empirical studies on this topic have been somewhat scant.
Methods: One class in EOtC (N = 24) and one comparison class (N = 26) were examined in this study to explore those effects. Statistical Actor-Oriented Models and Exponential Random Graph Models were used to investigate whether there are differences between EOtC and comparison class regarding changes over time in social interaction parameters; whether a co-evolution between social interaction during lessons and breaks and attendant social relatedness and friendships exists; whether students of the same gender or place of residence interact particularly often (homophily).
Results: Besides inconsistent changes in social interaction parameters, no co-evolutional associations between social interaction and social relatedness and friendships could be determined, but grouping was evident in EOtC. Both classes showed pronounced gender homophily, which in the case of EOtC class contributes to a fragmentation of the network over time.
Discussion: The observed effects in EOtC could be due to previously observed tendencies of social exclusion as a result of a high degree of freedom of choices. It therefore seems essential that in future studies not only the quality of the study design and instruments should be included in the interpretation - rather, the underlying methodological-didactic concept should also be evaluated in detail. At least in Germany, it seems that there is still potential for developing holistic concepts with regards to EOtC in order to maximize the return on the primarily organizational investment of implementing EOtC in natural environments.
abstract_id: PUBMED:36399226
Friendships in Children with Williams Syndrome: Parent and Child Perspectives. Although children with Williams syndrome (WS) are strongly socially motivated, many have friendship difficulties. The parents of 21 children with WS and 20 of the children themselves participated in a semi-structured interview about the children's friendships. Parents reported that their child had difficulties sustaining friendships and low levels of interaction with peers. Barriers to friendships included difficulties with play and self-regulating behaviour. However, there was within-group variability, with a small number of children reported to have strong friendships. While parents reported friendship challenges, all of the children named at least one friend, and most said that they had never felt excluded by their peers. Future research is needed to determine optimal ways to support children with WS in their friendships.
abstract_id: PUBMED:33990245
Social Anxiety Disorder and Social Support Behavior in Friendships. Relationship quality is a strong predictor of health outcomes, and individuals with social anxiety disorder (SAD) report increased interpersonal impairment. However, there are few studies testing the effect of SAD on friendships and it is thus unclear whether there are behavioral differences that distinguish friendships in which a target individual has SAD from friendships in which the target individual does not have SAD. We tested for differences in the provision and receipt of support behaviors as a function of having a SAD diagnosis and accounting for comorbid depressive symptoms. Participants with SAD (n = 90) and their friends engaged in support conversations that were coded using the Social Support Interaction Coding System. Structural equation modeling revealed some differences between participants and friends when accounting for depression. Specifically, friends of participants with SAD and comorbid depression engaged in fewer positive helper behaviors than the friends of participants who did not have SAD or comorbid depression. Additionally, dyads in which the primary participant had SAD engaged in more off-task behaviors. Results suggest that SAD does not result in global interpersonal impairment, but that receipt of positive support behaviors from friends may differ as a function of SAD and comorbid depression. Interpersonal interventions aimed at increasing adaptive friendships and aspects of CBT that target subtle avoidance (e.g., safety behaviors) may be useful in facilitating more satisfactory relationships for these individuals.
abstract_id: PUBMED:16318677
Children's friendships and learning in school: cognitive enhancement through social interaction? Background: Recent literature has identified that children's performance on cognitive (or problem-solving) tasks can be enhanced when undertaken as a joint activity among pairs of pupils. Performance on this 'social' activity will require quality relationships between pupils, leading some researchers to argue that friendships are characterized by these quality relationships and, therefore, that friendship grouping should be used more frequently within classrooms.
Aims: Children's friendship grouping may appear to be a reasonable basis for cognitive development in classrooms, although there is only inconsistent evidence to support this argument. The inconsistency may be explained by the various bases for friendship, and how friendship is affected by cultural contexts of gender and schooling. This study questions whether classroom-based friendship pairings will perform consistently better on a cognitive task than acquaintance pairings, taking into account gender, age, and ability level of children. The study also explores the nature of school-based friendship described by young children.
Sample: 72 children were paired to undertake science reasoning tasks (SRTs). Pairings represented friendship (versus acquaintance), sex (male and female pairings), ability (teacher-assessed high, medium, and low), and age (children in Years 1, 3, and 5 in a primary school).
Method: A small-scale quasi-experimental design was used to assess (friendship- or acquaintance-based) paired performance on SRTs. Friendship pairs were later interviewed about qualities and activities that characterized their friendships.
Results: Girls' friendship pairings were found to perform at the highest SRT levels and boys' friendship pairing performed at the lowest levels. Both boy and girl acquaintance pairings performed at mid-SRT levels. These findings were consistent across Year (in school) levels and ability levels. Interviews revealed that male and female friendship pairs were likely to participate in different types of activity, with girls being school-inclusive and boys being school-exclusive.
Conclusion: Recommendations to use friendship as a basis for classroom grouping for cognitive tasks may facilitate performance of some pairings, but may also inhibit the performance of others. This is shown very clearly with regard to gender. Some of the difference in cognitive task performance may be explained by distinct, cultural (and social capital) orientations to friendship activities, with girls integrating school and educational considerations into friendship, and boys excluding school and educational considerations.
abstract_id: PUBMED:36650893
Workplace friendships while teleworking during COVID-19: Experiences of social workers in Australia. COVID-19 has shifted Australia's social service delivery. Understanding the impact on workplace relationships is key. This article used a small-scale sample of social workers (N = 37) to explore workplace friendship experiences while teleworking. Participants reported opportunities for friendships during COVID-19 but reported ongoing personal and professional concerns.
abstract_id: PUBMED:9327087
The quality of friendships between children with and without learning problems. The nature and quality of preadolescent friendships between children with and without learning problems due to mental retardation or mild cognitive difficulties were investigated. Based on an assessment of the reciprocal relationship status of 373 children, including 54 with learning problems, 33 friend and 32 acquaintance dyads were identified. Of these dyads, half included a child with learning problems and half consisted of 2 children without learning problems. The dyads were observed performing a play task. Unlike friendships between children without disabilities, friendships between children with and without learning problems were marked by limited collaboration and shared decision-making, a low level of cooperative play and shared laughter, and an asymmetrical, hierarchical division of roles. The importance of advancing beyond the study of the social acceptance of children with learning problems to study the qualitative aspects of their friendships was discussed.
abstract_id: PUBMED:32763498
Similarities and differences between young children with selective mutism and social anxiety disorder. A growing body of evidence points to a strong overlap between selective mutism (SM) and social anxiety disorder in children, specifically with regard to characteristics such as social reticence and anxiety. Yet few studies have directly compared these populations, especially with young children. This study compared 25 children (aged 3-7 years) with a primary diagnosis of SM, 17 children with a primary diagnosis of social anxiety disorder and 15 non-clinical controls using interviews and questionnaires on expressions of social anxiety and behavioural inhibition. Results showed that children with SM or social anxiety disorder were more anxious than non-clinical controls and did not significantly differ from each other on either non-verbal social anxiety or non-social forms of anxiety. Both children with SM and social anxiety disorder had fewer friends and experienced greater difficulties forming friendships than non-clinical controls. However, children with SM scored higher than children with social anxiety disorder and non-clinical controls on a few measures of inhibition (both verbal and nonverbal). The results support assertions of strong similarities between SM and social anxiety disorder, but suggest that children with SM may show even greater severity in certain symptoms at a young age. These findings point to the need for treatment to include both cognitive behavioural skills to manage social anxiety and structured practice of social skills in order to improve treatment efficacy for children with SM.
abstract_id: PUBMED:24027521
How relevant is social interaction in second language learning? Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.
Answer: Children's friendships can indeed play a role in cognitive enhancement through social interaction. Research has shown that performance on cognitive or problem-solving tasks can be enhanced when undertaken as a joint activity among pairs of pupils, suggesting that quality relationships between pupils, such as friendships, may facilitate cognitive development (PUBMED:16318677). This is supported by the finding that girls' friendship pairings performed at the highest levels on science reasoning tasks, while boys' friendship pairings performed at the lowest levels, indicating that the effectiveness of friendship groupings for cognitive tasks may vary based on gender and other factors (PUBMED:16318677).
Additionally, the role of student friendship groups in learning has been investigated in higher education, where it was found that student-to-student friendships produced social contexts that facilitated conceptual understanding through discussion, explanation, and application to real-life issues (PUBMED:25309488). However, the students did not perceive this as a learning experience, suggesting that the function of their friendships in learning may not be fully recognized by the students themselves (PUBMED:25309488).
In summary, there is evidence to suggest that children's friendships can contribute to cognitive enhancement through social interaction in school settings, although the impact of such friendships may be influenced by various factors, including gender, age, and cultural contexts. |
Instruction: Cross-sector cooperation in health-enhancing physical activity policymaking: more potential than achievements?
Abstracts:
abstract_id: PUBMED:27129850
Cross-sector cooperation in health-enhancing physical activity policymaking: more potential than achievements? Background: The cooperation of actors across policy fields and the need for cross-sector cooperation as well as recommendations on how to implement cross-sector cooperation have been addressed in many national and international policies that seek to solve complex issues within societies. For such a purpose, the relevant governance structure between policy sectors is cross-sector cooperation. Therefore, cross-sector cooperation and its structures need to be better understood for improved implementation. This article reports on the governance structures and processes of cross-sector cooperation in health-enhancing physical activity (HEPA) policies in six European Union (EU) member states.
Methods: Qualitative content analysis of HEPA policies and semi-structured interviews with key policymakers in six European countries.
Results: Cross-sector cooperation varied between EU member states within HEPA policies. The main issues of the cross-sector policy process can be divided into stakeholder involvement, governance structures and coordination structures and processes. Stakeholder involvement included citizen hearings and gatherings of stakeholders from various non-governmental organisations and citizen groups. Governance structures with policy and political discussions included committees, working groups and consultations for HEPA policymaking. Coordination structures and processes included administrative processes with various stakeholders, such as ministerial departments, research institutes and private actors for HEPA policymaking. Successful cross-sector cooperation required joint planning and evaluation, financial frameworks, mandates based on laws or agreed methods of work, communication lines, and valued processes of cross-sector cooperation.
Conclusions: Cross-sector cooperation required participation with the co-production of goals and sharing of resources between stakeholders, which could, for example, provide mechanisms for collaborative decision-making through citizen hearing. Clearly stated responsibilities, goals, communication, learning and adaptation for cross-sector cooperation improve success. Specific resources allocated for cross-sector cooperation could enhance the empowerment of stakeholders, management of processes and outcomes of cross-sector cooperation.
abstract_id: PUBMED:37332258
Systematic review of the barriers and facilitators to cross-sector partnerships in promoting physical activity. Aims: To review the barriers and facilitators that cross-sector partners face in promoting physical activity.
Methods: We searched Medline, Embase, PsychINFO, ProQuest Central, SCOPUS and SPORTDiscus to identify published records dating from 1986 to August 2021. We searched for public health interventions drawn from partnerships, where the partners worked across sectors and their shared goal was to promote or increase physical activity through partnership approaches. We used the Critical Appraisal Skills Programme UK (CASP) checklist and Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) tool to guide the critical appraisal of included records, and thematic analysis to summarise and synthesise the findings.
Results: Findings (n = 32 articles) described public health interventions (n = 19) aiming to promote physical activity through cross-sector collaboration and/or partnerships. We identified barriers, facilitators and recommendations in relation to four broad themes: approaching and selecting partners, funding, building capacity and taking joint action.
Conclusion: Common challenges that partners face are related to allocating time and resources, and sustaining momentum. Identifying similarities and differences between partners early on and building good relationships, strong momentum and trust can take considerable time. However, these factors may be essential for fruitful collaboration. Boundary spanners in the physical activity system could help translate differences and consolidate common ground between cross-sector partners, accelerating joint leadership and introducing systems thinking.
Prospero Registration Number: CRD42020226207.
abstract_id: PUBMED:26847443
Baseline- and health enhancing physical activity in adults with obesity Physical inactivity is one of the major risk factors for people to become overweight or obese. To achieve a substantial health benefit, adults should do at least 150 min of moderate or 75 min of high intensity aerobic activity per week and additionally they should do muscle strengthening exercises. This recommendation represents the lower limit and not the optimum. To loose body weight a significantly higher level of physical activity is required. Exercise programs can play an important part to reach the required level of health-enhancing physical activity. The Austrian pilot projects "Aktiv Bewegt" and "GEHE-Adipositas" showed that obese adults were interested in structured exercise programs and that they were also willing to use them. Clear defined quality criteria, the differentiation from conventional programs for already active and fit people and a recommendation from a doctor or other health professionals were important motivation reasons.
abstract_id: PUBMED:26286974
Health-Enhancing Physical Activity: Associations with Markers of Well-Being. Background: The association between health-enhancing physical activity (HEPA) and well-being was investigated across a cross-sectional (Study 1; N=243) and a longitudinal, two-wave (Study 2; N=198) design. Study 2 further examined the role played by fulfilling basic psychological needs in terms of understanding the mechanisms via which HEPA is associated with well-being.
Methods: Women enrolled in undergraduate courses were surveyed.
Results: In general, greater HEPA was associated with greater well-being (Study 1; rs ranged from .03 to .25). Change score analyses revealed that increased HEPA positively predicted well-being (Study 2; R(2) adj=0.03 to 0.15) with psychological need fulfilment underpinning this relationship.
Conclusions: Collectively these findings indicate that increased engagement in health-enhancing physical activity represents one factor associated with greater well-being. Continued investigation of basic psychological need fulfilment as one mechanism underpinning the HEPA-well-being relationship appears justified.
abstract_id: PUBMED:35853152
Cross-sector co-creation of a community-based physical activity program for breast cancer survivors in Colombia. Benefits of physical activity (PA) in breast cancer survivors (BCS) are well established. However, programs to promote PA among BCS tailored to real-world contexts within low- to middle-income countries are limited. Cross-sector co-creation can be key to effective and scalable programs for BCS in these countries. This study aimed to evaluate the networking process to engage multisector stakeholders in the co-creation of a PA program for Colombian BCS called My Body. We employed a mixed-methods design including semistructured interviews, workshops and a social network analysis of centrality measures to assess stakeholders' engagement, resources and skills enabling the collaborative work, challenges, outcomes and lessons learned. The descriptive analysis and the centrality measures of the network revealed that 19 cross-sector stakeholders engaged in the My Body collaborative network. Through ongoing communication and cooperation, My Body built relationships between the academic lead institutions (local and international), and local and national public, private and academic institutions working in public health, sports and recreation, social sciences and engineering fields. The outcomes included the co-creation of the community-based PA program for BCS, its implementation through cross-sector synergies, increased relationships and communications among stakeholders, and successful dissemination of evidence and project results to the collaboration partners and other relevant stakeholders and community members. The mixed-methods assessment enabled understanding of ways to advance cross-sector co-creation of health promotion programs. The findings can help to enable continued development of sustainable cross-sector co-creation processes aimed at advancing PA promotion.
abstract_id: PUBMED:23727399
Health-enhancing physical activity and associated factors in a Spanish population. Objectives: This study describes the prevalence of health-enhancing physical activity and leisure-time physical activity in a Spanish sample and identifies the characteristics of the physically active and inactive populations.
Design: A cross-sectional study.
Methods: A random sample of 1595 adults (18-70 years old) living in Catalonia, Spain were assessed using the International Physical Activity Questionnaire (short version) and categorised according to their physical activity levels. The independent associations between physical activity levels and socio-demographic and health-related variables were investigated.
Results: Seventy-seven percent of the population engaged in health-enhancing physical activity. Being a young adult (odds ratio=2.0; 95% confidence interval=1.25-3.21) and having a normal weight (odds ratio=1.46; 95% confidence interval=1.04-2.03) were positively associated with a high health-enhancing physical activity level. Living in a medium-sized town (odds ratio=1.60; 95% confidence interval=1.09-2.35) was positively associated with a moderate health-enhancing physical activity level, whereas being male (odds ratio=0.72; 95% confidence interval=0.53-0.96) odds ratio a middle-aged adult (odds ratio=0.67; 95% confidence interval=0.46-0.97) was negatively associated with a moderate health-enhancing physical activity level. Regarding leisure-time physical activity, 16.1% of the participants were active, 28.3% were lightly active and 55.6% were sedentary. Being male, being a non-smoker, having a normal weight and living with a partner increased the odds of engaging in leisure-time physical activity.
Conclusions: Engaging in health-enhancing physical activity is common but not during leisure time, as concluded based on a representative sample of adults from Catalonia, Spain. Being a young adult, having a normal weight odds ratio living in a medium-sized town was positively associated with a high health-enhancing physical activity level, whereas being male odds ratio a middle-aged adult was negatively associated with a moderate health-enhancing physical activity level.
abstract_id: PUBMED:34187658
The importance of ensuring long-lasting public-private cooperation The health crisis has led to real cooperation between the public and private health sector, for the benefit of the patients. This cooperation must not be restricted to certain circumstances but form a permanent part of a care provision regulated by equity, trust, transparency and recognition of the missions accomplished by healthcare workers.
abstract_id: PUBMED:38174750
Translation and cross-cultural adaptation of the modified Short QUestionnaire to Assess Health-enhancing physical activity (mSQUASH) into Turkish. Aims The aim was to translate and cross-culturally adapt the modified Short Questionnaire to Assess Health-enhancing physical activity (mSQUASH) into Turkish Methods The mSQUASH was translated into Turkish and backward-translation into Dutch was performed afterwards using the Beaton method. After the Turkish version was reviewed and revised by an expert committee that included translators, two patients and the research team a pre-final version was produced. The-pre final version then entered a field-test with cognitive debriefing in 10 patients with axSpA. The final result was the Turkish mSQUASH version. Results The translation process went without difficulties. Small discrepancies were either resolved during the synthesis or expert consensus meetings. Mean (SD) time to complete the mSQUASH was 6.1 (2.4) minutes in field-test procedure. The cognitive debriefing showed that the items of the Turkish mSQUASH were clear, relevant, easy to understand and easy to complete. None of the patients reported that an important aspect of physical activity was missing from the questionnaire items. Patients raised the concern that not all sport examples were culturally suitable; tennis was replaced by volleyball and basketball after the cognitive debriefing, to make it more appropriate to the Turkish culture. Conclusion The final Turkish version of the mSQUASH showed acceptable linguistic and field validity for use in both clinical practice and research. However, further assessment of the psychometric properties (validity and reliability) of the Turkish version of the mSQUASH is needed before it can be implemented.
abstract_id: PUBMED:28216486
Behavioural factors enhancing mental health - preliminary results of the study on its association with physial activity in 15 to 16 year olds. Introduction: Reliable information on the influence of behavioural factors on adolescent mental health may help to implement more effective intervention programmes.
Objective: The objective of the study was to determine whether physical activity influences the variability of selected indices of mental health.
Methods: The study comprised 2,015 students aged 15-16, who were investigated as part of the HBSC survey (Health Behaviour in School-aged Children) in the 2013/14 school year. The dependent variable was the mental health index GHQ-12 (0-36 points) and its two domains (social dysfunction, anxiety and depression). Physical activity was measured with the MVPA (moderate-to-vigorous physical activity). Multivariable linear models were estimated, with overall GHQ index and partial indices as dependent variables.
Results: Adolescents reported a mean GHQ-12 score of 12.57 (±7.06). In a multivariable analysis, the pressure with school work along with gender resulted to be the main GHQ-12 predictors. School achievements and a high level of physical activity were identified as strong protective factors. Taking into account other factors, the GHQ-12 index falls by 2.13 points when comparing adolescents with an extremely low and high MVPA. Protective effect of physical activity appeared to be stronger in small towns and villages than in big cities and was more visible in the domain of social dysfunction.
Conclusions: Physical activity remains an important predictor of mental health, even when the impact of sociodemographic and environmental factors as well as the respondents' school achievements are considered. Researches of such types, taking into account more complex determinants, wider behavioural factors' spectre as well as other outcome measures connected with teenagers' mental condition, should be continue.
abstract_id: PUBMED:31064351
The perceived neighborhood environment is associated with health-enhancing physical activity among adults: a cross-sectional survey of 13 townships in Taiwan. Background: Many environmental factors have been associated with physical activity. The environment is considered a key factor in terms of the rate of engagement in physical activity. This study examined the perceived effect of environmental factors on different levels of health-enhancing physical activity among Taiwanese adults.
Methods: Data were collected from 549 adults aged at least 18 years from the northern, central, southern and eastern regions of Taiwan. Physical activity was measured using the International Physical Activity Questionnaire (IPAQ) showcard version, and participants were divided into three categories: those who performed low-, moderate-, or high-levels of physical activity, as suggested by the IPAQ scoring protocol. The perceived neighborhood environment in relation to physical activity was adapted from the Physical Activity Neighborhood Environment Scale. A multinomial logistic regression was conducted to ascertain associations between individual perceptions of the neighborhood environment and different physical activity levels.
Results: Respondents who perceived their neighborhood environment as having easy access to services and stores, and higher traffic safety were more likely to be moderate level of physical activity (odds ratio [OR]: 1.90, 95% confidence interval [CI]: 1.07-3.37; OR: 1.77, 95% CI: 1.12-2.80). The perception of having easy access to services and stores and seeing many physically active people in the neighborhood were both positively associated with a high level of physical activity (OR: 2.25, 95% CI: 1.01-5.01; OR: 2.40, 95% CI: 1.11-5.23).
Conclusions: Different perceived neighborhood environmental factors were associated with moderate and high levels of physical activity, respectively. These findings highlight the importance of an activity-friendly neighborhood environment to stimulate engagement in physical activity among adults in Taiwan. Therefore, policies and programs should focus on improving friendliness and diversity in neighborhoods to facilitate individuals' transitions from inactive to active lifestyles.
Answer: Cross-sector cooperation in health-enhancing physical activity (HEPA) policymaking has shown potential, but the achievements vary across different contexts and regions. The qualitative content analysis of HEPA policies and interviews with key policymakers in six European countries revealed that the extent of cross-sector cooperation differed among EU member states. The main issues identified in the cross-sector policy process included stakeholder involvement, governance structures, and coordination structures and processes. Successful cooperation required joint planning and evaluation, financial frameworks, clear mandates, communication lines, and valued processes of cooperation (PUBMED:27129850).
A systematic review of barriers and facilitators to cross-sector partnerships in promoting physical activity highlighted common challenges such as allocating time and resources and sustaining momentum. Building good relationships, trust, and identifying similarities and differences between partners early on were essential for fruitful collaboration. The role of boundary spanners was emphasized to help translate differences and consolidate common ground between cross-sector partners (PUBMED:37332258).
In Colombia, cross-sector co-creation was key to developing a community-based physical activity program for breast cancer survivors. The networking process engaged stakeholders from various sectors, including public health, sports, recreation, social sciences, and engineering fields. The outcomes included the co-creation and implementation of the program, increased relationships and communications among stakeholders, and successful dissemination of evidence and project results (PUBMED:35853152).
Despite the potential, the actual achievements of cross-sector cooperation in HEPA policymaking can be limited by various factors. These include the need for clear responsibilities, goals, communication, learning, and adaptation, as well as specific resources allocated for cross-sector cooperation to enhance stakeholder empowerment and manage processes and outcomes effectively (PUBMED:27129850). Additionally, the importance of ensuring long-lasting public-private cooperation beyond specific health crises has been recognized, emphasizing the need for equity, trust, transparency, and recognition of healthcare workers' missions (PUBMED:34187658).
In conclusion, while there is significant potential for cross-sector cooperation in HEPA policymaking, the achievements are not uniform and depend on various factors that influence the success of such collaborations. |
Instruction: Is diffusion imaging appearance an independent predictor of outcome after ischemic stroke?
Abstracts:
abstract_id: PUBMED:12427888
Is diffusion imaging appearance an independent predictor of outcome after ischemic stroke? Background: MR diffusion-weighted imaging (DWI) in ischemic stroke can be quantified by calculating the apparent diffusion coefficient (ADC) or measuring lesion volume.
Objective: To clarify the association between DWI lesion parameters, clinical stroke severity at baseline, and the relationship with functional outcome.
Methods: Consecutive patients with stroke were categorized for stroke type (Oxford Community Stroke Project Classification [OCSP]) and severity (Canadian Neurologic Scale [CN Scale]) before DWI. The ratio of the trace of the apparent diffusion tensor in the ischemic lesion to the mirror image area in the contralateral hemisphere was calculated (<ADC>r). The volume of the visible lesion on DWI was measured. Any visible lesion on T2-weighted imaging (T2WI) was noted. All assessments were blind to all other information. A blinded observer obtained a 6-month Rankin score. Univariate and multivariate analyses were performed to test for independent associations with outcome.
Results: In 108 patients, those with lower (i.e., more abnormal) <ADC>r values had more severe strokes according to the CN Scale (p = 0.01) and the OCSP stroke type (p = 0.002), a large lesion on DWI (p = 0.05), a visible lesion on T2WI (p = 0.001), and poor 6-month functional outcome (p = 0.009). However, on logistic regression, neither <ADC>r nor DWI lesion volume were independent predictors of 6-month outcome over and above age and stroke severity.
Conclusion: The <ADC>r is associated with functional outcome, but that is because it and DWI lesion volume are also associated with stroke severity. Although DWI lesion features are univariate surrogate outcome predictors, the authors were unable to show that they were independent outcome predictors in the current study. Differences between these and other results may be due to differences in study design, sample size, and case mix.
abstract_id: PUBMED:11062281
Is early ischemic lesion volume on diffusion-weighted imaging an independent predictor of stroke outcome? A multivariable analysis. Background And Purpose: The heterogeneity of stroke makes outcome prediction difficult. Neuroimaging parameters may improve the predictive value of clinical measures such as the National Institutes of Health Stroke Scale (NIHSS). We investigated whether the volume of early ischemic brain lesions assessed with diffusion-weighted imaging (DWI) was an independent predictor of functional outcome.
Methods: We retrospectively selected patients with nonlacunar ischemic stroke in the anterior circulation from 4 prospective Stanford Stroke Center studies evaluating early MRI. The baseline NIHSS score and ischemic stroke risk factors were assessed. A DWI MRI was performed within 48 hours of symptom onset. Clinical characteristics and early lesion volume on DWI were compared between patients with an independent outcome (Barthel Index score >/=85) and a dependent outcome (Barthel Index score <85) at 1 month. A logistic regression model was performed with factors that were significantly different between the 2 groups in univariate analysis.
Results: Sixty-three patients fulfilled the entry criteria. One month after symptom onset, 24 patients had a Barthel Index score <85 and 39 had a Barthel Index score >/=85. In univariate analysis, patients with independent outcome were younger, had lower baseline NIHSS scores, and had smaller lesion volumes on DWI. In a logistic regression model, DWI volume was an independent predictor of outcome, together with age and NIHSS score, after correction for imbalances in the delay between symptom onset and MRI.
Conclusions: DWI lesion volume measured within 48 hours of symptom onset is an independent risk factor for functional independence. This finding could have implications for the design of acute stroke trials.
abstract_id: PUBMED:35412044
Clinical Impact and Predictors of Diffusion Weighted Imaging (DWI) Reversal in Stroke Patients with Diffusion Weighted Imaging Alberta Stroke Program Early CT Score 0-5 Treated by Thrombectomy : Diffusion Weighted Imaging Reversal in Large Volume Stroke. Purpose: To determine whether reversal of DWI lesions (DWIr) on the DWI-ASPECTS (diffusion weighted imaging Alberta Stroke Program CT Score) template should serve as a predictor of 90-day clinical outcome in acute ischemic stroke (AIS) patients with pretreatment diffusion-weighted imaging (DWI)-ASPECTS 0-5 treated with thrombectomy, and to determine its predictors in current practice.
Methods: We analyzed data of all consecutive patients included in the prospective multicenter national Endovascular Treatment in Ischemic Stroke Registry between 1 January 2015 and 31 December 2020 with a premorbid mRS ≤ 2, who presented with a pretreatment DWI-ASPECTS 0-5 score, underwent thrombectomy and had an available 24 h post-interventional MRI follow-up. Multivariable analyses were performed to evaluate the clinical impact of DWIr on early neurological improvement (ENI), 3‑month modified Rankin scale (mRS) score distribution (shift analysis) and to define independent predictors of DWIr.
Results: Early neurological improvement was detected in 82/211 (41.7%) of patients while 3‑month functional independence was achieved by 75 (35.5%) patients. The DWI reversal (39/211, 18.9%) resulted an independent predictor of both ENI (aOR 3.6, 95% CI 1.2-7.7; p 0.018) and 3‑month clinical outcome (aOR for mRS shift: 2.2, 95% CI 1-4.6; p 0.030). Only successful recanalization (mTICI 2c-3) independently predicted DWIr in the studied population (aOR 3.3, 95% CI 1.3-7.9; p 0.009).
Conclusion: The DWI reversal occurs in a non-negligible proportion of DWI-ASPECTS 0-5 patients subjected to thrombectomy and significantly influences clinical outcome. The mTICI 2c-3 recanalization emerged as an independent DWIr predictor.
abstract_id: PUBMED:16525124
MR diffusion-weighted imaging and outcome prediction after ischemic stroke. Background: MR diffusion-weighted imaging (DWI) shows acute ischemic lesions early after stroke so it might improve outcome prediction and reduce sample sizes in stroke treatment trials. Previous studies of DWI and outcome produced conflicting results.
Objective: To determine whether DWI lesion characteristics independently predict outcome in a broad range of patients with acute stroke.
Methods: The authors recruited hospital-admitted patients with all severities of suspected stroke, assessed stroke severity on the NIH Stroke Scale (NIHSS), performed early brain DWI, and assessed outcome at 3 months (modified Rankin Scale). Clinical data and DWI lesion parameters were evaluated in a logistic regression model to identify independent predictors of outcome at 3 months and a previously described "Three-Item Scale" (including DWI) was tested for outcome prediction.
Results: Among 82 patients (mean NIHSS 7.1 [+/-6.3 SD]), the only independent outcome predictors were age and stroke severity. Neither DWI lesion volume nor apparent diffusion coefficient nor the previously described Three-Item Scale predicted outcome independently. Comparison with previous studies suggested that DWI may predict outcome only in patients with more severe cortical ischemic strokes.
Conclusions: Across a broad range of stroke severities, diffusion-weighted imaging (DWI) did not predict outcome beyond that of key clinical variables. Thus, DWI is unlikely to reduce sample sizes in acute stroke trials assessing functional outcome, especially where estimated treatment effects are modest.
abstract_id: PUBMED:31399748
Associations Between Diffusion Dynamics and Functional Outcome in Acute and Early Subacute Ischemic Stroke. Purpose: The current study aimed to investigate the associations between diffusion dynamics of ischemic lesions and clinical functional outcome of acute and early subacute stroke.
Material And Methods: A total of 80 patients with first ever infarcts in the territory of the middle cerebral artery underwent multi-b-values diffusion-weighted imaging and diffusion kurtosis imaging. Multiple diffusion parameters were generated in postprocessing using different diffusion models. Long-term functional outcome was evaluated with modified Rankin scale (mRS) at 6 months post-stroke. Good functional outcome was defined as mRS score ≤ 2 and poor functional outcome was defined as mRS score ≥ 3. Univariate analysis was used to compare the diffusion parameters and clinical features between patients with poor and good functional outcome. Significant parameters were further analyzed for correlations with functional outcome using partial correlation.
Results: In univariate analyses, standard-b-values apparent diffusion coefficient (ADCst) ratio and fractional anisotropy (FA) ratio of acute stroke, ADCst ratio and mean kurtosis (MK) ratio of early subacute stroke were statistically different between patients with poor outcome and good outcome (P < 0.05). When the potential confounding factor of lesion volume was controlled, only FA ratio of acute stroke, ADCst ratio and MK ratio of early subacute stroke remained correlated with the functional outcome (P < 0.05).
Conclusion: Diffusion dynamics are correlated with the clinical functional outcome of ischemic stroke. This correlation is independent of the effect of lesion volume and is specific to the time period between symptom onset and imaging. More effort is needed to further investigate the predictive value of diffusion-weighted imaging.
abstract_id: PUBMED:31779719
Qualitative Posttreatment Diffusion-Weighted Imaging as a Predictor of 90-day Outcome in Stroke Intervention. Purpose: The aim was to assess the ability of post-treatment diffusion-weighted imaging (DWI) to predict 90-day functional outcome in patients with endovascular therapy (EVT) for large vessel occlusion in acute ischemic stroke (AIS).
Methods: We examined a retrospective cohort from March 2016 to January 2018, of consecutive patients with AIS who received EVT. Planimetric DWI was obtained and infarct volume calculated. Four blinded readers were asked to predict modified Rankin Score (mRS) at 90 days post-thrombectomy.
Results: Fifty-one patients received endovascular treatment (mean age 65.1 years, median National Institutes of Health Stroke Scale (NIHSS) 18). Mean infarct volume was 43.7 mL. The baseline NIHSS, 24-hour NIHSS, and the DWI volume were lower for the mRS 0-2 group. Also, the thrombolysis in cerebral infarction (TICI) 2b/3 rate was higher in the mRS 0-2 group. No differences were found in terms of the occlusion level, reperfusion technique, or recombinant tissue plasminogen activator use. There was a significant association noted between average infarct volume and mRS at 90 days. On multivariable analysis, higher infarct volume was significantly associated with 90-day mRS 3-5 when adjusted to TICI scores and occlusion location (OR 1.01; CI 95% 1.001-1.03; p = 0.008). Area under curve analysis showed poor performance of DWI volume reader ability to qualitatively predict 90-day mRS.
Conclusion: The subjective impression of DWI as a predictor of clinical outcome is poorly correlated when controlling for premorbid status and other confounders. Qualitative DWI by experienced readers both overestimated the severity of stroke for patients who achieved good recovery and underestimated the mRS for poor outcome patients. Infarct core quantitation was reliable.
abstract_id: PUBMED:20957383
Diffusion-weighted ASPECTS as an independent marker for predicting functional outcome. Whether lesion volume on diffusion-weighted MRI imaging (DWI) can reliably predict functional outcome in acute ischemic stroke is controversial. The aim of our study was to assess whether the Alberta Stroke Program Early CT Score (ASPECTS) on DWI is useful for predicting functional outcome in patients with anterior circulation infarction with a broad range of severities. Three-hundred and fifty patients with first-ever ischemic stroke in the anterior circulation within 24 h of onset were enrolled. We compared background characteristics, vital signs, laboratory data, and MRI findings between favorable (F) and unfavorable (U) outcome groups at 3 months, according to the modified Rankin Scale (mRS). The F and U groups were defined as having a mRS of 0-2 and 3-6, respectively. DWI ASPECTS was scored by DWI obtained 3-24 h after onset. Two-hundred and eighteen patients (62.3%) were classified into the F group and 132 patients (37.7%) into the U group. On univariate analysis, the F group patients were younger, had lower score of the National Institutes of Health Stroke Scale (NIHSS) at entry (5.7 ± 3.3 vs. 14.2 ± 6.0), male predominance, longer time after onset, lower rate of prior antithrombotic therapy, higher hematocrit and lower fibrinogen than the U group patients. Stroke subtype was different between the two groups, and F group patients had higher DWI ASPECTS score, lower leukoaraiosis and medial temporal atrophy score, and lower rate of early neurological deterioration (END) than the U group patients. Multiple logistic regression analysis revealed that NIHSS (p < 0.001), prior antithrombotic therapy (p = 0.013), ASPECTS (p = 0.002), and END (p < 0.001) were independent predictors of functional outcome. DWI ASPECTS can be an independent predictor for functional outcome, along with other clinical variables.
abstract_id: PUBMED:36109692
In-hospital clinical outcomes in diffusion weighted imaging-negative stroke treated with intravenous thrombolysis. Objective: We aimed to investigate whether negative diffusion weighted imaging (DWI) is related to the in-hospital clinical outcomes for ischemic stroke patients with intravenous tissues plasminogen activator (IV tPA).
Methods: We retrospectively enrolled patients who received IV tPA therapy within 4.5 hours from symptoms onset. The classification of DWI-positive or negative was based on post-IV tPA MR scan. Demographic factors, stroke characteristics, imaging information, and the in-hospital clinical outcomes including early neurological improvement (ENI) and favourable functional outcome were collected. Multivariable logistic regression and sensitivity analyses were conducted to test whether negative DWI imaging was an independent predictor of the in-hospital clinical outcomes.
Results: In the final study population, 437 patients treated with IV tPA were included and 12.36% of them had negative DWI imaging at the first MR scan post IV tPA. In the DWI-negative group, 51.9% (28/54) of the patients achieved ENI at 24 hours and 74.1% (40/54) of the patients achieved favourable clinical outcome at discharge. DWI-negative was not related to ENI (adjusted odds ratio 0.93, 95% confidence interval 0.17-4.91) or favourable clinical outcome (adjusted odds ratio 2.40, 95% confidence interval 0.48-11.95). Additional sensitivity analyses yielded similar results.
Conclusion: DWI-negative is not associated with ENI or favourable functional outcome at discharge.
abstract_id: PUBMED:26585396
Stroke Location Is an Independent Predictor of Cognitive Outcome. Background And Purpose: On top of functional outcome, accurate prediction of cognitive outcome for stroke patients is an unmet need with major implications for clinical management. We investigated whether stroke location may contribute independent prognostic value to multifactorial predictive models of functional and cognitive outcomes.
Methods: Four hundred twenty-eight consecutive patients with ischemic stroke were prospectively assessed with magnetic resonance imaging at 24 to 72 hours and at 3 months for functional outcome using the modified Rankin Scale and cognitive outcome using the Montreal Cognitive Assessment (MoCA). Statistical maps of functional and cognitive eloquent regions were derived from the first 215 patients (development sample) using voxel-based lesion-symptom mapping. We used multivariate logistic regression models to study the influence of stroke location (number of eloquent voxels from voxel-based lesion-symptom mapping maps), age, initial National Institutes of Health Stroke Scale and stroke volume on modified Rankin Scale and MoCA. The second part of our cohort was used as an independent replication sample.
Results: In univariate analyses, stroke location, age, initial National Institutes of Health Stroke Scale, and stroke volume were all predictive of poor modified Rankin Scale and MoCA. In multivariable analyses, stroke location remained the strongest independent predictor of MoCA and significantly improved the prediction compared with using only age, initial National Institutes of Health Stroke Scale, and stroke volume (area under the curve increased from 0.697-0.771; difference=0.073; 95% confidence interval, 0.008-0.155). In contrast, stroke location did not persist as independent predictor of modified Rankin Scale that was mainly driven by initial National Institutes of Health Stroke Scale (area under the curve going from 0.840 to 0.835). Similar results were obtained in the replication sample.
Conclusions: Stroke location is an independent predictor of cognitive outcome (MoCA) at 3 months post stroke.
abstract_id: PUBMED:29314208
Pretreatment lesional volume impacts clinical outcome and thrombectomy efficacy. Objective: We aimed to characterize the association between pretreatment lesional volume measured on diffusion-weighted images and functional outcome, and estimate the impact on thrombectomy efficacy for ischemic stroke with anterior proximal intracranial arterial occlusion.
Methods: Anterior circulation ischemic stroke patients who had pretreatment diffusion-weighted imaging in the THRACE study were included. Lesional volume was semiautomatically segmented. Logistic regression was applied to model clinical outcome as a function of lesional volume. Outcomes included functional independence (modified Rankin Scale [mRS] 0-2), degree of disability (ordinal mRS 0-6), and mortality at 3 months.
Results: Of 298 included patients, with median lesional volume 17.2ml (interquartile range [IQR], 9.2-51.8) and median mRS 2 (IQR, 1-4), 51.0% achieved functional independence. Increased lesional volume was an independent predictor for a lower probability of functional independence (odds ratio [OR], 0.90 [95% confidence interval {CI}, 0.81-0.99] per 10ml; p < 0.001), a less favorable degree of disability (common OR, 0.86 [95% CI, 0.81-0.90] per 10ml; p < 0.001), and a higher mortality rate (OR, 1.21 [95% CI, 1.08-1.37] per 10ml; p < 0.001). For additional thrombectomy, the number of patients needed to treat to achieve functional independence in 1 patient increased with lesional volume (10 for a volume of 80ml; 15 for 135ml). No significant treatment-by-dichotomized volume interaction for functional independence and mortality was observed.
Interpretation: Pretreatment lesional volume is an independent predictor for functional outcome in acute ischemic stroke with proximal intracranial occlusion. The clinical benefit of adding mechanical thrombectomy to thrombolysis decreased with the increase of lesional volume. Ann Neurol 2018;83:178-185.
Answer: The relationship between diffusion imaging appearance and outcome after ischemic stroke is complex and has been the subject of multiple studies with varying conclusions. Some studies have found that diffusion-weighted imaging (DWI) parameters, such as lesion volume, can be independent predictors of functional outcome. For instance, one study found that DWI lesion volume measured within 48 hours of symptom onset was an independent risk factor for functional independence (PUBMED:11062281). Another study reported that DWI reversal after thrombectomy was an independent predictor of both early neurological improvement and 3-month clinical outcome (PUBMED:35412044). Additionally, DWI ASPECTS (Alberta Stroke Program Early CT Score) was found to be an independent predictor for functional outcome, along with other clinical variables (PUBMED:20957383).
However, other studies have reported that DWI parameters do not independently predict outcomes beyond key clinical variables such as age and stroke severity. One study concluded that DWI did not predict outcome independently across a broad range of stroke severities (PUBMED:16525124). Similarly, another study found that neither the apparent diffusion coefficient ratio nor DWI lesion volume were independent predictors of 6-month outcome over and above age and stroke severity (PUBMED:12427888). Furthermore, a study assessing the predictive value of post-treatment DWI for 90-day functional outcome found that the subjective impression of DWI was poorly correlated with actual outcomes when controlling for confounders (PUBMED:31399748).
In addition, the timing of the imaging relative to symptom onset and the specific diffusion parameters measured can influence the association with outcome. For example, diffusion dynamics were found to be correlated with clinical functional outcome, and this correlation was independent of the effect of lesion volume and specific to the time period between symptom onset and imaging (PUBMED:31399748). Another study indicated that pretreatment lesional volume was an independent predictor for functional outcome in acute ischemic stroke with proximal intracranial occlusion (PUBMED:29314208).
In summary, while some studies suggest that diffusion imaging appearance can be an independent predictor of outcome after ischemic stroke, others indicate that it may not provide additional predictive value beyond established clinical factors. The predictive value of DWI may also depend on the specific patient population, the timing of imaging, and the stroke characteristics. |
Instruction: Does fast-track treatment lead to a decrease of intensive care unit and hospital length of stay in coronary artery bypass patients?
Abstracts:
abstract_id: PUBMED:16614584
Does fast-track treatment lead to a decrease of intensive care unit and hospital length of stay in coronary artery bypass patients? A meta-regression of randomized clinical trials. Objective: Evaluation of randomized, controlled clinical trials studying fast-track treatment in low-risk coronary artery bypass grafting patients.
Design: Meta-regression.
Patients: Low-risk coronary artery bypass grafting patients.
Interventions: Fast-track treatments including (high or low) anesthetic dose, normothermia vs. hypothermia, and extubation protocol (within or after 8 hrs).
Measurements: Number of hours of intensive care unit stay, number of days of hospital stay, prevalence of myocardial infarction, and death. Furthermore, quality of life and cost evaluations were evaluated. The epidemiologic and economic qualities of the different trials were also assessed.
Main Results: A total of 27 studies evaluating fast-track treatment were identified, of which 12 studies were with major and 15 were without major differences in extubation protocol or anesthetic treatment or both. The use of an early extubation protocol (p=.000) but not the use of a low anesthetic dose (p=.394) or normothermic temperature management (p=.552) resulted in a decrease of the total intensive care unit stay of low-risk coronary artery bypass grafting patients. Early extubation was found to be an important determinant of the total hospital stay for these patients. An influence of the type of fast-track treatment on mortality or the prevalence of postoperative myocardial infarction was not observed. In general, the epidemiologic and economic qualities of included studies were moderate.
Conclusions: Although fast-track anesthetics and normothermic temperature management facilitate early extubation, the introduction of an early extubation protocol seems essential to decrease intensive care unit and hospital stay in low-risk coronary artery bypass grafting patients.
abstract_id: PUBMED:36590715
A 3-hour fast-track extubation protocol for early extubation after cardiac surgery. Objectives: Early extubation after cardiac surgery improves outcomes and reduces cost. We investigated the effect of a multidisciplinary 3-hour fast-track protocol on extubation, intensive care unit length of stay time, and reintubation rate after a wide range of cardiac surgical procedures.
Methods: We performed an observational study of 472 adult patients undergoing cardiac surgery at a large academic institution. A multidisciplinary 3-hour fast-track protocol was applied to a wide range of cardiac procedures. Data were collected 4 months before and 6 months after protocol implementation. Cox regression model assessed factors associated with extubation time and intensive care unit length of stay.
Results: A total of 217 patients preprotocol implementation and 255 patients postprotocol implementation were included. Baseline characteristics were similar except for the median procedure time and dexmedetomidine use. The median extubation time was reduced by 44% (4:43 hours vs 3:08 hours; P < .001) in the postprotocol group. Extubation within 3 hours was achieved in 49.4% of patients in the postprotocol group compared with 25.8% patients in the preprotocol group; P < .001. There was no statistically significant difference in the intensive care unit length of stay after controlling for other factors. Early extubation was associated with only 1 patient requiring reintubation in the postprotocol group.
Conclusions: The multidisciplinary 3-hour fast-track extubation protocol is a safe and effective tool to further reduce the duration of mechanical ventilation after a wide range of cardiac surgical procedures. The protocol implementation did not decrease the intensive care unit length of stay.
abstract_id: PUBMED:26139591
Effects on length of stay and costs with same-day retransfer to the referring hospitals for patients with acute coronary syndrome after angiography and/or percutaneous coronary intervention. Background: Fast track interventions may generate benefits for patients and hospitals by representing a potential for shorter hospital stay. The aim of this study was to investigate how same-day retransfers to the referring hospital after angiographic examination and/or percutaneous coronary intervention (PCI) at the PCI centre affected length of stay and hospital treatment costs for patients with acute coronary syndrome.
Methods And Results: Three hundred and ninety-nine consecutive admitted patients were prospectively randomized to ordinary care with overnight stay or fast track with same-day retransfer. Length of stay at both the PCI centre and the referring hospital after the stay at the PCI centre were recorded. Costs at the PCI centre related to examinations and treatments were also collected. The ordinary care group included 206 patients and the fast track group 193 patients. Forty-six per cent underwent PCI and 10% coronary artery bypass graft (CABG) in the ordinary care group. In the fast track group 40% had PCI and 6% CABG. Length of stay was reduced at the PCI centre from a median 1.25 days for the ordinary care group to median 0.24 days for the FT group (p<0.001). Length of stay at the PCI centre was significantly reduced after selective coronary angiography and PCI but not for patients undergoing CABG. No significant difference was identified in length of stay for the referring hospitals. Total median treatment costs were reduced from NOK23,657 (US$3838) for the ordinary care group to NOK15,730 (US$2552) for the fast track group (p<0.001). The main contributor to this reduction was shorter length of stay and the corresponding reduction in ward costs at the PCI centre.
Conclusions: We conclude that fast-track intervention with same-day retransfer for patients with acute coronary syndrome to the referring hospital reduced length of stay and the hospital treatment costs for patients undergoing selective coronary angiography and PCI.
abstract_id: PUBMED:7888272
Determinants of the length of stay in intensive care and in hospital after coronary artery surgery. Background: Patients who have coronary artery surgery normally occupy intensive care beds for less than 24 hours. Longer stays may result in under use of cardiac surgical capacity. One approach to optimise surgical throughput is prospectively to identify fast track patients--that is, those who occupy an intensive care bed for less than 24 hours. A prospective audit of patients was performed to identify fast track patients by simple clinical criteria. Total length of hospital stay was also assessed in an attempt to predict which patients were likely to have a short postoperative stay, defined as < or = 7 days.
Methods: Baseline demographic details, cardiovascular risk factors, angiographic and operative details were recorded for 431 consecutive patients who underwent coronary surgery at a regional centre over a nine month period. Outcome measures were the duration of the stay in the intensive care unit in hours and total duration of the postoperative stay in hospital in days. In addition, two groups of patients who were thought to be fast track were identified prospectively. Fast track 1 patients were identified by criteria selected by cardiovascular physicians. These were age less than 60 years, stable angina, good left ventricular function (ejection fraction > 50%), good renal function (serum creatinine < 120 mumol/l), and no obesity, diabetes, or other serious disease. Fast track 2 patients were identified by criteria defined by cardiovascular surgeons. These were male sex, age less than 65 years, good left ventricular function and no peripheral vascular disease, diabetes, or other serious disease. The efficacy of both sets of criteria in predicting outcome was tested.
Results: 344 (79.8%) patients were fast track. Significant factors for the prediction of fast track patients by univariate analysis (with positive predictive accuracy and sensitivity) were left ventricular ejection fraction > 50% (83%, 80%), left ventricular end diastolic pressure < 13 mm Hg (90%, 59%), creatinine less than 120 mumol/l (83%, 87%), and one or two vessel coronary disease (89%, 34%). Of the patients categorised as fast track 1 89% proved to be fast track (sensitivity 24%), however, the fast track 2 characteristics were not significant. Age, sex, obesity, diabetes, hypertension, a history of obstructive pulmonary disease and unstable angina were not predictive of the duration of intensive care stay. Multivariate analysis indicated that only left ventricular end diastolic pressure and the number of diseased coronary arteries predicted fast track patients. These criteria separated patients into three groups. Those who were good risk had one or two vessel disease and left ventricular end diastolic pressure < 13 mm Hg. They comprised 19% of the total and 93% of them were fast track. Those who were intermediate risk had either three vessel disease or left ventricular end diastolic pressure > 13 mm Hg but not both. They comprised 49% of the total and 85% of them were fast track. Those who were poor risk had both three vessel disease and left ventricular end diastolic pressure > 13 mm Hg. They comprised 32% of the total and 62% of them were fast track. The 106 (24%) patients who spent < or = 7 days in hospital after surgery were significantly younger (mean (SD) 55(8) v 58(8) years; P < 0.001) with a lower incidence of previous myocardial infarction (positive predictive accuracy 30%, sensitivity 53%), were less likely to have a history of obstructive pulmonary disease (25%, 98%), and more likely to have one or two vessel coronary disease (33%, 41%). They were more likely to have an internal mammary artery as a bypass conduit (27%, 89%) and more likely to need fewer than three distal anastomoses of the vein graft (29%, 63%). By multivariate analysis only age was significantly predictive of hospital stay. Total hospital stay could not be satisfactorily modelled on the basis of the criteria tested here. Sex, obesity, diabetes, hypertension, unstable angina, renal function, and left ventricular function were not associated with hospital stay. CONCLUSIONS-Most patients who had coronary artery surgery spent less than or equal to 24 hours in intensive care, but most spent > 7 days in hospital. The chance of a patient spending less than or equal to 24 hours in intensive care could be predicted by the number of coronary arteries diseased and the left ventricular end diastolic pressure. Poor risks patients (32%) had only a 62% chance of an intensive care unit stay of less than or equal to 24 hours. A policy of scheduling no more than one such patient for surgery per day would be simple to institute and would maximise the use of surgical capacity.
abstract_id: PUBMED:21545065
Correlation between EuroSCORE and intensive care unit length of stay after coronary surgery. During the last several years many authors have found that the European System for Cardiac Operative Risk Evaluation is useful in the prediction of not only postoperative mortality but also of the length of stay in the intensive care unit, complication rate and overall treatment expenses. This study included 329 patients who had undergone isolated surgical myocardial revascularization at our Department during the period from January 1st to June 6th, 2008. For the operative risk evaluation, the additive European System for Cardiac Operative Risk Evaluaion was used. In group I (low risk 0-2%) there were 144 patients (43.7%), whereas group II (medium risk 3-5%) and group III (high risk > or = 6%) included 141 (42.8%) and 44 (13.4%) patients, respectively. The length of stay in the intensive care unit was 25.56, 32.43 and 49.59 hours for groups I, II and III, respectively. The difference in the mean length of stay in the intensive care unit between the groups was highly statistically significant (p < 0.001) with a positive correlation (R = 0.193; p < 0.001). There is a positive correlation in patients who had undergone surgical myocardial revascularization in terms of operative risk expressed by the additive European System for Cardiac Operative Risk Evaluation and length of stay in the intensive care unit, total intubation period and development of early postoperative complications.
abstract_id: PUBMED:12626303
The efficiency of fast track protocol in elderly patients who underwent coronary artery surgery Objective: This study is planned to display the efficiency of fast track protocol and its difference from the conventional anesthesia in patients older than 65 years.
Methods: One hundred patients older than 65 years underwent coronary artery surgery between October 2000-March 2001 in cardiovascular surgery clinic were considered in this study. Fifty patients in whom fast track protocol was applied were included into the study group, group A; fifty patients underwent conventional anesthesia technique were referred to the control group, group B. In both groups demographic characteristics, early hospital mortality, operation time, total drainage, number of transfusions, stay in the intensive care unit and discharge time were recorded.
Results: The mean age was 69.0+/-3.0 years in group A and 70.4+/-3.6 years in group B. Early hospital mortality was 2% in group A, 10% in group B (p>0.05). Intensive care unit stay was 22.01+/-10.12 hours in group A and 60.18+/-32.23 hours in group B (p<0.05). Discharge time was on 5.5+/-1.3 day in group A and on 6.9+/-2.3 day in group B (p<0.05). There were no statistical differences between the two groups in respect to other parameters.
Conclusion: Fast track protocol in patients older than 65 years is a suitable technique by using modern cardiac surgery methods. This protocol is successfully used by selecting the suitable patients and following the patients carefully in the postoperative period.
abstract_id: PUBMED:33689923
Prediction of Prolonged Intensive Care Unit Length of Stay Following Cardiac Surgery. Intensive care unit (ICU) costs comprise a significant proportion of the total inpatient charges for cardiac surgery. No reliable method for predicting intensive care unit length of stay following cardiac surgery exists, making appropriate staffing and resource allocation challenging. We sought to develop a predictive model to anticipate prolonged ICU length of stay (LOS). All patients undergoing coronary artery bypass grafting (CABG) and/or valve surgery with a Society of Thoracic Surgeons (STS) predicted risk score were evaluated from an institutional STS database. Models were developed using 2014-2017 data; validation used 2018-2019 data. Prolonged ICU LOS was defined as requiring ICU care for at least three days postoperatively. Predictive models were created using lasso regression and relative utility compared. A total of 3283 patients were included with 1669 (50.8%) undergoing isolated CABG. Overall, 32% of patients had prolonged ICU LOS. Patients with comorbid conditions including severe COPD (53% vs 29%, P < 0.001), recent pneumonia (46% vs 31%, P < 0.001), dialysis-dependent renal failure (57% vs 31%, P < 0.001) or reoperative status (41% vs 31%, P < 0.001) were more likely to experience prolonged ICU stays. A prediction model utilizing preoperative and intraoperative variables correctly predicted prolonged ICU stay 76% of the time. A preoperative variable-only model exhibited 74% prediction accuracy. Excellent prediction of prolonged ICU stay can be achieved using STS data. Moreover, there is limited loss of predictive ability when restricting models to preoperative variables. This novel model can be applied to aid patient counseling, resource allocation, and staff utilization.
abstract_id: PUBMED:32990156
Derivation and Validation of a Clinical Model to Predict Intensive Care Unit Length of Stay After Cardiac Surgery. Background Across the globe, elective surgeries have been postponed to limit infectious exposure and preserve hospital capacity for coronavirus disease 2019 (COVID-19). However, the ramp down in cardiac surgery volumes may result in unintended harm to patients who are at high risk of mortality if their conditions are left untreated. To help optimize triage decisions, we derived and ambispectively validated a clinical score to predict intensive care unit length of stay after cardiac surgery. Methods and Results Following ethics approval, we derived and performed multicenter valida tion of clinical models to predict the likelihood of short (≤2 days) and prolonged intensive care unit length of stay (≥7 days) in patients aged ≥18 years, who underwent coronary artery bypass grafting and/or aortic, mitral, and tricuspid value surgery in Ontario, Canada. Multivariable logistic regression with backward variable selection was used, along with clinical judgment, in the modeling process. For the model that predicted short intensive care unit stay, the c-statistic was 0.78 in the derivation cohort and 0.71 in the validation cohort. For the model that predicted prolonged stay, c-statistic was 0.85 in the derivation and 0.78 in the validation cohort. The models, together termed the CardiOttawa LOS Score, demonstrated a high degree of accuracy during prospective testing. Conclusions Clinical judgment alone has been shown to be inaccurate in predicting postoperative intensive care unit length of stay. The CardiOttawa LOS Score performed well in prospective validation and will complement the clinician's gestalt in making more efficient resource allocation during the COVID-19 period and beyond.
abstract_id: PUBMED:19189062
Leipzig fast-track protocol for cardio-anesthesia. Effective, safe and economical Background: In November 2005 a complex, multimodal anesthesia fast-track protocol (FTP) was introduced for elective cardiac surgery patients in the Cardiac Center of the University of Leipzig which included changing from an opioid regime to remifentanil and postoperative treatment in a special post-anesthesia recovery and care unit. The goal was to speed up recovery times while maintaining safety and improving costs.
Method: A total of 421 patients who underwent the FTP and were treated in the special recovery room were analyzed retrospectively. These patients were compared with patients who had been treated by a standard protocol (SP) prior to instituting the FTP. Primary outcomes were time to extubation, length of stay in the intensive care unit (ICU) and treatment costs.
Results: The times to extubation were significantly shorter in the FTP group with 75 min (range 45-110 min) compared to 900 min (range 600-1140 min) in the SP group. Intensive care unit stay and hospital length of stay were also significantly shorter in the FTP group (p<0.01). The reduction of treatment costs of intensive care for FTP patients was 53.5% corresponding to savings of EUR 738 per patient in the FTP group compared with the SP group.
Conclusions: The Leipzig fast-track protocol for cardio-anesthesia including the central elements of switching opiate therapy to remifentanil and switching patient recovery to a special post-anesthesia recovery and care unit, shortened therapy times, is safe and economically effective.
abstract_id: PUBMED:16551817
Determinants of morbidity and intensive care unit stay after coronary surgery. The study evaluated rates and determinants of hospital morbidity, serious morbid events, and prolonged intensive care unit stay associated with isolated coronary artery bypass. The medical records of 391 patients undergoing isolated coronary artery bypass at our center during 2003 were reviewed. The observed crude hospital mortality rate was 2.05%, similar to the EuroSCORE predicted mortality rate of 2.34%. Arrhythmia was the most frequent postoperative complication (17.6%). The serious hospital morbidity rate was 5.9%. The final logistic regression model of serious morbid events identified the following predictors: drug allergy, diabetes, and EuroSCORE. Prolonged intensive care unit stay (>/= 3 days) was observed in 9.5% of patients. Multivariable logistic regression analysis revealed age, preoperative rhythm disturbances, previous cardiac operation, and hypertension as independent predictors of prolonged intensive care unit stay. The rates of hospital mortality, morbidity, and prolonged intensive care unit stay were comparable to those of other major international cardiac surgery centers. These data can be used as a benchmark for further self- and peer-assessment quality improvement activities.
Answer: Yes, fast-track treatment leads to a decrease of intensive care unit (ICU) and hospital length of stay in coronary artery bypass patients. A meta-regression of randomized clinical trials found that the use of an early extubation protocol significantly decreased the total ICU stay for low-risk coronary artery bypass grafting patients. Early extubation was also an important determinant of the total hospital stay for these patients (PUBMED:16614584). Additionally, a study on a 3-hour fast-track extubation protocol for early extubation after cardiac surgery showed that the median extubation time was reduced by 44% in the post-protocol group, although there was no statistically significant difference in the ICU length of stay after controlling for other factors (PUBMED:36590715).
Furthermore, fast-track interventions with same-day retransfer for patients with acute coronary syndrome to the referring hospital reduced length of stay and hospital treatment costs for patients undergoing selective coronary angiography and percutaneous coronary intervention (PUBMED:26139591). Another study on elderly patients who underwent coronary artery surgery using a fast-track protocol showed a significant reduction in ICU stay and discharge time compared to conventional anesthesia (PUBMED:12626303).
Moreover, the Leipzig fast-track protocol for cardio-anesthesia, which included changing from an opioid regime to remifentanil and postoperative treatment in a special post-anesthesia recovery and care unit, resulted in significantly shorter times to extubation, ICU stay, and hospital length of stay, along with a reduction in treatment costs (PUBMED:19189062).
These findings suggest that fast-track treatment protocols, particularly those involving early extubation, can effectively decrease the length of ICU and hospital stays for coronary artery bypass patients. |
Instruction: Do hip prosthesis related infection codes in administrative discharge registers correctly classify periprosthetic hip joint infection?
Abstracts:
abstract_id: PUBMED:26109151
Do hip prosthesis related infection codes in administrative discharge registers correctly classify periprosthetic hip joint infection? Purpose: Administrative discharge registers could be a valuable and easily accessible single-sources for research data on periprosthetic hip joint infection. The aim of this study was to estimate the positive predictive value of the International Classification of Disease 10th revision (ICD-10) periprosthetic hip joint infection diagnosis code in the Danish National Patient Register.
Methods: Patients were identified with an ICD-10 discharge diagnosis code of T84.5 ("Infection and inflammatory reaction due to internal joint prosthesis") in association with hip-joint associated surgical procedure codes in The Danish National Patient Register. Medical records of the identified patients (n = 283) were verified for the existence of a periprosthetic hip joint infection. Positive predictive values with 95% confidence intervals (95% CI) were calculated.
Results: A T84.5 diagnosis code irrespective of the associated surgical procedure code had a positive predictive value of 85% (95% CI: 80-89). Stratified to T84.5 in combination with an infection-specific surgical procedure code the positive predictive value increased to 86% (95% CI: 80-91), and in combination with a noninfection-specific surgical procedure code decreased to 82% (95% CI: 72-89).
Conclusions: Misclassification must be expected and taken into consideration when using administrative discharge registers for epidemiological research on periprosthetic hip joint infection. We believe that the periprosthetic hip joint infection diagnosis code can be of use in future single-source register based studies, but preferably should be used in combination with alternate data sources to ensure higher validity.
abstract_id: PUBMED:30642705
Mortality During Total Hip Periprosthetic Joint Infection. Background: We sought to understand the mortality rate of periprosthetic joint infection (PJI) of the hip undergoing 2-stage revision for infection.
Methods: Database search, yielding 23 relevant studies, totaled 19,169 patients who underwent revision for total hip PJI.
Results: One-year weighted mortality rate was 4.22% after total hip PJI. Five-year mortality was 21.12%. Average age was 65 years. When comparing the national age-adjusted risk of mortality and the reported 1-year mortality risk in this systematic review, the risk of death after total hip PJI is significantly increased (odds ratio 3.58, P < .001).
Conclusion: The mortality rate during total hip revision for infection is high. When counseling a patient regarding complications of this disease, death should be discussed.
abstract_id: PUBMED:31266691
Improved Patient-Reported Quality of Life and Hip Function After Cementless 1-Stage Revision of Chronic Periprosthetic Hip Joint Infection. Background: Limited information is available on health-related quality of life (HRQoL) and patient-reported hip function following treatment for a chronic periprosthetic hip joint infection. The purpose of this study is to evaluate changes in HRQoL and patient-reported hip function 2 years following a cementless 1-stage revision for chronic periprosthetic hip joint infection.
Methods: Patients (n = 52) enrolled in a previously published clinical study on cementless 1-stage revision in chronic periprosthetic hip joint infection prospectively answered the EuroQol-5D, Short-Form Health Survey 36 (SF-36), and Oxford Hip Score preoperatively and at 3, 6, 12, and 24 months follow-up. Results were compared to age-matched and gender-matched population norm.
Results: A significant improvement in HRQoL and patient-reported hip function appeared in the first 3 months after surgery and reached a plateau after 6 months. The patients statistically reached age-matched and gender-matched population norm after 3 to 12 months follow-up on most items, except for Physical Functioning and Social Functioning on the SF-36. The largest effect sizes were found for Oxford Hip Score at 1.8 and for Role Limitation, Physical and Bodily Pain on the SF-36 at 1.5 and 1.6, respectively.
Conclusion: Patients treated with a cementless 1-stage revision for chronic periprosthetic hip joint infection experienced a marked increase in HRQoL and patient-reported hip function, and matched population norms on many parameters.
abstract_id: PUBMED:30927063
Partial two-stage exchange at the site of periprosthetic hip joint infections. Introduction: In the past 10 years an increasing number of studies about partial two-stage exchange arthroplasty in the management of periprosthetic hip infections have been published. The aim of the present work was to systematically review the current knowledge about this procedure, and critically verify the success as well as the complications of this treatment option.
Materials-methods: A literature search was performed through PubMed until June 2018. Search terms were "partial two stage hip" and "partial retention hip", and "retaining well fixed hip".
Results: A total of 7 studies reporting on a total of 80 patients could be identified. All studies had a level of evidence IV. The great majority of the studies reported on the isolated removal of the acetabular cup and placement of an antibiotic-loaded cement spacer head onto the retained, well-fixed stem. Most of the periprosthetic infections were caused by staphylococci. The infection eradication rate varied between 81.3 and 100% at a mean follow-up between 19 and 70 months. Poor outcome was observed at the site of MRSA infections.
Conclusions: The partial two-stage exchange arthroplasty appears to be a possible option in the management of PJI when one prosthetic component is well-fixed so that their removal might result in significant bone loss and compromise of fixation at the time of the later prosthesis reimplantation, and the causative organisms are not multiresistant. The small numbers published about this protocol does not allow for a generalization of application and should be only applied in highly selected patients. Future studies with larger collectives and longer follow-ups are welcome to evaluate the clinical success of this option and its possible role in the management of PJI.
abstract_id: PUBMED:32538176
Intraoperative frozen section histopathology for the diagnosis of periprosthetic joint infection in hip revision surgery: the influence of recent dislocation and/or periprosthetic fracture. Aims: To evaluate the accuracy of intraoperative frozen section histopathology for diagnosing periprosthetic joint infection (PJI) during hip revision surgery, both for patients with and without recent trauma to the hip.
Patients And Methods: The study included all revision total hip replacement procedures where intraoperative frozen section histopathology had been used for the evaluation of infection in a single institution between 2008 and 2015. Musculoskeletal Infection Society criteria were used to define infection. 210 hips were included for evaluation. Prior to revision surgery, 36 hips had a dislocation or a periprosthetic fracture (group A), and 174 did not (group B).
Results: The prevalence of infection was 14.3% (5.6% in group A and 16.1% in group B). Using Feldman criteria, the sensitivity of histopathology was 50.0%, specificity 47.1%, positive predictive value 5.3% and negative predictive value 94.1% in group A. The sensitivity of frozen section histopathology was 75.0%, specificity 96.5%, positive predictive value 85% and negative predictive value 95.3% in group B.
Conclusions: Intraoperative frozen section histopathology is reliable for the diagnosis of PJI if no dislocation or periprosthetic fracture has occurred prior to hip revision surgery.
abstract_id: PUBMED:31187256
Incidence and risk factors for heterotopic ossification following periprosthetic joint infection of the hip. Introduction: Heterotopic ossifications (HOs) commonly occur following total hip arthroplasty. Data regarding the appearance of HO after periprosthetic joint infection (PJI) of the hip are rare. Therefore, the aim of this study was to analyze the incidence and potential risk factors for the development of HO in patients with PJI of the hip.
Materials And Methods: We performed a single-center, retrospective study including patients treated with a two- or multistage operation and patients undergoing salvage procedure in cases of PJI of the hip with a minimum follow-up of 6 months. A total of 150 patients were included in the analysis. The Brooker-scale was used to classify HO. Patients were divided in three groups: (1) No HO, (2) HO Brooker type 1-4, and (3) high-grade HO (HO Brooker type 3 and 4). In each group, we checked possible risk factors for the development of HO for statistical significance.
Results: Patients included in our study had a mean age of 70.4 ± 12.1 years. Of all patients, 75 were women (50%). HOs could be found in 70 patients (46.7%). Twenty-seven patients showed HO Brooker type 1, 23 type 2, 15 type 3 and 5 type 4. Male gender [odds ratio (OR) 2.14; p = 0.022], smoking (OR 5.75; p = 0.025) were significant risk factors for HO. A chronic infection (OR 3.54; p = 0.029) and a higher number of procedures (p = 0.009) were significant risk factors for the development of high-grade HO.
Conclusions: HOs often occur following surgical care of PJI. Male gender, smoking, a chronic infection and high number of operations are risk factors for developing HO after PJI.
abstract_id: PUBMED:26096072
Single-Stage Hip and Knee Exchange for Periprosthetic Joint Infection. Periprosthetic joint infections following hip and knee arthroplasty are challenging complications for Orthopaedic surgeons to manage. The single-stage exchange procedure is becoming increasingly popular with promising results. At our Institute we have demonstrated favourable or similar outcomes compared to the 'gold-standard' two-stage exchange, and other published single-stage results. The aim of this study is to describe the patient selection criteria and perioperative steps in a single-stage exchange for hip and knee arthroplasty undertaken at our Institute. The outlined protocol can be performed using standard debridement, attention to detail and well-recognised reconstructive techniques.
abstract_id: PUBMED:36047015
Estimating incidence rates of periprosthetic joint infection after hip and knee arthroplasty for osteoarthritis using linked registry and administrative health data. Aims: The aim of this study was to estimate the 90-day periprosthetic joint infection (PJI) rates following total knee arthroplasty (TKA) and total hip arthroplasty (THA) for osteoarthritis (OA).
Methods: This was a data linkage study using the New South Wales (NSW) Admitted Patient Data Collection (APDC) and the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR), which collect data from all public and private hospitals in NSW, Australia. Patients who underwent a TKA or THA for OA between 1 January 2002 and 31 December 2017 were included. The main outcome measures were 90-day incidence rates of hospital readmission for: revision arthroplasty for PJI as recorded in the AOANJRR; conservative definition of PJI, defined by T84.5, the PJI diagnosis code in the APDC; and extended definition of PJI, defined by the presence of either T84.5, or combinations of diagnosis and procedure code groups derived from recursive binary partitioning in the APDC.
Results: The mean 90-day revision rate for infection was 0.1% (0.1% to 0.2%) for TKA and 0.3% (0.1% to 0.5%) for THA. The mean 90-day PJI rates defined by T84.5 were 1.3% (1.1% to 1.7%) for TKA and 1.1% (0.8% to 1.3%) for THA. The mean 90-day PJI rates using the extended definition were 1.9% (1.5% to 2.2%) and 1.5% (1.3% to 1.7%) following TKA and THA, respectively.
Conclusion: When reporting the revision arthroplasty for infection, the AOANJRR substantially underestimates the rate of PJI at 90 days. Using combinations of infection codes and PJI-related surgical procedure codes in linked hospital administrative databases could be an alternative way to monitor PJI rates.Cite this article: Bone Joint J 2022;104-B(9):1060-1066.
abstract_id: PUBMED:36323976
Two-stage revision for periprosthetic joint infection in cemented total hip arthroplasty: an increased risk for failure? Background: The impact of the prior fixation mode on the treatment outcome of chronic periprosthetic joint infection (PJI) of the hip is unclear. Removal of cemented total hip arthroplasty (THA) is particularly challenging and residual cement might be associated with reinfection. This study seeks to compare the results of two-stage revision for PJI in cemented and cementless THA.
Methods: We reviewed 143 consecutive patients undergoing two-stage revision THA for PJI between 2013 and 2018. Thirty-six patients with a fully cemented (n = 6), hybrid femur (n = 26) or hybrid acetabulum (n = 4) THA (cemented group) were matched 1:2 with a cohort of 72 patients who underwent removal of a cementless THA (cementless group). Groups were matched by sex, age, number of prior surgeries and history of infection treatment. Outcomes included microbiological results, interim re-debridement, reinfection, all-cause revision, and modified Harris hip scores (mHHS). Minimum follow-up was 2 years.
Results: Compared with PJI in cementless THA, patients undergoing removal of cemented THA had increasingly severe femoral bone loss (p = 0.004). Patients in the cemented group had an increased risk for positive cultures during second-stage reimplantation (22% compared to 8%, p = 0.043), higher rates of reinfection (22% compared to 7%, p = 0.021) and all-cause revision (31% compared to 14%, p = 0.039) compared to patients undergoing two-stage revision of cementless THA. Periprosthetic femoral fractures were more frequent in the group of patients with prior cementation (p = .004). Mean mHHS had been 37.5 in the cemented group and 39.1 in the cementless group, and these scores improved significantly in both groups (p < 0.01).
Conclusion: This study shows that chronic infection in cemented THA might be associated with increased bone loss, higher rates of reinfection and all-cause revision following two-stage revision. This should be useful to clinicians counselling patients with hip PJI and can guide treatment and estimated outcomes.
abstract_id: PUBMED:36436706
Less Than 1-Year Quiescent Period After Septic Arthritis of the Hip is Associated With High Risk of Periprosthetic Joint Infection Following Total Hip Arthroplasty. Background: Approximately 20,000 patients are diagnosed with septic arthritis annually, with 15% specifically affecting the hip joint. These cases exacerbate arthritic changes, often warranting a total hip arthroplasty (THA). Given their prior history of infection, these patients are predisposed to subsequent periprosthetic joint infections (PJIs). Multiple studies suggest delaying THA after a native septic hip, but no study utilizing a large cohort examined the specific timing to mitigate post-THA PJI risk within a short (<1 year) quiescent period after septic arthritis. We sought to compare patients who were diagnosed with septic hip arthritis at time intervals (0-6, or 6-12 months) prior to an ipsilateral primary THA to a cohort of THA patients who never had a septic hip history. Specifically, we assessed: from 90 days to 2 years (1) revisions due to PJI and (2) associated risk factors for PJI at 2-years.
Methods: A national, all-payer database was queried to identify all patients who underwent a primary THA between 2010 and 2021 and patients who had prior ipsilateral septic hip arthritis were characterized using International Classification of Disease and Current Practice Terminology codes (n = 1,052). A randomized sample of patients who never had a history of septic arthritis prior to undergoing THA was used as a nonseptic group comparison (n = 5,000). The incidences of PJI at 90 days through two years were then identified and compared using bivariate chi-square analyses. Risk factors for post-THA PJIs were then analyzed using multivariate regression models.
Results: The septic arthritis cohorts were more likely to require revisions due to PJIs, as compared to the non-septic group at 90 days, 1 year, and 2 years (all P < .0001). Patients who were diagnosed with septic arthritis between 0 and 6 months prior to THA were at greater PJI risk at both one-year (odds ratio (OR) of 43.1 versus 29.6, P < .0001) and two years (OR of 38.3 versus 22.1, P < .0001) compared to patients who had diagnoses between 6 and 12 months. Diabetes mellitus, obesity, and tobacco use were associated risk factors for PJIs at 2 years in the septic hip cohort in comparison to the cohort without a septic hip history.
Conclusion: Less than a 1-year quiescent period after septic arthritis is associated with a 38 times increased risk and a 22 times risk for post-THA PJI, at 0 and 6 months and 6 and 12 months, respectively. Though patients who undergo THA greater than 6 months after their septic arthritis treatment have a decreased risk compared to those between 0 and 6 months the risks are still high. Orthopaedic surgeons should be aware of the increased risks of PJIs when considering performing a THA in patients with a history of septic arthritis.
Answer: The study aimed to estimate the positive predictive value of the International Classification of Disease 10th revision (ICD-10) periprosthetic hip joint infection diagnosis code in the Danish National Patient Register. The results showed that a T84.5 diagnosis code, irrespective of the associated surgical procedure code, had a positive predictive value of 85% (95% CI: 80-89). When stratified, the positive predictive value increased to 86% (95% CI: 80-91) in combination with an infection-specific surgical procedure code, and decreased to 82% (95% CI: 72-89) in combination with a noninfection-specific surgical procedure code. The study concluded that misclassification must be expected and taken into consideration when using administrative discharge registers for epidemiological research on periprosthetic hip joint infection. It suggests that the periprosthetic hip joint infection diagnosis code can be useful in future single-source register-based studies, but should preferably be used in combination with alternate data sources to ensure higher validity (PUBMED:26109151). |
Instruction: Case report: acute bowel obstruction with an isolated transition point in peritoneal dialysis patients; a presentation of encapsulating peritoneal sclerosis?
Abstracts:
abstract_id: PUBMED:26727891
Case report: acute bowel obstruction with an isolated transition point in peritoneal dialysis patients; a presentation of encapsulating peritoneal sclerosis? Background: Encapsulating peritoneal sclerosis (EPS) is classically described as progressive sclerosis and cocooning of the entire peritoneum; however, there has been limited number of reported cases of localized fibrosis as a variant form.
Case Presentation: We describe two cases of acute bowel obstruction with isolated transition points in the setting of long-term peritoneal dialysis.
Conclusion: We postulate that some of the cases of small bowel obstruction with an obvious transition point in long-term peritoneal dialysis patients may represent a unique and localized form of EPS. We aim to emphasize the presence of macroscopic variations in presentation of EPS.
abstract_id: PUBMED:35433847
Small Bowel Obstruction with a Transition Point in a Patient on Peritoneal Dialysis. Small bowel obstruction (SBO) is a rare complication of peritoneal dialysis (PD) that is usually seen in patients with encapsulating peritoneal sclerosis. We present a case of SBO that was caused by mechanical obstruction from omental adhesions around the PD catheter. This is the case of 71-year-old female with end-stage renal disease who was recently started on PD and presented with recurrent syncopal episodes and altered mental status. During hospitalization, the patient began experiencing incomplete drainage of the PD solution. Abdominal computerized tomography revealed SBO with a transition point near the PD catheter. The patient then underwent laparoscopy, which revealed omental adhesions around the PD catheter near the obstruction area, but no adhesion of the intestine was observed. The adhesions were dissected by laparoscopy, and the PD catheter was removed. This case highlights the challenges of PD access.
abstract_id: PUBMED:25601836
Encapsulating peritoneal sclerosis-a rare but devastating peritoneal disease. Encapsulating peritoneal sclerosis (EPS) is a devastating but, fortunately, rare complication of long-term peritoneal dialysis. The disease is associated with extensive thickening and fibrosis of the peritoneum resulting in the formation of a fibrous cocoon encapsulating the bowel leading to intestinal obstruction. The incidence of EPS ranges between 0.7 and 3.3% and increases with duration of peritoneal dialysis therapy. Dialysis fluid is hyperosmotic, hyperglycemic, and acidic causing chronic injury and inflammation in the peritoneum with loss of mesothelium and extensive tissue fibrosis. The pathogenesis of EPS, however, still remains uncertain, although a widely accepted hypothesis is the "two-hit theory," where, the first hit is chronic peritoneal membrane injury from long standing peritoneal dialysis followed by a second hit such as an episode of peritonitis, genetic predisposition and/or acute cessation of peritoneal dialysis, leading to EPS. Recently, EPS has been reported in patients shortly after transplantation suggesting that this procedure may also act as a possible second insult. The process of epithelial-mesenchymal transition of mesothelial cells is proposed to play a central role in the development of peritoneal sclerosis, a common characteristic of patients on dialysis, however, its importance in EPS is less clear. There is no established treatment for EPS although evidence from small case studies suggests that corticosteroids and tamoxifen may be beneficial. Nutritional support is essential and surgical intervention (peritonectomy and enterolysis) is recommended in later stages to relieve bowel obstruction.
abstract_id: PUBMED:35990572
Calcified encapsulating peritoneal sclerosis associated with peritoneal dialysis: A case report. Encapsulating peritoneal sclerosis (EPS) is a rare, but sometimes fatal, complication of peritoneal dialysis characterized by diffuse thickening and encapsulation of the bowel and peritoneum. In more advanced cases, the peritoneum will gradually calcify. EPS usually presents as partial small bowel obstruction and diagnosed on imaging studies. We present a case of a 19-year-old female on long-term peritoneal dialysis with EPS and diffuse peritoneal calcifications.
abstract_id: PUBMED:25343532
A Rare Reason of Ileus in Renal Transplant Patients With Peritoneal Dialysis History: Encapsulated Peritoneal Sclerosis. Encapsulating peritoneal sclerosis is a rare complication of long-term peritoneal dialysis ranging from moderate inflammation of peritoneal structures to severe sclerosing peritonitis and encapsulating peritoneal sclerosis. Complicated it, ileus may occur during or after peritoneal dialysis treatment or after kidney transplant. We sought to evaluate 3 posttransplant encapsulating peritoneal sclerosis through clinical presentation, radiologic findings, and outcomes. We analyzed 3 renal transplant patients with symptoms of encapsulating peritoneal sclerosis admitted posttransplant to our hospital with ileus between 2012 and 2013. Conservative treatment was applied to the patients whenever necessary to avoid surgery. One patient improved with medical therapy. Surgical treatment was delayed and we decided it as a last resort, in 2 cases with no response to conservative treatment for a long time. Finally, patients with peritoneal dialysis history should be searched carefully before renal transplant for intermittent bowel obstruction story.
abstract_id: PUBMED:33270013
A Successful Treatment of Encapsulating Peritoneal Sclerosis in an Adolescent Boy on Long-term Peritoneal Dialysis: A Case Report. Encapsulating peritoneal sclerosis (EPS) is a rare life-threatening complication associated with peritoneal dialysis (PD). EPS is characterized by progressive fibrosis and sclerosis of the peritoneum, with the formation of a membrane and tethering of loops of the small intestine resulting in intestinal obstruction. It is very rare in children. We present a case of a 16-year-old adolescent boy who developed EPS seven years after being placed on continuous ambulatory peritoneal dialysis (CAPD) complicated by several episodes of bacterial peritonitis. The diagnosis was based on clinical, radiological, intraoperative and histopathological findings. The patient was successfully treated with surgical enterolysis. During a 7-year follow-up, there have been no further episodes of small bowel obstruction documented. He still continues to be on regular hemodialysis and is awaiting a deceased donor kidney transplant. EPS is a long-term complication of peritoneal dialysis and is typically seen in adults. Rare cases may be seen in the pediatric population and require an appropriate surgical approach that is effective and lifesaving for these patients.
abstract_id: PUBMED:18663253
Encapsulating peritoneal sclerosis in patients on peritoneal dialysis. Encapsulating peritoneal sclerosis (EPS) is an uncommon but one of the most serious complications in patients on long-term peritoneal dialysis. EPS is characterised by a diffuse thickening and/or sclerosis of the peritoneal membrane which leads to a decreased ultrafiltration and ultimately to bowel obstruction. We present four cases of EPS and discuss the clinical manifestations, multifactorial aetiology, diagnosis, treatment, prognosis, and prevention. We end with a proposal for the development of an EPS prevention guideline.
abstract_id: PUBMED:20690523
Encapsulating peritoneal sclerosis in a peritoneal dialysis patient with prune-belly syndrome: a case report. This case describes a prune-belly syndrome patient who had a kidney transplantation and was diagnosed with Encapsulating Peritoneal Sclerosis (EPS), a rare but potentially fatal condition, mostly associated with Peritoneal Dialysis (PD). The definition of EPS is based on the clinical findings linked to bowel obstruction and on the demonstration of peritoneal thickening. Surgical treatment is the only established basic treatment for the condition. Prune-belly syndrome is characterized by the triad of deficient abdominal musculature, urinary tract abnormality and cryptorchidism. Because it is often associated with end-stage renal disease, PD is essential in the treatment of patients with prune-belly syndrome. The aetiology of EPS follows a 'two-hit theory': the first 'hit' is peritoneal deterioration, caused by long-time exposure to PD. This causes peritoneal disruption which predisposes the patient to a second hit. In our patient, PD discontinuation and renal transplantation are possible 'second hits' that triggered the development of EPS. This case of prune-belly syndrome has all the necessary elements for the development of EPS, and we felt we should report it as the peroperative diagnosis was unexpected.
abstract_id: PUBMED:29668432
Bleeding Peritoneum During Peritoneal Dialysis: A Case of Early Encapsulating Peritoneal Sclerosis? Complications of peritoneal dialysis (PD) create a significant burden for patients and providers. Some complications, such as infections and leaks, are preventable or easily treatable; however, potential fatal complications, such as encapsulating peritoneal sclerosis (EPS), cost patients their lives. Here, we present the case of a PD patient who might have had early, subtle, but ominous symptoms and signs of EPS, diagnosed in its early stages and promptly managed.A 57-year-old man who had been receiving PD for 6 years began having recurrent episodes of abdominal pain, blood-tinged effluent, and peritonitis. Even after successful treatment of his peritonitis episode, his dialysate effluent would be intermittently hazy or pinkish. When he presented with similar complaints for the third time, he was diagnosed with EPS after laparoscopy for further evaluation during his hospitalization.Encapsulating peritoneal sclerosis is a rare complication of PD. The advanced stages of EPS with "EPS syndrome" portend a grave prognosis because of small-bowel obstruction, malnutrition, infection, and death. Early recognition and timely intervention can be a strategy to potentially prevent the progression of EPS.
abstract_id: PUBMED:16300276
Peritoneal dialysis in Japan: the issue of encapsulating peritoneal sclerosis and future challenges. Encapsulating peritoneal sclerosis (EPS) is a life-threatening complication of peritoneal dialysis (PD). The overall prevalence of EPS in Japanese PD patients is 2.3%. Among patients on PD for less than 5 years, the rate is 0.9%; among patients on PD for 5 - 10 years, the rate is 3.8%; and among patients on PD for >10 years, it is 11.5%. Thus, the longer the treatment duration, the higher the prevalence of EPS. Encapsulating peritoneal sclerosis does not result solely from the natural progression of peritoneal sclerosis. A "second hit" event, such as bacterial peritonitis, abdominal bleeding, or abdominal surgery may be needed to trigger the onset of EPS in the face of advanced peritoneal sclerosis. To prevent development of EPS, PD treatment is replaced by other treatments when patients reached high-transport status. Peritoneal lavage and prednisolone administration have been reported to be effective in preventing or stopping the progress of EPS. When bowel obstruction has occurred, total enterolysis to remove the fibrous capsule from the bowel is indicated. To maximize overall quality of life, patients with endstage renal disease (ESRD) should have the choice to make use of all the treatment modalities available: PD, hemodialysis (HD), and transplantation. Furthermore, the development of truly biocompatible PD equipment--including peritoneal catheters, solutions, and systems--are desirable to extend PD treatment for the long-term. The cost of individual products could decrease significantly if PD use were to increase to 30% from 10% among ESRD patients worldwide. As practitioners, we have to further improve the technical survival rate and functional duration of PD treatment so that adequate peritoneal function can be maintained for 10 years in at least 40% of PD patients. The goal is to place PD on par with HD using high-flux dialysis membranes and ultrapure dialysis solution.
Answer: The case report described in PUBMED:26727891 discusses acute bowel obstruction with an isolated transition point in patients undergoing long-term peritoneal dialysis, suggesting that this may represent a unique and localized form of encapsulating peritoneal sclerosis (EPS). EPS is typically characterized by progressive sclerosis and cocooning of the entire peritoneum, but the report indicates that there may be macroscopic variations in the presentation of EPS, including cases with localized fibrosis.
Another case report in PUBMED:35433847 presents a 71-year-old female on peritoneal dialysis who developed small bowel obstruction (SBO) due to mechanical obstruction from omental adhesions around the peritoneal dialysis (PD) catheter. This case highlights the challenges associated with PD access and suggests that SBO in PD patients can also be caused by factors other than EPS, such as adhesions.
EPS is a rare but devastating complication of long-term peritoneal dialysis, associated with extensive thickening and fibrosis of the peritoneum, leading to bowel obstruction (PUBMED:25601836). The incidence of EPS increases with the duration of peritoneal dialysis therapy, and its pathogenesis is believed to involve a "two-hit theory," where chronic peritoneal membrane injury from long-standing peritoneal dialysis is followed by a second hit, such as an episode of peritonitis, genetic predisposition, acute cessation of peritoneal dialysis, or transplantation (PUBMED:25601836).
In summary, acute bowel obstruction with an isolated transition point in peritoneal dialysis patients can be a presentation of encapsulating peritoneal sclerosis, although other causes such as mechanical obstruction from adhesions should also be considered. EPS is a serious condition that requires careful management and can present with various degrees of severity and localization. |
Instruction: Is intra-articular pathology associated with MCL edema on MR imaging of the non-traumatic knee?
Abstracts:
abstract_id: PUBMED:15940487
Is intra-articular pathology associated with MCL edema on MR imaging of the non-traumatic knee? Objective: Edema surrounding the medial collateral ligament (MCL) is seen on MR imaging in patients with MCL injuries and in patients with radiographic osteoarthritis in the non-traumatic knee. Because we noted MCL edema in patients without prior trauma or osteoarthritis, we studied the association between intra-articular pathology and MCL edema in patients without knee trauma.
Design And Patients: We evaluated the MR examinations of 247 consecutive patients (121 male, 126 female with a mean age of 44 years) without recent trauma for the presence of edema surrounding the MCL, meniscal and ACL tears, medial meniscal extrusion, medial compartment chondromalacia, and osteoarthritis. The percentages of patients illustrating MCL edema with and without each type of pathology were compared using Fisher's exact test to determine if there was a statistically significant association.
Results: We found MCL edema in 60% of 247 patients. MCL edema was present in 67% of patients with medial meniscal tears, 35% with lateral meniscal tears, 100% with meniscal extrusion of 3 mm or more, 78% with femoral chondromalacia, 82% with tibial chondromalacia, and 50% with osteoarthritis. The percentage of patients with edema increased with the severity of the chondromalacia. These associations were all statistically significant (p <0.02). The mean age of those with MCL edema was 49.7 years compared with 34.9 years without MCL edema ( p <0.001). Patient gender and ACL tear did not correlate with MCL edema. Nine (4%) of the 247 patients had MCL edema without intra-articular pathology. None of these 9 patients had MCL tenderness or joint laxity on physical examination.
Conclusions: We confirmed that MCL edema is associated with osteoarthritis, but is also associated with meniscal tears, meniscal extrusion, and chondromalacia. In addition, MCL edema can be seen in patients without intra-articular pathology, recent trauma or MCL abnormality on physical examination.
abstract_id: PUBMED:12764655
MR findings in knee osteoarthritis. Knee osteoarthritis (OA) is a leading cause of disability. Recent advances in drug discovery techniques and improvements in understanding the pathophysiology of osteoarthritic disorders have resulted in an unprecedented number of new therapeutic agents. Of all imaging modalities, radiography has been the most widely used for the diagnosis and management of the progression of knee OA. Magnetic resonance imaging is a relatively recent technique and its applications to osteoarthritis have been limited. Compared with conventional radiography, MR imaging offers unparalleled discrimination among articular soft tissues by directly visualizing all components of the knee joint simultaneously and therefore allowing the knee joint to be evaluated as a whole organ. In this article we present the MR findings in knee OA including cartilage abnormalities, osteophytes, bone edema, subarticular cysts, bone attrition, meniscal tears, ligament abnormalities, synovial thickening, joint effusion, intra-articular loose bodies, and periarticular cysts.
abstract_id: PUBMED:12522393
Solitary intra-articular tumoral calcinosis of the knee. An unusual case of symptomatic, solitary, intra-articular tumoral calcinosis of the knee in a 39-year-old man is presented. This is the first reported case of intra-articular tumoral calcinosis with no associated underlying systemic diseases. Magnetic resonance imaging was helpful in delineating the lesion. Surgical excision resulted in resolution of symptoms and was not followed by recurrence of the lesion.
abstract_id: PUBMED:14530646
MR evaluation of radiation synovectomy of the knee by means of intra-articular injection of holmium-166-chitosan complex in patients with rheumatoid arthritis: results at 4-month follow-up. Objective: To determine whether MRI is able to demonstrate the effect of radiation synovectomy after the intra-articular injection of holmium-166-chitosan complex for the treatment of rheumatoid arthritis of the knee.
Materials And Methods: Fourteen patients aged 36-59 years were treated with 10-20 mCi of holmium-166-chitosan complex. A criterion for inclusion in this study was the absence of observable improvement after 3- or more months of treatment of the knee with disease-modifying anti-rheumatic drugs. MR images were acquired both prior to and 4-months after treatment. Clinical evaluation included the use of visual analog scales to assess pain, and the circumference of the knee and its range of motion were also determined. MR evaluation included measurement of the volume of synovial enhancement and wall thickness, the amount of joint effusion, and quantifiable scoring of bone erosion, bone edema and lymph nodes.
Results: Visual analog scale readings decreased significantly after radiation synovectomy (p < 0.05). MRI showed that joint effusion decreased significantly (p < 0.05), and that the volume of synovial enhancement tended to decrease, but to an insignificant extent (p = 0.107).
Conclusion: The decreased joint effusion noted at 4-month follow-up resulted from radiation synovectomy of the rheumatoid knee by means of intra-articular injection of holmium-166-chitosan complex.
abstract_id: PUBMED:35956034
Does Bone Marrow Edema Influence the Clinical Results of Intra-Articular Platelet-Rich Plasma Injections for Knee Osteoarthritis? Platelet-rich plasma (PRP) is increasingly used for the intra-articular treatment of knee osteoarthritis (OA). However, clinical studies on PRP injections reported controversial results. Bone marrow edema (BME) can cause symptoms by affecting the subchondral bone and it is not targeted by intra-articular treatments. The aim of this study was to investigate if the presence of BME can influence the outcome of intra-articular PRP injections in knee OA patients. A total of 201 patients were included in the study, 80 with and 121 without BME at the baseline MRI. BME area and site were evaluated, and BME was graded using the Whole-Organ Magnetic Resonance Imaging Score (WORMS). Patients were assessed with International Knee Documentation Committee (IKDC) score Knee injury and Osteoarthritis Outcome Score (KOOS) subscales, the EuroQol-Visual Analogue Scale (EQ-VAS), and the Tegner score at baseline, 2, 6, and 12 months. Overall, the presence of BME did not influence the clinical results of intra-articular PRP injections in these patients treated for knee OA. Patients with BME presented a similar failure rate and clinical improvement after PRP treatment compared to patients without BME. The area and site of BME did not affect clinical outcomes. However, patients with a higher BME grade had a higher failure rate.
abstract_id: PUBMED:10749260
Treatable chondral injuries in the knee: frequency of associated focal subchondral edema. Objective: In the knee, chondral flaps and fractures are radiographically occult articular cartilage injuries that can mimic meniscal tears clinically; once correctly diagnosed, these injuries can be treated surgically. We investigated an associated MR imaging finding--focal subchondral bone edema--in a series of surgically proven lesions.
Materials And Methods: Two musculoskeletal radiologists retrospectively reviewed the MR studies of 18 knees with arthroscopically proven treatable cartilage infractions, noting articular surface defects and associated subchondral bone edema; subchondral edema was defined as focal regions of high signal intensity in the bone immediately underlying an articular surface defect on a T2-weighted or short inversion time inversion recovery (STIR) image.
Results: The first observer saw focal subchondral edema deep relative to a cartilage surface defect in 15 (83%) of the 18 cases; in two additional cases a surface defect was seen without underlying edema. The second observer identified 13 knees (72%) with surface defects and associated subchondral edema and three with chondral surface defects and no associated edema. Subchondral edema was seen more frequently on fat-suppressed images and on STIR images than non-fat-suppressed images.
Conclusion: Focal subchondral edema is commonly visible on MR images of treatable, traumatic cartilage defects in the knee; this MR finding may prove to be an important clue to assist in the detection of these traumatic chondral lesions.
abstract_id: PUBMED:22724879
Intra-articular injection of autologous mesenchymal stem cells in six patients with knee osteoarthritis. Background: Osteoarthritis (OA) is a progressive disorder of the joints caused by gradual loss of articular cartilage, which naturally possesses a limited regenerative capacity. In the present study, the potential of intra-articular injection of mesenchymal stem cells (MSCs) has been evaluated in six osteoarthritic patients.
Methods: Six female volunteers, average age of 54.56 years, with radiologic evidence of knee OA that required joint replacement surgery were selected for this study. About 50 ml bone marrow was aspirated from each patient and taken to the cell laboratory, where MSCs were isolated and characterized in terms of some surface markers. About 20-24 × 10(6) passaged-2 cells were prepared and tested for microbial contamination prior to intra-articular injection.
Results: During a one-year follow-up period, we found no local or systemic adverse events. All patients were partly satisfied with the results of the study. Pain, functional status of the knee, and walking distance tended to be improved up to six months post-injection, after which pain appeared to be slightly increased and patients' walking abilities slightly decreased. Comparison of magnetic resonance images (MRI) at baseline and six months post-stem cell injection displayed an increase in cartilage thickness, extension of the repair tissue over the subchondral bone and a considerable decrease in the size of edematous subchondral patches in three out of six patients.
Conclusion: The results indicated satisfactory effects of intra-articular injection of MSCs in patients with knee OA.
abstract_id: PUBMED:11046166
Traumatic musculotendinous injuries of the knee: diagnosis with MR imaging. Magnetic resonance (MR) imaging is the imaging modality of choice for evaluation of acute traumatic musculotendinous injuries of the knee. Three discrete categories of acute injuries to the musculotendinous unit can be defined: muscle contusion, myotendinous strain, and tendon avulsion. Among the quadriceps muscles, the rectus femoris is the most susceptible to injury at the myotendinous junction due to its superficial location, predominance of type II fibers, eccentric muscle action, and extension across two joints. Among the muscles of the pes anserinus, the sartorius is the most susceptible to strain injury due to its superficial location and biarticular course. The classic fusiform configuration of the semimembranosus along with a propensity for eccentric actions also make it prone to strain injury. MR imaging findings associated with rupture of the iliotibial tract include discontinuity and edema, which are best noted on coronal images. The same mechanism of injury that tears the arcuate ligament from its fibular insertion can also result in avulsion injury of the biceps femoris. The gastrocnemius muscle is prone to strain injury due to its action across two joints and its superficial location. Injuries of the muscle belly and myotendinous junction of the popliteus are far more common than tendinous injuries.
abstract_id: PUBMED:18405354
Natural course of intra-articular shifting bone marrow edema syndrome of the knee. Background: Intra-articular shift (migration) of bone marrow edema syndrome (BMES) is a very rare disease. Only a few cases have been reported thus far. The condition may cause the clinician to suspect an aggressive disease.
Methods: We reviewed eight patients (four women and four men) with unilateral BMES located in the knee. The patients were aged 39 to 56 years (mean, 49.2 years). In all patients, bone marrow edema (BME) initially observed on magnetic resonance imaging (MR imaging) shifted within the same joint, i.e. from the medial to the lateral femoral condyle or the adjacent bone. Seven patients were given conservative therapy, including limited weight-bearing, for a period of three weeks after the initial detection of BMES, whereas one patient underwent surgical core decompression twice.
Results: MR imaging showed complete restitution in 6 cases and a small residual edema in one case. A final control MR could not be obtained for one patient, who had no pain. A further patient had an avascular necrosis of the contralateral hip after 16 months. Improvement on MR imaging was correlated with the clinical outcome in all cases. All patients became asymptomatic after a mean period of 9 months (6-11).
Interpretation: Intra-articular shifting BMES is a very rare condition. As the disease is self-limiting, conservative therapy may be recommended.
abstract_id: PUBMED:25462055
Atypical location of an osteoid osteoma with atypical anterior knee pain. Introduction: An osteoid osteoma is a painful tumor that most commonly affects the extra-articular parts of the long bones. An intra-articular location of an osteoid osteoma is rare. Various differential diagnoses may arise in connection with such an unusual location because it causes atypical clinical signs.
Presentation Of Case: A 24-year-old male developed pain in the central region of the right knee. Magnetic resonance imaging (MRI) showed no clear pathology in the knee joint. A technetium bone scan and computed tomography (CT) were then ordered and confirmed the presence of an osteoid osteoma in the knee joint. The patient was treated through an anteromedial approach to the knee, and the lesion was removed by excisional biopsy under fluoroscopy.
Discussion: The diagnosis of intra-articular osteoid osteoma is challenging because the clinical presentation can be misleading. MRI is often requested as the first imaging method when dealing with knee symptoms, and radiologists are often unaware of the clinical presentation. Edema seen on MRI can be misleading with respect to the location of the nidus. CT is considered to be the best imaging method because it usually allows for clear visualization of the nidus. Different treatments have been proposed, ranging from open excision to arthroscopic resection.
Conclusion: Osteoid osteoma should be considered in young adult patients with chronic knee pain and no history of trauma.
Answer: Yes, intra-articular pathology is associated with medial collateral ligament (MCL) edema on MR imaging of the non-traumatic knee. A study evaluating the MR examinations of 247 consecutive patients without recent trauma found that MCL edema was present in a significant number of patients with various intra-articular pathologies. Specifically, MCL edema was observed in 67% of patients with medial meniscal tears, 35% with lateral meniscal tears, 100% with meniscal extrusion of 3 mm or more, 78% with femoral chondromalacia, 82% with tibial chondromalacia, and 50% with osteoarthritis. The associations between MCL edema and these pathologies were statistically significant (p < 0.02). The study also noted that the mean age of patients with MCL edema was significantly higher than those without MCL edema (PUBMED:15940487).
However, it is important to note that while there is a strong association, MCL edema can also be seen in patients without intra-articular pathology, recent trauma, or MCL abnormality on physical examination, as 4% of the patients in the study had MCL edema without any of these conditions (PUBMED:15940487). |
Instruction: Are CMS G-Code Functional Limitation Severity Modifiers Responsive to Change Across an Episode of Outpatient Rehabilitation?
Abstracts:
abstract_id: PUBMED:35938548
Thrombosis-related circulating miR-16-5p is associated with disease severity in patients hospitalised for COVID-19. SARS-CoV-2 tropism for the ACE2 receptor, along with the multifaceted inflammatory reaction, is likely to drive the generalized hypercoagulable and thrombotic state seen in patients with COVID-19. Using the original bioinformatic workflow and network medicine approaches we reanalysed four coronavirus-related expression datasets and performed co-expression analysis focused on thrombosis and ACE2 related genes. We identified microRNAs (miRNAs) which play role in ACE2-related thrombosis in coronavirus infection and further, we validated the expressions of precisely selected miRNAs-related to thrombosis (miR-16-5p, miR-27a-3p, let-7b-5p and miR-155-5p) in 79 hospitalized COVID-19 patients and 32 healthy volunteers by qRT-PCR. Consequently, we aimed to unravel whether bioinformatic prioritization could guide selection of miRNAs with a potential of diagnostic and prognostic biomarkers associated with disease severity in patients hospitalized for COVID-19. In bioinformatic analysis, we identified EGFR, HSP90AA1, APP, TP53, PTEN, UBC, FN1, ELAVL1 and CALM1 as regulatory genes which could play a pivotal role in COVID-19 related thrombosis. We also found miR-16-5p, miR-27a-3p, let-7b-5p and miR-155-5p as regulators in the coagulation and thrombosis process. In silico predictions were further confirmed in patients hospitalized for COVID-19. The expression levels of miR-16-5p and let-7b in COVID-19 patients were lower at baseline, 7-days and 21-day after admission compared to the healthy controls (p < 0.0001 for all time points for both miRNAs). The expression levels of miR-27a-3p and miR-155-5p in COVID-19 patients were higher at day 21 compared to the healthy controls (p = 0.007 and p < 0.001, respectively). A low baseline miR-16-5p expression presents predictive utility in assessment of the hospital length of stay or death in follow-up as a composite endpoint (AUC:0.810, 95% CI, 0.71-0.91, p < 0.0001) and low baseline expression of miR-16-5p and diabetes mellitus are independent predictors of increased length of stay or death according to a multivariate analysis (OR: 9.417; 95% CI, 2.647-33.506; p = 0.0005 and OR: 6.257; 95% CI, 1.049-37.316; p = 0.044, respectively). This study enabled us to better characterize changes in gene expression and signalling pathways related to hypercoagulable and thrombotic conditions in COVID-19. In this study we identified and validated miRNAs which could serve as novel, thrombosis-related predictive biomarkers of the COVID-19 complications, and can be used for early stratification of patients and prediction of severity of infection development in an individual.Abbreviations: ACE2, angiotensin-converting enzyme 2AF, atrial fibrillationAPP, Amyloid Beta Precursor ProteinaPTT, activated partial thromboplastin timeAUC, Area under the curveAβ, amyloid betaBMI, body mass indexCAD, coronary artery diseaseCALM1, Calmodulin 1 geneCaM, calmodulinCCND1, Cyclin D1CI, confidence intervalCOPD, chronic obstructive pulmonary diseaseCOVID-19, Coronavirus disease 2019CRP, C-reactive proteinCV, CardiovascularCVDs, cardiovascular diseasesDE, differentially expressedDM, diabetes mellitusEGFR, Epithelial growth factor receptorELAVL1, ELAV Like RNA Binding Protein 1FLNA, Filamin AFN1, Fibronectin 1GEO, Gene Expression OmnibushiPSC-CMs, Human induced pluripotent stem cell-derived cardiomyocytesHSP90AA1, Heat Shock Protein 90 Alpha Family Class A Member 1Hsp90α, heat shock protein 90αICU, intensive care unitIL, interleukinIQR, interquartile rangelncRNAs, long non-coding RNAsMI, myocardial infarctionMiRNA, MiR, microRNAmRNA, messenger RNAncRNA, non-coding RNANERI, network-medicine based integrative approachNF-kB, nuclear factor kappa-light-chain-enhancer of activated B cellsNPV, negative predictive valueNXF, nuclear export factorPBMCs, Peripheral blood mononuclear cellsPCT, procalcitoninPPI, Protein-protein interactionsPPV, positive predictive valuePTEN, phosphatase and tensin homologqPCR, quantitative polymerase chain reactionROC, receiver operating characteristicSARS-CoV-2, severe acute respiratory syndrome coronavirus 2SD, standard deviationTLR4, Toll-like receptor 4TM, thrombomodulinTP53, Tumour protein P53UBC, Ubiquitin CWBC, white blood cells.
abstract_id: PUBMED:29140569
Single-Cell Functional Analysis of Stem-Cell Derived Cardiomyocytes on Micropatterned Flexible Substrates. Human pluripotent stem-cell derived cardiomyocytes (hPSC-CMs) hold great promise for applications in human disease modeling, drug discovery, cardiotoxicity screening, and, ultimately, regenerative medicine. The ability to study multiple parameters of hPSC-CM function, such as contractile and electrical activity, calcium cycling, and force generation, is therefore of paramount importance. hPSC-CMs cultured on stiff substrates like glass or polystyrene do not have the ability to shorten during contraction, making them less suitable for the study of hPSC-CM contractile function. Other approaches require highly specialized hardware and are difficult to reproduce. Here we describe a protocol for the preparation of hPSC-CMs on soft substrates that enable shortening, and subsequently the simultaneous quantitative analysis of their contractile and electrical activity, calcium cycling, and force generation at single-cell resolution. This protocol requires only affordable and readily available materials and works with standard imaging hardware. © 2017 by John Wiley & Sons, Inc.
abstract_id: PUBMED:33869689
Assessment of human bioengineered cardiac tissue function in hypoxic and re-oxygenized environments to understand functional recovery in heart failure. Introduction: Myocardial recovery is one of the targets for heart failure treatment. A non-negligible number of heart failure with reduced ejection fraction (EF) patients experience myocardial recovery through treatment. Although myocardial hypoxia has been reported to contribute to the progression of heart failure even in non-ischemic cardiomyopathy, the relationship between contractile recovery and re-oxygenation and its underlying mechanisms remain unclear. The present study investigated the effects of hypoxia/re-oxygenation on bioengineered cardiac cell sheets-tissue function and the underlying mechanisms.
Methods: Bioengineered cardiac cell sheets-tissue was fabricated with human induced pluripotent stem cell derived cardiomyocytes (hiPSC-CM) using temperature-responsive culture dishes. Cardiac tissue functions in the following conditions were evaluated with a contractile force measurement system: continuous normoxia (20% O2) for 12 days; hypoxia (1% O2) for 4 days followed by normoxia (20% O2) for 8 days; or continuous hypoxia (1% O2) for 8 days. Cell number, sarcomere structure, ATP levels, mRNA expressions and Ca2+ transients of hiPSC-CM in those conditions were also assessed.
Results: Hypoxia (4 days) elicited progressive decreases in contractile force, maximum contraction velocity, maximum relaxation velocity, Ca2+ transient amplitude and ATP level, but sarcomere structure and cell number were not affected. Re-oxygenation (8 days) after hypoxia (4 days) was associated with progressive increases in contractile force, maximum contraction velocity and relaxation time to the similar extent levels of continuous normoxia group, while maximum relaxation velocity was still significantly low even after re-oxygenation. Ca2+ transient magnitude, cell number, sarcomere structure and ATP level after re-oxygenation were similar to those in the continuous normoxia group. Hypoxia/re-oxygenation up-regulated mRNA expression of PLN.
Conclusions: Hypoxia and re-oxygenation condition directly affected human bioengineered cardiac tissue function. Further understanding the molecular mechanisms of functional recovery of cardiac tissue after re-oxygenation might provide us the new insight on heart failure with recovered ejection fraction and preserved ejection fraction.
abstract_id: PUBMED:33149250
Human perinatal stem cell derived extracellular matrix enables rapid maturation of hiPSC-CM structural and functional phenotypes. The immature phenotype of human induced pluripotent stem cell derived cardiomyocytes (hiPSC-CMs) is a major limitation to the use of these valuable cells for pre-clinical toxicity testing and for disease modeling. Here we tested the hypothesis that human perinatal stem cell derived extracellular matrix (ECM) promotes hiPSC-CM maturation to a greater extent than mouse cell derived ECM. We refer to the human ECM as Matrix Plus (Matrix Plus) and compare effects to commercially available mouse ECM (Matrigel). hiPSC-CMs cultured on Matrix Plus mature functionally and structurally seven days after thaw from cryopreservation. Mature hiPSC-CMs showed rod-shaped morphology, highly organized sarcomeres, elevated cTnI expression and mitochondrial distribution and function like adult cardiomyocytes. Matrix Plus also promoted mature hiPSC-CM electrophysiological function and monolayers' response to hERG ion channel specific blocker was Torsades de Pointes (TdP) reentrant arrhythmia activations in 100% of tested monolayers. Importantly, Matrix Plus enabled high throughput cardiotoxicity screening using mature human cardiomyocytes with validation utilizing reference compounds recommended for the evolving Comprehensive In Vitro Proarrhythmia Assay (CiPA) coordinated by the Health and Environmental Sciences Institute (HESI). Matrix Plus offers a solution to the commonly encountered problem of hiPSC-CM immaturity that has hindered implementation of these human based cell assays for pre-clinical drug discovery.
abstract_id: PUBMED:31916131
Absence of Functional Nav1.8 Channels in Non-diseased Atrial and Ventricular Cardiomyocytes. Purpose: Several studies have indicated a potential role for SCN10A/NaV1.8 in modulating cardiac electrophysiology and arrhythmia susceptibility. However, by which mechanism SCN10A/NaV1.8 impacts on cardiac electrical function is still a matter of debate. To address this, we here investigated the functional relevance of NaV1.8 in atrial and ventricular cardiomyocytes (CMs), focusing on the contribution of NaV1.8 to the peak and late sodium current (INa) under normal conditions in different species.
Methods: The effects of the NaV1.8 blocker A-803467 were investigated through patch-clamp analysis in freshly isolated rabbit left ventricular CMs, human left atrial CMs and human-induced pluripotent stem cell-derived CMs (hiPSC-CMs).
Results: A-803467 treatment caused a slight shortening of the action potential duration (APD) in rabbit CMs and hiPSC-CMs, while it had no effect on APD in human atrial cells. Resting membrane potential, action potential (AP) amplitude, and AP upstroke velocity were unaffected by A-803467 application. Similarly, INa density was unchanged after exposure to A-803467 and NaV1.8-based late INa was undetectable in all cell types analysed. Finally, low to absent expression levels of SCN10A were observed in human atrial tissue, rabbit ventricular tissue and hiPSC-CMs.
Conclusion: We here demonstrate the absence of functional NaV1.8 channels in non-diseased atrial and ventricular CMs. Hence, the association of SCN10A variants with cardiac electrophysiology observed in, e.g. genome wide association studies, is likely the result of indirect effects on SCN5A expression and/or NaV1.8 activity in cell types other than CMs.
abstract_id: PUBMED:29125993
Substrate and mechanotransduction influence SERCA2a localization in human pluripotent stem cell-derived cardiomyocytes affecting functional performance. Physical cues are major determinants of cellular phenotype and evoke physiological and pathological responses on cell structure and function. Cellular models aim to recapitulate basic functional features of their in vivo counterparts or tissues in order to be of use in in vitro disease modeling or drug screening and testing. Understanding how culture systems affect in vitro development of human pluripotent stem cell (hPSC)-derivatives allows optimization of cellular human models and gives insight in the processes involved in their structural organization and function. In this work, we show involvement of the mechanotransduction pathway RhoA/ROCK in the structural reorganization of hPSC-derived cardiomyocytes after adhesion plating. These structural changes have a major impact on the intracellular localization of SERCA2 pumps and concurrent improvement in calcium cycling. The process is triggered by cell interaction with the culture substrate, which mechanical cues drive sarcomeric alignment and SERCA2a spreading and relocalization from a perinuclear to a whole-cell distribution. This structural reorganization is mediated by the mechanical properties of the substrate, as shown by the process failure in hPSC-CMs cultured on soft 4kPa hydrogels as opposed to physiologically stiff 16kPa hydrogels and glass. Finally, pharmacological inhibition of Rho-associated protein kinase (ROCK) by different compounds identifies this specific signaling pathway as a major player in SERCA2 localization and the associated improvement in hPSC-CMs calcium handling ability in vitro.
abstract_id: PUBMED:26005764
A defined synthetic substrate for serum-free culture of human stem cell derived cardiomyocytes with improved functional maturity identified using combinatorial materials microarrays. Cardiomyocytes from human stem cells have applications in regenerative medicine and can provide models for heart disease and toxicity screening. Soluble components of the culture system such as growth factors within serum and insoluble components such as the substrate on which cells adhere to are important variables controlling the biological activity of cells. Using a combinatorial materials approach we develop a synthetic, chemically defined cellular niche for the support of functional cardiomyocytes derived from human embryonic stem cells (hESC-CMs) in a serum-free fully defined culture system. Almost 700 polymers were synthesized and evaluated for their utility as growth substrates. From this group, 20 polymers were identified that supported cardiomyocyte adhesion and spreading. The most promising 3 polymers were scaled up for extended culture of hESC-CMs for 15 days and were characterized using patch clamp electrophysiology and myofibril analysis to find that functional and structural phenotype was maintained on these synthetic substrates without the need for coating with extracellular matrix protein. In addition, we found that hESC-CMs cultured on a co-polymer of isobornyl methacrylate and tert-butylamino-ethyl methacrylate exhibited significantly longer sarcomeres relative to gelatin control. The potential utility of increased structural integrity was demonstrated in an in vitro toxicity assay that found an increase in detection sensitivity of myofibril disruption by the anti-cancer drug doxorubicin at a concentration of 0.05 μM in cardiomyocytes cultured on the co-polymer compared to 0.5 μM on gelatin. The chemical moieties identified in this large-scale screen provide chemically defined conditions for the culture and manipulation of hESC-CMs, as well as a framework for the rational design of superior biomaterials.
abstract_id: PUBMED:29617427
Investigation of human iPSC-derived cardiac myocyte functional maturation by single cell traction force microscopy. Recent advances have made it possible to readily derive cardiac myocytes from human induced pluripotent stem cells (hiPSC-CMs). HiPSC-CMs represent a valuable new experimental model for studying human cardiac muscle physiology and disease. Many laboratories have devoted substantial effort to examining the functional properties of isolated hiPSC-CMs, but to date, force production has not been adequately characterized. Here, we utilized traction force microscopy (TFM) with micro-patterning cell printing to investigate the maximum force production of isolated single hiPSC-CMs under varied culture and assay conditions. We examined the role of length of differentiation in culture and the effects of varied extracellular calcium concentration in the culture media on the maturation of hiPSC-CMs. Results show that hiPSC-CMs developing in culture for two weeks produced significantly less force than cells cultured from one to three months, with hiPSC-CMs cultured for three months resembling the cell morphology and function of neonatal rat ventricular myocytes in terms of size, dimensions, and force production. Furthermore, hiPSC-CMs cultured long term in conditions of physiologic calcium concentrations were larger and produced more force than hiPSC-CMs cultured in standard media with sub-physiological calcium. We also examined relationships between cell morphology, substrate stiffness and force production. Results showed a significant relationship between cell area and force. Implementing directed modifications of substrate stiffness, by varying stiffness from embryonic-like to adult myocardium-like, hiPSC-CMs produced maximal forces on substrates with a lower modulus and significantly less force when assayed on increasingly stiff adult myocardium-like substrates. Calculated strain energy measurements paralleled these findings. Collectively, these findings further establish single cell TFM as a valuable approach to illuminate the quantitative physiological maturation of force in hiPSC-CMs.
abstract_id: PUBMED:37189424
Does Enhanced Structural Maturity of hiPSC-Cardiomyocytes Better for the Detection of Drug-Induced Cardiotoxicity? Human induced pluripotent stem cell derived cardiomyocytes (hiPSC-CMs) are currently used following the Comprehensive in vitro Proarrhythmic Assay (CiPA) initiative and subsequent recommendations in the International Council for Harmonization (ICH) guidelines S7B and E14 Q&A, to detect drug-induced cardiotoxicity. Monocultures of hiPSC-CMs are immature compared to adult ventricular cardiomyocytes and might lack the native heterogeneous nature. We investigated whether hiPSC-CMs, treated to enhance structural maturity, are superior in detecting drug-induced changes in electrophysiology and contraction. This was achieved by comparing hiPSC-CMs cultured in 2D monolayers on the current standard (fibronectin matrix, FM), to monolayers on a coating known to promote structural maturity (CELLvo™ Matrix Plus, MM). Functional assessment of electrophysiology and contractility was made using a high-throughput screening approach involving the use of both voltage-sensitive fluorescent dyes for electrophysiology and video technology for contractility. Using 11 reference drugs, the response of the monolayer of hiPSC-CMs was comparable in the two experimental settings (FM and MM). The data showed no functionally relevant differences in electrophysiology between hiPSC-CMs in standard FM and MM, while contractility read-outs indicated an altered amplitude of contraction but not changes in time course. RNA profiling for cardiac proteins shows similarity of the RNA expression across the two forms of 2D culture, suggesting that cell-to-matrix adhesion differences may explain account for differences in contraction amplitude. The results support the view that hiPSC-CMs in both 2D monolayer FM and MM that promote structural maturity are equally effective in detecting drug-induced electrophysiological effects in functional safety studies.
abstract_id: PUBMED:34823103
Synthesis, toxicity evaluation and determination of possible mechanisms of antimicrobial effect of arabinogalactane-capped selenium nanoparticles. Background: The elemental selenium nanoparticles (Se0NPs) find application in biology and medicine due to wide spectrum of their biological activity combined with low toxicity. For instance, Se0NPs are promising antimicrobial agents for plant treatment against the bacterial phytopathogen Clavibacter michiganensis sepedonicus (Cms). Careful characterization of possible mechanisms of antimicrobial action of Se0NPs as well as the assessment of their biosafety for plant and animal organisms represents urgent challenge.
Methods: AG-stabilized Se0NPs (AG/Se0NPs) were synthesized by oxidation of selenide-anions by molecular oxygen dissolved in the reaction medium in the presence of AG macromolecules. The antimicrobial activity of AG/Se0NPs against Cms was investigated both by observing the change in optical density of bacterial suspension and directly evaluating the cell death using fluorescent microscopy with propidium iodide staining. The effect of AG/Se0NPs on the dehydrogenase activity was studied by determination of Cms enzymes ability to reduce colorless TTC to formazan. The effect of AG/Se0NPs nanocomposite on the respiration rate of Cms cells was examined by polarographic method. For qualitative visualization of the potential on the inner membrane of Cms mesosomes, the potential-dependent TMRM dye and fluorescence microscopy were used. The toxicity of the AG/Se0NPs was investigated on white mice by the Litchfield-Wilcoxon method. The effect of AG/Se0NPs on plant organisms (potato plants) was studied on healthy and Cms-infected plants by determining the level of chlorophyll and lipid peroxidation products (LPO) in their leaves when treated with nanoparticles.
Results: Spherical Se° nanoparticles with an average size of 94 nm were obtained using the stabilizing potential of AG. It was found that these nanoparticles exhibited the pronounced (up to 60 %) bacteriostatic action (in 6.25 μg/mL concentration) against the bacterial phytopathogen Cms. It was shown and experimentally confirmed for the first time that the probable causes of the bacteriostatic action of AG/Se°NPs against Cms are non-reversible inhibition of Cms cell respiration, a decrease of the transmembrane potential with a change in the cell wall permeability for H+ protons and a decrease in their dehydrogenase activity. It was revealed that the treatment of healthy and Cms-infected potato plants with an aqueous solution of AG/Se°NPs involved no significant changes in the content of LPO and negative effect on the chlorophyll content, thus contributing to the saving of these values at the level of control intact plants.
Conclusion: Using a complex of complementary methods, we have found that antimicrobial activity of AG/Se0NPs is apparently due to their ability to inhibit the dehydrogenase activity of Cms cells, as well as to disrupt the integrity of the cell membrane, resulting in a decrease of transmembrane potential and reduction of cellular respiration. The antimicrobial and antibiofilm activity of AG/Se0NPs, together with their nontoxicity and safety for plant and animal organisms, determine the prospects for design of AG/Se0NPs-based drugs for the rehabilitation of plants from the Cms.
Answer: The provided abstracts do not contain information related to CMS G Code Functional Limitation Severity Modifiers or their responsiveness to change across an episode of outpatient rehabilitation. The abstracts primarily focus on various aspects of biomedical research, including the study of thrombosis in COVID-19 patients (PUBMED:35938548), the functional analysis of stem-cell derived cardiomyocytes (PUBMED:29140569), the assessment of cardiac tissue function in hypoxic environments (PUBMED:33869689), the maturation of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) (PUBMED:33149250, PUBMED:31916131, PUBMED:29617427, PUBMED:37189424), and the antimicrobial effect of selenium nanoparticles (PUBMED:34823103).
CMS G Code Functional Limitation Severity Modifiers are specific to the Centers for Medicare & Medicaid Services (CMS) and are used in outpatient rehabilitation to report the severity of a patient's functional limitation. These modifiers are intended to track the progress of a patient's functional status throughout their care. However, none of the abstracts provided discuss these modifiers or their use in a rehabilitation context. Therefore, based on the information from the abstracts, it is not possible to determine whether CMS G Code Functional Limitation Severity Modifiers are responsive to change across an episode of outpatient rehabilitation. |
Instruction: A review of 93 cases of severe preeclampsia in Singapore: are there risk factors for complications?
Abstracts:
abstract_id: PUBMED:17728960
A review of 93 cases of severe preeclampsia in Singapore: are there risk factors for complications? Introduction: This study aims to assess the epidemiology of severe preeclampsia in Singapore, the disease characteristics, maternal and perinatal outcome, and to identify risk factors for complications.
Methods: Data of 93 consecutive women with severe preeclampsia in KK Women's and Children's Hospital in Singapore was collected prospectively and analysed using the unpaired t-test for normally-distributed continuous variables and Fisher's exact chi-square test for discrete variables. Multivariate logistic regression analysis was performed for prediction of complicated cases.
Results: The incidence of severe pre-eclampsia was 29.3 per 10,000 deliveries, with an increased risk in women who were aged more than 35 years and who were nulliparous. The risk was also increased in women of the Malay race and they also had the tendency to book later, compared with the other races. 43 percent of women had maternal complications, including eclampsia, haemolysis/elevated liver enzymes/low platelets syndrome, oliguria, pulmonary oedema and placental abruption. Significantly raised levels of uric acid (439.5 +/- 114.1 micromol/L versus 395.4 +/- 96.7 micromol/L, p-value equals 0.047) and aspartate transaminase (80.1 +/- 107.4 IU/L versus 38.8 +/- 16.1 IU/L, p-value equals 0.021) were found in those with complications, compared to those without complications. The average gestation at time of diagnosis was 33 weeks and the average gestation at delivery was 34 weeks. 89.3 percent of women required caesarean section and 59.1 percent of women were admitted to intensive care.
Conclusion: Age, parity and race are risk factors for severe preeclampsia with increased levels of uric acid and aspartate transaminase found in the complicated cases. The morbidity and cost of treatment of severe preeclampsia are high with a large percentage requiring caesarean section and intensive care admission.
abstract_id: PUBMED:24431616
Maternal complications associated with severe preeclampsia. Objective: Hypertension disorders are associated with higher rates of maternal, fetal, and infant mortality, and severe morbidity, especially in cases of severe preeclampsia, eclampsia, and HELLP syndrome. The aim of the study was to determine maternal outcomes in pregnant women with severe preeclampsia.
Data Source: The data source consisted of 349 cases with severe preeclampsia.
Design: A cross-sectional study was undertaken on 349 cases of severe preeclampsia in pregnancy.
Setting/period: The patients selected for this study were from those who presented at Kermanshah University of Medical Sciences, Department of Obstetrics and Gynecology during 2007-2009.
Materials And Methods: Statistical analysis was performed using SPSS 16 software and conducting Chi square and independent sample t tests. Demographic data involving age, parity, gestational age, clinical, and laboratory findings were recorded from the medical files. In addition, delivery route, indications of cesarean delivery, and maternal complications were determined.
Results: Of the 349 severely preeclampsia cases, among the 22 cases (6.3 %) who had suffered from eclamptic seizers, 17 cases (77.3 %) were in the age group of 18-35 years (P = 0.351) and 13 cases (59.1 %) in the gestational age group of 28-37 weeks (P = 0.112). One case (0.3 %) was demonstrated to have HELLP syndrome. Placental abruption was obstetric complication in 7.7 % (27 cases). Delivery route was vaginal in 120 cases (34.4 %), while 229 cases (65.6 %) underwent cesarean delivery. The most frequent maternal complication (37 cases) reported was coagulopathy (10.6 %).
Conclusions: We concluded that severe preeclampsia and eclampsia are associated with higher rates of maternal severe morbidity and that these two factors still remain the major contributors to maternal morbidity in Iran.
abstract_id: PUBMED:30586147
Preventability review of severe maternal morbidity. Introduction: Severe maternal morbidity (SMM) is rising globally. Assessing SMM is an important quality measure. This study aimed to examine SMM in a national cohort in New Zealand.
Material And Methods: This is a national retrospective review of pregnant or postpartum women admitted to an Intensive Care Unit or High Dependency Unit during pregnancy or recent postpartum. Outcomes were rates of SMM and assessment of potential preventability. Preventability was defined as any action on the part of the provider, system or patient that may have contributed to progression to more severe morbidity, and was assessed by a multidisciplinary review team.
Results: Severe maternal morbidity was 6.2 per 1000 deliveries (95% confidence interval 5.7-6.8) with higher rates for Pacific, Indian and other Asian racial groups. Major blood loss (39.4%), preeclampsia-associated conditions (23.3%) and severe sepsis (14.1%) were the most common causes of SMM. Potential preventability was highest with sepsis cases (56%) followed by preeclampsia and major blood loss (34.3% and 30.9%). Of these cases, only 36.4% were managed appropriately as determined by multidisciplinary review. Provider factors such as inappropriate diagnosis, delay or failure to recognize high risk were the most common factors associated with potential preventability of SMM. Pacific Island women had over twice the rate of preventable morbidity (relative risk 2.48, 95% confidence interval 1.28-4.79).
Conclusions: Multidisciplinary external anonymized review of SMM showed that over a third of cases were potentially preventable, being due to substandard provider care with increased preventability rates for racial/ethnic minority women. Monitoring country rates of SMM and implementing case reviews to assess potential preventability are appropriate quality improvement measures and external review of anonymized cases may reduce racial profiling to inform unbiased appropriate interventions and resource allocation to help prevent these severe events.
abstract_id: PUBMED:25028703
Maternal genotype and severe preeclampsia: a HuGE review. Severe preeclampsia is a common cause of maternal and perinatal morbidity worldwide. The disease clusters in families; however, individual genetic studies have produced inconsistent results. We conducted a review to examine relationships between maternal genotype and severe preeclampsia. We searched the MEDLINE and Embase databases for prospective and retrospective cohort and case-control studies reporting associations between genes and severe preeclampsia. Four reviewers independently undertook study selection, quality assessment, and data extraction. We performed random-effects meta-analyses by genotype and predefined functional gene group (thrombophilic, vasoactive, metabolic, immune, and cell signalling). Fifty-seven studies evaluated 50 genotypes in 5,049 cases and 16,989 controls. Meta-analysis showed a higher risk of severe preeclampsia with coagulation factor V gene (proaccelerin, labile factor) (F5) polymorphism rs6025 (odds ratio = 1.90, 95% confidence interval: 1.42, 2.54; 23 studies, I(2) = 29%), coagulation factor II (thrombin) gene (F2) mutation G20210A (rs1799963) (odds ratio = 2.01, 95% confidence interval: 1.14, 3.55, 9 studies, I(2) = 0%), leptin receptor gene (LEPR) polymorphism rs1137100 (odds ratio = 1.75, 95% confidence interval: 1.15, 2.65; 2 studies, I(2) = 0%), and the thrombophilic gene group (odds ratio = 1.87, 95% confidence interval: 1.43, 2.45, I(2) = 27%). There were no associations with other gene groups. There was moderate heterogeneity between studies and potential for bias from poor-quality genotyping and inconsistent definition of phenotype. Further studies with robust methods should investigate genetic factors that might potentially be used to stratify pregnancies according to risk of complications.
abstract_id: PUBMED:22882950
Outcome and risk factors of early onset severe preeclampsia. Background: Early onset severe preeclampsia is a specific type of severe preeclampsia, which causes high morbidity and mortality of both mothers and fetus. This study aimed to investigate the clinical definition, features, treatment, outcome and risk factors of early onset severe preeclampsia in Chinese women.
Methods: Four hundred and thirteen women with severe preeclampsia from June 2006 to June 2009 were divided into three groups according to the gestational age at the onset of preeclampsia as follows: group A (less than 32 weeks, 73 cases), group B (between 32 and 34 weeks, 71 cases), and group C (greater than 34 weeks, 269 cases). The demographic characteristics of the subjects, complications, delivery modes and outcome of pregnancy were analyzed retrospectively.
Results: The systolic blood pressure at admission and the incidence of severe complications were significantly lower in group C than those in groups A and B, prolonged gestational weeks and days of hospitalization were significantly shorter in group C than those in groups A and B. Liver and kidney dysfunction, pleural and peritoneal effusion, placental abruption and postpartum hemorrhage were more likely to occur in group A compared with the other two groups. Twenty-four-hour urine protein levels at admission, intrauterine fetal death and days of hospitalization were risk factors that affected complications of severe preeclampsia. Gestational week at admission and delivery week were also risk factors that affected perinatal outcome.
Conclusions: Early onset severe preeclampsia should be defined as occurring before 34 weeks, and it is featured by more maternal complications and a worse perinatal prognosis compared with that defined as occurring after 34 weeks. Independent risk factors should be used to tailor the optimized individual treatment plan, to balance both maternal and neonatal safety.
abstract_id: PUBMED:25928880
Still births, neonatal deaths and neonatal near miss cases attributable to severe obstetric complications: a prospective cohort study in two referral hospitals in Uganda. Background: Neonatal near miss cases occur more often than neonatal deaths and could enable a more comprehensive analysis of risk factors, short-term outcomes and prognostic factors in neonates born to mothers with severe obstetric complications. The objective was to assess the incidence, presentation and perinatal outcomes of severe obstetric morbidity in two referral hospitals in Central Uganda.
Methods: A prospective cohort study was conducted between March 1, 2013 and February 28, 2014, in which all newborns from cases of severe pregnancy and childbirth complications were eligible for inclusion. The obstetric conditions included obstetric haemorrhage, hypertensive disorders, obstructed labour, chorioamnionitis and pregnancy-specific complications such as malaria, anemia and premature rupture of membranes. Still births, neonatal deaths and neonatal near miss cases (defined using criteria that employed clinical features, presence of organ-system dysfunction and management provided to the newborns were compiled). Stratified and multivariate logistic regression analysis was conducted to identify risk factors for perinatal death.
Results: Of the 3100 mothers, 192 (6.2%) had abortion complications. Of the remainder, there were 2142 (73.1%) deliveries, from whom the fetal outcomes were 257 (12.0%) still births, 369 (17.2%) neonatal deaths, 786 (36.7%) neonatal near misses and 730 (34.1%) were newborns with no or minimal life threatening complications. Of the 235 babies admitted to the neonatal intensive care unit (NICU), the main reasons for admission were prematurity for 64 (26.8%), birth asphyxia for 59 (23.7%), and grunting respiration for 26 (11.1%). Of the 235 babies, 38 (16.2%) died in the neonatal period, and of these, 16 died in the first 24 hours after admission. Ruptured uterus caused the highest case-specific mortality of 76.8%, and led to 16.9% of all newborn deaths. Across the four groups, there were significant differences in mean birth weight, p = 0.003.
Conclusions: Antepartum hemorrhage, ruptured uterus, severe preeclampsia, eclampsia, and the syndrome of Hemolysis, Elevated Liver Enzymes, Low Platelets (HELLP syndrome), led to statistically significant attributable risk of newborn deaths (still birth or neonatal deaths). Development of severe maternal outcomes, the mothers having been referred, and gravidity of 5 or more were significantly associated with newborn deaths.
abstract_id: PUBMED:36159350
Peripartum Complications as Risk Factors for Postpartum Psychosis: A Systemic Review. The aim of this research paper is to conduct a systematic review of periparturient complications as risk factors of postpartum psychosis. The investigation of risk factors for maternal psychosis following childbirth is complicated by the risk of confounding by a previous psychiatric history; therefore, this systematic review focuses on labor complications as risk factors among women without any previous psychiatric hospitalizations or diagnoses. Articles were collected and analyzed from the PubMed, MEDLINE, and Cochrane Review Library databases, as well as Clinicaltrials.gov, in accordance with the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Article abstracts and article titles of the identified publications were screened independently by all seven authors, and studies were selected if they met the following inclusion criteria: patients were diagnosed with postpartum psychosis per the guidelines in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-V), DSM-IV or World Health Organization's ICD-10 Classification of Mental and Behavioral Disorders; patients presented with no prior psychiatric diagnoses, hospitalizations or history; and the study evaluated the association of periparturient complications to first-onset postpartum psychosis, excluding narrative reviews, systematic reviews, or meta-analyses. Fifteen case-control, cohort, and case report studies, with thousands of patients, were selected to investigate the correlation between perinatal complications and first-onset post-partum psychosis. Obstetric complications during childbirth significantly predisposed for postpartum psychosis in certain individual studies but did not reveal an association in others. More studies must be implemented to elaborate on this limited scope.
abstract_id: PUBMED:29422154
Audit for quality of care and fate of maternal critical cases at Women's Health Hospital. Maternal deaths remain high, numbers at the facility level are relatively low.
Aim: To evaluate effect of management guidelines on occurrence of maternal near miss in Women's Health Hospital.
Design: A cross-sectional study.
Setting: ICU of Women's Health Hospital's at Assiut Main University Hospital and Al-fayoum University Hospital.
Subjects: Convenient sample of 93 maternal near-miss cases including (Pregnancy or postpartum complications). TOOL: audit the applied critical care for severe condition related to obstetric complications and consist of three parts: Patient's demographic data, Audit of critical care and "Maternal near-miss" Fate. Data collected during a period of 1/3/2015 to 30/8/2015 for management guidelines and maternal outcomes.
Results: A statistical significant differences between the medical management and occurrence of sever maternal complications such as (severe postpartum hemorrhage, severe pre-eclampsia, Sepsis or severe systemic infection, uterine hemorrhage, ruptured uterus) (P=0.000, P=0.031, P=0.036, P=0.052, P=0.012 respectively).
Conclusions: The maternal management guidelines was a successful tool in recording the gap between the current received management and standards management guidelines in ICU. Also they measure the effect of current management in ICU on maternal mortality and morbidity.
abstract_id: PUBMED:31334197
Successful management of severe preeclampsia major complications: Case report. Severe preeclampsia (PE) have considerable adverse outcome especially in low-resource countries. A 21-year-old pregnant woman with severe PE and intrauterine fetal death, delivered by cesarean section (CS). The CS complicated by atonic postpartum hemorrhage (PPH). She was transferred by the air ambulance to the tertiary center of West Kazakhstan University-intensive care unit, once she developed anuria. She was carefully monitored after exclusion of maternal sepsis and HELLP (hemolysis, elevated liver enzymes and low platelet) syndrome and she developed postpartum eclampsia and right partial lobe intracranial hemorrhage (ICH). She was managed by multi-disciplinary team with proper and clear management plan and discharged from the hospital on the 20th postpartum day in good general condition. The complications of severe PE need clear multi-disciplinary team management plan to avoid the adverse outcome of the severe PE.
abstract_id: PUBMED:24353670
Risk factors and complications of puerperal sepsis at a tertiary healthcare centre. Objective: To determine the risk factors and complications of puerperal sepsis.
Methods: This was an observational prospective Cohort study conducted from January 2011 to December 2011 at the Obstetrics and Gynaecology Department Liaquat University of Medical & Health Sciences Jamshoro/Hyderabad, Sindh Pakistan. During this study period, all the women who delivered in this hospital or referred to this hospital within 42 days after delivery with puerperal pyrexia/sepsis diagnosed on clinical examination as well as with relevant investigations were included in the study. Women with other ailments like malaria, typhoid fever and postpartum eclampsia during the puerperal period were excluded. The subjects were registered on predesigned proforma after giving informed written consent. The data was collected and analyzed using SPSS version 17.
Results: During this period there were 3316 obstetrical admission and out of these 129(3.89%) women had puerperal sepsis. Most of these women 84(65.11%) were aged 31 years and above, multiparous 101 (78.29%), and unbooked 98 (75.96%) cases. Common risk factors found were absent membranes in 108(83.72%) of the women, delivered or undelivered and mismanaged, referred cases 95(73.64%), are being delivered in this hospital 34(26.35%). Morbidities seen were septicemia in 35 (27.13%) cases, and disseminated intra vascular coagulation in 23(17.82%) cases, while 11 (8.52%) of the women died.
Conclusion: Common risk factors were anaemia; suboptimal personal hygiene as well as improper sterilization which resulted in severe health hazards such as septicemia, disseminated intravascular coagulation as well as death.
Answer: Yes, the review of 93 cases of severe preeclampsia in Singapore identified several risk factors for complications. The study found that the incidence of severe preeclampsia was higher in women who were aged more than 35 years and who were nulliparous. Additionally, women of the Malay race had an increased risk and also tended to book later compared with other races. Maternal complications occurred in 43 percent of the women and included conditions such as eclampsia, hemolysis/elevated liver enzymes/low platelets syndrome, oliguria, pulmonary edema, and placental abruption. Complicated cases had significantly raised levels of uric acid and aspartate transaminase compared to those without complications. The morbidity and cost of treatment of severe preeclampsia were high, with a large percentage requiring cesarean section and intensive care admission (PUBMED:17728960). |
Instruction: Maxillofacial and axial/appendicular giant cell lesions: unique tumors or variants of the same disease?
Abstracts:
abstract_id: PUBMED:20006167
Maxillofacial and axial/appendicular giant cell lesions: unique tumors or variants of the same disease?--A comparison of phenotypic, clinical, and radiographic characteristics. Purpose: The relationship between giant cell lesions (GCLs) of the maxillofacial (MF) skeleton and those of the axial/appendicular (AA) skeleton has been long debated. The present study compared the clinical and radiographic characteristics of subjects with MF and AA GCLs.
Materials And Methods: This was a retrospective cohort study of patients treated for GCLs at Massachusetts General Hospital from 1993 to 2008. The predictor variables included tumor location (MF or AA) and clinical behavior (aggressive or nonaggressive). The outcome variables included demographic, clinical, and radiographic parameters, treatments, and outcomes. Descriptive and bivariate statistics were computed, and P <or= .05 was considered significant.
Results: The sample included 93 subjects: 45 with MF (38 with aggressive and 7 with nonaggressive) and 48 with AA (30 with aggressive and 18 with nonaggressive). Comparing the patients with MF and AA GCLs, those with MF lesions presented younger (P < .001), and the lesions were more commonly asymptomatic (P < .001), smaller (P < .001), and managed differently (P < .001) than AA lesions. When stratified by clinical behavior, aggressive tumors were diagnosed earlier than nonaggressive tumors (P < .001). Controlling for location and clinical behavior, patients with MF aggressive lesions were younger (P < .001) than those with AA aggressive lesions. MF nonaggressive lesions were more commonly asymptomatic (P = .04), smaller (P = .05), and less commonly locally destructive (P = .05) than AA nonaggressive lesions.
Conclusions: These results suggest that MF and AA GCLs represent a similar, if not the same, disease. Comparing the aggressive and nonaggressive subgroups, more similarities were found than when evaluating without stratification by clinical behavior. The remaining differences could be explained by the likelihood that MF tumors are diagnosed earlier than AA tumors because of facial exposure and dental screening examinations and radiographs.
abstract_id: PUBMED:27546031
Genetic Analysis of Giant Cell Lesions of the Maxillofacial and Axial/Appendicular Skeletons. Purpose: To compare the genetic and protein expression of giant cell lesions (GCLs) of the maxillofacial (MF) and axial/appendicular (AA) skeletons. We hypothesized that when grouped according to biologic behavior and not simply by location, MF and AA GCLs would exhibit common genetic characteristics.
Materials And Methods: This was a prospective and retrospective study of patients with GCLs treated at Massachusetts General Hospital from 1993 to 2008. In a preliminary prospective study, fresh tissue from 6 aggressive tumors each from the MF and AA skeletons (n = 12 tumors) was obtained. RNA was extracted and amplified from giant cells (GCs) and stromal cells first separated by laser capture microdissection. Genes highly expressed by GCs and stroma at both locations were determined using an Affymetrix GeneChip analysis. As confirmation, a tissue microarray (TMA) was created retrospectively from representative tissue of preserved pathologic specimens to assess the protein expression of the commonly expressed genes found in the prospective study. Quantification of immunohistochemical staining of MF and AA lesions was performed using Aperio image analysis to determine whether immunoreactivity was predictive of aggressive or nonaggressive behavior.
Results: Five highly ranked genes were found commonly in GCs and stroma at each location: matrix metalloproteinase-9 (MMP-9), cathepsin K (CTSK), T-cell immune regulator-1 (TCIRG1), C-type lectin domain family-11, and zinc finger protein-836. MF (n = 40; 32 aggressive) and AA (n = 48; 28 aggressive) paraffin-embedded tumors were included in the TMA. The proteins CTSK, MMP-9, and TCIRG1 were confirmed to have abundant expression within both MF and AA lesions. Only the staining levels for TCIRG1 within the GCs predicted the clinical behavior of the MF lesions.
Conclusions: MMP-9, CTSK, and TCIRG1 are commonly expressed by GCLs of the MF and AA skeletons. This supports the hypothesis that these lesions are similar but at different locations. TCIRG1 has not been previously associated with GCLs and could be a potential target for molecular diagnosis and/or therapy.
abstract_id: PUBMED:31736609
Axial giant cell tumor - current standard of practice. Giant cell tumors of bone are relatively rare in the axial skeleton, accounting for approximately 6.7% of all cases. Due to their anatomical complexity, difficult access and proximity to vital neurovascular structures, management of these tumors poses a huge challenge on the treating surgeon. Several data series reported on axial GCTB involve short series of limited cases with varied methods used in their local control due to which, proper guidelines are unavailable for the management of such difficult cases. Though the present data support the use of denosumab for effective management of these lesions but there is varied consensus on dosage and duration of treatment. This review article summarizes the basic features and treatment modalities related to axial GCTB stressing on multidisciplinary approach to achieve optimum outcomes.
abstract_id: PUBMED:34252646
Axial flap for giant basal cell carcinoma of the anterior chest wall: Case report. Introduction And Importance: Anterior chest wall Giant Basal Cell Carcinoma (GBCC) is rare amongst GBCC cases and results in a large defect that is challenging to resect and reconstruct. It requires multidisciplinary approach to prevent recurrence.
Case Presentation: A 72-year-old man with giant basal cell carcinoma at the anterior chest wall measuring 10 × 6 cm. Wide resection of 1 cm margin with axial flap was performed to close the defect. The follow-up report stated that the patient was satisfied with the result and there was no recurrence observed.
Clinical Discussion: Review of literatures concludes that GBCC is excised with a minimum of 4-6 mm margin outside the tumor area. The axial IMAP flap is ideal to close the upper chest wall defect because of the better aesthetic outcome compared to other conventional flaps, especially in stable elderly male, patients with noninfected wound. Increased skin laxity and more relaxed skin tension associated with aging allows easier tissue mobilization and transfer to close the defect.
Conclusion: Axial flap for GBCC in anterior chest wall is ideal, safe, and has the advantage of aesthetic reasons of suitable skin tone, particularly for stable elderly male patients.
abstract_id: PUBMED:22935032
Oral and maxillofacial osteosarcoma in dogs: a review. Osteosarcoma in dogs is a heterogeneous disease entity with regard to its histologic, clinical and biologic behaviour. Differences in behaviour are associated with tumour location. Oral and maxillofacial osteosarcomas are typically reported as a component of the broader classifications of axial osteosarcoma or osteosarcoma of flat bones to differentiate them from appendicular osteosarcoma. Similar to human oral and maxillofacial osteosarcoma, in dogs, these also appear to have less aggressive behaviour than appendicular osteosarcoma. Ideally, local control is achieved with wide surgical resection that results in tumour-free margins. Failure of local control is the most common contributor to poor prognosis. Chemotherapy and radiation treatment are reported to have variable outcomes. The aim of this article is to review the literature on oral and maxillofacial osteosarcoma in dogs in comparison to appendicular and axial osteosarcoma. Similarities and differences between oral and maxillofacial osteosarcoma in humans are addressed.
abstract_id: PUBMED:12072763
Giant-cell tumor of the appendicular skeleton. The common objective of all surgical procedures in the treatment of giant-cell tumor of bone is to minimize the incidence of local recurrence. The purpose of this study was to determine what, if any, patient factors, tumor characteristics, or surgical practices correlate with local recurrence. Seventy-five patients treated for a giant-cell tumor of the appendicular skeleton were followed up for at least 2 years. The mean duration of followup was 62 months (range, 24-224 months). The highest proportion of patients had intralesional curettage, high-speed burring, and adjuvant treatment. Ten patients (13%) had a local recurrence. Bivariate analysis revealed that, with the numbers available, none of the patient variables, tumor variables, or surgical approaches correlated with local recurrence. Post hoc power analysis revealed the power of the study to be 33% to detect a clinically significant difference between treatment groups. The data presented here potentially could contribute to a metaanalysis, which would have the statistical power to determine which tumor-related factors and surgical techniques are most important in predicting recurrence in giant-cell tumor of bone.
abstract_id: PUBMED:33569822
Extra-axial sacral soft tissue giant cell ependymoma affecting a child: Case report and review of the literature. An otherwise healthy eight-year-old girl presented with a mass in the soft tissue of the sacral region. The lesion was diagnosed as a vascular malformation on imaging studies, for which percutaneous sclerotherapy was attempted. The mass continued to grow and a complete resection was performed after four years. The pathological diagnosis was giant cell ependymoma (GCE). GCE is a term used to describe a rare histologic variant of ependymoma characterized by malignancy-like morphologic phenotype and indolent behavior. To the best of our knowledge, this is the first case of extra-axial soft tissue sacral GCE reported in a child.
abstract_id: PUBMED:38342030
Maxillofacial challenge: Rare presentation of central giant cell tumor involving both maxilla and mandible. Introduction And Importance: Central giant cell tumor (CGCT) of bone is an uncommon yet locally aggressive neoplasm originating from undifferentiated mesenchymal cells in bone marrow. This case report explores a rare presentation in the maxilla extending to the mandible, emphasizing the complexity of CGCT management and the need for a multidisciplinary approach.
Case Presentation: A 35-year-old female presented with a progressively enlarging non-tender, firm swelling on the left maxilla and a similar mandibular swelling. Paraesthesia of the left lower lip and chin accompanied the mandibular swelling. CT scans and 3D reconstructions revealed expansive osteolytic defects affecting the maxilla and mandible. Biochemical tests supported a central giant cell tumor diagnosis. Histopathology confirmed spindle cell proliferation and multinucleated giant cells in both lesions. Surgical intervention involved excision and reconstruction. A five-month follow-up showed no recurrence, affirming the treatment's success.
Clinical Discussion: Central giant cell tumors (CGCTs) of bone are primarily benign, arising from undifferentiated mesenchymal cells. While mostly benign, they carry a rare potential for malignancy. Diagnosis involves imaging (CT, MRI, bone scintigraphy) and confirmation through biopsy. Surgical resection is the standard treatment, with radiotherapy considered in challenging cases. Recurrence rates vary with the extent of surgical intervention. Alternative treatments like cryotherapy and chemotherapy show varying success.
Conclusion: This case emphasizes the necessity of precise histopathological diagnosis for CGCT management. The intricate nature of maxillary involvement, coupled with mandibular association, mandates a multidisciplinary approach. Surgery, while the primary treatment, should be judiciously determined based on tumor characteristics and recurrence.
abstract_id: PUBMED:34753866
Sclerostin Immunohistochemical Staining in Aggressive Maxillofacial Giant Cell Lesions: Initial Results and Potential Therapeutic Target. Introduction: Maxillofacial (MF) giant cell lesions (GCLs) are benign, often locally aggressive lesions with potential for recurrence. Systemic treatments have included interferon alpha, calcitonin, bisphosphonates, and denosumab. Sclerostin (SOST) is typically thought to be a negative regulator of bone metabolism and anti-SOST agents have been used to treat osteoporosis; however, its role in central giant cell granuloma is unknown. The purpose of this study was to evaluate the expression of SOST in MF GCLs.
Materials And Methods: This was a retrospective study of patients with MF GCLs treated at a single institution between 1993 and 2008 with a minimum follow-up of 6 months. Representative tissue was used to create a tissue microarray and SOST immunohistochemical (IHC) staining and grading was performed. The primary outcomes were IHC staining of the stromal cells and giant cells. The secondary outcomes included correlation of IHC staining and patient predictor variables including clinically benign and aggressive lesions. All analyses were completed using univariate statistical tests.
Results: A total of 37 subjects were included (29 clinically aggressive and 8 clinically benign). Sclerostin staining was present in 30 of 37 subjects (81%). Of these, 22 (60%) had stromal cell staining and 28 (76%) had giant cell staining. The presence or absence of staining, of either cell type, was not associated with aggressiveness, presence of clinical symptoms, tumor size, previous interferon therapy, previous surgery, or the race or age of the patient.
Discussion: Maxillofacial GCLs have an overall high level of SOST staining; however, the role of SOST in treatment and prognosis is unknown and warrants further study.
abstract_id: PUBMED:26464744
Giant cell rich osteosarcoma of the mandible with abundant spindle cells and osteoclast-like giant cells mimicking malignancy in giant cell tumor. Giant cell rich osteosarcoma is a relatively unusual histological form of osteosarcoma, common lesion usually presenting in the long bones of the appendicular skeleton. The occurrence in the mandible is exceptional rare. Histologically, this tumor tends to be a highly anaplastic, pleomorphic tumor in which the tumor cells may be: plasmacytoid, fusiform, ovoid, small round cells, clear cells, mono-or multinucleated giant cells, or, spindle cells. Herein, we present a case with the sternum and first thoracic vertebra metastasis from primary giant cell rich osteosarcoma of the mandible in a 28 year-old Chinese female. The tumor was predominantly composed of abundant spindle cells with marked atypia and numerous osteoclast-like giant cells reminiscent of malignancy in giant cell tumor. The unusual histological appearance can pose a great diagnostic challenge. It may be easily misdiagnosed, especially if the specimen is limited or from fine-needle aspiration.
Answer: Maxillofacial (MF) and axial/appendicular (AA) giant cell lesions (GCLs) have been the subject of debate regarding whether they are unique tumors or variants of the same disease. A comparison of phenotypic, clinical, and radiographic characteristics suggests that MF and AA GCLs may represent a similar, if not the same, disease. This is supported by the observation that MF and AA GCLs share more similarities when stratified by clinical behavior (aggressive or nonaggressive) than when evaluated without such stratification. Differences that do exist could be attributed to the likelihood that MF tumors are diagnosed earlier than AA tumors due to facial exposure and dental screening examinations and radiographs (PUBMED:20006167).
Genetic analysis further supports the hypothesis that MF and AA GCLs are similar but located at different sites. A study found that matrix metalloproteinase-9 (MMP-9), cathepsin K (CTSK), and T-cell immune regulator-1 (TCIRG1) are commonly expressed by GCLs of both the MF and AA skeletons. Notably, TCIRG1 expression within the giant cells predicted the clinical behavior of the MF lesions, suggesting that it could be a potential target for molecular diagnosis and/or therapy (PUBMED:27546031).
In summary, both clinical and genetic analyses indicate that MF and AA GCLs are not entirely unique entities but rather may be variants of the same disease that present differently due to their location and the timing of diagnosis. The shared genetic characteristics and clinical behaviors strengthen the argument for a common disease process, although further research, particularly into the role of TCIRG1 and other molecular markers, could provide additional insights into the relationship between these lesions. |
Instruction: Does lobectomy for lung cancer in patients with chronic obstructive pulmonary disease affect lung function?
Abstracts:
abstract_id: PUBMED:19561925
Lung function changes and complications after lobectomy for lung cancer in septuagenarians. Background: In septuagenarians, lobectomy is the preferable operation, with lower morbidity than for pneumonectomy. However, the 1-year impact of lobectomy on lung function has not been well studied in elderly patients.
Materials And Methods: Retrospective study including 30 patients 70 years or older (study group), 25 patients with chronic obstructive pulmonary disease (COPD) under 70 years (control group 1), and 22 patients under 70 years with normal lung function (control group 2) operated for lung cancer in a 2-year period. The study and control groups were compared related to lung function changes after lobectomy, operative morbidity, and mortality.
Results: Postoperative lung function changes in the elderly followed the similar trend as in patients with COPD. There were no significant differences between these two groups related to changes in forced expiratory volume in the first second (FEV₁) and vital capacity (VC). Unlike that, the pattern of the lung function changes in the elderly was significantly different compared with patients with normal lung function. The mean postoperative decrease in FEV₁ was 14.16% in the elderly, compared with a 29.23% decrease in patients with normal lung function (P < 0.05). In the study and control groups, no patients died within the first 30 postoperative days. The operative morbidity in the elderly group was significantly lower than in patients with COPD (23.3% vs. 60%).
Conclusions: The lung function changes after lobectomy in the elderly are similar to those in patients with COPD. The explanation for such a finding needs further investigation. Despite a high proportion of concomitant diseases, the age itself does not carry a prohibitively high risk of operative mortality and morbidity.
abstract_id: PUBMED:32374491
Outcomes of lobectomy on pulmonary function for early stage non-small cell lung cancer (NSCLC) patients with chronic obstructive pulmonary disease (COPD). Background: Lung cancer is the first cause of cancer mortality worldwide. Chronic obstructive pulmonary disease (COPD) is an independent risk factor for lung cancer. An epidemiological survey discovered that the presence of COPD increases the risk of lung cancer by 4.5-fold. Lobectomy is considered to be the standard surgical method for early stage non-small cell lung cancer (NSCLC). However, the influence of lobectomy on the loss of pulmonary function has not been fully investigated in NSCLC patients with COPD.
Methods: We searched the PubMed database using the following strategies: COPD and pulmonary function test (MeSH term) and lobectomy (MeSH term) from 01 January 1990 to 01 January 2019. We selected the articles of patients with COPD. A total of six studies, including 195 patients with COPD, provided lung function values before and after surgery.
Results: Five out of six studies focused on the short-term change of pulmonary function (within 3-6 months) after lobectomy, and the average loss of FEV1 was 0.11 L (range: -0.33-0.09 L). One study investigated the long-term change of pulmonary function (within 1-2 years) after lobectomy, and the average loss of FEV1 was 0.15 L (range: -0.29-0.05 L).
Conclusions: A short-term (3-6 months) loss of pulmonary function after operation is acceptable for lung cancer patients with COPD. However, there may be a high risk of postoperative complications in NSCLC patients with COPD. Therefore, surgical treatment needs to be carefully considered for these patients.
abstract_id: PUBMED:32857335
Airway inflammation and lung function recovery after lobectomy in patients with primary lung cancer. Objective: Fractional exhaled nitric oxide (FeNO), which represents airway inflammation, is an indicator of postoperative complication after lung surgery. However, its effects in the late postoperative period are unknown. The aim of this prospective study was to clarify the impact of FeNO on postoperative lung function in patients with lung cancer.
Methods: We measured preoperative FeNO using NIOX VERO® in patients with primary lung cancer. Patients were divided into two groups according to their potential airway inflammatory status: preoperative FeNO levels below 25 ppb (N group) and above 25 ppb (H group). They were evaluated by spirometry at 3 and 6 months after surgery during follow-up. The relationship between postoperative lung function and preoperative FeNO was evaluated.
Results: Between September 2017 and March 2019, 61 participants were enrolled. All of them underwent lobectomy as a curative surgery. There were no significant background variables between the two groups. Postoperative vital capacity (VC) and forced expiratory volume in 1 s (FEV1) in the H group achieved less predictive values than those in the N group, which were not significant. The postoperative VC and FEV1 from 3 to 6 months in the H group were significantly increased as compared to those in the N group (p < 0.001).
Conclusions: Preoperative FeNO is a predictor of delayed lung function recovery 3 months after lobectomy in lung cancer patients. The impact had extended to VC and FEV1. Although this impact is temporary, early postoperative intervention is expected to reduce the adverse effect.
abstract_id: PUBMED:23619593
Serial changes in pulmonary function after video-assisted thoracic surgery lobectomy in lung cancer patients. Background: The aim of this study is to evaluate the serial changes in pulmonary function and the recovery time for the observed postoperative values to reach the predicted postoperative values after video-assisted thoracic surgery (VATS) lobectomy for lung cancer.
Patients And Methods: Patients undergoing VATS lobectomy for lung cancer were prospectively evaluated using complete preoperative and repeated postoperative pulmonary function tests (PFTs). The parameters of PFT at each time were compared according to the resected lobe as well as the presence of chronic obstructive pulmonary disease (COPD). The differences between the observed and predicted postoperative values of PFT and the recovery time for the observed values to reach the predicted values were calculated.
Results: Seventy-two patients (33 men, 39 women; mean age: 63.9 years) received complete pre- and postoperative regular PFT after undergoing VATS lobectomy. Of these patients, 24 (33.3%) patients satisfied the criteria for COPD. During the immediate postoperative period, forced vital capacity (FVC) percentage of the patients who received right lower lobectomy patients was decreased most significantly compared with the preoperative values. Compared with the upper lobectomy (UL) group, the lower lobectomy (LL) group showed a significant decrease of FVC% up to 6 months. However, there was no significant difference at 12 months after surgery. Patients with COPD showed little reduction of FEV1% that persisted significantly until 1 month after the surgery in both UL and LL groups. The recovery time was shortest in the left lower lobectomy patients, and it was shorter in the LL group than in the UL group.
Conclusions: Postoperative pulmonary function and recovery time were different depending on the lobe resected and presence of COPD in VATS lobectomy patients. The information obtained from postoperative serial PFT would help accurately predict postoperative pulmonary function changes and recovery time after VATS lobectomy for lung cancer.
abstract_id: PUBMED:26618048
Influence of Pulmonary Rehabilitation on Lung Function Changes After the Lung Resection for Primary Lung Cancer in Patients with Chronic Obstructive Pulmonary Disease. Influence of physiotherapy on the outcome of the lung resection is still controversial. Study aim was to assess the influence of physiotherapy program on postoperative lung function and effort tolerance in lung cancer patients with chronic obstructive pulmonary disease (COPD) that are undergoing lobectomy or pneumonectomy. The prospective study included 56 COPD patients who underwent lung resection for primary non small-cell lung cancer after previous physiotherapy (Group A) and 47 COPD patients (Group B) without physiotherapy before lung cancer surgery. In Group A, lung function and effort tolerance on admission were compared with the same parameters after preoperative physiotherapy. Both groups were compared in relation to lung function, effort tolerance and symptoms change after resection. In patients with tumors requiring a lobectomy, after preoperative physiotherapy, a highly significant increase in FEV1, VC, FEF50 and FEF25 of 20%, 17%, 18% and 16% respectively was registered with respect to baseline values. After physiotherapy, a significant improvement in 6-minute walking distance was achieved. After lung resection, the significant loss of FEV1 and VC occurred, together with significant worsening of the small airways function, effort tolerance and symptomatic status. After the surgery, a clear tendency existed towards smaller FEV1 loss in patients with moderate to severe, when compared to patients with mild baseline lung function impairment. A better FEV1 improvement was associated with more significant loss in FEV1. Physiotherapy represents an important part of preoperative and postoperative treatment in COPD patients undergoing a lung resection for primary lung cancer.
abstract_id: PUBMED:34423004
Complication and lung function impairment prediction using perfusion and computed tomography air trapping (CLIPPCAIR): protocol for the development and validation of a novel multivariable model for the prediction of post-resection lung function. Background: Recent advancements in computed tomography (CT) scanning and post processing have provided new means of assessing factors affecting respiratory function. For lung cancer patients requiring resection, and especially those with respiratory comorbidities such as chronic obstructive pulmonary disease (COPD), the ability to predict post-operative lung function is a crucial step in the lung cancer operability assessment. The primary objective of the CLIPPCAIR study is to use novel CT data to develop and validate an algorithm for the prediction of lung function remaining after pneumectomy/lobectomy.
Methods: Two sequential cohorts of non-small cell lung cancer patients requiring a pre-resection CT scan will be recruited at the Montpellier University Hospital, France: a test population (N=60) on which predictive models will be developed, and a further model validation population (N=100). Enrolment will occur during routine pre-surgical consults and follow-up visits will occur 1 and 6 months after pneumectomy/lobectomy. The primary outcome to be predicted is forced expiratory volume in 1 second (FEV1) six months after lung resection. The baseline CT variables that will be used to develop the primary multivariable regression model are: expiratory to inspiratory ratios of mean lung density (MLDe/i for the total lung and resected volume), the percentage of voxels attenuating at less than ‒950 HU (PVOX‒950 for the total lung and resected volume) and the ratio of iodine concentrations for the resected volume over that of the total lung. The correlation between predicted and real values will be compared to (and is expected to improve upon) that of previously published methods. Secondary analyses will include the prediction of transfer factor for carbon monoxide (TLCO) and complications in a similar fashion. The option to explore further variables as predictors of post-resection lung function or complications is kept open.
Discussion: Current methods for estimating post-resection lung function are imperfect and can add assessments (such as scintigraphy) to the pre-surgical workup. By using CT imaging data in a novel fashion, the results of the CLIPPCAIR study may not only improve such estimates, it may also simplify patient pathways.
Trial Registration: Clinicaltrials.gov (NCT03885765).
abstract_id: PUBMED:12902063
Minimal alteration of pulmonary function after lobectomy in lung cancer patients with chronic obstructive pulmonary disease. Background: The aim of this study was to evaluate the influence of chronic obstructive pulmonary diseases (COPD) on postoperative pulmonary function and to elucidate the factors for decreasing the reduction of pulmonary function after lobectomy.
Methods: We conducted a retrospective chart review of 521 patients who had undergone lobectomy for lung cancer at Chiba University Hospital between 1990 and 2000. Forty-eight patients were categorized as COPD, defined as percentage of predicted forced expiratory volume at 1 second (FEV1) less than or equal to 70% and percentage of FEV1 to forced vital capacity less than or equal to 70%. The remaining 473 patients were categorized as non-COPD.
Results: Although all preoperative pulmonary function test data and arterial oxygen tension were significantly lower in the COPD group, postoperative arterial oxygen tension and FEV1 were equivalent between the two groups, and the ratio of actual postoperative to predicted postoperative FEV1 was significantly better in the COPD group (p < 0.001). With multivariable analysis, COPD and pulmonary resection of the lower portion of the lung (lower or middle-lower lobectomies) were identified as independent factors for the minimal deterioration of FEV1. Actual postoperative FEV1 was 15% lower and higher than predicted, respectively, in the non-COPD patients with upper portion lobectomy and the COPD patients with lower portion lobectomy. Finally, we created a new equation for predicting postoperative FEV1, and it produced a higher coefficient of determination (R(2)) than the conventional one.
Conclusions: The postoperative ventilatory function in patients with COPD who had lower or middle-lower lobectomies was better preserved than predicted.
abstract_id: PUBMED:16675249
Assessment of pulmonary function after lobectomy for lung cancer--upper lobectomy might have the same effect as lung volume reduction surgery. Objective: Lung volume reduction surgery (LVRS) in well-selected patients with severe emphysema results in postoperative improvement in symptoms and pulmonary function. Experience with LVRS suggests that predicted postoperative FEV(1.0) may be underestimated after lobectomy in patients with lung cancer and emphysema. As most of the patients with lung cancer have more or less emphysematous changes in the lungs, we assumed that lobectomy would achieve the same effect as LVRS even in patients without chronic obstructive pulmonary disease on the pulmonary function test. We assessed changes in pulmonary function in terms of 'volume reduction effect' after lobectomy for lung cancer.
Methods: Forty-three patients underwent right upper lobectomy (RUL), 38 patients left upper lobectomy (LUL), 39 patients right lower lobectomy (RLL), and 38 patients left lower lobectomy (LLL). Pulmonary function tests were performed preoperatively and 6 months to 1 year after surgery.
Results: Percent change in FEV(1.0) after lobectomy was -6.9+/-16.1% in RUL group, -11.2+/-16.9% in LUL group, -14.7+/-9.8% in RLL group, and -12.8+/-9.5% in LLL group. We evaluated the correlation between a preoperative FEV(1.0)% of predicted and percentage change in FEV(1.0) after lobectomy. There were no significant relationships between these variables in RLL or LLL group. In contrast, there were significant negative relationships between these variables in RUL and LUL groups. Correlation coefficients were r = -0.667, p < 0.0001 for RUL and r = -0.712, p < 0.0001 for LUL. In RUL and LUL groups, patients with a higher preoperative FEV(1.0)% of predicted had a more adverse percentage change in FEV(1.0) after surgery. In addition, all 13 patients with a preoperative FEV(1.0)% of predicted <60% in RUL and LUL groups had an increase in FEV(1.0) postoperatively. Patients with a lower preoperative FEV(1.0)% of predicted had a greater 'volume reduction effect' with an increase in FEV(1.0) after upper lobectomy.
Conclusion: Upper lobectomy might have a volume reduction effect.
abstract_id: PUBMED:19635204
Influence of chronic obstructive pulmonary disease on postoperative lung function of lung cancer patients and predictive value of lung perfusion scan Background And Objective: Postoperative lung function is closely related to the prognosis of lung cancer patients after lobectomy. This study was to explore the influence of chronic obstructive pulmonary disease (COPD) on postoperative lung function in patients undergoing lobectomy for non-small cell lung cancer (NSCLC), and to assess the predictive value of lung perfusion scan for lung cancer patients with COPD before operation.
Methods: Clinical data of 65 NSCLC patients who underwent lobectomy were analyzed. Of the 65 patients, 25 had COPD (COPD group) and 40 had normal lung function (control group). The change of forced expiratory volume in 1st second (FEV1) after lobectomy and deference between postoperative FEV1 and preoperative predictive postoperative (ppo) FEV1 were compared between the two groups. For ten patients with COPD who had undergone lung perfusion scan before operation, ppo'FEV1 by lung perfusion scan and ppoFEV1 by equation were compared.
Results: The mean percent loss of FEV1 was less in COPD group than in control group (8.98% vs. 22.47%, P<0.05). The value of postoperative FEV1 minus ppoFEV1 and the ratio of postoperative FEV1 to ppoFEV1 were significantly higher in COPD group than in control group (6.90 vs. 0.83, P<0.05; 1.14 vs. 1.01, P<0.05). For the ten patients undergone lung perfusion scan, the mean value of ppo'FEV1 minus ppoFEV1 was 4.04%, with a 95% confidence interval of 3.01%-5.07%.
Conclusions: The mean loss of lung function after lobectomy is less in lung cancer patients with COPD than in patients with normal lung function. Lung perfusion scan before operation may help to predict postoperative lung function of lung cancer patients with COPD.
abstract_id: PUBMED:26323287
Chronic pulmonary aspergillosis as a sequel to lobectomy for lung cancer. Objectives: Chronic pulmonary aspergillosis (CPA) is an emerging complication after lobectomy for lung cancer. This retrospective study aimed to determine the incidence, main risk factors and clinical features of postoperative CPA in lung cancer patients.
Methods: This study included lung cancer patients treated by lobectomy and with no previous history of thoracic surgery or coexistent aspergillosis at the time of surgery. The cumulative incidence of CPA was determined using death as a competing risk. Furthermore, the identified lung cancer patients were divided into CPA and non-CPA groups to compare their preoperative clinical features and to identify the risk factors of postoperative CPA by univariable and multivariable analyses. We also analysed the clinical features of CPA patients after diagnosis.
Results: We included 475 lung cancer patients. Of these, 17 patients (3.6%) developed CPA after the lobectomy. The cumulative postoperative incidence rate of CPA was 2.3% [95% confidence interval (CI), 0.8-3.8%] at 5 years and 7.9% (95% CI, 3.0-13.0%) at 10 years. There were significantly more men (P = 0.007), smokers (P = 0.002) and comorbid chronic obstructive pulmonary disease (COPD) (P = 0.008) and interstitial lung disease (ILD) (P = 0.009) patients in the CPA group than in the non-CPA group. Multivariable analysis identified comorbid COPD (P = 0.0019) and ILD (P = 0.0003) as significant risk factors. An antifungal treatment response was obtained in 6 patients (35%). The 1-year survival rate was 47% (follow-up periods, interquartile range: 3-78 months), and 5 of the total of 11 deaths were due to CPA.
Conclusions: Through the present retrospective study, CPA seems to be a common sequel to lobectomy in lung cancer patients, and COPD and ILD represent strong risk factors of postoperative CPA. Because of the poor clinical outcome of lung cancer patients who develop CPA after lobectomy, careful follow-up using several examinations and chest radiographs to make CPA diagnosis may be essential.
Answer: Yes, lobectomy for lung cancer in patients with chronic obstructive pulmonary disease (COPD) does affect lung function. Studies have shown that postoperative lung function changes in patients with COPD are significant, although the degree of impact can vary.
One study found that lung function changes after lobectomy in septuagenarians are similar to those in patients with COPD. The mean postoperative decrease in forced expiratory volume in the first second (FEV₁) was 14.16% in the elderly, compared with a 29.23% decrease in patients with normal lung function. The operative morbidity in the elderly group was significantly lower than in patients with COPD (23.3% vs. 60%) (PUBMED:19561925).
Another study reported that the average loss of FEV1 was 0.11 L (range: -0.33-0.09 L) in the short-term (3-6 months) after lobectomy in COPD patients. In the long-term (1-2 years), the average loss of FEV1 was 0.15 L (range: -0.29-0.05 L) (PUBMED:32374491).
Furthermore, preoperative fractional exhaled nitric oxide (FeNO), which represents airway inflammation, was found to be a predictor of delayed lung function recovery 3 months after lobectomy in lung cancer patients. The impact extended to vital capacity (VC) and FEV1, suggesting that early postoperative intervention could reduce adverse effects (PUBMED:32857335).
Serial changes in pulmonary function after video-assisted thoracic surgery (VATS) lobectomy also indicated that postoperative pulmonary function and recovery time were different depending on the lobe resected and the presence of COPD (PUBMED:23619593).
Pulmonary rehabilitation was shown to influence lung function changes after lung resection for primary lung cancer in COPD patients. Preoperative physiotherapy resulted in significant increases in FEV1, VC, and other measures, and a significant improvement in 6-minute walking distance (PUBMED:26618048).
In summary, lobectomy for lung cancer in patients with COPD does lead to changes in lung function, with some loss of FEV1 and other measures. |
Instruction: Is ingestion of Thasus gigas (Xamues) an alimentary culture or an auxiliary treatment for type II diabetes?
Abstracts:
abstract_id: PUBMED:25392592
Is ingestion of Thasus gigas (Xamues) an alimentary culture or an auxiliary treatment for type II diabetes? Background: Diabetes is a disease characterized by high blood glucose levels that result from the body's inability to produce and/or use insulin. Among different types of diabetes, type II diabetes is the most common. This work studied the causes and effects of Thasus gigas on the population of Actopan, Hidalgo regarding its ingestion and utility in the treatment of type II diabetes.
Material And Methods: An exploratory study was carried out based on a survey conducted among the residents of Actopan, Hidalgo suffering from diabetes mellitus (type II). In order to investigate the effect of the ingestion of insects "xohues" or "shamues", a study was conducted on 100 adults among the population of Actopan, Hidalgo in order to get information on Thasus gigas consumption. The study was designed to identify the relationships between its usage, effects on human health, the reasons for its consumption by the Actopan community; either for cultural matters or as an alternative treatment to manage type II diabetes.
Results: Of the 100 persons surveyed, 39 were diabetic, 29 made medical outpatient visits. Among these, 21 had eaten Xamues to manage their diabetes while 21.5% replaced their medical treatment with Xamues. Of the 53% of the people who ingested Xamues as an alternative for their disease, 13% abandoned their medical treatment while 33% consumed them for alimentary culture.
Conclusion: People who have stopped attending medical checkups are at risk, because there is no evidence that ingestion of these insects can regulate blood glucose levels.
abstract_id: PUBMED:27852130
Angelica gigas Ameliorates Hyperglycemia and Hepatic Steatosis in C57BL/KsJ-db/db Mice via Activation of AMP-Activated Protein Kinase Signaling Pathway. The prevention and management of type 2 diabetes mellitus has become a major global public health challenge. Decursin, an active compound of Angelica gigas Nakai roots, was recently reported to have a glucose-lowering activity. However, the antidiabetic effect of Angelica gigas Nakai extract (AGNE) has not yet been investigated. We evaluated the effects of AGNE on glucose homeostasis in type 2 diabetic mice and investigated the underlying mechanism by which AGNE acts. Male C57BL/KsJ-db/db mice were treated with either AGNE (10 mg/kg, 20 mg/kg, and 40 mg/kg) or metformin (100 mg/kg) for 8 weeks. AGNE supplementation (20 and 40 mg/kg) significantly decreased fasting glucose and insulin levels, decreased the areas under the curve of glucose in oral glucose tolerance and insulin tolerance tests, and improved homeostatic model assessment-insulin resistant (HOMA-IR) scores. AGNE also ameliorated hepatic steatosis, hyperlipidemia, and hypercholesterolemia. Mechanistic studies suggested that the glucose-lowering effect of AGNE was mediated by the activation of AMP activated protein kinase, Akt, and glycogen synthase kinase-3[Formula: see text]. AGNE can potentially improve hyperglycemia and hepatic steatosis in patients with type 2 diabetes.
abstract_id: PUBMED:24206825
Timing of caffeine ingestion alters postprandial metabolism in rats. Objective: The association between caffeine intake and the risk for chronic diseases, namely type 2 diabetes, has not been consistent, and may be influenced by the timing of caffeine ingestion. The aim of this study was to investigate the acute effect of caffeine administered in different scenarios of meal ingestion on postprandial glycemic and lipidemic status, concomitant with changes in body glycogen stores.
Methods: Forty overnight-fasted rats were randomly divided into five groups (meal-ingested, caffeine-administered, post-caffeine meal-ingested, co-caffeine meal-ingested, post-meal caffeine-administered), and tube-fed the appropriate intervention, then sacrificed 2 h later. Livers and gastrocnemius muscles were analyzed for glycogen content; blood samples were analyzed for glucose, insulin, triglycerides, and non-esterified fatty acid concentrations.
Results: Postprandial plasma glucose concentrations were similar between groups, while significantly higher levels of insulin were witnessed following caffeine administration, irrespective of the timing of meal ingestion. Triglyceride concentrations were significantly lower in the caffeine-administered groups. Regarding glycogen status, although caffeine administration before meal ingestion reduced hepatic glycogen content, co- and post-meal caffeine administration failed to produce such an effect. Muscle glycogen content was not significantly affected by caffeine administration.
Conclusions: Caffeine administration seems to decrease insulin sensitivity as indicated by the sustenance of glucose status despite the presence of high insulin levels. The lower triglyceride levels in the presence of caffeine support the theory of retarded postprandial triglyceride absorption. Caffeine seems to play a biphasic role in glucose metabolism, as indicated by its ability to variably influence hepatic glycogen status.
abstract_id: PUBMED:26349512
Submerged-Culture Mycelia and Broth of the Maitake Medicinal Mushroom Grifola frondosa (Higher Basidiomycetes) Alleviate Type 2 Diabetes-Induced Alterations in Immunocytic Function. Type 2 Diabetes mellitus (T2DM), a disease with impaired glucose, protein and lipid metabolism, low-grade chronic inflammation, and immune dysfunction, is a global public health crisis. We previously demonstrated that Grifola frondosa has bioactivities in improving glycemic responses in diabetic rats. Herein, we investigated the immunomodulatory effects of the submerged-culture mycelia and broth of G. frondosa on the peripheral blood cells (PBL) and splenocytes. Male Wistar rats were administered with saline (normal rats) or streptozotocin plus nicotinamide (T2DM rats) and were intragastrically administered with placebo, fermented mycelia, broth, or mycelia plus broth (1 g kg-1 day-1) for two weeks. In normal rats, ingestion of mycelia significantly decreased monocytes and ingestion of mycelia and broth significantly decreased the productions of interferon (IFN)-γ and interleukin (IL)-4 from the PBL and splenocytes. In T2DM rats, ingestion of mycelia, broth, and mycelia plus broth significantly alleviated the increases in 2 h postprandial blood glucose and the productions of IFN-γ from the T-leukocytes, IL-4, and IL-6 from the monocytes and IL-4 from the T-splenocytes, as well as significantly improved the productions of tumor-necrosis factor-α from the macrophages. In conclusion, submerged-culture mycelia and broth of G. frondosa may decrease cell-medicated immunity in normal rats and improve hyperglycemia and diabetes-induced alterations in cell-medicated and innate immunities in T2DM rats.
abstract_id: PUBMED:9019416
Influence of the velocity of meal ingestion on postprandial glycemia To assess the influence of the velocity of meal ingestion on postprandial glycemia, 10 healthy volunteers and 10 patients with non-insulin-dependent diabetes mellitus (NIDDM) were studied. All the subjects had two identical meals (1980 J, carbohydrate 37%, proteins 23%, lipids 40%) on different days. One meal was ingested in 10 minutes (fast ingestion) while the other was ingested in 20 minutes (slow ingestion). Glucose serum levels were measured immediately before the meal and throughout the following 180 minutes. In NIDDM patients, serum glucose levels from 30 to 90 minutes were significantly (p < 0.05) higher after fast ingestion than after the slow intake. Area under the glucose curve (AUC) and maximal peak of serum glucose concentrations (MP) showed also higher values with fast intake: AUC was of 13 +/- 2.4 and 11.3 +/- 2.9 mmol/ L/h (X +/- SD) (p < 0.05), MP 15.8 +/- 4.3 and 12.9 +/- 2.6 mmol/L (p < 0.05) with fast and slow ingestion respectively. No differences in serum glucose levels between test were noticed in healthy subjects. Slow meal ingestion might be a dietary recommendation in patients with NIDDM.
abstract_id: PUBMED:26721413
Let Visuals Tell the Story: Medication Adherence in Patients with Type II Diabetes Captured by a Novel Ingestion Sensor Platform. Background: Chronic diseases such as diabetes require high levels of medication adherence and patient self-management for optimal health outcomes. A novel sensing platform, Digital Health Feedback System (Proteus Digital Health, Redwood City, CA), can for the first time detect medication ingestion events and physiological measures simultaneously, using an edible sensor, personal monitor patch, and paired mobile device. The Digital Health Feedback System (DHFS) generates a large amount of data. Visual analytics of this rich dataset may provide insights into longitudinal patterns of medication adherence in the natural setting and potential relationships between medication adherence and physiological measures that were previously unknown.
Objective: Our aim was to use modern methods of visual analytics to represent continuous and discrete data from the DHFS, plotting multiple different data types simultaneously to evaluate the potential of the DHFS to capture longitudinal patterns of medication-taking behavior and self-management in individual patients with type II diabetes.
Methods: Visualizations were generated using time domain methods of oral metformin medication adherence and physiological data obtained by the DHFS use in 5 patients with type II diabetes over 37-42 days. The DHFS captured at-home metformin adherence, heart rate, activity, and sleep/rest. A mobile glucose monitor captured glucose testing and level (mg/dl). Algorithms were developed to analyze data over varying time periods: across the entire study, daily, and weekly. Following visualization analysis, correlations between sleep/rest and medication ingestion were calculated across all subjects.
Results: A total of 197 subject days, encompassing 141,840 data events were analyzed. Individual continuous patch use varied between 87-98%. On average, the cohort took 78% (SD 12) of prescribed medication and took 77% (SD 26) within the prescribed ±2-hour time window. Average activity levels per subjects ranged from 4000-12,000 steps per day. The combination of activity level and heart rate indicated different levels of cardiovascular fitness between subjects. Visualizations over the entire study captured the longitudinal pattern of missed doses (the majority of which took place in the evening), the timing of ingestions in individual subjects, and the range of medication ingestion timing, which varied from 1.5-2.4 hours (Subject 3) to 11 hours (Subject 2). Individual morning self-management patterns over the study period were obtained by combining the times of waking, metformin ingestion, and glucose measurement. Visualizations combining multiple data streams over a 24-hour period captured patterns of broad daily events: when subjects rose in the morning, tested their blood glucose, took their medications, went to bed, hours of sleep/rest, and level of activity during the day. Visualizations identified highly consistent daily patterns in Subject 3, the most adherent participant. Erratic daily patterns including sleep/rest were demonstrated in Subject 2, the least adherent subject. Correlation between sleep /rest and medication ingestion in each individual subject was evaluated. Subjects 2 and 4 showed correlation between amount of sleep/rest over a 24-hour period and medication-taking the following day (Subject 2: r=.47, P<.02; Subject 4: r=.35, P<.05). With Subject 2, sleep/rest disruptions during the night were highly correlated (r=.47, P<.009) with missing doses the following day.
Conclusions: Visualizations integrating medication ingestion and physiological data from the DHFS over varying time intervals captured detailed individual longitudinal patterns of medication adherence and self-management in the natural setting. Visualizing multiple data streams simultaneously, providing a data-rich representation, revealed information that would not have been shown by plotting data streams individually. Such analyses provided data far beyond traditional adherence summary statistics and may form the foundation of future personalized predictive interventions to drive longitudinal adherence and support optimal self-management in chronic diseases such as diabetes.
abstract_id: PUBMED:17179241
Recent metformin ingestion does not increase in-hospital morbidity or mortality after cardiac surgery. Background: Perioperative treatment of type 2 diabetes with metformin, an oral hypoglycemic drug, is thought to increase the risk of life-threatening postoperative lactic acidosis. In contrast, metformin improves serum glucose control and has beneficial cardiovascular effects, which may decrease the risk of adverse outcomes. In this investigation we sought to determine the influence of metformin treatment on mortality and morbidity compared with treatment with other oral hypoglycemic drugs in diabetic patients undergoing cardiac surgery.
Methods: In this retrospective investigation, 1284 diabetic patients, with recent oral hypoglycemic ingestion (presumed to be 8-24 h preoperatively), underwent cardiac surgery from 1994-2004. Propensity scores were calculated from a logistic model which included baseline characteristics and perioperative variables. Four-hundred-forty-three (85%) of the metformin-treated patients were matched on nearest propensity score using greedy matching techniques with 443 nonmetformin-treated patients. Postoperative outcomes were compared between matched metformin- and nonmetformin-treated patients.
Results: In-hospital mortality, cardiac, renal, and neurologic morbidities were similar between groups. Metformin-treated patients had less postoperative prolonged tracheal intubation [OR (95% CI), 0.3 (0.1, 0.7), P = 0.003], infection [0.2 (0.1, 0.7), P = 0.007] and overall morbidities [0.4 (0.2, 0.8), P = 0.005].
Conclusions: These data suggest that recent metformin ingestion is not associated with increased risk of adverse outcome in cardiac surgical patients. Alternatively, metformin treatment may have beneficial effects.
abstract_id: PUBMED:28757535
Coffee Ingestion Suppresses Hyperglycemia in Streptozotocin-Induced Diabetic Mice. Coffee consumption reduces the risk of type 2 diabetes in humans, but the mechanism remains unclear. In this study, we investigated the effect of coffee on pancreatic β-cells in the induction of diabetes by streptozotocin (STZ) treatment in mice. We examined the effect of coffee, caffeine, or decaffeinated coffee ingestion on STZ-induced hyperglycemia. After STZ injection in Exp. 1 and 2, serum glucose concentration and water intake in coffee ingestion (Coffee group) tended to be lowered or was significantly lowered compared to those in water ingestion (Water group) instead of coffee. In Exp. 1, the values for water intake and serum glucose concentration in caffeine ingestion (Caffeine group) were similar to those in the Water group. In Exp. 2, serum glucose concentrations in the decaffeinated coffee ingestion (Decaf group) tended to be lower than those in the Water group. Pancreatic insulin contents tended to be higher in the Coffee and Decaf groups than in the Water group (Exp. 1 and 2). In Exp. 3, subsequently, we showed that coffee ingestion also suppressed the deterioration of hyperglycemia in diabetic mice which had been already injected with STZ. This study showed that coffee ingestion prevented the development of STZ-induced diabetes and suppressed hyperglycemia in STZ-diabetic mice. Caffeine or decaffeinated coffee ingestion did not significantly suppress STZ-induced hyperglycemia. These results suggest that the combination of caffeine and other components of decaffeinated coffee are needed for the preventive effect on pancreatic β-cell destruction. Coffee ingestion may contribute to the maintenance of pancreatic insulin contents.
abstract_id: PUBMED:18030810
A role of alimentary factor in insulin resistance at type 2 diabetes In this review the role of alimentary factor in correction of insulin resistance in type 2 diabetes are discussed. It is presented data on influence of caloric restriction, macro- and micronutrients, biological compounds of food on insulin sensitivity in patients with type 2 diabetes.
abstract_id: PUBMED:27221118
Sodium nitrate co-ingestion with protein does not augment postprandial muscle protein synthesis rates in older, type 2 diabetes patients. The age-related anabolic resistance to protein ingestion is suggested to be associated with impairments in insulin-mediated capillary recruitment and postprandial muscle tissue perfusion. The present study investigated whether dietary nitrate co-ingestion with protein improves muscle protein synthesis in older, type 2 diabetes patients. Twenty-four men with type 2 diabetes (72 ± 1 yr, 26.7 ± 1.4 m/kg(2) body mass index, 7.3 ± 0.4% HbA1C) received a primed continuous infusion of l-[ring-(2)H5]phenylalanine and l-[1-(13)C]leucine and ingested 20 g of intrinsically l-[1-(13)C]phenylalanine- and l-[1-(13)C]leucine-labeled protein with (PRONO3) or without (PRO) sodium nitrate (0.15 mmol/kg). Blood and muscle samples were collected to assess protein digestion and absorption kinetics and postprandial muscle protein synthesis rates. Upon protein ingestion, exogenous phenylalanine appearance rates increased in both groups (P < 0.001), resulting in 55 ± 2% and 53 ± 2% of dietary protein-derived amino acids becoming available in the circulation over the 5h postprandial period in the PRO and PRONO3 groups, respectively. Postprandial myofibrillar protein synthesis rates based on l-[ring-(2)H5]phenylalanine did not differ between groups (0.025 ± 0.004 and 0.021 ± 0.007%/h over 0-2 h and 0.032 ± 0.004 and 0.030 ± 0.003%/h over 2-5 h in PRO and PRONO3, respectively, P = 0.7). No differences in incorporation of dietary protein-derived l-[1-(13)C]phenylalanine into de novo myofibrillar protein were observed at 5 h (0.016 ± 0.002 and 0.014 ± 0.002 mole percent excess in PRO and PRONO3, respectively, P = 0.8). Dietary nitrate co-ingestion with protein does not modulate protein digestion and absorption kinetics, nor does it further increase postprandial muscle protein synthesis rates or the incorporation of dietary protein-derived amino acids into de novo myofibrillar protein in older, type 2 diabetes patients.
Answer: The ingestion of Thasus gigas (Xamues) by the population of Actopan, Hidalgo, is both an alimentary culture and considered by some as an auxiliary treatment for type II diabetes. The exploratory study based on a survey conducted among residents of Actopan, Hidalgo, who suffer from diabetes mellitus (type II), revealed that 53% of the people ingested Xamues as an alternative for their disease. Among these, 13% abandoned their medical treatment, while 33% consumed them for alimentary culture. However, it is important to note that there is no evidence that ingestion of these insects can regulate blood glucose levels, and people who have stopped attending medical checkups are at risk (PUBMED:25392592).
In contrast, other studies on different substances and treatments have shown potential benefits for managing type II diabetes. For instance, Angelica gigas Nakai extract (AGNE) was found to ameliorate hyperglycemia and hepatic steatosis in diabetic mice via activation of the AMP-Activated Protein Kinase signaling pathway (PUBMED:27852130). Similarly, submerged-culture mycelia and broth of the Maitake medicinal mushroom Grifola frondosa were shown to alleviate type 2 diabetes-induced alterations in immunocytic function (PUBMED:26349512). Coffee ingestion was also reported to suppress hyperglycemia in streptozotocin-induced diabetic mice (PUBMED:28757535).
In summary, while Thasus gigas (Xamues) is ingested by some individuals as an alternative treatment for type II diabetes, there is no scientific evidence supporting its efficacy in regulating blood glucose levels, and it is primarily part of the alimentary culture in Actopan, Hidalgo (PUBMED:25392592). Other substances, such as Angelica gigas Nakai extract and components found in coffee, have demonstrated potential benefits in scientific studies for managing type II diabetes (PUBMED:27852130, PUBMED:26349512, PUBMED:28757535). |
Instruction: 'Are you on the market?
Abstracts:
abstract_id: PUBMED:22087373
Health care market deviations from the ideal market. A common argument in the health policy debate is that market forces allocate resources efficiently in health care, and that government intervention distorts such allocation. Rarely do those making such claims state explicitly that the market they refer to is an ideal in economic theory which can only exist under very strict conditions. This paper explores the strict conditions necessary for that ideal market in the context of health care as a means of examining the claim that market forces do allocate resources efficiently in health care.
abstract_id: PUBMED:37251827
Housing market forecasts via stock market indicators. Through the reinterpretation of housing data as candlesticks, we extend Nature Scientific Reports article by Liang and Unwin [LU22] on stock market indicators for COVID-19 data, and utilize some of the most prominent technical indicators from the stock market to estimate future changes in the housing market, comparing the findings to those one would obtain from studying real estate ETF's. By providing an analysis of MACD, RSI, and Candlestick indicators (Bullish Engulfing, Bearish Engulfing, Hanging Man, and Hammer), we exhibit their statistical significance in making predictions for USA data sets (using Zillow Housing data) and also consider their applications within three different scenarios: a stable housing market, a volatile housing market, and a saturated market. In particular, we show that bearish indicators have a much higher statistical significance then bullish indicators, and we further illustrate how in less stable or more populated countries, bearish trends are only slightly more statistically present compared to bullish trends.
abstract_id: PUBMED:34172576
How market ecology explains market malfunction. Standard approaches to the theory of financial markets are based on equilibrium and efficiency. Here we develop an alternative based on concepts and methods developed by biologists, in which the wealth invested in a financial strategy is like the abundance of a species. We study a toy model of a market consisting of value investors, trend followers, and noise traders. We show that the average returns of strategies are strongly density dependent; that is, they depend on the wealth invested in each strategy at any given time. In the absence of noise, the market would slowly evolve toward an efficient equilibrium, but the statistical uncertainty in profitability (which is calibrated to match real markets) makes this noisy and uncertain. Even in the long term, the market spends extended periods of time away from perfect efficiency. We show how core concepts from ecology, such as the community matrix and food webs, give insight into market behavior. For example, at the efficient equilibrium, all three strategies have a mutualistic relationship, meaning that an increase in the wealth of one increases the returns of the others. The wealth dynamics of the market ecosystem explain how market inefficiencies spontaneously occur and gives insight into the origins of excess price volatility and deviations of prices from fundamental values.
abstract_id: PUBMED:32958968
Collective market shaping by competitors and its contribution to market resilience. Employing an inductive approach, we found that competitors engage in market shaping, where they actively change their market through their purposeful actions. These competitors shape their market from one characterized by competition to one of collaboration when facing disturbances, ultimately contributing to market resilience, with benefits to market actors. They achieve this through engaging in unique forms of work, which we have called 'resilience work' and its three associated practices, namely meshing, pooling, and deploying. The competitors engage in these practices, which provide for and deploy communally pooled resources across their cultivated web of meshed relationships in the face of disturbances. The findings, which provide a model of market resilience, contribute to marketing's emerging knowledge of competitors' collective actions with respect to market shaping, while delineating an important outcome of market shaping, that of resilience.
abstract_id: PUBMED:36268201
Volatility spillovers from the Chinese stock market to the U.S. stock market: The role of the COVID-19 pandemic. The COVID-19 pandemic, which originated in Wuhan, China, precipitated the stock market crash of March 2020. According to published global data, the U.S. has been most affected by the tragedy throughout this outbreak. Understanding the degree of integration between the financial systems of the world's two largest economies, particularly during the COVID-19 pandemic, necessitates thorough research of the risk transmission from China's stock market to the U.S. stock market. This study examines the volatility transmission from the Chinese to the U.S. stock market from January 2001 to October 2020. We employ a variant form of the EGARCH (1,1) model with long-term control over the excessive volatility breakpoints identified by the ICSS algorithm. Since 2004, empirical evidence indicates that the volatility shocks of the Chinese stock market have frequently and negatively affected the volatility of the U.S. stock market. Most importantly, we explore that the COVID-19 pandemic vigorously and positively promoted the volatility infection from the Chinese equity market to the U.S. equity market in March 2020. This precious evidence endorses the asymmetric volatility transmission from the Chinese to the U.S. stock market when COVID-19 broke out. These experimental results provide profound insight into the risk contagion between the U.S. and China stock markets. They are also essential for securities investors to minimize portfolio risk. Furthermore, this paper suggests that globalization has carefully driven the integration of China's stock market with the international equity markets.
abstract_id: PUBMED:34441102
How to Measure a Two-Sided Market. Applying the theories of complex network and entropy measurement to the market, the two-sided market structure is analyzed in constructing the O2O platform transaction on the entropy measurement of the nodes and links. Market structure entropy (MSE) is initially introduced to measure the consistency degree of the individuals and the groups in the O2O market, according to the interaction in the profits, the time/space, and the information relationship. Considering that the market structure entropies are changing upward or downward, MSE is used to judge the consistency degree between the individuals and the groups. Respectively, considering the scale, the cost and the value dimensions, MSE is expanded to explain the market quality entropy, the market time-effect entropy, and the market capacity entropy.MSE provides a methodology in studying the O2O platform transaction and gives the quantitative index in the evaluation of the O2O market state.
abstract_id: PUBMED:27606072
At slaughtering and post mortem characteristics on Traditional market ewes and Halal market ewes in Tuscany. Background: The aim of this work was the comparison between the carcass and the meat ewes of the regional Traditional market and the Islamic religious (Halal) market.
Methods: Thirty and 20 at the end of career traditional market and Halal market ewes were slaughtered following the EC (European Council, 2009) animal welfare guidelines. Live weight of ewes was taken and dressing percentage of carcasses was calculated. On every carcass zoometric measurement and the evaluation trough the EU grid rules were performed. On the Musculus longissimus thoracis of 12 Traditional market carcasses and 11 Halal market carcasses the physical-chemical and nutritional analysis were performed. Consumer tests for liking meat ewe were performed in order to find consumer's preference level for Traditional and Halal markets ewe meat. Considering as fixed factor the ewe meat market (Traditional and Halal), results were submitted to oneway Analysis of Variance (ANOVA) and to Principal Component Analysis (PCA).
Results: The Halal market ewes have shown lower dressing percentages (42.91 ± 0.82 vs 46.42 ± 0.69) and lower conformation score (4.5 ± 0.5 vs 7.8 ± 0.4). The Halal market meat showed higher cooking loss in oven (37.83 ± 1.20 vs 32.03 ± 1.15 %), lesser Chroma value (18.63 ± 0.70 vs 21.84 ± 0.67), and lesser Hue angle value (0.26 ± 0.02 vs 0.34 ± 0.02). This product had also lower fat percentage (4.2 ± 0.4 vs 7.09 ± 0.4). The traditional market meat had higher percentage in monounsatured fatty acids (MUFA) (43.84 ± 1.05 vs 38.22 ± 1.10), while the Halal market meat had higher percentage in ω3 poliunsatured fatty acids (PUFA) (5.04 ± 0.42 vs 3.60 ± 0.40). The consumer test showed as the ewe meat was appreciate by the consumers.
Conclusions: Both meat typologies have shown good nutritional characteristics. The traditional market meat had higher MUFA composition, and a better MUFA/satured fatty acids (SFA) ratio, while the Halal market meat had higher PUFA composition. These results were also supported by the PCA. The consumers preferred the traditional market meat.
abstract_id: PUBMED:33052307
Market capitalization: Pre and post COVID-19 analysis. This research paper focuses on the impact of COVID-19 on Indian Stock Market and shares performance. In other words, the article analyses the market capitalization correlation between the performances of shares and the growth of the share market, using the stock market data of Pre and post COVID-19 status by comparing the data from Jan'20 to Jun'20. The variables have positive and statistically strong significance on the changes in the market's performance and the value of its market capitalization.
abstract_id: PUBMED:34063670
An Entropy-Based Approach to Measurement of Stock Market Depth. The aim of this study is to investigate market depth as a stock market liquidity dimension. A new methodology for market depth measurement exactly based on Shannon information entropy for high-frequency data is introduced and utilized. The proposed entropy-based market depth indicator is supported by an algorithm inferring the initiator of a trade. This new indicator seems to be a promising liquidity measure. Both market entropy and market liquidity can be directly measured by the new indicator. The findings of empirical experiments for real-data with a time stamp rounded to the nearest second from the Warsaw Stock Exchange (WSE) confirm that the new proxy enables us to effectively compare market depth and liquidity for different equities. Robustness tests and statistical analyses are conducted. Furthermore, an intra-day seasonality assessment is provided. Results indicate that the entropy-based approach can be considered as an auspicious market depth and liquidity proxy with an intuitive base for both theoretical and empirical analyses in financial markets.
abstract_id: PUBMED:37628170
Transaction Entropy: An Alternative Metric of Market Performance. Market uncertainty has a significant impact on market performance. Previous studies have dedicated much effort towards investigations into market uncertainty related to information asymmetry and risk. However, they have neglected the uncertainty inherent in market transactions, which is also an important aspect of market performance, besides the quantity of transactions and market efficiency. In this paper, we put forward a concept of transaction entropy to measure market uncertainty and see how it changes with price. Transaction entropy is defined as the ratio of the total information entropy of all traders to the quantity of transactions, reflecting the level of uncertainty in making successful transactions. Based on the computational and simulated results, our main finding is that transaction entropy is the lowest at equilibrium, it will decrease in a shortage market, and increase in a surplus market. Additionally, we make a comparison of the total entropy of the centralized market with that of the decentralized market, revealing that the price-filtering mechanism could effectively reduce market uncertainty. Overall, the introduction of transaction entropy enriches our understanding of market uncertainty and facilitates a more comprehensive assessment of market performance.
Answer: The question "Are you on the market?" is ambiguous and could refer to various contexts. However, based on the provided abstracts, it seems the question is related to economic and financial markets rather than personal status. None of the abstracts directly answer a question about an individual's participation in the market. Instead, they discuss various aspects of market dynamics, such as deviations from the ideal market (PUBMED:22087373), housing market forecasts (PUBMED:37251827), market ecology and malfunction (PUBMED:34172576), market shaping and resilience (PUBMED:32958968), volatility spillovers in stock markets (PUBMED:36268201), measuring two-sided markets (PUBMED:34441102), characteristics of traditional and Halal market ewes (PUBMED:27606072), market capitalization pre and post COVID-19 (PUBMED:33052307), an entropy-based approach to stock market depth (PUBMED:34063670), and transaction entropy as a metric of market performance (PUBMED:37628170).
If the question is asking whether an individual is participating in or engaged with any of these markets, the abstracts do not provide information on individual participation. They are focused on analyzing and understanding market mechanisms, behaviors, and impacts from a research or macroeconomic perspective. |
Instruction: Small bowel obstruction following restorative proctocolectomy: affected by a laparoscopic approach?
Abstracts:
abstract_id: PUBMED:21474147
Small bowel obstruction following restorative proctocolectomy: affected by a laparoscopic approach? Background: Total proctocolectomy with ileal pouch-anal anastomosis (IPAA) is the gold standard surgical treatment for chronic ulcerative colitis. More recently, this procedure is being performed laparoscopically assisted. Postoperatively, small bowel obstruction (SBO) is one of the more common associated complications. However, it is unknown whether the addition of a laparoscopic approach has changed this risk. This study aims to assess and compare the incidence of SBOs after both open and laparoscopic restorative proctocolectomy.
Methods: All subjects who underwent restorative proctocolectomy from 1998-2008 were identified from a prospective Colorectal Surgery Database. Medical records were reviewed for all cases of SBO, confirmed by a combination of clinical symptoms and radiologic evidence. Comparisons were made between laparoscopic and open approaches. The incidence of SBO was also subdivided into pre-ileostomy takedown, early post-ileostomy takedown (30 d post), and late post-ileostomy takedown (30 d to 1 y post). Several potential risk factors were also evaluated. Statistical analysis was performed utilizing Fisher's exact (for incidence) or t-tests (for means). Significance was defined as P < 0.05
Results: A total of 290 open cases and 100 laparoscopic cases were identified during this time period. The overall incidence of SBO at 1 y post-ileostomy takedown was 14% (n = 42) in the open group and 16% (n = 16) laparoscopic (P = NS). In the pre-ileostomy takedown period the incidence of SBO was 7% (n = 21) open and 13% (n = 13) laparoscopic (P = NS). While in the post-takedown period, the early incidence was 4% (n = 12) open and 1% (n = 1) laparoscopic and late incidence was 3% (n = 9) open and 2% (n = 2) laparoscopic (P = NS). Factors associated with an increased risk of SBO include coronary artery disease, prior appendectomy and W and J pouch configurations.
Conclusions: The burden of postoperative small bowel obstruction after restorative proctocolectomy is not changed with a laparoscopic approach. Most cases occur in the early postoperative period, especially prior to ileostomy reversal.
abstract_id: PUBMED:18483829
Multimedia article. Laparoscopic restorative proctocolectomy with small McBurney incision for ileal pouch construction without protective ileostomy. Purpose: Restorative proctocolectomy is a standard treatment for colorectal diseases over decades. At present, this technique is frequently performed via minimal invasive approach. Most reported techniques of laparoscopic restorative proctocolectomy involved a Pfannenstiel incision for the major part of the operation to be performed openly; a double-stapled pouch anal anastomosis technique and protective ileostomy. This study was designed to demonstrate the modification of this technique.
Methods: This was a retrospective study of seven patients (4 had ulcerative colitis and 3 had familial adenomatous polyposis) who underwent laparoscopic restorative proctocolectomy at King Chulalongkorn Memorial Hospital between September 2004 and February 2007. The details of the procedure are shown in the video. The techniques involve the following: full mobilization of entire colon and rectum using medial to lateral approach, division of submesenteric arcades for ileal pouch elongation with preservation of three to four inner most arcades of distal ileum segment and preservation of both superior mesenteric and ileocolic trunk, ileal pouch construction via a small (3-4 cm) McBurney incision, transanal mucosectomy with removal of the entire rectum and colon transanally, and handsewn ileal pouch-anal anastomosis. None of the patients underwent protective ileostomy.
Results: Mean surgical time was 360 (270-510) minutes, and median blood loss was 230 (100-400) ml. There were neither conversions nor intraoperative surgical complications. However, one patient developed small-bowel obstruction, which was successfully treated by laparoscopic approach. Anastomotic leakage was not found in this series. All patients have good control of their bowel movement as well as a very good cosmetic result during the follow-up period.
Conclusions: Laparoscopic restorative proctocolectomy with small McBurney incision for ileal pouch construction, without protective ileostomy, is technically feasible and safe.
abstract_id: PUBMED:27626834
Vascular High Ligation and Embryological Dissection in Laparoscopic Restorative Proctocolectomy for Ulcerative Colitis. Introduction: After its description in 1980, restorative proctocolectomy has become the procedure of choice for ulcerative colitis (UC). The supposed advantages of the laparoscopy have proven beneficial for colorectal operations but a standard technique in laparoscopic restorative proctocolectomy (LRP) is still lacking. In this study, we present our technique of LRP with vascular high ligation (VHL) and embryological dissection (ED).
Materials And Methods: This retrospective study reviewed patients who underwent LRP with VHL for UC from January 2009 to June 2015. Of these, only two-stage LRP patients were included to the study. The LRP technique was performed by five ports through a medial-to-lateral approach. The dissection was carried out between the embryological planes and all the vessel roots were highly divided. A diverting ileostomy was performed in all of the patients.
Results: Forty-six patients were operated for UC with the laparoscopic approach. Among these patients, there were 19 (8 females) patients who were performed LRP with VHL. The median age was 42 (range 25-62) years. No intraoperative complications occurred. There was no conversion to open procedure. Early postoperative complications were observed in 3 (15.8%) patients, including postoperative mechanical bowel obstruction (n = 1), wound infection (n = 1), and ileal pouch bleeding (n = 1).
Discussion: High ligation of the vessels is not routinely performed except in the presence of malignancy. In our study, we focus on the importance of high ligation and ED for better observation and preservation of the important anatomical structures. According to our opinion, this approach aids in the preservation of the ureters, nerves, and the duodenum providing better observation of dissection planes.
abstract_id: PUBMED:12590719
Laparoscopic restorative proctocolectomy for patients with ulcerative colitis. Background: Significant concern continues about the feasibility of laparoscopic restorative proctocolectomy (RP) with an ileal J pouch anal anastomosis in the surgical treatment of patients with ulcerative colitis (UC). The aim of this study was to clarify the feasibility of laparoscopic RP at a single institution where the surgical routine of laparoscopic colorectal surgery has already been established.
Patients And Methods: Between July 1994 and December 2001, 18 patients with UC underwent laparoscopic RP. The median age was 30 (range, 18-51) years, and the median follow-up was 20 (range, 5-89) months. Five trocars were placed. After the entire colon and rectum were mobilized and the vessels were divided intracorporeally, the rectum was divided with use of a laparoscopic linear stapler. A pouch anal anastomosis was fashioned with use of a double stapling technique. A diverting loop ileostomy was fashioned.
Results: There were no conversions to the open procedure. The median operative time and median blood loss were 360 (range, 290-500) minutes and 105 (range, 10-586) mL, respectively. Six postoperative complications occurred (wound sepsis, 2; bowel obstruction, 1; anastomotic stricture, 2; pouchitis, 1). In one patient, a bowel obstruction developed 3 months after the operation, which was managed conservatively. The median length of the hospital stay was 9 (range, 7-21) days.
Conclusions: The laparoscopic RP is safe and feasible in selected patients with UC. New laparoscopic instrumentation, such as a linear stapler, and a more reliable laparoscopic coagulating and dividing tool should be designed, which would make it possible to perform this procedure more frequently in the surgical treatment of UC.
abstract_id: PUBMED:21899706
Laparoscopic restorative proctocolectomy: safety and critical level of the ileal pouch anal anastomosis. Aim: The study reports the longer-term results of laparoscopic-assisted restorative proctocolectomy (RPC), with particular reference to safety and the level of the stapled ileal pouch-anal anastomosis (IPAA).
Method: Data were collected prospectively from all patients who underwent laparoscopic RP from July 2006 to July 2010. In each patient the operation involved the use of a short (6 cm) Pfannenstiel incision to facilitate placement of the linear stapler for anorectal division.
Results: Seventy-five patients underwent RPC either with total proctocolectomy (n = 53) or after previous emergency colectomy (n = 22). Early postoperative morbidity occurred in 18 (24%) patients and readmission within 30 days occurred in 18 (24%). Morbidity during follow up developed in 29 (39%). A pouchogram was carried out in all 75 patients before ileostomy closure with an abnormality shown in eight. The median level of the IPAA was at 3.0 cm (1.0-5.0 cm) above the dentate line. At a median of 33 (9-57) months, there has been one case of small bowel obstruction and no incisional hernia.
Conclusion: In laparoscopic-assisted RPC a limited Pfannenstiel incision allows safe construction of the IPAA at an appropriate level. Laparoscopic RPC is safe and the emerging long-term follow-up data show the benefit of this approach, with very low rates of small bowel obstruction and incisional hernia formation.
abstract_id: PUBMED:25560185
Characterizing readmission in ulcerative colitis patients undergoing restorative proctocolectomy. Background: Postoperative readmissions increase costs and affect patient quality of life. Ulcerative colitis (UC) patients are at a high risk for hospital readmission following restorative proctocolectomy (RP).
Objective: The objective of this study is to characterize UC patients undergoing RP and identify causes and risk factors for readmission.
Design: A retrospective review of a prospectively maintained institutional database was performed. Postoperative readmission rates and reasons for readmission were examined following RP. Univariate and multivariate analyses were performed to evaluate for risk factors associated with readmission.
Results: Of 533 patients who met our inclusion criteria, 18.2 % (n = 97) were readmitted within 30 days while 22.7 % (n = 121) were readmitted within 90 days of stage I of RP. Younger patient age (OR 1.825, 95 % CI 1.139-2.957), laparoscopic approach (OR 1.943, 95 % CI 1.217-3.104), and increased length of initial stay (OR 1.155, 95 % CI 1.090-1.225) were all associated with 30-day readmission. The most common reason for readmission was dehydration/ileus/partial bowel obstruction, with 10 % of patients readmitted for this reason within 30 days.
Conclusions: Patients undergoing restorative proctocolectomy are at high risk for readmission, particularly following the first stage of the operation. Novel treatment pathways to prevent ileus and dehydration as an outpatient may decrease the rates of readmission following RP.
abstract_id: PUBMED:31803289
Adrenalectomy by laparoscopic anterolateral transperitoneal approach for patients with previous abdominal surgery. Adrenal surgery has been radically changed by laparoscopic approach and we wonder whether the increase in the number of adrenalectomies is entirely justified by better understanding of the pathology and a developed diagnosis methods. The type of approach (transabdominal/retroperitoneal) remains a matter of the surgeon's experience. Method: In the past 8 years, we have performed more than 200 laparoscopic adrenalectomies by transperitoneal approach, 24 of them having previously significant abdominal surgery (cholecistectomy, gastric surgery, colectomy, bowel obstruction, exploratory laparoscopy, and adrenalectomy). The patients had a variety of adrenal pathologies such as Cushing disease, Cushing syndrome, Conn syndrome, incidentaloma, pheochromocytoma and even carcinoma. Results: 3 cases were converted to open approach, only one because of the adhesions. Reasons for conversion were also: spleen intarctisation and a difficulty in mobilizing the tumor. Operating time was not significantly prolonged because of the adhesions (40-360 min, median time 127 min). Postoperative evolution was simple with no morbidity or mortality and a fast recovery was recorded. Conclusions: Choosing the type of approach is related to surgeon experience, although 79-94% of the surgeons prefer the transabdominal lateral approach. We believe that with an experienced surgical team, there is no difficulty in performing adrenalectomy by transabdominal approach, with no significantly prolonged operating time, even though the patient has previously had abdominal surgery.
abstract_id: PUBMED:22189280
Hybrid procedures of restorative proctocolectomy with ileal pouch anal anastomosis for large bowel disorders. Unlabelled: THE AIM OF THE STUDY was to describe the authors' experience in performing laparoscopic restorative proctocolectomy with the formation of an intestinal reservoir of the J-pouch type, anal anastomosis and protective ileostomy.
Material And Methods: Between 2004 and 2011, a total of 23 patients underwent laparoscopic restorative proctocolectomy with the formation of an intestinal reservoir of the J-pouch type, anal anastomosis and protective ileostomy for ulcerative colitis (n = 17) or familial adenomatous polyposis (n = 6). A statistical analysis of the treatment outcomes was performed.
Results: No intraoperative complications were observed and none of the patients required conversion or blood transfusions. The mean duration of the procedure was 4.08 hours (2.5-6.0 hours). The mean duration of hospitalization was 15.4 days (8-24 days). We observed three major postoperative complications requiring intervention: two cases of small bowel obstruction (one due to postoperative adhesions and the other due to volvulus) and one case of infection of the surgical and ostomy wound healed following ileostomy closure.
Conclusions: For such extensive procedures as restorative proctocolectomy, laparoscopic techniques prove safe and are characterised by a better patient acceptance thanks to the low invasiveness and good cosmetic effects. The technological progress and the increasing experience in performing laparoscopy provide more and more arguments to support the selection of this method as the preferred method of treatment.
abstract_id: PUBMED:12408505
Laparoscopic surgery for inflammatory bowel disease: current concepts. Background: The aim of a laparoscopic approach is reduced pain scores, early mobilization, virtual absence of wound sepsis, rapid return of gastrointestinal function, early discharge from hospital and return to normal activity and improved cosmetics. Potential advantages are fewer complications due to adhesion formation, viz. small-bowel obstruction, infertility and chronic abdominal pain-advantages that are of particular importance to patients with inflammatory bowel disease (IBD) since they are young and in the middle of building up their socio-economic life. This review highlights the current status of laparoscopic surgery for patients with IBD.
Methods: Virtually all abdominal procedures carried out in patients with IBD can be done laparoscopically, and vary from stoma formation to restorative proctocolectomy.
Results: Conversion rates and operating times depend on the surgical expertise and patient-related factors, viz. prior laparotomy, the presence of intestinal fistula or inflammatory masses. Morbidity rates are similar to those of open surgery provided that the procedures are done by expert laparoscopic surgeons. The observed earlier recovery contributed to laparoscopic surgery has not been proved in well-conducted trials; however, an advantage can be expected. A very obvious feature of laparoscopic surgery is the improved cosmetics, which might turn out to be the most important advantage of the laparoscopic approach in this relatively young patient group.
Conclusion: The laparoscopic approach can be considered the procedure of first choice in patients with IBD provided the surgery is done by expert laparoscopists ensuring low conversion rates, acceptable operating times and low morbidity.
abstract_id: PUBMED:23895970
Comparison of laparoscopic-assisted and open total proctocolectomy and ileal pouch anal anastomosis in children and adolescents. Background: Laparoscopic techniques have been applied to restorative proctocolectomy since the early 2000's. We have employed a technique for laparoscopic-assisted total proctocolectomy (TPC) and ileal pouch anal anastomosis (IPAA) for the treatment of children with ulcerative colitis (UC).
Methods: We retrospectively reviewed 68 laparoscopic-assisted TPCs and 39 open TPCs performed at our institution for UC between January 1997 and February 2011. Case duration, postoperative length of stay, and complications of the two groups were compared, and multivariable analysis was applied.
Results: The two groups were comparable with respect to gender, age, and postoperative length of stay. Total abdominal colectomy (TAC) duration was significantly longer in the laparoscopic-assisted group (P < .001). Complications were similar in the laparoscopic and open group, although small bowel obstruction (SBO) was significantly less frequent in the laparoscopic group (log-rank test = 8.88, P = .003). Kaplan-Meier estimated freedom from SBO at 1 year follow-up is 99% for patients treated laparoscopically (95% CI: 98%-100%) and 76% for those undergoing an open surgical approach (95% CI: 64%-88%).
Conclusions: The significantly lower SBO rate, low complication rates, and equivalent length of stay favor use of the laparoscopic-assisted approach for TPC and IPAA in children.
Answer: The incidence of small bowel obstruction (SBO) following restorative proctocolectomy does not appear to be significantly affected by the use of a laparoscopic approach. Studies have compared the incidence of SBOs after both open and laparoscopic restorative proctocolectomy and found no significant difference in the overall incidence of SBO at 1 year post-ileostomy takedown between the two groups (14% in the open group and 16% in the laparoscopic group) (PUBMED:21474147). Additionally, the incidence of SBO in the pre-ileostomy takedown period was 7% for open and 13% for laparoscopic (P = NS), while in the post-takedown period, the early incidence was 4% for open and 1% for laparoscopic, and the late incidence was 3% for open and 2% for laparoscopic (P = NS) (PUBMED:21474147).
However, one study reported a case of small-bowel obstruction in a patient who underwent laparoscopic restorative proctocolectomy without protective ileostomy, which was successfully treated by a laparoscopic approach (PUBMED:18483829). Another study noted an early postoperative complication of mechanical bowel obstruction in one patient following laparoscopic restorative proctocolectomy with vascular high ligation (PUBMED:27626834). Similarly, a patient developed bowel obstruction 3 months after laparoscopic restorative proctocolectomy in another study, which was managed conservatively (PUBMED:12590719).
In contrast, a study on laparoscopic-assisted restorative proctocolectomy reported a very low rate of small bowel obstruction, with only one case observed at a median of 33 months follow-up (PUBMED:21899706). Another study found that laparoscopic-assisted total proctocolectomy and ileal pouch anal anastomosis in children resulted in a significantly lower SBO rate compared to the open surgical approach (PUBMED:23895970).
Overall, while individual studies may report varying rates of SBO following laparoscopic restorative proctocolectomy, the evidence suggests that the laparoscopic approach does not significantly change the burden of postoperative SBO when compared to the open approach. Factors other than the surgical approach, such as patient characteristics and the presence of other risk factors, may influence the incidence of SBO. |
Instruction: Preventing HIV by providing support for orphan girls to stay in school: does religion matter?
Abstracts:
abstract_id: PUBMED:22715988
Preventing HIV by providing support for orphan girls to stay in school: does religion matter? Objective: The paper examines the influence of religion on attitudes, behaviors, and HIV infection among rural adolescent women in Zimbabwe.
Design: We analyzed data from a 2007 to 2010 randomized controlled trial in rural eastern Zimbabwe testing whether school support can prevent HIV risk behaviors and related attitudes among rural adolescent orphan girls; supplementary data from the 2006 Zimbabwe Demographic and Health Survey (ZDHS) were also analyzed. The present study design is largely cross-sectional, using the most recent available survey data from the clinical trial to examine the association between religious affiliation and religiosity on school dropout, marriage, and related attitudes, controlling for intervention condition, age and orphan type. The ZDHS data examined the effect of religious denomination on marriage and HIV status among young rural women, controlling for age.
Results: Apostolic Church affiliation greatly increased the likelihood of early marriage compared to reference Methodist Church affiliation (odds ratio = 4.5). Greater religiosity independently reduced the likelihood of school dropout, increased gender equity attitudes and disagreement with early sex, and marginally reduced early marriage. Young rural Apostolic women in the ZDHS were nearly four times as likely to marry as teenagers compared to Protestants, and marriage doubled the likelihood of HIV infection.
Conclusions: Findings contradict an earlier seminal study that Apostolics are relatively protected from HIV compared to other Christian denominations. Young Apostolic women are at increased risk of HIV infection through early marriage. The Apostolic Church is a large and growing denomination in sub-Saharan Africa and many Apostolic sects discourage medical testing and treatment in favor of faith healing. Since this can increase the risk of undiagnosed HIV infection for young married women and their infants in high prevalence areas, further study is urgently needed to confirm this emerging public health problem, particularly among orphan girls. Although empirical evidence suggests that keeping orphan girls in school can reduce HIV risk factors, further study of the religious context and the implications for prevention are needed.
abstract_id: PUBMED:21493943
Supporting adolescent orphan girls to stay in school as HIV risk prevention: evidence from a randomized controlled trial in Zimbabwe. Objectives: Using a randomized controlled trial in rural eastern Zimbabwe, we tested whether comprehensive support to keep orphan adolescent girls in school could reduce HIV risk.
Methods: All orphan girls in grade 6 in 25 primary schools were invited to participate in the study in fall 2007 (n = 329). Primary schools were randomized to condition. All primary schools received a universal daily feeding program; intervention participants received fees, uniforms, and a school-based helper to monitor attendance and resolve problems. We conducted annual surveys and collected additional information on school dropout, marriage, and pregnancy rates. We analyzed data using generalized estimating equations over 3 time points, controlling for school and age at baseline.
Results: The intervention reduced school dropout by 82% and marriage by 63% after 2 years. Compared with control participants, the intervention group reported greater school bonding, better future expectations, more equitable gender attitudes, and more concerns about the consequences of sex.
Conclusions: We found promising evidence that comprehensive school support may reduce HIV risk for orphan girls. Further study, including assessment of dose response, cost benefit, and HIV and herpes simplex virus 2 biomarker measurement, is warranted.
abstract_id: PUBMED:26470396
Community Support and Adolescent Girls' Vulnerability to HIV/AIDS: Evidence From Botswana, Malawi, and Mozambique. Girls are vulnerable to HIV in part because the social systems in which they live have failed to support and protect them. The goal of this research was to develop a viable supportive community index and test its association with intermediate variables associated with HIV risk across 16 communities in Botswana, Malawi, and Mozambique. This cross-sectional survey with separate samples randomly drawn in each country (2010) yielded a total sample of 1,418 adolescent girls (aged 11-18). Multilevel, multivariate logistic regression, while controlling for vulnerability, age, religion, and residence, found that an increase in supportive community index is positively associated with the odds of indicating improved community support for girls and with the confidence to refuse unwanted sex with a boyfriend across the three countries, as well as with self-efficacy to insist on condom use in Botswana and Mozambique. Program implementers and decision makers alike can use the supportive community index to identify and measure structural factors associated with girls' vulnerability to HIV/AIDS; this will potentially contribute to judicious decision making regarding resource allocation to enhance community-level, protective factors for adolescent girls.
abstract_id: PUBMED:23334923
Cost-effectiveness of school support for orphan girls to prevent HIV infection in Zimbabwe. This cost-effectiveness study analyzes the cost per quality-adjusted life year (QALY) gained in a randomized controlled trial that tested school support as a structural intervention to prevent HIV risk factors among Zimbabwe orphan girl adolescents. The intervention significantly reduced early marriage, increased years of schooling completed, and increased health-related quality of life. By reducing early marriage, the literature suggests the intervention reduced HIV infection. The intervention yielded an estimated US$1,472 in societal benefits and an estimated gain of 0.36 QALYs per orphan supported. It cost an estimated US$6/QALY gained, about 1 % of annual per capita income in Zimbabwe. That is well below the maximum price that the World Health Organization (WHO) Commission on Macroeconomics and Health recommends paying for health gains in low and middle income countries. About half the girls in the intervention condition were boarded when they reached high school. For non-boarders, the intervention's financial benefits exceeded its costs, yielding an estimated net cost savings of $502 per pupil. Without boarding, the intervention would yield net savings even if it were 34 % less effective in replication. Boarding was not cost-effective. It cost an additional $1,234 per girl boarded (over the 3 years of the study, discounted to present value at a 3 % discount rate) but had no effect on any of the outcome measures relative to girls in the treatment group who did not board. For girls who did not board, the average cost of approximately 3 years of school support was US$973.
abstract_id: PUBMED:25692731
Educational Outcomes for Orphan Girls in Rural Zimbabwe: Effects of a School Support Intervention. Educational achievement has important implications for the health and well-being of young women in sub-Saharan Africa. The authors assessed the effects of providing school support on educational outcomes of orphan girls in rural Zimbabwe. Data were from a randomized controlled trial offering the intervention group comprehensive schooling support and controls no treatment initially and then fees only. Results indicated comprehensive support reduced school dropout and absence but did not improve test scores. Providing support to orphan girls is promising for addressing World Health Organization Millennium Development Goals, but further research is needed about contextual factors affecting girls' school participation and learning.
abstract_id: PUBMED:25530739
A Mixed Methods Mapping of Church versus Secular School Messages to Influence Sexual Decision-Making as Perceived by Zimbabwean Orphan Girl Students. This study examined the messages perceived by adolescent girls with orphanhood to influence their sexual decision-making. Participants were 125 students (mean age =14.7 years), 54% of whom attended church schools in a rural district of eastern Zimbabwe. We collected and analyzed data using concept mapping, a mixed method approach that enabled the construction of message clusters, with weighting for their relative importance. Messages that clustered under Biblical Teachings and Life Planning ranked highest in salience among students in both church and secular schools. Protecting Family Honor, HIV Prevention, and Social Stigma messages ranked next, respectively. Contrary to study hypotheses, the messages that orphan adolescent girls perceived to influence their sexual decisions did not vary by type of school attended.
abstract_id: PUBMED:25530603
The impact of school subsidies on HIV-related outcomes among adolescent female orphans. Purpose: We examine effects of school support as a structural HIV prevention intervention for adolescent female orphans in Zimbabwe after 5 years.
Methods: Three hundred twenty-eight orphan adolescent girls were followed in a clustered randomized controlled trial from 2007 to 2010. The experimental group received school fees, uniforms, and school supplies and were assigned a school-based "helper." In 2011-2012, the control group received delayed partial treatment of school fees only. At the final data point in 2012, survey, HIV, and Herpes Simplex Virus Type 2 (HSV-2) biomarker data were collected from approximately 88% of the sample. Bivariate and multivariate analyses were conducted on end point outcomes, controlling for age, religious affiliation, and baseline socioeconomic status.
Results: The two groups did not differ on HIV or HSV-2 biomarkers. The comprehensive 5-year intervention continued to reduce the likelihood of marriage, improve school retention, improve socioeconomic status (food security), and marginally maintain gains in quality of life, even after providing school fees to the control group.
Conclusions: Paying school fees and expenses resulted in significant improvements in life outcomes for orphan adolescent girls. Biological evidence of HIV infection prevention, however, was not observed. Our study adds to the growing body of research on school support as HIV prevention for girls in sub-Saharan Africa, but as yet, no clear picture of effectiveness has emerged.
abstract_id: PUBMED:29033613
Let us fight and support one another: adolescent girls and young women on contributors and solutions to HIV risk in Zambia. In Zambia, adolescent girls and young women (AGYW) are disproportionately affected by human immunodeficiency virus (HIV), social, cultural and economic factors making them particularly vulnerable. This study was designed to understand the context in which AGYW are at risk and to identify perceived drivers of the epidemic and potential strategies to reduce HIV risk. Focus group discussions were conducted with AGYW in Zambian districts with the highest HIV prevalence from February through August 2016. The focus group guide addressed HIV risk factors and strategies for HIV prevention in AGYW. Focus group discussions were recorded, translated and transcribed, themes identified and responses coded. Results suggest that gender inequality undermined potentially protective factors against HIV among AGYW. Poverty and stigmatization were major barriers to accessing available HIV prevention services as well as primary risk factors for HIV infection. Sponsorship to support AGYW school attendance, programs for boys and girls to foster gender equality and financial assistance from the government of Zambia to support AGYW most in need were proposed as strategies to reduce HIV risk. Results highlight the utility of using community-based research to guide potential interventions for the affected population. Future research should explore the use of multilevel interventions to combat HIV among AGYW.
abstract_id: PUBMED:34212766
Feasibility, Acceptability and Preliminary Efficacy of Tikambisane ('Let's Talk to Each Other'): A Pilot Support Group Intervention for Adolescent Girls Living With HIV in Zambia. Background: In Zambia, 84,959 adolescent girls and young women (AGYW) aged 15-24 are currently living with HIV. We explored the feasibility and acceptability of a 6-session, curriculum-based support group intervention designed to address key concerns of AGYW living with HIV.
Setting: Urban Zambia.
Methods: Surveys and in-depth interviews were collected pre- and post-intervention from participants enrolled from 2 health facilities. Eight participant observations of sessions were conducted. Descriptive statistics at baseline were reported only for AGYW who participated in the intervention (N = 21), while analyses comparing baseline and endline outcome measures were restricted to participants who had data at both time points (N = 14).
Results: Support groups were feasible to conduct and acceptable to participants. Co-facilitation by an adult counselor and peers living with HIV raised confidence about session content. Sessions on antiretroviral therapy (ART), disclosure and stigma, and grief and loss were most in demand. We did not observe significant differences in key outcome measures between baseline and follow-up. However, qualitative data supported the positive impact of the intervention on ART adherence and hope for the future following the intervention among our participants.
Conclusion: A short-term, structured support group series holds promise for helping AGYW living with HIV safely navigate a complex time in their lives.
abstract_id: PUBMED:25724504
Adolescent girls and young women: key populations for HIV epidemic control. Introduction: At the epicentre of the HIV epidemic in southern Africa, adolescent girls and young women aged 15-24 contribute a disproportionate ~30% of all new infections and seroconvert 5-7 years earlier than their male peers. This age-sex disparity in HIV acquisition continues to sustain unprecedentedly high incidence rates, and preventing HIV infection in this age group is a pre-requisite for achieving an AIDS-free generation and attaining epidemic control.
Discussion: Adolescent girls and young women in southern Africa are uniquely vulnerable to HIV and have up to eight times more infection than their male peers. While the cause of this vulnerability has not been fully elucidated, it is compounded by structural, social and biological factors. These factors include but are not limited to: engagement in age-disparate and/or transactional relationships, few years of schooling, experience of food insecurity, experience of gender-based violence, increased genital inflammation, and amplification of effects of transmission co-factors. Despite the large and immediate HIV prevention need of adolescent girls and young women, there is a dearth of evidence-based interventions to reduce their risk. The exclusion of adolescents in biomedical research is a huge barrier. School and community-based education programmes are commonplace in many settings, yet few have been evaluated and none have demonstrated efficacy in preventing HIV infection. Promising data are emerging on prophylactic use of anti-retrovirals and conditional cash transfers for HIV prevention in these populations.
Conclusions: There is an urgent need to meet the HIV prevention needs of adolescent girls and young women, particularly those who are unable to negotiate monogamy, condom use and/or male circumcision. Concerted efforts to expand the prevention options available to these young women in terms of the development of novel HIV-specific biomedical, structural and behavioural interventions are urgently needed for epidemic control. In the interim, a pragmatic approach of integrating existing HIV prevention efforts into broader sexual reproductive health services is a public health imperative.
Answer: The influence of religion on attitudes, behaviors, and HIV infection among rural adolescent women in Zimbabwe has been examined, and findings suggest that religion does indeed matter. Apostolic Church affiliation was found to greatly increase the likelihood of early marriage compared to the Methodist Church affiliation, and early marriage was associated with a doubled likelihood of HIV infection (PUBMED:22715988). Greater religiosity, however, independently reduced the likelihood of school dropout, increased gender equity attitudes, and disagreement with early sex, and marginally reduced early marriage (PUBMED:22715988).
The study also contradicts an earlier study that suggested Apostolics were relatively protected from HIV compared to other Christian denominations. Instead, it was found that young Apostolic women are at increased risk of HIV infection through early marriage, and many Apostolic sects discourage medical testing and treatment in favor of faith healing, which can increase the risk of undiagnosed HIV infection (PUBMED:22715988).
Furthermore, a randomized controlled trial in Zimbabwe showed that comprehensive school support for orphan girls reduced school dropout by 82% and marriage by 63% after 2 years, indicating that keeping orphan girls in school can reduce HIV risk factors (PUBMED:21493943). The intervention also resulted in improved school retention, socioeconomic status, and marginally maintained gains in quality of life, even after providing school fees to the control group (PUBMED:25530603).
In summary, religion plays a significant role in the attitudes and behaviors that influence HIV risk among rural adolescent women in Zimbabwe. While religiosity can have protective effects, affiliation with certain religious denominations, such as the Apostolic Church, may increase the risk of early marriage and HIV infection. Interventions that support orphan girls to stay in school appear to be an effective strategy for reducing HIV risk, regardless of religious affiliation. |
Instruction: Does vimentin help to delineate the so-called 'basal type breast cancer'?
Abstracts:
abstract_id: PUBMED:19695088
Does vimentin help to delineate the so-called 'basal type breast cancer'? Background: Vimentin is one of the cytoplasmic intermediate filament proteins which are the major component of the cytoskeleton. In our study we checked the usefulness of vimentin expression in identifying cases of breast cancer with poorer prognosis, by adding vimentin to the immunopanel consisting of basal type cytokeratins, estrogen, progesterone, and HER2 receptors.
Methods: 179 tissue specimens of invasive operable ductal breast cancer were assessed by the use of immunohistochemistry. The median follow-up period for censored cases was 90 months.
Results: 38 cases (21.2%) were identified as being vimentin-positive. Vimentin-positive tumours affected younger women (p = 0.024), usually lacked estrogen and progesterone receptor (p < 0.001), more often expressed basal cytokeratins (<0.001), and were high-grade cancers (p < 0.001). Survival analysis showed that vimentin did not help to delineate basal type phenotype in a triple negative (ER, PgR, HER2-negative) group. For patients with 'vimentin or CK5/6, 14, 17-positive' tumours, 5-year estimated survival rate was 78.6%, whereas for patients with 'vimentin, or CK5/6, 14, 17-negative' tumours it was 58.3% (log-rank p = 0.227).
Conclusion: We were not able to better delineate an immunohistochemical definition of basal type of breast cancer by adding vimentin to the immunopanel consisted of ER, PgR, HER2, CK5/6, 14 and 17 markers, when overall survival was a primary end-point.
abstract_id: PUBMED:29560294
Basal Cell Carcinoma with Myoepithelial Differentiation: Case Report and Literature Review. Basal cell carcinoma is the most common skin cancer. Myoepithelial cells are specialized epithelial cells. Basal cell carcinoma with myoepithelial differentiation is a rare tumor. A 71-year-old man with a basal cell carcinoma with myoepithelial differentiation that presented as an asymptomatic red papule of two months duration on his forehead is described. Including the reported patient, this variant of basal cell carcinoma has been described in 16 patients: 11 men and five women. The patients ranged in age at diagnosis from 43 years to 83 years; the median age at diagnosis was 66 years. All of the tumors were located on the face-most were papules or nodules of less than 10 x 10 mm. Their pathology demonstrated two components: one was that of a typical basal cell carcinoma and the other was myoepithelioma-like in which the tumor cells were plasmacytoid or signet ring in appearance and contained abundant eosinophilic cytoplasm or hyaline inclusions or both. The myoepithelial tumor cells had variable immunohistochemical expression that included not only cytokeratin but also actin, glial fibrillary acid protein, S100, and vimentin. The most common clinical impression, prior to biopsy, was a basal cell carcinoma. The pathologic differential diagnosis included cutaneous mixed sweat gland tumor of the skin, myoepithelioma, myoepithelial carcinoma, and tumors that contain a prominent signet ring cell component (such as metastatic gastrointestinal and breast carcinoma, melanoma, plasmacytoid squamous cell carcinoma, and T-cell lymphoma). Mohs micrographic surgical excision, with complete removal of the tumor, was recommended for treatment of the carcinoma.
abstract_id: PUBMED:24975897
The dog as a natural animal model for study of the mammary myoepithelial basal cell lineage and its role in mammary carcinogenesis. Basal-like tumours constitute 2-18% of all human breast cancers (HBCs). These tumours have a basal myoepithelial phenotype and it has been hypothesized that they originate from either myoepithelial cells or mammary progenitor cells. They are heterogeneous in morphology, clinical presentation, outcome and response to therapy. Canine mammary carcinomas (CMCs) have epidemiological and biological similarities to HBCs, are frequently biphasic and are composed of two distinct neoplastic populations (epithelial and myoepithelial). The present study evaluates the potential of CMCs as a natural model for basal-like HBCs. Single and double immunohistochemistry was performed on serial sections of 10 normal canine mammary glands and 65 CMCs to evaluate expression of cytokeratin (CK) 8/18, CK5, CK14, α-smooth muscle actin (SMA), calponin (CALP), p63 and vimentin (VIM). The tumours were also evaluated for Ki67 and human epidermal growth factor receptor (HER)-2 expression. A hierarchical model of cell differentiation was established, similar to that for the human breast. We hypothesized that progenitor cells (CK5(+), CK14(+), p63(+) and VIM(+)) differentiate into terminally-differentiated luminal glandular (CK8/18(+)) and myoepithelial (CALP(+), SMA(+) and VIM(+)) cells via intermediary luminal glandular cells (CK5(+), CK14(+) and CK8/CK18(+)) and intermediary myoepithelial cells (CK5(+), CK14(+), p63(+), SMA(+), CALP(+) and VIM(+)). Neoplastic myoepithelial cells in canine complex carcinomas had labelling similar to that of terminally-differentiated myoepithelial cells, while those of carcinomas-and-malignant myoepitheliomas with a more aggressive biological behaviour (i.e. higher frequency of vascular/lymph node invasion and visceral metastases and higher risk of tumour-related death) were comparable with intermediary myoepithelial cells and had significantly higher Ki67 expression. The majority of CMCs examined were negative for expression of HER-2. The biphasic appearance of CMCs with involvement of the myoepithelial component in different stages of cell differentiation may help to define the role of myoepithelial cells in the mammary carcinogenetic process and the heterogeneous nature of basal-like HBCs.
abstract_id: PUBMED:17334350
Sox2: a possible driver of the basal-like phenotype in sporadic breast cancer. Tumours arising in BRCA1 mutation carriers and sporadic basal-like breast carcinomas have similar phenotypic, immunohistochemical and clinical characteristics. SOX2 is an embryonic transcription factor located at chromosome 3q, a region frequently gained in sporadic basal-like and BRCA1 germline mutated tumours. The aim of the study was to establish whether sox2 expression was related to basal-like sporadic breast tumours. Two hundred and twenty-six sporadic node-negative invasive breast carcinomas were immunohistochemically analysed for oestrogen receptor (ER), progesterone receptor (PR), CK5/6, EGFR, vimentin, HER2, ki67, p53 and sox2 using tissue microarrays. Tumours were considered to have basal-like phenotype if they were ER/HER2-negative and CK5/6 and/or EGFR-positive. Thirty cases of this series (13.7%) displayed a basal-like phenotype. Sox2 expression was observed in 16.7% of cases and was significantly more frequently expressed in basal-like breast carcinomas (43.3% in basal-like, 10.6% in luminal and 13.3% in HER2+ tumours, P<0.001). Moreover, Sox2 showed a statistically significant inverse association with ER and PR (P=0.001 and 0.017, respectively) and direct association with CK5/6, EGFR and vimentin (P=0.022, 0.005 and <0.001, respectively). Sox2 is preferentially expressed in tumours with basal-like phenotype and may play a role in defining their less differentiated/'stem cell' phenotypic characteristics.
abstract_id: PUBMED:17123107
P-cadherin and cytokeratin 5: useful adjunct markers to distinguish basal-like ductal carcinomas in situ. Gene expression profiles of invasive breast carcinomas have identified a subgroup of tumours with worse prognosis, which have been called "basal-like". These are characterized by a specific pattern of expression, being estrogen receptor (ER) and HER2 negative, and frequently expressing at least one basal marker such as basal cytokeratins or epidermal growth factor receptor (EGFR). Previously, our group characterized basal-like tumours in a series of invasive breast carcinomas using P-cadherin (P-CD), p63 and cytokeratin 5 (CK5). Based on that study, we hypothesized that those high-grade basal-like invasive carcinomas might have a pre-invasive counterpart, which could be identified using the same approach. A series of 79 ductal carcinomas in situ (DCIS) were classified into distinct subgroups according to their ER, HER2 and basal markers expression. Luminal DCIS expressed ER and constituted 64.6% of the series; the HER2 overexpressing tumours did not express ER and represented 25.3% of the cases, whereas 10.1% lack the expression of ER and HER2 and expressed at least one basal marker (P-CD, CK5, CK14, p63, vimentin and/or EGFR). These basal-like DCIS were mostly high-grade, with comedo-type necrosis, and consistently showed expression of P-CD and CK5. In conclusion, DCIS with a basal-like phenotype represent a small percentage in our series, being P-CD and CK5, the most useful adjunct markers to distinguish this subset of carcinomas in situ of the breast.
abstract_id: PUBMED:20395444
Wnt/beta-catenin pathway activation is enriched in basal-like breast cancers and predicts poor outcome. Although Wnt/beta-catenin pathway activation has been implicated in mouse models of breast cancer, there is contradictory evidence regarding its importance in human breast cancer. In this study, invasive and in situ breast cancer tissue microarrays containing luminal A, luminal B, human epidermal growth factor receptor 2 (HER2)(+)/ER(-) and basal-like breast cancers were analyzed for beta-catenin subcellular localization. We demonstrate that nuclear and cytosolic accumulation of beta-catenin, a read-out of Wnt pathway activation, was enriched in basal-like breast cancers. In contrast, membrane-associated beta-catenin was observed in all breast cancer subtypes, and its expression decreased with tumor progression. Moreover, nuclear and cytosolic localization of beta-catenin was associated with other markers of the basal-like phenotype, including nuclear hormone receptor and HER2 negativity, cytokeratin 5/6 and vimentin expression, and stem cell enrichment. Importantly, this subcellular localization of beta-catenin was associated with a poor outcome and is more frequently observed in tumors from black patients. In addition, beta-catenin accumulation was more often observed in basal-like in situ carcinomas than other in situ subtypes, suggesting that activation of this pathway might be an early event in basal-like tumor development. Collectively, these data indicate that Wnt/beta-catenin activation is an important feature of basal-like breast cancers and is predictive of worse overall survival, suggesting that it may be an attractive pharmacological target for this aggressive breast cancer subtype.
abstract_id: PUBMED:36768838
GRHL2 Regulation of Growth/Motility Balance in Luminal versus Basal Breast Cancer. The transcription factor Grainyhead-like 2 (GRHL2) is a critical transcription factor for epithelial tissues that has been reported to promote cancer growth in some and suppress aspects of cancer progression in other studies. We investigated its role in different breast cancer subtypes. In breast cancer patients, GRHL2 expression was increased in all subtypes and inversely correlated with overall survival in basal-like breast cancer patients. In a large cell line panel, GRHL2 was expressed in luminal and basal A cells, but low or absent in basal B cells. The intersection of ChIP-Seq analysis in 3 luminal and 3 basal A cell lines identified conserved GRHL2 binding sites for both subtypes. A pathway analysis of ChIP-seq data revealed cell-cell junction regulation and epithelial migration as well as epithelial proliferation, as candidate GRHL2-regulated processes and further analysis of hub genes in these pathways showed similar regulatory networks in both subtypes. However, GRHL2 deletion in a luminal cell line caused cell cycle arrest while this was less prominent in a basal A cell line. Conversely, GRHL2 loss triggered enhanced migration in the basal A cells but failed to do so in the luminal cell line. ChIP-Seq and ChIP-qPCR demonstrated GRHL2 binding to CLDN4 and OVOL2 in both subtypes but not to other GRHL2 targets controlling cell-cell adhesion that were previously identified in other cell types, including CDH1 and ZEB1. Nevertheless, E-cadherin protein expression was decreased upon GRHL2 deletion especially in the luminal line and, in agreement with its selectively enhanced migration, only the basal A cell line showed concomitant induction of vimentin and N-cadherin. To address how the balance between growth reduction and aspects of EMT upon loss of GRHL2 affected in vivo behavior, we used a mouse basal A orthotopic transplantation model in which the GRHL2 gene was silenced. This resulted in reduced primary tumor growth and a reduction in number and size of lung colonies, indicating that growth suppression was the predominant consequence of GRHL2 loss. Altogether, these findings point to largely common but also distinct roles for GRHL2 in luminal and basal breast cancers with respect to growth and motility and indicate that, in agreement with its negative association with patient survival, growth suppression is the dominant response to GRHL2 loss.
abstract_id: PUBMED:24139214
P-cadherin and vimentin are useful basal markers in breast cancers. Basal-like breast cancer (BLBC) is the breast cancer subtype defined by gene profiling and generates keen clinical interest. Immunohistochemical panels using basal cytokeratins and epidermal growth factor receptor are widely adopted for its identification. Nonetheless, there are concerns about the risk for missing some true BLBCs. Both P-cadherin and vimentin have been proposed as BLBC markers, but their usefulness for BLBC classification has not been well documented. In this study, we evaluated by immunohistochemistry their expression in a large cohort of breast carcinoma. Cancers expressing vimentin or P-cadherin showed BLBC-related morphological features (high grade, presence of necrosis, and lymphocytic infiltration; P < .001 for all except P = .006 for vimentin with lymphocytic infiltration) and immunohistochemical profile (P < .001 for all markers tested except P = .007 for vimentin with human epidermal growth factor receptor 2). Concordantly, they were significantly associated with BLBC (P < .001 for both). Nonetheless, they did not appear to be good stand-alone BLBC markers. Compared with the commonly used reference panel, the specificity (95.9%) and sensitivity (43.1%) of coexpression of vimentin and P-cadherin were better than most single markers or their combinations tested. Moreover, their coexpression was significantly associated with basal features in non-BLBCs and worse disease-free survival in triple-negative breast cancers (hazard ratio, 2.232; P = .027). This raised the possibility that the vimentin and P-cadherin combination can be used to identify BLBC especially those that were missed by the commonly used basal cytokeratins and epidermal growth factor receptor panel. Together, P-cadherin and vimentin could be adjunctive to the commonly used immunohistochemical surrogates for BLBC identification.
abstract_id: PUBMED:17885672
Immunohistochemical heterogeneity of breast carcinomas negative for estrogen receptors, progesterone receptors and Her2/neu (basal-like breast carcinomas). Basal breast carcinomas triple negative for estrogen receptors, progesterone receptors and Her2/neu breast carcinomas are more aggressive than conventional neoplasms. We studied 64 cases with immunohistochemistry, using 23 antibodies, to characterize diverse pathological pathways. A basal cytokeratin was identified in 81% of tumors and vimentin was identified in 55%. The mean Ki67 index was 46% (range, 10-90%). Coincident expression of p50 and p65, which suggests an active nuclear factor-kappaB factor, was present in 13% of neoplasms. Epithelial growth factor receptor (EGFR), insulin-like growth factor-I receptor (IGF-IR) or c-kit (CD117) was identified in 77% of tumors. Loss of protein tyrosine phosphatase was found in 14%, whereas Akt activation was present in 28%. Several differences were identified between two subtypes of basal breast carcinomas: the pure variant (negative S-100 and actin) was more frequently associated with 'in situ carcinoma' (P=0.019) and pBad overexpression (P=0.098), whereas the myoepithelial variant (positive S-100 or actin) showed more frequent tumor necrosis (P=0.048), vimentin expression (P=0.0001), CD117 expression (P=0.001) and activated caspase-3 (P=0.089). IGF-IR could be as important as EGFR for the growth of these neoplasms. Basal cell carcinoma has at least two subtypes with distinct microscopic and immunohistochemical features.
abstract_id: PUBMED:24055090
Epithelial-mesenchymal transition increases during the progression of in situ to invasive basal-like breast cancer. Epithelial-mesenchymal transition (EMT) is known to play an important role in breast cancer invasion and metastatic progression. However, the pattern of expression of EMT markers in the progression from in situ to invasive breast carcinoma is not clear. To investigate this, we performed immunohistochemical analyses of EMT markers (expression of vimentin, smooth muscle actin, osteonectin, and N-cadherin; loss of E-cadherin; alteration of β-catenin), breast cancer stem cell (CSC) markers (CD44(+)/CD24(-), ALDH1), and CD146, an EMT inducer, in invasive carcinomas and ductal carcinoma in situ (DCIS) of the breast. Expression of EMT markers was closely associated with the basal-like subtype and CSC phenotype in invasive carcinoma but not in pure DCIS, except for vimentin. The expression of smooth muscle actin and N-cadherin, loss of E-cadherin, and alteration of β-catenin were significantly higher in invasive carcinomas than in pure DCIS (P = .015, P = .029, P = .001, and P = .007, respectively). Subgroup analyses revealed greater loss of E-cadherin and alteration of β-catenin in invasive carcinoma than in pure DCIS in basal-like subtype (P = .001) but not in non-basal-like subtypes. Moreover, expression of EMT markers and CD146 was higher in the invasive than in the DCIS component of basal-like cancers. Our study confirmed that EMT is an intrinsic characteristic of basal-like subtype and is associated with CSC phenotype. Furthermore, we showed higher expression of EMT markers in invasive carcinomas than in pure DCIS, especially in basal-like subtype, and in the invasive component of basal-like breast cancers, suggesting that EMT may be involved in the progression from in situ to invasive basal-like breast cancers.
Answer: Vimentin is an intermediate filament protein that is often used as a marker for mesenchymal cells and has been investigated for its role in identifying basal type breast cancer, which is known for its poor prognosis. The studies provided offer insights into the utility of vimentin as a marker for basal type breast cancer, but the findings are not entirely conclusive.
In one study, vimentin-positive tumors were associated with younger women, usually lacked estrogen and progesterone receptor, more often expressed basal cytokeratins, and were high-grade cancers. However, vimentin did not help to delineate basal type phenotype in a triple negative group when overall survival was the primary endpoint. The study concluded that adding vimentin to the immunopanel consisting of ER, PgR, HER2, CK5/6, 14, and 17 markers did not better delineate an immunohistochemical definition of basal type of breast cancer (PUBMED:19695088).
Another study found that vimentin expression was associated with basal-like breast cancers (BLBCs) and was significantly associated with worse disease-free survival in triple-negative breast cancers. The combination of vimentin and P-cadherin expression was suggested as potentially useful in identifying BLBC, especially those that might be missed by panels using basal cytokeratins and epidermal growth factor receptor (PUBMED:24139214).
Furthermore, vimentin was identified in 55% of basal breast carcinomas in a study that characterized diverse pathological pathways, suggesting some heterogeneity within this subtype of breast cancer (PUBMED:17885672). Another study showed that vimentin expression was closely associated with the basal-like subtype and cancer stem cell phenotype in invasive carcinoma but not in pure ductal carcinoma in situ (DCIS), except for vimentin (PUBMED:24055090).
In conclusion, while vimentin is associated with certain characteristics of basal type breast cancer, such as younger age at diagnosis, lack of hormone receptors, expression of basal cytokeratins, and high-grade tumors, its utility in delineating basal type breast cancer is not definitive. Vimentin may be part of a broader panel of markers that, when used in combination, could aid in identifying basal type breast cancer, but it does not appear to be a standalone marker for this purpose. |
Instruction: Is psychoprophylaxis a valid approach for heart surgery in children?
Abstracts:
abstract_id: PUBMED:8728804
Is psychoprophylaxis a valid approach for heart surgery in children? Purpose: To validate the hypothesys that a psycological preparation for children who will undergo cardiac surgery may improve the outcome.
Methods: Sixty patients, with ages ranging between 3 and 10 years, submitted to heart surgery for treatment of congenital heart defects, were evaluated. They were divided in 2 groups: experimental and control. A questionnaire was designed for collecting data about psychological and clinical aspects of each patient.
Results: The following data was found to be of statistical significance: acceptance of peripheral vein puncture in the surgical group (chi 2 = 11.59, p < 0.05), calm awakening following general anesthesia (chi 2 = 9.64 p < 0.05), cooperation with the physiotherapy staff (chi 2 = 13.30, p < 0.05), coping with parents absence (chi 2 = 9.64, p < 0.05), acceptance of fluid restriction (chi 2 = 17.78, p < 0.05) and cooperation with removal of stitches and pacemaker electrodes (chi 2 = 19.20, p < 0.05). There was not statistical significance on demand of sedation, cooperation at removal of the orotracheal tube and during examination, necessity of reintubation and occurrence of clinical complications. However, the prepared group showed a slight tendency to have less postoperative complications (20%) than the control (27%).
Conclusion: It was found that children who had adequated psychologic preparation prior to the correction of congenital heart defects had better psychological results with the imposed trauma.
abstract_id: PUBMED:35173672
Extended Neuroendoscopic Endonasal Approach for Resection of Craniopharyngioma in Children. Objective: To explore the surgical approach and technique of neuroendoscopic endonasal resection of pediatric craniopharyngiomas and to further evaluate its safety and effect in children.
Methods: The clinical data of 8 children with craniopharyngiomas who were surgically treated by neuroendoscopy through an extended endonasal approach in our center from 2018 to 2021 were retrospectively analyzed. The related surgical approach and technique were evaluated to improve the surgical results and further reduce the surgical complications when removing craniopharyngioma in children.
Results: All 8 patients achieved a gross-total resection of the tumor under neuroendoscopy. Postoperatively, 2 cases of transient hyperthermia and 4 cases of transient hyper- and/or hyponatremia occurred within the first 2 weeks, all of which were quickly controlled. Seven patients had symptoms of diabetes insipidus to varying degrees after the operation, and 4 of them improved within 1-3 months after surgery, but 3 cases still needed oral pituitrin. There were no cases of coma or death, leakage of cerebrospinal fluid, or severe electrolyte imbalance after surgery. During the postoperative follow-up of 3 months to 2 years, no tumor recurrence was found. Among the 7 patients who suffered postoperative neuroendocrine deficiencies, 3 patients were found to be temporary during the follow-up, but 4 patients still required hormone replacement therapy. Particularly, postoperative visual deterioration and olfactory defect that occurred in patients were all improved during follow-up periods. In addition, 4 cases of obesity were noted at the last follow-up.
Conclusions: Extended neuroendoscopic endonasal resection of craniopharyngiomas may be used as a safe and effective approach for children. Due to the poor pneumatization of the sphenoid sinus and worse compliance of treatment in children, surgical techniques of exposing the sellar region, removing the tumor, and reconstructing the skull base, as well as postoperative management of patients was proposed. However, due to the limited surgical cases in the study, the surgical safety and effects of the extended neuroendoscopic endonasal approach for children with craniopharyngiomas need to be further studied in the future.
abstract_id: PUBMED:31850246
Laparoscopic Splenectomy: Postero-Lateral Approach. In paediatric population, the laparoscopic splenectomy has been preferred to the open surgery during the last years. Due to the improvement of the technique and the devices, the indications to the laparoscopic splenectomy have been increased, even though there is still a variety of conditions in which the execution of this technique is arduous. During the preoperative consult there is the need to carefully evaluate the existence of cholecystic lithiasis, the haemoglobin level in patients with SCA, platelet count in children with ITP and the vaccination status. An anterior and a lateral or hanging spleen approach are primarily used for laparoscopic splenectomy. In the last four years, near the Section of Pediatric Surgery of the Department of Pediatrics, Obstetrics and Medicine of the Reproduction of Siena University, 8 cases of splenomegaly have been treated, 7 by lateral videolaparoscopic splenectomy (5 males and 2 females, with medium age of 10,5 years) and 1 by anterior approach (10 years). The advantages shown by these techniques allow the laparoscopic splenectomy to be considered as a valid alternative to the open surgery. In children's laparoscopic splenectomy, the rate of complications is considerably low and the the major problem is the intraoperative hemorrhage. With increasing surgical experience, the minimally invasive approach appears to be superior in terms of faster postoperative recovery, shorter hospital stay, perioperative and postoperative advantages. Therefore, the laparoscopic technique may soon be accepted as the standard method in patients requiring splenectomy.
abstract_id: PUBMED:29266892
Psychoprophylaxis in elective paediatric general surgery: does audiovisual tools improve the perioperative anxiety in children and their families? Aim Of The Study: Surgery is considered a stressful experience for children and their families who undergo elective procedures. Different tools have been developed to improve perioperative anxiety. Our objective is to demonstrate if the audiovisual psychoprophylaxis reduces anxiety linked to paediatric surgery.
Methods: A randomized prospective case-control study was carried out in children aged 4-15 who underwent surgery in a Paediatric Surgery Department. We excluded patients with surgical backgrounds, sever illness or non-elective procedures. Simple randomization was performed and cases watched a video before being admitted, under medical supervision. Trait and state anxiety levels were measured using the STAI-Y2, STAI-Y2, STAI-C tests and VAS in children under 6-years-old, at admission and discharge.
Results: 100 patients (50 cases/50 controls) were included, mean age at diagnosis was 7.98 and 7.32 respectively. Orchiopexy was the most frequent surgery performed in both groups. Anxiety state levels from parents were lower in the Cases Group (36.06 vs 39.93 p= 0.09 in fathers, 38.78 vs 40.34 p= 0.43 in mothers). At discharge, anxiety levels in children aged > 6 were statistically significant among cases (26.84 vs 32.96, p< 0.05).
Conclusions: The use of audiovisual psychoprophylaxis tools shows a clinically relevant improvement in perioperative anxiety, both in children and their parents. Our results are similar to those reported by other authors supporting these tools as beneficial strategy for the family.
abstract_id: PUBMED:3178060
Preoperative psychoprophylaxis in childhood. Results of a hospital program Results of a surgical psychoprophylaxis program, theoretically and technically framed within psychoanalytic theory is presented. It also comprises a description of the method used, as well as criteria by which authors have determined whether or not a child is ready for surgery. Results obtained with 134 children and a description of those who showed post-surgical disturbances are presented. Analysis is carried out of the percentage of disorders according to age group, showing that highest risk is among children up to five years of age, coinciding with the finding put forth by other authors. Finally some conclusions in relation to prevention of psychologic iatrogenic disorders in pediatric surgery are drawn.
abstract_id: PUBMED:38364228
Measurement of the intersiphon distance for normal skull base development and estimation of the surgical window for the endoscopic transtuberculum approach in children. Objective: Due to the underdeveloped skull base in children, it is crucial to predict whether a sufficient surgical window for an endoscopic endonasal approach can be achieved. This study aimed to analyze the presumed surgical window through measurement of the intersiphon distance (ISD) and the planum-sella height (PSH) on the basis of age and its correlation with the actual surgical window for the endoscopic transtuberculum approach.
Methods: Twenty patients of each age from 3 to 18 years were included as the normal skull base population. ISD and PSH were measured and compared among consecutive ages. Additionally, 42 children with craniopharyngiomas or Rathke's cleft cysts who underwent treatment via the endoscopic transtuberculum approach were included. ISD and PSH were measured on preoperative images and then correlated with the dimensions of the surgical window on postoperative CT scans. The intraoperative endoscopic view was classified as narrow, intermediate, or wide based on operative photographs or videos, and relevant clinical factors were analyzed.
Results: In the normal skull base population, both ISD and the estimated area of the surgical window increased with age, particularly at 8 and 11 years old. On the other hand, PSH did not show an incremental pattern with age. Among the 42 children who underwent surgery, 24 had craniopharyngioma and 18 had Rathke's cleft cysts. ISD showed the strongest correlation with the actual area of the surgical window [r(40) = 0.69, p < 0.001] rather than with age or PSH. The visual grade of the intraoperative endoscopic view was narrow in 17 patients, intermediate in 21, and wide in 4. Preoperative ISD was 14.58 ± 1.29 mm in the narrow group, 16.13 ± 2.30 mm in the intermediate group, and 18.09 ± 3.43 mm in the wide group (p < 0.01). There were no differences in terms of extent of resection (p = 0.41); however, 2 patients in the narrow group had postoperative complications.
Conclusions: Normal skull base development exhibited age-related growth. However, in children with suprasellar lesions, the measurement of the ISD showed a better correlation than age for predicting the surgical window for the endoscopic transtuberculum approach. Children with a small ISD should be approached with caution due to the limited surgical window.
abstract_id: PUBMED:35087661
Lateral versus posterior surgical approach for the treatment of supracondylar humeral fractures in children: a systematic review and meta-analysis. Background: Supracondylar humeral fracture (SHF) is the most common type of fracture in children. Moreover, lateral and posterior surgical approaches are the most frequently chosen approaches for open reduction surgery in displaced SHF when C-arm is unavailable. However, previous literature showed mixed findings regarding functional and cosmetic outcomes. Currently, no systematic review and meta-analysis has compared these two procedures. Methods: Our protocol was registered at PROSPERO (registration number CRD42021213763). We conducted a comprehensive electronic database search in MEDLINE, EMBASE, and CENTRAL. Two independent reviewers screened the title and abstract, followed by full-text reading and study selection based on eligibility criteria. The quality of the selected studies was analyzed with the ROBINS-I tool. Meta-analysis was carried out to compare the range of motion (functional outcome) and cosmetic outcome according to Flynn's criteria. This systematic review was conducted based on PRISMA and Cochrane handbook guidelines. Results: Our initial search yielded 163 studies, from which we included five comparative studies comprising 231 children in the qualitative and quantitative analysis. The lateral approach was more likely to result in excellent (OR 1.69, 95% CI [0.97-2.93]) and good (OR 1.12, 95% CI [0.61-2.04]) functional outcomes and less likely to result in fair (OR 0.84, 95% CI [0.34-2.13]) and poor (OR 0.42, 95% CI [0.1-1.73]) functional outcomes compared to the posterior approach. In terms of cosmetic results, both approaches showed mixed findings. The lateral approach was more likely to result in excellent (OR 1.11, 95% CI [0.61-2.02]) and fair (OR 1.18, 95% CI [0.49-2.80]) but less likely to result in good (OR 0.79, 95% CI [0.40-1.55]) cosmetic outcomes. However, none of these analyses were statistically significant (p> 0.05). Conclusion: Lateral and posterior surgical approaches resulted in satisfactory functional and cosmetic outcomes. The two approaches are comparable for treating SHF in children when evaluated with Flynn's criteria.
abstract_id: PUBMED:30350531
Constipation and fecal incontinence in children with cerebral palsy. Overview of literature and flowchart for a stepwise approach. Background And Study Aims: Constipation and fecal incontinence are common problems in neurologically impaired children. This paper aims to give an overview on bowel problems in cerebral palsy children and to suggest a stepwise treatment approach. A pubmed search was performed looking at studies during the past 20 years investigating bowel problems in neurologically disabled children.
Results: The search revealed 15 articles. Prevalence and presentation was the subject of 8 papers, confirming the importance of the problem in these children. The other papers studied the results of different treatment modalities. No significant differences between treatment modalities could be demonstrated due to small studied cohorts. Therefore, no specific treatment strategy is currently available. An experienced based stepwise approach is proposed starting with normalization of fiber intake. The evaluation of the colon transit time could help in deciding whether desimpaction and eventually laxatives including both osmotic (lactulose, macrogol) as well as stimulant laxatives might be indicated. Or, in case of fast transit loperamide or psyllium can be tried. Surgery should be a last resort option.
Conclusion: Studies investigating constipation and continence in neurologically impaired children are scarce, making it difficult to choose for the optimal treatment. A stepwise treatment approach is proposed, measuring the colon transit time to guide treatment choices.
abstract_id: PUBMED:33679063
Video-assisted thoracoscopic pacemaker lead placement in children with atrioventricular block. Background: The pacemaker lead placement is presented as one of the most appropriate procedures in children with a complete atrioventricular block (AVB). Despite the fact that video-assisted thoracic surgery (VATS) for epicardial lead placement has demonstrated positive results as to the feasibility, safety, and efficacy in adults, its role in pacemaker implantation in children remains unclear.
Aim: This study sought to assess the intermediate-term outcomes of video-assisted thoracoscopic pacemaker lead placement in children with complete AVB.
Materials And Methods: From May 2017 to November 2019, five children with complete AVB underwent minimally invasive left ventricular (LV) lead placements via thoracoscopic video assistance approach. The procedure was performed under complex intratracheal anesthesia with single-lung ventilation, all pacing parameters were evaluated in perioperative and follow-up periods.
Results: The median age of children at implantation was 3 years (range: 2 to 4 years), the median weight was 13 kg (range: 12-15 kg). All procedures were completed successfully, pacing thresholds for the active lead measured 0.3-1.1V, with R-wave amplitude of 8-18 mV and impedance of 560-1478 Ohm.
Conclusion: Thoracoscopic pacemaker lead placement may provide a potential alternative to the transthoracic approach of epicardial lead placement in children with AVB.
abstract_id: PUBMED:29404240
Open Approaches to the Anterior Skull Base in Children: Review of the Literature. Introduction Skull base lesions in children and adolescents are rare, and comprise only 5.6% of all skull base surgery. Anterior skull base lesions dominate, averaging slightly more than 50% of the cases. Until recently, surgery of the anterior skull base was dominated by open procedures and endoscopic skull base surgery was reserved for benign pathologies. Endoscopic skull base surgery is gradually gaining popularity. In spite of that, open skull base surgery is still considered the "gold standard" for the treatment of anterior skull base lesions, and it is the preferred approach in selected cases. Objective This article reviews current concepts and open approaches to the anterior skull base in children in the era of endoscopic surgery. Materials and Methods Comprehensive literature review. Results Extensive intracranial-intradural invasion, extensive orbital invasion, encasement of the optic nerve or the internal carotid artery, lateral supraorbital dural involvement and involvement of the anterior table of the frontal sinus or lateral portion of the frontal sinus precludes endoscopic surgery, and mandates open skull base surgery. The open approaches which are used most frequently for surgical resection of anterior skull base tumors are the transfacial/transmaxillary, subcranial, and subfrontal approaches. Reconstruction of anterior skull base defects is discussed in a separate article in this supplement. Discussion Although endoscopic skull base surgery in children is gaining popularity in developed countries, in many cases open surgery is still required. In addition, in developing countries, which accounts for more than 80% of the world's population, limited access to expensive equipment precludes the use of endoscopic surgery. Several open surgical approaches are still employed to resect anterior skull base lesions in the pediatric population. With this large armamentarium of surgical approaches, tailoring the most suitable approach to a specific lesion in regard to its nature, location, and extent is of utmost importance.
Answer: Yes, psychoprophylaxis appears to be a valid approach for heart surgery in children. A study that evaluated sixty patients aged between 3 and 10 years undergoing heart surgery for congenital heart defects found that children who received adequate psychological preparation prior to surgery had better psychological outcomes when dealing with the trauma imposed by the surgery. The study reported statistically significant improvements in several areas, including acceptance of peripheral vein puncture, calm awakening following general anesthesia, cooperation with the physiotherapy staff, coping with parents' absence, acceptance of fluid restriction, and cooperation with the removal of stitches and pacemaker electrodes. There was no statistical significance in the demand for sedation, cooperation at removal of the orotracheal tube and during examination, necessity of reintubation, and occurrence of clinical complications. However, the prepared group showed a slight tendency to have fewer postoperative complications compared to the control group (PUBMED:8728804). |
Instruction: Does HPV type affect outcome in oropharyngeal cancer?
Abstracts:
abstract_id: PUBMED:27864932
HPV status and favourable outcome in vulvar squamous cancer. It is universally accepted that high-risk human papillomavirus (HR-HPV) is the cause of cervical dysplasia and cancer. More recently, it has been shown that HPV is also a marker of clinical outcome in oropharyngeal cancer. However, contemporary information is lacking on both the prevalence of HPV infection in vulvar cancer (VSCC), its precursor lesion, vulvar intraepithelial neoplasia (VIN) and the influence of HPV-status on the prognosis of this malignancy. We have conducted a detailed population-based study to examine rates of progression of VIN to VSCC, type-specific HPV prevalence in vulvar disease and the influence of HPV status on clinical outcome in VSCC. We observed that the age at which women are diagnosed with VSCC is falling and there is a significant time gap between first diagnosis of VIN and progression to invasive disease. HR-HPV infection was detected in 87% (97/112) cases of VIN and 52% cases (32/62) of VSCC. The presence of HR-HPV in squamous intraepithelial lesion was associated with lower rates of progression to invasive cancer (hazard ratio, 0.22, p = 0.001). In the adjusted analysis, HR-HPV was associated with improved progression-free survival of VSCC compared to those with HPV negative tumours (hazard ratio, 0.32, p = 0.02).
abstract_id: PUBMED:35406449
Context-Aware Saliency Guided Radiomics: Application to Prediction of Outcome and HPV-Status from Multi-Center PET/CT Images of Head and Neck Cancer. Purpose: This multi-center study aims to investigate the prognostic value of context-aware saliency-guided radiomics in 18F-FDG PET/CT images of head and neck cancer (HNC).
Methods: 806 HNC patients (training vs. validation vs. external testing: 500 vs. 97 vs. 209) from 9 centers were collected from The Cancer Imaging Archive (TCIA). There were 100/384 and 60/123 oropharyngeal carcinoma (OPC) patients with human papillomavirus (HPV) status in training and testing cohorts, respectively. Six types of images were used for radiomics feature extraction and further model construction, namely (i) the original image (Origin), (ii) a context-aware saliency map (SalMap), (iii, iv) high- or low-saliency regions in the original image (highSal or lowSal), (v) a saliency-weighted image (SalxImg), and finally, (vi) a fused PET-CT image (FusedImg). Four outcomes were evaluated, i.e., recurrence-free survival (RFS), metastasis-free survival (MFS), overall survival (OS), and disease-free survival (DFS), respectively. Multivariate Cox analysis and logistic regression were adopted to construct radiomics scores for the prediction of outcome (Rad_Ocm) and HPV-status (Rad_HPV), respectively. Besides, the prognostic value of their integration (Rad_Ocm_HPV) was also investigated.
Results: In the external testing cohort, compared with the Origin model, SalMap and SalxImg achieved the highest C-indices for RFS (0.621 vs. 0.559) and MFS (0.785 vs. 0.739) predictions, respectively, while FusedImg performed the best for both OS (0.685 vs. 0.659) and DFS (0.641 vs. 0.582) predictions. In the OPC HPV testing cohort, FusedImg showed higher AUC for HPV-status prediction compared with the Origin model (0.653 vs. 0.484). In the OPC testing cohort, compared with Rad_Ocm or Rad_HPV alone, Rad_Ocm_HPV performed the best for OS and DFS predictions with C-indices of 0.702 (p = 0.002) and 0.684 (p = 0.006), respectively.
Conclusion: Saliency-guided radiomics showed enhanced performance for both outcome and HPV-status predictions relative to conventional radiomics. The radiomics-predicted HPV status also showed complementary prognostic value.
abstract_id: PUBMED:30061236
Overexpression of FGFR3 in HPV-positive Tonsillar and Base of Tongue Cancer Is Correlated to Outcome. Background/aim: Human papillomavirus-positive (HPV+) tonsillar and base of tongue squamous cell carcinoma (TSCC/BOTSCC) have better outcome than corresponding HPV- cancers. To better individualize treatment, additional predictive markers are needed. Previously, we have shown that mutated fibroblast growth factor receptor 3 protein (FGFR3) was correlated to poorer prognosis and here FGFR3 expression was further analyzed.
Patients And Methods: One-hundred-fifteen HPV+TSCC/ BOTSCC biopsies were analyzed for FGFR3 by immunohistochemistry (IHC), and 109/115 were analyzed for FGFR3 mutations by Ion Proton sequencing, or by Competitive Allele-Specific Taqman PCR (CAST-PCR). Disease-free survival (DFS) was then calculated according to FGFR3 IHC expression.
Results: CAST-PCR was useful for detecting the three most common FGFR3 mutations. Focusing especially on the 98/115 patients with HPV+TSCC/BOTSCC and wild-type FGFR3, high FGFR3 expression correlated to significantly better 3-year DFS, p=0.043.
Conclusion: In patients with HPV+TSCC/BOTSCC and wild-type FGFR3, overexpression of FGFR3 was correlated with better DFS.
abstract_id: PUBMED:34960724
Maternal HPV Infection: Effects on Pregnancy Outcome. The human papilloma virus (HPV) infection, caused by a ubiquitous virus typically transmitted through the direct contact of infected organs, either through the skin or mucosa, is the most common sexually transmitted infection, placing young women at a high risk of contracting it. Although the vast majority of cases spontaneously clear within 1-2 years, persistent HPV infection remains a serious concern, as it has repeatedly been linked to the development of multiple malignancies, including cervical, anogenital, and oropharyngeal cancers. Additionally, more recent data suggest a harmful effect of HPV infection on pregnancy. As the maternal hormonal environment and immune system undergo significant changes during pregnancy, the persistence of HPV is arguably favored. Various studies have reported an increased risk of adverse pregnancy outcomes among HPV-positive women, with the clinical impact encompassing a range of conditions, including preterm birth, miscarriage, pregnancy-induced hypertensive disorders (PIHD), intrauterine growth restriction (IUGR), low birth weight, the premature rupture of membranes (PROM), and fetal death. Therefore, understanding the mechanisms employed by HPV that negatively impact pregnancy and assessing potential approaches to counteract them would be of interest in the quest to optimize pregnancy outcomes and improve child survival and health.
abstract_id: PUBMED:31216850
Prognosis of HPV-Positive and -Negative Oropharyngeal Cancers Depends on the Treatment Modality. Background: The association between human papilloma virus (HPV) and oropharyngeal carcinoma is a topical issue due mainly to the rapid increase in incidence over recent years. These tumors are etiopathogenetically, epidemiologically, and clinically different from other carcinomas at this location. They have a better prognosis in that they are more chemo-and radiosensitive. Indeed, this has been shown by many extensive retrospective and prospective studies. HPV status is considered an integral part of a standard histopathological examination and is included as a new biological parameter in TNM classification.
Materials And Methods: The results of 77 patients who were treated non-surgically for locally advanced oropharyngeal carcinoma at a single university ear, nose, and throat clinic were analyzed retrospectively.
Results: Overall and specific survival of those with HPV-positive (HPV+) tumors was better that those for HPV negative (HPV-) tumors. With the exception of TNM classification, HPV positivity appeared to be the strongest predictor of local control, and of overall and specific survival, regardless of the type of treatment. However, smoking and p53 positivity were significant negative predictors of overall survival. Patients with HPV-associated tumors had a significantly better prognosis, regardless of treatment type. The difference between treatment modalities was confirmed for the whole group of patients, but not for the HPV+ and HPV-patients specifically, most probably due to the small number of patients enrolled.
Conclusion: The results obtained herein may constitute the first step toward the concept of treatment de-escalation in those with HPV-associated oropharyngeal carcinoma; however, this decision can be based only on the results of current extensive randomized trials. Specification of the optimal de-escalation scheme, or the choice of treatment modality, for which the difference in treatment results is most pronounced, has yet to be identified. This work was supported by grants of the Ministry of Health of the Czech Republic IGA NT12483-4/2011 and AZV 15-31627A. he authors declare they have no potential conflicts of interest concerning drugs, products, or services used in the study. The Editorial Board declares that the manuscript met the ICMJE recommendation for biomedical papers. Submitted: 21. 9. 2018 Accepted: 14. 5. 2019.
abstract_id: PUBMED:34572957
HPV Status as Prognostic Biomarker in Head and Neck Cancer-Which Method Fits the Best for Outcome Prediction? The incidence of human papillomavirus (HPV)-related head and neck cancer (HNSCC) is rising globally, presenting challenges for optimized clinical management. To date, it remains unclear which biomarker best reflects HPV-driven carcinogenesis, a process that is associated with better therapeutic response and outcome compared to tobacco/alcohol-induced cancers. Six potential HPV surrogate biomarkers were analyzed using FFPE tissue samples from 153 HNSCC patients (n = 78 oropharyngeal cancer (OPSCC), n = 35 laryngeal cancer, n = 23 hypopharyngeal cancer, n = 17 oral cavity cancer): p16, CyclinD1, pRb, dual immunohistochemical staining of p16 and Ki67, HPV-DNA-PCR, and HPV-DNA-in situ hybridization (ISH). Biomarkers were analyzed for correlation with one another, tumor subsite, and patient survival. P16-IHC alone showed the best performance for discriminating between good (high expression) vs poor outcome (low expression; p = 0.0030) in OPSCC patients. Additionally, HPV-DNA-ISH (p = 0.0039), HPV-DNA-PCR (p = 0.0113), and p16-Ki67 dual stain (p = 0.0047) were significantly associated with prognosis in uni- and multivariable analysis for oropharyngeal cancer. In the non-OPSCC group, however, none of the aforementioned surrogate markers was prognostic. Taken together, P16-IHC as a single biomarker displays the best diagnostic accuracy for prognosis stratification in OPSCC patients with a direct detection of HPV-DNA by PCR or ISH as well as p16-Ki67 dual stain as potential alternatives.
abstract_id: PUBMED:27527216
HPV Associated Head and Neck Cancer. Head and neck cancers (HNCs) are a highly heterogeneous group of tumours that are associated with diverse clinical outcomes. Recent evidence has demonstrated that human papillomavirus (HPV) is involved in up to 25% of HNCs; particularly in the oropharyngeal carcinoma (OPC) subtype where it can account for up to 60% of such cases. HPVs are double-stranded DNA viruses that infect epithelial cells; numerous HPV subtypes, including 16, 18, 31, 33, and 35, drive epithelial cell transformation and tumourigenesis. HPV positive (HPV+) HNC represents a distinct molecular and clinical entity from HPV negative (HPV-) disease; the biological basis for which remains to be fully elucidated. HPV positivity is strongly correlated with a significantly superior outcome; indicating that such tumours should have a distinct management approach. This review focuses on the recent scientific and clinical investigation of HPV+ HNC. In particular, we discuss the importance of molecular and clinical evidence for defining the role of HPV in HNC, and the clinical impact of HPV status as a biomarker for HNC.
abstract_id: PUBMED:21956679
Update on HPV-induced oropharyngeal cancer Oropharyngeal squamous cell carcinoma (OSCC) is associated with oncogenic human papillomavirus (HPV) infection in 30-40% of all cases in Germany. The use of PCR and / or in situ hybridisation to detect HPV in tumour tissue is used in combination with p16 immunohistochemistry to reliably distinguish HPV-related and HPV-unrelated OSCC. The distinct biological behaviour of the HPV-related subset of OSCC results in a more favourable prognosis. This might be the result of a greater response to chemotherapy and radiotherapy as seen in recent studies. Ongoing and future clinical trials will stratify for HPV status. If the results of these prospective, randomized trials are consistent with the preliminary results of recent studies, HPV status will be of enormous clinical relevance in the future.
abstract_id: PUBMED:23663293
Does HPV type affect outcome in oropharyngeal cancer? Background: An epidemic of human papillomavirus (HPV)-related oropharyngeal squamous cell cancer (OPSCC) has been reported worldwide largely due to oral infection with HPV type-16, which is responsible for approximately 90% of HPV-positive cases. The purpose of this study was to determine the rate of HPV-positive oropharyngeal cancer in Southwestern Ontario, Canada.
Methods: A retrospective search identified ninety-five patients diagnosed with OPSCC. Pre-treatment biopsy specimens were tested for p16 expression using immunohistochemistry and for HPV-16, HPV-18 and other high-risk subtypes, including 31,33,35,39,45,51,52,56,58,59,67,68, by real-time qPCR.
Results: Fifty-nine tumours (62%) were positive for p16 expression and fifty (53%) were positive for known high-risk HPV types. Of the latter, 45 tumors (90%) were identified as HPV-16 positive, and five tumors (10%) were positive for other high-risk HPV types (HPV-18 (2), HPV-67 (2), HPV-33 (1)). HPV status by qPCR and p16 expression were extremely tightly correlated (p < 0.001, Fishers exact test). Patients with HPV-positive tumors had improved 3-year overall (OS) and disease-free survival (DFS) compared to patients with HPV-negative tumors (90% vs 65%, p = 0.001; and 85% vs 49%, p = 0.005; respectively). HPV-16 related OPSCC presented with cervical metastases more frequently than other high-risk HPV types (p = 0.005) and poorer disease-free survival was observed, although this was not statistically significant.
Conclusion: HPV-16 infection is responsible for a significant proportion of OPSCC in Southwestern Ontario. Other high-risk subtypes are responsible for a smaller subset of OPSCC that present less frequently with cervical metastases and may have a different prognosis.
abstract_id: PUBMED:32635845
Update for Diagnosis and Management of HPV-Driven Oropharyngeal Cancer Update for Diagnosis and Management of HPV-Driven Oropharyngeal Cancer Abstract. In the past decades, an increasing incidence of oropharyngeal squamous cell cancer could be observed. More than twenty years ago, a correlation between a pharyngeal Human papillomavirus high-risk type infection and the development of oropharyngeal cancer has been suspected. Especially younger patients without the former risk factors smoking and alcohol have a higher prevalence for this cancer type. HPV-associated cancer is developing in the lymphatic tissue of the tonsils and the base of the tongue. HPV-driven tumors can be defined as a clinical and morphologic distinct tumor entity with a significantly better prognosis compared to tumors based on smoking and alcohol consumption. They are demonstrating a clearly better treatment response irrespective of the treatment modality. The tumor development is assumed to be comparable to cervical cancer, probably through a step-wise process from dysplasia to invasive cancer. In the pharynx, no HPV-associated precursor lesions have been detected so far. Therefore, Screening program proven to be very successful in the cervix have not could not have been implemented so far. The reduction of HPV-associated tumor burden in the cervix is likely to be compensated by the rising number of HPV-driven oropharyngeal cancer. P16 as a surrogate marker for HPV has been implemented in the 8th edition of the TNM classification for oropharyngeal cancer. A worldwide accepted definition of an HPV-driven tumor is lacking so far. P16 immunhistochemistry or HPV-DNA detection by PCR as single markers have an insufficient sensitivity and specificity. A combination of both markers demonstrates a higher accuracy compared to the gold standard RNA. Antibodies to HPV oncoproteins are reliable diagnostic and prognostic markers that could in the future possibly serve for early tumor detection.
Answer: Yes, HPV type does affect the outcome in oropharyngeal cancer. HPV-positive oropharyngeal squamous cell cancer (OPSCC) is associated with a better prognosis compared to HPV-negative tumors. This is largely due to oral infection with HPV type-16, which is responsible for approximately 90% of HPV-positive cases and is associated with improved overall and disease-free survival (PUBMED:23663293). HPV status is considered an integral part of standard histopathological examination and is included as a new biological parameter in TNM classification due to its strong correlation with significantly superior outcomes (PUBMED:31216850). HPV-positive (HPV+) head and neck cancers (HNCs), particularly in the oropharyngeal carcinoma (OPC) subtype, are more chemo-and radiosensitive, which contributes to their better prognosis (PUBMED:27527216). However, it is noted that HPV-16 related OPSCC presented with cervical metastases more frequently than other high-risk HPV types, and poorer disease-free survival was observed, although this was not statistically significant (PUBMED:23663293).
Moreover, the study of HPV-associated head and neck cancer suggests that HPV positivity is strongly correlated with a significantly superior outcome, indicating that such tumors should have a distinct management approach (PUBMED:27527216). The distinct biological behavior of HPV-related subset of OSCC results in a more favorable prognosis, potentially due to a greater response to chemotherapy and radiotherapy (PUBMED:21956679).
In summary, the type of HPV, particularly HPV-16, is a significant factor in the outcome of oropharyngeal cancer, with HPV-positive cases generally having a better prognosis than HPV-negative cases. |
Instruction: Are cognitively intact seniors with subjective memory loss more likely to develop dementia?
Abstracts:
abstract_id: PUBMED:12221654
Are cognitively intact seniors with subjective memory loss more likely to develop dementia? Background: Subjective memory loss (SML) is common in elderly persons. It is not clear if SML predicts the development of dementia.
Objectives: (1) to determine if SML in those with normal cognition predicts dementia or cognitive impairment without dementia (CIND); (2) to determine if an association is independent of the effect of age, gender and depressive symptoms.
Methods: Secondary analysis of the Manitoba Study of Health and Aging (MSHA), a population-based prospective study. Data were collected in 1991, and follow-up was done 5 years later. Community-dwelling seniors sampled randomly from a population-based registry in the Canadian province of Manitoba, stratified on age and region. Only those scoring in the normal range of the Modified mini-mental state examination (3MS) were included. Predictor variables were self-reported memory loss, 3MS, Center for epidemiological studies-depression scale (CES-D), age, gender, and education. Outcomes were mortality and cognitive impairment five years later.
Results: In bivariate analyses, SML was associated with both death and dementia. In multivariate models, SML did not predict mortality. After adjusting for age, gender, and depressive symptoms, SML predicted dementia. However, after adjusting for baseline 3MS score, SML did not predict dementia.
Conclusions: Memory complaints predict the development of dementia over five years, and clinicians should monitor these persons closely. However, the proportion of persons developing dementia was small, and SML alone is unlikely to be a useful clinical predictor of dementia.
abstract_id: PUBMED:28216174
Subjective Memory Complaints are Associated with Incident Dementia in Cognitively Intact Older People, but Not in Those with Cognitive Impairment: A 24-Month Prospective Cohort Study. Objective: Although subjective memory complaints (SMCs) are considered a risk factor for incident dementia in older people, the effect might differ based on cognitive function. The aim of the present study was to investigate whether the effect of SMCs on the incidence of dementia in older people differed based on cognitive function.
Design: A 24-month follow-up cohort study.
Setting: Japanese community.
Participants: Prospective, longitudinal data for incident dementia were collected for 3,672 participants (mean age: 71.7 years; 46.5% men) for up to 24 months.
Measurements: Baseline measurements included covariates for incident dementia, SMCs, and cognitive function. Associations between SMCs, cognitive impairment, and incident dementia were examined using Cox proportional hazards models.
Results: Incidences of dementia in the cognitively intact without SMC, cognitively intact with SMC, cognitive impairment without SMC, and cognitive impairment with SMC groups were 0.3%, 1.8%, 3.4%, and 4.8%, respectively. In the cognitively intact participants, SMCs were associated with a significantly higher risk of dementia (hazard ratio [HR]: 4.95, 95% confidence interval [CI]: 1.52-16.11, p = 0.008). Incident dementia with cognitive impairment was not significantly different based on SMC presence (p = 0.527). Participants with cognitive impairment in multiple domains had a significantly higher risk of incident dementia (HR: 2.07, 95% CI: 1.01-4.24, p = 0.046) CONCLUSION: SMCs were related with dementia in cognitively intact older people, but not in those with cognitive impairment.Multiple domains of cognitive impairment were associated with a higher risk of incident dementia.
abstract_id: PUBMED:23567387
Subjective memory complaints are associated with diurnal measures of salivary cortisol in cognitively intact older adults. Objective: To investigate the relationship between subjective memory complaints (SMC) and the stress hormone cortisol using diurnal measures in older, cognitively intact subjects.
Methods: This cross-sectional study conducted at a university research center included 64 volunteers (with or without SMC) with a mean age of 78.6 (±6.3) years and diagnosis of cognitively normal based on objective neuropsychological testing. Measures of diurnal salivary cortisol, depressive symptoms, episodic memory performance, level of anxiety, and apolipoprotein E (APOE) e4 allele status were obtained.
Results: In multivariate logistic regression analyses with SMC as outcome, averaged postpeak cortisol, the cortisol awakening response, and depressive symptoms were significant predictors, whereas gender, memory performance, anxiety, and APOE-e4 status were not.
Conclusions: Significant associations between SMC and diurnal measures of cortisol in cognitively intact elderly suggest that hypothalamic-pituitary-adrenal axis dysfunction may contribute to early neuropathologic changes in older adults who complain of memory decline undetected on neuropsychological testing.
abstract_id: PUBMED:15526312
No association between subjective memory complaints and apolipoprotein E genotype in cognitively intact elderly. Objective: This cross-sectional study examined the relationship between subjective memory complaints and the apolipoprotein epsilon 4 allele (epsilon4), a genetic risk factor for Alzheimer's disease (AD), among cognitively normal subjects identified from a community memory screening.
Design: The sample comprised 232 consecutive white non-Hispanic older adults who presented to a free community-based memory-screening program at a University affiliated memory disorders center. Participants were classified as cognitively normal based on scores on the age and educated adjusted Folstein Mini-Mental Status Exam (MMSAdj) and a brief Delayed Verbal Recall Test (DRT). Subjects were assessed for APOE genotype, subjective memory complaints (Memory Questionnaire, MQ), depressive symptoms (Hamilton Depression Rating Scale, HDRS), and history of four major medical conditions that have been associated with memory loss (stroke/transient ischemic attack [TIA], atherosclerotic heart disease, hypertension, and diabetes). A hierarchical regression analysis was performed to examine the association between APOE genotype and memory complaints after controlling for a host of potential confounding factors.
Results: The APOE epsilon4 allele frequency for cognitively normal subjects was 0.13. Subjective memory complaints were predicted by depressive symptoms and a history of stroke/TIA. They were not associated with APOE genotype, MMSAdj score, DRT score, age, education, gender, and reported history of atherosclerotic heart disease, hypertension, or diabetes.
Conclusion: The results did not suggest an association between subjective memory complaints and the APOE epsilon4 allele in this sample of cognitively intact subjects. This indicates that memory complaints may confer risk for future dementia through pathways independent of APOE genotype. The results also show that older adults with memory complaints are at increased risk for underlying depression.
abstract_id: PUBMED:30631336
Lifestyle Factors Are Important Contributors to Subjective Memory Complaints among Patients without Objective Memory Impairment or Positive Neurochemical Biomarkers for Alzheimer's Disease. Background/aims: Many patients presenting to a memory disorders clinic for subjective memory complaints do not show objective evidence of decline on neuropsychological data, have nonpathological biomarkers for Alzheimer's disease, and do not develop a neurodegenerative disorder. Lifestyle variables, including subjective sleep problems and stress, are factors known to affect cognition. Little is known about how these factors contribute to patients' subjective sense of memory decline. Understanding how lifestyle factors are associated with the subjective sense of failing memory that causes patients to seek a formal evaluation is important both for diagnostic workup purposes and for finding appropriate interventions and treatment for these persons, who are not likely in the early stages of a neurodegenerative disease. The current study investigated specific lifestyle variables, such as sleep and stress, to characterize those patients that are unlikely to deteriorate cognitively.
Methods: Two hundred nine patients (mean age 58 years) from a university hospital memory disorders clinic were included.
Results: Sleep problems and having much to do distinguished those with subjective, but not objective, memory complaints and non-pathological biomarkers for Alzheimer's disease.
Conclusions: Lifestyle factors including sleep and stress are useful in characterizing subjective memory complaints from objective problems. Inclusion of these variables could potentially improve health care utilization efficiency and guide interventions.
abstract_id: PUBMED:30714214
Prevalence and determinants of subjective cognitive decline in a representative Greek elderly population. Objectives: We studied the prevalence of subjective cognitive decline (SCD) and its determinants in a sample of 1456 cognitively normal Greek adults ≥65 years old.
Methods/design: Subjects were evaluated by a multidisciplinary team on their neurological, medical, neuropsychological, and lifestyle profile to reach consensus diagnoses. We investigated various types of SCD, including single-question, general memory decline, specific subjective memory decline based on a list of questions and three types of subjective naming, orientation, and calculation decline.
Results: In a single general question about memory decline, 28.0% responded positively. The percentage of our sample that reported at least one complaint related to subjective memory decline was 76.6%. Naming difficulties were also fairly common (26.0%), while specific deficits in orientation (5.4%) and calculations/currency handling (2.6%) were rare. The majority (84.2%) of the population reported subjective deficits in at least one cognitive domain. Genetic predisposition to dementia increased the odds for general memory decline by more than 1.7 times. For each one-unit reduction in the neuropsychological composite score (a mean of memory, executive, language, visuospatial, and attention-speed composite scores), the odds for decline in orientation increased by 40.3%. Depression/anxiety and increased cerebrovascular risk were risk factors for almost all SCD types.
Conclusions: SCD regarding memory is more frequent than non-memory decline in the cognitively normal Greek elderly population. Genetic predisposition to dementia, lower cognitive performance, affective symptoms, and increased cerebrovascular risk are associated with prevalent SCD. Further prospective research is needed to improve understanding of the evolution of SCD over time.
abstract_id: PUBMED:25257154
Non-adherence in seniors with dementia - a serious problem of routine clinical practice. Background: Non-adherence to treatment in seniors with dementia is a frequent and potentially dangerous phenomenon in routine clinical practice which might lead to the inappropriate treatment of a patient, including the risk of intoxication. There might be different causes of non-adherence in patients with dementia: memory impairment, sensory disturbances, limitations in mobility, economical reasons limiting access to health care and medication. Non-adherence leads to serious clinical consequences as well as being a challenge for public health.
Aim: to estimate prevalence of non-adherence in seniors with dementia and to study correlation between cognitive decline and non-adherence.
Subjects And Methods: Prospective study, analyzing medical records of seniors with dementia admitted to the inpatient psychogeriatric ward in the Kromeriz mental hospital from January 2010 to January 2011. Cognitive decline measured by MMSE, prevalence of Non-adherence to treatment and reasons for patient Non-adherence were studied.
Results: Non-adherence to any treatment was detected in 31.3% of seniors; memory impairment was the most common cause of non-adherence to treatment.
Conclusion: In conclusion, non-adherence to treatment in the studied group of seniors with dementia correlates with the severity of cognitive impairment - a higher cognitive decline correlates with a higher risk of non-adherence to treatment.
abstract_id: PUBMED:15033228
The Test of Memory Malingering (TOMM): normative data from cognitively intact, cognitively impaired, and elderly patients with dementia. This research adds to the psychometric validation of the Test of Memory Malingering (TOMM) by providing data for samples of elderly patients who are cognitively intact, cognitively impaired (non-dementia), and with dementia. Subjects were 78 individuals referred for evaluation of memory complaints. Significant group differences emerged between the dementia group and the two other groups (normals and cognitively impaired), although the latter two did not differ from each other. One hundred percent of normals and 92.7% of the cognitively impaired group made fewer than five errors (the suggested cut-off) on Trial 2 or the Retention trial of the TOMM, yielding an overall correct classification rate of 94.7%. However, the rate of misclassification for persons with dementia was high whether using a cut-point score of five, eight, or ten errors. This investigation extends the validity and clinical utility of this instrument. Results suggest that the TOMM is an useful index for detecting the malingering of memory deficits, even in patients with cognitive impairment, but only when dementia can be ruled out.
abstract_id: PUBMED:27802231
Subjective Memory Impairment and Gait Variability in Cognitively Healthy Individuals: Results from a Cross-Sectional Pilot Study. Background: Increased stride time variability has been associated with memory impairment in mild cognitive impairment. Subjective memory impairment (SMI) is considered the earliest clinical stage of Alzheimer's disease (AD). The association between increased stride time variability and SMI has not been reported.
Objective: This study aims to examine the association of stride time variability while performing single and dual tasking with SMI in cognitively healthy individuals (CHI).
Methods: A total of 126 CHI (15 without SMI, 69 with SMI expressed by participants, 10 with SMI expressed by participant's relative, and 32 with SMI expressed by both participants and their relatives) were included in this cross-sectional study. The coefficient of variation (CoV) of stride time and walking speed were recorded under usual condition and while counting backwards. Age, gender, body mass index, number of drugs taken daily, use of psychoactive drugs, fear of falling, history of previous falls, and walking speed were used as covariates.
Results: The multiple linear regression models showed that greater CoV of stride time while counting backwards, but not while single tasking, was associated with a participant's relative SMI (p = 0.038).
Conclusion: This study found a specific association between SMI expressed by a participant's relative and a greater CoV of stride time (i.e., worse performance) while dual tasking, suggesting that the association between gait variability and memory may be present in the earliest stages of memory impairment. Thus, gait variability under dual-task in individuals with SMI expressed by their relatives can be a potential biomarker of AD.
abstract_id: PUBMED:31409180
Subjective Memory Complaints in White and African American Participants. Objective: Subjective memory complaints (SMCs) are associated with mild cognitive impairment and dementia but are understudied in African Americans (AAs). We compared SMC endorsement in white and AA participants and evaluated predictors of diagnostic progression.
Methods: Initial visit variables, including SMC and memory performance, were compared within a cognitively normal race-matched sample of white and AA participants (Ntotal = 912; 456each race) to assess the presence and predictors of SMC, the predictors of future diagnostic progression, and the change in memory performance over time.
Results: More white (32.9%) than AA (24.3%) participants reported SMC (P < .01, ϕ = -.10). Subjective memory complaint was predicted by memory performance (B = -0.03, standard error [SE] = 0.013, odds ratio [OR] = .968, P < .05) and race (B = -0.99, SE = 0.080, OR = .373, P < .001). Subjective memory complaints and memory performance were associated with progression, χ2 (3, n = 912) = 102.37, P < .001. African American race (-2.05 ± 0.24 SE) and SMC (-0.45 ± 0.21 SE) were associated with worse memory performance at baseline and over time, χ2(3) = 13.54, P < .01.
Conclusions: In contrast to previous research, our study found that SMC is associated with diagnostic progression and objective memory declines in both white and AA participants.
Answer: Yes, cognitively intact seniors with subjective memory loss (SML) are more likely to develop dementia. Studies have shown that subjective memory complaints (SMCs) can predict the development of dementia in older individuals with normal cognition. For instance, a secondary analysis of the Manitoba Study of Health and Aging found that after adjusting for age, gender, and depressive symptoms, SML predicted dementia, although the predictive value was diminished after adjusting for baseline cognitive scores (PUBMED:12221654). Similarly, a 24-month prospective cohort study in a Japanese community demonstrated that SMCs were associated with a significantly higher risk of dementia in cognitively intact participants, but not in those with cognitive impairment (PUBMED:28216174).
Furthermore, research has indicated that SMCs are related to diurnal measures of cortisol in cognitively intact elderly, suggesting that hypothalamic-pituitary-adrenal axis dysfunction may contribute to early neuropathologic changes in older adults who complain of memory decline (PUBMED:23567387). However, the presence of SMCs was not found to be associated with the apolipoprotein E genotype, a genetic risk factor for Alzheimer's disease, in cognitively intact elderly, indicating that memory complaints may confer risk for future dementia through pathways independent of this genotype (PUBMED:15526312).
Lifestyle factors, such as sleep and stress, have also been identified as important contributors to SMCs among patients without objective memory impairment or positive neurochemical biomarkers for Alzheimer's disease (PUBMED:30631336). Additionally, a study on the prevalence and determinants of SCD in a Greek elderly population found that genetic predisposition to dementia, lower cognitive performance, affective symptoms, and increased cerebrovascular risk are associated with prevalent SCD (PUBMED:30714214).
In summary, while not all cognitively intact seniors with SML will develop dementia, there is evidence to suggest that SML is a risk factor for future cognitive decline and dementia, and such individuals should be monitored closely (PUBMED:12221654; PUBMED:28216174). |
Instruction: Participatory ergonomics as a return-to-work intervention: a future challenge?
Abstracts:
abstract_id: PUBMED:26154230
Participatory ergonomics simulation of hospital work systems: The influence of simulation media on simulation outcome. Current application of work system simulation in participatory ergonomics (PE) design includes a variety of different simulation media. However, the actual influence of the media attributes on the simulation outcome has received less attention. This study investigates two simulation media: full-scale mock-ups and table-top models. The aim is to compare, how the media attributes of fidelity and affordance influence the ergonomics identification and evaluation in PE design of hospital work systems. The results illustrate, how the full-scale mock-ups' high fidelity of room layout and affordance of tool operation support ergonomics identification and evaluation related to the work system entities space and technologies & tools. The table-top models' high fidelity of function relations and affordance of a helicopter view support ergonomics identification and evaluation related to the entity organization. Furthermore, the study addresses the form of the identified and evaluated conditions, being either identified challenges or tangible design criteria.
abstract_id: PUBMED:35509303
Does participatory ergonomics reduce musculoskeletal pain in sonographers? A mixed methods study. Introduction: Sonographers in the Western New South Wales Local Health District (WNSWLHD) reported a musculoskeletal pain prevalence rate of 95%. Participatory ergonomics, where workers are consulted about improving work conditions, was utilised to identify work-related musculoskeletal disorder (WMSD) risks and potential solutions. The aim of this study was to compare the prevalence of WMSD in a cohort of sonographers before and after implementation of ergonomic changes that were driven by recommendations from a participatory ergonomics approach.
Methods: This observational mixed methods study analysed the impact of participatory ergonomic-driven interventions on changes on musculoskeletal pain in a cohort of sonographers employed within the WNSWLHD. A retrospective analysis of 10 sonographer WMSD pain surveys over five sites was completed, along with semi-structured interviews regarding which interventions were perceived as useful, which interventions were not implemented and any barriers to implementation.
Results: Installation of patient monitors, use of ergonomic scanning techniques and job rotation were perceived as responsible for decreased musculoskeletal pain. Taking lunch breaks and microbreaks, use of antifatigue mats and having two sonographers perform mobile exams were not fully implemented. No interventions were perceived as responsible for increased pain.
Conclusion: This small study provides preliminary evidence that a participatory ergonomics approach facilitated identification of occupation and site-specific risks for WMSD in the WNSWLHD, allowing implementation of ergonomic changes to be tailored to the workplace, resulting in a safer work environment for sonographers.
abstract_id: PUBMED:32343204
Participatory ergonomics for the reduction of musculoskeletal exposure of maintenance workers. Exposure to musculoskeletal disorders (MSDs) is a prevalent risk among those working in the maintenance of machinery and equipment for industry. Participatory ergonomics (PE) in the workplace embodies a solid strategy for the implementation of MSD prevention programs. This practical case describes a PE project implemented to improve MSD prevention strategies for the safety of maintenance workers. Experienced workers and maintenance workers employed in an Italian company for the industrial processing of wool have been actively involved in the risk assessment, in the proposal of improvement interventions and in the proposal of new preventive strategies. Ergonomic training and guidance helped the workers to have a proactive role in the prevention process. PE can help in the preventive management of critical activities of maintenance, through the empowerment of workers, the identification of targeted and feasible solutions and by using ergonomics as a basis for improving health and safety at work.
abstract_id: PUBMED:12929147
Participatory ergonomics as a return-to-work intervention: a future challenge? Background: Participatory ergonomics (PE) are often applied for prevention of low back pain (LBP). In this pilot-study, a PE-program is applied to the disability management of workers sick listed due to LBP.
Methods: The process, implementation, satisfaction, and barriers for implementation concerning the PE-program were analyzed quantitatively and qualitatively for 35 workers sick listed 2-6 weeks due to LBP and their ergonomists.
Results: Two-hundred-and-seventy ergonomic solutions were proposed to the employer. They were targeted more at work design and organization of work (58.9%) than at workplace and equipment design (38.9%). They were planned mostly on a short-term basis (74.8%). Almost half (48.9%) of the solutions for work adjustment were completely or partially implemented within 3 months after the first day of absenteeism. Most workers were satisfied about the PE-program (median score 7.8 on a 10-point scale) and reported a stimulating effect on return-to-work (66.7%). Main obstacles to implementation were technical or organizational difficulties (50.0%) and physical disabilities of the worker (44.8%).
Conclusions: This study suggests that compliance, acceptance, and satisfaction related to the PE-program were good for all participants. Almost half of the proposed solutions were implemented.
abstract_id: PUBMED:29409647
Participatory ergonomics: Evidence and implementation lessons. Participatory ergonomics programs have been proposed as the most effective means of eliminating, or redesigning, manual tasks with the aim of reducing the incidence of occupational musculoskeletal disorders. This review assesses the evidentiary basis for this claim; describes the range of approaches which have been taken under the banner of participatory ergonomics in diverse industries; and collates the lessons learned about the implementation of such programs.
abstract_id: PUBMED:34511474
Organisational and relational factors that influence return to work and job retention: The contribution of activity ergonomics. Background: Work is a determinant of employee health, and the same conditions that contribute to an illness do not favour return to work; consequently, they hinder job retention, other employees can become ill and new leaves are generated.
Objective: To analyse the nursing technicians work in intensive and semi-intensive care units (ICUs and SICUs) and discuss the influence of organisational and relational factors on return to work and job retention. This study also discusses the contributions of activity ergonomics to these processes.
Method: Qualitative case study based on ergonomic work analysis (EWA). Data were collected using documentary analyses, and global, systematic, and participant observations involving nursing technicians working in ICUs and SICUs.
Results: Task planning and the staff size adjustment to respond to the work demands of these units were ineffective in real-world situations and were aggravated by cases of absenteeism, medical leave, and employees returning to work.
Conclusions: Work structure limits return to work and job retention. An EWA based on the activities developed by professionals is a valid tool for understanding working processes by applying transforming actions to real-world work situations.
abstract_id: PUBMED:33492263
Transfer of ergonomics knowledge from participatory simulation events into hospital design projects. Background: Participatory simulation (PS) is a method that can be used to integrate ergonomics and safety into workplace design projects. Previous studies have mainly focused on tools and methods for the simulation activities. The subsequent process of transferring and integrating the simulation outcomes into the design of workplaces is poorly understood.
Objective: This study sets out to study the role of actors and objects in the transfer of ergonomics knowledge generated in PS events and in the integration of this knowledge into a design project. The study identifies factors that influence what part of the simulation outcomes are integrated.
Methods: The empirical context of the study was six PS events that were part of a hospital design project. The events were investigated based on knowledge transfer theory, observations, interviews and document studies.
Results: Actors and objects with abilities of transferring ergonomics knowledge from the PS events to the hospital design project were identified. The study indicated that persons producing the objects functioned as a filter, meaning that not all ergonomics knowledge was transferred from the PS events. The main influencing factors on the integration were: predetermined building dimensions and room interdependency.
Conclusions: Four recommendations were proposed for ergonomists and safety professionals when planning PS events.
abstract_id: PUBMED:27539352
Functions of participatory ergonomics programs in reducing work-related musculoskeletal disorders Work-related musculoskeletal disorders (MSDs) are most commonly seen in all the occupational non-fatal injuries and illnesses for workers, especially those who are involved in labor-intensive industries. Participatory ergonomics is frequently used to prevent musculoskeletal disorders. This paper gives an overview of a historical perspective on the use of participatory ergonomics approach in reducing the health effects of labor-intensive industries. Progress, barriers and facilitators on the organization, implementation and evaluation of participatory ergonomics programs are studied. Participatory ergonomics seems a successful method to develop, prioritize measures to prevent MSDs. Participatory ergonomics can help industries reduce musculoskeletal injuries and disorders, improve workplace condition and promote health conditions of the workers.
abstract_id: PUBMED:34898383
Design, implementation and evaluation of a participatory ergonomics program among home-based Mapuche weavers. The Mapuche comprise 80% of the indigenous population of Chile. The Araucanía has the highest concentration of Mapuche peoples and is also the poorest region of the country. The region's proximity to a large tourist sector provides opportunities for weavers to work from home and sell their products as a primary or secondary income source. The adverse health effects related to craft production and home-based work are significant and not well described in the literature. Participatory ergonomics (PE), an approach that emphasizes the input and participation of individuals directly impacted by the work being studied, is a strong fit for this population. This article describes a PE project among 33 Mapuche weavers that characterizes the hazards, risks and perceived adverse health effects associated with home-based weaving and the identification, implementation and evaluation of three ergonomic interventions. This project can inform the development of sustainable ergonomics health programs for home-based artisans.
abstract_id: PUBMED:29420140
Participatory approaches to workplace safety management: bridging the gap between behavioral safety and participatory ergonomics. Many researchers and practitioners argue the importance of end-user involvement in workplace safety management, but the research literature and practices remain fractured across orientations. The primary aim of this article is to bridge the gap between two major participatory safety management approaches: behavioral safety and participatory ergonomics. First, an overview and brief history of participative management is presented to provide context for its application to workplace safety. Next, behavioral safety and participatory ergonomics are separately examined in terms of their origins and core elements. Finally, based on this examination, unifying elements between behavioral safety and participatory ergonomics will be presented to provide a comprehensive account of participatory safety management.
Answer: Participatory ergonomics (PE) has been explored as a potential intervention for facilitating return to work, particularly for workers who have been sick-listed due to musculoskeletal disorders such as low back pain (LBP). In a pilot study, a PE program was applied to the disability management of workers sick-listed due to LBP, and the process, implementation, satisfaction, and barriers to implementation were analyzed. The study found that a significant number of ergonomic solutions were proposed to the employer, with a focus on work design and organization. Almost half of these solutions were implemented within three months after the first day of absenteeism. The workers reported satisfaction with the PE program and a stimulating effect on return-to-work, although obstacles such as technical or organizational difficulties and physical disabilities of the worker were noted (PUBMED:12929147).
The evidence suggests that PE can be an effective strategy for the implementation of musculoskeletal disorder prevention programs, and it has been used to improve MSD prevention strategies for the safety of maintenance workers (PUBMED:32343204). Moreover, PE has been shown to facilitate the identification of occupation and site-specific risks for work-related musculoskeletal disorders (WMSDs), allowing for the implementation of tailored ergonomic changes that result in safer work environments (PUBMED:35509303).
However, the transfer of ergonomics knowledge from participatory simulation events into actual design projects can be challenging. Factors such as predetermined building dimensions and room interdependency can influence what part of the simulation outcomes are integrated into the design (PUBMED:33492263).
In conclusion, while PE shows promise as a return-to-work intervention, it also presents challenges that need to be addressed for successful implementation. These include overcoming technical and organizational barriers, ensuring the transfer of knowledge from simulations to actual workplace design, and addressing the physical limitations of workers. Future efforts in PE as a return-to-work intervention will likely need to focus on these areas to enhance its effectiveness and overcome the challenges identified. |
Instruction: Partial Nephrectomy for Small Renal Masses: Do Teaching and Nonteaching Institutions Adhere to Guidelines Equally?
Abstracts:
abstract_id: PUBMED:27025539
Partial Nephrectomy for Small Renal Masses: Do Teaching and Nonteaching Institutions Adhere to Guidelines Equally? Introduction: The American Urological Association (AUA) guidelines recommend partial nephrectomy (PN) as the gold standard for treatment of small renal masses (SRMs). This study examines the change in utilization of partial and radical nephrectomies at teaching and nonteaching institutions from 2003 to 2012.
Materials And Methods: The data sample for this study came from the Healthcare Cost and Utilization Project Nationwide Inpatient Sample from 2003 to 2012. International Classification of Diseases, Ninth Revision and Clinical Modification codes were used to identify patients undergoing PN and radical nephrectomy for renal masses limited to the renal parenchyma. Teaching hospitals were defined, but not limited to any institution with an American Medical Association-approved residency program. Linear regression, bivariate, multivariate, and odds ratio analysis were used to demonstrate statistical significance.
Results: 39,685 patients were identified in teaching hospitals, and 22,239 were identified in nonteaching hospitals. Prior to the 2009 AUA guidelines, cumulative rates of PN were 33% vs 20% in teaching vs nonteaching hospitals (p < 0.0001) compared with postguideline rates of 48% vs 33% in teaching vs nonteaching hospitals (p < 0.0001).
Conclusions: During the 10-year study period, the use of PN to treat SRMs has significantly increased in both teaching hospitals and in nonacademic centers; however, these changes are occurring at a slower rate in nonteaching hospitals.
abstract_id: PUBMED:28270950
Trends of partial and radical nephrectomy in managing small renal masses. Objective: Use of partial nephrectomy (PN) for renal tumors appears to have relatively lower incidence rates in Jordan. We sought to characterize its trend at King Hussein Cancer Center for the last 10 years.
Material And Methods: A retrospective review of our renal cell cancer data was performed. We identified 169 patients who had undergone surgery for renal tumors measuring ≤7 cm between 2005 and 2015. We characterized tumor size, pathology, type of surgery and clinical outcomes. Factors associated with the use of PN were evaluated using univariable and multivariable logistic regression models.
Results: Of the 169 patients, 34 (20%) and 135 (80%) had undergone partial and radical nephrectomy (RN) respectively for tumors ≤7 cm in diameter. Total number of 48 patients with tumors of ≤4 cm in diameter had undergone either PN (n=19; 40%) or RN (n=29; 60%). The frequency of PN procedures steadily increased over the years from 6% in 2005-2008, to 32% in 2013-2015, contrary to RN which was less frequently applied 94% in 2005-2008, and 68% in 2013-2015. In multivariable analysis, delayed surgery (p=0.01) and smaller tumor size (p=0.0005) were significant independent predictors of PN. During follow-up period, incidence of metastasis was lower in PN versus RN (13% and 32%, respectively, p=0.043). Local recurrence rates were not significantly different between PN (6.9%) and RN (7.2%) (p=0.99). The mean tumor sizes for patients who had undergone PN and RN were 4.1 and 5.5 cm respectively, (p<0.0001). The mean follow-up period for PN was 20 months, and for RN 33 months, (p=0.0225).
Conclusion: Partial nephrectomy for small renal tumors is relatively less frequently applied in Jordan, however an increase in its use has been observed over the years. Our data showed lower rates of distant metastasis and similar rates of local recurrence in favor of PN.
abstract_id: PUBMED:28182157
Comparison of the Clinical Efficacy of Retroperitoneal Laparoscopic Partial Nephrectomy and Radical Nephrectomy for Treating Small Renal Cell Carcinoma: Case Report and Literature Review. Background: Renal cell carcinoma (RCC) is a common malignancy of the urinary system with high rates of morbidity and mortality.
Objectives: This study aimed to investigate and analyze the clinical efficacy of retroperitoneal laparoscopic partial nephrectomy and laparoscopic radical nephrectomy for the treatment of small RCC.
Methods: In this retrospective study of 45 patients with small RCC, the patients were divided into two treatment groups: Group A (retroperitoneal laparoscopic partial nephrectomy, 25 cases) and Group B (retroperitoneal laparoscopic radical nephrectomy, 20 cases).
Results: There were no statistically significant differences in the operative time, amount of intraoperative blood loss, length of hospital stay, preoperative creatinine level, postoperative creatinine level after 24 hours, and survival rate after 1, 2, and 3 years between the two groups (P > 0.05).
Conclusions: There were no significant differences in the survival rates and short-term postoperative complications between the laparoscopic partial nephrectomy group and the laparoscopic radical nephrectomy group for small RCC, but the former was slightly more effective.
abstract_id: PUBMED:30840387
Total or partial nephrectomy for renal tumors? Total or partial nephrectomy for renal tumors? Due to the raising incidence of small renal masses in the past decades and long term consequences of enlarged nephrectomy on renal function, partial nephrectomy has been recommended as reference treatment for renal tumors less than 4 cm. Partial nephrectomy has shown to allow equivalent oncological control compared to enlarged nephrectomy and allows preservation of the patient's nephronic capital. However, this surgery is technically demanding and requires experience and rapidity to limit renal ischemia.
abstract_id: PUBMED:24917728
Comparison of the loss of renal function after cold ischemia open partial nephrectomy, warm ischemia laparoscopic partial nephrectomy and laparoscopic partial nephrectomy using microwave coagulation. Purpose: Nephron sparing surgery is an effective surgical option in patients with renal cell carcinoma. Laparoscopic partial nephrectomy involves clamping and unclamping techniques of the renal vasculature. This study compared the postoperative renal function of partial nephrectomy using an estimation of the glomerular filtration rate (eGFR) for a Japanese population in 3 procedures; open partial nephrectomy in cold ischemia (OPN), laparoscopic partial nephrectomy in warm ischemia (LPN), and microwave coagulation using laparoscopic partial nephrectomy without ischemia (MLPN).
Materials And Methods: A total of 57 patients underwent partial nephrectomy in Yokohama City University Hospital from July 2002 to July 2008. 18 of these patients underwent OPN, 17 patients received MLPN, and 22 patients had LPN. The renal function evaluation included eGFR, as recommended by The Japanese Society of Nephrology.
Results: There was no significant difference between the 3 groups in the reduction of eGFR. eGFR loss in the OPN group was significantly higher in patients that experienced over 20 minutes of ischemia time. eGFR loss in LPN group was significantly higher in patients that experienced over 30 minutes of ischemia time.
Conclusion: This study showed that all 3 procedures for small renal tumor resection were safe and effective for preserving postoperative renal function.
abstract_id: PUBMED:27174506
Open partial nephrectomy in renal cell cancer - Essential or obsolete? Since the first partial nephrectomy was first conducted 131 years ago, the procedure has evolved into the gold standard treatment for small renal masses. Over the past decade, with the introduction of minimally invasive surgery, open partial nephrectomy still retains a valuable role in the treatment of complex tumours in challenging clinical situations (e.g. hereditary renal cancer or single kidneys), and enables surgeons to push the boundaries of nephron-sparing surgery. In this article, we consider the origin of the procedure and how it has evolved over the past century, the surgical techniques involved, and the oncological and functional outcomes.
abstract_id: PUBMED:24917730
Prognostic Factors Influencing Postoperative Development of Chronic Kidney Disease in Patients with Small Renal Tumors who Underwent Partial Nephrectomy. Background: The objective of this study was to determine factors associated with the postoperative development of chronic kidney disease (CKD) following partial nephrectomy.
Patients And Methods: This study included 109 patients with normal renal function treated with partial nephrectomy for small renal tumors. Of these, 73 and 36 patients underwent open partial nephrectomy (OPN) and laparoscopic partial nephrectomy (LPN), respectively.
Results: Among several parameters, there was a significant difference only in the ischemia time between the OPN and LPN groups. During the median observation period of 53.4 months, CKD, defined as estimated glomerular filtration rate (eGFR) less than 60 ml/min/1.73 m(2), developed in 29 (39.7%) and 14 (38.9%) patients in the OPN and LPN groups, respectively. Univariate analysis identified age at surgery, diabetes and preoperative eGFR as significant predictors of the postoperative development of CKD; however, only age at surgery and preoperative eGFR appeared to be independently related to CKD-free survival. In fact, there was a significant difference in the CKD-free survival between patients without any independent risk factor and those with at least one of these independent risk factors.
Conclusions: Careful management following partial nephrectomy is necessary for elderly patients and/or those with impaired renal function, even mild, before surgery.
abstract_id: PUBMED:26884989
Outcome of radiofrequency ablation over partial nephrectomy for small renal mass (<4 cm): a systematic review and meta-analysis. Objective: A meta-analysis was undertaken to provide evidence-based clinical trials comparing radiofrequency ablation with partial nephrectomy for small renal mass.
Methods: We searched through the major medical databases such as Pub Med, EMBASE, Medline, Science Citation Index, Web of Science and CNKI (Chinese National Knowledge Infrastructure Database) and Wangfang (Database of Chinese Ministry of Science & Technology) for all published studies without any limit on language from May 2007 until May 2015. The following search terms wereused: partial nephrectomy, radiofrequency ablation, renal cell carcinoma, small renal tumor or mass. Furthermore, additional related studies were manually searched in the reference lists of all published reviews and retrieved articles.
Results: We found there were no statistical differences between groups in 5y-DFS, recurrence rates, complications, but a less percentage decease rate of GFR than PN, and RFA may be a better application for SRM (<4 cm).
abstract_id: PUBMED:23321635
Kidney function after partial nephrectomy: current thinking. Purpose Of Review: With clinical guidelines recommending partial nephrectomy for small renal masses, it is essential to understand the benefits of partial nephrectomy in regards to renal function. Our objective was to review current evidence and highlight emerging issues for partial nephrectomy and renal function.
Recent Findings: A recent clinical trial of partial and radical nephrectomy found minimal differences in survival or adverse renal sequelae. However, most observational studies and systematic reviews suggest that partial nephrectomy decreases the risks of adverse renal function, in particular, new-onset severe chronic kidney disease, and improves overall survival. Key features associated with long-term renal function include treatment modality (observation, ablation, surgery), ischemia type and duration, amount of healthy renal preservation, and baseline renal function.
Summary: Partial nephrectomy should remain the standard of care for small renal masses, if the renal tumor size and complexity are amenable to such a surgical approach. Efforts to minimize ischemia time are important for long-term renal functional recovery, and hypothermia should be considered if longer warm ischemia times are anticipated (i.e. >25 min). Although the preliminary results of zero ischemia partial nephrectomy are promising, further research is needed to determine if these surgical techniques are safely adaptable in the broader urologic community.
abstract_id: PUBMED:24118633
Partial nephrectomy versus radical nephrectomy for non-metastatic pathological T3a renal cell carcinoma: a multi-institutional comparative analysis. Objectives: To compare the recurrence-free survival of partial nephrectomy and radical nephrectomy in patients with non-metastatic pathological T3a renal cell carcinoma.
Methods: We reviewed the records of 3567 patients who had undergone a nephrectomy for renal cell carcinoma at five institutions in Korea from January 2000 to December 2010. The clinical data of 45 patients with pathological T3a renal cell carcinoma in the partial nephrectomy group were compared with 298 patients with pathological T3a renal cell carcinoma in the radical nephrectomy group. The effects of surgical methods on recurrence-free survival were assessed by a multivariate Cox proportional hazard analysis. All comparisons were repeated in subgroup analysis on 63 clinical T1a patients with tumors ≤4 cm.
Results: During a median 43-month follow-up period, disease recurrence occurred in two patients (4.4%) in the partial nephrectomy group, and 94 patients (31.5%) in the radical nephrectomy group. The results from a multivariate model showed that radical nephrectomy was a significant predictor of recurrence. However, in subgroup analysis that included 63 clinical T1a pathological T3a patients, the recurrence-free survival rates were not significantly different between the two cohorts. The renal function was significantly better preserved in the partial nephrectomy cohort than in the radical nephrectomy cohort.
Conclusions: Partial nephrectomy provides similar recurrence-free survival outcomes compared with radical nephrectomy in patients with clinical T1a pathological T3a renal cell carcinoma. However, there seems to be a higher risk of recurrence for large pathological T3a tumors treated by radical nephrectomy compared with small tumors treated by partial nephrectomy. Thus, large tumors with the same pathological T3a renal cell carcinoma grade could have hidden aggressive features.
Answer: The study examining the adherence to guidelines for partial nephrectomy (PN) for small renal masses (SRMs) at teaching and nonteaching institutions found that both types of institutions have increased their use of PN over the 10-year study period from 2003 to 2012. However, the increase in PN utilization was slower in nonteaching hospitals compared to teaching hospitals. Prior to the 2009 American Urological Association (AUA) guidelines, cumulative rates of PN were 33% in teaching hospitals versus 20% in nonteaching hospitals. After the guidelines, the rates increased to 48% in teaching hospitals and 33% in nonteaching hospitals. Despite the significant increase in both settings, the gap in adherence to guidelines between teaching and nonteaching hospitals persisted (PUBMED:27025539). |
Instruction: Are remnant-like particles independent predictors of coronary heart disease incidence?
Abstracts:
abstract_id: PUBMED:15947240
Are remnant-like particles independent predictors of coronary heart disease incidence? The Honolulu Heart study. Background: Remnant-like particles have been proposed as a new risk factor for coronary heart disease (CHD). This is the first long-term prospective investigation of the relationship between remnant-like particles and a cardiovascular disease outcome in healthy men.
Methods And Results: A cohort of 1156 Japanese-American men aged 60 to 82 from the Honolulu Heart Program was followed for 17 years. During that period 164 incident cases of CHD were identified. In multivariate Cox regression analyses, baseline remnant-like particle cholesterol (RLP-C) and triglyceride (RLP-TG) levels were significantly related to CHD incidence independently of nonlipid cardiovascular risk factors and of total cholesterol or high-density and low-density lipoprotein cholesterol levels. Total triglyceride levels were an independent predictor of CHD incidence. However, in models including RLP and triglyceride level simultaneously, neither variable was significant when adjusted for the other. This finding can be attributed to the strong correlation between RLP-C and RLP-TG levels and total triglycerides. When individuals with normal triglyceride levels (n=894) were separated from those with elevated triglycerides (n=260), the association between RLPs and CHD relative risk was only significant for the group with elevated triglyceride levels.
Conclusions: RLP levels predicted CHD incidence independently of nonlipid risk factors and of total cholesterol or high-density and low-density lipoprotein cholesterol levels. However, RLP levels did not provide additional information about CHD incidence over and above total triglyceride levels. Therefore, this study does not support the need for testing of remnants in men if measures of fasting triglycerides are available.
abstract_id: PUBMED:16510044
Triglycerides and remnant particles as risk factors for coronary artery disease. Coronary artery disease (CAD) is the largest cause of morbidity and mortality in the world. A relationship between CAD and elevated levels of low-density lipoprotein cholesterol has been established. However, risk assessment limited to low-density lipoprotein fails to identify a significant portion of patients at risk for CAD. Remnant lipoproteins, derived from very low-density lipoprotein and chylomicrons, have been considered atherogenic. Recently, a simple and reliable immunoaffinity separation method for the isolation of remnant-like particles (RLP) has been developed. It has been shown that RLP cholesterol levels are significantly correlated with CAD, and thus cellular mechanisms have been determined by which RLP cholesterol causes progression of atherosclerosis. Measurement of RLP cholesterol is useful for the assessment of risk and the evaluation of therapy in patients at risk for CAD.
abstract_id: PUBMED:15536021
Association between LDL particle size and postprandial increase of remnant-like particles in Japanese type 2 diabetic patients. Small, dense LDL, as well as chylomicron- and VLDL-remnant lipoproteins, are known to be important risk factors for coronary heart disease in patients with type 2 diabetes mellitus. The aim of this study was to clarify the relationship between LDL particle size and postprandial remnant lipoprotein levels in Japanese type 2 diabetic patients. Forty-six patients with type 2 diabetes mellitus were divided into tertiles according to LDL particle size. The peak LDL particle diameter was <26.30 nm in tertile 1, 26.30-26.85 nm in tertile 2, and >26.85 nm in tertile 3. After a test meal, tertile 1 had a significantly greater increment of triglycerides (TG), remnant-like particle (RLP)-TG, and RLP-cholesterol (RLP-C) than tertiles 2 and 3. There was a negative correlation between LDL particle size and the postprandial increases of TG, RLP-TG, and RLP-C. These results indicate that smaller sized LDL particles may be a marker of fasting state for an exaggerated postprandial increase of remnant lipoproteins as well as an increase of TG-rich lipoproteins.
abstract_id: PUBMED:17306468
Remnant like particles may induce atherosclerosis via accelerating endothelial progenitor cells senescence. Remnant like particles (RLPs) are closely associated with coronary heart disease, whereas the underlying mechanisms are complex and have not been fully elucidated. Studies show that maintenance of endothelial cells layer is essential for normal function of vessel. Endothelial progenitor cells (EPCs) were shown to incorporate into sites of neovascularization and home to sites of endothelial denudation, thus provide an endogenous repair mechanism. Risk factors of coronary heart disease can impair EPCs repairing function by inducing EPCs senescence. EPCs senescence is associated with telomerase inactivation, which is regulated via phosphatidylinositol-3-kinase/Akt kinase (PI3K/Akt) signaling pathway. RLPs are triglyceride rich lipoproteins reflecting chylomicron remnants and very-low-density lipoprotein remnants. RLPs can impair endothelial function via inhibiting endothelial NO synthase (eNOS) activity and nitric oxide (NO) production by inducing intracellular oxidant levels. However, there is no research about effect of RLPs on EPCs. Evidence shows that RLPs can induce focal adhesion kinase (FAK) activation in monocytic U937 cells. Therefore, it can be hypothesized that RLPs could inhibit eNOS and telomerase activities, thus induce atherosclerosis by promoting EPCs senescence via FAK and its downstream PI3K/Akt pathway through an oxidative mechanism.
abstract_id: PUBMED:27829582
Postprandial Hyperlipidemia and Remnant Lipoproteins. Fasting hypertriglyceridemia is positively associated with the morbidity of coronary heart disease (CHD), and postprandial (non-fasting) hypertriglyceridemia is also correlated with the risk status for CHD, which is related to the increase in chylomicron (CM) remnant lipoproteins produced from the intestine. CM remnant particles, as well as oxidized low density lipoprotein (LDL) or very low density lipoprotein (VLDL) remnants, are highly atherogenic and act by enhancing systemic inflammation, platelet activation, coagulation, thrombus formation, and macrophage foam cell formation. The cholesterol levels of remnant lipoproteins significantly correlate with small, dense LDL; impaired glucose tolerance (IGT) and CHD prevalence. We have developed an assay of apolipoprotein (apo)B-48 levels to evaluate the accumulation of CM remnants. Fasting apoB-48 levels correlate with the morbidity of postprandial hypertriglyceridemia, obesity, type III hyperlipoproteinemia, the metabolic syndrome, hypothyroidism, chronic kidney disease, and IGT. Fasting apoB-48 levels also correlate with carotid intima-media thickening and CHD prevalence, and a high apoB-48 level is a significant predictor of CHD risk, independent of the fasting TG level. Diet interventions, such as dietary fibers, polyphenols, medium-chain fatty acids, diacylglycerol, and long-chain n-3 polyunsaturated fatty acids (PUFA), ameliorate postprandial hypertriglyceridemia, moreover, drugs for dyslipidemia (n-3 PUFA, statins, fibrates or ezetimibe) and diabetes concerning incretins (dipeptidyl-peptidase IV inhibitor or glucagon like peptide-1 analogue) may improve postprandial hypertriglyceridemia. Since the accumulation of CM remnants correlates to impaired lipid and glucose metabolism and atherosclerotic cardiovascular events, further studies are required to investigate the characteristics, physiological activities, and functions of CM remnants for the development of new interventions to reduce atherogenicity.
abstract_id: PUBMED:15025802
Serum amyloid A, C-reactive protein and remnant-like lipoprotein particle cholesterol in type 2 diabetic patients with coronary heart disease. Background: Serum amyloid A (SAA) and C-reactive protein (CRP) have been suggested to be involved in the process of coronary heart disease (CHD) and to be potential markers and/or predictors of CHD. Remnant-like lipoprotein particles (RLPs), which are regarded as atherogenic remnant lipoprotein, are reported to be increased in type 2 diabetic patients. We assessed the association of CHD with SAA, CRP and RLP-cholesterol in type 2 diabetic patients.
Methods: One hundred and twenty-six diabetic patients without CHD and 41 patients with CHD were recruited from our hospital. Plasma SAA was measured by the latex agglutination nephelometric immunoassay. Plasma high-sensitivity CRP was measured by a latex immunoturbidity method. Plasma RLP-cholesterol was measured by an immunoabsorption enzyme method.
Results: The mean standard deviation values of RLP-cholesterol in patients with and without CHD were 0.22 (0.26) mmol/L and 0.15 (0.10) mmol/L, respectively (P <0.05). Median (interquartile ranges) for SAA in patients with and without CHD were 7.4 (4.2-11.2) mg/L and 3.9 (2.2-5.9) mg/L, respectively (P <0.001). Median (interquartile ranges) for CRP in patients with and without CHD was 1.14 (0.45-2.08) mg/L and 0.43 (0.19-1.25) mg/L, respectively (P <0.001). For all patients, the Spearman rank correlation statistics for RLP-cholesterol compared with SAA and with CRP were 0.213 (P <0.05) and 0.301 (P <0.01), respectively.
Conclusion: These data suggest that SAA, CRP and RLP-cholesterol are increased in type 2 diabetic patients with CHD, and that the inflammatory proteins correlate with remnant lipoprotein.
abstract_id: PUBMED:18582890
Remnant-like particles accelerate endothelial progenitor cells senescence and induce cellular dysfunction via an oxidative mechanism. Remnant-like particles (RLPs) are closely associated with coronary heart disease and can induce endothelial dysfunction through oxidative mechanisms. Many risk factors accelerate the onset of endothelial progenitor cells (EPCs) senescence via increased oxidative stress. In this study, we investigated the effect of RLPs on EPCs senescence and function. RLPs were isolated from postprandial plasma of hypertriglyceridemic patients by use of the immunoaffinity gel mixture of anti-apoA-1 and anti-apoB-100 monoclonal antibodies. Our results show that EPCs became senescent as determined by senescence-associated acidic beta-galactosidase (SA-beta-Gal) staining after ex vivo cultivation without any stimulation. Co-incubation with RLPs accelerated the increase in SA-beta-Gal-positive EPCs. The acceleration of RLPs-induced EPCs senescence occurred dose-dependently with a maximal effect when EPCs were treated with RLPs at 0.10 mg cholesterol/mL (P<0.01). Moreover, RLPs decreased adhesion, migration and proliferation capacities of EPCs as assessed by adherence to fibronectin, modified Boyden chamber technique and MTT assay (P<0.01), respectively. RLPs increased nitrotyrosine staining in EPCs. However, RLPs-induced EPCs senescence and dysfunction were significantly inhibited by pre-treatment of superoxide dismutase (50 U/mL) (P<0.05). Our results provide evidence that RLPs accelerate the onset of EPC senescence via increased oxidative stress, accompanying with the impairment of adhesion, migration and proliferation capacities.
abstract_id: PUBMED:9626027
Remnant-like particle cholesterol levels in patients with dysbetalipoproteinemia or coronary artery disease. Purpose: Several studies have provided support for a proatherogenic role for remnant lipoproteins. Thus, the aim of this study was to compare remnant-like particle (RLP) cholesterol levels in patients with coronary artery disease who were normolipidemic with those in controls of similar age and gender. We also assessed the usefulness of measuring RLP-cholesterol levels in patients with type III dyslipidemia.
Subjects And Methods: Remnant-like particle cholesterol levels were measured in 63 normolipidemic men with coronary artery disease and 23 male controls of similar age as well as in 15 patients with type III dyslipidemia and 103 controls, using an immunoaffinity method.
Results: Remnant-like particle cholesterol levels were significantly increased in men with coronary artery disease compared with controls (7.6 +/- 3.8 mg/dL versus 5.7 +/- 1.9 mg/dL, P < 0.01). In patients with coronary artery disease, RLP-cholesterol levels were correlated with total triglyceride and nonhigh-density-lipoprotein (HDL) cholesterol levels, but not with HDL-cholesterol levels. RLP-cholesterol levels were significantly elevated in patients with type III dyslipidemia (median 119, range 31 to 240 mg/dL) compared with controls (median 5.6, range 2.2 to 10.5 mg/dL, P < 0.001).
Conclusion: Normolipidemic men with coronary artery disease have increased levels of RLP-cholesterol that is not detected with conventional lipid screening. The RLP-cholesterol assay is a simple method for detecting high concentrations of remnant lipoproteins in patients with type III dyslipidemia.
abstract_id: PUBMED:12029982
Remnant like particles as a risk factor for coronary artery disease Plasma triglyceride(TG) has now emerged as an independent risk factor for coronary artery disease(CAD). In contrast to LDL, TG-rich lipoproteins are heterogenous in size and apolipoprotein composition. Remnant is proatherogenic TG-rich lipoproteins, thereby drawing much research interest of how to isolate and quantify remnants. Remnant like particles(RLP) was developed by using immunoaffinity gels coupled to anti-apoA-I and anti-apoB-100 antibodies. RLP has been shown to be positively correlated with coronary vasocontraction to acetylcholine in CAD patients, suggesting that RLP causes endothelial dysfunction. Further study revealed that CAD patient with relatively high RLP-C levels had a significantly poor prognosis as compared to low RLP-C counterpart. Both Framingham heart study and VA-HIT also found RLP-C as a significant risk factor for CAD. In conclusions, RLP-C is a reasonable indicator for proatherogenic remnants, thus considered as a new risk factor for CAD.
abstract_id: PUBMED:15175559
Postprandial increase in plasma concentrations of remnant-like particles: an independent risk factor for restenosis after percutaneous coronary intervention. Postprandial increase in remnant lipoprotein concentrations has been suggested as an important atherogenic factor. However, the influence of these remnants on the development of restenosis after percutaneous coronary intervention (PCI) remains to be examined. The present study was designed to address this point. In 60 consecutive patients with successful PCI, the influences of possible risk factors on the development of restenosis, including remnant-like particles (RLP) cholesterol (RLP-C) and triglyceride (RLP-TG), were examined. While mean concentrations of RLP-C and RLP-TG were normal in fasting state, postprandial change in RLP-C concentrations was a significant and independent risk factor for restenosis after PCI. The calculated cut-off index (COI) for the change was +64%. When the patients were divided into 2 groups according to this COI, minimal lumen diameter (MLD) and reference coronary diameter were comparable before and immediately after PCI between the high- (COI < 64%) and the low- (COI < 64%) responders. However, follow-up coronary angiography 3 to 6 months after PCI demonstrated that MLD, late loss, and loss index were all worse in the high responders compared with the low responders. These results indicate that post-prandial increase in RLP-C concentrations is an independent risk factor for restenosis after successful PCI, even in patients with normal fasting RLP-C levels.
Answer: Yes, remnant-like particles (RLPs) have been identified as independent predictors of coronary heart disease (CHD) incidence. The Honolulu Heart Study followed a cohort of Japanese-American men and found that baseline remnant-like particle cholesterol (RLP-C) and triglyceride (RLP-TG) levels were significantly related to CHD incidence independently of nonlipid cardiovascular risk factors and of total cholesterol or high-density and low-density lipoprotein cholesterol levels (PUBMED:15947240). Additionally, other studies have shown that RLP cholesterol levels are significantly correlated with coronary artery disease (CAD) and that measurement of RLP cholesterol is useful for the assessment of risk and the evaluation of therapy in patients at risk for CAD (PUBMED:16510044). Furthermore, RLPs have been associated with endothelial dysfunction and atherosclerosis, potentially through mechanisms involving oxidative stress and endothelial progenitor cells senescence (PUBMED:17306468, PUBMED:18582890). Postprandial hyperlipidemia and remnant lipoproteins have also been correlated with the risk status for CHD (PUBMED:27829582). In patients with type 2 diabetes and CHD, increased levels of RLP-cholesterol, along with inflammatory proteins such as serum amyloid A and C-reactive protein, have been observed (PUBMED:15025802). Moreover, RLP cholesterol levels were found to be increased in normolipidemic men with coronary artery disease compared with controls, suggesting that RLPs are not detected with conventional lipid screening (PUBMED:9626027). Finally, postprandial increases in RLP concentrations have been identified as an independent risk factor for restenosis after percutaneous coronary intervention (PCI), indicating their role in the progression of CHD (PUBMED:15175559). |
Instruction: Mother to child transmission of diabetes mellitus: does gestational diabetes program Type 2 diabetes in the next generation?
Abstracts:
abstract_id: PUBMED:17054597
Mother to child transmission of diabetes mellitus: does gestational diabetes program Type 2 diabetes in the next generation? Aim: Type 2 diabetes is frequently familial. Hyperglycaemia in pregnancy might act in addition to genetic factors to cause diabetes in the children of mothers with gestational diabetes mellitus (GDM). The first manifestation of this in female offspring is likely to be GDM in their own pregnancies. We compared the incidence of GDM in daughters of diabetic mothers and diabetic fathers to determine if in utero exposure to hyperglycaemia increased the risk of a diabetes-prone phenotype in offspring.
Methods: We analysed the outcome of a GDM screening programme in women with a family history of diabetes in their mother (n = 535), father (n = 566), both parents (n = 77) or neither (n = 4672).
Results: GDM was twice as common in the daughters of diabetic mothers (11%) than diabetic fathers (5%, P = 0.002). Women with two diabetic parents were no more likely to have GDM than women with only a diabetic mother.
Conclusions: Genetic predisposition to GDM should be equally shared by daughters of diabetic mothers and fathers. An excess of maternal transmission of diabetes is consistent with an epigenetic effect of hyperglycaemia in pregnancy acting in addition to genetic factors to produce diabetes in the next generation.
abstract_id: PUBMED:32800764
Perinatal Outcomes in a Longitudinal Birth Cohort of First Nations Mothers With Pregestational Type 2 Diabetes and Their Offspring: The Next Generation Study. Objectives: There is emerging evidence that First Nations women with diabetes in pregnancy and their offspring have poorer health outcomes than non-First Nations women. The aim of this study was to describe the perinatal outcomes of pregnancies complicated by type 2 diabetes.
Methods: The Next Generation longitudinal study is a First Nations birth cohort of children born to mothers diagnosed in childhood with type 2 diabetes. Pregnant women were prospectively enrolled in the birth cohort, and a review of medical records (including stored fetal ultrasound images) was performed to determine perinatal outcomes for 112 child-mother pairs between 2005 and 2015. Maternal demographics, antenatal variables, fetal ultrasound findings, obstetric and delivery information and neonatal birth outcomes were collected and analyzed.
Results: Mothers in our cohort were young and most were overweight at the start of pregnancy. Most had suboptimal glycemic control in the first trimester (median glycated hemoglobin, 9.3%). The cesarean section rate was high at 41%. Over one-half of newborns had macrosomia at birth, and almost 1 in 5 were born with a structural anomaly, mainly renal. Fetal ultrasound significantly underestimated the proportion of infants born with macrosomia (p<0.05) and missed 3 of 7 cardiac defects in this cohort.
Conclusions: High rates of anomalies, macrosomia and cesarean deliveries provide insight into pregnancy management and disease processes for First Nations women with pregestational type 2 diabetes and their offspring, and highlights opportunities for improvement in prenatal care of these women.
abstract_id: PUBMED:27174368
Gestational diabetes mellitus and long-term consequences for mother and offspring: a view from Denmark. Gestational diabetes mellitus (GDM) is defined as glucose intolerance of varying severity and is present in about 2-6% of all pregnancies in Europe, making it one of the most common pregnancy disorders. Aside from the short-term maternal, fetal and neonatal consequences associated with GDM, there are long-term consequences for both mother and child. Although maternal glucose tolerance often normalises shortly after pregnancy, women with GDM have a substantially increased risk of developing type 2 diabetes later in life. Studies have reported that women are more than seven times as likely to develop diabetes after GDM, and that approximately 50% of mothers with GDM will develop diabetes within 10 years, making GDM one of the strongest predictors of type 2 diabetes. In women with previous GDM, development of type 2 diabetes can be prevented or delayed by lifestyle intervention and/or medical treatment. Systematic follow-up programmes would be ideal to prevent progression of GDM to diabetes, but such programmes are unfortunately lacking in the routine clinical set-up in most countries. Studies have found that the risks of obesity, the metabolic syndrome, type 2 diabetes and impaired insulin sensitivity and secretion in offspring of mothers with GDM are two- to eightfold those in offspring of mothers without GDM. The underlying pathogenic mechanisms behind the abnormal metabolic risk profile in offspring are unknown, but epigenetic changes induced by exposure to maternal hyperglycaemia during fetal life are implicated. Animal studies indicate that treatment can prevent long-term metabolic complications in offspring, but this remains to be confirmed in humans. Thus, diabetes begets diabetes and it is likely that GDM plays a significant role in the global diabetes epidemic. This review summarises a presentation given at the 'Gestational diabetes: what's up?' symposium at the 2015 annual meeting of the EASD. It is accompanied by two other reviews on topics from this symposium (by Marja Vääräsmäki, DOI: 10.1007/s00125-016-3976-6 , and by Cuilin Zhang and colleagues, DOI: 10.1007/s00125-016-3979-3 ) and an overview by the Session Chair, Kerstin Berntorp (DOI: 10.1007/s00125-016-3975-7 ).
abstract_id: PUBMED:30372451
Late-pregnancy dysglycemia in obese pregnancies after negative testing for gestational diabetes and risk of future childhood overweight: An interim analysis from a longitudinal mother-child cohort study. Background: Maternal pre-conception obesity is a strong risk factor for childhood overweight. However, prenatal mechanisms and their effects in susceptible gestational periods that contribute to this risk are not well understood. We aimed to assess the impact of late-pregnancy dysglycemia in obese pregnancies with negative testing for gestational diabetes mellitus (GDM) on long-term mother-child outcomes.
Methods And Findings: The prospective cohort study Programming of Enhanced Adiposity Risk in Childhood-Early Screening (PEACHES) (n = 1,671) enrolled obese and normal weight mothers from August 2010 to December 2015 with trimester-specific data on glucose metabolism including GDM status at the end of the second trimester and maternal glycated hemoglobin (HbA1c) at delivery as a marker for late-pregnancy dysglycemia (HbA1c ≥ 5.7% [39 mmol/mol]). We assessed offspring short- and long-term outcomes up to 4 years, and maternal glucose metabolism 3.5 years postpartum. Multivariable linear and log-binomial regression with effects presented as mean increments (Δ) or relative risks (RRs) with 95% confidence intervals (CIs) were used to examine the association between late-pregnancy dysglycemia and outcomes. Linear mixed-effects models were used to study the longitudinal development of offspring body mass index (BMI) z-scores. The contribution of late-pregnancy dysglycemia to the association between maternal pre-conception obesity and offspring BMI was estimated using mediation analysis. In all, 898 mother-child pairs were included in this unplanned interim analysis. Among obese mothers with negative testing for GDM (n = 448), those with late-pregnancy dysglycemia (n = 135, 30.1%) had higher proportions of excessive total gestational weight gain (GWG), excessive third-trimester GWG, and offspring with large-for-gestational-age birth weight than those without. Besides higher birth weight (Δ 192 g, 95% CI 100-284) and cord-blood C-peptide concentration (Δ 0.10 ng/ml, 95% CI 0.02-0.17), offspring of these women had greater weight gain during early childhood (Δ BMI z-score per year 0.18, 95% CI 0.06-0.30, n = 262) and higher BMI z-score at 4 years (Δ 0.58, 95% CI 0.18-0.99, n = 43) than offspring of the obese, GDM-negative mothers with normal HbA1c values at delivery. Late-pregnancy dysglycemia in GDM-negative mothers accounted for about one-quarter of the association of maternal obesity with offspring BMI at age 4 years (n = 151). In contrast, childhood BMI z-scores were not affected by a diagnosis of GDM in obese pregnancies (GDM-positive: 0.58, 95% CI 0.36-0.79, versus GDM-negative: 0.62, 95% CI 0.44-0.79). One mechanism triggering late-pregnancy dysglycemia in obese, GDM-negative mothers was related to excessive third-trimester weight gain (RR 1.72, 95% CI 1.12-2.65). Furthermore, in the maternal population, we found a 4-fold (RR 4.01, 95% CI 1.97-8.17) increased risk of future prediabetes or diabetes if obese, GDM-negative women had a high versus normal HbA1c at delivery (absolute risk: 43.2% versus 10.5%). There is a potential for misclassification bias as the predominantly used GDM test procedure changed over the enrollment period. Further studies are required to validate the findings and elucidate the possible third-trimester factors contributing to future mother-child health status.
Conclusions: Findings from this interim analysis suggest that offspring of obese mothers treated because of a diagnosis of GDM appeared to have a better BMI outcome in childhood than those of obese mothers who-following negative GDM testing-remained untreated in the last trimester and developed dysglycemia. Late-pregnancy dysglycemia related to uncontrolled weight gain may contribute to the development of child overweight and maternal diabetes. Our data suggest that negative GDM testing in obese pregnancies is not an "all-clear signal" and should not lead to reduced attention and risk awareness of physicians and obese women. Effective strategies are needed to maintain third-trimester glycemic and weight gain control among otherwise healthy obese pregnant women.
abstract_id: PUBMED:36318841
GESTATIONAL DIABETES: PREVALENCE AND RISKS FOR THE MOTHER AND CHILD (REVIEW). Gestational diabetes mellitus (GDM) is chronic hyperglycemia during gestation in women without previously diagnosed diabetes. This hyperglycemia is caused by impaired glucose tolerance due to pancreatic β-cell dysfunction in the setting of chronic insulin resistance. GDM has been found to affect approximately 4-16.5% of pregnant women worldwide. The large range of prevalence is associated with different approaches to the diagnosis of gestational diabetes, which are addressed in recent organizational documents but have not yet been introduced into wide clinical practice, and therefore prevalence figures vary between countries, as well as between regions of one country. Studies have shown that overweight and obese patients or people with a family history of any form of diabetes are more likely to have GDM and the incidence of GDM increases with the age of the pregnant woman. It has been proven that half of the cases of GDM occur as a relapse in a subsequent pregnancy. Consequences of GDM include an increased risk of maternal cardiovascular disease and type 2 diabetes, as well as macrosomia and birth complications in the infant. There is also a long-term risk of obesity, type 2 diabetes, and cardiovascular disease in the child. Despite the fact that management strategies, insulin therapy, and behavioral therapy have been discussed for a long time, the effectiveness of these methods is insufficient. This review discusses what is currently known about the epidemiology, pathophysiology of GDM, and maternal and child outcomes.
abstract_id: PUBMED:37808264
Assessing Knowledge on Gestational Diabetes Mellitus and Child Health. Gestational diabetes mellitus (GDM) is a diagnosis of glucose intolerance during pregnancy. The risk of type II diabetes mellitus (T2DM) and obesity for the child and mother increases when GDM develops. Preventing the development of GDM could help lower the prevalence of obesity and type II diabetes mellitus morbidity rates in children of affected mothers. The purpose of the study was to identify the awareness level of females ages 12 and 51 years, on the long-term risk of obesity and T2DM on their children in Australia and Samoa. This is a quantitative study involving 202 females, from across Australia and Samoa, between April 2021 and November 2021, comparing the level of knowledge between a developing and developed country. In Australia and Samoa, 15% (n=16) and 34% (n=33) of females respectively, were aware of the long-term complications of GDM on their children. These findings indicate that there is inadequate knowledge regarding the long-term consequences associated with GDM on both the risk for T2DM in women and the risk for long-term complications for their children. The greatest source of information in both countries was obtained from physicians or midwives, 52% (n=105). This supports the need for increased education on GDM, through social media, the internet, and community health professionals. By increasing awareness of GDM and implementing preventive strategies, it may be possible to reduce the prevalence of obesity and T2DM in Australia and Samoa.
abstract_id: PUBMED:19150058
Future risk of diabetes in mother and child after gestational diabetes mellitus. Gestational diabetes mellitus (GDM) is a common pregnancy complication with increased maternal and perinatal morbidity. However, significant long-term morbidity also exists for the mother and offspring. Women with previous GDM have a very high risk of developing overt diabetes, primarily type 2 diabetes, later in life. Moreover, the risk of the metabolic syndrome is increased 3-fold in these women. Their offspring have an 8-fold risk of diabetes/prediabetes at 19-27 years of age. Thus, GDM is part of a vicious circle which increases the development of diabetes in the coming generations.
abstract_id: PUBMED:20090113
Maternal nutrition: effects on health in the next generation. Nearly 20 years ago, it was discovered that low birthweight was associated with an increased risk of adult diabetes and cardiovascular disease (CVD). This led to the hypothesis that exposure to undernutrition in early life increases an individual's vulnerability to these disorders, by 'programming' permanent metabolic changes. Implicit in the programming hypothesis is that improving the nutrition of girls and women could prevent common chronic diseases in future generations. Research in India has shown that low birthweight children have increased CVD risk factors, and a unique birth cohort in Delhi has shown that low infant weight, and rapid childhood weight gain, increase the risk of type 2 diabetes. Progress has been made in understanding the role of specific nutrients in the maternal diet. In the Pune Maternal Nutrition Study, low maternal vitamin B12 status predicted increased adiposity and insulin resistance in the children, especially if the mother was folate replete. It is not only maternal undernutrition that causes problems; gestational diabetes, a form of foetal overnutrition (glucose excess), is associated with increased adiposity and insulin resistance in the children, highlighting the adverse effects of the 'double burden' of malnutrition in developing countries, where undernutrition and overnutrition co-exist. Recent intervention studies in several developing countries have shown that CVD risk factors in the offspring can be improved by supplementing undernourished mothers during pregnancy. Results differ according to the population, the intervention and the post-natal environment. Ongoing studies in India and elsewhere seek to understand the long-term effects of nutrition in early life, and how best to translate this knowledge into policies to improve health in future generations.
abstract_id: PUBMED:30663027
High frequency of pathogenic and rare sequence variants in diabetes-related genes among Russian patients with diabetes in pregnancy. Aims: Diabetes in pregnancy may be associated with monogenic defects of beta-cell function, frequency of which depends on ethnicity, clinical criteria for selection of patients as well as methods used for genetic analysis. The aim was to evaluate the contribution and molecular spectrum of mutations among genes associated with monogenic diabetes in non-obese Russian patients with diabetes in pregnancy using the next-generation sequencing (NGS).
Methods: 188 non-obese pregnant women with diabetes during pregnancy were included in the study; among them 57 subjects (30.3%) met the American Diabetes Association (ADA) criteria of preexisting pregestational diabetes (pre-GDM), whereas 131 women (69.7%) fulfilled criteria of gestational diabetes mellitus (GDM). A custom NGS panel targeting 28 diabetes causative genes was used for sequencing. The sequence variants were rated according to the American College of Medical Genetics and Genomics (ACMG) guidelines.
Results: In total, 23 pathogenic, 18 likely pathogenic and 16 variants of uncertain significance were identified in 59/188 patients (31.4%). The majority of variants (38/59) were found in GCK gene. No significant differences in the number of variants among the two study groups (pre-GDM and GDM) were observed.
Conclusions: The study suggests that frequency of monogenic variants of diabetes might be underestimated, which warrants a broader use of genetic testing, especially in pregnancy.
abstract_id: PUBMED:36962592
Associations of adverse maternal experiences and diabetes on postnatal maternal depression and child social-emotional outcomes in a South African community cohort. Previous literature has identified associations between diabetes during pregnancy and postnatal maternal depression. Both maternal conditions are associated with adverse consequences on childhood development. Despite an especially high prevalence of diabetes during pregnancy and maternal postnatal depression in low- and middle-income countries, related research predominates in high-income countries. In a South African cohort with or without diabetes, we investigated associations between adverse maternal experiences with postnatal maternal depression and child social-emotional outcomes. South African mother-child dyads were recruited from the Bishop Lavis community in Cape Town. Participants consisted of 82 mother-child dyads (53 women had GDM or type 2 diabetes). At 14-20 months postpartum, maternal self-report questionnaires were administered to assess household socioeconomic status, food insecurity, maternal depressive symptoms (Edinburgh Postnatal Depression Scale (EPDS)), maternal trauma (Life Events Checklist), and child social-emotional development (Brief Infant Toddler Social Emotional Assessment, Ages and Stages Questionnaires: Social-Emotional, Second Edition). Lower educational attainment, lower household income, food insecurity, living without a partner, and having experienced physical assault were each associated with postnatal maternal depressive symptoms and clinical maternal depression (EPDS ≥ 13). Maternal postnatal depression, lower maternal educational attainment, lower household income, household food insecurity, and living in a single-parent household were each associated with child social-emotional problems. Stratified analyses revealed maternal experiences (education, income, food insecurity, trauma) were associated with postnatal maternal depressive symptoms and child social-emotional problems only among dyads with in utero exposure to diabetes. Women with pre-existing diabetes or gestational diabetes in LMIC settings should be screened for health related social needs to reduce the prevalence of depression and to promote child social-emotional development.
Answer: The transmission of diabetes mellitus from mother to child has been a subject of research, and studies suggest that gestational diabetes mellitus (GDM) may indeed program Type 2 diabetes in the next generation. The analysis of a GDM screening program showed that GDM was twice as common in daughters of diabetic mothers (11%) compared to diabetic fathers (5%), suggesting that in utero exposure to hyperglycemia could increase the risk of a diabetes-prone phenotype in offspring (PUBMED:17054597). This is consistent with an epigenetic effect of hyperglycemia in pregnancy acting in addition to genetic factors to produce diabetes in the next generation.
Furthermore, women with GDM have a substantially increased risk of developing type 2 diabetes later in life, and studies have reported that approximately 50% of mothers with GDM will develop diabetes within 10 years after pregnancy (PUBMED:27174368). This indicates that GDM is one of the strongest predictors of type 2 diabetes. Additionally, offspring of mothers with GDM are at increased risks of obesity, the metabolic syndrome, type 2 diabetes, and impaired insulin sensitivity and secretion, with risks being two- to eightfold higher than in offspring of mothers without GDM (PUBMED:27174368).
Moreover, gestational diabetes is associated with increased risk of maternal cardiovascular disease and type 2 diabetes, as well as macrosomia and birth complications in the infant. There is also a long-term risk of obesity, type 2 diabetes, and cardiovascular disease in the child (PUBMED:36318841). These findings highlight the intergenerational impact of GDM and the importance of managing and monitoring the condition to prevent the transmission of diabetes-related risks to the next generation.
In summary, the evidence suggests that gestational diabetes does have the potential to program Type 2 diabetes in the next generation, with both genetic and epigenetic factors contributing to this risk. This underscores the importance of interventions and follow-up programs to prevent the progression of GDM to type 2 diabetes and to mitigate the associated risks for offspring. |
Instruction: Sjögren's syndrome in the community: can serology replace salivary gland biopsy?
Abstracts:
abstract_id: PUBMED:36305349
Can salivary gland ultrasonography replace salivary gland biopsy in the diagnosis of Sjögren's syndrome? Ultrasound is a promising diagnostic method when it comes to assessing the involvement of major salivary glands in patients with primary Sjögren's syndrome (pSS). A matter of debate is whether ultrasound of the major salivary glands (SGUS) can replace a salivary gland biopsy in the diagnosis or classification of pSS. The intra- and inter-observer reliability of SGUS was found to be good, especially when focusing on hypoechogenic areas and homogeneity, and comparable to the reliability of histopathologic characteristics of salivary gland biopsies of pSS patients. However, replacing salivary gland biopsy by SGUS led to substantial decrease of the accuracy of the 2016 American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) classification criteria with clinical diagnosis as the gold standard. When SGUS was added as an additional item to the criteria, the accuracy of the criteria remained high, offering at the same time the clinicians a wider array of tools to assess patients. Combination of SGUS and anti-SSA antibodies was shown to be highly predictive of the classification of a patient suspected of pSS, making routine salivary gland biopsy debatable.
abstract_id: PUBMED:20660500
Salivary gland biopsy: a comprehensive review of techniques and related complications. Objective: This study proposes a revision of the literature on the current techniques employed in salivary gland biopsy.
Methods: A systematic review of the literature between January 1990 and January 2010 was conducted using MEDLINE, Embase and the Cochrane Central Register of Controlled Trials. The search terms were: 'biopsy AND parotid AND Sjögren'; 'biopsy AND sublingual salivary gland AND Sjögren'; 'biopsy AND minor salivary gland AND Sjögren'; 'biopsy AND labial salivary gland AND Sjögren' and 'biopsy AND salivary glands AND connective disorders'.
Results: No study reporting submandibular salivary gland biopsy was found; 3 studies reported sublingual salivary gland biopsy; 1 study reported palate biopsy; 4 studies reported parotid gland biopsy and 21 studies reported minor salivary gland biopsy.
Conclusion: Biopsy of salivary glands must be performed as last investigation and only when the other items are not complete enough to satisfy the diagnosis. The knowledge of complications and sequelae may be useful in order to minimize the risk.
abstract_id: PUBMED:27136104
The Importance of Minor Salivary Gland Biopsy in Sjögren Syndrome Diagnosis and the Clinicopathological Correlation. Objective: Minor salivary gland biopsy is one of the objective tests used in the diagnosis of Sjögren syndrome. The aim of our study was to compare the clinical and laboratory data of primary and secondary Sjögren syndrome cases with a lymphocyte score 3 and 4 in the minor salivary gland biopsy.
Material And Method: Data from a total of 2346 consecutive minor salivary gland biopsies were retrospectively evaluated in this study. Clinical and autoantibody characteristics of 367 cases with lymphocyte score 3 or 4 and diagnosed with primary or secondary Sjögren syndrome were compared.
Results: There was no difference between lymphocyte score 3 and 4 primary Sjögren syndrome patients in terms of dry mouth, dry eye symptoms and Schirmer test results but Anti-Ro and Antinuclear Antibody positivity was statistically significantly higher in cases with lymphocyte score 4 (p= 0.025, p= 0.001). Anti-Ro test results were also found to be statistically significantly higher in secondary Sjögren syndrome patients with lymphocyte score 4 (p= 0.048).
Conclusion: In this study, the high proportion of cases with negative autoantibody but positive lymphocyte score is significant in terms of showing the contribution of minor salivary gland biopsy to Sjögren syndrome diagnosis. Lymphocyte score 3 and 4 cases were found to have similar clinical findings but a difference regarding antibody positivity in primary Sjögren syndrome. We believe that cases with lymphocyte score 4 may be Sjögren syndrome cases whose clinical manifestations are relatively established and higher autoantibody levels are therefore found.
abstract_id: PUBMED:35909442
Minor Salivary Gland Biopsy in Diagnosis of Sjögren's Syndrome. Objective: Previous studies have questioned the safety and efficacy of minor salivary gland biopsy in the diagnosis of Sjögren's syndrome, citing complications and difficulty of pathologic evaluation. This study aims to determine the rate of biopsy specimen adequacy and the risk of complications after minor salivary gland biopsy.
Study Design: Case series.
Setting: Single tertiary care center.
Methods: We reviewed the records of all patients who underwent minor salivary gland biopsy at our institution from October 1, 2016, to September 1, 2021. Demographics, comorbidities, symptoms, and serologic results were recorded. The primary outcome was adequacy of the tissue sample. Complications of the procedure were recorded. Biopsies with at least one focus of ≥50 lymphocytes per 4-mm2 sample were considered positive.
Results: We identified 110 patients who underwent minor salivary gland biopsy. Ninety-three (85%) were female, and the median age was 49.1 years (range, 18.7-80.5). Seventy-seven procedures (70%) were performed in the office setting, and 33 (30%) were performed in the operating room. Nearly all biopsy samples (n = 108, 98%) were adequate, and 33 (31%) were interpreted as positive. Four patients (4%) experienced temporary lip numbness, which resolved with conservative management. No permanent complications were reported after lip biopsy. Nineteen (58%) patients with positive biopsy results had no Sjögren's-specific antibodies. Most patients with positive biopsy results (n = 20, 61%) subsequently started immunomodulatory therapy.
Conclusion: Minor salivary gland biopsy can be performed safely and effectively in both the office and the operating room. This procedure provides clinically meaningful information and can be reasonably recommended in patients suspected to have Sjögren's syndrome.
abstract_id: PUBMED:35083248
Recent Advances of Salivary Gland Biopsy in Sjögren's Syndrome. Sjögren's syndrome (SS) is a chronic, systemic, inflammatory autoimmune disease characterized by lymphocyte proliferation and progressive damage to exocrine glands. The diagnosis of SS is challenging due to its complicated clinical manifestations and non-specific signs. Salivary gland biopsy plays an important role in the diagnosis of SS, especially with anti-Sjögren's syndrome antigen A (SSA) and anti-SSB antibody negativity. Histopathology based on biopsy has clinical significance for disease stratification and prognosis evaluation, such as risk assessment for the development of non-Hodgkin's lymphoma. Furthermore, histopathological changes of salivary gland may be implicated in evaluating the efficacy of biological agents in SS. In this review, we summarize the histopathological features of salivary gland, the mechanism of histopathological changes and their clinical significance, as well as non-invasive imaging techniques of salivary glands as a potential alternative to salivary gland biopsy in SS.
abstract_id: PUBMED:24287191
Salivary gland biopsy for Sjögren's syndrome. Salivary gland biopsy is a technique broadly applied for the diagnosis of Sjögren's syndrome (SS), lymphoma accompanying SS, sarcoidosis, amyloidosis, and other connective tissue disorders. SS has characteristic microscopic findings involving lymphocytic infiltration surrounding the excretory ducts in combination with destruction of acinar tissue. This article focuses on the main techniques used for taking labial and parotid salivary gland biopsies in the diagnostic workup of SS with respect to their advantages, their postoperative complications, and their usefulness for diagnostic procedures, monitoring disease progression, and treatment evaluation.
abstract_id: PUBMED:3377870
Comparison of parotid and minor salivary gland biopsy specimens in the diagnosis of Sjögren's syndrome. We conducted a prospective study comparing minor salivary gland and parotid gland biopsy specimens obtained simultaneously from 24 patients who were undergoing evaluation for primary Sjögren's syndrome (SS). Adequate tissue for study was obtained with all minor salivary gland biopsies and 19 of 24 parotid gland biopsies. Parotid inflammation was seen in 6 of 11 patients whose minor salivary gland biopsy results indicated SS, but in none of 8 patients who had normal findings on minor salivary gland biopsy. Patients with parotid inflammation were older and had a higher frequency of dry eyes and mouth, abnormal results on Schirmer's test, serious extraglandular involvement, and serologic abnormalities. We conclude that parotid gland biopsy adds very little to the minor salivary gland biopsy in the diagnosis of primary SS, but that parotid inflammatory changes may reflect disease duration and/or severity.
abstract_id: PUBMED:15703951
Sjögren's syndrome in the community: can serology replace salivary gland biopsy? Background: It is relatively difficult in a community setting to perform salivary gland biopsy or reliable diagnostic tests for salivary gland involvement in a patient suspected to suffer from Sjögren's syndrome (SS).
Objective: To investigate whether anti-Ro/La antibodies are a good substitute for salivary gland biopsy in community patients suspected to suffer from SS.
Methods: Forty-one patients suspected as having SS due to dry eyes and mouth, articular complaints, and/or serological findings were examined for the presence of anti-Ro/La, and underwent minor salivary gland biopsy.
Results: Sixteen patients (39%) were classified as primary SS by the American-European Consensus Group criteria. Twelve subjects had anti-Ro/La antibodies and 11 subjects in this group had positive biopsy findings. Of 29 patients without anti-Ro/La antibodies, only four manifested positive biopsy findings. A significant association was found between the presence of anti-Ro/La antibodies and positive salivary gland findings characteristic for SS (p<0.0001, Fisher's exact test).
Conclusion: These findings tend to support the suggestion that a patient suspected to suffer from SS in a community setting may be first tested for the presence of anti-Ro/La antibodies to confirm the diagnosis. Only those with a negative result for the presence of anti-Ro/La antibodies need to be referred for salivary gland biopsy.
abstract_id: PUBMED:14631230
Diagnostic contribution of minor salivary gland biopsy: statistical analysis in 100 cases Introduction: The minor salivary glands biopsy is a very common diagnostic procedure in oral medicine rather its efficiency has not been statistically proved.
Material And Methods: One hundred biopsies have been studied with special attention to the suspected diagnosis before biopsy and the final histologic result.
Results: The minor salivary gland biopsy confirmed the initial diagnosis in 13 cases.
Discussion: Although if the minor salivary gland biopsy is in most cases not contributive it is a very simple procedure which gives the diagnosis of Gougerot-Sjögren disease, amylosis and sarcoidosis.
abstract_id: PUBMED:2727420
Salivary gland biopsy in sarcoidosis. Previous studies have suggested that the simple technique of minor salivary gland biopsy might be a useful method of diagnosing sarcoidosis. We have therefore investigated the success of this technique in 25 consecutive patients who were attending a chest clinic and in whom a diagnosis of sarcoid had been made. The histological appearances of the minor salivary glands were correlated with the degree of gallium uptake by the parotid glands after intravenous injection of gallium citrate. We also compared the histological appearances of the sarcoid patients with 40 patients suffering from a variety of mixed connective tissue disorders (systemic lupus erythematosis, systemic sclerosis, primary Sjogrens syndrome, rheumatoid arthritis and polymyositis). Five out of twenty-five (20%) of the sarcoid patients had classical non-caseating granulomata in their minor salivary glands, four of whom also had hilar lymphadenopathy and one had bone marrow involvement. All the patients with sarcoid histology in their minor salivary glands also had Gallium uptake in parotid tissue. None of the patients with mixed connective tissue disorders had granulomata in their minor salivary gland tissue although a continuum of changes were found, ranging from normal appearances through small foci of lymphocytic infiltrate to extensive inflammation and destruction of glandular tissue with fibrosis. Whilst minor salivary gland biopsy may be a simple and rapid method of diagnosing sarcoidosis it is most likely to be positive in patients with hilar lymphadenopathy.
Answer: In the community setting, the question of whether serology can replace salivary gland biopsy for the diagnosis of Sjögren's syndrome (SS) has been explored. A study investigated the efficacy of anti-Ro/La antibodies as a substitute for salivary gland biopsy in patients suspected of having SS due to symptoms such as dry eyes and mouth, articular complaints, and/or serological findings. The results showed that there was a significant association between the presence of anti-Ro/La antibodies and positive salivary gland findings characteristic of SS. Specifically, among patients with anti-Ro/La antibodies, the majority had positive biopsy findings. Conversely, in patients without these antibodies, only a small number had positive biopsy findings. This suggests that testing for anti-Ro/La antibodies could be a first step in confirming the diagnosis of SS in a community setting. Patients with negative results for these antibodies might then be referred for salivary gland biopsy (PUBMED:15703951).
However, it is important to note that while serology can be indicative of SS, salivary gland biopsy remains a key diagnostic tool, especially in cases where serological markers are negative. The biopsy can provide clinically meaningful information and is considered safe and effective. It can be performed in both office and operating room settings, with a high rate of tissue sample adequacy and a low risk of complications. Moreover, a significant proportion of cases with positive biopsy results had no Sjögren's-specific antibodies, indicating the importance of biopsy in diagnosis (PUBMED:35909442).
In summary, while serological testing for anti-Ro/La antibodies can be a useful initial diagnostic tool for SS in the community, it may not completely replace the need for salivary gland biopsy, particularly in seronegative patients or when a definitive diagnosis is required. |
Instruction: Does survey non-response bias the association between occupational social class and health?
Abstracts:
abstract_id: PUBMED:17454926
Does survey non-response bias the association between occupational social class and health? Aims: A non-response rate of 20-40%is typical in questionnaire studies. The authors evaluate non-response bias and its impact on analyses of social class inequalities in health.
Methods: Set in the context of a health survey carried out among the employees of the City of Helsinki (non-response 33%) in 2000-02. Survey response and non-response records were linked with a personnel register to provide information on occupational social class and long sickness absence spells as an indicator of health status.
Results: Women and employees in higher occupational social classes were more likely to respond. Non-respondents had about 20-30% higher sickness absence rates. Relative social class differences in sickness absence in the total population were similar to those among either respondents or non-respondents.
Conclusions: In working populations survey non-response does not seriously bias analyses of social class inequalities in sickness absence and possibly health inequalities more generally.
abstract_id: PUBMED:25388324
Social class, psychosocial occupational risk factors, and the association with self-rated health and mental health in Chile The objective of this study was to analyze the association between social class and psychosocial occupational risk factors and self-rated health and mental health in a Chilean population. A cross-sectional study analyzed data from the First National Survey on Employment, Work, Quality of Life, and Male and Female Workers in Chile (N = 9,503). The dependent variables were self-rated health status and mental health. The independent variables were social class (neo-Marxist), psychosocial occupational risk factors, and material deprivation. Descriptive and logistic regression analyses were performed. There were inequalities in the distribution of psychosocial occupational risk factors by social class and sex. Furthermore, social class and psychosocial occupational risk factors were associated with unequal distribution of self-rated health and mental health among the working population in Chile. Occupational health interventions should consider workers' exposure to socioeconomic and psychosocial risk factors.
abstract_id: PUBMED:3455422
Is social class standardisation appropriate in occupational studies? Social class standardisation has been proposed as a method for separating the effects of occupation and "social" or "lifestyle" factors in epidemiological studies, by comparing workers in a particular occupation with other workers in the same social class. The validity of this method rests upon two assumptions: (1) that social factors have the same effect in all occupational groups in the same social class, and (2) that other workers in the same social class as the workers being studied are free of occupational risk factors for the disease of interest. These assumptions will not always be satisfied. In particular, the effect of occupation will be underestimated when the comparison group also has job-related exposures which cause the disease under study. Thus, although adjustment for social class may minimise bias due to social factors, it may introduce bias due to unmeasured occupational factors. This difficulty may be magnified when occupational category is used as the measure of social class. Because of this potential bias, adjustment for social class should be done only after careful consideration of the exposures and disease involved and should be based on an appropriate definition of social class. Both crude and standardised results should be presented when such adjustments are made.
abstract_id: PUBMED:21627726
Determinants of non-response in an occupational exposure and health survey in New Zealand. Objective: Study the determinants of non-response and the potential for non-response bias in a New Zealand survey of occupational exposures and health.
Methods: A random sample of 10,000 New Zealanders aged 20-64 years were invited by mail to take part in a telephone survey. Multiple logistic regression was used to study the determinants of non-response. Whether occupational exposure, lifestyle and health indicators were associated with non-response was studied by standardising their prevalence towards the demographic distribution of the source population, and comparing early with late responders.
Results: The response rate was 37%. Younger age, Māori descent, highest and lowest deprivation groups and being a student, unemployed, or retired were determinants of non-contact. Refusal was associated with older age and being a housewife. Prevalence of key survey variables were unchanged after standardising to the demographic distribution of the source population.
Conclusions: Following up the non-responders to the mailed invitations with telephone calls more than doubled the response rate and improved the representativeness of the sample. Although the response rate was low, we found no evidence of major non-response bias.
Implications: Judgement regarding the validity of a survey should not be based on its response rate.
abstract_id: PUBMED:31216499
Survey research in podiatric medicine: An analysis of the reporting of response rates and non-response bias. Background: Survey research is common practice in podiatry literature and many other health-related fields. An important component of the reporting of survey results is the provision of sufficient information to permit readers to understand the validity and representativeness of the results presented. However, the quality of survey reporting measures in the body of podiatry literature has not been systematically reviewed.
Objective: To examine the reporting of response rates and nonresponse bias within survey research articles published in the podiatric literature in order to provide a foundation with regard to the development of appropriate research reporting standards within the profession.
Methods: This study reports on a secondary analysis of survey research published in the Journal of the American Podiatric Medical Association, the Foot, and the Journal of Foot and Ankle Research. 98 surveys published from 2000 to 2018 were reviewed and data abstracted regarding the report of response rates and non-response bias.
Results: 67 surveys (68.4%) report a response rate while only 36 articles (36.7%) mention non-response bias in any capacity.
Conclusions: The findings suggest that there is room for improvement in the quality of reporting response rates and nonresponse in the body of podiatric literature involving survey research. Both nonresponse and response rate should be reported to assess survey quality. This is particularly problematic for studies that contribute to best practices.
abstract_id: PUBMED:24584263
Addressing social inequality in aging by the Danish occupational social class measurement. Objective: To present the Danish Occupational Social Class (DOSC) measurement as a measure of socioeconomic position (SEP) applicable in a late midlife population, and to analyze associations of this measure with three aging-related outcomes in midlife, adjusting for education.
Method: Systematic coding procedures of the DOSC measurement were applied to 7,084 participants from the Copenhagen Aging and Midlife Biobank (CAMB) survey. We examined the association of this measure of SEP with chronic conditions, self-rated health, and mobility in logistic regression analyses, adjusting for school education in the final analysis.
Results: The measure of SEP showed a strong social gradient along the social classes in terms of prevalence of chronic conditions, poor self-rated health, and mobility limitations. Adjusting for school education attenuated the association only to a minor degree.
Discussion: The DOSC measure was associated with aging-related outcomes in a midlife Danish population, and is, thus, well suited for future epidemiological research on social inequalities in health and aging.
abstract_id: PUBMED:15682206
Bias measuring mortality gradients by occupational class in New Zealand. Background: Socioeconomic differences in mortality in New Zealand have traditionally been measured using occupational class from mortality data (based on usual or last occupation) as the numerator, and class from census data (current occupation on census night) as the denominator. Such analyses are prone to numerator-denominator bias. Record linkage of census and mortality data in the New Zealand Census-Mortality Study (NZCMS) allows analyses of 'linked' data that will avoid numerator-denominator bias, but may be prone to other biases.
Objectives: To determine differences in the assignment of occupational class between census and mortality data; to investigate biases in the observed association of class with mortality using linked census-mortality data; and to compare the class-mortality association using unlinked versus linked census-mortality data.
Methods: Census records for males aged 25-64 years on census night 1991 were anonymously and probabilistically linked to 5,844 out of 8,145 eligible deaths occurring in the second and third years following census night.
Results: (by objective) Only 47% of linked deaths had an occupation recorded on census data, compared to 84% on mortality data - a census to mortality ratio of 0.56. Relatively fewer deaths were identified as class 4 on census data (census to mortality ratio of 0.45) compared to other classes (ratios 0.55 to 0.64). Linkage bias: A lower likelihood of 25-44 year old deaths (but not 45-64 year olds) from lower socioeconomic classes being successfully linked to a census record meant that analyses using linked census-mortality data underestimated the class-mortality association. Bias due to exclusion of economically inactive: Analyses on linked-census data (using current occupational class) considerably underestimated the estimated association of usual occupational class with mortality. The strength of the association of class with mortality according to linked census-mortality data (and adjusted for the above two biases) and unlinked data were roughly comparable.
Conclusion: Differences in the recording of occupational class on census and mortality data in New Zealand mean measuring mortality differences by class is thwart with difficulty. If one assumes that biases for any particular method of analysis are similar over time, or one carefully adjusts where possible for bias, using occupational class to monitor trends in socioeconomic mortality gradients may be valid.
abstract_id: PUBMED:28369582
Measuring inequalities in health from survey data using self-assessed social class. Background: Asking participants to assess their social class may be an efficient approach to examining inequalities in heath from survey data. The present study investigated this possibility empirically by testing whether subjective class identification is related to overall health.
Methods: I used pooled cross-sectional data from the 2012 and the 2014 General Social Survey, a nationally representative survey carried out among adults in the United States. The association between health and class was estimated separately by gender, race and age.
Results: The association follows a gradient pattern where health deteriorates with lower class position even after controlling for indicators typically used in research that examines class differences in health-educational attainment, family income and occupational prestige. The results largely hold when the data are stratified by gender, race and age.
Conclusions: These findings demonstrate the empirical value of subjective class identification for assessing social inequalities in health from survey data.
abstract_id: PUBMED:23229159
Wealth, health, and the moderating role of implicit social class bias. Background: Subjective social status (captured by the MacArthur Scale of Subjective Social Status) is in many cases a stronger predictor of health outcomes than objective socioeconomic status (SES).
Purpose: The study aims to test whether implicit beliefs about social class moderate the relationship between subjective social status and inflammation.
Methods: We measured implicit social class bias, subjective social status, SES, and baseline levels of interleukin-6 (IL-6), a marker of inflammation, in 209 healthy adults.
Results: Implicit social class bias significantly moderated the relationship between subjective social status and levels of IL-6, with a stronger implicit association between the concepts "lower class" and "bad" predicting greater levels of IL-6.
Conclusions: Implicit social class bias moderates the relationship between subjective social status and health outcomes via regulation of levels of the inflammatory cytokine IL-6. High implicit social class bias, particularly when one perceives oneself as having low social standing, may increase vulnerability to inflammatory processes.
abstract_id: PUBMED:18851892
The contribution of occupational factors to social inequalities in health: findings from the national French SUMER survey. Social inequalities in health have long been demonstrated, but the understanding of these inequalities remains unclear. Work and its related occupational factors may contribute to these inequalities. The objective of this study was to study the contribution of work factors using an integrated approach (including all types of exposures) to social inequalities in three health outcomes: poor self-reported health, long sickness absence, and work injury. Respondents were 14,241 men and 10,245 women drawn from a survey of the national French working population (response rate: 96.5%). Work factors included job characteristics, and occupational exposures of the physical, ergonomic, biological, chemical, and psychosocial work environment. All work factors were measured through expert evaluation by occupational physicians, except psychosocial work factors, which were self-reported. Strong social gradients were found for all work factors, except for psychological demands, workplace bullying, and aggression from the public. Marked social gradients were also observed for the health outcomes studied, blue collar workers being more likely to report poor self-reported health, long sickness absence, and work injury. The social differences in health were reduced strongly after adjustment for work factors (psychological demands excluded) by 24-58% according to sex and health outcomes. The strongest impacts were found for decision latitude, ergonomic, physical, and chemical exposures, as well as for work schedules. A detailed analysis allowed us to identify more precisely the contributing occupational factors. It suggests that concerted prevention of occupational risk factors would be useful not only to improve health at work, but also to reduce social inequalities in health.
Answer: Survey non-response can potentially bias the association between occupational social class and health, but the evidence from the provided abstracts suggests that this may not always be the case, and the impact of non-response bias can vary depending on the context and the measures taken to address it.
In a health survey among employees of the City of Helsinki, non-respondents had higher sickness absence rates, indicating poorer health status. However, the relative social class differences in sickness absence were similar among both respondents and non-respondents, suggesting that survey non-response did not seriously bias the analysis of social class inequalities in sickness absence (PUBMED:17454926).
In New Zealand, a study on occupational exposure and health found no evidence of major non-response bias after standardizing the prevalence of key survey variables to the demographic distribution of the source population, despite a low response rate (PUBMED:21627726).
However, it is important to note that the validity of using occupational social class as a measure in epidemiological studies can be complicated by biases, such as numerator-denominator bias and linkage bias, as seen in the New Zealand Census-Mortality Study (PUBMED:15682206). Additionally, the use of social class standardization in occupational studies has been questioned due to potential biases introduced by unmeasured occupational factors (PUBMED:3455422).
The Danish Occupational Social Class measurement showed a strong social gradient in terms of chronic conditions, self-rated health, and mobility limitations, suggesting that it is a suitable measure for examining social inequalities in health and aging (PUBMED:24584263).
In the context of Chile, social class and psychosocial occupational risk factors were associated with an unequal distribution of self-rated health and mental health among the working population, indicating that occupational health interventions should consider workers' exposure to socioeconomic and psychosocial risk factors (PUBMED:25388324).
Overall, while non-response can be a concern in survey research, as it may affect the representativeness of the results (PUBMED:31216499), the impact on the association between occupational social class and health seems to be variable and can be mitigated by methodological approaches that address potential biases. |
Instruction: Craniofacial gunshot injuries: an unrecognised risk factor for blunt cervical vascular injuries?
Abstracts:
abstract_id: PUBMED:22569994
Craniofacial gunshot injuries: an unrecognised risk factor for blunt cervical vascular injuries? Objectives: To review our institutional experience with cervical arterial injuries remote from the penetrating tract seen in the setting of craniofacial gunshot injuries.
Methods: Institutional Review Board approval was obtained. Our institutional trauma registry was queried over a 5-year period for patients with cervical arterial injuries due to penetrating craniofacial gunshot wounds who underwent CT angiography. Imaging results and clinical notes were reviewed.
Results: A total of 427 patients sustained gunshot wounds to the head, face and/or neck, of whom 222 underwent CT angiography yielding 56 patients with 78 vascular injuries. There were five internal carotid artery injuries remote from the wound tract. The incidence of these "indirect" cervical arterial injuries in our patient population was 1.2%, or 2.8% of patients who underwent CT angiography.
Conclusions: The incidence of "indirect" cervical arterial injuries with craniofacial gunshot wounds is comparable to or slightly higher than those seen in pure blunt trauma. Screening patients with craniofacial gunshot injuries with CT angiography may yield unexpected cervical vascular injuries remote from the penetrating tract. The significance and optimal therapy of these injuries are unknown. Additional experience will be needed to determine the significance of "indirect" cervical arterial injuries in the setting of craniofacial gunshot wounds.
abstract_id: PUBMED:24065257
Transcranial Doppler investigation of hemodynamic alterations associated with blunt cervical vascular injuries in trauma patients. Objectives: Blunt cervical vascular injuries, often missed with current screening methods, have substantial morbidity and mortality, and there is a need for improved screening. Elucidation of cerebral hemodynamic alterations may facilitate serial bedside monitoring and improved management. Thus, the objective of this study was to define cerebral flow alterations associated with single blunt cervical vascular injuries using transcranial Doppler sonography and subsequent Doppler waveform analyses in a trauma population.
Methods: In this prospective pilot study, patients with suspected blunt cervical vascular injuries had diagnoses by computed tomographic angiography and were examined using transcranial Doppler sonography to define cerebral hemodynamics. Multiple vessel injuries were excluded for this analysis, as the focus was to identify hemodynamic alterations from isolated injuries. The inverse damping factor characterized altered extracranial flow patterns; middle cerebral artery flow velocities, the pulsatility index, and their asymmetries characterized altered intracranial flow patterns.
Results: Twenty-three trauma patients were evaluated: 4 with single internal carotid artery injuries, 5 with single vertebral artery injuries, and 14 without blunt cervical vascular injuries. All internal carotid artery injuries showed a reduced inverse damping factor in the internal carotid artery and dampened ipsilateral mean flow and peak systolic velocities in the middle cerebral artery. Vertebral artery injuries produced asymmetry of a similar magnitude in the middle cerebral artery mean flow velocity with end-diastolic velocity alterations.
Conclusions: These data indicate that extracranial and intracranial hemodynamic alterations occur with internal carotid artery and vertebral artery blunt cervical vascular injuries and can be quantified in the acute injury phase by transcranial Doppler indices. Further study is required to elucidate cerebral flow changes resulting from a single blunt cervical vascular injury, which may guide future management to preserve cerebral perfusion after trauma.
abstract_id: PUBMED:26412898
Fatal case of cervical blunt vascular injury with cervical vertebral fracture: a case report. Blunt cerebrovascular injury (BCVI) is usually caused by neck trauma that predominantly occurs in high-impact injuries. BCVI may occur due to damage to both the vertebral and carotid arteries, and may be fatal in the absence of appropriate treatment and early diagnosis. Here, we describe a case of cerebral infarction caused by a combination of a lower cervical spinal fracture and traumatic injury to the carotid artery by a direct blunt external force in a 52-year-old man. Initially, there was no effect on consciousness, but 6 hours later loss of consciousness occurred due to traumatic dissection of the carotid artery that resulted in a cerebral infarction. Brain edema was so extensive that decompression by emergency craniectomy and internal decompression were performed by a neurosurgeon, but with no effect, and the patient died on day 7. This is a rare case of cerebral infarction caused by a combination of a lower cervical spinal fracture and traumatic injury to the carotid artery. The case suggests that cervical vascular injury should be considered in a patient with a blunt neck trauma and that additional imaging should be performed.
abstract_id: PUBMED:23716524
A novel decision tree approach based on transcranial Doppler sonography to screen for blunt cervical vascular injuries. Objectives: Early detection and treatment of blunt cervical vascular injuries prevent adverse neurologic sequelae. Current screening criteria can miss up to 22% of these injuries. The study objective was to investigate bedside transcranial Doppler sonography for detecting blunt cervical vascular injuries in trauma patients using a novel decision tree approach.
Methods: This prospective pilot study was conducted at a level I trauma center. Patients undergoing computed tomographic angiography for suspected blunt cervical vascular injuries were studied with transcranial Doppler sonography. Extracranial and intracranial vasculatures were examined with a portable power M-mode transcranial Doppler unit. The middle cerebral artery mean flow velocity, pulsatility index, and their asymmetries were used to quantify flow patterns and develop an injury decision tree screening protocol. Student t tests validated associations between injuries and transcranial Doppler predictive measures.
Results: We evaluated 27 trauma patients with 13 injuries. Single vertebral artery injuries were most common (38.5%), followed by single internal carotid artery injuries (30%). Compared to patients without injuries, mean flow velocity asymmetry was higher for single internal carotid artery (P = .003) and single vertebral artery (P = .004) injuries. Similarly, pulsatility index asymmetry was higher in single internal carotid artery (P = .015) and single vertebral artery (P = .042) injuries, whereas the lowest pulsatility index was elevated for bilateral vertebral artery injuries (P = .006). The decision tree yielded 92% specificity, 93% sensitivity, and 93% correct classifications.
Conclusions: In this pilot feasibility study, transcranial Doppler measures were significantly associated with the blunt cervical vascular injury status, suggesting that transcranial Doppler sonography might be a viable bedside screening tool for trauma. Patient-specific hemodynamic information from transcranial Doppler assessment has the potential to alter patient care pathways to improve outcomes.
abstract_id: PUBMED:37379619
Trauma mechanisms and patterns of blunt cervical vascular injury: A descriptive study using a nationwide trauma registry. Objective: Blunt cervical vascular injury (BCVI) is a non-penetrating trauma to the carotid and/or vertebral vessels following a direct injury to the neck or by the shearing of the cervical vessels. Despite its potentially life-threatening nature, important clinical features of BCVI such as typical patterns of co-occurring injuries for each trauma mechanism are not well known. To address this knowledge gap, we described the characteristics of patients with BCVI to identify the pattern of co-occurring injuries by common trauma mechanisms.
Methods: This is a descriptive study using a Japanese nationwide trauma registry from 2004 through 2019. We included patients aged ≥13 years presenting to the emergency department (ED) with BCVI, defined as a blunt trauma to any of the following vessels: common/internal carotid artery, external carotid artery, vertebral artery, external jugular vein, and internal jugular vein. We delineated characteristics of each BCVI classified according to three damaged vessels (common/internal carotid artery, vertebral artery, and others). In addition, we applied network analysis to unravel patterns of co-occurring injuries among patients with BCVI by four common trauma mechanisms (car accident, motorcycle/bicycle accident, simple fall, and fall from a height).
Results: Among 311,692 patients who visited the ED for blunt trauma, 454 (0.1%) patients had BCVI. Patients with common/internal carotid artery injuries presented to the ED with severe symptoms (e.g., the median Glasgow Coma Scale was 7) and had high in-hospital mortality (45%), while patients with vertebral artery injuries presented with relatively stable vital signs. Network analysis showed that head-vertebral-cervical spine injuries were common across four trauma mechanisms (car accident, motorcycle/bicycle accident, simple fall, and fall from a height), with co-occurring injuries of the cervical spine and vertebral artery being the most common injuries due to falls. In addition, common/internal carotid artery injuries were associated with thoracic and abdominal injuries in patients with car accidents.
Conclusions: Based on analyses of a nationwide trauma registry, we found that patients with BCVI had distinct patterns of co-occurring injuries by four trauma mechanisms. Our observations provide an important basis for the initial assessment of blunt trauma and could support the management of BCVI.
abstract_id: PUBMED:27781188
Blunt Cerebrovascular Injury in Cervical Spine Fractures: Are More-Liberal Screening Criteria Warranted? Study Design Retrospective comparative study. Objective To compare strict Biffl criteria to more-liberal criteria for computed tomography angiography (CTA) when screening for blunt cerebrovascular injury (BCVI). Methods All CTAs performed for blunt injury between 2009 and 2011 at our institution were reviewed. All patients with cervical spine fractures who were evaluated with CTA were included; patients with penetrating trauma and atraumatic reasons for imaging were excluded. We then categorized the patients' fractures based on the indications for CTA as either within or outside Biffl criteria. For included subjects, the percentage of studies ordered for loose versus strict Biffl criteria and the resulting incidences of BCVI were determined. Results During our study period, 1,000 CTAs were performed, of which 251 met inclusion criteria. Of the injuries, 192 met Biffl criteria (76%). Forty-nine were found to have BCVIs (19.5%). Forty-one injuries were related to fractures meeting Biffl criteria (21.4%), and 8 were related to fractures not meeting those criteria (13.6%). The relative risk of a patient with a Biffl criteria cervical spine injury having a vascular injury compared with those imaged outside of Biffl criteria was 1.57 (p = 0.19). Conclusions Our data demonstrates that although cervical spine injuries identified by the Biffl criteria trend toward a higher likelihood of concomitant BCVI (21.4%), a significant incidence of 13.6% also exists within the non-Biffl fracture cohort. As a result, a more-liberal screening than proposed by Biffl may be warranted.
abstract_id: PUBMED:30429928
Risk Factors in Pediatric Blunt Cervical Vascular Injury and Significance of Seatbelt Sign. Introduction: Computed tomography angiography (CTA) is used to screen patients for cerebrovascular injury after blunt trauma, but risk factors are not clearly defined in children. This modality has inherent radiation exposure. We set out to better delineate the risk factors associated with blunt cervical vascular injury (BCVI) in children with attention to the predictive value of seatbelt sign of the neck.
Methods: We collected demographic, clinical and radiographic data from the electronic medical record and a trauma registry for patients less than age 18 years who underwent CTA of the neck in their evaluation at a Level I trauma center from November 2002 to December 2014 (12 years). The primary outcome was BCVI.
Results: We identified 11,446 pediatric blunt trauma patients of whom 375 (2.7%) underwent CTA imaging. Fifty-three patients (0.4%) were diagnosed with cerebrovascular injuries. The average age of patients was 12.6 years and included 66% males. Nearly half of the population was white (52%). Of those patients who received CTA, 53 (14%) were diagnosed with arterial injury of various grades (I-V). We created models to evaluate factors independently associated with BCVI. The independent predictors associated with BCVI were Injury Severity Score >/= 16 (odds ratio [OR] [2.35]; 95% confidence interval [CI] [1.11-4.99%]), infarct on head imaging (OR [3.85]; 95% CI [1.49-9.93%]), hanging mechanism (OR [8.71]; 95% CI [1.52-49.89%]), cervical spine fracture (OR [3.84]; 95% CI [1.94-7.61%]) and basilar skull fracture (OR [2.21]; 95% CI [1.13-4.36%]). The same independent predictors remained associated with BCVI when excluding hanging mechanism from the multivariate regression analysis. Seatbelt sign of the neck was not associated with BCVI (p=0.68).
Conclusion: We have found independent predictors of BCVI in pediatric patients. These may help in identifying children that may benefit from screening with CTA of the neck.
abstract_id: PUBMED:32174162
An Update in Imaging of Blunt Vascular Neck Injury. Traumatic injuries of the cervical carotid and vertebral arteries, collectively referred to as blunt cerebrovascular injury (BCVI), can result in significant patient morbidity and mortality, with one of the most feared outcomes being cerebrovascular ischemia. Systematic imaging-guided screening for BCVI aims for early detection to guide timely management. In particular, accurate detection of the severity and grade of BCVI is paramount in guiding initial management. Furthermore, follow-up imaging is required to decide the duration of antithrombotic therapy. In this article, classification of the grades of BCVI and associated imaging findings will be outlined and diagnostic pitfalls and mimickers that can confound diagnosis will be described. In addition, updates to existing screening guidelines and recent efforts of criteria modification to improve detection of BCVI cases will be reviewed. The advent of postprocessing tools applied to conventional computed tomography (CT) angiograms and new diagnostic tools in dual energy CT for improved detection will also be discussed.
abstract_id: PUBMED:23232377
Cervical arterial injury after blunt trauma in children: characterization and advanced imaging. Background: The incidence of cervical vascular injury (CVI) after blunt cervical trauma in children and adolescents is low. Potential harm from missed injury is high. Screening for CVI has increased with advances in noninvasive angiography, including computed tomographic angiography (CTA) and magnetic resonance angiography (MRA). We attempt to characterize CVI in children and adolescents and evaluate the utility of advanced imaging in CVI screening in this patient population.
Methods: Clinical and radiographic records of consecutive patients aged 4 to 18 years with blunt cervical spine trauma from 1998 to 2008 were reviewed. Patient demographics, injury pattern, neurological findings, and treatment were recorded.
Results: Sixty-one patients were identified. Nineteen underwent screening to evaluate for CVI, including 12 males and 7 females, mean age 13.5 years. The most common mechanism of injury was motor vehicle collision (n=11). Seven patients underwent MRA, 7 CTA, 3 had both studies, and 2 had traditional angiography. Seven patients had CVI, with an overall incidence of 11.5%. High-risk criteria (fracture extension to transverse foramina, fracture/dislocations or severe subluxations, or C1-C3 injury) were associated with increased rates of CVI. Neurological injury was found in 12/19 patients screened and 6/7 patients with CVI. Two of 7 patients underwent anticoagulation due to documented CVI. No delayed-onset ischemic neurological events occurred.
Conclusions: After blunt cervical spine trauma, certain fracture patterns increase the risk of CVI. CVI is common, with a minimum incidence of 7/61 or >10% of pediatric patients with blunt cervical spine injury. Over 1/4 of patients studied on the basis of high-risk criteria had injury. Advanced imaging with noninvasive angiography (CTA/MRA) should be strongly considered in pediatric patients with cervical spine trauma. The presence of CVI may prompt a change in management.
Level Of Evidence: Level IV-retrospective diagnostic study.
abstract_id: PUBMED:31754533
Spectrum of multi-detector computed tomography imaging findings of thoracic vascular injuries secondary to blunt chest trauma: Correlation with vascular intervention and patient outcomes. Background: Thoracic vascular injuries following blunt chest trauma are the second leading cause of trauma-related deaths. Multi-detector computed tomography (MDCT) is the imaging modality of choice in detecting these injuries.
Objectives: To determine the spectrum of vascular injuries detected on MDCT imaging in patients who sustained blunt chest trauma, and to assess the various types of management options and patient outcomes.
Method: We retrospectively reviewed archived medical records of polytrauma patients who presented with blunt chest trauma and confirmed vascular injury on MDCT and vascular intervention images between May 2015 and August 2018 at Inkosi Albert Luthuli Central Hospital.
Results: Thirty-nine patients with vascular injury findings were analysed. The injury spectrum comprised: 15 aortic injuries (AI), 19 non-aortic injuries (NAI), 4 combined (AI and NAI) and 1 aorto-venous injury. A majority of males (69%) with an overall mean age of 39 years constituted the study cohort. The commonest injury mechanisms included motor vehicle collisions (61%) and pedestrian accidents (28%); the remaining 11% were shared amongst motorbike accidents or falling from a moving train or a height. The subclavian artery (36%) was the most common anatomical location in the NAI and the frequent imaging finding was vessel occlusion (55%). The most common imaging findings in AI were the indirect signs (20.5%) followed by a grade III injury (15.4%). Six patients with a grade III AI were successfully managed with endovascular repair.
Conclusion: A thorough knowledge of blunt vascular injury spectrums and imaging manifestations is critical when interpreting MDCT scans. Awareness of the mechanism of injury will trigger a high index of suspicion and probe a search for a vascular injury.
Answer: Craniofacial gunshot injuries may indeed be an unrecognized risk factor for blunt cervical vascular injuries. A study reviewing institutional experience with cervical arterial injuries remote from the penetrating tract in the setting of craniofacial gunshot injuries found that the incidence of "indirect" cervical arterial injuries was 1.2% in their patient population, or 2.8% of patients who underwent CT angiography. This incidence is comparable to or slightly higher than those seen in pure blunt trauma. The study suggests that screening patients with craniofacial gunshot injuries with CT angiography may reveal unexpected cervical vascular injuries remote from the penetrating tract, although the significance and optimal therapy for these injuries are not yet fully understood (PUBMED:22569994). |
Instruction: Do shared barriers when reporting to work during an influenza pandemic influence hospital workers' willingness to work?
Abstracts:
abstract_id: PUBMED:25882124
Do shared barriers when reporting to work during an influenza pandemic influence hospital workers' willingness to work? A multilevel framework. Objective: Characteristics associated with interventions and barriers that influence health care workers' willingness to report for duty during an influenza pandemic were identified. Additionally, this study examined whether workers who live in proximal geographic regions shared the same barriers and would respond to the same interventions.
Methods: Hospital employees (n=2965) recorded changes in willingness to work during an influenza pandemic on the basis of interventions aimed at mitigating barriers. Distance from work, hospital type, job role, and family composition were examined by clustering the effects of barriers from reporting for duty and region of residence.
Results: Across all workers, providing protection for the family was the greatest motivator for willingness to work during a pandemic. Respondents who expressed the same barriers and lived nearby shared common responses in their willingness to work. Younger employees and clinical support staff were more receptive to interventions. Increasing distance from home to work was significantly associated with a greater likelihood to report to work for employees who received time off.
Conclusions: Hospital administrators should consider the implications of barriers and areas of residence on the disaster response capacity of their workforce. Our findings underscore communication and development of preparedness plans to improve the resilience of hospital workers to mitigate absenteeism.
abstract_id: PUBMED:26371972
Hospital Employee Willingness to Work during Earthquakes Versus Pandemics. Background: Research indicates that licensed health care workers are less willing to work during a pandemic and that the willingness of nonlicensed staff to work has had limited assessment.
Objective: We sought to assess and compare the willingness to work in all hospital workers during pandemics and earthquakes.
Methods: An online survey was distributed to Missouri hospital employees. Participants were presented with 2 disaster scenarios (pandemic influenza and earthquake); willingness, ability, and barriers to work were measured. T tests compared willingness to work during a pandemic vs. an earthquake. Multivariate linear regression analyses were conducted to describe factors associated with a higher willingness to work.
Results: One thousand eight hundred twenty-two employees participated (15% response rate). More willingness to work was reported for an earthquake than a pandemic (93.3% vs. 84.8%; t = 17.1; p < 0.001). Significantly fewer respondents reported the ability to work during a pandemic (83.5%; t = 17.1; p < 0.001) or an earthquake (89.8%; t = 13.3; p < 0.001) compared to their willingness to work. From multivariate linear regression, factors associated with pandemic willingness to work were as follows: 1) no children ≤3 years of age; 2) older children; 3) working full-time; 4) less concern for family; 5) less fear of job loss; and 6) vaccine availability. Earthquake willingness factors included: 1) not having children with special needs and 2) not working a different role.
Conclusion: Improving care for dependent family members, worker protection, cross training, and job importance education may increase willingness to work during disasters.
abstract_id: PUBMED:25807865
Healthcare workers' willingness to work during an influenza pandemic: a systematic review and meta-analysis. To estimate the proportion of healthcare workers (HCWs) willing to work during an influenza pandemic and identify associated risk factors, we undertook a systematic review and meta-analysis compliant with PRISMA guidance. Databases and grey literature were searched to April 2013, and records were screened against protocol eligibility criteria. Data extraction and risk of bias assessments were undertaken using a piloted form. Random-effects meta-analyses estimated (i) pooled proportion of HCWs willing to work and (ii) pooled odds ratios of risk factors associated with willingness to work. Heterogeneity was quantified using the I(2) statistic, and publication bias was assessed using funnel plots and Egger's test. Data were synthesized narratively where meta-analyses were not possible. Forty-three studies met our inclusion criteria. Meta-analysis of the proportion of HCWs willing to work was abandoned due to excessive heterogeneity (I(2) = 99.2%). Narrative synthesis showed study estimates ranged from 23.1% to 95.8% willingness to work, depending on context. Meta-analyses of specific factors showed that male HCWs, physicians and nurses, full-time employment, perceived personal safety, awareness of pandemic risk and clinical knowledge of influenza pandemics, role-specific knowledge, pandemic response training, and confidence in personal skills were statistically significantly associated with increased willingness. Childcare obligations were significantly associated with decreased willingness. HCWs' willingness to work during an influenza pandemic was moderately high, albeit highly variable. Numerous risk factors showed a statistically significant association with willingness to work despite significant heterogeneity between studies. None of the included studies were based on appropriate theoretical constructs of population behaviour.
abstract_id: PUBMED:22146669
Health care workers' ability and willingness to report to work during public health emergencies. Objectives: We conducted a county-wide survey to assess the ability and willingness of health care workers to report to work during a pandemic influenza and a severe earthquake and to identify barriers and strategies that would help them report to work.
Methods: A stratified random sample of 9211 health care workers was selected from the Washington state licensure database and from health care agencies. We assessed correlates between self-reported ability and willingness to report to work and demographic and employer-related variables under two scenarios, influenza pandemic and a severe earthquake.
Results: For the influenza pandemic scenario, 95% of respondents reported that they would be able and 89% reported that they would be willing to report to their usual place of work. Seventy-four percent of respondents reported that they would be able and 88% would be willing to report to their usual place of work following a severe earthquake. The most frequently cited strategies that would help respondents report to work during an influenza pandemic were the availability of anti-viral influenza treatment and the ability to work from home. For persons with children at home, the strategy to increase ability to report to work during an earthquake was the availability of child care.
Conclusions: The majority of the King County health care workforce is willing and able to respond to an influenza pandemic or a severe earthquake.
abstract_id: PUBMED:21769203
Who is willing to risk his life for a patient with a potentially fatal, communicable disease during the peak of A/H1N1 pandemic in Israel? Background: The willingness of healthcare workers to risk their lives for a patient with a potentially fatal, communicable disease is a major concern, especially during a pandemic where the need for adequate staffing is crucial and where the public atmosphere might enhance anxiety and fear of exposure.
Objective: To examine the relationships between the willingness of healthcare workers to risk their lives for a patient with a potentially fatal A/H1N1 flu, and knowledge of personal protection against infection, and trust in colleagues, workplace preparedness and the effectiveness of safety measures, during the winter A/H1N1 pandemic in Israel.
Materials And Methods: A questionnaire was distributed to healthcare workers in 21 hospitals in Israel between 26 November 2009 and 10 December 2009 (the peak of the winter A/H1N1 flu outbreak). The questionnaire was completed by 1147 healthcare workers.
Results: Willingness to risk one's life for a patient was significantly lower in females, respondents of younger age (18-24 years), administrative staff, and those with a non-academic education, as well as among those with a less knowledge about safety measures and among those with less trust in colleagues, in work place preparedness, and in the effectiveness of safety measures.
Conclusions: Willingness to risk one's life for a patient is related to knowledge of safety measures, and trust in colleagues and work place preparedness. Education programs to enhance trust in colleagues, improve work place preparedness, and safety measures are recommended, especially for healthcare workers who are young, inexperienced, female, or administrative staff.
abstract_id: PUBMED:20836796
Pre-pandemic planning survey of healthcare workers at a tertiary care children's hospital: ethical and workforce issues. Background: Prior to the development of written policies and procedures for pandemic influenza, worker perceptions of ethical and workforce issues must be identified.
Objective: To determine the relationship between healthcare worker (HCW) reporting willingness to work during a pandemic and perception of job importance, belief that one will be asked to work, and sense of professionalism and to assess HCW's opinions regarding specific policy issues as well as barriers and motivators to work during a pandemic.
Methods: A survey was conducted in HCWs at The Children's Hospital in Denver, Colorado, from February to June 2007. Characteristics of workers reporting willingness to work during a pandemic were compared with those who were unwilling or unsure. Importance of barriers and motivators was compared by gender and willingness to work.
Results: Sixty percent of respondents reported willingness to work (overall response rate of 31%). Belief one will be asked to work (OR 4.6, P < 0.0001) and having a high level of professionalism (OR 8.6, P < 0.0001) were associated with reporting willingness to work. Hospital infrastructure support staffs were less likely to report willingness to work during a pandemic than clinical healthcare professionals (OR 0.39, P < 0.001). Concern for personal safety, concern for safety of family, family's concern for safety, and childcare issues were all important barriers to coming to work.
Conclusions: Educational programs should focus on professional responsibility and the importance of staying home when ill. Targeted programs toward hospital infrastructure support and patient and family support staff stressing the essential nature of these jobs may improve willingness to work.
abstract_id: PUBMED:19952885
Mitigating absenteeism in hospital workers during a pandemic. Objectives: An influenza pandemic, as with any disaster involving contagion or contamination, has the potential to influence the number of health care employees who will report for duty. Our project assessed the uptake of proposed interventions to mitigate absenteeism in hospital workers during a pandemic.
Methods: Focus groups were followed by an Internet-based survey of a convenience sample frame of 17,000 hospital workers across 5 large urban facilities. Employees were asked to select their top barrier to reporting for duty and to score their willingness to work before and after a series of interventions were offered to mitigate it.
Results: Overall, 2864 responses were analyzed. Safety concerns were the most frequently cited top barrier to reporting for work, followed by issues of dependent care and transportation. Significant increases in employee willingness to work scores were observed from mitigation strategies that included preferential access to antiviral medication or personal protective equipment for the employee as well as their immediate family.
Conclusions: The knowledge base on workforce absenteeism during disasters is growing, although in general this issue is underrepresented in emergency planning efforts. Our data suggest that a mitigation strategy that includes options for preferential access to either antiviral therapy, protective equipment, or both for the employee as well as his or her immediate family will have the greatest impact. These findings likely have import for other disasters involving contamination or contagion, and in critical infrastructure sectors beyond health care.
abstract_id: PUBMED:20659340
Characterizing hospital workers' willingness to report to duty in an influenza pandemic through threat- and efficacy-based assessment. Background: Hospital-based providers' willingness to report to work during an influenza pandemic is a critical yet under-studied phenomenon. Witte's Extended Parallel Process Model (EPPM) has been shown to be useful for understanding adaptive behavior of public health workers to an unknown risk, and thus offers a framework for examining scenario-specific willingness to respond among hospital staff.
Methods: We administered an anonymous online EPPM-based survey about attitudes/beliefs toward emergency response, to all 18,612 employees of the Johns Hopkins Hospital from January to March 2009. Surveys were completed by 3426 employees (18.4%), approximately one third of whom were health professionals.
Results: Demographic and professional distribution of respondents was similar to all hospital staff. Overall, more than one-in-four (28%) hospital workers indicated they were not willing to respond to an influenza pandemic scenario if asked but not required to do so. Only an additional 10% were willing if required. One-third (32%) of participants reported they would be unwilling to respond in the event of a more severe pandemic influenza scenario. These response rates were consistent across different departments, and were one-third lower among nurses as compared with physicians. Respondents who were hesitant to agree to work additional hours when required were 17 times less likely to respond during a pandemic if asked. Sixty percent of the workers perceived their peers as likely to report to work in such an emergency, and were ten times more likely than others to do so themselves. Hospital employees with a perception of high efficacy had 5.8 times higher declared rates of willingness to respond to an influenza pandemic.
Conclusions: Significant gaps exist in hospital workers' willingness to respond, and the EPPM is a useful framework to assess these gaps. Several attitudinal indicators can help to identify hospital employees unlikely to respond. The findings point to certain hospital-based communication and training strategies to boost employees' response willingness, including promoting pre-event plans for home-based dependents; ensuring adequate supplies of personal protective equipment, vaccines and antiviral drugs for all hospital employees; and establishing a subjective norm of awareness and preparedness.
abstract_id: PUBMED:31800338
Emergency Medical Services Personnel's Pandemic Influenza Training Received and Willingness to Work during a Future Pandemic. Objective: Identify determinants of emergency medical service (EMS) personnel's willingness to work during an influenza pandemic. Background: Little is known about the willingness of EMS personnel to work during a future influenza pandemic or the extent to which they are receiving pandemic training. Methods: EMS personnel were surveyed in July 2018 - Feb 2019 using a cross-sectional approach; the survey was available both electronically and on paper. Participants were provided a pandemic scenario and asked about their willingness to respond if requested or required; additional questions assessed their attitudes and beliefs and training received. Chi-square tests assessed differences in attitude/belief questions by willingness to work. Logistic regressions were used to identify significant predictors of response willingness when requested or required, controlling for gender and race. Results: 433 individuals completed the survey (response rate = 82.9%). A quarter (26.8%, n = 116) received no pandemic training; 14.3% (n = 62) participated in a pandemic exercise. Significantly more EMS personnel were willing to work when required versus when only requested (88.2% vs 76.9%, X2 = 164.1, p < .001). Predictors of willingness to work when requested included believing it is their responsibility to work, believing their coworkers were likely to work, receiving prophylaxis for themselves and their family members, and feeling safe working during a pandemic. Discussion: Many emergency medical services personnel report lacking training or disaster exercises related to influenza pandemics, and a fair percentage are unwilling to work during a future event. This may limit healthcare surge capacity and could contribute to increased morbidity and mortality. Findings from this study indicate that prehospital staff's attitudes and beliefs about pandemics influence their willingness to work. Pre-event training and planning should address these concerns.
abstract_id: PUBMED:30501496
Nothing to sneeze at - uptake of protective measures against an influenza pandemic by people with schizophrenia: willingness and perceived barriers. Objectives: To examine willingness to adopt protective behaviours, and perceived barriers, during a pandemic influenza, in people with schizophrenia.
Methods: A cross-sectional study using a questionnaire was conducted exploring the responses of 71 adults with schizophrenia and 238 adults without schizophrenia attending a general practice setting, regarding willingness and perceived barriers to adopting protective measures against the 2009 swine influenza pandemic in Australia.
Results: The majority of participants with schizophrenia reported that they would be at least moderately willing to be vaccinated (74.2%), isolate themselves (73.2%), wear a face mask (54.9%) and increase hand washing (88.6%). However, 71.8% were concerned about "catching" flu from vaccination. Predictors of willingness to adopt protective actions included self-efficacy (vaccination, face mask, isolation), perceived likelihood of contracting swine flu (vaccination), educational status (face mask) and perceived overall risk from swine flu (face mask). Key modifiable perceived barriers to adopting protective measures were identified, including cost and need for transport assistance for vaccination.
Conclusions: People with schizophrenia report being generally willing to adopt protective measures, especially increased hand washing, during a pandemic influenza. Understanding perceived barriers may enable development of effective interventions to increase uptake of protective measures.
Answer: Yes, shared barriers when reporting to work during an influenza pandemic do influence hospital workers' willingness to work. A study identified that providing protection for the family was the greatest motivator for willingness to work during a pandemic, and respondents who expressed the same barriers and lived nearby shared common responses in their willingness to work (PUBMED:25882124). Additionally, factors such as having no children under the age of three, working full-time, less concern for family, less fear of job loss, and vaccine availability were associated with a higher willingness to work during a pandemic (PUBMED:26371972). Furthermore, a systematic review and meta-analysis found that childcare obligations were significantly associated with decreased willingness to work among healthcare workers (HCWs) during an influenza pandemic (PUBMED:25807865). These findings suggest that shared barriers, particularly those related to family and personal safety concerns, are significant factors influencing hospital workers' willingness to report for duty during an influenza pandemic. |
Instruction: Explaining educational inequalities in adolescent life satisfaction: do health behaviour and gender matter?
Abstracts:
abstract_id: PUBMED:24368542
Explaining educational inequalities in adolescent life satisfaction: do health behaviour and gender matter? Objectives: There is little evidence on the explanation of health inequalities based on a gender sensitive perspective. The aim was to investigate to what extent health behaviours mediate the association between educational inequalities and life satisfaction of boys and girls.
Methods: Data were derived from the German part of the Health Behaviour in School-aged Children (HBSC) study 2010 (n = 5,005). Logistic regression models were conducted to investigate educational inequalities in life satisfaction among 11- to 15-year-old students and the relative impact of health behaviour in explaining these inequalities.
Results: Educational inequalities in life satisfaction were more pronounced in boys than in girls from lower educational tracks (OR 2.82, 95 % CI 1.97-4.05 and OR 2.30, 95 % CI 1.68-3.14). For adolescents belonging to the lowest educational track, behavioural factors contributed to 18 % (boys) and 39 % (girls) in the explanation of educational inequalities in life satisfaction.
Conclusions: The relationship between educational track and life satisfaction is substantially mediated by health-related behaviours. To tackle inequalities in adolescent health, behavioural factors should be targeted at adolescents from lower educational tracks, with special focus on gender differences.
abstract_id: PUBMED:26310848
Educational inequalities in smoking over the life cycle: an analysis by cohort and gender. Objectives: The study investigates the life cycle patterns of educational inequalities in smoking according to gender over three successive generations.
Methods: Based on retrospective smoking histories collected by the nationwide French Health Barometer survey 2010, we explored educational inequalities in smoking at each age, using the relative index of inequality.
Results: Educational inequalities in smoking increase across cohorts for men and women, corresponding to a decline in smoking among the highly educated alongside progression among the lower educated. The analysis also shows a life cycle evolution: for all cohorts and for men and women, inequalities are considerable during adolescence, then start declining from 18 years until the age of peak prevalence (around 25), after which they remain stable throughout the life cycle, even tending to rise for the most recent cohort.
Conclusions: This analysis contributes to the description of the "smoking epidemic" and highlights adolescence and late adulthood as life cycle stages with greater inequalities.
abstract_id: PUBMED:32438706
Educational Inequalities in Life and Healthy Life Expectancies among the 50-Plus in Spain. This study computes educational inequalities in life expectancy (LE), healthy life expectancy (HLE), and unhealthy life expectancy (ULE) by gender and education level in Spain in 2012. Death registrations and vital status by level of education were obtained from Spain's National Institute of Statistics. Health prevalences were estimated from the National Health Survey for Spain. We used Sullivan's method to compute HLE, ULE, and the proportion of time lived with health problems. Our results reveal that Spanish women live longer than men in all education groups, but a higher proportion of women report poor health. We detect substantial differences in unhealthy life by gender and education, with higher effect for women and for those with low levels of education. Poor self-perceived health shows the largest educational gradient; chronic diseases present the lowest. This is the first work that provides evidence on health inequalities by education level in Spain. Our findings seem to be in line with reports of the smaller social inequalities experienced in Southern Europe and highlight the importance of education level on extending the proportion of years spent in good health in a Mediterranean country.
abstract_id: PUBMED:31561487
Adolescent Socioeconomic Status and Mental Health Inequalities in the Netherlands, 2001-2017. Even in wealthy countries there are substantial socioeconomic inequalities in adolescent mental health. Socioeconomic status (SES) indicators-parental SES, adolescent subjective SES and adolescent educational level-are negatively associated with adolescent mental health problems, but little is known about the interplay between these SES indicators and whether associations have changed over time. Using data from the Dutch Health Behaviour in School-Aged Children (HBSC) studies (n = 27,020) between 2001 and 2017, we examined associations between three SES indicators and six indicators of adolescent mental health problems. Linear regressions revealed that adolescent subjective SES and adolescent educational level were independently negatively associated with adolescent mental health problems and positively associated with adolescent life satisfaction, but parental SES had negligible independent associations with adolescent mental health problems and life satisfaction. However, when interactions between SES indicators were considered, high adolescent subjective SES was shown to buffer the negative association between parental SES and adolescent mental health problems and the positive association between parental SES and life satisfaction. Despite societal changes between 2001 and 2017, socioeconomic inequalities in adolescent mental health were stable during this period. Findings suggest that all three SES indicators-parental SES, adolescent subjective SES and adolescent educational level-are important for studying socioeconomic inequalities in adolescent mental health.
abstract_id: PUBMED:29180946
Educational inequalities in late-life depression across Europe: results from the generations and gender survey. This study explores country- and gender-stratified educational differences in depression among older adults from 10 European countries. We examine inequalities in both absolute (prevalence differences) and relative (odds ratios) terms and in bivariate and multivariate models. We use cross-sectional, nationally representative data from the generations and gender survey. The analysis comprises 27,331 Europeans aged 60-80. Depression is measured with a seven-item version of the Center for Epidemiologic Studies Depression scale. Findings show considerable between-country heterogeneity in late-life depression. An East-West gradient is evident, with rates of depression up to three times higher in Eastern European than in Scandinavian countries. Rates are about twice as high among women than men in all countries. Findings reveal marked absolute educational gaps in depression in all countries, yet the gaps are larger in weaker welfare states. This pattern is less pronounced for the relative inequalities, especially for women. Some countries observe similar relative inequalities but vastly different absolute inequalities. We argue that the absolute differences are more important for social policy development and evaluation. Educational gradients in depression are strongly mediated by individual-level health and financial variables. Socioeconomic variation in late-life depression is greater in countries with poorer economic development and welfare programs.
abstract_id: PUBMED:30200912
What's the difference? A gender perspective on understanding educational inequalities in all-cause and cause-specific mortality. Background: Material and behavioural factors play an important role in explaining educational inequalities in mortality, but gender differences in these contributions have received little attention thus far. We examined the contribution of a range of possible mediators to relative educational inequalities in mortality for men and women separately.
Methods: Baseline data (1991) of men and women aged 25 to 74 years participating in the prospective Dutch GLOBE study were linked to almost 23 years of mortality follow-up from Dutch registry data (6099 men and 6935 women). Cox proportional hazard models were used to calculate hazard ratios with 95% confidence intervals, and to investigate the contribution of material (financial difficulties, housing tenure, health insurance), employment-related (type of employment, occupational class of the breadwinner), behavioural (alcohol consumption, smoking, leisure and sports physical activity, body mass index) and family-related factors (marital status, living arrangement, number of children) to educational inequalities in all-cause and cause-specific mortality, i.e. mortality from cancer, cardiovascular disease, other diseases and external causes.
Results: Educational gradients in mortality were found for both men and women. All factors together explained 62% of educational inequalities in mortality for lowest educated men, and 71% for lowest educated women. Yet, type of employment contributed substantially more to the explanation of educational inequalities in all-cause mortality for men (29%) than for women (- 7%), whereas the breadwinner's occupational class contributed more for women (41%) than for men (7%). Material factors and employment-related factors contributed more to inequalities in mortality from cardiovascular disease for men than for women, but they explained more of the inequalities in cancer mortality for women than for men.
Conclusions: Gender differences in the contribution of employment-related factors to the explanation of educational inequalities in all-cause mortality were found, but not of material, behavioural or family-related factors. A full understanding of educational inequalities in mortality benefits from a gender perspective, particularly when considering employment-related factors.
abstract_id: PUBMED:35417815
Educational inequalities in risk perception, perceived effectiveness, trust and preventive behaviour in the onset of the COVID-19 pandemic in Germany. Objectives: This study analysed educational inequalities in risk perception, perceived effectiveness, trust and adherence to preventive behaviours in the onset of the COVID-19 pandemic in Germany.
Study Design: This was a cross-sectional online survey.
Methods: Data were obtained from the GESIS Panel Special Survey on the coronavirus SARS-CoV-2 Outbreak in Germany, including 2949 participants. Stepwise linear regression was conducted to analyse educational inequalities in risk perception, perceived effectiveness, trust and adherence to preventive behaviours considering age, gender, family status and household size as covariates.
Results: We found lower levels in risk perception, trust towards scientists and adherence to preventive behaviour among individuals with lower education, a lower level of trust towards general practitioners among individuals with higher education and no (clear) educational inequalities in perceived effectiveness and trust towards local and governmental authorities.
Conclusion: The results underline the relevance of a comprehensive and strategic management in communicating the risks of the pandemic and the benefits of preventive health behaviours by politics and public health. Risk and benefit communication must be adapted to the different needs of social groups in order to overcome educational inequalities in risk perception, trust and adherence to preventive behaviour.
abstract_id: PUBMED:33598526
Determinants of educational inequalities in disability-free life expectancy between ages 35 and 80 in Europe. Socioeconomic inequalities in disability-free life expectancy (DFLE) exist across all European countries, yet the driving determinants of these differences are not completely known. We calculated the impact on educational inequalities in DFLE of equalizing the distribution of eight risk factors for mortality and disability using register-based mortality data and survey data from 15 European countries for individuals between 35 and 80 years old. From the selected risk factors, the ones that contribute the most to the educational inequalities in DFLE are low income, high body-weight, smoking (for men), and manual occupation of the father. Potentially large reductions in inequalities can be achieved in Eastern European countries, where educational inequalities in DFLE are also the largest.
abstract_id: PUBMED:28980028
Trends in educational inequalities in smoking among adolescents in Germany : Evidence from four population-based studies Background: In Germany, smoking prevalence among adolescents has significantly declined since the early 2000s. However, data show that adolescent smoking rates considerably differ between different types of secondary schools. The aim of our study was to examine how educational inequalities in adolescent smoking behaviour have developed over time.
Methods: Data were used from four population-based studies (each consisting of repeated cross-sectional surveys from 2001-2015): the representative surveys of the Federal Centre for Health Education, the German Health Interview and Examination Survey for Children and Adolescents, the Health Behaviour in School-aged Children Study, and the European School Survey Project on Alcohol and Other Drugs. Each study comprised different age groups (within the age range of 11-17 years) and used different smoking measures. Adolescents' educational status was based on the attended type of secondary school. Absolute and relative educational inequalities were presented as prevalence differences and prevalence ratios, respectively.
Results: Despite methodical differences, all four studies similarly reveal that adolescent smoking rates have significantly declined in all educational groups. However, lower smoking rates among secondary school students attending higher educational tracks could be observed. While absolute educational inequalities tended to decrease over time, relative inequalities between educational groups remained rather stable or even increased.
Discussion: Declining adolescent smoking rates suggest that smoking may have lost some of its attractiveness for young people. Our findings further emphasize the importance of tobacco control measures such as raising cigarette taxes, smoking bans, and increasing minimum legal age for tobacco purchase. As relative educational inequalities in adolescent smoking rates did not diminish over time, setting- and target group-specific interventions should focus more on students in middle and lower secondary school tracks.
abstract_id: PUBMED:37588766
Educational disparities in disability-free life expectancy across Europe: A focus on the East-West gaps from a gender perspective. Education plays a crucial role in shaping the health outcomes of adults. This study examines the relationship between educational attainment and health across Europe. Using data from the Survey of Health, Ageing and Retirement in Europe (SHARE), we estimate educational inequalities in disability-free life expectancy (DFLE) by gender in seven Western European (2004-2019) and three Central and Eastern European (CEE) (2010-2019) countries. We exploit a novel approach that combines the Sullivan method and multivariate life tables to calculate DFLE using SHARE data. We find that educational differences in DFLE favoring the better-educated exist in both CEE and Western European countries, but also that the differences across countries are more pronounced among the low-educated. While the absolute gaps in DFLE between low- and high-educated individuals in CEE and Western European countries are similar, the educational disparities in DFLE impose a more significant burden on the CEE populations due to their overall lower life expectancy. Educational inequalities are larger among women than among men in CEE countries, while the results for Western European countries are mixed. Our findings further highlight the important role of the institutional context in mitigating or exacerbating educational inequalities in health.
Answer: Educational inequalities in adolescent life satisfaction are influenced by both health behavior and gender. The study from the German part of the Health Behaviour in School-aged Children (HBSC) found that educational inequalities in life satisfaction were more pronounced in boys than in girls from lower educational tracks. For adolescents belonging to the lowest educational track, behavioral factors contributed to 18% of the explanation for boys and 39% for girls in the explanation of educational inequalities in life satisfaction (PUBMED:24368542). This suggests that health-related behaviors play a significant role in mediating the relationship between educational track and life satisfaction, and that these factors should be targeted in interventions aimed at adolescents from lower educational tracks, with a special focus on gender differences.
Moreover, the study on educational inequalities in smoking over the life cycle also highlights adolescence as a life cycle stage with greater inequalities in health behaviors such as smoking, which can impact life satisfaction (PUBMED:26310848). The study on educational inequalities in late-life depression across Europe further supports the notion that educational gradients in health behaviors and outcomes can have long-lasting effects, with marked educational gaps in depression observed in all countries (PUBMED:29180946).
In addition, the study on adolescent socioeconomic status and mental health inequalities in the Netherlands found that adolescent subjective SES and educational level were independently associated with adolescent mental health problems and life satisfaction, indicating that these SES indicators are important for understanding socioeconomic inequalities in adolescent mental health (PUBMED:31561487).
Overall, these findings underscore the importance of considering both health behavior and gender when examining educational inequalities in adolescent life satisfaction. Behavioral factors are significant mediators of the relationship between education and life satisfaction, and gender differences in these relationships must be taken into account in efforts to address and reduce educational inequalities in health and well-being. |
Instruction: Total aortic arch replacement with frozen elephant trunk in acute type A aortic dissections: are we pushing the limits too far?
Abstracts:
abstract_id: PUBMED:36820356
Total arch replacement with extended branched stented anastomosis frozen elephant trunk repair for type A dissection improves operative outcome. Objective: Emergency surgical repair is the standard treatment for acute aortic dissection type A. However, the surgical risk of total arch replacement remains high. The Viabahn Open Revascularization TEChnique has been used for supra-aortic reconstruction during total arch replacement. This Cleveland Clinic technique is called "branched stented anastomosis frozen elephant trunk repair." Our total arch replacement with reconstructed extended branched stented anastomosis frozen elephant trunk repair requires no unnecessary cervical artery exposure. We compared the outcomes of extended branched stented anastomosis frozen elephant trunk repair and conventional total arch replacement in acute aortic dissection type A.
Methods: We compared the clinical course of patients undergoing total arch replacement using sutureless direct branch vessel stent grafting with frozen elephant trunk (extended branched stented anastomosis frozen elephant trunk repair) for acute aortic dissection type A with patients undergoing conventional total arch replacement. For the procedure, the aortic arch was transected circumferentially distal to the brachiocephalic artery origin. Frozen elephant trunk was fenestrated by heating with a cautery, and the self-expandable stent graft was delivered into the branch vessels through the fenestration.
Results: Of 58 cases, 21 and 37 were classified in the extended branched stented anastomosis frozen elephant trunk repair and conventional total arch replacement groups, respectively. The times (minutes) of selective antegrade cerebral perfusion (75 ± 24, 118 ± 47), total operation (313 ± 83, 470 ± 151), and cardiopulmonary bypass (195 ± 46, 277 ± 96) were significantly better in the extended branched stented anastomosis frozen elephant trunk repair group (P < .001). Six surgical deaths occurred: 2 (9%) in the extended branched stented anastomosis frozen elephant trunk repair group and 4 (10%) in the conventional total arch replacement group. In all cases, only 1 patient (2%) in the conventional total arch replacement group had a branch artery-related complication during the postoperative follow-up period. In the extended branched stented anastomosis frozen elephant trunk repair group, blood product use significantly decreased (P < .05).
Conclusions: Extended branched stented anastomosis frozen elephant trunk repair has shown comparable safety and efficacy to conventional total arch replacement and can be used for acute aortic dissection type A emergency repair. It optimizes true lumen perfusion and facilitates supra-aortic artery remodeling.
abstract_id: PUBMED:33181309
Total Aortic Arch Replacement and Frozen Elephant Trunk. Aortic arch pathologies have been a surgical challenge, involving cerebral, visceral and myocardial protection. Innovative techniques including total arch replacement and frozen elephant trunk had evolved over last decades with promising mid-term outcomes. We evaluate our mid-term outcomes on total arch replacement with frozen elephant trunk and the role of timely second staged interventions. Between August 2014 and April 2020, 41 patients with aortic arch pathologies underwent total arch replacement with frozen elephant trunk with Thoraflex-Hybrid-Plexus device (Vascutek, Inchinnan, Scotland). Patients' perioperative, clinical and radiological outcomes were reviewed. Post discharge survival (n = 37) at 3 year was 100%. Overall survival of 85.3% over a median follow up of 3.3 years, inpatient mortality of 9.8%. Distribution of aortic pathologies with acute type A dissection or intramural hematoma (n = 15, 36.6%), thoracic aortic aneurysm, including arch and descending aortic aneurysm (n = 9, 22%) and chronic aortic dissection including chronic type A and type B dissections (n = 13, 31.7%). Mean operative, circulatory arrest, and antegrade cerebral perfusion time were 417 ± 121 minutes, 89 ± 28 minutes, and 154 ± 43 minutes, respectively. Second stage procedures were performed in 32% and distal stent graft induced new entry was observed in 19% of patients. We reported an Asian series of Thoraflex with outstanding midterm clinical outcomes, given descending aortic pathologies were tackled with a timely second stage interventions. The observation of aortic remodeling and distal stent graft induced new entry requires further investigations.
abstract_id: PUBMED:33341270
Total arch replacement and frozen elephant trunk for acute type A aortic dissection. Objective: The present study aimed to evaluate the outcomes of total aortic arch replacement with proximalization of distal anastomosis using the frozen elephant trunk technique with the J Graft FROZENIX (Japan Lifeline, Tokyo, Japan) and Gelweave Lupiae (Vascutek Terumo Inc, Scotland, United Kingdom) graft (distal anastomosis performed in zones 1 and 2) in patients with acute Stanford type A acute aortic dissection.
Methods: A total of 50 patients underwent total aortic arch replacement using the frozen elephant trunk technique, deploying the J Graft FROZENIX into zone 1 or 2 (zone 1: n = 17, zone 2: n = 33) in combination with the Gelweave Lupiae graft for acute Stanford type A acute aortic dissection. Patient characteristics, intraoperative data, and early and midterm outcomes were analyzed.
Results: The overall in-hospital mortality rate was 4% (2 patients). The in-hospital mortality rate in patients with visceral malperfusion was 11% (1/9). There were no patients with paraplegia and stent graft-induced new entry. Resection or closure of the most proximal entry tear was achieved in 100% of 42 patients who had postoperative computed tomography. The overall survival was 87.9%, 84.1%, and 84.1% at 1, 2, and 3 years, respectively. However, 1 patient required endovascular extension for the dilatation of the descending thoracic aorta 4 months after the initial surgery.
Conclusions: Total aortic arch replacement with the frozen elephant trunk technique (zone 1-2) and Gelweave Lupiae graft was safe and effective in simplifying surgery for acute Stanford type A acute aortic dissection.
abstract_id: PUBMED:35463702
"Branch-First total arch replacement": a valuable alternative to frozen elephant trunk in acute type A aortic dissection? The "Branch-First total arch replacement" technique has been used extensively in both elective and acute situations, including in type A aortic dissection. The focus of the Branch-First technique is to reduce the risk of neurological and end-organ dysfunction associated with arch replacement by optimising neuroprotection, distal organ perfusion and myocardial protection. The Branch-First technique is a valuable alternative to the frozen elephant trunk (FET) technique in type A aortic dissection, providing a stable landing zone for subsequent interventions on the distal aorta should they be required. Combining the Branch-First technique with FET in appropriate cases can further improve outcomes. We discuss the merits of the Branch-First technique, and contrast them to those of FET techniques for repair of type A aortic dissection.
abstract_id: PUBMED:26767803
A Meta-Analysis of Total Arch Replacement With Frozen Elephant Trunk in Acute Type A Aortic Dissection. Objectives: To assess the safety and efficacy, we performed a meta-analysis of total arch replacement with frozen elephant trunk in exclusive acute type A (neither chronic nor type B) aortic dissection.
Methods: Databases including MEDLINE and EMBASE were searched through March 2015 using Web-based search engines (PubMed and OVID). Eligible studies were case series of frozen elephant trunk enrolling patients with acute type A (neither chronic nor type B) aortic dissection reporting at least early (in-hospital or 30-day) all-cause mortality. Study-specific estimates were combined in both fixed- and random-effect models.
Results: Fifteen studies enrolling 1279 patients were identified and included. Pooled analyses demonstrated the cardiopulmonary bypass time of 207.1 (95% confidence interval [CI], 186.1-228.1) minutes, aortic cross-clamp time of 123.3 (95% CI, 113.1-133.5) minutes, selective antegrade cerebral perfusion time of 49.3 (95% CI, 37.6-61.0) minutes, hypothermic circulatory arrest time of 39.0 (95% CI, 30.7-47.2) minutes, early mortality of 9.2% (95% CI, 7.7-11.0%), stroke of 4.8% (95% CI, 2.5-9.0%), spinal cord injury of 3.5% (95% CI, 1.9-6.6%), mid- to long-term (≥1-year) overall mortality of 13.0% (95% CI, 10.4-16.0%), reintervention of 9.6% (95% CI, 5.6-15.8%), and false lumen thrombosis of 96.8% (95% CI, 90.7-98.9%).
Conclusions: Total arch replacement with frozen elephant trunk provides a safe alternative to that with conventional elephant trunk in patients with acute type A aortic dissection, with acceptable early mortality and morbidity. The rates of mid- to long-term reintervention and false lumen non-thrombosis may be lower in patients undergoing the frozen than conventional elephant trunk procedure.
abstract_id: PUBMED:30902465
Total arch repair with frozen elephant trunk using the "zone 0 arch repair" strategy for type A acute aortic dissection. Objective: The aim of this study was to investigate the effect of frozen elephant trunk deployment from the zone 0 aorta to the descending aorta on early and midterm postoperative results in patients with acute type A aortic dissection.
Methods: Between October 2014 and April 2018, 108 patients underwent a combined strategy of frozen elephant trunk deployment, ascending aortic replacement, and arch vessel reconstruction ("zone 0 arch repair" strategy) for acute type A aortic dissection (excluding DeBakey type II). Of the 108 patients, 32 (29.6%) had primary tears of the aortic arch or descending aorta.
Results: The 30-day mortality rate was 2.8% (3 patients), and in-hospital mortality rate was 6.5% (7 patients). New-onset permanent neurologic dysfunction and spinal cord injury occurred in 3.7% and 0% of patients, respectively. Five of the 101 survivors underwent thoracic endovascular aortic repair during hospitalization (2 for rapid false lumen enlargement; 3 for true lumen stenosis). The overall survival was 89.8%, 88.1%, and 88.1% at 1, 2, and 3 years, respectively. The cumulative incidence of distal aortic reintervention was 5.8%, 9.1%, and 9.1% at 1, 2, and 3 years, respectively. Two patients underwent thoracic endovascular aortic repair for distal aortic enlargement after discharge.
Conclusions: The use of the "zone 0 arch repair" strategy can eliminate the need for invasive aortic arch resection. It also eliminates the false lumen and produces satisfactory early and midterm postoperative results. Therefore, it can be an alternative to hemiarch and total arch replacements, which are based on a conventional "tear-oriented resection" strategy.
abstract_id: PUBMED:35092274
Risk factors for stroke after total aortic arch replacement using the frozen elephant trunk technique. Objectives: This study aimed to analyse risk factors for postoperative stroke, evaluate the underlying mechanisms and report on outcomes of patients suffering a postoperative stroke after total aortic arch replacement using the frozen elephant trunk technique.
Methods: Two-hundred and fifty patients underwent total aortic arch replacement via the frozen elephant trunk technique between March 2013 and November 2020 for acute and chronic aortic pathologies. Postoperative strokes were evaluated interdisciplinarily by a cardiac surgeon, neurologist and radiologist, and subclassified to each's cerebral territory. We conducted a logistic regression analysis to identify any predictors for postoperative stroke.
Results: Overall in-hospital was mortality 10% (25 patients, 11 with a stroke). A symptomatic postoperative stroke occurred in 42 (16.8%) of our cohort. Eight thereof were non-disabling (3.3%), whereas 34 (13.6%) were disabling strokes. The most frequently affected region was the arteria cerebri media. Embolism was the primary underlying mechanism (n = 31; 73.8%). Mortality in patients with postoperative stroke was 26.2%. Logistic regression analysis revealed age over 75 (odds ratio = 3.25; 95% confidence interval 1.20-8.82; P = 0.021), a bovine arch (odds ratio = 4.96; 95% confidence interval 1.28-19.28; P = 0.021) and an acute preoperative neurological deficit (odds ratio = 19.82; 95% confidence interval 1.09-360.84; P = 0.044) as predictors for postoperative stroke.
Conclusions: Stroke after total aortic arch replacement using the frozen elephant trunk technique remains problematic, and most lesions are of embolic origin. Refined organ protection strategies, and sophisticated monitoring are mandatory to reduce the incidence of postoperative stroke, particularly in older patients presenting an acute preoperative neurological deficit or bovine arch.
abstract_id: PUBMED:33479773
Aortic remodelling effect of the frozen elephant trunk technique on Stanford type A acute aortic dissection. Total arch replacement using the frozen elephant trunk procedure is performed for true lumen expansion of the descending aorta in patients with type A acute aortic dissection. However, the remodelling effect of the frozen elephant trunk on the dissected descending aorta is unclear. We aimed to evaluate the effect of the frozen elephant trunk on postoperative descending aortic remodelling after surgery. Between December 2012 and January 2020, we retrospectively investigated 24 patients who underwent total arch replacement using the frozen elephant trunk for type A acute aortic dissection. Remodelling of the descending aorta was evaluated using computed tomography. The aortic remodelling effect, based on aortic true lumen ratio, was determined for (i) DeBakey type (type I versus type III retrograde); (ii) thoracic endovascular aneurysm repair reintervention status (reintervention versus no reintervention); and (iii) stent length of the frozen elephant trunk (60 vs 90 mm). Postoperative true lumen ratio significantly increased in the type I dissection group. The true lumen ratio in the no-reintervention group, which had many patients with the type I dissection, significantly increased after the frozen elephant trunk. Aortic remodelling due to the frozen elephant trunk can be expected after type I acute aortic dissections.
abstract_id: PUBMED:30039203
Editorial comment regarding "Total aortic arch replacement using the frozen elephant trunk technique with J Graft Open Stent Graft for distal aortic arch aneurysm". Total aortic arch replacement using the frozen elephant trunk (FET) technique has gained worldwide popularity, because it has simplified the treatment of complicated thoracic aortic lesions. The most effective use of FET has been total arch repair for acute-type A aortic dissection because of its particularly favorable prognosis. However, the use rate for true aneurysm in 2016 was half of the total 2150 cases using FET. The widespread use of FET is attributable to its ease of use. The incidence of FET-related paraplegia has decreased with the use of total aortic arch replacement but is still higher than in classic total arch replacement with distal arch anastomosis. A prospective multicenter study for FET is necessary.
abstract_id: PUBMED:36285489
Perfusion management of arch-clamping technique in total arch replacement with frozen elephant trunk. A technique called arch-clamping was used at our institute to ensure perfusion of the lower body and brain during total arch replacement with frozen elephant trunk (TAR and FET). The aortic arch clamp is inserted between the left common carotid artery and the left subclavian artery after inserting the stented elephant trunk into the true lumen of the descending aorta during the procedure, and then clamps the aorta and graft together as the distal anastomotic edge of the aorta. After the arch clamp was in place, lower body perfusion was resumed through the femoral artery was resumed and time to circulatory arrest was reduced to approximately 4 min. Cardiopulmonary bypass (CPB) flow was gradually restored to full rate. Thereafter, the left carotid artery anastomosis was completed and rewarming began. Finally, during the rewarming period, other branches of the aortic arch and ascending aorta were reconstructed. In this paper, we describe the perfusion management strategy, discuss intraoperative monitoring parameters, and examine the feasibility of the technique from a perfusion perspective.
Answer: Total aortic arch replacement with the frozen elephant trunk (FET) technique has become an increasingly utilized approach for managing acute type A aortic dissections, and the available literature suggests that it is a viable option with acceptable outcomes. The technique has been shown to simplify the treatment of complicated thoracic aortic lesions and is particularly favorable for acute type A aortic dissections due to its positive prognosis (PUBMED:30039203). The FET technique has been associated with true lumen expansion of the descending aorta, which is crucial for the management of type A acute aortic dissections (PUBMED:33479773).
Studies have reported various outcomes and refinements of the FET technique. For instance, the use of the "zone 0 arch repair" strategy, which involves FET deployment from the zone 0 aorta to the descending aorta, has shown satisfactory early and midterm postoperative results, with a low rate of distal aortic reintervention (PUBMED:30902465). Another study reported that total arch replacement with FET using the J Graft FROZENIX and Gelweave Lupiae graft was safe and effective in simplifying surgery for acute Stanford type A acute aortic dissection (PUBMED:33341270).
Moreover, a meta-analysis indicated that total arch replacement with FET provides a safe alternative to conventional elephant trunk procedures in patients with acute type A aortic dissection, with acceptable early mortality and morbidity (PUBMED:26767803). The extended branched stented anastomosis frozen elephant trunk repair, a variation of the FET technique, has shown comparable safety and efficacy to conventional total arch replacement and can be used for emergency repair of acute aortic dissection type A (PUBMED:36820356).
However, the risk of complications such as stroke remains a concern, with certain factors like age over 75, a bovine arch, and an acute preoperative neurological deficit identified as predictors for postoperative stroke (PUBMED:35092274). Additionally, the need for a prospective multicenter study for FET has been highlighted to better understand its risks and benefits (PUBMED:30039203).
In conclusion, while the FET technique for total aortic arch replacement in acute type A aortic dissections is pushing the boundaries of surgical management, the current evidence suggests that it is not necessarily pushing the limits too far. |
Instruction: Late recurrence of hepatocellular carcinoma after liver transplantation: is an active surveillance for recurrence needed?
Abstracts:
abstract_id: PUBMED:36925456
Predictors of early and late hepatocellular carcinoma recurrence. Hepatocellular carcinoma (HCC) is the most frequent liver neoplasm, and its incidence rates are constantly increasing. Despite the availability of potentially curative treatments (liver transplantation, surgical resection, thermal ablation), long-term outcomes are affected by a high recurrence rate (up to 70% of cases 5 years after treatment). HCC recurrence within 2 years of treatment is defined as "early" and is generally caused by the occult intrahepatic spread of the primary neoplasm and related to the tumor burden. A recurrence that occurs after 2 years of treatment is defined as "late" and is related to de novo HCC, independent of the primary neoplasm. Early HCC recurrence has a significantly poorer prognosis and outcome than late recurrence. Different pathogenesis corresponds to different predictors of the risk of early or late recurrence. An adequate knowledge of predictive factors and recurrence risk stratification guides the therapeutic strategy and post-treatment surveillance. Patients at high risk of HCC recurrence should be referred to treatments with the lowest recurrence rate and when standardized to combined or adjuvant therapy regimens. This review aimed to expose the recurrence predictors and examine the differences between predictors of early and late recurrence.
abstract_id: PUBMED:22841215
Late recurrence of hepatocellular carcinoma after liver transplantation: is an active surveillance for recurrence needed? Introduction: Liver transplantation (OLT) is considered the most efficient therapeutic option for patients with liver cirrhosis and early stage hepatocellular carcinoma (HCC) in terms of overall survival and recurrence rates, when restrictive selection criteria are applied. Nevertheless, tumor recurrence may occur in 3.5% to 21% of recipients. It usually occurs within 2 years following OLT, having a major negative impact on prognosis. The efficacy of active posttransplantation surveillance for recurrence has not been demonstrated, due to the poor prognosis of recipients with recurrences.
Aim: To analyze the clinical, pathological, and prognostic consequences of late recurrence (>5 years after OLT).
Method: We analyzed the clinical records of 165 HCC patients including 142 males of overall mean age of 58 ± 6.9 years who underwent OLT between July 1994 and August 2011.
Results: Overall survival was 84%, 76%, 66.8%, and 57% at 1, 3, 5, and 10 years, respectively. Tumor recurrence, which was observed in 18 (10.9%) recipients, was a major predictive factor for survival: its rates were 72.2%, 53.3%, 26.7%, and 10% at 1, 3, 5, and 10 years, respectively. HCC recurrence was detected in 77.8% of patients within the first 3 years after OLT. Three recipients (100% males, aged 54-60 years) showed late recurrences after 7, 9, and 10 years. In only one case were Milan criteria surpassed after the examination of explanted liver; no vascular invasion was detected in any case. Recurrence sites were peritoneal, intrahepatic, and subcutaneous abdominal wall tissue. In all cases, immunosuppression was switched from a calcineurin-inhibitor to a mammalian target of rapamycin inhibitor. We surgically resected the extrahepatic recurrences. The remaining recipient was treated with transarterial chemoembolization with doxorubicin-eluting beads and sorafenib. Prognosis after diagnosis of recurrence was poor with median a survival of 278 days (range, 114-704).
Conclusions: Global survival, recurrence rate, and pattern of recurrence were similar to previously reported data. Nevertheless, in three patients recurrence was diagnosed >5 years after OLT. Although recurrence was limited and surgically removed in two cases, disease-free survival was poor. Thus, prolonged active surveillance for HCC recurrence beyond 5 years after OLT may be not useful to provide a survival benefit for these patients.
abstract_id: PUBMED:30237389
Clinical Features and Surveillance of Very Late Hepatocellular Carcinoma Recurrence After Liver Transplantation. BACKGROUND This study aimed to assess patterns of hepatocellular carcinoma (HCC) recurrence after liver transplantation (LT) and to establish long-term surveillance protocols for late HCC recurrence. MATERIAL AND METHODS The 232 LT recipients experiencing subsequent HCC recurrence were categorized as Group 1, early recurrence (within 1 year of LT; n=117); Group 2, late recurrence (occurring in years 2-5; n=93); and Group 3, very late recurrence (after year 5; n=22). RESULTS Recurrence was detected by only elevated tumor marker levels in 11.1%, 30.1%, and 45.5% of patients in Groups 1, 2, and 3, respectively (p<0.001). The proportion of intrahepatic and extrahepatic metastases was similar in all 3 groups. Common sites of extrahepatic metastasis were the lung and bone; these were also similar across the 3 groups. Overall post-recurrence patient survival rates were 60.2% at 1 year, 28.2% at 3 years, 20.5% at 5 years, and 7.0% at 10 years. Median post-recurrence survival periods were 10.2, 23.8, and 37.0 months in Groups 1, 2, and 3, respectively. CONCLUSIONS While the pattern of HCC recurrence was similar regardless of time of recurrence, post-recurrence survival was significantly longer in patients with later recurrence. Long-term surveillance for HCC recurrence beyond 5 years after LT is recommended.
abstract_id: PUBMED:21597889
Late recurrence of hepatocellular carcinoma after liver transplantation. Background: Long-term survival of patients with hepatocellular carcinoma (HCC) after liver transplantation is affected mainly by recurrence of HCC. There is the opinion that the chance of recurrence after 2 years post-transplantation is remote, and therefore lifelong surveillance is not justified because of limited resources. The aims of the present study were to determine the rate of late HCC recurrence (≥ 2 years after transplantation) and to compare the long-term patient survival outcomes between cases of early recurrence (<2 years after transplantation) and late recurrence.
Patients: A total of 139 adult HCC patients having liver transplantation during the period from July 1994 to December 2007 were included in the analysis. The median follow-up period was 55 months. Thirty-two patients received deceased-donor grafts and 107 received living-donor grafts.
Results: Hepatocellular carcinoma recurrence occurred in 24 (17.3%) patients, among them 22 (86%) had living-donor grafts and 7 (5%) developed late recurrence. Patients in the early recurrence group and patients in the late recurrence group had comparable demographics and disease pathology. The former group, when compared with the latter, had significantly worse overall survival at 3 years (13.3 versus 100%) and 5 years (6.67 versus 71.4%) (log-rank test; p < 0.001).
Conclusions: Both early recurrence and late recurrence of HCC after liver transplantation were not uncommon, mostly detected at a subclinical stage. Regular and long-term surveillance with imaging and blood tests is essential for early detection.
abstract_id: PUBMED:34638365
Management of Hepatocellular Carcinoma Recurrence after Liver Transplantation. Recurrence of hepatocellular carcinoma (HCC) after liver transplantation (LT), occurring in 10-15% of cases, is a major concern. A lot of work has been done in order to refine the selection of LT candidates with HCC and to improve the outcome of patients with recurrence. Despite this, the prognosis of these patients remains poor, partly due to the several areas of uncertainty in their management. Even if surveillance for HCC recurrence is crucial for early detection, there is currently no evidence to support a specific and cost-effective post-LT surveillance strategy. Concerning preventive measures, consensus on the best immunosuppressive drugs has not been reached and not enough data to support adjuvant therapy are present. Several therapeutic approaches (surgical, locoregional and systemic treatments) are available in case of recurrence, but there are still few data in the post-LT setting. Moreover, the use of immune checkpoint inhibitors is controversial in transplant recipients considered the risk of rejection. In this paper, the available evidence on the management of HCC recurrence after LT is comprehensively reviewed, considering pre- and post-transplant risk stratification, post-transplant surveillance, preventive strategies and treatment options.
abstract_id: PUBMED:28966983
Late recurrence of hepatocellular carcinoma after liver transplantation. Aims: Hepatocellular carcinoma (HCC) is the third leading cause of cancer deaths worldwide and liver transplant (LT) prolongs survival. However, 15-20% will experience recurrent HCC, most occurring within 2 years of LT. HCC patients with late recurrences (>5 years after LT) may have distinctive clinical/biological characteristics.
Methods: A retrospective review was conducted of 88 patients who underwent LT for HCC between 1993-2015, analyzing demographics, clinical factors, explant pathology, and outcome.
Results: Median follow-up was 6.4 years. HCC recurred in 15 (17.0%) patients with mean time to recurrence of 3.96 +/- 3.99 years. Five patients recurred >5 years post-LT. All late recurrences involved males in their 50s, recurring at 8.5 years on average. Recurrences occurred in chest wall (2), liver (2), lung (2), bone (1) and pelvis (1), with multifocal involvement in 2 patients. Four patients died within 18 months of late recurrence. The fifth patient is alive after ablation of liver recurrence and treatment with sorafenib and everolimus.
Conclusions: One-third of post-LT patients with recurrent HCC experienced late recurrence. Although the sample size makes it difficult to identify significant risk factors, this study highlights the importance of long-term follow up and need for biomarkers to identify patients at risk for late recurrences.
abstract_id: PUBMED:31464006
Different prognostic factors and strategies for early and late recurrence after adult living donor liver transplantation for hepatocellular carcinoma. Background: Some patients with hepatocellular carcinoma (HCC) recurrence after LT show good long-term survival. We aimed to determine the prognostic factors affecting survival after recurrence and to suggest treatment strategies.
Methods: Between January 2000 and December 2015, 532 patients underwent adult living donor liver transplantation (LDLT) for HCC. Among these, 92 (17.3%) who experienced recurrence were retrospectively reviewed.
Results: The 1-, 3-, and 5-year survival rates after recurrence were 59.5%, 23.0%, and 11.9%, respectively. In multivariate analysis, time to recurrence >6 months and surgical resection after recurrence were related to longer survival after recurrence, while multi-organ involvement at the time of primary recurrence was related to poorer survival. We classified patients into early (≤6 months) and late (>6 months) recurrence groups. In the early recurrence group, tumor size >5 cm in the explant liver, liver as the first detected site of recurrence, and multiple organ involvement at primary recurrence were related to survival on multivariate analysis. In the late recurrence group, mammalian target of rapamycin inhibitor (mTORi) usage and multi-organ involvement were significantly associated with the prognosis on multivariate analysis.
Conclusions: Various therapeutic approaches are needed depending on the period of recurrence after LT and multiplicity of involved organs.
abstract_id: PUBMED:35082636
Late Recurrence of Hepatocellular Carcinoma in a Patient 10 Years after Liver Transplantation Unrelated to Transplanted Organ. Liver transplantation (LTx) is an accepted method of hepatocellular carcinoma (HCC) treatment in cirrhotic patients; however, it has many limitations, and there is a substantial risk of recurrence. Most relapses occur within the first 2 posttransplant years. We aimed to present a late extrahepatic recurrence of HCC 10 years after LTx, and we discuss the possible risk factors and ways to improve transplantation results. A 68-year-old patient with liver cirrhosis and HCC on the background of chronic HCV and past HBV infection was transplanted urgently due to the rapid decompensation. Anti-HCV treatment before surgery was unsuccessful. Pretransplant computed tomography showed 1 focal 4.5 cm lesion consistent with HCC. Histopathology of the explanted organ showed 2 nodules outside the Milan criteria. Angioinvasion was not found. The patient achieved a sustained viral response to pegylated interferon and ribavirin 2 years post-LTx. Eight years were uneventful. CT of the abdomen performed occasionally was normal. Ten years after LTx, the patient unexpectedly presented with shortness of breath, fatigue, and weight loss. Two metastatic nodules of HCC in the lungs and pelvis were found. Although late HCC recurrence post-LTx is rare, it should be always considered, especially when risk factors such as viral infections and underestimation of tumor advancement were identified. We advocate that oncological surveillance of HCC relapse has to be continued during the whole posttransplant period. High AFP levels, the unfavorable neutrophil to lymphocyte ratio, and better estimation of primary tumor size seem to be useful in the identification of good candidates for transplantation.
abstract_id: PUBMED:27474913
Hepatocellular carcinoma recurrence pattern following liver transplantation and a suggested surveillance algorithm. Purpose: This study aims to evaluate the recurrence pattern of hepatocellular carcinoma (HCC) following liver transplantation.
Materials And Methods: A total of 54 patients underwent liver transplantation for HCC; 9 patients developed biopsy-proven recurrent HCC (16.6%). The site of HCC recurrence along with other factors was analyzed.
Results: Seven patients were diagnosed with HCC prior to liver transplantation and 2 patients had incidental HCC in the explanted liver. Two patients had locoregional recurrence, 4 patients had distant metastasis, and 3 patients had synchronous locoregional recurrence and distant metastasis.
Conclusion: A significant proportion of HCC recurrence following liver transplantation is extrahepatic.
abstract_id: PUBMED:28009758
Evidence-Based Surveillance Imaging Schedule After Liver Transplantation for Hepatocellular Carcinoma Recurrence. Background: There is presently no evidence-based recommendation for surveillance of recurrent hepatocellular carcinoma after liver transplantation (LT). We aim to evaluate and develop evidence-based alternate surveillance imaging schedules for post-LT hepatocellular carcinoma patients.
Methods: Imaging and pathologic reports for consecutive post-LT patients followed up by regular surveillance imaging from a single institution's prospective database were evaluated with institutional review board approval. Outcome variable was time to diagnosis of first recurrence post-LT by surveillance imaging. Recurrence-free survival times from alternative surveillance schedules were compared with the existing schedule (every 3 months) using a parametric frailty model. Expected delay (EpD) in diagnosis compared to the existing schedule was also computed for the alternate surveillance schedules. A P value less than 0.05 was considered to indicate a significant difference.
Results: One hundred twenty-five patients (108 men; 59.4 ± 16.6 years) underwent 1953 computed tomography and 255 magnetic resonance imaging scans. Recurrence-free survival time was not significantly different in the first 5 years after LT when the imaging interval was extended from current every 3 months to every 6 months (P = 0.786, EpD = 55 days). This alternative schedule incurred 10 (50.0%) fewer surveillance scans than the 20 in the original schedule, and a corresponding reduction in radiation dose (if involved) and cost during the 5-year follow-up period.
Conclusions: In conclusion, modeled alternative surveillance schedules have the potential to reduce the frequency of scans without compromising surveillance benefits.
Answer: The question of whether active surveillance for late recurrence of hepatocellular carcinoma (HCC) after liver transplantation is needed is addressed in several studies. The consensus appears to be that while late recurrence of HCC is less common than early recurrence, it still represents a significant concern and warrants surveillance.
One study found that late recurrence (>5 years after liver transplantation) was observed in a small percentage of patients, and despite limited and surgically removed recurrences in some cases, the prognosis remained poor, suggesting that prolonged active surveillance beyond 5 years may not provide a survival benefit (PUBMED:22841215). However, another study reported that late recurrences of HCC after liver transplantation were not uncommon and mostly detected at a subclinical stage, emphasizing the importance of regular and long-term surveillance with imaging and blood tests for early detection (PUBMED:21597889).
Further, a study that categorized HCC recurrence after liver transplantation into early, late, and very late groups found that post-recurrence survival was significantly longer in patients with later recurrence, recommending long-term surveillance for HCC recurrence beyond 5 years after liver transplantation (PUBMED:30237389). This is supported by another study that highlighted the importance of long-term follow-up and the need for biomarkers to identify patients at risk for late recurrences (PUBMED:28966983).
While there is no evidence to support a specific and cost-effective post-liver transplantation surveillance strategy, surveillance for HCC recurrence is crucial for early detection (PUBMED:34638365). Additionally, an evidence-based surveillance imaging schedule suggests that extending the imaging interval from every 3 months to every 6 months in the first 5 years after liver transplantation does not significantly affect recurrence-free survival time, potentially reducing the frequency of scans without compromising surveillance benefits (PUBMED:28009758).
In summary, although the utility of prolonged active surveillance for late HCC recurrence after liver transplantation is debated, the evidence suggests that long-term surveillance is recommended, and it may be possible to optimize the surveillance schedule to balance the benefits against the costs and risks associated with frequent imaging. |
Instruction: Are men with erectile dysfunction more likely to have hypertension than men without erectile dysfunction?
Abstracts:
abstract_id: PUBMED:27351435
Trends in pharmaceutical care for men Over the past few years, perceptible changes - both fundamental and specific - have taken place in pharmaceutical care for men. While the most striking difference persists, namely that between somatic drug therapies for men and drugs for the treatment of psychological disorders and diseases, the large discrepancies that long existed between the quantities prescribed for men and women have meanwhile not only evened out, but men are even prescribed larger quantities than women if they undergo drug therapy. An analysis of the drugs prescribed particularly for men revealed that they are primarily prescribed for the treatment of cardiovascular diseases (hypertension and cardiac insufficiency) and metabolic disorders (diabetes, gout), especially in elderly patients. The evaluation also showed that the drugs prescribed most frequently for younger men also included psychostimulants and antidepressants, such as SSRIs, for diagnoses of ADHD and depression.Besides these prescribed medicaments, other drugs must also be taken into account that reflect men's gender-specific everyday needs. These include drugs for treating erectile dysfunction, hair growth products or drugs for male menopause or to build muscle. The sometimes serious undesired effects of these products are often given small attention because of the desired benefit of supporting the perceived male role. While hormones are widely used in anabolic steroids, the use of hormones in contraceptive pills for men is evidently still far away from the aforementioned trends in pharmaceutical care for men.
abstract_id: PUBMED:15947647
Are men with erectile dysfunction more likely to have hypertension than men without erectile dysfunction? A naturalistic national cohort study. Purpose: : We examined whether men with erectile dysfunction (ED) are more likely to have hypertension than men without ED in a managed care setting.
Materials And Methods: : We used a naturalistic cohort design to compare hypertension prevalence rates in 285,436 men with ED to that in 1,584,230 men without ED from 1995 through 2001. We also used a logistic regression model to isolate the effect of ED on the likelihood of hypertension after controlling for subject age, census regions and 9 concurrent diseases. The ED and the nonED cohort came from a nationally representative, managed care claims database that covers 51 health plans and 28 million members in the United States. Finally, the prevalence rate difference between members with and without ED, and the OR of having hypertension were calculated.
Results: : The hypertension prevalence rate was 41.2% in men with ED and 19.2% in men without ED. After controlling for subject age, census region and 9 concurrent diseases the OR was 1.383 (p <0.0001), which implies that the odds for men with ED to have hypertension were 38.3% higher than the odds for men without ED.
Conclusions: : Men with ED were more likely to have hypertension than men without ED. This evidence supports the hypothesis that ED shares common risk factors with hypertension. It also suggests that men with ED and clinicians could use ED as an alerting signal to detect and treat undiagnosed hypertension earlier.
abstract_id: PUBMED:34969613
The Temporal Association of Depression and Anxiety in Young Men With Erectile Dysfunction. Background: Erectile dysfunction (ED) is a multidimensional sexual disorder that is being increasingly diagnosed in younger men. Although mental illnesses such as depression and anxiety are known risk factors for ED, the association between these conditions and ED has been understudied in young men.
Aim: To explore the temporal association between depression, anxiety, and ED in a population-based cohort of young men.
Methods: Using 2009-2018 MarketScan Commercial Claims data, we identified all men with ED aged 18-40 years (cases). Using ICD-9/-10 codes and prescription data, we evaluated the prevalence and incidence of depression and anxiety in this cohort. Cases were matched with men without a diagnosis of ED (controls) based on age, Charlson Comorbidity Index, history of hypertension, geographic region, and year of presentation. We examined the prevalence of depression and anxiety within 12 months prior to ED diagnosis and incidence of depression and anxiety up to 36 months after ED diagnosis in cases vs controls. Differences between cases and controls were tested with Wilcoxon rank-sum test for numerical covariates, and chi-square test for categorical covariates. Significance was set at P < .05.
Outcomes: Prevalence and incidence of depression and anxiety in young men with and without ED.
Results: Within the 12-month period preceding ED diagnosis, the prevalence of depression and anxiety in cases vs controls were 17.1% vs 12.9%, respectively (P < .001). The incidence of depression and anxiety were higher amongst cases vs controls at 12- (11.7% vs 6.3%), 24- (14.5% vs 9.0%,) and 36- (15.9% vs 10.6%) months following ED diagnosis (P < .001).
Clinical Implications: High incidence and prevalence of depression and anxiety in young men diagnosed with ED highlight the importance of normalizing mental health screenings and routine psychiatric follow-up in this population.
Strengths & Limitations: Our contemporary, case-control study utilizes a population-based cohort of young men with ED to study the temporal association between depression, anxiety, and ED, which is understudied to date. The MarketScan commercial claims database used in this analysis includes men covered by private insurers only and lacks data on symptoms and treatments.
Conclusion: Young men with ED had significantly higher rates of depression and anxiety both before and after ED diagnosis in comparison to young men without ED. Manalo TA, Biermann HD, Patil DH, et al. The Temporal Association of Depression and Anxiety in Young Men With Erectile Dysfunction. J Sex Med 2022;19:201-206.
abstract_id: PUBMED:22115177
The triad of erectile dysfunction, testosterone deficiency syndrome and metabolic syndrome: findings from a multi-ethnic Asian men study (The Subang Men's Health Study). The etiology of erectile dysfunction (ED) is multi-factorial. This paper examines the association between ED, testosterone deficiency syndrome (TDS) and metabolic syndrome (MS) in Malaysian men in an urban setting. One thousand and forty-six men aged ≥ 40 years from Subang Jaya, Malaysia were randomly selected from an electoral-roll list. The men completed questionnaires that included: socio-demographic data, self-reported medical problems and the International Index of erectile function (IIEF-5). Physical examination and the following biochemical tests were performed: lipid profile, fasting blood glucose (FBG) and total testosterone. The response rate was 62.8% and the mean age of men was 55.8 ± 8.4 (41-93) years. Ethnic distribution was Chinese, 48.9%; Malay, 34.5%; Indian, 14.8%. The prevalence of moderate-severe ED was 20.0%, while 16.1% of men had TDS (< 10.4 nmol/L) and 31.3% of men had MS. Indian and Malay men were significantly more likely to have ED (p = 0.001), TDS (p < 0.001) and MS (p < 0.001) than the Chinese. Multivariate regression analysis showed that elevated blood pressure, elevated FBG, low high-density lipoprotein and heart disease were predictors of ED while all MS components were independently associated with TDS. Malay and Indian men have a higher disease burden compared to Chinese men and were more likely to suffer with ED, TDS and MS. MS components were closely related to TDS and ED.
abstract_id: PUBMED:21358664
Profile of men's health in Malaysia: problems and challenges. Men's health concerns have evolved from the traditional andrology and male sexual health to a more holistic approach that encompasses male psychological, social and physical health. The poor state of health in men compared to their female counterparts is well documented. A review of the epidemiological data from Malaysia noted a similar trend in which men die at higher rates in under 1 and above 15 years old groups and most disease categories compared to women. In Malaysia, the main causes of death in men are non-communicable diseases and injuries. Risk factors, such as risk-taking behaviour, smoking and hypertension, are prevalent and amenable to early interventions. Erectile dysfunction, premature ejaculation and prostate disorders are also prevalent. However, many of these morbidities go unreported and are not diagnosed early; therefore, opportunities for early intervention are missed. This reflects poor health knowledge and inadequate health-care utilisation among Malaysian men. Their health-seeking behaviour has been shown to be strongly influenced by family members and friends. However, more research is needed to identify men's unmet health-care needs and to develop optimal strategies for addressing them. Because the Malaysian population is aging and there is an increase in sedentary lifestyles, optimizing men's health will remain a challenge unless effective measures are implemented. The existing male-unfriendly health-care system and the negative influence of masculinity on men's health behaviour must be addressed. A national men's health policy based on a male-friendly approach to health-care delivery is urgently needed to provide a framework for addressing these challenges.
abstract_id: PUBMED:17635695
Prevalence and risk factors for erectile dysfunction in Korean men: results of an epidemiological study. Introduction: The prevalence of erectile dysfunction (ED) and associated risk factors has been described in many countries, but there are still only a few studies from Asia.
Aim: We investigated the prevalences of ED and premature ejaculation (PE) in Korean men and the impact of general health, lifestyle, and psychosocial factors on these conditions.
Methods: To assess ED and PE, 1,570 Korean men aged 40-79 years were interviewed with a self-administered questionnaire on sexual function and the International Index of Erectile Function (IIEF)-5. In addition, blood chemistry was analyzed for each subject.
Main Outcome Measures: The prevalences of ED and PE were obtained from self-reported ED, IIEF-5 scoring, EF (erectile function) domain scoring, and self-reported intravaginal ejaculatory latency time (IELT). The data were analyzed for the presence of risk factors and the relationship of general health, lifestyle, and psychosocial factors with ED.
Results: The prevalences of ED among Korean men were 13.4% (self-reported ED) and 32.4% (IIEF-5 score <or= 17), and PE prevalences were 11% (IELT <or= 2-min) and 33.1% (IELT <or= 5-min). ED was more prevalent in the subject groups with older age, lower income, or lower education, and in subjects without a spouse. ED prevalence was positively associated with risk factors such as diabetes, hypertension, heart disease, psychological stress, and obesity. Levels of serum hemoglobin (Hb) A1c, triglycerides, testosterone, or dehydroepiandrosterone sulfate (DHEA-S) were significantly different between the ED and non-ED groups.
Conclusions: The prevalences of ED and PE in Korean men were 13.4% (self-reported ED) and 11% (IELT <or= 2-min), respectively. Risk factors and other socioeconomic and mental health factors were associated with ED prevalence. Biochemical factors such as HbA1c, triglycerides, testosterone, and DHEA-S were significantly related to ED prevalence.
abstract_id: PUBMED:30655182
Blood Pressure, Sexual Activity, and Erectile Function in Hypertensive Men: Baseline Findings from the Systolic Blood Pressure Intervention Trial (SPRINT). Introduction: Erectile function, an important aspect of quality of life, is gaining increased research and clinical attention in older men with hypertension.
Aim: To assess the cross-sectional association between blood pressure measures (systolic blood pressure [SBP]; diastolic blood pressure [DBP]; and pulse pressure [PP]) and (i) sexual activity and (ii) erectile function in hypertensive men.
Methods: We performed analyses of 1,255 male participants in a larger randomized clinical trial of 9,361 men and women with hypertension aged ≥50 years.
Main Outcome Measures: The main outcome measures were self-reported sexual activity (yes/no) and erectile function using the 5-item International Index of Erectile Function (IIEF-5).
Results: 857 participants (68.3%) reported being sexually active during the previous 4 weeks. The mean (SD) IIEF-5 score for sexually active participants was 18.0 (5.8), and 59.9% of the sample reported an IIEF-5 score <21, suggesting erectile dysfunction (ED). In adjusted logistic regression models, neither SBP (adjusted odds ratio = 0.998; P = .707) nor DBP (adjusted odds ratio = 1.001; P = .929) was significantly associated with sexual activity. In multivariable linear regression analyses in sexually active participants, lower SBP (β = -0.04; P = .025) and higher DBP (β = 0.05; P = .029) were associated with better erectile function. In additional multivariable analyses, lower PP pressure was associated with better erectile function (β = -0.04; P = .02).
Clinical Implications: Blood pressure is an important consideration in the assessment of erectile function in men with hypertension.
Strengths & Limitations: Assessments of blood pressure and clinical and psychosocial variables were performed using rigorous methods in this multi-ethnic and geographically diverse sample. However, these cross-sectional analyses did not include assessment of androgen or testosterone levels.
Conclusions: Erectile dysfunction was highly prevalent in this sample of men with hypertension, and SBP, DBP, and PP were associated with erectile function in this sample. Foy CG, Newman JC, Berlowitz DR, et al. Blood Pressure, Sexual Activity, and Erectile Function in Hypertensive Men: Baseline Findings from the Systolic Blood Pressure Intervention Trial (SPRINT). J Sex Med 2019;16:235-247.
abstract_id: PUBMED:30638828
Erectile Dysfunction in 45-Year-Old Heterosexual German Men and Associated Lifestyle Risk Factors and Comorbidities: Results From the German Male Sex Study. Background: Erectile dysfunction (ED) is a common public health issue with a significant impact on quality of life. The associations between ED and several risk factors have been reported previously. The continuously increasing incidence of these factors is contributing to the increasing prevalence of ED.
Aim: To assess ED prevalence and severity in a representative sample of 45-year-old German men and to analyze the association with risk factors (lifestyle risk factors/comorbidities).
Methods: Data were collected within the German Male Sex-Study. Randomly selected 45-year-old men were invited. A total of 10,135 Caucasian, heterosexual, sexually active men were included in this analysis. The self-reported prevalence of ED was assessed using the Erectile Function domain of the International Index of Erectile Function. Risk factors for ED were ascertained using self-report questionnaires. An anamnesis interview and a short physical examination were performed.
Main Outcome Measure: ED prevalence and severity were evaluated in a cross-sectional design. The associations of ED with comorbidities (eg, depression, diabetes, hypertension, lower urinary tract symptoms) and lifestyle factors (ie, smoking, obesity, central obesity, physical inactivity, and poor self-perceived health-status) were analyzed by logistic regression.
Results: The overall prevalence of ED was 25.2% (severe, 3.1%; moderate, 9.2%; mild to moderate, 4.2%; mild, 8.7%). Among the men with ED, 48.8% had moderate or severe symptoms. ED prevalence increased with the number of risk factors, to as high as 68.7% in men with 5-8 risk factors. In multiple logistic regression with backward elimination, the strongest associations with ED were found for depression (odds ratio [OR] = 1.87), poor self-perceived health status (OR = 1.72), lower urinary tract symptoms (OR = 1.68), and diabetes (OR = 1.38).
Conclusion: One out of 4 men already had symptoms of ED at age 45. Almost one-half of the men with ED had moderate to severe symptoms. ED was strongly associated with each analyzed risk factor, and the prevalence and severity of ED increased with an increasing number of risk factors. Hallanzy J, Kron M, Goethe VE, et al. Erectile Dysfunction in 45-Year-Old Heterosexual German Men and Associated Lifestyle Risk Factors and Comorbidities: Results From the German Male Sex Study. Sex Med 2019;7:26-34.
abstract_id: PUBMED:24589222
Weaker masturbatory erection may be a sign of early cardiovascular risk associated with erectile dysfunction in young men without sexual intercourse. Introduction: Although increasing evidences emphasize the importance of early cardiovascular evaluation in men with erectile dysfunction (ED) of unexplained aetiology, impaired masturbation-induced erections in young men are usually overlooked and habitually presumed to be psychological origin.
Aims: To evaluate the young men presenting weaker masturbatory erection with no sexual intercourse (WME-NS) and verify if this cohort have early cardiovascular risks associated with ED.
Methods: Male subjects aged 18-40 years with WME-NS were screened by analyzing detailed sexual intercourse and masturbatory history. The age-matched ED and non-ED population were identified by using International Index of Erectile Function-5 (IIEF-5). All subjects with acute and/or chronic diseases (including diagnosed hypertension and diabetes) and long-term pharmacotherapy were excluded. Nocturnal penile tumescence and rigidity (NPTR), systemic vascular parameters and biochemical indicators related to metabolism were assessed.
Main Outcome Measures: Comparison analysis and logistic regression analysis were conducted among WME-NS, ED and non-ED population.
Results: In total, 78 WME-NS cases (mean 28.99 ± 5.92 years), 179 ED cases (mean 30.69 ± 5.21 years) and 43 non-ED cases (mean 28.65 ± 4.30 years) were screened for analysis. Compared with non-ED group, WME-NS group had higher prevalence of early ED risk factors including endothelial dysfunction, insulin resistance, high level of glycosylated serum protein and abnormal NPTR. Multivariable-adjusted logistic regression analysis showed endothelia dysfunction (odds ratio: 8.83 vs. 17.11, both P < 0.001) was the independent risk factor for both WME-NS and ED.
Conclusions: Weaker masturbatory erection may be a sign of early cardiovascular risk associated with ED in young men without sexual intercourse. More studies are warranted to elucidate the clinical benefits by targeting these formulated strategies.
abstract_id: PUBMED:37132563
Quality of life of hypertensive men with erectile dysfunction in a tertiary health centre in southern Nigeria. Background: Erectile dysfunction (ED) is the most common disorder of sexual health seen in men in community studies. A man's sexual health has been found to be a key factor in determining the capacity for maintaining a healthy relationship.
Aim: This study sought to determine the quality of life of hypertensive men with ED attending the out-patient clinics of Federal Medical Centre (FMC), Asaba, South-South, Nigeria.
Setting: This study was conducted in the Out Patients Clinics (OPC) of FMC, Asaba, Delta state, Nigeria.
Methods: After obtaining approval from the ethics and research committees in Asaba, 184 consenting hypertensive men who met the eligibility criteria were selected by systematic random sampling to participate in the study from October 2015 to January 2016. This study was a cross-sectional survey. Data were collected with a semi-structured interviewer-administered questionnaire adopted from the international index of Sexual Health Inventory for Men (SHIM) and the World Health Organization Quality of Life Scale (WHOQOL-BREF). The study complied with the principles of Helsinki and Good Clinical Practice.
Results: The results showed the mean score for physical domain (58.78 ± 24.37), the psychological domain (62.68 ± 25.93), the social domain (50.47 ± 29.09), and the environmental domain (62.25 ± 18.52). Over a fifth, 11 (22.0%), of the respondents with severe ED had poor quality of life.
Conclusion: This study showed that ED is common in hypertensive men and their quality of life was more impaired than those with normal erectile function.Contribution: This study contributes to holistic approaches to patient care.
Answer: Yes, men with erectile dysfunction (ED) are more likely to have hypertension than men without ED. A naturalistic national cohort study found that the hypertension prevalence rate was 41.2% in men with ED compared to 19.2% in men without ED. After controlling for age, region, and concurrent diseases, the odds for men with ED to have hypertension were 38.3% higher than for men without ED (PUBMED:15947647). Additionally, the study "The Temporal Association of Depression and Anxiety in Young Men With Erectile Dysfunction" found that young men with ED had significantly higher rates of depression and anxiety both before and after ED diagnosis compared to young men without ED, which suggests a complex interplay between ED, mental health, and potentially associated conditions like hypertension (PUBMED:34969613). Furthermore, the "Profile of men's health in Malaysia" study indicated that hypertension is a prevalent risk factor and is associated with ED (PUBMED:21358664). The "Blood Pressure, Sexual Activity, and Erectile Function in Hypertensive Men" study also highlighted that erectile dysfunction was highly prevalent in a sample of men with hypertension, and blood pressure measures were associated with erectile function in this group (PUBMED:30655182). These findings collectively support the hypothesis that ED shares common risk factors with hypertension and that the presence of ED could serve as an alert for clinicians to investigate and potentially treat undiagnosed hypertension earlier. |
Instruction: Complex coronary anatomy in coronary artery bypass graft surgery: impact of complex coronary anatomy in modern bypass surgery?
Abstracts:
abstract_id: PUBMED:37841646
The effect of off-pump coronary bypass graft surgery on subfoveal choroidal thickness, ganglion cell complex, and retinal nerve fiber layer thickness. Background: Cardiac surgery has been associated with adverse ocular events. Off-pump coronary artery bypass graft surgery evades the systemic inflammatory response seen in extracorporeal circulation and is superior to on-pump surgery with regard to end-organ dysfunction and neurological outcomes.
Objectives: To determine the effects of off-pump (without extracorporeal circulation) coronary artery bypass graft surgery on choroidal thickness, ganglion cell complex, and the retinal nerve fiber layer.
Design: Prospective, longitudinal study.
Methods: Patients who underwent off-pump surgery were examined preoperatively and postoperatively at 1 week and 6 weeks after surgery. Choroidal thickness, ganglion cell complex, and the retinal nerve fiber layer measurements were recorded, and the effects of off-pump coronary artery bypass on these parameters were assessed.
Results: A total of 44 eyes of 44 patients were included in the study. There was a statistically significant increase in subfoveal choroidal thickness from 252.84 ± 56.24 µm preoperatively to 273.82 ± 39.76 µm at 1 week and 301.97 ± 44.83 µm at 6 weeks after off-pump coronary artery bypass graft surgery (p = 0.044; p ⩽ 0.001). Ganglion cell complex and retinal nerve fiber measurements showed no significant difference compared to preoperative values.
Conclusion: Off-pump coronary artery bypass graft surgery showed no negative effects on ganglion cell complex and retinal nerve fiber measurements. A significant increase in subfoveal choroidal thickness was seen after off-pump surgery, which might be advantageous in patients who are at high risk or have preexisting ocular diseases that are affected by the choroid.
abstract_id: PUBMED:32653282
Minimally invasive coronary bypass versus percutaneous coronary intervention for isolated complex stenosis of the left anterior descending coronary artery. Objective: Debate continues as to the optimal minimally invasive treatment modality for complex disease of the left anterior descending coronary artery, with advocates for both robotic-assisted minimally invasive direct coronary artery bypass and percutaneous coronary intervention with a drug-eluting stent. We analyzed the midterm outcomes of patients with isolated left anterior descending disease, revascularized by minimally invasive direct coronary artery bypass or drug-eluting stent percutaneous coronary intervention, focusing on those with complex lesion anatomy.
Methods: A retrospective review was undertaken of all patients who underwent coronary revascularization between January 2008 and December 2016. From this population, 158 propensity-matched pairs of patients were generated from 158 individuals who underwent minimally invasive direct coronary artery bypass for isolated complex left anterior descending disease and from 373 patients who underwent percutaneous coronary intervention using a second-generation drug-eluting stent. Midterm survival and incidence of repeat left anterior descending intervention were analyzed for both patient groups.
Results: Overall 9-year survival was not significantly different between patient groups both before and after propensity matching. Midterm mortality in the matched minimally invasive direct coronary artery bypass group was low, irrespective of patient risk profile. By contrast, advanced age (hazard ratio, 1.10; P = .012) and obesity (hazard ratio, 1.09; P = .044) predicted increased late death after drug-eluting stent percutaneous coronary intervention among matched patients. Patients who underwent minimally invasive direct coronary artery bypass were significantly less likely to require repeat left anterior descending revascularization than those who had percutaneous coronary intervention, both before and after propensity matching. Smaller stent diameter in drug-eluting stent percutaneous coronary intervention was associated with increased left anterior descending reintervention (hazard ratio, 3.53; P = .005).
Conclusions: In patients with complex disease of the left anterior descending artery, both minimally invasive direct coronary artery bypass and percutaneous coronary intervention are associated with similar excellent intermediate-term survival, although reintervention requirements are lower after surgery.
abstract_id: PUBMED:21168023
Complex coronary anatomy in coronary artery bypass graft surgery: impact of complex coronary anatomy in modern bypass surgery? Lessons learned from the SYNTAX trial after two years. Objective: SYNTAX study compares outcomes of coronary artery bypass grafting with percutaneous coronary intervention in patients with 3-vessel and/or left main disease. Complexity of coronary artery disease was quantified by the SYNTAX score, which combines anatomic characteristics of each significant lesion. This study aims to clarify whether SYNTAX score affects the outcome of bypass grafting as defined by major adverse cerebrovascular and cardiac events (MACCE) and its components over a 2-year follow-up period.
Methods: Of the 3075 patients enrolled in SYNTAX, 1541 underwent coronary artery bypass grafting (897 randomized controlled trial patients, and 644 registry patients). All patients undergoing bypass grafting were stratified according to their SYNTAX score into 3 tertiles: low (0-22), intermediate (22-32), and high (≥33) complexity. Clinical outcomes up to 2 years after allocation were determined for each group and further risk factor analysis was performed.
Results: Registry patients had more complex disease than those in the randomized controlled trial (SYNTAX score: registry 37.8 ± 13.3 vs randomized 29.1 ± 11.4; P < .001). At 30 days, overall coronary bypass mortality was 0.9% (registry 0.6% vs randomized 1.2%). MACCE rate at 30 days was 4.4% (registry 3.4% vs randomized 5.2%). SYNTAX score did not significantly affect overall 2-year MACCE rate of 15.6% for low, 14.3% for medium, and 15.4% for high SYNTAX scores. Compared with randomized patients, registry patients had a lower rate of overall MACCE rate (registry 13.0% vs randomized 16.7%; P = .046) and repeat revascularization (4.7% vs 8.6%; P = .003), whereas other event rates were comparable. Risk factor analysis revealed left main disease (P = .049) and incomplete revascularization (P = .005) as predictive for adverse 2-year outcomes.
Conclusions: The outcome of coronary artery bypass grafting was excellent and independent from the SYNTAX score. Incomplete revascularization rather than degree of coronary complexity adversely affects late outcomes of coronary bypass.
abstract_id: PUBMED:34416772
Coronary revascularization: Interventional therapy or coronary bypass surgery Coronary artery disease remains the leading cause of death and is responsible for myocardial infarction, heart failure and angina. Therapy combines optimal control of cardiovascular risk factors with coronary revascularization performed by interventional therapy or bypass surgery. While interventional therapy is preferred for single or two vessel disease, interdisciplinary heart team decision should be reached for complex lesion, three vessel or left main disease. Both revascularization strategies perform similar for low level complexity three vessel or left main disease while coronary bypass surgery proved superior for more complex coronary artery disease. Heart team decision should be based on vascular anatomy and expected revascularization success under consideration of operative risk and patient preference.
abstract_id: PUBMED:26583818
Stable coronary heart disease - when is bypass surgery appropriate? Medical treatment is the therapeutic cornerstone in patients with stable coronary artery disease (SCAD). Although an increasing number of these patients is treated with percutaneous interventions (PCI), bypass surgery remains an important therapeutic option. Bypass surgery improves the prognosis of patients with complex coronary anatomy as compared to PCI. Modern surgical techniques lead to a considerable reduction of the invasiveness of the operation.
abstract_id: PUBMED:25228158
Transcollateral retrograde diagnostic coronary angiogram - important therapeutic implications for an occluded arterial coronary artery bypass graft. This case illustrates the potential clinical usefulness of retrograde approach for selective visualization of distal vessels in a patient with multiple coronary chronic total occlusions and previous coronary artery bypass graft (CABG) surgery. By knowing in extreme detail the exact anatomy of the complex post-surgical coronary system, a successful treatment can be planned for the patient.
abstract_id: PUBMED:33197560
Effect of Evolocumab on Complex Coronary Disease Requiring Revascularization. Objectives: This study sought to evaluate the ability of the proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitor evolocumab to reduce the risk of complex coronary atherosclerosis requiring revascularization.
Background: PCSK9 inhibitors induce plaque regression and reduce the risk of coronary revascularization overall.
Methods: FOURIER (Further Cardiovascular Outcomes Research with PCSK9 Inhibition in Subjects with Elevated Risk) was a randomized trial of the PCSK9 inhibitor evolocumab versus placebo in 27,564 patients with stable atherosclerotic cardiovascular disease on statin therapy followed for a median of 2.2 years. Clinical documentation of revascularization events was blindly reviewed to assess coronary anatomy and procedural characteristics. Complex revascularization was the composite of complex percutaneous coronary intervention (PCI) (as per previous analyses, ≥1 of: multivessel PCI, ≥3 stents, ≥3 lesions treated, bifurcation PCI, or total stent length >60 mm) or coronary artery bypass grafting surgery (CABG).
Results: In this study, 1,724 patients underwent coronary revascularization, including 1,482 who underwent PCI, 296 who underwent CABG, and 54 who underwent both. Complex revascularization was performed in 632 (37%) patients. Evolocumab reduced the risk of any coronary revascularization by 22% (hazard ratio [HR]: 0.78; 95% CI: 0.71 to 0.86; p < 0.001), simple PCI by 22% (HR: 0.78; 95% CI: 0.70 to 0.88; p < 0.001), complex PCI by 33% (HR: 0.67; 95% CI: 0.54 to 0.84; p < 0.001), CABG by 24% (HR: 0.76; 95% CI: 0.60 to 0.96; p = 0.019), and complex revascularization by 29% (HR: 0.71; 95% CI: 0.61 to 0.84; p < 0.001). The magnitude of the risk reduction with evolocumab in complex revascularization tended to increase over time (20%, 36%, and 41% risk reductions in the first, second, and beyond second years).
Conclusions: Adding evolocumab to statin therapy significantly reduced the risk of developing complex coronary disease requiring revascularization, including complex PCI and CABG individually. (Further Cardiovascular Outcomes Research with PCSK9 Inhibition in Subjects with Elevated Risk (FOURIER); NCT01764633.).
abstract_id: PUBMED:31111565
Hybrid off-pump coronary artery bypass grafting surgery and transaortic transcatheter aortic valve replacement: Literature review of a feasible bailout for patients with complex coronary anatomy and poor femoral access. Background And Aim Of Study: The treatment of inoperable patients with concomitant complex coronary artery disease and severe aortic stenosis unsuitable for conventional transcatheter aortic valve replacement (TAVR) poses a significant challenge. Effective treatment is even more difficult in those patients with complex coronary anatomy unamenable to percutaneous revascularization. Our manuscript aims to enlighten clinicians on the management of this complex patient.
Methods: We conducted a contemporary review of the literature of combined off-pump coronary artery bypass grafting and transaortic TAVR in this patient population and describe our own successful experience in an inoperable patient with a porcelain aorta.
Results: Including our report, 17 cases have been described in the literature. All patients had multiple comorbidities with elevated STS (range, 2.6-25; 6%) and EuroScore I (range, 13.7-83; 7%) and were not considered candidates for conventional CABG and SAVR. Most had severe, complex, multivessel CAD deemed unsuitable for PCI and structural findings precluding them from other standard percutaneous or alternative TAVR approaches (transfemoral/subclavian/transcaval/transapical). Out of the 17 cases, 5 (29%) had porcelain aortas. Most reports specify the decision-making process is driven by a multidisciplinary team.
Conclusion: This report demonstrates that hybrid off-pump CABG surgery and transaortic TAVR can be successfully performed in high-risk patients with porcelain aortas who are not candidates for percutaneous methods, on-pump revascularization, transfemoral, subclavian, or transcaval valve implantations. It also highlights that careful study of the CTA scan could predict adequate access for a transaortic approach even in the presence of porcelain aorta in selected patients.
abstract_id: PUBMED:37348857
Impact of on-pump and off-pump coronary artery bypass grafting on 10-year mortality versus percutaneous coronary intervention. Objectives: The very long-term mortality of off-pump and on-pump coronary artery bypass grafting (CABG) versus percutaneous coronary intervention (PCI) in a randomized complex coronary artery disease population is unknown. This study aims to investigate the impact of on-pump and off-pump CABG versus PCI on 10-year all-cause mortality.
Methods: The SYNTAX trial randomized 1800 patients with three-vessel and/or left main coronary artery disease to PCI or CABG and assessed their survival at 10 years. In this sub-study, the hazard of mortality over 10 years was compared according to the technique of revascularization: on-pump CABG (n = 725), off-pump CABG (n = 128) and PCI (n = 903).
Results: There was substantial inter-site variation in the use of off-pump CABG despite baseline characteristics being largely homogeneous among the 3 groups. The crude rate of mortality was significantly lower following on-pump CABG versus PCI [25.6% vs 28.4%, hazard ratio (HR) 0.79, 95% confidence interval (CI) 0.65-0.96], while it was comparable between off-pump CABG and PCI (28.5% vs 28.4%, HR 0.98, 95% CI 0.69-1.40). After adjusting for the 9 variables included in the SYNTAX score II 2020, 10-year mortality remained significantly lower with on-pump CABG than PCI (HR 0.75 against PCI, P = 0.009).
Conclusions: In the SYNTAXES trial, 10-year mortality adjusted for major confounders was significantly lower following on-pump CABG compared to PCI. There was no evidence for unadjusted difference between off-pump CABG and PCI, although the unadjusted estimated HR had a wide CI. Site heterogeneity in the technique used in bypass surgery has had measurable effects on treatment performance.
abstract_id: PUBMED:11006367
Multiple complex coronary plaques in patients with acute myocardial infarction. Background: Acute myocardial infarction is believed to be caused by rupture of an unstable coronary-artery plaque that appears as a single lesion on angiography. However, plaque instability might be caused by pathophysiologic processes, such as inflammation, that exert adverse effects throughout the coronary vasculature and that therefore result in multiple unstable lesions.
Methods: To document the presence of multiple unstable plaques in patients with acute myocardial infarction and determine their influence on outcome, we analyzed angiograms from 253 patients for complex coronary plaques characterized by thrombus, ulceration, plaque irregularity, and impaired flow.
Results: Single complex coronary plaques were identified in 153 patients (60.5 percent) and multiple complex plaques in the other 100 patients (39.5 percent). As compared with patients with single complex plaques, those with multiple complex plaques were less likely to undergo primary angioplasty (86.0 percent vs. 94.8 percent, P = 0.03) and more commonly required urgent bypass surgery (27.0 percent vs. 5.2 percent, P < or = 0.001). During the year after myocardial infarction, the presence of multiple complex plaques was associated with an increased incidence of recurrent acute coronary syndromes (19.0 percent vs. 2.6 percent, P < or = 0.001); repeated angioplasty (32.0 percent vs. 12.4 percent, P < or = 0.001), particularly of non-infarct-related lesions (17.0 percent vs. 4.6 percent, P < or = 0.001); and coronary-artery bypass graft surgery (35.0 percent vs. 11.1 percent, P < or = 0.001).
Conclusions: Patients with acute myocardial infarction may harbor multiple complex coronary plaques that are associated with adverse clinical outcomes. Plaque instability may be due to a widespread process throughout the coronary vessels, which may have implications for the management of acute ischemic heart disease.
Answer: Complex coronary anatomy can significantly influence the outcomes and decisions in coronary artery bypass graft surgery (CABG). The SYNTAX trial, which compared outcomes of CABG with percutaneous coronary intervention (PCI) in patients with 3-vessel and/or left main disease, used the SYNTAX score to quantify the complexity of coronary artery disease (CAD) by combining anatomic characteristics of each significant lesion. The study found that the outcome of CABG was excellent and independent of the SYNTAX score, suggesting that the degree of coronary complexity did not significantly affect the overall 2-year major adverse cerebrovascular and cardiac events (MACCE) rate. However, incomplete revascularization rather than the degree of coronary complexity adversely affected late outcomes of coronary bypass (PUBMED:21168023).
In the context of modern bypass surgery, it is recognized that bypass surgery improves the prognosis of patients with complex coronary anatomy compared to PCI (PUBMED:26583818). For patients with complex disease of the left anterior descending artery, minimally invasive direct coronary artery bypass (MIDCAB) and PCI with drug-eluting stents have similar excellent intermediate-term survival, although reintervention requirements are lower after surgery (PUBMED:32653282). Additionally, coronary bypass surgery has been shown to be superior for more complex coronary artery disease, and heart team decisions should be based on vascular anatomy, expected revascularization success, operative risk, and patient preference (PUBMED:34416772).
Furthermore, the presence of multiple complex coronary plaques in patients with acute myocardial infarction is associated with adverse clinical outcomes, indicating that plaque instability may be due to a widespread process throughout the coronary vessels, which may have implications for the management of acute ischemic heart disease (PUBMED:11006367).
In summary, while complex coronary anatomy is a significant factor in the management of CAD, modern bypass surgery techniques and decision-making processes have evolved to address these complexities effectively, with the goal of improving patient outcomes and reducing the need for repeat revascularization. |
Instruction: Does myocardial thallium-201 SPECT combined with electron beam computed tomography improve the detectability of coronary artery disease?
Abstracts:
abstract_id: PUBMED:9795705
Does myocardial thallium-201 SPECT combined with electron beam computed tomography improve the detectability of coronary artery disease?--comparative study of diagnostic accuracy. Objective: The aim of this study is to evaluate the diagnostic accuracy of myocardial 201Tl SPECT combined with EBT for detecting CAD.
Methods: The study was based on 34 patients with suspected CAD, who had EBT and myocardial 201Tl SPECT. The CAD was diagnosed by the findings of coronary arteriography. Sensitivity, specificity and accuracy of EBT, myocardial 201Tl SPECT and the combined diagnosis on a per vessel basis and a per-patient basis were studied.
Results: The sensitivity for detecting CAD of myocardial 201Tl SPECT, EBT and the combined diagnosis was 85%, 77%, and 62%, respectively. No significant difference in the accuracy of myocardial 201Tl SPECT, EBT and the combined diagnosis was observed on a patient basis and per vessel basis. In the over 70 yr age subgroup, the sensitivity and accuracy of EBT for detecting LAD lesion were significantly superior to those of myocardial 201Tl SPECT. Regardless of age-based subgroups and gender, the combined diagnosis did not contribute to an improvement in diagnostic accuracy.
Conclusion: Although the sensitivity of EBT for detecting LAD lesion in patients over 70 yr of age was significantly higher than that of myocardial 201Tl SPECT, in the detectability of CAD, combined use of myocardial 201Tl SPECT and EBT offers no improvement.
abstract_id: PUBMED:9521332
18F]fluorodeoxyglucose single photon emission computed tomography: can it replace PET and thallium SPECT for the assessment of myocardial viability? Background: New high-energy collimators for single photon emission computed tomography (SPECT) cameras have made imaging of positron-emitting tracers, such as [18F]fluorodeoxyglucose (18FDG), possible. We examined differences between SPECT and PET technologies and between 18FDG and thallium tracers to determine whether 18FDG SPECT could be adopted for assessment of myocardial viability.
Methods And Results: Twenty-eight patients with chronic coronary artery disease (mean left ventricular ejection fraction [LVEF]=33+/-15% at rest) underwent 18FDG SPECT, 18FDG PET, and thallium SPECT studies. Receiver operating characteristic curves showed overall good concordance between SPECT and PET technologies and thallium and 18FDG tracers for assessing viability regardless of the level of 18FDG PET cutoff used (40% to 60%). However, in the subgroup of patients with LVEF< or =25%, at 60% 18FDG PET threshold value, thallium tended to underestimate myocardial viability. In a subgroup of regions with severe asynergy, there were considerably more thallium/18FDG discordances in the inferior wall than elsewhere (73% versus 27%, P<.001), supporting attenuation of thallium as a potential explanation for the discordant observations. When uptake of 18FDG by SPECT and PET was compared in 137 segments exhibiting severely irreversible thallium defects (scarred by thallium), 59 (43%) were viable by 18FDG PET, of which 52 (88%) were also viable by 18FDG SPECT. However, of the 78 segments confirmed to be nonviable by 18FDG PET, 57 (73%) were nonviable by 18FDG SPECT (P<.001).
Conclusions: Although 18FDG SPECT significantly increases the sensitivity for detection of viable myocardium in tissue declared nonviable by thallium (to 88% of the sensitivity achievable by PET), it will occasionally (27% of the time) result in falsely identifying as viable tissue that has been identified as nonviable by both PET and thallium.
abstract_id: PUBMED:1948113
Thallium 201 for assessment of myocardial viability. Left ventricular (LV) performance is reduced in a large subset of patients with chronic coronary artery disease (CAD) and LV dysfunction on the basis of regionally ischemic or hibernating myocardium rather than irreversibly infarcted tissue. The detection of dysfunctional but viable myocardium is clinically relevant since regional and global LV function in such patients will improve after revascularization procedures; however, the identification of patients with such potentially reversible LV dysfunction is difficult. Although thallium 201 imaging may be of value in detecting viable myocardium if regions with perfusion defects during exercise demonstrate redistribution of thallium on a 3- to 4-hour resting image, thallium defects often appear persistently "fixed" within regions of severely ischemic or hibernating myocardium. It has been shown that up to 50% of regions with apparently irreversible thallium defects will improve in function after revascularization. Thus, standard exercise-redistribution thallium scintigraphy may not differentiate LV dysfunction arising from infarcted versus hibernating myocardium. The precision with which thallium imaging identifies viable myocardium can be improved greatly by additional studies once 4-hour redistribution imaging demonstrates an irreversible thallium defect. These additional studies include late (24-hour) redistribution imaging, repeat imaging after thallium reinjection, or a combination of thallium reinjection followed by late imaging. Several recent studies suggest that thallium reinjection techniques, by demonstrating thallium uptake in dysfunctional regions with apparently irreversible defects, predict improvement after revascularization with similar predictive accuracy as that achieved using metabolic imaging with positron emission tomography (PET). Studies directly comparing such thallium methods and PET, which thus far involve only small numbers of patients, suggest that the assessment of regional metabolic activity using PET and the assessment of regional thallium activity using single photon emission computed tomography provide concordant results. These findings, if confirmed by larger ongoing studies, suggest that thallium reinjection imaging is a convenient, clinically accurate, and relatively inexpensive method with which to identify viable myocardium in patients with chronic CAD and LV dysfunction.
abstract_id: PUBMED:9412421
Simultaneous assessment of perfusion and myocardial viability with 2 isotope studies (thallium at rest and sestamibi in exercise). Initial experience in Mexico and Latin America Unlabelled: Rest-stress sestamibi single photon emission computed tomography (SPECT) has sensitivity and specificity similar to those of thallium 201 SPECT for detection of coronary artery disease. However, sestamibi is not ideal agent to study myocardial viability. There is not published experience in Latin American using dual isotope SPECT protocol to evaluate myocardial perfusion and viability. We studied 44 consecutive patients with coronary artery disease, 37 of them with previous myocardial infarction. Coronary angiography was performed in all patients. We used a 3 mCi rest T 201 SPECT followed by stress and 25 mCi sestamibi injection. Sestamibi SPECT was performed 30 minutes after exercise or 1 hour after pharmacologic stress with dipyridamole. To validate perfusion findings patients returning next day for rest sestamibi injection and SPECT. Scintigraphic data were read by two blinded expert using 20 SPECT segment analysis and each segment was scored using 5 points scoring system (0 = normal, 4 = absent uptake). The segmental score agreement between rest thallium 201 and rest sestamibi and the comparison of defect reversibility percentage and non reversibility between both protocols was 90.7%.
Conclusion: Separate acquisition dual isotope myocardial perfusion SPECT is accurate for coronary artery disease evaluation. It showed a good agreement with rest-stress sestamibi SPECT for assessment of rest perfusion defects and reversibility and it was a better method to evaluate myocardial viability.
abstract_id: PUBMED:7819722
Exercise echocardiography and thallium 201 single-photon emission computed tomography in male patients after an episode of unstable coronary artery disease. To compare modern, digital exercise echocardiography and thallium 201 single-photon emission computed tomography (SPECT) in patients with unstable coronary artery disease, 65 men unselected with regard to echocardiography were prospectively investigated 1 month after an episode of unstable angina or non-Q-wave myocardial infarction. Exercise echocardiography and 201Tl SPECT were performed on consecutive days in connection with a standard symptom-limited upright bicycle test and analyzed in a 9-segment model. Coronary angiography was performed in all but 1 patient and 60 patients had significant coronary lesions. Wall motion abnormalities were seen in 53 patients (81%) at rest and perfusion defects in 57 patients (88%) at the redistribution images. New or worsening of wall motion abnormalities were seen in 55 patients, either seated at peak exercise or recumbent after exercise, and 43 patients had reversible or partially reversible 201Tl scintigraphic defects (P = .02). The segmental agreement between wall motion abnormalities and scintigraphic defects was low (58%). The additional value of exercise echocardiography and 201Tl SPECT to exercise test was greatest in patients with one-vessel disease. Thus, 1 month after an episode of unstable coronary artery disease in men, there is a high incidence of significant coronary stenoses as well as signs of ischemia shown both by wall motion abnormalities during exercise echocardiography and by postexercise studies with 201Tl SPECT. Exercise echocardiography gives a higher diagnostic yield regarding occurrence of reversible ischemia.
abstract_id: PUBMED:19024194
Predictors of positive thallium 201 single photon emission computed tomography in patients of end stage renal disease. Objectives: To study coronary artery disease (CAD) risk factors predicting positive thallium-201 single photon emission computed tomography (SPECT) indicating underlying CAD among patients of end stage renal disease. PLACE AND DESIGN: This cross-sectional (analytical) study was done at Department of Cardiology, Punjab Institute of Cardiology, from April 2004 to Dec 2007.
Methods: One hundred consecutive patients with ESRD undergoing thallium SPECT as a routine screening test before renal transplant were studied. Dipyridamole thallium SPECT was performed in patients who were unable to exercise.
Results: Thallium SPECT was positive in 47 (47%) cases. There were significant differences in age, underlying diabetic nephropathy and total cholesterol levels among patients positive and negative on thallium SPECT. Among the risk factors age and underlying diabetic nephropathy were significantly associated (p < 0.05) with a positive thallium SPECT in patients with ESRD.
Conclusion: Positive thallium SPECT indicating underlying CAD was observed in a significant number of patients with ESRD awaiting renal transplant. Presence of advanced age and underlying diabetic nephropathy predict a positive thallium SPECT in this population.
abstract_id: PUBMED:10426854
Nitrate-enhanced thallium 201 single-photon emission computed tomography imaging in hibernating myocardium. Objectives: This study tested the usefulness of nitrate-enhanced thallium 201 imaging for detecting myocardial viability.
Background: Previous work suggests that nitrates enhance the ability of (201)Tl imaging to detect viable myocardium.
Methods: Eighteen patients with coronary artery disease underwent (201)Tl imaging at rest, after 4 hours of redistribution, and during intravenous nitroglycerine infusion (mean dose = 5.96 +/- 5.37 microgram/kg/min). Twelve patients had their echocardiograms repeated after revascularization. Perfusion and wall motion were scored from 0 to 2 (absent to normal).
Results: All the regions identified as viable by the rest/redistribution pair of scans were identified as viable by the rest/nitroglycerine pair of scans. Ninety-one percent of these regions were identified as viable by the single nitroglycerine scan alone. In patients who underwent revascularization, the total (201)Tl perfusion score improved from 193 to 214 after revascularization (P =.009). Wall motion score improved from 151 to 168 after revascularization (P =.09). Both the rest/nitroglycerine and rest/redistribution studies correctly predicted 14 (88%) of 16 regions that improved after revascularization. Most importantly, the rest/nitroglycerine and rest/redistribution studies were able to predict postrevascularization myocardial viability (absence of akinesis or dyskinesis after revascularization), with a sensitivity of 95% and 92%, respectively, and a predictive accuracy of 84.4%.
Conclusions: Nitroglycerine infusion during (201)Tl imaging is a useful technique for detecting underperfused, viable myocardium, requires less time to perform than rest/redistribution imaging, and may allow detection of viable myocardium with a single (201)Tl single-photon emission computed tomographic study.
abstract_id: PUBMED:9817471
Thallium scintigraphy compared with 18F-fluorodeoxyglucose positron emission tomography for assessing myocardial viability in patients with moderate versus severe left ventricular dysfunction. Thallium-201 reinjection imaging and positron emission tomography provide concordant information regarding myocardial viability in many patients with coronary artery disease and left ventricular (LV) dysfunction. It is unclear whether this concordance applies to patients with severe, as well as those with moderate, LV dysfunction. We studied 44 patients with chronic coronary artery disease and LV dysfunction, subgrouped on the basis of severity of dysfunction: 23 patients had moderate and 21 had severe dysfunction (ejection fractions 34 +/- 6% and 19 +/- 6%). Patients underwent exercise thallium single-photon emission computed tomography (SPECT) with 3- to 4-hour redistribution and reinjection imaging, as well as positron emission tomography (PET) imaging with 18fluorodeoxyglucose and 15O-water. Data were analyzed quantitatively in aligned transaxial PET and SPECT tomograms. A myocardial region was considered nonviable by PET if 18fluorodeoxyglucose activity was <50% of that in a normal region, associated with proportional reduction in blood flow. Similarly, regions were considered nonviable by thallium if activity was <50% of activity in normal regions on redistribution and reinjection studies. Thallium SPECT and PET data were concordant regarding viability in 98% and 93% of myocardial regions, respectively, in patients with moderate and with severe LV dysfunction. Lower concordance was observed only when regions with severe irreversible thallium perfusion defects on redistribution images were considered in both groups: 86% and 78%, respectively (p <0.01). Thus, thallium SPECT with reinjection yields information regarding regional myocardial viability that is similar to that provided by PET in patients with severe as well as moderate LV dysfunction. However, there is discordance in >20% of regions manifesting severe irreversible thallium defects in patients with severely reduced LV function.
abstract_id: PUBMED:7594034
Noninvasive prediction of coronary atherosclerosis by quantification of coronary artery calcification using electron beam computed tomography: comparison with electrocardiographic and thallium exercise stress test results. Objectives: This study was designed to compare the usefulness of electron beam computed tomography for prediction of coronary stenosis with that of electrocardiographic (ECG) and thallium exercise tests.
Background: Electron beam computed tomography can quantify coronary calcifications; however, its clinical value has yet to be established.
Methods: Using the volume mode of electron beam computed tomography, we studied 251 consecutive patients who underwent elective coronary angiography because of suspected coronary artery disease and compared the results with those of ECG and thallium exercise tests. The total coronary calcification score was calculated by multiplying the area ( > or = 2 pixels) of calcification (peak density > or = 130 Hounsfield units) by an arbitrarily weighted density score (0 to 4) based on its peak density. The mean of two scans was log transformed.
Results: Calcification was first noted in women in the 4th decade of life, approximately 10 years later than its occurrence in men. Among patients with advanced atherosclerosis (two- and three-vessel disease), calcification scores were uniformly high in women but ranged widely in men. Nine percent of patients with significant stenoses ( > or = 75% by densitometry) had no calcification. The calcification scores of patients with significant stenosis in at least one vessel were significantly higher than those of patients without significant stenosis in the study group as a whole and in most patient subgroups classified according to age and gender. A cutoff calcification score for prediction of significant stenosis, determined by receiver operating characteristic curve analysis, showed high sensitivity (0.77) and specificity (0.86) in all study patients; sensitivity was similarly high even in older patients ( > or = 70 years) and was enhanced in middle-aged patients (40 to < or = 60 years). The difference in specificity between calcification scores and ECG exercise test results had borderline significance (p = 0.058) and that between calcification scores and thallium test results was significant (p = 0.001). The latter difference became small but remained significant (p = 0.01) even after the reevaluation of thallium test results in light of each subject's clinical data.
Conclusions: Quantification of coronary artery calcification with electron beam computed tomography noninvasively predicted angiographically confirmed coronary stenosis. Results obtained with this method were at least as useful and potentially better in some patient groups than those obtained with thallium and ECG exercise testing.
abstract_id: PUBMED:10645919
Severity of coronary artery calcification by electron beam computed tomography predicts silent myocardial ischemia. Background: Detection of subclinical coronary artery disease (CAD) before the development of life-threatening cardiac complications has great potential clinical relevance. Electron beam computed tomography (EBCT) is currently the only noninvasive test that can detect CAD in all stages of its development and thus has the potential to be an excellent screening technique for identifying asymptomatic subjects with underlying myocardial ischemia.
Methods And Results: Over 2.5 years, we prospectively studied 3895 generally asymptomatic subjects with EBCT, 411 of whom had stress myocardial perfusion tomography (SPECT) within a close (median, 17 days) time period. SPECT and exercise treadmill results were compared with the coronary artery calcium score (CACS) as assessed by EBCT. The total CACS identified a population at high risk for having myocardial ischemia by SPECT although only a minority of subjects (22%) with an abnormal EBCT had an abnormal SPECT. No subject with CACS <10 had an abnormal SPECT compared with 2.6% of those with scores from 11 to 100, 11.3% of those with scores from 101 to 399, and 46% of those with scores >/=400 (P<0.0001). CACS predicted an abnormal SPECT regardless of subject age or sex.
Conclusions: CACS identifies a high-risk group of asymptomatic subjects who have clinically important silent myocardial ischemia. Our results support the role of EBCT as the initial screening tool for identifying individuals at various stages of CAD development for whom therapeutic decision making may differ considerably.
Answer: Based on the provided abstracts, the combination of myocardial thallium 201 SPECT with electron beam computed tomography (EBT) does not appear to improve the detectability of coronary artery disease (CAD) over either modality alone. In a comparative study, the sensitivity for detecting CAD of myocardial 201Tl SPECT, EBT, and the combined diagnosis was 85%, 77%, and 62%, respectively. No significant difference in the accuracy of myocardial 201Tl SPECT, EBT, and the combined diagnosis was observed on a patient basis and per vessel basis. Specifically, in the over 70-year age subgroup, the sensitivity and accuracy of EBT for detecting left anterior descending (LAD) lesion were significantly superior to those of myocardial 201Tl SPECT. However, regardless of age-based subgroups and gender, the combined diagnosis did not contribute to an improvement in diagnostic accuracy (PUBMED:9795705).
Therefore, the evidence suggests that while EBT may have a higher sensitivity for detecting LAD lesions in older patients, the addition of myocardial 201Tl SPECT to EBT does not enhance the overall diagnostic accuracy for CAD. |
Instruction: Efficiency and productivity change in the English National Health Service: can data envelopment analysis provide a robust and useful measure?
Abstracts:
abstract_id: PUBMED:29501073
The development of an outcome measure for liaison mental health services. Aims and methodTo develop and pilot a clinician-rated outcome scale to evaluate symptomatic outcomes in liaison psychiatry services. Three hundred and sixty patient contacts with 207 separate individuals were rated using six subscales (mood, psychosis, cognition, substance misuse, mind-body problems and behavioural disturbance) plus two additional items (side-effects of medication and capacity to consent for medical treatment). Each item was rated on a five-point scale from 0 to 5 (nil, mild, moderate, severe and very severe).
Results: The liaison outcome measure was acceptable and easy to use. All subscales showed acceptable interrater reliability, with the exception of the mind-body subscale. Overall, the measure appears to show stability and sensitivity to change.Clinical implicationsThe measure provides a useful and robust way to determine symptomatic change in a liaison mental health setting, although the mind-body subscale requires modification.Declaration of interestNone.
abstract_id: PUBMED:31219769
Reduced exercise ventilatory efficiency in adults with cystic fibrosis and normal to moderately impaired lung function. Despite being a hallmark and an independent prognostic factor in several cardiopulmonary diseases, ventilatory efficiency-i.e., minute ventilation/carbon dioxide output relationship (V̇e/V̇co2)-has never been systematically explored in cystic fibrosis (CF). To provide a comprehensive frame of reference regarding measures of ventilatory efficiency in CF adults with normal to moderately impaired lung function and to confirm the hypothesis that V̇e/V̇co2 is a sensitive marker of early lung disease. CF patients were divided into three groups, according to their spirometry: normal (G1), mild impairment (G2), and moderate impairment (G3) in lung function. All participants underwent incremental cardiopulmonary exercise testing on a cycle ergometer. Lowest V̇e/V̇co2 ratio (nadir) and the slope and the intercept of the linear region of the V̇e/V̇co2 relationship were contrasted in a two-center retrospective analysis, involving 72 CF patients and 36 healthy controls (HC). Compared with HC, CF patients had significantly higher V̇e/V̇co2 nadir, slope, and intercept (P < 0.001, P < 0.001, and P = 0.049, respectively). Subgroup analysis revealed significant differences in nadir (P = 0.001) and slope (P = 0.012) values even between HC and G1. Dynamic hyperinflation related negatively with slope (P = 0.045) and positively with intercept (P = 0.001), while no impact on nadir was observed. Ventilatory inefficiency is a clear feature of adults with CF, even among patients with normal spirometry. V̇e/V̇co2 nadir seems to be the most reliable metric to describe ventilatory efficiency in CF adults. Further prospective studies are needed to clarify whether V̇e/V̇co2 could represent a useful marker in the evaluation of early lung disease in CF.NEW & NOTEWORTHY This is the first study to investigate ventilatory efficiency in a cohort of adult cystic fibrosis (CF) patients with nonsevere lung disease. The finding of impaired ventilatory efficiency in patients with normal lung function confirms the higher sensitivity of exercise testing in detecting early lung disease compared with spirometry. Dynamic hyperinflation plays a significant role in determining the behavior of V̇e/V̇co2 slope and intercept values with increasing lung function impairment. Apparently free from interference from mechanical constraints, V̇e/V̇co2 nadir seems the most reliable parameter to evaluate ventilatory efficiency in CF adults.
abstract_id: PUBMED:29325701
Developmental cognitive neuroscience using latent change score models: A tutorial and applications. Assessing and analysing individual differences in change over time is of central scientific importance to developmental neuroscience. However, the literature is based largely on cross-sectional comparisons, which reflect a variety of influences and cannot directly represent change. We advocate using latent change score (LCS) models in longitudinal samples as a statistical framework to tease apart the complex processes underlying lifespan development in brain and behaviour using longitudinal data. LCS models provide a flexible framework that naturally accommodates key developmental questions as model parameters and can even be used, with some limitations, in cases with only two measurement occasions. We illustrate the use of LCS models with two empirical examples. In a lifespan cognitive training study (COGITO, N = 204 (N = 32 imaging) on two waves) we observe correlated change in brain and behaviour in the context of a high-intensity training intervention. In an adolescent development cohort (NSPN, N = 176, two waves) we find greater variability in cortical thinning in males than in females. To facilitate the adoption of LCS by the developmental community, we provide analysis code that can be adapted by other researchers and basic primers in two freely available SEM software packages (lavaan and Ωnyx).
abstract_id: PUBMED:30936275
Considerations for Evaluating and Recommending Worker Productivity Outcome Measures: An Update from the OMERACT Worker Productivity Group. Objective: The Outcome Measures in Rheumatology (OMERACT) Worker Productivity Group continues efforts to assess psychometric properties of measures of presenteeism.
Methods: Psychometric properties of single-item and dual answer multiitem scales were assessed, as well as methods to evaluate thresholds of meaning.
Results: Test-retest reliability and construct validity of single item global measures was moderate to good. The value of measuring both degree of difficulty and amount of time with difficulty in multiitems questionnaires was confirmed. Thresholds of meaning vary depending on methods and external anchors applied.
Conclusion: We have advanced our understanding of the performance of presenteeism measures and have developed approaches to describing thresholds of meaning.
abstract_id: PUBMED:25746196
Patients' impression of change following treatment for chronic pain: global, specific, a single dimension, or many? Unlabelled: The Patient Global Impression of Change (PGIC) measure has frequently been used as an indicator of meaningful change in treatments for chronic pain. However, limited research has examined the validity of PGIC items despite their wide adoption in clinical trials for pain. Additionally, research has not yet examined predictors of PGIC ratings following psychologically based treatment for pain. The purpose of the present study was to examine the validity, factor structure, and predictors of PGIC ratings following an interdisciplinary psychologically based treatment for chronic pain. Patients with chronic pain (N = 476) completed standard assessments of pain, daily functioning, and depression before and after a 4-week treatment program based on the principles of acceptance and commitment therapy. Following the program, patients rated 1 item assessing their impression of change overall and several items assessing their impression of more specific changes: physical and social functioning, work-related activities, mood, and pain. Results indicated that the global and specific impression of change items represent a single component. In the context of the acceptance and commitment therapy-based treatment studied here, overall PGIC ratings appeared to be influenced to a greater degree by patients' experienced improvements in physical activities and mood than by improvements in pain. The findings suggest that in addition to a single overall PGIC rating, domain-specific items may be relevant for some treatment trials.
Perspective: This article reports on the validity and predictors of patients' impression of change ratings following interdisciplinary psychologically based treatment for pain. In addition to a single overall PGIC rating, domain-specific items may be important for clinicians and researchers to consider depending on the focus of treatment.
abstract_id: PUBMED:35258462
A Tailored App for the Self-management of Musculoskeletal Conditions: Evidencing a Logic Model of Behavior Change. Background: Musculoskeletal conditions such as joint pain are a growing problem, affecting 18.8 million people in the United Kingdom. Digital health interventions (DHIs) are a potentially effective way of delivering information and supporting self-management. It is vital that the development of such interventions is transparent and can illustrate how individual components work, how they link back to the theoretical constructs they are attempting to change, and how this might influence outcomes. getUBetter is a DHI developed to address the lack of personalized, supported self-management tools available to patients with musculoskeletal conditions by providing knowledge, skills, and confidence to navigate through a self-management journey.
Objective: The aim of this study was to map a logic model of behavior change for getUBetter to illustrate how the content and functionality of the DHI are aligned with recognized behavioral theory, effective behavior change techniques, and clinical guidelines.
Methods: A range of behavior change models and frameworks were used, including the behavior change wheel and persuasive systems design framework, to map the logic model of behavior change underpinning getUBetter. The three main stages included understanding the behavior the intervention is attempting to change, identifying which elements of the intervention might bring about the desired change in behavior, and describing intervention content and how this can be optimally implemented.
Results: The content was mapped to 25 behavior change techniques, including information about health consequences, instruction on how to perform a behavior, reducing negative emotions, and verbal persuasion about capability. Mapping to the persuasive system design framework illustrated the use of a number of persuasive design principles, including tailoring, personalization, simulation, and reminders.
Conclusions: This process enabled the proposed mechanisms of action and theoretical foundations of getUBetter to be comprehensively described, highlighting the key techniques used to support patients to self-manage their condition. These findings provide guidance for the ongoing evaluation of the effectiveness (including quality of engagement) of the intervention and highlight areas that might be strengthened in future iterations.
abstract_id: PUBMED:37945547
Climate change: Attitudes and concerns of, and learnings from, people with neurological conditions, carers, and health care professionals. Objective: Concern about climate change among the general public is acknowledged by surveys. The health care sector must play its part in reducing greenhouse gas emissions and adapting to a changing climate, which will require the support of its stakeholders including those with epilepsy, who may be especially vulnerable. It is important to understand this community's attitudes and concerns about climate change and societal responses.
Methods: A survey was made available to more than 100 000 people among a section of the neurological community (patients, carers, and clinicians), focused on epilepsy. We applied quantitative analysis of Likert scale responses supported by qualitative analyses of free-text questions with crossover analyses to identify consonance and dissonance between the two approaches.
Results: A small proportion of potential respondents completed the survey; of 126 respondents, 52 had epilepsy and 56 explicitly declared no illness. The survey indicated concern about the impact of climate change on health within this neurological community focused on epilepsy. More than half of respondents considered climate change to have been bad for their health, rising to 68% in a subgroup with a neurological condition; over 80% expected climate change to harm their health in future. Most (>75%) believed that action to reduce greenhouse gas emissions will lead to improved health and well-being. The crossover analysis identified cost and accessibility as significant barriers.
Significance: The high level of concern about climate change impacts and positive attitudes toward policies to reduce greenhouse gas emissions provide support for climate action from the epilepsy community. However, if policies are implemented without considering the needs of patients, they risk being exclusionary, worsening inequalities, and further threatening neurological health and well-being.
abstract_id: PUBMED:37410518
Capability, Opportunity, and Motivation Model for Behavior Change in People With Asthma: Protocol for a Cross-Sectional Study. Background: Asthma is a common lung condition that cannot be cured, but it can usually be effectively managed using available treatments. Despite this, it is widely acknowledged that 70% of patients do not adhere to their asthma treatment. Personalizing treatment by providing the most appropriate interventions based on the patient's psychological or behavioral needs produces successful behavior change. However, health care providers have limited available resources to deliver a patient-centered approach for their psychological or behavioral needs, resulting in a current one-size-fits-all strategy due to the nonfeasible nature of existing surveys. The solution would be to provide health care professionals with a clinically feasible questionnaire that identifies the patient's personal psychological and behavioral factors related to adherence.
Objective: We aim to apply the capability, opportunity, and motivation model of behavior change (COM-B) questionnaire to detect a patient's perceived psychological and behavioral barriers to adherence. Additionally, we aim to explore the key psychological and behavioral barriers indicated by the COM-B questionnaire and adherence to treatment in patients with confirmed asthma with heterogeneous severity. Exploratory objectives will include a focus on the associations between the COM-B questionnaire responses and asthma phenotype, including clinical, biological, psychosocial, and behavioral components.
Methods: In a single visit, participants visiting Portsmouth Hospital's asthma clinic with a diagnosis of asthma will be asked to complete a 20-minute questionnaire on an iPad about their psychological and behavioral barriers following the theoretical domains framework and capability, opportunity, and motivation model. Participants' data are routinely collected, including demographics, asthma characteristics, asthma control, asthma quality of life, and medication regime, which will be recorded on an electronic data capture form.
Results: The study is already underway, and it is anticipated that the results will be available by early 2023.
Conclusions: The COM-B asthma study will investigate an easily accessible theory-based tool (a questionnaire) for identifying psychological and behavioral barriers in patients with asthma who are not adhering to their treatment. This will provide useful information on the behavioral barriers to asthma adherence and whether or not a questionnaire can be used to identify these needs. The highlighted barriers will improve health care professionals' knowledge of this important subject, and participants will benefit from the study by removing their barriers. Overall, this will enable health care professionals to use effective individualized interventions to support improved medication adherence while also recognizing and meeting the psychological needs of patients with asthma.
Trial Registration: ClinicalTrials.gov NCT05643924; https://clinicaltrials.gov/ct2/show/NCT05643924.
International Registered Report Identifier (irrid): DERR1-10.2196/44710.
abstract_id: PUBMED:34111808
This is the day your life must surely change : Prioritising behavioural change in musculoskeletal practice. Behavioural change is the modification or transformation of behaviour. Health behaviour has been defined as, 'any activity undertaken for the purpose of preventing or detecting disease or for improving health and wellbeing' (Bennell et al., 2019 [1]). For a smoker it is acting on the decision to stop or reduce the number of cigarettes smoked, for someone with a higher than ideal body mass index, it is acting to reduce weight and for someone who isn't achieving ideal levels of exercise (Briggs et al., 2020 [2]) it is the decision and action to increase metabolic activity. With increased understanding of the importance of self-management and impact of lifestyle, clinicians supporting people presenting with musculoskeletal symptoms have recognized that behavioural change is an essential component of the care we need to prioritize. It is about sharing the benefits of lifestyle choices that promote health, discussing how to introduce and maintain them, and supporting people through their journey of change, so the change becomes the new normal. Behavioural change science assumes that (i) morbidity and mortality are due in part to certain behaviours and (ii) these behaviours are modifiable. The aim of this debate article is to discuss reasons why clinicians must include long term behavioural change into their management plan and methods they may use to facilitate the change ensuring recipients are empowered to act and make today the day their 'lives will surely change'.
abstract_id: PUBMED:38311180
Meaningful Symptomatic Change in Patients With Myelofibrosis From the SIMPLIFY Studies. Objectives: Patients with myelofibrosis develop symptoms due to bone marrow fibrosis, systemic inflammation, and/or organomegaly. Alleviating symptoms improves overall quality of life. Clinical trials have historically defined symptom response as a reduction of at least 50% in Total Symptom Score at week 24 compared with baseline. Whether 50% constitutes a meaningful benefit has not been established. This study determined the meaningful change threshold (MCT) for 2 momelotinib phase III trials, SIMPLIFY-1 and SIMPLIFY-2.
Methods: The absolute and percentage MCT was determined using anchor-based methods applied to the modified Myeloproliferative Neoplasm Symptom Assessment Form v2.0 and Patient Global Impression of Change. MCTs were applied retrospectively to determine responder rates. Generalized estimating equations estimated the treatment-related difference in likelihood of improvement.
Results: In SIMPLIFY-1, a Janus kinase inhibitor-naive population, the MCT was 8 points. In SIMPLIFY-2, a previously Janus kinase inhibitor-treated population, the MCT was 6 points. A 32% MCT was determined in both studies, showing that the historic 50% reduction threshold may be a conservative choice. In SIMPLIFY-1, a similar proportion of patients achieved responder status with 24 weeks of momelotinib or ruxolitinib therapy based on the absolute MCT (39% vs 41%, respectively). In SIMPLIFY-2, a significantly greater proportion of patients treated with momelotinib achieved responder states compared with best available therapy based on absolute and percent change MCTs.
Conclusions: This study demonstrates that momelotinib provided clinically meaningful symptom benefit for patients with myelofibrosis and provides insight into the appropriateness of the symptom change threshold used in historical studies.
Answer: The provided abstracts do not directly address the question of efficiency and productivity change in the English National Health Service (NHS) using data envelopment analysis (DEA). DEA is a non-parametric method used in operations research and economics for the estimation of production frontiers, which can be used to evaluate the efficiency of decision-making units (e.g., hospitals, clinics) in the healthcare sector. It helps to assess relative efficiency by comparing the inputs (such as staff, equipment, and capital) and outputs (such as patient outcomes, service quality, and throughput) of different units.
While the abstracts cover a range of topics related to outcome measures, behavior change, and symptom change in various health conditions and settings, they do not specifically discuss the application of DEA to measure efficiency and productivity in the NHS. For example, the development of an outcome measure for liaison mental health services (PUBMED:29501073) and the evaluation of worker productivity outcome measures (PUBMED:30936275) are relevant to healthcare performance but do not mention DEA.
To answer the question, one would need to look for studies that have applied DEA to the NHS or similar healthcare systems to measure efficiency and productivity changes over time. Such studies would typically involve the collection of relevant input and output data from NHS units, the application of DEA to this data, and the interpretation of the results to draw conclusions about the relative efficiency and productivity of the units analyzed. |
Instruction: Can the cardiopulmonary 6-minute walk test reproduce the usual activities of patients with heart failure?
Abstracts:
abstract_id: PUBMED:12185855
Can the cardiopulmonary 6-minute walk test reproduce the usual activities of patients with heart failure? Objective: The 6-minute walk test is an way of assessing exercise capacity and predicting survival in heart failure. The 6-minute walk test was suggested to be similar to that of daily activities. We investigated the effect of motivation during the 6-minute walk test in heart failure.
Methods: We studied 12 males, age 45 +/- 12 years, ejection fraction 23 +/- 7%, and functional class III. Patients underwent the following tests: maximal cardiopulmonary exercise test on the treadmill (max), cardiopulmonary 6-minute walk test with the walking rhythm maintained between relatively easy and slightly tiring (levels 11 and 13 on the Borg scale) (6EB), and cardiopulmonary 6-minute walk test using the usual recommendations (6RU). The 6EB and 6RU tests were performed on a treadmill with zero inclination and control of the velocity by the patient.
Results: The values obtained in the max, 6EB, and 6RU tests were, respectively, as follows: O2 consumption (ml.kg-1.min-1) 15.4 +/- 1.8, 9.8 +/- 1.9 (60 +/- 10%), and 13.3 +/- 2.2 (90 +/- 10%); heart rate (bpm) 142 +/- 12, 110 +/- 13 (77 +/- 9%), and 126 +/- 11 (89 +/- 7%); distance walked (m) 733 +/- 147, 332 +/- 66, and 470 +/- 48; and respiratory exchange ratio (R) 1.13 +/- 0.06, 0.9 +/- 0.06, and 1.06 +/- 0.12. Significant differences were observed in the values of the variables cited between the max and 6EB tests, the max and 6RU tests, and the 6EB and 6RU tests (p < 0.05).
Conclusion: Patients, who undergo the cardiopulmonary 6-minute walk test and are motivated to walk as much as they possibly can, usually walk almost to their maximum capacity, which may not correspond to that of their daily activities. The use of the Borg scale during the cardiopulmonary 6-minute walk test seems to better correspond to the metabolic demand of the usual activities in this group of patients.
abstract_id: PUBMED:28460707
Could peak oxygen uptake be estimated from proposed equations based on the six-minute walk test in chronic heart failure subjects? Objectives: To evaluate the agreement between the measured peak oxygen uptake (VO2peak) and the VO2peak estimated by four prediction equations based on the six-minute walk test (6MWT) in chronic heart failure patients.
Method: Thirty-six chronic heart failure patients underwent cardiopulmonary exercise testing and the 6MWT to assess their VO2peak. Four previously published equations that include the variable six-minute walk distance were used to estimate the VO2peak: Cahalin, 1996a (1); Cahalin, 1996b (2); Ross, 2010 (3); and Adedoyin, 2010 (4). The agreement between the VO2peak in the cardiopulmonary exercise testing and the estimated values was assessed using the Bland-Altman method. A p-value of <0.05 was considered statistically significant.
Results: All estimated VO2peak values presented moderate correlation (ranging from 0.55 to 0.70; p<0.001) with measured VO2peak values. Equations 2, 3, and 4 underestimated the VO2peak by 30%, 15.2%, and 51.2%, respectively, showing significant differences from the actual VO2peak measured in the cardiopulmonary exercise testing (p<0.0001 for all), and the limits of agreement were elevated. The VO2peak estimated by equation 1 was similar to that measured by the cardiopulmonary exercise testing, and despite the agreement, bias increased as VO2peak increased.
Conclusions: Only equation 1 showed estimated VO2peak similar to the measured VO2peak; however, a large limits of agreement range (∼3 METs) does not allow its use to estimate maximal VO2peak.
abstract_id: PUBMED:37819228
Activities of daily living in heart failure patients and healthy subjects: when the cardiopulmonary assessment goes beyond traditional exercise test protocols. Heart failure (HF) patients traditionally report dyspnoea as their main symptom. Although the cardiopulmonary exercise test (CPET) and 6 min walking test are the standardized tools in assessing functional capacity, neither cycle ergometers nor treadmill maximal efforts do fully represent the actual HF patients' everyday activities [activities of daily living (ADLs)] (i.e. climbing the stairs). New-generation portable metabolimeters allow the clinician to measure task-related oxygen intake (VO2) in different scenarios and exercise protocols. In the last years, we have made considerable progress in understanding the ventilatory and metabolic behaviours of HF patients and healthy subjects during tasks aimed to reproduce ADLs. In this paper, we describe the most recent findings in the field, with special attention to the relationship between the metabolic variables obtained during ADLs and CPET parameters (i.e. peak VO2), demonstrating, for example, how exercises traditionally thought to be undemanding, such as a walk, instead represent supramaximal efforts, particularly for subjects with advanced HF and/or artificial heart (left ventricular assist devices) wearers.
abstract_id: PUBMED:26743588
Could the two-minute step test be an alternative to the six-minute walk test for patients with systolic heart failure? Background: The consequence of exercise intolerance for patients with heart failure is the difficulty climbing stairs. The two-minute step test is a test that reflects the activity of climbing stairs.
Design: The aim of the study design is to evaluate the applicability of the two-minute step test in an assessment of exercise tolerance in patients with heart failure and the association between the six-minute walk test and the two-minute step test.
Methods: Participants in this study were 168 men with systolic heart failure (New York Heart Association (NYHA) class I-IV). In the study we used the two-minute step test, the six-minute walk test, the cardiopulmonary exercise test and isometric dynamometer armchair.
Results: Patients who performed more steps during the two-minute step test covered a longer distance during the six-minute walk test (r = 0.45). The quadriceps strength was correlated with the two-minute step test and the six-minute walk test (r = 0.61 and r = 0.48). The greater number of steps performed during the two-minute step test was associated with higher values of peak oxygen consumption (r = 0.33), ventilatory response to exercise slope (r = -0.17) and longer time of exercise during the cardiopulmonary exercise test (r = 0.34). Fatigue and leg fatigue were greater after the two-minute step test than the six-minute walk test whereas dyspnoea and blood pressure responses were similar.
Conclusion: The two-minute step test is well tolerated by patients with heart failure and may thus be considered as an alternative for the six-minute walk test.
abstract_id: PUBMED:31590569
Confirming a beneficial effect of the six-minute walk test on exercise confidence in patients with heart failure. Background: Low confidence to exercise is a barrier to engaging in exercise in heart failure patients. Participating in low to moderate intensity exercise, such as the six-minute walk test, may increase exercise confidence.
Aim: To compare the effects of a six-minute walk test with an educational control condition on exercise confidence in heart failure patients.
Methods: This was a prospective, quasi-experimental design whereby consecutive adult patients attending an out-patient heart failure clinic completed the Exercise Confidence Scale prior to and following involvement in the six-minute walk test or an educational control condition.
Results: Using a matched pairs, mixed model design (n=60; 87% male; Mage=58.87±13.16), we identified a significantly greater improvement in Total exercise confidence (F(1,54)=4.63, p=0.036, partial η2=0.079) and Running confidence (F(1,57)=4.21, p=0. 045, partial η2=0.069) following the six-minute walk test compared to the educational control condition. These benefits were also observed after adjustment for age, gender, functional class and depression.
Conclusion: Heart failure patients who completed a six-minute walk test reported greater improvement in exercise confidence than those who read an educational booklet for 10 min. The findings suggest that the six-minute walk test may be used as a clinical tool to improve exercise confidence. Future research should test these results under randomized conditions and examine whether improvements in exercise confidence translate to greater engagement in exercise behavior.
abstract_id: PUBMED:10673190
Clinical correlates and prognostic significance of six-minute walk test in patients with primary pulmonary hypertension. Comparison with cardiopulmonary exercise testing. The six-minute walk test is a submaximal exercise test that can be performed even by a patient with heart failure not tolerating maximal exercise testing. To elucidate the clinical significance and prognostic value of the six-minute walk test in patients with primary pulmonary hypertension (PPH), we sought (1) to assess the relation between distance walked during the six-minute walk test and exercise capacity determined by maximal cardiopulmonary exercise testing, and (2) to investigate the prognostic value of the six-minute walk test in comparison with other noninvasive parameters. The six-minute walk test was performed in 43 patients with PPH, together with echocardiography, right heart catheterization, and measurement of plasma epinephrine and norepinephrine. Symptom-limited cardiopulmonary exercise testing was performed in a subsample of patients (n = 27). Distance walked in 6 min was significantly shorter in patients with PPH than in age- and sex-matched healthy subjects (297 +/- 188 versus 655 +/- 91 m, p < 0. 001). The distance significantly decreased in proportion to the severity of New York Heart Association functional class. The distance walked correlated modestly with baseline cardiac output (r = 0.48, p < 0.05) and total pulmonary resistance (r = -0.49, p < 0. 05), but not significantly with mean pulmonary arterial pressure. In contrast, the distance walked correlated strongly with peak V O(2) (r = 0.70, p < 0.001), oxygen pulse (r = 0.57, p < 0.01), and V E-VCO(2) slope (r = -0.66, p < 0.001) determined by cardiopulmonary exercise testing. During a mean follow-up period of 21 +/- 16 mo, 12 patients died of cardiopulmonary causes. Among noninvasive parameters including clinical, echocardiographic, and neurohumoral parameters, only the distance walked in 6 min was independently related to mortality in PPH by multivariate analysis. Patients walking < 332 m had a significantly lower survival rate than those walking farther, assessed by Kaplan-Meier survival curves (log-rank test, p < 0.01). These results suggest that the six-minute walk test, a submaximal exercise test, reflects exercise capacity determined by maximal cardiopulmonary exercise testing in patients with PPH, and it is the distance walked in 6 min that has a strong, independent association with mortality.
abstract_id: PUBMED:33305534
Predicting maximal oxygen uptake from the 6 min walk test in patients with heart failure. Aims: A cardiopulmonary exercise (CPX) test is considered the gold standard in evaluating maximal oxygen uptake. This study aimed to evaluate the predictive validity of equations provided by Burr et al., Ross et al., Adedoyin et al., and Cahalin et al. in predicting peak VO2 from 6 min walk test (6MWT) distance in patients with heart failure (HF).
Methods And Results: New York Heart Association Class I-III HF patients performed a maximal effort CPX test and two 6MWTs. Correlations between CPX VO2 peak and the predicted VO2 peak , coefficient of determination (R2 ), and mean absolute percentage error (MAPE) scores were calculated. P-values were set at 0.05. A total of 106 participants aged 62.5 ± 11.5 years completed the tests. The mean VO2 peak from CPX testing was 16.4 ± 3.9 mL/kg/min, and the mean 6MWT distance was 419.2 ± 93.0 m. The predicted mean VO2 peak (mL/kg/min) by Burr et al., Ross et al., Adedoyin et al., and Cahalin et al. was 22.8 ± 8.8, 14.6 ± 2.1, 8.30 ± 1.4, and 16.6 ± 2.8. A significant correlation was observed between the CPX test VO2 peak and predicted values. The mean difference (0.1 mL/kg/min), R2 (0.97), and MAPE (0.14) values suggest that the Cahalin et al. equation provided the best predictive validity.
Conclusions: The equation provided by Cahalin et al. is simple and has a strong predictive validity, and researchers may use the equation to predict mean VO2 peak in patients with HF. Based on our observation, equations to predict individual maximal oxygen uptake should be used cautiously.
abstract_id: PUBMED:29979904
Dynamics of cardiorespiratory response during and after the six-minute walk test in patients with heart failure. Purpose: The six-minute walk test (6MWT) is a useful measure to evaluate exercise capacity with a simple method. The kinetics of oxygen uptake ([Formula: see text]O2) throughout constant-load exercise on cardiopulmonary exercise testing (CPX) are composed of three phases and the [Formula: see text]O2 kinetics are delayed in patients with heart failure (HF). This study aimed to investigate the kinetics of the cardiorespiratory response during and after the 6MWT according to exercise capacity. Methods: Forty-nine patients with HF performed CPX and the 6MWT. They were divided into two groups by 6MWT distance: 34 patients walked ≥300 m (HF-M), and 15 patients walked <300 m (HF-L). [Formula: see text]O2, minute ventilation ([Formula: see text]E), breathing frequency, tidal volume, and heart rate, both during and after the 6MWT, were recorded. The time courses of each parameter were compared between the two groups. CPX was used to assess functional capacity and physiological responses. Results: In the HF-M group, [Formula: see text]O2 and [Formula: see text]E stabilized from 3 min during the 6MWT and recovered for 3 min, respectively, after the 6MWT ended. In the HF-L group, [Formula: see text]O2 and VE stabilized from 4 min, respectively, during the 6MWT and did not recover within 3 min after the 6MWT ended. On CPX in the HF-M group, [Formula: see text]O2 peak, and anaerobic threshold were significantly higher, while the relationship between minute ventilation and carbon dioxide production was lower compared with the HF-L group. Conclusion: In lower exercise capacity patients with HF had slower [Formula: see text]O2 and [Formula: see text]E kinetics during and after the 6MWT.
abstract_id: PUBMED:28421409
Six-Minute Walk Test for Assessing Physical Functional Capacity in Chronic Heart Failure. Purpose Of The Review: The six-minute walk test (6MWT) is a submaximal exercise test for evaluating physical functional capacity. This review aims to report the research on the use of the 6MWT in chronic heart failure (CHF) that has been published in the past 5 years.
Recent Findings: The 6MWT distance does not accurately reflect peak VO2. Minimal clinically important difference in the 6MWT distance, and additional measurements, such as heart rate recovery, can assist in the interpretation of the 6MWT distance, so management decisions can be made. Incorporating mobile apps and information technology in measuring the 6MWT distance extends the usefulness of this simple walk test and improve remote monitoring of patients with CHF. The 6MWT is a useful tool in CHF programs. However, interpretation of the 6MWT distance must be with caution. With the advancement in technology, the 6MWT has the potential to facilitate the monitoring of people living in rural and remote areas.
abstract_id: PUBMED:33136223
Six-minute walk test: prognostic value and effects of nebivolol versus placebo in elderly patients with heart failure from the SENIORS trial. Background: There is limited information about the 6-min walk test (6MWT) in elderly patients with heart failure. We evaluated 6MWT and the effect of nebivolol on 6MWT from the SENIORS trial.
Methods And Results: The SENIORS trial evaluated nebivolol versus placebo on death and hospitalisation in patients aged ≥ 70 years with heart failure. A total of 1982 patients undertook a 6MWT at baseline and 1716 patients at 6 months. Patients were divided into tertiles (≤ 200 m, 201 to ≤ 300 m and > 300 m) and to change in distance walked between baseline and 6 months (< 0 m, 0 to < 30 m and ≥ 30 m). The primary outcome was all-cause mortality and cardiovascular hospital admission. Secondary endpoint was all-cause mortality. Baseline walk distance of ≤ 200 m incurred a greater risk of the primary and secondary outcomes (HR 1.41, CI 95% 1.17-1.69, p < 0.001) and (HR 1.37, CI 95% 1.05-1.78, p = 0.019). A decline in walk distance over 6 months was associated with increased risk of clinical events. Nebivolol had no influence on change in walk distance over 6 months.
Conclusions: The 6MWT has prognostic utility in elderly patients. Those who walked less than 200 m were at highest risk. Nebivolol had no effect on 6MWT.
Answer: The cardiopulmonary 6-minute walk test (6MWT) is a commonly used measure to assess exercise capacity in patients with heart failure. However, whether it can accurately reproduce the usual activities of these patients is a matter of investigation.
One study found that when patients with heart failure are motivated to walk as much as they can during the 6MWT, they tend to walk almost to their maximum capacity, which may not correspond to the level of exertion experienced during their daily activities. The use of the Borg scale during the test, which maintains the walking rhythm between relatively easy and slightly tiring, seems to better correspond to the metabolic demand of usual activities in this group of patients (PUBMED:12185855).
Another study suggested that the 6MWT does not fully represent the actual everyday activities of heart failure patients, such as climbing stairs. New-generation portable metabolimeters have allowed clinicians to measure task-related oxygen intake during activities aimed to reproduce activities of daily living (ADLs), revealing that exercises like walking can represent supramaximal efforts for subjects with advanced heart failure (PUBMED:37819228).
Additionally, the two-minute step test, which reflects the activity of climbing stairs, has been proposed as an alternative to the 6MWT for assessing exercise tolerance in patients with heart failure. This test is well tolerated by patients and may be considered as an alternative for the 6MWT (PUBMED:26743588).
In conclusion, while the 6MWT is a useful tool for assessing exercise capacity in patients with heart failure, it may not always accurately reproduce the usual activities of these patients, particularly when patients are motivated to perform at their maximum capacity during the test. Alternative tests and measures that more closely mimic daily activities may provide a more accurate representation of a patient's functional capacity (PUBMED:12185855; PUBMED:37819228; PUBMED:26743588). |
Instruction: Can biochemical markers serve as surrogates for imaging in knee osteoarthritis?
Abstracts:
abstract_id: PUBMED:18050200
Can biochemical markers serve as surrogates for imaging in knee osteoarthritis? Objective: Osteoarthritis (OA) is a complex heterogeneous joint disease affecting more than 35 million people worldwide. The current gold standard diagnostic investigation is the plain radiograph, which lacks sensitivity. Biochemical markers have the potential to act as adjunct markers for imaging in the assessment of knee OA. We undertook this study to determine the association between individual biochemical markers and radiographic features, and to establish whether the association is strengthened when selected biochemical markers are combined into a single factor (a theoretical marker).
Methods: Twenty serum and urinary biochemical markers were analyzed in 119 patients with predominantly tibiofemoral knee OA. Pearson's correlation was performed, and corresponding coefficients of determination (R(2)) were calculated to determine the association between biochemical markers and a range of imaging features from radiographs and dual x-ray absorptiometry of the knee. Biochemical markers demonstrating a significant association (P < 0.05) with a specific imaging feature were combined by principal components analysis (PCA). Pearson's correlation was repeated to establish whether the combined panel of biochemical markers showed a stronger association with imaging than the best single marker.
Results: Fourteen biochemical markers showed significant associations with one or more imaging features. By combining specific panels of biochemical markers to form factors, the association of markers with imaging features (R(2)) increased from 11.9% to 22.7% for the Kellgren/Lawrence (K/L) score, from 5.9% to 9.2% for joint space width (JSW), from 6.6% to 10.8% for sclerosis, from 13.5% to 22.6% for osteophytes, and from 12.0% to 14.2% for bone mineral density (BMD). Biochemical markers identifying patients with osteophytes overlapped with those correlated with a high K/L score, while markers of subchondral BMD formed a completely separate group. Biochemical markers of JSW included markers associated with both osteophytes and BMD.
Conclusion: The PCA results suggest that biochemical marker combinations may be more sensitive than individual biochemical markers for reflecting structural damage in patients with knee OA. The differences in biochemical marker profiles associated with osteophytes compared with those associated with subchondral BMD raise the possibility that these 2 processes, commonly seen in bone in knee OA, have underlying biologic differences.
abstract_id: PUBMED:27723280
Association Between Biochemical Markers of Bone Turnover and Bone Changes on Imaging: Data From the Osteoarthritis Initiative. Objective: To determine the relationship between biochemical markers involved in bone turnover and bone features on imaging in knees with osteoarthritis (OA).
Methods: We analyzed data from the Foundation for the National Institutes of Health OA Biomarkers Consortium within the Osteoarthritis Initiative (n = 600). Bone marrow lesions (BMLs), osteophytes, and subchondral bone area (mm2 ) and shape (position on 3-D vector) were assessed on magnetic resonance images, and bone trabecular integrity (BTI) was assessed on radiographs. Serum and urinary markers (serum C-terminal crosslinked telopeptide of type I collagen [CTX-I], serum crosslinked N-telopeptide of type I collagen [NTX-I], urinary NTX-I, urinary C-terminal crosslinked telopeptide of type II collagen [CTX-II], and urinary CTX-Iα and CTX-Iβ) were measured. The associations between biochemical and imaging markers at baseline and over 24 months were assessed using regression models adjusted for covariates.
Results: At baseline, most biochemical markers were associated with BMLs, with C statistics for the presence/absence of any BML ranging from 0.675 to 0.688. At baseline, urinary CTX-II was the marker most consistently associated with BMLs (with odds of having ≥5 subregions affected compared to no BML increasing by 1.92-fold [95% confidence interval (95% CI) 1.25, 2.96] per 1 SD of urinary CTX-II), large osteophytes (odds ratio 1.39 [95% CI 1.10, 1.77]), bone area and shape (highest partial R2 = 0.032), and changes in bone shape over 24 months (partial R2 range 0.008 to 0.024). Overall, biochemical markers were not predictive of changes in BMLs or osteophytes. Serum NTX-I was inversely associated with BTI of the vertical trabeculae (quadratic slope) in all analyses (highest partial R2 = 0.028).
Conclusion: We found multiple significant associations, albeit mostly weak ones. The role of systemic biochemical markers as predictors of individual bone anatomic features of single knees is limited based on our findings.
abstract_id: PUBMED:35240332
Association between osteoarthritis-related serum biochemical markers over 11 years and knee MRI-based imaging biomarkers in middle-aged adults. Objective: To describe the associations between osteoarthritis (OA)-related biochemical markers (COMP, MMP-3, HA) and MRI-based imaging biomarkers in middle-aged adults over 10-13 years.
Methods: Blood serum samples collected during the Childhood Determinants of Adult Health (CDAH)-1 study (year:2004-06; n = 156) and 10-13 year follow-up at CDAH-3 (n = 167) were analysed for COMP, MMP-3, and HA using non-isotopic ELISA. Knee MRI scans obtained during the CDAH-knee study (year:2008-10; n = 313) were assessed for cartilage volume and thickness, subchondral bone area, cartilage defects, and BML.
Results: In a multivariable linear regression model describing the association of baseline biochemical markers with MRI-markers (assessed after 4-years), we found a significant negative association of standardised COMP with medial femorotibial compartment cartilage thickness (β:-0.070; 95%CI:-0.138,-0.001), and standardised MMP-3 with patellar cartilage volume (β:-141.548; 95%CI:-254.917,-28.179) and total bone area (β:-0.729; 95%CI:-1.340,-0.118). In multivariable Tobit regression model, there was a significant association of MRI-markers with biochemical markers (assessed after 6-9 years); a significant negative association of patellar cartilage volume (β:-0.001; 95%CI:-0.002,-0.00004), and total bone area (β:-0.158; 95%CI-0.307,-0.010) with MMP-3, and total cartilage volume (β:-0.001; 95%CI:-0.001,-0.0001) and total bone area (β:-0.373; 95%CI:-0.636,-0.111) with COMP. No significant associations were observed between MRI-based imaging biomarkers and HA.
Conclusion: COMP and MMP-3 levels were negatively associated with knee cartilage thickness and volume assessed 4-years later, respectively. Knee cartilage volume and bone area were negatively associated with COMP and MMP-3 levels assessed 6-9 years later. These results suggest that OA-related biochemical markers and MRI-markers are interrelated in early OA.
abstract_id: PUBMED:25205017
Systemic biochemical markers of joint metabolism and inflammation in relation to radiographic parameters and pain of the knee: data from CHECK, a cohort of early-osteoarthritis subjects. Objective: To investigate associations of biochemical markers of joint metabolism and inflammation with minimum joint space width (JSW) and osteophyte area (OP area) of knees showing no or doubtful radiographic osteoarthritis (OA) and to investigate whether these differed between painful and non-painful knees.
Design: Serum (s-) and urinary (u-) levels of the cartilage markers uCTX-II, sCOMP, sPIIANP, and sCS846, bone markers uCTX-I, uNTX-I, sPINP, and sOC, synovial markers sPIIINP and sHA, and inflammation markers hsCRP and erythrocyte sedimentation rate (ESR) were assessed in subjects from CHECK (Cohort Hip and Cohort Knee) demonstrating Kellgren and Lawrence grade ≤1 OA on knee radiographs. Minimum JSW and OP area of these knees were quantified in detail using Knee Images Digital Analysis (KIDA).
Results: uCTX-II levels showed negative associations with minimum JSW and positive associations with OP area. sCOMP and sHA levels showed positive associations with OP area, but not with minimum JSW. uCTX-I and uNTX-I levels showed negative associations with minimum JSW and OP area. Associations of biochemical marker levels with minimum JSW were similar between painful and non-painful knees, associations of uCTX-II, sCOMP, and sHA with OP area were only observed in painful knees.
Conclusions: In these subjects with no or doubtful radiographic knee OA, uCTX-II might not only reflect articular cartilage degradation but also endochondral ossification in osteophytes. Furthermore, sCOMP and sHA relate to osteophytes, maybe because synovitis drives osteophyte development. High bone turnover may aggravate articular cartilage loss. Metabolic activity in osteophytes and synovial tissue, but not in articular cartilage may be related to knee pain.
abstract_id: PUBMED:16396978
Osteoarthritis, magnetic resonance imaging, and biochemical markers: a one year prospective study. Objective: To investigate the relation between biochemical markers of bone, cartilage, and synovial remodelling and the structural progression of knee osteoarthritis.
Methods: 62 patients of both sexes with knee osteoarthritis were followed prospectively for one year. From magnetic resonance imaging (MRI), done at baseline and after one year, the volume and thickness of cartilage of the femur, the medial tibia, and the lateral tibia were assessed. A whole organ magnetic resonance imaging score (WORMS) of the knee was calculated for each patient at baseline and at the one year visits. This score consists in a validated, semiquantitative scoring system for whole organ assessment of the knee in osteoarthritis using MRI. Biochemical markers (serum hyaluronic acid, osteocalcin, cartilage glycoprotein 39 (YKL-40), cartilage oligomeric matrix protein (COMP), and C-telopeptide of type I collagen (CTX-I), and urine C-telopeptide of type II collagen (CTX-II)) were measured at baseline and after three months.
Results: Baseline markers were not correlated with one year changes observed in cartilage volume and thickness. However, an increase in CTX-II after three months was significantly correlated with a one year decrease in mean thickness of medial tibial and lateral tibial cartilage. Patients in the highest quartile of three month changes in CTX-II experienced a mean loss of 0.07 (0.08) mm of their medial thickness, compared with a mean increase of 0.05 (0.19) mm for patients in the lowest quartile (p = 0.04) Multiple regression analysis showed that high baseline levels of hyaluronic acid are predictive of a worsening in WORMS (p = 0.004).
Conclusions: These results suggest that a single measurement of serum hyaluronic acid or short term changes in urine CTX-II could identify patients at greatest risk of progression of osteoarthritis.
abstract_id: PUBMED:19630944
Identification of progressors in osteoarthritis by combining biochemical and MRI-based markers. Introduction: At present, no disease-modifying osteoarthritis drugs (DMOADS) are approved by the FDA (US Food and Drug Administration); possibly partly due to inadequate trial design since efficacy demonstration requires disease progression in the placebo group. We investigated whether combinations of biochemical and magnetic resonance imaging (MRI)-based markers provided effective diagnostic and prognostic tools for identifying subjects with high risk of progression. Specifically, we investigated aggregate cartilage longevity markers combining markers of breakdown, quantity, and quality.
Methods: The study included healthy individuals and subjects with radiographic osteoarthritis. In total, 159 subjects (48% female, age 56.0 +/- 15.9 years, body mass index 26.1 +/- 4.2 kg/m2) were recruited. At baseline and after 21 months, biochemical (urinary collagen type II C-telopeptide fragment, CTX-II) and MRI-based markers were quantified. MRI markers included cartilage volume, thickness, area, roughness, homogeneity, and curvature in the medial tibio-femoral compartment. Joint space width was measured from radiographs and at 21 months to assess progression of joint damage.
Results: Cartilage roughness had the highest diagnostic accuracy quantified as the area under the receiver-operator characteristics curve (AUC) of 0.80 (95% confidence interval: 0.69 to 0.91) among the individual markers (higher than all others, P < 0.05) to distinguish subjects with radiographic osteoarthritis from healthy controls. Diagnostically, cartilage longevity scored AUC 0.84 (0.77 to 0.92, higher than roughness: P = 0.03). For prediction of longitudinal radiographic progression based on baseline marker values, the individual prognostic marker with highest AUC was homogeneity at 0.71 (0.56 to 0.81). Prognostically, cartilage longevity scored AUC 0.77 (0.62 to 0.90, borderline higher than homogeneity: P = 0.12). When comparing patients in the highest quartile for the longevity score to lowest quartile, the odds ratio of progression was 20.0 (95% confidence interval: 6.4 to 62.1).
Conclusions: Combination of biochemical and MRI-based biomarkers improved diagnosis and prognosis of knee osteoarthritis and may be useful to select high-risk patients for inclusion in DMOAD clinical trials.
abstract_id: PUBMED:20175979
Serum and urinary biochemical markers for knee and hip-osteoarthritis: a systematic review applying the consensus BIPED criteria. Context: Molecules that are released into biological fluids during matrix metabolism of articular cartilage, subchondral bone, and synovial tissue could serve as biochemical markers of the process of osteoarthritis (OA). Unfortunately, actual breakthroughs in the biochemical OA marker field are limited so far.
Objective: By reviewing the status of commercially available biochemical OA markers according to the "Burden of disease, Investigative, Prognostic, Efficacy of intervention, and Diagnostic" ("BIPED") classification, future use of this "BIPED" classification is encouraged and more efficient biochemical OA marker research stimulated.
Data Sources: Three electronic databases [PubMed, Scopus, EMBASE (1997-May 2009)] were searched for publications on blood and urinary biochemical markers in human primary knee and hip-OA.
Study Selection: Stepwise selection of original English publications describing human studies on blood or urinary biochemical markers in primary knee or hip-OA was performed. Selected articles were fully read to determine whether biochemical markers were investigated on performance within any of the "BIPED" categories. Eighty-four relevant publications were identified.
Data Extraction: Data from relevant publications were tabulated according to the "BIPED" classification. Individual analyses within a publication were summarized in general "BIPED" scores.
Data Synthesis: An uneven distribution of scores on biochemical marker performance and heterogeneity among the publications complicated direct comparison of individual biochemical markers. Comparison of categories of biochemical markers was therefore performed instead. In general, biochemical markers of cartilage degradation were investigated most extensively and performed well in comparison with other categories of biochemical markers. Biochemical markers of bone metabolism performed less adequately. Biochemical markers of synovial tissue metabolism were not investigated extensively, but performed quite well.
Conclusions: Specific biochemical markers and categories of biochemical markers as well as their nature, origin and metabolism, need further investigation. International standardization of future investigations should be pursued to obtain more high-quality, homogenous data on the full spectrum of biochemical OA markers.
abstract_id: PUBMED:21221577
Biochemical markers in the diagnosis of chondral defects following anterior cruciate ligament insufficiency. Purpose: The aim of this study was to determine the value of systemic biochemical markers of bone turnover-urine levels of cross-linked C-terminal telopeptide I (uCTX-I), urinary C-terminal telopeptide II (uCTX-II) and serum cartilage oligomeric matrix protein (sCOMP)-in the diagnosis of chondral defects after anterior cruciate ligament (ACL) rupture. Thirty-eight patients with previous ACL rupture were included.
Methods: Magnetic resonance imaging (MRI) of the injured and the intact knee joint was performed with volumetric measurement of volume and area of cartilage (VC/AC), area of subchondral bone (cAB), and area of subchondral bone denuded and eroded (dAB). Biochemical markers were measured using commercially available enzyme-linked immunoassays.
Results: MRI-based volumetric cartilage measurement showed significant differences between the injured and the intact knees. uCTX-I, sCOMP and in parts uCTX-II correlated well with MRI parameters. CTX-I showed a significant correlation with VC and AC of the whole knee joint.
Conclusions: The results suggest that uCTX-I, uCTX-II and sCOMP could identify patients with focal cartilage lesions from an early stage of osteoarthritis of the knee.
abstract_id: PUBMED:36474982
Calcified cartilage revealed in whole joint by X-ray phase contrast imaging. Objective: X-ray Phase Contrast Imaging (PCI) is an emerging modality that will be in the next few years available in a wider range of preclinical set-ups. In this study, we compare this imaging technique with conventional preclinical modalities in an osteoarthritis mouse model.
Method: Phase contrast technique was performed on 6 post-mortem, monoiodoacetate-induced osteoarthritis knees and 6 control knees. The mice knees were then imaged using magnetic resonance imaging and conventional micro computed tomography. Examples of imaging surrogate markers are reported: local distances within the articular space, cartilage surface roughness, calcified cartilage thickness, number, volume and locations of osteophytes.
Results: Thanks to PCI, we can show in 3D calcified cartilage without contrast agent by a non-invasive technique. The phase contrast images reveal more details than conventional imaging techniques, especially at smaller scales, with for instance a higher number of micro-calcifications detected (57, 314 and 329 for MRI, conventional micro-CT and phase contrast imaging respectively). Calcified cartilage thickness was measured with a significant difference (p < 0.01) between the control (23.4 ± 17.2 μm) and the osteoarthritis induced animal (46.9 ± 19.0 μm).
Conclusions: X-ray phase contrast imaging outperforms the conventional imaging modalities for assessing the different tissue types (soft and hard). This new imaging modality seems to bring new relevant surrogate markers for following-up small animal models even for low-grade osteoarthritis.
abstract_id: PUBMED:38238803
Association of biochemical markers with bone marrow lesion changes on imaging-data from the Foundation for the National Institutes of Health Osteoarthritis Biomarkers Consortium. Background: To assess the prognostic value of short-term change in biochemical markers as it relates to bone marrow lesions (BMLs) on MRI in knee osteoarthritis (OA) over 24 months and, furthermore, to assess the relationship between biochemical markers involved with tissue turnover and inflammation and BMLs on MRI.
Methods: Data from the Foundation for the National Institutes of Health OA Biomarkers Consortium within the Osteoarthritis Initiative (n = 600) was analyzed. BMLs were measured according to the MRI Osteoarthritis Knee Score (MOAKS) system (0-3), in 15 knee subregions. Serum and urinary biochemical markers assessed were as follows: serum C-terminal crosslinked telopeptide of type I collagen (CTX-I), serum crosslinked N-telopeptide of type I collagen (NTX-I), urinary CTX-Iα and CTX-Iβ, urinary NTX-I, urinary C-terminal cross-linked telopeptide of type II collagen (CTX-II), serum matrix metalloproteinase (MMP)-degraded type I, II, and III collagen (C1M, C2M, C3M), serum high sensitivity propeptide of type IIb collagen (hsPRO-C2), and matrix metalloproteinase-generated neoepitope of C-reactive protein (CRPM). The association between change in biochemical markers over 12 months and BMLs over 24 months was examined using regression models adjusted for covariates. The relationship between C1M, C2M, C3M, hsPRO-C2, and CRPM and BMLs at baseline and over 24 months was examined.
Results: Increases in serum CTX-I and urinary CTX-Iβ over 12 months were associated with increased odds of changes in the number of subregions affected by any BML at 24 months. Increase in hsPRO-C2 was associated with decreased odds of worsening in the number of subregions affected by any BML over 24 months. C1M and C3M were associated with BMLs affected at baseline.
Conclusions: Short-term changes in serum CTX-I, hsPRO-C2, and urinary CTX-Iβ hold the potential to be prognostic of BML progression on MRI. The association of C1M and C3M with baseline BMLs on MRI warrants further investigation.
Answer: Biochemical markers have shown potential as adjunct markers for imaging in the assessment of knee osteoarthritis (OA), but their ability to serve as surrogates for imaging is not yet fully established. Several studies have investigated the association between individual biochemical markers and radiographic features of knee OA.
One study found that combining specific panels of biochemical markers increased the association with imaging features such as the Kellgren/Lawrence (K/L) score, joint space width (JSW), sclerosis, osteophytes, and bone mineral density (BMD) (PUBMED:18050200). Another study within the Osteoarthritis Initiative found that most biochemical markers were associated with bone marrow lesions (BMLs) at baseline, with urinary CTX-II being the most consistently associated marker with BMLs, large osteophytes, bone area and shape, and changes in bone shape over 24 months (PUBMED:27723280).
Further research described associations between OA-related biochemical markers (COMP, MMP-3, HA) and MRI-based imaging biomarkers in middle-aged adults over 10-13 years, suggesting that these markers are interrelated in early OA (PUBMED:35240332). Another study found that biochemical markers of joint metabolism and inflammation were associated with minimum JSW and osteophyte area, particularly in painful knees (PUBMED:25205017).
A one-year prospective study indicated that an increase in urine CTX-II was significantly correlated with a decrease in medial and lateral tibial cartilage thickness, and high baseline levels of hyaluronic acid were predictive of worsening in a whole organ MRI score (PUBMED:16396978). Additionally, combining biochemical and MRI-based markers improved the diagnosis and prognosis of knee OA and may be useful for selecting high-risk patients for clinical trials (PUBMED:19630944).
A systematic review applying the consensus BIPED criteria highlighted the need for further investigation of specific biochemical markers and categories of biochemical markers, as well as international standardization of future investigations (PUBMED:20175979). Another study suggested that biochemical markers could identify patients with focal cartilage lesions from an early stage of OA (PUBMED:21221577).
In summary, while biochemical markers show promise in reflecting structural damage and progression in knee OA, they are not yet fully established as surrogates for imaging. The associations found are often weak, and the role of systemic biochemical markers as predictors of individual bone anatomic features is limited (PUBMED:27723280). Further research and standardization are needed to determine the full potential of biochemical markers as surrogates for imaging in knee OA. |
Instruction: Is single port enough in minimally surgery for pneumothorax?
Abstracts:
abstract_id: PUBMED:25801108
Is single port enough in minimally surgery for pneumothorax? Background: Video-assisted thoracoscopic surgery is a widespread used procedure for treatment of primary spontaneous pneumothorax patients. In this study, the adaptation of single-port video-assisted thoracoscopic surgery approach to primary spontaneous pneumothorax patients necessitating surgical treatment, with its pros and cons over the traditional two- or three-port approaches are examined.
Methods: Between January 2011 and August 2013, 146 primary spontaneous pneumothorax patients suitable for surgical treatment are evaluated prospectively. Indications for surgery included prolonged air leak, recurrent pneumothorax, or abnormal findings on radiological examinations. Visual analog scale and patient satisfaction scale score were utilized.
Results: Forty triple-port, 69 double-port, and 37 single-port operations were performed. Mean age of 146 (126 male, 20 female) patients was 27.1 ± 16.4 (range 15-42). Mean operation duration was 63.59 ± 26 min; 61.7 for single, 64.2 for double, and 63.8 min for triple-port approaches. Total drainage was lower in the single-port group than the multi-port groups (P = 0.001). No conversion to open thoracotomy or 30-day hospital mortality was seen in our group. No recurrence was seen in single-port group on follow-up period. Visual analog scale scores on postoperative 24th, 48th, and 72nd hours were 3.42 ± 0.94, 2.46 ± 0.81, 1.96 ± 0.59 in the single-port group; significantly lower than the other groups (P = 0.011, P = 0.014, and P = 0.042, respectively). Patient satisfaction scale scores of patients in the single-port group on 24th and 48th hours were 1.90 ± 0.71 and 2.36 ± 0.62, respectively, indicating a significantly better score than the other two groups (P = 0.038 and P = 0.046).
Conclusions: This study confirms the competency of single-port procedure in first-line surgical treatment of primary spontaneous pneumothorax.
abstract_id: PUBMED:32551166
Minimally invasive approach to pneumothorax: Single port or two ports? Background: This study aims to compare the effectiveness of single-port and two-port video-assisted thoracoscopic surgery in patients with pneumothorax.
Methods: Between June 2016 and December 2018, a total of 44 patients (39 males, 5 females; mean age 27.0±9.5 years; range, 15 to 60 years) who underwent video-assisted thoracoscopic surgery due to the spontaneous pneumothorax in our center were retrospectively evaluated. The study population was divided into two groups as the single-port (n=29) and two-port (n=15) procedure according to the number of port entries applied during the operation. Age, gender, number of days of drainage, length of hospitalization, number of days of air leak, the indication of operation, pneumothorax side, type of pneumothorax, duration of operation, and complications were compared between the groups.
Results: Twenty-two patients (50%) were operated on the right side and 22 patients (50%) on the left side. The mean operation time was 81.1±19.2 min, indicating no significant difference between the groups (p=0.053). No significant difference was observed in the number of days of drainage, the length of hospitalization, and number of days of air leak between the two groups. Complications developed in eight patients (27.6%) in the single-port group and five patients (33.3%) in the two-port group, indicating no significant difference between the groups (p=0.475).
Conclusion: Our study results show that video-assisted thoracoscopic surgery for the treatment of pneumothorax can be successfully performed via a single-port approach.
abstract_id: PUBMED:27440029
Single Port Thoracic Surgery and Reduced Port Thoracic Surgery Single port thoracic surgery, reduced port surgery and needlescopic surgery attract attention as one of the minimally invasive surgery in thoracic surgery recently. Single port thoracic surgery was advocated by Rocco in 2004, it was reported usefulness of single port thoracic surgery for primary spontaneous pneumothorax. The surgical procedure as single (or reduced) port thoracic surgery is roughly divided into the following. One is operated with instruments inserted from the single extended incision, and the other is operated with instruments punctured without extending incision. It is not generally complicated procedures in single port thoracic surgery. Primary spontaneous pneumothorax and biopsy for lung and pleura are considered the surgical indication for single (or reduced) port surgery. It is revealed that single port surgery for primary spontaneous pneumothorax is less invasive than conventional surgery. Single port and reduced port thoracic surgery will spread furthermore in the future.
abstract_id: PUBMED:29399498
Subxiphoid single-port video-assisted thoracoscopic surgery. Background: We report the feasibility and safety of chest surgery through the subxiphoid single port approach based on our preliminary experience.
Methods: From December 2013 till January 2016, 39 patients underwent 40 thoracoscopic surgeries via a 3- to 4-cm subxiphoid single incision. A sternal lifter was applied for better entrance and working angle. A zero-degree deflectable scope was preferred. The technique for anatomic resection was similar to that in the traditional single-port approach. Patient characteristics and demographic data were analyzed.
Results: There were 29 females and 10 males, with a median age of 56 years. Indication for surgery included 24 patients with primary lung cancer, eight with lung metastases, two with benign lung lesions, one with bilateral pneumothorax, and five with mediastinal tumors. Surgeries included lobectomy in 21, segmentectomy in five, wedge resection in nine, and mediastinal surgery in five patients. There was no surgical mortality. Complications (10%, 4 in 40) included postoperative bleeding in one patient, chylothorax in one patient, and transient arrhythmia in the early learning curve in two patients.
Conclusions: Our results indicated that subxiphoid single-incision thoracoscopic pulmonary resection could be performed safely but under careful patient selection with modification of instruments. Moreover, having a previous single-port incision experience was crucial. Major limitations of this approach included more frequently encountered instrument fighting; interference of left-side procedure related to heartbeat and radical mediastinal lymph node (LN) dissection; and the ability to handle complex conditions, such as anthracotic LNs, diffuse adhesion, and major bleeding.
abstract_id: PUBMED:30069371
Tubeless single-port thoracoscopic sublobar resection: indication and safety. Background: The tubeless technique, defined as non-intubated general anesthesia with omission of chest drainage after video-assisted thoracoscopic surgery (VATS), is a new concept to further minimize surgical trauma. However, there has been little investigation into the associated feasibility and safety. Minimization of postoperative pneumothorax is challenging. We set up a "tubeless protocol" to select patients for tubeless single-port VATS with monitoring of a digital drainage system (DDS).
Methods: From November 2016 to September 2017, 50 consecutive non-intubated single-port VATS for pulmonary resection were performed. In our study, patients with small and peripheral pulmonary lesions indicated for sublobar resections, as diagnostic or curative intent, were included. After excluding patients having tumors >2 cm, or intrapleural adhesions noted during operation, or forced expiratory volume in the 1 second <1.5 L, 36 patients were selected for tubeless protocol. The clinical characteristics and perioperative outcomes of these patients are presented.
Results: Among 36 cases, 5 patients had minor air leaks detected using the DDS and required intercostal drainage after wound closure. Among the remaining 31 patients in whom the DDS showed no air leak, the chest drainage was removed immediately after wound closure. A postoperative chest roentgenogram on the surgery day showed full expansion in all patients without pneumothorax. Only 7 (19.4%) patients developed minor subclinical pneumothorax on the first postoperative day without the need for chest drainage. All patients were discharged uneventfully without the need for intervention.
Conclusions: Our tubeless protocol utilizes DDS to select patients who can have intercostal drainage omitted after non-intubated single-port VATS for pulmonary resection. Using objective DDS parameters, we believe that this is an effective way to reduce the rate of pneumothorax after tubeless single-port VATS in selected patients.
abstract_id: PUBMED:26547093
Single-incision laparo-thoracoscopic minimally invasive oesophagectomy to treat oesophageal cancer†. Objectives: Single-incision thoracoscopic and laparoscopic procedures have been applied in treating various diseases. However, it is unknown whether such procedures are feasible in treating oesophageal cancer.
Methods: Minimally invasive oesophagectomy (MIO) with a single-incision approach in the thoracoscopic and laparoscopic procedures was attempted in 16 patients with oesophageal cancer.
Results: One patient was converted to laparotomy and a four-port thoracoscopic procedure due to bleeding. Of the patients successfully treated with a single-port MIO, 6 underwent a McKeown procedure and 9 an Ivor Lewis procedure, including 3 cases of total laryngopharyngo-oesophagectomy with cervical pharyngogastrostomy. The mean ventilator usage of the patients after surgery was 0.3 ± 0.6 days, the mean intensive care unit (ICU) stay was 3.8 ± 3.1 days and the mean number of dissected lymph nodes was 28.6 ± 14.6. One delayed anastomotic leakage occurred, and another patient developed a trachea-oesophageal fistula induced by surgical clip-related tissue erosion, both of which were successfully treated by the placement of an oesophageal stent. No pulmonary complications or surgical mortalities occurred in the study. Minor complications developed in 2 patients, 1 experiencing pneumothorax and 1 postoperative delirium. When compared with traditional MIO in our series (n = 315), no statistical difference was found among patients receiving single-port MIO in terms of ventilator usage, ICU stay and the number of dissected lymph nodes.
Conclusions: Single-port MIO seems to be a feasible option for treating patients with oesophageal cancer, which requires further evaluation and follow-up in the future.
abstract_id: PUBMED:30971282
Unilateral single-port thoracoscopic surgery for bilateral pneumothorax or pulmonary bullae. Background: Rapid rehabilitation surgery has become a widely accepted approach. Thoracic surgeons have attempted in many ways to make surgery less invasive. We combined tubeless technology, single-port technology and mediastinum approach for the treatment of simultaneous bilateral primary spontaneous pneumothorax(PSP)or pulmonary bullae. And we evaluated its therapeutic effect. This study aimed to investigate if tubeless single-port video-assisted thoracic surgery (Tubeless-SPVATS) via anterior mediastinum can be used as an alternative surgical treatment for bilateral lung diseases, especially for concurrent or contralateral recurrence PSP.
Methods: From November 2014 to December 2016, 18 patients with simultaneous bilateral PSP or pulmonary bullae were treated with tubeless -SPVATS via anterior mediastinum. They were 13 males and 5 females with an average age of 20.2 ± 2.3 years (17 to 24 years). They all had preoperative chest CT and were diagnosed with simultaneous bilateral PSP or pulmonary bullae.
Results: Fifteen patients underwent bilateral bullae resection with Tubeless-SPVATS via anterior mediastinum. Three patients underwent bilateral single-port video-assisted thoracic surgery. No thoracotomy was performed. No death and grade 3-4 mobidity were found. All the patients started eating 6 hours after surgery. The average operation time was 44.56±17.8min. The patients were discharged 3. 5±1.0 days postoperatively.
Conclusions: Tubeless-SPVATS via anterior mediastinum is a safe and feasible treatment for patients with simultaneous bilateral PSP or pulmonary bullae. However,contralateral thoracic is not explored fully enough. And when contralateral lung bullae are located near the hilum, endoscopic linear stapler cannot be easily used to conduct suture. Thus, the recurrence rate after performing Tubeless-SPVATS may be increased compared to performing thoracotomy. However, compared to bilateral thoracic surgery, this method reduced postoperative pain. And it took significantly less time than bilateral thoracic surgery. Thus, this method has some clinic value.
abstract_id: PUBMED:32852262
Comparison of single-port vs. two-port VATS technique for primary spontaneous pneumothorax. Background: Video-assisted thoracoscopic surgery (VATS) has been used for thoracic surgery for about two decades. As the trend in VATS is to use fewer ports to decrease postoperative complications, we compared the results of our experience with single-port and two-port VATS for primary spontaneous pneumothorax (PSP).
Material And Methods: This is a non-randomized retrospective study. From January 2017 to December 2018, 104 patients with PSP underwent VATS. Fifty-six patients received single-port VATS and 48 patients received two-port VATS. Operation time, blood loss, number of staplers used, drainage time, postoperative hospital stay, complications, chest wall paresthesia, visual analog scale (VAS) pain scores, and patient satisfaction scale scores were compared between the two groups.
Results: There was no difference in age, gender, body mass index (BMI), smoking status, surgical indication, and involved side between the two groups. The procedures performed in the single-port group were similar to those performed in the two-port group. No significant difference was found in operation time, blood loss, number of staplers used, drainage time, and recurrence rate. The rate of chest wall paresthesia was lower in the single-port group than in the two-port group (28.6 vs. 52.1%, p = .014). The VAS scores in the single-port group were lower than those in the two-port group at 24 and 48 h (p = .032 and p = .004).
Conclusions: Compared with two-port VATS, single-port VATS for PSP showed more favorable results in terms of postoperative paresthesia and pain. The single-port procedure may be considered a good alternative to the standard thoracoscopic treatment of PSP. Abbreviations: VATS: Video-assisted thoracic surgery; PSP: primary spontaneous pneumothorax.
abstract_id: PUBMED:29078667
Uniportal video-assisted thoracic surgery for pneumothorax and blebs/bullae. The last British Society of Thoracic Surgeons guidelines of 2010 for the management of primary spontaneous pneumothorax (PSP) stated that, after the first recurrence, the treatment of PSP should be a surgical operation, like a bullectomy accompanying with a procedure for inducing pleural adhesions. Therefore, the surgical approach is considered the best treatment to minimise the risk of recurrence in patients who experienced a PSP. There is substantial evidence in the literature demonstrating that the minimally invasive approach should be preferred to the thoracotomic procedure since it can reduce the postoperative pain and it is associated with a faster recovery of the physical and working activity. The video-assisted thoracic surgery (VATS) approach has been shown to offer greater advantages about patient pain and respiratory function when compared to thoracotomic incisions. A single port or single incision or uniportal approach was developed as an alternative to the standard multi-port VATS. Uniportal technique has shown to be safe and efficient not only for pulmonary resections and biopsies but also for lobectomy. When used for PSP, the bullectomy/blebectomy and pleural abrasion/pleurectomy is performed through the single incision through which the chest drain is then inserted. In this perspective, evidence showed that the minimally invasive approach should be preferred, confirming the advantages in comparison with traditional techniques.
abstract_id: PUBMED:27293823
Single-port thoracoscopic surgery for pneumothorax under two-lung ventilation with carbon dioxide insufflation. Background: The development of single-port thoracoscopic surgery and two-lung ventilation reduced the invasiveness of minor thoracic surgery. This study aimed to evaluate the feasibility and safety of single-port thoracoscopic bleb resection for primary spontaneous pneumothorax using two-lung ventilation with carbon dioxide insufflation.
Methods: Between February 2009 and May 2014, 130 patients underwent single-port thoracoscopic bleb resection under two-lung ventilation with carbon dioxide insufflation. Access was gained using a commercial multiple-access single port through a 2.5-cm incision; carbon dioxide gas was insufflated through a port channel. A 5-mm thoracoscope, articulating endoscopic devices, and flexible endoscopic staplers were introduced through a multiple-access single port for bulla resection.
Results: The mean time from endotracheal intubation to incision was 29.2±7.8 minutes, the mean operative time was 30.9±8.2 minutes, and the mean total anesthetic time was 75.5±14.4 minutes. There were no anesthesia-related complications or wound problems. The chest drain was removed after a mean of 3.7±1.4 days and patients were discharged without complications 4.8±1.5 days from the operative day. During a mean 7.5±10.1 months of follow-up, there were five recurrences (3.8%) in operated thorax.
Conclusions: The anesthetic strategy of single-lumen intubation with carbon dioxide gas insufflation can be a safe and feasible option for single-port thoracoscopic bulla resection as it represents the least invasive surgical option with the potential advantages of reducing operative time and one-lung ventilation-related complications without diminishing surgical outcomes.
Answer: Yes, single port is sufficient in minimally invasive surgery for pneumothorax. Studies have shown that single-port video-assisted thoracoscopic surgery (VATS) is an effective and safe approach for the treatment of pneumothorax. A study comparing single-port to multi-port approaches found that single-port operations had significantly lower visual analog scale scores for pain at various postoperative time points and higher patient satisfaction scores, without any observed recurrences or mortality within the follow-up period (PUBMED:25801108). Another study confirmed that single-port VATS for pneumothorax treatment is feasible and did not show significant differences in operation time, drainage duration, hospitalization length, or complications when compared to two-port VATS (PUBMED:32551166).
Single port thoracic surgery has been reported to be less invasive than conventional surgery for primary spontaneous pneumothorax (PUBMED:27440029). Additionally, subxiphoid single-port VATS has been performed safely with careful patient selection and appropriate instrument modification, although it has some limitations such as instrument fighting and difficulty in handling complex conditions (PUBMED:29399498).
Furthermore, a tubeless protocol utilizing a digital drainage system (DDS) has been developed to select patients for tubeless single-port VATS, effectively reducing the rate of pneumothorax postoperatively in selected patients (PUBMED:30069371). Single-port minimally invasive oesophagectomy (MIO) has also been shown to be a feasible option for treating oesophageal cancer, with outcomes comparable to traditional MIO (PUBMED:26547093).
Moreover, single-port VATS has been used successfully for bilateral pneumothorax or pulmonary bullae, offering rapid rehabilitation and less postoperative pain compared to bilateral thoracic surgery (PUBMED:30971282). A comparison of single-port and two-port VATS for primary spontaneous pneumothorax revealed that single-port VATS resulted in lower rates of chest wall paresthesia and lower pain scores, suggesting it as a good alternative to standard thoracoscopic treatment (PUBMED:32852262).
In summary, the evidence suggests that single-port VATS is a viable and often preferable option for the surgical treatment of pneumothorax, offering benefits such as reduced invasiveness, pain, and recovery time, without compromising the safety and effectiveness of the procedure. |