Instruction: Risk adjustment for congenital heart surgery (RACHS): is it useful in a single-center series of newborns as a predictor of outcome in a high-risk population?
Abstracts:
abstract_id: PUBMED:18377539
Risk adjustment for congenital heart surgery (RACHS): is it useful in a single-center series of newborns as a predictor of outcome in a high-risk population? Objective: Risk adjustment for congenital heart surgery (RACHS) was developed to compare outcome data for pediatric patients undergoing cardiac surgery. RACHS stratifies anatomic diversity into 6 categories based on age, type of surgery performed, and similar in-hospital mortality. The purpose of this retrospective review was to evaluate the use of RACHS in a single-center series as a predictor of outcome in a high-risk newborn population.
Methods: In 2003, 793 pediatric cardiac surgical operations (584 open; 209 closed) were performed at our institution. Mortality was 2.1%. Of the 793 operations, 114 were in newborns less than 15 days of age. These 114 newborns were stratified according to RACHS. Two patients could not be stratified and were excluded from analysis. Preoperative, operative, and postoperative variables were compared between the RACHS stratified newborns.
Results: Unexpectedly, newborns in RACHS category 4 had lower birth weights (3.0 ± 0.5 kg vs. 3.5 ± 0.5 kg; P < .05) and a trend toward increased postoperative inotropic score (19 ± 7 vs. 16 ± 4), increased postoperative lactic acid (72 ± 48 vs. 63 ± 25), increased length of mechanical ventilation (23 ± 72 days vs. 8 ± 6 days), increased length of stay (34 ± 72 days vs. 31 ± 17 days), and increased mortality (16% vs. 11%) compared with newborns in RACHS category 6.
Conclusion: Limitations of risk assessment using RACHS in a single-center series of high-risk newborns include the lack of consideration of confounding variables. Further risk adjustments that include such confounding variables are warranted.
abstract_id: PUBMED:15283367
Risk adjustment for congenital heart surgery: the RACHS-1 method. The new health care environment has increased the need for accurate information about outcomes after pediatric cardiac surgery to facilitate quality improvement efforts both locally and globally. The Risk Adjustment for Congenital Heart Surgery (RACHS-1) method was created to allow a refined understanding of differences in mortality among patients undergoing congenital heart surgery, as would typically be encountered within a pediatric population. RACHS-1 can be used to evaluate differences in mortality among groups of patients within a single dataset, such as variability among institutions. It can also be used to evaluate the performance of a single institution in comparison to other benchmark data, provided that complete model parameters are known. Underlying assumptions about RACHS-1 risk categories, inclusion and exclusion criteria, and appropriate and inappropriate uses are discussed.
abstract_id: PUBMED:16476221
Risk adjustment for surgery of congenital heart disease--secondary publication Risk adjustment for specialties covering many diagnoses is difficult. The Risk Adjusted classification for Congenital Heart Surgery (RACHS-1) was created to compare the in-hospital mortality rate of groups of children undergoing surgery for congenital heart disease. We applied the classification to the operations performed at Skejby Sygehus (1996-2002) and found that RACHS-1 can be used to predict the in-hospital mortality rate and length of stay in the intensive care unit in a Danish center for congenital heart surgery. The mortality rate was similar to that reported by larger centers.
abstract_id: PUBMED:20224379
A risk adjustment method for newborns undergoing noncardiac surgery. Objective: To develop a risk adjustment method for in-hospital mortality in newborns undergoing noncardiac surgery.
Summary Of Background Data: Understanding variation in outcomes is critical to guide quality improvement. Reliable outcome assessments need risk adjustment to allow comparisons.
Methods: Infants ≤30 days old undergoing noncardiac surgical procedures were identified using the Kids' Inpatient Database (KID); year 2000. Premature infants were excluded. Procedures identified by ICD-9-CM codes with ≥20 cases in the data set were placed into 4 risk categories by in-hospital mortality rates. Clinical variables were added to the model to better predict mortality; areas under the receiver-operator characteristic (ROC) curves were compared. The final model was validated in the KID 2003 database.
Results: Among 6103 eligible cases in the KID 2000, 5117 (83.8%) could be assigned to a risk category. Mortality rates were 0.2% in risk category 1, 2.5% in category 2, 6.4% in 3, and 18.4% in 4. The odds of mortality increased in each risk category relative to category 1 (P < 0.001 for each). In multivariable models adjusting for risk category, the clinical variables most predictive of in-hospital death were serious respiratory conditions and necrotizing enterocolitis. The area under the ROC curve for the full model including clinical risk factors was 0.92 in the KID 2000. The model was validated using data for KID 2003 and showed excellent discrimination (ROC = 0.90).
Conclusion: This validated method provides a means of risk adjustment in groups of newborns undergoing noncardiac surgery, and should allow for comparative analyses of in-hospital mortality.
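To make the validation step above concrete, the following is a minimal, hypothetical sketch of how a risk model's discrimination is measured with the area under the ROC curve, as reported for the KID 2000 derivation and KID 2003 validation cohorts. The cohorts, variables, and event rates below are invented placeholders, not the study's data or code.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n):
    # Risk category 1-4 plus two binary clinical flags (stand-ins for
    # serious respiratory conditions and necrotizing enterocolitis).
    X = np.column_stack([
        rng.integers(1, 5, n),          # assigned risk category
        rng.integers(0, 2, (n, 2)),     # clinical variables
    ])
    y = rng.binomial(1, 0.04 * X[:, 0], n)  # mortality rises with category
    return X, y

X_train, y_train = make_cohort(1000)    # stand-in for the KID 2000 cohort
X_val, y_val = make_cohort(500)         # stand-in for the KID 2003 cohort

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Validation ROC AUC: {auc:.2f}")  # the study reported 0.90
```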
abstract_id: PUBMED:24569325
Congenital heart surgery outcome analysis: Indian experience. Background: The study aimed to analyze the outcome of congenital heart surgery in a subset of Indian patients, using the Aristotle Basic Complexity score, the Risk Adjustment for Congenital Heart Surgery categories, and the Society of Thoracic Surgeons and European Association for Cardiothoracic Surgery mortality categories.
Patients And Methods: 1312 patients <18 years of age undergoing congenital heart surgery were assigned the 3 scores and studied for outcome indices of difficulty (cardiopulmonary bypass time or duration of surgery >120 min), morbidity (intensive care unit stay >7 days), and mortality.
Results: The overall mortality was 6.85%, with a mean Aristotle Basic Complexity score, Risk Adjustment for Congenital Heart Surgery category, and Society of Thoracic Surgeons and European Association for Cardiothoracic Surgery mortality category of 7.17 ± 2.04, 2.28 ± 0.78, and 2.24 ± 1.06, respectively. The mortality predictive capacity of the Risk Adjustment for Congenital Heart Surgery category (c = 0.76) was similar to that of the Society of Thoracic Surgeons and European Association for Cardiothoracic Surgery mortality category (c = 0.75); both were better than the Aristotle Basic Complexity score (c = 0.66). The Risk Adjustment for Congenital Heart Surgery category and Aristotle Basic Complexity score correlated with morbidity and difficulty outcomes.
Conclusion: The study shows that the Aristotle Basic Complexity score, the Risk Adjustment for Congenital Heart Surgery category, and the Society of Thoracic Surgeons and European Association for Cardiothoracic Surgery mortality category are tools of case mix stratification to analyze congenital heart surgery outcomes in a subset of the Indian population.
abstract_id: PUBMED:28891032
The need for unique risk adjustment for surgical site infections at a high-volume, tertiary care center with inherent high-risk colorectal procedures. Background: The aim of the present study was to create a unique risk adjustment model for surgical site infection (SSI) in patients who underwent colorectal surgery (CRS) at the Cleveland Clinic (CC) with inherent high risk factors by using a nationwide database.
Methods: The American College of Surgeons National Surgical Quality Improvement Program database was queried to identify patients who underwent CRS between 2005 and 2010. Initially, CC cases were identified from all NSQIP data according to case identifier and separated from the other NSQIP centers. Demographics, comorbidities, and outcomes were compared. Logistic regression analyses were used to assess the association between SSI and center-related factors.
Results: A total of 70,536 patients met the inclusion criteria and underwent CRS, 1090 patients (1.5%) at the CC and 69,446 patients (98.5%) at other centers. Male gender, work-relative value unit, diagnosis of inflammatory bowel disease, pouch formation, open surgery, steroid use, and preoperative radiotherapy rates were significantly higher in the CC cases. Overall morbidity and individual postoperative complication rates were found to be similar in the CC and other centers except for the following: organ-space SSI and sepsis rates (higher in the CC cases); and pneumonia and ventilator dependency rates (higher in the other centers). After covariate adjustment, the estimated degree of difference between the CC and other institutions with respect to organ-space SSI was reduced (OR 1.38, 95% CI 1.08-1.77).
Conclusions: The unique risk adjustment strategy may provide center-specific comprehensive analysis, especially for hospitals that perform inherently high-risk procedures. Higher surgical complexity may be the reason for increased SSI rates in the NSQIP at tertiary care centers.
abstract_id: PUBMED:34801250
The influence of decreasing variable collection burden on hospital-level risk-adjustment. Background: Risk-adjustment is a key feature of the American College of Surgeons National Surgical Quality Improvement Program-Pediatric (NSQIP-Ped). Risk-adjusted model variables require meticulous collection and periodic assessment. This study presents a method for eliminating superfluous variables using the congenital malformation (CM) predictor variable as an example.
Methods: This retrospective cohort study used NSQIP-Ped data from January 1st to December 31st, 2019 from 141 hospitals to compare six risk-adjusted mortality and morbidity outcome models with and without CM as a predictor. Model performance was compared using C-index and Hosmer-Lemeshow (HL) statistics. Hospital-level performance was assessed by comparing changes in outlier statuses, adjusted quartile ranks, and overall hospital performance statuses between models with and without CM inclusion. Lastly, Pearson correlation analysis was performed on log-transformed ORs between models.
Results: Model performance was similar with removal of CM as a predictor. The difference between C-index statistics was minimal (≤ 0.002). Graphical representations of model HL-statistics with and without CM showed considerable overlap and only one model attained significance, indicating minimally decreased performance (P = 0.058 with CM; P = 0.044 without CM). Regarding hospital-level performance, minimal changes in the number and list of hospitals assigned to each outlier status, adjusted quartile rank, and overall hospital performance status were observed when CM was removed. Strong correlation between log-transformed ORs was observed (r ≥ 0.993).
Conclusions: Removal of CM from NSQIP-Ped has minimal effect on risk-adjusted outcome modelling. Similar efforts may help balance optimal data collection burdens without sacrificing highly valued risk-adjustment in the future.
Level Of Evidence: Level II prognosis study.
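As a small illustration of the final analysis step described above, correlating log-transformed odds ratios between models fitted with and without the candidate predictor, here is a hedged sketch; the OR values are invented placeholders, not NSQIP-Ped estimates.
```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ORs for the same covariates from two models, one fitted
# with the congenital malformation predictor and one without it.
or_with_cm = np.array([1.8, 0.7, 2.4, 1.1, 0.9, 3.2])
or_without_cm = np.array([1.7, 0.8, 2.3, 1.1, 1.0, 3.1])

r, p = pearsonr(np.log(or_with_cm), np.log(or_without_cm))
print(f"r = {r:.3f}, p = {p:.3g}")  # r near 1 supports dropping the variable
```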
abstract_id: PUBMED:31545015
Risk stratification models for congenital heart surgery in children: Comparative single-center study. Objective: Three scores have been proposed to stratify the risk of mortality for each cardiac surgical procedure: The RACHS-1, the Aristotle Basic Complexity (ABC), and the STS-EACTS complexity scoring model. The aim was to compare the ability to predict mortality and morbidity of the three scores applied to a specific population.
Design: Retrospective, descriptive study.
Setting: Pediatric and neonatal intensive care units in a referral hospital.
Patients: Children under 18 years admitted to the intensive care unit after surgery.
Interventions: None.
Outcome Measures: Demographic, clinical, and surgical data were assessed. Morbidity was defined as prolonged length of stay (LOS > 75th percentile), high respiratory support (>72 hours of mechanical ventilation), or high hemodynamic support (inotropic support >20).
Results: One thousand one hundred and thirty-seven patients were included, of whom 205 were newborns (18%). Category 2 was the most frequent in all three scores (ABC, 44.9%; STS-EACTS, 40.8%). Newborns presented significantly higher categories. Children required cardiopulmonary bypass on more occasions (P < .001), but bypass and aortic cross-clamp times were significantly longer in newborns (P < .001 and P = .016). Thirty-two patients died (2.8%). A quarter of patients had a prolonged LOS, 17% had high respiratory support, and 7.1% had high hemodynamic support. RACHS-1 (AUC 0.760) and STS-EACTS (AUC 0.763) were more powerful for predicting mortality, and STS-EACTS for predicting prolonged LOS (AUC 0.733) and the need for high respiratory support (AUC 0.742).
Conclusions: STS-EACTS seems to stratify better risk of mortality, prolonged LOS, and need for respiratory support after surgery.
abstract_id: PUBMED:17382616
Case complexity scores in congenital heart surgery: a comparative study of the Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) system. Objective: The Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery system were developed by consensus to compare outcomes of congenital cardiac surgery. We compared the predictive value of the 2 systems.
Methods: Of all index congenital cardiac operations at our institution from 1982 to 2004 (n = 13,675), we were able to assign an Aristotle Basic Complexity score, a Risk Adjustment in Congenital Heart Surgery score, and both scores to 13,138 (96%), 11,533 (84%), and 11,438 (84%) operations, respectively. Models of in-hospital mortality and length of stay were generated for Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery using an identical data set in which both Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery scores were assigned. The likelihood ratio test for nested models and paired concordance statistics were used.
Results: After adjustment for year of operation, the odds ratios for Aristotle Basic Complexity score 3 versus 6, 9 versus 6, 12 versus 6, and 15 versus 6 were 0.29, 2.22, 7.62, and 26.54 (P < .0001). Similarly, odds ratios for Risk Adjustment in Congenital Heart Surgery categories 1 versus 2, 3 versus 2, 4 versus 2, and 5/6 versus 2 were 0.23, 1.98, 5.80, and 20.71 (P < .0001). Risk Adjustment in Congenital Heart Surgery added significant predictive value over Aristotle Basic Complexity (likelihood ratio chi2 = 162, P < .0001), whereas Aristotle Basic Complexity contributed much less predictive value over Risk Adjustment in Congenital Heart Surgery (likelihood ratio chi2 = 13.4, P = .009). Neither system fully adjusted for the child's age. The Risk Adjustment in Congenital Heart Surgery scores were more concordant with length of stay compared with Aristotle Basic Complexity scores (P < .0001).
Conclusions: The predictive value of Risk Adjustment in Congenital Heart Surgery is higher than that of Aristotle Basic Complexity. The use of Aristotle Basic Complexity or Risk Adjustment in Congenital Heart Surgery as risk stratification and trending tools to monitor outcomes over time and to guide risk-adjusted comparisons may be valuable.
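The "likelihood ratio test for nested models" cited above compares a model containing one score with a model containing both. The sketch below shows the mechanics on invented data (hypothetical score values and an assumed event rate), not the study's 11,438-operation dataset.
```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 2000
abc = rng.integers(3, 16, n)            # hypothetical Aristotle scores
rachs = rng.integers(1, 7, n)           # hypothetical RACHS-1 categories
death = rng.binomial(1, 0.02 * rachs, n)

# Reduced model: ABC alone. Full model: ABC plus RACHS-1.
base = sm.Logit(death, sm.add_constant(abc)).fit(disp=0)
full = sm.Logit(death, sm.add_constant(np.column_stack([abc, rachs]))).fit(disp=0)

lr_stat = 2 * (full.llf - base.llf)     # likelihood-ratio statistic
p_value = chi2.sf(lr_stat, df=full.df_model - base.df_model)
print(f"LR chi2 = {lr_stat:.1f}, p = {p_value:.3g}")
```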
abstract_id: PUBMED:23305582
Clinical outcome of patients in a start-up congenital heart surgery program in Turkey. This study summarizes the clinical outcome data of a start-up congenital heart surgery program in Turkey. Between December 2009 and February 2012, 616 operations were performed in 132 newborns (22%), 260 infants (42%), and 224 children/adolescents (36%). Risk adjustment analysis was performed using the risk adjustment in congenital heart surgery (RACHS-1) risk assessment model. There were 66 mortalities (10.7%). According to the RACHS-1 categories, there were 51 cases in level I (8.2%) with no mortality (0%), 250 in level II (40.6%) with 11 (4.4%) mortalities, 199 in level III (32.3%) with 33 (16.5%) mortalities, 53 in level IV (8.6%) with 10 (18.8%) mortalities, 14 in levels V and VI (2.2%) with 11 (78.5%) mortalities, and 49 cases (7.9%) outside the RACHS-1 categories with one (2.0%) mortality. Mortality was higher in neonates than in infants, and in infants than in children/adolescents. Mortality was higher in palliative procedures than in corrective procedures. The data demonstrate that a start-up program with a relatively heavy surgical volume may achieve acceptable clinical results with good teamwork. Collaboration of anesthesiologists, perfusionists, pediatric cardiologists, intensivists, and cardiovascular surgeons is necessary to provide a better outcome in congenital heart surgery.
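The level-specific mortality rates above follow directly from the reported counts; this short check recomputes them (the published percentages appear to truncate, rather than round, the final digit):
```python
# Counts taken verbatim from the abstract above.
cases = {"I": 51, "II": 250, "III": 199, "IV": 53, "V-VI": 14, "other": 49}
deaths = {"I": 0, "II": 11, "III": 33, "IV": 10, "V-VI": 11, "other": 1}

for level in cases:
    rate = 100 * deaths[level] / cases[level]
    print(f"RACHS-1 {level}: {deaths[level]}/{cases[level]} = {rate:.2f}% mortality")

overall = 100 * sum(deaths.values()) / sum(cases.values())
print(f"Overall: {sum(deaths.values())}/{sum(cases.values())} = {overall:.1f}%")  # 66/616 = 10.7%
```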
Answer: The Risk Adjustment for Congenital Heart Surgery (RACHS) method is designed to compare outcomes for pediatric patients undergoing cardiac surgery by stratifying patients into categories based on age, type of surgery, and similar in-hospital mortality rates. The RACHS method has been applied in various settings to predict outcomes such as in-hospital mortality and length of stay in intensive care units (PUBMED:16476221, PUBMED:24569325, PUBMED:31545015).
In a single-center series of high-risk newborns, the RACHS method was evaluated as a predictor of outcome. The study found that newborns in RACHS category 4 had lower birth weights and a trend toward increased postoperative complications compared with newborns in RACHS category 6. However, the study also highlighted limitations in the RACHS method, such as the lack of consideration for confounding variables, suggesting that further risk adjustments including these variables are warranted (PUBMED:18377539).
Another study comparing the RACHS-1 method with the Aristotle Basic Complexity score and the STS-EACTS complexity scoring model found that RACHS-1 and STS-EACTS were more powerful for predicting mortality and that STS-EACTS was better for predicting prolonged length of stay and the need for respiratory support after surgery (PUBMED:31545015).
Overall, while the RACHS method is useful for risk adjustment in congenital heart surgery, its application in a single-center series of newborns may require additional considerations and adjustments to account for confounding factors that are not captured by the RACHS categories alone. Additional studies and models may be necessary to improve the predictive accuracy for high-risk newborn populations (PUBMED:18377539).
Instruction: Immunohistochemical characterisation of breast cancer: towards a new classification?
Abstracts:
abstract_id: PUBMED:18783672
Immunohistochemical characterisation of breast cancer: towards a new classification? Background: The aim of this paper is to determine the possible association between five different profiles of immunohistochemical expression and clinical, histopathological and immunohistochemical variables of known prognostic value for breast cancer.
Material And Method: A total of 194 breast carcinoma tumour samples were studied. In this study five groups or immunohistochemical profiles were defined, based on expression of hormone receptors (oestrogen or progesterone) and/or Her2/neu (luminal-type A, luminal-type B, mixed profile, Her2/neu profile and triple-negative-type profile) and we studied whether there are differences between them with regard to clinical, histopathological and immunohistochemical variables that have a known prognostic significance.
Results: In the series we found 134 (69%) cases corresponding to a luminal immunophenotype, of which 98 (50.5%) were from the luminal A group and 36 (18.6%) from luminal B. Twenty-nine cases (14.9%) were triple-negative, 18 (9.3%) mixed and 13 (6.7%) Her2/neu type. It is worth noting the relationship between the triple-negative and Her2/neu immunophenotypes and the more poorly differentiated histological forms (62% and 60%, respectively) and between the luminal A group and well-differentiated tumours (p = 0.008). Expression of ki67 was high in the triple-negative group (73.9%) and low in the luminal A group (26.3%; p = 0.001). The expression of p53 was also greater for the Her2/neu (55.5%) and triple-negative (60.8%) groups (p = 0.0005) than for the others.
Conclusions: The subgroups without hormone receptor expression, whether with Her2/neu overexpression (Her2/neu group) or without it (triple-negative group), have characteristics associated with a poorer prognosis. The lack of progesterone receptor expression also seems to be associated with these.
abstract_id: PUBMED:36662110
An Approach toward Automatic Specifics Diagnosis of Breast Cancer Based on an Immunohistochemical Image. The paper explored the problem of automatic diagnosis based on immunohistochemical image analysis; such automated diagnosis serves as a preliminary, advisory finding for the diagnostician. The authors studied breast cancer histological and immunohistochemical images using the following biomarkers: progesterone, estrogen, oncoprotein, and a cell proliferation biomarker. The authors developed a breast cancer diagnosis method based on immunohistochemical image analysis. The proposed method consists of algorithms for image preprocessing, segmentation, and the determination of informative indicators (relative area and intensity of cells), and an algorithm for determining the molecular genetic breast cancer subtype. An adaptive algorithm for image preprocessing was developed to improve the quality of the images; it includes median filtering and image brightness equalization techniques. In addition, the authors developed a software module, part of the HIAMS software package, based on the Java programming language and the OpenCV computer vision library. Four molecular genetic breast cancer subtypes could be identified using this solution: Luminal A, Luminal B, HER2/neu amplified, and basal-like. The developed algorithm for the quantitative characteristics of the immunohistochemical images showed sufficient accuracy in determining the cancer subtype "Luminal A". It was experimentally established that the relative area of the nuclei of cells covered with biomarkers of progesterone, estrogen, and oncoprotein was more than 85%. The given approach allows for automating and accelerating the process of diagnosis. The developed algorithms for calculating the quantitative characteristics of cells on immunohistochemical images can increase the accuracy of diagnosis.
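As an illustration of the preprocessing stage described above (median filtering plus brightness equalization) and a crude relative-stained-area measure, here is a hedged Python/OpenCV sketch. It is not the authors' HIAMS code (which is written in Java); the intensity threshold, color-space choice, and file name are assumptions made for the example.
```python
import cv2
import numpy as np

def preprocess(bgr: np.ndarray) -> np.ndarray:
    """Median-filter the image, then equalize its brightness channel."""
    filtered = cv2.medianBlur(bgr, 5)                  # suppress impulse noise
    ycrcb = cv2.cvtColor(filtered, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])  # equalize luminance only
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def relative_stained_area(bgr: np.ndarray, thresh: int = 120) -> float:
    """Fraction of pixels darker than a (hypothetical) stain threshold."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return float((gray < thresh).mean())               # stained nuclei are dark

img = cv2.imread("ihc_sample.png")                     # placeholder file name
if img is not None:
    area = relative_stained_area(preprocess(img))
    print(f"Relative stained area: {100 * area:.1f}%")
```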
abstract_id: PUBMED:37113711
Immunohistochemical subtype and its relationship with 5-year overall survival in breast cancer patients. Background: Breast cancer (BC) is the malignant tumour that has been most frequently diagnosed, being the second most common cancer worldwide and the most frequent in women.
Objective: To analyse the probability of 5-year overall survival according to age, stage of disease, immunohistochemical subtype, histological grade and histological type in patients with BC.
Methodology: Operational research that used a cohort design of patients diagnosed with BC at the SOLCA Núcleo de Loja-Ecuador Hospital from 2009 to 2015 and with follow-up until December 2019. Survival was estimated with the actuarial method and Kaplan-Meier method, and, for multivariate analysis, the proportional hazards model or Cox regression was used to estimate the adjusted Hazard Ratios (HRs).
Results: Two hundred and sixty-eight patients were studied. Mean overall survival was 4.35 years (95% confidence interval (95% CI): 4.20-4.51) and 66% survived to 5 years. The main predictors of survival were advanced stage of disease (III-IV) (HR = 7.03; 95% CI: 3.81-12.9), human epidermal growth factor receptor 2-neu (HER2-neu) overexpression (HR = 2.26; 95% CI = 1.31-4.75) and triple-negative disease (HR = 2.57; 95% CI = 1.39-4.75). The other variables were not significant.
Conclusions: The results show a higher mortality associated with higher clinical stage, more aggressive histological grades and immunohistochemical subtype HER2-neu overexpressed and triple negative tumours.
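For readers unfamiliar with the two survival methods named above, the sketch below shows Kaplan-Meier estimation and a Cox model yielding adjusted hazard ratios, using the lifelines library on invented data; only the cohort size is borrowed from the abstract, and the covariates are simplified stand-ins.
```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(2)
n = 268  # cohort size from the abstract; everything else is a placeholder
df = pd.DataFrame({
    "years": rng.exponential(6, n).clip(0.1, 10),  # follow-up time
    "died": rng.integers(0, 2, n),                 # event indicator
    "advanced_stage": rng.integers(0, 2, n),       # stage III-IV flag
    "triple_negative": rng.integers(0, 2, n),
})

kmf = KaplanMeierFitter().fit(df["years"], event_observed=df["died"])
print("5-year survival:", float(kmf.survival_function_at_times(5).iloc[0]))

cph = CoxPHFitter().fit(df, duration_col="years", event_col="died")
cph.print_summary()  # the exp(coef) column holds the adjusted HRs
```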
abstract_id: PUBMED:30283220
Evaluation of Immunohistochemical Profile of Breast Cancer for Prognostics and Therapeutic Use. Introduction: Breast cancer is leading cancer in women, and the incidence of breast cancer in India is on the rise. The most common histologic type of breast cancer is infiltrating ductal carcinoma. Prognostic and predictive factors are used in the management of breast cancer. Estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor-2 (HER2/neu) are immunohistochemical markers of prognosis as well as predictors of response to therapy.
Aims And Objectives: The study was conducted to evaluate ER, PR, and HER2/neu expressions in invasive ductal carcinomas of the breast by immunohistochemistry, to explore the correlation of these markers to each other and to various clinicopathological parameters: age of the patient, histological grade, tumor size, and lymph node metastasis.
Materials And Methods: This prospective study was conducted on 100 cases of infiltrating ductal carcinoma. Slides were prepared from blocks containing cancer tissue, and immunohistochemical staining was done for ER, PR, and HER2/neu expressions. Interpretation of expressions was done using Allred scoring system for ER/PR and the American Society of Clinical Oncology/College of American Pathologists guidelines for HER2/neu. Statistical analysis was performed to determine the statistical significance by applying Chi-square test.
Results: Majority of tumors were ER and PR positive and HER2/neu negative. ER and PR correlated significantly with age, tumor size, and tumor grade; whereas, HER2/neu correlated significantly with tumor size only. No association was seen with axillary lymph node metastasis. ER and PR expression correlated with each other, but none correlated with HER2/neu.
Conclusions: The majority of the tumors are ER and PR positive, and ER and PR correlate with each other as well as with age, tumor size, and grade. Therefore, routine assessment of hormone receptors is recommended for prognostic and therapeutic information in breast cancer cases.
abstract_id: PUBMED:35811595
Immunohistochemical Subtypes of The Breast Cancer in The Ultrasound and Clinical Aspect - Literature Review. Breast cancer is a heterogeneous disease both in its clinical and radiological manifestations and response to treatment. This is largely due to the polymorphism of the histological types as well as diversified molecular profiles of individual breast cancer types. Progress in the understanding of the biology of breast cancer was made with the introduction of immunohistochemical research into the common practice. On this basis, four main breast cancer subtypes were distinguished: luminal A, luminal B, HER2 positive (human epidermal growth factor receptor-2 positive), and triple negative cancer. The classification of a tumour to an appropriate subtype allows for the optimisation of treatment (surgery or pre-operative chemotherapy). In this study, the authors present different patterns of breast cancer subtypes in ultrasound examination and differences in their treatment, with particular emphasis on aggressive breast cancer subtypes, such as triple negative or HER2 positive. They can, unlike the luminal subtypes, create diagnostic problems. Based on multifactorial analysis of the ultrasound image, with the assessment of lesion margins, orientation, shape, echogenicity, vascularity, the presence of calcifications or assessment by sonoelastography, it is possible to initially differentiate individual subtypes.
abstract_id: PUBMED:26137243
Matrix metalloproteinase-1 expression in breast cancer and cancer-adjacent tissues by immunohistochemical staining. Although matrix metalloproteinase-1 (MMP-1) has been considered a factor of crucial importance for breast cancer cell invasion and metastasis, the expression of MMP-1 in different breast cancer and cancer-adjacent tissues has not been fully examined. In the present study, immunohistochemical staining was used to detect MMP-1 expression in non-specific invasive ductal carcinoma of the breast, cancer-adjacent normal breast tissue, lymph node metastatic non-specific invasive ductal carcinoma of the breast and normal lymph node tissue. The results showed that MMP-1 expression differs among these tissues. MMP-1 had a positive expression in normal lymph node tissue and lymph node metastatic non-specific invasive ductal carcinoma. The MMP-1 negative expression rate was only 6.1% in non-specific invasive ductal carcinoma of the breast and 2.9% in cancer-adjacent normal breast tissue. MMP-1 expression is higher in non-specific invasive ductal carcinoma and lymph node metastatic non-specific invasive ductal carcinoma compared to cancer-adjacent normal breast tissue and normal lymph node tissue. In conclusion, higher expression of MMP-1 in breast cancer may play a crucial role in promoting breast cancer metastasis.
abstract_id: PUBMED:31929699
Immunohistochemical expression of Ki-67, p53, and CD10 in phyllodes tumor and their correlation with its histological grade. Context: Phyllodes tumors (PTs) are the fibroepithelial neoplasms of the breast. Histologically, PTs are divided into three subgroups according to their clinicopathological behavior: benign, borderline, and malignant. It is at times difficult to ascertain the grade of PT on morphological criteria alone, especially borderline PT may be at times difficult to distinguish from its benign or malignant counterparts.
Aims: This study was undertaken to evaluate an immunohistochemical panel of Ki-67, p53, and CD10 in PT and to determine their expression in PT in correlation with its grade.
Settings And Design: This was a retrospective study.
Subjects And Methods: The study included six malignant, six borderline, and twelve benign PT. Expressions of Ki-67, p53, and CD10 were evaluated in all 24 cases and compared across these three categories.
Statistical Analysis Used: Chi-square test was applied, and P < 0.05 was taken as statistically significant.
Results: Stromal expression of Ki-67 and p53 between the benign and borderline/malignant group showed a statistically significant difference. Neither CD10 expression nor epithelial expressions of Ki-67 and p53 were found significant. Periepithelial accentuation of Ki-67 and p53 immunostaining was noted in all positive cases.
Conclusions: Ki-67 labeling index and p53 immunostaining can be a useful adjunct to determine the grade in difficult cases. However, no single immunomarker can reliably distinguish between benign and borderline phyllodes in all cases.
abstract_id: PUBMED:32232794
Immunohistochemical index prediction of breast tumor based on multi-dimension features in contrast-enhanced ultrasound. Breast cancer is the leading killer of Chinese women. The immunohistochemistry index has great significance in treatment strategy selection and prognosis analysis for breast cancer patients. Currently, histopathological examination of tumor tissue through surgical biopsy is the gold standard to determine the immunohistochemistry index. However, this examination is invasive and commonly causes discomfort in patients. There has been a lack of noninvasive methods capable of predicting the immunohistochemistry index for breast cancer patients. This paper proposes a machine learning method to predict the immunohistochemical index of breast cancer patients by using noninvasive contrast-enhanced ultrasound. A total of 119 breast cancer patients were included in this retrospective study. Each patient underwent pathological examination of immunohistochemical expression and contrast-enhanced ultrasound imaging of the breast tumor. Multi-dimension features, including 266 three-dimension features and 837 two-dimension dynamic features, were extracted from the contrast-enhanced ultrasound sequences. Using the machine learning prediction method, 21 selected multi-dimension features were integrated to generate a model for predicting the immunohistochemistry index noninvasively. The immunohistochemical index of human epidermal growth factor receptor-2 (HER2) was predicted based on multi-dimension features in the contrast-enhanced ultrasound sequence with a sensitivity of 71% and a specificity of 79% in the testing cohort. Therefore, noninvasive contrast-enhanced ultrasound can be used to predict the immunohistochemical index. To the best of our knowledge, no studies have been reported on predicting the immunohistochemical index by using contrast-enhanced ultrasound sequences for breast cancer patients. The proposed method is noninvasive and can predict the immunohistochemical index from contrast-enhanced ultrasound in several minutes, instead of relying entirely on the invasive, biopsy-based histopathological examination.
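The reported test-cohort sensitivity and specificity are simple functions of the confusion matrix; the sketch below shows the computation on toy labels, not the study's contrast-enhanced ultrasound pipeline.
```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])  # HER2 status (toy data)
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])  # model predictions (toy)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Sensitivity:", tp / (tp + fn))  # true-positive rate
print("Specificity:", tn / (tn + fp))  # true-negative rate
```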
abstract_id: PUBMED:34068349
Detection of Human Cytomegalovirus Proteins in Paraffin-Embedded Breast Cancer Tissue Specimens-A Novel, Automated Immunohistochemical Staining Protocol. Emerging evidence supports a significant association between human cytomegalovirus (HCMV) and human malignancies, suggesting HCMV as a human oncomodulatory virus. HCMV gene products are found in >90% of breast cancer tumors and seem to be correlated with more aggressive disease. The definitive diagnosis of HCMV relies on identification of virus inclusions and/or viral proteins by different techniques including immunohistochemical staining. In order to reduce biases and improve clinical value of HCMV diagnostics in oncological pathology, automation of the procedure is needed and this was the purpose of this study. Tumor specimens from 115 patients treated for primary breast cancer at Akershus University Hospital in Norway were available for the validation of the staining method in this retrospective study. We demonstrate that our method is highly sensitive and delivers excellent reproducibility for staining of HCMV late antigen (LA), which makes this method useful for future routine diagnostics and scientific applications.
abstract_id: PUBMED:27895868
Immunohistochemical Subtypes of Breast Cancer: Correlation with Clinicopathological and Radiological Factors. Background: The relationship between biomarkers and imaging features is important because imaging findings can predict molecular features.
Objectives: To investigate the relationship between clinicopathologic and radiologic factors and the immunohistochemical (IHC) profiles associated with breast cancer.
Patients And Methods: From December 2004 to September 2013, 200 patients (mean age, 56 years; range, 29 - 82 years) were diagnosed with breast cancer and underwent surgery at our institution. Their medical records were reviewed to determine age, symptom presence, mammographic findings (including mass, asymmetry, microcalcifications, or negativity), sonographic Breast Imaging-Reporting and Data System (BI-RADS) category, pathologic type of cancer (invasive ductal, mucinous, medullary, or papillary carcinoma), histologic grade, T-stage, and IHC subtypes. Based on the IHC profiles, tumor subtypes were classified as luminal A, luminal B, human epidermal growth factor receptor 2 (HER2) enriched, or triple-negative (TN) cancers. Using univariate and multivariate logistic regression analyses, we looked for correlations of the four IHC subtypes, and of the two grouped subtypes (TN and non-triple-negative [non-TN]), with clinicopathologic and radiologic factors.
Results: Based on our univariate analyses with the four subtypes, the TN subtype showed a higher incidence of masses on mammography compared to the other subtypes (P = 0.037), and the TN subtype also tended to have the highest histologic grade among the subtypes (P < 0.001). With regard to the two IHC subtypes, the TN subtype had a significant association with medullary cancer (P = 0.021), higher histologic grade (grade 3; P < 0.001), and higher T stage (T2; P = 0.027) compared to the non-TN subtypes. In a multivariate logistic regression analysis of the clinicoradiologic factors compared to luminal A, the HER2 subtype had a significant association with BI-RADS category 4b (odds ratio [OR], 9.005; 95% confidence interval [CI], 1.414 - 57.348; P = 0.020) and borderline significance with category 4c (OR, 4.669; 95% CI, 0.970 - 22.468; P = 0.055). In a multivariate logistic regression analysis of the clinicoradiologic factors associated with the non-TN subtypes, the TN subtype was significantly correlated with medullary carcinoma (OR, 7.092; 95% CI, 1.149 - 43.772; P = 0.035).
Conclusion: These results suggest that patients with the TN subtypes are more likely to have higher-histologic-grade tumors and medullary cancer. The HER2 subtype was typically associated with a higher BI-RADS category.
Answer: The immunohistochemical characterization of breast cancer is indeed moving towards a new classification system that is based on the expression of hormone receptors (estrogen or progesterone) and/or Her2/neu. This classification system defines different immunophenotypes, such as luminal A, luminal B, mixed profile, Her2/neu profile, and triple-negative profile, which are associated with distinct clinical, histopathological, and prognostic features (PUBMED:18783672).
In the luminal A group, which is characterized by hormone receptor expression without Her2/neu overexpression, tumors are often well-differentiated and have a lower expression of ki67 and p53, indicating a better prognosis. In contrast, the triple-negative and Her2/neu immunophenotypes are associated with more poorly differentiated histological forms and higher expression of ki67 and p53, suggesting a poorer prognosis (PUBMED:18783672).
The development of automated diagnostic methods based on immunohistochemical image analysis has been explored to improve the accuracy and speed of breast cancer diagnosis. These methods can identify molecular genetic breast cancer subtypes, such as Luminal A, Luminal B, HER2/neu amplified, and basal-like subtype, by analyzing biomarkers like progesterone, estrogen, oncoprotein, and cell proliferation biomarkers (PUBMED:36662110).
Furthermore, the immunohistochemical subtype of breast cancer has been shown to have a significant relationship with 5-year overall survival, with patients exhibiting HER2-neu overexpressed and triple-negative subtypes having a higher mortality associated with more aggressive histological grades and advanced stages of the disease (PUBMED:37113711).
The immunohistochemical profile of breast cancer, including the expression of ER, PR, and HER2/neu, is also used for prognostics and therapeutic purposes, as these markers correlate with various clinicopathological parameters and can predict the response to therapy (PUBMED:30283220).
Overall, the immunohistochemical characterization of breast cancer is evolving into a more nuanced classification system that has significant implications for prognosis and treatment strategies, and it is being supported by advancements in automated image analysis and machine learning prediction methods (PUBMED:36662110; PUBMED:32232794).
Instruction: Does a Higher Society of Thoracic Surgeons Score Predict Outcomes in Transfemoral and Alternative Access Transcatheter Aortic Valve Replacement?
Abstracts:
abstract_id: PUBMED:27209615
Does a Higher Society of Thoracic Surgeons Score Predict Outcomes in Transfemoral and Alternative Access Transcatheter Aortic Valve Replacement? Background: Nontransfemoral (non-TF) transcatheter aortic valve replacement (TAVR) is often associated with worse outcomes than TF TAVR. We investigated the relationship between increasing Society of Thoracic Surgeons (STS) predicted risk of mortality (PROM) score and observed mortality and morbidity in TF and non-TF TAVR groups.
Methods: We reviewed 595 patients undergoing TAVR at Emory Healthcare between 2007 and 2014. Clinical outcomes were reported for 337 TF patients (57%) and 258 non-TF patients (43%). We created 3 STS PROM score subgroups: <8%, 8%-15%, and >15%. A composite outcome of postoperative events was defined as death, stroke, renal failure, vascular complications, or new pacemaker implantation.
Results: TF patients were older (82.4 ± 8.0 vs 80.8 ± 8.7 years, p = 0.02), whereas the STS PROM was higher in non-TF patients (10.5% ± 5.3% vs 11.7% ± 5.7%, p = 0.01). Observed/expected mortality was less than 1.0 in all groups. The rate of the composite outcome did not differ between STS PROM subgroups in TF (p = 0.68) or non-TF TAVR (p = 0.27). One-year mortality was higher for patients with STS PROM >8% in the non-TF group; however, this difference was not observed in TF patients (p = 0.40).
Conclusions: As expected, non-TF patients were at a higher risk than TF patients for procedural morbidity and death. Although no differences were observed in 30-day deaths or morbidity in different STS PROM subgroups, those undergoing non-TF TAVR at a higher STS PROM (>8%) had higher 1-year mortality. When applicable, TF TAVR remains the procedure of choice in high- or extreme-risk patients undergoing TAVR.
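The subgroup comparisons above (e.g., p = 0.68 and p = 0.27 for the composite outcome across STS PROM strata) are the kind of result a chi-square test on a contingency table yields; the sketch below uses invented counts, not the Emory series.
```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: STS PROM <8%, 8%-15%, >15%; columns: composite event yes / no.
table = np.array([
    [20, 120],
    [25, 115],
    [12, 45],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```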
abstract_id: PUBMED:29963391
Transcaval transcatheter aortic valve replacement: a visual case review. Transcatheter aortic valve replacement (TAVR) has emerged as a viable, minimally invasive and widely adopted approach for the treatment of severe symptomatic aortic stenosis in patients who are at intermediate or greater risk for surgical aortic valve replacement. Numerous studies have demonstrated favorable outcomes with TAVR in this population, particularly with transfemoral access. Transfemoral TAVR has been shown to be safer and associated with less morbidity, shorter hospital stays and more rapid recovery compared with traditional thoracic alternative-access TAVR (transapical or transaortic). Despite iterative advancements in transcatheter heart valve technology and delivery systems, there remains a subset of patients whose iliofemoral arterial vessels are too small for safe transfemoral TAVR. Paradoxically, these patients are generally higher risk and are thus less favorable candidates for open surgery or traditional alternative-access TAVR. With these considerations in mind, transcaval TAVR was developed as a fully percutaneous, non-surgical approach for aortic valve replacement in patients who are poor candidates for traditional alternative-access TAVR. In this manuscript we describe the principles on which transcaval TAVR was developed and the outcomes from the largest trial evaluating this technique, and we describe the procedure in a case-based format.
abstract_id: PUBMED:34915823
Alternative access in high-risk patients in the era of transfemoral aortic valve replacement. Background: We aimed to evaluate the outcomes of transapical and transaortic transcatheter aortic valve replacement (TAVR) in high-risk patients who were not suitable for transfemoral access and had a logistic EuroSCORE-I ≥ 25% and Society of Thoracic Surgeons (STS) score >6%. The 'STS/ACC TAVR In-Hospital Mortality Risk App' was also evaluated.
Material And Methods: Between January 2016 and May 2020, 126 patients at very high risk for aortic valve replacement underwent transapical (n = 121) or transaortic (n = 5) transcatheter aortic valve replacement. TAVR was performed using SAPIEN 3™ or ACURATE TA™ prosthesis.
Results: The logistic EuroSCORE-I was 40.6 ± 14.0%, the STS score 7.9 ± 4.6%, and the STS/ACC score 8.4 ± 3.4%. Valve implantation was successful in all patients. Operative, in-hospital, and 30-day mortality were 0%, 7.9%, and 13.5%, respectively. Survival was 72% at one year and 48% at four years. Expected/observed in-hospital mortality was 1.0 for the STS score and 1.06 for the STS/ACC score. Renal failure, low ejection fraction, and postoperative acute kidney injury, hemorrhage, and vascular complications were identified as independent predictors of 30-day mortality.
Conclusions: Transapical and transaortic TAVR in high-risk patients unsuitable for transfemoral access is still a reasonable alternative in these patients. STS and STS/ACC-score appear to be highly accurate in predicting in-hospital mortality in high-risk patients undergoing TAVR.
abstract_id: PUBMED:29974264
A Review of Alternative Access for Transcatheter Aortic Valve Replacement. With the advent of transcatheter aortic valve replacement (TAVR), appropriately selected intermediate-, high-, and extreme-risk patients with severe aortic stenosis (AS) are now offered a less invasive option compared to conventional surgery. In contemporary practice, TAVR is performed predominantly via a transfemoral arterial approach, whereby a transcatheter heart valve (THV) is delivered in a retrograde fashion through the iliofemoral arterial system and thoraco-abdominal aorta, into the native aortic valve annulus. While the majority of patients possess suitable anatomy for transfemoral arterial access, there is a subset of patients with extensive peripheral vascular disease that precludes this traditional approach to TAVR. Fortunately, innovation in the field of structural heart disease has led to the refinement of alternative access options for THV delivery. Selection of the most appropriate route of therapy mandates a careful consideration of multiple factors, including patient anatomy, technical feasibility, and equipment specifications. Furthermore, understanding the risks conferred by each access site for valve delivery-notably stroke, vascular injury, and major bleeding-is of paramount importance when selecting the approach that will best optimize the outcome for an individual. In this review, we provide a comprehensive summary of alternative approaches to transfemoral arterial TAVR as well as the available outcome data supporting each of these various techniques.
abstract_id: PUBMED:36282201
Standard Transfemoral Transcatheter Aortic Valve Replacement. The introduction of the transcatheter aortic valve implantation procedure has revolutionized the standards of care in patients with aortic valve pathologies and has significantly increased the quality of the medical treatment provided. The durability and constant technical improvements in the modern transcatheter aortic valve implantation procedure have broadened the indications towards younger patient groups with low-risk profiles. Therefore, transcatheter aortic valve implantation now represents an effective alternative for surgical aortic valve replacement in a large number of cases. Currently, various technical methods for the transcatheter aortic valve implantation procedure are available. The contemporary transcatheter aortic valve implantation procedure focuses on optimization of postoperative results and reduction of complications such as paravalvular leakage and permanent pacemaker implantation. Another goal of transcatheter aortic valve implantation is the achievement of a valid lifetime concept with secure coronary access and conditions for future valve-in-valve interventions. In this case report, we demonstrate a standard transfemoral transcatheter aortic valve implantation procedure with a self-expandable supra-annular device, one of the most commonly performed methods.
abstract_id: PUBMED:35243929
Transfemoral Versus Subclavian Access for Transcatheter Aortic Valve Replacement. Objective: This study sought to compare outcomes of transcatheter aortic valve replacement (TAVR) performed through subclavian access with those performed through transfemoral access. Methods: This was an observational study utilizing an institutional TAVR database from 2010 to 2018. All patients undergoing a TAVR via a transfemoral (TF-TAVR) or subclavian (SC-TAVR) approach were included in the study. The groups were analyzed for differences in operative mortality and postoperative outcomes. Multivariable Cox analysis was performed to identify variables associated with long-term survival after TAVR. Results: Of the 1,095 patients identified, 133 patients underwent SC-TAVR and 962 patients underwent TF-TAVR. Patients who underwent SC-TAVR were younger, more likely to have chronic lung disease and peripheral vascular disease, had higher Society of Thoracic Surgeons predicted risk of mortality scores, and were more likely to have self-expanding valves placed (P < 0.05). Operative mortality was similar between the TF-TAVR (2.7%) and SC-TAVR (3.8%) groups. There were no significant differences in stroke, length of stay, 30-day readmission, blood transfusions, acute kidney injury, need for permanent pacemaker, paravalvular leak, or major vascular complications between the groups (P > 0.05). The unadjusted Kaplan-Meier survival estimate for TF-TAVR was significantly higher than for SC-TAVR (P = 0.009, log-rank). However, on multivariable Cox analysis, subclavian access was not significantly associated with an increased hazard of death as compared with transfemoral access (P = 0.21). Conclusions: Outcomes of SC-TAVR are comparable to those of TF-TAVR. Subclavian access may be a favorable alternative approach when TF-TAVR is contraindicated.
abstract_id: PUBMED:36126131
Association Between Peripheral Versus Central Access for Alternative Access Transcatheter Aortic Valve Replacement and Mortality and Stroke: A Report From the Society of Thoracic Surgeons/American College of Cardiology Transcatheter Valve Therapy Registry. Background: In some patients, the alternative access route for transcatheter aortic valve replacement (TAVR) is utilized because the conventional transfemoral approach is not felt to be either feasible or optimal. However, accurate prognostication of patient risks is not well established. This study examines the associations between peripheral (transsubclavian/transaxillary, and transcarotid) versus central access (transapical and transaortic) in alternative access TAVR and 30-day and 1-year end points of mortality and stroke for all valve platforms.
Methods: Using data from The Society of Thoracic Surgeons/American College of Cardiology Transcatheter Valve Therapy Registry with linkage to Medicare claims, patients who underwent alternative access TAVR from June 1, 2015 to June 30, 2018 were identified. Adjusted and unadjusted Cox proportional hazards modeling were performed to determine the association between alternate access TAVR site and 30-day and 1-year end points of mortality and stroke.
Results: Of 7187 alternative access TAVR patients, 3725 (52%) had peripheral access and 3462 (48%) had central access. All-cause mortality was significantly lower in peripheral access versus central access group at in-hospital and 1 year (2.9% versus 6.3% and 20.3% versus 26.6%, respectively), but stroke rates were higher (5.0% versus 2.8% and 7.3% versus 5.5%, respectively; all P<0.001). These results persisted after 1-year adjustment (death adjusted hazard ratio, 0.72 [95% CI, 0.62-0.85] and stroke adjusted hazard ratio, 2.92 [95% CI, 2.21-3.85]). When broken down by individual subtypes, compared with transaxillary/subclavian access patients, transapical, and transaortic access patients had higher all-cause mortality but less stroke (P<0.05).
Conclusions: In this real-world, contemporary, nationally representative benchmarking study of alternate access TAVR sites, peripheral access was associated with favorable mortality and morbidity outcomes compared with central access, at the expense of higher stroke. These findings may allow for accurate prognostication of risk for patient counseling and decision-making for the heart team with regard to alternative access TAVR.
abstract_id: PUBMED:30174904
Non-transfemoral access sites for transcatheter aortic valve replacement. Transfemoral access is currently the standard and preferred access site for transcatheter aortic valve replacement (TAVR), though novel approaches are emerging to expand treatment options for the increasing numbers of patients with a contraindication to the traditional route. Previous publications have provided comparisons between two TAVR access sites, primarily transfemoral versus one of the novel approaches, while others have compared three or four novel approaches. The aim of this report is to provide a comprehensive summary of publications that analyse and compare the six non-transfemoral access sites currently described in the literature: the transapical, transaortic, axillary/subclavian, brachiocephalic, transcarotid, and transcaval approaches. Though there remains little consensus as to the superiority or non-inferiority of TAVR approaches, and randomized clinical trials have yet to support published findings, with careful patient and procedural selection, outcomes for novel approaches have been reported to be comparable to standard transfemoral access when performed by skilled physicians. As such, choice of procedure is primarily based on registry data and the judgement of surgical teams as to which approach is best in each individual case. As TAVR continues to be an increasingly widespread treatment, the search for the optimal access site will grow, and focus should be placed on educating surgeons in all possible approaches so they may review and choose the most appropriate technique for a given patient.
abstract_id: PUBMED:30170829
Simultaneous thoracic aortic endovascular graft and transfemoral transcatheter aortic valve replacement in a patient with a descending aortic thrombus. Severe descending thoracic and abdominal aortic pathology can deter consideration of transfemoral (TF) access for transcatheter aortic valve replacement (TAVR) in adults with severe symptomatic aortic stenosis (AS) and may lead to utilization of alternative access sites. We report a case of an 88-year-old frail woman with severe symptomatic AS referred for TAVR with demonstration of a large thrombus in the descending thoracic aorta immediately distal to the left subclavian artery. Given concerns of thrombus embolization with femoral advancement of the transcatheter valve, coverage with a thoracic aortic endograft was planned immediately prior to the TAVR.
abstract_id: PUBMED:37198066
Shockwave Lithotripsy of Calcific Stenosis of the Distal Abdominal Aorta to Facilitate Transcatheter Aortic Valve Replacement. The transfemoral approach for transcatheter aortic valve replacement (TAVR) is superior to alternative access strategies. Only transfemoral access has been shown to have better clinical outcomes than surgical aortic valve replacement. In our patient, severe calcification of the distal abdominal aorta posed difficulty in using transfemoral access for TAVR. We applied intravascular lithotripsy (IVL) to the distal abdominal aorta to achieve necessary luminal gain facilitating bioprosthetic aortic valve deployment.
Answer: A higher Society of Thoracic Surgeons (STS) score does predict outcomes in both transfemoral (TF) and alternative access transcatheter aortic valve replacement (TAVR), but the impact varies between the two approaches. In a study examining patients undergoing TAVR at Emory Healthcare, it was found that non-TF patients had a higher STS predicted risk of mortality (PROM) score compared to TF patients, and those undergoing non-TF TAVR with an STS PROM score greater than 8% had higher 1-year mortality. However, this difference was not observed in TF patients (PUBMED:27209615).
Transfemoral TAVR is generally considered safer, with less morbidity, shorter hospital stays, and more rapid recovery compared to traditional thoracic alternative-access TAVR (PUBMED:29963391). When TF access is not feasible, alternative access routes such as transapical and transaortic TAVR are still reasonable alternatives for high-risk patients, and the STS and STS/ACC scores appear to be accurate in predicting in-hospital mortality for these patients (PUBMED:34915823).
Subclavian access for TAVR has been found to have comparable outcomes to TF-TAVR, suggesting it may be a favorable alternative when TF access is contraindicated (PUBMED:35243929). Additionally, peripheral access for alternative access TAVR has been associated with favorable mortality and morbidity outcomes compared with central access, despite a higher stroke rate (PUBMED:36126131).
Overall, while the STS score is a predictor of outcomes in both TF and alternative access TAVR, the choice of access route and the impact of the STS score on outcomes may vary based on patient-specific factors, procedural considerations, and the type of access used. TF TAVR remains the procedure of choice in high- or extreme-risk patients when feasible, but alternative access routes provide viable options with their own risk profiles and should be considered based on individual patient anatomy and risk factors (PUBMED:27209615, PUBMED:29974264, PUBMED:36282201, PUBMED:30174904, PUBMED:30170829, PUBMED:37198066).
Instruction: Is low-grade serous ovarian cancer part of the tumor spectrum of hereditary breast and ovarian cancer?
Abstracts:
abstract_id: PUBMED:21126756
Is low-grade serous ovarian cancer part of the tumor spectrum of hereditary breast and ovarian cancer? Objective: To determine whether women with low-grade serous ovarian cancer (LGSOC) have personal and family histories of breast and ovarian cancer that are less suggestive of Hereditary Breast and Ovarian Cancer (HBOC), as compared to women with high-grade serous ovarian cancer (HGSOC).
Methods: A single institution, case-control retrospective review of medical records was conducted. Personal demographics, personal cancer history, and family history of breast and ovarian cancer of women with LGSOC were collected and compared to controls with HGSOC, which is known to be associated with HBOC.
Results: 195 cases of LGSOC and 386 controls with HGSOC were included in the analysis. Women with LGSOC were significantly less likely to have a first- or second-degree relative with breast or ovarian cancer (p=0.0016). Additionally, when the personal and family histories were quantified using the Myriad BRCA mutation prevalence tables, women with LGSOC had lower scores indicative of a less suggestive family history for HBOC (p=0.027).
Conclusions: In this study, women with LGSOC had family histories that were less suggestive of HBOC compared to women with HGSOC, especially when the degree of relatedness of affected relatives was taken into account. By beginning to determine if LGSOC is part of the tumor spectrum seen in HBOC, this study is an important step towards refining hereditary cancer risk assessment for women with ovarian cancer.
abstract_id: PUBMED:22829013
Portuguese c.156_157insAlu BRCA2 founder mutation: gastrointestinal and tongue neoplasias may be part of the phenotype. We screened for the BRCA2 c.156_157insAlu founder mutation in a cohort of 168 women with a diagnosis of breast cancer referred for genetic counseling because of risk of being carriers of hereditary breast and ovarian cancer syndrome. The Portuguese founder mutation BRCA2 c.156_157insAlu was identified in three unrelated breast cancer probands. Genotyping identified a common haplotype between markers D13S260 and D13S171, and allele sizes were compatible with those described in the Portuguese families. Allele sizes of marker D13S1246, however, were concordant in two families, suggesting that the haplotype may be larger in a subset of families. Tumor phenotypes in Brazilian families seem to reinforce the high prevalence of breast cancer among affected males. However, an apparent excess of gastrointestinal and tongue neoplasias was also observed in these families. Although these tumors are not part of the phenotypic spectrum of hereditary breast and ovarian cancer syndrome, they might be accounted for by other risk alleles contained in the founder haplotype region.
abstract_id: PUBMED:37170728
Diverse genetic spectrum among patients who met the criteria of hereditary breast, ovarian and pancreatic cancer syndrome. Objective: Genetic high-risk assessment combines hereditary breast, ovarian and pancreatic cancer into one syndrome. However, there is a lack of data for comparing the germline mutational spectrum of the cancer predisposing genes between these three cancers.
Methods: Patients who met the criteria for hereditary breast, ovarian and pancreatic cancer syndrome were enrolled and received multi-gene sequencing.
Results: We enrolled 730 probands: 418 developed breast cancer, 185 had ovarian cancer, and 145 had pancreatic cancer. Of the 18 patients who had two types of cancer, 16 had breast and ovarian cancer and 2 had breast and pancreatic cancer. A total of 167 (22.9%) patients had 170 mutations. Mutation frequency in breast, ovarian and pancreatic cancer was 22.3%, 33.5% and 17.2%, respectively. The mutation rate was significantly higher in patients with double cancers than in those with a single cancer (p<0.001). BRCA1 and BRCA2 were the predominant genes associated with hereditary breast and ovarian cancer, whereas ATM was the most prevalent gene related to hereditary pancreatic cancer. Genes of hereditary colon cancer, such as the Lynch syndrome genes, were present in a subset of patients with pancreatic or ovarian cancer but seldom in those with breast cancer. Families with a history of both ovarian and breast cancer were associated with a higher mutation rate than those with other histories.
Conclusion: The mutation spectrum varies across the three cancer types and family histories. Our analysis provides guidance for physicians, counsellors, and counselees on the offer and uptake of genetic counseling.
abstract_id: PUBMED:38355628
Germline mutations of 4567 patients with hereditary breast-ovarian cancer spectrum in Thailand. Multi-gene panel testing has led to the detection of pathogenic/likely pathogenic (P/LP) variants in many cancer susceptibility genes in patients with breast-ovarian cancer spectrum. However, the clinical and genomic data of Asian populations, including Thai cancer patients, was underrepresented, and the clinical significance of multi-gene panel testing in Thailand remains undetermined. In this study, we collected the clinical and genetic data from 4567 Thai patients with cancer in the hereditary breast-ovarian cancer (HBOC) spectrum who underwent multi-gene panel testing. Six hundred and ten individuals (13.4%) had germline P/LP variants. Detection rates of germline P/LP variants in breast, ovarian, pancreatic, and prostate cancer were 11.8%, 19.8%, 14.0%, and 7.1%, respectively. Non-BRCA gene mutations accounted for 35% of patients with germline P/LP variants. ATM was the most common non-BRCA gene mutation. Four hundred and thirty-two breast cancer patients with germline P/LP variants (80.4%) met the current NCCN genetic testing criteria. The most common indication was early-onset breast cancer. Ten patients harbored double pathogenic variants in this cohort. Our result showed that a significant proportion of non-BRCA P/LP variants were identified in patients with HBOC-related cancers. These findings support the benefit of multi-gene panel testing for inherited cancer susceptibility among Thai HBOC patients. Some modifications of the testing policy may be appropriate for implementation in diverse populations.
abstract_id: PUBMED:25863477
The prevalence and spectrum of BRCA1 and BRCA2 mutations in Korean population: recent update of the Korean Hereditary Breast Cancer (KOHBRA) study. The Korean Hereditary Breast Cancer (KOHBRA) study was established to evaluate the prevalence and spectrum of BRCA1/2 mutations in Korean breast cancer patients at risk for hereditary breast and ovarian cancer. A total of 2953 subjects (2403 index patients and 550 family members of affected carriers) from 36 centers participated in this study between May 2007 and December 2013. All participants received genetic counseling and BRCA genetic testing. In total, 378 mutation carriers among 2403 index patients were identified. The prevalence of BRCA mutations in specific subgroups was as follows: 22.3 % (274/1228) for breast cancer patients with a family history of breast/ovarian cancers, 8.8 % (39/441) for patients with early-onset (<35 years) breast cancer without a family history, 16.3 % (34/209) for patients with bilateral breast cancer, 4.8 % (1/21) for male patients with breast cancer, and 37.5 % (3/8) for patients with both breast and ovarian cancer. From an analysis of the mutation spectrum, 63 BRCA1 and 90 BRCA2 different mutations, including 44 novel mutations, were identified. The c.7480 (p.Arg2494Ter) mutation in BRCA2 (10.1 %) was the most commonly identified in this cohort. The KOHBRA study is the largest cohort to identify BRCA mutation carriers in Asia. The results suggest that the prevalence of BRCA mutations in familial breast cancer patients is similar to that among Western cohorts. However, some single risk factors without family histories (early-onset breast cancer, male breast cancer, or multiple organ cancers) may limit the utility of BRCA gene testing in the Korean population.
abstract_id: PUBMED:38146508
The emergence of Fanconi anaemia type S: a phenotypic spectrum of biallelic BRCA1 mutations. BRCA1 is involved in the Fanconi anaemia (FA) pathway, which coordinates repair of DNA interstrand cross-links. FA is a rare genetic disorder characterised by bone marrow failure, cancer predisposition and congenital abnormalities, caused by biallelic mutations affecting proteins in the FA pathway. Germline monoallelic pathogenic BRCA1 mutations are known to be associated with hereditary breast/ovarian cancer, however biallelic mutations of BRCA1 were long predicted to be incompatible with embryonic viability, hence BRCA1 was not considered to be a canonical FA gene. Despite this, several patients with biallelic pathogenic BRCA1 mutations and FA-like phenotypes have been identified - defining a new FA type (FA-S) and designating BRCA1 as an FA gene. This report presents a scoping review of the cases of biallelic BRCA1 mutations identified to date, discusses the functional effects of the mutations identified, and proposes a phenotypic spectrum of BRCA1 mutations based upon available clinical and genetic data. We report that this FA-S cohort phenotype includes short stature, microcephaly, facial dysmorphisms, hypo/hyperpigmented lesions, intellectual disability, chromosomal sensitivity to crosslinking agents and predisposition to breast/ovarian cancer and/or childhood cancers, with some patients exhibiting sensitivity to chemotherapy. Unlike most other types of FA, FA-S patients lack bone marrow failure.
abstract_id: PUBMED:18821011
Founder mutations account for the majority of BRCA1-attributable hereditary breast/ovarian cancer cases in a population from Tuscany, Central Italy. Background: Germline mutations in the BRCA1 and BRCA2 tumour-suppressor genes predispose to early-onset breast and ovarian cancer. Although both genes display a highly heterogeneous mutation spectrum, a number of alterations recur in some populations. Only a limited number of founder mutations have been identified in the Italian population so far.
Objective: To investigate the spectrum of BRCA1/BRCA2 mutations in a set of families originating from the Central-Eastern part of Tuscany and to ascertain the presence of founder effects. We also wanted to approximate the age of the most frequent BRCA1 founder mutation.
Results: Overall, four distinct BRCA1 mutations accounted for a large fraction (72.7%) of BRCA1-attributable hereditary breast/ovarian cancer in families originating from this area. We identified common haplotypes for two newly recognised recurrent BRCA1 mutations, c.3228_3229delAG and c.3285delA. The c.3228_3229delAG mutation was estimated to have originated about 129 generations ago. Interestingly, male breast cancer cases were present in 3 out of 11 families with the c.3228_3229delAG mutation.
Conclusions: The observation that a high proportion of families with BRCA1 alterations from Central-Eastern Tuscany harbours a limited number of founder mutations can have significant impact on clinical management of at risk subjects from this area. In addition, the identification of a large set of families carrying an identical mutation that predisposes to breast and ovarian cancer provides unique opportunities to study the effect of other genetic and environmental factors on penetrance and disease phenotype.
abstract_id: PUBMED:32806537
Prevalence and Spectrum of BRCA Germline Variants in Central Italian High Risk or Familial Breast/Ovarian Cancer Patients: A Monocentric Study. Hereditary breast and ovarian cancers are mainly linked to variants in BRCA1/2 genes. Recently, data have shown that identification of BRCA variants has an immediate impact not only on cancer prevention but also on targeted therapeutic approaches. This prospective observational study characterized the overall germline BRCA variant and variant of uncertain significance (VUS) frequency and spectrum in individuals affected by breast (BC) or ovarian cancer (OC) and in healthy individuals at risk, by sequencing the BRCA genes in their entirety. Of the 363 probands analyzed, 50 (13.8%) were BRCA1/2 mutated, 28 (7.7%) at BRCA1 and 23 (6.3%) at BRCA2. The variant c.5266dupC p.(Gln1756Profs) was the most frequent alteration, representing 21.4% of the BRCA1 variants and 12.0% of all variants identified. The variant c.6313delA p.(Ile2105Tyrfs) was the most frequent BRCA2 alteration, observed in 6 patients. Interestingly, two new variants were identified in BRCA2. In addition, 25 different VUS were identified; two were reported for the first time in BRCA1 and two in BRCA2. The proportion of triple-negative BCs was significantly higher in patients with pathogenic BRCA1/2 variants (36.4%) than in BRCA1/2 VUS (16.0%) and BRCA1/2 wild-type patients (10.7%) (p < 0.001). Our study reveals that the overall frequency of BRCA germline variants in the selected high-risk Italian population is about 13.8%. We believe that our results could have significant implications for preventive strategies for unaffected BRCA carriers and effective targeted treatments such as PARP inhibitors for patients with BC or OC.
abstract_id: PUBMED:35578052
Mutational spectrum of BRCA1/2 genes in Moroccan patients with hereditary breast and/or ovarian cancer, and review of BRCA mutations in the MENA region. Purpose: Breast cancer (BC) is the most common form of female cancer around the world. BC is mostly sporadic, and rarely hereditary. These hereditary forms are mostly BRCA1- and BRCA2-associated hereditary breast and ovarian cancer syndrome. BRCA1 and BRCA2 genes are large and had some recurrent mutations specific to some populations. Through this work we analyze the most recurrent mutations in Moroccan population and compared them to a large review of other BRCA1/2 spectrum mutations in the MENA region.
Methods: We report in this work a series of 163 unrelated patients (the largest series of Moroccan patients) with familial breast and/or ovarian cancer, selected among patients referred to our oncogenetic outpatient clinic, from 2006 to 2021. To identify genetic variants in these two genes, different genetic analysis strategies have been carried out, using Sanger Sequencing DNA or Target Panel Sequencing.
Results: Pathogenic variants were identified in 27.6% of patients. The most frequent mutation identified in our patients was c.1310_1313delAAGA in BRCA2 (33%), and three other mutations seem more frequent in the Moroccan population (33% of all reported patients): c.798_799delTT in BRCA1; c.3279delC in BRCA1; and c.7234_7235insG in BRCA2.
Conclusion: Through this work, we emphasize the importance of screening for BRCA1 and BRCA2 recurrent mutations in Moroccan patients. Other MENA (an English-language acronym referring to the Middle East and North Africa region) countries also have some recurrent BRCA mutations, which will allow a fast and inexpensive first-line genetic analysis and a precise molecular diagnosis. This will allow an adapted follow-up of the patients and a pre-symptomatic diagnosis of their relatives.
abstract_id: PUBMED:15975711
BRCA1 regulation of transcription. BRCA1, a tumor suppressor gene on chromosome 17q21, was identified in 1994 based on its linkage to hereditary breast and ovarian cancer syndromes. The BRCA1 gene encodes a 220 kDa nuclear phosphoprotein. Studies aimed at elucidating the mechanisms of its tumor suppressor activity have revealed, in part, that BRCA1 participates in the DNA damage response and acts to maintain the integrity of the genome. This activity is generic and does not account for the propensity of BRCA1 mutation carriers to develop specific tumor types rather than a broad spectrum of cancers. In addition to genome maintenance, BRCA1 has been found to broadly regulate gene transcription, even though it is not itself a sequence-specific DNA-binding transcription factor. The ability of BRCA1 to function as a coregulator of transcription may underlie some of its tumor suppressor activity and may explain the tissue-specific nature of this activity. This review will focus on how BRCA1 selectively regulates transcription and how this regulatory function may relate to tumor suppression.
Answer: Based on the information provided in the abstracts, women with low-grade serous ovarian cancer (LGSOC) were found to have family histories that were less suggestive of Hereditary Breast and Ovarian Cancer (HBOC) compared to women with high-grade serous ovarian cancer (HGSOC). Specifically, women with LGSOC were significantly less likely to have a first- or second-degree relative with breast or ovarian cancer, and when personal and family histories were quantified using the Myriad BRCA mutation prevalence tables, women with LGSOC had lower scores indicative of a less suggestive family history for HBOC (PUBMED:21126756). This suggests that LGSOC may not be as strongly associated with the tumor spectrum seen in HBOC as HGSOC is. However, it is important to note that this conclusion is based on the analysis of family histories and the degree of relatedness of affected relatives, and further research may be needed to fully understand the relationship between LGSOC and HBOC. |
Instruction: Can arthroscopically assisted treatment of chronic patellar tendinopathy reduce pain and restore function?
Abstracts:
abstract_id: PUBMED:21461867
Can arthroscopically assisted treatment of chronic patellar tendinopathy reduce pain and restore function? Background: Patellar tendinopathy is a common source of pain in athletes, especially those involved in sports with a high incidence of jumping and cutting. Changes in training programs and exercises based on eccentric quadriceps contractions often relieve patients' symptoms. For athletes unresponsive to this treatment, some authors suggest open and arthroscopic procedures débriding either the tendon alone, or the tendon and bone.
Questions/purposes: We asked whether an arthroscopically assisted approach that débrides not only the tendon and bone but also the peritenon could relieve pain and allow athletes to return to their former activities.
Patients And Methods: We retrospectively reviewed 23 patients with a history of at least 6 months of painful patellar tendinopathy unresponsive to nonoperative treatment who were treated with an arthroscopic technique that débrided the tendon, inferior pole of the patella, and peritenon: 22 males and one female. Mean age was 29 years. Patients were evaluated using the anterior knee pain score of Kujala et al. The minimum followup was 12 months (mean, 58 months; range, 12-121 months).
Results: Twelve patients scored 100, one 99, one 98, five 97, two 94, one 90, and one 64. The mean Kujala et al. score was 96 (range, 64-100). All but four patients returned to their former sports activities. We observed no complications.
Conclusions: Arthroscopic treatment can relieve the pain of refractory chronic patellar tendinopathy. Our observations were comparable with those previously reported for open techniques and a high percentage of patients returned to their previous activity level.
Level Of Evidence: Level IV, observational study. See Guidelines for Authors for a complete description of levels of evidence.
abstract_id: PUBMED:22714975
Open versus arthroscopic surgical treatment of chronic proximal patellar tendinopathy. A systematic review. Purpose: A general agreement on the best surgical treatment option for chronic proximal patellar tendinopathy is still lacking. The purpose of this systematic review was to investigate whether arthroscopically assisted procedures have reported better results compared to open surgery and to assess the methodology of the studies.
Methods: Twenty-one studies were included in the review. Surgical outcomes were defined with reference to the functional classification described by Kelly et al. (Am J Sports Med 12(5):375-380, [11]): return to sport was regarded as the ability to train at the original pre-injury level with mild or moderate pain, and success as improvement after surgery with symptom reduction. Methodological analysis was performed by two reviewers adopting the Coleman Methodology Score (CMS) (range 0-100, best score 100).
Results: Only one randomized controlled trial (RCT) met inclusion criteria; all other included studies were case series. Median sample size 24, range 11-138, mean age at surgery 26.8 ± 3.2 years, mean follow-up 32.5 ± 18.4 (median 31, range 6-60) months. Return to sport rate: global 78.5 %, open group 76.6 % and arthroscopic group 84.2 %. Success rate: global 84.6 %, open group 87.2 % and arthroscopic group 92.4 %. Differences between groups were not statistically significant. CMSs were positively correlated with the year of publication (P < 0.05).
Conclusions: Minimally invasive arthroscopically assisted procedures have not reported better statistically significant results when compared to open surgery in the treatment of chronic proximal patellar tendinopathy. The methodology of studies in this field has improved over the past 15 years, but well-designed RCTs using validated patient-based outcome measures are still lacking.
Level Of Evidence: Systematic Review, Level IV.
abstract_id: PUBMED:37806246
Surgical treatment of chronic proximal patellar tendon tears grades 3 and 4 using augmentation with quadriceps tendon-bone graft. Background: Chronic proximal patellar tendinosis with partial tendon tears represents a multifactorial overuse injury. Several surgical techniques have been described with various outcomes and the return to sports may fail.
Hypothesis: Reconstruction of the proximal patellar tendon with augmentation using a quadriceps tendon-bone (QTB) graft improves knee function in patients presenting with proximal patellar tendinosis and partial tendon tears.
Methods: Forty-seven patients (32 males, 15 females) with chronic proximal patellar tendinosis and tendon tears grades 3 and 4 were treated between 1992 and 2018. Patients were evaluated retrospectively using the Popkin-Golman (PG) MRI grading system and the removed tendon parts. The Tegner Activity Scale (TAS) and the Numerical Rating Scale (NRS) for pain were used as outcome measures before surgery and at follow-up. Complete data were available in 100% of cases at the 6-month follow-up, and for fifteen patients at later follow-up.
Results: The average follow-up was 1.5 years (range, 0.5-16). The TAS improved from a mean preoperative score of 3.7 to a mean postoperative score of 9.1. The NRS pain score decreased from an average of 6.4 to 1.1. Two patients needed additional arthroscopic scar tissue removal.
Conclusion: Reconstruction of proximal patellar tendon tears grades 3 and 4 with augmentation using a QTB graft is a valuable surgical salvage procedure in chronic cases. It improves knee function and yields good to excellent results in most cases including high level athletes. The use of MRI with the PG classification of tendon tears is highly recommended.
Level Of Evidence: Therapeutic case series, Level IV.
abstract_id: PUBMED:20858940
Patient guided Piezo-electric Extracorporeal Shockwave Therapy as treatment for chronic severe patellar tendinopathy: A pilot study. Background And Purpose: Patellar tendinopathy is a common overuse injury for which no evidence-based treatment guidelines exist. Extracorporeal Shock Wave Therapy (ESWT) seems to be an effective treatment for patellar tendinopathy but the most beneficial treatment strategies still need to be ascertained. Aim of this pilot study was to investigate if patient guided Piezo-electric, focused ESWT, without local anesthesia is a safe and well tolerated treatment which improves pain and function in patients with patellar tendinopathy.
Methods: Nineteen male athletes with severe chronic patellar tendinopathy received 3 patient guided focused medium to high energy ESWT treatments at a weekly interval. Before and after 3 months VISA-P and VAS (pain) scores were recorded. Data on side effects and complications of treatment were also collected.
Results: No serious complications were reported and patients tolerated the treatment well. Mean VISA-P score improved from 36.1 to 50.1 (p < 0.05), and VAS decreased from 7.2 to 3.7 (p < 0.05).
Conclusion: Patient guided Piezo-electric ESWT without local anesthesia is a safe and well tolerated treatment which should be considered as a treatment for patients with patellar tendinopathy.
abstract_id: PUBMED:29166934
Arthroscopic patellar release for treatment of chronic symptomatic patellar tendinopathy: long-term outcome and influential factors in an athletic population. Background: Arthroscopic patellar release (APR) is utilized for minimally invasive surgical treatment of patellar tendinopathy. Evidence regarding long-term success following the procedure is limited, and the influence of age and preoperative performance level is incompletely understood. The aim of this study was to investigate whether APR translates into sustained pain relief over a long-term follow-up in athletes undergoing APR. Furthermore, we analyzed whether age influences clinical and functional outcome measures in APR.
Methods: Between 1998 and 2010, 30 competitive and recreational athletes were treated with APR due to chronic refractory patellar tendinopathy. All data were analyzed retrospectively. Demographic data, such as age and level of performance prior to injury, were extracted. Clinical as well as functional outcome measures (the Swedish Victorian Institute of Sport Assessment for Patella (VISA-P), the modified Blazina score, pain level following exercise, return to sports, and subjective knee function) were assessed pre- and postoperatively.
Results: In total, 30 athletes were included in this study. At follow-up (8.8 ± 2.82 years), clinical and functional outcome measures such as the mean Blazina score, VISA-P, VAS, and subjective knee function revealed significant improvement compared to before surgery (P < 0.001). The mean time required for return to sports was 4.03 ± 3.18 months. After stratification by age, patients younger than 30 years of age yielded superior outcomes in the mean Blazina score and pain level compared to patients ≥30 years (P = 0.0448). At 8 years of follow-up, patients showed clinical and functional outcome scores equivalent to those reported in our previous investigation four years after APR.
Conclusion: In summary, APR can be regarded as a successful, minimally invasive surgical technique with sustained results for the treatment of patellar tendinopathy in athletes. Younger age at surgery may be associated with improved clinical and functional outcomes following APR.
abstract_id: PUBMED:33616456
Clinical Outcomes, Structure, and Function Improve With Both Heavy and Moderate Loads in the Treatment of Patellar Tendinopathy: A Randomized Clinical Trial. Background: Loading interventions have become a predominant treatment strategy for tendinopathy, and positive clinical outcomes and tendon tissue responses may depend on the exercise dose and load magnitude.
Purpose/hypothesis: The purpose was to investigate if the load magnitude influenced the effect of a 12-week loading intervention for patellar tendinopathy in the short term (12 weeks) and long term (52 weeks). We hypothesized that a greater load magnitude of 90% of 1 repetition maximum (RM) would yield a more positive clinical outcome, tendon structure, and tendon function compared with a lower load magnitude of 55% of 1 RM when the total exercise volume was kept equal in both groups.
Study Design: Randomized clinical trial; Level of evidence, 1.
Methods: A total of 44 adult participants with chronic patellar tendinopathy were included and randomized to undergo moderate slow resistance (MSR group; 55% of 1 RM) or heavy slow resistance (HSR group; 90% of 1 RM). Function and symptoms (Victorian Institute of Sport Assessment-Patella questionnaire [VISA-P]), tendon pain during activity (numeric rating scale [NRS]), and ultrasound findings (tendon vascularization and swelling) were assessed before the intervention, at 6 and 12 weeks during the intervention, and at 52 weeks from baseline. Tendon function (functional tests) and tendon structure (ultrasound and magnetic resonance imaging) were investigated before and after the intervention period.
Results: The HSR and MSR interventions both yielded significant clinical improvements in the VISA-P score (mean ± SEM) (HSR: 0 weeks, 58.8 ± 4.3; 12 weeks, 70.5 ± 4.4; 52 weeks, 79.7 ± 4.6) (MSR: 0 weeks, 59.9 ± 2.5; 12 weeks, 72.5 ± 2.9; 52 weeks, 82.6 ± 2.5), NRS score for running, NRS score for squats, NRS score for preferred sport, single-leg decline squat, and patient satisfaction after 12 weeks, and these were maintained after 52 weeks. HSR loading was not superior to MSR loading for any of the measured clinical outcomes. Similarly, there were no differences in functional (strength and jumping ability) or structural (tendon thickness, power Doppler area, and cross-sectional area) improvements between the groups undergoing HSR and MSR loading.
Conclusion: There was no superior effect of exercising with a high load magnitude (HSR) compared with a moderate load magnitude (MSR) for the clinical outcome, tendon structure, or tendon function in the treatment of patellar tendinopathy in the short term. Both HSR and MSR showed equally good, continued improvements in outcomes in the long term but did not reach normal values for healthy tendons.
Registration: NCT03096067 (ClinicalTrials.gov identifier).
abstract_id: PUBMED:27904789
CURRENT CONCEPTS IN THE TREATMENT OF PATELLAR TENDINOPATHY. Patellar tendon pain is a significant problem in athletes who participate in jumping and running sports and can interfere with athletic participation. This clinical commentary reviews patellar tendon anatomy and histopathology, the language used to describe patellar tendon pathology, risk factors for patellar tendinopathy and common interventions used to address patellar tendon pain. Evidence is presented to guide clinicians in their decision-making regarding the treatment of athletes with patellar tendon pain.
Level Of Evidence: 5.
abstract_id: PUBMED:24519184
Are multiple platelet-rich plasma injections useful for treatment of chronic patellar tendinopathy in athletes? a prospective study. Background: Chronic patellar tendinopathy (PT) is one of the most common overuse knee disorders. Platelet-rich plasma (PRP) appears to be a reliable nonoperative therapy for chronic PT.
Purpose: To evaluate clinical and radiological outcomes of 3 consecutive ultrasound (US)-guided PRP injections for the treatment of chronic PT in athletes.
Study Design: Case series; Level of evidence, 4.
Methods: A total of 28 athletes (17 professional, 11 semiprofessional) with chronic PT refractory to nonoperative management were prospectively included for US-guided pure PRP injections into the site of the tendinopathy. The same treating physician at a single institution performed 3 consecutive injections 1 week apart, with the same PRP preparation used. All patients underwent clinical evaluation, including the Victorian Institute of Sport Assessment-Patella (VISA-P) score, visual analog scale (VAS) for pain, and Lysholm knee scale, before the procedure and after return to sports practice. Tendon healing was assessed with MRI at 1 and 3 months after the procedure.
Results: The VISA-P, VAS, and Lysholm scores all significantly improved at the 2-year follow-up. The average preprocedure VISA-P, VAS, and Lysholm scores improved from 39 to 94 (P < .001), 7 to 0.8 (P < .0001), and 60 to 96 (P < .001), respectively, at the 2-year follow-up. Twenty-one of the 28 athletes returned to their presymptom sporting level at 3 months (range, 2-6 months) after the procedure. Follow-up MRI assessment showed improved structural integrity of the tendon at 3 months after the procedure and complete return to normal structural integrity of the tendon in 16 patients (57%). Seven patients did not recover their presymptom sporting level (among them, 6 were considered treatment failures): 3 patients returned to sport at a lesser level, 1 patient changed his sport activity (for other reasons), and 3 needed surgical intervention.
Conclusion: In this study, application of 3 consecutive US-guided PRP injections significantly improved symptoms and function in athletes with chronic PT and allowed fast recovery to their presymptom sporting level. The PRP treatment permitted a return to a normal architecture of the tendon as assessed by MRI.
abstract_id: PUBMED:26548965
Ultrasound-Guided Scraping for Chronic Patellar Tendinopathy: A Case Presentation. Chronic patellar tendinopathy is a common complaint among athletes who repetitively stress the extensor mechanism of the knee. Multiple treatment options have been described, but evidence is lacking, specifically when eccentric loading has failed. Debate continues regarding the patho-etiology of chronic patellar tendon pain. There has been recent interest regarding the neurogenic influences involved in chronic tendinopathy, and interventions targeting neovessels and accompanying neonerves have shown promise. This is the first description of an ultrasound-guided technique in which the neovessels and accompanying neonerves in patellar tendinopathy were targeted using a needle scraping technique of the posterior surface of the patellar tendon.
abstract_id: PUBMED:22930695
Topical glyceryl trinitrate treatment of chronic patellar tendinopathy: a randomised, double-blind, placebo-controlled clinical trial. Objectives: To assess if continuous topical glyceryl trinitrate (GTN) treatment improves outcome in patients with chronic patellar tendinopathy when compared with eccentric training alone.
Methods: Randomised double-blind, placebo-controlled clinical trial comparing a 12-week programme of using a GTN or placebo patch in combination with eccentric squats on a decline board. Measurements were performed at baseline, 6, 12 and 24 weeks. Primary outcome measure was the Victorian Institute of Sports Assessment-Patella (VISA-P) questionnaire. Secondary outcome measures were patient satisfaction and pain scores during sports. Generalised estimated equation was used to analyse the treatment, time and treatment×time effect. Analyses were performed following the intention-to-treat principle.
Results: VISA-P scores for both groups improved over the study period to 75.0±16.2 and 80.7±22.1 at 24 weeks. Results showed a significant effect for time (p<0.01) but no effect for treatment×time (p=0.80). Mean Visual Analogue Scale (VAS) pain scores during sports for both groups increased over the study period to 6.6±3 and 7.8±3.1. Results showed a significant effect for time (p<0.01) but no effect for treatment×time (p=0.38). Patient satisfaction showed no difference between the GTN and placebo groups (p=0.25) after 24 weeks, but did show a significant difference over time (p=0.01). Three patients in the GTN group reported some rash.
Conclusion: It seems that continuous topical GTN treatment in addition to an eccentric exercise programme does not improve clinical outcome compared to placebo patches and an eccentric exercise programme in patients with chronic patellar tendinopathy.
Answer: Yes, arthroscopically assisted treatment of chronic patellar tendinopathy can reduce pain and restore function. A study reviewed 23 patients with a history of at least 6 months of painful patellar tendinopathy unresponsive to nonoperative treatment. These patients were treated with an arthroscopic technique that débrided the tendon, inferior pole of the patella, and peritenon. The results showed that most patients experienced pain relief and returned to their former sports activities, with a high mean score on the anterior knee pain score of Kujala et al. There were no complications observed, suggesting that arthroscopic treatment can be effective for refractory chronic patellar tendinopathy (PUBMED:21461867).
Furthermore, a systematic review comparing open versus arthroscopic surgical treatment of chronic proximal patellar tendinopathy found that minimally invasive arthroscopically assisted procedures did not report statistically significant better results compared to open surgery. However, the review also indicated that the methodology of studies in this field has improved over time, and well-designed randomized controlled trials are still needed (PUBMED:22714975).
Additionally, other studies have explored different surgical techniques and interventions for chronic patellar tendinopathy, such as the use of quadriceps tendon-bone graft for augmentation in chronic cases, which showed improvement in knee function and good to excellent results in most cases, including high-level athletes (PUBMED:37806246). Another study on arthroscopic patellar release (APR) found that it can be a successful, minimally invasive, and sustained surgical technique for the treatment of patella tendinopathy in athletes, with younger age at surgery possibly associated with improved clinical and functional outcomes (PUBMED:29166934).
Overall, the evidence suggests that arthroscopically assisted treatments, among other surgical interventions, can be beneficial in reducing pain and restoring function in patients with chronic patellar tendinopathy, particularly when conservative treatments have failed. |
Instruction: Does Preconception Health Differ by Nativity?
Abstracts:
abstract_id: PUBMED:26728899
Does Preconception Health Differ by Nativity?: Findings from the Los Angeles Mommy and Baby (LAMB) Study. Objectives: To compare certain preconception health (PCH) behaviors and conditions among US-born (USB) and foreign-born (FB) mothers in Los Angeles County (LAC), regardless of race/ethnicity, and to determine if any identified differences vary among Asian/Pacific Islanders (API's) and Hispanics.
Methods: Data are from the 2012 Los Angeles Mommy and Baby study (n = 6252). PCH behaviors included tobacco use, multivitamin use, unintended pregnancy, and contraception use. PCH conditions comprised being overweight/obese, diabetes, asthma, hypertension, gum disease, and anemia. The relationship between nativity and each PCH behavior/condition was assessed using multivariable logistic regression models.
Results: USB women were more likely than FB women to smoke (AOR 2.12, 95 % CI 1.49-3.00), be overweight/obese (AOR 1.57, 95 % CI 1.30-1.90), and have asthma (AOR 2.04, 95 % CI 1.35-3.09) prior to pregnancy. They were less likely than FB women to use contraception before pregnancy (AOR 0.59, 95 % CI 0.49-0.72). USB Hispanics and API's were more likely than their FB counterparts to be overweight/obese (AOR 1.57, 95 % CI 1.23-2.01 and AOR 2.37, 95 % CI 1.58-3.56, respectively) and less likely to use contraception (AOR 0.58, 95 % CI 0.45-0.74 and AOR 0.46, 95 % CI 0.30-0.71, respectively). USB Hispanic mothers were more likely than their FB counterparts to smoke (AOR 2.47, 95 % CI 1.46-4.17), not take multivitamins (AOR 1.30, 95 % CI 1.02-1.66), and have asthma (AOR 2.35, 95 % CI 1.32-4.21) before pregnancy.
Conclusions: US nativity is linked to negative PCH among LAC women, with many of these associations persisting among Hispanics and API's. As PCH profoundly impacts maternal and child health across the lifecourse, culturally-appropriate interventions that maintain positive behaviors among FB reproductive-aged women and encourage positive behaviors among USB women should be pursued.
abstract_id: PUBMED:37001422
Assessing the Unmet Preconception Care Needs of Men in the United States by Race/Ethnicity and Nativity. Objective: To estimate the percentage of men in the U.S. in need of preconception care and to assess gaps in utilization of services by race/ethnicity and nativity, irrespective of intention for children, via cross-sectional analysis of the 2017-2019 National Survey of Family Growth (NSFG).
Methods: The need for preconception care was defined as non-sterile men who had sexual experience and were with female partner(s) who were not sterile. Thirteen preconception care services were assessed across six domains: family planning, blood pressure, HIV, STD, weight management, and smoking cessation. Multivariable weighted analyses were performed to obtain odds ratios to assess differences in preconception care utilization among participants.
Results: Approximately 64% of men were estimated to need preconception care. Substantial disparities in need and service use were found across sociodemographic characteristics. Foreign-born men had significantly higher odds of not receiving three of the thirteen preconception care services, including condom use screening (aOR = 1.67; CI = 1.23-2.26), HIV advice (aOR = 1.76; CI = 1.35-2.29), and STD testing (aOR = 1.66; CI = 1.13-2.44), than U.S.-born. Hispanic men had higher odds of not receiving blood pressure (aOR = 1.39; CI = 1.09-1.79) and smoking screenings (aOR = 1.33; CI = 1.02-1.73) than White men. Black men had the highest use in six of the thirteen preconception care services.
Conclusion: Gaps in preconception care utilization suggest a need to further explore potential drivers of disparities, specifically for Hispanic and foreign-born men. Additional research into the timing and quality of care received by men are needed to assess the scope, severity, and prevalence of unmet needs within medically underserved communities.
abstract_id: PUBMED:38336500
Preconception health risks by presence and type of disability among U.S. women. Background: Poor preconception health may contribute to adverse perinatal outcomes among women with disabilities. While prior research has found higher prevalence of preconception health risks among women with versus without disabilities, existing U.S. studies have not assessed how preconception health risks may differ by disability type. Understanding such differences is relevant for informing and targeting efforts to improve health opportunities and optimize pregnancy outcomes.
Objective: This cross-sectional study examined preconception health in relation to disability type among reproductive-age women in the United States.
Methods: We analyzed 2016-2019 data from the Behavioral Risk Factor Surveillance System to estimate the prevalence of 19 preconception health risks among non-pregnant women 18-44 years of age. We used modified Poisson regression to compare women with different types of disability to non-disabled women. Disability categories included: 1) hearing difficulty only; 2) vision difficulty only; 3) physical/mobility difficulty only; 4) cognitive difficulty only; 5) multiple or complex disabilities (including limitations in self-care or independent living activities). Multivariable analyses adjusted for other sociodemographic characteristics such as age and marital status.
Results: Women with each disability type experienced a higher prevalence of indicators associated with poor preconception health compared to women with no disabilities. The number and extent of health risks varied substantially by disability type. Women with cognitive disabilities and women with multiple or complex disabilities experienced the greatest risk.
Conclusions: Addressing the specific preconception health risks experienced by women with different types of disabilities may help reduce adverse perinatal outcomes for disabled women and their infants.
abstract_id: PUBMED:28983715
Father's Role in Preconception Health. As part of the federal multi-agency conference on Paternal Involvement in Pregnancy Outcomes, the existing Fatherhood paradigm was expanded to include a new focus on Men's Preconception Health. This concept grew out of the women's preconception health movement and the maternal and child health (MCH) life course perspective, as well as pioneering research from the child development, public health data and family planning fields. It encourages a new examination of how men's preconception health impacts both reproductive outcomes and men's own subsequent health and development. This essay introduces the concept of men's preconception health and health care; examines its historical development; notes the challenges of its inclusion into fatherhood and reproductive health programs; and situates it within a longer men's reproductive health life course. We then briefly explore six ways men's preconception health and health care can have positive direct and indirect impacts-planned and wanted pregnancies (family planning); enhanced paternal biologic and genetic contributions; improved reproductive health biology for women; improved reproductive health practices and outcomes for women; improved capacity for parenthood and fatherhood (psychological development); and enhanced male health through access to primary health care. Research on men's preconception health and health care is very limited and siloed. We propose a research agenda to advance this topic in three broad domains: increasing the basic epidemiology and risk factor knowledge base; implementing and evaluating men's preconception health/fatherhood interventions (addressing clinical health care, psychological resiliency/maturation, and social determinants of health); and fostering more fatherhood health policy and advocacy research.
abstract_id: PUBMED:37864771
Looking Back, Visioning Forward: Preconception Health in the US 2005 to 2023. Preconception health has always been about preventative health care, ensuring the overall wellbeing of people of reproductive age before they have children. However, just as public health and health care have shifted to prioritize equity and include ideas about how social determinants of health influence health outcomes, the field of preconception health has experienced a similar transition. The purpose of this paper is to provide an overview of the evolution of preconception health in the United States after 2005, highlighting the key tensions that have shaped the field. We provide an overview of the early history of the movement and describe how four phases of ideological tensions over time have led to changes across seven categories of preconception health: definitions and frameworks, surveillance and measurement, messaging and education, strategic convenings and collaborations, clinical practice, and reproductive life planning. We also describe the historic and emerging challenges that affect preconception care, including limited sustained investment and ongoing threats to reproductive health. The vision of preconception health care we outline has been created by a diversity of voices calling for wellness, equity, and reproductive justice to be the foundation of all preconception health work. This requires a focus on preconception health education that prioritizes bodily autonomy, not just pregnancy intentions; national surveillance and data measures that center equity; attention to mental health and overall well-being; and the inclusion of transgender and non-binary people of reproductive age.
abstract_id: PUBMED:27939264
Preconception health behaviours: A scoping review. Preconception health refers to the health of males and females at any point in time prior to a potential pregnancy. A goal of preconception health research is to use preventive behaviour and healthcare to optimize the health of future offspring that result from both planned and unplanned pregnancies. This paper briefly reviews evidence of the importance of various preconception health behaviours, and examines the extent to which specific preconception health behaviours have been included in recent studies of such knowledge, behaviours, and intentions. To describe this recent research in highly developed countries, a scoping review of the literature was completed of studies published within the past seven years. A total of 94 studies on preconception health were identified and reviewed: (a) 15 examined knowledge and attitudes, (b) 68 studied behaviours, (c) 18 examined interventions designed to improve knowledge or behaviour, and (d) no studies examined intentions to engage in preconception health behaviours. Over 40% of studies examining preconception health behaviour focussed exclusively on folic acid. Overall, folic acid, alcohol, and cigarettes have consistently been topics of focus, while exposure to harmful environmental substances, stress, and sleep have been largely neglected. Despite strong evidence for the importance of men's health during the preconception period, only 11% of all studies included male participants. Based on existing gaps in the research, recommendations are provided, such as including men in future research, assessing a wider variety of behaviours, consideration of behavioural intentions, and consideration of the relationships between preconception health knowledge, intentions, and behaviour.
abstract_id: PUBMED:29195631
Preconception health care interventions: A scoping review. Pregnancy is often framed as a "window of opportunity" for intervening on a variety of health practices such as alcohol and tobacco use. However, there is evidence that interventions focusing solely on the time of pregnancy can be too narrow and potentially stigmatizing. Indeed, health risks observed in the preconception period often continue during pregnancy. Using a scoping review methodology, this study consolidates knowledge and information related to current preconception and interconception health care interventions published in the academic literature. We identified a total of 29 intervention evaluations, and summarized these narratively. Findings suggest that there has been some progress in intervening on preconception health, with the majority of interventions offering assessment or screening followed by brief intervention or counselling. Overall, these interventions demonstrated improvements in at least some of the outcomes measured. However, further preconception care research and intervention design is needed. In particular, the integration of gender transformative principles into preconception care is needed, along with further intervention design for partners/ men, and more investigation on how best to deliver preconception care.
abstract_id: PUBMED:27646555
Advancing preconception health in the United States: strategies for change. In January 2015, the US Preconception Health and Health Care Initiative (PCHHC) established a new national vision that all women and men of reproductive age will achieve optimal health and wellness, fostering a healthy life course for them and any children they may have. Achieving this vision presents both challenges and opportunities. This manuscript describes the reasons why the US needs to prioritize preconception health as well as its efforts historically to advance change. The authors share lessons from past work and current strategies in the US to reach this ambitious goal.
abstract_id: PUBMED:37091006
Men's knowledge of preconception health: A systematic review. Preconception health is defined as the physical and psychological well-being of women and men throughout their reproductive life. It is an approach that promotes healthy fertility and focuses on actions people can take to minimize risks, promote healthy lifestyles, and increase preparedness for pregnancy. The purpose of this systematic review was to assess men's knowledge of preconception health. Electronic databases, including Web of Science, PubMed, Scopus, ScienceDirect, ProQuest, Cochrane, SAGE, Springer, and Google Scholar, were searched for studies published from 2000 to March 2021 to identify studies on men's knowledge of preconception health. Quality assessment was done using the Critical Appraisal Skills Programme tool for qualitative studies and the Newcastle-Ottawa Scale for cross-sectional studies. Of the 1195 references identified in the initial search, 11 studies met the inclusion criteria. Because of the diversity in study designs and data collection tools, meta-analysis was not possible. All the studies in this systematic review found that men's preconception health knowledge is poor. Due to the limited number of studies of men's knowledge about the importance of optimizing their health before pregnancy, further study of the issue is still required.
abstract_id: PUBMED:27054935
Health Care System Measures to Advance Preconception Wellness: Consensus Recommendations of the Clinical Workgroup of the National Preconception Health and Health Care Initiative. Preconception wellness reflects a woman's overall health before conception as a strategy to affect health outcomes for the woman, the fetus, and the infant. Preconception wellness is challenging to measure because it attempts to capture health status before a pregnancy, which may be affected by many different service points within a health care system. The Clinical Workgroup of the National Preconception Health and Health Care Initiative proposes nine core measures that can be assessed at initiation of prenatal care to index a woman's preconception wellness. A two-stage web-based modified Delphi survey and a face-to-face meeting of key opinion leaders in women's reproductive health resulted in identifying seven criteria used to determine the core measures. The Workgroup reached unanimous agreement on an aggregate of nine preconception wellness measures to serve as a surrogate but feasible assessment of quality preconception care within the larger health community. These include indicators for: 1) pregnancy intention, 2) access to care, 3) preconception multivitamin with folic acid use, 4) tobacco avoidance, 5) absence of uncontrolled depression, 6) healthy weight, 7) absence of sexually transmitted infections, 8) optimal glycemic control in women with pregestational diabetes, and 9) teratogenic medication avoidance. The focus of the proposed measures is to quantify the effect of health care systems on advancing preconception wellness. The Workgroup recommends that health care systems adopt these nine preconception wellness measures as a metric to monitor performance of preconception care practice. Over time, monitoring these baseline measures will establish benchmarks and allow for comparison within and among regions, health care systems, and communities to drive improvements.
Answer: Yes, preconception health does differ by nativity. Studies have found that US-born (USB) women are more likely than foreign-born (FB) women to engage in certain negative preconception health (PCH) behaviors and to have specific PCH conditions. For instance, USB women in Los Angeles County were more likely to smoke, be overweight/obese, and have asthma prior to pregnancy compared to FB women. They were also less likely to use contraception before pregnancy. These differences persisted among Hispanic and Asian/Pacific Islander populations, with USB individuals within these groups more likely to be overweight/obese and less likely to use contraception than their FB counterparts. USB Hispanic mothers were also more likely to smoke, not take multivitamins, and have asthma before pregnancy (PUBMED:26728899).
Similarly, disparities in preconception care utilization were found among men in the United States, with foreign-born men having significantly higher odds of not receiving certain preconception care services such as condom use screening, HIV advice, and STD testing compared to U.S.-born men. Hispanic men had higher odds of not receiving blood pressure and smoking screenings than White men (PUBMED:37001422).
These findings suggest that nativity is an important factor in preconception health, with US nativity linked to negative PCH behaviors among women in Los Angeles County and gaps in preconception care utilization among men in the United States, particularly for Hispanic and foreign-born individuals. Therefore, culturally appropriate interventions that maintain positive behaviors among FB reproductive-aged individuals and encourage positive behaviors among USB individuals should be pursued to improve PCH outcomes (PUBMED:26728899; PUBMED:37001422). |
Instruction: Do social information-processing models explain aggressive behaviour by children with mild intellectual disabilities in residential care?
Abstracts:
abstract_id: PUBMED:16999780
Do social information-processing models explain aggressive behaviour by children with mild intellectual disabilities in residential care? Background: This study aimed to examine whether the social information-processing model (SIP model) applies to aggressive behaviour by children with mild intellectual disabilities (MID). The response-decision element of SIP was expected to be unnecessary to explain aggressive behaviour in these children, and SIP was expected to mediate the relation between social schemata and aggressive behaviour.
Method: SIP and aggressive behaviour of 130 10- to 14-year-old children with MID in residential care were assessed. The fit of various SIP models was tested with structural equation modelling.
Results: The response-decision process was found not to be necessary to explain aggressive behaviour. Social schemata were indirectly related to aggressive behaviour with aggressive response generation as mediating variable.
Conclusions: Implications for SIP theory and intervention are discussed.
abstract_id: PUBMED:36893581
Social information processing, normative beliefs about aggression and parenting in children with mild intellectual disabilities and aggressive behavior. Background: High levels of aggressive behavior in children with mild intellectual disabilities to borderline intellectual functioning (MID-BIF) are associated with deviant social information processing (SIP) steps. The current study investigated deviant SIP as a mediating mechanism linking both children's normative beliefs about aggression and parenting to aggressive behavior in children with MID-BIF. Additionally, the mediating role of normative beliefs about aggression in linking parenting and deviant SIP was investigated.
Methods: 140 children with MID-BIF in community care in the Netherlands, their parent(s) or caretaker(s), and their teacher participated in this cross-sectional study. Structural equation modeling was performed to test mediations. Models were run separately for parent and teacher reports of aggression, and included three deviant SIP steps (interpretation, response generation, response selection).
Results: A total indirect effect through deviant SIP steps was found from normative beliefs about aggression to teacher-reported aggression, but not to parent-reported aggression. An indirect effect was found from positive parenting through normative beliefs about aggression to deviant SIP.
Conclusion: The results of this study suggest that, next to deviant SIP and parenting, normative beliefs about aggression may be a relevant intervention target for children with MID-BIF and aggressive behavior.
abstract_id: PUBMED:23925967
The Social Information Processing model as a framework for explaining frequent aggression in adults with mild to moderate intellectual disabilities: a systematic review of the evidence. Background: There is an established evidence base concerning the use of anger management interventions with violent offenders who have intellectual disabilities. However, there has been limited research investigating the role of social cognitive factors underpinning problems of aggression. Psychosocial sources of aggression in the non-disabled population are generally discussed using Social Information Processing (SIP) models.
Method: A systematic review of the available evidence was carried out to establish whether SIP offers a useful explanatory model for understanding the contribution of social cognitive factors to problems of aggression presented by people with intellectual disabilities.
Results And Conclusions: Whilst research relating to the SIP model remains sparse for this population, there was evidence for different patterns of processing between aggressive and non-aggressive individuals. Group differences included interpretation of emotional cues, interpersonal attributions and beliefs about the outcomes of aggressive behaviour. The future direction of SIP research with people who have intellectual disabilities is discussed, along with the possibility of using this framework to help build on current initiatives to develop individually tailored interventions to work at a cognitive level with those who are aggressive and offend.
abstract_id: PUBMED:30010484
Social information processing skills link executive functions to aggression in adolescents with mild to borderline intellectual disability. Executive Functions (EFs) have been associated with aggression in children and adolescents. EFs as higher-order cognitive abilities are assumed to affect cognitive functions such as Social Information Processing (SIP). We explored SIP skills as a mediating mechanism linking EFs to aggression in adolescents with mild to borderline intellectual disability (MBID, with IQ from 50 to 84), a high-risk group for aggressive behaviors and EF impairments. A total of 153 adolescents (mean age = 15.24 years, SD = 1.35; 54% male) with MBID participated. Focused attention, behavioral inhibition, and working memory were tested with multiple neurocognitive tasks to define latent EF constructs. Participants responded to a video-based SIP task. A latent construct for aggression was defined by caretaker, teacher, and adolescent self-reports of aggression (Child Behavior Check List, Teacher Report Form, and Youth Self Report). Structural equation modeling was performed to test mediation. Results were consistent with mediation of the relation between focused attention and aggression by SIP, namely via hostile interpretations and self-efficacy for aggression. Behavioral inhibition was linked to aggression, but this relation was not mediated by SIP. The relation between working memory and aggression was mediated by SIP, namely via hostile interpretations, aggressive response generation and via self-efficacy for aggressive responses. Bearing the cross-sectional design in mind, support was found for SIP skills as a mechanism linking EFs, in particular focused attention and working memory, to aggression, providing a viable explanation for the high vulnerability of adolescents with MBID for aggression.
abstract_id: PUBMED:19178616
Contextual variables affecting aggressive behaviour in individuals with mild to borderline intellectual disabilities who live in a residential facility. Background: Aggression is a common type of problem behaviour in clients with mild to borderline intellectual disability who live in a residential facility. We explored contextual events that elicit aggressive behaviour and variables that were associated with such events.
Method: Respondents were 87 direct-care staff members of 87 clients with aggressive behaviour who lived in a residential facility. Staff members completed the Contextual Assessment Inventory (CAI) and a questionnaire on demographic information and types, frequency and severity of aggressive behaviour. Internal consistency of the total CAI was excellent (alpha = 0.95), and Cronbach's alphas for the CAI sub-scales ranged from 0.75 to 0.93. Inter-rater agreement for the CAI could be considered good (mean intra-class correlation coefficient = 0.63).
Results: Both social and task-related events were reported to evoke aggressive behaviour of clients most often. Negative interactions, task characteristics and daily routines relatively often evoked aggressive behaviour while an uncomfortable environment, medication, illness and physiological states (i.e. physical and biological events) evoked aggressive behaviour least often. Mean CAI sub-scale scores were significantly related to gender, IQ and frequency of aggressive behaviour.
Conclusion: The present study extends our knowledge regarding events that are associated with an increased probability of aggressive behaviour. Knowledge of these contextual variables may be helpful in designing programmes (e.g. applied behaviour analysis, social skills training and cognitive behavioural therapies) for the management and prevention of aggressive behaviour in clients with mild to borderline intellectual disability who live in a residential facility.
abstract_id: PUBMED:15882392
Do children do what they say? Responses to hypothetical and real-life social problems in children with mild intellectual disabilities and behaviour problems. Background: Most research on children's social problem-solving skills is based on responses to hypothetical vignettes. Just how these responses relate to actual behaviour in real-life social situations is, however, unclear, particularly for children with mild intellectual disabilities (MID).
Method: In the present study, the spontaneous and selected responses of 56 children with MID to hypothetical situations from the Social Problem-Solving Test for children with MID (SPT-MID) were compared to their actual behaviour in comparable staged standardized real-life conflict situations. Correlations to externalizing behaviour problems were assessed using the Teacher's Report Form (TRF).
Results: The results show children with MID and accompanying externalizing behaviour problems to behave more aggressively in the staged real-life conflicts and provide more spontaneous aggressive responses to the hypothetical vignettes than children with MID and no accompanying externalizing behaviour problems; they did not, however, select more aggressive responses from the hypothetical options provided. A moderate correlation was found between the aggressiveness of the spontaneous responses in the hypothetical situations and actual behaviour in the staged real-life situations. In addition, both the spontaneous aggressive responses under hypothetical circumstances and the actual aggressive behaviour under staged real-life circumstances were related to teacher-rated aggressive behaviour in the classroom.
Conclusions: It is concluded that the hypothetical vignettes from the SPT-MID do provide information on both the actual behaviour and knowledge of social problem-solving skills of children with MID.
abstract_id: PUBMED:18691355
Impulse control and aggressive response generation as predictors of aggressive behaviour in children with mild intellectual disabilities and borderline intelligence. Background: There is growing interest in the mechanisms involved in behaviour problems in children with mild intellectual disabilities and borderline intelligence (MID/BI). Social problem solving difficulties have been found to be an explanatory mechanism for aggressive behaviour in these children. However, a discrepancy has recently been found between automatic and reflective responding in social situations. We hypothesise that low impulse control and aggressive social problem solving strategies together may explain mechanisms involved in aggressive behaviour by children with MID/BI.
Method: In a clinical sample of 130 children with MID/BI receiving intramural treatment, main, moderating and mediating effects of impulse control and aggressive response generation on aggressive behaviour were examined by conducting hierarchical linear multiple regression analyses.
Results: Independent main effects of both impulse control and aggressive response generation on aggressive behaviour were found. Results indicated that low impulse control and aggressive response generation each explain unique variance in aggressive behaviour.
Conclusions: As this study is the first that has shown both impulse control and aggressive response generation to be important predictors for aggressive behaviour in children with MID/BI, future research should further examine the nature of relations between low impulse control and social problem solving.
abstract_id: PUBMED:18033147
Dimensional approach to social behaviour deficits in children. Preliminary validation study of the French version of the Children's Social Behaviour Questionnaire (CSBQ) Unlabelled: Social deficit is the core symptom of pervasive developmental disorder. In other child psychiatric disorders, social problems are also described, but mainly as a result of the disease symptomatology. However, some recent studies suggest that in several disorders, such as attention deficit hyperactivity disorder, patients have an endogenous social disturbance. The aim of our research was to study abnormal child social behaviour in several disorders, using a dimensional approach. It is a preliminary validation study of the French version of the Children's Social Behaviour Questionnaire, a dimensional instrument constructed by Luteijn, Minderaa et al.
Methodology: Five clinical groups, defined according to DSM-IV criteria, formed a population of 103 children aged 6 to 16 years: autistic disorder, attention deficit hyperactivity disorder (ADHD), emotional disorder (anxious, depressed), mental retardation and normal children. Parents completed the Child Behaviour Checklist (CBCL) and the Children's Social Behaviour Questionnaire (CSBQ). The research worker and the child's physician completed a data form, which included information about medical history, development and socio-demographic criteria. The CBCL explored children's behaviours and general psychopathology, and included social dimensions (withdrawn, social problems, aggressive/delinquent behaviours, thought problems). The CSBQ, a dimensional questionnaire, explored children's social behaviours and included five dimensions: "acting-out", "social contact", "social insight", "social anxiety" and "social stereotypes". The English version of the CSBQ, validated in a Dutch population in the Netherlands, was translated into French and the translation was validated (double back-translation). As the CBCL and CSBQ are both dimensional instruments, their dimensions were compared. All instrument results were analysed separately; correlations and comparisons were made between groups.
Results: Correlations between CSBQ and CBCL dimensions are consistent. Positive correlations exist for: the "acting-out" dimension with "external behaviours", "aggressive behaviour" and "delinquent behaviour"; "social contact" with "internal behaviours" and "withdrawn"; "social insight" with "social problems" and "attention problems"; "social anxiety" with "anxious/depressed", "thought problems" and "internal behaviours"; and "social stereotypes" with "thought problems". Mean CSBQ results are as follows: (1) the autistic group has the highest score for the "social contact" dimension, the ADHD group has the highest score for the "acting-out" dimension, and the mental retardation group has the highest score for the "social insight" dimension; (2) comparisons between groups show a significant difference between the autistic and ADHD groups for "social contact" and "social anxiety" but not for "social insight" and "acting-out"; between the autistic and mental retardation groups, a significant difference for "social contact" but not for the other dimensions; between the ADHD and mental retardation groups, a significant difference only for "acting-out"; no significant difference between the ADHD and emotional groups; and very low scores in the control group. CBCL results are: abnormal scores in all groups except the normal control group for "social problems" and "attention problems"; abnormal scores in the autistic and emotional groups for "anxious/depressed", "withdrawn" and "internal behaviours"; and abnormal scores in the ADHD group for "aggressive behaviour", "delinquent behaviour" and "external behaviours", with a borderline "internal behaviours" score.
Discussion: Social behaviour profiles are different and characteristic for each disorder. However, social symptoms are not specific to one disorder, and common social signs do exist across different disorders. Our results are concordant with the Luteijn study and literature data. The results support the hypothesis of a dimensional pathogenesis in social behaviour disturbance. We discuss the benefit of a dimensional approach to complement the categorical one. The Children's Social Behaviour Questionnaire seems to be an interesting instrument to explore social behaviour disturbances in several child disorders.
abstract_id: PUBMED:38397009
Alterations in KIDINS220/ARMS Expression Impact Sensory Processing and Social Behavior in Adult Mice. Kinase D-interacting substrate of 220 kDa (Kidins220) is a transmembrane protein that participates in neural cell survival, maturation, and plasticity. Mutations in the human KIDINS220 gene are associated with a neurodevelopmental disorder ('SINO' syndrome) characterized by spastic paraplegia, intellectual disability, and in some cases, autism spectrum disorder. To better understand the pathophysiology of KIDINS220-linked pathologies, in this study, we assessed the sensory processing and social behavior of transgenic mouse lines with reduced Kidins220 expression: the CaMKII-driven conditional knockout (cKO) line, lacking Kidins220 in adult forebrain excitatory neurons, and the Kidins220floxed line, expressing constitutively lower protein levels. We show that alterations in Kidins220 expression levels and its splicing pattern cause impaired response to both auditory and olfactory stimuli. Both transgenic lines show impaired startle response to high-intensity sounds, with preserved prepulse inhibition, and strongly reduced social odor recognition. In the Kidins220floxed line, olfactory alterations are associated with deficits in social memory and increased aggressive behavior. Our results broaden our knowledge of the SINO syndrome; understanding sensory information processing and its deviations under neuropathological conditions is crucial for devising future therapeutic strategies to enhance the quality of life of affected individuals.
abstract_id: PUBMED:25716574
Mental health needs and availability of mental health care for children and adolescents with intellectual disability in Berlin. Background: The increased risk of mental health problems in children and adolescents with intellectual disability (ID) has been reported in several studies. However, almost no research has been conducted on parents' experiences with the general mental health system. We have investigated the prevalence of emotional and behavioural problems in children with ID as well as the availability and quality of mental health care from the parents' point of view.
Methods: Teachers of specialised schools for ID in Berlin were asked to complete the Teacher's Report Form (TRF) of the Child Behavior Checklist. Information was collected for 1226 children and adolescents aged 6-18 years with mild to profound ID (response 70.5%). The availability and quality of mental health care was assessed by a questionnaire given to parents who had already been seeking help for their children. A total of 330 parents completed the questionnaires (response 62.0%). In addition to univariate analysis, we conducted multiple logistic regressions regarding the psychopathology reported by teachers (TRF-syndrome scales) and difficulties concerning mental health care reported by parents for a paired sample of 308 children.
Results: Overall, 52.4% of the children and adolescents with ID had a total problem score on the TRF in the deviant range (47.1% when eliminating four items reflecting cognitive deficits). This is a threefold higher prevalence than in the normative sample of children from the general population. The most striking problems were thought problems (schizoid and obsessive-compulsive), aggressive behaviour, attention problems and social problems. Parents whose children had more severe behavioural or emotional dysfunction reported more difficulties with the mental health system. From the parents' point of view, mental health professionals frequently did not feel responsible or were not sufficiently skilled for the treatment of children with ID. As a consequence, 96% of all parents expressed a desire for specialised in- and outpatient services.
Conclusions: This study confirms the findings from other studies regarding the high rate of co-occurrence of ID and mental health problems in youths. The results indicate that parents strongly request both specialised in- and outpatient services and more professional general services, with equitable treatment for all children, with and without ID.
Answer: Yes, social information processing models (SIP models) have been found to explain aggressive behavior in children with mild intellectual disabilities (MID) in residential care. Research has shown that the response-decision element of the SIP model is not necessary to explain aggressive behavior in these children. Instead, social schemata are indirectly related to aggressive behavior, with aggressive response generation serving as a mediating variable (PUBMED:16999780). This suggests that interventions targeting SIP could be beneficial for addressing aggressive behavior in this population.
Additionally, studies have indicated that deviant SIP steps mediate the relationship between children's normative beliefs about aggression, parenting, and aggressive behavior in children with MID to borderline intellectual functioning (MID-BIF). An indirect effect was found from positive parenting through normative beliefs about aggression to deviant SIP, suggesting that normative beliefs about aggression may be a relevant intervention target for children with MID-BIF and aggressive behavior (PUBMED:36893581).
Furthermore, SIP skills have been identified as a mediating mechanism linking executive functions (EFs) to aggression in adolescents with mild to borderline intellectual disability (MBID). Specifically, focused attention and working memory were found to be related to aggression through SIP skills, such as hostile interpretations and self-efficacy for aggression (PUBMED:30010484).
In summary, SIP models provide a framework for understanding and intervening in aggressive behavior among children with MID in residential care. These models highlight the importance of cognitive processes in the development and maintenance of aggressive behavior and suggest potential targets for intervention to reduce aggression in this population. |
Instruction: Is admission hyperglycaemia in non-diabetic patients with acute myocardial infarction a surrogate for previously undiagnosed abnormal glucose tolerance?
Abstracts:
abstract_id: PUBMED:17000629
Is admission hyperglycaemia in non-diabetic patients with acute myocardial infarction a surrogate for previously undiagnosed abnormal glucose tolerance? Aims: To investigate whether admission hyperglycaemia in non-diabetic patients with acute myocardial infarction (AMI) is a surrogate for previously undiagnosed abnormal glucose tolerance.
Methods And Results: Two hundred non-diabetic patients with AMI were divided into three groups: 81 patients with admission glucose < 7.8 mmol/L (group 1), 83 patients with admission glucose ≥ 7.8 mmol/L and < 11.1 mmol/L (group 2), and 36 patients with admission glucose ≥ 11.1 mmol/L (group 3). Abnormal glucose tolerance (diabetes or impaired glucose tolerance, IGT) was diagnosed by oral glucose tolerance test (OGTT). OGTT identified diabetes in 53 patients (27%) and IGT in 78 patients (39%). When the fasting glucose criteria were applied, however, only 14 patients (7%) were diagnosed as having diabetes. The prevalence of abnormal glucose tolerance was similar among the three groups: 67% in group 1, 63% in group 2, and 69% in group 3 (P = 0.74). The relation of fasting glucose (r2 = 0.50, P < 0.001) and HbA1c (r2 = 0.34, P < 0.001) to 2-h post-load glucose was significant, but the relation of admission glucose to 2-h post-load glucose was not (r2 = 0.02, P = 0.08). Multivariable analysis showed that fasting glucose and HbA1c were independent predictors of abnormal glucose tolerance, but admission glucose was not.
Conclusion: Admission hyperglycaemia in non-diabetic patients with AMI does not represent previously undiagnosed abnormal glucose tolerance. Fasting glucose and HbA1c, rather than admission glucose, may be useful to predict abnormal glucose tolerance. However, these parameters lacked sensitivity. OGTT should be considered in all non-diabetic patients with AMI.
abstract_id: PUBMED:3799239
Prevalence of hyperglycaemia and undiagnosed diabetes mellitus in patients with acute myocardial infarction. The prevalence of hyperglycaemia and undiagnosed diabetes mellitus was assessed in 214 consecutive patients admitted to the coronary care units with acute myocardial infarction (AMI). On admission, 16 patients (7.5%) had known diabetes, and 19 patients, not previously known to be diabetic, had blood glucose concentrations of greater than or equal to 9 mmol/l. Fifteen patients survived for 2 months, at which time a 75 g oral glucose tolerance test showed diabetes in 9 (60%) and impaired glucose tolerance in 4 (27%). Ten of these 13 patients (77%) with abnormal glucose tolerance had elevated glycosylated haemoglobin (HbA1c) on admission, indicating pre-existing glucose intolerance or diabetes. The prevalence of undiagnosed diabetes was 4.5% (9/198). However, we may have overlooked undiagnosed diabetes in a small number of patients on admission, since only a random blood glucose of less than 8 mmol/l rules out diabetes according to WHO criteria. Elevated blood glucose in patients with AMI is more likely to reflect stable pre-existing abnormal glucose tolerance than a temporary stress-induced phenomenon.
abstract_id: PUBMED:22607511
Stress hyperglycaemia in patients with first myocardial infarction. Objective: To investigate the incidence of stress hyperglycaemia at first acute myocardial infarction (MI) with ST-segment elevation, occurrence of stress hyperglycaemia as a manifestation of previously undiagnosed abnormal glucose tolerance (AGT), and its relation to stress hormone levels.
Materials And Methods: The population of this prospective cohort study consisted of 243 patients. On admission glucose, adrenaline, noradrenaline and cortisol levels were measured. Patients without previously diagnosed diabetes (n = 204) underwent an oral glucose tolerance test on day 3 of hospitalisation and 3 months after discharge.
Results: Abnormal glucose tolerance at day 3 was observed in 92 (45.1%) patients without a previous diagnosis of diabetes mellitus and resolved after 3 months in 46 (50.0%) patients (p < 0.0001). Stress hyperglycaemia, defined as admission glycaemia ≥ 11.1 mmol/l, affected 34 (14.0%) study participants: 28 (54.9%) patients with diabetes vs. 3 (8.8%) subjects with newly detected impaired glucose tolerance (p < 0.00001) and 1 (2.2%) person with AGT at day 3 (p < 0.000001). Multivariable analysis identified elevated glycated haemoglobin (HbA1c; p < 0.0000001), anterior MI (p < 0.05) and high admission cortisol concentration (p < 0.001), but not catecholamines, as independent predictors of stress hyperglycaemia. The receiver operating characteristic curve analysis revealed optimal cut-off values of 8.2% for HbA1c and 47.7 μg/dl for admission cortisol, with very good and sufficient diagnostic accuracies, respectively.
Conclusions: Newly detected AGT in patients with a first MI is transient in 50% of cases. Stress hyperglycaemia is a common finding in patients with a first MI with ST-segment elevation and diabetes mellitus, but is rarely observed in individuals with impaired glucose tolerance or transient AGT diagnosed during the acute phase of MI. Risk factors for the occurrence of stress hyperglycaemia include elevated HbA1c, anterior MI and a high admission cortisol concentration.
abstract_id: PUBMED:6135025
"Stress" hyperglycaemia during acute myocardial infarction: an indicator of pre-existing diabetes? Hyperglycaemia occurring at admission in patients with suspected acute myocardial infarction is generally held to represent stress hyperglycaemia. 26 patients, not previously known to be diabetic, had blood glucose values greater than or equal to 10 mmol/l on admission to a coronary care unit. 16 survived for 2 months at which time a 75 g oral glucose tolerance test (OGTT) showed diabetes in 10 (63%) and impaired glucose tolerance in 1 (WHO criteria). All those with abnormal glucose tolerance at 2 months had had raised glycosylated haemoglobin (HbA1) (greater than 7.5%) on admission, indicating pre-existing diabetes. All those with a HbA1 level over 8% had abnormal glucose tolerance. 7 of the 10 who died or did not have an OGTT also had raised HbA1 at admission. An admission blood glucose greater than or equal to 10 mmol/l in patients with severe chest pain is more likely to indicate previously undiagnosed diabetes than "stress" hyperglycaemia. There is no evidence that myocardial infarction precipitates diabetes. The glycosylated haemoglobin concentration can be used to distinguish between stress hyperglycaemia and hyperglycaemia caused by diabetes.
abstract_id: PUBMED:12090978
Glucose metabolism in patients with acute myocardial infarction and no previous diagnosis of diabetes mellitus: a prospective study. Background: Glycometabolic state at hospital admission is an important risk marker for long-term mortality in patients with acute myocardial infarction, whether or not they have known diabetes mellitus. Our aim was to ascertain the prevalence of impaired glucose metabolism in patients without diagnosed diabetes but with myocardial infarction, and to assess whether such abnormalities can be identified in the early course of a myocardial infarction.
Methods: We did a prospective study, in which we enrolled 181 consecutive patients admitted to the coronary care units of two hospitals in Sweden with acute myocardial infarction, no diagnosis of diabetes, and a blood glucose concentration of less than 11.1 mmol/L. We recorded glucose concentrations during the hospital stay, and did standardised oral glucose tolerance tests with 75 g of glucose at discharge and again 3 months later.
Findings: The mean age of our cohort was 63.5 years (SD 9) and the mean blood glucose concentration at admission was 6.5 mmol/L (1.4). The mean 2-h postload blood glucose concentration was 9.2 mmol/L (2.9) at hospital discharge, and 9.0 mmol/L (3.0) 3 months later. 58 of 164 (35%, 95% CI 28-43) and 58 of 144 (40%, 32-48) individuals had impaired glucose tolerance at discharge and after 3 months, respectively, and 51 of 164 (31%, 24-38) and 36 of 144 (25%, 18-32) had previously undiagnosed diabetes mellitus. Independent predictors of abnormal glucose tolerance at 3 months were concentrations of HbA(1c) at admission (p=0.024) and fasting blood glucose concentrations on day 4 (p=0.044).
Interpretation: Previously undiagnosed diabetes and impaired glucose tolerance are common in patients with an acute myocardial infarction. These abnormalities can be detected early in the postinfarction period. Our results suggest that fasting and postchallenge hyperglycaemia in the early phase of an acute myocardial infarction could be used as early markers of high-risk individuals.
abstract_id: PUBMED:32647915
Fasting blood glucose at admission is an independent predictor for 28-day mortality in patients with COVID-19 without previous diagnosis of diabetes: a multi-centre retrospective study. Aims/hypothesis: Hyperglycaemia is associated with an elevated risk of mortality in community-acquired pneumonia, stroke, acute myocardial infarction, trauma and surgery, among other conditions. In this study, we examined the relationship between fasting blood glucose (FBG) and 28-day mortality in coronavirus disease 2019 (COVID-19) patients not previously diagnosed as having diabetes.
Methods: We conducted a retrospective study involving all consecutive COVID-19 patients with a definitive 28-day outcome and FBG measurement at admission from 24 January 2020 to 10 February 2020 in two hospitals based in Wuhan, China. Demographic and clinical data, 28-day outcomes, in-hospital complications and CRB-65 scores of COVID-19 patients in the two hospitals were analysed. CRB-65 is an effective measure for assessing the severity of pneumonia and is based on four indicators, i.e. confusion, respiratory rate (>30/min), systolic blood pressure (≤90 mmHg) or diastolic blood pressure (≤60 mmHg), and age (≥65 years).
Results: Six hundred and five COVID-19 patients were enrolled, including 114 who died in hospital. Multivariable Cox regression analysis showed that age (HR 1.02 [95% CI 1.00, 1.04]), male sex (HR 1.75 [95% CI 1.17, 2.60]), CRB-65 score 1-2 (HR 2.68 [95% CI 1.56, 4.59]), CRB-65 score 3-4 (HR 5.25 [95% CI 2.05, 13.43]) and FBG ≥7.0 mmol/l (HR 2.30 [95% CI 1.49, 3.55]) were independent predictors for 28-day mortality. The OR for 28-day in-hospital complications in those with FBG ≥7.0 mmol/l and 6.1-6.9 mmol/l vs <6.1 mmol/l was 3.99 (95% CI 2.71, 5.88) or 2.61 (95% CI 1.64, 4.41), respectively.
Conclusions/interpretation: FBG ≥7.0 mmol/l at admission is an independent predictor for 28-day mortality in patients with COVID-19 without previous diagnosis of diabetes. Glycaemic testing and control are important to all COVID-19 patients even where they have no pre-existing diabetes, as most COVID-19 patients are prone to glucose metabolic disorders.
abstract_id: PUBMED:18842319
Temporal change in glucose tolerance in non-ST-elevation myocardial infarction. We assessed the prevalence and 3-month change in glucose tolerance status in consecutive non-ST-elevation myocardial infarction (NSTEMI; European Society of Cardiology 2007 definition) patients (N=49; mean (S.D.) age 65 (11) years) admitted to a coronary care unit, without known diabetes. These patients underwent an oral glucose tolerance test (OGTT) a median of 36 hours (IQR: 18-72 hours) after admission and again at 3 months. Undiagnosed abnormal glucose tolerance (AGT: impaired fasting glucose (IFG), impaired glucose tolerance (IGT) or new diabetes) was common (61% at admission and 41% at 3 months, p<0.05) and the majority (approximately 3/4) had IGT. Glucose tolerance status improved in a higher proportion of patients than it worsened (31% vs. 8%, p=0.04). At 3 months, fasting glucose was unchanged but 2-hour OGTT glucose was lower (mean (S.D.): 8.5 (2.7) mmol/L vs. 7.7 (2.7) mmol/L, p=0.004). 'Stress hyperglycaemia' could explain the higher admission glucose levels, and this raises the question of the optimal timing of OGTT in relation to myocardial infarction. Newly diagnosed diabetes was present in approximately 10% of patients and was not reliably detected by fasting plasma glucose. In NSTEMI patients, OGTT is the only reliable strategy to identify subjects with IGT and diabetes.
abstract_id: PUBMED:21768543
Prognostic value of admission glycosylated hemoglobin and glucose in nondiabetic patients with ST-segment-elevation myocardial infarction treated with percutaneous coronary intervention. Background: In nondiabetic patients with ST-segment-elevation myocardial infarction, acute hyperglycemia is associated with adverse outcome. Whether this association is due merely to hyperglycemia as an acute stress response or whether longer-term glycometabolic derangements are also involved is uncertain. It was our aim to determine the association between both acute and chronic hyperglycemia (hemoglobin A1c [HbA1c]) and outcome in nondiabetic patients with ST-segment-elevation myocardial infarction.
Methods And Results: This observational study included consecutive patients (n=4176) without known diabetes mellitus admitted with ST-segment-elevation myocardial infarction. All patients were treated with primary percutaneous coronary intervention. Both glucose and HbA1c were measured on admission. The main outcome measure was total long-term mortality; secondary outcome measures were 1-year mortality and enzymatic infarct size. One-year mortality was 4.7%, and mortality after total follow-up (3.3 ± 1.5 years) was 10%. Both elevated HbA1c levels (P<0.001) and elevated admission glucose (P<0.001) were associated with 1-year and long-term mortality. After exclusion of early mortality (within 30 days), HbA1c remained associated with long-term mortality (P<0.001), whereas glucose lost significance (P=0.09). Elevated glucose, but not elevated HbA1c, was associated with larger infarct size. After multivariate analysis, HbA1c (hazard ratio, 1.2 per interquartile range; P<0.01), but not glucose, was independently associated with long-term mortality.
Conclusions: In nondiabetic patients with ST-segment-elevation myocardial infarction, both elevated admission glucose and elevated HbA1c levels were associated with adverse outcome. These two parameters reflect different patient populations, and their association with outcome is probably due to different mechanisms. Measurement of both parameters enables identification of these high-risk groups for aggressive secondary risk prevention.
abstract_id: PUBMED:18182836
Admission hyperglycemia and abnormal glucose tolerance at discharge in patients with acute myocardial infarction and no previous history of diabetes mellitus Unlabelled: The objective of this study was to determine the frequency of admission hyperglycemia and abnormal glucose tolerance at discharge in patients with acute myocardial infarction and no previous history of diabetes mellitus.
Methods: Data on 1522 patients with acute myocardial infarction and no previous history of diabetes mellitus were analyzed. Before discharge from hospital, a standardized oral glucose tolerance test was performed in 197 patients with admission hyperglycemia.
Results: Admission hyperglycemia (≥ 6.1 mmol/L) was found in half of the patients with acute myocardial infarction: a glucose concentration of 6.1-6.99 mmol/L was present in 21.5% and ≥ 7.0 mmol/L in 30.1% of the patients. On glucose tolerance testing, normal glucose metabolism was noted in 57.9% of the patients with admission hyperglycemia; abnormal glucose tolerance was newly diagnosed in more than one-third, and a glucose concentration of ≥ 11.1 mmol/L in 10.1% of the patients.
Conclusions: Abnormal glucose tolerance is a frequent feature in nondiabetic patients with admission hyperglycemia during acute myocardial infarction, and a glucose tolerance test should be considered in all patients with ischemic heart disease for early modification of this risk factor.
abstract_id: PUBMED:3690957
Stress hyperglycaemia is a predictor of abnormal glucose tolerance in Indian patients with acute myocardial infarction. 66 non-diabetic Indian patients with acute myocardial infarction were assessed prospectively for the presence of hyperglycaemia, and the value of this admission hyperglycaemia and of glycosylated haemoglobin (HbA1) levels in reflecting the glucose tolerance status of the patients was studied. Hyperglycaemia, defined as admission plasma glucose greater than or equal to 8 mmol/l, was detected in 49% of the patients, whilst raised HbA1 values were seen in 11%. The admission plasma glucose (APG) correlated significantly with both the HbA1 levels and with the 2 hour glucose value in the oral glucose tolerance test (p less than 0.001). An oral glucose tolerance test performed 3 months after the acute episode revealed that 35 patients (53%) had abnormal glucose tolerance according to WHO criteria. Of the patients with initial hyperglycaemia, 75% had abnormal glucose tolerance tests, whilst 32% of patients with normal APG had abnormal glucose tolerance. Abnormal glucose tolerance was also detected in all patients with raised HbA1 values (greater than 8.9%) and in 48% of patients with normal levels. The sensitivity and specificity of an APG greater than or equal to 8 mmol/l for abnormal glucose tolerance were 68.6% and 74.2%, respectively, and those of raised HbA1 values were 20% and 100%. Hence an APG greater than or equal to 8 mmol/l in patients with myocardial infarction is more likely to indicate the presence of unrecognized abnormal glucose tolerance than stress. HbA1 measurements do not appear to offer any further advantage in the assessment of hyperglycaemia following myocardial infarction.
Answer: Admission hyperglycemia in non-diabetic patients with acute myocardial infarction (AMI) is not necessarily a surrogate for previously undiagnosed abnormal glucose tolerance (AGT). Several studies have investigated the relationship between admission hyperglycemia and AGT, with varying results.
One study found that admission hyperglycemia in non-diabetic patients with AMI does not represent previously undiagnosed AGT. Fasting glucose and HbA1c were identified as independent predictors of AGT, but admission glucose was not. The study suggested that an oral glucose tolerance test (OGTT) should be considered in all non-diabetic patients with AMI to detect AGT (PUBMED:17000629).
Another study indicated that elevated blood glucose in patients with AMI is more likely to reflect pre-existing abnormal glucose tolerance than a temporary stress-induced phenomenon. The prevalence of undiagnosed diabetes was 4.5% in the study population, and elevated glycosylated hemoglobin (HbA1c) on admission indicated pre-existing glucose intolerance or diabetes (PUBMED:3799239).
A prospective cohort study found that stress hyperglycemia is common in patients with a first MI with ST-segment elevation and diabetes mellitus, but is rarely observed in individuals with impaired glucose tolerance or transient AGT diagnosed during the acute phase of MI. Elevated HbA1c, anterior MI, and high admission cortisol concentration were identified as independent predictors of stress hyperglycemia (PUBMED:22607511).
Another study supported the notion that "stress" hyperglycemia during acute myocardial infarction is more likely to indicate previously undiagnosed diabetes than a temporary stress response: an admission blood glucose of 10 mmol/L or greater in patients with severe chest pain was more likely to indicate previously undiagnosed diabetes than stress hyperglycemia (PUBMED:6135025).
In summary, while admission hyperglycemia in non-diabetic patients with AMI may be associated with stress, it is also frequently indicative of previously undiagnosed AGT, including diabetes and impaired glucose tolerance. Therefore, it is not a reliable surrogate marker for AGT, and additional testing such as OGTT and HbA1c measurements are recommended to accurately diagnose AGT in these patients. |
Instruction: Should fetal growth be a matter of concern in isolated single umbilical artery?
Abstracts:
abstract_id: PUBMED:24918999
Should fetal growth be a matter of concern in isolated single umbilical artery? Objective: To examine birth weight in pregnancies with isolated single umbilical artery (ISUA).
Methods: Case-control study with retrospective review of 131 singleton pregnancies with isolated single umbilical artery diagnosed before birth. The control group consisted of 730 prospectively recruited singleton pregnancies with histological confirmation of a three-vessel cord. Pregnancies were classified as uncomplicated or high-risk according to the presence of diseases that increase the risk of placental insufficiency during pregnancy. Mean birth weight and the frequencies of low birth weight (< 2500 g), very low birth weight (< 1500 g) and fetal growth restriction below the 5th and 10th centiles were compared between groups.
Results: The mean birth weight difference between ISUA (n = 131, 2840 ± 701 g) and control (n = 730, 2983 ± 671 g) pregnancies was 143 g (95% CI 17-269; p = 0.04), and birth weight below the 5th centile was significantly more common in the ISUA group [28/131 (21.4%) versus 99/730 (13.6%), p = 0.02]. When only uncomplicated pregnancies were considered in both groups, no birth weight differences were observed. Amongst high-risk subgroups, birth weight below the 5th centile remained significantly more common in ISUA compared with control pregnancies [10/35 (28.6%) versus 53/377 (14.1%), p = 0.04].
Conclusion: Isolated single umbilical artery does not increase the risk of fetal growth restriction in uncomplicated singleton pregnancies.
abstract_id: PUBMED:28273661
Early- versus Late-Onset Fetal Growth Restriction Differentially Affects the Development of the Fetal Sheep Brain. Fetal growth restriction (FGR) is a common complication of pregnancy, principally caused by suboptimal placental function, and is associated with high rates of perinatal mortality and morbidity. Clinical studies suggest that the time of onset of placental insufficiency is an important contributor towards the neurodevelopmental impairments that are evident in children who had FGR. It is however currently unknown how early-onset and late-onset FGR differentially affect brain development. The aim of this study was to examine neuropathology in early-onset and late-onset FGR fetal sheep and to determine whether they differentially alter brain development. We induced placental insufficiency and FGR via single umbilical artery ligation at either 88 days (early-onset) or 105 days (late-onset) of fetal sheep gestation (term is approx. 147 days), reflecting a period of rapid white matter brain development. Fetal blood samples were collected for the first 10 days after surgery, and all fetuses were sacrificed at 125 days' gestation for brain collection and subsequent histopathology. Our results show that early-onset FGR fetuses became progressively hypoxic over the first 10 days after onset of placental insufficiency, whereas late-onset FGR fetuses were significantly hypoxic compared to controls from day 1 after onset of placental insufficiency (SaO2 46.7 ± 7.4 vs. 65.7 ± 3.9%, respectively, p = 0.03). Compared to control brains, early-onset FGR brains showed widespread white matter injury, with a reduction in both CNPase-positive and MBP-positive density of staining in the periventricular white matter (PVWM), subcortical white matter, intragyral white matter (IGWM), subventricular zone (SVZ), and external capsule (p < 0.05 for all). Total oligodendrocyte lineage cell counts (Olig-2-positive) did not differ across groups, but mature oligodendrocytes (MBP-positive) were reduced, and neuroinflammation was evident in early-onset FGR brains with reactive astrogliosis (GFAP-positive) in the IGWM and cortex (p < 0.05), together with an increased number of Iba-1-positive activated microglia in the PVWM, SVZ, and cortex (p < 0.05). Late-onset FGR was associated with a widespread reduction of CNPase-positive myelin expression (p < 0.05) and a reduced number of mature oligodendrocytes in all white matter regions examined (p < 0.05). NeuN-positive neuronal cell counts in the cortex were not different across groups; however, the morphology of neuronal cells was different in response to placental insufficiency, most notable in the early-onset FGR fetuses, but it was late-onset FGR that induced caspase-3-positive apoptosis within the cortex. This study demonstrates that early-onset FGR is associated with more widespread white matter injury and neuroinflammation; however, both early- and late-onset FGR are associated with complex patterns of white and grey matter injury. These results indicate that it is the timing of the onset of fetal compromise relative to brain development that principally mediates altered brain development associated with FGR.
abstract_id: PUBMED:32425758
Does Antenatal Betamethasone Alter White Matter Brain Development in Growth Restricted Fetal Sheep? Fetal growth restriction (FGR) is a common complication of pregnancy often associated with neurological impairments. Currently, there is no treatment for FGR, hence it is likely these babies will be delivered prematurely, thus being exposed to antenatal glucocorticoids. While there is no doubt that antenatal glucocorticoids reduce neonatal mortality and morbidities, their effects on the fetal brain, particularly in FGR babies, are less well recognized. We investigated the effects of both short- and long-term exposure to antenatal betamethasone treatment in both FGR and appropriately grown fetal sheep brains. Surgery was performed on pregnant Border-Leicester Merino crossbred ewes at 105-110 days gestation (term ~150 days) to induce FGR by single umbilical artery ligation (SUAL) or sham surgery. Ewes were then treated with a clinical dose of betamethasone (11.4 mg intramuscularly) or saline at 113 and 114 days gestation. Animals were euthanized at 115 days (48 h following the initial betamethasone administration) or 125 days (10 days following the initial dose of betamethasone) and fetal brains collected for analysis. FGR fetuses were significantly smaller than controls (115 days: 1.68 ± 0.11 kg vs. 1.99 ± 0.11 kg, 125 days: 2.70 ± 0.15 kg vs. 3.31 ± 0.20 kg, P < 0.001) and betamethasone treatment reduced body weight in both control (115 days: 1.64 ± 0.10 kg, 125 days: 2.53 ± 0.10 kg) and FGR fetuses (115 days: 1.41 ± 0.10 kg, 125 days: 2.16 ± 0.17 kg, P < 0.001). Brain: body weight ratios were significantly increased with FGR (P < 0.001) and betamethasone treatment (P = 0.002). Within the fetal brain, FGR reduced CNPase-positive myelin staining in the subcortical white matter (SCWM; P = 0.01) and corpus callosum (CC; P = 0.01), increased GFAP staining in the SCWM (P = 0.02) and reduced the number of Olig2 cells in the periventricular white matter (PVWM; P = 0.04). Betamethasone treatment significantly increased CNPase staining in the external capsule (EC; P = 0.02), reduced GFAP staining in the CC (P = 0.03) and increased Olig2 staining in the SCWM (P = 0.04). Here we show that FGR has progressive adverse effects on the fetal brain, particularly within the white matter. Betamethasone exacerbated growth restriction in the FGR offspring, but betamethasone did not worsen white matter brain injury.
abstract_id: PUBMED:36766676
Changes in Artery Diameters and Fetal Growth in Cases of Isolated Single Umbilical Artery. Background: There are conflicting data in the international literature on the risks of abnormal fetal growth in fetuses presenting an isolated single umbilical artery (SUA), and the pathophysiology of this complication is poorly understood. Objective: To evaluate whether changes in the diameter of the remaining umbilical artery in fetuses presenting an isolated SUA are associated with different fetal growth patterns. Study design: This was a two-center prospective longitudinal observational study including 164 fetuses diagnosed with a SUA at the 20-22-week detailed ultrasound examination and 200 control fetuses with a three-vessel cord. In all cases, the diameters of the cord vessels were measured in a transverse view of the central portion of the umbilical cord, and the number of cord vessels was confirmed at delivery. Logistic regression and nonparametric receiver operating characteristic (ROC) analysis were carried out to evaluate the association of the single umbilical artery diameter with small-for-gestational-age (SGA) status and with fetal growth restriction (FGR). The impact of artery dimension was adjusted for maternal BMI, parity, ethnicity, side of the remaining umbilical artery and umbilical resistance index (RI) in the regression model. Results: A significantly (p < 0.001) larger mean diameter was found for the remaining artery in fetuses with SUA compared with controls (3.0 ± 0.9 vs. 2.5 ± 0.6 mm). After controlling for BMI and parity, we found no difference in umbilical resistance or side of the remaining umbilical artery between the SUA and control groups. A remaining umbilical artery diameter of >3.1 mm was associated with a lower risk of FGR, but this association failed to reach statistical significance (OR = 0.60, 95% CI = 0.33-1.09, p = 0.089). We also found that the mean vein-to-artery area ratio was significantly (p < 0.001) increased in the SUA group compared with the controls (2.4 ± 1.8 vs. 1.8 ± 0.9; mean difference = 0.6; Cohen's d = 0.46). Conclusion: In most fetuses with isolated SUA, the remaining artery diameter at 20-22 weeks is significantly larger than in controls. When there are no changes in the diameter and, in particular, if it remains <3.1 mm, the risk of abnormal fetal growth is higher, and measurements of the diameter of the remaining artery could be used to identify fetuses at risk of FGR later in pregnancy.
abstract_id: PUBMED:23775879
Relationship of isolated single umbilical artery to fetal growth, aneuploidy and perinatal mortality: systematic review and meta-analysis. Objective: To review the available literature on outcome of pregnancy when an isolated single umbilical artery (iSUA) is diagnosed at the time of the mid-trimester anomaly scan.
Methods: We searched MEDLINE (1948-2012), EMBASE (1980-2012) and the Cochrane Library (until 2012) for relevant citations reporting on outcome of pregnancy with iSUA seen on ultrasound. Data were extracted by two reviewers. Where appropriate, we pooled odds ratios (ORs) for the dichotomous outcome measures: small for gestational age (SGA), perinatal mortality and aneuploidy. For birth weight we determined the mean difference with 95% CI.
Results: We identified three cohort studies and four case-control studies reporting on 928 pregnancies with iSUA. There was significant heterogeneity between cohort and case-control studies. Compared to fetuses with a three-vessel cord, fetuses with an iSUA were more likely to be SGA (OR 1.6 (95% CI, 0.97-2.6); n = 489) or suffer perinatal mortality (OR 2.0 (95% CI, 0.9-4.2); n = 686), although for neither of the outcomes was statistical significance reached. The difference in mean birth weight was 51 g (95% CI, -154.7 to 52.6 g; n = 407), but again this difference was not statistically significant. We found no evidence that fetuses with iSUA have an increased risk for aneuploidy.
Conclusion: In view of the non-significant association between iSUA and fetal growth and perinatal mortality, and in view of the heterogeneity in studies on aneuploidy, we feel that large-scale, prospective cohort studies are needed to reach definitive conclusions on the appropriate work-up in iSUA pregnancies. At present, targeted growth assessment after diagnosis of iSUA should not be routine practice.
abstract_id: PUBMED:15863549
Fetal growth assessment and neonatal birth weight in fetuses with an isolated single umbilical artery. Objective: To evaluate interval fetal growth and compare the incidence of small-for-gestational age (SGA) newborns between fetuses with an isolated single umbilical artery and those with a 3-vessel umbilical cord.
Methods: A retrospective, case-controlled study in which 84 singleton pregnancies with an isolated single umbilical artery were compared with 3-vessel umbilical cord fetuses as the control group.
Results: There was no statistical difference between the groups in maternal demographic data, except for ethnicity, or in neonatal outcomes. The mean newborn birth weight was similar between the isolated single umbilical artery and the control groups, 3,268 +/- 596 g and 3,274 +/- 627 g, respectively. The prevalence of SGA newborns was 7.1% (6 of 84) in the isolated single umbilical artery group and 4.8% (4 of 84) in the control group. An ultrasound examination demonstrated fetal growth restriction in 50% of cases (3 of 6) in the isolated single umbilical artery group and in 25% of subjects (1 of 4) in the control group.
Conclusion: Fetuses with an isolated single umbilical artery are at similar risk for SGA compared with fetuses with 3-vessel umbilical cords. It appears that antepartum serial ultrasound examination does not provide more information for interval fetal growth assessment in fetuses with an isolated single umbilical artery.
abstract_id: PUBMED:18297613
Serial sonographic growth assessment in pregnancies complicated by an isolated single umbilical artery. Pregnancies complicated by an isolated single umbilical artery (SUA) are thought to be at increased risk for intrauterine growth restriction (IUGR). The management of these pregnancies often includes serial sonographic assessments of fetal growth. The goal of our study was to test the validity of this assertion. We conducted a longitudinal sonographic assessment of intrauterine fetal growth in pregnancies complicated by a SUA. We included pregnancies where fetal growth was assessed three or more times, and the presence of SUA was repeatedly demonstrated. Pregnancies with fetal anomalies and multiple gestations were excluded. IUGR was defined as an estimated fetal weight (EFW) ≤ 10th percentile of the normal ranges established by Hadlock. Between January 1999 and December 2005, we identified 273 pregnancies with SUA, for an overall incidence of 0.48% within the total population of patients examined at our institution. One hundred and thirty-five pregnancies did not meet our inclusion criteria. Of the 138 we analyzed, four pregnancies (2.9%) were found to have EFW ≤ 10th percentile. We concluded that the occurrence of IUGR in pregnancies complicated by an isolated SUA is not increased. Serial sonographic assessments of fetal growth do not appear to be indicated in the management of such pregnancies.
abstract_id: PUBMED:31191328
Fetal Growth Restriction Alters Cerebellar Development in Fetal and Neonatal Sheep. Fetal growth restriction (FGR) complicates 5-10% of pregnancies and is associated with increased risks of perinatal morbidity and mortality. The development of cerebellar neuropathology in utero, in response to chronic fetal hypoxia, and over the period of high risk for preterm birth, has not been previously studied. Therefore, the objective of this study was to examine the effects of FGR induced by placental insufficiency on cerebellar development at three timepoints in ovine fetal and neonatal development: (1) 115 days gestational age (d GA), (2) 124 d GA, and (3) 1-day-old postnatal age. We induced FGR via single umbilical artery ligation (SUAL) at ~105 d GA in fetal sheep (term is ~147 d GA). Animals were sacrificed at 115 d GA, 124 d GA, and 1-day-old postnatal age; fetuses and lambs were weighed and the cerebellum collected for histopathology. FGR lambs demonstrated neuropathology within the cerebellum after birth, with a significant, ~18% decrease in the number of granule cell bodies (NeuN+ immunoreactivity) within the internal granular layer (IGL) and an ~80% reduction in neuronal extension and branching (MAP+ immunoreactivity) within the molecular layer (ML). Oxidative stress (8-OHdG+ immunoreactivity) was significantly higher in FGR lambs within the ML and the white matter (WM) compared to control lambs. The structural integrity of neurons was already aberrant in the FGR cerebellum at 115 d GA, and by 124 d GA, inflammatory cells (Iba-1+ immunoreactivity) were significantly upregulated and the blood-brain barrier (BBB) was compromised (Perls, albumin, and GFAP+ immunoreactivity). We confirm that cerebellar injuries develop antenatally in FGR, and therefore, interventions to prevent long-term motor and coordination deficits should be implemented either antenatally or perinatally, thereby targeting neuroinflammatory and oxidative stress pathways.
abstract_id: PUBMED:23525628
The impact of different sides of the absent umbilical artery on fetal growth in an isolated single umbilical artery. Purpose: The aim of this study was to determine whether laterality of an absent umbilical artery (AUA) is associated with fetal growth in fetuses with isolated single umbilical artery (SUA).
Methods: Fifty singleton pregnancies were studied, including 26 cases with a right AUA and 24 cases with a left AUA in isolated SUA, and 200 singleton pregnancies with a three-vessel cord. Delivery data, including gestational age and birth weight and height, were recorded. Birth weight and height were compared between fetuses with each side of AUA and fetuses with a three-vessel cord using analysis of covariance.
Results: The mean difference was 0.25 kg (SD 0.05; P < 0.05) in birth weight between fetuses with a left AUA and a three-vessel cord. The mean difference was 1.03 cm (SD 0.56; P < 0.05) in birth height between fetuses with a left AUA and a three-vessel cord. No significant differences were observed in birth weight and height between fetuses with a right AUA and those with a three-vessel cord.
Conclusion: Our data suggest that the birth weight and height of fetuses with a left AUA in isolated SUA are lower than those with a three-vessel cord.
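As a methodological aside, the covariance analysis referred to above is an analysis of covariance (ANCOVA) comparing birth outcomes across cord groups while adjusting for a covariate. A minimal sketch in Python, using entirely synthetic data and a hypothetical gestational-age covariate (the study's actual adjustment variables are not reproduced here), might look like this:

```python
# Illustrative ANCOVA sketch: compare birth weight across cord groups
# (left AUA, right AUA, three-vessel) adjusting for gestational age.
# All values below are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "group": np.repeat(["left_aua", "right_aua", "three_vessel"], [24, 26, 200]),
    "ga_weeks": rng.normal(39, 1.2, 250),
})
# Build a synthetic outcome with a small left-AUA deficit, as in the abstract.
df["birthweight_kg"] = (3.4 - 0.25 * (df.group == "left_aua")
                        + 0.15 * (df.ga_weeks - 39)
                        + rng.normal(0, 0.4, 250))

ancova = smf.ols("birthweight_kg ~ C(group) + ga_weeks", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))  # group effect after covariate adjustment
```

The Type II ANOVA table reports whether group differences in birth weight remain after the covariate is accounted for.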
abstract_id: PUBMED:28467985
Effects of Antenatal Melatonin Treatment on the Cerebral Vasculature in an Ovine Model of Fetal Growth Restriction. Chronic moderate hypoxia, such as occurs in fetal growth restriction (FGR) during gestation, compromises the blood-brain barrier (BBB) and results in structural abnormalities of the cerebral vasculature. We have previously determined the neuroprotective and antioxidant effects of maternal administration of melatonin (MLT) on growth-restricted newborn lambs. The potential of maternal MLT therapy for the treatment of cerebrovascular dysfunction-associated developmental hypoxia has also been demonstrated in newborn lambs. We assessed whether MLT had an effect on the previously reported structural and cerebral vascular abnormalities in chronically hypoxic FGR lambs. Single umbilical-artery ligation surgery was performed in fetuses at approximately 105 days of gestation (term: 147 days) to induce placental insufficiency and FGR, and treatment with either saline or an MLT infusion (0.1 mg/kg) was started 4 h after surgery. Ewes delivered naturally at term and lambs were euthanased 24 h later. We found a significant reduction in the number of laminin-positive blood vessels within the subcortical and periventricular white matter (SCWM and PVWM) and the subventricular zone (SVZ) in FGR (p < 0.0005) and FGR + MLT brains (p < 0.0005 vs. controls), with no difference found between FGR and FGR + MLT animals. This was associated with a significant decrease in VEGF immunoreactivity in FGR and FGR + MLT brains versus controls (p < 0.0005; SCWM and PVWM) and in the SVZ in FGR brains versus controls (p < 0.005) and also with significantly lower levels of proliferating blood vessels versus controls (p < 0.0005). Glucose transporter-1 immunoreactivity (vascular endothelium) was decreased in FGR versus control lambs (p < 0.0005) in SCWM, PVWM, and the SVZ; it was significantly increased in FGR + MLT lambs compared with FGR lambs in SCWM and PVWM (p < 0.005) and even more markedly in the SVZ (p < 0.0005). FGR brains showed a 72% reduction in pericyte coverage versus control lambs and 68% versus FGR + MLT in PVWM. In SCWM, we found a 77 and 73% reduction compared with control and FGR + MLT lambs, respectively, while in the SVZ, we observed a 68% reduction versus controls and a 70% reduction in FGR versus FGR + MLT lambs. Astrocyte end-feet coverage in the SCWM showed a significant 24% reduction in FGR versus control levels, a 42% decrease within the PVWM, and a 35% decrease within the SVZ versus controls. MLT normalized astrocyte attachment to blood vessels, with no difference seen between controls and FGR + MLT animals in any of the brain regions examined. We also observed a decrease in albumin extravasation and microhemorrhage in controls and FGR + MLT brains versus FGR lambs. Our results demonstrate that umbilicoplacental insufficiency is associated with FGR-produced vascular changes in the white matter and SVZ of FGR newborn brains and that maternal MLT prevented disruption of the BBB by protecting perivascular cells essential for the maintenance of vascular homeostasis and stability.
Answer: Fetal growth should be a matter of concern in isolated single umbilical artery (ISUA) cases, as studies have shown an association with lower birth weights and increased frequency of low birth weight and fetal growth restriction (FGR) compared to pregnancies with a normal three-vessel cord. A study found that the mean birth weight difference between ISUA and control pregnancies was 143g, and birth weight below the 5th percentile was significantly more common in the ISUA group (PUBMED:24918999). However, when only uncomplicated pregnancies were considered, no birth weight differences were observed, suggesting that ISUA does not increase the risk of FGR in uncomplicated singleton pregnancies.
Other research has indicated that FGR, whether early-onset or late-onset, can differentially affect brain development, with early-onset FGR associated with more widespread white matter injury and neuroinflammation, while both types are associated with complex patterns of white and grey matter injury (PUBMED:28273661). This highlights the importance of monitoring fetal growth in ISUA cases due to potential neurodevelopmental impacts.
Additionally, the diameter of the remaining umbilical artery in fetuses with ISUA has been associated with different fetal growth patterns. A larger mean diameter was found for the remaining artery in fetuses with ISUA compared with controls, and a diameter greater than 3.1 mm was associated with a lower risk of FGR, although this association was not statistically significant (PUBMED:36766676).
A systematic review and meta-analysis also suggested that fetuses with ISUA were more likely to be small for gestational age (SGA) or suffer perinatal mortality, although the associations were not statistically significant (PUBMED:23775879). Another study found no significant difference in the risk for SGA between fetuses with an isolated single umbilical artery and those with a 3-vessel umbilical cord (PUBMED:15863549), and a longitudinal study concluded that the occurrence of IUGR in pregnancies complicated by an isolated SUA is not increased (PUBMED:18297613).
In summary, while some studies suggest that ISUA can be associated with lower birth weights and increased risk of FGR, particularly in high-risk pregnancies, the evidence is not entirely consistent. |
Instruction: A 21-year history of all-terrain vehicle injuries: has anything changed?
Abstracts:
abstract_id: PUBMED:33839036
A preliminary study on enhancing safety of contact features in the terrain park. Objectives: Terrain park riders use contact features such as fun boxes and rails. Typical fun box and rail features have a design characteristic that can be changed to improve safety. Fun box edge coping and the edges of rails are typically constructed of soft steel. Ski/snowboard edges (HRC 50) can easily become engaged in the softer metal, raising a chip that suddenly stops the rider and is likely to cause a fall and possible injury. The aim of the study is to examine the effect of the hardness of terrain park running surfaces on chip development.
Design: Testing on steel specimens was performed to research chip development generated by a ski/snowboard edge on steel used in the construction of contact features and on steel that is proposed for such use. An apparatus was constructed to simulate a ski/snowboard edge moving perpendicular to the long axis of coping or rail edge.
Methods: The author performed observation, photographic documentation, metallurgical testing and environmental testing of various contact features at different ski area terrain parks. Several steel specimens of varying hardness were tested at various load levels to study the propensity of chip development by ski/snowboard edges.
Results: Testing of steel samples showed that increasing the hardness of the rail steel or coping steel reduced the propensity for a ski/snowboard edge to engage in the coping or rail.
Conclusions: Increasing steel coping and rail contact surface hardness to HRC 50 and above will likely reduce engagement by steel snowboard/ski edges, which in turn is expected to reduce the chance of a fall and injury.
abstract_id: PUBMED:35735597
Terrain Perception Using Wearable Parrot-Inspired Companion Robot, KiliRo. Research indicates that falls are the second leading cause of unintentional injury deaths in the world. Deaths from falls caused by texting or talking on a mobile phone while walking, impaired vision, unexpected terrain changes, poor balance, weakness, and chronic conditions have increased drastically over the past few decades. In particular, unexpected terrain changes often lead to severe injuries, and sometimes death, even in healthy individuals. To tackle this problem, a warning system can be developed to alert the person to the imminent danger of a fall. This paper describes such a warning system for our bio-inspired wearable pet robot, KiliRo: a terrain perception system that classifies the terrain from visual features extracted from camera images and notifies the wearer of terrain changes while walking. The parrot-inspired KiliRo robot can twist its head and camera up to 180 degrees to obtain visual feedback for classification. Feature extraction is followed by K-nearest neighbor classification of the terrain. Experiments were conducted to establish the efficacy and validity of the proposed approach in classifying terrain changes. The results indicate an accuracy of over 95% across five terrain types, namely pedestrian pathway, road, grass, interior, and staircase.
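The classification step described here (K-nearest neighbors over extracted visual features) can be illustrated with a minimal sketch. The feature vectors, class separations, and dimensionality below are invented for illustration and are not the robot's actual pipeline:

```python
# Illustrative sketch only: K-nearest-neighbor terrain classification on
# synthetic image-feature vectors. The features and their distributions are
# hypothetical; the KiliRo paper's actual feature extraction is not shown.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
classes = ["pathway", "road", "grass", "interior", "staircase"]

# Pretend each terrain image was reduced to an 8-dimensional feature vector,
# with each class centred at a different location in feature space.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(200, 8)) for i in range(5)])
y = np.repeat(classes, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```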
abstract_id: PUBMED:26854062
Analysis of terrain effects on the interfacial force distribution at the hand and forearm during crutch gait. Forces transferred to the upper body during crutch use can lead to both short-term and long-term injuries, including joint pain, crutch palsy, and over-use injuries. While this force transmission has been studied in controlled laboratory settings, it is unclear how these forces are affected by irregular terrains commonly encountered during community ambulation. The purpose of this study was to determine the effects of walking speed and uneven terrain on the load magnitude, distribution, and rate of loading at the human-crutch contact surfaces. Our results show that the rates of loading were significantly increased with higher walking speeds and while negotiating certain irregular terrains, despite there being no apparent effect on the peak force transmission, suggesting load rate may be a more appropriate metric for assessing terrain effects on crutch gait. Furthermore, irrespective of the type of terrain and walking condition, the largest compressive forces were found to reside in the carpal-tunnel region of the hand, and may therefore be a primary contributor to carpal-tunnel injury.
abstract_id: PUBMED:30431365
Pediatric and adolescent injury in all-terrain vehicles. All-terrain vehicles (ATVs) remain a significant source of death and injury among youth. The purpose of this review is to provide an overview of the scope of the problem, the risk factors involved, crash-related outcomes and costs, and injury prevention strategies. There are currently more than 100 pediatric ATV-related fatalities each year and over 30,000 emergency department visits, with a potential annual cost for deaths and injuries approaching $1 billion. Major risk factors include lack of training, operating adult-size ATVs, riding as or carrying passengers, riding on the road, and not wearing a helmet. Extremity injuries are highly common, and the leading causes of death include brain injuries and multi-organ trauma. The latter increasingly involves being crushed by or pinned under the ATV. Reducing ATV-related deaths and injuries will require multiple strategies that integrate approaches from education, engineering, and evidence-based safety laws and their enforcement.
abstract_id: PUBMED:24291073
Causes of accidents in terrain parks: an exploratory factor analysis of recreational freestylers' views. Objective: This study examines ski and snowboard terrain park users' views on aspects associated with accidents by identifying and assessing variables that may influence the occurrence of accidents and the resulting injuries.
Methods: The research was conducted in a major resort in the Spanish Pyrenees, using information gathered from freestyle skiers and snowboarders aged 6 or older. To identify interrelationships among variables and to group the variables belonging to unified concepts, an exploratory factor analysis was performed using varimax rotation.
Results: The results revealed 5 factors that grouped the measured variables that may influence the occurrence of accidents while freestyling in terrain parks. The park features, conditions of the activity, and the user's personal conditions were found to have the most substantial influence on the freestylers' perceptions.
Conclusions: Variables identified as components of the main factors of accident risk in terrain parks should be incorporated into resort management communication and policies.
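For readers unfamiliar with the method, an exploratory factor analysis with varimax rotation of the kind described can be sketched in a few lines. The synthetic survey items and the two-factor structure below are hypothetical, not the study's questionnaire:

```python
# Illustrative sketch of an exploratory factor analysis with varimax
# rotation, run on synthetic survey responses with a known latent structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))       # two underlying factors
loadings = rng.normal(size=(2, 10))      # ten observed survey items
X = latent @ loadings + rng.normal(scale=0.3, size=(300, 10))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_, 2))       # rotated factor loadings
```

Items loading strongly on the same rotated factor are then interpreted together as a single concept, as the abstract's five grouped factors were.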
abstract_id: PUBMED:24179426
Preventing injuries from all-terrain vehicles. All-terrain vehicles (ATVs) are widely used in Canada for recreation, transportation and occupations such as farming. As motorized vehicles, they can be especially dangerous when used by children and young adolescents who lack the knowledge, physical size, strength, and cognitive and motor skills to operate them safely. The magnitude of injury risk to young riders is reflected in explicit vehicle manual warnings and the warning labels on current models, and evidenced by the significant number of paediatric hospitalizations and deaths due to ATV-related trauma. However, helmet use is far from universal among youth operators, and unsafe riding behaviours, such as driving unsupervised and/or driving with passengers, remain common. Despite industry warnings and public education that emphasize the importance of safety behaviours and the risks of significant injury to children and youth, ATV-related injuries and fatalities continue to occur. Until measures are taken that clearly effect substantial reductions in these injuries, restricting ridership by young operators, especially those younger than 16 years of age, is critical to reducing the burden of ATV-related trauma in children and youth. This document replaces a previous Canadian Paediatric Society position statement published in 2004.
abstract_id: PUBMED:18367134
A 21-year history of all-terrain vehicle injuries: has anything changed? Background: All-terrain vehicle (ATV)-related injuries have increased. The purpose of this study was to determine if the increase in injuries correlates with the expiration of government mandates.
Methods: ATV-injured patients admitted to a level I trauma center were reviewed over the years 1985-1999 and 2000-2005. Several demographic variables and injuries sustained were analyzed.
Results: A total of 433 injuries were recorded, increasing from 164 in 1985-1999 to 269 in 2000-2005. Comparing the two periods, we observed a decrease in closed-head injury (53.6% vs 27.5%; P < .001), spinal cord injury (11.6% vs 5.2%; P < .05), and soft-tissue injury (62.8% vs 45.3%; P < .01), but an increase in long-bone fractures (18.9% vs 33.0%; P < .05). No differences were observed in other injuries.
Conclusions: The number of patients sustaining ATV-related injuries has increased and correlates with the expiration of government mandates. Even though ATVs remain dangerous, injury prevention strategies such as helmet laws may be having a positive impact.
abstract_id: PUBMED:35782169
Globe dislocation and optic nerve avulsion following all-terrain vehicle accidents. Purpose: Open-air motor vehicles present unique trauma risks to the eyes and face. We describe two patients who suffered a crash while riding an all-terrain vehicle (ATV), leading to globe dislocation with optic nerve avulsion in order to raise awareness about the risks associated with ATV accidents.
Observations: In both cases, the injury was caused by high-speed trauma to the orbit involving a tree branch. One patient sustained a life-threatening arrhythmia requiring a short stay in the intensive care unit, and both patients required emergent surgical management and eventual socket reconstruction.
Conclusions And Importance: These cases highlight the need for greater advocacy on behalf of rider safety. The authors encourage ophthalmologists to counsel patients who use ATVs to wear helmets, seatbelts, and protective eyewear to prevent these types of injuries in the future.
abstract_id: PUBMED:32814547
Understanding youths' attitudes and practices regarding listening to music, video recording and terrain park use while skiing and snowboarding. Background: Skiing and snowboarding are popular activities among Canadian youth and these sports have evolved to include certain risk behaviours such as listening to music, using terrain parks, and video recording yourself or others. The objective of this study was to determine the prevalence of these risk behaviours and identify factors that are associated with the risk behaviours.
Methods: Using focus group methodology, a questionnaire was developed to capture aspects of the Theory of Planned Behaviour. A cross-sectional study was conducted where the questionnaire was administered to youth aged 13-18 during two winter seasons at two ski hills in Manitoba, Canada.
Results: The sample comprised 735 youth (mean age 14.9; 82.1% male, 83.6% snowboarding). The most common behaviour was using the terrain park (83.1%), followed by listening to music that day (36.9%) and video recording that day (34.5%). Youth had significantly higher odds of listening to music that day if they planned to do so next time (OR 19.13; 95% CI: 10.62, 34.44), were skiing or snowboarding alone (OR 2.33; 95% CI: 1.10, 4.95), or thought listening to music makes skiing or snowboarding more exciting or fun or makes them feel more confident (OR 2.30; 95% CI: 1.31, 4.05). They were less likely to do so if they believed that music made it more difficult to hear or talk to others (OR: 0.35; 95% CI: 0.18, 0.65). Youth had significantly higher odds of using the terrain park if they believed that terrain parks were cool, challenging, or fun (OR: 5.84; 95% CI: 2.85, 11.96) or if their siblings used terrain parks (OR: 4.94; 95% CI: 2.84, 9.85). Those who believed that terrain parks were too busy or crowded (OR: 0.31; 95% CI: 0.16, 0.62) were less likely to use them. Youth had significantly higher odds of video recording that day if they planned to video record next time (OR: 8.09, 95% CI: 4.67, 14.01), if they were skiing or snowboarding with friends (OR: 3.65, 95% CI: 1.45, 9.18), or if they agreed that recording makes them try harder and improves their tricks (OR: 3.34, 95% CI: 1.38, 8.08) compared to those who neither agreed nor disagreed. Youth were less likely to record themselves that day if their friends did not do so (OR: 0.36; 95% CI: 0.16, 0.80).
Conclusion: Common predictors of engaging in risk behaviours suggest that injury prevention programs may not have to be specific to each behaviour. Some strategies for injury prevention are suggested.
abstract_id: PUBMED:34350742
Engineering Control Technologies to Protect Operators in Agricultural All-Terrain Vehicle Rollovers. Highlights: Rollovers are the leading cause of injury and fatality in farm all-terrain vehicle (ATV) incidents. Engineering technologies to prevent rollovers or protect the operator in ATV crashes were reviewed in this study. The advances in safety for ATVs are correlated with improvements in stability, handling, and crashworthiness. Operator protection devices and crash notification systems can protect the operator in ATV rollover incidents. Abstract: All-terrain vehicles (ATVs) are the second most common source of injury, following tractors, in U.S. agriculture. Rollovers are the leading cause of death in farm ATV incidents, constituting about 85% of ATV-related deaths. There is neither a significant practical solution for ATV rollover crashes in the U.S. nor standards and rules for implementing such a solution. Behavior-based control methods have been used for several decades but have reached their limit of success. Hence, engineering controls are needed to significantly decrease the severity of injuries in ATV rollover incidents (as in tractor incidents). In this study, engineering technologies to protect the operator in agricultural ATV crashes were reviewed. The discussion includes improving crash testing and stability ratings, evaluating static stability of ATVs, dynamic handling tests of ATVs, using automatic systems to notify first responders of a crash, and testing and applying operator protection devices. The available standards, rules, and recommendations related to these technologies around the world are also discussed.
Answer: Over a 21-year period, all-terrain vehicle (ATV)-related injuries have increased, which correlates with the expiration of government mandates (PUBMED:18367134). The study comparing injuries between 1985-1999 and 2000-2005 found a decrease in closed-head injury, spinal cord injury, and soft-tissue injury, but an increase in long-bone fractures. Despite ATVs remaining dangerous, injury prevention strategies such as helmet laws may be having a positive impact. However, the number of pediatric ATV-related fatalities and emergency department visits remains high, with more than 100 deaths and over 30,000 visits annually, and the potential cost for deaths and injuries approaching $1 billion (PUBMED:30431365). Major risk factors include lack of training, operating adult-size ATVs, riding as or carrying passengers, riding on the road, and not wearing a helmet. Extremity injuries are common, and leading causes of death include brain injuries and multi-organ trauma, often involving being crushed by or pinned under the ATV. Reducing ATV-related deaths and injuries will require multiple strategies that integrate education, engineering, and evidence-based safety laws and their enforcement (PUBMED:30431365). Despite industry warnings and public education emphasizing the importance of safety behaviors and the risks of significant injury to children and youth, ATV-related injuries and fatalities continue to occur. Restricting ridership by young operators, especially those younger than 16 years of age, is critical to reducing the burden of ATV-related trauma in children and youth (PUBMED:24179426). Therefore, while there have been changes in the types of injuries and some impact from safety measures, the overall risk and incidence of ATV-related injuries remain a significant concern. |
Instruction: Is it possible to reduce AIDS deaths without reinforcing socioeconomic inequalities in health?
Abstracts:
abstract_id: PUBMED:15737970
Is it possible to reduce AIDS deaths without reinforcing socioeconomic inequalities in health? Background: The wide use of highly active antiretroviral therapy has led to an impressive improvement in AIDS survival since the mid-1990s in cities and countries with high access to these medications. Notwithstanding its beneficial overall effect, antiretroviral therapy has also been reported to increase socioeconomic inequalities in health, because AIDS patients have unequal access and adherence to these medications.
Methods: We assessed trends in AIDS mortality in districts of Sao Paulo, Brazil, from 1995 to 2002, in order to test their association with area-level socioeconomic indices in a city with large-scale, cost-free distribution of highly active antiretroviral therapy. We gathered information on yearly death rates due to AIDS, adjusted for gender, age group, income, education, living standards, and the human development index. Trend estimation used the autoregression procedure of exact maximum-likelihood estimation for time-series analysis. Regression analysis was used to study the association between the annual percentage decrease in AIDS deaths and socioeconomic indices.
Results: AIDS mortality decreased in Sao Paulo from 32.1 deaths (per 100 000 inhabitants) in 1995 to 11.2 deaths (per 100 000 inhabitants) in 2002. District-level figures of social development did not show an association with the annual percentage decrease in AIDS mortality, with all correlation coefficients corresponding to P-values >0.27.
Conclusions: This observation indicates that public policies addressed to the entire population can contribute to reducing inequalities in health while attaining an overall reduction in AIDS deaths, and that this may have been feasible in the Brazilian context.
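A rough sketch of the two-step analysis pattern described in the methods (a maximum-likelihood autoregressive trend fit followed by a district-level regression) is shown below. Apart from the two published endpoint rates, all numbers are invented, and the model specification is an assumption rather than the authors' exact procedure:

```python
# Hedged sketch: fit a maximum-likelihood AR(1) model with trend to a yearly
# mortality series, then regress district-level decline on a socioeconomic
# index. Only the 1995 and 2002 rates come from the abstract; the rest of
# the series and all district values are synthetic.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

deaths = np.array([32.1, 28.0, 24.5, 21.0, 18.2, 15.5, 13.0, 11.2])  # per 100k

ar1 = SARIMAX(deaths, order=(1, 0, 0), trend="ct").fit(disp=False)
print(ar1.params)  # constant, trend slope, AR(1) coefficient, variance

# District-level association: annual % decrease vs. a development index.
rng = np.random.default_rng(2)
dev_index = rng.uniform(0.4, 0.9, size=30)        # hypothetical districts
pct_decrease = rng.normal(12, 3, size=30)         # hypothetical declines
ols = sm.OLS(pct_decrease, sm.add_constant(dev_index)).fit()
print(f"slope p-value: {ols.pvalues[1]:.2f}")     # no association expected here
```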
abstract_id: PUBMED:37254141
Socioeconomic and geographical inequalities in health care coverage in Mozambique: a repeated cross-sectional study of the 2015 and 2018 national surveys. Background: Over the past years, Mozambique has implemented several initiatives to ensure equitable coverage to health care services. While there have been some achievements in health care coverage at the population level, the effects of these initiatives on social inequalities have not been analysed.
Objective: The present study aimed to assess changes in socioeconomic and geographical inequalities (education, wealth, region, place of residence) in health care coverage between 2015 and 2018 in Mozambique.
Methods: The study was based on repeated cross-sectional surveys from nationally representative samples: the Survey of Indicators on Immunisation, Malaria and HIV/AIDS in Mozambique (IMASIDA) 2015 and the 2018 Malaria Indicator survey. Data from women of reproductive age (15 to 49 years) were analysed to evaluate health care coverage of three indicators: insecticide-treated net use, fever treatment of children, and use of Fansidar malaria prophylaxis for pregnant women. Absolute risk differences and the slope index of inequality (SII) were calculated for both the 2015 and the 2018 survey periods. An interaction term between the socioeconomic and geographical variables and the period was included to assess inequality changes between 2015 and 2018.
Results: Between 2015 and 2018, non-use of insecticide-treated nets dropped, as did the proportion of women whose children were not treated for fever and the prevalence of women who did not take the full Fansidar dose during pregnancy. Significant reductions in the inequality related to insecticide-treated net use were observed for all socioeconomic variables. Concerning fever treatment, some reductions in socioeconomic inequalities were observed, though not statistically significant. For malaria prophylaxis, the SII was significant for education, wealth, and residence in both periods, but no significant inequality reductions were observed in any of these variables over time.
Conclusions: We observed significant reductions of socioeconomic inequalities in insecticide-treated net use, but not in fever treatment of children and Fansidar prophylaxis for pregnant women. Decision-makers should target underserved populations, specifically the non-educated, poor, and rural women, to address inequalities in health care coverage.
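The slope index of inequality used in this study is conventionally computed by regressing group-level coverage on the midpoint of each group's cumulative population rank, weighted by group size. A minimal sketch with hypothetical wealth quintiles (the coverage figures are invented, not the survey's):

```python
# Minimal slope-index-of-inequality (SII) sketch: weighted regression of
# coverage on cumulative rank midpoints across ordered wealth quintiles.
import numpy as np
import statsmodels.api as sm

pop_share = np.array([0.2, 0.2, 0.2, 0.2, 0.2])      # quintile population shares
coverage = np.array([0.45, 0.52, 0.60, 0.68, 0.80])   # e.g., ITN use (hypothetical)

cum = np.cumsum(pop_share)
ridit = cum - pop_share / 2                            # rank midpoints in [0, 1]
wls = sm.WLS(coverage, sm.add_constant(ridit), weights=pop_share).fit()
print(f"SII = {wls.params[1]:.3f}")  # predicted coverage gap, richest vs. poorest
```

The SII is read as the absolute difference in coverage between the hypothetical top and bottom of the socioeconomic distribution.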
abstract_id: PUBMED:38468229
Socioeconomic inequalities in avoidable mortality in Italy: results from a nationwide longitudinal cohort. Background: Disparities in avoidable mortality have never been evaluated in Italy at the national level. The present study aimed to assess the association between socioeconomic status and avoidable mortality.
Methods: The nationwide closed cohort of the 2011 Census of Population and Housing was followed up for 2012-2019 mortality. Preventable and treatable mortality were evaluated separately among people aged 30-74. Education level (elementary school or less, middle school, high school diploma, university degree or more) and macro area of residence (North-West, North-East, Center, South-Islands) were the exposures, for which mortality rate ratios (MRRs) were calculated through multivariate quasi-Poisson regression models adjusted for age at death. The relative index of inequality was estimated for preventable, treatable, and non-avoidable mortality and for some specific causes.
Results: The cohort consisted of 35,708,459 residents (48.8% men, 17.5% aged 65-74), 34% with a high school diploma, 33.5% living in the South-Islands; 1,127,760 deaths were observed, of which 65.2% for avoidable causes (40.4% preventable and 24.9% treatable). Inverse trends between education level and mortality were observed for all causes; comparing the least with the most educated groups, a strong association was observed for preventable (males MRR = 2.39; females MRR = 1.65) and for treatable causes of death (males MRR = 1.93; females MRR = 1.45). The greatest inequalities were observed for HIV/AIDS and alcohol-related diseases (both sexes), drug-related diseases and tuberculosis (males), and diabetes mellitus, cardiovascular diseases, and renal failure (females). Excess risk of preventable and of treatable mortality were observed for the South-Islands.
Conclusions: Socioeconomic inequalities in mortality persist in Italy, with an extremely varied response to policies at the regional level, representing a possible missed gain in health and suggesting a reassessment of priorities and definition of health targets.
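The quasi-Poisson mortality rate ratios reported above can be sketched as a Poisson GLM with a person-years offset and a dispersion scale estimated from the Pearson chi-square. The data frame below is entirely synthetic and the covariate set is deliberately simplified relative to the study:

```python
# Hedged quasi-Poisson sketch: deaths modelled with a log person-years
# offset, education as exposure, adjusted for age. All values are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "deaths": rng.poisson(50, 8),
    "pyears": rng.uniform(1e4, 5e4, 8),
    "edu": pd.Categorical(["low", "mid", "high", "degree"] * 2),
    "age": [45, 55] * 4,
})
m = smf.glm("deaths ~ C(edu) + age", data=df,
            family=sm.families.Poisson(),
            offset=np.log(df["pyears"])).fit(scale="X2")  # quasi-Poisson dispersion
print(np.exp(m.params))  # exponentiated coefficients are the MRRs
```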
abstract_id: PUBMED:25879739
Trends in socioeconomic inequalities in preventable mortality in urban areas of 33 Spanish cities, 1996-2007 (MEDEA project). Background: Preventable mortality is a good indicator of possible problems to be investigated in the primary prevention chain, making it also a useful tool with which to evaluate health policies particularly public health policies. This study describes inequalities in preventable avoidable mortality in relation to socioeconomic status in small urban areas of thirty three Spanish cities, and analyses their evolution over the course of the periods 1996-2001 and 2002-2007.
Methods: We analysed census tracts, taking into account all deaths occurring from 1996 to 2007 in the population residing in these cities. The causes included in the study were lung cancer, cirrhosis, AIDS/HIV, motor vehicle traffic injuries, suicide, and homicide. The census tracts were classified into three groups according to their socioeconomic level. To analyse inequalities in mortality risks between the highest and lowest socioeconomic levels and over different periods, for each city and separately by sex, Poisson regression was used.
Results: Preventable mortality made a significant contribution to general mortality (around 7.5%, higher among men), having decreased over time in men (12.7% in 1996-2001 and 10.9% in 2002-2007), though not so clearly among women (3.3% in 1996-2001 and 2.9% in 2002-2007). In men, the risks of death were higher in areas of greater deprivation, and these excesses did not change over time. The pattern in women was different: in many cities, differences in mortality risks by socioeconomic level could not be established.
Conclusions: Preventable mortality decreased between the 1996-2001 and 2002-2007 periods, more markedly in men than in women. There were socioeconomic inequalities in mortality in most cities analysed, associating a higher risk of death with higher levels of deprivation. Inequalities have remained over the two periods analysed. This study makes it possible to identify those areas where excess preventable mortality was associated with more deprived zones. It is in these deprived zones where actions to reduce and monitor health inequalities should be put into place. Primary healthcare may play an important role in this process.
abstract_id: PUBMED:31912356
Social inequalities in health-related quality of life among people aging with HIV/AIDS: the role of comorbidities and disease severity. Purpose: While socioeconomic inequalities in health-related quality of life are well documented in the scientific literature, research has neglected to look into the reasons for these inequalities. The purpose of this study is to determine in what way social inequalities in health-related quality of life among patients with the same chronic disease could be explained by variations in disease severity.
Methods: We used the data of 748 people aging with HIV in Germany who took part in the nationwide study 50plushiv and provided self-report data on socioeconomic status, health-related quality of life (SF-12) and various markers of disease severity (comorbidity, falls, late presentation and AIDS diagnosis). Regression analyses were applied to determine the impact of SES on HRQOL after adjusting for disease severity variables.
Results: The mental and physical subscales of the SF-12, comorbidity burden and falls were significantly related to SES. SES explained 7% of the variance in PCS scores and 3% of the variance in MCS scores after adjusting for age and time since diagnosis. Markers of disease severity explained 33% of the variance in PCS scores and 14% of the variance in MCS scores. After adjusting for disease severity SES was still significantly related to PCS and MCS scores.
Conclusions: The diverse sample of people aging with HIV showed social inequalities regarding HRQOL and most of the disease severity markers. SES was significantly related to mental and physical HRQOL after adjusting for disease severity. Possible explanations for this phenomenon are discussed.
abstract_id: PUBMED:10822475
Can we monitor socioeconomic inequalities in health? A survey of U.S. health departments' data collection and reporting practices. Objective: To evaluate the potential for and obstacles to routine monitoring of socioeconomic inequalities in health using U.S. vital statistics and disease registry data, the authors surveyed current data collection and reporting practices for specific socioeconomic variables.
Methods: In 1996 the authors mailed a self-administered survey to all of the 55 health department vital statistics offices reporting data to the National Center for Health Statistics (NCHS) to determine what kinds of socioeconomic data they collected on birth and death certificates and in cancer, AIDS, and tuberculosis (TB) registries and what kinds of socioeconomic data were routinely reported in health department publications.
Results: Health departments routinely obtained data on occupation on death certificates and in most cancer registries. They collected data on educational level for both birth and death certificates. None of the databases collected information on income, and few obtained data on employment status, health insurance carrier, or receipt of public assistance. When socioeconomic data were collected, they were usually not included in published reports (except for mothers' educational level in birth certificate data). Obstacles to collecting and reporting socioeconomic data included lack of resources and concerns about the confidentiality and accuracy of data. All databases, however, included residential addresses, suggesting records could be geocoded and linked to Census-based socioeconomic data.
Conclusions: U.S. state and Federal vital statistics and disease registries should routinely collect and publish socioeconomic data to improve efforts to monitor trends in and reduce social inequalities in health.
abstract_id: PUBMED:34501987
Socioeconomic Inequalities in Human Immunodeficiency Virus (HIV) Sero-Prevalence among Women in Namibia: Further Analysis of Population-Based Data. Socioeconomic inequality is a major factor to consider in the prevention of human immunodeficiency virus (HIV) transmission. The aim of this study was to investigate socioeconomic inequalities in HIV prevalence among Namibian women. Data from a population-based household survey with a multistage-stratified sample of 6501 women were used to examine the link between socioeconomic inequalities and HIV prevalence. The weighted HIV prevalence was 13.2% (95% CI: 12.1-14.3%). The HIV prevalence among the poorest, poorer, middle, richer, and richest households was 21.4%, 19.7%, 16.3%, 11.0%, and 3.7%, respectively. Similarly, 21.2%, 21.7%, 11.8%, and 2.1% HIV prevalence was estimated among women with no formal education and primary, secondary, and higher education, respectively. HIV infection was concentrated among women from poor households (Conc. Index = -0.258; SE = 0.017) and among those with no formal education (Conc. Index = -0.199; SE = 0.015). In light of these findings, HIV prevention strategies must be tailored to the specific drivers of transmission in low socioeconomic groups, with special attention paid to the vulnerabilities faced by women and the dynamic and contextual nature of the relationship between socioeconomic status and HIV infection.
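The concentration index cited in this abstract is conventionally defined as twice the covariance between the health variable and the fractional socioeconomic rank, divided by the mean of the health variable; negative values indicate concentration among the poor. A minimal sketch on synthetic data:

```python
# Minimal concentration-index sketch for a binary health outcome (HIV
# status) ranked by wealth. Data are synthetic, generated so that poorer
# women have higher risk, mirroring the direction reported in the abstract.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
wealth = rng.normal(size=n)
hiv = rng.binomial(1, p=1 / (1 + np.exp(1.5 + 0.8 * wealth)))  # poorer -> higher risk

rank = (np.argsort(np.argsort(wealth)) + 0.5) / n   # fractional rank in (0, 1)
conc_index = 2 * np.cov(hiv, rank, bias=True)[0, 1] / hiv.mean()
print(f"concentration index = {conc_index:.3f}")     # expected < 0 here
```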
abstract_id: PUBMED:35983351
Socioeconomic inequalities and family planning utilization among female adolescents in urban slums in Nigeria. Background/statement Of Problem: Family planning (FP) utilization is important for preventing unwanted pregnancy and achieving optimal reproductive health. However, the modern contraceptive prevalence rate (mCPR) among women of childbearing age is still low in many low- and middle-income countries (LMIC), particularly in Nigeria, despite interventions to increase access and utilization. The low mCPR has been associated with a high prevalence of unwanted pregnancy, unsafe abortion, sexually transmitted infections such as HIV/AIDS, and high maternal and infant mortality in LMIC. Despite existing studies associating high family planning utilization to urban settings relative to the rural areas, the socioeconomic inequality in urban settings, especially among adolescents in urban slums has been given less research attention. This study examines the role of socioeconomic inequality on family planning utilization among female adolescents of various ethnic backgrounds in urban slums in Nigeria.
Methods: The study utilized data from the Adolescent Childbearing Survey (2019). A total sample of 2,035 female adolescents of ages 14-19 years who were not pregnant at the time of the study and were resident in selected slums. Associations between socioeconomic inequalities-measured by wealth index, social status, and education-and modern contraceptive use were examined using relative and slope inequality indices, and logistic regression models.
Results: The results show that only 15% of the female adolescents in the North, and 19% in the South, reported modern contraceptive use. While wealth index and education were important predictors of FP use among adolescents in southern urban slums, only education was important in the North. The relative and slope inequality indices further indicate that adolescents with no education and those in the lowest social status group used far fewer contraceptives than their counterparts with higher wealth and social status; utilization was concentrated among those with secondary/higher education and those in the highest social status group (Education: RII = 1.86, p < 0.05, 95% CI: 1.02-2.71; Social Status: RII = 1.97, p < 0.05, 95% CI: 1.26-2.68), with a more marked level of disparity when disaggregated by North and South.
Conclusion: The persistent socioeconomic inequalities among female adolescents in Nigeria, especially those in the urban slums, have continued to limit their utilization of family planning. Policy measures in education, communication, and subsidized contraceptives should be intensified for vulnerable female adolescents in the slums.
abstract_id: PUBMED:10605248
Socioeconomic inequalities in health care efficacy. Three examples in the Lazio region. There has been a limited number of studies in Italy investigating the relationship between socioeconomic inequalities and the efficacy of treatments. This paper reviews three case studies on the association between socioeconomic status and disease outcome in the Lazio Region, Italy. The studies investigated: a) the probability of renal transplantation among patients with end-stage renal disease; b) the survival of AIDS patients before and after the introduction of the new antiretroviral therapies; c) the 30-day mortality of patients who underwent coronary artery bypass graft surgery. The three studies demonstrate the complex relationship between socioeconomic status and both access to, and efficacy of, important health interventions.
abstract_id: PUBMED:33212953
The Association between Material Deprivation and Avoidable Mortality in Lisbon, Portugal. There is considerable evidence pointing to the existence of a socioeconomic gradient in mortality, which tends to be steeper in urban areas. Similar to other European cities, Lisbon is far from homogeneous since considerable geographical inequalities exist between the more advantaged and the more deprived neighborhoods. The main goals of this study are to describe the geographical pattern of premature deaths (before 65 years old), avoidable deaths (preventable and amenable to healthcare) and cause-specific mortality (HIV/AIDS and suicide) in Lisbon, at the lower administrative level (civil parish, in Portuguese: Freguesia), and analyze the statistical association between mortality risk and deprivation, before (1999-2003) and during the economic crisis (2008-2012). Smoothed Standardized Mortality Ratios (sSMR) and Relative Risk (RR) with 95% credible intervals were calculated to identify the association between mortality and deprivation. The analysis of the geographical distribution of cause-specific mortality reveals that civil parishes with high sSMR in the first period continued to present higher mortality rates in the second. Moreover, a significant statistical association was found between all the causes of death and deprivation, except suicide. These findings contribute to understanding how social conditions influence health outcomes and can offer insights about potential policy directions for local government.
Answer: Yes, it is possible to reduce AIDS deaths without reinforcing socioeconomic inequalities in health. The study conducted in Sao Paulo, Brazil, from 1995 to 2002, showed that AIDS mortality decreased significantly in a context where highly active antiretroviral therapy was distributed on a large scale and cost-free. Importantly, the reduction in AIDS mortality was not associated with district-level socioeconomic indices, indicating that public policies addressing the entire population can contribute to reducing health inequalities while also achieving an overall reduction in AIDS deaths (PUBMED:15737970).
This finding is supported by the observation that socioeconomic inequalities in health persist in various contexts, but targeted public health policies and interventions can mitigate these inequalities. For instance, in Italy, socioeconomic inequalities in avoidable mortality were observed, with a varied response to policies at the regional level, suggesting the need for a reassessment of priorities and definition of health targets to address these disparities (PUBMED:38468229). Similarly, in the context of Lisbon, Portugal, a significant statistical association was found between all causes of death, including HIV/AIDS, and deprivation, indicating that social conditions influence health outcomes and that policy interventions can potentially address these issues (PUBMED:33212953).
Moreover, studies from other regions, such as Mozambique and Spain, have shown that while socioeconomic and geographical inequalities in health care coverage exist, initiatives aimed at increasing equitable coverage can lead to reductions in these inequalities, at least for some health indicators (PUBMED:37254141; PUBMED:25879739).
In conclusion, while socioeconomic inequalities in health are a complex and persistent issue, evidence suggests that it is feasible to reduce AIDS deaths without exacerbating these inequalities through comprehensive and inclusive public health policies, as well as targeted interventions that address the needs of underserved populations. |
Instruction: Are infant behavioural feeding difficulties associated with congenital heart disease?
Abstracts:
abstract_id: PUBMED:11136341
Are infant behavioural feeding difficulties associated with congenital heart disease? Aim: To compare the feeding patterns and difficulties of infants with congenital heart disease (CHD) and healthy controls. Information was gathered via parental questionnaires.
Methods: A matched case controlled study of 64 infants with CHD compared with 64 healthy controls.
Results: The main findings were: (1) Feeding patterns: mothers of infants with CHD used bottle-feeding as a first method of feeding their babies more often (CHD = 20%, controls = 2%); (2) Specific feeding difficulties: (a) infants with CHD were significantly more breathless when feeding (CHD = 16%, controls = 0%), (b) had more vomiting at mealtimes (CHD = 23%, controls = 11%), but (c) had significantly less spitting (CHD = 19%, controls = 41%); and (3) infants with CHD showed significantly reduced growth.
Conclusions: The feeding difficulties are related to the organic condition and not to specific difficulties in mother-infant interaction. Professional support may be required for mothers of infants with CHD to maintain feeding routines and to deal with the difficulties that arise.
abstract_id: PUBMED:11723968
Feeding the infant with congenital heart disease: an occupational performance challenge. This review article uses the Canadian Model of Occupational Performance (CMOP) as a theoretical framework to organize a discussion of the complexities of infant feeding when the infant has congenital heart disease (CHD). Literature from many fields indicates that feeding supports the physical, cognitive, and affective development of infants within their various environmental contexts. Many infants with CHD, who are now surviving in increasing numbers, experience feeding difficulties that affect their growth and development and that challenge their caregivers. The feeding experiences of infants with CHD illustrate the clinical applicability of the CMOP and the need for further research. Research using the framework of the CMOP will enable the development and implementation of evidence-based interventions that support the occupation of feeding from both the infant and the caregiver perspective.
abstract_id: PUBMED:33028141
Transdiagnostic feasibility trial of internet-based parenting intervention to reduce child behavioural difficulties associated with congenital and neonatal neurodevelopmental risk: introducing I-InTERACT-North. Objective: We examined feasibility and acceptability of an adapted telepsychological parent-child intervention to improve parenting skills and reduce emotional and behavioural difficulties in Canadian families of children at-risk for poor neurodevelopment given congenital or neonatal conditions. Preliminary program efficacy outcomes are also described.
Methods: Twenty-two families of children between the ages of 3-8 years with histories of neonatal stroke, hypoxic ischemic encephalopathy (HIE) and serious congenital and neonatal conditions [(congenital heart disease (CHD) or prematurity)] consented to participate in an adapted telepsychological parenting skills training program (I-InTERACT-North). The program helps parents develop positive parenting skills to improve parenting competence and child behaviour through 7 online psychoeducational modules completed independently and 7 videoconference sessions with a therapist. Videoconference sessions include live coaching to support application of skills. Feasibility (i.e., number of participants eligible, consented, refused), adherence (i.e., completion time, retention rates), acceptability (i.e., website experience questionnaire, therapist and parent semi-structured interviews), and preliminary efficacy (i.e., observational coding of parenting skill, self-reported parent competence, parent-reported child behaviour) data were collected.
Results: Nineteen of the 22 families (86%) enrolled completed the program in an average of 10 weeks (range: 6-17 weeks). Parents and therapists reported high overall satisfaction with the program (100%), including acceptability of both the online modules (95%) and the videoconference sessions (95%). Parenting confidence (d = 0.45), parenting skill (d = 0.64), and child behaviour (d = 0.50) significantly improved over the course of the intervention.
Conclusions: Findings provide preliminary evidence for the feasibility, acceptability, and efficacy of I-InTERACT-North for parents of children with neonatal brain injury.
abstract_id: PUBMED:33303052
Disruptions in the development of feeding for infants with congenital heart disease. Congenital heart disease (CHD) is the most common birth defect for infants born in the United States, with approximately 36,000 affected infants born annually. While mortality rates for children with CHD have significantly declined, there is a growing population of individuals with CHD living into adulthood prompting the need to optimise long-term development and quality of life. For infants with CHD, pre- and post-surgery, there is an increased risk of developmental challenges and feeding difficulties. Feeding challenges carry profound implications for the quality of life for individuals with CHD and their families as they impact short- and long-term neurodevelopment related to growth and nutrition, sensory regulation, and social-emotional bonding with parents and other caregivers. Oral feeding challenges in children with CHD are often the result of medical complications, delayed transition to oral feeding, reduced stamina, oral feeding refusal, developmental delay, and consequences of the overwhelming intensive care unit (ICU) environment. This article aims to characterise the disruptions in feeding development for infants with CHD and describe neurodevelopmental factors that may contribute to short- and long-term oral feeding difficulties.
abstract_id: PUBMED:20816559
Content validation of the infant malnutrition and feeding checklist for congenital heart disease: a tool to identify risk of malnutrition and feeding difficulties in infants with congenital heart disease. Infants with congenital heart disease (CHD) have a high prevalence of feeding difficulties and malnutrition. Early intervention decreases morbidity and long-term developmental deficits. The purpose of this study was to develop and establish the content validity of a screening checklist to identify infants with CHD at risk of feeding difficulties or inadequate nutritional intake for timely referral to a feeding specialist or dietitian. The Delphi method was used, and expert participants reached consensus on 24 risk indicators. This study is the first step in establishing the validity and reliability of a screening tool for early intervention of feeding difficulties and inadequate nutritional intake in infants with CHD.
abstract_id: PUBMED:1569531
Parent-infant interaction during feeding when the infant has congenital heart disease. This article examines parent-infant interaction (PII) during feeding when the infant has congenital heart disease (CHD) using the Nursing Child Assessment Feeding Scale (NCAFS) and compares the NCAFS scores of the infants with CHD with those of healthy controls. Twenty mother-infant dyads, 10 with CHD and 10 controls, were studied. Infants with CHD scored significantly lower than controls on both infant subscales, Responsiveness to Caregiver and Clarity of Cues, of the NCAFS. Mothers of CHD infants scored significantly lower on the Social Emotional Growth Fostering subscale. These findings suggest specific behavioral differences in infants with CHD during feeding and support the need for more information about feeding interactions in infants with CHD.
abstract_id: PUBMED:35433875
The Associations Between Preoperative Anthropometry and Postoperative Outcomes in Infants Undergoing Congenital Heart Surgery. Aim: We explored the association between preoperative anthropometry and biochemistry, and postoperative outcomes in infants with CHD after cardiac surgery, as infants with congenital heart disease (CHD) often have feeding difficulties and malnutrition.
Methodology: This was a retrospective review of infants (≤ 1-year-old) who underwent congenital heart surgery. Preoperative anthropometry, in terms of weight-for-age z-score (WAZ) and length-for-age z-score (LAZ), as well as preoperative serum albumin and hemoglobin concentrations, was evaluated against 6-month mortality and morbidity outcomes, including postoperative complications, vasoactive inotrope score, duration of mechanical ventilation, and length of stay in the pediatric intensive care unit and in hospital, using logistic regression or median regression models accounting for infant-level clustering.
Results: One hundred and ninety-nine operations were performed in 167 infants. Mean gestational age at birth was 38.0 (SD 2.2) weeks (range 26 to 41 weeks). Thirty (18.0%) infants were born preterm (<37 weeks). The commonest acyanotic and cyanotic lesions were ventricular septal defect (26.3%, 44/167) and tetralogy of Fallot (13.8%, 23/167), respectively. Mean age at cardiac surgery was 94 (SD 95) days. Feeding difficulties, including increased work of breathing during feeding, diaphoresis, choking or coughing during feeding, and inability to complete feeds, were present in 54.3% (108/199) of infants prior to surgery, of whom 21.6% (43/199) required tube feeding. The mean preoperative WAZ was -1.31 (SD 1.79). Logistic regression models showed that low preoperative WAZ was associated with increased risk of postoperative complications (odds ratio 1.82; p = 0.02), and 6-month mortality (odds ratio 2.38; p = 0.008) following CHD surgery. There was no meaningful association between the other preoperative variables and other outcomes.
Conclusion: More than 50% of infants with CHD undergoing cardiac surgery within the first year of life have feeding difficulties, of whom 22% require tube feeding. Low preoperative WAZ is associated with increased postoperative complications and 6-month mortality.
abstract_id: PUBMED:36880736
Validation of the instrument "Infant Malnutrition and Feeding Checklist for Congenital Heart Disease", a tool to identify risk of malnutrition and feeding difficulties in infants with congenital heart disease. Introduction: currently, various tools have been designed to detect the risk of malnutrition in hospitalized children in a timely manner. For those with a diagnosis of congenital heart disease (CHD), there is only one tool, developed in Canada: the Infant Malnutrition and Feeding Checklist for Congenital Heart Disease (IMFC:CHD), which was designed in English. Objective: to evaluate the validity and reliability of the Spanish adaptation of the IMFC:CHD tool in infants with CHD. Methods: cross-sectional validation study carried out in two stages: the first involved translation and cross-cultural adaptation of the tool, and the second the validation of the newly translated tool, in which evidence of reliability and validity was obtained. Results: in the first stage, the tool was translated and adapted to the Spanish language; in the second stage, 24 infants diagnosed with CHD were included. Concurrent criterion validity between the screening tool and the anthropometric evaluation showed substantial agreement (κ = 0.660, 95% CI: 0.36-0.95), and predictive criterion validity, assessed against days of hospital stay, showed moderate agreement (κ = 0.489, 95% CI: 0.1-0.8). The reliability of the tool was evaluated through external consistency: inter-observer agreement was substantial (κ = 0.789, 95% CI: 0.5-0.9), and the reproducibility of the tool showed almost perfect agreement (κ = 1, 95% CI: 0.9-1.0). Conclusions: the IMFC:CHD tool showed adequate validity and reliability and can be considered a useful resource for the identification of severe malnutrition.
abstract_id: PUBMED:35651405
Feeding Difficulties Following Vascular Ring Repair: A Contemporary Narrative Review. Vascular rings are congenital abnormalities of the aortic arch vascular system that compress the trachea and esophagus. A review of long-term outcomes suggests that chronic feeding difficulties can persist following surgical repair of vascular rings. Previous reports of postoperative vascular ring division outcomes indicate that chronic esophageal symptoms may persist following repair, though most available data focuses on persistent respiratory symptoms. It is therefore the aim of this article to summarize and organize recent evidence reporting the frequency, presentation, and management of feeding difficulties following vascular ring repair in pediatric patients. Pathophysiologic mechanisms for postoperative esophageal symptoms may include residual compression from an unresected diverticulum of Kommerell or delayed repair leading to chronic esophageal dysmotility despite correction of esophageal compression. Guidance on the management of feeding difficulties following vascular ring repair is limited. The authors describe success in one case with nasogastric tube feeding and interdisciplinary evaluation. Consensus regarding the management of feeding difficulty following vascular ring repair is needed.
abstract_id: PUBMED:29148256
Toward standardization of care: The feeding readiness assessment after congenital cardiac surgery. Background: Feeding practices after neonatal and congenital heart surgery are complicated and variable, which may be associated with prolonged hospital length of stay (LOS). Systematic assessment of feeding skills after cardiac surgery may allow earlier identification of patients likely to have protracted feeding difficulties, which may promote standardization of care.
Methods: Neonates and infants ≤3 months old admitted for their first cardiac surgery were retrospectively identified during a 1-year period at a single center. A systematic feeding readiness assessment (FRA) was utilized to score infant feeding skills. FRA scores were assigned immediately prior to surgery and 1, 2, and 3 weeks after surgery. FRA scores were analyzed individually and in combination as predictors of gastrostomy tube (GT) placement prior to hospital discharge by logistic regression.
Results: Eighty-six patients met inclusion criteria and 69 patients had complete data to be included in the final model. The mean age at admission was 5 days and 51% were male. Forty-six percent had single ventricle physiology. Twenty-nine (42%) underwent GT placement. The model containing both immediate presurgical and 1-week postoperative FRA scores was of highest utility in predicting discharge with GT (intercept odds = 10.9, P = .0002; sensitivity 69%, specificity 93%, AUC 0.913). The false positive rate was 7.5%.
Conclusions: In this analysis, systematic and standardized measurements of feeding readiness employed immediately before and one week after congenital cardiac surgery predicted need for GT placement prior to hospital discharge. The FRA score may be used to risk stratify patients based on likelihood of prolonged feeding difficulties, which may further improve standardization of care.
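The model described above is a logistic regression on two FRA scores, judged by its AUC, sensitivity, and specificity. The sketch below reproduces that workflow on synthetic data; the score distributions, coefficients, and outcome mechanism are invented for illustration, since the abstract does not publish the FRA scoring or the fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 69  # cohort size matches the analysis above; the data do not
pre_fra = rng.normal(5.0, 2.0, n)              # hypothetical presurgical FRA scores
week1_fra = pre_fra + rng.normal(1.0, 1.5, n)  # hypothetical 1-week postoperative scores
# Synthetic outcome: lower feeding-readiness scores -> higher odds of gastrostomy (GT)
true_logit = 3.5 - 0.4 * pre_fra - 0.3 * week1_fra
gt_placed = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

X = np.column_stack([pre_fra, week1_fra])
model = LogisticRegression().fit(X, gt_placed)
risk = model.predict_proba(X)[:, 1]
print(f"in-sample AUC = {roc_auc_score(gt_placed, risk):.3f}")
```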
Answer: Yes, infant behavioral feeding difficulties are associated with congenital heart disease (CHD). Studies have found that infants with CHD experience specific feeding difficulties that are related to their organic condition rather than issues with mother-infant interaction. These difficulties include increased breathlessness during feeding, more vomiting at mealtimes, and reduced growth compared to healthy controls. Additionally, infants with CHD were found to use bottle-feeding as a first method more often than controls (PUBMED:11136341).
Feeding challenges in infants with CHD can affect their growth and development and pose challenges for their caregivers. These difficulties are often due to medical complications, delayed transition to oral feeding, reduced stamina, oral feeding refusal, developmental delay, and the impact of the intensive care unit environment (PUBMED:33303052).
Moreover, more than 50% of infants with CHD undergoing cardiac surgery within the first year of life have feeding difficulties, with approximately 22% requiring tube feeding. Low preoperative weight-for-age z-score (WAZ) in these infants is associated with increased postoperative complications and 6-month mortality (PUBMED:35433875).
Tools such as the Infant Malnutrition and Feeding Checklist for Congenital Heart Disease have been developed to identify infants with CHD at risk of feeding difficulties or inadequate nutritional intake for timely referral to a feeding specialist or dietitian (PUBMED:20816559, PUBMED:36880736).
In summary, feeding difficulties are indeed associated with congenital heart disease in infants, and these challenges have significant implications for their overall health and development. |
Instruction: Are trauma and dissociation related to treatment resistance in patients with obsessive-compulsive disorder?
Abstracts:
abstract_id: PUBMED:24213522
Are trauma and dissociation related to treatment resistance in patients with obsessive-compulsive disorder? Objective: Previous research has indicated a relation among obsessive-compulsive disorder (OCD), childhood traumatic experiences, and higher levels of dissociation, the latter of which appears to relate to negative treatment outcome in OCD. The aim of the present study is to investigate whether childhood trauma and dissociation are related to severity of OCD in adulthood. We also intend to examine the association between treatment resistance, dissociation, and each form of trauma.
Methods: Participants included 120 individuals diagnosed with OCD; 58 (48.3%) of them met the criteria for treatment-resistant OCD (resistant group), whereas the other 62 (51.7%) were labeled as the responder group. The intensity of obsessions and compulsions was evaluated using the Yale-Brown Obsessive Compulsive Scale (Y-BOCS). All patients were assessed with the Traumatic Experiences Checklist, Dissociative Experiences Scale, Beck Depression Inventory, and Beck Anxiety Inventory.
Results: Controlling for clinical variables, the resistant group had significantly higher general OCD severity, anxiety, depression, trauma, and dissociation scores than the responders. Correlation analyses indicated that Y-BOCS scores were significantly related to severity of dissociation, anxiety, depression, and traumatic experiences. In a logistic regression analysis with treatment resistance as the dependent variable, high dissociation levels, long duration of illness, and poor insight emerged as relevant predictors, whereas gender and levels of anxiety, depression, and traumatic experiences did not.
Conclusions: Our results suggest that dissociation may be a predictor of poorer treatment outcome in patients with OCD; therefore, a better understanding of the mechanisms that underlie this phenomenon may be useful. Future longitudinal studies are warranted to verify whether this variable represents a predictive factor of treatment non-response.
abstract_id: PUBMED:24908543
Psychiatric comorbidity differences between women with history of childhood sexual abuse who are methadone-maintained former opiate addicts and non-addicts. Following our finding of high rates of obsessive compulsive disorder (OCD) among methadone-maintained (MMT) former opiate addict women with a history of childhood sexual abuse, we compared 68 sexually abused MMT women to 48 women from a Sexual Abuse Treatment Center (SATC) without a history of opiate addiction, for clinical OCD (Yale-Brown Obsessive Compulsive Scale), dissociation (Dissociative Experiences Scale, DES), complex post-traumatic stress disorder (PTSD) (Structured Interview for Disorders of Extreme Stress - Not Otherwise Specified), sexual PTSD (the Clinician-Administered PTSD Scale), and trauma events history (Life Event Inventory). MMT patients were treated for longer periods and were older and less educated. Clinical OCD was more prevalent among the MMT patients (66.2% vs. 30.4%, respectively), while complex PTSD and high dissociation scores (DES≥30) were more prevalent among the non-addicts (46.9% vs. 19.1%, and 57.1% vs. 11.8%, respectively). The high rate of OCD among sexually abused MMT women was not found in sexually abused women who are non-addicts. As dissociation was rare among the MMT group, it may be that opioids (either as street drugs or as MMT) serve as an external coping mechanism when access to the internal one is not possible. Future studies of OCD and dissociation before entry to MMT are needed.
abstract_id: PUBMED:25133142
Dissociative symptoms and dissociative disorders comorbidity in obsessive compulsive disorder: Symptom screening, diagnostic tools and reflections on treatment. Borderline personality disorder, conversion disorder and obsessive compulsive disorder frequently have dissociative symptoms. The literature has demonstrated that the level of dissociation may be correlated with the severity of obsessive compulsive disorder (OCD) and that patients not responding to treatment have high dissociative symptoms. The Structured Clinical Interview for DSM-IV Dissociative Disorders, the Dissociation Questionnaire, the Somatoform Dissociation Questionnaire, and the Dissociative Experiences Scale can be used for screening dissociative symptoms and detecting dissociative disorders in patients with OCD. A history of neglect and abuse during childhood is a risk factor in the pathogenesis of dissociative psychopathology in adults; the Childhood Trauma Questionnaire-53 and Childhood Trauma Questionnaire-40 can be used to assess it. Clinicians should not fail to notice the hidden dissociative symptoms and childhood traumatic experiences in OCD cases with severe symptoms that are resistant to treatment. Symptom screening and diagnostic tools used for this purpose should be known. Knowing how to treat these pathologies in patients who are diagnosed with OCD can be crucial.
abstract_id: PUBMED:35692991
Trauma-Related Dissociation and the Dissociative Disorders:: Neglected Symptoms with Severe Public Health Consequences. Trauma-related dissociation is a major public health risk warranting the attention of the healthcare professions. Severe dissociative pathology or dissociative disorders (DDs) are more prevalent than some commonly assessed psychiatric disorders (e.g., Bipolar Disorder, Obsessive Compulsive Disorder, Schizophrenia), yet are often under-recognized and undertreated, despite being associated with significant disability and chronic medical issues, among many other severe and costly public health consequences. In fact, people living with DDs spend an average of 5 to 12.4 years actively engaged in treatment before receiving an accurate diagnosis. Detection and treatment of trauma-related dissociation and DDs leads to a myriad of positive outcomes including improved quality of life, treatment outcomes, reduction in health and social risks, decreased healthcare utilization and costs (25-64% reduction), and significant economic advantages for society. It is imperative that healthcare professionals are trained in recognizing, assessing, and treating dissociation in service of preventing the discussed public health consequences. This article provides a comprehensive review of the important public health implications resulting from often neglected or untreated trauma-related dissociation and DDs while offering a summary of assessment methods, treatments, and resources to empower individuals and healthcare professionals to effect change.
abstract_id: PUBMED:24171326
Childhood trauma and dissociation in patients with obsessive compulsive disorder. Background And Objective: The present study attempted to assess childhood trauma events and dissociative symptoms in patients with obsessive compulsive disorder (OCD).
Method: The study included all patients who were admitted for the first time to the psychiatric outpatient unit over a 24-month period. Seventy-eight patients were diagnosed as having OCD during the two-year study period. Childhood traumatic events were assessed with a Childhood Trauma Questionnaire (CTQ). Obsessive compulsive disorder symptoms were assessed with the Yale-Brown Obsessive Compulsive Scale (Y-BOCS). A Dissociation Questionnaire (DIS-Q) was also used to measure dissociative symptoms.
Results: The mean Y-BOCS score was 23.37 +/- 7.27. Dissociation Questionnaire scores ranged from 0.40 to 3.87, with a mean of 2.23 +/- 0.76. Childhood trauma scores ranged from 1.27 to 4.77, with a mean of 2.38 +/- 0.56. There was no statistically significant relationship between Y-BOCS scores and childhood trauma scores (p > 0.05). There was a statistically significant positive relationship between Y-BOCS scores and DIS-Q scores. There was no statistically significant relationship between DIS-Q scores and childhood trauma scores (p > 0.05).
Conclusion: Childhood Trauma Questionnaire scores might be clinically significant, although no statistically significant correlation was found in our study. We also conclude that dissociative symptoms among patients with OCD should alert clinicians and be taken into account in treatment of the disorder.
abstract_id: PUBMED:35121678
Clinical Presentation and Treatment Trajectory of Gender Minority Patients With Obsessive-Compulsive Disorder. Gender minorities experience unique minority stressors that increase risk for psychiatric disorders. Notably, gender minorities are four and six times more likely than their cisgender female and male peers, respectively, to be treated for or diagnosed with obsessive-compulsive disorder (OCD). Despite higher rates of OCD, more psychiatric comorbidities, and minority stressors, little is known about the clinical presentation and treatment outcomes of gender minorities with OCD. Using a sample of 974 patients in specialty treatment programs for OCD, the current study found that gender minorities reported more severe contamination symptoms and greater incidence of comorbid substance use/addiction, trauma/stressor-related, personality, and other/miscellaneous disorders compared to cisgender male and female patients. Despite significantly longer lengths of stay, gender minorities reported less symptom improvement across treatment compared to cisgender male and female patients. Findings underscore the need for continued research to improve the effectiveness and individualization of treatment for gender minorities with OCD.
abstract_id: PUBMED:15569899
Childhood trauma, dissociation, and psychiatric comorbidity in patients with conversion disorder. Objective: The aim of this study was to evaluate dissociative disorder and overall psychiatric comorbidity in patients with conversion disorder.
Method: Thirty-eight consecutive patients previously diagnosed with conversion disorder were evaluated in two follow-up interviews. The Structured Clinical Interview for DSM-III-R, the Dissociation Questionnaire, the Somatoform Dissociation Questionnaire, and the Childhood Trauma Questionnaire were administered during the first follow-up interview. The Structured Clinical Interview for DSM-IV Dissociative Disorders was conducted in a separate evaluation.
Results: At least one psychiatric diagnosis was found in 89.5% of the patients during the follow-up evaluation. Undifferentiated somatoform disorder, generalized anxiety disorder, dysthymic disorder, simple phobia, obsessive-compulsive disorder, major depression, and dissociative disorder not otherwise specified were the most prevalent psychiatric disorders. A dissociative disorder was seen in 47.4% of the patients. These patients had dysthymic disorder, major depression, somatization disorder, and borderline personality disorder more frequently than the remaining subjects. They also reported childhood emotional and sexual abuse, physical neglect, self-mutilative behavior, and suicide attempts more frequently.
Conclusions: Comorbid dissociative disorder should alert clinicians for a more chronic and severe psychopathology among patients with conversion disorder.
abstract_id: PUBMED:35487501
Sleep quality in persons with mental disorders: Changes during inpatient treatment across 10 diagnostic groups. Sleep disturbances have been documented across a range of mental disorders, particularly depression. However, studies that have examined sleep quality in large samples of different diagnostic groups and that report how sleep quality changes during inpatient treatment have been scarce. This retrospective, observational study examined changes in sleep quality during inpatient treatment at a psychosomatic hospital in Germany from admission to discharge as a function of 10 diagnostic groups. Data of 11,226 inpatients were analysed who completed the Pittsburgh Sleep Quality Index as part of the routine diagnostic assessment at admission and discharge. All diagnostic groups showed impaired sleep quality (Pittsburgh Sleep Quality Index score > 5). Patients with trauma-related disorders had the lowest sleep quality and patients with obsessive-compulsive disorder had the highest sleep quality. While sleep quality significantly improved in each diagnostic group, changes differed in size, with patients with trauma-related disorders showing the smallest improvement and patients with eating disorders showing the largest improvement. The current study documents impaired sleep quality in inpatients with mental disorders and shows that sleep problems are a transdiagnostic feature in this population. Results also resonate with earlier suggestions that sleep disturbances represent a key feature of trauma-related disorders in particular and the need for trauma-specific sleep interventions. Although sleep quality significantly improved during disorder-specific inpatient treatment in all diagnostic groups, average scores were still clinically elevated at discharge. Thus, a future avenue would be to examine whether adding sleep-specific treatment elements fosters both short- and long-term success in the treatment of mental disorders.
abstract_id: PUBMED:21937875
Assessment of dissociation symptoms in patients with mental disorders by the Dissociation Questionnaire (DIS-Q). Aim: Dissociative symptoms are often found in psychiatric patients and have been implicated in psychological trauma. We aimed to explore dissociative tendencies in psychiatric patients including dissociative disorders (DDs), obsessive-compulsive disorder (OCD), eating disorder (ED), and post-traumatic stress disorder (PTSD) by using the Dissociation Questionnaire Japanese version (DIS-Q-J).
Methods: We evaluated the reliability and the validity of DIS-Q-J by comparing it with the Dissociative Experience Scale (DES). 107 patients (32 DDs, 28 OCDs, 24 PTSDs, 23 EDs) and 83 controls answered both the DIS-Q-J and the DES questionnaires. In addition, OCD patients were assessed by the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS), PTSD patients were assessed by the Impact of Event Scale-Revised (IES-R), and ED patients were assessed by the Bulimic Investigatory Test, Edinburgh (BITE).
Results: The internal consistency of the total DIS-Q-J and DES scales was high in all groups (Cronbach's alpha coefficients: DIS-Q-J, 0.922-0.975; DES, 0.934-0.957; p<0.01). The correlation between the total scores of the DIS-Q-J and the DES in all groups was significant (Spearman's rank correlation, 0.613-0.777; p<0.01). An analysis of variance (ANOVA) showed that the mean total scores of the control and clinical groups were significantly different (p<0.05) for both the DIS-Q-J and DES.
Conclusion: These results suggest that the DIS-Q-J is a useful tool for the assessment of dissociative symptoms.
abstract_id: PUBMED:36257080
A meta-analysis of mentalizing in anxiety disorders, obsessive-compulsive and related disorders, and trauma and stressor related disorders. Background: The number of studies that have researched the ability to mentalize in individuals with anxiety and related disorders is limited. Often, no distinction is made between different anxiety and related disorders in the examination of mentalization.
Objective: The goal of this study was to obtain insight into mentalization in anxiety and related disorders, and to compare this ability between these disorders.
Method: A systematic literature search was performed to identify studies in which performance on a mentalization task was compared between a patient group diagnosed with an anxiety or a related disorder, and a control group. Meta-analyses were performed on the included articles.
Results: The initial search yielded 2844 articles, of which 26 studies on 1056 patients were included. Patients diagnosed with anxiety and related disorders showed a deficit in mentalization when compared to healthy controls (SMD = -0.60, p < 0.001). A deficit was found in all patient groups: patients with anxiety disorders (SMD = -0.39, p = 0.007), obsessive-compulsive and related disorders (SMD = -0.78, p = 0.01), and trauma and stressor related disorders (SMD = -0.77, p = 0.02) all showed significant deficits compared to healthy controls.
Conclusion: The results indicated impaired mentalization in anxiety and related disorders, with specific deficiencies in posttraumatic stress disorder, social anxiety, and obsessive-compulsive disorders. Mentalization could provide a clinical target in treatment of these disorders.
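The pooled effect sizes in this meta-analysis are standardized mean differences: group mean differences divided by a pooled standard deviation. A minimal sketch for a single hypothetical study is shown below; the task scores are invented, and negative values mean patients scored below controls.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using a pooled standard deviation (Cohen's d)."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented mentalization-task summary statistics: patient group vs healthy controls
d = cohens_d(m1=22.1, s1=4.0, n1=40, m2=24.8, s2=3.6, n2=40)
print(f"SMD = {d:.2f}")  # -0.71: patients score below controls, as in the pooled results
```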
Answer: Yes, trauma and dissociation are related to treatment resistance in patients with obsessive-compulsive disorder (OCD). Research has indicated that there is a significant relationship between the severity of dissociation and the severity of OCD symptoms, with higher dissociation levels being predictive of poorer treatment outcomes in patients with OCD (PUBMED:24213522). Additionally, dissociative symptoms and comorbidity with dissociative disorders are frequently observed in patients with OCD, particularly in those who do not respond to treatment (PUBMED:25133142). Furthermore, a study found that patients with OCD who had dissociative symptoms were more likely to have experienced childhood trauma, although the study did not find a statistically significant correlation between childhood trauma and OCD severity (PUBMED:24171326).
Moreover, a comparison between methadone-maintained former opiate addict women with a history of childhood sexual abuse and non-addicts revealed that clinical OCD was more prevalent among the methadone-maintained patients, while complex-PTSD and high dissociation scores were more prevalent among the non-addicts. This suggests that opioids may serve as an external coping mechanism when internal mechanisms, such as dissociation, are not accessible (PUBMED:24908543).
In the context of gender minorities, unique minority stressors increase the risk for psychiatric disorders, including OCD. Gender minorities with OCD reported more severe contamination symptoms and greater incidence of comorbid substance use/addiction, trauma/stressor-related, personality, and other/miscellaneous disorders compared to cisgender patients. Despite longer treatment durations, gender minorities reported less symptom improvement, highlighting the need for improved treatment approaches for this population (PUBMED:35121678).
Overall, the evidence suggests that trauma and dissociation are indeed related to treatment resistance in patients with OCD, and clinicians should be aware of these factors when treating patients with OCD to improve treatment outcomes. |
Instruction: Can Adding Laboratory Values Improve Risk-Adjustment Mortality Models Using Clinical Percutaneous Cardiac Intervention Registry Data?
Abstracts:
abstract_id: PUBMED:26136285
Can Adding Laboratory Values Improve Risk-Adjustment Mortality Models Using Clinical Percutaneous Cardiac Intervention Registry Data? Background: Registry data for percutaneous coronary intervention (PCI) are being used in New York and Massachusetts and by the American College of Cardiology to risk-adjust provider mortality rates. These registries contain very few numerical laboratory data for risk adjustment.
Methods: For 20 hospitals, New York's PCI registry data from 2008-2010 were used to develop statistical models for predicting in-hospital/30-day mortality with and without appended laboratory data. Discrimination, calibration, correlation of hospitals' risk-adjusted mortality rates, and differences in hospital quality outlier status were compared for the two models.
Results: The discrimination of the risk-adjustment models was very similar (C-statistic = 0.898 for the registry model vs C-statistic = 0.908 for the registry/laboratory model; P=.40). Most of the non-laboratory variables in the two models were identical, except that the registry model contained malignant ventricular arrhythmia and the registry/laboratory model contained previous coronary artery bypass surgery. The registry/laboratory model also contained albumin ≤3.3 g/dL, creatine kinase ≥600 U/L, glucose ≥270 mg/dL, platelet count >350 k/μL, potassium >5.1 mmol/L, and partial thromboplastin time >40 seconds. The addition of laboratory data did not affect outlier status for better-performing hospitals, but there were differences in identifying the hospitals with significantly higher risk-adjusted mortality rates.
Conclusions: Adding laboratory data did not significantly improve the risk-adjustment mortality models' performance and did not dramatically change the quality assessment of hospitals. The pros and cons of adding key laboratory variables to PCI registries require further evaluation.
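The comparison above hinges on the C-statistic, i.e., the area under the ROC curve of each model's predicted mortality risk. The sketch below mimics that comparison on synthetic data; the predictor stand-ins, effect sizes, and event rate are assumptions for illustration, not the New York registry's actual variables or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
registry = rng.normal(size=(n, 3))   # stand-ins for registry risk factors
labs = rng.normal(size=(n, 2))       # stand-ins for laboratory values
# Synthetic mortality: a rare event driven by both variable groups
logit = -4.0 + registry @ np.array([0.8, 0.5, 0.3]) + labs @ np.array([0.4, 0.3])
died = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

m_registry = LogisticRegression().fit(registry, died)
both = np.hstack([registry, labs])
m_full = LogisticRegression().fit(both, died)
c_registry = roc_auc_score(died, m_registry.predict_proba(registry)[:, 1])
c_full = roc_auc_score(died, m_full.predict_proba(both)[:, 1])
print(f"C (registry) = {c_registry:.3f}, C (registry + labs) = {c_full:.3f}")
```

With the lab effects set small relative to the registry effects, the two in-sample C-statistics land close together, which is the pattern the study reports.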
abstract_id: PUBMED:26917809
Adding Laboratory Data to Hospital Claims Data to Improve Risk Adjustment of Inpatient/30-Day Postdischarge Outcomes. Numerical laboratory data at admission have been proposed for enhancement of inpatient predictive modeling from administrative claims. In this study, predictive models for inpatient/30-day postdischarge mortality and for risk-adjusted prolonged length of stay, as a surrogate for severe inpatient complications of care, were designed with administrative data only and with administrative data plus numerical laboratory variables. A comparison of resulting inpatient models for acute myocardial infarction, congestive heart failure, coronary artery bypass grafting, and percutaneous cardiac interventions demonstrated improved discrimination and calibration with administrative data plus laboratory values compared to administrative data only for both mortality and prolonged length of stay. Improved goodness of fit was most apparent in acute myocardial infarction and percutaneous cardiac intervention. The emergence of electronic medical records should make the addition of laboratory variables to administrative data an efficient and practical method to clinically enhance predictive modeling of inpatient outcomes of care.
abstract_id: PUBMED:25497074
The value of adding laboratory data to coronary artery bypass grafting registry data to improve models for risk-adjusting provider mortality rates. Background: Clinical databases are currently being used for calculating provider risk-adjusted mortality rates for coronary artery bypass grafting (CABG) in a few states and by the Society for Thoracic Surgeons. These databases contain very few laboratory data for purposes of risk adjustment.
Methods: For 15 hospitals, New York's CABG registry data from 2008 to 2010 were linked to laboratory data to develop statistical models comparing risk-adjusted mortality rates with and without supplementary laboratory data. Differences between these two models in discrimination, calibration, and outlier status were compared, and correlations in hospital risk-adjusted mortality rates were examined.
Results: The discrimination of the statistical models was very similar (c = 0.785 for the registry model and 0.797 for the registry/laboratory model, p = 0.63). The correlation between hospital risk-adjusted mortality rates by use of the two models was 0.90. The registry/laboratory model contained three additional laboratory variables: alkaline phosphatase (ALKP), aspartate aminotransferase (AST), and prothrombin time (PT). The registry model yielded one hospital with significantly higher mortality than the statewide average, and the registry/laboratory model yielded no outliers.
Conclusions: The clinical models with and without laboratory data had similar discrimination. Hospital risk-adjusted mortality rates were essentially unchanged, and hospital outlier status was identical. However, three laboratory variables, ALKP, AST, and PT, were significant independent predictors of mortality, and they deserve consideration of addition to CABG clinical databases.
abstract_id: PUBMED:17200477
Enhancement of claims data to improve risk adjustment of hospital mortality. Context: Comparisons of risk-adjusted hospital performance often are important components of public reports, pay-for-performance programs, and quality improvement initiatives. Risk-adjustment equations used in these analyses must contain sufficient clinical detail to ensure accurate measurements of hospital quality.
Objective: To assess the effect on risk-adjusted hospital mortality rates of adding present on admission codes and numerical laboratory data to administrative claims data.
Design, Setting, And Patients: Comparison of risk-adjustment equations for inpatient mortality from July 2000 through June 2003 derived by sequentially adding increasingly difficult-to-obtain clinical data to an administrative database of 188 Pennsylvania hospitals. Patients were hospitalized for acute myocardial infarction, congestive heart failure, cerebrovascular accident, gastrointestinal tract hemorrhage, or pneumonia or underwent an abdominal aortic aneurysm repair, coronary artery bypass graft surgery, or craniotomy.
Main Outcome Measures: C statistics as a measure of the discriminatory power of alternative risk-adjustment models (administrative, present on admission, laboratory, and clinical for each of the 5 conditions and 3 procedures).
Results: The mean (SD) c statistic for the administrative model was 0.79 (0.02). Adding present on admission codes and numerical laboratory data collected at the time of admission resulted in substantially improved risk-adjustment equations (mean [SD] c statistic of 0.84 [0.01] and 0.86 [0.01], respectively). Modest additional improvements were obtained by adding more complex and expensive to collect clinical data such as vital signs, blood culture results, key clinical findings, and composite scores abstracted from patients' medical records (mean [SD] c statistic of 0.88 [0.01]).
Conclusions: This study supports the value of adding present on admission codes and numerical laboratory values to administrative databases. Secondary abstraction of difficult-to-obtain key clinical findings adds little to the predictive power of risk-adjustment equations.
abstract_id: PUBMED:23968699
Enhanced mortality risk prediction with a focus on high-risk percutaneous coronary intervention: results from 1,208,137 procedures in the NCDR (National Cardiovascular Data Registry). Objectives: This study sought to update and validate a contemporary model for inpatient mortality following percutaneous coronary intervention (PCI), including variables indicating high clinical risk.
Background: Recently, new variables were added to the CathPCI Registry data collection form. This modification allowed the risk of death to be better characterized using variables such as recent cardiac arrest and duration of cardiogenic shock.
Methods: Data from 1,208,137 PCI procedures performed between July 2009 and June 2011 at 1,252 CathPCI Registry sites were used to develop both a "full" and pre-catheterization PCI in-hospital mortality risk model using logistic regression. To support prospective implementation, a simplified bedside risk score was derived from the pre-catheterization risk model. Model performance was assessed by discrimination and calibration metrics in a separate split sample.
Results: In-hospital mortality was 1.4%, ranging from 0.2% among elective cases (45.1% of total cases) to 65.9% among patients with shock and recent cardiac arrest (0.2% of total cases). Cardiogenic shock and procedure urgency were the most predictive of inpatient mortality, whereas the presence of a chronic total occlusion, subacute stent thrombosis, and left main lesion location were significant angiographic predictors. The full, pre-catheterization, and bedside risk prediction models performed well in the overall validation sample (C-indexes 0.930, 0.928, 0.925, respectively) and among pre-specified patient subgroups. The model was well calibrated across the risk spectrum, although slightly overestimating risk in the highest risk patients.
Conclusions: Clinical acuity is a strong predictor of PCI procedural mortality. With inclusion of variables that further characterize clinical stability, the updated CathPCI Registry mortality models remain well-calibrated across the spectrum of PCI risk.
abstract_id: PUBMED:9928588
Laboratory values improve predictions of hospital mortality. Objective: To compare the precision of risk adjustment in the measurement of mortality rates using: (i) data in hospitals' electronic discharge abstracts, including data elements that distinguish between comorbidities and complications; (ii) these data plus laboratory values; and (iii) these data plus laboratory values and other clinical data abstracted from medical records.
Design: Retrospective cohort study.
Setting: Twenty-two acute care hospitals in St Louis, Missouri, USA.
Study Participants: Patients hospitalized in 1995 with acute myocardial infarction, congestive heart failure, or pneumonia (n = 5966).
Main Outcome Measures: Each patient's probability of death was calculated using: administrative data that designated all secondary diagnoses present on admission (administrative models); administrative data and laboratory values (laboratory models); and administrative data, laboratory values, and abstracted clinical information (clinical models). All data were abstracted from medical records.
Results: Administrative models (average area under receiver operating characteristic curve=0.834) did not predict death as well as did clinical models (average area under receiver operating characteristic curve=0.875). Adding laboratory values to administrative data improved predictions of death (average area under receiver operating characteristic curve=0.860). Adding laboratory data to administrative data improved its average correlation of patient-level predicted values with those of the clinical model from r=0.86 to r=0.95 and improved the average correlation of hospital-level predicted values with those of the clinical model from r=0.94 for the administrative model to r=0.98 for the laboratory model.
Conclusions: In the conditions studied, predictions of inpatient mortality improved noticeably when laboratory values (sometimes available electronically) were combined with administrative data that included only those secondary diagnoses present on admission (i.e. comorbidities). Additional clinical data contribute little more to predictive power.
abstract_id: PUBMED:36306945
Risk prediction models in patients undergoing percutaneous coronary intervention: A collaborative analysis from a Japanese administrative dataset and nationwide academic procedure registry. Background: Contemporary guidelines emphasize the importance of risk stratification in improving the quality of care for patients undergoing percutaneous coronary intervention (PCI). We aimed to investigate whether adding information from a procedure-based academic registry to administrative claims data would improve the performance of risk prediction models.
Methods: We combined two nationally representative administrative and clinical databases. The study cohort comprised 43,095 patients: 18,719 with acute coronary syndrome (ACS) and 23,525 with chronic coronary syndrome (CCS). Each population was randomly divided into a derivation cohort (80%), used to fit the logistic regression models, and a validation cohort (20%). The performances of the following models were compared using C-statistics: (1) models restricted to baseline claims data (model #1), (2) models using clinical registry data (model #2), and (3) models expanded to both claims and clinical registry data (model #3). The primary outcomes were in-hospital mortality and bleeding.
Results: The primary outcomes occurred in 3.7% (in-hospital mortality) and 5.0% (bleeding) of patients with ACS and in 0.21% and 0.95% of CCS patients. For each event, the model performance was 0.65 (95% confidence interval [CI], 0.60-0.69)/0.67 (0.63-0.71) in ACS and 0.52 (0.35-0.76)/0.62 (0.54-0.70) in CCS patients for model #1; 0.83 (0.80-0.87)/0.77 (0.74-0.81) in ACS and 0.76 (0.60-0.92)/0.67 (0.59-0.75) in CCS for model #2; and 0.83 (0.79-0.86)/0.78 (0.75-0.81) in ACS and 0.76 (0.61-0.92)/0.67 (0.58-0.74) in CCS for model #3.
Conclusions: Combining clinical information from the academic registry with claims databases improved its performance in predicting adverse events.
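The bracketed ranges above are 95% confidence intervals around each C-statistic. One common, assumption-light way to produce such intervals is percentile bootstrap over the validation cohort, sketched below on invented outcome/risk data; whether this study used the bootstrap or an analytic method is not stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y, risk, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the C-statistic (AUC)."""
    rng = np.random.default_rng(seed)
    y, risk = np.asarray(y), np.asarray(risk)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():
            continue  # resample drew only one outcome class; AUC undefined, skip
        aucs.append(roc_auc_score(y[idx], risk[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y, risk), (lo, hi)

# Invented validation data: 0/1 in-hospital death and model-predicted risks
rng = np.random.default_rng(3)
risk = rng.beta(1, 20, 500)               # skewed risks with a low mean, as in PCI cohorts
y = (rng.random(500) < risk).astype(int)  # outcomes consistent with those risks
auc, (lo, hi) = bootstrap_auc_ci(y, risk)
print(f"C-statistic = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```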
abstract_id: PUBMED:18812585
Modifying ICD-9-CM coding of secondary diagnoses to improve risk-adjustment of inpatient mortality rates. Objective: To assess the effect on risk-adjustment of inpatient mortality rates of progressively enhancing administrative claims data with clinical data that are increasingly expensive to obtain. Data Sources: Claims and abstracted clinical data on patients hospitalized for 5 medical conditions and 3 surgical procedures at 188 Pennsylvania hospitals from July 2000 through June 2003.
Methods: Risk-adjustment models for inpatient mortality were derived using claims data with secondary diagnoses limited to conditions unlikely to be hospital-acquired complications. Models were enhanced with one or more of 1) secondary diagnoses inferred from clinical data to have been present-on-admission (POA), 2) secondary diagnoses not coded on claims but documented in medical records as POA, 3) numerical laboratory results from the first hospital day, and 4) all available clinical data from the first hospital day. Alternative models were compared using c-statistics, the magnitude of errors in prediction for individual cases, and the percentage of hospitals with aggregate errors in prediction exceeding specified thresholds.
Results: More complete coding of a few under-reported secondary diagnoses and adding numerical laboratory results to claims data substantially improved predictions of inpatient mortality. Little improvement resulted from increasing the maximum number of available secondary diagnoses or adding additional clinical data.
Conclusions: Increasing the completeness and consistency of reporting a few secondary diagnosis codes for findings POA and merging claims data with numerical laboratory values improved risk adjustment of inpatient mortality rates. Expensive abstraction of additional clinical information from medical records resulted in little further improvement.
abstract_id: PUBMED:20545780
Development and validation of a disease-specific risk adjustment system using automated clinical data. Objective: To develop and validate a disease-specific automated inpatient mortality risk adjustment system primarily using computerized numerical laboratory data and supplementing them with administrative data, and to assess the value of additional manually abstracted data.
Methods: Using 1,271,663 discharges in 2000-2001, we derived 39 disease-specific automated clinical models with demographics, laboratory findings on admission, ICD-9 principal diagnosis subgroups, and secondary diagnosis-based chronic conditions. We then added manually abstracted clinical data to the automated clinical models (manual clinical models). We compared model discrimination, calibration, and relative contribution of each group of variables. We validated these 39 models using 1,178,561 discharges in 2004-2005.
Results: The overall mortality was 4.6 percent (n = 58,300) and 4.0 percent (n = 47,279) for the derivation and validation cohorts, respectively. Common mortality predictors included age, albumin, blood urea nitrogen or creatinine, arterial pH, white blood cell counts, glucose, sodium, hemoglobin, and metastatic cancer. The average c-statistic for the automated clinical models was 0.83. Adding manually abstracted variables increased the average c-statistic to 0.85, with better calibration. Laboratory results displayed the highest relative contribution in predicting mortality.
Conclusions: A small number of numerical laboratory results and administrative data provided excellent risk adjustment for inpatient mortality for a wide range of clinical conditions.
abstract_id: PUBMED:11923032
Development of a risk adjustment mortality model using the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR) experience: 1998-2000. Objectives: We sought to develop and evaluate a risk adjustment model for in-hospital mortality following percutaneous coronary intervention (PCI) procedures using data from a large, multi-center registry.
Background: The 1998-2000 American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR) dataset was used to overcome limitations of prior risk-adjustment analyses.
Methods: Data on 100,253 PCI procedures collected at the ACC-NCDR between January 1, 1998, and September 30, 2000, were analyzed. A training set/test set approach was used. Separate models were developed for presentation with and without acute myocardial infarction (MI) within 24 h.
Results: Factors associated with increased risk of PCI mortality (with odds ratios in parentheses) included cardiogenic shock (8.49), increasing age (2.61 to 11.25), salvage (13.38) urgent (1.78) or emergent PCI (5.75), pre-procedure intra-aortic balloon pump insertion (1.68), decreasing left ventricular ejection fraction (0.87 to 3.93), presentation with acute MI (1.31), diabetes (1.41), renal failure (3.04), chronic lung disease (1.33); treatment approaches including thrombolytic therapy (1.39) and non-stent devices (1.64); and lesion characteristics including left main (2.04), proximal left anterior descending disease (1.97) and Society for Cardiac Angiography and Interventions lesion classification (1.64 to 2.11). Overall, excellent discrimination was achieved (C-index = 0.89) and application of the model to high-risk patient groups demonstrated C-indexes exceeding 0.80. Patient factors were more predictive in the MI model, while lesion and procedural factors were more predictive in the analysis of non-MI patients.
Conclusions: A risk adjustment model for in-hospital mortality after PCI was successfully developed using a contemporary multi-center registry. This model is an important tool for valid comparison of in-hospital mortality after PCI.
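The parenthesized values in these results are odds ratios, obtained by exponentiating fitted logistic regression coefficients. The sketch below shows that transformation; the log-odds values are not the published model coefficients but were chosen so the output roughly matches the magnitudes quoted above.

```python
import math

# Hypothetical log-odds coefficients, chosen only so the printed odds ratios
# approximate the magnitudes reported in the results above:
log_odds = {
    "cardiogenic shock": 2.14,
    "salvage PCI": 2.59,
    "renal failure": 1.11,
    "diabetes": 0.34,
}
for risk_factor, beta in log_odds.items():
    print(f"{risk_factor}: OR = {math.exp(beta):.2f}")
# cardiogenic shock -> OR ~ 8.5, i.e., roughly 8.5-fold higher odds of in-hospital death
```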
Answer: The addition of laboratory values to clinical percutaneous cardiac intervention (PCI) registry data for risk adjustment mortality models has been explored in several studies, with varying results.
One study using New York's PCI registry data from 2008-2010 found that adding laboratory data did not significantly improve the discrimination of the risk-adjustment models (C-statistic = 0.898 for the registry model vs. C-statistic = 0.908 for the registry/laboratory model; P=.40) and did not dramatically change the quality assessment of hospitals. The study concluded that the pros and cons of adding key laboratory variables to PCI registries require further evaluation (PUBMED:26136285).
In contrast, another study demonstrated that adding numerical laboratory data at admission to administrative claims data improved discrimination and calibration for both mortality and prolonged length of stay in models for acute myocardial infarction, congestive heart failure, coronary artery bypass grafting, and PCI. This study suggested that the emergence of electronic medical records should make the addition of laboratory variables to administrative data an efficient and practical method to clinically enhance predictive modeling of inpatient outcomes (PUBMED:26917809).
Similarly, a study on coronary artery bypass grafting (CABG) registry data linked to laboratory data showed that while the discrimination of the models was very similar with or without laboratory data, three laboratory variables (alkaline phosphatase, aspartate aminotransferase, and prothrombin time) were significant independent predictors of mortality and deserve consideration for addition to CABG clinical databases (PUBMED:25497074).
Another study supported the value of adding present on admission codes and numerical laboratory values to administrative databases, finding that these additions resulted in substantially improved risk-adjustment equations for inpatient mortality (PUBMED:17200477).
In summary, while some studies indicate that adding laboratory values can improve the performance of risk adjustment mortality models in PCI data, the extent of the improvement and its practical implications for quality assessment vary. Further evaluation and consideration of the specific laboratory variables to be included are necessary to determine the overall benefit of such additions to clinical registries. |
Instruction: Can harvesting techniques modify postoperative results of the radial artery conduit?
Abstracts:
abstract_id: PUBMED:16320927
Can harvesting techniques modify postoperative results of the radial artery conduit? Background: Inappropriate harvesting of arterial conduits can lead to severe postoperative complications. We analyzed clinical and functional results of patients undergoing radial artery (RA) harvesting by means of three techniques.
Methods: From January 2001 to January 2004, 188 patients undergoing coronary artery bypass grafting with the RA were divided into three groups: a harmonic scalpel was employed in 61 (RA1), electrocautery in 63 (RA2), and Potts scissors and clips in 64 (RA3) patients. Harvesting time, local complications, number of clips employed, graft flowmetry, postoperative troponin I, and incidence of re-exploration for bleeding due to the graft were analyzed.
Results: RA1 and RA2 showed a lower harvesting time (RA1 16.2 +/- 8.4 vs RA3 41.4 +/- 7.7 min, p = 0.0001; RA2 21.1 +/- 10.4 min, p = 0.001). Postoperative hand paresthesia was detected in RA1 (5/61; 8.2%) and RA2 (5/63; 7.9%), but not in RA3 (p = 0.048 and p = 0.05, respectively). More clips were necessary in RA3 compared to RA2 (p = 0.04) or RA1 (p = 0.0001 vs RA3; p = 0.001 vs RA2). RA1 showed significantly higher values of maximum flow (RA1 59.4 +/- 37.5 vs RA2 22.1 +/- 7.7 ml/min, p = 0.0001; vs RA3 31.3 +/- 12.0 ml/min, p = 0.001), mean flow (RA1 23.4 +/- 17.3 vs RA2 10.2 +/- 5.7 ml/min, p = 0.001; vs RA3 11.6 +/- 8.9 ml/min, p = 0.001), minimum flow (RA1 11.6 +/- 6.5 vs RA2 4.2 +/- 3.7 ml/min, p = 0.01; vs RA3 4.7 +/- 3.3, p = 0.03), and pulsatility index (RA1 0.9 +/- 0.8 vs RA2 2.1 +/- 1.3, p = 0.03; vs RA3 1.7 +/- 2.1, p = 0.04). Troponin I was significantly lower in RA1 compared to RA2 and RA3 at 12 hours (p = 0.01 and p = 0.03, respectively) and 24 hours (p = 0.05 and p = 0.045, respectively). No RA1 patient underwent re-exploration for bleeding, compared to RA2 (p = 0.011) and RA3 (p = 0.02).
Conclusions: RA harvesting with the ultrasonic (harmonic) scalpel is fast, yields high flowmetry values and low enzyme release, and rarely causes local complications.
abstract_id: PUBMED:12958557
Evolving techniques for endoscopic radial artery harvesting. The role of radial artery as an arterial conduit for myocardial revascularisation is well established. Minimally invasive approaches for the harvesting of conduits are desirable for clinical and cosmetic reasons. We report our experience with two techniques of endoscopic radial artery harvesting. The techniques are illustrated and their relative advantages discussed.
abstract_id: PUBMED:34534423
The radial artery: Open harvesting technique. The radial artery is an important conduit in coronary artery surgical revascularization due to its robust long-term clinical outcomes. The use of the radial artery has become popularized in recent times. Therefore it is essential for junior surgeons to master harvest techniques that are safe, reliable, and easy to replicate.
abstract_id: PUBMED:29629553
Open radial artery harvesting. The radial artery is a versatile bypass conduit that is being used with increasing frequency for an arterial coronary bypass strategy due to its excellent long-term patency and survival benefits. Open radial artery harvesting allows for careful dissection of the radial artery with minimal risk for endothelial damage, which helps to prevent vasospasm. Our technique for open radial artery harvesting and preparation is presented here.
abstract_id: PUBMED:18595502
Use of the harmonic scalpel for harvesting radial artery as a conduit for myocardial revascularization Radial artery (RA) as a conduit for coronary artery bypass grafting (CABG) was introduced in 1973 by Carpantier and within two years its use was abondoned because of high incidence of narrowing and occlusion. The reason for early RA's graft failure became clear in the late 1980s and it was its propensity for vasospasm. In recent years in conjuction with availability of antispasm agents and less invasive harvesting techniques, the RA is increasingly used for CABG. The RA has become the second arterial graft of choice after the internal thoracic artery, mainly because of its promising patency rates. In order to avoid graft traumatization less invasive techniques have been introduced lately. The purpose of this paper was to asses the clinical effect of harvesting RA with the use of the Harmonic Scalpel. We examined the results of this technique among 140 patients operated in our Department in years 2005 and 2006.
abstract_id: PUBMED:11453125
The technical aspects of radial artery harvesting. We describe our technique for harvesting the radial artery for coronary revascularization. Anatomy and preoperative preparation are also presented, as well as the history of the radial artery as a bypass conduit, the advantages, and some contraindications. We have found that, with proper harvesting, the radial artery is an effective means of coronary artery revascularization.
abstract_id: PUBMED:12173836
Endoscopic radial artery harvesting: results of first 300 patients. Background: With the expanded use of the radial artery as a bypass conduit in patients undergoing coronary artery bypass grafting, an endoscopic radial artery harvesting method was used to improve esthetics and patient acceptance, and possibly, to decrease hand neurologic complications.
Methods: After informed consent and confirmation of adequate ulnar collateral blood flow, 300 consecutive patients undergoing coronary artery bypass grafting had their nondominant radial artery endoscopically removed through a small 3-cm incision just proximal to the radial styloid prominence. Standard endoscopic vein equipment (30-degree 5-mm endoscope, subcutaneous retractor, and vessel dissector) with ultrasonic harmonic coagulating shears was used. After isolation, the radial artery was clipped and transected proximally, 1 to 2 cm distal to the visualized ulnar artery origin, and distally at the inferior end of the wrist incision.
Results: The mean age was 62.2 years; 23% of the patients were women, 39% had diabetes mellitus, and 28% had peripheral vascular disease. All 300 endoscopically harvested radial arteries were grossly acceptable and used for grafting. Early in the series, 29 patients (9.7%) required a second 3-cm incision proximally for vascular control; only one wrist incision was required in the last 200 cases. The conduit length varied between 18 and 24 cm. Hospital complications, occurring early in the series, were two tunnel hematomas requiring drainage and one brachial artery clipping repaired primarily without sequela. At 30 days postoperative follow-up, 5 patients (1.6%) had been treated with oral antibiotics for incisional cellulitis and 26 patients (8.7%) had objective dorsal thenar sensory numbness. No ischemic hand complication, perioperative myocardial infarction, reintervention in the radial artery graft distribution, or numbness in the lateral forearm occurred. All patients expressed marked satisfaction with the small incision and cosmetic result.
Conclusions: In our initial experience, endoscopic radial artery harvesting can be performed safely, with minor, infrequent complications. A full-length radial artery conduit can be obtained with improved esthetics and patient satisfaction and acceptance. Late dorsal thenar paresthesias, although infrequent, continue to be a problem as with the open method.
abstract_id: PUBMED:16401988
Endoscopic radial artery harvesting: our initial experience and results of the first 25 patients. Background: The radial artery has become an increasingly popular arterial conduit for coronary artery bypass grafting (CABG). However, the traditional open harvesting technique requires a long incision, and is therefore associated with some wound complications and cosmetic problems. Here, we describe our experience of endoscopic radial artery harvesting (ERAH) through a small incision in 25 patients who underwent CABG.
Materials And Methods: Between February 2, 2004 and January 7, 2005, a total of 25 patients (4 females; mean age: 64+/-10 years) underwent ERAH using the VasoView System (Guidant Corporation, Indianapolis, IN) at our institution. All patients underwent a preoperative Allen test to assess the competence of the palmar arch. Twenty-four radial arteries were harvested from the nondominant arm and one from the dominant arm. The mean clinical follow-up was 8+/-2.9 months.
Results: All radial arteries were harvested through a 2-cm incision at the wrist, successfully removed with ERAH and successfully used as CABG conduits. The mean harvest time was 59+/-11 min, and the mean harvested length was 17+/-1.7 cm. No adjunctive procedures were required during vessel harvesting, and no conversions to the open technique were necessary. Harvesting complications included 2 cases of postoperative hematoma and 7 cases of superficial radial nerve paresthesia. Five postoperative angiographies were performed and all radial arteries were patent. Overall, 24/25 (96%) patients were satisfied with the procedure.
Conclusion: The ERAH technique was performed as safely as the traditional open technique and the harvested radial arteries were acceptable as CABG conduits. In particular, patient satisfaction with the procedure regarding the cosmetic results was excellent.
abstract_id: PUBMED:16910506
Endoscopic radial artery harvesting The radial artery is a popular arterial conduit for coronary artery bypass grafting (CABG). However, the traditional open harvesting technique requires a long incision and is therefore associated with some wound complications and cosmetic problems. To address these problems, we introduced endoscopic radial artery harvesting (ERAH) at our institution in February 2004, utilizing the VasoView system. The ERAH technique was performed as safely as the traditional open technique, and the harvested radial arteries were acceptable as CABG conduits. In particular, patients are satisfied with the excellent cosmetic results of the procedure. Herein, we focus on the surgical technique and complications of ERAH and ways to prevent them, describing our initial experience and short-term clinical follow-up of ERAH patients.
abstract_id: PUBMED:7646631
Technique for harvesting the radial artery as a coronary artery bypass graft. The radial artery was proposed and then abandoned as a coronary artery bypass graft in the 1970s. Development of new pharmacologic antispasmodic agents and minimally traumatic harvesting techniques has led to a revival of the use of the radial artery in coronary artery bypass procedures. Unlike the saphenous vein in the lower extremities, the radial artery in the volar forearm is not a subcutaneous structure. Safe harvest of the artery requires an understanding of volar forearm anatomy. Based on review of anatomy, cadaver dissection, and clinical experience with 40 patients, we have developed a technique for radial artery harvest. The volar forearm is divided into three zones: the proximal zone, the middle zone, and the distal zone. In each zone, important anatomic landmarks are identified. Our harvesting technique has resulted in minimal postoperative morbidity and no postoperative ischemic complications. Although the pedicled internal thoracic artery graft remains the primary arterial conduit for myocardial revascularization, the radial artery is an excellent additional bypass conduit.
Answer: Yes, harvesting techniques can modify postoperative results of the radial artery conduit. A study analyzing clinical and functional results of patients undergoing radial artery (RA) harvesting by means of three techniques—harmonic scalpel, electrocautery, and Potts-scissors and clips—found that the technique used for RA harvesting can influence various postoperative outcomes. The use of a harmonic scalpel (RA1) was associated with lower harvesting time, higher graft flowmetry values, lower enzyme release, and fewer local complications compared to the other two techniques. Additionally, RA1 patients did not require re-exploration for bleeding, which was necessary for some patients in the other groups (RA2 and RA3). Postoperative hand paresthesia was detected in the groups where harmonic scalpel and electrocautery were used, but not in the group where Potts-scissors and clips were employed (PUBMED:16320927).
Endoscopic radial artery harvesting (ERAH) techniques have also been reported to improve esthetics and patient satisfaction, with minor and infrequent complications. ERAH can be performed safely, providing a full-length radial artery conduit with improved cosmetic outcomes. However, late dorsal thenar paresthesias can still be an issue, similar to the open method (PUBMED:12173836). Another report on ERAH through a small incision indicated high patient satisfaction regarding cosmetic results, with the harvested radial arteries being acceptable as CABG conduits (PUBMED:16401988).
Open radial artery harvesting allows for careful dissection with minimal risk for endothelial damage, which helps to prevent vasospasm, an important consideration given the radial artery's propensity for vasospasm (PUBMED:29629553, PUBMED:18595502). The radial artery is an important conduit in coronary artery surgical revascularization, and mastering safe, reliable, and easy-to-replicate harvest techniques is essential for junior surgeons (PUBMED:34534423).
In summary, the choice of harvesting technique can have significant implications for the postoperative results of the radial artery conduit, affecting factors such as graft patency, local complications, patient satisfaction, and cosmetic outcomes. |
Instruction: On-going clinical trials for elderly patients with a hematological malignancy: are we addressing the right end points?
Abstracts:
abstract_id: PUBMED:24458474
On-going clinical trials for elderly patients with a hematological malignancy: are we addressing the right end points? Background: Cancer societies and research cooperative groups worldwide have urged for the development of cancer trials that will address those outcome measures that are most relevant to older patients. We set out to determine the characteristics and study objectives of current clinical trials in hematological patients.
Method: The United States National Institutes of Health clinical trial registry was searched on 1 July 2013, for currently recruiting phase I, II or III clinical trials in hematological malignancies. Trial characteristics and study objectives were extracted from the registry website.
Results: In the 1207 clinical trials included in this overview, patient-centered outcome measures such as quality of life, health care utilization and functional capacity were only incorporated in a small number of trials (8%, 4% and 0.7% of trials, respectively). Even in trials developed exclusively for older patients, the primary focus lies on standard end points such as toxicity, efficacy and survival, while patient-centered outcome measures are included in less than one-fifth of studies.
Conclusion: Currently on-going clinical trials in hematological malignancies are unlikely to significantly improve our knowledge of the optimal treatment of older patients as those outcome measures that are of primary importance to this patient population are still included in only a minority of studies. As a scientific community, we cannot continue to simply acknowledge this issue, but must all participate in taking the necessary steps to enable the delivery of evidence-based, tailor-made and patient-focused cancer care to our rapidly growing elderly patient population.
abstract_id: PUBMED:29700207
Representation of Minorities and Elderly Patients in Multiple Myeloma Clinical Trials. Multiple myeloma (MM) occurs in all races, but the incidence in non-Hispanic black patients (NHBs) is two to three times higher than in non-Hispanic white patients (NHWs). We determined the representation of minorities and elderly patients in MM clinical trials. Enrollment data from all therapeutic trials reported in ClinicalTrials.gov from 2000 to 2016 were analyzed. Enrollment fraction (EF) was defined as the number of trial enrollees divided by the 2014 MM prevalence. Participation in MM clinical trials varied significantly across racial and ethnic groups; NHWs were more likely to be enrolled in clinical trials (EF 0.18%) than NHBs (EF 0.06%, p < .0001) and Hispanic patients (EF 0.04%, p < .0001). The median age of trial participants was 62 years, with 7,956 participants (66%) being less than 65 years of age. Collaborations between investigators, sponsors, and the community are necessary to increase access to clinical trials to our minority and elderly patients.
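The enrollment fraction defined above is a simple ratio, and a worked example makes the reported disparity concrete. The following minimal Python sketch is illustrative only: the enrollee and prevalence counts are hypothetical placeholders chosen to reproduce the reported EFs of 0.18% and 0.06%, not the study's raw data.

```python
# Illustrative enrollment-fraction calculation: EF = enrollees / prevalence.
# The counts below are hypothetical placeholders chosen to reproduce the
# reported EFs of 0.18% and 0.06%; they are not the study's raw data.
def enrollment_fraction(enrollees: int, prevalence: int) -> float:
    """EF as a percentage of the prevalent population."""
    return 100.0 * enrollees / prevalence

groups = {
    "non-Hispanic white": {"enrollees": 9_000, "prevalence": 5_000_000},
    "non-Hispanic black": {"enrollees": 1_500, "prevalence": 2_500_000},
}
for name, g in groups.items():
    print(f"{name}: EF = {enrollment_fraction(g['enrollees'], g['prevalence']):.2f}%")
```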
abstract_id: PUBMED:25170014
Exclusion of older patients from ongoing clinical trials for hematological malignancies: an evaluation of the National Institutes of Health Clinical Trial Registry. Introduction: Cancer societies, research cooperatives, and countless publications have urged the development of clinical trials that facilitate the inclusion of older patients and those with comorbidities. We set out to determine the characteristics of currently recruiting clinical trials with hematological patients to assess their inclusion and exclusion of elderly patients.
Methods: The NIH clinical trial registry was searched on July 1, 2013, for currently recruiting phase I, II or III clinical trials with hematological malignancies. Trial characteristics and study objectives were extracted from the registry website.
Results: Although 5% of 1,207 included trials focused exclusively on elderly or unfit patients, 69% explicitly or implicitly excluded older patients. Exclusion based on age was seen in 27% of trials, exclusion based on performance status was seen in 16%, and exclusion based on stringent organ function restrictions was noted in 51%. One-third of the studies that excluded older patients based on age allowed inclusion of younger patients with poor performance status; 8% did not place any restrictions on organ function. Over time, there was a shift from exclusion based on age (p value for trend <.001) toward exclusion based on organ function (p = .2). Industry-sponsored studies were least likely to exclude older patients (p < .001).
Conclusion: Notably, 27% of currently recruiting clinical trials for hematological malignancies use age-based exclusion criteria. Although physiological reserves diminish with age, the heterogeneity of the elderly population does not legitimize exclusion based on chronological age alone. Investigators should critically review whether sufficient justification exists for every exclusion criterion before incorporating it in trial protocols.
abstract_id: PUBMED:35236569
Characteristics of clinical trials for haematological malignancies from 2015 to 2020: A systematic review. Background: As the landscape of haematological malignancies dramatically changes due to diagnostic and therapeutic advances, it is important to evaluate trends in clinical trial designs. The objective of our study was to describe the design of clinical trials for five common haematological malignancies with respect to randomisation and end-points. We also aimed to assess trends over time and examine the relationships of funding source and country of origin to proportions of randomisation and utilisation of clinical end-points.
Methods: This systematic review identified haematological malignancy clinical trials starting in 2015-2020 registered at ClinicalTrials.gov as of 20th February 2021. Trial-related variables including randomisation status, type of primary end-point, and both projected and actual enrolment numbers were captured. Clinical end-points were defined as overall survival and quality of life, while surrogate end-points included all other end-points.
Results: Of 2609 relevant trials included in this analysis, only one-fifth were randomised (538, 21%), with a significant decrease in the proportion of randomised clinical trials from 26% of trials in 2015 to 19% in 2020 (p < 0.00001). Between the years 2015 and 2020, the proportion of randomised trials for all haematological malignancies using primary surrogate end-points remained relatively consistent, ranging from 84% in 2015 to 78% in 2020 (p = 0.352). Overall, only 15% of trials utilised primary end-points of overall survival or quality of life in a randomised design.
Conclusions: This systematic review of haematological malignancy trials found that the majority of trials are non-randomised and that there has been an increase in the ratio of non-randomised to randomised studies over time. The vast majority of randomised haematological malignancy trials use surrogate primary end-points.
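The decline in the randomised share (26% in 2015 to 19% in 2020) is a comparison of two proportions. Below is one standard way such a difference can be tested, shown as a hedged sketch with statsmodels; the per-year trial counts are hypothetical, since the abstract reports only percentages and an overall total.

```python
# A sketch of the two-proportion comparison behind the reported decline in
# randomised trials (26% in 2015 vs 19% in 2020). Per-year denominators are
# not given in the abstract, so the counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

randomised = [104, 95]   # hypothetical randomised trials in 2015 and 2020
totals = [400, 500]      # hypothetical registered trials in 2015 and 2020

stat, pval = proportions_ztest(count=randomised, nobs=totals)
print(f"2015: {randomised[0] / totals[0]:.0%}, "
      f"2020: {randomised[1] / totals[1]:.0%}, p = {pval:.4f}")
```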
abstract_id: PUBMED:27771959
Future prospects of therapeutic clinical trials in acute myeloid leukemia. Acute myeloid leukemia (AML) is a markedly heterogeneous hematological malignancy that is most commonly seen in elderly adults. The response to current therapies for AML is quite variable, and very few new drugs have recently been approved for use in AML. This review aims to discuss the issues with current trial design for AML therapies, including trial end points, patient enrollment, cost of drug discovery, and patient heterogeneity. We also discuss future directions in AML therapeutics, including intensification of conventional therapy and new drug delivery mechanisms, as well as targeted agents such as epigenetic therapies, cell cycle regulators, hypomethylating agents, and chimeric antigen receptor T-cell therapy, and we detail possible agents that may be incorporated into the treatment of AML in the future.
abstract_id: PUBMED:9476146
Testing the role of P-glycoprotein expression in clinical trials: applying pharmacological principles and best methods for detection together with good clinical trials methodology. P-glycoprotein (Pgp) is a cell surface ATPase which confers resistance to many of the most active chemotherapy drugs, including taxol, doxorubicin, and vinca alkaloids. Pgp can be detected in human cancers by immunohistochemistry, RNA probes, or functional assays utilizing transported fluorescent dyes such as rhodamine. The expression of Pgp in untreated human cancers is highly variable, being almost universal in colon, hepatocellular, and renal cell cancers, and less common in breast, ovarian, and lymphoid malignancies. At least part of the heterogeneity is attributable to different definitions of positivity, even within a given method of detection. In chemotherapy-naive cancers, resistant cells may not occur very frequently. While the Goldie-Coldman hypothesis predicts treatment failure if 1 cell in 10^6 expresses a resistance mechanism, no method of detection yet described can reliably achieve this sensitivity. The field has reached a stage in which it may be possible to detect Pgp accurately in advanced cancers that have failed chemotherapy, allowing phase II clinical trials to be performed in Pgp-positive tumors. In selecting Pgp inhibitors for clinical study, nanomolar binding potency for Pgp is likely to be important. Such drugs should undergo extensive phase I evaluation to assess pharmacokinetic interactions with a range of cytotoxic drugs before entering randomized trials. In randomized clinical trials Pgp detection may be less important, as disease-free survival and overall survival would be the key end points; however, the Pgp positivity of relapsed disease would indicate whether treatment with Pgp inhibitors eliminated Pgp-expressing clones. The accurate detection of Pgp in human cancers is being refined and will be an essential component of future Pgp inhibitor clinical trials. Finally, these trials must be of sufficient size (>500 patients per arm) to reliably detect clinically meaningful differences.
abstract_id: PUBMED:32836173
Clinical development of cell therapies for cancer: The regulators' perspective. Novel cell therapies for haematological malignancies and solid tumours address pressing clinical need while offering potentially paradigm shifts in efficacy. However, innovative development risks outflanking information on statutory frameworks, regulatory guidelines and their working application. Meeting this challenge, regulators offer wide-ranging expertise and experience in confidential scientific and regulatory advice. We advocate early incorporation of regulatory perspectives to support strategic development of clinical programmes. We examine critical issues and key advances in clinical oncology trials to highlight practical approaches to optimising the clinical development of cell therapies. We recommend early consideration of collaborative networks, early-access schemes, reducing bias in single-arm trials, adaptive trials, clinical end-points supporting risk/benefit and cost/benefit analyses, companion diagnostics, real-world data and common technical issues. This symbiotic approach between developers and regulators should reduce development risk, safely expedite marketing authorisation, and promote early, wider availability of potentially transformative cell therapies for cancer.
abstract_id: PUBMED:32974950
Evaluation of the Relationship of Glasdegib Exposure and Safety End Points in Patients With Refractory Solid Tumors and Hematologic Malignancies. Glasdegib is approved for treating acute myeloid leukemia in elderly patients at 100 mg once daily in combination with low-dose cytarabine. Exposure-efficacy analysis showed that the survival benefit of glasdegib was not glasdegib exposure-dependent. The relationship between glasdegib exposure and adverse event (AE) cluster terms of clinical concern was explored in this analysis. The incidence and severity of dysgeusia, muscle spasms, renal toxicity, and QT interval prolonged was modeled using ordinal logistic regression. AEs were graded using the National Cancer Institute Common Terminology Criteria for Adverse Events (version 4.03). Estimated pharmacokinetic parameters were used to derive glasdegib exposure metrics. Demographic characteristics, disease factors, and other variables of interest as potential moderators of safety signals were evaluated. Clinical trial data from patients who received single-agent glasdegib (N = 70; 5-640 mg once daily); or glasdegib (N = 202, 100-200 mg once daily) with low-dose cytarabine, decitabine, or daunorubicin and cytarabine were analyzed. Glasdegib exposure was statistically significantly associated with the cluster term safety end points dysgeusia, muscle spasms, renal toxicity, and QT interval prolonged. The impact of age on muscle spasms and baseline body weight and creatinine clearance on renal toxicity helped explain the AE grade distribution. At the 100 mg once daily clinical dose, the predicted probabilities of the highest AE grade were 11.3%, 6.7%, 7.7%, and 2.5% for dysgeusia, muscle spasms, renal toxicity, and QT interval prolonged, respectively. Overall, the predicted probability of developing an AE of any severity for these safety end points was low. Therefore, no starting dose adjustments are recommended for glasdegib based on the observed safety profile.
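The exposure-safety analysis above models ordered AE grades with ordinal logistic regression. The following is a minimal sketch of that model class on simulated data using statsmodels' OrderedModel; the variable names, exposure distribution, and coefficients are illustrative assumptions, not the trial's actual data or estimates.

```python
# A minimal sketch, on simulated data, of ordinal logistic regression of
# adverse-event grade on a drug-exposure metric. Variable names and
# coefficients are illustrative assumptions, not the trial's dataset.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
exposure = rng.lognormal(mean=6.0, sigma=0.4, size=n)   # e.g. AUC in ng*h/mL
latent = 0.004 * exposure + rng.logistic(size=n)        # higher exposure, higher grade
grade = pd.Series(pd.cut(latent, bins=[-np.inf, 1.0, 2.0, 3.0, np.inf],
                         labels=[0, 1, 2, 3])).astype(int)

model = OrderedModel(grade, pd.DataFrame({"exposure": exposure}), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
# exp(coefficient) is the odds ratio for a higher AE grade per unit exposure.
```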
abstract_id: PUBMED:32609869
Strategies for Overcoming Disparities for Patients With Hematologic Malignancies and for Improving Enrollment on Clinical Trials. A multitude of factors contribute to cancer disparities, including, but not limited to, differences in diet, lifestyle, environmental exposures, cultural beliefs, genetic and biological factors related to ancestry, socioeconomic status (SES), and access to health care. More investigation is needed in evaluating these factors in less common cancers and hematological malignancies. Addressing disparities in cancer incidence, prevalence, burden of disease, mortality, and survivorship that have been documented among racial/ethnic minority populations with blood cancers will require multilevel models of the interactions between relevant factors and performance of translational research that uses knowledge of cancer biology to develop and test the feasibility of interventions that can impact human end points. Such work must address a wide range of research areas, including prevention, early detection, diagnosis, treatment, epidemiology, cancer control, treatment, and survivorship. To be effective, efforts should be made to advance these research findings to applications that can transform clinical practice and health care delivery. We reviewed the literature to define a framework for overcoming disparities for patients with hematologic malignancies and to improve patient enrollment on clinical trials.
abstract_id: PUBMED:9815662
Prognostic factors in clinical cancer trials. Differences in nontreatment-related covariates ("prognostic factors") often account for differences between treatments in clinical cancer trials. This is true even in randomized trials in which the number of patients randomized is less than 200. Hence analysis of prognostic factors is crucial in historically controlled and small randomized trials. However, it is known that factors found prognostic in one series are often not prognostic in another. This is a result of the increased type I error inherent in most methods used to identify the optimal cutpoint of a potential prognostic factor. This report describes new graphical methods, based on Martingale residuals, that can be used to better identify the relationship between outcome and a covariate. For example, use of these methods indicated that the effect of antecedent hematological disorder on survival in acute myelogenous leukemia/myelodysplastic syndrome is continuous (the longer the length of antecedent hematological disorder, the shorter the survival) rather than dichotomous (antecedent hematological disorder present = unfavorable). This report also discusses the use of graphical methods to test the assumption of proportional hazards crucial to the Cox model and to account for any time-varying effects of a prognostic factor. The graphical methods discussed here provide a better fit of a statistical model to the data and provide more reliable estimates of the effect of a particular variable.
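The martingale-residual diagnostic mentioned above can be sketched compactly. With no covariates in the model, the residual for patient i is M_i = delta_i - H0(t_i), where delta_i is the event indicator and H0 the Nelson-Aalen cumulative hazard; plotting M_i against a candidate prognostic factor suggests the functional form of its effect. The code below uses simulated data and the lifelines package; it illustrates the general technique, not a reconstruction of the paper's analysis.

```python
# A sketch, on simulated data, of the martingale-residual diagnostic for
# choosing a prognostic factor's functional form. Residuals come from a
# covariate-free model: M_i = event_i - H0(t_i), with H0 the Nelson-Aalen
# cumulative hazard. The covariate name is an illustrative assumption.
import numpy as np
import pandas as pd
from lifelines import NelsonAalenFitter

rng = np.random.default_rng(1)
n = 400
ahd_months = rng.exponential(scale=12, size=n)       # antecedent hematological disorder
hazard = 0.05 * (1 + 0.04 * ahd_months)              # longer AHD, shorter survival
time = rng.exponential(scale=1.0 / hazard)
event = (rng.uniform(size=n) < 0.8).astype(int)      # 1 = death observed

naf = NelsonAalenFitter().fit(time, event_observed=event)
H0 = naf.cumulative_hazard_at_times(time).values
martingale = event - H0

# Plot martingale vs. ahd_months (e.g. with a lowess smoother): a roughly
# linear trend supports a continuous effect rather than a dichotomous one.
print(pd.DataFrame({"ahd_months": ahd_months, "residual": martingale}).head())
```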
Answer: The ongoing clinical trials for elderly patients with hematological malignancies appear to be inadequately addressing the right endpoints. The trials often focus on standard endpoints such as toxicity, efficacy, and survival, while patient-centered outcome measures like quality of life, healthcare utilization, and functional capacity are incorporated in only a small number of trials (8%, 4%, and 0.7% of trials, respectively) (PUBMED:24458474). Even in trials developed exclusively for older patients, less than one-fifth include patient-centered outcome measures (PUBMED:24458474). This indicates a gap in the clinical trial design where the outcomes that are most relevant to the older patient population are not being sufficiently considered.
Furthermore, there is a significant underrepresentation of elderly patients in clinical trials for hematological malignancies. Although 5% of the trials focus exclusively on elderly or unfit patients, 69% explicitly or implicitly exclude older patients based on age, performance status, or stringent organ function restrictions (PUBMED:25170014). This exclusion is problematic because the heterogeneity of the elderly population does not justify exclusion based solely on chronological age. Industry-sponsored studies are least likely to exclude older patients (PUBMED:25170014).
Additionally, a systematic review of clinical trials for haematological malignancies from 2015 to 2020 found that only one-fifth of the trials were randomized, and there has been an increase in the ratio of non-randomized to randomized studies over time. The majority of randomized trials used surrogate primary endpoints rather than clinical endpoints like overall survival or quality of life (PUBMED:35236569).
In conclusion, current clinical trials for hematological malignancies are not sufficiently addressing the right endpoints for elderly patients. There is a need for a greater focus on patient-centered outcomes and the inclusion of older patients in clinical trials to improve our knowledge of optimal treatment for this demographic and to deliver evidence-based, patient-focused cancer care (PUBMED:24458474; PUBMED:25170014; PUBMED:35236569). |
Instruction: Does endoscopic papillary balloon dilation affect gallbladder motility?
Abstracts:
abstract_id: PUBMED:10385726
Does endoscopic papillary balloon dilation affect gallbladder motility? Background: Endoscopic papillary balloon dilation for treatment of bile duct stones is likely to preserve papillary function. However, endoscopic papillary balloon dilation may affect gallbladder motility. We investigated the effects of endoscopic papillary balloon dilation on gallbladder motility.
Methods: Ten patients with an intact gallbladder (six with and four without gallbladder stones) who underwent endoscopic papillary balloon dilation for choledocholithiasis were studied. Gallbladder motility was examined before and 7 days and 1 month after endoscopic papillary balloon dilation. Gallbladder volume, while fasting and after dried egg yolk ingestion, was determined by ultrasonography.
Results: Before endoscopic papillary balloon dilation, particularly in patients with gallbladder stones, the gallbladder showed significantly larger fasting volume and lower yolk-stimulated maximum contraction compared with control subjects. Seven days after endoscopic papillary balloon dilation, fasting volume was decreased and maximum contraction was increased, regardless of the presence of gallbladder stones, with significant differences from the values before endoscopic papillary balloon dilation. One month after endoscopic papillary balloon dilation, these changes were reduced and gallbladder function did not differ significantly from baseline.
Conclusions: After endoscopic papillary balloon dilation, gallbladder motility improves transiently at 7 days but returns to baseline at 1 month. In terms of gallbladder motility, endoscopic papillary balloon dilation does not seem to increase the subsequent risk of acute cholecystitis.
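For readers unfamiliar with these motility endpoints: fasting volume and maximum contraction are derived from serial ultrasonographic volume measurements. The abstract does not spell out its formula, so the short sketch below assumes the commonly used ejection-fraction definition; the volumes are hypothetical.

```python
# The abstract does not spell out its formula, so this sketch assumes the
# common ultrasonographic definition of maximum contraction (ejection
# fraction): (fasting volume - minimum postprandial volume) / fasting volume.
# The volumes are hypothetical.
def max_contraction(fasting_ml: float, min_postprandial_ml: float) -> float:
    return 100.0 * (fasting_ml - min_postprandial_ml) / fasting_ml

print(f"pre-EPBD: {max_contraction(35.0, 24.5):.0f}%")   # large, poorly emptying gallbladder
print(f"day 7:    {max_contraction(28.0, 11.2):.0f}%")   # improved emptying after EPBD
```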
abstract_id: PUBMED:24363517
Endoscopic papillary balloon dilation: revival of the old technique. Radiologists first described the removal of bile duct stones using balloon dilation in the early 1980s. Recently, there has been renewed interest in endoscopic balloon dilation with a small balloon to avoid the complications of endoscopic sphincterotomy (EST) in young patients undergoing laparoscopic cholecystectomy. However, uptake of endoscopic papillary balloon dilation (EPBD) differs between the East and the West, depending on the origin of the studies. In the early 2000s, EST followed by endoscopic balloon dilation with a large balloon was introduced to treat large or difficult biliary stones. Endoscopic balloon dilation with a large balloon has generally been recognized as an effective and safe method, unlike EPBD. However, fatal complications have occurred in patients undergoing endoscopic papillary large balloon dilation (EPLBD). The safety of endoscopic balloon dilation is still a debatable issue. Moreover, guidelines on indications and techniques have not been established for performing endoscopic balloon dilation with either a small or a large balloon. In this article, we discuss the issues surrounding conventional and large-balloon endoscopic dilation, and we suggest indications and optimal techniques for EPBD and EPLBD.
abstract_id: PUBMED:23701518
Papillary balloon dilation is not itself a cause of post-endoscopic retrograde cholangiopancreatography pancreatitis; results of anterograde and retrograde papillary balloon dilation. Objectives: The mechanism of pancreatitis development following endoscopic papillary balloon dilation (EPBD) remains unknown. Antegrade dilation with percutaneous transhepatic papillary balloon dilation (PTPBD) allows the removal of bile duct stones or fragments during percutaneous choledochoscopic lithotomy, with less mechanical trauma to the papilla than with EPBD-mediated stone removal.
Methods: A total of 56 patients with bile duct stones underwent antegrade dilation with PTPBD from March 2006 to February 2011. A total of 208 patients with common bile duct stones underwent retrograde dilation with EPBD during the same period. The conditions of papillary balloon dilation were identical in both groups. The frequencies of pancreatitis and hyperamylasemia were compared in both groups.
Results: Pancreatitis occurred in 14 (6.7%) of 208 patients in the EPBD group (mild, nine; moderate, four; severe, one). There was no case of pancreatitis among 56 patients in the PTPBD group (P < 0.05). Hyperamylasemia developed in significantly more patients treated in the EPBD group (62, 29.8%) compared with the PTPBD group (4, 7.1%; P < 0.05). Complete bile duct clearance was achieved in 98.2% of PTPBD group and 97.1% of EPBD group.
Conclusions: The rates of post-procedural pancreatitis and hyperamylasemia were significantly higher after retrograde dilation with EPBD than after antegrade dilation with PTPBD for the removal of bile duct stones. Although the mechanism of pancreatitis following papillary balloon dilation remains unclear, post-EPBD pancreatitis may be associated with procedures performed before and after balloon dilation, such as mechanical lithotripsy, rather than with balloon dilation itself.
abstract_id: PUBMED:11577307
Medium-term effects of endoscopic papillary balloon dilation on gallbladder motility. Background: Endoscopic papillary balloon dilation (EPBD) for removal of bile duct stones tends to preserve papillary function. However, EPBD may exert beneficial or deleterious effects on gallbladder motility. This was a prospective, medium-term investigation (2 years) of the effects of EPBD on gallbladder motility.
Methods: Twelve patients with intact gallbladders (6 with and 6 without gallbladder stones) who underwent EPBD for choledocholithiasis were enrolled in this study. Gallbladder motility was examined before EPBD and at 7 days, 1 month, 1 year, and 2 years after EPBD. Gallbladder volumes, measured after fasting and after ingestion of dried egg yolk, were determined by US.
Results: All patients were asymptomatic during the 2-year follow-up period. Before EPBD, particularly in patients with cholelithiasis, the gallbladder had a larger fasting volume and lower yolk-stimulated maximum contraction compared with normal control subjects. Seven days after EPBD, fasting volume was decreased and maximum contraction was increased, both significantly compared with pre-EPBD values and regardless of the presence or absence of gallbladder stones. At 1 month, 1 year, and 2 years after EPBD, these changes were far less evident and gallbladder function did not differ significantly from baseline.
Conclusion: EPBD does not adversely affect gallbladder motility in the medium-term (2 years). In terms of gallbladder motility, EPBD does not appear to increase the future risk of acute cholecystitis or gallbladder stone formation.
abstract_id: PUBMED:22977823
Endoscopic papillary large balloon dilation: guidelines for pursuing zero mortality. Since endoscopic papillary large balloon dilation (EPLBD) is used to treat benign disease and as a substitute for conventional methods, such as endoscopic sphincterotomy plus endoscopic mechanical lithotripsy, we should aim for zero mortality. This review defines EPLBD and suggests guidelines for its use based on a review of published articles and our large-scale multicenter retrospective review.
abstract_id: PUBMED:25685263
Reappraisal of endoscopic papillary balloon dilation for the management of common bile duct stones. Although endoscopic sphincterotomy (EST) is still considered the gold standard treatment for common bile duct (CBD) stones in Western guidelines, endoscopic papillary balloon dilation (EPBD) is commonly used by endoscopists in Asia as the first-line treatment for CBD stones. Besides being a technically easy procedure, endoscopic papillary large balloon dilation (EPLBD) can facilitate the removal of large CBD stones. The indication for EPBD has been extended from removal of small stones using a traditional balloon to removal of large stones, and avoidance of lithotripsy, using a large balloon alone or after EST. According to reports of antegrade papillary balloon dilatation, balloon dilation itself is not the cause of pancreatitis. On the contrary, adequate dilation of the papillary orifice can reduce the trauma to the papilla and pancreas inflicted by the basket or lithotripter during stone extraction. EPLBD alone is as effective as EPLBD with limited EST. A longer ballooning time may be beneficial in EPLBD alone to achieve adequate loosening of the papillary orifice; it does not increase the risk of pancreatitis and may reduce bleeding episodes in patients with coagulopathy. Slow inflation of the balloon, without exceeding the diameter of the bile duct or the tolerance of the patient, is important to prevent perforation. EPLBD alone or with EST is not a sphincter-preserving procedure, so regular follow-up is necessary for early detection and management of recurrent CBD stones.
abstract_id: PUBMED:15628707
Long-term effects of endoscopic papillary balloon dilation on gallbladder motility. We prospectively studied long-term (5 years) effects of endoscopic papillary balloon dilation (EPBD) on gallbladder motility. Thirteen patients with intact gallbladders (six with and seven without gallbladder stones) who had undergone EPBD for choledocholithiasis were enrolled in this study. Gallbladder volumes, while fasting and after dried egg yolk ingestion, were determined by ultrasonography, before and at 7 days, 1 month, and 1, 2, and 5 years after EPBD. Before EPBD, the gallbladder had a larger fasting volume and lower yolk-stimulated maximum contraction than in normal controls. Seven days after EPBD, fasting volume was decreased and maximum contraction was increased, regardless of whether the patient had gallbladder stones, showing significant differences from the pre-EPBD values. At 1 month to 5 years after EPBD, these changes were far less evident and gallbladder function did not differ significantly from baseline. EPBD does not adversely affect gallbladder motility in the long-term (5 years).
abstract_id: PUBMED:26487232
Cholecystectomy after endoscopic papillary balloon dilation for bile duct stones reduced late biliary complications: a propensity score-based cohort analysis. Background: Cholecystectomy after endoscopic sphincterotomy for bile duct stones with concomitant gallstones is known to reduce late biliary complications. Endoscopic papillary balloon dilation for bile duct stones develops fewer late biliary complications than endoscopic sphincterotomy, but no randomized controlled trials have been conducted about the role of cholecystectomy after endoscopic papillary balloon dilation. Therefore, we conducted this propensity score-matched analysis to compare cholecystectomy and wait-and-see approach after endoscopic papillary balloon dilation.
Methods: Propensity score matching extracted 147 pairs of patients with cholecystectomy after endoscopic papillary balloon dilation and with gallbladder left in situ with stones (wait-and-see) from 725 patients who underwent endoscopic papillary balloon dilation for bile duct stones. Late biliary complications such as recurrent bile duct stones and cholecystitis were evaluated. Cumulative incidence of late biliary complications was calculated treating death without biliary complications as a competing risk, and its prognostic factor was evaluated.
Results: The rates of late biliary complications were 5.4 and 25.2 % in the cholecystectomy after endoscopic papillary balloon dilation and wait-and-see groups: Recurrent bile duct stones rates were 4.1 and 19.0 %, and cholecystitis rates were 0.7 and 6.1 %. The cumulative incidences of biliary complications in the cholecystectomy after endoscopic papillary balloon dilation and wait-and-see approach were 3.1 versus 13.0 % at 1 year and 5.7 versus 28.0 % at 5 year after endoscopic papillary balloon dilation (p = 0.008). Subdistribution hazard ratio of late biliary complications in the wait-and-see group was 5.1 (p = 0.020).
Conclusion: Cholecystectomy after endoscopic papillary balloon dilation for choledocholithiasis was associated with fewer late biliary complications. Prophylactic cholecystectomy should be offered to all surgically fit patients after endoscopic papillary balloon dilation for bile duct stones with concomitant gallstones.
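The propensity-score matching used in this study can be summarized as a two-step procedure: model treatment assignment from baseline covariates, then pair each treated patient with the control whose score is nearest. The code below is a simplified illustration on simulated data (logistic propensity model, 1:1 nearest-neighbour matching with replacement, no caliper); the covariates are assumptions, not the study's actual matching variables.

```python
# A simplified sketch of 1:1 nearest-neighbour propensity-score matching on
# simulated data: logistic propensity model, matching with replacement, no
# caliper. Covariates are illustrative, not the study's matching variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 725
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),
    "stone_count": rng.poisson(2, n),
    "treated": rng.integers(0, 2, n),   # 1 = cholecystectomy after EPBD
})

# 1) Estimate each patient's propensity to receive cholecystectomy.
X = df[["age", "stone_count"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]

# 2) Pair every treated patient with the control whose score is nearest.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]
print(f"{len(treated)} treated patients matched to {len(matched_controls)} controls")
```

In a real analysis one would also check covariate balance after matching, and would typically match without replacement within a caliper.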
abstract_id: PUBMED:24123873
Endoscopic papillary large-balloon dilation versus endoscopic papillary regular-balloon dilation for removal of large bile-duct stones. Background: Endoscopic papillary large-balloon dilation (EPLBD) became popular for the treatment of large common bile-duct stones (CBDS), and its feasibility has been reported in comparison to endoscopic sphincterotomy. However, the comparison between EPLBD and endoscopic papillary regular-balloon dilation (EPBD) has not been reported. In the present study, the efficacy and complications of EPLBD were compared with those of EPBD.
Methods: We retrospectively assessed 334 consecutive patients with CBDS of any size that were treated by either EPLBD or EPBD between January 2008 and December 2012.
Results: In cases with large CBDS (>10 mm), EPLBD and EPBD had similar results in terms of the success rate of stone removal in the first (65% vs. 84%) and total attempts (100% vs. 95%), use of mechanical lithotripter (64% vs. 80%), and procedure time (48.0 ± 17.8 min vs. 44.1 ± 17.1 min). The necessity for crushing stones with a mechanical lithotripter was significantly decreased in EPLBD compared to EPBD (25% vs. 80%). In all cases with CBDS, there was no significant difference in complication rates between EPLBD and EPBD (3.3% vs. 4.7%).
Conclusions: Compared to EPBD, EPLBD appears safe and effective for removing large CBDS and decreases the necessity of lithotripsy.
abstract_id: PUBMED:25840897
Nonradiation ERCP with endoscopic biliary sphincterotomy plus papillary balloon dilation for the treatment of choledocholithiasis during pregnancy. Background: Endoscopic retrograde cholangiopancreatography (ERCP) is currently the treatment of choice for symptomatic choledocholithiasis in pregnant patients. We aimed to present our experience with pregnant patients who underwent nonradiation ERCP and to evaluate the safety and efficacy of a new technique.
Methods: A retrospective analysis of nonradiation ERCP in 22 pregnant patients with symptomatic choledocholithiasis between January 2002 and December 2013 was performed. The bile aspiration technique with wire-guided sphincterotome was used to confirm selective biliary cannulation. Transpapillary pancreatic septotomy was performed in cases with difficult biliary cannulation (n = 3). After endoscopic biliary sphincterotomy, endoscopic papillary balloon dilation was performed with a 6- or 8-mm dilation balloon in all patients to reduce the risk of recurrent cholangitis because of residual or additional stones. Stones were extracted by balloon sweeping after dilation. All patients were followed for 6 months after the ERCP procedure.
Results: Biliary cannulation was achieved in all patients. Endoscopic papillary balloon dilation was performed with a 6-mm balloon in 17 patients and an 8-mm balloon in five patients. The stones were extracted in 18 of the 22 patients by balloon sweeping, but no stones were extracted in the remaining four patients. There were two cases of mild post-ERCP pancreatitis. All patients delivered at term, and none experienced recurrence of choledocholithiasis and/or cholangitis during the 6-month follow-up.
Conclusions: Endoscopic biliary sphincterotomy plus endoscopic papillary balloon dilation in nonradiation ERCP is a safe and effective treatment method for symptomatic choledocholithiasis during pregnancy.
Answer: Endoscopic papillary balloon dilation (EPBD) does have an effect on gallbladder motility, but this effect appears to be transient. Studies have shown that after EPBD, gallbladder motility improves transiently at 7 days, with a decrease in fasting volume and an increase in maximum contraction after ingestion of dried egg yolk. However, these changes are reduced by 1 month after the procedure, and gallbladder function does not differ significantly from baseline (PUBMED:10385726). This pattern of transient improvement followed by a return to baseline function was also observed in medium-term (2 years) and long-term (5 years) follow-up studies, indicating that EPBD does not adversely affect gallbladder motility in the medium or long term and does not appear to increase the future risk of acute cholecystitis or gallbladder stone formation (PUBMED:11577307, PUBMED:15628707). |
Instruction: Can Perhexiline Be Utilized Without Long-Term Toxicity?
Abstracts:
abstract_id: PUBMED:26309031
Can Perhexiline Be Utilized Without Long-Term Toxicity? A Clinical Practice Audit. Background: Perhexiline, originally used as a first-line prophylactic antianginal agent, is now regarded primarily as a treatment for otherwise refractory myocardial ischemia. Recent studies have also demonstrated its short-term utility in heart failure, hypertrophic cardiomyopathy, and inoperable aortic stenosis. Its benefits on the myocardial energetic state are potentially counterbalanced by the risk of hepatotoxicity and peripheral neuropathy during long-term treatment if drug accumulation occurs. Since perhexiline exhibits complex pharmacokinetics with wide inter-individual variability, its long-term use requires regular plasma concentration monitoring. In this study, the risk of neuro- and hepato-toxicity during long-term perhexiline therapy was investigated in relation to the intensity of therapeutic drug monitoring. Furthermore, determinants of mortality during perhexiline treatment were evaluated.
Methods: In 170 patients treated with perhexiline for a median of 50 months (interquartile range: 31-94 months), outcomes and relationship to plasma drug concentrations were documented.
Results: Rationale for treatment with perhexiline included myocardial ischemia in 88% and severe systolic heart failure in 38%. Plasma concentrations were within the therapeutic range of 150-600 ng/mL on 65% of assay occasions and toxic levels accounted for 8.8% of measurements. No patient developed hepatotoxicity attributable to perhexiline while 3 developed peripheral neuropathy possibly induced by treatment. Actuarial 5-year survival rate was 83% overall, and 76.3% in patients with associated systolic heart failure.
Conclusions: This first audit of a large population treated long-term with perhexiline demonstrates the following: (1) Although the frequency of monitoring is less than ideal, therapeutic drug monitoring effectively limits the occurrence of toxic drug concentrations and virtually eliminates long-term hepato- and neuro-toxicity; and (2) Mortality rates during long-term therapy, notably for patients with concomitant heart failure, are surprisingly low.
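In practice, the monitoring described in this audit reduces to classifying each plasma assay against the 150-600 ng/mL therapeutic window and tracking how often levels drift into the toxic range. The sketch below illustrates that bookkeeping with hypothetical assay values.

```python
# A small sketch of the monitoring bookkeeping: classify each perhexiline
# plasma level against the 150-600 ng/mL therapeutic range reported in the
# audit. The assay values below are hypothetical.
def classify(conc_ng_ml: float) -> str:
    if conc_ng_ml < 150:
        return "subtherapeutic"
    if conc_ng_ml <= 600:
        return "therapeutic"
    return "toxic"

levels = [90, 180, 420, 610, 730, 550, 300]   # hypothetical serial assays
labels = [classify(c) for c in levels]
toxic_pct = 100 * labels.count("toxic") / len(labels)
print(list(zip(levels, labels)))
print(f"toxic on {toxic_pct:.1f}% of assays")   # cf. 8.8% of measurements in the audit
```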
abstract_id: PUBMED:4156585
Physiopathology of angina and long term antianginal medication. Current concepts N/A
abstract_id: PUBMED:6797064
Ambulatory treatment of heart rhythm disorders. Partial suppression of symptomatic arrhythmias is feasible in most instances. Suitability of drugs for long-term treatment depends on their side effects, costs, and duration of action. Discrepancies between elimination half-life and duration of action are described. Perhaps the most important shortcoming in long-term suppression of potentially dangerous arrhythmias is the short-lived action of most available drugs. In a once-daily regimen with allowance for occasional omission, a drug action duration of more than 48 hours is desirable. This applies only to perhexiline, nadolol, amiodarone, digitoxin, and digoxin in elderly patients.
abstract_id: PUBMED:3930086
Amiodarone and its desethyl metabolite: tissue distribution and morphologic changes during long-term therapy. The pharmacokinetic characteristics of amiodarone suggest extensive tissue deposition. We confirmed this by measuring tissue concentrations of the drug and of its major metabolite, desethylamiodarone, in human tissues. These were obtained at autopsy (n = 9), surgery (n = 7), or biopsy (n = 2) from 18 patients who had been treated with amiodarone for varying periods of time. High concentrations of amiodarone were found in fat (316 mg/kg wet weight in autopsy specimens, 344 mg/kg wet weight in biopsy specimens). Amiodarone and desethylamiodarone concentrations (mg/kg wet weight, autopsy samples) were also high in liver (391 and 2354), lung (198 and 952), adrenal gland (137 and 437), testis (89 and 470), and lymph node (83 and 316). We also found high concentrations of amiodarone (306 mg/kg wet weight) and desethylamiodarone (943 mg/kg wet weight) in abnormally pigmented ("blue") skin from patients with amiodarone-induced skin pigmentation. These values were 10-fold higher than those in unpigmented skin from the same patients. These high concentrations were associated with lysosomal inclusion bodies in dermal macrophages in the pigmented skin. The inclusion bodies were intrinsically electron dense and were shown to contain iodine by energy dispersive x-ray microanalysis. Lysosomal inclusion bodies shown by electron microscopy to be multilamellar were seen in other tissues. These tissues included terminal nerve fibers in pigmented skin, pulmonary macrophages, blood neutrophils, and hepatocytes and Kupffer cells. These characteristic ultrastructural findings occur in both genetic lipidoses and lipidoses induced by other drugs, e.g., perhexiline. We conclude that during therapy with amiodarone, widespread deposition of amiodarone and desethylamiodarone occurs. This leads to ultrastructural changes typical of a lipidosis. These changes are seen clearly in tissues associated with the unwanted effects of amiodarone, e.g., skin, liver and lung.
abstract_id: PUBMED:12776925
A new hepatoma cell line for toxicity testing at repeated doses. Many cell models that are used to assess basic cytotoxicity show a good correlation with acute toxicity. However, their correlation with the toxicity seen following chronic in vivo exposure is less evident. The new human hepatoma cell line HBG BC2 possesses the capacity of being reversibly differentiated in vitro and of maintaining a relatively higher metabolic rate when in the differentiated state (3 weeks) as compared to HepG2 cells, and thus may allow the conduct of repeated toxicity testing on cells in culture. In order to evaluate the genetic background of HBG BC2 cells, the expression of selected genes was analyzed in untreated cultures and, in addition, the behavior of HBG BC2 cultures under conditions of repeated treatment was studied with acetaminophen as a test substance and coupled with the use of standard staining techniques to demonstrate toxicity. Results showed that cultures of HBG BC2 cells retained a capacity to undergo apoptosis and proliferation, allowing probable replacement of damaged cells in the culture monolayer. MTT reduction was used to evaluate the toxicity of acetaminophen, acetylsalicylic acid, perhexiline, and propranolol, after both single and repeated (3 times/week for 2 weeks) administration. Under the conditions of repeated treatment, cytotoxicity was observed at lower doses as compared to single administration. In addition, the lowest nontoxic doses were in the same range as plasma concentrations measured in humans under therapeutic use. Our results suggest that the new human hepatoma HBG BC2 cell line is of interest for the evaluation of cell toxicity under conditions of repeated administration.
abstract_id: PUBMED:6779619
Ergonovine testing to detect spontaneous remissions of variant angina during long-term treatment with calcium antagonist drugs. A subgroup of 22 patients with variant angina who had responded well to calcium antagonist drugs were studied to determine if ergonovine testing could help assess the need for continued therapy. Before treatment all 22 patients exhibited angina with S-T elevation during ergonovine testing done in the coronary care unit according to a previously described protocol with sequential ergonovine doses of 0.0125, 0.025, 0.05, 0.1, 0.2, 0.3 and 0.4 mg administered at 5 minute intervals. After 9.4 +/- 4.7 (range 1 to 24) months of treatment (nifedipine 7 patients, diltiazem 3, verapamil 8, perhexiline 3, nifedipine and diltiazem 1), all patients were free from anginal attacks. Medication was discontinued and ergonovine testing repeated 24 to 48 hours later (3 weeks for perhexiline). In 12 of the 22 patients, angina or S-T segment shifts did not occur during the second ergonovine test to a maximal dose of 0.4 mg. Treatment was not restarted in these patients and all 12 remain free of variant anginal attacks 4.2 +/- 2.9 (range 1 to 13) months later. In seven patients angina and S-T elevation occurred during the second ergonovine test, in the same electrocardiographic leads as during the test before treatment. In three patients the ergonovine test induced angina with S-T depression in the leads where S-T elevation had occurred during the previous test. Treatment was reinstituted in these 10 patients with a positive test. No complications resulted from ergonovine testing in any patient. We conclude that in many patients with variant angina, symptoms will disappear spontaneously and the ergonovine test will revert to negative. Treatment with calcium antagonist drugs can probably be safely discontinued in some patients with variant angina; ergonovine testing appears to be helpful in identifying such patients. Longer periods of follow-up are required to confirm that symptoms do not recur.
abstract_id: PUBMED:23577442
A new human hepatoma cell line to study repeated cell toxicity. Early toxicity screening of new drugs is performed to select candidates for development. Many cell models are used to assess basic cytotoxicity and to show a good correlation with acute toxicity. However, their correlation with chronic in vivo exposure is inadequate. The new hepatoma cell line (HBG BC2) possesses the capacities of being reversibly differentiated in vitro, and of maintaining a relatively higher metabolic rate when in the differentiated phase (3 weeks) as compared to Hep G2 cells. MTT reduction was used to evaluate the toxicity of propranolol, perhexiline, aspirin and paracetamol, after both single and repeated treatments (three times a week for 2 weeks). Under conditions of repeated treatment, cytotoxicity was observed at lower doses when compared with single administration. Moreover, the first non-toxic doses were in the same range as plasma concentrations measured in humans during therapeutic use. Our results suggest that the new human hepatoma HBG BC2 cell line may be of interest for the evaluation of cell toxicity under repeated treatment conditions.
abstract_id: PUBMED:36677681
Drug Metabolism of Hepatocyte-like Organoids and Their Applicability in In Vitro Toxicity Testing. Emerging advances in the field of in vitro toxicity testing attempt to meet the need for reliable human-based safety assessment in drug development. Intrahepatic cholangiocyte organoids (ICOs) are described as a donor-derived in vitro model for disease modelling and regenerative medicine. Here, we explored the potential of hepatocyte-like ICOs (HL-ICOs) in in vitro toxicity testing by exploring the expression and activity of genes involved in drug metabolism, a key determinant in drug-induced toxicity, and the exposure of HL-ICOs to well-known hepatotoxicants. The current state of drug metabolism in HL-ICOs showed levels comparable to those of PHHs and HepaRGs for CYP3A4; however, other enzymes, such as CYP2B6 and CYP2D6, were expressed at lower levels. Additionally, EC50 values were determined in HL-ICOs for acetaminophen (24.0−26.8 mM), diclofenac (475.5−>500 µM), perhexiline (9.7−>31.5 µM), troglitazone (23.1−90.8 µM), and valproic acid (>10 mM). Exposure to the hepatotoxicants showed EC50s in HL-ICOs comparable to those in PHHs and HepaRGs; however, for acetaminophen exposure, HL-ICOs were less sensitive. Further elucidation of enzyme and transporter activity in drug metabolism in HL-ICOs and exposure to a more extensive compound set are needed to accurately define the potential of HL-ICOs in in vitro toxicity testing.
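EC50 values such as those reported above are commonly obtained by fitting a four-parameter logistic (Hill) curve to concentration-response data. The sketch below demonstrates such a fit on simulated viability data with scipy; all values, including the recovered EC50, are illustrative rather than the study's measurements.

```python
# A sketch of EC50 estimation by fitting a four-parameter logistic (Hill)
# curve to simulated concentration-viability data with scipy. Concentrations,
# responses, and the recovered EC50 are illustrative, not study measurements.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Viability falls from `top` toward `bottom` as concentration rises."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # mM, e.g. acetaminophen
rng = np.random.default_rng(3)
viability = hill(conc, 5.0, 100.0, 25.0, 1.5) + rng.normal(0.0, 3.0, conc.size)

popt, _ = curve_fit(hill, conc, viability, p0=[0.0, 100.0, 10.0, 1.0],
                    bounds=([-20.0, 50.0, 0.01, 0.1], [50.0, 150.0, 1000.0, 5.0]))
print(f"estimated EC50 = {popt[2]:.1f} mM")   # close to the simulated 25 mM
```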
abstract_id: PUBMED:19793005
Relationship between in vitro phospholipidosis assay using HepG2 cells and 2-week toxicity studies in rats. Drug candidates under development by industry frequently show phospholipidosis as a side-effect in pre-clinical toxicity studies. This study sets up a cell-based assay for drug-induced phospholipidosis (PLD) and its performance was evaluated based on the in vivo PLD potential of compounds in 2-week toxicity studies in rats. When HepG2 cells were exposed simultaneously to PLD-inducing chemicals and a phospholipid having a fluorophore, an accumulation of phospholipids was detected as an increasing fluorescent intensity. Amiodarone, amitriptyline, fluoxetine, AY-9944, and perhexiline, which are common PLD-inducing chemicals, increased the fluorescent intensity, but acetaminophen, ampicillin, cimetidine, famotidine, or valproic acid, which are non-PLD-inducing chemicals, did not. The fluorescent intensity showed concordance with the pathological observations of phospholipid lamellar bodies in the cells. Then to confirm the predictive performance of the in vitro PLD assay, the 32 proprietary compounds characterized in 2-week toxicity studies in rats were evaluated with this in vitro assay. Because this in vitro assay was vulnerable to cytotoxicity, the innate PLD potential was calculated for each compound. A statistically significant increase in the in vitro PLD potential was seen for the compounds having in vivo PLD-inducing potential in the rat toxicity studies. The results suggest that the in vitro PLD potential could be appropriate to detect the appearance of PLD as a side effect in pre-clinical toxicity studies in rats.
abstract_id: PUBMED:6602668
Factors influencing the long-term prognosis of treated patients with variant angina. To determine the prognosis of variant angina and the factors influencing it, 169 consecutive patients hospitalized in our coronary unit were followed for a mean of 15.3 months (range 1 to 68). Survival at 1, 2, and 3 years was 95%, 90%, and 87%, respectively; survival without myocardial infarction was 80%, 78%, and 75%. Twenty of the 22 myocardial infarctions and eight of the 14 deaths occurred within the first 3 months. Mantel-Haenszel log-rank analysis demonstrated that coronary disease, ventricular function, and the degree of disease activity were significant interdependent variables that influenced both survival and survival without infarction. At 1, 2, and 3 years, survival for patients with multivessel disease was 81%, 76%, and 66%; for patients with one-vessel disease, 97%, 92%, and 92%; and for patients without stenoses ≥70%, 98% at each year (p = .0003). Survival without infarction at 1 year was 88% in patients with no stenoses ≥70% and 82% in patients with single-vessel disease; it did not change thereafter in either group, but was 62%, 58%, and 50% at 1, 2, and 3 years in patients with multivessel disease (p = .001). Treatment did not influence survival in any subgroup (only 14 patients died overall) or survival without infarction in patients with multivessel disease. However, in patients without multivessel disease, treatment with nifedipine, diltiazem, and verapamil improved survival without infarction compared to treatment with perhexiline maleate or long-acting nitrates alone (92% vs 67% at 1, 2, and 3 years; p < .005). Thus in addition to preventing angina, nifedipine, diltiazem, and verapamil appear to reduce complications in patients with variant angina without multivessel disease.
Answer: Based on the clinical practice audit reported in the study with PUBMED:26309031, perhexiline can be utilized without long-term toxicity if therapeutic drug monitoring is effectively implemented. In the study involving 170 patients treated with perhexiline for a median of 50 months, plasma concentrations were within the therapeutic range on 65% of assay occasions, and toxic levels accounted for only 8.8% of measurements. No patient developed hepatotoxicity attributable to perhexiline, and only 3 developed peripheral neuropathy possibly induced by treatment. The actuarial 5-year survival rate was 83% overall, and 76.3% in patients with associated systolic heart failure. The audit concluded that therapeutic drug monitoring effectively limits the occurrence of toxic drug concentrations and virtually eliminates long-term hepato- and neuro-toxicity. Mortality rates during long-term therapy, notably for patients with concomitant heart failure, were surprisingly low. Therefore, with appropriate monitoring, perhexiline can be used without long-term toxicity. |
Instruction: Is food insufficiency associated with health status and health care utilization among adults with diabetes?
Abstracts:
abstract_id: PUBMED:11422638
Is food insufficiency associated with health status and health care utilization among adults with diabetes? Objectives: Preliminary studies have shown that among adults with diabetes, food insufficiency has adverse health consequences, including hypoglycemic episodes and increased need for health care services. The purpose of this study was to determine the prevalence of food insufficiency and to describe the association of food insufficiency with health status and health care utilization in a national sample of adults with diabetes.
Methods: We analyzed data from adults with diabetes (n = 1,503) interviewed in the Third National Health and Nutrition Examination Survey. Bivariate and multivariate analyses were used to examine the relationship of food insufficiency to self-reported health status and health care utilization.
Results: Six percent of adults with diabetes reported food insufficiency, representing more than 568,600 persons nationally (95% confidence interval, 368,400 to 768,800). Food insufficiency was more common among those with incomes below the federal poverty level (17% vs 4%, P ≤ .001). Adults with diabetes who were food insufficient were more likely to report fair or poor health status than those who were not (63% vs 43%; odds ratio, 2.2; P = .05). In a multivariate analysis, fair or poor health status was independently associated with poverty, nonwhite race, low educational achievement, and number of chronic diseases, but not with food insufficiency. Diabetic adults who were food insufficient reported more physician encounters, either in clinic or by phone, than those who were food secure (12 vs 7, P < .05). In a multivariate linear regression, food insufficiency remained independently associated with increased physician utilization among adults with diabetes. There was no association between food insufficiency and hospitalization in bivariate analysis.
Conclusions: Food insufficiency is relatively common among low-income adults with diabetes and was associated with higher physician utilization.
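The multivariate analysis in this abstract is a logistic regression in which exponentiated coefficients are odds ratios. The sketch below mirrors that setup on simulated data with statsmodels; the covariates and the coefficient chosen to yield an odds ratio near the reported 2.2 are assumptions for illustration, not the NHANES III estimates.

```python
# A sketch mirroring the multivariate logistic regression of fair/poor health
# status, on simulated data. Covariates and the coefficient chosen to give an
# odds ratio near 2.2 for food insufficiency are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1503
df = pd.DataFrame({
    "food_insufficient": rng.integers(0, 2, n),
    "below_poverty": rng.integers(0, 2, n),
    "n_chronic": rng.poisson(2, n),
})
logit_p = (-1.5 + 0.8 * df["food_insufficient"]
           + 0.9 * df["below_poverty"] + 0.4 * df["n_chronic"])
df["fair_poor_health"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["food_insufficient", "below_poverty", "n_chronic"]])
fit = sm.Logit(df["fair_poor_health"], X).fit(disp=False)
print(np.exp(fit.params))   # exponentiated coefficients are odds ratios (~2.2)
```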
abstract_id: PUBMED:32187388
Food insecurity, health care utilization, and health care expenditures. Objective: To disentangle the relationships among food insecurity, health care utilization, and health care expenditures.
Data Sources/study Setting: We use national data on 13 465 adults (age ≥ 18) from the 2016 Medical Expenditure Panel Survey (MEPS), the first year of the food insecurity measures.
Study Design: We employ two-stage empirical models (probit for any health care use/expenditure, ordinary least squares, and generalized linear models for amount of utilization/expenditure), controlling for demographics, health insurance, poverty status, chronic conditions, and other predictors.
Principal Findings: Our results show that the likelihood of any health care expenditure (total, inpatient, emergency department, outpatient, and pharmaceutical) is higher for marginal, low, and very low food secure individuals. Relative to food secure households, very low food secure households are 5.1 percentage points (P < .001) more likely to have any health care expenditure, and have total health care expenditures that are 24.8 percent higher (P = .011). However, once we include chronic conditions in the models (ie, high blood pressure, heart disease, stroke, emphysema, high cholesterol, cancer, diabetes, arthritis, and asthma), these underlying health conditions mitigate the differences in expenditures by food insecurity status (only the likelihood of any having any health care expenditure for very low food secure households remains statistically significant).
Conclusions: Policy makers and government agencies are focused on addressing deficiencies in social determinants of health and the resulting impacts on health status and health care utilization. Our results indicate that chronic conditions are strongly associated with food insecurity and higher health care spending. Efforts to alleviate food insecurity should consider the dual burden of chronic conditions. Finally, future research can address specific mechanisms underlying the relationships between food security, health, and health care.
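The two-stage empirical models described above follow the classic two-part structure for health expenditures: a binary model for any use, then a conditional model for the amount among users. The sketch below, on simulated data, pairs a probit first part with a gamma GLM (log link) second part; covariates and coefficients are illustrative assumptions, not MEPS estimates.

```python
# A sketch, on simulated data, of a two-part expenditure model: a probit for
# any health care expenditure, then a gamma GLM with log link for amounts
# among users. Covariates and coefficients are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 13465
df = pd.DataFrame({
    "very_low_food_secure": rng.integers(0, 2, n),
    "age": rng.normal(45, 15, n),
})
p_any = 1 / (1 + np.exp(-(0.3 + 0.2 * df["very_low_food_secure"] + 0.01 * df["age"])))
df["any_exp"] = (rng.uniform(size=n) < p_any).astype(int)
mu = np.exp(7.0 + 0.25 * df["very_low_food_secure"] + 0.02 * df["age"])
df["expenditure"] = np.where(df["any_exp"] == 1,
                             rng.gamma(2.0, (mu / 2.0).to_numpy()), 0.0)

X = sm.add_constant(df[["very_low_food_secure", "age"]])

# Part 1: probability of any expenditure.
part1 = sm.Probit(df["any_exp"], X).fit(disp=False)

# Part 2: expenditure amount among users only.
users = df["any_exp"] == 1
part2 = sm.GLM(df.loc[users, "expenditure"], X.loc[users],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(part1.params, part2.params, sep="\n\n")
```

Marginal effects that combine both parts would give the overall expenditure difference by food-security status.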
abstract_id: PUBMED:30171678
Incremental Health Care Costs Associated With Food Insecurity and Chronic Conditions Among Older Adults. Introduction: The prevalence of food insecurity and chronic health conditions among older adults is a public health concern. However, little is known about associated health care costs. We estimated the incremental health care costs of food insecurity and selected chronic health conditions among older adults, defined as adults aged 50 or older.
Methods: We analyzed 4 years of data (2011-2014) from the National Health Interview Survey and 3 years of data (2013-2015) from the Medical Expenditure Panel Survey; we used 2-part models to estimate the incremental health care costs associated with food insecurity and 9 chronic conditions (hypertension, coronary heart disease, stroke, emphysema, asthma, cancer, chronic bronchitis, arthritis, and diabetes) among older adults.
Results: Approximately 14% of older adult respondents (n = 2,150) reported being food insecure. The 3 most common chronic conditions were the same for both food-insecure and food-secure older adults: hypertension, arthritis, and diabetes. The adjusted annual incremental health care costs resulting from food insecurity among older adults were higher in the presence of hypertension, stroke, and arthritis (P ≤ .05) and in the presence of diabetes (P ≤ .10). These findings were also true for the incremental health care costs resulting from food insecurity in the absence of these specific chronic conditions.
Conclusion: Our findings show that food insecurity interacts with chronic conditions: health care costs were higher for those who were food insecure and in poor health than for those who were food secure.
abstract_id: PUBMED:34870474
Factors Associated with Chronic Disease and Health Care Utilization Among Young Adults in South Korea. Hypertension, diabetes, and hyperlipidemia have become prevalent in young adults. Health care utilization is a key factor in managing early-onset chronic diseases. This study aimed to examine the factors affecting health care utilization among young South Korean adults with a single chronic disease. From the Korea Health Panel Survey data collected between 2014 and 2017, young adults who were 30-49 years old and diagnosed with a single chronic condition (hypertension, diabetes, or hyperlipidemia) were included in this study (n = 993). The factors affecting health care utilization were analyzed through multiple logistic regression. The health care utilization rate of the 40-49 and 30-39-year age groups was 84.2% and 71.1%, respectively, and utilization was significantly higher in the healthy-behavior group (those who did not smoke or drink and who engaged in physical activity). Among the chronic diseases, hyperlipidemia had the lowest health care utilization rate (62.8%). In the multiple logistic regression analysis, medication intake was more likely in the older, unemployed, and healthy-behavior groups. Patients with hypertension and diabetes were more likely to use health care services than those with hyperlipidemia. Given the rising prevalence of chronic diseases among young adults, these findings may help inform new public health approaches for this population by encouraging appropriate health care utilization.
abstract_id: PUBMED:33999782
Food Security Status among U.S. Older Adults: Functional Limitations Matter. This study aimed to assess the relationship between food security and health outcomes among older adults (age 65+) in the U.S. We used a pooled sample (2011-2015, N = 37,292) from the National Health Interview Survey (NHIS) and ordered logit models to assess characteristics associated with food security including health conditions (diabetes and hypertension) and functional activity limitations. We estimated that 1.3 million individuals aged 65+ in the U.S. had low/very low food security. Having at least one functional limitation (OR = 1.717, 95% CI = 1.436, 2.054) was significantly associated with low/very low food security. Having fair or poor health status (OR = 3.315, 95% CI = 2.938, 3.739) was also a significant factor for food security among older adults, while having health insurance coverage (OR = 0.467, 95% CI = 0.341, 0.64) was negatively associated with food insecurity. Demographics and socioeconomic characteristics were significantly related to food insecurity among seniors. Seniors with functional limitations and poor health status are at risk for food insecurity. Interventions at the clinical site of care may be useful in addressing food security issues for older adults.
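The ordered logit models mentioned above exploit the natural ordering of the food security categories (high, marginal, low, very low). A minimal sketch with statsmodels' OrderedModel follows; the data are simulated and the two predictors (a functional-limitation flag and insurance coverage) are stand-ins for the survey variables, not the NHIS data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 2000
functional_limitation = rng.binomial(1, 0.3, n)
insured = rng.binomial(1, 0.9, n)

# Latent propensity toward worse food security, cut into 4 ordered levels.
latent = 0.8 * functional_limitation - 0.7 * insured + rng.logistic(size=n)
food_security = pd.Series(pd.cut(
    latent, bins=[-np.inf, -1.0, 0.5, 1.5, np.inf],
    labels=["high", "marginal", "low", "very_low"], ordered=True))

X = pd.DataFrame({"functional_limitation": functional_limitation,
                  "insured": insured})
res = OrderedModel(food_security, X, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(res.params[:2]))  # odds ratios for the two predictors
```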
abstract_id: PUBMED:7824819
Health-related worries, perceived health status, and health care utilization. This study examines the association of health-related worries (over cancer, diabetes, work-related stress, heart attack, obesity, general physical fitness, and/or other health conditions) and perceived health status (excellent, good, fair, or poor) with the utilization of health care services among 19,139 Japanese local public service employees. Data on health-related worries and health status were obtained from a self-administered questionnaire survey in 1988 and analyzed in relation to the subsequent 12-month utilization of health care. Results showed that perceived health status was associated with utilization for almost all medical conditions, as was worry over a specific condition with the subsequent utilization of health care services. The implication of these findings is that measures targeting the relief of employees' health-related worries, through health consultation or other health programs, may contribute to reducing employees' health care utilization and costs.
abstract_id: PUBMED:31872170
Diabetes-Related Health Care Utilization and Dietary Intake Among Food Pantry Clients. Purpose: Consuming a diet appropriate for management of diabetes mellitus (DM) is challenging, particularly for adults with food insecurity (FI). DM-related health care services are thought to support better dietary intake. In this study, we explored associations between DM-related health care utilization and dietary intake among FI adults with DM. Methods: We used cross-sectional, baseline data (collected 2015-2016) from a trial designed to improve glycemic control among adult food pantry clients with DM. We examined intake of vegetables, fruit, sugar-sweetened beverages (SSBs), and desserts using the California Health Interview Survey dietary screener. We then examined adjusted associations between dietary intake and two components of DM-related health care utilization (<12 months vs. ≥12 months ago): self-reported visit to a health care provider for DM management and DM self-management education. Results: Among 523 participants (mean hemoglobin A1c 9.8%; body mass index 34.6 kg/m2; 17.0% uninsured), vegetable intake was more frequent in those reporting recent utilization of health care providers for DM management and DSME-related services (p<0.01), compared with those with less recent use. There was no association between intake frequency of fruit or SSBs and utilization of either DM-related service. Participants more recently utilizing DSME-related services consumed desserts more frequently (p=0.02). Relationships persisted after controlling for DM duration, race/ethnicity, education, health insurance, location, medication adherence, and depression. Conclusions: Among FI patients, DM-related services offered in clinical settings may more effectively increase vegetable consumption than decrease consumption of food and beverage items that can worsen glycemic control. Food pantry settings may provide an opportunity to reinforce dietary messaging.
abstract_id: PUBMED:35674161
Racial and oral health disparity associated with perinatal oral health care utilization among underserved US pregnant women. Objective: The study aims to identify specific determinants of dental care utilization during the perinatal period (prenatal and 1-year postnatal) among underserved US women residing in Upstate New York.
Method And Materials: The prospective cohort study included 186 low-income US pregnant women. Demographic-socioeconomic parameters and medical-dental conditions were obtained from questionnaires, electronic medical-dental records, and dental examinations. Multivariate regression analyses were used to assess factors associated with perinatal dental care utilization. As an exploratory effort, a separate logistic model assessed factors associated with adverse birth outcomes.
Results: The results demonstrated unmet oral health needs among the underserved US pregnant women residing in Upstate New York. Despite an average of 2.7 ± 3.6 untreated decayed teeth per person during pregnancy, only 39.3% and 19.9% utilized prenatal and 1-year postnatal dental care, respectively. Previous dental care utilization was a notable factor contributing to a higher uptake of perinatal dental care at a subsequent period. Prenatal dental care utilization was significantly lower among African American women (odds ratio 0.43 [95% CI 0.19, 0.98], P = .04) and positively associated with dental caries severity (OR 2.40 [1.09, 5.12], P = .03). Postnatal utilization was associated with caries severity (OR 4.70 [1.73, 12.74], P = .002) and prevalent medical conditions (hypertension, diabetes mellitus, and emotional conditions). Pregnant women who achieved prenatal caries-free status had lower odds of experiencing adverse birth outcomes; however, this finding did not reach statistical significance because of the limited number of adverse birth cases.
Conclusion: Racial and oral health disparities are associated with perinatal oral health care utilization among underserved US pregnant women in New York. While both prenatal and postnatal dental care utilization were positively associated with oral health status, postnatal utilization specifically was driven by existing medical conditions such as emotional conditions, hypertension, and diabetes mellitus. The results add to existing information on inherent barriers and the needs postulated to improve access to perinatal oral care, thereby informing statewide recommendations to maximize utilization. Because this is a geographically restricted population, the findings apply chiefly to this cohort of underserved pregnant women. However, future, more robust studies are warranted to assess effective strategies to further improve perinatal dental care utilization among underserved pregnant women.
abstract_id: PUBMED:38407756
Associations Between Food Insufficiency and Health Conditions Among New York City Adults, 2017-2018. Food insecurity, a critical social determinant of health, has been measured nationwide in the United States for years. This analysis focuses on food insufficiency, a more severe form of food insecurity, in New York City (NYC) and its association with self-reported physical and mental health conditions. Data from the 2017-2018 NYC Community Health Survey were used to estimate the prevalence of food insufficiency citywide, by neighborhood, and across selected socioeconomic characteristics. Multivariable logistic regression was used to explore the associations between food insufficiency and hypertension, diabetes, obesity, and depression, adjusting for selected sociodemographic characteristics. Approximately 9.4% (95% CI: 8.8-10.0%) of adult New Yorkers aged 18+ reported food insufficiency, with neighborhood variation from 1.7% (95% CI: 0.5-6.2%) to 19.4% (95% CI: 14.2-25.8%). Food insufficiency was more prevalent among Latino/a (16.9%, 95% CI: 15.5-18.3%, p < 0.001), Black (10.1%, 95% CI: 8.8-11.5%, p < 0.001), and Asian/Pacific Islander (6.6%, 95% CI: 5.4-8.1%, p = 0.002) New Yorkers compared to White New Yorkers (4.2%, 95% CI: 3.5-5.1%). Prevalence of food insufficiency was higher among NYC adults with less than a high school education (19.6%, 95% CI: 17.7-21.6%) compared to college graduates (3.8%, 95% CI: 3.2-4.4%, p < 0.001). In the adjusted logistic regression model, food insufficiency was associated with diabetes (OR = 1.36; 95% CI: 1.12-1.65), hypertension (OR = 1.58; 95% CI: 1.32-1.89), and depression (OR = 2.98; 95% CI: 2.45-3.59), but not with obesity (OR = 0.99; 95% CI: 0.84-1.21). Our findings highlight food insufficiency at an important intersection of inequity and disease burden, which is critical for informing public health interventions in the context of a large, densely populated metropolis like NYC.
abstract_id: PUBMED:33096552
Food Insecurity is Directly Associated with the Use of Health Services for Adverse Health Events among Older Adults. Background: In 2018, 14.3 million US households experienced food insecurity, which has been linked to negative health outcomes such as depression and anxiety, diabetes, and hypertension. This connection is particularly important for older adults, who are at greater risk than younger adults for developing certain health conditions.
Objective: We estimated the association between food insecurity and the use of health services for adverse health events over a 12-mo observation period following survey interview for a nationally representative group of older adults participating in the congregate meal (CM) and home-delivered meal (HDM) programs.
Methods: We analyzed data from the Nutrition Services Program (NSP) Outcomes Survey matched to Medicare claims and enrollment data in 2015-2016 for a nationally representative sample of 626 CM or HDM recipients ages 67 y and older. We used logistic regression analysis controlling for demographic characteristics, prior health events, and geography to estimate the association between food insecurity and use of health services, including emergency department visits, inpatient stays, skilled nursing facility stays, and home healthcare episodes. We used ordinary least squares regression analysis to estimate the association between food insecurity and Medicare spending.
Results: Food insecurity was associated with a 16% (95% CI: 1%, 32%) average increase in the likelihood of using health services for adverse health events among HDM participants. Food insecurity was associated with an average increase in the likelihood of emergency department visits of 24% (95% CI: 6%, 41%) for CM participants and 20% (95% CI: 5%, 36%) for HDM participants. There was no observed increase in the likelihood of using the other health services. Food-insecure participants were less likely to have a skilled nursing facility stay.
Conclusions: Food insecurity is associated with an increased likelihood of use of health services for adverse health events in older adult participants in meal service programs.
Answer: Yes, food insufficiency is associated with health status and health care utilization among adults with diabetes. A study analyzing data from adults with diabetes interviewed in the Third National Health and Nutrition Examination Survey found that 6% of adults with diabetes reported food insufficiency. This group was more likely to report fair or poor health status than those who were food secure. Additionally, diabetic adults who were food insufficient reported more physician encounters than those who were food secure. However, there was no association between food insufficiency and hospitalization in bivariate analysis (PUBMED:11422638).
Another study using data from the 2016 Medical Expenditure Panel Survey indicated that households with very low food security are more likely to have any health care expenditure and have total health care expenditures that are significantly higher. Chronic conditions, such as high blood pressure, heart disease, stroke, emphysema, high cholesterol, cancer, diabetes, arthritis, and asthma, were strongly associated with food insecurity and higher health care spending (PUBMED:32187388).
Furthermore, research on older adults (aged 50 or older) showed that the adjusted annual incremental health care costs resulting from food insecurity were higher in the presence of hypertension, stroke, arthritis, and diabetes. This suggests that food insecurity interacts with chronic conditions, leading to higher health care costs among those who are food insecure and have poor health (PUBMED:30171678).
In summary, food insufficiency is associated with poorer health status and increased health care utilization among adults with diabetes, and this relationship is also observed in the context of other chronic conditions. |
Instruction: Is there a relationship between solubility and resorbability of different calcium phosphate phases in vitro?
Abstracts:
abstract_id: PUBMED:27212690
Is there a relationship between solubility and resorbability of different calcium phosphate phases in vitro? Background: Does chemistry govern biology, or is it the other way around? That is the broad question this study attempted to answer.
Method: Comparison was made between the solubility and osteoclastic resorbability of four fundamentally different monophasic calcium phosphate (CP) powders with monodisperse particle size distributions: alkaline hydroxyapatite (HAP), acidic monetite (DCP), β-calcium pyrophosphate (CPP), and amorphous CP (ACP). Results: With the exception of CPP, the difference in solubility between different CP phases became neither mitigated nor reversed, but augmented in the resorptive osteoclastic milieu. Thus, DCP, the phase with the highest solubility, was also resorbed more intensely than any other CP phase, whereas HAP, the phase with the lowest solubility, was resorbed least. CPP was retained inside the cells for the longest period of time, indicating hindered digestion of only this particular type of CP. Osteoclastogenesis was mildly hindered in the presence of HAP, ACP and DCP, but not in the presence of CPP. The most viable CP powder with respect to mitochondrial succinic dehydrogenase activity was the one present in natural biological bone tissues: HAP.
Conclusion: Chemistry in this case does have a direct effect on biology. Biology neither overrides nor reverses the chemical propensities of inorganics with which it interacts, but rather augments and takes a direct advantage of them.
Significance: These findings set the fundamental basis for designing the chemical makeup of CP and other biosoluble components of tissue engineering constructs for their most optimal resorption and tissue regeneration response.
abstract_id: PUBMED:11857428
Resorbability and solubility of zinc-containing tricalcium phosphate. Using zinc-containing tricalcium phosphate (ZnTCP) as the zinc carrier for zinc-releasing calcium phosphate ceramic implants promoted bone formation around the implants. Because no quantitative information was available on the equilibrium solubility and resorbability of ZnTCP, in vitro equilibrium solubility and in vivo resorbability of ZnTCP were determined and compared quantitatively in this study. The solubility of ZnTCP decreased with increasing zinc content. The negative logarithm of the solubility product (Ksp) of ZnTCP was expressed as pKsp = 28.686 + 1.7414C - 0.42239C^2 + 0.063911C^3 - 0.0051037C^4 + 0.0001595C^5 in air, where C is the zinc content in ZnTCP (mol %). The solubility of ZnTCP containing a nontoxic level of zinc (<0.63 wt %) decreased to 52-92% of the solubility of pure tricalcium phosphate (TCP) in the pH range 5.0-7.4. However, the in vivo resorbed volume of ZnTCP containing the same amount of zinc was much lower than that expected from the in vitro solubility, becoming as low as 26-20% of that of TCP. Cellular resorption of TCP is substantially a process of dissolution in a fluid with an acidic pH that is maintained by the activities of cells. Therefore, the reduction of the resorbability of ZnTCP could be attributable principally to its lowered cellular activation property relative to that associated with pure TCP.
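The polynomial above is directly computable; a higher pKsp means a less soluble material. A small sketch evaluating it at a few zinc contents (the example contents are arbitrary illustrations, not values from the study):

```python
# pKsp of zinc-containing tricalcium phosphate as a function of zinc
# content C (mol %), per the polynomial quoted in PUBMED:11857428.
def pksp_zntcp(c: float) -> float:
    return (28.686 + 1.7414 * c - 0.42239 * c**2
            + 0.063911 * c**3 - 0.0051037 * c**4 + 0.0001595 * c**5)

for c in (0.0, 1.0, 5.0, 10.0):  # arbitrary example zinc contents (mol %)
    print(f"Zn = {c:4.1f} mol%  ->  pKsp = {pksp_zntcp(c):.3f}")
```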
abstract_id: PUBMED:2242400
Studies of the solubility of different calcium phosphate ceramic particles in vitro. In vitro solubility tests of hydroxyapatite, tetracalcium phosphate, or tricalcium phosphate particles were performed in lactate, citrate, Gomori's, or Michaelis buffer at pH 6.2 or 7.2, and in distilled water (aqua destillata). The results showed that in general the solubility decreased in the order tetracalcium phosphate > tricalcium phosphate > hydroxyapatite, except in lactate or citrate buffer, where the solubility order was tetracalcium phosphate = tricalcium phosphate > hydroxyapatite. The influence of the specific buffer used is much larger than that of either pH or the specific calcium phosphate salt tested. The pH stability of lactate buffer and distilled water is very low; the other buffer solvents had a rather stable pH value.
abstract_id: PUBMED:26232621
Interlaboratory studies on in vitro test methods for estimating in vivo resorption of calcium phosphate ceramics. A potential standard method for measuring the relative dissolution rate to estimate the resorbability of calcium-phosphate-based ceramics is proposed. Tricalcium phosphate (TCP), magnesium-substituted TCP (MgTCP) and zinc-substituted TCP (ZnTCP) were dissolved in a buffer solution free of calcium and phosphate ions at pH 4.0, 5.5 or 7.3 at nine research centers. Relative values of the initial dissolution rate (relative dissolution rates) were in good agreement among the centers. The relative dissolution rate coincided with the relative volume of resorption pits of ZnTCP in vitro. The relative dissolution rate coincided with the relative resorbed volume in vivo in the case of comparison between microporous MgTCPs with different Mg contents and similar porosity. However, the relative dissolution rate was in poor agreement with the relative resorbed volume in vivo in the case of comparison between microporous TCP and MgTCP due to the superimposition of the Mg-mediated decrease in TCP solubility on the Mg-mediated increase in the amount of resorption. An unambiguous conclusion could not be made as to whether the relative dissolution rate is predictive of the relative resorbed volume in vivo in the case of comparison between TCPs with different porosity. The relative dissolution rate may be useful for predicting the relative amount of resorption for calcium-phosphate-based ceramics having different solubility under the condition that the differences in the materials compared have little impact on the resorption process such as the number and activity of resorbing cells.
Statement Of Significance: The evaluation and subsequent optimization of the resorbability of calcium phosphate are crucial in the use of resorbable calcium phosphates. Although the resorbability of calcium phosphates has usually been evaluated in vivo, establishing a standard in vitro method that can predict in vivo resorption would accelerate the development and commercialization of new resorbable calcium phosphate materials and reduce the use of animals. However, only a few studies have proposed such an in vitro method with a direct comparison between in vitro and in vivo resorption. We propose here an in vitro method based on measuring the dissolution rate. The efficacy and limitations of the method were evaluated by international round-robin tests as well as by comparison with in vivo resorption studies for future standardization. This study was carried out as one of the Versailles Projects on Advanced Materials and Standards (VAMAS).
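The proposed method rests on the initial dissolution rate, which in practice is the early-time slope of released-ion concentration versus time, reported relative to a reference material. A rough sketch of that computation, assuming hypothetical early-time calcium-release measurements (the numbers below are invented):

```python
import numpy as np

def initial_dissolution_rate(t_min, conc):
    """Slope of concentration vs. time over the early, near-linear portion."""
    slope, _intercept = np.polyfit(t_min, conc, 1)
    return slope

# Hypothetical early-time calcium release data (minutes, mmol/L).
t = np.array([0, 5, 10, 15, 20])
tcp = np.array([0.00, 0.11, 0.22, 0.32, 0.43])    # plain TCP (reference)
mgtcp = np.array([0.00, 0.07, 0.14, 0.20, 0.27])  # Mg-substituted TCP

relative = initial_dissolution_rate(t, mgtcp) / initial_dissolution_rate(t, tcp)
print(f"relative dissolution rate (MgTCP / TCP) = {relative:.2f}")
```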
abstract_id: PUBMED:34890630
Studies on the pH-Dependent Solubility of Various Grades of Calcium Phosphate-based Pharmaceutical Excipients. Calcium phosphate-based pharmaceutical excipients, including calcium hydrogen phosphate anhydrous and dihydrate and calcium hydroxide phosphate, have been well established in pharmaceutical technology for a very long time. Nowadays, they are of increasing interest to the pharmaceutical industry because, in addition to their advanced functional properties, they offer beneficial biocompatibility and biodegradability. Yet, comprehensive information on the solubility of these popular excipients remains limited, especially under variable pH conditions reflecting those of the gastrointestinal tract (GIT). The study has shown that the solubility of calcium phosphates, as well as their dissolution rate, decreases significantly with increasing pH of the dissolution fluids. The highest solubility was observed for dibasic calcium phosphate dihydrate, the lowest for tribasic calcium phosphate. This article also provides a comparison of various calcium phosphate types originating from different manufacturers, which may prove useful and help formulation scientists design new medicinal products.
abstract_id: PUBMED:16929976
Relationship between osteogenic characteristics of bone marrow cells and calcium phosphate surface relief and solubility. The capacity of mouse bone marrow cells to adhere to calcium phosphate surfaces and form tissue plates depending on the surface relief and solubility was studied in an ectopic bone formation test. Calcium phosphate coatings of titanium disks, produced by anodic spark (micro-arc) oxidation in 10% orthophosphoric acid with hydroxyapatite particles, differed in structure (coating thickness, pore size, and roughness) and solubility (level of in vitro oxidation of 1-week extracts of implants). The chemical (phase and elemental) composition of the studied calcium phosphate coatings was virtually the same. The findings indicate that histogenesis is regulated by the physicochemical characteristics of the implant surface. It seems that the osteogenic potential of calcium phosphate surfaces is largely determined by their relief, rather than by the pH of degradation products.
abstract_id: PUBMED:28457125
Solubility of Calcium Phosphate in Concentrated Dairy Effluent Brines. The solubility of calcium phosphate in concentrated dairy brine streams is important in understanding mineral scaling on equipment, such as membrane modules, evaporators, and heat exchangers, and in brine pond operation. In this study, the solubility of calcium phosphate has been assessed in the presence of up to 300 g/L sodium chloride as well as lactose, organic acids, and anions at 10, 30, and 50 °C. As a neutral molecule, lactose has a marginal but still detectable effect upon calcium solubility. However, additions of sodium chloride up to 100 g/L result in a much greater increase in calcium solubility. Beyond this point, the concentrations of ions in the solution decrease significantly. These changes in calcium solubility can readily be explained through changes in the activity coefficients. There is little difference in calcium phosphate speciation between 10 and 30 °C. However, at 50 °C, the ratio of calcium to phosphate in the solution is lower than at the other temperatures and varies less with ionic strength. While the addition of sodium lactate has less effect upon calcium solubility than sodium citrate, it still has a greater effect than sodium chloride at an equivalent ionic strength. Conversely, when these organic anions are present in the solution in the acid form, the effect of pH dominates and results in much higher solubility and a calcium/phosphate ratio close to one, indicative of dicalcium phosphate dihydrate as the dominant solid phase.
abstract_id: PUBMED:1542015
Interaction of calcium and phosphate decreases ileal magnesium solubility and apparent magnesium absorption in rats. We tested the hypothesis that increased intakes of calcium and phosphate lower magnesium solubility in the intestinal lumen, causing a decreased magnesium absorption. In in vitro experiments at a constant magnesium concentration, increasing calcium concentrations reduced magnesium solubility. This effect did not occur in the absence of phosphate. Increasing phosphate concentrations decreased the solubility of magnesium in the presence, but not in the absence, of calcium. These results suggest that the formation of an insoluble calcium-magnesium-phosphate complex determines magnesium solubility. To extend this concept to in vivo conditions, rats were fed purified diets containing a constant concentration of magnesium (16.4 μmol/g) but different concentrations of calcium (25, 100 or 175 μmol/g) and phosphate (58, 103 or 161 μmol/g). Increased intakes of calcium decreased magnesium solubility in the ileal lumen and lowered magnesium absorption. The latter result occurred only if the dietary phosphate concentration was at least 103 μmol/g. Increasing dietary phosphate concentrations reduced both magnesium solubility in the ileum and magnesium absorption, but only if the dietary calcium concentration was at least 100 μmol/g. These results support those obtained in vitro. We conclude that increased intakes of calcium and phosphate decrease magnesium absorption by the formation of an insoluble calcium-magnesium-phosphate complex in the intestinal lumen.
abstract_id: PUBMED:2672888
Mineral phases of calcium phosphate. Many studies of calcium phosphate precipitation have been made using relaxation techniques in which the concentrations of the lattice ions are allowed to decrease as equilibrium is approached. Since the nature of the phases that form depend markedly on the solution composition, this decrease can lead to concomitant phase transformations during the crystallization experiments. The results of the present constant composition (CC) studies show that defect apatites may be formed under conditions of sustained supersaturation with a non-stoichiometric coefficient dependent on the pH of the growth medium. An important factor in analyzing these experiments is the initial surface modification and ion-exchange processes involving H+ and Ca2+ ions after inoculation of the supersaturated solutions. Thereafter, active growth sites may be eliminated as the crystals undergo lattice perfection. Transformation of dicalcium phosphate dihydrate to octacalcium phosphate, involving dissolution and subsequent nucleation and growth of the new phase, is also influenced by surface roughening of the initial phase. Typical inhibitors that reduce the rate of growth of seed crystals in supersaturated solutions may actually induce the nucleation of calcium phosphate phases when immobilized on inert surfaces. This may be a factor in the modulation of crystal growth in many biological systems.
abstract_id: PUBMED:26042720
The influence of Sr content in calcium phosphate coatings. In this study, calcium phosphate coatings with different amounts of strontium (Sr) were prepared using a biomineralization method. The incorporation of Sr changed the composition and morphology of the coatings from plate-like to sphere-like. Dissolution testing indicated that the solubility of the coatings increased with increasing Sr concentration. Evaluation of extracts (with Sr concentrations ranging from 0 to 2.37 μg/mL) from the HA, 0.06Sr, 0.6Sr, and 1.2Sr coatings during in vitro cell culture showed that Sr incorporation into the coatings significantly enhanced ALP activity in comparison to cells treated with control and HA-eluted media. These findings show that calcium phosphate coatings can promote osteogenic differentiation even at low strontium contents.
Answer: Yes, there is a relationship between solubility and resorbability of different calcium phosphate (CP) phases in vitro. Studies have shown that the solubility of CP phases can influence their resorbability by osteoclasts, which are cells responsible for bone resorption. For instance, a comparison between the solubility and osteoclastic resorbability of four different monophasic CP powders revealed that, with the exception of β-calcium pyrophosphate (CPP), the solubility differences between CP phases were not only maintained but also augmented in the resorptive osteoclastic milieu. Specifically, monetite (DCP), which had the highest solubility, was also resorbed more intensely than other CP phases, while hydroxyapatite (HAP), with the lowest solubility, was resorbed the least. CPP was retained inside the cells for the longest time, indicating hindered digestion of this particular CP type. These findings suggest that chemistry directly affects biology, as biological processes do not override the chemical properties of inorganics but rather take advantage of them (PUBMED:27212690).
Additionally, the solubility of zinc-containing tricalcium phosphate (ZnTCP) was found to decrease with increasing zinc content, and its in vivo resorbability was much lower than expected from its in vitro solubility, suggesting that the reduction in resorbability could be due to its lowered cellular activation property relative to pure tricalcium phosphate (TCP) (PUBMED:11857428).
Moreover, in vitro solubility tests of different calcium phosphate ceramic particles showed that solubility generally decreased in the order of tetracalcium phosphate > tricalcium phosphate > hydroxyapatite, with the specific buffer used having a larger influence than pH or the specific calcium phosphate salt tested (PUBMED:2242400).
Interlaboratory studies also proposed a potential standard method for measuring the relative dissolution rate to estimate the resorbability of calcium-phosphate-based ceramics, with the relative dissolution rate coinciding with the relative volume of resorption pits of ZnTCP in vitro and the relative resorbed volume in vivo in some cases (PUBMED:26232621). |
Instruction: Is direct coronary stenting the best strategy for long-term outcome?
Abstracts:
abstract_id: PUBMED:12360176
Is direct coronary stenting the best strategy for long-term outcome? Results of the multicentric randomized benefit evaluation of direct coronary stenting (BET) study. Background: Preliminary trials of direct coronary stenting have demonstrated the benefits of this approach. It lowers procedural cost, time, and radiation exposure compared with predilatation. Nevertheless, the long-term outcome after direct stenting remains less well known.
Methods: Between January and September 1999, 338 patients were randomly assigned to either direct stent implantation (DS+, n = 173) or standard stent implantation with balloon predilatation (DS-, n = 165). Clinical follow-up was performed.
Results: Baseline characteristics were similar in the 2 groups. Procedural success was achieved in 98.3% of patients assigned to DS+ and 97.5% of patients assigned to DS- (not significant). Clinical follow-up was obtained in 99% of patients (mean 16.4 +/- 4.6 months). Major adverse cardiac events (defined as whichever of the following occurred first: cardiac death, myocardial infarction, unstable angina, or new revascularization) were observed at a lower rate in the DS+ group than in the DS- group, but this difference was not significant (11.3% vs 18.2%, P = NS). The difference in target lesion revascularization rate between the DS+ group (7%) and the DS- group (5.2%) was also not significant. Multivariate analysis showed that direct stenting had no influence on the long-term major adverse cardiac events rate. Independent relationships were found between the long-term major adverse cardiac events rate and final minimal lumen diameter <2.48 mm (relative risk [RR] 0.449, CI 0.239-0.845, P = .013), prior myocardial infarction (RR 2.028, CI 1.114-3.69, P = .02), and hypertension (RR 1.859, CI 1.022-3.383, P = .042).
Conclusion: The main finding that emerges from this randomized study is that the influence of direct stenting on long-term need for new target lesion revascularization does not differ from that of stenting with balloon predilatation.
abstract_id: PUBMED:25696356
Safety, efficacy and costs associated with direct coronary stenting compared with stenting after predilatation: A randomised controlled trial. Objectives: Comparison of the in-hospital success rates, procedural costs and short-term clinical outcomes of direct stenting versus stenting after balloon predilatation.
Methods: Altogether, 400 patients with angina pectoris and/or myocardial ischaemia due to coronary stenoses in a single native vessel were randomised to either direct stenting or stenting after predilatation. Baseline characteristics were evenly distributed between the two groups.
Results: Procedural success rates were similar (96.0% direct stenting group vs. 94.5% predilatation) as well as final successful stent implantation (98.3 vs. 97.8%), while the primary success rate of direct stenting alone was 88.3%, p=0.01. In multivariate analysis, angiographic lesion calcification was an independent predictor of unsuccessful direct stenting (odds ratio 7.1, 95% confidence interval 2.8-18.2, p<0.0001). Rates of troponin I rises >0.15 μg/l, used as a measure of distal embolisation, were similar in both groups (17.8 vs. 17.1%). Rates of major adverse cardiac events at 30 days were 4.5% in the direct stenting group versus 5.5% in the predilated group (ns). Direct stenting was associated with savings in fluoroscopy time, and angiographic contrast agent use, and a reduction in utilisation of angioplasty balloons (0.4 vs. 1.17 balloons per patient, p<0.001). Mean per patient procedural costs associated with direct stenting versus predilatation were €2545±914 versus €2763±842 (p=0.01), despite the implantation of more stents in the directly stented group.
Conclusion: Compared with a strategy of stenting preceded by balloon predilatation, direct stenting was equally safe and effective, with similar in-hospital and 30-day clinical outcomes, and modest procedural cost-savings. A calcified lesion predicted unsuccessful direct stenting.
abstract_id: PUBMED:35982991
Clinical Outcomes Following Simple or Complex Stenting for Coronary Bifurcation Lesions: A Meta-Analysis. Background: Stent placement remains a challenge for coronary bifurcation lesions. While both simple and complex stenting strategies are available, it is unclear which one results in better clinical outcomes. This meta-analysis aims to explore the long-term prognosis following treatment with the 2 stenting strategies.
Method: Randomized controlled trials found from searches of the PubMed, EMBASE, and Cochrane Central Register of Controlled Trials were included in this meta-analysis. The complex stent placement strategy was identified as the control group, and the simple stent placement strategy was identified as the experimental group. Data were synthesized with a random effects model. The quality of the randomized controlled trials was assessed by Jadad scale scores. The clinical endpoints at 6 months, 1 year, and 5 years were analyzed.
Results: A total of 11 randomized controlled trials met the inclusion criteria, comprising 2494 patients. The odds ratio (OR) for major adverse cardiac events (MACEs) at 6 months was 0.85 (95% confidence interval [CI] 0.53-1.35; P = .49, I² = 0%). The OR for MACEs at 1 year was 0.61 (95% CI 0.36-1.05; P = .08, I² = 0%). The OR for MACEs at 5 years was 0.69 (95% CI 0.51-0.92; P = .01, I² = 0%). Compared with the complex strategy, the simple strategy was associated with a lower incidence of MACEs at 5 years.
Conclusion: Compared to the complex stenting strategy, the simple stenting strategy can better reduce the occurrence of long-term MACEs for coronary bifurcation lesions.
abstract_id: PUBMED:35074298
Very Long-Term Clinical Outcomes After Direct Stenting in Patients Presenting With ST-Segment Elevation Myocardial Infarction. Background/purpose: Direct Stenting (DS) could be associated with reduced distal embolization and improved reperfusion in patients with ST-segment elevation myocardial infarction (STEMI). However, the impact of DS on long-term outcomes remains unclear, therefore we evaluated the impact of DS on very long-term clinical outcome in STEMI.
Methods/materials: Between April 2002 and December 2004, patients presenting with STEMI undergoing percutaneous coronary intervention were investigated. The study population was divided into two groups: DS and conventional stenting (CS) and stratified according to initial TIMI flow. Major adverse cardiac events (MACE) were assessed at 10 years and all-cause mortality at 15 years. Cox proportional hazards models were used. When the proportional hazards assumption was not satisfied, landmark analysis at mid-term (2 years) was performed.
Results: A total of 812 consecutive patients were evaluated; 6 patients were excluded due to inadequate angiographic images, 450 (55.8%) underwent DS and 356 (44.2%) CS. At 15 years of follow-up, DS was associated with a reduction in all-cause mortality (DS 35.0% vs. CS 45.3%, aHR 0.74, 95% CI 0.58-0.93, p = 0.010). The landmark analysis at 2 years identified reduced 2-year MACE with DS compared with CS (6.8% vs. 14%, aHR 0.67, 95% CI 0.49-0.93, p = 0.015); beyond 2 years, no significant differences were found between the groups (27.4% vs. 29.3%, aHR 1.00, 95% CI 0.74-1.36, p = 0.999). In patients with baseline TIMI 0-1, DS was associated with lower 10-year MACE and 15-year mortality compared with CS (aHR 0.71, 95% CI 0.55-0.92, p = 0.010 and aHR 0.65, 95% CI 0.50-0.84, p = 0.001, respectively).
Conclusions: DS was associated with reduced 15-year all-cause mortality and reduced mid-term MACE rate in patients with STEMI. Clinical events reduction associated with DS was particularly relevant in patients with initial TIMI flow 0-1.
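The landmark analysis used above, when the proportional hazards assumption failed, fits the treatment effect separately before and after a pre-specified time point. A rough sketch with the lifelines package on simulated data follows; the 2-year landmark mirrors the study design, but the survival times and the early-only benefit of DS are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 800
direct = rng.binomial(1, 0.55, n)

# Piecewise-exponential times: DS halves the hazard before year 2 only.
h_early = 0.06 * np.where(direct == 1, 0.5, 1.0)
t_early = rng.exponential(1 / h_early)
time = np.where(t_early <= 2, t_early, 2 + rng.exponential(1 / 0.06, n))
event = (time <= 15).astype(int)          # administrative censoring at 15 years
time = np.minimum(time, 15.0)
df = pd.DataFrame({"time": time, "event": event, "direct": direct})

# Landmark at 2 years: one Cox fit per follow-up window.
early = df.copy()
early["event"] = ((df["time"] <= 2) & (df["event"] == 1)).astype(int)
early["time"] = df["time"].clip(upper=2.0)
late = df[df["time"] > 2].copy()
late["time"] = late["time"] - 2.0         # time since the landmark

for label, d in [("0-2 y", early), ("beyond 2 y", late)]:
    cph = CoxPHFitter().fit(d, duration_col="time", event_col="event")
    print(label, "HR for DS =", round(float(np.exp(cph.params_["direct"])), 2))
```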
abstract_id: PUBMED:32025585
The angiography-guided spot versus entire stenting in patients with long coronary lesions trial: Study design and rationale for a randomized controlled trial protocol. Background: /Purpose: Long-stenting, even with a second-generation drug-eluting stent (DES), is an independent predictor of restenosis and stent thrombosis in patients with long coronary lesions. Spot-stenting, i.e., selective stenting of only the most severe stenotic segments of a long lesion, may be an alternative to a DES. The purpose of this study is to compare the one-year clinical outcomes of patients with spot versus entire stenting in long coronary lesions using a second-generation DES.
Method: This study is a randomized, prospective, multi-center trial comparing long-term clinical outcomes of angiography-guided spot versus entire stenting in patients with long coronary lesions (≥25 mm in length). The primary endpoint is target vessel failure (TVF) at 12 months, a composite of cardiac death, target vessel-related myocardial infarction, and target vessel revascularization (TVR). A total of 470 patients are enrolled for this study according to sample size calculations. This study will be conducted to evaluate the non-inferiority of spot stenting compared to entire stenting with zotarolimus-eluting stents (ZES).
Results: This study is designed to evaluate the clinical impact of spot-stenting with ZESs for TVF due to possible edge restenosis or non-target lesion revascularization. Theoretically, spot-stenting may decrease the risk of TVR and the extent of endothelial dysfunction.
Conclusion: This SPOT trial will provide clinical insight into spot-stenting with a current second-generation DES as a new strategy for long coronary lesions.
abstract_id: PUBMED:21039084
Unprotected left main stenting, short- and long-term outcomes. Background: Coronary bypass surgery is recommended for the treatment of left main coronary stenosis. Recently a percutaneous approach has been described as a feasible option.
Objectives: To present the in-hospital and long-term clinical and angiographic outcomes of a consecutive group of patients undergoing stenting for unprotected left main coronary artery (LMCA) disease, and to compare the clinical and angiographic outcomes of drug-eluting stents (DES) versus bare-metal stents (BMS).
Methods: 238 consecutive patients underwent unprotected LMCA stenting. 165 received BMS and 73 received DES. Most patients (88.7%) presented with acute coronary syndrome. Clinical (100%) and angiographic (84%) follow-up was obtained.
Results: Patients' presentation: STEMI (7.2%), non-STEMI (13.5%), unstable angina (67.6%), stable angina (11.7%). The procedural success rate was 100%. In-hospital mortality was 2.1%, all in patients who presented in unstable hemodynamic condition. None of the patients needed emergent CABG. In the long-term follow-up (average three years) there were 12 deaths (5%); 3 patients required CABG and 25 patients required TVR. The overall angiographic LM restenosis rate showed a trend toward a lower rate in the DES group than in the BMS group (9.6% versus 13.8%, P = 0.08). There was no difference in one-year mortality (4.1% versus 4.2%) or AMI (2.7% versus 2.8%) between DES and BMS.
Conclusions: Stenting for LM stenosis can be performed safely with acceptable in-hospital and long-term outcomes. Current guidelines should be reconsidered. Drug-eluting stent implantation for unprotected LMCA stenosis appears safe with regard to acute and long-term complications and is more effective than BMS implantation in preventing restenosis.
abstract_id: PUBMED:11590675
Direct coronary stenting without balloon or device pretreatment: acute success and long-term results. Improvements in coronary stents have made planned direct coronary stenting technically feasible, though safety, acute success, cost-effectiveness, and long-term results remain to be determined. Sequential patients eligible for direct stenting were prospectively characterized and treated with either direct or secondary stenting. Major adverse cardiovascular events (MACE) such as cardiac death, myocardial infarction (MI), target vessel ischemia, or revascularization (TVR) were followed for 6 months post-PCI. Enrollment included 128 direct (1.38 lesions/patient) and 69 secondary (1.39 lesions/patient) stented patients. Direct stenting was successful in 99% (with 5% crossover to secondary stenting) without major procedural complications and with a similar rate of vessel wall dissection or no-reflow phenomenon (2.3% vs. 2.1%; P > 0.05) as the secondary stenting group. There was a trend toward less postprocedural CPK-MB elevation in the nonacute MI patients with direct vs. secondary stenting (3% vs. 11%, respectively). At 6 months, there were no statistically significant differences in overall MACE. Direct stenting has a high success rate, low complication rate, and durable long-term results. Procedural cost and time savings, less contrast use and radiation exposure make direct stenting attractive in properly selected patients.
abstract_id: PUBMED:37501906
Mid-/Long-Term Outcome of Neuroendovascular Treatment for Chronic Carotid Artery Total Occlusion. Objective: The natural course of chronic carotid artery total occlusion (CTO) is poor. Previous reports suggested that carotid artery stenting (CAS) improves the clinical outcome of CTO. However, its long-term efficacy has not been established. This study assessed the mid- and long-term clinical outcome of CAS for CTO.
Methods: We evaluated the clinical outcome of 15 patients who underwent CAS for CTO between September 2010 and October 2019.
Results: The technical success rate of recanalization was 93.3% (14 of 15 patients). Eight patients were treated using self-expanding stents, and six were treated using self-expanding coronary stents. Symptomatic procedure-related complications developed in two patients (13.3%). During the follow-up period (mean 34.9 months), symptomatic ipsilateral stroke was not noted. One patient (7.1%) developed asymptomatic re-occlusion, but stent patency was preserved in 13 patients (92.9%).
Conclusion: CAS for CTO may be safe and feasible based on the mid- and long-term outcome.
abstract_id: PUBMED:24155092
Impact of direct stenting on outcome of patients with ST-elevation myocardial infarction transferred for primary percutaneous coronary intervention (from the EUROTRANSFER registry). Objectives: We sought to evaluate the impact of direct stenting technique on angiographic and clinical outcomes of patients with ST-segment elevation myocardial infarction (STEMI) undergoing primary angioplasty (PCI).
Methods: Data on 1,419 patients who underwent immediate PCI for STEMI with implantation of ≥1 stent within native coronary artery were retrieved from the EUROTRANSFER Registry database. Patients were stratified based on the stent implantation technique: direct (without predilatation) vs. conventional stenting. Propensity score adjustment was used to control possible selection bias.
Results: Direct stenting technique was used in 276 (19.5%) patients. Remaining 1,143 patients were treated with stent implantation after balloon predilatation. Direct compared with conventional stenting resulted in significantly greater rates of postprocedural TIMI grade 3 flow (conventional vs. direct stenting: 91.5% vs. 94.9%, adjusted OR 2.09 (1.13-3.89), P = 0.020), and lower risk of no-reflow (3.4% vs. 1.4%, adjusted OR 0.31 (0.10-0.92), P = 0.035). The rates for ST-segment resolution >50% after PCI were higher in patients treated with direct stenting technique (76.3% vs. 86.2%, adjusted OR 1.64 (1.10-2.46), P = 0.016). A significant reduction in 1-year mortality in patients from the direct stenting group compared with the conventional stenting group, even after adjustment for propensity score was observed (6.5% vs. 2.9%, adjusted OR 0.45 (0.21-0.99), P = 0.047).
Conclusions: When anatomically and technically feasible, the use of direct stenting technique may result in improved long-term survival in patients with STEMI undergoing primary PCI.
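The propensity score adjustment mentioned above is a standard way to control confounding by indication in registry data. One simple variant, sketched below, models the probability of receiving direct stenting and then enters that score as a covariate in the outcome model. Everything here (covariates, coefficients, sample size) is invented for illustration; this is not the registry's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1400
age = rng.normal(62, 10, n)
anterior_mi = rng.binomial(1, 0.4, n)
covars = sm.add_constant(np.column_stack([age, anterior_mi]))

# Treatment model: probability of direct stenting given covariates.
p_direct = 1 / (1 + np.exp(-(-2.0 + 0.02 * age - 0.3 * anterior_mi)))
direct = rng.binomial(1, p_direct)
ps = sm.Logit(direct, covars).fit(disp=False).predict(covars)

# Outcome model: 1-year mortality, adjusted for the propensity score.
p_death = 1 / (1 + np.exp(-(-3.5 - 0.6 * direct + 0.03 * age)))
death = rng.binomial(1, p_death)
fit = sm.Logit(death, sm.add_constant(np.column_stack([direct, ps]))).fit(disp=False)
print("adjusted OR for direct stenting:", round(float(np.exp(fit.params[1])), 2))
```

Matching and inverse-probability weighting on the same score are common alternatives to simple covariate adjustment.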
abstract_id: PUBMED:25964649
Comparing Direct Stenting With Conventional Stenting in Patients With Acute Coronary Syndromes: A Meta-Analysis of 12 Clinical Trials. Our aim was to compare direct stenting (DS) with conventional stenting (CS) in patients with acute coronary syndrome (ACS). We searched PubMed, EMBASE, and ISI Web of Science for eligible studies. The primary end point was short-term major adverse cardiac events (MACEs). Secondary end points were 1-year mortality and the post-procedural no-reflow phenomenon. Twelve trials in 8998 patients were included. The odds ratios (ORs) were pooled using the Mantel-Haenszel fixed effect model. Short-term MACEs were significantly reduced in the DS arm compared with the CS arm (5.00% vs 8.08%, DS vs CS, respectively; OR [95% confidence interval] = 0.61 [0.46-0.80], P = .0004). One-year mortality and the post-procedural no-reflow phenomenon were significantly lower in the DS group. No heterogeneity was observed on the I² test (P-het = .81, .89, and .77 for each end point, respectively). This meta-analysis demonstrated that in selected patients with ACS, DS is not only safe and feasible but also reduces short-term and 1-year mortality as well as the occurrence of the post-procedural no-reflow phenomenon.
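For reference, the Mantel-Haenszel fixed-effect estimator named above pools the trials' 2x2 tables in closed form: OR_MH = sum_i(a_i*d_i/n_i) / sum_i(b_i*c_i/n_i), where a and b are events and non-events under DS, c and d the same under CS, and n_i is the trial total. A short sketch with invented trial counts (not the actual 12-trial data):

```python
# Mantel-Haenszel fixed-effect pooled odds ratio over 2x2 tables
# (a = events/DS, b = no-event/DS, c = events/CS, d = no-event/CS).
# The trial counts below are invented, not the meta-analysis data.
tables = [
    (12, 188, 20, 180),
    (8, 292, 15, 285),
    (25, 475, 40, 460),
]
num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
print(f"pooled OR (Mantel-Haenszel) = {num / den:.2f}")
```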
Answer: The question of whether direct coronary stenting (DS) is the best strategy for long-term outcomes has been explored in various studies, with mixed results.
The BET study (PUBMED:12360176) found that direct stenting did not significantly influence the long-term major adverse cardiac events (MACE) rate compared to stenting with balloon predilatation. The study concluded that the influence of direct stenting on the long-term need for new target lesion revascularization does not differ from that of stenting with balloon predilatation.
Another study (PUBMED:25696356) reported that direct stenting was equally safe and effective compared to stenting after predilatation, with similar in-hospital and 30-day clinical outcomes, and modest procedural cost-savings. However, it noted that a calcified lesion predicted unsuccessful direct stenting.
A meta-analysis (PUBMED:35982991) comparing simple versus complex stenting strategies for coronary bifurcation lesions found that the simple (single-stent) strategy was associated with a lower incidence of MACEs at 5 years, although this comparison concerns stenting complexity rather than direct stenting per se.
In patients presenting with ST-segment elevation myocardial infarction (STEMI), direct stenting was associated with reduced 15-year all-cause mortality and reduced mid-term MACE rate, particularly in patients with initial TIMI flow 0-1 (PUBMED:35074298).
The SPOT trial (PUBMED:32025585) is designed to evaluate the clinical impact of spot-stenting, i.e., selective stenting of only the most severely stenotic segments of a long lesion, suggesting that spot-stenting may decrease the risk of target vessel revascularization (TVR) and the extent of endothelial dysfunction.
For unprotected left main stenting, one study (PUBMED:21039084) suggested that stenting for left main stenosis can be performed safely with acceptable in-hospital and long-term outcomes, and drug-eluting stent implantation for unprotected left main coronary artery stenosis appears safe and more effective in preventing restenosis compared to bare-metal stent implantation.
Another study (PUBMED:11590675) reported that direct stenting has a high success rate, low complication rate, and durable long-term results, with procedural cost and time savings.
The EUROTRANSFER registry (PUBMED:24155092) found that direct stenting resulted in significantly greater rates of postprocedural TIMI grade 3 flow, lower risk of no-reflow, and higher rates of ST-segment resolution after PCI. Additionally, there was a significant reduction in 1-year mortality in patients from the direct stenting group compared with the conventional stenting group, even after adjustment for propensity score.
A meta-analysis (PUBMED:25964649) demonstrated that in selected patients with acute coronary syndrome (ACS), direct stenting is not only safe and feasible but also reduces short-term and 1-year mortality as well as the occurrence of after-procedural no-reflow phenomenon.
In summary, while some studies suggest that direct stenting does not significantly differ from conventional stenting with balloon predilatation in terms of long-term MACE rates (PUBMED:12360176), other studies and meta-analyses indicate that direct stenting may be associated with reduced long-term mortality, lower incidence of MACEs, and other benefits such as reduced procedural costs and time (PUBMED:25696356, PUBMED:35982991, PUBMED:35074298, PUBMED:32025585, PUBMED:21039084, PUBMED:11590675, PUBMED:24155092, PUBMED:25964649). The evidence suggests that direct stenting can be a safe and effective strategy for certain patient populations and lesion characteristics, but it may not be universally superior for all patients or situations. |
Instruction: Does vigorous physical activity provide additional benefits beyond those of moderate?
Abstracts:
abstract_id: PUBMED:23542895
Does vigorous physical activity provide additional benefits beyond those of moderate? Background: Although guidelines suggest that vigorous physical activity (PA) confers "extra" benefits compared with those from moderate-intensity activity alone, the magnitude of this additional benefit is unclear. The aim was to compare the reduction in risk of hypertension (HT) and depressive symptoms (DS) for 12 yr in middle-age women who reported (a) only moderate-intensity PA (MOPA) and (b) a combination of moderate and vigorous PA (MVPA), after controlling for overall volume of activity.
Methods: The study involved 11,285 participants in the Australian Longitudinal Study on Women's Health, who completed surveys in 1998 (age = 46-52 yr), 2001, 2004, 2007, and 2010. Generalized estimating equation models (with a 3-yr time lag) were used to examine the relationship between PA in seven categories from 0 to >2000 MET·min·wk⁻¹ and the occurrence of HT and DS for women who reported MOPA or MVPA.
Results: For HT, risk was slightly lower for MVPA than for MOPA across the entire range of PA levels, but this difference was only significant at the highest PA level (>2000; odds ratio [OR] = 0.80 MOPA and 0.56 MVPA). For DS, OR values were similar in both groups up to 500 MET·min·wk-1, then slightly lower for MVPA than for MOPA at higher PA levels. Again, this difference was only significant at the highest PA level (>2000; OR = 0.57 MOPA and 0.42 MVPA). OR values were slightly attenuated in adjusted models.
Conclusions: Doing both vigorous and moderate activity does not have significant additional benefits in terms of HT and DS, above those from moderate-intensity activity alone, except at very high levels of PA.
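The exposure in this study is weekly activity volume in MET·min·wk⁻¹, i.e., minutes of activity weighted by metabolic intensity and summed over the week. A small worked example follows; the MET values are conventional approximations (about 4 METs for brisk walking, about 8 for running), not values taken from the study.

```python
# Weekly activity volume in MET·min·wk⁻¹: sum of MET x minutes x sessions.
# MET values are conventional approximations, not study-specific.
def met_minutes_per_week(sessions):
    return sum(met * minutes * times for met, minutes, times in sessions)

moderate_only = [(4.0, 40, 5)]            # 5 x 40 min brisk walking
mixed = [(4.0, 30, 3), (8.0, 30, 2)]      # walking plus two vigorous runs
print(met_minutes_per_week(moderate_only))  # 800
print(met_minutes_per_week(mixed))          # 360 + 480 = 840
```

On this accounting the two schedules deliver a similar weekly volume, which is why the study controls for overall volume before asking whether vigorous intensity adds benefit.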
abstract_id: PUBMED:37266358
Relationships between physical, cognitive, and social frailty and locomotive and non-locomotive physical activity of moderate to vigorous intensity. [Purpose] The purpose of this study was to examine the relationships between physical, cognitive, and social frailty and locomotive and non-locomotive physical activity of moderate to vigorous intensity in community-dwelling older adults and to explore effective intervention methods for preventing frailty. [Participants and Methods] Participants were 82 community-dwelling Japanese older males and females. Measurement items included basic information (age, gender, height, weight, body mass index, and the number of underlying diseases), physical activity, and assessment of physical, cognitive, and social frailty. Associations of physical, cognitive, and social frailty with physical activity were analyzed by group comparisons and multivariate analyses. [Results] The comparisons of physical activity indices for each frailty type revealed that physical frailty was associated with the number of steps and locomotive physical activity of moderate to vigorous intensity, whereas cognitive frailty and social frailty were not. Two overlapping types of frailty were associated with locomotive physical activity. When adjusted for age and gender, step counts and locomotive physical activity were each associated with physical frailty. [Conclusion] Future interventions to increase step counts and locomotive physical activity of moderate to vigorous intensity may be effective for preventing physical frailty; however, interventions other than simple physical activity need to be considered for the prevention of cognitive and social frailty.
abstract_id: PUBMED:34475835
Influence of Physical Self-Concept and Motivational Processes on Moderate-to-Vigorous Physical Activity of Adolescents. There is a growing concern about the increasing decline in physical activity among adolescents. In the search for variables that may be related to physical activity, this study examined the influence of physical self-concept on objectively measured moderate-to-vigorous physical activity (MVPA) of adolescents through the mediation of the needs satisfaction and two types of autonomous motivation, for academics and for physical education. Data were collected from 618 students (301 boys and 317 girls) aged 10-14 years from 24 secondary schools in Spain. The path analysis results showed that physical self-concept positively predicted needs satisfaction and this, in turn, was positively and significantly related to the two types of autonomous motivation. Finally, only the autonomous motivation for physical education significantly and positively predicted the adolescents' MVPA. Our findings showed that there was no evidence of an indirect effect of physical self-concept on MVPA. The results are discussed along the lines of the self-determination theory, through the analysis of the role of physical self-concept in increasing adolescents' physical activity.
abstract_id: PUBMED:30202320
Vigorous Physical Activity in Youth: Just One End of the Physical Activity Spectrum for Affecting Health? This article provides commentary on the accompanying manuscript entitled "The Case for Vigorous Physical Activity in Youth" by Owens and colleagues. A major strength of the review was its aim to determine whether vigorous physical activity provides greater benefits with respect to several health outcomes among children and youth while also considering the limitations of the current evidence in terms of number of studies and study design. This commentary presents additional topics to consider, practical applications, and conclusions and recommendations that can be drawn from the current evidence. To expand "the case for vigorous physical activity in youth," future studies should consider delineating the relative benefits of vigorous physical activity compared not only with moderate physical activity, but also with light and total activity.
abstract_id: PUBMED:37508604
Investigating Links between Moderate-to-Vigorous Physical Activity and Self-Rated Health Status in Adolescents: The Mediating Roles of Emotional Intelligence and Psychosocial Stress. Adolescence represents a crucial phase, characterized by rapid physical and mental development and numerous challenges. Physical activity plays a vital role in the mental well-being of adolescents; however, due to the prevailing educational philosophy prioritizing academic performance, adolescent participation in physical activities has yet to reach its full potential. Thus, this study aims to investigate the effects of moderate-to-vigorous physical activity on adolescents' emotional intelligence, psychosocial stress, and self-rated health status. To achieve this objective, a cluster sampling method was employed to collect data from 600 adolescents in 10 schools across five municipal districts of Changsha, China. A total of 426 valid questionnaires were returned and analyzed. Utilizing AMOS v.23, a structural equation model was constructed to validate the hypotheses. The findings reveal that moderate-to-vigorous physical activity significantly impacts adolescents' emotional intelligence and self-rated health status. Conversely, it exerts a significant negative influence on their psychosocial stress. Moreover, emotional intelligence and psychosocial stress mediate the relationship between moderate-to-vigorous physical activity and self-rated health status. In light of these results, education departments, schools, and families must embrace a paradigm shift in educational philosophies and provide robust support for adolescents to engage in moderate-to-vigorous physical activities.
abstract_id: PUBMED:33164227
The effect of photographic activity schedules on moderate-to-vigorous physical activity in children with autism spectrum disorder. Regular moderate-to-vigorous physical activity (MVPA) has been linked to improved bone health, muscular fitness, cognitive function, sleep, and a reduced risk of depression and obesity. Many children are not engaging in the recommended amount of physical activity. Furthermore, children with autism spectrum disorder (ASD) have been found to engage in less physical activity than their typically developing peers. We extended previous research by conducting a physical activity context assessment, which compared indoor and outdoor activities to identify the environment that produced the lowest percentage of MVPA, as recorded by the Observational System for Recording Physical Activity in Children. Given the utility of activity schedules for increasing self-management and independent engagement during unstructured and low-preference tasks, we then taught 3 preschool children diagnosed with ASD to use photographic activity schedules to increase the number of different activities that met the definition of MVPA in the 2 lowest-responding conditions of the physical activity context assessment. MVPA remained low during baseline sessions for all participants and immediately increased with the introduction of activity schedule teaching. All participants quickly met the activity schedule teaching mastery criterion and demonstrated high levels of MVPA in generalization and maintenance probes without additional teaching.
abstract_id: PUBMED:35461788
Earlier bedtimes and more sleep displace sedentary behavior but not moderate-to-vigorous physical activity in adolescents. Objectives: Correlational models suggest increased cardiometabolic risk when sleep replaces moderate-to-vigorous (but not sedentary or light) physical activity. This study tested which activity ranges are impacted by experimentally altering adolescents' bedtime.
Method: Adolescents completed a 3-week within-subjects crossover experiment with 5 nights of late bedtimes and 5 nights early bedtimes (6.5- and 9.5-hours sleep opportunity, respectively). Experimental condition order was randomized. Waketimes were held constant throughout to mimic school start times. Sleep and physical activity occurred in the natural environments, with lab appointments following each 5-day condition. Waist-worn accelerometers measured physical activity and sedentary behavior. Wrist-worn actigraphs confirmed sleep condition adherence. Wilcoxon tests and linear mixed effects models compared waking activity levels between conditions and across time.
Results: Ninety healthy adolescents (14-17 years) completed the study. When in the early (vs. late) bedtime condition, adolescents fell asleep 1.96 hours earlier (SD = 1.08, d = 1.82, p < .0001) and slept 1.49 hours more (SD = 1.01, d = 1.74, p < .0001). They spent 1.68 and 0.32 fewer hours in sedentary behavior (SD = 1.67, d = 1.0, p < .0001) and light physical activity (SD = 0.87, d = 0.37, p = .0005), respectively. This pattern was reflected in an increased proportion of waking hours spent in sedentary and light activity. Absolute and proportional moderate-to-vigorous physical activity did not differ between conditions (d = 0.02, p = .89; d = 0.14, p = .05, respectively).
Conclusions: Inducing earlier bedtimes (allowing for healthy sleep opportunity) did not affect moderate-to-vigorous physical activity. Alternatively, later bedtimes (allowing for ≤ 6.5 hours of sleep opportunity, mimicking common adolescent school night sleep) increased sedentary behavior. Results are reassuring for the benefits of earlier bedtimes.
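For the condition comparison reported here, the paired Wilcoxon signed-rank test named in the methods can be sketched as below. The data are simulated to loosely echo the reported mean difference; nothing else about the snippet comes from the study.

```python
import numpy as np
from scipy import stats

# Simulated paired data: daily hours of sedentary behavior for the same
# 90 adolescents under the late and early bedtime conditions.
rng = np.random.default_rng(1)
late = rng.normal(9.0, 1.5, size=90)
early = late - rng.normal(1.68, 1.67, size=90)  # shift echoes the reported 1.68 h

stat, p = stats.wilcoxon(late, early)  # paired, nonparametric comparison

diff = late - early
d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired differences
print(f"W = {stat:.1f}, p = {p:.3g}, d = {d:.2f}")
```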
abstract_id: PUBMED:29541342
Validation of the PiezoRx® Step Count and Moderate to Vigorous Physical Activity Times in Free Living Conditions in Adults: A Pilot Study. The purpose of the study was to: 1) validate the PiezoRx® for steps and intensity-related physical activity in free-living conditions compared to the criterion measure; 2) compare the PiezoRx®'s steps and intensity-related physical activity to physiological assessments; and 3) assess the utility of the PiezoRx® in a subsample of participants. Thirty-nine participants, consisting of 28 females aged 54.9±10.6 (33-74) years and 11 males aged 63.9±10.9 (44-80) years, wore the PiezoRx® physical activity monitor and the ActiGraph® accelerometer for one full week and completed a physical assessment. A subsample (n=24) wore the PiezoRx® for an additional two weeks and completed a questionnaire regarding usability. The PiezoRx® had strong correlations with the ActiGraph® for step count (r=0.88; p<0.001), moderate-vigorous physical activity (MVPA) (r=0.70; p<0.001), and sedentary activity (r=0.93; p<0.001) in the 1-week monitoring period. The PiezoRx®'s steps/day and MVPA/week were negatively correlated (p<0.001) with body mass index and waist circumference, and positively correlated (p<0.05) with aerobic fitness, pushups, and 30-second sit-to-stand. Within the subsample who completed the additional two-week monitoring, 75% of participants reported that the PiezoRx® increased their physical activity. In conclusion, the PiezoRx® appears to be a valid measure of free-living PA compared to accelerometry. Because the PiezoRx®'s steps/day and MVPA/week correlate with anthropometric, musculoskeletal, and aerobic fitness measures, these PA metrics may be valuable objective surrogates for use in clinical or professional practice.
abstract_id: PUBMED:36138618
Parental Support Is Associated with Moderate to Vigorous Physical Activity among Chinese Adolescents through the Availability of Physical Activity Resources in the Home Environment and Autonomous Motivation. This study aimed to use a structural equation model (SEM) to determine the association between parental support and moderate to vigorous physical activity (MVPA) among Chinese adolescents, and whether the availability of physical activity (PA) resources in the home environment and the autonomous motivation of adolescents mediated the association. Data were collected using questionnaires extracted from the Family Life, Activity, Sun, Health, and Eating (FLASHE) study. A final analytical sample of 3738 adolescents was enrolled. A SEM was performed to evaluate the hypothesized associations. Parental support was found to be both directly and indirectly positively associated with MVPA in Chinese boys, the indirect paths running through the home environment (i.e., availability of PA resources) and the autonomous motivation of adolescents. Notably, the same relationships held in Chinese girls, except for the mediating role of autonomous motivation. These findings suggest that future interventions for increasing adolescents' MVPA should focus on health education for parents to provide more PA resources in the home environment and adequately mobilize children's autonomous motivation.
abstract_id: PUBMED:33912525
The Direct and Indirect Relationships Within the Extended Trans-contextual Model for Moderate-to-vigorous Physical Activity. Given the low levels of physical activity (PA) in adolescence, there are challenges to increasing students' PA outside of the school setting. Thus, researchers emphasize the supportive role that physical education (PE) teachers can play in PA motivation both in and out of school. The aim of the present study was to examine an expanded trans-contextual model (TCM) for the transfer of teachers' perceived support of students' autonomy, via contextual and situational motivation in PE, to objectively measured moderate-to-vigorous physical activity (MVPA) in PE across different PE contents, as well as to the motivational sequence for, and participation in, subjective MVPA during leisure time. This cross-sectional study involved 283 high school students, of whom 43.9% were boys. The autonomy support students received, other motivational factors, objective measures of MVPA in PE, and subjective MVPA in leisure time were measured. The results indicate that support for autonomy was significantly and directly related to needs satisfaction (β = 0.61, p < 0.001) and indirectly to autonomous motivation in PE (β = 0.19, p < 0.001) and leisure time (β = 0.16, p < 0.001), intention in PE (β = 0.03, p < 0.05) and leisure time (β = 0.07, p < 0.001), and leisure-time MVPA (β = 0.04, p < 0.001), although not MVPA in PE. Gender was a significant covariate for both MVPA in PE (β = -0.62, p < 0.001) and MVPA in leisure time (β = -0.37, p < 0.001), with higher MVPA in boys than girls. This study filled a gap in the scientific literature by demonstrating the full motivational sequence resulting in actual MVPA in PE classes. It also demonstrated that the main goal of PE, enhancing PA not only in school but also outside of school, is being met. The main motivator is needs satisfaction based on PE teachers' support.
Answer: The question of whether vigorous physical activity (PA) provides additional benefits beyond those of moderate intensity is addressed in several studies. According to the Australian Longitudinal Study on Women's Health, engaging in both vigorous and moderate activity does not have significant additional benefits in terms of reducing the risk of hypertension (HT) and depressive symptoms (DS), above those from moderate-intensity activity alone, except at very high levels of PA (PUBMED:23542895). This suggests that the additional benefits of vigorous activity may only be significant when the overall volume of PA is very high.
In contrast, other studies have explored different aspects of the benefits of moderate-to-vigorous physical activity (MVPA). For instance, research on adolescents indicates that MVPA is positively influenced by physical self-concept and autonomous motivation for physical education, which in turn can increase adolescents' engagement in MVPA (PUBMED:34475835). Another study found that MVPA significantly impacts adolescents' emotional intelligence and self-rated health status, while reducing psychosocial stress (PUBMED:37508604). These findings suggest that MVPA can have positive effects on mental well-being and stress management in adolescents.
Furthermore, interventions to increase MVPA, such as using photographic activity schedules, have been shown to be effective in children with autism spectrum disorder (PUBMED:33164227), indicating that MVPA can be beneficial for diverse populations.
In terms of sleep and activity patterns, inducing earlier bedtimes in adolescents, which allows for more sleep, did not affect MVPA but did reduce sedentary behavior (PUBMED:35461788). This implies that sleep interventions can modify activity patterns without necessarily impacting the time spent in MVPA.
Lastly, the availability of physical activity resources in the home environment and autonomous motivation have been identified as mediators in the relationship between parental support and MVPA among Chinese adolescents (PUBMED:36138618), suggesting that environmental and motivational factors play a role in facilitating MVPA.
In summary, while vigorous PA may not provide significant additional health benefits over moderate PA except at high activity levels (PUBMED:23542895), MVPA is associated with various positive outcomes, including improvements in emotional intelligence, self-rated health status, and reductions in psychosocial stress among adolescents (PUBMED:37508604), as well as being influenced by factors such as physical self-concept, motivation, and environmental resources (PUBMED:34475835, PUBMED:36138618). |
Instruction: Is the Menopause Rating Scale accurate for diagnosing sexual dysfunction among climacteric women?
Abstracts:
abstract_id: PUBMED:19217728
Is the Menopause Rating Scale accurate for diagnosing sexual dysfunction among climacteric women? Background: Although several tools have been designed to assess quality of life (QoL) among middle-aged women, their capacity to specifically assess sexual dysfunction (SD) remains uncertain. Moreover, if SD impairs QoL within this population, then sexual assessment becomes a key issue.
Objectives: To evaluate the accuracy of the Menopause Rating Scale (MRS) in diagnosing SD among climacteric women.
Methods: In this cross-sectional study 370 women aged 40-59 years filled out the MRS and the Female Sexual Functioning Index (FSFI) simultaneously. SD among surveyed women was defined as a total FSFI score of ≤26.55. A receiver operating characteristic (ROC) curve was used to plot and measure the diagnostic accuracy of one MRS item (item 8, assessing sexual problems), using the FSFI total score as the gold standard.
Results: Mean age of surveyed women was 49.3±5.8 years. Of these women, 56.5% were married, 44.3% were postmenopausal, 66.8% were sexually active, and 57% had SD (FSFI total score ≤26.55). The ROC curve identified a score ≥1 on MRS item 8 as the cut-off value for discriminating women with SD (78% sensitivity and 62% specificity, with an area under the curve of 0.70).
Conclusions: The MRS was moderately accurate for diagnosing SD among climacteric women. More research is warranted in this regard.
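The ROC-based cut-off analysis in this abstract can be sketched as follows. The data here are simulated stand-ins for the MRS item 8 scores and FSFI-based SD labels; only the general procedure (choosing the threshold that balances sensitivity and specificity, e.g., via Youden's J) reflects the described method.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated data: MRS item 8 scores (0-4) and SD labels
# (1 if total FSFI <= 26.55, the gold standard used in the study).
rng = np.random.default_rng(2)
sd = rng.binomial(1, 0.57, size=370)                # ~57% prevalence, as reported
item8 = np.clip(rng.poisson(0.6 + 1.2 * sd), 0, 4)  # higher scores when SD present

fpr, tpr, thresholds = roc_curve(sd, item8)
youden = tpr - fpr                  # sensitivity + specificity - 1 at each threshold
i = np.argmax(youden)
print("AUC =", round(roc_auc_score(sd, item8), 2))
print("cut-off:", thresholds[i],
      "sensitivity:", round(tpr[i], 2),
      "specificity:", round(1 - fpr[i], 2))
```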
abstract_id: PUBMED:8147175
Evaluation of climacteric symptoms (Menopause Rating Scale). Quantification and qualification of climacteric symptoms were first described by Kupperman et al in 1953. New findings and ideas over the following forty years made a revision of the Kupperman index necessary. Two influential groups reduced the essential symptoms to only two: vasomotor hot flushes and genital atrophy. In contrast, the Menopause Rating Scale (MRS) presented here also enables registration of so-called psychic symptoms, which are essential for quality of life. Complaints from the bladder and urethra, joint and muscle pain, and sexual disorders are also registered. For each of the ten symptom groups there is a rating scale from 0.0 (no symptoms) to 1.0 (very strong symptoms), displayed graphically as well. In this way an individual symptom profile becomes visible. Using the MRS it is possible to quantify improvement or worsening during and after treatment and to depict it.
abstract_id: PUBMED:29345506
Effects of physical and depressive symptoms on the sexual life of Turkish women in the climacteric period. Objective: To assess the effects of physical and depressive symptoms on the sexual life of women in the climacteric period.
Methods: This study was conducted with 572 women at a university hospital. The Beck Depression Inventory (BDI), Menopause Rating Scale (MRS) and Female Sexual Function Index (FSFI) were used to evaluate depressive symptoms, intensity of menopausal symptoms and sexual function.
Results: Sexual dysfunction and depressive symptoms were identified in 86.4% and 54.9% of the women, respectively. In univariate analysis, women without health insurance, with low income, married for longer than 21 years, or in menopause had low FSFI but high BDI and MRS scores. In multiple regression analysis, older women, women with low income, unemployed women, women (and husbands) with low education, and women with depressive symptoms had low FSFI scores. There was a negative relationship between the total FSFI score and the MRS and BDI scores.
Conclusion: Determination and treatment of sexual, emotional and physical problems in the climacteric period are very important for the improvement of the quality of life of women.
abstract_id: PUBMED:8867476
Diagnosis and evaluation of climacteric symptoms. The "Menopause Rating Scale" (MRS) helps in diagnosis and evaluation of therapeutic effectiveness. For the evaluation of climacteric symptoms, Kupperman and his collaborators worked out guidelines as long ago as 1953. As time passed, however, their validity was increasingly called into question. In the nineteen-seventies, on the basis of large epidemiological studies, the conclusion was drawn that only hot flushes and vaginal atrophy were specific to the menopause, while other, largely psychological, complaints represented a "domino effect", so to speak. In contrast to this, the scale presented here also permits the identification of emotional complaints. In addition, urinary tract problems, joint and muscle pain, and sexual disorders are also rated. For each of the ten symptom groups, a graphical rating scale ranging from 0.0 (no symptoms) to 1.0 (severe symptoms) is available, permitting a synoptic individual complaints profile of the patient to be established.
abstract_id: PUBMED:31110369
Menopause Rating Scale: Validation and Applicability in Nepalese Women. Background: Menopausal Rating Scale is one of the globally used tools to assess quality of life in menopause and peri-menopause. The aim of this study is to validate the standard menopausal rating scale in Nepalese menopausal women and to test menopausal symptoms during clinical consultation at hospital.
Methods: This was a cross-sectional validation study at Paropakar Maternity and Women's Hospital, Thapathali, Kathmandu. It comprised a five-step language translation of the Menopause Rating Scale from English to Nepali, a questionnaire clarity assessment with gynecologists, and Likert-scale questionnaire-based interviews with the clients. Reliability and validity tests were applied, and each component of the rating scale was analyzed.
Results: A Nepali version of the Menopause Rating Scale was developed. An acceptable level of tool reliability was obtained (Cronbach's alpha = 0.77). Bartlett's test of sphericity was highly significant, and the Pearson correlations between variables were significant. Average age at menarche was 15 years, and the mean and modal ages at menopause were 48 and 50 years, respectively. The first menopausal symptom was vasomotor flushing in 62%; one-fourth did not experience flushing, half experienced mild to moderate flushing, and the remaining one-fourth had severe to very severe flushing; 50% had significant sleep, bladder, and sexual dysfunction. Three-fourths had vaginal dryness and musculoskeletal problems. One-half had some degree of mental dysfunction.
Conclusions: Nepali version of menopausal rating scale developed. Baseline menopausal parameters obtained.
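Cronbach's alpha, the reliability coefficient reported above (0.77), is computed as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score) for k items. Below is a minimal sketch of that computation on invented response data; the ten-item structure mirrors the MRS symptom groups described in the earlier abstracts, but the scores themselves are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 200 women rating ten symptom groups on a 0-4 scale,
# driven by a shared latent severity factor so items correlate.
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(2 + latent + rng.normal(0, 1, (200, 10))), 0, 4)

# Conventionally, alpha >= 0.70 is considered acceptable reliability.
print(round(cronbach_alpha(scores), 2))
```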
abstract_id: PUBMED:20668822
Official position of the Chilean Society of Climacteric on the management of climacteric women. The health of many women is affected in the climacteric period, either by symptoms that deteriorate their quality of life (QL) or by chronic diseases that reduce their life expectancy. Therefore, it is mandatory to evaluate these two aspects, having as core objectives for any eventual therapeutic intervention the improvement of QL and the reduction of cardiovascular risk and fractures. To evaluate QL it is mandatory to use structured interviews that systematically weigh climacteric symptoms, such as the Menopause Rating Scale (MRS). The paradigm of the metabolic syndrome constitutes a suitable frame to evaluate cardiovascular risk. Age, low body weight, a history of fractures, and steroid use are risk factors for fractures. A proper evaluation will allow the detection of patients with a low QL or a high risk for chronic disease, thereby identifying those women who require therapy. Clinical management should include recommendations to improve lifestyle, increase physical activity, avoid smoking, and follow a low-calorie diet rich in vegetables and fruits. Hormonal therapy is the most efficient treatment to improve QL, and its risk is minimized when it is used in low doses or by the transdermal route. Tibolone is an alternative, especially useful in patients with mood disorders and sexual dysfunction. Vaginal estrogens are also a good option when urogenital symptoms are the main complaint. Some antidepressants can be an effective therapy in patients with vasomotor symptoms who are not willing or able to use estrogens. The effectiveness of alternative therapies for menopausal symptoms has not been demonstrated. Dyslipidemia, hypertension, obesity, and insulin resistance should be managed according to guidelines. Calcium and vitamin D have positive effects on bone density and a certain tendency to reduce vertebral fractures. Bisphosphonates decrease the risk of vertebral fractures.
abstract_id: PUBMED:36763958
Climacteric symptoms, age, and sense of coherence are associated with sexual function scores in women after menopause. Background: Postmenopausal sexual function presupposes the integration of hormonal, neural, and vascular interactions and is subject to optimal crosstalk among psychological, interpersonal, cultural, and environmental factors. Sense of coherence (SOC) reflects a person's ability to cope with stressors and may influence the occurrence of menopausal symptoms and sexual dysfunction.
Aim: To investigate the association of severity of climacteric symptoms, cardiometabolic risk factors, and SOC with sexual function in postmenopausal women.
Methods: Overall 281 sexually active postmenopausal women without significant psychopathology or cardiovascular disease attending the Menopause Unit of Aretaieion Hospital were evaluated by the Female Sexual Function Index (FSFI), Greene Climacteric Scale, Beck Depression Scale, and Sense of Coherence Scale. Hormonal and biochemical parameters and cardiometabolic risk factors were evaluated. FSFI scores <26.5 were considered pathologic.
Outcomes: Total and subdomain scores of sexual response were determined.
Results: Pathologic FSFI scores were found in 79.7% of the sample. Linear models of multivariable regression analysis showed that FSFI scores were associated with (1) Beck scores (b = -0.200; 95% CI, -0.472 to -0.073, P = .001), vasomotor symptom severity (b = -0.324; 95% CI, -0.985 to 0.051; P < .001), and age and (2) SOC (b = 0.150, 95% CI, 0.036-0.331; P = .008), vasomotor symptom severity (b = -0.361; 95% CI, -0.743 to 0.245; P < .001), and age. Both models were adjusted for menopausal age, diabetes mellitus, hypertension, type of menopause, and menopausal hormone therapy intake. SOC was associated with Beck depression scores (β = -0.487, P < .001; Greene Climacteric Scale total scores, β = -0.199, P < .001). FSFI score <26.5 vs >26.5 was associated with SOC (odds ratio, 0.982; 95% CI, 0.563 to 1.947; P = .006) and moderate to severe vasomotor symptom severity (odds ratio, 2.476; 95% CI, 1.478 to 3.120; P = .009) independent of age, diabetes mellitus, hypertension, menopausal hormone therapy intake, type of menopause, or Beck depression classification.
Clinical Implications: The results indicate the importance of psychometric assessment of postmenopausal women when presenting with scores of low sexual function. The severity of vasomotor symptoms should also be addressed in any case.
Strengths And Limitations: This is the first study investigating the relationship between SOC and sexuality in menopause in a carefully selected homogenous population. Limitations included the cross-sectional design and the fact that sexual distress was not assessed.
Conclusions: Pathologic FSFI scores were highly prevalent in this sample of postmenopausal women. FSFI is associated positively with age and severity of vasomotor symptoms and negatively with SOC.
abstract_id: PUBMED:36254126
Severity of climacteric symptomatology related to depression and sexual function in women from a private clinic. Introduction: The climacteric is a natural transition stage in women, in which hormonal changes occur that affect the physical and psychological well-being. Therefore, the objective was to determine the relationship of the severity of climacteric symptomatology with depression and sexual function in women.
Materials And Methods: It was a descriptive, cross-sectional study, with a sample of 60 women between 40 and 65 years old. The Female Sexual Function Questionnaire-2, the Menopause Rating Scale, and the Beck Depression Inventory were used.
Results: The mean age of the women was 49.1 ±5.6 years. 21.7% of the women had severe depression, 28.3% moderate, and 50% mild/minimal. Changes in sleep habits (1.73 ±0.88) and in appetite (1.63 ±0.73) were the most severe manifestations. Difficulty sleeping (1.05 ±0.99), physical and mental fatigue (1.48 ±0.98), and vaginal sequelae (1.45 ±1.26) were the most serious complaints in the somatic, psychological, and urogenital domains, respectively. 60% presented severe sexual dysfunction related to genital pain and 55% related to vaginal penetration. Communicating sexual preferences to the partner was common in 75% of women. 88.3% had frequent sexual activity, but 63.3% had zero or low sexual satisfaction.
Conclusions: Climacteric symptomatology is related to depression but not to women's sexual function.
abstract_id: PUBMED:19186014
Sexuality during the climacteric period. Background: Cultural, social, physiological and psychological factors may alter the course of sexual function in climacteric women.
Objective: The objective of the present literature review is to survey the prevalence of sexual dysfunctions in the climacteric and to establish the association between the organic and psychic changes that occur during this phase and sexual dysfunction. We also discuss potential treatments.
Methods: We evaluated the data available in PubMed (1982-2008). For each original article, two reviewers analyzed the data independently and considered a study to be of high quality if it had all three of the following characteristics: prospective design, valid data and adequate sample size. Both reviewers extracted data from each of the 99 studies selected: 34 cross-sectional studies, 25 cohort studies, 9 trials, 31 reviews related to sexuality in pre- and post-menopausal women.
Results: Sexual dysfunction among climacteric women is widespread and is associated with bio-psychosocial factors. However, there is not enough evidence to correlate sexual dysfunction with a decrease in estrogen levels and biological aging. A strong association exists between climacteric genital symptoms and coital pain. There is, however, sufficient evidence demonstrating the benefits of local estrogen therapy for patients with genital symptoms.
Conclusion: A significant decline in sexual function occurs in climacteric women, although it is still unclear whether this is associated with the known decrease in estrogen levels or with aging, or both. Relational factors may interfere with sexual function during this phase. The climacteric genital symptoms improve with estrogen replacement therapy, and positively influence sexual function. Further studies are needed to establish the actual impact of the decrease in estrogen levels and of aging on the sex life of climacteric women.
abstract_id: PUBMED:35688498
Determining the psychometric properties of the Greene Climacteric Scale (GCS) in women previously treated for breast cancer: A pooled analysis of data from the Women's Wellness after Cancer Programs. Objectives: This paper examines the utility of a common climacteric symptoms scale, the Greene Climacteric Scale (GCS), in two groups of women with a history of breast cancer, those who were at menopause before commencing breast cancer treatment, and those who were not.
Study Design: This pooled analysis of 297 women previously diagnosed with breast cancer, aged 28-74 years, was undertaken on baseline data from two structured lifestyle interventions: the Women's Wellness After Cancer Program (WWACP) and the Younger Women's Wellness After Cancer Program (YWWACP). Data were split into two data subsets (women who were post-menopausal on commencement of breast cancer treatment and those who were either pre- or peri‑menopausal). Multitrait/multi-item analysis was conducted to test scaling assumptions for each group separately.
Results: GCS domain scores were positively skewed, with significant floor effects for vasomotor symptoms and ceiling effects for sexual dysfunction. Multitrait analysis showed acceptable convergent validity (77% of items correlated ≥ 0.40 with their hypothesized domains) but weak discriminant validity for anxiety, depression, and somatic symptoms in both groups. The exploratory factor analysis in women who were menopausal at the commencement of breast cancer treatment and those who were not revealed distinct factor structures that accounted for 60.2% and 62.7% of the total variance, respectively.
Conclusions: The original GCS factor structure was not replicated in this sample. Among women previously treated for breast cancer, the presence of multiple concurrent and severe menopausal symptoms with possible treatment-related causes underpins the need for a breast cancer-specific measure to enhance their identification and management.
Trial Ids, Australian And New Zealand Clinical Trials Registry: WWACP, ACTRN12614000800628; YWWACP, ACTRN12614001087640.
Answer: The Menopause Rating Scale (MRS) has been evaluated for its accuracy in diagnosing sexual dysfunction (SD) among climacteric women. A study involving 370 women aged 40-59 years used both the MRS and the Female Sexual Functioning Index (FSFI) to assess sexual dysfunction. The study defined SD as a total FSFI score of ≤26.55 and used a receiver operating characteristic (ROC) curve to measure the diagnostic accuracy of one MRS item (item 8, assessing sexual problems) against the FSFI total score as a gold standard. The results indicated that a score ≥1 on MRS item 8 could discriminate women with SD with 78% sensitivity and 62% specificity, and an area under the curve of 0.70. This suggests that the MRS was moderately accurate for diagnosing SD among climacteric women (PUBMED:19217728).
However, it is important to note that the MRS is primarily designed to assess a range of climacteric symptoms, including psychological and physical symptoms, and not exclusively sexual dysfunction. The MRS includes items that evaluate symptoms such as hot flushes, heart discomfort, sleep problems, depressive mood, irritability, anxiety, physical and mental exhaustion, sexual problems, bladder problems, dryness of the vagina, and joint and muscular discomfort (PUBMED:8147175; PUBMED:8867476).
Other studies have also looked into the effects of physical and depressive symptoms on sexual life during the climacteric period, indicating that sexual dysfunction is common and is negatively related to the intensity of menopausal symptoms and depressive symptoms (PUBMED:29345506). Additionally, the MRS has been validated and found applicable in different populations, such as Nepalese women, further supporting its use in assessing menopausal symptoms (PUBMED:31110369).
In conclusion, while the MRS has been found to be moderately accurate in diagnosing sexual dysfunction among climacteric women, it is a broader tool that assesses a range of climacteric symptoms. More research may be warranted to further refine its accuracy specifically for diagnosing sexual dysfunction (PUBMED:19217728). |
Instruction: Expressed emotion in the client-professional caregiver dyad: are symptoms, coping strategies and personality related?
Abstracts:
abstract_id: PUBMED:12195543
Expressed emotion in the client-professional caregiver dyad: are symptoms, coping strategies and personality related? Objective: The aim of this study was to investigate whether the characteristics of residents and professional caregivers are associated with the professionals' expressed emotion (EE).
Method: Fifty-six residents in sheltered living who suffer from schizophrenia or a related psychotic disorder and their professional caregivers were enlisted. Standardised validated instruments were used to measure EE, the residents' social functioning, symptoms and social network size, and the professional caregivers' coping strategies and personality.
Results: There was strong evidence that high EE was associated with the residents' age, poorer social functioning and smaller network sizes. There was no significant relationship between EE and the residents' symptoms except for excitement. Concerning the professional caregivers, high EE professionals were less open than their low EE colleagues and had a lower education level.
Conclusion: The residents' social functioning is an important correlate of the EE index.
abstract_id: PUBMED:34628949
A comparison of experiences of care and expressed emotion among caregivers of young people with first-episode psychosis or borderline personality disorder features. Objective: Caregivers of individuals with severe mental illness often experience significant negative experiences of care, which can be associated with higher levels of expressed emotion. Expressed emotion is potentially a modifiable target early in the course of illness, which might improve outcomes for caregivers and patients. However, expressed emotion and caregiver experiences in the early stages of disorders might be moderated by the type of severe mental illness. The aim was to determine whether experiences of the caregiver role and expressed emotion differ in caregivers of young people with first-episode psychosis versus young people with 'first-presentation' borderline personality disorder features.
Method: Secondary analysis of baseline (pre-treatment) data from three clinical trials focused on improving caregiver outcomes for young people with first-episode psychosis and young people with borderline personality disorder features was conducted (ACTRN12616000968471, ACTRN12616000304437, ACTRN12618000616279). Caregivers completed self-report measures of experiences of the caregiver role and expressed emotion. Multivariate generalised linear models and moderation analyses were used to determine group differences.
Results: Data were available for 265 caregivers. Higher levels of negative experiences and expressed emotion, and stronger correlations between negative experiences and expressed emotion domains, were found in caregivers of young people with borderline personality disorder than first-episode psychosis. Caregiver group (borderline personality disorder, first-episode psychosis) moderated the relationship between expressed emotion and caregiver experiences in the domains of need to provide backup and positive personal experiences.
Conclusion: Caregivers of young people with borderline personality disorder experience higher levels of negative experiences related to their role and expressed emotion compared with caregivers of young people with first-episode psychosis. The mechanisms underpinning associations between caregiver experiences and expressed emotion differ between these two caregiver groups, indicating that different supports are needed. For borderline personality disorder caregivers, emotional over-involvement is associated with both negative and positive experiences, so a more detailed understanding of the nature of emotional over-involvement for each relationship is required to guide action.
abstract_id: PUBMED:36824059
Emotion regulation mediates the relationship between family caregivers' pain-related beliefs and patients' coping strategies. Background: In order to tailor more effective interventions and minimize the burden of chronic pain, it is critical to identify the interaction and contribution of social and psychological factors in pain. One of the important psychological factors in pain management is the choice of pain coping strategies in chronic pain patients. Social resources, including family caregivers' pain attitudes-beliefs, can influence pain coping strategies in chronic pain patients. Moreover, one key factor that may intervene in the relationship between caregivers' pain attitudes-beliefs and the patients' coping strategies is emotion regulation strategies. Therefore, the present study aimed to investigate the mediating role of the emotion regulation strategies of chronic pain patients and their family caregivers on the association between caregivers' pain attitudes-beliefs and the pain coping strategies of chronic pain patients. Methods: We recruited 200 chronic musculoskeletal pain patients and their family caregivers. Chronic pain patients responded to measures of pain coping and emotion regulation strategies, while family caregivers completed questionnaires related to their attitude toward pain and their own emotion regulation. Results: There is an association between caregivers' pain attitudes-beliefs and pain coping strategies in patients with chronic musculoskeletal pain. Moreover, the structural equation modeling revealed that the emotion regulation of both patients and family caregivers mediates the relationship between the caregivers' pain attitudes-beliefs and the pain coping strategies of patients with chronic musculoskeletal pain. Conclusions: The social context of pain, including the effect of family caregivers' responses to the patient's pain, is a critical pain source that is suggested to affect coping strategies in patients. These findings suggest an association between pain attitudes-beliefs in family caregivers and pain coping strategies in patients. Moreover, these results showed that the emotion regulation of both patients and their family caregivers mediates this association.
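Mediation structures like the one described above (caregiver beliefs → emotion regulation → patient coping) are commonly tested via the product of path coefficients with a bootstrap confidence interval. The sketch below is a generic simple-mediation bootstrap on simulated data, not the authors' structural equation model; the variable names and effect sizes are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Simulated variables standing in for the constructs in the abstract.
rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)                       # caregiver pain attitudes-beliefs
m = 0.5 * x + rng.normal(size=n)             # emotion regulation (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # patient coping strategy score

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y given X
    return a * b

# Percentile bootstrap for the indirect effect a*b; a CI excluding zero
# is the usual evidence for mediation.
idx = np.arange(n)
boot = [indirect_effect(*(v[rng.choice(idx, n, replace=True)] for v in (x, m, y)))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```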
abstract_id: PUBMED:29723129
Self-Compassion, Coping Strategies, and Caregiver Burden in Caregivers of People with Dementia. Objective: Caring for someone with dementia can have negative consequences for caregivers, a phenomenon known as caregiver burden. Coping strategies influence the impact of caregiving-related stress. Specifically, using emotion-focused strategies has been associated with lower levels of burden, whereas dysfunctional strategies have been related to increased burden. The concept of self-compassion has been linked to both positive outcomes and the coping strategies that are most advantageous to caregivers. However, as yet, no research has studied self-compassion in caregivers. Therefore, the aim of this study was to explore the relationship between self-compassion, coping strategies and caregiver burden in dementia caregivers.
Method: Cross-sectional survey data was collected from 73 informal caregivers of people with dementia recruited from post-diagnostic support services and caregiver support groups.
Results: Self-compassion was found to be negatively related to caregiver burden and dysfunctional coping strategies and positively related to emotion-focused coping strategies. Dysfunctional strategies mediated the relationship between self-compassion and caregiver burden, whereas emotion-focused strategies did not.
Conclusion: Caregivers with higher levels of self-compassion report lower levels of burden and this is at least partly due to the use of less dysfunctional coping strategies.
Clinical Implications: Interventions that develop self-compassion could represent a useful intervention for struggling caregivers.
abstract_id: PUBMED:25114532
Caregiver burden and coping strategies in caregivers of patients with Alzheimer's disease. Background: Alzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.
Methods: Eighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.
Results: The CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.
Conclusion: AD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.
abstract_id: PUBMED:32834975
Personality, trait EI and coping with COVID 19 measures. The study views the preventive measures undertaken by governments to combat COVID 19 as stressors for individuals, and examines how individuals' personal traits, including emotional intelligence and personality factors, influence their coping strategies. The concept of trait EI is used in this study to understand its relationship with personality factors and their respective effects on the opted outcomes. Coping strategies in this study are categorised into task-, emotion- and avoidance-oriented coping. The results show that emotional intelligence is significantly related to all coping strategies, whereas only certain personality factors contribute unique variance. When both emotional intelligence and personality are in the same equation, with the latter being controlled, the former shows incremental variance and the influence of personality factors is reduced. Detailed discussion of these findings and implications for policy makers and researchers are highlighted and conclude the paper.
abstract_id: PUBMED:35206839
Post-Traumatic Growth during COVID-19: The Role of Perceived Social Support, Personality, and Coping Strategies. Although many studies on mental health have been conducted among various populations during the COVID-19 pandemic, few studies have focused on post-traumatic growth (PTG) in the general population. The current study aimed to explore whether perceived social support, personality, and coping strategies are associated with PTG in the COVID-19 pandemic period. The study also investigated whether coping strategies mediate the relations between perceived social support, personality, and PTG. A total of 181 participants (mean age = 24) completed the self-report questionnaire online, which was distributed via various online channels, mainly in China and Sweden. The relations between the study variables were examined with correlation analyses and a multiple mediation analysis. Results showed that more than half of the participants (60.8%) reported experiences of PTG during the pandemic. Additionally, perceived social support, personality traits (extraversion, emotional stability, agreeableness, and conscientiousness) and coping strategies (problem-focused coping, emotion-focused coping, and social support coping) were positively correlated with PTG. In addition, coping strategies (problem-focused coping, emotion-focused coping, and avoidance coping) mediated the relations between perceived social support, personality traits and PTG. Theoretical and practical implications of this study are discussed, concluding that the findings of this study have the potential to guide intervention efforts to promote positive change during the pandemic.
abstract_id: PUBMED:36233634
The Relationship between Coping and Expressed Emotion in Substance Users. The involvement of family is an integral part of the recovery process, and the use of adaptive coping strategies has important implications for treatment outcomes. Little research to date has examined the relationship between coping and family dynamics in substance users, although this may help to unravel the mechanism underlying the increased risk of relapse for individuals from critical family environments. The aim of the present research was to assess the association between the level of expressed emotion (LEE) (i.e., criticism), coping style, and psychological distress (i.e., anxiety, depression) in people with substance use disorder (SUD). Compared to control subjects, persons with SUD reported less use of rational coping and detached coping, and perceived greater criticism and irritability from family. A higher degree of family criticism and lack of emotional support was associated with greater use of emotional and avoidance coping in persons with SUD, while psychological distress was more related to rational and detached coping. The present study reveals the unique connection between family relationships, coping, and psychological distress, indicating the need to address the influence of family relationships and stress on persons' coping in SUD treatment.
abstract_id: PUBMED:35220491
The Mediating Role of Coping Strategies and Emotion Regulation in the Relationship Between Pain Acceptance and Pain-Related Anxiety. Pain is an unpleasant sensory and emotional experience that exists all over the world and has a negative effect on people's ability to engage in valuable life activities. The present study was conducted aiming to investigate the mediating role of coping strategies and emotion regulation in the relationship between pain acceptance and pain anxiety. The research design was descriptive, and the sample consisted of 428 students selected via purposive sampling. Pain Anxiety Symptoms Scale (PASS-20), Chronic Pain Acceptance Questionnaire-8 (CPAQ-8), Ways of Coping Questionnaire (WCQ), The Multidimensional Pain Inventory (MPI), and Emotion Regulation Questionnaire (ERQ) were used as research tools. The results obtained from the correlation revealed that there is a significant relationship between pain acceptance and pain anxiety, coping strategies, and emotion regulation. Structural Equation Modeling (SEM) showed that coping strategies and emotion regulation are able to mediate the relationship between pain acceptance and pain anxiety. In general, results revealed that adaptive coping strategies can be effective in coping with pain; emotional components play an important role in this regard. These results demonstrate the importance of attitude and self-regulation in coping with pain, and suggest that people can expand their adaptation to pain by assessing the situation and adopting an effective coping style.
abstract_id: PUBMED:33553023
The impact of type D personality on self-care of patients with type 2 diabetes: the mediating role of coping strategies. Purpose: As a chronic disease, Diabetes requires special self-care behaviors until the end of life. Psychological factors play an important role in following the self-care plans among patients with diabetes. The present study was designed to examine the relationship between type D personality and self-care with the mediating role of coping strategies in patients with type 2 diabetes.
Methods: This cross-sectional study was carried out on a sample of 361 patients with type 2 diabetes from Ale-Ebrahim Diabetes Charity Center in Isfahan, Iran. Type D personality, coping strategies and self-care were measured by validated questionnaires. The data were analyzed through Structural Equation Modeling (SEM).
Results: The results indicated that type D personality affects self-care behaviors indirectly via emotion-oriented coping. Problem- and avoidance-oriented coping strategies did not significantly mediate the effect of type D personality on self-care.
Conclusion: In general, assessing psychological background (such as personality traits and coping strategies) can facilitate the treatment of patients with diabetes by means of self-care enhancement.
Answer: Expressed emotion (EE) in the client-professional caregiver dyad appears to be related to various factors, including symptoms, coping strategies, and personality traits of both the clients and the caregivers. The study by PUBMED:12195543 found that high EE was associated with the residents' age, poorer social functioning, and smaller network sizes. There was no significant relationship between EE and the residents' symptoms except for excitement. High EE professionals were less open than their low EE colleagues and had a lower education level, indicating that the professional caregivers' personality traits are related to EE.
In the context of caregivers of individuals with severe mental illness, PUBMED:34628949 reported that caregivers of young people with borderline personality disorder experienced higher levels of negative experiences related to their role and expressed emotion compared with caregivers of young people with first-episode psychosis. The study also found that the caregiver group moderated the relationship between expressed emotion and caregiver experiences, suggesting that the type of severe mental illness might influence EE and caregiver experiences.
The study by PUBMED:36824059 highlighted that emotion regulation mediates the relationship between family caregivers' pain-related beliefs and patients' coping strategies, indicating that both patients' and caregivers' emotion regulation strategies are important in this relationship.
PUBMED:29723129 found that self-compassion in caregivers was negatively related to caregiver burden and dysfunctional coping strategies and positively related to emotion-focused coping strategies. This suggests that caregivers' personality traits, such as self-compassion, are related to how they cope with the stress of caregiving and their level of burden.
PUBMED:25114532 reported that Alzheimer's disease caregivers mainly adopted task-focused coping strategies, and that women mainly adopted emotion-focused strategies which were related to a higher level of distress. This indicates that coping strategies are related to caregiver burden and may be influenced by the caregiver's gender.
In summary, the relationship between expressed emotion in the client-professional caregiver dyad is complex and influenced by a variety of factors including the symptoms of the client, the coping strategies employed by both parties, and the personality traits of the professional caregivers. These factors can interact in different ways depending on the specific context, such as the type of mental illness being cared for and the individual characteristics of the caregivers and clients involved. |
Instruction: L-type bovine spongiform encephalopathy in genetically susceptible and resistant sheep: changes in prion strain or phenotypic plasticity of the disease-associated prion protein?
Abstracts:
abstract_id: PUBMED:24218507
L-type bovine spongiform encephalopathy in genetically susceptible and resistant sheep: changes in prion strain or phenotypic plasticity of the disease-associated prion protein? Background: Sheep with prion protein (PrP) gene polymorphisms QQ171 and RQ171 were shown to be susceptible to the prion causing L-type bovine spongiform encephalopathy (L-BSE), although RQ171 sheep specifically propagated a distinctive prion molecular phenotype in their brains, characterized by a high molecular mass protease-resistant PrP fragment (HMM PrPres), distinct from L-BSE in QQ171 sheep.
Methods: The resulting infectious and biological properties of QQ171 and RQ171 ovine L-BSE prions were investigated in transgenic mice expressing either bovine or ovine PrP.
Results: In both mouse lines, ovine L-BSE transmitted similarly to cattle-derived L-BSE, with respect to survival periods, histopathology, and biochemical features of PrPres in the brain, as well as splenotropism, clearly differing from ovine classic BSE or from scrapie strain CH1641. Nevertheless and unexpectedly, HMM PrPres was found in the spleen of ovine PrP transgenic mice infected with L-BSE from RQ171 sheep at first passage, reminiscent, in lymphoid tissues only, of the distinct PrPres features found in RQ171 sheep brains.
Conclusions: The L-BSE agent differs from both ovine classic BSE and CH1641 scrapie, maintaining its specific strain properties after passage in sheep, although striking PrPres molecular changes could be found in RQ171 sheep and in the spleen of ovine PrP transgenic mice.
abstract_id: PUBMED:26218890
Porcine prion protein amyloid. Mammalian prions are composed of misfolded, aggregated prion protein (PrP) with amyloid-like features. Prions are zoonotic disease agents that infect a wide variety of mammalian species, including humans. Mammals, and by-products thereof, that are frequently encountered in daily life are most important for human health. It is established that bovine prions (BSE) can infect humans, while there is no such evidence for any other prion-susceptible species in the human food chain (sheep, goat, elk, deer), for largely prion-resistant species (pig), or for susceptible and resistant pets (cats and dogs, respectively). PrPs from these species have been characterized using biochemistry, biophysics, and neurobiology. Recently we studied PrPs from several mammals in vitro and found evidence for generic amyloidogenicity, as well as cross-seeding fibril-formation activity of all PrPs on the human PrP sequence, regardless of whether the original species was resistant or susceptible to prion disease. Porcine PrP amyloidogenicity was among those studied. Experimentally inoculated pigs, as well as transgenic mouse lines overexpressing porcine PrP, have, in the past, been used to investigate the possibility of prion transmission in pigs. The pig is a species with extraordinarily wide use within human daily life, with over a billion pigs harvested for human consumption each year. Here we discuss the possibility that the largely prion disease-resistant pig can be a clinically silent carrier of replicating prions.
abstract_id: PUBMED:12407310
Prion diseases Creutzfeldt-Jakob disease, kuru, Gerstmann-Sträussler-Scheinker syndrome and fatal familial insomnia in humans, as well as scrapie and bovine spongiform encephalopathy in animals, are fatal disorders of the central nervous system that belong to the group of transmissible spongiform encephalopathies (TSE), or prion diseases. Neuronal intracellular spongiosis and the accumulation of abnormal, protease-resistant prion protein in the central nervous system characterize TSE. The conformational change of a host protein, the prion protein, into a pathological isoform is the key pathogenetic event in TSE. Despite their relative rarity, prion diseases have a great impact on the scientific community and society in general. There are two major reasons: first, the heretical hypothesis of a disease transmitted by an "infectious protein" in the absence of nucleic acid, the basis of the conformational transmissibility concept; second, the panic originating from the appearance of new variant Creutzfeldt-Jakob disease and the evidence linking it to the exposure of humans to bovine spongiform encephalopathy via food contaminated by affected bovine tissue. Novel therapeutic approaches are examined.
abstract_id: PUBMED:12818782
Prion diseases The prion protein is a ubiquitous membrane protein in mammals, synthesized mainly in the central nervous system. Prion diseases result from an accumulation of prions that have acquired resistance to physiological degradation and an infectious capacity. Human prion diseases are very rare and include sporadic Creutzfeldt-Jakob disease (the most frequent form, manifesting as a presenile dementia), familial transmissible spongiform encephalopathies, and two transmissible forms affecting young people: iatrogenic Creutzfeldt-Jakob disease secondary to treatment with growth hormone extracted from human pituitaries, and variant Creutzfeldt-Jakob disease resulting from foodborne transmission of bovine spongiform encephalopathy. Knowledge of the underlying prion biology has led to preventive measures that today offer a reasonable guarantee against these juvenile forms.
abstract_id: PUBMED:32195273
Influence of Interspecies Transmission of Atypical Bovine Spongiform Encephalopathy Prions to Hamsters on Prion Characteristics. Bovine spongiform encephalopathy (BSE) is a prion disease in cattle and is classified into the classical type (C-BSE) and two atypical BSEs, designated as high type (H-BSE) and low type (L-BSE). These classifications are based on the electrophoretic migration of the proteinase K-resistant core (PrPres) of the disease-associated form of the prion protein (PrPd). In a previous study, we succeeded in transmitting the H-BSE prion from cattle to TgHaNSE mice overexpressing normal hamster cellular PrP (PrPC). Further, Western blot analysis demonstrated that PrPres banding patterns of the H-BSE prion were indistinguishable from those of the C-BSE prion in TgHaNSE mice. In addition, similar PrPres glycoprofiles were detected among H-, C-, and L-BSE prions in TgHaNSE mice. Therefore, to better understand atypical BSE prions after interspecies transmission, H-BSE prion transmission from TgHaNSE mice to hamsters was investigated, and the characteristics of classical and atypical BSE prions among hamsters, wild-type mice, and mice overexpressing bovine PrPC (TgBoPrP) were compared in this study using biochemical and neuropathological methods. Identical PrPres banding patterns were confirmed between TgHaNSE mice and hamsters in the case of all three BSE prion strains. However, these PrPres banding patterns differed from those of TgBoPrP and wild-type mice infected with the H-BSE prion. In addition, glycoprofiles of TgHaNSE mice and hamsters infected with the L-BSE prion differed from those of TgBoPrP mice infected with the L-BSE prion. These data indicate that the PrPC amino acid sequences of new host species rather than other host environmental factors may affect some molecular aspects of atypical BSE prions. Although three BSE prion strains were distinguishable based on the neuropathological features in hamsters, interspecies transmission modified some molecular properties of atypical BSE prions, and these properties were indistinguishable from those of C-BSE prions in hamsters. Taken together, PrPres banding patterns and glycoprofiles are considered to be key factors for BSE strain typing. However, this study also revealed that interspecies transmission could sometimes influence these characteristics.
abstract_id: PUBMED:1288542
Prion encephalopathies Spongiform encephalopathies, also called prion encephalopathies, are characterized, in humans as well as in animals, by (1) their clinical picture, which indicates strict localization to the central nervous system; (2) their histological aspect, spongiform degeneration and neuronal loss; and (3) their transmissibility within the same animal species but also from man to animal. The nature of the pathogenic agent is still debated. This agent could be an isoform of the prion protein which, probably because of a modification of its tertiary structure, is partially resistant to proteolytic enzymes. The recent description of a bovine spongiform encephalopathy caused by ingestion of meat-and-bone meal has again raised the question of the transmissibility of these animal diseases to humans.
abstract_id: PUBMED:31905681
The First Report of Genetic and Structural Diversities in the SPRN Gene in the Horse, an Animal Resistant to Prion Disease. Prion diseases are fatal neurodegenerative diseases characterized by the accumulation of abnormal prion protein (PrPSc) in the brain. During the outbreak of the bovine spongiform encephalopathy (BSE) epidemic in the United Kingdom, prion diseases were reported in several species; however, horse prion disease has not been reported thus far. In previous studies, the shadow of prion protein (Sho) was shown to accelerate the conversion of normal prion protein (PrPC) to PrPSc, and polymorphisms of the shadow of prion protein gene (SPRN) have been significantly associated with susceptibility to prion diseases. We investigated the genotype, allele and haplotype frequencies of the SPRN gene using direct sequencing. In addition, we analyzed linkage disequilibrium (LD) and haplotypes among polymorphisms. We also investigated LD between PRNP and SPRN single nucleotide polymorphisms (SNPs). We compared the amino acid sequences of the Sho protein between the horse and several prion disease-susceptible species using ClustalW2. For Sho protein modeling, we utilized the SWISS-MODEL and Swiss-PdbViewer programs. We found a total of four polymorphisms in the equine SPRN gene; however, we did not observe an in/del polymorphism, which is correlated with susceptibility to prion disease in prion disease-susceptible animals. The SPRN SNPs showed weak LD with the PRNP SNP. In addition, we found 12 horse-specific amino acids of the Sho protein that can induce significant distributional differences in the secondary structure and hydrogen bonds between the horse and several prion disease-susceptible species. To the best of our knowledge, this is the first report on the genetic and structural characteristics of the equine SPRN gene.
abstract_id: PUBMED:31919511
Porcine Prion Protein as a Paradigm of Limited Susceptibility to Prion Strain Propagation. Although experimental transmission of bovine spongiform encephalopathy (BSE) to pigs and to transgenic mice expressing pig cellular prion protein (PrPC) (porcine PrP [PoPrP]-Tg001) has been described, no natural cases of prion disease in pigs have been reported. This study analyzed pig-PrPC susceptibility to different prion strains using PoPrP-Tg001 mice either as an animal bioassay or as substrate for protein misfolding cyclic amplification (PMCA). A panel of isolates representative of different prion strains was selected, including classic and atypical/Nor98 scrapie, atypical BSE, rodent scrapie, human Creutzfeldt-Jakob disease and classic BSE from different species. Bioassay proved that PoPrP-Tg001 mice were susceptible only to the classic BSE agent, and PMCA results indicate that only classic BSE can convert pig-PrPC into scrapie-type PrP (PrPSc), independently of the species of origin. Therefore, conformational flexibility constraints associated with pig-PrP would limit the number of permissible PrPSc conformations compatible with pig-PrPC, thus suggesting that pig-PrPC may constitute a paradigm of low conformational flexibility that could confer high resistance to the diversity of prion strains.
abstract_id: PUBMED:32513872
Incomplete glycosylation during prion infection unmasks a prion protein epitope that facilitates prion detection and strain discrimination. The causative factors underlying conformational conversion of cellular prion protein (PrPC) into its infectious counterpart (PrPSc) during prion infection remain undetermined, in part because of a lack of monoclonal antibodies (mAbs) that can distinguish these conformational isoforms. Here we show that the anti-PrP mAb PRC7 recognizes an epitope that is shielded from detection when glycans are attached to Asn-196. We observed that whereas PrPC is predisposed to full glycosylation and is therefore refractory to PRC7 detection, prion infection leads to diminished PrPSc glycosylation at Asn-196, resulting in an unshielded PRC7 epitope that is amenable to mAb recognition upon renaturation. Detection of PRC7-reactive PrPSc in experimental and natural infections with various mouse-adapted scrapie strains and with prions causing deer and elk chronic wasting disease and transmissible mink encephalopathy uncovered that incomplete PrPSc glycosylation is a consistent feature of prion pathogenesis. We also show that interrogating the conformational properties of the PRC7 epitope affords a direct means of distinguishing different prion strains. Because the specificity of our approach for prion detection and strain discrimination relies on the extent to which N-linked glycosylation shields or unshields PrP epitopes from antibody recognition, it dispenses with the requirement for additional standard manipulations to distinguish PrPSc from PrPC, including evaluation of protease resistance. Our findings not only highlight an innovative and facile strategy for prion detection and strain differentiation, but are also consistent with a mechanism of prion replication in which structural instability of incompletely glycosylated PrP contributes to the conformational conversion of PrPC to PrPSc.
abstract_id: PUBMED:26134409
Treatment of Prion Disease with Heterologous Prion Proteins. Prion diseases such as Creutzfeldt-Jakob disease in humans, bovine spongiform encephalopathy in cattle, and scrapie in sheep are fatal neurodegenerative diseases for which there is no effective treatment. The pathology of these diseases involves the conversion of a protease-sensitive form of the cellular prion protein (PrPC) into a protease-resistant infectious form (PrPSc or PrPres). Both in vitro (cell culture and cell-free conversion assays) and in vivo (animal) studies have demonstrated the strong dependence of this conversion process on protein sequence homology between the initial prion inoculum and the host's own cellular prion protein. The presence of non-homologous (heterologous) proteins is often inhibitory to this conversion process. We hypothesize that the presence of heterologous prion proteins from one species might therefore constitute an effective treatment for prion disease in another species. To test this hypothesis, we infected mice intracerebrally with murine-adapted RML-Chandler scrapie and treated them with heterologous prion protein (purified bacterially expressed recombinant hamster prion protein) or vehicle alone. Treated animals demonstrated reduced disease-associated pathology and decreased accumulation of protease-resistant disease-associated prion protein, with delayed onset of clinical symptoms and motor deficits. This was concomitant with significantly increased survival times relative to mock-treated animals. These results provide proof of principle that recombinant hamster prion proteins can effectively and safely inhibit prion disease in mice, and suggest that hamster or other non-human prion proteins may be a viable treatment for prion diseases in humans.
Answer: The study described in PUBMED:24218507 investigated the transmission of L-type bovine spongiform encephalopathy (L-BSE) to sheep with different prion protein (PrP) gene polymorphisms, specifically QQ171 and RQ171. It was found that both QQ171 and RQ171 sheep were susceptible to L-BSE, but the prion molecular phenotype in the brains of RQ171 sheep was distinct, characterized by a high molecular mass protease-resistant PrP fragment (HMM PrPres). This phenotype differed from that observed in QQ171 sheep infected with L-BSE. The study aimed to understand whether these differences were due to changes in the prion strain or phenotypic plasticity of the disease-associated prion protein.
The infectious and biological properties of ovine L-BSE prions from both QQ171 and RQ171 sheep were investigated in transgenic mice expressing either bovine or ovine PrP. The results showed that ovine L-BSE transmitted similarly to cattle-derived L-BSE in terms of survival periods, histopathology, biochemical features of PrPres in the brain, and splenotropism. This was clearly different from ovine classic BSE or scrapie strain CH1641. However, unexpectedly, the HMM PrPres was found in the spleen of ovine PrP transgenic mice infected with L-BSE from RQ171 sheep at the first passage, which was reminiscent of the distinct PrPres features found in RQ171 sheep brains but only in lymphoid tissues.
The study concluded that the L-BSE agent maintains its specific strain properties after passage in sheep, distinguishing it from both ovine classic BSE and CH1641 scrapie. Nonetheless, there were significant PrPres molecular changes in RQ171 sheep and in the spleen of ovine PrP transgenic mice infected with L-BSE. These findings suggest that while the L-BSE agent retains its strain characteristics, there is also evidence of phenotypic plasticity of the disease-associated prion protein, particularly in the context of different host genotypes and tissues. |
Instruction: Do women from Seine-Saint-Denis have different prenatal care compared to other women in the region?
Abstracts:
abstract_id: PUBMED:24702967
Do women from Seine-Saint-Denis have different prenatal care compared to other women in the region? Objective: We compared the care of pregnant women residing in Seine-Saint-Denis (SSD) with that of women residing elsewhere in Île-de-France, because of the high rate of perinatal and infant mortality in this district.
Materials And Methods: Data come from the French national perinatal surveys of 2003 and 2010 (n=969 in Seine-Saint-Denis, n=5636 elsewhere in Île-de-France). We analyzed two indicators: inadequate prenatal care (late initiation, <3 ultrasounds, insufficient number of visits) and lack of knowledge about screening and diagnostic tests for Down's Syndrome. Models were adjusted for sociodemographic characteristics.
Results: Nineteen percent of women in Seine-Saint-Denis and 12% elsewhere in Île-de-France had inadequate care, and 29% and 16%, respectively, did not know if they had been screened for Down's Syndrome. These rates were higher among migrant women but did not differ by place of residence (25% and 40%, respectively). For French citizens, residence in Seine-Saint-Denis was a risk factor for both indicators.
Conclusion: A reflection on how to improve care during pregnancy should be initiated in Seine-Saint-Denis.
abstract_id: PUBMED:11431612
Regionalization of perinatal care in the Seine-Saint-Denis department of France Objective: To evaluate a policy designed to regionalize perinatal care in the Seine-Saint-Denis department of France.
Methods: The place of birth of every preterm infant (born before 33 weeks gestation) in 1998-1999 was compared with that for the period 1989-1992. The 1989-1992 data came from a perinatal mortality study. For the 1998-1999 period, we used data from an area-based birth registry recording an experimental health certificate.
Results: In 1989-1992, 40% of live births before 33 weeks gestation took place in level I maternity units, 37.2% in level II maternity units, and 13.0% in level III maternity units. In 1998-1999, 5.4% took place in level I maternity units, 28.9% in level II maternity units and 65.1% in level III maternity units. The number of postnatal transfers of very preterm infants declined markedly. In 1998-1999, 109 pregnant women were transferred to a level III maternity hospital. This constituted 1.2% of the women who gave birth in Seine-Saint-Denis during this period.
Conclusion: The policy to regionalize perinatal care and increase maternal transfers was well accepted and successfully implemented. The delivery of very preterm infants in maternity hospitals without neonatal units became a rare event.
abstract_id: PUBMED:27837770
Better protection for female victims of domestic abuse, the experience of Seine-Saint-Denis The French watchdog on violence against women was created in order to put in place actions to prevent violence and to protect women and child victims. Several schemes have been tested, in particular in Seine-Saint-Denis: the provision of a mobile telephone with an integrated panic alarm, restraining orders, supervised access visits and the Féminicide protocol. These measures provide support and formalised monitoring for each member of the family, in order to stop the violence and protect mothers and children.
abstract_id: PUBMED:28737334
Barriers to tuberculosis awareness and screening: a qualitative study in a French department The incidence of tuberculosis in Seine-Saint-Denis is considerably higher than the national average. The north-west part of the department is particularly affected. Screening teams encounter difficulties communicating about the disease, reaching the populations concerned and obtaining their adherence to the screening proposal. The objective of this study was to identify and elucidate the obstacles to tuberculosis prevention and screening. A qualitative study was conducted based on observation of screening actions in Seine-Saint-Denis and semi-directive interviews with health professionals in charge of screening, community representatives and associations, and individuals from the population concerned. Obstacles to tuberculosis awareness and screening appear to be linked to the way in which screening is organized and implemented, and to communication difficulties in relation to this disease. Three major obstacles were identified: the gap between the little attention paid to this disease and the sanitary crisis it can trigger in the event of an epidemic; the unsuitability of the communication tools for the target population and their lack of attractiveness; and the poor adaptation of screening actions to the local context. The study highlights the individual and social marginalization of the tuberculosis issue and the mechanisms producing this marginalization. It encourages the development of more interactive communication tools and modes, and increased awareness among the professionals involved in screening of field realities.
abstract_id: PUBMED:2609007
Health care channels for new cases of respiratory tuberculosis in Seine Saint-Denis in 1984-1986 The purpose of this study was to observe routine practice in the care of tuberculosis cases treated in the Seine Saint-Denis department in 1984, with reference to the recent recommendations of the French Pneumology Society. The pathway of each patient through the care network was established for 336 adult cases being treated for respiratory tuberculosis for the first time. The social and economic cost of each pathway was evaluated. The results show the multiplicity of health services intervening in the care of these patients, the persistence of hospitalization, sanatorium care, and long sick-leaves from work, together with major differences in the care pathways according to the nationality, sex, and socio-economic group of the patients. The cost of tuberculosis treatment is shown to be high for both patients and the community.
abstract_id: PUBMED:23199417
Neonatal mortality in Seine-Saint-Denis: analysis of neonatal death certificates The neonatal mortality rate in Seine-Saint-Denis in 2008 was 3.7 per 1000 live births vs. 2.6 in Île de France and 2.4 in Metropolitan France. The analysis of neonatal death certificates between 2001 and 2008 did not find any specific difference in the causes or characteristics of these deaths when compared with Ile de France or Metropolitan France. It seems that excess mortality in SSD affects all deaths, regardless of their cause.
abstract_id: PUBMED:12653026
Socioeconomic determinism of obesity in the Seine-Saint-Denis area Objective: Obesity is a complex multi-factorial disease. The role of socioeconomic factors is known, but few studies have attempted to analyse separately the impact of the various participating factors: income, level of education, cultural and social status.
Method: These factors were analysed in 26,278 persons aged 16 to 59 years, living in the district and having benefited from a medical check-up in the Seine-Saint-Denis health and social prevention centre, a district particularly affected by socio-economic insecurity. Among these persons, a representative sample of 1804 filled in an additional questionnaire including questions on their income, level of education, marital status and area or country of origin.
Results: The prevalence of obesity (body mass index [BMI] ≥ 30 kg/m2) and overweight (BMI 25 to 29.9 kg/m2) was 17.6% and 32.7%, respectively. In univariate analysis, the prevalence of obesity was significantly associated with age, gender (higher in women), a sedentary lifestyle, socio-professional category, low education, marital status and origin (higher in persons from Africa and North Africa). In a logistic regression model, the risk of obesity was increased 1.45-fold in persons earning less than 838.47 euros and 1.67-fold in persons with low education. Moreover, it was 2.28-fold greater in the non-working population, 1.62-fold greater in the unemployed and 1.5-fold greater in the working class, compared with the executive and self-employed population.
Conclusion: The risk of obesity is therefore independently related to cultural, economic and social parameters.
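The adjusted risk estimates reported above come from a logistic regression model, in which the exponentiated coefficient of each covariate is its adjusted odds ratio, net of the other covariates. A minimal sketch of that calculation, assuming synthetic data and hypothetical variable names rather than the study's actual dataset:

    # Illustration only: how adjusted odds ratios of the kind reported above
    # are obtained from a logistic regression fit. Data are synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    low_income = rng.integers(0, 2, n)     # 1 = income below the threshold
    low_education = rng.integers(0, 2, n)  # 1 = low educational level
    age = rng.uniform(16, 59, n)

    # Synthetic outcome roughly mimicking the reported direction of effects.
    logit = (-2.0 + np.log(1.45) * low_income
             + np.log(1.67) * low_education + 0.01 * (age - 35))
    obese = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([low_income, low_education, age]))
    fit = sm.Logit(obese, X).fit(disp=False)

    # exp(coefficient) is the adjusted odds ratio for each covariate.
    for name, coef in zip(["intercept", "low_income", "low_education", "age"],
                          fit.params):
        print(f"{name:14s} adjusted OR = {np.exp(coef):.2f}")

With enough observations, the printed odds ratios recover the values planted in the synthetic outcome, which is the sense in which the model "adjusts" each factor for the others.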
abstract_id: PUBMED:2602618
Respiratory tuberculosis in Seine-Saint-Denis. Results of treatment A study was carried out in 1984-1986 in Seine-Saint-Denis on the clinical management of tuberculosis cases. It was possible to observe in routine practice the nature, duration and results of treatment in 336 adult cases suffering for the first time from respiratory tuberculosis. The recommendations of the French Society of Pneumology were taken as a reference. A minority of patients (22%) were treated entirely at home. The others were admitted to hospital then treated at home (33%) or had a stay in a sanatorium (45%). The mean duration of stay in institutions was five months for those patients staying in a sanatorium. The most common therapeutic regimen in the initial phase consisted of rifampicin and isoniazid, with additional ethambutol alone (60%) or ethambutol in combination with pyrazinamide (15%). The mean duration of treatment was 10.5 months, without any difference between regimens consisting of three or four drugs. At the end of the period of treatment, 85% of patients were considered to be cured; 16 patients (5%) had died and 18 patients (6%) were lost to follow-up before the end of treatment. There were 15 patients (4%) who showed no significant change between the beginning and the end of the study.
abstract_id: PUBMED:24996878
Obstetric and neonatal outcomes of adolescent pregnancies: a cohort study in a hospital in Seine-Saint-Denis France Objectives: The aim of this study was to describe the characteristics, monitoring, obstetrical complications, childbirth and neonatal outcomes of pregnancies among minors in a cohort of adolescents from Seine-Saint-Denis (France).
Patients And Methods: This is a retrospective, comparative cohort study conducted from January 1, 1996 to July 31, 2011, based on the database of Jean-Verdier hospital in Seine-Saint-Denis. Three groups were established: patients aged under 16 years and patients aged 16 to under 18 years, compared with a group of primiparas aged 18 to 25 years. The criteria considered were the characteristics of the pregnancy, the mode of delivery, neonatal outcome and the course of the post-partum period.
Results: Minor patients were statistically more likely than young adults to be single, to be students, to smoke and to have anemia. Obstetric care was poorer for minors than for the control group, with fewer consultations and ultrasounds (P < 0.001). Obstetrical complications were similar in the three groups apart from preterm labor. Adolescents under 16 years old had a higher preterm delivery risk in multivariate analysis (RR = 0.33; 95% CI [0.12-0.90]; P = 0.03). Adolescents had fewer cesarean and instrumental deliveries (P < 0.05).
Discussion And Conclusion: Teenage pregnancy remains an important management issue for maternity units, particularly from a social standpoint. On the medical side, only preterm delivery appears to be more common among these adolescents.
abstract_id: PUBMED:28737324
Repeated BCG vaccine stockouts in France: local findings and consequences in a primary health care centre in Seine-Saint-Denis France is the European country most severely impacted by the BCG vaccine stockouts that started in 2013. After a brief history of French tuberculosis vaccine regulations, this article describes the dysfunctions induced by these shortages in a primary health care centre in Seine-Saint-Denis, the department with the highest incidence of tuberculosis in France. Field observations reveal two types of shortcomings in the institutional communication to healthcare professionals: incomplete communication on projected shortages and their duration; and lack of technical information on the BCG multidose vials for the Polish market replacing BCG SSI. We then report the consequences of this poor information on the performance of the primary health care centre and its workers. In the light of this situation, we propose ways to improve policy implementation, especially by creating bottom-up communication channels from healthcare professionals towards health agencies.
Answer: Yes, women from Seine-Saint-Denis (SSD) do have different prenatal care compared to other women in the Île-de-France region. A study comparing the care of pregnant women residing in SSD with those residing elsewhere in Île-de-France found that 19% of women in SSD had inadequate prenatal care, which included late initiation of care, fewer than three ultrasounds, and an insufficient number of visits. This was higher compared to 12% elsewhere in Île-de-France. Additionally, 29% of women in SSD did not know if they had been screened for Down's Syndrome, compared to 16% in other parts of Île-de-France. These disparities were particularly pronounced among migrant women and were also significant for French citizens residing in SSD, indicating that residence in SSD was a risk factor for inadequate prenatal care and lack of knowledge about Down's Syndrome screening (PUBMED:24702967). |
Instruction: Are tyrosine kinase inhibitors still active in patients with metastatic renal cell carcinoma previously treated with a tyrosine kinase inhibitor and everolimus?
Abstracts:
abstract_id: PUBMED:23332872
Are tyrosine kinase inhibitors still active in patients with metastatic renal cell carcinoma previously treated with a tyrosine kinase inhibitor and everolimus? Experience of 36 patients treated in France in the RECORD-1 Trial. Background: Because the response to treatment is limited, patients with metastatic renal cell carcinoma (mRCC) typically receive multiple treatments. Guidelines recommend everolimus for patients previously treated with tyrosine kinase inhibitors (TKI) sunitinib or sorafenib. This study evaluated the efficacy of TKI re-treatment in patients with disease progression after a TKI-everolimus sequence.
Patients And Methods: Data were reviewed for patients enrolled in RECORD-1 (Renal Cell Cancer Treatment With Oral RAD001 Given Daily) at French sites. Response, progression-free survival (PFS), and overall survival were evaluated in patients treated with a TKI-everolimus-TKI sequence.
Results: Thirty-six patients received a TKI after everolimus: sunitinib in 17 patients, sorafenib in 15, and dovitinib (TKI258) in 4. The response rate with TKI re-treatment was 8%, and the disease-control rate (response plus stable disease) was 75%. The median PFS with each component of the TKI-everolimus-TKI sequence was 10.7 months (95% CI, 1.8-28.5 months), 8.9 months (95% CI, 1.7-34.6 months), and 8.2 months (95% CI, 5.2-11.9 months), respectively. The median overall survival from the start of everolimus was 29.1 months (95% CI, 21.1 months to not reached), which suggests a benefit of using a TKI in this setting.
Conclusions: Administration of a TKI-everolimus-TKI sequence may be associated with clinical benefit and should be prospectively investigated.
abstract_id: PUBMED:25456838
Efficacy and Safety of Sequential Use of Everolimus in Patients With Metastatic Renal Cell Carcinoma Previously Treated With Bevacizumab With or Without Interferon Therapy: Results From the European AVATOR Study. Background: Everolimus is a mammalian target of rapamycin (mTOR) inhibitor. It gained approval based on the results of the RECORD-1 (Renal Cell Cancer Treatment With Oral RAD001 Given Daily) trial, which included patients with metastatic renal cell carcinoma (mRCC) whose disease progressed after receiving vascular endothelial growth factor receptor (VEGFR) tyrosine kinase inhibitors (TKIs). Bevacizumab is a monoclonal antibody targeting angiogenesis that is approved in patients with mRCC. The sequence of everolimus second-line therapy after failure of bevacizumab ± interferon (IFN) first-line therapy has not yet been studied.
Methods: AVAstin(®) followed by afiniTOR(®) (AVATOR) was a noninterventional retrospective multicenter European observational study of 42 unselected patients with mRCC who were previously or currently treated with everolimus after failure of bevacizumab ± IFN. The primary end point was everolimus progression-free survival (PFS). Secondary end points were related to the overall survival (OS) of patients receiving the drug sequence and everolimus treatment and safety.
Results: Exploring the duration of second-line everolimus treatment, 63.8% of patients received at least 3 months of everolimus and 28.8% received at least 8 months of treatment. At the time of data analysis, 15 patients (36%) were still receiving everolimus, 40% had stopped because of progressive disease, and 24% had discontinued treatment for other reasons. Patients receiving everolimus after bevacizumab experienced a median PFS of 17 months (95% confidence interval [CI], 5 months to not reached). Median OS was not reached with everolimus second-line therapy. At 32 months after the start of first-line therapy, 53.3% of patients were still alive. All grades of common adverse events (AEs) were consistent with the known safety profile of everolimus.
Conclusion: The AVATOR-studied sequence displayed a longer than expected median PFS. Further prospective exploratory studies need to be performed to confirm these encouraging results in a larger cohort of patients.
abstract_id: PUBMED:33792094
Lenvatinib with or Without Everolimus in Patients with Metastatic Renal Cell Carcinoma After Immune Checkpoint Inhibitors and Vascular Endothelial Growth Factor Receptor-Tyrosine Kinase Inhibitor Therapies. Introduction: Lenvatinib (Len) plus everolimus (Eve) is an approved therapy for metastatic renal cell carcinoma (mRCC) after first-line vascular endothelial growth factor receptor-tyrosine kinase inhibitors (VEGFR-TKIs), but limited data exist on the efficacy of Len ± Eve after progression on immune checkpoint inhibitors (ICIs) and VEGFR-TKIs.
Methods: We retrospectively reviewed the records of patients with mRCC at our institution who were treated with Len ± Eve after ICI and VEGFR-TKI. A blinded radiologist assessed objective response as defined by RECIST version 1.1. Descriptive statistics and the Kaplan-Meier method were used.
Results: Fifty-five patients were included in the analysis. Of these patients, 81.8% had clear-cell histology (ccRCC), and 76.4% had International Metastatic RCC Database Consortium intermediate-risk disease. Median number of prior therapies was four (range, 2-10); all patients had prior ICIs and VEGFR-TKIs, and 80% were previously treated with ICI and at least two VEGFR-TKIs, including cabozantinib. One patient (1.8%) achieved a complete response, and 11 patients (20.0%) achieved a partial response, for an overall response rate (ORR) of 21.8%; 35 patients (63.6%) achieved stable disease. In all patients, median progression-free survival (PFS) was 6.2 months (95% confidence interval [CI], 4.8-9.4) and median overall survival (OS) was 12.1 months (95% CI, 8.8-16.0). In patients with ccRCC, ORR was 24.4%, PFS was 7.1 months (95% CI, 5.0-10.5), and OS was 11.7 months (95% CI, 7.9-16.1). 50.9% of patients required dose reductions and 7.3% discontinued treatment because of toxicity.
Conclusion: Len ± Eve demonstrated meaningful clinical activity and tolerability in heavily pretreated patients with mRCC after disease progression with prior ICIs and VEGFR-TKIs.
Implications For Practice: As the therapeutic landscape for patients with metastatic renal cell carcinoma continues to evolve, this single-center, retrospective review highlights the real-world efficacy of lenvatinib with or without everolimus in heavily pretreated patients. This article supports the use of lenvatinib with or without everolimus as a viable salvage strategy for patients whose disease progresses after treatment with immune checkpoint inhibitors and vascular endothelial growth factor receptor-tyrosine kinase inhibitor therapies, including cabozantinib.
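The median PFS and OS figures quoted in these abstracts are Kaplan-Meier (product-limit) estimates: at each observed event time, the running survival probability is multiplied by the fraction of at-risk patients who remain event-free, and censored patients leave the risk set without counting as events. A minimal sketch, assuming invented follow-up times and censoring flags rather than the study's data:

    # Product-limit (Kaplan-Meier) estimator; times and censoring are invented.
    def kaplan_meier(times, events):
        """times: follow-up in months; events: 1 = progression/death, 0 = censored.
        Returns (time, survival probability) pairs at each event time."""
        data = sorted(zip(times, events))
        n_at_risk = len(data)
        surv, curve = 1.0, []
        i = 0
        while i < len(data):
            t = data[i][0]
            deaths = sum(1 for tt, e in data if tt == t and e == 1)
            removed = sum(1 for tt, e in data if tt == t)
            if deaths:
                surv *= 1 - deaths / n_at_risk
                curve.append((t, surv))
            n_at_risk -= removed
            i += removed
        return curve

    def median_survival(curve):
        # First time at which the survival estimate drops to 0.5 or below.
        return next((t for t, s in curve if s <= 0.5), None)

    times  = [2.1, 3.4, 4.8, 5.0, 6.2, 6.2, 7.5, 9.4, 11.0, 14.3]
    events = [1,   1,   0,   1,   1,   1,   0,   1,   1,    0]
    curve = kaplan_meier(times, events)
    print("median PFS estimate:", median_survival(curve), "months")

On these invented data, the estimate drops below 0.5 at 6.2 months, which is how a "median PFS" of the kind reported above is read off the curve.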
abstract_id: PUBMED:27059553
Safety and clinical activity of vascular endothelial growth factor receptor (VEGFR)-tyrosine kinase inhibitors after programmed cell death 1 inhibitor treatment in patients with metastatic clear cell renal cell carcinoma. Background: Emerging agents blocking the programmed cell death 1 (PD-1) pathway show activity in metastatic clear cell renal cell carcinoma (mRCC). The aim of this study was to evaluate the efficacy and safety of vascular endothelial growth factor (VEGF)/VEGF receptor (VEGFR)-tyrosine kinase inhibitor (TKI) therapy after PD-1 inhibition.
Patients And Methods: Patients with mRCC treated with anti-PD-1 antibody (aPD-1) monotherapy or in combination (with VEGFR-TKI or ipilimumab) who subsequently received VEGFR-TKI were retrospectively reviewed. The efficacy end points were objective response rate (ORR) and progression-free survival (PFS), stratified by the type of prior PD-1 regimen. Safety by regimen type and PD-1 exposure was also evaluated.
Results: Seventy patients were included. Forty-nine patients received prior therapy with immune checkpoint inhibitors (CPIs) alone and 21 had combination therapy of aPD-1 and VEGFR-TKI. Overall, ORR to VEGFR-TKI after PD-1 inhibition was 28% (19/68) and the median PFS was 6.4 months (mo) (4.3-9.5). ORR to VEGFR-TKI after aPD-1 in combination with VEGFR-TKI was lower than that in patients treated with VEGFR-TKI after CPI alone (ORR 10% versus 36%, P = 0.039). In the multivariable analysis, patients treated with prior CPI alone were more likely to achieve an objective response than those treated with aPD-1 in combination with VEGFR-TKI (OR = 5.38; 95% CI 1.12-26.0, P = 0.03). There was a trend toward numerically longer median PFS in the VEGFR-TKI after the CPI alone group, 8.4 mo (3.2-12.4) compared with 5.5 mo (2.9-8.3) for those who had VEGFR-TKI after aPD-1 in combination with VEGFR-TKI (P = 0.15). The most common adverse events (AEs) were asthenia, hypertension, and diarrhea.
Conclusions: The efficacy and safety of VEGFR-TKIs after PD-1 inhibition were demonstrated in this retrospective study. The response rate was lower and the median progression-free survival was shorter in those patients who received prior PD-1 in combination with VEGFR-TKI. PD-1 exposure does not seem to significantly influence the safety of subsequent VEGFR-TKI treatment.
abstract_id: PUBMED:21965770
Sunitinib re-challenge in metastatic renal cell carcinoma treated sequentially with tyrosine kinase inhibitors and everolimus. Therapy of patients with metastatic renal cell carcinoma (mRCC) requires sequential use of several agents with different mechanisms and minimal cross-resistance between them. Tyrosine kinase inhibitors (TKIs) and mammalian target of rapamycin (mTOR) inhibitors prolong progression-free survival (PFS) in patients with mRCC. Re-challenge with TKIs provides clinical benefit after everolimus in patients with mRCC. We report the case of an mRCC patient with lung and bone metastases, treated sequentially with sunitinib, sorafenib and everolimus. The patient had an objective response, with reduction of the bone metastases, but adaptive and concomitant progression of the lung metastases during sunitinib re-challenge. Previously, these lung metastases had responded to sunitinib. This intriguing paradox suggests that not only was sunitinib able to target a specific metastatic site during the re-challenge, as seen by the reduction of bone metastases, but it also elicited a more invasive adaptation and progression of lung tumor cells.
abstract_id: PUBMED:26833674
Use of mammalian target of rapamycin inhibitors after failure of tyrosine kinase inhibitors in patients with metastatic renal cell carcinoma undergoing hemodialysis: A single-center experience with four cases. We retrospectively identified patients with end-stage renal disease undergoing hemodialysis who were treated with mammalian target of rapamycin (mTOR) inhibitors as second- and/or third-line targeted therapy after treatment failure with tyrosine kinase inhibitors for metastatic renal cell carcinoma. Patient medical records were reviewed to evaluate the response to therapies and treatment-related toxicities. Four patients were identified. All patients had undergone nephrectomy, and one had received immunotherapy before targeted therapy. Two patients had clear cell histology, and the other two had papillary histology. All patients were classified into the intermediate-risk group according to the Memorial Sloan-Kettering Cancer Center risk model. All patients were treated with everolimus as a second- or third-line therapy, and two patients were treated with temsirolimus as a second- or third-line therapy after treatment failure with sorafenib or sunitinib. The median duration of everolimus therapy was 6.7 months, whereas that of temsirolimus was 9.5 months. All patients had stable disease as the best response during each period of therapy. There were no severe adverse events. The use of mTOR inhibitors in patients who previously failed to respond to tyrosine kinase inhibitors appears to be feasible in patients with end-stage renal disease requiring hemodialysis.
abstract_id: PUBMED:22460837
Comparative efficacy of vascular endothelial growth factor (VEGF) tyrosine kinase inhibitor (TKI) and mammalian target of rapamycin (mTOR) inhibitor as second-line therapy in patients with metastatic renal cell carcinoma after the failure of first-line VEGF TKI. Sequential therapy is a standard strategy used to overcome the limitations of targeted agents in metastatic renal cell carcinoma. It remains unclear whether a mammalian target of rapamycin (mTOR) inhibitor is a more effective second-line therapy after failure of a first-line vascular endothelial growth factor tyrosine kinase inhibitor (VEGF TKI) than the alternative, another VEGF TKI. A clinical database was used to identify all patients with renal cell carcinoma who failed first-line VEGF TKI and were then treated with second-line VEGF TKI or mTOR inhibitors at the Asan Medical Center. Patient medical characteristics, radiological response and survival status were assessed. Of the 83 patients who met the inclusion criteria, 41 received second-line VEGF TKI [sunitinib (n = 16) and sorafenib (n = 25)] and 42 were treated with mTOR inhibitors [temsirolimus (n = 11) and everolimus (n = 31)]. After a median follow-up duration of 23.9 months (95% CI, 17.8-30.0), progression-free survival was 3.0 months for both groups [hazard ratio (HR, VEGF TKI vs. mTOR inhibitor) = 0.97, 95% CI 0.59-1.62, P = 0.92]. Overall survival was 10.6 months for the VEGF TKI group and 8.2 months for the mTOR inhibitor group (HR = 0.98, 95% CI 0.57-1.68, P = 0.94). The two groups did not differ significantly in terms of disease control rate (51% for VEGF TKI and 59% for mTOR inhibitor, P = 0.75). Second-line VEGF TKI seems to be as effective as mTOR inhibitors and may be a viable option as a second-line agent after first-line anti-VEGF agents have failed.
abstract_id: PUBMED:35576438
Telaglenastat plus Everolimus in Advanced Renal Cell Carcinoma: A Randomized, Double-Blinded, Placebo-Controlled, Phase II ENTRATA Trial. Purpose: Glutaminase is a key enzyme that supports the elevated dependency of tumors on glutamine-dependent biosynthesis of metabolic intermediates. Dual targeting of glucose and glutamine metabolism by the mTOR inhibitor everolimus plus the oral glutaminase inhibitor telaglenastat showed preclinical synergistic anticancer effects, which translated to encouraging safety and efficacy findings in a phase I trial of 2L+ renal cell carcinoma (RCC). This study evaluated telaglenastat plus everolimus (TelaE) versus placebo plus everolimus (PboE) in patients with advanced/metastatic RCC (mRCC) in the 3L+ setting (NCT03163667).
Patients And Methods: Eligible patients with mRCC, previously treated with at least two prior lines of therapy [including ≥1 VEGFR-targeted tyrosine kinase inhibitor (TKI)], were randomized 2:1 to receive everolimus plus either telaglenastat or placebo, until disease progression or unacceptable toxicity. Primary endpoint was investigator-assessed progression-free survival (PFS; one-sided α < 0.2).
Results: Sixty-nine patients were randomized (46 TelaE, 23 PboE). Patients had a median of three prior lines of therapy, including TKIs (100%) and checkpoint inhibitors (88%). At a median follow-up of 7.5 months, median PFS was 3.8 months for TelaE versus 1.9 months for PboE [HR, 0.64; 95% confidence interval (CI), 0.34-1.20; one-sided P = 0.079]. One TelaE patient had a partial response and 26 had stable disease (SD). Eleven patients on PboE had SD. Treatment-emergent adverse events included fatigue, anemia, cough, dyspnea, elevated serum creatinine, and diarrhea; grade 3 to 4 events occurred in 74% of TelaE patients versus 61% of PboE patients.
Conclusions: TelaE was well tolerated and improved PFS versus PboE in patients with mRCC previously treated with TKIs and checkpoint inhibitors.
abstract_id: PUBMED:32291161
The Efficacy of Lenvatinib Plus Everolimus in Patients with Metastatic Renal Cell Carcinoma Exhibiting Primary Resistance to Front-Line Targeted Therapy or Immunotherapy. Background: Patients with primary refractory metastatic renal cell carcinoma (mRCC) have a dismal prognosis and poor response to subsequent treatments. While there are several approved second-line therapies, it remains critical to choose the most effective treatment regimen.
Patients And Methods: We identified 7 patients with clear cell mRCC who had primary resistance to vascular endothelial growth factor (VEGF)-targeted tyrosine kinase inhibitors (TKIs) or immune checkpoint inhibitor (ICI) combination therapy. The patients were treated with lenvatinib (a multitargeted TKI) plus everolimus (a mammalian target of rapamycin inhibitor). Among these 7 patients, 2 had prior TKI therapy, 3 had prior ICI therapy, and 2 had prior TKI and ICI therapy. We collected the patients' clinical characteristics, molecular profiles, treatment durations, and toxicity outcomes.
Results: The median time to progression on prior therapies was 1.5 months. Lenvatinib plus everolimus was used either as a second-line (n = 4) or third-line (n = 3) therapy. As best responses, 3 patients had partial responses and 3 achieved stable disease. Patients were followed for ≥17 months; progression-free survival ranged from 3 to 15 months, and overall survival ranged from 4 to 17 months.
Conclusion: These 7 cases provide real-world data for the use of lenvatinib plus everolimus in patients with mRCC with primary resistance to first-line VEGF-targeted TKIs or ICI combination therapy.
abstract_id: PUBMED:25160521
Everolimus and temsirolimus are not the same second-line in metastatic renal cell carcinoma. A systematic review and meta-analysis of literature data. Background: Two mTOR inhibitors, temsirolimus (TEM) and everolimus (EVE), proved to be active in mRCC but have never been compared in a prospective trial. We aimed to compare their effectiveness in mRCC patients previously treated with a vascular endothelial growth factor receptor tyrosine kinase inhibitor, and performed a systematic review and meta-analysis of the available evidence.
Materials And Methods: The MEDLINE/PubMed database was reviewed for studies that compared EVE with TEM from January 2006 to May 2014. Summary hazard ratios (HR) for overall survival (OS) and time to treatment failure (TTF) were calculated using random- or fixed-effects models depending on the heterogeneity of the included studies. Statistical heterogeneity was assessed using the χ² test, and inconsistency was quantified with the I² statistic. Publication bias was evaluated using the Begg and Egger tests.
Results: Four studies were included in the meta-analysis; data of 937 patients were available: 545 received EVE and 392 TEM. Among the included patients, 863 [92%] were treated with sunitinib and 74 [8%] with pazopanib or sorafenib as first-line therapy. In the overall population, treatment with EVE decreased the risk of death by 26% over TEM (HR, 0.74; 95% confidence interval [CI], 0.59-0.93; P = .008). The TTF was evaluable in 692 patients; in this group, treatment with EVE decreased the risk of treatment failure by 30% (HR, 0.70; 95% CI, 0.56-0.88; P = .002). No significant heterogeneity or publication bias was found for OS and TTF.
Conclusion: In this analysis, we compared EVE with TEM as second-line therapy in mRCC, and report a significant difference between mTOR inhibitors, even if these results need to be confirmed in a prospective trial.
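The pooled estimates above are produced by inverse-variance meta-analysis: each study's log hazard ratio is weighted by the reciprocal of its variance (recoverable from the reported 95% CI), heterogeneity is summarized by Cochran's Q and the I² statistic, and the DerSimonian-Laird between-study variance feeds the random-effects weights. A minimal sketch, assuming placeholder HRs and CIs rather than the four included studies' actual results:

    # Inverse-variance pooling of hazard ratios: fixed effect plus
    # DerSimonian-Laird random effects, with Cochran's Q and I^2.
    # The HRs/CIs below are placeholders, not the studies' actual values.
    import math

    studies = [  # (HR, lower 95% CI, upper 95% CI)
        (0.70, 0.50, 0.98),
        (0.78, 0.55, 1.10),
        (0.72, 0.48, 1.08),
        (0.80, 0.60, 1.07),
    ]

    log_hr = [math.log(hr) for hr, lo, hi in studies]
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for hr, lo, hi in studies]
    w = [1 / s**2 for s in se]

    # Fixed-effect pooled estimate on the log scale.
    pooled_fixed = sum(wi * y for wi, y in zip(w, log_hr)) / sum(w)

    # Heterogeneity: Cochran's Q and the I^2 statistic.
    q = sum(wi * (y - pooled_fixed) ** 2 for wi, y in zip(w, log_hr))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

    # DerSimonian-Laird between-study variance and random-effects estimate.
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (s**2 + tau2) for s in se]
    pooled_re = sum(wi * y for wi, y in zip(w_re, log_hr)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))

    print(f"fixed-effect HR = {math.exp(pooled_fixed):.2f}")
    print(f"random-effects HR = {math.exp(pooled_re):.2f} "
          f"(95% CI {math.exp(pooled_re - 1.96 * se_re):.2f}-"
          f"{math.exp(pooled_re + 1.96 * se_re):.2f})")
    print(f"Q = {q:.2f}, I^2 = {100 * i2:.0f}%")

When heterogeneity is low (I² near zero), the fixed- and random-effects estimates coincide, which is why the choice of model is made to depend on the observed heterogeneity, as the Methods above describe.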
Answer: Yes, tyrosine kinase inhibitors (TKIs) can still be active in patients with metastatic renal cell carcinoma (mRCC) who were previously treated with a TKI and everolimus. A study evaluating the efficacy of TKI re-treatment in patients with disease progression after a TKI-everolimus sequence found that the response rate with TKI re-treatment was 8%, and the disease-control rate (response plus stable disease) was 75%. The median progression-free survival (PFS) with each component of the TKI-everolimus-TKI sequence was 10.7 months, 8.9 months, and 8.2 months, respectively. The median overall survival from the start of everolimus was 29.1 months, suggesting a benefit in using TKI in this setting (PUBMED:23332872).
Furthermore, another study reported that sunitinib re-challenge provided clinical benefit after everolimus in patients with mRCC, indicating that TKIs can still be effective after treatment with everolimus (PUBMED:21965770). Additionally, the use of mammalian target of rapamycin (mTOR) inhibitors, such as everolimus, after failure of TKIs appears to be feasible and can lead to stable disease in patients with mRCC undergoing hemodialysis (PUBMED:26833674).
In conclusion, TKIs can retain activity in patients with mRCC even after previous treatment with a TKI and everolimus, and re-challenging with TKIs may provide clinical benefits in this patient population. |
Instruction: Does atovaquone prolong the disease-free interval of toxoplasmic retinochoroiditis?
Abstracts:
abstract_id: PUBMED:20437247
Does atovaquone prolong the disease-free interval of toxoplasmic retinochoroiditis? Background: To evaluate the efficacy of atovaquone in suppressing recurrence of Toxoplasma retinochoroiditis after treatment.
Methods: Retrospective, nonrandomized clinical trial. Forty-one immunocompetent patients were treated for Toxoplasma retinochoroiditis with atovaquone between 1999 and 2006. The diagnosis was based on clinical signs alone. Atovaquone 750 mg was given two to three times daily, together with oral steroids. Lesion location, time interval until recurrence, visual function, and adverse events were recorded.
Results: Forty-two eyes of 41 patients were treated with atovaquone for Toxoplasma retinochoroiditis. Side-effects were usually mild and only one patient stopped therapy with atovaquone because of nausea. Reactivation of retinochoroiditis occurred in 18 patients (44%) during a time interval of 3-70 months.
Conclusions: Therapy of Toxoplasma retinochoroiditis with atovaquone is well tolerated. Our data suggest that therapy with atovaquone has the potential to prolong the time to recurrence of Toxoplasma retinochoroiditis. A prospective, randomized, comparative long-term clinical trial would be necessary to confirm these data.
abstract_id: PUBMED:32651033
Atypical toxoplasmic retinochoroiditis in patients with malignant hematological diseases. In immunocompromised patients, toxoplasmosis may have an atypical presentation with bilateral, extensive or multifocal involvement. We report a case series of atypical toxoplasmic retinochoroiditis in patients with malignant hematological diseases, who are usually immunosuppressed. Four patients were diagnosed with atypical toxoplasmic retinochoroiditis; all of them were immunosuppressed (100%) and half (50%) had received a bone marrow transplant. Polymerase chain reaction for Toxoplasma was positive in 75% of cases, and in one case (25%) the diagnosis was made on clinical and serological criteria. One patient presented with ocular toxoplasmosis despite being on prophylactic treatment with atovaquone. Patients with atypical ocular toxoplasmosis and hematological diseases are generally immunocompromised, but they do not always have a history of bone marrow transplantation. The presentation may be due to primary infection or reactivation of the disease. Polymerase chain reaction on aqueous humor and/or vitreous allows the diagnosis to be confirmed so that appropriate treatment can be given.
abstract_id: PUBMED:19108794
Adverse drug reactions to treatments for ocular toxoplasmosis: a retrospective chart review. Objective: This study evaluated the incidence and types of adverse drug reactions (ADRs) associated with medications used to treat active toxoplasmic chorioretinitis.
Methods: This was a retrospective review of the clinical records of a consecutive series of patients with active toxoplasmic chorioretinitis, examined between March 1991 and August 1998. For inclusion in the review, patients had to have been diagnosed with active toxoplasmic chorioretinitis, been treated with a single drug or drug combination indicated for this condition, and been followed for at least 8 weeks. Patients who were lost to follow-up or who had incomplete chart data were excluded. Demographic data, pertinent aspects of the medical history, drug treatments, and ADRs associated with antitoxoplasmic treatment were recorded.
Results: Fifty-five patients met the criteria for inclusion in the review. In descending order of frequency, they received antitoxoplasmic treatment with clindamycin (n = 50), sulfadiazine (n = 40), pyrimethamine (n = 33), trimethoprim-sulfamethoxazole (n = 16), and atovaquone (n = 10), alone or in combination. Twenty-two patients (40.0%) had a total of 27 ADRs. The most frequently occurring ADRs were rash (19 [34.5%]), mostly associated with sulfadiazine (9/40 [22.5%]) and clindamycin (6/50 [12.0%]), and gastrointestinal ADRs such as diarrhea (6 [10.9%]), stomach upset (6 [10.9%]), and bleeding (1 [1.8%]), mostly associated with clindamycin (5/50 [10.0%], 3/50 [6.0%], and 1/50 [2.0%], respectively). The incidence of ADRs associated with individual antitoxoplasmic drugs was 30.0% (3/10) for atovaquone, 26.0% (13/50) for clindamycin, 22.5% (9/40) for sulfadiazine, 12.5% (2/16) for trimethoprim-sulfamethoxazole, and 12.1% (4/33) for pyrimethamine. There were 2 serious ADRs: gastrointestinal bleeding in a patient treated with clindamycin and leukopenia in a patient treated with pyrimethamine. Twenty-five ADRs were reversed on drug discontinuation; the remaining 2 were mild and did not require drug discontinuation.
Conclusions: The overall incidence of treatment-associated ADRs was high (40.0%) in these patients with ocular toxoplasmosis. The most frequently occurring ADRs were rash and gastrointestinal complaints.
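The per-drug incidences above are simple proportions (for example, 3/10 for atovaquone), and several denominators are small, so an interval estimate around each proportion is informative. A minimal sketch that recomputes the reported proportions and attaches Wilson score intervals (the choice of interval is an editorial assumption, not the paper's method):

    import math

    def wilson_ci(k, n, z=1.96):
        # 95% Wilson score interval for a proportion k/n.
        p = k / n
        center = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = (z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
                / (1 + z * z / n))
        return center - half, center + half

    # Counts taken from the abstract above: (drug, patients with ADRs, treated).
    counts = [("atovaquone", 3, 10), ("clindamycin", 13, 50),
              ("sulfadiazine", 9, 40), ("trimethoprim-sulfamethoxazole", 2, 16),
              ("pyrimethamine", 4, 33)]

    for drug, k, n in counts:
        lo, hi = wilson_ci(k, n)
        print(f"{drug}: {100 * k / n:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")

Running this reproduces the percentages quoted above (30.0%, 26.0%, 22.5%, 12.5%, 12.1%) and makes explicit how wide the uncertainty is for the smallest groups.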
abstract_id: PUBMED:24993473
Development of bilateral acquired toxoplasmic retinochoroiditis during erlotinib therapy. N/A
abstract_id: PUBMED:9620076
Effects of drug therapy on Toxoplasma cysts in an animal model of acute and chronic disease. Purpose: To evaluate the effects of drug therapy on the clinical course of acute acquired Toxoplasma retinochoroiditis and on the number of Toxoplasma cysts present in the brain and ocular tissues in the hamster animal model.
Methods: The Syrian golden hamster animal model of Toxoplasma retinochoroiditis was used. In acute disease, systemically administered atovaquone was compared with conventional therapies (pyrimethamine combined with sulfadiazine; clindamycin; and spiramycin). The clinical course of the ocular disease was determined with retinal examination and photography of the fundus. The number of Toxoplasma cysts remaining after treatment was evaluated in aliquots of brain homogenate and in retinal tissue. The effect of atovaquone on cerebral Toxoplasma cyst count was also studied in chronic disease.
Results: None of the drugs administered altered the course of the acute disease, judged by clinical examination. Atovaquone alone significantly reduced the number of cerebral Toxoplasma cysts after acute disease. Atovaquone also significantly reduced the cerebral Toxoplasma cyst count in chronic disease.
Conclusions: Tissue cysts are believed to be responsible for reactivation of Toxoplasma retinochoroiditis. Atovaquone has the potential to reduce the risk of recurrent disease.
abstract_id: PUBMED:7527323
Management of toxoplasmosis. Toxoplasma gondii, an intracellular coccidian protozoan, is the causative agent of toxoplasmosis, a widespread infection affecting various birds and mammals including humans. In immunocompetent hosts, the infection is usually asymptomatic and benign. Toxoplasmosis is either congenital or acquired. In general, prenatal therapy of congenital toxoplasmosis is beneficial in reducing the frequency of infant infection. Therapies are based primarily on spiramycin because of the relative lack of toxicity and high concentrations achieved in the placenta. Clindamycin is the standard drug for chemoprophylaxis in newborn infants, and is directed at preventing the occurrence of retinochoroiditis as a late sequel to congenital infection. The standard treatment for acquired toxoplasmosis in both immunocompetent and immunodeficient patients is the synergistic combination of pyrimethamine and sulphonamides. Toxoplasmic encephalitis is the most common manifestation of acquired toxoplasmosis in immunocompromised patients and if not treated is fatal. However, because of toxicity, the therapeutic efficacy of pyrimethamine-sulphonamide combinations may be seriously limited in immunodeficient patients. A number of novel and less toxic agents are being currently studied in clinical settings, including macrolide antibiotics (clindamycin, clarithromycin and azithromycin) and atovaquone, as well as some older anti-infective drugs such as cotrimoxazole (trimethoprim/sulfamethoxazole). Maintenance or prophylactic therapy is essential in many patients with acquired immunodeficiency syndrome (AIDS) where toxoplasmosis is most often the result of a pre-existent latent infection.
abstract_id: PUBMED:27384853
Early diagnosis and successful treatment of disseminated toxoplasmosis after cord blood transplantation. A 66-year-old woman with refractory angioimmunoblastic T-cell lymphoma underwent cord blood transplantation. Prior to transplantation, a serological test for Toxoplasma gondii-specific IgG antibodies was positive. On day 96, she exhibited fever and dry cough. Chest CT showed diffuse centrilobular ground glass opacities in both lungs. The reactivation of T. gondii was identified by the presence of parasite DNA in peripheral blood and bronchoalveolar lavage fluid. Moreover, brain MRI revealed a space occupying lesion in the right occipital lobe. Therefore, disseminated toxoplasmosis was diagnosed. She received pyrimethamine and sulfadiazine from day 99. The lung and brain lesions both showed improvement but the PCR assay for T. gondii DNA in peripheral blood was positive on day 133. On day 146, she developed blurred vision and reduced visual acuity, and a tentative diagnosis of toxoplasmic retinochoroiditis was made based on ophthalmic examination results. As agranulocytosis developed on day 158, we decided to discontinue pyrimethamine and sulfadiazine and the treatment was thus switched to atovaquone. Moreover, we added spiramycin to atovaquone therapy from day 174, and her ocular condition gradually improved. In general, the prognosis of disseminated toxoplasmosis after hematopoietic stem cell transplantation (HSCT) is extremely poor. However, early diagnosis and treatment may contribute to improvement of the fundamentally dismal prognosis of disseminated toxoplasmosis after HSCT.
abstract_id: PUBMED:16282146
Toxoplasmosis. Toxoplasmosis is the most common cause of posterior uveitis in immunocompetent subjects. The infection can be congenital or acquired. Ocular symptoms vary according to the age of the subject. For instance, young children present with reduced visual acuity, strabismus, nystagmus, and leucocoria, while teenagers and adults complain of decreased vision, floaters, photophobia, pain, and hyperemia. Toxoplasmic retinochoroiditis typically affects the posterior pole, and the lesions can be solitary, multiple or satellite to a pigmented retinal scar. Active lesions present as a grey-white focus of retinal necrosis with adjacent choroiditis, vasculitis, hemorrhage and vitreitis. Cicatrization occurs from the periphery towards the center, with variable pigmentary hyperplasia. Anterior uveitis is a common finding, with mutton-fat keratic precipitates, fibrin, cells and flare, iris nodules and posterior synechiae. Atypical presentations include punctate outer retinitis, neuroretinitis, papillitis, pseudo-multiple retinochoroiditis, intraocular inflammation without retinochoroiditis, unilateral pigmentary retinopathy, Fuchs'-like anterior uveitis, scleritis and multifocal or diffuse necrotizing retinitis. The laboratory diagnosis of toxoplasmosis is based on detection of antibodies and T. gondii DNA using polymerase chain reaction (PCR). Toxoplasmosis therapy includes specific medication and corticosteroids. There are several regimens, with different drug combinations. Medications include pyrimethamine, sulfadiazine, clindamycin, trimethoprim-sulfamethoxazole, spiramycin, azithromycin, atovaquone, tetracycline and minocycline. The prognosis of ocular toxoplasmosis is usually good in immunocompetent individuals, as long as the central macula is not directly involved.
abstract_id: PUBMED:9044964
Treatment of toxoplasmosis retinochoroiditis with atovaquone in an immunocompetent patient Background: In Central Europe, ocular toxoplasmosis is the leading cause of posterior uveitis. It is a major cause of severe visual loss and blindness in young people. Drugs for the treatment of active lesions (tachyzoites) have been available for decades but remain controversial, especially because of their sometimes serious side effects. These drugs do not appear to shorten the active inflammation or to reduce the recurrence rate, in particular because of their poor effect on the cystic form (bradyzoites). Atovaquone (a hydroxynaphthoquinone) is well tolerated systemically and is effective against both tachyzoites and bradyzoites of Toxoplasma gondii, raising the hope of a reduced recurrence rate. Patient History And Clinical Findings: Two immunocompetent patients with a first and a second symptomatic recurrence, respectively, of unilateral active toxoplasmic retinochoroiditis located within the major temporal vascular arcades were treated with atovaquone and fluocortolone because of impending loss of central visual function.
Therapy And Clinical Course: Under treatment with atovaquone (3 x 750 mg/day) for three weeks and tapering of the fluocortolone, the active lesions healed quickly. After a few weeks, atrophic and remarkably lightly pigmented scars remained. No side effects were observed. After follow-up of 7 and 11 months, respectively, no recurrence had occurred.
Conclusions: Atovaquone is an effective and well tolerated drug for the treatment of active ocular toxoplasmosis in immunocompetent patients. Its efficacy against tachyzoites and cysts of Toxoplasma gondii relative to other drugs remains to be determined by further clinical trials.
abstract_id: PUBMED:11921736
New trends in ocular toxoplasmosis--a review Over the past ten years, several new trends have emerged in the field of ocular toxoplasmosis. In this article, based on a review of the scientific literature, the author highlights important or new information that may influence clinical research and clinical practice in the near future. 1. There is evidence that tissue cysts can be present in otherwise intact retinal tissue and can provoke an inflammatory reaction even many years after primary infection. This applies to both congenital and acquired toxoplasmosis. 2. Intraocular inflammatory reactions without focal necrotizing retinochoroiditis have been described in patients with acquired systemic toxoplasmosis, but there is no evidence that these inflammations were caused directly by the T. gondii parasite itself; it is too early to include such changes in the classical clinical picture of toxoplasmic retinochoroiditis. 3. Over the past ten years, articles have reported a higher prevalence of acquired ocular toxoplasmosis, but this is well documented only in South America. In Europe and the USA, the frequency of ocular toxoplasmosis remains low. Despite the different proportions of infected people in these populations, this discrepancy may be explained by differing distributions of pathogenic and non-pathogenic strains of the parasite, as well as by differences in human genotypes. 4. A model of "dynamic equilibrium" between host defence and parasite cyst rupture is helpful for understanding reactivation. Levels of tissue cyst-specific antigens do not rise in HIV patients or during reactivation of ocular retinochoroiditis, which places the responsibility for reactivation on the host immune response. 5. The most specific and reliable laboratory assessment for ocular toxoplasmosis, particularly helpful in clinically atypical cases, is the combination of IgG, IgA and IgM serological tests performed on serum and intraocular fluids, which can demonstrate intraocular antibody production. PCR tests have some value, but they are not as efficient as previously thought. 6. There is as yet no treatment that reduces the recurrence rate or shortens individual episodes of inflammation. The combination of pyrimethamine, sulfadiazine and clindamycin, given for 3 to 4 weeks with folinic acid supplementation to reduce side effects, can limit the size of the resulting scar. Long-term treatment, as used in children with congenital disease or in HIV patients, is probably more effective. Atovaquone is as effective as traditional drugs and causes fewer side effects.
Answer: The available evidence suggests that atovaquone may have the potential to prolong the disease-free interval of toxoplasmic retinochoroiditis. A retrospective, nonrandomized clinical trial involving 41 immunocompetent patients treated with atovaquone for Toxoplasma retinochoroiditis reported that the therapy was well tolerated and suggested that atovaquone therapy could prolong the time to recurrence of the condition (PUBMED:20437247). Additionally, in an animal model of acute and chronic Toxoplasma retinochoroiditis, atovaquone significantly reduced the number of cerebral Toxoplasma cysts after acute disease and also in chronic disease, which implies a potential reduction in the risk of recurrent disease (PUBMED:9620076).
However, it is important to note that a prospective randomized comparative long-term clinical trial would be necessary to confirm these findings (PUBMED:20437247). Moreover, in the context of immunocompromised patients, such as those with malignant hematological diseases, atovaquone prophylaxis did not prevent ocular toxoplasmosis in one reported case, indicating that the efficacy of atovaquone may vary depending on the patient's immune status (PUBMED:32651033).
In summary, while there is some indication that atovaquone could be beneficial in prolonging the disease-free interval of toxoplasmic retinochoroiditis, more robust clinical trials are needed to establish its efficacy conclusively. |
Instruction: Does prolonged cycling of moderate intensity affect immune cell function?
Abstracts:
abstract_id: PUBMED:15728699
Does prolonged cycling of moderate intensity affect immune cell function? Background: Prolonged exercise may induce temporary immunosuppression with a presumed increased susceptibility to infection. However, few data exist on immune cell function after prolonged cycling at moderate intensities typical of road cycling training sessions.
Methods: The present study examined the influence on immune cell function of 4 h of cycling at a constant intensity of 70% of the individual anaerobic threshold. Interleukin-6 (IL-6) and C-reactive protein (CRP), leukocyte and lymphocyte populations, and the activities of natural killer (NK) cells, neutrophils, and monocytes were examined before and after exercise, and also on a control day without exercise.
Results: Cycling for 4 h induced a moderate acute phase response with increases in IL-6 from 1.0 (SD 0.5) before to 9.6 (5.6) pg/ml 1 h after exercise and CRP from 0.5 (SD 0.4) before to 1.8 (1.3) mg/l 1 day after exercise. Although absolute numbers of circulating NK cells, monocytes, and neutrophils increased during exercise, on a per cell basis NK cell activity, neutrophil and monocyte phagocytosis, and monocyte oxidative burst did not significantly change after exercise. However, a minor effect over time for neutrophil oxidative burst was noted, tending to decrease after exercise.
Conclusions: Prolonged cycling at moderate intensities does not seem to seriously alter the function of cells of the first line of defence. Therefore, the influence of a single typical road cycling training session on the immune system is only moderate and appears to be safe from an immunological point of view.
abstract_id: PUBMED:21403796
Effect of moderate aerobic cycling on some systemic inflammatory markers in healthy active collegiate men. Background: Based on the inconsistency of some previous results related to moderate exercise effects on systemic inflammatory responses, this study was conducted to determine the effects of 45 minutes of moderate aerobic cycling on inflammatory markers, interleukin-6 (IL-6), interleukin-10 (IL-10), C-reactive protein (CRP), and leucocyte counts in young active men.
Methods: Ten healthy, active collegiate men (aged 21.03 ± 1.2 years, body fat 12.04 ± 2.72% and VO(2)max 59.6 ± 2.4 mL/kg/min), in a quasi-experimental pre/post design, participated in an acute, moderate cycling protocol at an intensity of 50% VO(2)max for 45 minutes. The inflammatory markers (serum IL-6, IL-10, CRP, and peripheral blood leucocyte counts), along with cortisol and epinephrine, were examined before and after the protocol. Data were expressed as mean (± SD) and analyzed by paired t-test using SPSS15 at α ≤ 0.05.
Results: The results showed that serum IL-6, IL-10, CRP, total leukocyte counts, and stress hormones (epinephrine and cortisol) were significantly increased following 45 minutes of moderate cycling in active collegiate men (P < 0.001). However, all pre- and post-measurements were in the population range.
Conclusion: Based on the present results, it can be concluded that moderate cycling is not only sufficient to induce systemic inflammation in active collegiate men, but also appears to be safe from an immunological point of view.
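The analysis described above is a straightforward pre/post comparison with a paired t-test at α ≤ 0.05 (the study used SPSS 15). As a minimal, hedged sketch of the same test in Python, the IL-6 values below are hypothetical stand-ins, not the study's data:

```python
# Paired t-test on pre- vs post-exercise marker values, mirroring the
# analysis described above. Values are hypothetical, not the study's data.
from scipy import stats

il6_pre  = [1.1, 0.8, 1.3, 0.9, 1.2, 1.0, 0.7, 1.4, 1.1, 0.9]   # pg/ml
il6_post = [3.2, 2.8, 4.1, 2.5, 3.9, 3.0, 2.2, 4.5, 3.6, 2.9]   # pg/ml

t_stat, p_value = stats.ttest_rel(il6_pre, il6_post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("significant pre/post difference at alpha = 0.05")
```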
abstract_id: PUBMED:31992987
Inflammatory Effects of High and Moderate Intensity Exercise-A Systematic Review. Background: Exercise leads to a robust inflammatory response mainly characterized by the mobilization of leukocytes and an increase in circulating inflammatory mediators produced by immune cells and directly by the active muscle tissue. Both positive and negative effects on immune function and susceptibility to minor illness have been observed following different training protocols. While engaging in moderate activity may enhance immune function above sedentary levels, excessive amounts of prolonged, high-intensity exercise may impair immune function. Thus, the aim of the present review was to clarify the inflammatory effects of different exercise intensities. Methods: The search was performed on PubMed and was completed on July 31, 2017. Studies were eligible if they met the predefined inclusion criteria: a) observational or interventional studies, b) conducted in healthy adults (18-65 years), c) written in Portuguese, English or Spanish, d) including moderate and/or intense exercise. Eighteen articles were included. The specific components examined included circulating blood levels of cytokines, leukocytes, creatine kinase (CK) and C-reactive protein (CRP). The methodological quality of the included studies was assessed. Results: Most of the intervention studies showed changes in the assessed biomarkers, although these changes were not consistent. White blood cell (WBC) counts increased immediately after intensive exercise (> 64% VO2max), without alteration after moderate exercise (46-64% VO2max). The results suggested an elevation of pro-inflammatory cytokines, namely IL-6, followed by an elevation of IL-10, both more evident after intense exercise bouts. CRP increased after both intense and moderate exercise, with peak increases up to 28 h. CK increased only after intensive and prolonged exercise. Conclusion: In summary, intense, long-duration exercise can in general lead to higher levels of inflammatory mediators, and thus might increase the risk of injury and chronic inflammation. In contrast, moderate exercise or vigorous exercise with appropriate resting periods can achieve maximum benefit.
abstract_id: PUBMED:33211989
Repeated prolonged moderate-intensity walking exercise does not appear to have harmful effects on inflammatory markers in patients with inflammatory bowel disease. Background And Objectives: The role of exercise in the management of inflammatory bowel disease (IBD) is inconclusive, as most research has focused on short or low-intensity exercise bouts and subjective outcomes. We assessed the effects of repeated prolonged moderate-intensity exercise on objective inflammatory markers in IBD patients.
Methods: In this study, IBD patients (IBD walkers, n = 18), and a control group (non-IBD walkers, n = 19), completed a 30, 40 or 50 km walking exercise on four consecutive days. Blood samples were taken at baseline and every day post-exercise to test for the effect of disease on exercise-induced changes in cytokine concentrations. A second control group of IBD patients who did not take part in the exercise, IBD non-walkers (n = 19), was used to test for the effect of exercise on faecal calprotectin. Both IBD groups also completed a clinical disease activity questionnaire.
Results: Changes in cytokine concentrations were similar for IBD walkers and non-IBD walkers (IL-6 p = .95; IL-8 p = .07; IL-10 p = .40; IL-1β p = .28; TNF-α p = .45), with a temporary significant increase in IL-6 (p < .001) and IL-10 (p = .006) from baseline to post-exercise day 1. Faecal calprotectin was not affected by exercise (p = .48). Clinical disease activity did not change in the IBD walkers with ulcerative colitis (p = .92), but did increase in the IBD walkers with Crohn's disease (p = .024).
Conclusion: Repeated prolonged moderate-intensity walking exercise led to similar cytokine responses in participants with or without IBD, and it did not affect faecal calprotectin concentrations, suggesting that IBD patients can safely perform this type of exercise.
abstract_id: PUBMED:37019582
Prolonged Moderate-Intensity Exercise Does Not Increase Muscle Injury Markers in Symptomatic or Asymptomatic Statin Users. Background: Statin use may exacerbate exercise-induced skeletal muscle injury caused by reduced coenzyme Q10 (CoQ10) levels, which are postulated to produce mitochondrial dysfunction.
Objectives: We determined the effect of prolonged moderate-intensity exercise on markers of muscle injury in statin users with and without statin-associated muscle symptoms. We also examined the association between leukocyte CoQ10 levels and muscle markers, muscle performance, and reported muscle symptoms.
Methods: Symptomatic (n = 35; age 62 ± 7 years) and asymptomatic statin users (n = 34; age 66 ± 7 years) and control subjects (n = 31; age 66 ± 5 years) walked 30, 40, or 50 km/d for 4 consecutive days. Muscle injury markers (lactate dehydrogenase, creatine kinase, myoglobin, cardiac troponin I, and N-terminal pro-brain natriuretic peptide), muscle performance, and reported muscle symptoms were assessed at baseline and after exercise. Leukocyte CoQ10 was measured at baseline.
Results: All muscle injury markers were comparable at baseline (P > 0.05) and increased following exercise (P < 0.001), with no differences in the magnitude of exercise-induced elevations among groups (P > 0.05). Muscle pain scores were higher at baseline in symptomatic statin users (P < 0.001) and increased similarly in all groups following exercise (P < 0.001). Muscle relaxation time increased more in symptomatic statin users than in control subjects following exercise (P = 0.035). CoQ10 levels did not differ among symptomatic (2.3 nmol/U; IQR: 1.8-2.9 nmol/U), asymptomatic statin users (2.1 nmol/U; IQR: 1.8-2.5 nmol/U), and control subjects (2.1 nmol/U; IQR: 1.8-2.3 nmol/U; P = 0.20), and did not relate to muscle injury markers, fatigue resistance, or reported muscle symptoms.
Conclusions: Statin use and the presence of statin-associated muscle symptoms does not exacerbate exercise-induced muscle injury after moderate exercise. Muscle injury markers were not related to leukocyte CoQ10 levels. (Exercise-induced Muscle Damage in Statin Users; NCT05011643).
abstract_id: PUBMED:30102685
Continuous Moderate-Intensity but Not High-Intensity Interval Training Improves Immune Function Biomarkers in Healthy Young Men. Khammassi, M, Ouerghi, N, Said, M, Feki, M, Khammassi, Y, Pereira, B, Thivel, D, and Bouassida, A. Continuous moderate-intensity but not high-intensity interval training improves immune function biomarkers in healthy young men. J Strength Cond Res 34(1): 249-256, 2020-The effects of endurance running methods on the hematological profile are still poorly known. This study aimed to compare the effects of 2 training regimens, high-intensity interval training (HIIT) and moderate-intensity continuous training (MCT), performed at the same external load, on hematological biomarkers in active young men. Sixteen men aged 18-20 years were randomly assigned to the HIIT or MCT group. Aerobic capacity and hematological biomarkers were assessed before and after 9 weeks of intervention. At baseline, aerobic and hematological parameters were similar in the 2 groups. After the intervention, no significant change was observed in maximal aerobic velocity or estimated VO2max in either group. Leukocyte (p < 0.01), lymphocyte (p < 0.05), neutrophil (p < 0.05), and monocyte (p < 0.01) counts showed significant improvements in response to the MCT compared with the HIIT intervention. The MCT intervention favored an increase in the number of immune cells, whereas the opposite occurred as a result of the HIIT intervention. These findings suggest that MCT interventions might be superior to HIIT regimens in improving immune function in active young men.
abstract_id: PUBMED:15673097
The effect of single and repeated bouts of prolonged cycling on leukocyte redistribution, neutrophil degranulation, IL-6, and plasma stress hormone responses. This study compared immunoendocrine responses to a single bout of prolonged cycling at different times of day and to a 2nd bout of cycling at the same intensity on the same day. In a counterbalanced design, 8 men participated in 3 experimental trials separated by at least 4 d. In the afternoon exercise-only trial, subjects cycled for 2 h at 60% VO2max starting at 14:00. In the other 2 trials, subjects performed either 2 bouts of cycling at 60% VO2max for 2 h (starting at 09:00 and 14:00) or a separate resting trial. The single bout of prolonged exercise performed in the afternoon induced a larger neutrophilia and monocytosis than the identical bout of morning exercise, possibly the result of reduced carbohydrate availability and the circadian rhythm in cortisol levels. The 2nd prolonged exercise bout caused greater immunoendocrine responses but lower plasma glucose levels and neutrophil function compared with the 1st bout.
abstract_id: PUBMED:29042269
The comparison of acute high-intensity interval exercise vs. continuous moderate-intensity exercise on plasma calprotectin and associated inflammatory mediators. Purpose: Calprotectin promotes the release of inflammatory mediators (e.g., monocyte chemoattractant protein-1 [MCP-1] and myeloperoxidase [MPO]) during the innate immune response as a mechanism to augment leukocyte chemotaxis and phagocytosis. Although plasma calprotectin is elevated with traditional continuous moderate-intensity exercise (CME) as an indicator of the inflammatory response, high-intensity interval exercise (HIIE) has been shown to attenuate systemic inflammation while providing similar improvements in cardiovascular health. Therefore, the purpose of this study was to compare plasma levels of calprotectin, MCP-1, and MPO between acute HIIE vs. CME.
Methods: Nine healthy males (24.67±3.27 yrs) were recruited to participate in HIIE and CME on a cycle ergometer. HIIE consisted of 10 repeated 60-s bouts of cycling at 90% of maximum watts (Wmax), each separated by 2 min of active recovery, whereas CME consisted of 28 min of cycling at 60% Wmax. Blood samples were collected prior to, immediately post, and 30 and 60 min into recovery following exercise.
Results: Acute HIIE elicited a lower elevation in calprotectin and MPO compared to CME. An increase in MCP-1 was observed across time in both exercise protocols. Furthermore, our analyses did not reveal any significant correlation in percent change (baseline to immediately following exercise) among calprotectin, MCP-1, and MPO in either HIIE or CME. However, a significant positive correlation was observed in the overall release of calprotectin and MPO across all four time points in both HIIE and CME. Conclusions: Our findings indicate that acute HIIE may potentially diminish the systemic release of inflammatory mediators (calprotectin and MPO) compared to CME.
abstract_id: PUBMED:35452395
Effect of high intensity interval training and moderate intensity continuous training on lymphoid, myeloid and inflammatory cells in kidney transplant recipients. Kidney transplantation can be seen as a double-edged sword. Transplantation helps to partially restore renal function; however, there are a number of health-related co-morbidities associated with it. Cardiovascular disease (CVD), malignancy and infections all limit patient and graft survival. Immunosuppressive medications alter innate and adaptive immunity and can result in immune dysfunction. Over-suppression of the immune system can result in infections, whereas under-suppression can result in graft rejection. Exercise is a known therapeutic intervention with many physiological benefits. Its effects on immune function are not well characterised and may include both positive and negative influences depending on the type, intensity, and duration of the exercise bout. High intensity interval training (HIIT) has become more popular because it improves traditional and inflammatory markers of cardiovascular (CV) risk in clinical and non-clinical populations. Though these improvements are similar to those seen with moderate intensity exercise, HIIT requires a shorter overall time commitment, and improvements can be seen even with a reduced exercise volume. The purpose of this study was to explore the physiological and immunological impact of 8 weeks of HIIT and moderate intensity continuous training (MICT) in kidney transplant recipients (KTRs). In addition, the natural variations of immune and inflammatory cells in KTRs and non-CKD controls over a longitudinal period are explored. Newly developed multi-colour flow cytometry methods were devised to identify and characterise immune cell populations. Twenty-six KTRs were randomised into one of two HIIT protocols or MICT: HIIT A (n=8; 4-, 2-, and 1-min intervals; 80-90% VO2peak), HIIT B (n=8, 4x4 min intervals; 80-90% VO2peak), or MICT (n=8, ~40 min; 50-60% VO2peak) for 24 supervised sessions on a stationary bike (approx. 3x/week over 8 ± 2 weeks). Blood samples were taken pre-training, mid-training, post-training and 3 months later. Novel multi-colour flow cytometric panels were developed to characterise lymphoid and myeloid cell populations from peripheral blood mononuclear cells. No changes were observed in circulating immune and inflammatory cells over the 8-week interventions. This feasibility study does not suggest that exercise programmes using HIIT and MICT protocols elicit adverse effects on immunity in KTRs. Therefore, such protocols may be immunologically safe for these patients. The inability of the participants to achieve the target exercise intensities may be due to physiological abnormalities in this population, which warrants further investigation.
abstract_id: PUBMED:10756131
Effect of single bouts of moderate and high intensity exercise and training on equine peripheral blood neutrophil function. The effects of single bouts of moderate (30 to 40 per cent VO(2)max) and high (115 per cent VO(2)max) intensity exercise on equine peripheral blood leucocyte function were evaluated by determining neutrophil phagocytosis and oxidative burst activity before and after treadmill exercise and training. Prior to all exercise tests, the possible effect of diurnal variation was evaluated in samples obtained from four resting horses. Subsequently eight horses underwent moderate and high intensity exercise protocols and then commenced a 17-week training period. High intensity exercise tests were repeated in week 10, after 7 weeks of endurance training, and in week 17, after a further 6 weeks of high intensity training. Time of sampling had a significant effect on neutrophil function for resting, untrained horses. Prior to training, moderate intensity exercise was associated with improved neutrophil phagocytosis and oxidative burst activity. High intensity exercise was associated with transient impairment of these responses. A similar reduction was not demonstrable following high intensity exercise in weeks 10 or 17 of training. Neutrophil function in week 17 was suppressed at all sampling times relative to results obtained in week 10, suggesting that high intensity training may have been associated with a general reduction in neutrophil function.
Answer: Prolonged cycling of moderate intensity does not appear to seriously alter the function of immune cells, which are the first line of defense in the immune system. A study examining the influence of 4 hours of cycling at 70% of the individual anaerobic threshold found that while there was a moderate acute phase response with increases in IL-6 and CRP, the activities of natural killer (NK) cells, neutrophils, and monocytes did not significantly change after exercise. This suggests that a single typical road cycling training session has only a moderate influence on the immune system and is considered safe from an immunological perspective (PUBMED:15728699).
Additionally, another study involving 45 minutes of moderate aerobic cycling at 50% VO2max in active collegiate men showed that inflammatory markers such as IL-6, IL-10, CRP, and total leukocyte counts increased following the exercise. However, all measurements remained within the normal population range, indicating that moderate cycling can induce systemic inflammation but is still safe from an immunological standpoint (PUBMED:21403796).
A systematic review also supports the notion that moderate exercise or vigorous exercise with appropriate resting periods can achieve maximum benefit without leading to chronic inflammation or injury, in contrast to intense long exercise which can lead to higher levels of inflammatory mediators (PUBMED:31992987).
Furthermore, in patients with inflammatory bowel disease, repeated prolonged moderate-intensity walking exercise did not have harmful effects on inflammatory markers, suggesting that such patients can safely perform this type of exercise (PUBMED:33211989).
In summary, prolonged cycling at moderate intensity does not seem to have a detrimental effect on immune cell function, and it is considered safe from an immunological point of view. |
Instruction: Is the cough-stress test necessary when placing the tension-free vaginal tape?
Abstracts:
abstract_id: PUBMED:24246400
Outcome of the laparoscopic two-team sling procedure, tension-free vaginal tape insertion, and transobturator tape insertion in women with recurrent stress urinary incontinence. Objective: Although the surgical treatment of primary stress urinary incontinence (SUI) has been well studied, the optimal treatment of persistent or recurrent SUI represents a significant challenge to the surgeon, and there are limited relevant published data. The aim of this study was to document outcome data for various surgical techniques used at our centre for the treatment of recurrent SUI, and to assess the immediate and long-term complications associated with these procedures.
Methods: This retrospective study assessed the outcome of the laparoscopic two-team sling procedure, tension-free vaginal tape (TVT) insertion, and transobturator tape (TOT) insertion in the treatment of recurrent SUI in women. Data collected included patient demographics, urodynamic data, postoperative subjective cure and objective cure (negative cough stress test), and intraoperative and postoperative complications.
Results: Forty-six women with recurrent SUI were included in the study: 24 had had laparoscopic two-team sling procedures, 15 had had TVT insertion, and 7 had had TOT insertion. For each procedure, objective cure rates were 91.7%, 73.3%, and 85.7%, respectively, and subjective cure rates were 79.2%, 60%, and 57.1%, respectively. In the laparoscopic two-team sling group, one woman developed an infected hematoma and one required surgery for a small bowel obstruction.
Conclusion: The laparoscopic two-team sling procedure or TVT or TOT insertion may be used in experienced hands for surgical management of patients with recurrent stress urinary incontinence. We found no statistically significant differences in outcomes between the three groups, possibly because of the small sample size. Larger sample size and longer follow-up within prospective randomized trials are warranted to identify any possible differences.
abstract_id: PUBMED:15684159
Is the cough-stress test necessary when placing the tension-free vaginal tape? Objective: To estimate whether the mode of anesthesia (and the resultant ability or inability to perform the cough-stress test) used during the tension-free vaginal tape (TVT) procedure affects postoperative continence.
Methods: A cohort of 170 women who underwent the TVT procedure without any other concomitant surgery completed the short form of the Urogenital Distress Inventory (UDI-6) to assess their continence status preoperatively and postoperatively. Chi-squared, t, and Mann-Whitney U tests were used to determine the association between these data and anesthesia type during univariate analysis.
Results: Both anesthesia groups showed significant improvement from their preoperative UDI-6 scores to their postoperative scores. However, when comparing the change from pre- to postoperative UDI-Stress Symptoms subscale scores between the 2 groups, we found a significant difference. Mean improvement in the local group was 58.3 (+/- 33.8) compared with 41.7 (+/- 39.4) in the general group (P = .02).
Conclusion: Women who undergo TVT show significant improvements in incontinence severity regardless of anesthesia type. However, greater improvements in stress incontinence, as measured by the UDI-Stress Symptoms subscale, are seen when the TVT is placed while using the cough-stress test under local analgesia.
Level Of Evidence: II-2.
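The univariate comparison reported above (mean UDI-Stress improvement 58.3 ± 33.8 under local vs 41.7 ± 39.4 under general anaesthesia, P = .02) used chi-squared, t, and Mann-Whitney U tests. Below is a minimal sketch of the two continuous-outcome tests; the scores are simulated from the reported means/SDs, and the group sizes are assumptions, since the patient-level data are not published:

```python
# Independent-samples t-test and Mann-Whitney U test on UDI-Stress
# improvement scores, as named in the abstract above. Arrays are simulated
# from the reported means/SDs; group sizes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
local_improvement   = rng.normal(58.3, 33.8, size=85)   # assumed n
general_improvement = rng.normal(41.7, 39.4, size=85)   # assumed n

t_stat, p_t = stats.ttest_ind(local_improvement, general_improvement)
u_stat, p_u = stats.mannwhitneyu(local_improvement, general_improvement)
print(f"t-test p = {p_t:.3f}; Mann-Whitney U p = {p_u:.3f}")
```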
abstract_id: PUBMED:28464312
Retropubic versus transobturator tension-free vaginal tape (TVT vs TVT-O): Five-year results of the Austrian randomized trial. Aims: To compare outcomes of the retropubic versus the transobturator tension-free vaginal tape (TVT vs TVT-O) at 5 years.
Methods: A total of 569 women undergoing surgery for primary stress incontinence were randomized to receive a retropubic or a transobturator tension-free vaginal tape (TVT or TVT-O). Follow-up at 5 years included clinical examination, urodynamic studies and quality of life. The primary outcome measure was continence, defined as a negative cough stress test at a volume of 300 mL. Secondary outcomes included urodynamic parameters, complications and quality of life. ClinicalTrials.gov (NCT 0041454).
Results: Three hundred and thirty-one patients (59%) were evaluated at 5 years (277 were seen, examined and completed questionnaires; 54 only completed questionnaires). No significant differences were seen in rates of a negative cough stress test (83% vs 76%), urodynamic parameters or complications. Quality of life improved significantly in both groups, without significant differences between the groups. Erosion rates were 5.2% and 4.5%, and reoperation rates were 4.1% and 3.2%, respectively.
Conclusions: At 5 years, subjective and objective results after TVT and TVT-O are stable and similar, without statistical significant differences between the procedures. Major long-term problems appear rare.
abstract_id: PUBMED:25072131
Efficacy of tension-free vaginal tape obturator and single-incision tension-free vaginal tape-Secur, hammock approach, in the treatment of stress urinary incontinence. Aim: Aim of the present study was to compare the efficacy of tension-free vaginal tape obturator and single-incision tension-free transvaginal tape Secur, hammock approach, in the treatment of stress urinary incontinence.
Methods: Clinical data of patients who received anti-incontinence surgery between June 2008 and July 2012 were retrospectively analyzed. Efficacy and early failure rate of the tension-free vaginal tape obturator and tension-free vaginal tape-Secur hammock approach were assessed by cough test and criteria of International Consultation on Incontinence Questionnaire-Short Form. Intraoperative and postoperative complications were also computed.
Results: There were 28 patients in the tension-free vaginal tape obturator group and 32 patients in the tension-free vaginal tape-Secur group. The mean operation time, intraoperative blood loss and inpatient days after surgery showed no significant difference between the two groups. The catheter retention time of the tension-free vaginal tape obturator group was longer than that of the tension-free vaginal tape-Secur group. The cure rates of the tension-free vaginal tape obturator and tension-free vaginal tape-Secur groups were 84% and 80%, respectively, and the recurrence rates were 14.3% and 16.7%, without significant difference. The scores of the International Consultation on Incontinence Questionnaire-Short Form decreased after surgery in both groups, but there was no difference between the two groups. There were no serious complications in either group.
Conclusion: Our study demonstrated that both tension-free vaginal tape obturator and tension-free vaginal tape-Secur can achieve a cure rate over 80% while with little complications, showing both methods are reliable to treat stress urinary incontinence.
abstract_id: PUBMED:30238448
Long-term efficacy follow-up of tension-free vaginal tape obturator in patients with stress urinary incontinence with or without cystocele. Objective: To assess the long-term outcomes of tension-free vaginal tape obturator (inside-out) (TVTO) with or without anterior colporrhaphy.
Methods: The present prospective follow-up observational study included patients attending the 2nd Department of Obstetrics and Gynecology, Aretaieio Hospital, University of Athens, Greece, between April 3 and December 20, 2017, for follow-up care after treatment for urodynamic stress urinary incontinence (USUI) with or without cystocele. Patients without cystocele had been treated with TVTO only; those with cystocele underwent TVTO with anterior colporrhaphy. The primary outcome was the objective cure rate assessed by the cough stress test during filling cystometry.
Results: Follow-up data were available for 70 patients who underwent TVTO only and 38 who underwent TVTO and anterior colporrhaphy. The mean follow-up period was 13 years. Objective cure was achieved for 57 (81%) patients in the TVTO-only group and 32 (84%) patients in the TVTO and anterior colporrhaphy group. Regarding cystocele management, objective cure was recorded for 35 (92%) patients.
Conclusion: At 13-year follow-up, anterior colporrhaphy demonstrated a cure rate of 92% in the management of cystocele, and 84% in the management of cystocele and USUI when combined with TVTO. TVTO alone for the management of USUI had an objective cure rate of 81%.
abstract_id: PUBMED:15580418
Tension free vaginal tape: is the intra-operative cough test necessary? The tension-free vaginal tape (TVT) procedure is recognised as an effective treatment for genuine stress incontinence. It was first described using local anaesthesia, with an intra-operative cough test helping to correctly position the tape. Many patients prefer general anaesthesia, and often patients with genuine stress incontinence do not leak when supine. The aim of this study was to compare the outcome of TVTs performed under general anaesthesia with those performed under spinal anaesthesia. A retrospective analysis of 105 patients, all of whom had urodynamically proven genuine stress incontinence and underwent the TVT procedure, was performed: 52 under spinal anaesthesia and 53 under general anaesthesia. The primary and secondary outcome measures were the success or failure of the procedure and the complication rate, respectively. There was no significant difference in outcome or complication rate between the two groups. The type of anaesthetic used does not influence the outcome, and we question the necessity of an intra-operative cough test.
abstract_id: PUBMED:14677000
How does tension-free vaginal tape correct stress incontinence? investigation by perineal ultrasound. Forty patients who underwent a single tension-free vaginal tape procedure were evaluated by perineal ultrasound both pre- and postoperatively in a prospective observational clinical study. The positions of the tape, bladder neck and urethra were sonographically documented at rest and during Valsalva maneuvers. During Valsalva the tape rotated towards the symphysis in all patients. Postoperative urethral angulation could be demonstrated in 36 of 40 patients. Bladder neck mobility remained unchanged after the tension-free vaginal tape procedure, and 36 of the 40 were dry according to patient questionnaires. Postoperative cough test was negative in all patients. Two points seem to be important for the functioning of the tension-free vaginal tape: a dynamic kinking of the urethra during stress, and the movement of the tape against the symphysis, compressing the tissue between the tape and the symphysis. Mobility of the bladder neck is unaffected by the single tension-free vaginal tape procedure.
abstract_id: PUBMED:28453959
Uroflowmetric changes, success rate and complications following Tension-free Vaginal Tape Obturator (TVT-O) operation in obese females. Objective: The goal of this study was to evaluate the outcome of Tension-free Vaginal Tape Obturator (TVT-O) operation in the treatment of urodynamic stress incontinence (USI) in obese females, with respect to uroflowmetric changes, success rate and postoperative complications.
Methods: This prospective observational study included 26 patients with USI at the Obstetrics & Gynecology department, Cairo University hospital, during the year 2015. The participants had a body mass index (BMI) ≥ 30. Patients underwent the TVT-O operation. Follow-up of the patients was performed by cough test and uroflowmetry after one week, one month, three months and six months. Postoperative complications such as groin pain, a sense of incomplete emptying, the need to strain to complete micturition and urinary tract infection were recorded. Comparisons between groups were made using the Chi-square and Phi-Cramer tests for categorical variables.
Results: The mean age of the subjects was 43.58±9.01 years. The mean BMI was 33.4±2.1. The success rate of the TVT-O operation was 21 out of 26 patients (≈81%). The maximum flow rate was normal in 88% of patients at week one and in 100% of patients at months three and six (p=0.101 and 0.101). Postoperative groin pain was the main complaint during the first week after the operation and decreased significantly from week one to the first month postoperatively (84.62% and 65.38%, P=0.041).
Conclusion: TVT-O operation showed a high success rate in treatment of USI in obese patients without affecting the voiding function of the bladder as proven by the uroflowmetry. The main postoperative complaint was the groin pain which significantly improved after one month.
abstract_id: PUBMED:11236338
Multicenter study on the effectiveness of Tension-Free Vaginal Tape (TVT) in the treatment of stress urinary incontinence Background: The aim of the study was to evaluate the efficacy of Tension-Free Vaginal Tape (TVT) for the surgical treatment of stress urinary incontinence.
Methods: The design was an open multicenter study including six Italian hospitals. Between January 1998 and November 1999, 429 stress-incontinent women were enrolled in the study. Before surgery, subjects had been evaluated through their history, urine culture, physical examination, cotton swab test, cough provocation test and urodynamic evaluation including uroflowmetry, water cystometry and urethral profilometry. The inconvenience caused by incontinence was quantified using a 10-grade visual analogue scale (VAS). Postoperatively, patients were assessed after 6, 12 and 24 months.
Results: The mean age of the patients considered was 57 years (range 31-83) and 78 of them had undergone a previous operation for the treatment of stress urinary incontinence or genital prolapse. Out of the 429 patients, 371 were followed for a minimum of 6 months, 11 were lost to follow-up and 47 had been operated recently. After surgery 355 subjects (96%) were subjectively cured and no leakage of urine was observed in 97% of the patients during the postoperative cough provocation test.
Conclusions: This study, carried out on a large number of patients, demonstrates that TVT is a safe and effective procedure for the treatment of stress urinary incontinence.
abstract_id: PUBMED:20440474
A randomized trial comparing tension-free vaginal tape with tension-free vaginal tape-obturator: 36-month results. Introduction And Hypothesis: This is a randomized multicenter study comparing two mid-urethra tape procedures, the tension-free vaginal tape (TVT) with the tension-free vaginal tape-obturator (TVT-O) in terms of cure rate and complication rate.
Methods: Seven Finnish hospitals participated. Power calculations required 130 women in each group to detect a 10% difference in cure rate. A total of 267 underwent the allocated operation. Follow-up was scheduled at 2, 12, 36 and 60 months. A cough stress test was used as an objective outcome measure. Subjective outcome was assessed by five different condition-specific quality of life questionnaires.
Results: At 36 months of follow-up, 96% of the patients were evaluated. The objective cure rate was 94.6% in the TVT group and 89.5% in the TVT-O group (p = 0.131). Subjective cure rates showed no significant difference between the groups.
Conclusion: The TVT and the TVT-O are equally effective in the treatment of stress urinary incontinence after 36-month follow-up with no difference in complication rates.
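The sample-size statement in the methods above ("130 women in each group to detect a 10% difference in cure rate") can be sanity-checked with a standard two-proportion power calculation. The baseline cure rates, alpha and power below are assumptions for illustration, since the abstract does not report them:

```python
# Two-proportion sample-size check for the trial above. Only the 10%
# difference is stated in the abstract; the 85% vs 75% rates, 5% alpha and
# 80% power are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.85, 0.75)          # Cohen's h for a 10% gap
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                 power=0.80, alternative='two-sided')
print(f"~{n:.0f} per group")  # ~124 under these assumptions, close to the reported 130
```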
Answer: The necessity of the cough stress test when placing tension-free vaginal tape (TVT) is debated in the literature. Some studies suggest that the cough stress test is not essential. For instance, a study comparing the outcome of TVT procedures performed under general anesthesia with those performed under spinal anesthesia found no significant difference in outcome or complication rate between the two groups, questioning the necessity of an intra-operative cough test (PUBMED:15580418). Similarly, another study found significant improvements in incontinence severity regardless of anesthesia type, indicating that the cough stress test under local analgesia may not be strictly necessary (PUBMED:15684159).
On the other hand, the same study (PUBMED:15684159) also reported greater improvements in stress incontinence when the TVT was placed using the cough-stress test under local analgesia, suggesting that the test may have some benefits. Additionally, another study demonstrated that the cough stress test is used as an objective outcome measure, with high objective cure rates reported in both TVT and TVT-O groups (PUBMED:20440474).
In summary, while some evidence suggests that the cough stress test may not be strictly necessary for the success of TVT procedures (PUBMED:15580418), other studies indicate that it can contribute to better outcomes in terms of stress incontinence improvement (PUBMED:15684159) and is used as an objective measure of success (PUBMED:20440474). Therefore, the decision to use the cough stress test may depend on the surgeon's preference, the type of anesthesia used, and the specific circumstances of the procedure. |
Instruction: Is MRI more accurate than CT in estimating the real size of adrenal tumours?
Abstracts:
abstract_id: PUBMED:11504521
Is MRI more accurate than CT in estimating the real size of adrenal tumours? Background: The size of adrenal tumour plays an important role in the indications for surgical excision of non-functioning adrenal tumours and in selecting the best surgical approach. Computed tomography (CT) has been reported to underestimate the real size of adrenal lesions. The accuracy of magnetic resonance imaging (MRI) in predicting the true tumour size has not been previously investigated. The present retrospective study investigates the accuracy of MRI and CT in the pre-operative determination of true adrenal tumour size.
Methods: The medical records of 65 patients who underwent adrenalectomy for an adrenal mass were reviewed. The size of adrenal tumours as determined by pre-operative MRI and/or CT was compared with the "true" histopathological size. The impact of histological diagnosis on size estimation was also investigated.
Results: The median age at diagnosis was 42 years (range 1-82 years), and the majority of patients were female (60%). Five patients had bilateral adrenalectomy, thus giving rise to 70 adrenal specimens. The histopathological size of the adrenal tumours ranged from 0.9 to 26 cm with a mean of 5.96 cm and a median of 4.70 cm. For tumours larger than 3 cm, MRI significantly underestimated the real tumour size by 20% (P<0.001). CT also underestimated the size of such tumours, by 18.1% (P<0.003). Adrenal phaeochromocytomas were consistently underestimated by both modalities.
Conclusions: MRI and CT significantly underestimated the true size of adrenal tumours larger than 3 cm by 20% and 18%, respectively. Surgeons and endocrinologists should interpret the pre-operative size of adrenal lesions with caution.
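The 20% and 18.1% figures above are percentage underestimations relative to the histopathological ("true") size. The abstract does not spell out the formula, but the conventional computation is:

```latex
\[
\text{underestimation (\%)} \;=\; \frac{s_{\text{histo}} - s_{\text{imaging}}}{s_{\text{histo}}} \times 100
\]
% Example: a tumour measuring 5.0 cm at pathology but 4.0 cm on MRI is
% underestimated by (5.0 - 4.0)/5.0 x 100 = 20%.
```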
abstract_id: PUBMED:28225653
Update on CT and MRI of Adrenal Nodules. Objective: The objective of this article is to review the current role of CT and MRI for the characterization of adrenal nodules.
Conclusion: Unenhanced CT and chemical-shift MRI have high specificity for lipid-rich adenomas. Dual-energy CT provides comparable to slightly lower sensitivity for the diagnosis of lipid-rich adenomas but may improve characterization of lipid-poor adenomas. Nonadenomas containing intracellular lipid pose an imaging challenge; however, nonadenomas that contain lipid may be potentially diagnosed using other imaging features. Multiphase adrenal washout CT can be used to differentiate lipid-poor adenomas from metastases but is limited for the diagnosis of hypervascular malignancies and pheochromocytoma.
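The multiphase adrenal washout CT mentioned in the conclusion rests on two standard indices, absolute percentage washout (APW) and relative percentage washout (RPW), computed from unenhanced, portal-venous, and delayed attenuation values. A minimal sketch follows; the APW ≥ 60% and RPW ≥ 40% adenoma cut-offs are commonly cited literature conventions, not values stated in this abstract:

```python
# Standard adrenal washout CT indices. The >= 60% / >= 40% cut-offs are the
# commonly cited adenoma thresholds, not values from this abstract.
def washout_indices(unenhanced_hu, enhanced_hu, delayed_hu):
    apw = (enhanced_hu - delayed_hu) / (enhanced_hu - unenhanced_hu) * 100
    rpw = (enhanced_hu - delayed_hu) / enhanced_hu * 100
    return apw, rpw

apw, rpw = washout_indices(unenhanced_hu=20, enhanced_hu=90, delayed_hu=40)
print(f"APW = {apw:.1f}%, RPW = {rpw:.1f}%")   # 71.4% and 55.6%
if apw >= 60 or rpw >= 40:
    print("washout pattern favours adenoma")
```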
abstract_id: PUBMED:36627700
Same-day comparative protocol PET/CT-PET/MRI [68 Ga]Ga-DOTA-TOC in paragangliomas and pheochromocytomas: an approach to personalized medicine. Background: PET/MRI is an emerging imaging modality which enables the evaluation and quantification of biochemical processes in tissues, complemented with accurate anatomical information and low radiation exposure. In the framework of theragnosis, PET/MRI is of special interest due to its ability to delineate small lesions, adequately quantify them, and therefore to plan targeted therapies. The aim of this study was to validate the diagnostic performance of [68 Ga]Ga-DOTA-TOC PET/MRI compared to PET/CT in advanced disease paragangliomas and pheochromocytomas (PGGLs) to assess in which clinical settings, PET/MRI may have a greater diagnostic yield.
Methods: We performed a same-day protocol with consecutive acquisition of a PET/CT and a PET/MRI after a single [68 Ga]Ga-DOTA-TOC injection in 25 patients. Intermodality agreement, Krenning score (KS), maximum standardized uptake value (SUVmax), target-to-liver ratio (TLR), clinical setting, location, and size were assessed.
Results: The diagnostic accuracy of PET/MRI increased by 14.6% compared to PET/CT, especially for bone and liver locations (mean size of new lesions, 3.73 mm). PET/MRI revealed a higher overall lesion uptake than PET/CT (TLR 4.12 vs 2.44) and led to an upward revision of the KS in up to 60% of patients. The KS changed in 30.4% of the evaluated lesions (mean size 11.89 mm); in 18.4% of the lesions it increased from KS 2 on PET/CT to KS ≥ 3 on PET/MRI, and 24.96% of the lesions per patient with multifocal disease displayed a KS ≥ 3 on PET/MRI despite being undetected or showing a lower KS on PET/CT. In 12% of patients, PET/MRI modified clinical management.
Conclusions: PET/MRI showed minor advantages over conventional PET/CT in the detection of new lesions but increased the measured intensity of SSTR expression in a significant number of them, opening the door to selecting which patients and clinical settings may benefit from PET/MRI.
abstract_id: PUBMED:10874978
Ganglioneuromas in childhood: CT and MRI characteristics Purpose: The aim of this study was to demonstrate the typical appearance of ganglioneuromas on computed tomography (CT) and magnetic resonance imaging (MRI).
Material And Methods: Retrospective analysis of diagnostic imaging (9 CT, 6 MRI) in 9 children aged 3 to 15 years with the histological diagnosis of ganglioneuroma.
Results: Imaging showed large (max. 13.4 cm in diameter), round or oval tumors with sharp delineation. The sites of the tumors were the retroperitoneum (5), the mediastinum (3), and the adrenal gland (1). Intraspinal tumor involvement occurred in 4 cases. On comparing CT with MRI, MRI was more accurate in defining intraspinal involvement. The ganglioneuromas appeared hypodense on the unenhanced CT scan and showed moderate enhancement after administration of contrast media. In five patients, tumor calcifications with a disseminated, sprinkled pattern were seen on CT. On T1-weighted MRI scans the tumors were homogeneous and hypointense; after gadolinium administration, marked enhancement was evident. On T2-weighted scans the tumors were hyperintense.
Conclusion: At the time of diagnosis, ganglioneuromas are generally large tumors that are well depicted by CT and MRI. Their appearance on CT and MRI provides diagnostic clues. However, MRI is the modality of choice because of its superiority in documenting intraspinal tumor extension.
abstract_id: PUBMED:27317224
Chemical-shift MRI versus washout CT for characterizing adrenal incidentalomas. Objective: To compare the accuracy of computed tomography (CT) and magnetic resonance imaging (MRI) in characterizing adrenal masses.
Materials And Methods: A total of 45 adrenal masses in 38 patients underwent unenhanced CT, enhanced CT, and chemical-shift MRI. Sensitivities and accuracies using the lesion attenuation values, absolute or relative percentage washout for CT, and adrenal-to-spleen ratio or signal intensity index for MRI were calculated. Follow-up or histopathology was used as standard reference.
Results: A total of 15 lipid-rich adenomas, 6 lipid-poor adenomas, and 24 nonadenomas were obtained. The sensitivities for adenoma on MRI versus CT were 81% and 95%, respectively. The specificities were 100%.
Conclusion: CT is superior to MRI in characterizing adenomas.
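The chemical-shift MRI parameters named in the methods above, the signal intensity index (SII) and the adrenal-to-spleen ratio (ASR), are derived from in-phase and opposed-phase signal intensities. A hedged sketch using the conventional formulas follows; the cut-offs in the comments are common literature values, not thresholds reported in this abstract:

```python
# Chemical-shift MRI indices for adrenal lesion characterization.
# SII > ~16.5% and ASR < ~0.71 are commonly cited adenoma cut-offs.
def signal_intensity_index(adrenal_in, adrenal_out):
    """Percentage signal drop from in-phase to opposed-phase imaging."""
    return (adrenal_in - adrenal_out) / adrenal_in * 100

def adrenal_to_spleen_ratio(adrenal_in, adrenal_out, spleen_in, spleen_out):
    """Spleen-normalized opposed-phase/in-phase signal ratio."""
    return (adrenal_out / spleen_out) / (adrenal_in / spleen_in)

sii = signal_intensity_index(adrenal_in=300, adrenal_out=180)
asr = adrenal_to_spleen_ratio(300, 180, 250, 240)
print(f"SII = {sii:.1f}%, ASR = {asr:.2f}")  # 40.0% and 0.62: lipid-rich adenoma pattern
```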
abstract_id: PUBMED:36525050
Lexicon for adrenal terms at CT and MRI: a consensus of the Society of Abdominal Radiology adrenal neoplasm disease-focused panel. Purpose: Substantial variation in imaging terms used to describe the adrenal gland and adrenal findings leads to ambiguity and uncertainty in radiology reports and subsequently their understanding by referring clinicians. The purpose of this study was to develop a standardized lexicon to describe adrenal imaging findings at CT and MRI.
Methods: Fourteen members of the Society of Abdominal Radiology adrenal neoplasm disease-focused panel (SAR-DFP) including one endocrine surgeon participated to develop an adrenal lexicon using a modified Delphi process to reach consensus. Five radiologists prepared a preliminary list of 35 imaging terms that was sent to the full group as an online survey (19 general imaging terms, 9 specific to CT, and 7 specific to MRI). In the first round, members voted on terms to be included and proposed definitions; subsequent two rounds were used to achieve consensus on definitions (defined as ≥ 80% agreement).
Results: Consensus for inclusion was reached on 33/35 terms with two terms excluded (anterior limb and normal adrenal size measurements). Greater than 80% consensus was reached on the definitions for 15 terms following the first round, with subsequent consensus achieved for the definitions of the remaining 18 terms following two additional rounds. No included term had remaining disagreement.
Conclusion: Expert consensus produced a standardized lexicon for reporting adrenal findings at CT and MRI. The use of this consensus lexicon should improve radiology report clarity, standardize clinical and research terminology, and reduce uncertainty for referring providers when adrenal findings are present.
abstract_id: PUBMED:27011100
Comparison of Quantitative MRI and CT Washout Analysis for Differentiation of Adrenal Pheochromocytoma From Adrenal Adenoma. Objective: The purpose of this study was to use quantitative analysis to assess MRI and washout CT in the diagnosis of pheochromocytoma versus adenoma.
Materials And Methods: Thirty-four pheochromocytomas (washout CT, 5; MRI, 24; both MRI and CT, 5) resected between 2003 and 2014 were compared with 39 consecutive adenomas (washout CT, 9; MRI, 29; both MRI and CT, 1). A blinded radiologist measured unenhanced attenuation, 70-second peak CT enhancement, 15-minute relative and absolute percentage CT washout, chemical-shift signal intensity index, adrenal-to-spleen signal intensity ratio, T2-weighted signal intensity ratio, and AUC of the contrast-enhanced MRI curve. Comparisons between groups were performed with multivariate and ROC analyses.
Results: There was no difference in age or sex between the groups (p > 0.05). For CT, pheochromocytomas were larger (4.2 ± 2.5 [SD] vs 2.3 ± 0.9 cm; p = 0.02) and had higher unenhanced attenuation (35.7 ± 6.8 HU [range, 24-48 HU] vs 14.0 ± 20.9 HU [range, -19 to 52 HU]; p = 0.002), greater 70-second peak CT enhancement (92.8 ± 31.1 HU [range, 41.0-143.1 HU] vs 82.6 ± 29.9 HU [range, 50.0-139.0 HU]; p = 0.01), lower relative CT washout (21.7 ± 24.7 [range, -29.3 to 53.7] vs 65.3 ± 22.3 [range, 32.9-115.3]; p = 0.002), and lower absolute CT washout (31.9 ± 42.8 [range, -70.6 to 70.2] vs 76.9 ± 10.3 [range, 60.3-89.6]; p = 0.001). Thirty percent (3/10) of pheochromocytomas had absolute CT washout in the adenoma range (> 60%). For MRI, pheochromocytomas were larger (5.0 ± 4.2 vs 2.0 ± 0.7 cm; p = 0.003) and had a lower chemical-shift signal intensity index and higher adrenal-to-spleen signal intensity ratio (-3.5% ± 14.3% [range, -56.3% to 12.2%] and 1.1 ± 0.1 [range, 0.9-1.3] vs 47.3% ± 27.8% [range, -9.4% to 86.0%] and 0.51 ± 0.27 [range, 0.13-1.1]) (p < 0.001), and a higher T2-weighted signal intensity ratio (4.4 ± 2.4 vs 1.8 ± 0.8; p < 0.001). There was no statistically significant difference in contrast-enhanced MRI AUC (288.9 ± 265.3 vs 276.2 ± 129.9 seconds; p = 0.96). The ROC AUC for the T2-weighted signal intensity ratio was 0.91, with values greater than 3.8 diagnostic of pheochromocytoma.
Conclusion: In this study, the presence of intracellular lipid on unenhanced CT or chemical-shift MR images was diagnostic of adrenal adenoma. Elevated T2-weighted signal intensity ratio was specific for pheochromocytoma but lacked sensitivity. There was overlap in all other MRI and CT washout parameters.
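The discriminators reported above (intracellular lipid on chemical-shift imaging diagnostic of adenoma; T2-weighted signal intensity ratio > 3.8 specific for pheochromocytoma) can be summarised as a simple decision rule. This is only an illustration of the abstract's thresholds, not a validated classifier; the 16.5% SII cut-off is a common literature value, not from this study:

```python
# Toy decision rule from the thresholds discussed above.
def classify_adrenal(sii_percent, t2_si_ratio):
    if sii_percent > 16.5:          # signal drop -> intracellular lipid
        return "adenoma"
    if t2_si_ratio > 3.8:           # ROC-derived threshold from the abstract
        return "pheochromocytoma"
    return "indeterminate: consider washout CT or biochemical work-up"

print(classify_adrenal(sii_percent=-3.5, t2_si_ratio=4.4))  # mean pheo values -> pheochromocytoma
print(classify_adrenal(sii_percent=47.3, t2_si_ratio=1.8))  # mean adenoma values -> adenoma
```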
abstract_id: PUBMED:30620676
Can Texture Analysis Be Used to Distinguish Benign From Malignant Adrenal Nodules on Unenhanced CT, Contrast-Enhanced CT, or In-Phase and Opposed-Phase MRI? Objective: The purpose of this study is to determine whether second-order texture analysis can be used to distinguish lipid-poor adenomas from malignant adrenal nodules on unenhanced CT, contrast-enhanced CT (CECT), and chemical-shift MRI.
Materials And Methods: In this retrospective study, 23 adrenal nodules (15 lipid-poor adenomas and eight adrenal malignancies) in 20 patients (nine female patients and 11 male patients; mean age, 59 years [range, 15-80 years]) were assessed. All patients underwent unenhanced CT, CECT, and chemical-shift MRI. Twenty-one second-order texture features from the gray-level co-occurrence matrix and gray-level run-length matrix were calculated in 3D. The mean values of the 21 texture features and four imaging features (lesion size, unenhanced CT attenuation, CECT attenuation, and signal intensity index) were compared using a t test. The diagnostic performance of texture analysis versus imaging features was also compared using AUC values. Multivariate logistic regression models to predict malignancy were constructed for texture analysis and imaging features.
Results: Lesion size, unenhanced CT attenuation, and the signal intensity index showed significant differences between benign and malignant adrenal nodules. No significant difference was seen for CECT attenuation. Eighteen of 21 CECT texture features and nine of 21 unenhanced CT texture features revealed significant differences between benign and malignant adrenal nodules. CECT texture features (mean AUC value, 0.80) performed better than CECT attenuation (mean AUC value, 0.60). Multivariate logistic regression models showed that CECT texture features, chemical-shift MRI texture features, and imaging features were predictive of malignancy.
Conclusion: Texture analysis has a potential role in distinguishing benign from malignant adrenal nodules on CECT and may decrease the need for additional imaging studies in the workup of incidentally discovered adrenal nodules.
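As an illustration of the second-order texture features named above, the sketch below computes a few gray-level co-occurrence matrix (GLCM) statistics with scikit-image. This is a 2D toy example on synthetic data, not the authors' 3D pipeline; the ROI, gray-level quantization, and pixel distance are all assumptions made for illustration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

# Hypothetical ROI from a CT slice, quantized to 32 gray levels.
rng = np.random.default_rng(0)
roi = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix over four in-plane directions at a pixel distance of 1.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

# A few classic second-order features, averaged over the four directions.
for feature in ("contrast", "homogeneity", "energy", "correlation"):
    print(feature, graycoprops(glcm, feature).mean())
```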
abstract_id: PUBMED:24511014
(18)F-DOPA PET/CT and MRI: description of 12 histologically-verified pheochromocytomas. Aim: To describe the (18)F-fluorodihydroxyphenylalanine ((18)F-DOPA) positron emission tomography (PET) and magnetic resonance imaging (MRI) appearance of pheochromocytomas, with a focus on the presence or absence of typical MRI features.
Materials And Methods: Eleven patients with histologically-verified pheochromocytoma [sporadic (n=9), multiple endocrine neoplasia (MEN) 2A syndrome (n=2)] were enrolled retrospectively. All patients underwent an MRI examination of the upper abdomen. Nine out of 11 patients underwent (18)F-DOPA PET/CT, and the remaining two patients underwent independent PET and computed tomography (CT) examinations. (18)F-DOPA-PET/CT examinations were considered positive when an increased tracer accumulation in the adrenal region, as shown on CT images, was observed. When an adrenal mass was detected on MRI, the T1 and T2 signal intensity and contrast enhancement pattern were recorded. Based on MR characteristics, the lesions were divided into typical and atypical.
Results: Ten out of 11 patients had one lesion, while one patient had two lesions. All pheochromocytomas were detected by both PET/CT and MRI. On (18)F-DOPA scans, all lesions showed an increased tracer accumulation, with a mean maximum standardized uptake value (SUVmax) of 13.7±5.75. Eight out of 12 pheochromocytomas exhibited typical MRI features, with intermediate signal intensity on T1-weighted images in-phase, absence of signal drop on T1-weighted images out-of-phase, high signal intensity on T2-weighted images, and clear contrast enhancement in the arterial phase. The remaining four lesions exhibited atypical MRI features, namely absence of one of the listed criteria.
Conclusion: In the assessment of pheochromocytoma, the combination of (18)F-DOPA PET with MRI is superior to MRI alone. (18)F-DOPA PET/MRI may yield a higher diagnostic confidence for the detection of pheochromocytoma than (18)F-DOPA PET/CT.
abstract_id: PUBMED:9230858
How accurate is computed tomography in predicting the real size of adrenal tumors? A retrospective study. Background: The ability to accurately assess tumor size is an important consideration during the preoperative evaluation of adrenal tumors, particularly solid nonfunctioning masses (incidentalomas or adrenalomas). Does the histological size of the adrenal tumor correspond to the preoperative computed tomography (CT)-estimated size?
Objective: To evaluate the accuracy of CT in predicting the real size of adrenal tumors.
Design: Retrospective review of all clinical records of patients who underwent adrenalectomy from 1984 through 1995. The mean tumor size reported from CT examinations was compared with the corresponding size obtained from the pathology reports and the statistical difference was evaluated.
Setting: University and private hospitals in Athens, Greece.
Patients: Seventy-six patients who underwent adrenalectomy for various adrenal diseases and who met strict entry criteria.
Results: For the entire population, the mean diameter of the tumors was estimated (CT reports) at 4.64 cm, but the real value (pathology reports) was 5.96 cm (P < .001). The underestimation held true for all of the studied subgroups that were defined by the different proposed cutoffs for malignancy. Three patients were incidentally found to have adrenal cancer, with the tumors measuring from 2.6 to 2.9 cm on CT. In addition, 4 pheochromocytomas were clinically and laboratory "silent" at the time of their discovery. The regression line (y = 0.85 + 1.09x) relating CT-estimated and histological tumor size was linear (r = 0.90, P < .001).
Conclusions: Computed tomography underestimates the real size of adrenal tumors. The CT-estimated value should be corrected accordingly to obtain the real size. The size of an adrenal tumor, even when corrected, cannot predict the tumor's clinical behavior in many cases. Surgeons should always cautiously interpret the proposed diagnostic cutoffs, especially when considering surgical or conservative management of small nonfunctioning adrenal tumors.
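The abstract's regression line offers a direct way to apply its conclusion that CT-estimated sizes should be corrected. A minimal sketch follows (sizes in cm; the function name is illustrative):

```python
def corrected_adrenal_tumor_size_cm(ct_size_cm):
    # Regression line reported in the abstract: y = 0.85 + 1.09x (r = 0.90),
    # where x is the CT-estimated diameter and y the histological diameter, in cm.
    return 0.85 + 1.09 * ct_size_cm

# The cohort's mean CT estimate of 4.64 cm maps to ~5.91 cm,
# close to the observed mean histological size of 5.96 cm.
print(corrected_adrenal_tumor_size_cm(4.64))
```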
Answer: The comparison of MRI and CT in estimating the real size of adrenal tumors has been investigated in several studies. According to the retrospective study by PUBMED:11504521, both MRI and CT significantly underestimated the true size of adrenal tumors larger than 3 cm by 20% and 18%, respectively. This suggests that neither MRI nor CT is clearly more accurate than the other in estimating the real size of larger adrenal tumors, as both modalities consistently underestimated tumor size.
Another study, PUBMED:9230858, also found that CT underestimated the real size of adrenal tumors, with a mean diameter of tumors estimated at 4.64 cm by CT, while the actual size was 5.96 cm according to pathology reports. This further supports the notion that CT may not be entirely accurate in assessing the size of adrenal tumors.
However, it is important to note that the accuracy of imaging modalities can also depend on the type of adrenal lesion being evaluated. For instance, PUBMED:27011100 found that the presence of intracellular lipid on unenhanced CT or chemical-shift MR images was diagnostic of adrenal adenoma, and elevated T2-weighted signal intensity ratio was specific for pheochromocytoma but lacked sensitivity. This indicates that certain characteristics of adrenal tumors may be better evaluated by specific imaging modalities.
In summary, based on the available abstracts, it appears that both MRI and CT have limitations in accurately estimating the real size of adrenal tumors, particularly for tumors larger than 3 cm. Neither modality is consistently more accurate than the other, and the choice of imaging may depend on the specific characteristics of the adrenal lesion in question. |
Instruction: Does competition work as a motivating factor in e-learning?
Abstracts:
abstract_id: PUBMED:34501748
Associations between Depression, Anxiety, Fatigue, and Learning Motivating Factors in e-Learning-Based Computer Programming Education. Quarantines imposed due to COVID-19 have forced the rapid implementation of e-learning but have also increased rates of anxiety, depression, and fatigue, which relate to dramatically diminished e-learning motivation. It was therefore deemed important to identify e-learning motivating factors related to mental health. Furthermore, because computer programming skills are among the core competencies that professionals are expected to possess in the era of rapid technology development, it was also considered important to identify the factors related to computer programming learning. This study therefore applied the Learning Motivating Factors Questionnaire, the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder Scale-7 (GAD-7), and the Multidimensional Fatigue Inventory-20 (MFI-20) instruments. The sample consisted of 444 e-learners, including 189 computer programming e-learners. The results revealed that higher scores for individual attitude and expectation, challenging goals, clear direction, and social pressure and competition varied significantly across depression categories. The scores for challenging goals and for social pressure and competition varied significantly across anxiety categories. The scores for individual attitude and expectation, challenging goals, and social pressure and competition varied significantly across general fatigue categories. In the group of computer programming e-learners, challenging goals predicted decreased anxiety; clear direction and challenging goals predicted decreased depression; individual attitude and expectation predicted diminished general fatigue; and challenging goals and punishment predicted diminished mental fatigue. Challenging goals significantly predicted lower mental fatigue, and mental fatigue significantly predicted depression and anxiety in both sample groups.
abstract_id: PUBMED:34573226
Computer Programming E-Learners' Personality Traits, Self-Reported Cognitive Abilities, and Learning Motivating Factors. Educational systems around the world encourage students to engage in programming activities, but programming learning is one of the most challenging learning tasks. Thus, it was significant to explore the factors related to programming learning. This study aimed to identify computer programming e-learners' personality traits, self-reported cognitive abilities and learning motivating factors in comparison with other e-learners. We applied a learning motivating factors questionnaire, the Big Five Inventory-2, and the SRMCA instruments. The sample consisted of 444 e-learners, including 189 computer programming e-learners, the mean age was 25.19 years. It was found that computer programming e-learners demonstrated significantly lower scores of extraversion, and significantly lower scores of motivating factors of individual attitude and expectation, reward and recognition, and punishment. No significant differences were found in the scores of self-reported cognitive abilities between the groups. In the group of computer programming e-learners, extraversion was a significant predictor of individual attitude and expectation; conscientiousness and extraversion were significant predictors of challenging goals; extraversion and agreeableness were significant predictors of clear direction; open-mindedness was a significant predictor of a diminished motivating factor of punishment; negative emotionality was a significant predictor of social pressure and competition; comprehension-knowledge was a significant predictor of individual attitude and expectation; fluid reasoning and comprehension-knowledge were significant predictors of challenging goals; comprehension-knowledge was a significant predictor of clear direction; and visual processing was a significant predictor of social pressure and competition. The SEM analysis demonstrated that personality traits (namely, extraversion, conscientiousness, and reverted negative emotionality) statistically significantly predict learning motivating factors (namely, individual attitude and expectation, and clear direction), but the impact of self-reported cognitive abilities in the model was negligible in both groups of participants and non-participants of e-learning based computer programming courses; χ² (34) = 51.992, p = 0.025; CFI = 0.982; TLI = 0.970; NFI = 0.950; RMSEA = 0.051 [0.019-0.078]; SRMR = 0.038. However, as this study applied self-reported measures, we strongly suggest applying neurocognitive methods in future research.
abstract_id: PUBMED:24465561
Does competition work as a motivating factor in e-learning? A randomized controlled trial. Background And Aims: Examinations today are often computerized, and both student motivation and the curriculum are often driven by the examinations. This study aimed to test whether competition widgets in e-learning quiz modules improve post-test and follow-up test results and self-evaluation. The secondary aim was to evaluate improvement during the training period by comparing test results and the number of tests taken.
Methods: Two groups were randomly assigned to either a quiz-module with competition widgets or a module without. Pre-, post- and follow up test-results were recorded. Time used within the modules was measured and students reported time studying. Students were able to choose questions from former examinations in the quiz-module.
Results: Students in the competing group performed significantly better on both the post-test and the follow-up test and showed significantly better overall learning efficiency than students in the non-competing group. They were also significantly better at guessing their post-test results.
Conclusion: Quiz modules with competition widgets motivate students to become more active during the module and stimulate better overall efficiency. They also generate improved self-awareness regarding post-test results.
abstract_id: PUBMED:37364414
Development and validation of experienced work-integrated learning instrument (E-WIL) using a sample of newly graduated registered nurses - A confirmatory factor analysis. Introduction: Research indicates that newly graduated registered nurses struggle to develop practical skills and clinical understanding and to adapt to their professional role. To ensure quality of care and support new nurses, it is vital that this learning is elucidated and evaluated. Aim The aim was to develop and evaluate the psychometric properties of an instrument assessing work-integrated learning for newly graduated registered nurses, the Experienced Work-Integrated Learning (E-WIL) instrument.
Method: The study utilized the methodology of a survey and a cross-sectional research design. The sample consisted of newly graduated registered nurses (n = 221) working at hospitals in western Sweden. The E-WIL instrument was validated using confirmatory factor analysis (CFA).
Results: The majority of the study participants were female, the average age was 28 years, and participants had an average of five months' experience in the profession. The results confirmed the construct validity of the global latent variable E-WIL, "Transforming previous notions and new contextual knowledge into practical meaning," including six dimensions representing work-integrated learning. The factor loadings between the final 29 indicators and the six factors ranged from 0.30 to 0.89, and between the latent factor and the six factors from 0.64 to 0.79. The indices of fit indicated satisfactory goodness-of-fit and good reliability in five dimensions with values ranging from α = 0.70 to 0.81, except for one dimension showing a slightly lower reliability, α = 0.63, due to the low item number. Confirmatory factor analysis also confirmed two second-order latent variables, "Personal mastering of professional roles" with 18 indicators, and "Adapting to organisational requirements" with 11 indicators. Both showed satisfactory goodness-of-fit, and factor loading between indicators and the latent variables ranged from 0.44 to 0.90 and from 0.37 to 0.81, respectively.
Conclusion: The validity of the E-WIL instrument was confirmed. All three latent variables could be measured in their entirety, and all dimensions could be used separately for the assessment of work-integrated learning. The E-WIL instrument could be useful for healthcare organisations when the goal is to assess aspects of newly graduated registered nurses' learning and professional development.
abstract_id: PUBMED:35877295
Research on the Mechanism of Influence of Game Competition Mode on Online Learning Performance. With the rapid development of information technology and the influence of the COVID-19 pandemic, online learning has become an important supplement to the teaching organization of basic and higher education. To increase user stickiness and improve learning performance, gamification elements are widely introduced into online learning settings. However, scholars have drawn different conclusions about the impact of game-based competition on online learning performance. Grounded in field theory and constructivist learning theory, and taking online interaction on a course platform as the setting, this study constructs an interaction model between game-based competition and online learning performance, with psychological capital as the mediating variable and connected classroom climate as the moderating variable, and examines in depth the mediating effect of psychological capital and the moderating effect of connected classroom climate. The results show that game-based competition has a significant positive effect on learning performance, and the effect of direct competition is stronger than that of indirect competition; the self-efficacy dimension of psychological capital mediates the relation between direct competition and learning performance, and the resilience dimension mediates the relation between competition and learning performance; and connected classroom climate moderates the relations between game-based competition and the knowledge mastery and knowledge innovation dimensions of learning performance.
abstract_id: PUBMED:26410320
Effectiveness of e-learning in hospitals. Background: Because medical personnel share different work shifts (i.e., three work shifts) and do not have a fixed work schedule, implementing timely, flexible, and quick e-learning methods for their continued education is imperative. Hospitals are currently focusing on developing e-learning.
Objective: This study aims to explore the key factors that influence the effectiveness of e-learning in medical personnel.
Methods: This study recruited medical personnel as the study participants and collected sample data by using the questionnaire survey method.
Results: This study is based on the information systems success model (IS success model), a prominent model in MIS research. It found that information quality, service quality, convenience, and learning climate influence e-learning satisfaction, which in turn influences e-learning effectiveness among medical personnel.
Conclusions: Based on these findings, the study provides recommendations that medical institutions can use as a reference when establishing e-learning systems in the future.
abstract_id: PUBMED:35282231
Relations Between Class Competition and Primary School Students' Academic Achievement: Learning Anxiety and Learning Engagement as Mediators. This study aimed to analyze the relations between class competition and primary school students' academic achievement, considering the possible mediating roles of learning anxiety and learning engagement. Participants were 1,479 primary school students from four primary schools in Zhejiang, China. We analyzed participants' scores for class competition, learning anxiety, and learning engagement and their last two final exam scores. Class competition did not directly predict academic achievement, but indirectly affected academic achievement through learning anxiety and learning engagement. There were three effect paths: (1) class competition negatively predicted academic achievement by increasing learning anxiety; (2) class competition positively predicted academic achievement by promoting learning engagement; and (3) class competition affected academic achievement through multiple mediating effects of learning anxiety and learning engagement. This study highlights the important roles of learning anxiety and learning engagement in class competition and academic achievement, which have theoretical and practical significance.
abstract_id: PUBMED:36395966
Motivating operations as contexts for operant discrimination training and testing. Two groups of mice were exposed to stimulus discrimination training and testing under different motivational conditions to study interactions between motivating operations (MOs) during initial discrimination training and MOs when performance is tested following training. One group received all discrimination training sessions under 24-h food deprivation while the other received all sessions under 0-h food deprivation. The number of responses allowed during discrimination training sessions was limited such that the two groups experienced the same number of response-outcome contingencies. The groups then received two post-discrimination training tests: one conducted under 24-h food deprivation and the other conducted under 0-h food deprivation. Results indicated no difference between groups in terms of discrimination ratio. However, subjects trained under 24-h deprivation made more responses in the 24-h test, while subjects trained under 0-h deprivation made more responses in the 0-h test. These results are discussed in terms of motivational state-dependent learning.
abstract_id: PUBMED:22942538
Motivating operations affect stimulus control: a largely overlooked phenomenon in discrimination learning. Several recent studies have explored what Michael (e.g., 1982) termed the value-altering effect and the behavior-altering effect of motivating operations. One aspect of the behavior-altering effect that has garnered no recent attention involves changes in stimulus control produced by motivating operations. To call attention to this aspect of the behavior-altering effect, we herein review 11 studies that are concerned with the influence of varying levels of food or water deprivation on stimulus generalization. These studies suggest that motivating operations influence stimulus control (a) by changing the evocative strength of not just an established discriminative stimulus, but also of stimuli that are physically similar to it; (b) by changing the range of stimuli that evoke the operant in question; and (c) by exerting these effects in a graded fashion. These findings are potentially of conceptual and applied significance, and it appears that further research examining how motivating operations alter stimulus control, including some studies suggested herein, is warranted.
abstract_id: PUBMED:26173367
The Use of E-Learning in Social Work Education. E-learning is an evolutionary pedagogy in social work. E-learning technologies transform learning so that it can be synchronous or asynchronous. The author provides a systematic discussion of e-learning and its role in social work education. E-learning appears advantageous as a hybrid or blended venue when used in academia and suitable in various formats for continuing education. Theoretical foundations that support positive learning outcomes should guide delivery. Distance delivery, regardless of the media or technology used, is not by itself a contributing variable in students' achievement. The priority of teaching and learning should be on effectiveness of the learning, regardless of the mode of delivery. Current descriptive research on e-learning can be improved by increasing the rigor of methodology and theoretical considerations. This information is necessary as the profession navigates the best ways to meet the changing needs of social work students and social workers in the field.
Answer: Yes, competition can work as a motivating factor in e-learning. A randomized controlled trial found that students who participated in a quiz-module with competition widgets were significantly better at both post-and follow-up tests and had a significantly better overall learning efficiency than those from the non-competing group. They also demonstrated improved self-awareness regarding post-test results, indicating that competition widgets in e-learning can motivate students to become more active during the module and stimulate better total efficiency (PUBMED:24465561). Additionally, research on the mechanism of influence of game competition mode on online learning performance showed that game-based competition has a significant positive effect on learning performance, with the effect of direct competition being better than that of indirect competition (PUBMED:35877295). Furthermore, a study on relations between class competition and primary school students' academic achievement found that class competition indirectly affected academic achievement through learning anxiety and learning engagement, with both positive and negative predictive paths (PUBMED:35282231). These findings suggest that competition, when integrated into e-learning environments, can serve as a motivating factor that enhances learning outcomes. |
Instruction: Are administrative data valid when measuring patient safety in hospitals?
Abstracts:
abstract_id: PUBMED:26133382
Are administrative data valid when measuring patient safety in hospitals? A comparison of data collection methods using a chart review and administrative data. Objective: To evaluate the validity and reliability of German Diagnosis Related Group administrative data to measure indicators of patient safety in comparison to clinical records.
Design: A cross-sectional study was conducted using chart review (CR) as the gold standard and screening of the associated administrative data based on DRG coding.
Setting: Three German somatic acute care hospitals for adults.
Participants: A total of 3000 cases treated between May and December, 2010.
Main Outcome Measures: Eight indicators were used to analyse the incidence of associated adverse events (AEs): pressure ulcers, catheter-related infections, respiratory failure, deep vein thromboses, hospital-acquired pneumonia, acute renal failure, acute myocardial infarction and wound infections. We calculated sensitivity, specificity, positive predictive value (PPV) and Cohen's Kappa with 95% confidence intervals.
Results: Screening of administrative data identified 171 AEs, and 456 were identified by CR; 135 events were identified by both methods. Sensitivities for the detection of AEs using administrative data ranged from 6 to 100%, specificities from 99 to 100%, PPVs from 33 to 100%, and reliabilities from 12 to 85%.
Conclusions: Indicators based on German administrative data deviate widely from indicators based on clinical data. Therefore, hospitals should be cautious to use indicators based on administrative data for quality assurance. However, some might be useful for case findings and quality improvement. The precision of the evaluated indicators needs further development to detect AEs by the valid use of administrative data.
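The validity metrics reported in this abstract come from comparing a screening method against a gold standard in a 2x2 agreement table. The sketch below shows the standard computations; the pooled counts are only a rough illustration implied by the abstract's totals, since the study reports the metrics per indicator rather than pooled.

```python
def validity_metrics(tp, fp, fn, tn):
    # Screening method (administrative data) vs gold standard (chart review).
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    p_observed = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)  # Cohen's kappa
    return sensitivity, specificity, ppv, kappa

tp = 135                   # adverse events found by both methods
fp = 171 - tp              # flagged by administrative data only
fn = 456 - tp              # found by chart review only
tn = 3000 - tp - fp - fn   # cases flagged by neither
print(validity_metrics(tp, fp, fn, tn))  # ~ (0.30, 0.99, 0.79, 0.38)
```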
abstract_id: PUBMED:24004036
Whole-patient measure of safety: using administrative data to assess the probability of highly undesirable events during hospitalization. Hospitals often have limited ability to obtain primary clinical data from electronic health records to use in assessing quality and safety. We outline a new model that uses administrative data to gauge the safety of care at the hospital level. The model is based on a set of highly undesirable events (HUEs) defined using administrative data and can be customized to address the priorities and needs of different users. Patients with HUEs were identified using discharge abstracts from July 1, 2008 through June 30, 2010. Diagnoses were classified as HUEs based on the associated present-on-admission status. The 2-year study population comprised more than 6.5 million discharges from 161 hospitals. The proportion of hospitalizations including at least one HUE during the 24-month study period varied greatly among hospitals, with a mean of 7.74% (SD 2.3%) and a range of 13.32% (max, 15.31%; min, 1.99%). The whole-patient measure of safety provides a global measure to use in assessing hospitals with the patient's entire care experience in mind. As administrative and clinical datasets become more consistent, it becomes possible to use administrative data to compare the rates of HUEs across organizations and to identify opportunities for improvement.
abstract_id: PUBMED:34772409
Measuring and monitoring patient safety in hospitals in Saudi Arabia. Background: There is much variability in the measurement and monitoring of patient safety across healthcare organizations. With no recognized standardized approach, this study examines how the key components outlined in Vincent et al's Measuring and Monitoring Safety (MMS) framework can be utilized to critically appraise a healthcare safety surveillance system. The aim of this study is to use the MMS framework to evaluate the Saudi Arabian healthcare safety surveillance system for hospital care.
Methods: This qualitative study consisted of two distinct phases. The first phase used document analysis to review national-level guidance relevant to measuring and monitoring safety in Saudi Arabia. The second phase consisted of semi-structured interviews with key stakeholders between May and August 2020 via a video conference call and focused on exploring their knowledge of how patient safety is measured and monitored in hospitals. The MMS framework was used to support data analysis.
Results: Three documents were included for analysis and 21 semi-structured interviews were conducted with key stakeholders working in the Saudi Arabian healthcare system. A total of 39 unique methods of MMS were identified, with one method of MMS addressing two dimensions. Of these MMS methods: 10 (25%) were concerned with past harm; 14 (35%) were concerned with the reliability of safety-critical processes; 3 (7.5%) were concerned with sensitivity to operations; 2 (5%) were concerned with anticipation and preparedness; and 11 (27.5%) were concerned with integration and learning.
Conclusions: The document analysis and interviews show an extensive system of MMS is in place in Saudi Arabian hospitals. The assessment of MMS offers a useful framework to help healthcare organizations and researchers to think critically about MMS, and how the data from different methods of MMS can be integrated in individual countries or health systems.
abstract_id: PUBMED:28495660
Geriatric Patient Safety Indicators Based on Linked Administrative Health Data to Assess Anticoagulant-Related Thromboembolic and Hemorrhagic Adverse Events in Older Inpatients: A Study Proposal. Background: Frail older people with multiple interacting conditions, polypharmacy, and complex care needs are particularly exposed to health care-related adverse events. Among these, anticoagulant-related thromboembolic and hemorrhagic events are particularly frequent and serious in older inpatients. The growing use of anticoagulants in this population and their substantial risk of toxicity and inefficacy have therefore become an important patient safety and public health concern worldwide. Anticoagulant-related adverse events and the quality of anticoagulation management should thus be routinely assessed to improve patient safety in vulnerable older inpatients.
Objective: This project aims to develop and validate a set of outcome and process indicators based on linked administrative health data (ie, insurance claims data linked to hospital discharge data) assessing older inpatient safety related to anticoagulation in both Switzerland and France, and enabling comparisons across time and among hospitals, health territories, and countries. Geriatric patient safety indicators (GPSIs) will assess anticoagulant-related adverse events. Geriatric quality indicators (GQIs) will evaluate the management of anticoagulants for the prevention and treatment of arterial or venous thromboembolism in older inpatients.
Methods: GPSIs will measure cumulative incidences of thromboembolic and bleeding adverse events based on hospital discharge data linked to insurance claims data. Using linked administrative health data will improve GPSI risk adjustment on patients' conditions that are present at admission and will capture in-hospital and postdischarge adverse events. GQIs will estimate the proportion of index hospital stays resulting in recommended anticoagulation at discharge and up to various time frames based on the same electronic health data. The GPSI and GQI development and validation process will comprise 6 stages: (1) selection and specification of candidate indicators, (2) definition of administrative data-based algorithms, (3) empirical measurement of indicators using linked administrative health data, (4) validation of indicators, (5) analyses of geographic and temporal variations for reliable and valid indicators, and (6) data visualization.
Results: Study populations will consist of 166,670 Swiss and 5,902,037 French residents aged 65 years and older admitted to an acute care hospital at least once during the 2012-2014 period and insured for at least 1 year before admission and 1 year after discharge. We will extract Swiss data from the Helsana Group data warehouse and French data from the national health insurance information system (SNIIR-AM). The study has been approved by Swiss and French ethics committees and regulatory organizations for data protection.
Conclusions: Validated GPSIs and GQIs should help support and drive quality and safety improvement in older inpatients, inform health care stakeholders, and enable international comparisons. We discuss several limitations relating to the representativeness of study populations, accuracy of administrative health data, methods used for GPSI criterion validity assessment, and potential confounding bias in comparisons based on GQIs, and we address these limitations to strengthen study feasibility and validity.
abstract_id: PUBMED:37181490
How to analyze and link patient experience surveys with administrative data to drive health service improvement - examples from Alberta, Canada. The ability of hospitals and health systems to learn from those who use their services (i.e., patients and families) is crucial for quality improvement and the delivery of high-quality patient-centered care. To this end, many hospitals and health systems regularly collect survey data from patients and their families and are engaged in activities to publicly report the results. Despite this, there has been limited research into the experiences of patients and families and how to improve them. Since 2015, our research team has conducted a variety of studies exploring patient experience survey data, both in isolation and linked with routinely captured administrative data sets across Alberta, a Canadian province of 4.4 million residents. Via secondary analyses, these studies have shed light on the drivers of inpatient experience, the specific aspects of care most correlated with one's overall experience, and the association of elements of the patient experience with other measures, such as patient safety indicators and unplanned hospital readmissions. The aim of this paper is to provide an overview of the methods we have used, including further details about the data sets and linkage protocol. The main findings from these papers are presented for readers and for those who wish to conduct their own work in this area.
abstract_id: PUBMED:25782763
Measuring safety culture in belgian psychiatric hospitals: validation of the dutch and French translations of the hospital survey on patient safety culture. Objectives: To measure safety culture in Belgian psychiatric hospitals on 12 dimensions and to examine the psychometric properties of the Dutch and French translations of the Hospital Survey on Patient Safety Culture (HSPSC) for use in psychiatric hospitals.
Methods: The authors analyzed 6,658 completed questionnaires (70.5% response rate) from a baseline measurement (2007-2009) in 44 psychiatric hospitals and 8,353 questionnaires (71.5% response rate) from a follow-up measurement (2011) in 46 psychiatric hospitals. Psychometric properties of the questionnaire were evaluated using item analysis, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), reliability analysis (Cronbach's alpha), and analysis of composite scores and inter-correlations.
Results: For both translations, CFA showed an acceptable fit with the original 12-dimensional model. For the Dutch and French translations, EFA showed a 10-factor and a 9-factor optimal measurement model, respectively. Cronbach's alpha indicated an acceptable level of reliability (≥ 0.70) for 7 of 12 dimensions. Most pair-wise correlations were significant and <0.5, implying good construct validity.
Conclusion: The Dutch and French translations of the HSPSC were found to be valid and reliable for measuring patient safety culture in psychiatric hospitals. Our results also suggest the use of combinations of specific dimensions, as recommended in previous research.
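Cronbach's alpha, used in this abstract as the reliability criterion (acceptable at ≥ 0.70), can be computed from an item-score matrix as shown below; the Likert responses in the example are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    # for an (n_respondents, n_items) matrix of item scores.
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 5-point responses from four respondents on three items.
print(cronbach_alpha([[4, 5, 4], [3, 3, 4], [2, 2, 3], [5, 4, 5]]))  # 0.90
```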
abstract_id: PUBMED:32903092
Measuring patient safety climate in operating rooms: Validation of the Spanish version of the hospital survey on patient safety. Objective: The measurement of patient safety climate within hospitals, and specifically in operating rooms is a basic tool for the development of the patient's safety policy. There are no validated Spanish versions of instruments to measure safety climate. The objective of this research was to validate the Spanish version of the Hospital Survey on Patient Safety (HSOPS®), with the addition of a module for surgical units, to evaluate the patient safety climate in operating rooms.
Methods: Survey validation study. The Hospital Survey on Patient Safety (HSOPS®) was applied to health workers from 6 acute general hospitals, from Medellín (Colombia), with surgical procedures greater than 300 per month, 18 items were added considered specific for Operating Rooms. For construct validation, an exploratory factor analysis (EFA) was used, utilizing principal components as the extraction method. Reliability was evaluated with Cronbach's α.
Results: A 10-dimension model was obtained with EFA; most of the dimensions of the original questionnaire were conserved, although the factorial structure was not reproduced. Two new dimensions emerged from the added items. Cronbach's α ranged between 0.66 and 0.87.
Conclusions: We found the HSOPS questionnaire to be valid and reliable for measuring patient safety climate in Spanish-speaking Latin American countries. Two additional dimensions are proposed for operating rooms.
abstract_id: PUBMED:23708480
Patient safety climate and worker safety behaviours in acute hospitals in Scotland. Objectives: To obtain a measure of hospital safety climate from a sample of National Health Service (NHS) acute hospitals in Scotland and to test whether these scores were associated with worker safety behaviors, and patient and worker injuries.
Methods: Data were from 1,866 NHS clinical staff in six Scottish acute hospitals. A Scottish Hospital Safety Questionnaire measured hospital safety climate (Hospital Survey on Patient Safety Culture), worker safety behaviors, and worker and patient injuries. The associations between the hospital safety climate scores and the outcome measures (safety behaviors, worker and patient injury rates) were examined.
Results: Hospital safety climate scores were significantly correlated with clinical workers' safety behavior and patient and worker injury measures, although the effect sizes were smaller for the latter. Regression analyses revealed that perceptions of staffing levels and managerial commitment were significant predictors for all the safety outcome measures. Both patient-specific and more generic safety climate items were found to have significant impacts on safety outcome measures.
Conclusion: This study demonstrated the influences of different aspects of hospital safety climate on both patient and worker safety outcomes. Moreover, it has been shown that in a hospital setting, a safety climate supporting safer patient care would also help to ensure worker safety.
Impact On Industry: The Scottish Hospital Safety Questionnaire has proved to be a usable method of measuring both hospital safety climate as well as patient and worker safety outcomes.
abstract_id: PUBMED:28830416
The safety attitudes questionnaire in Chinese: psychometric properties and benchmarking data of the safety culture in Beijing hospitals. Background: In China, increasing attention has been devoted to the patient safety culture within health administrative departments and healthcare organizations. However, no official version of a patient safety culture assessment tool has been published or is widely used, and little is known about the status of the safety culture in Chinese hospitals. The aims of this study were to examine the reliability and validity of the Safety Attitudes Questionnaire in Chinese and to establish benchmark data on the safety culture in Beijing.
Methods: A cross-sectional survey on patient safety culture was conducted from August to October 2014 using the Safety Attitudes Questionnaire in Chinese. Using a stratified random sampling method, we investigated departments from five integrative teaching hospitals in Beijing; frontline healthcare workers in each unit participated in the survey on a voluntary basis. The internal consistency and reliability were tested via Cronbach's alpha, and the structural validity of the questionnaire was tested using a correlation analysis and confirmatory factor analysis. The patient safety culture in the five hospitals was assessed and analyzed.
Results: A total of 1663 valid questionnaires were returned, for a response rate of 87.9%. Cronbach's alpha of the total scale was 0.945, and Cronbach's alpha for the six dimensions ranged from 0.785 to 0.899. The goodness-of-fit indices in the confirmatory factor analysis showed an acceptable but not ideal model fit. The safety attitude score of healthcare workers in the five hospitals was 69.72, and the positive response rate was 38.57% overall. The positive response rates of the six dimensions were between 20.80% and 59.31%.
Conclusions: The Safety Attitudes Questionnaire in Chinese has good internal consistency, and the structural validity and reliability are acceptable. This questionnaire can be used to assess the safety culture in Beijing hospitals, but some items require further refinement. The patient safety culture in Beijing hospitals must be improved in certain key areas.
abstract_id: PUBMED:23584363
Meeting the ambition of measuring the quality of hospitals' stroke care using routinely collected administrative data: a feasibility study. Objective: To examine the potential for using routinely collected administrative data to compare the quality and safety of stroke care at a hospital level, including evaluating any bias due to variations in coding practice.
Design: A retrospective cohort study of English hospitals' performance against six process and outcome indicators covering the acute care pathway. We used logistic regression to adjust the outcome measures for case mix.
Setting: Hospitals in England.
Participants: Stroke patients (ICD-10 I60-I64) admitted to English National Health Service public acute hospitals between April 2009 and March 2010, accounting for 91 936 admissions.
Main Outcome Measure: The quality and safety were measured using six indicators spanning the hospital care pathway, from timely access to brain scans to emergency readmissions following discharge after stroke.
Results: There were 182 occurrences of hospitals performing statistically differently from the national average at the 99.8% significance level across the six indicators. Differences in coding practice appeared to only partially explain the variation.
Conclusions: Hospital administrative data provide a practical and achievable method for evaluating aspects of stroke care across the acute pathway. However, without improvements in coding and further validation, it is unclear whether the cause of the variation is the quality of care or the result of different local care pathways and data coding accuracy.
Answer: The validity of administrative data when measuring patient safety in hospitals has been explored in various studies, and the findings suggest that while administrative data can be useful, there are limitations and concerns regarding its accuracy and reliability.
A study conducted in German hospitals found that indicators based on administrative data deviated widely from those based on clinical data, suggesting that hospitals should be cautious when using administrative data for quality assurance. Sensitivities for detecting adverse events using administrative data ranged from 6 to 100%, and specificities ranged from 99 to 100%. The positive predictive value and reliability also varied significantly (PUBMED:26133382).
Another study proposed a model using administrative data to gauge the safety of care at the hospital level, focusing on highly undesirable events (HUEs). This model can be customized to address different users' priorities and needs, and as administrative and clinical datasets become more consistent, it becomes possible to use administrative data to compare rates of HUEs across organizations (PUBMED:24004036).
In Saudi Arabia, a study using the Measuring and Monitoring Safety (MMS) framework to evaluate the healthcare safety surveillance system found an extensive system of MMS in place. The study identified 39 unique methods of MMS, with varying concerns for past harm, reliability of safety-critical processes, sensitivity to operations, anticipation and preparedness, and integration and learning (PUBMED:34772409).
A study proposal in Switzerland and France aimed to develop and validate a set of indicators based on linked administrative health data to assess anticoagulant-related adverse events in older inpatients. This approach would use linked data to improve risk adjustment and capture both in-hospital and post-discharge adverse events (PUBMED:28495660).
In Alberta, Canada, research has been conducted to link patient experience surveys with administrative data to drive health service improvement. These studies have explored the drivers of inpatient experience and the association of elements of the patient experience with measures such as patient safety indicators and unplanned hospital readmissions (PUBMED:37181490).
In conclusion, while administrative data can be a practical and achievable method for evaluating aspects of patient safety across the acute pathway, there are concerns about the representativeness of study populations, accuracy of administrative health data, and potential confounding bias. Improvements in coding and further validation are necessary to ensure the data accurately reflects the quality of care (PUBMED:23584363). Therefore, administrative data has the potential to measure patient safety in hospitals, but its validity can be variable and context-dependent. |
Instruction: Do individuals consider expected income when valuing health states?
Abstracts:
abstract_id: PUBMED:18828945
Do individuals consider expected income when valuing health states? Objectives: The purpose of this study was to empirically explore whether individuals take their expected income into consideration when directly valuing predefined health states. This was intended to help determine how to handle productivity costs due to morbidity in a cost-effectiveness analysis.
Methods: Two hundred students each valued four hypothetical health states by using time trade-off (TTO) and a visual analogue scale (VAS). The students were randomly assigned to two groups. One group was simply asked, without mentioning income, to value the different health states (the non-income group). The other group was explicitly asked to consider their expected income in relation to the health states in their valuations (the income group).
Results: For health states that are usually assumed to have a large effect on income, the valuations made by the income group seemed to be lower than the valuations made by the non-income group. Among the students in the non-income group, 96 percent stated that they had not thought about their expected income when they valued the health states. In the income group, 40 percent believed that their expected income had affected their valuations of the health states.
Conclusion: The results show that, as long as income is not mentioned, most individuals do not seem to consider their expected income when they value health states. This indicates that productivity costs due to morbidity are not captured within individuals' health state valuations. These findings, therefore, suggest that productivity costs due to morbidity should be included as a cost in cost-effectiveness analyses.
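For readers unfamiliar with the elicitation methods named in this abstract, the sketch below uses the standard time trade-off scoring (an assumption; the abstract does not describe its exact protocol): a state's utility is the ratio of the shorter time in full health a respondent accepts to the longer time in the state.

```python
def tto_utility(years_in_full_health, years_in_health_state):
    # Standard TTO scoring: if a respondent is indifferent between
    # `years_in_health_state` years in the state and `years_in_full_health`
    # years in full health, the state's utility is the ratio of the two.
    return years_in_full_health / years_in_health_state

# A respondent willing to give up 3 of 10 years to avoid a state values it at 0.7.
print(tto_utility(7, 10))
```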
abstract_id: PUBMED:10772358
Income effects of reduced health and health effects of reduced income: implications for health-state valuation. There is increasing use of multiattribute health-state utility systems, such as the Health Utilities Index and the EuroQol (now EQ-5D), to estimate quality-adjusted life years (QALYs) for cost-utility analysis. Whereas the preferences elicited from individuals using willingness-to-pay techniques for cost-benefit analysis would be expected to reflect those individuals' income levels, it is often suggested that cost-utility analysis can avoid this income effect by not valuing health in monetary terms. Contrary to this view, the authors argue that income can influence the measurement of utilities used to estimate QALYs. In the context of multiattribute utility instruments, two income effects can take place: 1) when individuals are asked to value health states to generate the set of utilities to apply in subsequent evaluation studies; 2) when those multiattribute systems are used to categorize individuals' (usually patients') health status in the field in applied evaluation studies. The authors review the most popular utility systems regarding how these income effects are handled and assess the implications for the measurement of utilities using these systems.
abstract_id: PUBMED:16338630
Whose health is affected by income inequality? A multilevel interaction analysis of contemporaneous and lagged effects of state income inequality on individual self-rated health in the United States. The empirical relationship between income inequality and health has been much debated and discussed. Recent reviews suggest that the current evidence is mixed, with the relationship between state income inequality and health in the United States (US) being perhaps the most robust. In this paper, we examine the multilevel interactions between state income inequality, individual poor self-rated health, and a range of individual demographic and socioeconomic markers in the US. We use the pooled data from the 1995 and 1997 Current Population Surveys, and the data on state income inequality (represented using the Gini coefficient) from the 1990, 1980, and 1970 US Censuses. Utilizing a cross-sectional multilevel design of 201,221 adults nested within 50 US states, we calibrated two-level binomial hierarchical mixed models (with states specified as a random effect). Our analyses suggest that for a 0.05 change in state income inequality, the odds ratio (OR) of reporting poor health was 1.30 (95% CI: 1.17-1.45) in a conditional model that included individual age, sex, race, marital status, education, income, and health insurance coverage as well as state median income. With few exceptions, we did not find strong statistical support for differential effects of state income inequality across different population groups. For instance, the relationship between state income inequality and poor health was steeper for whites compared with blacks (OR=1.34; 95% CI: 1.20-1.48) and for individuals with incomes greater than $75,000 compared with less affluent individuals (OR=1.65; 95% CI: 1.26-2.15). Our findings, however, primarily suggest an overall (as opposed to differential) contextual effect of state income inequality on individual self-rated poor health. To the extent that contemporaneous state income inequality differentially affects population sub-groups, our analyses suggest that the adverse impact of inequality is somewhat stronger for the relatively advantaged socioeconomic groups. This pattern was consistent regardless of whether we considered contemporaneous or lagged effects of state income inequality on health. At the same time, the contemporaneous main effect of state income inequality remained statistically significant even when conditioned on past levels of income inequality and the median income of states.
abstract_id: PUBMED:36712801
Downward income mobility among individuals with poor initial health is linked with higher cardiometabolic risk. The effects of socioeconomic position (SEP) across life course accumulate and produce visible health inequalities between different socioeconomic groups. Yet, it is not well-understood how the experience of intergenerational income mobility between origin and destination SEP, per se, affects health outcomes. We use data from the National Longitudinal Study of Adolescent to Adult Health collected in the United States with the outcome measure of cardiometabolic risk (CMR) constructed from data on LDL Cholesterol, Glucose MG/DL, C-reactive protein, systolic and diastolic blood pressure, and resting heart rate. Intergenerational income mobility is estimated as the difference between Waves 1 and 5 income quintiles. Diagonal reference models are used to test if intergenerational income mobility, net of origin and destination income quintile effects, is associated with CMR. We find that individuals in the lowest and the highest income quintiles have, respectively, the highest and the lowest CMR; both origin and destination income quintiles are equally important; there are no significant overall income mobility effects for different gender and race/ethnicity groups, but downward income mobility has negative health implications for individuals with poor initial health. We conclude that downward income mobility can increase inequalities in CMR in the United States by worsening the health of those who had poor health before their mobility experiences.
abstract_id: PUBMED:9756809
Income distribution, socioeconomic status, and self rated health in the United States: multilevel analysis. Objective: To determine the effect of inequalities in income within a state on self rated health status while controlling for individual characteristics such as socioeconomic status.
Design: Cross sectional multilevel study. Data were collected on income distribution in each of the 50 states in the United States. The Gini coefficient was used to measure statewide inequalities in income. Random probability samples of individuals in each state were collected by the 1993 and 1994 behavioural risk factor surveillance system, a random digit telephone survey. The survey collects information on an individual's income, education, self rated health and other health risk factors.
Setting: All 50 states.
Subjects: Civilian, non-institutionalised (that is, non-incarcerated and non-hospitalised) US residents aged 18 years or older.
Main Outcome Measure: Self rated health status.
Results: When personal characteristics and household income were controlled for, individuals living in states with the greatest inequalities in income were 30% more likely to report their health as fair or poor than individuals living in states with the smallest inequalities in income.
Conclusions: Inequality in the distribution of income was associated with an adverse impact on health independent of the effect of household income.
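Several abstracts in this set measure state income inequality with the Gini coefficient. A minimal sketch of its computation on hypothetical incomes, using the standard closed form over sorted values:

```python
import numpy as np

def gini(incomes):
    # Gini coefficient: 0 = perfect equality, 1 = maximal inequality.
    # Closed form on sorted values: sum((2i - n - 1) * x_i) / (n * sum(x)).
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

print(gini([20_000, 35_000, 50_000, 80_000, 250_000]))  # ~0.46
```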
abstract_id: PUBMED:36773532
Eliciting preferences and respecting values: Why ask? This essay explores the pitfalls and ambiguities in relying on preference elicitation to value health states, and it distinguishes preference elicitation, as a fallible method of measuring well-being, from public consultation, as an element of public deliberation. After distinguishing preference elicitation as a method of ascertaining opinions from preference elicitation as a method of measuring well-being, it points out that preferences depend on beliefs and that the considerations speaking in favor of deferring to people's values do not carry over to deferring to their beliefs. Instead of valuing health states by their bearing on well-being, as measured by preferences, this essay argues for valuing health states by their bearing on activity limitations and suffering, as determined by public deliberation.
abstract_id: PUBMED:11109180
Primary care, income inequality, and self-rated health in the United States: a mixed-level analysis. Using the 1996 Community Tracking Study household survey, the authors examined whether income inequality and primary care, measured at the state level, predict individual morbidity as measured by self-rated health status, while adjusting for potentially confounding individual variables. Their results indicate that distributions of income and primary care within states are significantly associated with individuals' self-rated health; that there is a gradient effect of income inequality on self-rated health; and that individuals living in states with a higher ratio of primary care physicians to population are more likely to report good health than those living in states with a lower such ratio. From a policy perspective, improvement in individuals' health is likely to require a multi-pronged approach that addresses individual socioeconomic determinants of health, social and economic policies that affect income distribution, and a strengthening of the primary care aspects of health services.
abstract_id: PUBMED:21078730
Impact of income and income inequality on infant health outcomes in the United States. Objectives: The goal was to investigate the relationships of income and income inequality with neonatal and infant health outcomes in the United States.
Methods: The 2000-2004 state data were extracted from the Kids Count Data Center. Health indicators included proportion of preterm births (PTBs), proportion of infants with low birth weight (LBW), proportion of infants with very low birth weight (VLBW), and infant mortality rate (IMR). Income was evaluated on the basis of median family income and proportion of federal poverty levels; income inequality was measured by using the Gini coefficient. Pearson correlations evaluated associations between the proportion of children living in poverty and the health indicators. Linear regression evaluated predictive relationships between median household income, proportion of children living in poverty, and income inequality for the 4 health indicators.
Results: Median family income was negatively correlated with all birth outcomes (PTB, r = -0.481; LBW, r = -0.295; VLBW, r = -0.133; IMR, r = -0.432), and the Gini coefficient was positively correlated (PTB, r = 0.339; LBW, r = 0.398; VLBW, r = 0.460; IMR, r = 0.114). The Gini coefficient explained a significant proportion of the variance in rate for each outcome in linear regression models with median family income. Among children living in poverty, the role of income decreased as the degree of poverty decreased, whereas the role of income inequality increased.
Conclusions: Both income and income inequality affect infant health outcomes in the United States. The health of the poorest infants was affected more by absolute wealth than relative wealth.
abstract_id: PUBMED:24438725
Valuing the health states associated with Chlamydia trachomatis infections and their sequelae: a systematic review of economic evaluations and primary studies. Objectives: Economic evaluations of interventions to prevent and control sexually transmitted infections such as Chlamydia trachomatis are increasingly required to present their outcomes in terms of quality-adjusted life-years using preference-based measurements of relevant health states. The objectives of this study were to critically evaluate how published cost-effectiveness studies have conceptualized and valued health states associated with chlamydia and to examine the primary evidence available to inform health state utility values (HSUVs).
Methods: A systematic review was conducted, with searches of six electronic databases up to December 2012. Data on study characteristics, methods, and main results were extracted by using a standard template.
Results: Nineteen economic evaluations of relevant interventions were included. Individual studies considered different health states and assigned different values and durations. Eleven studies cited the same source for HSUVs. Only five primary studies valued relevant health states. The methods and viewpoints adopted varied, and different values for health states were generated.
Conclusions: Limitations in the information available about HSUVs associated with chlamydia and its complications have implications for the robustness of economic evaluations in this area. None of the primary studies could be used without reservation to inform cost-effectiveness analyses in the United Kingdom. Future debate should consider appropriate methods for valuing health states for infectious diseases, because recommended approaches may not be suitable. Unless we adequately tackle the challenges associated with measuring and valuing health-related quality of life for patients with chlamydia and other infectious diseases, evaluating the cost-effectiveness of interventions in this area will remain problematic.
abstract_id: PUBMED:19509120
The incorporation of income and leisure in health state valuations when the measure is silent: an empirical inquiry into the sound of silence. Objective: The objective of the present study is to evaluate whether people 1) expect income and leisure to be affected by certain health states, 2) include the effects of ill-health on income and leisure in health state valuations when the measure is silent on both, and 3) what effect this has on these valuations.
Data And Methods: A convenience sample of 75 individuals from the general public rated 3 different health states on a visual analogue scale without instruction on the incorporation of income and leisure. Different subgroups were created on the basis of expecting income and leisure to be affected and the indicated incorporation of these effects. Comparative and multivariate analyses were used to analyze the data.
Results: The results show that most respondents (69%) did not consider income effects, whereas 61% did consider the effects on leisure. The expected influence of health states on income and leisure differed substantially between respondents. Only the incorporation of leisure proved to be influential in health state valuations.
Conclusions: Health state valuation methods that are silent and noninformative regarding leisure and income lead to interrespondent differences regarding how they expect leisure and income to be affected and regarding the inclusion of these effects. This may be especially problematic for leisure if productivity costs are captured at the cost side of the cost-effectiveness ratio.
Answer: Individuals do not typically consider their expected income when valuing health states, especially if income is not explicitly mentioned during the valuation process. A study involving 200 students who valued hypothetical health states using time trade-off (TTO) and a visual analogue scale (VAS) found that 96% of the students in the non-income group, who were not prompted to think about income, stated that they had not thought about their expected income when valuing the health states. In contrast, in the income group, where students were explicitly asked to consider their expected income, 40% believed that their expected income had affected their valuations of the health states. This suggests that productivity costs due to morbidity are not captured within individuals' health state valuations unless they are specifically asked to consider income, indicating that such costs should be included as a separate cost in cost-effectiveness analyses (PUBMED:18828945).
Moreover, the study implies that when individuals directly value predefined health states without the mention of income, they are less likely to factor in the potential impact on their future earnings. This has implications for how productivity costs due to morbidity are handled in cost-effectiveness analysis, as these costs may not be inherently reflected in the health state valuations provided by individuals (PUBMED:18828945). |
Instruction: Can nitrous oxide be administered effectively by nasal cannula?
Abstracts:
abstract_id: PUBMED:8695091
Can nitrous oxide be administered effectively by nasal cannula? A preliminary report. Study Objective: To predict the inspired concentrations achieved when nitrous oxide (N2O)/oxygen mixtures are administered to patients by way of a nasal cannula.
Design: The method used for estimating the FiN2O is based on one employed to calculate the FiO2 obtained with a nasal cannula. We assume a tidal volume of 500 ml, a respiratory rate of 20 breaths per minute, an inspiratory time of 1 second, an expiratory time of 2 seconds, and an anatomic reservoir volume of 50 ml. The reservoir consists of the nose, the nasopharynx, and the oropharynx. Its volume is assumed to be one-third of the anatomic dead space. It is also assumed that during the last 0.5 second of expiration, there is negligible flow of expired respiratory gases. A 6 L/min flow from the cannula will completely fill the reservoir. The FiO2 or FiN2O is then calculated by assuming that during the 1 second inspiratory time period, the gases in the anatomic reservoir that are provided by the nasal cannula and a volume of air such that the sum of the components of the tidal volume equals 500 ml are inspired.
Setting: Research laboratory of a university-affiliated metropolitan medical center.
Measurements And Main Results: The calculated FiO2 values for 100% oxygen delivered by nasal cannula agree with those determined by others. The FiN2Os estimated were directly proportional to the cannula flow rate and the fraction of N2O delivered. At the maximum total flow rate considered, 6 L/min, with 70% N2O (remainder O2) delivered to the nasal cannula, an FiN2O of only 0.21 was estimated due to the large volume of air inspired. The FiO2 under these conditions would only be 0.23.
Conclusions: Our analysis shows that the maximum FiN2O achievable by using a nasal cannula is limited to 0.21 even with a 6 L/min flow of 70% N2O for the defined respiratory parameters.
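The estimate in this abstract follows from simple volume bookkeeping over one breath, so a worked example may be useful. The sketch below re-derives the reported figures under the abstract's stated assumptions (500 mL tidal volume, 1 s inspiration, a 50 mL reservoir pre-filled with cannula gas, 6 L/min cannula flow); the function name and the ambient O2 fraction of 0.21 are illustrative assumptions, not part of the original paper.

```python
def inspired_fractions(tidal_volume_ml=500.0, inspiratory_time_s=1.0,
                       reservoir_ml=50.0, cannula_flow_l_min=6.0,
                       fn2o_delivered=0.70, fio2_air=0.21):
    """Estimate FiN2O and FiO2 for an N2O/O2 mixture given by nasal cannula."""
    # Cannula gas inhaled = reservoir contents + flow delivered during inspiration.
    cannula_ml = cannula_flow_l_min * 1000.0 / 60.0 * inspiratory_time_s
    source_ml = reservoir_ml + cannula_ml
    # The rest of the tidal volume is entrained room air.
    air_ml = tidal_volume_ml - source_ml
    fin2o = fn2o_delivered * source_ml / tidal_volume_ml
    fio2 = ((1.0 - fn2o_delivered) * source_ml + fio2_air * air_ml) / tidal_volume_ml
    return fin2o, fio2

fin2o, fio2 = inspired_fractions()
print(f"FiN2O ~ {fin2o:.2f}, FiO2 ~ {fio2:.2f}")
# -> FiN2O ~ 0.21, FiO2 ~ 0.24 (close to the reported 0.23; the exact value
#    depends on the O2 fraction assumed for entrained room air)
```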
abstract_id: PUBMED:32843510
Inhaled Nitric Oxide: In Vitro Analysis of Continuous Flow Noninvasive Delivery via Nasal Cannula. Background: Inhaled nitric oxide (NO) is most frequently delivered to mechanically ventilated patients in critical care, but it can also be administered noninvasively. The delivered dose and efficiency of continuous flow NO supplied through a nasal cannula has yet to be established. This study aimed to determine the influence of nasal cannula type, supply flow, and breathing pattern on delivered NO using a realistic adult airway replica and lung simulator.
Methods: Simulated breathing patterns were selected to represent rest, sleep, and light exercise, and were varied to investigate the effects of tidal volume and breathing frequency independently. Supplied gas flows targeted tracheal concentrations at rest of 5 or 20 ppm NO and were supplied with 2 L/min O2. Three different cannulas were tested. Tracheal NO concentrations and NO mass flow past the trachea were evaluated.
Results: Cannula type had a minor influence on delivered dose. Tracheal NO concentrations differed significantly based on breathing pattern (P < 0.01); for a target NO concentration of 20 ppm at rest, average inhaled NO concentrations were 23.3 ± 0.5 ppm, 36.5 ± 1.4 ppm, and 17.2 ± 0.3 ppm for the rest, sleep, and light exercise breathing patterns, respectively. For the same test conditions, mass flow of NO past the trachea was less sensitive to breathing pattern: 20.3 ± 0.5 mg/h, 19.9 ± 0.8 mg/h, and 24.3 ± 0.4 mg/h for the rest, sleep, and light exercise breathing patterns, respectively. Mass flow and delivery efficiency increased when minute volume increased.
Conclusions: These results indicate that inhaled NO concentration is strongly influenced by breathing pattern, whereas inhaled NO mass flow is not. NO mass flow may therefore be a useful dose metric for continuous flow delivery via nasal cannula.
abstract_id: PUBMED:28858552
Nitrous Oxide Inhalation Sedation Through a Nasal High-Flow System: The Possibility of a New Technique in Dental Sedation. High-flow nasal cannula (HFNC) systems are increasingly used for patients with both acute and chronic respiratory failure because of the clinical effectiveness and patient comfort associated with their use. Recently, HFNC has been used not only as a respiratory support device, but also as a drug delivery system. HFNC is designed to administer heated and humidified inspiratory oxygen flows (100% relative humidity at 37°C). Therefore, HFNC can provide high flows (up to 60 L/min) without discomfort. Moreover, HFNC improves oxygenation by exerting physiologic effects such as (a) dead-space washout and (b) moderate positive airway pressure. These characteristics and physiologic effects of HFNC may permit administration of high-flow nitrous oxide sedation while ensuring patient comfort and adequate sedative effect.
abstract_id: PUBMED:23287016
Humidification of inspired oxygen is increased with pre-nasal cannula, compared to intranasal cannula. Background: Oxygen therapy is usually combined with a humidification device, to prevent mucosal dryness. Depending on the cannula design, oxygen can be administered pre- or intra-nasally (administration of oxygen in front of the nasal ostia vs cannula system inside the nasal vestibulum). The impact of cannula design on intra-nasal humidity, however, has not been investigated to date.
Objective: First, to develop a system, that samples air from the nasal cavity and analyzes the humidity of these samples. Second, to investigate nasal humidity during pre-nasal and intra-nasal oxygen application, with and without humidification.
Methods: We first developed and validated a sampling and analysis system to measure humidity from air samples. By means of this system we measured inspiratory air samples from 12 subjects who received nasal oxygen with an intra-nasal and pre-nasal cannula at different flows, with and without humidification.
Results: The sampling and analysis system showed good correlation to a standard hygrometer within the tested humidity range (r = 0.99, P < .001). In our subjects intranasal humidity dropped significantly, from 40.3 ± 8.7% to 35.3 ± 5.8%, 32.0 ± 5.6%, and 29.0 ± 6.8% at flows of 1, 2, and 3 L/min, respectively, when oxygen was given intra-nasally without humidification (P = .001, P < .001, and P < .001, respectively). We observed no significant change in airway humidity when oxygen was given pre-nasally without humidification. With the addition of humidification we observed no significant change in humidity at any flow, independent of pre- or intranasal oxygen administration.
Conclusions: Pre-nasal administration of dry oxygen achieves levels of intranasal humidity similar to those achieved by intranasal administration in combination with a bubble through humidifier. Pre-nasal oxygen simplifies application and may reduce therapy cost.
abstract_id: PUBMED:6703290
Nitrous oxide sedation in dentistry. A comparison between Rotameter settings, pharyngeal concentrations and blood levels of nitrous oxide. Nitrous oxide concentrations (V/V) at the delivery Rotameter block, nasal mask, pharynx and venous blood were compared. There was a dilution of approximately 50% of the delivered nitrous oxide at the nasal mask, which was further reduced in the pharynx. Venous blood concentrations 10 minutes after inhalation of nitrous oxide were low, but consistent with values calculated from pharyngeal concentrations. After 5 minutes of oxygenation, venous blood nitrous oxide concentrations were still relatively high. A total of 92% of subjects experienced a satisfactory effect with 30% nitrous oxide or less in the pharynx.
abstract_id: PUBMED:26577201
Right Versus Left Prong Nasal Cannula Flow Delivery and the Effects of Nasal Cycling on Inspired F(IO2) in an Adult Anatomic Model. Background: Nasal cycling may present negative consequences for oxygen-dependent patients using a nasal cannula. This study investigates the effects of nasal cycling on the delivered F(IO2) via nasal cannula in an anatomic model following a baseline study comparing right and left prong nasal cannula oxygen flow delivery.
Methods: Flow from right and left nasal cannula prongs were measured simultaneously using thermal mass flow meters while delivering 0.5-6-L/min oxygen for 5 nasal cannulas from different manufacturers. An adult mannikin head with an anatomically correct upper airway was connected to a QuickLung Breather test lung. Nasal cannula-delivered F(IO2) was recorded using a polarographic oxygen analyzer with naris occlusion simulated by inserting a 5.0 endotracheal tube into the naris and inflating the endotracheal tube cuff. Data were recorded with both nares open, for right naris occluded and left naris patent, and for left naris occluded and right naris patent at 0.5-6 L/min.
Results: A paired t test demonstrated statistical differences between right and left nasal cannula prong oxygen flows (P < .01). Multivariate analysis of variance demonstrated no significant differences in nasal cannula prong flow between nasal cannula manufacturers. Repeated measures analysis of variance demonstrated significant differences for measured inspired F(IO2) (P < .01) when alternating nares were occluded and patent. The Bonferroni post hoc test showed significant differences for measured F(IO2) between patent nares and right naris patent-left naris occluded (P < .01) and between patent nares and left naris patent-right naris occluded (P < .01). Measured F(IO2) decreased by as much as 0.1 when one naris was occluded.
Conclusions: Oxygen delivery by nasal cannula may be inefficient in the presence of the nasal cycle. Delivered nasal cannula oxygen concentrations decreased when bilateral nasal patency changed to unilateral nasal patency. Although statistically different, the differences in nasal cannula prong oxygen flow may not be clinically important across the full range of flows.
abstract_id: PUBMED:2999676
Microwave sterilization of nitrous oxide nasal hoods contaminated with virus. Although there exists a desire to eliminate the possibility of cross-infection from microbial contaminated nitrous oxide nasal hoods, effective and practical methods of sterilization in a dental office are unsatisfactory. Microwaves have been used to sterilize certain contaminated dental instruments without damage. In this study nasal hoods contaminated with rhinovirus, parainfluenza virus, adenovirus, and herpes simplex virus were sterilized in a modified microwave oven. Ninety-five percent of the virus activity was destroyed after 1 minute of exposure of the contaminated nasal hoods to microwaves. By the end of 4 minutes, complete inactivation of all four viruses was found. Repeated exposure of the nasal hoods to microwaves resulted in no damage to their texture and flexibility. Microwave sterilization may potentially provide a simple and practical method of sterilizing nitrous oxide anesthesia equipment in a dental or medical practice.
abstract_id: PUBMED:22134227
Safety of high-concentration nitrous oxide by nasal mask for pediatric procedural sedation: experience with 7802 cases. Objectives: Nitrous oxide is an effective sedative/analgesic for mildly to moderately painful pediatric procedures. This study evaluated the safety of nitrous oxide administered at high concentration (up to 70%) for procedural sedation.
Methods: This prospective, observational study included all patients younger than 18 years who received nitrous oxide for diagnostic or therapeutic procedures at a metropolitan children's facility. Patients' age, highest concentration and total duration of nitrous oxide administration, and adverse events were recorded.
Results: Nitrous oxide was administered on 7802 occasions to 5779 patients ranging in age from 33 days to 18 years (median, 5.0 years) during the 5.5-year study period. No adverse events were recorded for 95.7% of cases. Minor adverse events included nausea (1.6%), vomiting (2.2%), and diaphoresis (0.4%). Nine patients had potentially serious events, all of which resolved without incident. There was no difference in adverse event rates between nitrous oxide less than or equal to 50% and greater than 50% (P = 0.18). Patients aged 1 to 4 years had the lowest adverse event rate (P < 0.001), with no difference between groups younger than 1 year, 5 to 10 years, and 11 to 18 years. Compared with patients with less than 15 minutes of nitrous oxide administration, patients with 15 to 30 minutes or more than 30 minutes of nitrous oxide administration were 4.2 (95% confidence interval, 3.2-5.4) or 4.9 (95% confidence interval, 2.6-9.3) times more likely to have adverse events.
Conclusions: Nitrous oxide can be safely administered at up to 70% concentration by nasal mask for pediatric procedural sedation, particularly for short (<15 minutes) procedures. Nitrous oxide seems safe for children of all ages.
abstract_id: PUBMED:17377099
Case-series of nurse-administered nitrous oxide for urinary catheterization in children. Background: Children undergoing urologic imaging studies requiring urethral catheterization experience considerable discomfort and psychological distress. Nitrous oxide sedation may mitigate these detriments but the requirement for physician administration has limited the applicability of this technique.
Methods: Registered nurses underwent the nitrous oxide training requirements prescribed for state licensure of dentists and dental hygienists, with special emphasis on pediatric sedation principles. To evaluate the safety of nurse-administered nitrous oxide, we consecutively enrolled all children (ASA PS I-II) sedated for urethral catheterization for urologic imaging in an observational trial designed to identify sedation-related adverse events.
Results: Nitrous oxide was administered on 1018 occasions. There were no major adverse events (apnea, oxygen saturation <92%). Minor adverse events (diaphoresis, nausea, vomiting) occurred in 4% of patients. Eight patients (1%) were described as over-sedated. In 11 (1%) patients, nitrous oxide provided insufficient sedation for completion of urologic imaging.
Conclusions: Nitrous oxide sedation can be provided by a nurse-administered program in pediatric radiology. Administration of nitrous oxide for pediatric procedures by adequately trained nursing staff with appropriate multidisciplinary oversight may increase children's access to this sedative/analgesic drug.
abstract_id: PUBMED:8077442
Self-administered nitrous oxide analgesia for pediatric fracture reductions. We prospectively studied the efficacy and safety of self-administered nitrous oxide analgesia for 54 children undergoing closed reductions of fractures in the emergency department. No child was excluded from entry into the study because of fracture type. Nitrous oxide was the sole source of analgesia. The average Children's Hospital of Eastern Ontario pain score (CHEOPS) rated by the emergency medicine physician observing the reduction was 9.1 (range 6-13). Ninety-one percent of children obtained an analgesic effect; however, 46% of children had a CHEOPS score of > or = 10, indicating significant pain. A statistically significantly higher proportion of failures using nitrous-oxide analgesia occurred in patients with completely displaced radius/ulna fractures (p = 0.027). No complications such as vomiting, respiratory depression, or a change in oxygen saturation resulted from the use of nitrous oxide.
Answer: The administration of nitrous oxide (N2O) via nasal cannula has been explored in various studies, and the effectiveness of this method seems to depend on several factors, including the flow rate of the gas, the concentration of N2O, the type of nasal cannula used, and the breathing pattern of the patient.
One study found that when using a nasal cannula to deliver a mixture of N2O and oxygen, the maximum fraction of inspired nitrous oxide (FiN2O) achievable was limited to 0.21, even with a flow rate of 6 L/min and 70% N2O (PUBMED:8695091). This suggests that the effectiveness of N2O delivery via nasal cannula may be limited, particularly if higher concentrations are required.
Another study investigated the delivery of inhaled nitric oxide (NO) via nasal cannula and found that the type of cannula had a minor influence on the delivered dose, but the breathing pattern significantly affected the tracheal NO concentrations (PUBMED:32843510). This indicates that patient-specific factors such as breathing patterns can influence the effectiveness of gas delivery through a nasal cannula.
High-flow nasal cannula (HFNC) systems, which can provide high flows of gas without discomfort, have been suggested as a potential method for administering high-flow nitrous oxide sedation while ensuring patient comfort and an adequate sedative effect (PUBMED:28858552).
In terms of safety, a study reported that nitrous oxide can be safely administered at up to 70% concentration by nasal mask for pediatric procedural sedation, particularly for short procedures (PUBMED:22134227). This suggests that with proper monitoring and for specific patient populations, higher concentrations of N2O can be effectively and safely delivered.
Overall, while nasal cannulas can be used to administer nitrous oxide, the effectiveness of this method may be limited by the factors mentioned above. It is important to consider the specific clinical scenario, patient characteristics, and desired concentration of N2O when determining the suitability of nasal cannula delivery for nitrous oxide. |
Instruction: Interference screw vs. suture anchor fixation for open subpectoral biceps tenodesis: does it matter?
Abstracts:
abstract_id: PUBMED:25114415
Complications after subpectoral biceps tenodesis using a dual suture anchor technique. Purpose: A variety of fixation techniques for subpectoral biceps tenodeses have been described, including interference screw and suture anchor fixation. Biomechanical data suggest that dual suture anchor fixation has strength equivalent to that of interference screw fixation. The purpose of the study is to determine the early complication rate after subpectoral biceps tenodesis utilizing a dual suture anchor technique.
Materials And Methods: A total of 103 open subpectoral biceps tenodeses were performed over a 3-year period using a dual suture anchor technique. There were 72 male and 31 female shoulders. The average age at the time of tenodesis was 45.5 years. 41 patients had a minimum of 6 months clinical follow-up (range, 6 to 45 months). The tenodesis was performed for biceps tendonitis, superior labral tears, biceps tendon subluxation, biceps tendon partial tears, and revisions of prior tenodeses.
Results: There were a total of 7 complications (7%) in the entire group. There were 4 superficial wound infections (4%). There were 2 temporary nerve palsies (2%) resulting from the interscalene block. One patient had persistent numbness of the ear and a second patient had a temporary phrenic nerve palsy resulting in respiratory dysfunction and hospital admission. One patient developed a pulmonary embolism requiring hospital admission and anticoagulation. There were no hematomas, wound dehiscences, peripheral nerve injuries, or ruptures. In the sub-group of patients with a minimum of 6 months clinical follow-up, the only complication was a single wound infection treated with oral antibiotics.
Conclusions: Subpectoral biceps tenodesis utilizing a dual suture anchor technique has a low early complication rate with no ruptures or deep infections. The complication rate is comparable to those previously reported for interference screw subpectoral tenodesis and should be considered as a reasonable alternative to interference screw fixation.
Level Of Evidence: Level IV-Retrospective Case Series.
abstract_id: PUBMED:23415819
Biomechanical evaluation of subpectoral biceps tenodesis: dual suture anchor versus interference screw fixation. Background: Subpectoral biceps tenodesis has been reliably used to treat a variety of biceps tendon pathologies. Interference screws have been shown to have superior biomechanical properties compared to suture anchors, although only single-anchor constructs have been evaluated in the subpectoral region. The purpose of this study was to compare interference screw fixation with a suture anchor construct, using 2 anchors for a subpectoral tenodesis.
Methods: A subpectoral biceps tenodesis was performed using either an interference screw (8 × 12 mm; Arthrex) or 2 suture anchors (Mitek G4) with #2 FiberWire (Arthrex) in a Krackow and Bunnell configuration in seven pairs of human cadavers. The humerus was inverted in an Instron and the biceps tendon was loaded vertically. Displacement driven cyclic loading was performed followed by failure loading.
Results: Suture anchor constructs had lower stiffness upon initial loading (P = .013). After 100 cycles, the stiffness of the suture anchor construct "softened" (decreased 9%, P < .001), whereas the screw construct was unchanged (0.4%, P = .078). Suture anchors had significantly higher ultimate failure strain than the screws (P = .003), but ultimate failure loads were similar between constructs: 280 ± 95 N (screw) vs 310 ± 91 N (anchors) (P = .438).
Conclusion: The interference screw was significantly stiffer than the suture anchor construct. Ultimate failure loads were similar between constructs, unlike previous reports indicating interference screws had higher ultimate failure loads compared to suture anchors. Neither construct was superior with regard to stress, although suture anchors could withstand greater elongation prior to failure.
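Construct stiffness and the cyclic "softening" reported here are typically derived as the slope of the load-displacement curve before and after cycling. The sketch below shows one common way to compute that slope; the data values are invented for illustration and are not the study's measurements.

```python
import numpy as np

def construct_stiffness(displacement_mm, load_n):
    """Stiffness (N/mm) as the least-squares slope of load vs. displacement."""
    slope, _intercept = np.polyfit(displacement_mm, load_n, 1)
    return slope

# Hypothetical initial loading ramp for one specimen.
disp = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
load = np.array([0.0, 20.0, 41.0, 60.0, 81.0])
k_initial = construct_stiffness(disp, load)
# A 9% decrease after cycling (as reported for the anchor construct) would be:
k_cycled = 0.91 * k_initial
print(f"initial {k_initial:.1f} N/mm -> post-cyclic {k_cycled:.1f} N/mm")
```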
abstract_id: PUBMED:37548005
A Radiostereometric Analysis of Tendon Migration After Arthroscopic and Mini-Open Biceps Tenodesis: Interference Screw Versus Single Suture Anchor Fixation. Background: Studies suggest that similar clinical results are achieved via arthroscopic and open biceps tenodesis (BT) techniques.
Purpose: To quantify the postoperative migration of the BT construct between arthroscopic suprapectoral BT (ASPBT) and open subpectoral BT (OSPBT) techniques via interference screw (IS) or single-suture suture anchor (SSSA) fixation using radiostereometric analysis.
Study Design: Cohort study; Level of evidence, 2.
Methods: Distal migration of the biceps tendon after OSPBT with a polyetheretherketone IS, OSPBT with 1 SSSA, ASPBT with polyetheretherketone IS, and ASPBT with 2 SSSAs was measured prospectively. Patients with symptomatic biceps tendinopathy and preoperative patient-reported outcome measures (PROMs) including Constant-Murley subjective, Single Assessment Numeric Evaluation, or Patient-Reported Outcomes Measurement Information System-Upper Extremity scores were included. A tantalum bead was sutured on the proximal end of the long head of the biceps tendon before fixation of tendon tissue. Anteroposterior radiographs were performed immediately postoperatively, at 1 week, and at 3 months. Bead migration was measured, and preoperative PROMs were compared with those at latest follow-up.
Results: Of 115 patients, 94 (82%) were available for final follow-up. IS fixation yielded the least tendon migration with no difference between the open and arthroscopic approaches (4.31 vs 5.04 mm; P = .70). Fixation with 1 suture anchor demonstrated significantly greater migration than that achieved with an IS at both 1 week (6.47 vs 0.1 mm and 6.47 vs 1.75 mm; P < .001) and 3 months (14.76 vs 4.31 mm and 14.76 vs 5.04 mm; P < .001) postoperatively. Two-suture anchor fixation yielded significantly greater migration than IS fixation at 1 week (7.02 vs 0.1 mm, P < .001; 7.02 vs 1.75 mm, P = .003) but not 3 months postoperatively (8.06 vs 4.31 mm, P = .10; 8.06 vs 5.04 mm, P = .07). Four patients with suture anchor fixation (3 patients in the OSPBT 1 SSSA group, 9.4%, and 1 patient in the ASPBT 2 SSSAs group, 3.8%) developed a Popeye deformity, whereas no Popeye deformities occurred in the IS groups. Mean 3-month bead migration in patients with and without a Popeye deformity was 60.8 and 11.2 mm, respectively (P < .0001). PROMs did not differ among groups at final follow-up.
Conclusion: Interference screw fixation yielded the least tendon migration whether achieved arthroscopically or open. The available data indicated that fixation with 1 SSSA but not 2 SSSAs resulted in significantly greater migration than that achieved with an IS. Despite variations in tendon migration, PROMs were similar among all groups. When SSSAs are used, tendon migration may be minimized by using ≥2 anchors.
abstract_id: PUBMED:18793424
Interference screw vs. suture anchor fixation for open subpectoral biceps tenodesis: does it matter? Background: Bioabsorbable interference screw fixation has superior biomechanical properties compared to suture anchor fixation for biceps tenodesis. However, it is unknown whether fixation technique influences clinical results.
Hypothesis: We hypothesize that subpectoral interference screw fixation offers relevant clinical advantages over suture anchor fixation for biceps tenodesis.
Study Design: Case Series.
Methods: We performed a retrospective review of a consecutive series of 88 patients receiving open subpectoral biceps tenodesis with either interference screw fixation (34 patients) or suture anchor fixation (54 patients). Average follow up was 13 months. Outcomes included Visual Analogue Pain Scale (0-10), ASES score, modified Constant score, pain at the tenodesis site, failure of fixation, cosmesis, deformity (popeye) and complications.
Results: There were no failures of fixation in this study. All patients showed significant improvement between their preoperative and postoperative status with regard to pain, ASES score, and abbreviated modified Constant scores. When comparing IF screw versus anchor outcomes, there was no statistically significant difference for VAS (p = 0.4), ASES score (p = 0.2), or modified Constant score (p = 0.09). One patient (3%) treated with IF screw complained of persistent bicipital groove tenderness, versus four patients (7%) in the SA group (nonsignificant).
Conclusion: Subpectoral biceps tenodesis reliably relieves pain and improves function. There was no statistically significant difference in the outcomes studied between the two fixation techniques. Residual pain at the site of tenodesis may be an issue when suture anchors are used in the subpectoral location.
abstract_id: PUBMED:27039966
Biomechanical Comparison of All-Suture Anchor Fixation and Interference Screw Technique for Subpectoral Biceps Tenodesis. Purpose: To compare the biomechanical characteristics of the subpectoral Y-knot all-suture anchor fixation with those of the interference screw technique.
Methods: Sixteen fresh-frozen human cadaveric shoulders with a mean age of 67.6 ± 5.8 years (range, 52 to 74 years) were studied. The specimens were randomly grouped into 2 experimental biceps tenodesis groups (n = 8): Y-knot all-suture anchor or interference screw. The specimens were cyclically tested to failure by applying tensile forces parallel to the longitudinal axis of the humerus. A preload of 5 N was applied for 2 minutes prior to cyclic loading for 500 cycles from 5 to 70 N at 1 Hz; subsequently, a load-to-failure test at 1 mm/s was performed. The ultimate failure load, stiffness, displacement at cyclic and failure loading, and mode of failure were recorded.
Results: The all-suture anchor technique displayed values of ultimate failure load and stiffness comparable to those of the interference screw technique. The displacement at cyclic and failure loading of the all-suture anchor trials was significantly greater than that of the interference screw (P = .0002). The all-suture anchor specimens experienced anchor pullout and tendon tear equally during the trials, whereas the interference screw group experienced tendon tear in most of the cases and screw pullout in 2 trials.
Conclusions: The Y-knot all-suture anchor fixation provides equivalent ultimate failure load and stiffness when compared with the interference screw technique in tenodesis of the proximal biceps tendon from a subpectoral approach. However, the interference screw technique demonstrates significantly less displacement in response to cyclic and failure loading.
Clinical Relevance: The all-suture anchor fixation is an alternative technique for subpectoral biceps tenodesis, albeit with greater displacement than interference screw fixation during cyclic and failure loading.
abstract_id: PUBMED:31663008
Biomechanical Comparison of Subpectoral Biceps Tenodesis Onlay Techniques. Background: Subpectoral biceps tenodesis can be performed with cortical fixation using different repair techniques. The goal of this technique is to obtain a strong and stable reduction of biceps tendon in an anatomic position.
Purpose/hypothesis: The purpose of this study was to compare (1) displacement during cyclic loading, (2) ultimate load, (3) construct stiffness, and (4) failure mode of the biceps tenodesis fixation methods using onlay techniques with an all-suture anchor versus an intramedullary unicortical button. It was hypothesized that fixation with all-suture anchors using a Krackow stitch would exhibit biomechanical characteristics similar to those exhibited by fixation with unicortical buttons.
Study Design: Controlled laboratory study.
Methods: Ten pairs of fresh-frozen cadaveric shoulders (N = 20) were dissected to the humerus, leaving the biceps tendon-muscle unit intact for testing. A standardized subpectoral biceps cortical (onlay) tenodesis was performed using either an all-suture anchor or a unicortical button. The biceps tendon was initially cycled from 5 to 70 N at a frequency of 1.5 Hz. The force on the tendon was then returned to 5 N, and the tendon was pulled until ultimate failure of the construct. Displacement during cyclic loading, ultimate failure load, stiffness, and failure modes were assessed.
Results: Cyclic loading resulted in a mean displacement of 12.5 ± 2.5 mm for all-suture anchor fixation and 29.2 ± 9.4 mm for unicortical button fixation (P = .005). One all-suture anchor fixation and 2 unicortical button fixations failed during cyclic loading. The mean ultimate failure load was 170.4 ± 68.8 N for the all-suture anchor group and 125.4 ± 44.6 N for the unicortical button group (P = .074), with stiffness 59.3 ± 11.6 N/mm and 48.6 ± 6.8 N/mm (P = .091), respectively. For the unicortical button, failure occurred by suture tearing through tendon in 100% of the specimens. For the all-suture anchor, failure occurred by suture tearing through tendon in 56% and knot failure in 44% of the specimens.
Conclusion: The all-suture anchor fixation using a Krackow stitch for subpectoral biceps tenodesis provided ultimate load and stiffness similar to unicortical button fixation using a nonlocking whipstitch. The all-suture anchor fixation technique was shown to be superior in terms of displacement during cyclic loading when compared with the unicortical button fixation technique. However, the results of this study suggest that the fixation method used on the humeral side is less determinative of overall construct strength than stitch location and technique, as the biceps tendon tissue and stitch configuration seem to be the limiting factor in subpectoral onlay tenodesis techniques.
Clinical Relevance: All-suture anchors have a smaller diameter than traditional suture anchors, can be inserted through curved guides, and preserve humeral bone stock without compromising postoperative imaging. This study supports use of the all-suture anchor fixation technique, with its high biomechanical fixation strength and low displacement, as an alternative fixation option for subpectoral onlay biceps tenodesis.
abstract_id: PUBMED:37969507
Biomechanical properties of suprapectoral biceps tenodesis with double-anchor knotless luggage tag sutures vs. subpectoral biceps tenodesis with single-anchor whipstitch suture using all-suture anchors. Background: As the use of all-suture anchors continues to increase, limited biomechanical data on the use of these anchors in various configurations for tenodesis of the long head biceps tendon (LHBT) exists. The aim of this study was to compare the biomechanical properties of a 2-anchor luggage tag suprapectoral biceps tenodesis (Sup-BT) vs. a single-anchor whipstitch subpectoral biceps tenodesis (Sub-BT) using all-suture anchors. The hypothesis was that the Sub-BT will have a higher ultimate load to failure and less creep relative to the Sup-BT construct.
Methods: Eighteen fresh frozen cadaveric humeri were used. The specimens were randomly divided into 2 groups of 9; i) The Sup-BT were performed with 2 1.8 mm knotless all-suture anchors using a luggage-tag fixation configuration, ii) The Sub-BT were performed using a single 1.9 mm all-suture anchor and a whipstitch suture configuration with a tied knot. The humeri were tested on a hydraulic MTS machine where the specimens were preloaded at 5 N for 2 minutes and then cyclically loaded from 5 to 50 N for 1000 cycles at 1 Hz while maximum displacement was recorded with a motion system and markers attached to the bone and bicep tendon. The tendon was then tensioned at a rate of 1 mm/s to obtain the ultimate load to failure. CT scans of the specimens were used to calculate the bone mineral density at the site of the anchor/bone interface and video recordings were captured during load to failure to document all modes of failure.
Results: There was no significant difference in the average load to failure of the Sup-BT and Sub-BT groups (197 N ± 45 N (SD), 164 N ± 68 N (SD) respectively; P = .122) or creep under fatigue between the Sup-BT vs. Sub-BT specimens (3.1 mm, SD = 1.5 vs. 2.2 mm, SD = 0.9; P = .162). The bone mineral density was statistically different between the 2 groups (P < .001); however, there were no observed failures at the anchor/bone interface and no correlation between failure load and bone mineral density.
Conclusion: The ultimate load to failure and creep between a Sup-BT with 2 knotless all-suture anchors using a luggage tag suture configuration was equivalent to a Sub-BT with 1 all-suture anchor using a whipstitched suture configuration and a tied knot. Surgeons can perform either technique confidently knowing that they are biomechanically equivalent in a cadaver model at time zero, and they offer similar strength to other fixation methods cited in the literature.
abstract_id: PUBMED:31585053
Are Implant Choice and Surgical Approach Associated With Biceps Tenodesis Construct Strength? A Systematic Review and Meta-regression. Background: Despite the increasing use of biceps tenodesis, there is a lack of consensus regarding optimal implant choice (suture anchor vs interference screw) and implant placement (suprapectoral vs subpectoral).
Purpose/hypothesis: The purpose was to determine the associations of procedural parameters with the biomechanical performance of biceps tenodesis constructs. The authors hypothesized that ultimate failure load (UFL) would not differ between sub- and suprapectoral repairs or between interference screw and suture anchor constructs and that the number of implants and number of sutures would be positively associated with construct strength.
Study Design: Meta-analysis.
Methods: The authors conducted a systematic literature search for studies that measured the biomechanical performance of biceps tenodesis repairs in human cadaveric specimens. Two independent reviewers extracted data from studies that met the inclusion criteria. Meta-regression was then performed on the pooled data set. Outcome variables were UFL and mode of failure. Procedural parameters (fixation type, fixation site, implant diameter, and numbers of implants and sutures used) were included as covariates. Twenty-five biomechanical studies, representing 494 cadaveric specimens, met the inclusion criteria.
Results: The use of interference screws (vs suture anchors) was associated with a mean 86 N-greater UFL (95% CI, 34-138 N; P = .002). Each additional suture used to attach the tendon to the implant was associated with a mean 53 N-greater UFL (95% CI, 24-81 N; P = .001). Multivariate analysis found no significant association between fixation site and UFL. Finally, the use of suture anchors and fewer number of sutures were both independently associated with lower odds of native tissue failure as opposed to implant pullout.
Conclusion: These findings suggest that fixation with interference screws, rather than suture anchors, and the use of more sutures are associated with greater biceps tenodesis strength, as well as higher odds of native tissue failure versus implant pullout. Although constructs with suture anchors show inferior UFL compared with those with interference screws, incorporation of additional sutures may increase the strength of suture anchor constructs. Supra- and subpectoral repairs provide equivalent biomechanical strength when controlling for potential confounders.
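For readers curious how such a meta-regression is set up, the sketch below regresses pooled specimen-level UFL on procedural covariates with ordinary least squares. The data are invented and the single-level OLS is a simplification (the review's actual model presumably accounts for study-level clustering); names and numbers are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

# Invented pooled data: 1 = interference screw, 0 = suture anchor.
fixation_screw = np.array([1, 1, 0, 0, 1, 0, 1, 0])
n_sutures = np.array([2, 4, 2, 4, 3, 3, 2, 2])
ufl_n = np.array([310, 420, 240, 330, 365, 280, 300, 225])

# Model: UFL ~ intercept + fixation type + number of sutures.
X = sm.add_constant(np.column_stack([fixation_screw, n_sutures]))
fit = sm.OLS(ufl_n, X).fit()
print(fit.params)  # [intercept, screw effect in N, per-suture effect in N]
```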
abstract_id: PUBMED:21717988
Biomechanical evaluation of open suture anchor fixation versus interference screw for biceps tenodesis. Biceps tenodesis provides reliable pain relief for patients with biceps tendon abnormality. Previous cadaver studies have shown that, for biceps tenodesis, an interference screw provides biomechanical strength to failure superior to that of suture anchors. This finding has led some providers to conclude that screw fixation for biceps tenodesis is superior to suture anchor fixation. The purpose of the current study was to test the hypothesis that the strength of a 2-suture-anchor technique with closing of the transverse ligament is equal to that of interference screw fixation for biceps tenodesis. In 6 paired, fresh-frozen cadaveric shoulder specimens, we excised the soft tissue except for the biceps tendon and the transverse ligament. We used 2 different methods for biceps tenodesis: (1) suture anchor repair with closing of the transverse ligament over the repair, and (2) interference screw fixation of the biceps tendon in the bicipital groove. Each specimen was preloaded with 5 N and then stretched to failure at 5 mm/sec on a materials testing machine. The load-to-failure forces of each method of fixation were recorded and compared. Mean loads to failure for the suture anchor and interference screw repairs were 263.2 N (95% confidence interval [CI], 221.7-304.6) and 159.4 N (95% CI, 118.4-200.5), respectively. Biceps tenodesis using suture anchors and closure of the transverse ligament provided a higher load to failure than did interference screw fixation. This study shows that a mini-open technique using 2 anchors is biomechanically comparable to interference screw fixation for biceps tendon tenodesis.
abstract_id: PUBMED:33345223
All-suture anchor and unicortical button show comparable biomechanical properties for onlay subpectoral biceps tenodesis. Hypothesis: The purpose of this study was to biomechanically evaluate onlay subpectoral long head of the biceps (LHB) tenodesis with all-suture anchors and unicortical buttons in cadaveric specimens.
Methods: After evaluation of bone mineral density, 18 fresh-frozen, unpaired human cadaveric shoulders were randomly assigned to 2 groups: One group received an onlay subpectoral LHB tenodesis with 1 all-suture anchor, whereas the other group received a tenodesis with 1 unicortical button. The specimens were mounted in a servo-hydraulic material testing system. Tendons were initially loaded from 5 N to 100 N for 5000 cycles at 1 Hz. Displacement of the repair constructs was observed with optical tracking. After cyclic loading, each specimen was loaded to failure at a rate of 1 mm/s.
Results: The mean displacement after cyclic loading was 6.77 ± 3.15 mm in the all-suture anchor group and 8.41 ± 3.17 mm in the unicortical button group (P = not significant). The mean load to failure was 278.05 ± 38.77 N for all-suture anchor repairs and 291.36 ± 49.69 N for unicortical button repairs (P = not significant). The most common mode of failure in both groups was LHB tendon tearing. There were no significant differences between the 2 groups regarding specimen age (58.33 ± 4.37 years vs. 58.78 ± 5.33 years) and bone mineral density (0.50 ± 0.17 g/cm2 vs. 0.44 ± 0.19 g/cm2).
Conclusion: All-suture anchors and unicortical buttons are biomechanically equivalent in displacement and load-to-failure testing for LHB tenodesis. All-suture anchors can be considered a validated alternative for onlay subpectoral LHB tenodesis.
Answer: The question of whether to use interference screw or suture anchor fixation for open subpectoral biceps tenodesis is addressed in several studies, with varying results.
One study found that bioabsorbable interference screw fixation has superior biomechanical properties compared to suture anchor fixation for biceps tenodesis (PUBMED:18793424). However, the same study concluded that there was no statistically significant difference in clinical outcomes, such as pain, ASES score, and modified Constant score, between the two fixation techniques. This suggests that while interference screws may offer biomechanical advantages, these do not necessarily translate into clinical benefits.
Another study compared the biomechanical properties of subpectoral biceps tenodesis using dual suture anchor versus interference screw fixation and found that the interference screw was significantly stiffer than the suture anchor construct. However, ultimate failure loads were similar between constructs, indicating that neither construct was superior with regards to stress (PUBMED:23415819).
A systematic review and meta-regression found that the use of interference screws was associated with a greater ultimate failure load compared to suture anchors, and the use of more sutures was associated with increased strength of suture anchor constructs. However, there was no significant difference in ultimate failure load between sub- and suprapectoral repairs when controlling for potential confounders (PUBMED:31585053).
Other studies have shown that suture anchor fixation can be comparable to interference screw fixation in terms of biomechanical properties. For instance, one study demonstrated that a two-suture-anchor technique with closure of the transverse ligament provided a higher load to failure than interference screw fixation (PUBMED:21717988). Additionally, all-suture anchors have been shown to provide ultimate failure load and stiffness equivalent to interference screws for subpectoral biceps tenodesis (PUBMED:27039966), and to be biomechanically equivalent to unicortical buttons in displacement and load-to-failure testing for onlay subpectoral tenodesis (PUBMED:33345223).
In summary, while interference screws may offer some biomechanical advantages in terms of stiffness and ultimate failure load, clinical outcomes do not necessarily differ significantly between the two fixation techniques. Suture anchor fixation, particularly when using multiple sutures or all-suture anchors, can provide comparable biomechanical strength to interference screw fixation. Therefore, the choice between interference screw and suture anchor fixation for open subpectoral biceps tenodesis may not be critical, as both can be effective when properly applied. |
Instruction: Gastric juice nitrite and vitamin C in patients with gastric cancer and atrophic gastritis: is low acidity solely responsible for cancer risk?
Abstracts:
abstract_id: PUBMED:12923371
Gastric juice nitrite and vitamin C in patients with gastric cancer and atrophic gastritis: is low acidity solely responsible for cancer risk? Background: N-nitroso compounds are carcinogens formed from nitrite, a process that is inhibited by vitamin C in gastric juice. Helicobacter pylori infection has been reported to increase nitrite and decrease vitamin C in gastric juice. Therefore, susceptibility to gastric cancer in H. pylori-infected patients may be derived from increased N-nitroso compounds in gastric juice. However, most H. pylori-infected patients do not develop gastric cancer.
Objective: To investigate additional factors that may affect susceptibility to gastric cancer, we compared nitrite and vitamin C levels in gastric juice from H. pylori-infected patients with and without gastric cancer.
Methods: Serum and gastric juice were obtained from 95 patients undergoing diagnostic endoscopy, including those with normal findings, duodenal ulcer, gastric ulcer, atrophic gastritis and gastric cancer. Serum was analysed for H. pylori antibody, nitrate and nitrite, gastrin and pepsinogens; gastric juice was analysed for pH, nitrite and vitamin C.
Results: pH and nitrite levels were increased and vitamin C levels decreased in the gastric juice of patients with atrophic gastritis and gastric cancer compared with other patients. However, in patients with a similar gastric acidity (pH 5-8), nitrite concentrations in the gastric juice were significantly higher and vitamin C levels significantly lower in patients with gastric cancer than in those with atrophic gastritis.
Conclusion: Although hypochlorhydria increases intraluminal nitrite and decreases intraluminal vitamin C, which increases the intraluminal formation of N-nitroso compounds, our results indicate that patients with gastric cancer may have additional factors that emphasize these changes.
abstract_id: PUBMED:32899442
Pathways of Gastric Carcinogenesis, Helicobacter pylori Virulence and Interactions with Antioxidant Systems, Vitamin C and Phytochemicals. Helicobacter pylori is a Group 1 carcinogen which causes chronic atrophic gastritis, gastric intestinal metaplasia, dysplasia and adenocarcinoma. The mechanisms by which H. pylori interacts with other risk and protective factors, particularly vitamin C, in gastric carcinogenesis are complex. Gastric carcinogenesis includes metabolic, environmental, epigenetic, genomic, infective, inflammatory and oncogenic pathways. The molecular classification of gastric cancer subtypes has revolutionized the understanding of gastric carcinogenesis. This includes the tumour microenvironment, germline mutations, and the role of Helicobacter pylori bacteria, Epstein-Barr virus and epigenetics in somatic mutations. There is evidence that ascorbic acid, phytochemicals and endogenous antioxidant systems can modify the risk of gastric cancer. Gastric juice ascorbate levels depend on dietary intake of ascorbic acid but can also be decreased by H. pylori infection, H. pylori CagA secretion, tobacco smoking, achlorhydria and chronic atrophic gastritis. Ascorbic acid may be protective against gastric cancer by its antioxidant effect in gastric cytoprotection, regenerating active vitamin E and glutathione, inhibiting endogenous N-nitrosation, reducing toxic effects of ingested nitrosodimethylamines and heterocyclic amines, and preventing H. pylori infection. The effectiveness of such cytoprotection is related to H. pylori strain virulence, particularly CagA expression. The role of vitamin C in epigenetic reprogramming in gastric cancer is still evolving. Other factors in conjunction with vitamin C also play a role in gastric carcinogenesis. Eradication of H. pylori may lead to recovery of vitamin C secretion by gastric epithelium and enable regression of premalignant gastric lesions, thereby interrupting the Correa cascade of gastric carcinogenesis.
abstract_id: PUBMED:3169097
CA 19-9 determination in gastric juice: role in identifying gastric cancer and high-risk patients. Gastric juice CA 19-9 levels were determined in 23 patients affected by gastric cancer, in 57 patients affected by chronic atrophic gastritis of different severities, and in 55 'healthy' controls, all undergoing endoscopy for upper gastrointestinal tract symptoms. Increased CA 19-9 levels were documented in chronic atrophic gastritis patients as well as in gastric cancer patients, the difference with respect to controls being statistically significant. However, there was considerable overlap between different groups. In particular, gastric cancer patients had CA 19-9 levels similar to those detected in moderate and severe chronic atrophic gastritis. CA 19-9 correlated with gastric juice pH and CEA concentration. Its values were not influenced by the patients' age or sex. In our opinion CA 19-9 gastric juice determination, although not useful in singling out patients harboring gastric neoplasia, may be used to identify patients 'at risk' for gastric cancer, who might then be referred for more detailed investigation.
abstract_id: PUBMED:6698440
Relationship between histology and gastric juice pH and nitrite in the stomach after operation for duodenal ulcer. One hundred patients who had undergone operation for duodenal ulcer (68 vagotomy and gastroenterostomy; seven vagotomy and pyloroplasty; 22 gastrectomy and three gastroenterostomy) 10 or more years previously each underwent endoscopy. Biopsies were taken and gastric juice aspirated for measurement of pH and nitrite concentration. Patients were divided into five histological grades; chronic superficial gastritis (+/- minimal atrophic gastritis) (35), atrophic gastritis/intestinal metaplasia (30), mild dysplasia (21), moderate/severe dysplasia (13) and carcinoma (one). A wide spectrum of pH values was found with 35 patients having a fasting intragastric pH below 4.0 and 65 above 4.0. A strong relationship was found between histological grade and pH. Patients with chronic superficial gastritis had a fasting intragastric pH below 4.0 more frequently than those with moderate/severe dysplasia (p less than 0.001). Gastric juice nitrite concentrations were higher in the moderate/severe dysplasia group than in the chronic superficial gastritis group (p = 0.02). The strong correlation between pH and nitrite concentration, previously documented, was confirmed. The implications of these findings in the pathogenesis of carcinogenesis in the postoperative stomach are discussed.
abstract_id: PUBMED:8770466
Nitrite, N-nitroso compounds, and other analytes in physiological fluids in relation to precancerous gastric lesions. Levels of gastric juice nitrite, several urinary N-nitroso compounds, and other analytes were examined among nearly 600 residents in an area of Shandong, China, where precancerous gastric lesions are common and rates of stomach cancer are among the world's highest. Gastric juice nitrite levels were considerably higher among those with gastric juice pH values above 2.4 versus below 2.4. Nitrite was detected more often and at higher levels among persons with later stage gastric lesions, especially when gastric pH was high. Of those with intestinal metaplasia, 17.5% had detectable levels of gastric nitrite, while this analyte was detected in only 7.2% of those with less advanced lesions. Relative to those with undetectable nitrite, the odds of intestinal metaplasia increased from 1.5 (95% confidence interval = 0.6-4.1) to 4.1 (95% confidence interval = 1.8-9.3) among those with low and high nitrite concentrations, respectively. Urinary acetaldehyde and formaldehyde levels also tended to be higher among those with more advanced pathology, particularly dysplasia. However, urinary excretion levels of total N-nitroso compounds and several nitrosamino acids differed little among those with chronic atrophic gastritis and intestinal metaplasia and dysplasia, consistent with findings from recent studies in the United Kingdom, France, and Colombia. The data from this high-risk population suggest that elevated levels of gastric nitrite, especially in a high pH environment, are associated with advanced precancerous gastric lesions, although specific N-nitroso compounds were not implicated.
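The odds ratios quoted above (for example, 4.1 with a 95% confidence interval of 1.8-9.3) come from standard 2×2-table arithmetic on lesion status by nitrite level. A minimal sketch of that calculation, using invented counts rather than the Shandong data:

```python
# Odds ratio and Woolf 95% CI from a 2x2 table -- the machinery behind
# estimates such as 4.1 (95% CI 1.8-9.3). Counts below are invented.
import math

a, b = 21, 99    # advanced lesion: high nitrite, undetectable nitrite
c, d = 30, 450   # no advanced lesion: high nitrite, undetectable nitrite

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # Woolf's method
ci_lo = odds_ratio * math.exp(-1.96 * se_log_or)
ci_hi = odds_ratio * math.exp(1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f} (95% CI {ci_lo:.1f}-{ci_hi:.1f})")
```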
abstract_id: PUBMED:21608221
Effect of Huazhuo Jiedu Recipe on gastric juice compositions and tumor markers in patients with chronic atrophic gastritic precancerosis. Objective: To observe the clinical efficacy of Huazhuo Jiedu Recipe (HJR) on chronic atrophic gastritic precancerosis (CAGP), and its effect on contents of lactic acid, total acid, free acid, and nitrite in the gastric juice, as well as tumor markers in gastric juice and blood.
Methods: Two hundred and twenty-nine patients with CAGP were randomly assigned to two groups, the 119 patients in the treated group orally took HJR and the 110 patients in the control group orally took Weifuchun Tablet. The therapeutic course for all was three months, two courses in total. The therapeutic efficacy, changes of gastric acid contents before and after treatment were observed, and the tumor markers in the gastric juice and blood were detected using electrochemical luminescence immunoassay.
Results: The pathological effective rate was 83.2% (99/119) in the treated group and 60.9% (67/110) in the control group, showing a significant difference between the two groups (P < 0.05). The total acids and free acids in the gastric juice were significantly improved, and contents of lactic acid and nitrite were significantly lowered in the two groups. Contents of carcinoembryonic antigen (CEA), carbohydrate antigen 19-9 (CA19-9), carbohydrate antigen 72-4 (CA72-4), and carbohydrate antigen 125 (CA125) in both the gastric juice and serum were significantly lowered after treatment in the treated group (P < 0.05). Compared with the control group, the therapeutic effect was more pronounced in the treated group (P < 0.05).
Conclusions: HJR could stimulate gastric mucosal secretion and enhance the contents of total acids and free acids. It could prevent further progression of CAGP by decreasing the contents of lactic acid and nitrite in the gastric juice and by lowering the contents of CEA, CA19-9, CA72-4, and CA125 in the gastric juice and serum.
abstract_id: PUBMED:3287075
Chronic atrophic gastritis and risk of N-nitroso compounds carcinogenesis. Chronic atrophic gastritis is considered a precancerous condition for carcinoma of the stomach. To evaluate the correlation between progressive alterations in the mucosa and gastric juice microenvironmental factors thought to be involved in N-nitroso compound carcinogenesis, detailed analyses of biochemical and microbiological parameters such as pH, total viable counts (TVC), nitrate reductase-positive bacterial counts (NRPBC), and nitrite (NO2-) and thiocyanate (SCN-) levels were carried out on 56 fasting gastric juice samples obtained at endoscopy from 28 patients with chronic atrophic gastritis (CAG), 14 with gastric cancers (GC), and 14 normal controls (NC). The mean values of pH, nitrite, TVC, and NRPBC were significantly lower in the juices of NC than in those of CAG and GC patients. Furthermore, the mean levels of the same parameters were higher in GC than in CAG juices. No significant difference was found among the three groups for SCN- level, which was principally influenced by smoking habit. The 28 patients with CAG were subdivided into two groups (Group A = diffuse chronic atrophic gastritis, DCAG; Group B = multifocal chronic atrophic gastritis, MCAG) according to whether the gastric corpus and fundus, besides the antrum, were involved by the process of mucosal atrophy. The mean levels of pH, nitrite, TVC, and NRPBC were significantly higher in MCAG than in normal controls but significantly lower than in DCAG and cancers; between DCAG and cancers no difference was found for the same variables. The percentage of contaminated juices was higher for DCAG and cancers than for MCAG, but no difference was found between DCAG and neoplastic stomachs. The results of this study suggest that DCAG could be considered the type of chronic atrophic gastritis most exposed to the risk of N-nitroso compound carcinogenesis.
abstract_id: PUBMED:648807
Studies on the CEA-like substance in gastric juice. Concentration and heterogeneity of CEA-like substance in gastric juice were studied using radioimmunoassay. A statistically significant increase of CEA-like substance in gastric juice was found in advanced atrophic gastritis (P < 0.01), early gastric cancer (P < 0.05) and advanced gastric cancer (P < 0.01) as compared with normal subjects. In cases of atrophic gastritis with a high degree of intestinal metaplasia, concentrations above 300 µg/dl were observed. The results indicate that increased concentrations of CEA-like substance in gastric secretions may strongly suggest the presence of a marked intestinal metaplasia and/or cancerous changes of the gastric mucosa, including early cancer. The distribution of CEA activity in gel filtration fractions of gastric juice was compared using kits from two different radioimmunoassay systems. The patterns of CEA activities differed between the two kits used, but the main peaks were located in the fractions with a molecular weight of 20 × 10⁴ daltons, corresponding to that of serum CEA. It is considered, however, that the CEA-like substance in gastric juice specimens may be more or less heterogeneous whichever method is used.
abstract_id: PUBMED:26280398
Using gastric juice lncRNA-ABHD11-AS1 as a novel type of biomarker in the screening of gastric cancer. Long noncoding RNAs (lncRNAs) play vital roles in tumorigenesis. However, the diagnostic values of most lncRNAs are largely unknown. To investigate whether gastric juice lncRNA-ABHD11-AS1 can be a potential biomarker in the screening of gastric cancer, 173 tissue samples and 130 gastric juice samples from benign lesions, gastric dysplasia, gastric premalignant lesions, and gastric cancer were collected. ABHD11-AS1 levels were detected by reverse transcription-polymerase chain reaction. Then, the relationships between ABHD11-AS1 levels and clinicopathological factors of patients with gastric cancer were investigated. The results showed that ABHD11-AS1 levels in gastric cancer tissues were significantly higher than those in other tissues. Its levels in gastric juice from gastric cancer patients were not only significantly higher than those from cases of normal mucosa or minimal gastritis, atrophic gastritis, and gastric ulcers but also associated with gender, tumor size, tumor stage, Lauren type, and blood carcinoembryonic antigen (CEA) levels. More importantly, when using gastric juice ABHD11-AS1 as a marker, the positive detection rate for early gastric cancer patients reached 71.4%. Thanks to the special origin of gastric juice, these results indicate that gastric juice ABHD11-AS1 may be a potential biomarker in the screening of gastric cancer.
abstract_id: PUBMED:3451930
CEA levels in gastric juice in precancerous conditions and cancer. First described in 1965 as a specific antigen for cancer of the colon, CEA is now considered to be an antigen associated with many types of malignant neoplasia, although the CEA test's role in clinical routine has yet to be clearly defined. In the present study, CEA levels in gastric juice were measured in subjects with gastric carcinoma (n = 25) and with benign gastric lesions (n = 171). CEA was significantly (p < 0.05) higher in patients with gastric carcinoma (GC) than in subjects with benign gastric lesions, other than chronic atrophic gastritis (CAG) associated with intestinal metaplasia (IM). In this latter condition, CEA levels were similar to those in patients with GC. These results suggest that the assay of CEA in gastric juice could be included in the diagnostic program for gastric cancer and its precursors, with the aim of assessing its utility as a risk indicator in the management of precancerous conditions and lesions.
Answer: The relationship between gastric juice nitrite, vitamin C, and the risk of gastric cancer is multifaceted and cannot be attributed solely to low acidity. While hypochlorhydria (low acidity) does increase intraluminal nitrite and decrease intraluminal vitamin C, which in turn increases the formation of carcinogenic N-nitroso compounds, there are additional factors at play in patients with gastric cancer. For instance, even when patients with gastric cancer and those with atrophic gastritis had similar gastric acidity (pH 5-8), the nitrite concentrations in the gastric juice were significantly higher and vitamin C levels significantly lower in patients with gastric cancer (PUBMED:12923371). This suggests that factors beyond acidity are contributing to the increased cancer risk.
Helicobacter pylori infection is a known risk factor for gastric cancer and is associated with increased nitrite and decreased vitamin C in gastric juice. However, the mechanisms by which H. pylori interacts with other risk factors, including vitamin C, in gastric carcinogenesis are complex and involve metabolic, environmental, epigenetic, genomic, infective, inflammatory, and oncogenic pathways. Ascorbic acid (vitamin C) may be protective against gastric cancer due to its antioxidant effect, ability to regenerate active vitamin E and glutathione, inhibition of endogenous N-nitrosation, and reduction of toxic effects of ingested nitrosamines and heterocyclic amines. The effectiveness of this protection is related to the virulence of the H. pylori strain, particularly CagA expression (PUBMED:32899442).
In summary, while low acidity in the stomach contributes to an environment that may increase the risk of gastric cancer, it is not the sole factor. The interplay between H. pylori infection, vitamin C levels, and other molecular and environmental factors also plays a significant role in the carcinogenic process. |
Instruction: The 'obesity paradox': a parsimonious explanation for relations among obesity, mortality rate and aging?
Abstracts:
abstract_id: PUBMED:20440298
The 'obesity paradox': a parsimonious explanation for relations among obesity, mortality rate and aging? Objective: Current clinical guidelines and public health statements generically prescribe body mass index (BMI; kg m⁻²) categories regardless of the individual's situation (age, risk for diseases, and so on). However, regarding BMI and mortality rate, two well-established observations are (1) there is a U-shaped (that is, concave) association: people with intermediate BMIs tend to outlive people with higher or lower BMIs; and (2) the nadirs of these curves tend to increase monotonically with age. Multiple hypotheses have been advanced to explain either of these two observations. In this study, we introduce a new hypothesis that may explain both phenomena, by drawing on the so-called obesity paradox: the unexpected finding that obesity is often associated with increased survival time among people who have some serious injury or illness, in spite of being associated with reduced survival time among the general population.
Results: We establish that the obesity paradox offers one potential explanation for two curious but consistently observed phenomena in the obesity field.
Conclusion: Further research is needed to determine the extent to which the obesity paradox actually explains these phenomena, but if our hypothesis proves true, the common practice of advising overweight patients to lower their BMI should currently be applied with caution. In addition, the statistical modeling technique used here could be applied in other areas involving survival analysis of disjoint subgroups, to explain possible interacting causal associations and to inform clinical practice.
abstract_id: PUBMED:31865598
Obesity paradox and aging. Background: In association with the rapid lengthening of life expectancy and the ever-rising prevalence of obesity, many studies have explored, in the elderly, the phenomenon usually defined as the obesity paradox.
Objective And Methods: This article is a narrative overview of seventy-two papers (1999-2019) that investigated the obesity paradox during the aging process. Twenty-nine documents are examined in more detail.
Results: The majority of studies suggesting the existence of an obesity paradox have evaluated only BMI as an index of obesity. Some aspects are often not assessed or are underestimated, in particular body composition, visceral adiposity, sarcopenic obesity, and cardiorespiratory fitness. Many studies suggest that central fat and relative loss of fat-free mass may become relatively more important than BMI in determining the health risk associated with obesity in older ages.
Conclusion: Inaccurate assessments may lead to a systematic underestimation of the impact of obesity on morbidity and premature mortality and, consequently, to clinical behaviors that are not respectful of the health of elderly patients. Knowledge of the changes in body composition and fat distribution will help to better understand the relationship between obesity, morbidity, and mortality in the elderly.
Level Of Evidence: Level V, narrative overview.
abstract_id: PUBMED:30202394
The Obesity Paradox in Type 2 Diabetes and Mortality. The obesity paradox for survival among individuals with type 2 diabetes has been observed in some but not all studies. Conflicting evidence for the role of overweight and obesity in all-cause mortality may largely be a result of differences in study populations, epidemiological methods, and statistical analysis. For example, analyses among populations with long-term prevalent diabetes and the accrual of other chronic health conditions are more likely to observe that the sickest participants have lower body weights, and therefore, relative to normal weight, overweight and even obesity appear advantageous. Other mortality risk factors, such as smoking, also confound the relationship between body weight and survival, but this behavior varies widely in intensity and duration, making it difficult to assess and effectively adjust for in statistical models. Disentangling the potential sources of bias is imperative in understanding the relevance of excess body weight to mortality in diabetes. In this review, we summarize methodological considerations underlying the observed obesity paradox. Based on the available evidence, we conclude that the obesity paradox is likely an artifact of biases, and once these are accounted for, it is evident that compared with normal body weight, excess body weight is associated with a greater mortality risk.
abstract_id: PUBMED:29990534
Obesity Paradox in Aging: From Prevalence to Pathophysiology. Recent advances in medical technology and health care have greatly improved the management of chronic diseases and prolonged the human lifespan. Unfortunately, increased lifespan and the aging population pose a major challenge in the form of an ever-rising prevalence of chronic diseases, in particular cardiometabolic stress associated with the pandemic of obesity in our modern society. Although overweight and obesity are associated with incident cardiovascular diseases (CVD), including heart failure (HF), they paradoxically lead to a more favorable prognosis in patients with chronic HF, a phenomenon commonly defined as the "obesity paradox". Numerous population-based and clinical studies have suggested possible explanations, such as better metabolic reserve, smoking, and disease-associated weight loss, for the obesity paradox. Recent evidence has noted a shift in the obesity paradox with aging. While some studies have reported a more pronounced "obesity paradox" in older patients, others have seen diminished cardiac benefits of overweight and obesity in elderly patients with CVD. These findings suggest a complex relationship among aging, metabolism, and HF severity/chronicity, which may explain the shift in the obesity paradox in the elderly. Aging negatively affects body metabolism and cardiac function, although its precise impact on the obesity paradox remains elusive. To develop new strategies for cardiovascular health in the elderly, it is imperative to understand the precise role of aging in obesity-related CVD.
abstract_id: PUBMED:24525165
The obesity paradox: understanding the effect of obesity on mortality among individuals with cardiovascular disease. Objective: To discuss possible explanations for the obesity paradox and explore whether the paradox can be attributed to a form of selection bias known as collider stratification bias.
Method: The paper is divided into three parts. First, possible explanations for the obesity paradox are reviewed. Second, a simulated example is provided to describe collider stratification bias and how it could generate the obesity paradox. Finally, an example is provided using data from 17,636 participants in the US National and Nutrition Examination Survey (NHANES III). Generalized linear models were fit to assess the effect of obesity on mortality both in the general population and among individuals with diagnosed cardiovascular disease (CVD). Additionally, results from a bias analysis are presented.
Results: In the general population, the adjusted risk ratio relating obesity and all-cause mortality was 1.24 (95% CI 1.11, 1.39). Adjusted risk ratios comparing obese and non-obese among individuals with and without CVD were 0.79 (95% CI 0.68, 0.91) and 1.30 (95% CI=1.12, 1.50), indicating that obesity has a protective association among individuals with CVD.
Conclusion: Results demonstrate that collider stratification bias is one plausible explanation for the obesity paradox. After conditioning on CVD status in the design or analysis, obesity can appear protective among individuals with CVD.
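Because this abstract's argument rests on a simulated example, a compact simulation is worth spelling out. The sketch below assumes a hypothetical data-generating process (not the NHANES III model): obesity and an unmeasured factor U both raise the risk of CVD, the collider, and U also raises mortality; conditioning on CVD then makes a truly harmful exposure look protective.

```python
# Collider stratification bias: a truly harmful exposure appears protective
# once the analysis is restricted to CVD patients. All effect sizes are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

obese = rng.binomial(1, 0.3, n)             # exposure
u = rng.binomial(1, 0.3, n)                 # unmeasured cause of CVD and death

cvd = rng.binomial(1, 0.05 + 0.10 * obese + 0.40 * u)    # collider
death = rng.binomial(1, 0.05 + 0.02 * obese + 0.30 * u)  # obesity mildly harmful

def risk_ratio(mask):
    return death[mask & (obese == 1)].mean() / death[mask & (obese == 0)].mean()

print("RR, whole population:  %.2f" % risk_ratio(np.ones(n, dtype=bool)))  # >1
print("RR, CVD patients only: %.2f" % risk_ratio(cvd == 1))                # <1
```

Restricting to CVD patients makes the obese and non-obese groups differ systematically in U, which is what flips the direction of the association.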
abstract_id: PUBMED:38093956
The obesity paradox in intracerebral hemorrhage: a systematic review and meta-analysis. Background: Intracerebral hemorrhage (ICH) has a mortality rate which can reach 30-40%. Compared with other diseases, obesity is often associated with lower mortality; this is referred to as the 'obesity paradox'. Herein, we aimed to summarize studies of the relation between obesity and mortality after ICH.
Method: For this systematic review and meta-analysis (PROSPERO registry CRD42023426835), we conducted searches for relevant articles in both PubMed and Embase. Non-English language literature, irrelevant literature, and non-human trials were excluded. All included publications were then qualitatively described and summarized. Articles for which quantitative analyses were possible were evaluated using Cochrane's Review Manager.
Results: Ten studies were included. Qualitative analysis revealed that each of the 10 studies showed varying degrees of a protective effect of obesity, which was statistically significant in 8 of them. Six studies were included in the quantitative meta-analysis, which showed that obesity was significantly associated with lower short-term (0.69 [0.67, 0.73], p<0.00001) and long-term (0.62 [0.53, 0.73], p<0.00001) mortality. (Data are presented as OR [95% CI], p.)
Conclusion: Obesity is likely associated with lower post-ICH mortality, reflecting the obesity paradox in this disease. These findings support the need for large-scale trials using standardized obesity classification methods.
Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023426835, identifier CRD42023426835.
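For readers unfamiliar with how pooled estimates such as 0.69 [0.67, 0.73] are produced, the core machinery is inverse-variance weighting of log odds ratios. A minimal fixed-effect sketch with made-up study results (not the studies in this review):

```python
# Fixed-effect (inverse-variance) pooling of odds ratios on the log scale.
# The three ORs and CIs below are invented for illustration.
import math

studies = [(0.65, 0.55, 0.77), (0.72, 0.66, 0.79), (0.70, 0.60, 0.82)]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI width
    w = 1.0 / se ** 2                                # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * math.log(or_))

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci_lo = math.exp(pooled_log - 1.96 * pooled_se)
ci_hi = math.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled OR = {math.exp(pooled_log):.2f} [{ci_lo:.2f}, {ci_hi:.2f}]")
```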
abstract_id: PUBMED:34926600
Absence of Obesity Paradox in All-Cause Mortality Among Chinese Patients With an Implantable Cardioverter Defibrillator: A Multicenter Cohort Study. Background: The results of studies on the obesity paradox in all-cause mortality are inconsistent in patients equipped with an implantable cardioverter-defibrillator (ICD). There is a lack of relevant studies on Chinese populations with a large sample size. This study aimed to investigate whether the obesity paradox in all-cause mortality is present among the Chinese population with an ICD. Methods: We conducted a retrospective analysis of multicenter data from the Study of Home Monitoring System Safety and Efficacy in Cardiac Implantable Electronic Device-implanted Patients (SUMMIT) registry in China. The outcome was all-cause mortality. Kaplan-Meier curves, Cox proportional hazards models, and smooth curve fitting were used to investigate the association between body mass index (BMI) and all-cause mortality. Results: After applying the inclusion and exclusion criteria, 970 patients with an ICD were enrolled. After a median follow-up of 5 years (interquartile range, 4.1-6.0 years), all-cause mortality occurred in 213 (22.0%) patients. According to the Kaplan-Meier curves and multivariate Cox proportional hazards models, BMI had no significant impact on all-cause mortality, whether as a continuous variable or as a categorical variable classified by various BMI categorization criteria. The fully adjusted smoothed curve fit showed a linear relationship between BMI and all-cause mortality (p-value of 0.14 for the non-linearity test), with no statistically significant association between BMI and all-cause mortality [per 1 kg/m² increase in BMI, hazard ratio (HR) 0.97, 95% CI 0.93-1.02, p = 0.2644]. Conclusions: The obesity paradox in all-cause mortality was absent in Chinese patients with an ICD. Prospective studies are needed to further explore this phenomenon.
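The abstract's central analysis, BMI as a continuous covariate in a Cox proportional hazards model of all-cause mortality, can be sketched with simulated data and the lifelines library; the cohort size is borrowed from the abstract, but every other number, and the null BMI effect, are illustrative assumptions.

```python
# Cox proportional hazards sketch: hazard of death vs. continuous BMI.
# Simulated data built with no real BMI effect, so exp(coef) should be ~1.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 970
bmi = rng.normal(24.0, 3.5, n)

hazard = np.full(n, 0.05)                  # constant hazard: no BMI effect
time = rng.exponential(1.0 / hazard)
event = (time <= 6.0).astype(int)          # administrative censoring at 6 years
time = np.minimum(time, 6.0)

df = pd.DataFrame({"bmi": bmi, "time": time, "event": event})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])  # HR per 1 kg/m^2 increase in BMI
```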
abstract_id: PUBMED:28966808
Aging, Metabolism, and Cancer Development: from Peto's Paradox to the Warburg Effect. Medical advances made over the last century have increased our lifespan, but age-related diseases are a fundamental health burden worldwide. Aging is therefore a major risk factor for cardiovascular disease, cancer, diabetes, obesity, and neurodegenerative diseases, all increasing in prevalence. However, huge inter-individual variations in aging and disease risk exist, which cannot be explained by chronological age, but rather physiological age decline initiated even at young age due to lifestyle. At the heart of this lies the metabolic system and how this is regulated in each individual. Metabolic turnover of food to energy leads to accumulation of co-factors, byproducts, and certain proteins, which all influence gene expression through epigenetic regulation. How these epigenetic markers accumulate over time is now being investigated as the possible link between aging and many diseases, such as cancer. The relationship between metabolism and cancer was described as early as the late 1950s by Dr. Otto Warburg, before the identification of DNA and much earlier than our knowledge of epigenetics. However, when the stepwise gene mutation theory of cancer was presented, Warburg's theories garnered little attention. Only in the last decade, with epigenetic discoveries, have Warburg's data on the metabolic shift in cancers been brought back to life. The stepwise gene mutation theory fails to explain why large animals with more cells, do not have a greater cancer incidence than humans, known as Peto's paradox. The resurgence of research into the Warburg effect has given us insight to what may explain Peto's paradox. In this review, we discuss these connections and how age-related changes in metabolism are tightly linked to cancer development, which is further affected by lifestyle choices modulating the risk of aging and cancer through epigenetic control.
abstract_id: PUBMED:27075676
Collider Bias Is Only a Partial Explanation for the Obesity Paradox. Background: "Obesity paradox" refers to an association between obesity and reduced mortality (contrary to an expected increased mortality). A common explanation is collider stratification bias: unmeasured confounding induced by selection bias. Here, we test this supposition through a realistic generative model.
Methods: We quantify the collider stratification bias in a selected population using counterfactual causal analysis. We illustrate the bias for a range of scenarios, describing associations between exposure (obesity), outcome (mortality), mediator (in this example, diabetes) and an unmeasured confounder.
Results: Collider stratification leads to biased estimation of the causal effect of exposure on outcome. However, the bias is small relative to the causal relationships between the variables.
Conclusions: Collider bias can be a partial explanation of the obesity paradox, but it is unlikely to be the main explanation for reversing the direction of a true causal association. Alternative explanations of the obesity paradox should be explored. See Video Abstract at http://links.lww.com/EDE/B51.
abstract_id: PUBMED:22475846
Obesity, health status, and 7-year mortality in percutaneous coronary intervention: in search of an explanation for the obesity paradox. Background: Obesity is a growing health problem and is associated with adverse outcomes in coronary artery disease (CAD). However, recent studies have shown better survival in cardiovascular patients with overweight or obesity, which has been referred to as the "obesity paradox". As there is no clear understanding of the phenomenon, we examined the association between body mass index (BMI) and all-cause mortality in patients treated with percutaneous coronary intervention (PCI) at 7-year follow-up, and the potential role of health status in explaining the obesity paradox.
Methods: Consecutive PCI patients (72.5% men; mean age 62.0 ± 11.2 years, range 27-90 years) from the Rapamycin-Eluting Stent Evaluated at Rotterdam Cardiology Hospital (RESEARCH) registry completed the 36-item short-form health survey (SF-36) to assess health status at baseline. Patients were classified into a normal weight, overweight or obesity group.
Results: The prevalence of normal weight was 34.7% (354/1019), overweight was seen in 45.9% (468/1019) of patients, and 19.3% (197/1019) were obese. After a median follow-up of 7.0 ± 1.7 years, 163 deaths (16.0%) from any cause were recorded. Cumulative hazard functions differed significantly for the obese and overweight groups when compared to the normal weight group (log-rank χ² = 6.59, p < 0.05). In multivariable analysis, overweight, but not obesity, remained associated with a lower risk for all-cause mortality (HR=0.60, 95%CI [0.42-0.86], p=0.005). Additionally, after adding the 8 health status SF-36 domains to the multivariate model, the association between overweight and mortality remained unchanged.
Conclusion: In our study population overweight, but not obesity, was associated with a lower risk for 7-year mortality in PCI patients. Health status as measured with the SF-36 did not seem to play a role in explaining the obesity paradox.
Answer: The 'obesity paradox' refers to the counterintuitive observation that, in certain populations, overweight and obese individuals have better survival outcomes compared to those with normal or low body weight, particularly in the context of aging and certain diseases. This paradox is seen in the relationship between obesity, mortality rate, and aging, where older individuals with a higher body mass index (BMI) sometimes exhibit lower mortality rates compared to their leaner counterparts (PUBMED:20440298).
Several studies have attempted to explain this phenomenon. One hypothesis suggests that the obesity paradox may be due to the increased metabolic reserves in obese individuals, which could provide an advantage during illness or injury (PUBMED:20440298). However, this explanation is not universally accepted, and further research is needed to determine the extent to which the obesity paradox is a true explanation for the observed phenomena.
The majority of studies suggesting the existence of an obesity paradox have evaluated BMI as the sole index of obesity. However, other factors such as body composition, visceral adiposity, sarcopenic obesity, and cardio fitness may be more important than BMI in determining health risks associated with obesity in older ages (PUBMED:31865598). These factors could lead to an underestimation of the impact of obesity on morbidity and mortality, potentially influencing clinical behaviors and health outcomes for elderly patients.
The obesity paradox has also been observed in specific conditions such as type 2 diabetes, cardiovascular disease, and intracerebral hemorrhage. However, the evidence is conflicting, and the paradox may be an artifact of biases such as collider stratification bias or confounding by factors like smoking and disease-associated weight loss (PUBMED:30202394, PUBMED:24525165, PUBMED:38093956).
Interestingly, the obesity paradox does not appear to be present in all populations. For example, a study among Chinese patients with an implantable cardioverter-defibrillator (ICD) found no significant impact of BMI on all-cause mortality, suggesting the absence of an obesity paradox in this group (PUBMED:34926600).
In conclusion, while the obesity paradox offers a potential explanation for the relationship among obesity, mortality rate, and aging, it is a complex issue that requires careful consideration of various factors beyond BMI, including body composition, fat distribution, and lifestyle. The paradox may be influenced by methodological issues and biases, and its relevance varies across different populations and diseases (PUBMED:20440298, PUBMED:31865598, PUBMED:30202394, PUBMED:24525165, PUBMED:38093956, PUBMED:34926600). |
Instruction: Are circulating autoantibodies to ABCC3 transporter a potential biomarker for lung cancer?
Abstracts:
abstract_id: PUBMED:22699933
Are circulating autoantibodies to ABCC3 transporter a potential biomarker for lung cancer? Purpose: The present study was undertaken to measure circulating autoantibodies to the ATP-binding cassette C3 (ABCC3) transporter in order to confirm whether anti-ABCC3 antibodies could serve as a biomarker for early diagnosis of lung cancer.
Methods: This study recruited 275 patients (178 males and 97 females) with non-small cell lung cancer (either squamous carcinoma or adenocarcinoma) and 226 control subjects (134 males and 92 females) well matched in age and smoking history. Anti-ABCC3 IgA and IgG were determined using an enzyme-linked immunosorbent assay (ELISA) approach developed in-house with human leukocyte antigen class II (HLA-II)-restricted antigens.
Results: The Mann-Whitney U test showed that the IgG antibody level was significantly higher in female patients with adenocarcinoma than in female controls (Z = -4.34, P < 0.001) and that the IgA antibody level was significantly higher in male patients with squamous carcinoma than in male controls (Z = -3.12, P = 0.002). Pearson's chi-square (χ²) test showed that female patients with adenocarcinoma had a significantly higher positive rate for IgG autoantibody than female controls (χ² = 8.73, P = 0.003). The ELISA sensitivity at a specificity of >95% was 18.1% for the IgG assay in female patients and 18.0% for the IgA assay in male patients. The inter-assay deviation was 10.6% for the IgG assay and 14.5% for the IgA assay.
Conclusions: Circulating autoantibodies to ABCC3 transporter may be a potential biomarker that can be added to a panel of existing biomarkers for early diagnosis and prognosis of lung cancer although the gender differences should be taken into account.
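The sensitivities reported at a specificity of >95% follow from setting the assay cutoff near the 95th percentile of control readings and then counting patients above it. A minimal sketch with invented optical-density values (only the group sizes are taken from the abstract):

```python
# Choosing an ELISA cutoff for >95% specificity, then reading off sensitivity.
# The optical-density distributions below are invented.
import numpy as np

rng = np.random.default_rng(2)
controls = rng.normal(0.30, 0.10, 226)   # hypothetical control readings
patients = rng.normal(0.42, 0.18, 275)   # hypothetical patient readings

cutoff = np.quantile(controls, 0.95)     # ~95% of controls fall at or below it
specificity = np.mean(controls <= cutoff)
sensitivity = np.mean(patients > cutoff)
print(f"cutoff={cutoff:.3f}  specificity={specificity:.1%}  sensitivity={sensitivity:.1%}")
```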
abstract_id: PUBMED:20238115
Down-regulation of lipid transporter ABCA1 increases the cytotoxicity of nitidine. Purpose: Nitidine (NTD) cytotoxicity is highly specific for A549 human lung adenocarcinoma cells. We hypothesized that this cytotoxicity involved the accumulation of NTD in intracellular organelles. However, there have been no reports of NTD-transporting factors. In this study, we screened for an NTD transporter and evaluated its association with NTD cytotoxicity.
Methods: Gene expression analyses were done for A549 and human fetal lung normal diploid fibroblast (WI-38) cells. We screened ABC transporter and multidrug resistance-associated genes. Gene expression of ATP-binding cassette transporter A1 (ABCA1) was confirmed in 8 cell lines by quantitative PCR. The involvement of ABCA1 in NTD cytotoxicity was evaluated using siRNA-mediated ABCA1 gene silencing.
Results: Gene expression analysis indicated that A549 cells expressed higher levels of ABCC1, ABCC2, ABCC3, and ABCG2 and a lower level of ABCA1 compared to WI-38 cells. NTD-resistant cell lines uniformly showed higher ABCA1 expression levels. Gene silencing experiments showed that the down-regulation of ABCA1 resulted in increased sensitivity to NTD.
Conclusions: These results indicated that NTD efflux is controlled by ABCA1 activity, suggesting that ABCA1 transports molecules other than lipids. Thus, there is a possibility that ABCA1 acts as a drug resistance transporter involved in the cytotoxicity of NTD derivatives. This also suggests that the expression level of the ABCA1 gene may be an indicator of the efficacy of NTD treatment.
abstract_id: PUBMED:27590272
Associations of genetic polymorphisms of the transporters organic cation transporter 2 (OCT2), multidrug and toxin extrusion 1 (MATE1), and ATP-binding cassette subfamily C member 2 (ABCC2) with platinum-based chemotherapy response and toxicity in non-small cell lung cancer patients. Background: Platinum-based chemotherapy is the first-line treatment of non-small cell lung cancer (NSCLC); it is therefore important to discover biomarkers that can be used to predict the efficacy and toxicity of this treatment. Four important transporter genes are expressed in the kidney, including organic cation transporter 2 (OCT2), multidrug and toxin extrusion 1 (MATE1), ATP-binding cassette subfamily B member 1 (ABCB1), and ATP-binding cassette subfamily C member 2 (ABCC2), and genetic polymorphisms in these genes may alter the efficacy and adverse effects of platinum drugs. This study aimed to evaluate the association of genetic polymorphisms of these transporters with platinum-based chemotherapy response and toxicity in NSCLC patients.
Methods: A total of 403 Chinese NSCLC patients were recruited for this study. All patients were newly diagnosed with NSCLC and received at least two cycles of platinum-based chemotherapy. The tumor response and toxicity were evaluated after two cycles of treatment, and the patients' genomic DNA was extracted. Seven single-nucleotide polymorphisms in four transporter genes were selected to investigate their associations with platinum-based chemotherapy toxicity and response.
Results: OCT2 rs316019 was associated with hepatotoxicity (P = 0.026) and hematological toxicity (P = 0.039), and MATE1 rs2289669 was associated with hematological toxicity induced by platinum (P = 0.016). In addition, ABCC2 rs717620 was significantly associated with the platinum-based chemotherapy response (P = 0.031). ABCB1 polymorphisms were associated with neither response nor toxicity.
Conclusion: OCT2 rs316019, MATE1 rs2289669, and ABCC2 rs717620 might be potential clinical markers for predicting the toxicity and response of platinum-based chemotherapy in NSCLC patients. Trial registration: Chinese Clinical Trial Registry ChiCTR-RNC-12002892.
abstract_id: PUBMED:19107762
Genetic susceptibility of lung cancer associated with common variants in the 3' untranslated regions of the adenosine triphosphate-binding cassette B1 (ABCB1) and ABCC1 candidate transporter genes for carcinogen export. Background: The tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is a well defined carcinogen that can induce lung cancer. Genetic polymorphisms in its disposition pathways could modify the risk of developing lung cancer. The authors of this report previously catalogued the sequence variations of the adenosine triphosphate-binding cassette B1 (ABCB1) and ABCC1 candidate transporter genes for carcinogen export in the Chinese population and screened out common variants with potential function in their 5' flanking and 3' untranslated regions. The objective of the current study was to test the hypothesis that these common variants are associated with lung cancer risk.
Methods: The genotyping analyses for 6 common regulatory variants (reference single-nucleotide polymorphism 4728709 [rs4728709] and rs2188524 in the 5' flanking region of ABCB1 and rs3842 in its 3' untranslated region; rs3743527, rs212090, and rs212091 in the 3' untranslated region of ABCC1) was conducted in a case-control study of 500 patients with incident lung cancer and 517 cancer-free controls in a Chinese population.
Results: Compared with the wild-type adenosine/adenosine (A/A) genotype, the variant rs3842 genotype (adenosine/guanosine [A/G] + G/G) of ABCB1 was associated with a statistically significant increased risk of developing lung cancer (odds ratio [OR], 1.36; 95% confidence interval [95% CI], 1.06-1.76). Also evident was the association between cancer susceptibility and the variant rs212090 genotype (adenosine/thymidine [A/T] + T/T) of ABCC1 (OR, 1.37; 95% CI, 1.03-1.83). Haplotype-based association analysis also emphasized that 2 common haplotypes carrying the culprit alleles of the 2 single-nucleotide polymorphisms were associated with an increased risk of cancer. In addition, stratification analysis demonstrated a remarkable association of ABCB1 rs3842 with the risk of cancer manifested in women (OR, 2.57; 95% CI, 1.36-4.85), in the histologic type of adenocarcinoma (OR, 1.42; 95% CI, 1.03-1.99), and in individuals aged <60 years (OR, 1.50; 95% CI, 1.05-2.14).
Conclusions: The current study demonstrated that common polymorphisms in the 3' untranslated region of ABCB1 and ABCC1 may contribute to the etiology of lung cancer, providing further support for the hypothesis that genetic components in the metabolism and the disposition of NNK may modify the risk of lung cancer, especially in lung adenocarcinoma among women. Functional studies are warranted to elucidate whether aberrant expression and dysfunction of ABC transporters for carcinogen export may play a role in the development of lung cancer.
abstract_id: PUBMED:11781231
The ABCG2 transporter is an efficient Hoechst 33342 efflux pump and is preferentially expressed by immature human hematopoietic progenitors. A promising and increasingly exploited property of hematopoietic stem cells is their ability to efflux the fluorescent dye Hoechst 33342. The Hoechst-negative cells are isolated by fluorescence-activated cell sorting as a so-called "side population" (SP) of bone marrow. This SP from bone marrow, as well as other tissues, is reported to contain immature stem cells with considerable plasticity. Some cell lines also efflux Hoechst and generate SP profiles. Reverse transcription-polymerase chain reaction (RT-PCR) and efflux inhibition studies with the lung carcinoma cell line, A549, implicated the ABCG2 transporter as a Hoechst efflux pump. Furthermore, it is shown that transient expression of ABCG2 generates a robust SP phenotype in human embryonic kidney (HEK293) cells. The results allow the conclusion that ABCG2 is a potent Hoechst efflux pump. Semiquantitative RT-PCR was used to characterize the developmental pattern of expression of ABCG2 in hematopoiesis. It is expressed at relatively high levels in putative hematopoietic stem cells (isolated as SP, 34+/38- or 34+/KDR+ populations) and drops sharply in committed progenitors (34+/38+, 34+/33+, or 34+/10+). Expression remains low in most maturing populations, but rises again in natural killer cells and erythroblasts. Comparison of messenger RNA (mRNA) levels for the 3 major multidrug-resistant efflux pumps, MDR1, MRP1, and ABCG2, in bone marrow SP cells reveals that ABCG2 is the predominant form in these cells. These data suggest that ABCG2 contributes significantly to the generation of the SP phenotype in hematopoietic stem cells. Furthermore, the sharp down-regulation of ABCG2 at the stage of lineage commitment suggests that this gene may play an important role in the unique physiology of the pluripotent stem cell.
abstract_id: PUBMED:30890141
Genetic variation in the ATP binding cassette transporter ABCC10 is associated with neutropenia for docetaxel in Japanese lung cancer patients cohort. Background: Docetaxel is a widely used cytotoxic agent for treatments of various cancers. The ATP binding cassette (ABC) transporter / multidrug resistance protein (MRP) ABCC10/MRP7, involved in transporting taxanes, has been associated with resistance to these agents. Since genetic variation in drug transporters may affect clinical outcomes, we examined whether polymorphism of ABCC10 could affect clinical responses to docetaxel.
Methods: Using 18 NSCLC cell lines and CRISPR-based genome-edited HeLa cells, we analyzed whether genetic variants of ABCC10 (rs2125739, rs9349256) affected cytotoxicity to docetaxel. Subsequently, we analyzed genetic variants [ABCC10 (rs2125739), ABCB1 (C1236T, C3435T, G2677 T/A), ABCC2 (rs12762549), and SLCO1B3 (rs11045585)] in 69 blood samples of NSCLC patients treated with docetaxel monotherapy. Clinical outcomes were evaluated between genotype groups.
Results: In the cell lines, only one genetic variant (rs2125739) was significantly associated with docetaxel cytotoxicity, and this was confirmed in the genome-edited cell line. In the 69 NSCLC patients, there were no significant differences related to rs2125739 genotype in terms of RR, PFS, or OS. However, this SNP was associated with grade 3/4 neutropenia (T/C group 60% vs. T/T group 87%; P = 0.028). Furthermore, no patient with a T/C genotype experienced febrile neutropenia.
Conclusions: Our results indicate that genetic variation in the ABCC10 gene is associated with neutropenia for docetaxel treatment.
abstract_id: PUBMED:25690838
Drug Transporter Protein Quantification of Immortalized Human Lung Cell Lines Derived from Tracheobronchial Epithelial Cells (Calu-3 and BEAS2-B), Bronchiolar-Alveolar Cells (NCI-H292 and NCI-H441), and Alveolar Type II-like Cells (A549) by Liquid Chromatography-Tandem Mass Spectrometry. Understanding the mechanisms of drug transport in the human lung is an important issue in pulmonary drug discovery and development. For this purpose, there is an increasing interest in immortalized lung cell lines as alternatives to primary cultured lung cells. We recently reported the protein expression in human lung tissues and pulmonary epithelial cells in primary culture (Sakamoto A, Matsumaru T, Yamamura N, Uchida Y, Tachikawa M, Ohtsuki S, Terasaki T. 2013. J Pharm Sci 102(9):3395-3406), whereas comprehensive quantification of protein expression in immortalized lung cell lines is sparse. Therefore, the aim of the present study was to clarify the drug transporter protein expression of five commercially available immortalized lung cell lines derived from tracheobronchial cells (Calu-3 and BEAS2-B), bronchiolar-alveolar cells (NCI-H292 and NCI-H441), and alveolar type II cells (A549), by liquid chromatography-tandem mass spectrometry-based approaches. Among the transporters detected, breast cancer resistance protein in Calu-3, NCI-H292, NCI-H441, and A549 and OCTN2 in BEAS2-B showed the highest protein expression. Compared with data from our previous study (Sakamoto A, Matsumaru T, Yamamura N, Uchida Y, Tachikawa M, Ohtsuki S, Terasaki T. 2013. J Pharm Sci 102(9):3395-3406), NCI-H441 was the most similar to primary lung cells from all regions in terms of protein expression of organic cation/carnitine transporter 1 (OCTN1). In conclusion, the protein expression profiles of transporters in five immortalized lung cell lines were determined, and these findings may contribute to a better understanding of drug transport in immortalized lung cell lines.
abstract_id: PUBMED:35759133
Genetic variations in the ATP-binding cassette transporter ABCC10 are associated with neutropenia in Japanese patients with lung cancer treated with nanoparticle albumin-bound paclitaxel. ABCC10/MRP7, an ATP-binding cassette (ABC) transporter, has been implicated in the extracellular transport of taxanes. Our group reported that the ABCC10 single nucleotide polymorphism (SNP) rs2125739 influences docetaxel cytotoxicity in lung cancer cell lines as well as its side effects in clinical practice. In this study, we investigated whether the rs2125739 variant could affect paclitaxel (PTX) cytotoxicity in lung cancer cell lines. We also investigated the effect of rs2125739 on the efficacy and safety of nanoparticle albumin-bound PTX (nab-PTX) in clinical practice. The association between rs2125739 genotypes and the 50% inhibitory concentration (IC50) of PTX was investigated in 18 non-small cell lung cancer (NSCLC) cell lines, HeLa cells, and genome-edited HeLa cells. Next, blood samples from 77 patients with NSCLC treated with carboplatin plus nab-PTX were collected and analyzed for six SNPs, including rs2125739. The clinical outcomes among the different genotype groups were evaluated. In NSCLC cell lines, HeLa cells, and genome-edited HeLa cells, the IC50 was significantly higher in the ABCC10 rs2125739 T/T group than in the T/C and C/C groups. In 77 patients with NSCLC, there were no significant differences in clinical outcomes between the T/T and T/C groups. However, the rs2125739 T/T genotype was associated with a higher frequency of Grade 3/4 neutropenia. In contrast, there was no association between other SNPs and clinical efficacy or neutropenia. Our results indicate that the ABCC10 rs2125739 variant is associated with neutropenia in response to nab-PTX treatment.
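The IC50 values compared across genotypes are typically estimated by fitting a Hill-type dose-response curve to cell-viability data. A minimal sketch with synthetic data; the concentrations, noise level, and parameter values are invented:

```python
# Estimating IC50 by least-squares fit of a Hill curve to viability data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    # Fraction of viable cells as a function of drug concentration
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # synthetic doses
rng = np.random.default_rng(3)
viability = hill(conc, 8.0, 1.2) + rng.normal(0.0, 0.02, conc.size)

(ic50_est, slope_est), _ = curve_fit(hill, conc, viability, p0=[5.0, 1.0])
print(f"estimated IC50 = {ic50_est:.1f} (Hill slope {slope_est:.2f})")
```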
abstract_id: PUBMED:19036469
Expression of breast cancer resistance protein is associated with a poor clinical outcome in patients with small-cell lung cancer. Background: ATP-binding cassette (ABC) transporter and DNA excision repair proteins play a pivotal role in the mechanisms of drug resistance. The aim of this study was to investigate the expression of ABC transporter and DNA excision repair proteins, and to elucidate the clinical significance of their expression in biopsy specimens from patients with small-cell lung cancer (SCLC).
Methods: We investigated expression of the ABC transporter proteins, P-glycoprotein (Pgp), multidrug resistance associated-protein 1 (MRP1), MRP2, MRP3, and breast cancer resistance protein (BCRP), and the DNA excision repair proteins, excision repair cross-complementation group 1 (ERCC1) protein and breast cancer susceptibility gene 1 (BRCA1) protein, in tumor biopsy specimens obtained before chemotherapy from 130 SCLC patients who later received platinum-based combination chemotherapy, and investigated the relationship between their expression and both response and survival.
Results: No significant associations were found between expression of Pgp, MRP1, MRP2, MRP3, ERCC1, or BRCA1 and either response or survival. However, there was a significant association between BCRP expression and both response (p=0.026) and progression-free survival (PFS; p=0.0103).
Conclusions: BCRP expression was significantly predictive of both response and progression-free survival (PFS) in SCLC patients receiving chemotherapy. These findings suggest that BCRP may play a crucial role in drug resistance mechanisms, and that it may serve as an ideal molecular target for the treatment of SCLC.
abstract_id: PUBMED:11279022
The structure of the multidrug resistance protein 1 (MRP1/ABCC1): crystallization and single-particle analysis. Multidrug resistance protein 1 (MRP1/ABCC1) is an ATP-binding cassette (ABC) polytopic membrane transporter of considerable clinical importance that confers multidrug resistance on tumor cells by reducing drug accumulation by active efflux. MRP1 is also an efficient transporter of conjugated organic anions. Like other ABC proteins, including the drug resistance conferring 170-kDa P-glycoprotein (ABCB1), the 190-kDa MRP1 has a core structure consisting of two membrane-spanning domains (MSDs), each followed by a nucleotide binding domain (NBD). However, unlike P-glycoprotein and most other ABC superfamily members, MRP1 contains a third MSD with five predicted transmembrane segments with an extracytosolic NH(2) terminus. Moreover, the two nucleotide-binding domains of MRP1 are considerably more divergent than those of P-glycoprotein. In the present study, the first structural details of MRP1 purified from drug-resistant lung cancer cells have been obtained by electron microscopy of negatively stained single particles and two-dimensional crystals formed after reconstitution of purified protein with lipids. The crystals display p2 symmetry with a single dimer of MRP1 in the unit cell. The overall dimensions of the MRP1 monomer are approximately 80 × 100 Å. The MRP1 monomer shows some pseudo-2-fold symmetry in projection, and in some orientations of the detergent-solubilized particles, displays a stain-filled depression (putative pore) appearing toward the center of the molecule, presumably to enable transport of substrates. These data represent the first structural information on this transporter at approximately 22-Å resolution and provide direct structural evidence for a dimeric association of the transporter in a reconstituted lipid bilayer.
Answer: Yes, circulating autoantibodies to ABCC3 transporter may be a potential biomarker for lung cancer. A study involving 275 patients with non-small cell lung cancer and 226 control subjects found that the IgG antibody level was significantly higher in female patients with adenocarcinoma than in female controls, and the IgA antibody level was significantly higher in male patients with squamous carcinoma than in male controls. The study concluded that circulating autoantibodies to ABCC3 transporter could be added to a panel of existing biomarkers for early diagnosis and prognosis of lung cancer, although gender differences should be taken into account (PUBMED:22699933). |
Instruction: Does workplace health promotion reach shift workers?
Abstracts:
abstract_id: PUBMED:25417210
Does workplace health promotion reach shift workers? Objectives: One reason for health disparities between shift and day workers may be that workplace health promotion does not reach shift workers to the same extent as it reaches day workers. This study aimed to investigate the association between shift work and the availability of and participation in workplace health promotion.
Methods: We used cross-sectional questionnaire data from a large representative sample of all employed people in Denmark. We obtained information on the availability of and participation in six types of workplace health promotion. We also obtained information on working hours, ie, fixed day work (reference) and shift work (four categories), psychosocial work factors, and health behaviors. We conducted binary logistic regression analyses both in the total sample (N=7555) and in a sub-sample consisting of job groups with representatives in all shift work categories (N=2064).
Results: In the general working population, fixed evening and fixed night workers, and employees working variable shifts including night work reported a higher availability of health promotion, while employees working variable shifts without night work reported a lower availability of health promotion. Within job groups undertaking shift work, we found few differences between day and shift workers, and these few differences appear to favor shift workers. Day workers and shift workers did not differ significantly with respect to their participation in health promotion.
Conclusions: The present study could not confirm that shift workers in general report a lower availability of and participation in workplace health promotion.
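The study's main analysis, a binary logistic regression of health promotion availability on shift-work category with fixed day work as the reference, can be sketched as follows with simulated data; the category labels match the abstract, but all effect sizes are illustrative assumptions.

```python
# Logistic regression sketch: availability of workplace health promotion
# (0/1) regressed on shift-work category, fixed day work as reference.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 7555
cats = ["fixed_day", "fixed_evening", "fixed_night",
        "variable_with_night", "variable_without_night"]
shift = rng.choice(cats, size=n, p=[0.70, 0.05, 0.05, 0.10, 0.10])

# Simulated availability with assumed category-specific log-odds shifts
effect = {"fixed_day": 0.0, "fixed_evening": 0.3, "fixed_night": 0.3,
          "variable_with_night": 0.2, "variable_without_night": -0.3}
logit = -0.2 + np.array([effect[s] for s in shift])
available = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = pd.get_dummies(pd.Series(shift))[cats[1:]].astype(float)  # drop reference
X = sm.add_constant(X)
fit = sm.Logit(available, X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios of availability vs. fixed day work
```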
abstract_id: PUBMED:33615346
Workplace health promotion interventions for Australian workers with intellectual disability. Workplace health promotion (WHP) and the general wellbeing of workers in the Australian workforce should be a priority for all management. Our study argues that management support for workers with an intellectual disability (WWID) can make a difference to their health promotion and ultimately their participation in the workforce. We adopt a qualitative approach, through semi-structured interviews with 22 managers across various organizations, to examine their perspectives on the WHP of WWID. We integrate the key values of WHP (rights for health, empowerment for health and participation for health; Spencer, Corbin and Miedema, Sustainable development goals for health promotion: a critical frame analysis, Health Promot Int 2019;34:847-58) into the four phases of WHP interventions (needs assessment, planning, implementation and evaluation; Bortz and Döring, Research Methods and Evaluation for Human and Social Scientists, Heidelberg: Springer, 2006) and examine management perspectives (setting-based approach) on the WHP of WWID. Where this integration had taken place, we found some evidence of managers adopting more flexible, innovative and creative approaches to supporting the health promotion of WWID. This integration seemed to drive continuous improvement in WWID health promotion at the workplace. We also found evidence that some organizations, such as an exemplar film company, even over-deliver in terms of supporting WWID needs by encouraging their capabilities in film-making interventions, whilst others are more direct in their support by matching skills to routine jobs. Our approach demonstrates that incorporating key WHP values into the four-phase WHP framework is critical for the effective health promotion of WWID.
abstract_id: PUBMED:36833956
The Feasibility of a Text-Messaging Intervention Promoting Physical Activity in Shift Workers: A Process Evaluation. Workplace health promotion programs (WHPPs) can improve shift workers' physical activity. The purpose of this paper is to present the process evaluation of a text-messaging health promotion intervention for mining shift workers during a 24-day shift cycle. Data collected from intervention participants via a logbook (n = 25) throughout the intervention, exit interviews (n = 7) and online surveys (n = 17) were used to evaluate the WHPP with the RE-AIM (Reach, Efficacy, Adoption, Implementation and Maintenance) framework. The program reached 66% of workers across three departments, with 15% of participants dropping out. The program showed the potential to be adopted if recruitment strategies are improved to reach more employees, especially by involving work managers in recruitment. A few changes were made to the program, and participant adherence was high. Facilitators to adopting and implementing the health promotion program included the use of text messaging to improve physical activity, feedback on behaviour, and the provision of incentives. Work-related fatigue was reported as a barrier to implementing the program. Participants reported that they would recommend the program to other workers and use the Mi fitness band to continue monitoring and improving their health behaviour. This study showed that shift workers were optimistic about health promotion. Future programs should allow for long-term evaluation and involve company management in determining scale-up.
abstract_id: PUBMED:32596022
Preventing Shift Work Disorder in Shift Health-care Workers. The occurrence of shift work disorder (SWD) in health-care workers (HCWs) employed in 24/7 hospital wards is a major concern throughout the world. According to the literature, SWD is the most frequent work-related disorder in HCWs working on shift schedules that include night shifts. In agreement with the Luxembourg Declaration on workplace health promotion (WHP) in the European Union, a WHP program was developed in a large hospital, involving both individual-oriented and organization-oriented measures, with the aim of preventing the occurrence of SWD in nurses working on shifts including night shifts. Rotating shift work risk and excessive sleepiness were objectively assessed before and after the implementation of the WHP program, using the Rotating Shiftwork-questionnaire and the Epworth Sleepiness Scale. The findings of this study showed the effectiveness of the implemented WHP program in minimizing the impact of shift work on workers' health and in preventing misalignment between the sleep-wake rhythm and shift working.
abstract_id: PUBMED:19080035
Workplace health promotion in Washington State. The workplace is a powerful setting to reach large numbers of at-risk adults with effective chronic disease prevention programs. Missed preventive care is a particular problem for workers with low income and no health insurance. The costs of chronic diseases among workers--including health care costs, productivity losses, and employee turnover--have prompted employers to seek health promotion interventions that are both effective and cost-effective. The workplace offers 4 avenues for delivering preventive interventions: health insurance, workplace policies, health promotion programs, and communications. For each of the avenues, the evidence base describes a number of preventive interventions that are applicable to the workplace. On the basis of the evidence and of our work in Washington State, we present a public health approach to preventing chronic diseases via the workplace. In addition to relying on the evidence, this approach makes a compelling business case for preventive interventions to employers.
abstract_id: PUBMED:34274696
Effects of Zentangle art workplace health promotion activities on rural healthcare workers. Objectives: Workplace health promotion activities have a positive effect on emotions. Zentangle art relaxes the body and mind through the process of concentrating while painting, achieving a healing effect. This study aimed to promote the physical and mental health of rural healthcare workers through a Zentangle art-based intervention.
Study Design: This was a quasi-experimental pilot study.
Methods: A Zentangle art workshop was held from November 2019 to July 2020. A total of 40 healthcare workers were recruited. The participants were asked to provide baseline data, and the Brief Symptom Rating Scale (BSRS-5), work stress management effectiveness self-rating scale, General Self-Efficacy Scale (GSES), and Workplace Spirituality Scale (WSS) were administered before and after the workshop. SPSS 22.0 statistical package software was used to conduct the data analysis.
Results: The median age (interquartile range [IQR]) was 32.00 years (23.00-41.75 years). The Wilcoxon signed-rank test revealed that the median (IQR) BSRS-5 postintervention score was 4.0 (1.25-5.0), which was lower than the preintervention score (P = 0.004). The postintervention score for the work stress management effectiveness self-rating scale was 36.5 (31.0-40.0), which was also lower than the preintervention score (P = 0.009). A higher score for the GSES or WSS indicated improvements in stress management and self-efficacy. The GSES postintervention score 25.00 (21.0-30.75) was significantly higher than the preintervention score (P = 0.010), and the WSS postintervention score 104.0 (88.0-111.75) was significantly higher than the preintervention score (P = 0.005).
Conclusions: The study provides evidence that painting therapy can effectively relieve stress, reduce workplace stress and frustration, enhance self-efficacy, and increase commitment to work among healthcare workers, thus improving their physical, mental, and spiritual well-being. Zentangle art provides employees with multiple channels for expressing their emotions and can improve the physical and mental health of healthcare workers in the workplace. It is beneficial and cost-effective and can serve as a benchmark for peer learning.
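As a concrete illustration of the paired, nonparametric test named above, the sketch below runs a Wilcoxon signed-rank comparison in Python. The pre/post scores are invented stand-ins for the study's BSRS-5 data, so this shows only the mechanics of the test, not the study's results.

```python
# Minimal sketch of a Wilcoxon signed-rank pre/post comparison.
# The scores below are hypothetical, not the study's data.
from scipy.stats import wilcoxon

# Invented BSRS-5 scores for 10 workers before and after the workshop.
pre = [7, 6, 9, 5, 8, 6, 7, 10, 5, 8]
post = [4, 5, 5, 4, 5, 3, 6, 8, 4, 6]

stat, p_value = wilcoxon(pre, post)  # paired, two-sided by default
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```

A small p-value here corresponds to the kind of significant pre/post median shift the abstract reports.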
abstract_id: PUBMED:29084131
Workplace health promotion programs for older workers in Italy. Background: Italy is the European country with the highest number of citizens over the age of sixty. In recent years, the unsustainability of the social security system has forced the Italian government to raise the retirement age and reduce the chances of early exit, thus sharply increasing the age of the workforce. Consequently, a significant proportion of older workers are currently obliged to do jobs that were designed for young people. Systematic health promotion intervention for older workers is therefore essential.
Objectives: The European Pro Health 65+ project aims at selecting and validating best practices for successful/active aging. In this context we set out to review workplace health promotion projects carried out in Italy.
Methods: To ascertain examples of workplace health promotion for older workers (WHPOW), we carried out a review of the scientific and grey literature together with a survey of companies.
Results: We detected 102 WHPOW research studies conducted in conjunction with supranational organizations, public institutions, companies, social partners, NGOs and educational institutions. The main objectives of the WHPOW were to improve the work environment, the qualifications of older workers and attitudes towards the elderly, and, in many cases, also to improve work organization.
Conclusions: The best way to promote effective WHPOW interventions is by disseminating awareness of best practices and correct methods of analysis. Our study suggests ways of enhancing WHPOW at both a national and European level.
abstract_id: PUBMED:36002884
The process evaluation of a citizen science approach to design and implement workplace health promotion programs. Background: Many workplace health promotion programs (WHPPs) do not reach blue-collar workers. To enhance the fit and reach, a Citizen Science (CS) approach was applied to co-create and implement WHPPs. This study aims to evaluate i) the process of this CS approach and ii) the resulting WHPPs.
Methods: The study was performed in two companies: a construction company and a container terminal company. Data were collected by questionnaires, interviews and logbooks. Using the framework of Nielsen and Randall, process measures were categorized into intervention, context and mental models. Interviews were transcribed and thematically coded using MaxQDA software.
Results: Involvement in the CS approach and in co-creating the WHPPs was positively experienced. Information provision, sustaining engagement over time and aligning with the workplace's culture posed barriers in the CS process. As to the resulting WHPPs, involvement and interaction during the intervention sessions were particularly experienced in small groups. Reach was affected by the unfavorable planning of the WHPPs and by external events, including reorganizations and the COVID-19 pandemic.
Discussion: Continuous information provision and engagement over time, better alignment with the workplace's culture and favorable planning are considered important factors for facilitating involvement, reach and satisfaction of the workers in a Citizen Science approach to designing and implementing a WHPP. Further studies continuously monitoring the process of WHPPs using the CS approach could help anticipate external factors and increase adaptability.
Conclusions: Workers were satisfied with their involvement in the WHPPs. Organizational and social-cultural factors were barriers to the CS approach and its reach. Involvement and interaction in WHPPs were particularly experienced in small-group sessions. Consequently, contextual and personal factors need to be considered in the design and implementation of WHPPs with a CS approach among blue-collar workers.
abstract_id: PUBMED:30389653
Using Facebook for Health Promotion in "Hard-to-Reach" Truck Drivers: Qualitative Analysis. Background: Workers in the road transport industry, and particularly truck drivers, are at increased risk of chronic diseases. Innovative health promotion strategies involving technologies such as social media may engage this "hard-to-reach" group. There is a paucity of evidence for the efficacy of social media technologies for health promotion in the Australian transport industry.
Objective: This study analyzed qualitative data from interviews and focus group discussions to evaluate a social media health promotion intervention, the Truckin' Healthy Facebook webpage, in selected Australian transport industry workplaces.
Methods: We engaged 5 workplace managers and 30 truck drivers from 6 transport industry organizations in developing workplace health promotion strategies, including a social media intervention, within a Participatory Action Research approach. Mixed methods, including a pre- and postintervention manager survey, truck driver survey, key informant semistructured interviews, truck driver focus groups, and focused observation, were used to evaluate the social media intervention. We asked questions about workplace managers' and truck drivers' opinions, engagement, and satisfaction with the intervention. This paper focuses on qualitative data.
Results: Of the workplace managers who reported implementing the social media intervention at their workplace, all (3/3, 100%) reported satisfaction with the intervention and expressed a keen interest in learning more about social media and how it may be used for workplace health promotion and other purposes. Truck drivers were poorly engaged with the intervention because (1) many believed they were the "wrong age" and lacked the necessary skills; (2) the cost of smartphone technology was prohibitive; (3) they confined their use of social media to nonwork-related purposes; and (4) many workplaces had "no Facebook" policies.
Conclusions: The use of social media as a health promotion intervention in transport industry workplaces has potential. Workplace interventions using social media can benefit from a Participatory Action Research approach. Involving managers and workers in the design of social media health promotion interventions and developing strategies to support and deliver the interventions helps to facilitate their success. The workers' profile, including their age and familiarity with social media, and work, workplace, and family context is important to consider in this process. Much more research needs to be undertaken to better understand the effective use of social media to engage "hard-to-reach" groups.
abstract_id: PUBMED:37124789
Workplace health promotion to facilitate physical activity among office workers in Sweden. Office workers spend most of their working time being sedentary, contributing to a sedentary lifestyle that increases the risk of developing disease and disability. A gradual decline in cardiorespiratory fitness among adults, along with the increased rate of non-communicable diseases across developed countries, makes the workplace an important opportunity for promoting healthy behaviors. This study aimed to investigate: how office companies in Sweden organize and provide workplace health promotion services related to physical activity; the companies' vision for providing workplace health promotion; and potential facilitators and barriers. Nine informants from eight companies participated in the study, and both qualitative and quantitative data were collected through semi-structured interviews. Informants were selected through purposive sampling in collaboration with eight companies in the office market, including companies that own and develop office buildings, shared workspaces, interior design or sustainable solutions, or that consult on issues related to the office sector. The framework method was used to analyze the data in a flexible and systematic way. The results showed that workplace health promotion is implemented to maintain employee health, productivity, and employee branding. Considerable financial resources, organizational support and office space are devoted to workplace health promotion. Convenience and easy access to storage and fitness facilities are key facilitators. In conclusion, this study highlights the importance of employees' engagement in developing and improving workplace health promotion and of addressing work-life balance constraints that hinder a healthy lifestyle. Removing barriers at an organizational level may improve the use of workplace health promotion related to physical activity among office employees.
Answer: The availability and participation in workplace health promotion (WHP) for shift workers have been a subject of investigation in several studies. According to a study using cross-sectional questionnaire data from Denmark, shift workers, including fixed evening and fixed night workers, as well as employees working variable shifts with night work, reported a higher availability of health promotion compared to those working variable shifts without night work. However, within job groups undertaking shift work, there were few differences between day and shift workers in terms of availability and participation in health promotion, suggesting that shift workers do not generally report lower availability or participation in WHP (PUBMED:25417210).
Another study focused on a text-messaging intervention promoting physical activity among mining shift workers, which reached 66% of workers across three departments. The program showed potential for adoption with high participant adherence, indicating that shift workers were receptive to health promotion efforts (PUBMED:36833956).
In the healthcare sector, a WHP program aimed at preventing shift work disorder (SWD) among nurses working night shifts was found to be effective in minimizing the impact of shift work on health and preventing SWD, suggesting that WHP can be successfully implemented among shift healthcare workers (PUBMED:32596022).
However, it is important to note that the success of WHP among shift workers may depend on various factors, including the design and implementation of the program, management support, and the specific needs and preferences of the workers. For instance, a study on Australian workers with intellectual disabilities highlighted the importance of management support and the integration of key WHP values for effective health promotion (PUBMED:33615346). Similarly, the process evaluation of a citizen science approach to design and implement WHP programs emphasized the need to consider organizational and social-cultural factors to enhance the involvement and reach of such programs among blue-collar workers (PUBMED:36002884).
In summary, while there is evidence that WHP can reach shift workers and that they are open to participating in such programs, the effectiveness and reach of WHP among this group may vary depending on the context and implementation strategies. |
Instruction: Is a statewide tobacco quitline an appropriate service for specific populations?
Abstracts:
abstract_id: PUBMED:18048635
Is a statewide tobacco quitline an appropriate service for specific populations? Objective: To assess whether smoking quit rates and satisfaction with the Washington State tobacco quitline (QL) services varied by race/ethnicity, socioeconomic status, area of residence (that is, urban versus non-urban), or sex of Washington QL callers.
Methods: From October 2004 into October 2005, we conducted telephone surveys of Washington QL callers about three months after their initial call to the QL. Analyses compared 7-day quit rates and satisfaction measures by race/ethnicity, education level, area of residence and sex (using alpha = 0.05).
Results: We surveyed half (n = 1312) of the 2638 adult smokers we attempted to contact. The 7-day quit rate among survey participants at the 3-month follow-up was 31% (CI: 27.1% to 34.2%), 92% (CI: 89.9% to 94.1%) were somewhat/very satisfied overall with the QL programme, 97% (CI: 95.5% to 98.2%) indicated that they would probably/for sure suggest the QL to others and 95% (CI: 92.9% to 96.4%) were somewhat/very satisfied with the QL specialist. Quit rate did not vary significantly by race/ethnicity, education level, area of residence or sex. Satisfaction levels were high across subpopulations. Almost all participants (99%) agreed that they were always treated respectfully during interactions with QL staff.
Conclusions: The Washington QL appeared effective and well received by callers from the specific populations studied. States choosing to promote their QL more aggressively should feel confident that a tobacco QL can be an effective and well received cessation service for smokers who call from a broad range of communities.
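The interval estimates quoted above (e.g., the 31% quit rate with CI 27.1% to 34.2% among 1,312 respondents) are confidence intervals for binomial proportions. The sketch below computes a Wilson score interval, one standard construction; the abstract does not state its exact method, and survey weighting would change the numbers, so treat this as illustrative arithmetic rather than a reproduction.

```python
# Wilson score interval for a binomial proportion (illustrative only).
import math

def wilson_ci(successes, n, z=1.96):
    """Return the 95% Wilson score interval for successes/n."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

quitters = round(0.31 * 1312)  # roughly 407 abstinent respondents
low, high = wilson_ci(quitters, 1312)
print(f"quit rate = {quitters / 1312:.1%}, 95% CI ({low:.1%}, {high:.1%})")
```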
abstract_id: PUBMED:25985612
Tobacco quitline outcomes for priority populations. Background: Despite declining rates of tobacco use, certain subgroups still experience a disproportionate risk for tobacco-related health issues. The South Dakota QuitLine identifies five priority population subgroups as the following: American Indians, tobacco users receiving Medicaid, youth, pregnant women, and spit tobacco users. The purpose of this study was to describe South Dakota QuitLine use among priority population subgroups and to measure associated cessation rates and service satisfaction.
Methods: Priority population subgroups comprised 22.6 percent (9,558 out of 42,237) of South Dakota QuitLine participants during a six-year period (2008-2013). Of the 34,866 total participants eligible for seven-month follow-up, 15,983 completed a telephone survey that measured tobacco quit status and service satisfaction (45.8 percent overall response). Eligible priority population subgroups had a 41.9 percent response rate (3,094 out of 7,388).
Results: The seven-month tobacco quit rate for the non-priority population group (46.9 percent) was higher than the quit rate for pregnant women (42.3 percent), youth (37.5 percent), American Indians (38.1 percent), Medicaid participants (35.7 percent) and participants with more than one priority subgroup designation (35.1 percent). The quit rate for spit tobacco users was highest overall (57.3 percent). All subgroups were satisfied with South Dakota Quitline services (≥ 3.5/4.0 scale; 4 = very satisfied).
Conclusions: Tobacco users in high risk and underserved population subgroups of the South Dakota QuitLine seek cessation services. Quit rates were overall favorable and varied between population subgroups (35.1-57.3 percent). Health care providers play a vital role in early identification of tobacco use and referral to cessation services for priority populations. Providers should assess tobacco use, advise users to quit, and refer to the South Dakota QuitLine.
abstract_id: PUBMED:31582932
Influence of new tobacco control policies and campaigns on Quitline call volume in Korea. Introduction: While tobacco control policies have been adopted and enforced, and anti-smoking campaigns have been conducted, the evaluation of their impact on tobacco quitting is lacking in Korea. Therefore, the effectiveness of tobacco control policies and mass media campaigns to encourage use of the Quitline were evaluated by monitoring call volume on Quitline, which has been in operation since 2006, in Korea.
Methods: Tobacco control policies and mass media campaigns, from 1 January of 2007 to 31 December of 2016, were assessed from the review of government documents and the history of law and regulation changes. The corresponding period incoming call volumes of the Quitline were assesed. The average monthly call volume, when policies and anti-smoking advertising were implemented, was compared with that of the whole year or baseline years (2007 and 2008).
Results: Peak call volume occurred in 2010 when the Quitline was directly promoted on television. The call volume in the month the TV campaign aired was 5.5 times higher than the average monthly call volume in the year 2010. A relatively gradual rise in call volume was found from 2013 to 2016 when the tobacco control policies and campaigns, such as Quitline number included on cigarette packs, a fear-oriented anti-tobacco campaign on mass media, and a tax increase on tobacco was implemented, were introduced sequentially. In that period, the average monthly call volume was about five times higher than in 2007 and 2008.
Conclusions: Continuous efforts to contribute to tobacco control policies and campaigns by the promotion of the Quitline is a most effective approach to raise quitting attempts. Based on the Korean experience, Quitline data may be useful for assessing the impact of tobacco control policies and campaigns in Asian Pacific countries.
abstract_id: PUBMED:25914872
Randomized Controlled Trial of the Combined Effects of Web and Quitline Interventions for Smokeless Tobacco Cessation. Background: Use of smokeless tobacco (moist snuff and chewing tobacco) is a significant public health problem but smokeless tobacco users have few resources to help them quit. Web programs and telephone-based programs (Quitlines) have been shown to be effective for smoking cessation. We evaluate the effectiveness of a Web program, a Quitline, and the combination of the two for smokeless users recruited via the Web.
Objectives: To test whether offering both a Web and a Quitline intervention for smokeless tobacco users results in significantly better long-term tobacco abstinence outcomes than offering either intervention alone; to test whether the offer of the Web or Quitline intervention results in better outcomes than a self-help manual-only control condition; and to report usage of and satisfaction with the interventions when offered alone or combined.
Methods: Smokeless tobacco users (N = 1,683) wanting to quit were recruited online and randomly offered one of four treatment conditions in a 2×2 design: Web Only, Quitline Only, Web + Quitline, and Control (printed self-help guide). Point-prevalence all-tobacco abstinence was assessed at 3 and 6 months post enrollment.
Results: 69% of participants completed both the 3- and 6-month assessments. There was no significant additive or synergistic effect of combining the two interventions in Complete Case or in the more rigorous Intent To Treat (ITT) analyses. Significant simple effects were detected: individually, the interventions were more efficacious than the control in achieving repeated 7-day point prevalence all-tobacco abstinence: Web (ITT: OR = 1.41, 95% CI = 1.03, 1.94, p = .033) and Quitline (ITT: OR = 1.54, 95% CI = 1.13, 2.11, p = .007). Participants were more likely to complete a Quitline call when offered only the Quitline intervention (OR = 0.71, 95% CI = 0.54, 0.93, p = .013); the number and duration of website visits did not differ when the Web program was offered alone or in combination with the Quitline. Ratings of program helpfulness (p < .05) and satisfaction (p < .05) were higher for those offered both interventions versus those offered only the Quitline.
Conclusion: Combining Web and Quitline interventions did not result in additive or synergistic effects, as have been found for smoking. Both interventions were more effective than a self-help control condition in helping motivated smokeless tobacco users quit tobacco. Intervention usage and satisfaction were related to the amount of intervention content offered. Usage of the Quitline intervention decreased when it was offered in combination, though ratings of helpfulness and recommendation were higher when the interventions were offered in combination.
Trial Registration: Clinicaltrials.gov NCT00820495; http://clinicaltrials.gov/ct2/show/NCT00820495.
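The treatment effects above are reported as odds ratios with 95% confidence intervals. The sketch below shows the standard calculation from a 2x2 table of abstinent versus not-abstinent counts, with a Woolf (log-scale) interval; the cell counts are hypothetical, since the abstract reports only the resulting ORs.

```python
# Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR for the table [[a, b], [c, d]] with a log-scale 95% CI."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se_log)
    high = math.exp(math.log(or_) + z * se_log)
    return or_, low, high

# Hypothetical counts: 60/420 abstinent in one arm vs. 45/421 in the control.
or_, low, high = odds_ratio_ci(60, 360, 45, 376)
print(f"OR = {or_:.2f}, 95% CI ({low:.2f}, {high:.2f})")
```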
abstract_id: PUBMED:37752980
Texas tobacco quitline knowledge, attitudes, and practices within healthcare agencies serving individuals with behavioral health needs: A multimethod study. Patients with behavioral health conditions have disproportionately high tobacco use rates and face significant barriers to accessing evidence-based tobacco cessation services. Tobacco quitlines are an effective and accessible resource, yet they are often underutilized. We identify knowledge, practices, and attitudes towards the Texas Tobacco Quitline (TTQL) within behavioral healthcare settings in Texas. Quantitative and qualitative data were collected in 2021 as part of a statewide needs assessment in behavioral healthcare settings. Survey respondents (n = 125) represented 23 Federally Qualified Health Centers, 29 local mental health authorities (LMHAs), 12 substance use treatment programs in LMHAs, and 61 standalone substance use treatment centers (26 people participated in qualitative interviews). Over half of respondents indicated familiarity with the TTQL and believed that the TTQL was helpful for quitting. Qualitative findings reveal potential concerns about inconsistency of services, long wait times, and the format of the quitline. About half of respondents indicated that their center promoted patient referral to the TTQL, and few indicated that their center had an electronic referral system with direct TTQL referral capacity. Interview respondents reported an overall lack of systematic follow-up with patients regarding their use of TTQL services. Findings suggest the need for (1) increased TTQL service awareness among healthcare providers; (2) further investigation into any changes needed to better serve patients with behavioral health conditions who use tobacco; and (3) electronic health record integration supporting direct referrals and enhanced protocols to support patient follow-up after TTQL referral.
abstract_id: PUBMED:33270541
Tobacco quitline engagement and outcomes among primary care patients reporting use of tobacco or dual tobacco and cannabis: An observational study. Background: Dual use of tobacco and cannabis is increasingly common, but it is unclear how it impacts individuals' interest in or ability to stop smoking. If dual users fail to engage in treatment or have worse treatment outcomes, it would suggest that tobacco treatment programs may need to be tailored to the specific needs of those using cannabis and tobacco. Methods: We conducted an observational study using electronic treatment records from adults (18 years and older) who (a) were enrolled in a regional healthcare system in Washington state, (b) sought tobacco cessation treatment through an insurance-covered quitline from July 2016 to December 2018 and (c) had cannabis use frequency during the period of their quitline enrollment documented in their electronic health record (EHR) (n = 1,390). Treatment engagement was defined by the total number of quitline counseling calls and web-logins completed. Point-prevalent self-reported tobacco abstinence was assessed 6 months post-quitline enrollment. Results: Thirty-two percent of participants (n = 441) reported dual use of tobacco and any cannabis during the observation period; 9.4% (n = 130) reported daily cannabis use. Among dual users reporting daily cannabis use, 13.9% had a diagnosed cannabis use disorder in the EHR. Neither engagement with quitline counseling nor long-term tobacco abstinence rates differed between those using tobacco only and either dual-use group (i.e., persons using any cannabis or daily cannabis). Conclusions: Dual use of tobacco and cannabis is common among smokers seen in primary care and those enrolling in quitline care, but it may not undermine tobacco quitline engagement or smoking cessation. Opportunities exist in the US to leverage quitlines to identify and intervene with dual users of tobacco and cannabis. HIGHLIGHTS: Tobacco quitline care was equally engaging and effective among tobacco users and dual users of tobacco and cannabis. Many daily cannabis users calling tobacco quitlines likely have a cannabis use disorder. Tobacco quitlines can be leveraged to identify and intervene with dual users of tobacco and cannabis.
abstract_id: PUBMED:37632451
LGBTQ Utilization of a Statewide Tobacco Quitline: Engagement and Quitting Behavior, 2010-2022. Introduction: Lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) individuals use tobacco at disproportionately high rates but are as likely as straight tobacco users to want to quit and to use quitlines. Little is known about the demographics and geographic distribution of LGBTQ quitline participants, their engagement with services, or their long-term outcomes.
Aims And Methods: Californians (N = 333,429) who enrolled in a statewide quitline from 2010 to 2022 were asked about their sexual and gender minority (SGM) status and other baseline characteristics. All were offered telephone counseling. A subset (n = 19,431) was followed up at seven months. Data were analyzed in 2023 by SGM status (LGBTQ vs. straight) and county type (rural vs. urban).
Results: Overall, 7.0% of participants were LGBTQ, including 7.4% and 5.4% of urban and rural participants, respectively. LGBTQ participants were younger than straight participants but had similar cigarette consumption. Fewer LGBTQ participants reported a physical health condition (42.1% vs. 48.4%) but more reported a behavioral health condition (71.1% vs. 54.5%; both p's < .001). Among both LGBTQ and straight participants, nearly 9 in 10 chose counseling and both groups completed nearly three sessions on average. The groups had equivalent 30-day abstinence rates (24.5% vs. 23.2%; p = .263). Similar patterns were seen in urban and rural subgroups.
Conclusions: LGBTQ tobacco users engaged with and appeared to benefit from a statewide quitline even though it was not LGBTQ community-based. A quitline with staff trained in LGBTQ cultural competence can help address the high prevalence of tobacco use in the LGBTQ community and reach members wherever they live.
Implications: This study describes how participants of a statewide tobacco quitline broke down by sexual orientation and gender. It compares participants both by SGM status and by type of county to provide a more complete picture of quitline participation both in urban areas where LGBTQ community-based cessation programs may exist and in rural areas where they generally do not. To our knowledge, it is the first study to compare LGBTQ and straight participants on their use of quitline services and quitting aids, satisfaction with services received, and rates of attempting quitting and achieving prolonged abstinence from smoking.
abstract_id: PUBMED:37118924
Implementation of Quitline Financial Incentives to Increase Counseling Sessions Among Adults Who Use Menthol Tobacco Products. Since 2017, the Vermont Tobacco Control Program (VTCP) has worked to reduce the impact of flavored tobacco products on Vermonters. With the proposed U.S. Food and Drug Administration (FDA) rules banning menthol cigarettes and flavored cigars and proposed legislation banning sales of all menthol and flavored tobacco products in Vermont, VTCP prioritized resources to support cessation among Vermonters who use menthol tobacco products. In March 2021, VTCP began offering a tailored quitline protocol for adults who use menthol tobacco, including financial incentives for completed coaching sessions. From March 2021 to May 2022, 66 quitline callers enrolled in the menthol incentive protocol, representing 8% of all quitline callers and 25% of participants in the state's quitline incentive programs. A greater proportion of callers in the menthol incentive program completed three or more quitline calls (58% vs. 38%) and enrolled in phone and text support (61% vs. 32%). Quitline callers enrolled in any of the incentive protocols (menthol, Medicaid/uninsured, or pregnant) were more likely to request one or two forms of nicotine replacement therapy (NRT). Quitlines remain an effective, evidence-based method of tobacco cessation, especially for reaching vulnerable populations. Given the targeted marketing of menthol brands to Black and African American populations, LGBTQ+ populations, youth, and neighborhoods with lower incomes, addressing menthol cigarette use is key to improving health equity and the health of Vermonters. Early data indicate that the use of financial incentives can increase engagement with a state quitline among menthol tobacco users through greater completion of cessation coaching calls, enrollment in text message support, and NRT usage.
abstract_id: PUBMED:24936168
Effectiveness of proactive and reactive services at the Swedish National Tobacco Quitline in a randomized trial. Background: The Swedish National Tobacco Quitline (SNTQ), which has both a proactive and a reactive service, has successfully provided tobacco cessation support since 1998. As there is a demand for an increase in national cessation support, and because the quitline works under funding constraints, it is crucial to identify the most clinically effective and cost-effective service. A randomized controlled trial was performed to compare the effectiveness of the high-intensity proactive service with the low-intensity reactive service at the SNTQ.
Methods: Those who called the SNTQ for smoking or tobacco cessation from February 2009 to September 2010 were randomized to proactive service (even dates) and reactive service (odd dates). Data were collected through postal questionnaires at baseline and after 12 months. Those who replied to the baseline questionnaire constituted the study base. Outcome measures were self-reported point prevalence and 6-month continuous abstinence at the 12-month follow-up. Intention-to-treat (ITT) and responder-only analyses were performed.
Results: The study base consisted of 586 persons, and 59% completed the 12-month follow-up. Neither ITT nor responder-only analyses showed any differences in outcome between proactive and reactive service. Point prevalence was 27% and continuous abstinence was 21% in analyses treating non-responders as smokers, and 47% and 35%, respectively, in responder-only analyses.
Conclusion: Reactive service may be used as the standard procedure to optimize resource utilization at the SNTQ. However, further research is needed to assess effectiveness in different subgroups of clients.
Trial Registration: ClinicalTrials.gov: NCT02085616.
abstract_id: PUBMED:24601063
Tobacco quitline outcomes by service type. Background: Tobacco use is a burden in terms of mortality, chronic disease, and economic impact. Effective treatments exist to aid tobacco users who are motivated to quit. The South Dakota QuitLine provides coaching to all participants and the option of a cessation product (nicotine replacement therapy [NRT], or the prescription medications, varenicline or bupropion) at no cost. This study describes the types of services requested by South Dakota QuitLine participants and the associated cessation outcomes across service types.
Methods: Data from South Dakota QuitLine enrollees during a four-year period (2008 to 2011) were included. Enrollment data (demographics and tobacco use) and outcome evaluation data (30-day point-prevalence abstinence) collected seven months later were accessed (N = 11,603/26,876 enrollees, 43.2 percent response). The frequency of requests for each type of cessation service and the associated cessation outcomes are reported. Abstinence at seven months was compared across the different services.
Results: Frequencies of cessation services requested were coaching/varenicline (64.6 percent), coaching/bupropion (5 percent), coaching/NRT (22.6 percent), and coaching only (5.4 percent). Overall abstinence at seven months was 47.2 percent. Abstinence rates for service types were the following: coaching/varenicline (49.8 percent), coaching/bupropion (47.3 percent), coaching/NRT (42.9 percent), and coaching only (40.3 percent). Chi-square analysis and confidence interval comparisons identified significantly higher abstinence (p < .05) for varenicline/coaching in comparison to coaching only or coaching/NRT.
Conclusions: All service options available from the South Dakota QuitLine result in cessation rates of 40 percent or greater. Providers should assess tobacco use, advise users to quit, and refer to the South Dakota QuitLine.
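The chi-square comparison reported above can be illustrated with a 2x2 contingency test. The counts below are hypothetical, chosen only to echo the reported abstinence rates (about 49.8% for coaching/varenicline versus 40.3% for coaching only); the abstract does not give per-group respondent counts.

```python
# Chi-square test comparing abstinence between two service types
# (hypothetical counts, illustrative only).
from scipy.stats import chi2_contingency

#        abstinent  not abstinent
table = [[398, 402],   # coaching/varenicline (hypothetical n = 800)
         [121, 179]]   # coaching only        (hypothetical n = 300)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```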
Answer: Yes, a statewide tobacco quitline is an appropriate service for specific populations. Studies have shown that quitlines are effective and well-received across various demographic groups, including different races/ethnicities, socioeconomic statuses, areas of residence (urban vs. non-urban), and sexes. For instance, the Washington State tobacco quitline demonstrated high satisfaction levels and a 31% 7-day quit rate among survey participants at the 3-month follow-up, with quit rates and satisfaction not varying significantly by race/ethnicity, education level, area of residence, or sex (PUBMED:18048635).
Furthermore, the South Dakota QuitLine was used by priority population subgroups, such as American Indians, Medicaid recipients, youth, pregnant women, and spit tobacco users, with quit rates varying between 35.1-57.3 percent across these groups. All subgroups expressed satisfaction with the services provided (PUBMED:25985612).
Additionally, the Texas Tobacco Quitline was found to be a helpful resource for individuals with behavioral health needs, although there were suggestions for improvements in service consistency, wait times, and format (PUBMED:37752980). The LGBTQ community also utilized a statewide quitline, with engagement and quit rates comparable to straight participants, indicating that quitlines can be effective for sexual and gender minority populations as well (PUBMED:37632451).
Moreover, the Vermont Tobacco Control Program's tailored quitline protocol for adults using menthol tobacco products, which included financial incentives, showed increased engagement with quitline services (PUBMED:37118924). The Swedish National Tobacco Quitline found no significant differences in outcomes between proactive and reactive services, suggesting that reactive service could be a standard procedure to optimize resource utilization (PUBMED:24936168).
In summary, statewide tobacco quitlines have been shown to be appropriate and effective services for a wide range of specific populations, providing an accessible resource for tobacco cessation support. |
Instruction: Is prophylactic antimicrobial treatment necessary after hypospadias repair?
Abstracts:
abstract_id: PUBMED:15118434
Is prophylactic antimicrobial treatment necessary after hypospadias repair? Purpose: We evaluated the complication rate after hypospadias repair with and without the use of antimicrobial prophylaxis.
Materials And Methods: A total of 101 boys who underwent tubularized incised plate urethroplasty with urethral catheter placement during a 16-month period were randomly divided into group 1 (52 boys), treated with cephalexin from day 1 after surgery until 2 days after catheter removal, and group 2 (49 boys), who did not receive prophylaxis. All children received cefonicid before surgery.
Results: Average patient age was 2.3 years (range 11 months to 6.5 years). Hypospadias was coronal in 54 boys, penile in 33, glanular in 9 and penoscrotal in 5; the distribution was similar in both groups. Median time to urethral catheter removal was 8.6 days in group 1 and 8.3 days in group 2. Overall, bacteriuria was noted in 11 children in group 1 and 25 in group 2. The most common pathogen was Pseudomonas aeruginosa in group 1 and Klebsiella pneumoniae in group 2. Urethrocutaneous fistula developed in 3 boys in group 1 and 9 in group 2, meatal stenosis occurred in 1 boy in group 1 and 4 in group 2, and 1 boy in group 1 had meatal regression. Three boys in group 1 and 12 in group 2 had a complicated urinary tract infection (p < 0.05). There was no difference in the number of surgical complications between boys for whom this was the first operation and those undergoing a repeat hypospadias repair.
Conclusions: A broad-spectrum antibiotic is recommended before and antimicrobial prophylaxis after hypospadias repair. This protocol may decrease the risk of complicated urinary tract infections after surgery, and probably reduces meatal stenosis and urethrocutaneous fistula rates.
abstract_id: PUBMED:30527683
The use of postoperative prophylactic antibiotics in stented distal hypospadias repair: a systematic review and meta-analysis. Introduction: The current literature on the use of antibiotics perioperatively for many pediatric procedures, including hypospadias, is inconsistent. There is currently no clear evidence for the use of postoperative antibiotic prophylaxis for stented distal hypospadias repair.
Objective: This study aims to synthesize and assess the available literature on the use versus non-use of postoperative antibiotic prophylaxis for stented distal hypospadias repair.
Methodology: A systematic literature search was performed in March 2018 to evaluate trials that assessed the use versus non-use of postoperative prophylactic antibiotics in stented distal hypospadias repairs in children. Methodological quality of the studies was assessed according to study design, as recommended by the Cochrane Collaboration. The outcomes assessed included composite overall posthypospadias repair complications of infection and wound healing complications. The event rate for each treatment group was extracted to derive the intervention relative risk (RR) and corresponding 95% confidence interval (CI). The Mantel-Haenszel method with a random-effects model was used to pool effect estimates from the included studies. Heterogeneity was assessed, with subgroup analysis performed according to study design. Publication bias was likewise determined. The protocol of this review was registered in PROSPERO (CRD42018087301) and reported in accordance with the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines.
Result: A total of seven studies (four cohorts, three randomized controlled trials) with 986 stented distal hypospadias repairs (408 without postoperative prophylactic antibiotics and 578 given postoperative prophylactic antibiotics) were included in the meta-analysis. Moderate to serious risk of bias was noted among the cohort studies, while the included randomized controlled trials (RCTs) were at high risk of bias. Inconsistencies in effect estimates between subgroups and publication bias with small-study effects were likely present. The overall pooled effect estimates comparing treatment groups showed no significant difference for the outcome of overall composite postoperative complications (RR 0.93, 95% CI 0.45, 1.93). Assessment of composite infection-related complications and wound healing complications likewise did not show any significant between-group differences (RR 1.28, 95% CI 0.49, 3.35 and RR 1.01, 95% CI 0.48, 2.12, respectively). Asymptomatic bacteriuria was significantly more frequent in the intervention group without postoperative prophylactic antibiotics (RR 4.01, 95% CI 1.11, 14.54).
Conclusion: The available evidence to date was assessed to be at high risk of bias. The low level of evidence generated suggests that there is limited utility in the use of postoperative prophylactic antibiotics to prevent clinically significant posthypospadias repair complications.
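The pooling the review describes (arm-level event counts turned into per-study relative risks and combined with the Mantel-Haenszel method) can be sketched as follows. The study counts are hypothetical, and the between-study (random-effects) variance the authors used is omitted, so this shows only the core fixed-effect arithmetic.

```python
# Per-study relative risks plus a Mantel-Haenszel pooled RR.
# Counts are hypothetical; the random-effects step is omitted.
import math

# (events_tx, n_tx, events_ctrl, n_ctrl) for three hypothetical studies.
studies = [(8, 60, 7, 55), (5, 90, 6, 85), (11, 140, 10, 130)]

num = den = 0.0
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log(RR)
    low = math.exp(math.log(rr) - 1.96 * se)
    high = math.exp(math.log(rr) + 1.96 * se)
    print(f"study RR = {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")
    N = n1 + n2
    num += a * n2 / N  # Mantel-Haenszel weights
    den += c * n1 / N

print(f"pooled Mantel-Haenszel RR = {num / den:.2f}")
```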
abstract_id: PUBMED:23416639
Is there a role for prophylactic antibiotics after stented hypospadias repair? Purpose: Data are lacking on prophylactic oral antibiotic use in stented hypospadias repair cases. We evaluated the role of prophylactic oral antibiotics for preventing symptomatic urinary tract infections in this population.
Materials And Methods: We reviewed consecutive patients treated with stented primary/redo hypospadias repair by a single surgeon from September 2009 to January 2012. All patients received antibiotics upon induction. Before April 1, 2011, patients also received prophylactic oral antibiotics while stented. They were compared to those who underwent surgery after April 1, who received no prophylactic oral antibiotics. The primary outcome was symptomatic urinary tract infections, as captured from patient records and verified by an electronic cross-check of ICD-10 codes. Secondary outcomes included cellulitis, fistula, dehiscence and meatal stenosis.
Results: Of the 161 patients reviewed, 11 were unstented and 1 underwent followup elsewhere. Of the remaining 149 patients, 78 received prophylactic oral antibiotics and 71 did not. The groups were well matched for age, hypospadias characteristics, surgical technique and stent duration. Median followup was 17 months (range 0.2 to 33). No culture-proven symptomatic urinary tract infections developed in either group. One patient in the prophylactic group was treated for cellulitis by the pediatrician. The complication rate, including redo cases, was 18.2% in the prophylactic group and 15.3% in the nonprophylactic group (p = 0.8).
Conclusions: When postoperative prophylactic oral antibiotics were not administered, we identified no increased incidence of symptomatic urinary tract infections or complications. Our data suggest that prophylactic oral antibiotics may not be needed in cases of stented hypospadias repair. This study contributes to the growing body of evidence supporting the rational use of antimicrobials. It can potentially serve as a basis for a prospective, multicenter, randomized study.
abstract_id: PUBMED:36910289
To Compare Short-term Surgical Outcome among Patients given Continuous Postoperative Antibiotic Prophylaxis and those given no Postoperative Antibiotics after Urethroplasty for Hypospadias: A Pilot Study. Introduction: There is no well-accepted guideline or uniform practice for the use of prophylactic antibiotics with urethroplasty for hypospadias. As antibiotic resistance is growing, it is imperative to rationalize the use of antibiotics when a patient is operated on for hypospadias.
Aims And Objectives: The study aimed to determine whether there is any difference in outcome when prophylactic antibiotics are given after urethroplasty for hypospadias.
Study Design: Prospective randomized controlled study.
Material And Methods: Forty patients between 6 months and 12 years of age were included in the pilot study. All patients received a single preoperative antibiotic, and surgery was performed at the discretion of the operating surgeon. The participants were randomly assigned to Group A or B, with Group A not receiving any prophylactic antibiotic after surgery and Group B receiving prophylactic antibiotics for as long as the indwelling urethral catheter was in situ, as per the institute's current antibiotic policy. The patients were followed up clinically at catheter removal, 1 week after surgery and 1 month after surgery. Urine was analyzed at the start of surgery and after catheter removal. Data were tabulated and analyzed using the nonparametric Fisher's exact test with the help of Epi Info™ v5.5.8.
Results: Twenty-four patients were included in Group A and 16 in Group B. The clinical profile is presented in the detailed manuscript. Although pus cells could be demonstrated on urine examination in 82.5% of the study participants, only 10% grew organisms on culture media. No statistically significant difference could be demonstrated between the two groups. On following up the patients for 1 month, the groups were comparable with respect to surgical site infections and surgical complications such as urethrocutaneous fistula/dehiscence and thin stream.
Discussion: There was wide variability among practicing pediatric urologists in prescribing antibiotic prophylaxis for patients undergoing urethroplasty for hypospadias. The Urologic Surgery Antimicrobial Prophylaxis Policy of the American Urological Association makes no recommendation with respect to urethroplasty. Our results concur with the available English-language literature, which has not shown any benefit of prophylactic antibiotics after hypospadias repair.
Conclusions: Antibiotics may not have a definite role in the prevention of surgical complications, and it may be imperative to avoid unnecessary antibiotics to reduce antibiotic resistance.
abstract_id: PUBMED:33349560
Comparative analysis of perioperative prophylactic antibiotics in prevention of surgical site infections in stented, distal hypospadias repair. Purpose: There is limited evidence that prophylactic antibiotics prevent surgical site infection in stented, distal hypospadias repair. Our hypothesis is that the use of prophylactic antibiotics does not affect the rate of surgical site infection in this setting.
Methods: We conducted a retrospective study of consecutive patients over a 6-year period with distal penile hypospadias treated with urethral stenting. Variables analyzed include age, type of repair, usage of preoperative and/or postoperative antibiotics, and length of follow-up. Patients with a history of proximal or re-operative hypospadias repair were excluded. Surgical site infection was defined by the presence of postoperative penile erythema and/or purulent drainage treated with therapeutic antibiotics. Secondary outcome analysis included the presence of other hypospadias complications.
Results: 441 consecutive subjects met our inclusion criteria, with a mean age of 13.3 months. Patients were categorized into groups: Group 1 - Preoperative antibiotics (n = 64), Group 2 - Both Preoperative & Postoperative antibiotics (n = 159), Group 3 - Postoperative antibiotics (n = 122), Group 4 - No Preoperative or Postoperative antibiotics (n = 96). Two surgical site infections were reported out of the 441 patients: 1 in Group 3 and 1 in Group 4 (p = 0.513). There was no significant difference between groups in the total number of patients with a hypospadias complication. For further analysis, Groups 1-3 were combined (345 patients) and compared with Group 4 (no antibiotics, 96 patients), with no difference in SSIs (p = 0.388) or in the respective hypospadias complications.
Conclusions: The use of perioperative prophylactic antibiotics, whether before or after surgery for distal, stented hypospadias repair, has not been shown to reduce the rate of surgical site infections or hypospadias complications. Consequently, the benefit of prophylactic antibiotics in this setting is unclear.
abstract_id: PUBMED:29761139
Prophylactic Antibiotics After Stented, Distal Hypospadias Repair: Randomized Pilot Study. The usage of prophylactic oral antibiotics following distal hypospadias repair with stenting has been recently challenged. This study evaluated the incidence of symptomatic urinary tract infections (UTIs) following stented, distal hypospadias repair and the impact of prophylactic antibiotic therapy. Subjects 0 to 5 years of age with distal hypospadias were randomized to either Group 1 (antibiotics) or Group 2 (no prophylactic therapy). Urinalysis/urine culture was obtained intraoperatively with no preoperative antibiotics given. Phone interviews at 1 month and 3 months after surgery were done. Forty-eight patients were successfully randomized to either Group 1 (24) or Group 2 (24). The incidence of symptomatic UTI in this pilot study is low, and prophylactic antibiotic therapy does not appear to lower the incidence of symptomatic UTI. A larger, randomized, multicenter trial is needed to determine whether antibiotic prophylaxis reduces the risk of symptomatic UTIs following stented, distal hypospadias repair.
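The abstract's call for a larger, multicenter trial comes down to a sample-size question. The sketch below applies the standard normal-approximation formula for comparing two proportions; the assumed symptomatic-UTI rates of 5% versus 15% are hypothetical, since the pilot abstract does not report event rates.

```python
# Approximate sample size per arm for comparing two proportions
# (alpha = 0.05 two-sided, 80% power). Assumed rates are hypothetical.
import math

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

print(f"patients per arm: {n_per_arm(0.05, 0.15)}")  # ~138 per arm
```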
abstract_id: PUBMED:6636394
Postoperative catheterization and prophylactic antimicrobials in children with hypospadias. A prospective study of 78 children who underwent 84 operations for correction of hypospadias was done. Of these, 54 had a transperineal indwelling Foley catheter for ten days after surgery and 30, a transurethral catheter. Forty-five randomly selected children received prophylactic antimicrobial therapy (sulfamethoxazole), and the remaining 39 children served as controls. Incidence of urinary tract infection was significantly higher in the control group (10 of 39) as compared with the treated group (3 of 45) in spite of the higher incidence of vesicoureteral reflux in the treated group. This suggests that prophylactic antimicrobial treatment may prevent urinary tract infection from prolonged indwelling catheterization.
abstract_id: PUBMED:32854924
Staged repair of proximal hypospadias: Reporting outcome of staged tubularized autograft repair (STAG). Introduction: Proximal hypospadias (PPH) repair is a challenge, and a dilemma exists as to whether to perform it as a single-stage or a staged repair. Staged repair is our adopted procedure; it was recently modified by Snodgrass into the staged tubularized autograft repair (STAG), in which attention is given to ventral straightening of the penis together with some other technical details. Herein, we report our experience with STAG in a cohort of primary posterior hypospadias.
Patients And Methods: In the period from 2011 to 2018 we operated on 43 children with primary posterior hypospadias. Two principal surgeons (HB, MY) and multiple assistants operated on the children in the same way, and data were recorded in a prospectively designed database. In all children an inner prepuce graft was utilized; when curvature was more than 30 degrees, plate transection with or without ventral corporotomies was adopted.
Results: Forty-three children with PPH and ventral curvature of more than 30 degrees underwent the first stage at a median age of 12 months (6-132, IQR 16). Penile curvature was corrected by plate transection in 27 children (62.8%) and by ventral corporotomies in 16 children (37.2%). Graft take was successful in 90.7%; 4 children needed revision of a fibrotic graft. The second stage was completed in 37 children; success was 56.8%, with 21.6% fistula and 24.3% glanular dehiscence. Overall success after a third surgery to correct complications was 78.4%. Over a mean follow-up of 3.2 years, curvature recurred in 2 children, taking the success rate to 72.9%. No meatal stenosis, diverticulum, stricture or urethral dehiscence was encountered. Cosmetic appearance was excellent at follow-up.
Conclusion: STAG achieves proper straightening of the penis and allows for reconstruction of a good urethra, yet urethrocutaneous fistula and glanular dehiscence remain the main complications. Follow-up is important to assess the results of ventral corporotomies.
Type Of Study: Therapeutic.
Level Of Evidence: Level IV case series with no comparison group.
abstract_id: PUBMED:24309516
Tubularized incised plate urethroplasty for the treatment of penile fistulas after hypospadias repair. Objective: Urethrocutaneous fistula is the most common complication of hypospadias repair. Tubularized incised plate urethroplasty (TIPU) has been used for the management of distal fistulas. This study reports the usage of TIPU in the treatment of large penile fistulas.
Materials And Methods: Between April 2002 and September 2012, 15 patients with large penile fistulas managed with TIPU were included in the study. The fistulas were located along the penile shaft, from proximal to distal. Glanular and coronal fistulas were excluded. The surgical technique followed the standard TIPU technique. The scar tissue surrounding the fistula was circumferentially excised, and the urethral plate at the level of the fistula was incised to allow loose urethral tubularization. A urethral stent was left in place for 5-7 days.
Results: The mean age of the patients was 7.3 ± 3.1 years. The primary operations in these patients were tubularized preputial island flap (n = 6), onlay preputial island flap (n = 4) and TIPU (n = 5). The sites of the hypospadias fistulas were as follows: penoscrotal (three), mid-penile (eight) and subcoronal (four). Fistulas recurred in two patients after fistula repair. The postoperative follow-up of the patients was 12.4 ± 7.7 months.
Conclusion: TIPU may be used safely for the treatment of fistulas after hypospadias repair.
abstract_id: PUBMED:16700256
The Snodgrass repair: is stenting always necessary? To evaluate the need for stenting in Snodgrass hypospadias repairs. Sixty-five boys underwent hypospadias repairs between April 1996 and July 2001. A variety of techniques were employed: Snodgrass repair was performed in 39 patients, MAGPI in 18, glanular approximation procedures in 4, Mathieu procedures in 3, and a Duckett onlay flap in 1. Seventeen patients, all Snodgrass repairs, had a urethral stent placed for one week postoperatively. Follow-up at six weeks and at one year, or when toilet-trained, was carried out on all patients. A Fisher's exact test was performed analysing the difference in outcome of the distal Snodgrass repairs depending on the presence or absence of a urethral stent. The patient age at the time of surgery ranged from 8 to 115 months, with a median of 21 months. Postoperative complications included 1 stent migration, 1 urethral diverticulum and 4 fistulas. In addition, in one patient part of the ventral skin flap on one side sloughed away; this was debrided and allowed to close by secondary intention. The fistula rate in the entire group was 6%, with an incidence of 10.5% in patients undergoing Snodgrass repair. In cases having Snodgrass repair for a meatal position other than mid- or proximal shaft, there was no statistical difference in the incidence of fistula whether or not a stent was used. Long-term follow-up of the entire group indicated that 5 patients developed meatal narrowing: 3 in stented Snodgrass repairs, one patient who had a MAGPI and one GAP patient. Otherwise the long-term outcome of surgery has been excellent for the entire group. We believe that stenting can be abandoned in distal repairs without compromising patient outcomes.
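The Fisher's exact test mentioned above compares fistula occurrence between stented and unstented distal repairs from a 2x2 table. In the sketch below the split of the four fistulas between groups is hypothetical, since the abstract reports only totals.

```python
# Fisher's exact test on a hypothetical stented vs. unstented 2x2 table.
from scipy.stats import fisher_exact

#        fistula  no fistula
table = [[2, 15],   # stented repairs (n = 17)
         [2, 20]]   # unstented repairs (hypothetical n = 22)

odds_ratio, p_value = fisher_exact(table)  # two-sided by default
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```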
Answer: The necessity of prophylactic antimicrobial treatment after hypospadias repair is a subject of debate and research. Some studies suggest that prophylactic antimicrobial treatment may be beneficial, while others indicate that it may not be necessary.
One study recommends a broad-spectrum antibiotic before and antimicrobial prophylaxis after hypospadias repair, suggesting that this protocol may decrease the risk of complicated urinary tract infections after surgery, and probably reduce meatal stenosis and urethrocutaneous fistula rates (PUBMED:15118434).
However, a systematic review and meta-analysis found that the available evidence suggests limited utility in the use of postoperative prophylactic antibiotics to prevent clinically significant posthypospadias repair complications. The study noted that asymptomatic bacteriuria was significantly higher among the group without postoperative prophylactic antibiotics, but there was no significant difference in overall composite postoperative complications between groups (PUBMED:30527683).
Another study found no increased incidence of symptomatic urinary tract infections or complications when postoperative prophylactic oral antibiotics were not administered, suggesting that prophylactic oral antibiotics may not be needed in cases of stented hypospadias repair (PUBMED:23416639).
A pilot study concluded that antibiotics may not have a definite role in the prevention of surgical complications after hypospadias repair, emphasizing the importance of avoiding unnecessary antibiotics to reduce antibiotic resistance (PUBMED:36910289).
A retrospective study reported that the use of perioperative prophylactic antibiotics, both before and after surgery for distal, stented hypospadias repair, had not been shown to reduce the rate of surgical site infections nor hypospadias complications, making the benefit of prophylactic antibiotics in this setting unclear (PUBMED:33349560).
A randomized pilot study evaluating the incidence of symptomatic urinary tract infections following stented, distal hypospadias repair found that prophylactic antibiotic therapy does not appear to lower the incidence of symptomatic UTI (PUBMED:29761139).
In contrast, a prospective study from an earlier period suggested that prophylactic antimicrobial treatment may prevent urinary tract infection from prolonged indwelling catheterization (PUBMED:6636394).
In summary, the evidence on the necessity of prophylactic antimicrobial treatment after hypospadias repair is mixed, with some studies suggesting benefits in reducing complications, while others find no significant difference in outcomes with or without prophylaxis. Further research, including larger, randomized controlled trials, may be needed to establish clear guidelines. |
Instruction: Does one more medication help?
Abstracts:
abstract_id: PUBMED:27269293
A population-based study of help seeking and self-medication among trauma-exposed individuals. Epidemiologic studies of trauma highlight the imbalance between prevalence of psychiatric diagnoses and help seeking. We investigated prevalence and correlates of help seeking and self-medication in Norwegian adults with trauma history with a focus on common posttrauma outcomes of posttraumatic stress disorder (PTSD) and substance use disorders (alcohol or drug). Participants reporting at least 1 PTSD symptom (n = 307) were asked if they consulted with a doctor/another professional (help seeking) or used drugs/alcohol (self-medication) for trauma-related problems. PTSD, alcohol abuse or dependence (AUD), and drug use or dependence (DUD) were assessed via structured diagnostic interviews. Help seeking and self-medication were endorsed by 37.4% and 10.4% of the sample, respectively. As compared to the full sample, help seeking was endorsed at a greater rate in individuals with PTSD (χ2 = 8.59, p = .005) and at a lower rate in those with AUD (χ2 = 7.34, p < .004). Self-medication was more likely to be endorsed by individuals with PTSD than without PTSD (χ2 = 25.68, p < .001). In regression analyses, PTSD was associated with increased likelihood of self-medication (odds ratio [OR] = 4.56) and help seeking (OR = 2.29), while AUD was associated with decreased likelihood of help-seeking (OR = .29). When self-medication was included as a predictor, PTSD was no longer associated with help seeking, although AUD remained inversely associated. PTSD and AUDs have a nuanced relationship with formal help seeking as well as the use of substances to cope. Trauma-exposed individuals are likely engaging in adaptive and maladaptive coping strategies, the latter of which may be compounding distress.
abstract_id: PUBMED:29843357
One-Stop Dispensing: Hospital Costs and Patient Perspectives on Self-Management of Medication. (1) Objective: To assess hospital medication costs and staff time between One-Stop Dispensing (OSD) and the Traditional Medication System (TMS), and to evaluate patient perspectives on OSD. (2) Methods: The study was conducted at Hvidovre Hospital, University of Copenhagen, Denmark in an elective gastric surgery and acute orthopedic surgery department. This study consists of three sub-studies including adult patients able to self-manage medication. In Sub-study 1, staff time used to dispense and administer medication in TMS was assessed. Medication cost and OSD staff time were collected in Sub-study 2, while patient perspectives were assessed in Sub-study 3. Medication costs with two days of discharge medication were compared between measured OSD cost and simulated TMS cost for the same patients. Measured staff time in OSD was compared to simulated staff time in TMS for the same patients. Patient satisfaction related to OSD was evaluated by a questionnaire based on a five-point Likert scale ('very poor' (1) to 'very good' (5)). (3) Results: In total, 78 elective and 70 acute OSD patients were included. Overall, there was no significant difference between OSD and TMS in medication cost per patient ($2.03 [95% CI -0.57 to 4.63]) (p = 0.131). Compared with TMS, OSD significantly reduced staff time by an average of 12 min (p ≤ 0.001) per patient per hospitalization. The patients' satisfaction for OSD was high with an average score of 4.5 ± 0.7. (4) Conclusion: There were no differences in medication costs, but staff time was significantly lower in OSD and patients were overall satisfied with OSD.
abstract_id: PUBMED:11875225
Adherence to medication regimens and participation in dual-focus self-help groups. Objective: The authors examined the associations between attendance at self-help meetings, adherence to psychiatric medication regimens, and mental health outcomes among members of a 12-step self-help organization specifically designed for persons with both chronic mental illness and a substance use disorder.
Methods: A sample of members of Double Trouble in Recovery (DTR) was interviewed at baseline and one year later. Correlates of adherence to psychiatric medication regimens at the follow-up interview were identified for 240 attendees who had received a prescription for a psychiatric medication.
Results: Consistent attendance at DTR meetings was associated with better adherence to medication regimens after baseline variables that were independently associated with adherence were controlled for. Three baseline variables were associated with adherence: living in supported housing, having fewer stressful life events, and having a lower severity of psychiatric symptoms. In addition, better adherence was associated with a lower severity of symptoms at one year and no psychiatric hospitalization during the follow-up period.
Conclusions: Treatment programs and clinicians should encourage patients who have both mental illness and a substance use disorder to participate in dual-focus self-help groups that encourage the responsible use of effective psychiatric medication, particularly after discharge to community living. Clinicians also should be sensitive to stressful life events and discuss with patients how such events might affect their motivation or ability to continue taking medication.
abstract_id: PUBMED:24867348
Medication Reconciliation-theory and practice The World Health Organization initiated the project "High5s - Action on Patient Safety". The aim of the High5s project is to achieve a measurable, significant and sustained reduction in the occurrence of five serious patient safety problems within five years, in five countries. One of these patient safety issues is medication reconciliation - the process of assuring medication accuracy at transitions of care. In Germany, eleven hospitals are currently implementing medication reconciliation. Medication reconciliation represents the systematic comparison of the current patient's medication list with the medication list in hospital. For this purpose, Lead Technical Agencies of each participating country translated and adapted the standard operating procedure. This standard operating procedure describes the implementation and the procedure of the medication reconciliation process in detail. This process is divided into three parts. First, the best possible medication history is recorded. Second, based on those records, the responsible physician subsequently prescribes the medication. In the third step, the best possible medication history is compared with the medication orders at admission. During this process, it is likely that some discrepancies will occur. Such discrepancies are discussed with the responsible physician and clarified. A comprehensive acquisition of the best possible medication history is thus particularly important. It will be part of medical records throughout the patients' hospital stay. Thus it will be used as an additional source for comparison and adjustment of patients' medication in order to facilitate optimal drug treatment during the entire hospital stay. The practical implementation of medication reconciliation requires extensive change of the current prescription sheets or prescription software. Thus, this provides a great challenge for many hospitals. Nevertheless, in the Netherlands it has been shown that it is possible to prevent 90 % of unintentional discrepancies with medication reconciliation. A German hospital recently showed a reduction of discrepancies by about 77 %. The use of medication reconciliation to improve clinical endpoints is currently subject of further studies.
abstract_id: PUBMED:35621724
What is needed to sustain comprehensive medication management? One health plan's perspectives. Implementation of comprehensive medication management (CMM) in the community pharmacy setting remains sporadic despite its prevalence in other pharmacy contexts. One health plan has been investing in CMM since 2010. Their experience and perceptions in the payer-provider partnership could offer unique insights into the sustainability of CMM in community pharmacy. As part of a broader academic-payer-provider partnership, perceptions of CMM sustainability were explored with key stakeholders in the health plan through a semistructured group interview. Five themes emerged: (1) distinction between CMM and other patient care opportunities, (2) building a CMM program that delivers value requires an investment in network development, (3) payment design influences sustainability, (4) lack of push from community pharmacies to pay for CMM, and (5) the importance of an ongoing facilitated learning and action collaborative. Given previously demonstrated positive return-on-investment, CMM in community pharmacies shows promise for being a sustainable practice model. However, increased reach and performance of networks, as well as number of payers in the market, will be critical to scaling CMM in the community pharmacy setting.
abstract_id: PUBMED:7237882
A self-help program for childhood asthma in a residential treatment center. A structured program designed to enhance self-treatment was successfully implemented in a residential center for asthmatic children. The ultimate objective of the program was to improve compliance with therapeutic regimens, which was felt to be a factor that had necessitated placement of many of the patients. The program was designed to educate the patient and the patient's family regarding the nature of asthma, its treatment, and the importance of self-help. Efforts were also made to enhance the emotional maturity of the child. Patients remembered to take their medication over 90% of the time within 1 month of implementation of the program. A similar program was instituted for outpatient use.
abstract_id: PUBMED:31240280
Older Adults' Medication Management in the Home: How can Robots Help? Successful management of medications is critical to maintaining healthy and independent living for older adults. However, medication non-adherence is a common problem with a high risk for severe consequences [5], which can jeopardize older adults' chances to age in place [1]. Well-designed robots assisting with medication management tasks could support older adults' independence. Design of successful robots will be enhanced through understanding concerns, attitudes, and preferences for medication assistance tasks. We assessed older adults' reactions to medication hand-off from a mobile manipulator robot with 12 participants (68-79 yrs). We identified factors that affected their attitudes toward a mobile manipulator for supporting general medication management tasks in the home. The older adults were open to robot assistance; however, their preferences varied depending on the nature of the medication management task. For instance, they preferred a robot (over a human) to remind them to take medications, but preferred human assistance for deciding what medication to take and for administering the medication. Factors such as perceptions of one's own capability and robot reliability influenced their attitudes.
abstract_id: PUBMED:35430239
Feasibility of Customized Pillboxes to Enhance Medication Adherence: A Randomized Controlled Trial. Objective: To test the (1) feasibility of an assistive technology-based pillbox intervention on medication adherence; (2) feasibility of trial procedures; and (3) preliminary effectiveness of the pillbox intervention on medication adherence.
Design: A single-blinded randomized controlled clinical trial was conducted during 2-4 weeks.
Setting: Researchers recruited a convenience sample to participate in this university laboratory-based study.
Participants: English-speaking consumers of 2 or more daily medications (N=15) participated in the study. Individuals with cognitive impairment or who did not manage their own medications were excluded.
Interventions: Participants were randomized to 1 of 3 pillbox interventions: (1) standard-of-care pillbox; (2) customized off-the-shelf pillbox; or (3) customized 3-dimensional (3D) printed pillbox.
Main Outcome Measures: Outcome measures were divided among the 3 goals of the study. In addition to feasibility metrics, the Adherence to Refills and Medications Scale was used to measure the primary outcome measure, medication adherence. The Quebec User Evaluation of Satisfaction with Assistive Technology was used to measure pillbox satisfaction.
Results: Researchers successfully administered 6 standard-of-care, 5 custom off-the-shelf, and 4 custom 3D printed pillboxes. Compared with the standard-of-care pillboxes, customized 3D printed pillboxes had large (d=1.04) and customized off-the-shelf pillboxes had medium (d=0.67) effects on medication adherence.
Conclusions: Prescription of customized pillboxes using a manualized and novel assistive technology approach that leverages 3D printing is feasible.
abstract_id: PUBMED:19768373
Self-help group and medication overuse headache: preliminary data. The objective of the study is to investigate the benefits of joining a self-help group for patients with medication overuse headache (MOH). A self-help group is a voluntary gathering of a small number of persons who share a common problem. Little is known about support groups for people with chronic non-malignant pain such as MOH. Eight patients with refractory MOH attended a self-help group twice a month. During the meetings, patients were asked to focus on their headache experiences. Our data showed an increase in resourcefulness in coping with pain and a reduction in cephalalgiophobia. All patients reported general benefits in sharing their headache-related problems. No differences were found for headache frequency or analgesic overuse. To our knowledge, this is the first report on a self-help group for patients with MOH. Joining a self-help group can help patients develop positive attitudes to managing pain.
abstract_id: PUBMED:32364856
Self-Help Groups And Medication Use In Opioid Addiction Treatment: A National Analysis. Self-help groups and medications (buprenorphine, methadone, and naltrexone) both play important roles in opioid addiction treatment. The relative use of these two treatment modalities has not been characterized in a national study. Using national treatment data, we found that self-help groups were rarely provided in conjunction with medication treatment: Among all adult discharges from opioid addiction treatment in the period 2015-17, 10.4 percent used both self-help groups and medications, 29.2 percent used only medications, 29.8 percent used only self-help groups, and 30.5 percent used neither self-help groups nor medications. Use of self-help groups without medication is most common in residential facilities, among those with criminal justice referrals, and among uninsured or privately insured patients, as well as in the South and West regions of the US. These subgroups may be important targets for future efforts to identify and overcome barriers to medication treatment and create multimodal paths to recovery.
Answer: The provided abstracts do not directly answer the question "Does one more medication help?" as they do not discuss the efficacy of adding an additional medication to a treatment regimen. Instead, they focus on various aspects of medication management, help-seeking behavior, self-medication, and the use of self-help groups in relation to medication adherence and treatment for various conditions.
For instance, one study discusses the prevalence and correlates of help seeking and self-medication among Norwegian adults with a trauma history, highlighting the nuanced relationship between PTSD, substance use disorders, and help-seeking behaviors (PUBMED:27269293). Another study evaluates the cost-effectiveness and patient perspectives on One-Stop Dispensing (OSD) compared to the Traditional Medication System (TMS) in a hospital setting (PUBMED:29843357). Adherence to medication regimens and the positive impact of attendance at self-help meetings for individuals with both chronic mental illness and substance use disorder are explored in another study (PUBMED:11875225).
Medication reconciliation processes in hospitals are described in one abstract, emphasizing the importance of accurate medication lists during transitions of care (PUBMED:24867348). The sustainability of comprehensive medication management (CMM) in community pharmacy settings is discussed from the perspective of a health plan (PUBMED:35621724). A self-help program for childhood asthma aimed at improving compliance with therapeutic regimens is outlined in another abstract (PUBMED:7237882).
The potential for robots to assist older adults with medication management tasks is explored, with a focus on understanding older adults' concerns, attitudes, and preferences for such assistance (PUBMED:31240280). The feasibility and effectiveness of customized pillboxes to enhance medication adherence are tested in a randomized controlled trial (PUBMED:35430239). The benefits of joining a self-help group for patients with medication overuse headache (MOH) are investigated, showing an increase in coping resourcefulness and a reduction in cephalalgiophobia (PUBMED:19768373). Lastly, the use of self-help groups and medications in opioid addiction treatment is analyzed nationally, revealing that these modalities are rarely provided together (PUBMED:32364856).
In summary, while the abstracts provide insights into various aspects of medication management and support systems, they do not provide information on the effects of adding an additional medication to a treatment plan. Therefore, to answer the question about the efficacy of adding another medication, one would need to consult clinical studies or guidelines specific to the condition being treated and the medications involved. |
Instruction: Does donor brain death influence acute vascular rejection in the kidney transplant?
Abstracts:
abstract_id: PUBMED:15257053
Does donor brain death influence acute vascular rejection in the kidney transplant? Background: There is increasing experimental evidence to suggest that donor brain death enhances susceptibility to early inflammatory responses such as acute rejection in the kidney transplant. The aim of the present study was to establish whether the injury induced or aggravated by donor brain death could exert an effect on recipient immunologic tolerance by comparing data from patients receiving a kidney from non-heart-beating donors (NHBD) or from brain-dead donors (BDD).
Methods: We reviewed data corresponding to 372 renal transplants performed from January 1996 to May 2002. The data were stratified according to donor type as 197 (53%) brain-dead and 175 (47%) non-heart-beating donors, and the two groups were compared in terms of acute vascular rejection by Cox's regression analysis.
Results: The rate of vascular rejection was 28% in the BDD group and 21.7% in the NHBD (P=0.10). The following predictive variables for acute vascular rejection were established: brain death [RR 1.77 (95% CI 1.06-3.18)], presence of delayed graft function [RR 3.33 (1.99-5.55)], previous transplant [RR 2.35 (1.34-4.13)], recipient age under 60 years [RR 1.86 (0.99-2.28)], female recipient [RR 1.50 (0.99-2.28)], cerebrovascular disease as cause of donor death [RR 1.72 (1.02-2.91)], and triple therapy as immunosuppressive treatment.
Conclusion: Donor brain death could be a risk factor for the development of vascular rejection in kidney recipients. This process could affect the quality of the graft and host alloresponsiveness. Delayed graft function in transplants from brain-dead donors could be a reflection of severe autonomic storm, leading to a higher incidence of vascular rejection in these patients.
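Note (editorial aside, not part of the abstract above): in a Cox model, the quoted risk ratio compares instantaneous event rates between groups, RR = hazard(BDD) / hazard(NHBD). Read this way, the RR of 1.77 for brain death corresponds to a roughly 77% higher rate of acute vascular rejection for kidneys from brain-dead donors, with the other covariates held fixed; because the 95% CI (1.06-3.18) excludes 1, the association is statistically significant at the 5% level.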
abstract_id: PUBMED:12538752
Donor tissue characteristics influence cadaver kidney transplant function and graft survival but not rejection. Acute injury and age are characteristics of transplanted tissue that influence many aspects of the course of a renal allograft. The influence of donor tissue characteristics on outcomes can be analyzed by studying pairing, the extent to which two kidneys retrieved from the same cadaver donor manifest similar outcomes. Pairing studies help to define the relative role of donor-related factors (among pairs) versus non-donor factors (within pairs). This study analyzed graft survival for 220 pairs of cadaveric kidneys for the similarity of parameters reflecting function and rejection. It also examined whether the performance of one kidney was predicted by the course of its "mate," the other kidney from that donor. Parameters reflecting function showed sustained pairing posttransplantation, as did graft survival. In contrast, measures of rejection strongly affected survival but showed no pairing. Surprisingly, the survival of a kidney was predicted by the early performance of its mate, an observation we term the "mate effect." Six-month graft survival and renal function were reduced in grafts for which the mate kidney displayed any criteria for functional impairment (dialysis dependency, low urine output [≤1 L] in the first 24 h posttransplant or day-7 serum creatinine ≥400 μmol/L), even for kidneys which themselves lacked those criteria. Rejection measures did not demonstrate the mate effect. In conclusion, kidney transplant function is strongly linked to donor-related factors (age, brain death). In contrast, rejection affects survival and function, but it is not primarily determined by the characteristics of the donor tissue. Graft survival reflects both of these influences.
abstract_id: PUBMED:17889141
Effect of the brain-death process on acute rejection in renal transplantation. Introduction: Growing experimental evidence suggests that the state of brain death (BD) activates surface molecules on peripheral organs by the massive release of macrophage- and T cell-associated cytokines as well as adhesion molecules into the circulation. The question is whether the sequelae of the BD process substantially influence the quality of the donor organ, the ensuing host response, or the ultimate transplant outcome. Our aim was to compare explosive BD with gradual-onset injury as a trigger of the host immune mechanisms accelerating acute rejection processes.
Materials And Methods: This retrospective study included 149 cadaveric donors whose kidneys were transplanted into 264 recipients. Exclusion criteria were previous transplants and hyperimmunized patients. Donor variables were: sex, age, etiology of death, and hemodynamic conditions during the 24 hours prior to death. The recipient variables included all possible conditions known to induce rejection.
Results: Cox analysis revealed initial immunosuppression without induction (risk ratio [RR] 1.83; 95% confidence interval [CI] 1.02 to 3.25; P = .039) to be predictive of acute vascular rejection; there were also trends toward an impact of a regimen without tacrolimus (RR 1.84; 95% CI 0.85 to 3.98; P = .099), recipient age < 30 years (RR 2.17; 95% CI 1.06 to 4.48; P = .053), and lower mean donor blood pressure during the 3 hours prior to death (RR 1.17; 95% CI 1.00 to 1.37; P = .054).
Conclusions: Greater sympathetic activity during brain death produces nonspecific endothelial damage and increases organ immunogenicity, promoting rejection.
abstract_id: PUBMED:16298578
The granule exocytosis and Fas/FasLigand pathways at the time of transplantation and during borderline and acute rejection of human renal allografts. Cytotoxic lymphocytes induce target cell death by the granule exocytosis mechanism, in which perforin and granzyme B induce target cell lysis, and by ligation of Fas-FasLigand, which results in apoptosis. The purpose was to detect the level of activation of cytolytic pathways at the time of renal transplantation and during acute rejection. We investigated 119 biopsies obtained at transplantation from 100 deceased donor allografts and 19 living donor allografts as well as 45 allograft biopsy specimens collected from recipients because of a clinical suspicion of an acute rejection episode. Total RNA was isolated and transcribed to cDNA. To measure mRNA encoding perforin, granzyme B, and FasLigand by real-time quantitative polymerase chain reaction, we used oligonucleotide primers in LightCycler equipment with cyclophilin B as the housekeeping gene. At the time of transplantation, the transcript expression levels of perforin and granzyme B were the same in the biopsies from deceased and living donors. During acute rejection episodes (n = 10), perforin (P < .01) and granzyme B levels (P < .05) were significantly up-regulated. In cases of suspected rejection (n = 12), both the clinical picture and the effector gene responses were heterogeneous. The FasLigand expression was up-regulated during acute rejection episodes (n = 8) compared with the time of transplantation, but the change was not significant. In conclusion, brain death did not seem to influence the granule exocytosis pathway in the kidney. The cytolytic effector pathways are up-regulated in renal allograft tissue in acute rejection episodes.
abstract_id: PUBMED:10432416
Donor catecholamine use reduces acute allograft rejection and improves graft survival after cadaveric renal transplantation. Background: Epidemiological data implicate that renal transplants from living unrelated donors result in superior survival rates as compared with cadaveric grafts, despite a higher degree of human lymphocyte antigen (HLA) mismatching. We undertook a center-based case control study to identify donor-specific determinants affecting early outcome in cadaveric transplantation.
Methods: The study database consisted of 152 consecutive cadaveric renal transplants performed at our center between June 1989 and September 1998. Of these, 24 patients received a retransplant. Donor kidneys were allocated on the basis of prospective HLA matching according to the Eurotransplant rules of organ sharing. Immunosuppressive therapy consisted of a cyclosporine-based triple-drug regimen. In 67 recipients, at least one acute rejection episode occurred during the first month after transplantation. They were taken as cases, and the remaining 85 patients were the controls. Stepwise logistic regression was done on donor-specific explanatory variables obtained from standardized Eurotransplant Necrokidney reports. In a secondary evaluation, the impact on graft survival in long-term follow-up was further measured by applying a Cox regression model. The mean follow-up of all transplant recipients was 3.8 years (SD 2.7 years).
Results: Donor age [odds ratio (OR) 1.05; 95% CI, 1.02 to 1.08], traumatic brain injury as cause of death (OR 2.75; 95% CI, 1.16 to 6. 52), and mismatch on HLA-DR (OR 3.0; 95% CI, 1.47 to 6.12) were associated with an increased risk of acute rejection, whereas donor use of dopamine (OR 0.22; 95% CI, 0.09 to 0.51) and/or noradrenaline (OR 0.24; 95% CI, 0.10 to 0.60) independently resulted in a significant beneficial effect. In the multivariate Cox regression analysis, both donor treatment with dopamine (HR 0.44; 95% CI, 0.22 to 0.84) and noradrenaline (HR 0.30; 95% CI, 0.10 to 0.87) remained a significant predictor of superior graft survival in long-term follow-up.
Conclusions: Our data strongly suggest that the use of catecholamines in postmortal organ donors during intensive care results in immunomodulating effects and improves graft survival in long-term follow-up. These findings may at least partially be explained by down-regulating effects of adrenergic substances on the expression of adhesion molecules (VCAM, E-selectin) in the vessel walls of the graft.
abstract_id: PUBMED:31084585
Nonimmunologic Factors Affecting Long-Term Outcomes of Deceased-Donor Kidney Transplant. Objectives: We investigated the impact of nonimmunologic factors on patient and graft survival after deceased-donor kidney transplant.
Materials And Methods: All deceased-donor kidney transplants performed between January 2004 and December 2015 were included in our analyses. We used the independent t test to calculate significant differences between means above and below medians of various parameters.
Results: All study patients (N = 205; 58.7% males) received antithymocyte globulin as induction therapy and standard maintenance therapy. Patients were free from infection, malignancy, and cardiac, liver, and pulmonary system abnormalities. Most patients (89.2%) were recipients of a first graft. Median patient age, weight, and cold ischemia time were 38 years, 65 kg, and 15 hours, respectively. Delayed graft function, diabetes mellitus, and hypertension occurred in 19.1%, 43.4%, and 77.9% of patients, respectively. The 1- and 5-year graft survival rates were 95% and 73.8%. Graft survival was not affected by donor or recipient sex or recipient diabetes or hypertension. However, graft survival was longer in patients who received no graft biopsy (8.2 vs 6.9 y; P = .027) and in those who had a diagnosis of calcineurin inhibitor nephrotoxicity versus antibody-mediated rejection after biopsy (8.19 vs 3.66 y; P = .0047). Longer survival was shown with donors who had traumatic death versus cerebrovascular accident (5.9 vs 5.3 y; P = .029) and donors below the 50th percentile in age (8.23 and 7.14 y; P = .0026) but less with donors who had terminal acute kidney injury (6.97 vs 8.16 y; P = .0062). We found a negative correlation between graft survival and donor age (P = .01) and 1-year serum creatinine (P = .01).
Conclusions: Donor age, cause of brain death, and acute kidney injury affected graft survival in our study cohort but not donor or recipient sex or posttransplant or donor blood pressure.
abstract_id: PUBMED:11675425
Influence of donor brain death on chronic rejection of renal transplants in rats. The clinical observation that the results of kidney grafts from living donors (LD), regardless of relationship with the host, are consistently superior to those of cadavers suggests an effect of brain death (BD) on organ quality and function. This condition triggers a series of nonspecific inflammatory events that increase the intensity of the acute immunologic host responses after transplantation (Tx). Herein are examined the influences of this central injury on late changes in renal transplants in rats. A standardized model of BD was used. Groups included both allografts and isografts from normotensive brain dead donors and anesthetized LD. Renal function was determined every 4 wk after Tx, at which time representative grafts were examined by morphology and by reverse transcriptase-PCR. Long-term survival of brain-dead donor transplants was significantly less than LD grafts. Proteinuria was significantly elevated in recipients of grafts from BD donors versus LD controls as early as 6 wk postoperatively and increased progressively through the 52-wk follow up. These kidneys also showed consistently more intense and progressive deterioration in renal morphology. Changes in isografts from brain-dead donors were less marked and developed at a slower tempo than in allografts but were always greater than those in controls. The transcription of cytokines was significantly increased in all brain-dead donor grafts. Donor BD accelerates the progression of long-term changes associated with kidney Tx and is an important risk factor for chronic rejection. These results explain in part the clinically noted difference in long-term function between organs from cadaver and living sources.
abstract_id: PUBMED:21677599
Systemic complement activation in deceased donors is associated with acute rejection after renal transplantation in the recipient. Background: Acute rejection after renal transplantation has been shown to be negatively associated with long-term graft survival. Identifying donor factors that are associated with acute rejection in the recipient could help to a better understanding of the relevant underlying processes that lead to graft injury. Complement activation has been shown to be an important mediator of renal transplant related injury. In this study, we analyzed the effect of systemic complement activation in deceased donors before transplantation of their kidneys on posttransplant outcome in the recipient.
Methods: Plasma samples from 232 deceased brain-dead and deceased cardiac-dead donors were analyzed for the complement activation markers C5b-9, C4d, Bb, and the complement component mannan-binding lectin by ELISA. The association of these parameters with posttransplant outcome in recipients was analyzed in a multivariate regression model.
Results: The C5b-9 level in donor plasma was found to be associated with biopsy-proven acute rejection in the recipient during the first year after renal transplantation (P = 0.035). Increased complement activation was found in both deceased brain-dead and deceased cardiac-dead donors.
Conclusions: In conclusion, we found C5b-9 in the donor to be associated with acute rejection of renal transplants in the recipient. Whether targeting complement activation in the donor may ameliorate acute rejection in the recipient needs to be studied.
abstract_id: PUBMED:19273164
The role of chemokines in acute renal allograft rejection and chronic allograft injury. Short and long term outcome of renal transplantation are determined by acute and chronic rejection processes. In acute transplant rejection, expression of chemokines occurs in different renal compartments where it is triggered through various stimuli e.g. brain death, ischemia, reperfusion, and HLA-mismatch. The induction of chemokine expression precedes the process of organ recovery and extends well into the late course of clinical allograft injury. Chemokines function mainly as chemoattractants for leukocytes, monocytes, neutrophils, and other effector cells from the blood to sites of infection or damage. Chemokines are also important in angiogenesis and fibrosis and can have anti-inflammatory functions. The study of chemokine biology in transplantation has broadened the understanding of acute and chronic transplant dysfunction. Data suggest that relatively few chemokine receptors play central roles in these developments, and chemokine blockade, either non-selective or specific, has shown promising results in experimental transplantation and is currently being investigated in human trials.
abstract_id: PUBMED:10903606
Accelerated rejection of renal allografts from brain-dead donors. Objective: To define the potential influences of donor brain death on organs used for transplantation.
Summary Background Data: Donor brain death causes prompt upregulation of inflammatory mediators on peripheral organs. It is hypothesized that this antigen-independent insult may influence the rate and intensity of host alloresponsiveness after engraftment.
Methods: The rates of survival of unmodified Lew recipients sustained by kidney allografts from brain-dead, normal anesthetized, and anesthetized ventilated F344 donors were compared. Brain death was induced by gradually increasing intracranial pressure under electroencephalographic control. Tracheotomized brain-dead animals and anesthetized controls were mechanically ventilated for 6 hours before transplant nephrectomy. The rate and intensity of the acute rejection event were examined by histology, immunohistology, and reverse transcriptase-polymerase chain reaction.
Results: Animals bearing kidneys from brain-dead donors died of renal failure secondary to acute rejection at a significantly faster rate than those from anesthetized living controls or anesthetized animals ventilated for 6 hours. Within 3 hours after placement and reperfusion of brain-dead donor grafts, significant neutrophil infiltration was observed, followed by increasing numbers of macrophages and T cells. mRNA of proinflammatory mediators detected in kidneys within 6 hours of brain death and upregulated even before transplantation increased thereafter and appeared to accelerate and amplify host alloresponsiveness, as manifested by the rapid expression of chemokines, cytokines, adhesion molecules, and major histocompatibility complex class II antigens in the engrafted organ. The process evolved in the controls less intensely and at a slower rate.
Conclusions: Donor brain death is a significant risk factor for peripheral organs used for transplantation. The activated state of such organs appears to trigger host immune mechanisms that accelerate the process of acute rejection. The effects of this central injury may explain in part the less satisfactory performance of cadaver organs in human transplantation compared with those from living sources.
Answer: Yes, donor brain death appears to influence acute vascular rejection in kidney transplants. Studies have shown that donor brain death can be a risk factor for the development of vascular rejection in kidney recipients (PUBMED:15257053). The process of brain death can lead to nonspecific endothelial damage and increase organ immunogenicity, promoting rejection (PUBMED:17889141). Additionally, donor brain death is associated with an upregulation of inflammatory mediators on peripheral organs, which may influence the rate and intensity of host alloresponsiveness after engraftment, potentially accelerating the process of acute rejection (PUBMED:10903606).
Furthermore, systemic complement activation in deceased donors, which can be a consequence of brain death, is associated with acute rejection after renal transplantation in the recipient (PUBMED:21677599). The granule exocytosis and Fas/FasLigand pathways, which are involved in cytolytic activities, do not seem to be influenced by brain death in the kidney, but these pathways are up-regulated in renal allograft tissue during acute rejection episodes (PUBMED:16298578).
However, it is important to note that while donor brain death can influence acute vascular rejection, other factors such as donor age, cause of brain death, and acute kidney injury also affect graft survival (PUBMED:31084585). Additionally, the use of catecholamines in postmortal organ donors during intensive care has been shown to result in immunomodulating effects and improve graft survival in long-term follow-up, suggesting that some interventions may mitigate the negative effects of brain death on acute rejection (PUBMED:10432416).
In summary, donor brain death is a significant risk factor that can influence acute vascular rejection in kidney transplants, and it may accelerate and amplify the host's immune response against the transplanted organ. |
Instruction: Can strong back extensors prevent vertebral fractures in women with osteoporosis?
Abstracts:
abstract_id: PUBMED:8820769
Can strong back extensors prevent vertebral fractures in women with osteoporosis? Objective: To determine the influence of back extensor strength on vertebral fractures in 36 women with osteoporosis.
Design: We conducted a cross-sectional study of female patients with osteoporosis by assessing anthropometric variables, bone mineral density, muscle strength, level of physical activity, and radiographic findings in the spine.
Material And Methods: The 36 study subjects with osteoporosis, who ranged from 47 to 84 years of age, satisfied specific inclusion and exclusion criteria that minimized confounding factors related to pathophysiologic features, diet, and medications. A physical activity score was determined for each subject on the basis of daily physical activities relating to homemaking, occupation, and sports.
Results: The range of the physical activity scores (from 2 to 13) indicated that no subject was involved in unusually demanding physical activities. Bone mineral density values ranged from 0.49 to 0.92 g/cm2. Thoracic kyphosis ranged from 31.0 to 84.0 degrees. Isometric strength of the back extensor muscles ranged from 7.3 to 34.0 kg. Statistical analysis demonstrated a significant negative correlation between the strength of the back extensor muscles and thoracic kyphosis. Significant negative correlations were also found between back extensor strength and the number of vertebral compression fractures and between bone mineral density and the number of vertebral fractures.
Conclusion: The negative association between back extensor strength and both kyphosis and number of vertebral fractures suggests that increasing back strength may prove to be an effective therapeutic intervention for the osteoporotic spine. In persons with stronger back muscles, the risk of vertebral fractures will likely decrease.
abstract_id: PUBMED:25138115
Prospective study of spinal orthoses in women. Background: Few clinical trials have investigated the efficacy of, and compliance with, spinal orthoses in the management of osteoporosis.
Objectives: The purpose of this study was to investigate the effect of long-term use and the compliance of spinal orthoses in postmenopausal women with vertebral fractures.
Study Design: Clinical trial of spinal orthoses in postmenopausal women.
Methods: Women were separated into groups wearing different types of orthoses (Spinomed, Osteomed, Spinomed active, and Spine-X). Isometric maximum strength of the trunk muscles (F/W for the abdominals and extensors) was calculated and back pain was assessed in all women. In addition, women completed a compliance questionnaire about the use of the orthoses.
Results: Spinomed decreased pain (p = 0.001) and increased trunk muscle strength (F/W abdominals, p = 0.005, and F/W extensors, p = 0.003, respectively). Compliance with wearing an orthosis for 6 months was 66%.
Conclusion: The results suggest that orthoses could be an effective intervention for back pain and muscle strengthening in osteoporotic women.
Clinical Relevance: In women with established osteoporosis, wearing the Spinomed orthosis for at least 2 h/day for 6 months decreased back pain significantly and increased isometric trunk muscle strength. All spinal orthoses could be valuable adjuncts to rehabilitation programs such as spinal muscle strengthening and postural correction, but only when used properly.
abstract_id: PUBMED:16331772
Relationship of health related quality of life to prevalent and new or worsening back pain in postmenopausal women with osteoporosis. Objective: To examine the association between back pain and health related quality of life (HRQOL) in postmenopausal women with osteoporosis.
Methods: The Fracture Prevention Trial was a prospective, double-blinded, placebo-controlled study designed to compare the proportion of women receiving teriparatide who experienced a new fracture to the proportion of women receiving placebo who experienced a new fracture. Subjects were ambulatory postmenopausal women with osteoporosis and prior vertebral fracture. As part of this trial, English-speaking women from Canada, New Zealand, Australia, and the United States participated in an HRQOL substudy using the Osteoporosis Assessment Questionnaire (OPAQ). OPAQ was administered at baseline, 12 months, and at study termination (median treatment duration 19 mo). Back pain data were collected as part of the adverse event monitoring during the trial. Subjects considered to have experienced back pain reported this event spontaneously and were not queried specifically. We examined the influence of prevalent back pain on HRQOL after controlling for spine deformity index score, and the influence of new or worsening back pain on HRQOL after controlling for incident vertebral fracture.
Results: Of 471 women who completed OPAQ at baseline, 172 reported back pain that was associated with a mean decrease in all OPAQ dimension scores (p < 0.05). Of 429 women who completed OPAQ at all timepoints, 88 experienced new or worsening back pain that was associated with a mean decrease in physical function, emotional status, and symptoms scores (p < 0.01 for each). In a subset of 65 women who experienced moderate to severe back pain, all OPAQ dimensions were significantly reduced (p < 0.05).
Conclusion: Both prevalent back pain and new or worsening back pain affected HRQOL negatively. Osteoporosis therapies that prevent the development of back pain in postmenopausal women may also prevent decreases in HRQOL.
abstract_id: PUBMED:30623268
Effect of treatment on back pain and back extensor strength with a spinal orthosis in older women with osteoporosis: a randomized controlled trial. The effect of treatment with an activating spinal orthosis on back pain and back extensor strength was compared with that in an equipment-training group and a control group. Between the groups, there was no significant difference in back pain, back extensor strength, or kyphosis index after the 6 months of treatment.
Purpose: The aim of this study was to evaluate the effect of treatment with an activating spinal orthosis on back pain, back extensor strength, and kyphosis index. Our hypothesis was that an activating spinal orthosis may be an alternative treatment to decrease back pain and increase back extensor strength.
Methods: A total of 113 women aged ≥ 60 years with back pain and osteoporosis, with or without vertebral fractures, were randomized to three groups: a spinal orthosis group, an equipment training group, and a control group. All three groups were examined at baseline and followed up after 3 and 6 months. Statistical analyses were performed with a mixed model for repeated measures according to intention to treat (ITT) and per protocol (PP).
Results: A total of 96 women completed the study. Between the groups, there was no significant difference in baseline characteristics. Comparison between groups showed no significant difference in back pain, back extensor strength, or kyphosis index at the follow-up after 6 months according to ITT and PP analyses. Analysis in each group showed that the back extensor strength had increased by 26.9% in the spinal orthosis group, by 22.1% in the exercise training group and by 9.9% in the control group.
Conclusions: Six months' treatment with an activating spinal orthosis showed no significant difference in back pain, back extensor strength, or kyphosis index between the three groups. In the spinal orthosis group, present back pain decreased slightly and back extensor strength increased by 26.9%, which indicates that the spinal orthosis may become an alternative training method. ClinicalTrials.gov ID: NCT03263585.
abstract_id: PUBMED:16004669
The effects of teriparatide on the incidence of back pain in postmenopausal women with osteoporosis. Objectives: Back pain is a major cause of suffering, disability, and cost. The risk of developing back pain was assessed following treatment with teriparatide [rh(PTH 1-34)] in postmenopausal women with osteoporosis.
Research Design And Methods: A secondary analysis of back pain findings from the global, multi-site Fracture Prevention Trial was conducted, in which postmenopausal women with prevalent vertebral fractures were administered teriparatide 20 microg (n = 541) or placebo (n = 544) for a median of 19 months. Treatment-emergent back pain data were collected during adverse event monitoring, and spine radiographs were obtained at baseline and study endpoint.
Main Outcome Measures: The risk of back pain stratified by severity of new or worsening back pain and the risk of back pain associated with both number and severity of new vertebral fractures.
Results: Women randomized to teriparatide 20 microg had a 31% reduced relative risk of moderate or severe back pain (16.5% vs. 11.5%, P = 0.016) and a 57% reduced risk of severe back pain (5.2% vs. 2.2%, P = 0.011). Compared with placebo, teriparatide-treated patients experienced reduced relative risk of developing back pain associated with findings of: one or more new vertebral fractures by 83% (6.5% vs. 1.1%, P < 0.001), two or more new vertebral fractures by 91% (2.5% vs. 0.20%, P = 0.004), and one or more new moderate or severe vertebral fractures by 100% (5.1% vs. 0.0%, P < 0.001).
Conclusions: Teriparatide-treated women had reduced risk for moderate or severe back pain, severe back pain, and back pain associated with vertebral fractures. The mechanism of the back pain reduction likely includes the reduction both in severity and number of new vertebral fractures.
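Note (editorial aside, not part of the abstract above): the reported relative risk reductions follow from the standard formula RRR = 1 - (incidence with teriparatide / incidence with placebo). For moderate or severe back pain this gives 1 - (11.5 / 16.5) ≈ 0.30, and for severe back pain 1 - (2.2 / 5.2) ≈ 0.58, consistent with the reported 31% and 57% once rounding of the underlying event counts is taken into account.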
abstract_id: PUBMED:19707260
Effects of short-term combined treatment with alendronate and elcatonin on bone mineral density and bone turnover in postmenopausal women with osteoporosis. The antiresorptive drug elcatonin (ECT) is known to relieve pain in postmenopausal women with osteoporosis. A prospective open-labeled trial was conducted to compare the effects of short-term combined treatment with alendronate (ALN) and ECT on bone mineral density (BMD) and bone turnover with those of single treatment with ALN in postmenopausal women with osteoporosis. Two hundred and five postmenopausal osteoporotic women (mean age: 70 years) were recruited in our outpatient clinic. Forty-six women with back pain were treated with ALN and ECT (intramuscular, 20 units a week), and 159 women without obvious back pain were treated with ALN alone. The lumbar BMD, urinary levels of cross-linked N-terminal telopeptides of type I collagen (NTX), and serum levels of alkaline phosphatase (ALP) were measured during the six-month treatment period. The baseline characteristics, except for age, body weight and number of patients with prevalent vertebral fractures, were not significantly different between the two groups. The mean increase rate in the lumbar BMD at six months was similar in the ALN (+4.41%) and ALN+ECT (+5.15%) groups, following similar reduction rates in urinary NTX levels (-40.2% and -43.0%, respectively, at three months) and serum ALP levels (-19.0% and -19.7%, respectively, at six months). These results were consistent even after adjustments for age, body weight, and number of patients with prevalent vertebral fractures. The present study in postmenopausal osteoporotic women confirmed that the effects of short-term combined treatment with ALN and ECT on lumbar BMD and bone turnover in patients with back pain appeared to be comparable to those of single treatment with ALN in patients without obvious back pain.
abstract_id: PUBMED:23986468
Wearing an active spinal orthosis improves back extensor strength in women with osteoporotic vertebral fractures. Background: Vertebral fractures are the most common clinical manifestations of osteoporosis. Vertebral fractures and reduced back extensor strength can result in hyperkyphosis. Hyperkyphosis is associated with diminished daily functioning and an increased risk of falling. Improvements in back extensor strength can result in decreased kyphosis and thus a decreased risk of falls and fractures.
Objectives: The aim was to examine the effects of an active spinal orthosis (Spinomed III) on back extensor strength, back pain and physical functioning in women with osteoporotic vertebral fractures.
Study Design: Experimental follow-up.
Methods: The women used the active spinal orthosis for 3 months. Outcomes were changes in isometric back extensor strength, changes in back pain and changes in physical functioning.
Results: A total of 13 women were included in the trial. Wearing the orthosis during a 3-month period was associated with an increase in back extensor strength of 50% (p = 0.01). The study demonstrated a 33% reduction in back pain and a 6.5-point improvement in physical functioning. The differences in pain and physical functioning were borderline significant.
Conclusion: The women demonstrated a clinically relevant improvement in the back extensor strength. The differences in pain and physical functioning were clinically relevant and borderline significant.
Clinical Relevance: The results imply that Spinomed III could be recommended for women with vertebral fractures as a supplement to traditional back strengthening exercises. It is essential that the orthosis is adjusted correctly and that there is an individual programme concerning the amount of time the orthosis has to be worn every day.
abstract_id: PUBMED:26564228
Using self-reports of pain and other variables to distinguish between older women with back pain due to vertebral fractures and those with back pain due to degenerative changes. Unlabelled: Women with back pain and vertebral fractures describe different pain experiences than women without vertebral fractures, particularly a shorter duration of back pain, crushing pain and pain that improves on lying down. This suggests a questionnaire could be developed to identify older women who may have osteoporotic vertebral fractures.
Introduction: Approximately 12% of postmenopausal women have vertebral fractures (VFs), but less than a third come to clinical attention. Distinguishing back pain likely to relate to VF from other types of back pain may ensure appropriate diagnostic radiographs, leading to treatment initiation. This study investigated whether characteristics of back pain in women with VF are different from those in women with no VFs.
Methods: A case control study was undertaken with women aged ≥60 years who had undergone thoracic spinal radiograph in the previous 3 months. Cases were defined as those with VFs identified using the algorithm-based qualitative (ABQ) method. Six hundred eighty-three potential participants were approached. Data were collected by self-completed questionnaire including the McGill Pain Questionnaire. Chi-squared tests assessed univariable associations; logistic regression identified independent predictors of VFs. Receiver operating characteristic (ROC) curves were used to evaluate the ability of the combined independent predictors to differentiate between women with and without VFs via area under the curve (AUC) statistics.
Results: One hundred ninety-seven women participated: 64 cases and 133 controls. Radiographs of controls were more likely to show moderate/severe degenerative change than cases (54.1% vs 29.7%, P = 0.011). Independent predictors of VF were older age, history of previous fracture, shorter duration of back pain, pain described as crushing, pain improving on lying down and pain not spreading down the legs. AUC for the combination of these factors was 0.85 (95% CI 0.79 to 0.92).
Conclusion: We present the first evidence that back pain experienced by women with osteoporotic VF is different to back pain related solely to degenerative change.
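The combined-predictor AUC reported above is typically obtained by fitting a logistic regression on all independent predictors and scoring the predicted probabilities against case status. A minimal sketch of that pattern follows; the data, variable names, and model settings are invented for illustration and do not reproduce the study's dataset.

```python
# Illustrative sketch only: fabricated data standing in for the six
# independent predictors of vertebral fracture described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 197  # matches the study's sample size, but the values are random
X = rng.normal(size=(n, 6))        # age, prior fracture, pain duration, ...
y = rng.integers(0, 2, size=n)     # 1 = vertebral fracture case, 0 = control

model = LogisticRegression().fit(X, y)
combined_score = model.predict_proba(X)[:, 1]  # probability from all predictors
print(f"AUC = {roc_auc_score(y, combined_score):.2f}")  # near chance here, since the data are random
```

On real data the AUC would be estimated on the fitted probabilities as above (ideally with cross-validation); the 0.85 reported in the abstract reflects the study's actual predictors.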
abstract_id: PUBMED:16193299
The effectiveness of calcitonin on chronic back pain and daily activities in postmenopausal women with osteoporosis. The aim of this study was to investigate the effect of nasal calcitonin on chronic back pain and disability attributed to osteoporosis. The study design involved three groups of osteoporotic postmenopausal women suffering from chronic back pain. Group I consisted of 40 women with vertebral fractures, group II of 30 women with degenerative disorders and group III of 40 patients with non-specific chronic back pain and without abnormality on plain X-rays. Pain intensity was measured using a numerical rating scale (NRS) and disability due to back pain was measured using the Oswestry disability questionnaire. The patients were randomly assigned to receive, for three months, either 200 IU intranasal salmon calcitonin and 1,000 mg of oral calcium daily (groups IA, IIA, IIIA) or 1,000 mg of oral calcium daily (groups IB, IIB, IIIB). Repeated measures ANOVA showed that there were no significant time, group or interaction effects for pain intensity and disability in any of the groups studied. Mean Oswestry and NRS scores were reduced during the follow-up period in groups IA and IIIA, but the differences between the two time points were not statistically significant. Intranasal calcitonin has no effect on chronic back pain intensity and functional capacity of osteoporotic women regardless of the presence of fractures, degenerative disorders or chronic back pain of non-specific etiology.
abstract_id: PUBMED:19680106
The relationship between back pain and future vertebral fracture in postmenopausal women. Study Design: Cross sectional and prospective observational study in Japanese postmenopausal women.
Objective: The aim of the study was 2-fold. The first was to investigate what kind of comorbidities could be found in conjunction with back pain in Japanese postmenopausal women. The second was to investigate whether a significant relationship exists between baseline back pain and future fracture.
Summary Of Background Data: Back pain has been reported to be associated with vertebral degeneration or vertebral fracture. However, no data have been available indicating the relationship between back pain and future fracture risk.
Methods: The subjects who visited their practitioner were examined for prevalent back pain or pain at other sites. Bone mineral density, body height, body weight, and serum parameters were measured at baseline, and comorbidities were investigated by interview. Fragility fractures were also assessed at baseline and then followed up at 1- to 2-year intervals. The correlation between back pain and baseline characteristics was investigated by logistic regression analysis. The hazard ratio of back pain for future vertebral fracture was estimated by multivariate Cox regression analysis.
Results: A total of 899 postmenopausal ambulatory women (62.5 +/- 10.3 years old) were enrolled and 81 subjects dropped out of the study within 1 year. The remaining 818 postmenopausal women (62.1 +/- 10.3 years) were followed up for 5.7 +/- 3.5 years. Compared to the group without pain, the group with back pain had significantly higher age, lower bone mineral densities at the lumbar spine and hip, and a higher number of prevalent vertebral fractures. Back pain was significantly associated with rheumatoid arthritis (odds ratio [OR]: 2.01, P < 0.05), prevalent vertebral fracture (OR: 4.60, P < 0.001) and osteoporosis (OR: 2.14, P < 0.001). A total of 189 future fractures were observed, of which the most frequent were vertebral fractures (78.3%). The Cox proportional hazards model demonstrated that baseline back pain was a significant risk factor for time-dependent vertebral fractures (hazard ratio: 1.62, 95% confidence interval: 1.16-2.27, P = 0.005) after adjusting for traditional risk factors, such as age, bone mineral density, and prevalence of vertebral fractures.
Conclusion: The data obtained in this study indicated that back pain is significantly associated with osteoporosis and rheumatoid arthritis and that it can be a useful predictor of future vertebral fracture risk in Japanese postmenopausal women in clinical settings.
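For readers unfamiliar with how the adjusted hazard ratio above is produced, the following is a minimal sketch of a multivariate Cox regression using the lifelines library. The DataFrame columns and all values are fabricated for illustration; they are not the study's data.

```python
# Hypothetical sketch: adjusted Cox proportional hazards model, as in the
# abstract above. All data below are randomly generated placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 818  # number of women followed up in the study
df = pd.DataFrame({
    "followup_years": rng.exponential(5.7, n),    # time to event or censoring
    "vertebral_fracture": rng.integers(0, 2, n),  # event indicator
    "baseline_back_pain": rng.integers(0, 2, n),  # exposure of interest
    "age": rng.normal(62.1, 10.3, n),             # adjustment covariates
    "lumbar_bmd": rng.normal(0.85, 0.15, n),
    "prevalent_vf": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="vertebral_fracture")
print(np.exp(cph.params_))  # exp(coefficient) = hazard ratio per covariate
```

The hazard ratio for baseline_back_pain, adjusted for the other columns, is the analogue of the 1.62 (95% CI 1.16-2.27) reported above.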
Answer: Yes, strong back extensors may help prevent vertebral fractures in women with osteoporosis. A cross-sectional study of 36 women with osteoporosis found a significant negative correlation between the strength of the back extensor muscles and the number of vertebral compression fractures. This suggests that increasing back strength could be an effective therapeutic intervention for the osteoporotic spine, potentially decreasing the risk of vertebral fractures in persons with stronger back muscles (PUBMED:8820769).
Additionally, wearing spinal orthoses like Spinomed has been shown to decrease pain and increase trunk muscle strength (F/Wabdominals, F/Wextensors) in postmenopausal women with vertebral fractures, indicating that orthoses could be an effective intervention for back pain and muscle strengthening in osteoporotic women (PUBMED:25138115).
Moreover, a randomized controlled trial found that treatment with an activating spinal orthosis for 6 months showed no significant difference in back pain, back extensor strength, or kyphosis index between groups. However, the spinal orthosis group did experience a slight decrease in present back pain and a 26.9% increase in back extensor strength, suggesting that the spinal orthosis may become an alternative training method (PUBMED:30623268).
Furthermore, another study demonstrated that wearing an active spinal orthosis, Spinomed III, for a 3-month period was associated with a 50% increase in back extensor strength, a 33% reduction in back pain, and an improvement in physical functioning in women with osteoporotic vertebral fractures (PUBMED:23986468).
These findings collectively support the notion that strong back extensors, potentially enhanced through specific interventions such as spinal orthoses or targeted exercises, can play a role in preventing vertebral fractures in women with osteoporosis. |
Instruction: Emergency Hospital Admissions for Initial Febrile Urinary Tract Infection: Do Patient Demographics Matter?
Abstracts:
abstract_id: PUBMED:26293412
Emergency Hospital Admissions for Initial Febrile Urinary Tract Infection: Do Patient Demographics Matter? Background: In 2011, the American Academy of Pediatrics revised practice parameters regarding febrile urinary tract infection (fUTI) in children aged 2-24 months. The Section on Urology opposed the omission of voiding cystourethrogram (VCUG), and expressed concern that potential untoward consequences of deferring VCUG may be most felt by children on Medicaid.
Objective: We ascertained imaging practices and characteristics of children presenting to the Emergency Department (ED) with initial fUTI to determine the impact of patient demographics on admissions for pyelonephritis.
Methods: Children aged 2-24 months presenting to the ED with initial fUTI were identified. Demographics, insurance status, laboratory studies, renal-bladder ultrasound (RBUS), VCUG, and hospital admission status were evaluated.
Results: Three-hundred fifty patients met inclusion criteria; 88 (25.1%) were admitted. Admitted patients were significantly (p < 0.001) younger (mean 0.31 ± 0.33 years) than those managed as outpatients (mean 0.91 ± 0.7 years). On univariate analysis, male gender (p < 0.001), Medicaid insurance (p < 0.05), and non-Hispanic race (p < 0.05) were associated with admission. Race retained significance on multivariate analysis; Caucasian children were 2.35 times (95% confidence interval [CI] 0.79-7.23) and African-American children 3.8 times more likely to be admitted than Hispanic patients (95% CI 1.88-7.63). Children with abnormal RBUS were 12.8 times more likely to require admission (95% CI 4.44-37.0). Medicaid was also independently predictive of admission; such patients were 2.6 times more likely to be admitted than those with private insurance (95% CI 1.15-5.88).
Conclusions: Abnormal ultrasound, non-Hispanic race, and public insurance were strongly associated with hospital admission in children presenting to the ED with initial febrile urinary tract infection.
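The odds ratios quoted above come from standard 2x2-table (or regression) arithmetic. As a worked example, here is how an odds ratio and its Woolf 95% confidence interval are computed from cell counts; the counts below are made up and do not come from the paper.

```python
# Worked example of odds-ratio arithmetic; all counts are hypothetical.
import math

a, b = 30, 58    # exposed group (e.g., Medicaid): admitted / not admitted
c, d = 40, 222   # reference group (private insurance): admitted / not admitted

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

The multivariate ORs in the abstract are additionally adjusted for the other covariates via logistic regression, so they will not in general equal the crude 2x2 value.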
abstract_id: PUBMED:34800973
Impact of the COVID-19 pandemic on emergency department attendances and acute medical admissions. Background: To better understand the impact of the COVID-19 pandemic on hospital healthcare, we studied activity in the emergency department (ED) and acute medicine department of a major UK hospital.
Methods: Electronic patient records for all adult patients attending ED (n = 243,667) or acute medicine (n = 82,899) during the pandemic (2020-2021) and prior year (2019) were analysed and compared. We studied parameters including severity, primary diagnoses, co-morbidity, admission rate, length of stay, bed occupancy, and mortality, with a focus on non-COVID-19 diseases.
Results: During the first wave of the pandemic, daily ED attendance fell by 37%, medical admissions by 30% and medical bed occupancy by 27%, but all returned to normal within a year. ED attendances and medical admissions fell across all age ranges; the greatest reductions were seen for younger adults in ED attendances, but in older adults for medical admissions. Compared to non-COVID-19 pandemic admissions, COVID-19 admissions were enriched for minority ethnic groups, for dementia, obesity and diabetes, but had lower rates of malignancy. Compared to the pre-pandemic period, non-COVID-19 pandemic admissions had more hypertension, cerebrovascular disease, liver disease, and obesity. There were fewer low severity ED attendances during the pandemic and fewer medical admissions across all severity categories. There were fewer ED attendances with common non-respiratory illnesses including cardiac diagnoses, but no change in cardiac arrests. COVID-19 was the commonest diagnosis amongst medical admissions during the first wave and there were fewer diagnoses of pneumonia, myocardial infarction, heart failure, cellulitis, chronic obstructive pulmonary disease, urinary tract infection and other sepsis, but not stroke. Levels had rebounded by a year later with a trend to higher levels of stroke than before the pandemic. During the pandemic first wave, 7-day mortality was increased for ED attendances, but not for non-COVID-19 medical admissions.
Conclusions: Reduced ED attendances in the first wave of the pandemic suggest opportunities for reducing low severity presentations to ED in the future, but also raise the possibility of harm from delayed or missed care. Reassuringly, recent rises in attendance and admissions indicate that any deterrent effect of the pandemic on attendance is diminishing.
abstract_id: PUBMED:28939397
National Survey of Emergency Physicians Concerning Home-Based Care Options as Alternatives to Emergency Department-Based Hospital Admissions. Background: Emergency departments (EDs) in the United States play a prominent role in hospital admissions, especially for the growing population of older adults. Home-based care, rather than hospital admission from the ED, provides an important alternative, especially for older adults who have a greater risk of adverse events, such as hospital-acquired infections, falls, and delirium.
Objective: The objective of the survey was to understand emergency physicians' (EPs) perspectives on home-based care alternatives to hospitalization from the ED. Specific goals included determining how often EPs ordered home-based care, what they perceive as the barriers and motivators for more extensive ordering of home-based care, and the specific conditions and response times most appropriate for such care.
Methods: A group of 1200 EPs nationwide were e-mailed a six-question survey.
Results: Participant response was 57%. Of these, 55% reported ordering home-based care from the ED within the past year as an alternative to hospital admission or observation, with most doing so less than once per month. The most common barrier was an "unsafe or unstable home environment" (73%). Home-based care as a "better setting to care for low-acuity chronic or acute disease exacerbation" was the top motivator (79%). Medical conditions EPs most commonly considered for home-based care were cellulitis, urinary tract infection, diabetes, and community-acquired pneumonia.
Conclusions: Results suggest that EPs recognize there is a benefit to providing home-based care as an alternative to hospitalization, provided they felt the home was safe and a process was in place for dispositioning the patient to this setting. Better understanding of when and why EPs use home-based care pathways from the ED may provide suggestions for ways to promote wider adoption.
abstract_id: PUBMED:37956847
Impact of heat on emergency hospital admissions related to kidney diseases in Texas: Uncovering racial disparities. Background And Objective: While the impact of heat exposure on human health is well-documented, limited research exists on its effect on kidney disease hospital admissions, especially in Texas, a state with diverse demographics and a high heat-related death rate. We aimed to explore the link between high temperatures and emergency kidney disease hospital admissions across 12 Texas Metropolitan Statistical Areas (MSAs) from 2004 to 2013, considering causes, age groups, and ethnic populations.
Methods: To investigate the correlation between high temperatures and emergency hospital admissions, we utilized MSA-level hospital admission and weather data. We employed a Generalized Additive Model to calculate the association specific to each MSA, and then performed a random effects meta-analysis to estimate the overall correlation. Analyses were stratified by age groups, admission causes, and racial/ethnic disparities. Sensitivity analysis involved lag modifications and ozone inclusion in the model.
Results: Our analysis found that each 1 °C increase in temperature was associated with a 1.73 % (95 % CI [1.43, 2.03]) increase in hospital admissions related to all types of kidney diseases. In addition, the effect estimates varied across different age groups and specific types of kidney diseases. We observed statistically significant associations between high temperatures and emergency hospital admissions for Acute Kidney Injury (AKI) (3.34 % (95 % CI [2.86, 3.82])), Kidney Stone (1.76 % (95 % CI [0.94, 2.60])), and Urinary Tract Infections (UTI) (1.06 % (95 % CI [0.61, 1.51])). Our findings also indicate disparities in certain MSAs. In the Austin, Houston, San Antonio, and Dallas metropolitan areas, the estimated effects are more pronounced for African Americans when compared to the White population. Additionally, in Dallas, Houston, El Paso, and San Antonio, the estimated effects are greater for the Hispanic group compared to the Non-Hispanic group.
Conclusions: This study finds a strong link between higher temperatures and kidney disease-related hospital admissions in Texas, especially for AKI. Public health actions are necessary to address these temperature-related health risks, including targeted kidney health initiatives. More research is needed to understand the mechanisms and address health disparities among racial/ethnic groups.
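The "% increase per 1 °C" figures above are the usual transformation of a log-scale rate coefficient from a Poisson-family model such as the GAM described in the methods. The sketch below shows the conversion; the coefficient and standard error are invented values, chosen only so the output matches the headline estimate.

```python
# Converting a log-rate coefficient into a percent change per 1 degree C.
# beta and se are illustrative values, not taken from the study's model.
import math

beta, se = 0.01715, 0.0015  # log(admission rate) change per 1 degree C
pct = (math.exp(beta) - 1) * 100
lower = (math.exp(beta - 1.96 * se) - 1) * 100
upper = (math.exp(beta + 1.96 * se) - 1) * 100
print(f"{pct:.2f}% per 1 degree C (95% CI [{lower:.2f}, {upper:.2f}])")
# -> 1.73% per 1 degree C (95% CI [1.43, 2.03])
```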
abstract_id: PUBMED:28893817
Preventable Emergency Hospital Admissions Among Adults With Intellectual Disability in England. Purpose: Adults with intellectual disabilities experience poorer physical health and health care quality, but there is limited information on the potential for reducing emergency hospital admissions in this population. We describe overall and preventable emergency admissions for adults with vs without intellectual disabilities in England and assess differences in primary care management before admission for 2 common ambulatory care-sensitive conditions (ACSCs).
Methods: We used electronic records to study a cohort of 16,666 adults with intellectual disabilities and 113,562 age-, sex-, and practice-matched adults without intellectual disabilities from 343 English family practices. Incidence rate ratios (IRRs) from conditional Poisson regression were analyzed for all emergency and preventable emergency admissions. Primary care management of lower respiratory tract infections and urinary tract infections, as exemplar ACSCs, before admission was compared in unmatched analyses between adults with and without intellectual disabilities.
Results: The overall rate for emergency admissions for adults with vs without intellectual disabilities was 182 vs 68 per 1,000 per year (IRR = 2.82; 95% CI, 2.66-2.98). ACSCs accounted for 33.7% of emergency admissions among the former compared with 17.3% among the latter (IRR = 5.62; 95% CI, 5.14-6.13); adjusting for comorbidity, smoking, and deprivation did not fully explain the difference (IRR = 3.60; 95% CI, 3.25-3.99). Although adults with intellectual disability were at nearly 5 times higher risk for admission for lower respiratory tract infections and urinary tract infections, they had similar primary care use, investigation, and management before admission as the general population.
Conclusions: Adults with intellectual disabilities are at high risk for preventable emergency admissions. Identifying strategies for better detecting and managing ACSCs, including lower respiratory and urinary tract infections, in primary care could reduce hospitalizations.
abstract_id: PUBMED:15884038
Emergency hospital admissions in idiopathic Parkinson's disease. Little is known about the hospital inpatient care of patients with idiopathic Parkinson's disease (PD). Here, we describe the features of the emergency hospital admissions of a geographically defined population of PD patients over a 4-year period. Patients with PD were identified from a database for a Parkinson's disease service in a district general hospital with a drainage population of approximately 180,000. All admissions of this patient subgroup to local hospitals were found from the computer administration system. Two clinicians experienced in both general medicine and PD then reviewed the notes to identify reasons for admission. Admission sources and discharge destinations were recorded. Data regarding non-PD patients was compared to PD patients on the same elderly care ward over the same time period. The total number of patients included in the analysis was 367. There was a total exposure of 775.8 years and a mean duration of 2.11 years per patient. There were 246 emergency admissions to the hospital with a total duration of stay of 4,257 days (mean, 17.3 days). These days were accounted for by 129 patients (mean age, 78 years; 48% male). PD was first diagnosed during 12 (4.9%) of the admissions. The most common reasons for admission were as follows: falls (n=44, 14%), pneumonia (n=37, 11%), urinary tract infection (n=28, 9%), reduced mobility (n=27, 8%), psychiatric (n=26, 8%), angina (n=21, 6%), heart failure (n=20, 6%), fracture (n=14, 4%), orthostatic hypotension (n=13, 4%), surgical (n=13, 4%), upper gastrointestinal bleed (n=10, 3%), stroke/transient ischemic attack (n=8, 2%), and myocardial infarction (n=7, 2%). The mean length of stay for the PD patients on the care of elderly ward specializing in PD care was 21.3 days compared to 17.8 days for non-PD patients. After hospital admission, there was a reduction in those who returned to their own home from 179 to 163 and there was an increase in those requiring nursing home care from 37 to 52. Infections, cardiovascular diseases, falls, reduced mobility, and psychiatric complications accounted for the majority of admissions. By better understanding the way people with PD use hospital services, we may improve quality of care and perhaps prevent some inpatient stays and care-home placements.
abstract_id: PUBMED:26638742
Better use of technology can cut emergency admissions. Hospital staff are bracing themselves for a surge in emergency admissions this winter, but a new report claims the effect of the rise could be mitigated if financial savings are made elsewhere in the NHS.
abstract_id: PUBMED:23401058
Increase in emergency admissions to hospital for children aged under 15 in England, 1999-2010: national database analysis. Objective: To investigate a reported rise in the emergency hospital admission of children in England for conditions usually managed in the community.
Setting And Design: Population-based study of hospital admission rates for children aged under 15, based on analysis of Hospital Episode Statistics and population estimates for England, 1999-2010.
Main Outcome: Trends in rates of emergency admission to hospital.
Results: The emergency admission rate for children aged under 15 in England has increased by 28% in the past decade, from 63 per 1000 population in 1999 to 81 per 1000 in 2010. A persistent year-on-year increase is apparent from 2003 onwards. A small decline in the rates of admissions lasting 1 day or more has been offset by a twofold increase in short-term admissions of <1 day. Considering the specific conditions where high emergency admission rates are thought to be inversely related to primary care quality, admission rates for upper respiratory tract infections rose by 22%, lower respiratory tract infections by 40%, urinary tract infections by 43% and gastroenteritis by 31%, while admission rates for chronic conditions fell by 5.6%.
Conclusions: The continuing increase in very-short-term admission of children with common infections suggests a systematic failure, both in primary care (by general practice, out-of-hours care and National Health Service Direct) and in hospital (by emergency departments and paediatricians), in the assessment of children with acute illness that could be managed in the community. Solving the problem is likely to require restructuring of the way acute paediatric care is delivered.
abstract_id: PUBMED:32164697
Direct and lost productivity costs associated with avoidable hospital admissions. Background: Hospitalizations for ambulatory care sensitive conditions are commonly used to evaluate primary health care performance, as the hospital admission could be avoided if care was timely and adequate. Previous evidence indicates that avoidable hospitalizations carry a substantial direct financial burden in some countries. However, no attention has been given to the economic burden on society they represent. The aim of this study is to estimate the direct and lost productivity costs of avoidable hospital admissions in Portugal.
Methods: Hospitalizations occurring in Portugal in 2015 were analyzed. Avoidable hospitalizations were defined and their associated costs and years of potential life lost were calculated. Direct costs were obtained using official hospitalization prices. For lost productivity, the costs of absenteeism and premature death were estimated. Costs were analyzed by components, by conditions and by variations in estimation parameters.
Results: The total estimated cost associated with avoidable hospital admissions was €250 million (€2515 per hospitalization), corresponding to 6% of the total budget of public hospitals in Portugal. These hospitalizations led to 109,641 years of potential life lost. Bacterial pneumonia, congestive heart failure and urinary tract infection accounted for 77% of the overall costs. Nearly 82% of avoidable hospitalizations were in patients aged 65 years or older and therefore did not contribute to lost productivity costs. Nearly 84% of the total cost comes from the direct cost of the hospitalization. Lost productivity costs are estimated to be around €40 million.
Conclusion: The age distribution of avoidable hospitalizations had a significant effect on costs components. Not only did hospital admissions have a substantial direct economic impact, they also imposed a considerable economic burden on society. Substantial financial resources could potentially be saved if the country reduced avoidable hospitalizations.
abstract_id: PUBMED:25650690
Emergency department visits and resulting hospitalizations by elderly nursing home residents, 2001-2008. This study examines emergency department (ED) visits by nursing home (NH) residents aged 65 and over, and factors associated with hospital admission from the ED visit using data from the 2001-2008 National Hospital Ambulatory Medical Care Survey. Cross-sectional analyses were conducted on patient characteristics, diagnosis, procedures received, and triage status. On average, elderly NH residents visited EDs at a rate of 123 visits per 100 institutionalized persons. Nearly 15% of all ED visits had ambulatory care sensitive condition diagnoses. Nearly half of these visits resulted in hospital admission; chronic obstructive pulmonary disease, congestive heart failure, kidney/urinary tract infection, and dehydration were associated with higher odds of admission. Previous studies suggested that adequate medical staffing and appropriate care in the NH could reduce ED visits and hospital admissions. Recent initiatives seek to reduce ED visits and hospitalizations by providing financial incentives to spur better coordination between NH and hospital.
Answer: Yes, patient demographics do matter in emergency hospital admissions for initial febrile urinary tract infection (fUTI). A study examining children aged 2-24 months presenting to the Emergency Department (ED) with initial fUTI found that patient demographics such as age, gender, insurance status, and race were associated with hospital admissions for pyelonephritis. Specifically, admitted patients were significantly younger than those managed as outpatients. Male gender, Medicaid insurance, and non-Hispanic race were associated with admission. Caucasian children were 2.35 times and African-American children 3.8 times more likely to be admitted than Hispanic patients. Children with abnormal renal-bladder ultrasound (RBUS) were 12.8 times more likely to require admission. Medicaid patients were 2.6 times more likely to be admitted than those with private insurance (PUBMED:26293412).
These findings suggest that certain demographic factors, including race and public insurance status, are strongly associated with hospital admission in children presenting to the ED with initial fUTI. This indicates that disparities in healthcare access and utilization may influence the likelihood of hospital admission for pediatric patients with fUTI. |
Instruction: Is breaking of bad news indeed unacceptable to native Africans?
Abstracts:
abstract_id: PUBMED:28066979
When proteostasis goes bad: Protein aggregation in the cell. Protein aggregation is a hallmark of the major neurodegenerative diseases, including Alzheimer's, Parkinson's, Huntington's and motor neuron disease, and is a symptom of a breakdown in the management of proteome foldedness. Indeed, it is remarkable that under normal conditions cells can keep their proteome in a highly crowded and confined space without uncontrollable aggregation. Proteins pose a particular challenge relative to other classes of biomolecules because upon synthesis they must typically follow a complex folding pathway to reach their functional conformation (native state). Non-native conformations, including the unfolded nascent chain, are highly prone to aberrant interactions, leading to aggregation. Here we review recent advances in knowledge of proteostasis, approaches to monitor proteostasis and the impact that protein aggregation has on biology. We also include discussion of the outstanding challenges. © 2017 IUBMB Life, 69(2):49-54, 2017.
abstract_id: PUBMED:28408417
Breaking bad…proteins. N/A
abstract_id: PUBMED:17003111
Imatinib spells BAD news for Bcr/abl-positive leukemias. N/A
abstract_id: PUBMED:23154169
Structure-based redesign of the binding specificity of anti-apoptotic Bcl-x(L). Many native proteins are multi-specific and interact with numerous partners, which can confound analysis of their functions. Protein design provides a potential route to generating synthetic variants of native proteins with more selective binding profiles. Redesigned proteins could be used as research tools, diagnostics or therapeutics. In this work, we used a library screening approach to reengineer the multi-specific anti-apoptotic protein Bcl-x(L) to remove its interactions with many of its binding partners, making it a high-affinity and selective binder of the BH3 region of pro-apoptotic protein Bad. To overcome the enormity of the potential Bcl-x(L) sequence space, we developed and applied a computational/experimental framework that used protein structure information to generate focused combinatorial libraries. Sequence features were identified using structure-based modeling, and an optimization algorithm based on integer programming was used to select degenerate codons that maximally covered these features. A constraint on library size was used to ensure thorough sampling. Using yeast surface display to screen a designed library of Bcl-x(L) variants, we successfully identified a protein with ~1000-fold improvement in binding specificity for the BH3 region of Bad over the BH3 region of Bim. Although negative design was targeted only against the BH3 region of Bim, the best redesigned protein was globally specific against binding to 10 other peptides corresponding to native BH3 motifs. Our design framework demonstrates an efficient route to highly specific protein binders and may readily be adapted for application to other design problems.
abstract_id: PUBMED:22470191
The role of the ERM protein family in maintaining cellular polarity, adhesion and regulation of cell motility Ezrin, radixin and moesin, which form the ERM protein family, act as molecular crosslinkers between actin filaments and proteins anchored in the cell membrane. By participating in a complex intracellular network of signal transduction pathways, ERM proteins play a key role in the regulation of adhesion and polarity of normal cells through interactions with membrane molecules, e.g. E-cadherin. Dynamic cytoskeletal transformations, in which ERM proteins and Rho GTPases are involved, lead to the formation of membrane-cytoplasmic structures, such as filopodia and lamellipodia, which are responsible for cellular motility. The interactions of ERM proteins with active Akt kinase confer antiapoptotic features on cells by downregulation of the proapoptotic protein Bad. ERM protein activity is regulated by phosphorylation/dephosphorylation reactions and by the binding of phosphatidylinositols. A model of activation in which conformational changes break intramolecular bonds and expose actin-binding sites is essential for understanding the proper functioning of ERM proteins. Additionally, the types of connection between ERM and membrane proteins (direct, or indirect via EBP50 and E3KARP) play an important role in the transduction of signals from the extracellular matrix. Given the wide range of cytophysiological features of ezrin, radixin and moesin, detailed exploration of ERM biochemistry will provide answers to questions about their ambiguous functions in many intracellular signal transduction pathways.
abstract_id: PUBMED:22214866
A moderate decline in U937 cell GSH levels triggers PI3 kinase/Akt-dependent Bad phosphorylation, thereby preventing an otherwise prompt apoptotic response. We report that a moderate decline in GSH levels causes remarkable changes in Bad sub-cellular localization. A reduction of about 30% in the GSH pool, regardless of whether mediated by diamide or DL-buthionine-[S,R]-sulfoximine, indeed promoted loss of the fraction of Bad normally associated with the mitochondria of untreated U937 cells via a phosphatidylinositol 3-kinase (PI3K)-dependent mechanism. Interestingly, inhibition of this pathway was associated with an unexpected delayed lethal response, preceded by the translocation and enforced accumulation of Bad and Bax in the mitochondrial compartment, prevented by inhibitors of mitochondrial permeability transition and characterized by morphological and biochemical features of apoptosis. Collectively, the results presented here demonstrate that a mild redox imbalance associated with a slight reduction of the GSH pool commits U937 cells to apoptosis, which is, however, prevented by events leading to PI3K/Akt-dependent mitochondrial loss of Bad.
abstract_id: PUBMED:20700721
Assessing Bad sub-cellular localization under conditions associated with prevention or promotion of mitochondrial permeability transition-dependent toxicity. Cells belonging to the monocyte/macrophage lineage are in general highly resistant to peroxynitrite, a reactive nitrogen species extensively produced by these and other cell types under inflammatory conditions. Resistance is not dependent on the scavenging of peroxynitrite but is rather associated with the prompt activation of survival signaling in response to various molecules largely available at inflammatory sites, such as arachidonic acid and products of the 5-lipoxygenase or cyclooxygenase pathways. We detected significant levels of Bad in the mitochondria of monocytes/macrophages and found that these signaling pathways converge in Bad phosphorylation, and thus in its cytosolic accumulation. Phosphorylation inhibits binding of Bad to Bcl-2, or BclXL, and promotes its translocation to the cytosol, thereby enabling Bcl-2 and BclXL to exert effects leading to prevention of mitochondrial permeability transition (MPT). Upstream inhibition of the survival signaling indeed promotes the mitochondrial accumulation of Bad and the rapid onset of MPT-dependent toxicity. The above results contribute to the definition of the mechanism(s) whereby monocytes/macrophages survive exposure to peroxynitrite in inflamed tissues.
abstract_id: PUBMED:31028861
The synthetic flavagline FL3 spares normal human skin cells from its cytotoxic effect via an activation of Bad. The molecular pathways by which flavagline derivatives exert their cytotoxicity against various cancer cell types are well documented, while the mechanisms that prevent their cytotoxic effects on normal cells still have to be clarified. Here we describe the molecular events by which normal skin cells remain unaffected after exposure to the synthetic flavagline FL3. Indeed, the anticancer agent fails to trigger apoptosis of healthy cells and is unable to induce the depolarization of their mitochondrial membrane and the cytosolic release of cytochrome C, in contrast to what is observed for cancer cells. Most importantly, FL3 specifically induces in normal cells, but not in malignant cells, an activation of Bad, without significant mitochondrial and cytosolic redistribution of Bax or Bcl-2. Moreover, gene knockdown of Bad sensitizes the normal fibroblastic cells to FL3 and induces a caspase-3 dependent apoptosis. Bad activation, known to promote survival and block apoptosis, therefore explains the lack of cytotoxicity of FL3 on normal skin cells. Finally, these findings provide new insights into the molecular mechanisms of resistance of healthy cells against FL3 cytotoxicity and identify it as a promising anticancer drug.
abstract_id: PUBMED:29872558
TFAM is a novel mediator of immunogenic cancer cell death. Immunogenic cell death (ICD) is a type of cell death that is accompanied by the release of damage-associated molecular patterns (DAMPs) and results in a dead-cell antigen-specific immune response. Here, we report that spautin-1, an inhibitor of ubiquitin-specific peptidases, triggers immunogenic cancer cell death in vitro and in vivo. The anticancer activity of spautin-1 occurs independent of autophagy inhibition, but depends on the intrinsic mitochondrial apoptosis pathway. Spautin-1 causes mitochondrial oxidative injury, which results in JUN transcription factor activation in a JNK-dependent manner. Mechanistically, activation of JUN by spautin-1 leads to apoptosis by upregulation of pro-apoptotic BAD expression. Importantly, the release of TFAM, a mitochondrial DAMP, by apoptotic cells may contribute to spautin-1-induced ICD via its action on the receptor AGER. Indeed, cancer cells treated with spautin-1 in vitro were able to elicit an anticancer immune response when inoculated in vivo, in the absence of any adjuvant. This immunogenic effect of spautin-1-treated cancer cells was lost when TFAM or AGER were neutralized by specific antibodies. Altogether, our results suggest that spautin-1 may stimulate an apoptotic pathway that results in ICD, in TFAM- and AGER-dependent fashion.
abstract_id: PUBMED:15955068
7-Ketocholesterol-induced apoptosis. Involvement of several pro-apoptotic but also anti-apoptotic calcium-dependent transduction pathways. Oxysterols, and particularly 7-ketocholesterol, appear to be strongly involved in the physiopathology of atherosclerosis. These molecules are suspected to be cytotoxic to the cells of the vascular wall and monocytes/macrophages, particularly by inducing apoptosis. Previous studies have demonstrated that 7-ketocholesterol-induced apoptosis is triggered by a sustained increase of cytosolic-free Ca2+, which elicits the mitochondrial pathway of apoptosis by activation of the calcium-dependent phosphatase calcineurin, leading to dephosphorylation of the 'BH3 only' protein BAD. However, thorough study of the results suggests that other pathways are implicated in 7-ketocholesterol-induced cytotoxicity. In this study, we demonstrate the involvement of two other calcium-dependent pathways during 7-ketocholesterol-induced apoptosis: activation of the MEK-->ERK pathway by the calcium-dependent tyrosine kinase PYK2, a survival pathway that delays apoptosis, as shown by the use of the MEK inhibitor U0126, and a pathway involving another pro-apoptotic BH3-only protein, Bim. Indeed, 7-ketocholesterol treatment of human monocytic THP-1 cells induces the release of Bim-LC8 from the microtubule-associated dynein motor complex, and its association with Bcl-2. Therefore, it appears that 7-ketocholesterol-induced apoptosis is a complex phenomenon resulting from calcium-dependent activation of several pro-apoptotic pathways and also one survival pathway.
Answer: The provided abstracts do not contain information regarding the cultural acceptability of breaking bad news to native Africans. The abstracts focus on scientific research related to protein aggregation in cells, the structure-based redesign of protein binding specificity, the role of the ERM protein family in cellular functions, the effects of glutathione levels on cell apoptosis, the sub-cellular localization of the protein Bad under certain conditions, the resistance of normal cells to the cytotoxic effects of a synthetic flavagline, the role of TFAM in immunogenic cancer cell death, and the involvement of calcium-dependent transduction pathways in 7-ketocholesterol-induced apoptosis. None of these abstracts address cultural practices or attitudes toward delivering bad news in any population, including native Africans (PUBMED:28066979, PUBMED:28408417, PUBMED:17003111, PUBMED:23154169, PUBMED:22470191, PUBMED:22214866, PUBMED:20700721, PUBMED:31028861, PUBMED:29872558, PUBMED:15955068). |
Instruction: Does caffeine impair cerebral oxygenation and blood flow velocity in preterm infants?
Abstracts:
abstract_id: PUBMED:2529912
Effect of caffeine on cerebral blood flow velocity in preterm infants. A continuous-wave Doppler monitor was used to examine the effect of caffeine on cerebral blood flow velocity (CBFV) in 7 clinically stable preterm neonates suffering from apnea. Caffeine, in the form of caffeine citrate, or saline were given intravenously at loading doses of 20 mg/kg. Every subject was his own control. Placebo (saline) was systematically injected prior to caffeine citrate. Simultaneous recordings of heart rate, arterial blood pressure, respiratory rate, TcPO2 and TcPCO2 were made before, at the end of the injection, and 30, 60 and 120 min after the end of each administration of either placebo or caffeine. Compared with placebo, caffeine injection was not associated with significant changes in CBFV. An increase was found in both heart rate and respiratory rate (p less than 0.05). Mean arterial blood pressure, TcPCO2 and TcPO2 did not change significantly. Our data suggest that a caffeine citrate loading dose of 20 mg/kg as currently used at the beginning of treatment of apnea in preterm neonates has no effect on CBFV.
abstract_id: PUBMED:2533061
Caffeine and cerebral blood flow velocity in preterm infants. A continuous-wave Doppler monitor was used to examine the effect of caffeine on cerebral blood flow velocity (CBFV) in 7 clinically stable preterm neonates suffering from apnea. Caffeine, as caffeine citrate at a loading dose of 20 mg.kg-1 BW, or saline were given intravenously. Every subject was his own control. Placebo (saline) was systematically injected prior to caffeine citrate. Simultaneous recordings of heart rate, arterial blood pressure, respiratory rate, Tc PO2, and Tc PCO2 were made before, at the end of the injection, and 30, 60 and 120 min after the end of each administration of either placebo or caffeine. Compared with placebo, caffeine injection was not associated with significant changes in CBFV. An increase was found in both heart rate and respiratory rate (p less than 0.05). Mean arterial blood pressure, Tc PCO2 and Tc PO2 did not change significantly. Our data suggest that a caffeine citrate loading dose of 20 mg.kg-1 BW as currently used at the beginning of treatment of apnea in preterm neonates has no effect on CBFV.
abstract_id: PUBMED:28889765
Effect of caffeine on superior mesenteric artery blood flow velocities in preterm neonates. Objective: To investigate the effect of caffeine infusion on superior mesenteric artery (SMA) blood flow velocities (BFV) in preterm infants.
Methods: Prospective observational study on 38 preterm neonates 28-33+6 weeks gestation, who developed apnea on their first day of life, and caffeine citrate infusion was initiated at a loading dose of 20 mg/kg, followed by a maintenance dose of 5-10 mg/kg/day. Duplex ultrasound measurements of SMA BFV were recorded: peak systolic velocity (PSV), end diastolic velocity (EDV) and resistive index (RI), at 15 min before, 1-, 2- and 6-h after caffeine loading dose, and 2 h after two maintenance doses.
Results: There was a significant reduction in PSV 1-h (p = .008), a significant decrease in EDV 1- and 2-h (p = .000 and p = .005, respectively) and a significant increase in RI 1- and 2-h (p = .003 and p = .005, respectively) following caffeine loading dose, as compared to values before caffeine infusion. No significant effect of caffeine maintenance doses on SMA BFV was observed (p > .05).
Conclusion: Blood flow in SMA is significantly reduced after caffeine citrate infusion at a loading dose of 20 mg/kg. This effect continues for at least 2 h. Meanwhile, SMA BFV seems not affected by maintenance doses.
abstract_id: PUBMED:32198745
Effects of Single Loading Dose of Intravenous Caffeine on Cerebral Oxygenation in Preterm Infants. Objective: The aim of this study was to evaluate the effects of caffeine on cerebral oxygenation in preterm infants.
Study Design: This was a prospective study of infants with a gestational age (GA) of < 34 weeks who were treated intravenously with a loading dose of 20 mg/kg caffeine citrate within the first 48 hours of life. Regional cerebral oxygen saturation (rSO2C) and cerebral fractional tissue oxygen extraction (cFTOE) were measured using near-infrared spectroscopy before administering caffeine (baseline), immediately after administering caffeine, and 1, 2, 3, 4, 6, and 12 hours after dose completion; postdose values were compared with the baseline values.
Results: A total of 48 infants with a mean GA of 29.0 ± 1.9 weeks, birth weight of 1,286 ± 301 g, and postnatal age of 32.4 ± 11.3 hours were included in the study. rSO2C significantly decreased from 81.3 to 76.7% soon after administering caffeine, to 77.1% at 1 hour, and to 77.8% at 2 hours with recovery at 3 hours postdose. rSO2C was 80.2% at 12 hours postdose. cFTOE increased correspondingly. Although rSO2C values were lower and cFTOE values were higher compared with the baseline values at 3, 4, 6, and 12 hours after caffeine administration, this was not statistically significant.
Conclusion: A loading dose of caffeine temporarily reduces cerebral oxygenation and increases cerebral tissue oxygen extraction in preterm infants. Most probably these changes reflect a physiological phenomenon without any clinical importance to the cerebral hemodynamics, as the reduction in cerebral oxygenation and increase in cerebral tissue oxygen extraction remain well within acceptable range.
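Fractional tissue oxygen extraction, as used above, is conventionally derived from the arterial oxygen saturation (SaO2, usually taken from pulse oximetry) and the NIRS regional saturation: FTOE = (SaO2 - rSO2) / SaO2. A small sketch with the abstract's rSO2C values follows; note the SaO2 of 95% is an assumed figure, since the abstract does not report it.

```python
# Conventional FTOE computation; the SaO2 value is assumed for illustration.
def ftoe(sao2: float, rso2: float) -> float:
    """Fractional tissue oxygen extraction from arterial and regional saturation."""
    return (sao2 - rso2) / sao2

print(f"baseline:      {ftoe(95.0, 81.3):.3f}")  # rSO2C before caffeine
print(f"2 h post-dose: {ftoe(95.0, 77.8):.3f}")  # lower rSO2C -> higher extraction
```

This makes explicit why a fall in rSO2C at constant arterial saturation appears as a rise in cFTOE.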
abstract_id: PUBMED:7734901
Cerebral blood flow and left ventricular output in spontaneously breathing, newborn preterm infants treated with caffeine or aminophylline. Aminophylline and caffeine are commonly used for prophylaxis of apnea in premature infants. Previous studies have indicated different effects of the drugs on cerebral circulation. Therefore, we have compared the acute effects of bolus administration of caffeine citrate or aminophylline on left ventricular output, heart rate, blood pressure and global cerebral blood flow. The study group consisted of 33 newborn, spontaneously breathing, preterm infants randomly assigned to receive either aminophylline 5 mg/kg (n = 19) or caffeine citrate 20 mg/kg (n = 14). Two hours after iv drug administration, global cerebral blood flow measured by the Xe-clearance technique was significantly lower after aminophylline than after caffeine (mean (SD): 13.2 (+2.9/-2.3) versus 17.2 (+7.1/-5.1) ml/100 g/min; p = 0.01). There were no other statistically significant differences in circulatory or ventilatory parameters between the groups. Further studies are needed to clarify the clinical relevance of these results.
abstract_id: PUBMED:1420623
Perinatal pharmacology and cerebral blood flow. Many of the drugs used in neonatal intensive care units might impede cerebral blood flow, thereby increasing the risk of intraventricular hemorrhage and periventricular leukomalacia. Our studies focussed on sick preterm neonates who were treated with the following drugs: caffeine (20 mg/kg i.v., as caffeine citrate); phenobarbital (loading dose: 20 mg/kg); indomethacin (0.2 mg/kg/dose, every 12 h for three doses), and synthetic surfactant (Exosurf; 50 mg/kg = 5 ml/kg intratracheally). None of the drugs studied, with the exception of indomethacin, had an adverse effect on cerebral hemodynamics.
abstract_id: PUBMED:29318792
Hemodynamic Effects on Systemic Blood Flow and Ductal Shunting Flow after Loading Dose of Intravenous Caffeine in Preterm Infants according to the Patency of Ductus Arteriosus. Background: In preterm infants, caffeine citrate is used to stimulate breathing before they are weaned from mechanical ventilation and to reduce the frequency of apnea. In recent studies, effects of caffeine on the cardiovascular system have been emphasized in preterm infants with patent ductus arteriosus (PDA).
Methods: This study aimed to assess the short-term hemodynamic effects on systemic blood flow and ductal shunting flow after loading standard doses of intravenous caffeine in preterm infants. Echocardiographic studies were performed by a single investigator, before and at 1 hour and 4 hours after an intravenous loading dose of 20 mg/kg caffeine citrate infused over 30 minutes.
Results: In 25 preterm infants with PDA, left ventricular output decreased progressively during the 4 hours after caffeine loading. Superior vena cava (SVC) flow decreased and ductal shunting flow increased at 1 hour, then recovered to baseline values at 4 hours. The PDA diameter decreased significantly only at 4 hours after caffeine loading. There were no significant changes in these hemodynamic parameters in 29 preterm infants without PDA.
Conclusion: In preterm infants with PDA, a standard intravenous loading dose of 20 mg/kg caffeine citrate was associated with increased ductal shunting flow and decreased SVC flow (as a surrogate for systemic blood flow) 1 hour after caffeine loading; however, these hemodynamic parameters recovered at 4 hours, consistent with partial constriction of the ductus arteriosus. Close monitoring of hemodynamic changes would be needed to detect the risk of pulmonary over-circulation or systemic hypo-perfusion due to transiently increased ductal shunting flow during caffeine loading in preterm infants with PDA.
abstract_id: PUBMED:30900326
Early high-dose caffeine citrate for extremely preterm infants: Neonatal and neurodevelopmental outcomes. Aim: To examine neonatal morbidities, including the incidence of cerebellar haemorrhage (CBH), and neurodevelopmental outcomes following the administration of high loading dose caffeine citrate compared to standard loading dose caffeine citrate.
Methods: This was a retrospective study of 218 preterm infants <28 weeks' gestation who received a loading dose of caffeine citrate within the first 36 h of life at the Mater Mothers' Hospital over a 3-year period (2011-2013). Two groups were compared, with 158 neonates in the high-dose cohort receiving a median dose of caffeine citrate of 80 mg/kg and 60 neonates in the standard dose cohort receiving a median dose of 20 mg/kg. Routine cranial ultrasound, including mastoid views, was performed during the neonatal period. At 2 years of age, infants presented for follow-up and were assessed with the Neurosensory Motor Developmental Assessment (NSMDA) and the Bayley Scales of Infant and Toddler Development-III (Bayley-III).
Results: There was no difference in the incidence of neonatal morbidities, including CBH, between the two groups. The incidence of CBH in the high-dose group was 2.5% compared to 1.7% in the standard-dose group. There was no difference in the neurodevelopmental follow-up scores as evaluated with the NSMDA and the Bayley-III.
Conclusions: The use of early high loading dose caffeine citrate in extremely preterm infants was not shown to be associated with CBH or abnormal long-term neurodevelopmental outcomes. The overall incidence of CBH, however, was much lower than in studies using magnetic resonance imaging techniques. It is suggested that a large randomised clinical trial is needed to determine the optimal dose of caffeine citrate when given early to very preterm infants.
abstract_id: PUBMED:32069484
Effects of Caffeine on Splanchnic Oxygenation in Preterm Infants. Objective: The aim of this study is to assess the effects of administering 20 mg/kg loading dose of caffeine citrate intravenously on splanchnic oxygenation in preterm infants.
Study Design: Infants with a gestational age (GA) of <34 weeks who were administered a 20 mg/kg intravenous loading dose of caffeine citrate within 48 hours after birth were investigated prospectively. Regional splanchnic oxygen saturation (rsSO2) and splanchnic fractional tissue oxygen extraction rate (sFTOE) were measured using near-infrared spectroscopy before caffeine infusion, immediately after caffeine infusion and 1, 2, 3, 4, and 6 hours (h) after dose completion; postdose values were compared with predose values.
Results: A total of 41 infants with a mean GA of 29.2 ± 1.6 weeks, birth weight of 1,315 ± 257 g and postnatal age of 32.2 ± 10.8 hours were included in the study. rsSO2 decreased significantly from 63.1 to 57.5% immediately after caffeine infusion, to 55.1% after 1 hour, and to 55.2% after 2 hours, with partial recovery at 3-hour postdose. sFTOE increased correspondingly.
Conclusion: Caffeine reduces splanchnic oxygenation and increases splanchnic oxygen extraction for at least 2 hours with partial recovery to predose levels at 3-hour postdose.
abstract_id: PUBMED:24813556
Amplitude-integrated electroencephalography shows that doxapram influences the brain activity of preterm infants. Aim: The aim of this study was to measure the brain activity of preterm infants treated with caffeine citrate and doxapram for preterm apnoea, using amplitude-integrated electroencephalography (aEEG), to identify any adverse effects on cerebral function.
Methods: We analysed the aEEG tracings of 13 preterm infants <30 weeks of gestation before, during and after doxapram treatment, with regard to background activity (percentages of continuous and discontinuous patterns), occurrence of sleep-wake cycling and appearance of electrographic seizure activity. They were also compared with 61 controls without doxapram treatment.
Results: During doxapram treatment, aEEG tracings showed an increase in continuous background activity (19 ± 30% before treatment, 38 ± 35% during treatment) and a decrease in discontinuous patterns. In addition, more frequent electrographic seizure activity (0% before treatment, 15 ± 37% during treatment) and less frequent sleep-wake cycling (92 ± 27% before treatment, 85 ± 37% during treatment) could be observed. These results were confirmed when compared to the control group.
Conclusion: Doxapram treatment influences aEEG in preterm infants, showing higher percentages of continuous activity as well as more electrographic seizure activity and less sleep-wake cycling. It should, therefore, be used with caution in very preterm infants.
Answer: The administration of caffeine in preterm infants has been studied to determine its effects on cerebral oxygenation and blood flow velocity. The evidence suggests that caffeine does not significantly impair cerebral blood flow velocity (CBFV) in preterm neonates. Studies using a continuous-wave Doppler monitor to examine the effects of caffeine citrate at a loading dose of 20 mg/kg found no significant changes in CBFV when compared with placebo, although there was an increase in both heart rate and respiratory rate (PUBMED:2529912, PUBMED:2533061).
However, caffeine does appear to temporarily reduce cerebral oxygenation and increase cerebral tissue oxygen extraction in preterm infants. A study measuring regional cerebral oxygen saturation (rSO2C) and cerebral fractional tissue oxygen extraction (cFTOE) found that rSO2C significantly decreased soon after administering caffeine, with a corresponding increase in cFTOE, although these changes were within an acceptable range and considered to likely reflect a physiological phenomenon without clinical importance to cerebral hemodynamics (PUBMED:32198745).
In contrast, another study indicated that aminophylline, a drug similar to caffeine, resulted in significantly lower global cerebral blood flow measured by the Xe-clearance technique compared to caffeine, suggesting that caffeine may have a more favorable profile in terms of cerebral hemodynamics (PUBMED:7734901).
Moreover, other research has shown that caffeine can have effects on systemic blood flow and ductal shunting flow in preterm infants with patent ductus arteriosus (PDA), indicating that caffeine can influence cardiovascular dynamics in preterm infants (PUBMED:29318792).
In summary, while caffeine does not seem to significantly impair cerebral blood flow velocity in preterm infants, it can temporarily reduce cerebral oxygenation and increase oxygen extraction. The clinical significance of these findings appears to be minimal, but they underscore the importance of monitoring the effects of caffeine on the cerebral and systemic circulation in preterm infants (PUBMED:2529912, PUBMED:2533061, PUBMED:32198745, PUBMED:7734901, PUBMED:29318792). |
Instruction: Data Gathering Bias: Trait Vulnerability to Psychotic Symptoms?
Abstracts:
abstract_id: PUBMED:27597173
Scanning to conclusions? Visual attention to neutral faces under stress in individuals with and without subclinical paranoia. Background And Objectives: A promising candidate for a vulnerability indicator for psychosis is the restricted scanpath. Restricted scanning of social stimuli, such as faces, might also contribute to misinterpretations of facial expressions and thus increase the likelihood of delusional interpretations. Moreover, similar to other vulnerability indicators of psychosis, scanpaths may be susceptible to stress. Thus, we hypothesized that scanpath restriction would increase as a function of delusion-proneness, stress and their interaction.
Methods: Participants were asked to look at neutral faces and rate their trustworthiness under a stress and a non-stress condition, while the eye gaze was recorded. The non-clinical sample was classified into low- and high-paranoia scorers using a median split. Eye-tracking parameters of interest were number of fixations, fixations within emotion-relevant facial areas, scanpath length and duration of fixations.
Results: In general, high-paranoia scorers had a significantly shorter scanpath compared to low-paranoia scorers (F(1, 48) = 2.831, p = 0.05, ηp2 = 0.056) and there was a trend towards a further decrease of scanpath length under stress in high-paranoia scorers relative to low-paranoia scorers (interaction effect: F(1, 48) = 2.638, p = 0.056, ηp2 = 0.052). However, no effects were found for the other eye-tracking parameters. Moreover, trustworthiness ratings remained unaffected by group or condition.
Limitations: The participants of this study had only slight elevations of delusion-proneness, which might explain the absence of differences in trustworthiness ratings.
Conclusions: Restricted scanpaths appear to be partly present in individuals with subclinical levels of paranoia and appear to be susceptible to stress in this group. Nevertheless, further research in high-risk groups is necessary before drawing more definite conclusions.
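The effect sizes quoted in this abstract follow directly from the reported F statistics. As a quick plausibility check, here is a minimal Python sketch using only values quoted in PUBMED:27597173; the formula is the standard definition of partial eta squared, not code from the study itself:

```python
# Minimal sketch: recover partial eta squared from a reported F statistic.
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """eta_p2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Group effect on scanpath length: F(1, 48) = 2.831
print(round(partial_eta_squared(2.831, 1, 48), 3))  # 0.056, matching the abstract
# Group-by-stress interaction: F(1, 48) = 2.638
print(round(partial_eta_squared(2.638, 1, 48), 3))  # 0.052, matching the abstract
```

Both reported values (0.056 and 0.052) are reproduced, which suggests the abstract's statistics are internally consistent.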
abstract_id: PUBMED:26147948
Data Gathering Bias: Trait Vulnerability to Psychotic Symptoms? Background: Jumping to conclusions (JTC) is associated with psychotic disorder and psychotic symptoms. If JTC represents a trait, the rate should (i) be increased in people with elevated levels of psychosis proneness, such as individuals diagnosed with borderline personality disorder (BPD), and (ii) show a degree of stability over time.
Methods: The JTC rate was examined in 3 groups: patients with first episode psychosis (FEP), BPD patients and controls, using the Beads Task. PANSS, SIS-R and CAPE scales were used to assess positive psychotic symptoms. Four WAIS III subtests were used to assess IQ.
Results: A total of 61 FEP, 26 BPD and 150 controls were evaluated. 29 FEP were re-evaluated after one year. 44% of FEP (OR = 8.4, 95% CI: 3.9-17.9) displayed a JTC reasoning bias versus 19% of BPD (OR = 2.5, 95% CI: 0.8-7.8) and 9% of controls. JTC was not associated with level of psychotic symptoms or specifically with delusionality across the different groups. Differences between FEP and controls were independent of sex, educational level, cannabis use and IQ. After one year, 47.8% of FEP with JTC at baseline again displayed JTC.
Conclusions: JTC in part reflects trait vulnerability to develop disorders with expression of psychotic symptoms.
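The odds ratio above can be reproduced almost exactly from the reported percentages. A minimal sketch, assuming cell counts reconstructed from the rounded percentages (roughly 27/61 FEP and 13/150 controls with JTC) and a Woolf (log-OR) confidence interval; the published estimate may have been computed differently, but the numbers line up:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a/b = cases/non-cases in the exposed group, c/d = in the unexposed group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Counts reconstructed from PUBMED:26147948: 44% of 61 FEP vs. 9% of 150 controls.
or_, (lo, hi) = odds_ratio_ci(27, 61 - 27, 13, 150 - 13)
print(f"OR = {or_:.1f}, 95% CI: {lo:.1f}-{hi:.1f}")  # OR = 8.4, 95% CI: 3.9-17.9
```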
abstract_id: PUBMED:38391690
The Relationship between Childhood Trauma Experiences and Psychotic Vulnerability in Obsessive Compulsive Disorder: An Italian Cross-Sectional Study. People with obsessive compulsive disorder (OCD) are at increased risk of developing psychotic disorders; yet little is known about specific clinical features which might hint at this vulnerability. The present study was aimed at elucidating the pathophysiological mechanism linking OCD to psychosis through the investigation of childhood trauma experiences in adolescents and adults with OCD. One hundred outpatients, aged between 12 and 65 years, were administered the Yale-Brown Obsessive Compulsive Scale (Y-BOCS) and its Child version (CY-BOCS), as well as the Childhood Trauma Questionnaire (CTQ); Cognitive-Perceptual basic symptoms (COPER) and the high-risk criterion Cognitive Disturbances (COGDIS) were assessed in the study sample. Greater childhood trauma experiences were found to predict psychotic vulnerability (p = 0.018), as well as more severe OCD symptoms (p = 0.010) and an earlier age of OCD onset (p = 0.050). Participants with psychotic vulnerability reported higher scores on childhood trauma experiences (p = 0.02), specifically in the emotional neglect domain (p = 0.01). In turn, emotional neglect and psychotic vulnerability were higher in the pediatric group than in the adult group (p = 0.01). Our findings suggest that childhood trauma in people with OCD may represent an indicator of psychotic vulnerability, especially in those with an earlier OCD onset. Research on the pathogenic pathways linking trauma, OCD, and psychosis is needed.
abstract_id: PUBMED:25616503
Jumping to Conclusions About the Beads Task? A Meta-analysis of Delusional Ideation and Data-Gathering. It has been claimed that delusional and delusion-prone individuals have a tendency to gather less data before forming beliefs. Most of the evidence for this "jumping to conclusions" (JTC) bias comes from studies using the "beads task" data-gathering paradigm. However, the evidence for the JTC bias is mixed. We conducted a random-effects meta-analysis of individual participant data from 38 clinical and nonclinical samples (n = 2,237) to investigate the relationship between data gathering in the beads task (using the "draws to decision" measure) and delusional ideation (as indexed by the "Peters et al Delusions Inventory"; PDI). We found that delusional ideation is negatively associated with data gathering (rs = -0.10, 95% CI [-0.17, -0.03]) and that there is heterogeneity in the estimated effect sizes (Q-stat P = .03, I2 = 33). Subgroup analysis revealed that the negative association is present when considering the 23 samples (n = 1,754) from the large general population subgroup alone (rs = -0.10, 95% CI [-0.18, -0.02]) but not when considering the 8 samples (n = 262) from the small current delusions subgroup alone (rs = -0.12, 95% CI [-0.31, 0.07]). These results provide some provisional support for continuum theories of psychosis and cognitive models that implicate the JTC bias in the formation and maintenance of delusions.
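The reported heterogeneity statistics are internally consistent. A small sketch of Higgins' I2 computed from Cochran's Q; note that Q itself is not given in the abstract, so the value of roughly 55 below is back-solved from the reported I2 of 33 over 38 samples, purely for illustration:

```python
# Higgins' I2 expresses the share of total variability across studies
# that is due to heterogeneity rather than chance: I2 = max(0, (Q - df) / Q).
def i_squared(q: float, k: int) -> float:
    """I2 in percent, from Cochran's Q over k studies (df = k - 1)."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100

# With the 38 samples pooled in PUBMED:25616503, an I2 of 33 implies Q of ~55:
print(round(i_squared(55.2, 38)))  # 33
```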
abstract_id: PUBMED:31773383
Psychosis, vulnerability, and the moral significance of biomedical innovation in psychiatry. Why ethicists should join efforts. The study of the neuroscience and genomics of mental illness are increasingly intertwined. This is mostly due to the translation of medical technologies into psychiatry and to technological convergence. This article focuses on psychosis. I argue that the convergence of neuroscience and genomics in the context of psychosis is morally problematic, and that ethics scholarship should go beyond the identification of a number of ethical, legal, and social issues. My argument is composed of two strands. First, I argue that we should respond to technological convergence by developing an integrated, patient-centred approach focused on the assessment of individual vulnerabilities. Responding to technological convergence requires that we (i) integrate insights from several areas of ethics, (ii) translate bioethical principles into the mental health context, and (iii) proactively try to anticipate future ethical concerns. Second, I argue that a nuanced understanding of the concept of vulnerability might help us to accomplish this task. I borrow Florencia Luna's notion of 'layers of vulnerability' to show how potential harms or wrongs to individuals who experience psychosis can be conceptualised as stemming from different sources, or layers, of vulnerability. I argue that a layered notion of vulnerability might serve as a common ground to achieve the ethical integration needed to ensure that biomedical innovation can truly benefit, and not harm, individuals who suffer from psychosis.
abstract_id: PUBMED:26197302
Dopamine effects on evidence gathering and integration. Background: Disturbances in evidence gathering and disconfirmatory evidence integration have been associated with the presence of or propensity for delusions. Previous evidence suggests that these 2 types of reasoning bias might be differentially affected by antipsychotic medication. We aimed to investigate the effects of a dopaminergic agonist (L-dopa) and a dopaminergic antagonist (haloperidol) on evidence gathering and disconfirmatory evidence integration after single-dose administration in healthy individuals.
Methods: The study used a randomized, double-blind, placebo-controlled, 3-way crossover design. Participants were healthy individuals aged 18-40 years. We administered a new data-gathering task designed to increase sensitivity to change compared with traditional tasks. The Bias Against Disconfirmatory Evidence (BADE) task was used as a measure of disconfirmatory evidence integration.
Results: We included 30 individuals in our study. In the data-gathering task, dopaminergic modulation had no significant effect on the amount of evidence gathered before reaching a decision. In contrast, the ability of participants to integrate disconfirmatory evidence showed a significant linear dopaminergic modulation pattern (highest with haloperidol, intermediate with placebo, lowest with L-dopa), with the difference between haloperidol and L-dopa marginally reaching significance.
Limitations: Although the doses used for haloperidol and L-dopa were similar to those used in previous studies, drug plasma level measurements would have added to the validity of findings.
Conclusion: Evidence gathering and disconfirmatory evidence integration might be differentially influenced by dopaminergic agents. Our findings are in support of a dual-disturbance account of delusions and provide a plausible neurobiological basis for the use of interventions targeted at improving reasoning biases as an adjunctive treatment in patients with psychotic disorders.
abstract_id: PUBMED:33041876
Vulnerability to Psychosis: A Psychoanalytical Perspective. The Paradigmatic Example of 22q11.2 Microdeletion Syndrome. This paper outlines a psychoanalytic contribution to a growing research field in psychiatry: that of psychotic vulnerability, and the related neurogenetic modeling of schizophrenia. We explore this contribution by focusing on recent studies concerning a neurodevelopmental disorder, the 22q11.2 microdeletion syndrome - which comprises DiGeorge syndrome in particular. It is one of the most common rare genetic syndromes, and the patients that it affects present a very high rate of psychotic symptoms (between 30 and 40%). For this reason, it has sparked an increasing number of clinical research projects which give it a paradigmatic status, as much for psychotic vulnerability as for potential neurobiological and genetic markers of schizophrenia. This syndrome illustrates one of the major stakes in contemporary psychopathology: the articulation of clinical, neurocognitive, and genetic approaches in a pluri-disciplinary manner. We seek to show that psychoanalysis, when it participates in this articulation, opens up specific hypotheses and research perspectives. In particular, based on the epidemiological observation of the role of anxiety as a predictor for psychosis, we underline the potential relevance of psychoanalytically oriented differential clinical practice and the psychodynamics of anxiety: they can contribute to studies and clinical follow-up on the 22q11.2 microdeletion syndrome and, more widely, to research on the detection and prevention of psychotic vulnerability.
abstract_id: PUBMED:36228437
Whodunit - A novel video-based task for the measurement of jumping to conclusions in the schizophrenia spectrum. Jumping to conclusions (JTC) is implicated in the formation and maintenance of the positive symptoms of psychosis and over the years has become a prominent treatment target. Yet, measures designed to detect JTC are compromised by a number of limitations. We aimed to address some of these shortcomings with a new video-based "Whodunit task" among participants scoring high and low on the Community Assessment of Psychic Experiences (CAPE). We recruited a large sample (N = 979) from the general population who were divided into subgroups high vs. low on psychotic-like experiences (PLE), matched for depression and background characteristics. In the Whodunit task, participants were asked to rate the likelihood that one out of six suspects was the perpetrator of a crime (deliberately ambiguous with no clear clues until the end). The primary measure was the number of sequences-to-decision (STD). In line with the hypothesis, participants scoring high on the CAPE positive subscale displayed significantly lower STD and a higher rate of JTC. Response confidence in the assessments was elevated in the PLE-High group. The number of overall decisions was also significantly elevated for the PLE-High group. No group differences were found when comparing those scoring high versus low on depression. The STD index correlated significantly with a corresponding index from another JTC task. The study presents a new paradigm for the measurement of data gathering in the schizophrenia spectrum. Speaking to its validity, the Whodunit task was correlated with another JTC measure. Future research should test abbreviated versions of the paradigm, preferably using multiple trials with differing topics/emotional themes.
abstract_id: PUBMED:28659703
Mental health in mass gatherings. Background: Hajj pilgrimage, in Saudi Arabia, is one of the world's largest religious mass gatherings. We have similar mass gathering scenarios in India, such as the Amarnath Yatra and Kumbh. A unique combination of physical, physiological, and psychological factors makes this pilgrimage a very stressful milieu. We studied the emergence of psychopathology and its determinants in this adverse mass-gathering environment among Indian pilgrims during Hajj 2016.
Materials And Methods: This is a descriptive study analyzing the mental morbidity in 136,000 (1.36 lakh) Indian pilgrims during Hajj 2016, using SPSS software version 19.
Results: A total of 182 patients reported psychological problems. Twenty-two patients (12%) required admission. Twelve (6.8%) pilgrims reported a past history of a mental illness. One hundred and sixty-five (93.2%) patients had never had any mental symptoms earlier in life. The most common illnesses seen were stress related (45.7%), followed by psychosis (9.8%), insomnia (7.3%), and mood disorders (5.6%). The most common symptoms recorded were apprehension (45%), sleep disturbance (55%), anxiety (41%), and fear of being lost (27%). Psychotropics were prescribed for 46% of pilgrims. All patients completed their Hajj successfully and returned to India.
Conclusions: Cumulative stress causes a full spectrum of mental decompensation, and prompt healing is aided by simple nonpharmacological measures, including social support and counseling in a compatible sociolinguistic milieu.
abstract_id: PUBMED:26252896
Correction: Data Gathering Bias: Trait Vulnerability to Psychotic Symptoms? N/A
Answer: Data gathering bias, specifically the tendency to "jump to conclusions" (JTC), has been associated with psychotic disorders and symptoms. This bias is characterized by forming beliefs or making decisions based on limited evidence. Research has indicated that JTC may reflect a trait vulnerability to develop disorders with the expression of psychotic symptoms (PUBMED:26147948). This is supported by findings that individuals with first episode psychosis (FEP) and those diagnosed with borderline personality disorder (BPD), who have elevated levels of psychosis proneness, displayed increased rates of JTC. Moreover, JTC was found to be stable over time in a subset of FEP patients (PUBMED:26147948).
The relationship between JTC and delusional ideation has been further explored through meta-analysis, which found a negative association between data gathering in the beads task and delusional ideation, providing some support for continuum theories of psychosis and cognitive models implicating JTC in the formation and maintenance of delusions (PUBMED:25616503).
Additionally, the effects of dopamine on evidence gathering and integration have been studied, revealing that dopaminergic modulation might differentially influence these cognitive processes. Specifically, the ability to integrate disconfirmatory evidence showed a significant pattern of dopaminergic modulation, with haloperidol (dopaminergic antagonist) enhancing and L-dopa (dopaminergic agonist) reducing this ability (PUBMED:26197302).
Furthermore, childhood trauma experiences have been linked to psychotic vulnerability, with greater trauma predicting psychotic vulnerability, more severe OCD symptoms, and an earlier age of OCD onset. Emotional neglect was particularly associated with psychotic vulnerability (PUBMED:38391690).
In summary, data-gathering bias, particularly JTC, appears to reflect a trait vulnerability to psychotic symptoms, with evidence suggesting that it is a stable characteristic in individuals prone to psychosis. This bias is influenced by various factors, including dopamine modulation and childhood trauma experiences, and is implicated in the development and maintenance of delusional beliefs.
Instruction: Preemptive sub-Tenon's anesthesia for pars plana vitrectomy under general anesthesia: is it effective?
Abstracts:
abstract_id: PUBMED:37899488
Safety and feasibility of sutureless pars-plana vitrectomy in sub-Tenon anesthesia (SAFE-VISA): a prospective study. Background: To determine the safety and feasibility of sutureless pars-plana vitrectomy (ppV) in sub-Tenon anesthesia.
Methods: In this prospective study, pain and anxiety at various times after ppV were assessed using a visual analogue scale (VAS) and the Wong-Baker FACES scale, and visual sensations during surgery were investigated. The surgeon evaluated motility, chemosis, and overall feasibility.
Results: ppV was performed on 67 eyes (33 under sub-Tenon anesthesia, 34 under general anesthesia). Pain during surgery under sub-Tenon anesthesia was 1.8 ± 2.2 (range 0.0-8.0); anxiety was 2.3 ± 2.2 (0.0-8.5). There was a moderate correlation between pain and anxiety (R2 = 0.58). Comparing sub-Tenon and general anesthesia, no difference in pain perception was found the day after surgery. During surgery, 27.3% of patients saw details, 21.2% saw colors, 90.1% perceived light or motion, and 3.0% had no light perception. Median chemosis after surgery was 1.0 (IQR = 1.0). Median ocular motility during surgery was 1.0 (IQR = 1.0), and the median overall feasibility grade was 1.0 (IQR = 1.0). 24.2% of patients showed subconjunctival hemorrhage during or after surgery.
Conclusions: Sutureless pars-plana vitrectomy in sub-Tenon anesthesia was performed safely, with pain and anxiety levels tolerable for the patients and without the need for an anesthesiologist to be present. With 88.9% of patients willing to undergo vitreoretinal surgery in sub-Tenon anesthesia again, we recommend it as a standard option. Trial registration: This study was approved by the Institutional Ethical Review Board of the RWTH Aachen University (EK 111/19). This study is listed on clinicaltrials.gov (ClinicalTrials.gov identifier: NCT04257188, February 5th, 2020).
abstract_id: PUBMED:24812489
The effect of posterior sub-Tenon's capsule triamcinolone acetonide injection to that of pars plana vitrectomy for diabetic macular edema. Purpose: To compare the effect of posterior sub-Tenon's capsule triamcinolone acetonide (STTA) injection to that of pars plana vitrectomy (PPV) for diabetic macular edema (DME).
Patients And Methods: The medical records of 50 patients (52 eyes) with DME were reviewed. Twenty-six eyes underwent STTA (20 mg) and the other 26 eyes underwent vitrectomy combined with cataract surgery. The central macular thickness (CMT), measured by optical coherence tomography, and best-corrected visual acuity (BCVA) were determined before and 1, 3, and 6 months after treatment.
Results: The differences in the BCVA and the CMT between the STTA group and the PPV group were not significant before or at any time after the treatment. In both the STTA and PPV groups, there were significant differences between the pre-treatment CMT and BCVA at any time after treatment.
Conclusion: We recommend STTA injection for the treatment of DME.
abstract_id: PUBMED:17552386
Preemptive sub-Tenon's anesthesia for pars plana vitrectomy under general anesthesia: is it effective? Background And Objectives: To determine whether irrigation of the sub-Tenon's space with anesthetic agents during pars plana vitrectomy (PPV) involving general anesthesia decreases postoperative pain, analgesic use, or nausea.
Patients And Methods: A prospective, controlled trial of 46 consecutive patients requesting general anesthesia for PPV who were randomized to receive or not receive a sub-Tenon's space injection prior to surgery. A mixture of 3 mL of 2% lidocaine with hyaluronidase and 3 mL of 0.5% bupivacaine was used to induce local blockade. Pain, postoperative nausea, and analgesia use were evaluated.
Results: Local blockade did not significantly alter the proportion of reported pain at 30 minutes and 2, 4, and 24 hours after the operation. The local blockade had no effect on reducing postoperative nausea or the number of patients requiring pain medication.
Conclusions: Local blockade prior to surgery in patients undergoing PPV under general anesthesia does not significantly decrease postoperative pain, analgesic use, or nausea.
abstract_id: PUBMED:25349807
Visual impact of sub-Tenon anesthesia during combined phacoemulsification and vitrectomy surgery. Aim: To investigate the visual impact of sub-Tenon anesthesia during combined phacoemulsification and vitrectomy surgery.
Methods: In this prospective case series, consecutive patients who underwent combined phacoemulsification and pars plana vitrectomy (PPV) under sub-Tenon anesthesia between October 2008 and September 2009 were enrolled. Between various surgical steps, with the contralateral eye covered, patients were asked whether they could see the light of the operating microscope.
Results: A total of 163 eyes of 163 patients were enrolled in this study. With the contralateral eye covered, 152 (93.3%) patients reported that they could not see any light during at least one of the surgical steps. All eyes recovered to at least light perception on the first postoperative day. The incidence of no light perception during surgery was not related to demographic factors, including age, gender, or type of ocular disease.
Conclusion: The incidence of no light perception during combined phacoemulsification and vitrectomy under sub-Tenon anesthesia was high in our study. Patients should be duly informed about this temporary but potential intraoperative event.
abstract_id: PUBMED:35773662
Efficacy and safety of trans-sub-Tenon's retrobulbar anesthesia for pars plana vitrectomy: a randomized trial. Aim: To compare the efficacy and safety of trans-sub-Tenon's ciliary nerve block anesthesia and transcutaneous retrobulbar anesthesia in patients undergoing pars plana vitrectomy (PPV).
Methods: A prospective, randomized, double-blinded clinical trial was conducted at Zhongda Hospital, Affiliated with Southeast University, from February 2021 to October 2021. Patients undergoing PPV were randomly allocated into two groups: the trans-sub-Tenon's anesthesia group (ST group) and the retrobulbar anesthesia group (RB group) in the ratio of 1:1. The ST group received 2 ml ropivacaine through the Tenon capsule to the retrobulbar space, while the RB group received 2 ml ropivacaine via transcutaneous retrobulbar injection. Visual analog score (VAS) was used to evaluate pain during the whole process, including during anesthesia implementation, intraoperatively and on the first day after the operation. Movement evaluation (Brahma scores) and anesthesia-related complications were also noted.
Results: Finally, a total of 120 patients were included in the study (60 in the ST group and 60 in the RB group). There were no significant differences in baseline patient characteristics or surgical features between the two groups. The VAS pain scores for anesthesia implementation were 0.52 ± 0.47 in the ST group and 1.83 ± 0.87 in the RB group (P < 0.001). The VAS scores during the operation were 0.53 ± 0.49 in the ST group and 1.48 ± 1.02 in the RB group (P < 0.001) and those on the first day after the operation were 0.37 ± 0.38 in the ST group and 0.81 ± 0.80 in the RB group (P = 0.002). No patients required supplemental intravenous anesthesia intraoperatively. The Brahma movement scores were 0.70 ± 1.64 in the ST group (scores ranging from 0 to 8) and 2.38 ± 3.15 in the RB group (ranging from 0 to 12) (P = 0.001). Forty-two patients in each group received laser photocoagulation during surgery. Fifteen patients (36%) in the ST group could not see the flashes of the laser, compared to 8 patients (19%) in the RB group (P = 0.087). No serious sight-threatening or life-threatening complications related to anesthesia were observed in either group.
Conclusions: For PPV, trans-sub-Tenon's ciliary nerve block anesthesia was more effective in controlling pain than transcutaneous retrobulbar anesthesia during the whole surgery process, including during anesthesia implementation, intraoperatively and on the first day after the operation. Additionally, it could achieve better effect of akinesia and was relatively safe. Trans-sub-Tenon's anesthesia could be considered an alternative form of local anesthesia during vitreoretinal procedures.
Trial Registration: The study protocol was registered at ChiCTR.org.cn in February 2021 under the number ChiCTR2100043109.
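The intraoperative-pain comparison in this trial can be sanity-checked from the summary statistics alone. A hedged sketch using SciPy's Welch t-test from summary data; the trial's own analysis may have used a different (for example, non-parametric) test for VAS scores, so this only confirms the order of magnitude of the reported significance:

```python
from scipy import stats

# Summary statistics quoted in PUBMED:35773662 (VAS during anesthesia
# implementation, n = 60 per arm); normality is assumed here, which the
# original analysis may not have relied on.
res = stats.ttest_ind_from_stats(
    mean1=0.52, std1=0.47, nobs1=60,  # trans-sub-Tenon's group
    mean2=1.83, std2=0.87, nobs2=60,  # retrobulbar group
    equal_var=False,                  # Welch's t-test
)
print(res)  # t of about -10.3, p << 0.001, consistent with the reported P < 0.001
```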
abstract_id: PUBMED:36588230
Posterior sub-Tenon triamcinolone injection in the treatment of postoperative cystoid macular edema secondary to pars plana vitrectomy. Purpose: To evaluate the efficacy and safety of posterior sub-Tenon triamcinolone (PSTA) in chronic postoperative cystoid macular edema (PCME) after pars plana vitrectomy (PPV).
Methods: Consecutive 22 patients who developed chronic PCME after PPV and underwent PSTA treatment were included in this retrospective study. Best-corrected visual acuity (BCVA) and central macular thickness (CMT) were measured pre injection and post injection at one month, three months, six months, and at last visit. The patients were divided into three groups according to the injection response status: complete, partial, and resistant.
Results: The mean follow-up period was 26.4 ± 16.2 months after PSTA. According to pre-injection values, there was a significant improvement in the values of BCVA and CMT at the first, third, and sixth months and at the last examination (P < 0.05). In the final examination, PCME recovered completely in 12 patients, partially in 8 patients, and resistance was observed in 2 patients.
Conclusion: Posterior sub-Tenon triamcinolone seems to be effective in chronic PCME following PPV.
abstract_id: PUBMED:36755887
Outcomes of Pars Plana Vitrectomy with Panretinal Photocoagulation for Treatment of Proliferative Diabetic Retinopathy Without Retinal Detachment: A Seven-Year Retrospective Study. Objective: To review clinical outcomes of patients with proliferative diabetic retinopathy (PDR) and vitreous hemorrhage (VH) who underwent pars plana vitrectomy (PPV) with endolaser panretinal photocoagulation (PRP) without retinal detachment (RD) repair.
Methods: Retrospective chart review of the rate of postoperative clinical findings and visual acuity in patients with PDR from May 2014 to August 2021.
Results: Pars plana vitrectomy with endolaser PRP was performed in 81 eyes of 81 patients (mean age of 62.1 ± 10.5 years). At a median follow-up of 18 months, mean Snellen best-corrected visual acuity (BCVA) significantly improved from 20/774 preoperatively to 20/53 at last follow-up (P < 0.001). Postoperative complications and clinical findings included VH (12.3%), diabetic macular edema (DME) (12.3%), ocular hypertension (8.6%), RD (4.9%), and need for additional PPV (6.2%). Eyes with PRP performed within 6 months before surgery had a lower frequency of developing postoperative VH (5.3%) compared to eyes that received PRP more than 6 months before surgery (27.3%, P = 0.04). Eyes that received preoperative anti-vascular endothelial growth factor (VEGF) treatment (2.0%) had a lower frequency of postoperative VH compared to eyes that did not receive anti-VEGF treatment (14.3%, P = 0.04). Eyes that received intraoperative sub-tenon triamcinolone acetonide developed postoperative DME (4.0%) less frequently than eyes that did not receive sub-tenon triamcinolone acetonide (26.7%, P = 0.04).
Conclusion: In patients with PDR and VH, PPV with PRP yielded significant improvements in visual acuity and resulted in overall low rates of recurrent postoperative VH. Preoperative anti-VEGF and PRP laser treatment were associated with lower rates of postoperative VH. Furthermore, intraoperative use of sub-tenon triamcinolone acetonide was associated with a lower rate of postoperative DME. Pars plana vitrectomy with endolaser PRP in conjunction with the aforementioned pre- and intraoperative therapies is an effective treatment for patients with PDR and VH.
abstract_id: PUBMED:32318938
Correlation between sub-Tenon's anesthesia and transient amaurosis during ophthalmic surgery. Purpose: To verify the correlation between sub-Tenon's anesthesia and intraoperative visual loss in ophthalmic surgery.
Methods: Sixty-four patients underwent phacoemulsification combined with pars plana vitrectomy under sub-Tenon's anesthesia. Participants were asked about their light perception at several time points: before anesthesia, immediately after anesthesia, 10 min after anesthesia without any surgical intervention or microscope illumination, and after the whole surgery. Intraoperative amaurosis was defined as a patient being unable to see any light with the operative eye. The incidence rate of amaurosis at different time points and among different anesthetists was analyzed.
Results: The rate of intraoperative amaurosis was 0%, 1.56%, 48.44%, and 95.31% at the following time points, respectively: before anesthesia, immediately after anesthesia, 10 min after anesthesia without any surgical intervention or microscope light exposure during the interval, and immediately after the whole surgery, showing a significant time-dependent increase (P < 0.01). Amaurosis was not correlated with the underlying disease or with the anesthesiologist. The amaurosis was transient, and all operative eyes could perceive light on the first postoperative day.
Conclusions: Sub-Tenon's anesthesia contributes to the intraoperative amaurosis during operation. Temporary interruption of optic nerve conduction by the anesthetic could be a credible explanation. The amaurosis is transient and reversible, requires no additional treatment, and should not be considered as a surgical complication.
abstract_id: PUBMED:17319172
Our experience with sub-Tenon's anesthesia in ophthalmic surgery. The goal of this paper is to report our experience with sub-Tenon's anesthesia in surgery of both the anterior and the posterior segment of the eye. In cataract surgery it is as effective as retrobulbar, peribulbar, or topical anesthesia, and it is safer because no sharp needle is used. We operate on the posterior segment under sub-Tenon's anesthesia in patients in whom general anesthesia is contraindicated because of advanced age, decompensated diabetes mellitus, or hypertension, and in whom pars plana vitrectomy is the only possibility to improve vision. Using sub-Tenon's anesthesia, approximately 13,000 cataract surgeries and 127 pars plana vitrectomies were performed without complications. Simple pars plana vitrectomy may also be performed on an outpatient basis.
abstract_id: PUBMED:34666493
ENDOTHELIAL CELL LOSS AFTER PARS PLANA VITRECTOMY. Aims: To analyse the changes in endothelial cell density (ECD) after pars plana vitrectomy (PPV) and to identify the factors implicated.
Patients And Methods: This was a prospective, consecutive, non-randomised case-control study. All 23-gauge vitrectomies were performed by a single surgeon at a tertiary centre. ECD was measured at baseline before surgery and on postoperative Days 30, 90, and 180. The fellow eye was used as the control eye. The primary outcome was a change in ECD after PPV.
Results: Seventeen patients were included in this study. The mean age of the patients was 65 years. The mean ECD count at baseline was 2340 cells/mm2. The median ECD loss in the vitrectomised eye was 3.6%, 4.0%, and 4.7% at Days 30, 90, and 180, respectively, compared with gains of 1.94%, 0.75%, and 1.01%, respectively, in the control eye. The relative risk of ECD loss after PPV was 2.48 (CI 1.05-5.85, p = 0.0247). The pseudophakic eyes lost more ECD than the phakic eyes, but this was not statistically significant. There were no significant differences in diagnosis, age, surgical time, or tamponade used after surgery.
Conclusions: Routine pars plana vitrectomy had an impact on the corneal endothelial cells until Day 180 post-op. The phakic status was slightly protective against ECD loss after PPV, although it was not statistically significant. The pathophysiology of corneal cell damage after routine PPV remains unclear. Further studies are required to confirm these findings.
Answer: Preemptive sub-Tenon's anesthesia for pars plana vitrectomy (PPV) under general anesthesia does not appear to be effective in reducing postoperative pain, analgesic use, or nausea. A prospective, controlled trial involving 46 consecutive patients requesting general anesthesia for PPV, who were randomized to receive or not receive a sub-Tenon's space injection prior to surgery, found that local blockade did not significantly alter the proportion of reported pain at various time points after the operation, nor did it have an effect on reducing postoperative nausea or the number of patients requiring pain medication (PUBMED:17552386).
Instruction: Honorary authorship in radiologic research articles: do geographic factors influence the frequency?
Abstracts:
abstract_id: PUBMED:29555566
Honorary Authorship in Radiologic Research Articles: Assessment of Pattern and Longitudinal Evolution. Rationale And Objectives: To analyze the pattern and longitudinal evolution of honorary authorship in major radiology journals.
Materials And Methods: In this Institutional Review Board-approved study, an electronic survey was sent to first authors of original research articles published in the American Journal of Roentgenology, European Radiology, the Journal of Magnetic Resonance Imaging, and Radiology during 2 years (July 2014 through June 2016). Questions addressed the perception of honorary authorship and contributing factors, as well as demographic information. Univariate analysis was performed by using χ2 tests. Multivariable logistic regression models were used to assess independent factors associated with the perception of honorary authorship.
Results: Of 1839 first authors, 315 (17.3%) responded. Of these, 31.4% (97/309) perceived that at least one coauthor did not make sufficient contributions to merit authorship and 54.3% (159/293) stated that one or more coauthors performed only "nonauthor" tasks according to International Committee of Medical Journal Editors criteria. Of eight factors significantly associated with the perception of honorary authorship on univariate analysis, two were retained by the stepwise multivariate model: having someone suggest adding an author and a coauthor performing only a nonauthorship task.
Conclusion: There has been little variation in the perception of honorary authorship among first authors of original research articles in radiology. The suggestion of adding an author and having coauthors performing only nonauthorship tasks are the two most important risk factors for honorary authorship. Our findings indicate that a prolonged course of transformation of current cultural norms is required to decrease honorary authorship.
abstract_id: PUBMED:24475845
Honorary authorship in radiologic research articles: do geographic factors influence the frequency? Purpose: To quantify the potential effect of geographic factors on the frequency of honorary authorship in four major radiology journals.
Materials And Methods: In this institutional review board-approved study, an electronic survey was sent to first authors of all original research articles published in American Journal of Roentgenology, European Radiology, Journal of Magnetic Resonance Imaging, and Radiology during 2 years (July 2009 through June 2011). Questions addressed guidelines used for determining authorship, perception of honorary authorship, and demographic information. Univariate analysis was performed by using χ2 tests. Multiple-variable logistic regression models were used to assess independent factors associated with the perception of honorary authorship.
Results: Of 1398 first authors, 328 (23.5%) responded. Of these, 91 (27.7%) perceived that at least one coauthor did not make sufficient contributions to merit authorship, and 165 (50.3%) stated that one or more coauthors performed only "nonauthor" tasks according to International Committee of Medical Journal Editors (ICMJE) criteria. The perception of honorary authorship was significantly higher (P ≤ .0001) among respondents from Asia and Europe than from North America and in institutions where a section or department head was automatically listed as coauthor. A significantly lower (P ≤ .0001) perception of honorary authorship was associated with adherence to ICMJE criteria and with policies providing lectures or courses on publication ethics.
Conclusion: Perceived honorary authorship was substantially higher among respondents from Asia and Europe than from North America. Perceived honorary authorship was lower with adherence to the ICMJE guidelines and policies providing lectures or courses on publication ethics.
abstract_id: PUBMED:21386051
Honorary authorship in radiologic research articles: assessment of frequency and associated factors. Purpose: To quantify the frequency of perceived honorary authorship in radiologic journals and to identify specific factors that increase its prevalence.
Materials And Methods: This study qualified for exempt status by the institutional review board. An electronic survey was sent to first authors of all original research articles published in Radiology and European Radiology over 3 years. Questions included guidelines used for determining authorship, contributions of coauthors, the perception of honorary authorship, and demographic information. Univariable analysis of sample proportions was performed by using χ2 tests. Multivariable logistic regression models were used to assess the independent factors that were associated with the probability of perceiving honorary authorship.
Results: Of the 392 (29.3%) of 1338 first authors who responded to the survey, 102 (26.0%) perceived that one or more coauthors did not make sufficient contributions to merit being included as an author. Of the 392 respondents, 231 (58.9%) stated that one or more coauthors performed only "nonauthor" tasks according to International Committee of Medical Journal Editors criteria. Factors associated with an increased first-author perception of honorary authorship included lower academic rank (adjusted odds ratio [OR]: 2.89; 95% confidence interval [CI]: 1.66, 5.06), as well as working in an environment in which the section or department head was automatically listed as an author (adjusted OR: 3.80; 95% CI: 2.13, 6.79). The percentage of honorary authorship was significantly higher (P = .019) among respondents who did not follow journal requirements for authorship.
Conclusion: The rate of perceived honorary authorship (overall, 26.0%) was substantially more frequent among respondents of lower academic rank and in those working in an environment in which their section or department head was automatically listed as an author.
Supplemental Material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101500/-/DC1.
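Several of the surveys above model perceived honorary authorship with multivariable logistic regression. The sketch below illustrates that workflow on simulated data only: the predictors and data are hypothetical, with coefficients chosen to mimic the adjusted ORs of 2.89 and 3.80 reported in PUBMED:21386051; it does not reproduce the actual survey analysis:

```python
import numpy as np
import statsmodels.api as sm

# Simulated survey: predict perceived honorary authorship (0/1) from junior
# academic rank and a policy of automatically listing the department head.
rng = np.random.default_rng(0)
n = 4000
junior = rng.integers(0, 2, n)
auto_listed = rng.integers(0, 2, n)
log_odds = -1.5 + np.log(2.89) * junior + np.log(3.80) * auto_listed
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([junior, auto_listed]))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params[1:]))  # adjusted ORs, approximately [2.9, 3.8]
```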
abstract_id: PUBMED:27797590
Honorary Authorship Practices in Environmental Science Teams: Structural and Cultural Factors and Solutions. Overinclusive authorship practices such as honorary or guest authorship have been widely reported, and they appear to be exacerbated by the rise of large interdisciplinary collaborations that make authorship decisions particularly complex. Although many studies have reported on the frequency of honorary authorship and potential solutions to it, few have probed how the underlying dynamics of large interdisciplinary teams contribute to the problem. This article reports on a qualitative study of the authorship standards and practices of six National Science Foundation-funded interdisciplinary environmental science teams. Using interviews of the lead principal investigator and an early-career member on each team, our study explores the nature of honorary authorship practices as well as some of the motivating factors that may contribute to these practices. These factors include both structural elements (policies and procedures) and cultural elements (values and norms) that cross organizational boundaries. Therefore, we provide recommendations that address the intersection of these factors and that can be applied at multiple organizational levels.
abstract_id: PUBMED:35379330
Honorary authorship in health sciences: a protocol for a systematic review of survey research. Background: Honorary authorship refers to the practice of naming an individual who has made little or no contribution to a publication as an author. Honorary authorship inflates the output estimates of honorary authors and deflates the value of the work by authors who truly merit authorship. This manuscript presents the protocol for a systematic review that will assess the prevalence of five honorary authorship issues in health sciences.
Methods: Surveys of authors of scientific publications in health sciences that assess prevalence estimates will be eligible. No selection criteria will be set for the time point for measuring outcomes, the setting, the language of the publication, and the publication status. Eligible manuscripts are searched from inception onwards in PubMed, Lens.org, and Dimensions.ai. Two calibrated authors will independently search, determine eligibility of manuscripts, and conduct data extraction. The quality of each review outcome for each eligible manuscript will be assessed with a 14-item checklist developed and piloted for this review. Data will be qualitatively synthesized and quantitative syntheses will be performed where feasible. Criteria for precluding quantitative syntheses were defined a priori. The pooled random effects double arcsine transformed summary event rates of five outcomes on honorary authorship issues with the pertinent 95% confidence intervals will be calculated if these criteria are met. Summary estimates will be displayed after back-transformation. Stata software (Stata Corporation, College Station, TX, USA) version 16 will be used for all statistical analyses. Statistical heterogeneity will be assessed using Tau2 and Chi2 tests and I2 to quantify inconsistency.
Discussion: The outcomes of the planned systematic review will give insights in the magnitude of honorary authorship in health sciences and could direct new research studies to develop and implement strategies to address this problem. However, the validity of the outcomes could be influenced by low response rates, inadequate research design, weighting issues, and recall bias in the eligible surveys.
Systematic Review Registration: This protocol was registered a priori in the Open Science Framework (OSF): https://osf.io/5nvar/.
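To make the planned synthesis concrete, here is a simplified sketch of the double arcsine pooling the protocol describes. It uses a fixed-effect inverse-variance pool and the simple sin² back-transform rather than the protocol's random-effects model and its exact back-transformation, and it borrows the perceived-honorary-authorship counts from the three radiology surveys cited earlier purely as toy input:

```python
import math

def ft_double_arcsine(events: int, n: int) -> tuple[float, float]:
    """Freeman-Tukey double arcsine transform of a proportion, with its variance."""
    t = math.asin(math.sqrt(events / (n + 1))) + math.asin(math.sqrt((events + 1) / (n + 1)))
    return t, 1.0 / (n + 0.5)

def pooled_proportion(studies: list[tuple[int, int]]) -> float:
    """Inverse-variance pooled estimate (fixed effect for brevity; the protocol
    specifies random effects), back-transformed with the simple sin^2 rule."""
    pairs = [ft_double_arcsine(x, n) for x, n in studies]
    t_pooled = sum(t / v for t, v in pairs) / sum(1 / v for _, v in pairs)
    return math.sin(t_pooled / 2) ** 2

# Toy input: perceived-honorary-authorship counts from the three radiology
# surveys cited earlier (PUBMED:21386051, PUBMED:24475845, PUBMED:29555566).
print(round(pooled_proportion([(102, 392), (91, 328), (97, 309)]), 2))  # ~0.28
```

The pooled value of roughly 0.28 sits, as expected, between the individual survey rates of 26.0%, 27.7%, and 31.4%.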
abstract_id: PUBMED:27522373
Honorary authorship and symbolic violence. This paper invokes the conceptual framework of Bourdieu to analyse the mechanisms, which help to maintain inappropriate authorship practices and the functions these practices may serve. Bourdieu's social theory with its emphasis on mechanisms of domination can be applied to the academic field, too, where competition is omnipresent, control mechanisms of authorship are loose, and the result of performance assessment can be a matter of symbolic life and death for the researchers. This results in a problem of game-theoretic nature, where researchers' behaviour will be determined more by the logic of competition, than by individual character or motives. From this follows that changing this practice requires institutionalized mechanisms, and change cannot be expected from simply appealing to researchers' individual conscience. The article aims at showing that academic capital (administrative power, seniority) is translated into honorary authorship. With little control, undetected honorary authorship gives the appearance of possessing intellectual capital (scientific merit). In this way a dominant position is made to be seen as natural result of intellectual ability or scientific merit, which makes it more acceptable to those in dominated positions. The final conclusion of this paper is that undemocratic authorship decisions and authorship based performance assessment together are a form of symbolic violence.
abstract_id: PUBMED:34377454
Honorary authorship in high-impact journals in anaesthesia and pain medicine. Listing an author whose input was insufficient on a published paper is called honorary authorship. The aim of this study is to assess the proportion of honorary authorship in the field of pain medicine. Data were collected from seven high-impact journals. Corresponding authors were sent a survey regarding their awareness of authorship guidelines, the decision-making in authorship, and specific contributions made to the surveyed article. We identified two types of honorary authorship: (1) self-perceived honorary authorship, which is measured by asking the corresponding author if honorary authorship was present according to their opinion, and (2) International Committee of Medical Journal Editors (ICMJE)-defined honorary authorship, which is honorary authorship based on the guidelines. In total, 1051 emails were sent and 231 responded, a response rate of 22.0%. 81.3% of the respondents were familiar with the ICMJE authorship guidelines, while 59.6% were aware of the issue of honorary authorship. 13.3% of the respondents were employed at a department in which the senior member is automatically included on all manuscripts. The ICMJE-defined honorary authorship rate was 40%, while self-perceived honorary authorship was 13.5%. There seems to be a high awareness of the ICMJE guidelines among corresponding authors in the field of pain medicine. Despite this high awareness, a high proportion of journal articles had honorary authorship, suggesting that authorship guidelines fail to be applied in a significant proportion of the literature.
abstract_id: PUBMED:34100137
Knowledge and Perceptions of Honorary Authorship among Health Care Researchers: Online Cross-sectional Survey Data from the Middle East. One of the core problems of scientific research authorship is honorary authorship. It violates the ethical principle of clear and appropriate assignment of scientific research contributions. The prevalence of honorary authorship worldwide is alarmingly high across various research disciplines. As a result, many academic institutions and publishers have been trying to explore ways to overcome this unethical research practice. The International Committee of Medical Journal Editors (ICMJE) recommended criteria for authorship as guidance for researchers submitting manuscripts to biomedical journals. However, despite the ICMJE guidelines, honorary authorship is still significantly present across various health research disciplines. The aim of this study was to explore the perceptions and knowledge of health care researchers towards honorary authorship according to the ICMJE guidelines across different health care fields in Jordan, which to our knowledge had never been explored before. Data from an electronic survey distributed among researchers working in different healthcare fields across several major universities in Jordan revealed that most of the respondents were assistant professors working mainly in the schools of Medicine and Pharmacy. The majority of the respondents (65.5%) were not aware of the ICMJE authorship guidelines, and around 37% reported the inclusion of an honorary author; the most common non-authorship task, reported by 73% of the respondents, was reviewing the manuscript. Our findings emphasize the need for national academic and research institutions to address the issue of authorship in their educational programs and internal policies.
abstract_id: PUBMED:24215989
Honorary authorship: frequency and associated factors in physical medicine and rehabilitation research articles. Objectives: To estimate the prevalences of perceived honorary authorship and International Committee of Medical Journal Editors (ICMJE)-defined honorary authorship, and identify factors affecting each rate in the physical medicine and rehabilitation literature.
Design: Internet-based survey.
Setting: Not applicable.
Participants: First authors of articles published in 3 major physical medicine and rehabilitation journals between January 2009 and December 2011 were surveyed in June and July 2012 (N=1182).
Interventions: Not applicable.
Main Outcome Measures: The reported prevalences of perceived and ICMJE-defined honorary authorship were the primary outcome measures, and multiple factors were analyzed to determine whether they were associated with these measures.
Results: The response rate was 27.3% (248/908). The prevalences of perceived and ICMJE-defined honorary authorship were 18.0% (44/244) and 55.2% (137/248), respectively. Factors associated with perceived honorary authorship in the multivariate analysis included the suggestion that an honorary author should be included (P<.0001), being a medical resident or fellow (P=.0019), listing "reviewed manuscript" as 1 of the nonauthorship tasks (P=.0013), and the most senior author deciding the authorship order (P=.0469). Living outside North America was independently associated with ICMJE-defined honorary authorship (P=.0079) in the multivariate analysis. In the univariate analysis, indicating that the most senior author decided authorship order was significantly associated with ICMJE-defined honorary authorship (P<.001).
Conclusions: Our results suggest that honorary authorship does occur in a significant proportion of the physical medicine and rehabilitation literature. Additionally, we found several factors associated with perceived and ICMJE-defined honorary authorship and a discrepancy between the 2 rates. Further studies with larger response rates are recommended to further explore this topic.
abstract_id: PUBMED:27500368
Honorary and ghost authorship in reports of randomised clinical trials in oncology. Background: The International Committee of Medical Journal Editors (ICMJE) has developed guidelines for responsible and accountable authorship. Few studies have assessed the frequency and nature of ghost and honorary authorship in publications of oncology trials.
Materials And Methods: Reports of randomised clinical trials evaluating systemic cancer therapy published from July 2010 to December 2012 in six high-impact journals were identified systematically. Ghost authorship was determined to be present in any scenario where investigators or statisticians listed in the protocol were not included as authors and not acknowledged in the report of the trial. The list of contributions for authors of published articles was recorded, and we defined an article as having an honorary author if any author did not meet all three criteria established by ICMJE in 1985.
Results: Two hundred publications were identified. For 61 articles, protocols with listed investigators were available, and 40 (66%) of these articles met our definition of ghost authorship. Medical writers were involved in 89 articles (45%), and assistance was acknowledged only in sponsored trials. Contributions of each author were provided in 195 articles, and 63 (33%) articles met our definition for honorary authorship. Funding source was not a predictor for either honorary or ghost authorship. Journal impact factor was positively associated with honorary authorship (odds ratio = 1.03; 95% confidence interval = 1.004-1.065; P = 0.03), but not with ghost authorship.
Conclusion: Ghost and honorary authorship are prevalent in articles describing trials for systemic therapy of cancer. Guidelines should be enforced to improve transparency and accountability.
Answer: Yes, geographic factors do influence the frequency of honorary authorship in radiologic research articles. A study found that the perception of honorary authorship was significantly higher among respondents from Asia and Europe than from North America (PUBMED:24475845). This study also noted that adherence to International Committee of Medical Journal Editors (ICMJE) criteria and policies providing lectures or courses on publication ethics were associated with a lower perception of honorary authorship. The study suggests that cultural and educational factors related to geographic location can impact the prevalence of honorary authorship in the field of radiology.
Instruction: Laparoscopic cholecystectomy and postoperative pain: is it affected by intra-abdominal pressure?
Abstracts:
abstract_id: PUBMED:20729688
Laparoscopic cholecystectomy and postoperative pain: is it affected by intra-abdominal pressure? Background And Purpose: Intra-abdominal pressure created during laparoscopic cholecystectomy is accepted as a factor for postoperative pain. In this prospective, randomized, clinical study, the goal is to determine the effects of different intra-abdominal pressure values on visceral type pain.
Materials And Methods: Sixty women who underwent laparoscopic cholecystectomy were included in this study. Low-pressure (8 mm Hg), standard-pressure (SP: 12 mm Hg), and high-pressure (HP: 14 mm Hg) groups were designed for the study. The statistical analysis included mean age, weight, analgesic consumption, postoperative pain assessed by the Numeric Scale, duration of anesthesia, and operation.
Results: No statistically significant difference was found between the groups in age, weight, analgesic consumption, or Numeric Scale values. For duration of anesthesia, statistically significant differences were found between the low-pressure and HP groups and between the SP and HP groups; for operative duration, a statistically significant difference was found between the SP and HP groups. There were no differences between the other groups.
Conclusions: We think that intra-abdominal pressure has no effect on postoperative visceral pain but does affect the duration of anesthesia and operation.
abstract_id: PUBMED:23118052
Effects of ovariohysterectomy on intra-abdominal pressure and abdominal perfusion pressure in cats. Intra-abdominal pressure (IAP) and abdominal perfusion pressure (APP) have shown clinical relevance in monitoring critically ill human beings submitted to abdominal surgery. Only a few studies have been performed in veterinary medicine. The aim of this study was to assess how pregnancy and abdominal surgery may affect IAP and APP in healthy cats. For this purpose, pregnant (n=10) and non-pregnant (n=11) queens undergoing elective spaying, and tomcats (n=20, used as controls) presented for neutering by scrotal orchidectomy were included in the study. IAP, mean arterial blood pressure (MAP), APP, heart rate and rectal temperature (RT) were determined before, immediately after, and four hours after surgery. IAP increased significantly immediately after abdominal surgery in both female groups when compared with baseline (P<0.05) and male (P<0.05) values, and returned to initial perioperative readings four hours after surgery. Tomcats and pregnant females (P<0.05) showed an increase in MAP and APP immediately after surgery decreasing back to initial perioperative values four hours later. A significant decrease in RT was appreciated immediately after laparotomy in both pregnant and non-pregnant queens. IAP was affected by abdominal surgery in this study, due likely to factors, such as postoperative pain and hypothermia. Pregnancy did not seem to affect IAP in this population of cats, possibly due to subjects being in early stages of pregnancy.
abstract_id: PUBMED:32253560
The impact of intra-abdominal pressure on perioperative outcomes in laparoscopic cholecystectomy: a systematic review and network meta-analysis of randomized controlled trials. Background: Laparoscopic cholecystectomy involves using intra-abdominal pressure (IAP) to facilitate adequate surgical conditions. However, there is no consensus on optimal IAP levels to improve surgical outcomes. Therefore, we conducted a systematic literature review (SLR) to examine outcomes of low, standard, and high IAP among adults undergoing laparoscopic cholecystectomy.
Methods: An electronic database search was performed to identify randomized controlled trials (RCTs) that compared outcomes of low, standard, and high IAP among adults undergoing laparoscopic cholecystectomy. A Bayesian network meta-analysis (NMA) was used to conduct pairwise meta-analyses and indirect treatment comparisons of the levels of IAP assessed across trials.
Results: The SLR and NMA included 22 studies. Compared with standard IAP, on a scale of 0 (no pain at all) to 10 (worst imaginable pain), low IAP was associated with significantly lower overall pain scores at 24 h (mean difference [MD]: -0.70; 95% credible interval [CrI]: -1.26, -0.13) and a reduced risk of shoulder pain at 24 h (odds ratio [OR] 0.24; 95% CrI 0.12, 0.48) and at 72 h post-surgery (OR 0.22; 95% CrI 0.07, 0.65). Hospital stay was shorter with low IAP (MD: -0.14 days; 95% CrI -0.30, -0.01). High IAP was not associated with a significant difference for these outcomes when compared with standard or low IAP. No significant differences were found between the IAP levels regarding need for conversion to open surgery; post-operative acute bleeding, pain at 72 h, nausea, and vomiting; and duration of surgery.
Conclusions: Our study of published trials indicates that using low, as opposed to standard, IAP during laparoscopic cholecystectomy may reduce patients' post-operative pain, including shoulder pain, and length of hospital stay. Heterogeneity in the pooled estimates and high risk of bias of the included trials suggest the need for high-quality, adequately powered RCTs to confirm these findings.
abstract_id: PUBMED:29761274
Lower intra-abdominal pressure has no cardiopulmonary benefits during laparoscopic colorectal surgery: a double-blind, randomized controlled trial. Background: Higher intra-abdominal pressure may impair cardiopulmonary function during laparoscopic surgery. While 12-15 mmHg is generally recommended as a standard pressure, the benefits of lower intra-abdominal pressure are unclear. We thus studied whether low intra-abdominal pressure, compared with standard pressure, improves cardiopulmonary dynamics during laparoscopic surgery.
Methods: Patients were randomized according to the intra-abdominal pressure and neuromuscular blocking levels during laparoscopic colorectal surgery: low pressure (8 mmHg) with deep-block (post-tetanic count 1-2), standard pressure (12 mmHg) with deep-block, and standard pressure with moderate-block (train-of-four count 1-2) groups. During the laparoscopic procedure, we recorded cardiopulmonary variables including cardiac index, pulmonary compliance, and surgical conditions. We also assessed postoperative pain intensity and recovery time of bowel movement. The primary outcome was the cardiac index 30 min after onset of laparoscopy.
Results: Patients were included in the low pressure with deep-block (n = 44), standard pressure with deep-block (n = 44), and standard pressure with moderate-block (n = 43) groups. The mean (SD) cardiac index 30 min after laparoscopy was 2.7 (0.7), 2.7 (0.9), and 2.6 (1.0) L min⁻¹ m⁻² in each group, respectively (P = 0.715). Pulmonary compliance was higher but the surgical conditions were poorer in the low intra-abdominal pressure group than in the standard pressure groups (both P < 0.001). Other variables were comparable between groups.
Conclusion: We observed few cardiopulmonary benefits but poorer surgical conditions with low intra-abdominal pressure during laparoscopy. Considering cardiopulmonary dynamics and surgical conditions, the standard intra-abdominal pressure may be preferable to the low pressure for laparoscopic surgery.
abstract_id: PUBMED:33565010
Reduced Laparoscopic Intra-abdominal Pressure During Laparoscopic Cholecystectomy and Its Effect on Post-operative Pain: a Double-Blinded Randomised Control Trial. Background: Laparoscopic surgery is regarded as the gold standard for the surgical management of cholelithiasis. To reduce post-operative pain, low-pressure laparoscopic cholecystectomies (LPLC) have been trialed. A recent systematic review found that LPLC reduced pain; however, many of the randomised control trials were at a high risk of bias and the overall quality of evidence was low.
Methods: One hundred patients undergoing elective laparoscopic cholecystectomy were randomised to LPLC (8 mmHg) or standard-pressure laparoscopic cholecystectomy (SPLC; 12 mmHg), with surgeons and anaesthetists blinded to the pressure. Pressures were increased if vision was compromised. Primary outcomes were post-operative pain and analgesia requirements at 4-6 h and 24 h.
Results: Intra-operative visibility was significantly reduced in LPLC (p<0.01), resulting in a higher number of operations requiring the pressure to be increased (29% vs 8%, p=0.010); however, there were no differences in length of operation or post-operative outcomes. Pain scores were comparable at all time points across all pressures; however, recovery room fentanyl requirement was more than four times lower at 8 mmHg than at 12 mmHg (12.5 mcg vs 60 mcg, p=0.047). Nausea and vomiting were also less frequent at 8 mmHg than at 12 mmHg (0/36 vs 7/60, p=0.033). Interestingly, when surgeons estimated the operating pressure, they were correct in only 69% of cases.
Conclusion: Although pain scores were similar, there was a significant reduction in fentanyl requirement and nausea/vomiting in LPLC. Although LPLC compromised intra-operative visibility, requiring increased pressure in some cases, there was no difference in complications, suggesting LPLC is safe and beneficial to attempt in all patients.
Trial Registration: Registered with the Australia and New Zealand Clinical Trials Registry (ACTRN12619000205134).
abstract_id: PUBMED:27895352
Effect of intra-abdominal pressure on post-laparoscopic cholecystectomy shoulder tip pain: A randomized control trial. Objective: To compare the effect of intra-abdominal pressure on postoperative shoulder-tip pain in laparoscopic cholecystectomy.
Methods: This was a randomized control study, conducted at Lady Reading Hospital Peshawar from January to August 2013 on 160 patients, randomized to two groups, i.e. the low-pressure (LPLC) and the standard-pressure (SPLC) groups, in which the intra-abdominal pressure was kept at 10 mmHg and above 10 mmHg during surgery, respectively. The age, gender, weight, duration of surgery, postoperative pain and frequency of analgesic administration in the first 24 hours were recorded and analyzed using the Statistical Package for Social Sciences v20.0. Frequencies and percentages were calculated for categorical variables, while mean ± SD was calculated for continuous variables. A p-value of <0.05 was considered significant.
Results: The mean operative times in groups A and B were 27.84±6.078 vs. 28.51±7.45 minutes, respectively (p-value=0.625). Overall, shoulder-tip pain was reported in 25 (15%) patients. The frequencies in groups A and B were 6 (7.5%) vs. 19 (23.8%), respectively (p-value=0.005). The mean intensity of pain on the VAS was 0.28±0.90 vs. 1.31±2.38 in the two groups, respectively (p-value=0.001). The mean number of analgesic administrations in the first 24 hours was 2.24±0.48 in Group A vs. 2.41±0.52 in Group B (p-value=0.02).
Conclusions: Our study shows that low intra-abdominal pressure results in a reduced frequency of post-operative shoulder-tip pain without any prolongation of the duration of surgery.
abstract_id: PUBMED:32118762
Low intra-abdominal pressure and deep neuromuscular blockade laparoscopic surgery and surgical space conditions: A meta-analysis. Background: Low intra-abdominal pressure (IAP) and deep neuromuscular blockade (NMB) are frequently used in laparoscopic abdominal surgery to improve surgical space conditions and decrease postoperative pain. The evidence supporting operations using low IAP and deep NMB is open to debate.
Methods: The feasibility of the routine use of low IAP + deep NMB during laparoscopic surgery was examined. A meta-analysis was conducted with randomized controlled trials (RCTs) to compare the influence of low IAP + deep NMB vs. low IAP + moderate NMB, standard IAP + deep NMB, and standard IAP + moderate NMB during laparoscopic procedures on surgical space conditions, the duration of surgery and postoperative pain. RCTs were identified using the Cochrane, Embase, PubMed, and Web of Science databases from inception to June 2019. Our search identified 9 eligible studies on the use of low IAP + deep NMB and surgical space conditions.
Results: Low IAP + deep NMB during laparoscopic surgery did not improve the surgical space conditions when compared with the use of moderate NMB, with a mean difference (MD) of -0.09 (95% confidence interval (CI): -0.55 to 0.37). Subgroup analyses showed improved surgical space conditions with the use of low IAP + deep NMB compared with low IAP + moderate NMB (MD = 0.63 [95% CI: 0.06-1.19]), and slightly worse conditions compared with the use of standard IAP + deep NMB and standard IAP + moderate NMB, with MDs of -1.13 (95% CI: -1.47 to -0.79) and -0.87 (95% CI: -1.30 to -0.43), respectively. The duration of surgery did not improve with low IAP + deep NMB (MD = 1.72 [95% CI: -1.69 to 5.14]), and no significant reduction in early postoperative pain was found in the deep-NMB group (MD = -0.14 [95% CI: -0.51 to 0.23]).
Conclusion: Low IAP + deep NMB is not significantly more effective than other IAP + NMB combinations for optimizing surgical space conditions, duration of surgery, or postoperative pain in this meta-analysis. Whether the use of low IAP + deep NMB results in fewer intraoperative complications, enhanced quality of recovery or both after laparoscopic surgery should be studied in the future.
abstract_id: PUBMED:24701492
Pain management after laparoscopic cholecystectomy-a randomized prospective trial of low pressure and standard pressure pneumoperitoneum. Background: Abdominal pain and shoulder-tip pain after laparoscopic cholecystectomy are distressing for the patient. Among the causes of this pain are peritoneal stretching and diaphragmatic irritation by the high intra-abdominal pressure created by pneumoperitoneum. We designed a study to compare post-operative pain after laparoscopic cholecystectomy using the low-pressure (7-8 mm Hg) and standard-pressure (12-14 mm Hg) techniques. Aim: To compare the effect of low-pressure and standard-pressure pneumoperitoneum on pain after laparoscopic cholecystectomy, and further to study the safety of low-pressure pneumoperitoneum in laparoscopic cholecystectomy.
Settings And Design: A prospective randomised double blind study.
Materials And Methods: A prospective randomised double blind study was done in 100 ASA grade I & II patients. They were divided into two groups of 50 each. Group A patients underwent laparoscopic cholecystectomy with low-pressure pneumoperitoneum (7-8 mm Hg), while group B underwent laparoscopic cholecystectomy with standard-pressure pneumoperitoneum (12-13 mm Hg). The two groups were compared for pain intensity, analgesic requirement and complications.
Statistical Analysis: Demographic data and intraoperative complications were analysed using the chi-square test. Frequency of pain, intensity of pain and analgesic consumption were compared using the ANOVA test.
Results: Post-operative pain scores were significantly lower in the low-pressure group than in the standard-pressure group. The number of patients requiring rescue analgesic doses was higher in the standard-pressure group; this was statistically significant. Total analgesic consumption was also higher in the standard-pressure group. There was no difference in intraoperative complications.
Conclusion: This study demonstrates that the simple expedient of reducing the pneumoperitoneum pressure to 8 mm Hg results in a reduction in both the intensity and frequency of post-operative pain, and hence earlier recovery and a better outcome. This study also shows that the low-pressure technique is safe, with a comparable rate of intraoperative complications.
abstract_id: PUBMED:19542853
A prospective randomized, controlled study comparing low pressure versus high pressure pneumoperitoneum during laparoscopic cholecystectomy. Background: The increase in intra-abdominal pressure caused by insufflation of carbon dioxide during laparoscopy produces certain changes in organ system function and also leads to postoperative pain. The degree of intra-abdominal pressure is directly related to such changes. Laparoscopic cholecystectomy can be performed with low-pressure pneumoperitoneum; however, the available space for dissection is smaller than with high-pressure pneumoperitoneum.
Methods: Twenty-six patients undergoing elective laparoscopic cholecystectomy were studied in a prospective, randomized, patient- and surgeon-blinded manner. The intra-abdominal pressure was kept at either low pressure (8 mm Hg) or high pressure (12 mm Hg). All patients underwent two-dimensional echocardiography, pulmonary function testing and color Doppler examination of the lower limb vessels preoperatively and postoperatively. Arterial blood gases and end-tidal CO2 were monitored before insufflation, during surgery and after deflation. Pain score was measured by visual analog scale and the surgeon's comfort level was recorded. Postoperative analgesia requirement, complications, and hospital stay were recorded. Student's t test was used for the statistical analysis.
Results: Both groups were matched for demographic parameters. Four patients required conversion to high pressure. Intraoperative pO2 level, postoperative pain, analgesic requirement, pulmonary function, and hospital stay favored low-pressure pneumoperitoneum in a statistically significant manner. There was no difference between the two groups in duration of surgery or in intraoperative and postoperative complications. However, technical difficulty was graded higher (statistically nonsignificant) with low-pressure pneumoperitoneum.
Conclusions: Uncomplicated gallstone disease can be treated by low-pressure laparoscopic cholecystectomy with reasonable safety by an experienced surgeon. Though surgeons experience more difficulty in dissection during low-pressure pneumoperitoneum, it is significantly advantageous in terms of postoperative pain, use of analgesics, preservation of pulmonary function, and hospital stay.
abstract_id: PUBMED:29429775
Effect of retention sutures on abdominal pressure after abdominal surgery. Purpose: To evaluate the effect of retention sutures on abdominal pressure and postoperative prognosis in abdominal surgery patients.
Methods: This prospective cohort study included patients who were admitted to Daping Hospital from May 15, 2014 to October 11, 2014. A total of 57 patients were enrolled, including 18 patients in the "U" type retention suture group, 17 patients in the intermittent retention suture group, and 22 patients in the non-retention suture group. The demographic data, clinical data and risk factors for abdominal wound dehiscence were recorded. The bladder (intravesical) pressure (IVP) was monitored preoperatively, intraoperatively, and for four days postoperatively. Additionally, the incidence of abdominal wound dehiscence and infection 14 days after the operation was recorded.
Results: During the operation, the IVP decreased and then increased; it was at its lowest 1 h after the start of the operation (5.3 ± 3.2 mmHg) and peaked after placement of the retention sutures (8.8 ± 4.0 mmHg). The IVP values in the "U" type retention suture group and the intermittent retention suture group were higher than in the non-retention suture group 4 days after the operation (p < 0.005). The Visual Analogue Scale (VAS) pain scores were 3.9 ± 2.2, 3.8 ± 2.0, and 3.0 ± 1.0 in the "U" type retention suture group, intermittent retention suture group and non-retention suture group, respectively. The VAS pain scores in the "U" type retention suture group and the intermittent retention suture group were higher than in the non-retention suture group (p < 0.005).
Conclusion: Although retention sutures may reduce the incidence of postoperative wound dehiscence in abdominal surgery patients, they can increase the IVP and postoperative pain.
Answer: The relationship between intra-abdominal pressure (IAP) during laparoscopic cholecystectomy and postoperative pain has been explored in several studies, with varying results.
One study found that different intra-abdominal pressure values (low pressure at 8 mm Hg, standard pressure at 12 mm Hg, and high pressure at 14 mm Hg) did not significantly affect postoperative visceral pain, analgesic consumption, or Numeric Scale values for pain. However, the durations of anesthesia and operation were affected, with significant differences found between the low-pressure and high-pressure groups, as well as between the standard-pressure and high-pressure groups (PUBMED:20729688).
Another study conducted a systematic review and network meta-analysis of randomized controlled trials and concluded that low IAP was associated with significantly lower overall pain scores at 24 hours, reduced risk of shoulder pain at 24 and 72 hours post-surgery, and shorter hospital stays compared to standard IAP. High IAP did not show a significant difference for these outcomes when compared with standard or low IAP (PUBMED:32253560).
A randomized control trial comparing low pressure (8 mm Hg) to standard pressure (12 mm Hg) during laparoscopic cholecystectomy found that low pressure resulted in reduced frequency of post-operative shoulder tip pain without prolonging the duration of surgery (PUBMED:27895352).
Another randomized controlled trial reported that low-pressure pneumoperitoneum (7-8 mm Hg) resulted in less post-operative pain and analgesic requirement compared to standard pressure pneumoperitoneum (12-14 mm Hg), without an increase in intraoperative complications (PUBMED:24701492).
However, a double-blind, randomized controlled trial found that lower intra-abdominal pressure had no cardiopulmonary benefits during laparoscopic colorectal surgery and resulted in poorer surgical conditions compared to standard pressure (PUBMED:29761274).
In summary, the evidence suggests that lower intra-abdominal pressure during laparoscopic cholecystectomy may be associated with reduced postoperative pain and shorter hospital stays, without significantly affecting the duration of surgery or increasing intraoperative complications. However, it may result in poorer surgical conditions, and the overall quality of evidence is variable, indicating a need for further high-quality research to confirm these findings. |
Instruction: Should supplemental estrogens be used as steroid-sparing agents in asthmatic women?
Abstracts:
abstract_id: PUBMED:8020306
Should supplemental estrogens be used as steroid-sparing agents in asthmatic women? Objective: To determine if supplemental estrogens should be used as steroid-sparing agents in asthmatic women.
Design: Case series.
Setting: Ambulatory care, community hospital.
Patients: Volunteer sample of three steroid-dependent asthmatic women.
Intervention: Addition of conjugated estrogens to existing asthma treatment.
Main Outcome Measure: Ability to decrease oral steroid requirement.
Results: The mean age of the women was 55 +/- 11 years; two were former smokers (cases 1 and 2) and one was a nonsmoker (case 3). One woman (case 3) was premenopausal and noted worsening of her asthma before and during menses. The other two women (cases 1 and 2) were postmenopausal. All three had been symptomatic from their asthma for 13.2 +/- 7.6 years. Each woman was being treated with maximal doses of inhaled albuterol, inhaled steroids, and therapeutic theophylline doses. Despite this aggressive management, all three women required daily supplemental steroids (mean dose, 26.7 +/- 11.5 mg of prednisone). Case 3 was started on a regimen of norethindrone/ethinyl estradiol 1/35, and cases 1 and 2 were begun on regimens of daily conjugated estrogen, 0.625 mg. Over the next 12 to 24 weeks, the conditions of all three women improved symptomatically and their steroid therapy was discontinued. In addition, steroid-associated side effects of hypertension, weight gain, osteoporosis, and easy bruising lessened.
Conclusion: Although this new observation of the steroid-sparing effect of estrogens remains preliminary, further study may help advance understanding of the mechanisms and treatment of asthma in women.
abstract_id: PUBMED:29031617
Steroid sparing effects of doxofylline. Glucocorticosteroids are widely used in the treatment of asthma and chronic obstructive pulmonary disease (COPD). However, there are growing concerns about the side effect profile of this class of drug, particularly an increased risk of pneumonia. Over the last two decades there have been many attempts to find drugs to allow a reduction of glucocorticosteroids, including xanthines such as theophylline. Use of xanthines has been shown to lead to a reduction in the requirement for glucocorticosteroids, although xanthines also have a narrow therapeutic window limiting their wider use. Doxofylline is another xanthine that has been shown to be of clinical benefit in patients with asthma or COPD, but to have a wider therapeutic window than theophylline. In the present study we have demonstrated that doxofylline produces a clear steroid-sparing effect in both an allergic and a non-allergic model of lung inflammation. Thus, we have shown that concomitant treatment with a low dose of doxofylline and a low dose of the glucocorticosteroid dexamethasone (doses that alone had no effect) significantly reduced both allergen-induced eosinophil infiltration into the lungs of allergic mice and lipopolysaccharide (LPS)-induced neutrophil infiltration into the lung, to an extent equivalent to a higher dose of each drug. Our results suggest that doxofylline demonstrates significant anti-inflammatory activity in the lung, which can result in significant steroid-sparing activity.
abstract_id: PUBMED:7842806
Should supplemental estrogen be used as steroid-sparing agents in women with asthma? N/A
abstract_id: PUBMED:31024635
Efficacy and steroid-sparing effect of benralizumab: has it an advantage over its competitors? Severe refractory asthma is characterized by a higher risk of asthma-related symptoms, morbidities, and exacerbations. This disease also entails much greater healthcare costs and deterioration in health-related quality of life (HR-QoL). Another concern, which is currently much discussed, is the high percentage of patients needing regular use of oral corticosteroids (OCS), which can lead to several systemic side effects. Airway eosinophilia is present in the majority of asthmatic patients, and elevated levels of blood and sputum eosinophils are associated with worse control of asthma. In severe refractory eosinophilic asthma, interleukin-5 (IL-5) plays a fundamental role in the inflammatory response, due to its profound effect on eosinophil biology. The advent of biological therapies provided an effective strategy, even if the increased number of molecules with different targets raised the challenge of choosing the right therapy and avoiding overlap. When considering severe refractory eosinophilic asthma and anti-IL-5 treatments, it is not easy to decide among mepolizumab, reslizumab, and benralizumab. In this article, we carried out an indirect comparison of literature data, especially between OCS reduction studies (ZONDA-SIRIUS) and pivotal studies (SIROCCO-MENSA), evaluating whether the clinical efficacy and the steroid-sparing effect of benralizumab may represent an advantage over other compounds. These data could help the clinician in the decision process of treatment choice, within the different available therapeutic options for eosinophilic refractory severe asthma.
abstract_id: PUBMED:31958239
Antibodies targeting the interleukin-5 signaling pathway used as add-on therapy for patients with severe eosinophilic asthma: a review of the mechanism of action, efficacy, and safety of the subcutaneously administered agents, mepolizumab and benralizumab. Introduction: Since the discovery of eosinophils in the sputum of asthmatic patients, several studies have offered evidence of their prominent role in the pathology and severity of asthma. Blood eosinophils are a useful biomarker for therapy selection in severe asthma patients. IL-5 plays a crucial role in the maturation, activation, recruitment, and survival of eosinophils and constitutes an important therapeutic target for patients with severe uncontrolled eosinophilic asthma. Areas covered: This review focuses on the similarities and differences in mechanisms of action, efficacy, and safety of two subcutaneously (SC) administered agents, the anti-interleukin (IL)-5 monoclonal antibody mepolizumab and the IL-5 receptor-α (IL-5Rα)-directed cytolytic monoclonal antibody benralizumab. All information used was collected from PubMed using keywords such as severe asthma, eosinophils, IL-5, airway inflammation, asthma exacerbations, mepolizumab, benralizumab, anti-IL5, and anti-IL5R, either as single terms or in several combinations. Expert opinion: Both mepolizumab and benralizumab are promising for the treatment of severe eosinophilic asthma, resulting in improved asthma control and reduced exacerbations, and can serve as steroid-sparing agents. However, since no head-to-head comparisons exist, it is unknown whether their different mechanisms of action might be related to different efficacy in specific patients' sub-phenotypes. Long-term clinical observations will provide real-world evidence regarding their lasting effectiveness and safety.
abstract_id: PUBMED:20177514
Oral-steroid sparing effect of inhaled fluticasone propionate in children with steroid-dependent asthma. Objective: To evaluate the oral steroid-sparing effect of inhaled fluticasone propionate (FP) in eight children with steroid-dependent asthma.
Design And Setting: Treatment protocol study at a tertiary pulmonary care centre at a children's hospital.
Patients: Eight children with severe persistent steroid dependent asthma (mean age 11.6 years [range 10 to 13 years], mean duration of asthma 8.37 years [range three to 11 years]) were enrolled in the study.
Measurements: Inhaled FP 880 μg/day (two puffs of 220 μg/puff, two times a day) was added to the children's asthma treatment, and attempts were made to reduce the dose of oral steroids by 20% every two weeks over a six-month period. After this six-month period, in the patients responding to inhaled FP, the dose of inhaled FP was reduced to 440 μg/day (two puffs of 110 μg/puff, two times a day) for the next six months. The mean percentage predicted values for forced expiratory volume in 1 s (FEV1) and maximal mid-expiratory flow rate (FEF25%-75%) were compared during the first month, at two to six months, and at seven to 12 month intervals before and after starting FP. The number of asthma exacerbations, emergency room visits, hospital admissions and number of school days lost were also compared.
Results: Within three months of starting inhaled FP, the mean alternate-day oral steroid dose decreased from 38 mg to 2.5 mg. In addition, six patients (66%) were able to discontinue the use of oral steroids. There was significant improvement in the mean number of emergency room visits per patient (P=0.016), mean asthma exacerbations per patient (P=0.016), mean hospital admissions per patient (P=0.016) and mean number of school days lost per patient (P=0.004) while patients were receiving high-dose inhaled FP compared with oral steroids. There was no deterioration of any of the above-mentioned parameters during the six-month period when the dose of inhaled FP was reduced. The mean FEV1 and FEF25%-75% during the two- to six-month and seven- to 12-month periods showed significant improvement while the patients were receiving FP compared with oral steroids (P<0.05 for both parameters for both time periods).
Conclusions: High-dose inhaled FP 880 μg/day has an important oral steroid-sparing effect. After oral steroids are tapered, patients maintain adequate control of asthma with low-dose inhaled FP. These findings suggest that FP may control asthma better than oral steroids.
abstract_id: PUBMED:11447376
Alternative agents in asthma. Glucocorticosteroids are the backbone of asthma therapy and are administered mainly by the inhaled route. Patients with "difficult" asthma are not a single homogeneous group. Some are stable on high-dose steroid therapy but experience unacceptable side effects; others remain unstable despite receiving high doses of inhaled or oral steroids. Several different steroid-sparing and alternative agents have been tried, with varying degrees of success. Some success has been achieved with conventional immunosuppressants such as methotrexate, gold, and cyclosporin A, but these agents can be justified only in a limited range of cases. Leukotriene receptor antagonists have proved a useful addition to asthma therapy and have been shown to have a modest steroid-sparing effect. Although the existing range of alternative agents has not proved to be particularly effective, several new therapeutic agents have been developed to target specific components of the inflammatory process in asthma. These include IgE antibodies, cytokines, chemokines, and vascular adhesion molecules. Future developments might include better forms of immunotherapy and strategies targeting the remodeling of structural elements of the airways.
abstract_id: PUBMED:37667254
Exogenous female sex steroid hormones and new-onset asthma in women: a matched case-control study. Background: Evidence on the role of exogenous female sex steroid hormones in asthma development in women remains conflicting. We sought to quantify the potential causal role of hormonal contraceptives and menopausal hormone therapy (MHT) in the development of asthma in women.
Methods: We conducted a matched case-control study based on the West Sweden Asthma Study, nested in a representative cohort of 15,003 women aged 16-75 years, with 8-year follow-up (2008-2016). Data were analyzed using Frequentist and Bayesian conditional logistic regression models.
Results: We included 114 cases and 717 controls. In Frequentist analysis, the odds ratio (OR) for new-onset asthma with ever use of hormonal contraceptives was 2.13 (95% confidence interval [CI] 1.03-4.38). Subgroup analyses showed that the OR increased consistently with older baseline age. The OR for new-onset asthma with ever MHT use among menopausal women was 1.17 (95% CI 0.49-2.82). In Bayesian analysis, the ORs for ever use of hormonal contraceptives and MHT were, respectively, 1.11 (95% posterior interval [PI] 0.79-1.55) and 1.18 (95% PI 0.92-1.52). The respective probability of each OR being larger than 1 was 72.3% and 90.6%.
Conclusions: Although use of hormonal contraceptives was associated with an increased risk of asthma, this may be explained by selection of women by baseline asthma status, given the upward trend in the effect estimate with older age. This indicates that use of hormonal contraceptives may in fact decrease asthma risk in women. Use of MHT may increase asthma risk in menopausal women.
abstract_id: PUBMED:14583965
Chloroquine as a steroid sparing agent for asthma. Background: For the majority of chronic asthmatics, symptoms are best controlled using inhaled steroids, but for a small group of asthma sufferers, symptoms cannot be controlled using inhaled steroids and instead continuous use of high-dosage oral steroids (corticosteroids) is required. However, using high-dosage oral steroids for long periods is associated with severe side effects. Steroid-sparing treatments have been sought, and one of these is chloroquine. Chloroquine is an anti-inflammatory agent, also used in the treatment of malarial infection and as a second-line therapy in the treatment of rheumatoid arthritis, sarcoidosis and systemic lupus erythematosus. All these diseases are associated with immunologic abnormalities, hence the speculation that chloroquine might be used to control severe, poorly controlled bronchial asthma. There is a need to systematically evaluate the evidence regarding its use to reduce or eliminate oral corticosteroid use in asthma.
Objectives: The objective of this review was to assess the efficacy of adding chloroquine to oral corticosteroids in patients with chronic asthma who are dependent on oral corticosteroids, with the intention of minimising or eventually eliminating the use of these oral steroids.
Search Strategy: Searches of the Cochrane Airways Group asthma and wheeze trials register were undertaken with predefined search terms in February 2003.
Selection Criteria: Only studies with a randomised placebo-controlled design met the inclusion criteria for the review.
Data Collection And Analysis: Two reviewers independently assessed studies for suitability for inclusion in the review. Data were extracted and entered into RevMan 4.2.2.
Main Results: One small study was included in the review. No significant findings were reported.
Reviewer's Conclusions: There is insufficient evidence to support the use of chloroquine as an oral steroid-sparing agent in chronic asthma. Further trials should optimise oral steroid dosage before addition of the steroid-sparing agent.
abstract_id: PUBMED:20861293
Simvastatin in the treatment of asthma: lack of steroid-sparing effect. Background: Statins have anti-inflammatory actions which in theory are potentially beneficial in asthma. Small trials have failed to show a significant benefit, but a systematic study to evaluate the steroid-sparing effect of statin treatment has not been carried out.
Methods: A randomised, placebo-controlled, crossover trial of simvastatin 40 mg at night was conducted in 51 patients with asthma and (steroid-free) sputum eosinophils ≥2%, with simultaneous stepwise reduction of the fluticasone propionate dose until loss of control occurred, followed by an increase until control was regained (the 'minimum' dose required).
Results: 43 patients completed the study. There was no significant difference in 'minimum' inhaled corticosteroid (ICS) dose requirement between simvastatin and placebo (median (IQR) 50 μg daily (0-250) vs 100 μg daily (0-250), p=0.931). 'Minimum' dose distribution was similar (p=0.269). The fluticasone dose at which loss of control occurred did not differ significantly between simvastatin and placebo (p=0.404). In patients with loss of control in both treatment arms, the fluticasone dose at loss of control was similar with simvastatin and placebo (median (IQR) 50 μg daily (0-100) for both, p=0.620). In those patients who reached 0 μg/day (n=18), the Asthma Control Questionnaire (ACQ) score was lower (p=0.037), forced expiratory volume in 1 s (FEV1) higher (p<0.01) and sputum eosinophils lower with simvastatin compared with placebo (9.5% compared with 25.4%, p=0.033).
Conclusions: Simvastatin does not have clinically important steroid-sparing effects in patients with eosinophilic asthma. In the absence of steroid, simvastatin is associated with minor improvements in symptoms and lung function, and a reduction in sputum eosinophils. Clinical trial number ACTRN12606000531516.
Answer: The use of supplemental estrogens as steroid-sparing agents in asthmatic women is a topic that has been explored in some studies, but the evidence remains preliminary. A case series involving three steroid-dependent asthmatic women who were given supplemental estrogens showed symptomatic improvement and the discontinuation of steroid therapy, with a reduction in steroid-associated side effects such as hypertension, weight gain, osteoporosis, and easy bruising (PUBMED:8020306). This suggests that supplemental estrogens may have a steroid-sparing effect in asthmatic women, but the authors of the study caution that this observation is preliminary and further study is needed to advance understanding of the mechanisms and treatment of asthma in women.
Other studies have explored different steroid-sparing agents for asthma treatment. For instance, doxofylline, a xanthine derivative, has been shown to produce a clear steroid-sparing effect in both allergic and non-allergic models of lung inflammation (PUBMED:29031617). Biological therapies targeting the interleukin-5 signaling pathway, such as benralizumab and mepolizumab, have also been shown to improve asthma control, reduce exacerbations, and serve as steroid-sparing agents in patients with severe eosinophilic asthma (PUBMED:31024635, PUBMED:31958239).
In contrast, a study on the use of simvastatin as a steroid-sparing agent found no clinically important steroid-sparing effects in patients with eosinophilic asthma (PUBMED:20861293). Similarly, chloroquine was not supported as an oral steroid-sparing agent in chronic asthma based on insufficient evidence from a small study (PUBMED:14583965).
Given the limited and preliminary nature of the evidence specifically regarding supplemental estrogens as steroid-sparing agents in asthmatic women, it is not possible to make a definitive recommendation for their use in this context. Further research is needed to confirm the potential benefits and to understand the mechanisms by which estrogens might exert a steroid-sparing effect in asthmatic women. |
Instruction: Is there an association between the coverage of immunisation boosters by the age of 5 and deprivation?
Abstracts:
abstract_id: PUBMED:25527213
Is there an association between the coverage of immunisation boosters by the age of 5 and deprivation? An ecological study. Objective: To determine whether there was an association between the coverage of booster immunisation of Diphtheria, Tetanus, acellular Pertussis and Polio (DTaP/IPV) and second Measles, Mumps and Rubella (MMR) dose by age 5 in accordance with the English national immunisation schedule by area-level socioeconomic deprivation and whether this changed between 2007/08 and 2010/11.
Design: Ecological study.
Data: Routinely collected national Cover of Vaccination Evaluated Rapidly data on immunisation coverage for DTaP/IPV booster and second MMR dose by age 5 and the Index of Multiple Deprivation (IMD).
Setting: Primary Care Trust (PCT) areas in England between 2007/08 and 2010/11.
Outcome Measures: Population coverage (%) of DTaP/IPV booster and second MMR immunisation by age 5.
Results: Over the 4 years, among the 9,457,600 children, the mean proportion of children immunised for the DTaP/IPV booster and second MMR across England increased from 79% (standard deviation [SD] 12%) to 86% (SD 8%) for DTaP/IPV and from 75% (SD 10%) to 84% (SD 6%) for the second MMR between 2007/08 and 2010/11. The area with the lowest DTaP/IPV booster coverage had 31% coverage in 2007/08 compared with 54.4% in 2010/11; for the second MMR, the lowest coverage was 39% in 2007/08 compared with 64.8% in 2010/11. A weak negative correlation was observed between average IMD score and immunisation coverage for the DTaP/IPV booster, which reduced but remained statistically significant over the study period (r=-0.298, p<0.001 in 2007/08 and r=-0.179, p=0.028 in 2010/11). This was similar for the second MMR in 2007/08 (r=-0.225, p=0.008) and 2008/09 (r=-0.216, p=0.008), but there was no statistically significant correlation in 2009/10 (r=-0.108, p=0.186) or 2010/11 (r=-0.078, p=0.343).
Conclusion: Lower immunisation coverage of DTaP/IPV booster and second MMR dose was associated with higher area-level socioeconomic deprivation, although this inequality reduced between 2007/08 and 2010/11 as proportions of children being immunised increased at PCT level, particularly for the most deprived areas. However, coverage is still below the World Health Organisation recommended 95% threshold for Europe.
abstract_id: PUBMED:32829214
Investigating spatial variation and change (2006-2017) in childhood immunisation coverage in New Zealand. Background: Immunisation is a safe and effective way of protecting children and adults against harmful diseases. However, immunisation coverage of children is declining in some parts of New Zealand.
Aim: Use a nationwide sample to first, examine the socioeconomic and demographic determinants of immunisation coverage and spatial variation in these determinants. Second, it investigates change in immunisation coverage in New Zealand over time.
Methods: Individual immunisation records were obtained from the National Immunisation Register (NIR) (2005-2017; 4,482,499 events). We calculated the average immunisation coverage by year and milestone age for census area units (CAUs) and then examined immunisation coverage by selected socioeconomic and demographic determinants. Finally, local variations in the association between immunisation coverage and selected determinants were investigated using geographically weighted regression.
Results: Findings showed a decrease in immunisation rates in recent years in CAUs with high immunisation coverage in the least deprived areas, and increasing immunisation rates in more deprived areas. Nearly all explanatory variables exhibited spatial variation in their association with immunisation coverage. For instance, the strongest negative effect of area-level deprivation is observed in the northern part of the South Island, the central-southern part of the North Island, around Auckland, and in Northland.
Conclusion: Our findings show that childhood immunisation coverage varies by socioeconomic and demographic factors across CAUs. We also identify important spatial variation and changes over time in recent years. This evidence can be used to improve immunisation related policy in New Zealand.
abstract_id: PUBMED:30879287
Immunisation coverage annual report, 2015 This 9th annual immunisation coverage report shows data for 2015 derived from the Australian Childhood Immunisation Register and the National Human Papillomavirus (HPV) Vaccination Program Register. This report includes coverage data for ‘fully immunised’ status and for individual vaccines at standard age milestones, and timeliness of receipt at earlier ages according to Indigenous status. Overall, ‘fully immunised’ coverage has been mostly stable at the 12- and 24-month age milestones since late 2003, but at 60 months of age, coverage reached its highest ever level of 93% during 2015. As in previous years, coverage for ‘fully immunised’ at 12 and 24 months of age among Indigenous children was 3.4% and 3.3% lower than for non-Indigenous children overall, respectively. In 2015, 77.8% of Australian females aged 15 years had 3 documented doses of HPV vaccine (jurisdictional range 68.0–85.6%), and 86.2% had at least one dose, compared to 73.4% and 82.7%, respectively, in 2014. The differential in on-time vaccination between Indigenous and non-Indigenous children in 2015 diminished progressively from 18.4% for vaccines due at 12 months to 15.7% for those due at 24 months of age. In 2015, the proportion of children whose parents had registered an objection to vaccination was 1.2% at the national level, with large regional variations. This was a marked decrease from 1.8% in 2014 and the lowest rate of registered vaccination objection nationally since 2007, when it was 1.1%. Medical contraindication exemptions for Australia more than doubled in 2015 compared with the previous year (from 635 to 1,401).
abstract_id: PUBMED:31522666
Annual Immunisation Coverage Report 2016. This tenth annual immunisation coverage report shows data for the calendar year 2016 derived from the Australian Immunisation Register (AIR) and the National Human Papillomavirus (HPV) Vaccination Program Register. After a decade of being largely stable at around 90%, 'fully immunised' coverage at the 12-month assessment age increased in 2016, reaching 93.7% at the December 2016 quarterly data point, similar to the 93.4% recorded for 60 months of age at the same data point. Implementation of the 'No Jab No Pay' policy may have contributed to these increases. While 'fully immunised' coverage at the 24-month age assessment milestone decreased marginally from 90.8%, in December 2015, to 89.6% at the December 2016 quarterly data point, this was likely due to the assessment algorithm being amended in December 2016 to include four doses of DTPa vaccine instead of three, following reintroduction of the 18-month booster dose. Among Indigenous children, the gap in coverage assessed at 12 months of age decreased fourfold, from 6.7 percentage points in March 2013 to only 1.7 percentage points below non-Indigenous children in December 2016. Since late 2012, 'fully immunised' coverage among Indigenous children at 60 months of age has been higher than for non-Indigenous children. Vaccine coverage for the nationally funded seasonal influenza vaccine program for Indigenous children aged 6 months to <5 years, which commenced in 2015, remained suboptimal nationally in 2016 at 11.6%. Changes in MMR coverage in adolescents were evaluated for the first time. Of the 411,157 ten- to nineteen-year-olds who were not recorded as receiving a second dose of MMR vaccine by 31 December 2015, 43,103 (10.5%) had received it by the end of 2016. Many of these catch-up doses are likely to have been administered as a result of the introduction on 1 January 2016 of the Australian Government's 'No Jab No Pay' policy. In 2016, 78.6% of girls aged 15 years had three documented doses of HPV vaccine (jurisdictional range 67.8-82.9%), whereas 72.9% of boys (up from 67.1% in 2015) had received three doses.
abstract_id: PUBMED:31738865
Annual Immunisation Coverage Report 2017. This eleventh national annual immunisation coverage report focuses on data for the calendar year 2017 derived from the Australian Immunisation Register (AIR) and the National Human Papillomavirus (HPV) Vaccination Program Register. This is the first report to include data on HPV vaccine course completion in Aboriginal and Torres Strait Islander (Indigenous) adolescents. 'Fully immunised' vaccination coverage in 2017 increased at the 12-month assessment age reaching 93.8% in December 2017, and at the 60-month assessment age reaching 94.5%. 'Fully immunised' coverage at the 24-month assessment age decreased slightly to 89.8% in December 2017, following amendment in December 2016 to require the fourth DTPa vaccine dose at 18 months. 'Fully immunised' coverage at 12 and 60 months of age in Indigenous children reached the highest ever recorded levels of 93.2% and 96.9% in December 2017. Catch-up vaccination activity for the second dose of measles-mumps-rubella-containing vaccine was considerably higher in 2017 for Indigenous compared to non-Indigenous adolescents aged 10-19 years (20.3% vs. 6.4%, respectively, of those who had not previously received that dose). In 2017, 80.2% of females and 75.9% of males aged 15 years had received a full course of three doses of human papillomavirus (HPV) vaccine. Of those who received dose one, 79% and 77% respectively of Indigenous girls and boys aged 15 years in 2017 completed three doses, compared to 91% and 90% of non-Indigenous girls and boys, respectively. A separate future report is planned to present adult AIR data and to assess completeness of reporting.
abstract_id: PUBMED:32868133
Pertussis and influenza immunisation coverage of pregnant women in New Zealand. Background: Immunisation is an important public health policy and measuring coverage is imperative to identify gaps and monitor trends. New Zealand (NZ), like many countries, does not routinely publish coverage of immunisations given during pregnancy. Therefore, this study examined pregnancy immunisation coverage of all pregnant NZ women between 2013 and 2018, and what factors affected uptake.
Methods: A retrospective cohort study of pregnant women who delivered between 2013 and 2018 was undertaken using administrative datasets. Maternity and immunisation data were linked to determine coverage of pertussis and influenza vaccinations in pregnancy. Generalised estimating equations were used to estimate the odds of receiving a vaccination during pregnancy.
Results: From 2013 to 2018, data were available for 323,622 pregnant women, of whom 21.7% received maternal influenza immunisation and 25.7% maternal pertussis immunisation. Coverage for both vaccines increased over time: pertussis increased from 10.2% to 43.6% and influenza from 11.2% to 30.8%. The odds of being vaccinated with either vaccine during pregnancy increased with increasing age and decreasing deprivation. Compared to NZ European or Other women, Māori and Pacific women had lower odds of receiving maternal pertussis (OR: 0.55, 95% CI: 0.54, 0.57; OR: 0.60, 95% CI: 0.58, 0.62, respectively) and influenza (OR: 0.69, 95% CI: 0.67, 0.71; OR: 0.90, 95% CI: 0.87, 0.94, respectively) immunisations during pregnancy. Women were also more likely to be vaccinated against pertussis if they received antenatal care from a General Practitioner or Obstetrician compared to a Midwife. A similar pattern was seen for influenza vaccination.
Conclusion: Gaps in maternal coverage for pertussis and influenza exist and work is needed to reduce immunisation inequities.
abstract_id: PUBMED:30959563
Children overdue for immunisation: a question of coverage or reporting? An audit of the Australian Immunisation Register. Objective: Vaccinations in Australia are reportable to the Australian Immunisation Register (AIR). Following major immunisation policy initiatives, the New South Wales (NSW) Public Health Network undertook an audit to estimate true immunisation coverage of NSW children at one year of age, and explore reasons associated with under-reporting.
Methods: Cross-sectional survey examining AIR immunisation records of a stratified random sample of 491 NSW children aged 12 to <15 months at 30 September 2017 who were >30 days overdue for immunisation. Survey data were analysed using population weights.
Results: Estimated true coverage of fully vaccinated one-year-old children in NSW is 96.2% (CI: 95.9-96.4), 2.1% higher than the AIR-reported coverage of 94.1%. Of the children reported as overdue on the AIR, 34.9% (CI: 30.9-38.9) were actually fully vaccinated. No significant association was found between under-reporting and socioeconomic status, rurality or reported local coverage level. Data errors in AIR uploading (at provider level) and duplicate records contributed to incorrect AIR coverage recording.
Conclusions: Despite incentives to record childhood vaccinations on AIR, under-reporting continues to be an important contributor to underestimation of true coverage in NSW. Implications for public health: More reliable transmission of encounters to AIR at provider level and removal of duplicates would improve accuracy of reported coverage.
abstract_id: PUBMED:38200476
Immunisation coverage and factors associated with incomplete immunisation in children under two during the COVID-19 pandemic in Sierra Leone. Background: Routine childhood immunisation is one of the most important life-saving public health interventions. However, many children still have inadequate access to these vaccines and millions remain (partially) unvaccinated globally. As the COVID-19 pandemic disrupted health systems worldwide, its effects on immunisation have become apparent. This study aimed to estimate routine immunisation coverage among children under two in Sierra Leone and to identify factors associated with incomplete immunisation during the COVID-19 pandemic.
Methods: A cross-sectional household survey was conducted in three districts in Sierra Leone: Bombali, Tonkolili and Port Loko. A three-stage cluster sampling method was followed to enrol children aged 10-23 months. Information regarding immunisation status was based on vaccination cards or caretaker's recall. Using WHO's definition, a fully immunised child received one BCG dose, three oral polio vaccine doses, three pentavalent vaccine doses and one measles-containing vaccine dose. Following the national schedule, full immunisation status can be achieved at 9 months of age. Data were weighted to reflect the survey's sampling design. Associations between incomplete immunisation and sociodemographic characteristics were assessed through multivariable logistic regression.
Results: A total of 720 children were enrolled between November and December 2021. Full vaccination coverage was estimated at 65.8% (95% CI 60.3%-71.0%). Coverage estimates were highest for vaccines administered at birth and decreased with doses administered subsequently. Adjusting for age, the lowest estimated coverage was 40.7% (95% CI 34.5%-47.2%) for the second dose of the measles-containing vaccine. Factors found to be associated with incomplete immunisation status were: living in Port Loko district (aOR = 3.47, 95% CI = 2.00-6.06; p-value < 0.001), the interviewed caretaker being Muslim (aOR = 1.94, 95% CI = 1.25-3.02; p-value = 0.015) and the interviewed caretaker being male (aOR = 1.93, 95% CI = 1.03-3.59, p-value = 0.039).
Conclusion: Though full immunisation coverage at district level improved compared with pre-pandemic district estimates from 2019, around one in three surveyed children had missed at least one basic routine vaccination and over half of eligible children had not received the recommended two doses of a measles-containing vaccine. These findings highlight the need to strengthen health systems to improve vaccination uptake in Sierra Leone, and to further explore barriers that may jeopardise equitable access to these life-saving interventions.
abstract_id: PUBMED:31565416
Tracking coverage, dropout and multidimensional equity gaps in immunisation systems in West Africa, 2000-2017. Background: Several West African countries are unlikely to achieve the recommended Global Vaccine Action Plan (GVAP) immunisation coverage and dropout targets in a landscape beset with entrenched intra-country equity gaps in immunisation. Our aim was to assess and compare the immunisation coverage, dropout and equity gaps across 15 West African countries between 2000 and 2017.
Methods: We compared Bacille Calmette-Guérin (BCG) and third-dose diphtheria-tetanus-pertussis (DTP3)-containing vaccine coverage between 2000 and 2017 using the WHO and UNICEF Estimates of National Immunisation Coverage for 15 West African countries. Estimated subregional median and weighted average coverages, and dropout (DTP1-DTP3), were tracked against the GVAP targets of ≥90% coverage (BCG and DTP3) and ≤10% dropout. Equity gaps in immunisation were assessed using the latest disaggregated national health survey immunisation data.
Results: The weighted average subregional BCG coverage was 60.7% in 2000, peaked at 83.2% in 2009 and was 65.7% in 2017. The weighted average DTP3 coverage was 42.3% in 2000, peaked at 70.3% in 2009 and was 61.5% in 2017. As of 2017, 46.7% of countries (7/15) had met the GVAP target on DTP3 coverage. Average weighted subregional immunisation dropout consistently reduced from 16.4% in 2000 to 7.4% in 2017, meeting the GVAP target in 2008. In most countries, inequalities in BCG and DTP3 coverage and dropout were mainly related to equity gaps of more than 20 percentage points between the wealthiest and the poorest, between high-coverage and low-coverage regions, and between children of mothers with at least secondary education and those with no formal education. A child's sex and place of residence (urban or rural) minimally determined equity gaps.
Conclusions: The West African subregion made progress between 2000 and 2017 in ensuring that its children utilised immunisation services, however, wide equity gaps persist.
abstract_id: PUBMED:31050966
The Impact of Conflict on Immunisation Coverage in 16 Countries. Background: Military conflict has been an ongoing determinant of inequitable immunisation coverage in many low- and middle-income countries, yet the impact of conflict on the attainment of global health goals has not been fully addressed. This review will describe and analyse the association between conflict, immunisation coverage and vaccine-preventable disease (VPD) outbreaks, along with country specific strategies to mitigate the impact in 16 countries.
Methods: We cross-matched immunisation coverage and VPD data in 2014 for displaced and refugee populations. Data on refugees or displaced persons were sourced from the United Nations High Commissioner for Refugees (UNHCR) database, and immunisation coverage and disease incidence data from World Health Organization (WHO) databases. Demographic and Health Survey (DHS) databases provided additional data on national and sub-national coverage. The 16 countries were selected because they had the largest numbers of registered UNHCR "persons of interest" and received new vaccine support from the Global Alliance for Vaccines and Immunisation (GAVI), the Vaccine Alliance. We used national planning and reporting documentation, including immunisation multiyear plans, health system strengthening strategies and GAVI annual progress reports (APRs), to assess the impact of conflict on immunisation access and coverage rates, and reviewed strategies developed to address immunisation program shortfalls in conflict settings. We also searched the peer-reviewed literature for evidence linking immunisation coverage and VPD outbreaks with evidence of conflict.
Results: We found that these 16 countries, representing just 12% of the global population, were responsible for 67% of global polio cases and 39% of global measles cases between 2010 and 2015. Fourteen of the 16 countries were below the global average of 85% coverage for diphtheria, pertussis, and tetanus (DPT3) in 2014. We present data from countries where the onset of conflict has been associated with sudden drops in national and sub-national immunisation coverage. Tense security conditions, along with damaged health infrastructure and depleted human resources, have contributed to infrequent outreach services and delays in new vaccine introductions and immunisation campaigns. These factors have in turn contributed to pockets of low coverage and disease outbreaks in sub-national areas affected by conflict. Despite these impacts, there was limited reference to the health needs of conflict-affected populations in immunisation planning and reporting documents in all 16 countries. Development partner investments were heavily skewed towards vaccine provision and working with partner governments, with comparatively low levels of health systems support or civil partnerships.
Conclusion: Global and national policy and planning focus is required on the service delivery needs of conflict-affected populations, with increased investment in health system support and civil partnerships, if persistent immunisation inequities in conflict-affected areas are to be addressed.
Answer: Yes, there is an association between the coverage of immunisation boosters by the age of 5 and deprivation. An ecological study conducted in England found that lower immunisation coverage of the DTaP/IPV booster and the second MMR dose was associated with higher area-level socioeconomic deprivation. This inequality narrowed between 2007/08 and 2010/11 as the proportion of children being immunised increased at the Primary Care Trust (PCT) level, particularly for the most deprived areas. However, coverage was still below the World Health Organisation's recommended 95% threshold for Europe (PUBMED:25527213).
Instruction: Is surgical excision necessary for atypical ductal hyperplasia of the breast diagnosed by Mammotome?
Abstracts:
abstract_id: PUBMED:11113443
Is surgical excision necessary for atypical ductal hyperplasia of the breast diagnosed by Mammotome? Background: A core biopsy finding of atypical ductal hyperplasia (ADH) underestimates malignancy in 18% to 88% of cases. The Mammotome biopsy technique allows more accurate assessment of the lesion, making selective excision of these lesions a consideration.
Methods: The records of 62 patients who were found to have ADH at Mammotome biopsy and subsequently underwent excision of the lesion were reviewed. Patient data were statistically analyzed for predictors of malignancy at the time of surgical excision.
Results: Of the 62 patients, 9 (15%) had malignancy at excision. Variables predictive of malignancy included markedly atypical hyperplasia, incomplete removal of calcifications at Mammotome biopsy, a previous contralateral breast cancer, and a family history of breast cancer, with a combined sensitivity of 100% and specificity of 80%.
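The combined sensitivity and specificity above reflect treating the predictors as a single decision rule: recommend excision whenever any factor is present. A minimal sketch of that computation; the patient records below are invented for illustration, not the study's data:

```python
# Hypothetical data: four binary risk factors plus the excision outcome.
patients = [
    # (markedly_atypical, residual_calcs, prior_contralateral_ca, family_hx, malignant)
    (True,  False, False, False, True),
    (False, True,  False, False, True),
    (False, False, False, False, False),
    (False, False, True,  False, False),
    (False, False, False, False, False),
]

tp = fp = fn = tn = 0
for *factors, malignant in patients:
    flagged = any(factors)  # rule: excise if any predictor is present
    if flagged and malignant:
        tp += 1
    elif flagged and not malignant:
        fp += 1
    elif not flagged and malignant:
        fn += 1
    else:
        tn += 1

sensitivity = tp / (tp + fn)  # fraction of malignancies correctly flagged
specificity = tn / (tn + fp)  # fraction of benign cases correctly spared
print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
```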
Conclusions: Mild ADH found on Mammotome, not associated with a personal or family history of breast cancer, may not need excision if all calcifications have been removed.
abstract_id: PUBMED:31612568
Benign breast papilloma: Is surgical excision necessary? In many centers internationally, the current standard of care is to excise all papillomas of the breast, despite recently reported low rates of upgrade to malignancy on final excision. The objective of this study was to determine the upgrade rate to malignancy in patients with papilloma without atypia. A retrospective review of a prospectively maintained database of all cases of benign intraductal papilloma in a tertiary referral symptomatic breast unit between July 2008 and July 2018 was performed. Patients with evidence of malignancy or atypia on core biopsy and those with a history of breast cancer or genetic mutations predisposing to breast cancer were excluded. One hundred and seventy-three cases of benign papilloma diagnosed on core biopsy were identified. Following exclusions, the final cohort comprised 138 patients. Mean age at presentation was 51 years. Mean follow-up time was 9.6 months. The most common symptom was a lump (40%). Of the 124 patients who underwent excision, three had ductal carcinoma in situ and there were no cases of invasive disease, giving an upgrade rate to malignancy of 2.4%. Upgrade to other high-risk lesions (atypical lobular and ductal hyperplasia and lobular carcinoma in situ) was demonstrated in 15 cases (12.1%). Benign papilloma was confirmed in 100 cases (81.5%), and 6 (4.8%) had no residual papilloma found on final excision. Twelve patients (8.7%) were managed conservatively. Of those, one later went on to develop malignancy. Patients with a diagnosis of benign papilloma without atypia on core biopsy have a low risk of upgrade to malignancy on final pathology, suggesting that observation may be a safe alternative to surgical excision. Further research is warranted to determine which patients can be safely managed conservatively.
abstract_id: PUBMED:18837890
The use of a vacuum-assisted biopsy device (Mammotome) in the early detection of breast cancer in the United Arab Emirates. Stereotactic core needle biopsy has proven to be an accurate technique for evaluation of mammographically detected microcalcification. The development of the Mammotome biopsy system has led many medical centers to use this vacuum-assisted device for the sampling of microcalcifications in mammographically detected nonpalpable breast lesions. Ninety-six women underwent 101 stereotactic Mammotome core biopsies for mammographic calcifications over a 32-month period in the Department of Surgery at Tawam Hospital, the national referral oncology center in the UAE. The stereotactic procedure was performed by surgeons using the Mammotome biopsy system. Microcalcifications were evident on specimen radiographs and microscopic sections in 96% and 87% of the cases, respectively. Excisional biopsy was recommended for diagnoses of atypical ductal hyperplasia or carcinoma. Patients with benign diagnoses underwent mammographic follow-up. Eighty-one lesions were benign, 5 atypical ductal hyperplasias and 14 carcinomas were diagnosed (2 invasive lobular carcinoma, 4 invasive ductal carcinoma, and 8 intraductal carcinomas in situ: 1 comedo, 1 cribriform, 6 mixed cribriform and micropapillary). Surgical excision in four patients with atypia on Mammotome biopsy (one was lost to follow-up) showed atypical ductal hyperplasia. Surgical excision in seven patients diagnosed with intraductal carcinoma in situ (one patient lost to follow-up) showed intraductal carcinoma with no evidence of microinvasion. Similar diagnoses were made in all the invasive ductal and lobular carcinomas in both Mammotome and excisional biopsies. A diagnosis of atypia on Mammotome biopsy warranted excision of the atypical area, yet the underestimation rate for the presence of carcinoma remained low. The likelihood of an invasive component at excision was negligible for microcalcification diagnosed as intraductal carcinoma in situ on Mammotome biopsy. Mammotome biopsy proved to be an accurate technique for the sampling, diagnosis, and early detection of breast cancer.
abstract_id: PUBMED:18455920
Underestimation of malignancy of atypical ductal hyperplasia diagnosed on 11-gauge stereotactically guided Mammotome breast biopsy: an Asian breast screen experience. The incidence of malignancy in excision biopsies performed for atypical ductal hyperplasia (ADH) diagnosed on needle biopsies has decreased since the advent of larger tissue sampling and improved accuracy using vacuum-assisted Mammotome biopsy. We undertook a retrospective study to identify factors predictive of understaging of ADH diagnosed on 11-gauge Mammotome biopsy, to determine whether surgical excision could be avoided in women whose mammographically visible calcifications had been completely removed. Sixty-one biopsy-diagnosed ADH lesions were correlated with surgical excision findings, which revealed DCIS in 14 (23%). The mammographic and biopsy features were analyzed statistically using Fisher's exact test. There was no association between morphology, extent of calcifications, or number of cores sampled and underestimation of malignancy (P=0.503, 0.709, and 0.551, respectively). Even in the absence of residual calcifications, carcinoma was still underestimated in 17% of cases.
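The association tests described above are standard Fisher's exact tests on 2x2 tables. A hedged sketch of the residual-calcifications analysis using SciPy; the counts are invented stand-ins, not the study's data:

```python
from scipy.stats import fisher_exact

# Rows: residual calcifications present/absent after biopsy.
# Columns: DCIS at excision vs. benign/ADH at excision (invented counts).
table = [
    [5, 20],  # residual calcifications present
    [3, 15],  # residual calcifications absent (~17% underestimation)
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio={odds_ratio:.2f}, p={p_value:.3f}")
```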
abstract_id: PUBMED:24482711
Ultrasound-guided vacuum-assisted breast biopsy using Mammotome biopsy system for detection of breast cancer: results from two high volume hospitals. Ultrasound-guided vacuum-assisted breast biopsy (VABB) has recently been regarded as a feasible, effective, minimally invasive and safe method for removal of benign breast lesions without serious complications. The frequency of detection of noninvasive malignant breast lesions by ultrasound-guided VABB is increasing. The aim of this study was to evaluate the role of ultrasound-guided VABB using the Mammotome biopsy system in the early detection of breast cancer. A retrospective review was conducted at the First Affiliated Hospital, Zhejiang University School of Medicine, and Taizhou Hospital, Wenzhou Medical College. From January 2008 to March 2013, a total of 5232 ultrasound-guided VABB procedures were performed in 3985 patients whose mean age was 36.3 years (range: 16-73). The histological results of the 5232 procedures were retrospectively reviewed. Ultrasonography follow-up was performed at 3- to 6-month intervals to assess recurrence. Two hundred twenty-three high-risk lesions (comprising 59 papillomas, 57 papillomatoses, and 107 atypical hyperplasias) and 61 malignant lesions (comprising 23 ductal carcinomas in situ, 21 lobular carcinomas in situ, 12 infiltrating ductal carcinomas, and 5 infiltrating mucinous carcinomas) were identified. Sensitivity (100%) and diagnostic accuracy (100%) for the detection of malignancy were excellent for ultrasound-guided VABB using the Mammotome biopsy system. Our results indicate that ultrasound-guided VABB using the Mammotome biopsy system is an accurate technique for the sampling, diagnosis, and early detection of breast cancer, and it may be recommended as a method of choice for detecting nonpalpable early breast cancer.
abstract_id: PUBMED:11148574
Mammotome core biopsy for mammary microcalcification: analysis of 160 biopsies from 142 women with surgical and radiologic followup. Background: Although stereotaxic fine-needle aspiration biopsy and 14-gauge core biopsy have proven to be accurate techniques for the evaluation of mammographically detected microcalcification, the development of the Mammotome Biopsy System (Biopsys Medical, Inc., Irvine, CA) has led many medical centers to use this vacuum-assisted device for the sampling of microcalcification.
Methods: One hundred forty-two women underwent 160 stereotaxic Mammotome core biopsies of mammographic calcification over a 1-year period. The stereotaxic procedure was performed by radiologists using the Mammotome Biopsy System. Microcalcification was evident on specimen radiographs and microscopic slides in 99% of the cases. Excisional biopsy was recommended for diagnoses of atypia or carcinoma. Patients with benign diagnoses underwent mammographic followup.
Results: One hundred thirty-two benign, 12 atypical, and 15 adenocarcinoma diagnoses (comprising 1 lobular adenocarcinoma in situ [LCIS], 1 invasive ductal adenocarcinoma [IDC], and 13 intraductal adenocarcinomas [DCIS]: 10 comedo, 1 cribriform, 2 mixed cribriform and micropapillary) were rendered. Surgical excision in eight patients with atypia on Mammotome biopsy (two refused surgery, two were lost to followup) showed ductal hyperplasia in three, atypical ductal hyperplasia (ADH) in three and DCIS (low grade, solid) in two patients. Surgical excisions in 14 patients diagnosed with carcinoma (1 patient lost to followup) showed ADH in 3, ADH and LCIS in 1, residual DCIS in 8, IDC in 1, and microinvasive carcinoma in 1 patient.
Conclusions: A diagnosis of atypia on Mammotome biopsy warranted excision of the atypical area, yet the underestimation rate for the presence of carcinoma remained low. The likelihood of an invasive component at excision was low for microcalcification diagnosed as DCIS on Mammotome biopsy. Mammotome biopsy proved to be an accurate technique for the sampling and diagnosis of mammary microcalcification.
abstract_id: PUBMED:11844176
Ultrasound-guided mammotome vacuum biopsy for the diagnosis of impalpable breast lesions. Objectives: To assess the diagnostic accuracy of ultrasound-guided mammotome vacuum biopsy in impalpable breast lesions.
Methods: Seventy-three patients who presented with impalpable breast lesions that were suspicious for malignancy at mammography and/or sonography were included in the study. In the first instance the women underwent ultrasound-guided fine-needle aspiration cytology, then, 3 days later, histological biopsy with an ultrasound-guided mammotome device. The patients with both cytological and histological diagnoses of malignancy underwent surgery; those with a negative (for malignancy) cytological diagnosis, but with a histological diagnosis of atypical hyperplasia or sclerosing adenosis, underwent surgical biopsy.
Results: The diagnostic accuracy of fine-needle aspiration cytology was 67.2%; the sensitivity was 86.7%, the specificity was 48.4%, the negative predictive value was 78.9% and the positive predictive value was 61.9%. In comparison, the diagnostic accuracy of histological sampling by mammotome vacuum biopsy was 97.3%; the sensitivity was 94.7%, the specificity was 100%, the negative predictive value was 94.6% and the positive predictive value was 100%. Thus there was a statistically significant difference in diagnostic accuracy between fine-needle aspiration cytology and mammotome vacuum biopsy (67.2% vs. 97.3%; chi2 test, P < 0.001). The 2.7% (2/73) failure rate of mammotome biopsy was likely to be due to an error in the positioning of the needle. The subsequent surgical biopsy proved that two cases, negative for malignancy by mammotome biopsy, were in fact malignant.
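Assuming the chi2 test was run on counts of correct versus incorrect diagnoses out of 73 lesions per method, the comparison can be reproduced approximately as follows (a sketch, not the authors' actual analysis):

```python
from scipy.stats import chi2_contingency

n = 73
fnac_correct = round(0.672 * n)        # 49 correct by fine-needle aspiration
mammotome_correct = round(0.973 * n)   # 71 correct by mammotome biopsy

table = [
    [fnac_correct, n - fnac_correct],
    [mammotome_correct, n - mammotome_correct],
]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```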
Conclusions: Our data confirm the value of sonography for the diagnosis of breast carcinoma in the preclinical phase and the efficacy of ultrasound sampling using a mammotome device to confirm the diagnosis in impalpable breast lesions.
abstract_id: PUBMED:15094596
Management of non-palpable breast lesions with vacuum-assisted large core needle biopsies (Mammotome). Experience with 560 procedures at the Val d'Aurelle Center. Percutaneous vacuum-assisted large core needle biopsy of breast microcalcifications is now commonly performed as the initial approach to nonpalpable breast lesions. It can obviate the need for surgery in women with benign lesions and often leads to a one-stage surgical procedure when malignant lesions are diagnosed. To illustrate this strategy, we describe our experience based on 560 procedures performed within a 36-month period. Sixty percent of the lesions were benign, mostly fibrocystic changes. Thirty percent of the specimens were malignant, almost exclusively intraductal carcinomas, sometimes associated with an invasive component. This component must be identified by the pathologist in order to avoid incomplete treatment and to plan lymph node excision. Finally, 10% of the specimens were borderline, including lobular neoplasia, atypical ductal hyperplasia and columnar cell lesions with atypia. Surgical excision is recommended for atypical ductal hyperplasia, columnar cell lesions with atypia and lobular neoplasia with particular features, pleomorphic or comedo-like, to avoid missing more aggressive associated lesions. A strict procedure is required for the analysis of needle core biopsies and the subsequent surgical specimens, to accurately classify breast lesions provided by a mammographic screening program. This procedure should be based on a multidisciplinary approach and dialog.
abstract_id: PUBMED:16055333
Atypical ductal hyperplasia of the breast: the controversial management of a borderline lesion: experience of 47 cases diagnosed at vacuum-assisted biopsy. The present paper describes our experience of 47 cases of atypical ductal hyperplasia (ADH) diagnosed at vacuum-assisted biopsy. From June 1999 to December 2003, 47 consecutive diagnoses of non-palpable ADH of the breast were made by 11-gauge vacuum-assisted biopsy (Mammotome). Of these, 17 were subjected to surgical excision and 11 underwent a second Mammotome at the site of the previous vacuum-assisted biopsy. Diagnostic underestimation occurred in only two cases, with a surgical diagnosis of ductal carcinoma in situ. In both patients, aged between 46 and 55 years, the radiological images showed microcalcifications of >20 mm, and the lesions were not completely removed by Mammotome. Despite the obvious limitations of the present study, it can be concluded that the probability of underestimating ADH diagnosis by Mammotome appears to be related to the radiological features of the lesion (>20 mm) and to the adequacy of specimens.
abstract_id: PUBMED:35654997
High-risk and selected benign breast lesions diagnosed on core needle biopsy: Evidence for and against immediate surgical excision. The vast majority of image-detected breast abnormalities are diagnosed by percutaneous core needle biopsy (CNB) in contemporary practice. For frankly malignant lesions diagnosed by CNB, the standard practice of excision and multimodality therapy have been well-defined. However, for high-risk and selected benign lesions diagnosed by CNB, there is less consensus on optimal patient management and the need for immediate surgical excision. Here we outline the arguments for and against the practice of routine surgical excision of commonly encountered high-risk and selected benign breast lesions diagnosed by CNB. The entities reviewed include atypical ductal hyperplasia, atypical lobular hyperplasia, lobular carcinoma in situ, intraductal papillomas, and radial scars. The data in the peer-reviewed literature confirm the benefits of a patient-centered, multidisciplinary approach that moves away from the reflexive "yes" or "no" for routine excision for a given pathologic diagnosis.
Answer: Surgical excision for atypical ductal hyperplasia (ADH) of the breast diagnosed by Mammotome may not always be necessary. According to the study by PUBMED:11113443, mild ADH found on Mammotome, not associated with a personal or family history of breast cancer, may not need excision if all calcifications have been removed. This study found that 15% of patients had malignancy at excision, and variables predicting malignancy included markedly atypical hyperplasia, incomplete removal of calcifications at Mammotome biopsy, a previous contralateral breast cancer, and a family history of breast cancer.
However, the underestimation of malignancy by Mammotome biopsy has been a concern. PUBMED:18455920 reported that even when mammographically visible calcifications had been completely removed, the frequency of underestimation of carcinoma still occurred in 17% of cases. This suggests that there is still a risk of malignancy even when the ADH appears to be fully sampled by Mammotome.
PUBMED:18837890 also indicated that a diagnosis of atypia on Mammotome biopsy warranted excision of the atypical area, although the underestimation rate for the presence of carcinoma remained low. Similarly, PUBMED:11148574 found that a diagnosis of atypia on Mammotome biopsy justified excision, and the underestimation rate for carcinoma was low.
PUBMED:35654997 emphasizes the benefits of a patient-centered, multidisciplinary approach that moves away from the reflexive decision for routine excision for a given pathologic diagnosis, suggesting that the decision to excise should be individualized based on various factors.
In conclusion, while surgical excision may not be necessary for all cases of ADH diagnosed by Mammotome, particularly when there is no family history of breast cancer and all calcifications have been removed, there is still a risk of malignancy being underestimated. Therefore, the decision to proceed with surgical excision should be made on a case-by-case basis, considering the patient's history, the extent of calcifications removed, and the degree of atypia.
Instruction: Treatment of non-melanoma skin cancer in North Sardinia: is there a need for biopsy?
Abstracts:
abstract_id: PUBMED:23592167
Effect of biopsy type on outcomes in the treatment of primary cutaneous melanoma. Background: Surgical excision remains the primary and only potentially curative treatment for melanoma. Although current guidelines recommend excisional biopsy as the technique of choice for evaluating lesions suspected of being primary melanomas, other biopsy types are commonly used. We sought to determine the impact of biopsy type (excisional, shave, or punch) on outcomes in melanoma.
Methods: A prospectively collected, institutional review board-approved database of primary clinically node-negative melanomas (stages cT1-4N0) was reviewed to determine the impact of biopsy type on T-staging accuracy, wide local excision (WLE) area (cm(2)), sentinel lymph node biopsy (SLNB) identification rates and results, tumor recurrence, and patient survival.
Results: Seven hundred nine patients were diagnosed by punch biopsy (23%), shave biopsy (34%), and excisional biopsy (43%). Shave biopsy results showed significantly more positive deep margins (P < .001). Both shave and punch biopsy results showed more positive peripheral margins (P < .001) and a higher risk of finding residual tumor (with resulting tumor upstaging) in the WLE (P < .001), compared with excisional biopsy. Punch biopsy resulted in a larger mean WLE area compared with shave and excisional biopsies (P = .030), and this result was sustained on multivariate analysis. SLNB accuracy was 98.5% and was not affected by biopsy type. Similarly, biopsy type did not confer survival advantage or impact tumor recurrence; the finding of residual tumor in the WLE impacted survival on univariate but not multivariate analysis.
Conclusions: Both shave and punch biopsies demonstrated a significant risk of finding residual tumor in the WLE, with pathologic upstaging of the WLE. Punch biopsy also led to a larger mean WLE area compared with other biopsy types. However, biopsy type did not impact SLNB accuracy or results, tumor recurrence, or disease-specific survival (DSS). Punch and shave biopsies, when used appropriately, should not be discouraged for the diagnosis of melanoma.
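The multivariate analyses mentioned above are typically Cox proportional hazards models. A minimal sketch with the lifelines library on a small invented dataset; the column names and values are hypothetical, not the study's variables:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: follow-up time, death indicator, biopsy-type dummies
# (excisional biopsy as reference), and Breslow thickness.
df = pd.DataFrame({
    "months_followup": [12, 30, 45, 8, 60, 22, 50, 18],
    "died":            [1,  0,  0,  1, 0,  1,  0,  0],
    "shave_biopsy":    [1,  0,  1,  0, 0,  1,  0,  1],
    "punch_biopsy":    [0,  1,  0,  1, 0,  0,  1,  0],
    "breslow_mm":      [1.2, 0.5, 0.9, 3.1, 0.4, 2.2, 0.7, 1.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs for each covariate
# Note: a dataset this small will trigger convergence warnings; real
# analyses need many more events per covariate.
```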
abstract_id: PUBMED:34575876
Liquid Biopsy in Melanoma: Significance in Diagnostics, Prediction and Treatment Monitoring. Liquid biopsy is a common term referring to circulating tumor cells and other biomarkers, such as circulating tumor DNA (ctDNA) or extracellular vesicles. Liquid biopsy presents a range of clinical advantages, such as the low invasiveness of the blood sample collection and continuous control of the tumor progression. In addition, this approach enables the mechanisms of drug resistance to be determined in various methods of cancer treatment, including immunotherapy. However, in the case of melanoma, the application of liquid biopsy in patient stratification and therapy needs further investigation. This review attempts to collect all of the relevant and recent information about circulating melanoma cells (CMCs) related to the context of malignant melanoma and immunotherapy. Furthermore, the biology of liquid biopsy analytes, including CMCs, ctDNA, mRNA and exosomes, as well as techniques for their detection and isolation, are also described. The available data support the notion that thoughtful selection of biomarkers and technologies for their detection can contribute to the development of precision medicine by increasing the efficacy of cancer diagnostics and treatment.
abstract_id: PUBMED:15715142
Treatment of non-melanoma skin cancer in North Sardinia: is there a need for biopsy? Unlabelled: Non-melanoma skin cancer (NMSC) is the most common type of skin cancer. Important controversial issues include the need for incisional biopsy, surgical margins, and the timing of follow-up.
Methods: A retrospective study was undertaken on 2544 lesions. Accuracy of diagnosis and the prevalence of incomplete excision were evaluated by comparing clinical and histological diagnoses using chi2 tests with Yates' correction. Kaplan-Meier recurrence curves were obtained.
Results: Lesions were correctly diagnosed in 94% of basal cell carcinomas (BCC) and in 69% of squamous cell carcinomas (SCC) (p < 0.001). Margins were positive on pathological examination in 6.6% of BCC and 6.8% of SCC. A significantly higher rate of incomplete excision was found for BCC of the face (p < 0.001). Kaplan-Meier survival curves showed different patterns for BCC and SCC.
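Kaplan-Meier curves of the kind reported here can be generated with the lifelines library; the follow-up times and recurrence indicators below are invented for illustration only:

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Invented follow-up (months) and recurrence events for the two histologies.
bcc_months, bcc_recurred = [6, 14, 20, 36, 48, 60, 60], [1, 0, 1, 0, 0, 0, 0]
scc_months, scc_recurred = [4, 9, 15, 24, 40, 60], [1, 1, 0, 1, 0, 0]

ax = plt.subplot(111)
kmf = KaplanMeierFitter()
kmf.fit(bcc_months, event_observed=bcc_recurred, label="BCC")
kmf.plot_survival_function(ax=ax)
kmf.fit(scc_months, event_observed=scc_recurred, label="SCC")
kmf.plot_survival_function(ax=ax)
ax.set_xlabel("Months since excision")
ax.set_ylabel("Recurrence-free proportion")
plt.show()
```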
Conclusions: On the basis of our data, if the clinical diagnosis is BCC, excision and reconstruction may be undertaken without an incisional biopsy. If the clinical diagnosis is SCC, however, it is advisable to consider an incisional biopsy before definitive surgical treatment.
abstract_id: PUBMED:33098016
Association of surgical interval and survival among hospital and non-hospital based patients with melanoma in North Carolina. Surgical excision is important for melanoma treatment. Delays in surgical excision after diagnosis of melanoma have been linked to decreased survival in hospital-based cohorts. This study aimed to quantify the association between the timeliness of surgical excision and overall survival in patients diagnosed with melanoma in hospital- and non-hospital-based settings, using a retrospective cohort of patients with stage 0-III melanoma and data linking the North Carolina Central Cancer Registry to Medicare, Medicaid, and private health insurance plan claims across the state. We identified 6,496 patients diagnosed between 2004 and 2012 with follow-up through 2017. We categorized the time from diagnostic biopsy to surgical excision as <6 weeks after diagnosis, 6 weeks to 90 days after diagnosis, and >90 days after melanoma diagnosis. Multivariable Cox regression was used to estimate differences in survival probabilities. Five-year overall survival was lower for those with time to surgery over 90 days (78.6%) than for those treated within 6 weeks (86%). This difference appeared greater for patients with stage 1 melanoma. This study was retrospective, included one state, and could not assess melanoma-specific mortality. Surgical timeliness may affect overall survival in patients with melanoma; timely surgery should be encouraged.
abstract_id: PUBMED:32663337
Effect of Sentinel Lymph Node Biopsy and LVI on Merkel Cell Carcinoma Prognosis and Treatment. Objective: Prognostic factors and optimal treatment approaches for Merkel cell carcinoma (MCC) remain uncertain. This study evaluated the influences of sentinel lymph node (SLN) biopsy and lymphovascular invasion (LVI) on treatment planning and prognosis.
Study Design: Retrospective cohort study.
Methods: Stage 1 to 3 MCC patients treated 2005 to 2018. Predictors of nodal radiation were tested using logistic regression. Predictors of recurrence-free, disease-specific, and overall survival were tested in Cox proportional hazard models.
Results: Of 122 patients, 99 were without clinically apparent nodal metastases. Of these, 76 (77%) underwent excision and SLN biopsy; 29% had metastasis in SLNs, including 20% of MCCs 1 cm or less. Primary tumor diameter, site, patient age, gender, and immunosuppressed status were not significantly associated with an involved SLN. Among patients who underwent SLN biopsy, 13 of 21 (62%) MCCs with LVI had cancer in SLNs compared with 14 of 44 (25.5%) without LVI (P = .003). Although local radiation was common, nodal radiation was infrequently employed in SLN-negative (pathologic N0) patients (21.8% vs. 76.2% for patients with SLN metastases, P = .0001). Survival of patients with positive SLNs was unfavorable, regardless of completion lymphadenectomy and/or adjuvant radiation. After accounting for tumor (T) and node (N) classification, age, immunosuppression, and primary site, a positive SLN and LVI were independently associated with worse survival (LVI/recurrence-free survival [RFS]: hazard ratio [HR] 2.3 [1.04-5], P = .04; LVI/disease-specific survival [DSS]: HR 5.2 [1.8-15], P = .007; N1a vs. pN0/RFS: HR 3.6 [1.42-9.3], P = .007; DSS: HR 5.0 [1.3-19], P = .17).
Conclusion: SLN biopsy assists in risk stratification and radiation treatment planning in MCC. LVI and disease in SLNs, independently associated with worse survival, constitute markers of high-risk disease warranting consideration for investigational studies.
Level of Evidence: III. Laryngoscope, 131:E828-E835, 2021.
abstract_id: PUBMED:18795923
Shave biopsy without local anaesthetic to diagnose basal cell carcinoma and other skin tumours prior to definitive treatment: analysis of 109 lesions. Background: Diagnostic biopsy of basal cell carcinoma and other skin tumours may be necessary prior to definitive treatment.
Objectives: To assess whether shave biopsy sampling of tumours without local anaesthetic can provide adequate tissue to make an accurate histological diagnosis, and to determine whether any discomfort is associated with the technique.
Methods: One hundred and nine lesions from 99 patients were sampled by shave biopsy without local anaesthetic. Any discomfort associated with the procedure, and the adequacy of the histological specimen, were documented. The pathology diagnosis was also compared against the clinically suspected diagnosis.
Results: In 108 of the 109 lesions sampled, sufficient tissue was obtained to make an accurate histological diagnosis. In only six of the 109 procedures was any discomfort reported and in all cases this was rated as minor. A high correlation was found between histological diagnosis and initial clinical suspicion.
Conclusions: Shave biopsy without local anaesthetic is a simple, relatively pain-free method of obtaining tissue samples for histological diagnosis in appropriate tumours.
abstract_id: PUBMED:32553682
Skin biopsy and skin cancer treatment use in the Medicare population, 1993 to 2016. Background: Skin biopsies are increasing at a rapid rate, and some may be unnecessary. Although skin cancer incidence is rising, biopsy accuracy varies between dermatologists and advanced practice professionals (APPs). A comparison of Current Procedural Terminology code (American Medical Association, Chicago, IL) use for skin biopsy and skin cancer treatment over 18 years, and a comparison of provider types, is needed. Excess skin biopsies increase health care costs and patient morbidity.
Objective: To examine changes in skin biopsy and skin cancer treatment utilization rates per year in the Medicare fee-for-service (FFS) population and to compare skin biopsy utilization rates between dermatologists and APPs.
Methods: Retrospective cross-sectional study of Medicare FFS paid claims using the Centers for Medicare and Medicaid Services Physician Claims databases. We calculated the number of skin biopsies and skin cancer treatments in the Medicare FFS population from 1993 to 2016, and percentage use by provider type from 2001 to 2016. Our primary outcome measurements were the number of skin biopsies and skin cancer treatments per 1000 Medicare FFS beneficiaries per year and the number of additional skin biopsies per 1000 Medicare FFS beneficiaries per year, or the difference in the number of skin biopsies and number of skin cancer treatments per 1000 Medicare FFS beneficiaries. Our secondary outcome measurements were the skin biopsy-to-skin cancer treatment ratio and the number of procedures per 1000 Medicare FFS beneficiaries per year by provider type.
Results: After adjusting for the number of enrollees in the Medicare FFS population from 1993 to 2016, skin biopsies per 1000 Medicare FFS beneficiaries increased 153% (from 39.31 to 99.33), and skin cancer treatments per 1000 Medicare FFS beneficiaries increased 39% (from 34.67 to 48.26). Between 1993 and 2016, the skin biopsy-to-skin cancer treatment ratio increased 81% (from 1.134 to 2.058), and the number of additional biopsies per 1000 Medicare FFS beneficiaries increased 1001% (from 4.638 to 51.072) between 1993 and 2016. Utilization data by provider type is available from 2001 to 2016. The number of skin biopsies per 1000 Medicare beneficiaries performed by APPs increased from 0.82 to 17.19 or 1996% (nurse practitioners, 2211%; physician assistants, 1916%) and the number of biopsies by dermatologists increased by 41% from 53.98 to 76.17.
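The percentage changes and ratios reported above follow directly from the per-1000 rates given in the abstract; a quick sketch verifying the arithmetic, using only the abstract's own figures (small differences from the reported 4.638 and 51.072 reflect rounding in the published rates):

```python
def pct_change(old, new):
    return (new - old) / old * 100

biopsies_1993, biopsies_2016 = 39.31, 99.33
treatments_1993, treatments_2016 = 34.67, 48.26

print(f"biopsy change: {pct_change(biopsies_1993, biopsies_2016):.0f}%")         # ~153%
print(f"treatment change: {pct_change(treatments_1993, treatments_2016):.0f}%")  # ~39%

# Biopsy-to-treatment ratio and "additional biopsies" per 1000 beneficiaries.
print(f"ratio: {biopsies_1993 / treatments_1993:.3f} -> "
      f"{biopsies_2016 / treatments_2016:.3f}")      # 1.134 -> 2.058
print(f"additional biopsies: {biopsies_1993 - treatments_1993:.2f} -> "
      f"{biopsies_2016 - treatments_2016:.2f}")      # ~4.64 -> ~51.07
```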
Limitations: Medicare claims data do not provide specific information regarding skin biopsy or skin cancer treatment use.
Conclusion: The number of skin biopsies has risen 153% since 1993, while the number of skin cancer treatments has only increased 39%. Our data highlight the rise of biopsy use and the increase in biopsies that do not result in skin cancer diagnosis or treatment. This suggests APPs may be responsible for increasing the cost of skin cancer management by biopsying significantly more benign lesions than dermatologists.
abstract_id: PUBMED:28857446
Diagnostic accuracy of pre-treatment biopsy for grading cutaneous mast cell tumours in dogs. Mast cell tumours (MCTs) are common tumours of the canine skin, and are estimated to represent up to 20% of all skin tumours in dogs. Tumour grade has a major impact on the incidence of local recurrence and metastatic potential. In addition to helping the clinician with surgical planning, knowledge of the tumour grade also assists in proper prognostication and client education. For pre-treatment biopsies to be useful, there must be a high level of concordance between the histopathological grade obtained from the pre-treatment biopsy and the actual histopathological grade from the excisional biopsy. The aim of this study was to determine concordance of tumour grade between various biopsy techniques (wedge, punch, needle core) and the "gold standard" excisional biopsy method. We found an overall concordance rate of 96% based on the Patnaik grading system, and an overall concordance rate of 92% based on the Kiupel grading system. The accuracy of the various biopsy techniques (wedge, punch and needle core) when compared with excisional biopsy was 92%, 100% and 100%, respectively, based on the Patnaik grading system, and 90%, 95% and 100%, respectively, based on the Kiupel grading system. Of the cases with discordant results, the pre-treatment biopsies tended to underestimate the grade of the tumour. Based on these results, we conclude that pre-treatment biopsies are sufficiently accurate for differentiating low-grade from high-grade MCTs, regardless of biopsy technique or tumour location.
abstract_id: PUBMED:23269759
Actinic keratosis: when is a skin biopsy necessary? The diagnosis of actinic keratosis is generally established by clinical examination. However, actinic keratosis can progress to invasive squamous cell carcinoma, so biopsy and histological examination may be needed. The risk of progression to invasive carcinoma is not well established; estimates in the literature range from 0.025% to 20% for a given lesion. Some clinical features suggest transformation into an invasive carcinoma; they include inflammation, induration, size >1 cm, rapid enlargement of the lesion, and bleeding or ulceration, and they should prompt the clinician to perform a skin biopsy. A biopsy may also be necessary if a lesion fails to respond to well-conducted treatment. Finally, when the clinical appearance is atypical, a biopsy may be useful to establish the correct diagnosis.
abstract_id: PUBMED:36739830
Clinical Impact and Accuracy of Shave Biopsy for Initial Diagnosis of Cutaneous Melanoma. Introduction: Effective treatment of malignant melanoma depends on accurate histopathological staging of preoperative biopsy specimens. While narrow excision is the gold standard for melanoma diagnosis, superficial shave biopsy has become the method preferred by dermatologists but may transect the lesion and result in inaccurate Breslow thickness assessment. This retrospective cohort study evaluated the initial biopsy method used for diagnosis of cutaneous melanoma and the indication for reoperation based on inaccurate initial T-staging.
Methods: We retrospectively analyzed consecutive patients referred to the Medical College of Wisconsin, a tertiary cancer center, with a diagnosis of primary cutaneous melanoma. Adult patients seen between 2015 and 2018 were included. Fisher's exact test was used to assess the association between method of initial biopsy and need for unplanned reoperation.
Results: Three hundred twenty-three patients with cutaneous melanoma of the head and neck (H&N, n = 101, 31%), trunk (n = 90, 28%), upper extremity (n = 84, 26%), and lower extremity (n = 48, 15%) were analyzed. Median Breslow thickness was 0.54 mm (interquartile range = 0.65). Shave biopsy was the method of initial biopsy in 244 (76%), excision in 23 (7%), and punch biopsy in 56 (17%). Thirty-nine (33%) shave biopsies had a positive deep margin, as did seven (23%) punch biopsies and 0 excisional biopsies. Residual melanoma at definitive excision was found in 131 (42.5%) of all surgical specimens: 95 (40.6%) shave biopsy patients, 32 (60.4%) punch biopsy patients, and four (19.0%) excision biopsy patients. Recommendations for excision margin or sentinel lymph node biopsy changed in 15 (6%) shave biopsy patients and five (9%) punch biopsy patients.
Conclusions: Shave biopsy is the most frequent method of diagnosis of cutaneous melanoma in the modern era. While shave and punch biopsies may underestimate true T-stage, there was no difference in need for reoperation due to T-upstaging based on initial biopsy type, supporting current diagnostic practices. Partial biopsies can thus be used to guide appropriate treatment and definitive wide local excision when adjusting for understaging.
Answer: The retrospective study on the treatment of non-melanoma skin cancer (NMSC) in North Sardinia evaluated the accuracy of diagnosis and prevalence of incomplete excision by comparing clinical and histological diagnosis. The study found that lesions were correctly diagnosed in 94% of basal cell carcinomas (BCC) and in 69% of squamous cell carcinomas (SCC). Positive margins on pathological examination were 6.6% for BCC and 6.8% for SCC, with a significant difference for incomplete excision found for BCC of the face. Based on these data, the study suggests that if the clinical diagnosis is BCC, excision and reconstruction may be undertaken without an incisional biopsy. However, if the clinical diagnosis is SCC, it is advisable to consider an incisional biopsy before definitive surgical treatment (PUBMED:15715142).
Instruction: Are lopinavir and efavirenz serum concentrations in HIV-infected children in the therapeutic range in clinical practice?
Abstracts:
abstract_id: PUBMED:24225343
Are lopinavir and efavirenz serum concentrations in HIV-infected children in the therapeutic range in clinical practice? Background: In antiretroviral treatment, the role of therapeutic drug monitoring via measurement of serum levels remains unclear, especially in children.
Aim: To quantify exposure to LPV and EFV in children receiving therapy in a routine clinical setting in order to identify risk factors associated with inadequate drug exposure.
Method: A prospective study was conducted in a routine clinical setting in Tygerberg Children's Hospital, South Africa. A total of 53 random serum levels were analyzed. Serum concentrations were determined by an established high-performance liquid chromatography method.
Results: Of 53 HIV-infected children treated with lopinavir (n = 29, median age 1.83 years) or efavirenz (n = 24, median age 9.3 years), 12 showed serum levels outside the therapeutic range (efavirenz) or below Cmin (lopinavir). Low bodyweight, rifampicin co-treatment, and significant comorbidity were potential risk factors for inadequate drug exposure.
Conclusion: These findings, together with previous studies, indicate that therapeutic drug monitoring can improve the management of antiretroviral therapy in children at risk.
abstract_id: PUBMED:28718515
Impact of Single Nucleotide Polymorphisms on Plasma Concentrations of Efavirenz and Lopinavir/Ritonavir in Chinese Children Infected with the Human Immunodeficiency Virus. Single nucleotide polymorphisms (SNPs) in the genes that encode the cytochrome P450 (CYP) drug metabolizing enzymes and drug transporters have been reported to influence antiretroviral drug pharmacokinetics. Although primarily metabolized by CYP2B6 and -3A, efavirenz (EFV) and lopinavir/ritonavir (LPV/r) are substrates of P-glycoprotein and the solute carrier organic (SLCO) anion transporter, respectively. We investigated the association between SNPs and efavirenz (EFV) or lopinavir/ritonavir (LPV/r) concentrations in Chinese children infected with the human immunodeficiency virus (HIV). Genotyping was performed on CYP2B6 516G→T, -1459C→T, and -983T→C, ABCB1 3435C→T, and SLCO1B1 521T→C in 229 HIV-infected Chinese pediatric patients (age range 4.0 to 17.5 years). Plasma concentrations of EFV and LPV/r were measured using a validated high-performance liquid chromatography method coupled with mass spectrometry among 39 and 69 children who received EFV- and LPV/r-containing regimens, respectively. The frequencies of CYP2B6 516G→T in the study participants were 71%, 25%, and 4% for the G/G, G/T, and T/T genotypes, respectively. Among the children under therapeutic drug monitoring, 21% and 39% experienced EFV and LPV concentrations, respectively, above the upper threshold of the therapeutic window. CYP2B6 516G→T was significantly associated with EFV concentrations (p<0.001). Older children (older than 10 years) were more likely to have significantly higher EFV concentrations than the younger ones (p=0.0314). CYP2B6 genotyping and EFV concentration monitoring may help optimize antiretroviral therapy in pediatric patients who initiate an EFV-based regimen.
abstract_id: PUBMED:15509183
Practical guidelines to interpret plasma concentrations of antiretroviral drugs. Several relationships have been reported between antiretroviral drug concentrations and both treatment efficacy and toxicity. Therefore, therapeutic drug monitoring (TDM) may be a valuable tool in improving the treatment of HIV-1-infected patients in daily practice. In this regard, several measures of exposure have been studied, e.g. trough and maximum concentrations, concentration ratios and the inhibitory quotient. However, it has not been unambiguously established which pharmacokinetic parameter should be monitored to maintain optimal viral suppression. Each pharmacokinetic parameter has its pros and cons. Many factors can affect the pharmacokinetics of antiretroviral agents, resulting in variability in plasma concentrations between and within patients. Therefore, plasma concentrations should be considered on several occasions. In addition, the interpretation of the drug concentration of a patient should be performed on an individual basis, taking into account the clinical condition of the patient. Important factors here are viral load, immunological status, occurrence of adverse events, resistance pattern and comedication. In spite of the described constraints, the aim of this review is to provide a practical guide for TDM of antiretroviral agents. This article outlines pharmacokinetic target values for the HIV protease inhibitors amprenavir, atazanavir, indinavir, lopinavir, nelfinavir, ritonavir and saquinavir, and the non-nucleoside reverse transcriptase inhibitors efavirenz and nevirapine. Detailed advice is provided on how to interpret the results of TDM of these drugs.
abstract_id: PUBMED:28483965
Pharmacokinetics of Efavirenz at a High Dose of 25 Milligrams per Kilogram per Day in Children 2 to 3 Years Old. The MONOD ANRS 12206 trial was designed to assess simplification of a successful lopinavir (LPV)-based antiretroviral treatment in HIV-infected children younger than 3 years of age using efavirenz (EFV; 25 mg/kg of body weight/day) to preserve the class of protease inhibitors for children in that age group. In this substudy, EFV concentrations were measured to check the consistency of an EFV dose of 25 mg/kg and to compare it with the 2016 FDA recommended dose. Fifty-two children underwent blood sampling for pharmacokinetic study at 6 months and 12 months after switching to EFV. We applied a Bayesian approach to derive EFV pharmacokinetic parameters using the nonlinear mixed-effect modeling (NONMEM) program. The proportion of midinterval concentrations 12 h after drug intake (C12 h) corresponding to the EFV therapeutic pharmacokinetic thresholds (1 to 4 mg/liter) was assessed according to different dose regimens (25 mg/kg in the MONOD study versus the 2016 FDA recommended dose). With both the 25 mg/kg/day dose and the 2016 FDA recommended EFV dose, simulations showed that the majority of C12 h values were within the therapeutic range (62.6% versus 62.8%). However, there were more children underexposed with the 2016 FDA recommended dose (11.6% versus 1.2%). Conversely, there were more concentrations above the threshold of toxicity with the 25 mg/kg dose (36.2% versus 25.6%), with C12 h values of up to 15 mg/liter. Only 1 of 52 children was switched back to LPV because of persistent sleeping disorders, but his C12 h value was within therapeutic ranges. A high EFV dose of 25 mg/kg per day in children under 3 years old achieved satisfactory therapeutic levels. However, the 2016 FDA recommended EFV dose appeared to provide a more acceptable safety profile. (This study has been registered at ClinicalTrials.gov under identifier NCT01127204.)
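The C12h assessment described above amounts to predicting a concentration 12 hours post-dose and comparing it with the 1 to 4 mg/liter window. A simplified single-dose, one-compartment sketch; the PK parameters and patient weight below are invented, not the study's Bayesian NONMEM estimates:

```python
import math

def conc_oral_1cpt(dose_mg, t_h, ka, ke, vd_l, f=1.0):
    """Plasma concentration (mg/L) at time t_h after a single oral dose."""
    return (f * dose_mg * ka / (vd_l * (ka - ke))) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h)
    )

# Hypothetical child: 12 kg at 25 mg/kg/day -> 300 mg once-daily dose.
c12 = conc_oral_1cpt(dose_mg=300, t_h=12, ka=0.6, ke=0.07, vd_l=60)
in_window = 1.0 <= c12 <= 4.0
print(f"C12h = {c12:.2f} mg/L, within 1-4 mg/L window: {in_window}")
```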
abstract_id: PUBMED:18641538
Determination of unbound antiretroviral drug concentrations by a modified ultrafiltration method reveals high variability in the free fraction. Total plasma concentrations are used for therapeutic drug monitoring of antiretroviral drugs, whereas antiviral activity is expected to depend on unbound concentrations. The determination of free (unbound) concentrations by ultrafiltration may be flawed by the irreversible adsorption of many drugs onto the membrane filters and plastic components of the device. The authors describe a modified ultrafiltration method enabling the accurate measurement of unbound concentrations of 10 antiretroviral drugs by liquid chromatography-tandem mass spectrometry, which circumvents the problem of loss by adsorption in the early ultrafiltration fractions. The method was applied to assess the variability of free fractions of antiretroviral drugs during routine therapeutic drug monitoring in 144 patients with HIV. In in vitro experiments, ultrafiltrate collected in four fractions (0-8, 8-16, 16-24, and 24-30 minutes) gave much lower and more variable free drug concentrations in the first ultrafiltrate fraction than in the last three fractions for lopinavir, nelfinavir, saquinavir, tipranavir, and efavirenz. In the last two fractions, free concentrations remained constant, indicating saturable adsorption. The adsorption was modest for indinavir, amprenavir, and ritonavir, and negligible for atazanavir and nevirapine. Free fraction values obtained with this modified ultrafiltration method reveal substantial interindividual variability, suggesting that monitoring unbound antiretroviral drug concentrations may increase its clinical usefulness, especially for lopinavir, saquinavir, and efavirenz.
abstract_id: PUBMED:17503748
A computer-based system to aid in the interpretation of plasma concentrations of antiretrovirals for therapeutic drug monitoring. Objectives: To develop a computer-based system for modelling and interpreting plasma antiretroviral concentrations for therapeutic drug monitoring (TDM).
Methods: Data were extracted from a prospective TDM study of 199 HIV-infected patients (CCTG 578). Lopinavir (LPV) and efavirenz (EFV) pharmacokinetic (PK) parameters were modelled using a Bayesian method and interpreted by an expert committee of HIV specialists and pharmacologists who made TDM recommendations. These PK models and recommendations formed the knowledge base to develop an artificial intelligence (AI) system that could estimate drug exposure, interpret PK data and generate TDM recommendations. The modelled PK exposures and expert committee TDM recommendations were considered optimum and used to validate results obtained by the AI system.
Results: A group of patients, 67 on LPV, 46 on EFV and three on both drugs, were included in this analysis. Correlations between the AI estimates and modelled values were high for LPV and EFV estimated trough and 4 h post-dose concentrations (r > 0.79 for all comparisons; P < 0.0001). Although trough concentrations were similar, significant differences were seen for mean predicted 4 h concentrations for EFV (4.16 microg/ml versus 3.89 microg/ml; P = 0.02) and LPV (7.99 microg/ml versus 8.79 microg/ml; P < 0.001). The AI and expert committee TDM recommendations agreed in 53 out of 69 LPV cases (κ = 0.53; P < 0.001) and 47 out of 49 EFV cases (κ = 0.91; P < 0.001).
Conclusion: The AI system successfully estimated LPV and EFV trough concentrations and achieved good agreement with expert committee TDM recommendations for EFV- and LPV-treated patients.
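The agreement statistic used above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with scikit-learn; the recommendation labels are invented stand-ins, not the CCTG 578 data:

```python
from sklearn.metrics import cohen_kappa_score

expert = ["increase", "hold", "hold", "decrease", "hold", "increase"]
ai     = ["increase", "hold", "hold", "hold",     "hold", "increase"]

kappa = cohen_kappa_score(expert, ai)
raw_agreement = sum(e == a for e, a in zip(expert, ai)) / len(expert)
print(f"raw agreement={raw_agreement:.2f}, kappa={kappa:.2f}")
```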
abstract_id: PUBMED:32921396
Effect of nevirapine, efavirenz and lopinavir/ritonavir on the therapeutic concentration and toxicity of lumefantrine in people living with HIV at Lagos University Teaching Hospital, Nigeria. Patients living with HIV in malaria-endemic regions may experience clinically significant drug interactions between antiretroviral and antimalarial drugs. Effects of nevirapine (NVP), efavirenz (EFV) and lopinavir/ritonavir (LPVr) on lumefantrine (LM) therapeutic concentrations and toxicity were evaluated. In a four-arm parallel study design, the blood samples of 40 participants, treated with artemether/lumefantrine (AL), were analysed. Lumefantrine Cmax was increased by 32% (p = 0.012) and 325% (p < 0.0001) in the NVP and LPVr arms, respectively, but decreased by 62% (p < 0.0001) in the EFV arm. AUC of LM was increased by 50% (p = 0.27) and 328% (p < 0.0001) in the NVP and LPVr arms, respectively, but decreased by 30% (p = 0.019) in the EFV arm. Median day 7 LM concentration was less than 280 ng/mL in the EFV arm (239 ng/mL) but higher in the control (290 ng/mL), NVP (369 ng/mL, p = 0.004) and LPVr (1331 ng/mL, p < 0.0001) arms. There were no clinically relevant toxicities or adverse events in either the control or test arms. Artemether/lumefantrine is safe and effective for the treatment of malaria in PLWHA taking NVP- or LPVr-based ART regimens, but not EFV-based regimens.
abstract_id: PUBMED:12600653
Effect of therapeutic drug monitoring on outcome in antiretroviral experienced HIV-infected individuals. Background: The role of therapeutic drug monitoring (TDM) in the routine management of HIV-infected individuals is still unclear, largely due to a lack of basic data regarding specific drug concentrations and how they correlate with maximal effect and minimal toxicity within given populations. Nevertheless, it has a potentially important role to play in the management of HIV-infected patients, with the aim of limiting toxicity, optimising antiviral effect and decreasing virological failure and emergence of viral resistance.
Objectives: To measure serum concentrations of specific antiretroviral drugs in individuals changing antiretroviral therapy and assess relationship to virological response.
Study Design: A prospective, non-randomised, 24-week study of 40 antiretroviral-experienced HIV-infected patients. Subjects had failed their previous antiretroviral regimen and were beginning new regimens based on genotypic testing. Serum antiretroviral concentrations and virological response were measured after initiation of treatment.
Results: There was a significant correlation between higher concentrations of lopinavir and efavirenz and better virological outcome. This was not seen with amprenavir.
Conclusions: Use of TDM in this setting helps predict virological response to therapy. Optimal use of TDM would require dose adjustment on the basis of a TDM level. Further research is necessary to enable this practice to become routine in the management of HIV-infected patients.
abstract_id: PUBMED:35485334
Comparison of three galenic forms of lamivudine in young West African children living with Human Immunodeficiency Virus. Background: Few pharmacokinetic data have been reported on dispersible tablets despite their increasing use. One hundred fifty HIV-infected children receiving lamivudine were enrolled in the MONOD ANRS 12206 trial. Three galenic forms were administered: a liquid formulation, a tablet form and a dispersible scored tablet.
Method: HIV-infected children <4 years old were enrolled in the MONOD ANRS 12206 trial, which was designed to assess the simplification of a successful 12-month lopinavir-based antiretroviral treatment by switching to efavirenz. Lamivudine plasma concentrations were analysed using a nonlinear mixed-effects modelling approach.
Results: One hundred and fifty children (median (IQR) age 2.5 (1.9-3.2) years; weight 11.1 (9.5-12.5) kg) were included in this study. Over the study period, 79 children received only the syrup form, 29 switched from the syrup to the 3TC/AZT tablet form, 36 from the syrup to the orodispersible ABC/3TC form, and two from the 3TC/AZT form to the orodispersible ABC/3TC form. The 630 lamivudine concentrations were best described by an allometrically scaled two-compartment model. Galenic form had no significant effect on 3TC pharmacokinetics.
Conclusion: This trial provided an opportunity to compare three galenic forms (liquid formulation, tablet form and dispersible scored tablet) of lamivudine in the target population of young HIV-1-infected children. Galenic form had no significant effect on lamivudine pharmacokinetics.
abstract_id: PUBMED:11901656
Efficacy of highly active antiretroviral therapy in HIV-1 infected children. Although the reduction in HIV-1-related deaths with highly active antiretroviral therapy (HAART) is similar in adults and children, the extent of the changes in two important surrogate markers, HIV-1 RNA levels and CD4+ T cell counts, differs widely. In most paediatric studies virological response rates to HAART are inferior to those in adults. This review provides an overview of the paediatric clinical studies using HAART and seeks to improve the understanding of factors that may contribute to success or failure of HAART in children. An overview of all current articles on paediatric clinical trials using HAART is provided. Twenty-three papers were available. HIV-1 RNA loads and CD4+ T cell counts were used as primary outcome measures. Virological response rates were highly variable, both among the different antiretroviral drugs and among different studies using the same medication. Four studies in which dosages of the administered protease inhibitor (PI) were adjusted after pharmacokinetic evaluation had superior virological response rates compared with those in which fixed dosages were used. Immunological response rates were more uniform than virological responses. In almost all studies, increases in CD4+ T cell counts were reported, independent of the extent of the virological response. Side-effects of HAART were generally mild, transient, and of gastrointestinal origin. Significant percentages of patients with serum lipid abnormalities were reported in three paediatric studies. However, signs of clinical lipodystrophy were not observed. The inferior virological response rates reported in HIV-1-infected children treated with HAART reflect the challenges encountered in the treatment of these children. Difficulties with adherence and with the pharmacokinetics of PIs in children require an intensive, child-adjusted approach. A practical approach to therapy in institutions without tertiary care facilities may be induction therapy with a lopinavir-containing regimen (which does not require therapeutic drug monitoring) to reduce high viral load levels, followed by an easily tolerated maintenance regimen, for example one containing abacavir or nevirapine.
Answer: In clinical practice, the serum concentrations of lopinavir and efavirenz in HIV-infected children can vary and may not always be within the therapeutic range. A study conducted at Tygerberg Children's Hospital in South Africa found that among 53 HIV-infected children treated with lopinavir or efavirenz, 12 showed serum levels outside the therapeutic range for efavirenz or below the minimum concentration (Cmin) for lopinavir. Potential risk factors for inadequate drug exposure included low bodyweight, rifampicin co-treatment, and significant comorbidity (PUBMED:24225343).
Another study investigating the impact of single nucleotide polymorphisms (SNPs) on plasma concentrations of efavirenz and lopinavir/ritonavir in Chinese children infected with HIV found that 21% and 39% of children experienced efavirenz and lopinavir concentrations, respectively, above the upper threshold of the therapeutic window. The study highlighted the significant association between CYP2B6 516G→T polymorphism and efavirenz concentrations, suggesting that genotyping and drug concentration monitoring could help optimize antiretroviral therapy in pediatric patients (PUBMED:28718515).
Furthermore, a study on the pharmacokinetics of efavirenz at a high dose of 25 mg/kg per day in children 2 to 3 years old indicated that while the majority of midinterval concentrations were within the therapeutic range, there were more children underexposed with the 2016 FDA recommended dose compared to the 25 mg/kg/day dose. Conversely, there were more concentrations above the threshold of toxicity with the 25 mg/kg dose (PUBMED:28483965).
These findings suggest that therapeutic drug monitoring (TDM) can be a valuable tool in managing antiretroviral therapy in children, as it can help in adjusting doses to maintain optimal viral suppression and limit toxicity (PUBMED:15509183). However, the variability in plasma concentrations between and within patients due to factors such as age, weight, genetic polymorphisms, and co-medications makes it challenging to consistently achieve therapeutic serum concentrations in clinical practice. |
Instruction: Liver resection for breast cancer metastasis: does it improve survival?
Abstracts:
abstract_id: PUBMED:24719119
Does liver resection provide long-term survival benefits for breast cancer patients with liver metastasis? A single hospital experience. Purpose: Liver resection for colorectal liver metastasis is widely accepted and has been considered a safe and effective therapeutic option. However, the role of liver resection in breast cancer with liver metastasis is still controversial. Therefore, we reviewed the outcomes of liver resection in breast cancer patients with liver metastases from a single hospital's experience.
Materials And Methods: Between January 1991 and December 2006, 2176 patients underwent breast cancer surgery in Gangnam Severance Hospital. Among these patients, 110 cases of liver metastases were observed during follow-up, and 13 of these patients underwent liver resection when an R0 resection was considered potentially feasible.
Results: The median time interval between initial breast cancer and detection of liver metastasis was 62.5 months (range, 13-121 months). The 1-year and 3-year overall survival rates of the 13 patients with liver resection were 83.1% and 49.2%, respectively. The 1-year and 3-year overall survival rates of patients without extrahepatic metastasis were 83.3% and 66.7% and those of patients with extrahepatic metastasis were 80.0% and 0.0%, respectively (p=0.001).
Conclusion: Liver resection for metastatic breast cancer results in improved patient survival, particularly in patients with solitary liver metastasis and good general condition.
abstract_id: PUBMED:35322733
Hepatic resection for breast cancer related liver metastases: A single institution experience. Background & Objective: Liver resection for breast cancer liver metastases is becoming a more widely accepted therapeutic option for selected groups of patients. The aim of this study was to describe the outcomes of patients undergoing liver resection for breast cancer-related liver metastases and identify any variables associated with recurrence or survival.
Methods: A retrospective review of a prospectively maintained database was undertaken for the 12 year period between 2009 and 2021. Clinicopathological, treatment, intraoperative, recurrence, survival and follow-up data were collected on all patients. Kaplan-Meier methods, the log-rank test and Cox proportional hazards regression analysis were used to identify variables that were associated with recurrence and survival.
Results: A total of 20 patients underwent 21 liver resections over the 12-year period. There were no deaths within 30 days of surgery, and operative morbidity occurred in 23.8% of cases. The median local recurrence-free survival and disease-free survival times were both 50 months, while the 5-year overall survival rate was 65%. The presence of extrahepatic metastases was associated with a decreased time to local recurrence (p < 0.01) and worse overall survival (p = 0.02).
Conclusions: This study has demonstrated that liver resection for breast cancer-related liver metastases is feasible, safe and associated with prolonged disease free and overall survival in selected patients. It is likely that this option will be offered to more patients going forward, however, the difficulty lies in selecting out those who will benefit from liver resection particularly given the increasing number of systemic treatments and local ablative methods available that offer good long-term results.
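The survival analyses in series like this one follow a standard Kaplan-Meier/log-rank workflow. A minimal sketch of that workflow with the `lifelines` package, using entirely invented follow-up data rather than the study's patients:

```python
# Kaplan-Meier estimation and log-rank comparison on invented toy data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Months of follow-up and death indicator (1 = died, 0 = censored)
no_ehm_t = [12, 50, 60, 24, 70, 36, 55]  # no extrahepatic metastases (toy)
no_ehm_e = [0, 1, 0, 1, 0, 0, 1]
ehm_t = [6, 14, 20, 9, 30]               # extrahepatic metastases present (toy)
ehm_e = [1, 1, 1, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(no_ehm_t, event_observed=no_ehm_e, label="no EHM")
print(kmf.survival_function_at_times([12, 36]))  # S(t) at 1 and 3 years

result = logrank_test(no_ehm_t, ehm_t,
                      event_observed_A=no_ehm_e, event_observed_B=ehm_e)
print(f"log-rank p = {result.p_value:.3f}")
```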
abstract_id: PUBMED:29393171
Ten-Year Survival after Liver Resection for Breast Metastases: A Single-Center Experience. Background: The role of liver resection for metastatic breast carcinoma is still debated.
Methods: Fifty-one resected patients were reviewed. All patients received adjuvant chemotherapy after resection of the primary tumor. Clinicopathological characteristics and immunohistochemical expression of estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and Ki67 were evaluated.
Results: The median number of metastases was 2; single metastases were present in 24 (47%) patients. The median tumor diameter was 4 cm. Major hepatectomies were performed in 31 (61%) patients. Postoperative mortality was zero. Postoperative morbidity was 13.7%. The 1-, 5-, and 10-year survival rates were 92, 36, and 16%, respectively. Eleven (21.6%) patients survived longer than 5 years and 8.9% are alive without recurrence 10 years after surgery. On univariate analysis, tumor diameter, lymph node status, PR status, and triple-positive receptor status (ER+/PR+/HER2+) were significantly related to survival. On multivariate analysis, tumor diameter, PR status, and triple-negative status were significantly related to the long-term outcome.
Conclusion: Liver resection seems to be a safe and effective treatment for metastases from breast cancer, and encouraging long-term survival can be obtained with acceptable risk in selected patients. Tumors less than 5 cm and positive hormone receptor status are the best prognostic factors.
abstract_id: PUBMED:27764727
The safety and effectiveness of liver resection for breast cancer liver metastases: A systematic review. Breast cancer liver metastases have traditionally been considered incurable and any treatment given therefore palliative. Liver resections for breast cancer metastases are being performed, despite there being no robust evidence for which patients benefit. This review aims to determine the safety and effectiveness of liver resection for breast cancer metastases. A systematic literature review was performed and resulted in 33 papers being assembled for analysis. All papers were case series and data extracted was heterogeneous so a meta-analysis was not possible. Safety outcomes were mortality and morbidity (in hospital and 30-day). Effectiveness outcomes were local recurrence, re-hepatectomy, survival (months), 1-, 2-, 3-, 5- year overall survival rate (%), disease free survival (months) and 1-, 2-, 3-, 5- year disease free survival rate (%). Overall median figures were calculated using unweighted median data given in each paper. Results demonstrated that mortality was low across all studies with a median of 0% and a maximum of 5.9%. The median morbidity rate was 15%. Overall survival was a median of 35.1 months and a median 1-, 2-, 3- and 5-year survival of 84.55%, 71.4%, 52.85% and 33% respectively. Median disease free survival was 21.5 months with a 3- and 5-year median disease free survival of 36% and 18%. Whilst the results demonstrate seemingly satisfactory levels of overall survival and disease free survival, the data are of poor quality with multiple confounding variables and small study populations. Recommendations are for extensive pilot and feasibility work with the ultimate aim of conducting a large pragmatic randomised control trial to accurately determine which patients benefit from liver resection for breast cancer liver metastases.
abstract_id: PUBMED:28193572
Systematic review of early and long-term outcome of liver resection for metastatic breast cancer: Is there a survival benefit? Background: Isolated liver metastases occur rarely in patients with metastatic breast cancer. The success of liver resection (LR) for other metastatic disease has led centres to explore the option of LR for patients with isolated breast cancer liver metastases (BCLM). A number of small series have been published in the literature, however the evidence is conflicting. This study aimed to systematically review the literature to determine the perioperative outcome and survival of patients undergoing LR for BCLM.
Methods: An electronic search of Medline and Embase databases was performed to identify all published series. Patient demographics, management, peri-operative outcome and overall survival (OS) were obtained.
Results: A total of 1705 articles were identified, of which 531 included patients with non-colorectal and non-neuroendocrine metastases. 43 articles, including 1686 patients, met all the inclusion and exclusion criteria. R0 resection was achieved in 83% (683/825). Morbidity and 30-day mortality rates were 20% (174/852) and 0.7% (6/918), respectively. The median OS was 36 months (12-58 months). The median 1-, 3- and 5-year OS were 90%, 56% and 37%, respectively.
Conclusions: LR for BCLM can be carried out with acceptable peri-operative risks in selected patients with survival outcomes that appear to be superior to chemotherapy alone.
abstract_id: PUBMED:18368316
Liver resection for breast cancer metastasis: does it improve survival? Purpose: To assess the outcome and prognostic factors of liver surgery for breast cancer metastasis.
Methods: We retrospectively examined 16 patients who underwent partial liver resection for breast cancer liver metastasis (BCLM). All patients had been treated with chemotherapy or hormone therapy, or both, before referral for surgery. We confirmed by preoperative radiological examinations that metastasis was confined to the liver. The survival curve was estimated using the Kaplan-Meier method. Univariate and multivariate analyses were conducted to evaluate the role of the known factors of breast cancer survival.
Results: The median age of the patients was 54 years (range 38-68) and the median disease-free interval between the diagnoses of breast cancer and liver metastasis was 54 months (range 7-120). Nine major and 7 minor hepatectomies were performed. There was no postoperative death. The overall 1-, 3-, and 5-year survival rates were 94%, 61%, and 33%, respectively. The median survival was 42 months. Univariate analysis revealed that hormone receptor status, number of metastases, a major hepatectomy, and a younger age were associated with a poorer prognosis. The survival rate was not influenced by the disease-free interval, grade or stage of breast cancer, or intraoperative blood transfusions. The number of liver metastases was identified as a significant independent factor of survival according to the Cox proportional hazards model (P = 0.04).
Conclusions: Liver resection, when done in combination with adjuvant therapy, can improve the prognosis of selected patients with BCLM.
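The univariate and multivariate steps reported above rest on the Cox proportional hazards model. The fragment below shows the generic fitting step with `lifelines`; the data frame is fabricated for illustration and the column names are placeholders, not the study dataset.

```python
# Cox proportional hazards fit on a fabricated toy data frame.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months": [42, 12, 60, 30, 8, 55, 20, 48],  # survival time (toy)
    "death": [1, 1, 0, 1, 1, 0, 1, 0],          # event indicator
    "n_metastases": [1, 3, 1, 2, 4, 1, 3, 1],
    "age": [54, 40, 62, 48, 38, 66, 45, 58],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()  # hazard ratio, CI and p-value per covariate
```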
abstract_id: PUBMED:22682709
Resection of liver metastases in patients with breast cancer: survival and prognostic factors. Aims: Patients with breast cancer that has metastasized to the liver have a median survival of 4-33 months, and treatment options are usually restricted to palliative systemic therapy. The aim of this observational study was to evaluate the effectiveness and safety of resection of liver metastases from breast cancer and to identify prognostic factors for overall survival.
Methods: Patients were identified using the national registry of histo- and cytopathology in the Netherlands (PALGA). Included were all patients who underwent resection of liver metastases from breast cancer in 11 hospitals in The Netherlands of the last 20 years. Study data were retrospectively collected from patient files.
Results: A total of 32 female patients were identified. Intraoperative and postoperative complications occurred in 3 and 11 patients, respectively. There was no postoperative mortality. After a median follow-up period of 26 months (range, 0-188), the 5-year and median overall survival after partial liver resection were 37% and 55 months, respectively. The 5-year disease-free survival was 19%, with a median time to recurrence of 11 months. Solitary metastasis was the only independent significant prognostic factor on multivariate analysis.
Conclusion: Resection of liver metastases from breast cancer is safe and might provide a survival benefit in a selected group of patients. Especially in patients with solitary liver metastasis, the option of surgery in the multimodality management of patients with disseminated breast cancer should be considered.
abstract_id: PUBMED:26131746
Resection of liver metastases in breast cancer. Liver metastases have the poorest prognosis of all types of breast cancer metastases, with a 5-year survival rate of 0 to 12%. In comparison, the 5-year overall survival rate of patients with colorectal liver metastases undergoing curative liver resection is approximately 30 to 40%, and even 50% in selected patients. Partial liver resection in combination with systemic treatment for patients with hepatic metastases from breast cancer may lead to improved survival rates for selected patients.
abstract_id: PUBMED:10776428
Liver metastases from breast cancer: long-term survival after curative resection. Background: Liver metastases from breast cancer are associated with a poor prognosis (median survival < 6 months). A subgroup of these patients with no dissemination in other organs may benefit from surgery. Available data in the literature suggest that only in exceptional cases do these patients survive more than 2 years when given chemohormonal therapy or supportive care alone. We report the results of liver resection in patients with isolated hepatic metastases from breast cancer and evaluate the rate of long-term survival, prognostic factors, and the role of neoadjuvant high-dose chemotherapy.
Patients And Methods: Over the past decade, 17 women underwent hepatic metastasectomy with curative intent for metastatic breast cancer. The follow-up was complete in each patient. The median age at the time breast cancer was diagnosed was 48 years. Neoadjuvant high-dose chemotherapy (HDC) with hematopoietic progenitor support was used in 10 patients before liver resection. Perioperative complications, long-term outcome, and prognostic factors were evaluated.
Results: Seven of the 17 patients are currently alive, with follow-up of up to 12 years. Four of these patients are free of tumors after 6 and 17 months and 6 and 12 years. The actuarial 5-year survival rate is 22%. One patient died postoperatively (mortality rate, 6%) of carmustine-induced fibrosing pneumonitis. There was no further major morbidity in the other patients. The liver was the primary site of recurrent disease after liver resection in 67% of the patients. Patients in whom liver metastases were found more than 1 year after resection of the primary breast cancer had a significantly better outcome than those with early (< 1 year) metastatic disease (P = .04). The type of liver resection, the lymph node status at the time of the primary breast cancer resection, and HDC had no significant impact on patient survival in this series.
Conclusions: Favorable 22% long-term survival can be achieved with metastasectomy in this selected group of patients. Careful evaluation of pulmonary toxicity from carmustine and exclusion of patients with extrahepatic disease are critical. Improved survival might be achieved with better selection of patients and the use of liver-directed adjuvant therapy.
abstract_id: PUBMED:29392651
The role of liver resection in patients with metastatic breast cancer: a systematic review examining the survival impact. Introduction: Approximately 10% of breast cancer patients will present with solid organ metastases, while up to 30% will develop metastatic disease during their treatment course. Liver metastases are usually treated with systemic chemotherapy. Although colorectal liver metastases are routinely resected, this is not yet the standard of care for breast cancer-related liver metastases. This review examines the evidence for resection of breast cancer-related liver metastases.
Methods: A systematic review identified 25 articles for inclusion, 12 papers concerning patients with isolated liver metastases, and 13 papers concerning patients with extrahepatic metastases. Data from 1080 patients were included.
Results: Two hundred eighty patients underwent hepatic resections for breast cancer-associated metastases with no extrahepatic metastases. Reported 5-year survival ranged from 24.6 to 78%. Median overall survival ranged from 29.5 to 116 months. For patients with oligometastatic disease undergoing resection, 5-year survival ranged from 21 to 57%, with median overall survival ranging from 32 to 58 months. Reported 30-day morbidity ranged from 14 to 42% for isolated and multiple metastases.
Conclusion: Hepatic resection can be considered in the management of breast cancer patients with isolated liver metastases as well as those with oligometastatic disease.
Answer: Liver resection for breast cancer metastasis does appear to improve survival in selected patients. Several studies have demonstrated that liver resection can be a safe and effective treatment for metastases from breast cancer, leading to improved patient survival, particularly in patients with solitary liver metastasis and good general condition (PUBMED:24719119). A single institution experience showed that liver resection for breast cancer-related liver metastases is feasible, safe, and associated with prolonged disease-free and overall survival in selected patients (PUBMED:35322733). A single-center experience reported that 10-year survival rates after liver resection were encouraging, with acceptable risk in selected patients (PUBMED:29393171).
A systematic review found that liver resection for breast cancer liver metastases is associated with low mortality and morbidity rates, and median overall survival and disease-free survival rates that seem satisfactory (PUBMED:27764727). Another systematic review concluded that liver resection for breast cancer liver metastases (BCLM) can be carried out with acceptable peri-operative risks and survival outcomes that appear to be superior to chemotherapy alone (PUBMED:28193572). Similarly, a retrospective examination of patients who underwent partial liver resection for BCLM suggested that liver resection, when done in combination with adjuvant therapy, can improve the prognosis of selected patients (PUBMED:18368316).
An observational study indicated that resection of liver metastases from breast cancer is safe and might provide a survival benefit, especially in patients with solitary liver metastasis (PUBMED:22682709). Furthermore, a systematic review examining the survival impact of liver resection in patients with metastatic breast cancer found that hepatic resection can be considered in the management of breast cancer patients with isolated liver metastases as well as those with oligometastatic disease (PUBMED:29392651).
In conclusion, the evidence suggests that liver resection for breast cancer metastasis can improve survival in a subset of patients, particularly those with isolated or limited liver metastases, and should be considered as part of a multimodal treatment approach in appropriate candidates. |
Instruction: Prolonging proton pump inhibitor-based anti-Helicobacter pylori treatment from one to two weeks in duodenal ulcer: is it worthwhile?
Abstracts:
abstract_id: PUBMED:11515622
Prolonging proton pump inhibitor-based anti-Helicobacter pylori treatment from one to two weeks in duodenal ulcer: is it worthwhile? Aims: To compare the efficacy of one-week versus two-week treatment with lansoprazole, amoxycillin and clarithromycin in inducing healing of Helicobacter pylori-positive duodenal ulcers, as well as to investigate the role of several factors considered determinant in the ulcer healing process.
Patients And Methods: Seventy-one active duodenal ulcer patients were randomised to receive one- or two-week treatment with lansoprazole (30 mg bid), clarithromycin (500 mg bid) and amoxycillin (1 g bid), not followed by any additional acid suppressive therapy. Ulcer healing and Helicobacter pylori infection were assessed by endoscopy and urea breath test 4 weeks after the end of treatment. Before entering the trial and four weeks after the end of treatment, dyspeptic symptoms were recorded and scored by a validated questionnaire. The potential effects of a number of clinical variables on the ulcer healing process were evaluated by means of univariate and multivariate analyses.
Results: Duodenal ulcer was healed in 80.5% of patients treated for one week and in 91.4% of patients treated for two weeks according to intention-to-treat analysis (p=NS). Ulcer healing was more frequent in the Helicobacter pylori-cured patients compared to those with persisting infection (90.9% vs 68.5%; p=0.04). Multivariate analysis did not reveal any significant predictor of duodenal ulcer healing.
Conclusions: Two-week treatment with lansoprazole, amoxycillin and clarithromycin, without continuation of antisecretive therapy, is better, although the difference is not statistically significant, than one-week treatment in healing Helicobacter pylori-positive duodenal ulcer disease. The eradication of Helicobacter pylori is the most important factor related to ulcer healing.
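The headline comparison above (80.5% vs 91.4% healing, p = NS) is a two-proportion test on intention-to-treat counts. Assuming the 71 randomized patients split 36/35 between arms, the reported percentages correspond to 29/36 and 32/35 healed; these per-arm counts are a reconstruction consistent with the abstract, not figures it states. The sketch reruns the comparison with Fisher's exact test:

```python
# Two-proportion comparison of healing rates (intention-to-treat).
# Per-arm counts are reconstructed from the reported percentages (assumption).
from scipy.stats import fisher_exact

#               healed  not healed
one_week = [29, 7]   # 29/36 = 80.5%
two_week = [32, 3]   # 32/35 = 91.4%

odds_ratio, p = fisher_exact([one_week, two_week])
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")  # non-significant, as reported
```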
abstract_id: PUBMED:9033612
The treatment of Helicobacter pylori infection. H. pylori causes inflammatory lesions of the stomach and duodenum. At present, eradication is essentially recommended in cases of gastric or duodenal ulcer. The choice of the appropriate drug depends on the characteristics of the H. pylori infection: its localization deep in the gastric mucosa, the physico-chemical properties of the gastric medium (especially the acidity, which inactivates antibiotics), slow bacterial growth, and the organism's sensitivity to antibiotics. Anti-infectious treatment is now based on a three-drug regimen combining an antisecretory drug (proton pump inhibitor or H2 receptor antagonist) and two antibiotics: clarithromycin associated with amoxicillin, an imidazole derivative (metronidazole or tinidazole), or tetracycline. Two antibiotics (clarithromycin, amoxicillin) as well as three antisecretory agents (lansoprazole, omeprazole, ranitidine) have been authorized in France for three-drug regimens of 1 or 2 weeks, leading to approximately 90% eradication. Special attention should be paid to the risk of resistance to antibiotics (macrolides and imidazole derivatives) and to the patient compliance required for successful eradication of H. pylori. Other therapeutic schemes are under assessment and a vaccine is being developed. Eradication of H. pylori has totally changed the treatment of gastric and duodenal ulcers, eliminating the need for long-term treatment and avoiding complications.
abstract_id: PUBMED:9454363
Peptic ulcer, Helicobacter pylori. The etiology of gastric or duodenal ulcer defines the choice of treatment. In patients with H. pylori infection but without NSAID use, treatment of the acute ulcer is achieved by a one-week eradication therapy. Prolonged treatment with acid inhibitors is usually not necessary. Eradication should be done with a triple therapy consisting of one acid inhibitor and two antibiotics. Success of the eradication should be checked 4 weeks after the end of treatment by a C13-urea breath test; serology is not useful for this purpose. NSAID-induced ulcers without H. pylori infection should be treated for 4-6 weeks with a potent acid inhibitor, preferably a proton pump inhibitor. If NSAID therapy is continued afterwards, prophylaxis against ulcer relapse is necessary. The prostaglandin analog misoprostol is the only well-established drug for this purpose. Proton pump inhibitors also appear to prevent NSAID ulcers, but solid published evidence is lacking. H. pylori and NSAIDs are independent risk factors; thus, H. pylori eradication does not necessarily prevent relapse of NSAID-induced ulcers, and relapse prophylaxis with misoprostol, or possibly with a PPI, seems advisable. Ulcers without H. pylori infection and without NSAID use are rare. Other causes, such as carcinoma, Whipple's disease or Zollinger-Ellison syndrome, have to be ruled out. False-negative H. pylori tests should be excluded by searching for H. pylori with other methods.
abstract_id: PUBMED:11464622
Treatment of Helicobacter pylori infection. Whom to treat and with what? It is currently considered that Helicobacter pylori plays a major role in the genesis of peptic ulcer, both gastric and duodenal. When the presence of Helicobacter pylori is demonstrated in the gastric antrum of patients with ulcer, they must receive eradication treatment. Other indications for eradication treatment are patients with MALT lymphoma and patients who have undergone endoscopic resection of gastric carcinoma. The ideal treatment is a therapy that eradicates the infection in 90% of cases. The most effective are triple therapies combining a proton pump inhibitor with two antibiotics, such as amoxycillin plus clarithromycin. In Mexico, therapies with metronidazole are not recommended because resistance rates to this drug are high (70%). Treating patients with non-ulcer dyspepsia is not justified. We still recommend 14-day schemes. A good alternative is the combination of ranitidine bismuth citrate plus two antibiotics. It is possible that in the future a vaccine will be available to eradicate and prevent the infection.
abstract_id: PUBMED:11582994
Healing peptic ulcer disease with therapy of Helicobacter pylori infection--an overview. Over the last 15 years, several new insights into pathogenesis and basic changes in therapeutic strategies for healing peptic ulcers have been introduced. The discovery of Helicobacter pylori, the possibility of treating the infection, and the consecutive healing of peptic ulcer disease have changed the understanding of the pathophysiology of peptic ulcer disease. Most gastric or duodenal ulcers are based on Helicobacter pylori infection. Newer therapeutic strategies to cure Helicobacter pylori infection consist of proton pump inhibitor (PPI)-based triple therapy, containing in addition two antibiotics chosen from clarithromycin, amoxicillin and metronidazole, administered over 7 days. The other main cause of gastroduodenal ulcers is intake of non-steroidal anti-inflammatory drugs or aspirin; PPIs are the treatment of choice for such lesions. The main topics of this overview are the principles of and therapeutic approach to the management of Helicobacter pylori-associated peptic ulcer disease. The differences between duodenal and gastric ulcer are especially dealt with.
abstract_id: PUBMED:12700497
What role today for Helicobacter pylori in peptic ulcer? Helicobacter pylori (H. pylori) and nonsteroidal anti-inflammatory drugs/aspirin (NSAIDs) remain today the main etiologies of duodenal (DU) and gastric (GU) ulcers. In some countries or areas in which the prevalence of H. pylori infection has decreased, and probably also in which the consumption of NSAIDs is high, the proportion of ulcers not associated with H. pylori is high. Nevertheless, the proportion of H. pylori-negative, NSAID-negative ulcers remains low, less than 6% in most studies. Furthermore, this proportion is probably overestimated because the search for the infection and for NSAID treatment was not always performed properly. Data about the characteristics of idiopathic GU (after excluding cancer) are missing. In two studies, H. pylori-negative, NSAID-negative DU were associated with co-morbidities, often severe (cirrhosis; respiratory, renal or cardiac failure; malignancy), in two-thirds to three-quarters of cases. Numerous digestive diseases can give rise to a DU, Crohn's disease probably being the most frequent one. Gastric acid hypersecretion, as in H. pylori-positive DU, seems to be the pathogenic factor of idiopathic DU, with increased duodenal acid load. The outcome of idiopathic ulcers has been little studied, and there are no reliable data concerning the risk of complications. Therapy is based on proton pump inhibitors at a dosage allowing prolonged healing.
abstract_id: PUBMED:9214051
Helicobacter pylori in 1997. In this review, Helicobacter pylori (H. pylori) infection and its relation to different diseases are presented. H. pylori does not cause problems for most infected people, though all infected persons have chronic active gastritis. The 10-year risk of peptic ulcer for people infected with H. pylori is about 10%. Randomized double-blinded trials have shown that eradication of H. pylori can cure most patients with peptic ulcer disease. Some people infected with H. pylori develop atrophic gastritis, which is a risk factor for the development of gastric cancer. It is not known whether H. pylori screening and eradication would have a prophylactic effect against gastric cancer. It is also unknown whether persons with non-organic dyspepsia and persons on long-term treatment with proton pump inhibitors would benefit from H. pylori eradication.
abstract_id: PUBMED:17051708
Effect of preceding acid-reducing therapy on the Helicobacter pylori eradication rate in patients with ulcerous disease. Thirty-two patients who had experienced an exacerbation of H. pylori-associated gastric ulcer disease within the previous one to two years received 7-day first-line eradication therapy according to the Maastricht 2-2002 consensus. All patients were divided into two groups: 15 patients in the first group received anti-Helicobacter therapy immediately after the diagnosis was established, while the other 17 patients in the second group had been pre-treated for 2-3 weeks with a proton pump inhibitor. The H. pylori eradication rate in the first and second groups was 93.3% and 52.9%, respectively. The results obtained enable us to discuss a rational regimen of medical therapy for patients with H. pylori-associated diseases.
abstract_id: PUBMED:12071078
Helicobacter pylori--2002. Recommended indications for treatment of H. pylori infection are peptic ulcer disease, MALT lymphoma, atrophic gastritis, status after gastric cancer resection, and being a first-degree relative of a gastric cancer patient. Advisable situations are functional dyspepsia and before the introduction of NSAIDs or intended long-term proton pump inhibitor treatment. Eradication therapy is thought not to be associated with gastro-esophageal reflux disease and does not enhance NSAID-induced peptic ulcer healing. Therapy should be given as a package that considers first- and second-line eradication therapies together; in uncomplicated duodenal ulcer patients, eradication therapy does not need to be followed by further antisecretory treatment. First-line therapy should be triple therapy using a proton pump inhibitor (PPI) combined with clarithromycin and amoxycillin or metronidazole. Second-line therapy should be quadruple therapy with a PPI, bismuth, metronidazole and tetracycline. Where bismuth is not available, second-line therapy should be a PPI triple therapy. If second-line quadruple therapy fails in primary care, patients should be referred to the specialist and handled on a case-by-case basis. Successful eradication should always be confirmed by urea breath test (UBT), or by endoscopy-based tests if endoscopy is clinically indicated; the stool antigen test is the alternative if UBT is not available. A 'test and treat' approach based on non-invasive testing can be offered to adult patients presenting in primary care with persistent dyspepsia under the age of 45 years (the age cut-off may vary locally), having excluded those with predominantly gastroesophageal reflux disease (GERD) symptoms, NSAID users, and patients with alarm symptoms.
abstract_id: PUBMED:10076792
Therapeutic news on Helicobacter pylori from Budapest and Wien. Helicobacter pylori is the most common infection in the world and is associated with some of the most prevalent gastroduodenal diseases. There is now considerable evidence that Helicobacter pylori eradication not only heals duodenal ulcer but also prevents ulcer recurrences and probably ulcer complications. It is, therefore, universally accepted that all patients with duodenal ulcer disease should receive eradication therapy. This review attempts to summarize and analyse all the anti-Helicobacter pylori regimens presented in September at the last International Meetings in Budapest and Wien. Despite the large number of clinical trials that have been performed, no significant advances have been made in relation to Helicobacter pylori therapy, so the "optimal" eradication treatment still remains a matter of debate. More than 10,000 patients were treated, and most of them (around 5000) were found to have peptic ulcer disease, which represents no more than 10-15% of our endoscopic findings. The proton pump inhibitor regimens are definitely the most representative drug policy appearing in the literature, and despite several attempts (proton pump inhibitor given for less than, exactly, or more than one week), the short one-week regimen (the Maastricht regimen) has been recognized as the most promising treatment.
Answer: The question of whether prolonging proton pump inhibitor (PPI) based anti-Helicobacter pylori treatment from one to two weeks in duodenal ulcer is worthwhile has been addressed in a study comparing the efficacy of one-week versus two-week treatment with lansoprazole, amoxicillin, and clarithromycin. The results indicated that duodenal ulcer healing was achieved in 80.5% of patients treated for one week and in 91.4% of patients treated for two weeks, according to intention-to-treat analysis. Although the difference was not statistically significant, the two-week treatment was better in healing H. pylori-positive duodenal ulcer disease. The eradication of H. pylori was identified as the most important factor related to ulcer healing (PUBMED:11515622).
Other studies and reviews have supported the use of PPI-based triple therapy, typically involving a PPI and two antibiotics such as clarithromycin and amoxicillin, for the treatment of H. pylori infection associated with peptic ulcers. These treatments are generally recommended for one or two weeks and have been shown to lead to approximately 90% eradication rates (PUBMED:9033612, PUBMED:9454363, PUBMED:11464622, PUBMED:11582994, PUBMED:12700497, PUBMED:9214051, PUBMED:12071078, PUBMED:10076792).
In conclusion, while the study suggests that a two-week treatment may be more effective than a one-week regimen, the difference in healing rates was not statistically significant. However, the eradication of H. pylori is crucial for ulcer healing, and PPI-based triple therapy remains a cornerstone of treatment for H. pylori-associated duodenal ulcers. The decision to extend treatment duration may depend on individual patient factors and the clinician's judgment. |
Instruction: Can strength training predictably improve gait kinematics?
Abstracts:
abstract_id: PUBMED:30947107
The effect of combined functional anaerobic and strength training on treadmill gait kinematics and kinetics in ambulatory young adults with cerebral palsy. Background: Leg muscle weakness is a major impairment for individuals with cerebral palsy (CP) and is related to reduced functional capacity. Evidence is limited regarding the translation of strength improvements following conventional resistance training to improved gait outcomes.
Research Question: Does a combined functional anaerobic and lower limb strength training intervention improve gait kinematics and kinetics in individuals with CP aged 15-30 years? Methods: Seventeen young adults (21 ± 4 years, 9 males, GMFCS I = 11, II = 6) were randomized to 12 weeks, 3 sessions per week, of high intensity functional anaerobic and progressive resistance training of the lower limbs (n = 8), or a waitlist control group (n = 9). Pre- and post-training outcomes included maximum ankle dorsiflexion angle at foot contact and during stance, gait profile score, ankle and hip power generation during late stance, and the ratio of ankle to hip power generation.
Results: There were no between-group differences after the intervention for any kinematic or kinetic gait outcome variable. Within-group analysis revealed an increase in peak ankle power during late stance (0.31 ± 0.28 W·kg-1, p = 0.043) and ankle to hip power ratio (0.43 ± 0.37, p = 0.034) following training in the intervention group.
Significance: We have previously reported increased overground walking capacity, agility and sprint power in the training group compared to the control group at 12 weeks. These changes in overground measures of functional capacity occurred in the absence of changes in the treadmill gait kinematics and kinetics reported here.
Anzctr: 12614001217695.
abstract_id: PUBMED:33450595
Ankle dorsiflexors and plantarflexors neuromuscular electrical stimulation training impacts gait kinematics in older adults: A pilot study. Background: Ankle muscles are highly affected by aging, are strongly implicated in age-related changes in gait kinematics, and are involved in the limitation of seniors' mobility; however, whether neuromuscular electrical stimulation (NMES) training of these muscles could impact gait kinematics in older adults has not yet been investigated.
Research Question: What are the effects of 12 weeks of ankle plantar and dorsiflexors NMES training on strength and gait kinematics in healthy older adults?
Methods: Fourteen older adults (73.6 ± 4.9 years) performed NMES training of both ankle plantar and dorsiflexors three times per week for three months. Before and after training, neuromuscular parameters, gait kinematic parameters, and daily physical activity were measured.
Results: The participants significantly increased their lower limb muscle mass and their plantarflexor and dorsiflexor isometric strength after training. They reduced the hip abduction/adduction and pelvic anterior tilt range of motion and variability during gait. However, the participants became less active after the training.
Significance: NMES training of ankle muscles, by increasing ankle muscle mass and strength, modified gait kinematics. NMES training of ankle muscles is feasible and effective in reducing hip involvement and increasing the foot progression angle during gait. Further study should determine whether this could lower the risk of falling.
abstract_id: PUBMED:19716637
Strength training improves fall-related gait kinematics in the elderly: a randomized controlled trial. Background: Falls are one of the greatest concerns among the elderly. Among a number of strategies proposed to reduce the risk of falls, improving muscle strength has been applied as a successful preventive strategy. Although it has been suggested as a relevant strategy, no studies have analyzed how muscle strength improvements affect the gait pattern. The aim of this study was to determine the effects of a lower limb strength training program on gait kinematics parameters associated with the risk of falls in elderly women.
Methods: Twenty-seven elderly women were assigned, in a balanced and randomized order, to an experimental group (EG; n=14; age=61.1 (4.3) years, BMI=26.4 (2.8) kg·m(-2)) or a control group (n=13; age=61.6 (6.6) years; BMI=25.9 (3.0) kg·m(-2)). The EG performed lower limb strength training for 12 weeks (3 days per week), with the training load increased weekly.
Findings: Primary outcomes were gait kinematic parameters and maximum voluntary isometric contractions at the pre- and post-training periods. Secondary outcomes were weekly training load improvement and one-repetition maximum every two weeks. The one-repetition maximum increment ranged from 32% to 97% and was the best predictor of changes in gait parameters (spatial, temporal and angular variables) after training for the experimental group. Z-score analysis revealed that the strength training was effective in reversing age-related changes in gait speed, stride length, cadence and toe clearance, bringing the elderly women closer to reference values for healthy young women.
Interpretation: Lower limb strength training improves fall-related gait kinematic parameters. Thus, strength training programs should be recommended to elderly women in order to shift their gait pattern towards that of young adults.
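The Z-score step mentioned in the findings standardizes each gait parameter against reference values for healthy young women, so a value near zero means a "young-like" gait. A minimal sketch with made-up reference means/SDs and post-training values:

```python
# Z-scores of gait parameters against young-adult reference values.
# Reference statistics and measured values are hypothetical.
import numpy as np

params = ["gait speed (m/s)", "stride length (m)", "cadence (steps/min)"]
ref_mean = np.array([1.30, 1.40, 115.0])   # healthy young women (assumed)
ref_sd = np.array([0.15, 0.12, 8.0])
post_training = np.array([1.22, 1.33, 112.0])

z = (post_training - ref_mean) / ref_sd
for name, zi in zip(params, z):
    print(f"{name}: z = {zi:+.2f}")  # closer to 0 = closer to the young reference
```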
abstract_id: PUBMED:33770532
Association of isometric quadriceps strength with stride and knee kinematics during gait in community dwelling adults with normal knee or early radiographic knee osteoarthritis. Background: Identifying indicators of early knee osteoarthritis is important for preventing the onset and/or progression of the disease. Although low quadriceps strength and changes in stride and knee kinematics during gait have been suggested as possible indicators, their relevance and relationships have not been fully examined. This study aimed to analyze the association of quadriceps strength with stride and knee kinematics during gait in adults with normal knee or early knee osteoarthritis.
Methods: A total of 881 knees from 474 community-dwelling adults (238 males and 236 females) were included. Radiographic images of the knee in standing position were obtained, and knee osteoarthritis was graded. Isometric quadriceps strength was measured using a force detector device. Three-dimensional knee kinematics during gait was obtained by a motion capture system. Sex-based differences in quadriceps strength, stride, and knee kinematics during gait were evaluated by multiple comparisons among grades by sex, and multiple regression of quadriceps strength on stride and knee kinematics during gait was analyzed.
Findings: Stride length and quadriceps strength were significantly reduced with higher grade in both sexes, and changes in knee kinematics during gait differed by sex from early knee osteoarthritis onward. Quadriceps strength in both sexes was significantly correlated with changes in stride length and knee kinematics during gait.
Interpretation: Improving quadriceps strength in early knee osteoarthritis was related to maintaining gait ability and restraining abnormal knee kinematics during gait. This may help to develop clinical approaches to prevent the onset and/or progression of knee osteoarthritis.
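The multiple-regression step in the methods above can be sketched with ordinary least squares; the data frame, variable names and coefficients below are fabricated for illustration, not the cohort data.

```python
# Multiple regression of quadriceps strength on gait variables (toy data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({
    "stride_length": rng.normal(1.2, 0.15, n),     # m
    "knee_flexion_rom": rng.normal(60.0, 6.0, n),  # degrees during gait
})
df["quad_strength"] = (1.5 * df["stride_length"] + 0.02 * df["knee_flexion_rom"]
                       + rng.normal(0, 0.2, n))    # synthetic outcome

X = sm.add_constant(df[["stride_length", "knee_flexion_rom"]])
fit = sm.OLS(df["quad_strength"], X).fit()
print(fit.summary())  # coefficients, p-values, R^2
```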
abstract_id: PUBMED:26817456
A Novel Application of Eddy Current Braking for Functional Strength Training During Gait. Functional strength training is becoming increasingly popular when rehabilitating individuals with neurological injury such as stroke or cerebral palsy. Typically, resistance during walking is provided using cable robots or weights that are secured to the distal shank of the subject. However, there exists no device that is wearable and capable of providing resistance across the joint, allowing overground gait training. In this study, we created a lightweight and wearable device using eddy current braking to provide resistance to the knee. We then validated the device by having subjects wear it during a walking task through varying resistance levels. Electromyography and kinematics were collected to assess the biomechanical effects of the device on the wearer. We found that eddy current braking provided resistance levels suitable for functional strength training of leg muscles in a package that is both lightweight and wearable. Applying resistive forces at the knee joint during gait resulted in significant increases in muscle activation of many of the muscles tested. A brief period of training also resulted in significant aftereffects once the resistance was removed. These results support the feasibility of the device for functional strength training during gait. Future research is warranted to test the clinical potential of the device in an injured population.
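A defining property of eddy current brakes is that the resistive torque grows with the speed of the moving conductor and vanishes at rest, which is what makes them attractive for resisted gait training. Below is a toy model of this velocity-dependent behaviour; the damping coefficient is arbitrary, not a property of the authors' device.

```python
# Toy model of eddy current braking at the knee: tau = -b * omega,
# i.e. resistive torque proportional to joint angular velocity.
# The damping coefficient b is assumed, not the published device's value.
import numpy as np

b = 0.8  # damping coefficient, N*m*s/rad (assumed)
omega_deg = np.array([0.0, 50.0, 150.0, 300.0])  # knee angular velocities, deg/s
tau = -b * np.deg2rad(omega_deg)                 # resistive torque, N*m

for w, t in zip(omega_deg, tau):
    print(f"omega = {w:5.0f} deg/s -> resistive torque = {t:6.2f} N*m")
```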
abstract_id: PUBMED:35074398
The Effects of the EquiAmi™ Training Aid on the Kinematics of the Horse at the Walk and Trot In-Hand. The EquiAmi Training Aid (ETA) is a popular training and rehabilitation tool; however, knowledge about its effect on the equine gait is lacking. Understanding of its effects on equine kinematics, and of the clinical relevance of these effects, is vital to promote optimal use of training aids within training and rehabilitation programmes. Therefore, this study aimed to determine how the ETA influences horses' gait kinematics at walk and trot. Eight horses walked and trotted in-hand with and without the ETA. Optical motion capture was used to measure forelimb and hindlimb pro- and retraction angles, withers-croup angle, and stride length. Separate repeated-measures ANOVAs in each gait were used to assess the differences between gait kinematics and stride length variability with and without the ETA. The ETA did not significantly influence the horses' kinematics in walk or trot; however, individual differences in the effect of the ETA on the horses' angular and linear kinematics were found, with variation between gaits within the same horse observed. The ETA does not have the same effect on every horse, and its effect can vary within the same horse between gaits. Therefore, the individual characteristics and needs of the horse must be considered when applying training aids.
abstract_id: PUBMED:35721873
Gait and Neuromuscular Changes Are Evident in Some Masters Club Level Runners 24-h After Interval Training Run. Purpose: To examine the time course of recovery for gait and neuromuscular function immediately after and 24-h post interval training. In addition, this study compared the impact of different statistical approaches on detecting changes.
Methods: Twenty (10F, 10M) healthy, recreational club runners performed a high-intensity interval training (HIIT) session consisting of six repetitions of 800 m. A 6-min medium-intensity run was performed pre, post, and 24-h post HIIT to assess hip and knee kinematics and coordination variability. Voluntary activation and twitch force of the quadriceps, along with maximum isometric force, were examined pre, post, and 24-h post HIIT. The time course of changes was examined using two different statistical approaches: traditional null hypothesis significance tests and "real" changes using minimum detectable change.
Results: Immediately following the run, there were significant (P < 0.05) increases in hip frontal-plane kinematics and coordination variability. The runners also experienced a loss of muscular strength and neuromuscular function immediately post HIIT (P < 0.05). Individual assessment, however, showed that not all runners experienced fatigue effects immediately post HIIT. Null hypothesis significance testing revealed a lack of recovery in hip frontal-plane kinematics, coordination variability, muscle strength, and neuromuscular function at 24-h post; however, the use of minimum detectable change suggested that most runners had recovered.
Conclusion: High-intensity interval training resulted in altered running kinematics along with central and peripheral decrements in neuromuscular function. Most runners had recovered within 24 h, although a minority still exhibited signs of fatigue. The runners who were not able to recover prior to their run at 24 h were identified as being at an increased risk of running-related injury.
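The "minimum detectable change" criterion used here is conventionally derived from test-retest reliability as MDC95 = 1.96 x sqrt(2) x SEM, with SEM = SD x sqrt(1 - ICC). A small sketch with assumed reliability figures (not the study's values):

```python
# Minimum detectable change (95% confidence) from test-retest reliability.
# The SD and ICC below are assumed for illustration.
import math

def mdc95(sd, icc):
    sem = sd * math.sqrt(1.0 - icc)      # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem   # smallest change beyond measurement noise

sd_hip_angle, icc_hip_angle = 2.5, 0.90  # degrees; hypothetical values
print(f"MDC95 = {mdc95(sd_hip_angle, icc_hip_angle):.2f} deg")
# A pre-to-post change smaller than MDC95 is treated as measurement noise
# rather than a true fatigue or recovery effect.
```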
abstract_id: PUBMED:24149223
Effects of Whole-body Vibration Training on Sprint Running Kinematics and Explosive Strength Performance. The aim of this study was to investigate the effect of 6 wk of whole body vibration (WBV) training on sprint running kinematics and explosive strength performance. Twenty-four volunteers (12 women and 12 men) participated in the study and were randomised (n = 12) into the experimental and control groups. The WBV group performed a 6-wk program (16-30 min·d(-1), 3 times a week) on a vibration platform. The amplitude of the vibration platform was 2.5 mm and the acceleration was 2.28 g. The control group did not participate in any training. Tests were performed pre and post the training period. Sprint running performance was measured during a 60 m sprint, where running time, running speed, step length and step rate were calculated. Explosive strength performance was measured during a counter movement jump (CMJ) test, where jump height and the total number of jumps performed in a period of 30 s (30CVJT) were recorded. Performance in 10 m, 20 m, 40 m, 50 m and 60 m improved significantly after 6 wk of WBV training, with an overall improvement of 2.7%. The step length and running speed improved by 5.1% and 3.6%, and the step rate decreased by 3.4%. The countermovement jump height increased by 3.3%, and the explosive strength endurance improved overall by 7.8%. The WBV training period of 6 wk produced significant changes in sprint running kinematics and explosive strength performance. Key points: WBV training; sprint running kinematics; explosive strength performance.
abstract_id: PUBMED:27754998
Effects of High- and Low-Velocity Resistance Training on Gait Kinematics and Kinetics in Individuals with Hip Osteoarthritis: A Randomized Controlled Trial. Objective: The aim of this study was to investigate the effects of high-velocity (HV) and low-velocity (LV) resistance training on gait kinematics and kinetics in patients with hip osteoarthritis.
Design: This was a single-blind, randomized controlled trial. Forty-six women with hip osteoarthritis were randomly allocated to the HV (n = 23) or LV (n = 23) training group. The participants underwent an 8-week home-based HV or LV resistance-training program involving the hip and knee muscles. Outcome measures included gait kinematics and kinetics using 3-dimensional analyses, muscle strength and power, the Harris Hip Score, and hip pain using the visual analog scale.
Results: There was no significant difference in changes for any of the outcome measures between groups. After the training session, muscle power, walking speed, and cadence significantly increased only in the HV group, whereas stride length and the peak hip extension angle during gait significantly increased, and pain on the visual analog scale and the peak ankle dorsiflexion moment during gait significantly decreased only in the LV group. Muscle strength and Harris Hip Score significantly increased in both groups.
Conclusions: The results of this study may indicate that the potential effect of resistance training on abnormal gait pattern depends on movement velocities during training.
abstract_id: PUBMED:25310220
A behavioral mechanism of how increases in leg strength improve old adults' gait speed. We examined a behavioral mechanism of how increases in leg strength improve healthy old adults' gait speed. Leg press strength training improved maximal leg press load by 40% (p = 0.001) and isometric strength in 5 groups of leg muscles by 32% (p = 0.001) in a randomly allocated intervention group of healthy old adults (age 74, n = 15) but not in a no-exercise control group (age 74, n = 8). Gait speed increased similarly in the training (9.9%) and control (8.6%) groups (time main effect, p = 0.001). However, in the training group only, in line with the concept of biomechanical plasticity of aging gait, hip extensors and ankle plantarflexors became the only significant predictors of self-selected and maximal gait speed. The study provides the first behavioral evidence regarding a mechanism of how increases in leg strength improve healthy old adults' gait speed.
Answer: Strength training can have an impact on gait kinematics, but the predictability of these improvements seems to vary based on the population studied and the specifics of the training program. Several studies have shown positive effects of strength training on gait kinematics in different populations:
1. In elderly women, lower limb strength training improved fall-related gait kinematic parameters such as gait speed, stride length, cadence, and toe clearance, suggesting that strength training programs should be recommended to elderly women to change their gait pattern towards that of young adults (PUBMED:19716637).
2. In older adults, neuromuscular electrical stimulation (NMES) training of ankle muscles increased muscle mass and strength, which modified gait kinematics by reducing the hip abduction/adduction and the pelvic anterior tilt range of motion and variability during gait (PUBMED:33450595).
3. In individuals with hip osteoarthritis, high-velocity and low-velocity resistance training had different effects on gait kinematics and kinetics, with high-velocity training increasing muscle power, walking speed, and cadence, while low-velocity training increased stride length and the peak hip extension angle during gait (PUBMED:27754998).
4. In healthy older adults, leg press strength training improved leg strength and altered the predictors of self-selected and maximal gait speed, with hip extensors and ankle plantarflexors becoming significant predictors of gait speed after the training (PUBMED:25310220).
However, there are also studies showing that strength training does not always lead to predictable improvements in gait kinematics:
1. In young adults with cerebral palsy, a combined functional anaerobic and lower limb strength training intervention did not result in between-group differences for any kinematic or kinetic gait outcome variable, although within-group analysis revealed an increase in peak ankle power during late stance and ankle to hip power ratio following training in the intervention group (PUBMED:30947107).
2. In a study involving a novel application of eddy current braking for functional strength training during gait, the device showed feasibility for functional strength training and resulted in significant increases in muscle activation, but further research is needed to test its clinical potential (PUBMED:26817456).
These findings suggest that while strength training can improve gait kinematics in some cases, the outcomes may not be universally predictable and may depend on the specific characteristics of the training program and the population being studied. It is important to consider individual differences and the specific needs of the population when designing and applying strength training interventions to improve gait kinematics.
For instance, in the case of young adults with cerebral palsy, although the combined functional anaerobic and strength training did not lead to significant changes in treadmill gait kinematics and kinetics, it did result in increased walking capacity, agility, and sprint power, indicating that strength training can still have beneficial effects on functional capacity even if not directly observable in gait kinematics (PUBMED:30947107).
Moreover, the effects of strength training on gait kinematics can also be influenced by the type of training. For example, whole-body vibration (WBV) training led to significant changes in sprint running kinematics and explosive strength performance, suggesting that different training modalities can have specific effects on gait parameters (PUBMED:24149223).
In summary, while strength training has the potential to improve gait kinematics, the predictability of these improvements is not guaranteed and may vary based on the type of strength training, the population being studied, and individual responses to the training. It is essential to tailor strength training programs to the specific needs and abilities of individuals to maximize the potential benefits on gait kinematics. |
Instruction: Do specialty registrars change their attitudes, intentions and behaviour towards reporting incidents following a patient safety course?
Abstracts:
abstract_id: PUBMED:20416053
Do specialty registrars change their attitudes, intentions and behaviour towards reporting incidents following a patient safety course? Background: Reporting incidents can contribute to safer health care, as awareness of a system's weaknesses can serve as a starting point for improvements. It is believed that patient safety education for specialty registrars could improve their attitudes, intentions and behaviour towards incident reporting. The objective of this study was to examine the effect of a two-day patient safety course on specialty registrars' attitudes, intentions and behaviour concerning the voluntary reporting of incidents.
Methods: A patient safety course was designed to increase specialty registrars' knowledge, attitudes and skills in order to recognize and cope with unintended events and unsafe situations at an early stage. Data were collected through an 11-item questionnaire before, immediately after and six months after the course was given.
Results: The response rate at all three assessment points was 100% (n = 33). There were significant changes in incident reporting attitudes and intentions immediately after the course, as well as during follow-up. However, no significant changes were found in incident reporting behaviour.
Conclusions: Patient safety education can have long-term positive effects on registrars' attitudes and intentions towards reporting incidents. However, further efforts need to be undertaken to induce a real change in behaviour.
abstract_id: PUBMED:30799925
Implications from China patient safety incidents reporting system. Objective: We aimed to explain the operational mechanism of the China National Patient Safety Incidents Reporting System, analyze patterns and trends in incident reporting, and discuss the implications of incident reporting for improving hospital patient safety.
Design: A nationwide, registry-based, observational study design.
Data Source: The database of China National Patient Safety Incidents Reporting System.
Outcome Measures: Outcome measures of this study included the temporal, regional, and hospital distribution of the reports, as well as the incident type, location, parties, and possible reasons for frequently occurring incidents.
Results: During 2012-2017, 36,498 patient safety incidents were reported. By analyzing the time trends, we found a significant upward trend in incident reporting in China. The most common type of incident was drug-related, followed by nursing-related and surgery-related incidents. The three most frequent locations of incident occurrence were the Patient's Room (65.4%), Ambulatory Care Unit (8.4%), and Intensive Care Unit (7.4%). The majority of the incidents involved nurses (40.7%), followed by physicians (29.5%) and medical technologists (13.6%). About 44.4% of the incidents were attributed to junior staff (work experience ≤5 years). In addition, incidents triggered by senior staff (work experience >5 years) were more often associated with severe patient harm.
Conclusion: Strengthening the incident reporting system and generating useful evidence by learning from reported incidents will be important to China's success in improving the nation's patient safety status.
abstract_id: PUBMED:26142282
Examining the attitudes of hospital pharmacists to reporting medication safety incidents using the theory of planned behaviour. Objective: To assess the effect of factors within hospital pharmacists' practice on the likelihood of their reporting a medication safety incident.
Design: Theory of planned behaviour (TPB) survey.
Setting: Twenty-one general and teaching hospitals in the North West of England.
Participants: Two hundred and seventy hospital pharmacists (response rate = 45%).
Intervention: Hospital pharmacists were invited to complete a TPB survey, based on a prescribing error scenario that had resulted in serious patient harm. Multiple regression was used to determine the relative influence of different TPB variables, and participant demographics, on the pharmacists' self-reported intention to report the medication safety incident.
Main Outcome Measures: The TPB variables predicting intention to report: attitude towards behaviour, subjective norm, perceived behavioural control and descriptive norm.
Results: Overall, the hospital pharmacists held strong intentions to report the error, with senior pharmacists being more likely to report. Perceived behavioural control (ease or difficulty of reporting), Descriptive Norms (belief that other pharmacists would report) and Attitudes towards Behaviour (expected benefits of reporting) showed good correlation with, and were statistically significant predictors of, intention to report the error [R = 0.568, R² = 0.323, adjusted R² = 0.293, P < 0.001].
Conclusions: This study suggests that efforts to improve medication safety incident reporting by hospital pharmacists should focus on their behavioural and control beliefs about the reporting process. This should include instilling greater confidence that reporting is beneficial and will not harm professional relationships with doctors, providing greater clarity about what to report and what not to report, and offering a simpler reporting system.
abstract_id: PUBMED:25935666
Using the Theory of Planned Behaviour to examine health professional students' behavioural intentions in relation to medication safety and collaborative practice. Background: Safe medication practices depend not only on individual responsibilities but also on effective communication and collaboration between members of the medication team. However, measurement of these skills is fraught with conceptual and practical difficulties.
Aims: The aims of this study were to explore the utility of a Theory of Planned Behaviour-based questionnaire to predict health professional students' behavioural intentions in relation to medication safety and collaborative practice; and to determine the contribution of attitudes, subjective norms, and perceived control to behavioural intentions.
Design: A descriptive cross-sectional survey based upon the Theory of Planned Behaviour was designed and tested.
Participants: A convenience sample of 65 undergraduate pharmacy, nursing and medicine students from one semi-metropolitan Australian university were recruited for the study.
Methods: Participants' behavioural intentions, attitudes, subjective norms, and perceived control in relation to medication safety were measured using an online version of the Theory of Planned Behaviour Medication Safety Questionnaire.
Results: The Questionnaire had good internal consistency with a Cronbach's alpha of 0.844. The three predictor variables of attitudes, subjective norms, and perceived control accounted for between 30 and 46% of the variance in behavioural intention; this is a strong prediction in comparison to previous studies using the Theory of Planned Behaviour. Data analysis also indicated that attitude was the most significant predictor of participants' intention to collaborate with other team members to improve medication safety.
Conclusion: The results from this study provide preliminary support for the Theory of Planned Behaviour-Medication Safety Questionnaire as a valid instrument for examining health professional students' behavioural intentions in relation to medication safety and collaborative practice.
abstract_id: PUBMED:35668490
Attitudes of home-visiting nurses toward risk management of patient safety incidents in Japan. Background: In situations of home care, patients and their family members must address problems and emergencies themselves. For this reason, home-visiting nurses (HVNs) must practice risk management to ensure that patients can continue receiving care in the comfort of their homes. The purpose of this study was to examine HVNs' attitudes toward risk management.
Methods: This study adopted a qualitative description approach. Semi-structured interviews were conducted to collect information on HVNs' risk management behavior and their attitudes toward it. Participants comprised 11 HVNs working at home-visiting nursing agencies in a prefecture of Japan. Transcribed interviews were analyzed using content analysis.
Results: Nurses' attitudes toward risk management comprised the following themes: (i) predicting and avoiding risks, (ii) ensuring medical safety in home settings, (iii) coping with incidents, and (iv) playing the role of administrators in medical safety, a theme reported only by administrators.
Conclusions: When practicing risk management, home-visiting nurses should first assess the level of understanding of the patient and family, followed by developing safety measures tailored to their everyday needs. These results further suggest that administrators should take actions to foster a working environment conducive to risk management. These actions include coordinating duties to mitigate risk and improve the process of reporting risks. This study provides a baseline for future researchers to assist patients and families requiring medical care services of this nature.
abstract_id: PUBMED:32495328
Differences in Patient Safety Reporting Attitudes and Knowledge Among Different Hospital Levels Background: Establishing a positive reporting culture, which helps medical and healthcare workers learn from errors and reduce the risks of future adverse events, is essential to fostering a culture of patient safety.
Purpose: The objectives of this study were to investigate the differences among the three levels of hospitals in terms of the knowledge and attitudes of hospital staff regarding the patient safety reporting system and to identify the potential factors affecting these differences.
Methods: This cross-sectional study was carried out in six hospitals, including two academic medical centers, two regional hospitals, and two district hospitals. The subjects were physicians, nurses, medical technicians, and administrative staff. Data were collected using a patient safety reporting questionnaire.
Results: Three hundred and forty-eight participants were recruited, with 348 valid questionnaires returned (response rate: 100%). The average score for knowledge of patient safety reporting was 12.76 (total possible score: 14). Age, work position, and work experience were significantly associated with knowledge of patient safety reporting (p < .01). The patient safety reporting attitudes questionnaire comprised 21 items, each of which was scored using a five-point Likert scale. The mean score for each item was 3.92 ± 0.50. Gender, age, work position, work experience, and job discipline were significantly associated with attitude toward reporting (p < .01). The level of hospital was found to significantly impact attitudes toward patient safety reporting (p = .01), with participants working at medical centers scoring the highest. In addition, participants who were older and in more-senior positions scored higher and more positively for both knowledge and attitudes.
Conclusions: The key factors in successfully fostering a strong patient safety reporting culture are staff security, a reliable reporting system, and a user-friendly interface. Improving attitudes toward reporting requires more resources and time than improving knowledge of reporting, which may be achieved through education and promotion. Regional hospitals may need to invest more resources to enhance positive attitudes toward reporting and increase the willingness of staff to report.
abstract_id: PUBMED:22151773
Effects on incident reporting after educating residents in patient safety: a controlled study. Background: Medical residents are key figures in delivering health care and an important target group for patient safety education. Reporting incidents is an important patient safety domain, as awareness of vulnerabilities could be a starting point for improvements. This study examined effects of patient safety education for residents on knowledge, skills, attitudes, intentions and behavior concerning incident reporting.
Methods: A controlled study with follow-up measurements was conducted. In 2007 and 2008 two patient safety courses for residents were organized. Residents from a comparable hospital acted as external controls. Data were collected in three ways: 1) questionnaires distributed before, immediately after and three months after the course, 2) incident reporting cards filled out by course participants during the course, and 3) residents' reporting data gathered from hospital incident reporting systems.
Results: Forty-four residents attended the course and 32 were external controls. Positive changes in knowledge, skills and attitudes were found after the course. Residents' intentions to report incidents were positive at all measurements. Participants filled out 165 incident reporting cards, demonstrating the skills to notice incidents. Residents who had reported incidents before reported more incidents after the course. However, the number of residents reporting incidents did not increase. An increase in reported incidents was registered by the reporting system of the intervention hospital.
Conclusions: Patient safety education can have immediate and long-term positive effects on knowledge, skills and attitudes, and modestly influence the reporting behavior of residents.
abstract_id: PUBMED:35847382
Factors contributing to under-reporting of patient safety incidents in Indonesia: leaders' perspectives. Background: Understanding the causes of patient safety incidents is essential for improving patient safety; therefore, reporting and analysis of these incidents is a key imperative. Despite its implementation more than 15 years ago, the institutionalization of incident reporting in Indonesian hospitals is far from satisfactory. The aim of this study was to analyze the factors responsible for under-reporting of patient safety incidents in Indonesian public hospitals from the perspectives of leaders of hospitals, government departments, and independent institutions.
Methods: A qualitative research methodology was adopted for this study using semi-structured interviews of key informants. Twenty-five participants working at nine organizations (government departments, independent institutions, and public hospitals) were interviewed. The interview transcripts were analyzed using a deductive analytic approach. NVivo 10 was used for data processing prior to thematic analysis.
Results: The key factors contributing to the under-reporting of patient safety incidents were categorized as hospital related and nonhospital related (government or independent agency). The hospital-related factors were: lack of understanding, knowledge, and responsibility for reporting; lack of leadership and institutional culture of reporting incidents; perception of reporting as an additional burden. The nonhospital-related factors were: lack of feedback and training; lack of confidentiality mechanisms in the system; absence of policy safeguards to prevent any punitive measures against the reporting hospital; lack of leadership.
Conclusion: Our study identified factors contributing to the under-reporting of patient safety incidents in Indonesia. The lack of government support and absence of political will to improve patient safety incident reporting appear to be the root causes of under-reporting. Our findings call for concerted efforts involving government, independent agencies, hospitals, and other stakeholders for instituting reforms in the patient safety incident reporting system.
abstract_id: PUBMED:31843322
Learning from Patient Safety Incidents in the Emergency Department: A Systematic Review. Background: Patient safety incidents are commonly observed in critical and high demanding care settings, including the emergency department. There is a need to understand what causes patient safety incidents in emergency departments and determine the implications for excellence in practice.
Objective: Our aim was to systematically review the international literature on patient safety incidents in emergency departments and determine what can be learned from reported incidents to inform and improve practice.
Discussion: Patient safety incidents in emergency departments have a number of recognized contributing factors. These can be used as groundwork for the development of effective tools to systematically identify incident risk. Participation in efforts to diminish risk and improve patient safety through appropriate incident reporting is critical for removing barriers to safe care.
Conclusions: This review enhances our awareness of contributing factors to patient safety incidents within emergency departments and encourages researchers from different disciplines to investigate the causes of practice errors and formulate safety improvement strategies.
abstract_id: PUBMED:33227749
Health Professionals' Perceptions and Reactions to ICT-Related Patient Safety Incidents. Patient incident reporting is an important way to promote safer health care. Barriers to reporting can be organizational (leadership, culture, lack of feedback, etc.) or individual (time pressure, perceived competence, attitude, etc.). In this study, we examined what kinds of ICT-related incidents health professionals observe in Finland, how they react to them and the reasons for non-reporting. Our data were collected using a nationwide survey during the spring of 2020. The theory of planned behaviour by Ajzen served as our framework for explaining non-reporting behaviour. While we found that attitudes, subjective norms and perceived behavioural control all explain non-reporting, our factor model based on our confirmatory factor analysis did not directly match Ajzen's theory.
Answer: Yes, specialty registrars do change their attitudes and intentions towards reporting incidents following a patient safety course. A study examining the effect of a two-day patient safety course on specialty registrars found significant changes in their attitudes and intentions concerning the voluntary reporting of incidents immediately after the course and during follow-up. However, the study did not find significant changes in actual incident reporting behavior (PUBMED:20416053). This suggests that while education can positively influence attitudes and intentions, additional efforts may be needed to translate these changes into consistent changes in behavior. |
Instruction: Does acute alcoholic pancreatitis precede the chronic form or is the opposite true?
Abstracts:
abstract_id: PUBMED:15128075
Does acute alcoholic pancreatitis precede the chronic form or is the opposite true? A histological study. Objectives: Whether acute alcoholic pancreatitis occurs in a normal pancreas or in a pancreas that has already been altered by chronic pancreatitis is unclear. Our objective is to clarify the relation between acute and chronic alcoholic pancreatitis by histologic study of the pancreas in a group of patients having a first attack of acute alcoholic pancreatitis.
Methods: From January 1989 to December 1999, 138 patients with acute pancreatitis, of whom 28 had alcoholic pancreatitis, were seen by us; in 21 of the latter 28 patients, it was the first attack. Of these 21, 6 underwent surgery for acute necrotic pancreatitis. In all 6 patients, adequate pancreatic biopsies were obtained during surgery. Tissue samples were prepared for histologic examination according to standard procedures.
Results: In all 6 patients, both acute necrotic and chronic lesions were found. The chronic lesions had characteristics of chronic calcifying pancreatitis and consisted of perilobular and intralobular fibrosis, loss of exocrine parenchyma, dilated interlobular ducts, and protein plugs within dilated ducts.
Conclusions: This study suggests that acute alcoholic pancreatitis develops in a pancreas already affected by chronic pancreatitis. The hypothesis that in alcoholics chronic pancreatitis derives from acute pancreatitis is not supported by the present data.
abstract_id: PUBMED:19561104
The opposite effects of acute and chronic alcohol on lipopolysaccharide-induced inflammation are linked to IRAK-M in human monocytes. Impaired host defense after alcohol use is linked to altered cytokine production; however, acute and chronic alcohol differently modulate monocyte/macrophage activation. We hypothesized that in human monocytes, acute alcohol induces hyporesponsiveness to LPS, resulting in decreased TNF-alpha, whereas chronic alcohol increases TNF-alpha by sensitization to LPS. We found that acute alcohol increased IL-1R-associated kinase-monocyte (IRAK-M), a negative regulator of IRAK-1, in human monocytes. This was associated with decreased IkappaB alpha kinase activity, NFkappaB DNA binding, and NFkappaB-driven reporter activity after LPS stimulation. In contrast, chronic alcohol decreased IRAK-M expression but increased IRAK-1 and IKK kinase activities, NFkappaB DNA binding, and NFkappaB-reporter activity. Inhibition of IRAK-M in acute alcohol-exposed monocytes using small interfering RNA restored the LPS-induced TNF-alpha production, whereas over-expression of IRAK-M in chronic alcohol macrophages prevented the increase in TNF-alpha production. Addition of inhibitors of alcohol metabolism did not alter LPS signaling and TNF-alpha production during chronic alcohol exposure. IRAK-1 activation induces MAPKs that play an important role in TNF-alpha induction. We determined that acute alcohol decreased but chronic alcohol increased activation of ERK in monocytes, and the ERK inhibitor PD98059 prevented the chronic alcohol-induced increase in TNF-alpha. In summary, inhibition of LPS-induced NFkappaB and ERK activation by acute alcohol leads to hyporesponsiveness of monocytes to LPS due to increased IRAK-M. In contrast, chronic alcohol sensitizes monocytes to LPS through decreased IRAK-M expression and activation of NFkappaB and ERK kinases. Our data indicate that IRAK-M is a central player in the opposite regulation of LPS signaling by different lengths of alcohol exposure in monocytes.
abstract_id: PUBMED:27172353
Epidemiology and Healthcare Burden of Acute-on-Chronic Liver Failure. Chronic liver disease and cirrhosis, common end results of viral hepatitis, alcohol abuse, and the emerging epidemic of nonalcoholic fatty liver disease, are a significant source of morbidity and premature mortality globally. Acute clinical deterioration of chronic liver disease exemplifies the pinnacle of healthcare burden due to the intensive medical needs and high mortality risk. Although a uniformly accepted definition for epidemiological studies is lacking, acute-on-chronic liver failure (ACLF) is increasingly recognized as an important source of disease burden. At least in the United States, hospitalizations for ACLF have increased several fold in the last decade and have a high fatality rate. Acute-on-chronic liver failure incurs extremely high costs, exceeding the yearly costs of inpatient management of other common medical conditions. Although further epidemiological data are needed to better understand the true impact and future trends of ACLF, these data point to the urgency of clinical investigation of ACLF and the deployment of healthcare resources for timely and effective interventions in affected patients.
abstract_id: PUBMED:8966746
Acute and chronic pancreatitis in the elderly patient Normal pancreatic ageing is characterized by functional and morphological changes of the pancreatic parenchyma and of the ductal system, which, however, do not interfere with normal exocrine pancreatic function. It can be speculated that 'pancreatic lithiasis in the aged' as well as 'senile idiopathic chronic pancreatitis', two conditions of chronic pancreatitis in the elderly, may represent more extreme forms of these normal age-related changes in pancreatic structure and function. In elderly people, acute and chronic pancreatitis are only rarely related to alcohol abuse, in contrast to the situation in a younger patient population. The presence of gallstones represents the most frequent cause of acute pancreatitis in the elderly. In most aged patients with acute biliary pancreatitis, endoscopic sphincterotomy is the treatment of choice, even when bile duct stones cannot clearly be demonstrated at ERCP. Endoscopic sphincterotomy has been shown to reduce morbidity as well as mortality rates in acute biliary pancreatitis. This technique can even be considered the treatment of choice in elderly patients with an increased operative risk. An elective laparoscopic cholecystectomy should be performed in elderly patients with an acceptable operative risk.
abstract_id: PUBMED:10470329
Progression from acute to chronic pancreatitis: a physician's view. Whether or not acute pancreatitis (AP) may progress to the chronic form is controversial. Equally debatable is whether AP caused by alcohol abuse develops in a chronically diseased gland or in a normal pancreas. As for the state of the gland, several postmortem studies have shown that AP may occur after acute alcohol abuse in the normal pancreas. As for progression from acute to chronic pancreatitis, many experimental studies have demonstrated signs of the chronic form of the disease in animals, but these signs were reversible. Some clinical studies have shown that alcohol-induced pancreatitis may progress to chronic pancreatitis. There are, however, presently no predictive parameters indicating when such a progression does or does not occur.
abstract_id: PUBMED:27172436
Chronic pancreatitis diagnosed after the first attack of acute pancreatitis Introduction: Acute pancreatitis is one of the diseases involving a potential risk of developing chronic pancreatitis.
Material: Of the overall number of 231 individuals followed with a diagnosis of chronic pancreatitis, 56 patients were initially treated for acute pancreatitis (24.2 %). Within an interval of 12-24 months from the first attack of acute pancreatitis, their condition gradually progressed to the picture of chronic pancreatitis. The individuals included in the study abstained from alcohol following the first attack of acute pancreatitis, and no relapse of acute pancreatitis was proven during the period of their monitoring.
Results: The etiology of acute pancreatitis identified alcohol as the predominant cause (55.3 %); biliary etiology was proven in 35.7 %. According to the revised Atlanta classification, severe pancreatitis was established in 69.6 % of the patients; the others met the criteria for the intermediate form, and those with the mild form were not included.
Conclusion: Significant risk factors present among the patients were smoking and obesity; pancreatogenous diabetes mellitus was identified in 18 % and 25.8 % of patients, respectively. 88.1 % of the patients with acute pancreatitis were smokers. The majority of individuals with chronic pancreatitis following an attack of acute pancreatitis were of a productive age, from 25 to 50 years. It is not only acute alcoholic pancreatitis which evolves into chronic pancreatitis; we have also identified this transition for pancreatitis of biliary etiology.
abstract_id: PUBMED:7456561
HLA-antigens in acute and chronic pancreatitis. HLA-A and B antigens were determined in 22 patients with acute and in 65 patients with chronic pancreatitis, as well as in 165 healthy controls. There was a tendency toward over- and underrepresentation of certain antigens, indicating that there may be a genetically linked susceptibility of some patients to acute and chronic pancreatitis, respectively. For further evaluation of these tendencies on a larger scale, a multi-centre study is required.
abstract_id: PUBMED:17592227
Alcohol consumption in patients with acute or chronic pancreatitis. Understanding the relation between alcohol consumption and the development of pancreatitis should help in defining the alcoholic etiology of pancreatitis. Although the association between alcohol consumption and pancreatitis has been recognized for over 100 years, it remains unclear why some alcoholics develop pancreatitis and some do not. Surprisingly little data are available about alcohol amounts, drinking patterns, type of alcohol consumed and other habits such as dietary habits or smoking with respect to pancreatitis, whether in the period preceding an attack of acute pancreatitis or at the time of the diagnosis of chronic pancreatitis. This review summarizes the current knowledge. Epidemiological studies clearly show a connection between alcohol consumption in the population and the development of acute and chronic pancreatitis. At the individual level, the risk of developing either acute or chronic pancreatitis increases along with alcohol consumption. Moreover, the risk of recurrent acute pancreatitis after the first acute pancreatitis episode also seems to be highly dependent on the level of alcohol consumption. Abstaining from alcohol may prevent recurrent acute pancreatitis and reduce pain in chronic pancreatitis. Therefore, all attempts to decrease alcohol consumption after acute pancreatitis, and even after the diagnosis of chronic pancreatitis, should be encouraged. Smoking seems to be a remarkable co-factor together with alcohol in the development of chronic pancreatitis, whereas no hard data are available for this association in acute pancreatitis. Setting limits for accepting alcohol as the etiology cannot currently be based on published data, but rather on 'political' agreement.
abstract_id: PUBMED:483921
Morphology of acute and chronic pancreatitis The pathological anatomy of pancreatitis is reviewed, based on a classification into acute haemorrhagic necrotizing (tryptic), acute serous (interstitial), acute purulent, chronic sclerosing (primary) and chronic relapsing (tryptic) pancreatitis, with reference to the problems of such classifications. For the individual forms, etiologic and pathogenetic aspects are briefly discussed, including questions of autoimmune pancreatitis and the key position of the acinar cell in the development of necrotizing pancreatitis, taking experimental results into consideration. Further sections deal with the special forms as well as the course and complications of pancreatitis, with particular attention to morphologic aspects and current problems.
abstract_id: PUBMED:26576097
Acute-on-chronic liver failure: Pathogenesis, prognostic factors and management. Acute-on-chronic liver failure (ACLF) is increasingly recognized as a complex syndrome that is reversible in many cases. It is characterized by an acute deterioration of liver function against the background of a pre-existing chronic liver disease and is often associated with a high short-term mortality rate. Organ failure (OF) is always involved and plays a key role in determining the course and the outcome of the disease. The definition of ACLF remains controversial due to its overall ambiguity, with several disparate criteria among various associations dedicated to the study of liver diseases. Although the precise pathogenesis needs to be clarified, it appears that an altered host response to injury might be a contributing factor caused by immune dysfunction, ultimately leading to a pro-inflammatory status, and eventually to OF. The PIRO concept (Predisposition, Insult, Response and Organ Failure) has been proposed to better approach the underlying mechanisms. It is accepted that ACLF is a different and specific form of liver failure, where a precipitating event is always involved, even though it cannot always be ascertained. According to several studies, infections and active alcoholism often trigger ACLF; viral hepatitis, gastrointestinal haemorrhage, or drug-induced liver injury can also provoke the syndrome. This review mainly focuses on the pathophysiology and prognostic aspects. We believe these features are essential to further understanding and to providing the rationale for improved disease management strategies.
Answer: The relationship between acute alcoholic pancreatitis and chronic pancreatitis has been a subject of debate. However, a histological study of patients having a first attack of acute alcoholic pancreatitis found that in all patients examined, both acute necrotic and chronic lesions were present, suggesting that acute alcoholic pancreatitis develops in a pancreas already affected by chronic pancreatitis (PUBMED:15128075). This study does not support the hypothesis that chronic pancreatitis in alcoholics derives from acute pancreatitis. Instead, it indicates that the chronic form may precede the acute form.
Other studies have also explored the progression from acute to chronic pancreatitis. Some clinical studies have shown that alcohol-induced pancreatitis may progress to chronic pancreatitis, but there are no predictive parameters indicating when such a progression does or does not occur (PUBMED:10470329). Additionally, a study of individuals initially treated for acute pancreatitis found that within 12-24 months from the first attack, their condition progressed to chronic pancreatitis, indicating that not only acute alcoholic pancreatitis but also pancreatitis of biliary etiology can evolve into chronic pancreatitis (PUBMED:27172436).
In summary, the evidence suggests that acute alcoholic pancreatitis often occurs in a pancreas that has already been altered by chronic pancreatitis, rather than the acute form leading to the chronic form. |
Instruction: Predicting the success of a benzodiazepine discontinuation programme: myths or clinical wisdom?
Abstracts:
abstract_id: PUBMED:17007475
Predicting the success of a benzodiazepine discontinuation programme: myths or clinical wisdom? Background: After successful completion of a benzodiazepine withdrawal programme, it is nevertheless hard to remain abstinent in the long term.
Aim: To determine to what extent the success of a benzodiazepine discontinuation programme for psychiatric patients with chronic benzodiazepine use (≥3 months) can be predicted from the severity of the anxiety, sleep disorders and depressive symptoms, and from the level of benzodiazepine dependence. The predictive values of coping style and personality characteristics were also studied.
Method: A prognostic cohort study with patients of the Radboud University Nijmegen Medical Centre was conducted. Before entering the programme, 92 patients were given a psychological assessment. Anxiety level, benzodiazepine dependence, coping style and personality traits were measured by means of psychological questionnaires. The DSM-IV axis I classification for each patient was known. Patients who had stopped their medication immediately after the discontinuation programme ended (n = 60) were compared with patients who had not been successful in completing the programme (n = 32). Thereafter, patients who were still abstinent at the follow-up about 2 years later (n = 25) were compared with patients who at that time used benzodiazepines (n = 43).
Results: Of all the variables examined, it was only a specific coping style whereby patients expressed their (negative) emotions which was associated with the short- and long-term success of the discontinuation programme. The more patients expressed their negative emotions, the greater the chance of a successful outcome and permanent abstinence. Coping style, however, predicted for only a small proportion of the variance in the success of the discontinuation programme.
Conclusion: The psychological characteristics and the DSM-IV axis I classifications should not exert undue influence on the clinician's decision to advise the patient to stop or continue taking benzodiazepines.
abstract_id: PUBMED:29713452
Challenges of the pharmacological management of benzodiazepine withdrawal, dependence, and discontinuation. Background: Benzodiazepines (BZDs) are among the most prescribed sedative hypnotics and, in parallel with opioids, among the medications most misused and abused by patients. It is estimated that more than 100 million BZD prescriptions were written in the United States in 2009. While medically useful, BZDs are potentially dangerous. The co-occurring abuse of opioids and BZDs, as well as increases in BZD abuse, tolerance, dependence, and short- and long-term side effects, have prompted a worldwide discussion about the challenging aspects of medically managing the discontinuation of BZDs. Abrupt cessation can cause death. The focus of this review is on the challenges of several medications suggested for the management of BZD discontinuation, their efficacy, the risks of abuse, and associated medical complications.
Methods: An electronic search was performed of Medline, Worldwide Science, Directory of Open Access Journals, Embase, Cochrane Library, Google Scholar, PubMed Central, and PubMed from 1990 to 2017. The review includes double-blind, placebo-controlled studies for the most part, open-label pilot studies, and animal studies, in addition to observational research. We expand the search to review articles, naturalistic studies, and to a lesser extent, letters to the editor/case reports. We exclude abstract and poster presentations, books, and book chapters.
Results: The efficacy of these medications is not robust. While some of these medicines are relatively safe to use, others have a narrow therapeutic index, with severe, life-threatening side effects. Randomized studies have been limited, and there is a paucity of comparative research. The review has several limitations. The quality of the documents varies according to whether they are randomized studies, nonrandomized studies, naturalistic studies, pilot studies, letters to the editor, or case reports.
Conclusions: The use of medications for the discontinuation of BZDs seems appropriate. It is a challenge that requires further investigation through randomized clinical trials to maximize efficacy and to minimize additional risks and side effects.
abstract_id: PUBMED:29336611
Interventions to improve benzodiazepine tapering success in the elderly: a systematic review. Background: Long-term benzodiazepine use in the elderly population is a significant public health problem that leads to impaired cognitive functioning, medication dependence and increased risks for adverse drug reactions. The aim of this review was to examine randomized controlled trials (RCTs) on the efficacy of different methods for tapering and discontinuing benzodiazepines.
Method: We used four databases (Ovid, PubMed, Academic Search Complete, Web of Science) to retrieve randomized controlled trials published in peer-reviewed journals that explored different methods for tapering benzodiazepine use in a primarily geriatric population.
Results: Eleven papers met the inclusion criteria. Methods to assist in benzodiazepine tapering included patient education, cognitive behavioural therapy (CBT), and pharmaceutical adjuvants (SSRIs, melatonin, progesterone). Patient education was consistently effective in increasing benzodiazepine discontinuation success, while CBT had mixed but promising results. Evidence for the use of medications to improve tapering success was inconclusive.
Conclusions: Patient education is a successful, time- and cost-effective intervention that can significantly help with benzodiazepine discontinuation success. CBT may also be an effective approach. However, cost can be an issue since public healthcare coverage in Canada does not cover psychotherapy. More research is needed in looking at pharmaceutical adjuvants and their role in assisting with benzodiazepine discontinuation.
abstract_id: PUBMED:36498061
Attitudes and Difficulties Associated with Benzodiazepine Discontinuation. Long-term use of benzodiazepine receptor agonists (BZDs) may depend on clinicians' BZD discontinuation strategies. We aimed to explore differences in strategies and difficulties with BZD discontinuation between psychiatrists and non-psychiatrists and to identify factors related to difficulties with BZD discontinuation. Japanese physicians affiliated with the Japan Primary Care Association, All Japan Hospital Association, and Japanese Association of Neuro-Psychiatric Clinics were surveyed on the following items: age group, specialty (psychiatric or otherwise), preferred time to start BZD reduction after improvement in symptoms, methods used to discontinue, difficulties regarding BZD discontinuation, and reasons for the difficulties. We obtained 962 responses from physicians (390 from non-psychiatrists and 572 from psychiatrists), of which 94.0% reported difficulty discontinuing BZDs. Non-psychiatrists had more difficulty with BZD discontinuation strategies, while psychiatrists had more difficulty with symptom recurrence/relapse and withdrawal symptoms. Psychiatrists used more candidate strategies in BZD reduction than non-psychiatrists but initiated BZD discontinuation after symptom improvement. Logistic regression analysis showed that psychosocial therapy was associated with less difficulty in BZD discontinuation (odds ratio, 0.438; 95% confidence interval, 0.204-0.942; p = 0.035). Educating physicians about psychosocial therapy may alleviate physicians' difficulty in discontinuing BZDs and reduce long-term BZD prescriptions.
abstract_id: PUBMED:37653209
Examining Adult Patients' Success with Discontinuing Long-term Benzodiazepine Use: a Qualitative Study. Background: Little is known about patients' experiences with benzodiazepine (BZD) discontinuation, which is thought to be challenging given the physiological and psychological dependence and accompanying potential for significant withdrawal symptoms. The marked decline in BZD prescribing over the past decade in the US Department of Veterans Affairs healthcare system presents an important opportunity to examine the experience of BZD discontinuation among long-term users.
Objective: Examine the experience of BZD discontinuation among individuals prescribed long-term BZD treatment to identify factors that contributed to successful discontinuation.
Design: Descriptive qualitative analysis of semi-structured interviews conducted between April and December of 2020.
Participants: A total of 21 Veterans who had been prescribed long-term BZD pharmacotherapy (i.e., > 120 days of exposure in a 12-month period) and had their BZD discontinued.
Approach: We conducted semi-structured interviews with Veteran participants to learn about their BZD use and the process of discontinuation, with interviews recorded and transcribed verbatim. Data were deductively and inductively coded, and the coded text was entered into a matrix to identify factors that contributed to successful BZD discontinuation.
Key Results: The mean age of interview participants was 63.0 years (standard deviation 3.9); 94.2% were male and 76.2% were white. Of 21 participants, only 1 had resumed BZD treatment (prescribed by a non-VA clinician). Three main factors influenced success with discontinuation: (1) participants' attitudes toward BZDs (e.g., risks of long-term use, perceived lack of efficacy, potential for dependence); (2) limited withdrawal symptoms; and (3) effective alternatives, either from their clinician (e.g., medication, psychotherapy) or identified by participants.
Conclusions: BZD discontinuation after long-term use is relatively well tolerated, and participants appreciated reducing their medication exposure, particularly to one associated with physical dependence. These findings may help reduce both patient and clinician anxiety related to BZD discontinuation.
abstract_id: PUBMED:35196378
Discontinuation of chronic benzodiazepine use in primary care: a nonrandomized intervention. Background: Chronic benzodiazepine use is a challenge in primary care practice. Protocols to support safe discontinuation are still needed, especially in countries with high utilization rates.
Objectives: To evaluate the feasibility, effectiveness, and safety of a benzodiazepine discontinuation protocol in primary care setting.
Methods: Nonrandomized, single-arm interventional study at primary care units. Family physicians (FPs) recruited patients (18-85 years old) with benzodiazepine dependence and chronic daily use ≥3 months. Patients with daily dosages ≥30 mg diazepam-equivalent, those taking zolpidem, and those with a history of other substance abuse or major psychiatric disease were excluded. After the switch to diazepam, the dosage was gradually tapered according to a standardized protocol. The primary endpoint was the percentage of patients who had stopped benzodiazepine use at the last visit of the intervention. Dosage reduction, withdrawal symptoms, and patients' and FPs' satisfaction with the protocol were evaluated.
Results: From 66 enrolled patients (74% female; 66.7% aged >64 years; median time of benzodiazepine use of 120 months), 2 withdrew for medical reasons and 3 presented protocol deviations. Overall, 59.4% of participants successfully stopped benzodiazepine use (60.7% when excluding protocol deviations). Men had a higher probability of success (relative risk = 0.51, P = 0.001). A total of 31 patients reported at least 1 withdrawal symptom, most frequently insomnia and anxiety. Most participating FPs considered the clinical protocol useful and feasible in daily practice. Among patients completing the protocol, 77% were satisfied. Of the patients who reduced their dosage, 85% remained without benzodiazepines after 12 months.
Conclusion: The discontinuation protocol with standardized dosage reduction was feasible at primary care and showed long-term effectiveness.
abstract_id: PUBMED:26427515
The professionalization and training of psychologists: The place of clinical wisdom. Objective: The current study examines how clinical wisdom develops and how it both is and can be influenced by professional training processes. In this way, the project is studying the intersection of developmental and systemic processes related to clinical wisdom.
Method: Researchers analyzed the interviews of psychologists practicing in the USA and Canada who were nominated for their clinical wisdom by their peers. These interviews explored how graduate training and professionalization were thought to influence the development of clinical wisdom and were subjected to an adapted grounded theory analysis.
Results: The findings described both professional and personal disincentives toward developing wisdom, including the dangers of isolation. Therapists reported concerns about educational systems that rewarded quick answers instead of thoughtful questioning in processes of admittance, training, and accreditation. Findings emphasized the importance of teaching multiple psychotherapy orientations, critical self- and professional-reflection skills, and openly supporting graduate students' curiosities and continued professional engagement.
Conclusions: Recommended principles for training are put forward for the development and evaluation of psychotherapy training programs that aim to foster clinical wisdom. These principles complement training models focused upon clinical competence by helping trainees to develop a foundation for clinical wisdom.
abstract_id: PUBMED:34764912
Gender Effect on Views of Wisdom and Wisdom Levels. Gender differences in wisdom are an important theme in mythology, philosophy, psychology, and daily life. Based on the existing psychological research, both consensus and dispute exist between the two genders in views of wisdom and in levels of wisdom. In terms of views of wisdom, the way men and women view wisdom is highly similar, and from the perspectives of both ordinary people and professional researchers of wisdom psychology, wise men and women are extremely similar. Regarding wisdom level, research has revealed that, although significant gender effects exist in overall wisdom, the reflective and affective dimensions, and interpersonal conflict coping styles, the effect sizes were small, indicating that these gender differences were not pronounced. It would be desirable for future research to combine multiple wisdom measurements, strengthen research on the psychological gender effect of wisdom, and focus on the moderating role of age in the relationship between wisdom and gender.
abstract_id: PUBMED:36114823
Factors predicting successful discontinuation of acute kidney replacement therapy: A retrospective cohort study. Background: Treatment for severe acute kidney injury (AKI) typically involves the use of acute kidney replacement therapy (AKRT) to prevent or reverse complications.
Methodology: We aimed to determine the prevalence of successful discontinuation of AKRT and its predictive factors. A retrospective cohort study was performed with 316 patients hospitalized at a public Brazilian university hospital between January 2011 and June 2020.
Results: Success and hospital discharge were achieved for most patients (85% and 74%, respectively). Multivariable logistic regression analysis showed that C-reactive protein (CRP), urine output, and need for mechanical ventilation at the time of interruption were variables associated with discontinuation success (OR 0.969, CI 0.918-0.998, p = 0.031; OR 1.008, CI 1.001-1.012, p = 0.041; and OR 0.919, CI 0.901-0.991, p = 0.030, respectively), while the absence of comorbidities such as chronic kidney disease (OR 0.234, CI 0.08-0.683, p = 0.008), cardiovascular disease (OR 0.353, CI 0.134-0.929, p = 0.035) and hypertension (OR 0.278, CI 0.003-0.882, p = 0.009), as well as pH values at the time of AKRT indication (OR 1.273, CI 1.003-1.882, p = 0.041), mechanical ventilation at the time of interruption (OR 0.19, CI 0.19-0.954, p = 0.038) and successful discontinuation (OR 8.657, CI 3.135-23.906, p < 0.001), were identified as variables associated with hospital discharge.
Conclusion: These results show that clinical conditions such as comorbidities, urine output, and mechanical ventilation, and laboratory variables such as pH and CRP are factors associated with hospital discharge and AKRT discontinuation success, requiring larger studies for confirmation.
abstract_id: PUBMED:35499041
Experiences with benzodiazepine use, tapering, and discontinuation: an Internet survey. Background: Over 92 million prescriptions for benzodiazepines are dispensed in the United States annually, yet little is known about the experiences of those taking and discontinuing them.
Objective: The aim of this study is to assess the experiences of those taking, tapering, or having discontinued benzodiazepines.
Methods: An online survey (n = 1207) elicited information about benzodiazepine use, including long-term use, tapering, discontinuation, and withdrawal symptoms.
Results: Symptoms associated with benzodiazepine use, tapering, and discontinuation were numerous and ranged from symptoms such as anxiety, insomnia, and nervousness to digestive problems, irregular heart rhythms, uncontrollable anger, photosensitivity, balance problems, and others. When asked how benzodiazepine symptoms affected their lives, 82.9% reported work problems, 86.3% had problems with social interactions and friendships, and 88.8% had problems with fun, recreation, and hobbies. Suicidal thoughts or attempted suicide was reported by 54.4%, and 46.8% said benzodiazepines caused lost employment. Most of the respondents for whom benzodiazepines were prescribed (76.2%) stated they had not been informed that benzodiazepines were indicated for short-term use only and that discontinuation might be difficult. About a third (31.5%) reported food allergies and/or seasonal allergies that occurred only after benzodiazepine use.
Conclusion: The trajectory of those who taper or discontinue benzodiazepines is unpredictable, and many patients experience a range of protracted and severe symptoms, even years after benzodiazepines were completely discontinued. Greater awareness is needed for both prescribers and patients about the potential for a difficult withdrawal from benzodiazepines.
Answer: The success of a benzodiazepine discontinuation program can be challenging to predict, and while there may be myths surrounding this process, there is also some clinical wisdom that can be gleaned from research. A study conducted with patients from the Radboud University Nijmegen Medical Centre found that of various psychological characteristics examined, a specific coping style where patients expressed their negative emotions was associated with both short- and long-term success in discontinuing benzodiazepine use. The more patients expressed their negative emotions, the greater the chance of a successful outcome and permanent abstinence. However, this coping style only predicted a small proportion of the variance in the success of the discontinuation program, suggesting that other factors are also at play (PUBMED:17007475).
Other research has highlighted the challenges of medically managing benzodiazepine discontinuation, noting that while some medications can aid in the process, their efficacy is not robust, and some carry the risk of severe side effects. The need for further investigation through randomized clinical trials to maximize efficacy and minimize risks is emphasized (PUBMED:29713452).
Interventions such as patient education and cognitive behavioural therapy (CBT) have been found to be effective in aiding benzodiazepine tapering in the elderly, with patient education being a particularly successful, time- and cost-effective intervention (PUBMED:29336611). Additionally, attitudes towards benzodiazepines, limited withdrawal symptoms, and effective alternatives have been identified as factors contributing to successful discontinuation (PUBMED:37653209).
In primary care settings, a standardized protocol for benzodiazepine discontinuation has been shown to be feasible and effective, with a significant proportion of patients successfully stopping benzodiazepine use (PUBMED:35196378). Furthermore, educating physicians about psychosocial therapy may alleviate difficulties in discontinuing benzodiazepines and reduce long-term prescriptions (PUBMED:36498061).
In summary, while predicting the success of a benzodiazepine discontinuation program is not straightforward, certain factors such as coping styles, patient education, CBT, attitudes towards medication, and the use of standardized protocols can contribute to successful outcomes. These insights reflect a combination of clinical wisdom and evidence-based practices that can guide healthcare providers in supporting patients through the discontinuation process. |
Instruction: Are cervical physical outcome measures influenced by the presence of symptomatology?
Abstracts:
abstract_id: PUBMED:12426909
Are cervical physical outcome measures influenced by the presence of symptomatology? Background And Purpose: Outcome measures must be repeatable over time to judge changes as a result of treatment. It is unknown whether the presence of neck pain can affect measurement reliability over a time period when some change could be expected as a result of an intervention. The present study investigated the reliability of two measures, active cervical range of movement (AROM) and pressure pain thresholds (PPTs), in symptomatic and asymptomatic subjects.
Method: A repeated-measures study design with one week between testing sessions was used. Nineteen healthy asymptomatic subjects and 19 subjects with chronic neck pain participated in the study. The neck movements measured were: flexion, extension, right and left lateral flexion, and axial rotation. PPTs were measured over six bilateral sites, both local and remote to the cervical spine.
Results: The between-week intra-class correlation coefficients (ICC(2,1)) for AROM ranged from 0.67 to 0.93 (asymptomatic group) and from 0.64 to 0.88 (chronic neck pain group). Standard error of measurement (SEM) was similar in both groups, from 2.66 degrees to 5.59 degrees (asymptomatic group) and from 2.36 degrees to 6.72 degrees (chronic neck pain group). ICC(2,1) values for PPTs ranged from 0.70 to 0.91 (asymptomatic group) and from 0.69 to 0.92 (chronic neck pain group). SEM ranged from 11.14 to 87.71 kPa (asymptomatic group) and from 14.25 to 102.95 kPa (chronic neck pain group).
Conclusions: The findings of moderate to very high between-week reliability of measures of AROM and PPTs in both asymptomatic and chronic neck pain subjects suggest that the presence of symptomatology does not adversely affect the reliability of these measures. The results support the use of these measures for monitoring change in chronic neck pain conditions.
abstract_id: PUBMED:31607075
Outcome Measures and Variables Affecting Prognosis of Cervical Spondylotic Myelopathy: WFNS Spine Committee Recommendations. This study was conducted to systematically review the literature to determine the most reliable outcome measures and the important clinical and radiological variables affecting prognosis in cervical spondylotic myelopathy patients. A literature search was performed for articles published during the last 10 years. As functional outcome measures, we recommend using the modified Japanese Orthopaedic Association scale, Nurick's grade, and the Myelopathy Disability Index. Three clinical variables that affect the outcomes are age, duration of symptoms, and severity of the myelopathy. Examination findings require more detailed study to validate their effect on the outcomes. The predictive variables affecting the outcomes are hand atrophy, leg spasticity, clonus, and Babinski's sign. Among the radiological variables, the curvature of the cervical spine is the most important predictor of prognosis. Patients with instability are expected to have a poor surgical outcome. The spinal cord compression ratio is a critical factor for prognosis. High signal intensity on T2-weighted magnetic resonance images is a negative predictor of prognosis. The most important predictors of outcome are preoperative severity and duration of symptoms. T2 hyperintensity and the cord compression ratio can also predict outcomes. New radiological tests may give promising results in the future.
abstract_id: PUBMED:35690639
Effect of diabetes on patient-reported outcome measures at one year after laminoplasty for cervical spondylotic myelopathy. Although patients with diabetes reportedly have more peripheral neuropathy, the impact of diabetes on postoperative recovery in pain and patient-reported outcome measures (PROMs) after laminoplasty for cervical spondylotic myelopathy (CSM) is not well characterized. The authors aimed to elucidate the effects of diabetes on neck/arm/hand/leg/foot pain and PROMs after laminoplasty for CSM. The authors retrospectively reviewed 339 patients (82 with diabetes and 257 without) who underwent laminoplasty between C3 and C7 in 11 hospitals during April 2017-October 2019. Preoperative Numerical Rating Scale (NRS) scores in all five areas and PROMs (the Short Form-12 Mental Component Summary, EuroQol 5-dimension, Neck Disability Index, and the Core Outcome Measures Index-Neck) were comparable between the groups. The between-group differences were also not significant in NRS scores and PROMs one year after surgery. The change score for NRS hand pain was larger in the diabetic group than in the nondiabetic group. The diabetic group showed a worse preoperative score but greater improvement in the Short Form-12 Physical Component Summary than the nondiabetic group, reaching a comparable score one year after surgery. These data indicate that the preoperative presence of diabetes, at least, did not adversely affect pain or PROMs one year after laminoplasty for CSM.
abstract_id: PUBMED:24252123
Outcome measures monitoring physical function in children with haemophilia: a systematic review. Our objective was to provide a synthesis of measurement properties for performance-based outcome measures used to evaluate physical function in children with haemophilia. A systematic review of articles published in English using Medline, PEDro, Cinahl and The Cochrane Library electronic databases was conducted. Studies were included if a performance-based method, clinical evaluation or measurement tool was used to record an aspect of physical function in patients with haemophilia aged ≤ 18 years. Recordings of self-perceived or patient-reported physical performance, abstracts, unpublished reports, case series reports and studies where the outcome measure was not documented or cross-referenced were excluded. Descriptions of outcome measures, patient characteristics, and measurement properties for construct validity, internal consistency, repeatability, responsiveness and feasibility were extracted. Data synthesis of 41 studies evaluating 14 measures is reported. None of the outcome measures demonstrated the requirements for all the measurement properties. Data on validity and test-retest repeatability were most lacking, together with studies of sufficient size. Measurement of walking and muscle strength demonstrated good repeatability and discriminative properties; however, correlation with other measures of musculoskeletal impairment requires investigation. The Haemophilia Joint Health Score demonstrated acceptable construct validity, internal consistency and repeatability, but its ability to discriminate changes in physical function is still to be determined. Rigorous evaluation of the measurement properties of performance-based outcome measures used to monitor the physical function of children with haemophilia in larger collaborative studies is required.
abstract_id: PUBMED:37094774
Evolution of patient-reported outcome measures, 1, 2, and 5 years after surgery for subaxial cervical spine fractures, a nation-wide registry study. Background Context: A longer duration of patient follow up arguably provides more reliable data on the long-term effects of a treatment. However, the collection of long-term follow up data is resource demanding and often complicated by missing data and patients being lost to follow up. In surgical fixation for cervical spine fractures, data are lacking on the evolution of patient reported outcome measures (PROMs) beyond 1-year of follow up. We hypothesized that the PROMs would remain stable beyond the 1-year postoperative follow up mark, regardless of the surgical approach.
Purpose: To assess the trends in the evolution of patient-reported outcome measures (PROMs) at 1, 2, and 5 years following surgery in patients with traumatic cervical spine injuries.
Study Design: Nation-wide observational study on prospectively collected data.
Patient Sample: Individuals treated for subaxial cervical spine fractures with anterior, posterior, or combined anteroposterior approaches, between 2006 and 2016 were identified in the Swedish Spine Registry (Swespine).
Outcome Measures: PROMs consisting of the EQ-5D-3L index and the Neck Disability Index (NDI) were considered.
Methods: PROMs data were available for 292 patients at 1 and 2 years postoperatively. Five-year PROMs data were available for 142 of these patients. A simultaneous within-group (longitudinal) and between-group (approach-dependent) analysis was performed using mixed ANOVA. The predictive ability of 1-year PROMs was subsequently assessed using linear regression.
Results: Mixed ANOVA revealed that PROMs remained stable from 1 to 2 years as well as from 2 to 5 years postoperatively and were not significantly affected by the surgical approach (p>0.05). A strong correlation was found between 1-year and both 2- and 5-year PROMs (R>0.7; p<0.001). Linear regression confirmed the accuracy of 1-year PROMs in predicting both 2- and 5-year PROMs (p<0.001).
Conclusion: PROMs remained stable beyond 1 year of follow-up in patients treated with anterior, posterior, or combined anteroposterior surgeries for subaxial cervical spine fractures. The 1-year PROMs were strong predictors of PROMs measured at 2 and 5 years. The 1-year PROMs were sufficient to assess the outcomes of subaxial cervical fixation irrespective of the surgical approach.
abstract_id: PUBMED:37782344
Patient-reported outcome measures in physical therapy practice for neck pain: an overview of reviews. Background: Understanding which patient-reported outcome measures are being collected and utilized in clinical practice and research for patients with neck pain will help to inform recommendations for a core set of measures that provide value to patients and clinicians during diagnosis, clinical decision-making, goal setting and evaluation of responsiveness to treatment. Therefore, the aim of this study was to conduct a review of systematic reviews using a qualitative synthesis on the use of patient-reported outcome measures (PROMs) for patients presenting with neck pain to physical therapy.
Methods: An electronic search of systematic reviews and guideline publications was performed using MEDLINE (OVID), Embase (Elsevier), CINAHL Complete (EBSCOhost), and Web of Science (Clarivate) databases to identify reviews that evaluated physical therapy interventions or interventions commonly performed by a physical therapist for individuals with neck pain and included at least one patient-reported outcome measure. The frequency and variability with which the outcome measures were reported among the studies in each review, and the constructs they measured, were evaluated. The potential for a core set of outcome measures was assessed. Risk of bias and quality assessment was performed using A Measurement Tool to Assess systematic Reviews 2.
Results: Of the initial 7,003 articles, a total of 37 studies were included in the final review. Thirty-one PROMs were represented within the 37 reviews, with eleven patient-reported outcome measures appearing in three or more reviews. The eleven PROMs assessed the constructs of disability, pain intensity, psychosocial factors and quality of life. The greatest variability was found amongst individual measures assessing psychosocial factors. Assessment of psychosocial factors was the least represented construct in the included studies. Overall, the most frequently utilized patient-reported outcome measures were the Neck Disability Index, Visual Analog Scale, and Numeric Pain Rating Scale. The most frequently used measures evaluating the constructs of disability, pain intensity, quality of life and psychosocial functioning were the Neck Disability Index, Visual Analog Scale, Short-Form-36 health survey and Fear Avoidance Belief Questionnaire, respectively. Risk of bias and quality assessment confidence levels were critically low (2 studies), low (12 studies), moderate (8 studies), and high (15 studies).
Conclusion: This study identified a core set of patient-reported outcome measures that represented the constructs of disability, pain intensity and quality of life. This review recommends the collection and use of the Neck Disability Index and the Numeric Pain Rating Scale or Visual Analog Scale. A recommendation for a QoL measure needs to be considered in the context of available resources and administrative burden. Further research is needed to confidently recommend QoL and psychosocial measures for patients presenting with neck pain. Other measures that were not included in this review but should be further evaluated for patients with neck pain are the Patient Reported Outcomes Measurement Information System (PROMIS) Physical Function, PROMIS Pain Interference and the Optimal Screening for Prediction of Referral and Outcome Yellow Flag (OSPRO-YF) tool.
abstract_id: PUBMED:37889328
Validating the preoperative Japanese Core Outcome Measures Index for the Neck and comparing quality of life in patients with cervical spondylotic myelopathy and ossification of the posterior longitudinal ligament by the patient-reported outcome measures. Purpose: This cross-sectional study serves two main purposes. Firstly, it aims to validate the preoperative Japanese Core Outcome Measures Index for the Neck (COMI-Neck) in patients with cervical spondylotic myelopathy (CSM) and ossification of the posterior longitudinal ligament (OPLL). Secondly, it seeks to elucidate differences in preoperative quality of life (QOL) between these two cervical pathologies using patient-reported outcome measures (PROMs).
Methods: A total of 103 preoperative patients (86 with CSM and 17 with OPLL) scheduled for cervical spine surgery were included in the study. Validated PROMs, including the Japanese COMI-Neck, Neck Disability Index (NDI), EuroQol-5 Dimension-3 level (EQ-5D-3L), and SF-12v2, were used to assess QOL. Baseline demographic and clinical data were collected, and statistical analyses were performed to compare the PROMs between CSM and OPLL groups.
Results: The Japanese COMI-Neck demonstrated good construct validity, with positive correlations with NDI and negative correlations with EQ-5D-3L and SF-12v2. Comparison of preoperative PROMs between CSM and OPLL groups revealed differences in age, body mass index, and EQ-5D-3L scores. The CSM group had higher NDI scores for concentration and lower EQ-5D-3L scores for self-care compared to the OPLL group.
Conclusions: This study validated the preoperative Japanese COMI-Neck in CSM and OPLL patients and identified specific QOL issues associated with each condition. The findings highlight the importance of considering disease-specific QOL and tailoring treatment plans accordingly. Further research should include postoperative assessments and a more diverse population to enhance generalizability.
abstract_id: PUBMED:32502657
Criteria for success after surgery for cervical radiculopathy-estimates for a substantial amount of improvement in core outcome measures. Background Context: Defining clinically meaningful success criteria from patient-reported outcome measures (PROMs) is crucial for clinical audits, research and decision-making.
Purpose: We aimed to define criteria for a successful outcome 3 and 12 months after surgery for cervical degenerative radiculopathy on recommended PROMs.
Study Design: Prospective cohort study with 12 months follow-up.
Patient Sample: Patients operated at one or two levels for cervical radiculopathy included in the Norwegian Registry for Spine Surgery (NORspine) from 2011 to 2016.
Outcome Measures: Neck disability index (NDI), Numeric Rating Scale for neck pain (NRS-NP) and arm pain (NRS-AP), health-related quality-of-life EuroQol 3L (EQ-5D), general health status (EQ-VAS).
Methods: We included 2,868 consecutive patients operated at one or two levels for cervical degenerative radiculopathy and registered in the Norwegian Registry for Spine Surgery (NORspine). The external criterion used to determine accuracy and optimal cut-off values for success in the PROMs was the global perceived effect scale. Success was defined as "much better" or "completely recovered." Cut-off values were assessed by analyzing the areas under the receiver operating characteristic curves for follow-up scores, mean change scores, and percentage change scores.
Results: All PROMs showed high accuracy in defining success and nonsuccess, and only minor differences were found between 3- and 12-month scores. At 12 months, the areas under the receiver operating characteristic curves were 0.86 to 0.91 for follow-up scores, 0.74 to 0.87 for change scores, and 0.74 to 0.91 for percentage change scores. Percentage scores of NDI and NRS-AP showed the best accuracy. The optimal cut-off values for each PROM showed considerable overlap between patients operated for disc herniation and those operated for spondylotic foraminal stenosis.
Conclusions: All PROMs, especially NDI and NRS-AP, showed good to excellent discriminative ability in distinguishing between a successful and nonsuccessful outcome after surgery for cervical radiculopathy. Percentage change scores are recommended for use in research and clinical practice.
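For readers unfamiliar with how such cut-off values are derived, the sketch below shows the generic procedure (entirely synthetic data and our own variable names, not the NORspine dataset): dichotomize the global perceived effect anchor, compute the ROC curve for a candidate PROM score, and take the threshold maximizing Youden's J.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    success = rng.integers(0, 2, 300)   # anchor: 1 = "much better" or "completely recovered"
    # Hypothetical percentage-change NDI scores, higher among successes
    pct_change = np.where(success == 1,
                          rng.normal(55, 20, 300),
                          rng.normal(20, 20, 300))

    auc = roc_auc_score(success, pct_change)
    fpr, tpr, thresholds = roc_curve(success, pct_change)
    best = np.argmax(tpr - fpr)         # Youden's J = sensitivity + specificity - 1
    print(f"AUC = {auc:.2f}; optimal cut-off = {thresholds[best]:.1f}% change")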
abstract_id: PUBMED:30817731
PROMIS Correlates With Legacy Outcome Measures in Patients With Neck Pain and Improves Upon NDI When Assessing Disability in Cervical Deformity. Study Design: Retrospective cohort study.
Objective: To evaluate the ability of patient reported outcome measurement information system (PROMIS) assessments to capture disability related to cervical sagittal alignment and secondarily to compare these findings to legacy outcome measures.
Summary Of Background Data: PROMIS is a validated patient-reported outcome metric that is increasing in popularity due to its speed of administration relative to legacy metrics. The ability of PROMIS to capture disability from sagittal alignment and baseline health status in patients with neck pain has not been investigated.
Methods: Patients presenting with a chief complaint of neck pain from December 2016 to July 2017 were included. Demographics and comorbidities were retrospectively collected. All patients prospectively completed the neck disability index (NDI), EQ-5D, visual analog scale (VAS) neck, VAS arm, PROMIS physical function, PROMIS pain intensity, and PROMIS pain interference metrics. Cervical sagittal alignment parameters were measured on standing X-rays. The correlations between outcome measures, health status indexes, psychiatric diagnoses, and sagittal alignment were analyzed.
Results: Two hundred twenty-six patients were included. The sample was 58.4% female with a mean age of 55.1 years. In patients with neck pain, PROMIS physical function correlated strongly with the NDI (r = -0.763, P < 0.01), EQ-5D (r = 0.616, P < 0.01), VAS neck pain (r = -0.466, P < 0.01), and VAS arm pain (r = -0.388, P < 0.01). One hundred seventy-seven patients (69.96%) were included in the radiographic analysis; 20.3% of the radiographic cohort had cervical deformity, and in this group, less cervical lordosis correlated with PROMIS pain intensity and EQ-5D but not NDI. In patients without cervical deformity, no outcome metric was found to correlate significantly with cervical alignment parameters.
Conclusion: PROMIS domains correlated strongly with legacy outcome metrics. For the whole cohort, sagittal alignment was not correlated with outcomes. In patients with sagittal cervical deformity, worsening alignment correlated with PROMIS pain intensity and EQ-5D but not NDI.
Level Of Evidence: 3.
abstract_id: PUBMED:37445446
Patient-Reported Outcome Measures following Coblation Nucleoplasty for Cervical Discogenic Dizziness. Background: There is little research in the literature comparing the efficacy of coblation nucleoplasty with conservative treatment in the treatment of cervical discogenic dizziness and reporting the achieved rate of minimal clinically important differences (MCID) and patient acceptable symptom state (PASS) after surgery. This retrospective study aims to explore the patient-reported outcome measures (PROM) following coblation nucleoplasty for cervical discogenic dizziness and to compare the therapeutic effect of coblation nucleoplasty with prolonged conservative treatment.
Methods: Sixty-one patients with cervical discogenic dizziness and a positive intradiscal diagnostic test eligible for single-level cervical coblation nucleoplasty were included in the study. Among these 61 patients, 40 underwent cervical coblation nucleoplasty, while the remaining 21 refused surgery and received continued conservative treatment. The primary PROMs were the intensity and frequency of dizziness, and the secondary PROMs were the neck disability index (NDI) and the visual analog scale (VAS) for neck pain (VAS-neck), assessed over a 12-month follow-up period. Moreover, the achieved rates of MCID and PASS in both groups were assessed 12 months after surgery.
Results: Dizziness intensity, dizziness frequency, VAS-neck score, and NDI score were significantly improved from baseline at all follow-up time points in both treatment groups, except that dizziness frequency showed no significant improvement in the conservative treatment group at 6 and 12 months. However, at each follow-up time point, the above indexes were lower in the surgery group than in the conservative treatment group. In addition, the achieved rates for PASS and MCID in all indexes in the surgery group were significantly higher than those in the conservative treatment group at 12 months after surgery.
Conclusions: Cervical coblation nucleoplasty significantly improved the intensity and frequency of dizziness, neck pain, and NDI in patients with cervical discogenic dizziness, and the results were superior to those of prolonged conservative treatment. Meanwhile, cervical coblation nucleoplasty is a good choice for patients with chronic neck pain and refractory cervical discogenic dizziness who do not have indications for open surgery and have not responded well to conservative treatment.
Answer: The presence of symptomatology does not adversely affect the reliability of cervical physical outcome measures. A study investigated the reliability of active cervical range of movement (AROM) and pressure pain thresholds (PPTs) in both symptomatic and asymptomatic subjects and found moderate to very high between-week reliability of measures of AROM and PPTs in both groups. This suggests that symptomatology does not negatively impact the reliability of these measures for monitoring change in chronic neck pain conditions (PUBMED:12426909). |
Instruction: Is peritoneal reflection the best anatomical repair landmark in experimental colorectal surgery on rats?
Abstracts:
abstract_id: PUBMED:20011835
Is peritoneal reflection the best anatomical repair landmark in experimental colorectal surgery on rats? Purpose: To validate Peyer's patch as an anatomical repair landmark for colorectal surgery in rats and to measure the collagen content in segments of the colon containing or not containing Peyer's patch.
Methods: The distance between Peyer's patch and the peritoneal reflection was measured in forty-five Wistar rats. The colon and rectum were resected for quantification of collagen content by means of computer-assisted image analysis in regions of the colon with and without Peyer's patch.
Results: There was great variation in the distance between Peyer's patch and the peritoneal reflection when the male and female rats were considered as a single group (p=0.04). Comparison between the genders showed that the distance between the patch and the peritoneal reflection was greater in female than in male rats (p=0.001). The colonic segment containing Peyer's patch was observed to have lower tissue collagen content than the segment in which this structure was not present (p=0.02).
Conclusion: Peyer's patch can be indicated as an anatomical repair landmark, and there is a need to study the healing of colorectal anastomoses in rats based on differing quantities of tissue collagen existing in the colonic wall with or without this structure.
abstract_id: PUBMED:33969466
The Landmark Series: Surgical Treatment of Colorectal Cancer Peritoneal Metastases. Background: Peritoneal metastases (PM) are a form of metastatic spread affecting approximately 5-15% of colon cancer patients. The attitude towards management of peritoneal metastases has evolved from therapeutic nihilism towards a more comprehensive and multidisciplinary approach, in large part due to the development of cytoreductive surgery (CRS), usually coupled with heated intraperitoneal chemotherapy (HIPEC), along with the constant improvement of systemic chemotherapy of colorectal cancer. Several landmark studies, including 5 randomized controlled trials have marked the development and refinement of surgical approaches to treating colorectal cancer peritoneal metastases.
Methods: This review article focuses on these landmark studies and their influence in 4 key areas: the evidence supporting surgical resection of peritoneal metastases, the identification and standardization of important prognostic variables influencing patient selection, the role of surgery and intraperitoneal chemotherapy in prevention of colorectal PM and the role of intraperitoneal chemotherapy as an adjuvant to surgical resection.
Results: These landmark studies indicate that surgical resection of colorectal PM should be considered as a therapeutic option in appropriately selected patients and when adequate surgical expertise is available. Standardized prognostic variables including the Peritoneal Cancer Index and the Completeness of Cytoreduction Score should be used for evaluating both indications and outcomes.
Conclusions: Current evidence does not support the use of second-look surgery with oxaliplatin HIPEC or prophylactic oxaliplatin HIPEC in patients with high-risk colon cancer, nor the use of oxaliplatin HIPEC with CRS of colorectal PM.
abstract_id: PUBMED:29907236
Peritoneal lavage with povidone-iodine solution in colorectal cancer-induced rats. Background: Although peritoneal lavage with povidone-iodine (PVPI) is frequently performed after surgery on the gastrointestinal tract, the effects of PVPI on the intestinal epithelial barrier are unknown. The purpose of this study was to investigate the effects of abdominal irrigation with PVPI on the intestinal epithelial barrier in a colorectal cancer (CRC)-induced rat model.
Materials And Methods: The CRC model was induced in rats with azoxymethane and dextran sodium sulfate. Next, a total of 24 male CRC-induced rats were randomly divided into three groups (n = 8): (1) a sham-operated group, (2) an NS group (peritoneal lavage with 0.9% NaCl), and (3) a PVPI group (peritoneal lavage with 0.45%-0.55% PVPI). The mean arterial pressure was continuously monitored throughout the experiment. The levels of plasma endotoxin and D-lactate, blood gases, and protein concentration were measured. The ultrastructural changes of the epithelial tight junctions were observed by transmission electron microscopy.
Results: The mean arterial pressure after peritoneal lavage was lower in the PVPI group than in the NS group. The protein concentration and the levels of endotoxin and D-lactate were higher in the PVPI group than in the NS group. In addition, PVPI treatment resulted in markedly more severe metabolic acidosis and intestinal mucosal injury than NS treatment.
Conclusions: Peritoneal lavage with PVPI dramatically compromises the integrity of the intestinal mucosa barrier and causes endotoxin shock in CRC rats. It is unsafe for clinical applications to include peritoneal lavage with PVPI in colorectal operations.
abstract_id: PUBMED:28541718
Colorectal Cancer Cells Adhere to Traumatized Peritoneal Tissue in Clusters, An Experimental Study. Purpose/Aim: Colorectal malignancy is one of the most common forms of cancer. The finding of free intraperitoneal colorectal cancer cells during surgery has been shown to be associated with poor outcome. The aim of this study was to develop an experimental model designed to investigate adhesion of colorectal cancer cells to the peritoneal surface.
Materials And Methods: Two human experimental models were developed, the first using cultured mesothelial cells and the second consisting of an ex vivo model of peritoneal tissue. Both models were subjected to standardized trauma, following which labeled colorectal cancer cells (Colo205) were introduced. Adhesion of tumor cells was monitored using microscopy and detection of fluorochromes.
Results: The mesothelial cell layers and peritoneal membranes remained viable in culture medium for several weeks. In our experimental model, the tumor cells added were seen to adhere to the edges of the traumatized area in cluster formations.
Conclusions: The use of human peritoneal tissue in an ex vivo model would appear to be a potentially useful tool for the study of interaction between human peritoneal membrane and free tumor cells. Experimental surgical trauma increases the ability of tumor cells to adhere to the peritoneal membrane. This ex vivo model should be useful in future studies on biological interactions between peritoneum and tumor cells in the search for novel forms of peritoneal cancer therapy.
abstract_id: PUBMED:38068319
Effect of Intraperitoneal Chemotherapy with Regorafenib on IL-6 and TNF-α Levels and Peritoneal Cytology: Experimental Study in Rats with Colorectal Peritoneal Carcinomatosis. Cytoreductive surgery (CRS), combined with hyperthermic intraperitoneal chemotherapy, has significantly improved survival outcomes in patients with peritoneal carcinomatosis from colorectal cancer (CRC). Regorafenib is an oral agent administered in patients with refractory metastatic CRC. Our aim was to investigate the outcomes of intraperitoneal administration of regorafenib for intraperitoneal chemotherapy (IPEC) and/or CRS in a rat model of colorectal peritoneal metastases regarding immunology and peritoneal cytology. A total of 24 rats were included. Twenty-eight days after carcinogenesis induction, rats were randomized into the following groups: group A: control group; group B: CRS only; group C: IPEC only; and group D: CRS + IPEC. On day 56 after carcinogenesis, euthanasia and laparotomy were performed. Serum levels of interleukin-6 (IL-6) and tumor necrosis factor α (TNF-α) as well as peritoneal cytology were investigated. Groups B and D had significantly lower mean levels of IL-6 and TNF-α compared to groups A and C, but there was no significant difference between them. Both the B and D groups presented a statistically significant difference regarding the rate of negative peritoneal cytology when compared to the control group, but not to group C. In conclusion, regorafenib-based IPEC, combined with CRS, may constitute a promising tool against peritoneal carcinomatosis by altering the tumor microenvironment.
abstract_id: PUBMED:34865996
Peritoneal recurrence of colorectal cancer with microsatellite instability: Is immunotherapy alone more effective than surgery? A 67-year-old man was treated with systemic chemotherapy and cytoreductive surgery for microsatellite instable (MSI), deficient mismatch repair (dMMR) right colonic cancer with peritoneal metastases. Disease was controlled only when anti-PD1 and anti-CTLA4 immune checkpoint inhibitors were introduced. The patient is in complete remission after five years of follow-up. First-line immunotherapy could have a central role in the management of patients with peritoneal recurrence from MSI/dMMR colorectal cancer even though amenable to surgical treatment.
abstract_id: PUBMED:31664989
Using inferior epigastric vascular anatomical landmarks for anterior inguinal hernia repair. Background: Inferior epigastric vascular anatomical landmarks for anterior inguinal hernia repair is an alternative surgical procedure. We present our experience and outcome of the way.
Methods: We performed a retrospective analysis of 230 patients who received anterior tension-free hernia repair between May 2016 and May 2017. Among these cases, 120 were performed using the traditional transinguinal preperitoneal (TTIPP) technique while 100 were performed using the vascular anatomic landmark transinguinal preperitoneal (VALTIPP) technique. Between these two groups, we compared the operation time, length of hospital stay, complication rates, and the visual analog scale (VAS) for pain at 2 days, 3 months, and 6 months after surgery.
Results: Surgery was well-tolerated in both groups with no significant hemorrhage or complications. The operation times for the VALTIPP and TTIPP groups were 42.52 ± 9.15 and 53.84 ± 10.64 min (P < 0.05), respectively. Ten patients in the VALTIPP group and 17 patients in the TTIPP group reported sensations of foreign bodies (P < 0.05). The VAS pain scores in VALTIPP patients at 2 days (4.0 ± 0.5), 3 months (1.0 ± 0.3), and 6 months (0.9 ± 0.3) were significantly lower than those of TTIPP patients (5.3 ± 0.9 at 2 days, 1.8 ± 0.4 at 3 months, and 1.1 ± 0.1 at 6 months, P < 0.05). No statistically significant differences were found in age, gender, BMI, hernia type and location, follow-up period, incidence of post-operative seromas, recurrence rate, or length of hospital stay.
Conclusion: Anterior inguinal hernia repair using inferior epigastric vascular anatomical landmarks may lead to reduced operation times, reduced sensations of foreign bodies, and reduced post-operative pain. This technique is simple, practical, and effective in the management of inguinal hernias.
abstract_id: PUBMED:17122992
Timing of adjuvant radioimmunotherapy after cytoreductive surgery in experimental peritoneal carcinomatosis of colorectal origin. Background: Treatment of patients with peritoneal carcinomatosis (PC) of colorectal cancer (CRC) includes cytoreductive surgery (CS) in combination with (hyperthermic) intraperitoneal chemotherapy (HIPEC), resulting in a limited survival benefit with high morbidity and mortality rates. Radioimmunotherapy (RIT) as adjuvant therapy after CS of CRC has been shown to prolong survival in preclinical studies. However, the optimal setting of RIT remains to be determined.
Methods: PC was induced by intraperitoneal inoculation of CC-531 colon carcinoma cells in Wag/Rij rats. Animals were subjected to exploratory laparotomy (Sham), CS only or CS + RIT at different time points after surgery. RIT consisted of 55 MBq lutetium-177-labelled anti-CC531 antibody MG1 (183 µg). The primary endpoint was survival.
Results: Cytoreductive surgery with or without RIT was well tolerated. Median survival of animals in the Sham and CS group was 29 days and 39 days, respectively (P < 0.04). Compared to CS alone, median survival of rats after adjuvant RIT was 77 days (P < 0.0001), 52 days (P < 0.0001) and 45 days (P < 0.0001) when given directly, 4 and 14 days after surgery, respectively.
Conclusion: The efficacy of adjuvant RIT after CS for the treatment of PC of colonic origin decreases when the administration of the radiolabelled MAbs is postponed. This study shows that adjuvant RIT should be given as early as possible after surgery.
abstract_id: PUBMED:30107696
Treatment for peritoneal metastasis of colorectal cancer. Peritoneal metastasis is the second leading cause of death of colorectal cancer patients. Cytoreductive surgery (CRS) combined with hyperthermic intraperitoneal chemotherapy (HIPEC) is the primary method to treat peritoneal metastasis of colorectal cancer, though there remain some controversies. We reviewed current studies of colorectal peritoneal carcinomatosis (PC) and CRS + HIPEC, and discussed some issues with regard to the scoring system for peritoneal metastasis, selection criteria for CRS + HIPEC treatment, and new drug applications for colorectal PC. The peritoneal carcinomatosis index (PCI) is the most useful scoring system for peritoneal metastasis and CRS + HIPEC is the primary treatment for colorectal PC. Patients with PCI < 20 should receive thorough assessment of the feasibility of R0 or R1 resection and CRS + HIPEC treatment. For patients with unresectable PC at the initial stage, active drug therapy should be adopted to achieve tumor regression, so that some of them would have the opportunity to receive CRS + HIPEC treatment.
abstract_id: PUBMED:31745890
Peritoneal Metastases in Colorectal Cancer: Biology and Barriers. Background: Advances in the molecular biology of tumor metastasis have paralleled the evolution in the management of metastatic disease from colorectal cancer. In this review, we summarize the current understanding of the mechanism of colorectal cancer metastases, in particular that of peritoneal metastases, as well as clinical data on the treatment of this disease.
Methods: A review of relevant English literature using MEDLINE/PubMed on the biology of colorectal cancer metastases, determinants of oligometastasis, and use of cytoreductive surgery and hyperthermic intraperitoneal chemotherapy in the treatment of metastatic colorectal cancer is presented.
Results: Recognition of oligometastasis in the evolution of colorectal peritoneal metastases provides the theoretical framework for which cytoreductive surgery with or without hyperthermic intraperitoneal chemotherapy is considered. Clearly, a subset of patients benefit from peritoneal metastasectomy.
Conclusion: Advances in cancer biology and clinical imaging promise to expand the role of cytoreductive surgery with or without intraperitoneal chemotherapy in the management of peritoneal metastases from colorectal cancer.
Answer: No, the peritoneal reflection is not necessarily the best anatomical repair landmark in experimental colorectal surgery on rats. A study aimed to validate Peyer's patch as an anatomical repair landmark for colorectal surgery in rats. The results showed that there was significant variation in the distance between Peyer's patch and the peritoneal reflection, especially when considering male and female rats as a single group. Furthermore, the distance between the patch and the peritoneal reflection was greater in female than in male rats. Additionally, the colonic segment containing Peyer's patch had lower tissue collagen content than the segment without it. These findings suggest that Peyer's patch can be indicated as an anatomical repair landmark and highlight the need to study the healing of colorectal anastomoses in rats based on differing quantities of tissue collagen in the colonic wall with or without Peyer's patch (PUBMED:20011835). |
Instruction: Is the circumferential resection margin a predictor of local recurrence after preoperative radiotherapy and optimal surgery for rectal carcinoma?
Abstracts:
abstract_id: PUBMED:17535279
Is the circumferential resection margin a predictor of local recurrence after preoperative radiotherapy and optimal surgery for rectal carcinoma? Objective: Circumferential resection margin (CRM) involvement has been correlated with a high risk of developing local recurrence. The aim of this study was to examine the prognostic significance of the CRM involvement after curative resection of rectal cancer in patients treated with preoperative radiotherapy and postoperative chemotherapy where indicated.
Method: All patients with rectal cancer treated in a regional central unit from 1996 to 2004 were identified. A surgical resection was performed on 257 patients, and in 229 of these it was assessed as potentially curative. The CRM was examined in all patients. A CRM of ≤1 mm was considered positive.
Results: A positive margin was seen in 19 (8%) patients. At a median follow up of 40 months, only four (1.7%) patients had developed local recurrence, one of whom had a positive CRM. In the four patients the tumour was 5 cm or less from the anal verge. There were no significant differences regarding local recurrence and survival between CRM positive and negative tumours.
Conclusion: Rectal cancer managed by combined radiochemotherapy and surgery resulted in a low positive CRM rate and a low local recurrence rate. An involved CRM was not a predictor of local recurrence.
abstract_id: PUBMED:34988641
Outcomes of rectal cancer patients with a positive pathological circumferential resection margin. Purpose: Evidence-based management of positive pathological circumferential resection margin (pCRM) following preoperative radiation and an adequate rectal resection for rectal cancers is lacking.
Methods: A retrospective analysis of a prospectively maintained single-centre institutional database was performed to study the patterns of failure and management strategies after rectal cancer surgery with a positive pCRM.
Results: A total of 86 patients with rectal adenocarcinoma with a positive pCRM were identified over 8 years (2011-2018). The majority had low-lying rectal cancers (90.7%) and were operated after preoperative radiotherapy (95.3%). Operative procedures included abdomino-perineal resections, inter-sphincteric resections, low anterior resections and pelvic exenteration in 61 (70.9%), 9 (10.5%), 11 (12.8%) and 5 (5.8%) patients, respectively. A total of 83 (96.5%) received chemotherapy as the sole adjuvant treatment modality, while 2 patients (2.3%) were given post-operative radiotherapy and 1 patient underwent revision surgery. A total of 53 patients (61.6%) had recurrence, with 16 (18.6%), 20 (23.2%), 8 (9.3%) and 9 (10.5%) patients having locoregional, systemic, peritoneal and simultaneous local-systemic relapse, respectively. Systemic recurrences were more often detected by surveillance in asymptomatic patients (20.1%), while local (13.1%) and peritoneal (13.2%) recurrences were more often symptomatic (p = 0.000). The 2-year overall survival (OS) and disease-free survival (DFS) of the cohort were 82.4% and 74.0%. Median local recurrence-free survival (LRFS) was 10.3 months.
Conclusions: Patients with a positive pCRM have high local and distant relapse rates. Systemic relapses are more often asymptomatic than peritoneal or locoregional relapses and are detected on follow-up surveillance. Hence, identification of such recurrences while they are still salvageable, via an intensive surveillance protocol, is desirable.
abstract_id: PUBMED:11859207
Circumferential margin involvement is still an important predictor of local recurrence in rectal carcinoma: not one millimeter but two millimeters is the limit. Despite improved surgical treatment strategies for rectal cancer, 5-15% of all patients will develop local recurrences. After conservative surgery, circumferential resection margin (CRM) involvement is a strong predictor of local recurrence. The consequences of a positive CRM after total mesorectal excision (TME) have not been evaluated in a large patient population. In a nationwide randomized multicenter trial comparing preoperative radiotherapy and TME versus TME alone for rectal cancer, CRM involvement was determined according to the trial protocol. In this study we analyze the criteria by which the CRM needs to be assessed to predict local recurrence in nonirradiated patients (n = 656, median follow-up 35 months). CRM involvement is a strong predictor of local recurrence after TME. A margin of ≤2 mm is associated with a local recurrence risk of 16% compared with 5.8% in patients with more mesorectal tissue surrounding the tumor (p < 0.0001). In addition, patients with margins ≤1 mm have an increased risk of distant metastases (37.6% vs 12.7%, p < 0.0001) as well as shorter survival. The prognostic value of CRM involvement is independent of TNM classification. Accurate determination of the CRM in rectal cancer is important for determination of local recurrence risk, which might subsequently be prevented by additional therapy. In contrast to earlier studies, we show that an increased risk is present when margins are ≤2 mm.
abstract_id: PUBMED:17659680
Study of circumferential resection margin in patients with middle and lower rectal carcinoma. Aim: To clarify the relationship between circumferential resection margin status and local and distant recurrence as well as survival of patients with middle and lower rectal carcinoma. The relationship between circumferential resection margin status and clinicopathologic characteristics of middle and lower rectal carcinoma was also evaluated.
Methods: Cancer specimens from 56 patients with middle and lower rectal carcinoma who received total mesorectal excision at the Department of General Surgery of Guangdong Provincial People's Hospital were studied. A large slice technique was used to detect mesorectal metastasis and evaluate circumferential resection margin status.
Results: Local recurrence occurred in 12.5% (7 of 56 cases) of patients with middle and lower rectal carcinoma. Distant recurrence occurred in 25% (14 of 56 cases) of patients with middle and lower rectal carcinoma. Twelve patients (21.4%) had a positive circumferential resection margin. The local recurrence rate of patients with a positive circumferential resection margin was 33.3% (4/12), whereas it was 6.8% (3/44) in those with a negative circumferential resection margin (P = 0.014). Distant recurrence was observed in 50% (6/12) of patients with a positive circumferential resection margin; conversely, it was 18.2% (8/44) in those with a negative circumferential resection margin (P = 0.024). Kaplan-Meier survival analysis showed significant improvements in median survival (32.2 ± 4.1 mo, 95% CI: 24.1-40.4 mo vs 23.0 ± 3.5 mo, 95% CI: 16.2-29.8 mo) for circumferential resection margin-negative patients over circumferential resection margin-positive patients (log-rank, P < 0.05). Of the tumors examined, 37% of T3 tumors had a positive circumferential resection margin, compared with 0% of T1 tumors and 8.7% of T2 tumors; the difference between these three groups was statistically significant (P = 0.021). In 18 cancer specimens with a tumor diameter ≥5 cm, 7 (38.9%) had a positive circumferential resection margin, while in 38 specimens with a tumor diameter <5 cm only 5 (13.2%) were positive (P = 0.028).
Conclusion: Our findings indicate that circumferential resection margin involvement is significantly associated with depth of tumor invasion and tumor diameter. The circumferential resection margin status is an important predictor of local and distant recurrence as well as survival of patients with middle and lower rectal carcinoma.
abstract_id: PUBMED:16328130
The circumferential resection margin in rectal carcinoma surgery. After radical resection of rectal carcinoma, the circumferential resection margin (CRM) on the non-peritonealized surface of the resected specimen is of critical importance. Histopathological examination of resected specimens must include careful assessment of the CRM. There is a need to distinguish between CRM-positive (CRM directly involved by tumor or minimal distance between tumor and CRM 1 mm or less) and CRM-negative (distance between tumor and CRM more than 1 mm) situations. Optimized surgery (so-called TME surgery) and an experienced surgeon decrease the frequency of CRM-positive specimens. The CRM status is an important predictor of local and distant recurrence as well as survival. The CRM status can be reliably predicted by preoperative thin-slice high-resolution magnetic resonance imaging (MRI). In the event of predicted CRM-positivity, neoadjuvant radiochemotherapy is indicated.
abstract_id: PUBMED:17180256
Prognostic groups in 1,676 patients with T3 rectal cancer treated without preoperative radiotherapy. Purpose: The use of preoperative radiotherapy in patients with T3 tumors shows considerable variation among countries and institutions. The Norwegian guidelines have been very restrictive, limiting the indication to T4. This study was designed to identify subgroups of patients with T3 tumors with presumed high risks on adverse outcome and to use these results to reevaluate the national guidelines for preoperative radiotherapy.
Methods: This was a national cohort study of 2,460 patients with pT3 rectal adenocarcinoma, undergoing major surgery without preoperative radiotherapy from November 1993 to December 2002. Circumferential resection margin in millimeters was given for 1,676 patients.
Results: Multivariate analyses identified circumferential resection margin and nodal status as independent prognostic factors for local recurrence, metastases, and overall mortality. Analyses based on 12 combinations of N stage and circumferential resection margin showed that the estimated five-year rate of local recurrence increased from 11.1 percent (circumferential resection margin >3 mm; N0) to 36.5 percent (circumferential resection margin ≤1 mm; N2). The rate of distant metastases increased from 18.5 to 77.7 percent and the five-year survival decreased from 68.6 to 25.7 percent, respectively.
Conclusions: There is great variation in outcome for patients with T3 cancers, and the outcome is not acceptable for the groups of patients with circumferential resection margin <3 mm or involved lymph nodes. These groups should be considered for neoadjuvant therapy.
abstract_id: PUBMED:2688804
Predicting local recurrence of carcinoma of the rectum after preoperative radiotherapy and surgery. A prospective study of prognostic factors has been carried out in a group of 186 patients with tethered rectal carcinomas. Of these, 97 were randomized to surgery alone and 89 to receive preoperative radiotherapy (20 Gy in four fractions). DNA ploidy was determined by flow cytometry. DNA aneuploidy was detected in 60 patients (62 per cent) in the surgery only group, but in only 33 patients (37 per cent) after radiotherapy (P < 0.01). There was a significant reduction in local recurrence in irradiated patients (P < 0.0001). DNA diploid tumours were less likely to recur locally. This was more marked in the radiotherapy group (P = 0.01) than in the surgery only group (P = 0.06). After radiotherapy, only the surgeons' assessments of a 'curative' resection and DNA ploidy were independent predictors of local recurrence in multivariate regression analysis, whilst Dukes' classification was not. In conclusion, DNA ploidy may indicate response to radiotherapy and is an important predictor of subsequent local tumour progression.
abstract_id: PUBMED:7551999
Abdominoperineal resection and anterior resection in the treatment of rectal cancer: results in relation to adjuvant preoperative radiotherapy. The outcome of patients with rectal cancer treated by abdominoperineal or anterior resection, with or without preoperative radiotherapy, was assessed to detect any differences attributable to the operative method and interactions between radiotherapy and type of surgery. The study was based on 1292 patients included in two consecutive controlled randomized trials of preoperative radiotherapy in operable rectal carcinoma. The outcome was not related to surgical method. Radiotherapy increased postoperative mortality and complications and reduced local and distant recurrence, but had no effect on overall survival. Effects of radiotherapy were similar irrespective of the type of surgery, except that the increase in postoperative mortality in irradiated patients was greater in those treated with abdominoperineal resection. Sphincter-saving procedures appear to have no adverse effects on outcome of rectal cancer, but the optimum use of radiotherapy is still to be defined.
abstract_id: PUBMED:15786412
Prognostic significance of circumferential margin involvement in rectal adenocarcinoma treated with preoperative chemoradiotherapy and low anterior resection. Introduction: The histologic status of the circumferential margins is an important predictor of local and distant relapse in nonirradiated rectal cancer. However, for patients who received preoperative chemoradiotherapy this role has not yet been addressed.
Methods: From January 1995 to December 1997, 61 patients with rectal adenocarcinoma located between 0 and 10 cm from the anal verge, with invasion into perirectal fat assessed by rectal ultrasound, were included. All patients received 45 Gy plus bolus infusion of 5-FU (450 mg/m² on days 1-5 and 28-33 of RT); 4-6 weeks later, surgery was performed. The circumferential margin was assessed (<2 mm was considered positive). Five-year survival was calculated by the Kaplan-Meier method and groups were compared with the log-rank test. Multivariate Cox regression analysis was performed to find risk factors affecting local control and survival.
Results: There were 35 males and 26 females, mean age 60.3 years. Twelve patients (19.7%) had circumferential margin involvement. Median follow-up was 44 months. Overall local recurrence was observed in 6 of 61 patients (9.8%); in patients without circumferential margin involvement this was 8%, whereas it was 16% in those with circumferential margin involvement (P = 0.33). Distant recurrence was observed in 22% of patients without circumferential margin involvement; conversely, it was 58.3% in those with involvement (P = 0.02). Five-year survival of patients without circumferential resection margin involvement was 81%, while it was 42% in patients with circumferential involvement (P = 0.006).
Conclusions: In patients with rectal cancer treated by preoperative chemoradiation plus total mesorectal excision (TME) and sphincter saving surgery, circumferential margin involvement is associated with high incidence of distant recurrence and cancer-related death.
abstract_id: PUBMED:32928609
Preoperative predictive risk to cancer quality in robotic rectal cancer surgery. Background: Circumferential resection margin (CRM) involvement is widely considered the strongest predictor of local recurrence after TME. This study aimed to determine preoperative factors associated with a higher risk of pathological CRM involvement in robotic rectal cancer surgery.
Methods: This was a retrospective review of a prospectively maintained database of consecutive adult patients who underwent elective, curative robotic low anterior or abdominoperineal resection with curative intent for primary rectal adenocarcinoma in a tertiary referral cancer center from March 2012 to September 2019. Pretreatment magnetic resonance imaging (MRI) reports were reviewed for all the patients. Risk factors for pathological CRM involvement were investigated using Firth's logistic regression and a predictive model based on preoperative radiological features was formulated.
Results: A total of 305 patients were included, and 14 (4.6%) had CRM involvement. Multivariable logistic regression found both T3 >5 mm (OR 6.12, CI 1.35-36.44) and threatened or involved mesorectal fascia (OR 4.54, CI 1.33-17.55) on baseline MRI to be preoperative predictors of pathologic CRM positivity, while anterior location (OR 3.44, CI 0.72-33.13) was significant only on univariate analysis. The predictive model showed good discrimination (area under the receiver-operating characteristic curve >0.80) and predicted a 32% risk of positive CRM if all risk factors were present.
Conclusion: Patients with pre-operatively assessed threatened radiological margin, T3 tumors with greater than 5 mm extension and anterior location are at risk for a positive CRM. The predictive model can preoperatively estimate the CRM positivity risk for each patient, allowing surgeons to tailor management to improve oncological outcomes.
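To make the quoted 32% figure concrete, the sketch below shows how a logistic model combines odds ratios into an individual risk estimate. This is illustrative only: the reference-group risk of 0.5% is an assumption back-solved by us (the abstract does not report the model intercept), chosen so that the three published odds ratios jointly reproduce roughly the 32% risk quoted.

    import math

    def predicted_risk(reference_risk, odds_ratios):
        # Start from the log-odds of the reference group, then add the
        # log-odds contribution of each risk factor that is present
        log_odds = math.log(reference_risk / (1 - reference_risk))
        for odds_ratio in odds_ratios:
            log_odds += math.log(odds_ratio)
        return 1 / (1 + math.exp(-log_odds))

    # ORs from the abstract: T3 > 5 mm (6.12), threatened/involved
    # mesorectal fascia (4.54), anterior location (3.44)
    risk = predicted_risk(0.005, [6.12, 4.54, 3.44])  # 0.5% reference risk is assumed
    print(f"Predicted CRM-positivity risk with all factors: {risk:.0%}")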
Answer: The circumferential resection margin (CRM) is considered an important predictor of local recurrence in rectal carcinoma after surgery. Several studies have investigated the relationship between CRM involvement and the risk of local recurrence, with varying conclusions.
One study found that CRM involvement was not a predictor of local recurrence in patients managed by combined radiochemotherapy and surgery, resulting in a low positive CRM rate and a low local recurrence rate (PUBMED:17535279). However, this finding is not universally supported.
Other research has shown that a positive pathological CRM (pCRM) is associated with high local and distant relapse rates, indicating that patients with a positive pCRM have a significant risk of recurrence (PUBMED:34988641). Similarly, another study concluded that CRM involvement is a strong predictor for local recurrence after total mesorectal excision (TME), with a margin of ≤2 mm associated with a higher local recurrence risk compared to patients with more mesorectal tissue surrounding the tumor (PUBMED:11859207).
Further evidence suggests that CRM involvement is significantly associated with the depth of tumor invasion and tumor diameter, and is an important predictor of local and distant recurrence as well as survival in patients with middle and lower rectal carcinoma (PUBMED:17659680). Additionally, CRM status can be reliably predicted by preoperative magnetic resonance imaging (MRI), and in cases of predicted CRM-positivity, neoadjuvant radiochemotherapy is indicated (PUBMED:16328130).
Moreover, in patients with T3 rectal cancer treated without preoperative radiotherapy, CRM involvement and nodal status were identified as independent prognostic factors for local recurrence, metastases, and overall mortality (PUBMED:17180256).
In conclusion, while there may be some variability in findings, the majority of evidence suggests that CRM is indeed a predictor of local recurrence after preoperative radiotherapy and optimal surgery for rectal carcinoma. |
Instruction: Varicella vaccination during early pregnancy: a cause of in utero miliary fetal tissue calcifications and hydrops?
Abstracts:
abstract_id: PUBMED:12625972
Varicella vaccination during early pregnancy: a cause of in utero miliary fetal tissue calcifications and hydrops? Background: The purpose of this article is to describe a suspected association between inadvertent vaccination with varicella vaccine during early pregnancy and the subsequent development of in utero miliary fetal tissue calcifications and fetal hydrops detected by sonogram at 15 weeks of gestation.
Case: This is a case presentation of a pregnant patient who received varicella vaccination during the same menstrual cycle in which she became pregnant, supplemented by a literature review. The fetus developed miliary fetal tissue calcifications and fetal hydrops detected by a targeted sonogram at 15 weeks of gestation.
Conclusion: Varicella vaccination during early pregnancy may be a cause of miliary fetal tissue calcifications and fetal hydrops.
abstract_id: PUBMED:18979430
Ultrasound findings in fetal infection Infections acquired in utero or during the birth process are a significant cause of fetal and neonatal mortality and an important contributor to early and later childhood morbidity. Advances in ultrasound, invasive prenatal procedures and molecular diagnostics have allowed in utero evaluation and given rise to more timely and accurate diagnosis in infected fetuses. Transplacental transmission of the infectious agent, even in subclinical maternal infection, may result in a severe congenital syndrome. Prenatal detection of infection is based on fetal sonographic findings and polymerase chain reaction to identify the specific agent. Nevertheless, most affected fetuses appear sonographically normal, but serial scanning may reveal evolving findings. Sonographic fetal abnormalities may be indicative of fetal infections, although they are generally not sensitive or specific. These include growth restriction, hydrops, ventriculomegaly, hydrocephaly, microcephaly, intracranial or hepatic calcifications, ascites, hepatosplenomegaly, echogenic bowel, placentomegaly, and abnormal amniotic fluid volume. When abnormalities are detected on ultrasound, a thorough fetal evaluation is recommended because of potential multiorgan involvement. The sonologist should understand the limitations of ultrasound. Patients should be counseled that ultrasound is not a sensitive test for fetal infection and that a normal fetal anatomy survey cannot reliably predict a favorable outcome.
abstract_id: PUBMED:16635273
Sonographic findings in fetal viral infections: a systematic review. Unlabelled: Viral infections are a major cause of fetal morbidity and mortality. Transplacental transmission of the virus, even in subclinical maternal infection, may result in a severe congenital syndrome. Prenatal detection of viral infection is based on fetal sonographic findings and polymerase chain reaction to identify the specific infectious agent. Most affected fetuses appear sonographically normal, but serial scanning may reveal evolving findings. Common sonographic abnormalities, although nonspecific, may be indicative of fetal viral infections. These include growth restriction, ascites, hydrops, ventriculomegaly, intracranial calcifications, hydrocephaly, microcephaly, cardiac anomalies, hepatosplenomegaly, echogenic bowel, placentomegaly, and abnormal amniotic fluid volume. Some of the pathognomonic sonographic findings enable diagnosis of a specific congenital syndrome (eg, ventriculomegaly and intracranial and hepatic calcifications in cytomegalovirus, eye and cardiac anomalies in congenital rubella syndrome, limb contractures and cerebral anomalies in varicella zoster virus). When abnormalities are detected on ultrasound, a thorough fetal evaluation is recommended because of multiorgan involvement.
Target Audience: Obstetricians & Gynecologists, Family Physicians.
Learning Objectives: After completion of this article, the reader should be able to recall that both clinical and subclinical maternal viral infections can cross the placenta, explain that there are specific sonographic findings along with laboratory findings to detect infectious agents, and state that when sonographic abnormalities are detected fetal viral infections need to be considered.
abstract_id: PUBMED:22391985
Fetal liver calcifications: an autopsy study. Fetal liver calcifications are occasionally found in fetal autopsies. However, the incidence, associated findings, clinical significance, and presumed pathogenesis of fetal liver calcifications are not well documented. This study analyzed the characteristics and significance of fetal liver calcifications found on fetal autopsies. Cases of fetal liver calcifications were collected from a fetal autopsy database. Their clinical and pathological characteristics were analyzed in comparison to the remaining cases in the database. Thirty-five cases (4.2%) of fetal liver calcifications were found among 827 consecutive fetal autopsies that had been performed in our hospital during the 16-year period from January 1, 1994 through December 31, 2009. Twenty-nine cases had nodular calcifications, predominantly subcapsular. Calcifications in the portal spaces and porta hepatis were present in six cases. Twenty cases were missed abortions and intrauterine fetal deaths. Missed abortions at or earlier than 23 weeks showed significantly more subcutaneous edema and other evidence of circulatory abnormalities. Calcifications in older fetuses (>23 weeks) were located more commonly in portal spaces and in other organs. Fetal liver calcification is an incidental finding during autopsies. The significance of fetal liver calcifications has to be assessed in combination with other clinical and pathological parameters, including the location and number of the lesions, signs of circulatory compromise, abnormalities of the placenta and umbilical cord, and fetal malformations. Fetal liver calcifications are commonly associated with conditions related to impaired circulation, including umbilical cord abnormalities and subcutaneous edema. We suggest that fetal liver calcifications might attest to circulatory compromise preceding death, especially if subcutaneous edema is present and even when no other abnormal findings are seen.
abstract_id: PUBMED:12566779
Giant fetal hepatic hemangioma. Case report and literature review. The purpose of this case report is to demonstrate the importance of prenatal imaging for treatment management of fetal giant hepatic hemangiomas. Prenatal ultrasound revealed an abdominal mass with several cystic areas and punctate calcifications in a fetus at 29 weeks' gestation. Doppler scans confirmed the highly vascular nature of the mass. In this case, ultrasound showed that the mass was of hepatic origin, while magnetic resonance imaging at 32 weeks' gestation was more equivocal with respect to the anatomical source of the lesion. Imminent hydrops caused by a rapidly enlarging liver tumor was sonographically demonstrated at 34 weeks' gestation. An elective C-section and immediate tumor resection were performed. At the age of 20 months, the infant is thriving. This case supports the notion that the survival rates for giant hepatic hemangiomas improve when fetal hydrops is averted and specific pre- and postnatal treatment is applied based on correct prenatal imaging diagnostics.
abstract_id: PUBMED:8645385
An association between fetal parvovirus B19 infection and fetal anomalies: a report of two cases. The association between fetal parvovirus B19 infection and hydrops was first reported in 1984. The virus has a predilection for the erythroid cell line, which in the fetus may produce anemia. Recent cases of parvovirus infection in other fetal cell lines have raised concern that the infection may induce fetal anomalies in rare cases. We report two pregnancies complicated by parvovirus B19 infection. In each instance the patient had normal second trimester ultrasounds but subsequently developed fetal abnormalities--disruptions of normal structure. One infant had a myocardial infarction, splenic calcifications, and mild hydrocephalus. The other had moderate hydrocephalus with central nervous system scarring. There are two possible mechanisms by which parvovirus may induce fetal anomalies. Both direct infection of fetal organs and vascular inflammation have been documented in association with B19 parvovirus. Although fetal abnormalities associated with parvovirus are rare, continued study of this organism may indicate a greater pathologic potential than is now thought.
abstract_id: PUBMED:27928837
Prenatal diagnosis of idiopathic infantile arterial calcification without fetal hydrops. Idiopathic infantile arterial calcification (IIAC) is a rare autosomal recessive disease that is characterized by extensive calcification of the internal elastic lamina and intimal proliferation of large- and medium-sized arteries, including the aortic, coronary, pulmonary, and iliac arteries. Most reported cases of IIAC were diagnosed in the neonatal period. Prenatal diagnosis of this condition is extremely rare and is usually made in the third trimester, when fetuses present with nonimmune hydrops together with aortic and pulmonary calcification. In the second trimester, early prenatal diagnosis can hardly be made in the absence of fetal hydrops. We report a case of IIAC referred to our center because of a hyperechogenic tricuspid valve. The prenatal diagnosis was made by echocardiographic detection of diffuse hyperechogenicity of the cardiac valves, annuli, aorta, pulmonary artery, renal artery and common iliac artery without fetal hydrops. To the best of our knowledge, this was the first case of IIAC accurately diagnosed prenatally in the absence of fetal hydrops.
abstract_id: PUBMED:19848336
Ultrasound in the evaluation of intrauterine infection during pregnancy. Ultrasound has an important role in the detection and follow-up of intrauterine infection. Viral infections are a major cause of fetal morbidity and mortality. Transplacental transmission of the virus, even in sub-clinical maternal infection, may result in a severe congenital syndrome. Prenatal detection of viral infection is based on fetal sonographic findings and PCR to identify the specific infectious agent. Most affected fetuses appear sonographically normal, but serial scanning may reveal evolving findings. Common sonographic abnormalities, although non-specific, may be indicative of fetal viral infections. These include growth restriction, ascites, hydrops, ventriculomegaly, intracranial calcifications, hydrocephaly, microcephaly, cardiac anomalies, hepatosplenomegaly, echogenic bowel, placentomegaly and abnormal amniotic fluid volume. Some of the pathognomonic sonographic findings enable diagnosis of a specific congenital syndrome (e.g., ventriculomegaly and intracranial and hepatic calcifications in cytomegalovirus or in toxoplasma; eye and cardiac anomalies in congenital Rubella syndrome; limb contractures and cerebral anomalies in Varicella Zoster virus). When abnormalities are detected on ultrasound, a thorough fetal evaluation is recommended because of multiorgan involvement.
abstract_id: PUBMED:18383476
Secondary cytomegalovirus infection can cause severe fetal sequelae despite maternal preconceptional immunity. Objectives: To describe our experience in cases with sonographic signs of fetal infection and with maternal serological 'immunity' to cytomegalovirus (CMV) infection.
Methods: This was a bicenter study of six pregnant women referred for evaluation of suspected fetal infection. All cases had confirmed maternal serology for past exposure to CMV but no evidence of recent secondary CMV infection. All underwent sonographic evaluation as well as complete investigation for CMV infection.
Results: The mean age of the women was 29 (range, 23-35) years and the mean gestational age at diagnosis was 23.5 weeks (range, 20-31) weeks. Sonographic findings included microcephaly, ventriculomegaly, periventricular calcifications and cystic lesions, echogenic bowel, hydrops and hepatosplenomegaly. Amniocentesis was performed in all cases for fetal karyotyping and viral assessment, and all were found by polymerase chain reaction to be positive for CMV infection. Four pregnancies were terminated following the parents' request. One pregnancy continued until intrauterine fetal death occurred 2 weeks after diagnosis. Postmortem was denied in all cases but one. One infant was delivered with evidence of severe cerebral palsy.
Conclusion: In the presence of sonographic findings suggestive of fetal CMV infection, prompt investigation of amniotic fluid should follow even if maternal serology does not support recent maternal seroconversion.
abstract_id: PUBMED:3921362
Fetal PCB syndrome: clinical features, intrauterine growth retardation and possible alteration in calcium metabolism. Pregnant mothers with yusho in Fukuoka, Nagasaki and Kochi Prefectures delivered babies with a peculiar clinical presentation, here called fetal PCB syndrome (FPS). The incidences among births were 3.6% (Fukuoka Prefecture), 4% (Nagasaki Prefecture), 2.9% (Kochi Prefecture) and 3.9% (total). The manifestations consisted of dark brown pigmentation of the skin and mucous membranes, gingival hyperplasia, exophthalmic edematous eyes, dentition at birth, abnormal calcification of the skull as demonstrated by X-ray, rocker-bottom heels and a high incidence of light-for-date (low-birth-weight) babies. We suggest that there may be an alteration in calcium metabolism in these babies, related to the fragile egg shells observed in PCB-contaminated birds and to the female hormone-enhancing effect of PCB. The high incidence of low birth weight among these newborns, together with two other similar studies, indicates that PCBs suppress fetal growth.
Answer: The case presented in the abstract from PUBMED:12625972 suggests a suspected association between inadvertent varicella vaccination during early pregnancy and the subsequent development of in utero miliary fetal tissue calcifications and fetal hydrops. This was detected by a targeted sonogram at 15 weeks of gestation in a pregnant patient who received the varicella vaccine during the same menstrual cycle in which she became pregnant. The conclusion drawn from this case is that varicella vaccination during early pregnancy may be a cause of miliary fetal tissue calcifications and fetal hydrops.
Instruction: Cancer rehabilitation: psychosocial rehabilitation needs after discharge from hospital?
Abstracts:
abstract_id: PUBMED:18792854
Cancer rehabilitation: psychosocial rehabilitation needs after discharge from hospital? Objective: This study explores former cancer patients' psychosocial rehabilitation needs after hospital discharge.
Method: Three focus-group interviews with 15 patients.
Results: Five main areas were identified: (1) Need for continuous support and information about rehabilitation opportunities; (2) Support to the family; (3) Psychological help also addressing fear of relapse of cancer; (4) Needs for social support; (5) Needs pertaining to how friends and acquaintances relate to the patients.
Conclusion: Each of these aspects should be carefully considered for each patient, preferably by one assigned healthcare provider. Fear of cancer relapse prevails among the patients, the family and the social network and it is important in relation to psychosocial rehabilitation.
abstract_id: PUBMED:31294502
An assessment of survivorship care needs of patients with colorectal cancer: The experiences and perspectives of hospital nurses. Purpose: To describe and analyse hospital nurses' experiences and perspectives of needs assessment in relation to colorectal cancer patients' survivorship care and rehabilitation needs.
Method: The methodology and design of this study were phenomenological-hermeneutic, and the analysis was performed using Ricoeur's theory of interpretation. Twelve hospital nurses working in the care of patients with colorectal cancer participated in four focus group interviews between February and March 2018. Focus group interviews were recorded, transcribed and analysed. The study adhered to the COREQ checklist.
Results: Our analysis showed that nurses experienced challenges and barriers in conducting needs assessment. These challenges were described in three main themes. Encountering paradigms brought to light the difficulties relating to implementation of needs assessment into daily practice in the complex context of a hospital setting. Patient involvement could be challenging because of insufficient involvement and inadequate health literacy of patients in relation to needs assessment. A negative attitude towards systematic needs assessment among nurses could present a barrier because of their role as gatekeepers.
Conclusion: The findings point to important elements that are necessary to consider when planning cancer survivorship care in the hospital setting so that all patients experience the best possible cancer trajectory. These insights can guide future clinical practice in the endeavour to ensure more systematic initiatives towards cancer rehabilitation.
Relevance To Clinical Practice: Based on our findings, cancer survivorship care needs assessment in the hospital setting should encompass specific guidelines on needs assessment and systematic implementation of these guidelines by involving hospital management, nurses and patients through use of visionary information and communication. Implementation of these guidelines would be supported by securing knowledge on cancer survivorship care for all hospital health professionals. Health literacy should be considered in formulating guidelines that enhance involvement of patients by use of patient-centred communication.
abstract_id: PUBMED:34402768
Supportive care needs of men with prostate cancer after hospital discharge: multi-stakeholder perspectives. Purpose: This study explored the supportive care needs of men with prostate cancer (PCa) after hospital discharge based on the perceptions of multiple stakeholders.
Methods: Eight semi-structured focus groups and three individual interviews were conducted between September 2019 and January 2020, with 34 participants representing men with PCa, primary and secondary healthcare professionals, and cancer organizations in western Norway. Data was analysed using systematic text condensation.
Results: Four categories emerged: 1) men with PCa have many information needs which should be optimally provided throughout the cancer care process; 2) various coordination efforts among stakeholders are needed to support men with PCa during follow-up; 3) supportive care resources supplement the healthcare services but knowledge about them is random; and 4) structured healthcare processes are needed to improve the services offered to men with PCa. Variations were described regarding priority, optimal mode and timeliness of supportive care needs, while alignment was concerned with establishing structures within and between stakeholders to improve patient care and coordination.
Conclusions: Despite alignment among stakeholders regarding the necessity for standardization of information and coordination practices, the mixed prioritization of supportive care needs of men with PCa indicates the need for additional individualized and adapted measures.
abstract_id: PUBMED:27344328
Prevalence of unmet needs and correlated factors in advanced-stage cancer patients receiving rehabilitation. Purpose: Although rehabilitation for patients with cancer is currently being provided throughout all phases of the disease, including the advanced stage, much remains unknown about the needs of such patients. The aims of this study were to identify the supportive care and unmet needs of cancer patients receiving rehabilitation interventions and to investigate the factors associated with those unmet needs.
Methods: A total of 45 patients with cancer receiving rehabilitation interventions participated in this study between June 2013 and December 2015. Measures included the Japanese version of the Short-Form Supportive Care Needs Survey Questionnaire (SCNS-SF34), the Functional Independence Measure (FIM), the Hospital Anxiety and Depression Scale (HADS), and various other medico-social factors.
Results: The mean age of the cancer patients was 66.6 years, the mean (±standard deviation) FIM score was 111.8 (±16.1), and the mean HADS score was 13.9 (±8.2). The patients had a mean of 17.4 (±10.3) unmet needs. The top ten unmet needs related to rehabilitation intervention included seven psychological needs, two health system and information needs, and one physical and daily living need. Multiple regression analysis revealed that psychological distress (HADS ≥11), marital status, and sex were significantly associated with physical and daily living needs.
Conclusions: These results suggest that psychosocial factors are important in understanding the supportive care and unmet needs of cancer patients receiving rehabilitation interventions.
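To make the analysis reported above concrete, the following is a minimal sketch of how the HADS ≥ 11 distress cutoff and the multiple regression on distress, marital status, and sex could be set up. The file name, column names, and model form are illustrative assumptions, not the study's actual code; only the cutoff and the three predictors come from the abstract.
```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per patient with HADS total, marital
# status, sex, and the SCNS-SF34 physical/daily-living needs score.
df = pd.read_csv("scns_survey.csv")  # assumed file name

# Dichotomize psychological distress at the cutoff used in the abstract.
df["distress"] = (df["hads_total"] >= 11).astype(int)

# Multiple regression of physical/daily-living needs on the three
# factors the study reported as significant.
model = smf.ols("physical_daily_needs ~ distress + married + sex", data=df).fit()
print(model.summary())
```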
abstract_id: PUBMED:28266259
Burden and Rehabilitation Goals of Families in Pediatric-oncological Rehabilitation. Survival rates of childhood cancer patients have increased in recent years to up to 80%. Therefore, pediatric oncological rehabilitation is essential for reintegrating children with cancer into normal life. We performed an analysis of the current state of pediatric oncological rehabilitation with regard to the impairments of the participants and the results of rehabilitation. Descriptive and content analyses of 422 medical discharge summaries were conducted. 55% of the pediatric patients were male; the average age was 8.7 years. Children attending the rehabilitation program are affected by various functional and psychosocial impairments. We identified global rehabilitation goals, such as integration into the peer group, and specific goals, such as pain relief. In the opinion of the rehabilitation physicians, most patients achieve their rehabilitation goals. Accompanying family members report a range of psychosocial burdens and diverse concerns for rehabilitation. Medical discharge summaries display the complexity of family-oriented rehabilitation. We conclude that rehabilitation treatment needs to be tailored to individual burdens and to the whole family.
abstract_id: PUBMED:26808254
Assessment of rehabilitation needs in colorectal cancer treatment: Results from a mixed audit and qualitative study in Denmark. Background: Systematic assessments of cancer patients' rehabilitation needs are a prerequisite for devising appropriate survivorship programs. Little is known about the fit between needs assessment outlined in national rehabilitation policies and clinical practice. This study aimed to explore clinical practices related to identification and documentation of rehabilitation needs among patients with colorectal cancer at Danish hospitals. Material and methods: A retrospective clinical audit was conducted utilizing data from patient files randomly selected at surgical and oncology hospital departments treating colorectal cancer patients. Forty patients were included, 10 from each department. Semi-structured interviews were carried out among clinical nurse specialists. Audit data was analyzed using descriptive statistics, qualitative data using thematic analysis. Results: Documentation of physical, psychological and social rehabilitation needs initially and at end of treatment was evident in 10% (n = 2) of surgical patient trajectories and 35% (n = 7) of oncology trajectories. Physical rehabilitation needs were documented among 90% (n = 36) of all patients. Referral to municipal rehabilitation services was documented among 5% (n = 2) of all patients. Assessments at surgical departments were shaped by the inherent continuous assessment of rehabilitation needs within standardized fast-track colorectal cancer surgery. In contrast, the implementation of locally developed assessment tools inspired by the distress thermometer (DT) in oncology departments was challenged by a lack of competencies and funding, impeding integration of data into patient files. Conclusion: Consensus must be reached on how to ensure more systematic, comprehensive assessments of rehabilitation needs throughout clinical cancer care. Fast-track surgery ensures systematic documentation of physical needs, but the lack of inclusion of data collected by the DT in oncological departments questions the efficacy of assessment tools and points to a need for distinguishing between surgical and oncological settings in national rehabilitation policies.
abstract_id: PUBMED:28912923
Lack of Needs Assessment in Cancer Survivorship Care and Rehabilitation in Hospitals and Primary Care Settings. Background: Formalized and systematic assessment of survivorship care and rehabilitation needs is a prerequisite for ensuring that cancer patients receive sufficient help and support throughout their cancer trajectory. Patients are often uncertain as to how to express and address their survivorship care and rehabilitation needs, and little is known about specific, unmet needs and the plans necessary to meet them. There is a call both for ensuring survivorship care and rehabilitation for cancer patients in need and for documenting the specific needs related to the cancer disease and its treatment. Thus, the aim of this study was to describe the specific survivorship care and rehabilitation needs and plans stated by patients with cancer at hospitals when diagnosed and when primary care survivorship care and rehabilitation begins.
Methods: Needs assessment forms from cancer patients at two hospitals and two primary care settings were analyzed. The forms included stated needs and survivorship care and rehabilitation plans. All data were categorized using the International Classification of Functioning, Disability and Health (ICF).
Results: Eighty-nine patients at hospitals and 99 in primary care stated their needs. Around 50% of the patients completed a survivorship care and rehabilitation plan. In total, 666 (mean 7.5) needs were stated by hospital patients and 836 (mean 8.0) by those in primary care. The needs stated were primarily within the ICF component "body functions and structure", and the most frequent needs were (hospitals/primary care) fatigue (57%/67%), reduced muscle strength (55%/67%) and being worried (37%/36%).
Conclusions: The results underpin an urgent need for a systematic procedure to assess needs in clinical practice where cancer patients are being left without survivorship care and rehabilitation needs assessment. Gaining knowledge on needs assessment and the detailed description of needs and plans can facilitate targeted interventions. The findings indicate an urgent need to change the practice culture to be systematic in addressing and identifying survivorship care needs among patients with cancer. Further the findings call for considering the development of a new needs assessment form with involvement of both patients and healthcare professionals.
abstract_id: PUBMED:34313031
Thirty-day hospital readmission rate, reasons, and risk factors after acute inpatient cancer rehabilitation. Objectives: To evaluate the 30-day hospital readmission rate, reasons, and risk factors for patients with cancer who were discharged to home setting after acute inpatient rehabilitation.
Design, Setting, And Participants: This was a secondary retrospective analysis of participants in a completed prospective survey study that assessed the continuity of care and functional safety concerns upon discharge and 30 days after discharge in adults. Patients were enrolled from September 5, 2018, to February 7, 2020, at a large academic quaternary cancer center with National Cancer Institute Comprehensive Cancer Center designation.
Main Outcomes And Measures: Thirty-day hospital readmission rate, descriptive summary of reasons for readmissions, and statistical analyses of risk factors related to readmission.
Results: Fifty-five (21%) of the 257 patients were readmitted to hospital within 30 days of discharge from acute inpatient rehabilitation. The reasons for readmissions were infection (20, 7.8%), neoplasm (9, 3.5%), neurological (7, 2.7%), gastrointestinal disorder (6, 2.3%), renal failure (3, 1.1%), acute coronary syndrome (3, 1.1%), heart failure (1, 0.4%), fracture (1, 0.4%), hematuria (1, 0.4%), wound (1, 0.4%), nephrolithiasis (1, 0.4%), hypervolemia (1, 0.4%), and pain (1, 0.4%). Multivariate logistic regression modeling indicated that having a lower locomotion score (OR = 1.29; 95% CI, 1.07-1.56; p = 0.007) at discharge, having an increased number of medications (OR = 1.12; 95% CI, 1.01-1.25; p = 0.028) at discharge, and having a lower hemoglobin at discharge (OR = 1.31; 95% CI, 1.03-1.66; p = 0.031) were independently associated with 30-day readmission.
Conclusion And Relevance: Among adult patients with cancer discharged to home setting after acute inpatient rehabilitation, the 30-day readmission rate of 21% was higher than that reported for other rehabilitation populations but within the range reported for patients with cancer who did not undergo acute inpatient rehabilitation.
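As a quick check of the headline figure, the 21% rate is simply 55 readmissions out of 257 discharges. The confidence interval below is an illustrative Wald interval added here for orientation; it is not reported by the study.
```python
from math import sqrt

readmitted, n = 55, 257
p = readmitted / n                  # 0.214 -> the reported 21%
se = sqrt(p * (1 - p) / n)          # Wald standard error for a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"30-day readmission rate: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```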
abstract_id: PUBMED:37934256
The unmet needs of patients in the early rehabilitation stage after lung cancer surgery: a qualitative study based on Maslow's hierarchy of needs theory. Purpose: This study aimed to explore the unmet needs of lung cancer patients in early rehabilitation, based on Maslow's hierarchy of needs theory.
Methods: Information on the experiences of 20 patients was collected through semi-structured interviews. The interviews were conducted in the surgical nursing clinic within 1 week of discharge from hospital. The data were analysed using a combination of deductive (theory-driven) and inductive (data-driven) methods, using Maslow's Hierarchy of Needs as a framework for identifying and organising themes.
Results: Patients had a mean age of 50.92 years (SD 11.88); n = 11 (55%) were female. Major themes aligned with the dimensions of Maslow's hierarchy of needs model. Five major themes with 12 corresponding sub-themes emerged: (1) physiological needs, including "self-care and independence in life", "return to pre-operative status as soon as possible", "increase exercise under specialist guidance" and "reduce cough and pain and improve sleep quality"; (2) safety and security needs, such as "symptom management", "regulation of the emotions of worry and fear" and "access accurate treatment information"; (3) love and belonging needs, including "accompany family members" and "chat with friends"; (4) esteem needs: "live with dignity"; (5) self-actualization, such as "accept and submit to the reality of cancer" and "live meaningfully".
Conclusions: The findings of this study indicated that there were many unmet needs for patients during the early recovery period after lung cancer surgery. An overview of the different areas of need identified in this study may guide future research and development of interventions to improve patients' quality of life during the home rehabilitation phase.
abstract_id: PUBMED:19264838
Cancer survivors' rehabilitation needs in a primary health care context. Background: Studies of cancer survivors' rehabilitation needs have mostly addressed specific areas of needs, e.g. physical aspects and/or rehabilitation needs in relation to specific cancer types.
Objective: To assess cancer survivors' perceived need for physical and psychosocial rehabilitation, and whether these needs had been presented to and discussed with their GP.
Methods: A survey among a cohort of cancer survivors approximately 15 months after diagnosis. The questionnaire consisted of an ad hoc questionnaire on rehabilitation needs and two validated questionnaires: the SF-12 and the European Organisation for Research and Treatment of Cancer quality-of-life questionnaire, the QLQ-C30 version 3.
Results: Among 534 eligible patients, we received 353 (66.1%) answers. Two-thirds of the cancer survivors had discussed physical rehabilitation needs with their GPs. Many (51%) feared cancer relapse, but they rarely presented this fear to the GP or the hospital staff. The same applied to social problems and problems within the family. Good physical and mental condition and low confidence in the GP were associated with no contact with the GP after hospital discharge.
Conclusion: Cancer survivors have many psychosocial rehabilitation needs and intervention should effectively target these needs. If this task is assigned to the GPs, they need to be proactive when assessing psychosocial aspects.
Answer: After discharge from the hospital, former cancer patients have several psychosocial rehabilitation needs that have been identified through various studies. These needs encompass:
1. Continuous support and information about rehabilitation opportunities, as patients require ongoing assistance and access to relevant information to navigate their post-treatment phase (PUBMED:18792854).
2. Support for the family, as the family unit is often affected by the cancer experience and may need resources to cope and assist the patient (PUBMED:18792854).
3. Psychological help that addresses the fear of cancer relapse, which is a prevalent concern among patients and their social networks (PUBMED:18792854).
4. Social support needs, as patients may require help in re-engaging with their social circles and community activities (PUBMED:18792854).
5. Assistance in how friends and acquaintances relate to the patients, which can involve educating the patient's social circle on how to provide appropriate support (PUBMED:18792854).
6. Information needs, particularly for men with prostate cancer, who require optimal information provision throughout the cancer care process (PUBMED:34402768).
7. Coordination efforts among stakeholders to support follow-up care, which involves structured healthcare processes to improve services offered to patients (PUBMED:34402768).
8. Knowledge about supportive care resources, which may not be uniformly available or known to patients (PUBMED:34402768).
9. Addressing psychological distress, marital status, and sex, which are factors significantly associated with physical and daily living needs (PUBMED:27344328).
10. Rehabilitation goals tailored to individual burdens and the whole family, especially in pediatric oncological rehabilitation (PUBMED:28266259).
11. Systematic, comprehensive assessments of rehabilitation needs throughout clinical cancer care, as there is often a lack of systematic documentation and referral to municipal rehabilitation services (PUBMED:26808254, PUBMED:28912923).
12. A systematic procedure to assess needs in clinical practice, as many cancer patients are left without a formal assessment of their survivorship care and rehabilitation needs (PUBMED:28912923).
13. Addressing unmet needs in the early recovery period after lung cancer surgery, which includes physiological, safety and security, love and belonging, esteem, and self-actualization needs (PUBMED:37934256).
14. Proactive assessment of psychosocial needs in primary care, as cancer survivors often fear relapse but rarely present this fear, or their social and family problems, to their GP; if this task is assigned to GPs, they need to be proactive when assessing psychosocial aspects (PUBMED:19264838).
Instruction: Road rage: a psychiatric phenomenon?
Abstracts:
abstract_id: PUBMED:35966943
Saudi Arabian road accident mortality and traffic safety interventions dataset (2010-2020). Increased traffic volumes worldwide have resulted in an increased number of road accident injuries and mortalities. This global phenomenon motivated the United Nations (UN) to initiate a decade-long global road safety plan in 2010. In response, Saudi Arabia concurrently initiated a comprehensive road safety program, supported by detailed and comprehensive road safety data for the Eastern Province (EP) of Saudi Arabia. The contributed EP-Traffic-Mortality-and-Policy-Interventions Dataset provides multidimensional road safety data for 2010-2020 via two primary and five secondary data subsets. The first primary subset provides road accident mortality data. The five secondary data subsets reflect road accident mortalities at different time scales and administrative (provincial or governorate) levels. The second primary subset provides details of traffic safety policy interventions implemented during the same period. Researchers and policymakers can use this comprehensive dataset to study accident mortality patterns across various geospatial and time scales and analyze the effectiveness of policies intended to mitigate road accident mortalities.
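For readers who want to work with a dataset of this shape, the sketch below shows one way the mortality subset could be aggregated by year and governorate. The file and column names are assumptions for illustration; the actual EP dataset schema may differ.
```python
import pandas as pd

# Load the road-accident mortality subset (hypothetical file/columns).
df = pd.read_csv("ep_traffic_mortality.csv")

# Province-level annual totals, e.g. to compare against the dates of
# specific traffic safety policy interventions.
annual = df.groupby("year")["deaths"].sum()

# Governorate-by-year table for finer geospatial patterns.
by_gov = (df.groupby(["governorate", "year"])["deaths"]
            .sum()
            .unstack(fill_value=0))

print(annual)
print(by_gov.head())
```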
abstract_id: PUBMED:23896449
The contribution of on-road studies of road user behaviour to improving road safety. For over 40 years transport safety researchers have been using methods of vehicle instrumentation to gain greater insights into the factors that contribute to road user crash risk and the associated crash factors. In the previous decade in particular, the widespread availability of lower-cost and more advanced vehicle instrumentation and recording technologies has supported an increasing number of on-road research studies worldwide. The design of these studies ranges from multi-method studies using instrumented test vehicles and defined driving routes, to field operational tests, through to much larger and more naturalistic studies. It is timely to assess the utility of these methods for studying the influences of driver characteristics and states, the design and operation of the road system, and the influences of in-vehicle technologies on behaviour and safety for various road user groups. This special issue considers the extent to which on-road studies using vehicle instrumentation have been used to advance knowledge across these areas of road safety research. The papers included in this issue illustrate how research using instrumented test vehicles continues to generate new knowledge, and how the larger scale United States and European naturalistic and field operational test studies are providing a wealth of data about road user behaviour in real traffic. This is balanced with a number of studies that present methodological developments in data collection and analysis methods that, while promising, need further validation. The use of on-road methods to accurately describe the behaviours occurring in everyday real-world conditions, to quantify the risks of safety-critical events, and to improve understanding of the factors that contribute to risk clearly has huge potential to promote further reductions in road trauma.
abstract_id: PUBMED:33071583
Corrugation of an unpaved road surface under vehicle weight. Road corrugation refers to the formation of periodic, transverse ripples on unpaved road surfaces. It forms spontaneously on an initially flat surface under heavy traffic and can be considered to be a type of unstable growth phenomenon, possibly caused by the local volume contraction of the underlying soil due to a moving vehicle's weight. In the present work, we demonstrate a possible mechanism for road corrugation using experimental data of soil consolidation and numerical simulations. The results indicate that the vertical oscillation of moving vehicles, which is excited by the initial irregularities of the surface, plays a key role in the development of corrugation.
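The excitation mechanism described above can be illustrated with a toy quarter-car model: a sprung mass traversing a slightly wavy surface develops a fluctuating contact force whose peaks are where extra compaction, and hence corrugation growth, would be expected. This is a generic sketch with illustrative parameters, not the simulation used in the study.
```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the paper).
m, k, c = 250.0, 2.0e4, 1.5e3   # mass [kg], stiffness [N/m], damping [N*s/m]
A, L, v = 0.01, 1.0, 10.0       # ripple amplitude [m], wavelength [m], speed [m/s]

def rhs(t, y):
    z, zdot = y
    h = A * np.sin(2 * np.pi * v * t / L)         # road elevation under the wheel
    return [zdot, (-k * (z - h) - c * zdot) / m]  # vertical dynamics about equilibrium

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=1e-3)

# The fluctuating spring force k*(z - h) is what would locally compact the soil.
force = k * (sol.y[0] - A * np.sin(2 * np.pi * v * sol.t / L))
print(f"peak dynamic spring force: {abs(force).max():.0f} N")
```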
abstract_id: PUBMED:33995563
Regulating Road Rage. Road rage has been a problem since the advent of cars. Given the ubiquity of road rage, and its potentially devastating consequences, understanding road rage and developing interventions to curb it are important priorities. Emerging theoretical and empirical advances in the study of emotion and emotion regulation have provided new insights into why people develop road rage and how it can be prevented and treated. In the current article, we suggest an integrative conceptual framework for understanding road rage, based upon a psychological analysis of emotion and emotion regulation. We begin by defining road rage and other key constructs. We then consider the interplay between road rage generation and road rage regulation. Using an emotion regulation framework, we describe key points at which emotion-regulation difficulties can lead to road rage, followed by strategies that may alleviate these difficulties. We suggest that this framework usefully organizes existing research on road rage, while exposing key directions for future research.
abstract_id: PUBMED:30229895
BRAZIL ROAD-KILL: a data set of wildlife terrestrial vertebrate road-kills. Mortality from collision with vehicles is the most visible impact of road traffic on wildlife. Mortality due to roads (hereafter road-kill) can affect the population dynamics of many species and can, therefore, increase the risk of local decline or extinction. This is especially true in Brazil, where plans for road network upgrading and expansion overlap biodiversity hotspot areas, which are of high importance for global conservation. Researchers, conservationists and road planners face the challenge of defining a national strategy for road mitigation and wildlife conservation. The main goal of this dataset is a compilation of geo-referenced road-kill data from published and unpublished road surveys. This is the first Data Paper in the BRAZIL series (see ATLANTIC, NEOTROPICAL, and BRAZIL collections of Data Papers published in Ecology), which aims to make road-kill data for species in the Brazilian regions publicly available. The dataset encompasses road-kill records from 45 personal communications and 26 studies published in peer-reviewed journals, theses and reports. The road-kill dataset comprises 21,512 records, 83% of which are identified to the species level (n = 450 species). The dataset includes records of 31 amphibian species, 90 reptile species, 229 bird species, and 99 mammal species. One species is classified as Endangered, eight as Vulnerable and twelve as Near Threatened. The species with the highest number of records are: Didelphis albiventris (n = 1,549), Volatinia jacarina (n = 1,238), Cerdocyon thous (n = 1,135), Helicops infrataeniatus (n = 802), and Rhinella icterica (n = 692). Most of the records came from southern Brazil. However, observations of the road-kill incidence for non-Least Concern species are more widely spread across the country. This dataset can be used to identify which taxa seem to be vulnerable to traffic, to analyze temporal and spatial patterns of road-kill at local, regional and national scales, and to understand the effects of road-kill on population persistence. It may also contribute to studies that aim to understand the influence of landscape and environmental factors on road-kills, improve our knowledge of road-related strategies for biodiversity conservation, and serve as complementary information in large-scale and macroecological studies. No copyright or proprietary restrictions are associated with the use of this data set other than citation of this Data Paper.
abstract_id: PUBMED:34117544
The nexus between road transport intensity and road-related CO2 emissions in G20 countries: an advanced panel estimation. This study determines the dynamic linkages between road transport intensity, road passenger transport, road freight transport, and road carbon emissions in G20 countries in the presence of economic growth, urbanization, crude oil price, and trade openness for the period 1990 to 2016, under a multivariate framework. This study employs the residual-based Kao and Westerlund cointegration techniques to find long-run cointegration, and continuously updated bias-corrected (CUP-BC) and continuously updated fully modified (CUP-FM) methods to check the long-run elasticities between the variables. The long-run estimators' findings suggest a positive and significant impact of road transport intensity, road passenger transport, and road freight transport on road transport CO2 emissions. Economic growth and urbanization are significant contributing factors to road transport CO2 emissions, while trade openness and crude oil price significantly reduce road transport CO2 emissions. The Dumitrescu and Hurlin causality test results disclose unidirectional causality from road transport intensity and road freight transport to road transport CO2 emissions. However, the causality between road passenger transport and road transport CO2 emissions is bidirectional. Finally, comprehensive policy options such as subsidizing environmentally friendly technologies, developing green transport infrastructure, and enacting decarbonizing regulations are suggested to address the G20 countries' environmental challenges.
abstract_id: PUBMED:35267829
Classification and Characterization of Tire-Road Wear Particles in Road Dust by Density. Tire treads are abraded by friction with the road surface, producing tire tread wear particles (TWPs). TWPs combine with other particles on the road, such as road wear particles (RWPs) and mineral particles (MPs), to form tire-road wear particles (TRWPs). Dust on an asphalt pavement road is composed of various components such as TRWPs, asphalt pavement wear particles (APWPs), MPs, plant-related particles (PRPs), and so on. TRWPs have been considered one of the major contaminants produced by driving, and their properties are important for studying the real abrasion behavior of tire treads during driving as well as environmental contamination. Densities of the TRWPs depend entirely on the amount of the other components deposited in the TWPs. In this study, a method for classifying TRWPs in road dust was developed using density separation, and the classified TRWPs were characterized using image analysis and pyrolytic techniques. Chloroform was used to remove APWPs from mixtures of TRWPs and APWPs. TRWPs were found in the density range of 1.20-1.70 g/cm3. As the particle size of the road dust decreased, the TRWP content in the road dust increased and its density tended to increase slightly. Aspect ratios of the TRWPs varied, and many TRWPs had aspect ratios below 2.0. The aspect ratio range was 1.2-5.2. Rubber compositions of the TRWPs were found to be mainly NR/SBR biblends or NR/BR/SBR triblends.
abstract_id: PUBMED:35954659
Evaluation of the Influence of Road Geometry on Overtaking Cyclists on Two-Lane Rural Roads. Road cycling, both individually and in groups, is common in Spain, where most two-lane rural roads have no cycle lanes. Due to this, and the difference in speed between drivers and cyclists, the overtaking manoeuvre is one of the most dangerous interactions. This study analyses how road geometry influences the overtaking manoeuvre performance. Field data of 1355 overtaking manoeuvres were collected using instrumented bicycles, riding along different rural road segments, and considering individual, medium and large groups of cyclists. The safety variables that characterise the overtaking manoeuvre are overtaking vehicle speed and lateral clearance. These variables have been correlated to geometric characteristics of the road, such as the type of centre line, the horizontal alignment, the speed limit, and the road cross section. Regression models have been fitted considering each cyclist group size and configuration. For individuals and medium groups, wider roads generate higher lateral clearances and lower overtaking speeds, while for large groups only the solid centre line was significant, generating lower clearances and higher speeds. Results suggest that other factors need to be considered, especially for large groups. Results offer a deeper understanding of the phenomenon by providing key points for improving road geometry design, such as widening the shoulders.
abstract_id: PUBMED:37837055
Analysis of the Correctness of Mapping the Passage of a Semi-Trailer through a Road Obstacle on a Road Simulator. Road simulators enable accelerated durability tests under similar-to-real road conditions. However, the road simulator itself must generate signals whose strength and amplitude adequately reproduce the responses registered by the sensors during a real run. Therefore, there is a need to verify the validity of the representation of vehicle runs on a road simulator in terms of the shape of the generated profile and possible sources of uncertainty. The tests in this study were carried out for a multi-axle vehicle passing over an obstacle of known shape. Various signals were registered while the vehicle was passing over the obstacle. The MTS (System Corporation) road simulator's response to the signal given by the obstacle was then checked. The results showed a 99% correlation between the simulation and the road test results. A numerical model of the vehicle was developed to verify the quality of representation of the real conditions by the road simulator, especially in terms of forces resulting from the road profile. Interestingly, the input signal generated by the road simulator reproduced the vehicle response with very good accuracy, as verified using the numerical model.
abstract_id: PUBMED:27548176
Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor. With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods.
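To make the general pipeline concrete, here is a minimal Hough-transform lane-finding sketch in OpenCV. It is a generic illustration, not the method proposed in the paper: the file name and thresholds are assumptions, and the dashed/solid discrimination is reduced to a crude fill-ratio heuristic.
```python
import cv2
import numpy as np

img = cv2.imread("road.jpg")                 # assumed input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform returns candidate lane segments.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # Crude dashed-vs-solid cue: fraction of edge pixels along the segment.
        num = max(abs(x2 - x1), abs(y2 - y1)) + 1
        xs = np.linspace(x1, x2, num).astype(int)
        ys = np.linspace(y1, y2, num).astype(int)
        fill = (edges[ys, xs] > 0).mean()
        color = (0, 255, 0) if fill > 0.8 else (0, 0, 255)  # solid vs dashed
        cv2.line(img, (x1, y1), (x2, y2), color, 2)

cv2.imwrite("lanes_annotated.jpg", img)
```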
Answer: The abstracts provided do not directly address the question of whether road rage is a psychiatric phenomenon. However, the abstract from PUBMED:33995563 discusses road rage in the context of emotion and emotion regulation, suggesting that road rage can be understood through a psychological analysis. The article proposes an integrative conceptual framework for understanding road rage, which is based on the study of emotion and emotion regulation. It describes the generation of road rage and strategies that may alleviate the difficulties leading to it. This suggests that road rage may have psychological underpinnings, which could be relevant to psychiatric study, although the abstract does not explicitly classify road rage as a psychiatric phenomenon.
The other abstracts focus on various aspects of road safety, traffic interventions, and environmental impacts of road use, but do not provide information on the psychiatric classification of road rage. For instance, PUBMED:35966943 discusses road accident mortality and traffic safety interventions in Saudi Arabia, while PUBMED:23896449 talks about on-road studies of road user behavior to improve road safety. PUBMED:33071583 examines the physical phenomenon of road corrugation, PUBMED:30229895 compiles data on wildlife road-kills in Brazil, PUBMED:34117544 explores the relationship between road transport intensity and CO2 emissions, PUBMED:35267829 characterizes tire-road wear particles, PUBMED:35954659 evaluates the influence of road geometry on overtaking cyclists, PUBMED:37837055 analyzes the accuracy of road simulators in replicating real road conditions, and PUBMED:27548176 presents a method for road lane detection that discriminates between dashed and solid lanes.
In summary, while the abstracts do not directly answer the question, the one from PUBMED:33995563 suggests that road rage can be analyzed through psychological frameworks, which may imply that it has aspects that could be of interest to psychiatric research. |
Instruction: Does deep water running reduce exercise-induced breast discomfort?
Abstracts:
abstract_id: PUBMED:17535854
Does deep water running reduce exercise-induced breast discomfort? Aim: To establish whether exercise-induced vertical breast displacement and discomfort in women with large breasts were reduced during deep water running compared to treadmill running.
Methods: Sixteen women (mean age = 32 years, range 19-43 years; mean mass = 74.1 kg, range 61-114 kg; mean height = 1.7 m, range 1.61-1.74 m), who were professionally sized to wear a C+ bra cup, were recruited as representative of women with large breasts. After extensive familiarisation, vertical breast motion of the participants was quantified as they ran at a self-selected stride rate on a treadmill and in 2.4 m deep water. Immediately after running, the subjects rated their breast discomfort and breast pain (visual analogue scale) and their perceived exertion (Borg scale). Breast discomfort, breast pain, perceived exertion, vertical breast displacement and vertical breast velocity were compared between the two experimental conditions.
Results: Exercise-induced breast discomfort was significantly less and perceived exertion was significantly greater during deep water running relative to treadmill running. Although there was no significant between-condition difference in vertical breast displacement, mean peak vertical breast velocity was significantly (p<0.05) less during deep water (upward mean (SD): 29.7 (14.0) cm·s⁻¹; downward: 31.1 (17.0) cm·s⁻¹) compared to treadmill running (upward mean (SD): 81.4 (21.7) cm·s⁻¹; downward: 100.0 (25.0) cm·s⁻¹).
Conclusion: Deep water running was perceived as a more strenuous but comfortable exercise mode for women with large breasts. Increased comfort was attributed to reduced vertical breast velocity rather than reduced vertical breast displacement.
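Using the peak vertical breast velocities reported in the results above, the relative reduction in deep water can be computed directly; this back-of-envelope calculation is added purely for illustration.
```python
treadmill = {"upward": 81.4, "downward": 100.0}   # reported mean peak velocity, cm/s
deep_water = {"upward": 29.7, "downward": 31.1}

for direction in treadmill:
    reduction = (treadmill[direction] - deep_water[direction]) / treadmill[direction]
    print(f"{direction}: {reduction:.0%} lower peak vertical breast velocity in water")
```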
abstract_id: PUBMED:20019639
Breast elevation and compression decrease exercise-induced breast discomfort. Purpose: The aim of this study was to investigate whether a sports bra designed to both elevate and compress the breasts could decrease exercise-induced breast discomfort and bra fit discomfort experienced by women with large breasts relative to a standard encapsulation sports bra.
Methods: Breast kinematic data, bra fit comfort, exercise-induced breast discomfort, and bra rankings in terms of preference to wear during running were compared in 20 women with large breasts who ran on a treadmill under three bra conditions: an experimental bra that incorporated both breast compression and elevation, an encapsulation sports bra, and a placebo bra. Subjective data were collected immediately before and after the treadmill running trials.
Results: Exercise-induced breast discomfort (P < 0.01) and bra discomfort (P < 0.01) were significantly less for the experimental bra condition relative to the sports bra and placebo bra. This reduction in discomfort was achieved through greater breast elevation (P < 0.01) and compression, with no difference found in vertical breast displacement (P = 0.12) or vertical breast velocity (P = 0.06).
Conclusions: The design features of greater breast elevation and compression provided significantly increased breast and bra comfort compared with a standard encapsulation sports bra during physical activity for women with large breasts.
abstract_id: PUBMED:2030055
The intensity of exercise in deep-water running. The intensity of exercise during 30-min sessions of continuous deep-water running at a "hard" pace was determined by monitoring oxygen consumption (VO2), respiratory quotient (RQ), heart rate, perceived physical effort and perceived aches and pains in the legs in eight competitive runners, six of whom had not previously practised the technique. The intensity was compared with that of 30-min runs on a treadmill at hard and "normal" training paces and a 30-min outdoor run at normal training pace. VO2 during the last session of deep-water running (73% of maximum VO2) was not significantly different from that of the treadmill hard run (78%), but was significantly higher than that of the treadmill normal run (62%). Similar results were obtained for RQ, perceived effort and pain. In contrast, heart rates for deep-water running were similar to those of normal training and significantly less than those of the treadmill hard run. The disparity between VO2 and heart rate for deep-water running may reflect cooling or increased venous return caused by water immersion. It is concluded that deep-water running can be performed at a sufficient intensity for a sufficient period to make it an effective endurance training technique.
abstract_id: PUBMED:8873185
Perceptual responses to deep water running and treadmill exercise. Perceived exertion during deep water running and treadmill exercise was measured to examine gender and mode specific responses. Deep water running to VO2 peak was performed in 3-min. stages at leg speeds controlled by a metronome. Treadmill exercise was performed at matched leg speeds. VO2 and heart rate were continuously monitored by open circuit spirometry and radiotelemetry. Perceived exertion was measured using Borg's 6-20 point scale. Statistical analyses were performed using multiple linear regression with dummy coded discrete variables. Ratings of perceived exertion were significantly higher during deep water running when exercising at equal leg speeds. Mean rated perceived exertion at each stage of the test for either exercise mode was not significantly different between men and women.
abstract_id: PUBMED:23947581
A multimodal physiotherapy programme plus deep water running for improving cancer-related fatigue and quality of life in breast cancer survivors. The aim of the study was to assess the feasibility and effectiveness of aquatic-based exercise in the form of deep water running (DWR) as part of a multimodal physiotherapy programme (MMPP) for breast cancer survivors. A controlled clinical trial was conducted in 42 primary breast cancer survivors recruited from community-based Primary Care Centres. Patients in the experimental group received a MMPP incorporating DWR, 3 times a week, for an 8-week period. The control group received a leaflet containing instructions to continue with normal activities. Statistically significant improvements and intergroup effect size were found for the experimental group for Piper Fatigue Scale-Revised total score (d = 0.7, P = 0.001), as well as behavioural/severity (d = 0.6, P = 0.05), affective/meaning (d = 1.0, P = 0.001) and sensory (d = 0.3, P = 0.03) domains. Statistically significant differences between the experimental and control groups were also found for general health (d = 0.5, P < 0.05) and quality of life (d = 1.3, P < 0.05). All participants attended over 80% of sessions, with no major adverse events reported. The results of this study suggest MMPP incorporating DWR decreases cancer-related fatigue and improves general health and quality of life in breast cancer survivors. Further, the high level of adherence and lack of adverse events indicate such a programme is safe and feasible.
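The between-group effect sizes (d) quoted above are commonly computed as a pooled-standard-deviation Cohen's d. The formula below is the standard definition and is given for orientation only; the abstract does not state which variant the authors used.
```latex
d = \frac{\bar{x}_{\mathrm{exp}} - \bar{x}_{\mathrm{ctrl}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```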
abstract_id: PUBMED:14748454
The physiology of deep-water running. Deep-water running is performed in the deep end of a swimming pool, normally with the aid of a flotation vest. The method is used for purposes of preventing injury and promoting recovery from strenuous exercise and as a form of supplementary training for cardiovascular fitness. Both stroke volume and cardiac output increase during water immersion: an increase in blood volume largely offsets the cardiac decelerating reflex at rest. At submaximal exercise intensities, blood lactate responses to exercise during deep-water running are elevated in comparison to treadmill running at a given oxygen uptake (VO2). While VO2, minute ventilation and heart rate are decreased under maximal exercise conditions in the water, deep-water running nevertheless can be justified as providing an adequate stimulus for cardiovascular training. Responses to training programmes have confirmed the efficacy of deep-water running, although positive responses are most evident when measured in a water-based test. Aerobic performance is maintained with deep-water running for up to 6 weeks in trained endurance athletes; sedentary individuals benefit more than athletes in improving maximal oxygen uptake. There is some limited evidence of improvement in anaerobic measures and in upper body strength in individuals engaging in deep-water running. A reduction in spinal loading constitutes a role for deep-water running in the prevention of injury, while an alleviation of muscle soreness confirms its value in recovery training. Further research into the applications of deep-water running to exercise therapy and athletes' training is recommended.
abstract_id: PUBMED:8819239
Metabolic responses and mechanisms during water immersion running and exercise. The low-impact nature of exercise in the water has increased interest in this form of exercise, and specifically in water running as a cross-training modality. It is used as a possible preventative and therapeutic modality for rehabilitation. The high-impact nature of land running predisposes the runner to stress of the lower limbs and overuse injuries. The need to reduce impact, as well as to provide a low-impact or non-weight-bearing condition for rehabilitation, has led runners and their coaches to the water. This increased interest among coaches and their athletes, attending sports medicine physicians and rehabilitation professionals has stimulated research into water immersion to the neck (WI) running. Exercise in the water has long been used by rehabilitation professionals with patients who have physically debilitating conditions (i.e. arthritis, musculoskeletal disorders) as it provides a medium for even those with limited mobility to exercise and relax their muscles. Numerous comparative studies into WI running from a metabolic as well as a training perspective have been published. WI has also long been used to simulate weightlessness for the comparative study of cardiorespiratory function and thermoregulation. WI and the associated cephalad shift in blood volume have implications for exercise responses during WI running exercise. In addition, the non-weight-bearing nature of WI running also raises questions about the cross-training benefits of WI running. WI running style and prior familiarity with the activity have been found to have a direct relationship with the comparability of WI to land running. This review presents current research into WI running, training specificity and comparative physiology.
abstract_id: PUBMED:20155571
Maximal and submaximal physiological responses to adaptation to deep water running. The aim of the study was to compare physiological responses between runners adapted and not adapted to deep water running at maximal intensity and the intensity equivalent to the ventilatory threshold. Seventeen runners, either adapted (n = 10) or not adapted (n = 7) to deep water running, participated in the study. Participants in both groups undertook a maximal treadmill running and deep water running graded exercise test in which cardiorespiratory variables were measured. Interactions between adaptation (adapted vs. non-adapted) and condition (treadmill running vs. deep water running) were analysed. The main effects of adaptation and condition were also analysed in isolation. Runners adapted to deep water running experienced less of a reduction in maximum oxygen consumption (VO2max) in deep water running compared with treadmill running than runners not adapted to deep water running. Maximal oxygen consumption, maximal heart rate, maximal ventilation, VO2 at the ventilatory threshold, heart rate at the ventilatory threshold, and ventilation at the ventilatory threshold were significantly higher during treadmill than deep water running. Therefore, we conclude that adaptation to deep water running reduces the difference in VO2max between the two modalities, possibly due to an increase in muscle recruitment. The results of this study support previous findings of a lower maximal and submaximal physiological response during deep water running for most of the measured parameters.
abstract_id: PUBMED:26256619
Effect of sports bra type and gait speed on breast discomfort, bra discomfort and perceived breast movement in Chinese women. This study investigated the effect of sports bra type (encapsulation versus compression) and gait speed on perceptions of breast discomfort, bra discomfort and breast movement reported by Chinese women. Visual analogue scales were used to evaluate breast discomfort, bra component discomfort and perceived breast movement of 21 Chinese participants when they wore an encapsulation or a compression sports bra, while static and while exercising at three different gait speeds. Participants perceived less breast discomfort and breast movement when wearing a compression bra compared to an encapsulation bra at a high gait speed, suggesting that compression bras are likely to provide the most effective support for Chinese women. However, significantly higher bra discomfort was perceived in the compression bra compared to the encapsulation bra when static and at the lower gait speed, implying that ways to modify the design of sports bras, particularly the straps, should be investigated to provide adequate and comfortable breast support.
Practitioner Summary: The compression sports bra provided more comfortable support than the encapsulation sports bra for these Chinese women when running on a treadmill. However, these women perceived higher bra discomfort when wearing the compression bra when stationary. Further research is needed to modify the design of sports bras, particularly the straps, to provide adequate and comfortable breast support.
abstract_id: PUBMED:16648650
Older women's cardiovascular responses to deep-water running. The aim of the study was to examine submaximal and maximal physiological responses and perceived exertion during deep-water running with a vest compared with the responses during treadmill running in healthy elderly women. Eleven healthy women 70 +/- 2 years old participated. On two different occasions they performed a graded maximal exercise test on a treadmill on land and a graded maximal exercise test in water wearing a vest. At maximal work the oxygen uptake was 29% lower (p < .05), the heart rate was 8% lower (p < .05), and the ventilation was 16% lower (p < .05) during deep-water running than during treadmill running. During submaximal absolute work the heart rate was higher during deep-water running than during treadmill running for the elderly women. The participants had lower maximal oxygen uptake, heart rate, ventilation, respiratory-exchange ratio, and rate of perceived exertion during maximal deep-water running with a vest than during maximal treadmill running. These responses were, however, higher during submaximal deep-water running than during treadmill running.
Answer: Yes, deep water running does reduce exercise-induced breast discomfort. A study involving women with large breasts found that exercise-induced breast discomfort was significantly less during deep water running compared to treadmill running. This increased comfort was attributed to reduced vertical breast velocity rather than reduced vertical breast displacement (PUBMED:17535854). Additionally, deep water running was perceived as a more strenuous but comfortable exercise mode for women with large breasts.
Instruction: Homocysteine, cysteine, and body composition in the Hordaland Homocysteine Study: does cysteine link amino acid and lipid metabolism?
Abstracts:
abstract_id: PUBMED:18779291
Homocysteine, cysteine, and body composition in the Hordaland Homocysteine Study: does cysteine link amino acid and lipid metabolism? Background: The lean phenotype of cystathionine beta-synthase-deficient homocystinuria and the positive association of plasma total cysteine (tCys) with body mass index (BMI) suggest that total homocysteine (tHcy) and tCys are associated with body composition.
Objectives: We aimed to study associations of tCys and tHcy with body composition in the general population.
Design: Using data from 7038 Hordaland Homocysteine Study participants, we fitted regression models and dose-response curves of tCys and tHcy with BMI. In 5179 participants, we investigated associations of tCys and tHcy with fat mass and lean mass and examined whether changes in these aminothiols predicted body composition 6 y later.
Results: tCys showed positive associations with BMI (partial r = 0.28, P < 0.001) and fat mass (partial r = 0.25, P < 0.001), independent of diet, exercise, and plasma lipids. Women in the highest tCys quintile had fat mass 9 kg (95% CI: 8, 10 kg; P < 0.001) greater than that of women in the lowest quintile. The corresponding values for men were 6 kg (95% CI: 5, 7 kg; P < 0.001; P < 0.001 in both sexes, ANOVA across quintiles). The rise in tCys over 6 y was associated with greater fat mass at follow-up (P < 0.001), but there was no effect on lean mass. tHcy was not associated with lean mass and became significantly inversely associated with BMI and fat mass only after adjustment for tCys.
Conclusions: tCys concentrations show a strong positive association with BMI, mediated through fat mass. The link between cysteine and lipid metabolism deserves further investigation.
abstract_id: PUBMED:19168166
Cysteine, homocysteine and bone mineral density: a role for body composition? Background: Plasma total cysteine (tCys) and homocysteine (tHcy) are associated with body composition, which in turn affects bone mineral density (BMD).
Objectives: To investigate whether associations of tCys and tHcy with BMD are mediated through body composition (fat mass and/or lean mass).
Design: Using data from 5238 Hordaland Homocysteine Study participants, we fit multiple linear regression models and concentration-response curves to explore the relationships between tCys, tHcy, and BMD, with and without adjustment for body mass index (BMI), lean mass and/or fat mass.
Results: All associations were stronger in women. tCys was positively associated with BMD (women, partial r=0.11; men, partial r=0.07; p ≤ 0.001 for both), but this association was markedly attenuated after adjustment for fat mass. tHcy showed an inverse association with BMD in women (partial r=-0.09, p<0.001), which remained significant after adjustment for lean mass and fat mass. In men and women, changes in tCys or tHcy during 6 years were not associated with BMD at follow-up. Weight gain during 6 years predicted higher BMD at follow-up (p ≤ 0.009) independent of nutrient intakes, physical activity and baseline BMI. Baseline tHcy inversely predicted BMD measured 6 years later (partial r=-0.11, p<0.001 in women; partial r=-0.07, p=0.002 in men) independent of baseline BMI, while a positive association of baseline tCys with BMD at follow-up (partial r=0.10 in women, 0.09 in men, p ≤ 0.001) disappeared after adjustment for baseline BMI.
Conclusion: tHcy is inversely associated with BMD independent of body composition, while the positive association of tCys with BMD appears to be mainly mediated through fat mass.
abstract_id: PUBMED:29414770
Homocysteine regulates fatty acid and lipid metabolism in yeast. S-Adenosyl-l-homocysteine hydrolase (AdoHcy hydrolase; Sah1 in yeast/AHCY in mammals) degrades AdoHcy, a by-product and strong product inhibitor of S-adenosyl-l-methionine (AdoMet)-dependent methylation reactions, to adenosine and homocysteine (Hcy). This reaction is reversible, so any elevation of Hcy levels, such as in hyperhomocysteinemia (HHcy), drives the formation of AdoHcy, with detrimental consequences for cellular methylation reactions. HHcy, a pathological condition linked to cardiovascular and neurological disorders, as well as fatty liver among others, is associated with a deregulation of lipid metabolism. Here, we developed a yeast model of HHcy to identify mechanisms that dysregulate lipid metabolism. Hcy supplementation to wildtype cells up-regulated cellular fatty acid and triacylglycerol content and induced a shift in fatty acid composition, similar to changes observed in mutants lacking Sah1. Expression of the irreversible bacterial pathway for AdoHcy degradation in yeast allowed us to dissect the impact of AdoHcy accumulation on lipid metabolism from the impact of elevated Hcy. Expression of this pathway fully suppressed the growth deficit of sah1 mutants as well as the deregulation of lipid metabolism in both the sah1 mutant and Hcy-exposed wildtype, showing that AdoHcy accumulation mediates the deregulation of lipid metabolism in response to elevated Hcy in yeast. Furthermore, Hcy supplementation in yeast led to increased resistance to cerulenin, an inhibitor of fatty acid synthase, as well as to a concomitant decline of condensing enzymes involved in very long-chain fatty acid synthesis, in line with the observed shift in fatty acid content and composition.
abstract_id: PUBMED:24815046
Body composition in patients with classical homocystinuria: body mass relates to homocysteine and choline metabolism. Introduction: Classical homocystinuria is a rare genetic disease caused by cystathionine β-synthase deficiency, resulting in homocysteine accumulation. Growing evidence suggests that reduced fat mass in patients with classical homocystinuria may be associated with alterations in choline and homocysteine pathways. This study aimed to evaluate the body composition of patients with classical homocystinuria, identifying changes in body fat percentage and correlating findings with biochemical markers of homocysteine and choline pathways, lipoprotein levels and bone mineral density (BMD) T-scores.
Methods: Nine patients with classical homocystinuria were included in the study. Levels of homocysteine, methionine, cysteine, choline, betaine, dimethylglycine and ethanolamine were determined. Body composition was assessed by bioelectrical impedance analysis (BIA) in patients and in 18 controls. Data on the last BMD measurement and lipoprotein profile were obtained from medical records.
Results: Of 9 patients, 4 (44%) had a low body fat percentage, but no statistically significant differences were found between patients and controls. Homocysteine and methionine levels were negatively correlated with body mass index (BMI), while cysteine showed a positive correlation with BMI (p<0.05). There was a trend toward an association between total choline levels and body fat percentage (r=0.439, p=0.07). HDL cholesterol correlated with choline and ethanolamine levels (r=0.757, p=0.049; r=0.847, p=0.016, respectively), and total cholesterol also correlated with choline levels (r=0.775, p=0.041). There was no association between BMD T-scores and body composition.
Conclusions: These results suggest that reduced fat mass is common in patients with classical homocystinuria, and that alterations in homocysteine and choline pathways affect body mass and lipid metabolism.
abstract_id: PUBMED:32994931
Cysteine and homocysteine as biomarkers of various diseases. Cysteine and homocysteine (Hcy) are both sulfur-containing amino acids (AAs) produced from methionine, another sulfur-containing amino acid, which is converted to Hcy and further converted to cysteine. This article aims to highlight the link between cysteine and Hcy, their mechanisms, the important functions they play in the body, and their role as biomarkers for various types of diseases, so that cysteine and Hcy may be used as biomarkers to prevent and diagnose many diseases. This review concluded that hyperhomocysteinemia (elevated levels of homocysteine) is considered toxic to cells and is associated with different health problems. Hyperhomocysteinemia and low levels of cysteine are associated with various diseases such as cardiovascular disease (CVD), ischemic stroke, neurological disorders, diabetes, cancers such as lung and colorectal cancer, renal dysfunction-linked conditions, and vitiligo.
abstract_id: PUBMED:11421110
Homocysteine metabolism and risk of cardiovascular diseases: importance of nutritional status regarding folic acid and vitamins B6 and B12. Homocysteine is a thiol-containing amino acid derived from methionine metabolism that can be degraded through two enzymatic pathways: remethylation and trans-sulfuration. In remethylation, homocysteine regenerates methionine. In the trans-sulfuration pathway, homocysteine forms cysteine. Due to its rapid metabolic utilization, the plasma concentration of this amino acid is low. Homocysteine circulates as free thiol, homocystine, or bound to free cysteine or to cysteine residues of proteins. Genetic defects of some enzymes in homocysteine metabolism, or nutritional deficiencies of folic acid and vitamins B6 and B12, lead to an increase in homocysteine plasma concentration and are associated with an increment in cardiovascular diseases. On the basis of clinical and epidemiological studies, homocysteine plasma concentration is considered to be an independent risk factor for the development of atherothrombotic and cardiovascular diseases. The present review describes homocysteine metabolism and the epidemiological evidence showing the association between homocysteine and the incidence of cardiovascular diseases. The mechanisms by which homocysteine produces vascular damage are indicated. Finally, some recommendations are given for the nutritional therapy of patients with hyperhomocysteinemia.
abstract_id: PUBMED:26046927
Stearoyl-CoA Desaturase-1: Is It the Link between Sulfur Amino Acids and Lipid Metabolism? An association between sulfur amino acids (methionine, cysteine, homocysteine and taurine) and lipid metabolism has been described in several experimental and population-based studies. Changes in the metabolism of these amino acids influence serum lipoprotein concentrations, although the underlying mechanisms are still poorly understood. However, recent evidence has suggested that the enzyme stearoyl-CoA desaturase-1 (SCD-1) may be the link between these two metabolic pathways. SCD-1 is a key enzyme for the synthesis of monounsaturated fatty acids. Its main substrates, C16:0 and C18:0, and its products, palmitoleic acid (C16:1) and oleic acid (C18:1), are the most abundant fatty acids in triglycerides, cholesterol esters and membrane phospholipids. A significant suppression of SCD-1 has been observed in several animal models with disrupted sulfur amino acid metabolism, and the activity of SCD-1 is also associated with the levels of these amino acids in humans. This enzyme also appears to be involved in the etiology of metabolic syndromes because its suppression results in decreased fat deposits (regardless of food intake), improved insulin sensitivity and higher basal energy expenditure. Interestingly, this anti-obesogenic phenotype has also been described in humans and animals with sulfur amino acid disorders, which is consistent with the hypothesis that SCD-1 activity is influenced by these amino acids, in particular cysteine, which is a strong and independent predictor of SCD-1 activity and fat storage. In this narrative review, we discuss the evidence linking sulfur amino acids, SCD-1 and lipid metabolism.
abstract_id: PUBMED:22209966
Plasma homocysteine level and hepatic sulfur amino acid metabolism in mice fed a high-fat diet. Purpose: Obesity, a feature of metabolic syndrome, is a risk factor for cardiovascular disease, and elevated plasma homocysteine is associated with increased cardiovascular risk. However, little published information is available concerning the effect of obesity on homocysteine metabolism.
Methods: Hepatic homocysteine metabolism was determined in male C57BL/6 mice fed a high-fat diet for 12 weeks.
Results: High-fat diet increased plasma homocysteine but decreased hepatic homocysteine levels. Hepatic S-adenosylhomocysteine hydrolase levels were down-regulated in the obese mice, which was in part responsible for the decrease in hepatic S-adenosylmethionine/S-adenosylhomocysteine, which served as an index of transmethylation potential. Despite the decrease in hepatic cysteine, hepatic taurine synthesis was activated via up-regulation of cysteine dioxygenase. Hepatic levels of methionine adenosyltransferase I/III, methionine synthase, methylene tetrahydrofolate reductase, and gamma-glutamylcysteine ligase catalytic subunit were unchanged. Obese mice showed elevated betaine-homocysteine methyltransferase and decreased cystathionine beta-synthase activities, although the quantities of these enzymes were unchanged.
Conclusion: This study suggests that plasma homocysteine level is increased in obesity-associated hepatic steatosis, possibly as a result of increased hepatic homocysteine efflux along with an altered sulfur amino acid metabolism.
abstract_id: PUBMED:14642812
The biosynthesis of cysteine and homocysteine in Methanococcus jannaschii. The pathway for the biosynthesis of cysteine and homocysteine in Methanococcus jannaschii has been examined using a gas chromatography-mass spectrometry (GC-MS) stable isotope dilution method to identify and quantitate the intermediates in the pathways. The first step in the pathway, and the one responsible for incorporation of sulfur into both cysteine and methionine, is the reaction between O-phosphohomoserine and a presently unidentified sulfur source present in cell extracts, to produce L-homocysteine. This sulfur source was shown not to be sulfide. The resulting L-homocysteine then reacts with O-phosphoserine to form L-cystathionine, which is cleaved to L-cysteine. The pathway has elements of both the plant and mammalian pathways in that the sulfur is first incorporated into homocysteine using O-phosphohomoserine as the acceptor and the resulting homocysteine, via transsulfuration, supplies the sulfur for cysteine formation. The pathway leading to these two amino acids represents an example of metabolic thrift where the preexisting cellular metabolites O-phosphohomoserine and O-phosphoserine are used as the ultimate source of the carbon framework for the biosynthesis of these amino acids. These findings explain the absence of identifiable genes in the genome of this organism for the biosynthesis of cysteine and homocysteine.
abstract_id: PUBMED:14719920
Visual detection of cysteine and homocysteine. The determination of cysteine and homocysteine levels is of great current interest for the monitoring of disease states. A new colorimetric method for the simultaneous detection of l-cysteine and l-homocysteine has been developed. A fluorescein derivative reacts with the above amino acids, producing their respective thiazolidines and resulting in color changes. Interference from other amino acids and proteins is minimal.
Answer: The Hordaland Homocysteine Study investigated the associations of plasma total cysteine (tCys) and total homocysteine (tHcy) with body composition, and the findings suggest that there may indeed be a link between cysteine, amino acid metabolism, and lipid metabolism. The study found that tCys showed positive associations with body mass index (BMI) and fat mass, independent of diet, exercise, and plasma lipids. Women in the highest tCys quintile had significantly greater fat mass compared to those in the lowest quintile, and a similar pattern was observed in men. The rise in tCys over 6 years was associated with greater fat mass at follow-up, but there was no effect on lean mass. tHcy, on the other hand, was not associated with lean mass and only became significantly inversely associated with BMI and fat mass after adjustment for tCys (PUBMED:18779291).
Additionally, the study found that tCys was positively associated with bone mineral density (BMD), but this association was markedly attenuated after adjustment for fat mass. tHcy showed an inverse association with BMD in women, which remained significant after adjustment for lean mass and fat mass. Changes in tCys or tHcy during 6 years were not associated with BMD at follow-up (PUBMED:19168166).
Furthermore, research in yeast models of hyperhomocysteinemia (HHcy) has shown that homocysteine supplementation up-regulated cellular fatty acid and triacylglycerol content and induced a shift in fatty acid composition. This suggests that elevated homocysteine levels can lead to deregulation of lipid metabolism (PUBMED:29414770).
In patients with classical homocystinuria, a genetic disease causing homocysteine accumulation, reduced fat mass has been associated with alterations in choline and homocysteine pathways, affecting body mass and lipid metabolism (PUBMED:24815046).
Overall, these findings indicate that cysteine and homocysteine are not only biomarkers for various diseases but also play a role in regulating fatty acid and lipid metabolism, which can have implications for body composition (PUBMED:32994931).
Instruction: Endovascular treatment for chronic cerebrospinal venous insufficiency: is the procedure safe?
Abstracts:
abstract_id: PUBMED:21107001
Endovascular treatment for chronic cerebrospinal venous insufficiency: is the procedure safe? Objectives: The aim of this report is to assess the safety of endovascular treatment for chronic cerebrospinal venous insufficiency (CCSVI). Although balloon angioplasty and stenting seem to be safe procedures, there are currently no data on the treatment of a large group of patients with this vascular pathology.
Methods: A total of 564 endovascular procedures (balloon angioplasty or, if this procedure failed, stenting) were performed during 344 interventions in 331 CCSVI patients with associated multiple sclerosis.
Results: Balloon angioplasty alone was performed in 192 cases (55.8%), whereas the stenting of at least one vein was required in the remaining 152 cases (44.2%). There were no major complications (severe bleeding, venous thrombosis, stent migration or injury to the nerves) related to the procedure, except for thrombotic occlusion of the stent in two cases (1.2% of stenting procedures) and surgical opening of the femoral vein to remove an angioplasty balloon in one case (0.3% of procedures). Minor complications included occasional technical problems (2.4% of procedures): difficulty removing the angioplasty balloon or problems with proper placement of the stent, and other medical events (2.1% of procedures): local bleeding from the groin, minor gastrointestinal bleeding or cardiac arrhythmia.
Conclusions: The procedures appeared to be safe and well tolerated by the patients, regardless of the actual impact of the endovascular treatments for venous pathology on the clinical course of multiple sclerosis, which warrants long-term follow-up.
abstract_id: PUBMED:22640503
Reported outcomes after the endovascular treatment of chronic cerebrospinal venous insufficiency. Chronic cerebrospinal venous insufficiency (CCSVI) has recently been implicated as a potential causal factor in the development of multiple sclerosis (MS). The treatment of jugular and azygous vein stenoses, characteristic of CCSVI, has been proposed as a potential component of therapy for MS. In the few short years since Dr. Paulo Zamboni published "A Prospective Open label Study of Endovascular Treatment of Chronic Cerebrospinal Venous Insufficiency", there has been tremendous patient-driven demand for treatment. Concurrently, there have been numerous publications since 2009 addressing CCSVI and its association with MS. The purpose of this article is to present a brief review of CCSVI and its association with MS and to review the available literature to date with a focus on outcomes data.
abstract_id: PUBMED:22088659
Safety of endovascular treatment of chronic cerebrospinal venous insufficiency: a report of 240 patients with multiple sclerosis. Purpose: To evaluate the safety of outpatient endovascular treatment in patients with multiple sclerosis (MS) and chronic cerebrospinal venous insufficiency (CCSVI).
Materials And Methods: A retrospective analysis was performed to assess complications occurring within 30 days of endovascular treatment of CCSVI. The study population comprised 240 patients; 257 procedures were performed over 8 months. The indication for treatment in all patients was symptomatic MS. Of the procedures, 49.0% (126 of 257) were performed in a hospital, and 51.0% (131 of 257) were performed in the office. Primary procedures accounted for 93.0% (239 of 257) of procedures, and repeat interventions accounted for 7% (18 of 257). For patients treated primarily, 87% (208 of 239) had angioplasty, and 11% (26 of 239) had stent placement; 5 patients were not treated. Of patients with restenosis, 50% (9 of 18) had angioplasty, and 50% (9 of 18) had stent placement.
Results: After the procedure, all but three patients were discharged within 3 hours. Headache after the procedure was reported in 8.2% (21 of 257) of patients; headache persisted > 30 days in 1 patient. Neck pain was reported in 15.6% (40 of 257); 52.5% (21 of 40) of these patients underwent stent placement. Three patients experienced venous thrombosis requiring retreatment within 30 days. Sustained intraprocedural arrhythmias were observed in three patients, and two required hospital admission. One of these patients, who was being retreated for stent thrombosis, was hospitalized because of a stress-induced cardiomyopathy.
Conclusions: Endovascular treatment of CCSVI is a safe procedure; there is a 1.6% risk of major complications. Cardiac monitoring is essential to detect intraprocedural arrhythmias. Ultrasonography after the procedure is recommended to confirm venous patency and to identify patients experiencing acute venous thrombosis.
abstract_id: PUBMED:23948669
Feasibility and safety of endovascular treatment for chronic cerebrospinal venous insufficiency in patients with multiple sclerosis. Objective: Chronic cerebrospinal venous insufficiency (CCSVI) is a recently discovered syndrome mainly due to stenoses of internal jugular (IJV) and/or azygos (AZ) veins. The present study retrospectively evaluates the feasibility and safety of endovascular treatment for CCSVI in a cohort of patients with multiple sclerosis (MS).
Methods: From September 2010 to October 2012, 1202 consecutive patients were admitted to undergo phlebography ± endovascular treatment for CCSVI. All the patients had previously been found positive at color Doppler sonography (CDS) for at least two Zamboni criteria for CCSVI and had a neurologist-confirmed diagnosis of MS. Only patients with symptomatic MS were considered for treatment. Percutaneous transluminal angioplasty was carried out as an outpatient procedure at two different institutes. Primary procedures, regarded as the first balloon angioplasty ever performed for CCSVI, and secondary (reintervention) procedures, regarded as interventions performed after venous disease recurrence, were carried out in 86.5% (1037 of 1199) and 13.5% (162 of 1199) of patients, respectively. Procedural success and complications within 30 days were recorded.
Results: Phlebography followed by endovascular recanalization was carried out in 1199 patients, comprising 1219 interventions. Balloon angioplasty alone was performed in 1205 out of 1219 (98.9%) procedures, whereas additional stent placement was required in the remaining 14 procedures (1.1%) following unsuccessful attempts at AZ dilatation. No stents were ever implanted in the IJV. The feasibility rate was as high as 99.2% (1209 interventions). Major complications included one (0.1%) AZ rupture occurring during balloon dilatation and requiring blood transfusion, one (0.1%) severe bleeding in the groin requiring open surgery, two (0.2%) surgical openings of the common femoral vein to remove balloon fragments, and three (0.2%) left IJV thromboses. The overall major and minor complication rates at 30 days were 0.6% and 2.5%, respectively.
Conclusions: Endovascular treatment for CCSVI appears feasible and safe. However, a proper learning curve can dramatically lower the rate of adverse events. In our experience, the vast majority of complications occurred in the first 400 cases performed.
abstract_id: PUBMED:27514093
The comparative analysis of results of endovascular laser coagulation and standard phlebectomy in the treatment of chronic diseases of the lower extremity veins. The results of treatment of 58 patients suffering from chronic diseases of the lower extremity veins were analyzed. In 28 patients, vertical reflux was eliminated using endovascular laser coagulation; in 32 patients, a standard phlebectomy according to the Babcock method was performed. The complication rates were compared, as well as the duration of the patients' inpatient treatment. After endovascular laser coagulation, the rate and severity of complications were significantly lower than after standard phlebectomy. According to duplex ultrasound data at 12 months, partial recanalization of the great saphenous vein was noted in one patient. Total fibrous transformation of the coagulated venous trunks was achieved in 95.24% of the patients. The duration of postoperative inpatient treatment was reduced from 4.8 ± 0.8 to 1.2 ± 0.1 days (p < 0.001).
abstract_id: PUBMED:22640501
Catheter venography and endovascular treatment of chronic cerebrospinal venous insufficiency. Multiple sclerosis (MS) is a disorder characterized by damage to the myelin sheath insulation of nerve cells of the brain and spinal cord affecting nerve impulses, which can lead to numerous physical and cognitive disabilities. The disease, which affects over 500,000 people in the United States alone, is widely believed to be an autoimmune condition potentially triggered by an antecedent event such as a viral infection, environmental factors, a genetic defect or a combination of these. Chronic cerebrospinal venous insufficiency (CCSVI) is a condition characterized by abnormal venous drainage from the central nervous system that has been theorized to have a possible role in the pathogenesis and symptomatology of MS (1). A significant amount of attention has been given to this theory as a possible explanation for the etiology of symptoms in MS patients suffering from this disease. The work of Dr. Zamboni et al., who reported that treating the venous stenoses causing CCSVI with angioplasty resulted in significant improvement in the symptoms and quality of life of patients with MS (2), has led to further interest in this theory and potential treatment. The article presented describes endovascular techniques employed to diagnose and treat patients with MS and CCSVI.
abstract_id: PUBMED:23380649
Adverse events after endovascular treatment of chronic cerebro-spinal venous insufficiency (CCSVI) in patients with multiple sclerosis. Although it is debated whether chronic cerebro-spinal venous insufficiency (CCSVI) plays a role in multiple sclerosis (MS) development, many patients undergo endovascular treatment (ET) of CCSVI. A study is ongoing in Italy to evaluate the clinical outcome of ET. Severe adverse events (AEs) occurred in 15/462 subjects at a variable interval after ET: jugular thrombosis in seven patients, and tetraventricular hydrocephalus, stroke, paroxysmal atrial fibrillation, status epilepticus, aspiration pneumonia, hypertension with tachycardia, or bleeding of a bedsore in the remaining seven cases. One patient died because of myocardial infarction 10 weeks after ET. The risk of severe AEs related to ET for CCSVI must be carefully considered.
abstract_id: PUBMED:34478907
Comparison of endovascular strategy versus hybrid procedure in treatment of chronic venous obstructions involving the confluence of common femoral vein. Objective: Treatment of extensive chronic venous obstruction (CVO) with post-thrombotic trabeculation involving the common femoral vein with extension into the femoral vein or deep femoral vein remains a challenge and the best treatment technique for such cases is not clear. In the present study, we compared the results of endovascular alone vs endovascular with additional endophlebectomy (hybrid) procedures for such patients.
Methods: The medical records of 102 consecutive patients (108 limbs) treated between 2015 and 2020 for iliofemoral CVO extending to the femoral confluence were retrospectively reviewed. The patients were divided into two groups: the hybrid procedure (HP) and endovascular treatment (EN) groups. The HP group consisted of those treated with stent implantation and endophlebectomy of the common femoral vein with creation of an arteriovenous fistula. The EN group included those who had undergone stent implantation alone. The patency rates, complications, and clinical outcomes were analyzed.
Results: Of the 102 patients, 47 (49 limbs) were in the EN group and 55 (59 limbs) were in the HP group. The demographics of the two groups were similar with no statistically significant differences in cumulative primary, assisted primary, or secondary patency rates at 36 months (33.7% vs 36.3%, P = .839; 59.8% vs 64%, P = .941; 69% vs 72.7%, P = .851; respectively). The patients in the EN group, however, had better clinical improvement with a lower postoperative complication rate (P = .012), shorter procedure duration (P < .001), and shorter hospital stay (P = .025).
Conclusions: The EN and HP both provided similar patency rates for patients with CVO extending into the femoral confluence. The endovascular strategy has the benefit of fewer postoperative complications and a shorter procedure duration and hospital stay compared with the HP.
abstract_id: PUBMED:30935347
Long-Term Results of Endovascular Treatment of Chronic Iliofemoral Venous Obstructive Lesions. Objective: To evaluate the long-term results in endovascular treatment of iliofemoral venous obstructive lesions.
Methods: From January 2009 to March 2017, 75 patients were admitted for endovascular treatment of chronic obstructive lesions of the iliofemoral veins. Of these, 60 patients underwent stenting of postthrombotic obstructions and 15 patients stenting of nonthrombotic obstructive lesions of the iliac veins (May-Thurner syndrome in 11, for tumor-induced compression and cicatricial stenosis in 4). Dynamic control of stent patency was carried out by means of duplex ultrasound. Efficacy of endovascular intervention was evaluated by measuring the venous pressure gradient and malleolar circumference. The clinical result was determined by the Venous Clinical Severity Score (VCSS).
Results: Technical success of endovascular intervention in postthrombotic occlusions of the iliac vein was 92% and in nonthrombotic iliac vein lesions was 100%. Cumulative primary and secondary patency in postthrombotic lesions at 60 months amounted to 72% and 81%, respectively, and in nonthrombotic lesions to 85% (primary patency). Reinterventions were successfully performed in 6 patients, including catheter-directed thrombolysis (3 patients) and stenting (3 patients). The mean VCSS score fell from 14.2 (4.2) to 7.5 (2.6) (P < .001). The quality of life was improved; its mean score decreased from 62.6 (18.7) to 48.7 (12.8) (P < .01).
Conclusion: Endovascular angioplasty and stenting for obstructive lesions of the iliofemoral veins is a minimally invasive, safe, and highly effective method of treatment, which is confirmed by a significant improvement of the limb's condition and good long-term results of patency of the restored venous segments.
abstract_id: PUBMED:21679067
Safety profile of endovascular treatment for chronic cerebrospinal venous insufficiency in patients with multiple sclerosis. Purpose: To evaluate the safety of endovascular treatment of chronic cerebrovascular insufficiency (CCSVI) in patients with multiple sclerosis (MS).
Methods: In a 1-year period, 461 MS patients (261 women; mean age 45.4 years, range 21-79) with CCSVI underwent endovascular treatment of 1012 venous lesions during 495 procedures [34 (6.9%) reinterventions]. While balloon angioplasty was preferred, 98 stents were implanted in 76 patients for lesion recoil, restenosis, or suboptimal dilation. The procedures were analyzed for incidences of major adverse events (death, major bleeding, or clinical deterioration of MS), access site complications, procedure-related complications, and procedural safety-related variables (fluoroscopy and contrast times). The complication rates were compared to published data for similar endovascular methods.
Results: There were no deaths, major bleeding events, or clinical deterioration of MS. Access site complications included limited groin hematoma (5, 1.0%); there were no arteriovenous fistulas or puncture site infections. Systemic complications included only rare cardiac arrhythmias (6, 1.2%). Procedure-related complications included vein rupture (2, 0.4%), vein dissection (15, 3.0%), acute in-stent/in-segment thrombosis (8, 1.6%), and acute recoil (1, 0.2%); there was no stent migration or fracture or distal embolization. Mean fluoroscopy time was 22.7 minutes, and mean contrast volume was 136.3 mL.
Conclusion: Endovascular therapy appears to be a safe and reliable method for treating CCSVI. Innovations such as purpose-specific materials and devices are needed, as are case-controlled and randomized data to establish efficacy in ameliorating MS symptoms.
Answer: Endovascular treatment for chronic cerebrospinal venous insufficiency (CCSVI) appears to be a generally safe procedure based on the analysis of various studies. A study involving 564 endovascular procedures in 331 CCSVI patients with multiple sclerosis reported no major complications such as severe bleeding, venous thrombosis, stent migration, or nerve injury, except for thrombotic occlusion of the stent in two cases and surgical opening of the femoral vein to remove an angioplastic balloon in one case. Minor complications included technical problems and medical events such as local bleeding from the groin, minor gastrointestinal bleeding, or cardiac arrhythmia (PUBMED:21107001).
Another study evaluating the safety of outpatient endovascular treatment in 240 MS patients with CCSVI found a 1.6% risk of major complications, with the recommendation of cardiac monitoring to detect intraprocedural arrhythmias and ultrasonography post-procedure to confirm venous patency (PUBMED:22088659).
A retrospective evaluation of the feasibility and safety of endovascular treatment in a cohort of MS patients showed that the procedure is feasible and safe, with major and minor complication rates at 30 days being 0.6% and 2.5%, respectively. The study also noted that a proper learning curve can dramatically lower the rate of adverse events (PUBMED:23948669).
However, it is important to note that there have been reports of severe adverse events (AEs) after endovascular treatment of CCSVI in MS patients, including jugular thrombosis, stroke, and other serious conditions, with one patient dying due to myocardial infarction 10 weeks after the procedure (PUBMED:23380649). Despite these risks, the overall safety profile of endovascular treatment for CCSVI in MS patients is considered acceptable, with no deaths, major bleeding events, or clinical deterioration of MS reported in a study of 461 MS patients undergoing the treatment (PUBMED:21679067).
In conclusion, while endovascular treatment for CCSVI is generally safe, it is not without risks, and the potential for complications, including serious ones, should be carefully considered. Further research and innovations in treatment methods and devices are needed to improve safety and establish efficacy in ameliorating MS symptoms.
Instruction: Is laparoscopy safe and effective for treatment of acute small-bowel obstruction?
Abstracts:
abstract_id: PUBMED:10384077
Is laparoscopy safe and effective for treatment of acute small-bowel obstruction? Background: Acute small bowel obstruction (SBO) has been a relative contraindication for laparoscopic treatment due to the potential for bowel distention and the risk of enteric injury. However, as laparoscopic experience has increased, surgeons have begun to apply minimal access techniques to the management of acute SBO.
Methods: A retrospective review was performed of all patients with acute SBO in whom laparoscopic treatment was attempted. Patients with chronic symptoms and elective admission were excluded. Patients treated by laparoscopy were compared to those converted to laparotomy for differences in morbidity, postoperative length of stay, and return of bowel function as evidenced by toleration of a liquid diet.
Results: Laparoscopy was performed in 40 patients for acute SBO. The etiologies of obstruction included adhesions (35 cases), Meckel's diverticulum (two cases), femoral hernia (one case), periappendiceal abscess (one case), and regional enteritis (one case). Laparoscopic treatment was possible in 24 patients (60%), but 13 patients required conversion to laparotomy for inadequate laparoscopic visualization (two cases), infarcted bowel (two cases), enterotomy (four cases), and inability to relieve the obstruction laparoscopically (five cases). There were ten complications: one in the laparoscopic group (pneumonia) and nine in the converted group (prolonged ileus, four cases; wound infection, two cases; pneumonia, two cases; and perioperative myocardial infarction, one case). Respectively, the laparoscopic and converted groups had mean operative times of 68 and 106 min, a mean time to return of bowel function of 1.8 and 6.2 days, and a mean postoperative stay of 3.6 and 10.5 days. Long-term follow-up was available in 34 patients. One recurrence of SBO requiring operation occurred in each group during a mean follow-up of 88 weeks.
Conclusions: Laparoscopy is a safe and effective procedure for the treatment of acute SBO in selected patients. This approach requires surgeons to have a low threshold for conversion to laparotomy. Laparoscopic treatment appears to result in an earlier return of bowel function and a shorter postoperative length of stay, and it will likely have lower costs.
abstract_id: PUBMED:26058112
Laparoscopy as a method of final diagnosis of acute adhesive small bowel obstruction in previously unoperated patients. The article presents the use of laparoscopic interventions in 38 patients with acute adhesive small bowel obstruction (AASBO) and no previous history of abdominal surgery. Clinical, radiological and ultrasound patterns of the disease are analyzed. The use of laparoscopy proved to be the most effective and a relatively safe diagnostic procedure. In 14 (36.8%) patients, conversion to laparotomy was required due to contraindications for laparoscopy. In 24 (63.2%) patients, laparoscopic adhesiolysis was performed and the AASBO was subsequently treated, with a complication rate of 4.2%.
abstract_id: PUBMED:9069135
The acute abdomen in the pregnant patient. Is there a role for laparoscopy? Background: The acute abdomen in the pregnant patient poses a difficult diagnostic and therapeutic challenge to the surgeon. Appendicitis, cholecystitis, and bowel obstruction account for the majority of the abdominal pain syndromes which require surgical intervention. Laparoscopy is being used increasingly in the diagnosis and operative management of these disorders.
Methods: We examine our experience over the last 3 years with 47 women who developed significant abdominal pain during pregnancy. Thirty-four patients had symptomatic gallstone disease, nine had appendicitis, two had incarcerated inguinal hernias, and two had pelvic masses. Twenty-two patients with biliary colic and two patients with acute cholecystitis were managed conservatively during pregnancy. Twenty-three of these underwent laparoscopic cholecystectomy in the postpartum period. A total of 23 women required surgical intervention during pregnancy and 15 underwent a variety of laparoscopic procedures. Ten patients underwent laparoscopic cholecystectomy, and five had laparoscopic appendectomy. The remaining five patients had open appendectomy. Among the 15 laparoscopic procedures, four were performed in the first trimester, seven were performed in the second trimester, and four were performed in the third trimester.
Results: Laparoscopy did not result in increased maternal morbidity. There were no congenital malformations, fetal losses, or premature deliveries in the pregnant patients who underwent laparoscopy.
Conclusions: Laparoscopy can be a useful means of diagnosis and in addition a therapeutic tool in selected pregnant patients with abdominal pain. Close maternal and fetal monitoring is essential during and after the procedure. Laparoscopic cholecystectomy is safe and can be performed without additional risk to the fetus for those who require surgical intervention during pregnancy.
abstract_id: PUBMED:1421543
Laparoscopy in the diagnosis and treatment of acute small bowel obstruction. The surgical correction of acute small bowel obstruction is conventionally performed through a vertical laparotomy incision. The increasing use of the laparoscope for elective general surgery has led to an increase in its use for the diagnosis and treatment of acute abdominal conditions. The authors report five cases of acute small bowel obstruction treated with the aid of the laparoscope. All five patients were able to leave the hospital in the early postoperative period and remain symptom free. Laparoscopy is a useful technique in the management of selected cases of small bowel obstruction.
abstract_id: PUBMED:7777807
Laparoscopy for abdominal emergencies. The role of laparoscopy has been reviewed for these conditions: abdominal trauma, acute abdomen, abdominal pain of uncertain etiology, appendicitis and the acute abdomen in the intensive care unit patient. Laparoscopy should only be performed in trauma patients who are hemodynamically stable and who have some evidence for abdominal injury, such as a positive peritoneal lavage or a positive CT scan. Laparoscopy is an excellent procedure for determining whether a knife or missile has penetrated the peritoneum. For penetrating wounds in the chest and upper abdomen, laparoscopy also allows excellent evaluation of the diaphragm. In blunt trauma, laparoscopy identifies the majority of injuries, but there has been a 5-15% incidence of missed injuries to the small bowel and colon. The acute abdomen is generally caused by perforation, acute inflammation or intestinal obstruction. Of the various types of perforation, diagnostic and therapeutic laparoscopy is most applicable for duodenal perforation. Acute perforation of the stomach and colon should probably be treated by standard open techniques. For acute inflammatory disorders, laparoscopy is an excellent diagnostic tool and can also provide definitive treatment in the form of drainage of an abscess or appendectomy. The role of laparoscopy for ileus and bowel obstruction is controversial; some surgeons advocate diagnostic laparoscopy and treatment, while many others still consider bowel obstruction and abdominal distention to be contra-indications. Finally, there are the intensive care unit patients in whom an acute intraabdominal process is suspected. Laparoscopy in such patients alters the clinical management in about 50% of patients.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:10883993
Laparoscopy for acute small-bowel obstruction secondary to adhesions. Background And Purpose: Postoperative adhesions are the leading cause of small-bowel obstruction in developed countries. Several arguments suggest that laparoscopy may lead to fewer adhesions than does laparotomy. We report here the short-term results of laparoscopy in patients admitted on an emergency basis for acute small-bowel obstruction secondary to adhesions.
Patients And Methods: This prospective trial included 134 consecutive patients: 39 underwent emergency surgery, and 95 had laparoscopic adhesiolysis shortly after resolution of the obstruction with nasogastric suction. Of the previous operations for which the dates were known, 16% had taken place within 1 year of the obstruction and 33.5% within 5 years. In all, 27% of the patients had open laparoscopy, and 16% had conversions: 7% after elective laparoscopy and 36% after emergency laparoscopy.
Results: There were no operative deaths. One patient underwent a reoperation the following day for fistula after incomplete adhesiolysis attributable to multiple adhesions found during elective laparoscopy. If laparoscopy is considered to have failed when adhesiolysis was incomplete or conversion or reoperation was necessary, our success rate was 80% after elective laparoscopy and 59% after emergency laparoscopy.
Conclusion: Emergency situations in acute small-bowel obstruction combine several circumstances unfavorable for laparoscopy: a limited work area and a distended and fragile small bowel. Laparoscopic adhesiolysis after the crisis has passed may produce better results, but only long-term follow-up can confirm the role of elective laparoscopy for this indication.
abstract_id: PUBMED:19391338
Role of laparoscopy in acute obstruction of the small bowel: personal experience and analysis of the literature. Small bowel obstruction is caused by postoperative adhesions in most patients. The traditional surgical treatment has been laparotomy with adhesiolysis and possible resection of the ischaemic intestine. The laparoscopic approach has proved feasible but not without risks. We analysed our experience in the management of acute small bowel obstruction and then reviewed the literature in an attempt to identify the real role of laparoscopy. From January 2003 to June 2008, 19 patients operated on for small bowel obstruction were identified. We evaluated our performance in terms of the aetiology of the obstruction, operative time, length of postoperative hospital stay, conversion rate, and major morbidity and mortality. Postoperative adhesions were responsible for the occlusion in 13 cases; a single band was identified in 47% of patients (9 cases). Neoplastic disease (3 cases), a gallstone ileus, Crohn's disease and an internal hernia were the remaining cases. Fully laparoscopic treatment was possible in only 7 patients with single adhesions (77%), and a conversion was carried out in the remaining 12 cases (63%), including "laparoscopy-assisted" cases (6 cases). The duration of the intervention (89 +/- 21 min vs 135 +/- 27.5 min) and postoperative hospitalisation (3.6 +/- 1 days vs 6.25 +/- 1.6 days) were in favour of the completely laparoscopic group as compared to the laparoscopy-assisted group. A case of postoperative peritonitis due to bowel perforation required a second intervention. Given the high incidence of single adhesions responsible for the occlusion, and the resulting high success rate of laparoscopy in such cases, we believe that an initial laparoscopic approach with appropriate patient selection can help identify these favourable situations.
abstract_id: PUBMED:21898013
The role of laparoscopy in the management of acute small-bowel obstruction: a review of over 2,000 cases. Background: Adhesive small-bowel obstruction (SBO) contributes significantly to emergency surgical workload. Laparotomy remains the standard approach. Despite published reports with high success rates and low morbidity, acute SBO is still considered by many a relative contraindication to laparoscopy. Our aim was to review the available literature and define important outcomes such as feasibility, safety, iatrogenic bowel injury, and benefits to patients with acute SBO who are approached laparoscopically.
Methods: A systematic literature search was carried out using the Medline database and the search terms "laparoscopy" or "laparoscopic approach" and "bowel obstruction." Only adult studies published in English between 1990 and 2010 were included. Studies were excluded if data specific to outcomes for laparoscopic management of acute SBO could not be extracted.
Results: Twenty-nine studies were identified. A laparoscopic approach was attempted in 2,005 patients with acute SBO. Adhesions were the most common etiology (84.9%). Laparoscopy was completed in 1,284 cases (64%), 6.7% were lap-assisted, and 0.3% were converted to hernia repair. The overall conversion rate to midline laparotomy was 29% (580/2,005). Dense adhesions, bowel resection, unidentified pathology, and iatrogenic injury accounted for the majority of conversions. When the etiology of SBO was a single-band adhesion, the success rate was 73.4%. Morbidity was 14.8% (283/1,906) and mortality was 1.5% (29/1,951). The enterotomy rate was 6.6% (110/1,673). The majority were recognized and converted to laparotomy. Laparoscopy was associated with reduced morbidity and length of stay.
Conclusion: Laparoscopy is a feasible and effective treatment for acute SBO with acceptable morbidity. Further studies are required to determine its impact on recurrent SBO.
abstract_id: PUBMED:25423020
Laparoscopy for small bowel obstruction in children--an update. Introduction: We evaluated the current role of minimally invasive surgery (MIS) in children with small bowel obstruction (SBO) at our institution.
Subjects And Methods: A retrospective review of patients undergoing MIS for acute SBO was performed from 2008 to 2013. The study population was compared with a historical control including patients from 2001 to 2008.
Results: There were 71 patients who met inclusion criteria; 35 were male, and 36 were female. Sixty-two children underwent laparoscopy for their first episode of SBO, and 12 underwent laparoscopy for recurrent SBO, accounting for 74 episodes of SBO managed with MIS. The most common etiology of SBO was adhesions (n=40). Laparoscopy and laparoscopic-assisted procedures were associated with shorter nasogastric tube decompression (1.4±2 days [P<.001] and 1.5±2.7 days [P=.002], respectively) and time to regular diet (3.9±4 days [P=.002] and 4.6±2.8 days [P=.024], respectively) compared with those converted to laparotomy (5.1±4.9 days of nasogastric tube decompression and 8±4.7 days to regular diet). There was no difference in postoperative morbidity comparing laparoscopy (11%), laparoscopic-assisted (5%), and laparoscopic converted to open procedures (18%) (P=.48).
Conclusions: Laparoscopy continues to be a safe diagnostic and therapeutic tool in the management of pediatric SBO.
abstract_id: PUBMED:16494554
Laparoscopy in diagnosis and management of Meckel's diverticulum. Objective: The aim of this paper is to demonstrate that laparoscopy can successfully diagnose otherwise elusive Meckel's diverticula and at the same time offer therapeutic access for successful resection.
Materials And Methods: Two patients with anemia of unknown cause and a further patient with acute small bowel obstruction underwent diagnostic laparoscopy. A Meckel's diverticulum was identified in each case and successfully resected through a minilaparotomy.
Results: All three patients were successfully investigated and treated with laparoscopy with no morbidity or mortality.
Conclusion: Laparoscopy is a safe and effective surgical modality for diagnosis of atypically presenting Meckel's diverticulum and has a therapeutic role that results in an excellent cosmetic result.
Answer: Yes, laparoscopy is considered safe and effective for the treatment of acute small bowel obstruction (SBO) in selected patients. Studies have shown that laparoscopy can be successfully used to manage acute SBO with various etiologies, including adhesions, Meckel's diverticulum, hernias, periappendiceal abscess, and regional enteritis (PUBMED:10384077). The use of laparoscopy for acute SBO has been associated with an earlier return of bowel function, shorter postoperative length of stay, and potentially lower costs compared to traditional open surgery (PUBMED:10384077).
In patients without a previous history of abdominal surgery, laparoscopy has been shown to be the most effective and relatively safe diagnostic procedure for acute adhesive small bowel obstruction (AASBO), with a conversion to laparotomy required in some cases due to contraindications for laparoscopy (PUBMED:26058112). Additionally, laparoscopy has been used successfully in pregnant patients with acute abdominal conditions without increased maternal morbidity or risk to the fetus (PUBMED:9069135).
Laparoscopy has also been reported as a useful technique in the management of selected cases of small bowel obstruction, allowing patients to leave the hospital in the early postoperative period and remain symptom-free (PUBMED:1421543). However, the role of laparoscopy in the management of acute SBO is still considered controversial by some, with concerns about the risk of iatrogenic injury and missed injuries, particularly in cases of dense adhesions or when bowel resection is required (PUBMED:7777807).
Despite these concerns, a review of over 2,000 cases indicated that laparoscopy is a feasible and effective treatment for acute SBO with acceptable morbidity, and it is associated with reduced postoperative morbidity and length of stay (PUBMED:21898013). In children with SBO, laparoscopy has been shown to be a safe diagnostic and therapeutic tool, with shorter nasogastric tube decompression and time to regular diet compared to those converted to laparotomy (PUBMED:25423020).
In conclusion, while laparoscopy is safe and effective for the treatment of acute SBO, it requires careful patient selection and a low threshold for conversion to laparotomy when necessary. Further studies are needed to determine its impact on recurrent SBO (PUBMED:21898013). |
Instruction: Advice to consult a general medical practitioner in Western Australia: could it be cancer?
Abstracts:
abstract_id: PUBMED:19281671
Advice to consult a general medical practitioner in Western Australia: could it be cancer? Background: Many people will consult a medical practitioner about lower bowel symptoms, and the demand for access to general practitioners (GPs) is growing. We do not know if people recognise the symptoms of lower bowel cancer when advising others about the need to consult a doctor. A structured vignette survey was conducted in Western Australia.
Method: Participants were recruited from the waiting rooms at five general practices. Respondents were invited to complete self-administered questionnaires containing nine vignettes chosen at random from a pool of 64 based on six clinical variables. Twenty-seven vignettes described high-risk bowel cancer scenarios. Respondents were asked if they would recommend a medical consultation for the case described and whether they believed the scenario was a cancer presentation. Logistic regression was used to estimate the independent effects of each variable on the respondent's judgement. Two hundred and sixty-eight completed responses were collected over eight weeks.
Results: The majority (61%) of respondents were female, aged 40 years and older. A history of rectal bleeding, six weeks of symptoms, and weight loss independently increased the odds of recommending a consultation with a medical practitioner by a factor of 7.64, 4.11 and 1.86, respectively. Most cases that were identified as cancer (75.2%) would not be classified as such on current research evidence. Factors that predict recognition of cancer presentations include rectal bleeding, weight loss and diarrhoea.
Conclusion: Within the limitation of this study, respondents recommended that most symptomatic people present to their GP. However, we report no evidence that they recognised a cancer presentation, and duration of symptoms was not a significant variable in this regard. Cases that were identified as 'cancer' could not be classified as high risk on the available evidence.
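The odds ratios reported above (7.64 for rectal bleeding, 4.11 for six weeks of symptoms, 1.86 for weight loss) are obtained by exponentiating logistic-regression coefficients. The sketch below illustrates that mechanic on simulated data; it is not the study's analysis, it assumes Python with NumPy and statsmodels, and all variable names and data are hypothetical.

```python
# Minimal sketch: odds ratios via logistic regression on simulated data.
# Requires: pip install numpy statsmodels
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 268  # same size as the survey, but these rows are simulated
X = rng.integers(0, 2, size=(n, 3))  # bleeding, 6-week duration, weight loss
true_logit = -1.0 + 2.03 * X[:, 0] + 1.41 * X[:, 1] + 0.62 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)  # "recommend consultation"

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params[1:])  # exp(beta) = odds ratio per predictor
print(dict(zip(["bleeding", "duration", "weight_loss"], odds_ratios.round(2))))
```

Because each coefficient is estimated with the other predictors in the model, the exponentiated values are adjusted ("independent") odds ratios, which is what the abstract reports.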
abstract_id: PUBMED:31097892
Patients Leaving Against Medical Advice-A National Survey. Background: Leaving against medical advice (LAMA) is a common health concern seen worldwide. It has variable incidence and reasons depending upon disease, geographical region and type of health care system.
Materials And Methods: We approached anesthesiologists and intensivists for their opinions through the ISA and ISCCM contact databases, using a Monkey Survey of 22 questions covering geographical area, type of healthcare system, incidence, reasons, type of disease, expected outcome of LAMA patients, etc.
Results: We received 1,154 responses, of which only 584 answered all questions. Of the 1,154 respondents, only 313 were from government medical colleges or hospitals; the remaining responses were from the private and corporate sector. Most hospitals had >100 beds. ICUs were semi-closed and supervised by critical-care physicians. The incidence of LAMA was highest from the ICU (45%), followed by the ward (32%) and the emergency department (25%). Most LAMA patients had an ICU stay of >1 week (60%). Eighty percent of the respondents opined that financial constraints are the most common reason for LAMA. Unsatisfactory care was rarely considered a factor in LAMA. Approximately 40% of patients had advanced malignancy or disease. Nearly two-thirds strongly believed that insurance cover may reduce the LAMA rate.
Conclusion: Most patients leave against medical advice from the ICU after a stay of more than a week. Financial constraints, terminal medical illness, malignancy and sepsis are major causes of LAMA. Remedial methods suggested to decrease the incidence include a good national health policy by the state; improved communication between the patient, caregivers and healthcare team; the practice of palliative and end-of-life care support; and, lastly, awareness among the people about advance directives.
abstract_id: PUBMED:30186010
Retrospective Evaluation of Patients Who Leave against Medical Advice in a Tertiary Teaching Care Institute. Context: Discharge against medical advice or leave against medical advice (DAMA or LAMA) is a global phenomenon. The magnitude of the LAMA phenomenon varies widely by geography. The reasons for LAMA are an area of concern for all involved in the health-care delivery system.
Aims And Objectives: The study aimed to evaluate cases of LAMA retrospectively in a tertiary teaching care institute: (1) to determine the magnitude of LAMA cases, and (2) to evaluate the demographic and patient characteristics of these cases.
Subjects And Methods: We screened the hospital records of a referral institute over 1 year after approval from the IEC and ICMR, New Delhi. Patient demographics and disease characteristics were noted and statistically analyzed after compilation.
Results: A total of 47,583 patients were admitted in the year 2015 through the emergency and outpatient departments. One thousand five hundred and fifty-six patients (3.3%) were discharged against medical advice. The mean age of patients, excluding infants, was 46.64 ± 20.55 years; 62.9% were male. The average hospital stay of these cases was 4.09 ± 4.39 days. Most of the patients (70%) belonged to medical specialties and had longer stays than those in surgical specialties. Most LAMA patients were suffering from infections, trauma, and malignancies. Most patients left from the ward (62%), followed by the intensive care unit (ICU) (28.8%) and the emergency department (9.2%). In 592 (38%) of the LAMA patients, the reason for leaving was not clear. The most commonly cited reasons for LAMA were financial (27.6%) and poor prognosis (20.5%).
Conclusions: About 3.3% of patients left the hospital against medical advice in our retrospective analysis. Most of these cases did so from the ward, followed by the ICU. Financial reasons and expected poor outcome played a significant role.
abstract_id: PUBMED:10024706
Why do dyspeptic patients over the age of 50 consult their general practitioner? A qualitative investigation of health beliefs relating to dyspepsia. Background: The prognosis of late-diagnosed gastric cancer is poor, yet less than half of dyspeptic patients consult their general practitioner (GP).
Aim: To construct an explanatory model of the decision to consult with dyspepsia in older patients.
Method: A total of 75 patients over the age of 50 years who had consulted with dyspepsia at one of two inner city general practices were invited to an in-depth interview. The interviews were taped, transcribed, and analysed using the computer software NUD.IST, according to the principles of grounded theory.
Results: Altogether, 31 interviews were conducted. The perceived threat of cancer and the need for reassurance were key influences on the decision to consult. Cues such as a change in symptoms were important in prompting a re-evaluation of the likely cause. Personal vulnerability to serious illness was often mentioned in the context of family or friends' experience, but tempered by an individual's life expectations.
Conclusion: Most patients who had delayed consultation put their symptoms down to 'old age' or 'spicy food'. However, a significant minority were fatalistic, suspecting the worst but fearing medical interventions.
abstract_id: PUBMED:23587722
The role of the general practitioner in French cancer centers. Oncology is undergoing profound change, with the development of new treatments and techniques, the evolution of care delivery (outpatient care, overall patient care, prevention and screening), and a specialty that is attracting more and more women. The field is also affected by the issue of medical demography, so the organisation and functions of each professional team need to be reconsidered. We examined the functions of the general practitioner in cancer centers, where such practitioners are present in 80% of institutions; this is a new arrangement that had not yet been studied in France. A questionnaire survey of general practitioners, oncologists and directors from 19 regional cancer centers and 9 private cancer clinics was conducted during the summer of 2008. The overall response rate was 51% (260/512). This study aimed to delineate the main functions of these general practitioners: broadly qualified physicians with strong interpersonal skills, whose role differs from that of both family physicians and oncologists yet who work closely with both, and whose specific activities are hardly recognized. These activities include overall patient care; continuous care, with the daily management of hospitalized patients, which reduces the oncologists' workload; continuity of care with the family physician; and involvement in day-hospital management, the emergency department, outpatient palliative care consultations and follow-up consultations.
abstract_id: PUBMED:29327350
Adopting innovation in gynaecology: The introduction of e-consult. Objective: To describe the development of an e-consultation service as part of the triaging and grading process of referrals and to report on the efficacy and safety of such a service.
Methods: All gynaecology e-consults in the study period June 2015 to March 2016 were retrospectively reviewed. The outcomes of interest were the initial reduction in first face-to-face hospital visits, and the rate of re-referrals. Acute admission for the same reason, a subsequent diagnosis of underlying (pre)-malignancy, or patient death from the condition related to the index referral were selected as measures for patient safety.
Results: A total of 7042 referrals were made to the gynaecology service in the 10-month study period. After exclusion of referrals to colposcopy and the early pregnancy clinic, 4738 e-referrals remained. Of these, 1013 referrals (21.4%) were triaged for an e-consult. One hundred and forty-seven patients (14.5%) with an initial e-consult were re-referred within 6 months for the same condition. The reduction in face-to-face contacts was 18.2% (866/4738). No deaths or acute admissions for the same reason as stated in the initial referral occurred among the patients with e-consultation, and none were later diagnosed with an underlying (pre)-malignancy.
Conclusion: E-consultation was effective at reducing the number of first outpatient face-to-face contacts without notable compromise of the quality of care or patient safety. E-consultation allows specialists to provide expert clinical guidance, management and support to the referring provider when appropriate. Topics for further study include patient benefits and satisfaction, and further assessment of the social, economic and financial impacts on all parties involved.
abstract_id: PUBMED:32487036
Improving communication between the general practitioner and the oncologist: a key role in coordinating care for patients suffering from cancer. Background: Patients with cancer are increasingly numerous in general practice consultations. The general practitioner (GP) should be at the heart of patient management. Several studies have examined the perceptions of GPs caring for patients with cancer and the relationships between GPs and oncologists, but few have focused on the patients' perspective. We studied the three-way relationship between the oncologist, the GP and the patient, from the patient's point of view.
Methods: A questionnaire validated by a group consisting of GPs, oncologists, nurses, an epidemiologist and a quality analyst was administered over a three-week period to patients with cancer receiving chemotherapy in a day hospital.
Results: The analysis was based on 403 questionnaires. Patients had confidence in the GP's knowledge of oncology in 88% of cases; 49% consulted their GP for pain, 15% for cancer-related advice, and 44% in emergencies. Perceived good GP/oncologist communication led patients to turn increasingly to their GP for cancer-related consultations (RR = 1.14; p = 0.01) and gave patients confidence in the GP's ability to manage cancer-related problems (RR = 1.30; p < 0.01). Mention by the oncologist of the GP's role increased the consultations for complications (RR = 1.82; p < 0.01) as well as recourse to the GP in an emergency (RR = 1.35; p < 0.01).
Conclusion: Patients suffering from cancer considered the GP competent but did not often consult their GP for cancer-related problems; there is a discrepancy between patients' beliefs and their behaviour. When the oncologist spoke to patients about the GP's role, patients had recourse to their GP more often. Systematically integrating a GP consultation at the conclusion of cancer diagnosis disclosure could improve management and care coordination.
abstract_id: PUBMED:34080394
Long-term follow-up of cancer survivors: the general practitioner's role and resources. More and more patients who have survived cancer consult their general practitioner again for various reasons. The aim of this article is to consider ways to reinforce the role of general practitioners in the follow-up protocol. Two trainee general practitioners synthesised, on the basis of a literature review, follow-up information for childhood cancer and for breast and colorectal cancers. Their concise presentations are examples of documents useful to their colleagues. The general practitioner must receive all information concerning the cancer, its treatment and the schedule of follow-up examinations in order to guarantee continuity of care. Collaboration between general practitioners and cancer specialists is necessary to provide the best care to patients, to share clear and relevant information and to train future general practitioners.
abstract_id: PUBMED:32035649
Recourse of patients to their general practitioner for unplanned hospitalization in oncology management: a prospective study in an oncology institute. Introduction: Cancer management is a public health issue in France. Its incidence is stabilizing or even decreasing, but its prevalence is increasing. Public policies give the general practitioner (GP) a central role in oncological care: the GP must be present at all stages of the disease, from screening to post-cancer follow-up.
Methods: A one-year prospective single-center study in a cancer institute. A questionnaire was distributed to patients with unplanned hospitalizations, and socio-demographic and medical data were collected from their files.
Results: Fifty-four percent of the study patients did not consult their GP for the health problem that motivated the hospitalization. Sixty-nine percent of the patients surveyed believed that the GP is not the primary care physician for the management of complications and adverse effects of oncology treatments. A large majority of patients had metastatic cancer, while only 40% were followed by the supportive care team.
Discussion: GPs are not yet integrated into the active management of cancer for the study patients. The symptoms that GPs say they can easily manage were the most frequent reasons for hospitalization.
abstract_id: PUBMED:17273179
Mouth ulcers: a study of where members of the general public might seek advice. Objective: To investigate where the public seek advice about mouth ulcers and to what extent the public approach the community pharmacy for advice.
Subjects And Methods: One thousand members of the general public were randomly chosen and surveyed throughout the day in the main shopping streets of two towns in Israel, Haifa and Tel Aviv. All participants were presented with four standard verbal questions designed to identify reactions to and past experience of mouth ulcers. All responses were recorded immediately on to data sheets and transferred to a computer for analysis of frequencies and percentages and Chi-square analysis.
Results: Nearly one third of those surveyed admitted to a history of mouth ulceration. The vast majority (66-69%) would first approach their general medical practitioner for advice, 13-17% would first approach a general dental practitioner, and only a small minority of the public (4-10%) would first approach the community pharmacy. Lay persons could not distinguish an ulcer with features that would strongly suggest a malignant neoplasm from other types of mouth ulcer.
Conclusions: The dental profession needs to collaborate more closely with the medical and pharmaceutical professions in order to offer patients better access to information and care.
Answer: The study conducted in Western Australia aimed to determine whether people recognize the symptoms of lower bowel cancer and whether they would advise others to consult a doctor based on various symptom scenarios (PUBMED:19281671). The structured vignette survey involved participants from the waiting rooms at five general practices who completed questionnaires containing nine vignettes chosen at random from a pool of 64. These vignettes were based on six clinical variables, with 27 describing high-risk bowel cancer scenarios. Respondents were asked if they would recommend a medical consultation for the case described and whether they believed the scenario was a cancer presentation.
The results showed that a history of rectal bleeding, six weeks of symptoms, and weight loss independently increased the odds of recommending a consultation with a medical practitioner. However, most cases identified as cancer by respondents (75.2%) would not be classified as such based on current research evidence. The study concluded that while respondents recommended that most symptomatic people present to their GP, there was no evidence that they recognized a cancer presentation, and the duration of symptoms was not a significant variable in this regard (PUBMED:19281671).
This suggests that while people may advise consulting a general medical practitioner for certain symptoms, their ability to recognize these symptoms as indicative of cancer is limited. Therefore, it is crucial to improve public awareness and education regarding the symptoms of lower bowel cancer to ensure timely medical consultation and potential early detection of the disease. |
Instruction: Early identification: are speech/language-impaired toddlers at increased risk for Developmental Coordination Disorder?
Abstracts:
abstract_id: PUBMED:17439447
Early identification: are speech/language-impaired toddlers at increased risk for Developmental Coordination Disorder? Background: Developmental Coordination Disorder (DCD) is a movement skill disorder which impacts upon a child's ability to perform age-appropriate self-care and academic tasks. DCD is commonly comorbid with speech/language learning disabilities.
Aim: The present study was conducted to determine whether children who had been identified with speech/language delays as toddlers demonstrated characteristics of DCD and/or speech/language problems at kindergarten age.
Results: Forty children who had been identified with speech/language delays as toddlers received speech/language and motor assessments and were followed up at 63-80 months of age. Of the 40 children, 18 showed evidence of significant motor impairment and two-thirds of these met diagnostic criteria for DCD at follow-up. Twelve children were identified as having persistent speech/language problems and, of these, nine presented with significant motor co-ordination difficulties. Parental report of gross motor and fine motor problems at follow-up correlated highly with actual motor impairment scores.
Conclusions: Young children who are in early intervention programmes for speech/language delays may have significant co-ordination difficulties that will become more evident at kindergarten age when motor deficits begin to impact self-care and academic tasks. Clinical implications for early recognition of motor issues by speech/language pathologists and the potential use of parental reporting tools are addressed.
abstract_id: PUBMED:26209772
The Toddler Language and Motor Questionnaire: A mother-report measure of language and motor development. This study empirically evaluates the psychometric properties of a new mother-answered developmental instrument for toddlers, the Toddler Language and Motor Questionnaire (TLMQ). Mothers of 1132 15- to 38-month-old children filled out a 144-item instrument, tapping the toddlers' competences in five language and motor subtests. Concurrent validity was investigated in an independent sample by administering the McCarthy Scales of Children's Abilities (MSCA) individually to 47 children and the TLMQ to their mothers. A two-factor solution emerged in principal axis factor analyses with a promax rotation, with motor subtests loading high on one of the factors and the language subtests on the other. Toddlers' genders significantly affected outcome on all of the five subtests. Divergent and convergent correlations emerged between the TLMQ's motor composite and scales of the MSCA. Partially convergent and divergent correlations emerged between the TLMQ's language composite and scales of the MSCA. The findings show that young children's motor and language development can be reliably and validly assessed by using a psychometrically constructed questionnaire completed by mothers.
abstract_id: PUBMED:16608545
Early motor development and later language and reading skills in children at risk of familial dyslexia. Relationships between early motor development and language and reading skills were studied in 154 children, of whom 75 had familial risk of dyslexia (37 females, 38 males; at-risk group) and 79 constituted a control group (32 females, 47 males). Motor development was assessed by a structured parental questionnaire during the child's first year of life. Vocabulary and inflectional morphology skills were used as early indicators of language skills at 3 years 6 months and 5 years or 5 years 6 months of age, and reading speed was used as a later indicator of reading skills at 7 years of age. The same subgroups as in our earlier study (in which the cluster analysis was described) were used in this study. The three subgroups of the control group were 'fast motor development', 'slow fine motor development', and 'slow gross motor development', and the two subgroups of the at-risk group were 'slow motor development' and 'fast motor development'. A significant difference was found between the subgroups in the development of expressive language skills. Children with familial risk of dyslexia and slow motor development had a smaller vocabulary with poorer inflectional skills than the other children. They were also slower in their reading speed at the end of the first grade at the age of 7 years. Two different associations are discussed, namely the connection between early motor development and language development, and the connection between early motor development and reading speed.
abstract_id: PUBMED:24751905
Motor and language abilities from early to late toddlerhood: using formalized assessments to capture continuity and discontinuity in development. Developmental tests reflect the premise that decreases in skills over time should be a sign of atypical development. In contrast, from a psychological perspective, discontinuity may be viewed as a normal part of typical development. This study sought to describe the variability in patterns of continuity and discontinuity in developmental scores over time. Seventy-six toddlers (55% boys) from a larger screening study were evaluated at 13 and 30 months using the Mullen Scales of Early Development (MSEL) in five areas: gross motor, fine motor, visual perception, receptive language, and expressive language. Parents completed the First Year Inventory (FYI) at 12 months as well. At 30 months, 23.68% of the sample received a clinical diagnosis (e.g., developmental delay, autism spectrum disorder [ASD]). Toddlers were classified as stable, increasing, or decreasing by at least 1.5 standard deviations (SD) on their scores in each of the five MSEL areas from 13 to 30 months. Between 3.9% and 51.3% of the sample was classified as increasing and 0-23.7% as decreasing across areas. Decreases in motor areas were associated with increases in language areas. None of the toddlers showed decreases greater than 1.5 SD on their MSEL composite scores. There was no single pattern that characterized a certain diagnosis. Higher FYI sensory-regulatory risk was associated with decreases in gross motor. Lower FYI risk was linked with increases in receptive language. Developmental discontinuity in specific developmental areas was the rule rather than the exception. Interpretations of decreases in developmental levels must consider concurrent increases in skill during this emerging period.
abstract_id: PUBMED:26518005
Identifying Infants and Toddlers at High Risk for Persistent Delays. Objectives: Little is known about the extent to which a developmental delay identified in infancy persists into early childhood. This study examined the persistence of developmental delays in a large nationally representative sample of infants and toddlers who did not receive early intervention.
Methods: In a sample (n ≈ 8700) derived from the Early Childhood Longitudinal Study, Birth Cohort, we examined developmental changes between 9 and 24 months. Motor and cognitive delays were categorized as none, mild, and moderate/severe. Adjusted ordinal logistic regression models estimated the likelihood of worse developmental delay at 24 months.
Results: About 24% of children had a cognitive delay and 27% had a motor delay at either 9 or 24 months. About 77% of children with mild and 70% of children with moderate/severe cognitive or motor developmental delay at 9 months had no delay at 24 months. Children with mild cognitive delay at 9 months had 2.4 times the odds of having worse cognitive function at 24 months compared to children with no cognitive delay at 9 months. Children with moderate/severe cognitive delay at 9 months had three times the odds of having worse cognitive abilities at 24 months than children who had no cognitive delay at 9 months. Similar results were found for motor skills.
Conclusions: Developmental delays in infants are changeable, often resolving without treatment. This work provides knowledge about the baseline trajectories of infants with and without cognitive and motor delays. It documents the proportion of children's delays that are likely to be outgrown without early intervention (EI) and the rate at which typically-developing infants are likely to display developmental delays at 2 years of age.
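The "adjusted ordinal logistic regression" mentioned above models an outcome with ordered levels (none < mild < moderate/severe) as a proportional-odds model. A minimal sketch on simulated data, assuming Python with statsmodels >= 0.12 (which provides OrderedModel); none of this is the study's actual data or covariate set.

```python
# Sketch of an ordinal logistic (proportional-odds) model on simulated data.
# Requires: pip install numpy pandas statsmodels
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 8700
baseline = rng.integers(0, 3, n)  # 9-month delay: 0=none, 1=mild, 2=mod/severe
latent = 0.9 * baseline + rng.logistic(size=n)
outcome = pd.cut(latent, [-np.inf, 1.0, 2.5, np.inf],
                 labels=["none", "mild", "moderate_severe"])

model = OrderedModel(outcome.codes, baseline.reshape(-1, 1), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(np.exp(result.params[0]))  # odds of a worse 24-month category per baseline step
```

Exponentiating the slope yields the kind of statement in the abstract, e.g., "2.4 times the odds of having worse cognitive function at 24 months."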
abstract_id: PUBMED:26692550
Early gross motor skills predict the subsequent development of language in children with autism spectrum disorder. Background: Motor milestones such as the onset of walking are important developmental markers, not only for later motor skills but also for more widespread social-cognitive development. The aim of the current study was to test whether gross motor abilities, specifically the onset of walking, predicted the subsequent rate of language development in a large cohort of children with autism spectrum disorder (ASD).
Methods: We ran growth curve models for expressive and receptive language measured at 2, 3, 5 and 9 years in 209 autistic children. Measures of gross motor, visual reception and autism symptoms were collected at the 2 year visit. In Model 1, walking onset was included as a predictor of the slope of language development. Model 2 included a measure of non-verbal IQ and autism symptom severity as covariates. The final model, Model 3, additionally covaried for gross motor ability.
Results: In the first model, parent-reported age of walking onset significantly predicted the subsequent rate of language development although the relationship became non-significant when gross motor skill, non-verbal ability and autism severity scores were included (Models 2 & 3). Gross motor score, however, did remain a significant predictor of both expressive and receptive language development.
Conclusions: Taken together, the model results provide some evidence that early motor abilities in young children with ASD can have longitudinal cross-domain influences, potentially contributing, in part, to the linguistic difficulties that characterise ASD. Autism Res 2016, 9: 993-1001. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.
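Growth curve models of the kind described above are commonly fit as mixed-effects models: each child gets a random intercept and slope over age, and a child-level predictor such as age of walking onset enters through an interaction with age, which tests whether it shifts the rate of language growth. A minimal sketch, assuming Python with statsmodels; the data are simulated in long format and every column name is hypothetical.

```python
# Sketch of a growth-curve (mixed-effects) model on simulated long-format data.
# Requires: pip install numpy pandas statsmodels
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, visit_ages = 209, [2, 3, 5, 9]            # as in the study design
walk_onset = rng.normal(14, 3, n_children)            # months; child-level predictor
child_slope = 8 - 0.15 * walk_onset + rng.normal(0, 0.5, n_children)

rows = [{"child": c, "age": a, "walk_onset": walk_onset[c],
         "language": 20 + child_slope[c] * a + rng.normal(0, 3)}
        for c in range(n_children) for a in visit_ages]
df = pd.DataFrame(rows)

# Random intercept and slope for age per child; the age:walk_onset term asks
# whether earlier walkers gain language faster (the Model 1 question above).
m = smf.mixedlm("language ~ age * walk_onset", df,
                groups="child", re_formula="~age").fit()
print(m.params[["age", "age:walk_onset"]])
```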
abstract_id: PUBMED:24117483
Comorbidities in preschool children at family risk of dyslexia. Background: Comorbidity among developmental disorders such as dyslexia, language impairment, attention deficit/hyperactivity disorder and developmental coordination disorder is common. This study explores comorbid weaknesses in preschool children at family risk of dyslexia with and without language impairment and considers the role that comorbidity plays in determining children's outcomes.
Method: The preschool attention, executive function and motor skills of 112 children at family risk for dyslexia, 29 of whom also met criteria for language impairment, were assessed at ages 3½ and 4½ years. The performance of these children was compared to the performance of children with language impairment and typically developing controls.
Results: Weaknesses in attention, executive function and motor skills were associated with language impairment rather than family risk status. Individual differences in language and executive function are strongly related during the preschool period, and preschool motor skills predicted unique variance (4%) in early reading skills over and above children's language ability.
Conclusion: Comorbidity between developmental disorders can be observed in the preschool years: children with language impairment have significant and persistent weaknesses in motor skills and executive function compared to those without language impairment. Children's early language and motor skills are predictors of children's later reading skills.
abstract_id: PUBMED:25436914
Co-occurring motor, language and emotional-behavioral problems in children 3-6 years of age. Purpose: Developmental Coordination Disorder (DCD) has been shown to co-occur with behavioral and language problems in school-aged children, but little is known as to when these problems begin to emerge, or if they are inherent in children with DCD. The purpose of this study was to determine if deficits in language and emotional-behavioral problems are apparent in preschool-aged children with movement difficulties.
Method: Two hundred and fourteen children (mean age 4years 11months, SD 9.8months, 103 male) performed the Movement Assessment Battery for Children 2nd Edition (MABC-2). Children falling at or below the 16th percentile were classified as being at risk for movement difficulties (MD risk). Auditory comprehension and expressive communication were examined using the Preschool Language Scales 4th Edition (PLS-4). Parent-reported emotional and behavioral problems were assessed using the Child Behavior Checklist (CBCL).
Results: Preschool children with diminished motor coordination (n=37) were found to have lower language scores, higher externalizing behaviors in the form of increased aggression, as well as increased withdrawn and other behavior symptoms compared with their typically developing peers.
Conclusions: Motor coordination, language and emotional-behavioral difficulties tend to co-occur in young children aged 3-6years. These results highlight the need for early intervention.
abstract_id: PUBMED:17852518
Comparing language profiles: children with specific language impairment and developmental coordination disorder. Background: Although it is widely recognized that substantial heterogeneity exists in the cognitive profiles of children with Developmental Coordination Disorder (DCD), very little is known about the language skills of this group.
Aims: To compare the language abilities of children with DCD with a group whose language impairment has been well described: children with Specific Language Impairment (SLI).
Methods & Procedures: Eleven children with DCD and 11 with SLI completed standardized and non-standardized assessments of vocabulary, grammatical skill, non-word repetition, sentence recall, story retelling, and articulation rate. Performance on the non-standardized measures was compared with a group of typically developing children of the same age.
Outcomes & Results: Children with DCD were impaired on tasks involving verbal recall and story retelling. Almost half of those in the DCD group performed similarly to the children with SLI over several expressive language measures, while 18% had deficits in non-word repetition and story retelling only. Poor non-word repetition was observed for both the DCD and the SLI groups. The articulation rate of the children with SLI was slower than that of the DCD group, which was slower than that of typically developing children.
Conclusions: Language impairment is a common co-occurring condition in DCD. The language profile of children with either DCD or SLI was similar in the majority of, but not all, cases.
abstract_id: PUBMED:26094140
Reprint of "Co-occurring motor, language and emotional-behavioral problems in children 3-6 years of age". Purpose: Developmental Coordination Disorder (DCD) has been shown to co-occur with behavioral and language problems in school-aged children, but little is known as to when these problems begin to emerge, or if they are inherent in children with DCD. The purpose of this study was to determine if deficits in language and emotional-behavioral problems are apparent in preschool-aged children with movement difficulties.
Method: Two hundred and fourteen children (mean age 4 years 11 months, SD 9.8 months, 103 male) performed the Movement Assessment Battery for Children 2nd Edition (MABC-2). Children falling at or below the 16th percentile were classified as being at risk for movement difficulties (MD risk). Auditory comprehension and expressive communication were examined using the Preschool Language Scales 4th Edition (PLS-4). Parent-reported emotional and behavioral problems were assessed using the Child Behavior Checklist (CBCL).
Results: Preschool children with diminished motor coordination (n = 37) were found to have lower language scores, higher externalizing behaviors in the form of increased aggression, as well as increased withdrawn and other behavior symptoms compared with their typically developing peers.
Conclusions: Motor coordination, language and emotional-behavioral difficulties tend to co-occur in young children aged 3-6 years. These results highlight the need for early intervention.
Answer: Yes, speech/language-impaired toddlers are at increased risk for Developmental Coordination Disorder (DCD). A study conducted to determine whether children who had been identified with speech/language delays as toddlers demonstrated characteristics of DCD at kindergarten age found that of the 40 children followed up at 63-80 months of age, 18 showed evidence of significant motor impairment, and two-thirds of these met diagnostic criteria for DCD at follow-up. Additionally, 12 children were identified as having persistent speech/language problems, and of these, nine presented with significant motor coordination difficulties. Parental report of gross motor and fine motor problems at follow-up correlated highly with actual motor impairment scores, suggesting that early intervention programs for speech/language delays should also consider the potential for coordination difficulties that may become more evident when motor deficits begin to impact self-care and academic tasks (PUBMED:17439447). |
Instruction: Extrasinonasal infiltrative process associated with a sinonasal fungus ball: does it mean invasive fungal sinusitis?
Abstracts:
abstract_id: PUBMED:27283592
Extrasinonasal infiltrative process associated with a sinonasal fungus ball: does it mean invasive fungal sinusitis? Purpose: Invasive fungal sinusitis (IFS) has rarely been reported to develop from non-IFS. The purpose of this study was to disclose the nature of the extrasinonasal infiltrative process in the presence of a sinonasal fungus ball (FB).
Methods: We retrospectively reviewed the medical records, computed tomography scans and magnetic resonance images of 13 patients with sinonasal FB and an extrasinonasal infiltrative process. Based on histology and clinical course, we divided the extrasinonasal infiltrative process into IFS and the nonfungal inflammatory/infectious process (NFIP). The images were analyzed with particular attention to the presence of cervicofacial tissue infarction (CFTI).
Results: Of the 13 patients, IFS was confirmed in only one, while the remaining 12 were diagnosed to have presumed NFIP. One patient with IFS died shortly after diagnosis. In contrast, all 12 patients with presumed NFIP, except one, survived during a mean follow-up of 17 months. FB was located in the maxillary sinus (n=4), sphenoid sinus (n=8), and both sinuses (n=1). Bone defect was found in five patients, of whom four had a defect in the sphenoid sinus. Various sites were involved in the extrasinonasal infiltrative process, including the orbit (n=10), intracranial cavity (n=9), and soft tissues of the face and neck (n=7). CFTI was recognized only in one patient with IFS.
Conclusion: In most cases, the extrasinonasal infiltrative process in the presence of sinonasal FB did not seem to be caused by IFS but probably by NFIP. In our study, there were more cases of invasive changes with the sphenoid than with the maxillary FB.
abstract_id: PUBMED:32111952
Treatment outcomes in acute invasive fungal rhinosinusitis extending to the extrasinonasal area. Acute invasive fungal rhinosinusitis (AIFRS) can spread beyond the sinonasal cavity. It is necessary to analyze the association between the specific site involved in the extrasinonasal area and the survival rate to predict patient prognosis. We investigated 50 patients who had extrasinonasal lesions on preoperative gadolinium (Gd)-enhanced magnetic resonance imaging (MRI) scan and underwent wide surgical resection of AIFRS. The specific sites with loss of contrast enhancement (LoCE) on Gd-enhanced MRI were analyzed for AIFRS-specific survival rate. The most common underlying disease was diabetes mellitus followed by hematological malignancy. The most common symptoms were headache and facial pain. Seven patients (14.0%) expired because of AIFRS progression. Poor prognosis was independently associated with LoCE at the skull base on preoperative MRI (HR = 35.846, P = 0.004). In patients with AIFRS extending to the extrasinonasal area, LoCE at the skull base was an independent poor prognostic factor.
abstract_id: PUBMED:17505847
Allergic fungal sinusitis, fungus ball and invasive sinonasal mycosis - three fungal-related diseases. Background: Three different fungal-related clinical pictures have to be differentiated in the paranasal sinuses: allergic fungal sinusitis, fungus ball and invasive sinonasal mycosis.
Purpose: A morphological reevaluation of fungal-related diseases of the paranasal sinuses as well as a retrospective analysis of their clinical parameters was performed.
Patients And Methods: 86 patients with histopathologically proven fungal-related disease of the nasal sinuses were enrolled in this study. Reevaluation and correlation of clinical and histological parameters were conducted on routine material (HE, PAS and Grocott) according to modern morphological definitions.
Results: Invasive sinonasal mycosis was seen in 22 cases, eleven male and eleven female, mean age 57 years (22 to 84 years). It was significantly related (nine out of 22 patients, 41%) to immunocompromising conditions: three patients had diabetes mellitus type II, five had received radiation therapy for carcinoma and one patient suffered from bacterial endocarditis. A fungus ball was diagnosed in 60 patients, 26 male, 34 female, mean age 54 years (22-88 years). An immunocompromising condition was seen in nine out of 60 patients (15%). Causes of immune impairment were diabetes mellitus (two patients), radiation therapy for carcinoma (four patients), myocarditis (one patient) and chronic hepatitis (two patients). Allergic fungal sinusitis was recorded in four patients, three male, one female, mean age 43 years (17-63 years). No immunosuppression was diagnosed.
Conclusions: Despite the fact that allergic fungal sinusitis is the most common fungal disease of the paranasal sinuses, it is not well known among physicians and pathologists and is therefore underrepresented among the diagnoses of paranasal infections. The term "aspergilloma" is imprecise and does not represent a clear diagnosis. A further differentiation into "fungus ball" (without invasion) and "invasive sinonasal mycosis" is required. The three groups of fungal-related sinusitis occur at different ages; allergic fungal sinusitis is common among young adults. An immunocompromising condition is a prerequisite for invasive sinonasal mycosis.
abstract_id: PUBMED:22852108
Sinonasal risk factors for the development of invasive fungal sinusitis in hematological patients: Are they important? Invasive fungal sinusitis (IFS) is a highly aggressive infection that can affect hematologic patients. The classically described general risk factors, however, do not fully explain the development of IFS in a small percentage of cases. This study examined the impact of anatomic sinonasal factors and environmental factors on the development of IFS in high-risk patients. Medical records and computed tomography (CT) scans of patients admitted to our institution who were at high risk of developing IFS were retrospectively reviewed. Twenty-seven patients of 797 fulfilled the inclusion criteria. Patients affected by IFS were compared with patients not affected to identify possible sinonasal and environmental risk factors of IFS. Seven patients were excluded because of the lack of adequate radiological images. Six of the 20 eligible patients were assigned to the study group of patients affected by IFS and the remaining 14 patients were assigned to the control group. All but one case developed the infection during the summer with a significantly higher mean environmental temperature (p = 0.002). Anatomic nasal alterations were found in all patients affected by IFS and were significantly more frequent than in the control group (p = 0.014). It would be advisable to have patients with hematologic risk factors of IFS, especially during the summer period, undergo endoscopic nasal assessment. Furthermore, a CT finding of anatomic nasal alterations, such as anterior nasal septum deviation causing nasal obstruction, should increase the suspicion of IFS in case of the occurrence of nasal symptoms.
abstract_id: PUBMED:24105873
Surgery for pediatric invasive fungal sinonasal disease. Objectives/hypothesis: To evaluate the management and outcomes of children with invasive fungal sinonasal disease treated with radical surgery.
Study Design: Retrospective case series.
Methods: From 1994 to 2007, 11 pediatric patients were identified with invasive fungal sinonasal disease treated surgically by the same pediatric otolaryngologist. Collected data included demographics, oncologic diagnoses, absolute neutrophil counts, symptoms, computed tomography scan findings, biopsy and culture results, surgical procedures, concurrent medical therapies, complications, and survival.
Results: The studied patient population consisted of four males and seven females with an average age of 10 years (range, 2-14 years). Six patients were diagnosed with acute lymphoblastic leukemia and five with acute myeloid leukemia, which included 10 cases of relapsed disease. The average number of severely neutropenic days prior to diagnosis of an invasive fungal infection was 18 (range, 8-41 days). Culture results demonstrated Alternaria in seven patients and Aspergillus in four. Nine patients underwent an external medial maxillectomy, five of which were bilateral, and six underwent septectomy. All 11 patients (100%) were cured of their invasive fungal sinonasal disease without relapse. Three patients eventually died from unrelated causes.
Conclusions: Invasive fungal sinonasal disease is a life-threatening problem in immunocompromised children, especially with relapsed leukemia. Successful treatment depends on timely and aggressive surgical, antifungal, and supportive therapies. To our knowledge, this study represents the largest series of pediatric patients with invasive fungal sinonasal disease managed via an aggressive surgical approach with the best outcomes to date.
Level Of Evidence: 4.
abstract_id: PUBMED:30209746
Acute Invasive Fungal Rhinosinusitis: Frozen Section Histomorphology and Diagnosis with PAS Stain. Acute invasive fungal rhinosinusitis (AIFRS) is a fulminant infection in immunocompromised patients requiring rapid diagnosis (DX), frequently made on frozen section (FS) of sinonasal biopsies, followed by prompt surgical debridement. However, FS interpretation is often difficult and DX sometimes not possible. In this study we sought to characterize reasons for misinterpretation and methods to improve diagnostic accuracy. The FS slides from 271 biopsies of suspected AIFRS in a 16-year period were reviewed and the morphologic features evaluated for their utility in DX. Recurring specific patterns of necrosis were identified, which to our knowledge have not been described in the literature. Although they provide strong evidence for AIFRS, identifying fungus consistently in necrotic tissue is essential for DX. Clues to identifying fungus and pitfalls in misidentification were identified, but even with expert knowledge of these, a gap in accurate DX remained. The key to FS DX of AIFRS is to improve fungus identification in necrotic tissues. Methods had been sought in the past to stain fungus at FS without consistent success. The Periodic Acid Schiff's Reaction for Fungi was modified by our histopathology department for use on frozen tissue (PASF-fs) resulting in effective staining of the fungus. It stained fungus on all 62 positive slides when applied retrospectively over hematoxylin and eosin (H&E) stained FSs and used prospectively at FS for DX. Although knowledge of histologic morphology on FS is important, the crucial value of this study is the novel use of PASF-fs to identify fungus in the DX of AIFRS.
abstract_id: PUBMED:21525111
Radiologic characteristics of sinonasal fungus ball: an analysis of 119 cases. Background: It is important to differentiate sinonasal fungus ball from non-fungal sinusitis and other forms of fungal sinusitis in order to determine the optimal treatment. In particular, a sinonasal fungus ball, a non-invasive fungal sinusitis, can be characterized by radiologic findings before surgery.
Purpose: To differentiate a sinonasal fungus ball from other types of sinusitis and determine optimal treatment on the basis of radiologic findings before surgery.
Material And Methods: We studied 119 patients with clinically and pathologically proven sinonasal fungus balls. Their condition was evaluated radiologically with contrast-enhanced CT (99 patients), non-contrast CT (18 patients) and/or MRI (17 patients) prior to sinonasal surgery.
Results: Calcifications were found in 78 of 116 (67.2%) patients who underwent CT scans for fungus ball. As opposed to non-contrast CT scans, contrast CT scans revealed hyperattenuating fungal ball in 82.8% and enhanced inflamed mucosa in 65.5% of the patients, respectively. On MRI, most sinonasal fungal balls showed iso- or hypointensity on T1-weighted images and marked hypointensity on T2-weighted images. Inflamed mucosal membranes were noted and appeared as hypointense on T1-weighted images (64.7%) and hyperintense on T2-weighted images (88.2%).
Conclusion: When there are no calcifications visible on the CT scan, a hyperattenuating fungal ball located in the central area of the sinus with mucosal thickening on enhanced CT scans is an important feature of a non-invasive sinonasal fungus ball. On MRI, a sinonasal fungus ball has typical features of a marked hypointense fungus ball with a hyperintense mucosal membrane in T2-weighted images. A contrast-enhanced CT scan or MRI provides sufficient information for the preoperative differentiation of a sinonasal fungus ball from other forms of sinusitis.
abstract_id: PUBMED:23377235
Cervicofacial tissue infarction in patients with acute invasive fungal sinusitis: prevalence and characteristic MR imaging findings. Introduction: Tissue infarction is known as one of the characteristic features of invasive fungal sinusitis (IFS). The purpose of this study was to investigate the prevalence and characteristic MR imaging findings of cervicofacial tissue infarction (CFTI) associated with acute IFS.
Methods: We retrospectively reviewed MR images in 23 patients with histologically or microbiologically proven acute IFS. CFTI was defined as an area of lack of enhancement in and around the sinonasal tract on contrast-enhanced T1-weighted images. We divided CFTI into two groups, i.e., intrasinonasal and extrasinonasal. Particular attention was paid to the location of extrasinonasal CFTI and the signal intensity of CFTI on T1- and T2-weighted images. The presence of bone destruction on CT scans was also recorded.
Results: CFTI was found in 17 (74%) of 23 patients. All of these 17 patients had intrasinonasal CFTI, and 13 patients also had extrasinonasal CFTI. All 13 patients with extrasinonasal CFTI died of disease directly related to IFS. Various locations were involved in the 13 patients with extrasinonasal CFTI, including the orbit (n = 8), infratemporal fossa (n = 7), intracranial cavity (n = 3), and oral cavity and/or facial soft tissue (n = 4). Various signal intensities were noted at the area of CFTI on T1- and T2-weighted images. Bone destruction was found on CT scans in only 3 of 17 patients with CFTI.
Conclusion: CFTI with preservation of the bony wall of the involved sinonasal tract may be a characteristic MR imaging finding of acute IFS. The mortality is very high once the lesion extends beyond the sinonasal tract.
abstract_id: PUBMED:29187720
Histopathological Diagnosis of Fungal Sinusitis and Variety of its Etiologic Fungus. Fungal sinusitis is divided into two categories depending on mucosal invasion by fungus, i.e., invasive and noninvasive. Invasive fungal sinusitis is further divided into acute and chronic disease based on time course. Noninvasive fungal sinusitis includes chronic noninvasive sinusitis (fungal ball type) and allergic fungal sinusitis. Chronic noninvasive sinusitis is the most predominant fungal sinusitis in Japan, followed by allergic fungal sinusitis. Invasive fungal sinusitis is rare. Hyphal tissue invasion is seen in invasive fungal sinusitis. Acute invasive fungal sinusitis demonstrates hyphal vascular invasion while chronic invasive fungal sinusitis usually does not. Fungal tissue invasion is never found in noninvasive sinusitis. A fungal ball may exist adjacent to sinus mucosa, but its hyphae never invade the mucosa. Fungal balls sometimes contain conidial heads and calcium oxalate, which aid in identifying the fungus in the tissue. Allergic fungal sinusitis is characterized by allergic mucin that is admixed with numerous eosinophils and sparsely scattered fungal elements. Histopathology is important in classifying fungal sinusitis, especially in confirming tissue invasion by the fungus.
abstract_id: PUBMED:28616054
An investigation on non-invasive fungal sinusitis; Molecular identification of etiologic agents. Background: Fungal sinusitis has been increasing worldwide over the past two decades. It is divided into two types, invasive and noninvasive. Noninvasive types comprise allergic fungal sinusitis (AFS) and fungus ball. AFS is a hypersensitivity reaction to fungal allergens in the mucosa of the sinonasal tract in atopic individuals. The fungus ball is a different type of noninvasive fungal rhinosinusitis, defined as an accumulation of debris and fungal elements inside a paranasal sinus. Fungal sinusitis is caused by various fungi such as Aspergillus species, Penicillium, Mucor, Rhizopus, and phaeohyphomycetes. The aim of the present study is to identify fungal species isolated from noninvasive fungal sinusitis by molecular methods.
Materials And Methods: During 2015-2016, a total of 100 suspected patients were examined for fungal sinusitis. Functional endoscopic sinus surgery was performed using the Messerklinger technique. Clinical samples were identified by phenotypic and molecular methods. Polymerase chain reaction (PCR) sequencing of the ITS1-5.8S-ITS2 region and PCR-restriction fragment length polymorphism with the MspI restriction enzyme were performed for molecular identification of molds and yeasts, respectively.
Results: Twenty-seven out of 100 suspected cases (27%) had fungal sinusitis. Nasal congestion (59%) and headache (19%) were the most common clinical signs among patients. Fifteen patients (55.5%) were male and 12 patients (44.5%) were female. Aspergillus flavus was the most prevalent fungal species (26%), followed by Penicillium chrysogenum (18.5%) and Candida glabrata species complex (15%).
Conclusion: Since clinical manifestations, computed tomography scan, endoscopy, and histopathological findings are very nonspecific in AFS and fungus ball; therefore, molecular investigations are compulsory for precise identification of etiologic agents and appropriate management of these fungal infections.
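PCR-RFLP, as used in this study, distinguishes species by the fragment sizes a restriction enzyme such as MspI (recognition site CCGG) produces from an amplicon. A minimal in-silico digestion sketch, assuming Python with Biopython; the sequence below is a made-up toy amplicon, not a real ITS region.

```python
# In-silico MspI digestion of a toy amplicon (PCR-RFLP sketch).
# Requires: pip install biopython
from Bio.Seq import Seq
from Bio.Restriction import MspI

amplicon = Seq("ATGCCGGTTACGTAGCTAGCCGGAATTCGCTTAGCCGGAT")  # toy sequence only

cut_positions = MspI.search(amplicon)   # cleavage positions along the sequence
fragments = MspI.catalyse(amplicon)     # the resulting restriction fragments

print("cut sites:", cut_positions)
print("fragment lengths:", [len(f) for f in fragments])
```

In practice the observed fragment-length pattern is compared against reference patterns for candidate species, which is why the enzyme and target region must be chosen so that the species of interest yield distinct profiles.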
Answer: The extrasinonasal infiltrative process associated with a sinonasal fungus ball does not necessarily mean invasive fungal sinusitis (IFS). In a study that retrospectively reviewed the medical records, computed tomography, and magnetic resonance images of 13 patients with sinonasal fungus ball and extrasinonasal infiltrative process, it was found that IFS was confirmed in only one patient, while the remaining 12 were diagnosed with presumed nonfungal inflammatory/infectious process (NFIP). The study concluded that in most cases, the extrasinonasal infiltrative process in the presence of a sinonasal fungus ball was probably caused by NFIP rather than IFS (PUBMED:27283592).
Furthermore, the study indicated that there were more cases of invasive changes with the sphenoid than with the maxillary fungus ball. It is important to differentiate between IFS and NFIP because the clinical outcomes and treatment strategies may differ significantly. For instance, the patient with IFS in the study died shortly after diagnosis, while all but one of the patients with presumed NFIP survived during a mean follow-up of 17 months. This highlights the importance of accurate diagnosis in managing patients with extrasinonasal infiltrative processes associated with sinonasal fungus balls.
Instruction: Are probiotics detectable in human feces after oral uptake by healthy volunteers?
Abstracts:
abstract_id: PUBMED:17721765
Are probiotics detectable in human feces after oral uptake by healthy volunteers? Goals: Assessment of the presence of probiotic bacteria in feces after oral ingestion.
Background: Probiotic bacteria are said to have beneficial effects on the host. As a precondition for any effect, probiotic strains must survive passage through the gastrointestinal tract.
Study: The feces of seven volunteers were analyzed for the presence of probiotic strains after one week's oral ingestion of each of six commercially available products: E. coli Nissle 0.5-5 x 10(9) cells (Mutaflor), Enterococcus faecium SF 68 7.5 x 10(7) cells (Bioflorin), Lactobacillus acidophilus and Bifidobacterium infantis both 1 x 10(9) cells (Infloran), Lactobacillus gasseri and Bifidobacterium longum both 1 x 10(8) cells (Omniflora), Lactobacillus casei rhamnosus 1 x 10(9) cells (Antibiophilus), and yoghurt enriched with Lactobacillus casei Immunitas 1 x 10(10) cells (Actimel). Ten colonies were selected from each stool sample, and DNA was extracted and typed using random amplification of polymorphic DNA (RAPD). Typing patterns of the ingested probiotics and the fecal isolates were compared.
Results: Fingerprints identical to the ingested probiotic strains were recovered from fecal samples of 4/7 volunteers after one week of Mutaflor, from 4/6 after taking Bioflorin, and from 1/6 after Infloran. Cultivation of strains of the same species from fecal specimens was negative after consumption of Antibiophilus, Omniflora and Actimel.
Conclusions: After oral consumption of probiotics, E. coli and enterococci could be detected in stool samples (57% and 67%, respectively). In contrast, with only one exception, ingested lactobacilli and bifidobacteria could not be detected in human feces.
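The recovery rates quoted above are simple binomial proportions, and with only six or seven volunteers per product the uncertainty is substantial. The short sketch below (assuming statsmodels is installed) attaches Wilson 95% confidence intervals to the counts reported in the Results; it is purely illustrative.

from statsmodels.stats.proportion import proportion_confint

recoveries = [
    ("Mutaflor (E. coli Nissle)", 4, 7),
    ("Bioflorin (E. faecium SF 68)", 4, 6),
    ("Infloran (L. acidophilus / B. infantis)", 1, 6),
]
for product, recovered, total in recoveries:
    low, high = proportion_confint(recovered, total, alpha=0.05, method="wilson")
    print(f"{product}: {recovered}/{total} = {recovered / total:.0%} "
          f"(95% Wilson CI {low:.0%}-{high:.0%})")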
abstract_id: PUBMED:32662591
Efficacy of probiotics on stress in healthy volunteers: A systematic review and meta-analysis based on randomized controlled trials. Background: Probiotics seems to play a beneficial role in stressed populations; thus, a systematic review and meta-analysis to assess the effects of probiotics on stress in healthy subjects were conducted.
Methods: Randomized controlled trials on the effects of probiotics on stress in healthy subjects were retrieved from five databases. The effects of probiotics on subjective stress level, stress-related subthreshold anxiety/depression level, cortisol level, and adverse reactions were analyzed. Separate subgroup analyses were conducted on single-strain probiotics versus multi-strain probiotics, and short-term administration versus long-term administration.
Results: Seven studies were included, involving a total of 1,146 subjects. All the studies were rated as low or moderate risk of bias. Our research found that probiotic administration can generally reduce the subjective stress level of healthy volunteers and may improve their stress-related subthreshold anxiety/depression level, but no significant effect was observed in the subgroup analyses. The effect of probiotics on cortisol level was not significant. Adverse reactions were reported in only one of seven studies, but were not described.
Conclusion: Current evidence suggests that probiotics can reduce subjective stress level in healthy volunteers and may alleviate stress-related subthreshold anxiety/depression level, without significant effect on cortisol level, and there is not enough support to draw conclusions about adverse effects; thus, more reliable evidence from clinical trials is needed.
abstract_id: PUBMED:26796604
Pharmacokinetics, metabolism, and excretion of nefopam, a dual reuptake inhibitor in healthy male volunteers. 1. The disposition of nefopam, a serotonin-norepinephrine reuptake inhibitor, was characterized in eight healthy male volunteers following a single oral dose of 75 mg [(14)C]-nefopam (100 μCi). Blood, urine, and feces were sampled for 168 h post-dose. 2. Mean (± SD) maximum blood and plasma radioactivity concentrations were 359 ± 34.2 and 638 ± 64.7 ngEq free base/g, respectively, at 2 h post-dose. Recovery of radioactive dose was complete (mean 92.6%); a mean of 79.3% and 13.4% of the dose was recovered in urine and feces, respectively. 3. Three main radioactive peaks were observed in plasma (metabolites M2 A-D, M61, and M63). Intact [(14)C]-nefopam was less than 5% of the total radioactivity in plasma. In urine, the major metabolites were M63, M2 A-D, and M51 which accounted for 22.9%, 9.8%, and 8.1% of the dose, respectively. An unknown entity, M55, was the major metabolite in feces (4.6% of dose). Excretion of unchanged [(14)C]-nefopam was minimal.
abstract_id: PUBMED:36165644
Synbiotic Intervention with Lactobacilli, Bifidobacteria, and Inulin in Healthy Volunteers Increases the Abundance of Bifidobacteria but Does Not Alter Microbial Diversity. Synbiotics combine probiotics and prebiotics and are being investigated for potential health benefits. In this single-group-design trial, we analyzed changes in the gut microbiome, stool quality, and gastrointestinal well-being in 15 healthy volunteers after a synbiotic intervention comprising Lacticaseibacillus rhamnosus (LGG), Lactobacillus acidophilus (LA-5), Lacticaseibacillus paracasei subsp. paracasei (L. CASEI 431), and Bifidobacterium animalis subsp. lactis BB-12 and 20 g of chicory-derived inulin powder consumed daily for 4 weeks. Fecal samples were collected at baseline and at completion of the intervention, and all participants completed a fecal diary based on the Bristol Stool Scale and recorded their gastrointestinal well-being. No adverse effects were observed after consumption of the synbiotic product, and stool consistency and frequency remained almost unchanged during the trial. Microbiome analysis of the fecal samples was achieved using shotgun sequencing followed by taxonomic profiling. No changes in alpha and beta diversity were seen after the intervention. Greater relative abundances of Bifidobacteriaceae were observed in 12 subjects, with indigenous bifidobacteria species constituting the main increase. All four probiotic organisms increased in abundance, and L. rhamnosus, B. animalis, and L. acidophilus were differentially abundant compared to baseline. Comparison of the fecal strains to the B. animalis subsp. lactis BB-12 reference genome and the sequenced synbiotic product revealed only a few single-nucleotide polymorphisms differentiating the probiotic B. animalis subsp. lactis BB-12 from the fecal strains identified, indicating that this probiotic strain was detectable after the intervention. IMPORTANCE The effects of probiotics/synbiotics are seldom investigated in healthy volunteers; therefore, this study is important, especially considering the safety aspects of consuming multiple probiotics together with prebiotic fiber. The study explores the potential of a synbiotic intervention with lactobacilli, bifidobacteria, and inulin in healthy volunteers and tracks the ingested probiotic strain B. animalis subsp. lactis.
abstract_id: PUBMED:27939319
Ability of Lactobacillus kefiri LKF01 (DSM32079) to colonize the intestinal environment and modify the gut microbiota composition of healthy individuals. Background: Probiotics have been observed to positively influence the host's health, but to date few data about the ability of probiotics to modify the gut microbiota composition exist.
Aims: To evaluate the ability of Lactobacillus kefiri LKF01 DSM32079 (LKEF) to colonize the intestinal environment of healthy subjects and modify the gut microbiota composition.
Methods: Twenty Italian healthy volunteers were randomized in pre-prandial and post-prandial groups. Changes in the gut microbiota composition were detected by using a Next Generation Sequencing technology (Ion Torrent Personal Genome Machine).
Results: L. kefiri was recovered in the feces of all volunteers after one month of probiotic administration, while it was detected only in three subjects belonging to the pre-prandial group and in two subjects belonging to the post-prandial group one month after the end of probiotic consumption. After one month of probiotic oral intake we observed a reduction of Bilophila, Butyricimonas, Flavonifractor, Oscillibacter and Prevotella. Interestingly, after the end of probiotic administration Bacteroides, Barnesiella, Butyricimonas, Clostridium, Haemophilus, Oscillibacter, Salmonella, Streptococcus, Subdoligranulum, and Veillonella were significantly reduced compared with baseline samples.
Conclusion: L. kefiri LKF01 showed a strong ability to modulate the gut microbiota composition, leading to a significant reduction of several bacterial genera directly involved in the onset of pro-inflammatory response and gastrointestinal diseases.
abstract_id: PUBMED:7547151
Fecal recovery following oral administration of Lactobacillus strain GG (ATCC 53103) in gelatine capsules to healthy volunteers. Recovery of the suggested probiotic strain Lactobacillus GG in feces was studied after oral administration. Lactobacillus GG was given to 20 healthy human volunteers for 7 days in gelatine capsules with daily doses of 1.6 x 10(8) cfu and 1.2 x 10(10) cfu. All the volunteers in the higher dose group had detectable numbers of Lactobacillus GG in their feces during the test period. The strain was detected in feces of all the volunteers after 3 days of administration. No effect was observed on the total number of fecal lactobacilli. Fecal detection of the strain may facilitate dose-response studies and provide a useful tool in dietary studies utilizing the strain in foods or food-type products.
abstract_id: PUBMED:24590312
Mass balance and metabolism of the antimalarial pyronaridine in healthy volunteers. This was a single dose mass balance and metabolite characterization study of the antimalarial agent pyronaridine. Six healthy male adults were administered a single oral dose of 720 mg pyronaridine tetraphosphate with 800 nCi of radiolabeled (14)C-pyronaridine. Urine and feces were continuously collected through 168 h post-dose, with intermittent 48 h collection periods thereafter through 2064 h post-dose. Drug recovery was computed for analyzed samples and interpolated for intervening time periods in which collection did not occur. Blood samples were obtained to evaluate the pharmacokinetics of total radioactivity and of the parent compound. Total radioactivity in urine, feces, and blood samples was determined by accelerator mass spectrometry (AMS); parent concentrations in blood were determined with LC/MS. Metabolite identification based on blood, urine, and feces samples was conducted using a combination of LC + AMS for identifying radiopeaks, followed by LC/MS/MS for identity confirmation/elucidation. The mean cumulative drug recovery in the urine and feces was 23.7% and 47.8%, respectively, with an average total recovery of 71.5%. Total radioactivity was slowly eliminated from blood, with a mean half-life of 33.5 days, substantially longer than the mean parent compound half-life of 5.03 days. Total radioactivity remained detectable in urine and feces collected in the final sampling period, suggesting ongoing elimination. Nine primary and four secondary metabolites of pyronaridine were identified. This study revealed that pyronaridine and its metabolites are eliminated by both the urinary and fecal routes over an extended period of time, and that multiple, varied pathways characterize pyronaridine metabolism.
abstract_id: PUBMED:19871687
Transmission of Epidemic Gastroenteritis to Human Volunteers by Oral Administration of Fecal Filtrates. Epidemic gastroenteritis was transmitted to human volunteers by the oral administration of fecal filtrates. The original inocula were obtained from patients in a natural outbreak which occurred at Marcy State Hospital in the winter of 1946-47. The experimental disease closely resembled that of the donors. The incubation period ranged from 1 to 5 days, with a mean of 3 days. The disease was carried through three generations, in the last two by means of fecal filtrates. Oral administration of unfiltered throat washings from experimental cases of the disease likewise induced gastroenteritis, but subjects who inhaled a portion of the same throat washings remained asymptomatic. Volunteers who inhaled throat washings taken from patients in the epidemic at Marcy State Hospital also failed to develop the disease. Five volunteers who had previously been inoculated with fecal filtrates were reinoculated with the same material. Gastroenteritis followed in one of the two subjects who had failed to contract the disease the first time. The others remained well. Embryonated hens' eggs were inoculated with one of the two unfiltered stool suspensions used in the pool which had induced gastroenteritis in each of the three volunteers to whom it was fed. Three sets of eggs were inoculated: one on the chorioallantoic membrane, another into the yolk sac, and a third into the amniotic sac. Three serial passages were carried out by each method at varying time intervals. Penicillin and streptomycin were employed as antibacterial agents. Tissue and extraembryonic fluids from the third passage were non-infective for volunteers.
abstract_id: PUBMED:32221754
Open-label, single-center, phase I trial to investigate the mass balance and absolute bioavailability of the highly selective oral MET inhibitor tepotinib in healthy volunteers. Tepotinib (MSC2156119J) is an oral, potent, highly selective MET inhibitor. This open-label, phase I study in healthy volunteers (EudraCT 2013-003226-86) investigated its mass balance (part A) and absolute bioavailability (part B). In part A, six participants received tepotinib orally (498 mg spiked with 2.67 MBq [14C]-tepotinib). Blood, plasma, urine, and feces were collected up to day 25 or until excretion of radioactivity was <1% of the administered dose. In part B, six participants received 500 mg tepotinib orally as a film-coated tablet, followed by an intravenous [14C]-tepotinib tracer dose (53-54 kBq) 4 h later. Blood samples were collected until day 14. In part A, a median of 92.5% (range, 87.1-96.9%) of the [14C]-tepotinib dose was recovered in excreta. Radioactivity was mainly excreted via feces (median, 78.7%; range, 69.4-82.5%). Urinary excretion was a minor route of elimination (median, 14.4% [8.8-17.7%]). Parent compound was the main constituent in excreta (45% [feces] and 7% [urine] of the radioactive dose). M506 was the only major metabolite. In part B, absolute bioavailability was 72% (range, 62-81%) after oral administration of 500 mg tablets (the dose and formulation used in phase II trials). In conclusion, tepotinib and its metabolites are mainly excreted via feces; parent drug is the major eliminated constituent. Oral bioavailability of tepotinib is high, supporting the use of the current tablet formulation in clinical trials. Tepotinib was well tolerated in this study with healthy volunteers.
abstract_id: PUBMED:38044419
Mass Balance and Metabolic Pathways of Eliapixant, a P2X3 Receptor Antagonist, in Healthy Male Volunteers. Background: Overactive adenosine triphosphate signaling via P2X3 homotrimeric receptors is implicated in multiple conditions. To fully understand the metabolism and elimination pathways of eliapixant, a study was conducted to assess the pharmacokinetics, mass balance, and routes of excretion of a single oral dose of the selective P2X3 receptor antagonist eliapixant, in addition to an in vitro characterization.
Methods: In this single-center open-label non-randomized non-placebo-controlled phase I study, healthy male subjects (n = 6) received a single dose of 50 mg eliapixant blended with 3.7 MBq [14C]eliapixant as a PEG 400-based oral solution. Total radioactivity and metabolites excreted in urine and feces, and pharmacokinetics of total radioactivity, eliapixant, and metabolites in plasma were assessed via liquid scintillation counting and high-performance liquid chromatography-based methods coupled to radiometric and mass spectrometric detection. Metabolite profiles of eliapixant in human in vitro systems and metabolizing enzymes were also investigated.
Results: After administration as an oral solution, eliapixant was rapidly absorbed, reaching maximum plasma concentrations within 2 h. Eliapixant was eliminated from plasma with a mean terminal half-life of 48.3 h. Unchanged eliapixant was the predominant component in plasma (72.6% of total radioactivity area under the curve). The remaining percentage of drug-related components in plasma probably represented the sum of many metabolites, detected in trace amounts. Mean recovery of total radioactivity was 97.9% of the administered dose (94.3-99.4%) within 14 days, with 86.3% (84.8-88.1%) excreted via feces and 11.6% (9.5-13.1%) via urine. Excretion of parent drug was minimal in feces (0.7% of dose) and urine (≈ 0.5%). In feces, metabolites formed by oxidation represented > 90% of excreted total radioactivity. The metabolites detected in the in vitro experiments were similar to those identified in vivo.
Conclusion: Complete recovery of administered eliapixant-related radioactivity was observed in healthy male subjects with predominant excretion via feces. Eliapixant was almost exclusively cleared by oxidative biotransformation (> 90% of dose), with major involvement of cytochrome P450 3A4. Excretion of parent drug was of minor importance (~ 1% of dose).
Clinical Trial Registration: ClinicalTrials.gov: NCT04487431 (registered 27 July 2020)/EudraCT number: 2020-000519-54 (registered 3 February 2020), NCT02817100 (registered 26 June 2016), NCT03310645 (registered 16 October 2017).
Answer: Yes, probiotics are detectable in human feces after oral uptake by healthy volunteers. Studies have shown that after oral consumption of probiotics, certain strains of probiotics such as E. coli and enterococci could be detected in stool samples of volunteers. Specifically, after one week of oral ingestion of E. coli Nissle and Enterococcus faecium SF 68, these probiotics were recovered from fecal samples of 57% and 67% of volunteers, respectively. However, with only one exception, ingested lactobacilli and bifidobacteria could not be detected in human feces after consumption of other probiotic products (PUBMED:17721765).
In another study, Lactobacillus kefiri LKF01 was recovered in the feces of all volunteers after one month of probiotic administration, indicating its ability to colonize the intestinal environment and modify the gut microbiota composition (PUBMED:27939319). Similarly, Lactobacillus GG was detected in the feces of all volunteers in the higher dose group during the test period after being given in gelatine capsules (PUBMED:7547151).
Moreover, a synbiotic intervention with Lactobacilli, Bifidobacteria, and inulin in healthy volunteers increased the abundance of Bifidobacteria, and all four probiotic organisms increased in abundance, indicating that these probiotic strains were detectable after the intervention (PUBMED:36165644).
These findings suggest that certain probiotic strains can survive the passage through the gastrointestinal tract and be detected in feces, although the detectability may vary depending on the specific strain and product consumed.
Instruction: Are advance directives associated with better hospice care?
Abstracts:
abstract_id: PUBMED:24852308
Are advance directives associated with better hospice care? Objectives: To describe individuals with advance directives at the time of hospice enrollment and to determine whether they have patterns of care and outcomes that are different from those of individuals without advance directives.
Design: Electronic health record-based retrospective cohort study with propensity score-adjusted analysis.
Setting: Three hospice programs in the United States.
Participants: Individuals admitted to hospice between January 1, 2008, and May 15, 2012 (N = 49,370).
Measurements: Timing of hospice enrollment before death, rates of voluntary withdrawal from hospice, and site of death.
Results: Most participants (35,968, 73%) had advance directives at the time of hospice enrollment. These participants were enrolled in hospice longer (median 29 vs 15 days) and had longer survival times before death (adjusted hazard ratio = 0.62; 95% confidence interval (CI) 0.58-0.66; P < .001). They were less likely to die within the first week after hospice enrollment (24.3% vs 33.2%; adjusted odds ratio (aOR) = 0.83, 95% CI = 0.78-0.88; P < .001). Participants with advance directives were less likely to leave hospice voluntarily (2.2% vs 3.4%; aOR = 0.82, 95% CI = 0.74-0.90; P = .003) and more likely to die at home or in a nursing home than in an inpatient unit (15.3% vs 25.8%; aOR = 0.82, 95% CI = 0.77-0.87; P < .001).
Conclusion: Participants with advance directives were enrolled in hospice for a longer period of time before death than those without and were more likely to die in the setting of their choice.
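The enrollment-duration comparison in this abstract is a standard time-to-event problem. The sketch below uses the Kaplan-Meier estimator from lifelines on a tiny synthetic dataset; the column names and values are invented for illustration, and this is not the study's propensity-score-adjusted model.

import pandas as pd
from lifelines import KaplanMeierFitter

# Synthetic stand-in: days enrolled in hospice before death, whether the
# death was observed, and advance directive (AD) status at enrollment.
df = pd.DataFrame({
    "days": [29, 40, 33, 21, 35, 15, 7, 12, 5, 9],
    "died": [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
    "advance_directive": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})
kmf = KaplanMeierFitter()
for status, group in df.groupby("advance_directive"):
    kmf.fit(group["days"], group["died"], label=f"AD = {status}")
    print(f"AD = {status}: median time on hospice = {kmf.median_survival_time_} days")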
abstract_id: PUBMED:33535787
Awareness of Palliative Care, Hospice Care, and Advance Directives in a Racially and Ethnically Diverse Sample of California Adults. Background: Numerous studies have documented multilevel racial inequalities in health care utilization, medical treatment, and quality of care in minority populations in the United States. Palliative care for people with serious illness and hospice services for people approaching the end of life are no exception. It is also well established that Hispanics and non-Hispanic Blacks are more likely than non-Hispanic Whites to have less knowledge about advance care planning and directives, hospice, and palliative care. Both qualitative and quantitative research has identified lack of awareness of palliative and hospice services as one of the major factors contributing to the underuse of these services by minority populations. However, an insufficient number of racial/ethnic comparative studies have been conducted to examine associations among various independent factors in relation to awareness of end-of-life, palliative care and advance care planning and directives.
Aims: The main objective of this analysis was to examine correlates of awareness of palliative, hospice care and advance directives in a racially and ethnically diverse large sample of California adults.
Methods: This cross-sectional study includes 2,328 adults: Hispanics (31%); non-Hispanic Blacks (30%); and non-Hispanic Whites (39%) from the Survey of California Adults on Serious Illness and End-of-Life 2019. Using multivariate analysis, we adjusted for demographic and socio-economic variables while estimating the potential independent impact of health status, lack of primary care providers, and recent experiences of participants with a family member with serious illnesses.
Results: Hispanic and non-Hispanic Black participants are far less likely to report that they have heard about palliative and hospice care and advance directives than their non-Hispanic White counterparts. In this study, 75%, 74%, and 49% of Hispanic, non-Hispanic Black, and non-Hispanic White participants, respectively, claimed that they had never heard about palliative care. Multivariate analysis showed that gender, age, education, and income were all significantly associated with awareness. Furthermore, being engaged with decision making for a loved one with a serious illness and having a primary care provider were associated with awareness of palliative care and advance directives.
Discussion: Our findings reveal that lack of awareness of hospice and palliative care and advance directives among California adults is largely influenced by race and ethnicity. In addition, demographic and socio-economic variables, health status, access to primary care providers, and informal caregiving experience were all independently associated with awareness of advance directives and palliative and hospice care. These effects are complex and may be attributed to various historical, social, and cultural mechanisms at the individual, community, and organizational levels. A large number of factors should be addressed in order to increase knowledge and awareness of end-of-life and palliative care as well as completion of advance directives and planning. The results of this study may guide the design of multi-level, community-based, and theoretically grounded awareness and training models that enhance awareness of palliative care, hospice care, and advance directives among minority populations.
abstract_id: PUBMED:28797645
Advance Directives in Hospice Healthcare Providers: A Clinical Challenge. Background: On a daily basis, healthcare providers, especially those dealing with terminally ill patients, such as hospice workers, witness how advance directives help ensure the wishes of patients. They also witness the deleterious consequences when patients fail to document the care they desire at their end of life. To the best of our knowledge there are no data concerning the prevalence of advance directives among hospice healthcare providers. We therefore explored the prevalence and factors influencing completion rates in a survey of hospice healthcare providers.
Methods: Surveys that included 32 items to explore completion rates, as well as barriers, knowledge, and demographics, were e-mailed to 2097 healthcare providers, including employees and volunteers, at a nonprofit hospice.
Results: Of 890 respondents, 44% reported having completed an advance directive. Ethnicity, age, relationship status, and perceived knowledge were all significant factors influencing the completion rates, whereas years of experience or working directly with patients had no effect. Procrastination, fear of the subject, and costs were common reasons reported as barriers. Upon completion of the survey, 43% said they will now complete an advance directive, and 45% will talk to patients and families about their wishes.
Conclusion: The majority of hospice healthcare providers have not completed an advance directive. These results are very similar to those for other healthcare providers treating patients with terminal diseases, specifically oncologists. Because, at completion, 43% said that they would now complete an advance directive, such a survey of healthcare providers may help increase completion rates.
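Factor analyses like the one above (ethnicity, age, and relationship status as predictors of completion) typically start from a contingency-table test before any regression modeling. A minimal illustration with scipy follows; the cell counts are invented, not the survey's actual cross-tabulation.

from scipy.stats import chi2_contingency

# Invented 2x2 cross-tabulation: AD completion by age group.
table = [
    [110, 240],  # younger respondents: completed, not completed
    [280, 260],  # older respondents: completed, not completed
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")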
abstract_id: PUBMED:33287561
Disparities in Palliative and Hospice Care and Completion of Advance Care Planning and Directives Among Non-Hispanic Blacks: A Scoping Review of Recent Literature. Objectives: Published research in disparities in advance care planning, palliative, and end-of-life care is limited. However, available data points to significant barriers to palliative and end-of-life care among minority adults. The main objective of this scoping review was to summarize the current published research and literature on disparities in palliative and hospice care and completion of advance care planning and directives among non-Hispanc Blacks.
Methods: The scoping review method was used because currently published research on disparities in palliative and hospice care, as well as advance care planning, is limited. Nine electronic databases and websites were searched to identify English-language peer-reviewed publications published within the last 20 years. A total of 147 studies that addressed palliative care, hospice care, and advance care planning and included non-Hispanic Blacks were incorporated in this study. The literature review includes manuscripts that discuss the intersection of social determinants of health and end-of-life care for non-Hispanic Blacks. We examined the potential role and impact of several factors, including knowledge regarding palliative and hospice care; healthcare literacy; communication with providers and family; perceived or experienced discrimination within healthcare systems; mistrust of healthcare providers; healthcare coverage; and religious-related activities and beliefs, on palliative and hospice care utilization and completion of advance directives among non-Hispanic Blacks.
Discussion: Cross-sectional and longitudinal national surveys, as well as local community- and clinic-based data, unequivocally point to major disparities in palliative and hospice care in the United States. Results suggest that national and community-based, multi-faceted, multi-disciplinary, theory-based, resourceful, culturally sensitive interventions are urgently needed. A number of practical investigational interventions are offered. Additionally, we identify several research questions which need to be addressed in future research.
abstract_id: PUBMED:18771455
What explains racial differences in the use of advance directives and attitudes toward hospice care? Cultural beliefs and values are thought to account for differences between African Americans and whites in the use of advance directives and beliefs about hospice care, but few data clarify which beliefs and values explain these differences. Two hundred five adults aged 65 and older who received primary care in the Duke University Health System were surveyed. The survey included five scales: Hospice Beliefs and Attitudes, Preferences for Care, Spirituality, Healthcare System Distrust, and Beliefs About Dying and Advance Care Planning. African Americans were less likely than white subjects to have completed an advance directive (35.5% vs 67.4%, P<.001) and had less favorable beliefs about hospice care (Hospice Beliefs and Attitudes Scale score, P<.001). African Americans were more likely to express discomfort discussing death, want aggressive care at the end of life, have spiritual beliefs that conflict with the goals of palliative care, and distrust the healthcare system. In multivariate analyses, none of these factors alone completely explained racial differences in possession of an advance directive or beliefs about hospice care, but when all of these factors were combined, race was no longer a significant predictor of either of the two outcomes. These findings suggest that ethnicity is a marker of common cultural beliefs and values that, in combination, influence decision-making at the end of life. This study has implications for the design of healthcare delivery models and programs that provide culturally sensitive end-of-life care to a growing population of ethnically diverse older adults.
abstract_id: PUBMED:21398271
Advance directives in home health and hospice agencies: United States, 2007. This report provides nationally representative data on policies, storage, and implementation of advance directives (ADs) in home health and hospice (HHH) agencies in the United States using the National Home and Hospice Care Survey. Federally mandated AD policies were followed in >93% of all agencies. Nearly all agencies stored ADs in a file at the agency, but only half stored them at the patient's residence. Nearly all agencies informed staff about the AD, but only 77% and 72% of home health agencies informed the attending physician and next-of-kin, respectively. Home health and hospice agencies are nearly universally compliant with AD policies that are required in order to receive Medicare and Medicaid payments, but have much lower rates of adoption of AD policies beyond federally mandated minimums.
abstract_id: PUBMED:21576090
Documentation of advance directives among home health and hospice patients: United States, 2007. This report provides nationally representative data on documentation of advance directives (ADs) among home health (HH) and hospice patients. Advance directives were recorded for 29% of HH patients and 90% of hospice discharges. Among HH patients, increasing age and use of assistive devices were associated with greater odds of having an AD, while being Hispanic or black (relative to white) and enrolled in Medicaid decreased the odds of having ADs. Among hospice discharges, being enrolled in Medicare and having 4 or 5 activities of daily living (ADL) limitations were associated with higher odds of ADs, while depression, use of emergency services, and being black (relative to white) were associated with lower odds. Even after adjustment for potentially confounding factors, racial differences persist in AD documentation in both care settings.
abstract_id: PUBMED:22310025
Integrative palliative care, advance directives, and hospital outcomes of critically ill older adults. Objective: To examine the associations between palliative care types and hospital outcomes for patients who have or do not have advance directives.
Method: Using administrative claims and clinical data for critically ill older adults (n = 1291), multivariable regressions examined the associations between palliative care types and hospital outcomes by advance directive status.
Results: Integrative palliative care was associated with lower hospital costs, lower adjusted probability of in-hospital deaths, and higher adjusted probability of hospice discharges. There was no difference in hospital outcomes by palliative care types in those with advance directives.
Conclusion: Significantly lower hospital costs and in-hospital deaths with higher hospice discharges were observed in integrative palliative care compared to consultative palliative care, but these findings were diminished with the presence of advance directives.
abstract_id: PUBMED:23079306
Dementia, death and advance directives. This article considers the ethics of advance directives, especially in relation to conditions such as dementia. For some choices, such as whether one's life should end at home or in a hospice, advance directives can be very enlightened and helpful. Others, such as directives to end the life of an autonomous subject against their will, have no moral appeal and would rightly be ignored. In a wide range of intermediate cases, given our typical lack of insight into how changes in our health condition will affect us in other ways, we should be very cautious indeed in promoting the use of advance directives in end-of-life decisions, at least where a reasonable quality of life remains. There may be some reasons for giving priority to the earlier autonomous self over a later, contented but non-autonomous self, but these reasons seem far from compelling.
abstract_id: PUBMED:26146326
Nurses' knowledge about the health care proxy and advance directives. The Basse-Normandie palliative care nurses' group carried out a survey regarding nurses' knowledge of the health care proxy and advance directives. The study revealed a lack of connection between these two arrangements and poor knowledge about advance directives, but an understanding of the role of the health care proxy. How, therefore, can patients be effectively informed? How should they be supported in the process of determining for themselves the conditions of their end of life?
Answer: Yes, advance directives are associated with better hospice care. Individuals with advance directives at the time of hospice enrollment were found to be enrolled in hospice longer (median 29 vs 15 days) and had longer survival times before death. They were also less likely to die within the first week after hospice enrollment and less likely to leave hospice voluntarily. Furthermore, they were more likely to die at home or in a nursing home than in an inpatient unit, suggesting that they were more likely to die in the setting of their choice (PUBMED:24852308). This indicates that advance directives can lead to care that is more aligned with the patient's preferences and potentially improve the overall quality of hospice care received.
Instruction: Is an isolated ventricular septal defect detected before 24 weeks on ultrasound associated with fetal aneuploidy?
Abstracts:
abstract_id: PUBMED:26135785
Is an isolated ventricular septal defect detected before 24 weeks on ultrasound associated with fetal aneuploidy? Objective: Whether the isolated VSD (i-VSD) is associated with aneuploidy to the same degree as a more severe heart anomaly is unclear. Our objective was to determine the likelihood of aneuploidy in pregnancies at a tertiary referral center when an i-VSD is detected before 24 weeks.
Methods: A retrospective chart review of all detailed anatomy ultrasounds before 24 weeks performed at the University of Kansas Medical Center from 08/23/2006 to 06/07/2012 was conducted. A complete evaluation of the fetal heart was accomplished using gray scale and spectral/color Doppler examinations. The outcomes of each pregnancy were reviewed for any diagnoses of aneuploidy. Odds ratios were calculated.
Results: A total of 4078 pregnancies with complete obstetric and neonatal data were reviewed. The prevalence of an i-VSD was 2.7% (112/4078). The odds ratio of aneuploidy when an i-VSD was present was 36.0 (95% CI: 5.0-258.1). This odds ratio remained large when either an abnormal or unknown serum screen was present.
Conclusion: The presence of an i-VSD present before 24 weeks does increase the risk of fetal aneuploidy. Whether a normal serum screen or first trimester screen for aneuploidy negates the association of an i-VSD with aneuploidy still remains undetermined.
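The reported odds ratio can be reconstructed from a 2x2 table of aneuploidy by i-VSD status. In the sketch below, the cell counts are back-calculated guesses chosen only to roughly reproduce OR of about 36 from the published totals (112 i-VSD pregnancies out of 4078); they are not the study's actual counts.

import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

#                  [aneuploid, euploid]
counts = np.array([[4, 108],      # i-VSD present (112 total)
                   [4, 3962]])    # i-VSD absent (3966 total)
table = Table2x2(counts)
low, high = table.oddsratio_confint()
print(f"OR = {table.oddsratio:.1f} (95% CI {low:.1f}-{high:.1f})")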
abstract_id: PUBMED:36427970
Prenatal diagnosis and molecular cytogenetic characterization of a de novo deletion of 4q34.1→qter associated with low PAPP-A and low PlGF in the first-trimester maternal serum screening, congenital heart defect on fetal ultrasound and a false negative non-invasive prenatal testing (NIPT) result. Objective: We present prenatal diagnosis and molecular cytogenetic characterization of a de novo deletion of 4q34.1→qter associated with low pregnancy associated plasma protein-A (PAPP-A) and low placental growth factor (PlGF) in the first-trimester maternal serum screening, congenital heart defect (CHD) on fetal ultrasound and a false negative non-invasive prenatal testing (NIPT) result.
Case Report: A 40-year-old, primigravid woman underwent amniocentesis at 20 weeks of gestation because of advanced maternal age. This pregnancy was conceived by in vitro fertilization (IVF) and embryo transfer (ET). First-trimester maternal serum screening at 12 weeks of gestation revealed low PAPP-A [0.349 multiples of the median (MoM)] and low PlGF (0.299 MoM) and showed a risk for fetal trisomy 21 and trisomy 13. However, NIPT detected no genomic imbalance and returned a normal result. Nevertheless, level II ultrasound revealed a ventricular septal defect, a single umbilical artery, and a small brain midline cyst. Amniocentesis revealed a karyotype of 46,XX,del(4)(q34.1) and a 17.8-Mb deletion of 4q34.1q35.2 on array comparative genomic hybridization (aCGH) analysis. The parental karyotypes were normal. The pregnancy was terminated at 23 weeks of gestation, and a malformed fetus was delivered with craniofacial dysmorphism. Postnatal cytogenetic analysis of the placenta confirmed the prenatal diagnosis. There was a 17.8-Mb deletion of 4q34.1q35.2 encompassing the genes HAND2, SORBS2 and DUX4. Polymorphic DNA marker analysis of the parental bloods and cord blood showed a paternal origin of the deletion.
Conclusion: An abnormal first-trimester maternal serum screening result along with abnormal fetal ultrasound findings should alert clinicians to the possibility of fetal aneuploidy, and amniocentesis is indicated even in the presence of a normal NIPT result.
abstract_id: PUBMED:8469453
Prenatal diagnosis of congenital heart disease and fetal karyotyping. Objective: To determine the incidence of aneuploidy among fetuses with congenital heart disease diagnosed in utero.
Methods: From June 1988 through December 1991, 502 fetuses at risk for congenital heart disease underwent fetal echocardiography. Fetal karyotyping was performed whenever a cardiac anomaly was diagnosed. Autopsy reports, postnatal echocardiograms, and angiograms were obtained to confirm the diagnosis.
Results: Congenital heart disease was found in 31 of 469 fetuses with complete follow-up. Fifteen of these 31 fetuses (48%) were found to have an abnormal karyotype: five of 17 (29.4%) with isolated cardiac anomalies and ten of 14 (71.4%) with cardiac and extracardiac anomalies. Detected chromosomal abnormalities included six trisomy 21, four trisomy 18, four trisomy 13, and one triploidy 69,XXX. Atrioventricular septal defects and ventricular septal defects were the cardiac malformations most often associated with abnormal karyotypes (77 and 71%, respectively).
Conclusions: The risk of aneuploidy associated with fetal cardiac anomalies is much greater than that associated with elevated maternal age; therefore, fetal karyotyping should be offered whenever a cardiac defect is diagnosed. Advanced gestational age should not represent a deterrent, because the discovery of a lethal trisomy in a fetus with a cardiac malformation can dramatically affect the prognosis and the obstetric and neonatal management. We believe that a screening view such as the four-chamber view should now be included routinely in obstetric ultrasound examinations.
abstract_id: PUBMED:28850695
Isolated Single Umbilical Artery and Fetal Echocardiography: A 25-Year Experience at a Tertiary Care City Hospital. Objectives: To review our 25-year experience with a single umbilical artery and fetal echocardiography to estimate the need for this test in cases of an isolated single umbilical artery.
Methods: We conducted a retrospective review of 436 patients with a diagnosis of a single umbilical artery at our institution between 1990 and 2015. Two hundred eighty-eight women had both an anatomic survey and a fetal echocardiogram. Pregnancies with concurrent extracardiac anomalies or aneuploidy were excluded. The study population was divided into 3 groups based on cardiac views on the anatomic survey: normal, incomplete, and suspicious. Echocardiographic results were compared among the 3 groups. The primary outcome measure was the incidence of cardiac anomalies in the normal group at fetal echocardiography. The data were analyzed by the χ2 test or Fisher exact test.
Results: The mean maternal age ± SD of the group was 29.2 ± 6.2 years; 44.1% were primiparas. The mean gestational age at diagnosis was 22.6 ± 5.2 weeks, and the mean gestational age at fetal echocardiography was 25.1 ± 3.6 weeks. In the normal group, 99.1% (230 of 232) of women had a normal fetal echocardiogram; the 2 abnormal cases were ventricular septal defects. Normal echocardiograms were obtained in 81.8% (36 of 44) and 25.0% (3 of 12) of the "incomplete" and "suspicious" groups, respectively.
Conclusions: Fetuses with a single umbilical artery, in the absence of structural abnormalities, and with normal cardiac views at the time of the anatomic survey do not warrant an echocardiogram.
abstract_id: PUBMED:18377486
Atrioventricular septal defect recently diagnosed by fetal echocardiography: echocardiographic features, associated anomalies, and outcomes. Objectives: We report our recent experience with atrioventricular septal defect (AVSD) diagnosed in utero.
Methods: We reviewed fetal echocardiograms diagnosed with AVSD between November 2002 and November 2004, comparing fetuses with and without aneuploidy. We compared results with previous studies.
Results: Twenty (1.8%) fetuses had AVSD. Mean maternal age was 33 years (range 19-43). Mean gestational age was 26 weeks (range 18-38). Indications for fetal echocardiography were: abnormal obstetrical ultrasound (75%), chromosomal anomaly (15%), undetermined (10%). AVSD was an isolated cardiac defect in 5 (25%), associated with double-outlet right ventricle (9) or tetralogy of Fallot (3) in 12 (60%). Four had aortic arch anomalies. Atrioventricular valve regurgitation was mild in 7 (35%) and moderate in 4 (20%). Heart block existed in 2 (10%). Five (25%) with trisomy had Rastelli type A AVSD as a single lesion (odds ratio 24, P < .01). Extracardiac anomalies existed in 6, with and without aneuploidy. Pregnancy was terminated in 4 (20%), neonatal death in 4 (20%), and reparative surgery in 6 (30%), not ascertained in 6.
Conclusion: Atrioventricular septal defect is usually an isolated cardiac lesion in fetuses with aneuploidy. In the absence of aneuploidy, fetal AVSD is often associated with conotruncal and aortic arch abnormalities, which are important in determining outcomes. Pregnancy termination and neonatal death continue to be prevalent.
abstract_id: PUBMED:18424646
Prenatal course of isolated muscular ventricular septal defects diagnosed only by color Doppler sonography: single-institution experience. Objective: Counseling patients with an isolated ventricular septal defect (i-VSD) is clinically important because with high-resolution ultrasound equipment, more small muscular VSDs are now being diagnosed. The prevalence of these lesions is not yet completely described, and the frequency with which muscular VSDs resolve in utero has also not been extensively reported.
Methods: We investigated the perinatal course of isolated muscular VSDs diagnosed only on color Doppler examinations and followed between January 1, 2005, and December 31, 2006. A complete evaluation of the fetal heart was performed by gray scale, spectral Doppler, and color Doppler examinations.
Results: We performed a total of 2583 fetal echocardiographic examinations on 2410 fetuses during 2318 pregnancies. The study group included 78 twin gestations (3.4%) and 7 triplet gestations (0.3%). There were 16 fetuses with an i-VSD (6.6/1000 fetuses) within the study group. The mean gestational age +/- SD at diagnosis was 23.5 +/- 4.3 weeks. Two of the i-VSDs (12.5%) spontaneously resolved prenatally. One fetus with an i-VSD had trisomy 21 and also had increased nuchal translucency in the first trimester. One i-VSD was diagnosed among 22 fetuses with trisomy 21 examined during the study period.
Conclusions: An i-VSD is a common congenital heart defect. Prenatal resolution of i-VSDs is less frequent than reported in the literature. A larger cohort is needed to provide a better risk estimate for aneuploidy in the presence of an i-VSD.
abstract_id: PUBMED:36880000
Radial Ray Anomaly with Associated Ventricular Septal Defect - Case Report with Review of Literature. Ultrasound examination is used for the assessment of abnormal findings on prenatal screening. Radial ray defect can be screened for using ultrasonography. Abnormal findings can be detected quickly with an understanding of the etiology, pathophysiology, and embryology. It is a rare congenital defect that may be isolated or associated with other anomalies, including Fanconi's syndrome and Holt-Oram syndrome. We report the case of a 28-year-old woman (G2P1L1) who presented for routine antenatal ultrasound at 25 weeks 0 days according to the last menstrual period. The patient had not had a level-II antenatal anomaly scan. An ultrasound was performed, and the gestational age according to the scan was 24 weeks and 3 days. In this paper, we present a brief review of embryology and critical practical points, and report a rare case of radial ray syndrome with associated ventricular septal defect.
abstract_id: PUBMED:10595385
Prenatal diagnosis of heart defects and associated chromosomal aberrations. Aim: According to epidemiological studies on newborns, the association of congenital heart defects with chromosomal anomalies varies between 4 and 12%. Prenatally this rate is probably higher, due to antenatal death occurring in fetuses with chromosomal aberrations. The aim of the study was therefore to determine the rate and the distribution of chromosomal aberrations in prenatally detected heart defects.
Patients And Method: Within a period of 7 years fetal echocardiography was performed on 2716 fetuses at high risk for CHD. The analysis of the fetal heart was achieved by the visualization of different planes. Once a heart defect was detected, karyotyping was performed after amniocentesis, cordocentesis or chorion villous sampling, or in a few cases postnatally from cord blood. Prenatal ultrasound findings were confirmed postnatally by ultrasound examination or, in case of abortion, stillbirth or neonatal death, by autopsy.
Results: A total of 203 fetal heart malformations were detected and 46 of them (22%) had associated chromosomal anomalies. 60% of all cases and 80% of the study group had extracardiac anomalies. Only eight out of the 46 pregnant women (17.5%) were older than 35 years. Eight out of the 15 fetuses with trisomy 18 had a ventricular septal defect, 9/13 fetuses with trisomy 21 had an atrioventricular septal defect, and all 5 fetuses with monosomy X had a left heart outflow obstruction. No typical cardiac defects were found in the remaining 13 fetuses (5 trisomy 13, 2 triploidies, 6 miscellaneous). Of the 13 live births (23 terminations of pregnancy and 10 intrauterine deaths), 6 children survived (46%, for an overall survival rate of 13%). The following rates of association with aneuploidies were found: atrioventricular septal defect 55%, ventricular septal defect and aortic coarctation both 43%, tetralogy of Fallot and double outlet right ventricle both 36%. In comparison, fetuses with isomerism, transposition of the great arteries and pulmonary atresia or stenosis had normal chromosomes.
Conclusion: We conclude that the rate of association of heart defects and chromosomal abnormalities is higher prenatally than in the neonatal period and is approximately 22%. After detecting a fetal cardiac malformation, karyotyping is mandatory for the further management of pregnancy. The likelihood of detection of an aneuploidy increases when some typical heart defects are detected or when an association with extracardiac anomalies is found.
abstract_id: PUBMED:12666217
The role of fetal nuchal translucency and ductus venosus Doppler at 11-14 weeks of gestation in the detection of major congenital heart defects. Objective: To determine whether, in a selected high-risk population, Doppler velocimetry of the ductus venosus can improve the predictive capacity of increased nuchal translucency in the detection of major congenital heart defects in chromosomally normal fetuses at 11-14 weeks of gestation.
Methods: Ductus venosus Doppler ultrasound blood velocity waveforms were obtained prospectively at 11-14 weeks of gestation in 1040 consecutive singleton pregnancies. Waveforms were classified either as normal in the presence of a positive A-wave, or as abnormal if the A-wave was absent or negative. All cases were screened for chromosomal defects by a combination of maternal age and fetal nuchal translucency thickness. In 484 cases karyotyping was performed. Those fetuses found to be chromosomally normal by prenatal cytogenetic analysis, and which had abnormally increased nuchal translucency and/or abnormal ductus venosus Doppler velocimetry, underwent fetal echocardiography at 14-16 weeks of gestation. Ultrasound examination was repeated at 22-24 weeks of gestation in all women. The sensitivity, specificity and positive and negative predictive values for the detection of major cardiac defects of increased nuchal translucency thickness alone, ductus venosus Doppler alone and increased nuchal translucency thickness in association with abnormal ductus venosus Doppler were determined.
Results: In 29 of 998 fetuses presumed to be chromosomally normal, reversed or absent flow during atrial contraction was associated with increased (> 95(th) centile for crown-rump length) nuchal translucency. Major cardiac defects were observed in 9 of these 29 fetuses. No other major cardiac abnormalities were found in chromosomally normal fetuses in spite of the presence of either increased nuchal translucency alone or abnormal ductus venosus velocimetry. A total of 25 cardiac malformations were observed in the population. Fifteen were associated with aneuploidy and 10 fetuses had a normal karyotype. Nine of the 10 had major cardiac anomalies and one had a ventricular septal defect. The nine cases with normal karyotype and major cardiac anomalies had both increased nuchal translucency and abnormal ductus venosus flow velocity waveforms.
Conclusion: In chromosomally normal fetuses with increased nuchal translucency, assessment of ductus venosus blood flow velocimetry could improve the predictive capacity for an underlying major cardiac defect.
abstract_id: PUBMED:16456841
Prenatal diagnosis of persistent left superior vena cava and its associated congenital anomalies. Objective: To evaluate the associated conditions and the outcome of persistent left superior vena cava (PLSVC) detected in fetal life.
Methods: This was a retrospective review of all cases of PLSVC detected prenatally between 1998 and 2004 in two tertiary referral centers in Germany. Patient charts, ultrasound video recordings and still frames of all cases were reviewed for associated conditions and outcome.
Results: Eighty-two cases of PLSVC were detected in the study period. Thirty-seven cases (45%) were associated with heterotaxy syndromes, 19 (23%) with isolated cardiac malformations, seven (9%) with aneuploidy, six (7%) with complex malformation syndromes and six (7%) with isolated extracardiac malformations. Seven cases (9%) had no associated condition. Eighty-three percent of the fetuses in this series had associated cardiac malformations; the most frequent cardiac malformations in those with heterotaxy syndromes were complete atrioventricular septal defect (75%) and right outflow tract obstruction (58%). After exclusion of cases with heterotaxy, most congenital heart defects were ventricular septal defects (41%) and coarctation (34%). The outcome of PLSVC was determined solely by the associated conditions. After exclusion of terminated cases, heterotaxy syndromes as well as complete atrioventricular septal defects were associated significantly with perinatal and infant death. In contrast, all cases with isolated PLSVC or associated correctable extracardiac malformations survived and were doing well at the time of writing.
Conclusions: PLSVC detected in fetal life has to be followed by a meticulous inspection of the fetal anatomy as it is frequently associated with heterotaxy syndromes, other cardiac/non-cardiac malformations and aneuploidy that determine the outcome. Isolated PLSVC is a benign vascular anomaly and may not affect the outcome.
Answer: Yes, an isolated ventricular septal defect (i-VSD) detected before 24 weeks on ultrasound is associated with an increased risk of fetal aneuploidy. A study conducted at the University of Kansas Medical Center reviewed detailed anatomy ultrasounds performed before 24 weeks and found that the presence of an i-VSD increased the odds of aneuploidy (OR: 36.0, 95% CI: 5.0, 258.1) (PUBMED:26135785). This suggests that when an i-VSD is present, there is a significant association with chromosomal abnormalities. However, the study also noted that it remains undetermined whether a normal serum screen or first trimester screen for aneuploidy negates the association of an i-VSD with aneuploidy.
Instruction: Are women with thicker cortices in the femoral shaft at higher risk of subtrochanteric/diaphyseal fractures?
Abstracts:
abstract_id: PUBMED:22547423
Are women with thicker cortices in the femoral shaft at higher risk of subtrochanteric/diaphyseal fractures? The study of osteoporotic fractures. Context: Femoral shaft cortical thickening has been mentioned in reports of atypical subtrochanteric and diaphyseal (S/D) femur fractures, but it is unclear whether thickening precedes fracture or results from a preceding stress fracture and what role bisphosphonates might play in cortical thickening.
Objective: Our objective was to examine the relationship of cortical thickness to S/D fracture risk as well as establish normal reference values for femoral cortical thickness in a large population-based cohort of older women.
Design: Using pelvic radiographs obtained in 1986-1988, we measured femoral shaft cortical thickness 3 cm below the lesser trochanter in women in the Study of Osteoporotic Fractures. We measured this in a random sample and in those with S/D fractures and femoral neck and intertrochanteric fractures. Low-energy S/D fractures were identified from review of radiographic reports obtained between 1986 and 2010. Radiographs to evaluate atypia were not available. Analysis used case-cohort, proportional hazards models.
Outcomes: Cortical thickness as a risk factor for low-energy S/D femur fractures as well as femoral neck and intertrochanteric fractures in the Study of Osteoporotic Fractures, adjusting for age and bone mineral density in proportional hazards models.
Results: After age adjustment, women with thinner medial cortices were at a higher risk of S/D femur fracture, with a relative hazard of 3.94 (95% confidence interval = 1.23-12.6) in the lowest vs. highest quartile. Similar hazard ratios were seen for femoral neck and intertrochanteric fractures. Medial or total cortical thickness was more strongly related to fracture risk than lateral cortical thickness.
Conclusions: In primarily bisphosphonate-naive women, we found no evidence that thick femoral cortices placed women at higher risk for low-energy S/D femur fractures; in fact, the opposite was true. Women with thin cortices were also at a higher risk for femoral neck and intertrochanteric fractures. Whether cortical thickness among bisphosphonate users plays a role in atypical S/D fractures remains to be determined.
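The age-adjusted, quartile-based hazard estimates reported above can be illustrated with a minimal sketch. The code below fits an age-adjusted Cox model on synthetic data using the lifelines package; the counts, effect sizes, and variable names are assumptions for illustration only, and the sketch omits the case-cohort weighting that the SOF analysis itself used.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(71, 5, n)                      # years, hypothetical
thickness = rng.normal(5.5, 1.0, n)             # cortical thickness, mm, hypothetical
quartile = pd.qcut(thickness, 4, labels=False)  # 0 = thinnest quartile

# Simulate a higher fracture hazard for thinner cortices and older age.
hazard = np.exp(0.04 * (age - 71) - 0.4 * quartile)
time = rng.exponential(1.0 / hazard) * 20       # follow-up time, years
event = (time < 24).astype(int)                 # fracture observed within 24 years

df = pd.DataFrame({
    "T": np.minimum(time, 24),
    "E": event,
    "age": age,
    # Indicator for the thinnest quartile (vs. all others; the paper
    # compared lowest vs. highest quartile, a finer contrast).
    "thinnest_quartile": (quartile == 0).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")    # age-adjusted Cox model
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```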
abstract_id: PUBMED:28503351
Subtrochanteric and Distal Femur Fractures in a Patient with Femoral Shaft Fracture Malunion and Knee Disarticulation: A Rare and Challenging Case Report. This study aims to describe a rare and challenging case of a patient who presented with ipsilateral subtrochanteric and distal femur fractures due to low-energy trauma. The peculiarity of this case is the presence of femoral shaft fracture malunion and knee disarticulation in the same limb resulting from an accident suffered 30 years ago. The patient underwent femoral diaphyseal osteotomy and fixation of the subtrochanteric and distal femur fractures with a long cephalomedullary nail and distal femur locking plate, respectively. Despite the magnitude of the surgical procedure, all fractures healed, preserving the femoral length with the absence of infection and clinical complications. Function improved over the preinjury level, attributed to the femoral diaphyseal osteotomy, which alleviated the anterior thigh discomfort.
abstract_id: PUBMED:21343577
Bisphosphonate use and the risk of subtrochanteric or femoral shaft fractures in older women. Context: Osteoporosis is associated with significant morbidity and mortality. Oral bisphosphonates have become a mainstay of treatment, but concerns have emerged that long-term use of these drugs may suppress bone remodeling, leading to unusual fractures.
Objective: To determine whether prolonged bisphosphonate therapy is associated with an increased risk of subtrochanteric or femoral shaft fracture.
Design, Setting, And Patients: A population-based, nested case-control study to explore the association between bisphosphonate use and fractures in a cohort of women aged 68 years or older from Ontario, Canada, who initiated therapy with an oral bisphosphonate between April 1, 2002, and March 31, 2008. Cases were those hospitalized with a subtrochanteric or femoral shaft fracture and were matched to up to 5 controls with no such fracture. Study participants were followed up until March 31, 2009.
Main Outcome Measures: The primary analysis examined the association between hospitalization for a subtrochanteric or femoral shaft fracture and duration of bisphosphonate exposure. To test the specificity of the findings, the association between bisphosphonate use and fractures of the femoral neck or intertrochanteric region, which are characteristic of osteoporotic fractures, was also examined.
Results: We identified 716 women who sustained a subtrochanteric or femoral shaft fracture following initiation of bisphosphonate therapy and 9723 women who sustained a typical osteoporotic fracture of the intertrochanteric region or femoral neck. Compared with transient bisphosphonate use, treatment for 5 years or longer was associated with an increased risk of subtrochanteric or femoral shaft fracture (adjusted odds ratio, 2.74; 95% confidence interval, 1.25-6.02). A reduced risk of typical osteoporotic fractures occurred among women with more than 5 years of bisphosphonate therapy (adjusted odds ratio, 0.76; 95% confidence interval, 0.63-0.93). Among 52,595 women with at least 5 years of bisphosphonate therapy, a subtrochanteric or femoral shaft fracture occurred in 71 (0.13%) during the subsequent year and 117 (0.22%) within 2 years.
Conclusion: Among older women, treatment with a bisphosphonate for more than 5 years was associated with an increased risk of subtrochanteric or femoral shaft fractures; however, the absolute risk of these fractures is low.
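The odds ratios with 95% confidence intervals reported above (e.g., adjusted OR 2.74; 95% CI 1.25-6.02) follow the standard log-scale Wald construction. A minimal sketch, using made-up 2x2 counts and ignoring the matching and covariate adjustment of the actual conditional analysis:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: long-term users among cases vs. matched controls.
print(odds_ratio_ci(a=30, b=686, c=120, d=9603))
```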
abstract_id: PUBMED:34635953
Risk of subtrochanteric and femoral shaft fractures due to bisphosphonate therapy in asthma: a population-based nested case-control study. Concerns have been raised over the association between bisphosphonates and atypical fractures in subtrochanteric and femoral shaft regions, but the potential risk of these fractures due to bisphosphonate use in asthma has not been examined.
Introduction: Bisphosphonates are used as first-line treatment for osteoporosis; however, concerns have been raised over their association with atypical subtrochanteric (ST) and femoral shaft (FS) fractures. The potential risk of atypical ST/FS fractures from bisphosphonate use in asthma has not been examined.
Methods: A nested case-control study was conducted using linked data from the Clinical Practice Research Datalink (CPRD) and Hospital Episode Statistics (HES) databases. Using an asthma cohort, we identified patients with atypical ST/FS fractures and sex, age, and practice-matched controls. Conditional logistic regression was used to determine the association between bisphosphonate exposure and atypical ST/FS fractures.
Results: From a cohort of 69,074 people with asthma, 67 patients with atypical ST/FS fractures and 260 matched control subjects were identified. Of the case patients, 40.3% had received bisphosphonates as compared with 14.2% of the controls corresponding to an adjusted odds ratio (aOR) of 4.42 (95%CI, 2.98 to 8.53). The duration of use influenced the risk with long-term users to be at a greater risk (> 5 years vs no exposure; aOR = 7.67; 95%CI, 1.75 to 33.91). Drug withdrawal was associated with diminished odds of atypical ST/FS fractures.
Conclusion: Regular review of bisphosphonates should occur in patients with asthma. The risks and benefits of bisphosphonate therapy should be carefully considered in consultation with the patient. To improve AFF prevention, early signs which may warrant imaging, such as prodromal thigh pain, should be discussed.
abstract_id: PUBMED:31824873
Characteristics and Surgical Outcomes of Intertrochanteric or Subtrochanteric Fractures Associated with Ipsilateral Femoral Shaft Fractures Treated with Closed Intramedullary Nailing: A Review of 31 Consecutive Cases over Four Years at a Single Institution. Purpose: To evaluate the clinical characteristics of intertrochanteric or subtrochanteric fractures associated with ipsilateral femoral shaft fractures and assess the surgical outcomes of a novel, closed intramedullary nailing surgical approach designed to minimize fixation failure.
Materials And Methods: Between May 2013 and April 2017, 31 patients with intertrochanteric or subtrochanteric fractures associated with ipsilateral femoral shaft fractures treated with closed intramedullary nailing or long proximal femoral nail antirotation (PFNA) were enrolled in this study. Preoperative data included age, sex, injury severity score, body mass index, location of shaft fracture, injury mechanism, accompanying traumatic injury, walking ability before injury, and surgical timing. Perioperative outcomes, including follow-up period, types of intramedullary nails, number of blocking screws used, operation time, and blood loss were assessed. Radiologic outcomes, including union rate, time from surgery to union, and femoral shortening, and clinical outcomes, including hip flexion, walking ability, and Harris hip score were also evaluated.
Results: A total of 29 unions (93.5%) were achieved. The time to union was 16.8 weeks (range, 11-25 weeks) for hip fractures (15.7 weeks for intertrochanteric fractures and 21.7 weeks for subtrochanteric fractures) and 22.8 weeks for femoral shaft fractures. There were no significant differences in surgical outcomes between the two groups except for type of intramedullary nail.
Conclusion: Closed intramedullary nailing in the treatment of intertrochanteric or subtrochanteric fractures associated with ipsilateral femoral shaft fractures may be a good surgical option. However, fixation of femoral shaft fractures might not be sufficient depending on the implant design.
abstract_id: PUBMED:29774402
Subtrochanteric and diaphyseal femoral fractures in hypophosphatasia-not atypical at all. Risk for subtrochanteric and diaphyseal femoral fractures is considered increased in patients with hypophosphatasia (HPP). Evaluating a large cohort of HPP patients, we could for the first time quantify the prevalence and identify both morphometric features and predisposing factors for this complication of severe HPP.
Introduction: Subtrochanteric and diaphyseal femoral fractures have been associated with both long-term antiresorptive treatment and metabolic bone disorders, specifically hypophosphatasia (HPP). Building on a cross-sectional evaluation of real-world data, this study reports risk factors, prevalence, treatment outcome and morphometric particularities for such fractures in HPP as compared to atypical femoral fractures (AFF) under long-term antiresorptive treatment.
Methods: For 15 out of 150 HPP patients identified with having experienced at least one such fracture, medical records were reviewed in detail, extracting medical history, genotype, lab assessments, bone mineral density (DXA), radiographic data on femoral geometry and clinical aspects of fracture etiology and healing.
Results: Bilateral fractures were documented in 10 of these 15 patients, yielding a total of 25 fractures for evaluation. Disease-inherent risk factors included autosomal-recessive, childhood-onset HPP, apparently low alkaline phosphatase (ALP) ≤ 20 U/l and substantially elevated pyridoxal 5'-phosphate (PLP) > 3 times the upper limit of normal, as well as high lumbar spine BMD. Fracture morphology met definition criteria for AFF in 88% of cases. Femoral geometry revealed additional risk factors previously described for AFF, including decreased femoral neck-shaft angle and increased femoral offset. Extrinsic risk factors included hypovitaminosis D (80%), pre-treatment with bisphosphonates (46.7%) and proton-pump inhibitors (40%).
Conclusions: Increased risk for subtrochanteric and diaphyseal femoral fractures in HPP appears to result from both compromised bone metabolism as well as disease-associated bone deformities. In severe HPP, generous screening for such fractures seems advisable. Bisphosphonates and Hypovitaminosis D should be avoided. Healing is compromised and requires mindful consideration of both pharmacological and surgical options.
abstract_id: PUBMED:33821304
Differences in subtrochanteric and diaphyseal atypical femoral fractures in a super-aging prefectural area: YamaCAFe Study. Introduction: Atypical femoral fractures (AFFs) have been correlated with long-term use of bisphosphonates (BPs), glucocorticoids (GCs), and femoral geometry. We investigated the incidence and characteristics of subtrochanteric (ST) and diaphyseal (DP) AFFs in all institutes in a super-aging prefectural area.
Materials And Methods: We performed a blinded analysis of radiographic data in 87 patients with 98 AFFs in all institutes in Yamagata prefectural area from 2009 to 2014. Among the 98 AFFs, 57 AFFs comprising 11 ST fractures in 9 patients and 46 DP fractures in 41 patients with adequate medical records and X-rays were surveyed for time to bone healing and geometry.
Results: Of the 87 patients, 67 received BPs/denosumab (77%) and 10 received GCs (11%). Surgery was performed in 94 AFFs. Among 4 AFFs with conservative therapy, 3 required additional surgery. In univariate regression analyses for ST group versus DP group, male-to-female ratio was 2/7 versus 1/40, mean age at fracture was 58.2 (37-75) versus 78 (60-89) years, rheumatic diseases affected 55.5% (5/9) versus 4.9% (2/41), femoral lateral bowing angle was 1.7 (0-6) versus 11.8 (0.8-24)°, GC usage was 67% (6/9) versus 4.9% (2/41), and bone healing time was 12.1 (6-20) versus 8.1 (3-38) months (p < 0.05). In multivariate analyses, higher male-to-female ratio, younger age, greater proportion affected by rheumatic diseases, and higher GC usage remained significant (p < 0.05).
Conclusions: The incidence of AFFs in our prefectural area was 1.43 cases/100,000 persons/year. This study suggests that the onset of ST AFFs have greater correlation with the worse bone quality, vice versa, the onset of DP AFFs correlated with the bone geometry. The developmental mechanisms of AFFs may differ significantly between ST and DP fractures.
abstract_id: PUBMED:32470545
Biological activity is not suppressed in mid-shaft stress fracture of the bowed femoral shaft unlike in "typical" atypical subtrochanteric femoral fracture: A proposed theory of atypical femoral fracture subtypes. Background: We have investigated mid-shaft stress fractures of the bowed femoral shaft (SBFs), well before the first report of an association between suppression of bone turnover and atypical femoral fractures (AFFs). Although all cases of SBF meet the criteria for AFF, SBFs can also occur in patients with no exposure to bone turnover suppression-related drugs (e.g., bisphosphonates). Using bone morphometry and biomechanical analyses, we devised a theory of AFF subtypes, dividing AFFs into fragility SBFs in the mid-shaft and "typical" subtrochanteric AFFs caused by suppressed bone turnover. The aim of this multicenter prospective study was to provide evidence for this novel concept in terms of biological activity.
Methods: The study was conducted at 12 hospitals in Japan from 2015 through 2019. Thirty-seven elderly women with AFF were included and classified according to location of the fracture into a mid-shaft AFF group (n = 18) and a subtrochanteric AFF group (n = 19). Patient demographics and clinical characteristics were investigated to compare the two groups. The main focus was on histological analysis of the fracture site, and bone metabolism markers were evaluated to specifically estimate biological activity.
Results: All patients in the subtrochanteric AFF group had a history of long-term (>3 years) exposure to specific drugs that have been reported to cause AFF, but 5 of the 18 patients in the mid-shaft AFF group had no history of exposure to such drugs. Femoral bowing was significantly greater in the mid-shaft AFF group (p < 0.001). In the histological analysis, active bone remodeling or endochondral ossification was observed in the mid-shaft AFF group, whereas no fracture repair-related biological activity was observed in the majority of patients in the subtrochanteric AFF group. Levels of tartrate-resistant acid phosphatase-5b and undercarboxylated osteocalcin were significantly lower in the subtrochanteric AFF group (p < 0.05).
Conclusion: The possibility of our devised AFF subtype theory was demonstrated. Biological activity tends not to be suppressed in mid-shaft SBFs unlike in "typical" subtrochanteric AFFs involving bone turnover suppression. Although validation of the proposed theory in other populations is needed, we suggest that the pathology and treatment of AFFs be reconsidered based on its subtype.
abstract_id: PUBMED:21165600
Risk of femoral shaft and subtrochanteric fractures among users of bisphosphonates and raloxifene. Unlabelled: Prior studies have suggested an association between bisphosphonate use and subtrochanteric fractures. This cohort study showed an increased risk of subtrochanteric and femoral shaft fractures both before and after the start of drugs against osteoporosis including bisphosphonates. This may suggest an effect of the underlying disease rather than the drugs used.
Introduction: The objective of this study is to determine the association between drugs against osteoporosis and the risk of femoral shaft and subtrochanteric fractures. No separation was made between atypical and typical fractures.
Methods: Nationwide cohort study from Denmark with all users of bisphosphonates and other drugs against osteoporosis between 1996 and 2006 (n = 103,562) as exposed group and three age- and gender-matched controls from the general population (n = 310,683). Adjustments were made for prior fracture, use of systemic hormone therapy, and use of systemic corticosteroids.
Results: After initiation of therapy, an increased risk of subtrochanteric fractures was seen for alendronate (hazard ratio (HR) = 2.41, 95% confidence interval (CI) 1.78-3.27), etidronate (HR = 1.96, 95% CI 1.62-2.36), and clodronate (HR = 20.0, 95% CI 1.94-205), but not for raloxifene (HR = 1.06, 95% CI 0.34-3.32). However, an increased risk of subtrochanteric fractures was also present before the start of alendronate (OR = 2.36, 95% CI 2.05-2.72), etidronate (OR = 3.05, 95% CI 2.59-3.58), clodronate (OR = 10.8, 95% CI 1.14-103), raloxifene (OR = 1.90, 95% CI 1.07-3.40), and strontium ranelate (OR = 2.97, 95% CI 1.07-8.27). Similar trends were seen for femoral shaft fractures and overall fracture risk. After the start of etidronate, no dose-response relationship was present (p for trend, 0.54). For alendronate, a decreasing risk was present with increasing average daily dose (p for trend, <0.01).
Conclusions: Although an increased risk of femoral shaft and subtrochanteric fractures are seen with the use of several types of bisphosphonates, the increased risk before the start of the drugs may point at an effect of the underlying disease being treated. The increased risk may, thus, perhaps be due to confounding by indication.
abstract_id: PUBMED:28377976
Iliac crest histomorphometry and skeletal heterogeneity in men. Purpose: The cortical characteristics of the iliac crest in male have rarely been investigated with quantitative histomorphometry. Also it is still unknown how cortical microarchitecture may vary between the iliac crest and fractures related sites at the proximal femur. We studied the microarchitecture of both external and internal cortices within the iliac crest, and compared the results with femoral neck and subtrochanteric femoral shaft sites.
Methods: Undecalcified histological sections of the iliac crest were obtained bicortically from cadavers (n = 20, aged 18-82 years, males). They were cut (7 μm) and stained using modified Masson-Goldner stain. Histomorphometric parameters of cortical bone were analysed with low (× 50) and high (× 100) magnification, after identifying cortical bone boundaries using our previously validated method. Within cortical bone area, only complete osteons with typical concentric lamellae and cement line were selected and measured.
Results: At the iliac crest, the mean cortical width of the external cortex was higher than that of the internal cortex (p < 0.001). Also, osteon structural parameters, e.g. mean osteonal perimeter, were higher in the external cortex (p < 0.05). In both external and internal cortices, pore number per cortical bone area was higher in young subjects (≤ 50 years) (p < 0.05) while mean pore perimeter was higher in the old subjects (> 50 years) (p < 0.05). Several cortical parameters (e.g. osteon area per cortical bone area, pore number per cortical area) were the lowest in the femoral neck (p < 0.05). The maximal osteonal diameter and mean wall width were the highest in the external cortex of the iliac crest (p < 0.05), and the mean cortical width and osteon number per cortical area were the highest in the subtrochanteric femoral shaft (p < 0.05). Some osteonal structural parameters (e.g. min osteonal diameter) were significantly positively correlated (0.29 ≤ R² ≤ 0.45, p < 0.05) between the external iliac crest and the femoral neck.
Conclusions: This study reveals heterogeneity in cortical microarchitecture between the external and internal iliac crest cortices, as well as between the iliac crest, the femoral neck and the subtrochanteric femoral shaft. Standard iliac crest biopsy does not reflect accurately cortical microarchitecture of other skeletal sites.
Answer: No, women with thicker cortices in the femoral shaft are not at higher risk of subtrochanteric/diaphyseal fractures. In fact, the opposite is true. A study found that after adjusting for age, women with thinner medial cortices were at a higher risk of subtrochanteric/diaphyseal femur fracture, with a relative hazard of 3.94 in the lowest vs. highest quartile of cortical thickness. The same study found that women with thin cortices were also at a higher risk for femoral neck and intertrochanteric fractures. It concluded that in primarily bisphosphonate-naive women, there was no evidence that thick femoral cortices placed women at higher risk for low-energy subtrochanteric/diaphyseal femur fractures (PUBMED:22547423).
Instruction: Focal opacification of the olfactory recess on sinus CT: just an incidental finding?
Abstracts:
abstract_id: PUBMED:18272552
Focal opacification of the olfactory recess on sinus CT: just an incidental finding? Background And Purpose: The CT appearance of the anterior skull base has been investigated but with limited attention directed to the olfactory recess. The prevalence of olfactory recess opacity (ORO), defined as opacity abutting the undersurface of the cribriform plate, was examined on sinus CT to clarify whether this finding should raise suspicion for an unsuspected pathologic process.
Materials And Methods: Outpatient sinus CTs were evaluated for ORO in 500 consecutive patients (mean age, 46.9 years; 52.6% women). On a per-side basis (n = 1000), the presence of surgical changes, inflammatory sinus disease, and concha bullosa was determined by 2 neuroradiologists. Logistic regression was used to examine the association of ORO with these variables.
Results: ORO was identified in 59 (11.8%) patients, bilateral in 27 (5.4%), and unilateral in 32 (6.4%). There were 343 of 1000 ethmoid sides that were diseased, and 66 (27.2%) showed ipsilateral ORO. In contrast, only 20 (3.0%) of 657 clear ethmoid sides showed ORO (P < .0001). ORO was significantly (P = .013) more common with previous surgery (18/75; 24.0%) than without (68/925; 7.4%). Ipsilateral concha bullosa was not associated with ORO. Of 32 patients with unilateral ORO, 5 (15.6%) had no ethmoid opacification or previous surgery, and 1 of these patients had an encephalocele causing the ORO. Finally, unilateral ORO was present in only 1 of 122 patients with completely clear sinuses (the encephalocele that was just mentioned).
Conclusion: ORO is distinctly uncommon without sinonasal inflammation or previous surgery. Isolated unilateral ORO raises suspicion for an underlying neoplasm or cephalocele and warrants further evaluation.
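The logistic regression described in the Materials and Methods above, relating per-side ORO to ethmoid disease and previous surgery, can be sketched as follows. The counts, coefficients, and the use of statsmodels are illustrative assumptions, not the study's data or software:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000                                    # per-side observations, hypothetical
ethmoid = rng.integers(0, 2, n)             # ipsilateral ethmoid disease (0/1)
surgery = rng.integers(0, 2, n)             # previous surgery (0/1)

# Made-up coefficients giving ORO a low baseline prevalence.
logit = -3.5 + 2.4 * ethmoid + 1.2 * surgery
oro = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(pd.DataFrame({"ethmoid": ethmoid, "surgery": surgery}))
fit = sm.Logit(oro, X).fit(disp=0)
print(np.exp(fit.params))                   # odds ratios
print(np.exp(fit.conf_int()))               # 95% CIs on the OR scale
```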
abstract_id: PUBMED:24577441
The role of the olfactory recess in olfactory airflow. The olfactory recess - a blind pocket at the back of the nasal airway - is thought to play an important role in mammalian olfaction by sequestering air outside of the main airstream, thus giving odorants time to re-circulate. Several studies have shown that species with large olfactory recesses tend to have a well-developed sense of smell. However, no study has investigated how the size of the olfactory recess relates to air circulation near the olfactory epithelium. Here we used a computer model of the nasal cavity from a bat (Carollia perspicillata) to test the hypothesis that a larger olfactory recess improves olfactory airflow. We predicted that during inhalation, models with an enlarged olfactory recess would have slower rates of flow through the olfactory region (i.e. the olfactory recess plus airspace around the olfactory epithelium), while during exhalation these models would have little to no flow through the olfactory recess. To test these predictions, we experimentally modified the size of the olfactory recess while holding the rest of the morphology constant. During inhalation, we found that an enlarged olfactory recess resulted in lower rates of flow in the olfactory region. Upon exhalation, air flowed through the olfactory recess at a lower rate in the model with an enlarged olfactory recess. Taken together, these results indicate that an enlarged olfactory recess improves olfactory airflow during both inhalation and exhalation. These findings add to our growing understanding of how the morphology of the nasal cavity may relate to function in this understudied region of the skull.
abstract_id: PUBMED:29777294
Impact of residual frontal recess cells on frontal sinusitis after endoscopic sinus surgery. Purpose: Endoscopic sinus surgery (ESS) is a well-established treatment for chronic rhinosinusitis (CRS). However, ESS for frontal sinusitis remains complicated and challenging. The aim of this study was to identify the relationship between residual frontal recess cells and primary ESS failure in the frontal sinus.
Methods: We prospectively collected information on 214 sides of 129 patients with CRS who underwent standard ESS from June 2010 to May 2011. To identify risk factors, we retrospectively analyzed clinical data and computed tomography (CT) images before and 3 months after surgery.
Results: Residual cells on the posterior side of the frontal recess were relatively common: suprabullar cells (SBCs) were found in 12.2% (16 sides), suprabullar frontal cells (SBFCs) in 20.3% (12 sides), and supraorbital ethmoid cells in 23.7% (14 sides). In contrast, the anterior frontal recess cells (agger nasi cells, supra agger cells, and supra agger frontal cells) remained at < 10.0%. Frontal septal cells persisted in 25.0% (5 sides). The presence of residual frontal recess cells was an independent risk factor for postoperative frontal sinus opacification, as were well-recognized risk factors such as nasal polyps, the peripheral eosinophil count, and the CT score. Among residual frontal recess cells, SBCs and SBFCs were independent risk factors for opacification.
Conclusions: Residual frontal recess cells, especially SBCs and SBFCs, were independent risk factors for postoperative opacification of the frontal sinus. Complete surgical excision of frontal recess cells may improve surgical outcomes.
abstract_id: PUBMED:25312368
Olfactory epithelium in the olfactory recess: a case study in new world leaf-nosed bats. The olfactory recess (OR) is a restricted space at the back of the nasal fossa in many mammals that is thought to improve olfactory function. Mammals that have an olfactory recess are usually described as keen-scented, while those that do not are typically thought of as less reliant on olfaction. However, the presence of an olfactory recess is not a binary trait. Many mammal families have members that vary substantially in the size and complexity of the olfactory recess. There is also variation in the amount of olfactory epithelium (OE) that is housed in the olfactory recess. Among New World leaf-nosed bats (family Phyllostomidae), species vary by over an order of magnitude in how much of their total OE lies within the OR. Does this variation relate to previously documented neuroanatomical proxies for olfactory reliance? Using data from 12 species of phyllostomid bats, we addressed the hypothesis that the amount of OE within the OR relates to a species' dependence on olfaction, as measured by two commonly used neuroanatomical metrics, the size of the olfactory bulb, and the number of glomeruli in the olfactory bulb, which are the first processing units within the olfactory signal cascade. We found that the percentage of OE within the OR does not relate to either measure of olfactory "ability." This suggests that olfactory reliance is not reflected in the size of the olfactory recess. We explore other roles that the olfactory recess may play.
abstract_id: PUBMED:28129912
Isolated sphenoid sinus opacification: A systematic review. Objective: Unilateral sphenoid sinus opacification (SSO) on imaging is a common incidental radiologic finding. Inflammatory sinus disease is rarely isolated to one sinus cavity; therefore, SSO raises the potential for a neoplastic etiology. The clinical significance of SSO was evaluated and compared to maxillary sinus opacification (MSO).
Methods: A systematic review of unilateral sinus opacification was performed via Medline (1966-January 12th, 2015) and Embase (1980-January 12th, 2015), limited to English literature and human subjects. Case series of treated patients with radiologic evidence of unilateral sinus opacification of either the maxillary or the sphenoid sinus and with pathology results were included. Individual cases were classified as neoplastic, malignant, or a condition requiring surgical intervention (i.e. fungal ball). Exclusion criteria were single case reports, lack of primary data, series of complications, or single pathology series. Case-by-case analysis was performed for both SSO and MSO.
Results: The search strategy revealed 3264 studies. A total of 31 studies including 1581 patients met the inclusion criteria. In these studies, SSO was described in n=1215 (76.9%) and MSO in n=366 (23.1%). For SSO, the final diagnosis was neoplasia in 18% (malignancy in 10.9%); 58.3% of cases required surgical intervention and 13% were inflammatory. For MSO, neoplasia represented 18.3% (malignancy 7.1%), surgical intervention was required in 47% of cases, and 27.6% were inflammatory.
Conclusion: Isolated MSO and SSO are markers of neoplasia in 18% and malignancy in 7-10% of patients presenting with these radiologic findings. Clinicians should be wary of conservative management given the high incidence of neoplasia and consider a lower threshold for early surgical intervention.
abstract_id: PUBMED:37889336
Endoscopic endonasal access to the lateral recess of the sphenoid sinus. Background: The standard endoscopic endonasal approach gives access to the median sphenoid sinus, but not to its lateral part. We propose an endoscopic technique for lesions in the lateral sphenoid sinus.
Method: Based on our experience with 28 patients, we have developed a less invasive approach to the lateral recess of the sphenoid sinus, limiting the opening of the maxillary sinus while avoiding resection of the inferior turbinate and ethmoidal cells. The technique is described.
Conclusion: The proposed endoscopic approach is reliable and safe to treat CSF leak or tumours located within the lateral recess of the sphenoid sinus.
abstract_id: PUBMED:24427693
A study of anatomy of frontal recess in patients suffering from 'chronic frontal sinus disease'. A study designed to describe the anatomical features of the frontal recess area in patients suffering from chronic frontal sinusitis. A prospective study done in adult patients admitted in our hospital between July 2009 to June 2011. Tertiary level, private ENT care centre. 50 adult patients of chronic frontal sinusitis who did not have history of previous sinus surgery. The frontal recess anatomy was studied by 2 mm slice CT scans pre-operatively. CT findings were confirmed intra operatively by meticulous dissection in frontal recess area endoscopically with aid of image guided system. A chart prepared for each patient of different anatomical variations present in frontal recess on each nasal side and analyzed. Agger nasi cell was found in 94 % of cases. The superior attachment of the uncinate was to the lamina papyraceae in 82 % of cases. Type 1 frontal recess cells were found in 44 %, type 2 in 8 %, type 3 in 48 % and type 4 in 2 % of the cases. Over all 74 % of cases had frontal recess cells. The management of frontal sinusitis is a challenge to endoscopic surgeon and as more and more rhinologists got expertise in endoscopic sinus surgery skills; the next challenge is management of frontal sinus. Hence, the need arises for more precise study of frontal recess anatomy. Detailed studies of anatomic features of the frontal recess by coronal and sagittal CT scans are very important and helpful for endoscopic frontal sinus surgery. Our study suggests that there is high prevalence of frontal recess cells in Indian population suffering from frontal sinusitis.
abstract_id: PUBMED:32921179
Management of Cerebrospinal Fluid Rhinorrhea in the Sphenoid Sinus Lateral Recess Through an Endoscopic Endonasal Transpterygoid Approach With Obliteration of the Lateral Recess. Background: Cerebrospinal fluid rhinorrhea in the sphenoid sinus lateral recess is a rare occurrence and poses unique challenges due to limited surgical access for surgical repair.
Objective: To report our experience of surgical repair of cerebrospinal fluid rhinorrhea in the sphenoid sinus lateral recess through an endoscopic endonasal transpterygoid approach with obliteration of the lateral recess. To evaluate the efficiency of this surgical procedure.
Methods: A retrospective study. Twelve cases with cerebrospinal fluid rhinorrhea in the sphenoid sinus lateral recess were reviewed. Assisted by image-guided navigation, cerebrospinal fluid rhinorrhea was repaired through an endoscopic endonasal transpterygoid approach, with obliteration of the lateral recess. Complications and recurrence were recorded. Medical photographs were used.
Results: This surgical approach provided a relatively spacious corridor for dissecting the sphenoid sinus lateral recess and for postoperative surveillance. The repair area had completely healed by 3 months after surgery. Cerebrospinal fluid rhinorrhea in the sphenoid sinus lateral recess was successfully repaired on the first attempt in all cases (100%). No major complications or recurrence were observed during a mean follow-up time of 40.3 months.
Conclusion: The endoscopic endonasal transpterygoid approach gives appropriate access for the treatment of spontaneous cerebrospinal fluid rhinorrhea in the sphenoid sinus lateral recess. Multilayer reconstruction of a skull base defect with obliteration of the lateral recess is a reliable and simple method.
abstract_id: PUBMED:37970777
Endonasal repair of spontaneous CSF fistulas of the lateral recess of the sphenoid sinus. CSF fistulas of the lateral recess of the sphenoid sinus are a rare surgical pathology. Cerebrospinal fluid leak from the lateral recess of the sphenoid sinus is observed with a frequency of 7.7% among all leaks of the skull base. The article presents 3 clinical cases of patients with spontaneous cerebrospinal fluid leak from the lateral recess of the sphenoid sinus and surgical treatment by transsphenoidal and transpterygoid endoscopic approaches with various postoperative results. The success of plastic repair of CSF fistulas from the lateral recess of the sphenoid sinus does not depend on the type of endonasal surgical approach, but on the quality of the repair and the preoperative level of CSF pressure.
abstract_id: PUBMED:36742792
Frontal Recess Cells in International Frontal Sinus Anatomy Classification (IFAC); Prevalence, Infection Incidence, and Relation to Frontal Sinus Infection in Chronic Sinusitis Patients. Frontal recess cells have many types with different sizes, arrangement, and extent. They play an important role in successful functional endoscopic sinus surgery (FESS), as most causes of failure are related to them. The aim was to outline the prevalence of the frontal recess cells and the pathological incidence of each cell in relation to frontal sinus pathology. A prospective study on 100 consecutive patients (200 sides) complaining of nasal and sinus symptoms which did not respond to medical management and were indicated for FESS. The anterior group was infected in 30.8%; agger nasi cells (ANC) were present in 97% (25.8% infected, 74.2% not infected), supra agger cells (SAC) in 48% (39.6% infected, 60.4% not infected), and supra agger frontal cells (SAFC) in 11% (36.4% infected, 63.6% not infected). The posterior group was infected in 24.8%; supra bulla cells (SBC) were present in 72% (30.6% infected, 69.4% not infected), supra bulla frontal cells (SBFC) in 23% (17.4% infected, 82.6% not infected), and supra orbital ethmoid cells (SOEC) in 42% of cases (19% infected, 81% not infected). The medial group [frontal septal cells (FSC)] was present in 21% (33.3% infected, 66.7% not infected). FSC, SAC, SAFC, and SBC showed high infection rates in association with an infected frontal sinus, while SOEC, ANC, and SBFC did not. The prevalence of frontal recess cells did not differ whether the frontal sinus was infected or not; however, their infection rates showed significant differences.
Answer: Focal opacification of the olfactory recess (ORO) on sinus CT is not merely an incidental finding. The study by PUBMED:18272552 found that ORO is uncommon without sinonasal inflammation or previous surgery. In their examination of outpatient sinus CTs from 500 consecutive patients, ORO was identified in 11.8% of patients, with a significant association between ORO and ipsilateral ethmoid disease or previous surgery. Isolated unilateral ORO, particularly in the absence of ethmoid opacification or previous surgery, raises suspicion for an underlying neoplasm or cephalocele and warrants further evaluation. In their study, only one patient with completely clear sinuses had unilateral ORO, which was caused by an encephalocele. This suggests that while ORO can be associated with inflammatory sinus disease or surgical changes, when it appears in isolation, especially unilaterally, it may indicate a more serious underlying condition that requires additional investigation. |
Instruction: Does a predialysis education program increase the number of pre-emptive renal transplantations?
Abstracts:
abstract_id: PUBMED:23622579
Does a predialysis education program increase the number of pre-emptive renal transplantations? Objectives: Renal transplantation (RT) is the most appropriate form of treatment for end-stage renal disease (ESRD). Pre-emptive RT decreases the rates of delayed graft function and acute rejection episodes, increasing patient and graft survival, while reducing costs and complications associated with dialysis. In this study, we investigated the relationship between a predialysis education program (PDEP) for patients and their relatives and pre-emptive RT.
Methods: We divided 88 live donor kidney transplant recipients into 2 groups: transplantation without education (non-PDEP group; n = 27), and enrollment in an education program before RT (PDEP group; n = 61).
Results: Five patients in the non-PDEP group underwent pre-emptive transplantation, versus 26 of the PDEP group. The rate of pre-emptive transplantations was significantly higher among the educated (42.62%) versus the noneducated group (18.51%; P < .001).
Conclusion: PDEP increased the number of pre-emptive kidney transplantations among ESRD patients.
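The group comparison above (26/61 = 42.62% vs. 5/27 = 18.51%) is a standard 2x2 categorical test. A minimal sketch using the reported counts and assuming scipy; the statistic and p-value obtained this way need not match the one the paper reports, since the abstract does not state which test was used:

```python
from scipy.stats import chi2_contingency, fisher_exact

table = [[26, 61 - 26],   # PDEP group: pre-emptive vs. not
         [5, 27 - 5]]     # non-PDEP group: pre-emptive vs. not

chi2, p, dof, expected = chi2_contingency(table)
odds, p_fisher = fisher_exact(table)
print(f"chi2={chi2:.2f}, p={p:.4f}; Fisher OR={odds:.2f}, p={p_fisher:.4f}")
```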
abstract_id: PUBMED:33954087
Pre-emptive live donor kidney transplantation-moving barriers to opportunities: An ethical, legal and psychological aspects of organ transplantation view. Live donor kidney transplantation (LDKT) is the optimal treatment modality for end stage renal disease (ESRD), enhancing patient and graft survival. Pre-emptive LDKT, prior to requirement for renal replacement therapy (RRT), provides further advantages, due to uraemia and dialysis avoidance. There are a number of potential barriers and opportunities to promoting pre-emptive LDKT. Significant infrastructure is needed to deliver robust programmes, which varies based on socio-economic standards. National frameworks can impact on national prioritisation of pre-emptive LDKT and supporting education programmes. Focus on other programme's components, including deceased kidney transplantation and RRT, can also hamper uptake. LDKT programmes are designed to provide maximal benefit to the recipient, which is specifically true for pre-emptive transplantation. Health care providers need to be educated to maximize early LDKT referral. Equitable access for varying population groups, without socio-economic bias, also requires prioritisation. Cultural barriers, including religious influence, also need consideration in developing successful outcomes. In addition, the benefit of pre-emptive LDKT needs to be emphasised, and opportunities provided to potential donors, to ensure timely and safe work-up processes. Recipient education and preparation for pre-emptive LDKT needs to ensure increased uptake. Awareness of the benefits of pre-emptive transplantation require prioritisation for this population group. We recommend an approach where patients approaching ESRD are referred early to pre-transplant clinics facilitating early discussion regarding pre-emptive LDKT and potential donors for LDKT are prioritized for work-up to ensure success. Education regarding pre-emptive LDKT should be the norm for patients approaching ESRD, appropriate for the patient's cultural needs and physical status. Pre-emptive transplantation maximize benefit to potential recipients, with the potential to occur within successful service delivery. To fully embrace preemptive transplantation as the norm, investment in infrastructure, increased awareness, and donor and recipient support is required.
abstract_id: PUBMED:33961303
Barriers to pre-emptive kidney transplantation in New Zealand children. Aim: Pre-emptive kidney transplantation (PKT) is generally considered the optimal treatment for kidney failure as it minimises dialysis-associated morbidity and mortality and is associated with improved allograft survival. This study aimed to determine rates of paediatric PKT in New Zealand, identify barriers to PKT and consider potential interventions to influence future rates of pre-emptive transplantation.
Methods: Children commencing kidney replacement therapy between 2005 and 2017 in New Zealand were included. Descriptive analysis considered those referred late (referral <3 months prior to kidney replacement therapy initiation) or early based on referral timing to paediatric nephrology. Additional analysis compared characteristics of children receiving dialysis versus pre-emptive transplant as their first mode of kidney replacement therapy.
Results: PKT occurred in 15 of 90 children (17%). One-third of all patients were referred late. No late referrals received a pre-emptive transplant. Pre-emptively transplanted children were referred younger (median age 0.49 years), lived in less deprived areas, were more likely to have congenital anomalies of the kidney and urinary tract, and none were of Māori or Pasifika ethnicity.
Conclusions: Late referral, higher deprivation levels and Māori and Pasifika ethnicity confer a greater risk of not receiving pre-emptive transplantation. Improved education amongst health professionals about recognition of paediatric chronic kidney disease and the importance of timely referral to paediatric nephrology is recommended to reduce rates of late referral. A modified approach including enhanced culturally appropriate support for those diagnosed with chronic kidney disease during transplant evaluation should be pursued to improve equity.
abstract_id: PUBMED:25013618
A Systematic Review and Meta-analysis of Prophylactic versus Pre-emptive Strategies for Preventing Cytomegalovirus Infection in Renal Transplant Recipients. Background: In kidney transplant (KT) recipients, CMV infection poses significant morbidity and mortality. Both prophylactic and pre-emptive approaches for preventing CMV infection have been utilized.
Objective: To compare the effectiveness of routine prophylaxis vs. pre-emptive treatment for preventing CMV disease after KT.
Methods: We conducted a systematic review and meta-analysis comparing the effectiveness of routine prophylaxis vs. pre-emptive treatment for preventing CMV disease after KT. Combining 4 comprehensive search terms (CMV, renal transplant, prophylaxis, pre-emptive), we searched PubMed, EMBASE, ISI Web of Science, and the Cochrane Central Register from inception through January 2011. We also evaluated studies referenced in review articles and abstracts from meetings of major nephrology and transplant societies (2009-2011). Two authors independently extracted data and assessed methodological criteria. The primary outcome was the pooled estimate of the odds ratio (OR) of developing CMV infection. Secondary outcomes included the OR of acute rejection, the OR of graft loss and the OR of death within the first year of KT. Comprehensive Meta-analysis V2 software was used for data analysis.
Results: Analysis of 9 randomized controlled trials (991 patients; ganciclovir=5, valganciclovir=4) with CMV infection as an outcome revealed the OR of CMV infection to be 0.34 (95% CI: 0.25-0.46, p=0.008) for the prophylactic vs. the pre-emptive groups. The OR of acute rejection (7 studies; 1358 patients) was 0.52 (95% CI: 0.41-0.67, p=0.001) with prophylactic approach compared to pre-emptive treatment; graft loss (7 studies; OR 0.52 [95% CI: 0.34-1.12, p=0.32] and mortality (6 studies; OR 0.84 [95% CI: 0.62-1.23, p=0.23]) were similar between the two groups.
Conclusions: Prophylactic approach is superior to pre-emptive approach in preventing CMV infection within the first year of kidney transplant. The risk of developing acute rejection is also lower with prophylactic approach in the first year of transplant but there is no significant difference in graft loss or mortality with either approach.
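The pooled odds ratio above is the kind of estimate produced by inverse-variance weighting of per-study log odds ratios. A minimal fixed-effect sketch with invented study estimates (the actual analysis used Comprehensive Meta-analysis V2 and may have used a different model):

```python
import math

# (OR, lower 95%, upper 95%) per study; these triples are invented.
studies = [(0.30, 0.15, 0.60), (0.45, 0.20, 1.01), (0.28, 0.12, 0.65)]

num = den = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from the CI
    w = 1.0 / se ** 2                                # inverse-variance weight
    num += w * log_or
    den += w

pooled = math.exp(num / den)
se_pooled = math.sqrt(1.0 / den)
lo = math.exp(num / den - 1.96 * se_pooled)
hi = math.exp(num / den + 1.96 * se_pooled)
print(f"pooled OR={pooled:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```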
abstract_id: PUBMED:38395149
Defining pre-emptive living kidney donor transplantation as a quality indicator. Quality indicators in kidney transplants are needed to identify care gaps and improve access to transplants. We used linked administrative health care databases to examine multiple ways of defining pre-emptive living donor kidney transplants, including different patient cohorts and censoring definitions. We included adults from Ontario, Canada with advanced chronic kidney disease between January 1, 2013, and December 31, 2018. We created 4 unique incident patient cohorts, varying the eligibility by the risk of progression to kidney failure and whether individuals had a recorded contraindication to kidney transplant (eg, home oxygen use). We explored the effect of 4 censoring event definitions. Across the 4 cohorts, size varied substantially from 20 663 to 9598 patients, with the largest reduction (a 43% reduction) occurring when we excluded patients with ≥1 recorded contraindication to kidney transplantation. The incidence rate (per 100 person-years) of pre-emptive living donor kidney transplant varied across cohorts from 1.02 (95% CI: 0.91-1.14) for our most inclusive cohort to 2.21 (95% CI: 1.96-2.49) for the most restrictive cohort. Our methods can serve as a framework for developing other quality indicators in kidney transplantation and for monitoring and improving access to pre-emptive living donor kidney transplants in health care systems.
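The incidence rates per 100 person-years with 95% CIs quoted above can be computed with a standard Poisson log-normal interval. A minimal sketch; the event and person-year counts below are hypothetical, chosen only to land near the most inclusive cohort's 1.02 (0.91-1.14):

```python
import math

events, person_years = 350, 34300       # hypothetical counts
rate = events / person_years * 100      # per 100 person-years
se_log = 1 / math.sqrt(events)          # SE of log(rate) under a Poisson model
lo = rate * math.exp(-1.96 * se_log)
hi = rate * math.exp(+1.96 * se_log)
print(f"{rate:.2f} per 100 person-years (95% CI {lo:.2f}-{hi:.2f})")
```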
abstract_id: PUBMED:26138458
Cytomegalovirus post kidney transplantation: prophylaxis versus pre-emptive therapy? Cytomegalovirus is the most important pathogen causing opportunistic infections in kidney allograft recipients. The occurrence of CMV disease is associated with higher morbidity, a higher incidence of other opportunistic infections, allograft loss and death. Therefore, an efficient strategy to prevent CMV disease after kidney transplantation is required. Two options are currently available: pre-emptive therapy based on regular CMV PCR monitoring and generalized antiviral prophylaxis during a defined period. In this review, we describe those two approaches, highlight the distinct advantages and risks of each strategy and summarize the four randomized controlled trials performed in this field so far. Taking this evidence together, pre-emptive therapy and anti-CMV prophylaxis are equally potent in preventing CMV-associated complications; however, the pre-emptive approach may have distinct advantages in allowing for development of long-term anti-CMV immunity. We propose a risk-adapted use of these approaches based on serostatus, immunosuppressive therapy and availability of resources at a particular transplant centre.
abstract_id: PUBMED:37692991
The pattern of cytomegalovirus replication in post-renal transplant recipients with pre-emptive therapy strategy during the 1st year of post-transplantation. Objectives: The prevalence and reactivating pattern of cytomegalovirus (CMV) among renal transplant recipients in Sri Lanka is scarce. The study was aimed to describe the replication patterns of CMV in post-renal transplant recipients who were on pre-emptive therapy and identify the risk factors and time period for CMV reactivating during the 1st year of transplantation and provide an insight into the selection of pre-emptive therapy in the local setting.
Methods: A retrospective cohort study was conducted, enrolling renal transplant recipients who had completed the routine 1-year follow-up for pre-emptive management at the National Hospital, Kandy, from January 2016 to January 2021. CMV quantitative polymerase chain reaction results and demographic data of enrolled recipients were analyzed to investigate the CMV replication pattern and risk factors. Categorical data were analyzed using Pearson's Chi-square test, considering P < 0.05 statistically significant. Continuous variables were presented as percentages.
Results: Two hundred and fifty-one renal transplant recipients' data were included in the study. Of them, 75.70% were male patients, and the mean age of the study population was 43.25 years. CMV DNAemia incidence was 56.57% during the 1st year of post-renal transplantation. Only 9.16% had developed more than 10⁴ IU/mL, i.e., significant DNAemia. Sex and donor type were not risk factors for CMV reactivation. However, the recipient's age was significantly associated with CMV viraemia among renal transplant recipients.
Conclusion: Considering the low incidence of significant viraemia among the study population, pre-emptive treatment would be the cost-effective strategy for management of the post-renal transplant recipients in local settings.
abstract_id: PUBMED:28918445
Pre-emptive Intestinal Transplant: The Surgeon's Point of View. Pre-emptive transplantation is a well-established practice for certain types of end-organ failure, as in kidney transplantation. For irreversible intestinal failure, total parenteral nutrition (TPN) remains the gold standard, due to the suboptimal long-term results of intestinal transplantation. As such, the only role for pre-emptive transplantation, if any, will be for patients identified to be at high risk of complications and mortality while on definitive long-term TPN. In these patients, the timing of early listing and transplantation could become life-saving, taking into account that mortality on the waiting list is still the highest for intestinal candidates. The development of simulation models or pre-transplant scoring systems could help in selecting patients based on potential outcome on TPN or with transplantation, and recent reports from high-volume centers identify a few underlying pathologic conditions and some TPN complications as carrying a higher risk of increased morbidity and mortality. A pre-emptive transplant could be used as a rehabilitative procedure in a well-selected case-by-case scenario among TPN patients at risk of liver failure, repeated central line infections, mesenteric infarction, short bowel syndrome (SBS) <50 cm or with end stoma, congenital mucosal disease, or desmoid tumors: these conditions must be carefully evaluated, so as neither to underestimate the clinical stage nor to overestimate the impact of a temporary situation. At the present time, diseases with a variable and unpredictable course, such as intestinal dysmotility disorders, or quality-of-life and financial issues are still far from being considered indications for a pre-emptive transplant.
abstract_id: PUBMED:35349232
Influence of formalized Predialysis Education Program (fPEP) on the chosen and definitive renal replacement therapy option. Background: It is widely accepted that patients with chronic kidney disease (CKD) should play an active role in the selection of renal replacement therapy (RRT) option. However, patients' knowledge about CKD and treatment options is limited. The implementation of structured education program and shared decision-making may result in a better preparation to RRT, more balanced choice of dialysis modalities and better access to kidney transplantation (TX).
Objectives: The aim of this long-term study was to assess the impact of the formalized Predialysis Education Program (fPEP) on knowledge of RRT options, as well as on the selected and definitive therapy.
Material And Methods: The study included 435 patients (53% men, mean age 60 years) with CKD stage 4 and 5, participating in fPEP at our center. The program included at least 3 visits, during which balanced information about all RRT options was presented and self-care and informed decision-making were encouraged. The knowledge about RRT options before and after fPEP attendance, and selected and definitive RRT options were assessed.
Results: Ninety-two percent of patients had received prior nephrology care. After fPEP completion, knowledge about CKD and RRT options improved in most patients, and a preferred modality was selected: 40% of participants chose hemodialysis (HD), 32% peritoneal dialysis (PD) and 18% TX. During the observation period, 4% of patients died before commencement of dialysis, 2.7% received a preemptive kidney transplant, 8.6% were placed on the transplant waiting list, and 94% started dialysis (30% PD and 70% HD). Among those who chose PD, 69% started PD and 24% started HD; the leading causes of the discrepancy between choosing and receiving PD were deterioration in clinical condition (50%) and change of decision (32%).
Conclusions: The fPEP increases CKD patients' knowledge of RRT methods. The implementation of a decision-making process based on fPEP leads to a satisfactory distribution between modalities, with good concordance between the chosen and definitive modality.
abstract_id: PUBMED:33173640
The Impact of Predialysis Patient Education Counseling on Relative Frequencies of Renal Replacement Modalities. Background and objective The predialysis education program (PDEP) is an integral part of the management of patients with end-stage renal disease (ESRD). Hence, the aim of this study was to assess the distribution of renal replacement therapy (RRT) among patients with ESRD who received PDEP counseling at a single tertiary care center in Khamis Mushait, Kingdom of Saudi Arabia (KSA). Methodology In this study, we included 177 patients with ESRD who received a series of structured PDEP counseling sessions between March 2018 and February 2019 at the Armed Forces Hospital, Southern Region, Khamis Mushait, KSA. All patients were offered available RRT options, which included hemodialysis (HD), continuous ambulatory peritoneal dialysis (CAPD), or renal transplantation. Patients' opted RRT modality was recorded and managed as per the international guidelines and institutional protocols. Results Out of 177 patients, 58.8% (104) were males, with a mean age of 59 ± 13.7 years. The most common comorbid condition as the primary diagnosis was diabetic nephropathy in 49.7% (88), followed by hypertension in 30.5% (54), bilateral small kidney in 15.3% (27), and renal stone in 5.1% (nine) of the patients. Among the available RRTs, 51.4% (91) chose HD, 5.1% (nine) decided on CAPD, four patients (2.3%) opted for renal transplantation, while the remaining 41.2% (73) had yet to choose one, out of which 83.6% (61/73) had stage-IV chronic kidney disease (CKD) while the remaining 16.4% (12/73) had stage-V CKD. Conclusions In conclusion, a series of structured PDEP sessions for the patients progressing to ESRD can facilitate their selection of RRT. In a resource-limited setting, such as ours, where the number of patients who seek treatment for ESRD is rising, PDEP can provide patients with adequate information and knowledge to equip them with the ability regarding the selection of a self-care RRT modality.
Answer: Yes, a predialysis education program (PDEP) does increase the number of pre-emptive renal transplantations. A study investigating the relationship between PDEP for patients and their relatives and pre-emptive renal transplantation (RT) found that the rate of pre-emptive transplantations was significantly higher among the educated group (42.62%) compared to the noneducated group (18.51%; P < .001) (PUBMED:23622579). This suggests that PDEP can effectively increase the number of patients undergoing pre-emptive kidney transplantation among those with end-stage renal disease (ESRD). |
Instruction: Dual therapy with peg-interferon and ribavirin in thalassemia major patients with chronic HCV infection: Is there still an indication?
Abstracts:
abstract_id: PUBMED:27012446
Dual therapy with peg-interferon and ribavirin in thalassemia major patients with chronic HCV infection: Is there still an indication? Background: Iron overload and hepatitis C virus (HCV) infection together can lead to chronic liver damage in thalassemia major (TM) patients.
Aims: We investigated viral, genetic, and disease factors influencing sustained virological response (SVR) after peg-interferon and ribavirin therapy in TM patients with HCV infection.
Methods: We analyzed 230 TM patients with HCV infection (mean age 36.0±6.3 years; 59.1% genotype 1; 32.2% genotype 2; 3.4% genotype 3; and 5.3% genotype 4; 28.7% carried CC allele of rs12979860 in IL28B locus; 79.6% had chronic hepatitis and 20.4% cirrhosis; 63.5% naive and 36.5% previously treated with interferon alone) treated in 14 Italian centers.
Results: By multivariate regression analysis, SVR was independently associated with the CC allele of the IL28B SNP (OR 2.98; 95% CI 1.29-6.86; p=0.010) and rapid virologic response (OR 11.82; 95% CI 3.83-36.54; p<0.001) in 136 genotype 1 patients. Combining the favorable variables, the probability of SVR ranged from 31% to 93%. In genotype 2 patients, only RVR (OR 8.61; 95% CI 2.85-26.01; p<0.001) was associated with an SVR higher than 80%. In 3 patients with cirrhosis, decompensation of liver or heart disease was observed. Over 50% of patients had increased blood transfusion requirements.
Conclusion: Dual therapy in TM patients with chronic HCV infection is efficacious in patients with the best virological, genetic and clinical predictors. Patients with cirrhosis have an increased risk of worsening liver or heart disease.
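The 31%-93% probability range reported above is just the logistic-model arithmetic applied to the two published odds ratios. A minimal Python sketch, assuming an intercept back-solved from the reported 31% baseline (the paper's fitted coefficients are not given in the abstract):

    import math

    # Adjusted odds ratios reported for genotype 1 patients (PUBMED:27012446)
    OR_IL28B_CC = 2.98   # CC allele of rs12979860
    OR_RVR = 11.82       # rapid virologic response

    # Hypothetical intercept: back-solved so that a patient with neither
    # favorable predictor has the reported ~31% SVR probability.
    b0 = math.log(0.31 / (1 - 0.31))

    def svr_probability(has_cc: bool, has_rvr: bool) -> float:
        """Predicted SVR from logit(p) = b0 + CC*ln(OR_CC) + RVR*ln(OR_RVR)."""
        logit = b0 + has_cc * math.log(OR_IL28B_CC) + has_rvr * math.log(OR_RVR)
        return 1 / (1 + math.exp(-logit))

    for cc in (False, True):
        for rvr in (False, True):
            print(f"CC={cc!s:5} RVR={rvr!s:5} -> SVR ~ {svr_probability(cc, rvr):.0%}")

With both favorable predictors the sketch gives roughly 94%, matching the reported 93% upper bound to within rounding.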
abstract_id: PUBMED:24669610
Treatment outcome of HCV infected paediatric patients and young adults at Karachi, Pakistan. Background: Scanty data are available regarding outcome of children and young adults treated conventionally for Hepatitis C. The present study was undertaken to evaluate the outcome of paediatric and young adult patients treated with PEG-IFN-alpha or conventional interferon (IFN) plus Ribavirin at a public sector hospital of Karachi.
Methods: This was an observational study, conducted at Sarwar Zuberi Liver Centre, Civil Hospital Karachi, from 2007 to 2010. Patients up to 20 years of age were tested for anti-HCV antibodies by 4th-generation ELISA and, in positive cases, HCV RNA was assessed by PCR. Patients with HBV, HIV, and other comorbidities such as thalassaemia minor, haemophilia, kidney disease, and co-existing active illness other than HCV were excluded. Depending upon the genotype, patients were treated for 24-48 weeks with IFN 3 MIU three times per week or PEG-IFN-alpha (1.5 microg/Kg) per week plus Ribavirin 15 mg/Kg/day. Nearly all patients were followed till the end of treatment.
Results: The mean age of the 55 patients was 18.42 +/- 2.59 years (range 9-20 years) and mean BMI 19.56 +/- 2.36 kg/m2. Females were 70.9% (n = 39). More than 80% had genotype 3 (subtype a or b); the remainder had genotype 1, 4, or mixed. Slight decreases in haemoglobin, platelet, and white cell counts were noted at 1, 3, and 6 months of treatment. No significant side effects were noted. There was a marked decrease in ALT post treatment (pre-treatment 72.69 +/- 50.73 versus post-treatment 24.81 +/- 14.09 IU/l). End-treatment response (ETR) was 90.9%; of these, sustained viral response (SVR) was achieved in 86.3%.
Conclusion: HCV infected paediatric and young adult patients treated with PEG-IFN-alpha/or conventional interferon plus Ribavirin (combination therapy) achieved an ETR of 90.9% and SVR of 86.3%.
abstract_id: PUBMED:20187529
Combination therapy with interferon and ribavirin for chronic hepatitis C infection in beta-thalassaemia major. Treatment of chronic hepatitis C virus (HCV) infection in transfusion-dependent beta-thalassaemia major patients is complicated by existing hepatic siderosis and the fear of ribavirin-associated haemolysis. We evaluated the efficacy and side-effects of combination interferon-alpha (INF) and ribavirin therapy for HCV-infected thalassaemia patients. A total of 17 patients were enrolled (10 nonresponders to INF monotherapy, 7 naive to treatment, mean age 23.1 years) and they received 12 months of combination therapy. The sustained virological response rate 6 months after treatment was 58.8%. Blood transfusion requirements during treatment temporarily increased by 36.6%. Combination therapy was tolerated by, and may be useful for, HCV-infected thalassaemia major patients.
abstract_id: PUBMED:19404206
Treatment of chronic hepatitis C in sickle cell disease and thalassaemic patients with interferon and ribavirin. Background/aim: Hepatic complications are a major cause of death in patients with congenital anaemia and chronic hepatitis C. Ribavirin is usually contraindicated in patients with haemolytic anaemia. This pilot study evaluated the efficacy and safety of antiviral treatment in patients with sickle cell disease (SCD) or beta-thalassaemia major (TM).
Methods: Eleven consecutive SCD and TM patients were included. Interferon monotherapy was administrated in the two first thalassaemic patients. Other patients received combination therapy with full dose of pegylated interferon 2b and increasing doses of ribavirin, starting with a low dose of ribavirin (400 mg/day).
Results: Hepatitis C virus genotypes were 1 or 4 in nine cases. A sustained virological response was achieved in five of 11 patients despite factors unfavourable to response (genotypes, prior nonresponse to treatment). Haemoglobin level at the end of treatment was higher than baseline in five of six SCD patients. No SCD patient needed a transfusion during or after the treatment period, nor presented a vaso-occlusive crisis. The mean increase in transfusion requirements was 32.5% in the thalassaemic group.
Conclusion: A sustained virological response can be obtained in SCD and TM patients. No earlier study has reported such excellent haematological tolerance of ribavirin among SCD patients. The results of this study suggest that full-dose ribavirin could be used from the start of treatment in SCD patients.
abstract_id: PUBMED:21845061
Effect of hepatic iron concentration and viral factors in chronic hepatitis C-infected patients with thalassemia major, treated with interferon and ribavirin. Background: Beta thalassemia major patients are vulnerable to transfusion-transmitted infection, especially hepatitis C virus (HCV), and iron overload. These comorbidities lead to cirrhosis and hepatocellular carcinoma in these patients. In order to prevent these complications, treatment of HCV infection and regular iron chelation seem necessary. The aim of this study was to evaluate the effect of hepatic iron concentration (HIC) and viral factors on the sustained virological response (SVR) in chronic HCV-infected patients with beta thalassemia major treated with interferon and ribavirin.
Materials And Methods: We enrolled 30 patients with thalassemia major and chronic HCV who were referred to the Hematology Clinic of Guilan University of Medical Sciences, between December 2002 and April 2006. HIC was measured by atomic absorption spectroscopy before treatment. The viral factors (viral load, genotype) and HIC were compared between those who achieved a SVR and nonresponders.
Results: The mean age of the 30 thalassemic patients was 22.56 ± 4.28 years (range 14-30 years). Most patients were male (56.7%). Genotype 1a was seen in 24 (80%) cases. SVR was achieved in 15 patients (50%). There were no significant correlations between HIC (P = 1.00), viral load (P = 0.414), HCV genotype (P = 0.068), and SVR. No difference was observed in viral load (P = 0.669) and HIC (P = 0.654) between responders and nonresponders.
Conclusion: HIC, HCV viral load, and HCV genotype were not correlated with virological response, and it seems that there is no need to postpone antiviral treatment for more vigorous iron chelating therapy.
abstract_id: PUBMED:28439915
Treatment of chronic hepatitis C with direct-acting antivirals in patients with β-thalassaemia major and advanced liver disease. Interferon-based regimens for chronic hepatitis C (CHC) were often deferred in patients with β-thalassaemia major (β-TM) due to poor efficacy and tolerance. Current guidelines recommend direct-acting antivirals (DAAs) for these patients. The aim of this study was to assess the safety and efficacy of DAAs in patients with β-TM and advanced liver disease due to CHC. Patients were recruited from eight liver units in Greece. The stage of liver disease was assessed using transient elastography and/or liver histology. Five regimens were used: sofosbuvir (SOF) + ribavirin (RBV); SOF + simeprevir ± RBV; SOF + daclatasvir ± RBV; ledipasvir/SOF ± RBV and ombitasvir/paritaprevir-ritonavir + dasabuvir ± RBV. Sixty-one patients (median age 43 years) were included. The majority of patients were previously treated for hepatitis C (75%) and had cirrhosis (79%). Viral genotype distribution was: G1a: n = 10 (16%); G1b: n = 22 (36%); G2: n = 2 (3%); G3: n = 14 (23%); G4: n = 13 (22%). The predominant chelation therapy was a combination of deferoxamine and deferiprone (35%). Overall sustained virological response rates were 90%. All treatment regimens were well tolerated and no major adverse events or drug-drug interactions were observed. Approximately half of the patients who received RBV (7/16, 44%) had increased needs for blood transfusion. Treatment of CHC with DAAs in patients with β-TM and advanced liver disease was highly effective and safe.
abstract_id: PUBMED:21526103
Efficacy of interferon alpha-2b with or without ribavirin in thalassemia major patients with chronic hepatitis C virus infection: A randomized, double blind, controlled, parallel group trial. Background: The aim of this study was to evaluate the effectiveness of monotherapy with interferon alpha-2b and combination therapy with interferon alpha-2b plus ribavirin on chronic hepatitis C infection in thalassaemic patients.
Methods: In a parallel-group, randomized, double-blind, controlled trial, 32 thalassaemic patients with chronic hepatitis C infection completed the study. One group was randomly assigned to treatment with three million units of interferon alpha-2b three times a week plus ribavirin (800-1200 mg daily); the second group received interferon alpha-2b alone. Treatment duration was 24-48 weeks. Primary efficacy variables were HCV RNA after treatment and sustained viral response (SVR) six months after treatment.
Results: The mean age of patients was 22 ± 7.4 years; 19 (59.4%) were male and 13 (40.6%) were female. At the end of treatment, no statistically significant differences were found between the groups in HCV RNA and AST. The proportion of patients with SVR six months after treatment was significantly greater in the monotherapy group (90.9%) than in the combination therapy group (44.4%; p = 0.049). A significant difference in mean ALT was also obtained at the end of treatment between the monotherapy and combination therapy groups (30.4 ± 19.2 and 60.1 ± 48.9, respectively; p = 0.02). Response rates were not associated with genotype or severity of hepatitis C infection in either group.
Conclusions: These results suggest that monotherapy may be considered as the first-line therapy in patients with thalassemia.
abstract_id: PUBMED:20443101
Efficacy and safety of pegylated IFN alfa 2b alone or in combination with ribavirin in thalassemia major with chronic hepatitis C. Background: Treatment of HCV infection in patients with thalassemia major (TM) is limited by the lack of large clinical trials and concerns about ribavirin-induced hemolysis.
Methods: We conducted a prospective, randomized, open-label study to determine efficacy and tolerability of pegylated-interferon alfa 2b (1.5 microg/kg/week) alone (group A) or with ribavirin (12-15 mg/kg/day; group B) in patients with TM and chronic HCV infection. Patients with genotype 1 or 4 HCV were treated for 48 weeks and those with genotype 3 or 2 HCV for 24 weeks. Early viral response (EVR; after 12 weeks of treatment), end-of-treatment virological response (ETR) and sustained virological response (SVR; 6 months after stopping therapy) were assessed.
Results: Of 40 patients, 20 each were allocated to the two treatment groups. EVR rates in group A and B were 15 (75%) and 18 (90%), respectively. ETR occurred in 17/20 (85%) patients in each group. SVR occurred in 8 (40%) patients in group A and 14 (70%) in group B. Blood transfusion requirements increased in one patient in group A and four patients in group B. One patient in group A had severe sepsis and one in group B had nephrotic syndrome. Two patients in each group required reduction in drug dose.
Conclusions: In patients with TM and chronic HCV infection, pegylated interferon alfa 2b and ribavirin combination therapy achieves a higher SVR rate than pegylated interferon alone, and is well tolerated except for an increase in blood transfusion requirement.
abstract_id: PUBMED:16570722
Peginterferon alfa-2b and ribavirin in thalassaemia/chronic hepatitis C virus-co-infected non-responder to standard interferon-based. We describe a patient with HbE-beta thalassaemia and chronic hepatitis C virus infection (genotype 1a) who was treated successfully with peginterferon alfa-2b and ribavirin, following failure to respond to standard interferon and ribavirin therapy. She had sustained virological response for nearly 24 months after completing peginterferon alfa-2b and ribavirin therapy. Transfusion requirements were significantly increased during combination therapy due to ribavirin-induced haemolysis. The adverse effects of interferon were well tolerated. Combination therapy with peginterferon alfa-2b and ribavirin maybe a feasible treatment option for a subset of thalassaemia/HCV infected non-responders to standard interferon-based therapy.
abstract_id: PUBMED:29633743
The frequency of hypothyroidism and its relationship with HCV positivity in patients with thalassemia major in southern Iran. Introduction: Hypothyroidism is one of the most common complications of iron overload in patients with β-thalassemia major (TM). On the other hand, these patients are prone to hepatitis C virus (HCV) infection, which can cause thyroid dysfunction by itself or as a side effect of treatment with interferon (IFN) or IFN plus ribavirin. The aim of this study is to evaluate the association of hypothyroidism with HCV positivity and serum ferritin levels in patients with TM.
Methods: In this cross-sectional study, 201 randomly selected patients with TM who were registered at the Thalassemia Clinic of a tertiary hospital in Shiraz, southern Iran were investigated. Thyroid function tests and serologic screening assays for HCV seropositivity (HCV Ab and HCV-RNA) were conducted for all patients.
Results: The frequency of hypothyroidism was 22.9%, including 19.9% subclinical hypothyroidism, 2% primary overt hypothyroidism, and 1% central hypothyroidism. Eighty-six patients (42.8%) were HCV Ab positive and 60 patients (29.9%) were HCV RNA positive. No significant relationship was found between hypothyroidism and HCV positivity or receiving IFN-α (P>0.05). Hypothyroidism showed a borderline significant association with high serum ferritin levels in TM patients (P=0.055).
Conclusion: Our results showed no significant association between hypothyroidism and HCV infection in TM patients. It seems that the main mechanism of hypothyroidism in our patients is iron overload; however, for better evaluation a larger multicenter study is recommended. Also due to the importance of consequences of HCV infection, more careful pre-transfusional screening of blood should be considered in TM patients.
Answer: The use of dual therapy with peg-interferon and ribavirin in thalassemia major (TM) patients with chronic hepatitis C virus (HCV) infection has been evaluated in various studies, with mixed results regarding its efficacy and safety.
One study found that dual therapy is efficacious in TM patients with chronic HCV infection who have favorable virological, genetic, and clinical predictors. The study indicated that sustained virological response (SVR) was independently associated with the CC allele of IL28B SNP and rapid virologic response (RVR) in genotype 1 patients. However, patients with cirrhosis had an increased risk of worsening liver or heart disease, and over 50% of patients increased blood transfusions during treatment (PUBMED:27012446).
Another study reported that HCV-infected pediatric and young adult patients treated with PEG-IFN-alpha/or conventional interferon plus ribavirin achieved an ETR of 90.9% and SVR of 86.3%, suggesting a good outcome for this population (PUBMED:24669610).
A study on combination therapy with interferon-alpha (INF) and ribavirin for HCV-infected thalassemia patients showed a sustained virological response rate of 58.8%, with blood transfusion requirements temporarily increasing by 36.6% during treatment (PUBMED:20187529).
In contrast, a study evaluating the efficacy of interferon alpha-2b with or without ribavirin in thalassemia major patients with chronic HCV infection found that monotherapy may be considered as the first-line therapy in patients with thalassemia, as the SVR six months after treatment was significantly greater in the monotherapy group (90.9%) than in the combination therapy group (44.4%) (PUBMED:21526103).
Given the advent of direct-acting antivirals (DAAs), which have shown high efficacy and safety in patients with β-thalassaemia major and advanced liver disease due to CHC (PUBMED:28439915), the indication for dual therapy with peg-interferon and ribavirin may be limited.
Instruction: Are breastfeeding and complementary feeding of children of adolescent mothers different from those of adult mothers?
Abstracts:
abstract_id: PUBMED:14513130
Are breastfeeding and complementary feeding of children of adolescent mothers different from those of adult mothers? Objective: To study breastfeeding during the first year of life and the kind of complementary food provided at one year of life to children of adolescent mothers. To compare these data with breastfeeding and complementary food received by children of adult mothers.
Method: A dual cohort study was performed. Children were selected from the files of CAISM/UNICAMP and assessed when they were one year old. The study consisted of 122 children born to adolescent mothers and 123 children born to adult mothers; all were full-term births with birthweight of 2,500 g or higher. When the children were one year old, the mothers were interviewed at home or at CIPED/UNICAMP. The results were compared using the chi-square test and Fisher's test (alpha=5%); the Kaplan-Meier method was used to analyze the duration of breastfeeding, and the Wilcoxon (Breslow) test to compare the exclusive, predominant, full, and total breastfeeding curves.
Results: 94.3% of children of adolescent mothers and 95.9% of children of adult mothers left the maternity hospital being breastfed (p=0.544). The median exclusive breastfeeding duration for both groups was 90 days. After completing one year, 35.3% and 28.5% of children of adolescent and adult mothers, respectively, continued breastfeeding (p=0.254): only breastfeeding 11.5% vs. 8.9% and mixed feeding 23.8% vs. 19.5% (p=0.519). Meat intake by children of adolescent mothers was lower than that of children of adult mothers (13.9% vs. 26.0%; Fisher's test: p=0.031). With regard to egg intake, 11.5% vs. 19.5% of children of adolescent mothers and adult mothers did not eat egg but the results suggested that the egg intake of children of adolescent mothers was higher (p=0.082).
Conclusion: Duration and pattern of breastfeeding were similar between children of adolescent mothers and of adult mothers. The complementary nutrition was similar, except for a lower intake of meat and a higher intake of eggs among the children of adolescent mothers.
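A minimal sketch of the survival-analysis step described in the methods above (Kaplan-Meier curves compared with a Breslow/Gehan-Wilcoxon weighted logrank test), using the Python lifelines library; the file name and column names (bf_days, weaned, group) are hypothetical stand-ins for the study's data:

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # One row per child: bf_days = breastfeeding duration in days,
    # weaned = 1 if breastfeeding ended (event), 0 if censored,
    # group = "adolescent" or "adult" mother.
    df = pd.read_csv("breastfeeding.csv")
    adol = df[df["group"] == "adolescent"]
    adult = df[df["group"] == "adult"]

    kmf = KaplanMeierFitter()
    kmf.fit(adol["bf_days"], event_observed=adol["weaned"], label="adolescent")
    print("median breastfeeding duration (adolescent):", kmf.median_survival_time_)

    # Breslow (Gehan-Wilcoxon) comparison of the two curves: a weighted
    # logrank test that gives more weight to earlier event times.
    result = logrank_test(
        adol["bf_days"], adult["bf_days"],
        event_observed_A=adol["weaned"], event_observed_B=adult["weaned"],
        weightings="wilcoxon",
    )
    print("Breslow/Wilcoxon p-value:", result.p_value)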
abstract_id: PUBMED:34371886
Infant and Young Child Feeding Practices among Adolescent Mothers and Associated Factors in India. Adequate infant and young child feeding (IYCF) improves child survival and growth. Globally, about 18 million babies are born to mothers aged 18 years or less; in India, these babies have a higher likelihood of adverse birth outcomes, partly due to insufficient maternal knowledge of child growth. This paper examined factors associated with IYCF practices among adolescent Indian mothers. This cross-sectional study extracted data on 5148 children aged 0-23 months from the 2015-2016 India National Family Health Survey. Survey logistic regression was used to assess factors associated with IYCF among adolescent mothers. Prevalence of exclusive breastfeeding, early initiation of breastfeeding, timely introduction of complementary feeding, minimum dietary diversity, minimum meal frequency, and minimum acceptable diet rates were: 58.7%, 43.8%, 43.3%, 16.6%, 27.4% and 6.8%, respectively. Maternal education, mode of delivery, frequency of antenatal care (ANC) clinic visits, geographical region, child's age, and household wealth were the main factors associated with breastfeeding practices, while maternal education, maternal marital status, child's age, frequency of ANC clinic visits, geographical region, and household wealth were factors associated with complementary feeding practices. IYCF practices among adolescent mothers are suboptimal except for breastfeeding. Health and nutritional support interventions should address the factors for these indicators among adolescent mothers in India.
abstract_id: PUBMED:33567634
Breastfeeding Practices among Adolescent Mothers and Associated Factors in Bangladesh (2004-2014). Optimal breastfeeding practices among mothers have been proven to have health and economic benefits, but evidence on breastfeeding practices among adolescent mothers in Bangladesh is limited. Hence, this study aims to estimate breastfeeding indicators and factors associated with selected feeding practices. The sample included 2554 children aged 0-23 months of adolescent mothers aged 12-19 years from four Bangladesh Demographic and Health Surveys collected between 2004 and 2014. Breastfeeding indicators were estimated using World Health Organization (WHO) indicators. Selected feeding indicators were examined against potential confounding factors using univariate and multivariate analyses. Only 42.2% of adolescent mothers initiated breastfeeding within the first hour of birth, 53% exclusively breastfed their infants, predominant breastfeeding was 17.3%, and 15.7% bottle-fed their children. Parity (2-3 children), older infants, and adolescent mothers who made postnatal check-up after two days were associated with increased exclusive breastfeeding (EBF) rates. Adolescent mothers aged 12-18 years and who watched television were less likely to delay breastfeeding initiation within the first hour of birth. Adolescent mothers who delivered at home (adjusted OR = 2.63, 95% CI:1.86, 3.74) and made postnatal check-up after two days (adjusted OR = 1.67, 95% CI: 1.21, 2.30) were significantly more likely to delay initiation breastfeeding within the first hour of birth. Adolescent mothers living in the Barisal region and who listened to the radio reported increased odds of predominant breastfeeding, and increased odds for bottle-feeding included male infants, infants aged 0-5 months, adolescent mothers who had eight or more antenatal clinic visits, and the highest wealth quintiles. In order for Bangladesh to meet the Sustainable Development Goals (SDGs) 2 and 3 by 2030, breastfeeding promotion programmes should discourage bottle-feeding among adolescent mothers from the richest households and promote early initiation of breastfeeding especially among adolescent mothers who delivered at home and had a late postnatal check-up after delivery.
abstract_id: PUBMED:32721051
'I just don't think it's that natural': adolescent mothers' constructions of breastfeeding as deviant. Breastfeeding is recognised globally as the optimal method of infant feeding. For Murphy (1999) its nutritional superiority positions breastfeeding as a moral imperative where mothers who formula-feed are open to charges of maternal deviance and must account for their behaviour. We suggest that this moral superiority of breastfeeding is tenuous for mothers from marginalised contexts and competes with discourses which locate breastfeeding, rather than formula feeding, as maternal deviance. We draw on focus group and interview data from 27 adolescent mothers from socio-economically deprived neighbourhoods in three areas of the UK, and five early years professionals working at a Children's Centre in the Northeast of England. We argue that breastfeeding is constructed as deviance at three 'levels' as (i) a deviation from broad social norms about women's bodies, (ii) a deviation from local mothering behaviours and (iii) a transgression within micro-level interpersonal and familial relationships. Given this positioning of breastfeeding as deviant, breastfeeding mothers feel obliged to account for their deviance. In making this argument, we extend and rework Murphy's (1999) framework to encompass diverse experiences of infant feeding. We conclude with reflections on future research directions and potential implications for practice.
abstract_id: PUBMED:36927075
Investigating the Effect of Supportive Interventions on Initiation of Breastfeeding, Exclusive Breastfeeding, and Continuation of Breastfeeding in Adolescent Mothers: A Systematic Review and Meta-Analysis. Introduction: The initiation of breastfeeding, exclusive breastfeeding, and its continuation for 2 years are lower in adolescent mothers than in adult mothers. The purpose of this study is to determine the effect of supportive interventions on the initiation of breastfeeding, exclusive breastfeeding, and continuation of breastfeeding in adolescent mothers. Methods: Web of Science, PubMed, Scopus, Cochrane Library, EMBASE, ProQuest, SID, Iranmedex, and Google Scholar were searched to find English and Persian clinical trial studies without time limit. The Cochrane checklist was used to assess the risk of bias of the articles. Data analysis was done using STATA version 11. The I-squared index was used to assess heterogeneity, and a funnel plot and Begg test were used to examine publication bias. The combined odds ratio (OR) and a random effects model were used to pool the studies and perform the meta-analysis. Results: Of 492 articles, 11 were included in the systematic review, and of these, three were included in the meta-analysis. The supportive interventions included educational and counseling interventions, home visits, and peer support. The random effects meta-analysis showed a combined OR of 3.38 (95% confidence interval 1.66-6.88, p = 0.001), indicating that breastfeeding initiation in the intervention group was higher than in the control group. Conclusion: Supportive interventions such as educational and counseling interventions, home visits, and peer support are suitable strategies to promote breastfeeding in adolescent mothers. Therefore, it is suggested to integrate these strategies into the prenatal and postpartum care of adolescent mothers.
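The pooled OR of 3.38 (95% CI 1.66-6.88) from a random-effects model corresponds to the standard DerSimonian-Laird computation. A self-contained numpy sketch with hypothetical per-study odds ratios and CIs (the abstract does not report the three pooled trials individually):

    import numpy as np

    # Hypothetical per-study ORs and 95% CIs for breastfeeding initiation.
    ors = np.array([2.9, 4.1, 3.2])
    ci_low = np.array([1.2, 1.5, 1.1])
    ci_high = np.array([7.0, 11.2, 9.3])

    y = np.log(ors)                               # log odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1 / se**2                                 # fixed-effect weights

    # DerSimonian-Laird between-study variance (tau^2) and I-squared
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    dof = len(y) - 1
    tau2 = max(0.0, (Q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = max(0.0, (Q - dof) / Q) * 100 if Q > 0 else 0.0

    w_re = 1 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_p = np.sqrt(1 / np.sum(w_re))
    lo, hi = pooled - 1.96 * se_p, pooled + 1.96 * se_p
    print(f"pooled OR {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}), I^2 = {i2:.0f}%")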
abstract_id: PUBMED:34958231
Parental Cohabitation and Breastfeeding Outcomes Among United States Adolescent Mothers. Background: Adolescent mothers in the United States experience disproportionately lower rates of breastfeeding compared to older mothers. Evidence suggests that paternal support helps improve breastfeeding outcomes; however, support is difficult to quantify. Parental cohabitation is easy to identify and could be used to quantify paternal support. Research Aim: To investigate the association between parental cohabitation and breastfeeding initiation and duration among US adolescent mothers. Materials and Methods: Data from the 2011-2017 National Survey of Family Growth were used. Our study sample included primiparous adolescent mothers (aged 15-19 years) who gave birth to a singleton (n = 1,867). Multivariate logistic regression and Cox Proportional Hazards models were used to analyze the relationship between cohabitation and breastfeeding initiation and duration, respectively. All models were subsequently stratified by race/ethnicity due to evidence of effect modification. Results: After adjusting for all a priori confounders, cohabiting with the infant's father at birth was associated with increased odds of breastfeeding initiation compared to noncohabiting adolescent mothers (odds ratio [OR]: 1.5, 95% confidence interval [CI]: 1.08-2.16). After stratifying by race/ethnicity, both Hispanic and non-Hispanic white adolescent mothers were more likely to initiate breastfeeding if cohabiting with the infant's father (Hispanic: OR 1.9, 95% CI: 1.10-3.35; non-Hispanic white: OR 1.7, 95% CI: 1.05-2.87). We found no evidence of an association between parental cohabitation and breastfeeding duration. Conclusions: Our study found evidence that cohabitation status at birth increases the odds of breastfeeding initiation in adolescent mothers. Practitioners should consider cohabitation status when working with adolescent mothers.
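A sketch of the duration analysis named in the methods above (Cox proportional hazards, here via the Python lifelines library); the file and column names are hypothetical stand-ins for the NSFG variables:

    import pandas as pd
    from lifelines import CoxPHFitter

    # One row per adolescent mother: bf_months = breastfeeding duration,
    # weaned = 1 if breastfeeding ended (event), cohabiting = 1 if living
    # with the infant's father at birth, plus confounders (e.g., age).
    df = pd.read_csv("nsfg_breastfeeding.csv")

    cph = CoxPHFitter()
    cph.fit(df[["bf_months", "weaned", "cohabiting", "age"]],
            duration_col="bf_months", event_col="weaned")
    cph.print_summary()
    # A hazard ratio for `cohabiting` near 1 would mirror the study's
    # null finding for duration, despite the positive initiation result.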
abstract_id: PUBMED:33282126
Compliance of mothers' breastfeeding and complementary feeding practices with WHO recommendations in Turkey. Background/objectives: This study aimed to evaluate how breastfeeding and complementary nutrition practices of mothers of 0-24-month-old children comply with the World Health Organization (WHO) recommendations for infant and young child feeding and to compare the results with selected demographic parameters related to the mother and child.
Subjects/methods: The research sample comprised mothers (n = 250) with children less than 2 years old. Data were obtained via questionnaire and were analyzed using SPSS 20.0 package program. The Pearson χ2 or Fisher's exact tests were used for assessing relationships between categorical variables. The one-sample t-test was used for comparisons with reference values.
Results: Most mothers (97.2%) breastfed their babies immediately after birth. The mean time to breastfeeding after delivery was 47.8 ± 14.8 minutes, and 40.8% of the mothers complied with the WHO recommendation. Furthermore, 59.8% of the mothers exclusively breastfed their children for 6 months (mean 5.2 ± 1.5 months). The mean duration to the start of providing complementary food was 5.8 ± 0.6 months, and 76.1% of mothers complied with the WHO recommendation. Only 12.3% of mothers breastfed their children for at least 12 months (mean 7.7 ± 3.3 months). On average, mothers gave cow milk to their children for the first time at 10.1 ± 1.7 months and honey at 11.8 ± 2.3 months. The mothers' rates of compliance with the WHO recommendations on cow milk and honey feeding were 32.0% and 71.6%, respectively. The rate of mothers who complied with the WHO minimum meal frequency recommendation was 88.3%.
Conclusions: We suggest that the WHO recommendations on this subject will be realized more fully by emphasizing the importance of the positive effects of breastfeeding until the age of 2 years and of a timely start of complementary food provision. Such changes will affect child health over the long term.
abstract_id: PUBMED:26161657
The Effect of a Pro-Breastfeeding and Healthy Complementary Feeding Intervention Targeting Adolescent Mothers and Grandmothers on Growth and Prevalence of Overweight of Preschool Children. Introduction: The pattern and duration of breastfeeding (BF) and the age at onset of complementary feeding, as well as its quality, have been associated with the prevalence of overweight in childhood.
Objective: To assess the effect of a pro-BF and healthy complementary feeding intervention, targeted to adolescent mothers and maternal grandmothers, on growth and prevalence of overweight and obesity in children at preschool age. This intervention had a positive impact on duration of BF and timing of onset of complementary feeding.
Methods: This randomized clinical trial involved 323 adolescent mothers, their infants, and the infants' maternal grandmothers, when they cohabited. Mothers and grandmothers in the intervention group received counseling sessions on BF and healthy complementary feeding at the maternity ward and at home (7, 15, 30, 60, and 120 days after delivery). When children were aged 4 to 7 years, they underwent anthropometric assessment and collection of data on dietary habits. Multivariable Poisson regression with robust estimation was used for analysis.
Results: BMI-for-age and height-for-age were similar in the intervention and control groups, as was the prevalence of overweight (39% vs. 31% respectively; p=0.318). There were no significant between-group differences in dietary habits.
Conclusion: Although the intervention prolonged the duration of exclusive BF and delayed the onset of complementary feeding, it had no impact on growth or prevalence of overweight at age 4 to 7 years.
Trial Registration: ClinicalTrials.gov NCT00910377.
abstract_id: PUBMED:29855364
Factors associated with the maintenance of breastfeeding for 6, 12, and 24 months in adolescent mothers. Background: Previous studies have demonstrated that adolescent mothers present a higher risk of not breastfeeding or of early interruption of this practice. Considering the scarcity of studies investigating the determining factors of breastfeeding in adolescent mothers, and the absence of studies exploring the determining factors of breastfeeding maintenance for different periods of time in a single population of adolescent mothers, the aim of this research was to identify factors associated with breastfeeding maintenance for at least 6, 12, and 24 months in adolescent mothers.
Methods: Data analysis from a randomised control trial involving adolescent mothers recruited at a university hospital in southern Brazil. Participants were followed through the first year of life of their infants and reassessed at 4-7 years. Factors associated with any breastfeeding for at least 6, 12, and 24 months were assessed using multivariate Poisson regression.
Results: Data for 228, 237, and 207 mothers were available, respectively. Breastfeeding maintenance for at least 6, 12, and 24 months was observed in 68.4, 47.3, and 31.9% of the sample, respectively. Only one factor was associated with breastfeeding maintenance at all outcomes: infant not using a pacifier showed a higher probability of breastfeeding maintenance in the first 2 years. Maternal grandmother breastfeeding support and exclusive breastfeeding duration were associated with breastfeeding maintenance for 6 and 12 months. The other factors evaluated were associated with breastfeeding maintenance at only one of the time points assessed: 6 months, maternal skin color (black/brown); 12 months, female infant and partner breastfeeding support; and 24 months, older paternal age and multiparity.
Conclusions: The present findings shed light on barriers and facilitators of breastfeeding practices among adolescent mothers. To address the challenge of increasing BF duration among adolescent mothers, interventions aimed at boosting breastfeeding maintenance in this population should take into consideration the determining factors identified here. Additionally, breastfeeding education and support should be provided continuously, as the factors influencing these practices vary with time. Thus, support for adolescent mothers during the different stages of breastfeeding needs to be tailored to have a positive impact on the breastfeeding experience.
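The "multivariate Poisson regression with robust estimation" used in this study is the standard modified-Poisson approach for estimating prevalence ratios from binary outcomes. A minimal statsmodels sketch with hypothetical column names:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # One row per mother: bf12 = 1 if any breastfeeding lasted >= 12 months;
    # pacifier, grandmother_support, female_infant = binary covariates.
    df = pd.read_csv("adolescent_mothers.csv")
    X = sm.add_constant(df[["pacifier", "grandmother_support", "female_infant"]])

    # Poisson GLM with robust (sandwich) standard errors on a binary
    # outcome yields prevalence ratios directly ("modified Poisson").
    fit = sm.GLM(df["bf12"], X, family=sm.families.Poisson()).fit(cov_type="HC0")
    print(np.exp(fit.params))      # prevalence ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals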
abstract_id: PUBMED:30303630
Factors related to exclusive breastfeeding in Thai adolescent mothers: Concept mapping approach. The advantages of exclusive breastfeeding for the first 6 months are well-known. Unfortunately, adolescent mothers have lower rates of breastfeeding exclusivity and shorter duration of breastfeeding. There is limited evidence regarding exclusive breastfeeding determinants in adolescent mothers. The purpose of this study was to obtain adolescent mother-generated factors related to exclusive breastfeeding at 6 months. A mixed-methods design was conducted through concept mapping. Thirty adolescent mothers aged 15-19 years who had wide range of exclusive breastfeeding experiences brainstormed about specific factors. They then sorted and rated the statements into key clusters. Finally, half of the participants were involved in the interpretation of the mapping results and the creation of pathway diagrams. Data were analysed by multivariate statistics in the Concept System Global MAX program. The results showed that the adolescent mothers brainstormed about 104 statements on the factors related to exclusive breastfeeding at 6 months. These factors can be categorized into the following six key clusters: (a) breastfeeding advantages; (b) facilitating factors and necessary skills; (c) promotion and support needed; (d) community and social influence; (e) internal and external barriers; and (f) key problems in families. The pattern matching provided understanding of how key clusters are important to successful breastfeeding through comparing three groups of exclusive breastfeeding durations. Finally, the relationships of the stated factors were drawn in pathway diagrams. Exclusive breastfeeding experiences among Thai adolescent mothers showed complexity involving multilevel influences of social systems. The promotion of optimal breastfeeding should recognize the influences of both personal and environmental factors.
Answer: Breastfeeding and complementary feeding practices of children of adolescent mothers can differ from those of adult mothers, but the differences are not always consistent across various studies and contexts.
In a study comparing children of adolescent mothers to those of adult mothers, it was found that the duration and pattern of breastfeeding were similar between the two groups. However, the complementary nutrition was slightly different, with children of adolescent mothers having a lower intake of meat and a higher intake of eggs (PUBMED:14513130).
In India, infant and young child feeding (IYCF) practices among adolescent mothers were found to be suboptimal except for breastfeeding. Factors such as maternal education, mode of delivery, frequency of antenatal care visits, geographical region, child's age, and household wealth were associated with breastfeeding practices, while similar factors were associated with complementary feeding practices (PUBMED:34371886).
In Bangladesh, optimal breastfeeding practices among adolescent mothers were limited, with only 42.2% initiating breastfeeding within the first hour of birth and 53% exclusively breastfeeding their infants. Factors such as parity, older infants, and postnatal check-up timing were associated with increased exclusive breastfeeding rates (PUBMED:33567634).
A study in the UK found that breastfeeding is constructed as deviant among adolescent mothers, which can influence their feeding practices (PUBMED:32721051).
Supportive interventions, including educational and counseling interventions, home visits, and peer support, have been shown to be effective in promoting breastfeeding initiation among adolescent mothers (PUBMED:36927075).
In the United States, parental cohabitation was associated with increased odds of breastfeeding initiation among adolescent mothers, suggesting that paternal support may play a role (PUBMED:34958231).
In Turkey, compliance with WHO recommendations on breastfeeding and complementary feeding varied among mothers, with a significant number exclusively breastfeeding for 6 months and starting complementary food around the recommended time (PUBMED:33282126).
An intervention targeting adolescent mothers and grandmothers in Brazil showed a positive impact on the duration of breastfeeding and timing of onset of complementary feeding, but it did not affect growth or prevalence of overweight at preschool age (PUBMED:26161657).
Instruction: High vitamin D deficiency rate in metabolic inpatients: is bariatric surgery planning found guilty?
Abstracts:
abstract_id: PUBMED:24825599
High vitamin D deficiency rate in metabolic inpatients: is bariatric surgery planning found guilty? Background: High rates of vitamin D insufficiency are usually found in obese patients, even before any malabsorptive bariatric surgery. It is not clear whether they lack vitamin D because of different food intake, different solar exposure, or different storage pathways or bioavailability in adipose tissue. To better understand vitamin D deficiency, we studied different categories of inpatients.
Methods: We collected clinical and biological data from 457 consecutive inpatients during a year: 217 nonobese diabetic patients, 159 obese nonsurgical diabetic patients, 46 obese surgical nondiabetic patients, and 35 obese surgical diabetic patients. Statistically significant differences between two mean 25-hydroxyvitamin D (25(OH)D) levels were defined at the 5% level using a Z-test.
Results: Vitamin D deficiency was found in 69% of the patients, while 24% had a normal level and 7% an optimal level. A significant difference was found between obese (25(OH)D = 40.3 nmol/l) and nonobese patients (25(OH)D = 46.8 nmol/l). Patients undergoing bariatric surgery were not different from the other obese patients.
Conclusion: No significant difference in 25(OH) vitamin D level could be demonstrated between obese patients before bariatric surgery and obese patients with no obesity surgery project. No difference was found between our Parisian obese population and a Spanish obese population, which benefits from a better solar exposure. Both findings suggest that obesity itself is the link with vitamin D deficiency, independently from behavioral differences.
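The group comparison in the methods above is a two-sample Z-test on mean 25(OH)D. A minimal sketch using the abstract's means and group sizes (obese n = 159 + 46 + 35 = 240, nonobese n = 217); the standard deviations are illustrative assumptions, since the abstract does not report them:

    import math
    from scipy import stats

    mean_obese, n_obese, sd_obese = 40.3, 240, 20.0           # nmol/l
    mean_nonobese, n_nonobese, sd_nonobese = 46.8, 217, 20.0  # nmol/l

    se = math.sqrt(sd_obese**2 / n_obese + sd_nonobese**2 / n_nonobese)
    z = (mean_obese - mean_nonobese) / se
    p = 2 * stats.norm.sf(abs(z))   # two-sided p-value
    print(f"z = {z:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")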
abstract_id: PUBMED:27789220
Pleiotropic protective effects of Vitamin D against high fat diet-induced metabolic syndrome in rats: One for all. Several lines of evidence point to the association of vitamin D deficiency with the different components of metabolic syndrome. Yet, the effect of vitamin D supplementation on metabolic syndrome is not clearly elucidated. Herein, we tested the hypothesis that administration of vitamin D, either alone or in combination with metformin, can improve metabolic and structural derangements associated with metabolic syndrome. Fifty Wistar rats were randomly assigned to serve either as normal controls (10 rats) or metabolic syndrome rats, fed a standard or a high fat diet (HFD), respectively. Metabolic syndrome rats were further assigned to receive either vehicle, metformin (100 mg/kg orally), vitamin D (6 ng/kg SC), or both, daily for 8 weeks. Body weight, blood pressure, serum glucose, insulin, insulin resistance, lipid profile, oxidative stress, serum uric acid and Ca+2 were assessed at the end of the study. Histopathological examination of hepatic, renal and cardiac tissues was also performed. Treatment with vitamin D was associated with a significant improvement of the key features of metabolic syndrome, namely obesity, hypertension and dyslipidaemia, with a neutral effect on Ca+2 level. When combined with metformin, most of the other metabolic abnormalities were ameliorated. Furthermore, a significant attenuation of the associated hepatic steatosis was observed with vitamin D as well as the vitamin D/metformin combination. In conclusion, vitamin D can improve hypertension, metabolic and structural abnormalities induced by HFD, and it provides additional benefits when combined with metformin. Therefore, vitamin D could represent a feasible therapeutic option for prevention of metabolic syndrome.
abstract_id: PUBMED:12448393
Severe metabolic bone disease as a long-term complication of obesity surgery. Background: Metabolic bone disease is a well-documented long-term complication of obesity surgery. It is often undiagnosed, or misdiagnosed, because of lack of physician and patient awareness. Abnormalities in calcium and vitamin D metabolism begin shortly after gastrointestinal bypass operations; however, clinical and biochemical evidence of metabolic bone disease may not be detected until many years later.
Case Report: A 57-year-old woman presented with severe hypocalcemia, vitamin D deficiency, and radiographic evidence of osteomalacia, 17 years after vertical banded gastroplasty and Roux-en-Y gastric bypass. Following these operations, she was diagnosed with a variety of medical disorders based on symptoms that, in retrospect, could have been attributed to metabolic bone disease. Additionally, she had serum metabolic abnormalities that were consistent with metabolic bone disease years before this presentation. Radiographic evidence of osteomalacia at the time of presentation suggests that her condition was advanced, and went undiagnosed for many years. These symptoms and laboratory and radiographic abnormalities most likely were a result of the long-term malabsorptive effects of gastric bypass, food intake restriction, or a combination of the two.
Conclusion: This case illustrates not only the importance of informed consent in patients undergoing obesity operations, but also the importance of adequate follow-up for patients who have undergone these procedures. A thorough history and physical examination, a high index of clinical suspicion, and careful long-term follow-up, with specific laboratory testing, are needed to detect early metabolic bone disease in these patients.
abstract_id: PUBMED:36014825
Vitamin D Deficiency in Patients with Morbid Obesity before and after Metabolic Bariatric Surgery. Background: Metabolic bariatric surgery (MBS) is the most effective treatment for severe obesity. Vitamin D deficiency is a common complication encountered both during preoperative workup and follow-up.
Aim: To estimate the prevalence of vitamin D deficiency in patients undergoing MBS.
Methods: The prospectively maintained database of our university MBS center was searched to assess the rate of preoperative and postoperative vitamin D deficiency or insufficiency in patients undergoing MBS over a one-year period.
Results: In total, 184 patients were included, 85 cases of Sleeve Gastrectomy (SG), 99 Gastric Bypass (GB; 91 One Anastomosis and 8 Roux-en-Y). Preoperative vitamin D deficiency and insufficiency were respectively found in 61% and 29% of patients, with no significant difference between SG and GB. After six months, 15% of patients had vitamin D deficiency, and 34% had vitamin D insufficiency. There was no significant difference in the rate of vitamin D deficiency or insufficiency and the percentage of total weight loss (%TWL) at 1, 3, and 6 postoperative months between SG and GB.
Conclusions: Preoperative vitamin D deficiency or insufficiency is common in MBS candidates. Regular follow-up with correct supplementation is recommended when undergoing MBS. Early postoperative values of vitamin D were comparable between SG and OAGB.
abstract_id: PUBMED:30730424
Metabolic and Endocrine Disorders in Pseudarthrosis. Study Design: Retrospective Cohort.
Objective: Establish 1-year patient-reported outcomes after spine surgery for symptomatic pseudarthrosis compared with other indications. In the subgroup of pseudarthrosis patients, describe preexisting metabolic and endocrine-related disorders, and identify any new diagnoses or treatments initiated by an endocrine specialist.
Summary Of Background: Despite surgical advances in recent decades, pseudarthrosis remains among the most common complications and indications for revision after fusion spine surgery. A better understanding of the outcomes after revision surgery for pseudarthrosis and risk factors for pseudarthrosis are needed.
Methods: Using data from our institutional spine registry, we retrospectively reviewed patients undergoing elective spine surgery between October 2010 and November 2016. Patients were stratified by surgical indication (pseudarthrosis vs. not pseudarthrosis), and 1-year outcomes for satisfaction, disability, quality of life, and pain were compared. In a descriptive subgroup analysis of pseudarthrosis patients, we identified preexisting endocrine-related disorders, frequency of endocrinology referral, and any new diagnoses and treatments initiated through the referral.
Results: Of 2721 patients included, 169 patients underwent surgery for pseudarthrosis. No significant difference was found in 1-year satisfaction between pseudarthrosis and nonpseudarthrosis groups (77.5% vs. 83.6%, respectively). A preexisting endocrine-related disorder was identified in 82% of pseudarthrosis patients. Endocrinology referral resulted in a new diagnosis or treatment modification in 58 of 59 patients referred. The most common diagnoses identified included osteoporosis, vitamin D deficiency, diabetes, hyperlipidemia, sex-hormone deficiency, and hypothyroidism. The most common treatments initiated through endocrinology were anabolic agents (teriparatide and abaloparatide), calcium, and vitamin D supplementation.
Conclusions: Patients undergoing revision spine surgery for pseudarthrosis had similar 1-year satisfaction rates to other surgical indications. In conjunction with a bone metabolic specialist, our descriptive analysis of endocrine-related disorders among patients with a pseudarthrosis can guide protocols for workup, indications for endocrine referral, and guide prospective studies in this field.
abstract_id: PUBMED:20960547
Metabolic management following bariatric surgery. Bariatric surgery is an effective treatment option for obesity. Commonly utilized procedures are either restrictive, malabsorptive, or both. Substantial weight loss can be achieved. Postoperatively, patients experience nutritional, metabolic, and hormonal changes that have important clinical implications. The postoperative diet should be advanced carefully, according to protocol. Micronutrient deficiencies such as vitamin C, vitamin A, and zinc deficiencies are common, especially following malabsorptive procedures. Bone metabolism is greatly affected, in part due to vitamin D deficiency, decreased calcium absorption, and secondary hyperparathyroidism. Diabetes improves acutely in malabsorptive procedures and in sequence with weight loss in restrictive procedures. Polycystic ovarian syndrome improves in nearly all women with this condition who undergo bariatric surgery. Testosterone levels in men also improve after surgery. Consideration of these nutritional, metabolic, and hormonal changes allows for optimal medical management following bariatric surgery.
abstract_id: PUBMED:35151587
Metabolic bone disease and fracture risk after gastric bypass and sleeve gastrectomy: comparative analysis of a multi-institutional research network. Background: Roux-en-Y gastric bypass (RYGB) and sleeve gastrectomy (SG) are the two most performed bariatric procedures. Multiple studies have investigated the metabolic bone complications after bariatric surgery, but there is a paucity of data comparing bone health after RYGB and SG.
Objectives: To compare the rates of major fractures and osteoporosis after Roux-en-Y gastric bypass and sleeve gastrectomy.
Setting: Data from the TriNetX multi-institutional research network, which includes data from multiple health care organizations in the USA, were analyzed at West Virginia University.
Methods: We conducted a retrospective cohort study using TriNetX, a federated multi-institutional research network. We identified patients who underwent RYGB or SG. Primary outcome was the rate of major fractures at 3 years after the procedure. Other outcomes included the rate of spine fracture, femur fracture, osteoporosis, and vitamin D deficiency at follow-up.
Results: In unmatched analysis, patients with SG were less likely to have major fractures or an osteoporosis diagnosis than RYGB patients at 3 years after the procedure (P < .05). After propensity-score matching, similar results were noted; patients with SG were less likely to have major fractures than RYGB patients at 3 years after procedure (2.85% versus 3.66%, risk ratio [RR]: .78, 95% confidence interval [CI]: .71-.85), and a lower rate of osteoporosis diagnosis was noted in the SG group. High rates of vitamin D deficiency were noted in both cohorts. The incidence of spine fractures was significantly lower in the SG group than in the RYGB group (.76% versus 1.18%, RR: .65, 95% CI: .54-.77). Similarly, the incidence of femur fracture was significantly lower after SG (RR: .62, 95% CI: .44-.88). Female sex, higher age, smoking history, and diabetes were independently associated with osteoporosis diagnosis during follow-up (all P values <.05).
Conclusion: Our analyses showed that RYGB is associated with a higher risk of osteoporosis, vitamin D deficiency, and osteoporotic fractures. Thus, in patients with a higher baseline osteoporotic risk, SG may be preferred option; however, further studies are needed.
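The risk ratios quoted in these results follow the usual Wald construction on the log scale. A small sketch with hypothetical matched-cohort counts chosen to reproduce the reported 2.85% vs. 3.66% major-fracture rates:

    import math

    def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
        """Risk ratio of group A vs group B with a Wald 95% CI on the log scale."""
        rr = (events_a / n_a) / (events_b / n_b)
        se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
        lo = math.exp(math.log(rr) - z * se)
        hi = math.exp(math.log(rr) + z * se)
        return rr, lo, hi

    # Hypothetical counts: 20,000 matched patients per arm.
    print(risk_ratio(570, 20000, 732, 20000))  # SG vs RYGB -> ~ (0.78, 0.70, 0.87)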
abstract_id: PUBMED:7609687
Metabolic bone diseases. Metabolic bone diseases often present in old age and some are more easily treatable than others. Osteoporosis is best managed by prevention, with maximisation of peak bone density and reduction of subsequent bone loss. Although hormone replacement therapy is most useful in prevention, it also has a role in established osteoporosis. Other treating agents include calcium, calcitriol, calcitonin and bisphosphonates. Osteomalacia in the elderly mainly results from vitamin D deficiency and supplementation should be considered in those at risk. The newer bisphosphonates show great promise in the treatment of Paget's disease, while surgery remains the only treatment option in primary hyperparathyroidism.
abstract_id: PUBMED:27662816
Associations of vitamin D with insulin resistance, obesity, type 2 diabetes, and metabolic syndrome. The aim of this study is to determine the relationships of vitamin D with diabetes, insulin resistance, obesity, and metabolic syndrome. Intracellular vitamin D receptors and the 1-α hydroxylase enzyme are distributed ubiquitously in all tissues, suggesting a multitude of functions of vitamin D. It plays an indirect but important role in carbohydrate and lipid metabolism, as reflected by its association with type 2 diabetes (T2D), metabolic syndrome, insulin secretion, insulin resistance, polycystic ovarian syndrome, and obesity. Peer-reviewed papers related to the topic were extracted using keywords from PubMed, Medline, and other research databases. Correlations of vitamin D with diabetes, insulin resistance and metabolic syndrome were examined for this evidence-based review. In addition to the well-studied musculoskeletal effects, vitamin D decreases insulin resistance, the severity of T2D, prediabetes, metabolic syndrome, inflammation, and autoimmunity. Vitamin D exerts autocrine and paracrine effects such as direct intra-cellular effects via its receptors and the local production of 1,25(OH)2D3, especially in muscle and pancreatic β-cells. It also regulates calcium homeostasis and calcium flux through cell membranes, and activation of a cascade of key enzymes and cofactors associated with metabolic pathways. Cross-sectional, observational, and ecological studies reported inverse correlations of vitamin D status with hyperglycemia and glycemic control in patients with T2D, with the rate of conversion of prediabetes to diabetes, and with obesity. However, no firm conclusions can be drawn from current studies, because (A) studies were underpowered; (B) few were designed for glycemic outcomes; (C) the minimum (or median) serum 25(OH)D levels achieved were not measured or reported; (D) most did not report the use of diabetes medications; (E) some trials used too little vitamin D; (F) others used too large, unphysiological and infrequent doses; and (G) there is a relative paucity of rigorous clinical data on the effects of vitamin D sufficiency on non-calcium endpoints. Although a large number of observational studies support improving T2D, insulin resistance, obesity, and metabolic syndrome with vitamin D adequacy, there is a lack of conclusive evidence from randomized controlled clinical trials that these disorders are prevented following optimization of serum levels of 25(OH)D. However, none of the currently conducted clinical studies would resolve these issues. Thus, new, specifically designed clinical studies need to be conducted in well-defined populations, after normalizing serum vitamin D levels in vitamin D-deficient prediabetic subjects, to test the hypothesis that hypovitaminosis D worsens these disorders and that correction would alleviate them.
abstract_id: PUBMED:33727603
Effects of vitamin D deficiency on the improvement of metabolic disorders in obese mice after vertical sleeve gastrectomy. Vertical sleeve gastrectomy (VSG) is one of the most commonly performed clinical bariatric surgeries for the remission of obesity and diabetes. Its effects include weight loss, improved insulin resistance, and the improvement of hepatic steatosis. Epidemiologic studies demonstrated that vitamin D deficiency (VDD) is associated with many diseases, including obesity. We aimed to explore the role of vitamin D in metabolic disorders in patients with obesity after VSG. We established a murine model of diet-induced obesity + VDD, and we performed VSGs to investigate VDD's effects on the improvement of metabolic disorders present in post-VSG obese mice. We observed that the vitamin D3 concentration in HFD mice was fourfold that in HFD + VDD mice. In the post-VSG obese mice, VDD attenuated the improvements in hepatic steatosis, insulin resistance, and intestinal inflammation and permeability, as well as the maintenance of weight loss, the reduction of fat mass, and the restoration of intestinal flora. Our results suggest that in post-VSG obese mice, maintaining a normal level of vitamin D plays an important role in maintaining the improvement of metabolic disorders.
Answer: The high rate of vitamin D deficiency in metabolic inpatients cannot be solely attributed to bariatric surgery planning. Studies have shown that high rates of vitamin D insufficiency are commonly found in obese patients even before any malabsorptive bariatric surgery is performed (PUBMED:24825599). This suggests that the deficiency is linked to obesity itself, rather than the surgical intervention, and is independent of behavioral differences such as food intake, solar exposure, or storage pathways in adipose tissue.
Furthermore, vitamin D deficiency is a common complication encountered both during preoperative workup and follow-up of patients undergoing metabolic bariatric surgery (MBS), with a significant prevalence of deficiency or insufficiency noted in patients before surgery (PUBMED:36014825). This indicates that the deficiency is present prior to the surgical intervention.
Additionally, metabolic bone disease, which includes abnormalities in calcium and vitamin D metabolism, is a well-documented long-term complication of obesity surgery, but it often goes undiagnosed or misdiagnosed due to lack of awareness (PUBMED:12448393). The onset of metabolic bone disease can begin shortly after gastrointestinal bypass operations, but clinical and biochemical evidence may not be detected until many years later, suggesting that the surgery can exacerbate pre-existing deficiencies.
In conclusion, while bariatric surgery, particularly malabsorptive procedures, can lead to further vitamin D deficiency and metabolic bone disease due to decreased absorption (PUBMED:20960547), the high rate of vitamin D deficiency in metabolic inpatients is not solely a result of bariatric surgery planning. Obesity itself is a significant factor contributing to the deficiency, and the issue is compounded by the surgery. Therefore, regular follow-up and correct supplementation of vitamin D are recommended for patients undergoing MBS to manage pre-existing deficiencies and prevent new ones from developing postoperatively (PUBMED:36014825). |
Instruction: Criteria for electrocardiographic diagnosis of vagotonia. Is there a consensus in the opinion of specialists?
Abstracts:
abstract_id: PUBMED:7611914
Criteria for electrocardiographic diagnosis of vagotonia. Is there a consensus in the opinion of specialists? Purpose: To identify the most important criteria for the ECG diagnosis of vagotonia in the opinion of cardiologists.
Methods: A written questionnaire was administered to 40 cardiologists attending the 9th Brazilian Congress of Cardiac Arrhythmias (S. José do Rio Preto, SP, 1992). The sample represented approximately 15% of all participants and was intentionally biased to include 70% of the invited speakers and free-communication presenters, and to exclude non-medical professionals, aiming to enhance the validity of the answers. The questionnaire was divided into two parts: in the first, respondents answered spontaneously, without knowledge of what followed; in the second, a list of ECG criteria drawn from the literature was presented to the respondent in random order. In both parts, the specialists were asked to rank each criterion in order of importance.
Results: In the 1st part, 35 different criteria were cited, but only 3 were cited by more than 25% of the sample: sinus bradycardia (95%), tall and peaked T waves (30%) and early repolarization (27.5%). In the 2nd part, the best-ranked criterion was sinus bradycardia, followed by J-point elevation and ST-segment elevation.
Conclusion: Among cardiologists with an interest in electrocardiography and cardiac arrhythmias, apart from sinus bradycardia, there is no clear consensus concerning the group of criteria used to identify vagotonia on the standard 12-lead ECG. Further research is necessary to objectively validate the main criteria identified here.
abstract_id: PUBMED:22920782
Current electrocardiographic criteria for diagnosis of Brugada pattern: a consensus report. Brugada syndrome is an inherited heart disease without structural abnormalities that is thought to arise as a result of accelerated inactivation of Na channels and predominance of transient outward K current (I(to)), generating a voltage gradient in the right ventricular layers. This gradient triggers ventricular tachycardia/ventricular fibrillation, possibly through a phase 2 reentrant mechanism. The Brugada electrocardiographic (ECG) pattern, which can be dynamic and is sometimes concealed, being recorded only in the upper precordial leads, is the hallmark of Brugada syndrome. Because of limitations of previous consensus documents describing the Brugada ECG pattern, especially in relation to the differences between types 2 and 3, a new consensus report establishing a set of new ECG criteria with higher accuracy has been considered necessary. In the new ECG criteria, only 2 ECG patterns are considered: pattern 1, identical to the classic type 1 of earlier consensus documents (coved pattern), and pattern 2, which joins types 2 and 3 of earlier consensus documents (saddle-back pattern). This consensus document describes the most important characteristics of the 2 patterns and also the key points of differential diagnosis with conditions that lead to a Brugada-like pattern in the right precordial leads, especially right bundle-branch block, athletes, pectus excavatum, and arrhythmogenic right ventricular dysplasia/cardiomyopathy. Also discussed is the concept of Brugada phenocopies: ECG patterns characteristic of the Brugada pattern that may appear and disappear in relation to multiple causes but are not related to Brugada syndrome.
abstract_id: PUBMED:29507171
Consensus and clustering in opinion formation on networks. Ideas that challenge the status quo either evaporate or dominate. The study of opinion dynamics in the socio-physics literature treats space as uniform and considers individuals in an isolated community, using ordinary differential equation (ODE) models. We extend these ODE models to include multiple communities and their interactions. These extended ODE models can be thought of as ODEs on directed graphs. We study these models in detail to determine conditions under which there will be consensus and pluralism within the system. Most of the consensus/pluralism analysis is done for the case of one and two cities. However, we show numerically for the case of a symmetric cycle graph that an elementary bifurcation analysis provides insight into the phenomenon of clustering. Moreover, for the case of a cycle graph with a hub, we discuss how having a sufficient proportion of zealots in the hub leads to the entire network sharing the opinion of the zealots. This article is part of the theme issue 'Stability of nonlinear waves and patterns and related topics'.
abstract_id: PUBMED:25288820
Boltzmann-type control of opinion consensus through leaders. The study of the formation and dynamics of opinions leading to so-called opinion consensus is one of the most important areas in the mathematical modelling of social sciences. Following the Boltzmann-type control approach recently introduced by the first two authors, we consider a group of opinion leaders who modify their strategy according to an objective functional with the aim of achieving opinion consensus. The main feature of the Boltzmann-type control is that, owing to an instantaneous binary control formulation, it permits the minimization of the cost functional to be embedded into the microscopic leaders' interactions of the corresponding Boltzmann equation. The related Fokker-Planck asymptotic limits are also derived, which allow one to give explicit expressions for stationary solutions. The results demonstrate the validity of the Boltzmann-type control approach and the capability of the leaders' control to strategically lead the followers' opinion.
abstract_id: PUBMED:27306348
Determining the Criteria and Their Weights for Medical Schools' Ranking: A National Consensus. Delphi, as a consensus development technique, enables anonymous, systematic refinement of expert opinion with the aim of arriving at a combined or consensual position. In this study, we determined the criteria and their weights for Iranian medical schools' ranking through a Delphi process. An expert committee devised 13 proposed criteria with 32 indicators and their weights, arranged hierarchically in the form of a tree diagram. We used the Delphi technique to reach a consensus on these criteria and weights among the deans of 38 public Iranian medical schools. For this purpose, we devised and sent a questionnaire to the schools and asked them to suggest or correct the criteria and their weights. We repeated this process over two rounds until all the schools reached an acceptable consensus. All schools reached a consensus on a set of 13 criteria and 30 indicators and their weights in the three main contexts of education, research, and facilities and equipment, which were used for medical schools' ranking. Using the Delphi technique to devise the criteria and their weights in evaluation processes such as ranking makes the results more acceptable among universities.
abstract_id: PUBMED:35528864
Research on improvement of DPoS consensus mechanism in collaborative governance of network public opinion. With the increasingly complex social situation, the problems of traditional online public opinion governance are increasingly serious. In particular, the problems of transmission efficiency, public opinion data management, and user information security urgently need to be addressed. Here, we design a practical and comprehensive blockchain-based infrastructure framework for the collaborative governance of network public opinion. To meet the consensus-mechanism requirements of this framework, we improve on the defects of the traditional DPoS consensus algorithm. Considering time-dynamic factors in the process of reaching consensus, the paper proposes a reputation-based voting model. Furthermore, the paper proposes a rewards-and-punishments incentive mechanism and also designs a new method of counting votes. The simulation results show that, after the improvement of the algorithm, the enthusiasm of node participation increased significantly, the proportion of error nodes was significantly reduced, and operating efficiency was significantly improved. This shows that applying the improved consensus algorithm to public opinion governance can not only improve the security of the system by reducing the spread of false public opinion but also improve the efficiency of information processing, so it can be well applied to information sharing and public opinion governance scenarios.
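As a rough illustration of how reputation can modulate voting power in a DPoS-style scheme, consider the minimal Python sketch below. It is not the paper's actual algorithm: the node names, stake values, and reputation scores are hypothetical, and the time-dynamic reputation updates and reward/punishment mechanism described in the abstract are omitted.

    # Hypothetical reputation-weighted delegate election for a DPoS-style
    # scheme; illustrative only, not the paper's voting model.
    stake = {"A": 100, "B": 60, "C": 40, "D": 80}          # tokens held per node
    reputation = {"A": 0.9, "B": 0.5, "C": 0.8, "D": 0.2}  # in [0, 1], assumed updated over time

    def voting_power(node):
        # Weight raw stake by reputation so that unreliable nodes lose influence.
        return stake[node] * reputation[node]

    # The top-ranked nodes become block-producing delegates.
    delegates = sorted(stake, key=voting_power, reverse=True)[:2]
    print(delegates)  # ['A', 'C']: A scores 90.0 and C scores 32.0, outranking B's 30.0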
abstract_id: PUBMED:32117540
Definition, Diagnosis, Treatment, and Prognosis of Frozen Shoulder: A Consensus Survey of Shoulder Specialists. Background: The objective of this study was to identify a consensus on definition, diagnosis, treatment, and prognosis of frozen shoulder (FS) among shoulder specialists.
Methods: A questionnaire composed of 18 questions about FS-definition, classification, utilization of diagnostic modalities, the propriety of treatment at each stage, and prognosis-was sent to 95 shoulder specialists in Korea. Most questions (15 questions) required an answer on a 5-point analog scale (1, strongly disagree; 5, strongly agree); three questions about the propriety of treatment were binary.
Results: We received 71 responses (74.7%). Of the 71 respondents, 84.5% agreed with the proposed definition of FS, and 88.8% agreed that FS should be divided into primary and secondary types according to the proposed definition. Only 43.7% of the respondents agreed that FS in patients with systemic disease should be classified as secondary FS. For the diagnosis of FS, 71.9% agreed that plain radiography should be used and 64.8% agreed ultrasonography should be used. There was a high consensus on proper treatment of FS: 97.2% agreed on education, 94.4%, on the use of nonsteroidal anti-inflammatory drugs; 76.1%, on intra-articular steroid injections; and 97.2%, on stretching exercise. Among all respondents, 22.5% answered that more than 10% of the patients with FS do not respond to conservative treatment.
Conclusions: The survey revealed a general consensus among shoulder specialists on the definition and treatment of FS. However, the classification of FS was found to be controversial.
abstract_id: PUBMED:35620005
Opinion dynamics in social networks under competition: the role of influencing factors in consensus-reaching. The rapid development of information technology and social media has provided easy access to vast data on individual preferences and social interactions. Despite a series of problems, such as privacy disclosure and data sensitivity, it cannot be denied that this access also provides beneficial opportunities and convenience for campaigns involving opinion control (e.g. marketing campaigns and political elections). The profitability of opinion and the finiteness of individual attention have already spawned extensive competition for individual preferences on social networks. It is therefore necessary to investigate opinion dynamics over social networks in a competitive environment. To this end, this paper develops a novel social network DeGroot model based on a competition game (DGCG) to characterize opinion evolution in competitive opinion dynamics. Social interactions based on trust relationships are captured in the DGCG model. From the model, we then obtain equilibrium results in a stable state of opinion evolution. We also analyse what role relevant factors play in the final consensus and competitive outcomes, including the resource ratio of the two contestants, initial opinions, self-confidence and network structure. Theoretical analyses and numerical simulations show that these factors can significantly sway the consensus and even reverse competition outcomes.
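For readers unfamiliar with the underlying DeGroot framework, the classic (non-competitive) update is simple: each agent repeatedly replaces its opinion with a trust-weighted average of its neighbours' opinions. The Python sketch below shows this baseline with an invented 3-agent trust matrix; the DGCG model in the abstract extends this with competing contestants, which is not reproduced here.

    import numpy as np

    # Classic DeGroot update x <- W x on a row-stochastic trust matrix W.
    # The matrix and initial opinions are invented for illustration.
    W = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.4, 0.5]])   # each row sums to 1
    x = np.array([0.9, 0.2, 0.5])     # initial opinions in [0, 1]

    for _ in range(200):
        x = W @ x                     # every agent averages over trusted neighbours

    # For a strongly connected, aperiodic W, all entries converge to a single
    # consensus value weighted by each agent's network influence.
    print(x)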
abstract_id: PUBMED:34371527
Argentine Consensus on the Diagnosis and Treatment of Hereditary Angioedema. Objectives: Hereditary angioedema (HAE) is a rare disease. In recent years, many studies and advances have been developed with the aim of better understanding its pathophysiology and optimizing patient management. Several international working groups have attempted to clarify and standardize the care of HAE, communicated as guidelines and consensus recommendations. We considered it necessary to provide recommendations for the diagnosis and treatment of patients with HAE in Argentina.
Methods: A group of allergy and immunology specialists from Argentina developed the intended consensus using an online survey methodology as well as face-to-face meetings.
Results: Recommendations were established based on published evidence and the expert opinion. The consensus focused on diagnosis, acute management of attacks, short and long-term prophylaxis, special situations (pediatrics and pregnancy) and disease management considering the health care system in Argentina.
Conclusion: The recommendations established in these consensus guidelines will optimize the management of patients with HAE in Argentina.
abstract_id: PUBMED:28400921
Validity of electrocardiographic criteria for increased left ventricular mass in young patients in the general population. Aim: To investigate validity of electrocardiographic (ECG) criteria for left ventricular hypertrophy (LVH) in young adults.
Methods: Echocardiograms showing LVH and concomitant electrocardiograms were retrospectively collected in patients 18 to 39 years old. A control group of patients without LVH was collected. Using echocardiography as the gold standard, electrocardiograms were analyzed using common voltage criteria.
Results: Study included 100 subjects (52% male, mean age = 28 ± 6.8 years, 96% Hispanic or African-American) with 50% LVH prevalence. Sensitivity and specificity for Sokolow-Lyon criteria were 24% (95%CI: 13.5%-38.4%) and 88% (95%CI: 74.9%-95%). For Cornell criteria, sensitivity was 32% (95%CI: 19.9%-46.8%) and specificity 98% (95%CI: 87.9%-99.8%). For R in aVL criteria, sensitivity was 12% (95%CI: 4.9%-25%) and specificity 100% (95%CI: 91.1%-100%).
Conclusion: In young adults, common ECG voltage criteria have low sensitivities and high specificities, similar to other age groups. The low sensitivities preclude these ECG criteria from serving as effective screening tests.
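To make the reported figures concrete: with the stated 50% LVH prevalence among 100 subjects, the 24% sensitivity of the Sokolow-Lyon criterion corresponds to 12 true positives out of 50, and the 88% specificity to 44 true negatives out of 50. The Python sketch below recomputes these proportions and approximate 95% confidence intervals; the counts are back-calculated from the reported percentages, and the Wilson interval is one plausible choice, since the abstract does not state which interval method the authors used.

    import math

    def wilson_ci(k, n, z=1.96):
        # Wilson score 95% interval for a binomial proportion k/n.
        p = k / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - half, centre + half

    tp, fn = 12, 38   # Sokolow-Lyon, LVH present: 24% of 50 test positive
    tn, fp = 44, 6    # Sokolow-Lyon, LVH absent: 88% of 50 test negative

    print(tp / (tp + fn), wilson_ci(tp, tp + fn))  # 0.24, roughly (0.14, 0.37)
    print(tn / (tn + fp), wilson_ci(tn, tn + fp))  # 0.88, roughly (0.76, 0.94)

These intervals line up closely with the (13.5%-38.4%) and (74.9%-95%) ranges reported in the abstract.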
Answer: Based on the information provided in the abstracts, there is no clear consensus among specialists regarding the criteria for the electrocardiographic diagnosis of vagotonia. A study that surveyed 40 cardiologists at the 9th Brazilian Congress of Cardiac Arrhythmias found that while sinus bradycardia was widely recognized as a criterion for vagotonia, cited by 95% of the sample, there was no agreement on other criteria. Only tall and peaked T waves and early repolarization were mentioned by more than 25% of the respondents, and even then, they were not as consistently recognized as sinus bradycardia (PUBMED:7611914). This indicates that apart from sinus bradycardia, there is no strong consensus on a group of criteria to identify vagotonia on a standard 12-lead ECG, and further research is necessary to validate the main criteria identified in the study. |
Instruction: Can illness perceptions and coping predict psychological distress amongst allergy sufferers?
Abstracts:
abstract_id: PUBMED:17535490
Can illness perceptions and coping predict psychological distress amongst allergy sufferers? Objective: The aim of the present study was to measure the extent to which illness perceptions and coping strategies are associated with the levels of psychological distress amongst allergy sufferers.
Design And Method: One hundred and fifty-six allergy sufferers (all members of Allergy U.K.) completed a postal survey consisting of the Revised Illness Perception Questionnaire (IPQ-R) and the COPE. Psychological distress was measured using the General Health Questionnaire (GHQ-28) and the Perceived Stress Scale (PSS).
Results: Multiple regression analyses indicated that illness perceptions explained between 6 and 26% of the variance on measures of psychological distress; coping strategies explained between 12 and 25%. A strong illness identity and emotional representations of the allergy were associated with higher levels of psychological distress, as were less adaptive coping strategies such as focusing on and venting of emotions. Strong personal control beliefs were associated with lower levels of distress, as were adaptive coping strategies such as positive reinterpretation and growth. Coping partially mediated the link between illness perceptions and the outcome; however, illness identity, emotional representations and personal control retained an independent significant association with psychological distress.
Conclusion: The findings support a role for illness perceptions and coping in explaining levels of psychological distress amongst allergy sufferers. This has implications for targeted health interventions aimed at reducing the strength of illness identity and emotional representations and increasing a sense of control and the use of more adaptive coping strategies.
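The partial mediation reported above can be illustrated with a classic three-regression (Baron-and-Kenny-style) check. The Python sketch below uses simulated data with invented effect sizes, not the survey's data; it only shows the mechanics by which a direct effect (c') that shrinks relative to the total effect (c) while remaining nonzero indicates partial mediation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 156                                    # matches the survey's sample size
    identity = rng.normal(size=n)              # X: illness identity (simulated)
    coping = 0.5 * identity + rng.normal(size=n)                   # M: maladaptive coping
    distress = 0.3 * identity + 0.4 * coping + rng.normal(size=n)  # Y: distress

    def ols(y, *cols):
        # Ordinary least squares with an intercept; returns the coefficient vector.
        X = np.column_stack([np.ones(len(y)), *cols])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    c = ols(distress, identity)[1]                    # total effect of X on Y
    a = ols(coping, identity)[1]                      # X -> M path
    b, c_prime = ols(distress, coping, identity)[1:]  # M -> Y path and direct X -> Y

    # Partial mediation: c' < c but c' stays clearly above zero; a*b is the indirect path.
    print(round(c, 2), round(c_prime, 2), round(a * b, 2))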
abstract_id: PUBMED:32141262
A Prospective Observation of Psychological Distress in Patients With Anaphylaxis. Purpose: Anaphylaxis is an immediate allergic reaction characterized by potentially life-threatening, severe, systemic manifestations. While studies have evaluated links between serious illness and posttraumatic stress disorder (PTSD), few have investigated PTSD after anaphylaxis in adults. We sought to investigate the psychosocial burden of recent anaphylaxis in Korean adults.
Methods: A total of 203 (mean age of 44 years, 120 females) patients with anaphylaxis were recruited from 15 university hospitals in Korea. Questionnaires, including the Impact of Event Scale-Revised-Korean version (IES-R-K), the Korean version of the Beck Anxiety Inventory (K-BAI), and the Korean version of the Beck Depression Inventory (K-BDI), were administered. Demographic characteristics, causes and clinical features of anaphylaxis, and serum inflammatory markers, including tryptase, platelet-activating factor, interleukin-6, tumor necrosis factor-α, and C-reactive protein, were evaluated.
Results: PTSD (IES-R-K ≥ 25) was noted in 84 (41.4%) patients with anaphylaxis. Of them, 56.0% had severe PTSD (IES-R-K ≥ 40). Additionally, 23.2% and 28.1% of the patients had anxiety (K-BAI ≥ 22) and depression (K-BDI ≥ 17), respectively. IES-R-K was significantly correlated with both K-BAI (r = 0.609, p < 0.0001) and K-BDI (r = 0.550, p < 0.0001). Among the inflammatory mediators, tryptase levels were lower in patients exhibiting PTSD; meanwhile, platelet-activating factor levels were lower in patients exhibiting anxiety and depression while recovering from anaphylaxis. In multivariate analysis, K-BAI and K-BDI were identified as major predictive variables of PTSD in patients with anaphylaxis.
Conclusions: In patients with anaphylaxis, we found a remarkably high prevalence of PTSD and associated psychological distresses, including anxiety and depression. Physicians ought to be aware of the potential for psychological distress in anaphylactic patients and to consider psychological evaluation.
abstract_id: PUBMED:20097803
Psychologic distress and maladaptive coping styles in patients with severe vs moderate asthma. Background: Though several biologic factors have been suggested to play a role in the development and persistence of severe asthma, those associated with psychologic factors remain poorly understood. This study assessed levels of psychologic distress and a range of disease-relevant emotional and behavioral coping styles in patients with severe vs moderate asthma.
Methods: Eighty-four patients (50% women, mean [M] age 46 years) with severe (n = 42) and moderate (n = 42) asthma were recruited. Severe asthma was defined according to American Thoracic Society criteria. Patients underwent demographic and medical history interviews and pulmonary function and allergy testing. Patients also completed questionnaires measuring asthma symptoms and the Millon Behavioral Medicine Diagnostic Inventory, which assesses psychologic distress and emotional/behavioral coping factors that influence disease progression and treatment.
Results: After adjustment for covariates and application of a correction factor that reduced the significance level to P < .01, patients with severe vs moderate asthma reported experiencing more psychologic distress, including worse cognitive dysfunction (F = 6.72, P < .01) and marginally worse anxiety-tension (F = 4.02, P < .05). They also reported worse emotional coping (higher illness apprehension [F = 9.57, P < .01], pain sensitivity [F = 10.65, P < .01], future pessimism [F = 8.53, P < .01], and interventional fragility [F = 7.18, P < .01]) and marginally worse behavioral coping (more functional deficits [F = 5.48, P < .05] and problematic compliance [F = 4.32, P < .05]).
Conclusions: Patients with severe asthma have more psychologic distress and difficulty coping with their disease, both emotionally and behaviorally, relative to patients with moderate asthma. Future treatment studies should focus on helping patients with severe asthma manage distress and cope more effectively with their illness, which may improve outcomes in these high-risk patients.
abstract_id: PUBMED:18239264
Psychological distress and associated risk factors in bronchial asthma patients in Kuwait. Context: Recent literature shows a high prevalence of psychological distress in bronchial asthma.
Aim: To find the extent of psychological distress and associated risk factors in bronchial asthma patients in Kuwait.
Design: Case-control study.
Materials And Methods: In a study at Kuwait's allergy center, 102 patients aged 20-60 years with asthma alone (67%) or asthma with allergic rhinitis (33%) completed a self-administered questionnaire (the WHO-Five Well-being Index). A score below 13 was considered to indicate psychological distress, and a score of 13 and above, normal well-being. An equal number of controls, matched for age, gender and nationality, were also enrolled.
Statistical Analysis: The data were analyzed using SPSS software, and proportions were tested with Chi-square or Fisher's test. Odds ratio (OR) with 95% confidence interval (CI) was calculated to quantify the risk factors.
Results: A significantly larger proportion of patients (69%) were found to be psychologically distressed, compared to 24% among controls (P<0.001, OR=7.5; 95% CI: 4-14). As many as 83.3% of cases in the younger (20-30 years) age group were distressed (P<0.044), compared to other age groups. A declining trend in the proportion of distressed cases with increasing age was observed (P<0.013). Higher proportions of distressed females (73.8%) and Kuwaitis (71.6%) were observed, both among cases and controls.
Conclusions: We found a high rate of poor well-being and psychological distress in patients suffering from asthma. Young patients and those with relatively short duration of illness, as well as asthmatic females, are more vulnerable to distress and need further psychological evaluation.
abstract_id: PUBMED:10127063
Consumer illness careers: an investigation of allergy sufferers and their universe of medical choices. The concept of the consumer illness career, with a focus on allergies, is introduced and developed by the authors in terms of a trajectory of five stages over time, the related product-service unities or constellations--including health care treatments and remedies--and various situational and trait factors that influence the course of a consumer's response to his or her disease. Next, they investigate the career's holistic nature and thematic content in an in-depth study of allergy sufferers. The study indicates that allergy sufferers engage in a wide range of strategic behaviors and choices associated with coping with their allergies, much of which can be captured in terms of patterned themes. Finally, the authors offer research, managerial, and public policy implications.
abstract_id: PUBMED:26118055
ROLE OF PSYCHOLOGICAL FACTORS IN THE ETIOLOGY OF ASTHMA IN CHILDREN. The influence of psychological factors on the onset and course of bronchial asthma in children was studied. A directly proportional relationship was found between the severity of the disease course and the combined influence of psychological factors against a background of biological predisposition to allergy. An additional stress factor was identified: parents' non-acceptance of the child's displays of emotion, which complicates the course of the disease.
abstract_id: PUBMED:24565772
The Mastocytosis Society survey on mast cell disorders: patient experiences and perceptions. Background: Mast cell diseases include mastocytosis and mast cell activation syndromes, some of which have been shown to involve clonal defects in mast cells that result in abnormal cellular proliferation or activation. Numerous clinical studies of mastocytosis have been published, but no population-based comprehensive surveys of patients in the United States have been identified. Few mast cell disease specialty centers exist in the United States, and awareness of these mast cell disorders is limited among nonspecialists. Accordingly, information concerning the experiences of the overall estimated population of these patients has been lacking.
Objective: To identify the experiences and perceptions of patients with mastocytosis, mast cell activation syndromes, and related disorders, The Mastocytosis Society (TMS), a US-based patient advocacy, research, and education organization, conducted a survey of its members and other people known or suspected to be part of this patient population.
Methods: A Web-based survey was publicized through clinics that treat these patients and through TMS's newsletter, Web site, and online blogs. Both online and paper copies of the questionnaire were provided, together with required statements of consent.
Results: The first results are presented for 420 patients. These results include demographics, diagnoses, symptoms, allergies, provoking factors of mast cell symptoms, and disease impact.
Conclusion: Patients with mastocytosis and mast cell activation syndromes have provided clinical specialists, collaborators, and other patients with information to enable them to explore and deepen their understanding of the experiences and perceptions of people coping with these disorders.
abstract_id: PUBMED:19666668
The Subjective Health Complaints Inventory: a useful instrument to identify various aspects of health and ability to cope in older people? Aims: The aims were to investigate the factor structure of the Subjective Health Complaints Inventory (SHC) in a population of 75 years and above and to identify whether somatic, psychosocial, and coping factors were associated with the SHC factors.
Methods: Data from 242 elderly persons were analyzed. The measures were: the SHC Inventory, Sense of Coherence, Social Provision Scale, Self-Rated Health, General Health Questionnaire, Clinical Dementia Rating, Reported Illness, Barthel ADL Index, sex, age, and education.
Results: The factor analysis resulted in four subgroups: musculoskeletal pain (15% of variance), gastrointestinal problems (12% of variance), respiratory/allergy complaints (11% of variance), and pseudoneurology (11% of variance). The occurrence of complaints was 76% for musculoskeletal complaints, 51% for gastrointestinal complaints, 30% for flu, 43% for allergy, and 93% for pseudoneurology. Self-rated health and reported illness were significantly associated with musculoskeletal complaints (15% of variance), impairment in activities of daily living (ADL) with gastrointestinal complaints (3% of variance), and finally sense of coherence, self-rated health, and psychological distress were associated with pseudoneurology (32% of variance). No variables were associated with respiratory/allergy complaints.
Conclusions: This study supports the stability of the SHC's factor structure. The low occurrence of health complaints could possibly be due to survival effects, or to old people comparing themselves with aged peers to a greater extent than younger people do. The subscales focusing on somatic symptoms were explained by reported illnesses and functional impairments to a limited degree only. The pseudoneurology subscale score was associated with psychological measures, particularly the ability to cope.
abstract_id: PUBMED:35996871
Perceptions of patient disease burden and management approaches in systemic mastocytosis: Results of the TouchStone Healthcare Provider Survey. Background: Systemic mastocytosis (SM) is a rare clonal neoplasm driven by the KIT D816V mutation and has a broad range of debilitating symptoms. In this study, the authors evaluated SM disease perceptions and management strategies among US health care providers (HCPs).
Methods: Hematologist/oncologist (H/O) HCPs and allergist/immunologist (A/I) HCPs who were treating four or more patients with SM completed an online, 51-item TouchStone HCP Survey, which queried provider characteristics, perceptions of disease burden, and current management. Descriptive analyses by specialty and SM subtype were performed.
Results: Of 304 HCPs contacted, 111 (37%) met eligibility criteria, including 51% A/I specialists and 49% H/O specialists. On average, the HCPs had 14 years of practice experience and cared for 20 patients with SM. A/I HCPs saw more patients with nonadvanced SM (78%) compared with H/O HCPs, who saw similar proportions of patients with nonadvanced SM (54%) and advanced SM (46%). HCPs reported testing 75% of patients for the KIT D816V mutation and found an estimated prevalence of 47%. On average, HCPs estimated 8 months between symptom onset and SM diagnosis. HCPs reported that 62% of patients with indolent SM felt depressed or discouraged because of symptoms. In terms of treatment goals for SM, both types of specialists prioritized symptom improvement for nonadvanced SM and improved survival for advanced SM while also prioritizing improving patient quality of life.
Conclusions: Both A/I and H/O specialists highlighted unmet needs for patients with SM. The HCPs surveyed reported a lower rate of KIT D816V mutations and a perceived shorter time between symptom onset and SM diagnosis compared with published estimates.
Lay Summary: Specialists treating systemic mastocytosis (SM) completed a 51-item questionnaire about their clinical practices and perceptions of disease impact. The study included 111 hematology, oncology, allergy, and immunology physicians. Physicians reported that most patients had nonadvanced disease, yet SM symptoms significantly disrupted their patients' lives. Physicians estimated that SM is diagnosed within months of symptom onset, in contrast with published reports of years' long delays reported by patients with SM. This study identified unmet needs that can inform educational and patient management priorities in this rare disease.
abstract_id: PUBMED:16033744
The economic and quality of life impact of seasonal allergic conjunctivitis in a Spanish setting. Introduction: Seasonal allergic conjunctivitis (SAC) is a highly prevalent condition that exacts a range of costs from its sufferers. The aim of this study was to examine quality of life (QoL) and economic consequences of SAC amongst private health care patients in Spain.
Methods: 201 sufferers of SAC and 200 controls were recruited from four private eye clinics and one public hospital in five Spanish cities: Alicante, Madrid, Albacete, Las Palmas de Gran Canarias and Valladolid. Participants were between 10 and 80 years of age and Spanish speaking. All potential participants were asked selected questions and sorted into one of the two groups or excluded. Sufferers were administered a set of four questionnaires by researchers consisting of the EQ-5D Health Questionnaire, the National Eye Institute (US) Visual Functioning Questionnaire 25 (VFQ-25), the Rhinoconjunctivitis Quality of Life Questionnaire (RQLQ) and a Health Economic and Demographic Questionnaire (HEDQ). Control participants self-administered the full EQ-5D and VFQ-25 and abbreviated versions of the RQLQ and HEDQ.
Results: The groups were comparable in terms of age and sex, but sufferers had a significantly higher hourly income than controls (p = 0.02). Sufferers exhibited a significantly higher incidence of asthma, nasal symptoms, perennial allergic conjunctivitis, food allergies and other allergies (p < 0.01 for all comorbidities). Sufferers returned significantly lower quality of life scores than controls in all EQ-5D domains other than mobility (p = 0.14). SAC VFQ-25 scores were significantly lower than in controls in the overall vision (p < 0.01), distance vision (p < 0.01), ocular pain (p < 0.01), mental health (p = 0.01), role limitations (p < 0.01), and dependency (p = 0.01) domains. Mean scores for the activity domain of the RQLQ were greater in the sufferer group than in the control group (p < 0.01). The total mean per-sufferer cost of SAC amongst our sample of private patients was 348.50 Euros.
Discussion: Sufferers experienced QoL reductions in general health and specific aspects of vision as a result of their SAC. In relative terms, the per-capita economic cost of the condition was higher than in a previous public health care study. This was ascribed to sufferers' willingness to pay for symptom alleviation. The unexpectedly high expenditure of the proportion of SAC sufferers who receive private health care serves to highlight the importance of SAC as a costly condition. It also illustrates the need to account for both private and public health care modalities when attempting to ascribe a total cost to a medical condition.
Answer: Yes, illness perceptions and coping can predict psychological distress amongst allergy sufferers. The study by PUBMED:17535490 found that illness perceptions explained between 6 and 26% of variance on measures of psychological distress, and coping strategies explained between 12 and 25%. Strong illness identity and emotional representations of the allergy were associated with higher levels of psychological distress, while strong personal control beliefs were associated with lower levels of distress. Less adaptive coping strategies, such as focusing on and venting of emotions, were linked to higher distress, whereas adaptive coping strategies like positive reinterpretation and growth were associated with lower distress. Coping partially mediated the link between illness perceptions and psychological distress, but certain factors like illness identity, emotional representations, and personal control retained an independent significant association with psychological distress. This suggests that targeting interventions to reduce the strength of illness identity and emotional representations, increase a sense of control, and promote the use of more adaptive coping strategies could be beneficial for allergy sufferers. |
Instruction: Treatment of intracranial cysts in children: peritoneal derivation or endoscopic fenestration?
Abstracts:
abstract_id: PUBMED:12407318
Treatment of intracranial cysts in children: peritoneal derivation or endoscopic fenestration? Objective: The goal of this study is to evaluate the indications, benefits and complications of shunts and endoscopic fenestrations in the treatment of malformative intracranial cysts.
Material And Method: The records of 172 consecutive children (mean age 4 years) were reviewed. All had a malformative cyst. Dandy-Walker malformation, mega cisterna magna, and cysts of tumoral or porencephalic origin were excluded from the study. The cysts were diagnosed either in utero (n=64) or postnatally (n=108). Most were single (94.8%) and localized in the posterior fossa (26.2%) or at the convexity (23.2%). Indication for surgery was based on clinical symptoms (n=101; 86.3%) or the size of the lesion (n=16; 13.7%). Endoscopy was the treatment of choice when cysts were in close relationship with enlarged ventricles. Shunting procedures were indicated when endoscopy was not feasible, and craniotomies when shunt insertion was unsafe or the diagnosis uncertain. Fifty children underwent an endoscopic fenestration, 55 a shunting procedure, 7 the puncture or external drainage of a pericerebral collection and 5 a direct surgical approach. The mean follow-up was 5.5 years. Psychomotor, intellectual and school performances were evaluated in 93 children (54%). Success was defined by both the disappearance of symptoms of increased intracranial pressure and regression of the cyst.
Results: Compared to shunts, endoscopic fenestrations were more frequently successful (70% vs 61.8%), led to fewer complications (6% vs 61.8%) and to a smaller number of reoperations (on average 1.6 operations per child vs 2.2). The median developmental and intellectual quotients for the whole series were 98 and 97, respectively, and did not depend upon the type of treatment.
Conclusion: The study of this series shows that treatment modalities necessarily vary according to the site of the cysts but that endoscopic fenestrations are preferable to shunts whenever feasible.
abstract_id: PUBMED:34223964
Endoscopic treatment of intracranial cysts in infants: personal experience and review of literature. Background: A wide variety of intracranial cysts is known to occur in infants. If symptomatic, they require treatment; the ideal surgical treatment and the indications for surgery are still a matter of discussion. Traditional treatment is either by cystoperitoneal shunting or microsurgical fenestration. Endoscopic treatment is an alternative procedure that avoids the invasiveness of open craniotomy and the complications caused by shunting.
Methods: This article reviews the endoscopic treatment of intracranial cysts in infants. The author presents personal experience by reviewing the results of endoscopic treatment in different subgroups among his series of pediatric patients extending over 20 years.
Results: Different types of intracranial cysts in infants were discussed and the role of endoscopy in the management of these patients was reviewed. The author also presented the results of a personal series of 87 infants with intracranial cysts operated on endoscopically.
Conclusions: It has been recommended to use the endoscopic procedure in the treatment of intracranial cysts in infants, because it is effective, simple, minimally invasive, and associated with low morbidity and mortality rates. However, an important prerequisite is the presence of an area of contiguity with the subarachnoid cisterns and/or the ventricular system.
abstract_id: PUBMED:16398484
Endoscopic management of intracranial cysts. Object: Endoscopic fenestration has been recognized as an accepted treatment choice for patients with symptomatic arachnoid cysts. The success of this procedure, however, is greatly influenced by individual cyst anatomy and location as well as the endoscopic technique used. This review was conducted to assess what variables influence the treatment success for different categories of arachnoid cysts.
Methods: Thirty-three consecutive patients who underwent endoscopic fenestration for treatment of an intracranial arachnoid cyst were identified from a prospective database. The surgical indications and techniques were reviewed, and surgical success rates and patient outcomes were assessed. Specific examples of each cyst category are included to illustrate the technical aspects of endoscopic cyst fenestration. Endoscopic fenestration of arachnoid cysts was successful, as judged by cyst decompression and symptom resolution, in 32 (97%) of 33 cases. The one patient with short-term treatment failure underwent a successful repeat operation. There were no surgery-related morbidities or deaths.
Conclusions: Arachnoid cysts are a relatively benign pathological entity that can be managed by performing endoscopically guided cyst wall fenestrations into the ventricular system or cerebrospinal fluid-containing cisterns. Proper patient selection, preoperative planning of endoscope trajectory, use of frameless navigation, and advances in endoscope lens technology and light intensity combine to make this a safe procedure with excellent outcomes.
abstract_id: PUBMED:34509680
Endoscopic Third Ventriculostomy and Endoscopic Intracranial Cyst Fenestration in an Outpatient Ambulatory Surgery Center Yields Reduced Cost But Equal Efficacy and Safety Compared with Surgery in the Hospital. Background: A transition is underway in neurosurgery to perform relatively safe surgeries outpatient, often at ambulatory surgery centers (ASC). We sought to evaluate whether simple intracranial endoscopic procedures such as third ventriculostomy and cyst fenestration can be safely and effectively performed at an ASC, while comparing costs with the hospital.
Methods: A retrospective chart review was performed for patients who underwent elective intracranial neuroendoscopic (NE) intervention at either a quaternary hospital or an affiliated ASC between August 2014 and September 2017. Groups were compared on length of stay, perioperative and 30-day morbidity, and clinical outcome at last follow-up. The total costs for these procedures were compared in relative units between all ASC cases and a small subset of hospital cases.
Results: In total, 16 NE operations were performed at the ASC (mean patient age 29.8 years) and 37 at the hospital (mean age 15.4 years), with average lengths of stay of 3.5 hours and 23.1 hours, respectively (P < 0.05). There were no acute complications in either cohort, nor morbid events requiring hospitalization within 30 days. Surgical success was noted for 75% of the ASC patients and 73% of the hospital cohort. The mean cost of 5 randomly selected hospital operations with same-day discharge and of 5 with overnight stay was 3.4 and 4.1 times that of the ASC cohort, respectively (P < 0.05).
Conclusions: Elective endoscopic third ventriculostomy and other simple NE procedures can be safely and effectively performed at an ASC for appropriate patients with significantly reduced cost compared with the hospital.
abstract_id: PUBMED:15292634
Endoscopic cyst fenestration outcomes in children one year of age or less. The use of endoscopic fenestration (EF) is becoming an increasingly common treatment for symptomatic intracranial cysts. Very little data exist regarding outcomes for this procedure in children 1 year of age or younger. We retrospectively reviewed the clinical outcomes of 8 children 1 year of age or less treated at our institution with endoscopic cyst fenestration. The mean follow-up was roughly 2.5 years. These data were combined with 17 other cases obtained from the published literature. EF was successful in rendering patients shunt-free or minimizing the number of ventricular catheters in 18 of 26 operations. There were 8 outright failures -- two in 1 patient. Given the risks and complications of cerebrospinal fluid shunting in children less than 1 year of age, we advocate the consideration of EF as initial treatment of symptomatic intracranial cysts.
abstract_id: PUBMED:32569761
Endoscopic Fenestration of a Symptomatic Porencephalic Cyst in an Adult. A porencephalic cyst is an aberrant accumulation of cerebrospinal fluid within the brain parenchyma. Its occurrence is rare, with an incidence of 3.5 per 100,000 live births. The etiology is considered to be perinatal cerebral ischemia or hemorrhage leading to parenchymal loss. Porencephalic cysts are diagnosed radiologically, and management depends on the clinical manifestation. Our case depicts a porencephalic cyst presenting with nondominant parietal lobe symptoms in adulthood. We hypothesize that a membrane between the cyst and the ventricle formed a one-way valve that led to slowly progressive cyst enlargement, eventually causing mass effect and nondominant parietal lobe symptoms. Endoscopic fenestration of the cyst membrane into the lateral ventricle was successful in reducing cyst volume and improving mass effect.
abstract_id: PUBMED:16681360
Endoscopic surgery for intracranial cerebrospinal fluid cyst malformations. Endoscopic surgery represents a new and very useful treatment modality for intracranial cysts. The authors review the cases of 19 patients with intracranial malformative CSF cysts (seven intraventricular, six paraventricular, and six arachnoid) who underwent endoscopic fenestration using a burr-hole approach. The various endoscopic approaches and fenestration techniques, according to the type and location of the cyst, and the causes of unsuccessful outcomes are critically discussed. The authors recommend endoscopic fenestration as the treatment of choice for patients with para- and intraventricular cysts, in whom the procedure may help to avoid a microsurgical approach and shunt placement in nearly all cases. In patients with arachnoid cysts, the endoscopic procedure, although associated with a lower rate of successful outcome, may be performed as the primary procedure in most cases because it is minimally invasive; traditional surgical treatment may then be performed without additional risk in cases in which endoscopic surgery has failed.
abstract_id: PUBMED:16122006
Neuroendoscopic transventricular ventriculocystostomy in treatment for intracranial cysts. Object: Although in recent years endoscopic procedures have been used for intracranial arachnoid cysts with favorable preliminary results in certain locations, optimal surgical treatment is still controversial. The purpose of this study was to evaluate the efficacy and safety of endoscopic transventricular ventriculocystostomy in the treatment of intracranial cysts based on the concept of normalizing cerebrospinal fluid (CSF) dynamics.
Methods: Twelve symptomatic pediatric patients with congenital intracranial cysts underwent surgery at Jikei University in Tokyo. A neuroendoscopic transventricular ventriculocystostomy was performed in nine patients and an endoscope-assisted craniotomy in the remaining three. Endoscopy was performed using a freehand maneuver with a newly designed rigid-rod neuroendoscope that is frameless and has a small diameter. Clinical results were good in all patients, although the cysts in three were not prominently reduced in size on follow-up imaging studies. Neither death nor symptomatic morbidity occurred, and no patient required shunt placement. In three cases the endoscopic fenestration was combined with an endoscopic third ventriculostomy (ETV). Postoperative CSF dynamics studies, consisting of computerized tomography ventriculocisternography and pre- and postoperative cine-mode magnetic resonance imaging, demonstrated free communication between the fenestrated cysts and the ventricular/cisternal CSF pathways, consistent with normalization of CSF dynamics.
Conclusions: Neuroendoscopic transventricular ventriculocystostomy constitutes a valid alternative to microsurgery for intracranial cysts located within or adjacent to the ventricles. It creates an effective CSF flow within the cyst with minimal alteration of subarachnoid spaces. It may be combined with an ETV procedure in case of obstruction of CSF pathways and should be preferred to the insertion of shunts.
abstract_id: PUBMED:28488617
Do the clinicoradiological outcomes of endoscopic fenestration for intracranial cysts count on age? An institutional experience. Background: The clinicoradiological outcome of endoscopic fenestration of intracranial cysts and predictors of an unfavorable outcome, including age, are under reported in the neurosurgical literature. In this cohort, our experience in the endoscopic fenestration of intracranial cysts is reviewed.
Materials And Methods: Thirty consecutive patients treated with endoscopic fenestration for intracranial cysts were identified and analyzed. The study population in our series was followed clinically and radiographically.
Results: In this series, the overall resolution of clinical symptoms such as headache, seizures, and neurological deficits was 83% (P = 0.0001). The percentage of clinical resolution after endoscopic intervention was significantly higher (85% vs. 76%, P = 0.001) in arachnoid cysts compared to other cyst types. The reduction of arachnoid cyst size was significantly higher in adults with obstructive hydrocephalus compared to the children group (P = 0.037). In addition, the requirement for cystoperitoneal shunt placement (P = 0.0001) and its subsequent revision (P = 0.0001) was significantly lower in adults compared to children. Adults (P = 0.041), presence of an arachnoid cyst (P = 0.026), female gender (P = 0.016), and presence of communicative hydrocephalus (P = 0.015) were significant predictors for improvement in the symptoms of intracranial pressure. Lastly, adults (P = 0.028), presence of an arachnoid cyst (P = 0.046), and presence of communicative hydrocephalus (P = 0.012) were significant positive predictors for shunt revision.
Conclusions: This study revealed that endoscopic fenestration is an effective neurosurgical procedure for the management of intracranial cysts both in adults and children. Moreover, endoscopic fenestration is more beneficial in adults and patients with an arachnoid cyst compared to that in children and other cyst types, respectively.
abstract_id: PUBMED:16922071
Treatment of suprasellar cysts and patient outcome. Object: The authors sought to determine the natural history of and optimal treatment for suprasellar cysts (SSCs).
Methods: Three hundred forty-two patients harboring intracranial cysts presented to the authors' neurosurgery unit between January 1986 and August 2004. The patients' records were reviewed to assess symptomatology, results of imaging studies, and outcome according to mode of treatment. Thirty-three patients (9.6%) were eligible for this study. Nine SSCs were diagnosed prenatally and 24 were identified postnatally (range 0 months-18.2 years, mean 5.6 years). The mean follow-up period was 66.8 +/- 44.6 months (standard deviation). Seven cysts were left untreated, six (66%) detected before birth and one (4.5%) after birth. Of the 26 patients who required surgery, three were admitted elsewhere for complications of shunt surgery. A ventriculocystostomy (VC) was performed in all three of these patients, but the treatment failed in two. The primary treatment in the remaining 23 children was: open fenestration in two patients, VC in seven, ventriculocystocisternostomy (VCC) in 13, and cystoperitoneal (CP) shunt in one patient. Both open fenestration procedures were successful, as was the CP shunt insertion. The success rate of primary endoscopic surgery, although not statistically significant, was higher for VCCs (11 [85%] of 13 patients) than for VCs (four [57%] of seven patients). None of the patients' preoperative endocrine disorders resolved postoperatively. The distribution of intellectual and developmental quotients paralleled the normal range. Intellectual performance was unrelated to patient-specific factors or to treatment modalities.
Conclusions: Most SSCs are of moderate size, are stable and asymptomatic, and have a favorable outcome. Treatment is required when the cyst evolves or the patient is symptomatic, but endocrine disturbances alone are not an indication for surgery. When hydrocephalus is present, endoscopic fenestration is the primary treatment of choice. The goal of the procedure should be to open the cyst into both the ventricles and the cisterns. Intellectual capability at outcome is not related to age at diagnosis, initial or final cyst size, presence or absence of hydrocephalus, or type of endoscopic treatment.
Answer: The treatment of intracranial cysts in children can involve different approaches, including peritoneal derivation (shunting) and endoscopic fenestration. The choice of treatment depends on various factors such as the cyst's location, relationship with the ventricular system, and the feasibility of the procedure.
Endoscopic fenestration is often the preferred method when feasible, as it is less invasive than open craniotomy and avoids the complications associated with shunting. Studies have shown that endoscopic fenestrations are more frequently successful compared to shunts, lead to fewer complications, and result in a lesser number of reoperations (PUBMED:12407318). Endoscopic treatment is recommended for intracranial cysts in infants because it is effective, simple, minimally invasive, and associated with low morbidity and mortality rates, especially when there is an area of contiguity with the subarachnoid cisterns and/or the ventricular system (PUBMED:34223964).
The success of endoscopic fenestration is greatly influenced by individual cyst anatomy and location, as well as the endoscopic technique used. Endoscopic fenestration of arachnoid cysts has been successful in the majority of cases, with no surgery-related morbidities or deaths reported (PUBMED:16398484). Additionally, performing simple intracranial endoscopic procedures such as third ventriculostomy and cyst fenestration in an outpatient ambulatory surgery center has been shown to be safe, effective, and cost-efficient compared with hospital-based surgery (PUBMED:34509680).
For children one year of age or less, endoscopic fenestration has been successful in rendering patients shunt-free or minimizing the number of ventricular catheters, making it a viable initial treatment option for symptomatic intracranial cysts in this age group (PUBMED:15292634). Endoscopic fenestration is also a valid alternative to microsurgery for intracranial cysts located within or adjacent to the ventricles, as it creates an effective cerebrospinal fluid flow within the cyst with minimal alteration of subarachnoid spaces (PUBMED:16122006). |
Instruction: Can a revised paediatric radiation dose reduction CT protocol be applied and still maintain anatomical delineation, diagnostic confidence and overall imaging quality?
Abstracts:
abstract_id: PUBMED:24959737
Can a revised paediatric radiation dose reduction CT protocol be applied and still maintain anatomical delineation, diagnostic confidence and overall imaging quality? Objective: To compare multidetector CT (MDCT) radiation doses between default settings and a revised dose reduction protocol and to determine whether the diagnostic confidence can be maintained with imaging quality made under the revised protocol in paediatric head, chest and abdominal CT studies.
Methods: The study retrospectively reviewed head, chest, abdominal and thoracoabdominal MDCT studies, comparing 231 CT studies performed before (Phase 1) and 195 CT studies performed after (Phase 2) implementation of the revised protocol. Image quality was assessed using a five-point grading scale based on anatomical criteria, diagnostic confidence and overall quality. Image noise and dose-length product (DLP) were collected and compared.
Results: The relative dose reductions between Phase 1 and Phase 2 were statistically significant, at 35%, 51% and 54% (p < 0.001) for head, chest and abdominal CT studies, respectively. There were no statistically significant differences in overall image quality scores for the head (p = 0.3), chest (p = 0.7), abdominal (p = 0.7) and contiguous thoracic (p = 0.1) and abdominal (p = 0.2) CT studies, with the exception of anatomical quality in the definition of bronchial walls and the delineation of intrahepatic portal branches in thoracoabdominal CTs, and diagnostic confidence for mass lesions in head CTs, for liver lesions (>1 cm), splanchnic venous thrombosis and pancreatitis in abdominal CTs, and for emphysema and aortic dissection in thoracoabdominal CTs.
Conclusion: Paediatric CT radiation doses can be significantly reduced from manufacturer's default protocol while still maintaining anatomical delineation, diagnostic confidence and overall imaging quality.
Advances In Knowledge: A revised paediatric CT protocol can provide a roughly 50% DLP reduction while preserving overall imaging quality.
abstract_id: PUBMED:32823818
Diagnostic Reference Level of Radiation Dose and Image Quality among Paediatric CT Examinations in A Tertiary Hospital in Malaysia. Paediatric patients are more vulnerable to radiation and more prone to dose than adults, so computed tomography (CT) optimization requires particular attention. Hence, diagnostic reference levels (DRLs) have been implemented as part of the optimization process in order to monitor CT dose and diagnostic quality. The noise index has recently been endorsed for inclusion as part of CT optimization in DRL reports. In this study, we therefore set local DRLs for paediatric CT examinations with a noise index as an indicator of image quality. One thousand one hundred and ninety-two (1192) paediatric patients undergoing CT brain, CT thorax and CT chest-abdomen-pelvis (CAP) examinations were analyzed retrospectively and categorized into four age groups: group 1 (0-1 year), group 2 (1-5 years), group 3 (5-10 years) and group 4 (10-15 years). For each group, the volume-weighted CT dose index (CTDIvol), dose-length product (DLP) and effective dose (E) were calculated, and DRLs for each age group were set at the 50th percentile. Both CT dose and image noise values differed significantly between age groups (p < 0.05). The highest CTDIvol and DLP values across all age groups were found in CT brain examinations, with the lowest noise index value reported in the 10-15-year age group. In conclusion, there was significant variation in dose and noise intensity among children of different ages, highlighting the need to adapt specific parameters to fit the clinical requirement.
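As a rough illustration of the derivation step described above (a minimal Python sketch; the dose values, group labels and sample sizes below are invented placeholders, not data from the study), a local DRL of this kind is simply the 50th percentile of the recorded dose indices within each age group:

import numpy as np

# Hypothetical per-examination dose records, grouped by age band
exams = {
    "0-1 y": {"ctdi_vol": [1.2, 1.5, 1.1, 1.8], "dlp": [21, 25, 19, 30]},
    "1-5 y": {"ctdi_vol": [1.9, 2.2, 2.0, 2.5], "dlp": [35, 40, 38, 44]},
}

for group, doses in exams.items():
    drl_ctdi = np.percentile(doses["ctdi_vol"], 50)  # median = 50th percentile
    drl_dlp = np.percentile(doses["dlp"], 50)
    print(f"{group}: DRL CTDIvol = {drl_ctdi:.2f} mGy, DLP = {drl_dlp:.1f} mGy*cm")

Note that national and international DRLs are commonly set at the 75th percentile of a dose distribution; the 50th percentile used here follows the local-DRL approach taken in this study.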
abstract_id: PUBMED:36766563
Clinical Low-Dose Photon-Counting CT for the Detection of Urolithiasis: Radiation Dose Reduction Is Possible without Compromising Image Quality. Background: This study evaluated the feasibility of reducing the radiation dose in abdominal imaging of urolithiasis with a clinical photon-counting CT (PCCT) by gradually lowering the image quality level (IQL) without compromising the image quality and diagnostic value. Methods: Ninety-eight PCCT examinations using either IQL70 (n = 31), IQL60 (n = 31) or IQL50 (n = 36) were retrospectively included. Parameters for the radiation dose and the quantitative image quality were analyzed. Qualitative image quality, presence of urolithiasis and diagnostic confidence were rated. Results: Lowering the IQL from 70 to 50 led to a significant decrease (22.8%) in the size-specific dose estimate (SSDE, IQL70 4.57 ± 0.84 mGy, IQL50 3.53 ± 0.70 mGy, p < 0.001). Simultaneously, lowering the IQL led to a minimal deterioration of the quantitative quality, e.g., image noise increased from 9.13 ± 1.99 (IQL70) to 9.91 ± 1.77 (IQL50, p = 0.248). Radiologists did not notice major changes in the image quality throughout the IQLs. Detection rates of urolithiasis (91.3-100%) did not differ markedly. Diagnostic confidence was high and not influenced by the IQL. Conclusions: Adjusting the PCCT scan protocol by lowering the IQL can significantly reduce the radiation dose without significant impairment of the image quality. The detection rate and diagnostic confidence are not impaired by using an ultra-low-dose PCCT scan protocol.
abstract_id: PUBMED:28378236
A new low-dose multi-phase trauma CT protocol and its impact on diagnostic assessment and radiation dose in multi-trauma patients. Purpose: Computed tomography (CT) examinations, often using high radiation doses, are increasingly used in the acute management of polytrauma patients. This study compares a low-dose multi-phase polytrauma whole-body CT (WBCT) protocol on a latest-generation 258-slice multi-detector CT (MDCT) scanner with a 16-cm detector and advanced dose reduction techniques to a single-phase polytrauma WBCT protocol on a 64-slice MDCT scanner.
Methods: Between March and September 2015, 109 polytrauma patients (group A) underwent acute WBCT with the low-dose multi-phase WBCT protocol on the 258-slice MDCT, whereas 110 polytrauma patients (group B) underwent single-phase trauma CT on the 64-slice MDCT. The diagnostic accuracy for trauma-related injuries, radiation dose, quantitative and semiquantitative image quality parameters, subjective image quality scorings, and workflow time parameters were compared.
Results: In group A, statistically significantly more arterial injuries (p = 0.04) and arterial dissections (p = 0.002) were detected. In group A, the mean (±SD) dose-length product was 1681 ± 183 mGy*cm, markedly lower than in group B (p < 0.001). The SDs of the mean Hounsfield unit values of the brain, liver, and abdominal aorta were lower in group A (p < 0.001). Mean signal-to-noise ratios (SNRs) for the brain, liver, and abdominal aorta were significantly higher in group A (p < 0.001). Group A had significantly higher image quality scores for all analyzed anatomical locations (p < 0.02). However, the mean time from patient registration until completion of the examination was significantly longer for group A (p < 0.001).
Conclusions: The low-dose multi-phase CT protocol improves diagnostic accuracy and image quality at a markedly reduced radiation dose. However, due to technical complexities and the surplus electronic data produced by the newer low-dose technique, examination time increases, which slows workflow in acute emergency situations.
abstract_id: PUBMED:25066756
Radiation dose reduction in chest CT--review of available options. Computed tomography currently accounts for the majority of radiation exposure related to medical imaging. Although technological improvement of CT scanners has reduced the radiation dose of individual examinations, the benefit was overshadowed by the rapid increase in the number of CT examinations. Radiation exposure from CT examination should be kept as low as reasonably possible for patient safety. Measures to avoid inappropriate CT examinations are needed. Principles and information on radiation dose reduction in chest CT are reviewed in this article. The reduction of tube current and tube potential are the mainstays of dose reduction methods. Study results indicate that routine protocols with reduced tube current are feasible with diagnostic results comparable to conventional standard dose protocols. Tube current adjustment is facilitated by the advent of automatic tube current modulation systems by setting the appropriate image quality level for the purpose of the examination. Tube potential reduction is an effective method for CT pulmonary angiography. Tube potential reduction often requires higher tube current for satisfactory image quality, but may still contribute to significant radiation dose reduction. Use of lower tube potential also has considerable advantage for smaller patients. Improvement in image production, especially the introduction of iterative reconstruction methods, is expected to lower radiation dose significantly. Radiation dose reduction in CT is a multifaceted issue. Understanding these aspects leads to an optimal solution for various indications of chest CT.
abstract_id: PUBMED:10525786
Radiation dose reduction in paediatric cranial CT. Background: There is no consensus about the optimal milliamperage-second (mAs) settings for computed tomography (CT). Most operators follow the recommended settings of the manufacturers, but these may not be the most appropriate settings.
Objective: To determine whether a lower radiation dose technique could be used in CT of the paediatric brain without jeopardising the diagnostic accuracy of the images.
Materials And Methods: A randomised prospective trial. A group of 53 children underwent CT using the manufacturer's default levels of 200 or 250 mAs; 47 underwent scanning at 125 or 150 mAs. Anatomical details and the confidence level in reaching a diagnosis were evaluated by two radiologists in a double-blinded manner using a 4-point scoring system.
Results: For both readers there was no statistically significant difference in the confidence level for reaching a diagnosis between the two groups. The 95% confidence intervals and p values were -0.9 to 1.1 and 0.13 (reader 1) and -1.29 to 1.37 and 0.70 (reader 2), respectively. Reliability tests showed the results were consistent.
Conclusions: The recommended level may not be the optimum setting. A dose reduction of 40% is possible on our system in paediatric brain CT without affecting the diagnostic quality of the images.
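A quick arithmetic check, under the standard assumption that CT dose scales approximately linearly with the mAs setting at fixed tube potential, shows that the reduced settings match the quoted figure:

\[ 1 - \tfrac{125}{200} = 37.5\%, \qquad 1 - \tfrac{150}{250} = 40\% \]

so scanning at 125-150 mAs instead of the default 200-250 mAs corresponds to the roughly 40% dose reduction claimed in the conclusion.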
abstract_id: PUBMED:35894003
Clinical Low Dose Photon Counting CT for the Detection of Urolithiasis: Evaluation of Image Quality and Radiation Dose. The purpose of this study was to evaluate the image quality and radiation dose parameters of the novel photon-counting CT (PCCT, Naeotom Alpha, Siemens Healthineers) using low-dose scan protocols for the detection of urolithiasis. Standard CT scans were used as a reference (S40, Somatom Sensation 40, Siemens Healthineers). Sixty-three patients who underwent CT scans between August and December 2021 were retrospectively enrolled. Thirty-one patients were examined with the PCCT and 32 patients were examined with the S40. Radiation dose parameters, as well as quantitative and qualitative image parameters, were analyzed. The presence of urolithiasis, image quality, and diagnostic certainty were rated on a 5-point scale by three blinded readers. Both patient groups (PCCT and S40) did not differ significantly in terms of body mass index. Radiation dose was significantly lower for examinations with the PCCT compared to the S40 (2.4 ± 1.0 mSv vs. 3.4 ± 1.0 mSv; p < 0.001). The signal-to-noise ratio (SNR) was significantly better on images acquired with the PCCT (13.3 ± 3.3 vs. 8.2 ± 1.9; p < 0.001). The image quality of the PCCT was rated significantly better (4.3 ± 0.7 vs. 2.8 ± 0.6; p < 0.001). The detection rate of kidney or ureter calculi was excellent with both CT scanners (PCCT 97.8% and S40 99%, p = 0.611). In high-contrast imaging, such as the depiction of stones of the kidney and the ureter, PCCT allows a significant reduction of radiation dose while maintaining excellent diagnostic confidence and image quality. Given this image quality with our current protocol, further adjustments towards ultra-low-dose CT scans appear feasible.
abstract_id: PUBMED:34165410
The Feasibility of Low-dose Chest CT Acquisition Protocol for the Imaging of COVID-19 Pneumonia. Objective: This study aimed to investigate the feasibility of a low-dose chest CT acquisition protocol for the imaging of COVID-19 disease, or suspected cases of this disease, in adults.
Methods: In this retrospective case-control study, the study group consisted of 141 patients imaged with the low-dose chest CT acquisition protocol. The control group consisted of 92 patients imaged with the standard protocol. Anteroposterior and lateral diameters of the chest, effective diameter and scan length, qualitative and quantitative noise levels, volumetric CT dose index (CTDIvol), dose-length product (DLP), and size-specific dose estimates were compared between groups.
Results: A radiation dose reduction of nearly 90% (CTDIvol and DLP values of 1.06 mGy and 40.3 mGy.cm vs. 8.07 mGy and 330 mGy.cm, respectively; p < 0.001) was achieved with the low-dose chest CT acquisition protocol. Despite higher image noise with the low-dose protocol, no significant effect on diagnostic confidence was encountered. Cardiac and diaphragm movement-related artifacts were similar in both groups (p = 0.275). Interobserver agreement was very good in terms of diagnostic confidence assessment.
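The two dose metrics quoted above are linked through the scan length: DLP is approximately the CTDIvol multiplied by the irradiated length. As an illustrative consistency check on the reported figures (a back-of-the-envelope calculation, not part of the study):

\[ \mathrm{DLP} \approx \mathrm{CTDI_{vol}} \times L \;\Rightarrow\; L_{\text{low-dose}} \approx \tfrac{40.3}{1.06} \approx 38\ \mathrm{cm}, \qquad L_{\text{standard}} \approx \tfrac{330}{8.07} \approx 41\ \mathrm{cm} \]

The implied scan lengths are comparable, so the ~90% dose reduction is driven almost entirely by the lower CTDIvol rather than by shorter anatomical coverage.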
Conclusion: For the imaging of COVID-19 pneumonia or suspected cases of this disease in adults, a low-dose chest CT acquisition protocol provides a remarkable radiation dose reduction without adversely affecting image quality and diagnostic confidence.
abstract_id: PUBMED:34343544
Low dose cone beam CT for paediatric image-guided radiotherapy: Image quality and practical recommendations. Purpose: Cone beam CT (CBCT) is used in paediatric image-guided radiotherapy (IGRT) for patient setup and internal anatomy assessment. Adult CBCT protocols lead to excessive doses in children, increasing the risk of radiation-induced malignancies. Reducing imaging dose increases quantum noise, degrading image quality. Patient CBCTs also include 'anatomical noise' (e.g. motion artefacts), further degrading quality. We determine noise contributions in paediatric CBCT, recommending practical imaging protocols and thresholds above which increasing dose yields no improvement in image quality.
Methods And Materials: Sixty CBCTs including the thorax or abdomen/pelvis from 7 paediatric patients (aged 6-13 years) were acquired at a range of doses and used to simulate lower dose scans, totalling 192 scans (0.5-12.8 mGy). Noise measured in corresponding regions of each patient and a 10-year-old phantom were compared, modelling total (including anatomical) noise, and quantum noise contributions as a function of dose. Contrast-to-noise ratio (CNR) was measured between fat/muscle. Soft tissue registration was performed on the kidneys, comparing accuracy to the highest dose scans.
Results: Quantum noise contributed <20% to total noise in all cases, suggesting anatomical noise is the largest determinant of image quality in the abdominal/pelvic region. CNR exceeded 3 in over 90% of cases ≥ 1 mGy, and 57% of cases at 0.5 mGy. Soft tissue registration was accurate for doses > 1 mGy.
Conclusion: Anatomical noise dominates quantum noise in paediatric CBCT. Appropriate soft tissue contrast and registration accuracy can be achieved for doses as low as 1 mGy. Increasing dose above 1 mGy holds no benefit in improving image quality or registration accuracy due to the presence of anatomical noise.
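For reference, the contrast-to-noise ratio between two tissues reported above follows the standard definition (a general formula, not specific to this study):

\[ \mathrm{CNR} = \frac{\lvert S_{\text{fat}} - S_{\text{muscle}} \rvert}{\sigma} \]

where S denotes the mean signal intensity in each tissue region and σ the image noise (standard deviation). The study's criterion of CNR > 3 therefore requires the fat-muscle intensity difference to exceed three times the noise level.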
abstract_id: PUBMED:36971838
Feasibility of photon-counting CT for femoroacetabular impingement syndrome evaluation: lower radiation dose and improved diagnostic confidence. Objective: To assess the feasibility of low-dose photon-counting detector (PCD) CT for measuring the alpha and acetabular version angles of femoroacetabular impingement (FAI).
Material And Methods: FAI patients undergoing an energy-integrating detector (EID) CT underwent an IRB-approved prospective ultra-high-resolution (UHR) PCD-CT between 5/2021 and 12/2021. PCD-CT was dose-matched to the EID-CT or acquired at 50% dose. Simulated 50% dose EID-CT images were generated. Two radiologists evaluated randomized EID-CT and PCD-CT images and measured alpha and acetabular version angles on axial image slices. Image quality (noise, artifacts, and visualization of cortex) and confidence in non-FAI pathology were rated on a 4-point scale (3 = adequate). Preference tests of standard dose PCD-CT, 50% dose PCD-CT, and 50% dose EID-CT relative to standard dose EID-CT were performed using the Wilcoxon rank test.
Results: 20 patients underwent standard dose EID-CT (CTDIvol ~4.5 mGy); 10 patients, standard dose PCD-CT (4.0 mGy); and 10 patients, 50% dose PCD-CT (2.6 mGy). Standard dose EID-CT images were scored as adequate for the diagnostic task in all categories (range 2.8-3.0). Standard dose PCD-CT images scored higher than the reference in all categories (range 3.5-4, p < 0.0033). Half-dose PCD-CT images also scored higher for noise and cortex visualization (p < 0.0033) and equivalent for artifacts and visualization of non-FAI pathology. Finally, simulated 50% dose EID-CT images scored lower in all categories (range 1.8-2.4, p < 0.0033).
Conclusions: Dose-matched PCD-CT is superior to EID-CT for alpha angle and acetabular version measurement in the work up of FAI. UHR-PCD-CT enables 50% radiation dose reduction compared to EID while remaining adequate for the imaging task.
Answer: Yes, a revised paediatric radiation dose reduction CT protocol can be applied while still maintaining anatomical delineation, diagnostic confidence, and overall imaging quality. Studies have shown that significant reductions in radiation doses from the manufacturer's default protocol are possible without compromising the quality of the images. For instance, a study that compared multidetector CT (MDCT) radiation doses between default settings and a revised dose reduction protocol in paediatric head, chest, and abdominal CT studies found that relative dose reductions were statistically significant, and there were no statistically significant differences in overall image quality score comparisons in most of the CT studies (PUBMED:24959737).
Additionally, other studies have demonstrated the feasibility of reducing radiation doses in various CT examinations without significant impairment of image quality or diagnostic value. For example, a study on clinical low-dose photon-counting CT for the detection of urolithiasis showed that adjusting the scan protocol by lowering the image quality level can significantly reduce the radiation dose without major changes in image quality or diagnostic confidence (PUBMED:36766563). Similarly, a new low-dose multi-phase trauma CT protocol on a latest-generation MDCT scanner improved diagnostic accuracy and image quality at markedly reduced radiation doses (PUBMED:28378236).
Moreover, research on a low-dose chest CT acquisition protocol for the imaging of COVID-19 pneumonia in adults achieved a remarkable radiation dose reduction without adversely affecting image quality and diagnostic confidence (PUBMED:34165410). Furthermore, a study on low-dose cone beam CT for paediatric image-guided radiotherapy found that anatomical noise dominates quantum noise and that appropriate soft tissue contrast and registration accuracy can be achieved for doses as low as 1 mGy (PUBMED:34343544).
In conclusion, revised paediatric CT protocols that focus on radiation dose reduction can be successfully implemented, maintaining the necessary image quality and diagnostic confidence required for accurate medical evaluation. |
Instruction: Search for secondary osteoporosis: are Z scores useful predictors?
Abstracts:
abstract_id: PUBMED:19240287
Search for secondary osteoporosis: are Z scores useful predictors? Aim: To determine whether Z scores can be used to predict the likelihood of patients having a secondary cause of low bone mineral density.
Methods: A retrospective cross-sectional study was conducted among 136 consecutive patients with osteoporosis at Ninewells Hospital, Dundee, UK, between 1998 and 2002.
Results: 20.5% of female patients in this study were identified with previously unrecognised contributors to the low bone mineral density. In women, at a Z score cut-off of -1, the sensitivity of detecting a secondary cause for osteoporosis is 87.5% with a positive predictive value of 29.2%.
Conclusion: In women, a Z score of -1 would identify a majority of patients with a secondary cause for low bone mineral density and identifies patients who would especially benefit from a thorough history and clinical examination.
abstract_id: PUBMED:29046326
DIAGNOSIS OF ENDOCRINE DISEASE: Bone turnover markers: are they clinically useful? Bone turnover markers (BTMs) are useful in clinical practice: they are inexpensive, and they have proven valuable for treatment monitoring and for identifying poor adherence. BTMs cannot be used in individual patients to identify accelerated bone loss or an increase in fracture risk, or to decide on the optimal therapy. They are useful for monitoring both anti-resorptive and anabolic treatment. Response can be defined as a result that exceeds an absolute target, or as a change greater than the least significant change; if such a response is not present, then poor compliance or secondary osteoporosis are likely causes. A baseline BTM measurement is not always made; in that case, a BTM value on anti-resorptive treatment that is low or low-normal, or a value above the reference interval on anabolic therapy, may be taken to indicate a satisfactory response. We provide an approach to using these bone turnover markers in clinical practice by describing algorithms for anti-resorptive and anabolic therapy and describing the changes we observe in the clinical practice setting.
abstract_id: PUBMED:35153504
Development of a Nomogram for Predicting Very Low Bone Mineral Density (T-Scores <-3) in the Chinese Population. Purpose: Fragility fractures, the most serious complication of osteoporosis, affect quality of life and increase medical expenses and economic burden. Strategies to identify populations with very low bone mineral density (T-scores <-3), indicating very high fracture risk according to the American Association of Clinical Endocrinologists/American College of Endocrinology (AACE/ACE), are necessary to achieve acceptable fracture risk levels. In this study, the characteristics of persons with T-scores <-3 were analyzed in the Chinese population to identify risk factors and to develop a nomogram for identifying very low bone mineral density (T-scores <-3).
Materials And Methods: We conducted a cross-sectional study using the datasets of the Health Improvement Program of Bone (HOPE), with 602 men aged ≥50 years and 482 postmenopausal women. Bone mineral density (BMD) was measured using dual energy X-ray absorptiometry (DXA). Data on clinical risk factors, including age, sex, weight, height, previous fracture, parental hip fracture history, smoking, alcohol intake >3 units/day, glucocorticoid use, rheumatoid arthritis, and secondary osteoporosis, were collected. A multivariate logistic regression was conducted to evaluate the relationship between the clinical risk factors and very low BMD (T-scores <-3). Parameter estimates of the final model were then used to construct a nomogram.
Results: Sixty-three of 1084 participants (5.8%) had BMD T-score <-3. In multivariable regression analysis, age (odds ratio [OR] = 1.068, 95% confidence interval [CI]: 1.037-1.099) and weight (OR = 0.863, 95% CI: 0.830-0.897) were significant factors that were associated with very low BMD (T-scores <-3). These variables were the factors considered in developing the nomogram. The area under the receiver operating characteristic (ROC) curve for the model was 0.861. The cut-off value of the ROC curve was 0.080.
Conclusion: The nomogram can effectively assist clinicians to identify persons with very low BMD (T-scores <-3) and very high fracture risk in the Chinese population.
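To make the modelling step concrete, the following is a minimal, hypothetical Python sketch of this type of analysis. The data are synthetic, the intercept is arbitrary, and the slopes are chosen only to loosely echo the reported odds ratios (1.068 per year of age, 0.863 per kg of weight); nothing here reproduces the actual HOPE dataset or the published nomogram.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(50, 90, n)    # years
weight = rng.normal(62, 10, n)  # kg

# Synthetic outcome: risk rises with age and falls with weight
# (log-odds slopes ln(1.068) ~= 0.066 and ln(0.863) ~= -0.147)
logit = -3.0 + 0.066 * (age - 70) - 0.147 * (weight - 62)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, weight])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("Odds ratios (age, weight):", np.exp(model.coef_[0]).round(3))
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]).round(3))

Note that scikit-learn's LogisticRegression applies L2 regularization by default, so the recovered odds ratios are only approximate; an unpenalized fit (e.g., statsmodels Logit) would correspond more closely to a conventional clinical analysis. A nomogram is then essentially a graphical restatement of the fitted linear predictor.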
abstract_id: PUBMED:31742245
Expert consensus on relevant risk predictors for the occurrence of osteoporotic fractures in specific clinical subgroups - Delphi survey. Background: There is an ongoing discussion about incorporating additional risk factors into the established WHO fracture risk assessment tool (FRAX) to improve prediction accuracy in clinical subgroups. We aimed to reach an expert consensus on possible additional predictive parameters for specific clinical subgroups.
Methods: Two-round modified Delphi survey: We generated a shortlist of experts from the author lists of the pertinent literature and complemented the list with experts known to the authors. Participants were asked to name possible relevant risk factors, besides the FRAX parameters, for the occurrence of osteoporotic fractures. Experts specified these possible predictors for specific subgroups of patients. In the second round, the expert panel was asked to weight each parameter of every subgroup by assigning a number from one (not important) to ten (very important). We defined the threshold for an expert consensus as an interquartile range (IQR) of a predictor ≤2. The cut-off value of the median attributed weights for a relevant predictor was set at ≥7.
Results: Eleven experts from seven countries completed both rounds of the Delphi. The participants agreed on nine additional parameters across seven categories. For the categories "secondary osteoporosis", "older adults" and "nursing home patients", there was a consensus that a history of previous falls was relevant, while for men and postmenopausal women there was a consensus that spine fracture status was important. For the group "primary and secondary osteoporosis", the experts agreed on the parameters "high risk of falls", "lumbar spine bone mineral density (BMD)" and "sarcopenia".
Conclusion: This Delphi survey reached a consensus on various parameters that could be used to refine the currently existing FRAX for specific clinical situations or patient groups. The results may be useful for studies aiming at improving the predictive properties of instruments for fracture prediction.
abstract_id: PUBMED:34862576
Factors associated with pain-related disorders and gait disturbance scores from the Japanese orthopedic association back pain evaluation questionnaire and Oswestry Disability Index in patients with osteoporosis. In the current study, multivariate analyses were performed to determine factors associated with low back pain (LBP) in patients with osteoporosis. Aging, high bone turnover, obesity, low trunk muscle mass, spinal global sagittal malalignment, and a high number of previous vertebral fractures were potential independent risk factors for pain-related disorders, gait disturbance, or deficits in activities of daily living (ADL) due to LBP.
Purpose: Patients with osteoporosis often experience low back pain (LBP) even in the absence of acute fractures. This study identifies factors that may affect questionnaires about LBP.
Methods: The data of 491 patients with osteoporosis were retrospectively reviewed. Data included patient age, sex, body mass index (BMI), bone mineral density of the lumbar spine, tartrate-resistant acid phosphatase 5b level (TRACP5b), trunk muscle mass (TM), sagittal vertical axis (SVA), previous vertebral fractures, secondary osteoporosis, controlling nutritional status score, pain-related disorders and gait disturbance scores from the Japanese Orthopedic Association Back Pain Evaluation questionnaire (JOABPEQ), and Oswestry disability index (ODI) scores for activities of daily living (ADL) deficit. Patients with scores of 100 for each subsection of the JOABPEQ, or an ODI score < 12, were considered to not have dysfunction (dysfunction (-) group). Multivariate analyses were used to determine variables associated with dysfunction.
Results: The pain-related disorders score of the JOABPEQ was associated with aging, high BMI, and high SVA. Aging, high TRACP5b, high BMI, low TM, high SVA, and more previous vertebral fractures were associated with the gait disturbance score of the JOABPEQ. ODI scores were associated with high BMI, low TM, high SVA, and more previous vertebral fractures.
Conclusions: Aging, high bone turnover, obesity, low TM, spinal global sagittal malalignment, and a high number of previous VFs were potential independent risk factors for pain-related disorders or gait disturbance according to the JOABPEQ or ODI score in patients with osteoporosis.
abstract_id: PUBMED:23225281
The prevalence of recognized contributors to secondary osteoporosis in South East Asian men and post-menopausal women. Are Z score diagnostic thresholds useful predictors of their presence? Unlabelled: The prevalence of secondary contributors to osteoporosis in our population of SE Asian patients is high. Though various low Z score thresholds have been proposed as suggestive of a high likelihood of secondary osteoporosis, they appear to have only limited discriminatory value in identifying a secondary cause.
Introduction: Many patients with osteoporosis have significant secondary contributors towards their bone loss. The sensitivity and diagnostic utility of using Z score thresholds to screen for secondary osteoporosis have not yet been convincingly demonstrated, nor has there been any previous attempt to estimate the prevalence of secondary osteoporosis in South East Asia. We aimed to study the prevalence of commonly recognized contributors and to determine the discriminatory ability of Z score thresholds in screening for them in Singaporean men and post-menopausal women with osteoporosis.
Method: Three hundred thirty-two consecutive patients seen at the osteoporosis clinic of the largest hospital in Singapore were evaluated. The frequencies of the different contributors were determined and sensitivities, specificities, and positive and negative predictive values (PPV and NPV) of pre-specified Z score cut-off values calculated.
Results: Vitamin D deficiency was present in 18.5% of the patients, hyperthyroidism in 10.11%, primary hyperparathyroidism in 1%, secondary hyperparathyroidism in 6%, hypercalciuria in 21.63%, glucocorticoid use in 8.43%, and hypogonadism in 9.4% of males. A Z score value of <-1 had a sensitivity of 71.7% and an NPV of 66.2% in identifying the presence of a secondary contributor in post-menopausal women. The sensitivity and NPV of a similar threshold in men were 59.1% and 40%, respectively. ROC curves investigating the sensitivity and specificity of various Z score diagnostic thresholds showed that these thresholds provided poor predictive value for the presence of secondary osteoporosis.
Conclusion: Secondary contributors are common in our patients with osteoporosis. Z score diagnostic thresholds have only limited value in discriminating between primary and secondary osteoporosis.
abstract_id: PUBMED:26653615
The Radiology of Vertebral Fractures in Childhood Osteoporosis Related to Glucocorticoid Administration. A number of unusual conditions cause decreased bone mass and density in children, and these may be associated with low-trauma fractures. However, a series of reports have more recently identified that children with chronic disease sustain vertebral fractures (VFs) much more often than had been suspected. The common denominator involved is glucocorticoid (GC) administration, although other factors such as disease activity come into play. This review will focus on the imaging findings in this form of secondary osteoporosis. Spinal fractures in children have been found to correlate with back pain. At the same time, up to 2/3 of children with VFs in the GC-treated setting are asymptomatic, underscoring the importance of routine surveillance in at-risk children. Other predictors of prevalent and incident VFs include GC exposure (average daily and cumulative dose), declines in lumbar spine bone mineral density Z-scores and increases in body mass index Z-scores, as well as increases in disease activity scores. The imaging diagnosis of osteoporotic VFs in children is made differently from that in adults because immature vertebral bodies continue to ossify during growth. Thus, it is not possible to assess the vertebral end plates or periphery until late, as enchondral ossification extends centripetally within the centrum. Diagnosis, therefore, is much more dependent upon changes in shape than on loss of structural integrity, which may have a more prominent diagnostic role in adults. However, children have a unique ability to model (a growth-dependent process) and thereby reshape previously fractured vertebral bodies. If the underlying disease is successfully treated and the child has sufficient residual growth potential, this means that, on the one hand, treatment of the bone disease may be of more limited duration and, on the other hand, the diagnosis may only become apparent retrospectively.
abstract_id: PUBMED:38107823
Classification of Osteoporosis. Osteoporosis is defined by low bone quality and strength and increased fracture risk. Primary and secondary osteoporosis are the two forms of osteoporosis, classified on the basis of factors affecting bone metabolism. Primary osteoporosis develops as a result of aging or menopause-related bone demineralization. Type I/postmenopausal and type II/senile osteoporosis are the two subtypes of primary osteoporosis. Secondary osteoporosis is due to pathological conditions and medications other than aging and menopause that lead to deprivation of bone mass and elevated fracture risk. The World Health Organization classification of osteoporosis, based on BMD testing with DEXA, utilizes the T-score for BMD reporting in women in the menopausal transition or postmenopause and in men ≥ 50 years. Z-scores are preferred when reporting BMD in premenopausal women, adults < 50 years of age, and children. BMD alone is not diagnostic of osteoporosis in men < 50 years. The Fracture Risk Assessment Tool (FRAX) is a software algorithm that incorporates significant predictors of fracture risk and BMD to predict an individual's risk of fracture. FRAX predicts the "10-year probability of a major fracture (hip, clinical spine, humerus, or wrist fracture) and the 10-year probability of a hip fracture".
abstract_id: PUBMED:35407629
Machine Learning Algorithms: Prediction and Feature Selection for Clinical Refracture after Surgically Treated Fragility Fracture. Background: The number of patients with fragility fracture has been increasing. Although this growing number of patients has increased the rate of repeat fracture (refracture), the causes of refracture are multifactorial, and its predictors are still not clarified. To address this issue, we collected a registry-based longitudinal dataset that contained more than 7000 patients with fragility fractures treated surgically to detect potential predictors of clinical refracture.
Methods: Because machine learning algorithms are often used for the analysis of large-scale datasets, we developed automatic prediction models and clarified the relevant features for patients with clinical refracture. The input data, containing perioperative clinical information, were in tabular format. Clinical refracture was documented as the primary outcome if a diagnosis of fracture was made during postoperative outpatient care. A decision-tree-based model, LightGBM, had moderate accuracy for the prediction in the test and independent datasets, whereas the other models had poor accuracy or worse.
Results: From a clinical perspective, rheumatoid arthritis (RA) and chronic kidney disease (CKD) were noted as the relevant features for patients with clinical refracture, both of which were associated with secondary osteoporosis.
Conclusion: The decision-tree-based algorithm showed precise prediction of clinical refracture, with RA and CKD detected as potential predictors. Understanding these predictors may improve the management of patients with fragility fractures.
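As an illustration of the modelling approach (a hypothetical sketch, not the authors' pipeline: the feature set, prevalences and effect sizes below are invented, and only the use of a LightGBM classifier on tabular data mirrors the paper):

import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 7000  # the registry contained more than 7000 patients
df = pd.DataFrame({
    "age": rng.uniform(60, 95, n),
    "ra": (rng.random(n) < 0.08).astype(int),   # rheumatoid arthritis (hypothetical flag)
    "ckd": (rng.random(n) < 0.15).astype(int),  # chronic kidney disease (hypothetical flag)
})
# Synthetic outcome: RA and CKD raise refracture risk, echoing the findings
p = 0.05 + 0.10 * df["ra"] + 0.08 * df["ckd"]
df["refracture"] = (rng.random(n) < p).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["age", "ra", "ckd"]], df["refracture"], random_state=0)
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))
print(dict(zip(clf.feature_name_, clf.feature_importances_)))

Gradient-boosted tree ensembles of this kind handle mixed tabular predictors without feature scaling and expose feature importances directly, which is one reason they are popular for registry data such as this.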
abstract_id: PUBMED:8619370
Predictors of hip fractures in elderly men. To assess the influence on the risk of hip fractures in men of medical conditions associated with secondary osteoporosis or with an increased likelihood of falling, we conducted a population-based nested case-control study among the 232 Rochester, Minnesota, men with an initial hip fracture due to moderate trauma in 1965-1989 and an equal number of age-matched control men from the general population. Information on selected medical and surgical conditions and certain behavioral risk factors prior to fracture (or comparable index date for controls) was obtained from inpatient and outpatient medical records in the community that averaged over 36 years in duration. After adjusting for age, obesity, and inactivity, disorders linked with secondary osteoporosis were associated with a 2-fold increase in the risk of hip fracture in men (odds ratio [OR] 2.3; 95% confidence interval [CI] 1.3-4.3), while conditions linked with an increased risk of falling were associated with almost a 7-fold increase in risk (OR 6.9; 95% CI 3.3-14.8). These factors together appeared to account for about 72% of the hip fractures in men. Increased attention must be paid to these conditions which, in aggregate, are very common in elderly men and lead to a substantial increase in the risk of hip fracture with its devastating sequelae of death, disability and cost.
Answer: The utility of Z scores as predictors for secondary osteoporosis has been investigated in various studies, with mixed results. In a study conducted among 136 consecutive patients with osteoporosis at Ninewells Hospital, Dundee, UK, it was found that 20.5% of female patients had previously unrecognized contributors to low bone mineral density (BMD). In this study, a Z score cut-off of -1 had a sensitivity of 87.5% and a positive predictive value of 29.2% for detecting a secondary cause for osteoporosis in women (PUBMED:19240287). This suggests that in women, a Z score of -1 could identify a majority of patients with a secondary cause for low BMD, indicating that Z scores could be useful in this context.
However, another study conducted in South East Asian men and post-menopausal women found that secondary contributors to osteoporosis were common, but Z score diagnostic thresholds had only limited value in discriminating between primary and secondary osteoporosis. A Z score value of <-1 had a sensitivity of 71.7% and a negative predictive value of 66.2% in identifying the presence of a secondary contributor in post-menopausal women, while the sensitivity and negative predictive value in men were 59.1% and 40%, respectively. The study concluded that Z score diagnostic thresholds provided poor predictive value for the presence of secondary osteoporosis (PUBMED:23225281).
These findings suggest that while Z scores may have some utility in predicting secondary osteoporosis, particularly in women, they are not definitive diagnostic tools and should be used in conjunction with other clinical assessments and risk factors. It is important to consider the individual patient's clinical context and other potential risk factors for secondary osteoporosis when interpreting Z scores. |
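For readers interpreting these figures, the standard screening-test definitions (general formulas, not specific to either study) are:

\[ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{PPV} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN} \]

where TP, FP, TN and FN are the true-positive, false-positive, true-negative and false-negative counts. The pattern reported above, high sensitivity (87.5%) with low PPV (29.2%) at the Z score cut-off of -1, is characteristic of a rule that misses few true secondary cases but flags many patients without one; in other words, a cut-off better suited to screening than to confirmation.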
Instruction: A 20-year experience of hepatic resection for melanoma: is there an expanding role?
Abstracts:
abstract_id: PUBMED:24249545
Chemosaturation with percutaneous hepatic perfusion for unresectable metastatic melanoma or sarcoma to the liver: a single institution experience. Background: Patients with unresectable melanoma or sarcoma hepatic metastasis have a poor prognosis with few therapeutic options. Percutaneous hepatic perfusion (PHP), isolating and perfusing the liver with chemotherapy, provides a promising minimally invasive management option. We reviewed our institutional experience with PHP.
Methods: We retrospectively reviewed patients with unresectable melanoma or sarcoma hepatic metastasis treated with PHP from 2008 to 2013 and evaluated therapeutic response, morbidity, hepatic progression free survival (hPFS), and overall survival (OS).
Results: Ten patients were treated with 27 PHPs (median 3). Diagnoses were ocular melanoma (n = 5), cutaneous melanoma (n = 3), unknown primary melanoma (n = 1), and sarcoma (n = 1). Median hPFS was 240 days, and 9 of 10 patients (90%) demonstrated stable disease or a partial response to treatment. At a median follow-up of 11.5 months, 4 of 10 (40%) remain alive. There were no perioperative mortalities. Myelosuppression was the most common morbidity, managed on an outpatient basis with growth factors. The median hospital stay was 3 days.
Conclusions: Patients with metastatic melanoma and sarcoma to the liver have limited treatment options. Our experience with PHP demonstrates promising results with minimal morbidity, and PHP should be considered (pending FDA approval) as a management option for unresectable melanoma or sarcoma hepatic metastasis.
abstract_id: PUBMED:36100850
Analysis of patient's X-ray exposure in hepatic chemosaturation procedures: a single center experience. Background: Hepatic chemosaturation is a technique in which a high dose of the chemotherapeutic agent melphalan is administered directly into the liver while limiting systemic side effects. We reviewed our institutional experience regarding patient's X-ray exposure caused by the procedure.
Methods: Fifty-five procedures, performed between 2016 and 2020 in 18 patients by three interventional radiologists (radiologist), were analyzed regarding the patient's exposure to radiation. Dose-area-product (DAP) and fluoroscopy time (FT) were correlated with the experience of the radiologist and whether the preprocedural evaluation (CS-EVA) and the procedure were performed by the same radiologist. Additionally, the impact of previous liver surgery on DAP/FT was analyzed.
Results: Experienced radiologist require less DAP/FT (50 ± 18 Gy*cm2/13.2 ± 3.84 min vs. 69 ± 20 Gy*cm2/15.77 ± 7.82 min; p < 0.001). Chemosaturations performed by the same radiologist who performed CS-EVA required less DAP/FT (41 ± 12 Gy*cm2/11.46 ± 4.41 min vs. 62 ± 11 Gy*cm2/15.55 ± 7.91 min; p < 0.001). Chemosaturations in patients with prior liver surgery with involvement of the inferior cava vein required significantly higher DAP/FT (153 ± 27 Gy*cm2/25.43 ± 4.57 min vs. 56 ± 25 Gy*cm2/14.44 ± 7.55 min; p < 0.001).
Conclusion: There is a significant learning curve regarding the procedure of hepatic chemosaturation. Due to dose reduction the evaluation and chemosaturation therapy should be performed by the same radiologist. Procedures in patients with previous liver surgery require higher DAP/FT.
abstract_id: PUBMED:24952441
A 20-year experience of hepatic resection for melanoma: is there an expanding role? Background: Melanoma liver metastasis is most often fatal, with a 4- to 6-month median overall survival (OS). Over the past 20 years, surgical techniques have improved in parallel with more effective systemic therapies. We reviewed our institutional experience of hepatic melanoma metastases.
Study Design: Overall and disease-specific survivals were calculated from hepatic metastasis diagnosis. Potential prognostic factors including primary tumor type, depth, medical treatment response, location, and surgical approach were evaluated.
Results: Among 1,078 patients with melanoma liver metastases treated at our institution since 1991, 58 (5.4%) received surgical therapy (resection with or without ablation). Median and 5-year OS were 8 months and 6.6%, respectively, for the 1,016 nonsurgical patients vs 24.8 months and 30%, respectively, for the surgical patients (p < 0.001). Median OS was similar among patients undergoing ablation (with or without resection) relative to those undergoing surgery alone. On multivariate analysis of surgical patients, completeness of surgical therapy (hazard ratio [HR] 3.4, 95% CI 1.4 to 8.1, p = 0.007) and stabilization of melanoma on therapy before surgery (HR 0.38, 95% CI 0.19 to 0.78, p = 0.008) predicted OS.
Conclusions: In this largest single-institution experience, patients selected for surgical therapy experienced markedly improved survival relative to those receiving only medical therapy. Patients whose disease stabilized on medical therapy enjoyed particularly favorable results, regardless of the number or size of their metastases. The advent of more effective systemic therapy in melanoma may substantially increase the fraction of patients who are eligible for surgical intervention, and this combination of treatment modalities should be considered whenever it is feasible in the context of a multidisciplinary team.
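The prognostic-factor analysis described in this abstract is a multivariate survival analysis. The sketch below shows, on synthetic data, how such hazard ratios are typically estimated with a Cox proportional-hazards model (using the Python lifelines package; the covariate names, their coding direction and all numbers are illustrative assumptions, not the study data):

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 58  # size of the surgical cohort in the abstract
df = pd.DataFrame({
    "incomplete_resection": (rng.random(n) < 0.3).astype(int),  # hypothetical coding
    "stable_on_therapy": (rng.random(n) < 0.5).astype(int),
})
# Hazards scaled so that exp(coef) loosely echoes the reported HRs
# (ln 3.4 ~= 1.22 and ln 0.38 ~= -0.97)
hazard = 0.03 * np.exp(1.22 * df["incomplete_resection"]
                       - 0.97 * df["stable_on_therapy"])
df["months"] = rng.exponential(1.0 / hazard)
df["event"] = (rng.random(n) < 0.8).astype(int)  # 1 = death observed; crude censoring

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) is the hazard ratio

With only 58 patients, confidence intervals around such hazard ratios are wide, which is consistent with the broad intervals quoted in the abstract (e.g., 95% CI 1.4 to 8.1).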
abstract_id: PUBMED:3937848
Adjuvant chemotherapy of intraocular malignant melanoma: apropos of 20 cases. The frequency of haematogenously propagated hepatic metastases from intra-ocular melanoma led us to treat 20 patients with adjuvant chemotherapy: 19 patients started chemotherapy during the month following enucleation and one patient a year after enucleation. With a median follow-up of 6 years, 17 patients (80%) are disease-free. Three patients developed hepatic metastases at 24, 24 and 30 months, respectively. The results suggest that adjuvant chemotherapy is effective in preventing metastases from intra-ocular melanoma.
abstract_id: PUBMED:19760961
Liver metastases from uveal melanoma: clinical experience of hepatic arterial infusion of cisplatin, vinblastine and dacarbazine. Background/aims: The liver is the most common site of metastases in uveal melanoma. Hepatic arterial infusion of cytotoxic agents may be an effective method of controlling the disease in these patients.
Methodology: A retrospective analysis of 10 patients with hepatic metastases of uveal melanoma treated with hepatic arterial infusion (HAI) of the combination of cisplatin, vinblastine and dacarbazine was performed.
Results: Two patients had an objective response, 4 patients had stable disease and 4 patients had progressive disease. The median survival from the start of therapy was 16 months (range 5-69). HAI of second-line agents was of limited effectiveness. All patients with progressive disease died within one year, while all patients with a clinical benefit response (objective response or stable disease) survived more than one year.
Conclusions: The present data demonstrate, in agreement with the literature, the effectiveness of HAI in the treatment of uveal melanoma metastatic to the liver. HAI of the combination of cisplatin, vinblastine and dacarbazine seems to have similar efficacy to other HAI regimens.
abstract_id: PUBMED:3702027
Treatment of hepatic metastases in ocular melanoma. Embolization of the hepatic artery with polyvinyl sponge and cisplatin. Patients with ocular melanoma have a high incidence of hepatic metastases, which primarily determine their length of survival. In an attempt to control the neoplastic disease in the liver, embolization of the hepatic artery with a combination of polyvinyl sponge (Ivalon) and a suspension of cisplatin was performed in two patients with hepatic metastases from ocular melanoma. Dramatic regression of the hepatic metastases, lasting 19 and six months, occurred in these two patients after one or two such treatments. Our preliminary, albeit successful, experience with this therapeutic approach suggests that it may offer relatively prolonged periods of remission and warrants further investigation.
abstract_id: PUBMED:36232829
Role of Fibroblast Growth Factors in the Crosstalk of Hepatic Stellate Cells and Uveal Melanoma Cells in the Liver Metastatic Niche. Hepatic metastasis is the critical factor determining tumor-associated mortality in different types of cancer. This is particularly true for uveal melanoma (UM), which almost exclusively metastasizes to the liver. Hepatic stellate cells (HSCs) are the precursors of tumor-associated fibroblasts and support the growth of metastases. However, the underlying mechanisms are widely unknown. Fibroblast growth factor (FGF) signaling is dysregulated in many types of cancer. The aim of this study was to analyze the pro-tumorigenic effects of HSCs on UM cells and the role of FGFs in this crosstalk. Conditioned medium (CM) from activated human HSCs significantly induced proliferation together with enhanced ERK and JNK activation in UM cells. An in silico database analysis revealed that there are almost no mutations of FGF receptors (FGFR) in UM. However, high FGFR expression was found to be associated with poor survival for UM patients. In vitro, the pro-tumorigenic effects of HSC-CM on UM cells were abrogated by a pharmacological inhibitor (BGJ398) of FGFR1/2/3. The expression analysis revealed that the majority of paracrine FGFs, including FGF9, are expressed by HSCs but not by UM cells. Furthermore, the immunofluorescence analysis indicated HSCs as a cellular source of FGF9 in hepatic metastases of UM patients. Treatment with recombinant FGF9 significantly enhanced the proliferation of UM cells, and this effect was efficiently blocked by the FGFR1/2/3 inhibitor BGJ398. Our study indicates that FGF9 released by HSCs promotes the tumorigenicity of UM cells, and thus suggests FGF9 as a promising therapeutic target in hepatic metastasis.
abstract_id: PUBMED:30762212
Hepatic angiomyolipoma with early drainage veins into the hepatic and portal vein. Hepatic angiomyolipoma (AML) is a rare stromal tumor composed of variable admixtures of thick-walled vessels, smooth muscles and adipose tissue. One of the specific radiological findings of hepatic AML is an early drainage vein noted via enhanced computed tomography (CT). We report a case of hepatic AML showing early drainage veins into both the hepatic and portal vein. The case involved a 46-year-old woman who was referred to our hospital because of a giant hepatic tumor. CT revealed well-enhanced 14 cm and 1 cm tumors in the left and right lobes, respectively. Magnetic resonance imaging demonstrated the existence of adipose tissues in the larger tumor. Hepatic arteriography revealed early drainage veins draining into both the hepatic and portal vein. Based on a diagnosis of hepatic AML, left hepatectomy and partial hepatectomy were performed. Pathology revealed both tumors as hepatic AML based on human melanoma black-45 immuno-positivity. Hepatic AML with early drainage veins into both the hepatic and portal vein is rare. The dilated and retrogressive vein drains the abundant arterial blood flow of the tumor. The finding of early drainage veins into not only the hepatic vein but also the portal vein should be helpful for diagnosing hepatic AMLs.
abstract_id: PUBMED:33372814
Over 12 Years Single Institutional Experience Performing Percutaneous Hepatic Perfusion for Unresectable Liver Metastases. Patients with unresectable hepatic metastases from uveal or ocular melanoma are challenging to treat and have an overall poor prognosis. Although significant advances in systemic therapies have been made over the past decade, metastatic disease to the liver, especially from uveal melanoma, continues to carry a poor prognosis. Percutaneous hepatic perfusion (PHP) is a safe, viable treatment option for these patients. PHP utilizes high-dose chemotherapy delivered directly to the liver while minimizing systemic exposure and can be repeated up to 6 times. Isolation of the hepatic vasculature with a double-balloon catheter allows high-concentration cytotoxic therapy to be administered with minimal systemic adverse effects. A detailed description of the multidisciplinary treatment protocol used at an institution with over 12 years of experience is provided and recommendations are given. A dedicated team comprising surgical or medical oncology, interventional radiology, anesthesiology and a perfusionist allows PHP to be performed repeatedly as a safe treatment strategy for unresectable hepatic metastases.
abstract_id: PUBMED:27017869
Hepatic Perfusion Therapy. Isolated hepatic perfusion uses the unique vascular supply of hepatic malignancies to deliver cytotoxic chemotherapy. The procedure involves vascular isolation of the liver, delivery of chemotherapy via the hepatic artery, and extraction from the retrohepatic vena cava. Benefits of hepatic perfusion have been observed in hepatic metastases of ocular melanoma and colorectal cancer, and in primary hepatocellular carcinoma. Percutaneous and prophylactic perfusions are avenues of ongoing research.
Answer: The abstract from the 20-year experience of hepatic resection for melanoma (PUBMED:24952441) suggests that there may be an expanding role for surgical intervention in the treatment of hepatic melanoma metastases. The study reviewed the outcomes of patients with melanoma liver metastases treated at their institution since 1991, with a focus on those who received surgical therapy, which included resection with or without ablation. The results indicated that patients selected for surgical therapy experienced significantly improved survival compared to those receiving only medical therapy. The median overall survival (OS) for nonsurgical patients was 8 months with a 5-year OS of 6.6%, whereas for surgical patients, the median OS was 24.8 months with a 5-year OS of 30%.
The study also found that the completeness of surgical therapy and stabilization of melanoma on therapy before surgery were significant predictors of overall survival. These findings suggest that with advancements in surgical techniques and more effective systemic therapies, there is a potential for an increased number of patients with melanoma liver metastases to be eligible for surgical intervention. The study concludes that combining surgical and medical therapies should be considered whenever feasible within a multidisciplinary team approach, as it may lead to markedly improved survival outcomes for these patients.
This expanding role for hepatic resection in melanoma is supported by the advent of more effective systemic therapies, which may increase the fraction of patients who can benefit from surgical intervention. The study emphasizes the importance of a multidisciplinary approach in the management of melanoma liver metastases, suggesting that the combination of surgical and medical treatments could become a more common strategy in the future. |
Instruction: Inflammaging: should this term be suitable for age related macular degeneration too?
Abstracts:
abstract_id: PUBMED:9875992
Location and stability of a newly established eccentric retinal locus suitable for reading, achieved through training of patients with a dense central scotoma. Six patients, median age 71 years, with a dense central scotoma in one eye and a median visual acuity of 0.06 (20/330) in the same eye, were all (100%) shown, by means of fundus photography with a fixation target, to preferentially use an unfavorable retinal locus for fixation, i.e., one within the lesion (scotoma). None of the patients was able to read novel text with the affected eye. A computer and video display system were used to determine the most suitable area above or below the visual field scotoma (below or above the retinal lesion) for reading and the magnification needed at this eccentricity. The same setup was also used for introductory training in reading single words as well as scrolled text, with the aim of establishing a preferred retinal locus (PRL) at a favorable, eccentric position, the trained retinal locus (TRL). Thereafter, the patients were provided with strong positive lenses (median power, 40 D) for reading printed text at a very short reading distance (median, 2.5 cm), first single words, above and below which help lines were printed to facilitate eccentric fixation, and finally novel text. The total training time was 4 to 5 h. Thereafter, fundus photography showed that five of the patients (83%) used their TRL as their PRL. Reading speed was 71 words per minute (median). Our results seem to indicate that an eccentric PRL favorable for effective reading can be established through training and that a fairly low number of training sessions is required.
abstract_id: PUBMED:24202618
Inflammaging: should this term be suitable for age related macular degeneration too? Introduction: Inflammaging is a phenomenon triggered by the conjunction of chronic repetitive and subclinical inflammation from external aggressors and internal inflammatory mechanisms due to the progressive degradation of systems such as the mitochondrial function. Age-related macular degeneration is the leading cause of blindness and visual impairment in patients older than 60 years in developed countries.
Discussion: Remarkable correlations have been documented between common or rare immunological/inflammatory gene polymorphisms and AMD, unequivocally indicating the involvement of inflammation and immune-mediated processes (complement activation) in the pathogenesis of this disease.
Conclusion: Altogether these factors also drive this pathologic condition under the general heading of "Inflammaging".
abstract_id: PUBMED:17721591
Fluorescence characterisation of multiply-loaded anti-HER2 single chain Fv-photosensitizer conjugates suitable for photodynamic therapy. We report the synthesis, spectroscopic properties and intracellular imaging of recombinant antibody single chain fragment (scFv) conjugates with photosensitizers used for photodynamic therapy of cancer (PDT). Two widely-studied photosensitizers were selected: preclinical pyropheophorbide-a (PPa) and verteporfin (VP), which has been clinically approved for the treatment of age-related macular degeneration (Visudyne). Pyropheophorbide-a and verteporfin were conjugated to an anti-HER2 scFv, with on average ten photosensitizer molecules per scFv and a small contribution (<or=20%) from non-covalently bound molecules. Confocal fluorescence microscopy demonstrates good cellular uptake of the PPa conjugate by the HER2-positive cell line SKOV-3, while negligible uptake is demonstrated for the HER2-negative cell line KB. For the VP conjugate, an increased rate of cellular uptake and prolonged retention in SKOV-3 cells are observed compared to the free photosensitizer. In clinical applications this could provide increased potency and the desired selectivity towards malignant tissue, leaving surrounding healthy tissue unharmed and reducing skin photosensitivity. The present study highlights the usefulness of photosensitizer immunoconjugates with scFvs for targeted PDT.
abstract_id: PUBMED:30247475
Engineering Transplantation-suitable Retinal Pigment Epithelium Tissue Derived from Human Embryonic Stem Cells. Several pathological conditions of the eye affect the functionality and/or the survival of the retinal pigment epithelium (RPE). These include some forms of retinitis pigmentosa (RP) and age-related macular degeneration (AMD). Cell therapy is one of the most promising therapeutic strategies proposed to cure these diseases, with encouraging preliminary results already obtained in humans. However, the method of preparation of the graft has a significant impact on its functional outcomes in vivo. Indeed, RPE cells grafted as a cell suspension are less functional than the same cells transplanted as a retinal tissue. Herein, we describe a simple and reproducible method to engineer RPE tissue and prepare it for in vivo implantation. RPE cells derived from human pluripotent stem cells are seeded on a biological support, the human amniotic membrane (hAM). Compared to artificial scaffolds, this support has the advantage of having a basement membrane that is close to the Bruch's membrane to which endogenous RPE cells are attached. However, the hAM is not easy to manipulate, and we developed several strategies for its proper culturing and preparation for grafting in vivo.
abstract_id: PUBMED:31122362
Rod-Mediated Dark Adaptation as a Suitable Outcome for Early and Intermediate Age-Related Macular Degeneration. N/A
abstract_id: PUBMED:36769075
High-Capacity Mesoporous Silica Nanocarriers of siRNA for Applications in Retinal Delivery. The main cause of subretinal neovascularisation in wet age-related macular degeneration (AMD) is abnormal expression of vascular endothelial growth factor (VEGF) in the retinal pigment epithelium (RPE). Current approaches for the treatment of AMD present considerable issues that could be overcome by encapsulating anti-VEGF drugs in suitable nanocarriers, thus providing better penetration, higher retention times, and sustained release. In this work, the ability of large-pore mesoporous silica nanoparticles (LP-MSNs) to transport and protect nucleic acid molecules is exploited to develop an innovative LP-MSN-based nanosystem for the topical administration of anti-VEGF siRNA molecules to RPE cells. siRNA is loaded into the LP-MSN mesopores, while the external surface of the nanodevices is functionalised with polyethylenimine (PEI) chains that allow the controlled release of siRNA and promote endosomal escape to facilitate cytosolic delivery of the cargo. The successful results obtained for VEGF silencing in ARPE-19 RPE cells demonstrate that the designed nanodevice is suitable as an siRNA transporter.
abstract_id: PUBMED:20462213
Biochemical and structural analysis of the binding determinants of a vascular endothelial growth factor receptor peptidic antagonist. Cyclic peptide antagonist c[YYDEGLEE]-NH(2), which disrupts the interaction between vascular endothelial growth factor (VEGF) and its receptors (VEGFRs), represents a promising tool in the fight against cancer and age-related macular degeneration. Furthermore, coupled to a cyclen derivative, this ligand could be used as a medicinal imaging agent. Nevertheless, before generating such molecular probes, some preliminary studies need to be undertaken in order to define the most suitable positions for introduction of the cyclen macrocycle. Through an Ala-scan study on this peptide, we identified its binding motif, and an NMR study highlighted its binding sites on the VEGFR-1D2 Ig-like domain. Guided by the structural relationships deduced from the effect of the peptides on endothelial cells, new peptides were synthesized and grafted on beads. Used in a pull-down assay, these new peptides trap the VEGFRs, thus confirming that the identified amino acid positions are suitable for further derivatization.
abstract_id: PUBMED:12955441
Determination of the concentration distribution of macular pigment from reflection and fluorescence images Objective: The macular pigment xanthophyll protects the macula in two ways: firstly, it absorbs hazardous blue light and, secondly, it acts as a radical scavenger. A low concentration of xanthophyll may be regarded as a risk factor for age-related macular degeneration (AMD). We therefore investigated a simple method, suitable for patient screening, to determine the xanthophyll concentration at the fundus.
Method: The local distribution of xanthophyll density was determined from monochromatic blue reflection images and autofluorescence images of the fundus in 18 healthy volunteers (mean age: 23.9 years). The significance of the parameters maximal, global, and mean concentration was compared.
Results: The maximal optical density of xanthophyll determined from reflection images was 0.29 ± 0.08 (mean for all test persons), which is in good agreement with literature data. The total xanthophyll concentration, which is proportional to the maximal density, appeared appropriate for describing a person's overall xanthophyll status. Because of the low intensity of autofluorescence images, these are less useful for determining the xanthophyll concentration.
Conclusions: Because of its simplicity, the determination of xanthophyll concentration as described here can be performed by every ophthalmologist using a fundus camera and is, therefore, suitable as a screening method.
abstract_id: PUBMED:27754400
Development of an Advanced HPLC-MS/MS Method for the Determination of Carotenoids and Fat-Soluble Vitamins in Human Plasma. The concentration of carotenoids and fat-soluble vitamins in human plasma may play a significant role in numerous chronic diseases such as age-related macular degeneration and some types of cancer. Although these compounds are of utmost interest for human health, methods for their simultaneous determination are scarce. A new high-pressure liquid chromatography (HPLC)-tandem mass spectrometry (MS/MS) method for the quantification of selected carotenoids and fat-soluble vitamins in human plasma was developed, validated, and then applied in a pilot dietary intervention study with healthy volunteers. In 50 min, 16 analytes were separated with excellent resolution and suitable MS signal intensity. The proposed HPLC-MS/MS method led to improvements in the limits of detection (LOD) and quantification (LOQ) for all analyzed compounds compared to the most often used HPLC-DAD methods, in some cases being more than 100-fold lower. LOD values were between 0.001 and 0.422 µg/mL and LOQ values ranged from 0.003 to 1.406 µg/mL, according to the analyte. The accuracy, precision, and stability met the acceptance criteria of the AOAC (Association of Official Analytical Chemists) International. According to these results, the described HPLC-MS/MS method is adequately sensitive, repeatable and suitable for the large-scale analysis of compounds in biological fluids.
abstract_id: PUBMED:38105897
Adeno-associated virus vectors for retinal gene therapy in basic research and clinical studies. Retinal degenerative diseases, including glaucoma, age-related macular degeneration, diabetic retinopathy, and a broad range of inherited retinal diseases, are leading causes of irreversible vision loss and blindness. Gene therapy is a promising and fast-growing strategy to treat both monogenic and multifactorial retinal disorders. Vectors for gene delivery are crucial for efficient and specific transfer of therapeutic gene(s) into target cells. AAV vectors are ideal for retinal gene therapy due to their inherent advantages in safety, gene expression stability, and amenability for directional engineering. The eye is a highly compartmentalized organ composed of multiple disease-related cell types. To determine a suitable AAV vector for a specific cell type, the route of administration and choice of AAV variant must be considered together. Here, we provide a brief overview of AAV vectors for gene transfer into important ocular cell types, including retinal pigment epithelium cells, photoreceptors, retinal ganglion cells, Müller glial cells, ciliary epithelial cells, trabecular meshwork cells, vascular endothelial cells, and pericytes, via distinct injection methods. By listing suitable AAV vectors in basic research and (pre)clinical studies, we aim to highlight the progress and unmet needs of AAV vectors in retinal gene therapy.
Answer: Yes, the term "inflammaging" should be suitable for age-related macular degeneration (AMD) as well. The concept of inflammaging refers to the phenomenon triggered by chronic repetitive and subclinical inflammation from external aggressors and internal inflammatory mechanisms due to the progressive degradation of systems such as mitochondrial function. Age-related macular degeneration is the leading cause of blindness and visual impairment in patients older than 60 years in developed countries. There are remarkable correlations documented between common or rare immunological/inflammatory gene polymorphisms and AMD, unequivocally indicating the involvement of inflammation and immune-mediated processes, such as complement activation, in the pathogenesis of this disease. Therefore, these factors drive the pathological condition of AMD under the general heading of "Inflammaging" (PUBMED:24202618). |
Instruction: Do all sub acute stroke patients benefit from robot-assisted therapy?
Abstracts:
abstract_id: PUBMED:33285652
Effects of short-term upper limb robot-assisted therapy on the rehabilitation of sub-acute stroke patients. Background: Robot-assisted therapy (RT) has become a promising stroke rehabilitation intervention.
Objective: To examine the effects of short-term upper limb RT on the rehabilitation of sub-acute stroke patients.
Methods: Subjects were randomly assigned to the RT group (n = 23) or the conventional rehabilitation (CR) group (n = 22). All subjects received conventional rehabilitation therapy for 30 minutes twice a day for 2 weeks. In addition, the RT group received RT for 30 minutes twice a day for 2 weeks. Outcomes before treatment (T0), at 2 weeks (T1), and at 1-month follow-up (T2) were evaluated using the upper limb motor function test of the Fugl-Meyer assessment (FMA), the Motricity Index (MI), the Modified Ashworth Scale (MAS), the Functional Independence Measure (FIM), and the Barthel Index (BI).
Results: There were significant improvements in motor function scales (P < 0.001 for FMA and MI) and activities of daily living (P < 0.001 for FIM and BI), but not in muscle tone (MAS, P > 0.05), in both the RT and CR groups. Compared to the CR group, the RT group showed greater improvements in motor function and activities of daily living (P < 0.05 for FMA, MI, FIM, BI) at T1 and T2. There was no significant difference between the two groups in muscle tone (MAS, P > 0.05).
Conclusions: RT may be a useful tool for sub-acute stroke patients' rehabilitation.
abstract_id: PUBMED:25931706
Effects of upper limb robot-assisted therapy in the rehabilitation of stroke patients. [Purpose] The aim of this study was to examine the effects of upper limb robot-assisted therapy in the rehabilitation of stroke patients. [Subjects and Methods] Fifteen stroke patients with no visual or cognitive problems were enrolled. All subjects received robot-assisted therapy and comprehensive rehabilitation therapy for 30 minutes each, i.e., conventional therapy plus an additional half hour of robot therapy per weekday. The patients participated in a total of 20 sessions, each lasting 60 minutes (conventional therapy 30 min, robot-assisted therapy 30 min), held 5 days a week for 4 weeks. [Results] The patients showed a significant difference between before and after the intervention in smoothness and reach error on the point-to-point test, circle size and independence of the circle on the circle test, and hold deviation on the playback static test. On the other hand, no significant difference was observed in displacement on the round dynamic test. The patients also showed significant improvement in the Fugl-Meyer Assessment and Modified Barthel Index after the intervention. [Conclusion] These kinematic factors can provide good information when analyzing the upper limb function of stroke patients in robot-assisted therapy. Nevertheless, further research on technology-based kinematic information will be necessary.
abstract_id: PUBMED:25420902
Do all sub acute stroke patients benefit from robot-assisted therapy? A retrospective study. Purpose: Upper limb robot-assisted rehabilitation is a highly intensive therapy, mainly recommended after stroke. Whether robotic therapy is suitable for subacute patients with severe impairments including cognitive disorders is unknown. This retrospective study explored factors impacting on motor performance achieved in a 16-session robotic training combined with standard rehabilitation.
Methods: Seventeen subacute inpatients (age 53 ± 18 years; 49 ± 26 days post-stroke) were assessed at baseline using upper extremity motor impairment scales, the Functional Independence Measure, and aphasia and neglect scores. Number of movements and robotic assistance were compared between Sessions 2 (S2), 8 (S8), and 16 (S16), and the Motricity Index between pre- and post-treatment. Correlation analyses explored predictors of motor performance.
Results: Overall, number of movements and Motricity Index increased significantly while robot-assistance decreased. The mean number of movements per session correlated positively with baseline motor capacities but not with age, aphasia and neglect. However, the increase in Motricity index correlated negatively with baseline Motricity index and the increase in the number of movements correlated negatively with the number of movements at S2.
Conclusion: High intensity robot-assisted training may be associated with motor improvement in subacute hemiparesis. More severely impaired patients may derive greater benefit from robot-assisted training; age, aphasia and neglect do not represent exclusion criteria.
abstract_id: PUBMED:24396811
Robot-assisted Therapy in Stroke Rehabilitation. Research into rehabilitation robotics has grown rapidly, and the number of therapeutic rehabilitation robots has expanded dramatically during the last two decades. Robotic rehabilitation therapy can deliver high-dosage and high-intensity training, making it useful for patients with motor disorders caused by stroke or spinal cord disease. Robotic devices used for motor rehabilitation include end-effector and exoskeleton types; herein, we review the clinical use of both types. One application of robot-assisted therapy is improvement of gait function in patients with stroke. Both end-effector and exoskeleton devices have proven to be effective complements to conventional physiotherapy in patients with subacute stroke, but there is no clear evidence that robotic gait training is superior to conventional physiotherapy in patients with chronic stroke or when delivered alone. In another application, upper limb motor function training in patients recovering from stroke, robot-assisted therapy was comparable or superior to conventional therapy in patients with subacute stroke. With end-effector devices, the intensity of therapy was the most important determinant of upper limb motor recovery. However, there is insufficient evidence for the use of exoskeleton devices for upper limb motor function in patients with stroke. For rehabilitation of hand motor function, both end-effector and exoskeleton devices showed similar or additive effects relative to conventional therapy in patients with chronic stroke. The present evidence supports the use of robot-assisted therapy for improving motor function in stroke patients as an additional therapeutic intervention in combination with conventional rehabilitation therapies. Nevertheless, there will be substantial opportunities for technical development in the near future.
abstract_id: PUBMED:32592282
Robot-assisted therapy for upper-limb rehabilitation in subacute stroke patients: A systematic review and meta-analysis. Background: Stroke survivors often experience upper-limb motor deficits and achieve limited motor recovery within six months after the onset of stroke. We aimed to systematically review the effects of robot-assisted therapy (RT) in comparison to usual care on the functional and health outcomes of subacute stroke survivors.
Methods: Randomized controlled trials (RCTs) published between January 1, 2000 and December 31, 2019 were identified from six electronic databases. Pooled estimates of standardized mean differences for five outcomes, including motor control (primary outcome), functional independence, upper extremity performance, muscle tone, and quality of life were derived by random effects meta-analyses. Assessments of risk of bias in the included RCTs and the quality of evidence for every individual outcomes were conducted following the guidelines of the Cochrane Collaboration.
Results: Eleven RCTs involving 493 participants were included for review. At post-treatment, the effects of RT compared to usual care on motor control, functional independence, upper extremity performance, muscle tone, and quality of life were nonsignificant (all ps ranged from .16 to .86). The quality of this evidence was generally rated as low-to-moderate. Fewer than three RCTs assessed the treatment effects beyond post-treatment, and the results remained nonsignificant.
Conclusion: Robot-assisted therapy produced benefits similar, but not significantly superior, to those from usual care for improving functioning and disability in patients diagnosed with stroke within six months. Apart from using head-to-head comparison to determine the effects of RT in subacute stroke survivors, future studies may explore the possibility of conducting noninferiority or equivalence trials, given that the less labor-intensive RT may offer important advantages over currently available standard care, in terms of improved convenience, better adherence, and lower manpower cost.
abstract_id: PUBMED:38337500
Benefits of Robot-Assisted Upper-Limb Rehabilitation from the Subacute Stage after a Stroke of Varying Severity: A Multicenter Randomized Controlled Trial. Background: The aim of this study was to compare the clinical effectiveness of robot-assisted therapy with that of conventional occupational therapy according to the onset and severity of stroke.
Methods: In this multicenter randomized controlled trial, stroke patients were randomized (1:1) to receive robot-assisted therapy or conventional occupational therapy. The robot-assisted training group received 30 min of robot-assisted therapy twice daily plus 30 min of conventional occupational therapy daily, while the conventional therapy group received 90 min of occupational therapy daily. Therapy was conducted 5 days/week for 4 weeks. The primary outcome was the Wolf Motor Function Test (WMFT) score after 4 and 8 weeks of therapy.
Results: Overall, 113 and 115 patients received robot-assisted and conventional therapy, respectively. The WMFT score after robot-assisted therapy was not significantly better than that after conventional therapy, but there were significant improvements in the Motricity Index (trunk) and the Fugl-Meyer Assessment. After robot-assisted therapy, wrist strength significantly improved in the subacute or moderate-severity group of stroke patients.
Conclusions: Robot-assisted therapy improved the upper-limb functions and activities of daily living (ADL) performance as much as conventional occupational therapy. In particular, it showed signs of more therapeutic effectiveness in the subacute stage or moderate-severity group.
abstract_id: PUBMED:34842770
The Route of Motor Recovery in Stroke Patients Driven by Exoskeleton-Robot-Assisted Therapy: A Path-Analysis. Background: Exoskeleton-robot-assisted therapy is known to positively affect the recovery of arm functions in stroke patients. However, there is a lack of evidence regarding which variables might favor a better outcome and how this can be modulated by other factors. Methods: In this within-subject study, we evaluated the efficacy of a robot-assisted rehabilitation system in the recovery of upper limb functions. We performed a path analysis using a structural equation modeling approach in a large sample of 102 stroke patients (age 63.6 ± 13.1 years; 61% men) in the post-acute phase. They underwent 7 weeks of bilateral arm training assisted by an exoskeleton robot combined with a conventional treatment (consisting of simple physical activity together with occupational therapy). The upper extremity section of the Fugl-Meyer (FM-UE) scale at admission was used as a predictor of outcome, whereas age, gender, side of the lesion, days from the event, pain scale, duration of treatment, and number of sessions as mediators. Results: FM-UE at admission was a direct predictor of outcome, as measured by the motricity index of the contralateral upper limb and trunk control test, without any other mediating factors. Age, gender, days from the event, side of lesion, and pain scales were independently associated with outcomes. Conclusions: To the best of our knowledge, this is the first study assessing the relationship between clinical variables and outcomes induced by robot-assisted rehabilitation with a path-analysis model. We define a new route for motor recovery of stroke patients driven by exoskeleton-robot-assisted therapy, highlighting the role of FM-UE at admission as a useful predictor of outcome, although other variables need to be considered in the time-course of disease.
abstract_id: PUBMED:30202655
Translational effects of robot-mediated therapy in subacute stroke patients: an experimental evaluation of upper limb motor recovery. Robot-mediated therapies enhance the recovery of post-stroke patients with motor deficits. Repetitive and repeatable exercises are essential for rehabilitation following brain damage or other disorders that impact the central nervous system, as plasticity permits the nervous system to reorganize its neural structure, fostering motor relearning. Although many studies claim the validity of robot-mediated therapy in post-stroke patient rehabilitation, it is still difficult to assess to what extent its adoption improves the efficacy of traditional therapy in daily life, in part because most of the studies have involved planar robots. In this paper, we report the effects of a 20-session rehabilitation project involving the Armeo Power robot, an assistive exoskeleton for performing 3D upper limb movements, in addition to conventional rehabilitation therapy, on 10 subacute stroke survivors. Patients were evaluated through clinical scales and a kinematic assessment of the upper limbs, both pre- and post-treatment. A set of indices based on the patients' 3D kinematic data, gathered from an optoelectronic system, was calculated. Statistical analysis showed a remarkable difference in most parameters between pre- and post-treatment. Significant correlations between the kinematic parameters and clinical scales were found. Our findings suggest that 3D robot-mediated rehabilitation, in addition to conventional therapy, could represent an effective means for the recovery of upper limb disability. Kinematic assessment may represent a valid tool for objectively evaluating the efficacy of the rehabilitation treatment.
abstract_id: PUBMED:35765084
Effects of robot-assisted therapy on upper limb and cognitive function in patients with stroke: study protocol of a randomized controlled study. Background: Impairments in upper limb motor function and cognitive ability are major health problems experienced by stroke patients, necessitating the development of novel and effective treatment options in stroke care. The aim of this study is to examine the effects of robot-assisted therapy on improving upper limb and cognitive functions in stroke patients.
Methods: This will be a single-blinded, 2-arm, parallel-design randomized controlled trial that will include 86 acute and subacute stroke patients recruited from a single clinical hospital in Shanghai, China. Upon meeting the study eligibility criteria, participants will be randomly assigned to receive either robot-assisted therapy or conventional therapy, with both interventions conducted over a 6-week period in a clinical rehabilitation setting. In addition to comprehensive rehabilitation, the robot-assisted therapy group will receive a 30-min Armguider robot-assisted therapy intervention 5 days a week. Primary efficacy outcomes will include the Fugl-Meyer Assessment for Upper Extremity (FMA-UE) and the Mini-Mental Status Examination (MMSE). Secondary outcomes will include the Trail Making Test (TMT), Auditory Verbal Learning Test (AVLT), Digit Symbol Substitution Test (DSST), and Rey-Osterrieth Complex Figure Test (ROCFT). All trial outcomes will be assessed at baseline and at 6-week follow-up. Intention-to-treat analyses will be performed to examine changes from baseline in the outcomes. Adverse events will be monitored throughout the trial period.
Discussion: This will be the first randomized controlled trial aimed at examining the effects of robot-assisted therapy on upper limb and cognitive functions in acute and subacute stroke patients. Findings from the study will contribute to our understanding of using a novel robotic rehabilitation approach to stroke care and rehabilitation.
Trial Registration: Chinese Clinical Trial Registry ChiCTR2100050856 . Registered on 5 September 2021.
abstract_id: PUBMED:33739436
Effects of robot-assisted training on balance function in patients with stroke: A systematic review and meta-analysis. Objective: To investigate the effectiveness of robot-assisted therapy on balance function in stroke survivors.
Data Sources: PubMed, the Cochrane Library, Embase and China National Knowledge Infrastructure databases were searched systematically for relevant studies.
Study Selection: Randomized controlled trials reporting robot-assisted therapy on balance function in patients after stroke were included.
Data Extraction: Information on study characteristics, demographics, intervention strategies, and outcome measures was extracted by 2 reviewers.
Data Synthesis: A total of 19 randomized trials fulfilled the inclusion criteria and 13 out of 19 were included in the meta-analysis. Analysis revealed that robot-assisted therapy significantly improved balance function assessed by berg balance scale (weighted mean difference (WMD) 3.58, 95% confidence interval (95% CI) 1.89-5.28, p < 0.001) compared with conventional therapy. Secondary analysis indicated that there was a significant difference in balance recovery between the conventional therapy and robot-assisted therapy groups in the acute/subacute stages of stroke (WMD 5.40, 95% CI 3.94-6.86, p < 0.001), while it was not significant in the chronic stages. With exoskeleton devices, the balance recovery in robot-assisted therapy groups was significantly better than in the conventional therapy groups (WMD 3.73, 95% CI 1.83-5.63, p < 0.001). Analysis further revealed that a total training time of more than 10 h can significantly improve balance function (WMD 4.53, 95% CI 2.31-6.75, p < 0.001). No publication bias or small study effects were observed according to the Cochrane Collaboration tool.
Conclusion: These results suggest that robot-assisted therapy is an effective intervention for improving balance function in stroke survivors.
Answer: Not all subacute stroke patients may benefit equally from robot-assisted therapy (RT). While RT has been shown to be a useful tool for the rehabilitation of subacute stroke patients, with significant improvements in motor function scales and activities of daily living (PUBMED:33285652), the benefits can vary depending on several factors.
A retrospective study indicated that while overall motor performance and Motricity Index improved with high-intensity robot-assisted training, the benefits were more pronounced in patients with more severe impairments. Age, aphasia, and neglect did not appear to be exclusion criteria for the potential benefits of RT (PUBMED:25420902). However, a systematic review and meta-analysis found that the effects of RT compared to usual care on motor control, functional independence, upper extremity performance, muscle tone, and quality of life were nonsignificant, suggesting that RT may offer similar benefits to usual care but not necessarily superior outcomes (PUBMED:32592282).
Moreover, a multicenter randomized controlled trial found that robot-assisted therapy improved upper-limb functions and activities of daily living as much as conventional occupational therapy. It also suggested that RT might be more effective in the subacute stage or for patients with moderate-severity stroke (PUBMED:38337500). Another study using path analysis indicated that the Fugl-Meyer Assessment at admission was a direct predictor of outcome, with other factors such as age, gender, and days from the event also associated with outcomes (PUBMED:34842770).
In summary, while RT can be beneficial for subacute stroke patients, the degree of benefit may vary based on individual patient characteristics, severity of impairment, and other factors. It is not a one-size-fits-all solution, and some patients may derive greater benefit than others. Further research is needed to identify the specific patient populations that would benefit most from RT and to optimize therapy protocols for individual needs. |
Instruction: Is there a difference in neurologic outcome in medical versus early operative management of cervical epidural abscesses?
Abstracts:
abstract_id: PUBMED:24937797
Is there a difference in neurologic outcome in medical versus early operative management of cervical epidural abscesses? Background Context: The ideal management of cervical spine epidural abscess (CSEA), medical versus surgical, is controversial. The medical failure rate and neurologic consequences of delayed surgery are not known.
Purpose: The purpose of this study is to assess the neurologic outcome of patients with CSEA managed medically or with early surgical intervention and to identify the risk factors for medical failure and the consequences of delayed surgery.
Study Design/setting: Retrospective electronic medical record (EMR) review.
Patient Sample: Sixty-two patients with spontaneous CSEA, confirmed with advanced imaging, from a single tertiary medical center from January 2005 to September 2011.
Outcome Measures: Patient data were collected from the EMR, with motor scores (MS) (American Spinal Injury Association, 0-100) recorded pre- and posttreatment. Three treatment groups emerged: medical management without surgery, early surgery, and initial medical management that failed and required delayed surgery.
Methods: Inclusion criteria: spontaneous CSEA based on imaging and intraoperative findings when available, age >18 years, and adequate EMR documentation of the medical decision-making process. Exclusion criteria: postoperative infections, Pott disease, isolated discitis/osteomyelitis, and patients with imaging findings suggestive of CSEA but negative intraoperative findings and cultures.
Results: Of the 62 patients included, 6 were successfully managed medically (Group 1), with an MS increase of 2.3 points (standard deviation [SD] 4.4). Thirty-eight patients were treated with early surgery (Group 2) (average time to operating room 24.4 hours [SD 19.2], with an average MS increase of 11.89 points [SD 19.5]). Eighteen failed medical management (Group 3), requiring delayed surgery (time to OR 7.02 days [SD 5.33]), with a net MS drop of 15.89 (SD 24.9). The medical failure rate was 75%. The MS change between early and delayed surgery was significant (p < .001), favoring early surgery. Risk factors and laboratory data did not predict medical failure or posttreatment MS, owing to the high number of medical failures when the abscess involves the cervical epidural space.
Conclusions: Early surgery results in improved posttreatment MS compared with medical failure and delayed surgery. In our patients, the failure rate of medical management was high, 75%. Based on our results, we recommend early surgical decompression for all CSEA.
abstract_id: PUBMED:30580794
Factors related to post surgical neurologic improvement for cervical spine infection. Background: Cervical spine infections are uncommon but potentially dangerous, having the highest rate of neurological compromise and resulting disability. However, the factors related to surgical success is multiple yet unclear.
Methods: We retrospectively reviewed the medical records of 27 patients (16 men and 11 women) with cervical spine infection who underwent surgical treatment at Chang Gung Memorial Hospital, Linkou branch, between 2001 and 2014. The neurological status, by Frankel classification, was recorded preoperatively and at discharge. Group X had neurologic improvement of at least 1 grade, group Y had unchanged neurologic status, and group Z showed deterioration. We recorded the patient demographic data, presenting symptoms and signs, interval from admission to surgery, surgical procedure, laboratory data, perioperative antibiotic course, pathogens identified, coexisting medical disease, concomitant nonspinal infection, and clinical outcomes. We aimed to evaluate the distinguishing characteristics of patients who improved neurologically after treatment.
Results: The mean age of our cohort was 56.6 years. Anterior cervical discectomy and fusion was the most commonly performed surgical procedure (74.1%). The Frankel neurological status improved in 70.4% (group X, n = 19) and was unchanged in 29.6% (group Y, n = 8). No patients worsened. Motor weakness was the most common neurological deficit (96.3%), followed by sensory abnormalities (37.0%) and bowel/urine incontinence (33.3%). The main difference in presentation between group X and group Y was neck pain (100% vs. 75.0%; p = .02), not fever. Group X had a shorter preoperative antibiotic course (p = .004), interval from admission to operation (p = .02), and hospital stay (p = .01).
Conclusion: Clinicians should be more suspicious of patients who present with neck pain and any neurological involvement, even those without fever, when establishing an early diagnosis. The earlier operative treatment in group X resulted in better neurologic recovery and a shorter hospital stay due to disease improvement.
abstract_id: PUBMED:28457929
Cervical Spondylodiscitis: Presentation, Timing, and Surgical Management in 59 Patients. Background: Cervical spondylodiscitis is thought to carry a significant risk for rapid neurologic deterioration with a poor response to nonsurgical management.
Methods: A retrospective surgical case series of the acute surgical management of cervical spondylodiscitis is reviewed to characterize the neurologic presentation and postoperative neurologic course in a relatively uncommon disease.
Results: Fifty-nine patients were identified (mean age, 59 years [range, 18-83 years; SD ± 13.2 years]) from a single-institution neurosurgical database. The most common levels of radiographic cervical involvement were C4-C5, C5-C6, and C6-C7, in descending order. Overall, statistically significant clinical improvement was noted after surgery (P < 0.05). Spinal cord hyperintensity on T2-weighted magnetic resonance imaging was significantly associated with a worse preoperative neurologic grade (P = 0.036) but did not correlate with a relatively worse neurologic outcome by discharge. No significant difference was noted between potential preoperative predictors (organism cultured, presence of epidural abscess, tobacco use, early surgery within 24 hours of clinical presentation) and the preoperative American Spinal Injury Association injury scale, with the exception of the duration between symptom onset and surgical intervention. A negative correlation between increased preoperative duration of symptoms and the magnitude of motor improvement was observed. Relative to anteroposterior decompression and fusion, anterior treatment alone demonstrated a greater effect on neurologic improvement.
Conclusions: Cervical spondylodiscitis is a rare disease that typically manifests with preoperative motor deficits. Surgery was associated with a significant improvement in motor score by hospital discharge. Significant predictors of neurologic improvement were not observed. Prolonged symptomatic duration was correlated with a significantly lower likelihood of motor score improvement.
abstract_id: PUBMED:24231778
Spinal epidural abscesses: risk factors, medical versus surgical management, a retrospective review of 128 cases. Background Context: Spinal epidural abscess (SEA) is a rare, serious and increasingly frequent diagnosis. Ideal management (medical vs. surgical) remains controversial.
Purpose: The purpose of this study is to assess the impact of risk factors, organisms, location and extent of SEA on neurologic outcome after medical management or surgery in combination with medical management.
Study Design: Retrospective electronic medical record (EMR) review.
Patient Sample: We included 128 consecutive, spontaneous SEA cases from a single tertiary medical center, from January 2005 to September 2011. There were 79 males and 49 females, with a mean age of 52.9 years (range, 22-83).
Outcome Measures: Patient demographics, presenting complaints, radiographic features, pre/post-treatment neurologic status (ASIA motor score [MS], 0-100), treatment (medical vs. surgical), and clinical follow-up were recorded. Neurologic status was determined before treatment and at the last available clinical encounter. Imaging studies were reviewed for the location/extent of pathology.
Methods: Inclusion criteria were a diagnosis of a bacterial SEA based on radiographs and/or intraoperative findings, age greater than 18 years, and adequate EMR. Exclusion criteria were postinterventional infections, Pott's disease, isolated discitis/osteomyelitis, treatment initiated at an outside facility, and imaging suggestive of an SEA but negative intraoperative findings/cultures.
Results: The mean follow-up was 241 days. Presenting complaints were site-specific pain (100%), subjective fevers (50%), and weakness (47%). In this cohort, 54.7% had lumbar, 39.1% thoracic, 35.9% cervical, and 23.4% sacral involvement, spanning an average of 3.85 disc levels. There were 36% ventral, 41% dorsal, and 23% circumferential infections. Risk factors included a history of IV drug abuse (39.1%) and diabetes mellitus (21.9%); 22.7% had no risk factors. Pathogens were methicillin-sensitive Staphylococcus aureus (40%) and methicillin-resistant S. aureus (30%). Location, SEA extent, and pathogen did not impact MS recovery. Fifty-one patients were treated with antibiotics alone (group 1) and 77 with surgery and antibiotics (group 2). Within group 1, 21 patients (41%) failed medical management (progressive MS loss or worsening pain), requiring delayed surgery (group 3). Irrespective of treatment, MS improved by 3.37 points. Thirty patients had successful medical management (MS: pretreatment, 96.5; post-treatment, 96.8). Twenty-one patients failed medical therapy (41%; MS: pretreatment, 99.86, decreasing to 76.2 [mean change, -23.67 points], postoperative improvement to 85.0; net deterioration, -14.86 points). This is significantly worse than the mean improvement with immediate surgery (group 2; MS: pretreatment, 80.32; post-treatment, 89.84; recovery, 9.52 points). Diabetes mellitus, C-reactive protein greater than 115, white blood count greater than 12.5, and positive blood cultures predict medical failure: none of the four parameters, 8.3% failure; one parameter, 35.4% failure; two parameters, 40.2% failure; and three or more parameters, 76.9% failure.
Conclusion: Early surgery improves neurologic outcomes compared with surgical treatment delayed by a trial of medical management. More than 41% of patients treated medically failed management and required surgical decompression. Diabetes, C-reactive protein greater than 115, white blood count greater than 12.5, and bacteremia predict failure of medical management. If an SEA is to be treated medically, great caution and vigilance must be maintained. Otherwise, early surgical decompression, irrigation, and debridement should be the mainstay of treatment.
abstract_id: PUBMED:36629954
ACDF versus corpectomy in octogenarians with cervical epidural abscess: early complications and outcomes with 2 years of follow-up. Purpose: Cervical spinal epidural abscess (CSEA) is a rare condition, manifesting as rapid neurological deterioration and leading to early neurological deficits. Its management remains challenging, especially in patients older than 80 years. Therefore, we aimed to compare the clinical course and determine morbidity and mortality rates after anterior cervical discectomy and fusion (ACDF) versus corpectomy in octogenarians with ventrally located CSEA at two levels.
Methods: In this single-center retrospective review, we obtained the following from electronic medical records between September 2005 and December 2021: patient demographics, surgical characteristics, complications, hospital clinical course, and 90-day mortality rate. Comorbidities were assessed using the age-adjusted Charlson comorbidity index (CCI).
Results: Over 16 years, 15 patients underwent ACDF and 16 patients underwent corpectomy with plate fixation. Patients who underwent corpectomy had a significantly poorer baseline reserve (9.0 ± 2.6 vs. 10.8 ± 2.7; p = 0.004) and a longer hospitalization period (16.4 ± 13.1 vs. 10.0 ± 5.3 days; p = 0.004), since corpectomy lasted significantly longer (229.6 ± 74.9 min vs. 123.9 ± 47.5 min; p < 0.001). Higher in-hospital and 90-day mortality and readmission rates were observed in the corpectomy group, but the difference was not statistically significant. Both surgeries significantly improved blood infection parameters and neurological status at discharge. Revision surgery due to pseudoarthrosis was required in two patients after corpectomy.
Conclusions: We showed that both ACDF and corpectomy for ventrally located CSEA can be considered as safe treatment strategies for patients aged 80 years and above. However, the surgical approach should be carefully weighed and discussed with the patients and their relatives.
abstract_id: PUBMED:27190742
Upper Cervical Epidural Abscess in Clinical Practice: Diagnosis and Management. Study Design Narrative review. Objective Upper cervical epidural abscess (UCEA) is a rare surgical emergency. Despite increasing incidence, uncertainty remains as to how it should initially be managed. Risk factors for UCEA include immunocompromised hosts, diabetes mellitus, and intravenous drug use. Our objective is to provide a comprehensive overview of the literature including the history, clinical manifestations, diagnosis, and management of UCEA. Methods Using PubMed, studies published prior to 2015 were analyzed. We used the keywords "Upper cervical epidural abscess," "C1 osteomyelitis," "C2 osteomyelitis," "C1 epidural abscess," "C2 epidural abscess." We excluded cases with tuberculosis. Results The review addresses epidemiology, etiology, imaging, microbiology, and diagnosis of this condition. We also address the nonoperative and operative management options and the relative indications for each as reviewed in the literature. Conclusion A high index of suspicion is required to diagnose this rare condition with magnetic resonance imaging being the imaging modality of choice. There has been a shift toward surgical management of this condition in recent times, with favorable outcomes.
abstract_id: PUBMED:32684428
Spinal epidural abscesses - The role for non-operative management: A systematic review. Background: Spinal Epidural Abscesses (SEAs) are traditionally seen as a surgical emergency. However, SEAs can be discovered in entirely asymptomatic patients. This presents a dilemma for the attending clinician as to whether to subject these patients to significant surgery. This systematic review updates the evidence surrounding the efficacy of non-operative SEA management by means of intravenous antibiotics ± radiologically-guided aspiration.
Aims: 1. To assess failure rates of medical therapy for SEA. The definition of 'failure' used by each study was recorded, and comparisons were made. 2. To review risk factors for success/failure of medical treatment for SEA.
Methods: A database search using the MeSH term 'epidural abscess' and the keywords ['treatment' OR 'management'] was performed.
Results: 14 studies were included. The number of SEA patients managed non-operatively ranged from 19 to 142. There was significant heterogeneity across the studies. The pooled rate of Failure of Medical Therapy (FMT), defined as any poor outcome, was 29.40%. When FMT was defined as mortality, the pooled rate was 11.49%. Commonly cited risk factors for FMT included acute neurological compromise, diabetes mellitus, increasing age, and Staphylococcus aureus.
Conclusion: SEA will likely remain a condition that is mostly managed surgically. Despite this, there is growing evidence that non-operative management is possible in appropriately selected patients. The key is patient selection: patients with any of the above-mentioned risk factors have the potential to deteriorate further on medical treatment and to have a worse outcome than if they had undergone emergency surgery straight away. Ongoing research will hopefully further investigate this crucial step.
abstract_id: PUBMED:31367375
Management of cervical spine epidural abscess: a systematic review. Background: Cervical spinal epidural abscess (CSEA) is a localized infection between the thecal sac and cervical spinal column which may result in neurological deficit and death if inadequately treated. Two treatment options exist: medical management and surgical intervention. Our objective was to analyze CSEA patient outcomes in order to determine the optimal method of treatment.
Methods: An electronic literature search for relevant case series and retrospective reviews was conducted through June 2016. Data abstraction and study quality assessment were performed by two independent reviewers. A lack of available data led to a post hoc decision not to perform meta-analysis of the results; study findings were synthesized qualitatively.
Results: 927 studies were identified, of which 11 were included. Four studies were ranked as good quality, and seven ranked as fair quality. In total, data from 173 patients were included. Mean age was 55 years; 61.3% were male. Intravenous drug use was the most common risk factor for CSEA development. Staphylococcus aureus was the most commonly cultured pathogen. 140 patients underwent initial surgery, an additional 18 patients were surgically treated upon failure of medical management, and 15 patients were treated with antibiotics alone.
Conclusion: The rates of medical management failure described in our review were much higher than those reported in the literature for thoracolumbar spinal epidural abscess patients, suggesting that CSEA patients may be at a greater risk for poor outcomes following nonoperative treatment. Thus, early surgery appears most viable for optimizing CSEA patient outcomes. Further research is needed in order to corroborate these recommendations.
abstract_id: PUBMED:32637213
Diagnosis, and Treatment of Cervical Epidural Abscess and/or Cervical Vertebral Osteomyelitis with or without Retropharyngeal Abscess; A Review. Background: Every year approximately 19.6 patients/100,000 per year are admitted to hospitals with spinal epidural abscesses (CSEA), 7.4/100,000 have vertebral osteomyelitis (VO)/100,000/year, while 4.1/100.000 children/year have cervical retropharyngeal abscesses (RPA) (i.e., data insufficient for adults).
Methods: We evaluated 11 individual case studies and 6 multiple-patient series, and looked at 9 general review articles, focusing on CSEA and/or VO, with or without RPA.
Results: Of the 11 case studies involving 15 patients, 14 had cervical spinal epidural abscesses (CSEA: 10 CSEA/VO/RPA, 2 CSEA/VO, 1 CSEA/TSEA, 1 CSEA/TSEA/LSEA), 13 had cervical osteomyelitis (VO: 11 VO/CSEA, 2 VO/RPA), and 12 had cervical retropharyngeal abscesses (RPA: 10 RPA/CSEA/VO, 2 RPA/VO alone). When patients were treated surgically, they required 12 anterior and 2 posterior approaches; 1 patient required no surgery. In the 6 larger cervical series involving 355 patients, 4 series involved CSEA (3 CSEA, 1 CSEA/VO) and 2 series involved cervical VO. Primary surgery was performed in 298 patients, while 57 were initially managed medically; 24 of these latter patients failed non-surgical therapy and required delayed cervical surgery. Notably, all 17 clinical studies advocated early surgery, where clinically appropriate, for varying combinations of CSEA and/or VO with or without RPA. The 8 final articles reviewed all levels of SEA and/or VO, while also providing additional unique information regarding RPA.
Conclusion: We analyzed 11 case studies and 6 multiple case series regarding the diagnosis and treatment of combinations of cervical CSEA and/or VO with or without RPA. We also reviewed 8 articles on the evaluation/management of all-level SEAs and/or VOs, along with the unique features of RPAs.
abstract_id: PUBMED:27836393
Comparing outcomes of early, late, and non-surgical management of intraspinal abscess. Intraspinal abscesses (ISAs) are rare lesions that are often neurologically devastating. Current treatment paradigms vary widely, including early surgical decompression, drainage, and systemic antibiotics; delayed surgery; and sole medical management. The National Inpatient Sample (NIS) database was queried for cases of ISA from 2003 to 2012. Early and late surgery were defined as occurring before or after 48 h of admission. Outcome measures included mortality, incidence of major complications, length of stay (LOS), and inpatient costs. A total of 10,150 patients were included (6281 early surgery, 3167 delayed surgery, 702 medical management). Paralysis, the main comorbidity, was most associated with early surgery (p < 0.0001). In multivariate analysis, the rates of postoperative infection and paraplegia were highest with early surgery (p < 0.0001), but the incidence of sepsis was higher with delayed surgery (p < 0.0001). Early surgery was least associated with in-hospital mortality (p = 0.0212) and sepsis (p < 0.001), and had the shortest LOS (p < 0.001). Charges were highest with delayed surgery and lowest with medical management (p < 0.001). Medical management was associated with lower rates of complications (p < 0.001). This is the largest study of patients with ISAs ever performed. Our results suggest that patients with ISAs undergoing surgical management have better outcomes and lower costs when operated on within 48 h of admission, emphasizing the importance of accurate and early diagnosis of ISA.
Answer: Yes, there is a difference in neurologic outcome when comparing medical versus early operative management of cervical epidural abscesses (CSEAs). Studies have shown that early surgical intervention tends to result in improved neurologic outcomes compared to medical management alone.
A retrospective review of 62 patients with spontaneous CSEA found that early surgery resulted in a significant increase in motor scores (MS) compared to those who failed medical management and required delayed surgery. The medical failure rate was high at 75%, and the change in MS between early and delayed surgery was significant, favoring early surgery (PUBMED:24937797).
Another study involving 27 patients with cervical spine infection who underwent surgical treatment reported that earlier operative treatment resulted in better neurologic recovery and shorter hospital stays (PUBMED:30580794). Similarly, a retrospective surgical case series of 59 patients with cervical spondylodiscitis showed that surgery was associated with significant improvement in motor score by hospital discharge, and prolonged symptomatic duration was correlated with a significantly lower likelihood of motor score improvement (PUBMED:28457929).
A retrospective review of 128 cases of spinal epidural abscess (SEA) indicated that early surgery improves neurologic outcomes compared with surgical treatment delayed by a trial of medical management. More than 41% of patients treated medically failed management and required surgical decompression (PUBMED:24231778).
A systematic review on the management of CSEA suggested that the rates of medical management failure were much higher than those reported for thoracolumbar spinal epidural abscess patients, implying that CSEA patients may be at greater risk for poor outcomes following nonoperative treatment. Thus, early surgery appears to be the most viable option for optimizing CSEA patient outcomes (PUBMED:31367375).
In summary, the evidence suggests that early operative management of cervical epidural abscesses is associated with better neurologic outcomes compared to medical management, which has a higher rate of failure and may result in delayed neurologic recovery. |
Instruction: Is early postoperative feeding feasible in elective colon and rectal surgery?
Abstracts:
abstract_id: PUBMED:8811376
Is early postoperative feeding feasible in elective colon and rectal surgery? Unlabelled: In reports of earlier, non-prospectively randomized trials, the authors claimed that early oral postoperative feeding is a unique benefit of laparoscopic surgery. On the other hand, some authors have suggested that early feeding can be tolerated by the majority of patients after elective open surgery.
Aim: This prospective randomized study was undertaken to assess the feasibility and safety of immediate oral feeding in patients subjected to elective open colorectal surgery.
Methods: This trial included 190 patients who underwent an elective colon or rectal operation. Patients were randomized after the operative procedure into one of two groups. Group I (n = 95): on the first evening after the operation, patients were allowed ad libitum intake of clear liquids; this continued until the first postoperative day, at which time they progressed to a regular diet as desired. Group II (n = 95): in this group, the nasogastric tube was removed when the surgeon considered that postoperative ileus had resolved.
Results: Early oral intake was tolerated by 79.6% of the patients in the first 4 days in group I; there were no differences between the two groups from the 4th day on. The incidence of vomiting and nasogastric tube insertion (21.5%) was higher in patients in group I than in those in group II. The time until the first bowel movement was 4.3 days in group I and 4.7 days in group II. Complications appeared in 17.3% of the patients in group I and in 19.3% in group II.
Conclusion: This study has objectively demonstrated that early oral feeding is feasible and safe in patients who have elective colorectal surgery.
abstract_id: PUBMED:24833141
Benefit of oral feeding as early as one day after elective surgery for colorectal cancer: oral feeding on first versus second postoperative day. The optimal timing of early oral intake after surgery has not been fully established. The objective of this study was to compare early oral intake at postoperative day 1 after resection of colorectal cancer with that of day 2 to identify the optimal timing for resumption of oral intake in such patients. Consecutive patients with colorectal cancer who underwent elective colorectal resection were separated into two groups. Sixty-two patients began a liquid diet on the first postoperative day (POD1 group) and 58 patients began on POD2 (POD2 group) and advanced to a regular diet within the next 24 hours as tolerated. As for gastrointestinal recovery, the first passage of flatus was experienced, on average, on postoperative day 3.1 ± 1.0 in the POD2 group and on day 2.3 ± 0.7 in the POD1 group. The first defecation was also significantly earlier in patients in the POD1 group than in those in the POD2 group (POD 3.2 ± 1.2 versus 4.2 ± 1.4, respectively). No statistical difference was found between the two groups in terms of postoperative complications. Our results suggest that very early feeding on POD1 after colorectal resection is safe and feasible and induced a quicker recovery of postoperative gastrointestinal movement.
abstract_id: PUBMED:9007628
"Is early postoperative feeding feasible in elective colon and rectal surgery?". N/A
abstract_id: PUBMED:8854962
Early oral feeding after elective colorectal surgery: is it safe? The authors have carried out a prospective trial to assess the safety, tolerability and outcome of early resumption of oral feeding after elective abdominal surgery involving the small or the large bowel. Over the study period, 161 patients undergoing elective laparotomy and bowel resection were randomized to two groups. Patients undergoing laparoscopic surgery were not included. In both groups, the nasogastric tube was removed immediately after surgery. In group I, oral feeding was started on the first postoperative day, beginning with clear fluids and gradually progressing to a normal diet over a period of 24 to 48 hours, as tolerated. In group II, oral feeding was started after resolution of postoperative ileus, starting again with clear fluids as in group I. The resolution of postoperative ileus was defined as having bowel movements with no abdominal distention or vomiting. In both groups, the nasogastric tube was reinserted if the patient had two episodes of vomiting of more than 100 ml over 24 hours in the absence of bowel movements. Postoperative analgesia was similar in both groups and the same criteria for discharge from the hospital were followed. Of the 161 patients, 80 were in the early feeding group and 81 in the other group. The age and sex distribution of the patients in both groups was similar. In both groups, segmental colonic, rectal or small bowel resection was the commonest surgery. In group I, 79% of patients tolerated feeds compared to 86% in group II. The incidence of vomiting was thus 21% in group I and 14% in group II, the difference being statistically insignificant. Reinsertion of the nasogastric tube was required only in 11% of patients in group I and 10% of patients in group II. Further, the length of postoperative ileus (3.8 +/- 0.1 vs 4.1 +/- 0.1 days), length of hospital stay (6.2 +/- 0.2 vs 6.8 +/- 0.2 days) and incidence of complications (7.5% vs 6.1%) were not significantly different between the two groups. However, regular diet was tolerated significantly earlier (p < 0.001) in group I as compared to group II (2.6 +/- 0.1 vs. 5.0 +/- 0.1 days). Further, there was no incidence of anastomotic leaks or aspiration pneumonia, complications which could be expected to occur secondary to early feeding. The authors have reviewed the literature, which shows a trend towards decreasing use of routine postoperative nasogastric drainage. Based on the results of the current study, they suggest that there is no need to delay oral feeding until resolution of colonic ileus, as early feeding is safe and well tolerated. They also suggest that early resumption of oral feeding may have a positive impact on the psychological state of the patient and may help recovery.
abstract_id: PUBMED:9931801
Early postoperative nutrition after elective colonic surgery Unlabelled: Our intent was to show that immediate postoperative oral feeding of a regular diet after elective open colorectal surgery is safe, feasible and can be tolerated by the patients. Our prospective study included 96 consecutive patients, and their results were compared with those of the literature.
Conclusion: Early oral feeding after elective colorectal surgery is safe (morbidity: 12.5%; mortality: 2%); it can be tolerated without symptoms by a majority of patients (85%); it is easy, feasible, and shortens the postoperative length of hospital stay (10.6 days).
abstract_id: PUBMED:8951516
Early postoperative feeding after elective colorectal surgery is not a benefit unique to laparoscopy-assisted procedures. Unlabelled: Previous analyses of non-prospectively randomized trials have suggested that early oral postoperative feeding might be a benefit unique to laparoscopic surgery. However, some authors have indicated that early feeding can be tolerated by the majority of patients after elective open surgery.
Aim: This prospective randomized study was undertaken to assess whether the time prior to oral intake of food after laparoscopy-assisted surgery is shorter than that after standard laparotomy.
Methods: This trial included 40 patients who were divided randomly into two groups before operation. Group I included 20 patients (mean age, 52 years; range, 15-77 years) who underwent a laparoscopy-assisted colon or rectal procedure (LAP). Group II consisted of 20 patients (mean age, 56 years; range, 41-74 years) who underwent surgery with a standard midline incision (SMI). On the evening after surgery, patients were allowed clear liquids ad libitum. This regimen was continued until the first postoperative day, at which time they could elect to start eating a regular diet. If a patient had two episodes of vomiting, a nasogastric tube was inserted.
Results: Five laparoscopic procedures were converted to SMI because of adhesions (25%) and an equal number of patients was excluded from the group that was treated in the traditional manner. Therefore, only 30 patients were included in the analysis. There were no deaths in this trial. Complications appeared in four of the patients in the LAP group and in two of the patients in the SMI group (no significant difference). There were no statistically significant differences between the two groups in terms of the ability to tolerate the early oral intake of food, in the frequency of vomiting or in the incidence of insertion of a nasogastric tube. The time to the first bowel movement was 5.4 days in LAP and 5.5 days in SMI, and the difference was not significant.
Conclusion: This study invalidates the claim by laparoscopic surgeons that oral intake of food is tolerated earlier by their patients than by patients who undergo standard procedures.
abstract_id: PUBMED:31319385
Do Outcomes in Elective Colon and Rectal Cancer Surgery Differ by Weekday? An Observational Study Using Data From the Dutch ColoRectal Audit. Background: Previous studies showing higher mortality after elective surgery performed on a Friday were based on administrative data, known for insufficient case-mix adjustment. The goal of this study was to investigate the risk of adverse events for patients with colon and rectal cancer by day of elective surgery using clinical data from the Dutch ColoRectal Audit.
Patients And Methods: Prospectively collected data from the 2012-2015 Dutch ColoRectal Audit (n=36,616) were used to examine differences in mortality, severe complications, and failure to rescue by day of elective surgery (Monday through Friday). Monday was used as a reference, analyses were stratified for colon and rectal cancer, and case-mix adjustments were made for previously identified variables.
Results: For both colon and rectal cancer, crude mortality, severe complications, and failure-to-rescue rates varied by day of elective surgery. After case-mix adjustment, lower severe complication risk was found for rectal cancer surgery performed on a Friday (odds ratio, 0.84; 95% CI, 0.72-0.97) versus Monday. No significant differences were found for colon cancer surgery performed on different weekdays.
Conclusions: No weekday effect was found for elective colon and rectal cancer surgery in the Netherlands. Lower severe complication risk for elective rectal cancer surgery performed on a Friday may be caused by patient selection.
abstract_id: PUBMED:7618972
Is early oral feeding safe after elective colorectal surgery? A prospective randomized trial. Introduction: The routine use of a nasogastric tube after elective colorectal surgery is no longer mandatory. More recently, early feeding after laparoscopic colectomy has been shown to be safe and well tolerated. Therefore, the aim of our study was to prospectively assess the safety and tolerability of early oral feeding after elective "open" abdominal colorectal operations.
Materials And Methods: All patients who underwent elective laparotomy with either colon or small bowel resection between November 1992 and April 1994 were prospectively randomized to one of the following two groups: group 1: early oral feeding--all patients received a clear liquid diet on the first postoperative day followed by a regular diet as tolerated; group 2: regular feeding--all patients were treated in the "traditional" way, with feeding only after the resolution of their postoperative ileus. The nasogastric tube was removed from all patients in both groups immediately after surgery. The patients were monitored for vomiting, bowel movements, nasogastric tube reinsertion, time of regular diet consumption, complications, and length of hospitalization. The nasogastric tube was reinserted if two or more episodes of vomiting of more than 100 mL occurred in the absence of bowel movement. Ileus was considered resolved after a bowel movement in the absence of abdominal distention or vomiting.
Results: One hundred sixty-one consecutive patients were studied, 80 patients in group 1 (34 males and 46 females, mean age 51 years [range 16-82 years]), and 81 patients in group 2 (43 males and 38 females, mean age 56 years [range 20-90 years]). Sixty-three patients (79%) in the early feeding group tolerated the early feeding schedule and were advanced to regular diet within the next 24 to 48 hours. There were no significant differences between the early and regular feeding groups in the rate of vomiting (21% vs. 14%), nasogastric tube reinsertion (11% vs. 10%), length of ileus (3.8 +/- 0.1 days vs. 4.1 +/- 0.1 days), length of hospitalization (6.2 +/- 0.2 days vs. 6.8 +/- 0.2 days), or overall complications (7.5% vs. 6.1%), respectively, (p = NS for all). However, the patients in the early feeding group tolerated a regular diet significantly earlier than did the patients in the regular feeding group (2.6 +/- 0.1 days vs. 5 +/- 0.1 days; p < 0.001).
Conclusion: Early oral feeding after elective colorectal surgery is safe and can be tolerated by the majority of patients. Thus, it may become a routine feature of postoperative management in these patients.
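The tolerance rates reported above (PUBMED:8854962 and PUBMED:7618972 describe the same 161-patient trial) can be checked with a simple two-proportion z-test. This is a minimal sketch, not the authors' analysis: it uses the published counts (63/80 tolerating early feeding; the 86% control figure implies about 70 of 81 patients), and the function name is ours.

```python
# Pooled two-sided z-test for the difference of two proportions, a minimal
# sketch using counts reported in the trial above (assumed: 70/81 controls).
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (difference, z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    return p1 - p2, z, 2 * norm.sf(abs(z))

diff, z, p = two_proportion_ztest(63, 80, 70, 81)
print(f"difference = {diff:.3f}, z = {z:.2f}, p = {p:.3f}")  # p ~ 0.2, non-significant
```

Consistent with the abstract, the difference in tolerance does not reach significance.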
abstract_id: PUBMED:33595962
Early Versus Traditional Oral Feeding Following Elective Colorectal Surgery: A Literature Review. Traditional feeding protocols withhold oral intake until the return of bowel function for concern of postoperative complications following elective colorectal surgery. Implementation of early feeding into clinical practice challenges this conventional approach. The purpose of this literature review is to analyze the current evidence and compare the impact of traditional versus early oral feeding protocols on postoperative outcomes following elective colorectal resection. A literature search of PubMed, EMBASE, CINAHL, and Scopus was conducted. Outcomes of interest include the resolution of postoperative ileus, the incidence of anastomotic leakage, and length of hospital stay. Patients assigned to early oral feeding began oral caloric intake within 24 hours of surgery, whereas oral intake was withheld until the resolution of postoperative ileus for patients in the traditional oral feeding group. Five trials met the inclusion criteria for this review. All studies reported on the primary outcomes of interest. A P value less than .05 was considered significant. Results indicate that early oral feeding following elective colorectal surgery did not extend the duration of postoperative ileus as evidenced by shorter time to first flatus and defecation, did not increase the incidence of anastomotic leakage, and demonstrated shorter length of hospital stay when compared with traditional management.
abstract_id: PUBMED:30747693
Practice Changes in Postoperative Feeding after Elective Colorectal Surgery: From Prospective Randomized Study to Everyday Practice. Early postoperative feeding is the standard of care after colorectal surgery and is associated with improved outcomes. A controlled study performed at our center demonstrated faster bowel function recovery and shorter hospital stay without an increase in postoperative morbidity for a solid versus clear-liquid diet on postoperative day (POD) 1 after colorectal surgery. The present study aims to evaluate the impact of these findings on the practice of diet advancement by board-certified general surgeons (GS) and colorectal surgeons (CRS) at an urban teaching hospital. Patients undergoing elective colorectal surgery were prospectively evaluated. Data were compared with those of a study cohort that underwent elective colorectal surgery reported in 2012. Early postoperative feeding in the more recent cohort increased significantly for all cases on POD 0 and POD 1. Diet advancement over time revealed significant changes in early feeding on POD 0 for patients operated on by CRS (0% vs 83%; P < 0.0001) but not GS (13% vs 13%; P = 1). Patients operated on by either CRS or GS and offered early feeding on POD 1 did not significantly differ between time periods. Despite well-documented evidence of the advantages of early feeding, GS, but not CRS, remain reluctant to administer early diets to patients after colorectal surgery.
Answer: Yes, early postoperative feeding is feasible in elective colon and rectal surgery. Several studies have demonstrated that immediate or early oral feeding after elective colorectal surgery is safe and can be tolerated by a majority of patients. A prospective randomized study showed that early oral intake was tolerated by 79.6% of patients within the first 4 days after surgery, with no significant differences in complications compared to a control group that resumed feeding after the resolution of postoperative ileus (PUBMED:8811376). Another study found that early feeding on the first postoperative day (POD1) after colorectal resection induced quicker recovery of postoperative gastrointestinal movement without increasing postoperative complications (PUBMED:24833141).
Additional research supports these findings, with one study reporting that 79% of patients tolerated early feeds compared to 86% in the control group, with no significant differences in the incidence of vomiting, nasogastric tube reinsertion, length of postoperative ileus, length of hospital stay, or incidence of complications (PUBMED:8854962). Furthermore, a study comparing laparoscopy-assisted procedures with standard laparotomy found no significant differences between the two groups in terms of the ability to tolerate early oral intake of food, frequency of vomiting, or incidence of nasogastric tube insertion (PUBMED:8951516).
A literature review also concluded that early oral feeding following elective colorectal surgery did not extend the duration of postoperative ileus, did not increase the incidence of anastomotic leakage, and was associated with a shorter length of hospital stay compared to traditional management (PUBMED:33595962).
Overall, the evidence suggests that early postoperative feeding is not only feasible but also beneficial in terms of promoting faster recovery without increasing the risk of complications in elective colon and rectal surgery.
Instruction: Does sodium dodecyl sulfate wash out of detergent-treated bovine pericardium at cytotoxic concentrations?
Abstracts:
abstract_id: PUBMED:19301560
Does sodium dodecyl sulfate wash out of detergent-treated bovine pericardium at cytotoxic concentrations? Background And Aim Of The Study: The ionic detergent sodium dodecyl sulfate (SDS) is a proposed treatment for the removal of antigenic proteins from unfixed biological scaffolds used in tissue engineering. However, questions remain about possible cytotoxic effects of SDS-treated tissues. The study aims were to: (i) develop a sensitive SDS assay for physiological solutions; (ii) measure SDS concentrations in the washing media of SDS-treated tissue; and (iii) determine cytotoxic SDS concentrations in cultured ovine vascular cells.
Methods: An assay was developed to monitor SDS concentrations at microM levels, based on attenuated total reflectance infrared spectroscopy. Bovine pericardium was treated with SDS (1.0 to 0.01%) and washed for 96 h. The SDS concentration in the washing media was measured at 24-h intervals; data were expressed as microM/g tissue. Ovine vascular cells were cultured in DME media at 37 degrees C for 48 h in various SDS concentrations (10 to 1000 microM). The cells were then counted, and the percentage of live cells was expressed based on trypan blue exclusion (n=5).
Results: SDS concentrations ≥10 microM significantly reduced (p < 0.05) the total cell number, while concentrations ≥100 microM reduced (p < 0.05) the percentage of live cells in ovine vascular cell cultures. SDS was present in the washing media of SDS-treated bovine pericardium. SDS leaching from bovine pericardium was found to depend on the SDS concentration used for the treatment, and diminished with time.
Conclusion: SDS leaches from SDS-treated bovine pericardium at concentrations that are potentially cytotoxic. An understanding of the dynamics of SDS washout, based on a sensitive SDS assay, may lead to the creation of protocols for the preparation of biological scaffolds that are free from cytotoxic leaching.
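The abstract reports that leaching depended on the treatment concentration and diminished with time over the 96-h wash. A minimal sketch of how such washout measurements could be summarized with a first-order decay fit is shown below; the decay model and the concentration values are illustrative assumptions, not the study's data.

```python
# Fit hypothetical 24-h interval SDS measurements to a first-order washout
# model; scipy's curve_fit estimates the initial level and rate constant.
import numpy as np
from scipy.optimize import curve_fit

def washout(t, c0, k):
    # first-order kinetics: wash-medium SDS declines exponentially with time
    return c0 * np.exp(-k * t)

t_hours = np.array([24.0, 48.0, 72.0, 96.0])        # sampling times (h)
sds_um_per_g = np.array([180.0, 75.0, 30.0, 12.0])  # hypothetical microM/g tissue

(c0, k), _ = curve_fit(washout, t_hours, sds_um_per_g, p0=(400.0, 0.03))
print(f"fitted washout half-life ~ {np.log(2) / k:.1f} h")
```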
abstract_id: PUBMED:19852149
Immunoblot detection of soluble protein antigens from sodium dodecyl sulfate- and sodium deoxycholate-treated candidate bioscaffold tissues. Background And Aims Of The Study: The detergent-based 'decellularization' of xenogeneic tissues is one approach to scaffolding a tissue-engineered heart valve construct; however, concern persists regarding the immunogenicity of decellularized xenogeneic bioscaffolds. The study aims were to: (i) develop a sensitive and robust immunoblot-based assay for the detection of soluble protein antigens in xenogeneic bioscaffolds; and (ii) evaluate the completeness of protein antigen removal from sodium dodecyl sulfate (SDS)- or sodium deoxycholate (SD)-treated bovine pericardium (BP) or porcine aortic valve (PAV) conduit.
Methods: Homogenized BP or PAV were injected into rabbits to generate immune serum towards these tissues. Soluble proteins were extracted from untreated BP and PAV. Immunoblot analyses of the extracts were performed using pre-immune and 14-, 28-, 42-, 56- and 70-day post-immune serum. BP and PAV were treated sequentially with 4 h hypotonic lysis; with 0, 0.01, 0.025, 0.05, 0.1, 0.25 or 0.5% SDS or SD for 24 h; and with 96 h of aqueous wash-out. Immunoblot analyses of protein extracts from treated tissues were performed using 70-day post-immune rabbit serum.
Results: Immunoblot analysis of untreated BP or PAV with pre-immune serum showed no immune banding. The immune banding density increased progressively when immunoblots were performed with 14-day through 70-day post-immune serum. The immunoblot analysis of treated BP and PAV showed that soluble protein antigen removal from SDS- or SD-treated tissues was incomplete.
Conclusion: Immunoblot analysis is a sensitive and robust assay for detecting soluble protein xenogeneic antigens after the decellularization of xenogeneic bioscaffolds. Under the study conditions, hypotonic lysis, SDS or SD detergent treatment, and aqueous wash-out-based decellularization of bovine pericardium and porcine aortic valve conduit did not completely remove detectable protein antigens.
abstract_id: PUBMED:2126168
Enhanced removal of detergent and recovery of enzymatic activity following sodium dodecyl sulfate-polyacrylamide gel electrophoresis: use of casein in gel wash buffer. The inclusion of 1% casein or bovine serum albumin in buffer used to reactivate enzymes subjected to sodium dodecyl sulfate (SDS)-polyacrylamide electrophoresis resulted in accelerated removal of SDS and restoration of nuclease and beta-galactosidase enzyme activities. Nuclease and beta-galactosidase activities which are absent from gels after longer wash procedures are detectable with this technique. Enzyme activity in gels prepared with SDS which contained inhibitory contaminants was partially restored by the casein wash procedure. The threshold of detection of two-dimensionally separated deoxyribonuclease I using the casein wash procedure was 1 picogram.
abstract_id: PUBMED:17484467
Biomechanical characterization of decellularized and cross-linked bovine pericardium. Background And Aim Of The Study: Although bovine pericardium has been used extensively in cardiothoracic surgery, its degeneration and calcification are important limiting factors in the continued use of this material. The study aims were to decellularize bovine pericardium and to compare the biomechanical properties of fresh and decellularized bovine pericardia to those treated with different concentrations of glutaraldehyde (GA).
Methods: An established protocol for decellularization using sodium dodecyl sulfate was used, and histological analysis performed to validate the adequacy of decellularization. Contact cytotoxicity was used to study the in-vitro biocompatibility of variously treated pericardia. Mechanical testing involved uniaxial testing to failure. Mechanical properties of the fresh and decellularized pericardia (untreated and treated with 0.5% and 0.05% GA) were compared.
Results: Histological analysis of decellularized bovine pericardium did not show any remaining cells or cell fragments. The histoarchitecture of the collagen-elastin matrix appeared well preserved. Untreated decellularized pericardium was biocompatible in contact cytotoxicity tests with smooth muscle and fibroblast cells. The GA-treated tissue was cytotoxic. There were no significant differences in the mechanical properties of fresh and decellularized pericardia, but there was an overall tendency for GA-treated pericardia to be stiffer than their untreated counterparts.
Conclusion: An acellular matrix, cross-linked with a reduced concentration of GA, can be produced using bovine pericardium. This biomaterial has excellent biomechanical properties and, potentially, may be used in the manufacture of heart valves and pericardial patches for clinical application.
abstract_id: PUBMED:22736904
Glutaraldehyde treatment elicits toxic response compared to decellularization in bovine pericardium. Glutaraldehyde-stabilized bovine pericardium has been used for clinical application since the 1970s because of its desirable features such as less immunogenicity and acceptable durability. However, a propensity for calcification is reported on account of glutaraldehyde treatment. In this study, commercially available glutaraldehyde cross-linked bovine pericardium was evaluated for its in vitro cytotoxic effect, macrophage activation, and in vivo toxic response in comparison to decellularized bovine pericardium. Glutaraldehyde-treated bovine pericardium and its extract were observed to be cytotoxic and also caused significant inflammatory cytokine release from activated macrophages. Significant antibody response, calcification response, necrotic, and inflammatory response were noticed in glutaraldehyde-treated bovine pericardium in comparison to decellularized bovine pericardium in a rat subcutaneous implantation model. Glutaraldehyde-treated bovine pericardium also failed in acute systemic toxicity testing and intracutaneous irritation testing as per ISO 10993. With respect to healing and implant remodeling, total lack of host tissue incorporation and angiogenesis was noticed in glutaraldehyde-treated bovine pericardium compared to excellent host fibroblast incorporation and angiogenesis within the implant in decellularized bovine pericardium. In conclusion, using in vitro and in vivo techniques, this study has demonstrated that glutaraldehyde-treated bovine pericardium elicits a toxic response compared to decellularized bovine pericardium, which is not congenial for long-term implant performance.
abstract_id: PUBMED:479121
Sodium dodecyl sulfate-polyacrylamide gel electrophoresis of bovine serum albumin oligomers produced by lipid peroxidation. The oligomers of bovine serum albumin were produced by controlled reaction with peroxidizing linoleic acid to examine their possible utility as calibration proteins in sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The polymerization was effected in reaction mixtures containing linoleic acid undergoing peroxidation in the presence of ascorbic acid, and conditions that yield soluble oligomers with a wide molecular weight distribution were established. The interaction of these soluble oligomers with sodium dodecyl sulfate exhibited a binding isotherm indistinguishable from that obtained with bovine serum albumin. Furthermore, sodium dodecyl sulfate-polyacrylamide gel electrophoresis of the albumin oligomers conformed to the empirical relation of molecular weight to mobility that pertains to the use of these oligomers as standard molecular weight markers.
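The "empirical relation of molecular weight to mobility" referred to above is, in standard SDS-PAGE practice, a linear relation between log molecular weight and relative mobility (Rf). A minimal calibration sketch follows; the Rf values are assumed placeholders for BSA oligomer standards (monomer ~66 kDa), not data from the paper.

```python
# Semi-log SDS-PAGE calibration: log10(MW) is ~linear in relative mobility.
import numpy as np

rf = np.array([0.72, 0.48, 0.33, 0.24])          # assumed Rf of standards
mw_kda = np.array([66.0, 132.0, 198.0, 264.0])   # BSA monomer..tetramer

slope, intercept = np.polyfit(rf, np.log10(mw_kda), 1)

def mw_from_rf(rf_unknown):
    """Estimate molecular weight (kDa) of an unknown band from its Rf."""
    return 10 ** (slope * rf_unknown + intercept)

print(f"band at Rf 0.40 ~ {mw_from_rf(0.40):.0f} kDa")
```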
abstract_id: PUBMED:31902896
Removal of Dodecyl Sulfate Ions Bound to Human and Bovine Serum Albumins Using Sodium Cholate. The secondary structures of human serum albumin (HSA) and bovine serum albumin (BSA) were disrupted in the solution of sodium dodecyl sulfate (SDS), while being hardly damaged in the solution of the bile salt, sodium cholate (NaCho). In the present work, the removal of dodecyl sulfate (DS) ions bound to these proteins was attempted by adding various amounts of NaCho. The extent of removal was estimated by the restoration of α-helical structure of each protein disrupted by SDS. Increases and decreases in α-helical structure were examined using the mean residue ellipticity at 222 nm, [θ]222, which was frequently used as a measure of α-helical structure content. The magnitudes of [θ]222 of HSA and BSA, weakened by SDS, were restrengthened upon the addition of NaCho. This indicated that the α-helical structures of HSA and BSA that were disrupted by the binding of DS ions were nearly reformed by the addition of NaCho. The NaCho concentration at which the maximum restoration of [θ]222 of each protein was attained increased nearly linearly with SDS concentration. These results indicated that most of the bound DS ions were removed from the proteins but the removal was incomplete. The removal of DS ions, examined by means of the equilibrium dialysis, was also incomplete. The α-helical structure restoration and the DS ion removal by NaCho were considered to be due to the ability of cholate anions to strip the surfactant ions bound to HSA and BSA. These stripped DS ions appeared to be more likely to form SDS-NaCho mixed micelles in bulk rather than SDS-NaCho mixed aggregates on the proteins.
abstract_id: PUBMED:7447446
Growth of Enterobacter cloacae in the presence of 25% sodium dodecyl sulfate. The growth of Enterobacter cloacae in 25% sodium dodecyl sulfate is described. The bacteria appeared to tolerate sodium dodecyl sulfate rather than metabolize it. The process was energy dependent, and cell lysis occurred during stationary phase. Extreme detergent resistance may be characteristic of the genus Enterobacter.
abstract_id: PUBMED:476724
Protection of sodium dodecyl sulfate-induced aggregation of concanavalin A by saccharide ligands. Concanavalin A is visibly aggregated by low concentrations of sodium dodecyl sulfate, maximum aggregation being obtained at pH 4.6. Other denaturants, such as urea, guanidine hydrochloride, Triton X-100, cetyltrimethylammonium bromide, Tween 80, and Brij 35 are ineffective in promoting visible aggregation. The sodium dodecyl sulfate-induced aggregation of concanavalin A requires the presence of an intact, saccharide-ligand binding-site. Rapid and complete reversal of the detergent effect was achieved by use of saccharides which bind to the lectin. Such compounds as tryptophan and o-nitrophenyl beta-D-galactopyranoside did not inhibit the aggregation of concanavalin A by sodium dodecyl sulfate, suggesting that the detergent does not bind the hydrophobic pocket on the surface of the protein. The results suggest that concanavalin A may have an additional, ligand-binding site which is metal-dependent and which can be modified by the addition of a saccharide ligand.
abstract_id: PUBMED:4213988
Non-immunological precipitation of serum by sodium dodecyl sulfate in agar diffusion. Marmoset serum or serum from other animal species may react with sodium dodecyl sulfate and form nonspecific precipitin lines in agar diffusion. The protein-detergent complexes are not readily dialyzable. Therefore precipitin lines derived from studies that use sodium dodecyl sulfate-treated antigens in agar diffusion must be interpreted with caution.
Answer: Yes, sodium dodecyl sulfate (SDS) does wash out of detergent-treated bovine pericardium at concentrations that are potentially cytotoxic. A study that aimed to measure SDS concentrations in the washing media of SDS-treated tissue and determine cytotoxic SDS concentrations in cultured ovine vascular cells found that SDS concentrations greater than or equal to 10 microM significantly reduced the total cell number, and concentrations greater than or equal to 100 microM reduced the percentage of live cells in ovine vascular cell cultures. SDS was present in the washing media of SDS-treated bovine pericardium, and the leaching of SDS from the tissue depended on the concentration of SDS used for treatment and diminished over time (PUBMED:19301560).
Instruction: Multi-modal CT in acute stroke: wait for a serum creatinine before giving intravenous contrast?
Abstracts:
abstract_id: PUBMED:26306403
Multi-modal CT in acute stroke: wait for a serum creatinine before giving intravenous contrast? No! Background: Multi-modal CT (MMCT) to guide decision making for reperfusion treatment is increasingly used, but there remains a perceived risk of contrast-induced nephropathy (CIN). At our center, MMCT is used empirically without waiting for serum-creatinine (sCR) or renal profiling.
Aims: To determine the incidence of CIN, examine the risk factors predisposing to its development, and investigate its effects on clinical outcome in the acute stroke population.
Methods: An institution-wide protocol was implemented for acute stroke presentations to have MMCT (100-150 ml nonionic tri-iodinated contrast, perfusion CT and CT angiography) without waiting for serum-creatinine to minimize delays. Intravenous saline is routinely infused (80-125 ml/h) for at least 24 h after MMCT. Serial creatinine levels were measured at baseline, risk period, and follow-up. Renal profiles and clinical progress were reviewed up to 90 days.
Results: We analyzed 735 consecutive patients who had MMCT for the evaluation of acute ischemic or hemorrhagic stroke during the last five years. A total of 623 patients met the inclusion criteria for analysis: 16 cases (2.6%) biochemically qualified as CIN; however, the risk period serum-creatinine for 15 of these cases was confounded by dehydration, urinary tract infection, or medications. None of the group had progression to chronic kidney disease or required dialysis.
Conclusions: The incidence of CIN is low when MMCT is used routinely to assess acute stroke patients. In this population, CIN was a biochemical phenomenon that did not have clinical manifestations, cause chronic kidney disease, require dialysis, or negatively impact on 90-day mRS outcomes. Renal profiling and waiting for a baseline serum-creatinine are an unnecessary delay to emergency reperfusion treatment.
abstract_id: PUBMED:38279811
M2AI-CVD: Multi-modal AI approach cardiovascular risk prediction system using fundus images. Cardiovascular diseases (CVD) represent a significant global health challenge, often remaining undetected until severe cardiac events, such as heart attacks or strokes, occur. In regions like Qatar, research focused on non-invasive CVD identification methods, such as retinal imaging and dual-energy X-ray absorptiometry (DXA), is limited. This study presents a groundbreaking system known as Multi-Modal Artificial Intelligence for Cardiovascular Disease (M2AI-CVD), designed to provide highly accurate predictions of CVD. The M2AI-CVD framework employs a four-fold methodology: First, it rigorously evaluates image quality and processes lower-quality images for further analysis. Subsequently, it uses the Entropy-based Fuzzy C Means (EnFCM) algorithm for precise image segmentation. The Multi-Modal Boltzmann Machine (MMBM) is then employed to extract relevant features from various data modalities, while the Genetic Algorithm (GA) selects the most informative features. Finally, a ZFNet Convolutional Neural Network (ZFNetCNN) classifies images, effectively distinguishing between CVD and Non-CVD cases. Tested across five distinct datasets, the system yields an accuracy of 95.89%, a sensitivity of 96.89%, and a specificity of 98.7%. This multi-modal AI approach offers a promising solution for the accurate and early detection of cardiovascular diseases, significantly improving the prospects of timely intervention and improved patient outcomes in the realm of cardiovascular health.
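For readers less familiar with these metrics, the sketch below shows how accuracy, sensitivity, and specificity follow from a confusion matrix. The counts are hypothetical, chosen only to illustrate the formulas; they are not taken from the M2AI-CVD study.

```python
# Classification metrics from confusion-matrix counts (hypothetical values).
def classification_metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on CVD cases
    specificity = tn / (tn + fp)   # recall on non-CVD cases
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=480, fn=15, tn=495, fp=10)
print(f"accuracy={acc:.4f}, sensitivity={sens:.4f}, specificity={spec:.4f}")
```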
abstract_id: PUBMED:25854414
Comparing uni-modal and multi-modal therapies for improving writing in acquired dysgraphia after stroke. Writing therapy studies have been predominantly uni-modal in nature; i.e., their central therapy task has typically been either writing to dictation or copying and recalling words. There has not yet been a study that has compared the effects of a uni-modal to a multi-modal writing therapy in terms of improvements to spelling accuracy. A multiple-case study with eight participants aimed to compare the effects of a uni-modal and a multi-modal therapy on the spelling accuracy of treated and untreated target words at immediate and follow-up assessment points. A cross-over design was used and within each therapy a matched set of words was targeted. These words and a matched control set were assessed before as well as immediately after each therapy and six weeks following therapy. The two approaches did not differ in their effects on spelling accuracy of treated or untreated items or degree of maintenance. All participants made significant improvements on treated and control items; however, not all improvements were maintained at follow-up. The findings suggested that multi-modal therapy did not have an advantage over uni-modal therapy for the participants in this study. Performance differences were instead driven by participant variables.
abstract_id: PUBMED:35810717
Contrast-associated acute kidney injury in acute ischemic stroke patients following multi-dose iodinated contrast. Background And Objective: Although intravenous contrast in neuroimaging has become increasingly important in selecting patients for stroke treatment, clinical concerns remain regarding contrast-associated acute kidney injury (CA-AKI). Given the increasing utilization of CT angiography and/or perfusion coupled with cerebral angiography, the purpose of this study was to assess the association of CA-AKI and multi-dose iodinated contrast in acute ischemic stroke (AIS) patients.
Materials And Methods: Retrospective review of AIS patients at a comprehensive stroke center was performed from January 2018 to December 2019. Data collection included patient demographics, stroke risk factors, stroke severity, discharge disposition, modified Rankin Scale, contrast type/volume, and creatinine levels (baseline, 48-72 h). CA-AKI was defined as creatinine increase ≥ 25 % from baseline. Bivariate analyses and multivariable logistic regression models were implemented to compare AIS patients with multi-dose and single-dose contrast.
Results: Of 440 AIS patients, 215 (48.9 %) were exposed to a single-dose contrast, and 225 (51.1 %) received multi-dose. In single-dose patients, CA-AKI at 48/72 h was 9.7 %/10.2 % compared to 8.0 %/8.9 % in multi-dose patients. Multi-dose patients were significantly more likely to receive a higher volume of contrast (mean 142.1 mL versus 80.8 mL; p < 0.001), but there was no significant difference in their creatinine levels or CA-AKI. NIHSS score (OR=1.08, 95 % CI=[1.04,1.13]) and patient transfer from another hospital (OR=3.84, 95 % CI=[1.94,7.62]) were significantly associated with multi-dose contrast.
Conclusions: No significant association between multi-dose iodinated contrast and CA-AKI was seen in AIS patients. Concerns of CA-AKI should not deter physicians from pursuing timely and appropriate contrast-enhanced neuroimaging that may optimize treatment outcomes in AIS patients.
abstract_id: PUBMED:35580541
MnCO3@BSA-ICG nanoparticles as a magnetic resonance/photoacoustic dual-modal contrast agent for functional imaging of acute ischemic stroke. Timely and accurate diagnosis of acute ischemic stroke (AIS) and simultaneous functional imaging of cerebral oxygen saturation (sO2) are essential to improve the survival rate of stroke patients but remains challenging. Herein, we developed a pH-responsive manganese (Mn)-based nanoplatform as a magnetic resonance/photoacoustic (MR/PA) dual-modal contrast agent for AIS diagnosis. The Mn-based nanoplatform was prepared via a simple and green biomimetic method using bovine serum albumin (BSA) as a scaffold for fabrication of MnCO3 NPs as the T1 MR contrast agent and accommodation of indocyanine green (ICG) as the PA probe. The obtained MnCO3@BSA-ICG NPs were biocompatible and exhibited a pH-responsive longitudinal relaxation rate and a concentration-dependent PA signal. In vivo MR/PA dual-modal imaging demonstrated that MnCO3@BSA-ICG NPs quickly and efficiently led to the MR/PA contrast enhancements in the infarcted area while not in the normal region, allowing a timely and accurate diagnosis of AIS. Moreover, PA imaging could directly monitor the sO2 level, enabling a functional imaging of AIS. Therefore, MnCO3@BSA-ICG NPs could be applied as a potential MR/PA contrast agent for timely and functional imaging of AIS.
abstract_id: PUBMED:33051090
Impact of creatinine screening on contrast-induced nephropathy following computerized tomography for stroke. Objective: This study sought to evaluate rates of acute kidney injury in patients undergoing contrast-enhanced computerized tomography for acute stroke in the emergency department (ED) before and after the cessation of creatinine screening.
Methods: This retrospective study compared ED patients receiving contrast-enhanced imaging for suspected acute stroke with and without protocolized creatinine screening. The primary outcome was CIN, defined as an increase in serum creatinine of 0.3 mg/dl within 48 hours or 50% above baseline within 7 days after contrast administration. Secondary outcomes consisted of CIN based on other definitions, renal impairment greater than 30 days from contrast administration, hemodialysis, and mortality. Outcomes were compared using difference of proportions and odds ratios with 95% confidence intervals.
Results: This study included 382 subjects, with 186 and 196 in the screening and post-screening cohorts, respectively. No significant differences were observed for CIN (7.0% vs 7.1%, difference 0.1% [95% CI -5.6 to 5.1%], OR 1.02 [95% CI 0.47-2.24]), renal impairment greater than 30 days post-contrast (8.4% vs 7.5%, OR 0.88 [0.38-2.07]), or mortality (index visit: 4.8% vs 2.6%, OR 0.51 [0.17-1.57], 90-day follow-up: 6.7% vs 4.0%, OR 0.58 [0.22-1.53]). No patients from either group required hemodialysis.
Conclusions: The elimination of creatinine screening prior to obtaining contrast-enhanced computerized tomography in patients with suspected acute stroke did not adversely affect rates of CIN, hemodialysis, or mortality at a comprehensive stroke center.
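Note that the CA-AKI abstract (PUBMED:35810717) and this CIN abstract (PUBMED:33051090) use different biochemical definitions. A minimal sketch encoding both rules; the function names and the example creatinine values are illustrative assumptions.

```python
# Two published biochemical definitions of contrast-related kidney injury.
def ca_aki_25pct(baseline, followup):
    """CA-AKI per PUBMED:35810717: creatinine rise >= 25% from baseline."""
    return followup >= 1.25 * baseline

def cin_composite(baseline, cr_48h, cr_7d):
    """CIN per PUBMED:33051090: rise of 0.3 mg/dL within 48 h,
    or 50% above baseline within 7 days."""
    return (cr_48h - baseline >= 0.3) or (cr_7d >= 1.5 * baseline)

print(ca_aki_25pct(1.0, 1.3))        # True: a 30% rise
print(cin_composite(1.0, 1.2, 1.4))  # False under both criteria
```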
abstract_id: PUBMED:24145699
Serum creatinine may indicate risk of symptomatic intracranial hemorrhage after intravenous tissue plasminogen activator (IV tPA). Symptomatic intracranial hemorrhage (sICH) is a known complication following administration of intravenous tissue plasminogen activator (IV tPA) for acute ischemic stroke. sICH results in high rates of death or long-term disability. Our ability to predict its occurrence is important in clinical decision making and when counseling families. The initial National Institute of Neurological Disorders and Stroke (NINDS) investigators developed a list of relative contraindications to IV tPA meant to decrease the risk of subsequent sICH. To date, the impact of renal impairment has not been well studied. In the current study we evaluate the potential association between renal impairment and post-tPA intracranial hemorrhage (ICH). Admission serum creatinine and estimated glomerular filtration rate (eGFR) were recorded in 224 patients presenting within 4.5 hours from symptom onset and treated with IV tPA based on NINDS criteria. Neuroimaging was obtained 1 day post-tPA and for any change in neurologic status to evaluate for ICH. Images were retrospectively evaluated for hemorrhage by a board-certified neuroradiologist and 2 reviewers blinded to the patients' neurologic status. Medical records were reviewed retrospectively for evidence of neurologic decline indicating a "symptomatic" hemorrhage. sICH was defined as subjective clinical deterioration (documented by the primary neurology team) with hemorrhage on neuroimaging that was felt to be the most likely cause. Renal impairment was evaluated using both serum creatinine and eGFR in a number of ways: 1) continuous creatinine; 2) any renal impairment by creatinine (serum creatinine >1.0 mg/dL); 3) continuous eGFR; and 4) any renal impairment by eGFR (eGFR <60 mL/min per 1.73 m²). Paired Student's t-tests, Fisher's exact tests, and multivariable logistic regression (adjusted for demographics and vascular risk factors) were used to evaluate the relationship between renal impairment and ICH. Fifty-seven (25%) of the 224 patients had some evidence of hemorrhage on neuroimaging. The majority of patients were asymptomatic. Renal impairment (defined by serum creatinine >1.0 mg/dL) was not associated with combined symptomatic and asymptomatic intracranial bleeding (p = 0.359); however, there was an adjusted 5.5-fold increase in the odds of sICH when creatinine was >1.0 mg/dL (95% confidence interval, 1.08-28.39), and the frequency of sICH for patients with elevated serum creatinine was 10.6% (12/113), versus 1.8% (2/111) in those with normal renal function (p = 0.010). Our study suggests that renal impairment is associated with a higher risk of sICH after administration of IV tPA. As IV tPA is an important and effective treatment for acute ischemic stroke, a multicenter study is needed to determine whether this retrospective observation that renal dysfunction is associated with sICH holds true in a larger prospective trial.
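The 10.6% (12/113) versus 1.8% (2/111) sICH frequencies above can be turned into an unadjusted odds ratio with a Woolf 95% confidence interval, as sketched below. The paper's 5.5-fold figure is the adjusted estimate from its multivariable model, so the crude value will not match it exactly.

```python
# Unadjusted odds ratio with Woolf (log) 95% CI from a 2x2 table.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = events/non-events (exposed); c/d = events/non-events (unexposed)."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se_log), exp(log(or_) + z * se_log)

or_, lo, hi = odds_ratio_ci(a=12, b=101, c=2, d=109)  # 12/113 vs 2/111
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```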
abstract_id: PUBMED:26321059
BaHoF5 nanoprobes as high-performance contrast agents for multi-modal CT imaging of ischemic stroke. CT angiography (CTA) and CT perfusion (CTP) imaging can play important roles in the workup of acute ischemic stroke. However, these techniques are hindered by the large amounts of contrast agents (CAs) required, high doses of X-ray radiation exposure, and nephrotoxicity of the clinically used iodinated CAs. To address these problems, we synthesized and validated a novel class of CT CAs, PEGylated BaHoF5 nanoparticles (NPs), for CTA and CTP imaging, which can greatly enhance the diagnostic sensitivity and accuracy of ischemic stroke. These agents have unique advantages over conventional iodinated CT agents, including a much lower required dosage, metabolism mainly through the liver, and better imaging efficiency at different voltages. Once translated, these PEGylated BaHoF5 NPs can replace iodine-based CAs for diagnostic contrast-enhanced imaging of patients with kidney/heart diseases and improve the overall diagnostic index with negligible side effects.
abstract_id: PUBMED:16374692
Safety and efficacy of intravenous contrast imaging in pediatric echocardiography. This study was performed to determine the safety and efficacy of intravenous contrast echocardiography in children attending a tertiary cardiac center. This was a prospective study to evaluate the use of Optison contrast agent in children with severely limited transthoracic echocardiographic windows. Twenty children (median age, 15 years; range, 9-18) underwent fundamental imaging (FI), harmonic imaging (HI), and HI with intravenous contrast (Optison FS-069). Endocardial border delineation was determined based on a visual qualitative scoring system (0, none: 4, excellent). Endocardial border definition was significantly improved in all patients using contrast echocardiography (FI vs Optison, p < 0.001 for each). Improved border definition was most dramatic in the apical and left ventricular (LV) free wall regions. Left ventricular ejection fraction (LVEF) was measurable in 20 patients (100%) using contrast compared to 11 (55%) with FI or HI (p < 0.05). The echocardiographic diagnosis was correctly delineated in 1 patient with a severely dyskinetic LV segment only with use of intravenous contrast and HI. No patients suffered adverse hemodynamic effects, changes in taste, or flushing episodes. Three patients experienced transient headaches. Intravenous contrast echocardiography offers an additional tool in evaluating children with very poor transthoracic echocardiographic windows. Such a strategy increases diagnostic accuracy and allows accurate LVEF determination. Adverse hemodynamic effects related to intravenous contrast are exceedingly rare.
abstract_id: PUBMED:28087509
Use of Both Serum Cystatin C and Creatinine as Diagnostic Criteria for Contrast-Induced Acute Kidney Injury and Its Clinical Implications. Background: Contrast-induced acute kidney injury (CI-AKI) was traditionally defined as an increase in serum creatinine (sCr) after contrast media exposure. Recently, serum cystatin C (sCyC) has been proposed as an alternative to detect acute changes in renal function. The clinical implications of combining sCyC and sCr to diagnose CI-AKI remain unknown.
Methods And Results: One thousand seventy-one consecutive patients undergoing coronary angiography/intervention were prospectively enrolled. SCyC and sCr were assessed at baseline and 24 to 48 hours after contrast media exposure. CI-AKI determined by sCr (CI-AKIsCr) was defined as an sCr increase greater than 0.3 mg/dL or 50% from baseline. Major adverse events at 12 months were assessed. CI-AKIsCr developed in 25 patients (2.3%). Twelve-month follow-up was available for 1063 patients; major adverse events occurred in 61 patients (5.7%). By receiver operating characteristic curve analysis, an sCyC increase of greater than 15% was the optimal cutoff for CI-AKIsCr detection, which occurred in 187 patients (17.4%). To evaluate the use of both sCyC and sCr as CI-AKI diagnostic criteria, we stratified patients into 3 groups: no CI-AKI, CI-AKI detected by a single marker, and CI-AKI detected by both markers. Multivariable logistic regression revealed that the predictability of major adverse events increased in a stepwise fashion in the 3 groups (no-CI-AKI group as the reference, CI-AKI detected by a single marker: odds ratio=2.25, 95% CI: 1.24-4.10, P<0.01; CI-AKI detected by both markers: odds ratio=10.00, 95% CI: 3.13-31.91, P<0.001).
Conclusions: Combining sCyC and sCr to diagnose CI-AKI would be beneficial for risk stratification and prognosis in patients after contrast media exposure.
Answer: Based on the provided abstracts, there is evidence suggesting that waiting for a serum creatinine before administering intravenous contrast for multi-modal CT (MMCT) in acute stroke may not be necessary. The study described in abstract PUBMED:26306403 found that the incidence of contrast-induced nephropathy (CIN) is low when MMCT is used routinely to assess acute stroke patients. In this population, CIN was a biochemical phenomenon that did not have clinical manifestations, cause chronic kidney disease, require dialysis, or negatively impact 90-day modified Rankin Scale (mRS) outcomes. The authors concluded that renal profiling and waiting for a baseline serum creatinine are unnecessary delays to emergency reperfusion treatment.
Similarly, the study in abstract PUBMED:33051090 reported that the elimination of creatinine screening prior to obtaining contrast-enhanced computerized tomography in patients with suspected acute stroke did not adversely affect rates of CIN, hemodialysis, or mortality at a comprehensive stroke center.
However, it is important to note that the study in abstract PUBMED:24145699 found an association between renal impairment and a higher risk of symptomatic intracranial hemorrhage (sICH) after administration of intravenous tissue plasminogen activator (IV tPA). This suggests that renal function may still be an important consideration in certain acute stroke treatments.
In conclusion, while there is evidence to support the idea that waiting for serum creatinine results may not be necessary before administering intravenous contrast for MMCT in acute stroke, individual patient risk factors, such as potential renal impairment, should still be considered in the decision-making process. It is also important to consider that these findings are based on specific study populations and protocols, and further research may be needed to generalize these conclusions to all acute stroke patients.
Instruction: Does labeling matter?
Abstracts:
abstract_id: PUBMED:24371796
Gray matter contamination in arterial spin labeling white matter perfusion measurements in patients with dementia. Introduction: White matter (WM) perfusion measurements with arterial spin labeling can be severely contaminated by gray matter (GM) perfusion signal, especially in the elderly. The current study investigates the spatial extent of GM contamination by comparing perfusion signal measured in the WM with signal measured outside the brain.
Material And Methods: Four-minute 3T pseudo-continuous arterial spin labeling scans were performed in 41 elderly subjects with cognitive impairment. Outward and inward geodesic distance maps were created, based on dilations and erosions of GM and WM masks. For all outward and inward geodesic distances, the mean CBF was calculated and compared.
Results: GM contamination was mainly found in the first 3 subcortical WM voxels and had only minor influence on the deep WM signal (distances 4 to 7 voxels). Perfusion signal in the WM was significantly higher than perfusion signal outside the brain, indicating the presence of WM signal.
Conclusion: These findings indicate that WM perfusion signal can be measured unaffected by GM contamination in elderly patients with cognitive impairment. GM contamination can be avoided by the erosion of WM masks, removing subcortical WM voxels from the analysis. These results should be taken into account when exploring the use of WM perfusion as micro-vascular biomarker.
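The conclusion's practical suggestion, eroding WM masks to drop subcortical voxels before averaging, can be sketched as below. The three-voxel erosion depth follows the abstract's finding that contamination is mainly confined to the first 3 subcortical WM voxels; the arrays are synthetic stand-ins, not real imaging data.

```python
# Mean CBF in deep WM after eroding away ~3 voxels of subcortical WM.
import numpy as np
from scipy.ndimage import binary_erosion

def deep_wm_cbf(cbf, wm_mask, erosions=3):
    deep_mask = binary_erosion(wm_mask, iterations=erosions)
    return cbf[deep_mask].mean()

rng = np.random.default_rng(0)
cbf = rng.normal(20.0, 5.0, size=(64, 64, 32))  # synthetic CBF map (ml/100g/min)
wm_mask = np.zeros((64, 64, 32), dtype=bool)
wm_mask[16:48, 16:48, 8:24] = True              # synthetic WM block
print(f"deep WM CBF ~ {deep_wm_cbf(cbf, wm_mask):.1f} ml/100g/min")
```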
abstract_id: PUBMED:35419550
Reliability of arterial spin labeling derived cerebral blood flow in periventricular white matter. We aimed to assess the reliability of cerebral blood flow (CBF) measured using arterial spin labeled (ASL) perfusion magnetic resonance imaging (MRI) from the periventricular white matter (PVWM) by computing its repeatability and comparing it to [15O]-water Positron Emission Tomography (PET) as a reference. Simultaneous PET/MRI perfusion data were acquired twice in the same session, about 15 min apart, from 16 subjects (age: 41.4 ± 12.0 years, 9 female). ASL protocols used pseudocontinuous labeling (pCASL) with background-suppressed 3-dimensional readouts, and included both single and multiple post labeling delay (PLD) acquisitions, each acquired twice, with the latter providing both CBF and arterial transit time (ATT) maps. The reliability of ASL derived PVWM CBF was evaluated using intra-session repeatability assessed by the within-subject coefficient of variation (wsCV) of the PVWM CBF values obtained from the two scans, correlation with concurrently-acquired PET CBF values, and by comparing them with that measured in other commonly used regions of interest (ROIs) such as whole brain (WB), gray matter (GM) and white matter (WM). The wsCVs for PVWM CBF with single and multi-PLD acquisitions were 5.7% (95% CI: 3.4-7.7%) and 6.1% (95% CI: 3.8-8.3%), which were similar to those obtained from WB, GM and WM CBF even though the PVWM region is the most weakly perfused region of brain parenchyma. Correlations between relative PVWM CBF derived from ASL and from [15O]-water PET were also comparable to the other ROIs. Finally, the ATT of the PVWM region was found to be 1.27 ± 0.27 s, which was not an outlier for the arterial circulation of the brain. These findings suggest that PVWM CBF can be reliably measured with the current state-of-the-art ASL methods.
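A common way to compute the within-subject coefficient of variation (wsCV) for two repeated scans per subject is sketched below; this is the standard repeatability estimate for paired test-retest data, and the CBF values are hypothetical, not the study's measurements.

```python
# wsCV for two scans per subject: sqrt(mean within-subject variance)/grand mean,
# where the within-subject variance for a pair is (x1 - x2)^2 / 2.
import numpy as np

def within_subject_cv(scan1, scan2):
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    ws_var = np.mean((scan1 - scan2) ** 2 / 2.0)
    grand_mean = np.mean(np.concatenate([scan1, scan2]))
    return np.sqrt(ws_var) / grand_mean

cbf_scan1 = [14.2, 15.1, 13.8, 16.0, 14.9]  # hypothetical PVWM CBF, scan 1
cbf_scan2 = [13.6, 15.9, 14.3, 15.2, 14.1]  # hypothetical PVWM CBF, scan 2
print(f"wsCV = {100 * within_subject_cv(cbf_scan1, cbf_scan2):.1f}%")
```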
abstract_id: PUBMED:29926756
Test-retest reliability of perfusion of the precentral cortex and precentral subcortical white matter on three-dimensional pseudo-continuous arterial spin labeling. Objective This study was performed to evaluate the test-retest reliability of perfusion of the cortex and subcortical white matter on three-dimensional spiral fast spin echo pseudo-continuous arterial spin labeling (3D-ASL). Methods Eight healthy subjects underwent 3D-ASL and structural imaging at the same time each day for 1 week. ASL data acquisition was performed in the resting state and right finger-tapping state. Cerebral blood flow (CBF) images were calculated, and the CBF values of the precentral cortex (PCC) and precentral subcortical white matter (PCSWM) were automatically extracted based on the structural images and CBF images. Results In the resting state, the intraclass correlation coefficient (ICC) of the bilateral PCC was 0.84 (left) and 0.81 (right) and that of the bilateral PCSWM was 0.89 (left) and 0.85 (right). In the finger-tapping state, the ICC of the bilateral PCC was 0.91 (left) and 0.87 (right) and that of the bilateral PCSWM was 0.87 (left) and 0.92 (right). The CBF value of the left PCC and PCSWM was not significantly different between the resting state and finger-tapping state on two ASL scans. Conclusion 3D-ASL provides reliable CBF measurement in the cortex and subcortical white matter in the resting or controlled state.
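The abstract does not state which ICC variant was used, so the sketch below assumes a one-way random-effects ICC(1,1) computed over two sessions; the study itself acquired daily scans for a week, and the CBF values here are hypothetical.

```python
# One-way random-effects ICC(1,1) = (MSB - MSW) / (MSB + (k-1)*MSW).
import numpy as np

def icc_oneway(x1, x2):
    data = np.column_stack([x1, x2]).astype(float)   # subjects x sessions
    n, k = data.shape
    subj_means = data.mean(axis=1)
    grand = data.mean()
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)            # between-subject MS
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

pcc_day1 = [52.1, 48.7, 55.3, 50.2, 47.5, 53.8, 49.9, 51.6]  # hypothetical PCC CBF
pcc_day2 = [51.4, 49.5, 54.6, 49.0, 48.2, 54.5, 50.8, 52.3]
print(f"ICC(1,1) = {icc_oneway(pcc_day1, pcc_day2):.2f}")
```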
abstract_id: PUBMED:29312135
Cerebral Hemodynamic and White Matter Changes of Type 2 Diabetes Revealed by Multi-TI Arterial Spin Labeling and Double Inversion Recovery Sequence. Diabetes has been reported to affect the microvasculature and lead to cerebral small vessel disease (SVD). Past studies using arterial spin labeling (ASL) at a single post-labeling delay reported reduced cerebral blood flow (CBF) in patients with type 2 diabetes. The purpose of this study was to characterize cerebral hemodynamic changes of type 2 diabetes using a multi-inversion-time 3D GRASE pulsed ASL (PASL) sequence to simultaneously measure CBF and bolus arrival time (BAT). Thirty-six patients with type 2 diabetes (43-71 years, 17 male) and 36 gender- and age-matched control subjects underwent MRI scans at 3 T. Mean CBF/BAT values were computed for gray and white matter (GM and WM) of each subject, while a voxel-wise analysis was performed for comparison of regional CBF and BAT between the two groups. In addition, white matter hyperintensities (WMHs) were detected by a double inversion recovery (DIR) sequence with relatively high sensitivity and spatial resolution. Mean CBF of the WM, but not GM, of the diabetes group was significantly lower than that of the control group (p < 0.0001). Regional CBF decreases were detected in the left middle occipital gyrus (p = 0.0075), but failed to reach significance after correction for partial volume effects. BAT increases were observed in the right calcarine fissure (p < 0.0001), left middle occipital gyrus (p < 0.0001), and right middle occipital gyrus (p = 0.0011). Within the group of diabetic patients, BAT in the right middle occipital gyrus was positively correlated with the disease duration (r = 0.501, p = 0.002), and BAT in the left middle occipital gyrus was negatively correlated with the binocular visual acuity (r = -0.408, p = 0.014). Diabetic patients also had more WMHs than the control group (p = 0.0039). Significant differences in CBF and BAT, and more WMHs, were observed in patients with diabetes, which may be related to impaired vision and the risk of SVD in type 2 diabetes.
abstract_id: PUBMED:32493483
Automatic group-wise whole-brain short association fiber bundle labeling based on clustering and cortical surface information. Background: Diffusion MRI is the preferred non-invasive in vivo modality for the study of brain white matter connections. Tractography datasets contain 3D streamlines that can be analyzed to study the main brain white matter tracts. Fiber clustering methods have been used to automatically group similar fibers into clusters. However, due to inter-subject variability and artifacts, the resulting clusters are difficult to process for finding common connections across subjects, especially for superficial white matter.
Methods: We present an automatic method for the labeling of short association bundles in a group of subjects. The method is based on an intra-subject fiber clustering that generates compact fiber clusters. Subsequently, the clusters are labeled based on the cortical connectivity of the fibers, taking the Desikan-Killiany atlas as reference, and named according to their relative position along one axis. Finally, two different strategies were applied and compared for the labeling of inter-subject bundles: a matching with the Hungarian algorithm, and a well-known fiber clustering algorithm called QuickBundles.
Results: Individual labeling was executed over four subjects, with an execution time of 3.6 min. An inspection of individual labeling based on a distance measure showed good correspondence among the four tested subjects. Two inter-subject labeling strategies were successfully implemented, applied to 20 subjects, and compared using a set of distance thresholds ranging from a conservative value of 10 mm to a moderate value of 21 mm. The Hungarian algorithm led to high correspondence but low reproducibility for all thresholds, with an execution time of 96 s. QuickBundles led to better correspondence and reproducibility, with a short execution time of 9 s. Hence, the whole inter-subject labeling process over 20 subjects took 1.17 h.
Conclusion: We implemented a method for the automatic labeling of short bundles in individuals, based on an intra-subject clustering and the connectivity of the clusters with the cortex. The labels provide useful information for the visualization and analysis of individual connections, which is very difficult without any additional information. Furthermore, we provide two fast inter-subject bundle labeling methods. The obtained clusters could be used for performing manual or automatic connectivity analysis in individuals or across subjects.
abstract_id: PUBMED:32217128
Utility of a diffusion-weighted arterial spin labeling (DW-ASL) technique for evaluating the progression of brain white matter lesions. Purpose: To investigate the utility of diffusion-weighted arterial spin labeling (DW-ASL) for detecting the progression of brain white matter lesions.
Materials And Methods: A total of 492 regions of interest (ROIs) in 41 patients were prospectively analyzed. DW-ASL was performed using a diffusion gradient prepulse with five b-values (0, 25, 60, 102, and 189) before the ASL readout. We calculated the water exchange rate (Kw) with post-processing using the ASL signal information for each b-value. The cerebral blood flow (CBF) was also calculated using the b0 images. Using the signal information in FLAIR (fluid-attenuated inversion recovery) images, we classified the severity of white matter lesions into three grades: non-lesion, moderate, and severe. In addition, the normal Kw level was measured from DW-ASL data of 60 ROIs in five control subjects. The degree of variance of the Kw values (Kw-var) was calculated by squaring the difference between each Kw value and the normal Kw level. All patients' ROIs were divided into non-progressive and progressive white matter lesions by comparing the present FLAIR images with those obtained 2 years before this acquisition.
Results: Compared to the non-progressive group, the progressive group had significantly lower CBF, significantly higher severity grades in FLAIR, and significantly greater Kw-var values. In a receiver operating characteristic curve analysis, a high area under the curve (AUC) of 0.89 was obtained with the use of Kw-var. In contrast, AUCs of 0.59 for CBF and 0.72 for severity grades in FLAIR were obtained.
Conclusions: The DW-ASL technique can be useful to detect the progression of brain white matter lesions. This technique will become a clinical tool for patients with various degrees of white matter lesions.
abstract_id: PUBMED:30344894
Cerebral blood flow and predictors of white matter lesions in adults with Tetralogy of Fallot. Long-term outcomes for Tetralogy of Fallot (TOF) have improved dramatically in recent years, but survivors are still afflicted by cerebral damage. In this paper, we characterized the prevalence and predictors of cerebral silent infarction (SCI) and their relationship to cerebral blood flow (CBF) in 46 adult TOF patients. We calculated both whole-brain and regional CBF using 2D arterial spin labeling (ASL) images, and investigated the spatial overlap between voxel-wise CBF values and white matter hyperintensities (WMHs) identified from T2-FLAIR images. SCIs were found in 83% of subjects and were predicted by the year of the patient's first cardiac surgery and the patient's age at scanning (combined r2 = 0.44). CBF was not different in brain regions prone to stroke compared with healthy white matter.
abstract_id: PUBMED:26755444
Unilateral fetal-type circle of Willis anatomy causes right-left asymmetry in cerebral blood flow with pseudo-continuous arterial spin labeling: A limitation of arterial spin labeling-based cerebral blood flow measurements? The accuracy of cerebral blood flow measurements using pseudo-continuous arterial spin labeling can be affected by vascular factors other than cerebral blood flow, such as flow velocity and arterial transit time. We aimed to elucidate the effects of common variations in vascular anatomy of the circle of Willis on pseudo-continuous arterial spin labeling signal. In addition, we investigated whether possible differences in pseudo-continuous arterial spin labeling signal could be mediated by differences in flow velocities. Two hundred and three elderly participants underwent magnetic resonance angiography of the circle of Willis and pseudo-continuous arterial spin labeling scans. Mean pseudo-continuous arterial spin labeling-cerebral blood flow signal was calculated for the gray matter of the main cerebral flow territories. Mean cerebellar gray matter pseudo-continuous arterial spin labeling-cerebral blood flow was significantly lower in subjects having a posterior fetal circle of Willis variant with an absent P1 segment. The posterior fetal circle of Willis variants also showed a significantly higher pseudo-continuous arterial spin labeling-cerebral blood flow signal in the ipsilateral flow territory of the posterior cerebral artery. Flow velocity in the basilar artery was significantly lower in these posterior fetal circle of Willis variants. This study indicates that pseudo-continuous arterial spin labeling measurements underestimate cerebral blood flow in the posterior flow territories and cerebellum of subjects with a highly prevalent variation in circle of Willis morphology. Additionally, our data suggest that this effect is mediated by concomitant differences in flow velocity between the supplying arteries.
abstract_id: PUBMED:27073378
Pulsed arterial spin labeling effectively and dynamically observes changes in cerebral blood flow after mild traumatic brain injury. Cerebral blood flow is strongly associated with brain function, and altered cerebral blood flow is a key manifestation and diagnostic basis for a variety of encephalopathies. However, changes in cerebral blood flow after mild traumatic brain injury remain poorly understood. This study sought to observe changes in cerebral blood flow in different regions after mild traumatic brain injury using pulsed arterial spin labeling. Our results demonstrate maximal cerebral blood flow in gray matter and minimal cerebral blood flow in white matter in patients with mild traumatic brain injury. At the acute and subacute stages, cerebral blood flow was reduced in the occipital lobe, parietal lobe, central region, subcutaneous region, and frontal lobe. Cerebral blood flow was restored at the chronic stage. At the acute, subacute, and chronic stages, changes in cerebral blood flow were not apparent in the insula. Cerebral blood flow in the temporal lobe and limbic lobe diminished at the acute and subacute stages, but was restored at the chronic stage. These findings suggest that pulsed arterial spin labeling can precisely measure cerebral blood flow in various brain regions, and may serve as a reference for evaluating a patient's condition and judging prognosis after traumatic brain injury.
abstract_id: PUBMED:26106546
Characterizing the white matter hyperintensity penumbra with cerebral blood flow measures. Objective: White matter hyperintensities (WMHs) are common with age, grow over time, and are associated with cognitive and motor impairments. Mechanisms underlying WMH growth are unclear. We aimed to determine the presence and extent of decreased cerebral blood flow (CBF) in the normal-appearing white matter (NAWM) surrounding WMHs to identify 'WM at risk', or the WMH CBF penumbra. We further aimed to validate the cross-sectional findings by determining whether baseline WMH penumbra CBF predicts the development of new WMHs at follow-up.
Methods: Sixty-one cognitively intact elderly subjects received 3 T MPRAGE, FLAIR, and pulsed arterial spin labeling (PASL). Twenty-four subjects returned for follow-up MRI. The inter-scan interval was 18 months. A NAWM layer mask, comprising fifteen layers, each 1 mm thick, surrounding the WMHs, was generated for periventricular (PVWMH) and deep (DWMH) WMHs. Mean CBF for each layer was computed. New WMH and persistent NAWM voxels for each penumbra layer were defined from follow-up MRI.
Results: CBF in the area surrounding WMHs was significantly lower than that of the total brain NAWM, extending approximately 12 mm from both the established PVWMH and DWMH. Voxels with new WMH at follow-up had significantly lower baseline CBF than voxels that maintained NAWM, suggesting that baseline CBF can predict the development of new WMHs over time.
Conclusions: A CBF penumbra exists surrounding WMHs, which is associated with future WMH expansion. ASL MRI can be used to monitor interventions to increase white matter blood flow for the prevention of further WM damage and its cognitive and motor consequences.
Answer: Yes, labeling matters in the context of neuroimaging and the study of cerebral blood flow (CBF) and white matter (WM) lesions. Labeling techniques such as arterial spin labeling (ASL) are non-invasive MRI methods used to measure CBF and are particularly relevant in the research of various neurological conditions, including dementia, traumatic brain injury, and small vessel disease (SVD) associated with conditions like type 2 diabetes and Tetralogy of Fallot (TOF).
For instance, in the study of dementia, it is important to accurately measure WM perfusion without contamination from gray matter (GM) perfusion signal. This can be achieved by eroding WM masks to avoid GM contamination, which is crucial when using WM perfusion as a micro-vascular biomarker (PUBMED:24371796).
The reliability of ASL-derived CBF measurements in periventricular white matter (PVWM) has been demonstrated to be comparable to other regions of the brain, such as whole brain and GM, despite PVWM being the most weakly perfused brain region. This suggests that ASL can be reliably used for measuring PVWM CBF (PUBMED:35419550).
Test-retest reliability of ASL has also been shown to be high for perfusion measurements in the precentral cortex and subcortical white matter, indicating that ASL provides reliable CBF measurements in both cortical and subcortical WM in resting or controlled states (PUBMED:29926756).
In patients with type 2 diabetes, ASL has revealed cerebral hemodynamic changes and white matter alterations, which may be related to impaired vision and the risk of SVD (PUBMED:29312135).
Labeling is also significant in the context of diffusion MRI and tractography, where fiber clustering methods and cortical surface information are used to label short association bundles in the brain. This labeling is essential for visualizing and analyzing individual connections and for performing connectivity analysis across subjects (PUBMED:32493483).
Furthermore, diffusion-weighted ASL (DW-ASL) has been found useful for detecting the progression of brain white matter lesions, providing a potential clinical tool for patients with various degrees of white matter lesions (PUBMED:32217128).
In adults with TOF, the prevalence and predictors of cerebral silent infarction (SCI) and their relationship to CBF were characterized, but CBF was not found to differ in brain regions prone to stroke compared with healthy white matter. This suggests that while SCIs are common in TOF patients, ASL-derived CBF measurements may not directly reflect regions at risk for SCI (PUBMED:30344894).
Additionally, variations in vascular anatomy, such as those found in the circle of Willis, can affect ASL signal and, consequently, CBF measurements. For example, a unilateral fetal-type circle of Willis anatomy can cause asymmetry in CBF measurements, indicating that vascular factors other than CBF, such as flow velocity and arterial transit time, can influence ASL signal (PUBMED:26755444).
In the context of mild traumatic brain injury, pulsed ASL has been used to observe changes in CBF in different brain regions, demonstrating its effectiveness in dynamically tracking cerebral perfusion changes after injury (PUBMED:27073378).
Finally, the concept of a white matter hyperintensity (WMH) penumbra has been introduced, where decreased CBF in normal-appearing white matter surrounding WMHs can predict the development of new WMHs. This highlights the importance of labeling and measuring CBF to identify areas at risk for WMH expansion (PUBMED:26106546).
In summary, labeling techniques such as ASL are crucial for accurately measuring and characterizing CBF and WM lesions, which have implications for understanding, diagnosing, and potentially treating various neurological conditions. The choice of labeling strategy and its proper application are essential for obtaining reliable and meaningful neuroimaging data. |
Instruction: Retrograde abdominal visceral perfusion: is it beneficial?
Abstracts:
abstract_id: PUBMED:28942833
Effect of retrograde colonic electrical stimulation on colonic transit and stress-induced visceral hypersensitivity in rats with irritable bowel syndrome. Objective: To evaluate the effects of retrograde colonic electrical stimulation (RCES) with trains of short pulses and RCES with long pulses on colonic transit in irritable bowel syndrome (IBS) rats, and to investigate whether stress-induced visceral hypersensitivity could be alleviated by RCES, so as to identify a valuable new approach for IBS treatment.
Methods: A total of 48 male rats were randomly divided into a model group and a control group. The visceral hypersensitivity model was induced by a 6-day HIS protocol composed of two stressors: restraint stress for 40 min and forced swimming stress for 20 min. The extent of visceral hypersensitivity was quantified by electromyography and abdominal withdrawal reflex scores (AWRs) during colorectal distension (using a balloon) at different pressures. After modeling, all rats were equipped with electrodes in the descending colon for retrograde electrical stimulation and a PE tube for perfusing phenol red saline solution into the ileocecum. After recovery from surgery, RCES with long pulses, RCES with trains of short pulses, and sham RCES were performed on the colonic serosa for 40 min in six groups of 8 rats each, including three groups of visceral hypersensitivity rats and three groups of healthy rats. Colonic transit was assessed by calculating the output of phenol red from the anus every 10 min for 90 min. Finally, the extent of visceral hypersensitivity was quantified again in the model group.
Results: After the 6-day HIS protocol, the HIS rats displayed increased sensitivity to colorectal distension compared with the control group at different distension pressures (P < 0.01). RCES with trains of short pulses and with long pulses significantly attenuated the hypersensitive responses to colorectal distension in the HIS rats compared with the sham RCES group (P < 0.01). Regarding the effects of RCES on colonic transit: in the IBS rats, colonic emptying at 90 min was (77.4 ± 3.4)%, (74.8 ± 2.4)%, and (64.2 ± 1.6)% in the sham RCES, long pulses, and trains of short pulses groups, respectively; in healthy rats, colonic emptying at 90 min was (65.2 ± 3.5)%, (63.5 ± 4.0)%, and (54.0 ± 2.5)% in the same groups.
Conclusion: RCES with long pulses and RCES with trains of short pulses can significantly alleviate stress-induced visceral hypersensitivity. RCES with trains of short pulses has an inhibitory effect on colonic transit in both visceral hypersensitivity rats and healthy rats.
abstract_id: PUBMED:35788892
Interventional Treatment Modalities for Chronic Abdominal and Pelvic Visceral Pain. Purpose Of Review: Chronic abdominal and pelvic visceral pain is an often difficult-to-treat pain condition that requires a multidisciplinary approach. This article specifically reviews the interventional treatment options for chronic visceral abdominal and pelvic pain.
Recent Findings: Sympathetic nerve blocks are the main interventional option for the treatment of chronic abdominal and pelvic visceral pain. Initially, nerve blocks are performed; subsequently, neurolytic injections (alcohol or phenol) are longer-term options. This review describes different techniques for sympathetic blockade. Neuromodulation is a potential option via dorsal column stimulation or dorsal root ganglion stimulation. Finally, intrathecal drug delivery is sometimes appropriate for refractory cases. This paper reviews these interventional options for the treatment of chronic abdominal and pelvic visceral pain.
abstract_id: PUBMED:24388260
Visceral injury in abdominal trauma: a retrospective study. Background: Abdominal trauma is a major cause of morbidity and mortality worldwide, which makes an approach focused on rapid diagnosis and treatment essential. The main goals of this study were to identify global epidemiologic data on abdominal trauma in our tertiary trauma center and to study the traumatic lesions, treatment, and outcomes.
Material And Methods: Retrospective analysis of the clinical files of all patients admitted with abdominal trauma over a period of 5 years in a tertiary trauma center.
Results: The mean age was 42.6 years, and males were the most affected (74.2%). At admission, most patients had a Revised Trauma Score > 4. The main causes of trauma were blunt mechanisms from motor-vehicle collisions (39.9% as motor-vehicle occupants and 10.7% from pedestrian collisions) and falls (25.5%). Penetrating trauma, from abdominal stab wounds and gunshot wounds, occurred in only 12.3% of the cases; hollow visceral injuries were more frequent in that context. Multiple abdominal organ injury occurred in 19.5% of the cases. Conservative treatment was performed in 65.3% of the cases. Overall mortality was 12%, with no deaths after penetrating lesions.
Conclusions: Abdominal trauma most frequently results from motor-vehicle crashes and falls and is blunt in the majority of cases. The most affected organs are solid organs, and the approach is usually conservative. Hollow visceral lesions remain difficult to diagnose.
abstract_id: PUBMED:24948557
Visceral adiposity is not associated with abdominal aortic aneurysm presence and growth. Previous studies in rodent models and patients suggest that visceral adipose tissue could play a direct role in the development and progression of abdominal aortic aneurysm (AAA). This study aimed to assess the association of visceral adiposity with AAA presence and growth. This was a case-control investigation of patients who did (n=196) and did not (n=181) have an AAA and who presented to The Townsville Hospital vascular clinic between 2003 and 2012. Cases were patients with AAA (infra-renal aortic diameter >30 mm) and controls were patients with intermittent claudication but no AAA (infra-renal aortic diameter <30 mm). All patients underwent computed tomography angiography (CTA). The visceral to total abdominal adipose volume ratio was estimated from CTAs by assessing total and visceral adipose deposits using an imaging software program. Measurements were assessed for reproducibility by repeat assessments on 15 patients. AAA risk factors were recorded at entry. Forty-five cases underwent two CTAs more than 6 months apart to assess AAA expansion. The association of visceral adiposity with AAA presence and growth was examined using logistic regression. Visceral adipose assessment by CTA was highly reproducible (mean coefficient of variation 1.0%). AAA was positively associated with older age and negatively associated with diabetes. The visceral to total abdominal adipose volume ratio was not significantly associated with AAA after adjustment for other risk factors. Patients with a visceral to total abdominal adipose volume ratio in quartile four had a 1.63-fold increased risk of AAA, but with wide confidence intervals (95% CI 0.71-3.70; p=0.248). Visceral adiposity was not associated with AAA growth. In conclusion, this study suggests that visceral adiposity is not specifically associated with AAA presence or growth, although larger studies are required to confirm these findings.
abstract_id: PUBMED:32010417
Clinical Features of Spontaneous Isolated Dissection of Abdominal Visceral Arteries. Background: Spontaneous isolated dissection of abdominal visceral arteries without aortic dissection is rare and its pathology and prognosis are not yet clear; therefore, therapeutic strategies for this disease have not been established. The present multi-institution investigational study analyzed the clinical features of patients with spontaneous isolated dissection of abdominal visceral arteries.
Methods: A total of 36 patients diagnosed with spontaneous isolated dissection of abdominal visceral arteries from January 2010 to October 2016 were enrolled. The medical data of the patients were retrospectively reviewed, and imaging characteristics were evaluated. Spontaneous isolated dissection of abdominal visceral arteries was detected on upper abdominal computed tomography examination in almost all patients, and on magnetic resonance imaging in one patient.
Results: Of the 36 cases, 26 involved dissection of the superior mesenteric artery, nine involved the celiac artery, two the splenic artery, one the common hepatic artery, one the gastroduodenal artery, and one the left gastric artery. Among the 36 patients, 20 had hypertension and 14 were current smokers. Additionally, only one patient had diabetes and four patients had dyslipidemia. Moreover, 32 patients complained of pain, including abdominal pain and back pain; one had a cough, and three had no symptoms. Of the 36 patients, 34 (94.4%) were treated conservatively, and two (5.6%) required intravascular treatment. All patients were discharged without complications.
Conclusions: Our findings indicate that hypertension and smoking might be closely involved in the pathogenesis of spontaneous isolated dissection of abdominal visceral arteries, whereas dyslipidemia and diabetes might be less involved. Additionally, a few asymptomatic patients were incidentally diagnosed, indicating that the absence of symptoms cannot be used to rule out the presence of this disease. Randomized clinical trials cannot easily be performed because a considerable number of cases would be required. Therefore, detailed descriptions of clinical features, as provided in our report, are important.
abstract_id: PUBMED:30476681
Impact of Abdominal Visceral Adiposity on Adult Asthma Symptoms. Background: Previous studies have shown the association of anthropometric measures with poor asthma symptoms, especially among women. However, the potential influence of visceral adiposity on asthma symptoms has not been investigated well.
Objective: In this study, we evaluated whether visceral adiposity is related to poor adult asthma symptoms independent of anthropometric measures and sex. Where this relationship was present, we investigated whether it is explained by an influence on pulmonary function and/or obesity-related comorbidities.
Methods: We analyzed data from 206 subjects with asthma from Japan. In addition to anthropometric measures (body mass index and waist circumference), abdominal visceral and subcutaneous fat were assessed by computed tomography scan. Quality of life was assessed using the Japanese version of the Asthma Quality of Life Questionnaire.
Results: All obesity indices were inversely associated with asthma quality of life among females. However, only the visceral fat area showed a statistically significant inverse association with the Asthma Quality of Life Questionnaire score in males. Only abdominal visceral fat was associated with higher gastroesophageal reflux disease and depression scores. Although all obesity indices showed an inverse association with functional residual capacity, only the visceral fat area had a significant inverse association with FEV1 % predicted, independent of other obesity indices.
Conclusions: Regardless of sex, abdominal visceral fat was associated with reduced asthma quality of life independent of other obesity indices, and this may be explained by the impact of abdominal visceral fat on reduced FEV1 % predicted and a higher risk for gastroesophageal reflux disease and depression. Therefore, visceral adiposity may have more clinical influence on asthma symptoms than any other obesity index.
abstract_id: PUBMED:34686264
Abdominal Aortic and Visceral Artery Aneurysms. Abdominal aortic aneurysms account for nearly 9000 deaths annually, with ruptured abdominal aortic aneurysms being the thirteenth leading cause of death in the United States. Abdominal aortic aneurysms can be detected by screening, but a majority are detected incidentally. Visceral artery aneurysms are often discovered incidentally, and treatment is guided by symptoms, etiology, and size. A timely diagnosis and referral to a vascular specialist are essential for timely open or endovascular repair and to ensure successful patient outcomes.
abstract_id: PUBMED:38025491
Functional Gastrointestinal Disorders and Abdominal Visceral Fat in Children and Adolescents. Purpose: Few reports have investigated the correlation between functional gastrointestinal disorders (FGIDs) and the degree of obesity in children and adolescents. Thus, this study aimed to examine the relationship between FGIDs and the degree of obesity in children and adolescents.
Methods: Children and adolescents (<19 years old) who had undergone abdominopelvic computed tomography and had been diagnosed with FGIDs from 2015 to 2016 were included in this retrospective case-control study in a ratio of 1:2. Abdominal visceral fat was measured using an image analysis software.
Results: The mean age of all 54 FGID patients was 12.9±3.4 years, and the male:female ratio was 1:1.2. We observed no difference in body mass index (BMI) between the FGID and control groups (19.5±4.6 vs. 20.6±4.3 kg/m2, p=0.150). However, the FGID group had less abdominal visceral fat than the control group (26.2±20.0 vs. 34.4±26.9 cm2, p=0.048). Boys in the FGID group had lower BMI (18.5±3.5 vs. 20.9±4.3 kg/m2, p=0.019) and less abdominal visceral fat (22.8±15.9 vs. 35.9±31.8 cm2, p=0.020) than boys in the control group. However, we found no difference in BMI (20.5±5.3 vs. 20.4±4.2 kg/m2, p=0.960) or abdominal visceral fat (29.0±22.9 vs. 33.1±22.1 cm2, p=0.420) between girls in the two groups.
Conclusion: Our study revealed a difference in the relationship between FGID and the degree of obesity according to sex, which suggests that sex hormones influence the pathogenesis of FGIDs. Multicenter studies with larger cohorts are required to clarify the correlation between FGID subtypes and the degree of obesity.
abstract_id: PUBMED:30680265
Visceral Injuries in Patients with Blunt and Penetrating Abdominal Trauma Presenting to a Tertiary Care Facility in Karachi, Pakistan. Introduction Abdominal injuries are responsible for 10% of the mortalities due to trauma. Delays in early diagnosis or misdiagnoses are two major reasons for the mortality and morbidity associated with abdominal trauma. The objectives of this study were to determine the frequency of visceral injuries in patients with abdominal trauma and compare the frequency of visceral injuries in patients with blunt and penetrating abdominal trauma. Methods We conducted a cross-sectional study from May 2016 to May 2018 of patients presenting to the emergency department (ED) at Jinnah Postgraduate Medical Center in Karachi, Pakistan. Patients were 12 to 65 years old and presented within 24 hours of abdominal trauma. We recorded the type of abdominal visceral injuries, such as liver, spleen, intestine, stomach, mesentery, and pancreas. Results The mean patient age was 31 ±13 years. Penetrating trauma was found in most patients (n=72, 51%). Liver injuries were found in 37 patients (26.4%), spleen injuries in 29 patient (20.7%), stomach injuries in eight patients (5.7%), intestine injuries in 67 patients (47.9%), mesentery injuries in 21 patients (15%), and pancreas injuries in nine patients (6.4%). The type of abdominal trauma was found significantly associated with liver injury (p-value 0.021), and intestine injury (p-value <0.001). Conclusion Penetrating trauma (51.4%) was more common than blunt trauma (48.5%), and intestines are the most commonly affected by penetrating and blunt trauma injuries (70.1% and 47.8%, respectively). The liver is the most commonly affected (42.85%) in blunt trauma injuries, followed by the spleen (28.5%). The appropriate authorities should consider this information when instituting public health and safety initiatives.
abstract_id: PUBMED:37385141
Correlation between abdominal visceral volume measured by a simple procedure and bulging at the abdominal free flap donor site for breast reconstruction. Background: Breast reconstructions using autologous abdominal tissue result in breasts with a natural shape and consistency. One of the major complications is abdominal bulging. Because of the increased abdominal wall tension, high visceral volume (not visceral fat alone) may increase the occurrence rate of abdominal bulging. A simple procedure involving CT imaging was used to assess this relationship in patients undergoing an abdominal free flap for unilateral breast reconstruction.
Methods: A total of 278 patients were enrolled in this study. The patients' demographics and visceral volume thicknesses were compared between groups (Bulging (+) vs. Bulging (-)). Visceral volume was assessed based on the horizontal thickness, measured at the thickest part at the level of the umbilical fossa, between both sides beneath the transversus abdominis muscles.
Results: Bulging (+) consisted of 39 patients (14.0%), whereas Bulging (-) included 239 patients. Patients with Bulging (+) were significantly older, had a higher rate of gestational history, and had a thinner rectus abdominis muscle. In terms of visceral volume, the Bulging (+) group had a significantly greater horizontal thickness (median 233 mm vs. 219 mm, P < 0.001). No significant differences were observed with respect to other factors (age, BMI, history of laparotomy, and operative details). The multivariate logistic regression analysis revealed that the thickness of the rectus abdominis muscle, horizontal visceral volume, and gestational history were independently significant predictors.
Conclusions: Not only patients with a thin rectus abdominis muscle but also patients with a large horizontal visceral volume may have a higher risk of abdominal bulging.
Answer: The provided abstracts do not directly address the concept of "retrograde abdominal visceral perfusion," which typically refers to a technique used during certain surgeries to maintain blood flow to abdominal organs. Instead, the abstracts discuss various topics related to abdominal and visceral conditions, including the effects of retrograde colonic electrical stimulation on IBS rats (PUBMED:28942833), interventional treatment modalities for chronic abdominal and pelvic visceral pain (PUBMED:35788892), visceral injury in abdominal trauma (PUBMED:24388260), the lack of association between visceral adiposity and abdominal aortic aneurysm (PUBMED:24948557), clinical features of spontaneous isolated dissection of abdominal visceral arteries (PUBMED:32010417), the impact of abdominal visceral adiposity on adult asthma symptoms (PUBMED:30476681), abdominal aortic and visceral artery aneurysms (PUBMED:34686264), the relationship between functional gastrointestinal disorders and abdominal visceral fat in children and adolescents (PUBMED:38025491), visceral injuries in patients with abdominal trauma (PUBMED:30680265), and the correlation between abdominal visceral volume and bulging at the abdominal free flap donor site for breast reconstruction (PUBMED:37385141).
Given the lack of information on retrograde abdominal visceral perfusion in the provided abstracts, it is not possible to determine its benefits based on these sources. However, it is worth noting that retrograde colonic electrical stimulation (RCES) was found to significantly alleviate stress-induced visceral hypersensitivity in IBS rats, suggesting that certain retrograde interventions in the abdominal region may have therapeutic potential (PUBMED:28942833). For a definitive answer on the benefits of retrograde abdominal visceral perfusion, one would need to consult sources specifically addressing this procedure. |
Instruction: Does intramesorectal excision for ulcerative colitis impact bowel and sexual function when compared with total mesorectal excision?
Abstracts:
abstract_id: PUBMED:25124292
Does intramesorectal excision for ulcerative colitis impact bowel and sexual function when compared with total mesorectal excision? Background: Proctectomy for ulcerative colitis (UC) can be performed via intramesorectal (IME) or total mesorectal excision (TME).
Methods: We compared patient-reported bowel and sexual function between IME and TME UC patients (September 2000 to March 2011) using the Memorial Sloan-Kettering Cancer Center Bowel Function scale, Fecal Incontinence Quality of Life, Fecal Incontinence Severity Index, Female Sexual Function Instrument, and International Index of Erectile Dysfunction surveys.
Results: Eighty-nine IME and TME patients (35 ± 2 years, 57% male, 62% IME) had similar baseline characteristics, although IME patients underwent more open procedures (P ≤ .03). IME patients reported better fecal continence (P = .009) but similar fecal incontinence-related quality of life (P ≥ .44). For sexual function, there were no differences for either women (Female Sexual Function Instrument; P ≥ .20) or men (International Index of Erectile Dysfunction; P ≥ .22).
Conclusions: IME appears to be associated with better fecal continence but no difference in overall bowel or sexual function compared with TME in patients with UC.
abstract_id: PUBMED:34934583
Laparoscopic Proctocolectomy With Transanal Total Mesorectal Excision for Ulcerative Colitis. Transanal total mesorectal excision (TaTME) refers to endoscopic retrograde total mesorectal excision and is becoming increasingly popular worldwide. TaTME improves surgical manipulation and minimizes the risk of local recurrence of rectal cancer by ensuring circumferential resection margins. TaTME is mainly indicated for patients in whom transabdominal approaches are expected to be technically challenging. We extended the indications for TaTME to include surgery for potentially cancerous rectal lesions in ulcerative colitis. Here, we report a case of proctocolectomy with TaTME for ulcerative colitis. A 38-year-old woman who was receiving treatment for ulcerative colitis underwent biopsy of random samples from the transverse colon to the rectum. Histopathological findings revealed noninvasive dysplasia with p53 overexpression, suggestive of cancer. We therefore extended the indication of TaTME to surgery for ulcerative colitis. We formed two surgical teams and performed laparoscopic proctocolectomy with TaTME simultaneously. This simultaneous operation reduced the duration of the procedure in the present case. The patient was discharged without any complications and underwent loop ileostomy closure four months postoperatively. The patient recovered without significant loss of anal sphincter function and is doing well four months after the second surgery. We propose that laparoscopic proctocolectomy with TaTME, performed simultaneously by two teams, is a safe and effective technique associated with a shorter operation time than previously reported. Additionally, TaTME was useful in confirming the appropriate dissection layer as well as in surgical manipulation. Hence, TaTME could serve as a useful therapeutic option for ulcerative colitis surgery.
abstract_id: PUBMED:27885870
Transanal total mesorectal excision for restorative coloproctectomy in an obese high-risk patient with colitis-associated carcinoma. Transanal total mesorectal excision (TaTME) offers great potential for the treatment of malignant and benign diseases. However, laparoscopic-assisted TaTME in ulcerative colitis has not been described in more than a handful of patients. We present a 47-year-old highly comorbid female patient with an ulcerative colitis-associated carcinoma of the ascending colon and steroid-refractory pancolitis. A two-stage restorative coloproctectomy including right-sided complete mesocolic excision was conducted. The second step consisted of a successful nerve-sparing TaTME and a handsewn ileal pouch-anal anastomosis. TaTME may extend the possible treatment options in inflammatory bowel disease, especially for high-risk patients.
abstract_id: PUBMED:24146339
Does intramesorectal proctectomy with rectal eversion affect postoperative complications compared to standard total mesorectal excision in patients with ulcerative colitis? Introduction: Proctectomy for ulcerative colitis (UC) can be performed via intramesorectal proctectomy with concomitant rectal eversion (IMP/RE) or total mesorectal excision (TME). No data exists comparing the outcomes of the two techniques.
Methods: All UC patients undergoing J-pouch surgery at a single institution over 10.5 years were included. Postoperative complications with IMP/RE vs. TME were analyzed using univariable and multivariable statistics.
Results: One hundred nineteen of 201 (59 %) patients underwent IMP/RE. Demographic and disease characteristics were similar between groups. On univariable analysis, IMP/RE had fewer total perioperative complications than TME (p = 0.02), but no differences in postoperative length of stay or readmissions. Multivariable regression accounting for patient age, comorbidities, disease severity, preoperative medications, operative technique, and follow-up time (mean 5.5 ± 0.2 years) suggested that both anastomotic leak rate (OR 0.32; p = 0.04) and overall postoperative complications (2.10 ± 0.17 vs. 2.60 ± 0.20; p = 0.05) were lower in the IMP/RE group.
Conclusions: IMP/RE may be associated with fewer overall postoperative complications compared to TME. However, further studies on functional and long-term outcomes are needed.
abstract_id: PUBMED:30675660
The current state of the transanal approach to the ileal pouch-anal anastomosis. Background: The transanal approach to pelvic dissection has gained considerable traction, and utilization continues to expand, fueled by the transanal total mesorectal excision (TaTME) for rectal cancer. The same principles and benefits of transanal pelvic dissection may apply to the transanal restorative proctocolectomy with ileal pouch-anal anastomosis (IPAA), the TaPouch procedure. Our goal was to review the literature to date on the development and current state of the TaPouch.
Materials And Methods: We performed a PubMed database search for original articles on transanal pelvic dissections, IPAA, and the TaPouch procedure, with a manual search of relevant citations in the reference lists. The main outcomes were the technical aspects of the TaPouch, clinical and functional outcomes, and potential advantages, drawbacks, and future directions for the procedure.
Results: The conduct of the procedure has been defined, with the safety and feasibility demonstrated in small series. The reported rates of conversion and anastomotic leakage are low. There are no randomized trials or large-scale comparative studies available to assess effectiveness relative to the traditional IPAA.
Conclusions: The transanal approach to ileal pouch-anal anastomosis is an exciting adaptation of transanal total mesorectal excision that refines the technical steps of a complex operation. Additional experience is needed to establish comparative outcomes and define the ideal training and implementation pathways.
abstract_id: PUBMED:6370632
Sexual function and perineal wound healing after intersphincteric excision of the rectum for inflammatory bowel disease. The technique of intersphincteric excision of the rectum in patients with inflammatory bowel disease was introduced with the aim of avoiding postoperative sexual dysfunction and, combined with primary perineal suture, should decrease morbidity from delayed perineal wound healing. In a series of 98 patients so treated at St. Mark's Hospital, permanent sexual dysfunction from sympathetic nerve damage occurred in one male patient among 23 aged 60 years or less assessed postoperatively. No patient exhibited evidence of permanent parasympathetic nerve damage. Primary healing of the perineal wound was successful in 50 per cent of the cases and in 69 per cent the wound healed within three months of operation. It is suggested that this combination of operative techniques significantly decreases morbidity from rectal excision compared with more extensive procedures and should be more widely adopted.
abstract_id: PUBMED:36526828
How to Do It: Laparoscopic Total Abdominal Colectomy with Complete Mesocolic Excision (CME) for Transverse Colon Cancer in Ulcerative Colitis. Background: Ulcerative colitis (UC) is a chronic mucosal inflammatory bowel disease of the colon and rectum. After 10 years of having the disease, there is a significant risk of dysplasia or cancer in the affected colon and rectum, and because of the often aggressive biology of these tumors, frequent endoscopic surveillance is warranted. Over a third of patients with UC will ultimately require an operation, and although alternative operations can be pursued in specific cases, most patients prefer an ileal pouch-anal anastomosis (IPAA) with J-pouch construction.
Case: A staged IPAA removes the affected colon and rectum, treating the UC, and also restores intestinal continuity. However, the standard colectomy for UC includes low ligations of the main colonic vascular pedicle branches (ileocolic, right colic, middle colic, and inferior mesenteric), which does not constitute a proper oncologic operation. High ligation of the named vessels, as well as proper resection of the affected colon with its mesentery and lymph node package, is needed to treat colon cancer. Analogous to total mesorectal excision for rectal cancer, a more radical procedure that removes the tumor and lymph node packet for colon cancer is described as a complete mesocolic excision (CME), in an effort to increase disease-free survival.
Discussion: We demonstrate a laparoscopic subtotal colectomy for UC, with an oncologic complete mesocolic excision for a left transverse colon carcinoma in the setting of chronic mucosal inflammation secondary to chronic UC, as the first procedure in a 3-staged IPAA. The video also demonstrates how the lymph node dissection is extended towards the greater gastric curvature, encompassing the omentum and gastrocolic ligament. There were no postoperative complications in the 44-year-old male patient.
abstract_id: PUBMED:26254470
Inflammatory Bowel Disease and Sexual Function in Male and Female Patients: An Update on Evidence in the Past Ten Years. Background And Aims: Inflammatory bowel diseases [IBD] are a group of chronic, debilitating inflammatory intestinal conditions. The aim of this review was to assess the recent data regarding the impact of IBD on the sexual function of male and female patients.
Methods: A literature search was conducted on MEDLINE using, among others, the following search terms or their combinations: ulcerative colitis; Crohn's disease; sexual function; sexual health; relationship status; erectile dysfunction; surgery. All English-language studies published in the past 10 years that provided data evaluating sexual function in IBD patients were included.
Results: Fourteen studies were identified; six included IBD patients registered on a national database or presenting in a clinical setting, whereas eight evaluated sexual function after a surgical intervention for IBD. The majority of the studies used the International Index of Erectile Function [IIEF] and the Female Sexual Function Index [FSFI], both validated in general populations, for the assessment of sexual function among males and females, respectively. Impaired sexual function has been reported in general cohorts of IBD patients; females seemed to experience worse sexual dysfunction than males. Furthermore, depression was a consistent negative predictive factor across studies. Surgery did not seem to affect sexual function in the majority of studies, except for one prospective study that reported a significant improvement in male sexual function [IIEF, p < 0.05] but not female [FSFI, p = 0.6].
Conclusions: Sexual function among IBD patients may be impaired; thus, more studies are needed in order to develop appropriate instruments and proper, effective management strategies.
abstract_id: PUBMED:32668025
The impact of inflammatory bowel disease on sexual health in men: A scoping review. Aims And Objectives: To review the literature on the impact of inflammatory bowel disease on the sexual health of men and make recommendations for nursing practice and research.
Background: Inflammatory bowel disease is a chronic condition of the gastrointestinal tract, causing symptoms that may impact upon sexual health. Specialist nurses are well positioned to assess and manage sexual health, but there is a lack of clinical guidance, especially in relation to men.
Design: A systematic scoping review following the Arksey and O'Malley (International Journal of Social Research Methodology, 8, 2005, 19) framework reported in line with the PRISMA-ScR checklist (Tricco et al., Annals of Internal Medicine, 169, 2018, 467).
Methods: OVID MEDLINE ALL [R], OVID EMBASE [R], OVID PsychINFO, EBSCO CINAHL Complete, The Cochrane Library and ProQuest were searched. Inclusion and exclusion criteria were applied independently by two reviewers. Data were extracted, charted and summarised from eligible studies.
Results: Thirty-one studies met the inclusion criteria. These were synthesised under three categories: mediators, moderators and descriptors of sexual health. Depression, disease activity and surgery were the most commonly cited disease-related factors to affect sexual health in men. The most commonly used assessment tool was The International Index of Erectile Function. Descriptors of function included frequency of intercourse, libido and the ability to maintain a desired sexual role.
Conclusions: The effect of inflammatory bowel disease on sexual health in men involves a complex interaction of physical and psychosocial factors. Researchers must explore areas outside of erectile function to understand how the disease impacts sexuality, sexual well-being and masculinity. This can be achieved through qualitative exploration of patient, partner and health professional experiences.
Relevance To Clinical Practice: A holistic nursing assessment of men with inflammatory bowel disease should include sexual health. Developing understanding of how the disease influences sexual interaction and expression will facilitate support that is relevant, accessible and of value to men living with the disease.
abstract_id: PUBMED:33471205
Complications and functional outcomes after ileo-anal pouch excision-a systematic review of 14 retrospective observational studies. Purpose: The ileo-anal pouch (IAP) has been the gold standard procedure for maintenance of bowel continuity after panproctocolectomy for ulcerative colitis, familial adenomatous polyposis or hereditary non-polyposis colorectal cancer. However, the IAP has an estimated failure rate of 13% at 10 years post-procedure (Tulchinsky et al., Ann Surg 238(2):229-34, 2003), which can result in pouch excision (P.E.). This systematic review aims to synthesise all the available studies reporting post-operative outcomes of P.E. and its impact on patient quality of life (QoL), when available, which have not previously been summarised.
Methods: PubMed, Embase, Medline and the Cochrane library databases were searched with terms 'Pouch AND excision' OR 'Pouch AND removal' OR 'Pouch AND remove' OR 'IAP AND excision'. All studies reporting post-operative morbidity, mortality or functional outcomes in patients who had P.E. were included. Studies with < 5 patients, non-English studies and conference abstracts were excluded.
Results: Fourteen studies comprising 1601 patients were included. Overall complication rates varied from 18 to 63%, with the most common complications being persistent perineal sinus (9-40%) and surgical site infection (wound, 2-30%; intra-abdominal collection, 3-24%). The mortality rate was between 0.58 and 1.4%. QoL is generally lower in P.E. patients compared with the normal population across various QoL measures, and P.E. patients often had urinary and sexual dysfunction post-operatively.
Conclusions: There is a substantial incidence of complications after P.E.; however, there is no evidence describing QoL pre- and post-P.E. Further longitudinal research comparing QoL in patients undergoing P.E. and other treatment options such as indefinite diversion is required to definitively assess QoL post-procedure.
Answer: Intramesorectal excision (IME) for ulcerative colitis (UC) appears to be associated with better fecal continence but no difference in overall bowel or sexual function when compared with total mesorectal excision (TME). A study comparing patient-reported bowel and sexual function among IME versus TME UC patients found that IME patients reported better fecal continence (P = .009) but similar fecal incontinence-related quality of life (P ≥ .44). For sexual function, there were no differences for either women (Female Sexual Function Instrument; P ≥ .20) or men (International Index of Erectile Dysfunction; P ≥ .22) (PUBMED:25124292).
Additionally, another study suggested that intramesorectal proctectomy with rectal eversion (IMP/RE) may be associated with fewer overall postoperative complications compared to TME, although further studies on functional and long-term outcomes are needed (PUBMED:24146339).
The transanal total mesorectal excision (TaTME) approach, which is becoming increasingly popular, has also been used in UC surgery. It has been reported to be a safe and effective technique associated with a shorter operation time, and to be useful in confirming the appropriate dissection layer as well as in surgical manipulation (PUBMED:34934583). TaTME may extend the possible treatment options in inflammatory bowel disease, especially for high-risk patients (PUBMED:27885870).
The transanal approach to ileal pouch-anal anastomosis (TaPouch procedure) is an adaptation of TaTME for refining the technical steps of a complex operation. While the safety and feasibility of the TaPouch have been demonstrated in small series, additional experience is needed for comparative outcomes and defining the ideal training and implementation pathways (PUBMED:30675660).
In summary, IME for UC is associated with better fecal continence and similar sexual function outcomes compared to TME. The transanal approaches, including TaTME and TaPouch, are emerging as potential options in UC surgery, with promising results in terms of safety and effectiveness, but more research is needed to fully assess their impact on bowel and sexual function. |