Instruction: Is subclinical thyroid dysfunction in the elderly associated with depression or cognitive dysfunction? Abstracts: abstract_id: PUBMED:17043339 Is subclinical thyroid dysfunction in the elderly associated with depression or cognitive dysfunction? Background: Widespread use of automated sensitive assays for thyroid hormones and thyroid-stimulating hormone (TSH) has increased identification of mild thyroid dysfunction, especially in elderly patients. The clinical significance of this dysfunction, however, remains uncertain, and associations with cognitive impairment, depression, and anxiety are unconfirmed. Objective: To determine the association between mild thyroid dysfunction and cognition, depression, and anxiety in elderly persons. Design: Cross-sectional study. Associations were explored through mixed-model analyses. Setting: Primary care practices in central England. Patients: 5865 patients 65 years of age or older with no known thyroid disease who were recruited from primary care registers. Measurements: Serum TSH and free thyroxine (T4) were measured. Depression and anxiety were assessed by using the Hospital Anxiety and Depression Scale (HADS), and cognitive functioning was established by using the Middlesex Elderly Assessment of Mental State and the Folstein Mini-Mental State Examination. Comorbid conditions, medication use, and sociodemographic profiles were recorded. Results: 295 patients met the criteria for subclinical thyroid dysfunction (127 were hyperthyroid, and 168 were hypothyroid). After confounding variables were controlled for, statistically significant associations were seen between anxiety (HADS score) and TSH level (P = 0.013) and between cognition and both TSH and free T4 levels. The magnitude of these associations lacked clinical relevance: A 50-mIU/L increase in the TSH level was associated with a 1-point reduction in the HADS anxiety score, and a 1-point increase in the Mini-Mental State Examination score was associated with an increase of 50 mIU/L in the TSH level or 25 pmol/L in the free T4 level. Limitations: Because of the low participation rate, low prevalence of subclinical thyroid dysfunction, and other unidentified recruitment biases, participants may not be representative of the elderly population. Conclusions: After the confounding effects of comorbid conditions and use of medication were controlled for, subclinical thyroid dysfunction was not associated with depression, anxiety, or cognition. abstract_id: PUBMED:22385117 Subclinical hypothyroidism and cognitive dysfunction in the elderly. While overt hypothyroidism is associated with reversible dementia in the elderly, the relationship of subclinical hypothyroidism with cognition remains a controversial issue. Our aim was to investigate the correlation between subclinical hypothyroidism and cognition in the elderly, with particular reference to long term memory and selective attention. We selected 337 outpatients (177 men and 160 women), mean age 74.3 years, excluding the subjects with thyroid dysfunction and those treated with drugs influencing thyroid function. The score of Mini Mental State Examination (MMSE) was significantly lower in the group of patients with subclinical hypothyroidism than in euthyroid subjects (p<0.03). It was observed that patients with subclinical hypothyroidism had a probability about 2 times greater (RR = 2.028, p<0.05) of developing cognitive impairment. 
The Prose Memory Test (PMT) score was significantly lower in subjects with subclinical hypothyroidism (p<0.04). Considering the Matrix Test (MT) score, the performance was slightly reduced in subclinical hypothyroidism (NS). Furthermore, TSH was negatively correlated with MMSE (p<0.04), PMT (p<0.05) and MT score (NS). No correlation was found between FT4 and FT3 and MMSE, PMT and MT score. In the elderly, subclinical hypothyroidism is associated with cognitive impairment, and its impact on specific aspects of cognition (long term memory and selective attention) is less evident. abstract_id: PUBMED:34688413 Decision Making in Subclinical Thyroid Disease. Subclinical thyroid disease is frequently encountered in clinical practice. Although overt thyroid dysfunction has been associated with adverse clinical outcomes, uncertainty remains about the implications of subclinical thyroid disease. Available data suggest that subclinical hypothyroidism may be associated with increased risk of cardiovascular disease and death. Despite this finding, treatment with thyroid hormone has not been consistently demonstrated to reduce cardiovascular risk. Subclinical hyperthyroidism has been associated with increased risk of atrial fibrillation and osteoporosis, but the association with cardiovascular disease and death is uncertain. The decision to treat depends on the degree of thyroid-stimulating hormone suppression and underlying comorbidities. abstract_id: PUBMED:20518189 Subclinical thyroid disease in elderly subjects. Subclinical thyroid disease (STD) is defined as circulating concentrations of free T4 and free T3 within their respective reference ranges in the presence of abnormal circulating concentrations of TSH. STD is being diagnosed more frequently in clinical practice and is reported to be more prevalent in the elderly as compared to young or adult subjects. The clinical impact of subclinical thyroid dysfunction is still a matter of debate, although it has been associated with various negative clinical outcomes, such as increased cardiovascular risk, reduction in bone density, decline in cognitive function, and increased risk of overt thyroid dysfunction. The treatment of STD is controversial and there is no consensus on the TSH cutoff values which can be used as indicators for treatment, especially in elderly subjects. In the present review, we report data on the prevalence of STD and on the potential clinical consequences of these disorders. Data from the literature regarding the treatment of STD in relation to patient age are also reported. abstract_id: PUBMED:37940558 Association of Subclinical Thyroid Dysfunction with Cognitive Impairment in Rats: The Role of Autophagy. Background: We investigated the effect of subclinical hyperthyroidism and subclinical hypothyroidism on cognitive function in rats and the role of autophagy in this process. Methods: Forty Wistar rats were randomized into normal control (NC), hyperthyroidism (Hyper), hypothyroidism (Hypo), subclinical hyperthyroidism (sHyper), and subclinical hypothyroidism (sHypo) groups. Cognitive function (spatial learning and memory) was tested by the Morris water maze test. Hippocampal histopathology was analyzed by H&E staining, and expression levels of caspase-3 in hippocampal CA1 neurons were measured. In addition, immunoblot analysis was performed to detect hippocampal autophagy-related proteins.
Results: Escape latency from day 1 to day 4 was significantly longer in the Hypo, Hyper, and sHyper groups than in the NC group (P < 0.01). In addition, the number of rats crossing the virtual platform was significantly lower in the Hypo, Hyper, and sHyper groups than in the NC group (P < 0.01). Compared with the NC group, all four groups had significantly lower residence time in the target quadrant (P < 0.05). Beclin-1 and LC3-II protein expression in hippocampal tissues was significantly higher in the Hyper and sHyper groups than in the NC group (P < 0.01). Beclin-1 and LC3-II protein expression in hippocampal tissues did not significantly differ between the sHypo group and NC group (P > 0.05). Conclusions: Subclinical thyroid dysfunction in rats might lead to cognitive impairment. Subclinical hyperthyroidism might be associated with excessive activation of autophagy and hippocampal neuron damage and necrosis. abstract_id: PUBMED:35694927 Subclinical Hypothyroidism and Cognitive Impairment. Background: Although thyroid dysfunction has been considered a cause of reversible cognitive impairment, the association between subclinical hypothyroidism and cognitive impairment is controversial. Objective: We compared cognitive profiles of patients in a euthyroid or subclinical hypothyroid (sHypo) state, as well as their disease progression from mild cognitive impairment (MCI) to dementia within 3 years. Methods: We included 2,181 patients in a euthyroid and 284 in a sHypo state over 60 years of age who underwent an extensive cognitive assessment at Seoul National University Bundang Hospital but were not prescribed levothyroxine, methimazole, carbimazole, or propylthiouracil. After propensity score matching for age, sex, and education level, 1,118 patients in a euthyroid and 283 patients in a sHypo state were included. Attention, language, memory, visuoconstructive, and executive functions were compared between the groups using Student's t-test or the Mann-Whitney U test. To investigate the association between disease progression and subclinical hypothyroidism, a Cox regression analysis was performed in 379 patients with MCI. Patients with thyroid-stimulating hormone levels over 10 mIU/L were classified as "sHypo10", and hazard ratios for sHypo or sHypo10 were assessed. Results: There was no difference in attention, language, memory, visuoconstructive, and executive functions between the patient groups. Progression from MCI to dementia was not associated with sHypo or sHypo10. Conclusion: There was no difference in cognitive profile between euthyroid and sHypo patients, and no association between subclinical hypothyroidism and disease progression. This might offer a clue for strategies regarding hormone therapy in subclinical hypothyroidism with cognitive impairment.
Methods: In the present study focusing on a population of elderly Chinese individuals ≥ 50 years of age, 77 cognitively normal controls, 64 patients with MCI, and 154 patients diagnosed with AD underwent assessment of thyroid status using thyroid stimulating hormone (TSH), free triiodothyronine (fT3) and free thyroxine (fT4) levels as variables. Cognitive function was evaluated with the aid of comprehensive neuropsychological tests, such as the Mini-Mental State Examination (MMSE) and Memory and Executive Screening (MES). Results: Overall, 88.1 % of the subjects displayed normal thyroid function, 4.7 % were diagnosed with clinical hypothyroidism, 3.1 % with subclinical hypothyroidism, and 4.1 % with subclinical hyperthyroidism. After adjusting for covariates (age, sex, education years and body mass index), no association was evident between mild cognitive impairment or AD and thyroid dysfunction. However, lower serum TSH was correlated with risk of AD (odds ratio [OR]: 2.78, 95 % confidence interval [95% CI]: 1.11-6.99). Conclusion: Neither hypothyroidism nor subclinical hyperthyroidism was associated with AD and MCI in this population-based elderly Chinese cohort. Our findings need to be confirmed in a longitudinal study. abstract_id: PUBMED:23554996 Subclinical thyroid dysfunction and cognitive decline in old age. Background: Subclinical thyroid dysfunction has been implicated as a risk factor for cognitive decline in old age, but results are inconsistent. We investigated the association between subclinical thyroid dysfunction and cognitive decline in the PROspective Study of Pravastatin in the Elderly at Risk (PROSPER). Methods: Prospective longitudinal study of men and women aged 70-82 years with pre-existing vascular disease or more than one risk factor to develop this condition (N = 5,154). Participants taking antithyroid medications, thyroid hormone supplementation and/or amiodarone were excluded. Thyroid function was measured at baseline: subclinical hyper- and hypothyroidism were defined as thyroid stimulating hormones (TSH) <0.45 mU/L or >4.50 mU/L respectively, with normal levels of free thyroxine (FT4). Cognitive performance was tested at baseline and at four subsequent time points during a mean follow-up of 3 years, using five neuropsychological performance tests. Results: Subclinical hyperthyroidism and hypothyroidism were found in 65 and 161 participants, respectively. We found no consistent association of subclinical hyper- or hypothyroidism with altered cognitive performance compared to euthyroid participants on the individual cognitive tests. Similarly, there was no association with rate of cognitive decline during follow-up. Conclusion: We found no consistent evidence that subclinical hyper- or hypothyroidism contribute to cognitive impairment or decline in old age. Although our data are not in support of treatment of subclinical thyroid dysfunction to prevent cognitive dysfunction in later life, only large randomized controlled trials can provide definitive evidence. abstract_id: PUBMED:25896972 Thyroid hormone: Influences on mood and cognition in adults. The association of thyroid dysfunction with alterations in mood and cognition has been recognised since some of the earliest descriptions of thyroid disease. Over the years, researchers have aimed to further define these effects throughout the spectrum of thyroid disorders, to better understand the underlying condition and refine indications for treatment. 
More recently, attention has turned towards examining the impact of differences in thyroid hormones within the normal reference range, particularly in older adults, providing new insights into the association of thyroid hormone with cognitive decline. This review summarises the evidence assessing the influence of thyroid hormone on mood and cognition in overt and subclinical hypothyroidism, within the reference range, and in subclinical and overt hyperthyroidism. Treatment of overt thyroid dysfunction largely resolves associated disturbances in mood and cognitive dysfunction; however, in the setting of overt hypothyroidism subtle detrimental effects on cognition may not be fully reversed. Subclinical hyperthyroidism and higher free thyroxine (FT4) within the normal range have been associated with poorer cognitive outcomes. Future research, including randomised controlled trials, is required to confirm causality and guide the assessment of benefits vs risks of intervention in the increasing population of older adults with subclinical thyroid disease. abstract_id: PUBMED:30606015 Thyroid disease in the elderly. Most thyroid diseases are more prevalent in women than in men, at a ratio of about 8:1, and hypothyroidism in particular becomes more common with age. Unrecognized thyroid dysfunction leads to increased cardiovascular risk, bone fractures, cognitive impairment, depression, and mortality. The symptoms of thyroid diseases can be nonspecific or common in seniors with ageing complaints. When interpreting thyroid function tests, the physiological changes in the secretion and metabolism of thyrotropin (TSH) and thyroid hormones must be considered, as well as the influence of comorbidities, certain drugs, and the individual "set point" of the pituitary gland. According to many observations, serum TSH and thyroxine (T4) concentrations depend on age, comorbidities, and medical treatment - together these sometimes complicate the diagnosis of thyroid dysfunction in the older population. The observational data may suggest a diminished pituitary sensitivity to T4 in the ageing population. According to several studies, serum TSH concentration is probably age-dependent and the upper limit of TSH could be 5.28-5.9 mIU/l in those who are > 70 years old. Therapy of thyroid dysfunction is different in elderly persons than in young people; hypothyroidism should be treated with caution, because high doses of thyroxine can lead to cardiac arrhythmias and increased bone turnover. Hyperthyroidism can be treated either with surgery or, preferably, with radioiodine. The diagnosis of subclinical hypothyroidism, in particular, should be made with caution after considering various important circumstances. Nevertheless, there are certain conditions in which subclinical hypothyroidism must be treated. Key words: ageing - hyperthyroidism - hypothyroidism - thyroid diseases. Answer: The association between subclinical thyroid dysfunction in the elderly and depression or cognitive dysfunction is not definitively established, with studies showing varying results. One cross-sectional study found that after controlling for confounding variables, subclinical thyroid dysfunction was not associated with depression, anxiety, or cognition in elderly persons (PUBMED:17043339).
However, another study suggested that subclinical hypothyroidism in the elderly is associated with cognitive impairment, with a higher probability of developing cognitive impairment and lower scores on cognitive tests such as the Mini Mental State Examination (MMSE) and Prose Memory Test (PMT) (PUBMED:22385117). Further research in rats indicated that subclinical thyroid dysfunction might lead to cognitive impairment, with subclinical hyperthyroidism potentially associated with excessive activation of autophagy and hippocampal neuron damage (PUBMED:37940558). Conversely, a study comparing cognitive profiles of euthyroid and subclinical hypothyroid patients found no difference in cognitive functions between the groups, nor an association between subclinical hypothyroidism and progression from mild cognitive impairment to dementia (PUBMED:35694927). Another study in a Chinese elderly population found no association between mild cognitive impairment or Alzheimer's disease and thyroid dysfunction, although lower serum TSH was correlated with a risk of Alzheimer's disease (PUBMED:26897535). Similarly, the PROspective Study of Pravastatin in the Elderly at Risk (PROSPER) found no consistent evidence that subclinical hyper- or hypothyroidism contributes to cognitive impairment or decline in old age (PUBMED:23554996). In summary, while some studies suggest a potential link between subclinical thyroid dysfunction and cognitive impairment in the elderly, the evidence is inconsistent, and no clear association with depression has been established. Further research, including large randomized controlled trials, is needed to provide definitive evidence on the impact of subclinical thyroid dysfunction on cognitive and mood disorders in the elderly population.
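As an illustrative worked check of why the statistically significant associations in PUBMED:17043339 were judged to lack clinical relevance (the 0-21 range of the HADS anxiety subscale is standard background knowledge; all other figures come from the abstract): the reported effect corresponds to a slope of about 1 HADS point per 50 mIU/L of TSH, i.e. 1/50 = 0.02 points per mIU/L, so even a markedly elevated subclinical TSH of 10 mIU/L would translate into an expected difference of only roughly 10 × 0.02 = 0.2 points on a 21-point scale, far below any clinically meaningful change.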
Instruction: Is waist circumference a useful measure in predicting health outcomes in the elderly? Abstracts: abstract_id: PUBMED:12355330 Is waist circumference a useful measure in predicting health outcomes in the elderly? Objective: To examine the effect of age on the relationship between body mass index (BMI) and waist circumference (WC), and the usefulness of BMI, WC and waist-hip ratio (WHR) in predicting mortality and cardiovascular risk in the elderly population. Design: Longitudinal observational study of 36 months duration. Subjects And Method: A stratified random sample of 2,032 Chinese subjects (990 male, 1,033 female) mean age (s.d.) 80.1 (7.5), interviewed and examined at baseline and after 36 months. Deaths and presence of diabetes mellitus and hypertension were documented. A younger data set of 1,010 subjects (500 male, 510 female), mean age (s.d.) 45.5 (11.6), was used for comparison of the BMI-WC relationship between younger and older subjects. In predicting outcomes using different values of BMI, WC and WHR, receiver operating characteristic curve analysis was used to derive cut-off values with optimal sensitivity and specificity, and the likelihood ratios for mortality, diabetes and hypertension for different anthropometric values were plotted. Results: The waist circumference values corresponding to BMI values of 25 and 30 kg/m(2) were higher in elderly (92 and 103 cm for men; 88 and 99 cm for women) compared with younger subjects (85 and 97 cm for men; 78 and 88 cm for women). BMI and WC are inversely associated with mortality, in both men and women, positively associated with diabetes in men but not in women. WC was positively associated with hypertension in men and women. WHR was not associated with any outcome measures. The anthropometric measurement at the point of intersection of the likelihood curves for mortality and diabetes may be considered the optimum value, being BMI=21 kg/m(2) for men and 25 kg/m(2) for women, WC between 80 and 85 cm, and WHR 0.88-0.90. Conclusion: Waist measurement values for predicting health outcomes in elderly people aged 70 y and over are different compared with younger subjects, and have similar predictive accuracy compared with body mass index. Waist-hip ratio is not a useful predictor. abstract_id: PUBMED:24788716 Waist circumference and health-related quality of life by sex in the Korean elderly. Objective: This study aims to explore the impact of differential degrees of obesity on health-related quality of life (HRQL) by sex in the Korean elderly. Method: We analyzed data on those aged between 65 and 74 years from the Korean National Health and Nutrition Examination Survey (KNHANES) using multiple regression analysis. Results: Compared with the lowest waist circumference (WC) quintile, the women in the fourth and fifth quintiles demonstrated significantly lower unadjusted HRQL but not the men in any quintiles. Whereas higher WC quintiles impaired mobility, usual activities, and pain/discomfort in women, the fourth WC quintile improved usual activities in men. After adjustment, only the women of the highest WC quintile reported impaired mobility. Discussion: Extreme obesity worsens mobility, and comorbidity in combination with obesity worsens HRQL in elderly women. Monitoring and controlling comorbidity and maintaining adequate WC decreases the risks of lowered HRQL in Korean elderly women. abstract_id: PUBMED:27149905 Waist circumference is correlated with poorer cognition in elderly type 2 diabetes women. 
Introduction: Waist circumference is associated with type 2 diabetes (T2D) and cognition, yet the relationship between waist circumference and cognition in individuals with T2D is not well understood. Methods: We studied the relationship of waist circumference with five cognitive outcomes (executive functioning, language/semantic categorization, attention/working memory, episodic memory, and an overall cognition measure) in 845 cognitively normal elderly with type 2 diabetes (T2D). Results: In women, waist circumference was correlated with significantly lower language and/or semantic categorization performance (P < .0001), executive functioning (P = .026), and overall cognition (P = .003) after controlling for age, education, BMI, and cardiovascular, diabetes-related, APOE ε4, and inflammatory potential confounders. Attention/working memory (P = .532) and episodic memory (P = .144) were not associated with waist circumference. These correlations were not found in men. Discussion: These results suggest that central adiposity in elderly women with T2D may increase their risk for dementia. abstract_id: PUBMED:34028707 Relationship of domain-specific quality of life with body mass index and waist circumference in a Korean elderly population. Background: Obesity affects health-related quality of life (QoL); however, their relationship among elderly Asians is not well known. Aims: Relationship of domain-specific QoL with body mass index (BMI) and waist circumference and the sex differences were investigated using a nationally representative sample of elderly Korean population. Methods: In the Korea National Health and Nutrition Examination Survey phase VII (2016-2018), 3659 adults aged ≥ 65 years (1551 men and 2108 women) participated. BMI and waist circumference were classified according to Asian- and Korean-specific criteria. QoL was evaluated using the European Quality of Life Scale-Five Dimensions (EQ-5D). Multivariable logistic regressions were used to examine the relationship of QoL with BMI and waist circumference. Results: Men with BMI < 18.5 kg/m2 and ≥ 25.0 kg/m2 had a significant association with poor QoL in mobility and self-care, but no relationship was found with the other domains. Women with BMI ≥ 25.0 kg/m2 had poor QoL in mobility and self-care, and those with BMI ≥ 30.0 kg/m2 had poor QoL in usual activities and pain/discomfort. There was no significant association with anxiety/depression. Both elderly men and women with abdominal obesity had a significant association with poor QoL in mobility, self-care, usual activities, and pain/discomfort; however, there was no significant relationship with waist circumference and anxiety/depression. Conclusions: The association between QoL and BMI was different according to sex and the domains of QoL. Domain-specific QoL should be considered in the management of body weight of the elderly. abstract_id: PUBMED:28210067 The effects of Tai Chi on waist circumference and blood pressure in the elderly. [Purpose] The purpose of this study was to investigate the effects of Tai Chi on waist circumference and blood pressure in the elderly. The present study used a nonequivalent control group pretest-posttest design. [Subjects and Methods] Sixty-eight elderly individuals residing in J city were divided into 2 groups: 34 in the experimental group, who received Tai Chi training for 6 weeks, and 34 in the control group, who did not receive Tai Chi training. 
Simplified Yang style 24-form Tai Chi was used as the intervention, which was conducted for 60 minutes per session, 5 sessions per week, for a total of 6 weeks. In each session, subjects in the experimental group conducted 10 minutes of warm-up exercises, 45 minutes of Tai Chi, and 5 minutes of cool-down exercises. Waist circumference and blood pressure were measured before and after the 6-week intervention. [Results] Waist circumference and blood pressure decreased significantly after the 6-week intervention in the experimental group compared with the control group. [Conclusion] Tai Chi can be used as an effective intervention to improve waist circumference and blood pressure in the elderly. abstract_id: PUBMED:31336437 Optimal waist circumference cutoff values for predicting metabolic syndrome among older adults in Ecuador. Aims: To determine optimal waist circumference cutoff values for predicting metabolic syndrome (MetS) among adults aged 60 years and older in Ecuador. Material And Methods: The National Survey of Health, Wellbeing, and Aging was used to describe the prevalence of MetS according to standard definitions. Receiver operating characteristic (ROC) curve analyses and the Youden index J (YI) were used to examine optimal waist circumference cutoff values for predicting MetS. Moreover, the prevalence of MetS according to country-specific waist circumference cutoff values was compared with those using standard definitions. Results: Among 2306 participants with a mean age of 70.6 years, the optimal waist circumference cutoff value for predicting MetS was 90.7 cm in women with a sensitivity of 66.1%, specificity of 65.6% and YI of 0.31. In men, a waist circumference of 91.2 cm with a sensitivity of 73.2%, specificity of 62.8%, and YI of 0.36 was the optimal cutoff point for predicting MetS. In general, applying country-specific cutoff values decreased the prevalence of MetS among older Ecuadorian women. In men, except with the ATP III definition, similar MetS prevalence rates were seen whether the country-specific or standard waist circumference cutoff values were applied. Conclusions: The optimal waist circumference cutoff values for predicting metabolic syndrome among older Ecuadorian adults were 90.7 cm for women and 91.2 cm for men. Moreover, current standard definitions of abdominal obesity may overestimate the prevalence of MetS, particularly in Latin American women. abstract_id: PUBMED:25548090 Waist circumference cutoff points for central obesity in the Korean elderly population. The aim is to determine the appropriate cutoff values of waist circumference (WC) for an increased risk of the metabolic syndrome in the Korean elderly population. We analyzed the WC cutoff values of four groups divided according to sex and age with a total of 2,224 elderly participants aged 65 years old and above from the Fourth Korean National Health and Nutrition Examination Survey using the receiver operating characteristic curve and multiple logistic regression. The WC cutoff values associated with an increased risk of metabolic syndrome were 89.6 cm for men and 90.5 cm for women for those who were 65 to 74 years old, and 89.9 cm for men and 87.9 cm for women for those who were 75 years old or older. WC cutoff points for estimating metabolic risk are similar in elderly men and women. Age-specific optimal WC cutoff points should be considered especially for elderly women in screening for metabolic syndrome.
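As an illustrative check of the Youden index values reported in PUBMED:31336437 (the definition J = sensitivity + specificity − 1 is the standard formula; the numbers are taken from the abstract): for men, J = 0.732 + 0.628 − 1 = 0.36, matching the reported YI of 0.36, and for women, J = 0.661 + 0.656 − 1 ≈ 0.32, consistent with the reported YI of 0.31 allowing for rounding of the underlying sensitivity and specificity. The optimal cutoff is the waist circumference value that maximizes J along the ROC curve.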
abstract_id: PUBMED:32728557 Health risks associated with high waist circumference: A systematic review. Obesity is a health disorder characterized by an increase in body weight, measured by waist circumference. High waist circumference is linked to potential development of non-communicable diseases. A systematic review was used to explore the health risks of high waist circumference through Google Scholar, Science Direct, Pubmed, and Proquest. Findings show that high waist circumference increased the risks of developing hypertension, type 2 diabetes mellitus, hypercholesterolemia, joint pain, low back pain, and hyperuricemia. It is recommended that governments increase their role in raising public awareness to maintain a healthy lifestyle. abstract_id: PUBMED:17879255 Body mass index, waist circumference, waist-hip ratio and depressive symptoms in Chinese elderly: a population-based study. Background: Studies that investigated the relationship between obesity and depressive symptoms in the elderly have generated conflicting findings, partly because of the use of body mass index (BMI) alone to measure obesity in the elderly. The use of BMI fails to account for varying proportions of muscle, fat and bone, and few studies have used other measures of central obesity, such as waist-hip ratio (WHR) and waist circumference (WC). Objectives: We examined whether BMI, WHR and WC were each consistently associated with depressive symptoms in the elderly. Methods: Analysis of cross-sectional data of 2604 community dwelling Chinese elderly aged 55 and above, including socio-emotional characteristics, self-rated health and functional status, anthropometric measurements and Geriatric Depression Scale (15 items, GDS-15). Results: There was a negative trend in the prevalence of depressive symptoms (GDS ≥5) across increasing BMI categories: 16.9% in low BMI, 14.2% in normal weight, 12.1% in moderate to high BMI. The associations for moderate to high BMI (OR, 0.77; p = 0.04) relative to normal BMI were statistically significant after controlling for confounding variables. However, no consistent trends in the prevalence of depressive symptoms and ORs were observed for increasing WHR and WC categories. Conclusion: Our results suggest that waist-hip and circumference measures of central obesity did not support an inverse relationship of obesity and depressive symptoms. An inverse relationship of BMI with depressive symptoms may indicate greater physiologic and functional reserve from greater muscle mass that protects against depressive symptoms. abstract_id: PUBMED:30609857 Waist Circumference and All-Cause Mortality among Older Adults in Rural Indonesia. Waist circumference, a measure of abdominal obesity, is associated with all-cause mortality in the general adult population. However, the link between abdominal obesity and all-cause mortality in studies of older adults is unclear. This study aims to determine the association between waist circumference and all-cause mortality in older adults in Indonesia. The association between waist circumference and all-cause mortality was examined in 10,997 men and women aged 50 years and older, in the World Health Organization (WHO) and International Network of field sites for continuous Demographic Evaluation of Populations and their Health in developing countries (INDEPTH) collaboration Study on global AGEing and adult health (SAGE) in Purworejo District Central Java, Indonesia during 2007-2010.
Multivariate Cox regression analysis with restricted cubic splines was used to assess the non-linear association between waist circumference and all-cause mortality. During the 3-year follow-up, a total of 511 men and 470 women died. The hazard ratio plot shows a U-shaped relationship between waist circumference and all-cause mortality among rich women, though the result was significant only for women at the lower end of the waist circumference distribution (p < 0.05). Poor men with a low waist circumference (5th percentile) have a two times higher mortality risk (HR = 2.1; 95% CI = 1.3, 3.3) relative to those with a waist circumference of 90 cm. Poor women with a low waist circumference (25th percentile) have a 1.4 times higher mortality risk (HR = 1.4; 95% CI = 1.1, 1.8) relative to those with a waist circumference of 80 cm. This study shows a significant association between a low waist circumference and mortality, particularly among poor men and women. Though the association between large waist circumference and mortality was not significant, we observed a trend of higher mortality risk particularly among rich women with a large waist circumference. Public health interventions should include efforts to improve nutritional status among older people and to promote healthy lifestyle behaviours, including healthy food and an active lifestyle. Answer: Yes, waist circumference (WC) is a useful measure in predicting health outcomes in the elderly. Studies have shown that WC values corresponding to body mass index (BMI) values of 25 and 30 kg/m^2 are higher in the elderly compared to younger subjects, indicating that WC is an important measure for this age group. In the elderly population, WC is inversely associated with mortality in both men and women, positively associated with diabetes in men (but not in women), and positively associated with hypertension in both men and women. Waist-hip ratio (WHR), however, was not associated with any outcome measures (PUBMED:12355330). Furthermore, extreme obesity, as indicated by higher WC, has been found to worsen mobility and comorbidity, which in turn worsens health-related quality of life (HRQL) in elderly women. Maintaining an adequate WC can decrease the risks of lowered HRQL in Korean elderly women (PUBMED:24788716). Additionally, in elderly women with type 2 diabetes, a higher WC is correlated with significantly lower performance in language/semantic categorization, executive functioning, and overall cognition, suggesting that central adiposity may increase their risk for dementia (PUBMED:27149905). The relationship between obesity, as measured by WC and BMI, and domain-specific quality of life (QoL) has also been investigated. It was found that both elderly men and women with abdominal obesity had a significant association with poor QoL in mobility, self-care, usual activities, and pain/discomfort (PUBMED:34028707). Moreover, optimal WC cutoff values for predicting metabolic syndrome among older adults have been determined, suggesting that current standard definitions of abdominal obesity may overestimate the prevalence of metabolic syndrome, particularly in Latin American women (PUBMED:31336437). In the Korean elderly population, WC cutoff points for estimating metabolic risk are similar in elderly men and women, and age-specific optimal WC cutoff points should be considered, especially for elderly women, in screening for metabolic syndrome (PUBMED:25548090).
High WC has been associated with increased risks of developing various health conditions such as hypertension, type 2 diabetes mellitus, hypercholesterolemia, joint pain, low back pain, and hyperuricemia (PUBMED:32728557).
Instruction: Is there a transient rise in sub-sarcolemmal Na and activation of Na/K pump current following activation of I(Na) in ventricular myocardium? Abstracts: abstract_id: PUBMED:12650880 Is there a transient rise in sub-sarcolemmal Na and activation of Na/K pump current following activation of I(Na) in ventricular myocardium? Objective: The primary aim of this study was to investigate whether activation of Na influx via voltage-gated Na channels can elevate sub-sarcolemmal ('fuzzy-space') [Na] and transiently activate Na/K pump current (I(p)). Methods And Results: Initially, Na/K pump activity was characterised in whole-cell voltage-clamped single guinea-pig ventricular myocytes. I(p) was activated by intracellular Na with a K(m) of 15.5 mM and a Hill coefficient of 1.7. Extracellular K activated I(p) with a K(m) of 1.6 mM. In these experiments, a finite ouabain-sensitive I(p) was measured when the pipette [Na] was zero. This suggests that there is an accumulation of Na in a sub-sarcolemmal space that is not in equilibrium with the bulk cytosol (which is assumed to be efficiently dialysed by the low-resistance patch-pipettes used). Such a sub-sarcolemmal Na gradient was observed in separate experiments in intact rabbit papillary muscles using electron probe X-ray microanalysis. In these studies, a fuzzy-space of limited Na diffusion was observed 100-200 nm below the sarcolemmal membrane. This sub-sarcolemmal Na gradient was similar whether muscles were frozen at peak-systole or end-diastole suggesting that the fuzzy-space Na does not change over the course of the contractile cycle. This was further investigated in isolated guinea pig myocytes where evidence for a transient activation of I(p) was sought immediately after the activation of voltage-gated Na channels. A single clamp step from -80 to 0 mV activated Na influx but, in the 10-2000 ms immediately following the initial Na influx no evidence for a transient activation of I(p) was observed. Similarly, no activation of I(p) could be detected immediately following a train of 20 rapid (5-Hz) pulses designed to maximise Na influx. Conclusions: These studies provide evidence for the existence of a maintained sub-sarcolemmal elevation of [Na] in ventricular myocardium; however, this fuzzy-space [Na] did not change immediately after the activation of Na influx via voltage-gated Na channels or throughout the contractile cycle. abstract_id: PUBMED:33113543 Monensin-Induced Increase in Intracellular Na+ Induces Changes in Na+ and Ca2+ Currents and Regulates Na+-K+ and Na+-Ca2+ Transport in Cardiomyocytes. Background/aims: Monensin, an Na ionophore, increases intracellular Na ([Na]i). Alteration of [Na]i influences ion transport through the sarcolemmal membrane. So far, the effects of monensin on ventricular myocytes have not been examined in detail. The main objective of this study was to elucidate the mechanism via which monensin-evoked increases in [Na]i affect the membrane potential and currents in ventricular myocytes of guinea pigs. Methods: Membrane potentials and currents were measured using the whole-cell patch-clamp technique in single myocytes. The concentration of intracellular Ca ([Ca]i) was evaluated by measuring fluorescence intensity of Fluo-4. Results: Monensin (10-5M) shortened the action potential duration (APD) and reduced the amplitude of the plateau phase. In addition, monensin decreased the sodium current (INa) and shifted the inactivation curve to the hyperpolarized direction. 
Moreover, it decreased the L-type calcium current (ICa). However, this effect was attenuated by increasing the buffering capacity of [Ca]i. The Na-Ca exchange current (INa-Ca) was activated particularly in the reverse mode. Na-K pump current (INa-K) was also activated. Notably, the inward rectifying K current (IK1) was not affected, and the change in the delayed outward K current (IK) was not evident. Conclusion: These results suggest that the monensin-induced shortened APD and reduced amplitude of the plateau phase are primarily due to the decrease in the ICa, the activation of the reverse mode of INa-Ca, and the increased INa-K, and second due to the decreased INa. The IK and the IK1 may not be associated with the abovementioned changes induced by monensin. The elevation of [Na]i can exert multiple influences on electrophysiological phenomena in cardiac myocytes. abstract_id: PUBMED:10718738 Activation of the cAMP-protein kinase A pathway facilitates Na+ translocation by the Na+-K+ pump in guinea-pig ventricular myocytes. 1. The effects of the adenylyl cyclase activator forskolin on steady-state and transient currents generated by the Na+-K+ pump were studied in guinea-pig ventricular myocytes by means of whole-cell voltage clamp at 30 C. 2. In external solution containing 144 mM Na+ (Na+o) and 10 mM K+ (K+o), steady-state Na+-K+ pump current (Ip) activated by 5 mM pipette Na+ (Na+pip) at -20 mV was reversibly augmented by forskolin (4 microM) to 133 +/- 4 % of the control current (n = 15). The forskolin analogue 1, 9-dideoxyforskolin (10 microM), which does not activate adenylyl cyclases, did not increase Ip (n = 2). Application of the protein kinase A (PKA) inhibitor H-89 (10 microM) in the continued presence of forskolin reversed the forskolin-induced elevation of Ip (n = 3). 3. The forskolin effect on Ip persisted in the presence of 50 mM Na+pip which ensured that the internal Na+-binding sites of the Na+-K+ pump were nearly saturated. Under these conditions, the drug increased Ip to 142 +/- 3 % of the control Ip when the pipette free Ca2+ concentration ([Ca2+]pip) was 0.013 nM (n = 5) and to 138 +/- 4 % of the control Ip when free [Ca2+]pip was 15 nM (n = 9). 4. In Na+-free external solution, Ip activated by 50 mM Na+pip and 1.5 mM K+o was likewise increased by forskolin but to a lesser extent than in Na+-containing medium (116 +/- 3 % of control, n = 10). 5. In order to investigate exclusively partial reactions in the Na+ limb of the pump cycle, transient pump currents under conditions of electroneutral Na+-Na+ exchange were studied. Transient pump currents elicited by voltage jumps displayed an initial peak and then decayed monoexponentially. Moved charge (Q) and the rate constant of current decay varied with membrane potential (V). The Q-V relationship followed a Boltzmann distribution characterized by the midpoint voltage (V0.5) and the maximum amount of movable charge (DeltaQmax). Forskolin (2-10 microM) shifted V0.5 to more negative values while DeltaQmax was not affected (n = 11). The effects of forskolin on transient pump currents were mimicked by 8-bromo-cAMP (500 microM; n = 2) and abolished by a peptide inhibitor of PKA (PKI, 10 microM; n = 5). 6. We conclude that activation of the cAMP-PKA pathway in guinea-pig ventricular myocytes increases Na+-K+ pump current at least in part by modulating partial reactions in the Na+ limb of the pump cycle. 
Under physiological conditions, the observed stimulation of the cardiac Na+-K+ pump may serve to shorten the action potential duration and to counteract the increased passive sarcolemmal Na+ and K+ fluxes during sympathetic stimulation of the heart. abstract_id: PUBMED:2834846 M-cholinergic regulation of Na+,K+-ATPase activity of vesicular preparations of sarcolemma from the myocardium and intestinal smooth muscles. Acetylcholine (10(-7)-10(-2) M) enhanced the Na+, K+-ATPase activity in sarcolemmal vesicles from myocardium and intestinal smooth muscle. The stimulation of the enzyme from canine ventricles reached 150% and was less pronounced (10-20%) in the case of frog myocardium and canine ileal muscles. The activating action of the neurotransmitter was simulated by gramicidin D (1-5 microM), but not by valinomycin (1-5 microM), and was blocked both by ouabain (200 microM) and by atropine (0.1 microM), a muscarinic cholinergic antagonist. The activating action disappeared after treatment of membranes with alamethicin, a pore-producing antibiotic (0.8 mg/mg of protein). It is suggested that an increase in the Na+, K+-ATPase activity caused by acetylcholine is induced by Na+ which permeates the sarcolemmal vesicles through the ionic channel coupled with the muscarinic acetylcholine receptor. abstract_id: PUBMED:25678464 β3 subunit of Na,K ATPase regulates T cell activation with no involvement of Na,K ATPase activity. Na,K ATPase plays an important role in the regulation of Na(+) and K(+) ions that are required for normal resting membrane potential and various cellular functions. Na,K ATPase is composed of two subunits, α and β subunits. Engagement of the β subunit by an agonistic monoclonal antibody (mAb) P-3E10 inhibited T cell activation and induced G0/G1 cell cycle arrest. In addition, mAb P-3E10 decreased CD25 expression. The mAb P-3E10, however, did not inhibit the proliferation of cell lines and the phagocytosis activity of phagocytes, and did not interfere with the Na,K ATPase activity. These results indicate that mAb P-3E10 reacts with the β subunit and, as a consequence, brings about regulation of T cell activation without disturbing the Na,K pump activity. By sequential immunoprecipitation, we demonstrated the expression of the β3 subunit free form apart from the α subunit. In this study, we propose that the β3 subunits of Na,K ATPase are expressed separately from the α subunit, and play a role in regulation of the immune response. abstract_id: PUBMED:10087140 Activation of PKC increases Na+-K+ pump current in ventricular myocytes from guinea pig heart. We have previously shown activation of alpha1-adrenergic receptors increases Na+-K+ pump current (Ip) in guinea pig ventricular myocytes, and the increase is eliminated by blockers of protein kinase C (PKC). In this study we examined the effect of activators of PKC on Ip. Phorbol 12-myristate 13-acetate (PMA), a PKC activator, increased Ip at each test potential without shifting its voltage dependence. The concentration required for a half-maximal response (K0.5) was 6 microM at 15 nM cytosolic [Ca2+] ([Ca2+]i) and 13 nM at 314 nM [Ca2+]i. The maximal increase at either [Ca2+]i was about 30%. Another activator of PKC, 1,2-dioctanoyl-sn-glycerol (diC8), increased Ip similarly. The effect of PMA on Ip was eliminated by the PKC inhibitor staurosporine, but not by the peptide PKI, an inhibitor of protein kinase A (PKA).
PMA and alpha1-adrenergic agonist effects both were sensitive to [Ca2+]i, blocked by PKC inhibitors, unaffected by PKA inhibition, and increased Ip uniformly at all voltages. However, they differed in that alpha1-activation caused a maximum increase of 15% vs 30% via PMA, and alpha1-effects were less sensitive to [Ca2+]i than PMA effects. These results demonstrate that activation of PKC causes an increase in Ip in guinea pig ventricular myocytes. Moreover, they suggest that the coupling of alpha1-adrenergic activation to Ip is entirely through PKC, however alpha1-activation may be coupled to a specific population of PKC whereas PMA is a more global agonist. abstract_id: PUBMED:9458726 Na+ influx and Na(+)-K+ pump activation during short-term exposure of cardiac myocytes to aldosterone. To examine the effect of aldosterone on sarcolemmal Na+ transport, we measured ouabain-sensitive electrogenic Na(+)-K+ pump current (Ip) in voltage-clamped ventricular myocytes and intracellular Na+ activity (alpha iNa) in right ventricular papillary muscles. Aldosterone (10 nM) induced an increase in both Ip and the rate of rise of alpha iNa during Na(+)-K+ pump blockade with the fast-acting cardiac steroid dihydroouabain. The aldosterone-induced increase in Ip and rate of rise of alpha iNa was eliminated by bumetanide, suggesting that aldosterone activates Na+ influx through the Na(+)-K(+)-2Cl- cotransporter. To obtain independent support for this, the Na+, K+, and Cl- concentrations in the superfusate and solution of pipettes used to voltage clamp myocytes were set at levels designed to abolish the inward electrochemical driving force for the Na(+)-K(+)-2Cl- cotransporter. This eliminated the aldosterone-induced increase in Ip. We conclude that in vitro exposure of cardiac myocytes to aldosterone activates the Na(+)-K(+)-2Cl- cotransporter to enhance Na+ influx and stimulate the Na(+)-K+ pump. abstract_id: PUBMED:7551315 Na-Ca exchange tail current indicates voltage dependence of the Cai transient in rabbit ventricular myocytes. Introduction: In mammalian cardiac myocytes, a rise of intracellular calcium (Cai) is well known to activate Ca extrusion via forward Na-Ca exchange, which generates an inward membrane current. This can be observed as an inward "tail" current (INa-Ca) when the membrane is repolarized after a depolarization-activated rise of Cai. If, during a voltage step, the membrane is repolarized at the time of the peak of the Cai transient, the size of the INa-Ca tail might be expected to reflect the magnitude of the Cai transient. Therefore, it might be possible to estimate the amplitude and voltage dependence of the Cai transient without, for instance, using fluorescent indicators that can interfere with Cai regulation. The first aim of this study was to use INa-Ca tails to investigate the voltage dependence of the Cai transient in whole cell patch clamped rabbit ventricular myocytes dialyzed with a "normal" level of internal Na. The second aim was to investigate how the voltage dependence of the INa-Ca tails varied with changes to the dialyzing Na concentration. The third aim was to test the correlation of voltage dependence of INa-Ca tails with the voltage dependence of the Cai transient obtained using a fluorescent Ca indicator. Methods And Results: Experiments were performed at 35 degrees to 37 degrees C using whole cell patch clamp, and the holding potential was set at -40 mV. Depolarization elicited a Cai transient that peaked in 40 to 50 msec. 
We reasoned, therefore, that membrane repolarization after 50 msec would cause the raised level of Cai to activate an inward current on forward Na-Ca exchange. The amplitude of INa-Ca measured shortly (10 msec) after repolarization should reflect the peak amplitude of the Cai transient elicited by the depolarization. In cells dialyzed with 10 mM Na-containing solution and depolarized for 50 msec to differing test potentials, the INa-Ca tail on repolarization increased progressively after pulses to between -40 and +20 mV. The INa-Ca tail was maximal after a +20-mV pulse and showed no decline after depolarizations to more positive potentials, up to +100 mV (P > 0.1; n = 8). This implies that the Cai transient has a similar amplitude for depolarizing pulses between +20 and +100 mV. When Na-free solution dialyzed the cell, the voltage dependence of the INa-Ca tail became bell-shaped, with a maximum at +20 mV (n = 4). Voltage dependence of the INa-Ca tail was little affected by raising dialyzing Na from 10 to 20 mM (n = 4); but the amplitude of the INa-Ca tail increased. Inhibition of the Na-K pump with strophanthidin in cells dialyzed with 10 mM Na had qualitatively similar effects to increasing dialyzing Na. In Fura-2 loaded cells dialyzed with 10 mM Na, the Cai transient exhibited a similar voltage dependence to the INa-Ca tail (n = 6). Conclusion: The results of this study suggest that in cells dialyzed with 10 mM Na, the voltage dependence of the Cai transient is different from the L-type Ca current, since this current declines at potentials > +20 mV. The results obtained using Fura-2 suggest that the INa-Ca tail current measurement tracked the Cai sufficiently well to reflect the voltage dependence of the Cai transient. The data also confirm that the voltage dependence of the Cai transient in rabbit cells can be modulated by altering dialyzing Na concentration. abstract_id: PUBMED:3026888 Heart sarcolemmal (Na+ + K+)-ATPase has an essential amino group in the potassium binding site on the enzyme molecule. Selective modification of primary amino groups of (Na+ + K+)-ATPase by trinitrobenzene sulfonic acid (TNBS) resulted in a considerable inhibition of the specific activity of the enzyme. Investigation by means of enzyme and sorption kinetics of activation of heart sarcolemmal (Na+ + K+)-ATPase by its monovalent cationic ligands added simultaneously with TNBS revealed: a considerable competition between K+-ions and TNBS for the potassium binding site on the enzyme molecule; a non-competitive type of inhibition of Na+-induced activation of the enzyme. Both, potassium and sodium ions depressed, and magnesium ions enhanced the initial rate of TNBS-sorption; however, none of the above cations influenced the equilibrium value of TNBS sorption onto isolated sarcolemmal membranes. Ouabain, on the other hand, did not inhibit the initial rate and decreased the equilibrium value of TNBS sorption onto the membranes. The results obtained enabled the identification of an essential amino group in the potassium binding site of the (Na+ + K+)-ATPase molecule. abstract_id: PUBMED:2989573 Effects of adenosine on transmembrane potential and sarcolemmal Na+-K+ ATPase activity of human atrial myocardium. Adenosine effects on the transmembrane potential characteristics and the sarcolemmal Na+-K+ ATPase activity of human atrial myocardium were studied in tissue from 20 patients who were divided into 2 groups based on the maximum diastolic potentials (MDP) greater than or less than -60 mV. 
Group A consisted of 10 patients with MDP of 70.84 +/- 4.20 mV and Na+-K+ ATPase activity of 15.37 +/- 0.46 mumole Pi/mg/hr. Ten patients with MDP of 44.54 +/- 6.24 mV and Na+-K+ ATPase activity of 12.55 +/- 0.42 mumole Pi/mg/hr were included in group B. Adenosine had no effects on the electrophysiological properties and the sarcolemmal Na+-K+ ATPase activity of atrial myocardium at concentrations below 1 X 10(-5) M in either group. Adenosine resulted in mildly altered atrial transmembrane potentials without significant effect on Na+-K+ ATPase activity at concentrations between 1 X 10(-5) M and 5 X 10(-4) M. However, a significant reduction of transmembrane potentials and an apparent inhibition of Na+-K+ ATPase activity were observed only in tissue from group B. These results suggest that: 1) adenosine has no effect on the electrophysiological properties and the sarcolemmal Na+-K+ ATPase activity of human atrial myocardium at physiological concentrations; 2) adenosine-induced inhibition of the sarcolemmal Na+-K+ ATPase activity in slow channel-dependent atrial tissues may be a mechanism responsible for the alterations of transmembrane potentials under unphysiological conditions; and 3) adenosine contributes to the genesis of cardiac arrhythmias during acute myocardial ischemia, which can reduce transmembrane potentials of the myocardial cells and may increase the myocardial adenosine level above its effective concentration. Answer: Based on the provided abstracts, there is evidence for the existence of a maintained sub-sarcolemmal elevation of Na+ in ventricular myocardium, but this "fuzzy-space" Na+ did not change immediately after the activation of Na+ influx via voltage-gated Na+ channels or throughout the contractile cycle (PUBMED:12650880).
Instruction: Is Helicobacter pylori related to endothelial dysfunction during childhood? Abstracts: abstract_id: PUBMED:18353048 Is Helicobacter pylori related to endothelial dysfunction during childhood? Background: Helicobacter pylori infection has been proposed to have a role in the development of atherosclerosis preceded by endothelial dysfunction. The aim of the present study was to determine if a relationship exists between H. pylori infection in childhood and endothelial dysfunction and level of high-sensitivity C-reactive protein (hsCRP). Methods: Between October 2003 and November 2004, 28 subjects who were anti-H. pylori IgG-positive and 25 who were anti-H. pylori IgG-negative were included in the study. Mean ages of the H. pylori-positive and negative groups were not significantly different. Endothelial functions were evaluated on Doppler ultrasonography of the brachial artery. Percent ratio of the change in systolic diameter during hyperemic phase to the basal diameter was evaluated. Each subject's serum was tested for hsCRP, homocysteine and lipids. Results: Percent ratio of the change in systolic diameters during hyperemic phase to the basal diameter was not significantly different between the H. pylori-negative and -positive groups (P > 0.29). Mean levels of hsCRP were also not significantly different (1.48 +/- 1.8 g/dL vs 2.35 +/- 3.33 g/dL; P > 0.24). Similarly, serum levels of lipids and homocysteine were not significantly different (P > 0.05 for all lipids). Conclusions: Non-invasive techniques used in the present study were not indicative of early findings of atherosclerosis in H. pylori infection during childhood. Further studies are required to evaluate the relationship between early endothelial dysfunction and H. pylori infection in children with cardiovascular risk factors. abstract_id: PUBMED:17082728 Pre-eclampsia is associated with Helicobacter pylori seropositivity in Italy. Objectives: Pre-eclampsia (PE) is characterized by an excess of inflammation and endothelial dysfunction. Helicobacter pylori (H. pylori) causes chronic inflammatory changes and endothelial damage. We investigated the prevalence of seropositivity for IgG against H. pylori and cytotoxin-associated antigen A (CagA) in PE patients and the presence of H. pylori DNA in their placentas. Methods: We tested 47 pregnant women with PE and 47 with uneventful pregnancies for serum antibodies against H. pylori (enzyme immunoassays) and CagA protein (immunoblot assays). In 20 of them (10 normal and 10 PE) we assessed the presence, in the placenta, of H. pylori DNA by means of nested polymerase chain reaction (PCR). The odds ratios (OR) and 95% confidence intervals (CI), adjusted for parity, were calculated using logistic regression analysis to assess the risk of PE associated with H. pylori infection. Results: Helicobacter pylori seropositivity frequency was higher in mothers with PE (51.1%) compared to women with uneventful pregnancy (31.9%) (OR, 2.668; 95% CI, 1.084-6.566; P = 0.033). The difference was even greater for CagA seropositivity (80.9 and 14.9%, respectively) (OR, 26.035; 95% CI, 8.193-82.729; P < 0.001). All placentas were negative for H. pylori DNA. Conclusions: Helicobacter pylori, and especially strains carrying the CagA gene, may contribute to the inflammatory mechanisms involved in the pathogenesis of PE. abstract_id: PUBMED:17669100 A review of the postulated mechanisms concerning the association of Helicobacter pylori with ischemic heart disease. 
Since its discovery, Helicobacter pylori has been implicated in the pathogenesis of several diseases, both digestive and extradigestive. Interestingly, the majority of the extradigestive-related literature is focused on two vascular manifestations: stroke and ischemic heart disease. Potential mechanisms for the establishment of a H. pylori-induced ischemic heart disease have been proposed with regard to chronic inflammation, molecular mimicry, oxidative modifications, endothelial dysfunction, direct effect of the microorganism on atherosclerotic plaques as well as changes regarding traditional or novel risk factors for ischemic heart disease or even platelet-H. pylori interactions. A positive link between H. pylori infection and ischemic heart disease has been suggested by a series of studies focusing on epidemiologic evidence, dyslipidemic alterations, upregulation of inflammatory markers or homocysteine levels, induction of hypercoagulability, oxidation of low-density lipoprotein, causation of impaired endothelial function, detection of H. pylori DNA in atherosclerotic plaques, and participation of certain antigens and antibodies in a cross-reactivity model. There are studies, however, which investigated the relationship between H. pylori and ischemic heart disease with regard to the same parameters and failed to confirm the suggested positive association. Further studies in the direction of interaction between H. pylori and the host's genotype as well as a quest for evidence towards novel risk factors for ischemic heart disease such as oxidative stress, vascular remodeling, vascular calcification, or vasomotor activity, may reveal a field of great interest, thus contributing to the determination of new potential mechanisms. abstract_id: PUBMED:35433874 CagA+Helicobacter pylori, Not CagA-Helicobacter pylori, Infection Impairs Endothelial Function Through Exosomes-Mediated ROS Formation. Background: Helicobacter pylori (H. pylori) infection increases the risk for atherosclerosis, and ROS are critical to endothelial dysfunction and atherosclerosis. CagA is a major H. pylori virulence factor associated with atherosclerosis. The present study aimed to test the hypothesis that CagA+H. pylori effectively colonizes gastric mucosa, and CagA+H. pylori, but not CagA-H. pylori, infection impairs endothelial function through exosomes-mediated ROS formation. Methods: C57BL/6 were used to determine the colonization ability of CagA+H. pylori and CagA-H. pylori. ROS production, endothelial function of thoracic aorta and atherosclerosis were measured in CagA+H. pylori and CagA-H. pylori infected mice. Exosomes from CagA+H. pylori and CagA-H. pylori or without H. pylori infected mouse serum or GES-1 were isolated and co-cultured with bEND.3 and HUVECs to determine how CagA+H. pylori infection impairs endothelial function. Further, GW4869 was used to determine if CagA+H. pylori infection could lead to endothelial dysfunction and atherosclerosis through an exosomes-mediated mechanism. Results: CagA+H. pylori colonized gastric mucosa more effectively than CagA-H. pylori in mice. CagA+H. pylori, not CagA-H. pylori, infection significantly increased aortic ROS production, decreased ACh-induced aortic relaxation, and enhanced early atherosclerosis formation, which were prevented with N-acetylcysteine treatment. Treatment with CagA-containing exosomes significantly increased intracellular ROS production in endothelial cells and impaired their function. 
Inhibition of exosome secretion with GW4869 effectively prevented excessive aortic ROS production, endothelial dysfunction, and atherosclerosis in mice with CagA+H. pylori infection. Conclusion: These data suggest that CagA+H. pylori effectively colonizes gastric mucosa, impairs endothelial function, and enhances atherosclerosis via exosome-mediated ROS formation in mice. abstract_id: PUBMED:36317116 Helicobacter Pylori Infection Induces Intestinal Dysbiosis That Could Be Related to the Onset of Atherosclerosis. Cardiovascular diseases represent one of the first causes of death around the world, and atherosclerosis is one of the first steps in their development. Although these problems occur mainly in the elderly, the incidence in younger people is being reported, and an undetermined portion of patients without the classic risk factors develop subclinical atherosclerosis at earlier stages of life. Recently, both H. pylori infection and the intestinal microbiota have been linked to atherosclerosis. The mechanisms behind those associations are poorly understood, but some of the proposed explanations are (a) the effect of the chronic systemic inflammation induced by H. pylori, (b) a direct action on the endothelial cells by the cytotoxin-associated gene A protein, and (c) alterations of the lipid metabolism and endothelial dysfunction induced by H. pylori infection. Regarding the microbiota, several studies show that induction of atherosclerosis is related to high levels of Trimethylamine N-oxide. In this review, we present the information published about the effects of H. pylori on the intestinal microbiota and their relationship with atherosclerosis and propose a hypothesis to explain the nature of these associations. If H. pylori contributes to atherosclerosis, then interventions for eradication and restoration of the gut microbiota at early stages could represent a way to prevent disease progression. abstract_id: PUBMED:25436099 Association of Helicobacter pylori with central serous chorioretinopathy in Iranian patients. Aim: This study was conducted to investigate a possible association between H. pylori infection and CSR. Background: Helicobacter pylori infection is one of the most common infections worldwide. Central Serous Chorioretinopathy (CSR) is a serious macular detachment that usually affects young people. The etiopathogenesis of the disease is still not completely understood. Patients And Methods: A prospective study was performed and we evaluated a total of 35 CSR patients and a control group of 138 patients who were referred to the gastroenterology research center of Taleghani Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran. Central serous chorioretinopathy was diagnosed on the basis of findings in ophthalmic examinations and confirmed by fluorescein angiogram. All patients underwent a 13C-urea breath test (UBT) to detect H. pylori infection. Patients were defined as H. pylori-infected if the breath test was positive. Results: The mean duration of symptoms before diagnosis was 8.3 ± 2.5 days. Overall, no statistically significant difference was found between left and right eyes, and bilateral CSR was observed in 5 patients (14.2%). The incidence of H. pylori infection was 85.7% in CSR patients and 55.1% in control subjects (p=0.001). The odds ratio was 4.895. Conclusion: These results indicate that the prevalence of H. pylori infection is significantly higher in patients with CSR than in controls. No effect of age or sex was seen on H. pylori test results.
Further multicenter, randomized, case-control trials are necessary to confirm the potential contributory role of H. pylori infection in the pathogenesis of CSR, as a possible association between infectious agents and endothelial dysfunction. abstract_id: PUBMED:25918633 Helicobacter pylori gastritis, a presequeale to coronary plaque. Helicobacter pylori is considered the most common human pathogen colonizing gastric mucosa. Gastritis with or without H. pylori infection is associated with an increase in levels of homocysteine and high-sensitivity C-reactive protein (hs-CRP), but a more pronounced increase is noted in gastritis with H. pylori infection. An increasing level of homocysteine, due to decreased absorption of vitamin B12 and folic acid, together with increased CRP levels in gastritis with H. pylori infection, may be the earliest event in the process of atherosclerosis and plaque formation. This retrospective study was conducted at a tertiary care hospital in Mumbai by the Department of Biochemistry in association with the Department of Surgery. Eighty patients who underwent gastroscopy in view of gastritis were subjected to a rapid urease test for diagnosis of H. pylori infection. Vitamin B12, folic acid, homocysteine and hs-CRP were analyzed using chemiluminescence immunoassay. Student's t-test, Pearson's correlation and linear regression were used for statistical analysis. Patients with H. pylori gastritis had significantly lower levels of vitamin B12 (271.6±101.3 vs 390.6±176.7 pg/mL; P=0.0005), as well as higher levels of homocysteine (17.4±7.4 vs 13.8±7.8 µmol/L; P=0.037) and hs-CRP (2.5±2.9 vs 1.2±1.1 mg/L; P=0.017), than patients without H. pylori gastritis. However, folic acid showed no significant difference (8.9±3.2 vs 10.0±3.6 ng/mL; P=0.171). Elevated homocysteine and hs-CRP in H. pylori gastritis may independently induce endothelial dysfunction, leading to cardiovascular pathology. abstract_id: PUBMED:35087887 Helicobacter pylori Infection Acts as an Independent Risk Factor for Intracranial Atherosclerosis in Women Less Than 60 Years Old. Background: Studies show inconsistent results regarding the relationship between Helicobacter pylori (H. pylori) infection and stroke. The present study assessed a potential association between H. pylori infection and an important risk factor for stroke, intracranial atherosclerosis.
Methods: In total, 15,798 subjects with transcranial Doppler (TCD) and 13C-urea breath test (13C-UBT) were enrolled from March 2012 to March 2017. Intracranial atherosclerosis was further measured using intracranial carotid artery calcification (ICAC) on past or recent head CT, and 14,084 subjects were ultimately included in the study. Baseline demographics, atherosclerosis risk factors, and laboratory results were investigated. Since endothelial dysfunction is critical to the development of atherosclerosis, the role of H. pylori in migration, tube formation, and proliferation of human brain microvascular endothelial cells (HBMECs) was assessed in vitro. Results: The intracranial atherosclerosis group had a higher proportion of women and a greater rate of H. pylori infection than those without intracranial atherosclerosis. H. pylori infection was significantly more common in women with intracranial atherosclerosis than in males. In addition, the incidence of intracranial atherosclerosis was significantly higher in women with H. pylori infection than in uninfected women (53.8 vs. 46.4%, p < 0.001). In an adjusted model, H. pylori was shown to be an independent risk factor for intracranial atherosclerosis in women ≤ 60 years of age [odds ratio (OR) = 2.261, 95% CI = 1.839-2.780, p < 0.001]. Serum exosomes from patients with H. pylori infection significantly reduced brain endothelial cell migration, tube formation, and proliferation in vitro. Conclusion: Helicobacter pylori infection may be an important independent risk factor for intracranial atherosclerosis in women ≤ 60 years of age. abstract_id: PUBMED:32174233 Helicobacter pylori Infection Impairs Endothelial Function Through an Exosome-Mediated Mechanism. Background Epidemiological studies have suggested an association between Helicobacter pylori (H pylori) infection and atherosclerosis through undefined mechanisms. Endothelial dysfunction is critical to the development of atherosclerosis and related cardiovascular diseases. The present study was designed to test the hypothesis that H pylori infection impairs endothelial function through exosome-mediated mechanisms. Methods and Results Young male and female patients (18-35 years old) with and without H pylori infection were recruited for the study to minimize the chance of potential risk factors for endothelial dysfunction. Endothelium-dependent flow-mediated vasodilatation of the brachial artery was evaluated in the patients and control subjects. Mouse infection models with CagA+H pylori from a gastric ulcer patient were created to determine if H pylori infection-induced endothelial dysfunction could be reproduced in animal models. H pylori infection significantly decreased endothelium-dependent flow-mediated vasodilatation in young patients and significantly attenuated acetylcholine-induced endothelium-dependent aortic relaxation without change in nitroglycerin-induced endothelium-independent vascular relaxation in mice. H pylori eradication significantly improved endothelium-dependent vasodilation in both patients and mice with H pylori infection. Exosomes from conditioned media of human gastric epithelial cells cultured with CagA+H pylori or serum exosomes from patients and mice with H pylori infection significantly decreased endothelial functions with decreased migration, tube formation, and proliferation in vitro. Inhibition of exosome secretion with GW4869 effectively preserved endothelial function in mice with H pylori infection.
Conclusions H pylori infection impaired endothelial function in patients and mice through exosome-mediated mechanisms. The findings indicated that H pylori infection might be a novel risk factor for cardiovascular diseases. Answer: The relationship between Helicobacter pylori infection and endothelial dysfunction during childhood has been explored, but the evidence is not conclusive. A study conducted between October 2003 and November 2004, which included 28 subjects who were anti-H. pylori IgG-positive and 25 who were anti-H. pylori IgG-negative, found that the percent ratio of the change in systolic diameters during the hyperemic phase to the basal diameter was not significantly different between the H. pylori-negative and -positive groups. Additionally, mean levels of high-sensitivity C-reactive protein (hsCRP) were not significantly different between the two groups, nor were serum levels of lipids and homocysteine. The study concluded that the non-invasive techniques used were not indicative of early findings of atherosclerosis in H. pylori infection during childhood, suggesting that further studies are required to evaluate the relationship between early endothelial dysfunction and H. pylori infection in children with cardiovascular risk factors (PUBMED:18353048). In summary, based on the study referenced, there is no clear evidence to suggest that Helicobacter pylori is related to endothelial dysfunction during childhood. Further research is needed to establish any potential link, especially in children who may have other cardiovascular risk factors.
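For readers who want to see the arithmetic behind two of the quantities quoted in this example, the short Python sketch below illustrates (a) the percent change in brachial artery diameter used as the endothelial-function measure in PUBMED:18353048 and (b) a crude odds ratio of the kind reported in PUBMED:17082728 and PUBMED:25436099. The artery diameters are invented for illustration, the 2x2 counts are back-calculated from the percentages quoted in the pre-eclampsia abstract, and the crude value will not exactly match the parity-adjusted odds ratio of 2.668 reported there.

def fmd_percent(baseline_diameter_mm, hyperemic_diameter_mm):
    # Percent ratio of the change in diameter during the hyperemic phase to the basal diameter.
    return 100.0 * (hyperemic_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm

def crude_odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    # Unadjusted odds ratio from a 2x2 table: (a*d)/(b*c).
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical diameters of 3.5 mm at baseline and 3.8 mm during hyperemia give ~8.6% dilation.
print(round(fmd_percent(3.5, 3.8), 1))

# Back-calculated from PUBMED:17082728: ~51.1% of 47 pre-eclampsia patients (24/47) versus
# ~31.9% of 47 controls (15/47) were H. pylori seropositive; the crude odds ratio is ~2.23.
print(round(crude_odds_ratio(24, 23, 15, 32), 2))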
Instruction: Can multivariable risk-benefit profiling be used to select treatment-favorable patients for thrombolysis in stroke in the 3- to 6-hour time window? Abstracts: abstract_id: PUBMED:17068306 Can multivariable risk-benefit profiling be used to select treatment-favorable patients for thrombolysis in stroke in the 3- to 6-hour time window? Background And Purpose: The Stroke-Thrombolytic Predictive Instrument (Stroke-TPI) uses multivariate equations to predict outcomes with and without thrombolysis. We sought to examine whether such a multivariate predictive instrument might be useful in selecting patients with a favorable risk-benefit treatment profile for therapy after 3 hours. Methods: We explored outcomes in patients from 5 major randomized clinical trials testing intravenous recombinant tissue plasminogen activator (rt-PA) classified by the Stroke-TPI as "treatment-favorable" or "treatment-unfavorable." We used iterative bootstrap re-sampling to estimate how such a model would perform on independent test data. Results: Among patients treated within the 3- to 6-hour window, 67% of patients were classified by Stroke-TPI predicted outcomes as "treatment-favorable" and 33% were classified as "treatment-unfavorable." Outcomes in the treatment-favorable group demonstrated benefit for thrombolysis (modified Rankin Scale score ≤ 1: 44.0% with rt-PA versus 34.2% with placebo, P=0.005), whereas harm was demonstrated in the treatment-unfavorable group (modified Rankin Scale score ≤ 1: 31.3% with rt-PA versus 38.3% with placebo; P=0.004). Bootstrap resampling with complete cross-validation showed that the absolute margin of benefit in the treatment-favorable group diminished on average by 36% between derivation and independent validation sets, but still represented a significant tripling of improvement in benefit compared with conventional inclusion criteria (5.2% [interquartile range, 1.7% to 8.6%] versus 1.8% [interquartile range, -0.5 to 4.1], P<0.0001). Conclusions: Such multivariable risk-benefit profiling may be useful in the selection of acute stroke patients for rt-PA therapy even more than 3 hours after symptom onset. Prospective testing is indicated. abstract_id: PUBMED:16833040 Thrombolysis beyond the three-hour time window. Intravenous (i.v.) thrombolysis with tissue plasminogen activator (rt-PA, i.v. 0.9 mg/kg body weight) was approved by European health authorities in 2002, with a 3-hr time window. The meta-analysis of i.v. rt-PA trials suggests efficacy of rt-PA up to 4.5 to 6 hr. However, treatment efficacy declines rapidly over time: the numbers of patients needed to treat to prevent 1 death or dependency are respectively 7, 12, and > 30 in the 0-3 hr, 0-6 hr, and 3-6 hr time windows. Magnetic resonance imaging may be the best way to select candidates for thrombolysis beyond 3 hr, on the basis of the presence of arterial occlusion and mismatch between diffusion and perfusion images. New trials are testing the possibility of extending the time window. Trials with new thrombolytic agents, ultrasound thrombolysis, and mechanical thrombolysis are also running or planned in the 3- to 6-hr time window. The 3-hr limit is just a matter of safety. Patients should be treated as soon as possible, and the earliest is the best. abstract_id: PUBMED:33363505 The Relationship Between Penumbral Tissue and Blood-Brain Barrier Disruption in Acute Stroke Patients Presenting in an Extended Time Window.
Background: Penumbral brain tissue identified with multimodal imaging can be salvaged with reperfusion in an extended time window. The risk of severe hemorrhagic complications after reperfusion therapy increases with worsening disruption of the blood-brain barrier (BBB). The relationship between penumbral tissue and BBB disruption has not been previously studied. Methods: Stroke patients presenting in an extended time window without a large vessel occlusion who underwent diffusion-perfusion MRI within 24 h of last-seen-normal were included. The volume of penumbral tissue was calculated using mismatch on MRI. Mean permeability derangement (MPD) of the BBB was measured within the ischemic lesion. A target profile (TP) for treatment was defined based on the EXTEND trial. Results: 222 patients were included with a median age of 73 and 55% women. The median NIHSS was 6, the mean core volume was 14 ml, the mean ischemic volume was 47 mL and the mean mismatch volume was 33 mL. Higher MPD was significantly associated with less mismatch volume (p = 0.001). A target profile was associated with lower MPD (OR 0.97; CI 0.96:0.99; p < 0.001). Of the 105 patients who had a TP, 31 (30%) had a MPD > 20% suggesting an increased risk of hemorrhage. Thus, 33% (74/222) of patients had a favorable profile for benefit and safety. Conclusions: Patients presenting in an extended time window with a favorable penumbral profile for treatment have less severe BBB disruption. Up to a third of patients who currently go untreated could be considered for enrollment in a clinical trial of thrombolysis in an extended time window. abstract_id: PUBMED:34464927 Adapting Clinical Practice of Thrombolysis for Acute Ischemic Stroke Beyond 4.5 Hours: A Review of the Literature. Several clinical trials have demonstrated that advanced neuroimaging can select patients for recanalization therapy in an extended time window. The favorable functional outcomes and safety profile of these studies have led to the incorporation of neuroimaging in endovascular treatment guidelines, and most recently, also extended to decision making on thrombolysis. Two randomized clinical trials have demonstrated that patients who are not amenable to endovascular thrombectomy within 4.5 hours from symptoms discovery or beyond 4.5 hours from the last-known-well time may also be safely treated with intravenous thrombolysis and have a clinical benefit above the risk of safety concerns. With the growing aging population, increased stroke incidence in the young, and the impact of evolving medical practice, healthcare and stroke systems of care need to adapt continuously to provide evidence-based care efficiently. Therefore, understanding and incorporating appropriate screening strategies is critical for the prompt recognition of potentially eligible patients for extended-window intravenous thrombolysis. Here we review the clinical trial evidence for thrombolysis for acute ischemic stroke in the extended time window and provide a review of new enrolling clinical trials that include thrombolysis intervention beyond the 4.5 hour window. abstract_id: PUBMED:23010678 Will delays in treatment jeopardize the population benefit from extending the time window for stroke thrombolysis? Background And Purpose: Pooled analyses show benefits of intravenous alteplase (recombinant tissue-type plasminogen activator) treatment for acute ischemic stroke up to 4.5 hours after onset despite marketing approval for up to 3 hours. 
However, the benefit from thrombolysis is critically time-dependent, and if extending the time window reduces treatment urgency, this could reduce the population benefit from any extension. Methods: Based on 3830 UK patients registered between 2005 and 2010 in the Safe Implementation of Treatments in Stroke-International Stroke Thrombolysis Registry (SITS-ISTR), a Monte Carlo simulation was used to model recombinant tissue-type plasminogen activator treatment up to 4.5 hours from onset and assess the impact (numbers surviving with little or no disability) from changes in hospital treatment times associated with this extended time window. Results: We observed a significant relation between time remaining to treat and time taken to treat in the UK SITS-ISTR data set after adjustment for censoring. Simulation showed that as this "deadline effect" increases, an extended treatment time window entails that an increasing number of patients are treated at a progressively lower absolute benefit to a point where the population benefit from extending the time window is entirely negated. Conclusions: Despite the benefit for individual patients treated up to 4.5 hours after onset, the population benefit may be reduced or lost altogether if extending the time window results in more patients being treated but at a lower absolute benefit. A universally applied reduction in hospital arrival-to-treatment times of 8 minutes would confer a population benefit as large as the time window extension. abstract_id: PUBMED:34970213 Real-World Cost-Effectiveness of Late Time Window Thrombectomy for Patients With Ischemic Stroke. Background: To compare the cost-effectiveness of providing endovascular thrombectomy (EVT) for patients with ischemic stroke in the >4.5 h time window between patient groups who met and did not meet the perfusion imaging trial criteria. Methods: A discrete event simulation (DES) model was developed to simulate the long-term outcome post EVT in patients meeting or not meeting the extended time window clinical trial perfusion imaging criteria at presentation, vs. medical treatment alone (including intravenous thrombolysis). The effectiveness of thrombectomy in patients meeting the landmark trial criteria (DEFUSE 3 and DAWN) was derived from a prospective cohort study of Australian patients who received EVT for ischemic stroke, between 2015 and 2019, in the extended time window (>4.5 h). Results: Endovascular thrombectomy was shown to be a cost-effective treatment for patients satisfying the clinical trial criteria in our prospective cohort [incremental cost-effectiveness ratio (ICER) of $11,608/quality-adjusted life year (QALY) for DEFUSE 3-positive or $34,416/QALY for DAWN-positive]. However, offering EVT to patients outside of clinical trial criteria was associated with reduced benefit (-1.02 QALY for DEFUSE 3; -1.43 QALY for DAWN) and higher long-term patient costs ($8,955 for DEFUSE 3; $9,271 for DAWN), thereby making it unlikely to be cost-effective in Australia. Conclusions: Treating patients not meeting the DAWN or DEFUSE 3 clinical trial criteria in the extended time window for EVT was associated with less gain in QALYs and higher cost. Caution should be exercised when considering this procedure for patients not satisfying the trial perfusion imaging criteria for EVT.
Introduction: Endovascular treatment for acute ischemic stroke patients with large vessel occlusion (LVO) has been established as a promising clinical intervention within a late time window of 6-24 h after symptom onset. Patients with slow progression, however, may still benefit from endovascular treatment beyond the 24-h time window (very late window). Aim: The aim of this study is to report insight into the potential clinical benefits of endovascular treatment for acute ischemic stroke beyond 24 h from symptom onset. Methods: A retrospective analysis was performed on consecutive patients undergoing endovascular treatment for acute anterior circulation LVO ischemic stroke beyond 24 h. Participants were recruited between July 2019 and November 2020. Patients were selected based on the DAWN/DEFUSE 3 criteria (Perfusion-RAPID, iSchemaView) and patients receiving treatment beyond 24 h were compared to a group of patients receiving endovascular treatment between 6 and 24 h after symptom onset. The primary outcome was the proportion of patients with functional independence at 90 days (modified Rankin Scale score of 0-2). The secondary outcomes were shift modified Rankin Scale (mRS) analysis and successful reperfusion was defined by thrombolysis in cerebral infarction (TICI) 2b-3 on the final procedure. Safety outcomes were symptomatic intracranial hemorrhage and death at the 90-day follow-up. Propensity score (PS)-matched analyses were employed to rectify the imbalanced baseline characteristics between the two groups. Results: A total of 166 patients were recruited with a median age of 63.0 (56.0-69.0) and 28.9% of all patients were females. Patients in the beyond 24-h group had a longer onset-to-groin time (median = 27.2 vs 14.3 h, p < 0.001) than those in the 6- to 24-h group. There were no statistically significant differences between the two groups in National Institutes of Health Stroke Scale (NIHSS) (median = 12.0 vs 15.0, p = 0.37), perfusion imaging characteristics (core: median = 11.0 vs 9.0 mL, p = 0.86; mismatch volume: median = 106.0 vs 96.0, p = 0.44; mismatch ratio = 6.46 vs 7.24, p = 0.91), and perfusion-to-groin time (median = 72.5 vs 76.0 min, p = 0.77). No significant differences were noted among patients between the two groups in the primary endpoint functional independence analysis (50.0% vs 46.6%, p = 0.77) and in the safety endpoint analysis: mortality (15.0% vs 11.0%, p = 0.71) or symptomatic hemorrhage (0% vs 3.42%, p > 0.999). In PS-matched analyses, there were no significant differences among patients between the two groups in functional independence (50.0% vs 54.8%, p = 0.74), mortality (16.7% vs 9.68%, p = 0.50), or symptomatic hemorrhage (0% vs 6.45%, p = 0.53). Conclusion: Endovascular treatment can be performed safely and effectively in LVO patients beyond 24 h from symptom onset when selected by target mismatch profile. The clinical outcome of these patients was comparable to those treated in the 6- to 24-h window. Larger studies are needed to confirm these findings. abstract_id: PUBMED:33210039 Frequency of thrombolytic targets in stroke patients presenting in an extended time window. Objective: The objective of this study was to determine the proportion of stroke patients presenting in an extended time window who have a thrombolytic treatment target. Background: Patients presenting up to 24 h after stroke onset have been found to have penumbral tissue on multimodal imaging. 
Stroke patients presenting in this extended time window without a large vessel occlusion (LVO) may benefit from reperfusion therapy using thrombolysis. Methods: Patients seen at our institutions from 2011 through 2015 were reviewed to identify those who presented >4 h and <24 h from last seen normal (LSN) and did not receive acute treatment. Magnetic resonance imaging (MRI) scans were used to dichotomize patients using a diffusion-perfusion mismatch ratio of 1.2. Results: During the study period, 3469 patients were evaluated by our stroke service, with 893 seen 4-24 h from LSN who were not treated. MRI was performed with diffusion and perfusion imaging in 439 patients, of whom 26 were excluded due to hemorrhage and 37 were excluded due to LVO. This left 376 patients who potentially could have been treated with thrombolysis in an extended time window and were included in the analysis. Of these, 156 (42%) demonstrated a mismatch ratio >1.2. Patients with a mismatch presented earlier (P = 0.012), were more likely to be female (P = 0.03), and had higher National Institutes of Health Stroke Scale (P < 0.001). Conclusions: Almost half of the patients presenting 4-24 h from LSN had a target for thrombolysis in our study. Multimodal imaging may be able to expand the population of treatable stroke patients given the results of recent clinical trials. abstract_id: PUBMED:17366377 Effective acute stroke treatment beyond approval limitations: intravenous thrombolysis within an extended time window (3-6 h) and in old patients (aged 80 or older) Intravenous thrombolysis with tPA is the only approved and effective treatment for acute ischemic stroke. The approval, however, is restricted to treatment within 3 hours of stroke onset, and it is not recommended to treat patients beyond 80 years of age. Due to these restrictions, thrombolysis is only given to a small number of acute stroke patients. At the same time there is growing evidence that patients can be treated with thrombolysis safely and effectively even beyond these restrictions. We give an overview over the published data regarding thrombolysis beyond the 3 hour time window and in patients aged 80 or over. Based on these data we conclude that (1.) intravenous thrombolysis in MRI selected patients is safe and effective within an extended time window of up to 6 hours, and (2.) there is no increase in mortality or symptomatic intracerebral bleeding complications in patients aged 80 or over treated with thrombolysis. A great number of acute stroke patients reaches the hospital beyond the 3 hour time window, and there is a growing number of old and very old stroke patients in the western world. Treating patients up to a 6 hour time window and beyond the age of 80 years would clearly increase the number of patients, which might benefit from this effective treatment. To summarize, we recommend experienced stroke centres to treat acute stroke patients with thrombolysis up to 6 hours using MRI criteria for patient selection, and to treat also patients aged 80 years or older. abstract_id: PUBMED:30012573 Revisiting the therapeutic time window dogma: successful thrombectomy 6 days after stroke onset. Stroke is the leading cause of serious long-term disability in the USA. Recent clinical trials, DAWN and DEFUSE 3, have expanded the endovascular therapeutic time window which has been adopted by the American Heart Association stroke guideline. 
However, there continues to be a dilemma as to what is the best approach for patients who present beyond the time window set by these trials and the current guideline. The interval from arterial occlusion to completion of brain tissue infarction varies from patient to patient and depends on the actual time and also a physiological clock or a tissue time window. Offering endovascular treatment based solely on a rigid time criterion excludes patients who may have a clinical benefit because of potentially salvageable tissue. We present a case of a patient who underwent successful stroke thrombectomy 6 days after stroke onset. Answer: Yes, multivariable risk-benefit profiling can be used to select treatment-favorable patients for thrombolysis in stroke in the 3- to 6-hour time window. The Stroke-Thrombolytic Predictive Instrument (Stroke-TPI) is an example of such a multivariate predictive instrument that has been explored for this purpose. It uses multivariate equations to predict outcomes with and without thrombolysis. A study examining outcomes in patients from major randomized clinical trials found that 67% of patients treated within the 3- to 6-hour window were classified as "treatment-favorable" according to Stroke-TPI predicted outcomes. These patients demonstrated a significant benefit from thrombolysis compared to placebo. Conversely, harm was demonstrated in the "treatment-unfavorable" group. Bootstrap resampling with complete cross-validation showed that the absolute margin of benefit in the treatment-favorable group diminished on average by 36% between derivation and independent validation sets but still represented a significant tripling of improvement in benefit compared with conventional inclusion criteria. These findings suggest that multivariable risk-benefit profiling may be useful in selecting acute stroke patients for rt-PA therapy even more than 3 hours after symptom onset, and prospective testing is indicated (PUBMED:17068306).
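A methodological aside on the Stroke-TPI analysis cited above (PUBMED:17068306): the abstract validates its "treatment-favorable" classification with iterative bootstrap resampling and reports that the apparent benefit margin shrinks on independent validation. The Python sketch below is a deliberately simplified, hypothetical illustration of that idea, not the actual Stroke-TPI code; all data are simulated, and the only point is the pattern of re-deriving a selection cutoff in each bootstrap resample and re-measuring the benefit margin on the cases left out of that resample.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
score = rng.normal(0.0, 1.0, n)                   # hypothetical model-predicted benefit
gain = 0.05 * score + rng.normal(0.0, 0.30, n)    # hypothetical observed benefit (noisy)

def margin(s, g, cutoff):
    # Mean benefit in the "treatment-favorable" group minus the "treatment-unfavorable" group.
    favorable = s > cutoff
    return g[favorable].mean() - g[~favorable].mean()

cutoffs = np.linspace(-1.0, 1.0, 21)

def derive_cutoff(s, g):
    # "Derive" the selection rule: keep the cutoff with the largest apparent margin.
    return max(cutoffs, key=lambda c: margin(s, g, c))

apparent = margin(score, gain, derive_cutoff(score, gain))

validated = []
for _ in range(200):
    idx = rng.integers(0, n, n)                   # bootstrap resample (with replacement)
    oob = np.setdiff1d(np.arange(n), idx)         # cases left out of this resample
    c = derive_cutoff(score[idx], gain[idx])      # rule re-derived on the resample...
    validated.append(margin(score[oob], gain[oob], c))  # ...then tested out of sample

print(f"apparent margin {apparent:.3f} vs bootstrap-validated margin {np.mean(validated):.3f}")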
Instruction: Ileal brake activation: macronutrient-specific effects on eating behavior? Abstracts: abstract_id: PUBMED:23911804 Lipids, CHOs, proteins: can all macronutrients put a 'brake' on eating? The gastrointestinal (GI) tract and specifically the most distal part of the small intestine, the ileum, has become a renewed focus of interest for mechanisms targeting appetite suppression. The 'ileal brake' is stimulated when energy-containing nutrients are delivered beyond the duodenum and jejunum and into the ileum, and is named for the feedback loop which slows or 'brakes' gastric emptying and duodeno-jejunal motility. More recently it has been hypothesized that the ileal brake also promotes secretion of satiety-enhancing GI peptides and suppresses hunger, placing a 'brake' on food intake. Postprandial delivery of macronutrients to the ileum, other than unavailable carbohydrates (CHO) which bypass absorption in the small intestine en route to fermentation in the large bowel, is an uncommon event and hence this brake mechanism is rarely activated following a meal. However the ability to place a 'brake' on food intake through delivery of protected nutrients to the ileum is both intriguing and challenging. This review summarizes the current clinical and experimental evidence for activation of the ileal brake by the three food macronutrients, with emphasis on eating behavior and satiety as well as GI function. While clinical studies have shown that exposure of the ileum to lipids, CHOs and proteins may activate GI components of the ileal brake, such as decreased gut motility, gastric emptying and secretion of GI peptides, there is less evidence as yet to support a causal relationship between activation of the GI brake by these macronutrients and the suppression of food intake. The predominance of evidence for an ileal brake on eating comes from lipid studies, where direct lipid infusion into the ileum suppresses both hunger and food intake. Outcomes from oral feeding studies are less conclusive with no evidence that 'protected' lipids have been successfully delivered into the ileum in order to trigger the brake. Whether CHO or protein may induce the ileal brake and suppress food intake has to date been little investigated, although both clearly have GI mediated effects. This review provides an overview of the mechanisms and mediators of activation of the ileal brake and assesses whether it may play an important role in appetite suppression. abstract_id: PUBMED:24957485 Ileal brake activation: macronutrient-specific effects on eating behavior? Background: Activation of the ileal brake, by infusing lipid directly into the distal part of the small intestine, alters gastrointestinal (GI) motility and inhibits food intake. The ileal brake effect on eating behavior of the other macronutrients is currently unknown. Objective: The objective of this study was to investigate the effects of ileal infusion of sucrose and casein on food intake, release of GI peptides, gastric emptying rate and small-bowel transit time with safflower oil as positive control. Design: This randomized, single-blind, crossover study was performed in 13 healthy subjects (6 male; mean age 26.4±2.9 years; mean body mass index 22.8±0.4 kg m(-2)) who were intubated with a naso-ileal catheter. 
Thirty minutes after the intake of a standardized breakfast, participants received an ileal infusion containing control ((C) saline), safflower oil ((HL) 51.7 kcal), low-dose casein ((LP) 17.2 kcal) or high-dose casein ((HP) 51.7 kcal), low-dose sucrose ((LC) 17.2 kcal) and high-dose sucrose ((HC) 51.7 kcal), over a period of 90 min. Food intake was determined during an ad libitum meal. Visual analogue score questionnaires for hunger and satiety and blood samples were collected at regular intervals. Results: Ileal infusion of lipid, protein and carbohydrate resulted in a significant reduction in food intake compared with control (HL: 464.3±90.7 kcal, P<0.001; HP: 458.0±78.6 kcal, P<0.005; HC: 399.0±57.0 kcal, P<0.0001 vs control: 586.7±70.2 kcal, P<0.001, respectively). A reduction in energy intake was still apparent when the caloric amount of infused nutrients was added to the amount eaten during the ad libitum meal. Secretion of cholecystokinin and peptide YY but not of glucagon-like peptide-1 (7-36) was increased during ileal perfusion of fat, carbohydrates and protein. During ileal perfusion of all macronutrients, a delay in gastric emptying and intestinal transit was observed, but differences were not significant compared with control. Conclusion: Apart from lipids, sucrose and casein also reduce food intake upon ileal infusion, thereby activating the ileal brake. In addition to food intake, satiety and GI peptide secretion were also affected.
In this review, an account of the physiological mechanisms controlling the passage rate, feed intake and enzymatic digestion of grains is presented: (1) to evaluate the merits of recently developed methods of grain/starch digestion for application purposes; and (2) to identify opportunities for future research to advance our understanding of how the combination of controlled grain digestion and fibre content can be manipulated to physiologically influence satiety and food intake. abstract_id: PUBMED:28593706 Does the ileal brake mechanism contribute to sustained weight loss after bariatric surgery? Bariatric surgery is currently the most effective strategy for treating morbid obesity. Weight regain following significant weight loss, however, remains a problem, with the outcome proportional to the period of follow-up. This review revisits a well-established physiological neurohormonally-mediated feedback loop, the so-called ileal brake mechanism, with a special emphasis on the gut hormone peptide tyrosine tyrosine. The manuscript not only highlights the potential role of the ileal brake mechanism in weight loss and weight maintenance thereafter following obesity surgery, but also provides a compelling argument for using this appetite-suppressing feedback loop to enable sustained long-term weight loss in patients undergoing surgery for morbid obesity. abstract_id: PUBMED:36135136 A Simulation-Based Study of the Effect of Brake Light Flashing Frequency on Driver Brake Behavior from the Perspective of Response Time. To prevent vehicle crashes, studies have proposed the use of flashing signals (brake lights or other light indicators) to improve the driver's response time when the leading vehicle is braking. However, there are no consistent results on the ideal flashing frequency of the brake lights. This study aimed to investigate different brake light flashing frequencies to assess their impact on braking response time. Twenty-four participants aged 25 to 30 were recruited. Two driving speed environments (50 and 80 km/h), three deceleration rates (0.25, 0.6, and 1 g), and four brake light flashing frequencies (0, 2, 4, and 7 Hz) were examined. Braking response time, average braking force, and braking response time ratio were used to evaluate the driving behavior. The results showed that the braking response time and average braking force were affected by the deceleration rate in the 50 km/h driving environment. In the 50 and 80 km/h driving environments, although there were no significant differences among the three deceleration rates, the braking response time decreased by 3-7% under the flashing brake light condition. These findings can be used as a reference for safety designs as well as future studies on driving behavior. abstract_id: PUBMED:18692080 Ileal brake: a sensible food target for appetite control. A review. As the prevalence of obesity and related health problems increases, there is increased interest in the gastrointestinal system as a possible target for pharmacological or food-based approaches to weight management. Recent studies have shown that under normal physiological situations undigested nutrients can reach the ileum and induce activation of the so-called "ileal brake", a combination of effects influencing the digestive process and ingestive behaviour. The relevance of the ileal brake as a potential target for weight management is based on several findings: First, activation of the ileal brake has been shown to reduce food intake and increase satiety levels.
Second, surgical procedures that increase exposure of the ileum to nutrients produce weight loss and improved glycaemic control. Third, the appetite-reducing effect of chronic ileal brake activation appears to be maintained over time. Together, this evidence suggests that activation of the ileal brake is an excellent long-term target to achieve sustainable reductions in food intake. This review addresses the role of the ileal brake in gut function, and considers the possible involvement of several peptide hormone mediators. Attention is given to the ability of macronutrients to activate the ileal brake, and particularly variation attributable to the physicochemical properties of fats. The emphasis is on implications of ileal brake stimulation on food intake and satiety, accompanied by evidence of effects on glycaemic control and weight loss. abstract_id: PUBMED:35635627 Discovery of the GI Effects of GLP-1: An Historical Perspective. In 1993, my laboratory published an article in Digestive Diseases and Sciences that clearly demonstrated the pronounced effects of the newly discovered intestinal hormone, glucagon-like peptide-1 (GLP-1), on a number of gastrointestinal functions, including gastric emptying rate, gastric acid secretion, and pancreatic enzyme secretion. The gut hormone is released in response to nutrient intake, and in further experiments, its release from the ileum paralleled inhibition of both gastric and pancreatic secretions. Based on these studies, it was concluded that GLP-1 is an important regulator of the so-called ileal brake, a term given for the observation that ileal perfusion of lipids delayed gastric emptying, reduced food intake, and induced satiety (Welch et al., 1985), in addition to its functions as an incretin hormone. GLP-1 was subsequently identified as a physiological inhibitor of appetite and food intake, and based on these actions, the GLP-1 receptor agonists are today considered among the most powerful and effective antiobesity and antidiabetic agents available, with the added benefits of reducing the risk of the cardiovascular and renal complications associated with these conditions.
Results: Mean gastric half-emptying times were moderately though significantly increased for the raw corn starch and IMO treatments (P < 0.05), but they could be sub-divided into larger effect responder (n = 11) and non-responder groups (n = 9). Longer time for glycemic response to return to baseline was associated with increased gastric half-emptying time in an exploratory subset of data removing gastric half-emptying times > 3.5 h (P = 0.02). No significant differences in appetite ratings were observed. Conclusion: SDCs caused slower gastric emptying rate through activation of the ileal brake, as closely matched semi-solid yogurts were used and only rate of carbohydrate digestion differed. Extending glycemic response through consumption of SDCs was associated with triggering the ileal brake. Trial Registration: ClinicalTrials.gov NCT03630445, August 2018, retrospectively registered. abstract_id: PUBMED:34064724 Review on the Regional Effects of Gastrointestinal Luminal Stimulation on Appetite and Energy Intake: (Pre)clinical Observations. Macronutrients in the gastrointestinal (GI) lumen are able to activate "intestinal brakes", feedback mechanisms on proximal GI motility and secretion including appetite and energy intake. In this review, we provide a detailed overview of the current evidence with respect to four questions: (1) are regional differences (duodenum, jejunum, ileum) present in the intestinal luminal nutrient modulation of appetite and energy intake? (2) is this "intestinal brake" effect macronutrient specific? (3) is this "intestinal brake" effect maintained during repetitive activation? (4) can the "intestinal brake" effect be activated via non-caloric tastants? Recent evidence indicates that: (1) regional differences exist in the intestinal modulation of appetite and energy intake with a proximal to distal gradient for inhibition of energy intake: ileum and jejunum > duodenum at low but not at high caloric infusion rates. (2) the "intestinal brake" effect on appetite and energy appears not to be macronutrient specific. At equi-caloric amounts, the inhibition on energy intake and appetite is in the same range for fat, protein and carbohydrate. (3) data on repetitive ileal brake activation are scarce because of the need for prolonged intestinal intubation. During repetitive activation of the ileal brake for up to 4 days, no adaptation was observed but overall the inhibitory effect on energy intake was small. (4) the concept of influencing energy intake by intra-intestinal delivery of non-caloric tastants is intriguing. Among tastants, the bitter compounds appear to be more effective in influencing energy intake. Energy intake decreases modestly after post-oral delivery of bitter tastants or a combination of tastants (bitter, sweet and umami). Intestinal brake activation provides an interesting concept for preventive and therapeutic approaches in weight management strategies. abstract_id: PUBMED:16036508 Melatonin as a modulator of the ileal brake mechanism. Objective: The gastrointestinal tract represents the most important extrapineal source of melatonin. Intestinal melatonin release is induced by the ileal passage of nutrients and could play a part in the control of postprandial gut motility. The specific aim of this study was to determine the putative role of melatonin in the "ileal brake" reflex, an important mechanism released by ileal lipids that regulates the gastric emptying of chyme. 
Material And Methods: Under general anaesthesia rats were fitted with an ileal cannula exteriorized at the back of the neck. After a 1-week recovery, experiments were performed in conscious fasted animals. Rats were fed by gavage 1.5 ml of casein hydrolysate plus 0.05% phenol red, and either saline or Intralipid was continuously infused (2 ml/h) into the ileum. Gastric emptying was measured 50 min after ingestion by gastric lavage and determination of phenol red by spectrophotometry. The effects of melatonin (1 mg/kg) and the melatonin antagonist S-22153 (dose-response study 0.2-25 mg/kg) were tested versus vehicle in paired experiments at 1-week intervals. Results: Ileal infusion of lipids delayed gastric emptying. During ileal infusion of lipids, the melatonin antagonist S-22153, but not melatonin, potentiated the delay in gastric emptying induced by the ileal brake mechanism. The inhibition of gastric emptying induced by S-22153 was dose related. Neither melatonin nor S-22153 had noticeable effects on gastric emptying during ileal infusion of saline. Conclusions: Our data suggest that melatonin, released in response to ileal lipids, exerts a modulatory influence that decreases the inhibitory effects of the ileal brake on gastric emptying of nutrients. Answer: The ileal brake is a feedback mechanism that slows gastric emptying and duodeno-jejunal motility when nutrients are delivered to the ileum, potentially suppressing hunger and reducing food intake (PUBMED:23911804). The activation of the ileal brake by macronutrients and its effects on eating behavior have been investigated in several studies. Lipids have been shown to activate the ileal brake, with direct infusion into the ileum suppressing both hunger and food intake (PUBMED:23911804). Clinical studies have demonstrated that ileal infusion of lipids, proteins, and carbohydrates can all significantly reduce food intake compared to a control, with the reduction in energy intake still apparent even when the caloric amount of infused nutrients is added to the amount eaten during an ad libitum meal (PUBMED:24957485). This suggests that the ileal brake effect on eating behavior is not macronutrient-specific, as all three macronutrients were able to activate the brake and influence food intake. Furthermore, the ileal brake's activation by macronutrients is associated with increased secretion of satiety-enhancing gastrointestinal peptides such as cholecystokinin and peptide YY, although the effects on glucagon-like peptide-1 (GLP-1) were not significant in one study (PUBMED:24957485). GLP-1 itself has been identified as an important regulator of the ileal brake, influencing gastric emptying, appetite, and food intake (PUBMED:35635627). The ileal brake mechanism has also been implicated in sustained weight loss following bariatric surgery, suggesting its potential role in long-term weight management (PUBMED:28593706). Additionally, slowly digestible carbohydrates (SDCs) that reach the ileum have been shown to trigger the ileal brake, indicated by delayed gastric emptying and reduced glycemic response, although no significant differences in subjective appetite ratings were observed in one study (PUBMED:35079895). In summary, the activation of the ileal brake and its effects on eating behavior appear to be influenced by all macronutrients—lipids, proteins, and carbohydrates—rather than being specific to one type of macronutrient.
This suggests that the ileal brake could be a viable target for appetite control and weight management strategies across different types of diets (PUBMED:23911804; PUBMED:24957485; PUBMED:28593706; PUBMED:35079895).
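To make the "still apparent" point from PUBMED:24957485 concrete, the short Python snippet below simply adds the infused caloric load back onto the mean ad libitum intake reported for each high-dose condition in that abstract; even after this correction, every nutrient condition remains below the saline control. The low-dose arms are omitted because their intakes are not quoted in the abstract, and the figures are group means only, so this is an illustration of the arithmetic rather than a reanalysis.

# Mean ad libitum intake (kcal) and infused load (kcal) per condition, from PUBMED:24957485.
conditions = {
    "control (saline)": (586.7, 0.0),
    "high-dose lipid (HL)": (464.3, 51.7),
    "high-dose casein (HP)": (458.0, 51.7),
    "high-dose sucrose (HC)": (399.0, 51.7),
}

for name, (meal_kcal, infused_kcal) in conditions.items():
    total = meal_kcal + infused_kcal
    print(f"{name}: {total:.1f} kcal including the infusion")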
Instruction: The quality of nursing home care: do the opinions of family members change after implementation of emotion-oriented care? Abstracts: abstract_id: PUBMED:11529975 The quality of nursing home care: do the opinions of family members change after implementation of emotion-oriented care? Objective: The present study focuses on opinions on the quality of nursing home care of family members of nursing home residents with dementia. Furthermore, we examined whether family members' appreciation of the care increased as a result of the implementation of emotion-oriented care. Design: Randomized clinical trial. Instrument: An 18-item questionnaire was developed. The following subjects were addressed: communication activities between staff and family members; satisfaction regarding contacts with staff; the extent to which family members can participate in care; the contact that family members experience with the person with dementia, and opinions about the way in which nursing staff treat residents. Results: Most family members already had a positive opinion on the nursing home care prior to the implementation of emotion-oriented care. The most positive assessment concerned the way in which nursing staff treated residents. The lowest scores concerned communication activities between ward staff and family members. Comparison of the first and end measurements showed that in general opinions on the quality of care did not change. Study Limitations: A large number of incomplete questionnaires made it impossible to conduct factor analysis on the classification of the questions in various sections and therefore allowed us only to make statements at the item level. Conclusions: For the most part family members had a positive opinion on the nursing home care. In general, implementation of emotion-oriented care did not lead to a more positive assessment. Despite the generally accepted notion that involving family members in care is important, family members were regularly treated as outsiders. This demonstrates that there is room for improvement in the communication by nursing home staff with family members (e.g. more frequent contacts and information about the illness). abstract_id: PUBMED:24337328 The implementation of integrated emotion-oriented care: did it actually change the attitude, skills and time spent of trained caregivers? Background: Successful implementation is a vital precondition for investigating the outcome of care innovation. This study concerned the evaluation of the implementation of integrated emotion-oriented care (IEOC) in psychogeriatric nursing home wards. The main question was whether the trained caregivers actually applied the knowledge and techniques of IEOC during their daily work. Methods: The study was conducted within the framework of a randomized clinical trial into the effectiveness of IEOC in 16 wards. Preceding the experimental period, staff from 16 wards were educated and trained to work with a standardized care plan, resulting in a similar level of quality of care at the start of the trial. On the experimental wards IEOC was then implemented by training on the job in addition to training courses for personnel. To examine the implementation effectiveness, a self-report questionnaire, 'Emotion-oriented Skills in the Interaction with Elderly People with Dementia', was administered at baseline and after 7 months to a sample of caregivers from the experimental and the control wards. 
In addition, participant observation was conducted on four experimental and four control wards, and time spent by care personnel on different types of care tasks was registered. Results: The implementation of IEOC resulted in increased emotion-oriented skills and more knowledge of the residents among the caregivers. Providing IEOC was not more time consuming for the caregivers than providing usual care. Conclusion: This study shows that the implementation of IEOC was effective. It is recommended that in intervention studies the correct application of a new intervention or care approach is examined before jumping to conclusions about the effectiveness of the intervention or care approach itself. abstract_id: PUBMED:37089513 Satisfaction with care quality and anxiety among family members during nursing home visiting restrictions: The chain mediating effect of emotional regulation and perceived stress. Introduction: This study aimed to investigate the psychological well-being (perceived stress and anxiety) of Chinese family members during nursing home visiting restrictions and to elucidate the relationships among satisfaction with care quality, emotion regulation, perceived stress, and anxiety. Methods: An online survey was conducted with a cross-sectional study design. From 18 to 29 January 2022, a total of 571 family members of nursing home residents completed online questionnaires comprising socio-demographic characteristics, satisfaction with care quality, emotion regulation, perceived stress, and anxiety. Mediation analyses were performed to estimate the direct and indirect effects of satisfaction with care quality on anxiety using the PROCESS macro for SPSS. Results: The results showed that approximately one-quarter of Chinese family members had anxiety symptoms during nursing home visiting restrictions. Satisfaction with care quality affected anxiety via three mediating paths: (a) through cognitive reappraisal (effect = 0.028); (b) through cognitive reappraisal and perceived stress sequentially (effect = -0.057); and (c) through perceived stress (effect = -0.212). The chain mediating effect (path b) accounted for 23.7% of the total effect. Conclusions: These findings corroborated our hypothesis that cognitive reappraisal (a kind of emotion regulation strategy) and perceived stress mediated the relationship between satisfaction with care quality and anxiety during nursing home visiting restrictions. Efforts to address family members' psychological well-being by focusing on cognitive reappraisal should be considered. abstract_id: PUBMED:33684629 "From living in a hotel to living in a home": Stakeholders' views about living and working in a nursing home which is undergoing culture change. Culture change models are intended to improve the quality of life and care of nursing home residents. Using longitudinal qualitative methodology, this study evaluated the effectiveness of implementing culture change on the main stakeholders living and working in an Israeli nursing home. Eight focus groups with nursing home residents, family members and staff members (N = 57) were conducted at two time points: one year after and two years after implementing a culture change model. Using thematic content analysis and comparing the experiences of each group and across time, the results revealed that implementing culture change in a nursing home is a complex process, which requires an adaptation in the values, expectations, and perceptions of care of all those involved.
Particular attention should be paid to the unique needs of each stakeholder group, while ensuring their engagement and cooperation in the change process. abstract_id: PUBMED:33825569 Evaluation of the Implementation Process of a Family Systems Nursing Approach in Home Health Care: A Mixed-Methods Study. To support the incorporation of Family Systems Nursing (FSN) in clinical practice, more understanding is needed about the implementation of FSN in home health practice settings. Thus, the aim of this study was to evaluate nurses' perspectives about the implementation process of Family Systems Nursing Conversations (FSNCs) in home health care. A mixed-methods research design was used, integrating qualitative and quantitative data, and using triangulation as a methodological metaphor. The Quality Implementation Framework (QIF) was applied to guide the implementation process, and Proctor et al.'s taxonomy of implementation outcomes was used to evaluate the process. The findings demonstrated that FSN implementation was in progress. Overall, acceptability and appropriateness of FSNCs were evaluated as positive by home health nurses; however, some obstacles were found relating to feasibility, adoption, and fidelity. These results contribute to an increased understanding of the process and challenges of implementing FSNCs in home health care. abstract_id: PUBMED:25017391 Hospice family members' perceptions of and experiences with end-of-life care in the nursing home. Objective: Even though more than 25% of Americans die in nursing homes, end-of-life care has consistently been found to be less than adequate in this setting. Even for those residents on hospice, end-of-life care has been found to be problematic. This study had 2 research questions; (1) How do family members of hospice nursing home residents differ in their anxiety, depression, quality of life, social networks, perceptions of pain medication, and health compared with family members of community dwelling hospice patients? (2) What are family members' perceptions of and experiences with end-of-life care in the nursing home setting? Methods: This study is a secondary mixed methods analysis of interviews with family members of hospice nursing home residents and a comparative statistical analysis of standard outcome measures between family members of hospice patients in the nursing home and family members of hospice patients residing in the community. Results: Outcome measures for family members of nursing home residents were compared (n = 176) with family members of community-dwelling hospice patients (n = 267). The family members of nursing home residents reported higher quality of life; however, levels of anxiety, depression, perceptions of pain medicine, and health were similar for hospice family members in the nursing home and in the community. Lending an understanding to the stress for hospice family members of nursing home residents, concerns were found with collaboration between the nursing home and the hospice, nursing home care that did not meet family expectations, communication problems, and resident care concerns including pain management. Some family members reported positive end-of-life care experiences in the nursing home setting. Conclusion: These interviews identify a multitude of barriers to quality end-of-life care in the nursing home setting, and demonstrate that support for family members is an essential part of quality end-of-life care for residents. 
This study suggests that nursing homes should embrace the opportunity to demonstrate the value of family participation in the care-planning process. abstract_id: PUBMED:34150125 Effect of hospice care on quality of life and negative emotion of core family members of patients with advanced liver cancer. Objective: To explore the effect of hospice care on the quality of life of patients with advanced liver cancer and the negative emotion of core family members. Methods: Altogether 166 patients with advanced liver cancer were randomly divided into the study group (n=83, treated with hospice care) and the control group (n=83, treated with routine care). The quality of life, pain degree, subjective well being, anxiety, depression and other negative emotions of core family members before and after intervention and their satisfaction with nursing were compared between the two groups. Results: After intervention, the five-dimensional health scale for cancer patients (EQ-5D) scores of patients in both groups were significantly higher than those before intervention, and the scores in the study group was higher than that in the control group (P<0.05). After the intervention, the scores of hamilton anxiety scale (HAMA) and hamilton depression scale (HAMD) of the core family members of the two groups were significantly lower than those before the intervention, and those of the study group were lower than those of the control group (P<0.05). VAS scores of patients in both groups decreased after intervention, those in the study group were lower than in the control group (P<0.05). The memorial university of newfoundland scale of happiness (MUNSH) scores of patients in both groups increased, and those in the study group were higher than those in the control group (all P<0.05). The core family members in the study group were more satisfied with nursing than those in the control group (P<0.05). Conclusion: The hospice nursing can obviously relieve the pain degree of patients with advanced liver cancer, improve their subjective well-being and quality of life before they died, and can also improve the negative emotions such as anxiety and depression of core family members. abstract_id: PUBMED:21324007 Care recipients' and family members' perceptions of quality of older people care: a comparison of home-based care and nursing homes. Aims: To compare care recipients' and their relatives' perceptions of quality of care in nursing homes and home-based care. Background: Older people care is increasingly being provided in community care facilities and private homes instead of hospitals. A few studies have compared care recipients' and relatives' perceptions of care quality in nursing homes as opposed to home-based care. Design: Cross-sectional surveys. Methods: Care recipients' and relatives' perceptions of quality of care were measured by questionnaire in 2003 in two older people care organisations. Quality measures were compared between care settings. Multiple regression was used to determine the predictors of overall quality ratings for each group, and possible interactions between quality measures and care settings were tested. Results: Care recipients rated their opportunities for activities significantly lower in home care than in nursing homes. Relatives of care recipients in home-based care rated several aspects of care quality significantly lower than relatives of nursing home residents. No significant interaction effects regarding predictors of overall quality ratings between the care settings were found. 
Staff behaviour was the strongest predictor of care recipients' overall quality rating, and staff professional skills were the strongest predictor of relatives' overall quality rating. Conclusions: Compared with nursing homes, home-based older people care seems to be in greater need of development regarding staff competence, staff interaction with relatives and activities offered to older people. In both settings, nursing staff behaviour influences these stakeholders' satisfaction with care. Relevance To Clinical Practice: These results point to a need to improve services in both care settings, but especially in home-based care. It is suggested that care recipients' preferences for social and physical activities be investigated on a regular basis in both care settings. Efforts should also be made to improve communication and interaction between family members and older people care staff, especially in home-based care. abstract_id: PUBMED:33068016 Effects of family nursing conversations on families in home health care: A controlled before-and-after study. Aim: To assess the effects of family nursing conversations on family caregiver burden, patients' quality of life, family functioning and the amount of professional home health care. Design: A controlled before-and-after design. Methods: Intervention group families participated in two family nursing conversations incorporated in home health care; control group families received usual home health care. Patients and family members completed a set of questionnaires on entering the study and 6 months later to assess family caregiver burden, family functioning and patients' quality of life. The amount of home health care was extracted from patient files. Data were collected between January 2018-June 2019. Results: Data of 51 patients (mean age 80; 47% male) and 61 family members (mean age 67; 38% male) were included in the results. Family caregiver burden remained stable in the intervention group whereas it increased in the control group. Family functioning improved significantly compared with the control group for patients and family members in the intervention group. No significant effects on patients' quality of life emerged. The amount of professional home health care decreased significantly in the intervention group whereas it remained equal in the control group. Conclusion: Family nursing conversations prevented family caregiver burden, improved family functioning, but did not affect patients' quality of life. In addition, the amount of home health care decreased following the family nursing conversations. Impact: Countries with ageing populations seek to reduce professional and residential care and therefore encourage family caregiving. Intensive family caregiving, however, places families at risk for caregiver burden which may lead to increased professional care and admission into residential care. This study demonstrates that family nursing conversations help nurses to prevent family caregiver burden and improve family functioning while decreasing the amount of home health care. abstract_id: PUBMED:33872859 Nursing staff member experiences with household model implementation in a nursing home setting. This qualitative study used conventional content analysis of interview data to examine nursing staff members' experiences with Household Model implementation in a nursing home setting. Staff members were recruited from the skilled nursing center post implementation and completed individual interviews (N=16). 
Participants described the impact of implementation in relation to three categories: antecedents, explicit reactions and change consequences. Participants described barriers and facilitators to implementation as well as outcomes such as role strain, feelings of isolation and harder working conditions. This study provides opportunities for further examination of nursing staff members' experiences with person centered care models in nursing homes, how those experiences may relate to effectiveness of implementation and the need for support of nursing home staff. Answer: The opinions of family members on the quality of nursing home care did not significantly change after the implementation of emotion-oriented care. A study reported that most family members already had a positive opinion on the nursing home care prior to the implementation of emotion-oriented care, particularly appreciating the way in which nursing staff treated residents. However, the lowest scores from family members concerned communication activities between ward staff and family members. After comparing opinions before and after the implementation of emotion-oriented care, it was found that the general assessment of the quality of care did not change. Despite the generally accepted notion that involving family members in care is important, family members were often treated as outsiders, indicating room for improvement in communication by nursing home staff with family members (PUBMED:11529975). Another study on the implementation of integrated emotion-oriented care (IEOC) found that the implementation was effective in increasing emotion-oriented skills and knowledge of the residents among caregivers. However, this study did not directly address changes in family members' opinions but focused on the caregivers' application of the knowledge and techniques of IEOC during their daily work (PUBMED:24337328). In summary, while emotion-oriented care may have improved the skills and knowledge of caregivers, the available evidence suggests that it did not lead to a more positive assessment of nursing home care quality by family members (PUBMED:11529975).
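The chain mediation reported in PUBMED:37089513 (satisfaction → cognitive reappraisal → perceived stress → anxiety) rests on product-of-coefficients logic: each indirect effect is the product of the regression paths it traverses. Below is a minimal sketch of that logic, assuming Python with pandas/statsmodels and a hypothetical data frame whose columns `satisfaction`, `reappraisal`, `stress`, and `anxiety` stand in for the study's scales; the study itself used the SPSS PROCESS macro, so this illustrates the method rather than the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def chain_mediation(df: pd.DataFrame) -> dict:
    """Product-of-coefficients estimates for the two-mediator chain
    satisfaction -> reappraisal -> stress -> anxiety."""
    m1 = smf.ols("reappraisal ~ satisfaction", data=df).fit()
    m2 = smf.ols("stress ~ satisfaction + reappraisal", data=df).fit()
    m3 = smf.ols("anxiety ~ satisfaction + reappraisal + stress", data=df).fit()
    a1 = m1.params["satisfaction"]   # X -> M1
    a2 = m2.params["satisfaction"]   # X -> M2
    d21 = m2.params["reappraisal"]   # M1 -> M2
    b1 = m3.params["reappraisal"]    # M1 -> Y
    b2 = m3.params["stress"]         # M2 -> Y
    return {
        "via_reappraisal": a1 * b1,       # path (a)
        "chain": a1 * d21 * b2,           # path (b)
        "via_stress": a2 * b2,            # path (c)
        "direct": m3.params["satisfaction"],
    }

def percentile_bootstrap_ci(df: pd.DataFrame, key: str, n_boot: int = 2000, seed: int = 0):
    """Percentile bootstrap confidence interval for one indirect effect."""
    rng = np.random.default_rng(seed)
    draws = [
        chain_mediation(df.sample(len(df), replace=True, random_state=int(s)))[key]
        for s in rng.integers(0, 2**31 - 1, size=n_boot)
    ]
    return np.percentile(draws, [2.5, 97.5])
```

A percentile bootstrap over resampled data is the usual way to attach confidence intervals to the indirect effects, which is also what PROCESS does by default.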
Instruction: Is there a reverse J-shaped association between 25-hydroxyvitamin D and all-cause mortality? Abstracts: abstract_id: PUBMED:36523355 J-shaped association between serum albumin levels and long-term mortality of cardiovascular disease: Experience in National Health and Nutrition Examination Survey (2011-2014). Background: Cardiovascular disease (CVD) is a constellation of heart, brain, and peripheral vascular diseases with common soil hypothesis of etiology, and its subtypes have been well-established in terms of the albumin-mortality association. However, the association between albumin and the mortality of CVD as a whole remains poorly understood, especially the non-linear association. We aimed to investigate the association of albumin levels with long-term mortality of CVD as a whole. Materials And Methods: This study included all CVD patients who participated in the National Health and Nutrition Examination Survey (NHANES 2011-2014). CVD was defined as coronary heart disease, stroke, heart failure, or any combination of these two or three diseases. Serum albumin was tertile partitioned: tertile 1, <4.1; tertile 2, 4.1-4.3; and tertile 3, >4.3 g/dl. COX proportional hazards model was used to assess the association between the serum albumin levels and CVD mortality. Restricted cubic spline (RCS) curves were used to explore the non-linear relationship. Results: A total of 1,070 patients with CVD were included in the analysis, of which 156 deaths occurred during a median 34 months of follow-up. On a continuous scale, per 1 g/dl albumin decrease was associated with an adjusted HR (95% CI) of 3.85 (2.38-6.25). On a categorical scale, as compared with tertile 3, the multivariable adjusted hazard ratio (95% CI) was 1.42 (0.74-2.71) for the tertile 2, and 2.24 (1.20-4.16) for the tertile 1, respectively, with respect to mortality. RCS curve analysis revealed a J-shaped association between albumin and CVD mortality. Conclusion: A J-shaped association between low serum albumin levels and increased long-term mortality of CVD has been revealed. This J-shaped association's implications for CVD prevention and treatment are deserving of being further studied. abstract_id: PUBMED:22573406 A reverse J-shaped association of all-cause mortality with serum 25-hydroxyvitamin D in general practice: the CopD study. Context: Optimal levels of vitamin D have been a topic of heavy debate, and the correlation between 25-hydroxyvitamin D [25(OH)D] levels and mortality still remains to be established. Objective: The aim of the study was to determine the association between all-cause mortality and serum levels of 25(OH)D, calcium, and PTH. Design And Setting: We conducted a retrospective, observational cohort study, the CopD Study, in a single laboratory center in Copenhagen, Denmark. Participants: Serum 25(OH)D was analyzed from 247,574 subjects from the Copenhagen general practice sector. In addition, serum levels of calcium, albumin-adjusted calcium, PTH, and creatinine were measured in 111,536; 20,512; 34,996; and 189,496 of the subjects, respectively. Main Outcome Measures: Multivariate Cox regression analysis was used to compute hazard ratios for all-cause mortality. Results: During follow-up (median, 3.07 yr), 15,198 (6.1%) subjects died. A reverse J-shaped association between serum level of 25(OH)D and mortality was observed. A serum 25(OH)D level of 50-60 nmol/liter was associated with the lowest mortality risk. 
Compared to 50 nmol/liter, the hazard ratios (95% confidence intervals) of all-cause mortality at very low (10 nmol/liter) and high (140 nmol/liter) serum levels of 25(OH)D were 2.13 (2.02-2.24) and 1.42 (1.31-1.53), respectively. Similarly, both high and low levels of albumin-adjusted serum calcium and serum PTH were associated with an increased mortality, and secondary hyperparathyroidism was associated with higher mortality (P < 0.0001). Conclusion: In this study from the general practice sector, a reverse J-shaped relation between the serum level of 25(OH)D and all-cause mortality was observed, indicating not only a lower limit but also an upper limit. The lowest mortality risk was at 50-60 nmol/liter. The study did not allow inference of causality, and further studies are needed to elucidate a possible causal relationship between 25(OH)D levels, especially higher levels, and mortality. abstract_id: PUBMED:36844732 The U-shaped association of non-high-density lipoprotein cholesterol with all-cause and cardiovascular mortality in general adult population. Background: Non-high-density lipoprotein cholesterol (non-HDL-C) has been associated with atherosclerosis. However, the association between non-HDL-C and mortality in adult population remains unclear. We intended to investigate the association of non-HDL-C with cardiovascular and all-cause mortality using national representative data. Methods: The study included 32,405 participants from the National Health and Nutrition Examination Survey (1999-2014). Mortality outcomes were ascertained by linkage to National Death Index records through December 31, 2015. Multivariable-adjusted Cox regression models were used to evaluate hazard ratio (HR) and 95% confidence interval (CI) of non-HDL-C concentrations in quintiles. Two-piecewise linear regression and restricted cubic spline analyses were performed to test dose-response associations. Results: After a median follow-up of 98.40 months, 2,859 (8.82%) all-cause and 551 (1.70%) cardiovascular deaths occurred. Compared with the highest group, the multivariable-adjusted hazard ratio (HR) of the first quintile for all-cause mortality was 1.53 (95%CI, 1.35-1.74). Higher non-HDL-C above a cutoff value of 4.9 mmol/L was related with cardiovascular mortality (HR = 1.33, 95%CI, 1.13-1.57). A U-shaped relationship between non-HDL-C and all-cause mortality was found in spline analysis with a cutoff value around 4 mmol/L. Similar results in subgroup analyses were found among male, non-white population, participants who were not taking lipid-lowering drugs, and with body mass index (BMI) <25 kg/m2. Conclusion: Our findings suggest a U-shaped association between non-HDL-C and mortality among adult population. abstract_id: PUBMED:23666975 Is there a reverse J-shaped association between 25-hydroxyvitamin D and all-cause mortality? Results from the U.S. nationally representative NHANES. Context: A reverse J-shaped association between serum 25-hydroxyvitamin D (25[OH]D) concentration and all-cause mortality was suggested in a 9-year follow-up (1991-2000) analysis of the Third National Health and Nutrition Examination Survey (NHANES III, 1988-1994). Objective: Our objective was to repeat the analyses with 6 years additional follow-up to evaluate whether the association persists through 15 years of follow-up. Participants: The study included 15 099 participants aged ≥ 20 years with 3784 deaths.
Main Outcome Measure: Relative risk (RR) of death from all causes was adjusted for age, sex, race/ethnicity, and season using 2 Poisson regression approaches: traditional categorical and cubic splines. Results were given for 9 25(OH)D levels: <20, 20 to 29, 30 to 39, 40 to 49, 50 to 59, 60 to 74, 75 to 99 (reference), 100 to 119, and ≥ 120 nmol/L. Results: The reverse J-shaped association became stronger with longer follow-up and was not affected by excluding deaths within the first 3 years of follow-up. Similar results were found from both statistical approaches for levels <20 through 119 nmol/L. Adjusted RR (95% confidence interval [CI]) estimates for all levels <60 nmol/L were significantly >1 compared with the reference group. The nadir of risk was 81 nmol/L (95% CI, 73-90 nmol/L). For 25(OH)D ≥ 120 nmol/L, results (RR, 95% CI) were slightly different using traditional categorical (1.5, 1.02-2.3) and cubic splines approaches (1.2, 0.9-1.4). The association appeared in men, women, adults ages 20 to 64 years, and non-Hispanic whites but was weaker in older adults. The study was too small to evaluate the association in non-Hispanic black and Mexican-American adults. Conclusions: A reverse J-shaped association between serum 25(OH)D and all-cause mortality appears to be real. It is uncertain whether the association is causal. abstract_id: PUBMED:38143017 J-shaped association of serum uric acid concentrations with all-cause mortality in individuals with osteoarthritis: A prospective cohort study. Objective: The purpose of this study was to explore the relationship between serum uric acid (SUA) concentrations and all-cause mortality in individuals with osteoarthritis (OA). Methods: All participant data were retrieved from the National Health and Nutrition Examination Survey database. A total of 4671 participants (age range: 20 to 85 years old), including 2988 females and 1683 males, were included in this study. The determination of death outcome was based on the National Death Index (up to December 31, 2019). We explored the nonlinear relationship between SUA concentrations and all-cause mortality in OA patients by establishing a Cox proportional risk model and a two-segment Cox proportional risk model and ran an interaction test to identify the high-risk population for all-cause mortality. Results: During 30,645 person-years of follow-up, the number of all-cause deaths for females and males was 736 and 516, respectively. After multivariate adjustment, we found a nonlinear relationship between SUA concentrations and all-cause mortality in both females and males with OA. In addition, we found a J-shaped relationship between SUA concentrations and all-cause mortality. The SUA concentration thresholds for all-cause mortality of females and males were stable at 5.6mg/dl and 6.2mg/dl, respectively. Compared with SUA concentrations below the inflection point, the all-cause mortality risk at higher SUA concentrations in females and males with OA increased by 20% (hazard ratio [HR]: 1.2, 95% confidence interval [CI]: 1.1 to 1.2) and 25% (HR: 1.2, 95% CI: 1.12 to 1.39), respectively. Conclusions: There is a nonlinear relationship between SUA concentrations and all-cause mortality in the American OA population (J-shaped association). The all-cause mortality thresholds for SUA concentrations in females and males are 5.6mg/dl and 6.2mg/dl, respectively. 
abstract_id: PUBMED:36161059 Vitamin d deficiency and metabolic syndrome: The joint effect on cardiovascular and all-cause mortality in the United States adults. Background: The long-term impact of vitamin D deficiency and metabolic syndrome (MetS) on cardiovascular disease (CVD) and all-cause mortality are still a matter of debate. Aim: To test the hypotheses that lower serum 25 hydroxyvitamin D [25(OH)D] concentrations (a marker of vitamin D level) and MetS have a long-term impact on the risk of CVD and all-cause mortality, and individuals with vitamin D deficiency can be identified by multiple factors. Methods: A sample of 9094 adults, 20 to 90 years of age, who participated in the Third National Health and Nutrition Examination Survey (NHANES III, 1988 to 1994) were followed through December 2015 was analyzed. The associations of serum 25(OH)D concentrations and MetS with CVD and all-cause mortality were analyzed longitudinally using Cox regression models. Classification and regression tree (CART) for machine learning was applied to classify individuals with vitamin D deficiency. Results: Of 9094 participants, 30% had serum 25(OH)D concentrations < 20 ng/mL (defined as vitamin D deficiency), 39% had serum 25(OH)D concentrations between 20 to 29 ng/mL (insufficiency), and 31% had serum 25(OH)D concentrations ≥30 ng/mL (sufficiency). Prevalence of MetS was 28.4%. During a mean of 18 years follow-up, vitamin D deficiency and MetS were significantly associated with increased risk of CVD and all-cause mortality. Subjects with both vitamin D deficiency and MetS had the highest risk of CVD mortality (HR = 1.77, 95%CI: 1.22-2.58) and all-cause mortality (HR = 1.62, 95%CI: 1.26-2.09), followed by those with both vitamin D insufficiency and MetS for CVD mortality (HR = 1.59, 95%CI: 1.12-2.24), and all-cause mortality (HR = 1.41, 95%CI: 1.08-1.85). Meanwhile, vitamin D sufficiency significantly decreased the risk of CVD and all-cause mortality for those who even had MetS. Among the total study sample, CART analysis suggests that being non-Hispanic Black, having lower serum folate level, and being female were the first three predictors for those with serum 25(OH)D deficiency. Conclusion: Vitamin D deficiency and MetS were significantly associated with increased risk of CVD and all-cause mortality. There was a significant joint effect of vitamin D deficiency and MetS on the risk of mortality. Findings of the CART analysis may be useful to identify individuals positioned to benefit from interventions to reduce the risk of CVD and all-cause mortality. abstract_id: PUBMED:35872887 U-Shaped Relationship of Non-HDL Cholesterol With All-Cause and Cardiovascular Mortality in Men Without Statin Therapy. Background: Non-HDL-C is well established causal risk factor for the progression of atherosclerotic cardiovascular disease. However, there remains a controversial pattern of how non-HDL-C relates to all-cause and cardiovascular mortality, and the concentration of non-HDL-C where the risk of mortality is lowest is not defined. Methods: A population-based cohort study using data from the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2014. Male participants without statin therapy were divided into the six groups according to non-HDL-C levels (<100, 100-129, 130-159, 160-189, 190-219, ≥220 mg/dl). Multivariable Cox proportional hazards models were conducted with a hazard ratio (HR) and corresponding 95% confidence interval (CI). 
To further explore the relationship between non-HDL-C and mortality, Kaplan-Meier survival curves, restricted cubic spline curves, and subgroup analysis were performed. Results: Among 12,574 individuals (average age 44.29 ± 16.37 years), 1,174(9.34%) deaths during a median follow-up 98.38 months. Both low and high non-HDL-C levels were significantly associated with increased risk of all-cause and cardiovascular mortality, indicating a U-shaped association. Threshold values were detected at 144 mg/dl for all-cause mortality and 142 mg/dl for cardiovascular mortality. Below the threshold, per 30 mg/dl increase in non-HDL-C reduced a 28 and 40% increased risk of all-cause (p < 0.0001) and cardiovascular mortality (p = 0.0037), respectively. Inversely, above the threshold, per 30 mg/dl increase in non-HDL-C accelerated risk of both all-cause mortality (HR 1.11, 95% CI 1.03-1.20, p = 0.0057) and cardiovascular mortality (HR 1.30, 95% CI 1.09-1.54, p = 0.0028). Conclusions: Non-HDL-C was U-shaped related to all-cause and cardiovascular mortality among men without statin therapy. abstract_id: PUBMED:29304766 Relationship between systolic blood pressure and all-cause mortality: a prospective study in a cohort of Chinese adults. Background: The association between systolic blood pressure (SBP) and all-cause mortality in Chinese adults remains unclear. This study aimed to identify the relationship of SBP with all-cause mortality in Chinese men and women. Methods: One hundred twenty-one thousand eighty-two employees of the Kailuan Group Corporation, aged 18 or older, who participated in physical examination from 2006 to 2007 or from 2008 to 2009, were enrolled and followed up for all-cause mortality. The information used to ascertain the outcome of death during follow-up was extracted from provincial vital statistics offices, hospitalization records from the 11 hospitals, or medical records from medical insurance companies. Results: The average age was 50.06 ± 12.85 in the overall sample. Over 7 years of follow-up, 5945 participants, including 5520 men and 425 women had all-cause mortality. After multivariate adjustment, men in SBP group of <100, 120-139, 140-159, 160-179 and ≥180 mmHg had hazard ratios (HR) of 1.46 (1.14-1.86), 1.14 (1.04-1.26), 1.29 (1.16-1.44), 1.57 (1.38-1.79) and 2.07 (1.76-2.43), respectively, and displayed significantly increased risk of all-cause mortality compared to those with SBP in the range of 100-119 mmHg. Compared with the group of 100-119 mmHg, women in SBP group of 140-159, 160-179 and ≥180 mmHg had significantly greater risk with HRs of 1.44 (95% CI, 1.01-2.07), 1.63 (95% CI, 1.04-2.55) and 2.31 (95% CI, 1.27-4.20). Conclusions: Either lower (<100 mmHg) or higher (>120 mmHg) SBP was associated with an increased all-cause mortality risk and a J-shaped relationship was observed between SBP and all-cause mortality in men. Only SBP exceeding 140 mmHg was related to a higher risk in women. The relationship between SBP and all-cause mortality among Chinese adults may differ by sex. abstract_id: PUBMED:32978754 Association Between Fasting Blood Glucose and All-Cause Mortality in a Rural Chinese Population: 15-Year Follow-Up Cohort Study. Introduction: The worldwide prevalence of diabetes has been increasing for decades; diabetes can lead to serious health problems and even death, but the effects of maintaining low fasting blood glucose (FBG) remain controversial. 
The purpose of this study was to investigate the relationship between FBG levels and all-cause mortality in a long-term follow-up cohort and to find a relatively safe range of FBG levels. Methods: This study included 17,902 adults from a community-based cohort study in rural China who were prospectively followed from 2003 to 2018. Generalized estimating equations were used to evaluate the association between FBG and all-cause mortality, adjusting for pertinent covariates and auto-correlations among siblings. Results: A total of 1053 (5.9%) deaths occurred during 15 years of follow-up. There was a significant U-shaped association between all-cause mortality and FBG. Compared with the reference group (FBG of 5.6 - < 6.1 mmol/l), the risk of death among individuals with FBG levels < 5.6 mmol/l significantly increased by 38% (OR 1.34; 95% CI 1.13-1.59), while the risk of death among individuals with FBG ≥ 6.1 mmol/l or participants with a self-reported history of diabetes significantly increased by 51% (OR 1.49; 95% CI 1.20-1.85). Additionally, the U-shaped association remained steady in any stratification of risk factors. Conclusion: Our study showed a significant U-shaped relationship between FBG levels and risk of all-cause mortality in this rural Chinese population. When FBG was within the range of 5.6 - < 6.1 mmol/l, the risk of all-cause mortality was the lowest. abstract_id: PUBMED:25710567 A Reverse J-Shaped Association Between Serum 25-Hydroxyvitamin D and Cardiovascular Disease Mortality: The CopD Study. Context: Cardiovascular disease is the major cause of death in the Western world, but the association between 25-hydroxyvitamin D [25(OH)D] levels and the risk of cardiovascular disease mortality remains unclear. Objective: The objective of the study was to determine the association between cardiovascular, stroke, and acute myocardial infarct mortality and serum levels of 25(OH)D. Design: This was an observational cohort study, the Copenhagen vitamin D study, data from a single laboratory center in Copenhagen, Denmark. Follow-up was from 2004 to 2011. Setting: Serum 25(OH)D was analyzed from 247 574 subjects from the Copenhagen general practice sector. Participants: Examination of the association 25(OH)D levels and mortality from cardiovascular disease, stroke, and acute myocardial infarct was performed among 161 428 women and 86 146 men. Main Outcome Measures: A multivariate Cox regression analysis was used to compute hazard ratios for cardiovascular, stroke, and acute myocardial infarct mortality. Results: Of 247 574 subjects, a total of 16 645 subjects died in the ensuing 0-7 years. A total of 5454 died from cardiovascular disease including 1574 from stroke and 702 from acute myocardial infarct. The 25(OH)D level of 70 nmol/L was associated with the lowest cardiovascular disease mortality risk. Compared with that level, the hazard ratio for cardiovascular disease mortality was 2.0 [95% confidence interval (CI) 1.8-2.1] at the lower extreme (∼ 12.5 nmol/L) with a higher risk for men [2.5 (95% CI 2.2-2.9)] than for women [1.7 (95% CI 1.5-1.9)]. At the higher extreme (∼ 125 nmol/L), the hazard ratio of cardiovascular disease mortality was 1.3 (95% CI 1.2-1.4), with a similar risk among men and women. Results were similar for stroke and acute myocardial subgroups. 
Conclusions: In this large observational study, low and high levels of 25(OH)D were associated with cardiovascular disease, stroke, and acute myocardial mortality in a nonlinear, reverse J-shaped manner, with the highest risk at lower levels. Whether this was a causal or associational finding cannot be determined from our data. There is a need for randomized clinical trials that include information on the effects of 25(OH)D levels greater than 100 nmol/L. Answer: Yes, there is evidence of a reverse J-shaped association between 25-hydroxyvitamin D [25(OH)D] levels and all-cause mortality. This association has been observed in several studies. For instance, a study from the Copenhagen general practice sector found a reverse J-shaped relationship between serum levels of 25(OH)D and all-cause mortality, with the lowest mortality risk at serum 25(OH)D levels of 50-60 nmol/liter. Compared to a level of 50 nmol/liter, both very low (10 nmol/liter) and high (140 nmol/liter) serum levels of 25(OH)D were associated with increased hazard ratios for all-cause mortality (PUBMED:22573406). Similarly, a U.S. nationally representative NHANES study also suggested a reverse J-shaped association between serum 25(OH)D concentration and all-cause mortality, with the nadir of risk at 81 nmol/L. The association became stronger with longer follow-up and was not affected by excluding deaths within the first 3 years of follow-up (PUBMED:23666975). These findings indicate that both low and high levels of 25(OH)D are associated with increased mortality risk, suggesting that there is an optimal range for serum 25(OH)D levels to minimize the risk of all-cause mortality.
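Several abstracts in this set (PUBMED:36523355, PUBMED:36844732, PUBMED:38143017, PUBMED:25710567) probe non-linear dose-mortality shapes by entering the exposure into a Cox proportional hazards model as a restricted cubic spline and then reading off hazard ratios against a reference value. A minimal sketch of that workflow follows, assuming Python with the lifelines and patsy packages and a hypothetical data frame with columns `vitd`, `time`, and `death`; the column names, spline degrees of freedom, and the 50 nmol/L reference are illustrative assumptions, not the published analysis code.

```python
import numpy as np
import pandas as pd
from patsy import dmatrix, build_design_matrices
from lifelines import CoxPHFitter

def fit_spline_cox(df: pd.DataFrame, spline_df: int = 4):
    """Cox model with the exposure expanded as a restricted (natural) cubic spline,
    so the log-hazard is free to bend into a J- or U-shape."""
    basis = dmatrix(f"cr(vitd, df={spline_df}) - 1", df, return_type="dataframe")
    design = pd.concat(
        [df[["time", "death"]].reset_index(drop=True), basis.reset_index(drop=True)],
        axis=1,
    )
    cph = CoxPHFitter().fit(design, duration_col="time", event_col="death")
    return cph, basis.design_info

def hazard_ratio_curve(cph, design_info, grid, ref=50.0):
    """Hazard ratio at each exposure value in `grid` relative to the reference level."""
    new_basis = build_design_matrices([design_info], {"vitd": np.append(grid, ref)})[0]
    log_hazard = np.asarray(new_basis) @ cph.params_.to_numpy()
    return np.exp(log_hazard[:-1] - log_hazard[-1])  # last row is the reference
```

Plotting `hazard_ratio_curve` over a fine exposure grid is what produces the J- or U-shaped curves described in these abstracts; the nadir of the curve marks the exposure level with the lowest estimated risk.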
Instruction: Does vitamin C or its combination with vitamin E improve radial artery endothelium-dependent vasodilatation in patients awaiting coronary artery bypass surgery? Abstracts: abstract_id: PUBMED:24217301 Does vitamin C or its combination with vitamin E improve radial artery endothelium-dependent vasodilatation in patients awaiting coronary artery bypass surgery? Background: We evaluated the vasodilatory effects of two antioxidants, vitamins C (ascorbic acid) and E (α-tocopherol), on radial artery and endothelium-dependent responses in patients awaiting coronary artery bypass surgery. Methods: The study was performed in three groups. The first group took 2 g of vitamin C orally (n = 31, vitamin C group), the second group took 2 g of vitamin C with 600 mg of vitamin E orally (n = 31, vitamins C + E group), and the third group took no medication (n = 31, control group). After baseline measurements were taken of the radial artery lumen diameter, flow volume and lumen area in the non-dominant radial artery, occlusion was maintained for five minutes with a pressure cuff placed around the arm. The measurements were taken again at the time of deflating the cuff, and 60 seconds later. The measurements were repeated after medication in two of the groups and after placebo in the third group. Results: We compared values of the vitamin C group with those of the vitamins C + E group, and found that the latter were higher than those of the vitamin C group but not statistically significant. In the control group, there was no statistical difference. Conclusion: Vitamin C or its combination with vitamin E significantly enhanced endothelium-dependent vasodilatation in the radial circulation of patients with coronary artery disease. Its combination with vitamin E was superior to vitamin C administration alone for endothelial enhancement but this difference was not statistically significant. We hypothesised that vitamin C or its combination with vitamin E may be used as antioxidants for arterial graft patency in patients undergoing coronary artery surgery. abstract_id: PUBMED:14500072 Effect of ascorbic acid on endothelium-dependent vasodilatation of human arterial conduits for coronary artery bypass grafting. Objective: Techniques aimed at improving the performance of arterial conduits will maximize the clinical benefit achievable with coronary artery bypass surgery. Controlling oxidant stress could be a strategy for preventing early graft deterioration. We tested the effect of a free radical scavenger, ascorbic acid (vitamin C), on preserving the endothelium-dependent vasodilatation function in vitro of radial artery and internal thoracic artery. We also tested its effect on the amount of reactive oxygen species (ROS) generated by each graft. Methods: Radial artery (RA, n=25) and internal thoracic (ITA, n=19) segments were obtained from coronary artery bypass grafting patients. Each segment was divided into 3-4 mm vascular rings and incubated with or without ascorbic acid (10(-3) mol/l) for 1 h or 72 h. Using the organ bath technique, the endothelium-dependent vasodilatation function was tested in vitro by the addition of cumulative concentrations of acetylcholine (10(-9)-10(-5) mol/l) following vasocontraction by endothelin-1 (3 x 10(-8) mol/l). ROS were measured by using chemiluminescence technique at 1-h and after 72 h incubation with or without ascorbic acid. 
Results: There were no differences in the vasodilatation function between control and ascorbic acid group of both arteries in the 1-hour incubation experiment. However, in the 72 h incubation experiment, ascorbic acid preserved the endothelium-dependent vasodilatation function of RA compared with control group (35.8+/-2.2% vs. 25.9+/-2.1%; P=0.005), but not ITA (39+/-3.5% vs. 40.5+/-9.3%; P=0.438). After 72 h incubation, RA generated significantly more free radicals compared with 1 h (133.7+/-151.5 vs. 16.8+/-16.8 cps/mg x 100; P=0.01); however, AA has no statistically significant effect on decreasing the amount of free radicals generated by both arteries. Conclusions: In RA, ascorbic acid is able to preserve the endothelium-dependent vasodilatation function after 72 h incubation, but not after 1 h. However, the mechanism of action of AA is not completely understood. This finding could open the door for understanding the role of oxidant stress and antioxidants in preserving the endothelial function of coronary artery bypass grafts. abstract_id: PUBMED:12579102 Is vitamin C superior to diltiazem for radial artery vasodilation in patients awaiting coronary artery bypass grafting? Objectives: We aimed to measure the vasodilating effects of vitamin C on the radial arteries of healthy subjects and to assess whether vitamin C is superior in this regard to diltiazem, a commonly used vasodilator in coronary artery bypass using radial conduits. Methods: In a case-control study (study 1) oral single-dose vitamin C (2 g) was given to 15 healthy nonsmokers and 15 matched otherwise healthy smokers. In a randomized double-blind study (study 2) oral single-dose vitamin C (2 g, n = 15) and diltiazem (180 mg, n = 15) were compared in preoperative patients with coronary artery disease. We examined the dilation of the radial artery with high-resolution ultrasonography and measurement of the lumen surface and color Doppler images of the nondominant radial artery just before and 2 hours after drug administration. Results: In study 1 both smokers and nonsmokers showed a significant increase in the lumen surface at 2 hours compared with at baseline (P <.001 and P =.013, respectively). The increase was larger in smokers (median, 37.5% vs 14.3%; P =.004). In study 2 both groups showed statistically significant increases in the lumen surface at 2 hours compared with at baseline (P <.001 and P =.008 for vitamin C and diltiazem, respectively). Vitamin C achieved a larger increase than diltiazem (median, 33.3% vs 18.2%; P =.016). In multivariate modeling the increase in lumen surface was independently predicted by use of vitamin C over diltiazem (+21.2%, P =.007), diabetes mellitus (+14.5%, P =.085), increased cholesterol (+26.2%, P =.001), and smoking history (+20.8%, P =.017). Conclusions: Vitamin C is a potent acute vasodilator in both smokers and nonsmokers and is superior to diltiazem in preoperative coronary patients who need protection from vasospasm of the radial conduit. abstract_id: PUBMED:16281488 Disturbance of endothelium-dependent vasodilation and morphological changes of the radial artery in patients with coronary heart disease Aim: To elucidate whether endothelium-dependent vasodilation (EDVD) is a method of early diagnosis of endothelial dysfunction or is a sign of established morphological changes in the vascular wall. 
Material And Methods: Endothelial function was studied in 44 males with essential hypertension (EH) (mean age 46.9 +/- 0.9 years), in 52 patients with coronary heart disease (CHD) (mean age 50.0 +/- 0.9 years) after mammary-coronary and aortocoronary bypass operation with use of radial artery (RA), and in 28 healthy males (mean age 44.2 +/- 1.2 years). Intraoperative RA biopsy results for patients with ischemic heart disease after coronary artery bypass grafting with RA as an arterial graft were analysed. Results: Patients with CHD showed thicker complex intima-media (TIM) of the common carotid arteries compared to control and hypertensive patients free of CHD clinical symptoms. This is associated with lowering of EDVD in RA and, therefore, suggests systemic occurrence of endothelial dysfunction. The histological study of RA has found hyperplasia of the internal vascular intima of 1.1 +/- 0.2 degree in 97% of CHD patients. Combination of CHD with hypertension showed thicker intima, higher index of intima thickness and RA narrowing percentage vs such parameters in CHD free of hypertension. Conclusion: Impaired EDVD in the test with reactive hyperemia is due to morphological changes in the radial artery, its intima. The degree of changes in endothelial vascular reactions in RA is closely related to severity of systemic atherosclerosis. In CHD, RA has a marked intimal hyperplasia which leads to vasospasticity and may result in development of stenosis in its use as a graft for myocardial revascularization. One of the basic factors influencing the degree of intimal hyperplasia of the radial artery is severity of arterial hypertension. abstract_id: PUBMED:24409416 Efficacy of Combination Therapy of Statin and Vitamin C in Comparison with Statin in the Prevention of Post-CABG Atrial Fibrillation. Purpose: Atrial fibrillation (AF) is the most frequent arrhythmia that follows coronary artery bypass graft (CABG). Patients developing postoperative AF (POAF) have significantly higher mortality rates. The consistent prophylactic effectiveness of statins and vitamin C is well-accepted; however, no evaluation on combined therapy has been performed. We aimed at assessing the efficacy of combination therapy with statin and vitamin C in comparison with statin alone in the prevention of post CABG-AF. Methods: In a randomized double blind clinical trial, 120 candidates of CABG were recruited in Tabriz Madani Educational Center in a 15-month period of time. Patients were randomized into two groups of 60 receiving oral atorvastatin (40mg) plus oral vitamin C (2g/d operation day and 1g/d for five consecutive days) for intervention group and oral atorvastatin (40mg) for control group. Occurrence of post CABG AF was compared between the two groups. Results: There were 60 patients, 43 males and 17 females with a mean age of 61.0±11.5 (29-78) years, in the intervention group and sixty patients, 39 males and 21 females with a mean age of 60.5±11.3 (39-81) years, in the control group. The post CABG AF occurred in 6 cases (10%) in the interventional group and 15 patients (25%) in the controls (P=0.03, odds ratio=0.33, 95% confidence interval 0.12-0.93). Conclusion: Based on our findings, combination prophylaxis against post CABG AF with oral atorvastatin plus vitamin C is significantly more effective than single oral atorvastatin.
abstract_id: PUBMED:37891226 Effects of perioperative vitamin E and zinc co-supplementation on systemic inflammation and length of stay following coronary artery bypass graft surgery: a randomized controlled trial. Objectives: Coronary artery bypass graft (CABG) surgery has been reported to be associated with lower postoperative plasma antioxidant and zinc levels. We hypothesized that perioperative supplementation of vitamin E and zinc might improve short-term postoperative outcomes. Methods: In this placebo-controlled, double-blind, randomized study, patients undergoing CABG performed with cardiopulmonary bypass were recruited. The intervention group received zinc and vitamin E supplementation (1200 IU vitamin E and 120 mg elemental zinc) the day before surgery, followed by postoperative daily supplementation of 30 mg zinc and 200 IU vitamin E from the 2nd day after surgery to 3 weeks. The control group received placebos. Length of stay (LOS) in the intensive care unit and hospital, sequential organ failure assessment score on 3rd day after surgery, and plasma inflammatory markers on days 3 and 21 post-surgery were evaluated. Results: Seventy-eight patients completed the study (40 in the intervention group and 38 in the placebo group). The hospital LOS was significantly shorter (p < 0.05) in the intervention group. Postoperative changes in plasma albumin levels were not different between the two groups. The plasma zinc level was higher (p < 0.0001), but plasma C-reactive protein (p = 0.01), pentraxin 3 (p < 0.0001), interferon γ (p < 0.05), malondialdehyde (p < 0.05), and aspartate aminotransferase (p < 0.01) were lower in the intervention group compared to the placebo group. Conclusions: Perioperative vitamin E and zinc supplementation significantly reduced hospital LOS and the inflammatory response in CABG surgery patients. In these patients, the optimal combination and dose of micronutrients need further study but could include zinc and vitamin E. Clinical Trial Registry: This trial was registered at ClinicalTrials.gov website (NCT05402826). abstract_id: PUBMED:27890433 Levels of vitamin C in the blood plasma of patients treated with coronary artery bypass grafting increase significantly after surgery. One strong risk factor for coronary artery disease (CAD), which also affects the levels of vitamin C in the blood, is cigarette smoking. A supplementary effect of coronary artery bypass grafting (CABG) is smoking cessation by patients. Therefore, the aim of this study was to examine the level of vitamin C in the blood plasma one day before and one month after CABG.
abstract_id: PUBMED:7776501 Serial coronary angiographic evidence that antioxidant vitamin intake reduces progression of coronary artery atherosclerosis. Objective: To explore the association of supplementary and dietary vitamin E and C intake with the progression of coronary artery disease. Design: A subgroup analysis of the on-trial antioxidant vitamin intake database acquired in the Cholesterol Lowering Atherosclerosis Study, a randomized, placebo-controlled, serial angiographic clinical trial evaluating the risk and benefit of colestipol-niacin on coronary artery disease progression. Setting: Community- and university-based cardiac catheterization laboratories. Subjects: A total of 156 men aged 40 to 59 years with previous coronary artery bypass graft surgery. Intervention: Supplementary and dietary vitamin E and C intake (nonrandomized) in association with cholesterol-lowering diet and either colestipol-niacin or placebo (randomized). Outcome: Change per subject in the percentage of vessel diameter obstructed because of stenosis (%S) determined by quantitative coronary angiography after 2 years of randomized therapy on all lesions, mild/moderate lesions (< 50%S), and severe lesions (> or = 50%S). Results: Overall, subjects with supplementary vitamin E intake of 100 IU per day or greater demonstrated less coronary artery lesion progression than did subjects with supplementary vitamin E intake less than 100 IU per day for all lesions (P = .04) and for mild/moderate lesions (P = .01). Within the drug group, benefit of supplementary vitamin E intake was found for all lesions (P = .02) and mild/moderate lesions (P = .01). Within the placebo group, benefit of supplementary vitamin E intake was not found. No benefit was found for use of supplementary vitamin C exclusively or in conjunction with supplementary vitamin E, use of multivitamins, or increased dietary intake of vitamin E or vitamin C. Conclusions: These results indicate an association between supplementary vitamin E intake and angiographically demonstrated reduction in coronary artery lesion progression. Verification from carefully designed, randomized, serial arterial imaging end point trials is needed. abstract_id: PUBMED:14667634 Differential in vitro response of the human radial artery versus left internal thoracic artery to cerivastatin: implications to bypass grafting. Background: This study investigated acute (in vitro) and long-term (in vivo) effects of statins on the vascular function of human radial artery (RA) and left internal thoracic artery (LITA). Methods: RA and LITA specimens were divided into vascular rings, which were incubated in the absence or presence of 10(-6) mol/L Cerivastatin for 2 or 24 hours. In terms of preoperative statin treatment, four groups included: group 1 [preop statin(-)/in vitro cerivastatin(-)]; group 2 [preop(-)/in vitro(+)]; group 3 [preop(+)/in vitro(-)]; and group 4 [preop(+)/in vitro(+)]. Endothelial function was assessed with acetylcholine (10(-9) to 10(-5) mol/L) following contraction by 3 x 10(-8) mol/L endothelin-1. Results: Although endothelium-dependent vasodilatation was higher in RA (57.7% +/- 3.5%) than in LITA (46.5% +/- 3.8%, p = 0.046), there was no significant evidence that it depended on the preoperative use of statins or incubation period. In vitro incubation with cerivastatin significantly increased endothelium-dependent vasodilatation by 14.2% +/- 2.4% (p < 0.0001) independent of artery types (RA/LITA). 
There was no significant evidence that endothelium-dependent vasodilatation depended on the preoperative use of statins or incubation period. Conclusions: In vitro incubation with cerivastatin preserved endothelial function more effectively than preoperative use of statins. This could have implications to perioperative use of statins for patients undergoing coronary surgery. abstract_id: PUBMED:10917946 Midterm endothelial function and remodeling of radial artery grafts anastomosed to the aorta. Background: The purpose of this study was to elucidate the midterm endothelium-dependent vasodilatory capacity of radial artery grafts anastomosed to the aorta, as well as their morphometric evolution with the time. Methods: Five years after surgery we evaluated the response of aorta-anastomosed radial artery grafts to the endovascular infusion of acetylcholine in 11 of the first 61 patients operated on at our institution, and we compared it to the response with that of internal thoracic artery grafts. Moreover, the first 20 patients who had a perfect radial artery graft on angiography at 1 year were restudied at 5 years and subjected to a comparative analysis of the diameters of the radial artery graft and the grafted coronary arteries. Results: At midterm angiography, dilation of the 2 types of grafts was similar in response to acetylcholine administration (radial artery, from 2.61 +/- 0.39 to 2. 90 +/- 0.34 mm; internal thoracic artery, from 2.68 +/- 0.21 to 2.93 +/- 0.27 mm; P =.01 for both). The diameters of aorta-anastomosed radial artery grafts and grafted coronary arteries increased between both 1 and 5 years according to angiographic studies (radial artery grafts, from 2.08 +/- 0.45 to 2.54 +/- 0.53 mm; grafted coronary arteries, from 1.92 +/- 0.47 to 2.18 +/- 0.41 mm; P <.001 for both), but the increase was greater for the radial artery grafts (P <.001). Conclusions: Aorta-anastomosed radial artery grafts maintain an appreciable capacity for endothelium-dependent vasodilatation 5 years after implantation and undergo a progressive increase in luminal diameter with time. These observations contradict the presumed tendency for progressive fibrous intimal hyperplasia to develop in radial artery grafts. Answer: Yes, vitamin C or its combination with vitamin E significantly enhances endothelium-dependent vasodilatation in the radial circulation of patients with coronary artery disease. The combination of vitamin C with vitamin E was found to be superior to vitamin C administration alone for endothelial enhancement, although this difference was not statistically significant (PUBMED:24217301). Additionally, ascorbic acid (vitamin C) has been shown to preserve the endothelium-dependent vasodilatation function of the radial artery after 72 hours of incubation, suggesting a potential role in maintaining the function of arterial grafts used in coronary artery bypass grafting (CABG) (PUBMED:14500072). Furthermore, vitamin C has been demonstrated to be a potent acute vasodilator in both smokers and nonsmokers and is superior to diltiazem, a commonly used vasodilator, in preoperative coronary patients (PUBMED:12579102). These findings suggest that vitamin C, alone or in combination with vitamin E, may be beneficial for arterial graft patency in patients undergoing coronary artery surgery.
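As a quick arithmetic check on the randomized trial summarized above (PUBMED:24409416), the reported odds ratio and confidence interval can be reproduced from the raw event counts alone: 6 of 60 patients on atorvastatin plus vitamin C versus 15 of 60 on atorvastatin alone developed post-CABG atrial fibrillation. A short sketch follows, using a standard Woolf log-odds-ratio interval; the abstract does not state which interval method the authors actually used.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a/b = events/non-events in the intervention arm,
    c/d = events/non-events in the control arm; Woolf (log) confidence interval."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# 6 events / 54 non-events with statin + vitamin C; 15 / 45 with statin alone
print(odds_ratio_ci(6, 54, 15, 45))  # ~ (0.33, 0.12, 0.93), matching the abstract
```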
Instruction: Is diabetic retinopathy related to subclinical cardiovascular disease? Abstracts: abstract_id: PUBMED:36407461 Diabetic retinopathy predicts cardiovascular disease independently of subclinical atherosclerosis in individuals with type 2 diabetes: A prospective cohort study. Background: Diabetic retinopathy (DR) and preclinical atherosclerosis are associated with higher cardiovascular risk. However, no studies have investigated the predictive role of DR and preclinical atherosclerosis jointly on cardiovascular events in subjects with type 2 diabetes (T2D). We aimed to assess the contribution of DR and subclinical atherosclerosis on the risk of adverse cardiovascular events in subjects with T2D without previous cardiovascular disease (CVD). Methods: We included two prospective cohorts of subjects with T2D from the same geographical area. Assessment of subclinical atherosclerosis was performed by carotid ultrasound. An ophthalmologist classified DR according to standard criteria. Cardiovascular outcomes considered for analysis were the following: ischemic heart disease, stroke, heart failure, peripheral artery disease, revascularization procedures, and cardiovascular mortality. Bivariable and multivariable predictive models were performed. Results: From a total of 374 subjects with T2D 44 developed cardiovascular events during the 7.1 years of follow-up. Diabetes duration, total cholesterol, and glycated hemoglobin (HbA1c) at baseline were higher in subjects who developed cardiovascular outcomes (p < 0.001, p = 0.026, and p = 0.040, respectively). Compared with subjects without events, those developing cardiovascular events had higher prevalence of retinopathy (65.9% vs. 38.8%, p = 0.001; respectively) and more than mild retinopathy (43.2% vs. 31.8%, p = 0.002; respectively). Furthermore, all-cause mortality was higher in subjects with MACE than those without events (13.6% vs. 3.3%, p = 0.009; respectively). The multivariable analyses showed that HbA1c and the presence of DR at baseline were predictive of cardiovascular outcomes (p = 0.045 and p = 0.023, respectively). However, the burden of subclinical atherosclerosis was not (p = 0.783 and p = 0.071, respectively). Conclusion: DR is a strong predictor of cardiovascular events in T2D individuals at primary CVD prevention, even after accounting for the presence of preclinical carotid atherosclerosis. These results may help to individualize CVD prevention strategies in T2D. abstract_id: PUBMED:21168222 Is diabetic retinopathy related to subclinical cardiovascular disease? Objective: Persons with diabetic retinopathy (DR) have an increased risk of clinical cardiovascular events. This study aimed to determine whether DR is associated with a range of measures of subclinical cardiovascular disease (CVD) in persons without clinical CVD. Design: Population-based, cross-sectional epidemiologic study. Participants: Nine hundred twenty-seven persons with diabetes without clinical CVD in the Multi-Ethnic Study of Atherosclerosis. Methods: Diabetic retinopathy was ascertained from retinal photographs according to modification of the Airlie House Classification system. Vision-threatening DR (VTDR) was defined as severe nonproliferative DR, proliferative DR, or clinically significant macular edema. 
Subclinical CVD measures were assessed and defined as follows: high coronary artery calcium (CAC) score, defined as CAC score of 400 or more; low ankle-brachial index (ABI), defined as ABI of less than 0.9; high ABI, defined as ABI of 1.4 or more; high carotid intima-media thickness (IMT), defined as highest 25% of IMT; and carotid stenosis, defined as more than 25% stenosis or presence of carotid plaque. Main Outcome Measures: Associations between DR and subclinical CVD measures. Results: The prevalence of DR and VTDR in this sample was 30.0% and 7.2%, respectively, and VTDR was associated with a high CAC score (odds ratio [OR], 2.33; 95% confidence interval [CI], 1.15-4.73), low ABI (OR, 2.54; 95% CI, 1.08-5.99), and high ABI (OR, 12.6; 95% CI, 1.14-140.6) after adjusting for risk factors including hemoglobin A1c level and duration of diabetes. The association between VTDR and high CAC score remained significant after further adjustment for hypoglycemic, antihypertensive, and cholesterol-lowering medications. Diabetic retinopathy was not significantly associated with measures of carotid artery disease. Conclusions: In persons with diabetes without a history of clinical CVD, the presence of advanced-stage DR is associated with subclinical coronary artery disease. These findings emphasize the need to be careful about the use of anti-vascular endothelial growth factor for the treatment of DR. abstract_id: PUBMED:17941864 Subclinical hypothyroidism is a risk factor for nephropathy and cardiovascular diseases in Type 2 diabetic patients. Aims: The purpose of this study was to determine the relationship between subclinical hypothyroidism and prevalence of retinopathy and nephropathy, incident cardiovascular disease, and mortality in Type 2 diabetic patients without taking thyroid medication. Methods: Serum thyrotropin and free thyroxine concentrations were measured in 588 Type 2 diabetic subjects in Taipei Veterans General Hospital, Taiwan. In a cross-sectional study, we examined the prevalence of retinopathy and nephropathy. In a longitudinal study, we examined the risk of cardiovascular disease events, cardiovascular mortality and total mortality in the 4-year follow-up. Results: In the cross-sectional analysis, subclinical hypothyroidism was associated with a greater prevalence of diabetic nephropathy (odds ratio, 3.15 [95% CI, 1.48-6.69]) and did not show a higher prevalence of diabetic retinopathy (odds ratio, 1.15 [95% CI, 0.59-2.26]) compared to euthyroid diabetics. During the 44.0 +/- 7.4 months of follow-up, 51 participants had cardiovascular events. The risk of cardiovascular events was significantly increased in Type 2 diabetics with subclinical hypothyroidism after adjustment for age, sex, A1C, other standard cardiovascular risk factors and medication (hazard ratio, 2.93; 95% CI, 1.15-7.48; P = 0.024), but it became nonsignificant after additional adjustment for urinary albumin-to-creatinine ratio (hazard ratio, 2.06; 95% CI, 0.67-6.36; P = 0.211). The rates of cardiovascular-related and total mortality did not significantly differ by thyroid status. Conclusions: Type 2 diabetic patients with subclinical hypothyroidism are associated with an increased risk of nephropathy and cardiovascular events, but not with retinopathy. Our data suggest that the higher cardiovascular events in subclinical hypothyroidism with Type 2 diabetes may be mediated by nephropathy.
abstract_id: PUBMED:21129801 Diabetic retinopathy is associated with subclinical atherosclerosis in newly diagnosed type 2 diabetes mellitus. Aims: We aimed to evaluate the association between diabetic microangiopathy and subclinical atherosclerosis as a marker of cardiovascular disease (CVD) risk in patients with newly diagnosed type 2 diabetes. Methods: A total of 142 newly diagnosed type 2 diabetics who were free from CVD underwent evaluation of diabetic microangiopathy. Subclinical atherosclerosis was assessed by measuring carotid intima-media thickness (IMT), and the 10-year absolute risk of CVD was estimated using the UK Prospective Diabetes Study (UKPDS) Risk Engine. Results: Subclinical atherosclerosis was found in 27 subjects (19.0%). The rates of hypertension and diabetic retinopathy were significantly higher among patients with subclinical atherosclerosis. The UKPDS 10-year risk for CVD was significantly increased in subjects with subclinical atherosclerosis. Old age, hypertension and the presence of diabetic retinopathy showed a significant association to subclinical atherosclerosis after further adjustments for gender, body mass index, smoking status, HbA1c, HDL cholesterol, LDL cholesterol and the presence of diabetic nephropathy. Conclusions: This study shows that diabetic retinopathy is an independent risk marker for subclinical atherosclerosis in patients with newly diagnosed type 2 diabetes. We suggest that a diagnosis of diabetic retinopathy may warrant a more careful cardiovascular assessment even in the early stages of diabetes. abstract_id: PUBMED:31908800 Diabetic retinopathy as an independent predictor of subclinical cardiovascular disease: baseline results of the PRECISED study. Objective: Detection of subclinical cardiovascular disease (CVD) has significant impact on the management of type 2 diabetes. We examined whether the assessment of diabetic retinopathy (DR) is useful for identifying patients at a higher risk of having silent CVD. Research Design And Methods: Prospective case-control study comprising 200 type 2 diabetic subjects without history of clinical CVD and 60 age-matched non-diabetic subjects. The presence of subclinical CVD was examined using two parameters: (1) calcium coronary score (CACs); (2) composite of CACs >400 UA, carotid plaque ≥3 mm, carotid intima-media thickness ratio >1, or the presence of ECG changes suggestive of previous asymptomatic myocardial infarction. In addition, coronary angio-CT was performed. DR was assessed by slit-lamp biomicroscopy and retinography. Results: Type 2 diabetic subjects presented higher CACs than non-diabetic control subjects (p<0.01). Age, male gender, and the presence of DR were independently related to CACs >400 (area under the receiver operating characteristic curve (AUROC) 0.76). In addition, an inverse relationship was observed between the degree of DR and CACs <10 AU. The variables independently associated with the composite measurement of subclinical CVD were age, diabetes duration, the glomerular filtration rate, microalbuminuria, and the presence of DR (AUROC 0.71). In addition, a relationship (p<0.01) was observed between the presence and degree of DR and coronary stenosis. Conclusions: The presence and degree of DR is independently associated with subclinical CVD in type 2 diabetic patients. Our results lead us to propose a rationalized screening for coronary artery disease in type 2 diabetes based on prioritizing patients with DR, particularly those with moderate-severe degree. 
abstract_id: PUBMED:22011803 Severe forms of retinopathy predict the presence of subclinical atherosclerosis in type 1 diabetes subjects. Background: In patients with type 2 diabetes, the presence of retinopathy is associated with increased cardiovascular disease, regardless of known risk factors for vascular disease. Objective: To investigate the association of diabetic retinopathy (DR) and its grades with the presence of subclinical coronary atherosclerosis in patients with type 1 diabetes. Methods: A cross-sectional study was conducted with 150 type 1 diabetes individuals asymptomatic for coronary artery disease. They underwent clinical evaluation for microvascular complications and for the presence of coronary artery calcification (CAC). Results: Severe forms of DR (severe non-proliferative DR and proliferative DR) were associated with CAC (OR: 3.98, 95% CI 1.13-13.9, p = 0.03), regardless of known risk factors for cardiovascular disease (age, A1C, hypertension, dyslipidemia and male gender). Conclusion: Patients with severe forms of DR are at risk for the presence of coronary artery disease regardless of traditional cardiovascular risk factors. abstract_id: PUBMED:34481974 Targeting the microRNAs in exosome: A potential therapeutic strategy for alleviation of diabetes-related cardiovascular complication. Diabetes-related cardiovascular disease (CVD) is a global health issue that causes the deaths of thousands of people around the world annually. Diabetes-related CVD is still prevailing despite the progress being made in its diagnosis and treatment. Therefore, it is urgent to find therapeutic strategies to prevent it. MicroRNA (miRNA) is a single-stranded non-coding RNA involved in the process of post-transcriptional control of gene expression in eukaryotes. A large body of literature reveals that miRNAs are implicated in diabetes-related CVD. The increase of miRNAs in exosomes may promote the occurrence and development of diabetes-related cardiovascular complication. However, some other studies identify that miRNAs in exosomes are supposed to be involved in cardiac regeneration and confer a cardiac protection effect. Therefore, targeting the miRNA in exosome is regarded as a potent therapeutic measure to alleviate diabetes-related CVD. In this article, we review current knowledge about the role of exosomal miRNAs in diabetes-related cardiovascular complication, such as coronary heart disease, peripheral artery disease, stroke, diabetic cardiomyopathy, diabetic nephropathy and diabetic retinopathy. Exosomal miRNAs are considered to be central regulators of diabetes-related CVD and provide a therapeutic tool for diagnosis and treatment of diabetes-related cardiovascular complication.
Results: The adjusted risk of death was increased in patients with any DR (hazard ratio, 1.136 [95% confidence interval, 1.054;1.224] P < 0.0001), advanced DR, including severe non-proliferative and proliferative DR and diabetic macular edema (1.213 [1.097;1.340] P < 0.0001), and especially proliferative DR alone (1.381 [1.207;1.580] P < 0.0001), compared with those without DR. The impact of DR was more evident in patients without than in those with DKD or CVD. Mortality risk was increased in participants with DR alone, though much less than in those with DKD or CVD alone and particularly in those with both DR and DKD or CVD. DR grade was related to mortality in individuals without DKD or CVD, whereas it conferred no additional risk to those with albuminuric or nonalbuminuric DKD or established CVD. Conclusions: In patients with type 2 diabetes mellitus, the excess mortality risk conferred by DR is relatively small and higher in those without DKD and CVD, suggesting that it may be mediated by the concurrent presence of these complications, even at a subclinical level. abstract_id: PUBMED:37041490 Vision-related quality of life is selectively affected by comorbidities in patients with geographic atrophy. Background: The atrophic late stage of age-related macular degeneration (AMD) is termed geographic atrophy (GA), and affects visual acuity (VA) as well as quality of life (QoL). Previous studies have found that best-corrected VA (BCVA), the standard vision assessment, often underrepresents functional deficits. Therefore, the purpose of this study was to evaluate the correlation between atrophic lesion size, VA and QoL measured with the National Eye Institute Visual Function Questionnaire (VFQ-39) in a Danish population. Moreover, we wanted to evaluate the correlation between comorbidities, behavioural factors, and QoL. Methods: This was a prospective clinical study of 51 patients with GA in one or both eyes; of these, 45 patients had bilateral GA. Patients were consecutively included between April 2021 and February 2022. All patients filled in the VFQ-39 questionnaire except the subscales "ocular pain" and "peripheral vision." Lesion size was measured from fundus autofluorescence images, and BCVA was assessed by the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol. Results: We found overall low scores in each VFQ-39 subscale among patients with GA. Lesion size and VA were both significantly associated with all VFQ-39 subscale scores except for "general health." VA showed a larger effect on QoL than lesion size. Chronic obstructive pulmonary disease (COPD) was associated with a lower score in the subscale score "general health" but none of the other subscale scores were affected. Cardiovascular disease (CVD) was associated with a lower BCVA as well as lower QoL, reflected in the subscale scores "poor general vision," "near activities," and "dependency" of VFQ-39. Conclusion: Both atrophic lesion size and visual acuity affect QoL in Danish patients with GA, who report an overall poor QoL. CVD seems to have a negative effect on disease severity as well as on several VFQ-39 subscales, whereas COPD did not affect disease severity or vision-related subscales in VFQ-39.
Dyslipidemia refers to an abnormal amount of lipid in the blood, and the total cholesterol level is defined as the sum of high-density lipoprotein cholesterol, low-density lipoprotein (LDL) cholesterol, and very-LDL cholesterol concentrations. In Korea, the westernization of lifestyle habits in recent years has caused an increase in the incidence of dyslipidemia, which is an important risk factor of cardiovascular disease (CVD). Several studies have been conducted on how dyslipidemia affects not only CVD, but also chorioretinal diseases such as age-related macular degeneration (AMD) and diabetic retinopathy. Recently, a pathological model of AMD was proposed under the assumption that AMD proceeds through a mechanism similar to that of atherosclerotic CVD. However, controversy remains regarding the relationship between chorioretinal diseases and lipid levels in the blood, and the effects of lipid-lowering agents. Herein, we summarize the role of lipids in chorioretinal diseases. In addition, the effects of lipid-lowering agents on the prevention and progression of chorioretinal diseases are presented. Answer: Yes, diabetic retinopathy (DR) is related to subclinical cardiovascular disease (CVD). Studies have found associations between DR and various measures of subclinical CVD in individuals with diabetes, even in the absence of clinical CVD. For instance, one study found that vision-threatening diabetic retinopathy (VTDR) was associated with high coronary artery calcium (CAC) score, low ankle-brachial index (ABI), and high ABI, suggesting a link between advanced-stage DR and subclinical coronary artery disease (PUBMED:21168222). Another study showed that DR is an independent risk marker for subclinical atherosclerosis in patients with newly diagnosed type 2 diabetes (PUBMED:21129801). Furthermore, severe forms of DR have been associated with the presence of subclinical coronary atherosclerosis in patients with type 1 diabetes (PUBMED:22011803). In a prospective cohort study, DR was found to be a strong predictor of cardiovascular events in individuals with type 2 diabetes at primary CVD prevention, even after accounting for the presence of preclinical carotid atherosclerosis (PUBMED:36407461). Additionally, the presence and degree of DR were independently associated with subclinical CVD in type 2 diabetic patients, suggesting that DR could be used to prioritize patients for coronary artery disease screening (PUBMED:31908800). These findings indicate that DR, particularly in its more severe forms, can be an indicator of subclinical CVD, and its presence may warrant a more careful cardiovascular assessment in diabetic patients.
Instruction: Is the efficacy of hormonal therapy affected by lymph node status? Abstracts: abstract_id: PUBMED:35422568 Role of Hormonal Receptor in Predicting Sentinel Lymph Node Metastasis in Early Breast Cancer. Background: Sentinel lymph node biopsy is one of the minimally invasive techniques that can confirm the presence of metastasis of regional lymph nodes in cancer. Sentinel lymph node biopsy can be done with a lymph mapping technique using blue dye, radiotracer, or a combination of both. In developing countries, sentinel lymph node biopsy is often done with a single agent, which is the blue dye. The limitation of conducting SLNB in Indonesia is the limited availability of patent blue dye and radioisotope tracer. To overcome this, hormonal receptor status is expected to aid in predicting sentinel lymph node metastasis. Objective: The aim of this study was to investigate hormonal receptor status as a prognostic factor of sentinel lymph node metastasis in breast cancer. Methods: This study was conducted in Universitas Sumatera Utara Teaching Hospital with approval from the Ethics Committee of the respective hospital (approval number 116/KEP/USU/2020). A total of 51 patients participated in this research. Results: Statistically, the p-value was > 0.05 in all immunohistochemistry groups: ER (+) / PR (+), ER (+) / PR (-), and ER (-) / PR (+). This shows that there is no significant relationship between hormonal receptors and sentinel lymph node metastases. Conclusion: The statistical evaluation showed that there is no significant correlation between the hormonal receptor and sentinel lymph node metastasis (p>0.05), but the finding was considered clinically significant. Therefore, hormonal receptors should be considered as a predictive factor for sentinel lymph node metastasis. abstract_id: PUBMED:15232693 Sentinel lymph node biopsy in colorectal carcinoma. Lymph node status as an important prognostic factor in colon and rectal cancer is affected by the selection and number of lymph nodes examined and by the quality of histopathological assessment. The multitude of influences is accompanied by an elevated risk of quality alterations. Sentinel lymph node biopsy (SLNB) is currently under investigation for its value in improving determination of the nodal status. Worldwide, the data of 800 to 1000 patients from about 20 relatively small studies are available that focus on colon rather than rectal cancer patients. SLNB may be of clinical value for the collective of patients that are initially node-negative after H&E staining but reveal small micrometastases or isolated tumor cells in the SLN after intensified histopathological workup. If further studies confirm that these patients benefit from adjuvant therapy, the method may have an important effect on the therapy and prognosis of colon cancer patients as well. Another potential application could be the determination of the nodal status after endoscopic excision of early cancer to avoid bowel resection and lymphonodectomy. abstract_id: PUBMED:15221088 Sentinel lymph node biopsy in breast cancer. Surgical treatment of breast cancer has changed dramatically during the last few decades with the aim of reducing surgical radicality in the breast and also in the axilla. In this context sentinel node biopsy is a reliable technique to determine axillary node status of breast cancer patients and reduce morbidity associated with standard axillary lymph node dissection.
Although SNB is performed with increasing frequency, an incorrect indication and lack of experience with the surgical procedure can minimize the sensitivity and specificity of this method. In consequence, this could lead to wrong tumor staging and inappropriate adjuvant therapy. Therefore, we describe the basic requirements, indications, and pitfalls of this technique. Since many issues are controversial such as the prognostic value of micrometastases and extramammary nodes as well as the use of SNB after primary chemotherapy and in ductal carcinoma in situ, further clinical studies are necessary. abstract_id: PUBMED:31605168 Sentinel node biopsy and lymph node dissection in the era of new systemic therapies for malignant melanoma Background: Recently, adjuvant therapies with checkpoint inhibitors and BRAF/MEK inhibitors have become available for patients with malignant melanoma and microscopic nodal disease. Meanwhile the number of complete nodal dissections for a melanoma-positive sentinel node (SN) have decreased significantly. Objective: The authors discuss the significance of sentinel node biopsy (SNB) and early lymph node dissection in the era of adjuvant systemic therapy for stage III melanoma. Materials And Methods: Current publications and recommendations were evaluated. Results: Complete nodal dissection for a positive SN significantly reduces the risk of regional nodal relapse. However, neither SNB nor complete nodal dissection following a positive SN are associated with a benefit in survival. With the availability of novel adjuvant systemic treatment strategies for stage III melanoma, SNB has become an even more important part of modern staging diagnostics. Thus, detection of early dissemination of melanoma cells into the SN as well as the quantification of the tumor load are decisive for further therapy planning. Conclusion: Accurate assessment of the regional lymph node status by SNB is becoming even more important in the era of novel effective adjuvant therapies for microscopic nodal disease. Whether complete lymph node dissection is performed in patients with a positive SN needs to be assessed individually. In the case of "active nodal surveillance" instead of surgery, long-term close follow-up in specialized centers, including ultrasonographic controls, is required. abstract_id: PUBMED:26368331 The Impact of Radiation Therapy, Lymph Node Dissection, and Hormonal Therapy on Outcomes of Tissue Expander-Implant Exchange in Prosthetic Breast Reconstruction. Background: Total skin-sparing mastectomy, with preservation of the nipple-areola complex, must account for adjuvant medical and surgical treatments for cancer. The authors assessed risk factors for complications after second-stage tissue expander-implant exchange. Methods: The authors reviewed all institutional total skin-sparing mastectomy cases that had completed tissue expander-implant exchange with at least 3 months of follow-up. They developed multivariate generalized estimating equation models to obtain adjusted relative risks of radiation therapy, type of lymph node dissection, and hormonal therapy in relation to postoperative complications. Results: The authors performed 776 cases in 489 patients, with a median follow-up of 26 months (interquartile range, 10 to 48 months). 
Radiation therapy was associated with increased wound breakdown risk [relative risk (RR), 3.3; 95 percent CI, 2.0 to 5.7]; infections requiring oral antibiotics (RR, 2.2; 95 percent CI, 1.31 to 3.6), intravenous antibiotics (RR, 6.4; 95 percent CI, 3.9 to 10.7), or procedures (RR, 8.9; 95 percent CI, 4.5 to 17.5); implant exposure (RR, 3.9; 95 percent CI, 1.86 to 8.3); and implant loss (RR, 4.2; 95 percent CI, 2.4 to 7.4). Axillary lymph node dissection was associated with an increased risk of implant loss (RR, 2.0; 95 percent CI, 1.11 to 3.7) relative to sentinel lymph node biopsy. Conclusions: Axillary lymph node dissection increases the risk of implant loss compared with sentinel lymph node biopsy, independent of radiation therapy. Patients who require axillary lymph node dissection may be encouraged to undergo breast conservation or autologous reconstruction when possible. Clinical Question/level Of Evidence: Therapeutic, III. abstract_id: PUBMED:15241522 Methods of sentinel lymph node mapping Sentinel lymph node mapping is used as a diagnostic method in order to increase staging accuracy without the well-known morbidities of radical lymphadenectomy. The sentinel node is defined as the first node to receive lymphatic drainage from a primary tumor. The sentinel node concept postulates that if the first draining node is negative for metastasis, the remaining lymph nodes in the nodal basin can be spared. Thus one can predict the status of the nodal basin with high accuracy. It also shows that lymphatic spread seems to follow certain rules that do not concern the whole lymphatic basin. Skip metastases were proven to be rare in breast and gastric cancer. The main issue in sentinel lymph node mapping remains the improvement of staging accuracy for detection of lymph nodes most likely to develop metastasis as well as reducing morbidity by planning lymphadenectomies individually for each tumor and each patient. abstract_id: PUBMED:15278234 Sentinel lymph node mapping in gastric and esophageal carcinomas During the last 5 years, the concept of sentinel lymph nodes has been investigated in a variety of solid tumors. Despite the multidirectional and complex lymphatic drainage of the stomach, early gastric cancer has been shown to be a suitable model for sentinel lymph node mapping. In contrast, sentinel lymph node mapping of esophageal cancer is compromised by the anatomic location of the esophagus and its lymphatic drainage in the closed space of the mediastinum. The technique and clinical application of sentinel lymph node mapping thus differ between esophageal and gastric cancer. Reliable detection of sentinel lymph nodes in the mediastinum requires radioisotope labelling, while blue dye and radioisotope labelling are both feasible for gastric cancer. In patients with early gastric cancer, laparoscopic resection with sentinel node negative status is already under investigation in clinical trials. In esophageal cancer, sentinel node mapping is still considered an experimental technique. Preliminary data, however, indicate that it may be reliable and feasible in patients with early adenocarcinoma of the distal esophagus. abstract_id: PUBMED:9336149 The use of combined radiation therapy and hormonal therapy in the management of lymph node-positive prostate cancer. 
Purpose: To determine the rate of tumor response and patterns of relapse following combined hormonal-radiation therapy of adenocarcinoma of the prostate and to measure the survival in a group of men with tumor metastatic to pelvic lymph nodes. Methods And Materials: 66 patients with adenocarcinoma of the prostate with pathologically confirmed pelvic lymph node involvement were treated with combined radiation therapy and hormonal therapy. An additional five patients declined hormonal therapy. The patients treated with combined therapy represented a group with locally advanced disease, including 44 patients (67%) with T3 or T4 tumors and 51 patients (80%) with N2 or N3 lymph node metastases. The pelvic lymph nodes were treated to a dose of 45 Gy and the prostate was boosted to a dose of 65 to 71 Gy. Hormonal therapy began up to 2 months before radiation and continued indefinitely. Patients were allowed to select their hormonal therapy and could choose DES (2 patients), orchiectomy (21 patients), LHRH agonist (7 patients) or combined androgen blockade (34 patients). Results: Median follow-up is 49 months (range 12 to 131 months) and 21 patients have been followed for longer than 5 years. There have been 15 recurrences in the entire group, including three local recurrences in the prostate, seven patients with distant metastases, four patients with biochemical recurrences without clinical evidence of disease, and one patient where the location was unknown. Two of the PSA recurrences occurred in patients who elected to discontinue hormones after less than 3 years of therapy. The overall survival at 5 and 8 years is 94 and 84%, the clinical disease-free survival is 85 and 67%, and the biochemical disease-free survival is 78 and 47%. There was no increased toxicity of the combined modality regimen compared to the expected effects of radiation and hormonal therapy. Conclusion: Combined hormonal and radiation therapy represents an effective treatment option for patients with adenocarcinoma of the prostate metastatic to pelvic lymph nodes. Combined modality therapy appears to extend the disease-free survival and allow patients to maintain their independent function.
Results: The combined model was built in the primary cohort, and predicted lymph node metastasis (LNM+) with an area under the curve of 0.818 and a negative predictive value (NPV) of 93.7% in the validation cohort. Stratified analyses indicated that the combined model could predict LNM+ with an NPV of 100% and 87.8% in the post-therapeutic MRI T1-2 and T3-4 subgroups, respectively. Conclusion: This study reveals the potential of radiomics as a predictor of lymph node status for patients with LARC following neoadjuvant therapy, especially for those with post-therapeutic MRI T1-2 tumors. abstract_id: PUBMED:15278236 Sentinel lymph node in melanoma. The procedure of sentinel node biopsy (SNB) has emerged as an important advance, especially with respect to staging of malignant melanoma. Elective (prophylactic) lymph node dissection that had been practiced in primary melanoma with a suspected increased risk of (clinically occult) lymphatic metastasis has been replaced by SNB. Patients with proven metastatic involvement of the sentinel node (12-25%) can be specifically selected for regional lymph node dissection. Metastatic involvement of the sentinel node (SN) is a significant independent prognostic factor. The value of detecting metastasis by highly sensitive diagnostic tools such as RT-PCR is just as uncertain as is the therapeutic benefit of operative or conservative therapies in sentinel node-positive patients with respect to improving prognosis and is currently under study. Answer: The efficacy of hormonal therapy in relation to lymph node status is not directly addressed in the provided abstracts. However, some of the abstracts do touch on aspects of hormonal therapy and lymph node involvement in the context of cancer treatment. For instance, in the study of early breast cancer, hormonal receptors were investigated as a prognostic factor for sentinel lymph node metastasis, but no significant statistical correlation was found between hormonal receptors and sentinel lymph node metastasis (PUBMED:35422568). This suggests that while hormonal receptors may be clinically significant, their role in predicting lymph node metastasis is not clear-cut, and thus their impact on the efficacy of hormonal therapy in the context of lymph node status is not directly established. In the context of prostate cancer, combined radiation therapy and hormonal therapy were used to treat patients with adenocarcinoma of the prostate with pathologically confirmed pelvic lymph node involvement. The study reported effective treatment outcomes, with high overall survival and clinical disease-free survival rates, suggesting that hormonal therapy, when combined with radiation therapy, can be effective in patients with lymph node-positive prostate cancer (PUBMED:9336149). While these abstracts provide some insight into the use of hormonal therapy in cancer treatment, they do not provide a direct answer to the question of whether the efficacy of hormonal therapy is affected by lymph node status. Further research and more specific studies would be required to determine the relationship between hormonal therapy efficacy and lymph node status in cancer patients.
Instruction: Is adjunctive treatment with medication of liver-soothing-oriented method beneficial for depression after cerebrovascular accident? Abstracts: abstract_id: PUBMED:27858863 Is adjunctive treatment with medication of liver-soothing-oriented method beneficial for depression after cerebrovascular accident?: A PRISMA-compliant meta-analysis. Background: Adjunctive treatment with medication of liver-soothing-oriented method (MLSM) is one of the most commonly used approaches for subjects with depression after cerebrovascular accident (DCVA) in China. The purpose of this meta-analysis was to evaluate the outcome of MLSM treatment in subjects with DCVA using relevant published literature. Methods: The PubMed, Cochrane Library, Embase, Chinese databases of China National Knowledge Infrastructure, WanFang, Sinomed, and VIP were used to collect all publications until March 2016. Randomized controlled trials comparing treatments with and without MLSM for subjects with DCVA were included. The quality of each publication was assessed based on the recent Handbook (5.1 version) for Cochrane Reviewers. The Cochrane Collaboration's RevMan 5.3 software was applied for data analysis. Results: Thirty studies, including 2599 cases, were identified and collected. Adjunctive treatment with MLSM noticeably enhanced total effective rates (odds ratio 3.76; 95% confidence interval [CI] 2.92-4.85, I² = 0%, P = 0.96) in comparison to non-MLSM conventional pharmacotherapy. Compared to non-MLSM treatment, the changes in Hamilton Depression Scale scores with adjunctive MLSM treatment decreased and showed beneficial effects after 3 weeks (weighted mean difference [WMD] -4.83; 95% CI -6.82 to -2.83; I² = 86%, P < 0.001), 4 weeks (WMD -4.20; 95% CI -5.06 to -3.33; I² = 78%, P < 0.001), 6 weeks (WMD -3.36; 95% CI -4.05 to -2.68; I² = 54%, P = 0.02), 8 weeks (WMD -4.83; 95% CI -5.62 to -4.04; I² = 73%, P < 0.001), and 12 weeks (WMD -2.88; 95% CI -4.09 to -1.67; I² = 58%, P = 0.09). As for changes in inflammatory cytokine levels, adjunctive treatment with MLSM was associated with a significant decrease in tumor necrosis factor-α, IL-6, and interleukin-1β levels in comparison to non-MLSM treatment. Moreover, there were positive effects on score changes for the National Institutes of Health Stroke Scale, activities of daily living, Hamilton Anxiety Scale, Modified Edinburgh Scandinavian Stroke Scale, and Self-Rating Anxiety Scale. No serious adverse events were reported. Conclusion: MLSM appears to improve symptoms of depressive disorders, enhance immediate responses, and improve the quality of life in subjects with DCVA. The positive action of MLSM might be potentially connected with its immunoregulating effects. More prospective trials with strict design and larger sample sizes are warranted to clarify its effectiveness and safety. abstract_id: PUBMED:24199581 Effect of early intervention of liver-soothing and blood-activating decoction combined with acupuncture on patients with post-stroke depression. To study the effect of early intervention of liver-soothing and blood-activating decoction combined with acupuncture in improving neurological functions, depressive symptoms and life quality of patients with post-stroke depression, and to compare it with fluoxetine hydrochloride. Specifically, 63 patients with post-stroke depression were randomly divided into the traditional Chinese medicine (TCM) acupuncture group (31 cases) and the western medicine group (32 cases).
On the basis of the conventional treatment of the primary disease, the TCM acupuncture group was treated with liver-soothing and blood-activating decoction and acupuncture, while the western medicine group was treated with fluoxetine hydrochloride for four weeks. In the follow-up visit six months later, scores of HAMD, NIHSS and SS-QOL were observed. The scores of HAMD and NIHSS of both groups were significantly decreased (P < 0.01), while the scores of SS-QOL increased significantly, with a notable difference compared with that before the treatment (P < 0.01). Specifically, the improvement in the TCM acupuncture group was superior to that in the western medicine group (P < 0.05). The study suggests that the early intervention of liver-soothing and blood-activating decoction combined with acupuncture on patients with post-stroke depression has the effect of relieving depressive symptoms and improving neurological functions, thereby improving their quality of life and prognosis. abstract_id: PUBMED:34137175 Association between depression and medication adherence in stroke survivor older adults. Aim: This study aimed to investigate the association between depression score and medication adherence in stroke survivor older adults. Methods: The participants were 102 stroke survivor older adults. The outcome was medication adherence during a 6-month follow-up. The independent variable was the depression score, which was assessed by the Patient Health Questionnaire-9 (PHQ-9). The confounding factors included sociodemographic data, clinical characteristics, number of comorbidities, and number of medications. We analyzed the association between depression and medication adherence using multiple linear regression analyses. Results: The PHQ-9 score of stroke survivor older adults at the baseline was 1.11±2.03 and increased to 5.06±3.91 at the 6-month follow-up. The medication adherence mean score at the outcome was 4.15±1.83. After full adjustment, the PHQ-9 scores at baseline and 6-month follow-up were significantly associated with medication adherence (β = -.315, 95% CI = -.483 to -.086, p = 0.006 and β = -.270, 95% CI = -.238 to -.020, p = 0.021, respectively). Other variables that affect medication adherence during a 6-month follow-up included living with others and a lower number of medications. Conclusion: This study revealed that lower depression scores were associated with high medication adherence in post-stroke older adults. Additionally, living with others and a lower number of medications were associated with medication adherence. Therefore, stroke survivor older adults should be assessed for depression and given medication, and education should be used to improve medication adherence, especially for those who live alone and have polypharmacy, to prevent recurrent stroke.
Methods: The Cochrane Library (using the OVID platform), Embase, PubMed, the China National Knowledge Infrastructure (CNKI), the Wan Fang Data Knowledge Service Platform, and the China Scientific Journal Database (VIP) were searched from their inception to 18 August 2021; randomized controlled trials of herbal medicine in adults with post-stroke depression were included in this systematic review and network meta-analysis. The search was updated on 1 December 2022. To summarize the evidence, frequentist random-effects network meta-analyses were conducted. To categorize interventions, rate the certainty of the evidence, and present the findings, the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework was applied. The registration number of this study on the PROSPERO website is CRD42021273956. Findings: Of 1132 citations identified from the search, 51 randomized clinical trials, totaling 4,507 participants, met the inclusion criteria for this study. For response rate, Shugan Jieyu capsule (SJC) plus selective serotonin reuptake inhibitors (SSRI), Jie-Yu Pills plus SSRI, and Wuling capsule plus SSRI were shown to be among the most effective with moderate certainty of evidence (RR: 1·45, 95%CI: 1·23 to 1·7; RR: 1·35, 95%CI: 1·09 to 1·68; RR: 1·32, 95%CI: 1·09 to 1·59). In terms of mean changes in Hamilton depression scale (HAMD) score after the completion of treatment, Wuling capsule plus Hypericum and Wuling capsule plus SSRI were found to be among the most effective in reducing symptoms of depression with moderate certainty of evidence (MD: -10·12, 95%CI: -17·25 to -2·99; MD: -3·81, 95%CI: -6·19 to -1·42). The network meta-analysis (NMA) showed that SJC may be a safer intervention than SSRI in terms of both total gastrointestinal and total nervous system events with moderate certainty of evidence (RR: 0.34, 95%CI: 0.18, 0.62 and RR: 0.11, 95%CI: 0.03, 0.35, respectively). Interpretation: SJC plus SSRI, Jie-Yu Pills plus SSRI, and Wuling capsule plus SSRI were among the most effective in terms of HAMD score reduction response rates. Low to very low certainty of evidence revealed no increased risk of gastrointestinal and nervous system events. Systematic Review Registration: https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=273956; Identifier: CRD42021273956.
Background/objectives: This study examined whether beliefs about medicines, drug attitudes, and depression independently predicted anticoagulant and antiarrhythmic adherence (focusing on the implementation phase of nonadherence) in patients with atrial fibrillation (AF). Methods: This cross-sectional study was part of a larger longitudinal study. Patients with AF (N = 118) completed the Patient Health Questionnaire-8. The Beliefs about Medicines Questionnaire, Drug Attitude Inventory, and Morisky-Green-Levine Medication Adherence Scale (self-report adherence measure), related to anticoagulants and antiarrhythmics, were also completed. Correlation and multiple logistic regression analyses were conducted. Results: There were no significant differences in nonadherence to anticoagulants or antiarrhythmics. Greater concerns (r = 0.23, P = .01) were significantly, positively associated with anticoagulant nonadherence only. Depression and drug attitudes were not significantly associated with anticoagulant/antiarrhythmic adherence. Predictors reliably distinguished adherers and nonadherers to anticoagulant medication in the regression model, explaining 14% of the variance, but only concern beliefs (odds ratio, 1.20) made a significant independent contribution to prediction (χ² = 11.40, P = .02, with df = 4). When entered independently into a regression model, concerns (odds ratio, 1.24) significantly explained 10.3% of the variance (χ² = 7.97, P = .01, with df = 1). Regressions were not significant for antiarrhythmic medication (P = .30). Conclusions: Specifying medication type is important when examining nonadherence in chronic conditions. Concerns about anticoagulants, rather than depression, were significantly associated with nonadherence to anticoagulants but not antiarrhythmics. Anticoagulant concerns should be targeted at AF clinics, with an aim to reduce nonadherence and potentially modifiable adverse outcomes such as stroke. abstract_id: PUBMED:23967632 Post-stroke depression treated with acupuncture and moxibustion: an evaluation of therapeutic effect and safety. Objective: To evaluate the efficacy and safety of acupuncture and moxibustion in the treatment of post-stroke depression (PSD) using the prescription of regulating the liver and strengthening the root. Methods: In accordance with the randomized controlled trial principle, 123 cases were randomized into an acupuncture-moxibustion group (42 cases), a medication group (40 cases) and an acupuncture + medication group (41 cases). In the acupuncture-moxibustion group, the therapy of regulating the liver and strengthening the root was applied. Acupuncture was given at Hegu (LI 4), Taichong (LR 3), Baihui (GV 20), Yintang (GV 29), etc. The granule moxibustion was applied at Zhongwan (CV 12), Xiawan (CV 10), Guanyuan (CV 4) and Qihai (CV 6). The treatment was given once a day and stopped on weekends. In the medication group, paroxetine hydrochloride tablets were prescribed for oral administration, 20 mg each day after breakfast. In the acupuncture + medication group, the therapies of both the acupuncture-moxibustion group and the medication group were adopted. In the 2nd and 4th week of treatment, HAMD was used as the primary index and Barthel index as the secondary one for the efficacy assessment in the three groups respectively. The spleen and stomach syndrome scale of TCM was used for the assessment of TCM syndrome efficacy. Treatment emergent symptom scale (TESS) was applied for the safety assessment.
Results: In the 4th week of treatment, the markedly effective rates were 69.0% (29/42), 65.0% (26/40) and 70.7% (29/41) in the acupuncture-moxibustion group, the medication group and the acupuncture + medication group, respectively. The antidepressant efficacy was similar among the groups (P > 0.05). In the 2nd week of treatment, the improvement of HAMD score was significant in the acupuncture + medication group as compared with that before treatment and in comparison with the medication group (P < 0.05, P < 0.01). As for improving Barthel index, reducing the spleen and stomach symptom score of TCM and decreasing the score of TESS, the results in the acupuncture-moxibustion group and the acupuncture + medication group were superior to those in the medication group (P < 0.05, P < 0.01). Conclusion: The therapy of acupuncture and moxibustion with regulating the liver and strengthening the root achieves definite antidepressant efficacy and presents high safety in treatment. abstract_id: PUBMED:24926198 Two cases of emotional disorder after middle cerebral artery infarction showing distinct responses to antidepressant treatment. Many emotional disturbances such as post-stroke depression (PSD) and emotional incontinence (EI) commonly occur following cerebrovascular events. The efficacy of antidepressants for these conditions has been established but their comorbid treatment has not been well characterized. In the current study, the authors describe two cases of post-stroke emotional dysregulation: one case with EI, and the other with EI complicated by PSD. The authors describe their differential responses to treatment. Case 1 developed EI after an infarct due to occlusion of the penetrating branches of the left middle cerebral artery (MCA). Case 2 developed both PSD and EI after right MCA stem occlusion. Both patients were initially treated with the selective serotonin reuptake inhibitor (SSRI) paroxetine. Case 1 reacted promptly to SSRI treatment. However, Case 2 had only a partial response to paroxetine, even after many months of treatment. Adjunctive therapy with low-dose aripiprazole was eventually added, resulting in complete improvement of both EI and PSD after 2 additional months of treatment. Thus, Case 2 required a different treatment strategy than Case 1. These findings suggest that aripiprazole adjunctive therapy could be effective for some complex post-stroke emotional disorders. abstract_id: PUBMED:36397217 Acupuncture combined with repetitive transcranial magnetic stimulation for post-stroke depression: a randomized controlled trial. Objective: To observe the curative effect of Shugan Tiaoshen (soothing liver and regulating mind) acupuncture combined with repetitive transcranial magnetic stimulation (rTMS) in the treatment of post-stroke depression (PSD), and to explore its mechanism. Methods: Ninety patients with PSD were randomly divided into an acupuncture+rTMS combination group (30 cases), an rTMS combination group (30 cases, 1 case dropped off) and a western medication group (30 cases, 1 case dropped off).
The western medication group was treated with escitalopram oxalate tablets, 10 mg orally each time, once a day; on the basis of the treatment in the western medication group, the rTMS combination group was additionally given rTMS, and the frequency was 20 Hz, 20 min each time, once a day, 5 times a week; on the basis of the treatment in the rTMS combination group, the acupuncture+rTMS combination group was additionally treated with Shugan Tiaoshen acupuncture at Baihui (GV 20), Sishencong (EX-HN 1), Yintang (GV 24+), Shenting (GV 24), etc. for 40 min each time, once a day, 5 times a week, and each group was treated for 4 weeks. Before and after treatment, the scores of Hamilton depression scale-17 (HAMD-17), Montreal cognitive assessment scale (MoCA), and Pittsburgh sleep quality index (PSQI) were observed, and serum levels of 5-hydroxytryptamine (5-HT) and brain-derived neurotrophic factor (BDNF) were detected in each group. Results: After treatment, the HAMD-17 scores in the three groups were lower than those before treatment (P<0.01), and the PSQI scores in the acupuncture+rTMS combination group and the rTMS combination group were lower than those before treatment (P<0.01); the above indexes in the acupuncture+rTMS combination group and the rTMS combination group were lower than those in the western medication group (P<0.05), and the PSQI score in the acupuncture+rTMS combination group was lower than that in the rTMS combination group (P<0.05). After treatment, the MoCA scores and serum BDNF levels in the acupuncture+rTMS combination group and the rTMS combination group were higher than those before treatment (P<0.01), and the serum 5-HT levels in the three groups were higher than those before treatment (P<0.01); the above indexes in the acupuncture+rTMS combination group and the rTMS combination group were higher than those in the western medication group (P<0.05), and those in the acupuncture+rTMS combination group were higher than those in the rTMS combination group (P<0.05). Conclusion: On the basis of western medication escitalopram oxalate, the addition of Shugan Tiaoshen acupuncture combined with rTMS therapy can effectively improve cognitive function and sleep quality in patients with PSD, and the effect is better than that of western medication alone or rTMS combined with western medication. Its mechanism of action may be related to the increase of peripheral serum 5-HT and BDNF levels. abstract_id: PUBMED:28110588 Neuroprotective effects of adjunctive treatments for acute stroke thrombolysis: a review of clinical evidence. The narrow therapeutic time window and risk of intracranial hemorrhage largely restrict the clinical application of thrombolysis in acute ischemic stroke. Adjunctive treatments added to rt-PA may be beneficial to improve the capacity of neural cells to withstand ischemia, and to reduce the hemorrhage risk as well. This review aims to evaluate the neuroprotective effects of adjunctive treatments in combination with thrombolytic therapy for acute ischemic stroke. Relevant studies were searched in the PubMed, Web of Science and EMBASE databases. In this review, we first interpret the potential role of adjunctive treatments to thrombolytic therapy in acute ischemic stroke.
Furthermore, we summarize the current clinical evidence for the combination of intravenous recombinant tissue plasminogen activator and various adjunctive therapies in acute ischemic stroke, either pharmacological or non-pharmacological therapy, and discuss the mechanisms of some promising treatments, including uric acid, fingolimod, minocycline, remote ischemic conditioning, hypothermia and transcranial laser therapy. Even though fingolimod, minocycline, hypothermia and remote ischemic conditioning have yielded promising results, they still need to be rigorously investigated in further clinical trials. Further trials should also focus on neuroprotective approach with pleiotropic effects or combined agents with multiple protective mechanisms. Answer: Yes, adjunctive treatment with medication of liver-soothing-oriented method (MLSM) appears to be beneficial for depression after cerebrovascular accident (DCVA). A meta-analysis evaluating the outcome of MLSM treatment in subjects with DCVA found that adjunctive treatment with MLSM significantly enhanced total effective rates in comparison to non-MLSM conventional pharmacotherapy. The study also reported beneficial effects on the Hamilton Depression Scale after 3, 4, 6, 8, and 12 weeks of treatment, as well as significant decreases in inflammatory cytokine levels (tumor necrosis factor-α, IL-6, and interleukin-1β) compared to non-MLSM treatment. Additionally, positive effects were observed on scores for various scales including the National Institute of Health Stroke Scale, activities of daily living, Hamilton Anxiety Scale, Modified Edinburgh Scandinavian Stroke Scale, and Self-Rating Anxiety Scale. No serious adverse events were reported, suggesting that MLSM could improve symptoms of depressive disorders, enhance immediate responses, and improve the quality of life in subjects with DCVA (PUBMED:27858863). Furthermore, another study on early intervention of liver-soothing and blood-activating decoction combined with acupuncture in patients with post-stroke depression showed that this approach was effective in relieving depression symptoms and improving neurological functions, thereby improving their quality of life and prognosis (PUBMED:24199581). In summary, the evidence suggests that MLSM and related traditional Chinese medicine approaches can be beneficial adjunctive treatments for managing depression following a cerebrovascular accident. However, the meta-analysis also recommends more prospective trials with strict design and larger sample sizes to clarify the effectiveness and safety of MLSM (PUBMED:27858863).
Instruction: Intensive nutritional counselling in bulimia nervosa: a role for supplementation with fluoxetine? Abstracts: abstract_id: PUBMED:9272261 Intensive nutritional counselling in bulimia nervosa: a role for supplementation with fluoxetine? Objective: The aims of the paper are to determine whether nutritional counselling is associated with an improvement in bulimic symptomatology, whether this improvement is maintained during post-treatment follow-up, and whether the addition of fluoxetine 3 x 20 mg/day confers additional benefit. Method: Psychological, pharmacological and combined psychopharmacological treatments of bulimia nervosa were reviewed briefly. Sixty-seven patients referred to specialist eating disorder services who fulfilled strict diagnostic criteria were treated with intensive nutritional counselling and randomly assigned to either fluoxetine 3 x 20 mg/day or placebo. After a 1-week 'wash-out', active treatment was given over 8 weeks, followed by post-treatment interviews at 12 and 20 weeks. Results: Both groups of patients improved significantly during treatment. In some respects, the fluoxetine group did slightly better as demonstrated by the items 'restraint', 'weight concern' and 'shape concern' (p < 0.05 vs p < 0.0001) on the Eating Disorder Examination (EDE). Fluoxetine patients decreased their energy intake and lost a modest amount of weight. They went on to regain weight during the follow-up period, returning to levels higher than they were initially. These patients also appeared more likely to have a recurrence of symptoms, as shown by the fall in percentage of binge-free patients and by changes in the EDE. Conclusion: Nutritional counselling is an effective means of treating bulimia nervosa, with improvement maintained up to 3 months follow-up. The addition of fluoxetine may confer some benefit during active treatment, but its discontinuation may contribute to a higher rate of recurrence of symptoms post treatment. Of course, this study cannot be extrapolated to the efficacy of fluoxetine when used as the only form of treatment in patients for whom intensive nutritional counselling or other structured psychological programs are not available. abstract_id: PUBMED:2011615 Fluoxetine versus placebo: a double-blind study with bulimic inpatients undergoing intensive psychotherapy. In a double-blind trial 40 patients with bulimia nervosa according to DSM III-R criteria were randomly assigned either to a 60 mg fluoxetine group or to a placebo control group. Fluoxetine or placebo was given over a period of 35 days. Parallel to the drug trial, patients participated in an intensive inpatient behavioral psychotherapy program. There were no dropouts at all in the study. Fluoxetine was well tolerated and had only minor adverse effects. In self-ratings and expert ratings concerning attitudes towards eating, eating behavior, and general psychopathology, significant improvements over time were observed in both groups. Using analysis of variance (ANOVA), however, there were no statistically significant "group by time" differences. Results show that the intensive inpatient-care and psychotherapy program was highly effective in changing eating behavior and attitudes as well as general psychopathology. Fluoxetine showed a significant reduction in body weight, especially during the first three weeks of fluoxetine treatment. 
It was not possible to demonstrate a statistically significant improvement in eating attitudes, eating behavior, and general psychopathology beyond that elicited by intensive inpatient psychotherapy and general inpatient care. These results can possibly be explained by the existence of a "ceiling effect". abstract_id: PUBMED:9807638 Serotonin neuronal function and selective serotonin reuptake inhibitor treatment in anorexia and bulimia nervosa. Anorexia nervosa (AN) and bulimia nervosa (BN) are disorders characterized by aberrant patterns of feeding behavior and weight regulation, and disturbances in attitudes toward weight and shape and the perception of body shape. Emerging data support the possibility that substantial biologic and genetic vulnerabilities contribute to the pathogenesis of AN and BN. Multiple neuroendocrine and neurotransmitter abnormalities have been documented in AN and BN, but for the most part, these disturbances are state-related and tend to normalize after symptom remission and weight restoration; however, elevated concentrations of 5-hydroxyindoleacetic acid in the cerebrospinal fluid after recovery suggest that altered serotonin activity in AN and BN is a trait-related characteristic. Elevated serotonin activity is consistent with behaviors found after recovery from AN and BN, such as obsessionality with symmetry and exactness, harm avoidance, perfectionism, and behavioral over control. In BN, serotonergic modulating antidepressant medications suppress symptoms independently of their antidepressant effects. Selective serotonin reuptake inhibitors (SSRIs) are not useful when AN subjects are malnourished and under-weight; however, when given after weight restoration, fluoxetine may significantly reduce the extremely high rate of relapse normally seen in AN. Nonresponse to SSRI medication in ill AN subjects could be a consequence of an inadequate supply of nutrients, which are essential to normal serotonin synthesis and function. These data raise the possibility that a disturbance of serotonin activity may create a vulnerability for the expression of a cluster of symptoms that are common to both AN and BN and that nutritional factors may affect SSRI response in depression, obsessive-compulsive disorder, or other conditions characterized by disturbances in serotonergic pathways. abstract_id: PUBMED:21961502 World Federation of Societies of Biological Psychiatry (WFSBP) guidelines for the pharmacological treatment of eating disorders. Objectives: The treatment of eating disorders is a complex process that relies not only on the use of psychotropic drugs but should include also nutritional counselling, psychotherapy and the treatment of the medical complications, where they are present. In this review recommendations for the pharmacological treatment of eating disorders (anorexia nervosa (AN), bulimia nervosa (BN), binge eating disorder (BED)) are presented, based on the available literature. Methods: The guidelines for the pharmacological treatment of eating disorders are based on studies published between 1977 and 2010. A search of the literature included: anorexia nervosa bulimia nervosa, eating disorder and binge eating disorder. 
Many compounds have been studied in the therapy of eating disorders (AN: antidepressants (TCA, SSRIs), antipsychotics, antihistaminics, prokinetic agents, zinc, Lithium, naltrexone, human growth hormone, cannabis, clonidine and tube feeding; BN: antidepressants (TCA, SSRIs, RIMA, NRI, other AD), antiepileptics, odansetron, d-fenfluramine Lithium, naltrexone, methylphenidate and light therapy; BED: antidepressants (TCA, SSRIs, SNRIs, NRI), antiepileptics, baclofen, orlistat, d-fenfluramine, naltrexone). Results: In AN 20 randomized controlled trials (RCT) could be identified. For zinc supplementation there is a grade B evidence for AN. For olanzapine there is a category grade B evidence for weight gain. For the other atypical antipsychotics there is grade C evidence. In BN 36 RCT could be identified. For tricyclic antidepressants a grade A evidence exists with a moderate-risk-benefit ratio. For fluoxetine a category grade A evidence exists with a good risk-benefit ratio. For topiramate a grade 2 recommendation can be made. In BED 26 RCT could be identified. For the SSRI sertraline and the antiepileptic topiramate a grade A evidence exists, with different recommendation grades. Conclusions: Additional research is needed for the improvement of the treatment of eating disorders. Especially for anorexia nervosa there is a need for further pharmacological treatment strategies. abstract_id: PUBMED:12510994 Eating disorders in females: genetics, pathophysiology, and treatment. The eating disorders anorexia nervosa and bulimia nervosa are best conceptualized as syndromes and are classified on the basis of the clusters of symptoms they present. According to the multidimensional model, eating disorders begin with dieting, which is propelled into a full-blown disorder by antecedent conditions of biological vulnerability and genetics, premorbid psychological characteristics, family interactions, and social climate. The medical abnormalities present in individuals with eating disorders are due to starvation conditions and purging behaviors and will resolve with nutritional rehabilitation and the cessation of purging. Comorbid psychiatric conditions such as affective disorders, anxiety disorders, substance abuse, and personality disorders are frequently present. For anorexia nervosa, the most effective strategy is multidimensional treatment, consisting of nutritional rehabilitation, medical attention, individual cognitive psychotherapy, and family counseling or therapy if the patient is younger than age 18 years. For bulimia nervosa, the treatment of choice is cognitive-behavioral therapy with directions in a manual for therapists. A second choice for treatment is an antidepressant, beginning with fluoxetine. abstract_id: PUBMED:31046927 Pharmacologic Treatment of Eating Disorders. Medications are a useful adjunct to nutritional and psychotherapeutic treatments for eating disorders. Antidepressants are commonly used to treat bulimia nervosa; high-dose fluoxetine is a standard approach, but many other antidepressants can be used. Binge eating disorder can be treated with antidepressants, with medications that diminish appetite, or with lisdexamfetamine. Anorexia nervosa does not generally respond to medications, although recent evidence supports modest weight restoration benefits from olanzapine. abstract_id: PUBMED:7704702 Anorexia and bulimia nervosa: psychiatric approach. N/A abstract_id: PUBMED:20838508 Management of anorexia and bulimia nervosa: An evidence-based review. 
Anorexia nervosa and bulimia nervosa are primarily psychiatric disorders characterized by severe disturbances of eating behavior. Eating disorders are most prevalent in the Western culture where food is in abundance and female attractiveness is equated with thinness. Eating disorders are rare in countries like India. Despite a plethora of management options available to the mental health professionals, no major breakthrough has been achieved in recent years. Nutritional rehabilitation along with some form of re educative psychotherapy remains the mainstay of management of anorexia nervosa. In bulimia nervosa, both fluoxetine and cognitive behavior therapy have been found to be effective. Although the above-mentioned management options have been in use for decades, the active ingredient is still to be ascertained. abstract_id: PUBMED:36878454 Through the looking glass: Skin signs that help diagnose eating disorders. Eating disorders (EDs) such as anorexia nervosa and bulimia nervosa have a plethora of skin manifestations. They can be categorized into skin signs of self-purging, skin signs of starvation, skin signs related to drug abuse, skin signs of psychiatric comorbidity, and miscellaneous signs. "Guiding signs" are valuable because they are pointers to the diagnosis of an ED. They include hypertrichosis (lanugo-like hair), Russell's sign (knuckle calluses), self-induced dermatitis, and perimylolysis (tooth enamel erosion). Practitioners should recognize such skin signs promptly, as this helps to diagnose early, which may improve the prognosis of ED. Management requires a multidisciplinary approach that focuses on psychotherapy combined with attention to medical complications, nutritional needs, and nonpsychiatric findings such as cutaneous manifestations. Psychotropic medications that are currently used in EDs include pimozide and atypical antipsychotic agents, such as aripiprazole and olanzapine, fluoxetine, and lisdexamfetamine. abstract_id: PUBMED:28547702 Emerging Treatments in Eating Disorders. Eating disorders (EDs), including anorexia nervosa, bulimia nervosa, and binge-eating disorder, constitute a class of common and deadly psychiatric disorders. While numerous studies in humans highlight the important role of neurobiological alterations in the development of ED-related behaviors, the precise neural substrate that mediates this risk is unknown. Historically, pharmacological interventions have played a limited role in the treatment of eating disorders, typically providing symptomatic relief of comorbid psychiatric issues, like depression and anxiety, in support of the standard nutritional and psychological treatments. To date there are no Food and Drug Administration-approved medications or procedures for anorexia nervosa, and only one Food and Drug Administration-approved medication each for bulimia nervosa (fluoxetine) and binge-eating disorder (lisdexamfetamine). While there is little primary interest in drug development for eating disorders, postmarket monitoring of medications and procedures approved for other indications has identified several novel treatment options for patients with eating disorders. In this review, I utilize searches of the PubMed and ClinicalTrials.gov databases to highlight emerging treatments in eating disorders. Answer: Intensive nutritional counseling has been recognized as an effective means of treating bulimia nervosa, with improvements maintained up to 3 months of follow-up. 
The addition of fluoxetine, a selective serotonin reuptake inhibitor (SSRI), may confer some benefit during active treatment. In a study where patients with bulimia nervosa were treated with intensive nutritional counseling and randomly assigned to either fluoxetine 3 x 20 mg/day or placebo, both groups improved significantly. However, the fluoxetine group showed slightly better outcomes in terms of 'restraint', 'weight concern', and 'shape concern' on the Eating Disorder Examination (EDE). Despite these benefits, discontinuation of fluoxetine may contribute to a higher rate of recurrence of symptoms post-treatment (PUBMED:9272261). Another study involving a double-blind trial with bulimic inpatients undergoing intensive psychotherapy found that while fluoxetine was well tolerated and led to a significant reduction in body weight, especially during the first three weeks of treatment, it did not demonstrate a statistically significant improvement in eating attitudes, eating behavior, and general psychopathology beyond that elicited by intensive inpatient psychotherapy and general inpatient care (PUBMED:2011615). The World Federation of Societies of Biological Psychiatry (WFSBP) guidelines for the pharmacological treatment of eating disorders indicate that for bulimia nervosa, fluoxetine has a category grade A evidence with a good risk-benefit ratio (PUBMED:21961502). Additionally, pharmacologic treatment of eating disorders suggests that antidepressants, particularly high-dose fluoxetine, are commonly used to treat bulimia nervosa and can be a useful adjunct to nutritional and psychotherapeutic treatments (PUBMED:31046927). In conclusion, intensive nutritional counseling is a cornerstone in the treatment of bulimia nervosa, and supplementation with fluoxetine may offer additional benefits during treatment. However, the potential for symptom recurrence after discontinuation of fluoxetine warrants consideration, and the role of fluoxetine should be evaluated in the context of a comprehensive treatment plan that includes psychotherapy and medical management.
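The trials summarized in this record report outcomes such as the percentage of binge-free patients in the fluoxetine and placebo arms. As a hedged sketch of how a difference in such proportions is typically tested, the Python snippet below runs a two-proportion z-test on hypothetical counts; the numbers are illustrative and are not taken from the cited studies.

import math

# Hypothetical end-of-treatment counts (illustrative only)
binge_free_fluox, n_fluox = 20, 34   # nutritional counselling + fluoxetine
binge_free_plac, n_plac = 17, 33     # nutritional counselling + placebo

p1, p0 = binge_free_fluox / n_fluox, binge_free_plac / n_plac
pooled = (binge_free_fluox + binge_free_plac) / (n_fluox + n_plac)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_fluox + 1 / n_plac))
z = (p1 - p0) / se
# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"binge-free: {p1:.0%} vs {p0:.0%}; z = {z:.2f}, p = {p_value:.2f}")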
Instruction: Distribution of spinal fractures in children: does age, mechanism of injury, or gender play a significant role? Abstracts: abstract_id: PUBMED:14504843 Distribution of spinal fractures in children: does age, mechanism of injury, or gender play a significant role? Background: The distribution of fractures in the spine reported in the literature is quite variable. Application of such data to the pediatric population needs to take into account differences between children and adults, including overall decreased fracture frequency in children, developmental and physiological differences, and mechanism of injury. Knowledge of specific regions of injury may alter search patterns and protocols. Objective: To determine if the distribution of spinal injuries in pediatric patients is related to age, mechanism of injury, or gender. Patients And Methods: All pediatric patients (<18 years old) referred to our trauma service over a 5-year period were retrospectively reviewed. All patients with vertebral fracture and/or neurological injury were included. The levels of the spinal fractures were tabulated. Correlation was then made with age, gender, and mechanism of injury (motor vehicle accident versus non-motor vehicle accident). Results: Of the 2614 pediatric patients, 84 sustained vertebral fracture and 50 had neurological injury without radiographic abnormality. A total of 164 fractures were identified. The thoracic region (T2-T10) was most commonly injured, accounting for 47 fractures (28.7%) followed by the lumbar region (L2-L5) with 38 fractures (23.2%), the mid-cervical region with 31 fractures (18.9%), the thoracolumbar junction with 24 fractures (14.6%), the cervicothoracic junction with 13 fractures (7.9%), and the cervicocranium with 11 fractures (6.7%). There was no relationship to gender or mechanism of injury. Conclusions: Regardless of gender or mechanism of injury, the thoracic spine (T2-T10) is the most common region of fracture in pediatric trauma patients. abstract_id: PUBMED:27853664 Return to Play after Cervical Spine Injuries: A Consensus of Opinion. Study Design Survey. Objective Sports-related spinal cord injury (SCI) represents a growing proportion of total SCIs but lacks evidence or guidelines to guide clinical decision-making on return to play (RTP). Our objective is to offer the treating physician a consensus analysis of expert opinion regarding RTP that can be incorporated with the unique factors of a case for clinical decision-making. Methods Ten common clinical scenarios involving neurapraxia and stenosis, atlantoaxial injury, subaxial injury, and general cervical spine injury were presented to 25 spine surgeons from level 1 trauma centers for whom spine trauma is a significant component of their practice. We evaluated responses to questions about patient RTP, level of contact, imaging required for a clinical decision, and time to return for each scenario. The chi-square test was used for statistical analysis, with p < 0.05 considered significant. Results Evaluation of the surgeons' responses to these cases showed significant consensus regarding return to high-contact sports in cases of cervical cord neurapraxia without symptoms or stenosis, surgically repaired herniated disks, and nonoperatively healed C1 ring or C2 hangman's fractures. Greater variability was found in recommendations for patients showing persistent clinical symptomatology. 
Conclusion This survey suggests a consensus among surgeons for allowing patients with relatively normal imaging and resolution of symptoms to return to high-contact activities; however, patients with cervical stenosis or clinical symptoms continue to be a challenge for management. This survey may serve as a basis for future clinical trials and consensus guidelines. abstract_id: PUBMED:25109809 Gender differences in the clinical characteristics of traumatic spinal fractures among the elderly. In order to illustrate the epidemiology of traumatic spinal fractures among the elderly, with an emphasis on exploring gender differences in clinical characteristics, we retrospectively reviewed hospital records on all elderly patients with traumatic spinal fractures who were 60 years of age or older at two university-affiliated hospitals between January 2001 and December 2010. A total of 642 elderly patients with traumatic spinal fractures were identified, of whom 249 were male and 393 were female. Accidental falls from low heights were the most common cause of traumatic spinal fractures among the elderly (50.8%). Frequencies of falls from high heights and direct collisions with a blunt object were significantly higher in male than in female elderly patients (P<0.05). Frequencies of falls from low heights, daily life injuries and jolt injuries in female patients were significantly higher than in male patients (P<0.05). There were 984 vertebral body fractures, with the thoracolumbar segment involved in 60.3% of cases (227/984). Frequencies of cervical spinal fractures, spinal cord injuries, associated non-spinal injuries (ASOIs) and mean injury severity scores (ISSs) were significantly higher in males than in females (P<0.05). Frequencies of thoracic and thoracolumbar spinal fractures in females were significantly higher than in males (P<0.05). Clinicians should make their diagnoses and direct their injury prevention strategies according to gender differences in the clinical characteristics of traumatic spinal fractures among the elderly. abstract_id: PUBMED:18842385 Automobile versus pedestrian injuries: does gender matter? Background: Automobile vs. pedestrian (AVP) injuries cause substantial morbidity and mortality. Gender may be an important factor in determining the anatomic distribution and severity of these injuries. The objective of this study was to examine the effect of gender on the nature and severity of automobile vs. pedestrian injuries and the outcome. Methods: Trauma registry study that included all AVP pedestrian injuries admitted during a 14-year period to a Level I trauma center. The following variables were included in an Excel (Microsoft Corporation, Redmond, WA) file for the purpose of this study: age, gender, body area Abbreviated Injury Score, Injury Severity Score, specific fractures (pelvic, spine, femur, tibia), survival, and intensive care unit (ICU) and hospital length of stay. Results: The study population included 6965 patients, 67.3% of whom were male. Overall, 20.7% were in the age group < 15 years, 60.5% in the age group 15-55 years, 7.6% in the age group 56-65 years, and 11.1% in the age group > 65 years. Pelvic fractures were significantly more common in females than males (20.7% vs. 11.4%, respectively, p < 0.0001). This difference was present in all age groups, but especially in the groups 56-65 years (28.5% vs. 12.3%, respectively, p < 0.0001) and > 65 years (32.5% vs. 15.7%, respectively, p < 0.0001). 
Males in the age group 15-55 years were significantly more likely to suffer tibia fractures (31.8% vs. 25.7%, respectively, p < 0.001). Multivariate analysis showed no difference in survival or ICU stay between the two genders, but there was a significantly longer hospital stay in males 15-65 years. Conclusions: Gender plays a significant role in the incidence of pelvic and tibial fractures but has no effect on survival or ICU stay; however, male patients in the age group 15-65 years had a significantly longer hospital stay. abstract_id: PUBMED:26555187 Age- and gender-specific clinical characteristics of acute adult spine fractures in China. Purpose: The purpose of this study was to investigate the age- and gender-specific clinical characteristics of adult spine fractures in China. Method: A retrospective investigation was performed on consecutive patients with acute spinal fractures in 83 hospitals of 31 provinces in China between January 2010 and December 2011. All adult patients (≥16 years) with spinal column fracture were included, while pathologic, old, and periprosthetic fractures were excluded. Each patient was analysed by reviewing the medical records and initial radiographs. Results: There were 54,533 adult patients with spinal column fractures, accounting for 14.57% of overall fractures. The median age was 57 years (range 16-100 years). There were 27,466 male patients with a median age of 49 years and 27,067 females with a median age of 63 years. The male to female ratio was 1.01:1 in the study. It was 1.74:1 in patients less than 59 years, and 0.51:1 above 60 years. The most common injury site was L1, which accounted for 33.83% of cases. In the AO classification, the 53A and 52A groups comprised 85.53% of all fractures. Conclusions: This study demonstrates the age- and gender-specific clinical characteristics of adult spine fractures and reveals a significant relationship among age, sex, fracture localisation and complexity of spine fractures. Using the age of patients, the likelihood of sustaining adult spine fractures is predictable. abstract_id: PUBMED:36879014 Sex, age, role and geographic differences in traumatic spinal fractures caused by motor vehicle collisions: a multicentre retrospective study. To investigate the sex, age, role and geographic differences in traumatic spinal fractures (TSFs) caused by motor vehicle collisions (MVCs) in adults (≥ 18 years old). This was a multicentre retrospective observational study. In total, 798 patients with TSFs caused by MVCs admitted to our hospitals from January 2013 to December 2019 were enrolled. The patterns were summarized with respect to different sexes (male and female), age group (18-60 and ≥ 60), role (driver, passenger and pedestrian) and geographic location (Chongqing and Shenyang). Significant differences in distribution related to district (p = 0.018), role (p < 0.01), motorcycle (p = 0.011), battery electric vehicle (p = 0.045), bicycle (p = 0.027), coma after injury (p = 0.002), pelvic fracture (p = 0.021), craniocerebral injury (p = 0.008) and fracture location (p < 0.01) were observed between the male and female groups. Significant differences in distribution related to district (p < 0.01), role (p < 0.01), car (p = 0.013), coma after injury (p = 0.003), lower limb fracture (p = 0.016), fracture location (p = 0.001) and spinal cord injury (p < 0.01) were observed between the young adult and elderly groups.
Significant differences in distribution related to sex ratio (p < 0.01), age (p < 0.01), district (p < 0.01), most vehicles involved (p < 0.01), lower limb fracture (p < 0.01), pelvic fracture (p < 0.01), fracture location (p < 0.01), complications (p < 0.01), and spinal cord injury (p < 0.01) were observed between the three different groups of pedestrian, passenger, and driver. Significant differences in distribution related to sex ratio (p = 0.018), age (p < 0.01), role (p < 0.01), most vehicles involved (p < 0.01), coma after injury (p = 0.030), lower limb fracture (p = 0.002), pelvic fracture (p < 0.01), craniocerebral injury (p = 0.011), intrathoracic injury (p < 0.01), intra-abdominal injury (p < 0.01), complications (p = 0.033) and spinal cord injury (p < 0.01) were observed between the Chongqing and Shenyang groups. This study demonstrates the age-, gender-, role- and geographic-specific clinical characteristics of TSFs resulting from MVCs and reveals a significant relationship between different ages, sexes, roles, geographic locations and associated injuries, complications and spinal cord injuries. abstract_id: PUBMED:25931831 Differences in injury distribution in professional and recreational snowboarding. Introduction: Little is known about the distribution of injuries in competitive snowboarding, and there are no reports describing long-term complications of professional snowboarding. To examine the injury distribution and long-term impairments in active and retired professional snowboarders, a retrospective data analysis was performed using a questionnaire. As a control group, injuries of recreational snowboarders treated at a southern German university clinic were examined. Methods: Retired and active snowboarders were identified from the result lists of the International Ski Federation (FIS) and from the current team lists of the German-speaking European national snowboard teams. After identification, a questionnaire regarding injury distribution/treatment was sent to the athletes and the completed questionnaires were subsequently evaluated. Injuries in recreational snowboarding were evaluated by analyzing the medical records and surgical reports for a seven-year period. Results: Compared with the available literature, significantly more severe injuries requiring surgical treatment were documented in recreational snowboarders. Wrist fractures accounted for most injuries, but many nonunions and spinal fractures were also recorded. Among active professionals, distinct differences in injury distribution were found. The number of days off differed greatly, with fewer days in speed disciplines (35.5 days) and the maximum number of days off in snowboardcross (51 days). Injury distribution varied clearly, with more injuries of the upper extremity and ankle in speed disciplines compared to snowboardcross and freestyle professionals, who exhibited more injuries to the lower extremity and head. Freestyle athletes used significantly fewer protectors compared to speed (P=0.01) and snowboardcross athletes (P=0.00). An analysis of retired professionals revealed a higher number of impairments in daily life and a significantly higher number of days off snowboarding (P=0.018). Conclusion: Knowledge about snowboard-specific injuries is a prerequisite for appropriate treatment of these patients. This first description of long-term impairments after professional snowboarding highlights the importance of developing specific strategies to prevent daily life impairments after a professional sports career.
abstract_id: PUBMED:37423338 Traumatic Dens Fracture Patients Comprise Distinct Subpopulations Distinguished by Differences in Age, Sex, Injury Mechanism and Severity, and Outcome. Background: Dens fractures are an increasingly common injury, yet their epidemiology and its implications remain underexamined. Methods: We retrospectively analyzed all traumatic dens fracture patients managed at our institution over a 10-year period, examining demographic, clinical, and outcomes data. Patient subsets were compared across these parameters. Results: Among 303 traumatic dens fracture patients, we observed a bimodal age distribution with a strong goodness of fit centered at age 22.3 ± 5.7 (R = 0.8781) and at 77.7 ± 13.9 (R = 0.9686). A population pyramid demonstrated a bimodal distribution among male patients, but not female patients, which was confirmed with a strong goodness of fit for male patient subpopulations age <35 (R = 0.9791) and age ≥35 (R = 0.8843), but a weaker fit for a second female subpopulation age <35. Both age groups were equally likely to undergo surgery. Patients younger than age 35 were more likely to be male (82.4% vs. 46.9%, odds ratio [OR] = 5.29 [1.54, 17.57], P = 0.0052), have motor vehicle collision as their mechanism of injury (64.7% vs. 14.1%, OR = 11.18 [3.77, 31.77], P < 0.0001), and to have a severe trauma injury severity score (17.6% vs. 2.9%, OR = 7.23 [1.88, 28.88], P = 0.0198). Nevertheless, patients age <35 were less likely to have fracture nonunion at follow (18.2% vs. 53.7%, OR = 0.19 [0.041, 0.76], P = 0.0288). Conclusions: The dens fracture patient population comprises 2 subpopulations, distinguished by differences in age, sex, injury mechanism and severity, and outcome, with male dens fracture patients demonstrating a bimodal age distribution. Young, male patients were more likely to have high-energy injury mechanisms leading to severe trauma, yet were less likely to have fracture nonunion at follow-up. abstract_id: PUBMED:11856676 Cervical spine injuries in patients 65 years old and older: epidemiologic analysis regarding the effects of age and injury mechanism on distribution, type, and stability of injuries. Objective: Our objective was to describe types and distribution of cervical spine injuries in elderly patients in regard to causative trauma mechanism and patient age. Materials And Methods: The distribution and type of 225 cervical spine injuries in 149 consecutive patients 65 years old and older over a 5-year interval were retrospectively assessed. For each patient, initial admission imaging studies were reviewed, and injuries were classified. Trauma mechanism (falls from standing or seated height vs higher energy mechanisms) and initial clinical and neurologic status were recorded. Data were correlated according to patients' age (65-75 years and >75 years) and causative trauma mechanism. Results: Ninety-five (64%) of 149 patients had upper cervical spine injuries. Fifty-nine (40%) of 149 patients had multilevel injuries. Main causes for cervical spine injuries were motor vehicle crashes in "young elderly" (65-75 years old; 36/59, 61%) and falls from standing or seated height in "old elderly" (>75 years old; 36/90, 40%). Fracture patterns at risk for neurologic deterioration were common (>50%), even in the absence of acute myelopathy or radiculopathy. 
Patients older than 75 years, independent of causative mechanism, and patients who fell from standing height, independent of age, were more likely to have injuries of the upper cervical spine (p = 0.026 and p = 0.006, respectively). Conclusion: Cervical spine injuries in elderly patients tend to involve more than one level with consistent clinical instability and commonly occur at the atlantoaxial complex. Old elderly patients and patients who fall from standing height are more prone to injuries of the upper cervical spine. abstract_id: PUBMED:30840088 Influence of the trauma mechanism on cervical spine injuries Background: The influence of trauma-related kinematics on the injury pattern of the cervical spine is currently not considered in the available classification systems, only the force vector. Whether the strength of the trauma has an additional influence on the number and complexity of the injuries and whether this can be classified, has not yet been finally investigated. Objective: What influence do different kinematics apart from the traumatic force vector have on injuries of the cervical spine? Material And Methods: Based on the AOSpine classification system for the upper and subaxial cervical spine, data from 134 trauma patients from a first level trauma center were retrospectively analyzed. Analogue to the S3 guidelines on polytrauma, patients were assigned to six trauma groups and the injuries were classified on the basis of computed tomography (CT) cross-sectional imaging. Results: A higher trauma energy had a significant impact on the number of cervical spine injuries (p = 0.005). In low velocity accidents C2 was the most frequently injured vertebra (51%; p = 0.022) and high velocity accidents showed more C7 fractures (37%; p = 0.017). Furthermore, upper cervical spine injuries occurred more often in low energy trauma and older female patients (e.g. falling from a standing position). Subaxial cervical spine involvement was found significantly more often in high velocity accidents and younger male patients (p = 0.012). Conclusion: Exact knowledge of the trauma mechanism is helpful in the primary treatment of an injured person. Injury patterns can be better estimated and the appropriate diagnostics can be initiated. The results underline the importance of immediate cervical spine immobilization even after minor trauma. In high velocity trauma, patients more often suffer from lower cervical spine injuries, especially C7. Due to the accumulation of multilevel spinal injuries in high velocity trauma, radiographic imaging of the whole spine is advisable. Answer: The distribution of spinal fractures in children is not significantly influenced by gender or mechanism of injury. A study examining pediatric patients referred to a trauma service over a 5-year period found that the thoracic spine (T2-T10) was the most common region of fracture in pediatric trauma patients, regardless of gender or mechanism of injury (motor vehicle accident versus non-motor vehicle accident) (PUBMED:14504843). This suggests that while the distribution of fractures in the spine can be variable, in the pediatric population, factors such as age, mechanism of injury, or gender do not significantly alter the pattern of spinal injuries.
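Several of the abstracts above, and the answer itself, rest on testing whether the distribution of fracture levels differs across groups such as sex or injury mechanism, typically with a chi-square test on a contingency table. The following sketch shows what that test looks like in Python with SciPy; the table entries are hypothetical and are not the counts reported in the cited studies.

from scipy.stats import chi2_contingency

# Hypothetical counts of fracture region by sex (illustrative only)
#          cervical  thoracic  thoracolumbar  lumbar
table = [[22,        30,       15,            20],   # male
         [18,        27,       14,            18]]   # female

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")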
Instruction: Can familial factors account for the association of body mass index with poor mental health in men or women? Abstracts: abstract_id: PUBMED:23664570 Can familial factors account for the association of body mass index with poor mental health in men or women? Objective: This study examined if associations between body mass index (BMI) and mental and physical health were independent of genetic and familial factors. Method: Data from 2831 twins (66% female) were used in an epidemiological co-twin control design with measures of BMI and mental and physical health outcomes. Generalized estimating equation regressions assessed relationships between BMI and health outcomes controlling for interdependency among twins and demographics. Within-pair regression analyses examined the association of BMI with health outcomes controlling for genetic and familial influences. Results: Adjusted analyses with individual twins found associations in women between BMI and perceived stress (P=.01) and depression (P=.002), and the link between BMI and depression (P=.03) was significant in men. All physical health outcomes were significantly related to BMI. Once genetic and familial factors were taken into account, mental health outcomes were no longer significantly associated with BMI. BMI in women remained related to ratings of physical health (P=.01) and body pain (P=.004), independent of genetic and familial influences. Conclusion: These findings suggest that genetic and familial factors may account for the relationship between increased weight and poor mental health. abstract_id: PUBMED:26543419 Association of Body Weight and Body Mass Index with Bone Mineral Density in Women and Men from Kosovo. Background And Objective: Body weight and body mass index (BMI) are considered potentially modifiable determinants of bone mass. Therefore, the aim of this study was to explore the association between body weight and body mass index (BMI) with total hip and lumbar spine bone mineral density (BMD). Methods: This cross-sectional study included a population of 100 women and 32 men from Kosovo into three BMI groups. All the study subjects underwent dual-energy X-ray absorptiometry (DXA) measurements. Results: Total hip BMD levels of obese menopausal and premenopausal women and men were significantly higher compared to overweight or normal weight subjects, while lumbar spine BMD levels of only menopausal women and men were higher among obese subjects. Age-adjusted linear regression analysis showed that BMI is a significant independent associate of lumbar spine and total hip BMD in menopausal women and men. Conclusion: Despite positive association between BMI and lumbar spine and total hip BMD in menopausal women, presence of more obese and osteoporotic subjects among menopausal women represent a population at risk for fractures because of poor balance and frequent falls; therefore, both obesity and osteoporosis prevention efforts should begin early on in life. abstract_id: PUBMED:28950839 Obesity, change of body mass index and subsequent physical and mental health functioning: a 12-year follow-up study among ageing employees. Background: Studies suggest an association between weight change and subsequent poor physical health functioning, whereas the association with mental health functioning is inconsistent. We aimed to examine whether obesity and change of body mass index among normal weight, overweight and obese women and men associate with changes in physical and mental health functioning. 
Methods: The Helsinki Health Study cohort includes Finnish municipal employees aged 40 to 60 in 2000-02 (phase 1, response rate 67%). Phase 2 mail survey (response rate 82%) took place in 2007 and phase 3 in 2012 (response rate 76%). This study included 5668 participants (82% women). Seven weight change categories were formed based on body mass index (BMI) (phase 1) and weight change (BMI change ≥5%) (phase 1-2). The Short Form 36 Health Survey (SF-36) measured physical and mental health functioning. The change in health functioning (phase 1-3) score was examined with repeated measures analyses. Covariates were age, sociodemographic factors, health behaviours, and somatic ill-health. Results: Weight gain was common among women (34%) and men (25%). Weight-gaining normal weight (-1.3 points), overweight (-1.3 points) and obese (-3.6 points) women showed a greater decline in physical component summary scores than weight-maintaining normal weight women. Among weight-maintainers, only obese (-1.8 points) women showed a greater decline than weight-maintaining normal weight women. The associations were similar, but statistically non-significant for obese men. No statistically significant differences in the change in mental health functioning occurred. Conclusion: Preventing weight gain likely helps maintaining good physical health functioning and work ability. abstract_id: PUBMED:19301533 Sociodemographic, psychological and health-related factors associated with poor mental health in Spanish women and men in midlife. Background: The purpose of this work was to examine the sociodemographic, psychological, and health-related factors (considered jointly) associated with poor mental health in midlife and to analyze whether risk and protective factors differed in men and women. Methods: A cross-sectional study was conducted with a middle-class sample of 252 women and 189 men between 45 and 65 years of age from Spanish rural areas. Mental health status was measured with the 12-Item General Health Questionnaire (GHQ-12; Goldberg & Williams, 1988). Multiple logistic regression models were used to calculate odds ratios with confidence intervals of 95%, adjusting for confounding variables. Results: The percentage of women (14.3%) with poor mental health was twice that of the men (7.4%). In women, the following variables were significantly and positively related to poor mental health: consumption of psychoactive drugs, physiological and cognitive anxiety; self-esteem and family satisfaction were protective factors. For men, physical complaints and cognitive anxiety were significant risk factors, and job satisfaction was a protective factor. Conclusions: In general, the psychological variables were more clearly related to poor mental health. Women had a more unfavorable profile, and the variables related to poor mental health differed for men and women, perhaps due to social roles associated with gender. To facilitate diagnosis and take preventive measures, men's and women's risk factors for poor mental health should be differentiated. abstract_id: PUBMED:15729457 Body mass index, body weight perception and common mental disorders among university employees in Rio de Janeiro Objective: We tested the hypothesis that inadequate self-perception of body weight with or without obesity would be associated with common mental disorders (CMD). Method: We analyzed cross-sectional data from 4,030 university employees participating in the longitudinal Pró-Saúde Study Phase 1 (1999) in Rio de Janeiro. 
Participants (22-59 years of age) were invited to fill out a questionnaire that included an evaluation of common mental disorders by the General Health Questionnaire (GHQ-12). Body mass index (BMI=kg/m2) was calculated based on measured weight and height, and participants self-classified their own current body weight as highly above ideal, slightly above ideal, ideal, slightly below ideal or highly below ideal. Results: Among women, 58.3% of those with body mass index lower than 25.0 kg/m2 considered themselves as being above ideal weight; for men, this proportion was 23.5%. Multivariate logistic regression adjusted for age, income, leisure-time physical activity, self-reported health problem and body mass index showed that body weight perception highly above ideal had a strong association with common mental disorders among women (OR=1.84, 95%CI=1.22-2.76). For women but not men, body mass index showed a borderline association with common mental disorders (p-trend=0.05) that did not persist after adjustment for income and self-reported health problem. Conclusion: Inadequate body weight perception, independent of body mass index, was associated with common mental disorders in women, but not men. One possible explanation for this association may be the socio-cultural pressure placed on women to conform to the thinness ideal. abstract_id: PUBMED:17960318 The relationship between body mass index and mental health. A population-based study of the effects of the definition of mental health. Background: The association between atypical body weight and mental health remains poorly understood. We examined the relationship between body mass index and mental health in a population-based study of adults that included the full range of body weights, three disorder types, and three levels of mental illness severity. Methods: Data came from the 2003 Alberta Mental Health Survey (n=5383), which included a validated, standard instrument for measurement of DSM-IV mental disorders as well as several indicators of psychiatric symptoms. Associations were examined using crosstabulation and chi-squared statistics, and logistic regression adjusting for sociodemographic variables. Results: Findings differed by type and severity of mental illness and by sex and age. For instance, anxiety disorders were elevated among underweight men compared to normal weight men and to women. Substance use disorders were elevated among obese men at younger compared to older ages. Mood disorders were elevated among obese women compared to normal weight women, and subclinical anxiety/depression was reduced among obese men compared to normal weight men and to women. Conclusions: These analyses highlight the importance of considering type of mental illness, level of severity, sex and age when examining the relationship between BMI and mental health. The diversity of patterns observed, detectable at the population level, warrants further examination and monitoring. abstract_id: PUBMED:24274839 Association between body mass index and mental health among Scottish adult population: a cross-sectional study of 37,272 participants. Background: The evidence is conflicting as to whether body mass index (BMI) is associated with mental health and, if so, to what extent it varies by sex and age. We studied mental health across the full spectrum of BMI among the general population, and conducted subgroup analyses by sex and age. Method: We undertook a cross-sectional study of a representative sample of the Scottish adult population.
The Scottish Health Survey provided data on mental health, measured by the General Health Questionnaire-12 (GHQ), BMI, demographic and life-style information. Good mental health was defined as a GHQ score <4, and poor mental health as a GHQ score ⩾4. Logistic regression models were applied. Results: Of the 37 272 participants, 5739 (15.4%) had poor mental health. Overall, overweight participants had better mental health than the normal-weight group [adjusted odds ratio (OR) 0.93, 95% confidence interval (CI) 0.87-0.99, p = 0.049], and individuals who were underweight, class II or class III obese had poorer mental health (class III obese group: adjusted OR 1.26, 95% CI 1.05-1.51, p = 0.013). There were significant interactions of BMI with sex (p = 0.013) and with age (p < 0.001). Being overweight was associated with significantly better mental health in middle-aged men only. In contrast, being underweight at all ages or obese at a young age was associated with significantly poorer mental health in women only. Conclusions: The adverse associations between adiposity and mental health are specific to women. Underweight women and young women who are obese have poorer mental health. In contrast, middle-aged overweight men have better mental health. abstract_id: PUBMED:28007535 Association between weight status and men's positive mental health: The influence of marital status. The purpose of this study was to (1) examine the association between weight status and men's positive mental health, defined as the presence of symptoms of emotional, psychological, and social well-being, and (2) evaluate the moderating effect of marital status. A total of 645 men aged between 19 and 71 years self-reported their height and weight and answered a questionnaire measuring their emotional, psychological, and social well-being. Analysis of variance revealed that mean levels of emotional, psychological, and social well-being did not significantly differ according to men's weight status. Moderation analyses indicated that, for men in a relationship (married or living common-law), there were no significant associations between overweight, obesity, and the three components of positive mental health. However, for single men, overweight was marginally associated with higher emotional well-being, while obesity was associated with lower psychological well-being and marginally associated with lower social well-being. Results of the present study suggest that health professionals and researchers should take the characteristics (such as marital status) of men with obesity and overweight into account when working with them. Mental health researchers may need to examine men in each weight category separately (e.g. obesity vs. overweight), since the association with positive mental health can differ from one category to another. abstract_id: PUBMED:29407361 Association between body mass index and suicidal ideation among seniors in Shandong, China. Background: Recently, an emerging group of studies has reported the association between Body Mass Index (BMI) and suicidal ideation in the Western countries. However, this relationship is still unclear with controversial results, and we have little knowledge about this relationship in China which is one of few countries reported higher suicide rates. Methods: This study aims to analyze the association between BMI and suicidal ideation among seniors (≥60 years old) in Shandong, China. A total of 3313 seniors were included in the data analysis. 
Suicidal ideation, weight, height, socio-demographic and psychological variables were evaluated in this study. Logistic regression was conducted to explore the association between BMI and suicidal ideation among male and female seniors. Results: The results showed that 4.2% of the seniors reported suicidal ideation (3.4% of men and 4.9% of women). After controlling for socio-demographic variables, economic status, physical disease, social support and mental health, an inverse relationship between BMI and suicidal ideation was found for men, but not for women. Mental health was still an important factor associated with suicidal ideation. Conclusion: The results inform health care professionals that underweight in male seniors can be associated with a higher risk of suicidal ideation in China. abstract_id: PUBMED:22469283 Body mass index and mental health problems in general adults: disparity in gender and socioeconomic status. Objective: This nationwide population-based study aims to evaluate the association between Body Mass Index (BMI) and mental health problems, and to investigate the effects of gender and socioeconomic status (SES) on this association. Methods: This study was a longitudinal study. 12,862 adults (aged 18 years or older), who responded to the 2001 National Health Interview Survey (NHIS) and gave consent to the linking of the survey data to their National Health Insurance (NHI) claims records, were included. The 2002-2004 NHI claims records of the sample were analyzed. Logistic regression was used to evaluate the odds ratios. Those with middle and high income (per capita annual household income ≧US$9375) in the sample were referred to as the better-offs. Results: After adjustment, increased BMI was significantly associated with a higher risk of schizophrenia (OR: 1.14, 95% CI: 1.07-1.22) and a lower risk of mood disorder (OR: 0.95, 95% CI: 0.91-0.99) in women. In addition, a higher BMI was significantly associated with a lower risk of mood disorders (risk reduction: 6%) and a lower risk of any major mental health problem (risk reduction: 3%) among the better-offs. Conclusions: In contrast to other (cross-sectional) studies in other countries, our longitudinal results suggest that in a general Taiwanese population, the association between BMI and mental health may vary by type of mental health problem. Gender and SES may have influenced this relationship. Further studies are required to elucidate the biological or social factors underlying these empirical observations. Answer: Yes, familial factors may account for the association of body mass index (BMI) with poor mental health in both men and women. A study that examined twins found that once genetic and familial factors were controlled for, the associations between BMI and mental health outcomes were no longer significant. This suggests that the relationship between increased weight and poor mental health could be influenced by genetic and familial factors (PUBMED:23664570).
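The studies in this record compute BMI as weight in kilograms divided by height in metres squared and then relate it to a binary mental health outcome through logistic regression, reporting odds ratios with 95% confidence intervals. The sketch below reproduces that general workflow on simulated data with statsmodels; every number in it is synthetic and purely illustrative, not an estimate from the cited papers.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data (illustrative only)
n = 500
weight_kg = rng.normal(75, 12, n)
height_m = rng.normal(1.68, 0.08, n)
bmi = weight_kg / height_m ** 2          # BMI = kg / m^2
female = rng.integers(0, 2, n)

# Simulate a binary "poor mental health" indicator loosely tied to BMI and sex
logit = -3.0 + 0.08 * (bmi - 25) + 0.3 * female
poor_mh = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression: odds ratio per BMI unit, adjusted for sex
X = sm.add_constant(np.column_stack([bmi, female]))
fit = sm.Logit(poor_mh, X).fit(disp=0)
or_bmi = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR per BMI unit = {or_bmi:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")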
Instruction: Choking during sleep: can it be expression of arousal disorder? Abstracts: abstract_id: PUBMED:6747223 Arousal responses to chemical stimuli during sleep. Arousal from sleep is an important component of any co-ordinated response to an external stimulus. Respiratory stimuli imposed during sleep are capable of eliciting arousal responses as well as the classic ventilatory responses. Considerable survival advantage may result as a consequence of intact arousal mechanisms. Developmental studies of arousal responses to respiratory stimuli are lacking. Arousal responses to chemoreceptor stimuli have been examined in adult tracheotomized dogs, both before and following bilateral surgical denervation of the carotid bodies. Prior to denervation of the carotid bodies, arousal in response to isocapnic progressive hypoxia occurred at an arterial oxygen saturation (SaO2) of 83% during slow-wave sleep and 70% in rapid-eye-movement sleep. Airway occlusion induced arousal from slow-wave sleep at SaO2 of 88% and from rapid-eye-movement sleep at SaO2 of 84%. Following denervation of the carotid bodies, arousal failed to occur in response to either progressive hypoxia or airway occlusion despite desaturation to 60% in slow-wave sleep and 50% in rapid-eye-movement sleep, at which level experiments were arbitrarily terminated. The effect of sleep fragmentation on arousal responses of sleeping dogs to chemoreceptor stimuli has also been studied. A marked impairment of both hypoxic and hypercapnic arousal was produced by sleep fragmentation. The consequences of interfering with arousal mechanisms in experimental animals indicate the considerable potential for similar dysfunction in man to contribute to the pathophysiology of the clinical syndromes of obstructive sleep apnoea and sudden infant death. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:14646811 Cortical arousal, autonomic arousal. Evaluation techniques and clinical importance. The respiratory disorders expressed by obstructive hypopneas and apneas during sleep, as well as the sequences of crescendo in respiratory effort without hypopneas or apneas which define the upper airway resistance syndrome, terminate with (thanks to) an arousal, defined by EEG changes. In some cases, the activation of the central nervous system is restricted to a sympathetic activation, which has been mainly studied in the cardiovascular area, and is not always accompanied by a cortical arousal. Various approaches (heart rate, blood pressure, pulse transit time, peripheral arterial tonometry) make the identification of sympathetic activation possible. Sympathetic activation seems to be more sensitive than cortical arousal to the stimulations generated by the respiratory system via an activation of mechanoreceptors stimulated by the increased respiratory effort in response to total or partial occlusion of the upper airway. The mechanisms of the cortical or autonomic arousal are not fully understood, but their detection could be a diagnostic tool for the identification of such disorders. Such tools are currently under validation. abstract_id: PUBMED:1885441 Behavioral arousal in newborn infants and its association with termination of apnea. Arousal is an important protective mechanism that aids in the resolution of obstructive sleep apnea in adults and children, but its role in neonatal apnea has not been investigated. The primary aim of the present study was to determine the role of arousal in the termination of apnea in preterm infants.
Videorecording was used to identify spontaneous behavioral arousal in a group of healthy full-term (n = 7) and preterm (n = 10) infants before and during polygraphic monitoring of cardiorespiratory variables and in a group of preterm infants with apnea (n = 10) during similar polygraphic monitoring. Spontaneous arousal rates (mean +/- SE) in full-term infants before and during polygraphic monitoring were 0.18 +/- 0.03 and 0.23 +/- 0.07 episodes/min, respectively. Corresponding values in nonapneic preterm infants were 0.24 +/- 0.03 and 0.24 +/- 0.02 episodes/min. In apneic preterm infants, mean spontaneous arousal rate during polygraphic recording was 0.26 +/- 0.02, but it was considerably higher during apneic sleep periods (0.59 +/- 0.17) than during nonapneic sleep periods (0.25 +/- 0.01). The frequency of occurrence of arousal was significantly higher (P less than 0.005) in long vs. short apnea, mixed vs. central apnea, and severe vs. mild apnea. Although a clear association between arousal and apneic resolution was observed in preterm infants, lack of arousal responses in a large number of apneic episodes suggests that behavioral arousal is not essential for the termination of apnea in these infants. abstract_id: PUBMED:35388549 Sleepwalking, sleep terrors, sexsomnia and other disorders of arousal: the old and the new. Disorders of arousal (DOA) is an umbrella term initially covering classical sleepwalking, sleep terrors, and confusional arousals, and now including a wider spectrum of specialised forms of non rapid eye movement (non REM) parasomnias such as sexsomnia, sleep-related eating disorder, and sleep-related choking syndrome. Growing evidence has shown that DOA are not restricted to children but are also prevalent in adults (2%-4% of the adult population). While DOA run in family, genetics studies remain scarce and inconclusive. In addition to the risk of injury on themselves and others (including sexual assaults in sexsomnia), adults with DOA frequently suffer from excessive daytime sleepiness, pain, and altered quality of life. The widespread view of DOA as automatic and amnesiac behaviours has now been challenged by subjective (dream reports) and objective (dream-enacting behaviours documented on video-polysomnography) observations, suggesting that sleepwalkers are 'dream walking' during their episodes. Behavioural, experiential, cognitive, and brain (scalp electroencephalography [EEG], stereo-EEG, high density-EEG, functional brain imaging) data converge in showing a dissociated pattern during the episodes. This dissociated pattern resembles the new concept of local arousal with a wake-like activation in motor and limbic regions and a preserved (or even increased) sleep intensity over a frontoparietal network. EEG and behavioural criteria supporting the DOA diagnosis with high sensitivity and specificity are now available. However, treatment is still based on controlling priming and precipitating factors, as well as on clinicians' personal experience with sedative drugs. Placebo-controlled trials are needed to improve patients' treatment. DOA deserve more attention from sleep researchers and clinicians. abstract_id: PUBMED:2382892 The influence of increasing ventilatory effort on arousal from sleep. Arousal from sleep in response to asphyxia can be a lifesaving event. However, the mechanisms responsible for this important arousal response are uncertain. 
A unifying hypothesis is that arousal results from the increased respiratory effort that occurs as a result of ventilatory stimulation. If this is true, the magnitude of this effort during the breaths immediately preceding arousal from sleep should be similar regardless of the stimulus. Therefore, the negative inspiratory pleural pressure during the breaths preceding arousal would be similar, whether stimulated by added inspiratory resistive load, hypoxia, or hypercapnia. To test this hypothesis, we studied eight young, healthy men during full-night sleep studies. We measured their electroencephalography (EEG), electromyography (EMG), electrooculography (EOG), inspired ventilation (VI), end tidal PCO2 (PETCO2), O2 saturation, and esophageal pressure (esophageal balloon) while inducing arousal from non-REM sleep using (1) a 30-cm H2O/L/s added resistive load, (2) progressive hypoxia, and (3) progressive hyperoxic hypercapnia. All subjects were eventually aroused following the addition of the 30-cm H2O/L/s added load and during progressive hypercapnia. However, only six of the eight men were aroused when the O2 saturation was reduced to a minimum of 70%. For each stimulus, arousal occurred at very different levels of ventilation and arterial chemistry (SaO2 and CO2). However, ventilatory effort for each subject was similar at the point of arousal regardless of the stimulus. The peak-negative esophageal pressure for the single inspiration preceding arousal (for the six subjects arousing with all three stimuli) was 16.8 +/- 1.4 cm H2O for added resistive load, 15.0 +/- 2.4 cm H2O for hypoxia, and 14.7 +/- 2.1 cm H2O for hypercapnia. We conclude that increasing ventilatory effort may be the stimulus to arousal from sleep independent of the source of this rising drive to breathe. abstract_id: PUBMED:8964736 Airway obstruction during sleep increases blood pressure without arousal. Recent studies suggest that arousal is the dominant factor acutely increasing blood pressure in obstructive sleep apnea and that neither stimulation of chemoreceptors nor mechanical factors associated with large negative swings in intrapleural pressure substantially contribute to the rise in blood pressure associated with each obstructive apneic event. A canine model of obstructive sleep apnea was used to examine the relative contributions of these mechanisms in the blood pressure response to induced airway obstruction during non-rapid-eye-movement sleep. In part A of the study, the arousal response was eliminated from an obstructive event by restoring airway patency just before the expected arousal, allowing blood pressure responses to be compared between obstructive events with and without arousal. In part B of the study, the protocol of Part A was repeated after pharmacological blockade of the autonomic nervous system with hexamethonium (20 mg/kg iv), eliminating neurally mediated responses due to arousal, stimulation of chemoreceptors, or other reflexes, while maintaining any mechanical effects on blood pressure related to swings in intrapleural pressure. The results of part A (n = 4 dogs) show that obstructive apneic events of 28.5 +/- 3.1 s duration, with arterial hemoglobin desaturation to 92.9 +/- 0.8% and airway pressure swings of -37.6 +/- 6 mmHg, significantly increased mean arterial pressure (MAP) by 13.8 +/- 1.5 mmHg in the absence of arousal (P < 0.005). In comparison, when arousal was allowed to occur, MAP increased by a further 11.8 +/- 1.2 mmHg (P < 0.01). 
In part B (n = 3 dogs), there was no change in MAP during the obstructive apneic event, and MAP fell by > 10 mmHg in the postobstruction period whether or not arousal occurred (P < 0.05). We conclude that neural reflexes, but not mechanical factors, substantially contribute to the acute blood pressure response to an obstructive apneic event and that arousal produces a separate, additional acute hypertensive response. abstract_id: PUBMED:3597257 Arousal responses to airway occlusion in sleeping dogs: comparison of nasal and tracheal occlusions. Previous studies have shown that the arousal threshold to hypoxia, hypercapnia, and tracheal occlusions is greatly depressed in rapid-eye-movement (REM) sleep compared with slow-wave sleep (SWS). The aim of this study was to compare the arousal thresholds in SWS and REM sleep in response to an upper airway pressure stimulus. We compared the waking responses to tracheal (T) vs. nasal (N) occlusion in four unanesthetized, naturally sleeping dogs. The dogs either breathed through a tracheal fistula or through the snout using a fiberglass mask. A total of 295 T and 160 N occlusion tests were performed in SWS and REM sleep. The mean time to arousal during N and T tests was variable in the same dog and among the dogs. The mean time to arousal in SWS-tracheal occlusion was longer than that in N tests in only two of the four dogs. The total number of tests inducing arousal within the first 15 s of SWS-nasal occlusion tests was significantly more than that of T tests (N: 47%; T: 27%). There was a marked depression of arousal within the initial 15 s of REM sleep in T tests compared with N tests (N: 21%; T: 0%). The frequency of early arousals in REM tests was less than that of SWS for both N and T tests. The early arousal in N occlusion is in sharp contrast to the well-described depressed arousal responses to hypoxia, hypercapnia, and asphyxia. This pattern of arousal suggests that the upper airway mechanoreceptors may play an important role in the induction of an early arousal from nasal occlusion. abstract_id: PUBMED:3822590 Effects of hyperoxia on the arousal response to upper airway obstruction in lambs. Experiments were done to investigate the effects of increased inspired oxygen on the arousal response from sleep to upper airway obstruction in 10 newborn lambs. Each lamb was anesthetized and instrumented for recordings of electrocorticograms, electrooculogram, nuchal and diaphragm electromyograms, and measurements of systemic arterial blood pressure and oxygen saturation (fiberoptic catheter oximeter). A tracheotomy was performed and a fenestrated tracheostomy tube was placed in the trachea. A 5F balloon tipped catheter was inserted into the tube so that airflow could be obstructed by inflating the balloon. No sooner than 3 days after surgery, measurements were made during a control period and during an experimental period of upper airway obstruction; the inspired oxygen fraction was alternated hourly between 0.21 and 0.60. A total of 57 epochs of quiet sleep and 58 epochs of active sleep was obtained in eight lambs. Arousal was significantly delayed (p less than 0.005) during active sleep (21 +/- 6 s; mean +/- 1 SD) compared to quiet sleep (7 +/- 2 s) in room air. Increased inspired oxygen significantly delayed arousal (p less than 0.05) during active sleep (47 +/- 25 s), but had little effect on arousal in quiet sleep (10 +/- 4 s). 
These results provide evidence that arousal from active sleep following upper airway obstruction in lambs is primarily initiated by a decrease in arterial oxygen. However, arousal from quiet sleep following upper airway obstruction in lambs appears to be initiated by other stimuli. abstract_id: PUBMED:8795098 Arousal responses of the newborn piglet to airways obstruction and rebreathing during normothermia and hyperthermia. Most experimental studies of arousal to respiratory stressors have been performed under neutral thermal conditions. The present study focussed on arousal responses with the added challenge of heat stress. The subjects were two groups of 10 sedated piglets aged 4-6 days and 2-6 days respectively. Respiratory stimuli (partial and total airways obstruction (AO) or rebreathing (RB) expired gases) were applied during non-rapid-eye-movement (NREM) sleep. Measurements of heart rate, SaO2, intra-pleural pressure or inspired CO2 and O2 were recorded during tests in normothermia and hyperthermia. Hyperthermia significantly shortened the time to arousal. Thus, arousal times from partial and total AO and from RB were: in normothermia, 40.3 +/- 5.8, 9.2 +/- 0.7 and 94.9 +/- 14.7 s respectively; in hyperthermia, 17.8 +/- 3.7, 7.6 +/- 0.9 and 68.5 +/- 9.1 s respectively. Cardio-respiratory variables at arousal were similar in both thermal states. Seven non-sedated piglets were challenged with RB stimuli during normothermia only to determine the influence of sleep state on arousal. Arousal in REM sleep was delayed approximately 2-3-fold (NREM, 41.2 +/- 8.2 s; REM, 88.5 +/- 18.1 s), occurring at a lower SaO2 and higher inspired CO2. It is concluded that arousal from NREM sleep under adverse conditions of hyperthermia shortens the arousal time from asphyxial stimuli induced by AO and RB with no change in the arousal threshold. abstract_id: PUBMED:1629105 Comparison of arousal responses to tracheal and face mask occlusions in sleeping newborn piglets. The arousal responses after occlusion of the airway at the mid-trachea were compared with the responses after occlusion of the airway in a face mask in chronically instrumented 3- to 5-day-old piglets. For each site of occlusion arousal latency was significantly longer from active sleep than from quiet sleep. There was a significant increase in the frequency of early arousals after face mask occlusions compared with tracheal occlusions in both sleep states. During quiet sleep the frequency of arousal by 1 s after occlusion was 0.55 with face mask occlusions compared with 0.28 with tracheal occlusion (P less than 0.01). During active sleep the frequency of arousal by 3 s after a face mask occlusion was 0.32 compared with 0.08 after tracheal occlusion (P less than 0.05). Arousal from quiet sleep occurred before changes in arterial oxygen saturation. During active sleep mean saturation at arousal was not different between face mask and tracheal occlusions. Exposure of the upper airway to the pressures generated during airway occlusions results in earlier arousal in both quiet and active sleep, indicating a potential role for upper airway mechanoreceptors in initiating arousal in the newborn piglet. Answer: Choking during sleep could potentially be an expression of an arousal disorder. 
Disorders of arousal (DOA) encompass a range of conditions that include behaviors such as sleepwalking, sleep terrors, and confusional arousals, and can also involve more specialized forms of non-REM parasomnias like sleep-related eating disorder and sleep-related choking syndrome (PUBMED:35388549). These disorders are characterized by partial arousals from non-REM sleep, where the individual may exhibit complex behaviors while remaining in a state of altered consciousness. The mechanisms underlying DOA are not fully understood, but they are believed to involve a dissociated state with wake-like activation in certain brain regions, such as motor and limbic areas, while a sleep-like state is preserved in other regions, particularly the frontoparietal network (PUBMED:35388549). This could potentially explain why someone might experience choking during sleep as part of a DOA episode, as the coordination of protective airway reflexes may be impaired due to the dissociated brain state. Moreover, arousal responses to respiratory stimuli, such as airway occlusion, have been shown to be an important protective mechanism during sleep (PUBMED:6747223). Arousal responses can be triggered by the activation of mechanoreceptors stimulated by increased respiratory effort in response to total or partial occlusion of the upper airway (PUBMED:14646811). This suggests that choking during sleep could trigger an arousal response as the body attempts to clear the obstruction and restore normal breathing. However, it is also important to note that arousal is not always essential for the termination of apnea or airway obstruction in infants (PUBMED:1885441), and the arousal response to airway obstruction can be influenced by factors such as hyperoxia (PUBMED:3822590) and the site of occlusion (PUBMED:1629105). Therefore, while choking during sleep could be an expression of an arousal disorder, it is also possible that it could occur independently of such disorders, and the specific circumstances and individual factors would need to be considered.
Instruction: Can spontaneous movements be modulated by visual and acoustic stimulation in 3-month-old infants? Abstracts: abstract_id: PUBMED:12191527 Can spontaneous movements be modulated by visual and acoustic stimulation in 3-month-old infants? Background: The assessment of the quality of general movements (GMs) in infants proves to be a reliable and valid diagnostic tool for detecting brain dysfunction early in life. Of special interest, particularly for the prediction of cerebral palsy, is the fidgety kind of GMs, the so-called fidgety movements (FMs) observable in 3- to 5-month-old infants. GMs are part of an infant's spontaneous motor repertoire and as such endogenously generated by the nervous system itself. Aim: The question was raised as to what extent the temporal organisation of FMs can be modulated by visual and acoustic stimulation. Study Design: Spontaneous motility in supine position with and without stimulation was recorded on video and the data were semiquantitatively analysed. We studied the effect of visual stimulation (red ring, red puppet), unanimated acoustic stimulation (68, 77, 88 dB) and of the mother, approaching her infant in a talkative manner after an absence of a quarter of an hour. Subjects: Twenty-nine healthy infants at the age of 12 weeks who all showed normal FMs. Results: Visual stimulation demonstrated that only the presentation of a red puppet elicited a significant level of focussed attention and led to a decrease of FMs. A red ring, unanimated acoustic stimulation as well as the interaction with the mother had no influence on the temporal organisation of FMs. Conclusion: This study demonstrated that in 3-month-old infants, FMs is a predominant motor pattern and that it is possible to assess FMs during (playful) social interaction. abstract_id: PUBMED:2633601 Smooth pursuit eye movements evoked by head-phone acoustic stimulation Smooth pursuit eye movement evoked by binaural acoustic stimulation is examined in the present work. The effect of lateralization was achieved by modulating the interaural difference in intensity, within the headphone, following a pendular pattern. The test was performed on 10 healthy subjects; oscillation frequencies of the acoustic target being 0.15, 0.5 and 0.83 Hz, stimulus intensity 65 and 92 dB SPL. The acoustic signal consisted of white noise. Each trial was performed with eyes closed and then with eyes open. The results demonstrate that sinusoidal eye movement can only be obtained in a few subjects when the acoustic target has the lowest velocity and especially at the higher intensity. In all other cases eye movement was characterized by multiple saccades. The variability in eye movement amplitude is reduced and the value of this parameter decreases significantly under open-eye conditions. The data obtained are in agreement with other reports obtained with free field acoustic stimulation and demonstrate that smooth pursuit eye movement is principally an ocular reflex. In conclusion, acoustic smooth pursuit eye movement appears normally to be characterized by multiple saccades whose pattern can be modulated by intensity and oscillation frequency of the acoustic target as well as by the presence of visual inputs and attentional factors. abstract_id: PUBMED:24055340 Spontaneous movements in the supine position of preterm infants with intellectual disability. 
Objective: Spontaneous movements at 2 months of corrected age in preterm infants with intellectual disability (ID) were investigated by assessing individual motor elements separated from movements involving the entire body. Methods: Video recordings of 20 preterm infants with ID (16 males, 4 females; median gestational age 26 weeks; median birth weight 810 g) were analyzed and were compared with those of 21 normal preterm infants (8 males, 13 females; median gestational age 30 weeks; median birth weight 1216 g). Results: In the preterm infants with ID at 2 months corrected age, startle response, lateral decumbent position, predominant shoulder rotation, and maintaining hip adduction were more frequently observed and hand sucking, maintaining shoulder abduction, to-and-fro shoulder abduction, to-and-fro elbow flexion, isolated hip adduction, to-and-fro hip abduction, and leg lift were less frequently seen than in the normal preterm infants (Fisher's exact test, p<0.05). Conclusion: Abnormal spontaneous movements at 2 months of age in preterm infants with ID result from persistent immature movements and non-emergence of mature movements. abstract_id: PUBMED:27010721 Developmental changes in intralimb coordination during spontaneous movements of human infants from 2 to 3 months of age. Human infants show a variety of spontaneous movements in the first few months of life. Although the pattern of spontaneous movements changes at approximately 2 months of age, the precise mechanism that governs the developmental changes in intralimb coordination remains unclear. In the present study, we focused on knee-ankle coordination during spontaneous movements of human infants from 2 to 3 months of age. Multiple attitude sensors were used to measure three-dimensional angular motion of knee and ankle joint motions. We acquired a one-dimensional time series of the knee joint angle around the putative hinge joint and a two-dimensional time series of ankle motions on the putative sagittal and frontal plane. First, we found that 3-month-old infants show a significant predominance to extend their knee joints, remarkably so on the left side. To quantify dissociated motions of the knee and ankle, we calculated the temporal correlation and the regression slope between them. We observed that 3-month-old infants moved their ankle joints more independently of knee motions than 2-month-old infants. Finally, we found that dissociated motions of the knee and ankle simultaneously develop with knee extension predominance. The developmental change from synchronization to dissociation of intralimb joint movements during spontaneous movements suggests that the development of the cortical and/or subcortical mechanism may mediate selective activation and inhibition of joint motions at approximately 2 months of age. abstract_id: PUBMED:31629232 A novel two-body sensor system to study spontaneous movements in infants during caregiver physical contact. Spontaneous movements, which refer to repetitive limb movements in the absence of any external stimulus, have been found to be reflective of neurodevelopmental status during infancy. These movements are modulated by both individual and environmental factors, including physical contact (holding) with the caregiver. However, it is a challenge to measure spontaneous movements during physical contact because infant-generated movements become coupled with caregiver-generated movements in such contexts. 
Here, we propose the use of a novel two-body sensor system to distinguish infant-generated movements in the presence of physical contact with the caregiver. Data from seven typically developing infants and their caregivers were recorded during different simulated home activities, which involved different combinations of physical interaction, caregiver's movement and infant positions. The two-body sensor system consisted of two wearable accelerometers - one placed on the infant's arm and one on the caregiver's arm, and we developed a Kalman-filter based algorithm to isolate the infant-generated movements. In addition, video was recorded for qualitative analysis. Results indicated that spontaneous movement activity was higher when there was no physical contact with caregiver. When there was physical contact, spontaneous movements were increased when the caregiver was still and when the infant was held horizontally. These results show that the novel two-body sensor system and the associated algorithms were able to isolate infant-generated movements during physical contact with the caregiver. This approach holds promise for the automated long-term tracking of spontaneous movements in infants, which may provide critical insight into developmental disorders. abstract_id: PUBMED:3687152 Controlled acoustic and photic stimulation of the fetus in the last pregnancy trimester The aim of the study communicated here was to check any fetal reactions occurring after external stimulation (light-optic and acoustic stimuli), depending on fetal behavioral status. The group studied (acoustic/light stimulation) comprised 85 patients with normal course of pregnancy at term (37th to 42nd week of gestation). They were compared with a control group of nonstimulated patients (n = 26), also with normal course of pregnancy and corresponding gestational age. Fetal heart frequencies and an external tokogram were recorded synchronously with the fetal respiration, body, limb and eye movements registered simultaneously by two independent observers and the fetal behavioral statuses were defined (Nijhuis, Prechtl et al. 1982; Boos and Schmidt 1984; van Vliet et al. 1985). The stimulation methods used were as follows: 1. Single-frequency pure tone (2 kHz, 120 dB, 5 sec) 2. Sawtooth-modulated pure tone (0.5-2.0 kHz, modulation frequency 10 Hz, 120 dB, 5 sec) 3. Low-frequency, sawtooth-modulated pure tone (with vibration effect), (0.1-0.3 kHz, modulation frequency 10 Hz, 120 dB, 5 sec) 4. Light stimulation: electronic photographic flash, luminous intensity 25 X 10(6) Lux/10(-3) sec, 8 flash sequences in 10 sec, distance from maternal abdomen 20 cm. In order to rule out purely coincidental changes in fetal attitude, both genuine and "sham" stimulations were performed in randomized sequence, in the latter case without any acoustic or light-optic signals. No differences in fetal reactions were found with the three different acoustic stimulation methods.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:36010065 Spontaneous Orofacial Movements at Writhing and Fidgety General Movements Age in Preterm and Full-Term Infants. Background: As general spontaneous movements at the writhing and fidgety ages have been important for the early identification of neurodevelopmental impairment of both full-term and preterm infants, the knowledge of the spontaneous orofacial movements at these ages also seems to be important for the diagnosis of oral function, particularly in preterm infants. 
Therefore, we decided to first classify preterm and full-term infants according to general movements ages, and then to record, describe, compare, and discuss their spontaneous orofacial movements. Methods: This cross-sectional study included 51 preterm infants (born between 28 and 36 weeks) and 43 full-term infants who were classified at the writhing and fidgety ages of Prechtl's method of general movements assessment. Their spontaneous orofacial movements were recorded on video, and The Observer XT software (Noldus) was used to record the quantitative values of the movements. Results: Poor repertoires of writhing movements were more frequent in the preterm infants (90.9%) compared to full-term ones (57.9%). Positive fidgety movements were observed in 100% of both preterm and full-term infants. Oral movements were similar for both preterm and full-term infants, regardless of their movement stage. Conclusion: All spontaneous orofacial movements were present both in preterm and full-term infants, albeit with higher frequency, intensity, and variability at fidgety age. abstract_id: PUBMED:24951073 Jerky spontaneous movements at term age in preterm infants who later developed cerebral palsy. Background: Assessment of spontaneous movements in infants has been a powerful predictor of cerebral palsy (CP). Recent advancements in computer-based video analysis can provide detailed information about the properties of spontaneous movements. Aims: The aim of this study was to investigate the relationship between spontaneous movements of the 4 limbs at term age and the development of CP at 3 years of age by using a computer-based video analysis system. Study Design And Subjects: We analyzed video recordings of spontaneous movements at 36-44 weeks postmenstrual age (PMA) for 145 preterm infants who were born preterm (22-36 weeks PMA with birthweights of 460-1498 g). Sixteen of the infants developed CP by 3 years of age, while 129 developed normally. We compared 6 movement indices calculated from 2-dimensional trajectories of all limbs between the 2 groups. Results: We found that the indices of jerkiness were higher in the CP group than in the normal group (p<0.1 for arms and p<0.01 for legs). No decline was observed in the average velocity and number of movement units in the CP group compared with the normal group. Conclusions: Jerkiness of spontaneous movements at term age provides additional information for predicting CP in infants born preterm. abstract_id: PUBMED:8345399 Changes in human spontaneous otoacoustic emissions with contralateral acoustic stimulation. According to current cochlear concepts, spontaneous otoacoustic emissions (SOAEs) are thought to be the product of outer hair cell (OHC) movement. Furthermore, efferent neurofibers from the olivocochlear bundle primarily innervate OHCs. Thus, we designed this investigation to determine changes in SOAE under conditions of contralateral acoustic stimulation. Forty-two normal ears of 34 subjects with uni- or bilateral SOAEs were examined. The sound pressure level (SPL) of SOAE was measured 3 times in each subject and was initially averaged. White noise of 30 or 40 dB SPL, believed not to affect the other side, was then presented contralaterally, and the SPL of SOAEs was measured and compared with the data obtained without contralateral stimulation. The SPL of single SOAEs was suppressed with 40 dB SPL stimulation, while the SPL of multiple SOAEs showed typical changes.
Interestingly, no significant suppression of SOAEs was noticed under conditions of contralateral acoustic stimulation of the impaired ear. These data suggest that contralateral acoustic stimulation could affect OHC function via the olivocochlear bundle. abstract_id: PUBMED:25185999 Cortical networks for face perception in two-month-old infants. Newborns have an innate system for preferentially looking at an upright human face. This face preference behaviour disappears at approximately one month of age and reappears a few months later. However, the neural mechanisms underlying this U-shaped behavioural change remain unclear. Here, we isolate the functional development of the cortical visual pathway for face processing using S-cone-isolating stimulation, which blinds the subcortical visual pathway. Using luminance stimuli, which are conveyed by both the subcortical and cortical visual pathways, the preference for upright faces was not observed in two-month-old infants, but it was observed in four- and six-month-old infants, confirming the recovery phase of the U-shaped development. By contrast, using S-cone stimuli, two-month-old infants already showed a preference for upright faces, as did four- and six-month-old infants, demonstrating that the cortical visual pathway for face processing is already functioning at the bottom of the U-shape at two months of age. The present results suggest that the transient functional deterioration stems from a conflict between the subcortical and cortical functional pathways, and that the recovery thereafter involves establishing a level of coordination between the two pathways. Answer: Spontaneous movements can be modulated by visual stimulation in 3-month-old infants, whereas acoustic stimulation does not appear to have the same effect. A study demonstrated that the presentation of a red puppet elicited a significant level of focused attention and led to a decrease in fidgety movements (FMs), which are a type of spontaneous movement. However, other stimuli, such as a red ring, unanimated acoustic stimulation, and interaction with the mother, did not influence the temporal organization of FMs (PUBMED:12191527). This suggests that only certain types of visual stimuli modulate spontaneous movements in infants, and that the effect is specific to the nature of the stimulus.
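The two-body sensor study summarized earlier in this record (PUBMED:31629232) mentions a Kalman-filter based algorithm for isolating infant-generated movements while the infant is held, but the abstract does not specify the implementation. The Python sketch below is only a minimal illustration of the general idea under stated assumptions: a scalar Kalman filter that models the infant-worn accelerometer signal as the infant's own motion plus an assumed linear leak of the caregiver's motion, with hypothetical coupling gain and noise variances. It is not the authors' algorithm.

```python
import numpy as np

def isolate_infant_motion(infant_acc, caregiver_acc, coupling=0.8,
                          process_var=0.05, meas_var=0.2):
    """Scalar Kalman filter sketch: recursively estimate the infant-generated
    component of the infant sensor signal, treating the caregiver sensor
    signal as a known input that leaks into the measurement."""
    x_est, p_est = 0.0, 1.0                      # state estimate and its variance
    estimates = np.empty(len(infant_acc))
    for t, (z, c) in enumerate(zip(infant_acc, caregiver_acc)):
        p_pred = p_est + process_var             # predict: random-walk state model
        innovation = z - (x_est + coupling * c)  # measurement minus predicted reading
        gain = p_pred / (p_pred + meas_var)
        x_est = x_est + gain * innovation        # update the infant-motion estimate
        p_est = (1.0 - gain) * p_pred
        estimates[t] = x_est
    return estimates

# Toy usage with synthetic accelerometer magnitudes (arbitrary units)
rng = np.random.default_rng(0)
caregiver = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
infant_own = (rng.random(500) < 0.05) * rng.normal(2.0, 0.5, 500)   # sparse bursts
observed = infant_own + 0.8 * caregiver + 0.2 * rng.standard_normal(500)
recovered = isolate_infant_motion(observed, caregiver)
print(np.round(recovered[:5], 3))
```

In practice one would estimate the coupling between the two sensors from the data (and likely work per axis rather than on magnitudes), but the recursive predict-update structure shown here is the core of any Kalman-filter approach to separating the two coupled signals.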
Instruction: Can there be a single system for peer assessment of professionalism among medical students? Abstracts: abstract_id: PUBMED:10777974 Factors associated with delayed tobacco uptake among Vietnamese/Asian and Arabic youth in Sydney, NSW. Objective: To describe the smoking behaviour and possible reasons for delayed uptake of tobacco smoking among Arabic and Vietnamese/Asian speaking senior school students in Sydney. Method: A descriptive study involving four adult in-depth interviews and five student focus groups plus a quantitative survey of 2,573 school students attending Years 10 and 11 from 12 high schools with high Vietnamese and Arabic populations was conducted in Sydney in 1998. Self-reported smoking behaviour and peer, parent, school and cultural background information was collected. Results: Students who smoke were more likely to have more than $20 a week pocket money, be from an English-speaking background, have no rules at home about smoking, have family members who smoke, not feel close to their father, spend three or more evenings a week out with friends, and have negative perceptions of the school environment and of the level of teacher support. They were less likely to smoke if they perceived their peers to be unsupportive. Conclusions: These results confirm the delayed uptake of smoking among students from Vietnamese/Asian and Arabic-speaking backgrounds compared with those from an English-speaking background. A number of family and school factors were associated with smoking. Implications: Positive parental modelling, active parenting including awareness of or supervision of student leisure time, strict rules about not smoking and less pocket money are important strategies for preventing smoking among all adolescents. abstract_id: PUBMED:23524992 Personal attitudes, perceived social norms, and health-risk behavior among female adolescents with chronic medical conditions. To examine whether perceived peer/parent norms or personal beliefs about adolescent substance use influence substance use among female adolescents with chronic medical conditions. Sixty-eight females reported on substance use, personal beliefs, and perceived peer/parent norms. Personal beliefs and perceived peer/parent norms were associated with adolescents' current and future substance use. Although perceived peer norms accounted for variance in current substance use, only personal beliefs accounted for variance in future alcohol use. Targeting perceived peer norms may be effective for intervention efforts among adolescents endorsing current substance use, whereas alcohol use prevention efforts should target personal beliefs. abstract_id: PUBMED:24853361 The use of an ecodevelopmental approach to examining substance use among rural and urban Latino/a youth: peer, parental, and school influences. Using an ecodevelopmental framework, we examined how peer, parent, and student variables influence substance (tobacco, alcohol, and marijuana) use among rural and urban Latino/a adolescents (N = 2,500). Generally speaking, Latino/a adolescents in rural areas endorsed higher levels of substance use. Among the primary variables of study, there were a number of nuanced differences noted based on location, gender, and type of substance. Peer use was related to individual substance use in both rural and urban areas. However, peer use was a significantly stronger predictor of tobacco use among rural Latinas than among urban-dwelling Latinas.
Parental monitoring was not predictive of urban marijuana use, yet was negatively associated with substance use for all subgroups and was especially pronounced for female alcohol use. Parental emotional involvement predicted higher alcohol use among urban boys. School achievement was negatively associated with substance use for all subgroups while, conversely, school involvement was associated with higher alcohol use for rural boys. Cultural and contextual implications for intervention and prevention are discussed. abstract_id: PUBMED:8500453 Decision-making orientation and AIDS-related knowledge, attitudes, and behaviors of Hispanic, African-American, and white adolescents. How adolescents' personal sense of directedness (i.e., peer, parent, or self-directed orientation) affects the decision-making processes of adolescent students regarding AIDS-related knowledge, attitudes, beliefs, behaviors, and skills (KABBS) is examined. The sample consisted of 10th-grade students in 8 public high schools (N = 2,515) in Dade County (greater Miami), Florida. The findings showed that decision-making orientation and directedness was a significant predictor of AIDS-related KABBS of adolescents. Overall, the level of AIDS-related KABBS that were associated with low risk was found significantly more often among self-directed students and least often among peer-directed students. The findings of this study suggest that future preadult health-risk research should incorporate the concept of differences of information processing across adolescents. abstract_id: PUBMED:10446768 Trends in adolescent alcohol and other substance use: relationships to trends in peer, parent, and school influences. The aim of this study is to improve understanding of trends in adolescent substance use by comparing them with trends in peer substance use, school problem behavior, parental disapproval of alcohol and drugs, and exposure to school prevention information. These trends were determined using data from three large comparable representative surveys of 7-12th grade students in New York State conducted in 1983, 1990, and 1994. Analysis of variance was used to test the significance of the trends and to identify meaningful differences in trends by demographic subgroups (gender, grade level, and ethnicity). Adolescent alcohol and substance use declined in the 1980s, then increased from 1990 to 1994. Trends in friends' substance use and school problem behavior paralleled the alcohol and other substance use trends, consistent with their being part of the same adolescent problem behavior syndrome. Parental disapproval also followed a trend consistent with the substance use trends, i.e., parental disapproval increased in the 1980s but then decreased in the 1990s. However, the trend in school prevention influences did not parallel these substance use trends: student exposure to school primary prevention programs continued to increase from 1990 to 1994. Use of alcohol, cigarettes, marijuana, and other illicit drugs from 1990 to 1994 increased fastest among the younger students, despite increased school prevention exposure. Other factors must be sought as possible explanations of the increasing alcohol and substance use among adolescents. abstract_id: PUBMED:32986075 Effects of Counseling by Peer Human Advisors vs Computers to Increase Walking in Underserved Populations: The COMPASS Randomized Clinical Trial.
Importance: Effective and practical treatments are needed to increase physical activity among those at heightened risk from inactivity. Walking represents a popular physical activity that can produce a range of desirable health effects, particularly as people age. Objective: To test the hypothesis that counseling by a computer-based virtual advisor is no worse than (ie, noninferior to) counseling by trained human advisors for increasing 12-month walking levels among inactive adults. Design, Setting, And Participants: A cluster-randomized, noninferiority parallel trial enrolled 245 adults between July 21, 2014, and July 29, 2016, with follow-up through September 15, 2017. Data analysis was performed from March 15 to December 20, 2018. The evidence-derived noninferiority margin was 30 minutes of walking per week. Participants included inactive adults aged 50 years and older, primarily of Latin American descent and capable of walking without significant limitations, from 10 community centers in Santa Clara and San Mateo counties, California. Interventions: All participants received similar evidence-based, 12-month physical activity counseling at their local community center, with the 10 centers randomized to a computerized virtual advisor program (virtual) or a previously validated peer advisor program (human). Main Outcomes And Measures: The primary outcome was change in walking minutes per week over 12 months using validated interview assessment corroborated with accelerometry. Both per-protocol and intention-to-treat analysis was performed. Results: Among the 245 participants randomized, 193 were women (78.8%) and 241 participants (98.4%) were Latino. Mean (SD) age was 62.3 (8.4) years (range, 50-87 years), 107 individuals (43.7%) had high school or less educational level, mean BMI was 32.8 (6.8), and mean years residence in the US was 47.4 (17.0) years. A total of 231 participants (94.3%) completed the study. Mean 12-month change in walking was 153.9 min/wk (95% CI, 126.3 min/wk to infinity) for the virtual cohort (n = 123) and 131.9 min/wk (95% CI, 101.4 min/wk to infinity) for the human cohort (n = 122) (difference, 22.0, with lower limit of 1-sided 95% CI, -20.6 to infinity; P = .02); this finding supports noninferiority. Improvements emerged in both arms for relevant clinical risk factors, sedentary behavior, and well-being measures. Conclusions And Relevance: The findings of this study indicate that a virtual advisor using evidence-based strategies produces significant 12-month walking increases for older, lower-income Latino adults that are no worse than the significant improvements achieved by human advisors. Changes produced by both programs are commensurate with those reported in previous investigations of these behavioral interventions and provide support for broadening the range of light-touch physical activity programs that can be offered to a diverse population. Trial Registration: ClinicalTrials.gov Identifier: NCT02111213. abstract_id: PUBMED:8879083 Substance use among incarcerated adolescents: associations with peer, parent, and community use of substances. This study of incarcerated adolescents documents the number and proportion of incarcerated adolescents who use cigarettes, alcohol, and illegal drugs; examines substance use among the adolescents' peers, parents, and community members; and examines associations between incarcerated adolescents' use of substances and substance use by peers, parents, and community members. 
Questionnaire data indicated that substance use was common among incarcerated adolescents as well as among the adolescents' peers, parents, and community members. Logistic regression analyses revealed that, similar to patterns observed in the general population, peer substance use was the strongest predictor of substance use among incarcerated adolescents. abstract_id: PUBMED:34536716 Impacts of the Respecting the Circle of Life teen pregnancy prevention program on risk and protective factors for early substance use among Native American youth. Background: Early substance use disproportionately impacts Native American (Native) youth and increases their risk for future abuse and dependence. The literature urges interventions to move beyond focusing on single risk behaviors (e.g. substance use) and instead have the capacity to improve health risk behaviors co-occurring during adolescence, particularly among Native populations for whom few evidence-based interventions (EBIs) exist. We evaluated the effectiveness of the Respecting the Circle of Life program (RCL) on risk and protective factors for early substance use. RCL is a culturally tailored EBI shown to improve sexual health outcomes among Native youth. Methods: We conducted secondary analyses of data collected through a community-based randomized controlled trial of RCL evaluated among Native youth (ages 11-19) residing on a rural reservation between 2015 and 2020 (N = 534, 47.4% male). We used linear regression, controlling for baseline age and sex, to test between-group differences in outcomes at 3, 9, and 12 months post-intervention. Models were stratified by sex and age (11-12, 13-14, and 15+ years of age) to examine differences within these subgroups. Results: Youth receiving RCL reported lower intention to use substances through 12-month follow-up (p = 0.006). Statistically significant improvements were also observed across peer, parent, and sexual partner risk and protective factors to delay substance use initiation, with notable differences among boys and participants ages 13-14. Conclusions: RCL is a primary prevention, skills-based program effective in preventing risks for substance use. This evaluation underscores the value in developing programs that influence concurrent adolescent risk behaviors, especially for Native communities who endure multiple health disparities. abstract_id: PUBMED:38460913 ROUTE-T1D: A behavioral intervention to promote optimal continuous glucose monitor use among racially minoritized youth with type 1 diabetes: Design and development. Background: Type 1 diabetes management is often challenging during adolescence, and many youth with type 1 diabetes struggle with sustained and optimal continuous glucose monitor (CGM) use. Due to racial oppression and racially discriminatory policies leading to inequitable access to quality healthcare and life necessities, racially minoritized youth are significantly less likely to use CGM. Methods: ROUTE-T1D: Research on Optimizing the Use of Technology with Education is a pilot behavioral intervention designed to promote optimal CGM use among racially minoritized youth with type 1 diabetes. Intervention strategies include problem-solving CGM challenges and promoting positive caregiver-youth communication related to CGM data. Results: This randomized waitlist intervention provides participants with access to three telemedicine sessions with a Certified Diabetes Care and Education Specialist. Caregiver participants are also connected with a peer-parent coach.
Conclusion: Hypothesized findings and anticipated challenges are discussed. Future directions regarding sustaining and optimizing the use of diabetes technology among racially minoritized pediatric populations are reviewed. abstract_id: PUBMED:29531988 Daily tobacco smoking, heavy alcohol use, and hashish use among adolescents in southern Sweden: A population-based multilevel study. Introduction: The aim of this study was to investigate school contextual effects on daily tobacco smoking, heavy alcohol use and hashish use among adolescents, using multilevel analyses adjusting for individual-level factors. Methods: The 2012 public health survey among adolescents in Skåne includes pupils in ninth grade in primary school (predominantly 15-16 years old) and second grade in secondary school (gymnasium) (predominantly 17-18 years old). Multilevel logistic regressions were performed. Results: The prevalence of all three behaviors was higher in the second grade in the gymnasium. Several sociodemographic, psychosocial and parental factors were associated with these behaviors. In the ninth grade, variance partition coefficients (VPCs) for tobacco smoking decreased from 10.2% in the empty model to 1.9% in the fully adjusted model, for heavy alcohol use from 6.5% to 6.3%, while VPCs for hashish increased from 9.9% to 11.0%. In the second grade, VPCs for daily tobacco smoking decreased from 13.6% in the empty model to 6.5% in the fully adjusted model, VPCs for heavy alcohol use decreased from 4.6% to 1.7%, and VPCs for hashish use increased from 7.3% to 8.3%. Conclusions: Daily tobacco smoking (in both grades) and heavy alcohol use in the second grade in the gymnasium may be preventable by actions directed against individual-level protective factors including social capital, social support and peer/parent behavior and attitude, while interventions directed at school contexts may be more important for alcohol use in the ninth grade and hashish use in both grades. Answer: The provided abstracts do not directly address the question of whether a single system for peer assessment of professionalism among medical students can be established. The abstracts focus on various factors influencing substance use among adolescents and young adults, including peer influence, parental involvement, cultural background, and school environment (PUBMED:10777974, PUBMED:23524992, PUBMED:24853361, PUBMED:8500453, PUBMED:10446768, PUBMED:32986075, PUBMED:8879083, PUBMED:34536716, PUBMED:38460913, PUBMED:29531988). They discuss the impact of peer, parent, and community influences on behaviors such as tobacco smoking, alcohol use, and drug use, as well as interventions to modify these behaviors. However, these abstracts do not provide information on peer assessment systems in educational settings, particularly in the context of assessing professionalism among medical students. Peer assessment in medical education typically involves students evaluating the performance and professional behaviors of their peers, which can include aspects such as communication skills, teamwork, ethical conduct, and reliability. To answer the question, one would need to consider the unique context of medical education, the specific competencies that define professionalism in this field, and the existing literature on peer assessment methods in medical schools. 
It would also be important to consider the challenges and benefits of standardizing such a system across different institutions, as well as the need for training and calibration to ensure fairness and reliability in peer evaluations. Since the abstracts provided do not cover these topics, additional research and literature on medical education and peer assessment would be required to adequately address the question.
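The school-context study cited in this record (PUBMED:29531988) summarizes clustering with variance partition coefficients (VPCs) from multilevel logistic models. For a random-intercept logistic model, the latent-response VPC is conventionally computed as sigma^2_school / (sigma^2_school + pi^2/3). The short Python sketch below applies that formula; the school-level variances are illustrative back-calculations chosen so the resulting VPCs roughly match the reported ninth-grade daily-smoking figures (about 10% in the empty model and about 2% after full adjustment), and they are not the study's published variance estimates.

```python
import math

def vpc_logistic(school_level_variance):
    """Latent-response variance partition coefficient for a random-intercept
    logistic model: the share of total variance attributable to schools."""
    level1_variance = math.pi ** 2 / 3   # fixed level-1 variance on the logit scale
    return school_level_variance / (school_level_variance + level1_variance)

# Hypothetical school-level intercept variances (empty vs. fully adjusted model)
for label, sigma_u2 in [("empty model", 0.37), ("fully adjusted model", 0.065)]:
    print(f"{label}: VPC = {vpc_logistic(sigma_u2):.1%}")
```

A VPC that shrinks once individual-level covariates are added (as reported for daily smoking) suggests that much of the apparent between-school variation reflects pupil composition, whereas a VPC that persists or grows (as reported for hashish use) points to genuine school-context effects.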
Instruction: Ethnic variation in timing of hospice referral: does having no informal caregiver matter? Abstracts: abstract_id: PUBMED:18363492 Ethnic variation in timing of hospice referral: does having no informal caregiver matter? Background: The length of hospice stay, as an indicator of timing of hospice referral, is an important outcome to examine in end-of-life care because it is relevant to the quality and cost efficiency of end-of-life care that patients receive. Although the majority receives nonmedical care from informal caregivers, many elderly hospice users rely on paid caregivers or staff of residential facilities. Objective: This study examined whether availability of an informal primary caregiver interacts with ethnicity to affect length of hospice stay. Design: A retrospective cross-sectional study. Setting/subjects: Data on 3024 hospice patients aged 65 and older discharged between 1997 and 2000 were extracted from the National Center for Health Statistics' National Home and Hospice Care Survey (NHHCS). Measurements: Length of hospice stay prior to death or discharge. Results: Survival analysis revealed that among patients with formal caregivers, minority patients were likely to have significantly shorter hospice stays than non-Hispanic whites. There were no significant ethnic differences in length of stay among patients with informal caregivers. Conclusions: Our findings suggest that ethnic differences in length of stay should be discussed in terms of type of caregiver, not just type of setting, since patients in residential facilities can have informal primary caregivers who are vigilant advocates for their dying relatives. We discuss possible reasons for the influence of having formal caregivers on length of stay of minority elders. abstract_id: PUBMED:38271546 Family Caregiver Communication and Perceptions of Involvement in Hospice Care. Background: The burden of caregiving for family members is significant and becomes particularly challenging at end of life, with negative effects on mental health, including anxiety and depression. Research has shown caregivers need better communication with their health care team. Objectives: To evaluate the relationship between hospice team communication with caregivers and caregiver involvement in care. Methods: The purpose of this secondary analysis of data collected from a U.S.-based cluster crossover randomized trial was to evaluate whether caregiver-centered communication (Caregiver-Centered Communication Questionnaire) is associated with a caregiver's perceptions of involvement in care (Perceived Involved in Care Scale). A block-wise approach was used to estimate linear models, which were created using total scores and subscale scores. Results: Caregiver-centered communication was positively associated with perceptions of involvement in care. Conclusion: Skilled communication between hospice clinicians and family caregivers is critical in helping family members perceive that they are involved in the care of their loved one. There could be similar benefit in caregiver-centered communication during cancer treatment as well. abstract_id: PUBMED:37677767 Caregiver Burden and Workplace Productivity Among Hospice Cancer Caregivers. Objectives: To examine the relationships among family caregiver burden and workplace productivity and activity impairment among home hospice family caregivers of individuals with cancer who worked while providing end-of-life caregiving.
Sample & Setting: Baseline data from a longitudinal study of communication between hospice providers and hospice family caregivers were used for this secondary analysis. Methods & Variables: Working family caregivers with complete workplace productivity and activity impairment data were included in this analysis (N = 30). Demographic data, caregiver burden, and workplace productivity and activity impairment were examined with descriptive statistics, correlation analysis, and hierarchical linear regressions. Results: Hospice family caregivers were primarily White, female, married, and employed full-time. Caregiver burden levels were significantly positively associated with activity impairment, presenteeism, and work productivity loss. These relationships remained statistically significant when controlling for age. Implications For Nursing: Hospice and oncology nurses can support working hospice family caregivers by assessing for burden and associated workplace challenges, as well as by providing referrals for respite and community resources. abstract_id: PUBMED:33038426 Associations Between Hospice Care and Scary Family Caregiver Experiences. Context: Hospice deaths in the U.S. are increasing. Dying hospice patients may have rapidly emerging needs the hospice team cannot immediately meet, exposing family caregivers to fright-inducing (i.e., scary) situations. Objectives: To examine relationships between hospice care and family caregiver exposures and psychological responses to witnessing common and distressing patient symptoms near the end of life. Methods: Secondary analysis of prospective cohort study of 169 patients with advanced cancer and their family caregivers was analyzed. Multivariable regression analyses modeled associations between hospice use and caregiver exposures and psychological responses (fear and helplessness) to witnessing distressing symptoms common near death, adjusting for potential confounding influences (e.g., home death, patient characteristics, and suffering). Caregiver self-reported exposures and responses to observing patient symptoms during the last month of life were assessed using the validated Stressful Caregiving Response to Experiences of Dying (SCARED) scale. Results: Hospice care was significantly positively associated with more exposures and negative psychological responses to distressing patient symptoms, adjusting for home death, patient characteristics, and physical and mental suffering. On average, hospice patients' caregivers scored 1.6 points higher on the SCARED exposure scale and 6.2 points higher on the SCARED psychological response scale than caregivers of patients without hospice (exposure: 10.53 vs. 8.96; psychological responses: 29.85 vs. 23.67). Patient pain/discomfort, delirium, and difficulty swallowing/choking were reported by three-fourths of caregivers and associated with the most fear and helplessness among caregivers. Conclusion: Hospice care is associated with more exposures to and caregiver fear and helplessness in response to scary patient experiences. Research is needed to understand how better to support family caregivers of hospice patients to enable them to cope with common distressing symptoms of dying cancer patients. Hospice clinicians providing additional education and training about these symptoms might enable caregivers to better care for dying loved ones and reduce the stresses of end-of-life caregiving. 
abstract_id: PUBMED:35666474 Hospice Transitions From the Perspective of the Caregiver: A Qualitative Study and Development of a Preliminary Hospice Transition Checklist. Background: Relative to curative and traditional care delivery, hospice care has been associated with superior end of life (EOL) outcomes for both patient and caregiver. Still, comprehensive orientation and caregiver preparation for the transition to hospice are variable and often inadequate. From the perspective of the caregiver, it is unclear what information would better prepare them to support the transition of their loved one to hospice. Objectives: Our two sequential objectives were: 1) Explore caregivers' experiences and perceptions of the transition of their loved one to hospice; and 2) Develop a preliminary checklist of considerations for a successful transition. Design: We conducted semi-structured interviews and used a descriptive inductive/deductive thematic analysis to identify themes. Subjects: 19 adult caregivers of patients across the United States who had enrolled in hospice and died in the year prior (January - December 2019). Measurements: An interview guide was iteratively developed based on prior literature and expanded through collaborative coding and group discussion. Results: Four key themes for inclusion in our framework emerged: hospice intake, preparedness, burden of care and hospice resources. Conclusions: Incorporating elements of our preliminary checklist into the orientation procedures, such as educating families on the goals of hospice or offering opportunities for respite care, may improve satisfaction with the transition and the entirety of the hospice experience. Future directions include testing the effectiveness of the checklist and adapting it for expanded populations. abstract_id: PUBMED:36799950 Moderating Effect of Work on Burden and Hospice Family Caregiver Well-Being. Background: Hospice family caregiving is often physically and emotionally taxing, but it is unclear how employment status impacts hospice caregiver burden and well-being. Objective(s): To examine the relationship between caregiver burden and well-being, and the moderating role of employment status (i.e., working, not working). Design, Setting/Subjects: This was a secondary data analysis of baseline data from a longitudinal observational study of family caregivers of home hospice cancer patients in the United States. Descriptive statistics, correlations, hierarchical linear regressions, and moderation analyses were used. Measurements: Baseline data included demographics, preparedness for caregiving, perceived burden, and well-being (i.e., global health, positive affect and well-being, anxiety, depression). Results: The majority of the 90 participants were White (86.7%), married (71.1%), and college educated (85.6%). The mean age was 58.27 ± 14.22, 53.3% cared for a spouse, and 56.6% worked full or part time. After controlling for demographics, and using employment status as a moderator, greater caregiver burden was significantly associated with lower global health (β = -0.82 [-1.22 to -0.42], p < 0.001), positive affect and well-being scores (β = -0.69 [-1.03 to -0.36], p < 0.001), and higher depression (β = 0.24 [0.12-0.37], p < 0.001) and anxiety scores (β = 0.22 [0.07-0.37], p < 0.005). Employment status significantly moderated the relationship between burden and global health (β = 0.65 [0.22 to 1.08], p < 0.005), and burden and positive affect and well-being (β = 0.45 [0.06 to 0.84], p < 0.05).
At high levels of burden, workers had greater well-being than nonworkers. Conclusion(s): More burdened hospice caregivers may experience worse well-being, especially among nonworking caregivers. Employment may be a protective factor for highly burdened hospice family caregivers. abstract_id: PUBMED:28029712 Caregiver, patient, and nurse visit communication patterns in cancer home hospice. Objective: Few studies have examined the triadic communication between patients, spouse caregivers, and nurses in the home hospice setting. Thus, little is known about the types of communication patterns that unfold. The goals of the study were to, first, identify common patterns of communication in nurse-patient-caregiver home hospice visits and, second, to identify nurse, caregiver-patient dyad, and visit characteristics that predict visit communication patterns. Method: Nurses (N = 58) and hospice cancer patient and spouse caregiver dyads (N = 101; 202 individuals) were recruited from 10 hospice agencies. Nurses audio recorded visits to patient/caregiver homes from study enrollment until patient death. All patient, caregiver, and nurse utterances from the audio recordings were coded using an adapted Roter interaction analysis system. Using identified codes, cluster analysis was conducted to identify communication patterns within hospice visits. Logistic regression was used with demographic variables to predict visit communication patterns. Results: Six visit communication patterns were identified and were defined largely by 2 dimensions: (1) either the patient, the caregiver, or the patient and caregiver dyad interacting with the nurse and (2) the relatively high or low expression of distress during the visit. Time until death significantly predicted several clusters. Conclusion: This study leads the way in outlining triadic communication patterns in cancer home hospice visits. Our findings have implications for nursing education, letting future nurses know what to expect, and lays the foundation for future research to determine effectiveness and interventions to improve health care communication. abstract_id: PUBMED:34793244 Anxiety, Depression, Quality of Life, Caregiver Burden, and Perceptions of Caregiver-Centered Communication among Black and White Hospice Family Caregivers. Background: Anxiety, depression, and reduced quality of life (QOL) are common problems for hospice family caregivers, but it is unknown if disparities in these experiences exist among Black and White caregivers. Objectives: To compare anxiety, depression, QOL, caregiver burden, and perceptions of caregiver-centered hospice team communication between Black and White hospice family caregivers. Design: Secondary analysis of baseline data from two randomized clinical trials. Setting/Subjects: Seven hundred twenty-two Black and White hospice family caregivers ages 18+ from Midwestern and Northeastern United States. Measurements: Measures included the Generalized Anxiety Disorder-7 (GAD-7), Patient Health Questionnaire (PHQ-9), Caregiver Quality-of-Life Index-Revised (CQLI-R), Zarit Burden Interview (ZBI-7), and Caregiver-Centered Communication Questionnaire (CCCQ). Results: Black and White caregivers differed across demographic and socioeconomic variables. Nearly one-third of hospice family caregivers reported moderate-to-severe anxiety (32.1%) and moderate-to-severe depressive symptoms (32.0%). 
White caregivers reported lower QOL than Black caregivers (p = 0.04), specifically in emotional (p = 0.02) and social (p = 0.0005) domains. In multiple regression analyses controlling for caregiver and patient factors, we found no racial differences in depression, anxiety, QOL, caregiver burden, or perceptions of caregiver-centered hospice communication. Conclusions: Despite demographic and socioeconomic differences, Black and White hospice family caregivers experience similarly high levels of anxiety, depression, burden, and perceptions of hospice communication. Interventions to support hospice family caregivers across racial groups and research that identifies factors that mediate social determinants of health in this population are needed. The development and validation of culture-concordant mental health screening tools in racially diverse populations is recommended. Trial registration: ClinicalTrials.gov NCT02929108 and ClinicalTrials.gov NCT01444027. abstract_id: PUBMED:35620797 Insomnia Symptoms Among Hospice Family Caregivers: Prevalence and Association with Caregiver Mental and Physical Health, Quality of Life, and Caregiver Burden. Background: Poor sleep exacerbates mental health problems and reduces quality-of-life (QOL) but prevalence of insomnia symptoms among hospice family caregivers and associations of poor sleep with caregiver health and QOL outcomes are not known. Objective: To describe prevalence of insomnia symptoms among hospice family caregivers and compare anxiety, depression, self-rated health, QOL, and caregiver burden between hospice family caregivers with and without insomnia symptoms. Methods: Descriptive sub-study using data collected during baseline interviews of hospice family caregivers involved in a randomized clinical trial in Midwestern United States (xxxxxxxx). Caregivers were dichotomized based on Insomnia Severity Index (ISI) scores (8+ indicated insomnia symptoms). Results: Among 57 hospice family caregivers, the mean ISI score was 8.2; nearly half (49.1%) experienced insomnia symptoms. Compared to caregivers without insomnia symptoms, caregivers with insomnia symptoms reported 2.4 times greater mean anxiety scores (4.7 vs 11.4); 3.5 times greater mean depression scores (3.1 vs 10.7); 2.1 times greater caregiver burden scores (5.6 vs 11.8); and 1.3 times lower self-rated health (3.5 vs 2.8); 1.3 times lower total QOL scores (29.3 vs 22.6); including differences in emotional QOL (7.9 vs 2.2), social QOL (7.2 vs 3.0), and physical QOL (7.4 vs 5.3). Conclusions: Hospice family caregivers experience high prevalence of insomnia symptoms; caregivers with insomnia symptoms report worse anxiety, depression, caregiver burden, QOL, self-rated health. Clinicians must screen hospice caregivers for poor sleep and mental health and offer supportive interventions that improve their sleep and health. Policy makers must expand hospice benefits to better support family caregivers. abstract_id: PUBMED:26384557 Unique Correlates of Heart Failure and Cancer Caregiver Satisfaction With Hospice Care. Context: Heart failure patients, along with their informal caregivers are increasingly enrolling in hospice care. Caregiver satisfaction with hospice care is a key quality indicator. The role that diagnosis plays in shaping satisfaction is unclear. Objectives: Our aim was to identify unique correlates of caregiver satisfaction in heart failure and cancer caregivers and explore whether the identified correlates differ between the two diagnosis groups. 
Methods: This was a retrospective cohort study of national data collected in 2011 by the National Hospice and Palliative Care Organization using the 61-item Family Evaluation of Hospice Care survey. We used complete Family Evaluation of Hospice Care responses of adult heart failure (n = 7324) and cancer (n = 23,871) caregivers. Multiple logistic regression was used to examine the relationship between possible correlates and caregiver satisfaction. Correlates examined included caregiver and patient demographics, patient clinical characteristics, and hospice characteristics. Results: Caregiver-reported patient dyspnea was associated with global and symptom management satisfaction in the heart failure cohort, whereas caregiver race was associated with global and symptom management satisfaction in the cancer cohort. Nursing home placement was associated with lower satisfaction odds in both cancer and heart failure cohorts, but heart failure patients were twice as likely as cancer patients to receive care in a nursing home. Conclusion: This study generated hypotheses about unique factors related to caregiver satisfaction among two diagnosis cohorts that require further study, particularly the impact of race on satisfaction in the cancer cohort and the management of dyspnea in heart failure hospice patients. Answer: The study by Ngo-Metzger et al. (PUBMED:18363492) examined the impact of the availability of an informal primary caregiver on the length of hospice stay among different ethnic groups. The retrospective cross-sectional study used data from 3024 hospice patients aged 65 and older. The results indicated that among patients with formal caregivers, minority patients were likely to have significantly shorter hospice stays than non-Hispanic whites. However, there were no significant ethnic differences in length of stay among patients with informal caregivers. This suggests that the presence of an informal caregiver may mitigate ethnic disparities in the timing of hospice referral, as patients with informal caregivers did not show the same variation in hospice stay length as those with formal caregivers. The study concludes that ethnic differences in length of stay should be discussed in terms of the type of caregiver, highlighting the importance of informal caregivers in potentially extending the length of hospice stay for minority elders.
Instruction: Are we aware of the external factors that influence our food intake? Abstracts: abstract_id: PUBMED:18823179 Are we aware of the external factors that influence our food intake? Objectives: This research examines the extent to which people accurately report some of the external influences on their food intake. Design: In two studies, specific factors (the presence and behavior of others) were manipulated in order to influence the amount of food that individuals consumed. Main Outcome Measures: The main outcomes of interest were participants' spontaneously generated explanations for their food intake (Study 1; n = 122), and their ratings of the importance of several potential determinants of food intake (Study 2; n = 75). Results: In Study 1, there was high concordance between the amounts eaten by members of a dyad, but very few participants indicated that they were influenced by their partner's behavior; they instead identified hunger and taste as the primary determinants of intake. Study 2 showed that participants' intake was strongly influenced by the behavior of others, but people rated taste and hunger as much more important influences on their intake. Conclusions: If external environmental factors influence people's food intake without their awareness or acknowledgment, then maintaining a healthy diet can be a challenge, with long-term consequences for health and well-being. abstract_id: PUBMED:25447014 Awareness of social influence on food intake. An analysis of two experimental studies. There is consistent evidence that the amount of food we consume can be influenced by the eating behaviour of other people. Some previous experimental studies reported that consumers are unaware of this influence on their behaviour. The present research tested whether people may be more aware of social influence on their eating than previously assumed. In two studies, participants (total n = 160) were exposed to information about the amount of snack food other people had been eating shortly before being served the same snack food and eating as much as they liked. After this, participants responded to questions regarding whether they thought their food intake had been socially influenced, and reported the reasons why they believed they had or had not been influenced. Of the 160 participants, 34% reported that they had been influenced, 10% were unsure and 56% reported they had not been influenced. Crucially, participants' reports of social influence appeared to be accurate; the food intake of participants reporting social influence was significantly affected by the amount of food other people had been eating, whereas the food intake of participants denying social influence was unaffected. Individuals may be more aware of the effect that social influence has on their eating behaviour than previously assumed. Further work is needed to identify the factors which determine whether people are susceptible to social influence on eating behaviour. abstract_id: PUBMED:25911264 External eating mediates the relationship between impulsivity and unhealthy food intake. Recent evidence from the eating domain shows a link between impulsivity and unhealthy food intake. However, the mechanism underlying this relationship remains unclear. One possibility is an external eating style, which has been linked to both impulsivity and food intake. The current study investigated the potential mediating role of external eating in the relationship between impulsivity and food intake. 
Participants were 146 undergraduate women who completed measures of impulsivity and external eating, and took part in a laboratory taste test as a behavioural index of unhealthy snack food intake. It was found that attentional and motor impulsivity interacted in predicting sweet food intake, but only motor impulsivity predicted both external eating and sweet food intake. Furthermore, the relationship between motor impulsivity and food intake was mediated by external eating. These findings support the development of interventions aimed at targeting specific aspects of impulsivity in order to reduce unhealthy eating behaviour. abstract_id: PUBMED:29355211 Real-time topic-aware influence maximization using preprocessing. Background: Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users are often topic-dependent and information, ideas, innovations etc. being propagated in networks are typically mixtures of topics. Methods: In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. Results: We explore two preprocessing algorithms with theoretical justifications. Conclusions: Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort. abstract_id: PUBMED:34920051 The influence of experimental confederate peers on children's food intake: A systematic review and meta-analysis. Confederates influence eating behaviour. Systematic reviews and meta-analyses have been conducted on this topic, however, the majority have examined adults, or a combination of adults and children, therefore, an up-to-date meta-analysis is needed to examine the impact of confederate peers on children's food intake. We systematically reviewed and meta-analysed the influence of confederate peers on children's food intake in research using present and remote-confederates. Six publications summarising findings from seven studies were included in this review. One publication was excluded from the meta-analysis because it was not possible to extract the required data. The meta-analysis showed that children were influenced by confederate peers; eating more when exposed to a high-intake compared to a no or low-intake confederate. Larger effects were observed when children were exposed to a remote-than a present-confederate, and for studies using healthy snacks compared to high fat high sugar (HFHS) snacks. No difference in effect size was observed when children were exposed to a high-vs. low-intake confederate compared to a high-vs. no-intake confederate. In the narrative synthesis, confederate intake influenced children's eating behaviour 24-h later, and possible moderators and a potential mechanism underlying the influence of confederates were identified. Caution is needed when interpreting the results, as the sub-groups were not compared statistically due to high heterogeneity, and a small number of studies were included in this review. 
Furthermore, all studies using the present-confederate design examined HFHS snack intake, therefore, it is unclear whether observed differences in effect sizes between present- and remote-confederates may be due to confederate or food type. Research is needed to further examine the influence of confederate peers on children's food intake and to examine mechanisms and moderators. abstract_id: PUBMED:25149198 Impression management and food intake. Current directions in research. This paper reviews recent research on consumption stereotypes (judgments of others based on what they eat) and impression management (modifying one's eating behavior in order to create a particular impression). A major recent focus in the literature has been on masculinity and meat eating, with research showing that meat is strongly associated with masculinity, and that individuals who follow a meat-based diet are perceived as more masculine than are individuals who follow a vegetarian diet. Although direct evidence for impression management through food intake remains sparse, a number of methodological approaches (including priming techniques and ecological valid assessments) are described that could be used in future research to identify the motives underlying people's eating behavior. Consumption stereotypes and impression management may be important influences on people's eating behavior, but the complexities of how, when, and for whom these factors influence food intake are still not well understood. abstract_id: PUBMED:19379024 Internal and external moderators of the effect of variety on food intake. Many factors contribute to how much we eat. One such factor is the variety of different foods available. The current article reviews the variety literature with a specific focus on the factors that moderate the effects of variety on food intake and that moderate the processes that may underlie the variety effect (i.e., sensory-specific satiety and monotony). The moderators have been categorized as being of either an internal nature or an external nature. The literature suggests that internal moderators, including characteristics such as gender, weight, and dietary restraint, do not act as moderators of the variety effect. One possible exception to the absence of internal moderators is old age. Alternatively, external moderators, such as particular properties of food and the eater's perception of the situation, appear to affect the strength of the variety effect on intake to some degree. An evolutionary hypothesis may account for the distinct roles that internal and external variables play in moderating the variety effect. (PsycINFO Database Record (c) 2009 APA, all rights reserved). abstract_id: PUBMED:16979321 Influence of physiological and subjective arousal on food intake in humans. Objectives: This study explored the influence of physiologic and subjective arousal on ambient factors of the food environment, such as the eating location, the presence of other people, sound, and time of day and on food intake. It was hypothesized that the influence of environmental stimuli on food intake would be mediated by arousal. Methods: College students were asked to wear heart rate monitors and to record their food intake in a 7-d diet diary. Subjective arousal (elation and excitement) and aspects of the eating environment were recorded for each meal occasion. 
Results: Results showed that subjective arousal seems to be related to food intake (P < 0.01) and appears to be affected by environmental stimuli such as the eating location (P < 0.01) and the number of people present (P < 0.01), but only small effects of some environmental stimuli on heart rate were found with higher average heart rates in restaurants (P < 0.05). No effects of heart rate on food intake were detected. Regression analyses failed to show arousal as a significant mediator between environmental stimuli and food intake. Conclusion: It can be concluded that, for normal individuals in their everyday environment, ambient influences on food intake are not mediated by changes in arousal. However, the environment and the emotional state of the individual appear to play a role in the individual's eating behavior. abstract_id: PUBMED:27020091 Influencing factors of children's fruit, vegetable and sugar-enriched food intake in a Finnish preschool setting - Preschool personnel's perceptions. Introduction: A large proportion of young children spend most of their weekdays at preschool in Western countries. In Finland, three meals are included in a full day at preschool. These meals have the potential to promote healthy eating. This study aimed to obtain the personnel's (preschool teachers, day-care nurses) views on the factors influencing children's fruit, vegetable, and sugar-enriched food intake at preschool. Study Design: Four focus groups, in all 14 preschool personnel. Two researchers independently analysed the data using a socio-ecological framework. Results: At the child level, age, peers, and the child's personality were recognized as factors influencing the fruit and vegetable (FV) and sugar-enriched food intake. At the preschool level, both the physical and social environments were discussed thoroughly, whereas at the societal level, policies of the EU, the state, and the municipality were mentioned as factors that influence what children eat in preschool. The personnel also discussed the interactions between factors both between levels and within levels. Conclusions: In Finnish preschools, children's food intake is influenced on and within several levels of the socio-ecological model. The identification of the factors influencing food intake allows different methods of intervention at multiple levels to promote healthy eating behaviours in preschools. abstract_id: PUBMED:31046092 Timing of Food Intake: Identifying Contributing Factors to Design Effective Interventions. Observations that mistimed food intake may have adverse metabolic health effects have generated interest in personalizing food timing recommendations in interventional studies and public health strategies for the purpose of disease prevention and improving overall health. Small, controlled, and short-termed intervention studies suggest that food timing may be modified as it is presumed to be primarily regulated by choice. Identifying and evaluating social and biological factors that explain variability in food timing may determine whether changes in food timing in uncontrolled, free-living environments are sustainable in the long term, and may facilitate design of successful food timing-based interventions. Based on a comprehensive literature search, we summarize 1) cultural and environmental factors; 2) behavioral and personal preference factors; and 3) physiological factors that influence the time when people consume foods. 
Furthermore, we 1) highlight vulnerable populations who have been identified in experimental and epidemiological studies to be at risk of mistimed food intake and thus necessitating intervention; 2) identify currently used food timing assessment tools and their limitations; and 3) indicate other important considerations for the design of food timing interventions based on successful strategies that address timing of other lifestyle behaviors. Conclusions drawn from this overview may help design practical food timing interventions, develop feasible public health programs, and establish guidelines for effective lifestyle recommendations for prevention and treatment of adverse health outcomes attributed to mistimed food intake. Answer: The research suggests that people may not be fully aware of the external factors that influence their food intake. In one study, participants' food intake was strongly influenced by the behavior of others, yet they rated taste and hunger as much more important influences on their intake, indicating a lack of awareness of social influences (PUBMED:18823179). However, another study found that individuals may be more aware of social influence on their eating behavior than previously assumed, with 34% of participants reporting that they had been influenced by others, and their food intake reflecting this reported influence (PUBMED:25447014). Despite some level of awareness, the complexity of how, when, and for whom external factors such as consumption stereotypes and impression management influence food intake is not well understood (PUBMED:25149198). Additionally, factors such as the variety of foods available and the perception of the situation have been identified as external moderators that can affect food intake (PUBMED:19379024). The environment and the emotional state of an individual also appear to play a role in eating behavior, although ambient influences on food intake are not mediated by changes in arousal for normal individuals in their everyday environment (PUBMED:16979321). Overall, while there is some awareness of social influences, the full extent of external factors and their impact on food intake may not be entirely recognized by individuals.
Instruction: Mediterranean diet and health: is all the secret in olive oil? Abstracts: abstract_id: PUBMED:33249203 Beneficial effects of olive oil and Mediterranean diet on cancer physio-pathology and incidence. Virgin olive oil is a characteristic component and the main source of fat of the Mediterranean diet. It is a mix of high-value health compounds, including monounsaturated fatty acids (mainly oleic acid), simple phenols (such as hydroxytyrosol and tyrosol), secoiridoids (such as oleuropein, oleocanthal), flavonoids, and terpenoids (such as squalene). Olive oil consumption has been shown to improve different aspects of human health and has been associated with a lower risk of cancer. However, the underlying cellular mechanisms involved in such effects are still poorly defined, but seem to be related to a promotion of apoptosis, modulation of epigenetic patterns, blockade of cell cycle, and angiogenesis regulation. The aim of this review is to update the current associations of cancer risk with the Mediterranean diet, olive oil consumption and its main components. In addition, the identification of key olive oil components involved in anticarcinogenic mechanisms and pathways according to experimental models is also addressed. abstract_id: PUBMED:37891921 Zebrafish Model Insights into Mediterranean Diet Liquids: Olive Oil and Wine. In this review, we explored the potential of a zebrafish model to investigate the antioxidant effects of key components of the Mediterranean diet, namely, olive oil and wine, in the context of preventing age-related diseases, particularly cardiovascular conditions. This paper explores the spectrum of observational studies to preclinical investigations and ultimately converges toward potential translational insights derived from animal experimentation. This review highlights the potential and underutilization of zebrafish as an experimental model in this domain. We highlighted the genetic proximity of zebrafish to humans, offering a unique opportunity for translational insights into the health benefits of olive oil and wine. Indeed, we wanted to focus on the potential of zebrafish to elucidate the health benefits of olive oil and wine while calling for continued exploration to unlock its full potential to advance our knowledge of age-related disease prevention within the Mediterranean diet framework. abstract_id: PUBMED:24499124 The historical development and nutritional importance of olive and olive oil constituted an important part of the Mediterranean diet. The olive tree (Olea europaea) is widely cultivated for the production of both oil and table olives and very significant because of its economic value. Olive and olive oil, a traditional food product with thousands of years of history, are the essential components of the Mediterranean diet and are largely consumed in the world. Beside of their economical contribution to national economy, these are an important food in terms of their nutritional value. Olive and olive oil may have a role in the prevention of coronary heart disease and certain cancers because of their high levels of monosaturated fatty acids and phenolic compounds. In addition, olives (Olea europaea L.) and olive oils provide a rich source of natural antioxidants. These make them both fairly stable against auto-oxidation and suitable for human health. The aim of this paper is to define the historical development and nutritional importance of olive and olive oil constituted an important part of the Mediterranean diet. 
abstract_id: PUBMED:12947443 Olive oil and the Mediterranean diet: beyond the rhetoric. Objective: The purpose of this study was to analyze the association of food, nutrient and energy intakes with olive oil consumption in Spain. Design: Cross-sectional study by face-to-face interview. Setting: Population-based random sample derived from the Catalan Nutrition Survey. Subjects: In total, 1600 individuals between 18 and 60 y of age. Intervention: Two 24-h recalls were administered to measure food and nutrient intakes. Food consumption and nutrient intakes were analyzed comparing the highest and lowest quartiles of olive oil consumption. Results: Those with the highest consumption of olive oil (greater than 13.5% of total calories, fourth quartile) consumed less cereal, baked goods, whole milk, sausages, candy, fruit juice and soft drinks, but more fish, eggs, vegetables and added fats, as compared to those with the lowest olive oil consumption (less than 6.8% of total calories, first quartile). The group with the greatest olive oil consumption also demonstrated high total fat intake, although saturated fats showed a lower percentage of total energy intake. Vitamin intake was more adequate in those with the highest consumption of olive oil. Conclusion: Olive oil is a key contributor to the healthy aspects attributed to the Mediterranean diet, and as such, nutritional objectives in Mediterranean countries should address reducing saturated fats, without modifying quantities of olive oil. abstract_id: PUBMED:21443484 Mediterranean diet rich in olive oil and obesity, metabolic syndrome and diabetes mellitus. After decades of epidemiological, clinical and experimental research, it has become clear that consumption of Mediterranean dietary patterns rich in olive oil has a profound influence on health outcomes, including obesity, metabolic syndrome (MetS) and diabetes mellitus. Traditionally, many beneficial properties associated with this oil have been ascribed to its high oleic acid content. Olive oil, however, is a functional food that, besides having high-monounsaturated (MUFA) content, contains other minor components with biological properties. In this line, phenolic compounds have shown antioxidant and antiinflammatory properties, prevent lipoperoxidation, induce favorable changes of lipid profile, improve endothelial function, and disclose antithrombotic properties. Research into the pharmacological properties of the minor components of olive oil is very active and could lead to the formulation of functional food and nutraceuticals. Although more data are mandatory the Mediterranean diet rich in olive oil does not contribute to obesity and appears to be a useful tool in the lifestyle management of the MetS. Moreover there is good scientific support for MUFA diets, especially those based on olive oil, as an alternative approach to low-fat diets for the medical nutritional therapy in diabetes. The objective of this review is to present evidence illustrating the relationship between Mediterranean diet, olive oil and metabolic diseases, including obesity, MetS and diabetes mellitus and to discuss potential mechanisms by which this food can help in disease prevention and treatment. abstract_id: PUBMED:31817038 The Secrets of the Mediterranean Diet. Does [Only] Olive Oil Matter? Diet plays a key role in the maintenance and optimal functioning of immune cells. 
The Mediterranean dietary pattern is an example of a prudent choice of lifestyle and scientifically accepted to help preserve human health by protecting against major chronic and inflammatory diseases. Mediterranean diets (MedDiets) are characteristically high in the consumption of fruits, vegetables and salad, bread and whole grain cereals, potatoes, legumes/beans, nuts, and seeds. Their common central feature is the usage of olive oil as the main source of fat. The health benefits attributed to olive oil are specifically related to extra virgin olive oil (EVOO) intake with its high nutritional quality and multiple positive effects on health. Overall, MedDiets have direct (mono-unsaturated fatty acids (MUFAs), tocopherols, polyphenols) and indirect (low saturated fats, well-balanced linoleic/alpha linolenic acid) effects on the immune system and inflammatory responses. In the present paper, we summarize the current knowledge on the effect of olive oil per se and MedDiets generally on immune-mediated and inflammatory diseases, such as coronary heart disease (CHD)/cardiovascular diseases (CVD), obesity, type-2 diabetes, cancer, asthma, and allergies. abstract_id: PUBMED:22166191 Olive oil, an essential component of the Mediterranean diet, and breast cancer. Objective: The Mediterranean diet has been related to a lower risk of some chronic diseases, including cancer. We aim to gain insight into the effects of the main source of fat of this diet on breast cancer, the most common type of malignancy in women. Design: Data from sixteen experimental series analysing the effects of dietary lipids on mammary carcinogenesis in an animal model, in the context of the international literature on the Mediterranean diet, olive oil and breast cancer risk. Setting: Experimental and human data on the effects of olive oil and Mediterranean diet on breast cancer. Subjects: An animal model of induced breast cancer and other human and experimental studies in the literature. Results: Diets rich in extra virgin olive oil (EVOO) exert a negative modulatory effect on experimental breast cancer to a weak promoting effect, much lower than that obtained with a high-corn oil diet. EVOO confers to the mammary adenocarcinomas a clinical behaviour and morphological features compatible with low tumour aggressiveness. This differential effect, in relation to other dietary lipids, may be related to a lower effect on body weight and sexual maturation. In addition, EVOO induced different molecular changes in tumours, such as in the composition of cell membranes, activity of signalling proteins and gene expression. All these modifications could induce lower proliferation, higher apoptosis and lower DNA damage. These results, together with the favourable effect of olive oil reported in the literature when it is consumed in moderate quantities, suggest a beneficial influence of EVOO on breast cancer risk. Conclusions: Consumption of EVOO in moderate quantities and throughout the lifetime appears to be a healthy choice and may favourably influence breast cancer risk. abstract_id: PUBMED:11582840 Olive oil and the Mediterranean diet. The high intake of olive oil in the Mediterranean diet contributes to the low intake of saturated fatty acids among populations in Southern Europe and is associated with a low incidence of coronary heart disease. 
Replacement of saturated fatty acids with oleic acid leads to a reduction in low-density lipoprotein (LDL) cholesterol without decreasing the concentration of high density lipoprotein (HDL) cholesterol. Oleic acid, however, may not be neutral with regard to its effects on risk of thrombosis, which may have adverse consequences in populations with established atherosclerosis. abstract_id: PUBMED:11472248 Mediterranean diet and health: biological importance of olive oil. Olive oil, the main fatty component of the Mediterranean diet, is characterized by consisting of monounsaturated fatty acids as well as by its elevated content in antioxidant agents. This oil exhibits numerous biological functions which are beneficial for the state of health. A diet rich in monounsaturated fatty acids provides an adequate fluidity to the biological membranes, diminishing the hazard of lipid peroxidation which affects polyunsaturated fatty acids. Moreover, the antioxidants present in olive oil are able to scavenge free radicals and afford an adequate protection against peroxidation. Regarding the heart, olive oil decreases the plasmatic levels of LDL-cholesterol and increases those of HDL-cholesterol, hence diminishing the risk of suffering from heart complaints. In this context, it has been suggested that increased consumption of monounsaturated fatty acids in place of polyunsaturated fatty acids will render circulating lipoproteins less sensitive to peroxidation and thereby diminish the development of atherosclerosis. Olive oil has also been proven to contribute to a better control of the hypertriglyceridemia accompanying diabetes and may reduce the risk of cancer of the breast and colorectum. On the other hand, several investigations have suggested that olive oil can be beneficial in inflammatory and autoimmune diseases, such as rheumatoid arthritis. In this sense, some reports have indicated that olive oil modifies inflammatory cytokine production. As for the digestive system, olive oil enhances gallbladder emptying, consequently reducing cholelithiasis risk, and decreases the pancreatic exocrine secretion and gastric secretory function in response to food. Finally, it has been demonstrated that a diet rich in olive oil is associated with a high percentage of gastric ulcer healing and affords a higher resistance against nonsteroidal anti-inflammatory drug-induced gastric ulcerogenesis.
Interestingly, we show that the impact of olive oil on glucose, triglycerides, and LDL-cholesterol is mediated through an adherence to the Mediterranean diet, with the only notable effect of olive oil polyphenols being the increase of HDL-cholesterol, and the amelioration of the antioxidant and inflammatory status of the subjects. Additionally, we report for the first time that lower antioxidant polyphenol levels may be sufficient for the beneficial effects of olive oil, while we show that the lipid fraction of olive oil may be responsible for some of its beneficial actions. In all parameters examined the beneficial effect of olive oil was more pronounced in subjects with an established metabolic syndrome or other chronic conditions/diseases. In conclusion, all these findings provide new knowledge that could lead to re-establishment of the role of olive oil in human nutrition. Answer: The Mediterranean diet is renowned for its health benefits, and while olive oil is a central component of this diet, it is not the only factor contributing to its health-promoting properties. Olive oil, particularly extra virgin olive oil (EVOO), is rich in monounsaturated fatty acids (mainly oleic acid), phenolic compounds, and other antioxidants, which have been associated with various health benefits, including a lower risk of cancer, cardiovascular diseases, and metabolic disorders (PUBMED:33249203; PUBMED:24499124; PUBMED:21443484; PUBMED:31817038; PUBMED:22166191; PUBMED:11472248). Olive oil consumption has been linked to improved lipid profiles, such as increased HDL-cholesterol and decreased LDL-cholesterol, as well as better glucose regulation and anti-inflammatory effects (PUBMED:30809527; PUBMED:11472248). It also appears to have a role in the prevention of chronic diseases like coronary heart disease, certain cancers, and metabolic syndrome, including obesity and diabetes mellitus (PUBMED:21443484; PUBMED:11472248; PUBMED:22166191). However, the Mediterranean diet as a whole is characterized by high consumption of fruits, vegetables, legumes, nuts, seeds, and whole grains, with olive oil as the main source of fat. These dietary patterns have direct and indirect effects on the immune system and inflammatory responses, contributing to the prevention of various immune-mediated and inflammatory diseases (PUBMED:31817038). Moreover, the Mediterranean diet's benefits are not solely due to the presence of olive oil. The diet's overall composition, which includes a variety of nutrient-dense foods and a balance of macronutrients, plays a key role in its health effects (PUBMED:37891921; PUBMED:31817038). For instance, the diet's high intake of fiber, antioxidants, and polyphenols from plant-based foods, along with moderate wine consumption, also contribute to its protective effects against age-related diseases (PUBMED:37891921). In summary, while olive oil is an essential and beneficial component of the Mediterranean diet, the diet's health benefits are the result of a synergistic combination of various dietary elements, rather than olive oil alone (PUBMED:31817038; PUBMED:37891921).
Instruction: Are the communication and professionalism competencies the new critical values in a resident's global evaluation process? Abstracts: abstract_id: PUBMED:18063268 Are the communication and professionalism competencies the new critical values in a resident's global evaluation process? Background: The ACGME requires the assessment of resident competency in 6 domains. Global evaluations covering all 6 competencies are routinely used. Evaluators may be overly influenced by resident affability and availability, thereby resulting in a halo effect. We hypothesized that the Interpersonal Skills and Communications (ICS) and Professionalism (PR) competencies would unduly influence other competency scores. Methods: General surgery resident evaluations are performed by staff and peers on a rotational basis using competency-based questions. Each question is scored using a 5-point Likert scale. Mean individual composite scores for each competency were calculated and then correlated with other mean composite competency scores. Data from patient evaluations were similarly analyzed. A final correlation of competency scores to ABSITE scores, as an objective, standardized measure of a specific competency, Medical knowledge (MK) was also performed. Results: Results were available for 37 residents (PGY 1-5). There was a significant association between ICS scores and higher scores in MK (r = 0.52, p = 0.004), PR (r = 0.826, p < 0.0001) and patient care (PC) (r = 0.619, p < 0.0001). No correlation, however, was found between patient evaluations of residents and their faculty/peer-based ICS scores. We found no association between ICS scores and improved patient evaluations. Lastly, we found no association between ICS or MK scores and ABSITE scores. Conclusions: It was difficult to ascertain whether residents with better ICS scores had higher PR, PC, and MK scores because of the halo effect, improper completion of evaluations, or whether those residents were truly performing better clinically. External measures of resident performance did not correlate with faculty/peer evaluations of ICS and PR. Residency programs should consider adopting a more standardized way to objectively evaluate residents. abstract_id: PUBMED:22577337 Assessment of resident physicians in professionalism, interpersonal and communication skills: a multisource feedback. Objective: To assess the internal validity and reliability of a multisource feedback (MSF) program by China Medical Board for resident physicians in China. Method: Multisource feedback was used to assess professionalism, interpersonal and communication skills. 258 resident physicians were assessed by attending doctors, self-evaluation, resident peers, nurses, office staffs, and patients who completed a sealed questionnaire at 19 hospitals in China. Cronbach's alpha coefficient was used to assess reliability. Validity was assessed by exploratory factor analyses and by profile ratings. Results: 4128 questionnaires were collected from this study. All responses had high internal consistency and reliability (Cronbach's α > 0.90), which suggests that both questions and form data were internally consistent. The exploratory factor analysis with varimax rotation for the evaluators' questionnaires was able to account for 70 to 74% of the total variance. Conclusion: The current MSF assessment tools are internally valid and reliable for assessing resident physician professionalism and interpersonal and communication skills in China. 
abstract_id: PUBMED:23867427 Are pediatric critical care medicine fellowships teaching and evaluating communication and professionalism? Objectives: To describe the teaching and evaluation modalities used by pediatric critical care medicine training programs in the areas of professionalism and communication. Design: Cross-sectional national survey. Setting: Pediatric critical care medicine fellowship programs. Subjects: Pediatric critical care medicine program directors. Interventions: None. Measurements And Main Results: Survey response rate was 67% of program directors in the United States, representing educators for 73% of current pediatric critical care medicine fellows. Respondents had a median of 4 years experience, with a median of seven fellows and 12 teaching faculty in their program. Faculty role modeling or direct observation with feedback were the most common modalities used to teach communication. However, six of the eight (75%) required elements of communication evaluated were not specifically taught by all programs. Faculty role modeling was the most commonly used technique to teach professionalism in 44% of the content areas evaluated, and didactics was the technique used in 44% of other professionalism content areas. Thirteen of the 16 required elements of professionalism (81%) were not taught by all programs. Evaluations by members of the healthcare team were used for assessment for both competencies. The use of a specific teaching technique was not related to program size, program director experience, or training in medical education. Conclusions: A wide range of techniques are currently used within pediatric critical care medicine to teach communication and professionalism, but there are a number of required elements that are not specifically taught by fellowship programs. These areas of deficiency represent opportunities for future investigation and improved education in the important competencies of communication and professionalism. abstract_id: PUBMED:25937515 Professionalism and Communication Education in Pediatric Critical Care Medicine: The Learner Perspective. Objective: Communication and professionalism are often challenging to teach, and the impact of the use of a given approach is not known. We undertook this investigation to establish pediatric critical care medicine (PCCM) trainee perception of education in professionalism and communication and to compare their responses from those obtained from PCCM fellowship program directors. Methods: The Education in Pediatric Intensive Care (E.P.I.C.) Investigators used the modified Delphi technique to develop a survey examining teaching of professionalism and communication. After piloting, the survey was sent to all 283 PCCM fellows in training in the United States. Results: Survey response rate was 47% (133 of 283). Despite high rates of teaching overall, deficiencies were noted in all areas of communication and professionalism assessed. The largest areas of deficiency included not being specifically taught how to communicate: as a member of a nonclinical group (reported in 24%), across a broad range of socioeconomic and cultural backgrounds (19%) or how to provide consultation outside of the intensive care unit (17%). Only 50% of fellows rated education in communication as "very good/excellent." However, most felt confident in their communication abilities. 
For professionalism, fellows reported not being taught accountability (12%), how to conduct a peer review (12%), and how to handle potential conflict between personal beliefs, circumstances, and professional values (10%). Fifty-seven percent of fellows felt that their professionalism education was "very good/excellent," but nearly all expressed confidence in these skills. Compared with program directors, fellows reported more deficiencies in both communication and professionalism. Conclusions: There are numerous components of communication and professionalism that PCCM fellows perceive as not being specifically taught. Despite these deficiencies, fellow confidence remains high. Substantial opportunities exist to improve teaching in these areas. abstract_id: PUBMED:28205259 Professional values and competencies as explanatory factors for the use of evidence-based practice in nursing. Aim: To establish the connection between values, competencies, selected job characteristics and evidence-based practice use. Background: Nurses rarely apply evidence-based practice in everyday work. A recent body of research has looked at various variables explaining the use of evidence-based practice, but not values and competencies. Design: A cross-sectional, non-experimental quantitative explorative research design. Methods: Standardized instruments were used (Nurse Professional Values Scale-R, Nurse Competence Scale, Evidence-Based Practice Beliefs and Implementation Scale). The sample included 780 nurses from 20 Slovenian hospitals. The data were collected in 2015. Results: The study identifies two new variables contributing to a better understanding of beliefs on and implementation of evidence-based practice, thus broadening the existing research evidence. These are the values of activism and professionalism and competencies aimed at the development and professionalization of nursing. Values of caring, trust and justice and competencies expected in everyday practice do not influence the beliefs and implementation of evidence-based practice. Respondents ascribed less importance to values connected with activism and professionalism and competencies connected with the development of professionalism. Nurses agree that evidence-based practice is useful in their clinical work, but they lack the knowledge to implement it in practice. Evidence-based practice implementation in nursing practice is low. Conclusions: Study results stress the importance of increasing the knowledge and skills on professional values of activism and professionalism and competencies connected to nursing development. The study expands the current understanding of evidence-based practice use and provides invaluable insight for nursing managers, higher education managers and the national nursing association. abstract_id: PUBMED:28764199 Evaluation of Competencies Related to Personal Attributes of Resident Doctors by 360 Degree. Introduction: Postgraduate teaching involves training the residents in all spheres including professionalism, communication and interpersonal skills in addition to the basic clinical skills. Therefore, 360 degree evaluation has gained focus due to increased standards of medical care as shown in this study. Aim: To evaluate interpersonal and communication skills and professionalism in residents doctors using 360 degree evaluation method. Materials And Methods: The present study was a descriptive study conducted at a tertiary care hospital affiliated with a medical college. 
As a pilot project, 26 residents from four departments were recruited in this study. A five-point Likert scale (1 to 5, 1 meaning poor, 5 meaning excellent) was used to score them on their interpersonal and communication skills and professionalism by the persons working around their sphere, such as teachers, peers, nursing staff, undergraduate students, and patients and/or relatives (360 degree). Results: Though the ratings by the peers and consultants were found to be slightly lower compared with nursing staff, undergraduate students and patients, there was good agreement between all of them. The mean score of all traits showed an increasing trend over the years of residency. Conclusion: Regular orientation programs for professionalism with 360 degree evaluation and subsequent feedback to the resident doctor about their strengths and weaknesses can definitely bring out behavioural change in the resident doctor in practice. abstract_id: PUBMED:29065274 High-Quality Feedback Regarding Professionalism and Communication Skills in Otolaryngology Resident Education. Objective The Accreditation Council for Graduate Medical Education (ACGME) requires competency-based education for residents and recommends 5 basic features of high-quality feedback. Our aim was to examine the incorporation of feedback in articles regarding professionalism and interpersonal/communication skills for the otolaryngology residency training curriculum. Data Sources PubMed, Embase, ERIC, Cochrane Library, Web of Science, Scopus, and ClinicalTrials.gov. Methods We used studies identified during a systematic review of all indexed years through October 4, 2016. Results Eighteen studies were included in this review. Professionalism was discussed in 16, of which 15 (94%) examined aspects of feedback. Interpersonal/communication skills were the focus of 16 articles, of which 14 (88%) discussed aspects of feedback. Our assessment demonstrated that timeliness was addressed in 8 (44%) articles, specificity in 4 (22%), learner reaction and reflection in 4 (22%), action plans in 3 (20%), and balancing reinforcing/corrective feedback in 2 (13%). Two articles did not address feedback, and 6 did not address aspects of high-quality feedback. The ACGME-recommended feedback systems of ADAPT (ask, discuss, ask, plan together) and R2C2 (relationship, reactions, content, and coach) were not reported in any of the studies. Conclusion Feedback is an essential component of graduate medical education and is required by the ACGME milestones assessment system. However, the core feedback components recommended by the ACGME are rarely included in the otolaryngology resident education literature. abstract_id: PUBMED:27589949 Proficiency in identifying, managing and communicating medical errors: feasibility and validity study assessing two core competencies. Background: Communication skills and professionalism are two competencies in graduate medical education that are challenging to evaluate. We aimed to develop, test and validate a de novo instrument to evaluate these two competencies. Methods: Using an Objective Standardized Clinical Examination (OSCE) based on a medication error scenario, we developed an assessment instrument that focuses on distinctive domains [context of discussion, communication and detection of error, management of error, empathy, use of electronic medical record (EMR) and electronic medical information resources (EMIR), and global rating]. The aim was to test feasibility, acceptability, and reliability of the method.
Results: Faculty and standardized patients (SPs) evaluated 56 trainees using the instrument. The inter-rater reliability of agreement between faculty was substantial (Fleiss k = 0.71) and the intraclass correlation coefficient was excellent (ICC = 0.80). The measured agreement between faculty and SP evaluations of residents was lower (Fleiss k = 0.36). The instrument showed good conformity (ICC = 0.74). The majority of the trainees (75%) had satisfactory or higher performance in all six assessed domains and 86% found the OSCE to be realistic. Sixty percent reported not receiving feedback on EMR use and asked for subsequent training. Conclusion: An OSCE-based instrument using a medical error scenario can be used to assess competency in professionalism, communication, using EMRs and managing medical errors. abstract_id: PUBMED:31723524 Focused Teaching Improves Medical Student Professionalism and Data Gathering Skills in the Emergency Department. Introduction: Leaders in medical education have developed milestones and core competencies in an attempt to ensure that relational skills, such as communication and professionalism, are emphasized in addition to the usual skills of medical knowledge, data gathering, and emergency stabilization during students' emergency medicine (EM) medical education. Providers facile in each of these areas have better patient outcomes, patient experiences, and decreased incidence of malpractice cases. The authors attempted to demonstrate that by deliberate teaching of these skills during an EM medical student clerkship, students could significantly improve their clinical performance. Methods: This prospective, randomized, single-blinded cohort study was performed at an academic, tertiary, urban ED to investigate the effects of a one-on-one preceptor shift on the clinical performance of fourth-year medical students. Students were randomized into two groups and assessed by pre- and post-intervention objective structured clinical encounters (OSCEs) with standardized patients (SPs) at weeks one and three. A crossover design was employed so that students in the control group participated in a preceptor shift after their second OSCE. Measurements were based on a five-point Likert scale assessment linked to early EM milestones as defined by the Accreditation Council for Graduate Medical Education (ACGME). Results: The mean improvement in total overall score was significantly greater in the intervention group: 4.31 versus 2.57 (Cohen's d = 0.57, p = 0.029). When each milestone was assessed individually, students in the intervention group improved significantly in data gathering (Cohen's d = 0.47, p = 0.048) and professionalism (Cohen's d = 0.66, p = 0.011). There was a statistically nonsignificant improvement in the intervention group compared to the control group in emergency management and communication skills. There was no improvement for either group in medical knowledge. Conclusion: A one-on-one preceptor shift can result in a statistically significant improvement in data gathering and professionalism skills as measured by OSCEs.
Summary Of Background Data: The ACGME mandated 6 competencies in 2003. The competencies of Professionalism and Interpersonal/Professional Communication Skills had never been formally addressed in surgical resident education in the past. Methods: A professionalism curriculum was developed focusing on specific resident professionalism challenges: admitting mistakes, effective communication with colleagues at all levels, delivering the news of an unexpected death, interdisciplinary challenges of working as a team, the cultural challenge of obtaining informed consent through an interpreter, and the stress of surgical practice on you and your family. These professionalism skills were then evaluated with a 6-station Objective Structured Clinical Examination (OSCE). Identical OSCE scenarios were administered to 2 cohorts of surgical residents: in 2007 (before instituting the professionalism curriculum in 2008) and again in 2014. Surgical residents were rated by trained Standardized Patients according to a behaviorally anchored professionalism criteria checklist. Results: An analysis of variance was conducted of overall OSCE professionalism scores (% well done) as the dependent variable for the 2 resident cohorts (2007 vs 2014). The 2007 residents received a mean score of 38% of professionalism items "well done" (SD 9%) and the 2014 residents received a mean 59% "well done" (SD 8%). This difference is significant (F = 49.01, P < .001). Conclusions: Professionalism education has improved surgical resident understanding, awareness, and practice of professionalism in a statistically significant manner from 2007 to 2014. This documented improvement in OSCE performance reflects the value of a professionalism curriculum in the care of the patients we seek to serve. Answer: Yes, communication and professionalism competencies have become critical values in a resident's global evaluation process. The Accreditation Council for Graduate Medical Education (ACGME) requires the assessment of resident competency in six domains, which include Interpersonal Skills and Communications (ICS) and Professionalism (PR) among others (PUBMED:18063268). These competencies are increasingly recognized as vital components of medical training and are being integrated into evaluation systems. Studies have shown that there is a significant association between ICS scores and higher scores in other competencies such as Medical Knowledge (MK) and Patient Care (PC), although it is challenging to determine if this is due to a halo effect or true performance (PUBMED:18063268). Furthermore, multisource feedback (MSF) programs have been used to assess professionalism and interpersonal and communication skills, demonstrating high internal consistency and reliability, indicating that these tools are valid for assessing resident physician competencies in these areas (PUBMED:22577337). Pediatric critical care medicine fellowship programs use a variety of techniques to teach communication and professionalism, but there are elements of these competencies that are not specifically taught by all programs, suggesting opportunities for improved education (PUBMED:23867427). Similarly, PCCM fellows perceive deficiencies in specific teaching of communication and professionalism, despite high confidence in their abilities, indicating a need for enhanced teaching in these areas (PUBMED:25937515). 
Professional values and competencies have also been linked to the use of evidence-based practice in nursing, emphasizing the importance of these attributes in clinical practice (PUBMED:28205259). The 360-degree evaluation method has been employed to assess interpersonal and communication skills and professionalism in resident doctors, showing an increasing trend in the mean score of all traits over the years of residency (PUBMED:28764199). In otolaryngology resident education, feedback regarding professionalism and communication skills is essential, but the core feedback components recommended by the ACGME are rarely included in the literature (PUBMED:29065274). An OSCE-based instrument has been used to assess competencies in professionalism and communication, demonstrating feasibility and validity (PUBMED:27589949). Focused teaching has been shown to improve medical student professionalism and data gathering skills in the emergency department (PUBMED:31723524), and a professionalism curriculum in a surgical residency has significantly improved residents' understanding, awareness, and practice of professionalism (PUBMED:27433908). In summary, communication and professionalism are indeed critical values in the global evaluation process of residents, with a growing emphasis on their assessment and teaching within medical education programs.
Instruction: The beta-3 adrenoceptor agonist, mirabegron relaxes isolated prostate from human and rabbit: new therapeutic indication? Abstracts: abstract_id: PUBMED:25417911 The beta-3 adrenoceptor agonist, mirabegron relaxes isolated prostate from human and rabbit: new therapeutic indication? Background: Alpha1 (α1)-blockers, 5-alpha reductase and phosphodiesterase type-5 inhibitors are pharmacological classes currently available for benign prostatic hyperplasia (BPH) treatment. Mirabegron, a beta-3 adrenoceptor (β3-AR) agonist, has been approved for the therapy of overactive bladder and may constitute a new therapeutic option for BPH treatment. This study aimed to evaluate the in vitro effects of mirabegron in human and rabbit prostatic smooth muscle. Methods: In rabbit prostate, electrical field stimulation (EFS)-induced contraction and concentration-response curve (CRC) to mirabegron in phenylephrine pre-contracted tissues were carried out. The potency (pEC50) and maximal response (Emax) values were determined. In human prostate, CRC to phenylephrine was carried out in the absence and presence of mirabegron. Immunohistochemistry analysis for β3-AR was also carried out. Results: In human prostate, immunohistochemistry analysis revealed the presence of β3-AR on the transition zone and mirabegron reduced by 42% the phenylephrine-induced contractions. In rabbit prostate, mirabegron produced concentration-dependent relaxations (pEC50: 6.01 ± 0.12; Emax: 106 ± 3%), which were fully resistant to the blockade of β1-AR and β2-AR. The β3-AR blocker L748,337 caused a six-fold rightward shift in mirabegron-induced relaxations. Mirabegron (10 μM) reduced by 63% the EFS-induced contractions. Inhibitors of nitric oxide (L-NAME) and of soluble guanylate cyclase (ODQ) along with a cocktail of K+ channel blockers (apamin, charybdotoxin, glibenclamide, tetraethylammonium) all failed to significantly affect the mirabegron-induced rabbit relaxations. Conclusion: Mirabegron relaxes prostatic smooth muscle, providing experimental support for the clinical investigation of its combination with α1-blockers or PDE5 inhibitors in the treatment of BPH. abstract_id: PUBMED:26493129 Mirabegron relaxes urethral smooth muscle by a dual mechanism involving β3-adrenoceptor activation and α1-adrenoceptor blockade. Background And Purpose: Mirabegron is the first β3-adrenoceptor agonist approved for treatment of overactive bladder syndrome. This study aimed to investigate the effects of the β3-adrenoceptor agonist mirabegron in mouse urethra. The possibility that mirabegron also exerts α1-adrenoceptor antagonism was also tested in rat smooth muscle preparations presenting α1A- (vas deferens and prostate), α1D- (aorta) and α1B-adrenoceptors (spleen). Experimental Approach: Functional assays were carried out in mouse and rat isolated tissues. Competition assays for the specific binding of [3H]prazosin to membrane preparations of HEK-293 cells expressing each of the human α1-adrenoceptors, as well as β-adrenoceptor mRNA expression and cyclic AMP measurements in mouse urethra, were performed.
Key Results: Mirabegron produced concentration-dependent urethral relaxations that were shifted to the right by the selective β3-adrenoceptor antagonist L-748,337 but unaffected by β1- and β2-adrenoceptor antagonists (atenolol and ICI-118,551, respectively). Mirabegron-induced relaxations were enhanced by the PDE4 inhibitor rolipram, and the agonist stimulated cAMP synthesis. Mirabegron also produced rightward shifts in urethral contractions induced by the α1-adrenoceptor agonist phenylephrine. Schild regression analysis revealed that mirabegron behaves as a competitive antagonist of α1-adrenoceptors in urethra, vas deferens and prostate (α1A-adrenoceptor, pA2 ≅ 5.6) and aorta (α1D-adrenoceptor, pA2 ≅ 5.4) but not in spleen (α1B-adrenoceptor). The affinities estimated for mirabegron in functional assays were consistent with those estimated in radioligand binding with human recombinant α1A- and α1D-adrenoceptors (pKi ≅ 6.0). Conclusion And Implications: The effects of mirabegron in urethral smooth muscle are the result of β3-adrenoceptor agonism together with α1A- and α1D-adrenoceptor antagonism. abstract_id: PUBMED:35167091 Activating Human Adipose Tissue with the β3-Adrenergic Agonist Mirabegron. An appealing strategy for treatment of metabolic disease in humans is activation of brown adipose tissue (BAT), a thermogenic organ best visualized through 18F-FDG PET/CT. BAT has been activated to varying degrees by mild cold exposure. However, this approach can cause undesirable stress, and there remains no consensus protocol. Here, we describe standardized methods for both acute and chronic activation of BAT using the orally administered β3-adrenergic receptor (AR) agonist, mirabegron. Acute pharmacological stimulation has enabled quantification of whole-body BAT volume and metabolic activity using PET/CT imaging, and chronic stimulation increases these properties of BAT over time. abstract_id: PUBMED:30032151 Human β3-Adrenoreceptor is Resistant to Agonist-Induced Desensitization in Renal Epithelial Cells. Background/aims: We recently showed that the β3-adrenoreceptor (β3AR) is expressed in mouse kidney collecting duct (CD) cells along with the type-2 vasopressin receptor (AVPR2). Interestingly, a single injection of a β3AR selective agonist promotes a potent antidiuretic effect in mice. Before considering the feasibility of chronic β3AR agonism to induce antidiuresis in vivo, we aimed to evaluate in vitro the signaling and desensitization profiles of human β3AR. Methods: Human β3AR desensitization was compared with that of human AVPR2 in cultured renal cells. Video imaging and FRET experiments were performed to dissect β3AR signaling under acute and chronic stimulation. Plasma membrane localization of β3AR, AVPR2 and AQP2 after agonist stimulation was studied by confocal microscopy. Receptor degradation was evaluated by Western blotting. Results: In renal cells, acute stimulation with the selective β3AR agonist mirabegron induced a dose-dependent increase in cAMP. Interestingly, chronic exposure to mirabegron promoted a significant increase of intracellular cAMP up to 12 hours. In addition, a slow and slight agonist-induced internalization and a delayed downregulation of β3AR were observed under chronic stimulation. Furthermore, chronic exposure to mirabegron promoted apical expression of AQP2 also up to 12 hours.
Conversely, long-term stimulation of AVPR2 with dDAVP showed short-lasting receptor signaling, rapid internalization and downregulation and apical AQP2 expression for no longer than 3 h. Conclusions: Overall, we conclude that β3AR is less prone than AVPR2 to agonist-induced desensitization in renal collecting duct epithelial cells, showing sustained cAMP production, preserved membrane localization and delayed degradation after 12 hours agonist exposure. These results may be important for the potential use of chronic pharmacological stimulation of β3AR to promote antidiuresis overcoming in vivo renal concentrating defects caused by inactivating mutations of the AVPR2. abstract_id: PUBMED:24199179 Mirabegron - a selective β3-adrenoreceptor agonist for the treatment of overactive bladder. Overactive bladder is a common condition that significantly impacts overall quality of life. Antimuscarinics are the current main pharmacological option for treatment; however, many patients fail to adhere to therapy due to troublesome side effects. Mirabegron is a new beta-3 adrenoreceptor agonist which causes detrusor smooth muscle relaxation and has been proposed to be effective for treating overactive bladder symptoms. Mirabegron has been shown to be superior to placebo for reducing the mean number of incontinence episodes per 24 hours and the mean number of micturitions per 24 hours. Side effects such as dry mouth were observed at similar or lower rates than those seen for placebo and antimuscarinics. Higher doses of mirabegron were associated with minor increases in pulse rate and mean blood pressure. Mirabegron offers a new alternative for treating overactive bladder in patients for which antimuscarinics are either not tolerated or not appropriate. abstract_id: PUBMED:36841274 β3 adrenergic receptor activation modulates connexin 43 activity to relax human myometrium. Preterm labor, delivery prior to 37 completed weeks of gestation, is the leading cause of infant morbidity and mortality. β3 adrenergic receptor protein expression is increased in the myometrium during pregnancy, and the agonist, mirabegron, relaxes the myometrium making the β3 adrenergic receptor a potential therapeutic target in PTL. β3 adrenergic receptor has been shown to activate the tyrosine kinase, Src, which can down regulate connexin 43, a contractile associated protein which promotes the formation of gap junctions that create an electrical syncytium. We hypothesize that mirabegron downregulates connexin 43, imparting quiescence effects on the myometrium. Employing contractile studies, we demonstrate that Src is involved in the mirabegron-induced relaxation of contracting pregnant human myometrial tissue strips. Western blot analysis demonstrates that Src kinase expression is decreased in both preterm and term laboring myometrial tissue. Imaging revealed that mirabegron stimulation of the β3 adrenergic receptor phosphorylates tyrosine at position Y265 on connexin 43 in pregnant human uterine myocytes. Western blot analysis and immunofluorescent imaging indicate that mirabegron decreases the expression of connexin 43 and mediates relaxation over a 24-h exposure period, suggesting that mirabegron has long lasting quiescent effects on the human myometrium. The relationship between the β3 adrenergic receptor and down regulation of the contractile associated protein connexin 43 through activation of Src kinase suggests that mirabegron may be useful in combination tocolysis. 
abstract_id: PUBMED:37959362 Mirabegron, a Selective β3-Adrenergic Receptor Agonist, as a Potential Anti-Obesity Drug. Obesity is becoming a global health epidemic. Brown and "beige" adipose tissue may produce heat, leading to energy expenditure enhancement and weight loss. Mirabegron, a selective β3-adrenergic receptor agonist, has been found to be effective as a brown adipose tissue activator, a "beige" cells stimulator and a metabolic homeostasis controller in animal and human studies. Although in animal studies, administration of mirabegron led to obesity improvement, significant weight loss in obese patients after mirabegron treatment has not been demonstrated so far, which may be associated with the too-short duration of the trials and the small number of participants in the studies. In humans, the most effective treatment for adipose tissue stimulation was high doses of mirabegron; however, cardiovascular side effects may limit the use of such doses, so the long-term safety must be evaluated. In cases of tachycardia or blood pressure elevation, the co-administration of a β1-adrenergic receptor blocker may be useful. It should be checked whether smaller doses of mirabegron, taken for a longer time, will be sufficient to stimulate brown and "beige" adipose tissue, leading to weight loss. The introduction of mirabegron into obesity treatment in the future will require long-term trials with larger numbers of subjects, to assess mirabegron efficacy, tolerability, and safety. abstract_id: PUBMED:22834478 In vitro inhibition and induction of human cytochrome P450 enzymes by mirabegron, a potent and selective β3-adrenoceptor agonist. The potential for mirabegron, a β(3)-adrenoceptor agonist for the treatment of overactive bladder, to cause drug-drug interactions via inhibition or induction of cytochrome P450 (CYP) enzymes was investigated in vitro. Mirabegron was shown to be a time-dependent inhibitor of CYP2D6 in the presence of NADPH as the IC(50) value in human liver microsomes decreased from 13 to 4.3 μM after 30-min pre-incubation. Further evaluation indicated that mirabegron may act partly as an irreversible or quasi-irreversible metabolism-dependent inhibitor of CYP2D6. Therefore, the potential of mirabegron to inhibit the metabolism of CYP2D6 substrates in vivo cannot be excluded. Mirabegron was predicted not to cause clinically significant metabolic drug-drug interactions via inhibition of CYP1A2, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2E1, or CYP3A4/5 because the IC(50) values for these enzymes both with and without pre-incubation were >100 μM (370 times maximum human plasma concentration [C(max)]). Whereas positive controls (100 µM omeprazole and 10 µM rifampin) caused the anticipated CYP induction, the highest concentration of mirabegron (10 µM; 37 times plasma C(max)) had minimal effect on CYP1A2 and CYP3A4/5 activity, and CYP1A2 and CYP3A4 mRNA levels in freshly isolated human hepatocytes, suggesting that mirabegron is not an inducer of these enzymes. abstract_id: PUBMED:25565203 Activation of human brown adipose tissue by a β3-adrenergic receptor agonist. Increasing energy expenditure through activation of endogenous brown adipose tissue (BAT) is a potential approach to treat obesity and diabetes. The class of β3-adrenergic receptor (AR) agonists stimulates rodent BAT, but this activity has never been demonstrated in humans. 
Here we determined the ability of 200 mg oral mirabegron (Myrbetriq, Astellas Pharma, Inc.), a β3-AR agonist currently approved to treat overactive bladder, to stimulate BAT as compared to placebo. Mirabegron led to higher BAT metabolic activity as measured via (18)F-fluorodeoxyglucose ((18)F-FDG) using positron emission tomography (PET) combined with computed tomography (CT) in all twelve healthy male subjects (p = 0.001), and it increased resting metabolic rate (RMR) by 203 ± 40 kcal/day (+13%; p = 0.001). BAT metabolic activity was also a significant predictor of the changes in RMR (p = 0.006). Therefore, a β3-AR agonist can stimulate human BAT thermogenesis and may be a promising treatment for metabolic disease. abstract_id: PUBMED:26425139 Clinical use of the β3 adrenoceptor agonist mirabegron in patients with overactive bladder syndrome. Mirabegron is a β3 adrenoceptor agonist licensed for the treatment of overactive bladder symptoms, such as urinary urgency or urgency incontinence. β3 adrenoceptor activation causes detrusor muscle relaxation, but mirabegron may also act by binding other targets in the bladder, and it may also reduce activity in sensory nerves. Phase III clinical trials (SCORPIO, ARIES, and CAPRICORN) evaluated mirabegron at various doses, demonstrating reduction from baseline to endpoint in mean incontinence episodes and mean number of micturitions per 24 h (coprimary endpoints), along with health-related quality of life and a range of secondary measures. Efficacy was seen in many patients who had previously discontinued antimuscarinic therapy on the grounds of lack of efficacy or poor tolerability. Treatment emergent adverse effects were documented in a long-term study (TAURUS), mostly being of mild or moderate severity. The most frequent adverse effects were hypertension, dry mouth, constipation, and headache, with a lower incidence of dry mouth than for the antimuscarinic active comparator. Efficacy and safety are not substantially different in older patients. A urodynamic safety study in men showed no consistent effect on voiding function, but a small increase in postvoid residual. Use of mirabegron in combination with α-adrenergic blockers does not appear to increase adverse effects. Dose reduction is needed in people with severe renal failure, or moderate hepatic failure. Dose adjustment is not needed in relation to food intake. Ongoing research is evaluating the potential for combination therapy with antimuscarinics. Answer: Yes, mirabegron, a beta-3 adrenoceptor (β3-AR) agonist, has been shown to relax isolated prostate tissue from both human and rabbit, suggesting a potential new therapeutic indication for the treatment of benign prostatic hyperplasia (BPH) (PUBMED:25417911). The study demonstrated that mirabegron reduced phenylephrine-induced contractions in human prostate tissue by 42% and produced concentration-dependent relaxations in rabbit prostate tissue, which were resistant to the blockade of β1-AR and β2-AR but sensitive to a β3-AR blocker. This indicates that the relaxation effect of mirabegron is mediated through β3-AR (PUBMED:25417911). The presence of β3-AR in the transition zone of the human prostate was confirmed by immunohistochemistry analysis, providing further support for the role of mirabegron in prostatic smooth muscle relaxation (PUBMED:25417911). These findings provide experimental support for the clinical investigation of mirabegron in combination with α1-blockers or PDE5 inhibitors for the treatment of BPH (PUBMED:25417911). 
Given that mirabegron is already approved for the therapy of overactive bladder, its potential use in BPH treatment could offer a new option for patients, especially considering that the pharmacological classes currently available for BPH treatment include alpha1 (α1)-blockers, 5-alpha reductase inhibitors, and phosphodiesterase type-5 inhibitors (PUBMED:25417911).
Instruction: Are Cancer Survivors Physically Active? Abstracts: abstract_id: PUBMED:37347322 Adult childhood cancer survivors' perceptions of factors that influence their ability to be physically active. Purpose: Studies indicate that adult childhood cancer survivors do not achieve recommended physical activity levels. A deeper understanding of factors that influence their ability to be physically active is essential to identify individuals in need of support. The aim was to explore factors that influence adult childhood cancer survivors' ability to be physically active. Method: Semi-structured interviews were conducted from June to October 2020 with 20 adult childhood cancer survivors with a median age of 31 (min-max 20-47) years. Interviews were transcribed verbatim and analyzed with qualitative content analysis. Results: Four main categories ("The impact of environmental factors," "Personal factors of importance," "Consequences of the treatment or disease," and "The impact of support from healthcare") and 10 sub-categories were identified. Participants described how family habits and encouragement from others influenced their present ability to be physically active. Experienced benefits of physical activity were described as a facilitator for current physical activity, while suffering from late complications was identified as a barrier. Participants highlighted the importance of specific and individualized physical activity recommendations. Conclusion: This study includes adult childhood cancer survivors several years after completion of treatment, hence highlighting the importance of support both during treatment and follow-up to sustain their physical activity. Healthcare providers need to identify individuals suffering from late complications, even several years after treatment; provide individualized physical activity recommendations; and educate families and schools about the importance of physical activity in childhood cancer survivorship. Trial Registration: This research project was registered in the Swedish National Database of Research and Development, identifier 273320, December 6, 2019 (https://www.researchweb.org/is/vgr/project/273320). abstract_id: PUBMED:27051071 A Cross-Sectional Comparison of Quality of Life Between Physically Active and Underactive Older Men With Prostate Cancer. Men with prostate cancer experience many side effects and symptoms that may be improved by a physically active lifestyle. It was hypothesized that older men with prostate cancer who were physically active would report significantly higher levels of quality of life (QOL) as assessed by the WHOQOL-BREF and the WHOQOL-OLD. Of the 348 prostate cancer survivors who were invited to participate in the present postal survey, 137 men returned the questionnaires. Those who were physically active had significantly lower prostate-specific antigen (PSA) scores and higher social participation than those who were insufficiently active. These findings offer some support for the benefits of physical activity (PA) within the prostate cancer population in managing the adverse side effects of their treatments on aspects of their QOL. Future research should more closely examine what types of PA best promote improvements in varying aspects of QOL and psychological well-being for prostate cancer survivors. abstract_id: PUBMED:35942568 Childhood cancer survivors' and their parents' experiences with participation in a physical and social intervention during cancer treatment: A RESPECT study.
Aims: This study explores experiences of childhood cancer survivors and their parents with a combined physical and social activity intervention during treatment, including how the survivors and their parents perceive physical activity post-treatment. Design: A process evaluation using semi-structured interviews. Methods: Using a criterion-sampling strategy, 18 Danish childhood cancer survivors (aged 11-18 years) and their parents were interviewed from September 2019 through May 2020. Data analysis used an inductive thematic approach focused on meaning. Results: Three themes emerged: (1) being physically active during hospitalization; (2) peers as motivators and (3) physical activity post-treatment. During hospitalization, daily motivation to do physical activity was dependent on the daily well-being, that is, presence of the side effects from the child's treatment. Healthy classmates provided distraction, reduced loneliness and promoted normality for those hospitalized. For most of the survivors, their healthy peers provided motivation for being physically active during treatment. When surplus energy was lacking, some survivors preferred doing physical activity alone with a professional. Those who were physically active in the hospital sustained being physically active post-treatment while their parents continued seeking advice about appropriate activity levels. Conclusion: Childhood cancer survivors and their parents benefited from the intervention which also provided guidance to remaining physically active post-treatment. This was particularly true for the participants with leukaemia. Impact: Healthcare professionals should support children with cancer to be physically active during hospitalization. Including social and physical components in their care plan and being aware of individual preferences is pivotal to improving the survivors' level of physical and social well-being during and post-treatment. Patient Or Public Contribution: The participants were involved in designing the interview guides to ensure that the interview guides were understandable for the participants to provide rich descriptions of their experiences with a physical and social activity intervention during hospitalization. abstract_id: PUBMED:20890674 Exercise behavior in cancer survivors and associated factors. Introduction: Physical activity is an important component in promoting a healthy life style in cancer survivors. We estimated the proportion of cancer survivors who are physically active, defined as meeting public health exercise guidelines, and changes in level of physical activity (LPA) from before diagnosis to after treatment. We also identified medical and demographic factors associated with LPA and its changes. Methods: A cross-sectional survey assessing LPA before diagnosis and after treatment, together with demographic and medical variables in 975 cancer survivors. Results: Forty-five percent of the cancer survivors were physically active after treatment. Before diagnosis and after treatment 33% were active, whereas 40% were inactive at both time points. Fifteen percent were active before diagnosis but inactive after treatment, and 12% were inactive before diagnosis but active after treatment. Increasing age and weight, low education, comorbidity and smoking were associated with physical inactivity after treatment. Change in LPA from active to inactive was associated with comorbidity, distant disease and smoking, while a change from inactive to active was associated with high education. 
Conclusions: Less than half of cancer survivors were physically active. Almost three quarters of cancer survivors remained stable in LPA. The remaining quarter changed LPA, with slightly more cancer survivors becoming inactive than active. Age, weight, education, comorbidity, disease stage and smoking can identify survivors at risk of physical inactivity after treatment. Implications For Cancer Survivors: Recognizable variables can be used to identify physically inactive cancer survivors after treatment and give these survivors support to start or maintain LPA. abstract_id: PUBMED:26107718 Are Cancer Survivors Physically Active? A Comparison by US States. Background: Cancer survivors who engage in physical activity (PA) have improved quality of life, reduced fatigue, and lower mortality rates. We compare the percentage of cancer survivors meeting PA recommendations for US states, stratified by age and gender, to identify the need for PA education and intervention among cancer survivors. Methods: Pooled data from the 1997-2010 National Health Interview Survey were used to determine and rank age-adjusted PA by state. American Cancer Society guidelines (≥150 min/wk of PA) were used to compare prevalence by state, stratified by age group (< 65 and ≥65) and gender. Results: Thirty-three percent of cancer survivors met PA recommendations. The highest age-adjusted compliance to PA recommendations was in Vermont (59.9%, 95% confidence interval [CI], 40.8-76.3) and the lowest was in Louisiana (14.8%, 95% CI, 9.6-22.1) and Mississippi (15.5%, 95% CI, 10.4-22.3). The lowest percentages meeting recommendations were in Arkansas for males (8.6%, 95% CI, 7.0-10.6), Louisiana for females (12.5%, 95% CI, 6.8-21.9), Louisiana for survivors < 65 (15.6%, 95% CI, 10.5-22.6), and West Virginia for those ≥65 years (12.7%, 95% CI, 7.6-20.6). Conclusions: Meeting PA recommendations by cancer survivors varies markedly by state of residence. Future efforts should target states with low percentages, tailoring interventions to the special needs of this high-risk population. The importance of PA should be incorporated within cancer survivorship care plans. abstract_id: PUBMED:30368673 A qualitative study exploring what it takes to be physically active with a stoma after surgery for rectal cancer. Purpose: Many adults must cope with the adverse effects of cancer and adjuvant therapies, which may limit their engagement in health-enhancing behaviors such as physical activity (PA). Furthermore, the placement of a stoma during surgery for rectal cancer may cause additional challenges to being physically active. The aim of this study was to explore the experiences of rectal cancer survivors who were living with a stoma and the impact on their engagement in PA. Methods: Interpretive phenomenological analysis was used as the approach for this qualitative study. Fifteen rectal cancer survivors took part in a semi-structured interview post-chemotherapy, and thematic analysis was used to identify themes within their accounts. Results: Three themes captured participants' personal lived experience: (1) reasons for engaging in PA, (2) deterrents to engaging in PA, and (3) practical implications. Conclusions: Participants' accounts offered insight into both their cancer and stoma-related experiences, highlighting both reasons for and deterrents to engaging in PA. Further, findings helped to identify tangible strategies that those learning to be physically active with a stoma may find useful. 
abstract_id: PUBMED:25809159 Active transportation in adult survivors of childhood cancer and neighborhood controls. Purpose: Childhood cancer survivors (CCS) are at high risk of treatment-related late effects, including cardiovascular disease and diabetes, which can be exacerbated by inadequate physical activity (PA). Previous PA interventions targeting CCS have focused on the domain of leisure-time/recreational PA. Active transportation, another domain of PA, has not been described in CCS. Therefore, this study aimed to identify active transportation behaviors, barriers, and correlates in adult CCS. Methods: We recruited 158 adult CCS and 153 controls matched on age, sex, and neighborhood for a survey regarding active transportation behaviors and perceptions. Linear and logistic regression models accounting for correlation among matched participants were used. Results: Adult CCS engaged in similar levels of active transportation as controls (2.72 vs. 2.32 h/week, P = 0.40) despite perceiving greater health-related barriers (1.88 vs. 1.65 (measured on four-point Likert scale), P = 0.01). Marital/relationship status (odds ratio (OR) = 0.30, 95 % confidence interval (CI) = 0.11-0.81), planning/psychosocial barriers (OR = 0.15, 95 % CI = 0.04-0.53), and perceived neighborhood walkability (OR = 2.55, 95 % CI = 1.14-5.66) were correlates of active transportation among adult CCS, while objective neighborhood walkability (OR = 1.03, 95 % CI = 1.01-1.05) was a correlate among controls. Conclusions: Results suggest adult CCS and controls utilize active transportation at approximately equal levels. Factors other than health, including perceived neighborhood walkability, are related to active transportation behaviors to a greater degree in adult CCS. Implications For Cancer Survivors: Interventions might consider promoting active transportation as a way to incorporate more PA into the daily lives of adult CCS. Such interventions will not be likely successful, however, without existing or improved neighborhood walkability/bikeability. abstract_id: PUBMED:34554567 Active Living After Cancer: Adaptation and evaluation of a community-based physical activity program for minority and medically underserved breast cancer survivors. Background: An expanding body of research documents the benefits of physical activity for cancer survivors' physical functioning and quality of life, but few successful models provide community-based physical activity programs to cancer survivors. This report presents an evaluation of Active Living After Cancer, an evidence-based physical activity program for breast cancer survivors, adapted for community delivery to minority and medically underserved survivors. Methods: Survivors were recruited from health care and community settings. The program consisted of 12 weekly group sessions providing training in cognitive and behavioral skills for behavior change, brief physical activity, and cancer survivorship-related content. At the baseline and follow-up, participants completed assessments of their physical activity, quality of life, and physical functioning (6-minute walk and 30-second sit-to-stand test). At follow-up, they also completed questionnaires to measure program content mastery and satisfaction. Results: The outcome analysis included 127 participants. Physical activity and quality of life (mental and physical) improved from the baseline to follow-up (all P < .01). 
Physical functioning improved, with increases in sit-to-stand repetitions (mean, 12.5 at the baseline vs 14.9 at the follow-up; P < .01) and 6-minute walk distances (mean, 428 m at the baseline vs 470 m at the follow-up; P < .01). Conclusions: The results highlight the effectiveness of an evidence-based program adapted for community-based delivery to minority and medically underserved breast cancer survivors. The program could be delivered to improve outcomes in diverse survivor populations. Lay Summary: Physical activity in breast cancer survivors is related to better quality of life and longer cancer-free survival. However, there are few community-based programs to help breast cancer survivors to become more physically active. The Active Living After Cancer program was adapted from an evidence-based program and delivered in community-based settings to minority and medically underserved breast cancer survivors. It consisted of 12 weekly group sessions in which participants learned skills to increase their physical activity. The program participants increased their physical activity and improved their mental and physical well-being and physical functioning. abstract_id: PUBMED:31420000 Which cancer survivors are at risk for a physically inactive and sedentary lifestyle? Results from pooled accelerometer data of 1447 cancer survivors. Background: Physical activity has beneficial effects on the health of cancer survivors. We aimed to investigate accelerometer-assessed physical activity and sedentary time in cancer survivors, and describe activity profiles. Additionally, we identify demographic and clinical correlates of physical activity, sedentary time and activity profiles. Methods: Accelerometer, questionnaire and clinical data from eight studies conducted in four countries (n = 1447) were pooled. We calculated sedentary time and time spent in physical activity at various intensities using Freedson cut-points. We used latent profile analysis to identify activity profiles, and multilevel linear regression analyses to identify demographic and clinical variables associated with accelerometer-assessed moderate to vigorous physical activity (MVPA), sedentary time, the highly active and highly sedentary profile, adjusting for confounders identified using a directed acyclic graph. Results: Participants spent on average 26 min (3%) in MVPA and 568 min (66%) sedentary per day. We identified six activity profiles. Older participants, smokers and participants with obesity had significantly lower MVPA and higher sedentary time. Furthermore, men had significantly higher MVPA and sedentary time than women and participants who reported less fatigue had higher MVPA time. The highly active profile included survivors with high education level and normal body mass index. Haematological cancer survivors were less likely to have a highly active profile compared to breast cancer survivors. The highly sedentary profile included older participants, males, participants who were not married, obese, smokers, and those < 12 months after diagnosis. Conclusions: Cancer survivors engage in few minutes of MVPA and spend a large proportion of their day sedentary. Correlates of MVPA, sedentary time and activity profiles can be used to identify cancer survivors at risk for a sedentary and inactive lifestyle. abstract_id: PUBMED:26008207 Physical inactivity and risk of poor quality of life among elderly cancer survivors compared to women without cancer: the Iowa Women's Health Study. 
Purpose: Few studies have examined lifestyle factors and quality of life (QOL) in cancer survivors compared to a cancer-free group. Compared to active cancer-free women, we examined the association between physical inactivity and QOL in elderly cancer survivors and similar-aged women without cancer. Methods: Participants included 1776 cancer survivors and 12,599 cancer-free women enrolled in the Iowa Women's Health Study in 1986 who completed the SF36 QOL assessment in 2004 (ages 73-88 years). The odds of poor QOL were computed for each SF36 subscale (>0.5 SD below mean score of cancer-free women) associated with physical inactivity (moderate-vigorous activity <once/week) between four groups based on the cross-classification of cancer history (no/yes) and physical inactivity (no/yes) (referent group = active cancer-free women). Results: Compared with the referent group, inactive cancer survivors were significantly more likely to report poor QOL for each SF36 subscale (odds ratios 1.8 to 4.7), independent of age, comorbidity, body mass index (BMI), and diet quality. The greatest odds for poor QOL occurred for general health, vitality, and physical function. These increased odds occurred regardless of whether survivors were inactive at both baseline (1986) and follow-up or became inactive sometime after baseline. Among physically active women, cancer survivors had similar QOL as cancer-free women. Conclusion: These findings provide evidence on the importance of leisure-time physical activity in older women and support the need for interventions to help older women maintain or regain a physically active lifestyle. Implications For Cancer Survivors: Survivors who remain or become physically active as they age report better mental and physical QOL. Answer: The physical activity levels among cancer survivors vary, and several studies have explored this topic to understand the factors influencing their engagement in physical activity (PA) and the associated outcomes. A study using pooled data from the 1997-2010 National Health Interview Survey found that only 33% of cancer survivors met the American Cancer Society's PA recommendations, with significant variation by state of residence in the United States (PUBMED:26107718). Another cross-sectional survey revealed that 45% of cancer survivors were physically active after treatment, with a notable proportion remaining stable in their level of physical activity (LPA) from before diagnosis to after treatment (PUBMED:20890674). Research has also highlighted the benefits of being physically active for cancer survivors, including improved quality of life (QOL), reduced fatigue, and lower mortality rates. For instance, physically active older men with prostate cancer reported significantly higher levels of QOL compared to those who were underactive (PUBMED:27051071). Similarly, a community-based physical activity program for minority and medically underserved breast cancer survivors demonstrated improvements in physical activity, QOL, and physical functioning (PUBMED:34554567). However, there are barriers and challenges that cancer survivors face in becoming or remaining physically active. Adult childhood cancer survivors identified late complications from treatment, environmental factors, personal factors, and the need for individualized PA recommendations as influential in their ability to be physically active (PUBMED:37347322). 
Childhood cancer survivors and their parents also reported that being physically active during hospitalization depended on daily well-being and side effects from treatment, with peers serving as motivators (PUBMED:35942568). Additionally, rectal cancer survivors with a stoma faced specific challenges that impacted their engagement in PA (PUBMED:30368673). Despite these challenges, some cancer survivors do engage in active transportation at levels similar to controls, although they perceive greater health-related barriers (PUBMED:25809159). Furthermore, accelerometer data from 1447 cancer survivors identified six activity profiles and associated demographic and clinical correlates of physical activity and sedentary time (PUBMED:31420000). In conclusion, while some cancer survivors are physically active and experience associated benefits, a significant proportion do not meet recommended PA levels. Various factors, including age, comorbidities, treatment side effects, environmental influences, and personal motivations, can impact their physical activity levels.
Instruction: Laparoscopic colon resection: To prep or not to prep? Abstracts: abstract_id: PUBMED:28745700 Laparoscopic surgery for colon cancer. Aim: To assess laparoscopic surgery in the treatment of colon cancer patients. Material And Methods: The results of laparoscopic treatment of patients with colorectal cancer are presented in the article. The influence of various clinical parameters, including age, gender, comorbidities, tumor localization and stage, and complications, on the laparoscopic management of these patients was estimated. Conclusion: It was revealed that the efficiency of laparoscopic surgery in patients with colon cancer is affected by tumor stage and the presence of complications. abstract_id: PUBMED:29863127 Laparoscopic surgical challenge for T4a colon cancer. For patients with T4a colon cancer, the risk of peritoneal dissemination after surgery remains unclear. Seven hundred and eleven patients with T3 or T4a colon cancer, 80 years of age or younger, underwent curative resection (open surgery in 512 and laparoscopic surgery in 199) at the four Jikei University hospitals between 2006 and 2012. Their risk factors for peritoneal dissemination after surgery were evaluated retrospectively. The numbers of lymph node metastases, postoperative liver metastases and postoperative peritoneal dissemination events in the T4a group were significantly greater than those in the T3 group. Peritoneal dissemination after surgery developed in four patients (0.7%) in the T3 group and in six patients (5%) in the T4a group. Risk factors for peritoneal dissemination consisted of macroscopic type (P = 0.016), serosal invasion (P = 0.017) and number of lymph node metastases (P = 0.009) according to the Cox proportional hazards regression model. However, tumor diameter and surgical approach (laparoscopic vs open) were not significant factors for peritoneal dissemination. There were no significant differences between the postoperative relapse-free survival rates for each surgical approach within the T3 or T4a group. Because of comparable postoperative peritoneal dissemination in T3 and T4a colon cancer by the surgical approach (laparoscopic or open), laparoscopic surgery for patients with T4a colon cancer seems justified. abstract_id: PUBMED:27143915 Short- and long-term outcomes of laparoscopic surgery vs open surgery for transverse colon cancer: a retrospective multicenter study. Purpose: The purpose of the present study was to compare the perioperative and oncologic outcomes between laparoscopic surgery and open surgery for transverse colon cancer. Patients And Methods: We conducted a retrospective review of patients who underwent surgery for transverse colon cancer at six Hallym University-affiliated hospitals between January 2005 and June 2015. The perioperative outcomes and oncologic outcomes were compared between laparoscopic and open surgery. Results: Of 226 patients with transverse colon cancer, 103 underwent laparoscopic surgery and 123 underwent open surgery. There were no differences in the patient characteristics between the two groups. Regarding perioperative outcomes, the operation time was significantly longer in the laparoscopic group than in the open group (267.3 vs 172.7 minutes, P<0.001), but the time to soft food intake (6.0 vs 6.6 days, P=0.036) and the postoperative hospital stay (13.7 vs 15.7 days, P=0.018) were shorter in the laparoscopic group. The number of harvested lymph nodes was lower in the laparoscopic group than in the open group (20.3 vs 24.3, P<0.001).
The 5-year overall survival (90.8% vs 88.6%, P=0.540) and disease-free survival (86.1% vs 78.9%, P=0.201) rates were similar in both groups. Conclusion: The present study showed that laparoscopic surgery is associated with several perioperative benefits and similar oncologic outcomes to open surgery for the resection of transverse colon cancer. Therefore, laparoscopic surgery offers a safe alternative to open surgery in patients with transverse colon cancer. abstract_id: PUBMED:27838816 Laparoscopic surgery should be considered in T4 colon cancer. Introduction: Laparoscopy in T4 colon cancers is not widely advocated due to concerns regarding safety and oncologic efficacy. We conducted this study to compare the short- and long-term oncological outcomes between laparoscopic and open approaches in T4 colon cancers. Methods: A retrospective analysis of all patients who underwent surgery for T4 colon cancer from 2008 to 2014 was performed. Margin positive rate, lymph node yield, local or distant recurrence and overall survival were analysed. Results: A total of 59 patients received open surgery, whilst 93 underwent laparoscopic surgery, with a conversion rate of 8.6%. There was no difference in the various measured outcomes between the laparoscopic and open groups. The relative risks of positive margins and inadequate lymph node yield for staging were 0.95 (0.74-1.23, p = 0.692) and 1.01 (0.97-1.05, p = 0.710), respectively, for the laparoscopic group when compared to the open approach. Regarding long-term outcomes, the relative risk of local recurrence in the laparoscopic group was 0.99 (0.96-1.02, p = 0.477), whilst there were also no increased risks of developing distal recurrences at the liver (RR 1.19, 0.51-2.82, p = 0.684), lungs (RR 1.20, 0.50-2.87, p = 0.678) and peritoneum (RR 1.22, 0.51-2.95, p = 0.653) in the laparoscopic group. There was also no difference in the overall survival (RR 0.70, 0.42-1.16, p = 0.168). Patients were followed up for a median of 73.3 months (range 34.8-144.7). Conclusion: Laparoscopic surgery does not compromise oncological outcomes in T4 colon cancers compared to the open approach. Because of its proven associated benefits, laparoscopy should be considered in selected T4 colon cancers. abstract_id: PUBMED:33383287 Laparoscopic colectomy for persistent descending mesocolon in sigmoid colon cancer: A case report. A 55-year-old-man underwent laparoscopic sigmoidectomy for sigmoid colon cancer. Preoperative barium enema showed a slightly medial displacement of the descending colon, and the sigmoid colon was quite long. The operative findings showed that the descending colon was not fused with the retroperitoneum and shifted to the midline and the left colon adhered to the small mesentery and right pelvic wall. Thus, a diagnosis of persistent descending mesocolon (PDM) was made. The left colon, sigmoid colon, and superior rectal arteries often branch radially from the inferior mesenteric artery. The sigmoid mesentery shortens, and the inferior mesenteric vein is often close to the marginal vessels. By understanding the anatomical feature of PDM and devising surgical techniques, laparoscopic sigmoidectomy for sigmoid colon cancer with PDM could be performed without compromising its curative effect and safety. abstract_id: PUBMED:34159041 Simultaneous Totally Laparoscopic Distal Gastrectomy and Anterior Resection for Synchronous Gastric and Colon Cancer. 
Although simultaneous open surgery for synchronous gastric and colon cancer has been reported frequently to date, total laparoscopic resection has been documented rarely. A 63-year-old male patient who presented with complaints of abdominal pain and constipation was diagnosed with synchronous gastric and sigmoid colon cancer. Simultaneous total laparoscopic distal gastrectomy (Roux-en-Y anastomosis and D2 lymph node dissection) and anterior resection were performed with a total of five ports. Total operation time was 310 min. and estimated blood loss was 175 mL. Histopathological examination result was reported as well-differentiated adenocarcinoma for the stomach and moderately differentiated adenocarcinoma for the colon. Staging result was Stage IIA (pT3N0M0, American Joint Committee on Cancer (AJCC) 8th Edition) for both cancers. The patient received postoperative adjuvant chemotherapy. He remains under follow-up at 21 months without any recurrence. With the improved techniques and increased experience in minimally invasive surgery, combined laparoscopic curative resection can be safely performed for gastric and colon cancer. abstract_id: PUBMED:34097812 Laparoscopic wedge resection as an alternative to laparoscopic oncological colon resection for benign endoscopically unresectable colon polyps. Aim: The aim of this study was to investigate, by comparing clinical and histological outcomes, whether laparoscopic (hybrid) wedge resection (LWR) could be a less invasive and safe alternative to laparoscopic oncological colon resection (OCR) for patients with an endoscopically unresectable, suspected benign, colon polyp. Method: All patients with an endoscopically unresectable colon polyp who were referred for surgery between 2009 and 2018 and without biopsy-proven colon cancer were identified from a prospectively maintained database. Patients with macroscopic features of malignancy during endoscopy were excluded. Clinical and histological results for patients who underwent OCR or LWR were reviewed. Results: One hundred-and-twenty-two patients were included. Ninety-seven patients underwent OCR and 25 LWR. Major complications occurred in 16.7% (n = 16) of the OCR group compared with 4.0% (n = 1) of the LWR group (p = 0.06). In the OCR group the anastomotic leakage rate was 6.3% (n = 6) and the mortality rate 3.1% (n = 3). No anastomotic leakage or deaths occurred in the LWR group. The median length of hospital stay after OCR was 5 days [interquartile range (IQR) 5-9 days)] compared with 2 days (IQR 2-4 days) after LWR (p < 0.0001). Definite pathology showed a malignancy rate of 4.2% (n = 4) in the OCR group and 4.0% (n = 1) (without high-risk features) in the LWR group. Conclusion: This study shows that LWR was associated with significantly lower complication rates and acceptable oncological risks compared with OCR. Therefore we suggest that LWR is a safe alternative treatment, next to other endoscopic options. The treatment that is most suitable for an individual patient should be discussed in a multidisciplinary meeting. abstract_id: PUBMED:23879414 Sigmoid volvulus after laparoscopic surgery for sigmoid colon cancer. We report the first case of sigmoid volvulus after laparoscopic surgery for sigmoid colon cancer. The patient is a 75-year-old man who presented with the sudden onset of severe abdominal pain. He had undergone laparoscopic sigmoidectomy for cancer 2 years before presentation. CT scan showed a distended sigmoid colon with a mesenteric twist, or "whirl sign." 
Colonoscopy showed a mucosal spiral and luminal stenosis with dilated sigmoid colon distally and ischemic mucosa. The diagnosis of ischemic colonic necrosis due to sigmoid volvulus was established. Resection of the necrotic sigmoid colon was performed and a descending colon stoma was created. A long remnant sigmoid colon and chronic constipation may contribute to the development of sigmoid volvulus after laparoscopic sigmoidectomy. Prompt diagnosis is essential for adequate treatment, and colonoscopy aids in the diagnosis of ischemic changes in patients without definitive findings of a gangrenous colon. abstract_id: PUBMED:20011300 Laparoscopic colectomy for colon cancer: comparable to conventional oncologic surgery? As a result of the obvious benefits of laparoscopic cholecystectomy, minimally invasive techniques have been applied to more complex gastrointestinal procedures, including colorectal resections. The goal in adapting laparoscopic techniques for colorectal surgery is to offer an operation that results in less pain, shorter hospital stay, more rapid return to normal activities, and improved cosmesis compared with conventional operation. The challenge has been to show that this can be done safely and efficiently and that for cancer patients there is no detrimental oncologic effect. The major issues that have been and continue to be addressed are (1) whether an adequate resection can be performed laparoscopically, (2) whether there is a high rate of wound or port site recurrence following these operations, and (3) whether, by using these techniques, we are trading short-term benefits for a poor long-term oncologic outcome. To answer these fundamental questions, several prospective randomized trials have been conducted and several more are under way. The results of these trials indicate that, in terms of cancer outcome, there is no difference in overall survival, disease-free survival, and wound recurrences in patients treated using laparoscopic techniques compared with conventional operation. In addition, there are short-term benefits associated with the use of these techniques. It can now be said that from an oncologic standpoint, in experienced hands, laparoscopic colectomy for curable colon cancer is equivalent to conventional therapy, and it is superior to conventional operation regarding short-term outcomes. Laparoscopic colectomy for colon cancer should be offered to appropriately selected patients. abstract_id: PUBMED:35747155 Laparoscopic vs. open resection for colon cancer-quality of oncologic resection evaluation in a medium volume center. Despite concerns regarding oncologic safety, laparoscopic surgery for colon cancer has been proven in several trials in the last decades to be superior to open surgery. In addition, the benefits of laparoscopic surgery can be offered to other patients with malignant disease. The aim of the present study was to compare the quality of oncologic resection for non-metastatic, resectable colon cancer between laparoscopic and open surgery in terms of specimen margins and retrieved lymph nodes in a medium volume center in Romania. A total of 219 patients underwent surgery for non-metastatic colon cancer between January 2017 and December 2020. Of these, 52 underwent laparoscopic resection, while 167 had open surgery. None of the patients in the laparoscopic group had positive circumferential margins (P=0.035) while 12 (7.19%) patients in the open group (OG) had positive margins.
A total of three patients in the laparoscopic group (5.77%) and seven patients (4.19%) in the OG had invaded axial margins. While the number of retrieved lymph nodes was not correlated with the type of procedure [laparoscopic group 16.12 (14±6.56), OG 17.31 (15±8.42), P=0.448], the lymph node ratio was significantly higher in the OG (P=0.003). Given the results of the present study, it is safe to conclude that laparoscopic surgery is not inferior to open surgery for non-metastatic colon cancer in a medium volume center. Answer: The provided abstracts do not directly address the question of whether to use bowel preparation ("prep") or not in the context of laparoscopic colon resection. Bowel preparation typically refers to the process of cleaning out the intestines before surgery or certain medical procedures. The abstracts focus on the outcomes, efficiency, and safety of laparoscopic surgery for colon cancer, comparing it to open surgery, and discussing specific cases and conditions where laparoscopic surgery was performed. They evaluate factors such as tumor stage, presence of complications, perioperative benefits, oncologic outcomes, and the quality of oncologic resection (PUBMED:28745700, PUBMED:29863127, PUBMED:27143915, PUBMED:27838816, PUBMED:33383287, PUBMED:34159041, PUBMED:34097812, PUBMED:23879414, PUBMED:20011300, PUBMED:35747155). However, none of the abstracts mention the use or omission of bowel preparation as part of the surgical protocol or its impact on the success of laparoscopic colon resection. Therefore, based on the information provided in these abstracts, it is not possible to make a recommendation for or against the use of bowel preparation specifically in the context of laparoscopic colon resection. To answer this question, additional information from clinical guidelines, expert opinions, or studies specifically investigating the role of bowel preparation in laparoscopic colon surgery would be required.
Instruction: HIV vaccine trials: will intravenous drug users enroll? Abstracts: abstract_id: PUBMED:29378631 Socio-demographic and sexual practices associated with HIV infection in Kenyan injection and non-injection drug users. Background: Substance use is increasingly becoming prevalent on the African continent, fueling the spread of HIV infection. Although socio-demographic factors influence substance consumption and risk of HIV infection, the association of these factors with HIV infection is poorly understood among substance users on the African continent. The objective of the study was to assess socio-demographic and sexual practices that are associated with HIV infection among injection drug users (IDUs), non-IDUs, and non-drug users (DUs) at an urban setting of coastal Kenya. Methods: A cross-sectional descriptive study was conducted among 451 adults comprising HIV-infected and -uninfected IDUs (n = 157 and 39); non-IDUs (n = 17 and 48); and non-DUs (n = 55 and 135), respectively, at coastal Kenya. Respondent-driven sampling, snowball and makeshift methods were used to enroll IDUs and non-IDUs. Convenience and purposive sampling were used to enroll non-DUs from the hospital's voluntary HIV testing unit. A participant-assisted questionnaire was used to collect socio-demographic data and sexual practices. Results: Binary logistic regression analysis indicated that higher likelihood of HIV infection was associated with sex for police protection (OR, 9.526; 95% CI, 1.156-78.528; P = 0.036) and history of sexually transmitted infection (OR, 5.117; 95% CI, 1.924-13.485; P = 0.001) in IDUs; divorced, separated or widowed marital status (OR, 6.315; 95% CI, 1.334-29.898; P = 0.020) in non-IDUs; and unemployment (OR, 2.724; 95% CI, 1.049-7.070; P = 0.040) in non-drug users. However, never married (single) marital status (OR, 0.140; 95% CI, 0.030-0.649; P = 0.012) was associated with lower odds for HIV infection in non-drug users. Conclusion: Altogether, these results suggest that socio-demographic and sexual risk factors for HIV transmission differ with drug use status, suggesting targeted preventive measures for drug users. abstract_id: PUBMED:8179045 HIV vaccine trials: will intravenous drug users enroll? Objectives: The purpose of this study was to assess the willingness of intravenous drug users to participate in a preventive human immunodeficiency virus (HIV) vaccine efficacy trial. Methods: Of the 347 intravenous drug users in methadone treatment who were approached for participation, 257 completed a battery of self-administered questionnaires assessing risk behaviors, interest in vaccine trials, and other vaccine-related information. Data from 16 known seropositives and 1 inconsistent responder were dropped from analyses (n = 240). Results: Fifty-two percent of the subjects expressed a willingness to be one of the first individuals to participate in a preventive HIV vaccine efficacy trial. Subjects who had recently shared needles or works and subjects who trusted the government to ensure vaccine safety were both twice as likely to report interest in participation. Twenty-two percent of subjects reported that they would increase needle sharing if vaccinated. Thirty percent did not know what a vaccine was. Conclusions: These findings suggest that some in-treatment intravenous drug users would volunteer for a preventive HIV vaccine efficacy trial. Education and counseling will be required to ensure that subjects fully understand the trial's purposes, methods, risks and benefits.
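For readers unfamiliar with how the association measures quoted in these abstracts are derived, the sketch below shows how an odds ratio and an approximate 95% confidence interval can be computed from a simple 2x2 exposure-by-infection table. This is a minimal illustration only: the counts are hypothetical and are not taken from the Kenyan study, which reported adjusted odds ratios from a multivariable logistic regression rather than crude 2x2 tables.

```python
import math

# Hypothetical 2x2 table (illustrative counts only, not data from the study):
#                               HIV-positive   HIV-negative
# exposed (e.g., STI history)       a = 40         b = 60
# unexposed                         c = 20         d = 80
a, b, c, d = 40, 60, 20, 80

odds_ratio = (a * d) / (b * c)

# Woolf (log) method for an approximate 95% confidence interval
log_or = math.log(odds_ratio)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

An odds ratio above 1 with a confidence interval that excludes 1 indicates a statistically significant positive association, which is how results such as "OR, 5.117; 95% CI, 1.924-13.485" are read.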
abstract_id: PUBMED:28757895 Factors influencing mothers' decision to enroll their HIV-negative children in a hypothetical HIV vaccine trial. Despite advances in preventive treatments for HIV, children continue to become infected with HIV. Research has investigated adults' and adolescents' willingness to participate in hypothetical HIV vaccine trials; however, maternal willingness to enroll their infants in such trials remains underexplored. Aim: This study explored the factors influencing mothers' decision-making about enrolling their HIV-negative infants in a hypothetical HIV vaccine trial. Methods: HIV-infected and -uninfected mothers (n = 22) were interviewed. Results: Several factors were identified as influencing the mothers' decisions, including perceptions and knowledge of HIV and vaccines. Conclusion: Maternal concerns about protecting their infants from HIV were also identified and mothers indicated that they were eager to vaccinate their children. Insufficient information and reassurance regarding vaccine trial safety and efficacy influenced maternal reluctance to enroll their child. abstract_id: PUBMED:20044049 Cognitive factors and willingness to participate in an HIV vaccine trial among HIV-negative injection drug users. This cross-sectional study involving a cohort of injection drug users (IDU) examined the relationship between cognitive factors (HIV treatment optimism, self-efficacy and knowledge of vaccine trial concepts) as well as risk factors for seroconversion, and willingness to participate (WTP) in a preventive phase 3 HIV vaccine trial. Willingness to participate overall was 56%. In a multivariate analysis, for a 20-unit increase in a 100-point composite scale, self-efficacy was positively related to WTP (adjusted odds ratio [AOR]=1.95, 95% CI=1.40-2.70). HIV treatment optimism and knowledge of vaccine trial concepts were unrelated to WTP. Aboriginal ethnicity (AOR=3.47, 95% CI=1.68-7.18) and a higher educational level (≥ high school) (AOR=1.96, 95% CI=1.07-3.59) were positively related to WTP. This study provides information on WTP for an HIV vaccine trial. Limitations and future directions are also discussed. abstract_id: PUBMED:24404731 HIV seroprevalence among drug users in Togo. Context: In the context of implementing a serological and behavioural surveillance system among drug users, this study estimated the prevalence of HIV infection and related risk behaviours among drug users (DUs) in Togo. Method: A cross-sectional study was conducted among DUs in Togo from November 2011 to January 2012. This study involved all DUs regardless of the type of drug and the mode of consumption, over the age of 18 years, who had lived in Togo for at least 3 months. Behavioural data were collected by face-to-face interviews and serum was obtained for HIV antibody testing. HIV prevalence was estimated together with its 95% confidence interval (CI). Univariate and then multivariate analyses were performed to study the factors associated with HIV prevalence. Results: A total of 387 DUs participated in the survey and 235 (60.7%) of them were enrolled in Lomé, the country's capital. The median age of DUs was 32 years with an interquartile range of [25-39 years] and 10 (2.6%) were women. The mode of drug consumption was: smoking or inhaling drugs in 92.8% of cases and 2.8% of DUs used drugs by injection. HIV testing was accepted in 98.4% of cases. The estimated HIV prevalence among drug users was 5.5% (95% CI, 3.2-7.8%).
Conclusion: This study, the first to be conducted among DUs in Togo, found an HIV prevalence of 5.5%, which is higher than the HIV prevalence in the general population (3.2% in 2010). Specific care of DUs is essential in order to reduce HIV prevalence in Togo. abstract_id: PUBMED:22250925 Cognitive factors and willingness to participate in an HIV vaccine trial among HIV-positive injection drug users. There are gaps in our knowledge of the role cognitive factors play in determining people's willingness to participate (WTP) in therapeutic HIV vaccine trials. Using a cross-sectional study of HIV-positive injection drug users (IDU), we determined the role of three cognitive factors: HIV treatment optimism, self-efficacy beliefs, and knowledge of vaccine trial concepts in relation to WTP in a hypothetical phase 3 therapeutic HIV vaccine trial. WTP was 54%. Participants tended to be low in HIV treatment optimism (mean = 3.9/10), high in self-efficacy (mean = 79.8/100), and low in knowledge (mean = 4.1/10). Items pertaining to HIV treatment optimism and knowledge of HIV vaccine trial concepts were generally unrelated to WTP. An increase in self-efficacy had a statistically significant positive association with WTP (OR = 1.61, 95% CI = 1.04-2.46, p < 0.05). Furthermore, most of these HIV-positive participants had high levels of self-efficacy, so we are most confident about this relationship at such levels. These findings indicate that interventions focused on increasing self-efficacy could enhance WTP among HIV-positive IDU. abstract_id: PUBMED:21324140 Prevalence of HIV among injection drug users in Georgia. Background: Injection drug use remains a major risk factor for HIV transmission in Georgia. The study aims to characterize the prevalence of HIV among injection drug users in Georgia. Methods: A cross-sectional, anonymous bio-behavioural survey to assess knowledge and behaviour in injection drug users in combination with laboratory testing on HIV status was conducted in five Georgian cities (Tbilisi, Gori, Telavi, Zugdidi and Batumi) in 2009. A snowball sample of 1127 eligible injection drug user participants was investigated. Results: Odds of HIV exposure were increased for injection drug users of greater age, with greater duration of drug use and with a history of imprisonment or detainment (p < 0.05). Conclusions: More research is required to analyze the determinants of HIV risk in Georgian injection drug users. The imprisoned population and young injection drug users may be appropriate target groups for programmes aimed at preventing HIV transmission. abstract_id: PUBMED:21163100 Situations and trends of HIV and syphilis infections among drug users in China, 1995 - 2009. Objective: To understand the characteristics and trends of HIV and syphilis infections among drug users. Methods: Information on HIV and syphilis epidemics and on high-risk behaviors was collected from the HIV/AIDS surveillance system. Sources included HIV/AIDS case reporting, sentinel surveillance and trend estimation, covering changes in the distribution of HIV-positive drug users, HIV and syphilis prevalence rates, and related high-risk behaviors. Results: The number of drug users remained stable over the past six years (2003 - 2009), while the proportion of drug users decreased from 44.2% in 2005 to 25.8% in 2009.
Based on information from the HIV/AIDS case reporting system and HIV sentinel surveillance data, the HIV prevalence among drug users differed considerably between the Southwestern and Northeastern parts of China. The HIV prevalence in Southwestern China was much higher than that of the Northeastern areas. Sentinel surveillance data showed that, among drug users, the average positive rates for HIV and syphilis were 6.2% and 3.7%, respectively. Among injecting drug users (IDUs), the average positive rates for HIV antibody and syphilis were 9.2% and 4.0%, respectively. Serial surveillance data indicated that the HIV epidemic expanded quickly among drug users from 1995 to 1999 but plateaued after 2000 among IDUs. The HIV antibody positive rate remained at 9.0% - 10.0% over the past three years, while the syphilis rate fluctuated between 3.5% and 4.5%. Conclusion: Drug users and IDUs remain an important driving factor of the HIV/AIDS epidemic in China, suggesting that more specific strategies and methods should be developed, based on the characteristics and trends of the HIV epidemic, to optimize the allocation of HIV prevention and control resources. abstract_id: PUBMED:24849621 Drug users' willingness to encourage social, sexual, and drug network members to receive an HIV vaccine: a social network analysis. This study examined the feasibility of peer-based promotion of HIV vaccination and dyadic correlates to vaccine encouragement in risk- and non-risk networks of drug users (n = 433) in the US. Data were collected on HIV vaccine attitudes, risk compensation intentions, likelihood of encouraging vaccination, and recent (past 6 months) risk (i.e. involving sex and/or injecting drugs) and non-risk (i.e. involving co-usage of noninjected drugs and/or social support) relationships. Willingness to encourage HIV vaccination was reported in 521 and 555 risk- and non-risk relationships, respectively. However, 37% expressed hesitancy, typically due to fear of side effects or social concerns. Encouragement was often motivated by perceived HIV risk, though 9% were motivated by risk compensation intentions. In non-risk partnerships, encouragement was associated with drug co-usage, and in risk relationships, with perceived vaccine acceptability and encouragement by the partner. Network-based HIV vaccine promotion may be a successful strategy, but risk compensation intentions should be explored. abstract_id: PUBMED:25265310 Cognitive reserve during neuropsychological performance in HIV intravenous drug users. HIV-associated neurocognitive disorders are frequently observed in people with HIV. We aimed to evaluate the influence of cognitive reserve on the neuropsychological performance of seropositive drug users. We carried out a neuropsychological assessment and compared the performance of seropositive drug users (n = 75) with that of a group of seronegative drug users (n = 48). The results showed that a low cognitive reserve makes the seropositive patients neuropsychologically vulnerable. Likewise, we found that a high cognitive reserve has a protective effect in the presence of neuropsychological impairment associated with HIV. In the seronegative group, differences in a small number of tests were found between participants with low and high cognitive reserve. Overall, these data suggest that seropositivity is not sufficient to explain the neuropsychological alterations of seropositive drug users; rather, these alterations are multifactorial.
Answer: The willingness of intravenous drug users (IDUs) to participate in HIV vaccine trials appears to be influenced by several factors. A study assessing the willingness of IDUs in methadone treatment to participate in a preventive HIV vaccine efficacy trial found that 52% of the subjects expressed a willingness to be one of the first individuals to participate. Subjects who had recently shared needles or works and those who trusted the government to ensure vaccine safety were each about twice as likely to report interest in participation. Notably, 22% of subjects reported that they would increase needle sharing if vaccinated, and 30% did not know what a vaccine was, indicating a need for education and counseling to ensure that subjects fully understand the trial's purposes, methods, risks, and benefits (PUBMED:8179045). Another study involving a cohort of HIV-negative IDUs examined the relationship between cognitive factors (such as HIV treatment optimism, self-efficacy, and knowledge of vaccine trial concepts) and willingness to participate (WTP) in a preventive phase 3 HIV vaccine trial. The overall WTP was 56%, with self-efficacy being positively related to WTP. Aboriginal ethnicity and a higher educational level were also positively related to WTP. However, HIV treatment optimism and knowledge of vaccine trial concepts were unrelated to WTP (PUBMED:20044049). In a similar study among HIV-positive IDUs, WTP in a hypothetical phase 3 therapeutic HIV vaccine trial was 54%. The study found that participants were generally low in HIV treatment optimism and knowledge but high in self-efficacy. An increase in self-efficacy had a statistically significant positive association with WTP (PUBMED:22250925). These findings suggest that while there is a willingness among IDUs to participate in HIV vaccine trials, there are also significant gaps in knowledge and understanding about vaccines and trials that could affect enrollment. Education and interventions focused on increasing self-efficacy and understanding of vaccine trials could enhance WTP among IDUs.
Instruction: Do we need inhaled anaesthetics to blunt arousal, haemodynamic responses to intubation after i.v. induction with propofol, remifentanil, rocuronium? Abstracts: abstract_id: PUBMED:17005508 Do we need inhaled anaesthetics to blunt arousal, haemodynamic responses to intubation after i.v. induction with propofol, remifentanil, rocuronium? Background: The aim of this study was to determine whether, after propofol, rocuronium and remifentanil rapid sequence induction, inhaled anaesthetic agents should be started before intubation to minimize autonomic and arousal response during intubation. Methods: One hundred ASA I and II patients were randomized to receive 1 MAC of desflurane or sevoflurane during manual ventilation or not. Anaesthesia was induced with an effect-site-controlled infusion of remifentanil at 2 ng ml(-1) for 3 min. Patients then received propofol to induce loss of consciousness (LOC). Rocuronium (0.6 mg kg(-1)) was given at LOC and the trachea was intubated after 90 s of manual breathing support (=baseline) with or without inhaled anaesthetics. Vital signs and bispectral index (BIS) were recorded until 10 min post-intubation to detect autonomic and arousal response. Results: A significant increase in BIS value after intubation was seen in all groups. The increases were mild, even in those not receiving pre-intubation inhaled anaesthetics. However, in contrast to sevoflurane, desflurane appeared to partially blunt the arousal response. Heart rate, systolic and diastolic pressure increase similarly in all groups. Conclusions: Desflurane and sevoflurane were unable to blunt the arousal reflex completely, as measured by BIS, although the reflex was significantly less when desflurane was used. Rapid sequence induction with remifentanil, propofol and rocuronium and without inhaled anaesthetics before intubation can be done without dangerous haemodynamic and arousal responses at intubation after 90 s. abstract_id: PUBMED:21359074 Optimal effect-site concentration of remifentanil for preventing development of hypertension during tracheal intubation with inhaled desflurane induction. Background: The aim of the present study was to determine the effect-site concentration of remifentanil needed to prevent haemodynamic instability during tracheal intubation with inhaled desflurane induction. Methods: One hundred American Society of Anesthesiologists I and II female patients were randomized to receive an effect-site concentration of remifentanil of 0, 1, 2, 3, or 4 ng/ml. Induction of anaesthesia was started with intravenous injection of propofol 2 mg/kg. Ninety seconds after the completion of propofol injection, rocuronium (0.8 mg/kg) and remifentanil were administered simultaneously with 3% desflurane inhalation. Tracheal intubation was attempted 150 sec after the commencement of remifentanil administration. Results: A probit model of remifentanil concentration was predictive of successful intubation without development of hypertension (P for goodness-of-fit = 0.419). The effect-site concentration of remifentanil needed to achieve successful intubation without development of hypertension in 95% of the patients was 3.3 ng/ml (95% confidence interval, 2.6-4.8 ng/ml). Conclusions: The effect-site concentration of remifentanil of 3.3 ng/ml is effective in blunting the haemodynamic response in 95% of the patients when 2.0 mg/kg of propofol induction was followed by 3% desflurane inhalation. 
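The EC95-style estimate reported above (3.3 ng/ml) comes from fitting a probit dose-response model to per-patient success data and inverting it at a 95% success probability. The following is a minimal sketch of that procedure, assuming the statsmodels and scipy packages are available; the data are hypothetical and are not the trial's data.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical data: remifentanil effect-site concentration (ng/ml) and
# whether intubation succeeded without development of hypertension (1/0).
conc = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=float)
success = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1])

X = sm.add_constant(conc)            # intercept + concentration
fit = sm.Probit(success, X).fit(disp=0)
b0, b1 = fit.params

# Probit model: P(success) = Phi(b0 + b1 * conc).
# Solve Phi(b0 + b1 * conc) = 0.95 for conc to get the concentration that
# yields a 95% probability of success.
ec95 = (norm.ppf(0.95) - b0) / b1
print(f"Estimated effect-site concentration for 95% success: {ec95:.1f} ng/ml")
```

In practice a confidence interval for this concentration, such as the 2.6-4.8 ng/ml range quoted in the abstract, can be obtained from the fitted model with the delta method or by bootstrapping.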
abstract_id: PUBMED:27842499 Comparison of the effect of continuous intravenous infusion and two bolus injections of remifentanil on haemodynamic responses during anaesthesia induction: a prospective randomised single-centre study. Background: Remifentanil is an effective drug for protecting against adverse haemodynamic responses to tracheal intubation. We compared the haemodynamic responses during anaesthesia induction between continuous intravenous (IV) infusion and two bolus injections of remifentanil. Methods: This prospective, randomised, open-label, single-centre study included patients with American Society of Anesthesiologists physical status I-II, scheduled to undergo elective surgery under general anaesthesia. Patients were randomised into two groups based on remifentanil administration type: the continuous IV infusion group (Group C) receiving a 0.3-μg/kg/min remifentanil infusion for 5 min followed by a 0.1-μg/kg/min remifentanil infusion, and the IV bolus group (Group B) receiving a combination of two bolus injections of remifentanil (first bolus of 0.4 μg/kg and second bolus of 0.6 μg/kg after 3 min) and 0.1 μg/kg/min remifentanil. General anaesthesia was induced with 1 mg/kg propofol and 0.6 mg/kg rocuronium 3 min after remifentanil infusion (Group C) or immediately after the first bolus of remifentanil (Group B). Tracheal intubation was performed 4 min after the injection of propofol and rocuronium. Heart rate and non-invasive blood pressure were recorded at 1-min intervals from baseline (i.e., before induction) to 5 min after tracheal intubation. Results: A total of 107 patients were enrolled (Group C, 55; Group B, 52). Normotensive patients with no history of antihypertensive medication use were assigned to the normotensive subgroup (41 each in both groups), while those with hypertension but without a history of antihypertensive medication use were assigned to the untreated hypertensive subgroup (Group C vs. B, n = 7 vs. 4). Finally, patients with a history of antihypertensive medication use were assigned to the treated hypertensive subgroup (7 each in both Groups C and B). No differences in heart rate and blood pressure were observed between Groups C and B within each subgroup. Conclusions: Haemodynamic responses during anaesthesia induction were similar between continuous infusion and two bolus injections of remifentanil within both normotensive and hypertensive patients with or without medication. Trial Registration: The trial was retrospectively registered with Japanese Clinical Trial Registry "UMIN-CTR" on 20 October 2016 and was given a trial ID number UMIN000024495. abstract_id: PUBMED:30686315 Corrigendum to 'Do we need inhaled anaesthetics to blunt arousal, haemodynamic responses to intubation after i.v. induction with propofol, remifentanil, rocuronium?' [Br J Anaesth 2006; 97: 835-841]. N/A abstract_id: PUBMED:22949979 Comparison of dexmedetomidine and remifentanil for attenuation of hemodynamic responses to laryngoscopy and tracheal intubation. Background: This study was designed to compare the effect of dexmedetomidine and remifentanil used in anesthetic induction on hemodynamic change after direct laryngoscopy and tracheal intubation. Methods: A total of 90 ASA class 1 or 2 patients were randomly assigned to one of 3 groups to receive one of the following treatments in a double-blind manner: normal saline (Group C, n = 30), dexmedetomidine 1 µg/kg (Group D, n = 30), remifentanil 1 µg/kg (Group R, n = 30).
Anesthesia was induced with propofol 2 mg/kg and rocuronium 0.6 mg/kg and maintained with 2 vol% sevoflurane and 50% nitrous oxide in oxygen. In group D, dexmedetomidine 1 µg/kg was infused for 10 min before tracheal intubation. Patients in group R received 1 µg/kg of remifentanil 1 minute before tracheal intubation. The systolic blood pressure, diastolic blood pressure and heart rate were recorded from entrance to the operating room to 5 min after tracheal intubation. Results: The percent increase in systolic and diastolic blood pressure due to tracheal intubation in groups D and R was significantly lower than that of group C (P < 0.05). The heart rate 1 min after tracheal intubation was lower in groups R and D than in group C (P < 0.05). Conclusions: In healthy normotensive patients, the use of dexmedetomidine during anesthetic induction suppressed a decrease in blood pressure due to anesthetic induction and blunted the hemodynamic responses to endotracheal intubation. abstract_id: PUBMED:22379568 Low-dose remifentanil to modify hemodynamic responses to tracheal intubation: comparison in normotensive and untreated/treated hypertensive Korean patients. Background: Remifentanil has been shown to be effective at treating potentially adverse hemodynamic responses to tracheal intubation even at low doses (< 1 µg/kg/min), which needs to be evaluated in patients with diverse cardiovascular conditions. Methods: A low-dose regimen of remifentanil (continuous infusion of 0.1 µg/kg/min, preceded by 0.5 µg/kg bolus) was given before induction with bolus propofol and rocuronium, and heart rate as well as systolic, diastolic, and mean arterial pressures were measured at 1 min intervals from before induction to 5 min after tracheal intubation in normotensive patients, untreated hypertensive patients, and patients with known hypertension. Results: The low-dose regimen of remifentanil resulted in parallel hemodynamic responses in all three groups, and was effective at limiting hemodynamic responses to tracheal intubation without excessive cardiovascular depression. Hemodynamic responses in our study showed a similar pattern to that reported in previous investigations, except for elevations in heart rate and arterial pressures over the baseline values immediately after intubation. Conclusions: We suggest that the low-dose regimen of remifentanil in our study could be routinely used to modify hemodynamic responses to tracheal intubation in patients with diverse hemodynamic characteristics. However, the development of supplementary regimens is still needed to control the brief, but exaggerated responses to tracheal intubation, especially in untreated hypertensive patients. abstract_id: PUBMED:11990278 Effects of remifentanil and alfentanil on the cardiovascular responses to induction of anaesthesia and tracheal intubation in the elderly. Background: We compared the effects of remifentanil and alfentanil on arterial pressure and heart rate at induction of anaesthesia and tracheal intubation in 40 ASA I-III patients aged greater than 65 yr, in a randomized double-blind study. Methods: Patients received either remifentanil 0.5 microg kg(-1) over 30 s, followed by an infusion of 0.1 microg kg(-1) min(-1) (group R) or alfentanil 10 microg kg(-1) over 30 s, followed by an infusion of saline (group A). Anaesthesia was then induced with propofol, rocuronium, and 1% isoflurane with 66% nitrous oxide in oxygen.
Results: Systolic arterial pressure (SAP) and mean arterial pressure (MAP) decreased after the induction of anaesthesia (P<0.05) and increased for 3 min after intubation in both groups (P<0.05), but remained below baseline values throughout. Heart rate remained stable after induction of anaesthesia but increased significantly from baseline after intubation for 1 and 4 min in groups R and A, respectively (P<0.05). There were no significant between-group differences in SAP, MAP, and heart rate. Diastolic pressure was significantly higher in group A than group R at 4 and 5 min after intubation (P<0.05). Hypotension (SAP < 100 mm Hg) occurred in four patients in group R and three patients in group A. Conclusions: Remifentanil and alfentanil similarly attenuate the pressor response to laryngoscopy and intubation, but the incidence of hypotension confirms that both drugs should be used with caution in elderly patients. abstract_id: PUBMED:14997083 Peri-intubation cardiovascular response during low dose remifentanil or sufentanil administration in association with propofol TCI. A double blind comparison. Aim: The aim of this study was to compare the cardiovascular changes induced by tracheal intubation when a low-dose infusion of remifentanil or sufentanil is used in association with propofol target-controlled infusion. Methods: Sixty normotensive, ASA I-II-III patients with a Mallampati score <3 undergoing general anaesthesia for major elective abdominal surgery received i.m. midazolam (0.05 mg/kg) and atropine (0.01 mg/kg) 30 min before induction. They were randomly divided in a double-blind fashion into two groups receiving for induction a target-controlled infusion of propofol (effect-site concentration 3.0 microg/ml) with either sufentanil (0.01 microg/kg/min) or remifentanil (0.1 microg/kg/min) infusion. Rocuronium (0.6 mg/kg) was administered. Following intubation, the lungs were mechanically ventilated with an oxygen/air mixture. Results: Bispectral index score and haemodynamic variables were recorded at baseline, after induction, during intubation and 1-3-5 min after the trachea was intubated. No differences in systolic and diastolic arterial pressure were observed in the sufentanil group, while the remifentanil group showed transient systolic and diastolic variations after intubating manoeuvers. The heart rate and bispectral index score were not affected in either group. Conclusion: In healthy normotensive patients the use of a small dose of either remifentanil or sufentanil after standard midazolam premedication proved to be an effective strategy to blunt the cardiovascular response to intubation. abstract_id: PUBMED:30625825 The effects of remifentanil and lidocaine on the cardiovascular responses to tracheal intubation in hypertensive patients. Background: Hypertensive patients are more prone to exhibit an exaggerated hemodynamic responses to laryngoscopy and tracheal intubation. We compared the effects of remifentanil and lidocaine on the cardiovascular responses during intubation in hypertensive patients. Methods: Fifty adult hypertensive patients (>140/90 mmHg) were divided into 2 groups (remifentanil group; 1.0 microgram/kg, n = 25, lidocaine group; 1.5 mg/kg, n = 25). After 3 minutes'oxygenation, anesthesia was induced using 0.2 mg glycopyrrolate, 1.5 mg/kg propofol, and 1 of experimental drugs (30 seconds) bolus administration. Tracheal intubation was facilitated 90 seconds after administration of rocuronium; anesthesia was maintained with 2% sevoflurane and air in oxygen.
Arterial blood pressure and heart rate were measured at the following times: resting state (baseline); after oxygenation; before intubation; just after intubation; and at 1, 3, and 5 minutes after intubation. Results: Mean systolic and diastolic arterial blood pressures were significantly attenuated in the remifentanil group compared with the lidocaine group just after intubation and at 1 minute after intubation (P < .05). The difference in heart rate was not significant between the two groups. Conclusions: Bolus administration of remifentanil was found to be superior to lidocaine in the attenuation of the blood pressure during intubation in hypertensive patients. abstract_id: PUBMED:23468262 Different remifentanil doses in rapid sequence anesthesia induction: BIS monitoring and intubation conditions. Objectives: The aim of this prospective, randomized, double blind trial was to investigate the effects of two different doses of remifentanil on bispectral index (BIS) values and intubation conditions in a simulated model of rapid sequence anesthesia induction (RSAI). Material and Methods: 54 ASA I-II adult patients undergoing elective surgery were randomly allocated to two groups. After preoxygenation for 3 minutes, induction and tracheal intubation was performed in a 30° head-up position. Group I (n=26) and Group II (n=28) received a 1 µg/kg or 0.5 µg/kg bolus of remifentanil, respectively, over 30 seconds. Propofol was the induction agent. 1 mg/kg of rocuronium was used in all patients. Patients were intubated 60 s after administration of the muscle relaxant. Hemodynamic data and BIS scores were obtained before induction (baseline), after induction, at intubation and at 1, 3, 5 and 10 minutes following intubation. Intubation conditions were scored with a standard scoring system. Results: The hemodynamic variables at all the measurement intervals and the area under the hemodynamic variable-time curves were similar among the groups. There was no difference among the groups for BIS measurements. Moreover, the mean area under the BIS-time curve for Group I (300±45 cm2) was comparable to Group II (315±49 cm2) (p=0.432). The mean total intubation condition score (maximum 14 points) in Group I (12.6±1.67) was higher than Group II (10.3±4.79) (p=0.030). Conclusions: 1 µg/kg of remifentanil compared to 0.5 µg/kg of remifentanil provides similar hemodynamic profiles and BIS scores, but 1 µg/kg of remifentanil was associated with superior endotracheal intubation conditions. According to this study design and medications used, a relation between BIS scores and intubation conditions couldn't be demonstrated. Answer: The use of inhaled anesthetics to blunt arousal and hemodynamic responses to intubation after intravenous induction with propofol, remifentanil, and rocuronium has been studied, and the findings suggest that it may not be necessary to start inhaled anesthetics before intubation to minimize these responses. A study found that after rapid sequence induction with remifentanil, propofol, and rocuronium, the administration of desflurane or sevoflurane during manual ventilation did not completely blunt the arousal reflex as measured by the bispectral index (BIS), although the reflex was significantly less when desflurane was used.
The heart rate and blood pressure increased similarly in all groups, indicating that rapid sequence induction without inhaled anesthetics before intubation can be done without dangerous hemodynamic and arousal responses at intubation after 90 seconds (PUBMED:17005508). Furthermore, other studies have focused on the optimal effect-site concentration of remifentanil for preventing the development of hypertension during tracheal intubation. One study determined that an effect-site concentration of remifentanil of 3.3 ng/ml is effective in blunting the hemodynamic response in 95% of patients when 2.0 mg/kg of propofol induction was followed by 3% desflurane inhalation (PUBMED:21359074). Another study compared the hemodynamic responses during anesthesia induction between continuous intravenous infusion and two bolus injections of remifentanil and found no differences in heart rate and blood pressure between the two methods within normotensive and hypertensive patients (PUBMED:27842499). Additionally, a comparison of dexmedetomidine and remifentanil for attenuation of hemodynamic responses to laryngoscopy and tracheal intubation showed that both agents blunted the hemodynamic responses to endotracheal intubation, with dexmedetomidine additionally counteracting the decrease in blood pressure caused by anesthetic induction (PUBMED:22949979). In conclusion, while inhaled anesthetics like desflurane and sevoflurane may not completely blunt the arousal reflex, the use of remifentanil and propofol in rapid sequence induction appears to be sufficient to manage hemodynamic and arousal responses to intubation without the need for inhaled anesthetics before intubation.
Instruction: Infrapopliteal arterial revascularization for critical limb ischemia: is the peroneal artery at the distal third a suitable outflow vessel? Abstracts: abstract_id: PUBMED:18372150 Infrapopliteal arterial revascularization for critical limb ischemia: is the peroneal artery at the distal third a suitable outflow vessel? Purpose: Though the peroneal artery (PA) often remains patent despite disease or occlusion of other infrapopliteal arteries, there is skepticism about using the terminal PA as the outflow tract in distal revascularizations for limb salvage, especially when a patent inframalleolar artery is available. We analyzed our experience of using the distal PA and inframalleolar or pedal branches arteries as outflow tracts in revascularizations for critical limb ischemia. Methods: Over a decade, among 651 infrapopliteal arterial reconstructions performed in 597 patients, the PA was the outflow vessel in 214, its distal third being involved in 69 vein revascularizations (study group). During the same period, 187 vein bypass grafts were performed to 179 inframalleolar and 8 pedal branches arteries (control group). Patency, limb salvage and survival rates were assessed using Kaplan-Meier life-table analysis. Complete follow-up (range, 0.1-10.2 years; mean, 5.8 years) was obtained in 245 (95.7%) patients (66 were in the study group). Results: The distal PA was chosen as the target vessel: (1) because the proximal, mid-PA was occluded or severely diseased and no other adequate inframalleolar or pedal branches arteries were identified preoperatively (n = 30; 43.5%); (2) because an alternative inframalleolar target vessel was present but severely diseased (n = 9; 13%); (3) because of the length limitations of the available vein (n = 12; 17.4%); or (4) because of the presence of invasive infection or necrosis overlying the dorsalis pedis or posterior tibial arteries (n = 18; 26.1%). The study group was significantly younger than the control group (68 ± 7 years vs 70 ± 6 years, P = .039), and included significantly more patients with diabetes mellitus (65.2% vs 50.2%, P = .033) and insulin dependence (52.2% vs 37.9%, P = .041), dialysis-dependent chronic kidney disease (5.8% vs 1.1%, P = .047), and history of smoking (75.3% vs 58.2%, P = .012). None of the patients died in the perioperative period. Although the overall need for minor amputation was statistically higher in the PA group (78.2% vs 63.1%, P = .022), especially as concerns partial calcanectomy (8.7% vs 2.1%, P = .026), the proportion of wounds completely healed during the follow-up and the mean time to wound healing were comparable in the two groups. Kaplan-Meier analysis showed comparable long-term patency, limb salvage, and survival rates in the two groups. Conclusions: Revascularization to the distal third of the PA can achieve much the same outcome in terms of patency and limb salvage rates, wound healing rate and timing, as when other inframalleolar or pedal branches are used. The skepticism surrounding use of the terminal PA as an outflow vessel appears to be unwarranted. abstract_id: PUBMED:36062761 Vessel Preparation in Infrapopliteal Arterial Disease: A Systematic Review and Meta-Analysis. Purpose: Infrapopliteal lesions are generally complex to treat due to small vessel diameter, long lesion length, multilevel disease, and severe calcification. Therefore, different vessel preparation devices have been developed to contribute to better peri- and postprocedural outcomes.
This systematic review aims to compare different vessel preparation techniques prior to plain old balloon angioplasty (POBA) or drug-coated balloon (DCB) angioplasty with POBA or DCB alone in infrapopliteal arterial disease. Methods: Medline, EMBASE, and Cochrane databases were searched for studies published between 2000 and 2022 assessing the value of adjunctive vessel preparation in infrapopliteal arterial disease. The primary outcomes were 12-month primary patency and limb salvage. Results: A total of 1685 patients with 1913 lesions were included in 11 POBA studies. Methodological quality was assessed as poor to moderate in these studies. Only 2 studies with 144 patients assessed vessel preparation in conjunction with DCB angioplasty. These randomized trials were assessed as high quality and found no significant benefit of adjunctive atherectomy to DCB angioplasty. The pooled Kaplan-Meier estimates of 12-month primary patency and limb salvage in the POBA studies were 67.8% and 80.9% for POBA, 62.1% and 86.4% for scoring balloons, 67.9% and 79.6% for mechanical atherectomy (MA), and 79.7% and 82.6% for laser atherectomy, respectively. Within the pooled data only scoring balloons and MA demonstrated significantly improved 12-month limb salvage compared to POBA. Conclusions: Different forms of adjunctive vessel preparation demonstrate similar 12-month outcomes compared to POBA and DCB angioplasty alone in infrapopliteal disease, with the exception of improved 12-month limb salvage in scoring balloons and MA. However, since the included studies were heterogeneous and assessed as poor to moderate methodological quality, selection bias may have played an important role. Main conclusion is that this systematic review found no additional value of standard use of vessel preparation. Clinical Impact: Infrapopliteal arterial disease is associated with chronic limb-threatening ischemia (CLTI) and generally complex to treat due to small vessel diameter, long lesion length, multilevel disease and severe calcification. A wide range of vessel preparation devices have been developed to contribute to improved peri- and postprocedural outcomes in these complex lesions. This systematic review aims to compare different vessel preparation techniques prior to plain old balloon angioplasty (POBA) or drug coated balloon (DCB) angioplasty with POBA or DCB angioplasty alone in infrapopliteal arterial disease. Different forms of adjunctive vessel preparation demonstrate similar 12-month outcomes compared to POBA and DCB angioplasty alone in infrapopliteal disease, with the exception of improved 12-month limb salvage in scoring balloons and mechanical atherectomy (MA). However, since the included studies were heterogeneous and assessed as poor to moderate methodological quality, selection bias may have played an important role. Main conclusion is that this systematic review found no additional value of standard use of vessel preparation. abstract_id: PUBMED:33081117 Infrapopliteal Artery Occlusive Disease: An Overview of Vessel Preparation and Treatment Options. Critical limb ischemia (CLI) is defined as chronic rest pain and/or the presence of tissue loss (ulcers or gangrene) in the lower extremities secondary to ischemia. CLI is a limb and potentially life-threatening disease associated with a poor prognosis with only 50% of patients being able to preserve both limbs within 12 months of diagnosis. 
CLI related to diabetes is often more extensive with multi-level long segmental arterial disease resulting in a 5-30-fold increased rate of amputation. As the incidence and prevalence of diabetes mellitus increases within our aging society, the rate of infrapopliteal artery occlusive disease (IPOD) and the need for intervention rises with it. The aim of this manuscript is to provide the reader with an overview of the various devices available for vessel preparation (VP) and treatment of IPOD in order to optimize patency rates, symptom resolution, healing of wounds, and minimize complications. abstract_id: PUBMED:33665528 Collateral artery bypass in the infrapopliteal segment. A 74-year-old man with diabetes and end-stage renal failure on regular dialysis required revascularization for gangrene of multiple toes and a heel ulcer on the right foot with chronic limb-threatening ischemia. However, the anterior tibial artery, posterior tibial artery, and peroneal artery, which are the usual targets below the knee, showed obstruction or calcification and were considered inappropriate bypass targets. Instead, a collateral artery developed along the area of the posterior tibial artery, and bypass surgery was performed with this artery. This is a case report showing successful collateral artery bypass grafting in the distal infrapopliteal segment. abstract_id: PUBMED:36988702 Inframalleolar thrice distal puncture in a single endovascular treatment session for successful revascularization. Background: Most patients with chronic limb-threatening ischemia (CLTI) have infrapopliteal arterial disease, which are often challenging to treat. In endovascular treatment (EVT) for these complex lesions, establishing retrograde access is an essential option not only for guidewire crossing but also for device delivery. However, no EVT case has yet been reported requiring inframalleolar thrice distal puncture in a single EVT session so far. Case Presentation: A 60-year-old CLTI patient with grade 3 Wound, Ischemia and foot Infection (WIfI) classification underwent EVT for occluded dorsal artery and posterior tibial artery. First, we conducted successful balloon angioplasty of the posterior tibial artery by establishing a retrograde approach via the lateral plantar artery. To treat the occlusion of the dorsal artery, we punctured the first dorsal metatarsal artery, and retrogradely advanced a guidewire to the dorsal artery occlusion; however, the microcatheter could not follow the guidewire. Therefore, we punctured the occluded distal anterior tibial artery and introduced the retrograde guidewire into the puncture needle. After guidewire externalization, we pulled up the retrograde microcatheter into the occlusion of dorsal artery using the "balloon deployment using forcible manner" technique. Thereafter, we were able to advance the antegrade guidewire into the retrograde microcatheter. After guidewire externalization, an antegrade balloon catheter was delivered and inflated for the purpose of dorsal artery dilation and hemostasis at the "needle rendezvous" point. Consecutively, balloon dilation was performed for puncture site hemostasis of the first dorsal metatarsal artery and complete hemostasis was achieved. Finally, we confirmed good vascular patency and favorable blood flow. After revascularization, transmetatarsal amputation was performed and the wound healed favorably. 
Conclusions: We can markedly increase the success rate of revascularization by effectively utilizing the retrograde approach in EVT for complex chronic total occlusions in infrapopliteal arterial diseases. abstract_id: PUBMED:28582081 Management of Infrapopliteal Arterial Disease: Critical Limb Ischemia. According to the TransAtlantic Inter-Society Consensus Document on Management of Peripheral Arterial Disease, "there is increasing evidence to support a recommendation for angioplasty in patients with critical limb ischemia and infrapopliteal artery occlusion." Management of infrapopliteal artery disease starts with diagnosis using modern preprocedural noninvasive and invasive imaging. Interventionalists need to learn the role of chronic total occlusion cap analysis and collateral zone recognition in angiosome-directed interventions for management of critical limb ischemia and be familiar with equipment and device selection and a stepwise approach for endovascular interventions. Interventionalists need to know which crossing tools to use to successfully cross-complex chronic total occlusion caps. abstract_id: PUBMED:29402670 Infrapopliteal Angioplasty of One or More than One Artery for Critical Limb Ischaemia: A Randomised Clinical Trial. Objective: The aim was to analyse the effect of the treatment of more than one infrapopliteal artery with respect to wound healing and limb salvage. Methods: Seventy-eight patients were enrolled prospectively for 80 procedures (80 limbs) that were randomly divided into two groups: 40 in the single vessel (SV) group and 40 in the multiple vessel group (MV). All patients had tissue loss. The choice of the first artery to treat was based on an analysis of two factors: the ease of the required endovascular technique and the presence of adequate distal outflow. The randomisation point was after the first successful distal artery angioplasty. The primary endpoints were the wound healing rate and limb salvage. Results: The mean age of the patients was 69.1 ± 4.3 years, and 56% were male. Concomitant treatment of the femoral and popliteal arteries was performed in 38.8% of patients. All demographic characteristics and technical aspects were statistically comparable for both groups. Successful recanalisation was achieved in 95.8%, 86.2%, 86.9%, and 92.5% for the tibio-fibular trunk, anterior tibial, posterior tibial, and fibular artery, respectively. In the MV group, a higher contrast volume (29 mL more; p = .049), longer procedure time (p = .01), and higher radiation exposure (p = .04) were noted. There was no difference in renal function between the groups either before or 30 days after the procedure (p = .165). The limb salvage rates after 1 and 3 years, respectively, were 75.9% and 67% for the SV group and 91.1% and 91.1% for the MV group (log rank p = .052). The wound healing rates after 1 and 3 years, respectively, were 33.6% and 70.9% for the SV and 63.9% and 78.4% for MV group (log rank p = .006). Wound healing was faster in MV (2.11 cm2/month) than SV group (0.62 cm2/month; p = .004). Conclusion: Endovascular treatment of more than one artery was associated with better wound healing rates but not with better limb salvage. abstract_id: PUBMED:37853746 Twelve-Month Outcomes of Intravascular Lithotripsy for Treatment of Calcified Popliteal and Infrapopliteal Lesions in Patients With Chronic Limb-Threatening Ischemia. 
Purpose: Vessel calcification is estimated to be present in 30% to 50% of patients with peripheral arterial disease (PAD) and is one of the main challenges in endovascular treatment. The popliteal artery is unique compared with other arteries due to its exposure to significant deformation and biomechanical stress during knee motion. Intravascular lithotripsy (IVL) is a novel technique that uses acoustic pressure waves to cause microfractures within the intimal and medial wall calcification. Intravascular lithotripsy is safe in femoropopliteal and infrapopliteal lesions, but follow-up studies are lacking. Therefore, the purpose of this study was to describe the first follow-up outcomes of IVL in popliteal and infrapopliteal arterial disease. Methods: This prospective, multicenter cohort study included all patients treated with IVL in the popliteal and infrapopliteal arteries at 4 sites. Standardized follow-up with duplex ultrasonography was scheduled at 6 to 8 weeks and 12 months. The primary safety endpoint was a composite of major adverse events (MAEs) at 30 days. Primary efficacy endpoints were primary patency, limb salvage, and amputation-free survival (AFS) at 12 months. Secondary endpoints were primary-assisted patency and freedom from target lesion revascularization (TLR). Endpoints were distributed for patients with chronic limb-threatening ischemia (CLTI) and intermittent claudication (IC) and estimated using the Kaplan-Meier method. Results: Between April 2021 and March 2023, 29 patients with 30 limbs were treated. Diabetes mellitus (DM) and CLTI were present in 62.1% and 80.0% of patients, respectively. Within the 32 treated lesions, severe calcification was present in 84.4% and bailout stenting was necessary in 12.5% of the lesions. Four MAEs occurred within 30 days: 1 closure device failure, 1 major amputation, and 2 deaths, neither of which was related to the study device. The primary patency, primary-assisted patency, freedom from TLR, limb salvage, and AFS at 12 months were 68.8%, 90.0%, 93.3%, 83.9%, and 57.1% for CLTI patients, respectively. No events occurred in restenosis, re-occlusion, TLR, major amputation, or mortality in patients with IC. Conclusions: This first-ever analysis on follow-up outcomes of IVL in the popliteal and infrapopliteal arteries demonstrated promising safety and efficacy outcomes with a low rate of bailout stenting. Clinical Impact: Vessel calcification is a common feature in peripheral arterial disease (PAD) and is one of the main challenges in endovascular treatment. The popliteal artery is subjected to biomechanical stress during knee motion, which makes stenting unappealing and often leads to worse clinical outcomes. This study aimed to describe the first follow-up outcomes of IVL in popliteal and infrapopliteal arterial disease. As in line with previous studies, no relevant procedural complications were found and the rate of bail-out stenting was only 12.5%. Moreover, in a complex patient population, this study demonstrated promising safety and efficacy outcomes. The comparison of IVL with angioplasty alone or other vessel preparation devices for popliteal and infrapopliteal arterial disease is warranted. abstract_id: PUBMED:25000247 A registry on distal popliteal and infrapopliteal revascularization with coronary drug-eluting stents. Aims: Endovascular therapy for popliteal and infrapopliteal artery disease in patients with critical limb ischemia (CLI) remains challenging, given the high risk of adverse events. 
Favorable results for coronary balloon-expandable drug-eluting stents (DESs) in this arterial district have been reported in selected studies, but uncertainty persists on their risk-benefit balance in real-world patients. We, thus, sought to appraise our outcomes with DES implantation for distal popliteal or infrapopliteal lesions. Methods: Our institutional databases were retrospectively queried to identify patients undergoing DES implantation for distal popliteal or infrapopliteal artery disease. Baseline, lesion, procedural, and outcome data were systematically collected. Results: A total of 25 patients were treated with coronary DES, 12 (48.0%) for distal popliteal and 13 (52.0%) for infrapopliteal lesions. Four (16.0%) patients received two stents, whereas the others only one. Breakdown of DES was as follows: biolimus-eluting, everolimus-eluting, paclitaxel-eluting, and sirolimus-eluting stents in, respectively, two (8.0%), one (4.0%), one (4.0%), and 21 (84.0%), with an average stent length of 33.6 ± 13.1 mm. Improvement in Fontaine stage was achieved in 23 (92.0%) patients at 1 month and in 22 (88.0%) patients at long-term (28.8 ± 20.6 months). In this highly selected case series, no repeat revascularizations or unplanned amputations were recorded, whereas two (8.0%) patients died, the first 15.5 months and the second 19.5 months after the procedure. Conclusion: Implantation of coronary balloon-expandable DES appears feasible, safe, and effective in selected patients with focal lesions in the distal popliteal and infrapopliteal arteries. abstract_id: PUBMED:35100556 Intravascular Lithotripsy for Optimal Angioplasty of Infrapopliteal Calcified Lesions. Background: Infrapopliteal arterial disease results from 2 major etiologies: medial calcification and intimal atheromatous plaque. Lesion calcification constitutes one of the most widely encountered risk factors for percutaneous transluminal angioplasty failure. Intravascular lithotripsy (IVL) creates selective fracturing of calcium deposits in the arterial wall, increasing the compliance of the target artery and facilitating angioplasty. Results regarding IVL utilization at femoropopliteal and infrapopliteal lesions have been very promising in terms of safety and efficacy. This review presents currently available data on IVL outcomes for infrapopliteal lesions and provides technical information for optimal use of IVL in these challenging lesions. Answer: The peroneal artery (PA) at the distal third is indeed a suitable outflow vessel for infrapopliteal arterial revascularization in cases of critical limb ischemia (CLI). A study analyzing infrapopliteal arterial reconstructions found that revascularization to the distal third of the PA can achieve comparable outcomes in terms of patency and limb salvage rates, wound healing rate and timing, as when other inframalleolar or pedal branches are used. The skepticism surrounding the use of the terminal PA as an outflow vessel appears to be unwarranted. In the study, the distal PA was chosen as the target vessel for various reasons, including occlusion or severe disease of the proximal, mid-PA, absence of other adequate inframalleolar or pedal branches arteries preoperatively, length limitations of the available vein, or the presence of invasive infection or necrosis overlying the dorsalis pedis or posterior tibial arteries. 
The study group that used the distal PA included significantly more patients with diabetes mellitus, insulin dependence, dialysis-dependent chronic kidney disease, and a history of smoking. Despite these risk factors, the outcomes were comparable to those of the control group, in which inframalleolar and pedal branch arteries were used as outflow tracts (PUBMED:18372150).
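The IVL follow-up abstract above notes that primary patency, limb salvage, and amputation-free survival at 12 months were "estimated using the Kaplan-Meier method". As a quick illustration of what that estimation involves, the sketch below computes a Kaplan-Meier patency curve from a small, entirely hypothetical set of limb follow-up records; the times, event indicators, and helper function are illustrative assumptions, not data or code from the study.

```python
# Kaplan-Meier sketch for 12-month primary patency.
# All numbers below are hypothetical illustrations, not study data.
from collections import Counter

# (months_of_follow_up, event): event=1 means loss of patency (restenosis or
# re-occlusion); event=0 means censored (still patent at last duplex
# ultrasound, or lost to follow-up before 12 months).
follow_up = [(2, 1), (6, 0), (8, 1), (12, 0), (12, 0), (5, 1), (12, 0), (9, 0)]

def kaplan_meier(data):
    """Return [(time, estimated patency)] evaluated at each event time."""
    events = Counter(t for t, e in data if e == 1)
    survival, curve = 1.0, []
    for t in sorted(events):
        at_risk = sum(1 for ft, _ in data if ft >= t)   # limbs still under observation
        survival *= 1 - events[t] / at_risk
        curve.append((t, survival))
    return curve

for t, s in kaplan_meier(follow_up):
    print(f"month {t:>2}: estimated patency {s:.1%}")
```

Censored limbs still count toward the at-risk denominator until their last follow-up visit, which is why this estimator is preferred over a naive proportion when follow-up is incomplete or of unequal length.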
Instruction: Is alexithymia related to negative symptoms of schizophrenia? Abstracts: abstract_id: PUBMED:35046851 Primary and Secondary Negative Symptoms in Schizophrenia. The negative symptoms of schizophrenia include volitional (motivational) impairment manifesting as avolition, anhedonia, social withdrawal, and emotional disorders such as alogia and affective flattening. Negative symptoms worsen patients' quality of life and functioning. From the diagnostic point of view, it is important to differentiate between primary negative symptoms, which are regarded as an integral dimension of schizophrenia, and secondary negative symptoms occurring as a result of positive symptoms, comorbid depression, side effects of antipsychotics, substance abuse, or social isolation. If secondary negative symptoms overlap with primary negative symptoms, it can create a false clinical impression of worsening deficit symptoms and disease progression, which leads to the choice of incorrect therapeutic strategy with excessive dopamine blocker loading. Different longitudinal trajectories of primary and secondary negative symptoms in different schizophrenia stages are proposed as an important additional discriminating factor. This review and position paper focuses primarily on clinical aspects of negative symptoms in schizophrenia, their definition, phenomenology, factor structure, and classification. It covers the historical and modern concepts of the paradigm of positive and negative symptoms in schizophrenia, as well as a detailed comparison of the assessment tools and psychometric tests used for the evaluation of negative symptoms. abstract_id: PUBMED:26960948 Current developments and challenges in the assessment of negative symptoms. Reliable and valid assessment of negative symptoms is crucial to further develop etiological models and improve treatments. Our understanding of the concept of negative symptoms has undergone significant advances since the introduction of quantitative assessments of negative symptoms in the 1980s. These include the conceptualization of cognitive dysfunction as separate from negative symptoms and the distinction of two main negative symptom factors (avolition and diminished expression). In this review we provide an overview of existing negative symptom scales, focusing on both observer-rated and self-rated measurement of negative symptoms. We also distinguish between measures that assess negative symptoms as part of a broader assessment of schizophrenia symptoms, those specifically developed for negative symptoms and those that assess specific domains of negative symptoms within and beyond the context of psychotic disorders. We critically discuss strengths and limitations of these measures in the light of some existing challenges, i.e. observed and subjective symptom experiences, the challenge of distinguishing between primary and secondary negative symptoms, and the overlap between negative symptoms and related factors (e.g. personality traits and premorbid functioning). This review is aimed to inform the ongoing development of negative symptom scales. abstract_id: PUBMED:32007758 The influence of negative and affective symptoms on anhedonia self-report in schizophrenia. Background: Anhedonia, a symptom prevalent in schizophrenia patients, is thought to arise either within negative symptomatology or from secondary sources, such as depression. The common co-occurrence of these diseases complicates the assessment of anhedonia in schizophrenia. 
Method: In a sample of 40 outpatients with chronic schizophrenia, we explored both the validity of the Snaith-Hamilton Pleasure Scale (SHAPS) self-report for anhedonia assessment and those factors influenced its scoring. We assessed negative symptoms using the Brief Negative Symptom Scale (BNSS), depression symptoms using the Calgary Depression Scale for Schizophrenia (CDSS) and cognitive impairment using the Brief Assessment of Cognition in Schizophrenia (BACS), before exploring associations between these scales. Results: The SHAPS was validated for use in schizophrenia. SHAPS scores were not associated with negative symptoms or cognitive impairment, but were linked to a single Depression symptom: Hopelessness (r = 0.52, p < 0.001). Conclusions: SHAPS scores, therefore, appear to only reflect anticipatory anhedonia arising from the affective domain. We advocate the development of multi-faceted self-report measures to more holistically assess anhedonia in schizophrenia. abstract_id: PUBMED:37866075 Alexithymia in chronic schizophrenia and its mediating effect between cognitive deficits and negative symptoms. Background: Although cognition is known to impact clinical symptoms of schizophrenia, few studies investigate the potential mediators of this relationship. This study aimed to examine the relationship between cognitive deficits and negative symptoms in schizophrenia, considering the mediating role of alexithymia as an important psychological variable. Moreover, the prevalence of alexithymia in patients with schizophrenia was investigated. Methods: A total of 689 patients with schizophrenia were recruited from two psychiatric hospitals. All patients completed the Positive and Negative Syndrome Scale (PANSS), 20-item Toronto Alexithymia Scale (TAS-20), and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). We used structural equation modeling to examine the hypothesized mediated model. Results: In total, 31.5 % of patients with schizophrenia were classified as alexithymia. The path analyses showed that two factors of alexithymia (i.e., the difficulty in identifying feelings and difficulty in describing feelings), played a mediating role in the pathway from cognitive deficits to negative symptoms (all p < .001). Limitations: Self-reported measurement for alexithymia may not be sufficiently reliable due to response bias. Conclusion: Our findings demonstrated a high occurrence of alexithymia in patients with schizophrenia. Moreover, the mediating role of alexithymia suggests that targeting emotion processing and cognition may be a feasible way to mitigate negative symptoms. abstract_id: PUBMED:16224204 Is alexithymia related to negative symptoms of schizophrenia? A preliminary longitudinal study. Background: Alexithymic features are close to anhedonia, blunted affect, and alogia that are also characteristics of the negative symptoms of schizophrenia. This study aimed to evaluate whether alexithymia is associated with negative symptoms and is related to the change of schizophrenic symptoms over time. Sampling And Methods: A consecutive sample of 29 schizophrenic outpatients was evaluated at baseline and at 3, 6, and 12 months during appropriate treatment. They completed the Positive and Negative Syndrome Scale, the Montgomery and Asberg Depression Rating Scale, the Global Assessment of Functioning Scale, and the 20-item Toronto Alexithymia Scale at any time points. 
Results: The psychiatric scale scores showed significant symptom improvement over time but were unrelated to the alexithymia score that was instead stable over time. Hierarchical regression showed that the 20-item Toronto Alexithymia Scale at baseline was the sole predictor of alexithymia at 12 months, after controlling for psychopathology and psychological functioning. Conclusions: Alexithymia was unrelated to negative symptoms, suggesting it is an independent and separate construct from schizophrenia. As expected, the negative symptoms were associated instead with illness-related aspects of depression and psychosocial functioning. Caution should be expressed in generalization mainly because this study is limited by the small sample size. abstract_id: PUBMED:8564918 Negative symptoms and affective disturbance in schizophrenia and related disorders. Objective: To review the data and discuss clinical recommendations for treating negative symptoms of schizophrenia. Negative symptoms (e.g., poverty of thought, affective blunting) have been regarded as part of schizophrenia since Kraepelin's early descriptions, although they remain a subject of controversy. For example, it is unclear if negative symptoms are distinct from other psychiatric symptoms such as depression, or are in actuality depression within schizophrenia. Recent evidence suggests that negative symptoms are independent of depression. Method: Factor analytic studies have suggested that a negative factor (loss of affect, volition, poverty of thinking) may be distinguished from other components and is separable from a depression factor. Experimental use of vignettes have also been useful in the assessment of negative symptoms. A second controversial area is whether or not the presence or absence of affect is the fundamental issue separating schizophrenia from other psychoses. Results: A continuum of psychosis has been hypothesized, with unipolar psychotic depression at one pole and schizophrenia with defect state at the other. Within this proposed continuum, negative symptoms are associated only with schizophrenia without affect and with defect state schizophrenia. As such, variation in affect could be a primary determinant of the type of psychosis. Conclusion: It appears that negative symptoms are a distinct aspect of schizophrenia and may aid in our understanding of psychotic disorders. abstract_id: PUBMED:36036402 Primary and secondary negative disorders in schizophrenia as an actual problem of modern clinical psychiatry Negative symptoms of schizophrenia represent, at the present stage of their study, a certain group of mental pathology. They include motivation disorders or volitional disorders (anhedonia, abulia, asociality) and disorders of reduced expression or emotional disorders (alogia/poverty of speech and emotional decline/flattening). Negative symptoms are key disorders in schizophrenia, often appear at the pre-manifest stage of the disease, and significantly affect the prognosis and response to therapy. From a scientific and practical point of view, it is important to distinguish between primary and secondary negative symptoms, where the primary symptoms are nuclear, inherent in schizophrenia, and the secondary ones are the result of the influence of positive symptoms (delusions, hallucinations), depression, side-effects of medications, social isolation and hospitalism. Changes in the concept of primary and secondary negative disorders as well as a pathophysiological basis of negative symptoms are considered. 
abstract_id: PUBMED:9035984 Clinical effects of clozapine: effect on negative symptoms Objectives: we have realized an open study in a population of chronic schizophrenic inpatients treated with clozapine. The purpose of this study was to investigate the clinical response and the effect of extrapyramidal symptoms of this atypical antipsychotic. Methods: our sample is composed of 25 chronic hospitalized schizophrenics (18 males, 7 females; average age = 36.57, SD = 8.41) with an initial important symptomatology (average score on BPRS = 64.92, SD = 7.99). The weekly assessment was done using PANSS and EPRS during 9 weeks. After a 15-days period of treatment adjustment, each patient reached a daily dose of 400 mg. Then the dosage was adjusted according to the clinical condition of the patients (average = 429.41 mg per day). Results: using the PANSS, we identified a statistically significant (p < 0.05) clinical improvement from the fourth week of treatment for global score, positive, negative and general psychopathology sub-score. For EPRS a dramatic improvement (p < 0.01) occurred from the second week. We found no correlation between the improvement of negative symptoms and the improvement of extrapyramidal symptoms. We individualized a sub-group of 7 good responders who showed an improvement of 20% or more on PANSS global score at the ninth week. This group showed a statistically significant (p < 0.01) more intense initial negative symptomatology compared to non-responders. Conclusion: the improvement of extrapyramidal symptoms precede the improvement of psychotic symptoms; this effect underlines the good neurological tolerance of clozapine. The clinical efficacy concerns both positive and negative symptoms. The efficacy on negative symptoms seems to be primary. In our sample, patients with intense negative symptomatology improve more than others. abstract_id: PUBMED:35664491 Remission of Persistent Negative Symptoms and Psychosocial Consequences by Combined Clozapine and Cariprazine Treatment in a Patient With Long-Standing Treatment-Resistant Schizoaffective Disorder. This patient case report describes a 45-year old white unmarried man with disability pension due to schizoaffective disorder, diagnosed at the age of 24. He lives in an apartment and has housing support. Retrospectively, the patient displayed prodromal markers of a disorder within the schizophrenia spectrum many years before the onset of frank psychosis, indeed since childhood. Over the years several symptoms and signs across schizophrenia domains have been manifest: positive, negative, cognitive, and affective, among which the negative and affective symptoms and signs were the earliest to appear. While the positive, disorganized, and catatonic symptoms responded to treatment - when duly tested and complied with - the negative and affective symptoms have been notoriously difficult to handle. We now report on the successful introduction of cariprazine (CAR) to his ongoing clozapine (CLZ) medication, the result of which has been a near-complete remission of his persistent negative and psychosocial issues. We interpret this remarkable alleviation of the patient's disease - and concomitant improvement of his quality of life - in terms of neuroreceptor target complementarity between CLZ and CAR, with particular emphasis on the contributions from the D3 and D2 receptor partial agonist components of the latter agent. abstract_id: PUBMED:7800088 Positive or negative symptoms. Which are more reliable in the diagnosis of schizophrenia? 
Due to the anticipated revisions concerning diagnostic criteria for schizophrenia in DSM-IV, recent Anglo-American research has been particularly concerned with the controversial problem whether positive or negative symptoms are more suited for the determination of these criteria. We addressed this problem in an empirical study. A total of 489 consecutive admissions to the Department of Psychiatry at the RWTH University, Aachen were assessed for the distribution of positive, negative and basic symptoms according to six ICD-10 double-digit diagnostic categories. Positive symptoms were shown to be more useful for diagnosis than the negative or basic symptoms. Basic symptoms, however, had a pattern of distribution which supports the notion that they may also be useful for the early diagnosis of schizophrenia. Answer: The relationship between alexithymia and negative symptoms of schizophrenia has been explored in various studies, with mixed findings. Some studies have suggested that alexithymia, which is characterized by difficulty in identifying and describing feelings, may be related to negative symptoms of schizophrenia. For instance, a study found that alexithymia was prevalent in 31.5% of patients with schizophrenia and that two factors of alexithymia (difficulty in identifying feelings and difficulty in describing feelings) played a mediating role in the pathway from cognitive deficits to negative symptoms (PUBMED:37866075). This suggests that alexithymia could be an important psychological variable that influences the manifestation of negative symptoms in schizophrenia. However, another study found that alexithymia was unrelated to negative symptoms, suggesting it is an independent and separate construct from schizophrenia (PUBMED:16224204). This study's findings indicated that alexithymia scores were stable over time and were not associated with changes in schizophrenic symptoms, which were instead related to illness-related aspects of depression and psychosocial functioning. Overall, while there is evidence to suggest a potential relationship between alexithymia and negative symptoms in schizophrenia, the findings are not conclusive. Some studies support the idea that alexithymia may mediate the relationship between cognitive deficits and negative symptoms, while others suggest that alexithymia is a distinct construct unrelated to the negative symptoms of schizophrenia. Further research is needed to clarify the nature of this relationship and to determine the extent to which alexithymia may contribute to the negative symptomatology in schizophrenia.
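The mediation finding in PUBMED:37866075 (alexithymia lying on the path from cognitive deficits to negative symptoms) was obtained with structural equation modeling. The sketch below shows the simpler product-of-coefficients version of the same idea on simulated scores; the variable names only echo the scales named in the abstracts (RBANS, TAS-20, PANSS), and the effect sizes are invented for illustration rather than taken from the study.

```python
# Product-of-coefficients mediation sketch: cognition -> alexithymia -> negative symptoms.
# All data are simulated; nothing here reproduces the published analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 689
rbans = rng.normal(size=n)                                   # cognition (higher = better), simulated
tas20 = -0.4 * rbans + rng.normal(size=n)                    # alexithymia, simulated
panss_neg = 0.5 * tas20 - 0.2 * rbans + rng.normal(size=n)   # negative symptoms, simulated

def slope(x, y):
    """OLS slope of y on x (single centred predictor)."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

def partial_slope(y, m, x):
    """Coefficient of m in the OLS regression of y on [1, m, x]."""
    X = np.column_stack([np.ones_like(m), m, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(x, m, y):
    """a*b: (x -> m slope) times (m -> y slope adjusted for x)."""
    return slope(x, m) * partial_slope(y, m, x)

point = indirect_effect(rbans, tas20, panss_neg)
resamples = (rng.integers(0, n, n) for _ in range(2000))      # bootstrap index draws
boot = [indirect_effect(rbans[i], tas20[i], panss_neg[i]) for i in resamples]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

The study's SEM additionally handles measurement error and models both alexithymia factors jointly, so this sketch only illustrates the logic of an indirect effect and a bootstrap interval around it.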
Instruction: Is there an association between the rating of illness and injury severity and the experience of emergency medical physicians? Abstracts: abstract_id: PUBMED:18839122 Is there an association between the rating of illness and injury severity and the experience of emergency medical physicians? Background: The illness and injury severity of patients in emergency situations is normally rated by the National Advisory Committee for Aeronautics (NACA) score. Different issues seem to limit the validity of the NACA score, therefore, the aim of the present investigation was to analyse the association between rescue experience of pre-hospital emergency physicians and the estimated jeopardy of patients' vital functions using the NACA score. Material And Methods: In this retrospective study, the emergency chart protocols of patients in a ground-based emergency system from 2004 to 2005 were evaluated concerning patients demographic, diagnosis, and related NACA score. Emergency physicians were divided into two groups according to their experience as pre-hospital emergency physicians (group 1: less than 3 years and group 2: 3 or more years). Results: The patients in groups 1 and 2 were comparable concerning the mean age (58+/-24 years vs. 58+/-24 years) and the percentage of males (each 54%). The reasons for the emergency call in both groups were comparable with respect to disease, trauma, and the combination of both (both 77%, 18%, and 5%, resp.). A higher percentage of emergency physicians of group 1 estimated a lower illness and injury severity score in comparison to emergency physicians of group 2 with a longer working experience (NACA I-III: 56% vs. 48%; p<0.05). Accordingly, physicians in group 1 estimated a smaller percentage of patients to be in life-threatening situations (NACA IV-V: 33% vs. 40%; p<0.05). There were no significant differences in the NACA categories VI (2%) and VII (7%) between both groups. Conclusion: The results demonstrate that emergency physicians with less rescue experience rated the severity of illness or injury relatively lower in comparison to colleagues who had worked in the pre-hospital setting for many years. abstract_id: PUBMED:24691947 Prediction of further hospital treatment for emergency patients by emergency medical service physicians Introduction: Prehospital assessment of illness and injury severity with the National Advisory Committee for Aeronautics (NACA) score and hospital pre-arrival notification of a patient who is likely to need intensive care unit (ICU) or intermediate care unit (IMC) admission are both common in Germany's physician-staffed emergency medical services (EMS) system. Aim: This study aimed at comparing the prehospital evaluation of severity of disease or injuries by EMS physicians and the subsequent clinical treatment in unselected emergency department (ED) patients. Material And Methods: This study involved a prospective observational analysis of patients transported to the ED of an academic level I hospital escorted by an EMS physician over a period of 6 months (February-July 2011). The physician's qualification and the patient's NACA score were documented and the EMS physician was asked to predict whether the patient would need hospital admission and, if so, to the general ward, IMC or ICU. After the ED treatment, discharge or admission, outcome and length of hospital and ICU or IMC stay were documented. Results: A total of 378 mostly non-trauma patients (88 %) treated by experienced EMS physicians could be enrolled. 
The number of patients discharged from the ED decreased, while the number of patients admitted to the ICU increased with higher NACA scores. Prehospital prediction of discharge or admission, IMC or ICU treatment by EMS physicians was accurate in 47 % of the patients. In 40 % of patients a lower level of care was sufficient, while 12 % needed treatment on a higher level of care than that predicted by EMS physicians. Of the patients predicted to be discharged after ED treatment, 39 % were admitted to hospital, and 48 % of patients predicted to be admitted to the IMC were admitted to the general ward. Patients predicted to be admitted to the ICU were actually admitted to the ICU in 75 % of cases. Higher NACA scores were associated with increased mortality and a longer hospital, IMC or ICU length of stay, but significant differences were only found between patients with NACA V versus VI scores or patients predicted to be treated on the IMC versus the ICU. Conclusions: Prehospital NACA scores indicate the need for inpatient treatment, but neither hospital discharge or admission nor the need for IMC or ICU admission after initial ED treatment could be sufficiently predicted by EMS physicians. Thus, hospital prenotification in order to pre-allocate IMC or ICU capacity does not seem to be useful in cases where an ED can reassess admitted EMS patients. abstract_id: PUBMED:18489871 Optimization of use of prehospital emergency physicians after new dispatch and guidance instructions. Introduction: Correct use of prehospital medical competence requires optimal dispatch. Based on the severity gauge, the Severity of Injury/Illness Index (SIII), which grades injury/illness into eight levels, we examined the effect of implementing new dispatch and guidance instructions in the emergency call centre. Materials And Methods: From the local Prehospital Database we extracted data from 1st August 2000 to 31st December 2005. On 1st August 2003 new dispatch and guidance instructions were implemented with a graded allocation of prehospital resources. It is hereby possible to dispatch 1) an ambulance + prehospital emergency physician (PEP); 2) an ambulance; or refer to 3) self care or alternative transportation. Results: During the study 10,585 patients were attended by a PEP. After implementing the new dispatch and guidance instructions we experienced a total increase of five per cent in the four most severe SIII groups. A total of 189 patients were transferred to the relevant groups. These changes are significant with p < 0.001 (chi-squared test). Conclusion: By implementing more graded dispatch and guidance instructions in the emergency call centre it is possible to optimise the use of the PEP. abstract_id: PUBMED:37529486 Patients who use emergency medical services have greater severity of illness or injury compared to those who present to the emergency department via other means: A retrospective cohort study. Objective: We aimed to evaluate the differences in characteristics and illness/injury severity among patients who present to the emergency department (ED) via emergency medical services (EMS) compared to patients who present via other means. Methods: We analyzed a nationwide sample of ED visits from the 2015-2019 National Hospital Ambulatory Medical Care Survey. We excluded patients <18 or >92 years old, who eloped or left against medical advice, or who arrived via interfacility transport. Mode of presentation was dichotomized to those presenting to the ED via EMS versus any other mode of transportation.
Using the appropriate survey sampling weights, we described patient characteristics and compared measures of illness/injury severity between groups using a multivariable logistic regression model. Results: An unweighted total of 73,397 ED visits, representing a weighted estimate of 528,083,416 ED visits in the United States during 2015-2019, included 18% arriving via EMS and 82% via other means. EMS patients were older, more often male, more often had multiple chronic medical conditions, and less often had private insurance. EMS patients had higher priority triage scores, consumed more resources in the ED, and had longer lengths of stay. Arrival by EMS was associated with higher odds of hospital admission (odds ratio [OR] 2.7, 95% confidence interval [CI] 2.4-2.9) and in-hospital mortality (OR 11.1, 95% CI 7.3-17.2). Conclusions: Patients presenting via EMS had significantly different characteristics and outcomes than those presenting via other means. These important differences should be considered when comparing studies of all ED patients versus those who present via EMS. abstract_id: PUBMED:29233228 Disease and Non-Battle Traumatic Injuries Evaluated by Emergency Physicians in a US Tertiary Combat Hospital. Introduction Analysis of injuries during military operations has focused on those related to combat. Non-combat complaints have received less attention, despite the need for many troops to be evacuated for non-battle illnesses in Iraq. This study aims to further characterize the disease and non-battle injuries (DNBIs) seen at a tertiary combat hospital and to describe the types of procedures and medications used in the management of these cases. Methods: In this observational study, patients were enrolled from a convenience sample with non-combat-related diseases and injuries who were evaluated in the emergency department (ED) of a US military tertiary hospital in Iraq from 2007-2008. The treating emergency physician (EP) used a data collection form to enroll patients that arrived to the ED whose injury or illness was unrelated to combat. Results: Data were gathered on 1,745 patients with a median age of 30 years; 84% of patients were male and 85% were US military personnel. The most common diagnoses evaluated in the ED were abdominal disorders, orthopedic injuries, and headache. Many cases involved intravenous access, laboratory testing, and radiographic testing. Procedures performed included electrocardiogram, lumbar puncture, and intubation. Conclusion: Disease and non-battle traumatic injuries are common in a tertiary combat hospital. Emergency providers working in austere settings should have the diagnostic and procedural skills to evaluate and treat DNBIs. Bebarta VS , Mora AG , Ng PC , Mason PE , Muck A , Maddry JK . Disease and non-battle traumatic injuries evaluated by emergency physicians in a US tertiary combat hospital. Prehosp Disaster Med. 2018;33(1):53-57. abstract_id: PUBMED:21762912 Helicopter rescue operations involving winching of an emergency physician. Objective: We sought to study the epidemiologic and medical aspects of alpine helicopter rescue operations involving the winching of an emergency physician to the victim. Methods: We retrospectively reviewed the medical and operational reports of a single helicopter-based emergency medical service. Data from 1 January 2003 to 31 December 2008 were analysed. Results: A total of 921 patients were identified, with a male:female ratio of 2:1. There were 56 (6%) patients aged 15 or under. 
The median time from emergency call to helicopter take-off was 7 min (IQR = 5-10 min). 840 (91%) patients suffered from trauma-related injuries, with falls from heights during sports activities the most frequent event. The most common injuries involved the legs (246 or 27%), head (175 or 19%), upper limbs (117 or 13%), spine (108 or 12%), and femur (66 or 7%). Only 81 (9%) victims suffered from a medical emergency, but these cases were, when compared to the trauma victims, significantly more severe according to the NACA index (p<0.001). Overall, 246 (27%) patients had a severe injury or illness, namely, a potential or overt vital threat (NACA score between 4 and 6). A total of 478 (52%) patients required administration of major analgesics: fentanyl (443 patients or 48%), ketamine (42 patients or 5%) or morphine (7 patients or 1%). The mean dose of fentanyl was 188 micrograms (range 25-750, SD 127). Major medical interventions such as administration of vasoactive drugs, intravenous perfusions of more than 1000 ml of fluids, ventilation or intubation were performed on 39 (4%) patients. Conclusions: The severity of the patients' injuries or illnesses along with the high proportion of medical procedures performed directly on-site validates emergency physician winching for advanced life support procedures and analgesia. abstract_id: PUBMED:10240426 An illness-injury severity index for nonphysician emergency medical personnel. N/A abstract_id: PUBMED:12184709 Medical consequences of violence: a two-year prospective study from a primary care accident and emergency department in Norway. Objective: To assess the medical consequences of violence from the perspective of a primary care accident and emergency department. Design: Prospective observational study. Setting: Bergen Legevakt (AED). Subjects: All assault victims treated at the AED 1994-96. Main Outcome Measures: Diagnoses, treatments, number of consultations at the AED, sickness certificates, rates of admittances to hospitals, referrals to specialists and injury severity ratings using Abbreviated Injury Scale (AIS) and Shepherd's injury severity scale. Results: 1803 assault victims were registered, 433 (24%) females and 1370 (76%) males. Most injuries were of slight severity, corresponding to AIS 0 to 1 (82%) or Shepherd's scale 0 to 1 (74%). Bruises/contusions and cuts/lacerations dominated. The majority of patients did not receive any specific treatment at the AED and they were not given sickness certificates, but 11% were admitted to hospitals and 30% were referred to specialists. Males were more likely to be seriously injured than females. Conclusion: Most physical injuries caused by violence and treated at a primary care accident and emergency department are minor. abstract_id: PUBMED:29380133 Non-urgent use of emergency departments: populations most likely to overestimate illness severity. Patients' overestimation of their illness severity appears to contribute to the national epidemic of emergency department (ED) overcrowding. This study aims to elucidate which patient populations are more likely to have a higher estimation of illness severity (EIS). The investigator surveyed demographic factors of all non-urgent patients at an academic ED. The patients and physicians were asked to estimate the patients' illness severity using a 1-10 scale with anchors. The difference of these values was taken and compared across patient demographic subgroups using a 2-sample t-test. One hundred and seventeen patients were surveyed. 
The mean patient EIS was 5.22 (IQR 4), while the mean physician EIS was less severe at 7.57 (IQR 3), a difference of 2.35 (p < 0.0001). Patient subgroups with the highest EIS compared to the physicians' EIS include those who were self-referred (difference of 2.65, p = 0.042), with income ≤ $25,000 (difference of 2.96, p = 0.004), with less than a college education (difference of 2.83, p = 0.018), and with acute-on-chronic musculoskeletal pain (difference of 4.17, p = 0.001). If we assume the physicians' EIS is closer to the true illness severity, patients with lower socioeconomic status, lower education status, who were self-referred, and who suffered from acute-on-chronic musculoskeletal pain are more likely to overestimate their illness severity and may contribute to non-urgent use of the ED. They may benefit from further education or resources for care to prevent ED misuse. The large difference of acute-on-chronic musculoskeletal pain may reflect a physician's bias to underestimate the severity of a patients' illness in this particular population. abstract_id: PUBMED:11565966 Non-fatal occupational injuries and illnesses treated in hospital emergency departments in the United States. Objectives: To estimate the number and rate of occupational injuries and illnesses treated in hospital emergency departments and to characterize the nature, event, and source of injury and illness. Setting: Twenty four hour emergency departments in hospitals in the United States. Methods: Surveillance for occupational injuries and illnesses was conducted in a national probability based sample of hospital emergency departments through the National Electronic Injury Surveillance System (NEISS). Worker demographics, nature of injury and disposition, and incident circumstances were abstracted from emergency department medical records, typically within 24-72 hours of treatment. Results: Approximately 3.6 million occupational injuries and illnesses were treated in emergency departments in 1998. Younger workers, particularly males, continue to have the highest rates of work related injuries. Together, lacerations, punctures, amputations, and avulsions represented one fourth of the emergency department treated injuries, mostly to hand and fingers. Sprains and strains, largely to the trunk, also accounted for one fourth of the injuries. The three leading injury events were contact with objects, bodily reactions and exertions, and falls. Conclusions: Despite apparent decreases in rates, youth continue to have a high burden of injury in the workplace. However, three fourths of all emergency department treated injuries occur to workers 20-44 years of age. Emergency department surveillance is particularly amenable to capture of young worker injuries and provides a wealth of injury details to guide prevention efforts--efforts that will likely reduce occupational injuries as these workers age. Emergency department surveillance also provides injury estimates with few demographic or employer constraints, other than the medical venue used. Answer: Yes, there is an association between the rating of illness and injury severity and the experience of emergency medical physicians. A study found that emergency physicians with less rescue experience rated the severity of illness or injury relatively lower in comparison to colleagues who had worked in the pre-hospital setting for many years (PUBMED:18839122). 
This suggests that the experience of emergency medical physicians can influence their assessment of patient conditions in emergency situations.
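The association reported in PUBMED:18839122 rests on comparing the distribution of NACA categories between less-experienced and more-experienced physicians. A minimal sketch of that kind of comparison is shown below; the cell counts are invented to roughly mirror the reported 56% vs 48% NACA I-III split and are not the study's actual contingency table.

```python
# Chi-squared comparison of NACA severity ratings by physician experience.
# Counts are illustrative only, chosen to echo the reported 56% vs 48% split.
from scipy.stats import chi2_contingency

#                NACA I-III, NACA IV-VII
table = [
    [560, 440],  # physicians with < 3 years of prehospital experience
    [480, 520],  # physicians with >= 3 years of prehospital experience
]

chi2, p, dof, expected = chi2_contingency(table)
low_severity_rates = [row[0] / sum(row) for row in table]
print(f"NACA I-III assigned in {low_severity_rates[0]:.0%} vs {low_severity_rates[1]:.0%} of cases")
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```

A test like this only shows that the rating distributions differ between experience groups; it cannot say which group's ratings are closer to the patients' true severity.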
Instruction: Can emergency medical service staff predict the disposition of patients they are transporting? Abstracts: abstract_id: PUBMED:18843076 Can emergency medical service staff predict the disposition of patients they are transporting? Background: Emergency medical service (EMS) staff in the UK routinely transport all emergency responses to the nearest emergency department (ED). Proposed reforms in the ambulance service mean that EMS staff will transport patients not necessarily to the nearest hospital, but to one providing facilities that the patient is judged to require. No previous UK studies have evaluated how accurately EMS staff can predict which transported patients will require admission to hospital. Objectives: To survey EMS staff regarding the appropriate use of their service and determine whether they can predict which patients will require hospital admission. Methods: A prospective 'service evaluation' of EMS staff transporting patients to an adult ED in the UK. Staff were asked to state whether ED attendance by emergency ambulance was appropriate and whether transported patients would be admitted or discharged from the ED. Results: During the study period, there were 2553 emergency transports to the ED and questionnaires were completed in 396 cases (15.5%). EMS staff predicted that 182 (46.0%) would be admitted to hospital and 214 (54.0%) would be discharged. Actual dispositions were 187 (47.2%) versus 209 (52.8%) respectively. Sensitivity of predicting admission was 71.7% (95% CI 65 to 78) and specificity was 77.0% (95% CI 71 to 81). EMS staff were significantly better at predicting admission in non-trauma cases than trauma cases (75.9% vs 57.1%, 95% CI 2.2 to 35.4). Conclusion: Staff in one UK ambulance service showed reasonable accuracy when predicting the likelihood of admission of patients they transport. They correctly identified most patients who would be able to leave. Further work is needed to support these findings and ensure that EMS staff safely triage patients to alternative destinations of care. abstract_id: PUBMED:23945897 Evaluation of emergency medical technicians intermediate prediction about their transported patients final disposition in emergency department of Imam Khomeini Hospital. This was a prospective cross-sectional study of consecutive patients transported by emergency medical service (EMS) to a referral hospital. The goal of this study was to evaluate emergency medical technicians' intermediate predictions of their transported patients' disposition in the Emergency Department of Imam Khomeini Hospital. In total, 2950 patients were transported to this hospital; questionnaires were submitted for 300 consecutive patient transports, and completed data were obtained and available upon arrival at the hospital for 267 of these cases. Emergency medical technicians intermediate (EMT-I) were asked to predict whether the transported patient would require admission to the hospital and, if so, to predict the patient's actual disposition. Their predictions were compared with those of emergency specialist physicians. EMT-I predicted that 208 (78%) transports would lead to admission to the hospital; after actual disposition, 232 (87%) patients were admitted. The sensitivity of predicting any admission was 65%, with a positive predictive value (PPV) of 39%, and specificity was 86% with a negative predictive value (NPV) of 94%.
The sensitivity of predicting trauma patients (56.2% of total patients) was 55% with a PPV of 38% and a specificity of 86%; for non-trauma patients, sensitivity was 80% with a PPV of 40% and a specificity of 82%. EMT-I in our emergency medical system have a very limited ability to predict the admission and disposition of transported patients, and their predictions were better for non-trauma patients. Therefore, in our EMS, pre-hospital diversion and transport-necessity policies should not be based on EMS personnel's disposition predictions. abstract_id: PUBMED:37525826 The Ability of Emergency Medical Service Staff to Predict Emergency Department Disposition: A Prospective Study. Purpose: Paramedics' decision to notify receiving hospitals and transport patients to an appropriate healthcare facility is based on the Prediction of Intensive Care Unit (ICU) and Hospital Admissions guide. This study aimed to assess the paramedics' gestalt on both ward and ICU admission. Patients And Methods: A prospective study was conducted at King Abdulaziz Medical City between September 2021 and March 2022. Paramedics were asked several questions related to the prediction of the patient's hospital outcome, including emergency department (ED) discharge or hospital admission (ICU or ward). Additional data, such as the time of the ambulance's arrival and the staff years of experience, were collected. The categorical characteristics are presented by frequency and percentage for each category. Results: This study included 251 paramedics and 251 patients. The average age of the patients was 62 years. Of the patients, 32 (12.7%) were trauma, and 219 (87.3%) were non-trauma patients. Two-thirds of the patients (n=171, 68.1%) were predicted to be admitted to the hospital, and 80 (31.8%) of the EMS staff indicated that the patient did not need a hospital or an ambulance. The sensitivity, specificity, PPV, and NPV of the emergency medical service (EMS) staff's gestalt for patient admission to the hospital were, respectively, 77%, 33%, 16%, and 90%. Further analyses of the EMS staff's gestalt by staff level and by the nature of the emergency (medical vs trauma) are also reported. Conclusion: Our study reports a low level of accuracy in predicting patient admission to the hospital, including the ICU. The results of this study have important implications for enhancing the accuracy of EMS staff's predictive ability and ensuring that patients receive appropriate care promptly. abstract_id: PUBMED:17973623 Can medical admission and length of stay be accurately predicted by emergency staff, patients or relatives? Objectives: To determine the accuracy of predictions of the need for hospital admission and, if admitted, length of stay (LOS) made early in an emergency attendance by emergency department (ED) doctors, nurses, patients and relatives, and the characteristics of ED presentations predictive of admission and short stays (≤ 3 days). Methods: Prospective collection of predictions by medical and nursing staff, patients and relatives of ED departure status and LOS (1 day, 2-3 days, 4-7 days or longer) of a convenience sample of adults presenting with medical symptoms. Predictions were made before full medical assessment and matched against actual departure status and LOS. Vital signs and demographics were recorded. Results: Seventy-five percent (2159/2904; CI 73%-77%) of all admission predictions in 704 patients were correct, with 85% (575/673; CI 81%-88%) of doctors' predictions correct.
Thirty-five percent (361/1024) of all LOS predictions for 331 patients were correct with 46% (122/268; CI 40%-52%) of doctors' predictions correct. Risk factors for short-stay over longer admission included age less than 65, normal oxygen saturations and self-referral. Conclusion: Emergency admissions can be predicted with reasonable accuracy but LOS is difficult to predict. Development of a prediction tool may facilitate streaming and appropriate use of short-stay units. abstract_id: PUBMED:33595695 Geriatric education for emergency medical services : The geriatric patient in the rescue service Geriatric patients are a frequent group of patients in the ambulance service. In recent years, many structured course systems have been established in the German-speaking countries and help not only to speak the same language but also to improve the communication and care of patients in the ambulance service and emergency admission. While emergency physicians and ambulance personnel have increasingly dealt with structured processes in the fields of trauma, children and cardiovascular disorders in various course systems in order to minimize risks, there is only one course system established a few years ago that deals exclusively with the geriatric patient. Although this is the most common group of patients in emergency care, there is a risk of making mistakes due to the lack of structures and routine. The geriatric education for emergency medical services (GEMS) course system can help to ensure a structured emergency care and communication in the care of geriatric patients. abstract_id: PUBMED:25082416 The Emergency Medical Service personnel's perception of the transportation of young children. To our knowledge no previous studies have been made which describe the Emergency Medical Service (EMS) personnel's perceptions of transporting children and the care encounter between the child, the parent and the EMS personnel when separating the child and the parent specifically in an ambulance. The aim of this study was to gain an understanding of how EMS personnel perceive ambulance transport of children. The study was carried out in 2012 at one of three ambulance contractors in Stockholm. Twelve semi-structured interviews were conducted and the content analysis showed that transporting children induces stress and is deemed a precarious task by EMS personnel mainly because children are considered more vulnerable than adults and because of the necessity to separate the child from the parent during transport. There is a conflict between medical- and emotional wellbeing and traffic-safety during the transportation of children and a fear of insufficient ability to care for the child. The EMS personnel's vulnerability is evident in the complicated care situation associated with transporting children in an ambulance. These findings may be considered a first step in assessing if action is needed to improve care and patient safety during ambulance transportation of children. abstract_id: PUBMED:34696995 Performance of the Simple Clinical Score (SCS) and the Rapid Emergency Medicine Score (REMS) to predict severity level and mortality rate among patients with sepsis in the emergency department. Nurses play a key role as the first line of service for patients with medical conditions and injuries in the emergency department (ED), which includes assessing patients for sepsis. 
The researchers evaluated the performance of the Simple Clinical Score (SCS) and the Rapid Emergency Medicine Score (REMS) in predicting sepsis severity and mortality among sepsis patients in the ED. A retrospective survey was performed, selecting participants by using a purposive sampling method and including the medical records of all patients diagnosed with sepsis admitted to the ED at Singburi Hospital, Thailand. Data were analysed using the ROC curve and the area under the curve (AUC) to calculate the accuracy of each patient's mortality prediction. A total of 225 patients diagnosed with sepsis were identified, with a mortality rate of 59.11% after admission to the medical service and intensive care unit. The AUC analysis showed that the accuracy of the model generated from the REMS (88.6%) was higher than that of the SCS (76.7%). The authors also recommend that key variables identified in this research should be used to develop screening and assessment tools for sepsis in the context of the ED. abstract_id: PUBMED:35162255 Impact of COVID-19 Pandemic on Quality of Health Services Provided by Emergency Medical Services and Emergency Departments in the Opinion of Patients: Pilot Study. Background: The aim of this study was to assess the quality of assistance provided by emergency medical service (EMS) and hospital emergency department (ED) staff in the opinion of patients. Methods: Simple random sampling was performed to select the respondents. A group of 131 people who correctly completed the questionnaire form was included in the study. The questionnaire was directed only to patients receiving EMS and/or ED assistance during the time period when the pandemic was announced in Poland. Results: Problems in the ED were reported by 38 people (31.15%), and calling EMS was reported as generating problems by 7 people (16.67%). Dissatisfaction with the help provided in EDs was reported by 33 patients (27.05%), and dissatisfaction with the help provided by EMS was reported by eight people (19.05%). The patients' satisfaction level correlated significantly with the waiting time in the ED (Spearman = 0.217; p = 0.016) as well as with the waiting time for EMS (Spearman = 0.403; p = 0.008). Conclusions: Patients of the ambulance service and hospital emergency departments during the pandemic positively assessed the actions of medical services, despite significant delays in providing health services. abstract_id: PUBMED:10138046 A study of Japan's emergency medical care system--emergency transportation and medical care service areas for tuberculosis patients. Since 1988, there has been a noted increase in the use of emergency transportation by tuberculosis patients in Japan. Therefore it is necessary to build a suitable emergency medical care system for these patients. We evaluated the present emergency medical care system available to them in Tokyo. We also studied emergency medical care service areas (MCSAs) to further aid in the revision of the emergency medical care system for tuberculosis. We used data from tuberculous patients who required the use of the emergency medical care system in Tokyo. The data were collected by the Tokyo Fire Department's emergency care information system from 1978 to 1990. We investigated and analyzed transportation and patient data. We also analyzed transportation data from the point of view of municipal districts (wards, cities, towns, villages), MCSAs and the Tokubetsu-ku region.
We concluded that the number of emergency transports for tuberculosis patients has gradually been increasing and that the current MCSA regions were not ideal for use in developing a transportation system for these patients. Using larger areas such as Tokubetsu-ku and Tokyotoka would be more practical. To develop an emergency medical care system for tuberculosis patients in Tokyo, it will be necessary to take into account the Tokubetsu-ku area, the vagrant population within it, and the time required to transport the patients from this area to the Tokyotoka area. abstract_id: PUBMED:37197614 Incapacity to work in prehospital German emergency medical services staff. Background: The determination of incapacity to work is a central approach for analyses of absence due to sickness. Nevertheless, no data are yet available for incapacity to work and associated factors in the German prehospital emergency medical services (EMS) staff. Objective: The aim of this analysis was to identify the proportion of EMS staff with at least one incapacity for work (AU) in the previous 12 months and associated factors. Material And Methods: This was a nationwide survey study with rescue workers. Factors associated with work disability were identified using multivariable logistic regression, calculating odds ratios (OR) and associated 95% confidence intervals (95% CI). Results: Included in this analysis were 2298 employees of the German emergency medical services (female 42.6%, male 57.2%). Overall, 60.10% of female participants and 58.98% of male participants reported an incapacity for work in the previous 12 months. Incapacity for work was significantly associated with having a high school diploma (high school diploma: OR: 0.51, 95% CI 0.30; 0.88, p = 0.016; reference: secondary school diploma), working in a rural environment (OR: 0.65, 95% CI 0.50; 0.86, p = 0.003) or urban environment (OR: 0.72, 95% CI: 0.53; 0.98, p = 0.037).
In contrast, a study assessing the accuracy of predictions made by emergency staff, patients, or relatives found that emergency admissions could be predicted with reasonable accuracy, but the length of stay was difficult to predict (PUBMED:17973623). The studies suggest that while EMS staff can make educated guesses about patient disposition, their predictions are not always accurate, and there is room for improvement in their predictive abilities. Factors such as the nature of the emergency (medical vs. trauma) and the level of EMS staff experience may influence the accuracy of predictions (PUBMED:37525826). It is important to note that the ability to predict patient disposition is crucial for appropriate triage and ensuring that patients receive timely and appropriate care.
Instruction: Are physicians required during winch rescue missions in an Australian helicopter emergency medical service? Abstracts: abstract_id: PUBMED:34719134 Profiling helicopter emergency medical service winch operations involving physicians in Queensland, Australia. Objective: To analyse the mission profiles of helicopter emergency medical service (HEMS) winch operations involving LifeFlight Retrieval Medicine physicians in Queensland, Australia, specifically focusing on patients' clinical characteristics, extrication methods and scene times. Methods: A retrospective analysis was performed to identify all helicopter winch missions involving physicians during 2019. Demographic, clinical and non-clinical data were accessed from an electronic database used to log cases and findings presented using descriptive statistics. Results: Out of 4356 HEMS missions involving physicians, 100 (2.3%) were winch operations. Of these, 31 (31%) occurred overwater and 12 (12%) at night. In total, 106 patients were attended, and eight patient deaths occurred. Most patients were traumatically injured (66%), male (66%) and had a median (interquartile range) age of 43.5 (28-59) years. Thirteen missions (13%) involved drowning victims. This group had a higher burden of injury and comprised half of the patients treated with endotracheal intubation. Median scene time was 30 min (20-40), and the winch stretcher was the predominant patient extrication method. Physician winching occurred in 63 (63%) missions and was associated with increased scene time and increased use of the winch stretcher. Conclusions: Winch operations involving physicians occur infrequently in Queensland HEMS, although almost a third of missions occur overwater. Drowning victims are encountered more frequently than reported elsewhere in Australian HEMS and comprised half of the patients who underwent endotracheal intubation. Patients' severity of illness and injury may contribute to the associations between winching of physicians, increased scene times and increased use of the winch stretcher. abstract_id: PUBMED:25993018 A Paramedic-staffed Helicopter Emergency Medical Service's Response to Winch Missions in Victoria, Australia. Winching emergency medical care providers from a helicopter to the scene enables treatment of patients in otherwise inaccessible locations, but is not without risks. The objective of this study was to define characteristics of winch missions undertaken by Intensive Care Flight Paramedics (ICFP) in Victoria, Australia with a focus on extraction methods and clinical care delivered at the scene. A retrospective data analysis was performed to identify all winch missions between November 2010 and March 2014. Demographic data, winch characteristics, physiological parameters, and interventions undertaken on scene by the ICFP were extracted. Out of 5,003 missions in the study period, 125 were identified as winch operations. Winter missions were significantly less frequent than those of any other season. Patients were predominantly male (78.4%) and had a mean age of 38 years (±17.6). A total of 109 (87.2%) patients were identified as experiencing trauma with a mean Revised Trauma Score of 7.5288, and isolated limb fractures were the most frequently encountered injury. Falls and vehicle-related trauma were the most common mechanisms of injury. The total median scene duration was 49 minutes (IQR 23-91). 
Sixty-three patients (50.4%) were extracted using a stretcher, 45 (36.0%) using a hypothermic strop, and 6 (4.8%) via normal rescue strop. Eleven patients (8.8%) were not winched to the helicopter. Vascular access (38.4%), analgesia (44.0%), and anti-emetic administration (28.8%) were the most frequent clinical interventions. Forty-nine patients (39.2%) did not receive any clinical intervention prior to winch extraction. Winch operations in Victoria, Australia consisted predominantly of patients with minor to moderate traumatic injuries. A significant proportion of patients did not require any clinical treatment prior to winching, and among those who did, analgesia was the most frequent intervention. Advanced medical procedures were rarely required prior to winch extraction. abstract_id: PUBMED:23353665 Are physicians required during winch rescue missions in an Australian helicopter emergency medical service? Background: A helicopter emergency medical service (HEMS) capable of winching offers several advantages over standard rescue operations. Little is known about the benefit of physician winching in addition to a highly trained paramedic. Objective: To analyse the mission profiles and interventions performed during rescues involving the winching of a physician in the Greater Sydney Area HEMS (GSA-HEMS). Methods: All winch missions involving a physician from August 2009 to January 2012 were identified from the prospectively completed GSA-HEMS electronic database. A structured case sheet review for a predetermined list of demographic data and physician-only interventions (POIs) was conducted. Results: We identified 130 missions involving the winching of a physician, of which 120 case sheets were available for analysis. The majority of patients were traumatically injured (90%) and male (85%) with a median age of 37 years. Seven patients were pronounced dead at the scene. A total of 63 POIs were performed on 48 patients. Administration of advanced analgesia was the most common POI making up 68.3% of interventions. Patients with abnormal RTSc(2) scores were more likely to receive a POI than those with normal RTSc(2) (84.8% vs 15.2%; p=0.03). The performance of a POI had no effect on median scene times (45 vs 43 min; p=0.51). Conclusions: Our high POI rate of 40% (48/120) coupled with long rescue times and the occasional severe injuries support the argument for winching Physicians. Not doing so would deny a significant proportion of patients time-critical interventions, advanced analgesia and procedural sedation. abstract_id: PUBMED:30760298 Challenges of helicopter mountain rescue missions by human external cargo: need for physicians onsite and comprehensive training. Background: Human external cargo (HEC) extrication during helicopter rescue missions is commonly used in mountain emergency medical services. Furthermore, longline or winch operations offer the opportunity to deliver professional medical care onsite. As the safety and quality of emergency medical care depends on training and experience, we aimed to investigate characteristics of mountain rescue missions with HEC. Methods: We retrospectively reviewed all rescue missions conducted by Air Zermatt (a commercial rescue service in the high-alpine region of Switzerland) from January 2010 to September 2016. Results: Out of 11,078 rescue missions 1137 (10%) required a HEC rescue. 
In 3% (n = 29) rapid sequence induction and endotracheal intubation, in 2% (n = 14) cardiopulmonary resuscitation, and in 0.4% (n = 3) a chest tube insertion had to be performed onsite prior to HEC extraction. The most common medical intervention onsite is analgesia or analgosedation, in 17% (n = 142) fentanyl or ketamine was used in doses of ≥ 0.2 mg or ≥ 50 mg, respectively. Conclusions: As these interventions have to be performed in challenging terrain, with reduced personnel resources, and limited monitoring, our results point out the need for physicians onsite who are clinically experienced in these procedures and specially and intensively trained for the specific characteristics and challenges of HEC rescue missions. abstract_id: PUBMED:36229384 Rescue Activity of a Civilian Helicopter Emergency Medical Service in the Western Cape, South Africa: A 5-Year Retrospective Review. Introduction: Helicopter search and rescue (SAR) in Africa is conducted primarily by military organizations. Since 2002, the Western Cape of South Africa has had a dedicated contracted civilian helicopter emergency medical service (HEMS) conducting air ambulance, terrestrial, and aquatic rescue. To our knowledge, this is the first description of the operations of an African helicopter rescue service. Methods: A 5-y retrospective review of the terrestrial and aquatic helicopter rescue activity of a civilian-operated HEMS in the Western Cape, South Africa, from January 1, 2012 through December 31, 2016, was conducted. Data were extracted from the organization's operational database, aviation documents, rescue reports, and patient care records. Patient demographics and activity at the time of rescue, temporal and geographical distribution, crewing compositions, patient injury, triage, clinical interventions, and rescue techniques were analyzed. Results: A total of 581 SAR missions were conducted, of which 451 were terrestrial and 130 were aquatic rescues. The highest volume of rescues was conducted within the urban Cape Peninsula. Hoisting using a rescue harness was the most common rescue technique used. A total of 644 patients were rescued, with no or minor injuries representing 79% of the sample. Trauma (33%, 196/644) was the most common medical reason for rescue, with lower limb trauma predominant (15%, 90/644). The most common clinical interventions performed were intravenous access (n=108, 24%), spinal immobilization (n=92, 21%), splinting (n=76, 17%), and analgesia administration (n=58, 13%). Conclusions: The rescue techniques utilized are similar to those described in high-income settings. Uninjured patients comprised the majority of the patients rescued. abstract_id: PUBMED:27620190 Ambulance helicopter contribution to search and rescue in North Norway. Background: Search and rescue (SAR) operations constitute a significant proportion of Norwegian ambulance helicopter missions, and they may limit the service's capacity for medical operations. We compared the relative contribution of the different helicopter resources using a common definition of SAR-operation in order to investigate how the SAR workload had changed over the last years. Methods: We searched the mission databases at the relevant SAR and helicopter emergency medical service (HEMS) bases and the Joint Rescue Coordination Centre (North) for helicopter-supported SAR operations within the potential operation area of the Tromsø HEMS base in 2000-2010. 
We defined SAR operations as missions over land or sea within 10 nautical miles from the coast with an initial search phase, missions with use of rescue hoist or static rope, and avalanche operations. Results: There were 769 requests in 639 different SAR operations, and 600 missions were completed. The number increased during the study period, from 46 in 2000 to 77 in 2010. The Tromsø HEMS contributed with the highest number of missions and experienced the largest increase, from 10 % of the operations in 2000 to 50 % in 2010. Simple terrain and sea operations dominated, and avalanches accounted for as many as 12 % of all missions. The helicopter crews used static rope or rescue hoist in 141 operations. Discussion: We have described all helicopter supported SAR operations in our area by combining databases. The Tromsø HEMS service had taken over one half of the missions by 2010. Increased availability for SAR work is one potential explanation. Conclusions: The number of SAR missions increased during 2000-2010, and the Tromsø HEMS experienced the greatest increase in workload. abstract_id: PUBMED:33540207 Defining the characteristics of physically demanding winch rescue in helicopter search and rescue operations. Helicopter emergency medical service search and rescue (HEMS SAR) paramedics perform physically demanding winch rescues. Rescue organisations require valid physical employment standards (PES) to ensure personnel can adequately perform duties. There are no studies describing validated PES for HEMS SAR. We convened a subject matter expert (SME) focus group to review historical case data and generate task descriptions for land and water winch rescue as the basis for development of task simulations to assess physiological workload. Sixteen helicopter rescue paramedics with a mean age of 47 (range, 36-52) years and 8 (2-20) years' experience in helicopter rescue participated in a SME focus group. When provided with data from historical cases, SMEs achieved consensus (≥80%) when generating descriptions of winch rescue. This method may be useful to develop simulations for assessment of physiological demands of winch rescue and similar tasks, and to enhance validity and reliability of PES for rescue organisations. abstract_id: PUBMED:34172229 Helicopter Rescue Missions for Emergency Medical Aid at Sea: A New Assignment for the French Military Medical Service? Historically in charge of the search and rescue of aircraft in distress alongside military aeronautical crews, the doctors and nurses of the French Military Medical Service (FMMS) also participate in helicopter rescue missions for emergency medical aid at sea off the French metropolitan coast. This public service mission is placed at the heart of force medicine because it can be transposed to external theaters of operations (OPEX). Recent changes in the regulation of medical aid at sea, in particular the creation in 2013 of maritime emergency mobile resuscitation services, initially weakened the role of the teams of the FMMS. However, their 2018 and 2019 activity report reflects the persistence of strong involvement. The "FMMS 2020" model, especially through its "openness" aspect, allows the FMMS the optimal conditions to maintain its position as a key player in this field, particularly through the development of institutional partnerships with the public health service. 
Associated with their mastery of the air-maritime environment, strengthening the skills in emergency medicine of the military medical teams in charge of this mission appears to be one of the main measures ensuring the achievement of this objective and a high quality of care for patients. abstract_id: PUBMED:26674604 Point-of-care ultrasonography during rescue operations on board a Polish Medical Air Rescue helicopter. Point-of-care ultrasound examination has been increasingly widely used in pre-hospital care. The use of ultrasound in rescue medicine allows for a quick differential diagnosis, identification of the most important medical emergencies and immediate introduction of targeted treatment. Performing and interpreting a pre-hospital ultrasound examination can improve the accuracy of diagnosis and thus reduce mortality. The authors' own experiences are presented in this paper, which consist of using a portable, hand-held ultrasound apparatus during rescue operations on board a Polish Medical Air Rescue helicopter. The possibility of using an ultrasound apparatus during helicopter rescue service allows for a full professional evaluation of the patient's health condition and enables the patient to be brought to a center with the most appropriate facilities for their condition. abstract_id: PUBMED:34694962 The physiological demands of helicopter winch rescue in water and over land. Physically demanding water and over land winch rescues are critical tasks for helicopter paramedics. To assess the physiological demands of winch rescue, 14 intensive care flight paramedics (12 male, 2 female, mean (±SD) age 44.3 (±5.4) years, experience 7.1 (±5.2) years) completed land and water-based task simulations. For the land task, VO2 was 41.7 (±4.5) mL kg⁻¹ min⁻¹, or 86 (±11) % of VO2peak. Task duration was 7.0 (±3.6) min, or 53 (±27) % of maximal acceptable work duration (MAWD) (13.2 (±9.0) min). For the water task, VO2 was 36.7 (±4.4) mL kg⁻¹ min⁻¹ (81 (±12) % of VO2peak). Water task duration was 10.2 (±1.1) min, or 47.6 (±4.8) % of calculated MAWD (21.0 (±15.6) min). These results demonstrate that helicopter rescue paramedics work at very high physiological workloads for moderate durations, and these demands should be considered when developing selection tests and when deploying to rescues, to ensure staff are capable of task performance. Practitioner summary: Paramedics performed helicopter winch rescue task simulations in water and over land. Paramedics worked at 81% of VO2peak for 10.2 min and 86% of VO2peak for 7 min for swim and land tasks respectively. Rescue organisations should consider these demands when selecting and credentialing staff and when deploying to incidents. Abbreviations: HEMS: helicopter emergency medical service; ICFP: intensive care flight paramedic; MAWD: maximal acceptable work duration; PES: physical employment standards; SAR: search and rescue. Answer: The question of whether physicians are required during winch rescue missions in an Australian helicopter emergency medical service (HEMS) is addressed in several of the provided abstracts. According to the study on the Greater Sydney Area HEMS (GSA-HEMS), physician-only interventions (POIs) were performed at a high rate during winch missions, with advanced analgesia being the most common intervention. The study found that patients with abnormal RTSc(2) scores were more likely to receive a POI than those with normal scores, and the performance of a POI had no effect on median scene times.
This suggests that a winched physician can deliver critical interventions without prolonging the mission, supporting the argument that not winching physicians would deny a significant proportion of patients time-critical interventions, advanced analgesia, and procedural sedation (PUBMED:23353665). In contrast, a study on winch missions in Victoria, Australia, found that a significant proportion of patients did not require any clinical treatment prior to winching, and among those who did, analgesia was the most frequent intervention. Advanced medical procedures were rarely required prior to winch extraction (PUBMED:25993018). This might imply that a physician is not always necessary, especially in cases involving minor to moderate traumatic injuries. However, the study on helicopter mountain rescue missions by human external cargo highlights the need for physicians onsite who are clinically experienced and specially trained for the specific challenges of rescue missions (PUBMED:30760298). Similarly, the study on winch operations involving LifeFlight Retrieval Medicine physicians in Queensland, Australia, indicates that drowning victims, who comprised half of the patients who underwent endotracheal intubation, were encountered more frequently than reported elsewhere in Australian HEMS, suggesting that the severity of illness and injury may necessitate the presence of a physician (PUBMED:34719134). In summary, the necessity for physicians during winch rescue missions in Australian HEMS appears to depend on the specific circumstances of the mission, including the severity of the patient's condition and the type of interventions required. While some missions may not require advanced medical procedures, others, particularly those involving critically ill or injured patients, may benefit from the expertise of a physician to provide advanced medical care onsite.
Instruction: Is unenhanced CT sufficient for evaluation of acute abdominal pain? Abstracts: abstract_id: PUBMED:26700337 Comparison of Virtual Unenhanced Images Derived From Dual-Energy CT With True Unenhanced Images in Evaluation of Gallstone Disease. Objective: The aim of this study was to compare gallstones on virtual unenhanced images and true unenhanced images acquired with dual-energy CT (DECT). Materials And Methods: We enrolled 112 patients with right upper quadrant pain and clinically suspected acute cholecystitis or gallstone who underwent DECT--including unenhanced, arterial, and portal phases. Eighty-three gallstones with composition proven by semiquantitative Fourier transform infrared spectroscopy from 45 patients who had undergone cholecystectomy (40 cholesterol gallstones from 21 patients, 43 calcium gallstones from 24 patients) were included. CT images were retrospectively evaluated for stone size, contrast-to-noise ratio (CNR) of gallstone to bile, and visibility and density of gallstones for each image set. The visibility of each type of stone was compared with a paired t test. Results: Both cholesterol and calcium stones measured smaller on virtual unenhanced images than on true unenhanced images, yielding a lower sensitivity of virtual unenhanced images for detecting small gallstones. Mean CNR of cholesterol stones was 2.45 ± 1.32 versus 1.67 ± 1.55 (p < 0.032) and that of calcium stones was 10.59 ± 7.15 and 14.11 ± 9.81 (p < 0.001) for virtual unenhanced and true unenhanced images, respectively. For calcium stones, two readers found 43 of 43 (100%) on true unenhanced images; one reader found 41 of 43 (95%) and the other, 37 of 43 (86%) on virtual unenhanced images. For cholesterol stones, one reader found 20 of 40 (50%) and the other 19 of 40 (47%) on true unenhanced images versus 34 of 40 (85%) and 30 of 40 (75%), respectively, on virtual unenhanced images. The visibility of cholesterol stones was higher on virtual unenhanced images, but that of calcium stones was lower. Conclusion: Virtual unenhanced images at DECT allow better visualization of cholesterol gallstones, but true unenhanced images allow better visualization of calcium and small gallstones. abstract_id: PUBMED:12427436 Is unenhanced CT sufficient for evaluation of acute abdominal pain? Background: To determine whether intravenous contrast improves the ability of radiologists to establish the cause of acute abdominal pain after nondiagnostic or normal unenhanced CT. Methods: Out of 164 consecutive emergency department patients presenting with less than 48 h of nontraumatic, acute abdominal pain, a confident diagnosis for cause of pain was made prospectively in 71/164 (43%) patients on these unenhanced scans by the monitoring radiologist. In the other 93 patients, our study sample, intravenous contrast-enhanced CT was obtained. At a later date, retrospectively, two experienced abdominal CT radiologists independently evaluated unenhanced CT scans alone for potential causes of pain and diagnostic confidence level on a 1-3 scale. At least 2 weeks later, intravenous enhanced and unenhanced scans were read side-by-side for the same assessment. Results: There was no significant difference in diagnostic confidence levels comparing unenhanced CT alone (2.59) vs. intravenous enhanced and unenhanced CT together (2.64). Chi-square analysis found no significant difference in finding a cause for pain when intravenous contrast was added compared to the initial unenhanced scan alone. 
Conclusions: Intravenous contrast did not significantly improve the ability of CT to establish a cause of abdominal pain after a negative or nondiagnostic unenhanced CT. abstract_id: PUBMED:27271919 Systematic unenhanced CT for acute abdominal symptoms in the elderly patients improves both emergency department diagnosis and prompt clinical management. Objectives: To assess the added-value of systematic unenhanced abdominal computed tomography (CT) on emergency department (ED) diagnosis and management accuracy compared to current practice, in elderly patients with non-traumatic acute abdominal symptoms. Methods: Institutional review board approval and informed consent were obtained. This prospective study included 401 consecutive patients 75 years of age or older, admitted to the ED with acute abdominal symptoms, and investigated by early systematic unenhanced abdominal CT scan. ED diagnosis and intended management before CT, after unenhanced CT, and after contrast CT if requested, were recorded. Diagnosis and management accuracies were evaluated and compared before CT (clinical strategy) and for two conditional strategies (current practice and systematic unenhanced CT). An expert clinical panel assigned a final diagnosis and management after a 3-month follow-up. Results: Systematic unenhanced CT significantly improved the accurate diagnosis (76.8% to 85%, p=1.1×10⁻⁶) and management (88.5% to 95.8%, p=2.6×10⁻⁶) rates compared to current practice. It allowed diagnosing 30.3% of acute unsuspected pathologies, 3.4% of which required an unexpected surgical procedure. Conclusions: Systematic unenhanced abdominal CT improves ED diagnosis accuracy and appropriate management in elderly patients presenting with acute abdominal symptoms compared to current practice. Key Points: • Systematic unenhanced CT significantly improves diagnostic accuracy compared to current practice. • Systematic unenhanced CT optimizes appropriate hospitalization by increasing the number of discharged patients. • Systematic unenhanced CT allows detection of about one-third of acute unsuspected abdominal conditions. • It should boost emergency department management decision-making confidence in older patients. abstract_id: PUBMED:16183928 Nontraumatic acute abdominal pain: unenhanced helical CT compared with three-view acute abdominal series. Purpose: To prospectively evaluate and compare the diagnostic accuracy of unenhanced helical computed tomography (CT) for patients with nontraumatic acute abdominal pain with that of traditional abdominal radiography. Materials And Methods: Institutional review board approval and informed consent were obtained; this study was completed before implementation of the HIPAA. Ninety-one patients (44 men; 47 women; age range, 18-84 years; mean age, 48.5 years) with acute nontraumatic abdominal pain over a 7-month period were referred by the emergency department of one institution. These patients underwent a three-view acute abdominal series (AAS) and unenhanced helical CT. AAS included an upright chest radiograph and upright and supine abdominal radiographs. Unenhanced helical CT images with 5-mm collimation were obtained from the lung bases to the pubic symphysis, without intravenous, oral, or rectal contrast material. AAS and unenhanced helical CT images were each separately and prospectively interpreted by a different experienced radiologist who was blinded to patient history and the images and interpretation of the other examination for each patient.
Final diagnosis was established with surgical, pathologic, and clinical follow-up. The sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and positive and negative likelihood ratios were calculated for AAS and unenhanced helical CT. Confidence intervals of 95% were calculated for each value with the standard equation for population proportions. Results of AAS and unenhanced helical CT examinations were compared with chi2 analysis. Results: Among the 91 patients examined, unenhanced helical CT yielded an overall sensitivity, specificity, and accuracy of 96.0%, 95.1%, and 95.6%, respectively. The AAS interpretations yielded an overall sensitivity, specificity, and accuracy of 30.0%, 87.8%, and 56.0%, respectively. The accuracy of unenhanced helical CT was significantly greater than the accuracy of AAS (P < .05). Conclusion: AAS is an insensitive technique in the evaluation of nontraumatic acute abdominal pain in adults. Unenhanced helical CT is an accurate technique in the evaluation of adult patients with nontraumatic acute abdominal pain and should be considered as an alternative to radiography as the initial imaging modality. abstract_id: PUBMED:24238213 Late adverse events after enhanced and unenhanced MRI and CT: a prospective study. Prospective evaluation of frequency of late and very late adverse events in patients undergoing enhanced or unenhanced magnetic resonance imaging (MRI) or computed tomography (CT). The imaging procedure was performed according to the protocols of the department. All patients were contacted 3 days after their imaging procedure for the occurrence of late adverse events (LAEs) and a month later for the occurrence of very late adverse events (VLAEs) using a structured questionnaire. A total of 1042 (71%) among 1473 patients completed both the 3-day and 1-month questionnaire. The incidence of LAEs was significantly higher in the enhanced MRI (38%) and CT (27%) groups than unenhanced MRI (20%) and CT (16%) groups. The frequency of nausea, dizziness, abdominal pain and diarrhoea was significantly higher in the enhanced MRI group than in the MRI control group, while taste sensation was reported more frequent in the enhanced CT group than in the CT control group. The incidence of VLAEs was significantly higher in the enhanced CT (21%) than unenhanced CT group (13%). The same LAEs and VLAEs were reported in all four groups. LAEs and VLAEs were reported more often in patients with allergy in both the enhanced and unenhanced groups. The incidence of LAEs and VLAEs was higher in the enhanced groups than in the unenhanced groups. The most reported adverse events in the enhanced groups were also reported in the unenhanced groups. Patients with allergy reported LAE and VLAE more often. This suggests that adverse events may not always be caused by the contrast media. abstract_id: PUBMED:31208938 Unenhanced CT for clinical triage of elderly patients presenting to the emergency department with acute abdominal pain. Purpose: The purpose of this study was to compare the diagnostic accuracy and inter-reader agreement of unenhanced computed tomography (CT) to those of contrast-enhanced CT for triage of patients older than 75years admitted to emergency department (ED) with acute abdominal pain (AAP). Patients And Methods: Two hundred and eight consecutive patients presenting with AAP to the ED who underwent CT with unenhanced and contrast-enhanced images were retrospectively included. 
There were 90 men and 118 women with a mean age of 85.4±4.9 (SD) (range: 75-101.4years). Three readers reviewed unenhanced CT images first, and then unenhanced and contrast-enhanced CT images as a single set. Diagnostic accuracy was compared to the standard of reference defined as the final diagnosis obtained after complete clinico-biological and radiological evaluation. Correctness of the working diagnosis proposed by the ED physician was evaluated. Intra- and inter-reader agreements were calculated using the kappa test and interclass correlation. Subgroup analyses were performed for patients requiring only conservative management and for those requiring intervention. Results: Diagnostic accuracy ranged from 64% (95% CI: 62-66%) to 68% (95% CI: 66-70%) for unenhanced CT, and from 68% (95% CI: 66-70%) to 71% (95% CI: 69-73%) for both unenhanced and contrast-enhanced CT. Contrast-enhanced CT did not significantly improve the diagnostic accuracy (P=0.973-0.979). CT corrected the working diagnosis proposed by the ED physician in 59.1% (range: 58.1-60.0%) and 61.2% (range: 57.6-65.5%) of patients before and after contrast injection (P>0.05). Intra-observer agreement was moderate to substantial (k=0.513-0.711). Inter-reader agreement was substantial for unenhanced (kappa=0.745-0.789) and combined unenhanced and contrast-enhanced CT (kappa=0.745-0.799). Results were similar in subgroup analyses. Conclusion: Unenhanced CT alone is accurate and associated with high degrees of inter-reader agreement for clinical triage of patients older than 75years with AAP in the emergency setting. abstract_id: PUBMED:8456661 Diagnosis of acute appendicitis: value of unenhanced CT. Objective: Two hundred eleven patients with acute pain in the right lower quadrant had CT without oral or IV contrast material. The CT examination required less than 5 min to perform and interpret. We assessed the efficacy of this limited CT examination in identifying patients with acute appendicitis who required emergency laparotomy. Subjects And Methods: Unenhanced CT of the lower abdomen was performed in 211 patients with lower abdominal pain of uncertain origin (130 women and 81 men 4-91 years old). Scans were obtained by using 10-mm collimation at 10-mm intervals from the L3 level to the symphysis pubis without IV or oral contrast material. Twenty-two to 30 images per patient were obtained, depending on the size of the patient. On average, the entire examination took less than 5 min to complete. Prospective diagnoses based on CT findings were compared with surgical results and clinical follow-up. Results: Unenhanced CT was an accurate imaging technique for the initial examination of patients with suspected acute appendicitis. The accuracy was 93%. The sensitivity was 87%, the specificity was 97%, the positive predictive value was 94%, and the negative predictive value was 93%. Conclusion: This study shows that unenhanced CT is a useful test to diagnose appendicitis in patients with acute abdominal pain in the right lower quadrant. abstract_id: PUBMED:12200239 Diagnostic value of unenhanced helical CT in adult patients with suspected acute appendicitis. Clinical diagnosis of appendicitis is usually made on the basis of history, physical examination and laboratory studies. Approximately 30-45% of patients with suspected appendicitis present with atypical clinical and laboratory findings. 
Recently graded compression ultrasound and thin section unenhanced helical CT have been used to establish diagnosis for patients with suspected acute appendicitis. The purpose of this study was to determine the diagnostic accuracy of thin section unenhanced helical CT protocol in adult patients with suspected acute appendicitis. CT scans obtained when patients presented with right lower quadrant pain and the clinical impression was equivocal for appendicitis were evaluated. Of 296 patients referred for CT, 123 patients subsequently underwent surgery. Appendicitis had been correctly predicted in 104 of 108 patients surgically proven to have appendicitis. Unenhanced helical CT in the diagnosis of acute appendicitis had a sensitivity of 96%, specificity of 98%, positive predictive value of 97% and negative predictive value of 98%. If no definite inflammatory changes are detected, on the basis of our experience we recommend that the patient be monitored clinically, and that thin section unenhanced helical CT is the optimal technique to detect acute appendicitis in adult patients. abstract_id: PUBMED:37133836 Diagnostic Accuracy of Unenhanced Computed Tomography for Evaluation of Acute Abdominal Pain in the Emergency Department. Importance: Intravenous (IV) contrast medium is sometimes withheld due to risk of complication or lack of availability in patients undergoing computed tomography (CT) for abdominal pain. The risk from withholding contrast medium is understudied. Objective: To determine the diagnostic accuracy of unenhanced abdominopelvic CT using contemporaneous contrast-enhanced CT as the reference standard in emergency department (ED) patients with acute abdominal pain. Design, Setting, And Participants: This was an institutional review board-approved, multicenter retrospective diagnostic accuracy study of 201 consecutive adult ED patients who underwent dual-energy contrast-enhanced CT for the evaluation of acute abdominal pain from April 1, 2017, through April 22, 2017. Three blinded radiologists interpreted these scans to establish the reference standard by majority rule. IV and oral contrast media were then digitally subtracted using dual-energy techniques. Six different blinded radiologists from 3 institutions (3 specialist faculty and 3 residents) interpreted the resulting unenhanced CT examinations. Participants included a consecutive sample of ED patients with abdominal pain who underwent dual-energy CT. Exposure: Contrast-enhanced and virtual unenhanced CT derived from dual-energy CT. Main Outcome: Diagnostic accuracy of unenhanced CT for primary (ie, principal cause[s] of pain) and actionable secondary (ie, incidental findings requiring management) diagnoses. The Gwet interrater agreement coefficient was calculated. Results: There were 201 included patients (female, 108; male, 93) with a mean age of 50.1 (SD, 20.9) years and mean BMI of 25.5 (SD, 5.4). Overall accuracy of unenhanced CT was 70% (faculty, 68% to 74%; residents, 69% to 70%). Faculty had higher accuracy than residents for primary diagnoses (82% vs 76%; adjusted odds ratio [OR], 1.83; 95% CI, 1.26-2.67; P = .002) but lower accuracy for actionable secondary diagnoses (87% vs 90%; OR, 0.57; 95% CI, 0.35-0.93; P < .001). This was because faculty made fewer false-negative primary diagnoses (38% vs 62%; OR, 0.23; 95% CI, 0.13-0.41; P < .001) but more false-positive actionable secondary diagnoses (63% vs 37%; OR, 2.11, 95% CI, 1.26-3.54; P = .01). False-negative (19%) and false-positive (14%) results were common. 
Interrater agreement for overall accuracy was moderate (Gwet agreement coefficient, 0.58). Conclusion: Unenhanced CT was approximately 30% less accurate than contrast-enhanced CT for evaluating abdominal pain in the ED. This should be balanced with the risk of administering contrast material to patients with risk factors for kidney injury or hypersensitivity reaction. abstract_id: PUBMED:10222515 Unenhanced CT in the evaluation of the acute abdomen: the community hospital experience. The "Great Mimicker," acute appendicitis, has finally found its match with the advent of rapid unenhanced computed tomography (RUCT). With little, if any, operator dependence, RUCT can be performed easily at any facility that has CT capabilities. With only minimal interpreter dependence, the examination is highly accurate in determining which patients with acute abdominal pain require further treatment and expenditure of resources. In this article we describe our experience, since devising the technique in 1991, with over 7,000 RUCT scans done on patients with acute abdominal pain, predominantly in the right lower quadrant. We show how RUCT is extremely useful and accurate, not only in the diagnosis of acute appendicitis, but in many other disease entities that mimic the "Great Mimicker." Answer: The evaluation of unenhanced CT for acute abdominal pain has been studied in various contexts, and the findings suggest that it can be a valuable tool in certain situations, although its sufficiency varies depending on the specific clinical scenario and the type of pathology being considered. One study found that unenhanced CT was not significantly improved by the addition of intravenous contrast in establishing a cause of abdominal pain after a negative or nondiagnostic unenhanced CT, suggesting that unenhanced CT may be sufficient in some cases (PUBMED:12427436). Another study showed that systematic unenhanced CT significantly improved diagnosis accuracy and appropriate management in elderly patients presenting with acute abdominal symptoms compared to current practice (PUBMED:27271919). Similarly, unenhanced CT was found to be an accurate technique in the evaluation of adult patients with nontraumatic acute abdominal pain and was considered as an alternative to radiography as the initial imaging modality (PUBMED:16183928). In the context of gallstone disease, virtual unenhanced images derived from dual-energy CT were found to allow better visualization of cholesterol gallstones compared to true unenhanced images, but true unenhanced images were better for visualizing calcium and small gallstones (PUBMED:26700337). For the diagnosis of acute appendicitis, unenhanced CT was shown to be a useful test with high accuracy (PUBMED:8456661), and another study supported the use of thin-section unenhanced helical CT as an optimal technique for detecting acute appendicitis in adult patients (PUBMED:12200239). However, a study comparing the diagnostic accuracy of unenhanced CT to contrast-enhanced CT for triage of elderly patients with acute abdominal pain found that contrast-enhanced CT did not significantly improve diagnostic accuracy, indicating that unenhanced CT alone is accurate and associated with high degrees of inter-reader agreement for clinical triage in the emergency setting (PUBMED:31208938). 
Conversely, another study reported that unenhanced CT was approximately 30% less accurate than contrast-enhanced CT for evaluating abdominal pain in the emergency department, suggesting that unenhanced CT alone may not be sufficient in certain cases (PUBMED:37133836). In summary, unenhanced CT can be sufficient for the evaluation of acute abdominal pain in certain patient populations and clinical scenarios, but it may fall short for certain pathologies and is less accurate than contrast-enhanced CT in others. The lower accuracy of unenhanced CT should be weighed against the risk of administering contrast material to patients with risk factors for kidney injury or hypersensitivity reaction (PUBMED:37133836). Additionally, the community hospital experience with over 7,000 rapid unenhanced CT scans showed that it is extremely useful and accurate not only in diagnosing acute appendicitis but also in many other disease entities that mimic acute appendicitis (PUBMED:10222515). Overall, while unenhanced CT can be a valuable diagnostic tool for acute abdominal pain, its sufficiency as a standalone modality may depend on the specific clinical requirements, the patient's condition, and the availability of contrast-enhanced imaging options.
Instruction: Do different types of financial support after illness or injury affect socio-economic outcomes? Abstracts: abstract_id: PUBMED:23540372 Do different types of financial support after illness or injury affect socio-economic outcomes? A natural experiment in New Zealand. Background: In New Zealand, people unable to work due to an illness may be eligible for a means-tested benefit whereas injured people are eligible for a wide range of support including earnings-related compensation through the no-fault Accident Compensation Corporation (ACC). The effect of this difference on socio-economic outcomes has not been investigated before. Methods: A comparative cohort study was undertaken of stroke versus injury. Individuals aged 18-64, who had a first-stroke (n = 109) were matched by age, sex and functional impairment with injured individuals (n = 429) participating in the Prospective Outcomes of Injury Study. Data were collected by interview 3.5 and 12 months after stroke or injury. Logistic regression adjusting for the matching variables at 3.5 months, and functional impairment at 12 months, was undertaken. Results: Median personal income declined by 60% over 12 months for the Stroke Group compared to 13% decline in the Injury Group. Decline in income was greater for those in both groups who had a higher income initially, and for those who had not returned to work. The proportion of the Stroke Group reporting 'Fairly low/low' standard of living increased from 8% to 28% and 'Just/not enough' income increased from 35% to 61% whereas the Injury Group increased only from 5% to 12% and 33%-44% respectively. The odds of reporting low standard of living and income inadequacy at 12 months were significantly less for the Injury Group. Despite earnings-related compensation (80% of income), the odds of being back at work were greater for the Injury Group compared to the Stroke Group (Adjusted Odds Ratio 3.1; 95% CI 1.7-5.6). Conclusions: These findings support the conclusions that earnings-related compensation and rehabilitative support, available to injured people via ACC, largely prevents the downward spiral into poverty and ill health. It also appears to enhance return to work though residual confounding cannot be ruled out. abstract_id: PUBMED:23440845 Slum upgrading strategies involving physical environment and infrastructure interventions and their effects on health and socio-economic outcomes. Background: Slums are densely populated, neglected parts of cities where housing and living conditions are exceptionally poor. In situ slum upgrading, at its basic level, involves improving the physical environment of the existing area, such as improving and installing basic infrastructure like water, sanitation, solid waste collection, electricity, storm water drainage, access roads and footpaths, and street lighting, as well as home improvements and securing land tenure. Objectives: To explore the effects of slum upgrading strategies involving physical environment and infrastructure interventions on the health, quality of life and socio-economic wellbeing of urban slum dwellers in low and middle income countries (LMIC). Where reported, data were collected on the perspectives of slum dwellers regarding their needs, preferences for and satisfaction with interventions received. 
Search Methods: We searched for published and unpublished studies in 28 bibliographic databases including multidisciplinary (for example Scopus) and specialist databases covering health, social science, urban planning, environment and LMIC topics. Snowballing techniques included searching websites, journal handsearching, contacting authors and reference list checking. Searches were not restricted by language or publication date. Selection Criteria: We included studies examining the impact of slum upgrading strategies involving physical environment or infrastructure improvements (with or without additional co-interventions) on the health, quality of life and socio-economic wellbeing of LMIC urban slum dwellers. Randomised controlled trials (RCTs), controlled before and after studies (CBAs) and interrupted time series (ITS) were eligible for the main analysis. Controlled studies with only post-intervention data (CPI) and uncontrolled before and after (UBA) studies were included in a separate narrative to examine consistency of results and to supplement evidence gaps in the main analysis. Data Collection And Analysis: Two authors independently extracted data and assessed risk of bias for each study. Differences between the included study interventions and outcomes precluded meta-analysis so the results were presented in a narrative summary with illustrative harvest plots. The body of evidence for outcomes within the main analysis was assessed according to GRADE as very low, low, moderate or high quality. Main Results: We identified 10,488 unique records, with 323 screened as full text. Five studies were included for the main analysis: one RCT with a low risk, two CBAs with a moderate risk and two CBAs with a high risk of bias. Three CBAs evaluated multicomponent slum upgrading strategies. Road paving only was evaluated in one RCT and water supply in one CBA. A total of 3453 households or observations were included within the four studies reporting sample sizes.Most health outcomes in the main studies related to communicable diseases, for which the body of evidence was judged to be low quality. One CBA with a moderate risk of bias found that diarrhoeal incidence was reduced in households which received water connections from a private water company (risk ratio (RR) 0.53; 95% confidence interval (CI) 0.27 to 1.04) and the severity of diarrhoeal episodes (RR 0.48; 95% CI 0.19 to 1.22). There was no effect for duration of diarrhoea. Road paving did not result in changes in parasitic infections or sickness in one RCT. After multicomponent slum upgrading, claims for a waterborne disease as opposed to a non-waterborne disease reduced (RR 0.64; 95% CI 0.27 to 0.98) in one CBA with a high risk of bias but there was no change in sanitation-related mortality in a CBA with a moderate risk of bias.The majority of socio-economic outcomes reported within the main studies related to financial poverty, for which the body of evidence was of very low quality. Results were mixed amongst the main studies; one RCT and two CBAs reported no effect on the income of slum dwellers following slum upgrading. One further CBA found significant reduction in monthly water expenditure (mean difference (MD) -17.11 pesos; 95% CI -32.6 to -1.62). 
One RCT also showed mixed results for employment variables, finding no effect on unemployment levels but increased weekly worked hours (MD 4.68; 95% CI -0.46 to 9.82) and lower risk of residents intending to migrate for work (RR 0.78; 95% CI 0.60 to 1.01). There was no evidence available to assess the impact of slum upgrading on non-communicable diseases or social capital. Maternal and perinatal conditions, infant mortality, nutritional deficiencies, injuries, self-reported quality of life, education and crime were evaluated in one study each. Nine supporting studies were included that measured varying outcomes (6794 households or observations within eight studies reporting sample sizes). One CPI evaluated cement flooring only while three UBAs and five CPIs evaluated multicomponent slum upgrading strategies. All studies but one had a high risk of bias. The studies reinforced main study findings for diarrhoea incidence and water-related expenditure. Findings for parasitic infections and financial poverty were inconsistent with the main studies. In addition, supporting studies reported a number of disparate outcomes that were not evaluated in the main studies. Five supporting studies included some limited information on slum dweller perspectives. They indicated the importance of appropriate siting of facilities, preference for private facilities, delivering synergistic interventions together, and ensuring that infrastructure was fit for purpose and systems were provided for cleaning, maintenance and repair. Authors' Conclusions: A high risk of bias within the included studies, heterogeneity and evidence gaps prevent firm conclusions on the effect of slum upgrading strategies on health and socio-economic wellbeing. The most common health and socio-economic outcomes reported were communicable diseases and indicators of financial poverty. There was a limited but consistent body of evidence to suggest that slum upgrading may reduce the incidence of diarrhoeal diseases and water-related expenditure. The information available on slum dwellers' perspectives provided some insight into barriers and facilitators for successful implementation and maintenance of interventions. The availability and use of reliable, comparable outcome measures to determine the effect of slum upgrading on health, quality of life and socio-economic wellbeing would make a useful contribution to new research in this important area. Given the complexity in delivering slum upgrading, evaluations should look to incorporate process and qualitative information alongside quantitative effectiveness data to determine which particular interventions work (or don't work) and for whom. abstract_id: PUBMED:37125781 Financial toxicity after trauma and acute care surgery: From understanding to action. Abstract: Gains in inpatient survival over the last five decades have shifted the burden of major injuries and surgical emergencies from the acute phase to their long-term sequelae. More attention has been placed on evaluation and optimization of long-term physical and mental health; however, the impact of major injuries and surgical emergencies on long-term financial well-being remains a critical blind spot for clinicians and researchers. The concept of financial toxicity encompasses both the objective financial consequences of illness and medical care as well as patients' subjective financial concerns.
In this review, representatives of the Healthcare Economics Committee from the American Association for the Surgery of Trauma (1) provide a conceptual overview of financial toxicity after trauma or emergency surgery, (2) outline what is known regarding long-term economic outcomes among trauma and emergency surgery patients, (3) explore the bidirectional relationship between financial toxicity and long-term physical and mental health outcomes, (4) highlight policies and programs that may mitigate financial toxicity, and (5) identify the current knowledge gaps and critical next steps for clinicians and researchers engaged in this work. abstract_id: PUBMED:24560090 Financial and employment impacts of serious injury: a qualitative study. Objectives: To explore the financial and employment impacts following serious injury. Design: Semi-structured telephone administered qualitative interviews with purposive sampling and thematic qualitative analysis. Participants: 118 patients (18-81 years) registered by the Victorian State Trauma Registry or Victorian Orthopaedic Trauma Outcomes Registry 12-24 months post-injury. Results: Key findings of the study were that although out-of-pocket treatment costs were generally low, financial hardship was prevalent after hospitalisation for serious injury, and was predominantly experienced by working age patients due to prolonged absences from paid employment. Where participants were financially pressured prior to injury, injury further exacerbated these financial concerns. Reliance on savings and loans and the need to budget carefully to limit financial burden were discussed. Financial implications of loss of income were generally less for those covered by compensation schemes, with non-compensable participants requiring welfare payments due to an inability to earn an income. Most participants reported that the injury had a negative impact on work. Loss of earnings payments from injury compensation schemes and income protection policies, supportive employers, and return to work programs were perceived as key factors in reducing the financial burden of injured participants. Employer-related barriers to return to work included the employer not listening to the needs of the injured participant, not understanding their physical limitations, and placing unrealistic expectations on the injured person. While the financial benefits of compensation schemes were acknowledged, issues accessing entitlements and delays in receiving benefits were commonly reported by participants, suggesting that improvements in scheme processes could have substantial benefits for injured patients. Conclusions: Seriously injured patients commonly experienced substantial financial and work-related impacts of injury. Participants of working age who were unemployed prior to injury, did not have extensive leave accrual at their pre-injury employment, and those not covered by injury compensation schemes or income protection insurance clearly represent participants "at risk" for substantial financial hardship post-injury. Early identification of these patients, and improved provision of information about financial support services, budgeting and work retraining could assist in alleviating financial stress after injury. abstract_id: PUBMED:20407998 Community support systems for farmers who live with disability. To return to farming following the onset of a disabling injury or health condition is predicated on a supportive environment for the farm owner-operator. 
The purpose of this study was to examine barriers and facilitators of returning to work after an injury or acquired disability, and to identify community supports (formal and informal) needed and available to farmers. This was a qualitative study using a participatory action research approach that involved a research team, an advisory group of seven stakeholder participants and semistructured interviews with farmers, their spouses, and service providers in the study communities. A total of 11 farmers and 17 service providers took part in a focus group or were interviewed individually with or without the spouse present. Focus group discussions and interviews were transcribed and analyzed using constant comparison method and team consensus of findings. Themes of barriers and facilitators for return to farming were identified as health care services, financial and economic issues, and "hands-on" assistance. Themes were organized into an ecological conceptual diagram depicting barriers and facilitators in the microsystem of the farm family, the mesosystem of the local farming community and regional health jurisdictions, and the macrosystem of the province and/or nation's social, legislative, and economic realities. Enhancing community support systems for farmers who become disabled as a result of injury or illness will require a multilevel system approach that involves health, financial, and labor resources. abstract_id: PUBMED:11965555 Socio-economic outcome of paraplegia compared to lower limb amputation. Study Design: The socio-economic outcome was retrospectively compared between patients with traumatic lower limb amputation and those with paraplegia due to a spinal cord injury (SCI). Objective: To evaluate the effect of specific rehabilitative procedures on the socio-economic outcome. Setting: Paraplegic Centre at the University Hospital in Zurich, Switzerland. Methods: Accidents occurring in a collective of 1.9 million persons in Switzerland over 3 years were included in the study. Fifty-six persons with unilateral amputation of lower limbs were compared with 54 persons with paraplegia. The follow-up was assessed over 5 years. Results: The following data was obtained in patients with amputations compared to those with SCI (median values): (1) hospital stay: 150 versus 164 days; (2) the work was taken up after 689 versus 616 days; (3) the global average case costs amounted to SFr. 491.000 versus SFr. 606.000 during the first 5 years after injury; (4) the permanent degree of disability was determined to 40% versus 42%. Except for the global costs, all differences were not significant. Conclusions: There are only a few differences between the socio-economic outcomes between patients with lower limb amputations and those with paraplegia. It remains unclear if the highly specialised rehabilitation services provided for patients with SCI accounts for the similarity since individuals with SCI have additional neurogenic dysfunction of the bladder and bowel. Alternatively, the data may suggest a need for greater rehabilitation services for amputees or may reflect a threshold effect in rehabilitation outcomes. abstract_id: PUBMED:23211759 Socio-economic and occupational determinants of work injury absence. Background: Measures of socio-economic position, such as education, occupational class and income, are well-known determinants of ill-health, injury and sickness absence. 
The aim was to analyse socio-economic and occupational determinants of work injury absence and their contribution to overall socio-economic inequalities in all-cause sickness absence. Methods: A register-based follow-up study included municipal employees of the City of Helsinki aged 25-59 years in 2004. The number of participants was 16,471 women and 5033 men. The mean follow-up time was 3.0 years. Education, occupational class and individual income were used as measures of socio-economic position. The main outcome was medically confirmed work injury and all-cause sickness absence of ≥4 days. Inequality indices were calculated using Poisson regression analysis. Results: High education, occupational class and individual income were consistently associated with lower work injury absence among both women and men. The inequalities in work injury absence were larger than in all-cause sickness absence, especially among men, but the contribution to overall socio-economic inequalities was limited. Among women, bus drivers, cooks and hospital attendants had the highest rates of work injuries. Among men, youth mentors, firemen and janitors had the highest rates. Conclusions: Our results indicate that relative socio-economic inequalities in work injury absence are larger than in all-cause sickness absence. Prevention of work injuries provides a source of reducing socio-economic inequalities in health, but their effect is not very large. Prevention of work injuries should be targeted to lower white-collar and manual workers and vulnerable occupations. abstract_id: PUBMED:37805483 Hardship financing, productivity loss, and the economic cost of illness and injury in Cambodia. Background: Financial risk protection is a core dimension of universal health coverage. Hardship financing, defined as borrowing and selling land or assets to pay for healthcare, is a measure of last recourse. Increasing indebtedness and high interest rates, particularly among unregulated money lenders, can lead to a vicious cycle of poverty and exacerbate inequity. Methods: To inform efforts to improve Cambodia's social health protection system we analyze 2019-2020 Cambodia Socio-economic Survey data to assess hardship financing, illness and injury related productivity loss, and estimate related economic impacts. We apply two-stage Instrumental Variable multiple regression to address endogeneity relating to net income. In addition, we calculate a direct economic measure to facilitate the regular monitoring and reporting on the devastating burden of excessive out-of-pocket expenditure for policy makers. Results: More than 98,500 households or 2.7% of the total population resorted to hardship financing over the past year. Factors significantly increasing risk are higher out-of-pocket healthcare expenditures, illness or injury related productivity loss, and spending of savings. The economic burden from annual lost productivity from illness or injury amounts to US$ 459.9 million or 1.7% of GDP. The estimated household economic cost related to hardship financing is US$ 250.8 million or 0.9% of GDP. Conclusions: Such losses can be mitigated with policy measures such as linking a catastrophic health coverage mechanism to the Health Equity Funds, capping interest rates on health-related loans, and using loan guarantees to incentivize microfinance institutions and banks to refinance health-related, high-interest loans from money lenders. 
These measures could strengthen social health protection by enhancing financial risk protection, mitigating vulnerability to the devastating economic effects of health shocks, and reducing inequities. abstract_id: PUBMED:7478724 The effect of aging and duration of disability on long term health outcomes following spinal cord injury. The purpose of the study was to discover the independent and combined effects of age and duration of injury on selected long term health outcomes of 83 spinal cord injured (SCI) men (age range 21-79 years; duration of spinal cord injury 3-52 years). Specifically, the study examined a multivariate model which specified that age, duration, the interaction of age and duration, and level of lesion were related to the following health outcomes: pain, fatigue, functional independence, mobility, illness and symptomatology, perceived overall health, social support, life satisfaction, and economic stability. Age had significant main effects on three outcomes. With increasing age, the sample experienced more fatigue, decreased activity (due to pain), and more overall satisfaction with their lives. Duration of SCI showed significant main effects on two outcomes. As subjects lived longer with their SCI, they felt less financially secure and experienced more symptoms and illnesses. Significant interaction effects of age and duration of SCI were found for two outcomes. Older age combined with longer duration of SCI amplified subjects' perceptions of financial insecurity, and threats to health. The findings send a clear message to service-providers and policy makers about the added vulnerability of older disabled individuals, about the need for extra vigilance in health care issues, and about the necessity of a social-economic safety net for already disadvantaged individuals. abstract_id: PUBMED:35400617 Economic burden of insufficient sleep duration in Canadian adults. Objective: To provide estimates of the health care and productivity costs associated with insufficient sleep duration (<7 hours per night) in Canadian adults. Methods: A prevalence-based approach was used to estimate the economic costs associated with insufficient sleep duration. Estimates relied on 3 pieces of information: (1) the relative risks of health outcomes consistently associated with insufficient sleep duration obtained from recent meta-analyses; (2) the direct (health care) and indirect (health-related losses of productivity) costs of these health outcomes obtained from the Economic Burden of Illness in Canada data; and (3) the prevalence of insufficient sleep duration in Canadian adults obtained from a representative national survey (17.2%). Results: The estimated direct, indirect, and total costs of insufficient sleep duration in Canada in 2020 were $484 million, $18 million, and $502 million, respectively. These values represent 0.5% (direct), 2.7% (indirect), and 0.5% (total) of the overall burden of illness costs for Canada (estimated at $102 billion). The 2 most expensive chronic diseases attributable to insufficient sleep duration were depression ($219 million) and type 2 diabetes ($92 million). The main contributors of these health care costs were related to hospital care (for coronary heart disease and obesity), prescription drugs (for type 2 diabetes and depression), physician care (for hypertension and cognitive disorders), and mortality (for accidents/injuries). 
A 5% decrease in the prevalence of insufficient sleep duration (from 17.2% to 12.2%) in Canadian adults would lead to a yearly savings of $148 million. Conclusions: Insufficient sleep duration is an important contributor to health care spending and health-related losses of productivity in Canada. Studies are needed to test cost-effective sleep health interventions at the population level. Answer: Yes, different types of financial support after illness or injury do affect socio-economic outcomes. A study in New Zealand compared the socio-economic outcomes of individuals who had a stroke (and were eligible for a means-tested benefit) with those who had an injury (and were eligible for earnings-related compensation through the Accident Compensation Corporation, ACC). The study found that the median personal income declined by 60% over 12 months for the stroke group compared to a 13% decline in the injury group. The injury group, which received earnings-related compensation and rehabilitative support through ACC, had significantly less odds of reporting a low standard of living and income inadequacy at 12 months. This group was also more likely to return to work, suggesting that earnings-related compensation and rehabilitative support can prevent a downward spiral into poverty and ill health and enhance return to work (PUBMED:23540372). In the context of slum upgrading strategies, physical environment and infrastructure interventions in low and middle-income countries showed mixed results on socio-economic outcomes. Some studies reported no effect on the income of slum dwellers following slum upgrading, while others found a significant reduction in monthly water expenditure. However, the evidence was not strong enough to draw firm conclusions on the effect of slum upgrading strategies on health and socio-economic wellbeing due to the high risk of bias within the included studies and evidence gaps (PUBMED:23440845). A qualitative study on the financial and employment impacts of serious injury highlighted that financial hardship was prevalent after hospitalization for serious injury, especially among working-age patients due to prolonged absences from paid employment. Compensation schemes and income protection policies were perceived as key factors in reducing the financial burden of injured participants (PUBMED:24560090). Overall, the type and extent of financial support provided after illness or injury can significantly influence socio-economic outcomes, including income, standard of living, and the ability to return to work.
Instruction: Is voluntary vitamin and mineral supplementation associated with better outcome in non-small cell lung cancer patients? Abstracts: abstract_id: PUBMED:15949593 Is voluntary vitamin and mineral supplementation associated with better outcome in non-small cell lung cancer patients? Results from the Mayo Clinic lung cancer cohort. Background: Some previous studies report that 80% of cancer patients take multivitamin and/or mineral supplements. To our knowledge, the consequences of such self-directed supplementation have not been examined previously in non-small cell lung cancer (NSCLC) patients. The goal of this study was to determine whether vitamin/mineral supplementation is associated with improved survival and quality of life in a cohort of NSCLC patients. Methods: NSCLC patients or their proxies who responded to a questionnaire on vitamin/mineral use were assessed for survival and quality of life. Results: A total of 1129 patients or their proxies responded to a vitamin/mineral questionnaire. Seven hundred and fourteen were vitamin/mineral users of either multivitamins or other specific vitamin/mineral supplements, and the rest non-users. Median survival was 4.3 years versus 2.0 years for vitamin/mineral users and non-users, respectively. A Cox proportional hazards model showed a relative risk of death of 0.74 (95% confidence interval (CI): 0.44, 0.65) (p < 0.01) in favor of vitamin/mineral use after adjustment for multiple prognostic factors, including tumor stage. The Lung Cancer Symptom Scale (LCSS) showed better quality of life among vitamin/mineral users (mean difference in score of 3 (95% CI: 0.8, 5.1) (p < 0.01); and after adjusting for related variables, there remained a trend in favor of vitamin/mineral use mean difference 1.8 (95% CI: 0.2, 3.9) (p = 0.08). Conclusions: Vitamin/mineral supplementation is associated with better survival and quality of life in this cohort of NSCLC patients. Future prospective clinical trials should focus on the role of such supplements in patients with NSCLC. abstract_id: PUBMED:21835895 Prognostic role of vitamin d status and efficacy of vitamin D supplementation in cancer patients: a systematic review. Background: Whether or not hypovitaminosis D can influence the prognosis of cancer patients and whether or not vitamin D (vitD) supplementation improves outcome remain controversial. Design: Studies evaluating the prognostic role of vitD and vitD receptor (VDR) in cancer patients and trials evaluating the efficacy of vitD administration on patient outcome were identified by a search of MEDLINE, EMBASE, ISI Web of Knowledge, and the Cochrane Library through June 2010. Results: Twenty-five studies were included. A negative prognostic role for low serum vitD level was observed in five cohort studies including patients with breast cancer (one study), colon cancer (two studies), prostate cancer (one study), and melanoma (one study), but not in two studies on non-small cell lung cancer and one study on breast cancer. Three of four studies showed that VDR(+) tumors carry a better prognosis than VDR(-) tumors, whereas VDR polymorphisms were significantly associated with prognosis in five of 10 studies. A significant interaction between serum vitD level and VDR polymorphism was observed in one study. Three randomized trials involving advanced prostate cancer patients explored the prognostic role of vitD supplementation. 
A meta-analysis of these trials showed no effect on survival (pooled risk ratio, 1.07; 95% confidence interval, CI, 0.93-1.23), with strong heterogeneity among studies. Conclusion: Hypovitaminosis D seems to be associated with a worse prognosis in some cancers, but vitD supplementation failed to demonstrate a benefit in prostate cancer patients. The currently available evidence is insufficient to recommend vitD supplementation in cancer patients in clinical practice. abstract_id: PUBMED:25260366 A prospective study of shortened vitamin supplementation prior to cisplatin-pemetrexed therapy for non-small cell lung cancer. Background: Prior supplementation with folic acid and vitamin B12 is required to reduce pemetrexed therapy toxicity; the recommended lead-in time is at least 7 days. On the basis of previous pharmacokinetic and clinical studies, we hypothesized that the lead-in time could be shortened to 24 hours, enabling earlier commencement of standard chemotherapy; thus, we planned the first prospective trial of this regimen. Methods: Patients with advanced nonsquamous non-small cell lung cancer who had not previously received cytotoxic chemotherapy were enrolled. After measurement of homocysteine concentrations, the patients received 1,000 μg of vitamin B12 by intramuscular injection and began taking 350-500 μg of oral folic acid daily. Starting 24-48 hours after the vitamin B12 injection, the patients received intravenous 500 mg/m(2) pemetrexed and 75 mg/m(2) cisplatin for 4 cycles at 3 weekly intervals. The primary endpoint was the proportion of patients who developed neutropenia grade ≥3. Results: Thirty patients received chemotherapy starting within 48 hours of the vitamin B12 injection. No treatment-related deaths or grade 4 toxicity occurred. Neutropenia grade ≥3, other laboratory toxicities grade ≥3, and nonlaboratory toxicities grade ≥3 occurred in 6.7%, 13%, and 13% of patients, respectively. The baseline homocysteine concentrations were not higher in patients with grade ≥3 toxicities than in the remainder of the cohort (mean values, 8.6 and 10.7 μmol/L, respectively). The response rate to chemotherapy was 43%. Conclusion: The shortened vitamin supplementation was well tolerated and retained antitumor efficacy. Analysis of baseline homocysteine concentrations confirmed the efficacy of short-term vitamin supplementation. abstract_id: PUBMED:28073680 Rationale and Design of PEMVITASTART-An Open-label Randomized Trial Comparing Simultaneous Versus Standard Initiation of Vitamin B12 and Folate Supplementation in Nonsquamous, Non-Small-cell Lung Cancer Patients Undergoing First-line Pemetrexed-based Chemotherapy. Pemetrexed is the preferred chemotherapeutic drug for nonsquamous, non-small-cell lung cancer patients whenever the predictive molecular biomarkers for targeted therapy have either not been assessed or are absent. As per manufacturers' instructions, supplementation with folic acid (FA; folate) at a dose of 350 to 1000 μg daily should be started seven days before the first dose of pemetrexed-based chemotherapy and continued during therapy and for 21 days after therapy cessation. Vitamin B12 injections (1000 μg intramuscularly) should also be started one week before the first dose of chemotherapy. However, the evidence for delaying chemotherapy by one week for the purpose of providing vitamin B12 and FA supplementation is not robust. 
Observational and prospective single-arm studies have not shown any increased toxicity if pemetrexed was started earlier than the recommended duration of supplementation. In a resource-constrained setting, the standard (conventional) approach would lead to one additional visit and a 1-week chemotherapy delay, both of which could be inconvenient for patients. Hence, an open-label, randomized trial (PEMVITASTART [Vitamin Supplementation in NSCLC Patients on Pemetrexed Based Chemotherapy]; ClinicalTrials.gov identifier, NCT02679443) is being undertaken to evaluate whether any differences exist in pemetrexed-related hematologic toxicity among patients who receive delayed initiation of chemotherapy (after 5-7 days of vitamin B12 and FA supplementation [delayed arm]) compared with those for whom vitamin B12 and FA supplementation is started simultaneously with (within 24 hours of) chemotherapy initiation (immediate arm). The present report describes the rationale and detailed design of the PEMVITASTART trial. abstract_id: PUBMED:23683535 The optimal duration of vitamin supplementation prior to the first dose of pemetrexed in patients with non-small-cell lung cancer. Although folic acid and vitamin B12 supplements are recommended during pemetrexed therapy, the optimal duration for supplementation prior to the first dose of pemetrexed has not been defined. We analyzed adverse events during the first cycle of pemetrexed therapy in 350 patients with advanced non-small-cell lung cancer (NSCLC) who had received pemetrexed monotherapy. Patients were divided into two groups: group A and group B included patients who started vitamin supplements 5-14 days versus within 4 days before the first dose of pemetrexed, respectively. Groups A and B included 294 (84.0%) and 56 (16.0%) patients, respectively. The median number of cycles of pemetrexed was three in both groups. Patients in groups A and B showed similar rates of leukopenia (6.1% vs. 5.4%, respectively, P = 1.00), neutropenia (5.1% vs. 3.6%, P = 1.00), thrombocytopenia (3.1% vs. 7.1%, P = 0.14), neutropenic fever (0.7% vs. 0%, P = 1.00), fatigue (20.1% vs. 19.6%, P = 0.94), and anorexia (15.0% vs. 21.4%, P = 0.23) during the first cycle of pemetrexed therapy. There were no significant differences in terms of hospitalization (4.4% vs. 5.4%, P = 0.73) or unscheduled visits due to pemetrexed-related adverse events (8.2% vs. 12.5%, P = 0.31) between groups A and B, respectively. Multivariate logistic regression analysis demonstrated that an age of ≥ 65 years (odds ratio, 3.49; 95% CI 1.12-10.86) and poor performance status (odds ratio, 3.96; 95% CI 1.12-14.03) were statistically significant predictive factors for grade 3 or 4 hematologic toxicity. The duration of vitamin supplementation before the first dose of pemetrexed did not affect the development of pemetrexed-related toxicities, suggesting that the initiation of pemetrexed-based chemotherapy does not have to be delayed to accommodate a vitamin supplementation schedule. abstract_id: PUBMED:30018118 Vitamin D Supplementation and Survival of Patients with Non-small Cell Lung Cancer: A Randomized, Double-Blind, Placebo-Controlled Trial. Purpose: Higher serum 25-hydroxyvitamin D (25(OH)D) levels are reportedly associated with better survival in early-stage non-small cell lung cancer (NSCLC).
Therefore, whether vitamin D supplementation can improve the prognosis of patients with NSCLC was examined (UMIN000001869). Patients and Methods: A randomized, double-blind trial comparing vitamin D supplements (1,200 IU/day) with placebo for 1 year after operation was conducted. The primary and secondary outcomes were relapse-free survival (RFS) and overall survival (OS), respectively. Prespecified subgroup analyses were performed with stratification by stage (early vs. advanced), pathology (adenocarcinoma vs. others), and 25(OH)D levels (low, <20 ng/mL vs. high, ≥20 ng/mL). Polymorphisms of vitamin D receptor (VDR) and vitamin D-binding protein (DBP) and survival were also examined. Results: Patients with NSCLC (n = 155) were randomly assigned to receive vitamin D (n = 77) or placebo (n = 78) and followed for a median of 3.3 years. Relapse and death occurred in 40 (28%) and 24 (17%) patients, respectively. In the total study population, no significant difference in either RFS or OS was seen with vitamin D compared with the placebo group. However, by restricting the analysis to the subgroup with early-stage adenocarcinoma with low 25(OH)D, the vitamin D group showed significantly better 5-year RFS (86% vs. 50%, P = 0.04) and OS (91% vs. 48%, P = 0.02) than the placebo group. Among the examined polymorphisms, DBP1 (rs7041) TT and CDX2 (rs11568820) AA/AG genotypes were markers of better prognosis, even with multivariate adjustment. Conclusions: In patients with NSCLC, vitamin D supplementation may improve survival of patients with early-stage lung adenocarcinoma with lower 25(OH)D levels. Clin Cancer Res; 24(17); 4089-97. ©2018 AACR. abstract_id: PUBMED:31765437 A prospective observational study of on-treatment plasma homocysteine levels as a biomarker of toxicity, depression and vitamin supplementation lead-in time pre pemetrexed, in patients with non-small cell lung cancer and malignant mesothelioma. Objectives: Vitamin supplementation reduces pemetrexed toxicity. Raised plasma homocysteine reflects deficiency in vitamin B12 and folate, and is suppressed by supplementation. This observational study of 112 patients receiving pemetrexed-based chemotherapy assessed homocysteine levels after 3 weeks of vitamin supplementation, hypothesising high levels would correlate with ongoing deficiency, thus increased toxicity. Material And Methods: Primary endpoint was the composite of proportion of patients with treatment delay/ dose reduction/ drug change or hospitalisation during the first six weeks of chemotherapy, comparing those with normal plasma homocysteine (successfully supplemented, SS) and those with high homocysteine (unsuccessfully supplemented, USS). Secondary endpoints included toxicity and analyses for depression. Post-hoc analysis examined correlation between interval of vitamin and folate supplementation and pemetrexed on primary endpoint and grade 3-4 toxicities. Results: Eighty-four patients (84%) were successfully supplemented (SS group). The proportion of patients undergoing a treatment delay/ dose reduction/ drug change or hospitalisation in SS group was 44.0% (95% confidence interval [CI] 33.2%-55.3%) and in USS group was 18.8% (95% CI 4.0%-45.6%) (p = 0.09). Twelve percent of patients gave a past history of depression; however, 66% of patients had an on-study Hospital Anxiety and Depression (HAD) score of >7. Supplementation status was not associated with depression.
The median overall survival (OS) was 11.8 months (95% CI 8.6-16.5) in the SS group and 8.8 months (95% CI 6.6-16.2) in the USS group (p = 0.5). The number of days (<7 or ≥ 7 days) between vitamin B12 and folate initiation and pemetrexed administration had no effect on the primary endpoint and grade 3-4 toxicities. Conclusion: On-treatment homocysteine levels were not a biomarker of toxicity or depression. Standard vitamin supplementation is adequate in the majority of patients receiving pemetrexed. High HAD scores were noted in this population, giving an opportunity for mental health intervention. The lead-in time for vitamin supplementation can be short. abstract_id: PUBMED:30972274 Repurposing vitamin D for treatment of human malignancies via targeting tumor microenvironment. Tumor cells along with a small proportion of cancer stem cells exist in a stromal microenvironment consisting of vasculature, cancer-associated fibroblasts, immune cells and extracellular components. Recent epidemiological and clinical studies strongly support that vitamin D supplementation is associated with reduced cancer risk and favorable prognosis. Experimental results suggest that vitamin D not only suppresses cancer cells, but also regulates tumor microenvironment to facilitate tumor repression. In this review, we have outlined the current knowledge on epidemiological studies and clinical trials of vitamin D. Notably, we summarized and discussed the anticancer action of vitamin D in cancer cells, cancer stem cells and stroma cells in tumor microenvironment, providing a better understanding of the role of vitamin D in cancer. We presently re-propose vitamin D to be a novel and economical anticancer agent. abstract_id: PUBMED:30143457 Pemetrexed, Vitamin B12, and Thoracic Tumors: The Times, They Are A-Changin'. The administration of pemetrexed requires routine supplementation with vitamin B12 and folate, even if blood concentrations are normal, in order to mitigate its hematologic toxicity. Emerging data suggest that such premedication can be initiated less than 1 week before starting chemotherapy. The current available data on later administration of vitamin B12 in patients with thoracic tumors are placed into a general context, and the possible role of such strategy in the era of immunooncology is discussed. abstract_id: PUBMED:25963695 Safety of pemetrexed according to the duration of vitamin B12 and folic acid supplementation prior to the first dose of pemetrexed. Objective: Administration of vitamin B12 and folic acid for 7 days prior to the administration of the first dose of pemetrexed is recommended. However, vitamin supplementation rarely is initiated less than 7 days prior to the first dose of pemetrexed. Therefore, we analyzed the safety of pemetrexed with vitamin supplementation for less than 7 days prior to the first dose of pemetrexed. Methods: Patients were classified into 2 groups according to the duration of vitamin supplementation prior to the first dose of pemetrexed: group A received vitamin supplementation for 7 days or more, and group B received vitamin supplementation for less than 7 days. We analyzed adverse effects, such as myelosuppression, rash, and diarrhea, after 1 cycle of pemetrexed therapy. Results: A total of 70 patients were administered pemetrexed; 40 patients were men and 30 were women with a median age of 64.5 years (range, 43-86 years). A total of 57 patients were classified into group A and 13 into group B; 33 patients were administered pemetrexed as a first-line treatment.
Neutropenia of Grade 3 or more was observed in 4/49 (8.2%) patients in group A and 2/13 (15.4%) patients in group B (p=0.60). There were no significant differences in the rates of occurrence of neutropenia, rash, and diarrhea. Conclusion: This retrospective study indicated that patients could be safely treated with pemetrexed if vitamin supplementation is initiated for less than 7 days prior to the first administration of pemetrexed. However, further studies are needed because of a lack of statistical power and adjustment for confounding factors. Answer: Yes, voluntary vitamin and mineral supplementation is associated with better outcomes in non-small cell lung cancer (NSCLC) patients. A study from the Mayo Clinic lung cancer cohort found that NSCLC patients who used vitamins and minerals had a median survival of 4.3 years compared to 2.0 years for non-users. After adjusting for multiple prognostic factors, including tumor stage, vitamin/mineral users had a relative risk of death of 0.74, indicating a 26% reduction in the risk of death. Additionally, the Lung Cancer Symptom Scale (LCSS) showed better quality of life among vitamin/mineral users, with a mean difference in score of 3, which remained a trend in favor of vitamin/mineral use even after adjusting for related variables (PUBMED:15949593). However, it is important to note that the evidence for vitamin D supplementation specifically is less clear. A systematic review found that while low serum vitamin D levels were associated with a worse prognosis in some cancers, vitamin D supplementation did not demonstrate a benefit in prostate cancer patients. The review concluded that the currently available evidence is insufficient to recommend vitamin D supplementation in cancer patients in clinical practice (PUBMED:21835895). In the context of NSCLC, a randomized, double-blind, placebo-controlled trial found that vitamin D supplementation may improve survival of patients with early-stage lung adenocarcinoma with lower 25(OH)D levels, suggesting a potential benefit in a specific subgroup of NSCLC patients (PUBMED:30018118). Overall, while there is some evidence supporting the association of vitamin and mineral supplementation with better outcomes in NSCLC patients, the evidence is not uniform across all studies and types of supplements. Further research, particularly prospective clinical trials, is needed to clarify the role of such supplements in NSCLC patients.
Instruction: Anterior hypospadias. Is repair necessary with urination in a sitting or squatting position? Abstracts: abstract_id: PUBMED:11533820 Anterior hypospadias. Is repair necessary with urination in a sitting or squatting position? Objective: To determine the chronic complication rate of anterior hypospadias repair and to explore whether the practice of placing the neomeatus at the tip of the penis should be applicable to all cases in our community where urination is in a sitting/squatting position. Methods: Over a 10-year period commencing 1st September 1987, 312 patients had hypospadias repair of whom 72% had anterior hypospadias. The meatus was advanced to the tip of the penis in all repairs. The location of the meatus was also determined in 281 non-complaining men with a straight penis and normal sexual and reproductive functions. Following prior information that anterior hypospadias was not associated with sexual and reproductive dysfunction, 51 patients were given a choice between repair or no repair. Results: Urethrocutaneous fistula occurred in 5% of patients, urethral stricture in 3% and meatal retraction in 3%, with 92% of patients having no complications. Forty six percent of non-complaining men had the meatus in locations other than the tip of the penis. Of 51 patients with the benefit of informed consent, 73% opted for no repair. Conclusion: Our results of anterior hypospadias repair compare favourably with those of other centers. Placement of the meatus at the tip of the penis for anterior hypospadias should not be applicable to all patients in this community where urination is in a sitting/squatting position. Before such repairs, an informed consent is warranted by making the patients and their parents aware of the non-association of sexual and reproductive disorders with these anomalies. abstract_id: PUBMED:4009796 Assessment of urinary stream after repair of anterior hypospadias by meatoplasty and glanuloplasty. Voiding function was studied after repair of anterior hypospadias in 44 boys using Duckett's technique of meatoplasty and glanuloplasty. Assessment of the urinary stream was done in 2 ways. At least 6 weeks postoperatively a uroflow measurement using a DISA rotating disk machine was done. Peak flow was compared to previously prepared normal values in a large group of normal boys. Then, a high speed picture of the stream was taken using electronic video photography at the time of peak flow. Of the boys 42 had normal flow rates and 40 had acceptable stream patterns. The only complication was meatal stenosis in 2 cases. There were no fistulas. Although the primary indication for correction of anterior hypospadias is cosmetic, it is essential that function not be sacrificed to achieve this end. This study indicates that voiding function is maintained after hypospadias repair with meatoplasty and glanuloplasty. abstract_id: PUBMED:31672475 Assessment of the effect of urethral plate width on outcome of hypospadias repair. Background: The paradigm for hypospadias repair is a straight penis with a vertical meatus at the tip of the glans that provides satisfactory urination and is cosmetically acceptable to the parents of the patient. Objective: To determine the cosmetic and functional outcomes of hypospadias repair in relation to the width of the urethral plate. Patients And Methods: This study was a prospective evaluation of patients operated for hypospadias. The urethral plate width (UPW) of the patients were measured preoperatively using vernier calipers. 
The patients were categorized into groups A and B. Group A patients had a UPW <8 mm, whereas group B patients had a UPW ≥8 mm. The width of the urethral plate was correlated to the cosmetic outcome (using hypospadias objective penile evaluation [HOPE]) and functional outcome (using the urinary stream) of hypospadias repair. Results: Overall, 47 patients had their hypospadias repaired during the study period. However, only 42 patients who had their distal hypospadias repaired using tubularized incised plate urethroplasty were evaluated. There were 20 patients (47.6%) in group A and 22 patients (52.4%) in group B, with a mean urethral plate width of 7.3 mm ± 0.50 SD. The mean UPW in group A was 5.6 mm ± 1.22 SD and the mean UPW in group B was 8.8 mm ± 0.88 SD. Overall, mean HOPE score was 40.0 ± 6.83 SD. Group A patients had a mean HOPE score of 38.7 ± 7.49 SD, whereas Group B patients had a mean HOPE score of 41.2 ± 6.08 SD. P-value was 0.725, which is not statistically significant. Relating good urinary stream (15 in group A and 20 in group B) with the width of the urethral plate statistically (using Spearman correlation technique) gave a P-value of 0.03 (P < 0.05), which is statistically significant. Conclusion: Our findings indicate that the cosmetic outcome of hypospadias repair may not be determined by UPW, but the functional outcome may be predicted by the width of the urethral plate. abstract_id: PUBMED:6498392 Anterior urethral advancement: a one-stage technique for hypospadias repair. This paper presents a new technique of repairing congenital hypospadias by a one-stage method using the principle of advancement of the anterior urethra. It can also be applied in the secondary repair of urethral strictures or fistulae. No urinary diversion is necessary. It eliminates the possibility of post-operative urethral stricture or fistula formation which are frequent complications of the more classical procedures and the repair can be completed in one stage. The clinical details of 16 patients in whom this technique was successfully used are presented. abstract_id: PUBMED:31096965 Outcome of staged buccal mucosal graft for repair of long segment anterior urethral stricture. Background: Long anterior urethral stricture due to variable etiological factors constitutes a challenge for reconstruction. We evaluated our center's experience with cases of long anterior urethral stricture due to different etiologies that were managed by 2-stage substitution urethroplasty using buccal mucosal graft procedure. Methods: During the period between November 2009 and November 2016, all cases with long anterior urethral stricture that were planned for substitution urethroplasty in our department were enrolled in this study. The first stage was excision of most fibrotic areas of the urethral plate; the remainder of the urethra was laid open and augmented with buccal mucosal graft for second stage closure after 6-9 months. Results: The study included 123 patients who underwent the first stage, 105 of whom underwent second stage urethroplasty. Eighteen cases were missed after first stage. The mean (range) age was 38.4 (17-60 years). The mean (range) stricture length was 8.3 (4-13 cm). The cause of stricture was idiopathic in 47, inflammatory in 15, lichen sclerosus in 26 and post failed hypospadias repair in 35 patients.
First stage was complicated by graft contracture in 11 (8.9%) patients that needed re-grafting, 5 (4.1%) patients had bleeding from the buccal mucosa site that needed haemostatic sutures, and oral numbness was reported in 7 (5.7%) patients. Second stage was complicated by wound dehiscence in 2 (1.9%) patients, restricture in 11 (10.5%), fistula in 6 (5.7%) patients, meatal stenosis in 3 (2.9%). The overall success rate was 79.1% (83 cases out of 105) with a mean (range) follow-up of 34.7 (10-58 months). Conclusions: Staged urethroplasty using buccal mucosal graft procedure is an effective surgical option for patients with long anterior urethral strictures especially for patients with lichen sclerosus and those with failed previous surgical repair. abstract_id: PUBMED:16624194 Modification of Mathieu procedure to repair anterior hypospadias. Objective: To assess the results of one-stage repair of anterior hypospadias using modified Mathieu procedure. Design: An interventional study. Place And Duration Of Study: Department of Paediatric Surgery, Dow Medical College and Civil Hospital Karachi, from September 1997 to December 2003. Patients And Methods: All the patients with anterior hypospadias without or with minimal chordee were included. Patients with hypoplastic urethra or re-do surgery were excluded. Modified Mathieu repair was performed in all cases. The modifications used were interrupted inverting sutures of 5/0 polyglycolic acid for urethroplasty; neourethra covering with subcutaneous (dartos) pedicle flap, developed from the preputial or penile shaft skin and dorsal / anterior meatotomy for meatal stenosis. Results: A total of 69 patients with anterior hypospadias were operated using modified Mathieu's repair. Only two patients (2.89%) developed urethrocutaneous fistula. Five patients developed partial devitalisation of the covering skin while one patient had complete breakdown. Conclusion: Modified Mathieu's procedure is an effective method to repair anterior hypospadias without or with minimal chordee. abstract_id: PUBMED:10079344 Analysis of meatal location in 1,244 healthy men. Definition of the normal site justifies the need for meatal advancement in pediatric anterior hypospadias cases. The normal site of the external urethral meatus on the glans penis and the need for meatal advancement in patients with anterior hypospadias was studied. The location of the external meatus was analyzed in 1,244 men (mean age 28 years) with classification of the meatal position in relation to the tip of the glans and corona. The quality of erections and sexual intercourse, the presence of a penile curvature, urinary stream, and ability to void in a standing position were assessed in an interview. In 1,198 men (96.3%) the meatus was located on the distal third of the glans, in 43 (3.5%) on the middle third, (B), and in 3 (0.2%) on the posterior third. In no case was it located below the corona. One of the 3 men with the meatus on the posterior third had an associated mild penile curvature that did not cause difficulty in sexual intercourse. This study suggests that the normal site of the external meatus is at the tip of the glans. The definition of the normal site and the percentage of men with a normal meatal position justifies the need for meatal advancement in patients with anterior hypospadias when the goals of current hypospadias surgery are considered. abstract_id: PUBMED:17628902 Anterior hypospadias: one stage repair.
Report of 40 cases. Introduction: Anterior hypospadias accounts for an estimated 60-71% of all hypospadias. Its treatment is delicate and prone to repeat procedures. The aim of this study was to review the therapeutic aspects of anterior hypospadias treated primarily by one-stage repair in two urology departments in Dakar. Patients And Methods: This is a retrospective study reporting 40 cases of anterior hypospadias operated on between January 1988 and December 2002. Therapeutic results were judged according to the following criteria: quality of orthoplasty, cosmetic aspect, position of the neo-urethral meatus, patency of the urethra, and presence or absence of fistula. The results were classified as good, fair or poor. Results: The following techniques were used: Mathieu (18 cases), Duplay (14), MAGPI (Meatal Advancement and Glanuloplasty Incorporated) (2 cases), Mustarde (2 cases), TGUA (Trans Glandular Urethral Advancement) (4 cases). The mean age of patients was 7.2 years (5 months-19 years). The hypospadias was glanular (5%), subcoronal (25%), and anterior penile (70%). Results were good in 62.5% of cases and poor in 20% of cases. Conclusion: In our practice, improving the treatment of anterior hypospadias requires microsurgical equipment and a modified mode of urinary drainage. abstract_id: PUBMED:28670672 Anterior Urethral Advancement as a Single-Stage Technique for Repair of Anterior Hypospadias: Our Experience. Purpose: Many techniques have been described to correct anterior hypospadias with variable results. Anterior urethral advancement as one stage technique was first described by Ti Chang Shing in 1984. It was also used for the repair of strictures and urethrocutaneous fistulae involving distal urethra. We report our experience of using this technique with some modification for the repair of anterior hypospadias. Materials And Methods: In the period between 2013-2015, 20 cases with anterior hypospadias including 2 cases of glanular, 3 cases of coronal, 12 cases of subcoronal and 3 cases of distal penile hypospadias were treated with anterior urethral advancement technique. Patients' age groups ranged from 18 months to 10 years. Postoperatively, patients were passing urine from tip of neomeatus with satisfactory stream during follow up period of 6 months to 2 years. Results: There were no major complications in any of our patients except in one patient who developed meatal stenosis which was treated by periodic dilatation. Three fold urethral mobilization was sufficient in all cases. Conclusion: Anterior urethral advancement technique is a single-stage procedure with good cosmetic results and least complications for anterior hypospadias repair in properly selected cases. abstract_id: PUBMED:16148653 2-stage repair in infancy for severe hypospadias with chordee: long-term results after puberty. Purpose: Urinary and sexual functions were assessed in post-pubescent boys who had undergone 2-stage hypospadias repair in infancy for severe hypospadias with chordee. Materials And Methods: A total of 44 boys who had undergone 2-stage hypospadias repair from 1985 to 1993 and who were at least 13 years old were contacted. Of the 44 boys 27 (61%) with an average age of 15.4 years (range 13 to 21) responded. Meatal locations were midshaft in 14 cases, penoscrotal in 9 and perineal in 4. Four boys had bifid scrotum and 5 had intersex disorders. Intramuscular testosterone was administered preoperatively to 15 (56%) boys. A Nesbit procedure was performed in 18 boys (67%).
Average patient age at stage 2 repair was 2.3 years. Mean followup was 12.7 years (range 10.7 to 17.2). Additional surgery was performed for diverticuli in 5 cases, fistula in 3 and minor strictures in 4. Of the 27 patients 25 presented for examination and 2 responded to questionnaire only. Results: All patients had normal meatal position, normal glanular anatomy, a well-defined coronal sulcus, normal cylindrical shafts without extra skin and well-defined penoscrotal junctions. Ten boys (40%) had minor spraying of stream, all stood to void and 10 (40%) milked the urethra after voiding. None had chordee. Twenty patients were able to ejaculate and 9 (42.9%) had to milk the ejaculate. Two patients (7.7%) had minor pain with erection. All subjects were satisfied with urinary, erectile and ejaculatory functions, and 23 (92%) were pleased with appearance. Conclusions: The 2-stage approach for severe hypospadias results in excellent function, cosmesis and patient satisfaction after puberty, with no chordee. Minor voiding and ejaculatory problems are to be expected. Late complications are rare. The use of extragenital skin to either primarily repair or salvage a "cripple" has not been necessary. Answer: The necessity of anterior hypospadias repair when urination is performed in a sitting or squatting position has been questioned in the literature. A study conducted over a 10-year period, which included 312 patients with hypospadias repair, found that 92% of patients had no complications post-repair, and 46% of non-complaining men had the meatus in locations other than the tip of the penis. When given a choice, 73% of patients opted for no repair after being informed that anterior hypospadias was not associated with sexual and reproductive dysfunction. This suggests that placement of the meatus at the tip of the penis for anterior hypospadias may not be necessary for all patients, especially in communities where urination is typically done in a sitting or squatting position (PUBMED:11533820). The study emphasizes the importance of informed consent and making patients and their parents aware of the non-association of sexual and reproductive disorders with these anomalies before proceeding with repairs. It concludes that the results of anterior hypospadias repair compare favorably with those of other centers, but the practice of placing the neomeatus at the tip of the penis should not be applicable to all patients in the mentioned community context.
Instruction: Is drug-resistant Mycobacterium leprae a real cause for concern? Abstracts: abstract_id: PUBMED:32043454 Case Report: Multidrug-Resistant Mycobacterium leprae in a Case of Smear-Negative Relapse. The ongoing transmission of leprosy in India is worrisome, and emerging drug resistance may be one of the factors responsible for the continued transmission of leprosy in India. Emerging cases of multidrug-resistant Mycobacterium leprae pose a great threat to eradication of leprosy and must be addressed with utmost priority. We report a case of multidrug-resistant M. leprae in a case of relapse where slit skin smear (SSS) was negative and histopathology was inconclusive. Drug resistance studies in leprosy are undertaken only in smear-positive relapse cases, and detection of this type of multidrug resistance in a case with negative SSS and innocuous histopathology is rather unusual and highlights the importance of undertaking drug resistance tests even in smear-negative cases of leprosy relapse. Resistance to ofloxacin (OFL) is also a cause for concern as OFL is one of the reserve drugs recommended for treatment of rifampicin-resistant strains. abstract_id: PUBMED:6399066 "Drug-resistant proportion test" for M. leprae to quantify the proportion of drug-resistant M. leprae in a sample using the mouse foot pad. The mouse foot pad test has not previously been used quantitatively to discriminate between samples of Mycobacterium leprae with differing proportions of drug-resistant M. leprae. The "drug-resistant proportion test" is a simple modification of the routine mouse foot pad test. It is demonstrated to distinguish between samples of M. leprae with a tenfold difference in the proportion of dapsone-resistant M. leprae. abstract_id: PUBMED:24968045 Is drug-resistant Mycobacterium leprae a real cause for concern?: First approach to molecular monitoring of multibacillary Colombian patients with and without previous leprosy treatment. Introduction: There is no information in Colombia on Mycobacterium leprae primary and secondary drug resistance in regards to the WHO-multidrug therapy regime. On the other hand, public health authorities around the world have issued various recommendations, one of which prompts for the immediate organization of resistance surveillance through simple molecular methods. Objective: To determine the prevalence of Mycobacterium leprae drug resistance to rifampicin, ofloxacin and dapsone in untreated and previously treated patients at the Centro Dermatológico Federico Lleras Acosta during the 1985-2004 period. Materials And Methods: We conducted a retrospective study which included multibacillary patient biopsies through elective sampling: 381 of them from new patients and 560 from previously treated patients. Using a microtome, we obtained six slides from each skin biopsy preserved in paraffin, and we extracted M. leprae DNA. We amplified three molecular targets through PCR and obtained the patterns of drug resistance to dapsone, rifampicin and ofloxacin by reverse hybridization. Finally, we collected epidemiological, clinical and demographical data for analyses. Results: From 941 samples under study, 4.14% of them were resistant to one or more drugs, and 5.77 and 3.04% had resistant genotypes in new and previously treated patients, respectively. Total resistance for each drug was 0.43% for dapsone, 3.19% for rifampicin and 1.17% for ofloxacin. 
We found statistically significant differences for rifampicin and for the total population when comparing the results from untreated versus previously treated patients. Two thirds of the resistant samples were resistant to rifampicin alone or combined. Conclusions: The standard multidrug therapy schemes continue to be effective for leprosy cases; however, it is necessary to guarantee adherence and regularity. Surveillance of drug resistance in new and previously treated leprosy cases should be established. abstract_id: PUBMED:6432692 Change in cell-mediated immunity to Mycobacterium leprae and Mycobacterium marinum in Hansen's disease patients. The cell-mediated immunity (CMI) to purified protein derivatives of Mycobacterium leprae, Mycobacterium tuberculosis, Mycobacterium avium and Mycobacterium marinum was studied. Leukocyte Migration Inhibition (LMI) and Delayed Hypersensitivity skin reactions to these antigens were examined in 44 hanseniasis patients (20 quiescent Virchowians, 13 reactional Virchowians and 11 tuberculoid patients) and 15 healthy subjects. An impairment in LMI and Delayed Hypersensitivity tests to M. leprae and M. marinum was observed in Virchowian patients, both quiescent and reactional. The CMI response to all mycobacterial antigens was increased in tuberculoid patients, and a poor response to M. leprae and M. marinum was observed in healthy controls. Our results show a high correlation between the CMI response to M. leprae and to M. marinum (r = +0.8). This close relationship between both antigens may reflect cross-reactivity. abstract_id: PUBMED:29097343 Molecular detection of multidrug-resistant Mycobacterium leprae from Indian leprosy patients. Objective: The emergence of multidrug-resistant (MDR) organisms for any infectious disease is a public health concern. Global efforts to control leprosy by intensive chemotherapy have led to a significant decrease in the number of registered patients. Currently recommended control measures for treating leprosy with multidrug therapy (MDT) were designed to prevent the spread of dapsone-resistant Mycobacterium leprae strains. Here we report the identification of MDR M. leprae from relapse leprosy patients from endemic regions in India. Methods: Resistance profiles to rifampicin, dapsone and ofloxacin of the isolated strains were confirmed by identification of mutations in genes previously shown to be associated with resistance to each drug. Between 2009-2016, slit-skin smear samples were collected from 239 relapse and 11 new leprosy cases from hospitals of The Leprosy Mission across India. DNA was extracted from the samples and was analysed by PCR targeting the rpoB, folP and gyrA genes associated with resistance to rifampicin, dapsone and ofloxacin, respectively, in M. leprae. M. leprae Thai-53 (wild-type) and Zensho-4 (MDR) were used as reference strains. Results: Fifteen strains showed representative mutations in at least two resistance genes. Two strains showed mutations in all three genes responsible for drug resistance. Seven, seven and one strain, respectively, showed mutations in genes responsible for rifampicin and dapsone resistance, for dapsone and ofloxacin resistance and for rifampicin and ofloxacin resistance. Conclusion: This study showed the emergence of MDR M. leprae in MDT-treated leprosy patients from endemic regions of India.
abstract_id: PUBMED:15508725 Multi-drug resistant Mycobacterium leprae from patients with leprosy. DNA sequences of Mycobacterium leprae in particular regions of the gyrA, rpoB, and folP genes responsible for resistance to new quinolones, rifampicin and dapsone, respectively, were analyzed. Among 88 isolates of M. leprae from leprosy patients in Japan, Haiti, Indonesia, Pakistan, and the Philippines, eleven isolates had mutational changes in 2 genes (resistance to 2 drugs), and 2 isolates (Shinsei-1 and Zensho-4) showed mutations in 3 genes (resistance to 3 drugs). These findings suggest the emergence of multi-drug resistant M. leprae. abstract_id: PUBMED:1287401 Reversal of drug resistance in Mycobacterium leprae by ampicillin/sulbactam. The multiplication of Mycobacterium leprae in foot pads of experimentally-infected mice was suppressed by intramuscular administration of ampicillin combined with sulbactam or YTR-830H, two potent inhibitors of beta-lactamase in the bacteria. The antibiotic or the inhibitors by themselves were inactive. Ampicillin/sulbactam also inhibited the growth of drug-resistant M. leprae which grew in the presence of rifampin or dapsone. The finding provides a new approach to treat leprosy and to overcome drug resistance of the mycobacteria. abstract_id: PUBMED:31260522 Emergence and Transmission of Drug-/Multidrug-resistant Mycobacterium leprae in a Former Leprosy Colony in the Brazilian Amazon. Background: Leprosy has been treated with multidrug therapy, which has been distributed for free across the globe and regarded as highly efficient. However, the impossibility of growing Mycobacterium leprae in axenic media has historically impaired assessments of M. leprae resistance, a parameter only recently detectable through molecular methods. Methods: A systematic, population-based search for M. leprae resistance in suspected leprosy relapse cases and contacts was performed in Prata Village, an isolated, hyperendemic, former leprosy colony located in the Brazilian Amazon. Results led to an extended active search involving the entire Prata population. Confirmed leprosy cases were investigated for bacterial resistance using a combination of in vivo testing and direct sequencing of resistance genes folP1, rpoB, and gyrA. A molecular epidemiology analysis was performed using data from 17 variable number tandem repeats (VNTR). Results: Mycobacterium leprae was obtained from biopsies of 37 leprosy cases (18 relapses and 19 new cases): 16 (43.24%) displayed drug-resistance variants. Multidrug resistance to rifampicin and dapsone was observed in 8 relapses and 4 new cases. Single resistance to rifampicin was detected in 1 new case. Resistance to dapsone was present in 2 relapses and 1 new case. Combined molecular resistance and VNTR data revealed evidence of intra-familial primary transmission of resistant M. leprae. Conclusions: A comprehensive, population-based systematic approach to investigate M. leprae resistance in a unique population revealed an alarming scenario of the emergence and transmission of resistant strains. These findings may be used for the development of new strategies for surveillance of drug resistance in other populations.
The anti-Mycobacterium leprae activity of clarithromycin when administered alone and in combination with rifampin and dapsone in the diet was determined using the kinetic method of drug evaluation in mice. Clarithromycin when administered at a concentration of 0.1% (w/w) in the diet completely prevented growth of 2 pan-susceptible, 3 dapsone-resistant, 2 rifampin-resistant, and 2 rifampin and dapsone double resistant strains of M. leprae. A 0.03% (w/w) concentration also completely prevented growth of M. leprae in all mice infected with 2 of 7 strains tested, but in only some of the mice infected with the remaining 5 strains. No antagonistic drug interactions were observed between clarithromycin and dapsone or rifampin. The addition of clarithromycin to the currently recommended multidrug regimen should improve the rate of killing of M. leprae and help to prevent the growth of dapsone-resistant and rifampin-resistant strains. abstract_id: PUBMED:3519797 Sulfone-resistance of Mycobacterium leprae--monotherapy with diaminodiphenylsulfone--the value of triple-drug combinations While the emergence of drug resistance in Mycobacterium leprae was foreseen and known for a long time, it is now presented as a tragedy jeopardizing leprosy control through monotherapy. This resistance has been mainly reported in the United States. It is not observed in other parts of the world. In our opinion, the unfavorable observations made at present result from an incorrect implementation of dapsone (DDS) therapy in the patients, resulting in low sulfone blood levels, as a consequence of the use of complex disubstituted sulfones, insufficient daily dapsone dosages, irregular or noncompliance to treatment, premature interruption of treatment, etc. Two measures are required in order to prevent the emergence of primary or secondary resistance to dapsone in M. leprae. First, it is necessary to go back to the previous regimen of 200 mg dapsone daily in an adult. It yields the "maximum tolerated effective dosage." It should never have been rejected in favor of 100 mg daily as currently recommended at the moment. The second measure is the implementation of multiple drug therapy (MDT), using concurrently DDS in association with rifampin and clofazimine. This is a logical and rational approach, at least from a theoretical point of view. However, MDT is most unfortunately quite expensive and therefore inapplicable in most countries with high prevalence, since they are poor and underdeveloped. Implementation of MDT also raises great problems, since dosages have to be strictly adhered to in order to prevent a potentially catastrophic emergence of multiple drug resistance in M. leprae. Answer: Yes, drug-resistant Mycobacterium leprae is a real cause for concern. The emergence of multidrug-resistant (MDR) M. leprae strains poses a significant threat to the control and eradication of leprosy. Studies have reported cases of M. leprae that are resistant to multiple drugs, including rifampicin, dapsone, and ofloxacin, which are key components of the multidrug therapy (MDT) regimen recommended for leprosy treatment (PUBMED:32043454, PUBMED:29097343, PUBMED:15508725, PUBMED:31260522). The detection of drug resistance in both new and previously treated patients indicates that the standard MDT schemes, while still effective, require careful adherence and regularity to prevent the spread of resistance (PUBMED:24968045). 
Additionally, the presence of drug resistance in smear-negative relapse cases, which are not typically tested for drug resistance, highlights the importance of conducting drug resistance tests even in such cases (PUBMED:32043454). The emergence and transmission of drug-/multidrug-resistant M. leprae in populations, as observed in a former leprosy colony in the Brazilian Amazon, underscore the need for new strategies for surveillance and management of drug resistance (PUBMED:31260522). Furthermore, the development of new approaches to treat leprosy and overcome drug resistance, such as the use of ampicillin/sulbactam and clarithromycin, is being explored to address this growing concern (PUBMED:1287401, PUBMED:8326182). The situation is compounded by the fact that M. leprae cannot be grown in axenic media, which historically has made it difficult to assess resistance, although molecular methods have recently enabled such detection (PUBMED:31260522). Overall, the evidence suggests that drug-resistant M. leprae is a significant concern that requires immediate attention and action from public health authorities and researchers.
Instruction: Is pseudo-intractability in population of patients with epilepsy still alive in the 21st century? Abstracts: abstract_id: PUBMED:21196921 Is pseudo-intractability in population of patients with epilepsy still alive in the 21st century? Audit of 100 seizure-free patients, referred with the diagnosis of pharmacoresistant epilepsy. Objective: There is no universally accepted definition of pseudo-intractable epilepsy. Pseudo-intractability means that the resistance to treatment is, in fact, caused by clinical errors. The purpose of our study was to identify the reasons for intractability and subsequent effective therapeutic management approaches in a group of patients with established pseudo-intractable epilepsy. Methods: The study was designed as a retrospective audit of 100 adult patients who, in their past medical history, were diagnosed as having intractable epilepsy but, following adjustments to their medical management, were seizure free for at least 2 years. Patients underwent standard clinical evaluation, including EEG and/or video-EEG monitoring. We re-evaluated past medical, family, seizure and pharmacological history and morphological findings. Epilepsy was re-classified according to the ILAE classification. Results: We identified possible errors including incorrect diagnosis and/or inappropriate previous epilepsy management in all 100 patients. Incorrect diagnosis (seizure type and/or syndrome) was observed in 47 patients (47%). Thirty two patients (32%) with idiopathic generalized epilepsy were treated for complex focal seizures with inappropriate choice of medication. Therapeutic errors were identified in 48 patients (48%). Issues with medication compliance were found in 20 patients (20%). Potential seizure precipitating factors were detected in 23 patients (23%). Conclusions: Our study of 100 patients confirmed that the problem of pseudo-intractability still exists. Every case of pharmacoresistance in epilepsy could potentially be caused by one or more clinical errors. abstract_id: PUBMED:19298427 The clinical concept of epilepsy. This article reviews the history of clinical concepts of epilepsy and its classification, especially in the last 100 years. Throughout its recorded history of 3 to 4 millennia, epilepsy has always been defined by its most dramatic symptoms, for example, falling, motor activity or loss of consciousness, but separation from other causes of the same paroxysmal symptoms has always proved challenging. For over a century there has been some semantic confusion whether to call the various paroxysms fits, convulsions, seizures, or epilepsies. Since the middle of the 19th century a great unresolved debate has continued about whether recurrent seizures or epilepsy should be viewed as a separable symptom of underlying brain disease or as one or more idiopathic diseases or syndromes, with an inherent age-related natural history; or indeed viewed as both a symptom and a disease. A major advance in the 20th century is that vascular theories of epilepsy, which reached their peak with Turner in 1907, have been replaced by electromagnetic discharges, based especially on the work of Todd, Jackson, Berger, Lennox, and the Gibbs, culminating eventually in new ILAE classifications of seizures (1981) and epilepsy syndromes (1989). However 21st century uncertainties about symptomatic versus idiopathic or cross-sectional (seizures) versus longitudinal (epilepsy) approaches to the problem very much reflect similar divergences of view a century ago. 
More attention is now being directed at interseizure events and processes which may lead either to remission or intractability with associated cognitive and psychosocial consequences. The search for the elusive essence, diathesis or predisposition to epilepsy, including seizure threshold, continues. abstract_id: PUBMED:21635233 New concepts in classification of the epilepsies: entering the 21st century. Concepts and terminology for classifying seizures and epilepsies have, until recently, rested on ideas developed nearly a century ago. In order for clinical epilepsy and practice to benefit fully from the major technological and scientific advances of the last several years, advances that are revolutionizing our understanding and treatment of the epilepsies, it is necessary to break with the older vocabulary and approaches to classifying epilepsies and seizures. The Commission on Classification and Terminology made specific recommendations to move this process along and ensure that classification will reflect the best knowledge, will not be arbitrary, and will ultimately serve the purpose of improving clinical practice as well as research on many levels. The recommendations include new terms and concepts for etiology and seizure types as well as abandoning the 1989 classification structure and replacing it instead with a flexible multidimensional approach in which the most relevant features for a specific purpose can be emphasized. This is not a finished product and will take yet more time to achieve. Waiting any longer, however, would be a disservice to patient care and will continue the longstanding frustrations with the earlier system which, at this point in time, can be viewed as both antiquated and arbitrary. abstract_id: PUBMED:26907968 Epilepsy on the silver screen in the 21st century. Epilepsy remains an attractive vehicle for filmmakers in the twenty-first century. This review examines the themes of twenty-one films, released between 2000 and 2014, that feature a character with epilepsy or a pivotal scene involving a seizure. Epilepsy continues to be associated with the supernatural in modern cinematic output. Demonic possession and epilepsy now share a similar cinematic lexicon. Unfortunately, the overlap is more than just visual. Supernatural treatments of narratives that claim to be 'based on a true story' of someone with epilepsy continue to blur the lines between medical and spiritual realms. Although there has been a steady progression away from concealment of the condition, epilepsy continues to signal 'otherness' in movie characters and seldom in a good way. All too often, a character has epilepsy to maximize the unease of the audience with them; it is a device that is used to signal 'this character is not like you'. However, amongst the hackneyed negative stereotypes, accurate portrayals of the condition are beginning to emerge, not least due to active collaborations between filmmakers and epilepsy advocacy groups. Far from being worthy, it is heartening that these films are also those that are the most absorbing and thought-provoking of the cinematic output thus far this century. This article is part of a Special Issue entitled "Epilepsy, Art, and Creativity". abstract_id: PUBMED:35568039 Epilepsy in the 21st century. N/A abstract_id: PUBMED:24798409 The societal integration of individuals with epilepsy: perspectives for the 21st century. Epilepsy is a common neurologic disorder seen throughout the world. 
Advances in therapy have made it possible for persons with epilepsy (PWEs) to have improved seizure control and a better quality of life. However, it is not entirely clear whether this has resulted in their successful integration into society. This review examines the societal integration of PWEs, identifying both the progress made and the challenges that continue to hamper further advances. In general, PWEs are more integrated in western-oriented cultures. However, there continue to be ongoing difficulties due to poor education and intellectual functioning, poor social and family support, the undertreatment of coexisting psychiatric conditions, transportation and mobility limitations, and problems obtaining employment. This review also discusses the effects of low socioeconomic status on integration and the persisting prejudices that affect certain racial groups. Most importantly, this review underscores the fact that societal stigma towards PWEs is still very much alive. At the beginning of the 21st century, PWEs still encounter difficulties in their quest for full societal integration. Along with medical advances being made to improve seizure control, much still has to be done to bring about the reforms necessary to help PWEs live more meaningful and productive lives. abstract_id: PUBMED:20499575 The treatment of epilepsy in the 21st century N/A abstract_id: PUBMED:20720592 Phenylketonuria: a 21st century perspective. Phenylketonuria is the most prevalent inherited defect in amino acid metabolism. Owing to mutations in the gene encoding the enzyme phenylalanine hydroxylase, the essential amino acid phenylalanine cannot be hydroxylated to tyrosine and blood and tissue concentrations of phenylalanine increase. Untreated, phenylketonuria causes severe mental retardation, epilepsy and behavioral problems. The combined effect of neonatal screening and treatment has, however, meant that phenylketonuria is now a biochemical rather than a clinical diagnosis. Treatment consists of stringent dietary restriction of natural protein intake and supplementation of amino acids other than phenylalanine by a chemically manufactured protein substitute. Although clinical outcome on a phenylalanine-restricted diet is good, neuropsychological deficits are now known to exist in dietary-treated patients with phenylketonuria, and quality of life, nutritional condition and psychosocial outcome could probably also be improved. The need for new therapeutic approaches is being met by supplementation with tetrahydrobiopterin or large neutral amino acids, whilst development of the use of phenylalanine ammonia lyase, and, in the longer term, gene therapy and chaperone treatment holds promise. This Review provides an overview of the history of phenylketonuria, the challenges of treatment today and the treatment possibilities in the near future. abstract_id: PUBMED:27648506 Pyrotherapy for the Treatment of Psychosis in the 21st Century: A Case Report and Literature Review. The concept that fevers can improve the condition of patients with certain medical and psychiatric diseases dates back to Hippocrates. Over the centuries, it has been observed that fevers and infectious agents have been beneficial for a broad spectrum of diseases, including neurologic conditions such as epilepsy and psychiatric illnesses including melancholy and psychosis. 
Interest in the concept of fever as a treatment for disease, termed pyrotherapy or pyretotherapy, peaked in the late 1800s and early 1900s thanks to the Nobel Prize winning work of Julius Wagner-Jauregg for his studies with malaria therapy for general paralysis of the insane, now more commonly referred to as neurosyphilis. The use of inoculations of infectious agents for their fever-inducing effects in the treatment of neurosyphilis quickly spread throughout the world, and, by the 1920s, it was considered by many to be the treatment of choice for neurosyphilis as well as other psychotic disorders. However, with the discovery of penicillin for the treatment of syphilis, which coincided with the advent of convulsion-oriented practices including electroconvulsive therapy and insulin coma for the treatment of psychotic disorders, pyrotherapy soon lost favor among psychiatrists and, since the 1950s, it has largely been overlooked by the scientific community. In this article, the authors provide a brief literature review of the history of pyrotherapy and present a case report of a woman with schizoaffective disorder and severe psychotic symptoms who experienced a remarkable resolution of psychotic symptoms following an episode of bacteremia with high fever. abstract_id: PUBMED:19298432 Epilepsy, psychiatry, and neurology. This article reviews the relationship between the psychiatry and neurology of epilepsy, especially in the last 100 years. Throughout most of its recorded history of 3 to 4 millennia epilepsy has been viewed as a supernatural or mental disorder. Although first suggested by Hippocrates in the 5th century B.C., the concept of epilepsy as a brain disorder only began to take root in the 17th and 18th centuries. The discipline of neurology emerged from "nervous disorders" or neuropsychiatry in the late 19th century, when vascular theories of epilepsy predominated. By the turn of the 19th century psychiatry and neurology were diverging and epilepsy remained to some extent in both disciplines. It was only in the middle of the 20th century with the development of electromagnetic theories of epilepsy that the concept of epilepsy per se as a neurological disorder was finally adopted in international classifications of disease. This was associated with a refined definition of the ictal, pre-, post-, and interictal psychological disorders of epilepsy, which have contributed to a renaissance of neuropsychiatry. At the beginning of the 21st century and the centenary of the ILAE psychiatry and neurology have been converging again, led in some respects by epilepsy, which has provided several useful models of mental illness and a bridge between the two disciplines. Answer: Yes, pseudo-intractability in the population of patients with epilepsy is still present in the 21st century. An audit of 100 seizure-free patients, who were previously diagnosed with pharmacoresistant epilepsy, revealed that every case of pharmacoresistance could potentially be caused by one or more clinical errors. The study identified possible errors including incorrect diagnosis and/or inappropriate previous epilepsy management in all 100 patients. Incorrect diagnosis (seizure type and/or syndrome) was observed in 47 patients (47%), and therapeutic errors were identified in 48 patients (48%). Issues with medication compliance were found in 20 patients (20%), and potential seizure precipitating factors were detected in 23 patients (23%). 
This study confirmed that the problem of pseudo-intractability still exists and emphasizes the need for careful re-evaluation of diagnosis and management in patients with epilepsy (PUBMED:21196921).
Instruction: Risk stratification and timing of coronary angiography in acute coronary syndromes: are we targeting the right patients in a timely manner? Abstracts: abstract_id: PUBMED:24362735 Risk stratification and timing of coronary angiography in acute coronary syndromes: are we targeting the right patients in a timely manner? (ANZACS-QI 1). Aims: The New Zealand non-ST elevation acute coronary syndrome (NSTEACS) guideline recommends that clinically appropriate patients with combined high risk features (positive troponin and ischaemic ECG and a GRACE score >140) have coronary angiography within the first hospital day. All other ACS patients referred for angiography should be studied within 72 hours. We evaluated the relationship between risk criteria and both the incidence and timing of angiography in our practice. Methods: 2868 consecutive patients (2007 to 2010) with NSTEACS admitted to Middlemore, Waikato and Taranaki Hospitals. Individual patient demographic, risk factor, diagnostic, investigation and in-hospital outcome data were collected prospectively using Acute PREDICT software. Results: 391 (13.6%) patients met the combined high risk criteria. Compared with lower-risk patients, they were older and more likely to have known cardiac disease, diabetes, renal impairment, left ventricular failure, left ventricular systolic dysfunction and more likely to die in hospital. Patients with combined high risk were less likely than others to undergo coronary angiography (61.6% vs 75%, p<0.0001). Only a fifth of combined high risk patients referred had coronary angiography within 1 day. Only just over half of those referred for angiography were studied within 3 days. Conclusions: The New Zealand guideline's high-risk criteria identify one in seven patients with NSTEACS as potentially appropriate for angiography within the first day. For those referred this was infrequently achieved, and only half of all NSTEACS patients referred met the 3-day target. Implementation of a national ACS registry to support more appropriate and timely management is appropriate. abstract_id: PUBMED:32754504 Diagnosis and risk stratification of coronary artery disease in Yemeni patients using treadmill test. Background: Coronary artery disease (CAD) is a common medical problem that remains a great cause of morbidity and mortality. Based on clinical manifestation, CAD spans from chronic stable angina to acute coronary syndrome (ACS), which encompasses (1) unstable angina (UA), (2) non-ST-elevation myocardial infarction (NSTEMI), and (3) ST-elevation myocardial infarction (STEMI). Objective: This study was conducted to describe the risk stratification and prognosis of CAD in Yemeni patients undergoing an exercise stress test. Method: A retrospective descriptive study involved the records of 302 patients who underwent exercise stress testing by the Bruce protocol of Treadmill (TMT). Results: Out of 302 patients, the mean age was 43.29 years (range, 22-70 years), and 79.80% were males. The majority of patients (86.75%) were normal. Most of the affected patients (82.50%) were males. All of the affected patients were older than 30 years, with a predominance of those older than 50 years. Results showed that 45% of the positive (+ve) patients were in the high-risk group, and 55% were in the intermediate- and low-risk groups. Conclusions: CAD affects males more than females. It affects patients older than 30 years and predominates in elderly patients.
Most of the affected patients were in the high, intermediate, and low-risk groups and rarely were in the very high-risk group. abstract_id: PUBMED:34456646 ANMCO POSITION PAPER: Timing of coronary angiography in non-ST-segment elevation acute coronary syndromes. The European Society of Cardiology guidelines on non-ST-elevation acute coronary syndromes suggest different temporal strategies for the angiographic study depending on the risk profile. The scientific evidence underlying the guideline recommendations and the critical issues currently existing in Italy, that often do not allow either an extended strategy of revascularization within 24 h or the application of the principle of the same day transfer from a spoke to a hub centre, are analysed. The position paper focuses, in particular, on the subgroup of patients with a defined diagnosis of non-ST-elevation myocardial infarction by proposing a timing of coronary angiography/revascularization that takes into account the available scientific evidence and the organizational possibilities of a considerable part of national cardiology services. abstract_id: PUBMED:25429326 Bleeding risk stratification in an era of aggressive management of acute coronary syndromes. Major bleeding is currently one of the most common non-cardiac complications observed in the treatment of patients with acute coronary syndrome (ACS). Hemorrhagic complications occur with a frequency of 1% to 10% during treatment for ACS. In fact, bleeding events are the most common extrinsic complication associated with ACS therapy. The identification of clinical characteristics and particularities of the antithrombin therapy associated with an increased risk of hemorrhagic complications would make it possible to adopt prevention strategies, especially among those exposed to greater risk. The international societies of cardiology renewed emphasis on bleeding risk stratification in order to decide strategy and therapy for patients with ACS. With this review, we performed an update about the ACS bleeding risk scores most frequently used in daily clinical practice. abstract_id: PUBMED:19675816 The use of risk scores for stratification of non-ST elevation acute coronary syndrome patients. Objective: To review the methods available for the risk stratification of non-ST elevation (NSTE) acute coronary syndrome (ACS) patients and to evaluate the use of risk scores for their initial risk assessment. Data Sources: The data of the present review were identified by searching PUBMED and other databases (1996 to 2008) using the key terms "risk stratification", "risk scores", "NSTEMI", "UA" and "acute coronary syndrome". Study Selection: Mainly original articles, guidelines and critical reviews written by major pioneer researchers in this field were selected. Result: After evaluation of several risk predictors and risk scores, it was found that estimating risk based on clinical characteristics is challenging and imprecise. Risk predictors, whether used alone or in simple binary combination, lacked sufficient precision because they have high specificity but low sensitivity. Risk scores are more accurate at stratifying NSTE ACS patients into low-, intermediate- or high-risk groups. The Global Registry of Acute Cardiac Events risk score was found to have superior predictive accuracy compared with other risk scores in ACS population. Treatments based according to specific clinical and risk grouping show that certain benefits may be predominantly or exclusively restricted to higher risk patients. 
Conclusion: Based on the trials in the literature, the Global Registry of Acute Cardiac Events risk score is more advantageous and easier to use than other risk scores. It can categorize a patient's risk of death and/or ischemic events, which can help tailor therapy to match the intensity of the patient's NSTE ACS. abstract_id: PUBMED:25228966 Risk stratification for ST segment elevation myocardial infarction in the era of primary percutaneous coronary intervention. Acute coronary syndromes presenting with ST elevation are usually treated with emergency reperfusion/revascularisation therapy. In contrast current evidence and national guidelines recommend risk stratification for non ST segment elevation myocardial infarction (NSTEMI) with the decision on revascularisation dependent on perceived clinical risk. Risk stratification for STEMI has no recommendation. Statistical risk scoring techniques in NSTEMI have been demonstrated to improve outcomes however their uptake has been poor perhaps due to questions over their discrimination and concern for application to individuals who may not have been adequately represented in clinical trials. STEMI is perceived to carry sufficient risk to warrant emergency coronary intervention [by primary percutaneous coronary intervention (PPCI)] even if this results in a delay to reperfusion with immediate thrombolysis. Immediate thrombolysis may be as effective in patients presenting early, or at low risk, but physicians are poor at assessing clinical and procedural risks and currently are not required to consider this. Inadequate data on risk stratification in STEMI inhibits the option of immediate fibrinolysis, which may be cost-effective. Currently the mode of reperfusion for STEMI defaults to emergency angiography and percutaneous coronary intervention ignoring alternative strategies. This review article examines the current risk scores and evidence base for risk stratification for STEMI patients. The requirements for an ideal STEMI risk score are discussed. abstract_id: PUBMED:32954890 Correlation between serum free triiodothyronine levels and risk stratification in Chinese patients with acute coronary syndrome receiving percutaneous coronary intervention. Objective: Low serum free triiodothyronine (FT3) levels are associated with the occurrence of coronary heart disease and with the prognosis of cardiovascular diseases. This study aimed to investigate the relationship between FT3 levels and risk stratification in Chinese Han patients with acute coronary syndrome (ACS) receiving percutaneous coronary intervention (PCI) treatment. Methods: Plasma FT3 levels and other parameters were measured in 191 patients with ACS who received PCI. The risk of adverse cardiovascular events was assessed using the Age, Creatinine, and Ejection Fraction (ACEF) score. Results: FT3 levels were significantly lower in the high-risk group than in the medium- and low-risk groups. Serum FT3 levels were negatively linearly correlated with the ACEF score (r = -0.590). Stepwise regression analysis showed a negative correlation between FT3 levels and the risk of adverse cardiovascular events as measured by the ACEF score (standardized β = -0.261). Conclusion: Serum FT3 levels are negatively related to risk stratification in patients with ACS. Serum FT3 levels may be used as a potential predictor for adverse outcomes of patients with ACS undergoing PCI. abstract_id: PUBMED:34640592 Risk Stratification of Patients with Acute Coronary Syndrome. 
Defining the risk factors affecting the prognosis of patients with acute coronary syndrome (ACS) has been a challenge. Many individual biomarkers and risk scores that predict outcomes during different periods following ACS have been proposed. This review evaluates known outcome predictors supported by clinical data in light of the development of new treatment strategies for ACS patients during the last three decades. abstract_id: PUBMED:31271083 Long-Term Risk Stratification of Patients Undergoing Coronary Angiography According to the Thrombolysis in Myocardial Infarction Risk Score for Secondary Prevention. Background A risk score for secondary prevention after myocardial infarction (Thrombolysis in Myocardial Infarction Risk Score for Secondary Prevention [TRS2P]), based on 9 established clinical factors, was recently developed from the TRA 2°P- TIMI 50 (Thrombin Receptor Antagonist in Secondary Prevention of Atherothrombotic Ischemic Events) trial. We aimed to evaluate the performance of TRS 2P for predicting long-term outcomes in real-world patients presenting for coronary angiography. Methods and Results A retrospective analysis of 13 593 patients referred to angiography for the assessment or treatment of coronary disease was performed. Risk stratification for 10-year major adverse cardiovascular events was performed using the TRS 2P, divided into 6 categories (0 to ≥5 points), and in relation to the presenting coronary syndrome. All clinical variables, except prior coronary artery bypass grafting, were independent risk predictors. The annualized incidence rate of major adverse cardiovascular events increased in a graded manner with increasing TRS 2P, ranging from 1.65 to 16.6 per 100 person-years ( Ptrend<0.001). Compared with the lowest-risk group (risk indicators=0), the hazard ratios (95% CIs) for 10-year major adverse cardiovascular events were 1.60 (95% CI, 1.36-1.89), 2.58 (95% CI, 2.21-3.02), 4.31 (95% CI, 3.69-5.05), 6.43 (95% CI, 5.47-7.56), and 10.03 (95% CI, 8.52-11.81), in those with 1, 2, 3, 4 and ≥5 risk indicators, respectively. Risk gradation was consistent among individual clinical end points. TRS 2P showed reasonable discrimination with C-statistics of 0.693 for major adverse cardiovascular events and 0.758 for mortality. The graded relationship between the risk score and event rates was observed in both patients presenting with acute and nonacute coronary syndromes. Conclusions The use of TRS 2P, a simple risk score based on routinely collected variables, enables risk stratification in patients undergoing coronary angiography. Its predictive value was demonstrated in a real-world setting with long-term follow-up and regardless of the acuity of coronary presentation. abstract_id: PUBMED:27255171 External validation of the ProACS score for risk stratification of patients with acute coronary syndromes. Introduction: The ProACS risk score is an early and simple risk stratification score developed for all-cause in-hospital mortality in acute coronary syndromes (ACS) from a Portuguese nationwide ACS registry. Our center only recently participated in the registry and was not included in the cohort used for developing the score. Our objective was to perform an external validation of this risk score for short- and long-term follow-up. Methods: Consecutive patients admitted to our center with ACS were included. Demographic and admission characteristics, as well as treatment and outcome data were collected. 
The ProACS risk score variables are age (≥72 years), systolic blood pressure (≤116 mmHg), Killip class (2/3 or 4) and ST-segment elevation. We calculated ProACS, Global Registry of Acute Coronary Events (GRACE) and Canada Acute Coronary Syndrome risk score (C-ACS) risk scores for each patient. Results: A total of 3170 patients were included, with a mean age of 64±13 years, 62% with ST-segment elevation myocardial infarction. All-cause in-hospital mortality was 5.7% and 10.3% at one-year follow-up. The ProACS risk score showed good discriminative ability for all considered outcomes (area under the receiver operating characteristic curve >0.75) and a good fit, similar to C-ACS, but lower than the GRACE risk score and slightly lower than in the original development cohort. The ProACS risk score provided good differentiation between patients at low, intermediate and high mortality risk in both short- and long-term follow-up (p<0.001 for all comparisons). Conclusions: The ProACS score is valid in external cohorts for risk stratification for ACS. It can be applied very early, at the first medical contact, but should subsequently be complemented by the GRACE risk score. Answer: Risk stratification and the timing of coronary angiography in acute coronary syndromes (ACS) are critical for optimizing patient outcomes. The New Zealand non-ST elevation acute coronary syndrome (NSTEACS) guideline recommends that patients with high-risk features should have coronary angiography within the first hospital day, while other ACS patients should be studied within 72 hours. However, a study evaluating the incidence and timing of angiography in practice found that only a fifth of high-risk patients referred for angiography received it within one day, and just over half were studied within three days, indicating a gap between guidelines and actual practice (PUBMED:24362735). The European Society of Cardiology guidelines suggest different strategies for angiographic study depending on the patient's risk profile, but there are critical issues in Italy that often do not allow for an extended strategy of revascularization within 24 hours or the application of the principle of same-day transfer from a spoke to a hub center (PUBMED:34456646). Risk stratification tools, such as the Global Registry of Acute Cardiac Events (GRACE) risk score, have been found to have superior predictive accuracy compared to other risk scores in the ACS population and can help tailor therapy to match the intensity of the patient's NSTE ACS (PUBMED:19675816). Additionally, the ProACS risk score has been externally validated and shown to have good discriminative ability for short- and long-term outcomes, although it should be complemented by the GRACE risk score (PUBMED:27255171). In summary, while guidelines and risk stratification tools exist to help target the right patients for timely coronary angiography, there are challenges in implementing these recommendations in clinical practice. There is a need for improved adherence to guidelines and potentially for better systems to ensure high-risk patients receive angiography within the recommended timeframes.
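As an aside on the triage logic summarised in this record: the ANZACS-QI audit (PUBMED:24362735) defines a combined high-risk NSTEACS patient as one with a positive troponin, an ischaemic ECG and a GRACE score above 140, with angiography targeted within the first hospital day and within 72 hours for other referred patients. The sketch below encodes only that published rule so the timing targets can be checked programmatically; the class, function and field names are illustrative assumptions, and the GRACE score itself is taken as an input computed elsewhere.

```python
from dataclasses import dataclass

@dataclass
class NsteacsPatient:
    troponin_positive: bool      # positive troponin assay
    ischaemic_ecg: bool          # ischaemic ECG changes
    grace_score: float           # GRACE risk score, computed elsewhere
    hours_to_angiography: float  # observed delay from admission, in hours

def angiography_target_hours(p: NsteacsPatient) -> int:
    """Guideline target window for coronary angiography, in hours.

    Combined high-risk patients (positive troponin AND ischaemic ECG AND
    GRACE > 140) are targeted within the first hospital day (24 h); all
    other referred patients within 72 h, as described in PUBMED:24362735.
    """
    combined_high_risk = (
        p.troponin_positive and p.ischaemic_ecg and p.grace_score > 140
    )
    return 24 if combined_high_risk else 72

def met_target(p: NsteacsPatient) -> bool:
    """True if the observed delay met the guideline target window."""
    return p.hours_to_angiography <= angiography_target_hours(p)

# Example: a combined high-risk patient studied after 40 hours misses the
# 24-hour target, mirroring the audit's finding that only a fifth of such
# patients were studied within one day.
patient = NsteacsPatient(troponin_positive=True, ischaemic_ecg=True,
                         grace_score=152, hours_to_angiography=40)
print(angiography_target_hours(patient), met_target(patient))  # 24 False
```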
Instruction: Colonic transit time after spinal cord injury: any clinical significance? Abstracts: abstract_id: PUBMED:9881734 The effect of electrical stimulation on colonic transit following spinal cord injury in cats. The effect of direct electrical stimulation on colonic transit and manometric recordings following spinal cord injury was assessed in five adult male cats. Intra-colonic catheters were surgically placed, stimulating electrodes were sutured to the colonic serosa and a laminectomy with spinal cord clamping at a T4 level was done to induce spinal cord injury (SCI). Twenty radiopaque markers were inserted through an intra-colonic catheter located 1 cm distal to the cecum and were monitored with daily fluoroscopy as a measure of colonic transit. Transit measurements were compared before SCI, after SCI and after SCI with electrical stimulation of 40 pps, 1 ms, and 0-50 mA. Colonic transit following SCI was significantly prolonged (P<0.05) when compared to the transit before SCI. Electrical stimulation following SCI improved colonic transit to values not significantly different from those before SCI. Spontaneous colonic phasic motor activity was similar both before and after SCI. Manometric defecation patterns were also observed to be similar before SCI and after SCI with electrical stimulation. Based on our scoring criteria, the most frequent response to electrical stimulation was an abdominal contraction. These findings demonstrate that colonic transit is prolonged following SCI and that direct electrical stimulation of the colon following SCI improves colonic transit in an animal model. abstract_id: PUBMED:7847302 The effect of cisapride on segmental colonic transit time in patients with spinal cord injury. Objectives: The pathophysiological consequences of spinal cord injury (SCI) on function of the colon are complex and poorly understood. Regardless of the mechanism, many patients with SCI have deficient bowel control, which is frustrating and difficult to treat. We designed a study to assess whether a new prokinetic medication, cisapride, might be useful in this setting. Methods: Total and segmental colonic transit times were measured using the radiopaque marker technique in nine subjects with spinal cord injury and seven control subjects after the double-blind administration of cisapride (10 mg q.i.d.) or placebo. Results: In five quadriplegic subjects with prolonged colonic transit time, administration of cisapride was found to reduce left-sided colonic transit time from 24.2 to 13.8 h. In three of these five subjects, cisapride administration resulted in subjective improvement. No effect of cisapride on right-sided, rectosigmoid, or total colonic transit time was observed. Conclusion: The data suggest that cisapride might be a useful adjunctive measure in treating a subset of SCI patients with colonic inertia, but a larger study is needed before this can be routinely recommended. abstract_id: PUBMED:2229888 Colonic transit time and anorectal manometric anomalies in 19 patients with complete transection of the spinal cord. In order to determine the relative importance of sympathetic and parasympathetic centers in the control of colorectal motility, colonic transit and anorectal motility were studied in 19 patients with complete spinal cord transection: group 1 (n = 5) where transection was above T9; group 2 (n = 6) where transection was between T9 and L2; group 3 (n = 8) where transection involved S2-S4. Colonic transit time was calculated by the radiopaque markers.
Methods and results were compared with those of eight controls, all bedridden for non-digestive surgery. Anorectal motility was investigated by anorectal manometry, and results were compared with those of 17 healthy controls. Both mean right and left colonic transit times were not different in the three groups of patients and the bedridden control group. Mean rectosigmoid and total transit time increased in groups 2 and 3. In five patients of group 3, the mean anal canal (lower part) pressure was higher than in controls. Recto-anal inhibitory reflex was present in all patients, but there was no correlation in group 3 between the volume of rectal distension and both amplitude and duration of the recto-anal inhibitory reflex, and in group 1 between rectal distension and the duration of the recto-anal inhibitory reflex. Rectosigmoid transit time increased when sympathetic and parasympathetic spinal centers are injured, suggesting the importance of this extrinsic nervous control; right and left colonic transit are not affected by spinal cord lesion, suggesting that the distal colon, but not the proximal colon, is under spinal cord nervous control. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:9360223 Colonic transit time after spinal cord injury. Colonic transit time (CTT) was measured with abdominal radiographs using Chaussade's technique in 30 spinal cord injured patients (ASIA A and B) following ingestion of 20 radiomarkers per day for three days. A significant increase in total CTT (p = 0.0001) and segmental CTT of the right colon (p = 0.0004) and of the left colon (p = 0.0001) was shown. While using on average only 2.3 films of the abdomen per patient, we obtained results comparable with other radiologic techniques which use radiomarkers to measure CTT. The clinical relevance of these results is not clear and their correlation with intestinal symptoms remains to be investigated. abstract_id: PUBMED:7639233 Delayed colonic transit in spinal cord-injured patients measured by indium-111 Amberlite scintigraphy. Objective: Constipation is a major problem for patients with chronic spinal cord injury (SCI). However, it is not clear whether abnormal colonic transit is restricted to the rectosigmoid region or involves the entire colon. We assessed regional colonic transit with emphasis on the ascending and transverse segments in patients with chronic SCI and compared the results with those of controls using scintigraphic techniques. Methods: Seven patients with SCI below T1 and 10 control subjects were studied after oral ingestion of a capsule containing indium-111-labeled Amberlite (Sigma Chemical, St. Louis, MO) pellets. The capsule was coated with a pH-sensitive polymer that prevents disintegration until it reaches the ileocecal region. Assessments of the half-time of emptying and residence time of contents in ascending and transverse segments were made, as well as an assessment of the velocity of contents throughout the entire colon, including the descending colon. Results: A significantly slower half-time of emptying was found in SCI patients (ascending: 29 +/- 27 hr in SCI, 6.81 +/- 3.03 hr in controls, p < 0.01; ascending + transverse: 42 +/- 12 hr in SCI, 15.3 +/- 7.16 hr in controls, p < 0.01). The residence time of the median position of the contents was significantly prolonged in SCI patients (ascending: 31 +/- 23 hr in SCI, 8.75 +/- 4.68 hr in controls, p < 0.05; transverse: 26 +/- 3 hr in SCI, 5.0 +/- 4.4 hr in controls, p < 0.05).
Overall, the velocity of the median position of contents throughout the entire colon was significantly lower in SCI (0.63 +/- 0.33 cm/hr in SCI, 2.58 +/- 1.20 cm/hr in controls, p < 0.001). Conclusions: Patients with chronic SCI have prolonged colonic transit that involves the entire colon. Hence, treatment of constipation in these patients may need to include prokinetic agents as well as local rectal maneuvers. abstract_id: PUBMED:11093323 Gastrointestinal and segmental colonic transit times in patients with acute and chronic spinal cord lesions. Study Design: Longitudinal study among patients with acute and chronic spinal cord injuries (SCI). Objectives: To compare total gastrointestinal transit times (GITT) and segmental colorectal transit times (CTT) in SCI patients with acute and chronic lesions to those of healthy volunteers. Furthermore, to examine the impact of time elapsed since injury on GITT and CTT, and finally to compare the pattern of colorectal dysfunction in patients with supraconal versus conal/cauda equina lesions. Setting: Surgical Research Unit and Department of Neurosurgery, University Hospital of Aarhus, Denmark. Methods: Patients took 10 radioopaque markers on six consecutive days and an abdominal X-ray was taken on day 7. GITT and CTTs were computed from the number of markers in the entire colorectum and in each colorectal segment respectively. We studied 26 patients with acute spinal cord lesions (15 supraconal and 11 conal/cauda equina lesions; time since injury=11 - 24 days) and 18 patients were available for follow-up 6 - 14 months later. Results were compared to 24 healthy volunteers. Results: In patients with acute supraconal or conal/cauda equina lesions GITT and CTTs of the ascending, transverse, and descending colon were significantly prolonged, but rectosigmoid transit time was only significantly prolonged in patients with conal/cauda equina lesions. In patients with chronic supraconal lesions GITT and CTTs of the transverse colon and the descending colon were significantly prolonged. In patients with chronic conal/cauda equina lesions GITT and CTT of the transverse, the descending colon and the rectosigmoid were significantly prolonged. Thus, supraconal SCI resulted in generalized colonic dysfunction whereas chronic conal/cauda equina lesions resulted in severe rectosigmoid dysfunction. Conclusion: SCI results in severely prolonged colonic transit times both in the acute and chronic phase. However, the type of colorectal dysfunction depends on the level of SCI. abstract_id: PUBMED:16751583 A comparison of methods of assessment of scintigraphic colon transit. Unlabelled: There is no standard method of analysis of scintigraphic colonic transit investigation. This study was designed to compare 4 techniques. Methods: Sixteen subjects (median age, 37.5 y; range, 21-61 y), who had sustained a spinal cord injury more than a year before the study, were given a pancake labeled with 10-18 MBq of (111)In bound to resin beads to eat. Anterior and posterior images were acquired with a gamma-camera 3 h after the meal and then 3 times a day for the next 4 d. Seven regions of interest, outlining the ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, rectosigmoid, and total abdominal activity at each time point, were drawn on the anterior and posterior images. The counts were decay corrected and the geometric mean (GM), for each region, at each time point calculated. 
The GM was used to calculate the percentage of the initial total abdominal activity in each region, at each time point. Colonic transit was assessed in 4 ways: (a) Three independent nuclear medicine physicians visually assessed transit on the analog images and classified subjects into 5 categories of colonic transit (rapid, intermediate, generalized delay, right-sided delay, or left-sided delay). (b) Parametric images were constructed from the percentage activity in each region at each time point. (c) The arrival and clearance times of the activity in the right and left colon were plotted as time-activity curves. (d) The geometric center of the distribution of the activity was calculated and plotted on a graph versus time. The results of these 4 methods were compared using an agreement matrix. Results: Though simple to perform, the visual assessment was unreliable. The best agreement occurred between the parametric images and the arrival and clearance times of the activity in the right and left colon. Conclusion: The different methods of assessment do not produce uniform results. The best option for evaluating colonic transit appears to be a combination of the analog images, which provide a general overview of colonic transit and a quantitative method that demonstrates segmental transit. abstract_id: PUBMED:12214902 Colonic transit time after spinal cord injury: any clinical significance? Background: Both bowel dysfunction and increases in colonic transit time (CTT) are frequently observed in individuals with spinal cord injury; however, it is unknown whether there is an association between chronic intestinal problems and changes in CTTs. The current study investigates a possible relationship between the main intestinal symptoms of SCI patients and CTT values. Methods: The following clinical variables and symptoms were investigated and collected in 30 individuals with SCI: total time for bowel care, abdominal pain, abdominal gas, success of rectal emptying, fecal incontinence, and decrease in quality of life. Total and segmental CTTs (right colon, left colon, and rectosigmoid colon) were assessed using radiopaque markers. The effects of the sociodemographic variables and the clinical symptoms on the different CTTs (total and segmental) were analyzed. Results: The assessed clinical conditions were observed in the following percentages of subjects: abdominal gas symptoms (70%), fecal incontinence (56%), abdominal pain (63%), total time for bowel care > 1 hour (11%), difficult rectal emptying (66%), and decrease in quality of life (36%). We also observed an increase in total CTT in 47% of subjects; increases in segmental CTT were found in the right colon in 23%, in the left colon (60%), and in the rectosigmoid segment (23%). Statistical analyses failed to show a significant difference in mean CTT values between the group of symptomatic patients (1 or more symptoms) and the group of asymptomatic patients. No significant difference could be detected in the incidence of each intestinal symptom between the group of participants with normal CTT values and those with abnormal CTT values. For each of the clinical data assessed separately, a significantly longer CTT (left colon) was associated with the lack of abdominal pain (P < .03) and the presence of fecal incontinence (P < .01); successful rectal emptying was associated with significantly shorter total (P < .02) and segmental CTTs for the left colon (P < .01) and rectosigmoid colon (P < .05). 
Conclusions: Besides an association between shorter CTT and successful rectal emptying, there was little relationship between CTTs and intestinal symptoms in this study of patients with SCI. abstract_id: PUBMED:2312245 Colonic transit in spinal cord-injured patients. This report describes a study of transit time through the colon, as well as the caliber of the colon and distal small bowel, in 28 spinal cord-injured patients to see if there is a correlation between those findings and difficulty with bowel care and symptoms. In 15 of these 28 patients anorectal dynamic studies were also done. Each patient ingested 20 radiopaque markers, after which colon transit times were measured by multiple abdominal radiographs. The width of the ileum was measured and correlated with symptoms. The findings indicate that transit time was delayed in the left and rectosigmoid colon; four patients had a large and highly compliant rectum, whereas six had anorectal dyssynergia. All ten had difficulty with bowel care. The distal small bowel was dilated in ten patients, all of whom had symptoms and nine of whom had spinal cord lesions superior to T5. abstract_id: PUBMED:21102573 Do gastrointestinal transit times and colonic dimensions change with time since spinal cord injury? Study Design: Long-term follow-up study. Objectives: To determine whether gastrointestinal transit times (GITTs) and colonic dimensions change during the first or subsequent decades after spinal cord injury (SCI). Setting: Aarhus University Hospital, Denmark. Methods: GITT and colonic dimensions were evaluated by means of radio-opaque markers. Group A (n=12) was investigated 1 year after SCI and again 13 (range 11-14) years later. Group B (n=10) was studied 19 (range 9-36) years after injury and again 12 (range 11-12) years later. All had been treated with conservative bowel management. Results: In group A, the median GITT 1 year after injury was 4.3 (range 1.1-6.5) days and 13 years later, it was 3.2 (range 1.3-6.5) days, P=0.96. In group B, the median GITT 19 year after injury was 3.4 (range 0.6-5.9) days and 12 years later, it was 3.2 (range 1.9-5.5) days, P=0.77. None of the two groups experienced a significant change in the diameter of the caecum/ascending colon, transverse colon, descending colon or the sigmoid during long-term follow-up. Megacolon was present in four patients at baseline and in five at follow-up. Conclusion: GITTs and colonic dimensions did not change, neither during the first decade nor long after SCI. Answer: The clinical significance of colonic transit time (CTT) after spinal cord injury (SCI) is a topic of interest due to the frequent observation of bowel dysfunction and increased CTT in individuals with SCI. Studies have shown that SCI can result in prolonged CTT, which can be associated with various symptoms such as constipation, abdominal pain, and fecal incontinence, impacting the quality of life of affected individuals (PUBMED:11093323; PUBMED:16751583; PUBMED:12214902). One study found that electrical stimulation of the colon following SCI improved colonic transit in an animal model, suggesting potential therapeutic interventions to address prolonged CTT (PUBMED:9881734). Another study indicated that the prokinetic medication cisapride reduced left-sided colonic transit time in some quadriplegic subjects with SCI, leading to subjective improvement (PUBMED:7847302). 
Additionally, colonic transit and anorectal motility studies in patients with complete spinal cord transection revealed that rectosigmoid transit time increased when sympathetic and parasympathetic spinal centers are injured, highlighting the importance of extrinsic nervous control (PUBMED:2229888). Furthermore, a longitudinal study among patients with acute and chronic SCI found that SCI results in severely prolonged CTT both in the acute and chronic phase, with the type of colorectal dysfunction depending on the level of SCI (PUBMED:11093323). However, another study failed to show a significant difference in mean CTT values between symptomatic and asymptomatic patients, suggesting that the relationship between CTT and intestinal symptoms may not be straightforward (PUBMED:12214902). In summary, while prolonged CTT after SCI is well-documented and can be associated with bowel dysfunction, the clinical significance and direct relationship between CTT and specific intestinal symptoms can vary. Some studies suggest that interventions targeting CTT may improve symptoms, but more research is needed to fully understand the clinical implications and to develop effective treatments for bowel dysfunction in SCI patients.
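The marker and scintigraphic protocols cited in this record describe how transit is measured but not the arithmetic behind the reported hours. As a hedged illustration, the sketch below uses the conventional radiopaque-marker estimate (each retained marker counts as 24 h divided by the daily marker dose, applicable to protocols such as 20 markers/day for three days in PUBMED:9360223 or 10 markers/day for six days in PUBMED:11093323) together with the usual geometric-mean and geometric-centre calculations for anterior/posterior scintigraphic counts (PUBMED:16751583). The formulas are standard conventions rather than details taken from these abstracts, and all numbers and names are illustrative.

```python
import math

def transit_time_hours(markers_retained: int, markers_per_day: int) -> float:
    """Conventional marker estimate: each marker retained on the film
    represents 24 h / (markers ingested per day) of transit."""
    return markers_retained * 24.0 / markers_per_day

def geometric_mean_counts(anterior: float, posterior: float) -> float:
    """Depth-corrected activity from paired anterior/posterior images."""
    return math.sqrt(anterior * posterior)

def geometric_centre(region_fractions: list[float]) -> float:
    """Weighted mean region index (1 = ascending colon ... n = rectosigmoid),
    each region weighted by its fraction of total abdominal activity."""
    total = sum(region_fractions)
    return sum(i * f for i, f in enumerate(region_fractions, start=1)) / total

# Example: 35 markers on the day-7 film of a 10-markers/day protocol
# corresponds to roughly 84 h of total colorectal transit.
print(transit_time_hours(35, 10))                               # 84.0
# Example: activity still concentrated in the ascending colon gives a
# low geometric centre, consistent with right-sided delay.
print(round(geometric_centre([0.6, 0.2, 0.1, 0.05, 0.05]), 2))  # 1.75
```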
Instruction: Can excess maternal mortality among women of foreign nationality be explained by suboptimal obstetric care? Abstracts: abstract_id: PUBMED:18823490 Can excess maternal mortality among women of foreign nationality be explained by suboptimal obstetric care? Objectives: To test the hypothesis that the risk of postpartum maternal death in France remains significantly higher for women of foreign nationality after individual characteristics are taken into account and to examine whether the quality of care received by the women who died differs according to nationality. Design: A national case-control study. Setting: Metropolitan France. Population: A total of 267 women who died of maternal death from 1996 to 2001 as cases and a representative sample (n = 13 186) of women who gave birth in 1998 as controls. Methods: Crude and adjusted odds ratios were calculated with multivariate logistic regression, and the quality of care for cases was compared according to nationality with chi-square tests or Fisher's exact tests. Main Outcome Measures: Odds ratio for postpartum maternal death associated with nationality and quality of care. Results: After taking individual characteristics into account, the risk of postpartum maternal death was twice as high for foreign women. The odds ratio was 5.5 (95% CI: 3.3-9.0) for women from sub-Saharan Africa and 3.3 (95% CI: 1.7-6.5) for those from Asia, North and South America. There was no significant excess risk of postpartum maternal death for the other European and North African women. The risk of dying from hypertensive disorder or infection was four times higher for foreign women. Among women who died, care was more often considered not optimal for foreign women (78% versus 57%). Conclusions: The excess risk of postpartum maternal death persisted for foreign women after individual characteristics were taken into account and was especially important for some nationalities and for some causes of death, primarily hypertensive disorders. These results point to an immediate need to pay special attention to early enrollment in prenatal care, screening and prenatal management of hypertension, especially in women of sub-Saharan African nationality. abstract_id: PUBMED:24725307 Suboptimal care and maternal mortality among foreign-born women in Sweden: maternal death audit with application of the 'migration three delays' model. Background: Several European countries report differences in risk of maternal mortality between immigrants from low- and middle-income countries and host country women. The present study identified suboptimal factors related to care-seeking, accessibility, and quality of care for maternal deaths that occurred in Sweden from 1988-2010. Methods: A subset of maternal death records (n = 75) among foreign-born women from low- and middle-income countries and Swedish-born women were audited using structured implicit review. One case of foreign-born maternal death was matched with two native-born Swedish cases of maternal death. An assessment protocol was developed that applied both the 'migration three delays' framework and a modified version of the Confidential Enquiry from the United Kingdom. The main outcomes were major and minor suboptimal factors associated with maternal death in this high-income, low-maternal mortality context. Results: Major and minor suboptimal factors were associated with a majority of maternal deaths and significantly more often to foreign-born women (p = 0.01).
The main delays to care-seeking were non-compliance among foreign-born women and communication barriers, such as incongruent language and suboptimal interpreter system or usage. Inadequate care occurred more often among the foreign-born (p = 0.04), whereas delays in consultation/referral and miscommunication between health care providers were equally common between the two groups. Conclusions: Suboptimal care factors, major and minor, were present in more than 2/3 of maternal deaths in this high-income setting. Those related to migration were associated with miscommunication, lack of professional interpreters, and limited knowledge about rare diseases and pregnancy complications. Increased insight into a migration perspective is advocated for maternity clinicians who provide care to foreign-born women. abstract_id: PUBMED:10597992 Trends in maternal mortality ratio among women of German and non-German nationality in West Germany, 1980-1996. Background: Maternal mortality is a sensitive indicator for inequity in health. We describe recent trends in overall and cause-specific maternal mortality ratio among women of German and non-German nationality residing in West Germany. Methods: Using birth and death register data for 1980-1996 we related 1067 cases of maternal death (ICD 9: 630-676) to 11.2 million live births. We assessed the effects of nationality and of marital status, a proxy for socioeconomic status, controlling for year of death and age of the mother in a Poisson regression model. Results: Maternal mortality ratio in West Germany decreased from 13 per 100000 live births in 1980-1988 to 6.1 in 1989-1996. The crude relative risk for non-German nationality decreased from 1.9 (95% CI: 1.6-2.3) to 1.3 (1.0-1.7); after adjusting for age, year of death and marital status it was 1.7 (95% CI: 1.4-2.1) and 1.6 (95% CI: 1.2-2.1). Unmarried women incurred an adjusted relative risk of 1.8 (95% CI: 1.5-2.3). Non-German women experienced an excess mortality from abortions which largely disappeared in 1989-1996; concurrently, being unmarried no longer conveyed an additional risk to them. The risk status of German mothers developed unfavourably: increasing proportions are unmarried, which continues to be a marker of elevated relative risk in this group. Conclusions: Our findings suggest continuously improving accessibility and quality of obstetric services, in particular for women of non-German nationality. Still, inequity in maternal risk continues to exist. Maternal risk, however, is not determined by the simple distinction 'German' versus 'non-German'; its association with socioeconomic status extends beyond nationality. abstract_id: PUBMED:34185810 Causes contributing to the excess maternal mortality risk for women 35 and over, United States, 2016-2017. To better understand age-related disparities in US maternal mortality, we analyzed 2016-2017 vital statistics mortality data with cause-of-death literal text (actual words written on the death certificate) added. We created a subset of confirmed maternal deaths which had pregnancy mentions in the cause-of-death literals. Primary cause of death was identified and recoded using cause-of-death literals. Age-related disparities were examined both overall and by primary cause. Compared to women <35, the 2016-2017 US maternal mortality rate was twice as high for women aged 35-39, four times higher for women aged 40-44, and 11 times higher for women aged 45-54 years.
Obstetric hemorrhage was the leading cause of death for women aged 35+ with rates 4 times higher than for women <35, followed by postpartum cardiomyopathy with a 3-fold greater risk. Obstetric embolism, eclampsia/preeclampsia, and Other complications of obstetric surgery and procedures each had a two-fold greater risk of death for women aged 35+. Together these 5 causes of death accounted for 70.9% of the elevated maternal mortality risk for women aged 35+. The excess maternal mortality risk for women aged 35+ was focused among a few causes of death and much of this excess mortality is preventable. Early detection and treatment, as well as continued care during the postpartum year, are critical to preventing these deaths. The Alliance for Innovation on Maternal Health has promulgated patient safety bundles with specific interventions that health care systems can adopt in an effort to prevent these deaths. abstract_id: PUBMED:24035736 Maternal mortality in France, 2007-2009. To monitor maternal mortality, which is an indicator of the quality of obstetric and intensive care, France has had a specific approach since 1996. Recently, linkages have been introduced to improve the inclusion of cases. Here are the results for the 2007 to 2009 period. The identification of pregnancy-associated deaths relies on different databases: medical causes of death, the birth register and hospital discharges. To document the cases, confidential enquiries are conducted by two assessors in the field; a committee of medical experts analyses the documents, selects the underlying cause and assesses the quality of health care. Two hundred and fifty-four obstetric deaths were identified from 2007 to 2009, giving a maternal mortality ratio (MMR) of 10.3 per 100,000 births. Maternal age and nationality, and the region of death, are associated with the MMR. Haemorrhages are the leading cause, but their ratio is 1.9 versus 2.5 previously; this decrease results from a fall in postpartum haemorrhage due to uterine atony. Suboptimal care is still frequent (60%) but slightly less so than before. The linkage method should be pursued. Maternal mortality is rather stable in France. Further reduction may be achievable, since deaths due to atony have decreased, as has suboptimal care. abstract_id: PUBMED:14723755 Maternal mortality as an indicator of obstetric care in Europe. Objective: This analysis considers the usefulness of maternal mortality ratio (MMR) as an indicator of obstetric care in the context of low overall maternal mortality. We explore whether variation in the level of MMR among European countries reflects differences in obstetric care. Design: The data presented in this article were collected as part of the European Concerted Action on Mothers' Mortality and Severe morbidity (MOMS). In this study, a panel of experts followed a protocol to determine cause of death and whether it was pregnancy-related. This analysis uses the expert panel's confirmation of cause of death and obstetric attribution. Setting: All maternal deaths within 11 European countries. Population: Two hundred and ninety obstetric deaths occurring between 1992 and 1995. Methods: We present the results of a multivariable analysis that controls for cause of death, moment of death, place of death, pregnancy outcome, women's age and nationality.
Main Outcome Measures: We test the hypothesis that countries with higher MMR would have proportionally more cases of direct obstetric death due to thromboembolism, hypertension, haemorrhage or infection compared with other countries in the study. We examine timing of death and maternal age to measure whether there are differences between country groups for older mothers. Results: We find distinct patterns in cause and timing of death and age-specific mortality ratios between countries with different levels of MMR. Conclusions: Despite low rates of maternal mortality in Europe, between-country differences follow patterns with respect to cause and timing of death and maternal age. In addition to representing an important indicator of health status in a country, differences in MMR among European countries provide insight into where obstetric care plays a role in maternal deaths. abstract_id: PUBMED:38205108 Prediction of Maternal Morbidity and Mortality Risk Among Women with Medical Disorders Presenting to Emergency Obstetric care (EMOC): A Prospective Observational Study. Introduction: Medical disorders complicating pregnancy have recently emerged as the most common cause of maternal morbidity and mortality, and it is important to predict mortality risk when such patients present in a moribund state to emergency obstetric care, so that timely and effective measures can be taken to prevent mortality. Methods: This prospective observational study was conducted over 6 months among pregnant and post-partum women with medical disorders who sought emergency obstetric care at a tertiary care hospital. Severity of morbidity was assessed using SOFA and APACHE II scores at admission. Results: Of the 128 women, 87.5% were pregnant, and 12.5% were post-partum. Hypertensive disorders, cardiac disorders, neurological disorders and infective disorders were 24.2%, 22.6%, 14% and 9.4%, respectively. The optimal cut-off SOFA score was 2 (AUC = 0.739) with 66% sensitivity and 71% specificity, and the APACHE II score cut-off was 6 (AUC = 0.732) with a sensitivity of 60% and specificity of 78% in predicting severe maternal morbidity. The median APACHE II and SOFA scores were 14 and 4, respectively, for non-survivors, and 4 and 1 for survivors. Conclusion: Hypertensive disorder was the most common medical disorder, but severity was high in cardiac disorders. SOFA and APACHE II scores are good predictors of morbidity and mortality risk.
Results: The results show that an optimal organization of obstetric care services based on eight explicit criteria reduced in-hospital maternal mortality by 41% compared with women delivering in a referral hospital with sub-optimal organization defined as non-compliance with at least one of the eight criteria (ORa=0.59; 95% CI=0.34-0.92). Furthermore, local policies that improved financial access to emergency obstetric care had a significant impact on maternal outcome. Conclusion: Criteria for optimal organization include the management of labor and childbirth by qualified personnel, an organization of human resources that allows timely management of obstetric emergencies, routine use of partography for all patients and availability of guidelines for the management of complications. These conditions could be easily implemented in the context of Mali to reduce in-hospital maternal mortality. abstract_id: PUBMED:17052282 Mexican women in California: differentials in maternal morbidity between foreign and US-born populations. In the US, the majority of deaths and serious complications of pregnancy occur during childbirth and are largely preventable. We conducted a population-based study to assess disparities in maternal health between Mexican-born and Mexican-American women residing in California and to evaluate the extent to which immigrants have better outcomes. Mothers in these two populations deliver 40% of infants in the state. We compared maternal mortality ratios and maternal morbidities during labour and delivery in the two populations using linked 1996-98 hospital discharge and birth certificate data files. For maternal morbidities, we calculated frequencies and observed and adjusted odds (OR) ratios using pre-existing maternal health, sociodemographic characteristics and quality of health care as covariates. Approximately 19% of Mexican-born women suffered a maternal disorder compared with 21% of Mexican-American women (Observed OR = 0.89, [95% CI 0.88, 0.90]). Despite their lower education and relative poverty, Mexican-born women still experienced a lower odds of any maternal morbidity than Mexican-American women, after adjusting for covariates (OR = 0.92, [95% CI 0.90, 0.93]). These findings suggest a paradox of more favourable outcomes among Mexican immigrants similar to that found with birth outcomes. Nevertheless, the positive aggregate outcome of Mexican-born women did not extend to maternal mortality, nor to certain conditions associated with suboptimal intrapartum obstetric care. abstract_id: PUBMED:30646778 A review of maternal mortality and quality of care in the USA. Objective: The purpose of this study was to review studies and reports examining maternal mortality and quality of maternal health care in the USA, which has the highest maternal mortality rate among its peers.Methods: Electronic search of current literature on maternal mortality and quality of care in the USA and Europe was conducted. Findings were summarized according to the Donabedian's structure-process-outcomes conceptual model.Results: Standards and protocols, effective communication and hospitalist care indicated positive maternal outcomes, including a reduction in maternal mortality. However, lack of coordination of care among providers for pregnant women with chronic disease, fragmentation, or substandard of care and late prenatal care initiation are among the domain of processes of care that were noted to negatively influence maternal health outcomes. 
Further, the absence of a national forum committee to gather and systematically use research findings and data to guide change constitutes a serious obstacle to improving quality of care in the obstetric field. Conclusions: Providing good quality of care and eliminating health disparities in obstetrics and gynecology are important elements in preventing maternal deaths. Future research regarding patient-centered care and health disparities in maternal health will provide guidance to policymakers in our efforts to reduce maternal mortality. Answer: Yes, excess maternal mortality among women of foreign nationality can be explained by suboptimal obstetric care. Studies have shown that foreign-born women, particularly those from low- and middle-income countries, experience higher rates of maternal mortality compared to host country women, and suboptimal factors related to care-seeking, accessibility, and quality of care have been identified as significant contributors to these outcomes. In France, after accounting for individual characteristics, the risk of postpartum maternal death was found to be twice as high for foreign women, with particularly high odds ratios for women from sub-Saharan Africa and Asia, North, and South America. Among women who died, care was more often considered not optimal for foreign women (78% versus 57%) (PUBMED:18823490).
Instruction: Do anatomic factors pose a significant risk in the formation of lower pole stones? Abstracts: abstract_id: PUBMED:34253388 Comprehensive immunomolecular profiling of endometrial carcinoma: A tertiary retrospective study. Objective: Combined immunohistochemical and molecular classification using the Proactive Molecular Risk Classifier for Endometrial Cancer (ProMisE) independently predicts prognosis in endometrial carcinoma (EC). As next-generation sequencing (NGS) is entering clinical practice, we evaluated whether more comprehensive immunomolecular profiling (CIMP), including NGS and extended immunohistochemical analysis, could further refine the current ProMisE classification. Methods: A series of 120 consecutive ECs, classified according to ProMisE, was stained immunohistochemically for CD3, CD8, PD-L1, beta-catenin and L1CAM. An in-house 96 gene NGS panel was performed on a subset of 44 ECs, representing the 4 ProMisE subgroups (DNA polymerase epsilon catalytic subunit exonuclease domain mutated (POLEmut), mismatch repair deficient (MMRd), p53 abnormal (p53 abn) and no specific molecular profile (NSMP) ECs). Cases harboring non-hotspot POLE variants were analyzed with Illumina TruSight Oncology 500 NGS panel (TSO500) as a surrogate for whole-exome sequencing. Results: Eight cases harbored POLE variants, half of which were hotspots. Using TSO500, non-hotspot POLE variants were classified as pathogenic (3) or variant of unknown significance (1). POLEmut and MMRd ECs typically showed higher numbers of CD3+/CD8+ tumor-infiltrating lymphocytes and higher PD-L1 expression in tumor-infiltrating immune cells. p53 abn ECs showed significantly higher L1CAM immunoreactivity and frequently harbored gene amplifications including HER2 (25%), but typically lacked ARID1A or PTEN variants. Beta-catenin-positivity and FGFR2 variants were predominantly found in NSMP ECs. Conclusions: Our data show that CIMP adds significant value to EC characterization and may help to determine pathogenicity of non-hotspot POLE variants, encountered more frequently than expected in our series. In addition, CIMP may reveal ECs benefitting from immune checkpoint inhibition and allows upfront identification of targetable alterations, such as HER2 amplification in p53 abn ECs. abstract_id: PUBMED:10215605 Structural organization and splice variants of the POLE1 gene encoding the catalytic subunit of human DNA polymerase epsilon. The catalytic subunit of human DNA polymerase epsilon, an enzyme involved in nuclear DNA replication and repair, is encoded by the POLE1 gene. This gene is composed of 51 exons spanning at least 97 kb of genomic DNA. It was found to encode three alternative mRNA splice variants that differ in their 5'-terminal sequences and in the N-termini of the predicted proteins. A CpG island covers the promoter region for the major transcript in HeLa cells. This promoter is TATA-less and contains several putative binding sites for transcription factors typical of S-phase-up-regulated and serum-responsive promoters. Potential promoter regions were also identified for the two other alternative transcripts. Interestingly, no nuclear polyadenylation signal sequence was detected in the 3'-untranslated region, although a poly(A) tail was present. These results suggest a complicated regulatory machinery for the expression of the human POLE1 gene, including three alternative transcripts expressed from three promoters. 
abstract_id: PUBMED:25860647 A novel POLE mutation associated with cancers of colon, pancreas, ovaries and small intestine. In some families there is an increased risk for colorectal cancer, caused by heritable, but often unidentified genetic mutations predisposing to the disease. We have identified the likely genetic cause for disease predisposition in a large family with high burden of colorectal adenomas and carcinomas, in addition to extra-colonic cancers. This family had previously been tested for known cancer susceptibility genes, with negative results. Exome sequencing was used to identify a novel mutation, c.1373A>T (p.Tyr458Phe), in the gene for DNA polymerase epsilon catalytic subunit (POLE). This mutation is located in the active site of the exonuclease domain of the enzyme, and affects a residue that has previously been shown to be important for exonuclease activity. The first predisposing mutation identified in POLE (c.1270C>G, p.Leu424Val) was associated with colorectal cancer only, but another mutation with a broader tumour spectrum (c.1089C>A, p.Asn363Lys) has recently been reported. In the family described in the present study, carriers generally have multiple colorectal adenomas and cancer of colon, pancreas, ovaries and small intestine which represents an important broadening of the tumour spectrum of POLE mutation carriers. We also observe a large phenotypic variation among the POLE mutation carriers in this family, most likely explained by modifying variants in other genes. One POLE mutation carrier has a novel variant in EXO1 (c.458C>T, p.Ala153Val), which may contribute to a more severe phenotype. The findings in this study will have important implications for risk assessment and surveillance of POLE mutation carriers. abstract_id: PUBMED:24036207 Assessing gene network stability and individual variability in the fathead minnow (Pimephales promelas) transcriptome. Transcriptomics is increasingly used to assess biological responses to environmental stimuli and stressors such as aquatic pollutants. However, fundamental studies characterizing individual variability in mRNA levels are lacking, which currently limits the use of transcriptomics in environmental monitoring assessments. To address individual variability in transcript abundance, we performed a meta-analysis on 231 microarrays that were conducted in the fathead minnow (FHM), a widely used toxicological model. The mean variability for gene probes was ranked from most to least variable based upon the coefficient of variation. Transcripts that were the most variable in individual tissues included NADH dehydrogenase flavoprotein 1, GTPase IMAP family member 7-like and v-set domain-containing T-cell activation inhibitor 1-like while genes encoding ribosomal proteins (rpl24 and rpl36), basic transcription factor 3, and nascent polypeptide-associated complex alpha subunit were the least variable in individuals across a range of microarray experiments. Gene networks that showed high variability (based upon the variation in expression of individual members within the network) included cell proliferation, metabolism (steroid, lipids, and glucose), cell adhesion, vascularization, and regeneration while those that showed low variability (more stability) included mRNA and rRNA processing, regulation of translational fidelity, RNA splicing, and ribosome biogenesis. Real-time PCR was conducted on a subset of genes for comparison of variability collected from the microarrays. 
There was a significant positive relationship between the two methods when measuring individual variability, suggesting that variability detected in microarray data can be used to guide decisions on sample sizes for measuring transcripts in real-time PCR experiments. A power analysis revealed that measuring estrogen receptor ba (esrba) requires fewer biological replicates than estrogen receptor bb (esrbb) in the gonad, and sample sizes required to detect a 50% change for reproductive-related transcripts are between 12 and 20. Characterizing individual variability at the molecular level will prove necessary as efforts are made toward integrating molecular tools into environmental risk assessments. abstract_id: PUBMED:33152268 Hyperactive CDK2 Activity in Basal-like Breast Cancer Imposes a Genome Integrity Liability that Can Be Exploited by Targeting DNA Polymerase ε. Knowledge of fundamental differences between breast cancer subtypes has driven therapeutic advances; however, basal-like breast cancer (BLBC) remains clinically intractable. Because BLBC exhibits alterations in DNA repair enzymes and cell-cycle checkpoints, elucidation of factors enabling the genomic instability present in this subtype has the potential to reveal novel anti-cancer strategies. Here, we demonstrate that BLBC is especially sensitive to suppression of iron-sulfur cluster (ISC) biosynthesis and identify DNA polymerase epsilon (POLE) as an ISC-containing protein that underlies this phenotype. In BLBC cells, POLE suppression leads to replication fork stalling, DNA damage, and a senescence-like state or cell death. In contrast, luminal breast cancer and non-transformed mammary cells maintain viability upon POLE suppression but become dependent upon an ATR/CHK1/CDC25A/CDK2 DNA damage response axis. We find that CDK1/2 targets exhibit hyperphosphorylation selectively in BLBC tumors, indicating that CDK2 hyperactivity is a genome integrity vulnerability exploitable by targeting POLE.
In some situations, milder or nonclassic forms of these conditions can first present in adulthood and may have been labelled "Addison's disease." abstract_id: PUBMED:27308732 Endometrial cancer: Not your grandmother's cancer. Worldwide, the incidence of endometrial carcinoma (EC) is rapidly increasing, and the highest disease burden is reported in North America and Western Europe. Although the prognosis remains good for patients who are diagnosed with early-stage EC, for those with recurrent or metastatic disease the options are few, and the median overall survival is short. It is imperative to gain a greater understanding of all aspects of EC, limit its effect on scarce health care resources and, more importantly, prevent this cancer from significantly impacting future generations of women. An exciting new era of endometrial cancer research and clinical management has begun that incorporates biologically and clinically relevant genomic and clinicopathologic parameters. Continued collaborative research efforts and funding are essential if we are to advance our understanding of this disease and improve clinical outcomes. abstract_id: PUBMED:27235625 Human DNA polymerase ε is phosphorylated at serine-1940 after DNA damage and interacts with the iron-sulfur complex chaperones CIAO1 and MMS19. We describe a dynamic phosphorylation on serine-1940 of the catalytic subunit of human Pol ε, POLE1, following DNA damage. We also describe novel interactions between POLE1 and the iron-sulfur cluster assembly complex CIA proteins CIAO1 and MMS19. We show that serine-1940 is essential for the interaction between POLE1 and MMS19, but not POLE1 and CIAO1. No defect in either proliferation or survival was identified when POLE1 serine-1940 was mutated to alanine in human cells, even following treatment with DNA damaging agents. We conclude that serine-1940 phosphorylation and the interaction between serine-1940 and MMS19 are not essential functions in the C terminal domain of the catalytic subunit of DNA polymerase ε. abstract_id: PUBMED:24472300 Frequent POLE1 p.S297F mutation in Chinese patients with ovarian endometrioid carcinoma. The catalytic subunit of DNA polymerase epsilon (POLE1) functions primarily in nuclear DNA replication and repair. Recently, POLE1 mutations were detected frequently in colorectal and endometrial carcinomas and with lower frequency in several other types of cancer, and the p.P286R and p.V411L mutations were the potential mutation hotspots in human cancers. Nevertheless, the mutation frequency of POLE1 in ovarian cancer still remains largely unknown. Here, we screened a total of 251 Chinese samples with distinct subtypes of ovarian carcinoma for the presence of POLE1 hotspot mutations by direct sequencing. A heterozygous somatic POLE1 mutation, p.S297F (c.890C>T), but not the p.P286R and p.V411L hotspot mutations observed in other cancer types, was identified in 3 out of 37 (8.1%) patients with ovarian endometrioid carcinoma; the affected residue is evolutionarily highly conserved from Homo sapiens to Schizosaccharomyces. Of note, the POLE1 mutation coexisted with a mutation in the ovarian cancer-associated PPP2R1A (protein phosphatase 2, regulatory subunit A, α) gene in a 46-year-old patient, who was also diagnosed with ectopic endometriosis in the benign ovary.
In addition, a 45-year-old POLE1-mutated ovarian endometrioid carcinoma patient was also diagnosed with uterine leiomyoma, while the remaining 52-year-old POLE1-mutated patient showed no additional distinctive clinical manifestation. In contrast to the high frequency of POLE1 mutations in ovarian endometrioid carcinoma, no POLE1 mutations were identified in patients with other subtypes of ovarian carcinoma. Our results showed for the first time that the POLE1 p.S297F mutation, but not the p.P286R and p.V411L hotspot mutations observed in other cancer types, was frequent in Chinese ovarian endometrioid carcinoma, but absent in other subtypes of ovarian carcinoma. These results imply that the POLE1 p.S297F mutation might be actively involved in the pathogenesis of ovarian endometrioid carcinoma, but might not be actively involved in other subtypes of ovarian carcinoma. abstract_id: PUBMED:25948378 A patient with polymerase E1 deficiency (POLE1): clinical features and overlap with DNA breakage/instability syndromes. Background: Chromosome instability syndromes are a group of inherited conditions associated with chromosomal instability and breakage, often leading to immunodeficiency, growth retardation and increased risk of malignancy. Case Presentation: We performed exome sequencing on a girl with a suspected chromosome instability syndrome that manifested as growth retardation, microcephaly, developmental delay, dysmorphic features, poikiloderma, immune deficiency with pancytopenia, and myelodysplasia. She was homozygous for a previously reported splice variant, c.4444 + 3A > G in the POLE1 gene, which encodes the catalytic subunit of DNA polymerase E. Conclusion: This is the second family with POLE1 deficiency, with the affected individual demonstrating a more severe phenotype than previously described. Answer: The provided abstracts do not contain information regarding anatomic factors and their relationship to the formation of lower pole kidney stones. The abstracts focus on various topics related to the POLE1 gene and its mutations, which are associated with different types of cancers, including endometrial carcinoma, colorectal cancer, and ovarian endometrioid carcinoma, as well as conditions like primary adrenal insufficiency and chromosome instability syndromes. They discuss the implications of POLE1 mutations for cancer risk assessment, surveillance, and treatment, as well as the molecular mechanisms and clinical features associated with POLE1 deficiency. However, none of the abstracts address the issue of anatomic factors in the formation of lower pole stones, which typically refers to kidney stones located in the lower pole (lower calyceal group) of the kidney. Therefore, based on the provided abstracts, it is not possible to answer the question about anatomic factors and lower pole stones.
Instruction: Is the pancreas affected in patients with septic shock? Abstracts: abstract_id: PUBMED:21459727 Is the pancreas affected in patients with septic shock?--a prospective study. Background: Hyperamylasemia can be observed anecdotally during the course of severe sepsis or septic shock. This study aimed to investigate the possibility of pancreatic involvement in patients with septic shock using serum pancreatic enzyme determinations and imaging techniques in 21 consecutive patients with septic shock and 21 healthy subjects as controls. Methods: The serum activity of pancreatic amylase and lipase was assayed initially in all subjects and 24 and 48 hours after the initial observation in the 21 patients with septic shock. All patients also underwent radiological examination to detect pancreatic abnormalities. Results: The serum activity of pancreatic amylase was significantly higher in the 21 patients with septic shock than in the 21 control subjects during the study period, while the serum activity of lipase was similar to that of the control subjects. Amylase and lipase serum activity did not change significantly throughout the study period in the 21 patients with septic shock. None of the patients with pancreatic hyperenzymemia had clinical signs or morphological alterations compatible with acute pancreatitis. Conclusion: The presence of pancreatic hyperenzymemia in septic shock patients is not a biochemical manifestation of acute pancreatic damage, and the management of these patients should be dependent on the clinical situation and not merely the biochemical results. abstract_id: PUBMED:19958338 Extreme subcutaneous, intramuscular and inhaled insulin resistance treated by pancreas transplantation alone. Diabetes mellitus with resistance to insulin administered subcutaneously or intramuscularly (DRIASM) is a rare syndrome and is usually treated with continuous intravenous insulin infusion. We present here two cases of DRIASM in 16- and 18-year-old female patients who underwent pancreas transplantation alone (PTA). Both were diagnosed with type 1 diabetes as young children and had labile glycemic control with recurrent episodes of diabetic ketoacidosis. They had prolonged periods of hospitalization and complications related to their central venous access. Exocrine drainage was to the bladder and endocrine drainage was systemic. Both showed immediate graft function. In patient 1, enteric conversion was necessary due to reflux pancreatitis. Patient 2 developed mild postoperative hyperglycemia in spite of having a normal pancreas allograft biopsy; this was attributed to her immunosuppressive regimen. Patient 1 died 9 months after PTA from septic shock related to pneumonia. In 8 months of follow-up, Patient 2 maintained optimal glycemic control without the use of antidiabetic agents. In conclusion, PTA may be an alternative treatment for DRIASM patients. abstract_id: PUBMED:15211109 Pseudomonas pneumonia-mediated sepsis induces expression of pancreatitis-associated protein-I in rat pancreas. Severe impairment of exocrine pancreatic secretion has recently been demonstrated in a clinical study in sepsis and septic shock patients. The purpose of this study was to further evaluate involvement of the pancreas in the acute phase reaction in sepsis.
Using a normotensive rat model of Pseudomonas pneumonia-induced sepsis, we assessed the expression of PAP-I, amylase and trypsinogen mRNA, PAP-I protein levels, and cytokine expression in the pancreas by Northern and Western blot analysis and RT-PCR, respectively. The presence of several well-established features of pancreatitis in sepsis-induced animals was examined by biochemical and histopathological methods as well as by a determination of both water and myeloperoxidase content. Sepsis resulted in an up-regulation of PAP-I gene expression and an increase in its protein level in the pancreas, while the mRNA levels of amylase and trypsinogen were down-regulated. Pancreatic cytokine expression, serum amylase and serum lipase levels, the occurrence of pancreatic edema, and the severity of inflammatory infiltration and necrosis did not differ significantly between sham and pneumonia groups. Acinar cells showed increased vacuolization in pneumonia animals 24 hours after the treatment. These findings demonstrate that the pancreas is actively involved in the acute phase reaction in sepsis of remote origin. This involvement occurs without the concomitant biochemical and histopathologic alterations observed in pancreatitis. Taken together, these features are indicative of a sepsis-specific dysfunction of the pancreas. abstract_id: PUBMED:1343463 Protective effect of radio-detoxified endotoxin (Tolerin) on the ultrastructure of pancreas in experimental endotoxin shock of rats. The ultrastructural changes of pancreas exocrine cells were studied after the intravenous administration of endotoxin (LPS) or radio-detoxified endotoxin (150 kGy 60Co-gamma irradiated: RD-LPS or Tolerin). LPS (1 mg/rat) induced autolytic destruction of the mitochondrial membranes of the pancreatic exocrine cells. RD-LPS given at a similar dose did not produce any autolytic change. However, a small dose (100 micrograms/rat) of RD-LPS (Tolerin) given as a pretreatment can protect against the LPS-induced autolytic destruction of the mitochondria. This may be attributed to the membrane-stabilizing effect of RD-LPS. abstract_id: PUBMED:21362366 A retrospective monocenter review of simultaneous pancreas-kidney transplantation with bladder drainage in China. Background: Simultaneous pancreas-kidney transplantation (SPKT) frees the diabetic patient with end-stage nephropathy from dialysis and daily insulin injections. Herein, we review consecutive cases of SPKT with bladder drainage performed at our institution over an 8-year period. Methods: The study population included 21 patients (16 males and 5 females) who underwent SPKT between September 2001 and September 2009. Seven patients had type-1 diabetes and 14 had type-2 diabetes. Nineteen patients were on dialysis at the time of transplantation. Donation after cardiac death donors were selected for SPKT. The mean human leukocyte antigen match was 2 (range 0 - 4). SPKT was always performed using bladder drainage and vascular anastomoses to the systemic circulation. Immunosuppressive treatment consisted of anti-lymphocyte globulin induction followed by tacrolimus, mycophenolate mofetil, and prednisone. Results: The mean hospital stay was 45.43 days. After a mean follow-up of 39.4 months, survival rates for patient, kidney, and pancreas were 76.2%, 76.2%, and 66.7% at 1 year; 76.2%, 59.3%, and 55.6% at 5 years; and 57.1%, 39.5%, and 41.7% at 8 years, respectively. Major complications included anastomotic leaks, reflux pancreatitis, and rejection.
Six patients died from septic shock (n = 3), duodenal stump leak (1), cardiac arrest (1), or renal failure (1). Eight kidney grafts were lost due to acute rejection (n = 2), chronic rejection (3), and death with a functioning graft (3). Pancreatic graft failure (9) was caused by thrombosis (n = 1), rejection (2), duodenal stump leak (1), and death with a functioning graft (5). Conclusions: SPKT is a valid therapeutic option for uremic diabetics although few hospitals in China can undertake SPKT. abstract_id: PUBMED:16521195 Increased heat shock protein 70 expression in the pancreas of rats with endotoxic shock. Aim: To investigate the ultra-structural changes and heat shock protein 70 (HSP70) expression in the pancreas of rats with endotoxic shock and to detect their possible relationship. Methods: A total of 33 Wistar rats were randomly divided into three groups: control group (given normal saline), small-dose lipopolysaccharide (LPS) group (given LPS 5 mg/kg) and large-dose LPS group (given LPS 10 mg/kg). The pancreas was explanted to detect ultra-structural changes by TEM and HSP70 expression by immunohistochemistry and Western blot. Results: Rats given small doses of LPS showed swelling and loss of mitochondrial cristae of acinar cells and an increased number of autophagic vacuoles in the cytoplasm of acinar cells. Rats given large doses of LPS showed swelling, vacuolization, and obvious myeloid changes of mitochondrial cristae of acinar cells, and an increased number of autophagic vacuoles in the cytoplasm of acinar cells. HSP70 expression was increased compared to the control group (P<0.05). Conclusion: Small doses of LPS may induce stronger expression of HSP70, promote autophagocytosis and ameliorate ultra-structural injuries. abstract_id: PUBMED:21715238 Duodenal diverticulum associated with annular pancreas: a rare cause of severe cholangitis. Duodenal diverticula are common but most are asymptomatic. However, in some cases, they can cause mechanical biliary compression. We report the case of a duodenal diverticulum in a 64-year-old woman revealed by severe cholangitis with septic shock and a liver abscess. An associated annular pancreas was found. We discuss the various investigations to diagnose these two entities as well as the therapeutic strategy in this unique combination of diseases. abstract_id: PUBMED:12969889 Effect of nitric oxide on capillary hemodynamics and cell injury in the pancreas during Pseudomonas pneumonia-induced sepsis. Sepsis-induced nitric oxide (NO) overproduction has been implicated in a redistribution of flow away from the pancreas, making it vulnerable to ischemic injury in septic shock. To test this hypothesis in a remote injury model of normotensive sepsis, we induced Pseudomonas pneumonia in the rat and used intravital video microscopy (IVVM) of the pancreas to measure functional capillary density, capillary hemodynamics [red blood cell (RBC) velocity, lineal density, and supply rate], and lethal cellular damage (propidium iodide staining) at 6 and 24 h after the induction of pneumonia. With pneumonia, plasma nitrite/nitrate [NO2(-)/NO3(-) (NOx(-))] levels were doubled by 21 h (P < 0.05). To assess the effect of NO overproduction on microvascular perfusion, N6-(1-iminoethyl)-L-lysine (L-NIL) was administered to maintain NOx(-) levels at baseline. Pneumonia did cause a decrease in RBC velocity of 23% by 6 h, but by 24 h RBC velocity and supply rate had increased relative to sham by 22 and 38%, respectively (P < 0.05).
L-NIL treatment demonstrated that this increase was due to NO overproduction. With pneumonia, there was no change in functional capillary density and only modest increases in cellular damage. We conclude that, in this normotensive pneumonia model of sepsis, NO overproduction was protective of microvascular perfusion in the pancreas. abstract_id: PUBMED:28794357 A Rare Case of Gastric Ulcer Penetrating the Pancreas that was Successfully Managed by Conservative Therapy. A 62-year-old woman was admitted to our hospital with septic shock due to left submandibular osteomyelitis and cellulitis. Her condition improved following tooth extraction, drainage, and the administration of antibiotics. However, on the 4th day of hospitalization, she went into hemorrhagic shock after passing a massive tarry stool. Emergency esophagogastroduodenoscopy (EGD) was performed. We found a giant ulcer at the antral greater curvature of the stomach. Computed tomography (CT) revealed that the gastric ulcer had penetrated the pancreas. She had no signs of peritonitis but was in poor general condition. She was therefore managed solely by conservative therapy. She recovered within days. abstract_id: PUBMED:10834684 Exocrine pancreatic function in critically ill patients: septic shock versus non-septic patients. Objective: To determine the exocrine pancreatic function in critically ill patients. Design: Prospective cohort study. Setting: Medical intensive care unit. Patients: A total of 18 critically ill patients (11 patients with septic shock according to the criteria of the American College of Chest Physicians/Society of Critical Care Medicine, 7 nonseptic patients). The results obtained in these subjects were compared with the data obtained in seven healthy volunteers. Interventions: Examination of exocrine pancreatic function was done by means of a cholecystokinin-secretin test. Intravenous stimulation of the exocrine pancreas with cholecystokinin and secretin (1 unit/kg body weight/hr each) and aspiration of duodenal fluid by a gastroscopically inserted oroduodenal tube was done during a period of 1 hr. Measurements And Main Results: The content of amylase, chymotrypsin, and trypsin in aspirated duodenal fluid was significantly reduced in patients with septic shock compared with nonseptic patients as well as healthy subjects (p < .01). The volume of aspirated fluid was significantly reduced in patients with septic shock compared with healthy controls (p = .03), but not in nonseptic patients. The content of bicarbonate was not statistically different in the three groups. No significant correlation was found between variables of exocrine pancreatic function and Acute Physiology and Chronic Health Evaluation III score, sepsis-related organ failure assessment score, systolic arterial pressure and mean arterial pressure in septic shock patients. Positive end-expiratory pressure was significantly correlated with the content of trypsin (r2 = 0.52; p = .02). Postmortem examinations of five septic patients who died during the intensive care stay did not reveal gross morphologic alterations of pancreatic tissue. Conclusions: The study shows that two pancreatic enzyme systems, namely amylase as a carbohydrate-splitting enzyme and the proteolytic enzymes trypsin and chymotrypsin, are strongly affected in critically ill patients with septic shock. Answer: Yes, the pancreas is affected in patients with septic shock.
Studies have shown that patients with septic shock can exhibit hyperamylasemia, which is an increased serum activity of pancreatic amylase, without clinical signs or morphological alterations compatible with acute pancreatitis (PUBMED:21459727). Additionally, severe impairment of exocrine pancreatic secretion has been observed in sepsis and septic shock patients (PUBMED:15211109). In a study using a rat model of Pseudomonas pneumonia-induced sepsis, it was found that sepsis resulted in up-regulation of pancreatitis-associated protein-I (PAP-I) gene expression and an increase in its protein level in the pancreas, while the mRNA levels of amylase and trypsinogen were down-regulated (PUBMED:15211109). Furthermore, in endotoxic shock, increased heat shock protein 70 (HSP70) expression in the pancreas of rats was observed, which may be related to the ultra-structural changes and cellular injury in the pancreas (PUBMED:16521195). Another study showed that exocrine pancreatic function was significantly reduced in patients with septic shock compared with nonseptic patients and healthy subjects (PUBMED:10834684). These findings indicate that the pancreas is actively involved in the acute phase reaction in sepsis and can undergo functional and biochemical changes without necessarily showing gross morphological alterations typically associated with pancreatitis.
Instruction: Do the determinants of physical activity change by physical activity level? Abstracts: abstract_id: PUBMED:36920727 Determinants of physical activity engagement in older adults. In order to increase engagement in physical activity, it is important to determine which factors contribute to physical activity engagement in older adults. The current study examined the relative predictive ability of several potential determinants, in terms of both the concurrent level as well as longitudinal trajectories. Clinically normal adults aged 61-92 completed the Physical Activity Scale for the Elderly (n = 189 for cross-sectional models; n = 214 for longitudinal models). Potential determinants included age, gender, education, physical health, sensory health, mood, cardiovascular health, cognitive status, and biomarkers of Alzheimer disease (AD). We observed a novel finding that both concurrent physical health (p < 0.001) and change in physical health (p < 0.001) were significant predictors above and beyond other determinants. Concurrent mood predicted levels of physical activity (p = 0.035), particularly in females. These findings suggest that poor physical health and low mood might be important to consider as potential barriers to physical activity engagement in older adults. abstract_id: PUBMED:36833029 Determinants of Physical Activity among Children with Disabilities. Maintaining children's physical activity levels is crucial to preventing obesity and improving their health and well-being. However, achieving the recommended daily level of 60 min of moderate-to-vigorous intensity physical activity can be difficult for children with disabilities. Moreover, children with disabilities spend less time engaging in physical activity than their typically developing peers. This study aimed to assess the personal, environmental, and social determinants of physical activity among children with disabilities. This quantitative, cross-sectional study was conducted through an online survey of a convenience sample, including 125 parents of children with disabilities aged between 5 and 18 years from different regions in the Kingdom of Saudi Arabia. Approximately 40.8% of the participants were aged between 41 and 50 years, and 57.6% (the participants and friends of their children) did not engage in regular exercise. Statistically significant associations were observed between parents' perception of their children's health and the physical activity summary scores, and between the engagement of children's friends in physical activity and the physical activity summary scores. Efforts should be made to reinforce parents' perceptions of their children's health regarding physical activity and to support the social determinants of physical activity that ensure the engagement of their children's friends. Specialized interventional studies are needed to support parents of children with disabilities. abstract_id: PUBMED:37269806 Physical activity level and sedentary time determinants among Moroccan hypertensive patients. Introduction: Hypertension is closely associated with an inactive lifestyle. Physical activity and/or exercise have been shown to delay the development of hypertension. This study aimed to assess the level of physical activity and sedentary time, and their determinants, among Moroccan hypertensive patients. Patients And Methods: A cross-sectional study was conducted between March and July 2019 including 680 hypertensive patients.
We administered the International Physical Activity Questionnaire in face-to-face interviews to assess the level of physical activity and sedentary time. Results: The results showed that only 43.4% of participants met recommended physical activity levels (≥ 600 MET min/week). Adherence to physical activity recommendations was observed more in male participants (p = 0.035), in participants aged less than 40 years (p = 0.040) and in those aged between 41 and 50 years (p = 0.047). The average sedentary time was 37.19 ± 18.92 hours per week. This time was significantly longer in people aged 51 and over, in married, divorced, and widowed people, and in those with low levels of physical activity. Conclusions: The level of physical inactivity and the sedentary time were high. In addition, participants with a highly sedentary lifestyle had a low level of physical activity. Educational actions should be undertaken among this group of participants to prevent the risks associated with inactivity and sedentary behavior. abstract_id: PUBMED:36361011 A Summary of One Research Team's Contributions to Understanding Physical Activity Behavior in Children and Youth. Schools are well-positioned to provide physical activity opportunities to help youth achieve the recommended 60 or more daily minutes of moderate-to-vigorous physical activity. The Children's Physical Activity Research Group (CPARG) at the University of South Carolina has focused on understanding physical activity in school-aged youth for 30+ years. The purpose of this article was to critically review CPARG contributions to the field in school settings and school-age youth. We reviewed 127 published CPARG articles from six research projects conducted between 1993 and 2019. The review was guided by questions in five categories: measurement of physical activity and its determinants, characteristics of physical activity behavior, correlates/determinants of physical activity, physical activity interventions, and race/ethnicity and physical activity. Results were summarized by question and synthesized across categories. CPARG contributions included assessing physical activity levels, patterns, forms, and contexts; identifying and measuring physical activity correlates/determinants; and conducting school-based physical activity interventions. Identifying multiple domains of physical activity determinants enables researchers and practitioners to select/design age-appropriate, valid, and reliable instruments to assess determinants. Focusing on determinants enables them to create effective physical activity interventions, environments, programs, and policies in schools. These efforts must address race/ethnicity differences, ensuring that measurement instruments and intervention strategies are culturally appropriate. abstract_id: PUBMED:25160478 Clustering of diet, physical activity and smoking and a general willingness to change. Addressing multiple health behaviours is important in preventing disease and mortality. The present study investigated the clustering of health behaviours, cognitive determinants and stages of change in 2827 adults for the lifestyle factors of physical activity, fruit, vegetable and fat consumption and smoking. The results showed that only 3% of the total population met recommended guidelines for all of the five behaviours. Behaviours were found to be weakly associated. Behaviour-specific cognitions and stages of change for the behaviours clustered more strongly, however.
With respect to diet and physical activity, respondents in the preparation stage for one behaviour were likely also to be preparing to change another behaviour. Possible mechanisms for the apparent general willingness to change multiple behaviours are discussed, as well as potential implications for health promotion practice. abstract_id: PUBMED:35960636 Stages of behavioral change in physical activity and overweight in adolescents. Objective: To evaluate stages of change for physical activity behavior and their relationship with overweight in high-school adolescents. Methods: Cross-sectional data were obtained from 549 adolescents. Physical activity and stages of change for physical activity behavior were estimated by questionnaire. The relation between these stages and overweight was estimated by multivariate logistic regression. Results: Participants were 52.8 % female, 31.3 % belonged to a sports team and 25.9 % were overweight. Males performed more vigorous physical activity than females. Stages of change for physical activity behavior were distributed as follows: maintenance 37.2 %, contemplation 29.9 %, action 17.7 %, preparation 10.0 % and precontemplation 5.3 %. Being a non-smoking male of adequate weight who belonged to a sports team was associated with the maintenance stage. Conclusions: A high percentage of the adolescents, mainly females, were in the first stages of change for physical activity, which means they were inactive. Overweight adolescents were predominantly in the contemplation stage. These results may guide overweight prevention strategies based on stage of behavioral change. abstract_id: PUBMED:23808855 An integrative review of literature on the determinants of physical activity among rural women. Objectives: The purpose of this integrative review is to analyze current, non-experimental literature to identify factors that influence physical activity levels in rural women with a goal of informing nurses and improving the effectiveness of future physical activity interventions in this population. Design And Sample: Whittemore and Knafl's (2005) integrative review methodology was used. The sample included 11 quantitative articles, seven qualitative studies, two studies that incorporated both methodologies, and one explanatory case study. Measures: Each article was evaluated for quality using the American Association of Critical-Care Nurses revised evidence leveling system. Data were analyzed and then synthesized using the Matrix Method. Results: The terms "rural" and "physical activity" were diversely defined in the reviewed articles. The results revealed three categories of determinants: personal factors, socio-economic factors, and physical environment factors. Conclusions: Effective nursing interventions to promote physical activity should address barriers and motivating factors in all three categories of determinants for maximum efficacy. Additional research that clearly defines and consistently applies the terms "rural" and "physical activity" is needed to strengthen knowledge in this area. abstract_id: PUBMED:19243465 Do the determinants of physical activity change by physical activity level? Aim: The paper reports on a study to describe the determinants of physical activity according to physical activity level among rural women in Korea. Background: The benefits of physical activity for promoting health and preventing illness are well known, and gender and regional disparities in physical activity have been reported.
Although the number of women who perform physical activity has increased, many still do not meet the recommended levels to gain health benefits. Method: A cross-sectional descriptive study was conducted using a structured questionnaire. A convenience sample of 245 women living in a rural area of Korea was recruited during 2004. Self-efficacy, benefits, barriers and sociodemographics were examined in relation to three levels of physical activity: inactive, insufficiently active and active. Descriptive statistics and multinomial logistic regression were used for data analysis. Result: Perceived self-efficacy, benefits and barriers were statistically significant determinants only when the insufficiently active group was compared with the inactive group. Sociodemographic factors related to work and family roles statistically significantly explained the physical activity levels of rural women. Conclusion: Nursing interventions to increase self-efficacy and benefits, or decrease perceived barriers could be effective for initiating physical activity for inactive women, whereas the same may not apply for insufficiently active women. Insufficiently active women may have false confidence that their physical activity will help them gain health benefits. Reducing the burdens of work and family roles of rural women might improve their physical activity levels. abstract_id: PUBMED:36069762 Determinants of Physical Activity in the Cardiac Population: Protocol for a Systematic Review. Background: Lack of physical activity is a critical contributing risk factor to cardiovascular disease. Hence, regular physical activity is a mainstay in the primary and secondary prevention of cardiovascular disease. Despite the extensive promotion of physical activity in both primary and secondary prevention programs, including cardiac rehabilitation, physical activity levels in the cardiac population remain low. Therefore, it is crucial to understand critical determinants that influence physical activity behavior. Objective: This study aims to deliver a systematic review of studies with collated observational data exploring the association between determinants and physical activity behavior in the target population. These new insights inform the design of future interventions targeted at lasting heart-healthy physical activity behavior in the cardiac population. Methods: Primary studies with observational quantitative data on determinants and their association with physical activity behavior in the cardiac population will be included. Information on relevant primary studies will be retrieved from various databases, including Embase, CINAHL, MEDLINE, PsycInfo, and Web of Science Core Collection. Six reviewers will independently double-screen articles. Studies will be selected according to the prespecified inclusion and exclusion criteria. Data will be extracted and entered into suitable worksheets. The US-based National Heart, Lung, and Blood Institute's Study Quality Assessment Tool for Observational Studies will be used to assess the quality of all eligible primary studies. The results will be presented in a descriptive and narrative synthesis. If the type and quality of data are suitable, meta-analyses will be conducted. Study reporting will follow the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Results: Data collection started in September 2020, and the literature search was updated in July 2021. 
Data synthesis is ongoing, and the literature search will be updated in October 2022. Conclusions: This review will be valuable to relevant stakeholders, including clinicians and health care professionals, intervention developers, and decision makers in health care. It lays a comprehensive foundation for understanding the determinants of physical activity to inform the design of secondary prevention interventions relevant to the cardiac population. Trial Registration: PROSPERO CRD42020206637; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=206637. International Registered Report Identifier (irrid): RR1-10.2196/39188. abstract_id: PUBMED:31179262 Associations of Participation in Organized Sport and Self-Organized Physical Activity in Relation to Physical Activity Level Among Adolescents. Although physical activity level (PAL) is positively correlated with adolescents' health, many adolescents do not fulfill recommendations for physical activity. This study examines the associations of organized sport and self-organized physical activity, with PAL among adolescents. Participants were 301 adolescents (12-13 year-olds). The adolescents wore accelerometers for 1 week according to international standards, and reported their participation in organized sport and self-organized physical activity in a questionnaire. The results showed that the level of participation in organized sport was positively associated with the adolescents' total PAL, while there was no significant association between time spent in self-organized physical activity and adolescents' daily minutes of moderate and vigorous physical activity. In addition, boys who participated <3 h per week (or not at all) in organized sport stood out with the lowest fulfillment of recommended PAL. Our findings underline the critical importance of getting adolescents, especially boys, to participate in organized sport and not to drop out from organized sport during adolescence. Answer: Yes, the determinants of physical activity can change by physical activity level. According to a study conducted among rural women in Korea, perceived self-efficacy, benefits, and barriers were significant determinants when comparing the insufficiently active group with the inactive group. However, these factors were not as significant when comparing the active group with the insufficiently active group. Sociodemographic factors related to work and family roles were found to significantly explain the physical activity levels of rural women, suggesting that interventions to increase physical activity may need to be tailored differently for women who are inactive versus those who are insufficiently active (PUBMED:19243465). This finding is consistent with the broader understanding that the factors influencing physical activity engagement can vary depending on an individual's current level of activity. For example, individuals who are already active may be motivated by different factors or face different barriers than those who are inactive or insufficiently active. This implies that public health interventions and strategies to promote physical activity should consider the specific determinants relevant to each group's activity level to be effective.
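Editor's note (illustrative aid, not part of the cited abstracts): several of the studies above express physical activity level in MET-minutes per week, and the Moroccan hypertension study classifies participants as meeting recommendations at ≥ 600 MET min/week. The minimal sketch below shows how such a weekly total is commonly derived from self-reported frequency and duration. The MET weights (3.3 walking, 4.0 moderate, 8.0 vigorous) follow the widely used IPAQ scoring convention and, like the example respondent, are assumptions for illustration rather than data from any cited study.

```python
# Illustrative only: weekly MET-minutes from IPAQ-style self-report.
# MET weights are the conventional IPAQ values (assumption, not study data).
MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def weekly_met_minutes(activity):
    """activity maps each domain to (days_per_week, minutes_per_day)."""
    total = 0.0
    for domain, (days, minutes) in activity.items():
        total += MET_WEIGHTS[domain] * days * minutes
    return total

# Hypothetical respondent: walks 30 min on 5 days, no other activity.
example = {"walking": (5, 30), "moderate": (0, 0), "vigorous": (0, 0)}
total = weekly_met_minutes(example)  # 3.3 * 5 * 30 = 495 MET-min/week
print(total, "meets >= 600 MET-min/week:", total >= 600)  # 495.0 -> False
```

In this hypothetical case the respondent falls just below the 600 MET-min/week cut-off, which is how the abstracts above arrive at the proportion of participants meeting recommended activity levels.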
Instruction: Detection of inflammatory sacroiliitis in children with magnetic resonance imaging: is gadolinium contrast enhancement necessary? Abstracts: abstract_id: PUBMED:25892309 Detection of inflammatory sacroiliitis in children with magnetic resonance imaging: is gadolinium contrast enhancement necessary? Objective: In adults, gadolinium contrast enhancement does not add incremental value to fluid-sensitive sequences for evaluation of bone marrow edema. This study was undertaken to determine if magnetic resonance imaging (MRI) contrast is necessary to assess lesions consistent with inflammatory sacroiliitis in children. Methods: Patients with clinically suspected or diagnosed juvenile spondyloarthritis (SpA) underwent pelvic MRI consisting of multiplanar fluid-sensitive and postgadolinium T1-weighted fat-saturated sequences including dedicated sacral imaging. Three radiologists independently evaluated the fluid-sensitive sequences, and later, the complete study (including postcontrast images). With postcontrast imaging as the reference standard, we calculated the test properties of fluid-sensitive sequences for depiction of acute and chronic findings consistent with sacroiliitis. Results: The 51 patients had a median age of 15 years, and 57% were male. Nineteen patients (22 joints) were diagnosed as having sacroiliitis based on postcontrast imaging, and none had synovitis in the absence of bone marrow edema. All 22 joints demonstrated bone marrow edema on both fluid-sensitive and postgadolinium T1-weighted fat-saturated sequences. Eighteen percent of joints with sacroiliitis had capsulitis, which was observed on both noncontrast and postcontrast imaging. Fifty-nine percent of joints with sacroiliitis had synovitis on postcontrast imaging. Sensitivity, specificity, positive predictive value, and negative predictive value of fluid-sensitive sequences for the detection of acute inflammatory lesions consistent with sacroiliitis using postgadolinium imaging as the reference standard were excellent. Interrater reliability was substantial for all parameters. Conclusion: Our findings indicate that fluid-sensitive sequences are sufficient to detect acute and chronic lesions consistent with inflammatory sacroiliitis in children. abstract_id: PUBMED:18034343 Monitoring ankylosing spondylitis therapy by dynamic contrast-enhanced and diffusion-weighted magnetic resonance imaging. Objective: The effects of different therapies on enthesitis/osteitis in active ankylosing spondylitis (AS) were evaluated by magnetic resonance imaging (MRI). The aim was to assess the role of quantitative MRI in the evaluation of AS treatment efficacy. Materials And Methods: Thirty patients with active spondylitis or bilateral sacroiliitis were selected and followed up for 1 year. Ten of the patients were treated only with non-steroidal anti-inflammatory drugs, 10 patients additionally received at baseline an intravenous pulse of glucocorticoids and 10 patients were treated with regular infusions of infliximab. Disease activity was measured according to clinical instruments and laboratory tests. For each patient, one selected inflamed lesion was followed from baseline through control visits quantitatively by diffusion-weighted imaging (DWI) measuring the apparent diffusion coefficient (ADC) and by dynamic contrast-enhanced imaging (DCEI) with evaluation of the enhancement factor (f(enh)) and enhancement gradient (g(enh)). 
Results: Clinical and quantitative MRI parameters diminished significantly with regression of the inflammatory activity. The improvement in AS was most pronounced in patients treated with infliximab; after 12 months the ADC diminished from an average of 1.31 to 0.88 x 10(-3) mm(2)/s, f(enh) from 1.85 to 0.60, and g(enh) from 3.09 to 1.40 %/s. Conclusion: Diffusion-weighted imaging and DCEI were shown to be effective in quantifying changes in inflammation in skeletal lesions during the treatment of AS, and could therefore be convenient for assessing treatment efficacy. To the best of our knowledge this is the first time DWI was used to evaluate the activity of skeletal inflammation in rheumatic diseases such as AS. abstract_id: PUBMED:32493352 Usefulness of magnetic resonance enterography in detecting signs of sacroiliitis in young patients with inflammatory bowel disease. Background: Arthritis is often an underestimated extraintestinal manifestation in pediatric inflammatory bowel disease (IBD), including sacroiliitis, whose early signs are well detectable at magnetic resonance imaging (MRI). Magnetic resonance enterography (MRE) is an accurate imaging modality for pediatric IBD assessment. We studied the possibility of detecting signs of sacroiliac inflammation in a group of children with IBD who underwent MRE for gastrointestinal disease evaluation. Methods: We retrospectively reviewed MRE scans performed in pediatric patients with IBD. We looked for signs of sacroiliitis taking the ASAS (Assessment of SpondyloArthritis international Society) criteria as a model. Presence of bone marrow edema (using T2W sequences with fat suppression), diffusion restriction in Diffusion Weighted Imaging (DWI) or Diffusion Weighted Imaging with Background Suppression (DWIBS), and dynamic contrast enhancement were evaluated. Each SI joint was divided into 4 quadrants: upper iliac, lower iliac, upper sacral, and lower sacral. Two blinded observers with experience in pediatric and skeletal imaging independently evaluated the images. Cases of disagreement were evaluated jointly by the two reviewing radiologists and a third radiologist with similar experience. Results: We enrolled 34 patients (24 males and 10 females, with mean age at scanning 14.3 years, median 15.3 years; 2 affected by ulcerative colitis, 32 by Crohn's disease) for a total of 59 examinations performed at the time of their first diagnosis or at symptom exacerbations. No patient complained of musculoskeletal symptoms, nor had pathological findings on joint examination. At the time of MRE 25 patients were under treatment for their IBD. Five patients had radiological signs of SI inflammation at MRE, albeit of mild degree. All patients with SI joint edema also had restricted diffusion in DWIBS or DWI, and almost all had contrast media uptake. Conclusions: Sacroiliitis is one of the extraintestinal manifestations associated with IBD; it is often asymptomatic and clinically underdetected, and its progression is unrelated to that of the underlying IBD. MRE offers the possibility to study SI joints in young patients with IBD who undergo MRE for the investigation of their intestinal condition. Furthermore, we observed that gadolinium enhancement does not improve diagnostic specificity in sacroiliitis detection. abstract_id: PUBMED:27446601 Diagnostic imaging of psoriatic arthritis. Part II: magnetic resonance imaging and ultrasonography.
Plain radiography reveals specific, yet late, changes of advanced psoriatic arthritis. Early inflammatory changes are seen both on magnetic resonance imaging and ultrasound within peripheral joints (arthritis, synovitis), tendon sheaths (tenosynovitis, tendovaginitis) and entheses (enthesitis, enthesopathy). In addition, magnetic resonance imaging enables the assessment of inflammatory features in the sacroiliac joints (sacroiliitis) and the spine (spondylitis). In this article, we review current opinions on the diagnostics of some selected and distinctive features of psoriatic arthritis concerning magnetic resonance imaging and ultrasound and present some hypotheses on psoriatic arthritis etiopathogenesis, which have been studied with the use of magnetic resonance imaging. The following elements of psoriatic arthritis are discussed: enthesitis, extracapsular inflammation, dactylitis, distal interphalangeal joint and nail disease, the ability of magnetic resonance imaging to differentiate undifferentiated arthritis, and the value of whole-body magnetic resonance imaging and dynamic contrast-enhanced magnetic resonance imaging. abstract_id: PUBMED:8970049 Magnetic resonance imaging (MRI) for detection of active sacroiliitis--a prospective study comparing conventional radiography, scintigraphy, and contrast enhanced MRI. Objective: Sacroiliitis is often difficult to diagnose in the absence of radiographic alterations. For the diagnosis of active sacroiliitis, plain radiography, scintigraphy, and contrast enhanced magnetic resonance imaging (MRI) were evaluated in a prospective study. Methods: In 44 consecutive patients with complete clinical and laboratory evaluation, plain radiographs, quantitative sacroiliac (SI) scintigraphy, and MRI were performed to evaluate the contribution of these imaging techniques to the diagnosis of active sacroiliitis. Scintiscanning and MRI were done in 20 control subjects to define the normal range of imaging findings. We determined the sensitivity and specificity for each imaging method using a reference standard based on clinical symptoms of inflammatory low back pain with or without laboratory signs, and on clinical and radiographic follow-up during 1.5-2.5 years to confirm diagnosis. Results: MRI was most sensitive (95%) and superior to quantitative SI scintigraphy (48%) or conventional radiography (19%) for the detection and confirmation of active sacroiliitis. For the assessment of inflammatory signs, MRI had higher specificity (100%) than scintigraphy (97%) or plain radiography (47%). At repeat MRI after 2-30 months, there was persistent pathologic signal intensity in the subchondral bone area despite clinically successful anti-inflammatory drug therapy. Conclusion: For the assessment of active changes in the synovial portion and the subchondral bone marrow, contrast enhanced MRI is superior to quantitative SI scintigraphy or conventional radiography. MRI picks up an additional 75% of early cases not diagnosed by plain radiography. Scintigraphy is only of limited value. Persistent pathologic signal intensity in the subchondral bone marrow seems to be closely associated with previous inflammatory episodes, thus limiting the specificity of MRI for active sacroiliitis. Based on our findings, we suggest an algorithm for the evaluation of patients with suspected active sacroiliitis. abstract_id: PUBMED:19446974 Magnetic resonance imaging of active sacroiliitis: do we really need gadolinium?
Introduction: Magnetic resonance imaging (MRI) of active inflammatory changes of the sacroiliac joint (SIJ) in spondyloarthritis (SpA) is performed with short tau inversion recovery (STIR) sequences and fat-saturated T1-weighted fast spin-echo (FSE) sequences after administration of gadolinium-based contrast medium (T1/Gd). The aim of the present study was to compare these two pulse sequences in terms of diagnosis, diagnostic confidence, and quantification of inflammatory changes. Materials And Methods: The study included 105 patients with suspected SpA; 72 patients developed clinical SpA over time. All patients were examined with STIR and T1/Gd and each of the two sequences was analyzed separately in conjunction with unenhanced T1 FSE images. For quantitative estimation of inflammatory changes, each sacroiliac joint (SIJ) was divided into 4 quadrants (and severity per quadrant was assigned a score of 0-4, resulting in a maximum sum score of 16 per SIJ). Diagnostic confidence was assessed on a visual analogue scale ranging from 0 to 10. Results: Active sacroiliitis was diagnosed in 46 patients and ruled out in 34 using STIR, whereas findings were inconclusive in 25 patients. The corresponding numbers for T1/Gd were 47, 44, and 14. Diagnostic confidence was significantly lower for STIR (7.3+/-2.6) compared with T1/Gd (8.7+/-1.9) (p<0.001). The sum scores were 2.5 (+/-3.3) for STIR and 2.2 (+/-3.2) for T1/Gd for the right SIJ and 2.2 (+/-2.9) (STIR) and 1.9 (+/-3.1) (T1/Gd) for the left SIJ. Agreement was high, with intraclass correlation coefficient (ICC) values of 0.86 for the right SIJ and 0.90 for the left SIJ and positive correlation (r=0.62 right, 0.60 left). Summary: STIR sequences alone are sufficient for establishing a reliable diagnosis and quantifying the amount of inflammation in active sacroiliitis. A contrast-enhanced study is dispensable in patients with established disease or in the setting of clinical follow-up studies. However, a contrast-enhanced MR sequence is beneficial to ensure maximum diagnostic confidence when patients with early sacroiliitis are examined. abstract_id: PUBMED:22271323 Sacroiliitis or pseudosacroiliitis? Clinical/methodical Issue: When the presence of seronegative spondyloarthropathy is unrecognized, there can be a delay in achieving an accurate diagnosis, as the typical inflammatory low back pain is similar to that found in degenerative diseases of the lumbosacral spine and the sacroiliac joints. Thus, seronegative spondyloarthropathy is often misinterpreted as a degenerative disease. The initial radiography of the sacroiliac joints is often normal, which results in a delay in the diagnosis of sacroiliitis of approximately 3-7 years. Standard Radiological Methods: This illustrates the significance of an adequate imaging method for the early detection of sacroiliitis. Methodical Innovations: Contrast medium administration for magnetic resonance imaging (MRI) enables a differentiation between synovitis, capsulitis, enthesitis and effusion. Performance: Sensitivity and specificity for detection of active inflammation by MRI are about 83-85%. Early active inflammation can be detected by MRI 3-7 years before structural changes are obvious on x-ray examination. Practical Recommendations: Pseudosacroiliitis can be differentiated from inflammatory sacroiliitis by the patient history, laboratory data, osteoproliferative and osteodestructive changes and the typical distribution pattern of bone marrow edema.
abstract_id: PUBMED:15934066 Inflammatory low back pain: high negative predictive value of contrast-enhanced color Doppler ultrasound in the detection of inflamed sacroiliac joints. Objective: To determine the value of microbubble contrast agents for color Doppler ultrasound (CDUS) compared with magnetic resonance imaging (MRI) in the detection of active sacroiliitis. Methods: An observational case-control study of 103 consecutive patients (206 sacroiliac [SI] joints) with inflammatory low back pain according to the Calin criteria and 30 controls (60 SI joints) without low back pain was conducted at the University Hospital of Innsbruck. All patients and controls underwent unenhanced and contrast-enhanced CDUS and MRI of the SI joints. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of unenhanced and contrast-enhanced CDUS were evaluated. Results: Forty-three patients (41%), with 70 of 206 SI joints (34%), and none of the controls or the 60 control SI joints demonstrated active sacroiliitis on MRI. Unenhanced CDUS showed a sensitivity of 17%, a specificity of 96%, a PPV of 65%, and an NPV of 72%; contrast-enhanced CDUS showed a sensitivity of 94%, a specificity of 86%, a PPV of 78%, and an NPV of 97%. Detection of vascularity in the SI joint was increased by contrast administration (P < 0.0001). Clustered receiver operating curve analysis demonstrated that enhanced CDUS (A(z) = 0.89) was significantly better than unenhanced CDUS (A(z) = 0.61) for the diagnosis of active sacroiliitis verified by MRI (P < 0.0001; 2-sided test). Conclusion: Microbubble contrast-enhanced CDUS is a sensitive technique with a high NPV for detection of active sacroiliitis compared with MRI. abstract_id: PUBMED:27708913 Early detection of spondyloarthropathy in patients with psoriasis by using the ultrasonography and magnetic resonance image. Objective: To assess the validity of ultrasound (US) in the early detection of arthritis and enthesitis, with assessment of the validity of magnetic resonance imaging (MRI) in the early detection of sacroiliitis and spondylitis in patients with psoriasis, and to compare the findings of clinical examination and conventional radiography. Material And Methods: The study included 50 patients with psoriasis and 20 healthy controls. All patients and controls underwent US and power Doppler analyses for the joints of both hands and feet and the entheseal sites. MRI of the lumbosacral spine and sacroiliac joints was performed. Results: Abnormal US findings of arthritis were present in 18% of patients, whereas only 6% of patients had X-ray abnormalities; enthesopathy was present in 74%, a higher percentage than detected by clinical and radiological assessment (46% and 26%, respectively). MRI and radiological study demonstrated evidence of inflammation in the spine in 44% and 16% of patients, respectively, and evidence of sacroiliitis in 10% and 6% of patients, respectively. Conclusion: Use of newer imaging modalities allows early diagnosis and early initiation of therapy. abstract_id: PUBMED:1578453 Magnetic resonance imaging in the detection of sacroiliitis. The value of magnetic resonance imaging (MRI) in establishing the diagnosis of sacroiliitis was studied in 20 patients with established or suspected disease on conventional radiographs and in 10 healthy subjects. Coronal T1 weighted, axial T2 weighted and proton density MRI images of the sacroiliac joints in addition to plain film radiographs were obtained.
All films were graded from 0 to 4 according to the modified New York criteria and independently for changes in joint width, erosions, sclerosis and ankylosis. Using the modified New York criteria, more abnormalities were detected by MRI than by conventional radiography (p = 0.04). This was due to the detection of definite abnormalities (grades 2-4) by MRI in joints that were graded normal or suspicious (grades 0-1) on plain films (p = 0.01). MRI tended to be superior to plain films in visualizing erosions. Only MRI detected abnormalities of articular cartilage (19 patients) and in subchondral bone marrow (7 patients). The latter 2 types represented fatty infiltration and inflammatory change. Intraobserver and interobserver variation were similar for the interpretation of MRI scans and plain film radiographs. These results suggest that MRI detects changes of established sacroiliitis. Due to its ability to directly image articular cartilage, it may be particularly useful in patients with early disease. Answer: The detection of inflammatory sacroiliitis in children with magnetic resonance imaging (MRI) does not necessarily require gadolinium contrast enhancement. A study conducted to determine if MRI contrast is necessary to assess lesions consistent with inflammatory sacroiliitis in children found that fluid-sensitive sequences were sufficient to detect acute and chronic lesions consistent with inflammatory sacroiliitis. The study involved patients with clinically suspected or diagnosed juvenile spondyloarthritis (SpA) who underwent pelvic MRI consisting of multiplanar fluid-sensitive and postgadolinium T1-weighted fat-saturated sequences, including dedicated sacral imaging. The results showed that bone marrow edema, which is indicative of sacroiliitis, was demonstrated on both fluid-sensitive and postgadolinium T1-weighted fat-saturated sequences. The sensitivity, specificity, positive predictive value, and negative predictive value of fluid-sensitive sequences for the detection of acute inflammatory lesions consistent with sacroiliitis, using postgadolinium imaging as the reference standard, were excellent. Therefore, the findings indicate that fluid-sensitive sequences alone are sufficient to detect sacroiliitis in children, and the use of gadolinium contrast enhancement may not be necessary (PUBMED:25892309).
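For reference, the diagnostic test properties cited in these abstracts and in the answer above (sensitivity, specificity, positive and negative predictive value) all derive from a simple two-by-two comparison of the index test against the reference standard. The formulas below are a generic sketch using the standard textbook definitions only; TP, FP, TN and FN are placeholder counts of true/false positives and negatives, not figures taken from any of the cited studies.

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Positive predictive value (PPV) = TP / (TP + FP)
Negative predictive value (NPV) = TN / (TN + FN)

Read against these definitions, the contrast-enhanced ultrasound figures quoted above (PUBMED:15934066: sensitivity 94%, NPV 97%) mean that roughly 94% of joints inflamed on MRI were detected, and that a negative enhanced ultrasound result was correct about 97% of the time.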
Instruction: Cervical cancer screening of HIV-positive women: is a prolongation of the screening interval meaningful? Abstracts: abstract_id: PUBMED:17001558 Cervical cancer screening of HIV-positive women: is a prolongation of the screening interval meaningful? Objective: Cervical cancer screening guidelines have recently stated that the screening interval of healthy women can be extended up to 3 years. Can those recommendations be applied to high-risk populations? Material And Methods: In a prospective setting, 305 HIV-positive women were enrolled in this analysis between September 2000 and December 2003. Patients were characterized according to HPV (human papilloma virus) prevalence, CIN (cervical intraepithelial neoplasia) incidence and CD4 cell count. Results: 41 % of all HIV-positive women were HPV positive (oncogenic subtypes). In patients with diminished CD4 cell counts, the HPV prevalence increased to 60 % (54/90). CIN was found in 27 % (83/305) of women. CIN was more frequent in HPV-positive women with a CD4 cell count < 200 mm (3) (52 %, 38/72). The CIN incidence was also high in HIV-positive women with negative HPV infection and diminished CD4 cell count (39 %, 7/18 vs. 7 %, 11/161). Conclusions: The current cervical cancer screening guidelines are not helpful in HIV-positive women. The CIN incidence is significantly higher than in the HIV-negative population. For this reason, high-risk populations such as HIV-infected women need intensive diagnostic surveillance and short screening intervals to detect CIN. abstract_id: PUBMED:22711081 Screening for cervical cancer: U.S. Preventive Services Task Force recommendation statement. Description: Update of the 2003 U.S. Preventive Services Task Force (USPSTF) recommendation statement on screening for cervical cancer. Methods: The USPSTF reviewed new evidence on the comparative test performance of liquid-based cytology and the benefits and harms of human papillomavirus (HPV) testing as a stand-alone test or in combination with cytology. In addition to the systematic evidence review, the USPSTF commissioned a decision analysis to help clarify the age at which to begin and end screening, the optimal interval for screening, and the relative benefits and harms of different strategies for screening (such as cytology and co-testing). Recommendations: This recommendation statement applies to women who have a cervix, regardless of sexual history. This recommendation statement does not apply to women who have received a diagnosis of a high-grade precancerous cervical lesion or cervical cancer, women with in utero exposure to diethylstilbestrol, or women who are immunocompromised (such as those who are HIV positive). The USPSTF recommends screening for cervical cancer in women aged 21 to 65 years with cytology (Papanicolaou smear) every 3 years or, for women aged 30 to 65 years who want to lengthen the screening interval, screening with a combination of cytology and HPV testing every 5 years. See the Clinical Considerations for discussion of cytology method, HPV testing, and screening interval (A recommendation). The USPSTF recommends against screening for cervical cancer in women younger than age 21 years (D recommendation). The USPSTF recommends against screening for cervical cancer in women older than age 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer.
See the Clinical Considerations for discussion of adequacy of prior screening and risk factors (D recommendation). The USPSTF recommends against screening for cervical cancer in women who have had a hysterectomy with removal of the cervix and who do not have a history of a high-grade precancerous lesion (cervical intraepithelial neoplasia grade 2 or 3) or cervical cancer (D recommendation). The USPSTF recommends against screening for cervical cancer with HPV testing, alone or in combination with cytology, in women younger than age 30 years (D recommendation). abstract_id: PUBMED:34797107 Experiences of cervical cancer screening in HIV-positive women in Zimbabwe. Background: The primary purpose of screening is to detect individuals at risk of cervical cancer so as to prevent further progression of the disease. Cervical cancer remains a global concern, as it ranks as the fourth most commonly diagnosed female malignancy worldwide. It is the commonest female cancer in Zimbabwe. Women living with human immunodeficiency virus (HIV) have a disproportionate risk of invasive cervical cancer, as they are 2-12 times more likely to develop pre-cancerous lesions. As a result of the increased risk, routine screenings are suggested. Few women are screened for cervical cancer in Zimbabwe. Objectives: This study aimed to describe the experiences of screening for cervical cancer and the motivation behind screening. Method: The study employed a qualitative research approach. In-depth one-to-one interviews and focus group discussions were conducted using interview and focus group guides. The study was conducted at an opportunistic infections clinic in Mpilo Central Hospital. Data analysis was performed using Giorgi's descriptive method. Results: The themes that emerged from data analysis were facilitators of screening for cervical cancer, community awareness of cervical cancer screening, free cervical cancer treatment and more screening centres, and integrating cervical cancer screening with HIV care. Conclusion: In-depth understanding of the factors that enable women to take part in cervical cancer screening is essential so that these factors can be strengthened to improve uptake of cervical cancer screening services. abstract_id: PUBMED:35538482 Cervical cancer screening among HIV-positive women in urban Uganda: a cross sectional study. Background: Women living with Human Immunodeficiency Virus (HIV) are at a high risk for early development of cervical cancer. Adherence to cervical cancer prevention strategies in this population is vital for the early detection and treatment of cervical cancer. This study aimed to determine the prevalence and factors associated with cervical cancer screening among HIV-positive women attending an urban HIV care center in Uganda. Methods: This cross-sectional study included 205 HIV-positive women receiving care at an urban HIV care center. An interviewer-administered questionnaire was used to capture sociodemographic information, history of screening for cervical cancer, and reproductive health characteristics. Logistic regression analysis was used to determine the factors associated with cervical cancer screening. Results: Of the 205 HIV-positive women with a mean age of 37.5 ± 8.87 years who participated in the study, the majority (n = 201, 98%) were aware of cervical cancer screening. Ninety participants (44%) had ever been screened for cervical cancer, and only 33 (16.1%) had been screened in the past year.
Obtaining information about cancer of the cervix and cervical cancer screening from health care professionals was significantly associated with higher levels of cervical cancer screening (adjusted odds ratio = 5.61, 95% confidence interval: 2.50-12.61, p value < 0.001). Conclusion: This study highlights the low prevalence of cervical cancer screening among HIV-positive women and underscores the role of health professionals as an effective source of information on cervical cancer and cervical cancer screening. Patient education programs in HIV prevention and care facilities should emphasize cervical cancer screening messages to enhance the uptake of screening services. abstract_id: PUBMED:37965474 Cervical cancer screening uptake and associated factors among Women Living with Human Immunodeficiency Virus in public hospitals, eastern Ethiopia. Background: Cervical cancer, the second leading cancer in Ethiopian women, is six times more common among Human Immunodeficiency Virus-infected women. Its screening provides protective advantages and is linked to a decrease in the incidence of invasive cervical cancer and mortality. Although cancer screening has great advantages for early treatment and prevention of further complications, cervical cancer screening uptake is low among women in developing countries. Cervical cancer screening uptake among Women Living with Human Immunodeficiency Virus (WLHIV) is not well known in Eastern Ethiopia. Thus, we aimed to assess cervical cancer screening uptake and its associated factors among WLHIV in public hospitals in Harar, eastern Ethiopia. Methods: An institution-based cross-sectional study was carried out on 412 randomly selected HIV-positive women from March 20 to April 20, 2022. The results of the study were presented descriptively in percentages and analytically as odds ratios. Bivariate and multivariable logistic regression analyses were used to determine the presence and degree of association between dependent and independent variables. In the multivariable logistic analysis, a p-value of 0.05 and an adjusted odds ratio with a 95% confidence interval were considered to determine independent predictors for the uptake of cervical cancer screening. Results: Cervical cancer screening uptake among WLHIV was 57.5% (95% CI: 52.5, 62.9%). The uptake of cervical cancer screening was significantly associated with age 20-29 years (AOR = 7.33; 95% CI: 1.98, 27.1), age 40-49 years (AOR = 4.37; 95% CI: 1.48, 12.89), tertiary level of education (AOR = 0.197; 95% CI: 0.041, 0.946), good knowledge (AOR = 3.591; 95% CI: 2.123, 6.073), and monthly income of 2501 Ethiopian Birr (45.52 $) and above (AOR = 0.389; 95% CI: 0.158, 0.959). Conclusions: More than half of the participants had undergone cervical cancer screening. Age, marital status, educational status, monthly income, and awareness of cancer screening were all factors related to cervical cancer screening. To maximize uptake, it is necessary to create specific counseling and education programs that target HIV-positive women. abstract_id: PUBMED:32165914 Sociodemographic factors, health seeking behaviors, reproductive history, and knowledge of cervical screening among women in Swaziland. Background: Cervical cancer is the leading cause of cancer among women in Swaziland; however, a low rate of cervical screening in this population has led to high rates of morbidity and mortality from cervical cancer.
Objective: To identify factors associated with lack of cervical screening among women in Swaziland. Methods: A cross-sectional study was conducted among 300 women aged 18-69 years attending clinics in three regions of Swaziland from May to August of 2014. An investigator-administered questionnaire was used to collect data on socioeconomic factors, health-seeking behaviors, reproductive history, and cervical screening history and knowledge from the women. Results: Adjusted multivariable logistic regression analysis revealed that women < 30 years of age were less likely to receive a cervical exam compared to women ≥30 years of age (Odds Ratio 0.06, 95% Confidence Interval 0.01-0.67). Women who had a tertiary education were almost 6 times more likely to receive a cervical screening (OR 5.83, 95% CI 1.11-30.50). Women who said that they did not know when to receive cervical screening were 73% less likely to have a cervical exam (OR 0.27, 95% CI 0.01-0.74). Conclusions: Younger age, lower educational level, and lack of knowledge about when to receive a cervical screening affected whether women obtained a cervical screening. This indicates the need for educating women, particularly younger women, about the importance of cervical examinations. Addressing these barriers to screening should lead to a decrease in cervical lesions and cancer, especially in this population with a high HIV prevalence. abstract_id: PUBMED:22367370 Cervical cancer awareness and screening in Botswana. Objective: Cervical cancer remains a leading cause of death in many developing countries because of limited screening by Papanicolaou (Pap) smear. We sought to better understand women's beliefs about cervical cancer and screening in Botswana, a middle-income African country with high rates of cervical cancer. Methods: We interviewed 289 women attending general medicine or human immunodeficiency virus (HIV) clinics, where Pap testing was available, in Gaborone, Botswana, in January 2009. Results: About three fourths (72%) of the respondents reported having ever had a Pap smear; HIV-positive women were more likely to have had a Pap smear than HIV-negative women (80% vs 64%; odds ratio, 1.97; 95% confidence interval, 1.10-3.55). Screening was also more common among women who were older, had higher incomes, or had heard of cervical cancer. Almost all participants reported a desire to have a Pap smear. Reasons included to determine cervical health (56%), to improve overall health (33%), and to obtain early treatment (34%). About half (54%) of the respondents said they did not know what causes cervical cancer, and almost none attributed the disease to human papillomavirus infection. Conclusions: Study findings can inform interventions that seek to increase cervical cancer awareness and uptake of screening as it becomes more widely available. abstract_id: PUBMED:34380862 Screening for cervical cancer in HIV-infected women: A review of literature. Globally, the cervical cancer burden is huge, more so in low-resource countries. Human immunodeficiency virus (HIV) infection increases a woman's risk of human papillomavirus (HPV) infection and cervical cancer. There is a lack of opportunistic, as well as organized, cervical cancer screening structures for HIV-positive women. A large proportion of women have invasive cervical cancer as their initial acquired immune deficiency syndrome (AIDS)-defining illness. There is an especially high incidence in countries where there are no organized cervical cancer prevention programs.
Additionally, there are cultural, social, psychological, and system barriers that women living with HIV have to overcome when accessing healthcare services. We believe that educating women and healthcare providers regarding the need for screening, early detection, and treatment is as important as bringing about a systematic change in healthcare services to improve participation of HIV-positive women in screening for cervical cancer. abstract_id: PUBMED:16226691 Cervical cancer screening. Cervical cytology has proved to be one of the most successful examples of cancer screening and has resulted in significant decreases in incidence and mortality from invasive cervical cancer in regions in which comprehensive programs have been instituted. A standardized system of reporting abnormal cervical cytology, the Bethesda System, has been developed and revised to reflect our current understanding of HPV-related precancers. The Pap smear has been the backbone of cervical cancer screening programs in North America over the past five decades; however, recent advances, including liquid-based cytology, HPV typing, and direct visualization techniques, are proposed to address the inherent weaknesses of cytologic screening in women who are screened. The problem of unscreened and underscreened women exists and accounts for most cases of invasive cancer. Widespread screening hopefully one day will lead to a significant decrease in the mortality of cervical cancer, the second most common female malignancy worldwide. Another means of preventing this disease involves HPV vaccines, which are in development but are not ready for use outside of a clinical trial. abstract_id: PUBMED:27845952 Human papillomavirus as a cause of anal cancer and the role of screening. Purpose Of Review: Anal cancer is a serious health problem in HIV-positive men who have sex with men, and precursor lesions, anal intraepithelial neoplasia, are well defined. Given the similarities with cervical cancer, screening for and treatment of anal intraepithelial neoplasia might prevent anal cancer. Screening programmes should meet the Wilson and Jungner criteria. We used these criteria to evaluate the current body of evidence supporting a screening programme for anal dysplasia. Recent Findings: The natural history of anal intraepithelial neoplasia is gradually becoming clearer, and three prospective studies are now being performed to conclusively address this issue. High-resolution anoscopy remains the gold standard for diagnosing anal intraepithelial neoplasia. The International Anal Neoplasia Society has recently published Practice Standards in the Detection of Anal Cancer Precursors. The main issue, however, is treatment. Although response rates are reasonable at early evaluation, the majority of patients have a recurrence. Summary: At present, an anal cancer screening programme for HIV-positive men who have sex with men meets most of the Wilson and Jungner criteria. Given that high-resolution anoscopy is the gold standard for screening, important issues that need addressing are the need for a less invasive screening procedure and the cost-effectiveness of screening. The main issue is treatment. Development and evaluation of new treatment strategies are essential for an effective and sustainable screening programme. Answer: Based on the provided abstracts, prolonging the screening interval for cervical cancer in HIV-positive women does not seem to be meaningful or recommended.
The study in abstract PUBMED:17001558 found that the incidence of cervical intraepithelial neoplasia (CIN) is significantly higher in HIV-positive women, particularly those with diminished CD4 cell counts, and that the current cervical cancer screening guidelines are not adequate for this high-risk population. The study suggests that HIV-infected women need intensive diagnostic care and shorter screening intervals to detect CIN effectively. The US Preventive Services Task Force (USPSTF) recommendations in abstract PUBMED:22711081 specifically exclude women who are immunocompromised, such as those who are HIV positive, from the general screening guidelines that apply to the general population of women with a cervix. This indicates that HIV-positive women may require a different screening approach due to their increased risk. Furthermore, studies from Zimbabwe (PUBMED:34797107), Uganda (PUBMED:35538482), and Ethiopia (PUBMED:37965474) highlight the importance of regular screening for HIV-positive women and the various factors that influence screening uptake, including knowledge, education, and access to healthcare professionals who can provide information about cervical cancer and screening. The review of literature in abstract PUBMED:34380862 emphasizes the increased risk of cervical cancer in HIV-infected women and the lack of organized screening structures for this group, suggesting the need for systematic changes in healthcare services to improve participation in screening programs. In summary, the evidence suggests that HIV-positive women are at a higher risk for cervical cancer and its precursors, and therefore, a more frequent screening interval is necessary for early detection and treatment, rather than prolonging the interval between screenings.
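Several of the cervical-screening abstracts above summarise associations as odds ratios or adjusted odds ratios with 95% confidence intervals. As a generic illustration only, an unadjusted odds ratio can be computed from a two-by-two table of exposure (for example, having received information from a health professional) against outcome (having been screened); the cell labels a-d below are placeholders and do not correspond to counts reported in any of the cited studies.

OR = (a/b) / (c/d) = (a*d) / (b*c)
where a = exposed with the outcome, b = exposed without the outcome,
c = unexposed with the outcome, d = unexposed without the outcome
95% CI = exp( ln(OR) ± 1.96 * sqrt(1/a + 1/b + 1/c + 1/d) )

Adjusted odds ratios such as the AOR of 5.61 reported in the Ugandan study (PUBMED:35538482) come from the same logic applied within a multivariable logistic regression model; a confidence interval that excludes 1 indicates a statistically significant association.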
Instruction: Adolescent varicocele: are somatometric parameters a cause? Abstracts: abstract_id: PUBMED:37664101 Efficacy of varicocelectomy on semen parameters and conception rates. Objective: Varicocele is the dilation of the pampiniform plexus of the spermatic cord. It is discovered incidentally in most patients. Symptoms vary, but its impact is especially experienced in the field of fertility and reflected in semen parameters. Varicocelectomy is a surgical approach to correct the varicocele; however, reports are conflicting regarding its success. Our aim was to evaluate the efficacy of varicocelectomy on semen parameters and conception rates in patients who underwent surgery and its association with comorbidities such as diabetes, hypertension, and obesity. Materials And Methods: This cross-sectional study included the complete medical records of 86 patients. The data collection form included the patient's age, body mass index (BMI), chronic diseases, smoking, surgical history, medication usage, and the reason for performing surgery. Presurgical and postsurgical semen parameters were evaluated 3 months before surgery and an average of 6 months postoperatively. Data were analyzed with SPSS, using the Chi-square test and independent and paired t-tests. Results: No significant difference was found between primary and secondary infertility regarding semen analysis; postoperative semen analysis indicated an improvement in semen motility at 180 min with no effect on other parameters. Using spontaneous intercourse or in vitro fertilization (IVF) after varicocelectomy was significantly associated with increasing conception rates irrespective of the type of infertility. Diabetes, hypertension, BMI, and smoking did not appear to affect semen parameters. Conclusion: Patients who underwent varicocelectomy had improved sperm motility at 180 min and subsequently had a higher chance of successful conception either through spontaneous intercourse or IVF. Further studies are needed to understand the relationship between conception and sperm motility. Semen parameters were not affected if the patient had comorbidities such as diabetes and hypertension. Furthermore, smoking and BMI did not appear to affect the conception rate. abstract_id: PUBMED:25780583 Beneficial effects of varicocele embolization on semen parameters. Background: The value of varicocele repair and its impact on semen parameters are still subject to debate. Methods: We analysed changes over time in initially abnormal sperm parameters and serum concentrations of testosterone, FSH and inhibin B after embolization treatment of males with high-grade varicocele. From 2007 to 2012, we recruited 47 infertile male patients with clinically visible left varicocele in the resting patient and at least one abnormal semen parameter. Sperm parameters and serum levels of total testosterone, FSH and inhibin B were measured prior to retrograde embolization (M0) and then 3 (M3) and 6 (M6) months afterwards. Results: At M0, the median sperm concentration was 5.78 [0.84-37.70] × 10(6)/ejaculate. The mean ± SD sperm progressive motility, vitality and percentage of normal sperm were, respectively, 21.83 ± 16.48%, 61.88 ± 15.98% and 12.88 ± 7.15%. The corresponding values at M3 were significantly higher (38.75 [3.96-95] × 10(6)/ejaculate, 29.32 ± 14.21%, 69.14 ± 14.86% and 19.03 ± 11.02%, respectively). The mean percentage of spermatozoa with a thin head was significantly lower at M6 (6.35 ± 5.29%) than at M0 (14.03 ± 13.09%).
The mean serum testosterone, FSH and inhibin B levels did not change significantly over time. Conclusions: Embolization treatment in men with clinically visible left varicocele, abnormal sperm parameters and documented infertility is associated with a significant improvement in semen parameters, including sperm head morphology. abstract_id: PUBMED:36675673 Nomogram for Predicting Semen Parameters Improvement after Microscopic Varicocelectomy in Infertile Men with Abnormal Semen Parameters. Objectives: Information on the prediction of improved semen parameters following varicocelectomy is scarce and mostly contradictory. Therefore, we developed and validated a nomogram to predict whether abnormal semen parameters in infertile men could improve following microscopic varicocelectomy (MSV). Methods: From January 2018 to December 2021, 460 consecutive patients who underwent MSV were included. Of these, 336 patients served as the development cohort at the Xiang Hua institution. The Hu Nan Center (124 patients) was used as the validation cohort. Clinicopathologic patient information was recorded. The likelihood ratio test using Akaike's information criterion was employed as the stopping rule, and multivariate logistic regression was utilized to create a prediction model with regression coefficients. The effectiveness of this prediction model was evaluated based on its discrimination, calibration, and clinical utility. Results: The initial total progressively motile sperm count (TPMSC) and vein diameter were predictors in this model. The model demonstrated strong discrimination for the validation cohort, with an area under the receiver operating characteristic curve (AUROC) of 0.925 (p < 0.001), and strong calibration (unreliability test, p = 0.522). The decision curve analysis demonstrated the model's clinical applicability. Conclusion: According to our research, the improvement of semen parameters in infertile men following MSV was significantly predicted by greater vein diameter and higher initial TPMSC. This nomogram aids in individualized decision-making on the varicocele preoperative treatment plan and may help to enhance the therapeutic result. abstract_id: PUBMED:37162375 Effect of systemic isotretinoin therapy on semen parameters. Purpose: Vitamin A has multiple functions in the human body, being involved in growth, epithelial differentiation, vision, immune function and reproduction. While normal spermatogenesis is influenced by several factors, it requires vitamin A. Systemic isotretinoin is a vitamin A derivative that is used in the treatment of many dermatological diseases, especially acne vulgaris (AV). There is limited research on the changes in semen parameters after systemic isotretinoin therapy in humans. Our study investigates the presence of varicoceles in patients undergoing systemic isotretinoin therapy for AV and examines whether there were any changes in the semen parameters before and after treatment. Methods: Included in the study were 46 male patients who were scheduled for systemic isotretinoin therapy for AV. Before treatment, the patients underwent a physical examination and ultrasonography for varicocele assessment. The patients underwent a spermiogram before treatment and after 6 months of treatment. The spermiogram assessments included semen volume, sperm concentration, total sperm count, progressive motility, viability and sperm morphology.
Results: After treatment, there was an increase in semen volume, sperm concentration, total sperm count, progressive motility and vitality from the pre-treatment values, but a deterioration in sperm morphology (p < .05). Comparing patients with and without varicoceles revealed more changes in semen parameters after treatment in those with varicoceles. There was a statistically significant difference in sperm concentration (p < .001). Conclusions: Systemic isotretinoin therapy negatively affects sperm morphology but has a positive effect on other semen parameters, and these changes in semen parameters occur more frequently in patients with varicoceles. KEY MESSAGES: Acne vulgaris is a very common disease and systemic isotretinoin is used as the most effective agent in its treatment. Systemic isotretinoin positively affects semen parameters except sperm morphology. Changes in semen parameters are more common in patients with varicocele. abstract_id: PUBMED:36968902 Modified Palomo Procedure Is an Effective Intervention for Improving Serum Testosterone Levels and Semen Parameters in Patients With Varicocele: A Prospective Study. Background: A varicocele can be defined as an abnormal venous dilatation and/or tortuosity of the pampiniform plexus. It is generally reported that varicoceles are present in 15% of the general male population, in 35% of men presenting with primary infertility and in up to 80% of men presenting with secondary infertility. Differences in venous drainage anatomy between the left and right internal spermatic veins, venous valve incompetence resulting in venous blood reflux, and increased hydrostatic pressure are the most commonly cited theories. Various surgical and non-surgical techniques are in use for treating patients with varicocele. Here we used a modified Palomo procedure to treat the patients and observed the outcome. Methodology: A total of 40 patients with varicocele were recruited for the study. A preoperative evaluation, along with serum testosterone levels and semen analysis, was done. A modified Palomo technique was used to treat varicocele. A postoperative follow-up with serum testosterone levels and semen analysis was done to observe improvement. Results: The mean (±SD) left testis size, right testis size, testosterone, sperm concentration, sperm vitality and sperm progressive motility were statistically significantly higher in patients after surgery as compared to before surgery (p<0.05). However, the mean difference in semen volume between before and after surgery was not statistically significant (p>0.05). Conclusion: The modified Palomo procedure can be used to treat varicocele, with good improvement in serum testosterone levels and semen parameters. abstract_id: PUBMED:29766149 Alteration in CatSper1 and 2 genes expression, sperm parameters and testis histology in varicocelized rats. Background: The CatSper gene, a member of the cation channel of sperm family, has an essential role in sperm motility and male fertility. Following varicocele, sperm parameters, especially sperm movement, decrease. For this reason, we hypothesized that CatSper gene expression might be reduced after varicocele induction in an animal model. Objective: The aim of this study was to evaluate the expression of CatSper 1 and 2 genes, sperm parameters and testis histology following varicocele induction. Materials And Methods: A total of 30 Wistar male rats were randomly divided into the following three groups (n=10 each): control, sham, and varicocele group.
Experimental varicocele was induced by partial ligation of the left renal vein. The epididymal sperm parameters, CatSper1 and 2 gene expression, and testis histology were studied two months after varicocele induction. Results: Our results revealed that the motility (32.73±16.14%), morphology (48.80±17%) and viability (31.23±9.82%) of sperm were significantly reduced following varicocele induction. In addition, we showed a significant decrease in the number of spermatogonia (43.63±5.31) and seminiferous tubule diameters (190.51±19.23 mm) in experimental varicocele rats. The level of CatSper1 and 2 gene expression, evaluated using real-time polymerase chain reaction, was significantly downregulated 2 months after varicocele induction. Conclusion: Our data indicated that experimental varicocele has deleterious effects on sperm parameters and testis structure, as well as on the expression of CatSper 1 and 2 genes. abstract_id: PUBMED:35842930 Effect of redo varicocelectomy on semen parameters and pregnancy outcome: An original report and meta-analysis. Recurrence following varicocelectomy is an important cause of treatment failure and persistence of subnormal semen parameters. This original study was combined with a systematic review and meta-analysis aiming to evaluate the efficacy of redo varicocelectomy on male fertility potential and pregnancy outcome. The retrospective study included 32 patients who underwent microsurgical subinguinal varicocelectomy for recurrent varicocele. Changes in semen parameters and hormone profiles before and after surgery were compared. The literature review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses and included seven articles in addition to our original report. Results of the original study revealed statistically significant improvements in sperm concentration, progressive motility, total motile sperm count and normal morphology following redo varicocelectomy. The meta-analysis results echoed those reported in our original study and depicted significant improvements in sperm concentration (mean difference [MD] = +20.281 million/ml, p < 0.001), total motility (MD = +9.659%, p = 0.001), total motile sperm count (MD = +23.258 million sperm, p < 0.001) and normal morphology (MD = +4.460%, p < 0.001). Overall pregnancy outcome was reported in seven studies, with a rate of 34.6%. No significant changes were noted in any of the collected hormone results in either this original report or the meta-analysis. In conclusion, redo varicocelectomy has a beneficial effect on male fertility potential and can be offered to men with recurrent varicocele as directed by their individual clinical condition. abstract_id: PUBMED:27878108 Can Ultrasound Findings be a Good Predictor of Sperm Parameters in Patients With Varicocele? A Cross-Sectional Study. Background: Varicocele, one of the most common causes of sperm abnormalities in men, is the dilation of the testicular vein (the pampiniform venous plexus) by more than 2 mm. Ultrasonography is the primary imaging procedure for varicocele diagnosis. The venous diameter is evaluated using B-mode sonography, and venous reflux is assessed using color Doppler sonography. Objectives: In this study, we investigated the association of ultrasound findings with sperm parameters in patients with varicocele. Methods: Ninety-nine patients with clinically diagnosed varicocele were included in this study, which was performed at a university imaging center.
Patients were evaluated for pampiniform venous plexus diameter, venous reflux, and testicular size on both sides. Semen analysis was performed for all patients. The ultrasonography parameters and semen analysis data were compared to assess the correlations between the results. Results: The semen analysis parameters, including semen volume and sperm morphology, count, and motility, showed significant correlations with the diameter of the varicocele vein but were not correlated with testis size. In addition, the patients with reflux of longer than 1 s and those with reflux of less than 1 s had significant differences in sperm parameters and varicocele size. Conclusions: Our study showed that ultrasonography findings can be a good predictor of sperm parameters in patients with varicocele. abstract_id: PUBMED:25132948 Adolescent varicocele: are somatometric parameters a cause? Purpose: It has been reported that varicocele is found less frequently in obese men. Accordingly, we evaluated varicocele patients and statistically analyzed the correlation between varicocele and somatometric parameters. Materials And Methods: A total of 211 patients underwent surgery for varicoceles. All patients underwent history taking, physical examination, and scrotal ultrasound to determine the presence and severity of varicocele. An age-matched control group consisted of 102 patients who were found not to have varicocele according to physical examinations and scrotal ultrasound. The age, weight, height, and body mass index (BMI) of the two groups were compared. The statistical analyses were performed by use of PASW Statistics ver. 18.0. A p-value of less than 0.05 was used for statistical significance. Results: In the varicocele group, the mean age, height, weight, and BMI were 29.42±14.01 years, 168.53±9.97 cm, 62.14±13.17 kg, and 21.66±3.21 kg/m(2), respectively. The distribution of varicocele grade was as follows: 103 (48.8%) grade III, 72 (34.1%) grade II, and 36 (17.1%) grade I. In the control group, the mean age, height, weight, and BMI were 30.83±17.31 years, 161.93±19.83 cm, 64.69±17.86 kg, and 24.04±3.64 kg/m(2), respectively. When these data were analyzed specifically in adolescents, there were significant differences in age, height, and BMI (p=0.000, p=0.000, and p=0.004, respectively) between the two groups. There were no significant differences in somatometric parameters between patients with different grades of varicocele. Conclusions: Our results showed that patients with varicoceles were significantly taller and had a lower BMI than did patients without varicoceles, especially among adolescents. Carefully designed future studies may be needed. abstract_id: PUBMED:33141946 Determination of the time for improvement in semen parameters after varicocelectomy. This retrospective study aimed to determine the time for improvement in semen parameters after varicocelectomy. A total of 175 patients over 18 years of age, who underwent microscopic inguinal varicocelectomy in a tertiary centre between 2005 and 2016 and had post-operative follow-up data of at least 12 months, were included in the study. Serial semen analyses in the pre-operative period and at the post-operative third, sixth, ninth and 12th months were retrospectively documented. With respect to their pre-operative total motile sperm counts, the patients were classified into three subgroups: <5 million, 5-9 million and >9 million.
In the overall cohort and the group with a total motile sperm count of >9 million, the semen parameters continued to improve up to the post-operative 12th month. No further significant improvement was observed after the post-operative third and sixth months for the patients with pre-operative total motile sperm counts of <5 million and 5-9 million, respectively. For candidates for varicocelectomy, decisions about treatment strategies, such as assisted reproductive technologies or waiting for spontaneous pregnancy, can be based on the predicted time for improvement in semen parameters. Answer: The study titled "Adolescent varicocele: are somatometric parameters a cause?" (PUBMED:25132948) aimed to evaluate the correlation between varicocele and somatometric parameters in adolescent patients. The research included 211 patients who underwent surgery for varicoceles and an age-matched control group of 102 patients without varicocele. The study compared age, weight, height, and body mass index (BMI) between the two groups. The results indicated that in the varicocele group, patients were significantly taller and had a lower BMI than those in the control group, especially among adolescents. There were no significant differences in somatometric parameters between patients with different grades of varicocele. The study concluded that adolescent patients with varicoceles were taller and had a lower BMI compared to those without varicoceles. However, the study suggests that carefully designed future studies are needed to further explore these findings. This study suggests that there may be a correlation between somatometric parameters and the presence of varicocele in adolescents, but it does not establish a causal relationship. More research is needed to understand the potential mechanisms behind this association and whether somatometric parameters can be considered a cause of varicocele in adolescents.
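The somatometric comparison in the adolescent varicocele study (PUBMED:25132948) rests on body mass index, which is weight in kilograms divided by the square of height in metres. The arithmetic below is purely illustrative, applying the formula to the group means reported above; it is not a re-analysis, and the result differs slightly from the reported mean BMI because BMI is calculated per subject before averaging.

BMI = weight (kg) / height (m)^2
Varicocele group means: 62.14 / (1.6853)^2 ≈ 62.14 / 2.84 ≈ 21.9 kg/m^2 (reported mean BMI 21.66)
Control group means: 64.69 / (1.6193)^2 ≈ 64.69 / 2.62 ≈ 24.7 kg/m^2 (reported mean BMI 24.04)

The lower figure in the varicocele group reflects the combination of greater height and lower weight, which is consistent with the study's conclusion that adolescents with varicocele tend to be taller with a lower BMI.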
Instruction: Does the introduction of a comprehensive structured training programme for endoscopic vein harvesting improve conduit quality? Abstracts: abstract_id: PUBMED:25415312 Does the introduction of a comprehensive structured training programme for endoscopic vein harvesting improve conduit quality? A multicentre pilot study. Objectives: Endoscopic vein harvesting is one of the most popular minimally invasive vein-harvesting techniques for coronary artery bypass graft surgery. It is associated with improved cosmetic outcome and fewer wound-related problems compared with the conventional open technique. However, its efficacy with regard to conduit damage and long-term patency has recently been questioned. Learning curve-associated trauma to the vein has a major impact on vein quality and the incidence of graft failure post-surgery. In an attempt to address this problem, we have devised and tested a learning tool termed Manchester Endoscopic Learning Tool (MELT). In this study, we compare vein quality following MELT training with standard recommended training. Methods: Fourteen practitioners across seven UK centres were enrolled into the study. Practitioners were categorized into two groups receiving MELT or standard training. Data were collected prospectively from the first eight vein retrievals per operator following training. A total of n = 112 vein-harvesting procedures were included in the study. Results: Veins harvested by MELT practitioners had fewer small avulsions (P <0.001), required fewer repairs (P <0.001) and experienced a lower incidence of bruising (P <0.001) than veins obtained by practitioners receiving standard training. The incidence of very short side branches requiring repair was also significantly reduced (P <0.001) in the MELT group compared with standard training. Conclusions: Our formalized training programme consistently minimizes vein trauma resulting in better-quality conduits when compared with the current standard training. Exposure of surgical practitioners to the structured curriculum during their endoscopic vein harvesting training will enhance their learning and lead to better-quality conduits. This is likely to impart clinical benefit post-surgery. abstract_id: PUBMED:27059309 A comprehensive review on learning curve associated problems in endoscopic vein harvesting and the requirement for a standardised training programme. Endoscopic vein harvesting is becoming one of the most favourable vein harvesting techniques in multiple bypass coronary surgery, due to its short term post-operative benefits with high patient satisfaction. However, long-term graft patency has been both supported and questioned in the literature. Graft failure can be affected by harvesting methods and operator's experience. Endoscopic vein harvesting is associated with a learning curve period, during which the incidence of vein trauma is high due to unfamiliarity with the surgical technique. There is a paucity of structured learning tools for novice practitioners, meaning that training differs significantly between hospital centres. Inconsistent training methods can lead to poor surgical technique, which can have a significant impact on vein quality and stress level of the practitioner. In turn, this can lead to increased postoperative complications and longer surgical duration. 
The main aim of this literature review is to understand the impact of the learning curve on the vein conduit and whether there is a requirement for a standardised training programme for novice practitioners. abstract_id: PUBMED:24251019 Endoscopic vein harvesting: technique, outcomes, concerns & controversies. The choice of the graft conduit for coronary artery bypass grafting (CABG) has significant implications both in the short- and long-term. The patency of a coronary conduit is closely associated with an uneventful postoperative course, better long-term patient survival and superior freedom from re-intervention. The internal mammary artery is regarded as the primary conduit for CABG patients, given its association with long-term patency and survival. However, the long saphenous vein (LSV) continues to be utilized universally as patients presenting for CABG often have multiple coronary territories requiring revascularization. Traditionally, the LSV has been harvested by creating incisions from the ankle up to the groin, termed open vein harvesting (OVH). However, such harvesting methods are associated with incisional pain and leg wound infections. In addition, patients find such large incisions to be cosmetically unappealing. These concerns regarding wound morbidity and patient satisfaction led to the emergence of endoscopic vein harvesting (EVH). Published experience comparing OVH with EVH suggests decreased wound-related complications, improved patient satisfaction, shorter hospital stay, and reduced postoperative pain at the harvest site following EVH. Despite these reported advantages, concerns regarding the risk of injury at the time of harvest, and its potential detrimental effect on vein graft patency and clinical outcomes, have prevented universal adoption of EVH. This review article provides a detailed insight into the technical aspects, outcomes, concerns, and controversies associated with EVH. abstract_id: PUBMED:35640590 Endoscopic vein harvesting: a guide for approaching difficult cases and assessing patients preoperatively. Coronary artery bypass grafting remains the most commonly performed cardiac surgical procedure worldwide. The long saphenous vein remains the first-choice conduit for a second graft in multivessel coronary artery bypass grafting surgery. Traditionally, the long saphenous vein has been harvested with an open approach, which can potentially result in significant wound complications in certain circumstances. Endoscopic vein harvesting is a minimally invasive vein harvesting technique, which requires a single 2-3 cm incision and is associated with a quicker return to normal daily activities, decreased wound complications and better quality of life in the longer term. There is a learning curve associated with endoscopic vein harvesting adoption and there are certain patient factors that can prove to be challenging when adopting an endoscopic approach. This commentary aims to provide a concise guide to certain challenging patient factors that operators may encounter during endoscopic vein harvesting, and how to approach these patients in both the preoperative and intraoperative settings. We suggest that with appropriate planning and awareness of the challenging patient factors and problematic venous anatomy that exist, the operator can consistently formulate a strategy for ensuring a successful endoscopic harvest. abstract_id: PUBMED:33841978 Endoscopic vein harvesting.
Coronary artery bypass grafting is the most common cardiac surgical procedure performed worldwide and the long saphenous vein the most common conduit for this. When performed as an open vein harvest (OVH), the incision on each leg can be up to 85cm long, making it the longest incision of any routine procedure. This confers a high degree of morbidity to the procedure. Endoscopic vein harvest (EVH) methods were popularised over two decades ago, demonstrating significant benefits over OVH in terms of leg wound complications including surgical site infections. They also appeared to hasten return to usual activities and wound healing and became popular particularly in North America. Subgroup analyses of two trials designed for other purposes created a period of uncertainty between 2009-2013 while the impact of endoscopic vein harvesting on vein graft patency and major adverse cardiac events was scrutinised. Large observational studies debunked the findings of increased mortality in the short-term, allowing practitioners and governing bodies to regain some confidence in the procedure. A well designed, adequately powered, randomised controlled trial published in 2019 also definitively demonstrated that there was no increase in death, myocardial infarction or repeat revascularisation with endoscopic vein harvest. Endoscopic vein harvest is a Class IIa indication in European Association of Cardio-Thoracic Surgery (EACTS) and a Class I indication in International Society of Minimally Invasive Cardiac Surgery (ISMICS) guidelines. abstract_id: PUBMED:37994624 A Novel Technique of No-Touch Endoscopic Saphenous Vein Harvesting. No-touch saphenous vein harvesting is considered an ideal procedure for harvesting good-quality veins, whereas endoscopic vein harvesting (EVH) is considered ideal for decreasing wound complications. The no-touch EVH (NT-EVH) technique, which was developed in 2018, is one of the best vein-harvesting procedures. We have improved this method in several aspects since our initial experience. Herein, we present the details of this technique. abstract_id: PUBMED:37830754 Vein Morphology in Endoscopic Vein Harvesting: 15-Year Outcomes. Objective: Long-term outcomes of vein morphologies in the endoscopic vein harvesting setting are lacking. We analyzed 15-year follow-up for endoscopic versus open vein harvesting and associated vein morphology outcomes in coronary artery bypass grafting at a single center. Methods: This single-center, prospectively collected, retrospective, observational study evaluated 15-year major adverse cardiovascular events (MACE) of endoscopic versus open vein harvesting and associated vein morphologies. Results: Among 729 patients, no differences were found between endoscopic and open vein harvesting in MACE hazard ratios despite increased risks with native Hawai'ian, Pacific Islander, and Filipino patients. Asian and Filipino patients had more thin-walled veins and Asian and multirace patients had more vasovasoral branches than White patients. Bifid veins were associated with increased risk of cardiac catheterization. Varicose veins were associated with myocardial infarction, congestive heart failure, and death. Thick-walled veins were associated with revascularization and dense adhesions with cardiac catheterization and revascularization. Conclusions: Fifteen-year adjusted MACE was similar between endoscopic and open vein harvesting in small coronary targets despite increased risks with native Hawai'ian, Pacific Islander, and Filipino patients. 
Bifid, varicose, and thick-walled veins and dense adhesions were associated with worse MACE. abstract_id: PUBMED:33155775 The endoscopic no-touch saphenous vein harvesting technique. The no-touch saphenous vein harvesting technique is being increasingly used; however, this technique causes more leg wound complications than conventional techniques. Endoscopic saphenous vein harvesting is considered a safe and effective approach for reducing leg complications, although experience with this technique remains limited because leg CO2 insufflation and dissection with a tip cannula to isolate the vein cause the graft to be naturally skeletonized. In this video tutorial, we demonstrate our endoscopic no-touch saphenous vein harvesting technique using a reusable saphenous vein retractor system without CO2 insufflation and an electrothermal bipolar vessel sealing device. abstract_id: PUBMED:30990577 Endoscopic saphenous vein harvesting with a non-sealed approach. Endoscopic saphenous vein harvesting is considered a safe and effective approach for graft procurement during coronary artery bypass graft surgery. As an alternative to open techniques, minimally invasive approaches are associated with superior outcomes in terms of morbidity and patient discomfort, without affecting graft quality, patency rates, or long-term outcomes. Both sealed and non-sealed systems are available for endoscopic conduit harvesting. Sealed systems require carbon dioxide gas insufflation, which may have a damaging impact on the endothelium. Non-sealed systems do not require carbon dioxide insufflation and thus avoid the potential drawbacks of its use. In this video tutorial, we provide a thorough description of the surgical technique for endoscopic saphenous vein harvesting using a non-sealed approach. abstract_id: PUBMED:32359067 Predictive value of great saphenous vein mapping prior to endoscopic harvesting in coronary artery bypass surgery. Objectives: The use of endoscopic vein harvesting in patients undergoing coronary artery bypass grafting is increasing, often using bedside mapping. However, data on the predictive value of great saphenous vein (GSV) mapping are scarce. This study assessed whether preoperative mapping could predict final conduit diameter. Methods: A prospective registry was created that included 251 patients. Saphenous vein mapping was performed prior to endoscopic vein harvesting at 3 predetermined sites. After harvesting and preparing the GSV, the outer diameters were measured. Appropriate graft size was defined as an outer diameter between 3 and 6 mm. Results: A total of 753 GSV segments were analysed. The average mapping diameter was 3.2 ± 0.7 mm. The harvested GSV had a mean diameter of 4.7 ± 0.8 mm. Mapping diameters were significantly positively correlated with actual GSV diameters (correlation coefficient, 0.47; P < 0.001). If the preoperative mapping diameters were between 1.5 and 5 mm, 96.6% of the GSVs had suitable dimensions after endoscopic vein harvesting. Conclusions: Preoperative bedside mapping moderately predicted final GSV size after endoscopic harvesting but could not detect unsuitable vein segments. However, the majority of endoscopically harvested GSVs had diameters suitable to be used as coronary bypass grafts. Answer: Yes, the introduction of a comprehensive structured training programme for endoscopic vein harvesting does improve conduit quality.
A study that compared vein quality following training with the Manchester Endoscopic Learning Tool (MELT) to standard recommended training found that veins harvested by MELT-trained practitioners had fewer small avulsions, required fewer repairs, and experienced a lower incidence of bruising than those obtained by practitioners receiving standard training. The incidence of very short side branches requiring repair was also significantly reduced in the MELT group compared to the standard training group (PUBMED:25415312). This suggests that a formalized training program can consistently minimize vein trauma, resulting in better-quality conduits, which is likely to impart clinical benefits post-surgery.
Instruction: Do cancer survivors develop healthier lifestyle behaviors than the cancer-free population in the PLCO study? Abstracts: abstract_id: PUBMED:27837443 Do cancer survivors develop healthier lifestyle behaviors than the cancer-free population in the PLCO study? Background: Current studies report mixed results in health status and health behaviors after a diagnosis of cancer. The aim of our study is to investigate potential differences in lifestyle factors among cancer survivors and cancer-free individuals in a prospective cohort study conducted in the United States. Methods: Using data from the Prostate, Lung, Colorectal and Ovarian (PLCO) Trial, 10,133 cancer survivors were identified and compared to 81,992 participants without cancer to evaluate differences in body mass index (BMI), smoking, NSAID use, and physical activity. Results: Cancer survivors, compared to the cancer-free, were significantly less likely to engage in physical activity (odds ratio (OR) = 0.82, 95% CI = 0.77-0.88). Compared to those who were obese at baseline, cancer survivors were more likely to be at normal BMI at follow-up compared to the cancer-free (OR = 1.90, 95% CI = 1.42-2.54). Cancer survivors were less likely to report regular aspirin use as compared to the cancer-free population (OR = 0.86, 95 % CI = 0.82-0.92). Of the current smokers, cancer survivors were more likely to be former smokers at follow-up compared to the cancer-free (OR = 1.50, 95% CI = 1.30-1.74). Conclusion: Upon stratification by baseline health markers, cancer survivors practice healthier lifestyle habits such as smoking cessation and maintenance of a healthy weight. However, cancer survivors are less likely to be physically active as compared to cancer-free individuals, regardless of baseline practices. Implications For Cancer Survivors: For cancer survivors who reported poor health status and behaviors at baseline, a cancer diagnosis may encourage the practice of healthier lifestyle behaviors. abstract_id: PUBMED:26820142 Lifestyle behaviors among US cancer survivors. Purpose: We describe and compare lifestyle behaviors, including smoking, physical activity, alcohol consumption, and nutrition, among cancer survivors to individuals with no cancer. Methods: Data from the 2013 Behavior Risk Factor Surveillance System were used for this cross-sectional study. Weighted analysis was performed, and associations were examined by adjusted prevalence ratios (APRs) and 95 % confidence intervals (CIs). Results: Comparing survivors to individuals with no cancer history, differences were found for a smoking quit attempt (APR 1.08; CI 1.04, 1.12), physical inactivity (APR 1.11; CI 1.07, 1.15), and binge drinking (APR 0.89; CI 0.83, 0.95). An interaction with gender was observed when examining smoking and heavy drinking. Smoking was lower (APR 0.85; CI 0.79, 0.92) among male survivors than males with no cancer history, while higher (APR 1.25; CI 1.18, 1.32) among female survivors compared to females with no cancer history. Heavy drinking (APR 0.85; CI 0.73, 0.98) was lower among male survivors than males with no cancer history, while cancer survivorship was not associated with heavy drinking among females. No differences existed for fruit and vegetable consumption or body mass index. Conclusions: US cancer survivors are not more likely than the general population to engage in all healthy lifestyle behaviors. 
Interventions, including improved physician communication, to reduce physical inactivity among all cancer survivors and cigarette smoking among female survivors are needed. Implications For Cancer Survivors: Cancer survivors are at increased risk for comorbid conditions, and acceptance of healthy behaviors may reduce dysfunction and improve long-term health. Ultimately, opportunities exist for clinicians to promote lifestyle changes that may improve the length and quality of life of their patients. abstract_id: PUBMED:34066093 A Classification Approach for Cancer Survivors from Those Cancer-Free, Based on Health Behaviors: Analysis of the Lifelines Cohort. Health behaviors affect health status in cancer survivors. We hypothesized that nonlinear algorithms would identify distinct key health behaviors compared to a linear algorithm and better classify cancer survivors. We aimed to use three nonlinear algorithms to identify such key health behaviors and compare their performances with that of a logistic regression for distinguishing cancer survivors from those without cancer in a population-based cohort study. We used six health behaviors and three socioeconomic factors for analysis. Participants from the Lifelines population-based cohort were classified, in a binary fashion, into a cancer-survivors group and a cancer-free group using either nonlinear algorithms or logistic regression, and their performances were compared by the area under the curve (AUC). In addition, we performed case-control analyses (matched by age, sex, and education level) to evaluate classification performance only by health behaviors. Data were collected for 107,624 cancer-free participants and 2760 cancer survivors. Using all variables resulted in an AUC of 0.75 ± 0.01; using only the six health behaviors, the logistic regression and nonlinear algorithms differentiated cancer survivors from cancer-free participants with AUCs of 0.62 ± 0.01 and 0.60 ± 0.01, respectively. The main distinctive classifier was age. Though not relevant to classification, the main distinctive health behaviors were body mass index and alcohol consumption. In the case-control analyses, algorithms produced AUCs of 0.52 ± 0.01. No key health behaviors were identified by the linear and nonlinear algorithms to differentiate cancer survivors from cancer-free participants in this population-based cohort. abstract_id: PUBMED:31016603 Health and lifestyle behaviors in colorectal cancer survivors with and without Lynch syndrome. Lynch syndrome (LS), a hereditary cancer syndrome, accounts for approximately 3% of colorectal cancers (CRC). Positive health behaviors and surveillance are preventive strategies, but research on whether recommended behavioral guidelines are followed by individuals with LS is limited. Additional health education and promotion could be beneficial to the improved survivorship of CRC survivors. The aim was to explore health and lifestyle behaviors in CRC survivors with and without LS. We conducted a case-control study of CRC survivors with and without LS using a mailed questionnaire. Recruitment was conducted via patient registries at The University of Texas MD Anderson Cancer Center (cases n = 33; controls n = 75) and through social media (cases n = 42). CRC survivors with and without LS in our study had substantially lower smoking prevalence (5.5% and 2.7%) compared to national prevalence (18.0%).
However, they had higher levels of alcohol consumption (36.8% and 10.3% for male and female LS survivors, respectively, and 35.8% and 22.0% for male and female sporadic survivors, respectively) compared to national prevalence of 13.88% for males and 6.02% for females. Both groups of CRC survivors participate in negative health behaviors that impact survivorship. More research is needed to examine the relationship between personal engagement in preventive behaviors and patient-provider relationships to improve health behaviors and explore strategies for intervention. Additionally, better health education and lifestyle change recommendations would promote and reinforce positive health outcomes in the CRC population and especially in LS survivors. abstract_id: PUBMED:37210076 Healthier Energy Balance Behaviors Most Important for Health-Related Quality of Life in Rural Cancer Survivors in Central Pennsylvania. Background: Rural cancer survivors face a greater number of health disparities, including poorer health-related quality of life (HRQoL), than urban cancer survivors. Engagement in healthy lifestyle behaviors also varies between rural and urban cancer survivors. Lifestyle behaviors can improve HRQoL; however, the combination of behaviors most important for HRQoL in rural survivors is unclear. This study examined clusters of lifestyle behaviors in rural cancer survivors, and differences in HRQoL between behavioral clusters. Methods: Rural cancer survivors in the United States (N = 219) completed a cross-sectional survey. Lifestyle behaviors were classified into unhealthy/healthy binary categories (inactive/active, longer/shorter sedentary time, excessive/acceptable fat intake, very low/higher fruit and vegetable intake, some/no alcohol consumption, and poor/good sleep quality). Behavioral clusters were identified by latent class analysis. HRQoL differences between behavioral clusters were assessed by ordinary least squares regression. Results: The 2-class model demonstrated the best fit and interpretability. The "mostly unhealthy behaviors" class (38.5% of sample) had higher probabilities of all unhealthy behaviors, except alcohol consumption. The "healthier energy balance" class (61.5% of sample) had higher probabilities of active, shorter sedentary, higher fruit and vegetable consumption, excessive fat intake, some alcohol consumption, and poor sleep categories, and reported better HRQoL. Conclusions: Healthier energy balance behaviors were particularly relevant for HRQoL in rural cancer survivors. Multiple behavior change interventions to improve HRQoL in rural cancer survivors should focus on supporting energy balance behaviors. Many rural cancer survivors may lead very unhealthy lifestyles, placing them at high risk of adverse outcomes. This subpopulation should be prioritized to help alleviate cancer health disparities. abstract_id: PUBMED:25349070 Healthy Lifestyle Behaviors of Breast Cancer Survivors. The purpose of this secondary analysis was to describe the extent to which women with breast cancer, who participated in a randomized control trial on exercise, adopted American Cancer Society (ACS) guidelines for healthy lifestyle behaviors. Women in the study exercised during cancer treatment and for 6 months after completion of treatment. The sample included 106 women, average age 50.7 years (SD = 9.6). Adherence to guidelines for 5 servings of fruits and vegetables ranged from 36% (n = 28) to 39% (n = 36). 
Adherence with alcohol consumption guidelines was 71% (n = 28) to 83% (n = 30). Adherence with meeting a healthy weight ranged from 52% (n = 33) to 61% (n = 31). Adherence with physical activity guidelines ranged from 13% (n = 30) to 31% (n = 35). Alcohol and healthy weight guidelines were followed by more than half of the participants, but physical activity and dietary guidelines were followed by far fewer women. Further prospective clinical studies are indicated to determine whether interventions are effective in producing a healthy lifestyle in cancer survivors. abstract_id: PUBMED:33040283 Prevalence of lifestyle behaviors and associations with health-related quality of life among older female cancer survivors. Purpose: Healthy lifestyles including maintaining a normal weight, consuming a healthy diet, and being physically active can improve prognosis and health-related quality of life (HRQoL) among cancer survivors. The largest proportion of cancer survivors are older adults (≥ 65 years), yet their lifestyle behaviors are understudied. This study sought to examine the lifestyle behaviors (maintaining healthy weight, diet quality, physical activity) of older female cancer survivors and identify associations with HRQoL. Methods: Older female cancer survivors (n = 171) completed surveys to assess HRQoL (RAND-36), unintentional weight loss, body mass index (BMI), diet quality (HEI-2015), and physical activity. Demographic information and medical record data were also collected. Descriptive analyses, correlations, and stepwise linear regressions were utilized. Results: Physical and mental HRQoL of the sample (mean age = 74.50 years) were low: 41.94 ± 10.50 and 48.47 ± 7.18, respectively, out of 100. Physical activity was low: 75.3%, 54.2%, and 68.1% reported no strenuous, moderate, and mild physical activity, respectively. Mean BMI was 27.71 ± 6.24 (overweight) with 64% of the participants being overweight or obese. Mean HEI-2015 scores were 66.54 ± 10.0. Engagement in moderate physical activity was associated with higher physical HRQoL (β = 0.42, p = 0.004). Being white (β = 0.53, p < 0.001), older (β = 0.27, p = 0.025), and having higher HEI scores (β = 0.30, p = 0.011) were associated with higher mental HRQoL. Conclusions: Older cancer survivors report poor diet quality, high rates of being overweight or obese, and low levels of physical activity that impact their HRQoL. Results indicate the need for tailored health coaching for older cancer survivors regarding their lifestyle behaviors to improve prognosis and HRQoL. abstract_id: PUBMED:30343494 The treasure of now and an uncertain future: Delay discounting and health behaviors among cancer survivors. Background: The identification of new therapeutic targets to improve health behaviors among cancer survivors (CS) is likely to improve cancer treatment outcomes. Delay discounting (DD) rate is the degree to which one devalues rewards as a function of time to receipt. Lower DD rates (ie, prioritizing long-term over immediate rewards) are associated with healthier behaviors. CS often experience distress and thoughts of early mortality that can potentially shift priorities to the present and negatively impact DD rates, especially when newly diagnosed. Understanding relations between DD and health behaviors among CS will contribute to the examination of DD as a therapeutic target for improving health behaviors for CS. Method: CS (n = 1001) were recruited from a web panel and administered a web-based questionnaire. 
Multivariate models examined relations among DD rate, years since diagnosis, and 10 health behavior indicators. We hypothesized that 1) higher DD rates would be associated with fewer years since diagnosis, and 2) lower DD rates would be associated with healthier behaviors, moderated by years since diagnosis. Results: In general, higher DD rates were associated with fewer years since diagnosis. Higher DD rates were associated with more alcohol consumption, cigarette smoking, other tobacco use, tanning booth use, and conversely, greater adherence to annual primary care visits. A significant interaction between DD rate and years since diagnosis was not found. Conclusion: Lower DD rates are associated with several important healthy lifestyle behaviors. DD rate is a promising therapeutic target for new interventions to improve multiple health behaviors among CS. abstract_id: PUBMED:28188570 Recruiting Endometrial Cancer Survivors to Studies Examining Lifestyle Behaviors and Quality of Life: Challenges Faced and Lessons Learned. There are a growing number of cancer survivors in the USA. Despite lifestyle behaviors being strongly associated with morbidity and mortality following primary treatment, recruitment to clinical research studies that aim to improve such behaviors remains a significant challenge. Furthermore, pilot and feasibility studies are critical for the refinement of research methods and form an important training opportunity for early career scientists. This report details the challenges faced and lessons learned in the process of recruiting a population of overweight/obese endometrial cancer survivors (ECS) to two separate studies focused on lifestyle behaviors: a survey study and a randomized behavioral intervention study that aimed to improve diet, physical activity, and quality of life. We used in-clinic and mail-based approaches to reach eligible patients identified from clinic records. Surveys were offered via paper or online. To evaluate the recruitment process, we compared clinic records and enrollment data over time and location. Chi-squared tests were also used to compare recruitment strategies. We address specific challenges at the patient level, the clinic/provider level, and the organizational level. Overall response rate was 13.9% to the survey and 4% to the intervention. Responses to in-clinic offers were greater than to mail-based approaches for the survey with no difference for the intervention. Identifying the unique characteristics of each survivorship population, adequate planning, resource allocation, and involvement of key staff are essential to supporting recruitment efforts to research studies. Having the support of physicians and nurses is especially critical to the success of recruitment. abstract_id: PUBMED:34790854 Health-Promoting Lifestyle in Colorectal Cancer Survivors: A Qualitative Study on the Experiences and Perspectives of Colorectal Cancer Survivors and Healthcare Providers. Objective: Understanding the experiences of survivors and healthcare providers about health-promoting lifestyle (HPL) in colorectal cancer (CRC) survivors is important in planning for coping with the disease, managing treatment side effects, increasing survival, and improving quality of life (QOL). This study was conducted to explore the experiences and perspectives of CRC survivors and healthcare providers about HPL in CRC survivors. Methods: This descriptive qualitative study was performed in 2020 at Omid and Imam Reza Hospitals in Mashhad, Iran. 
Participants were CRC survivors (n = 12) and healthcare providers (n = 33) who were selected by purposive sampling. Data were collected through in-depth, face-to-face semi-structured interviews and then analyzed using the Zhang and Wildemuth content analysis method. MaxQDA software was used to organize the data. Results: Following the treatment of cancer, CRC survivors seek to make lifestyle changes and choose an HPL that maintains or improves their health. HPL in CRC survivors includes nutrition, activity and rest, health responsibility, interpersonal relations, spiritual growth, and psychological management. The results showed that an HPL can increase motivation and self-care ability, improve daily performance, reduce treatment complications, and increase QOL. Conclusions: CRC survivors can help change their lifestyle patterns with healthy eating, treatment adherence, regular physical activity, and good sleep habits. Furthermore, effective personal and social relationships, spiritual growth, and management of psychological disorders foster health-promoting behaviors in these survivors. CRC survivors also face challenges and limitations in their lives after treatment; identifying the components of an HPL in CRC survivors can lead to desirable care, treatment, education, and counseling services. Answer: According to the findings from the Prostate, Lung, Colorectal and Ovarian (PLCO) Trial, cancer survivors did not uniformly develop healthier lifestyle behaviors compared to the cancer-free population. The study found that cancer survivors were significantly less likely to engage in physical activity compared to individuals without cancer (OR = 0.82, 95% CI = 0.77-0.88). However, among participants who were obese at baseline, cancer survivors were more likely than the cancer-free population to be at a normal BMI at follow-up (OR = 1.90, 95% CI = 1.42-2.54). Additionally, cancer survivors were less likely to report regular aspirin use compared to the cancer-free population (OR = 0.86, 95% CI = 0.82-0.92). Among current smokers, cancer survivors were more likely to be former smokers at follow-up compared to those without cancer (OR = 1.50, 95% CI = 1.30-1.74). These findings suggest that while cancer survivors may practice healthier habits such as smoking cessation and maintaining a healthy weight, they are less physically active than their cancer-free counterparts (PUBMED:27837443).
Instruction: Are tuberculosis advisory committees well-functioning? Abstracts: abstract_id: PUBMED:15729887 Are tuberculosis advisory committees well-functioning? Purpose: To evaluate the functional status of TB advisory committees in assessing tuberculosis treatment. Object And Method: Assessment by questionnaires completed by public health nurses attending seminars on tuberculosis at the Research Institute of Tuberculosis. Result: 137 responses were available for analysis. Of these, 57 (41.6%) TB advisory committees were estimated either not to assess tuberculosis treatments at all or to assess treatments without the necessary information on drug sensitivity in roughly half of the cases or more. In 13 (16.3%) of the other 80 committees, many cases were in fact self-assessed. Only 44 committees (32.1%) were estimated to be functioning well. Conclusion: Many TB advisory committees appear to be malfunctioning from the standpoint of treatment assessment. As TB advisory committees are key agencies for controlling drug-resistant tuberculosis, their reform and revitalization are urgently needed. abstract_id: PUBMED:12235847 Tuberculosis control in Kobe City--analyses of advisory contents and disqualified cases by integrated tuberculosis advisory committee The incidence of tuberculosis (TB) in Kobe City has been higher than that of Japan as a whole, so Kobe City needs to strengthen its anti-tuberculosis activities. The public health departments of Kobe City's nine wards convened their own tuberculosis advisory committees until April 1998, when a centralized committee was established to examine all TB patients. The new committee was authorized to issue clinical advice to registered physicians whenever necessary in order to improve the standard of TB treatment. The author analyzed the contents of the issued documents and examined changes in TB statistics before and after the establishment of the committee. During the past 3 years, the new committee sent 2,221 advisory documents to physicians, covering 35% of all 6,305 applied cases. The main suggestions concerned the proper choice of anti-tuberculosis agents, adequate duration of treatment, and reference to the results of laboratory examinations. In addition, 140 cases were rejected for the medical expense subsidy under the TB Law to avoid unnecessary long-term treatment. The main reasons for rejection were long-term treatment despite negative bacilli for more than one year, continued single-agent INH administration after completion of standard treatment, and unnecessary treatment of cases with inactive findings on chest X-ray. Since the establishment of the integrated committee, TB statistics have shown a significant decrease in both the incidence and the prevalence of TB, with annual reductions of 5.6 and 11.5 per 100,000, respectively. The proportion of bacteriologically confirmed cases among newly registered patients has increased, and diagnostic accuracy has improved owing to the decrease in cases with unknown bacteriology. The proportion of cases treated with INH alone fell rapidly and is now lower than that of the country as a whole. These data demonstrate that the integrated tuberculosis advisory committee helps standardize anti-tuberculosis treatment by disseminating proper guidance on TB diagnosis and treatment to all medical facilities in the city.
abstract_id: PUBMED:12664442 Suggestions for better management of the Tuberculosis Advisory Committee Since 1951, the tuberculosis advisory committee, established under the tuberculosis control law, has played an important role in improving the quality of the tuberculosis control programme in Japan. The management of tuberculosis patients has changed markedly in the past ten years due to advances in chemotherapy and changes in the epidemiological situation of tuberculosis. In this paper, several suggestions are made for better management of the tuberculosis advisory committee, with the aim of judging cases by unified criteria. abstract_id: PUBMED:7884994 The present status and future roles of tuberculosis advisory committees We surveyed the present status and future roles of Tuberculosis (TB) Advisory Committees using questionnaires to study the ideal concept of the Committees. After the introduction of the new TB Control Law (which allowed the integration of the Committees), about half of the local governments (prefectures, special cities and special wards in Tokyo) have reduced the number of their committees by a third to two-thirds of the original number of health centers. To improve the quality of the Committees, respondents to the questionnaire suggested additional educational activities. However, several problems were highlighted in the questionnaire, for example a decrease in TB specialists, the aging of Committee members, and some Committees reviewing extremely low numbers of reported TB cases. For the improvement of the Committees in both quality and range of activities, it is not enough to integrate the Committees. It is recommended that the Committees be strengthened through training and education for doctors and health center staff, and that they contribute more to comprehensive countermeasures against TB in each region. In the future, the Committees' roles should be enlarged and improved in quality, for example by carrying out extra periodic mass screening for TB and examining the chest x-rays taken in mass screenings for lung cancer. abstract_id: PUBMED:32730235 Essential Components of a Public Health Tuberculosis Prevention, Control, and Elimination Program: Recommendations of the Advisory Council for the Elimination of Tuberculosis and the National Tuberculosis Controllers Association. This report provides an introduction and reference tool for tuberculosis (TB) controllers regarding the essential components of a public health program to prevent, control, and eliminate TB. The Advisory Council for the Elimination of Tuberculosis and the National Tuberculosis Controllers Association recommendations in this report update those previously published (Advisory Council for the Elimination of Tuberculosis. Essential components of a tuberculosis prevention and control program. Recommendations of the Advisory Council for the Elimination of Tuberculosis. MMWR Recomm Rep 1995;44[No. RR-11]).
The report has been written collaboratively on the basis of experience and expert opinion on approaches to organizing programs engaged in diagnosis, treatment, prevention, and surveillance for TB at state and local levels. This report reemphasizes the importance of well-established priority strategies for TB prevention and control: identification of and completion of treatment for persons with active TB disease; finding and screening persons who have had contact with TB patients; and screening, testing, and treatment of other selected persons and populations at high risk for latent TB infection (LTBI) and subsequent active TB disease. Health departments are responsible for public safety and population health. To meet their responsibilities, TB control programs should institute or ensure completion of numerous responsibilities and activities described in this report: preparing and maintaining an overall plan and policy for TB control; maintaining a surveillance system; collecting and analyzing data; participating in program evaluation and research; prioritizing TB control efforts; ensuring access to recommended laboratory and radiology tests; identifying, managing, and treating contacts and other persons at high risk for Mycobacterium tuberculosis infection; managing persons who have TB disease or who are being evaluated for TB disease; providing TB training and education; and collaborating in the coordination of patient care and other TB control activities. Descriptions of CDC-funded resources, tests for evaluation of persons with TB or LTBI, and treatment regimens for LTBI are provided (Supplementary Appendices; https://stacks.cdc.gov/view/cdc/90289). abstract_id: PUBMED:34814578 Introduction of World Health Organization Strategies and the Technical Advisory Committee actions on viral hepatitis control and status of the elimination of viral hepatitis in China Viral hepatitis has been posing a major threat to public health globally. The number of annual deaths caused by hepatitis surpassed the deaths caused by AIDS, tuberculosis, and malaria. The World Health Organization (WHO) issued the global health sector strategy on viral hepatitis 2016-2020 (GHSS) to control its epidemic. It established the Strategies and Technical Advisory Committee on HIV, viral hepatitis, and sexually transmitted infections (STAC). This paper summarizes the GHSS goals and the keynote of the 2020 STAC meeting, analyzes the challenges and opportunities faced by China in eliminating viral hepatitis, and provides comments on the papers on this issue, which could guide further actions. abstract_id: PUBMED:21740649 The Union's Ethics Advisory Group. N/A abstract_id: PUBMED:24168092 National tuberculosis advisory committee 2012 committee report. N/A abstract_id: PUBMED:28262332 Report from the World Health Organization's third Product Development for Vaccines Advisory Committee (PDVAC) meeting, Geneva, 8-10th June 2016. The third meeting of WHO's Product Development for Vaccines Advisory Committee (PDVAC) was held in June 2016, with a remit to revisit the pathogen areas for which significant progress has occurred since recommendations from the 2015 meeting, as well as to consider new advances in the development of vaccines against other pathogens. Since the previous meeting, significant progress has been made with regulatory approvals of the first malaria and dengue vaccines, and the first phase III trials of a respiratory syncytial virus (RSV) vaccine candidate have started in the elderly and pregnant women.
In addition, PDVAC has also supported vaccine development efforts against important emerging pathogens, including Middle Eastern Coronavirus (MERS CoV) and Zika virus. Trials of HIV and tuberculosis vaccine candidates are steadily progressing towards pivotal data points, and the leading norovirus vaccine candidate has entered a phase IIb efficacy study. WHO's Immunization, Vaccine and Biologicals (IVB) department is actively working in several pathogen areas on the recommendation of PDVAC, as well as continuing horizon scanning for advances in the development of vaccines that may benefit low and middle income countries (LMICs), such as the recent licensure of the enterovirus 71 (EV71) vaccine in China. Following on from discussions with WHO's Strategic Advisory Group of Experts (SAGE) on Immunization, PDVAC will also look beyond licensure and consider data needs for vaccine recommendation and implementation to reduce the delay between vaccine approval and vaccine impact. abstract_id: PUBMED:13200087 New medical advisory committee for the State Tuberculosis Commission. N/A Answer: The evaluation of tuberculosis (TB) advisory committees' functionality presents a mixed picture. According to one study, a significant proportion of TB advisory committees were not functioning effectively in terms of assessing treatments for tuberculosis. The study found that 41.6% of the committees did not assess treatments at all or did so without necessary information on drug sensitivity in more than half of the cases. Additionally, in 16.3% of the committees, many cases were self-assessed, and only 32.1% of the committees were estimated to be functioning well (PUBMED:15729887). However, another study focusing on Kobe City in Japan showed that the establishment of an integrated tuberculosis advisory committee led to improvements in TB control. The committee sent out numerous advisory documents to physicians, which resulted in better treatment standards. Following the committee's establishment, there was a significant decrease in the incidence and prevalence of TB, an increase in the proportion of bacteriologically confirmed cases, and an improvement in diagnostic accuracy. The proportion of cases treated by INH alone also decreased rapidly, indicating that the integrated committee contributed to the standardization of anti-tubercular treatment (PUBMED:12235847). Suggestions for better management of TB advisory committees have been made, including the need for unified criteria in judging cases (PUBMED:12664442). Additionally, there have been calls for improvements in training and education for doctors and staff, as well as an expansion of the committees' roles to include more comprehensive countermeasures against TB (PUBMED:7884994). In a broader context, the Advisory Council for the Elimination of Tuberculosis and the National Tuberculosis Controllers Association have provided recommendations for the essential components of a public health program to prevent, control, and eliminate TB, emphasizing the importance of well-established priority strategies (PUBMED:32730235). In conclusion, while some TB advisory committees may not be functioning optimally, there are examples of successful committees and recommendations for improvement that suggest the potential for TB advisory committees to contribute significantly to TB control when properly managed and supported.
Instruction: Homocysteine and methylmalonic acid levels in pregnant Nepali women. Should cobalamin supplementation be considered? Abstracts: abstract_id: PUBMED:11593347 Homocysteine and methylmalonic acid levels in pregnant Nepali women. Should cobalamin supplementation be considered? Objective: The aim of this study was to investigate homocysteine and methylmalonic acid levels as markers of functional cobalamin and folate status in pregnant Nepali women. Design: Cross-sectional study. Setting: Patan Hospital, Kathmandu, Nepal. Subjects: A sub-sample (n=382) of all pregnant women (n=2856) coming for their first antenatal visit in a 12-month period, 1994-1995. The selection of the sub-sample was based on maternal haematocrit values, categorised into three groups: severely, moderately and non-anaemic women. As serum levels of total homocysteine (s-tHcy) and methylmalonic acid (s-MMA) were similar in the three groups, pooled data are presented. Women who had already received micronutrient supplementation (n=54) were excluded. The remaining women (n=328) were included in the statistical analysis. Results: Overall mean values (+/-s.d.) of s-tHcy and s-MMA were 9.5 (+/-4.2) micromol/l and 0.39 (+/-0.32) micromol/l, respectively. Elevated s-tHcy (>7.5 micromol/l) was found in 68% of the women, while 61% had elevated s-MMA (>0.26 micromol/l). Low s-cobalamin values (<150 pmol/l) were observed in 49% of the women, while only 7% had low s-folate values (< or =4.5 nmol/l). s-tHcy was significantly correlated with s-MMA (r=0.28, P<0.001), s-cobalamin (r=-0.30, P<0.001) and s-folate (r=-0.24, P<0.001). s-MMA was significantly associated with s-cobalamin (r=-0.40, P<0.001), but not with s-folate. Conclusions: Functional cobalamin deficiency was very common in the study population, while functional folate deficiency was rather uncommon. We suggest considering cobalamin supplementation for pregnant Nepali women. Sponsorship: The Norwegian Research Council and the Norwegian Universities Committee for Development, Research and Education. abstract_id: PUBMED:25385266 Reliable and powerful laboratory markers of cobalamin deficiency in the newborn: plasma and urinary methylmalonic acid. Background: Diagnosing cobalamin deficiency is critical, given the high prevalence of cobalamin deficiency particularly in developing countries. Measuring serum cobalamin levels is of limited diagnostic value; in other words, its specificity and sensitivity are low. The present study investigated the changes in the levels of metabolic markers - plasma homocysteine, plasma methylmalonic acid (MMA) and urinary MMA - of cobalamin metabolism. Methods: Plasma cobalamin and serum folic acid were studied in 206 pregnant women over the last four prenatal weeks. Plasma cobalamin, folic acid, homocysteine, and MMA from umbilical cord blood and urinary MMA in newborns were studied. Results: Plasma cobalamin values were low in 66% of the mothers. There was a positive correlation between maternal and neonatal plasma cobalamin values (r = 0.72, p < 0.001). B12 was strongly inversely associated with plasma MMA, urine MMA and plasma homocysteine. To predict cobalamin deficiency, the sensitivities of plasma MMA, urinary MMA and homocysteine were 96.4%, 95.6% and 88.2%, respectively. The positive predictive values (PPV) were 96.2%, 96.9% and 86% for plasma MMA, urinary MMA and plasma homocysteine levels, respectively. Conclusion: Plasma MMA and urinary MMA are the most robust markers of cobalamin deficiency.
As a non-invasive test, urinary MMA is a sensitive means of demonstrating cobalamin deficiency in the newborn. abstract_id: PUBMED:27941633 A Prospective Study on Serum Methylmalonic Acid and Homocysteine in Pregnant Women. This study aimed to investigate serum methylmalonic acid (MMA) and homocysteine levels and to assess their effects on pregnancy and neonatal outcomes. Serum MMA and homocysteine levels in 278 pregnant Korean women, determined by liquid chromatography-tandem mass spectrometry in each trimester, were compared with those of previous studies in other ethnic groups. We investigated the associations of MMA and homocysteine status with pregnancy and neonatal events: gestational diabetes, preeclampsia, gestational age at delivery, preterm birth, small for gestational age, neonatal birth weight, and congenital abnormalities. The median (range) MMA level was 0.142 (0.063-0.446) µmol/L and homocysteine level was 10.6 (4.4-38.0) µmol/L in pregnant women. MMA levels were significantly higher in the third trimester than during other trimesters (p < 0.05), while homocysteine levels were not. No significant association was observed between MMA or homocysteine levels and any of the maternal or neonatal outcomes examined. Future studies are needed to assess the associations among maternal serum concentrations of MMA and homocysteine, and maternal and neonatal outcomes. abstract_id: PUBMED:21875397 Uptake of cobalamin and markers of cobalamin status: a longitudinal study of healthy pregnant women. Background: Currently, it is unknown whether the decline in plasma cobalamin observed during pregnancy is caused by malabsorption of the vitamin. This study examined cobalamin absorption and markers of cobalamin status during normal pregnancy. Methods: Twenty-seven pregnant Danish women were examined at gestation weeks 13, 24 and 36. The absorption test CobaSorb was performed in all women, involving measurement of holotranscobalamin or cyanocobalamin bound to transcobalamin before and after 2 days' intake of 3 × 9 μg cobalamin. Serum cobalamin and the two cobalamin binding proteins transcobalamin and haptocorrin, including haptocorrin saturated with cobalamin or analogues, were measured, as were plasma methylmalonic acid and homocysteine. Results: No change in the uptake of cobalamin was observed throughout pregnancy. Serum cobalamin displayed a gradual decline during pregnancy (p<0.0001), while holotranscobalamin remained unchanged, despite an increase in total transcobalamin (p<0.0001). In accord with these results, total haptocorrin showed a decline from the 1st to 3rd trimester (p=0.007) and cobalamin bound to haptocorrin declined (p<0.0001). Interestingly, the amount of cobalamin analogues attached to haptocorrin remained unchanged. Methylmalonic acid (p=0.002) and homocysteine (p<0.0001) increased during pregnancy. Conclusions: Cobalamin absorption remains unchanged during normal pregnancy, as judged by the CobaSorb test. No change was observed in the biologically active holotranscobalamin during pregnancy. Thus, the pregnancy-related decline in cobalamin is caused by alterations in haptocorrin-bound cobalamin. Surprisingly, no pregnancy-related change was observed in the amount of analogues attached to haptocorrin. abstract_id: PUBMED:12816766 Homocysteine and methylmalonic acid in diagnosis and risk assessment from infancy to adolescence.
The concentration of total homocysteine (tHcy) in serum and plasma is elevated in both folate and cobalamin deficiencies, whereas methylmalonic acid (MMA) in serum, plasma, or urine is a specific marker of cobalamin function. The combined measurement of both metabolites is useful for the diagnosis and follow-up of these deficiency states. In addition, tHcy is elevated under various pathologic states (eg, renal failure), and hyperhomocysteinemia is associated with an increased risk of cardiovascular disease, cognitive dysfunction, and adverse pregnancy outcomes. The diagnostic utility of tHcy and MMA concentrations as markers of folate and cobalamin deficiencies in healthy and diseased children has been documented. This article briefly summarizes the biochemical background of tHcy and MMA and the associations of tHcy and MMA with various disease states and focuses on novel data obtained in infants, children, and adolescents, with emphasis on cobalamin status in infants. The utility of tHcy and MMA as indicators of cobalamin and folate deficiencies in adults can be extended to infants and older children. Furthermore, as in adults, tHcy is related to unhealthy lifestyle factors and is a risk factor for vascular disease. High MMA concentrations in newborns, occasionally denoted as benign methylmalonic aciduria, may reflect impaired cobalamin function. abstract_id: PUBMED:11553056 Homocysteine and methylmalonic acid as indicators of folate and vitamin B12 deficiency in pregnancy. Deficiency of folate during pregnancy is associated with megaloblastic anaemia. Lower levels of folate and vitamin B12 have been reported in mothers whose offspring had neural tube defects compared to unaffected controls. Increased methylmalonic acid levels are a sensitive indicator of mild vitamin B12 deficiency and elevated homocysteine levels denote vitamin B12 or folate deficiency. We have investigated the relationship between serum concentration of total homocysteine, methylmalonic acid, vitamin B12 and folate in pregnancy. A significant inverse correlation was found between homocysteine and red cell folate and, to a lesser extent, serum folate. In addition, a significant inverse correlation was found between methylmalonic acid and vitamin B12. No significant relationship was found between homocysteine and vitamin B12. The relationship between red cell folate and serum folate and homocysteine may be useful for detecting borderline folate deficiency in pregnancy and indicate pregnancies at risk of neural tube defect. These sensitive assays are useful tools for the further investigation of folate vitamin B12 and metabolism in normal and abnormal pregnancy. abstract_id: PUBMED:27338469 Nutritional Intake and Status of Cobalamin and Folate among Non-Pregnant Women of Reproductive Age in Bhaktapur, Nepal. Cobalamin and folate are especially important for women of childbearing age due to their ubiquitous role in fetal growth and development. Population-based data on cobalamin and folate status are lacking from Nepal, where diets are mostly vegetarian. The objectives of the study were to investigate cobalamin and folate intake and status, and to explore associations with socio-demographics, anthropometrics, anemia, and dietary habits. Following a random selection of geographical clusters, we collected blood samples from 500 non-pregnant women and 24-h dietary recalls and food frequency questionnaires from a subsample of 379 women. 
Twenty percent of the women did not consume any food containing cobalamin during the days recalled, and in 72% nutritional cobalamin intake was <1 μg/day. Eighty-four percent of the women had cobalamin intake lower than the estimated average requirement (EAR) (<2 μg/day). In contrast, only 12% of the women had a folate intake less than 100 μg per day, whereas 62% had intake between 100 and 320 μg. Low plasma cobalamin (<150 pmol/L) was found in 42% of the women, most of whom (88%) also had elevated levels of methylmalonic acid. Our results indicated a high prevalence of nutritional cobalamin deficiency, while folate deficiency was uncommon. abstract_id: PUBMED:10926922 Cobalamin and folate evaluation: measurement of methylmalonic acid and homocysteine vs vitamin B(12) and folate. Vitamin B(12) and folate are two vitamins that have interdependent roles in nucleic acid synthesis. Deficiencies of either vitamin can cause megaloblastic anemia; however, inappropriate treatment of B(12) deficiency with folate can cause irreversible nerve degeneration. Inadequate folate nutrition during early pregnancy can cause neural tube defects in the developing fetus. In addition, folate and vitamin B(12) deficiency and the compensatory increase in homocysteine are a significant risk factor for cardiovascular disease. Laboratory support for the diagnosis and management of these multiple clinical entities is controversial and somewhat problematic. Automated ligand binding measurements of vitamin B(12) and folate are easiest to perform and widely used. Unfortunately, these tests are not the most sensitive indicators of disease. Measurement of red cell folate is less dependent on dietary fluctuations, but these measurements may not be reliable. Homocysteine and methylmalonic acid are better metabolic indicators of deficiencies at the tissue level. There are no "gold standards" for the diagnosis of these disorders, and controversy exists regarding the best diagnostic approach. Healthcare strategies that consider the impact of laboratory tests on the overall costs and quality of care should consider the advantages of including methylmalonic acid and homocysteine in the early evaluation of patients with suspected deficiencies of vitamin B(12) and folate. abstract_id: PUBMED:16548919 Cobalamin status during normal pregnancy and postpartum: a longitudinal study comprising 406 Danish women. Objectives: To assess cobalamin (vitamin B(12)) status during normal pregnancy and postpartum in a longitudinal setting. Methods: This study was performed in 1995-1996. It comprised 406 healthy, pregnant Danish Caucasian women, living in Copenhagen County. Cobalamin status, i.e. plasma (P-) cobalamin, P-methylmalonic acid and P-homocysteine was measured at 18, 32 and 39 wk gestation and 8 wk postpartum during lactation. Results: P-cobalamin showed a gradual, significant decline during pregnancy (P < 0.0001) followed by a significant increase postpartum (P < 0.0001); at 18, 32, 39 wk gestation and 8 wk postpartum median values were 225, 172, 161 and 319 pmol/L, respectively. P-methylmalonic displayed a gradual, significant increase during pregnancy as well as postpartum (P < 0.001) with median values of 0.11, 0.13, 0.14, and 0.16 micromol/L, respectively. P-homocysteine demonstrated a significant increase during pregnancy and postpartum (P < 0.001). The frequency of P-cobalamin values <150 pmol/L increased during pregnancy from 15% at 18 wk to 43% at 39 wk gestation and subsequently declined to 3% postpartum. 
Conclusion: Low cobalamin status may occur among pregnant women, especially in late pregnancy. The recommendations for periconceptional vitamin B(12) supplementation should be reconsidered. abstract_id: PUBMED:35641553 Pregnancy homocysteine and cobalamin status predict childhood metabolic health in the offspring. Background: Inadequate pregnancy cobalamin status has been associated with adverse offspring metabolic health in Indian and Nepalese studies. Studies of pregnancy cobalamin status and mid-childhood health outside of Asia are scarce. Methods: Associations between pregnancy fasting plasma total homocysteine (tHcy), cobalamin status (plasma cobalamin, holotranscobalamin (holoTC), methylmalonic acid (MMA)) and mid-childhood metabolic score (MetSco) ((including fat mass index (zFMI), homeostatic model assessment of insulin resistance (zHOMA-IR) and dyslipidemia (zTG - zHDLc)/2) z-scores)) were investigated in a prospective study of 293 mother-child dyads. Results: Highest versus low-mid pregnancy tHcy tertile was associated with higher mid-childhood MetSco, specifically with higher child zFMI. Stratifying by sex, the maternal tHcy-child MetSco association was limited to boys and confirmed for zFMI and zHOMA-IR. The maternal tHcy-child zFMI association was not mediated by birth weight z-score. First trimester plasma cobalamin was not associated with child outcomes, but other indicators of cobalamin status were. Lowest versus mid-high plasma holoTC tertile was associated with MetSco (specifically zFMI and zHOMA-IR) and highest versus low-mid plasma MMA tertile with higher MetSco and dyslipidemia in boys. Conclusions: Moderately elevated pregnancy tHcy and low cobalamin status were associated with mid-childhood metabolic score in boys. The pregnancy tHcy-child zFMI association was not mediated by birth weight. Impact: Fasting plasma total homocysteine (tHcy) during pregnancy and low cobalamin status during early pregnancy are associated with mid-childhood metabolic score and its components in the offspring. These findings were only significant in male offspring. The study provides new evidence that impaired one carbon metabolism during pregnancy is associated with negative health outcomes in the offspring, in a population with low prevalence of cobalamin deficiency. The maternal-offspring associations were observed in the functional markers of cobalamin status (holotranscobalamin and methylmalonic acid) and tHcy, not with plasma cobalamin concentration. Screening for low pregnancy cobalamin status should be considered. Answer: The study on pregnant Nepali women found that functional cobalamin deficiency was very common, with 49% of the women showing low serum cobalamin values and 68% having elevated serum total homocysteine (s-tHcy) levels. Additionally, 61% had elevated serum methylmalonic acid (s-MMA) levels, which is another marker of cobalamin deficiency. The study concluded that cobalamin supplementation should be considered for pregnant Nepali women due to the high prevalence of functional cobalamin deficiency (PUBMED:11593347). Supporting this, another study highlighted that plasma and urinary MMA are reliable and powerful laboratory markers of cobalamin deficiency. The study found that plasma MMA and urinary MMA are robust markers of cobalamin deficiency, with high sensitivity and positive predictive values for detecting the deficiency. 
It suggested that urinary MMA, as a non-invasive method, is a sensitive way to demonstrate cobalamin deficiency in newborns, which could imply its utility in pregnant women as well (PUBMED:25385266). Furthermore, a longitudinal study of healthy pregnant Danish women showed that cobalamin absorption remains unchanged during normal pregnancy, but serum cobalamin levels decline, suggesting that the decline in cobalamin during pregnancy is not due to malabsorption. This study also found that methylmalonic acid and homocysteine levels increased during pregnancy, which could indicate a need for cobalamin supplementation (PUBMED:21875397). In conclusion, the evidence from these studies suggests that cobalamin supplementation should be considered for pregnant Nepali women due to the high prevalence of elevated homocysteine and methylmalonic acid levels, which are indicative of cobalamin deficiency.
Instruction: Can digitally reconstructed radiographs (DRRS) replace simulation films in prostate cancer conformal radiotherapy? Abstracts: abstract_id: PUBMED:14575845 Can digitally reconstructed radiographs (DRRS) replace simulation films in prostate cancer conformal radiotherapy? Purpose: To evaluate the precision of using digitally reconstructed radiographs (DRRs) of either 3 mm or 6 mm slice separation vs. using simulator images for the setup verification of patients receiving CT planned conformal radiotherapy to the prostate. To calculate the transfer error between CT and simulator. Methods And Materials: Twenty patients were CT scanned (3 mm slice spacing/width). DRRs were generated for both 3 mm (DRR 3) and 6 mm (DRR 6) separations. DRRs and a simulator image of an anterior and a lateral field were used as reference images. Five observers matched each of the reference images to treatment images using the Theraview "Target check" facility. It was assumed that poorer images would lead to a loss of precision of field placement estimations (FPE) between observers. The study was designed to detect a difference greater than 1.5 mm(2) in the precision of image placement. The transfer error was the mean difference in the setup error derived from the DRRs and the simulation films. Results: The precision of evaluations for simulator films and 3 mm DRRs were similar. There was a trend for the DRR 6 mm to achieve less precise results which was greatest for craniocaudal examinations (variance: simulator 1.5 mm(2), DRR6 2.8 mm(2), p = 0.17), but this did not reach statistical significance. A range of transfer errors was identified, with standard deviations ranging from 1.7 to 4.2 mm. There was evidence of a significant systematic bias in anterior craniocaudal (1.3-1.9 mm, p < 0.004) and anterior posterior (-1.9 mm, p = 0.027). Conclusion: The precision of setup evaluations using DRRs is similar to that achieved by using simulator fields when planning conformal prostate radiotherapy. The use of DRRs could reduce systematic errors introduced in the planning process. abstract_id: PUBMED:17544002 Magnetic resonance-based treatment planning for prostate intensity-modulated radiotherapy: creation of digitally reconstructed radiographs. Purpose: To develop a technique to create magnetic resonance (MR)-based digitally reconstructed radiographs (DRR) for initial patient setup for routine clinical applications of MR-based treatment planning for prostate intensity-modulated radiotherapy. Methods And Materials: Twenty prostate cancer patients' computed tomography (CT) and MR images were used for the study. Computed tomography and MR images were fused. The pelvic bony structures, including femoral heads, pubic rami, ischium, and ischial tuberosity, that are relevant for routine clinical patient setup were manually contoured on axial MR images. The contoured bony structures were then assigned a bulk density of 2.0 g/cm(3). The MR-based DRRs were generated. The accuracy of the MR-based DDRs was quantitatively evaluated by comparing MR-based DRRs with CT-based DRRs for these patients. For each patient, eight measuring points on both coronal and sagittal DRRs were used for quantitative evaluation. Results: The maximum difference in the mean values of these measurement points was 1.3 +/- 1.6 mm, and the maximum difference in absolute positions was within 3 mm for the 20 patients investigated. 
Conclusions: Magnetic resonance-based DRRs are comparable to CT-based DRRs for prostate intensity-modulated radiotherapy and can be used for patient treatment setup when MR-based treatment planning is applied clinically. abstract_id: PUBMED:9128974 Efficient CT simulation of the four-field technique for conformal radiotherapy of prostate carcinoma. Purpose: Conformal radiotherapy of prostate carcinoma relies on contouring of individual CT slices for target and normal tissue localization. This process can be very time consuming. In the present report, we describe a method to more efficiently localize pelvic anatomy directly from digital reconstructed radiographs (DRRs). Materials And Methods: Ten patients with prostate carcinoma underwent CT simulation (the spiral mode at 3 mm separation) for conformal four-field "box" radiotherapy. The bulbous urethra and bladder were opacified with iodinated contrast media. On lateral and anteroposterior DRRs, the volume of interest (VOI) was restricted to 1.0-1.5 cm tissue thickness to optimize digital radiograph reconstruction of the prostate and seminal vesicles. By removing unessential voxel elements, this method provided direct visualization of those structures. For comparison, the targets of each patient were also obtained by contouring CT axial slices. Results: The method was successfully performed if the target structures were readily visualized and geometrically corresponded to those generated by contouring axial images. The targets in 9 of 10 patients were reliable representations of the CT-contoured volumes. One patient had 18 mm variation due to the lack of bladder opacification. Using VOIs to generate thin tissue DRRs, the time required for target and normal tissue localization was on the average less than 5 min. Conclusion: In CT simulation of the four-field irradiation technique for prostate carcinoma, thin-tissue DRRs allowed for efficient and accurate target localization without requiring individual axial image contouring. This method may facilitate positioning of the beam isocenter and provide reliable conformal radiotherapy. abstract_id: PUBMED:9392555 A prospective, randomized study addressing the need for physical simulation following virtual simulation. Purpose: To accurately implement a treatment plan obtained by virtual or CT simulation, conventional or physical simulation is still widely used. To evaluate the need for physical simulation, we prospectively randomized patients to undergo physical simulation or no additional simulation after virtual simulation. Methods And Materials: From July 1995 to September 1996, 75 patients underwent conformal four-field radiation therapy planning for prostate cancer with a commercial grade CT simulator. The patients were randomized to undergo either port filming immediately following physical simulation or port filming alone. The precision of implementing the devised plan was evaluated by comparing simulator radiographs and/or port films against the digitally reconstructed radiographs (DRRs) for x, y, and z displacements of the isocenter. Changes in beam aperture were also prospectively evaluated. Results: Thirty-seven patients were randomized to undergo physical simulation and first day port filming, and 38 had first day treatment verification films only without a physical simulation. Seventy-eight simulator radiographs and 195 first day treatment port films were reviewed. 
There was no statistically significant reduction in treatment setup error (>5 mm) if patients underwent physical simulation following virtual simulation. No patient required a resimulation, and there was no significant difference in changes of beam aperture. Conclusions: Following virtual simulation, physical simulation may not be necessary to accurately implement the conformal four-field technique. Because port filming appears to be sufficient to assure precise and reliable execution of a devised treatment plan, physical simulation may be eliminated from the process of CT based planning when virtual simulation is available. abstract_id: PUBMED:10560340 Assessment of accuracy of daily set-ups in prostate radiotherapy using electronic imaging. The purpose of the study was to assess the accuracy, using electronic portal imaging, of daily set-ups, in patients undergoing radiotherapy for prostate carcinoma. We used a scanning liquid ion chamber to assess the accuracy of set-ups in 25 consecutive patients undergoing a 6 1/2 week course of radiotherapy to the prostate. Electronic images (EPIs) were collected during 33 treatments to each of four ports. The positions of anatomical structures on the EPIs were compared with the same structures seen on digitally reconstructed radiographs (DRRs) made after CT simulation before beginning radiotherapy. Displacements of the EPIs compared with the DRRs were computer-calculated in millimetres in lateral, longitudinal and rotational directions for each port. 11 patients had ports moved because of discrepancies between the EPIs and the DRRs; eight required moves in the first five treatments to correct systematic (simulator) errors. In the right-left and anterior-posterior directions, nearly 95% of the EPIs were within 5 mm of the simulated port position. In the superior-inferior direction, 98% of the ports were within 5 mm of the simulated port position. Two patients had in-plane rotational errors on the lateral ports (8 degrees and 10 degrees respectively). It was concluded that daily electronic imaging is an effective technique for assessing the accuracy of set-ups in prostate radiotherapy. abstract_id: PUBMED:22531511 Conformal fields in prostate radiotherapy: a comparison between measurement, calculation and simulation. Aims: The objective of this study is to evaluate the accuracy of a treatment planning system (TPS) for calculating the dose distribution parameters in conformal fields (CF). Dosimetric parameters of CF's were compared between measurement, Monte Carlo simulation (MCNP4C) and TPS calculation. Materials And Methods: Field analyzer water phantom was used for obtaining percentage depth dose (PDD) curves and beam profiles (BP) of different conformal fields. MCNP4C was used to model conformal fields dose specification factors and head of linear accelerator varian model 2100C/D. Results: Results showed that the distance to agreement (DTA) and dose difference (DD) of our findings were well within the acceptance criteria of 3 mm and 3%, respectively. Conclusions: According to this study it can be revealed that TPS using equivalent tissue air ratio calculation method is still convenient for dose prediction in non small conformal fields normally used in prostate radiotherapy. 
It was also shown that, given the close agreement among Monte Carlo simulation, measurements, and the TPS, Monte Carlo can be further confirmed for implementing and calculating dose distributions in non-standard and complex conformal irradiation fields for treatment planning systems. abstract_id: PUBMED:15337572 On-line aSi portal imaging of implanted fiducial markers for the reduction of interfraction error during conformal radiotherapy of prostate carcinoma. Purpose: An on-line system to ensure accuracy of daily setup and therapy of the prostate has been implemented with no equipment modification required. We report results and accuracy of patient setup using this system. Methods And Materials: Radiopaque fiducial markers were implanted into the prostate before radiation therapy. Lateral digitally reconstructed radiographs (DRRs) were obtained from planning CT data. Before each treatment fraction, a lateral amorphous silicon (aSi) portal image was acquired and the position of the fiducial markers was compared to the DRRs using chamfer matching. Couch translation only was used to account for marker position displacements, followed by a second lateral portal image to verify isocenter position. Residual displacement data for the aSi and previous portal film systems were compared. Results: This analysis includes a total of 239 portal images during treatment in 17 patients. Initial prostate center of mass (COM) displacements in the superior, inferior, anterior, and posterior directions were a maximum of 7 mm, 9 mm, 10 mm and 11 mm respectively. After identification and correction, prostate COM displacements were <3 mm in all directions. The therapists found it simple to match markers 88% of the time using this system. Treatment delivery times were in the order of 9 min for patients requiring isocenter adjustment and 6 min for those who did not. Conclusions: This system is technically possible to implement and use as part of an on-line correction protocol and does not require a longer than standard daily appointment time at our center with the current action limit of 3 mm. The system is commercially available and is more efficient and user-friendly than portal film analysis. It provides the opportunity to identify and accommodate interfraction organ motion and may also permit the use of smaller margins during conformal prostate radiotherapy. Further integration of the system such as remote table control would improve efficiency. abstract_id: PUBMED:10434334 Conformal radiotherapy Conformal radiotherapy is a new irradiation technique made possible by technological improvements, especially progress in imaging and 3D dosimetry. By conforming the volume irradiated as closely as possible to the clinical anatomical target volume, conformal radiotherapy is designed to deliver a higher dose to the tumour volume, while more effectively sparing the adjacent tissues from the adverse effects of irradiation. Conformal radiotherapy may therefore constitute an advance comparable to the contribution of high-energy radiotherapy in the 1960s or the impact of computer-assisted dosimetry in the 1970s. Evaluation of the results, definition of its indications, standardisation of practices, and study of the impact of dose escalation require further studies in the field of prostate cancer, as the superiority of this new technique over conventional radiotherapy has not been formally established due to the limited follow-up.
abstract_id: PUBMED:29405121 Accuracy and precision of patient positioning for pelvic MR-only radiation therapy using digitally reconstructed radiographs. Background And Purpose: Magnetic resonance imaging (MRI) has in recent years emerged as an imaging modality to drive precise contouring of targets and organs at risk in external beam radiation therapy. Moreover, recent advances in MRI enable treatment of cancer without computed tomography (CT) simulation. A commercially available MR-only solution, MRCAT, offers a single-modality approach that provides density information for dose calculation and generation of positioning reference images. We evaluated the accuracy of patient positioning based on MRCAT digitally reconstructed radiographs (DRRs) by comparing to standard CT based workflow. Materials And Methods: Twenty consecutive prostate cancer patients being treated with external beam radiation therapy were included in the study. DRRs were generated for each patient based on the planning CT and MRCAT. The accuracy assessment was performed by manually registering the DRR images to planar kV setup images using bony landmarks. A Bayesian linear mixed effects model was used to separate systematic and random components (inter- and intra-observer variation) in the assessment. In addition, method agreement was assessed using a Bland-Altman analysis. Results: The systematic difference between MRCAT and CT based patient positioning, averaged over the study population, were found to be (mean [95% CI]) -0.49 [-0.85 to -0.13] mm, 0.11 [-0.33 to +0.57] mm and -0.05 [-0.23 to +0.36] mm in vertical, longitudinal and lateral directions, respectively. The increases in total random uncertainty were estimated to be below 0.5 mm for all directions, when using MR-only workflow instead of CT. Conclusions: The MRCAT pseudo-CT method provides clinically acceptable accuracy and precision for patient positioning for pelvic radiation therapy based on planar DRR images. Furthermore, due to the reduction of geometric uncertainty, compared to dual-modality workflow, the approach is likely to improve the total geometric accuracy of pelvic radiation therapy. abstract_id: PUBMED:21548269 Conformal radiotherapy of prostate carcinoma--procedure description Introduction: Today, three-dimensional conformal radiotherapy is a standard way in the radical treatment of localized prostate cancer, and it is an alternative to the radical prostatectomy. This method of radiotherapy treatment is widely accepted in the treatment of prostate cancer patients, and provides irradiation of targeted volume (prostate, seminal vesicles) with dose escalation sparing the surrounding healthy tissues (rectum, bladder) at the same time. That is not possible with the conventional two dimension technique. PROCEDURE DESCRIPTION: Three-dimensional conformal radiotherapy is a volumetric, visual simulation according to the computed tomography slices; it defines the tumour and organ at risk individually in each patient. Results of several studies have shown that there is a significant decrease in the development of acute toxicity when prostate cancer patients are treated with conformal radiotherapy. High dose irradiation gives excellent results in treatment of localized prostate carcinoma and improves treatment results in the patients with locally advanced carcinoma of prostate. Discussion: Prostate carcinoma irradiation techniques have been changed dramatically during recent years. 
Data obtained by computed tomography are important since the size and shapes of the prostate as well as its anatomic relations towards the rectum and bladder are considerably different in individual patients. The three-dimension plan of irradiation can be designed for each patient individually by performing computed tomography technique when planning radiotherapy. Conclusion: The advanced planning systems for conformal radiotherapy can reconstruct the anatomic structures of pelvis in three-dimension technique on the basis of computed tomography scans, which provides better conformality between the irradiation beam and geometrical shape of the tumour with minimal irradiation of the surrounding healthy tissue. Answer: Yes, digitally reconstructed radiographs (DRRs) can replace simulation films in prostate cancer conformal radiotherapy. Studies have shown that the precision of setup evaluations using DRRs is similar to that achieved by using simulator fields when planning conformal prostate radiotherapy (PUBMED:14575845). Additionally, magnetic resonance-based DRRs have been found to be comparable to CT-based DRRs for prostate intensity-modulated radiotherapy and can be used for patient treatment setup when MR-based treatment planning is applied clinically (PUBMED:17544002). Furthermore, the use of DRRs has been demonstrated to allow for efficient and accurate target localization without requiring individual axial image contouring, which may facilitate positioning of the beam isocenter and provide reliable conformal radiotherapy (PUBMED:9128974). A study also indicated that following virtual simulation, physical simulation may not be necessary to accurately implement the conformal four-field technique, suggesting that port filming alone may be sufficient to assure precise and reliable execution of a devised treatment plan (PUBMED:9392555). Moreover, the accuracy of daily set-ups in prostate radiotherapy has been effectively assessed using electronic imaging, comparing the positions of anatomical structures on electronic portal images (EPIs) with those on DRRs (PUBMED:10560340). Additionally, the accuracy of a treatment planning system (TPS) for calculating the dose distribution parameters in conformal fields has been validated against measurements and Monte Carlo simulation, supporting the use of TPS and DRRs in prostate radiotherapy (PUBMED:22531511). On-line imaging of implanted fiducial markers has also been used to reduce interfraction error during conformal radiotherapy of prostate carcinoma, demonstrating the technical feasibility and accuracy of patient setup using DRRs (PUBMED:15337572). Lastly, the MRCAT pseudo-CT method has been shown to provide clinically acceptable accuracy and precision for patient positioning for pelvic radiation therapy based on planar DRR images, potentially improving the total geometric accuracy of pelvic radiation therapy (PUBMED:29405121). Overall, these studies support the use of DRRs as a replacement for simulation films in the context of prostate cancer conformal radiotherapy.
Instruction: Is high level of disability an indication for spinal fusion? Abstracts: abstract_id: PUBMED:16306833 Is high level of disability an indication for spinal fusion? Analysis of long-term outcome after posterior lumbar interbody fusion using carbon fiber cages. Objective: Posterior lumbar interbody fusion is a recognized procedure for the treatment of back pain associated with degenerative disc disease and segmental instability. It allows decompression of the spinal canal and circumferential fusion through a single posterior incision. Methods: Sixty-five consecutive patients who underwent posterior lumbar interbody fusion using carbon cages and pedicle fixation between 1993 and 2000 were recruited and contacted with a postal survey. Clinical outcome was assessed by the postoperative clinical findings and complications and the fusion rate, which was assessed using the scoring system described by Brantigan and Steffee. Functional outcome was measured by using improvement in the Oswestry Disability Index, return to work, and satisfaction with the surgical outcome. The determinants of functional relief were analyzed against the improvement in disability using multiple regression analysis. Results: The mean postoperative duration at the time of the study was 4.4 years. Overall radiologic fusion rate was 98%. There was a significant improvement in Oswestry Disability Index (P < 0.01). There was 84% satisfaction with the surgical procedure and 61% return to predisease activity level and full employment. We found preoperative level of disability to be the best determinant of functional recovery irrespective of age or the degree of psychological morbidity and litigation (P < 0.01). Conclusion: The combination of posterior lumbar interbody fusion and posterior instrumented fusion is a safe and effective method of achieving circumferential segmental fusion. A direct relationship between preoperative level of disability and functional recovery suggests that disability should be measured preoperatively and spinal fusion should be performed to alleviate disability caused by degenerative spine. abstract_id: PUBMED:34174436 Evaluation of lumbar stiffness after long-level fusion for degenerative lumbar scoliosis via a Chinese version of the lumbar stiffness disability index. Background Context: Long-level spinal fusion for degenerative lumbar scoliosis (DLS)seeks to eliminate spinal motion in an attempt to alleviate pain, improve deformity, and reduce disability. However, this surgery considerably impairs the performance of activities of daily living (ADL) due to the resulting stiffness. The lumbar stiffness disability index (LSDI) is a validated measure of the effect of lumbar stiffness on functional activity, but this index might not be fully applicable to the elderly Chinese population given several specific lifestyle characteristics. Purpose: To evaluate lumbar stiffness in patients with DLS after long-level fusion by Chinese-LSDI (C-LSDI). Study Design: A retrospective study. Patient Sample: A total of 129 DLS patients who underwent long-level (≧4 levels) fusion surgery with at least one-year follow-up from June 2009 to September 2017 were retrospectively included. Outcome Measures: The C-LSDI was designed by modifying LSDI and Korean-LSDI (K-LSDI) based on elderly Chinese lifestyles and the internal consistency and retest repeatability of the patient-reported outcome questionnaire in the measurement of the impact of lumbar stiffness on functional abilities was assessed. 
Methods: The radiographic parameters including Cobb angle, apical vertebral translation (AVT), coronal vertical axis (CVA), sagittal vertical axis (SVA), thoracic kyphosis (TK), thoracolumbar kyphosis (TLK), pelvic tilt (PT), pelvic incidence (PI), sacral slope (SS), lumbar lordosis (LL), and PI-LL, and clinical symptoms including visual analogue scale (VAS) for back and lower extremity pain; Oswestry disability index (ODI), Japanese Orthopedic Association-29 (JOA-29), Scoliosis Research Society-22 (SRS-22), 36-Item short form survey (SF-36), physical component scores (PCS) and mental component scores (MCS) were measured preoperatively and at the last follow-up. Results: Compared with LSDI and K-LSDI, the C-LSDI demonstrated higher internal consistency (Cronbach's alpha=0.902) and retest reliability (intraclass correlation coefficient, ICC=0.904) in the elderly Chinese population. All patients showed increased lumbar stiffness and significant improvement in pain and deformity postoperatively. For items such as performing personal hygiene after toileting and getting out of a car, patients reported more inconvenience with an increasing number of fixed levels. Conclusions: This study demonstrated that the C-LSDI questionnaire was a reliable and valid instrument for assessing functional limitations due to lumbar stiffness among elderly Chinese patients with DLS after long-level fusion. Although the effects of stiffness did trend toward greater impacts among patients who underwent longer fusions, most patients were satisfied with trade-offs of function and pain relief in exchange for perceived increases in lumbar stiffness. abstract_id: PUBMED:26656059 Effect of Obesity and Being Overweight on Disability and Pain After Lumbar Fusion: An Analysis of 805 Patients. Study Design: Prospective follow-up study. Objective: To investigate the influence of obesity and being overweight on disability and pain of patients undergoing lumbar spinal fusion. Summary Of Background Data: Although spinal fusion surgery has been shown to relieve symptoms in several spinal disorders, the influence of obesity on the results remains controversial. Methods: Since January 1, 2008, all patients undergoing lumbar fusion in two hospitals have been invited to participate in a prospective follow-up database. At the end of 2014, the preoperative and 1-year follow-up data of 805 patients were available. Disability was evaluated using the Oswestry Disability Index and back and leg pain by the visual analogue scale. In addition, the working status, smoking, physical activity, duration of symptoms, and indication for surgery were captured. According to the World Health Organization classification of preoperative body mass index, patients were divided into three groups: normal weight (<25.0 kg/m2), overweight (25.0-29.9 kg/m2), and obese (≥30.0 kg/m2). Results: Of the 805 patients, 204 (25.3%) were normal weight, 365 (45.3%) were overweight, and 236 (29.3%) were obese. The patients in the obese group exercised significantly less per week (P < 0.001). Preoperatively, the body mass index groups did not differ with regard to disability and pain. At the 1-year follow-up, the change in the Oswestry Disability Index was -25 in normal, -24 in overweight, and -23 in obese patients (P = 0.013), and disability was severe in 14.7% of the normal group, 17.0% of the overweight group, and 24.2% of the obese group (P = 0.0052). Back and leg pain improved more in the normal weight group compared with the overweight or obese groups (P < 0.05).
Conclusion: Although lumbar spine fusion is also effective in obese patients, the present large follow-up study demonstrates that their outcome results are slightly poorer than patients with normal weight at 1 year after surgery. Level Of Evidence: 3. abstract_id: PUBMED:30805292 Lumbar Spinal Fusion Affects Sitting Disability on the Floor. Background: Lumbar spinal fusion is a standard of care for certain lumbar spinal diseases. However, its impact on sitting, especially on the floor, has not been assessed, even in the countries where people usually sit on the floor instead of using a chair. Methods: A total of 100 Korean patients who underwent lumbar spinal fusion and 47 patients who underwent decompression surgery were enrolled. In a postoperative Oswestry Disability Index (ODI) questionnaire, an additional section 11 (Sitting on the Floor) was inserted, in which the phrase "sitting in a chair" of section 5 was replaced with "sitting on the floor." The ODI scores were calculated twice using either the section with "sitting in a chair" or the section with "sitting on the floor" and comparing the two. Results: In the fusion group, the mean postoperative ODI calculated with "sitting on the floor" is significantly worse than that with "sitting in a chair" (P < .0001). This difference was the same regardless of whether the fusion was done at a single level (P < .0001) or 2 or more levels (P = .006) or whether location was at L4-L5 (P = .002) or L5-S1 (P = .02) in a single-level fusion. The scores of the decompression group showed no difference. Though preoperative and postoperative ODI showed no difference between groups, the postoperative ODI using "sitting on the floor" was significantly worse in the fusion group than the decompression group (P = .009). Conclusion: ODI scores using "sitting on the floor" after lumbar fusion were significantly worse than those with "sitting in a chair." A sitting disability on the floor after lumbar arthrodesis has not been appreciated adequately so far and should be seriously considered if a lumbar arthrodesis is planned in a society where people's usual style of sitting is on the floor. abstract_id: PUBMED:33325383 Assessment of disability levels in patients with low back pain based on the type of lumbar spinal disorder. Background: Low back pain (LBP) is a common health condition and the leading cause of activity limitation and absenteeism in most parts of the world. One-fifth of patients with LBP develop chronic pain disability. Objective: This study investigated the disability levels in patients with different types of lumbar spinal disorders. Methods: A total of 528 patients visiting spine clinics between June 2017 and February 2018 were enrolled in this study. A demographic checklist, the patients' medical records, the Oswestry Disability Questionnaire, and the Visual Analog Scale (VAS) were used to collect the data. Results: Non-specific LBP (29.5%) and discopathy (27%) were the most common final diagnoses. The obtained Oswestry Disability Index (ODI) was significantly higher in older patients, those with higher body mass index, more work experience, and smokers. Lower ODI was found in individuals with a history of regular exercise. Most individuals in all diagnostic groups were categorized into the high ODI group (p⩽ 0.001). Conclusion: In patients with lumbar spine lesions, ODI is significantly correlated with age, BMI, work experience, smoking, and the type of disorder (discopathy, spondylolysis, and spondylolisthesis). 
Regular exercise is associated with lower levels of disability. abstract_id: PUBMED:33646141 Are the Roland Morris Disability Questionnaire and Oswestry Disability Index interchangeable in patients after lumbar spinal fusion? Background: Low back pain is a common health problem for which there are several treatment options. For optimizing clinical decision making, evaluating treatments, and research purposes, it is important that health care professionals are able to evaluate the functional status of patients. Patient reported outcome measures (PROMs) are widely accepted and recommended. The Roland Morris Disability Questionnaire (RMDQ) and the Oswestry Disability Index (ODI) are the two mainly used condition-specific patient reported outcomes. Concerns regarding the content and structural validity and also the different scoring systems of these outcome measures make comparison of treatment results difficult. Objective: The aim of this study was to determine whether the RMDQ and ODI could be used interchangeably by assessing the correlation and comparing different measurement properties between the questionnaires. Methods: Clinical data from patients who participated in a multicenter RCT with 2-year follow-up after lumbar spinal fusion were used. Outcome measures were the RMDQ, ODI, Short Form 36 - Health Survey (SF-36), leg pain and back pain measured on a 0-100 mm visual analogue scale (VAS). Cronbach's alpha coefficients, Spearman correlation coefficients, multiple regression analysis and Bland-Altman plots were calculated. Results: Three hundred and seventy-six completed questionnaires filled out by 87 patients were used. The ODI and RMDQ both had a good level of internal consistency. There was a very strong correlation between the RMDQ and the ODI (r = 0.87; p < 0.001), and between the VAS and both the ODI and RMDQ. However, the Bland-Altman plot indicated poor agreement between the ODI and RMDQ. Conclusions: The RMDQ and ODI cannot be used interchangeably, nor is there a possibility of converting the score from one questionnaire to the other. However, leg pain and back pain seemed to be predictors for both the ODI and the RMDQ. abstract_id: PUBMED:30213672 Prediction of Objectively Measured Physical Activity and Self-Reported Disability Following Lumbar Fusion Surgery. Objective: To investigate the predictive value of preoperative fear-avoidance factors (self-efficacy for exercise, pain catastrophizing, kinesiophobia, and depression), walking capacity, and traditional predictor variables for predicting postoperative changes in physical activity level and disability 6 months after lumbar fusion surgery in patients with chronic low back pain (LBP). Methods: We prospectively enrolled 118 patients scheduled for lumbar fusion surgery for motion-elicited chronic LBP with degenerative changes in 1-3 segments of the lumbar spine. Associations between the predictors and the dependent variables were investigated with multiple linear regression analysis. Dependent variables were physical activity level as objectively measured with a triaxial accelerometer and disability as measured with the Oswestry Disability Index. Results: Preoperative physical activity level (β = -0.349; P < 0.001) and self-efficacy for exercise (β = 0.176; P = 0.021) were significant predictors of the postoperative change in physical activity.
Preoperative disability (β = -0.790; P < 0.001), self-efficacy for exercise (β = 0.152; P = 0.024), and pain catastrophizing (β = 0.383; P = 0.033) were significant predictors for the change in the Oswestry Disability Index. Conclusions: Patients with low levels of preoperative physical activity were more likely to increase their level of physical activity after lumbar fusion surgery, especially when their self-efficacy for exercise was high. However, most of these patients still had low levels of physical activity after surgery, and they may therefore need extra support in increasing their postoperative physical activity levels. abstract_id: PUBMED:31095118 Minimum Three-year Follow-up of Specific Functional Disabilities After Multilevel Lumbar Fusion: Comparison of Long-level and Short-level Fusion. Study Design: Retrospective case-control study. Objective: To evaluate specific functional disabilities after short- and long-level lumbar fusion. Summary Of Background Data: The Oswestry Disability Index (ODI) cannot represent all types of functional disabilities observed after lumbar fusion and a region-specific Functional Disability Index (SFDI) is necessary. Methods: We examined the differences in postoperative functional disability between 81 patients who underwent ≥3-level lumbar fusion (group I) and 70 age- and sex-matched patients who underwent one- or two-level lumbar fusion (group II). The ODI and Visual Analogue Scale (VAS) were assessed pre- and postoperatively. The SFDI was assessed after lumbar fusion. We evaluated intergroup differences in postoperative VAS, ODI, and SFDI scores during 3-year follow-up. Each mean score was evaluated separately for the 10 ODI and the 12 SFDI items, and we evaluated the changes observed in these scores over the 3-year follow-up. Results: The mean intergroup preoperative ODI and VAS scores were similar. The mean postoperative intergroup VAS scores were similar; however, the mean postoperative ODI and SFDI scores were significantly higher in group I than in group II at 1-year (P<0.001, P<0.001, respectively) and 3-year follow-up (P = 0.037, P<0.001, respectively). Among 10 ODI items, group I showed significant disability with regard to six items at the 1-year follow-up compared with group II, but only showed significant disability with regard to one item at the 3-year follow-up. Among the 12 SFDI items, group I showed significant disability with regard to all 12 items at 1-year follow-up compared with group II, as well as significant disability with regard to nine items at 3-year follow-up. Conclusion: The SFDI is more sensitive than the ODI in assessing functional disabilities based on the levels of fusion. Most SFDI items indicated continued significant disability in patients with long-level lumbar fusion even 3 years postoperatively. Level Of Evidence: 3. abstract_id: PUBMED:26890952 Comparison of Cervical Kinematics, Pain, and Functional Disability Between Single- and Two-level Anterior Cervical Discectomy and Fusion. Study Design: A prospective, time series design. Objective: The purpose of this study is two-fold: firstly, to investigate the impact of altered cervical alignment and range of motion (ROM) on patients' self-reported outcomes after anterior cervical discectomy and fusion (ACDF), and secondly, to comparatively differentiate the influence of single- and two-level ACDF on the cervical ROM and adjacent segmental kinematics up to 12-month postoperatively. 
Summary Of Background Data: ACDF is one of the most commonly employed surgical interventions to treat degenerative disc disease. However, there are limited in vivo data on the impact of ACDF on the cervical kinematics and its association with patient-reported clinical outcomes. Methods: Sixty-two patients (36 males; 55.63 ± 11.6 yrs) undergoing either a single- or consecutive two-level ACDF were recruited. The clinical outcomes were assessed with the Pain Visual Analogue Scale (VAS) and the Neck Disability Index (NDI). Radiological results included cervical lordosis, global C2-C7 ROM, ROM of the Functional Spinal Unit (FSU), and its adjacent segments. The outcome measures were collected preoperatively and then at 3, 6, and 12-month postoperatively. Results: A significant reduction of both VAS and NDI was found for both groups from the preoperative to 3-month period (P < 0.01). Pearson correlation revealed no significant correlation between global ROM with neither VAS (P = 0.667) nor NDI (P = 0.531). A significant reduction of global ROM was identified for the two-level ACDF group at 12 months (P = 0.017) but not for the single-level group. A significant interaction effect was identified for the upper adjacent segment ROM (P = 0.024) but not at the lower adjacent segment. Conclusion: Current study utilized dynamic radiographs to comparatively evaluate the biomechanical impact of single- and two-level ACDF. The results highlighted that the two-level group demonstrated a greater reduction of global ROM coupled with an increased upper adjacent segmental compensatory motions that is independent of patient-perceived recovery. Level Of Evidence: 3. abstract_id: PUBMED:35460906 Neck Disability at Presentation Influences Long-Term Clinical Improvement for Neck Pain, Arm Pain, Disability, and Physical Function in Patients Undergoing Anterior Cervical Discectomy and Fusion. Objective: To compare perioperative characteristics, patient-reported outcome measures (PROMs) and minimum clinically important difference (MCID) achievement after anterior cervical discectomy and fusion (ACDF) in patients stratified by preoperative neck disability. Background: The Neck Disability Index (NDI) assesses a patient's self-perceived neck disability and is often used to assess the efficacy of cervical surgical intervention. Our study (a retrospective cohort study) evaluates how preoperative severity of patient neck disability influences postoperative clinical improvement after ACDF. Methods: Primary, single-level, or multilevel ACDF procedures were included. PROMs were administered at preoperative/6 week/12 week/6 month/1 year/2 year time points and included Patient-Reported Outcome Measurement Information System-Physical Function (PROMIS-PF), visual analog scale (VAS) for neck and arm pain, NDI, and 12-Item Short-Form (SF-12) Physical Composite Score (PCS). Patients were grouped according to preoperative NDI <50 (mild to moderate neck disability) or NDI ≥50 (severe neck disability). Demographics/perioperative characteristics/postoperative complications/mean PROMs/MCID achievement rates were compared using χ2 or Student t test. Postoperative improvement from preoperative baseline within each cohort was assessed with paired-samples t test. MCID achievement was determined by comparing ΔPROMs with established thresholds. Results: A total of 225 patients were included, 150 NDI <50 and 75 NDI ≥50. The NDI ≥50 cohort was significantly younger (P = 0.002). 
Cohorts did not differ for spinal disease/operative duration/estimated blood loss/postoperative length of stay/postoperative narcotic consumption/adjacent segment disease rate/1-year arthrodesis rate/6-month pseudarthrosis rate. Postoperative VAS pain score on postoperative day 0 and 1 was significantly increased in the NDI ≥50 cohort (P < 0.048, all). Postoperative complication rates did not differ. All mean PROMs differed at all time points (P < 0.043, all). The NDI <50 patient cohort significantly improved from preoperative baseline for all PROMs and time points except SF-12 PCS/Patient-Reported Outcome Measurement Information System-Physical Function at 6 weeks. The NDI ≥50 cohort significantly improved for all PROMs and time points except SF-12 PCS at 6 weeks. The NDI ≥50 cohort showed a greater proportion achieving MCID for NDI at 6 weeks/2 years/overall (P < 0.037, all). Conclusions: Both cohorts showed significant long-term clinical improvement for neck pain/arm pain/physical function/neck disability, although patients with severe preoperative neck disability reported inferior mean scores for these outcomes at all time points. Answer: Yes, a high level of disability is an indication for spinal fusion. According to the study by PUBMED:16306833, there is a direct relationship between preoperative level of disability and functional recovery, suggesting that disability should be measured preoperatively and spinal fusion should be performed to alleviate disability caused by degenerative spine. The study found that preoperative level of disability was the best determinant of functional recovery irrespective of age or the degree of psychological morbidity and litigation, with significant improvement in the Oswestry Disability Index postoperatively. This indicates that patients with a higher level of disability before surgery may experience more significant functional improvements after undergoing spinal fusion.
Instruction: Does breast cancer change patients' dietary habits? Abstracts: abstract_id: PUBMED:34804915 Results of the ECHO (Eating habits CHanges in Oncologic patients) Survey: An Italian Cross-Sectional Multicentric Study to Explore Dietary Changes and Dietary Supplement Use, in Breast Cancer Survivors. The role of a healthy diet in cancer prevention is well recognized. Recent data indicate that following the same advices can also improve cancer survivors' quality of life. Breast cancer (BC) patients are commonly concerned about diet and nutrition and frequently express the need to obtain health-related information and the will to change their diet and lifestyle. Hence, be aware of survivors' dietary changes and information needs is crucial for healthcare professionals to guide them toward optimal lifestyle choices. In order to investigate eating habits changes in a BC survivors' population, we conceived the cross-sectional multicentric study ECHO (Eating habits CHanges in Oncologic patients) Survey. Data were collected from 684 patients, diagnosed with invasive breast cancer, in order to investigate their changes in food consumption, use of supplements, or the beginning of a specific diet, after BC diagnosis. We also examined the sources of information used and if any modification in their diets was reported to the oncologist. We primarily observed that patients increased their consumption of vegetables, pulses, nuts, fruits, wholemeal bread/pasta, grains and fish; while decreasing red and processed meat, refined bread/pasta, baked good and animal fat consumption. Survivors also reported the use of dietary supplements, mainly vitamins, aimed at counteracting therapies' side effects. Changes in nutritional habits were often adopted without asking or informing the oncologist. Despite BC survivors made some positive changes in their nutritional habits, those modifications were mostly pursued by less than half of them, while the majority of patients consumed nutritional supplements after diagnosis. These results, as well as the failure to communicate with the physicians, reinforce the need to both improve the patient-healthcare professional relationship and to develop tailored nutrition counselling and intervention programs for cancer survivors. abstract_id: PUBMED:35738619 Taste Alterations Do Not Affect Change in Food Habits and Body Weight in Breast Cancer Patients. Background/aim: Chemotherapy-induced taste alterations (TAs) affect approximately 53-84% of breast cancer patients with significant consequences on flavor perception, possibly leading to food aversion and changes in daily dietary habits. The aim of this study was to investigate the relationship between TAs and changes in food habits and body weight among early breast cancer (EBC) patients undergoing adjuvant chemotherapy. Patients And Methods: TAs were prospectively evaluated in 182 EBC patients from April 2014 to June 2018. TAs, dietary habits, and body weight were collected by a trained dietician. TAs were classified into different subtypes according to the following basic taste perception: metallic, sweet, bitter, salty, sour, and umami taste. Results: During adjuvant chemotherapy, a significant reduction in the consumption of bread, breadsticks, red meat, fat salami, snacks, added sugar, milk, and alcoholic beverages was observed, regardless of TAs onset. No correlation between these dietary changes and different TAs subtypes was found. 
Body weight remained stable in most EBC patients (71.4%) and was not influenced by TAs onset and by different TAs subtypes. Conclusion: EBC patients change their dietary habits during adjuvant chemotherapy, mostly following the World Cancer Research Fund recommendations, irrespective of TAs onset and without affecting body weight. abstract_id: PUBMED:11114679 Does breast cancer change patients' dietary habits? Purpose: The results of epidemiological studies on diet and cancer are often difficult to interpret on an individual level and may influence patients' beliefs, attitudes and behaviour. This study investigated the behaviour of breast cancer patients and their attitudes to dietary changes and the need of dietary advice during their disease. Patients And Methods: The study population consisted of breast cancer patients visiting the Department of Oncology in Turku University Hospital for treatment or follow-up in August and September 1999. A questionnaire was given to a total of 123 subjects. Results: The majority, 65%, were attending the clinic for treatment, 35% for follow-up. Ninety-seven patients (86%) consumed a normal Finnish diet, six (5.3%) were vegetarians and 10 (8.1%) vegetarians consuming fish and chicken occasionally. Eleven patients (8.9%) considered diet a factor contributing to their breast cancer and 38 (31.9%) had changed their dietary habits after the diagnosis of breast cancer. The numbers were higher in younger patients with higher educational background. The main reason for change in diet was the desire to be cured of cancer (52.9% of those patients who had changed their dietary habits), in 11.8% to alleviate the symptoms of nausea and 11.8% were advised by health care professionals. The main changes reported included a reduction in the consumption of animal fat, sugar and red meat and increased consumption of fruit, berries and vegetables. Forty-nine patients (39.8%) used vitamin and mineral supplements and 27 (21.9%) consumed dietary supplements including natural products and probiotics. The source of information on how to change the diet was for 33.3% the mass media, 19.4% were advised by doctors and nurses and 11.1% by dietitians. One-third of the patients expressed a need for more information on dietary factors. Conclusion: Breast cancer patients' need of control over their own life prompts an interest in alternative dietary habits after diagnosis, the focus being on a healthier diet. Expert dietary information is considered important. Many patients mentioned a lack of precise dietary recommendations for their individual disease situation and depended on information from outside their treatment centre. abstract_id: PUBMED:34480675 Dietary changes and food habits: social and clinical determinants in a cohort of women diagnosed with breast cancer in Barcelona (DAMA cohort). Objective: The aim of this study was to analyze the influence of social determinants on changes in dietary habits before and after diagnosis of breast cancer in women (Barcelona, 2003-2013). Methods: We performed a cohort study with 2,235 women diagnosed with breast cancer. The information was obtained from an ad hoc questionnaire based on recommendations from the Spanish Society of Community Nutrition. We conducted a descriptive bivariate analysis and fit logistic regression models. 
The dependent variable was the change in dietary habits (food groups) and the independent variables were a selection of social and clinical variables (age, social class, cohabitation, years since diagnosis, history of relapses, and treatment with chemotherapy). Results: While 5.8% of women followed a healthy diet (consumption of vegetables, fruits, farinaceous, lean meat, and seafood) before diagnosis, 9.5% did so after diagnosis. We observed statistically significant changes in consumption of all food groups (p < 0.001) after diagnosis. The greatest change in consumption patterns was observed in women aged < 50 years and those from non-manual classes (high classes) [e.g., legume consumption: OR<50 years/>65 years = 2.9 (95% CI 1.78-4.81); ORnon-manual/manual = 2.5 (95% CI 1.38-4.36)]. The occurrence of relapses and chemotherapy was associated with greater changes in dietary habits. Conclusion: Women with breast cancer change their eating habits after diagnosis, and these changes are conditioned by social and clinical determinants. abstract_id: PUBMED:29433784 Do cancer patients change their diet in the e-health information era? A review of the literature and a survey as a proposal for the Italian population. In recent years, there has been an explosion in the amount of available information on cancer in parallel with an ever-increasing number of cancer survivors. Cancer patients and long-term survivors are known to be more sensitive to health-related information and dietary changes could represent a potential consequence of this huge availability of messages. In our review about dietary changes after cancer diagnosis, we found that this topic is particularly investigated among the breast cancer population. The literature examined shows that breast cancer patients modify their eating habits after diagnosis in a percentage that varies between approximately 30% and 60%. The most reported changes were an increased consumption of fruit and vegetables, a decrease in the consumption of red meat, fats and sugary foods. Patients who reported changes were more likely to be younger, with higher educational levels and with a longer period of time since their diagnosis of cancer. It also emerged that cancer patients are often more likely to use supplements. This topic has not been investigated in cancer patients in Italy, therefore, we propose an approach to explore it with a structured questionnaire: The "ECHO SURVEY - Eating habits CHanges in Oncologic patients". abstract_id: PUBMED:2261900 Dietary intervention in breast cancer patients: effects on dietary habits and nutrient intake. Effects of dietary intervention on dietary habits and nutrient levels were studied in 240 women aged 50-65 years who had been operated for a stage I-II breast cancer. Following surgery the women who had participated in a dietary history interview were randomly assigned to one of two groups. The intervention group (n = 121) received individual dietary counselling aimed at reducing dietary fat intake to 20-25 per cent of energy (E per cent) while increasing intake of carbohydrates. No dietary advice was given to women assigned to the control group (n = 119). There was no significant difference between the groups in terms of base-line nutrient intake except for a higher energy intake in the control group (P less than 0.05). Sixty-three (52 per cent) of the women in the intervention group and 106 (89 per cent) of the women in the control group completed the 2-year follow-up.
Energy intake decreased after 2 years in both groups and the difference between the groups remained (P less than 0.01). Total fat intake decreased in the intervention group by 12.9 E per cent, to 23.3 E per cent, after 2 years, and by 3.1 E per cent in the control group. The intake of carbohydrates increased from 46.2 to 57.2 E per cent in the intervention group and from 46.2 to 48.9 E per cent in the control group. The study shows that dietary habits and intake of nutrients can be altered through dietary counselling to breast cancer patients, and that such changes are long lasting. abstract_id: PUBMED:33488824 Changes in Eating Habits in Breast Cancer Patients. Introduction: Receiving a cancer diagnosis is an important moment in anyone's life. Consequently, many patients are prepared to change their everyday habits and begin to look for advice from a wide range of sources. Women with breast cancer are particularly motivated and committed to making changes to their lifestyle and diet. The purpose of this study was to elucidate the changes in nutritional and other lifestyle habits following breast cancer diagnosis in Slovenia. A further goal was to estimate the proportion of breast cancer patients using dietary supplements and alternative diets or ascertain their desire to attend a consultation with a dietician. Methods: A link to an online questionnaire was sent to the email addresses of members of Europa Donna and posted on their website (www.europadonna-zdruzenje.si) and Facebook page. Results: A total of 102 patients were included in the study. We found that a majority of breast cancer patients changed their eating habits (68.6%) and/or physical activity level (53.9%) following diagnosis. On average, they increased their fruit and vegetable intake and reduced their intake of sugar, red meat and fat. Alternative diets were used by 29.4% of patients, with a high proportion of patients (75.5%) consuming dietary supplements. More than half of the patients (69.6%) expressed a desire for a consultation with a dietician. Conclusions: Nutritional support during cancer treatment is part of medical treatment and has an important role to play in secondary and tertiary cancer prevention activities. More dieticians should therefore be incorporated into the health system.
When total fat was substituted with types of fat, the OR for having a tumour with aneuploid DNA pattern was 1.30 (95% confidence interval, 1.07-1.59) for each 1 g increase in intake of saturated FA (E%). These results suggest a correlation between a diet rich in fat and protein and the DNA content of breast cancer cells. abstract_id: PUBMED:8416253 Treatment failure and dietary habits in women with breast cancer. Background: Epidemiological and experimental evidence suggests that breast cancer risk can be reduced by dietary measures. Study of the relationships between dietary habits and prognosis in patients with breast cancer is essential to the design of diet intervention trials. Purpose: Our purpose was to determine whether dietary habits are associated with disease-free survival in patients with breast cancer who have undergone treatment. Methods: We interviewed 240 women about their dietary histories. These women were 50-65 years old and had pathological stage I-II breast cancer with subsequent follow-up for 4 years; 209 of these women were postmenopausal. Differences in dietary variables between groups of patients were analyzed with bivariate and multivariate statistical methods. Results: Cancers were classified as estrogen receptor (ER) rich (> or = 0.10 fmol/micrograms of DNA) in 149 patients and as ER poor (< 0.10 fmol/micrograms of DNA) in 71 patients. Fifty-two patients had treatment failure during follow-up. The 30 patients with ER-rich tumors who had treatment failure reported higher intakes of total fat, saturated fatty acids, and polyunsaturated fatty acids than did the 119 patients with ER-rich tumors who did not have treatment failure. The multiple-odds ratio (OR) for treatment failure in these women was 1.08 for each 1% increment in percentage of total energy (E%) from total fat. For treatment failure within the first 2 years, the OR was 1.19 for each 1-mg increase in vitamin E intake per 10 megajoules of energy. In women with treatment failure 2-4 years after diagnosis, ORs were 1.13 and 1.23 for each E% increment in total fat or saturated fatty acids, respectively. No association between dietary habits and treatment failure was found for women with ER-poor cancers. There was a tendency to a dose-response relationship (in quartiles) between intake of saturated fatty acids and disease-free survival, but the observed differences were not statistically significant. Conclusions: Dietary habits at the time of diagnosis may affect prognosis, at least for patients with ER-rich breast cancers. Dietary fat may have an effect on growth or spread of breast cancer, both of which may vary according to type of fat. Total fat and saturated fatty acids were the dietary parameters most strongly associated with risk for treatment failure. Implications: Dietary intervention might serve as an adjuvant treatment to improve breast cancer prognosis. abstract_id: PUBMED:36981785 Effect of Combining Impact-Aerobic and Strength Exercise, and Dietary Habits on Body Composition in Breast Cancer Survivors Treated with Aromatase Inhibitors. This study examines the effect of a twice-weekly combined exercise program (a 1 h session of strength and a 1 h session of impact-aerobic exercise) on both body composition and dietary habits after one year of treatment with aromatase inhibitors (AI) in breast cancer survivors. Overall, forty-three postmenopausal women with a BMI ≤ 35 kg/m2, breast cancer survivors treated with AI, were randomized into two groups: a control group (CG) (n = 22) and a training group (IG) (n = 21).
Body composition (i.e., abdominal, visceral, and subcutaneous adipose tissue) was measured by magnetic resonance. In addition, some questionnaires were used to gather dietary data and to measure adherence to the Mediterranean diet. After one year, women in the IG showed a significant improvement in body composition, indicated by decreases in subcutaneous and visceral adipose tissue, and total fat tissue. Furthermore, the dietary habits were compatible with moderate adherence to the Mediterranean diet pattern and a low dietary intake of Ca, Zn, Folic Ac, and vitamins D, A, and E. A twice-weekly training program combining impact aerobic exercise and resistance exercise may be effective in improving the body composition for postmenopausal women who have breast cancer treated with AI, and the results suggest the need for nutritional counselling for this population. Answer: Yes, breast cancer does change patients' dietary habits. Studies have shown that after a diagnosis of breast cancer, many patients alter their eating habits, often shifting towards healthier dietary patterns. For instance, the ECHO Survey found that breast cancer survivors increased their consumption of vegetables, pulses, nuts, fruits, whole grains, and fish, while decreasing their intake of red and processed meats, refined bread/pasta, baked goods, and animal fats. However, these positive changes were adopted by less than half of the patients, and the majority consumed nutritional supplements after diagnosis (PUBMED:34804915). Another study in the early breast cancer (EBC) patient population undergoing adjuvant chemotherapy showed that patients changed their dietary habits during treatment, mostly following the World Cancer Research Fund recommendations, irrespective of taste alterations onset and without affecting body weight (PUBMED:35738619). Similarly, a study from Turku University Hospital indicated that a significant proportion of breast cancer patients changed their dietary habits after diagnosis, with younger patients and those with higher educational backgrounds being more likely to do so. The main changes reported included a reduction in the consumption of animal fat, sugar, and red meat, and an increased consumption of fruit, berries, and vegetables (PUBMED:11114679). The DAMA cohort study also found that women with breast cancer changed their eating habits after diagnosis, with these changes being influenced by social and clinical determinants (PUBMED:34480675). Moreover, a review of the literature and a survey proposal for the Italian population revealed that breast cancer patients modify their eating habits after diagnosis, with an increased consumption of fruit and vegetables and a decrease in the consumption of red meat, fats, and sugary foods (PUBMED:29433784). In Slovenia, a study found that a majority of breast cancer patients changed their eating habits following diagnosis, increasing their fruit and vegetable intake and reducing their intake of sugar, red meat, and fat. Many patients also expressed a desire for a consultation with a dietician (PUBMED:33488824). These findings collectively suggest that breast cancer diagnosis often leads to changes in dietary habits, with many patients seeking to adopt a healthier diet, potentially as a way to gain some control over their health and contribute to their recovery and well-being.
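Several of the abstracts above summarize dietary change as odds ratios with 95% confidence intervals (for example, OR 2.9, 95% CI 1.78-4.81 for legume consumption by age group). As a point of reference, the sketch below shows how an unadjusted odds ratio and its Wald confidence interval are conventionally derived from a 2x2 table; the counts, grouping, and variable names are illustrative assumptions and are not taken from any of the cited studies, which report adjusted estimates from regression models on their own data.

```python
import math

# Hypothetical 2x2 table: rows = age group, columns = changed legume intake (yes/no).
# These counts are illustrative only; they are not taken from any cited cohort.
a, b = 120, 80   # <50 years: changed / did not change
c, d = 60, 115   # >65 years: changed / did not change

odds_ratio = (a * d) / (b * c)
log_or = math.log(odds_ratio)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Wald standard error of ln(OR)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

The adjusted odds ratios quoted in the abstracts come from multivariable models rather than raw 2x2 tables, so this shows only the simplest unadjusted case.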
Instruction: Does examination of urinary sediment identify individuals with Gulf War syndrome? Abstracts: abstract_id: PUBMED:9537634 Does examination of urinary sediment identify individuals with Gulf War syndrome? A pilot study. Background: Many veterans who were involved in the Persian Gulf theater of operations have had a variety of unexplained physical complaints, collectively called the Gulf War syndrome or similar names. There has been much debate on the issue and numerous publications, both in the medical and the lay press. A method for examining urinary sediment that was developed in an effort to identify nonculturable bacteria has been used in Gulf War veterans and was the basis for intensive antimicrobial therapy in many of them. Methods: We evaluated eight Gulf War veterans with complaints compatible with Gulf War syndrome. Subjects were from various parts of the United States. A detailed history and physical examination were performed. Urine was obtained before and after prostatic massage (men) or before and after pelvic examinations (women) and was tested by a previously described microscopic method as well as by culture and conventional Gram stain. Age- and sex-matched healthy control subjects were tested similarly and concurrently. Results: Two female Gulf War veterans had findings of Candida albicans and Klebsiella pneumoniae by conventional culture. The same organism types were seen both by the special method and by conventional Gram stain. All other subjects and controls were completely indistinguishable. Conclusion: Examining the urinary sediment by this elaborate method does not differentiate persons with Gulf War syndrome from normal, healthy control subjects who were never in the Persian Gulf area. abstract_id: PUBMED:9856699 Urinary sediment examination and Gulf War Syndrome. N/A abstract_id: PUBMED:9856700 Urinary sediment examination and Gulf War Syndrome. N/A abstract_id: PUBMED:17036619 Predictors of exercise compliance in individuals with Gulf War veterans illnesses: Department of Veterans Affairs Cooperative Study 470. Although the health benefits of exercise for individuals with Persian Gulf War veterans illnesses (GWVI) are documented, many of these individuals do not exercise regularly enough to obtain benefits. The purpose of this study was to investigate factors predicting exercise compliance among individuals with GWVI in a multicenter, randomized, clinical trial. Participants were 1,092 veterans who reported at least two of the following cardinal symptoms of GWVI: (1) fatigue, (2) musculoskeletal pain, and (3) cognitive problems. Participants received exercise alone or exercise and cognitive-behavioral therapy. The overall level of compliance was relatively low during the exercise treatment phase (46.2%) and decreased by one-half during the follow-up period (23.0%). Predictors of compliance during treatment included less pain and greater age, motivation, and body mass index. Predictors of compliance during the follow-up period included less pain and greater age. The results highlight factors that affect adoption and maintenance of physical activity in a population with GWVI. abstract_id: PUBMED:10681117 Patterns of waking EEG spectral power in chemically intolerant individuals during repeated chemical exposures. 
Previous studies indicate that low level chemical intolerance (CI) is a symptom of several different controversial conditions with neuropsychiatric features, e.g., chronic fatigue syndrome, fibromyalgia, multiple chemical sensitivity, and "Persian Gulf Syndrome". Prior studies suggest that limbic and/or mesolimbic sensitization may contribute to development of CI. The purpose of this report was to document the waking electroencephalographic (EEG) patterns of individuals with CI during chemical exposures presented over repeated sessions. Three groups of adult subjects who were recruited from the community participated in the study: self-reported CI who had made associated lifestyle changes due to their intolerance (CI/LSC), self-reported CI who had not made such changes (CI), and normal controls without self-reported CI. Subjects underwent two sessions involving one-minute EEG recordings during exposures to low level chemical odors (a probe for limbic activation). The CI, but not the CI/LSC, subjects had increased absolute delta power after the chemical exposures during the second, but not the first, session. The findings support the neural sensitization hypothesis for intolerance to low levels of environmental chemicals in vulnerable individuals. As in human studies of stimulant drug sensitization, those with the strongest past history with sensitizing agents may not show short-term sensitization to low level exposures in the laboratory. abstract_id: PUBMED:15452300 Clinical and laboratory assessment of distal peripheral nerves in Gulf War veterans and spouses. Background: The prevalence of symptoms suggesting distal symmetric polyneuropathy (DSP) was reported to be higher among deployed veterans (DV) to the Persian Gulf in 1990-1991 than to control non-deployed veterans (NDV). The authors therefore compared the prevalence of DSP by direct examination of DV and their spouses to control NDV and spouses. Methods: The authors performed standardized neurologic examinations on 1,061 DV and 1,128 NDV selected from a cohort of veterans who previously participated in a national mail and telephone survey. Presence of DSP was evaluated by history, physical examination, and standardized electrophysiologic assessment of motor and sensory nerves. Similar examinations were performed without electrophysiologic tests in 484 DV spouses and 533 NDV spouses. Statistical analyses were performed with appropriate adjustments for the stratified sampling scheme. Results: No differences between adjusted population prevalence of DSP in DV and NDV were found by electrophysiology (3.7% vs 6.3%, p = 0.07), by neurologic examination (3.1% vs 2.6%, p = 0.60), or by the methods combined (6.3% vs 7.3%, p = 0.47). Excluding veterans with non-military service related diseases that may cause DSP did not alter outcomes. DV potentially exposed to neurotoxins from the Khamisiyah ammunition depot explosion did not significantly differ in DSP prevalence compared to non-exposed DV. The prevalence of DSP in DV spouses did not differ from NDV spouses (2.7% vs 3.2%, p = 0.64). Conclusions: Neither veterans deployed during the Gulf War era nor their spouses had a higher prevalence of DSP compared to NDV and spouses. abstract_id: PUBMED:28453542 Phospholipid profiling of plasma from GW veterans and rodent models to identify potential biomarkers of Gulf War Illness.
Gulf War Illness (GWI), which affects at least one fourth of the 700,000 veterans deployed to the Gulf War (GW), is characterized by persistent and heterogeneous symptoms, including pain, fatigue and cognitive problems. As a consequence, this illness remains difficult to diagnose. Rodent models have been shown to exhibit different symptomatic features of GWI following exposure to particular GW agents (e.g. pyridostigmine bromide, permethrin and DEET) and/or stress. Preclinical analyses have shown the activation of microglia and astroglia as a pathological hallmark in these mouse and rat models. Although much has been learned in recent years from these different rodent models and independent clinical studies, characterization studies to identify overlapping features of GWI in animals and humans have been missing. Thus, we aimed to identify biomarkers that co-occur in the plasma of rodent models of GWI and human GWI patients. We observed increases of multiple phospholipid (PL) species across all studied cohorts. Furthermore, these data suggested dysfunction within ether and docosahexaenoic acid and arachidonic acid containing PL species in relation to GWI. As these PL species play a role in inflammatory processes, these findings suggest a possible role for inflammatory imbalance in GWI. Overall, we show that the peripheral lipid disturbances are present both in human GWI patients and in the preclinical rodent models of GWI, highlighting the importance of lipidomics as a potential platform for further biomarker discovery and supporting the value of GW agent exposed models of GWI. abstract_id: PUBMED:10968383 Factor analysis of self-reported symptoms: does it identify a Gulf War syndrome? Active duty US Naval mobile construction battalion personnel (Seabees) were surveyed in 1994 for the presence of a variety of symptoms. Questions were drawn from the Hopkins Symptom Checklist and from a collection of symptoms either defining clinical depression or commonly reported by Persian Gulf War veterans. Of those surveyed, 524 were Gulf War veterans and 935 were nondeployed Gulf War-era veterans. Factor analysis applied to Gulf War veterans yielded five factors, three deriving from the Hopkins Symptom Checklist, one suggesting clinical depression, and one containing symptoms commonly reported by Gulf War veterans. Factor analysis applied to nondeployed veterans yielded five similar factors. Three of the factors yielded statistically significantly greater standardized factor scores for Gulf War veterans than for nondeployed veterans. Four of the factors resembled factors resulting from a previous analysis on a sample of similar Gulf War veterans. Gulf War veterans and nondeployed era veterans reported similar clusters of symptoms and illnesses. However, Gulf War veterans reported these same clusters with greater frequencies than did nondeployed veterans. The authors conclude that, in contrast to a previous report, factor analysis did not identify a unique Gulf War syndrome. abstract_id: PUBMED:11370211 Evaluation of Persian Gulf veterans with symptoms of peripheral neuropathy. Background: Persian Gulf syndrome is a set of symptoms that do not fit into well-understood diagnostic categories. Among these symptoms, there are some that could suggest a generalized neuropathic process. Objective: Correlate neurological symptoms with objective electrodiagnostic findings. Methods: A randomized sample of 176 Persian Gulf veterans (PGV) evaluated at the San Juan Veterans Administration Medical Center was obtained. 
The subjects completed a questionnaire, and those who met the inclusion criteria underwent electrodiagnostic evaluation. Results: Of the 176 PGV selected, 162 completed the questionnaire. The next step was to perform electrodiagnostic studies on those who described symptoms suggesting peripheral neuropathy and met the inclusion criteria. Twelve individuals met the inclusion criteria for electro-diagnostic studies. All studies were normal except that two subjects were found to have bilateral carpal tunnel syndrome. Conclusions: Although this is a relatively small sample of PGV, the findings are in accordance with other studies in which no definite generalized neuropathic pattern has been described. abstract_id: PUBMED:26156520 Using gene expression signatures to identify novel treatment strategies in gulf war illness. Background: Gulf War Illness (GWI) is a complex multi-symptom disorder that affects up to one in three veterans of this 1991 conflict and for which no effective treatment has been found. Discovering novel treatment strategies for such a complex chronic illness is extremely expensive, carries a high probability of failure and a lengthy cycle time. Repurposing Food and Drug Administration approved drugs offers a cost-effective solution with a significantly abbreviated timeline. Methods: Here, we explore drug re-purposing opportunities in GWI by combining systems biology and bioinformatics techniques with pharmacogenomic information to find overlapping elements in gene expression linking GWI to successfully treated diseases. Gene modules were defined based on cellular function and their activation estimated from the differential expression of each module's constituent genes. These gene modules were then cross-referenced with drug atlas and pharmacogenomic databases to identify agents currently used successfully for treatment in other diseases. To explore the clinical use of these drugs in illnesses similar to GWI we compared gene expression patterns in modules that were significantly expressed in GWI with expression patterns in those same modules in other illnesses. Results: We found 19 functional modules with significantly altered gene expression patterns in GWI. Within these modules, 45 genes were documented drug targets. Illnesses with highly correlated gene expression patterns overlapping considerably with GWI were found in 18 of the disease conditions studied. Brain, muscular and autoimmune disorders composed the bulk of these. Conclusion: Of the associated drugs, immunosuppressants currently used in treating rheumatoid arthritis, and hormone based therapies were identified as the best available candidates for treating GWI symptoms. Answer: No, examination of urinary sediment does not identify individuals with Gulf War syndrome. A pilot study evaluated eight Gulf War veterans with complaints compatible with Gulf War syndrome using a detailed history, physical examination, and urine tests before and after prostatic massage in men or pelvic examinations in women. The urine was tested by a previously described microscopic method as well as by culture and conventional Gram stain. Age- and sex-matched healthy control subjects were tested similarly and concurrently. The results showed that two female Gulf War veterans had findings of Candida albicans and Klebsiella pneumoniae by conventional culture, which were also seen by the special method and by conventional Gram stain. However, all other subjects and controls were completely indistinguishable. 
The conclusion of the study was that examining the urinary sediment by this elaborate method does not differentiate persons with Gulf War syndrome from normal, healthy control subjects who were never in the Persian Gulf area (PUBMED:9537634).
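One of the abstracts above (PUBMED:10681117) reports increased absolute delta power in one-minute waking EEG recordings after chemical exposures. As a rough illustration of how absolute band power is typically estimated, the sketch below computes delta-band (0.5-4 Hz) power from a single EEG channel with Welch's method; the sampling rate, segment length, and the signal itself are placeholder assumptions, not details taken from that study.

```python
import numpy as np
from scipy.signal import welch

fs = 250                         # assumed sampling rate in Hz (not stated in the abstract)
eeg = np.random.randn(60 * fs)   # placeholder for a 1-minute single-channel recording

# Welch power spectral density, then integrate over the 0.5-4 Hz (delta) band.
freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
delta_mask = (freqs >= 0.5) & (freqs <= 4.0)
absolute_delta_power = np.trapz(psd[delta_mask], freqs[delta_mask])
print(f"Absolute delta power: {absolute_delta_power:.3f}")
```

In practice the recording would come from an EEG acquisition file rather than random numbers, and artifact rejection would precede the spectral step.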
Instruction: Are there racial and sex differences in the use of oral isotretinoin for acne management in the United States? Abstracts: abstract_id: PUBMED:14512913 Are there racial and sex differences in the use of oral isotretinoin for acne management in the United States? Background: Treatment of various diseases has been noted to vary by patient demographics. There is reason to suspect that there may be sex and racial differences in the treatment of severe acne. Objective: We sought to determine if treatment of severe acne with oral isotretinoin varied with patient sex, race, or both. Methods: We analyzed the demographics of patients with acne and patients using oral isotretinoin, minocycline, and tetracycline recorded in the 1990 to 1997 National Ambulatory Medical Care Survey. Results: There were 35 million visits to physicians for the treatment of acne between 1990 and 1997, and isotretinoin was prescribed at 5.8 million (17%) of these visits. Per capita visit rates for acne among whites were 2.3 times that of blacks, and whites were 1.8 times more likely to receive isotretinoin at acne visits. Per capita, women had 1.4 times as many visits for acne as men, but men were 1.7 times more likely than women to receive isotretinoin at an acne visit. Dermatologists managed 83% of all isotretinoin visits. Dermatologists accounted for 100% of isotretinoin visits for which pregnancy prevention education and counseling was reported. Conclusions: Patients who are black receive less oral isotretinoin than those who are white, and the expense of isotretinoin appears to be one factor in this difference. Women are less likely than men to receive isotretinoin at acne visits. Expense does not appear to be a factor in this difference. Dermatologists have more experience than nondermatologists managing acne, prescribing isotretinoin, and counseling women treated with isotretinoin concerning pregnancy prevention. Pregnancy prevention is an essential component of isotretinoin use in women that must not be ignored. abstract_id: PUBMED:30913210 Differences in isotretinoin start, interruption, and early termination across race and sex in the iPLEDGE era. Background: iPLEDGE is the mandatory regulatory program for isotretinoin in the United States, aimed to prevent isotretinoin-related teratogenicity. However, little is known about potential unintended impact of the program, including delay in isotretinoin initiation, course interruption, and premature termination, which may vary across sex and racial domains. Objective: To determine whether differences in isotretinoin start, interruption, and completion exist across sex and racial domains and whether iPLEDGE regulations contribute to such differences. Methods: Retrospective review of isotretinoin courses of patients prescribed isotretinoin for acne at the Brigham & Women's Hospital and Massachusetts General Hospital from 2008-2016. Results: 418 patients were included in analysis after being tightly matched across age and gender. 43.5% of non-white patients ended their course early compared to 30.1% of white patients (p = 0.010). iPLEDGE-related barriers were the most commonly specified reasons for delayed starting and interruption. Conclusion: iPLEDGE may disproportionately contribute to access barriers for non-white patients. Continued evaluation of iPLEDGE is needed to minimize unintended barriers to access. abstract_id: PUBMED:32022834 Association of Race/Ethnicity and Sex With Differences in Health Care Use and Treatment for Acne.
Importance: Our understanding of potential racial/ethnic, sex, and other differences in health care use and treatment for acne is limited. Objective: To identify potential disparities in acne care by evaluating factors associated with health care use and specific treatments for acne. Design, Setting, And Participants: This retrospective cohort study used the Optum deidentified electronic health record data set to identify patients treated for acne from January 1, 2007, to June 30, 2017. Patients had at least 1 International Classification of Diseases, Ninth Revision (ICD-9) or International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) code for acne and at least 1 year of continuous enrollment after the first diagnosis of acne. Data analysis was performed from September 1, 2019, to November 20, 2019. Main Outcomes And Measures: Multivariable regression was used to quantify associations between basic patient demographic and socioeconomic characteristics and the outcomes of health care use and treatment for acne during 1 year of follow-up. Results: A total of 29 928 patients (median [interquartile range] age, 20.2 [15.4-34.9] years; 19 127 [63.9%] female; 20 310 [67.9%] white) met the inclusion criteria for the study. Compared with non-Hispanic white patients, non-Hispanic black patients were more likely to be seen by a dermatologist (odds ratio [OR], 1.20; 95% CI, 1.09-1.31) but received fewer prescriptions for acne medications (incidence rate ratio, 0.89; 95% CI, 0.84-0.95). Of the acne treatment options, non-Hispanic black patients were more likely to receive prescriptions for topical retinoids (OR, 1.25; 95% CI, 1.14-1.38) and topical antibiotics (OR, 1.35; 95% CI, 1.21-1.52) and less likely to receive prescriptions for oral antibiotics (OR, 0.80; 95% CI, 0.72-0.87), spironolactone (OR, 0.68; 95% CI, 0.49-0.94), and isotretinoin (OR, 0.39; 95% CI, 0.23-0.65) than non-Hispanic white patients. Male patients were more likely to be prescribed isotretinoin than female patients (OR, 2.44; 95% CI, 2.01-2.95). Compared with patients with commercial insurance, those with Medicaid were less likely to see a dermatologist (OR, 0.46; 95% CI, 0.41-0.52) or to be prescribed topical retinoids (OR, 0.82; 95% CI, 0.73-0.92), oral antibiotics (OR, 0.87; 95% CI, 0.79-0.97), spironolactone (OR, 0.50; 95% CI, 0.31-0.80), and isotretinoin (OR, 0.43; 95% CI, 0.25-0.75). Conclusions And Relevance: The findings identify racial/ethnic, sex, and insurance-based differences in health care use and prescribing patterns for acne that are independent of other sociodemographic factors and suggest potential disparities in acne care. In particular, the study found underuse of systemic therapies among racial/ethnic minorities and isotretinoin among female patients with acne. Further study is needed to confirm and understand the reasons for these differences. abstract_id: PUBMED:28831837 Racial disparities in the management of acne: evidence from the National Ambulatory Medical Care Survey, 2005-2014. Purpose: Racial health disparities are widespread in the United States, but little is known about racial disparities in the management of dermatological conditions. Materials And Methods: Nationally representative data on the management of acne vulgaris were gathered from the National Ambulatory Medical Care Survey (NAMCS) for the years 2005-2014. Visits to any specialist were included. 
Rao-Scott chi-square tests and multivariate adjusted logistic regressions were used to identify differences in patient demographics, visit characteristics and acne medications across races. Results: Black patients are less likely than white patients to visit a dermatologist (adjusted odds ratio (aOR) 0.48, p = 0.001), receive any acne medication (aOR 0.64, p = 0.01), receive a combination acne medication (aOR 0.52, p = 0.007) or receive isotretinoin (aOR 0.46, p = 0.03). Adjusting for management by a dermatologist eliminated the association between race and the prescription of any acne medication as well as between race and the prescription of isotretinoin. Conclusion: Among outpatient visits for acne in the United States, racial disparities exist in the likelihood of seeing a dermatologist and receiving treatment. Treatment disparities are less common when care is provided by a dermatologist. More research is needed to better understand the causes of disparities in acne management and other dermatological conditions. abstract_id: PUBMED:23839177 Clinical relevance of skin barrier changes associated with the use of oral isotretinoin: the importance of barrier repair therapy in patient management. Oral isotretinoin is a non-aromatic oral retinoid that is highly effective for the treatment of severe inflammatory acne vulgaris that is refractory and/or prone to scarring, and has also been used successfully to treat several other disorders in selected cases. Since its introduction into the United States marketplace in 1982, it has been well recognized that cutaneous side effects characterized by xerotic and desquamative changes are very common, and appear to be related to epidermal dyscohesion, and to some extent the sebosuppressive effects of the drug. Additionally, increased susceptibility to staphylococcal colonization has also been observed. The epidermal barrier impairments that have been associated with oral isotretinoin are reviewed in this article along with clinical implications. Strategies to mitigate the altered effects of epidermal barrier functions are reviewed including the importance of topical barrier repair therapy. abstract_id: PUBMED:11907498 Use of isotretinoin (Accutane) in the United States: rapid increase from 1992 through 2000. Background: Isotretinoin, a drug approved to treat severe recalcitrant nodular acne, has been marketed in the United States since 1982. The drug is an effective treatment for acne that is refractory to other therapies, but it is a teratogen and can cause serious side effects. Objective: Our purpose was to describe trends in the use of isotretinoin in the United States from marketing through year 2000 and summarize characteristics of patients and prescribers. Methods: Data from 2 pharmaceutical marketing research databases, the National Prescription Audit Plus and the National Disease and Therapeutic Index, and from 2 health plan networks were obtained and analyzed. Results: Retail pharmacies dispensed 19.8 million outpatient prescriptions for isotretinoin from marketing in 1982 through 2000. From 1983 through 1993, the median annual number of prescriptions was just over 800,000; between 1992 and 2000, the number of prescriptions increased 2.5-fold (250%) to nearly 2 million in year 2000. The increases registered in the health plans were somewhat larger: about 275% increases from 1995 through 1999. There is no ICD-9 code for nodulocystic acne; consequently, the type of acne treated with isotretinoin is not determinable from these data. 
However, between 1993 and 2000, the proportion of isotretinoin treatment for severe acne declined from 63% to 46%, whereas the proportion of treatment for mild and moderate acne increased from 31% to 49%. Data also indicated that the sex distribution of patients was nearly even, and that 63% of male patients prescribed isotretinoin were 15 to 19 years old, whereas 51% of female patients were 15 to 24 years old. Conclusion: In the last 8 years, there has been a 2.5-fold (250%) increase in the number of dispensed prescriptions for isotretinoin in the United States. Data also reveal an increasing proportion of isotretinoin use for mild and moderate acne. abstract_id: PUBMED:34127306 Oral isotretinoin: 29 years of medical malpractice cases in the United States. Background: Reviews of malpractice claims in the United States show trends of increasing payments with statistically higher payouts for more debilitating and permanent injuries. Objectives: To examine lawsuits involving notable associated adverse reactions of isotretinoin. Methods: Court records of US legal trials from 1985 to 2014 were obtained from a major computerized database: LexisNexis. Data were compiled on the demographics of the defendant and plaintiff, litigation over adverse drug reaction, legal allegations, verdicts, and ruling decisions. Results: Seventy unique cases met inclusion criteria and were selected for review. Forty-four cases cited physicians, and 26 cited a pharmaceutical company. When data from physician and pharmaceutical company litigations were combined, individuals 17 years or younger were more likely to be granted litigation outcomes in their favor compared with adults (P = .0016). Cases alleging failure to monitor were associated with an outcome in favor of the plaintiff (P = .0379). Limitations: Cases settled or terminated before going to court could not be reviewed. Conclusions: Familiarity with malpractice trends through precedent-setting cases will increase physician awareness of common pitfalls, potentially mitigating litigation risk and improving patient care. abstract_id: PUBMED:26761932 Status report from the American Acne & Rosacea Society on medical management of acne in adult women, part 3: oral therapies. Parts 1 and 2 of this 3-part series provided an overview of the epidemiology, visible patterns, and important considerations for clinical and laboratory evaluation of acne vulgaris (AV) in adult women and reviewed the role of proper skin care and topical therapies in this patient population. In Part 3, oral therapies including combination oral contraceptives, spironolactone, antibiotics, and isotretinoin are discussed along with important considerations that clinicians should keep in mind when selecting oral agents for management of AV in adult women. abstract_id: PUBMED:37649956 Oral Isotretinoin and Its Uses in Dermatology: A Review. In 1982, the Food and Drug Administration (FDA) of the United States of America approved isotretinoin (13-cis-retinoic acid), a retinoid derivative of vitamin A, to treat severe recalcitrant acne vulgaris. Apart from its prescribed use for severe acne, evidence suggests that isotretinoin is commonly used off-label to treat mild-to-moderate acne, inflammatory skin conditions, genodermatoses, skin cancer, and other skin disorders. This is due to its anti-inflammatory, immunomodulatory, and antineoplastic properties. Some "off-label" use is successful, while others are ineffective. 
Therefore, this information is essential to clinicians for deciding on the appropriate use of isotretinoin. In this article, we aim to review the most updated evidence-based data about the use of oral isotretinoin in dermatology. abstract_id: PUBMED:8300935 Historical aspects of the oral use of retinoids in acne. A number of investigations of the effects of vitamin A deficiency in animals and man and its treatment with natural products containing vitamin A were carried out in the twenties and thirties. In 1942, a clinical study in patients with acne treated with vitamin A yielded encouraging results. Further trials in the forties and fifties, trying to confirm the beneficial effect of oral vitamin A in acne, met with equivocal success. In the sixties, all-trans retinoic acid (tretinoin) became clinically available, and its topical efficacy in acne could be demonstrated. In 1971, oral tretinoin also was shown to be active in patients with acne. Coincidentally, the efficacy of oral 13-cis retinoic acid (isotretinoin) became evident in a series of unpublished studies in Europe. Then, in 1978, a trial carried out at the NIH, Bethesda, Maryland, yielded convincing evidence that isotretinoin is a potent new drug for the treatment of severe cystic acne. In 1982, isotretinoin was registered in the United States and one year later in Europe for the treatment of severe, recalcitrant, cystic acne. Since then, many thousands of patients suffering psychologically and physically from the severity of their disease have been treated successfully with this drug. However, the main concern of physicians prescribing isotretinoin has to focus on its potentially severe side effects, particularly its teratogenicity. Answer: Yes, there are racial and sex differences in the use of oral isotretinoin for acne management in the United States. Studies have shown that black patients receive less oral isotretinoin than white patients, and the expense of isotretinoin appears to be one factor in this difference (PUBMED:14512913). Additionally, women are less likely than men to receive isotretinoin at acne visits, although expense does not appear to be a factor in this difference (PUBMED:14512913). Further research indicates that iPLEDGE-related barriers were the most commonly specified reasons for delayed starting and interruption of isotretinoin treatment, which may disproportionately contribute to access barriers for non-white patients (PUBMED:30913210). Non-Hispanic black patients were found to be more likely to be seen by a dermatologist but received fewer prescriptions for acne medications, including isotretinoin, compared to non-Hispanic white patients (PUBMED:32022834). Male patients were more likely to be prescribed isotretinoin than female patients (PUBMED:32022834). Racial disparities in the management of acne have also been observed, with black patients being less likely than white patients to visit a dermatologist, receive any acne medication, receive a combination acne medication, or receive isotretinoin (PUBMED:28831837). Adjusting for management by a dermatologist eliminated the association between race and the prescription of any acne medication as well as between race and the prescription of isotretinoin (PUBMED:28831837). These findings suggest that there are indeed disparities in the use of oral isotretinoin for acne treatment in the United States that are influenced by race and sex.
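The isotretinoin abstracts above report adjusted odds ratios from multivariable logistic regression (for example, OR 0.39 for isotretinoin prescription in non-Hispanic Black versus non-Hispanic white patients). The sketch below shows that general type of analysis on a hypothetical visit-level data frame; the column names and simulated data are illustrative assumptions only and do not reproduce the NAMCS or Optum analyses, which also involve survey weighting and additional covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical visit-level data; columns are illustrative, not from NAMCS or Optum.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "isotretinoin": rng.integers(0, 2, n),   # 1 = isotretinoin prescribed at the visit
    "black": rng.integers(0, 2, n),          # 1 = non-Hispanic Black patient
    "male": rng.integers(0, 2, n),
    "age": rng.integers(12, 45, n),
})

# Adjusted odds ratios analogous in form to those reported in the acne-disparity studies.
model = smf.logit("isotretinoin ~ black + male + age", data=df).fit(disp=False)
adjusted_or = np.exp(model.params)
or_ci = np.exp(model.conf_int())
print(pd.concat([adjusted_or.rename("OR"),
                 or_ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

With simulated noise the ORs here hover around 1.0; the point is only the mechanics of obtaining adjusted ORs and confidence intervals from a fitted logit model.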
Instruction: Does risk for major adverse cardiac events in patients undergoing vasodilator stress with adjunctive exercise differ from patients undergoing either standard exercise or vasodilator stress with myocardial perfusion imaging? Abstracts: abstract_id: PUBMED:25124828 Does risk for major adverse cardiac events in patients undergoing vasodilator stress with adjunctive exercise differ from patients undergoing either standard exercise or vasodilator stress with myocardial perfusion imaging? Background: In patients with functional limitations, the use of adjunctive exercise with vasodilator stress has advantages over vasodilator stress alone in single photon emission computed tomography myocardial perfusion imaging (MPI) for technical reasons and with regards to more effective cardiac risk stratification. Whether patients who undergo vasodilator with adjunctive exercise stress MPI possess clinical characteristics and cardiac risk that differs from those who undergo standard exercise or vasodilator stress MPI is unknown. Methods: Prospectively collected data on 19,367 consecutive patients referred for stress MPI to a tertiary care center (9,331 [48%] underwent exercise-only, 3,793 [20%] underwent vasodilator plus exercise, and 6,243 [32%] underwent vasodilator-only) were analyzed. Perfusion data were scored using the ASNC 17-segment model, with a summed stress score (SSS) < 4 = normal, 4-8 = mildly abnormal, and > 8 = moderate to severely abnormal. Patients were followed a mean of 1.96 ± 0.95 years. Demographics, clinical characteristics, and the occurrence of major adverse cardiac events (cardiac death or nonfatal myocardial infarction) were compared between the three stress modality groups. Results: Comparison of demographics and clinical characteristics revealed significant differences in gender, age, cardiac risk factors, and stress MPI between the three stress modality groups (P < .001). In follow-up, cardiac event-free survival of patients in the vasodilator plus exercise stress group was significantly higher than those in the vasodilator-only group but lower than those in the exercise-only group (P < .001). Annualized cardiac event rates of patients in the vasodilator plus exercise stress group were significantly lower than those in the vasodilator-only group for all three categories of the SSS (P < .001). After multivariable adjustment, with exercise-only as reference category, vasodilator plus exercise and vasodilator-only stress emerged as independent predictors (more likely occurrence) of cardiac death, while vasodilator-only stress emerged as an independent predictor (more likely occurrence) of cardiac death or nonfatal myocardial infarction. With vasodilator-only as the reference category, exercise-only and vasodilator plus exercise stress emerged as independent predictors (less likely occurrence) of cardiac death as well as of cardiac death or nonfatal myocardial infarction. Conclusions: Patients undergoing vasodilator plus exercise stress MPI possess clinical characteristics and cardiac risk that differs significantly from those undergoing either standard exercise or vasodilator stress MPI and places them in a lower risk category compared to vasodilator stress alone. abstract_id: PUBMED:28924814 The prognostic value of heart rate response during vasodilator stress myocardial perfusion imaging in patients with end-stage renal disease undergoing renal transplantation.
Background: In asymptomatic end-stage renal disease (ESRD) patients undergoing vasodilator stress myocardial perfusion imaging (MPI) prior to renal transplantation (RT), the impact of pre-transplant heart rate response (HRR) to vasodilator stress on post-RT outcomes is unknown. Methods: We analyzed a retrospective cohort of asymptomatic patients with ESRD who underwent a vasodilator stress SPECT-MPI and subsequently received RT. Blunted HRR was defined as HRR <28% for regadenoson stress and <20% for adenosine stress. The primary endpoint was major adverse cardiac events (MACE), defined as cardiac death or myocardial infarction. Clinical risk was assessed using the sum of risk factors set forth by the AHA/ACCF consensus statement on the assessment of RT candidates. Results: Among 352 subjects, 140 had an abnormal pre-transplant HRR. During a mean follow-up of 3.2 ± 2.0 years, 85 (24%) MACEs were observed. Blunted HRR was associated with increased MACE risk (hazard ratio 1.72; 95% confidence interval 1.12-2.63, P = 0.013), and remained significant after adjustment for gender, sum of AHA/ACCF risk factors, summed stress score, baseline heart rate, and β-blocker use. HRR was predictive of MACE in patients with normal MPI and irrespective of clinical risk. Blunted HRR was associated with a significant increase in post-operative (30-day) MACE risk (17.9% vs 8.5%; P = 0.009). Conclusion: In asymptomatic ESRD patients being evaluated for RT, a blunted pre-transplant HRR was predictive of post-RT MACE. HRR may be a valuable tool in the risk assessment of RT candidates. abstract_id: PUBMED:23703380 Risk stratification among diabetic patients undergoing stress myocardial perfusion imaging. Background: Previous studies have suggested that diabetic patients undergoing single photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) are at greater risk for cardiac events than non-diabetic patients with both normal and abnormal imaging results. However, the impact of stress modality on outcomes in this patient group has not been examined. Methods: The data on all patients undergoing exercise stress or vasodilator stress SPECT MPI from 1996 to 2005 were reviewed. After excluding patients based on our predefined criteria, we subcategorized the study population into diabetic patients and non-diabetic patients. Among the diabetic patients, we identified patients with known coronary artery disease (CAD) and no known CAD. All studies were interpreted using the 17-segment ASNC model. The presence, extent, and severity of perfusion defects were calculated using the summed stress score (SSS), and patients were classified into normal (SSS < 4), mildly abnormal (SSS 4-8), and moderate-severely abnormal (SSS > 8) categories. The annualized cardiac event rate including cardiac death and non-fatal myocardial infarction was calculated over a mean follow-up period of 2.4 ± 1.4 years with a maximum of 6 years. Results: The cardiac event rate was statistically significantly lower in diabetic patients undergoing exercise stress MPI when compared to the diabetic patients undergoing pharmacological stress MPI across all three perfusion categories (1.3% vs 3.4%, 2.3% vs 5.7%, 4.2% vs 10.7%, respectively). Diabetic patients with no known CAD, who underwent exercise stress MPI had significantly lower cardiac events across all three perfusion categories as compared to the remainder of the diabetic population. 
Ability to perform exercise stress test was the strongest multivariate predictor of favorable outcome, whereas ejection fraction < 50%, abnormal perfusion imaging on SPECT MPI, and increasing age stood out as independent predictors of adverse outcome in the diabetic patients. Within the abnormal perfusion category, the annualized cardiac event rate among patients undergoing exercise stress SPECT MPI was not statistically different between the diabetic and non-diabetic cohorts. Conclusion: Diabetic patients undergoing exercise SPECT MPI have a significantly better prognosis than those undergoing pharmacological stress, more similar to patients without diabetes. In patients with diabetes exercise stress test MPI identifies low risk patients and provides precise risk stratification. abstract_id: PUBMED:17943047 Pharmacologic stress myocardial perfusion imaging. Pharmacologic stress agents (dipyridamole, adenosine and dobutamine) allow virtually all patients to be safely assessed for ischemic heart disease. These agents have mild but significant side effects, mandating a thorough knowledge of indications, contraindications, side effects and management before their use. Adjunctive exercise improves image quality in vasodilator pharmacologic myocardial perfusion imaging. Diabetics, especially women, have a much higher cardiac event rate than nondiabetics for an equal amount of ischemia. They also have a higher incidence of asymptomatic ischemia. There is growing support for screening with myocardial perfusion imaging (MPI) for asymptomatic ischemia in diabetics. The ability of MPI to identify hypocontractile but viable myocardium, thus predicting improvement in myocardial function after revascularization, is one of the most powerful uses of the modality. Vasodilator MPI should be used as the initial test in patients with left bundle branch block or paced ventricular rhythm, even if they are able to exercise. abstract_id: PUBMED:23111139 Reclassification of cardiovascular risk in patients with normal myocardial perfusion imaging using heart rate response to vasodilator stress. Previous studies have shown that patients with normal vasodilator myocardial perfusion imaging (MPI) findings remain at a greater risk of future cardiac events than patients with normal exercise MPI findings. The aim was to assess improvement in risk classification provided by the heart rate response (HRR) in patients with normal vasodilator MPI findings when added to traditional risk stratification. We retrospectively studied 2,000 patients with normal regadenoson or adenosine MPI findings. Risk stratification was performed using Adult Treatment Panel III framework. Patients were stratified by HRR (percentage of increase from baseline) into tertiles specific to each vasodilator. All-cause mortality and cardiac death/nonfatal myocardial infarction (MI) ≤2 years from the index MPI were recorded. During follow-up, 11.8% patients died and 2.7% patients experienced cardiac death/nonfatal MI in the adenosine and regadenoson groups, respectively. The patients who died had a greater Framingham risk score (12 ± 4 vs 11 ± 4, p = 0.009) and lower HRR (22 ± 16 vs 32 ± 21, p <0.0001). In an adjusted Cox model, the lowest tertile HRR was associated with an increased risk of mortality (hazard ratio 2.1) and cardiac death/nonfatal MI (hazard ratio 2.9; p <0.01). Patients in the highest HRR tertile, irrespective of the Adult Treatment Panel III category, were at low risk. 
When added to the Adult Treatment Panel III categories, the HRR resulted in net reclassification improvement in mortality of 18% and cardiac death/nonfatal MI of 22%. In conclusion, a blunted HRR to vasodilator stress was independently associated with an increased risk of cardiac events and overall mortality in patients with normal vasodilator MPI findings. The HRR correctly reclassified a substantial proportion of these patients in addition to the traditional risk classification models and identified patients with normal vasodilator MPI findings, who had a truly low risk of events. abstract_id: PUBMED:15472640 Comparison of risk stratification with pharmacologic and exercise stress myocardial perfusion imaging: a meta-analysis. Background: Although pharmacologic stress myocardial perfusion imaging (MPI) and exercise stress MPI have comparable diagnostic accuracy, their comparative value for risk stratification of patients with known or suspected coronary disease is not known. Methods And Results: The data of 14,918 patients were combined from 24 studies evaluating prognosis in patients undergoing either pharmacologic stress or exercise stress MPI. Studies were included if a 2 x 2 table for hard cardiac events (cardiac death and myocardial infarction [MI]) could be constructed from the data available. Excluded were studies performed for post-MI, post-revascularization, or preoperative risk stratification. A weighted t test was used to compare the cardiac events, and a random effects model was used to calculate summary odds ratios. Summary odds ratios for hard cardiac events were similar for pharmacologic stress and exercise stress MPI. Summary receiver operating characteristic curves also showed no difference in discriminatory power between the stressors. The cardiac event rates were significantly higher with normal and abnormal test results with pharmacologic stress MPI than with exercise stress MPI (1.78% vs 0.65% [P < .001] for normal results and 9.98% vs 4.3% [P < .001] for abnormal results). Subgroup analysis revealed that both cardiac death and nonfatal MI were significantly higher with pharmacologic stress MPI. Patients undergoing pharmacologic stress MPI had a significantly higher prevalence of poor prognostic factors, and meta-regression revealed that exercise capacity was the single most important predictor of cardiac events. Conclusions: This meta-analysis shows that exercise stress MPI and pharmacologic stress MPI are comparable in their ability to risk-stratify patients. However, patients undergoing pharmacologic stress studies are at a higher risk for subsequent cardiac events. This is true even for those with normal perfusion imaging results. abstract_id: PUBMED:10510953 Diagnostic and prognostic value of vasodilator stress echocardiography in asymptomatic Type 2 diabetic patients with positive exercise thallium scintigraphy: a pilot study. Aim: To assess the diagnostic and prognostic value of vasodilator stress echocardiography in Type 2 diabetic patients with positive exercise perfusion scintigraphy. Methods: Of an initial cohort of 50 asymptomatic Type 2 diabetic patients undergoing exercise single photon emission computed tomography (SPECT) thallium scintigraphy, 24 had a positive thallium scan, with a reversible perfusion defect. All these 24 underwent high dose (up to 0.84 mg/kg in 10 min) dipyridamole echocardiography and coronary angiography independently of stress echocardiography results. All patients were then followed for 61+/-22 months. 
Results: Coronary angiography showed normal coronary arteries in 11 patients and significant (> 50% visually assessed diameter reduction in a major vessel) coronary artery disease in 13. Stress echocardiography showed 92% sensitivity and 100% specificity for non-invasive detection of coronary artery disease. During follow-up, five patients experienced cardiac events: heart failure in one, angina with subsequent revascularization in two, and myocardial infarction in two. Event-free survival was 100% in the 12 patients with negative and 58% in the 12 patients with positive stress echocardiography (P = 0.08 by Mantel-Cox test). Conclusions: In asymptomatic Type 2 diabetic patients with stress-induced perfusion defects, vasodilator stress echocardiography is an excellent diagnostic and prognostic tool proven with long-term follow-up. abstract_id: PUBMED:21947978 Electrocardiographic changes during vasodilator SPECT myocardial perfusion imaging: does it affect diagnosis or prognosis? Background: Significance of electrocardiographic (ECG) changes during vasodilator stress myocardial perfusion imaging (MPI) is controversial. We examined the diagnostic and prognostic significance of ECG changes during vasodilator single photon emission computerized tomography (SPECT) MPI. Methods: We studied consecutive patients who underwent vasodilator SPECT MPI from 1995 to 2009. Patients with baseline ECG abnormalities, previous history of coronary artery bypass graft surgery or myocardial infarction (MI) were excluded. Significant coronary artery disease (CAD) was defined as >70% stenosis of any vessel or ≥50% stenosis of left main. Mean follow-up was 2.4 ± 1.5 years for cardiac events (cardiac death and non-fatal MI). Results: Of patients in the diagnostic cohort, ST depression was associated with an increased incidence of CAD in those with abnormal perfusion (P = .020 and P < .001) but not in those with normal perfusion (P = .342). Of 3,566 patients with follow-up in the prognostic cohort, including 130 (5.0%) with ST depression and normal perfusion, the presence of ST depression ≥1 mm did not affect the outcomes in any summed stress score category. Conclusions: ST depression ≥1 mm during vasodilator SPECT MPI is associated with CAD in patients with abnormal perfusion, but provides no additional risk stratification beyond concomitant perfusion imaging, including those with normal studies. abstract_id: PUBMED:28475259 Regadenoson versus Dipyridamole: A Comparison of the Frequency of Adverse Events in Patients Undergoing Myocardial Perfusion Imaging. Study Objective: To compare the frequency of adverse events in patients undergoing myocardial perfusion imaging (MPI) with either regadenoson or dipyridamole. Design: Single-center, retrospective cohort study. Setting: Large community teaching hospital. Patients: A total of 568 adults who underwent single-photon emission tomography MPI with either regadenoson (284 patients) or dipyridamole (284 patients) as a vasodilator agent, following an institution conversion from regadenoson to dipyridamole in the MPI protocol on July 15, 2013, for cost-saving purposes. Measurements And Main Results: Data were collected from the patients' electronic medical records. The primary endpoint was the composite occurrence of any documented adverse event in each group.
Secondary endpoints were individual components of the primary endpoint, reason for termination of the MPI examination (protocol completion or premature end due to an adverse event), use of an interventional agent to treat an adverse event, and cost-related outcomes. A higher proportion of patients in the regadenoson group experienced an adverse event than those who received dipyridamole (84.9% vs 56.7%, p<0.0001). None of the patients in either group required early MPI study termination due to an adverse event. No significant differences were noted between groups regarding use of aminophylline or other interventions to treat adverse events. The overall drug cost savings in the postconversion dipyridamole group was $51,526. Conclusion: Dipyridamole was associated with fewer adverse events than regadenoson in patients undergoing MPI. Dipyridamole offers a safe and cost-effective alternative to regadenoson for cardiac imaging studies. abstract_id: PUBMED:22160197 ST-segment depression during vasodilator stress is of minor clinical importance in women with normal myocardial perfusion imaging and low or intermediate risk of coronary artery disease. Purpose: The prognostic value of electrocardiographic (ECG) ST-segment depression during vasodilator stress testing in patients with normal myocardial perfusion scintigraphy (MPS) is based on retrospective studies with controversial results. Moreover, the true incidence of obstructive coronary artery disease (CAD) in these patients is unknown. Methods: During a 33-month period, all consecutive patients referred for MPS were prospectively evaluated for interpretable ST-segment depression ≥ 1 mm during vasodilator stress testing. Of 1,687 patients with normal MPS and without known CAD, 109 (100 women) aged 65.2 ± 9.2 years demonstrated ST-segment shifts and formed the ECG-positive group. The pretest probability of CAD was low in 56%, intermediate in 39% and high in 5%. They were advised to proceed to coronary angiography. An equal number of patients with normal MPS and no ECG abnormalities matched for CAD predisposing factors comprised the control group. Follow-up of both groups was accomplished through a telephone interview. Results: Fifty-two patients from the ECG-positive group (48%) consented to coronary angiography. Six (11.5%) had obstructive CAD (≥50% lumen stenosis), one with left main artery disease, while three required revascularization. Follow-up was complete in 99 ECG-positive patients for a period of 20.6 ± 8.9 months. One hard event (non-fatal myocardial infarction) and one soft event (revascularization) were observed. No event was recorded in the control group. Conclusion: In patients with low-intermediate risk of CAD, "ischaemic" ECG changes during vasodilator stress combined with normal MPS are encountered mostly in women and are associated with a low prevalence of significant CAD and a low cardiac event rate. Answer: Yes, the risk for major adverse cardiac events (MACE) in patients undergoing vasodilator stress with adjunctive exercise does differ from patients undergoing either standard exercise or vasodilator stress with myocardial perfusion imaging (MPI). According to the study with PUBMED:25124828, patients who underwent vasodilator with adjunctive exercise stress MPI had clinical characteristics and cardiac risk that differed significantly from those who underwent standard exercise or vasodilator stress MPI.
Specifically, the cardiac event-free survival of patients in the vasodilator plus exercise stress group was significantly higher than those in the vasodilator-only group but lower than those in the exercise-only group. Additionally, the annualized cardiac event rates of patients in the vasodilator plus exercise stress group were significantly lower than those in the vasodilator-only group for all three categories of the summed stress score (SSS). After multivariable adjustment, vasodilator plus exercise and vasodilator-only stress emerged as independent predictors of cardiac death, while vasodilator-only stress emerged as an independent predictor of cardiac death or nonfatal myocardial infarction. This indicates that patients undergoing vasodilator plus exercise stress MPI are placed in a lower risk category compared to vasodilator stress alone.
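The group comparisons summarized above rest on simple person-time arithmetic: an annualized cardiac event rate is the number of events divided by the person-years of follow-up, and two groups are compared with a rate ratio. The minimal Python sketch below illustrates that calculation; the counts and follow-up times are invented for illustration and are not taken from PUBMED:25124828, whose published analysis additionally adjusts for covariates in multivariable (e.g., Cox) models.

```python
import math

def annualized_rate(events: int, person_years: float) -> float:
    """Cardiac events per person-year of follow-up."""
    return events / person_years

def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Rate ratio (group A vs group B) with an approximate 95% CI
    using the normal approximation on the log scale."""
    rr = (events_a / py_a) / (events_b / py_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, (lo, hi)

# Hypothetical example: vasodilator-plus-exercise vs vasodilator-only
events_ex, py_ex = 12, 1500.0    # 12 events over 1,500 person-years
events_vd, py_vd = 45, 1400.0    # 45 events over 1,400 person-years

print(f"Annualized rate (plus exercise): {annualized_rate(events_ex, py_ex):.3%}")
print(f"Annualized rate (vasodilator only): {annualized_rate(events_vd, py_vd):.3%}")
rr, (lo, hi) = rate_ratio_ci(events_ex, py_ex, events_vd, py_vd)
print(f"Rate ratio: {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A rate ratio well below 1 with a confidence interval excluding 1 would correspond to the lower-risk category described for the vasodilator-plus-exercise group.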
Instruction: Do u smoke after txt? Abstracts: abstract_id: PUBMED:12355948 Patient with bone metastasis of breast cancer who were improved by weekly DXR/TXT therapy The patient was a 57-year-old female with the main complaint of left chest pain and gait disturbance. Multiple bone metastases were observed in the vertebral and pelvic bone. Pathological pressure fracture, stegnosis of the spinal canal and edematous changes in the spinal nerves, especially in the 4th and 5th thoracic vertebrae, were observed. The clinical pathological status was T4cN1M1b, OSS Stage IV. After 4 cycles of weekly doxorubicin (DXR) (20 mg/body)/docetaxel (TXT) (40 mg/body) therapy (day 1, day 8, day 15, 1-week recovery period) were given to the patient, CA15-3 had decreased from 1,200 U/ml to 28 U/ml. The histopathological effect was judged to be Grade 2. Thereafter, ICTP was increased to 12.7 ng/ml and after 2 cycles of the weekly DXR/TXT therapy (day 1, day 8, 2-week recovery period) had been added, the patient was able to walk. MRI of the thoracic vertebrae showed that the foci of bone metastasis had improved 9 months after the treatment with chemotherapeutics, and good PS was maintained. No serious adverse reactions were observed with the weekly DXR/TXT therapy. Therefore, weekly DXR/TXT therapy is considered to be effective in treating patients with bone metastasis. abstract_id: PUBMED:10356710 Early alterations of lung injury following acute smoke exposure and 21-aminosteroid treatment. In a simulated fire-related smoke exposure protocol, New Zealand white rabbits were utilized to investigate the potential effects of the 21-aminosteroid (lazaroid) analog U75412E on the early events of acute lung injury. Inhalation of a total of 1.6 mg/kg U75412E aerosolized at a rate of 1.53 mg/min at 0.5 hr after smoke exposure significantly attenuated the extent of lung injury at 1 hr, as evidenced by decreased bronchoalveolar lavage (BAL) concentration of total protein, 6-keto-prostaglandin F1-alpha, and blood gas defect. Histopathologic examination demonstrated that the lazaroid significantly attenuated smoke-induced lung injury as evidenced by a decrease in wet lung/body weight ratio, necrosis, and sloughing of airway epithelial cells. Electron microscopy showed that the lazaroid decreased smoke-induced interstitial edema and the vacuolization of alveolar type II epithelium (21.6 +/- 9.7 vs 8.5 +/- 3.6 vacuoled blebs/cell, smoke only vs smoke + lazaroid). However, U75412E did not attenuate smoke-induced changes in BAL concentration of tumor necrosis factor-alpha, total cell count, and granulocyte percentage. These observations suggest that U75412E may exert its action through cooperative mechanisms, such as the modulation of arachidonic acid metabolism, in addition to its characterized antioxidative effects. abstract_id: PUBMED:10487357 Functional alterations of alveolar macrophages subjected to smoke exposure and antioxidant lazaroids. Acute inhalation of diesel fuel-polycarbonate plastic (DFPP) smoke causes severe lung injury, leading to acute respiratory distress syndrome (ARDS) and death. It has been reported that the initiation of acute lung injury is associated with the activation of pulmonary alveolar macrophages (PAM). To further explore the pathogenesis, alveolar macrophages (AM) of New Zealand rabbits ventilated and exposed to a 60 tidal volume of DFPP smoke in vivo were recovered at 1 h post-smoke. 
Smoke exposure induced significant increases in both mRNA and protein levels for PAM tumor necrosis factor-alpha (TNF-alpha), when compared to smoke control. Smoke also induced a biphasic response (inhibited at 2 h, enhanced at 24 h after cell isolation) in the production of superoxide (O2-) by PAM. However, aerosolized lazaroid, U75412E (1.6 mg/kg body weight), significantly attenuated smoke-induced expression in AM TNF-alpha at the protein level but not at the mRNA level, and smoke-induced changes in AM production of O2-. This study suggests that highly expressing AM TNF-alpha following smoke may be a key contributor to the cascade that establishes an acute injury process and exacerbates oxidant-derived cell injury. Whereas, the lazaroid may ameliorate smoke-induced lung injury by attenuating AM TNF-alpha release, in addition to its primary antioxidative mechanism. abstract_id: PUBMED:36426142 Semi-supervised wildfire smoke detection based on smoke-aware consistency. The semi-transparency property of smoke integrates it highly with the background contextual information in the image, which results in great visual differences in different areas. In addition, the limited annotation of smoke images from real forest scenarios brings more challenges for model training. In this paper, we design a semi-supervised learning strategy, named smoke-aware consistency (SAC), to maintain pixel and context perceptual consistency in different backgrounds. Furthermore, we propose a smoke detection strategy with triple classification assistance for smoke and smoke-like object discrimination. Finally, we simplified the LFNet fire-smoke detection network to LFNet-v2, due to the proposed SAC and triple classification assistance that can perform the functions of some specific module. The extensive experiments validate that the proposed method significantly outperforms state-of-the-art object detection algorithms on wildfire smoke datasets and achieves satisfactory performance under challenging weather conditions. abstract_id: PUBMED:32683618 Smoke-free workplaces are associated with smoke-free homes in India: evidence for action. India enacted a smoke-free law in 2003. It is believed that smoke-free workplaces will lead to more smoking in private places such as homes. The national Global Adult Tobacco Survey (GATS 2) India 2016-2017 collects information on the self-reported prevalence of SHS exposure in homes and workplaces. The present study utilised the GATS 2 dataset to establish the association between working in a smoke-free workplace and living in a smoke-free home. 70.1% of respondents who worked indoors reported smoke-free workplace; 64.5% of respondents reported that they live in a smoke-free home. Respondents who reported that their workplace was smoke-free were significantly more likely to live in smoke-free homes compared with those who are exposed to SHS at the workplace (69.5% vs 45.5%, AOR = 1.8, 95% CI 1.5-2.1). Males, urban residents, family with fewer members, non-smokers and non-smokeless tobacco users were significantly more likely to live in a smoke-free home. Significant differences were also observed with respect to religion, caste, region and education levels. Our results provide conclusive evidence to support that smoke-free workplaces influence smoke-free homes in India. Thus, it highlights the importance of accelerating the implementation of existing national tobacco control legislation on smoke-free public places. 
abstract_id: PUBMED:6170142 Vitamin U and RNA metabolism in prokaryotes The paper is concerned with a study of the vitamin U effect on the rate of 14C-uridine incorporation into various categories of RNA in E. coli MRE-600 cells. It is found that cells grown with vitamin U (0.06 mg/ml) and incubated with 14C-uridine for 5 min are able to produce a 10-12-fold increase of the label incorporation into 4 S and 5 S RNA and a 14-fold increase into high polymeric RNA in comparison with the control cells. Under longer intervals of incubation (20 min) the intensity of high-polymeric RNA formation was half as high as for 4 S and 5 S RNA formation. MAK column chromatography of high-polymeric RNA in salt and temperature gradients showed the presence of the RNA temperature fraction in bacteria cells. Vitamin U stimulates the formation of various categories of RNA and causes a quantitative increase in the RNA temperature fraction. abstract_id: PUBMED:29851040 Surgical Smoke: Putting the Pieces Together to Become Smoke-Free: 1.6 www.aornjournal.org/content/cme. Surgical smoke is generated when tissue vaporizes from the use of various energy-generating devices (eg, electrosurgery units, lasers). Surgical smoke contains toxic gases and vapors that can be hazardous to patients and perioperative staff members. In addition to ocular and upper respiratory tract irritation, surgical smoke can have mutagenic potential. Although many regulatory agencies agree that surgical smoke is dangerous, they have been slow to commit to a firm standard to address the inhalation hazards related to surgical smoke. With the clear evidence supporting the need to filter surgical smoke, perioperative nurses and surgical technologists at our facility implemented a plan to protect patients and staff members. The plan included identifying and evaluating in-house equipment and the perceived barriers to becoming smoke-free. Perioperative personnel tested multiple smoke-evacuating products to determine which best met the needs of our staff members and facility, and our ORs are now smoke-free. abstract_id: PUBMED:21785363 Structure fires, smoke production, and smoke alarms. Smoke inhalation injury causes severe morbidity and death. In the United States, the majority of fatalities from fire and burns occur because of inhalation of smoke. Medical treatment is only supportive; there is no known antidote to the damaging effects of smoke toxicants on pulmonary tissue. Without question, minimization of the morbidity and mortality that are caused by smoke inhalation is best accomplished by prevention of the injury. Effective prevention programs depend on a thorough and detailed understanding of the mechanism of damage caused by smoke, as well as of the available options for efficacious prevention. This summary presents details of smoke production from structure fires, the effects of smoke on physiology, and the devices currently in use to prevent damage and death from smoke. abstract_id: PUBMED:37642082 Promoting Smoke-Free Environments: The Impact of Thirdhand Smoke Awareness on Smoking Bans at Home. Objective: A creating smoke-free home is a way to protect a vulnerable group from being exposed to secondhand smoke in the home, such as children, infants, and non-smokers. Studies reported an intervention for promoting a smoke-free home by using secondhand smoke messages and smoking cessation messages. However, the thirdhand smoke (THS) message has rarely been found. 
Therefore, this study aimed to explore the prevalence and correlations of smoking bans in homes. Methods: Secondary data from a community-based cross-sectional design survey was used for this study. Smoking ban status was defined as a smoking ban or no smoking ban. We used multiple logistic regression to test the association between factors and smoking ban status. An adjusted odd ratio and 95% confidence interval were reported. Results: Of the 882 participants included in this analysis, 38.66% (95%CI: 35.43, 41.97) had a smoking ban at home. A multiple logistic regression analysis showed that participants who believe that secondhand or THS harms children had a greater correlation with developing a smoking ban in the home compared with those who did not believe (odd ratio: 3.94, 95% confidence interval: 2.35, 6.60 and odd ratio:4.22, 95% confidence interval:2.6, 6.86, respectively). Conclusion: This study found that the belief that thirdhand smoke and secondhand smoke exposure harm children. The smoke-free home message's relevance to the harms of thirdhand smoke exposure should be adapted to be promoted, especially in homes. abstract_id: PUBMED:35906369 Research on smoke control for an underground mall fire, based on smoke barrier and mechanical smoke exhaust system. This study examines smoke spread in an underground mall fire under the composite smoke control mode of a smoke barrier and a mechanical smoke exhaust system. The objective is to optimize the selection of smoke containment and exhaust methods in an underground mall in Fuxin City, China. A Fire Dynamics Simulator was used for numerical simulation to investigate the effects of the sagging height and spacing of smoke barriers on smoke containment, as well as the effects of size, number, and arrangement of smoke vents in the mechanical smoke exhaust system on the effectiveness of smoke exhaust. The results indicated that a smoke barrier with a sagging height of 1 m and a spacing of 5 m was effective in preventing the spread of high-temperature smoke. When the sagging height of the smoke barrier increased to 1.2 m, the smoke barrier effect was comparable to that of a 1 m height barrier. Regarding the mechanical smoke exhaust system, the size of the opening area of the smoke vent had no significant effect on the smoke exhaust effect. The best smoke exhaust effect was achieved when the number of smoke vents was 12. Additionally, the double-row setting of smoke vents was more efficient than the single-row setting. Combining a smoke barrier and a mechanical smoke exhaust system can provide a more effective smoke control compared to either system alone. Comprehensively, considering the effectiveness and economy of smoke containment and exhaust, the optimal combination of smoke containment and exhaust was determined to be a smoke barrier with a sagging height of 1 m and spacing of 5 m, and a mechanical smoke exhaust system with 12 smoke vents in a double-row arrangement. Answer: No, the term "TXT" in the provided abstracts does not refer to smoking or tobacco use. In the context of PUBMED:12355948, "TXT" stands for docetaxel, which is a chemotherapy medication used to treat cancer. The abstract discusses a case where a patient with bone metastasis from breast cancer showed improvement after receiving weekly therapy with doxorubicin (DXR) and docetaxel (TXT). There is no mention of smoking in relation to "TXT" within the abstracts provided.
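Although the answer to this item is essentially terminological, one quantitative detail in the abstracts above is worth unpacking: the adjusted odds ratios in the smoking-ban study (PUBMED:37642082) come from multiple logistic regression, but the same kind of quantity can be computed by hand from a 2 × 2 table. The sketch below shows a crude odds ratio with a Wald-type 95% confidence interval; the counts are hypothetical rather than the study's data, so the result is illustrative only.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Wald 95% CI.

    a: exposed with outcome      (e.g., believe THS harms children and have a home smoking ban)
    b: exposed without outcome
    c: unexposed with outcome
    d: unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts for illustration only
or_, (lo, hi) = odds_ratio_ci(a=150, b=200, c=60, d=320)
print(f"Crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An adjusted odds ratio from logistic regression will generally differ from this crude value because it controls for the other covariates in the model.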
Instruction: Sonographic measurement of the lower uterine segment thickness: is it truly predictive of uterine rupture? Abstracts: abstract_id: PUBMED:18254997 Sonographic measurement of the lower uterine segment thickness: is it truly predictive of uterine rupture? Background: Sonographic examination of the lower uterine segment (LUS) has been used to diagnose a uterine defect and to determine the degree of LUS thinning in women with previous Caesarean section. Previous studies have demonstrated that the LUS thickness measured sonographically has a high negative predictive value for uterine rupture, suggesting that a normal LUS thickness predicts a safe trial of vaginal birth after previous Caesarean section (VBAC). However, the clinical application of LUS measurement in the management of VBAC remains controversial. Because uterine rupture is rare and the number of women willing to attempt VBAC is declining, it would be difficult to recruit sufficient patients for an adequate sample size when designing studies to evaluate LUS measurement in predicting uterine rupture. Case: A healthy 34-year-old, gravida 7, para 5, had a lower segment transverse Caesarean section for her fifth delivery. She underwent a trial of VBAC for her subsequent pregnancy. Despite a normal sonographic LUS evaluation at 37 weeks' gestation, she had uterine rupture during labour. Conclusion: Clinical experience with the use of LUS measurement in predicting uterine rupture and managing VBAC is limited. Having a national registry to record data and review all cases of uterine rupture would accelerate the accumulation of experience on this subject. abstract_id: PUBMED:30779345 Sonographic lower uterine segment thickness after prior cesarean section to predict uterine rupture: A systematic review and meta-analysis. Introduction: Cesarean section rates are increasing with a decrease in the rate of trial of labor after cesarean section. The objective of this study was to systematically review the predictive characteristics of sonographic measurement of lower uterine segment thickness for uterine rupture during labor. Material And Methods: The review was carried out in agreement with PRISMA and SEDATE guidelines. MEDLINE, EMBASE, ClinicalTrials.gov and Cochrane Library were searched from 1990 until November 2018. Quality of included studies was assessed using the QUADAS-2 tool. Data were extracted to construct 2 × 2 tables from each study comparing ultrasound measurement with uterine defect at time of delivery. The data were plotted as a summary receiver-operating characteristic (SROC) curve using the hierarchical SROC model. Results: Twenty-eight observational cohort studies met the selection criteria for inclusion. Sonographic lower uterine segment thickness was measured at a gestational age of 36-40 weeks in women with a previous cesarean section. The risk of bias and concerns regarding applicability were low among most studies. The sonographic measurement was correlated with either delivery outcome or lower uterine segment thickness at the time of repeat cesarean section. The cut-off value for lower uterine segment thickness ranged from 1.5 to 4.05 mm across all studies. An association between thin lower uterine segment measurement and uterine dehiscence and uterine rupture was shown in 27 and four studies, respectively. Nineteen studies were included in a meta-analysis with a subgroup analysis by ultrasound methodology. 
In the subgroup using the ultrasound methodology associated with uterine rupture, the cut-off value is more precise (2.0-3.65 mm) among these 12 studies. There were 18 cases (1.0%) of uterine rupture, 120 (6.6%) of uterine dehiscence and 1674 (92.4%) women with no uterine defect. The SROC curve showed a sensitivity of 0.88 (95% CI 0.83-0.92) and specificity of 0.77 (95% CI 0.70-0.83). The negative likelihood ratio was 0.11 (95% CI 0.08-0.16) and the diagnostic odds ratio was 34.0 (95% CI 18.2-63.5). Conclusions: Lower uterine segment thickness >3.65 mm, measured using a standardized ultrasound technique, is associated with a lower likelihood of uterine rupture. abstract_id: PUBMED:36536838 Prediction of uterine dehiscence via machine learning by using lower uterine segment thickness and clinical features. Background: With the global increase of cesarean delivery rates, the long-term effects of cesarean delivery have started to become clearer. One of the most prominent complications of cesarean delivery in recurrent pregnancies is uterine rupture. Assessing the risk of uterine rupture by accurately predicting dehiscence is very important to prevent untimely operations and/or maternal and fetal complications. Objective: This study aimed to assess whether machine learning can be used to predict uterine dehiscence by using patients' ultrasonographic findings, clinical findings, and demographic data as features. Hence, we investigated a potential method for preventing uterine rupture and its maternal and fetal complications. Study Design: The study was conducted on 317 patients with term (>37 weeks' gestation) singleton pregnancies and no medical complications or medications that may affect uterine wound healing. Demographics, body mass indices, smoking and drinking habits, clinical features, past pregnancies, number and history of abortions, interdelivery period, gestational week, number of previous cesarean deliveries, fetal presentation, fetal weight, tocography data, transabdominal ultrasonographic measurement of lower uterine segment full thickness and myometrium thickness, and lower uterine segment findings during cesarean delivery were collected and analyzed using machine learning techniques. Logistic regression, multilayer perceptron, support vector machine, random forest, and naive Bayes algorithms were used for classification. The dataset was evaluated using 10-fold cross-validation. Correct classification rate, F-score, Matthews correlation coefficient, precision-recall curve area, and receiver operating characteristic area were used as performance metrics. Results: Among the machine learning techniques tested in this study, the naive Bayes algorithm showed the best predictive performance. Among the various combinations of features used for prediction, the essential features of parity, gravidity, tocographic contraction, cervical dilation, dilation and curettage, and sonographic thickness of lower uterine segment myometrium yielded the best results. The second-best performance was achieved with sonographic full thickness of lower uterine segment added to the base features. The base features alone could classify patients with 90.5% accuracy, whereas adding the myometrium measurement increased the classification performance by 5.1% to 95.6%. Adding the full thickness measurement to the base features raised the classification performance by 4.8% to 95.3% in terms of correct classification rate. 
Conclusion: The naive Bayes algorithm can correctly classify uterine dehiscence with a correct classification rate of 0.953, an F-score of 0.952, and a Matthews correlation coefficient value of 0.641. This result can be interpreted as indicating that by using clinical features and lower uterine segment ultrasonography findings, machine learning can be used to accurately predict uterine dehiscence. abstract_id: PUBMED:23576473 Sonographic measurement of lower uterine segment thickness to predict uterine rupture during a trial of labor in women with previous Cesarean section: a meta-analysis. Objective: To evaluate the accuracy of antenatal sonographic measurement of lower uterine segment (LUS) thickness in the prediction of risk of uterine rupture during a trial of labor (TOL) in women with a previous Cesarean section (CS). Methods: PubMed and EMBASE were searched to identify articles published on the subject of sonographic LUS measurement and occurrence of a uterine defect after delivery. Four independent researchers performed identification of papers and data extraction. Selected studies were scored on methodological quality, and sensitivity and specificity of measurement of LUS thickness in the prediction of a uterine defect were calculated. We performed bivariate meta-analysis to estimate summary receiver-operating characteristics (sROC) curves. Results: We included 21 studies with a total of 2776 analyzed patients. The quality of included studies was good, although comparison was difficult because of heterogeneity. The estimated sROC curves showed that measurement of LUS thickness seems promising in the prediction of occurrence of uterine defects (dehiscence and rupture) in the uterine wall. The pooled sensitivity and specificity of myometrial LUS thickness for cut-offs between 0.6 and 2.0 mm was 0.76 (95% CI, 0.60-0.87) and 0.92 (95% CI, 0.82-0.97); cut-offs between 2.1 and 4.0 mm reached a sensitivity and specificity of 0.94 (95% CI, 0.81-0.98) and 0.64 (95% CI, 0.26-0.90). The pooled sensitivity and specificity of full LUS thickness for cut-offs between 2.0 and 3.0 mm was 0.61 (95% CI, 0.42-0.77) and 0.91 (95% CI, 0.80-0.96); cut-offs between 3.1 and 5.1 mm reached a sensitivity and specificity of 0.96 (95% CI, 0.89-0.98) and 0.63 (95% CI, 0.30-0.87). Conclusions: This meta-analysis provides support for the use of antenatal LUS measurements in the prediction of a uterine defect during TOL. Clinical applicability should be assessed in prospective observational studies using a standardized method of measurement. abstract_id: PUBMED:9417464 Echographic measurement of the inferior uterine segment for assessing the risk of uterine rupture Background: Ultrasonography has been used to examine the scarred uterus in women who have had previous cesarean sections in an attempt to assess the risk of rupture of the scar during subsequent labor. The predictive value of such measurements has not been adequately assessed, however. We aimed to evaluate the usefulness of sonographic measurement of the lower uterine segment before labor in predicting the risk of intrapartum uterine rupture. Methods: In this prospective observational study, the obstetricians were not told the ultrasonographic findings and did not use them to make decisions about type of delivery. Eligible patients were those with previous cesarean sections booked for delivery at our hospital. 
642 patients underwent ultrasound examination at 36-38 weeks' gestation, and were allocated to four groups according to the thickness of the lower uterine segment. Ultrasonographic findings were compared with those of physical examination at delivery. Findings: The overall frequency of defective scars was 4.0% (15 uterine ruptures and 10 dehiscences). The frequency of defects rose as the thickness of the lower uterine segment decreased: there were no defects among 278 women with measurements greater than 4.5 mm, three (2%) among 177 patients with values of 3.6-4.5 mm, 14 (10%) among 136 patients with values of 2.6-3.5 mm, and eight (16%) among 51 women with values of 1.6-2.5 mm. With a cut-off value of 3.5 mm, the sensitivity of ultrasonographic measurement was 88.0%, the specificity 73.2%, positive predictive value 11.8%, and negative predictive value 99.3%. Interpretation: Our results show that the risk of a defective scar is directly correlated to the degree of thinning of the lower uterine segment at around 37 weeks of pregnancy. The high negative predictive value of the method may encourage obstetricians in hospitals where routine repeat elective cesarean is the standard procedure to offer a trial of labor to patients with a thickness value of 3.5 mm or greater. abstract_id: PUBMED:16100622 Sonographic measurement of the lower uterine segment thickness in women with previous caesarean section. Objectives: To evaluate the accuracy of prenatal sonography in determining the lower uterine segment (LUS) thickness in women with previous Caesarean section and to assess the usefulness of measuring LUS thickness in predicting the risk of uterine rupture during a trial of vaginal birth. Methods: Sonographic examination was performed in 102 pregnant women with one or more previous Caesarean sections at between 36 and 38 weeks' gestation to assess the LUS thickness, which was defined as the shortest distance between the urinary bladder wall-myometrium interface and the myometrium/chorioamniotic membrane-amniotic fluid interface. Of the 102 women examined, 91 (89.2%) had transabdominal sonography only, and 11 (10.8%) had both transabdominal and transvaginal examinations. The sonographic measurements were correlated with the delivery outcome and the intraoperative LUS appearance. Results: The mean sonographic LUS thickness was 1.8 mm, standard deviation (SD) 1.1 mm. An intraoperatively diagnosed paper-thin or dehisced LUS, when compared with an LUS of normal thickness, had a significantly smaller sonographic LUS measurement (0.9 mm, SD 0.5 mm, vs. 2.0 mm, SD 0.8 mm, respectively; P < 0.0001). Two women had uterine dehiscence, both of whom had prenatal LUS thickness of < 1 mm. Thirty-two women (31.4%) had a successful vaginal delivery, with a mean LUS thickness of 1.9 mm, SD 1.5 mm; none had clinical uterine rupture. A sonographic LUS thickness of 1.5 mm had a sensitivity of 88.9%, a specificity of 59.5%, a positive predictive value of 32.0%, and a negative predictive value of 96.2% in predicting a paper-thin or dehisced LUS. Conclusions: Sonography permits accurate assessment of the LUS thickness in women with previous Caesarean section and therefore can potentially be used to predict the risk of uterine rupture during trial of vaginal birth. abstract_id: PUBMED:8569360 Ultrasonographic measurement of lower uterine segment to assess risk of defects of scarred uterus. 
Background: Ultrasonography has been used to examine the scarred uterus in women who have had previous caesarean sections in an attempt to assess the risk of rupture of the scar during subsequent labour. The predictive value of such measurements has not been adequately assessed, however. We aimed to evaluate the usefulness of sonographic measurement of the lower uterine segment before labour in predicting the risk of intrapartum uterine rupture. Methods: In this prospective observational study, the obstetricians were not told the ultrasonographic findings and did not use them to make decisions about type of delivery. Eligible patients were those with previous caesarean sections booked for delivery at our hospital. 642 patients underwent ultrasound examination at 36-38 weeks' gestation, and were allocated to four groups according to the thickness of the lower uterine segment. Ultrasonographic findings were compared with those of physical examination at delivery. Findings: The overall frequency of defective scars was 4.0% (15 ruptures, 10 dehiscences). The frequency of defects rose as the thickness of the lower uterine segment decreased: there were no defects among 278 women with measurements greater than 4.5 mm, three (2%) among 177 women with values of 3.6-4.5 mm, 14 (10%) among 136 women with values of 2.6-3.5 mm, and eight (16%) among 51 women with values of 1.6-2.5 mm. With a cut-off value of 3.5 mm, the sensitivity of ultrasonographic measurement was 88.0%, the specificity 73.2%, positive predictive value 11.8%, and negative predictive value 99.3%. Interpretation: Our results show that the risk of a defective scar is directly related to the degree of thinning of the lower uterine segment at around 37 weeks of pregnancy. The high negative predictive value of the method may encourage obstetricians in hospitals where routine repeat elective caesarean is the norm to offer a trial of labour to patients with a thickness value of 3.5 mm or greater. abstract_id: PUBMED:16526096 Inter- and intraobserver variability in sonographic measurement of the lower uterine segment after a previous Cesarean section. Objective: To evaluate the reproducibility of sonographic measurement of the lower uterine segment in pregnant women at term. Methods: Two independent observers performed transabdominal sonography on 129 women between 36 and 38 weeks of gestation who had had a previous Cesarean section. Sonography was performed when the patients had a full and a half-full bladder; in 100 patients, the measurements were also performed transvaginally, with the patients having an empty bladder. Agreement was quantified by the intraclass correlation coefficient and, using a cut-off of 3.5 mm, by the kappa coefficient. Results: The intraobserver agreement was generally high (intraclass correlation coefficient > 0.90). The interobserver agreement was higher on transvaginal (intraclass correlation coefficient, 0.94) compared with transabdominal (0.70 and 0.84, with full and half-full bladder, respectively) ultrasound. The kappa coefficient was 0.75 transvaginally, compared with 0.34 and 0.54 using the transabdominal approach, with full and half-full bladder, respectively. Conclusion: The agreement between two observers for sonographic transvaginal measurement of the lower uterine segment can be considered good, compared with poor to moderate agreement using the transabdominal approach. abstract_id: PUBMED:23705088 Pitfall in ultrasound evaluation of uterine scar from prior preterm cesarean section. 
We report two cases of women with a previous cesarean performed before active labor at 29 weeks of gestation who underwent sonographic measurement of the lower uterine segment (LUS) at 36 weeks' gestation in their subsequent pregnancy. In both cases, uterine scar defect was detected on the upper part of the LUS, at ∼9 to 11 cm from the cervical os, and was only visualized by the transabdominal approach. We suggest that early gestational age and the absence of labor at previous cesarean can lead to a higher uterine scar location on the LUS and, therefore, increase the risk of uterine rupture in subsequent pregnancy. The heterogeneity of uterine scar location could explain discrepancies observed in studies using the transabdominal versus the transvaginal approach or both regarding the predictive value of LUS measurements for uterine rupture. abstract_id: PUBMED:19733288 Prediction of complete uterine rupture by sonographic evaluation of the lower uterine segment. Objective: The purpose of this study was to establish the validity of sonographic evaluation of lower uterine segment (LUS) thickness for complete uterine rupture. Study Design: A prospective cohort study of women with previous cesarean delivery was conducted. LUS thickness (full thickness and myometrial thickness only) was measured between 35 and 38 weeks gestation, and the thinnest measurement was considered to be the dependent variable. Receiver operating curve analyses and logistic regression were used. Results: Two hundred thirty-six women were included in the study. Nine uterine scar defects (3 cases of complete rupture during a trial of labor and 6 cases of dehiscence) were reported. Receiver operating curve analyses showed that full thickness of <2.3 mm was the optimal cutoff for the prediction of uterine rupture (3/33 vs 0/92; P = .02). Full thickness was also identified as an independent predictor of uterine scar defect (odds ratio, 4.66; 95% confidence interval, 1.04-20.91) Conclusion: Full LUS thickness of <2.3 mm is associated with a higher risk of complete uterine rupture. Answer: The sonographic measurement of the lower uterine segment (LUS) thickness has been investigated as a predictive tool for uterine rupture, particularly in women with a history of cesarean section considering a trial of vaginal birth after cesarean (VBAC). The evidence suggests that while sonographic LUS thickness measurement has a high negative predictive value, indicating that a normal LUS thickness predicts a lower risk of uterine rupture during VBAC, its clinical application remains controversial due to the rarity of uterine rupture and the declining number of women attempting VBAC (PUBMED:18254997). A systematic review and meta-analysis found that a sonographic LUS thickness greater than 3.65 mm, measured using a standardized ultrasound technique, is associated with a lower likelihood of uterine rupture (PUBMED:30779345). Another study using machine learning techniques found that combining clinical features with sonographic measurements of the LUS myometrium could accurately predict uterine dehiscence, potentially preventing uterine rupture and its complications (PUBMED:36536838). A separate meta-analysis supported the use of antenatal LUS measurements in predicting uterine defects during a trial of labor, although it noted that clinical applicability should be assessed in prospective observational studies using a standardized method of measurement (PUBMED:23576473). 
Furthermore, a prospective observational study found that the risk of a defective scar is directly related to the degree of thinning of the LUS around 37 weeks of pregnancy, with a high negative predictive value for a thickness value of 3.5 mm or greater (PUBMED:8569360). However, despite these findings, there are challenges in the standardization of measurements and inter- and intraobserver variability, particularly when comparing transvaginal and transabdominal ultrasound approaches (PUBMED:16526096). Additionally, the location of the uterine scar can vary, especially if the previous cesarean was performed preterm and before active labor, which could affect the predictive value of LUS measurements (PUBMED:23705088). In conclusion, while sonographic measurement of LUS thickness can be indicative of the risk of uterine rupture, it is not an absolute predictor. The variability in measurement techniques, the rarity of uterine rupture, and other clinical factors must be considered when using this tool in the management of VBAC (PUBMED:18254997). Additionally, a study that aimed to establish the validity of sonographic evaluation of LUS thickness for complete uterine rupture found that a full LUS thickness of less than 2.3 mm was associated with a higher risk of complete uterine rupture, suggesting that this measurement could be an independent predictor of uterine scar defect (PUBMED:19733288). Overall, while sonographic measurement of the LUS thickness can provide valuable information and has been associated with the risk of uterine rupture, it should not be solely relied upon for clinical decision-making. It is one of several factors that should be considered in the context of a comprehensive clinical assessment when managing pregnancies following a cesarean section.
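The combination of a low positive predictive value and a very high negative predictive value quoted above is a direct consequence of Bayes' rule at low prevalence. As a worked check, the sketch below plugs in the figures reported for the 3.5 mm cut-off (sensitivity 88.0%, specificity 73.2%, 25 defective scars among 642 women) and reproduces predictive values close to the published 11.8% and 99.3%; small rounding differences are expected because the studies computed these from raw counts.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Figures reported for the 3.5 mm cut-off (PUBMED:8569360 / PUBMED:9417464)
sens, spec = 0.880, 0.732
prev = 25 / 642          # 15 ruptures + 10 dehiscences among 642 women

ppv, npv = predictive_values(sens, spec, prev)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # about 11.7% and 99.3%
```

This is why a lower uterine segment thickness above the cut-off is reassuring (few missed defects), whereas a measurement below the cut-off is only weakly diagnostic on its own at this prevalence.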
Instruction: Is it possible to diagnose acute otitis media accurately in primary health care? Abstracts: abstract_id: PUBMED:14507792 Is it possible to diagnose acute otitis media accurately in primary health care? Background: Acute otitis media causes human suffering and enormous costs to society. Symptoms of acute otitis media overlap those of the common cold, and diagnostic methods confirming the diagnosis are used only occasionally. Uncertainty in diagnostics may lead either to overdiagnosis and unnecessary treatment or to underdiagnosis and an increase in complications. Objective: Our aim was to evaluate the inter-rater agreement in diagnosis of acute otitis media for children in primary health care. Methods: The GP on duty and the otorhinolaryngology resident at a primary health care clinic examined the same 50 children with caregiver-suspected acute otitis media. The otorhinolaryngologist photographed the tympanic membranes. Afterwards, two experienced clinicians evaluated the photographs with and without tympanograms. Diagnostic rates and diagnostic methods between clinicians were compared. Results: The otorhinolaryngologist diagnosed acute otitis media in 44% and the GP in 64%. The GP based the diagnoses on symptoms and on the colour of the tympanic membrane, whereas the otorhinolaryngologist paid more attention to the movement and position of the tympanic membrane. Conclusion: The use of a pneumatic otoscope and tympanometry reduces the number of acute otitis media diagnoses by >30%, suggesting that acute otitis media may be misdiagnosed often. Between clinicians, there was a substantial discrepancy in diagnoses of acute otitis media. abstract_id: PUBMED:8469944 Differences in the diagnose panorama in primary health care in Dalby, Sweden and Spili, Crete. We have compared the use of primary health care and the diagnoses at visits to doctors in the Spili Health Centre (SHC) in Crete and the Dalby Health Centre (DHC) in Sweden. In DHC more patients per 1000 population visited the doctors than in SHC. This was so regardless of age-group and sex, in fact more or less regardless of diagnosis. Other differences between the populations were: The diagnosis acute otitis media was more frequent in the Dalby children than in the Spili ones. The opposite was true of "head injuries" which were more frequent in the Spili boys. Visits to doctors for bronchitis was more frequent in the Spili men, maybe because of the extensive smoking habits of Cretan men. Visits for diseases of the musculoskeletal system were more frequent in DHC than in SHC. A hypothesis worth testing is that this was influenced by differences in the health insurance and sick benefit systems. Angina pectoris was fairly frequent in both areas but cardiosclerosis (including healed myocardial infarction) was more common in DHC than in SHC. Use of primary health care may be influenced by the need for health care in the population, the accessibility of the health care facilities, the costs for the patients, the quality of care as perceived by the patients and by other sociocultural factors. Comparative studies, even though fairly uncommon today, may be of use in generating hypotheses about the impact of different factors on the use of health care. abstract_id: PUBMED:3417069 The International Primary Care Network: purpose, methods, and policies. The International Primary Care Network (IPCN), a consortium of practice based primary care research networks, including the Ambulatory Sentinel Practice Network (ASPN), is described. 
The purpose, methods, and policies are presented as an example of international networking to address problems of global concern in primary care. IPCN's experience with organizing a framework for collaboration and conducting a study of otitis media in nine countries suggests that efforts of this type are possible and may lead to new insights available only through international investigation. abstract_id: PUBMED:28088980 Indicators for continuous quality improvement for otitis media in primary health care for Aboriginal and Torres Strait Islander children. Otitis media is a common, generally self-limiting childhood illness that can progress to severe disease and have lifelong sequelae, including hearing loss and developmental delays. Severe disease is disproportionately prevalent among Aboriginal and Torres Strait Islander children. Primary health care is at the frontline of appropriate prevention and treatment. Continuous quality improvement in the prevention and management of important causes of morbidity in client populations is accepted best practice in primary health care and now a requirement of Australian Government funding to services providing care for Aboriginal and Torres Strait Islander children. To date, there have been no indicators for continuous quality improvement in the prevention and management of otitis media and its sequelae in primary health care. Through an expert group consensus process, seven evidence-based indicators, potentially extractable from electronic health records, have been developed. The development process and indicators are described. abstract_id: PUBMED:32238972 Children with otitis attending emergency primary health care units following introduction of pneumococcal vaccine. Background: Otitis is a frequently occurring condition in young children and involves considerable use of antibiotics. The most common bacterial cause is pneumococci. The pneumococcal vaccine was introduced as part of the Childhood Immunisation Programme in Norway in 2006. The purpose of the study was to investigate whether this vaccination may have reduced the number of otitis cases presenting at emergency primary health care units. Material And Method: The material consists of data from all electronic reimbursement claims from emergency primary health care doctors in the period 2006-18. Annual consultation rates were calculated for children aged 0-5 years attending an emergency primary health care unit, both in total and due to otitis. Registration of vaccination coverage in this age group began in 2006. Results: The total consultation rate fell from 674 per 1000 inhabitants in 2006 to 502 in 2018 (a relative reduction of 26 %). The consultation rate for otitis fell from 44 per 1000 inhabitants to 21 (a relative reduction of 52 %). While the total rate fell steadily throughout the period, the reduction in consultation rates for otitis began to fall in 2011, when vaccination coverage in this group was over 90 %. Interpretation: The pneumococcal vaccine may have reduced the number of children presenting with otitis at an emergency primary health care unit. abstract_id: PUBMED:31510938 Diagnostic methods for acute otitis media in 1 to 12 year old children: a cross sectional study in primary health care. Background: Otoscopy alone has low sensitivity and specificity for acute otitis media (AOM). Otomicroscopy and pneumatic methods are superior to otoscopy. However, these methods require clinical skills. 
The use of different diagnostic methods for AOM differs between countries and has not been evaluated in Sweden since new guidelines were introduced in 2010. This study aimed to describe the extent of which diagnostic methods and written advice were used for AOM in children 1 to 12 years old. Methods: In this cross-sectional study all general practitioners (GPs) and specialist trainees in primary care (STs) at 27 primary health care centres in Sweden were asked to complete a self-administrated questionnaire including diagnostic approach and the management of AOM; 75% (111/148) responded to the questionnaire. Outcome Measures: GPs versus STs and their gender, the use of otoscopy, pneumatic otoscopy, otomicroscopy, tympanometry and written advice. Logistic regressions were used to evaluate the association between GPs versus STs and their gender and the use of diagnostic methods and written advice. Results: To diagnose AOM, 98% of the GPs and STs often or always used otoscopy, in addition to this 17% often or always used otomicroscopy, 18% pneumatic otoscopy and 11% tympanometry. Written advice to parents was provided often or always by 19% of the GPs and STs. The GPs used otomicroscopy more often than STs, adjusted OR 4.9 (95% CI 1.5-17; p = 0.011). For the other diagnostic methods, no differences were found. Female GPs and STs provided written advice more often than male GPs and STs, OR 5.2 (95% CI, 1.6-17; p = 0.0061), adjusted for GP versus ST. Conclusions: Otoscopy was by far the most commonly used method for the diagnosis of AOM. Female GPs and STs provided written advice more frequently than did their male colleagues. GPs used the significantly better method otomicroscopy more often than STs, therefore, it is important to emphasise teaching of practical skills in otomicroscopy in the specialist training programme for general practice. A correct diagnosis is important for avoiding potentially harmful antibiotic treatments, antimicrobial resistance and possible delay of other diagnoses. abstract_id: PUBMED:27938633 Children in Greenland: disease patterns and contacts to the health care system. Background: Previous studies of Greenlandic children's disease pattern and contacts to the health care system are sparse and have focused on the primary health care sector. Objective: We aimed to identify the disease pattern and use of health care facilities of children aged 0-10 in two Greenlandic cohorts. Methods And Design: In a retrospective, descriptive follow-up of the Ivaaq (The Greenland Child Cohort) and the CLEAR (climate changes, environmental contaminants and reproductive health) birth cohorts (total n=1,000), we reviewed medical records of children aged 6-10 in 2012 with residence in Nuuk or Ilulissat (n=332). Data on diseases and health care system contacts were extracted. Diagnoses were validated retrospectively. Primary health care contacts were reviewed for a random sample of 1:6. Results: In 311 children with valid social security number, the total number of health care system contacts was 12,471 equalling 4.6 contacts per child per year. The annual incidence rate of hospital admissions was 1:10 children (total n=266, 1,220 days, 4.6 days/admission), outpatient contacts 2:10 children and primary care 3.6 per child. Contacts were overall more frequent in boys compared with girls, 39.5 versus 34.6 during the study period, p=0.02. 
The highest annual contact rates for diseases were: hospitalisations/acute respiratory diseases 13.9:1,000; outpatient contacts/otitis media 5.1:1,000; primary care/conjunctivitis or nasopharyngitis 410:1,000 children. Outpatient screening for respiratory tuberculosis accounted 6.2:1,000, primary care non-disease (Z-diagnosis) 2,081:1,000 annually. Complete adherence to the child vaccination programme was seen in 40%, while 5% did not receive any vaccinations. Conclusions: In this first study of its kind, the health care contact pattern in Greenlandic children showed a relatively high hospitalisation rate and duration per admission, and a low primary health care contact rate. The overall contact rate and disease pattern resembled those in Denmark, except for tuberculosis screening. Adherence to the vaccination programme was low. These findings may be helpful for the organisation and dimensioning of the Greenlandic health care system for children. abstract_id: PUBMED:11467013 PHLS primary care consultation--infectious disease and primary care research and service development priorities. In 1999 for the first time the PHLS undertook a questionnaire survey of general practitioners' views of the burden of infectious disease and the priorities for research and development of infectious disease services within the PHLS. Three hundred and seventy-one (38%) of 979 questionnaires mailed to chairs of primary care groups in England, and general practitioners in research networks, were returned. Service areas: computer transfer of laboratory results was considered of greatest priority. Guidance on antibiotic usage, guidance on infectious diseases and education for general practitioners were ranked two, three and four. Burden of infectious disease in primary care: upper respiratory tract infections, tonsillitis/pharyngitis, otitis media/externa and acute cough were placed one, three, four and seven respectively. Urinary tract infections were ranked second and dyspepsia/Helicobacter pylori fifth. Leg ulcers, diarrhoea, genital chlamydia infection and vaginal discharge were other diseases considered to cause a large burden of ill-health. Genital chlamydia, tuberculosis, Helicobacter pylori and meningococci were ranked one, two, three, and five in the NHS opportunity to affect the burden of ill-health. Priorities for improvements to diagnostic tests, evidence on which to base treatment and guidance: chronic fatigue/ME was ranked top in these areas. The other top ten syndromes ranked in order were genital chlamydia infections, antibiotic resistance surveillance, vaginal discharge, leg ulcers, sinusitis, otitis media/externa, dyspepsia/Helicobacter pylori, Creutzfeld Jacob Disease, and tonsillitis. This consultation exercise has highlighted broad areas for future PHLS involvement in primary care. In order to make progress, further consultation is needed with groups of GPs, and other relevant bodies. Particularly for the areas ranked in the top ten, the type of further PHLS involvement needs to be defined. For some syndromes (chronic fatigue and leg ulcers) this may be writing guidance and for others (respiratory tract infections) more treatment trials are required. The purposes and possible methods of communicable disease surveillance in general practice should be the subject of additional consultation. abstract_id: PUBMED:34229624 Challenges to implementing electronic trial data collection in primary care: a qualitative study. 
Background: Within-consultation recruitment to primary care trials is challenging. Ensuring procedures are efficient and self-explanatory is the key to optimising recruitment. Trial recruitment software that integrates with the electronic health record to support and partially automate procedures is becoming more common. If it works well, such software can support greater participation and more efficient trial designs. An innovative electronic trial recruitment and outcomes software was designed to support recruitment to the Runny Ear randomised controlled trial, comparing topical, oral and delayed antibiotic treatment for acute otitis media with discharge in children. A qualitative evaluation investigated the views and experiences of primary care staff using this trial software. Methods: Staff were purposively sampled in relation to site, role and whether the practice successfully recruited patients. In-depth interviews were conducted using a flexible topic guide, audio recorded and transcribed. Data were analysed thematically. Results: Sixteen staff were interviewed, including GPs, practice managers, information technology (IT) leads and research staff. GPs wanted trial software that automatically captures patient data. However, the experience of getting the software to work within the limited and complex IT infrastructure of primary care was frustrating and time consuming. Installation was reliant on practice level IT expertise, which varied between practices. Although most had external IT support, this rarely included supported for research IT. Arrangements for approving new software varied across practices and often, but not always, required authorisation from Clinical Commissioning Groups. Conclusions: Primary care IT systems are not solely under the control of individual practices or CCGs or the National Health Service. Rather they are part of a complex system that spans all three and is influenced by semi-autonomous stakeholders operating at different levels. This led to time consuming and sometimes insurmountable barriers to installation at the practice level. These need to be addressed if software supporting efficient research in primary care is to become a reality. abstract_id: PUBMED:22765934 Innovative primary care delivery in rural Alaska: a review of patient encounters seen by community health aides. Background: For more than 50 years, Community Health Aides and Community Health Practitioners (CHA/Ps) have resided in and provided care for the residents of their villages. Objectives: This study is a systematic description of the clinical practice of primary care health workers in rural Alaska communities. This is the first evaluation of the scope of health problems seen by these lay health workers in their remote communities. Study Design: Retrospective observational review of administrative records for outpatient visits seen by CHA/Ps in 150 rural Alaska villages (approximate population 47,370). Methods: Analysis of electronic records for outpatient visits to CHA/Ps in village clinics from October 2004 through September 2006. Data included all outpatient visits from the Indian Health Service National Patient Information Reporting System. Descriptive analysis included comparisons by region, age, sex, clinical assessment and treatment. Results: In total 272,242 visits were reviewed. CHA/Ps provided care for acute, chronic, preventive, and emergency problems at 176,957 (65%) visits. 
The remaining 95,285 (35%) of records did not include a diagnostic code, most of which were for administrative or medication-related encounters. The most common diagnostic codes were: pharyngitis (11%), respiratory infections (10%), otitis media (8%), hypertension (6%), skin infections (4%), and chronic lung disease (4%). Respiratory distress and chest pain accounted for 75% (n=10,552) of all emergency visits. Conclusions: CHA/Ps provide a broad range of primary care in remote Alaskan communities whose residents would otherwise be without consistent medical care. Alaska's CHA/P program could serve as a health-care delivery model for other remote communities with health care access challenges. Answer: Diagnosing acute otitis media (AOM) accurately in primary health care can be challenging. The symptoms of AOM often overlap with those of the common cold, and diagnostic methods that confirm the diagnosis are not always used routinely. This uncertainty in diagnostics can lead to either overdiagnosis and unnecessary treatment or underdiagnosis and an increase in complications (PUBMED:14507792). A study evaluating the inter-rater agreement in the diagnosis of AOM for children in primary health care found that there was a substantial discrepancy between clinicians. General practitioners (GPs) diagnosed AOM based on symptoms and the color of the tympanic membrane, while otorhinolaryngologists paid more attention to the movement and position of the tympanic membrane. The use of a pneumatic otoscope and tympanometry reduced the number of AOM diagnoses by more than 30%, suggesting that AOM may often be misdiagnosed (PUBMED:14507792). In Sweden, a cross-sectional study revealed that otoscopy was the most commonly used method for diagnosing AOM, with 98% of GPs and specialist trainees using it often or always. However, more advanced methods like otomicroscopy, pneumatic otoscopy, and tympanometry were used less frequently. The study highlighted the importance of emphasizing practical skills in otomicroscopy in specialist training programs for general practice to avoid incorrect diagnoses and unnecessary antibiotic treatments (PUBMED:31510938). In summary, while it is possible to diagnose AOM in primary health care, the accuracy of the diagnosis can be improved by using more advanced diagnostic methods and ensuring that primary care clinicians are adequately trained in their use. The variability in diagnostic practices and the potential for misdiagnosis underscore the need for continuous quality improvement and adherence to evidence-based guidelines in the management of AOM (PUBMED:28088980).
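The clinician-to-clinician discrepancy described above (64% of the 50 children diagnosed with AOM by the GP versus 44% by the otorhinolaryngologist) is usually summarized with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below computes kappa from a 2 × 2 agreement table; because the abstract reports only the marginal diagnosis rates, the cross-tabulated counts here are hypothetical and chosen merely to be consistent with those marginals, so the resulting kappa is illustrative only.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table.

    table[i][j] = number of children rated category i by rater 1
    and category j by rater 2 (here: 0 = AOM, 1 = no AOM).
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    p_expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical cross-tabulation consistent with 32/50 GP and 22/50 specialist diagnoses
agreement = [
    [20, 12],  # GP: AOM    -> specialist: AOM, specialist: no AOM
    [2, 16],   # GP: no AOM
]

print(f"Cohen's kappa = {cohens_kappa(agreement):.2f}")
```

A kappa in the 0.4-0.5 range would indicate only moderate agreement, which is consistent with the conclusion that otoscopy-based diagnosis alone is unreliable without adjuncts such as pneumatic otoscopy or tympanometry.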
Instruction: Do psychological factors predict occurrence of influenza-like symptoms in vaccinated elderly residents of a sheltered home? Abstracts: abstract_id: PUBMED:16238856 Do psychological factors predict occurrence of influenza-like symptoms in vaccinated elderly residents of a sheltered home? Objectives: A previously shown relation between psychological factors and the occurrence of flu has not been tested in elderly people, who are at the greatest risk of mortality from flu complications. This study examined whether psychological factors predict the occurrence of influenza-like symptoms (ILS) in elderly residents. Design: A prospective correlation design was used. Method: Our sample included 70 elderly residents of a sheltered home (mean age 83.0 years) who were vaccinated against influenza and assessed for background, biomedical factors (e.g. number of illnesses) and psychological factors (e.g. hostility, depression, life-events, and social support). The occurrence of ILS during 2 subsequent winter months was evaluated by a nurse blind to patients' psychological data, according to explicit valid criteria and physicians' notes. Results: Seventeen patients (24.3%) developed ILS. Number of medications, little physical activity, hostility, depression, and little social support significantly predicted ILS. In a multivariate analysis, only number of drugs (relative risk, RR: 1.54; 95% confidence interval, CI: 1.06-2.22) and hostility (RR = 1.18; 95% CI: 1.00-1.38) significantly and independently predicted occurrence of ILS. Conclusions: Psychological factors (particularly hostility) predict occurrence of ILS in an elderly sample. Possible behavioural and immunological mechanisms linking hostility to development of ILS are discussed. These findings add to the literature on psychological factors and flu, and may have implications for the prediction and prevention of flu in elderly people. abstract_id: PUBMED:34763723 The prior infection with SARS-CoV-2 study (PICOV) in nursing home residents and staff - study protocol description and presentation of preliminary findings on symptoms. Background: The COVID-19 pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has presented itself as one of the most important health concerns of the 2020's, and hit the geriatric population the hardest. The presence of co-morbidities and immune ageing in the elderly lead to an increased susceptibility to COVID-19, as is the case for other influenza-like illnesses (ILI) or acute respiratory tract infections (ARI). However, little is known, about the impact of a previous or current infection on the other in terms of susceptibility, immune response, and clinical course. The aim of the "Prior Infection with SARS-COV-2" (PICOV) study is to compare the time to occurrence of an ILI or ARI between participants with a confirmed past SARS-CoV-2 infection (previously infected) and those without a confirmed past infection (naïve) in residents and staff members of nursing homes. This paper describes the study design and population characteristics at baseline. Methods: In 26 Belgian nursing homes, all eligible residents and staff members were invited to participate, resulting in 1,226 participants. They were classified as naïve or previously infected based on the presence of detectable SARS-CoV-2 antibodies and/or a positive RT-qPCR result before participation in the study. 
Symptoms from a prior SARS-CoV-2 infection between March and August 2020 were compared between previously infected residents and staff members. Results: Infection naïve nursing home residents reported fewer symptoms than previously infected residents: on average 1.9 and 3.1 symptoms, respectively (p = 0.016). The same effect was observed for infection naïve staff members and previously infected staff members (3.1 and 6.1 symptoms, respectively; p <0.0001). Moreover, the antibody development after a SARS-CoV-2 infection differs between residents and staff members, as previously infected residents tend to have a higher rate of asymptomatic cases compared to previously infected staff members (20.5% compared to 12.4%; p <0.0001). Conclusions: We can postulate that COVID-19 disease development and symptomatology are different between a geriatric and younger population. Therefore, the occurrence and severity of a future ILI and/or ARI might vary from resident to staff. abstract_id: PUBMED:36875328 Development of morbidity and mortality of SARS-CoV-2 in nursing homes for the elderly in Frankfurt am Main, Germany, 2020-2022: What protective measures are still required? Introduction: Nursing-home residents are among the highest risk group in the SARS-CoV-2 pandemic. At the onset of the SARS-CoV-2 pandemic, the majority of all deaths from or with SARS-CoV-2 occurred in long-term care facilities (LTCFs), so that maximum protective measures were mandated for these facilities. This study analyzed the impact of the new virus variants and the vaccination campaign on disease severity and mortality among nursing home residents and staff through 2022 as a basis for determining which protective measures remain necessary and appropriate. Methods: In five homes in Frankfurt am Main, Germany, with a total capacity for 705 residents, all cases occurring in the facility among residents and staff were recorded and documented (date of birth and diagnosis, hospitalization and death, vaccination status) and were descriptively analyzed with SPSS. Results: By 31st August 2022, 496 residents tested positive for SARS-CoV-2, 93 in 2020, 136 in 2021, and 267 in 2022; 14 residents presented with a second SARS-CoV-2 infection in 2022, having previously experienced an infection in 2020 or 2021. The percentage of hospitalizations decreased from 24.7% (2020) and 17.6% (2021) to 7.5% (2022), and the percentage of deaths decreased from 20.4% and 19.1% to 1.5%. In 2021, 61.8% of those infected were vaccinated (at least 2x); in 2022, 86.2% of residents had been vaccinated twice, 84% of whom had already had a booster vaccination. Hospitalization and death rates were significantly higher among the unvaccinated than the vaccinated throughout all years (unvaccinated 21.5% and 18.0%; vaccinated 9.8% and 5.5%; KW test p=0.000). However, this difference was no longer significant under the prevalence of the Omicron variant in 2022 (unvaccinated 8.3% and 0%; p=0.561; vaccinated 7.4% and 1.7%; p=0.604). From 2020 to 2022, 400 employees were documented as infected, with 25 having second infections in 2022. Only one employee showed a second infection in 2021 following the first in 2020. Three employees were hospitalized; no deaths occurred. Discussion And Conclusion: Severe COVID-19 courses occurred with the Wuhan Wild type in 2020, with a high death rate among nursing-home residents. 
In contrast, during the waves in 2022 with the relatively mildly pathogenic Omicron variant, many infections but few severe courses and deaths were observed among the now mostly vaccinated and boosted nursing-home residents. Given the high immunity of the population and the low pathogenicity of the circulating virus - even in nursing-home residents - protective measures in nursing homes that restrict people's right to self-determination and quality of life no longer seem justified. Instead, the general hygiene rules and the recommendations of the KRINKO (German Commission for Hospital Hygiene and Infection Prevention) on infection prevention should be followed, and the recommendations of the STIKO (German Standing Commission on Vaccination) on vaccination not only against SARS-CoV-2 but also against influenza and pneumococci should be observed. abstract_id: PUBMED:25275464 Immune biomarkers predictive of respiratory viral infection in elderly nursing home residents. Objective: To determine if immune phenotypes associated with immunosenescence predict risk of respiratory viral infection in elderly nursing home residents. Methods: Residents ≥ 65 years from 32 nursing homes in 4 Canadian cities were enrolled in Fall 2009, 2010 and 2011, and followed for one influenza season. Following influenza vaccination, peripheral blood mononuclear cells (PBMCs) were obtained and analysed by flow cytometry for T-regs, CD4+ and CD8+ T-cell subsets (CCR7+CD45RA+, CCR7-CD45RA+ and CD28-CD57+) and CMV-reactive CD4+ and CD8+ T-cells. Nasopharyngeal swabs were obtained and tested for viruses in symptomatic residents. A Cox proportional hazards model adjusted for age, sex and frailty determined the relationship between immune phenotypes and time to viral infection. Results: 1072 residents were enrolled; median age 86 years and 72% female. 269 swabs were obtained; 87 were positive for virus: influenza (24%), RSV (14%), coronavirus (32%), rhinovirus (17%), human metapneumovirus (9%) and parainfluenza (5%). In multivariable analysis, high T-reg% (HR 0.41, 95% CI 0.20-0.81) and high CMV-reactive CD4+ T-cell% (HR 1.69, 95% CI 1.03-2.78) were predictive of respiratory viral infection. Conclusions: In elderly nursing home residents, high CMV-reactive CD4+ T-cells were associated with an increased risk and high T-regs were associated with a reduced risk of respiratory viral infection. abstract_id: PUBMED:19810419 Influenza outbreak in a well-vaccinated nursing home population in Belgium. Elderly people in nursing home communities are vulnerable to contagious infections, including the influenza virus. Systematic anti-influenza vaccination is an important preventive measure; however, vaccination does not provide absolute protection. We report an outbreak of influenza A infection in a well-vaccinated nursing home population. Several factors can facilitate the occurrence of this type of outbreak. This report mainly addresses the discrepancy between the circulating viral strain and strains present in the recommended vaccine. abstract_id: PUBMED:17695795 Studies of influenza vaccination among elderly nursing home residents. We conducted a questionnaire survey of influenza vaccination among elderly nursing home residents, and investigated the actual conditions and views of elderly people regarding vaccination.
There were 272 elderly residents in the Nagoya-shi Kouseiin Medical Welfare Center; according to independence in activities of daily living, 195 cases (71.7%) were classified into the bedridden group, and 132 cases (48.5%) were evaluated as having disturbance of community ability. The number of residents vaccinated in 2002/2003 was 163 (59.9%). When the vaccination group (163 cases) was compared with the non-vaccination group (109 cases), it became clear that the latter included more residents who were bedridden or had dementia. Replies to the questionnaire were obtained from 139 (51.1%) of the 272 residents. Among residents who received the vaccine, prevention was by far the most common reason (70.2%), followed by a custom of annual vaccination (17.0%) and recommendation by a doctor or by family (5.3% and 3.2%, respectively). The main reason for not having received the vaccine was inability of recognition (22.2%), followed by fear of adverse reactions or allergy and the pain of injection (17.8% each). In order to raise the vaccination rate among elderly nursing home residents, we should take into consideration educational campaigns on the safety of the vaccine and how to develop motivation for vaccination among the elderly with cognitive decline. abstract_id: PUBMED:14580873 Response to influenza vaccination in community and in nursing home residing elderly: relation to clinical factors. Intramuscular (IM) influenza vaccines are about 50% effective in preventing respiratory illness among the elderly. The aim of this study was to identify factors associated with immune response to influenza vaccination among nursing home and community-residing elderly. 114 nursing home (NHE) and 62 community residing elderly (CE) were vaccinated with a commercial IM vaccine. Serum antibodies were evaluated by HIA, and the impact of subjects' clinical characteristics on seroconversion was determined. Factors that were associated with low seroconversion among NHE included type II diabetes [for B/Harbin: p=0.044, OR 0.12 (CI: 0.015-0.94)] and antibody titer prior to vaccination [A/(H1N1): p=0.03, OR 2.38 (CI: 1.09-5.22); A/(H3N2): p=0.015, OR 2.68 (CI: 1.22-5.92); B/Harbin: p=0.057, OR 4.46 (CI: 0.96-20.85)]. Factors that were associated with lower seroconversion among CE included older age [A/(H1N1): p=0.008, OR 0.846 (CI 0.75-0.96); B/Harbin: p=0.016, OR 0.812 (CI: 0.69-0.96)] and antibody titer prior to vaccination [A/(H1N1): p=0.029, OR 4.08 (CI: 1.16-14.37); A/(H3N2): p<0.0001, OR 11.495 (CI: 3.18-41.55)]. There was no significant difference in seroconversion between nursing home residing elderly and community elderly. We conclude that type II diabetes and an antibody titer >1:40 prior to vaccination are associated with reduced response to influenza vaccination in the nursing home elderly. abstract_id: PUBMED:9988287 Effectiveness of inactivated influenza vaccine among nursing home residents during an influenza type A (H3N2) epidemic. Objectives: To evaluate the use of influenza vaccine in nursing homes and its effectiveness in reducing the likelihood of influenza-like illness. Design: A retrospective case-control study with active identification of influenza infection. Setting: All nursing homes in a seven-county study area in southern lower Michigan were eligible for participation. Analyses were based on data collected from 23 homes with documented influenza transmission.
Participants: Persons aged 65 years or older who were residents of the nursing homes under study during the influenza type A(H3N2) outbreak in 1989-1990. Measurements: Residents were identified as cases or controls based on occurrence of febrile respiratory illness meeting a case definition. Demographic and underlying illness information were gathered as were data on the use of influenza vaccine, antibiotics, and antivirals. Characteristics of the nursing homes were also recorded. Logistic regression analyses were carried out to determine vaccine effectiveness. Main Results: Determinants of vaccine use were different from those observed in a parallel community-based study. In a multivariate model that considered the effects of resident and nursing home characteristics, vaccinated residents were significantly less likely than those who were not vaccinated to have an influenza-like illness (OR = .58 (95% CI, .43-.78), P < .001, imputed vaccine effectiveness estimate of 42%). Vaccination was more effective in younger residents (those aged 65 to 84) than in older residents (those older than 84 years). Conclusions: Influenza vaccination was effective in reducing the likelihood of influenza-like illness in nursing home residents. Effectiveness appeared to be related to age, which may function as a surrogate for related immunologic factors. Older nursing home residents should be targeted for newer vaccines and/or potential prophylactic use of antivirals. abstract_id: PUBMED:20922296 Influenza-vaccinated and non-vaccinated elderly: reported morbidity and sociodemographic aspects, Porto Alegre (RS, Brazil), 2004. This descriptive cross-sectional epidemiological study had the objective of comparing elderly people who took the influenza vaccine with those who did not, regarding the occurrence of disease events or hospital admissions within three months after vaccination. It was not possible to work with probability sampling, and the attempt to pair vaccinated and non-vaccinated elderly was not successful due to the high vaccine coverage observed (73% of the target population) and the short time available to conduct the interviews. The results of the descriptive analysis of the 1,130 elderly people interviewed were nonetheless quite interesting, even though they could not be generalized to the universe of the elderly population of Porto Alegre. We found a higher proportion of vaccinated people in the age group of 70 to 79 years (42%), and a predominance of non-vaccinated people in the age group of 60 to 64 years (40%). The vaccinated elderly were mostly older women with private health care insurance and higher income, who performed physical activities and were non-smokers. The non-vaccinated were mostly younger men with lower income, who did not perform physical exercise and who smoked. A lower percentage of reported pneumonia and hospital admissions was observed among the vaccinated in comparison to the non-vaccinated. abstract_id: PUBMED:34730890 The effect of standard-dose wintertime vitamin D supplementation on influenza infection in immunized nursing home elderly residents. Aim: To investigate whether three-month oral vitamin D supplementation (800 IU in drops) reduces the risk of influenza infection in elderly nursing home residents vaccinated against influenza. Methods: This cross-sectional observational study enrolled 97 participants (73.2% women) who received one dose of seasonal trivalent 2016-2017 influenza vaccine.
The patients were randomized into an experimental group, which received vitamin D supplementation for three months starting on the day of vaccination, and a control group, which did not receive vitamin D supplementation. The primary outcome was the number of influenza infections laboratory-confirmed using a rapid point-of-care test based on nasal swabs collected during vitamin D supplementation. The secondary outcome was serum 25-hydroxyvitamin D level at the end of the study. Results: The mean age ±standard deviation was 78.5± 8.8 years. All participants had vitamin D deficiency at baseline. Twenty-three participants who developed signs of respiratory infections during the study were tested for influenza virus. Although the number of influenza-positive participants was lower in the group receiving vitamin D supplementation as compared with the control group (5 vs 12), this difference was not significant. Vitamin D supplementation failed to increase 25(OH)D levels after three months of supplementation. Conclusion: Elderly nursing home residents in Zagreb County have a significant vitamin D deficiency. The recommended national supplementation of 800 IU daily failed to lead to vitamin D sufficiency and did not reduce the risk of influenza infection among the vaccinated elderly. Answer: Yes, psychological factors do predict the occurrence of influenza-like symptoms (ILS) in vaccinated elderly residents of a sheltered home. A study with a sample of 70 elderly residents of a sheltered home, who were vaccinated against influenza, found that psychological factors such as hostility, depression, and little social support significantly predicted ILS. In a multivariate analysis, the number of drugs and hostility significantly and independently predicted the occurrence of ILS. Hostility, in particular, was highlighted as a psychological factor that predicts the occurrence of ILS in an elderly sample (PUBMED:16238856).
Instruction: Carcinoma of the middle bile duct: is bile duct segmental resection appropriate? Abstracts: abstract_id: PUBMED:20014461 Carcinoma of the middle bile duct: is bile duct segmental resection appropriate? Aim: To compare survival between bile duct segmental resection (BDSR) and pancreaticoduodenectomy (PD) for treating distal bile duct cancers. Methods: Retrospective analysis was conducted for 45 patients in a BDSR group and for 149 patients in a PD group. Results: The T-stage (P < 0.001), lymph node invasion (P = 0.010) and tumor differentiation (P = 0.005) were significant prognostic factors in the BDSR group. The 3- and 5-year overall survival rates for the BDSR group and PD group were 51.7% and 36.6%, respectively, and 46.0% and 38.1%, respectively (P = 0.099). The BDSR group and PD group did not show any significant difference in survival when this was adjusted for the TNM stage. The 3- and 5-year survival rates were: stage Ia [BDSR (100.0% and 100.0%) vs PD (76.9% and 68.4%) (P = 0.226)]; stage Ib [BDSR (55.8% and 32.6%) vs PD (59.3% and 59.3%) (P = 0.942)]; stage IIb [BDSR (19.2% and 19.2%) vs PD (31.9% and 14.2%) (P = 0.669)]. Conclusion: BDSR can be justified as an alternative radical operation for middle bile duct carcinoma in selected patients with no adjacent organ invasion and a negative resection margin. abstract_id: PUBMED:26341821 Is Pancreaticoduodenectomy Feasible for Recurrent Remnant Bile Duct Cancer Following Bile Duct Segmental Resection? Background: When bile duct cancer recurs after surgery, treatment options are limited. This study examines the usefulness of reoperative pancreaticoduodenectomy for recurrent cancer after initial segmental bile duct resection. Methods: Six patients (5 males, 1 female; median age 65 years) who underwent pancreaticoduodenectomy for recurrent remnant bile duct cancer following segmental bile duct resection were included: 4 underwent surgery at Severance Hospital and 2 at Gangnam Severance Hospital from January 2000 to December 2013. Medical records data were retrospectively reviewed, including demographics, type of first and second surgery, radicality of resection, TNM stage, adjuvant treatments, complications, and survival. Kaplan-Meier curves were used to analyze survival. Results: The median interval between operations was 57 (range 7-95) months. Median operation time was 6.9 (range 5.2-12.8) h, blood loss was 400 (range 50-1170) mL, intensive care unit stay was 1 (range 1-2) day, and postoperative hospital stay was 33 (range 15-55) days. No patient died. Four had severe complications. The median survival after pancreaticoduodenectomy was 16 (range 5-89) months. Four patients had recurrence. T stage, N stage, and resection radicality influenced survival. Conclusion: Pancreaticoduodenectomy is reasonable for recurrent remnant bile duct cancer following segmental bile duct resection, particularly for patients with no distant metastasis, locally confined recurrence, and good general condition. abstract_id: PUBMED:32037277 Benefits and limitations of middle bile duct segmental resection for extrahepatic cholangiocarcinoma. Background: Pancreaticoduodenectomy (PD) is a standardized strategy for patients with middle and distal bile duct cancers. The aim of this study was to compare clinicopathological features of bile duct segmental resection (BDR) with PD in patients with extrahepatic cholangiocarcinoma.
Methods: Consecutive cases with extrahepatic cholangiocarcinoma who underwent BDR (n = 21) or PD (n = 84) with achievement of R0 or R1 resection in Kobe University Hospital between January 2000 and December 2016 were enrolled in the present study. Results: Patients who underwent PD were significantly younger than those receiving BDR. The frequency of preoperative jaundice, biliary drainage and cholangitis was not significantly different between the two groups. The duration of surgery was longer and there was more intraoperative bleeding in the PD than in the BDR group (553 vs. 421 min, and 770 vs. 402 mL; both P<0.01). More major complications (>Clavien-Dindo IIIa) were observed in the PD group (46% vs. 10%, P<0.01). Postoperative hospital stay was also longer in that group (30 vs. 19 days, P = 0.02). Pathological assessment revealed that tumors were less advanced in the BDR group but the rate of lymph node metastasis was similar in both groups (33% in BDR and 48% in PD, P = 0.24). The rate of R0 resection was significantly higher in the PD group (80% vs. 38%, P<0.01). Adjuvant chemotherapy was more frequently administered to patients in the BDR group (62% vs. 38%, P = 0.04). Although 5-year overall survival rates were similar in both groups (44% for BDR and 51% for PD, P = 0.72), in patients with T1 and T2, the BDR group tended to have poorer prognosis (44% vs. 68% at 5-year, P = 0.09). Conclusions: BDR was comparable in prognosis to PD in middle bile duct cancer. Less invasiveness and lower morbidity of BDR justified this technique for selected patients in a poor general condition. abstract_id: PUBMED:32048467 Comparison of pancreaticoduodenectomy and bile duct resection for middle bile duct cancer: A multi-center collaborating study of Japan and Korea. Background: It is currently unknown whether bile duct segmental resection (BDSR) is an acceptable method for localized middle bile duct cancer (mid-BDC) when R0 resection can be achieved. This study aimed to investigate the short- and long-term outcomes of mid-BDC patients treated with pancreaticoduodenectomy (PD) compared to those for BDSR. Methods: This was a retrospective, Japanese and Korean multi-center collaboration study based on patients' medical records. Results: A total of 663 patients, including 245 BDSR and 418 PD cases, were enrolled. The incidence of postoperative pancreatic fistula (3.3% vs 44.1%, P < .0001), surgical site infection in the organ space (6.1% vs 17.7%, P < .0001) and clinically problematic morbidities (15.9% vs 32.8%, P < .0001) was significantly higher in the PD group. There was no difference in the mortality rate (0.8% vs 1.7%, P = .3566). Local (33.9% vs 14.4%, P < .0001) and lymph node (22.4% vs 11.0%, P < .0001) recurrence rates were significantly higher in the BDSR group. Relapse-free survival (25.0 vs 34.0 months, P = .0184) and overall survival (41.2 vs 60.1 months, P = .0019) were significantly longer in the PD group. The PD group had significantly better prognosis in stage IA/IB cases (58.3 vs 111.5 months, P = .0067), which were the best indicators for BDSR, even when R0 resection was achieved. In multivariate analysis, BDSR was an independent poor prognostic factor. Conclusion: Despite the inferior perioperative short-term outcomes, our data advocate that PD should be the standard procedure for mid-BDCs and that BDSR should be avoided even if R0 resection can be achieved. (UMIN000017914). 
abstract_id: PUBMED:37837506 Intraductal papillary neoplasm of the bile duct with metachronous development in the downstream bile duct after radical resection. We report a case of intraductal papillary neoplasm of the bile duct (IPNB) that metachronously developed twice in the downstream bile duct after radical resection. The first lesion was located in the left intrahepatic bile duct, the second lesion in the perihilar bile duct, and the third lesion in the distal bile duct. All lesions were IPNBs with associated invasive carcinoma (pancreatobiliary type). The depth of invasion was to Glisson's capsule in the first lesion, to the subserosa in the second lesion, and to the fibromuscular layer in the third lesion, without lympho-vascular/perineural invasion and lymph-node metastasis. These were resected radically and had no biliary intraepithelial neoplasia and hyperplasia in the surrounding mucosa. In immunohistochemical examination, each lesion showed a different pattern. Although the downstream occurrence suggests intrabiliary dissemination, the mechanism of these metachronous developments may be multicentric. A literature review revealed that most metachronous cholangiocarcinomas have a grossly papillary appearance and tend to arise downstream. Our findings suggest that IPNB may develop metachronously in the residual bile duct after radical surgery, which may assist in early detection. abstract_id: PUBMED:33784683 Prognostic Impact of Resection Margin Length in Patients Undergoing Resection for Mid-Common Bile Duct Cancer: A Single-Center Experience. Introduction: The purpose of this study was to analyze survival outcomes after segmental bile duct resection (BDR) for mid-common bile duct cancer according to the length of the tumor-free BDR margins. Method: A total of 133 consecutive patients underwent BDR for mid-bile duct cancers between December 2007 and June 2017. The Cox proportional hazard model was used to verify the cutoff value of the R0 resection margin. The patients were divided into 3 groups according to resection margin status (group 1, R0 resection margin ≥5 mm; group 2, R0 resection margin <5 mm; and group 3, R1 resection margin). Results: The median follow-up period of the study cohort was 24 months. A resection margin of 5 mm in length was verified to be suitable as a reliable cutoff value. The median disease-free and overall survival (OS) periods were 32 and 49 months in group 1, 13 and 20 months in group 2, and 23 and 30 months in group 3, respectively (p = 0.03 and p < 0.001). The length of the tumor-free resection margin (hazard ratio, 2.01; 95% confidence interval, 1.10-3.67; p = 0.022) was an independent factor affecting OS. Conclusions: BDR for mid-bile duct cancer appears to be a feasible surgical option in selected patients with careful preoperative imaging assessment and intraoperative frozen-section diagnosis. Our results suggest achieving a BDR margin ≥5 mm to improve survival outcomes. abstract_id: PUBMED:21410028 Pancreatoduodenectomy for local intrapancreatic recurrence after bile duct resection of mid bile duct carcinoma. Cancers originating from the proximal and mid portions of the common bile duct exhibit a variable degree of longitudinal tumor spread; thus, curative resection with attainment of sufficient resection margins is often difficult to achieve. Not surprisingly, local tumor recurrence after noncurative resection occurs frequently, but most recurrences are not considered viable indications for a second operative intervention.
We describe our experience of attempting to achieve curative repeat resection after local recurrence of bile duct carcinoma. Two male patients, aged 49 and 58 years, had localized tumor recurrence in the intrapancreatic bile duct 25 and 43 months after segmental bile duct resection for proximal-to-mid duct carcinoma. Following close observation for 3-4 months to determine operability, these 2 patients underwent pylorus-preserving pancreatoduodenectomy, in which both pre-existing hepaticojejunostomy and Roux jejunal limb were effectively used. There was no further recurrence for 65 and 37 months after repeat resection, respectively. Repeat resection with pancreatoduodenectomy may be a feasible treatment modality for selected patients showing late intrapancreatic recurrence after prior segmental bile duct resection. abstract_id: PUBMED:38254787 Survival Comparison of Different Operation Types for Middle Bile Duct Cancer: Bile Duct Resection versus Pancreaticoduodenectomy Considering Complications and Adjuvant Treatment Effects. Background: Margin status is one of the most significant prognostic factors after curative surgery for middle bile duct (MBD) cancer. Bile duct resection (BDR) is commonly converted to pancreaticoduodenectomy (PD) to achieve R0 resection. Additionally, adjuvant treatment is actively performed after surgery to improve survival. However, the wider the range of surgery, the higher the chance of complications; this, in turn, makes adjuvant treatment impossible. Nevertheless, no definitive surgical strategy considers the possible complication rates and subsequent adjuvant treatment. We aimed to investigate the appropriate surgical type considering the margin status, complications, and adjuvant treatment in MBD cancer. Materials And Methods: From 2008 to 2017, 520 patients diagnosed with MBD cancer at the Samsung Medical Center were analyzed retrospectively according to the operation type, margin status, complications, and adjuvant treatment. The R1 group was defined as having a carcinoma margin. Results: The 5-year survival rate for patients who underwent R0 and R1 resection was 54.4% and 33.3%, respectively (p = 0.131). Prognostic factors affecting the overall survival were the age, preoperative CA19-9 level, T stage, and N stage, but not the operation type, margin status, complications, or adjuvant treatment. The complication rates were 11.5% and 29.8% in the BDR and PD groups, respectively (p < 0.001). We observed no significant difference in the adjuvant treatment ratio according to complications (p = 0.675). Patients with PD who underwent R0 resection and could not undergo chemotherapy because of complications reported better survival rates than those with BDR who underwent R1 resection after adjuvant treatment (p = 0.003). Conclusion: The survival outcome of patients with R1 margins who underwent BDR did not match those with R0 margins after PD, even after adjuvant treatment. Due to improvements in surgical techniques and the ability to resolve complications, surgical complications exert a marginal effect on survival. Therefore, surgeons should secure R0 margins to achieve the best survival outcomes. abstract_id: PUBMED:26155223 Technical knacks and outcomes of extended extrahepatic bile duct resection in patients with mid bile duct cancer. Backgrounds/aims: Mid bile duct cancers often involve the proximal intrapancreatic bile duct, and resection of the extrahepatic bile duct (EHBD) can result in a tumor-positive distal resection margin (RM). 
We attempted a customized surgical procedure to obtain a tumor-free distal RM during EHBD resection, so that R0 resection can be achieved through extended EHBD resection without performing pancreaticoduodenectomy. Methods: We previously reported the surgical procedures of extended EHBD resection, in which the intrapancreatic duct excavation resembles a ≥2 cm-long funnel. This unique procedure was performed in 11 cases of mid bile duct cancer occurring in elderly patients between the ages of 70 and 83 years. Results: The tumor involved the intrapancreatic duct in all cases. Deep pancreatic excavation per se required about 30-60 minutes. Cancer-free hepatic duct RM was obtained in 10 patients. Prolonged leakage of pancreatic juice occurred in 2 patients, but all were controlled with supportive care. Adjuvant therapies were primarily applied to RM-positive or lymph node-positive patients. Their 1-year and 3-year survival rates were 90.9% and 60.6%, respectively. Conclusions: We suggest that extended EHBD resection can be performed as a beneficial option to achieve R0 resection in cases in which pancreaticoduodenectomy should be avoided due to various causes including old age and expectation of a poor outcome. abstract_id: PUBMED:33237486 Surgical ampullectomy with resection of the common bile duct for biliary papillomatosis. Background: Intraductal papillary neoplasm of the bile duct (IPNB) or biliary papillomatosis is a precursor lesion of papillary cholangiocarcinoma. IPNB is recognized as a biliary counterpart of IPMN (pancreatic intraductal papillary mucinous neoplasm). IPNB is a rare disease involving either the entire biliary tree (diffuse type) or one part of it (localized type). Patients without distant metastasis are considered for surgical resection. For patients with distal bile duct papillomatosis, pancreaticoduodenectomy (PD) is recommended in cases of invasive distal bile duct IPNB. PD is a highly complex procedure associated with the deterioration of endocrine and exocrine functions, leading to a significant impact on quality of life. Some authors have reported a new surgical approach allowing complete resection of the common bile duct without pancreatectomy. Methods: We report the case of a 71-year-old female who presented to our department with jaundice. At endoscopic ultrasound with cholangioscopy and CT scan, a 2-cm distal bile duct mass with a villous component was seen. All needle biopsies were benign, and no distant disease was found. Given the risk of degeneration of this tumor, surgical resection was decided. Results: Intraoperative frozen section assessed the benignity of peripancreatic lymph nodes. We performed surgical ampullectomy with resection of the common bile duct. The intrapancreatic common bile duct was completely mobilized between the ampullectomy area and the upper edge of the pancreas. Frozen sections of the distal and proximal margins of the common bile duct were performed to rule out malignancy. Finally, reconstruction consisted of reimplantation of the main pancreatic duct into the duodenum and choledochoduodenostomy. The histological analysis confirmed the diagnosis of biliary papillomatosis with low-grade dysplasia. Conclusion: This procedure allows complete resection of benign tumors with endobiliary extension and preserves intestinal continuity and pancreatic parenchyma. Answer: The appropriateness of bile duct segmental resection (BDSR) for carcinoma of the middle bile duct has been evaluated in several studies.
BDSR can be justified as an alternative radical operation for patients with middle bile duct cancer, particularly in selected patients with no adjacent organ invasion and a negative resection margin (PUBMED:20014461). The survival rates between BDSR and pancreaticoduodenectomy (PD) groups did not show a significant difference when adjusted for the TNM stage, suggesting that BDSR could be a viable option in certain cases (PUBMED:20014461). However, it is important to note that BDSR may have limitations. A study comparing clinicopathological features of BDSR with PD in patients with extrahepatic cholangiocarcinoma found that while BDSR was less invasive and had lower morbidity, it was associated with a higher rate of R1 resection and a tendency for poorer prognosis in patients with T1 and T2 tumors (PUBMED:32037277). Another study indicated that despite inferior perioperative short-term outcomes, PD should be the standard procedure for mid-bile duct cancers, as BDSR was an independent poor prognostic factor even if R0 resection could be achieved (PUBMED:32048467). For recurrent remnant bile duct cancer following segmental bile duct resection, reoperative PD is considered reasonable, especially for patients with no distant metastasis, locally confined recurrence, and good general condition (PUBMED:26341821). Additionally, achieving a bile duct resection (BDR) margin of ≥5 mm has been suggested to improve survival outcomes (PUBMED:33784683). In cases of intraductal papillary neoplasms of the bile duct (IPNB) with metachronous development in the downstream bile duct after radical resection, the mechanism of development may be multicentric, and early detection is crucial (PUBMED:37837506). Extended EHBD resection has been suggested as a beneficial option to achieve R0 resection in cases where PD should be avoided due to various reasons, including old age (PUBMED:26155223). In summary, BDSR may be appropriate for selected patients with carcinoma of the middle bile duct, particularly when there is no adjacent organ invasion, and a negative resection margin can be achieved. However, PD is generally considered the standard procedure, and the decision between BDSR and PD should be made considering the patient's condition, tumor stage, and potential for achieving clear margins. In cases where PD is not feasible or is associated with high risk, extended EHBD resection or other conservative surgical approaches may be considered. For instance, surgical ampullectomy with resection of the common bile duct has been reported as a procedure that allows complete resection of benign tumors with endobiliary extension while preserving intestinal continuity and pancreatic parenchyma (PUBMED:33237486). Ultimately, the choice of surgical approach should be individualized based on a thorough evaluation of the patient's clinical scenario, the tumor's characteristics, and the expertise of the surgical team. The goal is to balance the potential benefits of surgery in terms of oncological outcomes with the risks associated with the procedure and the impact on the patient's quality of life.
Instruction: Is living in a rural area good for your respiratory health? Abstracts: abstract_id: PUBMED:16236855 Is living in a rural area good for your respiratory health? Results from a cross-sectional study in Scotland. Study Objectives: We aimed to investigate the epidemiology of self-reported chronic respiratory disease throughout Scotland, and to explore the relationship between quality of life and geographic location in those reporting disease. Design: A cross-sectional study. Self-reported data on age, gender, socioeconomic factors, smoking habits, selected illnesses (major respiratory and atopic diseases, and other major conditions), respiratory symptoms, use of medicines and health services, and quality of life were collected using a postal questionnaire. Participants: A total of 4,560 adults registered with 1 of 57 family practices (22 rural and 35 urban) throughout Scotland. Results: The response rate was 60%. Following adjustment for potential confounders, participants from rural areas reported a significantly lower prevalence of any chest illness (adjusted odds ratio [OR], 0.72; 95% confidence interval [CI], 0.58 to 0.91), asthma (adjusted OR, 0.59; 95% CI, 0.46 to 0.76), and eczema/dermatitis (adjusted OR, 0.67; 95% CI, 0.52 to 0.87). Rural location was less likely than urban location to be associated with the reporting of persistent cough and phlegm and different symptoms (types of breathlessness and wheeze) indicative of asthma. No difference in prevalence was found for other respiratory problems. Participants from rural areas reporting COPD or emphysema, or cough or phlegm symptoms had significantly better quality of life scores than their urban counterparts. Conclusions: In this study, living in a rural area was associated with a lower prevalence of asthma but not other chronic respiratory disorders, and a lower prevalence of some respiratory symptoms (including wheeze). Although the prevalence of COPD or emphysema did not differ between rural and urban areas, rural residency appeared to be associated with better health status among subjects with these conditions. abstract_id: PUBMED:20871944 Living in a rural area with advanced chronic respiratory illness: a qualitative study. Aim: To explore the impact of living with advanced chronic respiratory illness in a rural area Methods: Using an interpretive descriptive approach, semi-structured interviews were conducted with seven people living with advanced chronic respiratory illness in a rural area of Western Canada. Results: Themes that characterised the experience of living in a rural setting with a chronic obstructive respiratory illness included: a) distance as a barrier to accessing health care; b) relationships with family practice physicians; c) supportive local community; and d) lack of respiratory education and peer support. Conclusions: Whilst living with advanced respiratory illness in a rural area posed some significant challenges, experiences of "place", conceptualised as a web of relationships embedded within a local context, are an important factor in rural residents' decisions to remain in situations where distance may present significant challenges to accessing health care. abstract_id: PUBMED:36275433 The Association Between Living Area in Childhood and Respiratory Disease Mortality in Adulthood. Objective: No studies have examined the association between characteristics of urban areas and future respiratory disease mortality. 
We examined whether the type of living area during childhood was associated with all-cause and respiratory disease mortality in adulthood. Methods: A total of 81,413 Japanese participants aged 40-79 years old completed a lifestyle questionnaire including the type of childhood living areas. The Cox proportional hazards regression model was used to calculate the multivariable hazard ratios (HRs) with 95% confidence intervals (CIs) of all-cause and respiratory disease mortality. Results: Living in large city areas in childhood was associated with a higher risk of all-cause mortality [HR = 1.05 (95% CI, 1.01-1.10)], but not with respiratory disease mortality [HR = 1.04 (95% CI, 0.92-1.18)], compared to rural and remote areas. The excess risk of all-cause and respiratory disease mortality was primarily found in industrial areas among men; the respective multivariable HRs were 1.28 (95% CI, 1.00-1.64) and 1.90 (95% CI: 1.10-3.29). Conclusion: Eliminating childhood health hazards associated with living in industrial areas is suggested to reduce the risk of mortality from respiratory diseases in adulthood. abstract_id: PUBMED:12183969 Health care services for the elderly living in the rural area of Poland. Unlabelled: The aim of the paper was an in-depth comparison of health services use by the elderly living in rural and urban areas of Poland. A representative survey of people aged 65 years and older was carried out in the year 2000, as a comparative study to Piotrowski's survey from 1967. The sample (total N = 1821; 743 people living in rural and 1078 in urban areas) corresponded to the demographic structure of the general older population in Poland. Results: Regarding socio-demographic features, a slightly higher percentage of 'old old' people was found in the rural community, along with much worse living conditions and material situation in comparison to counterparts in the urban community. The elderly living in the countryside showed worse subjective and objective health and functional indices. Positive self-rated health was found in only 12% of respondents living in rural areas and in 22% living in urban areas. 71% of people living in villages declared three or more complaints simultaneously, compared with much fewer in the cities (57%). Moreover, impairments of vision, hearing and chewing were found significantly more frequently among rural inhabitants, as well as almost twice higher percentages of dependence in P-ADL. The use of health services by the elderly during the last 12 months, in terms of doctors' outpatient consultations, dentist visits and rehabilitation, with the exception of nurse visits, was significantly lower in the countryside than in the cities. Use of emergency services (17%) and hospital stays (22%) was the same in both settings. Conclusions: (1) The health status and functional ability of older people living in rural areas are dramatically worse than in cities. (2) The health care system for elderly people in Poland does not fulfill geriatric standards in terms of commonness, availability and complexity. (3) The chance to improve health care for the elderly lies in enhancing the knowledge and competence of practitioners in gerontology. The development of the geriatric base in the Medical Universities is crucial for that.
abstract_id: PUBMED:26516740 Primary Health Care from the perception of women living in a rural area. Objective: Understanding the perception of women living in a rural area about the actions and services of Primary Health Care (PHC) in a municipality of southern Brazil, which is the only one regarded as predominantly rural. Method: A descriptive study with a qualitative approach, carried out with women who lived in the countryside and required health services in the 15 days prior to collection. Results: The results registered low fidelity to PHC attributes, focusing its functional axis on sickness, transforming the unit into small points of emergency care and a bureaucratic place where patients are referred to other types of services. The quality of service offered is compromised by the offering of quick, fragmented and unequal treatment in the rural context. Conclusion: The findings of this study highlight the need for greater efforts to adapt the new care model through the development of appropriate actions as designated by PHC in the rural context studied. abstract_id: PUBMED:10996948 Extended main family and mental disease in a rural health area. Objective: To check the association between the extended main ('souche') family and the incidence of mental illness. Design: A case-control study. Setting: Rural health zone (Tineo, Asturias, Spain). Patients: Cases: every person with a new diagnosis of mental illness (CIE-9-CM) compiled from the clinical records of the Mental Health Centre. Field work ran from 1-1-1996 until 31-12-1997; cases could be dead or alive and aged fourteen years or more, including suicides and suicide attempts without clinical records in the mental health centre, and excluding organic psychosis, mental illness caused by organic lesions and mental handicap (n = 162). Controls: living people aged fourteen years or more, without a diagnosis of current mental illness, or more than one year since their discharge from a mental illness, scoring five or less on the GHQ-28 scale. A systematic random sample was drawn among all patients registered by computer at the health centre (n = 378). We compiled information on: age, sex, family and personal history of mental illness, rural or urban context, marital status, education level, family structure and GHQ-28. We carried out univariate and bivariate analyses, and we also fitted a logistic regression model (SPSSW 7.1). Main Results: For the souche family we obtained an odds ratio of 3.29 (95% CI, 1.69-6.43). Conclusions: The souche family is a risk factor for mental illness. Research into its dynamics and functioning should be done. abstract_id: PUBMED:32677408 Health Determinants of Rural Elderly Populations with Different Living Arrangements: a Multilevel Model. Objective: To determine the health status and its determinants of rural elderly populations in a city in Sichuan province. Methods: Data were extracted from a 2015 household survey, containing 7,280 rural participants aged over 60 years. The prevalence of two-week morbidity and chronic conditions was compared among those with different living arrangements. Multilevel models were established to determine the family and individual factors associated with the health status of the respondents. Results: Most respondents (57.0%) lived with a spouse, compared with 15.4% who lived alone. Those who lived alone had a higher prevalence of two-week morbidity (48.0%) and chronic conditions (70.0%). The multilevel model indicated a familial clustering effect.
Living arrangements and household income were associated with the prevalence of two-week morbidity and chronic conditions, after controlling for variations in age, depression, smoking, drinking, and exercise at the individual level. Conclusions: The rural elderly people who live alone have poor health status. Household income, age, depression, smoking, drinking and physical exercise are associated with two-week morbidity and chronic conditions. abstract_id: PUBMED:25156143 Living will status and desire for living will help among rural Alabama veterans. The purpose of this secondary analysis of data from an earlier intervention study to increase Veterans Administration health care enrollment in rural Alabama veterans was to determine the veterans' living will status, desire for help completing a living will, and relationships between these and demographic, health insurance, health self-report, cumulative illness, disability, and trust characteristics. Baseline data for 201 rural Alabama veterans were extracted from the larger study. Chi-square and t tests were used to analyze group differences in categorical and continuous variables. Logistic regression models were used to determine multivariate associations of variables with living will status and desire for help. Only 13% of participants had living wills. Of those without living wills, 40% expressed a desire for help completing a living will. African Americans were less likely to have living wills than were Caucasians. Participants with more than high school education were more likely to desire help completing living wills than were those with less education. With the exception of moderate-severe respiratory illness, moderate-severe illness was not associated with having a living will. With the exception of moderate-severe vascular illness, moderate-severe illness was not associated with desire for help completing a living will. The racial and educational disparities in living will status and desire for help and the number of participants who desired help completing a living will suggests a need for action to increase advance care planning among rural veterans. abstract_id: PUBMED:36774679 Mental health service needs and mental health of old adults living alone in urban and rural areas in China: The role of coping styles. This study aimed to explore the level and relationship between the mental health service needs (MHSN), coping styles (CS), and mental health (MH) of old adults living alone in urban and rural, and to explore the moderating effects of coping styles. A cross-sectional study was conducted with 717 older adults living alone. The survey measured MHSN, CS, and MH. Data were analyzed using a structural equation and bootstrapping method. There were significant differences in the MHSN, CS, and MH between urban and rural old adults living alone (P < 0.05). The model of the impact of MHSN on MH in urban and rural old adults living alone showed a good fit. These findings highlighted the importance of high MHSN and positive CS as potential protective factors of mental health in old adults living alone. This provides a theoretical basis for psychological nursing for old adults living alone in the community. abstract_id: PUBMED:22165406 Knowledges related to self-care in health among women of Southern Brazil's rural area This article aimed to study self-care actions by a group of rural women from the city of Pelotas, state of Rio Grande do Sul, Brazil. 
This was a qualitative study in which 11 women living in the city's rural area participated. Data collection occurred between May and June 2010, through semi-structured interviews. A content analysis was performed on the data. The oldest informants had participated in the community group for 12 years, doing handicrafts and exchanging knowledge on the use of medicinal plants. Self-care is part of the actions of these women, which are performed in many different ways, including actions in the family context, and the transmission of beliefs and habits through generations. We hope to contribute to the valorization of rural women, educating health professionals on health practices that meet the real needs of the assisted community, reclaiming the culture related to self-care. Answer: The question of whether living in a rural area is good for respiratory health is complex and the answer may depend on various factors. According to a cross-sectional study in Scotland, living in a rural area was associated with a significantly lower prevalence of self-reported chest illness, asthma, and eczema/dermatitis compared to urban areas. Rural residents also reported less persistent cough and phlegm and had better quality of life scores among those with COPD or emphysema (PUBMED:16236855). However, living with advanced chronic respiratory illness in a rural area of Western Canada posed significant challenges, such as distance being a barrier to accessing healthcare and lack of respiratory education and peer support, despite the supportive local community (PUBMED:20871944). In Japan, living in large city areas during childhood was associated with a higher risk of all-cause mortality but not specifically with respiratory disease mortality compared to rural and remote areas. However, living in industrial areas was associated with a higher risk of respiratory disease mortality among men (PUBMED:36275433). In Poland, the elderly living in rural areas had worse subjective and objective health and functional indices compared to their urban counterparts, and the use of health services was significantly lower in the countryside (PUBMED:12183969). In southern Brazil, women living in rural areas perceived primary health care as focusing on sickness, offering quick, fragmented, and unequal treatment, which compromised the quality of service (PUBMED:26516740). In rural Spain, living in an extended main family was associated with a higher risk of mental illness, which could indirectly affect overall health, including respiratory health (PUBMED:10996948). In rural Sichuan province, elderly people living alone had poorer health status, and household income, age, depression, smoking, drinking, and physical exercise were associated with two-week morbidity and chronic conditions (PUBMED:32677408). In summary, while some studies suggest that living in a rural area may be associated with a lower prevalence of certain respiratory conditions, other studies highlight challenges such as access to healthcare, quality of services, and other health determinants that could negatively impact respiratory health. Therefore, the relationship between living in a rural area and respiratory health is not straightforward and is influenced by a range of factors.
Instruction: Experience with revascularizing renal artery aneurysms: is it feasible, safe and worth attempting? Abstracts: abstract_id: PUBMED:9224303 Experience with revascularizing renal artery aneurysms: is it feasible, safe and worth attempting? Purpose: We retrospectively evaluated the feasibility and efficacy of surgical revascularization for renal artery aneurysms. Materials And Methods: Beginning in 1984, 12 patients with renal artery aneurysm underwent renal revascularization regardless of clinical features. Postoperative results were analyzed regarding split renal function, patency of the revascularized arteries, blood pressure control and surgical complications. Results: Postoperative renal function was stable or improved in all but 1 case and patency in branched arteries was preserved in 86%. Hypertension in 8 patients was cured in 7 (88%), including 2 with renovascular hypertension, and improved in 1 (12%). Complications were minimal with only 1 ureteral stricture that required reoperation. Conclusions: The majority of renal artery aneurysm cases are amenable to surgical repair. Carefully performed renal revascularization is rewarding in that high blood pressure is better controlled, renal function is improved and the potential risk of rupture is obviated. abstract_id: PUBMED:34714376 Endovascular repair of visceral artery aneurysms and pseudoaneurysms in 159 patients: twelve years' experience of clinical technique. Objectives: The purpose of this study was to evaluate the therapeutic efficacy and safety of endovascular treatment for patients with visceral and renal artery aneurysms (VRAAs). Twelve years of experience with interventional procedures and treatment options in our center are also discussed. Methods: From January 2009 to December 2020, clinical data of 159 consecutive patients with VRAAs were retrospectively analyzed. Patients' demographic and clinical data were recorded, and the safety and efficacy of endovascular therapy were evaluated. In addition, interventional procedures were also described. Results: A total of 159 patients underwent angiography, and 154 patients were successfully treated with endovascular therapy, with a technical success rate of 96.9%. Of the 154 patients with successful endovascular therapy, 3 patients died within 30 days of treatment, with a 30-day mortality rate of 1.9%, and the remaining patients were clinically successful, with a clinical success rate of 98.1%. Fifty-seven patients underwent emergency interventional treatment due to ruptured aneurysm. There were statistically significant differences in hemoglobin before and after emergency treatment (78.5 ± 22.0 g/L vs. 93.8 ± 15.0 g/L, P = 0.00). No other serious complications occurred except death in 3 patients. Conclusion: Endovascular treatment of VRAAs is safe and effective and can significantly improve the symptoms of patients, especially those with ruptured aneurysms. abstract_id: PUBMED:24767974 Preliminary experience of laparoscopic renal artery aneurysm clipping surgery. Developments in endovascular therapy and laparoscopy have led to the expansion of minimally invasive surgical techniques in the treatment of renal artery aneurysm (RAA). In this study, we introduced a new surgical technique that combined the clipping technique widely used in neurosurgery with laparoscopic surgery to clip the aneurysm. RAA clipping was performed by retroperitoneal laparoscopy, and the procedures were completed with no conversions to open surgery.
In addition, no perioperative complications were recorded. Our preliminary experience suggests that laparoscopic RAA clipping surgery is feasible and appears to be safe. Laparoscopic RAA clipping surgery has the advantages of a simplified surgical process, short warm ischemia time, and excellent cosmetic results. abstract_id: PUBMED:30272561 Endovascular treatment of true renal artery aneurysms: a single center experience. Purpose: We aimed to report a single center's experience on endovascular treatment of true renal artery aneurysms (TRAAs), including treatment techniques and outcomes. Methods: This retrospective study was designed to evaluate the treatment and follow-up of TRAAs treated by a variety of endovascular interventional techniques over a period of 6 years. Six patients with nine TRAAs were identified; seven of the TRAAs were treated using different combinations of coil embolization and flow diverter stents. The clinical findings, aneurysm characteristics, endovascular methods and treatment outcomes were reported. Results: Seven TRAAs of six patients were treated, with a median aneurysm size of 20 mm. Three TRAAs were treated with primary sac occlusion (one with primary coil embolization, one with balloon- and stent-assisted coil and glue embolization, and one with an Amplatzer vascular occlusion device and coil embolization). The remaining four TRAAs of three patients were treated with flow diverter stents (Cardiatis, Silk, Pipeline, and Surpass). Immediate clinical success was achieved in patients treated with primary sac embolization (95% CI, 29.2%-100%). Among patients treated with flow diverter stents, one patient required an additional flow diverter at 6-month follow-up. The occlusion time with flow diverters ranged from 1 month to 12 months (median, 3.5 months), taking the repeat procedure into account. In patients treated with flow diverters, the clinical success rate was 100% (95% CI, 29.2%-100%) at one-year follow-up. Long-term follow-up ranged from 3 to 52 months. One intraprocedural complication was encountered with a flow diverter during deployment, which required additional stenting and tirofiban infusion. No other major complication was seen. Conclusion: Endovascular treatment is an effective and safe method offering high success rates and low morbidity in the treatment of TRAAs and may supplant surgery as the primary therapy. Current experience in the use of flow diverter stents in TRAAs is limited to individual case reports with one brand of flow diverter device. Our small series of four TRAAs illustrates our experience with endovascular treatment using different flow diverter brands. abstract_id: PUBMED:7648892 Renal artery aneurysms. Our experience. A retrospective study included 8 renal artery aneurysms. There were 6 females and 2 males with a mean age of 50.7 years. Diagnosis was obtained angiographically in all cases: in 7 during a renovascular work-up for hypertension and fortuitously in 1 after angiography for arteriopathy of the lower limbs. The aneurysm was due to fibromuscular dysplasia in 5 cases and to atheromatous lesions in 3. Reconstruction techniques and their indications were analyzed on the basis of a review of the literature. Outcomes for hypertensive patients were the same with the different techniques. abstract_id: PUBMED:18716538 Laparoscopic repair of renal artery aneurysm: a description of the technique and results in 2 cases. Background And Purpose: Renal artery aneurysm is a rare occurrence.
We report our experience in the management of 2 symptomatic renal artery aneurysms with laparoscopic approach. Material And Methods: Two patients, male and female, mean age 44 years (range: 43 to 45 y), diagnosed with left renal artery aneurysms during routine hypertension work-up, underwent laparoscopic aneurysm reconstruction. Results: Operative time was 180 and 150 minutes with a warm ischemia time of 46 and 15 minutes and an estimated operative bleeding of 50 and 0 mL, respectively. The patients were discharged home on postoperative day 2 and 5, respectively. A selective left renal arteriography showed normal caliber of the repaired arteries in both patients. A follow-up of 45 and 7 months, respectively, has been uneventful. Conclusions: In well-selected patients, vascular surgical techniques may offer excellent results. Laparoscopy provides the advantages of the minimally invasive approach while achieving adequate vascular control. However, advance training in laparoscopic reconstructive surgical techniques is required before attempting this procedure. abstract_id: PUBMED:20036510 Aneurysmectomy with arterial reconstruction of renal artery aneurysms in the endovascular era: a safe, effective treatment for both aneurysm and associated hypertension. Background: Renal artery aneurysms (RAAs) represent a rare vascular pathology with an estimated incidence of <1%. Although an endovascular approach is being increasingly used to treat RAAs, we hypothesized that open surgical repair of RAA, specifically via aneurysmectomy with arterial reconstruction (AAR), is a safe, effective treatment, particularly for those with complex aneurysm anatomy. Methods: A review was performed of all patients with RAA, identified by ICD-9 codes, from January 2003 to December 2008 seen at a tertiary care medical center. Data were collected regarding patient demographics, aneurysm characteristics, surgical repair, and outcomes, as well as follow-up care. Results: A total of 14 patients (10 women and 4 men; mean age, 48+/-19 years) were included, representing 15 aneurysms. Ten aneurysms underwent open repair via AAR and five were followed nonoperatively. Mean RAA size was larger for those undergoing repair (2.12 cm vs. 1.62 cm, p=0.037). Seven RAAs were repaired in situ with either patch angioplasty or primary repair; three required ex vivo reconstruction; and none underwent bypass. Average operative time was similar for repair type, with a higher blood loss with ex vivo repair. Median length of stay was 5 days (range, 4 to 14 days). Operative repair had no effect on mean systolic blood pressure or GFR. This repair, however, resulted in lower medication requirement for those with concurrent hypertension (2.7 pre vs. 1.6 post, p=0.03). There was a trend toward shorter time until oral intake for retroperitoneal approach compared with transperitoneal. Mean follow-up time was 11.6 months (range, 3 to 30 months). No incidences of rupture, death, nephrectomy, or renal failure occurred in the operative group. Conclusion: In the era of endovascular repairs for RAAs, open repair, specifically via AAR, of RAAs remains a safe treatment with low associated morbidity. RAA repair resulted in a reduction in medications for those with associated hypertension. Open repair of RAAs should be the primary treatment modality for complex RAA, with specific consideration given to those with associated hypertension. abstract_id: PUBMED:33148865 TWO CASES OF THE CALCIFIED RENAL ARTERY ANEURYSM. 
DIFFERENTIAL DIAGNOSIS AND TREATMENT (OWN EXPERIENCE). Renal artery aneurysm is defined as a dilated segment of the renal artery exceeding twice the diameter of the normal renal artery. As more than half of such aneurysms have annular calcification, they must be differentiated from stones. The differential diagnosis of renal artery aneurysm includes parapelvic cysts, hydronephrosis and kidney tumors. Renal artery aneurysm can be diagnosed by CT and MRI, as well as by ultrasonography and color Doppler imaging. Unfortunately, the presence of calcification renders ultrasonographic examination impossible. In this article we share our own experience with two cases of calcified renal artery aneurysms diagnosed and treated at our hospital. abstract_id: PUBMED:29780413 Experience with Renal Autotransplantation: Typical and Atypical Indications. Introduction And Objectives: Renal autotransplantation is a kidney-saving surgical procedure used in selected patients. The purpose of this report is to review nine typical and atypical indications for kidney autotransplantation and evaluate its effectiveness in maintaining kidney function and avoiding cancer recurrence. Materials And Methods: From 1999 to 2014, nine renal autotransplantations were performed in our center. A retrospective case review was done. Four of nine patients had a solitary functioning kidney. Typical indications for autotransplantation included extended ureteric disease in 5 patients, intrasinusal tumor on a solitary kidney in 1 patient, and renal artery aneurysm in 1 patient. Atypical indications consisted of bilateral urothelial tumors in 1 patient and an interrupted live kidney transplantation in 1 patient. Mean cold ischemia time was 209 minutes. Demographic factors, indications, renal function before and after surgery and in the long term, cancer recurrence, and disease-free survival were evaluated. Results: Renal function was maintained in 8 patients during the early follow-up. No serious complications occurred in the postoperative period. Median duration of follow-up was 50 months. In 4 patients with a normal contralateral kidney, mean preoperative and at-discharge creatinine clearances were 105.45 ml/min and 121.02 ml/min, respectively. Although the values showed an improvement in kidney function, the difference was not significant (p value 0.3). In the other 4 patients with a solitary kidney, mean discharge creatinine clearance was 99.24 ml/min, surprisingly higher than the preoperative value of 96.92 ml/min. At the last follow-up, kidney function was preserved in the two groups (normal contralateral kidney/solitary kidney), with relatively stable creatinine clearance values: 108.45 ml/min and 85.9 ml/min, respectively. No patients required secondary dialysis. Conclusion: Renal autotransplantation is a rare, safe, and effective surgical procedure for the treatment of complex urologic conditions. In some instances, it may be of great utility for kidney salvage in carefully selected patients. abstract_id: PUBMED:1593835 Renal cell carcinoma with renal artery aneurysm treated by extracorporeal surgery and autotransplantation--a report of 2 cases We experienced two cases of renal cell carcinoma with a renal artery aneurysm in the contralateral kidney. These cases have not been reported in the literature we have explored so far. In our cases, the tumor was resected first, and then extracorporeal surgery and autotransplantation were performed on the renal artery aneurysm of the opposite side.
Their postoperative courses were uneventful. Only a few instances of extracorporeal surgery and autotransplantation for renal artery aneurysm in a solitary kidney have been reported in the literature. However, no report of a case treated by extracorporeal surgery and autotransplantation for renal artery aneurysm in the contralateral kidney after surgical treatment of renal cell carcinoma has been made. Regarding the treatment of renal artery aneurysm, extracorporeal surgery and autotransplantation appeared satisfactory in our experience. It was a safe and assured surgical procedure. We recommend extracorporeal surgery and autotransplantation for renal artery aneurysm when management in situ is considered difficult. Answer: The experience with revascularizing renal artery aneurysms suggests that it is a feasible, safe, and worthwhile endeavor. Surgical revascularization for renal artery aneurysms has been shown to be rewarding, with postoperative renal function remaining stable or improved in the majority of cases, and high blood pressure being better controlled or cured in a significant number of patients. The patency in branched arteries was preserved in 86% of cases, and complications were minimal, indicating the safety of the procedure (PUBMED:9224303). Endovascular repair of visceral artery aneurysms, including renal artery aneurysms, has also been reported to be safe and effective. A high technical success rate of 96.9% was achieved, with a low 30-day mortality rate of 1.9%. This treatment significantly improved the symptoms of patients, especially those with ruptured aneurysms, further supporting the feasibility and safety of the approach (PUBMED:34714376). Laparoscopic techniques have also been explored, with laparoscopic renal artery aneurysm clipping surgery being reported as feasible and safe. This minimally invasive approach offers advantages such as a simplified surgical process, short warm ischemia time, and excellent cosmetic results (PUBMED:24767974; PUBMED:18716538). Furthermore, endovascular treatment of true renal artery aneurysms using various interventional techniques, including coil embolization and flow diverter stents, has been reported to offer high success rates and low morbidity, potentially supplanting surgery as the primary therapy (PUBMED:30272561). Open surgical repair, specifically aneurysmectomy with arterial reconstruction, remains a safe treatment with low associated morbidity, particularly for complex aneurysm anatomy and for patients with associated hypertension (PUBMED:20036510). Renal autotransplantation has also been used as a kidney-saving surgical procedure for complex urologic conditions, including renal artery aneurysm, with effectiveness in maintaining kidney function and avoiding cancer recurrence (PUBMED:29780413; PUBMED:1593835). In summary, the collective experience across different studies and techniques indicates that revascularizing renal artery aneurysms is a feasible and safe procedure that is worth attempting, with various methods available to suit individual patient needs and aneurysm characteristics.
Instruction: Picture-to-puncture time in acute stroke endovascular intervention: are we getting faster? Abstracts: abstract_id: PUBMED:24968880 Picture-to-puncture time in acute stroke endovascular intervention: are we getting faster? Background: In acute stroke management, time efficiency in the continuum of patient management is critical. We aimed to determine if implementation of system improvements at our institution translated to reduced picture-to-puncture (P2P) times over a 6-year period. Methods: We conducted a single-center retrospective analysis using a prospective acute stroke database of patients treated with intra-arterial therapy from October 2007 to October 2013. Patient demographics, stroke severity, neuroimaging and treatment time points were collected. Annual P2P times, defined as the interval between pretreatment neuroimaging (picture) and commencement of intra-arterial therapy (puncture), were assessed and compared. Results: From 2007 to 2013 a total of 189 patients were identified, of which 181 met the study criteria. At initial presentation, median baseline NIH Stroke Severity score was 17.00 (IQR 11.00-22.00). Annual median P2P times decreased from 171 to 123.5 min, showing a median decrease of 11.5 min per annum (95% CI -23.9 to 0.9) and trending towards statistical significance (p=0.069). Plotted data revealed longer P2P times in instances where stroke onset or CT acquisition times were out-of-hours. Using median regression modeling, the annual decrease in P2P median time reached statistical significance when independently adjusting for CT acquisition time (13.5 min P2P median time reduction, 95% CI -27.0 to -0.1, p=0.048) and for stroke onset time (14.5 min annual P2P median time reduction, 95% CI -26.1 to -2.8, p=0.015). Conclusions: As a consequence of systems improvement at our institution, we were able to demonstrate improved annual median P2P times from 2007 to 2013. abstract_id: PUBMED:31008303 Improving reperfusion time within the ESCAPE Endovascular Clinical Trial. Introduction: Endovascular treatment of acute ischemic stroke is more effective when performed quickly. In this report, we describe quality interventions to ensure fast endovascular treatment times in the ESCAPE (Endovascular Treatment for Small Core and Anterior circulation Proximal Occlusion with Emphasis on Minimizing CT to Recanalization Times) trial. Methods: An "audit and feedback" intervention using webinar and letter was used to improve treatment time over the course of the trial. The time metrics were computed tomography-to-groin-puncture (target < 60 min) and computed tomography-to-first-reperfusion (target < 90 min). Each site was provided with their data for computed tomography-to-groin-puncture and computed tomography-to-first-reperfusion for all their patients that were randomized to the treatment arm, and their median time was compared to the overall median times of all sites in the trial. We assessed for changes in treatment time over the course of the trial. Results: There were 165 patients enrolled into the endovascular arm from 22 sites. The computed tomography-to-groin-puncture time dropped from 57 to 47 min (p = 0.14) while computed tomography-to-reperfusion time dropped from 89 to 81 min (p = 0.48). Over the course of the trial, the absolute treatment benefit increased by 7.8% (p < 0.001). Conclusions: An "audit and feedback" intervention throughout the conduct of the ESCAPE trial was a feasible way to ensure fast treatment times. 
Quality improvement processes should continue as standard practice beyond the trial to encourage good patient selection and the best clinical outcomes. abstract_id: PUBMED:23393011 "Picture to puncture": a novel time metric to enhance outcomes in patients transferred for endovascular reperfusion in acute ischemic stroke. Background: Comprehensive stroke centers allow for regionalization of subspecialty stroke care. Efficacy of endovascular treatments, however, may be limited by delays in patient transfer. Our goal was to identify where these delays occurred and to assess the impact of such delays on patient outcome. Methods And Results: This was a retrospective study evaluating patients treated with endovascular therapy from November 2010 to July 2012 at our institution. We compared patients transferred from outside hospitals with locally treated patients with respect to demographics, imaging, and treatment times. Good outcomes, as defined by 90-day modified Rankin Scale scores of 0 to 2, were analyzed by transfer status as well as time from initial computed tomography to groin puncture ("picture-to-puncture" time). A total of 193 patients were analyzed, with a mean age of 65.8 ± 14.5 years and median National Institutes of Health Stroke Scale score of 19 (interquartile range, 15-23). More than two thirds of the patients (132 [68%]) were treated from referring facilities. Outside transfers were noted to have longer picture-to-puncture times (205 minutes [interquartile range, 162-274] versus 89 minutes [interquartile range, 70-119]; P<0.001), which was attributable to the delays in transfer. This corresponded to fewer patients with favorable Alberta Stroke Program Early CT Scores on preprocedural computed tomographic imaging (Alberta Stroke Program Early CT Scores >7: 50% versus 76%; P<0.001) and significantly worse clinical outcomes (29% versus 51%; P=0.003). In a logistic regression model, picture-to-puncture times were independently associated with good outcomes (odds ratio, 0.994; 95% confidence interval, 0.990-0.999; P=0.009). Conclusions: Delays in picture-to-puncture times for interhospital transfers reduce the probability of good outcomes among treated patients. Strategies to reduce such delays herald an opportunity for hospitals to improve patient outcomes. abstract_id: PUBMED:36299609 Association of time to groin puncture with patient outcome after endovascular therapy stratified by etiology. Background: Randomized clinical trials and large stroke registries have demonstrated a time-dependent benefit of endovascular treatment (EVT) in patients with acute ischemic stroke (AIS) due to large vessel occlusion (LVO). The aim of this study was to investigate whether this could be applied to different stroke subtypes in a real-world single-center cohort. Materials And Methods: Consecutive ischemic stroke patients with LVOs presenting within 24 h after symptom onset were prospectively registered and retrospectively assessed. Baseline multimodal imaging was conducted before EVT. Independent predictors of functional independence [90-day modified Rankin scale (mRS), 0-2] and any incidence of intracranial hemorrhage (ICH) were explored using a stepwise logistic regression model in the entire cohort and in stroke subtypes. Results: From 2015 to 2020, 140 eligible patients received EVT, of whom 59 (42%) were classified as large artery atherosclerosis (LAA)-related. 
Time from last known normal to groin puncture was identified as an independent predictor for functional independence in patients of cardioembolic (CE) subtype [odds ratio (OR) 0.90 per 10 min; 95% CI 0.82-0.98; P = 0.013] but not in the LAA subtype and the whole cohort. Groin puncture within 6 h after the time of last known normal was associated with a lower risk of any ICH in the whole cohort (OR 0.36, 95% CI 0.17-0.75, P = 0.007). Sensitivity analysis of patients with complete imaging profiles also confirmed the above findings. Besides, compared with patients of the CE subtype, the LAA subtype had a smaller baseline ischemic core volume, a better collateral status, a slower core growth rate, and a numerically smaller final infarct volume. Conclusion: Faster groin puncture has a more pronounced effect on the functional outcome in patients of CE subtype than those of LAA subtype. Reducing time to groin puncture is of great importance in improving the prognosis of patients after EVT, especially those of CE subtype, and reducing the incidence of any ICH in all patients. abstract_id: PUBMED:35622100 Time to Endovascular Reperfusion and Outcome in Acute Ischemic Stroke : A Nationwide Prospective Registry in China. Purpose: The benefit of endovascular reperfusion therapy for acute ischemic stroke is highly time-dependent but the relation of delays in workflow with outcomes and the key determinants of delays remain uncertain. This study aimed to evaluate the association between faster endovascular therapy and outcomes in a Chinese population with acute ischemic stroke. Methods: Patients treated with endovascular therapy within 7 h due to anterior large vessel occlusion were enrolled in the ANGEL-ACT registry. Time intervals from hospital arrival to arterial puncture (door-to-puncture), hospital arrival to reperfusion (door-to-reperfusion) and puncture-to-reperfusion were recorded. The outcomes included modified Rankin Scale (mRS) scores 0-1, 0-2, mortality at 3 months, substantial reperfusion, and symptomatic intracranial hemorrhage (sICH). Results: Of 932 patients receiving endovascular therapy (mean age 65.1 years, 60.1% male), the median door-to-puncture, door-to-reperfusion, and puncture-to-perfusion times were 110min (interquartile range, IQR 72-155min), 200min (IQR, 149-260min), and 76min (IQR, 50-118min). Of the patients 87.4% achieved substantial reperfusion and 9.6% had sICH. The mRS 0-1, 0-2, and mortality at 3 months were 39.8%, 43.2%, and 16.4%. Faster door-to-reperfusion and puncture-to-reperfusion were associated with higher likelihood of mRS 0-1, mRS 0-2, and lower rate of sICH. There was a trend of improved mRS, lower mortality, and fewer ICH with shorter door-to-puncture time; however, most differences were not statistically significant. Conclusion: Among patients with acute ischemic stroke in routine clinical practice, shorter time to reperfusion was associated with better outcome after endovascular therapy. Standardized workflows and training in endovascular treatment techniques should be promoted nationally to reduce in-hospital delays. abstract_id: PUBMED:35883030 Shortening door-to-puncture time and improving patient outcome with workflow optimization in patients with acute ischemic stroke associated with large vessel occlusion. 
Objective: We aimed to evaluate whether workflow optimization could shorten door-to-puncture time (DPT) and door-to-recanalization time (DRT) without neuro-interventionalist-directed care in the emergency department (ED), and whether it improved patient outcomes. Methods: Records of 98 consecutive ischemic stroke patients who had undergone endovascular therapy (EVT) between 2018 and 2021 were retrospectively reviewed in a single-center study. Patients were divided into three groups: pre-intervention (2018-2019), interim-intervention (2020), and post-intervention (January 1st, 2021 to August 16th, 2021). We compared door-to-puncture time, door-to-recanalization time (DRT), puncture-to-recanalization time (PRT), last known normal-to-puncture time (LKNPT), and patient outcomes (measured by the 3-month modified Rankin Scale) among the three groups using descriptive statistics. Results: Our findings indicate that process optimization measures could shorten DPT, DRT, PRT, and LKNPT. Median LKNPT was shortened by 70 min from 325 to 255 min (P < 0.05), and DPT was shortened by 119 min from 237 to 118 min. DRT was shortened by 132 min from 338 to 206 min, and PRT was shortened by 33 min from 92 to 59 min from the pre-intervention to the post-intervention group (all P < 0.05). Only 21.4% of patients had a favorable outcome in the pre-intervention group, as compared to 55.6% in the post-intervention group (P = 0.026). Conclusion: This study demonstrated that multidisciplinary cooperation was associated with shortened DPT, DRT, PRT, and LKNPT despite challenges posed to the healthcare system such as the COVID-19 pandemic. These practice paradigms may be transferable to other stroke centers and healthcare providers to improve endovascular time metrics and patient outcomes. abstract_id: PUBMED:32238744 Direct transfer to the angiography suite from outside hospitals to shorten the door to groin puncture time Door-to-groin-puncture time is one of the determinants of clinical outcome in patients treated with endovascular thrombectomy (EVT). We recently initiated a protocol of direct transfer to the angiography suite for patients transferred from outside hospitals. In this retrospective study, we investigated whether our new protocol had succeeded in shortening the door-to-groin-puncture time. Data on consecutive patients with an occlusion of the internal carotid artery or middle cerebral artery treated with EVT after transfer from an outside hospital between July 2012 and December 2018 were studied. Good outcome was defined as a modified Rankin Scale score (mRS) ≤1 at 3 months. Forty (46%) patients were directly transferred to the angiography suite, 27 (19%) were indirectly transferred after CT, and 20 (23%) after MRI. Onset-to-admission time was similar among the 3 groups (P = 0.711), while door-to-groin-puncture time was significantly shorter in patients directly transferred to the angiography suite compared to those transferred after CT or MRI (median 22 minutes [IQR 16-31] vs. 31 [27-40] vs. 84 [58-124], P < 0.001). The rates of reperfusion with ≥ Thrombolysis in Cerebral Infarction 2b were similar among the 3 groups (88% vs. 85% vs. 90%, P = 0.886). The incidences of symptomatic intracerebral hemorrhage were also similar: 8% vs. 4% vs. 5% (P = 0.796). At 3 months after stroke, 16 (40%) patients in the direct-transfer group, 11 (41%) of those transferred after CT, and 9 (45%) of those transferred after MRI had a good outcome (P = 0.931). Direct transfer to the angiography suite can safely shorten the onset-to-groin-puncture time.
abstract_id: PUBMED:29146873 Mobile Interventional Stroke Teams Lead to Faster Treatment Times for Thrombectomy in Large Vessel Occlusion. Background And Purpose: Endovascular recanalization treatment for acute ischemic stroke is a complex, time-sensitive intervention. Trip-and-treat is an interhospital service delivery model that has not previously been evaluated in the literature and consists of a shared mobile interventional stroke team that travels to primary stroke centers to provide on-site interventional capability. We compared treatment times between the trip-and-treat model and the traditional drip-and-ship model. Methods: We performed a retrospective analysis on 86 consecutive eligible patients with acute ischemic stroke secondary to large vessel occlusion who received endovascular treatment at 4 hospitals in Manhattan. Patients were divided into 2 cohorts: trip-and-treat (n=39) and drip-and-ship (n=47). The primary outcome was initial door-to-puncture time, defined as the time between arrival at any hospital and arterial puncture. We also recorded and analyzed the times of last known well, IV-tPA (intravenous tissue-type plasminogen activator) administration, transfer, and reperfusion. Results: Mean initial door-to-puncture time was 143 minutes for trip-and-treat and 222 minutes for drip-and-ship (P<0.0001). Although there was a trend in longer puncture-to-recanalization times for trip-and-treat (P=0.0887), initial door-to-recanalization was nonetheless 79 minutes faster for trip-and-treat (P<0.0001). There was a trend in improved admission-to-discharge change in National Institutes of Health Stroke Scale for trip-and-treat compared with drip-and-ship (P=0.0704). Conclusions: Compared with drip-and-ship, the trip-and-treat model demonstrated shorter treatment times for endovascular therapy in our series. The trip-and-treat model offers a valid alternative to current interhospital stroke transfers in urban environments. abstract_id: PUBMED:31310296 Association Between Time to Treatment With Endovascular Reperfusion Therapy and Outcomes in Patients With Acute Ischemic Stroke Treated in Clinical Practice. Importance: Randomized clinical trials suggest benefit of endovascular-reperfusion therapy for large vessel occlusion in acute ischemic stroke (AIS) is time dependent, but the extent to which it influences outcome and generalizability to routine clinical practice remains uncertain. Objective: To characterize the association of speed of treatment with outcome among patients with AIS undergoing endovascular-reperfusion therapy. Design, Setting, And Participants: Retrospective cohort study using data prospectively collected from January 2015 to December 2016 in the Get With The Guidelines-Stroke nationwide US quality registry, with final follow-up through April 15, 2017. Participants were 6756 patients with anterior circulation large vessel occlusion AIS treated with endovascular-reperfusion therapy with onset-to-puncture time of 8 hours or less. Exposures: Onset (last-known well time) to arterial puncture, and hospital arrival to arterial puncture (door-to-puncture time). Main Outcomes And Measures: Substantial reperfusion (modified Thrombolysis in Cerebral Infarction score 2b-3), ambulatory status, global disability (modified Rankin Scale [mRS]) and destination at discharge, symptomatic intracranial hemorrhage (sICH), and in-hospital mortality/hospice discharge. 
Results: Among 6756 patients, the mean (SD) age was 69.5 (14.8) years, 51.2% (3460/6756) were women, and median pretreatment score on the National Institutes of Health Stroke Scale was 17 (IQR, 12-22). Median onset-to-puncture time was 230 minutes (IQR, 170-305) and median door-to-puncture time was 87 minutes (IQR, 62-116), with substantial reperfusion in 85.9% (5433/6324) of patients. Adverse events were sICH in 6.7% (449/6693) of patients and in-hospital mortality/hospice discharge in 19.6% (1326/6756) of patients. At discharge, 36.9% (2132/5783) ambulated independently and 23.0% (1225/5334) had functional independence (mRS 0-2). In onset-to-puncture adjusted analysis, time-outcome relationships were nonlinear with steeper slopes between 30 to 270 minutes than 271 to 480 minutes. In the 30- to 270-minute time frame, faster onset to puncture in 15-minute increments was associated with higher likelihood of achieving independent ambulation at discharge (absolute increase, 1.14% [95% CI, 0.75%-1.53%]), lower in-hospital mortality/hospice discharge (absolute decrease, -0.77% [95% CI, -1.07% to -0.47%]), and lower risk of sICH (absolute decrease, -0.22% [95% CI, -0.40% to -0.03%]). Faster door-to-puncture times were similarly associated with improved outcomes, including in the 30- to 120-minute window, higher likelihood of achieving discharge to home (absolute increase, 2.13% [95% CI, 0.81%-3.44%]) and lower in-hospital mortality/hospice discharge (absolute decrease, -1.48% [95% CI, -2.60% to -0.36%]) for each 15-minute increment. Conclusions And Relevance: Among patients with AIS due to large vessel occlusion treated in routine clinical practice, shorter time to endovascular-reperfusion therapy was significantly associated with better outcomes. These findings support efforts to reduce time to hospital and endovascular treatment in patients with stroke. abstract_id: PUBMED:38246030 Streamlined workflow including nurse recognition of conjugate gaze deviation for reduced door-to-puncture time in endovascular thrombectomy: A retrospective study. Background: Endovascular thrombectomy is recognized as a pivotal treatment for acute ischemic stroke due to large vessel occlusion. Prolonged door-to-puncture time correlates with decreased patient independence after acute ischemic stroke. This study aimed to assess whether a streamlined workflow, including nurse recognition of conjugate gaze deviation, could reduce door-to-puncture time in endovascular thrombectomy. Methods: This study retrospectively reviewed patients with acute ischemic stroke who underwent endovascular thrombectomy between March 2017 and March 2022 and compared a previous workflow with a streamlined workflow implemented in April 2019. In the streamlined workflow, nurses recognized conjugate gaze deviation to identify patients with large vessel occlusions and played a more active role in reducing the door-to-puncture time. We compared time metrics and outcomes, including recanalization status, parenchymal hemorrhage type 2, and favorable outcomes (modified Rankin Scale score 0-2) at three months between the previous and streamlined workflow groups. Results: After the application of the streamlined workflow, the door-to-puncture time was reduced from 76 min to 68 min (p = 0.014), and the number of patients with a door-to-puncture time of less than 60 min increased (15% vs. 36%, p = 0.002). Outcomes including modified thrombolysis in cerebral infarction ≥ 2b (73% vs. 71%, p = 1.000), parenchymal hemorrhage type 2 (7% vs. 
2%, p = 0.281), and favorable outcome (33% vs. 34%, p = 1.000) were comparable between the two groups. Conclusion: Nurse recognition of conjugate gaze deviation contributed to an 8-minute reduction in the door-to-puncture time, demonstrating the potential benefits of an organized workflow in acute ischemic stroke. Answer: The collection of abstracts provided suggests that there have been efforts and improvements in reducing the picture-to-puncture (P2P) time in acute stroke endovascular intervention, indicating that we are indeed getting faster. One study showed that annual median P2P times decreased from 171 to 123.5 minutes over a 6-year period, with a median decrease of 11.5 minutes per annum, trending towards statistical significance (PUBMED:24968880). Another study reported that the implementation of an "audit and feedback" intervention during the ESCAPE trial helped to reduce computed tomography-to-groin-puncture time from 57 to 47 minutes and computed tomography-to-reperfusion time from 89 to 81 minutes (PUBMED:31008303). A retrospective study highlighted that delays in P2P times for interhospital transfers reduce the probability of good outcomes among treated patients, suggesting that strategies to reduce such delays could improve patient outcomes (PUBMED:23393011). Similarly, another study found that faster groin puncture has a more pronounced effect on the functional outcome in patients of cardioembolic subtype than those of large artery atherosclerosis subtype, emphasizing the importance of reducing time to groin puncture (PUBMED:36299609). The ANGEL-ACT registry in China associated shorter time to reperfusion with better outcomes after endovascular therapy, advocating for standardized workflows and training to reduce in-hospital delays (PUBMED:35622100). Workflow optimization in a single-center study demonstrated that multidisciplinary cooperation was associated with shortened door-to-puncture time and improved patient outcomes (PUBMED:35883030). Direct transfer to the angiography suite from outside hospitals was shown to shorten the door to groin puncture time (PUBMED:32238744), and the "trip-and-treat" model was found to lead to faster treatment times compared to the traditional "drip-and-ship" model (PUBMED:29146873). A retrospective cohort study using data from the Get With The Guidelines-Stroke registry also supported the association of shorter time to endovascular-reperfusion therapy with better outcomes (PUBMED:31310296). Lastly, a study found that a streamlined workflow, including nurse recognition of conjugate gaze deviation, could reduce door-to-puncture time in endovascular thrombectomy (PUBMED:38246030). In summary, the evidence from these studies suggests that there have been improvements in reducing P2P times in acute stroke endovascular intervention, and these reductions are associated with better patient outcomes.
Instruction: Does obesity increase early postoperative complications after laparoscopic colorectal surgery? Abstracts: abstract_id: PUBMED:37165256 The impact of sarcobesity on incisional hernia after laparoscopic colorectal cancer surgery. Purpose: Incisional hernia is a common complication after abdominal surgery, especially in obese patients. The aim of the present study was to evaluate the relationship between sarcobesity and incisional hernia development after laparoscopic colorectal cancer surgery. Methods: In total, 262 patients who underwent laparoscopic colorectal cancer surgery were included in the present study. Univariate and multivariate analyses were performed to evaluate the independent risk factors for the development of incisional hernia. We then performed subgroup analyses to assess the impact of visceral obesity according to clinical variables on the development of incisional hernia in laparoscopic surgery for colorectal cancer surgery. Results: Forty-four patients (16.8%) developed incisional hernias after laparoscopic colorectal cancer surgery. In the univariate analysis, the development of incisional hernia was significantly associated with female sex (P = 0.046), subcutaneous obesity (P = 0.002), visceral obesity (P = 0.002), sarcobesity (P < 0.001), and wound infection (P < 0.001). In the multivariate analysis, sarcobesity (P < 0.001) and wound infection (P < 0.001) were independent predictors of incisional hernia. In subgroup analysis, the odds ratio of visceral obesity was the highest (13.1; 95% confidence interval [CI], 4.51-37.8, P < 0.001) in the subgroup of sarcopenia. Conclusion: Sarcobesity may be a strong predictor of the development of incisional hernia after laparoscopic surgery for colorectal cancer, suggesting the importance of body composition in the development of incisional hernia. abstract_id: PUBMED:27778169 Laparoscopic colorectal cancer resections in the obese: a systematic review. Background: Laparoscopic colorectal cancer operations in obese patients pose technical challenges that may negatively impact oncologic adequacy. A meta-analysis was performed to assess the oncologic outcomes of laparoscopic colorectal cancer resections in obese compared to non-obese patients. Short- and long-term outcomes were assessed secondarily. Methods: A systematic literature search was conducted from inception until October 01, 2014. Studies that compared outcomes of laparoscopic colorectal malignant neoplasms in obese and non-obese patients were selected for meta-analysis. Studies that defined obesity as body mass index (BMI) ≥ 30 kg/m2 were included. Oncologic, operative, and postoperative outcomes were evaluated. Pooled odds ratios (OR) and weighted mean differences (WMD) with 95 % confidence intervals (CI) were calculated using fixed-effects models. For oncologic and survival outcomes, a subgroup analysis was conducted for rectal cancer and a secondary analysis was conducted for Asian studies that used a BMI cutoff of 25 kg/m2. Results: Thirteen observational studies with a total of 4550 patients were included in the meta-analysis. Lymph node retrieval, distal, and circumferential margins, and 5-year disease-free and overall survival were similar in the obese and non-obese groups. Conversion rate (OR 2.11, 95 % CI 1.58-2.81), postoperative morbidity (OR 1.54, 95 % CI 1.21-1.97), wound infection (OR 2.43, 95 % CI 1.46-4.03), and anastomotic leak (OR 1.65, 95 % CI 1.01-2.71) were all significantly increased in the obese group. 
Conclusions: Laparoscopic colorectal cancer operations in obese patients pose an increased technical challenge as demonstrated by higher conversion rates and higher risk of postoperative complications compared to non-obese patients. Despite these challenges, oncologic adequacy of laparoscopic colorectal cancer resections is comparable in both groups. abstract_id: PUBMED:28900886 The effect of obesity on laparoscopic and robotic-assisted colorectal surgery outcomes: an ACS-NSQIP database analysis. Advantages of robotic-assisted colorectal surgery have been reported, but the effect on outcomes between obese and non-obese patients undergoing laparoscopic and robotic-assisted colorectal surgery remains unclear. Patients who underwent elective laparoscopic and robotic colon or rectal resections between 2012 and 2014 were identified in the ACS-NSQIP database. Propensity score matching was performed to determine the effect of obesity on laparoscopic and robotic-assisted 30-day surgical outcomes. 29,172 patients met inclusion criteria; 27,693 (94.9%) underwent laparoscopic colorectal surgery while 1479 (5.1%) underwent robotic-assisted surgery. Mean BMI was 28.4 kg/m2 and 35% of patients had a BMI ≥30 kg/m2. A 10-to-1 propensity matching of laparoscopic to robotic approaches was performed, resulting in 14,770 (90.9%) laparoscopic patients and 1477 (9.1%) robotic-assisted patients available for analysis. Robotic-assisted surgery was associated with lower conversion to laparotomy (2.4 vs 3.4%; p = 0.04) and decreased length-of-stay (4.5±3.2 vs 5.1±4.5 days; p < 0.0001). After adjusting for BMI and surgical approach, obese patients undergoing robotic-assisted surgery had a reduced odds ratio for developing prolonged ileus (p = 0.03). Robotic-assisted colorectal surgery is associated with fewer conversions to laparotomy and shorter length-of-stays compared to laparoscopic surgery. Risk of prolonged ileus is significantly reduced in obese patients undergoing a robotic-assisted approach. abstract_id: PUBMED:25902904 Impact of visceral obesity on outcomes of laparoscopic colorectal surgery: a meta-analysis. Background: Excessive visceral fat could influence surgical difficulty of laparoscopic colorectal surgery. With the use of visceral fat area measured by computed tomography, surgeons could quantify the amount of visceral fat. The aim of the present meta-analysis is to quantitatively combine studies in order to determine the impact of visceral obesity on laparoscopic colorectal surgery. Methods: A systematic search of literature (PubMed, EMBASE, the Cochrane Library) was performed to identify studies evaluating the impact of visceral obesity on laparoscopic colorectal surgery. Results: Four studies were included in this meta-analysis. Meta-analysis revealed that visceral obesity was associated with longer operative time (weighted mean difference (WMD) 24.42, 95% confidence interval (CI) 12.98-35.86), less lymph nodes harvest (WMD -5.99, 95% CI -8.31 to -3.67), more conversion to open procedure (odds ratio (OR) 2.24, 95% CI 1.05-4.78), higher morbidity (OR 2.33, 95% CI 1.56-3.48), more surgical site infection (OR 3.22, 95% CI 1.95-5.32) and more anastomotic leakage (OR 2.40, 95% CI 1.06-5.44). Conclusion: The present study shows that visceral obesity is associated with increased surgical difficulty and post-operative morbidity of laparoscopic colorectal surgery but does not affect post-operative recovery thanks to laparoscopic procedure. 
abstract_id: PUBMED:36684184 Advantage of laparoscopic surgery in patients with generalized obesity operated for colorectal malignancy: A retrospective cohort study. Background: Because of the progression of minimally invasive surgical skills and the increasing prevalence of obesity in colorectal surgery, we aimed to evaluate the short-term outcomes of colorectal cancer resections in patients with generalized obesity at a single teaching hospital with mature surgical techniques and training programs. Methods: A total of 537 patients were diagnosed with CRC and had a body mass index ≥30 kg/m2 between January 2009 and December 2019 at a single institution. Of these, 265 patients underwent open surgery and 272 patients underwent laparoscopic surgery. Data were analysed to explore the independent risk factors for postoperative complications. Results: The laparoscopic group had less blood loss (73 ± 128 vs. 148 ± 290 ml, p < 0.001) and a shorter postoperative hospital stay (10.8 ± 17.1 vs. 11.7 ± 6.8 days, p < 0.001) than the open group. The number of harvested lymph nodes did not significantly differ between the two groups (30.9 ± 18.3 vs. 30.2 ± 15.3, p = 0.981). Although anastomotic leakage was significantly higher in the laparoscopic group (1.5% vs. 4.8%, p = 0.030), overall postoperative morbidity and mortality rates were similar between the open and laparoscopic groups for CRC patients with generalized obesity who underwent surgery. Conclusion: Laparoscopic surgery can reduce blood loss, decrease the length of hospital stay, obtain a similar number of harvested lymph nodes, and achieve an acceptable conversion rate for CRC patients with generalized obesity. We suggest that laparoscopic surgery could become a standard method for CRC treatment in patients with generalized obesity. abstract_id: PUBMED:35330491 The Role of Preoperative Platelet-to-Lymphocyte Ratio as a Predictor for Incisional Hernias after Hand-Assisted Laparoscopic Liver Surgery for Metastatic Colorectal Cancer. (1) Background: Hand-assisted laparoscopic surgery for liver resection is a globally established technique. In this study, we report on the incidence and risk factors for postoperative incisional hernia (IH) after hand-assisted laparoscopic surgery for colorectal liver metastasis. (2) Methods: This was a retrospective analysis of 89 consecutive hand-assisted laparoscopic operations for colorectal liver metastasis. (3) Results: Participants were 39 females and 50 males. Median age was 65 years, and in 63% the BMI was ≥25. Postoperative complications were encountered in 18% of the patients. Seven patients (7.8%) had a postoperative incisional hernia at the hand-port site. There was a significantly higher incidence of incisional hernia in overweight patients (BMI ≥ 25) (p = 0.04) and in cases with simultaneous liver and colon resection (p = 0.02). In univariate and multivariate analyses, simultaneous liver and colon resection (p = 0.004 and 0.03, respectively) and platelet-to-lymphocyte ratio ≤ 200 (p = 0.03 and 0.04, respectively) were both independent risk factors for developing postoperative incisional hernia. (4) Conclusions: Both simultaneous liver and colon resection and platelet-to-lymphocyte ratio ≤ 200 are independent risk factors for postoperative incisional hernia after hand-assisted laparoscopic surgery for colorectal liver metastasis. abstract_id: PUBMED:11965460 Systematic review of laparoscopic surgery for colorectal malignancy.
Background: We set out to evaluate the current literature on the use of laparoscopic surgery for malignant colorectal disease and identify its place in current practice. Methods: We performed a systematic review of the literature that included papers published from January 1991 to March 1997. Inclusion criteria were used to select the most robust studies. The quality of each study was assessed against predefined criteria and weighted according to hierarchy of evidence and sample size. Results: The published literature was found to be low in the hierarchy of evidence. Of the 157 studies located, only 42 papers were found to be of sufficiently high quality to be included in the review. Only 13 of these 42 papers concentrated specifically on malignant colorectal disease. We noted the following findings: The average age of patients was 63.2 years. The most frequently reported contraindication to laparoscopic surgery was obesity. Conversion rate to open procedure varied between 1.5% and 48%. The most common postoperative complication was wound infection. Thirty-day mortality varied between 0 and 5.1%. Disease stage and pathology were poorly reported. Patient recovery benefits varied. It was not possible to identify the effectiveness, safety, survival outcome, or cost of laparoscopic surgery for colorectal malignancy in comparison to open surgery. Conclusions: Six years after the first report of laparoscopic surgery, studies low in the hierarchy of evidence continue to be reported. This reflects a lack of control following the introduction of this new technology. The majority of reports continue to be feasibility studies. The variable nature and content of the literature demonstrate the lack of standardization and the absence of an agreed core minimum data set. The benefit of laparoscopic surgery for malignant colorectal disease remains unclear. Until the mechanisms of port site recurrences are elucidated and long-term data on survival outcomes become available, laparoscopic surgery for malignant disease should be carried out only in the context of a large, multicenter randomized controlled trial. abstract_id: PUBMED:33924366 Video-Laparoscopic versus Open Surgery in Obese Patients with Colorectal Cancer: A Propensity Score Matching Study. Background: Minimally invasive surgery in obese patients is still challenging, so exploring one more item in this research field ranks among the main goals of this research. We aimed to compare short-term postoperative outcomes of open and video-laparoscopic (VL) approaches in CRC obese patients undergoing colorectal resection. Methods: We performed a retrospective analysis of a surgical database including 138 patients diagnosed with CRC, undergoing VL (n = 87, 63%) and open (n = 51, 37%) colorectal surgery. As a first step, propensity score matching was performed to balance the comparison between the two intervention groups (VL and open) in order to avoid selection bias. The matched sample (N = 98) was used to run further regression models in order to analyze the observed VL surgery advantages in terms of postoperative outcome, focusing on hospitalization and severity of postoperative complications, according to the Clavien-Dindo classification. Results: The study sample was predominantly male (N = 86, 62.3%), and VL was more frequent than open surgery (63% versus 37%). The two subgroup results obtained before and after the propensity score matching showed comparable findings for age, gender, BMI, and tumor staging. 
The specimen length and postoperative time before discharge were longer in open surgery (OS) patients; the number of harvested lymph nodes was higher than in VL patients as well (p < 0.01). Linear regression models applied separately on the outcomes of interest showed that VL-treated patients had a shorter hospital stay by almost two days and about one point less Clavien-Dindo severity than OS patients on average, given the same exposure to confounding variables. Tumor staging was not found to have a significant role in influencing the short-term outcomes investigated. Conclusion: Comparing open and VL surgery, improved postoperative outcomes were observed for VL surgery in obese patients after surgical resection for CRC. Both postoperative recovery time and Clavien-Dindo severity were better with VL surgery. abstract_id: PUBMED:33111467 Impact of obesity on short-term outcomes of laparoscopic colorectal surgery for Japanese patients with colorectal cancer: A multicenter study. Introduction: The impact of obesity on short-term outcomes after laparoscopic colorectal surgery (LAC) in Asian patients is unclear. The purpose of the present multicenter study was to evaluate the safety and feasibility of LAC in obese Japanese patients. Methods: We retrospectively reviewed 1705 patients who underwent LAC between April 2016 and February 2019. Patients were classified according to body mass index (BMI): non-obese (BMI < 25 kg/m2 , n = 1335), obese I (BMI 25-29.9 kg/m2 , n = 313), and obese II (BMI ≥30 kg/m2 , n = 57). Clinical characteristics and surgical outcomes were compared among the three groups. Results: The proportion of patients with comorbidities (non-obese, 58.1%; obese I, 69.6%; obese II, 75.4%; P < .001) and median operation time (non-obese, 224 minutes; obese I, 235 minutes; obese II, 258 minutes; P = .004) increased significantly as BMI increased. The conversion rate was similar among the groups (P = .715). Infectious complications were significantly high in obese II patients (non-obese, 10.4%; obese I, 8.3%; obese II, 28.1%; P < .001). Multivariate analysis revealed that in obese II patients, BMI was an independent predictive factor of infectious postoperative complications (odds ratio 2.648; 95% confidence interval, 1.421-4.934; P = .002). Conclusion: LAC has an increased risk of postoperative infectious complications in obese II patients, despite improvements in surgical technique. Management of obese II colorectal cancer patients requires meticulous perioperative management. abstract_id: PUBMED:20039065 The impact of obesity on outcomes of laparoscopic surgery for colorectal cancer in Asians. Background: The influence of obesity on surgical outcomes after laparoscopic colorectal surgery in Asian patients is unclear. The aim of this study was to evaluate the feasibility and safety of laparoscopic surgery in obese Asian patients with colorectal cancer. Methods: We retrospectively reviewed the prospectively collected data on 984 consecutive patients who underwent laparoscopic surgery for colorectal cancer between May 2001 and February 2008. Patients were classified according to the categories proposed by the International Obesity Task Force, as Nonobese (body mass index [BMI] < 25.0 kg/m(2)), Obese I (BMI 25.0-29.9 kg/m(2)), and Obese II (BMI > or = 30 kg/m(2)). Surgical outcomes, including open conversion, operation time, postoperative complications, estimated blood loss, and postoperative hospital stay, were compared in Nonobese, Obese I, and Obese II patients. 
Results: Of the 984 patients, 645 (65.5%), 312 (31.7%), and 27 (2.7%), were classified as Nonobese, Obese I, and Obese II, respectively. Clinicopathologic characteristics were similar among the three groups. The Obese II group had higher conversion rates (14.8% versus 2.6% and 2.9%, P = 0.001), longer operation time (258 versus 201 and 215 min, P = 0.001), and longer postoperative hospital stay (12.1 versus 9.5 and 9.2 days, P = 0.035) than the Nonobese and Obese I groups. However, the rates of intraoperative events (P = 0.634) and postoperative complications (P = 0.603) were similar in nonobese and obese patients. Independent risk factors for conversion were BMI group and operation type. Obese II patients had an 8.36-fold greater risk of conversion than had Nonobese patients (P = 0.001). Conclusions: With sufficient experience, laparoscopic colorectal surgery in obese Asian patients is feasible and safe, offering all the benefits of a minimally invasive approach. Management of Asian colorectal cancer patients with BMI > or = 30 kg/m(2) requires meticulous perioperative care, and colorectal surgeons must be familiar with obesity-related challenges in such patients. Answer: Yes, obesity does increase early postoperative complications after laparoscopic colorectal surgery. Studies have shown that obese patients have a higher risk of postoperative morbidity, including wound infection, anastomotic leak, and other complications compared to non-obese patients (PUBMED:27778169). Specifically, obese patients have been found to have significantly increased conversion rates to open surgery, postoperative morbidity, wound infection, and anastomotic leak (PUBMED:27778169; PUBMED:28900886; PUBMED:25902904). Additionally, in obese patients, the risk of postoperative infectious complications increases, despite improvements in surgical technique, necessitating meticulous perioperative management (PUBMED:33111467). Furthermore, in Asian populations, obese patients have been reported to have higher conversion rates, longer operation times, and longer postoperative hospital stays (PUBMED:20039065). However, it is worth noting that with sufficient experience and careful perioperative care, laparoscopic colorectal surgery in obese patients can still be feasible and safe (PUBMED:20039065).
Instruction: Should we treat auditory hallucinations with repetitive transcranial magnetic stimulation? Abstracts: abstract_id: PUBMED:26548388 Transcranial magnetic stimulation used in psychiatry Repetitive transcranial magnetic stimulation is a non-invasive treatment technique, using electromagnetism properties. It has been used for around twenty years in neurology (treatment of neuropathic pain, certain abnormal movements, Parkinson's disease), and in psychiatry (obsessive compulsive disorder, hallucinations, mood disorders, etc.). The presence and support of a nurse during the sessions is essential. abstract_id: PUBMED:20061953 Repetitive transcranial magnetic stimulation for refractory symptoms in schizophrenia. Purpose Of Review: Schizophrenia is an illness associated with a substantial degree of treatment resistance and suboptimal therapeutic response. In recent years, novel brain stimulation technologies have been identified as potential treatments for schizophrenia and related disorders. Several published studies have assessed the use of repetitive transcranial magnetic stimulation (rTMS) in patients with schizophrenia. Recent Findings: Most published studies have focused on the use of low-frequency rTMS to treat refractory auditory hallucinations. These studies support the efficacy of stimulation over the temporoparietal cortex. Several other studies have assessed high-frequency stimulation of the prefrontal cortex in the treatment of negative symptoms. Novel protocols to treat auditory hallucinations have been piloted and case reports are emerging on the use of maintenance rTMS to treat auditory hallucinations. Summary: Overall, rTMS studies have demonstrated some promise in the treatment of schizophrenia. However, more research is required to delineate the role of this technique in clinical practice and to explore novel stimulation techniques that may ultimately lead to improved therapeutic efficacy. abstract_id: PUBMED:25206578 Repetitive transcranial magnetic stimulation for hallucination in schizophrenia spectrum disorders: A meta-analysis. Objective: This study assessed the efficacy and tolerability of repetitive transcranial magnetic stimulation for treatment of auditory hallucination of patients with schizophrenia spectrum disorders. Data Sources: Online literature retrieval was conducted using PubMed, ISI Web of Science, EMBASE, Medline and Cochrane Central Register of Controlled Trials databases from January 1985 to May 2012. Key words were "transcranial magnetic stimulation", "TMS", "repetitive transcranial magnetic stimulation", and "hallucination". Study Selection: Selected studies were randomized controlled trials assessing therapeutic efficacy of repetitive transcranial magnetic stimulation for hallucination in patients with schizophrenia spectrum disorders. Experimental intervention was low-frequency repetitive transcranial magnetic stimulation in left temporoparietal cortex for treatment of auditory hallucination in schizophrenia spectrum disorders. Control groups received sham stimulation. Main Outcome Measures: The primary outcome was total scores of Auditory Hallucinations Rating Scale, Auditory Hallucination Subscale of Psychotic Symptom Rating Scale, Positive and Negative Symptom Scale-Auditory Hallucination item, and Hallucination Change Scale. Secondary outcomes included response rate, global mental state, adverse effects and cognitive function. 
Results: Seventeen studies addressing repetitive transcranial magnetic stimulation for treatment of schizophrenia spectrum disorders were screened, with controls receiving sham stimulation. All data were completely effective, involving 398 patients. Overall mean weighted effect size for repetitive transcranial magnetic stimulation versus sham stimulation was statistically significant (MD = -0.42, 95%CI: -0.64 to -0.20, P = 0.000 2). Patients receiving repetitive transcranial magnetic stimulation responded more frequently than sham stimulation (OR = 2.94, 95%CI: 1.39 to 6.24, P = 0.005). No significant differences were found between active repetitive transcranial magnetic stimulation and sham stimulation for positive or negative symptoms. Compared with sham stimulation, active repetitive transcranial magnetic stimulation had equivocal outcome in cognitive function and commonly caused headache and facial muscle twitching. Conclusion: Repetitive transcranial magnetic stimulation is a safe and effective treatment for auditory hallucination in schizophrenia spectrum disorders. abstract_id: PUBMED:32521380 Efficacy of repetitive transcranial magnetic stimulation on auditory hallucinations in schizophrenia: A meta-analysis. To evaluate the efficacy of repetitive transcranial magnetic stimulation (rTMS) on auditory hallucinations (AH) in schizophrenia, we conducted a meta-analysis of currently available randomized control studies (RCTs). Electronic databases were searched to identify relevant literatures. Only RCTs that met the inclusion criteria were enrolled for further analysis. Standard mean difference (SMD) and 95% confidence interval (CI) values were used to evaluate the effects of rTMS. The overall robustness of the results was assessed by analyzing the influence of single studies. Publication bias was analyzed using funnel plots. Eleven eligible studies were included in this meta-analysis. Auditory hallucinations improved more in the rTMS group than in the sham group (SMD = -0.27, 95%CI = -0.51 to -0.03). However, this result was not stable after sensitivity analysis. Despite a moderate effect for rTMS on AH, future definitive trials of rTMS with rigorous processes and high-quality reporting are needed. abstract_id: PUBMED:26351619 Efficacy of Transcranial Magnetic Stimulation (TMS) in the Treatment of Schizophrenia: A Review of the Literature to Date. We reviewed the literature on transcranial magnetic stimulation and its uses and efficacy in schizophrenia. Multiple sources were examined on transcranial magnetic stimulation efficacy in relieving positive and negative symptoms of schizophrenia. Literature review was conducted via Ovid Medline and PubMed databases. We found multiple published studies and metaanalyses that give evidence that repetitive transcranial magnetic stimulation can have benefit in relieving positive and negative symptoms of schizophrenia, particularly auditory hallucinations. These findings should encourage the psychiatric community to expand research into other applications for which transcranial magnetic stimulation may be used to treat patients with psychiatric disability. abstract_id: PUBMED:28315614 Repetitive transcranial magnetic stimulation for treating the symptoms of schizophrenia: A PRISMA compliant meta-analysis. Objective: To explore the efficacies of 1-Hz (low frequency) and 10-Hz (high frequency) repetitive transcranial magnetic stimulation (rTMS) in treating auditory hallucinations and negative symptoms of schizophrenia, respectively. 
Methods: Electronic databases were searched to identify relevant literature. Standard mean difference (SMD) and 95% confidence interval (CI) values were used to evaluate the effects of rTMS. The stability and sensitivity of the results, the source of heterogeneity, and the recommended grade of the evidence were also analyzed. Results: Thirteen studies of 1-Hz rTMS were included. The auditory hallucinations improved more in the rTMS group than in the sham group (SMD=-0.29, 95%CI=-0.57 to -0.01). However, this result was not stable after sensitivity analysis, and publication bias had a substantial impact on the results. Meta-analysis performed for seven studies of 10-Hz rTMS found that improvement of negative symptoms did not differ significantly between the real rTMS and sham groups. Finally, the grade of evidence for this meta-analysis was found to be low. Conclusion: Although there may appear to be a therapeutic effect for 1-Hz rTMS on auditory hallucinations of schizophrenia, this needs to be confirmed by large-scale randomized controlled trials before this finding can be recommended in clinical practice. Significance: 1-Hz rTMS might have an effect on auditory hallucinations of schizophrenia. abstract_id: PUBMED:33844154 Repetitive Transcranial Magnetic Stimulation as a Therapeutic and Probe in Schizophrenia: Examining the Role of Neuroimaging and Future Directions. Schizophrenia is a complex condition associated with perceptual disturbances, decreased motivation and affect, and disrupted cognition. Individuals living with schizophrenia may experience myriad poor outcomes, including impairment in independent living and function as well as decreased life expectancy. Though existing treatments may offer benefit, many individuals still experience treatment resistant and disabling symptoms. In light of the negative outcomes associated with schizophrenia and the limitations in currently available treatments, there is a significant need for novel therapeutic interventions. Repetitive transcranial magnetic stimulation (rTMS) is a non-invasive brain stimulation technique that can modulate the activity of discrete cortical regions, allowing direct manipulation of local brain activation and indirect manipulation of the target's associated neural networks. rTMS has been studied in schizophrenia for the treatment of auditory hallucinations, negative symptoms, and cognitive deficits, with mixed results. The field's inability to arrive at a consensus on the use rTMS in schizophrenia has stemmed from a variety of issues, perhaps most notably the significant heterogeneity amongst existing trials. In addition, it is likely that factors specific to schizophrenia, rather than the rTMS itself, have presented barriers to the interpretation of existing results. However, advances in approaches to rTMS as a biologic probe and therapeutic, many of which include the integration of neuroimaging with rTMS, offer hope that this technology may still play a role in improving the understanding and treatment of schizophrenia. abstract_id: PUBMED:24023548 Use of repetitive transcranial magnetic stimulation for treatment in psychiatry. The potential of noninvasive neurostimulation by repetitive transcranial magnetic stimulation (rTMS) for improving psychiatric disorders has been studied increasingly over the past two decades. This is especially the case for major depression and for auditory-verbal hallucinations in schizophrenia. 
The present review briefly describes the background of this novel treatment modality and summarizes evidence from clinical trials into the efficacy of rTMS for depression and hallucinations. Evidence for efficacy in depression is stronger than for hallucinations, although a number of studies have reported clinically relevant improvements for hallucinations too. Different stimulation parameters (frequency, duration, location of stimulation) are discussed. There is a paucity of research into other psychiatric disorders, but initial evidence suggests that rTMS may also hold promise for the treatment of negative symptoms in schizophrenia, obsessive compulsive disorder and post-traumatic stress disorder. It can be concluded that rTMS induces alterations in neural networks relevant for psychiatric disorders and that more research is needed to elucidate efficacy and underlying mechanisms of action. abstract_id: PUBMED:25598827 Bilateral Repetitive Transcranial Magnetic Stimulation for Auditory Hallucinations in Patients with Schizophrenia: A Randomized Controlled, Cross-over Study. Objective: A randomized double-blind cross-over trial was conducted in patients with persistent auditory hallucinations (AHs) to investigate whether bilateral repetitive transcranial magnetic stimulation (rTMS) at the temporoparietal area or Broca's area is more effective at high- or low-frequencies compared to a sham condition. Methods: Twenty three patients with persistent AHs who remained stable on the same medication for 2 months were enrolled. They were randomized to one of four conditions: low-frequency (1 Hz)-rTMS to the temporoparietal area (L-TP), high-frequency (20 Hz)-rTMS to the temporoparietal area (H-TP), high-frequency (20 Hz)-rTMS to Broca's area (H-B), or sham. Results: All the four rTMS conditions resulted in significant decrease in the scores under the auditory hallucination rating scale and hallucination change scale over time. However, there were no significant treatment effects or interaction between time and treatment, suggesting no superior effects of the new paradigms over the sham condition. Conclusion: Our findings suggest that bilateral rTMS at the temporoparietal area or Broca's area with high- or low-frequency does not produce superior effects in reducing AHs compared to sham stimulation. abstract_id: PUBMED:21941181 Repetitive transcranial magnetic stimulation in the treatment of auditory hallucinations in schizophrenic patients. Purpose Of Review: Since 1999, most studies have confirmed the initial positive results in the treatment of auditory verbal hallucination (AVH) with repetitive transcranial magnetic stimulation (rTMS) targeted to the classical site in the left temporo-parietal region. However, recent literature has tempered the initial interest in this treatment, requiring a new review on this topic. Recent Findings: From the four meta-analyses, the latest reported a moderate effect size of 0.54. Two recent controlled studies, not included in the meta-analyses, failed to observe a significant improvement of AVH after 1 Hz rTMS. While almost all trials have studied the effects of low-frequency rTMS (1Hz), two recent procedures using high-frequency (20 Hz) or continuous theta burst stimulation showed promising results. The interest in using cerebral imaging to increase the efficacy of rTMS in the treatment of AVH has not been clearly demonstrated. 
Summary: Using rTMS to treat auditory hallucinations now seems less promising than it did 10 years ago because of the variable clinical effects and the high level of placebo responders. Evidence is still lacking concerning the maintenance treatment and the neurobiological underpinnings of rTMS efficacy, underscoring the need for further studies. Answer: The use of repetitive transcranial magnetic stimulation (rTMS) for treating auditory hallucinations in schizophrenia has been explored in various studies with mixed results. The evidence suggests that rTMS can be a safe and effective treatment for auditory hallucinations in schizophrenia spectrum disorders, with patients receiving rTMS responding more frequently than those receiving sham stimulation (PUBMED:25206578). Multiple studies have focused on the use of low-frequency rTMS to treat refractory auditory hallucinations, supporting the efficacy of stimulation over the temporoparietal cortex (PUBMED:20061953). Additionally, a meta-analysis indicated that auditory hallucinations improved more in the rTMS group than in the sham group, although the results were not stable after sensitivity analysis (PUBMED:32521380). However, the literature also indicates that the efficacy of rTMS in treating auditory hallucinations is not consistent across all studies. Some studies have failed to observe a significant improvement after rTMS treatment (PUBMED:21941181), and a randomized controlled cross-over study found no superior effects of bilateral rTMS at the temporoparietal area or Broca's area over the sham condition (PUBMED:25598827). Moreover, the grade of evidence for the efficacy of rTMS in treating auditory hallucinations is considered low, and publication bias may have a substantial impact on the results (PUBMED:28315614). In summary, while there is some evidence supporting the use of rTMS for treating auditory hallucinations in schizophrenia, the results are not unequivocal, and more research is needed to confirm its efficacy and to understand the underlying mechanisms. The variability in clinical effects and the presence of high placebo responders also suggest that rTMS may not be universally effective for all patients with auditory hallucinations (PUBMED:21941181). Therefore, while rTMS could be considered as a treatment option for auditory hallucinations, it should be approached with caution, and further high-quality studies are necessary to establish its role in clinical practice.
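The meta-analyses cited in this answer report pooled effect sizes such as SMD = -0.27 (95% CI -0.51 to -0.03) and MD = -0.42 (95% CI -0.64 to -0.20). As a minimal sketch of the pooling step behind numbers like these, the example below performs a standard fixed-effect, inverse-variance pool of per-study standardized mean differences; the study effects and standard errors are invented for illustration and are not the data behind the cited analyses.

```python
import math

# (standardized mean difference, standard error) per study -- hypothetical values
studies = [(-0.45, 0.20), (-0.10, 0.15), (-0.35, 0.25), (-0.20, 0.18)]

weights = [1.0 / se ** 2 for _, se in studies]            # inverse-variance weights
pooled = sum(w * smd for (smd, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))                 # SE of the pooled estimate
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

print(f"pooled SMD = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

A negative pooled SMD whose confidence interval excludes zero is read the same way as in the abstracts: hallucination scores fell more with active rTMS than with sham stimulation.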
Instruction: Does the Gadoxetic Acid-Enhanced Liver MRI Impact on the Treatment of Patients with Colorectal Cancer? Abstracts: abstract_id: PUBMED:34498476 Association between oncological outcomes of patients with colorectal liver metastasis and additional gadoxetic acid-enhanced magnetic resonance imaging. Background: Although gadoxetic acid-enhanced magnetic resonance imaging (MRI) has been shown in several studies to be superior to contrast-enhanced computed tomography (CT), there are no adequate data on its impact on overall survival and recurrence-free survival in patients with colorectal liver metastasis. Our objective was to study the benefit afforded by gadoxetic acid-enhanced MRI in patients with colorectal liver metastasis, especially in terms of overall survival and recurrence-free survival. Methods: Patients who underwent surgical treatment for colorectal liver metastasis between January 2003 and December 2018 were divided into groups depending on the preoperative imaging carried out. Overall survival and recurrence-free survival were compared between both groups. Results: In total, 480 cases were included. Gadoxetic acid-enhanced MRI detected 220 new lesions in 123 cases. Presence of metastatic nodules was confirmed by gadoxetic acid-enhanced MRI in 33 of the 55 patients (60.0%) with indeterminate nodules on contrast-enhanced CT. Gadoxetic acid-enhanced MRI detected significantly more nodules than contrast-enhanced CT (932 vs. 673, respectively, P<0.001). There was no difference in the 5-year overall survival (P=0.390) or in the 3-month (85.1% vs. 86.7%, respectively, P=0.790), 6-month (78.0% vs. 81.7%, respectively, P=0.570), or 1-year (65.7% vs. 69.6%, respectively, P=0.446) recurrence-free survival in patients examined with contrast-enhanced CT or gadoxetic acid-enhanced MRI. Conclusions: Although gadoxetic acid-enhanced MRI is more sensitive than contrast-enhanced CT, its utilization does not significantly affect the recurrence-free survival of the patient. abstract_id: PUBMED:27861987 Comparison of contrast-enhanced ultrasonography with Gd-EOB-DTPA-enhanced MRI in the diagnosis of liver metastasis from colorectal cancer. Purpose: To compare contrast-enhanced ultrasonography (CEUS) using Sonazoid with Gd-EOB-DTPA-enhanced MRI (EOB-MRI) in the diagnosis of liver metastases in patients with colorectal cancer. Methods: A total of 69 patients diagnosed with or suspected of having liver metastasis were enrolled. These hepatic lesions were diagnosed by histopathological examination after surgical resection or based on follow-up using various imaging modalities. The diagnostic accuracies of CEUS and EOB-MRI were compared. Results: One hundred thirty-three lesions were detected, of which 109 were diagnosed as liver metastases. Of the 133 lesions, 90.2% were detected on CEUS and 98.5% on EOB-MRI. The areas under the receiver operating characteristic curve for diagnosis were 0.906 and 0.851 on CEUS and EOB-MRI, respectively (p = 0.41). Sensitivity, specificity, positive predictive value (PPV), negative predictive value, and overall accuracy were 90.8%, 84.5%, 97.1%, 67.1%, and 90.2%, respectively, for CEUS, and 95.4%, 70.8%, 93.7%, 77.3%, and 91%, respectively, for EOB-MRI. Conclusions: CEUS has a higher specificity and PPV for the diagnosis of liver metastasis than EOB-MRI.
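The abstract above summarizes each modality with sensitivity, specificity, PPV, NPV and overall accuracy. For readers who want to recompute that kind of summary from a per-lesion 2x2 table, a minimal sketch follows; the counts are illustrative only and are not the study's raw data.

```python
def diagnostic_summary(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic metrics from a 2x2 confusion table
    (tp/fp/fn/tn = true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV":         tp / (tp + fp),
        "NPV":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative per-lesion counts, not taken from the abstract
for name, value in diagnostic_summary(tp=99, fp=3, fn=10, tn=21).items():
    print(f"{name}: {value:.1%}")
```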
abstract_id: PUBMED:27815722 Disappearing or residual tiny (≤5 mm) colorectal liver metastases after chemotherapy on gadoxetic acid-enhanced liver MRI and diffusion-weighted imaging: Is local treatment required? Objectives: To evaluate the clinical course of disappearing colorectal liver metastases (DLM) or residual tiny (≤5 mm) colorectal liver metastases (RTCLM) on gadoxetic acid-enhanced magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) in patients who had colorectal liver metastases (CLM) and received chemotherapy. Methods: Among 137 patients who received chemotherapy for CLM and underwent gadoxetic acid-enhanced MRI and DWI between 2010 and 2012, 43 patients with 168 DLMs and 48 RTCLMs were included. The cumulative in situ recurrence rate of DLM and progression rate of RTCLM and their predictive factors were evaluated. Results: A total of 150 DLMs and 26 RTCLMs were followed up without additional treatment. At 1 and 2 years, respectively, the cumulative in situ recurrence rates for DLM were 10.9 % and 15.7 % and the cumulative progression rates for RTCLM were 27.2 % and 33.2 %. The in situ recurrence rate at 2 years was 4.9 % for the DLM group that did not show reticular hypointensity of liver parenchyma on hepatobiliary phase. Conclusions: DLM on gadoxetic acid-enhanced liver MRI and DWI indicates a high possibility of clinical complete response, especially in patients without chemotherapy-induced sinusoidal obstruction syndrome. Thirty-three percent of RTCLMs showed progression at 2 years. Key Points: • DLMs on gadoxetic acid-enhanced MRI and DWI showed low recurrence rates. • If there is sinusoidal obstruction syndrome, evaluation of DLM needs special care. • The progression rate for residual tiny CLMs was 33.2 % at 2 years. abstract_id: PUBMED:27022613 Does the Gadoxetic Acid-Enhanced Liver MRI Impact on the Treatment of Patients with Colorectal Cancer? Comparison Study with ¹⁸F-FDG PET/CT. Objectives: We evaluated the value of Gadoxetic acid-enhanced liver MRI in the preoperative staging of colorectal cancer and estimated the clinical impact of liver MRI in the management plan of liver metastasis. Methods: We identified 108 patients who underwent PET/CT and liver MRI as preoperative evaluation of colorectal cancer, between January 2011 and December 2013. We evaluated the per nodule sensitivity of PET/CT and liver MRI for liver metastasis. Management plan changes were estimated for patients with metastatic nodules newly detected on liver MRI, to assess the clinical impact. Results: We enrolled 131 metastatic nodules (mean size 1.6 cm) in 41 patients (mean age 65 years). The per nodule sensitivities of PET/CT and liver MRI were both 100% for nodules measuring 2 cm or larger but were significantly different for nodules measuring less than 2 cm (59.8% and 95.1%, resp., P = 0.0001). At least one more metastatic nodule was detected on MRI in 16 patients. Among these, 7 patients indicated changes of management plan after performing MRI. Conclusions: Gadoxetic acid-enhanced liver MRI detected more metastatic nodules compared with PET/CT, especially for small (<2 cm) nodules. The newly detected nodules induced management plan change in 43.8% (7/16) of patients. abstract_id: PUBMED:33502225 Clinical Performance of Abbreviated Liver MRI for the Follow-Up of Patients With Colorectal Liver Metastases. OBJECTIVE. 
The objective of our study was to compare an abbreviated liver MRI protocol with a standard liver MRI protocol for the posttreatment follow-up of colorectal metastases in assessing disease presence, segmental involvement, and response to chemotherapy and for surgical planning. MATERIALS AND METHODS. This retrospective single-center study reviewed consecutive chemotherapy-naïve patients with colorectal liver metastases (April 1, 2011-August 31, 2017) who underwent gadoxetate disodium-enhanced MRI on a 1.5-T unit before and 8-12 weeks after chemotherapy. Two radiologists blinded to outcomes independently reviewed images obtained using standard MRI sequences at baseline and after treatment. The standard MRI sequences were the following: axial T1-weighted, axial T2-weighted, axial DWI (b values = 0-750 s/mm2), axial multiphase contrast-enhanced T1-weighted, and axial and coronal hepatobiliary phase (HBP) T1-weighted sequences. The standard sequences obtained at baseline and the abbreviated protocol sequences (i.e., HBP gadoxetate disodium-enhanced T1-weighted and DWI sequences) obtained after treatment were reviewed. For each image set, reviewers assessed disease presence and segmental involvement; in addition, for images obtained after therapy, treatment response according to RECIST 1.1 was recorded. RESULTS. One hundred thirteen patients (73 men and 40 women; mean age, 61.6 years) were evaluated. The total number of metastases showed high agreement between the standard protocol and abbreviated protocol (intraclass correlation coefficient = 0.97). There was good agreement between the protocols for segmental involvement (weighted κ = 0.73-0.85), and the weighted kappa was 0.82 for all segments. Discrepancies in segmental involvement did not alter potential surgical planning. Categorization of RECIST 1.1 treatment response using the abbreviated protocol versus the standard protocol showed excellent agreement (weighted κ = 0.92). CONCLUSION. An abbreviated liver MRI protocol (i.e., HBP gadoxetate disodium-enhanced T1-weighted and DWI sequences) allows assessment after chemotherapy similar to a standard liver MRI protocol. Use of the abbreviated protocol can reduce imaging time without sacrificing diagnostic performance for the follow-up of colorectal liver metastases. abstract_id: PUBMED:30130970 Non-contrast liver MRI as an alternative to gadoxetic acid-enhanced MRI for liver metastasis from colorectal cancer. Background: Liver magnetic resonance imaging (MRI) provides reliable diagnostic performance for detecting liver metastasis but is costly and time-consuming. Purpose: To compare the diagnostic performance of non-contrast liver MRI to whole MRI using gadoxetic acid for detecting liver metastasis in patients with colorectal cancer (CRC). Material And Methods: We included 175 patients with histologically confirmed 401 liver metastases and 73 benign liver lesions. A non-contrast MRI (T1-weighted, T2-weighted, and diffusion-weighted images) with or without multidetector computed tomography (MDCT) and a whole MRI (gadoxetic acid-enhanced and non-contrast MRI) were analyzed independently by two observers to detect liver metastasis using receiver operating characteristic analysis. 
Results: We found no significant differences in Az value (range = 0.914-0.997), sensitivity (range = 95.2-99.6%), specificity (range = 77.3-100%), or positive (range = 92.9-100%) or negative predictive value (range = 87.5-95.7%) between the non-contrast MRI with or without MDCT and the whole MRI for both observers for all lesions as well as lesions ≤1.0 cm and lesions >1.0 cm in size ( P = 0.203-1.000). Combined MDCT and non-contrast MRI led to similar numbers of false-positive diagnosis to the whole MRI (eight for Observers 1 and 4 vs. 3 for Observer 2). Conclusion: Non-contrast liver MRI may serve as an alternative to gadoxetic acid-enhanced MRI for detecting and characterizing liver metastasis from CRC, at least in patients with relatively high risk of liver metastasis who underwent MDCT. Non-contrast liver MRI could be beneficial especially for patients with lesions that are already documented as benign but require additional follow-up MRIs. abstract_id: PUBMED:31640929 Additional value of gadoxetic acid-enhanced MRI to conventional extracellular gadolinium-enhanced MRI for the surgical management of colorectal and neuroendocrine liver metastases. Background: Liver resection being the only potentially curative treatment for patients with liver metastasis, it is critical to select the appropriate preoperative imaging modality. The aim of this study was to assess the impact of preoperative gadoxetic acid-enhanced MRI compared to a conventional extracellular gadolinium-enhanced MRI on the surgical management of colorectal and neuroendocrine liver metastasis. Methods: We included 110 patients who underwent both a gadoxetic acid-enhanced MRI (hepatospecific contrast) and conventional extracellular gadolinium for the evaluation of colorectal or neuroendocrine liver metastases, from January 2012 to December 2015 at the CHU de Québec - Université Laval. When the number of lesions differed, a hepatobiliary surgeon evaluated if the gadoxetic acid-enhanced MRI modified the surgical management. Results: Gadoxetic acid-enhanced MRI found new lesions in 25 patients (22.7%), excluded lesions in 18 patients (16.4%) and identified the same number in 67 patients (60.9%). The addition of the gadoxetic acid-enhanced MRI directly altered the surgical management in 19 patients overall (17.3% (95% CI [10.73-25.65])). Conclusion: Despite the additional cost associated with gadoxetic acid-enhanced MRI compared to conventional extracellular gadolinium-enhanced MRI, the use of this contrast agent has a significant impact on the surgical management of patients with liver metastases. abstract_id: PUBMED:35043218 Characteristics of false-positive lesions in evaluating colorectal liver metastases on gadoxetic acid-enhanced magnetic resonance imaging. Purpose: Gadoxetic acid-enhanced MRI (Gd-EOB-MRI) shows higher sensitivity for colorectal liver metastases (CRLM) than contrast-enhanced computed tomography (CECT). However, the details of false-positive lesions for each imaging modality are unknown. Methods: Cases undergoing hepatectomy for CRLM following a preoperative evaluation with both CECT and Gd-EOB-MRI between July 2008 and December 2016 were reviewed. The false-positive and false-negative rates were assessed for each modality, and the characteristics of false-positive lesions were evaluated. Results: We evaluated 275 partial hepatectomies in 242 patients without preoperative chemotherapy. Among the 275 hepatectomies, 546 lesions were recognized by CECT and/or Gd-EOB-MRI. 
The false-positive rates for CECT and Gd-EOB-MRI were 4% (18/422) and 7% (37/536), respectively. The size of false-positive lesions was significantly smaller than that of correctly diagnosed lesions (median: 7.6 mm [3-20 mm] vs 28 mm [3-120 mm], P < 0.001). Compared with the 233 correctly diagnosed lesions ≤ 20 mm in diameter, false-positive lesions were more frequently located near the liver surface or vasculobiliary structures than true lesions (33/37 [89%] vs 149/233 [64%], respectively; P = 0.0021). Conclusion: Gd-EOB-MRI had a 7% false-positive rate. A small size and tumor location near the surface or near vasculobiliary structures were associated with false positivity. abstract_id: PUBMED:30442100 Gadoxetic acid-enhanced magnetic resonance imaging significantly influences the clinical course in patients with colorectal liver metastases. Background: Gadoxetic acid (Primovist™)-enhanced magnetic resonance imaging (P-MRI) scans have higher accuracy and increased detection of small colorectal liver metastases (CRLM) compared to CT scans or conventional MRI scans. However, P-MRI scans are still inconsistently acquired in the diagnostic work-up of patients with CRLM. The aim of this study was to determine the influence of P-MRI scans on treatment plan proposition and subsequently the clinical course of the patient. Methods: Eighty-three consecutive patients with potentially resectable CRLM based on a conventional CT scan underwent P-MRI scanning prior to treatment. Treatment plans proposed by the multidisciplinary team were compared before and after P-MRI scanning and related to the final treatment and diagnosis, and the accuracy of the CT scan and the P-MRI scan was calculated. Results: P-MRI scans led to a change of treatment in 15 patients (18%) and alteration of extensiveness of local therapy in another 17 patients (20%). All changes were justified, leading to an accuracy of 93% for treatment proposition based on the P-MRI scan, compared to an accuracy of 75% for the CT scan. Conclusions: P-MRI scans provide additional information that can aid in proposing the most suitable treatment for patients with CRLM and might prevent short-term reintervention. abstract_id: PUBMED:28796732 Is MRI of the Liver Needed During Routine Preoperative Workup for Colorectal Cancer? Background: The clinical efficacy of gadoxetic acid-enhanced liver MRI as a routine preoperative procedure for all patients with colorectal cancer remains unclear. Objective: The purpose of this study was to evaluate the efficacy of preoperative gadoxetic acid-enhanced liver MRI for the diagnosis of liver metastasis in patients with colorectal cancer. Design: This was a retrospective analysis from a prospective cohort database. Settings: All of the patients were from a subspecialty practice at a tertiary referral hospital. Patients: Patients who received preoperative gadoxetic acid-enhanced liver MRI after CT and attempted curative surgery for colorectal cancer were included. Main Outcome Measures: The number of equivocal hepatic lesions based on CT and gadoxetic acid-enhanced liver MRI and diagnostic use of the gadoxetic acid-enhanced liver MRI were measured. Results: We reviewed the records of 690 patients with colorectal cancer. Equivocal hepatic lesions were present in 17.2% of patients based on CT and in 4.5% based on gadoxetic acid-enhanced liver MRI. Among 496 patients with no liver metastasis based on CT, gadoxetic acid-enhanced liver MRI detected equivocal lesions in 15 patients and metastasis in 3 patients.
Among 119 patients who had equivocal liver lesions on CT, gadoxetic acid-enhanced liver MRI indicated hepatic lesions in 103 patients (86.6%), including 90 with no metastasis and 13 with metastasis. Among 75 patients who had liver metastasis on CT, gadoxetic acid-enhanced liver MRI indicated that the hepatic lesions in 2 patients were benign, in contrast to CT findings. The initial surgical plans for hepatic lesions according to CT were changed in 17 patients (3%) after gadoxetic acid-enhanced liver MRI. Limitations: This study was limited by its retrospective design. Conclusions: The clinical efficacy of gadoxetic acid-enhanced liver MRI as a routine preoperative procedure for all patients with colorectal cancer is low, in spite of its high diagnostic value for detecting liver metastasis. However, this study showed gadoxetic acid-enhanced liver MRI was helpful in characterizing equivocal hepatic lesions identified in CT and could lead to change in treatment plans for some patients. See Video Abstract at http://links.lww.com/DCR/A420. Answer: The use of gadoxetic acid-enhanced liver MRI (Gd-EOB-MRI) does impact the treatment of patients with colorectal cancer (CRC) in various ways. Studies have shown that Gd-EOB-MRI is more sensitive than contrast-enhanced computed tomography (CT) for detecting colorectal liver metastases (CRLM), particularly for small lesions less than 2 cm in size. This increased sensitivity can lead to the detection of additional metastatic nodules that were not identified on CT scans, potentially altering the management plan for patients (PUBMED:27022613). In some cases, the additional information provided by Gd-EOB-MRI has led to changes in the treatment plan for 18% of patients and alterations in the extensiveness of local therapy for another 20% of patients (PUBMED:30442100). Moreover, the use of Gd-EOB-MRI has been shown to have a significant impact on the surgical management of patients with liver metastases, with new lesions found in 22.7% of patients and changes in surgical management in 17.3% of cases (PUBMED:31640929). This suggests that Gd-EOB-MRI can influence decisions regarding the extent of liver resection and other therapeutic interventions. However, it is important to note that while Gd-EOB-MRI is more sensitive in detecting liver metastases, its utilization does not significantly affect recurrence-free survival in patients with colorectal liver metastasis (PUBMED:34498476). Additionally, the clinical efficacy of Gd-EOB-MRI as a routine preoperative procedure for all patients with CRC is considered low, despite its high diagnostic value. It is particularly helpful in characterizing equivocal hepatic lesions identified on CT, which can lead to changes in treatment plans for some patients (PUBMED:28796732). In summary, Gd-EOB-MRI can significantly influence the clinical course and treatment planning for patients with CRC, particularly in the detection and characterization of liver metastases, which may lead to changes in surgical and therapeutic approaches. However, its impact on overall survival and recurrence-free survival may not be significant, and its routine use for all CRC patients as a preoperative procedure may not be justified based on its clinical efficacy.
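Several abstracts in this set compare two modalities on the same set of lesions (for example, per-nodule sensitivity of liver MRI versus PET/CT). Because such comparisons are paired, the discordant detections carry the information; a McNemar-style sketch is shown below. The counts are hypothetical, and this is a generic illustration rather than the analysis actually performed in the cited studies.

```python
import math

# Paired per-lesion detection, hypothetical counts:
# b = lesions seen on MRI but missed on CT, c = lesions seen on CT but missed on MRI
b, c = 18, 4

chi2 = (abs(b - c) - 1) ** 2 / (b + c)        # McNemar statistic with continuity correction
p_value = math.erfc(math.sqrt(chi2 / 2.0))    # upper tail of chi-square with 1 df

print(f"discordant pairs: b={b}, c={c}, chi2 = {chi2:.2f}, p = {p_value:.4f}")
```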
Instruction: Does ablation of supraventricular tachycardia in children with a structurally normal heart improve quality of life? Abstracts: abstract_id: PUBMED:21106019 Does ablation of supraventricular tachycardia in children with a structurally normal heart improve quality of life? Introduction: Ablation has become first-line therapy for managing many pediatric patients with atrioventicular accessory pathway or atrioventricular nodal-mediated supraventricular tachycardia (SVT). Studies to date have all focused on elimination of the SVT substrate; yet, there are no studies to document whether this results in improved quality of life (QoL) scores. Objective: The objective of this study was to evaluate whether a successful catheter ablation improved QoL scores in pediatric patients. Patient And Methods: Pediatric patients between the age of 5 and 18 years with structurally normal hearts and re-entry SVT that were referred to the electrophysiology laboratory between October 2004 and June 2006 at the Children's Healthcare of Atlanta were eligible to be enrolled in the study. The Pediatric Quality of Life Inventory cardiac model questionnaire was administered to patients prior to and 6 months following catheter ablation. Areas evaluated were physical, emotional, social, school, and psychosocial function. The paired t-test was used to test the difference between pre- and post-time points for the study groups. P value <.05 was considered significant. Results: Forty-three patients consented to initial enrollment and returned the initial questionnaires. Seven patients did not have ablations performed during the electrophysiology study and therefore were excluded from further analysis. Complete pre- and post-ablation data were available for 27 patients. Comparing pre-ablation data with post-ablation data, there was significant improvement in all measured areas following successful elimination of tachycardia substrate. Patients reported lowest social and physical functioning scores pre-ablation. The greatest reported improvement post-ablation was in physical functioning. Conclusion: Successful elimination of SVT substrate results in improved pediatric QoL scores as reported by patients. abstract_id: PUBMED:32066520 Quality of life assessment in children before and after a successful ablation for supraventricular tachycardia. Background And Objectives: Young patients suffering from rhythm disorders have a negative impact in their quality of life. In recent years, ablation has become the first-line therapy for supraventricular arrhythmias in children. In the light of the current expertise and advancement in the field, we decided to evaluate the quality of life in young patients with supraventricular arrhythmias before and after a percutaneous ablation procedure. Methods: The prospective cohort consisted of patients <18 years with structurally normal hearts and non-pre-excited supraventricular arrhythmias, who had an ablation in our centre from 2013 to 2018. The cohort was evaluated with the PedsQL™ 4.0 Generic Core Scales self-questionnaire prior to and post-ablation. Results: The final cohort included 88 patients consisted of 52 males (59%), with a mean age at ablation of 12.5 ± 3.3 years. Forty-two patients (48%) had a retrograde-only accessory pathway mediating the tachycardia, 38 (43%) had atrio-ventricular nodal re-entrant tachycardia, 7 (8%) had ectopic atrial tachycardia, and 1 (1%) had atrial flutter. The main reason for an ablation was the patient's choice in 53%. 
There were no severe complications. Comparison between the baseline and post-ablation assessments showed that patients reported significant improvement in the scores for physical health, emotional and social functioning, as well as in the total scores. Conclusions: The present study demonstrates that the successful treatment of supraventricular arrhythmias by means of an ablation results in a significant improvement in the quality of self-reported life scores in young patients. abstract_id: PUBMED:25838334 Radiofrequency ablation changes the quality of life of children with supraventricular tachycardias. Objective: Radiofrequency ablation (RFA) has rapidly become the first-line therapy for children with supraventricular tachycardia (SVT). Recently, more attention has been given to the measurement of health-related quality of life (QoL) in children. The primary aim of this study was to determine if there is a change in the QoL in children with SVT pre and post RFA procedure using the Pediatric Quality of Life Inventory (PedsQL) cardiac inventory. In addition, the study discusses the impact of age, gender and variety of SVT mechanisms on the QoL. Design, Setting And Patients: All consecutive children with SVT referred for RFA at Mansoura University Children's Hospital were enrolled in this study. The PedsQL cardiac module questionnaire was given to the children/parents to be filled out before and 1 month following RFA procedure. Evaluated areas were physical, emotional, social, school and psychosocial function. The paired t test was used to test the difference between pre-time and post-time points for the study groups. Demographic and clinical data were collected. Results: The study sample consisted of 38 patients who underwent a successful ablation. The mean age of the patients at the time of RFA procedure was 12.4±5.3 years. There was a statistically significant improvement in all measured areas 1 month post successful RFA as compared with pre ablation. Post ablation, the greatest score improvement was in physical functioning. Older children (>12 years) showed the greatest benefit, but gender and type of SVT did not influence outcome. Conclusions: RFA therapy is useful in improving QoL and perceptions in children with recurrent SVT. abstract_id: PUBMED:28617528 Influence of ablation therapy on the quality of life in children with supraventricular tachycardia. Objective: Numerous restrictions, which are imposed on children with arrhythmia, influence their quality of life (QoL) and may have a negative impact on their further development. Ablation is a highly successful treatment leaving patients free from arrhythmia and other related limitations. There are very few studies evaluating the influence of ablation on the QoL in children with arrhythmia, based on small groups of patients. The aim of this study was to evaluate the impact of ablation on the QoL in children with supraventricular tachycardia (SVT). Patients And Methods: We included 122 children with SVT who underwent a successful ablation. The Qol was assessed before and after the ablation, using the WHOQOL-BREF and the Pediatric Arrhythmia Related Score (PARS) - a specific questionnaire developed by the authors. Results: Six months after the ablation, WHOQOL-BREF showed a significant improvement in the QoL in the physical (Phd) (p < 0.0001), psychological (Psd) (p = 0.0014) and social relationships (SRD) (p = 0.0165) domains. 
PARS showed a significant improvement in the QoL in the Phd (p < 0.0001), Psd (p = 0.0307) and medical satisfaction domains (Msd) (p < 0.0001). No improvement in Psd was revealed in children who had been off medications before the ablation. In the youngest children, a significant improvement was observed in all the measured domains in both questionnaires (p < 0.05), while in older children the greatest improvement after the ablation was noted in the area of physical functioning. Conclusions: Ablation in children with SVT significantly improved general satisfaction with health and with the QoL and had a positive impact on QoL scores. The youngest patients and those on antiarrhythmic medication before the ablation benefit most from the procedure. abstract_id: PUBMED:33253137 Electrophysiologic characteristics and catheter ablation results of tachycardia-induced cardiomyopathy in children with structurally normal heart. Objective: The aim of this study is to present electrophysiologic characteristics and catheter ablation results of tachycardia-induced cardiomyopathy (TIC) in children with structurally normal heart. Methods: We performed a single-center retrospective review of all pediatric patients with TIC, who underwent an electrophysiology study and ablation procedure in our clinic between November 2013 and January 2019. Results: A total of 26 patients (24 with a single tachyarrhythmia substrate and two with two tachyarrhythmia substrates each, giving a total of 28 tachyarrhythmia substrates) underwent ablation for TIC. The median age was 60 months (2-214 months). Final diagnoses were supraventricular tachycardia (SVT) in 24 patients and ventricular tachycardia (VT) in two patients. The most common SVT mechanisms were focal atrial tachycardia (31%), atrioventricular reentrant tachycardia (27%), and permanent junctional reciprocating tachycardia (15%). Radiofrequency ablation (RFA) was performed in 15 tachyarrhythmia substrates, and cryoablation was performed in 13 tachyarrhythmia substrates, as the initial ablation method. Acute success in ablation was achieved in 24 out of 26 patients (92%). Tachycardia recurrence was observed in two patients (8%) on follow-up, who were later treated successfully with repeat RFA. Overall success rates were 92% (24 out of 26) in patients and 93% (26 out of 28) in substrates. On echocardiographic follow-up, the median left ventricular recovery time was 3 months (1-24 months), and the median reverse remodeling time was 6 months (3-36 months). Conclusion: TIC should be kept in mind during differential diagnosis of dilated cardiomyopathy. Pediatric TIC patients can be treated successfully and safely with RFA or cryoablation. With an early diagnosis of TIC and quick restoration of normal sinus rhythm, left ventricular recovery and remodeling may be facilitated. abstract_id: PUBMED:8840848 Effect of radiofrequency catheter ablation on health-related quality of life and activities of daily living in patients with recurrent arrhythmias. Background: Although radiofrequency catheter ablation can be used to effectively treat a variety of arrhythmias, the effects of this procedure on health-related quality of life have not been systematically studied.
Methods And Results: The SF-36 (a measure of general health status), the Symptom Checklist-Frequency and Severity Scale (an instrument specific for cardiac arrhythmias), and an Activities of Daily Living questionnaire were used to assess quality of life in 161 patients before radiofrequency catheter ablation. These same instruments were used to measure quality of life 1 and 6 months after ablation with complete data in 159 of the original 161 patients. Before ablation, SF-36 scores of the study population were low compared with the US normative data base reflecting significant impairment in physical functioning and well-being. The lowest scores were reported by patients with atrial fibrillation and atrial flutter. Catheter ablation was associated with significant improvement in quality of life that was sustained over the 6 months after ablation. Improvements were measured in both the generic SF-36 health status questionnaire and the disease-specific Symptom Checklist-Frequency and Severity Scale. Catheter ablation was followed by improved performance of activities of daily living and a marked decrease in the number of visits to physicians and emergency rooms in the 6 months after ablation compared with the 6 months before ablation. Conclusions: Radiofrequency catheter ablation improves the health-related quality of life for patients with a variety of cardiac arrhythmias. abstract_id: PUBMED:27660579 Catheter Ablation to Treat Supraventricular Arrhythmia in Children and Adults With Congenital Heart Disease: What We Know and Where We Are Going. Background: Catheter ablation has been used to manage supraventricular arrhythmia in children since 1990. This article reviews the history of catheter ablation used to treat arrhythmia in children and discusses new frontiers in the field. We also address ablation in adult patients with a history of congenital heart disease (CHD) that was diagnosed and initially treated in childhood. Methods: We conducted an evidence-based literature review to gather available data on ablation for supraventricular tachycardia in children and adult patients with CHD. Results: Ablations can be performed safely and effectively in children. Complication rates are higher in children <4 years and <15 kg. In one study, the overall success rate of radiofrequency ablation in pediatrics was 95.7%, with the highest success rate in left free wall pathways (97.8%). Recurrence was higher in septal pathways. Cryoablation has been reported to have a 93% acute success rate for atrioventricular (AV) nodal reentrant tachycardia and septal pathways with no risk of AV block and a 5%-9% risk of recurrence. Three-dimensional mapping, intracardiac echocardiography, remote magnetic navigation, and irrigated catheter ablation are new technologies used to treat pediatric and adult patients with CHD. The population of adult patients with CHD is growing, and these patients are at particularly high risk for arrhythmia. A paucity of data is available on ablation in adult patients with CHD. Conclusion: Electrophysiology for pediatric and adult patients with CHD is a rapidly growing and progressing field. We benefit from continuous development of ablation techniques for adults with structurally normal hearts and have the unique challenge and responsibility to ensure the safe and effective application of these techniques in the vulnerable population of pediatric and adult patients with CHD. 
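The review above quotes single-arm proportions such as an overall radiofrequency ablation success rate of 95.7% without interval estimates. When the underlying counts are available, a Wilson score interval is a common way to attach a 95% CI to such a proportion; the counts in the sketch below are hypothetical and the helper is not taken from any of the cited papers.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1.0 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half_width, centre + half_width

# Hypothetical counts: 268 acutely successful ablations out of 280 procedures
low, high = wilson_ci(268, 280)
print(f"success rate {268 / 280:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```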
abstract_id: PUBMED:23280027 The effect of radiofrequency ablation treatment on quality of life and anxiety in patients with supraventricular tachycardia. Aim: Paroxysmal supraventricular tachycardia (PSVT) has been shown to be associated with poor outcome in psychological status and quality of life. This study aimed to assess the quality-of-life and anxiety in patients with PSVT and to investigate the influence of radiofrequency ablation (RFA) treatment on these parameters. Materials And Methods: Fifty patients with newly diagnosed PSVT with electrophysiologic study and treated with RFA, were enrolled in the study and 50 healthy age-and sex-matched individuals. The brief version of World Health Organization Quality of Life Scale and the state-trait anxiety inventory were administered to controls and all patients before and three months after the RFA procedure. Results: At baseline, in comparison with the control group, the patients with PSVT exhibited greater anxiety average scores (p < 0.05). After RFA procedure, the patients with PSVT exhibited significant improvement in anxiety, quality of life and health satisfaction scores. Statistically significant difference between the controls and the patients in respect to state and trait anxiety average scores disappeared after treatment (p > 0.05). Conclusions: In this study, normalization of anxiety and quality of life is associated with RFA treatment for patients with PSVT. This finding need to be confirmed by larger scale clinical trials with long-term quality of life follow-up in higher number of patients. abstract_id: PUBMED:37652863 Quality of life among patients with supraventricular tachycardia post radiofrequency cardiac ablation in Jordan. Background: Supraventricular tachycardia (SVT) is a common arrhythmia with associated symptoms such as palpitation, dizziness, and fatigue. It significantly affects patients' quality of life (QoL). Radiofrequency cardiac ablation (RFCA) is a highly effective treatment to eliminate arrhythmia and improve patients' QoL. The purpose of this study was to assess the level of QoL among patients with SVT and examine the difference in QoL before and after RFCA. Methods: One group pre-posttest design with a convenience sample of 112 patients was used. QoL was assessed by 36-Item Short Form (SF-36). Data were collected at admission through face-to-face interviews and 1-month post-discharge through phone interviews. Results: There was a significant difference between QoL before (33.7±17.0) and 1 month after (62.5±18.5) the RFCA. Post-RFCA patients diagnosed with atrioventricular nodal reentrant tachycardia had higher QoL than other types of SVT. Moreover, there were significant negative relationships between QoL and the number and duration of episodes pre- and post-RFCA. There were no significant differences in QoL based on: age, sex, working status, marital status, smoking, coronary artery disease, diabetes mellitus, and hypertension. Conclusions: After RFCA, the QoL of patients with ST improved for both physical and mental component subscales. abstract_id: PUBMED:28359691 Ablation of supraventricular arrhythmias in adult congenital heart disease: A contemporary review. Supraventricular arrhythmias are an important and increasing cause of morbidity in adults with congenital heart disease, requiring specific management strategies. Pharmacological treatment has limited efficacy, and is often associated with some side-effects. 
Major improvements in catheter ablation techniques have opened new opportunities to better understand underlying mechanisms of supraventricular arrhythmias, offer better therapy, and eventually improve symptoms and quality of life in these patients. An array of tools and techniques are necessary to access relevant anatomical areas to address the arrhythmogenic substrate. The mechanism of these arrhythmias is mostly related to macroreentry around surgical scars or cavotricuspid isthmus-dependent flutter. The efficacy of catheter ablation is mainly dependent on the underlying congenital heart condition, with the most complex cases typically being associated with atrial switch and Fontan surgeries. Although relatively high rates of recurrence are seen after a single procedure, additional attempts are often helpful to decrease recurrences and improve symptoms. Catheter ablation in such patients continues to present many unique challenges that are best addressed by experienced multidisciplinary teams, at centres equipped with the proper catheters, imaging capabilities, mapping systems and support staff needed to maximize safety and success. Consensus indications have emerged that often support ablation as first-line therapy in these patients. In this comprehensive review, we aim to describe the specific issues associated with ablation of supraventricular arrhythmias in adult congenital heart disease, assess the results in contemporary practice and, finally, review the current indications. Answer: Yes, ablation of supraventricular tachycardia (SVT) in children with a structurally normal heart does improve quality of life (QoL). Studies have shown that after successful catheter ablation, there is significant improvement in various areas of QoL, including physical, emotional, social, school, and psychosocial functioning (PUBMED:21106019). Patients report the greatest improvement in physical functioning post-ablation (PUBMED:21106019; PUBMED:25838334). The improvement in QoL scores is significant and includes better scores for physical health, emotional and social functioning, as well as total scores (PUBMED:32066520). Furthermore, older children (>12 years) showed the greatest benefit from the procedure, although gender and type of SVT did not influence the outcome (PUBMED:25838334). The youngest patients and those on antiarrhythmic medication before the ablation benefit most from the procedure (PUBMED:28617528). Overall, successful treatment of supraventricular arrhythmias by means of ablation results in a significant improvement in self-reported life scores in young patients (PUBMED:32066520).
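All of the quality-of-life studies summarized in this answer compare each child's pre-ablation questionnaire score with the same child's post-ablation score, most often with a paired t-test. A minimal sketch of that comparison is below; the scores are invented, and the example assumes SciPy is available.

```python
from scipy import stats

# Hypothetical PedsQL-style total scores (0-100) for the same eight children
pre_ablation  = [62, 70, 55, 48, 66, 73, 58, 64]
post_ablation = [78, 82, 70, 65, 74, 88, 72, 80]

# Paired t-test: is the mean within-child change different from zero?
t_stat, p_value = stats.ttest_rel(post_ablation, pre_ablation)
mean_change = sum(b - a for a, b in zip(pre_ablation, post_ablation)) / len(pre_ablation)

print(f"mean change = {mean_change:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```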
Instruction: Are we late for the diagnosis of acute kidney injury in the intensive care units in pediatric patients? Abstracts: abstract_id: PUBMED:25411948 Are we late for the diagnosis of acute kidney injury in the intensive care units in pediatric patients? A preliminary, retrospective observational study among 66 patients. Background: The purposes of this study were to emphasize the importance of Risk, Injury, Failure, Loss, and End-Stage (RIFLE) classification in early diagnose and prognosis of acute kidney injury (AKI), and to evaluate the practicability of the RIFLE criteria in intensive care units. Methods: Sixty-six patients applied acute peritoneal dialysis were included into the study. Patients having acute peritoneal dialysis within the first 24 hours of intensive care unit admission were named group 1, between 24-48 hours group 2, and those who had acute peritoneal dialysis 48 hours or more after admission to the intensive care unit were named group 3. Retrospectively, we evaluated patients by RIFLE criteria at the consultation time, and patients who had been just in AKI were called late referral patients. The mean interval time between the onset of AKI and the consultation time was defined as delay time in late referral patients. Results: There were 20 patients in group 1, 15 were in group 2 and 31 in group 3. In total there were 18/66 patients in risk, 13/66 in injury while 35/66 in failure. There was statistically difference between delay times of in risk and failure class in group 3 (P<0.05). Also delayed patient numbers of both risk and failure class were found statistically highly significant (P<0.001). Conclusions: Most of the patients were in failure class at the time of consultation. We guess that if they would have been diagnosed earlier, prognosis might have been better. Therefore early diagnosis of AKI with RIFLE criteria and early initiation of acute peritoneal dialysis would probably improve prognosis. abstract_id: PUBMED:36507612 Characteristics of pediatric continuous renal replacement therapies in hospitals with pediatric intensive care units in Japan. Introduction: Information on the implementation of continuous renal replacement therapy (CRRT) in pediatric intensive care units (PICUs) is limited. We investigated the real-world practice of this therapy in Japan. Methods: We conducted a cross-sectional survey of 26 hospitals with PICUs in Japan. One physician per hospital responded to the questionnaire. Results: Fewer than half of the hospitals followed the CRRT practice guidelines; treatment options were often selected at a physician's discretion. PICUs varied widely in continuous renal replacement settings, circuit priming methods, frequency of circuit exchange, and policies for using CRRT in sepsis. Only two anticoagulants were used: nafamostat mesylate and unfractionated heparin. Most units did not change the nutritional dosage according to CRRT use. Many facilities were proactive concerning rehabilitation. Conclusion: Nafamostat-centered anticoagulation management was unique to Japan. Our results may help identify areas where additional clinical studies are needed to standardize CRRT practice. abstract_id: PUBMED:31344338 Impact of the development of acute kidney injury on patients admitted to the pediatric intensive care unit. Objective: To identify the risk factors for the development of acute kidney injury and for short and long-term mortality of patients with acute kidney injury after admission to the Pediatric Intensive Care Unit. 
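The abstracts above define acute kidney injury with the pRIFLE, AKIN, or KDIGO classifications. As a rough, simplified illustration of how the serum-creatinine arm of the KDIGO staging is commonly applied, the sketch below maps a baseline/current creatinine pair to a stage. The thresholds are quoted from memory, and the urine-output, renal-replacement and pediatric eGFR criteria are deliberately omitted, so treat this as an assumption-laden teaching sketch rather than a clinical reference.

```python
def kdigo_stage_by_creatinine(baseline_scr: float, current_scr: float) -> int:
    """Simplified KDIGO AKI stage from serum creatinine (mg/dL) only.
    Returns 0 if no creatinine criterion is met; ignores urine output,
    dialysis, and pediatric eGFR criteria."""
    ratio = current_scr / baseline_scr
    if ratio >= 3.0 or current_scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (current_scr - baseline_scr) >= 0.3:
        return 1
    return 0

# Hypothetical baseline -> current creatinine pairs (mg/dL)
for baseline, current in [(0.4, 0.5), (0.5, 0.85), (0.4, 0.8), (0.6, 2.1)]:
    stage = kdigo_stage_by_creatinine(baseline, current)
    print(f"{baseline} -> {current} mg/dL: stage {stage}")
```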
Materials And Methods: Retrospective analysis of patients admitted to the Pediatric Intensive Care Unit from January 2004 to December 2008. Acute kidney injury was defined by the KDIGO criteria. Risk factors for acute kidney injury, in-hospital mortality, and long-term mortality were obtained through multivariate logistic regression analysis. Long-term mortality (up to 2011) was obtained by searching the institution's database and by telephone contact with patients' family members. Results: A total of 434 patients were evaluated and the incidence of acute kidney injury was 64%. Most acute kidney injury episodes (78%) occurred within the first 24 hours after admission to the Pediatric Intensive Care Unit. The risk factors for the development of acute kidney injury were: low volume of diuresis, younger age, mechanical ventilation, vasoactive drugs, diuretics, and amphotericin. Lower weight, positive fluid balance, acute kidney injury, dopamine use and mechanical ventilation were independent risk factors for in-hospital mortality. Long-term mortality was 17.8%. Systolic blood pressure, PRISM score, low volume of diuresis, and mechanical ventilation were independent risk factors associated with long-term mortality after admission to the Pediatric Intensive Care Unit. Conclusion: Acute kidney injury was a frequent, early event, and was associated with in-hospital mortality and long-term mortality after admission to the Pediatric Intensive Care Unit. abstract_id: PUBMED:24053902 Morbimortality associated to acute kidney injury in patients admitted to pediatric intensive care units. Aim: To describe the morbimortality associated with the development of acute kidney injury (AKI), defined by the pediatric adaptation of the RIFLE criteria, in a Pediatric Intensive Care Unit (PICU). Design: A retrospective cohort study was carried out. Setting: Children admitted to a PICU in a tertiary care hospital. Patients or participants: A total of 320 children admitted to a tertiary care hospital PICU during the year 2011. Neonates and renal transplant patients were excluded. Primary endpoints: AKI was defined and classified according to the pediatric adaptation of the RIFLE criteria. PICU and hospital stays, use of mechanical ventilation and mortality were used to evaluate morbimortality. Results: A total of 315 children met the inclusion criteria, with a median age of 19 months (range 6-72). Of these patients, 128 presented AKI (73 reached the Risk category and 55 reached the Injury and Failure categories). Children with AKI presented a longer PICU stay (6.0 [4.0-12.5] vs. 3.5 [2.0-7.0] days) and hospital stay (17 [10-32] vs. 10 [7-15] days), and a greater need for mechanical ventilation (61.7% vs. 36.9%). The development of AKI was an independent factor of morbidity, associated with a longer PICU and hospital stay and with a need for longer mechanical ventilation, with a proportional relationship between increasing morbidity and the severity of AKI. Conclusion: The development of AKI in critically ill children is associated with increased morbimortality, which is proportional to the severity of renal injury. abstract_id: PUBMED:19270213 Peritoneal dialysis in the pediatric intensive care unit setting. Acute kidney injury (AKI) is a common complication in pediatric and neonatal intensive care units (ICUs). Renal replacement therapy (RRT) is frequently needed in children in whom supportive therapy is not enough to satisfy metabolic demands or to provide adequate nutrition in cases of oliguric kidney failure.
The decision to begin dialysis should not be delayed, because experience in infants shows that the shorter the time from the ischemic insult to the beginning of dialysis, the higher the survival rate. The use of continuous RRT (CRRT) in pediatric patients in the ICU has almost tripled; at the same time, the use of peritoneal dialysis (PD) and intermittent hemodialysis has markedly declined. Patient age seems to be the most important factor influencing the decision on the choice of dialysis modality. Although CRRT is reported as the preferred dialysis modality for acutely ill children, PD is still the most common modality used in patients under 6 years of age. Among the several advantages that PD offers, relatively low cost is probably the most significant. Other advantages include technical simplicity, lack of a need for anticoagulation or placement of a central venous catheter, and excellent tolerance in hemodynamically unstable patients. Much controversy exists regarding the adequacy of PD in hypercatabolic patients in the ICU. Nonetheless, when Kt/V has been applied to acutely ill children, it has been shown that PD can provide adequate clearances for most infants. No prospective studies have evaluated the effect of dialysis modality on the outcomes of children with AKI in the ICU setting. The decision about dialysis modality should therefore be based on local expertise, resources available, and the patient's clinical status. abstract_id: PUBMED:32919936 Evaluation of the prevalence and factors associated with acute kidney injury in a pediatric intensive care unit. Objective: To assess the prevalence of acute kidney injury in a pediatric intensive care unit according to different diagnostic criteria (pediatric Risk, Injury, Failure, Loss, End-stage renal disease; Acute Kidney Injury Network; and Acute Kidney Injury Work Group, or Kidney Disease: Improving Global Outcomes), and to determine the factors associated with acute kidney injury as well as its outcome. Methodology: This was a cross-sectional monocentric observational study, including patients aged between 29 days and 17 years who were admitted to the pediatric intensive care unit between January 1, 2012 and December 31, 2016. To evaluate the association between the study variables and acute kidney injury, univariate and multivariate log-binomial generalized linear models were fitted. Results: The study included 1131 patients, with a prevalence of acute kidney injury of 12.6% according to the Acute Kidney Injury Network and Kidney Disease: Improving Global Outcomes criteria, and of 12.9% according to the pediatric Risk, Injury, Failure, Loss, End-stage renal disease criteria. In the multivariate analysis, older age (PR 1.007, 95% CI: 1.005-1.009), sepsis (PR 1.641, 95% CI: 1.128-2.387), demand for ventilatory support (PR 1.547, 95% CI: 1.095-2.186), and use of vasoactive amines (PR 2.298, 95% CI: 1.681-3.142) were significantly associated with the development of acute kidney injury. The mortality rate among those with acute kidney injury was 28.7%. Conclusion: Older children, diagnosis of sepsis, demand for ventilatory support, and use of vasoactive amines were correlated with a higher risk of developing acute kidney injury. The mortality associated with acute kidney injury was elevated; it is crucial that all measures that ensure adequate renal perfusion are taken in patients with risk factors, to prevent the onset of the disease.
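The log-binomial models cited in the abstract above are reported as prevalence ratios (PRs) with 95% confidence intervals. Purely as an illustration of how an unadjusted PR and its log-scale interval are obtained from a 2x2 table, here is a minimal Python sketch; the counts are hypothetical and the study itself reported adjusted multivariate estimates, so this is not a reconstruction of its analysis.

```python
import math

def prevalence_ratio(a, n_exposed, c, n_unexposed, z=1.96):
    """Unadjusted prevalence ratio (PR) with a log-scale 95% CI.

    a / n_exposed   : outcome cases among exposed patients (e.g., with sepsis)
    c / n_unexposed : outcome cases among unexposed patients
    Assumes all cell counts are non-zero.
    """
    p_exposed = a / n_exposed
    p_unexposed = c / n_unexposed
    pr = p_exposed / p_unexposed
    # Standard error of ln(PR), the same form as for a risk ratio
    se = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    lower = math.exp(math.log(pr) - z * se)
    upper = math.exp(math.log(pr) + z * se)
    return pr, (lower, upper)

# Hypothetical counts, for illustration only (not the study's data)
pr, ci = prevalence_ratio(a=40, n_exposed=150, c=103, n_unexposed=981)
print(f"PR = {pr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```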
abstract_id: PUBMED:30322370 Comprehensive Management Considerations of Select Noncardiac Organ Systems in the Cardiac Intensive Care Unit. As the acuity and complexity of pediatric patients with congenital cardiac disease have increased, there are many noncardiac issues that may be present in these patients. These noncardiac problems may affect clinical outcomes in the cardiac intensive care unit and must be recognized and managed. The Pediatric Cardiac Intensive Care Society sought to provide an expert review of some of the most common challenges of the respiratory, gastrointestinal, hematological, renal, and endocrine systems in pediatric cardiac patients. This review provides a brief overview of the available literature and common practices. abstract_id: PUBMED:12762252 Monitoring fluid balance in pediatric intensive care. The dynamic metabolism of children, together with their larger proportion of water relative to solid tissue, demands considerably more skill in calculating fluid intake and output in intensive care. This is an important element in the prevention of life-threatening conditions in children and adults, especially when multiple organ disorders are present, because their treatments may be mutually opposed. Drawing on the experience of the ICU at UZ Gent, Belgium, we developed in the Paediatric Intensive Care Unit of the Paediatric Clinic in Tuzla a method of monitoring fluid balance in children under specific conditions in which the standard approach of tracking only the planned fluid volume, body weight and diuresis is not reliable enough. With this monitoring, the dynamics of parenteral crystalloid and colloid input, as well as oral intake, can be prescribed much more precisely and with greater confidence in a positive outcome and in the prevention of more severe disorders. In this paper we present our method of observation and the results obtained between July 1999 and September 2002. Intensive volume monitoring was carried out in 88 (3.9%) of the 2255 patients treated in the PICU during this period. Fluid balance was followed for a total of 399 days, or 9576 hours. Primary nephrologic disease was the indication for balance monitoring in 17 children (19.3%), while the other 71 children (80.6%) had other conditions such as cardiac failure, diabetic ketoacidosis, chronic systemic diseases, toxic shock syndrome and intracranial hypertension. Signs of impending renal failure were noted in 7 children (7.9%) and signs of manifest renal failure in 10 (11.3%). Our experience indicates that this model is applicable under ICU working conditions. Assessment of how the patient responds to the planned fluid input is much better and safer than with standard observation, and the possibility of timely intervention against impending renal failure and other organ disorders is much greater. We recommend this model for all life-threatening conditions in paediatric intensive care units. abstract_id: PUBMED:21804902 Hemolytic uremic syndrome in pediatric intensive care units in São Paulo, Brazil. The hemolytic uremic syndrome (HUS) caused by Shiga toxin-producing Escherichia coli (STEC) is one of the most frequent causes of pediatric acute renal failure. The aim of this study was to report the clinical and microbiologic features associated with 13 post-diarrheal HUS cases identified in pediatric intensive care units in the city of São Paulo, Brazil, from January 2001 to August 2005.
Epidemiologic, clinical, and laboratory information, along with fecal and serum samples, were collected to identify the genetic sequences of Stx and to study antibodies directed against LPS O26, O111 and O157. STEC was isolated from three patients, and serotypes O26:H11, O157:H7 and O165:H- were identified. In nine patients, high levels of IgM against LPS O111 (n=2) and O157 (n=7) were detected. Dialysis was required in 76.9% of the patients; arterial hypertension was present in 61.5%, neurological complications were observed in 30.7%, and only one patient died. During a 5-year follow-up period, one patient developed chronic kidney disease. The combined use of microbiologic and serologic techniques provided evidence of STEC infection in 92.3% of the HUS cases studied, and the importance of O157 STEC as an agent of HUS in São Paulo has not been previously highlighted. abstract_id: PUBMED:37458817 Acute kidney injury in pediatric toxic shock syndrome is associated with worse clinical course in the intensive care unit. To explore the prevalence, severity, nature, and significance of acute kidney injury (AKI) among children admitted to the pediatric intensive care unit (PICU) with toxic shock syndrome (TSS). Bi-center, retrospective observational study. Children admitted for TSS to two intensive care units from 2009-2022 were included. We identified 41 children (median age 5 years, 46% females) who met the Centers for Disease Control and Prevention (CDC) definitions of TSS. Staphylococcal TSS accounted for 63% of the patients and Streptococcal TSS accounted for the remaining 37%. AKI was diagnosed in 24 (59%) (stage 1: n = 6 [15% of total], stage 2: n = 10 [24%], and stage 3: n = 8 [20%]). The worst creatinine level was measured during the first day of admission in 34 (83%) patients. The median duration of AKI was 2 days. Creatinine normalized by hospital discharge in all cases. Patients with AKI had a longer intensive care unit stay than those without AKI (6 vs. 3 days, respectively, p = 0.01), needed more respiratory support (87% vs. 47%, p = 0.002), had fewer ventilation-free days at 28 days (25 vs. 28, p = 0.01), fewer vasopressor-free days (25 vs. 28, p = 0.001), and received more blood products (p = 0.03). Conclusion: Children admitted to the PICU with TSS show a high prevalence of AKI at presentation. Creatinine levels and clearance normalize by hospital discharge in most cases. AKI in the setting of TSS could be used as an early marker of illness severity and a predictor of a more complex course. What is Known: • TSS is characterized according to the CDC by specific sets of clinical signs and symptoms in conjunction with specific laboratory findings, one of which is AKI. • AKI is associated with worse outcomes in critically ill patients in general and in septic patients in particular. What is New: • AKI is found in about 60% of all patients admitted to the PICU with a diagnosis of TSS and hence is an important defining criterion. • AKI in the setting of TSS is associated with a more complex illness course and can serve as an early marker predicting such a course. Answer: The question of whether the diagnosis of acute kidney injury (AKI) is delayed in pediatric intensive care units (PICUs) is addressed in several studies.
A preliminary, retrospective observational study among 66 patients who underwent acute peritoneal dialysis in a PICU found that most patients were in the failure class of the Risk, Injury, Failure, Loss, and End-Stage (RIFLE) criteria at the time of consultation, suggesting a delay in diagnosis. The study concluded that earlier diagnosis of AKI using RIFLE criteria and early initiation of acute peritoneal dialysis could probably improve prognosis (PUBMED:25411948). Another study evaluating the prevalence and factors associated with AKI in a PICU found that the incidence of AKI was high, with most episodes occurring within the first 24 hours after admission. The study identified several risk factors for the development of AKI, including low volume of diuresis, younger age, mechanical ventilation, vasoactive drugs, diuretics, and amphotericin. The study concluded that AKI was associated with increased in-hospital and long-term mortality after admission to the PICU (PUBMED:31344338). A retrospective cohort study described the morbimortality associated with the development of AKI in a PICU, as defined by the pediatric adaptation of the RIFLE criteria. The study found that children with AKI had longer PICU and hospital stays, a greater need for mechanical ventilation, and that the development of AKI was an independent factor of morbidity (PUBMED:24053902). These studies suggest that there may be a delay in the diagnosis of AKI in PICUs, which is associated with increased morbidity and mortality. Early recognition and management of AKI in pediatric patients are crucial for improving outcomes.
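Several of the abstracts above stage AKI with the RIFLE criteria or their pediatric and KDIGO adaptations. As a rough sketch of how the commonly quoted creatinine-ratio and urine-output thresholds map onto the Risk, Injury and Failure classes discussed in the answer, the following Python example is illustrative only: the thresholds are those usually cited for adult RIFLE, the full criteria also include GFR-based and persistent-failure (Loss, End-stage) categories, and the pediatric adaptation works from estimated creatinine clearance rather than the raw creatinine ratio.

```python
def rifle_class(baseline_cr, current_cr, urine_ml_kg_h=None, oliguria_hours=0):
    """Simplified RIFLE staging (Risk / Injury / Failure) for illustration.

    Uses the commonly quoted serum creatinine ratio thresholds plus the
    urine-output pathway; the full criteria also use GFR decrease and an
    acute-rise condition for the absolute creatinine cut-off.
    """
    ratio = current_cr / baseline_cr
    by_creatinine = None
    if ratio >= 3.0 or current_cr >= 4.0:      # creatinine in mg/dL
        by_creatinine = "Failure"
    elif ratio >= 2.0:
        by_creatinine = "Injury"
    elif ratio >= 1.5:
        by_creatinine = "Risk"

    by_urine = None
    if urine_ml_kg_h is not None:
        if urine_ml_kg_h < 0.3 and oliguria_hours >= 24:
            by_urine = "Failure"
        elif urine_ml_kg_h < 0.5 and oliguria_hours >= 12:
            by_urine = "Injury"
        elif urine_ml_kg_h < 0.5 and oliguria_hours >= 6:
            by_urine = "Risk"

    severity = {None: 0, "Risk": 1, "Injury": 2, "Failure": 3}
    # Return the more severe of the two pathways (None means no AKI by these rules)
    return by_creatinine if severity[by_creatinine] >= severity[by_urine] else by_urine

# Example: creatinine rising from 0.4 to 0.9 mg/dL with preserved urine output
print(rifle_class(baseline_cr=0.4, current_cr=0.9))  # -> 'Injury'
```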
Instruction: Beak-Like Extension of the Pancreatic Uncinate Process on MDCT: Is It Hyperplasia or Movement? Abstracts: abstract_id: PUBMED:27224221 Beak-Like Extension of the Pancreatic Uncinate Process on MDCT: Is It Hyperplasia or Movement? Objective: We aimed to evaluate the pancreatic uncinate process with a beak-like extension (BLE) beyond the left border of the superior mesenteric artery, to define the cause of BLE, and to differentiate BLE from hyperplasia. Methods: We retrospectively reviewed 1042 triple-phase contrast-enhanced multidetector-row computed tomography (3P-CE-MDCT) examinations of 500 patients. Finally, 38 patients (28 men, 10 women; mean age, 66 years) with 140 3P-CE-MDCT images showing BLE were studied regarding BLE size, contour, and cause. The superior mesenteric artery position was also evaluated. Results: Beak-like extensions were found in 7.6% of patients. Most were caused by movement of the small bowel mesentery (n = 21, 55%), with deviation of mesenteric vessels or mass effect from expanded adjacent organs (n = 3, 8%). Seven patients (18.5%) had true hyperplasia. Conclusions: Beak-like extension is caused by movement of the small bowel mesentery with deviation of mesenteric vessels or by adjacent organ expansion. Beak-like extension closely mimics other pathology on nonenhanced MDCT. abstract_id: PUBMED:20544429 Beak necrosis in Hungarian partridges (Perdix perdix) associated with beak-bits and avian poxvirus infection. Proliferative growth, consistent with poxvirus infection, encapsulated plastic beak-bits and covered the dorsal portion of the upper beak and nares of adult male and female captive-raised Hungarian partridges. Three representative birds were submitted to the Wisconsin Veterinary Diagnostic Laboratory for necropsy. Lesions in the necropsied birds extended through the nares, where the plastic bit ends are designed to rest. The lesions also variably extended caudally into the oropharynx and cranially within the beak epithelium, and included palate deformity and beak necrosis. Poxvirus was diagnosed in all of the birds examined based on histopathology, electron microscopy, and polymerase chain reaction amplification and sequencing. This report is the first to describe avian pox lesions associated with the application of beak-bits and the resulting beak and oral pathology. abstract_id: PUBMED:28609372 Pancreatic Polypeptide Cell Proliferation in the Pancreas and Duodenum Coexisting in a Patient With Pancreatic Adenocarcinoma Treated With a GLP-1 Analog. A partial pancreaticogastroduodenectomy was performed on a 66-year-old man with type 2 diabetes mellitus because of an invasive, moderately differentiated adenocarcinoma in the head of the pancreas. In the adjacent grossly normal tissue of the uncinate process, there was a massive proliferation of pancreatic polypeptide (PP) cells that was confined to this region but showed an invasive pattern. Strikingly, in the heaped area of his duodenum there was a large number of PP, glucagon, and a few insulin cells arranged in mini-islet-like patterns composed of glucagon and insulin cells. Among the etiological factors, the possible long-lasting effects of the GLP-1 analog, with which the patient was treated, are discussed. This is the first report in the literature of both the coexistence of a pancreatic adenocarcinoma and invasive PPoma and the occurrence of PP and insulin cells in human duodenal mucosa. abstract_id: PUBMED:6737606 Beak and feather dystrophy in wild sulphur-crested cockatoos (Cacatua galerita).
Three flocks of wild sulphur-crested cockatoos in southeastern Australia had a 10-20% prevalence of feather and beak deformities. In affected birds, the crest, tail, contour and down feathers were lost or reduced in length owing to a "pinching off" 0.5-1 cm above the skin. The underlying skin was discolored brown. Flight feathers appeared normal and could sustain flight. In mildly affected birds, only the crest and down feathers were involved. Histological examination of affected feathers showed necrosis of developing cells in their epidermal collars. The epidermis was moderately hyperplastic and contained numerous large macrophages, with large intracytoplasmic granules. Electron microscopic examination showed these granules to consist of regular lattices of 17-20-nm-diameter particles, resembling parvovirus particles. In some birds the upper beak was enlarged and had longitudinal cracks. Histological examination of the beak showed necrosis of its epidermis. It was concluded that a viral infection may have caused feather and beak dystrophy in these birds. abstract_id: PUBMED:9756017 Duodenum-preserving resection of the pancreatic head for mucinous ductal ectasia without overt carcinoma. Background/aims: The clinical characteristics of mucinous ductal ectasia (MDE) of the pancreas without overt carcinoma have not been clarified. To clarify MDE and assess the optimal treatment procedure, including the technique of duodenum-preserving resection of the pancreatic head (DpRPH), we studied four patients. Methodology: Our patients consisted of three men and one woman, with a mean age of 71 years. The patients underwent DpRPH (n=3) or the pylorus-preserving Whipple procedure (PpW) (n=1). Clinicopathological features, postoperative pancreatic function, and technique to preserve duodenal blood flow were studied. Results: All patients had intraductal mucin-hypersecretion and multilocular cysts lined by hyperplastic epithelium. The lesions were located in the uncinate process (n=3) or head-body (n=1) of the pancreas. DpRPH totally removed the lesions in the uncinate process. Of the three patients receiving DpRPH, dusky duodenum and a postoperative duodenal ulcer developed in two whose gastroduodenal arteries (GDA) were divided, but did not develop in one with undivided GDA. Postoperative glucose tolerance test and peptide para-aminobenzoic acid test after DpRPH showed better values than those after PpW. All patients are alive and well 22 to 40 months after surgery. Conclusions: DpRPH is a new standard for MDE. During DpRPH, preservation of the GDA and the superior portion of the pancreatic head is recommended to maintain an adequate duodenal blood flow. abstract_id: PUBMED:27775943 Physiological Uptake in the Pancreatic Head on Somatostatin Receptor Scintigraphy Using [111In-DTPA]Octreotide: Incidence and Mechanism. Purpose: Physiological uptake in the uncinate process or pancreatic head has been described with Ga-labeled PET tracers for somatostatin receptor imaging. In-DTPA-octreotide is the only registered radiopharmaceutical for the imaging of neuroendocrine tumors. We studied the uptake in this region of the pancreatic head on somatostatin receptor scintigraphy (SRS) using In-DTPA-octreotide in a large group of patients. Furthermore, known physiological and clinical characteristics are discussed in an attempt to elucidate this phenomenon. Methods: Four hundred seven patients underwent SRS using In-DTPA-octreotide in our department in 2014. 
After excluding patients with a known malignancy in or close to the pancreas, as well as all scans without SPECT/CT of the upper abdomen, we reviewed 178 scans in total. The uptake was graded on a 4-point scale that correlates the uptake in the pancreatic head to physiological uptake in the liver. Results: Uptake in the region of the pancreatic head, including the uncinate process, was seen in 46 (26%) of 178 patients on SPECT/CT and in 12 patients (7%) on planar imaging. On SPECT/CT, uptake was lower than the liver in 26 patients (15%), equal to the liver in 17 patients (10%), and higher than the liver in 3 patients (2%). In patients with diabetes mellitus (DM), the incidence of uptake in the pancreatic head was 50% on SPECT/CT. Conclusions: Physiological uptake in the pancreatic head is seen on SPECT/CT with In-DTPA-octreotide in 26% of patients, and the incidence is doubled in patients with DM. Previous case reports showed uptake in the pancreatic head due to histologically proven pancreatic polypeptide (PP) cell hyperplasia. Also, patients with DM have elevated serum PP concentrations, which is likely due to PP cell hyperplasia. Because 90% of PP cells are present in the pancreatic head, PP cell hyperplasia is the most likely explanation for visualization of the pancreatic head on SRS in a substantial number of patients. abstract_id: PUBMED:11558632 Expression of heparin-binding epidermal growth factor-like growth factor in pancreatic adenocarcinoma. Background: Previous studies demonstrated that heparin-binding epidermal growth factor-like growth factor (HB-EGF) contributes to carcinogenesis and carcinoma progression. In this study, we investigated its expression in human pancreatic adenocarcinoma. Methods: We immunohistochemically investigated the expression of HB-EGF in 40 cases of pancreatic adenocarcinoma. Results: HB-EGF was only occasionally and faintly expressed in normal and hyperplastic pancreas duct epithelia. In pancreatic adenocarcinoma, 22 (55.0%) of the 40 cases were classified as positive for HB-EGF. Its expression was more frequently observed in cases with a low Ki-67 labeling index, well-differentiated histology, early stage, small size, absence of lymph node metastasis, and low EGF-R expression. Conclusion: These results suggest that HB-EGF mainly plays a role in the early phase of the progression of pancreatic adenocarcinoma. abstract_id: PUBMED:12751598 Intraductal papillary mucinous tumor of the pancreas: computerized tomography and magnetic resonance imaging features. The aim of this study was to analyze the computerized tomography (CT) and magnetic resonance imaging (MRI) features of intraductal papillary mucinous tumor (IPMT) of the pancreas. The cases of eight patients with pathologically proven IPMT (1 papillary hyperplasia, 7 adenocarcinoma) of the pancreas were retrospectively reviewed. There were five men and three women with ages ranging from 42 to 82 years. Imaging studies included six thin-section dynamic CT scans, seven MRI scans, one MR cholangiopancreatography scan, and two endoscopic retrograde cholangiopancreatography scans. There was only one benign IPMT, which presented as a unilocular cyst in the pancreatic body with no mural nodules and no dilatation of the main pancreatic duct (MPD). All seven patients with malignant IPMT had multilocular cysts with papillary projections in the pancreatic head and/or uncinate process accompanied by dilated MPD (5 diffuse, 2 segmental). Communication between the cystic lesions and the MPD was evident in all seven patients.
One patient had small mural nodules in the branch ducts of the pancreatic body and five had a bulging papilla with a patulous orifice. A mass effect resulting in biliary obstruction was shown in one patient. One patient had a ruptured cyst with mucin leakage into the right anterior pararenal space following sono-guided aspiration. In conclusion, the main imaging feature of IPMT in our patients was a multilocular cyst with papillary projections located in the pancreatic head and uncinate process. Although CT and MRI cannot differentiate mucin content from pancreatic juice, communication between the cystic lesion and the dilated MPD and a bulging papilla with a patulous orifice are characteristics of IPMT. abstract_id: PUBMED:18266952 RNA interference-mediated silencing of the polo-like kinase 1 gene enhances chemosensitivity to gemcitabine in pancreatic adenocarcinoma cells. Gemcitabine is the first-line chemotherapeutic agent for advanced adenocarcinoma of the pancreas; however, chemoresistance to gemcitabine remains a major cause of failure for the clinical treatment of this disease. Polo-like kinase 1 (Plk-1) is highly expressed in pancreatic cancer cell lines and pancreatic tumour tissues, and is involved in a wide variety of cell cycle processes. Nevertheless, its biological role and implication for gemcitabine resistance are not clearly defined. In this study, we used RNA-interference (RNAi)-mediated depletion of Plk-1 to determine its potential for sensitizing pancreatic tumour cells to gemcitabine. We showed that the level of Plk-1 protein was correlated significantly with gemcitabine resistance in human pancreatic adenocarcinoma cells and that overexpression of Plk-1 reduced sensitivity to gemcitabine in these cells. In addition, small interfering RNA (siRNA)-mediated knockdown of Plk-1 caused cell cycle arrest at G2/M and the reduction of cellular proliferation. More importantly, the treatment of pancreatic cancer cells with Plk-1 siRNA followed by exposure to gemcitabine dramatically decreased cell viability and increased cellular apoptosis, as compared with treatment with either agent alone. These observations indicate that down-regulation of Plk-1 expression by RNAi enhances gemcitabine sensitivity and increases gemcitabine cytotoxicity in pancreatic tumour cells. This is the first demonstration that the combination of Plk-1 gene therapy and gemcitabine chemotherapy has synergistic anti-tumour activity against pancreatic carcinoma in vitro. This combination treatment warrants further investigation as an effective therapeutic regimen for patients with resistant pancreatic cancer and other tumours. abstract_id: PUBMED:26070712 Transcriptional co-factor Transducin beta-like (TBL) 1 acts as a checkpoint in pancreatic cancer malignancy. Pancreatic ductal adenocarcinoma (PDAC) is the fourth leading cause of cancer fatalities in Western societies, characterized by high metastatic potential and resistance to chemotherapy. Critical molecular mechanisms of these phenotypical features still remain unknown, thus hampering the development of effective prognostic and therapeutic measures in PDAC. Here, we show that transcriptional co-factor Transducin beta-like (TBL) 1 was over-expressed in both human and murine PDAC. Inactivation of TBL1 in human and mouse pancreatic cancer cells reduced cellular proliferation and invasiveness, correlating with diminished glucose uptake, glycolytic flux, and oncogenic PI3 kinase signaling which in turn could rescue TBL1 deficiency-dependent phenotypes. 
TBL1 deficiency both prevented and reversed pancreatic tumor growth, mediated transcriptional PI3 kinase inhibition, and increased chemosensitivity of PDAC cells in vivo. As TBL1 mRNA levels were also found to correlate with PI3 kinase levels and overall survival in a cohort of human PDAC patients, TBL1 was identified as a checkpoint in the malignant behavior of pancreatic cancer and its expression may serve as a novel molecular target in the treatment of human PDAC. Answer: The beak-like extension (BLE) of the pancreatic uncinate process observed on multidetector-row computed tomography (MDCT) can be caused by either movement of the small bowel mesentery with deviation of mesenteric vessels or by adjacent organ expansion. In a study that reviewed 1042 triple-phase contrast-enhanced MDCT examinations of 500 patients, BLE was found in 7.6% of patients. Most cases were attributed to movement of the small bowel mesentery (55%), with deviation of mesenteric vessels or mass effect from expanded adjacent organs (8%). True hyperplasia was identified in 18.5% of the patients with BLE (PUBMED:27224221). Therefore, while BLE can mimic other pathologies on nonenhanced MDCT, it is not exclusively due to hyperplasia and can often be attributed to movement-related causes.
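The 7.6% figure quoted in the answer corresponds to 38 of the 500 patients whose examinations were reviewed. The study does not report an interval around this prevalence; purely as an arithmetic illustration, a Wilson 95% confidence interval for that proportion can be computed as follows.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half_width, centre + half_width

low, high = wilson_ci(38, 500)
print(f"38/500 = {38/500:.1%}, 95% CI roughly {low:.1%} to {high:.1%}")
# Prints an interval of roughly 5.6% to 10.3%
```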
Instruction: Does hyperexpansion of the native lung adversely affect outcome after single lung transplantation for emphysema? Abstracts: abstract_id: PUBMED:9787839 Does hyperexpansion of the native lung adversely affect outcome after single lung transplantation for emphysema? Preliminary findings. Rationale And Objectives: The authors evaluated the effect of the native emphysematous lung on graft function after single lung transplantation. Materials And Methods: Thirty-two patients who underwent single lung transplantation were examined with radiography preoperatively for degree of emphysema and postoperatively for hyperexpansion of the native lung. All patients underwent ventilation-perfusion scanning before transplantation and ventilation scanning after transplantation. Pulmonary function and arterial partial pressure of oxygen were also measured before and after surgery. The postoperative course was graded on a subjective scale. Results: Hyperexpansion of the native lung was seen in 16 of the 32 patients in the postoperative period. On the basis of serial measurements of forced expiratory volume in 1 second, these patients fared poorly in the postoperative period compared with patients without hyperexpansion. Pulmonary blood flow to the native lung, as measured with perfusion scintigraphy, paradoxically increased in 11 patients after transplantation. Nine of these 11 patients demonstrated hyperinflation of the native lung, suggesting that graft compression adversely affects blood flow to the transplanted lung. Conclusion: Hyperexpansion of the native lung after single lung transplantation for emphysema may have a deleterious effect on graft function and possibly on clinical outcome. abstract_id: PUBMED:33367595 Simultaneous single lobar lung transplantation and contralateral lung volume reduction. Lung transplantation (LTx) and lung volume reduction surgery are established therapies for end-stage chronic obstructive pulmonary disease. Although native lung hyperinflation is a well-known complication of unilateral LTx for chronic obstructive pulmonary disease, the unilateral procedure continues to be performed because of severe shortages of cadaveric donors. As native lung hyperinflation can adversely affect the graft, all possible protection should be provided for patients with one-lobe transplantation. We report an emphysematous juvenile patient who successfully underwent simultaneous living-donor, single-lobe LTx and volume reduction in the contralateral lung. abstract_id: PUBMED:8694592 Unilateral volume reduction after single-lung transplantation for emphysema. Background: Single-lung transplantation has become accepted therapy for patients with end-stage emphysema. Hyperinflation of the native lung can occur after single-lung transplantation, with mediastinal shifting and compression of the transplanted lung. A volume reduction operation (pneumectomy) may relieve symptoms of dyspnea and improve exercise tolerance. Methods: Three of 66 patients who underwent single-lung transplantation for emphysema developed native lung hyperexpansion and mediastinal shifting causing compression of the transplanted contralateral lung at 12, 17, and 42 months after transplantation. There were 2 men and 1 woman. Unilateral volume reduction was performed without complication in all 3 patients.
Results: All patients were noted to have marked improvement in chest radiographs after volume reduction, substantial relief of dyspnea, and improvement in exercise tolerance. An improvement in pulmonary function test results was noted in 1 patient, but tests were not done for the other 2 patients. Conclusions: Patients with chronic obstructive pulmonary disease who undergo single-lung transplantation may have symptomatic hyperexpansion of the native lung requiring volume reduction months to years after transplantation. Unilateral volume reduction can be safely performed in the posttransplantation period. abstract_id: PUBMED:9089995 Native lung complications after single lung transplantation for emphysema. We reviewed the impact of the presence of the native diseased contralateral lung on the outcome after single lung transplantation for emphysema. Twenty consecutive recipients of single lung transplants for emphysema were reviewed for complications related to the native lung. Five patients (25%) suffered major complications arising in the native lung and resulting in serious morbidity and mortality. The timing of onset varied from 1 day to 43 months after transplantation. We conclude that the susceptibility of the native lung to complications such as those described in this report is an additional fact to be considered in choosing the ideal transplant procedure for patients with obstructive lung disease. abstract_id: PUBMED:24506317 Endoscopic treatment of native lung hyperinflation using endobronchial valves in single-lung transplant patients: a multinational experience. Background: Hyperinflation of the native lung (NLH) is a known complication to single-lung transplantation for emphysema. The hyperinflation can lead to compression of the graft and cause respiratory failure. Endobronchial valves have been used to block airflow in specific parts of the native lung, reducing the native lung volume and relieving the graft. Objective: We report short-term follow-up and safety from 14 single-lung transplant patients with NLH treated with bronchoscopic lung volume reduction using endobronchial valves. Methods: Retrospective clinical information related to endobronchial valve treatment was obtained from four centres. All patients were treated with IBV(TM) Valve System (Spiration, Olympus Respiratory America, Redmond, WA, USA). All patients had evidence of severe NLH with mediastinal displacement. Results: A total of 74 IBV valves were placed in 14 patients, with an average of 5.3 (range 2-10). Five patients had two procedures with staged treatment. Eleven patients reported symptom relief, and nine had lung function improvements. There was a significant increase in forced expiratory volume in 1 s of 9% (P = 0.013) and forced vital capacity of 15% (P = 0.034) within the first months after treatment. There were no reported device-related adverse events nor reports of migration. Two patients had pneumothorax. One patient had pneumonia in the location of the valve placement, and another had infection within days. Three other patients were hospitalised with infection 2 months after treatment. Conclusions: Treating NLH with IBV endobronchial valves leads to clinical improvement in the majority of patients, and the treatment has an acceptable safety. abstract_id: PUBMED:32775384 Spontaneous pneumothorax in a single lung transplant recipient-a blessing in disguise: A case report. 
Background: End-stage chronic obstructive pulmonary disease (COPD) is one of the common lung diseases referred for lung transplantation. According to the international society of heart and lung transplantation, 30% of all lung transplantations are carried out for COPD alone. When compared to bilateral lung transplant, single-lung transplant (SLT) has similar short-term and medium-term results for COPD. For patients with severe upper lobe predominant emphysema, lung volume reduction surgery is an excellent alternative which results in improvement in functional status and long-term mortality. In 2018, endobronchial valves were approved by the Food and Drug Administration for severe upper lobe predominant emphysema as they demonstrated improvement in lung function, exercise capacity, and quality of life. However, the role of endobronchial valves in native lung emphysema in SLT patients has not been studied. Case Summary: We describe an unusual case of severe emphysema who underwent a successful SLT 15 years ago and had gradual worsening of lung function suggestive of chronic lung allograft dysfunction. However, her lung function improved significantly after a spontaneous pneumothorax of the native lung resulting in auto-deflation of large bullae. Conclusion: This case highlights the clinical significance of native lung hyperinflation in single lung transplant recipient and how spontaneous decompression due to pneumothorax led to clinical improvement in our patient. abstract_id: PUBMED:10456403 Complications in the native lung after single lung transplantation. Objectives: Single lung transplantation is a viable option for patients with end-stage pulmonary disease; despite encouraging results, we observed serious complications arising in the native lung. We retrospectively reviewed 36 single lung transplants to evaluate the incidence of complications arising in the native lung, their treatment and outcome. Methods: Between 1991 and 1997, 35 patients received 36 single lung transplants for emphysema (16), pulmonary fibrosis (14), lymphangioleiomyomatosis (4), primary pulmonary hypertension (1) and bronchiolitis obliterans (1). The clinical records were reviewed and the complications related to the native lung were divided into early (up to 6 weeks after the transplant) and late complications. Results: Nineteen complications occurred in 18 patients (50%), leading to death in nine (25%). Early complications (within 6 weeks from the transplant) were bacterial pneumonia (1), overinflation (3), retention of secretions with bronchial obstruction and atelectasis (1), hemothorax (1), pneumothorax (1) and invasive aspergillosis (3); one patient showed active tuberculosis at the time of transplantation. Two patients developed bacterial pneumonia and invasive aspergillosis leading to sepsis and death. The other complications were treated with separate lung ventilation (1), bronchoscopic clearance (1), chest tube drainage (1) and wedge resection and pleurodesis (mechanical) by VATS (1). One patient with hyperinflation of the native lung eventually required pneumonectomy and died of sepsis. The patient with active tuberculosis is alive and well after 9 months of medical treatment. Late complications were recurrent pneumothorax (4), progressive overinflation with functional deterioration (2), aspergillosis (1) and pulmonary nocardiosis (1). 
Recurrent pneumothorax was treated with chest tube drainage alone (1), thoracoscopic wedge resection and/or pleurodesis (2) and pneumonectomy (1); hyperinflation was treated with thoracoscopic lung volume reduction in both cases; both patients with late infectious complications died. Conclusions: After single lung transplantation, the native lung can be the source of serious problems. Early and late infectious complications generally result in a fatal outcome; the other complications can be successfully treated in most cases, even if surgery is required. abstract_id: PUBMED:9834996 Reduction of native lung volume after single-lung transplantation for emphysema. We report three cases of volume reduction surgery in three single lung transplant recipients with emphysema. Each patient had a late decline in lung function with hyperinflation of the native lung. Lung function was improved post-operatively for two patients. The relief of thoracic overdistension may be considered in single lung transplant recipients who exhibit clinically significant functional deterioration. abstract_id: PUBMED:8417944 Prolonged independent lung respiratory treatment after single lung transplantation in pulmonary emphysema. Single lung transplantation (SLT) is now successfully used in patients with severe emphysema. Mechanical imbalance between the native emphysematous and the healthy transplanted lung can be easily managed, unless severe graft failure occurs, leading to acute respiratory failure. Emergency retransplantation has been used in this setting, since the conventional approach to adult respiratory distress syndrome (ARDS) (mechanical ventilation and positive end-expiratory pressure [PEEP]) fails, due to the mechanical discrepancy between the two lungs. We describe two cases of severe graft failure following SLT in emphysema patients that were successfully treated with prolonged independent respiratory treatment. Mechanical ventilation and PEEP were applied to the failing transplanted lung while the native emphysematous lung was maintained on spontaneous breathing to avoid hyperexpansion and barotrauma. The independent lung respiratory treatment lasted 35 and 25 days, respectively: to our knowledge, these are among the longest-lasting independent respiratory treatments reported. The management was simplified by the early use of a double-lumen tracheostomy cannula as an alternative to an orotracheal double-lumen tube. abstract_id: PUBMED:12154426 Differential lung ventilation after single-lung transplantation for emphysema. Objective: To review outcome and cardiovascular and respiratory function after initiation of differential lung ventilation for acute severe native lung hyperinflation in patients who have had a single-lung transplant for end-stage emphysema. Design: Retrospective review. Setting: Cardiothoracic tertiary referral center. Participants: Thirteen patients who had differential lung ventilation for acute severe native lung hyperinflation, of a total of 132 patients who had a single-lung transplant for end-stage emphysema between 1988 and the end of 2000. Interventions: None. Measurements and Main Results: Thirteen patients had differential lung ventilation for acute severe native lung hyperinflation; 7 survived to 1 year after transplant. There was a highly significant (p = 0.0006) improvement in mean PaO2 from 8.23 (95% confidence interval [CI], 6.15 to 10.3) to 16.6 (95% CI, 12.84 to 20.45) 1 hour after start of differential lung ventilation.
The average ratio of estimated dynamic compliance in the native lung compared with the transplanted (donor) lung was 2.69 (95% CI, 1.75 to 3.62). Conclusion: In addition to previous case reports, this series shows that differential lung ventilation is an appropriate treatment for acute severe native lung hyperinflation. A native-to-transplanted lung ratio of estimated effective dynamic compliance of 2.69 or greater may require differential lung ventilation. Answer: Yes, hyperexpansion of the native lung does adversely affect the outcome after single lung transplantation for emphysema. Studies have shown that hyperexpansion of the native lung can be seen postoperatively in patients who have undergone single lung transplantation for emphysema. This condition has been associated with poorer postoperative outcomes compared with patients without hyperexpansion, as indicated by serial measurements of forced expiratory volume in one second (PUBMED:9787839). Additionally, hyperinflation of the native lung can lead to compression of the transplanted lung, which may adversely affect blood flow to the graft (PUBMED:9787839). Native lung hyperinflation is a recognized complication of unilateral lung transplantation for chronic obstructive pulmonary disease (COPD), and it can negatively impact the graft. Therefore, measures such as simultaneous single lobar lung transplantation and contralateral lung volume reduction have been reported to protect the graft in patients with one-lobe transplantation (PUBMED:33367595). Furthermore, unilateral volume reduction surgery has been performed to relieve symptoms of dyspnea and improve exercise tolerance in patients who experienced symptomatic hyperexpansion of the native lung after single-lung transplantation (PUBMED:8694592). Complications in the native lung, including hyperinflation, can lead to serious morbidity and mortality after single lung transplantation for emphysema (PUBMED:9089995). Endoscopic treatment using endobronchial valves has been used to reduce native lung volume and relieve graft compression, leading to clinical improvement in the majority of patients treated (PUBMED:24506317). Additionally, spontaneous pneumothorax in a single lung transplant recipient, which resulted in auto-deflation of large bullae, highlighted the clinical significance of native lung hyperinflation and its impact on lung function (PUBMED:32775384). In summary, hyperexpansion of the native lung after single lung transplantation for emphysema can have a deleterious effect on graft function and clinical outcome, and various interventions have been developed to mitigate these adverse effects.
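The differential-ventilation series above uses the estimated dynamic compliance ratio between the native and the transplanted lung (about 2.69 on average) as a marker of severe mechanical mismatch. As a sketch of the underlying arithmetic only: dynamic compliance is commonly estimated as tidal volume divided by the difference between peak inspiratory pressure and PEEP, the per-lung readings below are hypothetical, and the 2.69 figure is taken from the abstract simply as a reference point rather than a validated decision rule.

```python
def dynamic_compliance(tidal_volume_ml, peak_pressure_cmh2o, peep_cmh2o):
    """Estimated dynamic compliance in mL/cmH2O: tidal volume / (PIP - PEEP)."""
    return tidal_volume_ml / (peak_pressure_cmh2o - peep_cmh2o)

# Hypothetical per-lung readings during independent (differential) ventilation
native = dynamic_compliance(tidal_volume_ml=280, peak_pressure_cmh2o=12, peep_cmh2o=2)  # emphysematous native lung
graft = dynamic_compliance(tidal_volume_ml=150, peak_pressure_cmh2o=28, peep_cmh2o=8)   # transplanted lung

ratio = native / graft
print(f"native {native:.1f} mL/cmH2O, graft {graft:.1f} mL/cmH2O, ratio {ratio:.2f}")
if ratio >= 2.69:  # figure quoted in the series above, used here only as a reference point
    print("Compliance mismatch at or above the level reported in that series")
```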
Instruction: Can enoxaparin safely replace unfractionated heparin during coronary intervention in acute coronary syndromes? Abstracts: abstract_id: PUBMED:15262028 Can enoxaparin safely replace unfractionated heparin during coronary intervention in acute coronary syndromes? Background: Enoxaparin has gained wide acceptance in patients with acute coronary syndromes. However, there is uncertainty regarding management of patients who require coronary intervention while on enoxaparin. Some physicians withhold the morning dose of enoxaparin prior to coronary intervention while others switch patients to unfractionated heparin. Neither method provides optimal anticoagulation in the hours preceding intervention. There are no published controlled data to assess the safety of coronary intervention using enoxaparin alone in patients with acute coronary syndromes. Methods: We prospectively compared enoxaparin to unfractionated heparin during coronary angiography and intervention. Sixty-four patients admitted to the coronary care unit (CCU) were given enoxaparin twice daily, including on the morning of the procedure. Coronary angiography and intervention were performed without additional unfractionated heparin. The control group comprised 52 patients admitted to Internal Medicine for an acute coronary syndrome. These were also given enoxaparin, but the morning dose was withheld and unfractionated heparin was used during the procedure. Results: Patients in both groups had similar baseline characteristics. No significant differences were observed between the two groups in procedural success rate, complications or bleeding. One-year follow-up showed similar rates of hospitalization and mortality. Conclusion: Enoxaparin seems to offer safe and effective procedural anticoagulation in patients undergoing percutaneous intervention for acute coronary syndromes. Patients given enoxaparin can probably have coronary intervention without interruption of enoxaparin treatment and without additional procedural anticoagulation. These findings require confirmation in larger, randomized trials. abstract_id: PUBMED:26946415 Anticoagulation in coronary intervention. Percutaneous coronary intervention (PCI) induces thrombin generation and is associated with the risk of acute, subacute, or long-term ischaemic events. Therefore, intravenous anticoagulation is recommended to minimize thrombotic complications. The intensity and duration of anticoagulation needed are dependent on the clinical presentation (elective PCI for stable coronary artery disease, PCI for non-ST elevation acute coronary syndromes, or primary PCI for ST-segment elevation myocardial infarction) and procedural features. As both ischaemic and periprocedural bleeding complications are associated with acute and long-term mortality, the optimal level of anticoagulation and the best agents are a matter of debate. Despite a number of limitations and the lack of large randomized clinical trials, unfractionated heparin (UFH) is still used in the majority of interventions. Intravenous enoxaparin, a low-molecular-weight heparin, leads to a more predictable level of anticoagulation and has been compared with UFH in patients with elective PCI and primary PCI with favourable results. The direct thrombin inhibitor bivalirudin has been studied in numerous trials and consistently shown to reduce bleeding complications when compared with UFH with or without glycoprotein IIb/IIIa inhibitors.
This review will summarize the current status of anticoagulation for PCI and the results of most recent trials and give recommendations for different clinical scenarios. abstract_id: PUBMED:26800365 Developing drugs for use before, during and soon after percutaneous coronary intervention. Introduction: Percutaneous coronary intervention (PCI) is a milestone for treating coronary artery disease (CAD). Antithrombotic therapy is essential to prevent ischemic complications, including the microvascular no-reflow, while minimizing bleeding events. Areas Covered: This overview discusses available and developing drugs for PCI including anticoagulants, antiplatelets and treatment of no-reflow. Expert Opinion: For years unfractionated heparin (UFH) has been the unique anticoagulant to be used before and during PCI. Enoxaparin showed similar efficacy and safety, yet, based on recent trials, bivalirudin has been shown to have some benefits, particularly for patients with ST-segment elevation myocardial infarction (STEMI). The evidence concerning new anticoagulants is still preliminary, except for new oral anticoagulants, particularly rivaroxaban that showed intriguing findings and is currently under investigation. Dual antiplatelet therapy (DAPT) is the standard of care after PCI, but new developments have recently emerged. Indeed, ticagrelor and prasugrel are currently recommended over clopidogrel due to their significant reduction of ischemic events in acute coronary syndrome (ACS) whereas clopidogrel remains the choice in stable CAD. Among new agents, vorapaxar and cangrelor showed positive but limited evidence and might be considered at least in selected patients. Conversely, evidence on effective treatments for no-reflow remains limited and would require future dedicated research. abstract_id: PUBMED:25120366 Clinical and economic studies of eptifibatide in coronary stenting. Platelet adhesion and aggregation at the site of coronary stenting can have catastrophic clinical and economic consequences. Therefore, effective platelet inhibition is vital during and after percutaneous coronary intervention. Eptifibatide is an intravenous antiplatelet agent that blocks the final common pathway of platelet aggregation and thrombus formation by binding to glycoprotein IIb/IIIa receptors on the surface of platelets. In clinical studies, eptifibatide was associated with a significant reduction of mortality, myocardial infarction, or target vessel revascularization in patients with acute coronary syndrome undergoing percutaneous coronary intervention. However, recent trials conducted in the era of dual antiplatelet therapy and newer anticoagulants failed to demonstrate similar results. The previously seen favorable benefit of eptifibatide was mainly offset by the increased risk of bleeding. Current American College of Cardiology/American Heart Association guidelines recommend its use as an adjunct in high-risk patients who are undergoing percutaneous coronary intervention with traditional anticoagulants (heparin or enoxaparin), who are not otherwise at high risk of bleeding. In patients receiving bivalirudin (a newer safer anticoagulant), routine use of eptifibatide is discouraged except in select situations (eg, angiographic complications). Although older pharmacoeconomic studies favor eptifibatide, in the current era of P2Y12 inhibitors and newer safer anticoagulants, the increased costs associated with bleeding make the routine use of eptifibatide an economically nonviable option. 
The cost-effectiveness of eptifibatide with the use of strategies that decrease the bleeding risk (eg, transradial access) is unknown. This review provides an overview of key clinical and economic studies of eptifibatide well into the current era of potent antiplatelet agents, novel safer anticoagulants, and contemporary percutaneous coronary intervention. abstract_id: PUBMED:30069327 Comparison of enoxaparin and unfractionated heparin in patients with non-ST-segment elevation acute coronary syndrome undergoing percutaneous coronary intervention: a systematic review and meta-analysis. Background: No randomized trial has been conducted to directly compare enoxaparin with unfractionated heparin (UFH) in patients undergoing percutaneous coronary intervention (PCI) for non-ST-segment elevation acute coronary syndrome (NSTE-ACS). In an era where early invasive strategies are recommended in high-risk patients, the effect of enoxaparin and UFH needs to be re-evaluated. The authors performed a meta-analysis to determine whether enoxaparin is superior to UFH in patients with NSTE-ACS undergoing PCI. Methods: The composite efficacy end point included all-cause mortality and myocardial infarction (MI) in the hospital or within 60 days. Major bleeding, as defined in the individual clinical trials evaluated, was the main safety endpoint within the same time period. Pooled estimates of the difference in outcome between enoxaparin and UFH were calculated using fixed or random effects models. Results: A total of 8,861 patients from 4 trials were included. In the pooled analysis, rates of death or MI were similar in patients treated with enoxaparin and UFH [risk ratio (RR), 0.89, 95% confidence interval (CI): 0.77-1.02, P=0.09; I2 = 50%]. Major bleeding was also similar between enoxaparin and UFH (RR, 1.21, 95% CI: 0.94-1.56, P=0.15, I2 = 39%). A subgroup analysis, including randomized trials only or trials with a large sample size, and a leave-one-out sensitivity analysis demonstrated results similar to those above. Conclusions: In patients undergoing PCI for NSTE-ACS, rates of both death/MI and major bleeding were similar between patients treated with enoxaparin and UFH. abstract_id: PUBMED:29339166 Anticoagulation in Acute Coronary Syndrome-State of the Art. Early intravenous anticoagulation is the cornerstone of treatment for patients admitted with an acute coronary syndrome: it antagonizes the ongoing coronary thrombosis and facilitates percutaneous coronary intervention, leading to a reduction in mortality and acute stent thrombosis. Unfractionated heparin, enoxaparin, bivalirudin and fondaparinux have been extensively studied in large randomized controlled trials and meta-analyses with the same objective: reducing the ischemic burden without increasing hemorrhagic events. This conundrum is evolving with the generalization of radial-artery access, the use of potent P2Y12 inhibitors, and the trend towards a tailored approach to the balance of ischemic and bleeding risk. In this systematic review, we aimed to present the evidence-based data and strategies for each anticoagulant in the setting of acute coronary syndrome with and without ST-segment elevation. abstract_id: PUBMED:22306479 Efficacy and safety of enoxaparin versus unfractionated heparin during percutaneous coronary intervention: systematic review and meta-analysis. Objective: To determine the efficacy and safety of enoxaparin compared with unfractionated heparin during percutaneous coronary intervention.
Design: Systematic review and meta-analysis. Data Sources: Medline and Cochrane database of systematic reviews, January 1996 to May 2011. Study Selection: Randomised and non-randomised studies comparing enoxaparin with unfractionated heparin during percutaneous coronary intervention and reporting on both mortality (efficacy end point) and major bleeding (safety end point) outcomes. Data Extraction: Sample size, characteristics, and outcomes, extracted independently and analysed. Data Synthesis: 23 trials representing 30,966 patients were identified, including 10,243 patients (33.1%) undergoing primary percutaneous coronary intervention for ST elevation myocardial infarction, 8750 (28.2%) undergoing secondary percutaneous coronary intervention after fibrinolysis, and 11,973 (38.7%) with non-ST elevation acute coronary syndrome or stable patients scheduled for percutaneous coronary intervention. A total of 13,943 patients (45.0%) received enoxaparin and 17,023 (55.0%) unfractionated heparin. Enoxaparin was associated with significant reductions in death (relative risk 0.66, 95% confidence interval 0.57 to 0.76; P<0.001), the composite of death or myocardial infarction (0.68, 0.57 to 0.81; P<0.001), and complications of myocardial infarction (0.75, 0.6 to 0.85; P<0.001), and a reduction in incidence of major bleeding (0.80, 0.68 to 0.95; P=0.009). In patients who underwent primary percutaneous coronary intervention, the reduction in death (0.52, 0.42 to 0.64; P<0.001) was particularly significant and associated with a reduction in major bleeding (0.72, 0.56 to 0.93; P=0.01). Conclusion: Enoxaparin seems to be superior to unfractionated heparin in reducing mortality and bleeding outcomes during percutaneous coronary intervention and particularly in patients undergoing primary percutaneous coronary intervention for ST elevation myocardial infarction. abstract_id: PUBMED:12525234 Use of low-molecular-weight heparins in the management of acute coronary artery syndromes and percutaneous coronary intervention. Context: Low-molecular-weight heparins (LMWHs) possess several potential pharmacological advantages over unfractionated heparin as an antithrombotic agent. Objective: To systematically summarize the clinical data on the efficacy and safety of LMWHs compared with unfractionated heparin across the spectrum of acute coronary syndromes (ACSs), and as an adjunct to percutaneous coronary intervention (PCI). Data Sources: We searched MEDLINE for articles from 1990 to 2002 using the index terms heparin, enoxaparin, dalteparin, nadroparin, tinzaparin, low molecular weight heparin, myocardial infarction, unstable angina, coronary angiography, coronary angioplasty, thrombolytic therapy, reperfusion, and drug therapy, combination. Additional data sources included bibliographies of articles identified on MEDLINE, inquiry of experts and pharmaceutical companies, and data presented at recent national and international cardiology conferences. Study Selection: We selected for review randomized trials comparing LMWHs against either unfractionated heparin or placebo for treatment of ACS, as well as trials and registries examining clinical outcomes, pharmacokinetics, and/or phamacodynamics of LMWHs in the setting of PCI. Of 39 studies identified, 31 fulfilled criteria for analysis. Data Extraction: Data quality was determined by publication in the peer-reviewed literature or presentation at an official cardiology society-sponsored meeting. 
Data Synthesis: The LMWHs are recommended by the American Heart Association and the American College of Cardiology for treatment of unstable angina/non-ST-elevation myocardial infarction. Clinical trials have demonstrated similar safety with LMWHs compared with unfractionated heparin in the setting of PCI and in conjunction with glycoprotein IIb/IIIa inhibitors. Finally, LMWHs show promise as an antithrombotic agent for the treatment of ST-elevation myocardial infarction. Conclusions: The LMWHs could potentially replace unfractionated heparin as the antithrombotic agent of choice across the spectrum of ACSs. In addition, they show promise as a safe and efficacious antithrombotic agent for PCI. However, further study is warranted to define the benefit of LMWHs in certain high-risk subgroups before their use can be universally recommended. abstract_id: PUBMED:10731289 Combination enoxaparin and abciximab therapy during percutaneous coronary intervention: "NICE guys finish first". Data from randomized clinical trials support the administration of both enoxaparin and platelet glycoprotein IIb/IIIa blockade to patients who present with non-ST segment evaluation acute coronary syndromes. Enoxaparin does not activate platelets, has a more predictable dose response that facilitates weight-adjusted dosing and may have enhanced antithrombotic (increased anti-Xa activity) and safety (reduced anti-IIa activity) properties when compared with unfractionated heparin. Abciximab administration during percutaneous coronary intervention reduces the incidence of ischemic adverse outcomes and may improve survival in long-term follow-up. The preliminary experience with combining abciximab and intravenous enoxaparin during percutaneous coronary intervention in the NICE-4 Trial demonstrates a low incidence of minor/major bleeding (TIMI definition) and transfusion and infrequent major cardiac events to 30 days follow-up. Future algorithms to facilitate the transition of patients from the clinical service who have received subcutaneous administration of enoxaparin to the cardiac catheterization laboratory prior to percutaneous coronary intervention are forthcoming and will provide seamless integration of "optimal" adjunctive pharmacology through the course of hospitalization for patients with non-ST elevation acute coronary syndromes. abstract_id: PUBMED:19092648 Antiplatelet intervention in acute coronary syndrome. Clinical trials have demonstrated the usefulness of antiplatelet agents, percutaneous coronary intervention, and glycoprotein (GP) IIb/IIIa inhibitors in patients with acute coronary syndrome (ACS) based on risk stratification. Studies like RITA 3 and FRISC II have shown that an early invasive strategy in high-risk patients was associated with lower mortality over the long term compared with conservative treatment. High-risk patients with unstable angina/non-ST-elevation myocardial infarction derive particular benefit from GP IIb/IIIa inhibitors and an early invasive strategy. The TIMI risk score for patients with unstable angina/non-ST-elevation myocardial infarction provides an easily implemented tool for therapeutic decision-making. Simultaneous assessment of troponin, C-reactive protein, and brain natriuretic peptide at the time of presentation of ACS provides incremental prognostic information. Recent evidence supports the fact that thrombosis and inflammation are interrelated (platelets are involved in inflammation and, similarly, leukocytes are involved in hemostasis). 
The platelet, which was once viewed as a bystander in hemostasis, is now recognized as a key mediator of thrombosis as well as inflammation. Antithrombotic drugs block platelet aggregation and activation at various points in the thrombotic cascade and include aspirin, the thienopyridine clopidogrel, and its predecessor ticlopidine, intravenous GP IIb/IIIa inhibitors, which block the final common pathway of platelet activation and aggregation, unfractionated heparin and low-molecular-weight heparin, notably enoxaparin, and direct thrombin inhibitors (eg, bivalirudin). Bivalirudin has proven noninferior to heparin in patients undergoing percutaneous coronary intervention. Enoxaparin is emerging as a safer and better alternative to unfractionated heparin in invasively managed patients. Declining renal function is a major cause of excess dosing of antithrombotic agents and frequently increases the risk of bleeding in elderly patients. Class I American College of Cardiology/American Heart Association recommendations for acute (<24 hours) management of patients with high-risk non-ST-elevation ACS include the use of aspirin, beta-blockers, unfractionated heparin or low-molecular-weight heparin, or GP IIb/IIIa inhibitors for patients undergoing catheterization and revascularization and clopidogrel for patients undergoing percutaneous coronary intervention. Medical therapy should be coupled with an early invasive strategy of catheterization and revascularization within 48 hours. Predischarge initiation of secondary prevention therapies for risk factor modification may have substantial advantages for improving the long-term prognosis of patients. A large proportion of patients with ACS undergo interventional treatment, which underscores the importance of upstream initiation of antithrombotic agents. Data from CRUSADE suggests that the majority of patients are likely to benefit from aggressive upstream antithrombotic therapy. Patients with ACS who have diabetes have a higher risk for recurrent events than their nondiabetic counterparts but stand to benefit more from early aggressive therapy. Combining GP IIb/IIIa inhibition with drug-eluting stents offers the potential to optimize outcomes after revascularization in patients with diabetes. Whereas the use of drug-eluting stents has greatly reduced the risk of restenosis, patients with diabetes who have ACS and who undergo stenting remain at high risk for restenosis and are more likely to require revascularization. Increasing adherence to American College of Cardiology/American Heart Association guidelines is key to improving outcomes. The optimal management of patients with ACS continues to change as new therapies and strategies of care are developed and proven effective. The clinical challenge remains to increase physician adherence to evidence-based cardiac care for all patients. Answer: Based on the provided abstracts, enoxaparin appears to be a viable alternative to unfractionated heparin (UFH) during coronary intervention in acute coronary syndromes (ACS). A prospective comparison indicated that enoxaparin can offer safe and effective procedural anticoagulation in patients undergoing percutaneous intervention for ACS, suggesting that patients given enoxaparin could have coronary intervention without interruption of enoxaparin treatment and without additional procedural anticoagulation (PUBMED:15262028). 
Furthermore, a systematic review and meta-analysis found that enoxaparin was associated with significant reductions in death, the composite of death or myocardial infarction, and complications of myocardial infarction, as well as a reduction in the incidence of major bleeding, particularly in patients undergoing primary percutaneous coronary intervention for ST elevation myocardial infarction (PUBMED:22306479). Another systematic review and meta-analysis comparing enoxaparin and UFH in patients with non-ST-segment elevation acute coronary syndrome undergoing percutaneous coronary intervention (PCI) found similar rates for both death/myocardial infarction and major bleeding between patients treated with enoxaparin and UFH (PUBMED:30069327). Additionally, a review of clinical data on the efficacy and safety of low-molecular-weight heparins (LMWHs), such as enoxaparin, compared with UFH across the spectrum of ACS and as an adjunct to PCI, suggested that LMWHs could potentially replace UFH as the antithrombotic agent of choice (PUBMED:12525234). However, it is important to note that while enoxaparin has been shown to lead to a more predictable level of anticoagulation and has favorable results compared with UFH in patients with elective PCI and primary PCI, the optimal level of anticoagulation and the best agents are still a matter of debate, and the evidence concerning new anticoagulants is still preliminary (PUBMED:26946415, PUBMED:26800365). Therefore, while enoxaparin is emerging as a safer and better alternative to UFH in invasively managed patients, further study is warranted to define its benefit in certain high-risk subgroups before its use can be universally recommended (PUBMED:19092648). In conclusion, the evidence suggests that enoxaparin can safely replace UFH during coronary intervention in ACS, but larger, randomized trials and further research are needed to confirm these findings and to establish definitive guidelines for its use in various clinical scenarios.
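As a purely illustrative aside on how pooled effect estimates of the kind cited above (for example, the relative risk of 0.66, 95% CI 0.57 to 0.76, for death reported in PUBMED:22306479) are derived, the following minimal Python sketch computes a risk ratio and its approximate 95% confidence interval from a 2x2 table of event counts. The counts are invented for illustration and are not taken from any of the cited trials.

    import math

    # Hypothetical counts (not from any cited trial): events / totals in two arms
    events_a, total_a = 120, 2000   # e.g., deaths with enoxaparin
    events_b, total_b = 180, 2000   # e.g., deaths with unfractionated heparin

    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b                                  # risk ratio

    # Approximate 95% CI on the log scale (standard delta-method formula)
    se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    low = math.exp(math.log(rr) - 1.96 * se_log_rr)
    high = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f} (95% CI {low:.2f} to {high:.2f})")

A full meta-analysis additionally weights and pools such study-level estimates (for example with inverse-variance or Mantel-Haenszel weighting), which is not shown here.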
Instruction: Does the number of forceps deliveries performed in residency predict use in practice? Abstracts: abstract_id: PUBMED:25794629 Does the number of forceps deliveries performed in residency predict use in practice? Objective: We aimed to determine whether a threshold number of forceps deliveries in residency predicts use of forceps in independent practice. Study Design: We surveyed obstetrics and gynecology residency graduates of 2 academic programs from 2008 through 2012 regarding the use of operative vaginal delivery in practice. At these programs, residents are trained in both forceps and vacuums. Individual case log data were obtained with the number of forceps deliveries performed by each respondent during residency. Respondents were grouped as currently using any forceps or vacuums alone. A logistic regression model estimated the probability of forceps use, predicted by the number of residency forceps deliveries. From the resulting receiver-operating characteristic curve, we assessed sensitivity, specificity, positive predictive value, and area under the curve. Results: The response rate was 85% (n = 58) and 90% (n = 52) practice obstetrics. Seventy-nine percent (n = 41) use forceps in practice. The mean number of forceps performed during residency was 22.3 ± 1.3 (mean ± SE) in the any-forceps group and 18.5 ± 2.1 in the vacuums-only group (P = .14). Although the model performed only moderately (area under the curve, 0.61, 95% confidence interval [CI], 0.42-0.81), more than 13 residency forceps deliveries corresponded to a 95% sensitivity (95% CI, 84-99) and a positive predictive value of 83% (95% CI, 69-92) for using forceps in practice. The specificity of this threshold is 27% (95% CI, 6-61). Conclusion: Although exceeding 13 forceps deliveries made it highly likely that obstetricians would use them in practice, further study is necessary to set goals for a number of resident forceps deliveries that translate into use in practice. abstract_id: PUBMED:10554722 Number of deliveries performed during family practice residency training. Background And Objectives: We conducted a study of family practice residency programs to investigate the number of deliveries performed by residents during training. Methods: A one-page questionnaire was mailed to family practice residency program directors, requesting an estimate of the number of deliveries performed by each resident. In addition, information on residency characteristics was taken from the American Academy of Family Physicians Directory of Family Practice Residency Programs. Results: Responses were received from 316 programs, for a response rate of 70%. Residency directors estimated that residents did a mean of 18 continuity and 24.5 obstetrics block month deliveries. On the average, residents delivered 93.9 babies in 3 years of residency. The number of continuity deliveries was related to location of the program and type of program. Total deliveries related to the presence of a competing obstetrics program, faculty who were doing their own deliveries, location of the program, and number of required months of obstetrics. On multivariate analysis, only the latter two variables predicted the number of total deliveries. Conclusions: Family practice residents deliver an average of 94 babies in 3 years of residency training. Programs that want to increase deliveries can best do so by increasing the number of required months of training. 
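For readers who want to see how the threshold analysis described in PUBMED:25794629 works mechanically, the sketch below evaluates a fixed cut-off for sensitivity, specificity and positive predictive value (the logistic model and ROC sweep themselves are omitted). The data are simulated and all numbers are illustrative; only the "more than 13 forceps deliveries" cut-off is taken from the abstract.

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated stand-ins for the survey variables (not the study's data):
    forceps_in_residency = rng.poisson(lam=20, size=52)   # deliveries logged during residency
    uses_forceps = (rng.random(52) < 0.6 + 0.01 * (forceps_in_residency - 20)).astype(int)

    def threshold_metrics(counts, outcome, cutoff):
        predicted = counts > cutoff                  # "predicted forceps user" if above the cut-off
        tp = np.sum(predicted & (outcome == 1))
        fp = np.sum(predicted & (outcome == 0))
        fn = np.sum(~predicted & (outcome == 1))
        tn = np.sum(~predicted & (outcome == 0))
        sensitivity = tp / (tp + fn)                 # forceps users correctly flagged
        specificity = tn / (tn + fp)                 # non-users correctly not flagged
        ppv = tp / (tp + fp)                         # chance a flagged graduate uses forceps
        return sensitivity, specificity, ppv

    sens, spec, ppv = threshold_metrics(forceps_in_residency, uses_forceps, cutoff=13)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")

The area under the ROC curve reported in the abstract (0.61) would come from sweeping this cut-off across all possible values rather than fixing it at 13.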
abstract_id: PUBMED:30226705 Influence of Residency Training on Operative Vaginal Delivery in Independent Practice. Objective: To determine if residency training in the use of forceps and vacuums in vaginal deliveries is associated with their use in postgraduate practice. Study Design: We surveyed all Ob/Gyn residency graduates of 4 academic programs from 2004-2012. The average number of vacuums and forceps performed per graduating resident was linked to respondents, who were then grouped by self-report as using forceps alone, vacuums alone, or both in practice. Multivariable logistic regression determined predictors of use of forceps and vacuums in practice. Results: The response rate was 61.2% (n=200). Those practicing obstetrics (n=171) were classified as forceps only (24%), vacuums only (22%), or both (54%). Use of forceps in practice was greater for those who graduated within the last 5 years (OR 7.55, 95% CI 2.37-24.07), felt inadequately trained in vacuums (OR 4.58, 95% CI 1.23-17.00), cared for a patient population <50% privately insured (OR 4.08, 95% CI 1.58-10.52), and performed more forceps in residency (OR 1.69, 95% CI 1.31-2.18 per 5 forceps performed on average). The only significant predictor of vacuum use was the number of vacuum deliveries performed in residency (OR 3.71, 95% CI 2.28-6.01 per 5 vacuums performed on average). Conclusion: Independent practice pattern for operative vaginal delivery reflects residency training for both vacuums and forceps. abstract_id: PUBMED:8423970 Survey of forceps delivery in North America in 1990. Objective: To determine whether opinions and practice patterns have markedly changed over the past decade and whether clinicians are cognizant of the new ACOG definitions for forceps deliveries. Methods: Two hundred ninety-five United States and Canadian residency programs were surveyed via a questionnaire; 203 (69%) responded. Statistical analysis was performed using the chi-squared test, Yates correction factor, and the extended Mantel-Haenszel test. Results: All but two program directors (99%) were familiar with the new definitions, which were being used by 162 (80%) of the programs. All institutions used outlet and low forceps, although 14% no longer performed midforceps deliveries. Attending faculty were the primary instructors in 67% of United States and 100% of Canadian programs. Simpson forceps were the most commonly used for outlet (46%) and low (43%) forceps deliveries. Kielland (27%) and Simpson (24%) instruments were most commonly used for midforceps deliveries. Conclusion: Although the rate of midforceps use decreased, operative vaginal delivery was still commonly taught in residency programs in North America in 1990. abstract_id: PUBMED:17392838 Vacuum and forceps training in residency: experience and self-reported competency. Objective: Determine chief residents' experience with vacuum and forceps deliveries and self-perceived competencies with the procedures. Study Design: Study 1: A written questionnaire was mailed to all fourth year residents in US RRC approved Ob/Gyn programs. Study 2: The study was replicated using a web-based survey the following year. Data were analyzed with chi-squared and Wilcoxon Signed Rank tests using SPSS. Results: Surveys were received from 238 residents (20%) in Study 1 and 269 residents in Study 2 (23%, representing 50% of all residency programs). In both studies, residents reported performing significantly fewer forceps than vacuum deliveries.
Virtually all residents wanted to learn to perform both deliveries, indicated attendings were willing to teach both, and felt competent to perform vacuum deliveries (Study 1, 94.5%; Study 2, 98.5%); only half felt competent to perform forceps deliveries (Study 1, 57.6%; Study 2, 55.0%). The majority of residents who felt competent to perform forceps deliveries reported that they would predominantly use forceps or both methods of delivery in their practice (Study 1, 75.8%; Study 2, 64.6%). The majority of residents who reported that they did not feel competent to perform forceps deliveries reported that they would predominantly use vacuum deliveries in their practice (Study 1, 86.1%; Study 2, 84.2%). Conclusion: Current training results in a substantial portion of residents graduating who do not feel competent to perform forceps deliveries. Perceived competency affected future operative delivery plans. abstract_id: PUBMED:10819824 The effect of obstetric resident gender on forceps delivery rate. Objective: This study was undertaken to examine the effect of obstetric resident physician gender on the forceps delivery rate. Study Design: Analysis was based on >350,000 deliveries performed by >800 residents in obstetrics and gynecology throughout the United States from 1994 to 1998. A chi-squared analysis was performed on resident statistics from residency review committee report forms. Results: The percentage of total deliveries performed with forceps during residency was significantly higher among male residents (P < .0001), as was the percentage of vaginal deliveries performed with forceps during residency (P < .0001). The percentage of overall operative vaginal deliveries (vacuum plus forceps) was significantly higher for male residents (P < .0001); however, the percentage of vacuum deliveries did not vary according to gender of the resident when considered independently. Conclusion: These results strongly suggest that resident gender affects performance of forceps delivery. abstract_id: PUBMED:27312098 French residents' training in instrumental deliveries: A national survey. Objectives: To evaluate French Obstetrics and Gynaecology residents' training in instrumental deliveries in 2015. Patients And Methods: We conducted a national descriptive survey among 758 residents between December 2014 and January 2015. Respondents were invited by email to specify their University Hospital, their current university term, the number of instrumental deliveries performed by vacuum extractor, forceps or spatulas, and whether they performed systematic ultrasound examinations before the extraction. Results: Response rate was 34.7 % (n=263). There were important differences between regions in terms of type of instruments used. Vacuum extractor was the most commonly used instrument for instrumental deliveries by French residents (56.9 %), more than forceps (25.2 %) and spatulas (17.9 %). At the end of the residency, all the residents had been trained in instrumental deliveries with at least two instruments. Conclusion: Instrumental deliveries require training in, and mastery of, technically demanding manoeuvres. Yet there are substantial differences in French residents' training in instrumental deliveries depending on their region. Teaching at least two techniques therefore seems essential, as does improving training capacity and standardizing practices.
Greater systematization of the teaching of obstetric mechanics and techniques might also be considered. abstract_id: PUBMED:10362078 Forceps and vacuum delivery: expectations of residency and fellowship training program directors. The objective of this study is to compare current forceps training practices in North American obstetrical residency training programs with those in maternal-fetal medicine fellowship programs. We sent a survey to all obstetrics and gynecology residency training programs and to all maternal-fetal medicine fellowship programs in North America. After sending out 354 questionnaires, 219 were returned for a response rate of 62%. The response rate for fellowship programs (52 of 59; 88%) was significantly greater than that of residency training programs (167 of 295; 56.6%) (p < 0.05). All fellowship training programs were using the 1988 ACOG forceps classification system, as were 98% of the residency training programs. Eighty-five percent of fellowship directors and 80% of residency directors felt the same system should be used for vacuum deliveries. All residency and fellowship directors expected proficiency with both instruments for outlet deliveries. For low deliveries requiring ≤45 degrees of rotation, at least 92% expected proficiency with both instruments. For low-forceps deliveries with >45 degrees of rotation, 82% of fellowship directors and 80% of residency directors expected proficiency. For low-vacuum deliveries with >45 degrees of rotation, 80% of fellowship directors and 76% of residency directors expected proficiency. Significantly more fellowship directors expected midforceps proficiency (47%) than did residency program directors (38%) (p < 0.05). Midvacuum proficiency was expected by 73% of fellowship directors and 69% of residency directors. The ACOG 1988 forceps classification system has now achieved wide acceptance and is taught by both residency and fellowship program directors. Most program directors favor using the same classification system for vacuum extraction deliveries. In general, the expectations of the residency program directors mirror those of maternal-fetal medicine fellowship directors. While outlet and low operations with ≤45 degrees of rotation are taught and proficiency is expected, most programs no longer expect proficiency in midforceps delivery, but do expect proficiency in midvacuum delivery. Proficiency in low operations with rotations ≤45 degrees is still expected. abstract_id: PUBMED:19780736 An anonymous survey of registrar training in the use of Kjelland's forceps in Australia. Large series suggest that, when used with appropriate care and skill, rotational deliveries with Kjelland's forceps are a safe and useful procedure. We surveyed obstetric trainees in Australia to assess their experience with, and intentions to use, Kjelland's forceps. The response rate was 65%, and it was uncommon for even senior trainees to have performed more than ten Kjelland's forceps rotations during their training. Ninety-four per cent of final-year trainees stated they did not intend to use Kjelland's forceps in their consultant practice. abstract_id: PUBMED:35550423 Rapid Cycle Deliberate Practice Versus Traditional Simulation to Improve Resident Performance in Forceps Deliveries. This study aimed to assess the effectiveness of rapid cycle deliberate practice (RCDP) in improving obstetrics and gynaecology residents' performance and skill retention in forceps-assisted vaginal deliveries.
Thirty-three residents were randomly assigned to RCDP or traditional debriefing. Performance was evaluated using an objective structured assessment of technical skills (OSATS) tool. There were no significant differences in immediate performance scores between groups (P = 0.082). At 4-6 months, both groups demonstrated improved performance scores (traditional, P = 0.001; RCDP, P = 0.001). Seventy-six percent of residents in the RCDP group indicated "real-time" feedback was advantageous. Three residents suggested this method was less beneficial at a senior level. Answer: The number of forceps deliveries performed during residency appears to have a predictive relationship with the use of forceps in independent practice. A study found that obstetricians who performed more than 13 forceps deliveries during residency had a 95% sensitivity and an 83% positive predictive value for using forceps in practice, although the specificity of this threshold was only 27% (PUBMED:25794629). Additionally, another study indicated that the use of forceps in practice was greater for those who performed more forceps in residency, with an odds ratio of 1.69 per 5 forceps performed on average (PUBMED:30226705). This suggests that the experience gained with forceps during residency training is associated with their use in postgraduate practice. Furthermore, the influence of residency training on operative vaginal delivery is reflected in independent practice patterns for both vacuums and forceps (PUBMED:30226705). Residents' self-perceived competency in forceps deliveries also affects their future operative delivery plans, with those feeling competent more likely to use forceps or both methods in their practice (PUBMED:17392838). However, it is important to note that while exceeding a certain number of forceps deliveries during residency increases the likelihood of using them in practice, further study is necessary to establish goals for the number of resident forceps deliveries that translate into use in practice (PUBMED:25794629). Additionally, the effect of resident gender on forceps delivery rate has been observed, with male residents performing a significantly higher percentage of forceps deliveries (PUBMED:10819824). In conclusion, the number of forceps deliveries performed during residency does seem to predict the use of forceps in independent practice, but the relationship is not absolute and may be influenced by other factors such as training adequacy, resident gender, and self-perceived competency.
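To make the cited effect size concrete, the short arithmetic sketch below rescales the reported odds ratio of 1.69 per 5 residency forceps deliveries (PUBMED:30226705). The rescaling assumes the underlying logistic model is linear in the delivery count, and the 50% baseline probability is an assumption for illustration, not a figure from the study.

    # Odds ratio of 1.69 per 5 additional residency forceps deliveries (PUBMED:30226705)
    or_per_5 = 1.69
    or_per_1 = or_per_5 ** (1 / 5)     # ~1.11 per single additional delivery (assumes log-linearity)
    or_per_10 = or_per_5 ** 2          # ~2.86 per 10 additional deliveries

    # Converting an odds ratio into a change in probability needs a baseline;
    # assume (hypothetically) a 50% baseline probability of using forceps in practice.
    baseline_prob = 0.5
    baseline_odds = baseline_prob / (1 - baseline_prob)
    new_odds = baseline_odds * or_per_10
    new_prob = new_odds / (1 + new_odds)   # ~0.74 under these assumed numbers
    print(round(or_per_1, 2), round(or_per_10, 2), round(new_prob, 2))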
Instruction: Are primary care practices ready to become patient-centered medical homes? Abstracts: abstract_id: PUBMED:23551648 Are primary care practices ready to become patient-centered medical homes? Purpose: To measure the readiness of rural primary care practices to qualify as patient-centered medical homes (PCMHs), one step toward participating in changes underway in health care finance and delivery. Methods: We used the 2008 Health Tracking Physician Survey to compare PCMH readiness scores among metropolitan and nonmetropolitan primary care practices. The National Committee on Quality Assurance (NCQA) assessment system served as a framework to assess the PCMH capabilities of primary care practices based on their services, processes, and policies. Findings: We found little difference between urban and rural practices. Approximately 41% of all primary care practices offer minimal or no PCMH services. We also found that large practices score higher on standards primarily related to information technology and care management. Conclusions: Achieving the benefits of the PCMH model in small rural practices may require additional national promotion, technical assistance, and financial incentives. abstract_id: PUBMED:35879079 How Patient-Centered Medical Homes Can Bring Meaning to Health Care: A Call for Person-Centered Care. The development of patient-centered medical homes in the United States was, among other things, an attempt to improve patients' experiences of care. This and other improvement strategies, however, have failed to confront a major barrier, our disease-oriented medical model. Focusing on diseases has contributed to subspecialization and reductionism, which, for patients, has increased medical complexity and made it more difficult to engage in collaborative decision making. The progressive uncoupling of disease prevention and management from other outcomes that may matter more to patients has contributed to the dehumanization of care. An alternative approach, person-centered care, focuses clinical care directly on the aspirations of those seeking assistance, rather than assuming that these aspirations will be achieved if the person's medical problems can be resolved. We recommend the adoption of 2 complementary person-centered approaches, narrative medicine and goal-oriented care, both of which view health problems as obstacles, challenges, and often opportunities for a longer, more fulfilling life. The transformation of primary care practices into patient-centered medical homes has been an important step forward. The next step will require those patient-centered medical homes to become person centered. abstract_id: PUBMED:30101596 Multiyear Performance Trends Analysis of Primary Care Practices Demonstrating Patient-Centered Medical Home Transformation: An Observation of Quality Improvement Indicators among Outpatient Clinics. Despite the ever-changing requirements of modern policy, payers seek interventions for care delivery improvement through value-based care models. Prior research acknowledges the Patient-Centered Medical Home (PCMH) as a tool for performance and outcomes improvement. However, these studies lack empirical evidence of performance trends across medical homes. A retrospective observational study was conducted to describe national trends in National Committee for Quality Assurance PCMH recognition for more than 23 000 primary care practices across the United States from 2008 to 2017. 
More than half of recognized practices scored 100% pass rates for activities related to appointment availability, patient care planning, and data for population management. The most common underperforming PCMH activities were for practice team, referral tracking and follow-up, and quality improvement implementation. Study findings indicate that patient-centered care collaboration between clinical and nonclinical team members, primary care provider coordination with specialty care providers, and practice implementation of clinical quality improvement methodologies are particularly challenging activities. abstract_id: PUBMED:24849691 Patient-centered medical homes: recognition and reimbursement - part 3. As a result of current weaknesses and deficiencies in the United States health care system, the concept of patient-centered medical homes (PCMHs), a way of organizing primary care that emphasizes coordination and communication among patients and providers, has taken root. The formation of the National Committee for Quality Assurance-Patient- Centered Medical Homes (NCQA-PCMH) Recognition Program and its associated standards has assisted many clinicians seeking to evolve with these changing models of medical practice. Not only have PCMHs been shown to improve patient health outcomes, but they also have been associated with decreasing overall health care costs. Additionally, there are many benefits of primary care practice sites to develop into a PCMH, including eligibility for both private party and government reimbursement. abstract_id: PUBMED:25861803 Implementation of Patient-Centered Medical Homes in Adult Primary Care Practices. There has been relatively little empirical evidence about the effects of patient-centered medical home (PCMH) implementation on patient-related outcomes and costs. Using a longitudinal design and a large study group of 2,218 Michigan adult primary care practices, our study examined the following research questions: Is the level of, and change in, implementation of PCMH associated with medical surgical cost, preventive services utilization, and quality of care in the following year? Results indicated that both level and amount of change in practice implementation of PCMH are independently and positively associated with measures of quality of care and use of preventive services, after controlling for a variety of practice, patient cohort, and practice environmental characteristics. Results also indicate that lower overall medical and surgical costs are associated with higher levels of PCMH implementation, although change in PCMH implementation did not achieve statistical significance. abstract_id: PUBMED:28720627 Costs of Transforming Established Primary Care Practices to Patient-Centered Medical Homes (PCMHs). Background: The patient-centered medical home (PCMH) shows promise for improving care and reducing costs. We sought to reduce the uncertainty regarding the time and cost of PCMH transformation by quantifying the direct costs of transforming 57 practices in a medical group to National Committee for Quality Assurance (NCQA)-recognized Level III PCMHs. Methods: We conducted structured interviews with corporate leaders, and with physicians, practice administrators, and office managers from a representative sample of practices regarding time spent on PCMH transformation and NCQA application, and related purchases. We then developed and sent a survey to all primary care practices (practice-level response rate: initial recognition-44.6%, renewal-35.7%). 
Direct costs were estimated as time spent multiplied by average hourly wage for the relevant job category, plus observed expenditures. Results: We estimated HealthTexas' corporate costs for initial NCQA recognition (2010-2012) at $1,508,503; for renewal (2014-2016), $346,617; the Care Coordination resource costs an additional ongoing $390,790/year. A hypothetical 5-physician HealthTexas practice spent another estimated 239.5 hours ($10,669) obtaining, and 110.5 hours ($4,957) renewing, recognition. Conclusion: Centralized PCMH support reduces the burden on practices; however, overall time and cost remains substantial, and should be weighed against the mixed evidence regarding PCMH's impact on quality and costs of care. abstract_id: PUBMED:23583533 Urologists and the patient centered medical home. Purpose: Hopes are high that the delivery system reforms embodied in the patient centered medical home will improve the quality of care for patients with chronic diseases. While primary care physicians, given their training, will likely be the locus of care under this model, there are certain conditions for which urologists are well suited to provide the continuous and comprehensive care called for by the patient centered medical home. To assess the feasibility of the urology based patient centered medical home, we analyzed national survey data. Materials And Methods: For our measure of medical home infrastructure, we mapped items from the 2007 and 2008 NAMCS (National Ambulatory Medical Care Survey) to the NCQA (National Committee on Quality Assurance) standards for patient centered medical home recognition. We determined the proportion of urology practices in the United States that would achieve patient centered medical home recognition. Finally, we used NAMCS data to estimate the impact of consolidating genitourinary cancer (ie prostate, bladder, kidney and testis) followup care among the current supply of urologists. Results: Nearly three-quarters of urology practices meet NCQA standards for patient centered medical home recognition. At present, primary care physicians spend 9,295 cumulative workweeks providing direct and indirect care to survivors of genitourinary cancers. Off-loading half of this care to urology practices, in the context of the patient centered medical home, would generate an average of 0.73 additional workweeks for each practicing urologist. Conclusions: Urology practices may possess the capacity needed to direct medical homes for their patients with genitourinary cancers. Successful implementation of this model would likely require a willingness to manage some nonurological conditions. abstract_id: PUBMED:25310632 Unlocking the black box: supporting practices to become patient-centered medical homes. Background: Despite widespread interest in supporting primary care transformation, few evidence-based strategies for technical assistance exist. The Safety Net Medical Home Initiative (SNMHI) sought to develop a replicable and sustainable model for Patient-centered Medical Home practice transformation. Objectives: This paper describes the multimodal technical assistance approach used by the SNMHI and the participating practices' assessment of its value and helpfulness in supporting their transformation. 
Results: Components of the technical assistance framework included: (1) individual site-level coaching provided by local medical home facilitators and supplemented by expert consultation; (2) regional and national learning communities of participating practices that included in-person meetings and field trips; (3) data monitoring and feedback including longitudinal feedback on medical home implementation as measured by the Patient-centered Medical Home-A; (4) written implementation guides, tools, and webinars relating to each of the 8 Change Concepts for Practice Transformation; and (5) small grant funds to support infrastructure and staff development. Overall, practices found the technical assistance helpful and most valued in-person, peer-to-peer-learning opportunities. Practices receiving technical assistance from membership organizations with which they belonged before the SNMHI scored higher on measures of medical home implementation than practices working with organizations with whom they had no prior relationship. Conclusions: There is an important role for both local and national organizations to provide nonduplicative, mutually reinforcing support for primary care transformation. How (in-person, between-peers) and by whom technical assistance is provided may be important to consider. abstract_id: PUBMED:34238063 How Patient-Centered Medical Homes Integrate Dental Services Into Primary Care: A Scoping Review. Integrated care delivery is at the core of patient-centered medical homes (PCMHs). The extent of integration of dental services in PCMHs for adults is largely unknown. We first identified dental-medical integrating processes from the literature and then conducted a scoping review using PRISMA guidelines to evaluate their implementation among PCMHs. Processes were categorized into workforce, information-sharing, evidence-based care, and measuring and monitoring. After screening, 16 articles describing 21 PCMHs fulfilled the inclusion criteria. Overall, the implementation of integrating processes was limited. Less than half of the PCMHs reported processes for information exchange across medical and dental teams, referral tracking, and standardized protocols for oral health assessments by medical providers. Results highlight significant gaps in current implementation of adult dental integration in PCMHs, despite an increasing policy-level recognition of and support for dental-medical integration in primary care. Understanding and addressing associated barriers is important to achieve comprehensive patient-centered primary care. abstract_id: PUBMED:24808108 National Committee for Quality Assurance (NCQA) patient-centered medical home (PCMH) recognition is suboptimal even among innovative primary care practices. The National Committee for Quality Assurance (NCQA) has promoted patient-centered medical home (PCMH) recognition among primary care practices since 2008 as a standard indicator of which practices have transformed into medical homes. A 40% PCMH adoption rate among a large national cohort of identified practices with innovative staffing (n = 131) calls into question whether the NCQA recognition process is truly transformative and patient-centered or simply another certificate to hang on the wall. Answer: The readiness of primary care practices to become patient-centered medical homes (PCMHs) varies and is influenced by several factors. 
Approximately 41% of all primary care practices offer minimal or no PCMH services, indicating a significant portion of practices may not be ready for the transition. The readiness scores among metropolitan and nonmetropolitan primary care practices showed little difference, suggesting that geographic location is not a major determinant of PCMH readiness (PUBMED:23551648). However, larger practices tend to score higher on standards related to information technology and care management, which are crucial components of the PCMH model (PUBMED:23551648). The transformation to PCMHs requires a shift from a disease-oriented medical model to a person-centered approach, focusing on patients' aspirations and collaborative decision-making. This shift is necessary to address the dehumanization of care and medical complexity that patients currently face (PUBMED:35879079). While the transformation to PCMHs has been associated with improved patient health outcomes and decreased health care costs (PUBMED:24849691), the implementation of PCMH is challenging, particularly in areas such as care collaboration, coordination with specialty care providers, and clinical quality improvement methodologies (PUBMED:30101596). The costs of transforming established primary care practices to PCMHs can be substantial, both in terms of time and financial resources. Centralized PCMH support can reduce the burden on individual practices, but the overall investment must be weighed against the mixed evidence regarding the impact of PCMHs on quality and costs of care (PUBMED:28720627). Additionally, while some specialty practices like urology may have the capacity to direct medical homes for their patients with genitourinary cancers, successful implementation would likely require managing some nonurological conditions (PUBMED:23583533). Technical assistance, including coaching, learning communities, data monitoring, and written implementation guides, is valuable for supporting practices in their transformation efforts. The mode of delivery and the relationship with the technical assistance provider can influence the effectiveness of the support (PUBMED:25310632). In conclusion, while some primary care practices are on the path to becoming PCMHs, many still face significant challenges in terms of readiness, resources, and the need for a paradigm shift in care delivery. Additional national promotion, technical assistance, and financial incentives may be required to facilitate the transition for small and rural practices (PUBMED:23551648).
Instruction: Are the presence of MODIC changes on MRI scans related to "improvement" in low back pain patients treated with lumbar facet joint injections? Abstracts: abstract_id: PUBMED:26338033 Are the presence of MODIC changes on MRI scans related to "improvement" in low back pain patients treated with lumbar facet joint injections? Background: Modic changes (MC) have been linked with low back pain (LBP) and worse outcomes from some treatments. No studies have investigated the impact that MCs may have on patient outcomes from lumbar facet injections. Therefore, the purpose of this study is to investigate whether the presence of Modic changes is related to 'improvement' in patients undergoing imaging-guided lumbar facet injection therapy. Methods: Outcomes from 226 patients with MRI scans within 3 months of their imaging-guided lumbar facet injections were investigated to determine whether MCs are related to 'improvement' post injection. At 1 day, 1 week and 1 month post injection the Patients Global Impression of Change scale answers were collected by postal questionnaire. This was the primary outcome measure. The numerical rating scale for pain data was collected prior to treatment and at the same post injection time points. The MRI scans were independently evaluated by two examiners for the presence/absence of Modic changes and the type of Modic change if present. Kappa statistics were used for reliability of diagnosis analysis. Chi-squared test and logistic regression analysis tested MCs with 'improvement'. Results: Intra- and inter-examiner reliability for the diagnosis of MCs was Kappa = 0.77 and 0.74. Intra- and inter-examiner reliability for categorizing MCs was K = 0.77 and K = 0.78. At 1 month post injection 45.2 % of patients without MCs reported clinically relevant 'improvement' compared to 34.2 % of patients with MC I and 32.1 % of patients with MC II. However, this did not reach statistical significance. Logistic regression found that Modic changes were not predictive of 'improvement'. Conclusions: There was a tendency for patients without MCs to have better outcomes but this did not reach statistical significance. The reliability of diagnosing MCs was substantial. abstract_id: PUBMED:29162138 Predictability of the effects of facet joint infiltration in the degenerate lumbar spine when assessing MRI scans. Background: Imaging results are frequently considered as hallmarks of disease by spine surgeons to plan their future treatment strategy. Numerous classification systems have been proposed to quantify or grade lumbar magnetic resonance imaging (MRI) scans and thus objectify imaging findings. The clinical impact of the measured parameters remains, however, unclear. To evaluate the pathological significance of imaging findings in patients with multisegmental degenerative findings, clinicians can perform image-guided local infiltrations to target defined areas such as the facet joints. The aim of the present retrospective study was to evaluate the correlation of MRI facet joint degeneration and spinal stenosis measurements with improvement obtained by image-guided intraarticular facet joint infiltration. Methods: Fifty MRI scans of patients with chronic lumbar back pain were graded radiologically using a wide range of classification and measurement systems. The reported effect of facet joint injections at the site was recorded, and a comparative analysis performed. 
Results: When we allocated patients according to their reported pain relief, 27 showed no improvement (0-30%), 16 reported good improvement (31-75%) and 7 reported excellent improvement (> 75%). MRI features assessed in this study did, however, not show any relevant correlation with reported pain after facet joint infiltration: Values for Kendall's tau ranged from τ = - 0.190 for neuroforaminal stenosis grading as suggested by Lee, to τ = 0.133 for posterior disc height as proposed by Hasegawa. Conclusion: Despite the trend in evidence-based medicine to provide medical algorithms, our findings underline the continuing need for individualised spine care that, along with imaging techniques or targeted infiltrations, includes diagnostic dimensions such as good patient history and clinical examination to formulate a diagnosis. Trial Registration: ClinicalTrials.gov , NCT03308149 , retrospectively registered October 2017. abstract_id: PUBMED:35385789 Lower effectiveness of facet joint infiltration in patients with concurrent facet joint degeneration and active endplate changes. Background Context: Facet joint degeneration (FJD) and disc degeneration (DD) with associated endplate (EP) changes, specifically Modic 1 changes, might occur concurrently and therefore pose a challenge in the treatment of lower back pain (LBP). Purpose: The aim of the present study was to investigate if the presence of active EP changes (Modic 1) would alter the effect of facet joint infiltrations (FJI) for the treatment of concurrent FJD. Study Design: Prospective cohort study, Level III. Patient Sample: 42 patients (Male:20, Female:22) with an average of 58±14 years with FJD on conventional magnetic resonance imaging (MRI) receiving a FJI for treatment of lower back pain were included. Outcome Measures: The pain score at baseline, 15 min, 1 day, 1 week and 1 month following FJI as well as the reduction of pain were analyzed. Furthermore, active EP changes on conventional MRI and increased EP metabolic activity on PET/MRI were evaluated and compared. Methods: All the patients underwent a (18F)-NaF PET/MRI, conventional MRI and FJI for symptomatic FJD. Active EP changes on conventional MRI and increased EP metabolic activity on PET/MR were analyzed for conformity. The pain score as well as the pain reduction at the above-mentioned time points were compared between patients with and without increased EP metabolic activity in PET/MRI. Results: The LBP reduction was significantly different between patients with (n=20) and without (n=22) active EP changes at 15 minutes (1.3±2.4 vs. 2.9±2.4, p=.03) and 1 month (0.9±2.3 vs. 2.8±2.9, p<.001) following FJI. The minimal clinically important difference for LBP reduction was reached significantly more often in the absence of active EP changes (73%) compared with patients with active EP changes (35%) 1 month following FJI (p=.03). Conclusions: FJI is less effective in LBP reduction of patients with FJD and concurrent active EP changes (eg Modic 1). abstract_id: PUBMED:30515862 Multidimensional vertebral endplate defects are associated with disc degeneration, modic changes, facet joint abnormalities, and pain. The aim of the current study was to investigate the multi-dimensional characteristics of lumbar endplate defects in humans in relation to disc degeneration and other MRI phenotypes as well as their role with pain and disability. A total of 108 subjects were recruited and underwent 3T MRI of the lumbar spine. 
Structural endplate defects were identified and their dimensions were measured in terms of maximum width and depth, and were then standardized to the actual width of the endplate and depth of the vertebral body, respectively. Both width and depth of all endplate defects in each subject were added separately and scores were assigned on the basis of size from 1 to 3. Combining both scores provided "cumulative endplate defect scores." Disc degeneration scores, Modic changes, disc displacement, HIZ, and facet joint changes were assessed. Subject demographics, pain profile, and Oswestry Disability Index (ODI) were also obtained. Endplate defects were observed in 67.5% of the subjects and in 13.5% of the endplates. All dimensions of endplate defects showed significance with disc degenerative scores, Modic changes, and posterior disc displacement (p < 0.05). Maximum width (p = 0.009) and its standardized value (p = 0.02), and cumulative endplate defect scores (p = 0.004) increased with narrow facet joints. Cumulative endplate defect scores showed a strong positive association with ODI (p < 0.05) compared to disc degenerative scores. Large size endplate defects were strongly associated with degenerative spine changes and more back-related disability. Findings from this study stress the need to assess endplate findings from a multi-dimensional perspective, whose role may have clinical utility. © 2019 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res. abstract_id: PUBMED:27190760 Management of lumbar zygapophysial (facet) joint pain. Aim: To investigate the diagnostic validity and therapeutic value of lumbar facet joint interventions in managing chronic low back pain. Methods: The review process applied systematic evidence-based assessment methodology of controlled trials of diagnostic validity and randomized controlled trials of therapeutic efficacy. Inclusion criteria encompassed all facet joint interventions performed in a controlled fashion. The pain relief of greater than 50% was the outcome measure for diagnostic accuracy assessment of the controlled studies with ability to perform previously painful movements, whereas, for randomized controlled therapeutic efficacy studies, the primary outcome was significant pain relief and the secondary outcome was a positive change in functional status. For the inclusion of the diagnostic controlled studies, all studies must have utilized either placebo controlled facet joint blocks or comparative local anesthetic blocks. In assessing therapeutic interventions, short-term and long-term reliefs were defined as either up to 6 mo or greater than 6 mo of relief. The literature search was extensive utilizing various types of electronic search media including PubMed from 1966 onwards, Cochrane library, National Guideline Clearinghouse, clinicaltrials.gov, along with other sources including previous systematic reviews, non-indexed journals, and abstracts until March 2015. Each manuscript included in the assessment was assessed for methodologic quality or risk of bias assessment utilizing the Quality Appraisal of Reliability Studies checklist for diagnostic interventions, and Cochrane review criteria and the Interventional Pain Management Techniques - Quality Appraisal of Reliability and Risk of Bias Assessment tool for therapeutic interventions. 
Evidence based on the review of the systematic assessment of controlled studies was graded utilizing a modified schema of qualitative evidence with best evidence synthesis, variable from level I to level V. Results: Across all databases, 16 high quality diagnostic accuracy studies were identified. In addition, multiple studies assessed the influence of multiple factors on diagnostic validity. In contrast to diagnostic validity studies, therapeutic efficacy trials were limited to a total of 14 randomized controlled trials, assessing the efficacy of intraarticular injections, facet or zygapophysial joint nerve blocks, and radiofrequency neurotomy of the innervation of the facet joints. The evidence for the diagnostic validity of lumbar facet joint nerve blocks with at least 75% pain relief with ability to perform previously painful movements was level I, based on a range of level I to V derived from a best evidence synthesis. For therapeutic interventions, the evidence was variable from level II to III, with level II evidence for lumbar facet joint nerve blocks and radiofrequency neurotomy for long-term improvement (greater than 6 mo), and level III evidence for lumbosacral zygapophysial joint injections for short-term improvement only. Conclusion: This review provides significant evidence for the diagnostic validity of facet joint nerve blocks, and moderate evidence for therapeutic radiofrequency neurotomy and therapeutic facet joint nerve blocks in managing chronic low back pain. abstract_id: PUBMED:27106489 Can facet joint fluid on MRI and dynamic instability be a predictor of improvement in back pain following lumbar fusion for degenerative spondylolisthesis? Purpose: To investigate the relationship between lumbar facet fluid and dynamic instability in degenerative spondylolisthesis (DS), as well as the relationship between facet fluid and patient-reported outcomes following a posterior lumbar fusion. Methods: We analyzed consecutive patients with degenerative spondylolisthesis undergoing single level posterior lumbar fusion from December 2010 to January 2013 at a single academic institution. We investigated the relationship between fluid in the facet joint as measured on axial MRI and the presence of dynamic instability. We investigated the impact of facet fluid, Modic changes, and dynamic instability on patient-reported outcomes. Results: There was a significant association between the amount of facet joint fluid and the presence of dynamic instability (p = 0.03); as facet fluid increases, the probability of dynamic instability also increases. For every 1 mm of facet fluid, there was a 41.6 % increase (95 % CI 1.8-97) in the odds of dynamic instability. A facet fluid amount less than 0.5 mm gave a 90 % probability that there was no dynamic instability. The presence of facet fluid and dynamic instability were associated with achieving minimal clinical important difference (MCID) in low back pain following lumbar fusion (p = 0.04 and 0.05, respectively). Conclusion: Facet joint fluid is associated with the presence of dynamic instability in DS. The presence of facet fluid and dynamic instability may predict increased likelihood of achieving MCID for improvement in back pain following posterior lumbar fusion. abstract_id: PUBMED:27046146 Comparison of Outcomes in MRI Confirmed Lumbar Disc Herniation Patients With and Without Modic Changes Treated With High Velocity, Low Amplitude Spinal Manipulation. 
Objective: The purpose of this study was to determine if there is a difference in outcomes between Modic positive and negative lumbar disc herniation (LDH) patients treated with spinal manipulative therapy (SMT). Methods: This prospective outcomes study includes 72 MRI confirmed symptomatic LDH patients treated with SMT. Numerical rating scale (NRS) pain and Oswestry disability data were collected at baseline. NRS, patient global impression of change to assess overall improvement, and Oswestry data were collected at 2 weeks, 1, 3, 6 months and 1 year. MRI scans were analyzed for Modic change present/absent and classified as Modic I or II when present. Chi-squared test compared the proportion of patients reporting relevant 'improvement' between patients with and without Modic changes and those with Modic I vs. II. NRS and Oswestry scores were compared at baseline and change scores at all follow-up time points using the unpaired Student t test. Results: 76.5% of Modic positive patients reported 'improvement' compared to 53.3% of Modic negative patients (P = .09) at 2 weeks. Modic positive patients had larger decreases in leg pain (P = .02) and disability scores (P = .012) at 2 weeks. Modic positive patients had larger reductions in disability levels at 3 (P = .049) and 6 months (P = .001). A significant difference (P = .001) between patients with Modic I vs. Modic II was found at 1 year, where Modic II patients did significantly better. Conclusion: Modic positive patients reported higher levels of clinically relevant improvement 2 weeks, 3 and 6 months compared to Modic negative patients. However, at 1 year Modic I patients were significantly less likely to report 'improvement', suggesting they may be prone to relapse. abstract_id: PUBMED:29615369 Are Modic changes in patients with chronic low back pain indicative of a worse clinical course? 10 years of follow-up. Objective: Establish the long-term (10 years) predictive value of Modic changes in the course of lumbar pain and the need for surgical treatment. Material And Method: Observational longitudinal prospective cohort study. Comparison of progression at 10 years of 2 groups of patients with chronic lumbar pain: group A with Modic changes in MRI and group B with no Modic changes. Exclusion Criteria: neoplasia, inflammatory or infectious diseases, or previous surgery. Assessment was done with the aid of the VAS for low lumbar and radicular pain and the Oswestry Disability Questionnaire. The need for surgical or medical treatment and occupational disability during the study period was analyzed. For the statistical analyses, the Mann-Whitney U test and logistic regression were applied. Results: Seventy patients, 24 male and 46 female, with a mean age of 56.5 years (35 in each group) were included in the study. No statistically significant differences in the intensity of lumbar pain, degree of impairment, or need for medical or surgical treatment (P>.05) were found in patients with Modic changes types 1, 2, or 3 between the baseline assessment and 10 years after. No statistically significant differences between patients with/without changes in Modic at 10 years of follow-up (P>.05) were determined. Conclusions: There is no relationship between Modic changes in MRI and greater intensity of lumbar pain or need for medical or surgical treatment at 10 years of follow-up. Modic changes cannot be considered a sign of bad prognosis by themselves, or an indication for surgery. 
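The comparisons reported in these abstracts rest on standard two-sample tests: proportions of patients reporting relevant 'improvement' are compared with a chi-squared test, and continuous pain or disability change scores with an unpaired t test. The following minimal Python sketch illustrates both computations on made-up numbers that are not taken from any of the cited studies; only numpy and scipy.stats are assumed.

import numpy as np
from scipy import stats

# Hypothetical 2x2 table of patients reporting relevant 'improvement'
# (rows: Modic positive / Modic negative; columns: improved / not improved).
table = np.array([[26.0, 8.0],
                  [20.0, 18.0]])

row_totals = table.sum(axis=1, keepdims=True)
col_totals = table.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / table.sum()        # expected counts under independence
chi2_stat = ((table - expected) ** 2 / expected).sum()  # Pearson chi-squared, no continuity correction
dof = (table.shape[0] - 1) * (table.shape[1] - 1)
p_value = stats.chi2.sf(chi2_stat, dof)
print(f"chi2 = {chi2_stat:.2f}, dof = {dof}, p = {p_value:.3f}")

# Hypothetical disability change scores at 2 weeks for the two groups.
modic_pos = np.array([18.0, 22.0, 15.0, 30.0, 12.0, 25.0, 20.0, 17.0])
modic_neg = np.array([10.0, 8.0, 14.0, 5.0, 12.0, 9.0, 16.0, 7.0])
res = stats.ttest_ind(modic_pos, modic_neg, equal_var=False)  # Welch's unpaired t test
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")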
abstract_id: PUBMED:37954474 Study of the Distribution of Lumbar Modic Changes in Patients with Low Back Pain and Correlation with Lumbar Degeneration Diseases. Purpose: To study the distribution of lumbar Modic changes (MC) in low back pain patients, and to explore the relationship between Modic changes and age, gender, segment and lumbar degeneration diseases (disc degeneration, spinal stenosis, lumbar spondylolisthesis). Patients And Methods: Modic changes were counted and analysed in 500 randomly selected patients who visited the outpatient clinic for lumbar spine MRI for low back pain from 2019 to 2022. The relationship between Modic changes and bulging disc, protrusion, extrusion, sequestration, Schmorl's node, spinal stenosis, and lumbar spondylolisthesis was studied. Data correlations were compared using the chi-square test, P<0.05 was considered a statistically significant difference. Results: Modic changes were present in 184 segments. 58 were type I, 119 were type II, and 7 were type III. The number of Modic changes segments in L1-2, L2-3, L3-4, L4-5, and L5-S1 was 10, 22, 22, 58, and 72, respectively. Modic changes by gender occurred in 44 males and 82 females, and by age grouping were 0 cases in 10-19 years, 3 cases in 20-29 years, 6 cases in 30-39 years, 41 cases in 40-49 years, 64 cases in 50-59 years, 38 cases in 60-69 years, 15 cases in 70-79 years, and 5 cases in 80-89 years, respectively. Modic changes occurred in 56 with bulging disc, 45 with Protrusion, 57 with Extrusion and Sequestration, 32 with Schmorl's node, 32 with spondylolisthesis, and 56 with stenosis. Modic changes correlated with all of the above factors except bulging disc, and all differences were statistically significant (P<0.05). Conclusion: The most common type of Modic changes is type II; Modic changes have correlation with age, gender, segment, lumbar disc herniation, lumbar spondylolisthesis, and lumbar spinal stenosis, and are a very important factor in low back pain. abstract_id: PUBMED:25037930 Are Modic changes related to outcomes in lumbar disc herniation patients treated with imaging-guided lumbar nerve root blocks? Objective: To compare outcomes after imaging-guided transforaminal lumbar nerve root blocks in MRI confirmed symptomatic disc herniation patients with and without Modic changes (MC). Methods: Consecutive adult patients with MRI confirmed symptomatic lumbar disc herniations and an imaging-guided lumbar nerve root block injection who returned an outcomes questionnaire are included. Numerical rating scale (NRS) pain data was collected prior to injection and 20-30 min after injection. NRS and overall improvement were assessed using the patient's global impression of change (PGIC) scale at 1 day, 1 week and 1 month post injection. The proportion of patients with and without MC on MRI as well as Modic I and Modic II was calculated. These groups were compared for clinically relevant 'improvement' using the Chi-squared test. Baseline and follow-up NRS scores were compared for the groups using the unpaired t-test. Results: 346 patients are included with MC present in 57%. A higher percentage of patients without MC reported 'improvement' and a higher percentage of patients with MC reported 'worsening' but this did not reach statistical significance. The numerical scores on the PGIC and NRS scales showed that patients with MC had significantly higher pain and worse overall improvement scores at 1 month (p=0.048 and p=0.03) and a significantly lower 1 month NRS change score (p=0.04). 
Conclusions: Patients with MRI confirmed symptomatic lumbar disc herniations and MC report significantly lower levels of pain reduction after a lumbar nerve root block compared to patients without MC. Answer: The presence of Modic changes (MC) on MRI scans does not appear to be significantly related to "improvement" in low back pain patients treated with lumbar facet joint injections. A study investigating outcomes from 226 patients with MRI scans within 3 months of their imaging-guided lumbar facet injections found that although there was a tendency for patients without MCs to have better outcomes, this did not reach statistical significance. Logistic regression analysis in this study also indicated that Modic changes were not predictive of 'improvement' (PUBMED:26338033). Another study that evaluated the correlation of MRI facet joint degeneration and spinal stenosis measurements with improvement obtained by image-guided intraarticular facet joint infiltration in 50 patients found no relevant correlation between MRI features assessed and reported pain after facet joint infiltration (PUBMED:29162138). Furthermore, a prospective cohort study with 42 patients showed that facet joint infiltration (FJI) was less effective in reducing lower back pain in patients with concurrent facet joint degeneration and active endplate changes, such as Modic 1 changes (PUBMED:35385789). These findings suggest that while Modic changes are associated with certain spine conditions and may influence the effectiveness of some treatments, they do not appear to be a reliable predictor of improvement following lumbar facet joint injections for low back pain.
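Many of the associations above are summarized as odds ratios with 95% confidence intervals. As a rough illustration of where such figures come from, the sketch below computes an unadjusted odds ratio and its Woolf (log-based) confidence interval from a 2x2 table using only the Python standard library; the counts are invented, and the adjusted odds ratios reported in the studies additionally control for covariates, which this simple calculation does not.

import math

# Hypothetical 2x2 table (illustrative counts only):
# a = exposed with outcome, b = exposed without, c = unexposed with, d = unexposed without.
a, b, c, d = 15, 25, 10, 50

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf standard error of ln(OR)
z = 1.96                                              # ~97.5th percentile of the standard normal
ci_low = math.exp(math.log(odds_ratio) - z * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + z * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")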
Instruction: Is the Quantification of Antigen-Specific Basophil Activation a Useful Tool for Monitoring Oral Tolerance Induction in Children With Egg Allergy? Abstracts: abstract_id: PUBMED:27012013 Is the Quantification of Antigen-Specific Basophil Activation a Useful Tool for Monitoring Oral Tolerance Induction in Children With Egg Allergy? Objectives: To assess modifications in baseline specific IgE- and anti-IgE- and antigen-specific-mediated basophil activation in egg-allergic children. The values were compared before and after the children completed specific oral tolerance induction (SOTI) with egg. Patients And Methods: We studied 28 egg-allergic children who completed SOTI with egg. The basophil activation test and specific IgE determinations with egg white, ovalbumin, and ovomucoid were performed in all 28 children. Results: A decrease in antigen-specific activation with egg white, ovalbumin, and ovomucoid was observed only at the 2 lowest concentrations used (5 and 0.05 ng/mL). Baseline activation was higher in patients with multiple food allergies and in those who developed anaphylaxis during SOTI; this activation decreased in both groups after completion of SOTI. A significant decrease was also observed in specific IgE values for egg white, ovalbumin, and ovomucoid after tolerance induction. Conclusions: Food tolerance induction is a specific process for each food that can be mediated by immunologic changes such as a decrease in specific IgE values and in specific and spontaneous basophil activation. abstract_id: PUBMED:23682931 Decrease in antigen-specific CD63 basophil expression is associated with the development of tolerance to egg by SOTI in children. Background: In the last decade, there have been an increasing number of studies on achieving tolerance to foods by specific oral tolerance induction (SOTI). Still, the underlying mechanism of SOTI is unknown. Our aim was to describe changes in CD63 expression on basophils following in vitro Ag-specific stimulation by basophil activation test (BAT), after SOTI with egg in a pediatric population. Methods: Ten children with persistent allergy to egg were included. Skin prick tests (SPTs) and open food challenges (OFCs) were performed before SOTI. Specific IgE determination and BAT with egg white (EW), ovomucoid (OM), and ovalbumin (OVA) were performed before and after 1 month of the buildup phase of SOTI. Results: Total tolerance to egg was achieved in 9 cases and partial in one. After SOTI, there was a significant decrease in mean specific IgE levels (p < 0.05). CD63 expression also decreased (p < 0.05) in all patients. Conclusion: Decrease in Ag-specific basophil responsiveness is associated with the development of clinical tolerance by SOTI. abstract_id: PUBMED:37937713 Basophil activation test with progressively less heated forms of egg distinguishes egg allergic from tolerant children. Background And Objective: Diagnosis of egg allergy through basophil activation testing (BAT) has been mainly performed with an egg white extract or individual egg allergens rather than clinically more representative whole-egg extracts. Impact of heating on whole-egg extract allergenicity remains unassessed.Validating BAT with gradually less heated whole-egg extracts in egg allergy diagnosis and as tolerance marker. 
Methods: CD63-based BAT was performed with five progressively less heated extracts from cake, hard-boiled egg, omelet, soft-boiled, and raw egg in 10 egg allergic (EA), 10 complete egg tolerant (ET) and 12 non-egg-sensitized non-allergic (NEA) children. Cutoffs and diagnostic accuracy measures were established through ROC analysis. Changes in basophil response were assessed in 12 baked egg tolerant children undergoing an 8-month gradual egg reintroduction protocol with BAT and oral food challenges prior to each reintroduction step. Results: Basophil responses to all egg extracts were increased in EA, but not in ET and NEA children. Responses decreased progressively with more heated egg extracts. Compared to ET children, EA children showed higher basophil sensitivity for all egg extracts. Negative BAT responses predicted clinical tolerance with a 90-100% sensitivity, 100% specificity, and false positive rate of 2.78%. In comparison, egg sIgE's (<0.35 kUA/L) had a lower specificity of 50-78% with a false positive rate of 40%. Basophil reactivity and sensitivity tended to decrease in baked egg tolerant children undergoing gradual egg reintroduction, concurrent with tolerance development. Conclusion: BAT with progressively less heated egg preparations is a sensitive and highly specific tool to discriminate EA from ET children. abstract_id: PUBMED:17919140 Specific oral tolerance induction in food allergy in children: efficacy and clinical patterns of reaction. Background: Specific oral tolerance induction (SOTI) seems to be a promising treatment of food allergy. Specific oral tolerance induction and elimination diet were compared with respect to efficacy rate and patterns of clinical reaction. Methods: Children with challenge proven immunoglobulin E (IgE)-mediated cow's milk (CM) allergy or hen's egg (HE) allergy were randomly assigned to SOTI or elimination diet as a control group. Specific oral tolerance induction treatment was performed at home on a daily basis according to a study protocol with fresh CM or lyophilized HE protein. Re-evaluation of clinically relevant food allergy was performed by food challenge after a median of 21 months. Children in the SOTI group received a secondary elimination diet for 2 months prior to follow-up challenge to evaluate persistence of induced oral tolerance. Results: At follow-up challenge, nine of 25 children (36%) showed permanent tolerance in the SOTI group, three of 25 (12%) were tolerant with regular intake and four of 25 (16%) were partial responders. In the control group, seven of 20 children (35%) were tolerant. Allergen-specific immunoglobulin E decreased significantly both in children who developed natural tolerance during the elimination diet (P < 0.05) and in those with SOTI (P < 0.001). Conclusions: Specific oral tolerance induction seems a valid treatment option for patients with persistent food allergy. Indications may be given if avoidance cannot be guaranteed or for those who are eager to eat the food in question. Advantages of SOTI are the increased threshold dose for allergic reactions and the substantially reduced risk of severe allergic reactions after inadvertent ingestion of the allergen. However, careful monitoring during SOTI is mandatory. abstract_id: PUBMED:33291359 Investigation of Basophil Activation Test for Diagnosing Milk and Egg Allergy in Younger Children. In children with concomitant food allergy and atopic dermatitis (AD), uncovering the causative food allergen is more arduous. 
We evaluated the basophil activation test (BAT) for its diagnostic value in children, including those with AD, for milk or egg allergy. We simultaneously measured serum-specific immunoglobulin E (sIgE) levels and performed BATs for cow's milk and egg white. We compared their overall diagnostic performance using the area under the receiver operating characteristic curve (AUC) with the Delong method and compared them in children with AD. Analyses were completed for 75 children for milk allergy and for 85 children for egg allergy. The sIgE and percentage of basophils with the expression of CD63 were correlated for both milk (r = 0.384, p < 0.001) and egg (r = 0.557, p < 0.001). The AUC of sIgE (0.701) for milk allergy was significantly increased when combined with the BAT (0.805; p = 0.029). In children with AD, the AUC of the BAT (0.924) for milk allergy was significantly larger than that of sIgE (0.701; p = 0.017). The BAT is a potentially useful diagnostic tool for milk allergy in children when combined with sIgE. Moreover, it may be a surrogate marker for milk allergy in children with AD. abstract_id: PUBMED:29518422 Egg-specific IgE and basophil activation but not egg-specific T-cell counts correlate with phenotypes of clinical egg allergy. Background: Egg allergy is phenotypically heterogeneous. A subset of patients with egg allergy can tolerate egg in an extensively heated form. Inclusion of baked egg (BE) into the diet accelerates resolution of egg allergy. Conversely, BE reactivity is associated with persistent disease. The immune basis of this clinical heterogeneity is unknown. Objectives: We sought to study egg-specific antibody, basophil, and T-cell responses in children with reactivity or tolerance to BE. Methods: All participants underwent double-blind, placebo-controlled challenges to BE, and those who tolerated BE were challenged with unheated egg white protein to confirm clinical egg reactivity. Laboratory studies included serum antibody measurements, basophil activation tests, and CD154-based detection of egg-responsive T cells by using flow cytometry. Results: Of the 129 children studied, BE-reactive participants had significantly greater levels of egg-, ovalbumin-, and ovomucoid-specific IgE; lower ratios of egg-specific IgG4/IgE; and increased basophil activation in response to egg. Among all participants, CD154-based profiling revealed egg-responsive T cells producing IL-4 and IL-13 but little IL-10 or IFN-γ, as well as the presence of egg-responsive Foxp3+CD25+CD127low regulatory T cells. Egg-responsive T cells expressed CCR4, CCR6, and CXCR5, indicating capacity for homing to the skin, mucosa, and B-cell follicles. However, neither the frequency nor phenotype of egg-responsive T cells was different in those with tolerance or reactivity to BE. Conclusions: Egg-specific antibody and basophil responses, but not T-cell responses, are greater in those with reactivity versus tolerance to BE. Egg-specific antibody and T-cell responses were highly heterogeneous in this cohort. The clinical implications of this immune heterogeneity will need to be studied longitudinally. abstract_id: PUBMED:37622260 The role of basophil activation test and component-resolved diagnostics in the workup of egg allergy in children at low risk for severe allergic reactions: A real-life study. Background: The gold standard for diagnosing egg allergy in children is the oral food challenge (OFC). However, OFCs are time-consuming and risky procedures. 
Our study aimed to evaluate the utility of the basophil activation test (BAT) and component-resolved diagnostic in the diagnostic workup of children with egg allergy. Methods: Overall, 86 children aged 6 months to 17 years, suspected of egg allergy, underwent OFC with boiled egg according to international standardized protocols. BAT and specific immunoglobulin E (sIgE) testing to component egg proteins (Gal d 1-4) were also performed. Results: Of the 22 children who reacted to boiled egg, only one experienced anaphylaxis during the challenge. BAT was performed in samples obtained by 75 of the 86 patients of our cohort. Egg white and yolk protein extracts induced CD63 upregulation in the egg-allergic (EA) children compared with sensitized children that tolerated boiled egg (we registered an overall mean of CD63 expression in the EA population of 44.4% [SD 34.1] for egg white and 34.7% [SD 31.3] for egg yolk vs. 12.5% [SD 19.1] and 10.0% [SD 16.0] in sensitized children). BAT could discriminate between true egg allergy and egg sensitization in our population. As a second-line diagnostic step, the positivity of BAT for egg white or Gal d 1-sIgE resulted in a 40.9% OFC reduction, especially for those with a positive outcome. Conclusion: The BAT may be implemented in the diagnostic workup of egg allergy in children and, in a stepwise approach, separately or combined with Gal d 1-sIgE, may predict the allergic status and reduce the number of positive OFCs in children with egg allergy at low risk for severe reactions. abstract_id: PUBMED:27059671 Oral immunotherapy with low allergenic hydrolysed egg in egg allergic children. Background: A major drawback of oral immunotherapy for food allergy is the possibility of severe side-effects. We assessed both safety and efficacy of a low allergenic hydrolysed egg (HydE) preparation used in a double-blind placebo-controlled randomized study in egg allergic children. Methods: In a pilot multicentre study, 29 egg allergic patients (aged 1-5.5 years) were administered daily for 6 months 9 g HydE or placebo in a blinded, randomized manner. Safety was verified by oral food challenge to assess tolerance towards HydE at the start and efficacy by an open oral food challenge (OFC, primary outcome) at the end. Additionally, changes in basophil activation and specific IgE and IgG4 were assessed. Results: All egg allergic patients randomized to HydE (n = 15) tolerated the full dose at day 1 and received the maintenance dose from the start at home. No statistically significant difference was observed on the final OFC (36% and 21% had a negative OFC in the treatment and placebo groups, respectively). Specific IgG4 levels increased, while both CD203c+ and CD63+ basophils decreased significantly more over time in the treatment than in the placebo group. Conclusions: HydE can be regarded as a safe, low allergenic product to use in children allergic to egg. Although not significant, HydE given for 6 months increased numerically the proportion of patients becoming tolerant to egg. HydE induced a modulation of the immune response towards better tolerance. A longer treatment period and/or a higher dose may improve the clinical outcome and should be evaluated. abstract_id: PUBMED:22957889 Specific oral tolerance induction with raw hen's egg in children with very severe egg allergy: a randomized controlled trial. Background: Treatment of severe egg allergy is avoidance of hen's egg (HE) and carrying self-injectable epinephrine. 
Specific oral tolerance induction (SOTI) seems a promising alternative treatment. However, some aspects of SOTI are still considered experimental. Methods: We evaluated the efficacy and safety of an original 6-month SOTI protocol in children with very severe HE allergy using raw HE emulsion. Twenty children (age range: 5-11 yr) were randomized equally into a SOTI treatment group and a control group. The treatment group started SOTI and underwent a second challenge 6 months later. Control children were kept on an egg-free diet for 6 months and then underwent a second challenge. Results: After 6 months, 9/10 children of the SOTI group (90%) achieved partial tolerance (at least 10 ml, but <40 ml of raw HE emulsion, in a single dose) and 1 (10%) was able to tolerate only 5 ml (no tolerance). After 6 months, nine control children tested positive to the second challenge at a dose ≤0.9 ml of raw HE emulsion, and one reacted to 1.8 ml (SOTI vs. control group p<0.0001). All children in the SOTI group had side effects, but no child had a grade 5 reaction according to the Sampson grading. Conclusion: Six months of SOTI with raw HE emulsion resulted in partial tolerance, with regular intake, in a significant percentage of children with severe egg allergy. abstract_id: PUBMED:19642905 A pilot study of interferon-gamma-induced specific oral tolerance induction (ISOTI) for immunoglobulin E-mediated anaphylactic food allergy. Food-induced anaphylaxis is a life-threatening, IgE-mediated disease. No specific therapeutic recommendations, aside from the avoidance of offending foods, exist at this time. However, specific oral tolerance induction for food allergy has been investigated by several groups. In this study, specific oral tolerance induction was attempted using interferon-gamma (IFN-gamma) as an adjuvant for IgE-mediated anaphylactic food allergies. A total of 25 patients with IgE-mediated anaphylactic food allergy to milk, eggs, or wheat were selected. IFN-gamma-induced specific oral tolerance induction (ISOTI) was conducted on 10 patients, while five patients were only treated with food, five patients received only IFN-gamma therapy, and five patients did not receive any treatment. Tolerance for IgE-mediated anaphylactic food allergy was successfully induced in all patients (10/10) with ISOTI, while no patients acquired tolerance for allergenic foods in the control groups. Food-specific IgE levels were increased, and skin prick test reactions significantly decreased after the completion of ISOTI. IFN-gamma-induced specific oral tolerance induction (ISOTI) is a promisingly effective treatment for IgE-mediated anaphylactic food allergy. IFN-gamma may be an important cytokine in tolerance induction. Simultaneous allergen stimulation with nonspecific immunomodulation of IFN-gamma was essential for specific tolerance induction in IgE-mediated anaphylactic food allergy. Answer: Yes, the quantification of antigen-specific basophil activation is a useful tool for monitoring oral tolerance induction (SOTI) in children with egg allergy. Studies have shown that a decrease in antigen-specific basophil activation, as measured by the basophil activation test (BAT), is associated with the development of clinical tolerance to egg following SOTI in children (PUBMED:23682931). 
Furthermore, a decrease in antigen-specific activation with egg white, ovalbumin, and ovomucoid was observed after completion of SOTI, indicating that food tolerance induction can be mediated by immunologic changes such as a decrease in specific IgE values and in specific and spontaneous basophil activation (PUBMED:27012013). Additionally, the BAT with progressively less heated forms of egg can distinguish egg allergic from tolerant children, with basophil responses decreasing progressively with more heated egg extracts. This suggests that BAT is a sensitive and highly specific tool to discriminate between egg allergic (EA) and egg tolerant (ET) children (PUBMED:37937713). Moreover, the BAT has been shown to be a potentially useful diagnostic tool for milk and egg allergy in children, including those with atopic dermatitis (AD), when combined with serum-specific immunoglobulin E (sIgE) levels (PUBMED:33291359). In summary, the quantification of antigen-specific basophil activation through BAT is a valuable method for monitoring the effectiveness of SOTI in children with egg allergy, as it correlates with the development of clinical tolerance and can help distinguish between allergic and tolerant individuals.
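The BAT studies above report diagnostic accuracy as sensitivity, specificity, false positive rate, and ROC-derived cutoffs. The sketch below, written against invented %CD63+ values rather than any of the published datasets, shows one way such figures are obtained: the ROC AUC is computed over all allergic/tolerant pairs and a cutoff is chosen by maximizing Youden's J (sensitivity + specificity - 1). Only numpy is assumed.

import numpy as np

# Hypothetical %CD63+ basophils after egg-white stimulation (illustrative values only).
allergic = np.array([55.0, 40.0, 62.0, 35.0, 48.0, 70.0, 52.0, 44.0])
tolerant = np.array([5.0, 12.0, 8.0, 3.0, 38.0, 9.0, 6.0, 11.0, 4.0])

# AUC = probability that a random allergic child scores higher than a random tolerant child.
greater = (allergic[:, None] > tolerant[None, :]).sum()
ties = (allergic[:, None] == tolerant[None, :]).sum()
auc = (greater + 0.5 * ties) / (allergic.size * tolerant.size)

# Choose the cutoff that maximizes Youden's J over all observed values.
best_cutoff, best_j = None, -1.0
for cutoff in np.unique(np.concatenate([allergic, tolerant])):
    sensitivity = np.mean(allergic >= cutoff)
    specificity = np.mean(tolerant < cutoff)
    j = sensitivity + specificity - 1
    if j > best_j:
        best_cutoff, best_j = cutoff, j

sens = np.mean(allergic >= best_cutoff)
spec = np.mean(tolerant < best_cutoff)
print(f"AUC = {auc:.2f}; cutoff = {best_cutoff:.0f}% CD63+, sensitivity = {sens:.2f}, "
      f"specificity = {spec:.2f}, false positive rate = {1 - spec:.2f}")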
Instruction: Is adjuvant chemotherapy beneficial to high risk stage II colon cancer? Abstracts: abstract_id: PUBMED:32927771 Adjuvant Chemotherapy for Stage II Colon Cancer. In stage II colon cancer management, surgery alone has shown a high cure rate (about 80%), and the role of adjuvant chemotherapy is still a matter of debate. Patients with high-risk features (T4, insufficient nodal sampling, grading, etc.) have a poorer prognosis and, usually, adjuvant chemotherapy is recommended. The purpose of the present study is to highlight and discuss what is still unclear and not completely defined from the previous trials regarding risk stratification and therapeutic benefit of adjuvant chemotherapy. With all the limitations of generalizing, we make the effort of trying to quantify the relative contribution of each prognostic factor and the benefit of adjuvant chemotherapy for stage II colon cancer. Finally, we propose a decision algorithm with the aim of summarizing the current evidence and translating it to clinical practice. abstract_id: PUBMED:26914273 Adjuvant chemotherapy is not associated with improved survival for all high-risk factors in stage II colon cancer. Adjuvant chemotherapy can be considered in high-risk stage II colon cancer comprising pT4, poor/undifferentiated grade, vascular invasion, emergency surgery and/or <10 evaluated lymph nodes (LNs). Adjuvant chemotherapy administration and its effect on survival was evaluated for each known risk factor. All patients with high-risk stage II colon cancer who underwent resection and were diagnosed in the Netherlands between 2008 and 2012 were included. After stratification by risk factor(s) (vascular invasion could not be included), Cox regression was used to discriminate the independent association of adjuvant chemotherapy with the probability of death. Relative survival was used to estimate disease-specific survival. A total of 4,940 of 10,935 patients with stage II colon cancer were identified as high risk, of whom 790 (16%) patients received adjuvant chemotherapy. Patients with a pT4 received adjuvant chemotherapy more often (37%). Probability of death in pT4 patients receiving chemotherapy was lower compared to non-recipients (3-year overall survival 91% vs. 73%, HR 0.43, 95% CI 0.28-0.66). The relative excess risk (RER) of dying was also lower for pT4 patients receiving chemotherapy compared to non-recipients (3-year relative survival 94% vs. 85%, RER 0.36, 95% CI 0.17-0.74). For patients with only poor/undifferentiated grade, emergency surgery or <10 LNs evaluated, no association between receipt of adjuvant chemotherapy and survival was observed. In high-risk stage II colon cancer, adjuvant chemotherapy was associated with higher survival in pT4 only. To prevent unnecessary chemotherapy-induced toxicity, further refinement of patient subgroups within stage II colon cancer who could benefit from adjuvant chemotherapy seems indicated. abstract_id: PUBMED:34760701 High-Risk Features Are Prognostic in dMMR/MSI-H Stage II Colon Cancer. Background: High-risk features, such as T4 disease, bowel obstruction, poorly/undifferentiated histology, lymphovascular, perineural invasion, and <12 lymph nodes sampled, indicate poor prognosis and define high-risk stage II disease in proficient mismatch repair stage II colon cancer (CC). The prognostic role of high-risk features in dMMR/MSI-H stage II CC is unknown. 
Similarly, the role of adjuvant therapy in high-risk stage II CC with dMMR/MSI-H (≥1 high-risk feature) has not been studied in prospective trials. The aim of this analysis of the National Cancer Database is to evaluate the prognostic value of high-risk features in stage II dMMR/MSI-H CC. Methods: Univariate (UVA) and multivariate (MVA) Cox proportional hazards (Cox-PH) models were built to assess the association between clinical and demographic characteristics and overall survival. Kaplan-Meier survival curves were generated with log-rank tests to evaluate the association between adjuvant chemotherapy in high-risk and low-risk cohorts separately. Results: A total of 2,293 stage II CC patients have dMMR/MSI-H; of those, 29.5% (n = 676) had high-risk features. The high-risk dMMR/MSI-H patients had worse overall survival [5-year survival and 95%CI, 73.2% (67.3-78.1%) vs. 80.3% (76.7-83.5%), p = 0.0001]. In patients with stage II dMMR/MSI-H CC, the high-risk features were associated with shorter overall survival (OS) along with male sex, positive carcinoembryonic antigen, Charlson-Deyo score >1, and older age. Adjuvant chemotherapy administration was associated with better OS, regardless of the high-risk features in dMMR/MSI-H (log-rank test, p = 0.001) or not (p = 0.0006). When stratified by age, the benefit of chemotherapy was evident only in patients age ≥65 with high-risk features. Conclusion: High-risk features are prognostic in the setting of dMMR/MSI-H stage II CC. Adjuvant chemotherapy may improve survival specifically in patients ≥65 years and with high-risk features. abstract_id: PUBMED:19238405 Is adjuvant chemotherapy beneficial to high risk stage II colon cancer? Analysis in a single institute. Background: Colorectal carcinoma is the most common malignancy of the gastrointestinal tract. It remains controversial for adjuvant chemotherapy in patients with stage II colon cancer. This study was designed to identify the risk factors of tumor recurrence in stage II colon cancer. Furthermore, the benefit of adjuvant chemotherapy for high-risk stage II colorectal cancer will be investigated. Materials And Methods: From May 1998 until August 2004, 375 patients with stage II (T3N0M0, T4N0M0) colon cancer received curative operation in a single hospital. The clinical data were extracted from the prospectively collected colorectal cancer database. The disease-free survival curves were calculated with Kaplan-Meier's analysis, and the survival difference was determined by log-rank test. The p value less than 0.05 was considered to be significant. Results: Of 375 stage II colon cancer, 66 patients received 5-FU-based adjuvant chemotherapy, either oral or intravenous (IV) form. Within the median of 48.5 months of follow-up, recurrence developed in 35 patients (9.3%). T4 lesion (p=0.024), lymphovascular invasion (p=0.022), obstruction at presentation (p=0.008), and mucinous component more than 50% (p=0.032) were associated with significantly decreased disease-free survival. High-risk patients (n=102), but not other patients with stage II colon cancer, benefited from adjuvant therapy (3-year disease-free survival: 96.4% vs. 84.7%, p=0.045; 5-year overall survival: 100% vs. 86.4%, p=0.015). Conclusion: Adjuvant therapy for high-risk stage II colon cancer may be beneficial, and we suggest adjuvant therapy should be considered in high-risk stage II colon cancer patients. 
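The single-institute analysis above estimates disease-free survival with Kaplan-Meier curves and compares groups with a log-rank test. The following minimal sketch reproduces that workflow on simulated follow-up times (months to recurrence with administrative censoring, generated at random rather than taken from any study); it assumes the lifelines library is installed.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 60
followup = 60.0  # administrative censoring at 5 years of follow-up (months)

# Simulated times to recurrence for two illustrative groups (not real study data).
t_chemo = rng.exponential(scale=120.0, size=n)   # adjuvant chemotherapy group
t_obs = rng.exponential(scale=70.0, size=n)      # observation group
time_chemo, event_chemo = np.minimum(t_chemo, followup), (t_chemo <= followup).astype(int)
time_obs, event_obs = np.minimum(t_obs, followup), (t_obs <= followup).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(time_chemo, event_observed=event_chemo, label="adjuvant chemotherapy")
print(kmf.survival_function_.tail())             # Kaplan-Meier disease-free survival estimate

result = logrank_test(time_chemo, time_obs,
                      event_observed_A=event_chemo, event_observed_B=event_obs)
print(f"log-rank statistic = {result.test_statistic:.2f}, p = {result.p_value:.4f}")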
abstract_id: PUBMED:37725517 Impact of adjuvant chemotherapy on long-term overall survival in patients with high-risk stage II colon cancer: a nationwide cohort study. Background: This study aimed to investigate the impact of adjuvant chemotherapy on long-term survival in unselected patients with high-risk stage II colon cancer including an analysis of each high-risk feature. Materials And Methods: Data from the Danish Colorectal Cancer Group, the National Patient Registry and the Danish Pathology Registry from 2014 to 2018 were merged. Patients surviving > 90 days were included. High-risk features were defined as emergency presentation, including self-expanding metal stents (SEMS)/loop-ostomy as a bridge to resection, grade B or C anastomotic leakage, pT4 tumors, lymph node yield < 12 or signet cell carcinoma. Eligibility criteria for chemotherapy were age < 75 years, proficient MMR gene expression, and performance status ≤ 2. The primary outcome was 5-year overall survival. Secondary outcomes included the proportion of eligible patients allocated for adjuvant chemotherapy and the time to first administration. Results: In total 939 of 3937 patients with stage II colon cancer had high-risk features, of whom 408 were eligible for chemotherapy. 201 (49.3%) patients received adjuvant chemotherapy, with a median time to first administration of 35 days after surgery. The crude 5-year overall survival was 84.9% in patients receiving adjuvant chemotherapy compared with 66.3% in patients not receiving chemotherapy, p < 0.001. This association corresponded to an absolute risk difference of 14%. Conclusion: 5-year overall survival was significantly higher in patients with high-risk stage II colon cancer treated with adjuvant chemotherapy compared with no chemotherapy. Adjuvant treatment was given to less than half of the patients who were eligible for it. abstract_id: PUBMED:34933441 Clinical implication of adjuvant chemotherapy according to mismatch repair status in patients with intermediate-risk stage II colon cancer: a retrospective study. Backgruound: The present study evaluated the clinical implications of adjuvant chemotherapy according to the mismatch repair (MMR) status and clinicopathologic features of patients with intermediate- and high-risk stage II colon cancer (CC). Methods: This study retrospectively reviewed 5,774 patients who were diagnosed with CC and underwent curative surgical resection at Kyungpook National University Chilgok Hospital. The patients were enrolled according to the following criteria: (1) pathologically diagnosed with primary CC; (2) stage II CC classified based on the 7th edition of the American Joint Committee on Cancer staging system; (3) intermediate- and high-risk features; and (4) available test results for MMR status. A total of 286 patients met these criteria and were included in the study. Results: Among the 286 patients, 54 (18.9%) were identified as microsatellite instability-high (MSI-H) or deficient MMR (dMMR). Although all the patients identified as MSI-H/dMMR showed better survival outcomes, T4 tumors and adjuvant chemotherapy were identified as independent prognostic factors for survival. For the intermediate-risk patients identified as MSI-low (MSI-L)/microsatellite stable (MSS) or proficient MMR (pMMR), adjuvant chemotherapy exhibited a significantly better disease-free survival (DFS) but had no impact on overall survival (OS). Oxaliplatin-containing regimens showed no association with DFS or OS. 
Adjuvant chemotherapy was not associated with DFS in intermediate-risk patients identified as MSI-H/dMMR. Conclusion: The current study found that the use of adjuvant chemotherapy was correlated with better DFS in MSI-L/MSS or pMMR intermediate-risk stage II CC patients. abstract_id: PUBMED:38091096 Adjuvant chemotherapy decision-making in stage II colon adenocarcinoma associated with patients' age and high-risk factors. Purpose: To clarify whether the combination of age and high-risk factors (HRFs) was preferable for adjuvant chemotherapy (AC) decision-making in patients with stage II colon adenocarcinoma. Methods: We conducted a retrospective study analyzing eligible colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) database between 2010 and 2017. A nomogram was used to predict patient prognosis. Decision curve analysis (DCA) predicted model clinical benefit. Restricted cubic spline calculated the optimal cut-off value. Results: A total of 8570 patients with stage II colon adenocarcinoma were included in this study; 25.2% received AC. A nomogram predicting the prognosis of patients with stage II colon adenocarcinoma was constructed with age and HRFs, and scores were assigned to the relevant variables. DCA showed that age combined with HRFs was superior to treatment decision-making based on HRFs alone. Patients were grouped according to their total score with the cut-off value of 100. AC did not significantly improve overall survival (OS) in low-score group (hazard ratios (HRs) 1.01, 95% confidence intervals (CIs) 0.86-1.18, p = 0.918). In high-score group, AC improved 5-year OS by about 7.6% (HR 0.73, 95% CI 0.61-0.88, p = 0.001). And high-score group mainly included patients aged < 50 years with two or more HRFs and patients aged ≥ 50 years with at least one HRF. Conclusion: Age and HRFs could be preferable for determining the group of stage II colon adenocarcinoma patients who would benefit from AC. Patients aged < 50 years with two or more HRFs might be a potential benefit population for AC. abstract_id: PUBMED:37249722 Factors Associated with Receipt of Adjuvant Chemotherapy in Stage II Colon Cancer. Background: The benefits of chemotherapy in stage II colon cancer remain unclear, but it is recommended for high-risk stage II disease. Which patients receive chemotherapy and its impact on survival remains undetermined. Methods: The National Cancer Database was surveyed between 2004 and 2016 for stage II colon cancer patients. Patients were categorized as high- or average-risk as defined by the National Comprehensive Cancer Network. The demographic characteristics of high- and average-risk patients who did and did not receive chemotherapy were compared using univariate and multivariable analyses. The survival of high- and average-risk patients was compared based on receipt of chemotherapy with Cox hazard ratios and Kaplan-Meier curves. Results: Overall, 84,424 patients met the inclusion criteria. A total of 34,868 patients were high-risk and 49,556 were average-risk. In high-risk patients, the risk factors for not receiving chemotherapy included increasing age, distance from the treatment facility, Charlson-Deyo score, and lack of insurance. In average-risk patients, factors associated with receipt of chemotherapy were decreasing age, distance from the treatment facility, Charlson-Deyo score, and non-academic association of the treatment facility. In both, chemotherapy was significantly associated with increased survival on the Kaplan-Meier curve. 
In the Cox hazard ratio, only high-risk patients benefited from chemotherapy (hazard ratio 1.183, confidence interval 1.116-1.254). Conclusions: Factors associated with not receiving chemotherapy in high-risk stage II colon cancers included increasing age, medical comorbidities, increasing distance from the treatment facility, and lack of insurance. Chemotherapy is associated with improved overall survival in high-risk patients. abstract_id: PUBMED:26648796 Chemotherapy for Stage II Colon Cancer. The adjuvant treatment of patients with stage II colon cancer is an area of controversy in medical oncology. Adjuvant chemotherapy aims to eradicate micrometastatic disease present at the time of surgery, preventing the development of distant metastatic disease and thereby curing those patients of their cancer. National and international guidelines for the adjuvant treatment of stage II colon cancer recommend a range of treatment options from observation to chemotherapy with single-agent or combination regimens, depending on the presence or absence of high-risk features (poorly differentiated histology, presence of lymphovascular invasion, presence of perineural invasion, report of < 12 lymph nodes, bowel obstruction, localized perforation, or positive margins). In the one prospective study designed to address the role of adjuvant chemotherapy in stage II colon cancer, a small but statistically significant benefit in overall survival was seen for those patients who received adjuvant chemotherapy; however, multiple meta-analyses and retrospective subgroup analyses have called these findings into question. Though there may be a role for adjuvant chemotherapy in the treatment of patients with stage II colon cancer, its incremental benefit is small, at best, and comes with the risks of real and rarely fatal complications of chemotherapy. abstract_id: PUBMED:21789139 Combination therapy in high-risk stage II or stage III colon cancer: current practice and future prospects. Colon cancer represents the second leading cause of cancer-related deaths. For patients who have undergone curative surgery, adjuvant therapy can reduce the risk of recurrence and death from relapsed or metastatic disease. Postoperative chemotherapy with a 5-fluorouracil-based regimen combined with oxaliplatin is the current standard of care for stage III patients. However, there is still controversy in stage II disease about the real impact of adjuvant monotherapy or combined therapy on survival. Better identification of a subgroup of patients with a higher risk of recurrence can select patients who might benefit from adjuvant therapy. For the elderly population, there is a well-established role for postoperative therapy, although the most appropriate regimen remains to be defined. Targeted agents for combined adjuvant therapy in stage II and III colon cancer is a promising area, but to date, there is no evidence supporting its use in this setting. Results from large prospective trials with targeted therapy have been disappointing and new drugs and strategies are needed to define the role of these types of agents in the adjuvant scenario of colon cancer. Answer: Adjuvant chemotherapy is a topic of debate in the management of high-risk stage II colon cancer. 
While surgery alone has a high cure rate of about 80% for stage II colon cancer, the role of adjuvant chemotherapy is less clear and is often recommended for patients with high-risk features such as T4 lesions, insufficient nodal sampling, poor grading, and other factors (PUBMED:32927771). Studies have shown that adjuvant chemotherapy can be beneficial for certain high-risk factors. For instance, patients with pT4 tumors who received adjuvant chemotherapy had a lower probability of death and higher survival rates compared to those who did not receive chemotherapy (PUBMED:26914273). Similarly, high-risk features were found to be prognostic in dMMR/MSI-H stage II colon cancer, and adjuvant chemotherapy was associated with better overall survival, especially in patients aged ≥65 years with high-risk features (PUBMED:34760701). However, the benefit of adjuvant chemotherapy is not uniform across all high-risk factors. For patients with only poor/undifferentiated grade, emergency surgery, or fewer than 10 lymph nodes evaluated, no association between receipt of adjuvant chemotherapy and survival was observed (PUBMED:26914273). In a single institute analysis, high-risk patients benefited from adjuvant therapy, suggesting that it should be considered for this group (PUBMED:19238405). A nationwide cohort study also found that 5-year overall survival was significantly higher in patients with high-risk stage II colon cancer treated with adjuvant chemotherapy compared to those who did not receive it (PUBMED:37725517). Moreover, the use of adjuvant chemotherapy was correlated with better disease-free survival in MSI-L/MSS or pMMR intermediate-risk stage II CC patients (PUBMED:34933441). In summary, adjuvant chemotherapy appears to be beneficial for patients with high-risk stage II colon cancer, particularly for certain subgroups such as those with pT4 tumors or dMMR/MSI-H status. However, the decision to use adjuvant chemotherapy should be individualized based on specific risk factors and patient characteristics, and further refinement of patient subgroups who could benefit from adjuvant chemotherapy is indicated (PUBMED:26914273; PUBMED:34760701; PUBMED:19238405; PUBMED:37725517; PUBMED:34933441).
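Several of the abstracts above adjust for confounders with multivariable Cox proportional hazards models and report the results as hazard ratios. The sketch below fits such a model to a simulated cohort (receipt of chemotherapy, pT4 status and age generated at random, not drawn from the cited registries); it assumes the lifelines and pandas libraries, and the exponentiated coefficients are the hazard ratios.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300

# Simulated high-risk stage II cohort (illustrative only).
chemo = rng.integers(0, 2, size=n)     # 1 = received adjuvant chemotherapy
pt4 = rng.integers(0, 2, size=n)       # 1 = pT4 tumour
age = rng.normal(70.0, 8.0, size=n)

# Survival times whose hazard depends on the covariates; censor at 60 months.
linear_predictor = -0.5 * chemo + 0.7 * pt4 + 0.03 * (age - 70.0)
raw_time = rng.exponential(scale=80.0 * np.exp(-linear_predictor))
event = (raw_time <= 60.0).astype(int)
time = np.minimum(raw_time, 60.0)

df = pd.DataFrame({"time": time, "event": event, "chemo": chemo, "pt4": pt4, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)   # exp(coef); a value below 1 for chemo means a lower hazard of death
cph.print_summary()         # coefficients, 95% confidence intervals and p-values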
Instruction: Is the risk of obstetric anal sphincter injuries increased in vaginal twin deliveries? Abstracts: abstract_id: PUBMED:26135763 Is the risk of obstetric anal sphincter injuries increased in vaginal twin deliveries? Objective: Vaginal twin deliveries have a higher rate of intrapartum interventions. We aimed to determine whether these characteristics are associated with an increased rate of obstetric anal sphincter injuries compared with singleton. Study Design: Retrospective study of all twin pregnancies undergoing vaginal delivery trial was conducted from January 2000-September 2014. Sphincter injury rate compared with all concurrent singleton vaginal deliveries. Multivariable analysis was used to determine twin delivery association with sphincter injuries while adjusting for confounders. Results: About 717 eligible twin deliveries. Outcome was compared with 33 886 singleton deliveries. Twin pregnancies characterized by a higher rate of nulliparity (54.8% versus 49.5%, p = 0.005), labor induction (42.7% versus 29.1%, p < 0.001), and instrumental deliveries (27.5% versus 16.7%, p < 0.001), lower gestational (34.6 ± 3.3 versus 38.8 ± 2.3, p < 0.001), and lower birth weight. Total breech extraction was performed in 29.0% (208/717) of twin deliveries. Overall obstetric sphincter injury rate was significantly lower in the twins group (2.8% versus 4.4%, p = 0.03, OR = 0.6, 95% CI 0.4-0.9), due to lower rate of 3rd degree tears in twins versus singletons (2.2% versus 4.0%, p = 0.02), rate of 4th degree tears similar among the groups (0.6% versus 0.4%, p = 0.5). In multivariable analysis, sphincter injuries were associated with nulliparity (OR = 3.9, 95% CI 3.4-4.5), forceps (OR = 6.8, 95% CI 5.8-7.8), vacuum (OR = 2.9, 95% CI 2.5-3.3), earlier gestational age (OR = 0.2, 95% CI 0.1-0.3), episiotomy (OR = 0.8, 95% CI 0.7-0.9), and birth weight over 3500 g (OR = 1.8, 95% CI 1.6-2.0). However, the association between twins (versus singletons) deliveries and sphincter injuries was lost after adjustment for delivery gestational age (OR = 0.7, 95% CI 0.4-1.2). Conclusion: Despite a higher rate of intrapartum interventions, the rate of sphincter injuries is lower in twins versus singleton deliveries, mainly due to a lower gestational age at delivery. abstract_id: PUBMED:26423259 Risk factors for obstetric anal sphincter injuries in twin deliveries: a retrospective review. Introduction And Hypothesis: Risk factors for obstetric anal sphincter injuries (OASIS) have been well-established in singleton pregnancies. Considering the unique characteristics of twin deliveries, our aim was to identify risk factors for OASIS that are specific to twins. Methods: A retrospective study of all vaginal twin deliveries in a tertiary center between 2000 and 2014. Women who experienced OASIS (the OASIS group) were compared with those whose anal sphincter was intact (controls). Results: Overall 717 women were eligible for the study, of whom 20 (2.8 %) experienced OASIS. 
Women in the OASIS group were more likely to be nulliparous (95.0 % vs 53.7 %, p < 0.001) and were characterized by a higher gestational age at delivery (36.1 ± 2.5 vs 34.6 ± 3.3, p = 0.04), a higher birth weight for both twin A and twin B (2,507 ± 540 g vs 2,254 ± 525 g, p = 0.03, and 2,549 ± 420 g vs 2,232 ± 606 g, p = 0.004 respectively), and a higher rate of episiotomy (40.0 % vs 14.2 %, p = 0.001), instrumental delivery for twin A (80.0 % vs 13.5 %, p < 0.001) or twin B (80.0 % vs 18.7 %, p < 0.001), and inter-twin delivery interval of over 30 min (20.0 % vs 7.5 %, p = 0.04). The only factor that remained significant on multivariate analysis was instrumental delivery: forceps delivery of twin A (OR = 8.8, 95 % CI 2.6-30.1), vacuum extraction of twin A (OR = 9.2, 95 % CI 2.6-34.6), and forceps delivery of twin B (OR = 15.4, 95 % CI 4.9-48.6). In women with certain combinations of risk factors the risk of OASIS was as high as 30 %. Conclusion: The overall rate of OASIS in twins is low and instrumental delivery, especially by forceps, is a risk factor. abstract_id: PUBMED:30187969 A comparative study of obstetric anal sphincter injuries in vaginal deliveries of twins and singleton pregnancies. Aims: Increasing numbers of twin pregnancies necessitate investigation of the risk factors associated with obstetric anal sphincter injuries (OASIS). The aims of this study were to establish the incidence of OASIS, and compare women delivering twins to those delivering singletons in risk of OASIS plus maternal, neonatal, and obstetric outcomes. Methods: A retrospective cohort study was undertaken and included women delivering in a tertiary London maternity unit between 1999 and 2015; 51 957 eligible women with singleton and 261 with twin pregnancies were identified. Women delivering twins were compared to those delivering singletons in the occurrence of maternal, neonatal, and obstetric outcomes using unadjusted relative risks. A secondary analysis was performed, and conditional logistic regression used to derive an adjusted relative risk of OASIS. Women delivering singletons were matched to those delivering twins for age, parity, ethnicity, gestation, and mode of delivery in a ratio of 2:1. Results: Compared to the singleton group, mothers delivering twins had significantly more instrumental deliveries (RR [95%CI] 1.92 [1.67-2.27]; P < 0.0001), smaller fetuses (2754.1 vs 3383.8 g; P < 0.001), and were older (32.9 vs 31.0 years; P < 0.0001). The twin group was not significantly different to the singleton group in risk of OASIS (RR 0.61 [0.27-1.33], P = 0.205). The conditional logistic regression demonstrated similar results for the risk of OASIS (adjusted RR = 0.58 [0.22-1.47]; P = 0.253). Conclusions: Women delivering twins vaginally were not at a significantly higher risk of sustaining OASIS compared to those delivering singletons. abstract_id: PUBMED:31170749 Anal Sphincter Injury Associated with Vaginal Twin Delivery. Objective: Obstetric anal sphincter injury remains the most common cause of fecal incontinence in women, and research in twin pregnancies is sparse. This study aimed to examine risk factors for sphincter injury in twin deliveries over a 10-year period. Study Design: This was a retrospective study of twin vaginal deliveries in a tertiary-level hospital over 10 years. We examined the demographics of women who had a vaginal delivery of at least one twin. Logistic regression analysis was used to examine risk factors.
Results: There were 1,783 (2.1%) twin pregnancies, of which 556 (31%) had a vaginal delivery of at least one twin. Sphincter injury occurred in 1.1% (6/556) of women with twins compared with 2.9% (1720/59,944) of singleton vaginal deliveries. Women with sphincter injury had more instrumental deliveries (83.3 vs. 27.6%; p = 0.008). On univariate analysis, only instrumental delivery was a significant risk factor (odds ratio: 2.93; p = 0.019). Conclusion: Sphincter injury occurs at a lower rate in vaginal twin pregnancies than in singletons. No twin-specific risk factors were identified. Discussion of the risk of sphincter injury should form part of patient counseling with regard to the mode of delivery. abstract_id: PUBMED:28202245 Recurrent obstetric anal sphincter injury and the risk of long-term anal incontinence. Background: Women with an obstetric anal sphincter injury are concerned about the risk of recurrent obstetric anal sphincter injury in their second pregnancy. Existing studies have failed to clarify whether the recurrence of obstetric anal sphincter injury affects the risk of anal and fecal incontinence at long-term follow-up. Objective: The objective of the study was to evaluate whether recurrent obstetric anal sphincter injury influenced the risk of anal and fecal incontinence more than 5 years after the second vaginal delivery. Study Design: We performed a secondary analysis of data from a postal questionnaire study in women with obstetric anal sphincter injury in the first delivery and 1 subsequent vaginal delivery. The questionnaire was sent to all Danish women who fulfilled inclusion criteria and had 2 vaginal deliveries 1997-2005. We performed uni- and multivariable analyses to assess how recurrent obstetric anal sphincter injury affects the risk of anal incontinence. Results: In 1490 women with a second vaginal delivery after a first delivery with obstetric anal sphincter injury, 106 had a recurrent obstetric anal sphincter injury. Of these, 50.0% (n = 53) reported anal incontinence compared with 37.9% (n = 525) of women without recurrent obstetric anal sphincter injury. Fecal incontinence was present in 23.6% (n = 25) of women with recurrent obstetric anal sphincter injury and in 13.2% (n = 182) of women without recurrent obstetric anal sphincter injury. After adjustment for third- or fourth-degree obstetric anal sphincter injury in the first delivery, maternal age at answering the questionnaire, birthweight of the first and second child, years since first and second delivery, and whether anal incontinence was present before the second pregnancy, the risk of flatal and fecal incontinence was still increased in patients with recurrent obstetric anal sphincter injury (adjusted odds ratio, 1.68 [95% confidence interval, 1.05-2.70], P = .03, and adjusted odds ratio, 1.98 [95% confidence interval, 1.13-3.47], P = .02, respectively). More women with recurrent obstetric anal sphincter injury reported that anal incontinence affected their quality of life (34.9%, n = 37) compared with women without recurrent obstetric anal sphincter injury (24.2%, n = 335), although this difference did not reach statistical significance after adjustment (adjusted odds ratio, 1.53 [95% confidence interval, 0.92-2.56], P = .10). Conclusion: Women opting for vaginal delivery after obstetric anal sphincter injury should be informed about the risk of recurrence, which is associated with an increased risk of long-term flatal and fecal incontinence.
abstract_id: PUBMED:35471584 Birthweight difference between deliveries and the risk of obstetric anal sphincter injury in parous women. Introduction And Hypothesis: While obstetric anal sphincter injury (OASI) is less frequent in parous compared to nulliparous women, it remains a major concern affecting quality of life of women worldwide. The aim of this study was to evaluate the association between birthweight (BW) difference between deliveries and risk of OASI in parous women. Methods: We performed a retrospective case-control study including parous women with at least one previous vaginal delivery who were diagnosed with OASI. The control group consisted of parous women who did not have OASI during vaginal delivery. Controls were matched in a 1:2 ratio by year of birth, maternal age, and parity. Medical history, obstetric background, and current labor-related data were compared. Further univariate and multivariable analyses were performed, assessing for risk factors for OASI. Results: One hundred eight parous women who had a diagnosis of OASI and a control group of 216 parturients who delivered without OASI were included in the final analysis. Differences between the current BW and the preceding and maximal previous BW were evaluated. There were significantly higher rates of women who had a larger neonate with > 500 g difference between the current and previous BW in the OASI group than in those with no OASI (28.7% vs. 12.30%, respectively; p < 0.001). Following a multivariable analysis for the dependent parameter of OASI, the following parameters were found to be independently associated with OASI outcome: previous operative vaginal delivery, BW ≥ 90th percentile, and current BW ≥ 500 g compared to previous maximal BW. Conclusions: In parous women, neonatal BW increase between deliveries of > 500 g is associated with OASI. abstract_id: PUBMED:29537100 Risk and protective factors for obstetric anal sphincter injuries: A retrospective nationwide study. Background: In view of the reported increase in obstetric anal sphincter injuries, the objective of this study was to evaluate the incidence of such injuries over time and the associated risk and protective factors. Methods: This was a retrospective cohort study from a national database of 168 137 primiparous women with term, singleton, cephalic, vaginal delivery between 2008 and 2014. The main outcome measure was obstetric anal sphincter injury. A multivariate regression model was used to identify risk and protective factors. Results: Age >19 years, birthweight >4000 g, and operative vaginal delivery were independent risk factors for obstetric anal sphincter injuries. Mediolateral episiotomy increased the risk for obstetric anal sphincter injuries in spontaneous vaginal birth (number needed to harm 333), whereas it was protective in vacuum delivery (number needed to treat 50). From 2008 to 2014, there was an increase in the rate of obstetric anal sphincter injuries (2.1% vs 3.1%, P < .01), vacuum deliveries (12.1% vs 12.8%, P < .01), and cesarean delivery after labor (17.1% vs 19.4%, P < .01), while forceps deliveries (0.4% vs 0.1%, P < .01) and episiotomy rate decreased (35.9% vs 26.4%, P < .01). Conclusions: Episiotomy may be a risk or protective factor depending on the type of episiotomy and the clinical setting in which it is used. Our study supports a restrictive use of mediolateral episiotomy in spontaneous vaginal deliveries. In vacuum deliveries mediolateral episiotomy may help prevent obstetric anal sphincter injuries. 
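The nationwide study above expresses the effect of mediolateral episiotomy as a number needed to harm of 333 in spontaneous births and a number needed to treat of 50 in vacuum deliveries. Both quantities are simply the reciprocal of the absolute risk difference between groups. The short sketch below demonstrates the arithmetic with invented event rates chosen only to give round numbers; they are not the rates underlying the published figures.

def number_needed(risk_exposed: float, risk_unexposed: float) -> float:
    # Reciprocal of the absolute risk difference: NNT if exposure lowers risk, NNH if it raises it.
    return 1.0 / abs(risk_exposed - risk_unexposed)

# Hypothetical: episiotomy raises OASI risk from 2.5% to 2.8% in spontaneous births ...
print(round(number_needed(0.028, 0.025)))  # -> 333 women per additional injury (NNH)
# ... and lowers it from 7.0% to 5.0% in vacuum deliveries.
print(round(number_needed(0.050, 0.070)))  # -> 50 women treated per injury avoided (NNT)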
abstract_id: PUBMED:28409235 Anal incontinence after two vaginal deliveries without obstetric anal sphincter rupture. Purpose: To evaluate prevalence and risk factors for long-term anal incontinence in women with two prior vaginal deliveries without obstetric anal sphincter injury (OASIS) and to assess the impact of anal incontinence-related symptoms on quality of life. Methods: This is a nation-wide cross-sectional survey study. One thousand women who had a first vaginal delivery and a subsequent delivery, both without OASIS, between 1997 and 2008 in Denmark were identified in the Danish Medical Birth Registry. Women with more than two deliveries in total till 2012 were excluded at this stage. Of the 1000 women randomly identified, 763 were eligible and received a questionnaire. Maternal and obstetric data were retrieved from the national registry. Results: The response rate was 58.3%. In total, 394 women were included for analysis after reviewing responses according to previously defined exclusion criteria. Median follow-up time was 9.8 years after the first delivery and 6.4 years after the second. The prevalence of flatal incontinence, fecal incontinence and fecal urgency were 11.7, 4.1, and 12.3%, respectively. Overall, 20.1% had any degree of anal incontinence and/or fecal urgency. In 6.3% these symptoms affected their quality of life. No maternal or obstetric factors including episiotomy and vacuum extraction were consistently associated with altered risk of anal incontinence in the multivariable analyses. Conclusions: Anal incontinence and fecal urgency is reported by one fifth of women with two vaginal deliveries without OASIS at long-term follow-up. Episiotomy or vacuum extraction did not alter the risk of long-term anal incontinence. abstract_id: PUBMED:26902989 Predicting obstetric anal sphincter injuries in a modern obstetric population. Background: Perineal lacerations are common at the time of vaginal delivery and may predispose patients to long-term pelvic floor disorders, such as urinary incontinence and pelvic organ prolapse. Obstetric anal sphincter injuries, which are the most severe form of perineal lacerations, result in disruption of the anal sphincter and, in some cases, the rectal mucosa during vaginal delivery. Long-term morbidity, including pain, pelvic floor disorders, fecal incontinence, and predisposition to recurrent injury at subsequent delivery may result. Despite several studies that have reported risk factors for obstetric anal sphincter injuries, no accurate risk prediction models have been developed. Objective: The purpose of this study was to identify risk factors and develop prediction models for perineal lacerations and obstetric anal sphincter injuries. Study Design: This was a nested case control study within a retrospective cohort of consecutive term vaginal deliveries at 1 tertiary care facility from 2004-2008. Cases were patients with any perineal laceration that had been sustained during vaginal delivery; control subjects had no lacerations of any severity. Secondary analyses investigated obstetric anal sphincter injury (3rd- to 4(th)-degree laceration) vs no obstetric anal sphincter injury (0 to 2(nd)-degree laceration). Baseline characteristics were compared between groups with the use of the chi-square and Student t test. Adjusted odds ratios and 95% confidence intervals were calculated with the use of multivariable logistic regression. 
Prediction models were created and model performance was estimated with receiver-operator characteristic curve analysis. Receiver-operator characteristic curves were validated internally with the use of the bootstrap method to correct for bias within the model. Results: Of the 5569 term vaginal deliveries that were recorded during the study period, complete laceration data were available in 5524 deliveries. There were 3382 perineal lacerations and 249 (4.5%) obstetric anal sphincter injuries. After adjusted analysis, significant predictors for laceration included nulliparity, non-black race, longer second stage, nonsmoking status, higher infant birthweight, and operative delivery. Private health insurance, labor induction, pushing duration, and regional anesthesia were not statistically significant in adjusted analyses. Significant risk factors for obstetric anal sphincter injury were similar to predictors for any laceration; nulliparity and operative vaginal delivery had the highest predictive value. Area under the curve for the predictive ability of the models was 0.70 for overall perineal laceration, and 0.83 for obstetric anal sphincter injury. When limited to primiparous patients, 1996 term vaginal deliveries were recorded. One hundred ninety-two women sustained an obstetric anal sphincter injury; 1796 women did not. After adjusted analysis, significant predictors for laceration included non-black race, age, obesity, and nonsmoking status. In secondary analyses, significant predictors for obstetric anal sphincter injury included non-black race, nonsmoking status, longer duration of pushing, operative vaginal delivery, and infant birthweight. Area under the curve for the predictive ability of the models was 0.60 for any laceration and 0.77 for obstetric anal sphincter injury. Conclusions: Significant risk factors for sustaining any laceration and obstetric anal sphincter injury during vaginal deliveries were identified. These results will help identify clinically at-risk patients and assist providers in counseling patients about modifications to decrease these risks. abstract_id: PUBMED:30659578 Operator experience affects the risk of obstetric anal sphincter injury in vacuum extraction deliveries. Introduction: Delivery by vacuum extraction is a major risk factor for obstetric anal sphincter injury. The aim of this study was to assess risk factors for obstetric anal sphincter injury in vacuum extraction in nulliparous women, specifically operator-related factors. A secondary aim was to assess other complications of vacuum extraction that are dependent on operator experience. Material And Methods: A historical cohort study of nulliparous women with a live single fetus ≥34 weeks, delivered by vacuum extraction at a teaching hospital in Sweden in 1 year (2013), using data from medical records. Risk of obstetric anal sphincter injury was assessed for obstetricians (reference), gynecologists, and residents, and adjusted for maternal, fetal, procedure-related, and operator-related covariates using unconditional logistic regression. Results are presented as prevalence and crude and adjusted odds ratio (aOR) with 95% CI. Results: In total, 323 nulliparous women delivered by vacuum extraction were included. Obstetric anal sphincter injury occurred in 57 (17.6%) women. Fifteen (11.5%) obstetric anal sphincter injuries occurred in vacuum extractions performed by obstetricians, 10 (13.5%) by gynecologists (aOR 1.84, 95% CI 0.72-4.70), and 32 (26.9%) by residents (aOR 5.13, 95% CI 2.20-11.95). 
Maternal height ≤155 cm (aOR 4.63, 95% CI 1.35-15.9) and conversion to forceps (aOR 19.4, 95% CI 1.50-252) increased the risk of obstetric anal sphincter injury. Operator gender, night shift work, or being a frequent operator did not affect the risk of obstetric anal sphincter injury. Postpartum hemorrhage and fetal complications did not differ between operator categories. Conclusions: The adjusted risk of obstetric anal sphincter injury in nulliparous women was five times higher in vacuum extractions performed by residents compared with those performed by obstetricians. Vacuum extractions performed by gynecologists did not carry an increased risk of obstetric anal sphincter injury. Experience in years of training, rather than frequency of the procedure, seemed to have the highest impact on reducing obstetric anal sphincter injury in vacuum extractions, which indicates a need for increased training and supervision. Answer: The risk of obstetric anal sphincter injuries (OASIS) is not necessarily increased in vaginal twin deliveries compared to singleton deliveries. Several studies have investigated this risk with varying results: - One retrospective study found that despite a higher rate of intrapartum interventions in twin pregnancies, the rate of sphincter injuries was actually lower in twins versus singleton deliveries, mainly due to a lower gestational age at delivery (PUBMED:26135763). - Another retrospective review also reported a low overall rate of OASIS in twins and identified instrumental delivery, especially by forceps, as a risk factor for OASIS in twin deliveries (PUBMED:26423259). - A comparative study concluded that women delivering twins vaginally were not at a significantly higher risk of sustaining OASIS compared to those delivering singletons (PUBMED:30187969). - A study over a 10-year period found that sphincter injury occurred at a lower rate in vaginal twin pregnancies than in singletons, with no twin-specific risk factors identified (PUBMED:31170749). These findings suggest that while certain factors associated with twin deliveries, such as instrumental delivery, can increase the risk of OASIS, the overall risk is not necessarily higher for twin deliveries when compared to singleton deliveries. It is important for healthcare providers to consider individual risk factors and circumstances when counseling patients about the mode of delivery.
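Note: the odds ratios quoted in the abstracts above come from 2x2 contingency-table or logistic-regression analyses of delivery outcomes. The short Python sketch below shows how a crude (unadjusted) odds ratio and its Wald 95% confidence interval are typically computed; the counts and variable names are hypothetical illustrations, not data from the cited studies, and the adjusted odds ratios reported in the abstracts additionally control for covariates in a regression model.

import math

def crude_odds_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    # Build the 2x2 table: a/b = events/non-events among exposed, c/d = among unexposed
    a = exposed_events
    b = exposed_total - exposed_events
    c = unexposed_events
    d = unexposed_total - unexposed_events
    odds_ratio = (a * d) / (b * c)
    # Wald 95% confidence interval computed on the log-odds scale
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    return odds_ratio, (lower, upper)

# Hypothetical counts: sphincter injury after instrumental vs. spontaneous vaginal delivery
print(crude_odds_ratio(exposed_events=30, exposed_total=500,
                       unexposed_events=55, unexposed_total=2500))

With these hypothetical counts the function returns an odds ratio of roughly 2.8 with a confidence interval of about 1.8 to 4.5.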
Instruction: Are paediatric medicines risk factors for dental caries and dental erosion? Abstracts: abstract_id: PUBMED:20426261 Are paediatric medicines risk factors for dental caries and dental erosion? The Objective: To assess in vitro the cariogenic and erosive potentials of Brazilian liquid oral paediatric medicines. Setting: Twenty-three paediatric medicines available on the Brazilian market were evaluated. The sample consisted of antihistamines, antitussives, bronchodilators and mucolytics. Main Outcome Measures: Duplicates of each bottle were analyzed for sugar concentration using normal-phase- high-performance liquid chromatography (HPLC). Quantification of sugars and sorbitol was calculated using the peak heights of commercial standards as references. pH measurements were determined using a digital pH meter. Titratable acidity was assessed by diluting three aliquots of each medicine, and increments of 0.1N NaOH were titrated until neutrality was reached. Viscosity was determined using a viscosemeter. Results: Sugars were detected in 56.5% of the medicines. Sucrose was identified in 10 medicines, with concentrations ranging from 11.36 g% to 85.99 g%. Glucose was detected in five medicines, with concentrations varying from 4.64 g% to 40.19 g%; fructose in six medicines, with concentrations ranging from 5.09 g% to 46.71 g%. Twelve medicines exhibited sorbitol, with values ranging from 5.39 g% to 46.09 g%. Most tested medicines were acidic, with pH values ranging between 2.6 and 5.7. Only two medicines (Fluimucil and Polaramine) presented pH 6.4 and 6.0, respectively. Titratable acidity mean values ranged between 0.28 and 16.33 mL. Viscosity values varied between 2.8 cP and 412.3 cP. Conclusions: Many paediatric medicines showed high sugar concentration, pH values below the critical value and high titratable acidity values, all of which increase the medicines' cariogenic and erosive potentials. abstract_id: PUBMED:11924363 Salivary tests for caries and erosion risk factors Apart from a health questionnaire and oral health examination with a clinical history, it is also important to inquire after oral and salivary complaints. Supplementary it is useful to assess dietary habits in patients at risk and to perform a number of simple salivary tests. Interpretation of all data obtained through these investigations, will enable the identification of a number of caries and erosion risk factors. On the basis of these risk factors a preventive treatment plan can be made for the patient. These data provide the possibility to evaluate the success of the preventive treatment plan after a few years and to make adjustments, if necessary. abstract_id: PUBMED:35655029 Prevalence and risk factors of erosive tooth wear among young adults in the Singapore military. Objectives: Erosive tooth wear (ETW) is defined as the irreversible chemo-mechanical loss of dental hard tissues without plaque or bacteria and has been a growing phenomenon in the general population worldwide. The aim of this study was to determine the prevalence of ETW and the associated risk factors among young military personnel in Singapore. Materials And Methods: This cross-sectional study consisted of a questionnaire and clinical examination of military personnel between 18 and 25 years. The questionnaire collected information on demographics, medical and dental history, lifestyle and dietary habits. Tooth wear was clinically evaluated using the Basic Erosive Wear Examination (BEWE) index. 
The participants were profiled into four risk categories (none, low, moderate, severe) based on the sum score. Multivariable logistic regression analysis was performed to determine the risk factors associated with the presence of ETW. Results: Out of 1296 participants evaluated, 21.8% of them had ETW. The anterior sextants were more affected by erosion than posterior sextants for both arches. The risk factors significantly associated with ETW were GERD (OR = 2.83, p = 0.041), presence of self-reported TMD symptoms (OR = 2.51, p < 0.001), acidic food (OR = 2.03, p < 0.001) and drinks (OR = 3.07, p < 0.001), high caries risk (OR = 4.59, p < 0.001) and toothbrush stiffness (OR = 4.67, p < 0.001). Conclusion: ETW is prevalent among the young military population, with over half of the population at a risk of ETW. Gastric reflux disease, TMD, acidic diet, dental caries susceptibility and the use of stiff toothbrushes were identified as significant risk factors. Clinical Relevance: A large proportion of young military personnel exhibited mild to moderate risk of ETW. Dental professionals can identify risk-susceptible individuals at a young age and educated them with preventive measures and dietary interventions to prevent the progression of the condition. abstract_id: PUBMED:30336938 Prevalence of risk factors for oral diseases in obese patients referred for bariatric surgery. Background: Obesity is a risk factor for several chronic diseases, and scientific evidence suggests an association between obesity and oral diseases. In this study, the authors estimated the prevalence of risk factors for caries, dental erosion, and periodontal disease in a group of obese patients referred for bariatric surgery. Methods: The authors invited obese patients who were referred for bariatric surgery at a private center for treatment of obesity to participate in this study. The patients answered a questionnaire about their sociodemographic data, general and oral health histories, dietary habits, and oral health care behavior. Results: The authors evaluated a total of 255 participants; 200 (78.4%) were women, the median (interquartile range) age was 36.0 (30.0-43.0) years, and the median (interquartile range) body mass index was 39.7 (37.6-43.5) kilograms per square meter. The authors observed a high prevalence of risk factors for dental erosion; 78.4% of the participants consumed acidic foods daily, and 92.2% consumed acidic beverages daily. With respect to risk factors for caries, 38.5% of the participants reported high sucrose intake, and 59.6% preferred sweet beverages and foods with sucrose. Almost 13% of the participants reported having or having had dental mobility, and 59.6% reported having or having had gingival bleeding, suggesting a history of periodontal disease. Conclusions: Obese patients referred for bariatric surgery are exposed to risks of experiencing oral diseases and should be evaluated by a dentist. Practical Implications: Knowing the risk factors for oral diseases in obese patients is important in raising awareness about these risk factors, as well as to prevent complications in the oral cavity. abstract_id: PUBMED:20339617 Sucrose concentration and pH in liquid oral pediatric medicines of long-term use for children. Objectives: To determine the pH and sucrose concentrations (SC) of pediatric liquid drugs of long-term use by children in order to evaluate the potential risk for dental caries and dental erosion. 
Methods: After assessing the pH, we analyzed 71 aqueous medicine samples for sucrose by the Lane-Eynon general volumetric method. The pH and SC values (mean +/- standard deviation (SD) were calculated according to therapeutic action. Results: The highest and the lowest SC values (mean +/- SD) were found in respiratory (37.75% +/- 17.23%) and endocrine drugs (11.97% +/- 15.16%) (p < 0.01). The values for medicines prescribed for daily ingestion were 47.15% +/- 9.57%, whereas for twice daily and three or four times a day, these numbers were 24.42% +/- 18.03% and 34.43% +/- 14.83%, respectively (p < 0.01). The SC (mean +/- SD)) values were higher in syrups (36.32% +/- 17.62%) than in other formulations (p > 0.05). The overall pH (mean +/- SD) was 5.89 +/- 2.02 (range 2.3 +/- 0.01 to 10.6 +/- 0.02). In products with acidic pH, the SC (mean +/- SD) was significantly lower (22.14% +/- 15.72%) than in nonacidic medicines (39.22% +/- 15.82%) (p < 0.001). Conclusions: It can be concluded that the pediatric medicines studied have a high SC and low pH, which vary according to therapeutic class, daily dose, and brand. Caution about dental caries, dental erosion, and systemic diseases such as diabetes mellitus is warranted when these medicines are ingested frequently. abstract_id: PUBMED:27286713 Erosive Tooth Wear and Related Risk Factors in 8- and 14-Year-Old Greek Children. The aim of the present study was to investigate the erosive tooth wear of primary and permanent teeth and its association with related risk factors. Two groups of Greek children aged 8 (n = 329) and 14 years (n = 263) were examined in the classroom using the Basic Erosive Wear Examination (BEWE) index. Data concerning risk factors were collected using questionnaires. Dental caries (DMFS/dmfs) was also recorded. The data were analyzed using the t test, one-way ANOVA, multiple regression analysis, Fisher's exact test, and the χ2 test. In the 8-year-olds, the primary teeth showed a predominantly medium level of wear and the permanent teeth no wear. A majority of the 14-year-olds exhibited low risk levels of wear. The most frequently affected dental surface in both age groups was the occlusal surface of the mandibular posterior teeth. In the 8-year-olds, BEWE scores and the prevalence of wear in the primary teeth was influenced by gender (p = 0.020). In their permanent teeth, soft drink consumption (p < 0.0001) and preference for lemon/vinegar (p = 0.041) significantly affected wear prevalence and BEWE scores, while habitually retaining soft drinks in the mouth influenced wear prevalence (p = 0.008), risk (p = 0.004), and BEWE scores (p = 0.022). In the 14-year-olds, wear prevalence was significantly affected by the consumption of lemon-flavored candies (p = 0.016) and soft drinks (p = 0.050). BEWE scores were significantly affected by gender (p = 0.022) and soft drink consumption (p = 0.030). Gender influenced tooth wear risk in both age groups (p = 0.010 and p = 0.021, respectively). The results of this study indicate that erosive tooth wear differed between primary and permanent teeth and was influenced by gender and dietary factors. abstract_id: PUBMED:19266847 Obesity and oral health: risk factors of obese patients in dental practice For 20 years, on a worldwide scale, the prevalence of obesity has always progressed as well in child as in adult. Obesity is frequently associated with other pathologies: cardiovascular, oncologic, systemic (diabetes) or oral. 
In this situation, the follow-up of obesity and its comorbidity requires a multidisciplinary approach. Obesity and oral health pathologies have risk factors in common, particularly dietary ones. Indeed, it is established that food affects oral health. In the same way, an unbalanced diet raises the risk of overweight or obesity. Owing to these interrelations, the World Health Organization advocates including oral health in the Program for Global Health. This study first presents the risk factors that obesity and oral diseases have in common. It then outlines the risks and the specific considerations involved in the care of obese patients. Finally, a program of oral health prevention and education is proposed. abstract_id: PUBMED:33710526 Congenital and acquired defects in enamel of primary teeth: prevalence, severity and risk factors in Brazilian children. Objective: This cross-sectional study evaluated the prevalence of tooth enamel defects and associated risk factors in the primary dentition. Methods: A sample of 656 schoolchildren (population based), aged 4 years old, from Araraquara-SP (Brazil) was evaluated by two trained examiners according to the following indexes: dental fluorosis (DF), deciduous molar hypomineralization (DMH), non-fluoride related developmental defects of enamel (DDE), tooth erosion and tooth attrition. Structured questionnaires identified socioeconomic condition, medical/dental history, behavior and dietary habits. Data were analyzed using Chi-square tests (p < 0.05). Results: Of the congenital enamel defects, DF was the most prevalent (6.1%, n = 40), followed by DMH (5.6%, n = 37). Of the acquired defects, attrition was the most prevalent (36.9%, n = 242), followed by erosion (2.4%, n = 16). The caries experience was similar between the affected children (29.5%; n = 94) and the total sample (30.6%; n = 201). The etiological factors related to congenital defects were not being exclusively breastfed (p = 0.003) and jaundice at birth (p < 0.001); acquired defects were associated with frequent vomiting episodes (p = 0.037). Conclusion: The general prevalence of enamel defects in the primary dentition in Araraquara was 48.6%. Enamel defects may be associated with health-related factors and current lifestyle. abstract_id: PUBMED:31276748 The use of index teeth vs. full mouth in erosive tooth wear to assess risk factors in the diet: A cross-sectional epidemiological study. Objective: To assess common dietary erosive-tooth-wear (ETW) risk in university students from an exotic-fruit country, comparing index-teeth vs. full-mouth ETW assessment. Methods: A risk-factor questionnaire was administered to 601 subjects aged 18-25 years in Bogotá, Colombia. Trained examiners assessed clinically: ETW (BEWE) on all buccal, occlusal and lingual surfaces and ICDAS caries experience (ICDAS-DMFS). The full-arch and index-teeth (buccal of upper central incisors and occlusal of lower first molars) maximum BEWE scores categorized patients into those with wear (2-3) and those without wear (0-1). These groups were compared in terms of demographic, clinical, dietary and other factors with crude and logistic regression models. Results: Students' mean age was 20.0 ± 1.9 years (77.7% females). Most consumed fruits were erosive/extremely erosive (57%). Prevalence of wear was 73% (full mouth) vs. 19.6% (index teeth). The full-mouth BEWE score correlated significantly, but weakly, with the index-teeth BEWE score (0.31, p < 0.001).
Besides anterior-teeth incisal surfaces, occlusal of lower molars (16%) and buccal of upper central incisors (3.3%) showed highest wear frequency. Straw use or 1 -h waiting for toothbrushing didn't show a protective effect. ETW was significantly associated on index teeth with frequent intakes of dietary acids (≥3 daily-acidic drinks and ≥4 daily-fruit portions) (single-variable-logistic regression: OR 4.41, p = 0.22 and OR 1.60, p = 0.035; multivariable-logistic regression: OR 4.47, p = 0.022 and OR 1.63, p = 0.036 respectively). No significant differences were noticed between groups when the full-mouth maximum score was used. Conclusion: This young cohort showed dietary ETW associated with frequent dietary acids' intakes and grading ETW on index teeth vs. full mouth was a more sensitive measurement method to assess underlying ETW risk factors. The teeth index has promising usefulness for the clinic and epidemiology. Clinical Significance: Using index teeth (buccal of upper central incisors and occlusal of lower first molars) for ETW (BEWE) assessment allowed to show association in young adults between frequent daily exotic fruits/fruit juices dietary-acid consumption and ETW, representing a less time consuming clinical/epidemiological method of ETW measurement than a full mouth examination. abstract_id: PUBMED:35902948 Risk prediction models for erosive wear in preschool-aged children: a prospective study. Background: Despite increasing prevalence, age-specific risk predictive models for erosive tooth wear in preschool-age children have not been developed. Identification of at-risk groups and the timely introduction of behavioural change or treatment will stop the progression of erosive wear in the permanent dentition. This study aimed to identify age-specific risk factors for erosive wear. Distinct risk prediction models for 3-year-old and 5-year-old children were developed. Methods: A prospective cohort study included school-based clinical examinations and parent administered questionnaires for consented 3 and 5-year-old healthy children. Calibrated examiners measured the following health parameters under standardised conditions: erosion, using the Basic Erosive Wear Examination Index (BEWE), caries using the International Caries Detection and Assessment System (ICDAS), plaque and calculus according to the British Association for the Study of Community Dentistry (BASCD) scores, dental traumatic injuries and soft tissue lesions, salivary testing and BMI. Other health conditions were collected via a parent-administered questionnaire that explored oral- and general-health. Non parametric tests were utilised to explore the temporal relation of erosion with, demographic factors, oral hygiene habits, general health and dietary habits. Variables showing significance with a difference in BEWE cumulative score over time were utilised to develop two risk prediction models. The models were evaluated by Receiver Operating Characteristics analysis. Results: Risk factors for the 3-year-old cohort (N = 336) included erosive wear (χ2(1, 92) = 12.829, p < 0.001), district (χ2(5, 92) = 17.032, p = 0.004) and family size (χ2(1, 92) = 4.547, p = 0.033). Risk factors for the 5-year-old cohort (N = 441) also included erosive wear (χ2(1, 144) = 4.768, p = 0.029), gender (χ2(1, 144) = 19.399, p < 0.001), consumption of iced tea (χ2(1, 144) = 8.872, p = 0.003) and dry mouth (χ2(1, 144) = 9.598, p = 0.002). 
Conclusions: Predictive risk factors for 3-year-old children are based on demographic factors and are distinct from those for 5-year-old children based on biological and behavioural factors. Erosive wear is a risk factor for further wear in both age cohorts. Answer: Yes, paediatric medicines are risk factors for dental caries and dental erosion. Studies have shown that many paediatric medicines contain high sugar concentrations, which can contribute to the development of dental caries (PUBMED:20426261; PUBMED:20339617). These medicines often have pH values below the critical value for enamel demineralization and high titratable acidity values, which can increase their erosive potential (PUBMED:20426261). Additionally, the viscosity of these medicines can affect their cariogenic and erosive potentials (PUBMED:20426261). Sucrose, glucose, and fructose are sugars commonly found in paediatric medicines, and their presence in significant amounts can pose a risk for dental health (PUBMED:20426261; PUBMED:20339617). The frequent ingestion of medicines with high sugar content and low pH can warrant caution regarding dental caries, dental erosion, and systemic diseases such as diabetes mellitus (PUBMED:20339617). Furthermore, the prevalence of risk factors for oral diseases, including dental erosion, has been observed in specific populations, such as obese patients referred for bariatric surgery, indicating that these individuals should be evaluated by a dentist due to their exposure to risks of experiencing oral diseases (PUBMED:30336938). In summary, paediatric medicines with high sugar content and acidic pH are risk factors for dental caries and dental erosion, and healthcare providers should be aware of these risks when prescribing such medications to children.
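Note: the in vitro studies summarized above judge a medicine's cariogenic and erosive potential from its fermentable sugar content, its pH relative to the critical value for enamel demineralization, and its titratable acidity (the volume of 0.1 N NaOH needed to reach neutrality). The Python sketch below is a minimal illustration of how such measurements might be screened against thresholds; the cut-off values and the example product are assumptions for illustration only and are not taken from the cited papers.

from dataclasses import dataclass

@dataclass
class Medicine:
    name: str
    sugar_g_per_100ml: float      # total fermentable sugars (sucrose, glucose, fructose)
    ph: float                     # measured with a calibrated pH meter
    titratable_acidity_ml: float  # mL of 0.1 N NaOH required to reach neutrality

# Assumed screening thresholds, for illustration only
CRITICAL_PH = 5.5          # commonly cited enamel demineralization threshold
ACIDITY_CUTOFF_ML = 1.0    # hypothetical cut-off for "high" titratable acidity

def risk_flags(medicine: Medicine) -> dict:
    # Flag cariogenic potential when fermentable sugar is present,
    # and erosive potential when the product is both acidic and strongly buffered
    return {
        "cariogenic_potential": medicine.sugar_g_per_100ml > 0,
        "erosive_potential": (medicine.ph < CRITICAL_PH
                              and medicine.titratable_acidity_ml > ACIDITY_CUTOFF_ML),
    }

# Hypothetical syrup with a profile similar to those reported above
print(risk_flags(Medicine("example syrup", sugar_g_per_100ml=35.0, ph=4.2,
                          titratable_acidity_ml=6.5)))

The sketch combines pH and titratable acidity because a low pH alone does not determine erosive potential; buffering capacity and exposure frequency also matter.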
Instruction: Is there a relationship between uterine activity and the length of the cervix in the second trimester? Abstracts: abstract_id: PUBMED:11147721 Is there a relationship between uterine activity and the length of the cervix in the second trimester? Objective: The purpose of this study was to determine if a relationship could be detected between uterine activity and cervical change in the second trimester. Methods: Ten women with evidence of cervical change and 10 women with no cervical change (controls) were studied between 20 and 28 weeks gestation. Uterine activity was recorded using home uterine activity monitoring units. Cervical assessment was performed using transvaginal ultrasound. Results: There was no statistically significant difference in the contractions per hour in the cervical change group (1.26 +/- 0.38; mean +/- SEM) compared to the controls (1.13 +/- 0.48) (p 0.48; Mann-Whitney U test). There were significant differences in closed endocervical length (p < 0.001) and internal os dilatation (p 0.004), the cervical change group demonstrating a shorter closed endocervical length and greater internal os dilatation. Conclusions: This preliminary study shows no evidence of a relationship between uterine activity and endocervical canal length in the second trimester. abstract_id: PUBMED:35351409 Mid-trimester uterine electromyography in patients with a short cervix. Background: Preterm birth is the largest single cause of infant death in the United States. A cervical length of <2.5 cm, measured in the mid-trimester, has been shown to identify individuals at increased risk. Uterine electromyography is an emerging technology for noninvasively assessing uterine bioelectrical activity. With its ability to characterize nuanced differences in myometrial signals, uterine electromyography assessments during the mid-trimester may provide insight into the mechanisms of cervical shortening. Objective: This study aimed to characterize uterine bioelectrical activity in pregnant individuals with short cervices in the mid-trimester compared with that of pregnant individuals of the same gestational age with normal cervical lengths. Study Design: This is a prospective cohort study of subjects with singleton, nonanomalous pregnancies between 16 weeks and 0 days and 22 weeks and 6 days of gestational age. Subjects with normal cervical length (≥3.0 cm) were compared with subjects with short cervical length (<2.5 cm). The short-cervical-length cohort was further stratified by history of preterm birth. Multichannel uterine electromyography recordings were obtained for ∼60 minutes using proprietary, directional electromyography sensors on the abdomen. Uterine electromyography signals were observed and classified in groups as spikes, short bursts, and bursts. Primary outcomes were relative expression of spike, short-burst, and burst uterine electromyography signals. Subgroup analyses assessed each signal percentage by cervical length, history of preterm birth, and gestational age at delivery. Differences in percentage of uterine electromyography signals according to cervical length were analyzed using nonparametric tests of significance. Results: Of the 28 included subjects, 10 had normal and 18 had short cervical length. There were 9 subjects with short cervical length and a history of preterm birth. 
Spikes were the most commonly recorded signals and were higher in the normal-cervical-length cohort (96.3% [interquartile range, 93.1%-100.0%]) than the short-cervical-length cohort (75.2% [interquartile range, 66.7%-92.0%], P=.001). In contrast, median percentages of short-bursts and bursts were significantly higher in subjects with a short cervical length (17.3% [interquartile range, 13.6%-23.9%] vs 2.5% for normal cervical length [interquartile range, 0%-5.5%], P=.001 and 6.6% [interquartile range, 0%-13.4%] vs 0% for normal cervical length [interquartile range, 0%-2.8%], P=.014, respectively). Within subgroup analyses, cervical length was inversely proportional to percentage of observed short-bursts (P=.013) and bursts (P=.014). Subjects with short cervical length and history of preterm birth had higher burst percentages (12.8% [interquartile range, 9.0%-15.7%]) than those with short cervical length and no history of preterm birth (3.3% [interquartile range, 0%-5.0%], P=.003). Conclusion: Short-burst and burst uterine electromyography signals are observed more frequently in mid-trimester patients with short cervical lengths. This relationship provides insight into abnormal myometrial activation in the mid-trimester and offers a plausible biophysiological link to cervical shortening. abstract_id: PUBMED:26653679 Uterine fibroids at routine second-trimester ultrasound survey and risk of sonographic short cervix. Objective: To determine whether women with sonographically identified uterine fibroids are at higher risk for a short cervix. Methods: This retrospective cohort study evaluated all women with singleton gestations who had a routine second-trimester ultrasound at 17-23 weeks gestational age from 2010 to 2013. When fibroids were noted, their presence, number, location and size were recorded. Exclusion criteria included a history of cervical conization or loop electrosurgical excision procedure (LEEP), uterine anomalies, maternal age greater than 40 years, and a previously placed cerclage. The primary variable of interest was short cervix (<25 mm). Secondary variables of interest included gestational age at delivery, mode of delivery, indication for cesarean, malpresentation, birth weight, and Apgar scores. A multivariable logistic regression analysis was performed. Results: Fibroids were identified in 522/10 314 patients (5.1%). In the final multivariable logistic regression model, short cervix was increased in women with fibroids (OR 2.29, 95% CI: 1.40, 3.74). The number of fibroids did not affect the frequency of short cervix. Fibroids were significantly associated with preterm delivery (<37 weeks), primary cesarean, breech presentation, lower birth weight infants, and lower Apgar scores. Conclusions: Women with uterine fibroids may be at higher risk for a short cervix. Fibroids are also associated with several adverse obstetric and neonatal outcomes. abstract_id: PUBMED:3548692 Oxytocin induced second trimester uterine rupture. A case of oxytocin induced rupture of an unscarred uterus during the second trimester is presented. While rupture of the uterus by oxytocic agents in the second trimester has previously been documented, this case is unusual in that fetal parts were palpable through the cervical os, delaying diagnosis and definitive management. Digital palpation of fetal parts through the cervix does not preclude the diagnosis of fetal extrusion secondary to uterine rupture. 
abstract_id: PUBMED:37654106 First trimester SHARP1 and second-trimester uterine artery Doppler to predict preeclampsia. Objectives: The objective of this study was to identify the predictive value of the first-trimester serum SHARP1 level and the second-trimester uterine artery Doppler in singleton pregnancy for the prediction of preeclampsia. Methods: A prospective study including singleton pregnancy presenting at an antenatal clinic, King Chulalongkorn Memorial Hospital, Department of Obstetrics and Gynecology, Faculty of Medicine, Chulalongkorn University from 2019-March 2020 was conducted. Serum SHARP1 was collected at the gestational age (GA) of 11-13+6 weeks, and transabdominal uterine artery Doppler ultrasound was performed at GA of 18-24 weeks. Serum SHARP1 level and uterine artery pulsatility index (PI) were combined to calculate the predictive value for preeclampsia detection. Results: 288 pregnant women were enrolled in the first trimester, but only 249 participants completed the study. Thirteen patients had preeclampsia (5.2%), which three cases (1.2%) had early-onset preeclampsia. The median serum SHARP1 level in the first trimester of pregnant women with preeclampsia was lower than the normal pregnancy group (1392 pg/ml vs. 1941 pg/ml, p = 0.046). The second-trimester uterine artery PI and prevalence of early diastolic notching were higher in the preeclampsia group than in the normal pregnancy group (p = 0.029 and p = 0.001, respectively). When the first-trimester serum SHARP1 level is combined with the second-trimester uterine artery PI, the sensitivity, specificity, PPV, and NPV for preeclampsia prediction were 84.6%, 47.5%, 8.2%, and 98.3%, respectively. Conclusions: This study demonstrated that serum SHARP1 level in the first trimester combined with the uterine artery PI in the second trimester had good sensitivity to predict preeclampsia. abstract_id: PUBMED:18816491 Transperineal versus transvaginal sonographic cervical length measurement in second- and third-trimester pregnancies. Objective: To investigate the reliability of, and patient satisfaction with, transperineal cervical length measurement during the third trimester of pregnancy and to compare these with measurement during the mid-trimester, using transvaginal sonographic imaging of the cervix as a reference. Methods: Women attending an outpatient clinic for cervical length measurement before 29 weeks or after 35 weeks of gestation were examined by transvaginal and transperineal ultrasound and the results compared. Preference for either method was indicated by both sonographers and patients. Results: Seventy-one patients participated in the study, 23 in the second and 48 in the third trimester of pregnancy. There was failure to obtain a clear image on transperineal ultrasound in 30% of mid-trimester pregnancies, and in 19% of third-trimester cases. Elevation of the patient's hips improved the image in five out of 10 women in whom the scan was repeated following a postural change. Transvaginal cervical length measurements could be obtained in all cases. There was a strong correlation between transvaginal and transperineal measured cervical length (Pearson's correlation coefficient = 0.85). Sonographers preferred transvaginal images of the cervix irrespective of whether they were obtained in the second or third trimester. Transperineal ultrasound was judged as not or mildly painful by most women, but transvaginal ultrasound was preferred. 
Conclusions: Transvaginal ultrasound is the least painful, most feasible, and probably most accurate method with which to measure cervical length in the third trimester of pregnancy. Transperineal ultrasound is a feasible alternative, although its application in clinical practice is restricted by the need for an experienced sonographer. abstract_id: PUBMED:16390792 Uterine contractions in asymptomatic pregnant women with a short cervix on ultrasound. Objective: To estimate the incidence of uterine contractions in asymptomatic pregnant women with a short cervix on transvaginal ultrasound. Methods: Asymptomatic women with a short cervix on transvaginal ultrasound between 14 and 23(6/7) weeks of pregnancy were instructed to undergo uterine monitoring immediately. Women without available tracings were excluded. Women with and without contractions were compared with regard to demographics, risk factors, and outcomes. Results: One hundred and one women with a short cervix and available tracings were identified. Eighty-six (85%) had contractions and 15 (15%) did not have contractions immediately after identification of the short cervix. The median number of contractions per hour per woman was 4 (range 0-31). These two groups did not differ in demographics, risk factors, or outcomes, except for the fact that 33% of women with contractions versus 73% of women without uterine contractions had a prior second trimester loss (p = 0.004). Conclusions: In this study, 85% percent of pregnant women with a short cervix on transvaginal ultrasound between 14 and 24 weeks of pregnancy are having asymptomatic uterine contractions. This information is important for further investigation of the short cervix and preterm delivery. abstract_id: PUBMED:2277401 Translabial ultrasound of the third-trimester uterine cervix. Correlation with digital examination. This prospective blinded study compared translabial ultrasound (TLU) and digital palpation of the cervix among women whose cervix could not be visualized adequately or at all with transabdominal ultrasound during the third trimester of pregnancy. The cervical canal and internal cervical os were adequately visualized during 109 of 113 TLUs (96%). The TLU was well tolerated by the patients; only two patients refused the TLU pending discussion of the technique with their obstetrician. Among 43 examinations of patients without clinical cervical effacement, TLU showed a cervical length of 2.5 to 4.2 cm (mean, 2.9 cm). An excellent correlation (R = 0.90) was noted between cervical length on TLU and percentage of cervical effacement on digital examination. The correlation between amount of cervical dilatation on TLU and on digital examination was not high (R = 0.58). We conclude that the measurements of cervical length on TLU correlate well with the amount of cervical effacement on digital examination and that TLU provides a rapid and well-tolerated alternative technique to visualize the third-trimester uterine cervix. abstract_id: PUBMED:26550395 Uterine artery embolization for hemorrhage resulting from second-trimester abortion in women with scarred uterus: report of two cases. This study was conducted to investigate the effect of uterine artery embolization for the treatment of hemorrhage following second-trimester labor induction for women with scarred uterus. Two cases of second-trimester abortion were retrospectively reviewed, both of which had a history of caesarean delivery and were complicated by gestational anemia. 
One was at 18 weeks' gestation and presented with persistent vaginal bleeding for two months resulting in relatively large area of blood clot in uterine cavity. The other was at 25 weeks' gestation with partial hydatidiform mole and presented with intermittent vaginal bleeding. Both patients presented with continuous and heavy vaginal bleeding after oral administration of mifepristone for labor induction, with one cervix left unopened, while the other cervix 3 cm left dilatation, yet felt obstructed by pregnant tissue. Both patients were immediately treated with uterine artery embolization (UAE). Both patients presented with alleviated hemorrhage and regular uterine contraction after UAE, followed by smooth induction of labor. No hemorrhage occurred since then during the follow-up. The results suggest that UAE is safe and effective for the treatment of massive hemorrhage of second-trimester abortion in women with scarred uterus. It can reduce time period of labor induction and alleviate hemorrhage, which not only rescues patients but also avoids cesarean sections and retains fertility for the pregnant. abstract_id: PUBMED:7168176 Priming of the cervix with prostaglandins in the first and second trimester Sulprostone was used for preoperative cervical priming in the first and second trimester in 154 patients with missed abortion, hydatidiform mole, and termination for medical indications. In 61 patients 50 mcg sulprostone was administered into the cervix ("intramural"); in 93 patients sulprostone was administered preoperatively by the intragluteal route (4 injections of 250 micrograms at 4 hour intervals). After intramural injection an abortion score of at least 20 according to Csapo (preoperative) was obtained in 78% of patients, after intragluteal administration in 98%. The intragluteal route of administration of the prostaglandin derivative sulprostone for cervical priming in the first and second trimester is much simpler than the intramural route. The injections into the cervix are painful, the desired effect is not as great, and the incidence of side effects is higher. Cervical priming in the first trimester is also recommended because it avoids the severe complications that can occur during curettage. Answer: Based on the provided abstracts, there is evidence suggesting that there may be a relationship between uterine activity and the length of the cervix in the second trimester. One study found no statistically significant difference in contractions per hour between women with cervical change and those without in the second trimester, indicating no evidence of a relationship between uterine activity and endocervical canal length during this period (PUBMED:11147721). However, another study reported that short-burst and burst uterine electromyography signals, which reflect uterine bioelectrical activity, were observed more frequently in mid-trimester patients with short cervical lengths, suggesting a possible biophysiological link to cervical shortening (PUBMED:35351409). Additionally, women with uterine fibroids were found to be at higher risk for a short cervix, which could imply an association between uterine structural abnormalities and cervical length (PUBMED:26653679). In contrast, another study reported that 85% of pregnant women with a short cervix on transvaginal ultrasound between 14 and 24 weeks of pregnancy were having asymptomatic uterine contractions, which could indicate a relationship between uterine contractions and cervical shortening (PUBMED:16390792). 
Overall, while one study did not find a direct relationship between uterine activity as measured by contractions and cervical length (PUBMED:11147721), other studies suggest that there may be a connection between uterine bioelectrical activity or structural abnormalities and the length of the cervix in the second trimester (PUBMED:35351409, PUBMED:26653679, PUBMED:16390792). Further research is needed to clarify the nature of this relationship.
Instruction: Can a novel rectangular footplate provide higher resistance to subsidence than circular footplates? Abstracts: abstract_id: PUBMED:22718226 Can a novel rectangular footplate provide higher resistance to subsidence than circular footplates? An ex vivo biomechanical study. Study Design: Ex vivo biomechanical evaluation using cadaveric vertebral bodies. Objective: To compare the subsidence characteristics of a novel rectangular footplate design with a conventional circular footplate design. Summary Of Background Data: Cage subsidence is a postoperative complication after reconstruction of corpectomy defects in the thoracolumbar spine and depends on factors, such as bone quality, adjunctive fixation, and the relationship between the footplate on the cage and the vertebral body endplate. Methods: Twenty-four cadaveric vertebrae (T12-L5) were disarticulated, potted in a commercial resin, loaded with either a circular or a rectangular footplate, and tested in a servo hydraulic testing machine. Twelve vertebral bodies were loaded with a circular footplate, and after subsidence the same vertebral bodies were loaded with a rectangular footplate. The second set of 12 vertebral bodies was loaded with a rectangular footplate only. Force-displacement curves were developed for the 3 groups, and the ultimate load to failure and stiffness values were calculated. Results: The ultimate load to failure with the circular footplate was 1310 N (SD, 482). The ultimate load to failure with a rectangular footplate with a central defect and without a central defect was 1636 N (SD, 513) and 2481 N (SD, 1191), respectively. The stiffness of the constructs with circular footplate was 473 N/mm (SD, 205). The stiffness of the constructs with a rectangular footplate with a central defect and without a central defect was 754 N/mm (SD, 217) and 1054 N/mm (SD, 329), respectively. Conclusion: A rectangular footplate design is more resistant to subsidence than a circular footplate design in an ex vivo biomechanical model. The new design had higher load to failure even in the presence of a central defect. These findings suggest that rectangular footplates may provide better subsidence resistance when used to reconstruct defects after thoracolumbar corpectomy. abstract_id: PUBMED:35141636 Differences in surgical outcome after anterior corpectomy and reconstruction with an expandable cage with rectangular footplates between thoracolumbar and lumbar osteoporotic vertebral fracture. Background: Anterior and posterior spinal fixation (APSF) can provide rigid structural anterior column support in patients with osteoporotic vertebral fracture (OVF). A new rectangular footplate designed based on biomechanical studies of endplates provides better resistance to subsidence. However, differences in characteristics exist between the thoracolumbar and lower lumbar spine. The purpose of this study was to evaluate the surgical outcomes following APSF using an expandable cage with rectangular footplates in the thoracolumbar/lumbar region. Methods: Consecutive patients who underwent APSF for OVF at multiple centers were retrospectively reviewed. Clinical and radiographic evaluations were performed by dividing the patients into thoracolumbar (TL, T10-L2) and lumbar (L, L3-L5) groups. Surgical indications were incomplete neurologic deficit or intractable back pain with segmental spinal instability. Surgical outcomes including the Japanese Orthopaedic Association (JOA) score and reoperation rate were compared between TL and L groups. 
Results: Sixty-nine patients were followed-up for more than 12 months and analyzed. Operative intervention was required for 35 patients in the TL group and 34 patients in the L group. Mean ages in the TL and L groups were 76.5 years and 75.1 years, respectively. Intra-vertebral instability was more frequent in the TL group (p<0.001). Screw fixation range was significantly longer in the TL group (p=0.012). The rate of cage subsidence did not differ significantly between the TL group (46%) and L group (44%). Reoperation rate tended to be higher in the TL group (p=0.095). Improvement ratio of JOA score was significantly better in the L group (60%) than in the TL group (46.9%, p=0.029). Conclusion: APSF using an expandable cage was effective to treat OVF at both lumbar and thoracolumbar levels. However, the improvement ratio of the JOA score was better in the L group than in the TL group. abstract_id: PUBMED:34884365 Comparison of Outcomes between Minimally Invasive Lateral Approach Vertebral Reconstruction Using a Rectangular Footplate Cage and Conventional Procedure Using a Cylindrical Footplate Cage for Osteoporotic Vertebral Fracture. The aim of the current study was to compare outcomes between lateral access vertebral reconstruction (LAVR) using a rectangular footplate cage and the conventional procedure using a cylindrical footplate cage in patients with osteoporotic vertebral fracture (OVF). We included 46 patients who underwent anterior-posterior combined surgery for OVF: 24 patients underwent LAVR (Group L) and 22 underwent the conventional procedure (Group C). Preoperative, postoperative, and 1- and 2-year follow-up X-ray images were used to measure local lordotic angle, correction loss, and cage subsidence (>2 mm in vertebral endplate depression). In anterior surgery, the operation time was significantly shorter (183 vs. 248 min, p < 0.001) and the blood loss was significantly less (148 vs. 406 mL, p = 0.01) in Group L than in Group C. In Group C, two patients had anterior instrumentation failure. Correction loss was significantly smaller in Group L than in Group C (1.9° vs. 4.9° at 1 year, p = 0.02; 2.5° vs. 6.5° at 2 years, p = 0.04, respectively). Cage subsidence was significantly less in Group L than in Group C (29% vs. 80%, p < 0.001). LAVR using a rectangular footplate cage is an effective treatment for OVF to minimize surgical invasiveness and postoperative correction loss. abstract_id: PUBMED:24831343 Corpectomy cage subsidence with rectangular versus round endcaps. Corpectomy cages with rectangular endcaps utilize the stronger peripheral part of the endplate, potentially decreasing subsidence risk. The authors evaluated cage subsidence during cyclic biomechanical testing, comparing rectangular versus round endcaps. Fourteen cadaveric spinal segments (T12-L2) were dissected and potted at T12 and L2, then assigned to a rectangular (n=7) or round (n=7) endcap group. An L1 corpectomy was performed and under uniform conditions a cage/plate construct was cyclically tested in a servo-hydraulic frame with increasing load magnitude. Testing was terminated if the test machine actuator displacement exceeded 6mm, or the specimen completed cyclic loading at 2400 N. 
Number of cycles, compressive force and force-cycles product at test completion were all greater in the rectangular endcap group compared with the round endcap group (cycles: 3027 versus 2092 cycles; force: 1943 N versus 1533 N; force-cycles product: 6162kN·cycles versus 3973 kN·cycles), however these differences were not statistically significant (p ⩾ 0.076). After normalizing for individual specimen bone mineral density, the same measures increased to a greater extent with the rectangular endcaps (cycles: 3014 versus 1855 cycles; force: 1944 N versus 1444 N; force-cycles product: 6040 kN·cycles versus 2980 kN·cycles), and all differences were significant (p⩽0.030). The rectangular endcap expandable corpectomy cage displayed increased resistance to subsidence over the round endcap cage under cyclic loading as demonstrated by the larger number of cycles, maximum load and force-cycles product at test completion. This suggests rectangular endcaps will be less susceptible to subsidence than the round endcap design. abstract_id: PUBMED:28009380 Algorithms for Fresnel Diffraction at Rectangular and Circular Apertures. This paper summarizes the theory of Fresnel diffraction by plane rectangular and circular apertures with a view toward numerical computations. Approximations found in the earlier literature, and now obsolete, have been eliminated and replaced by algorithms suitable for use on a personal computer. abstract_id: PUBMED:31841720 Minimally Invasive Lateral Retropleural Approach to the Thoracic Spine for Salvage of a Subsided Expandable Interbody Cage. Background: Cylindrical expandable cages are commonly used as interbody grafts after cervical, thoracic, or lumbar corpectomy in patients with osteomyelitis. Unfortunately, there is a high incidence of hardware subsidence due to small-diameter footplates. Newer expandable intervertebral cages with large rectangular endcaps use the anatomic strength of the vertebral epiphyseal ring to prevent subsidence. Case Description: A 67-year-old man with medically refractory thoracic osteomyelitis and discitis presented to our service for further management of debilitating back pain secondary to a persistent infection and associated progressive spinal kyphotic deformity. He underwent a transpedicular T9-10 corpectomy, placement of an expandable interbody cage, and posterior instrumented spinal fusion from T7 to T12. On postoperative day 2, upright thoracic radiographs demonstrated cage subsidence of >50% into the T8 vertebral body. The patient was returned to the operating room for hardware revision and placement of an expandable intervertebral cage with rectangular endcaps through a minimally invasive lateral retropleural approach to the thoracic spine. The patient tolerated the procedure well, and no evidence of subsidence occurred after the revision after 2 years of follow-up. Conclusions: Expandable intervertebral cages with rectangular endcaps can be used to prevent and/or correct preexisting cage subsidence in patients in need of anterior column instrumentation, especially in those with bone-weakening pathologies. Prospective studies should be entertained to evaluate subsidence rates in cages with cylindrical versus rectangular endcaps. abstract_id: PUBMED:9529227 Footplates of the medial crura. The purpose of this combined prospective and retrospective study was to review the abnormalities of the footplates of the medial crura, their surgical correction, and the dynamic changes that result from footplate alteration. 
Prospectively, measurements of 40 footplates were obtained during 20 consecutive primary rhinoplasties. The distance between the footplates at their most posterocaudal position was measured, along with the thickness, length, and width of the footplates. The shape of the nostrils was also observed and correlated to the form of the footplates. The distance between the footplates ranged from 7.5 to 15 mm, the average being 11.4 mm. The length of the footplates ranged from 4 to 7.5 mm, the average being 5.81 mm. The thickness of the footplates averaged 1.06 mm, ranging from 0.80 to 1.5 mm. The width of the footplates ranged from 2.5 to 7.0 mm, averaging 4.48 mm. In a retrospective review of 295 consecutive rhinoplasties, footplates were altered in 76 cases (25.8 percent). Of these cases, 29 procedures (9.8 percent) were performed to narrow the columella base and to advance the subnasale: on 24 patients (8.1 percent), the goal of this maneuver was to narrow the columella base only; on 5 patients (1.7 percent), the operation was conducted to aid in increasing the tip projection, provide a better foundation for the tip, advance the subnasale caudally, and narrow the alar base. Asymmetry of the columella was corrected in 16 patients (5.4 percent), and footplates were resected primarily to reduce the tip projection in 2 patients (0.7 percent). A detailed analysis of the nasal base will dictate one of the following courses pertaining to footplate alteration. If the patient exhibits an overprojected tip and divergent footplates, the lateral portion of the footplates will be resected partially, then approximated. If the tip is underprojected or has normal projection, the divergent footplates will be approximated without resection. Should the subnasale and the base of the columella be protruding, the soft tissue between the footplates will be removed to avoid excess fullness in this site as a result of the approximation of the footplate. However, when the footplates are divergent, the columella base and nasal spine area are often retracted, setting an auspicious stage for approximation of the footplates without having to excise the soft tissue. This maneuver not only narrows the columella base, it also advances it caudally. Longstanding caudal deviation of the septum may also create asymmetry of the footplates, which will not respond to mere repositioning of the septum, and often requires repositioning of the footplates with mobilization and fixation to the contralateral footplates. abstract_id: PUBMED:8892566 Prosthesis on a mobilized stapes footplate. Managing a mobilized footplate in stapedectomy surgery can be challenging. Between 1963 and 1992, 145 footplates were inadvertently mobilized during otosclerosis surgery. After a vein graft, a 4.0-mm Robinson prosthesis was placed on all footplates, making no attempt to remove the footplate. There were 73 thin, blue footplates and 72 thick, white footplates. Hearing results in the thin, blue footplate group was 97% successful and 100% satisfactory at 3 years. No footplate refixed. In the thick, white group, hearing was 60% successful and 72% satisfactory at 6 months. Footplate refixation was found at revision in all but one unsuccessful case. After revision, the thick, white group had 79% successful and 89% satisfactory hearing results at 3 years. No patient in either group was worse. We conclude that placing a vein graft and a Robinson prosthesis is a safe and effective technique for a mobilized footplate. 
If the footplate is thin and blue, there is little or no risk of refixation. If the footplate is thick and white, approximately 30% will require revision. abstract_id: PUBMED:28216750 Early results of thoracolumbar burst fracture treatment using selective corpectomy and rectangular cage reconstruction. Background: Subsidence and late fusion are commonly observed in anterior subtotal corpectomy and reconstruction for treating thoracolumbar burst fractures. The subsidence rate of this surgical method was reported to range from 19.6% to 75% in the literature, which would cause treatment failure. Thus, improvements in the anterior surgical technique should be studied to reduce these complications. Materials And Methods: 130 patients with thoracolumbar burst fractures treated by minimal corpectomy, decompression and U cage between January 2009 and December 2010 were included in this study. The hospital Ethical Committee approved the protocols. The American Spinal Injury Association (ASIA) scale, visual analog scales, and Oswestry Disability Index (ODI) scores were used for clinical evaluation. The local kyphosis angle, vertebral height (one level above the fractured vertebra to one level below), canal stenosis, and fusion status were used to assess radiological outcome. All complications and demographic data such as number of male/female patients, average age, mode of trauma, burst level involved, mean surgery time and blood loss were reported. Results: 120 patients were followed up for 24 months. Most patients had improvement of at least 1 ASIA grade, and all experienced pain reduction. The mean ODI score steadily decreased after the surgery (P < 0.01). Approximately 83.3% of patients achieved solid fusion at 3 months, and this reached 98.3% at 6 months. The kyphosis angle and radiographic height were corrected significantly after the surgery, with a nonsignificant loss of correction at 24 months (P > 0.05). The average canal stenosis index was increased from 39% to 99% after surgery. No cage subsidence or implant failure was observed. Conclusions: The clinical outcomes described here suggest that the selective corpectomy and rectangular cage reconstruction can effectively promote solid fusion and eliminate complications related to subsidence or implant failure.
Comparison of incidence of footplate complications showed no statistical significance between group IA and IB and statistical significance between group IIA and IIB. Conclusion: The footplate color is important in the choice of the type of perforator. Answer: Yes, a novel rectangular footplate can provide higher resistance to subsidence than circular footplates. This is supported by several studies that have compared the subsidence characteristics of rectangular and circular footplate designs. One study found that the ultimate load to failure was higher for rectangular footplates both with and without a central defect, compared to circular footplates. The stiffness of the constructs with rectangular footplates was also higher than those with circular footplates, suggesting that rectangular footplates may offer better resistance to subsidence when reconstructing defects after thoracolumbar corpectomy (PUBMED:22718226). Another study showed that using a rectangular footplate cage in minimally invasive lateral access vertebral reconstruction resulted in significantly less cage subsidence compared to a conventional procedure using a cylindrical footplate cage (PUBMED:34884365). Additionally, a study comparing corpectomy cages with rectangular versus round endcaps found that the rectangular endcap cage displayed increased resistance to subsidence under cyclic loading, suggesting that rectangular endcaps are less susceptible to subsidence than round endcap designs (PUBMED:24831343). These findings collectively indicate that rectangular footplates can indeed provide higher resistance to subsidence than circular footplates.
Instruction: Can preoperative urodynamic examination allow us to predict the risk of incontinence after radical prostatectomy? Abstracts: abstract_id: PUBMED:10370953 Can preoperative urodynamic examination allow us to predict the risk of incontinence after radical prostatectomy? Objectives: To verify whether bladder dysfunction detected by urodynamic studies prior to radical prostatectomy can predict postoperative continence status. Material And Methods: Twenty patients diagnosed with prostate cancer had multichannel subtracted filling and voiding videocystometry before undergoing radical retropubic prostatectomy. Postoperatively, all patients had periodic clinical assessment of continence status. Results: On preoperative filling cystometry, detrusor instability with a maximal detrusor pressure greater than 15 cm H2O was demonstrated in 12/20 patients (60%). Postoperatively, 11/20 patients (55%) were continent, 4 (20%) had mild stress incontinence and 5 (25%) complained of episodic urge incontinence. However, only 5 of the 12 patients with preoperatively diagnosed detrusor instability manifested clinical urge incontinence after surgery (positive predictive value = 41.6%). Conclusion: The incidence of preoperative detrusor instability in our series was high, but little correlation was found between this finding and postoperative incontinence. abstract_id: PUBMED:7801419 Preoperative urodynamic evaluation: does it predict the degree of urinary continence after radical retropubic prostatectomy? Preoperative evaluation of voiding patterns and detailed urodynamic evaluation before radical retropubic prostatectomy was carried out in an attempt to identify patients at higher risk of developing urinary incontinence postoperatively. Ninety-two men, mean age 64 years (range 54-72), who completed 1 year of follow-up after radical retropubic prostatectomy for clinically localized adenocarcinoma of the prostate were included in the study. According to the preoperative urodynamic findings, patients were divided into two main groups: group 1 (n = 64) patients with normal urodynamic findings, and group 2 (n = 28) patients with abnormal urodynamic findings. The latter group was further subdivided according to the abnormality: detrusor instability (n = 12), weak sphincter mechanism (n = 9), and detrusor and sphincter instability (n = 7). The degree and incidence of urinary incontinence were evaluated in both groups at the 1-year follow-up visit. There was a substantial difference in the incidence of urinary incontinence between the two main groups with only 2 patients with incontinence in group 1 (3%) versus 11 patients in group 2 (39%). In addition, the incidence of incontinence in group 2 differed depending on the type of abnormality: the lower incidence occurred in patients with detrusor instability (17%) and the higher incidence in patients with both detrusor and sphincter instability (71%). Identification of sphincteric and bladder dysfunction preoperatively may indicate a high risk of urinary incontinence after radical prostatectomy. abstract_id: PUBMED:26507516 Effect of preoperative urodynamic detrusor overactivity on post-prostatectomy incontinence: a systematic review and meta-analysis. Purpose: To investigate whether preoperative urodynamic detrusor overactivity (DO) contributes to post-prostatectomy incontinence (PPI). Methods: We systematically searched the online PubMed, Embase, and Cochrane Library databases spanning the period of January 1989 to December 2014. 
Results: A total of nine articles met the eligibility criteria for this systematic review. The eligible studies included a total of 457 patients with a median number of 58 patients per study (range 17-92). Of the nine studies, five conducted open retropubic radical prostatectomy (RRP), two performed robot-assisted laparoscopic prostatectomy (RALP), and two others utilized multiple modalities. PPI was more likely to occur in patients with preoperative DO [pooled odds ratio (OR) 2.30; 95% confidence interval (CI) 1.39-3.82; studies 9; participants 419], as compared to patients who were DO negative. Sensitivity analysis using the subgroups of RRP (OR 2.32; 95% CI 1.11-4.85), RALP (OR 3.41; 95% CI 1.55-7.47), DO defined as any amplitude of involuntary contraction (OR 2.32; 95% CI 1.11-4.85), no postoperative intervention (OR 2.32; 95% CI 1.11-4.85), and outcome evaluation after 6 months (OR 2.32; 95% CI 1.11-4.85) demonstrated consistent results. Although some comparisons showed inter-study heterogeneity, there was no clear evidence of publication bias in this meta-analysis. Conclusions: Our meta-analysis results suggest that preoperative DO is another possible underlying mechanism for PPI. abstract_id: PUBMED:12676411 Contribution of the preoperative urodynamic findings in the determination of risk factors of urinary incontinence after radical retropubic prostatectomy Unlabelled: The aim of the study was to determine whether preoperative urodynamic evaluation helps the physiotherapist to adapt preoperative management of patients undergoing radical retropubic prostatectomy (RP) by identifying a group at risk of incontinence. Material And Methods: We compared the preoperative urodynamic evaluation of 229 men scheduled for RP with their continence status, evaluated by standardized pad-test and questionnaire, at 6 weeks and 4 months postoperatively. Results: The primary urinary incontinence risk was estimated for five patient categories, namely normal, bladder instability, bladder outlet obstruction, hypocontractility, and mixed results. None of the patients diagnosed with detrusor instability and bladder outlet obstruction was continent at six weeks from surgery. At four months, although it had improved, continence status remained significantly poorer than that observed in all other groups. Conclusion: Preoperative urodynamic evaluation of patients scheduled for RP allows identifying patients with a high risk of postoperative urinary incontinence. abstract_id: PUBMED:8976260 Urodynamic evaluation of changes in urinary control after radical retropubic prostatectomy. Purpose: Urodynamic evaluations were done in patients before and after radical prostatectomy to obtain more information about the factors that affect continence. Materials And Methods: Urodynamic testing was done in 82 patients before, and 6 to 8 weeks and 6 months (9 cases) after radical prostatectomy. Evaluation included measurement of cystometric bladder capacity, compliance, functional profile length, maximal urethral closure pressure, maximal urethral closure pressure during voluntary contraction of the external sphincter, residual urine, maximal flow rate and bladder instability. Results: The continence rate was 33.4, 69.4, 84.7 and 90.9% at 1, 3, 6 and 12 months after radical prostatectomy, respectively. Mean functional urethral length decreased from 61 mm. preoperatively to 25.9 mm. postoperatively, maximal urethral pressure from 89.6 to 65.2 cm. water and bladder capacity from 338.7 to 278.8 ml.
Bladder instability was found in 17 and 41% of cases before and directly after radical prostatectomy, respectively. There was a statistically significant difference in maximal urethral closure pressure (68.1 versus 53.1 cm. water) as well as functional urethral length (27.6 versus 20.5 mm.) in continent versus incontinent patients, respectively. Urodynamic examination 6 months after prostatectomy showed an increase in functional profile length and maximal urethral pressure, while bladder measurements did not change significantly. Conclusions: After radical prostatectomy significant changes in bladder and sphincter measurements are noted. Urethral closure pressure, functional urethral length and bladder stability are significant urodynamic factors that influence continence after radical prostatectomy. abstract_id: PUBMED:24334874 Prospective urodynamic model for prediction of urinary incontinence after robot-assisted radical prostatectomy. Objectives: To assess the preoperative urodynamic predictors of urinary incontinence (UI) 1 year after robot-assisted radical prostatectomy (RARP) and to design a nomogram capable of predicting its occurrence. Materials And Methods: Our prospective study included 58 previously continent patients who underwent RARP, in most cases, bilateral nerve-sparing and bladder neck preservation. A urodynamic examination including a urethral pressure profile was performed preoperatively. Multivariate analysis was used to assess the predictors for the need to use 1 or more pads/day and a nomogram was constructed. Results: There was a 20.6% incidence of UI at 1 year after RARP. Bladder compliance, maximum urethral closure pressure and the development of bladder outlet obstruction, correlated well with the incidence of UI on the multivariate analysis (p = 0.043, 0.001, and 0.05, respectively). Conclusion: Bladder compliance <27.8 ml/cm H2O, maximum urethral closure pressure <50.3 cm H2O and the bladder outlet obstruction are independent urodynamic factors correlating with UI after RARP. The new nomogram can objectively predict a patient likelihood of requiring 1 or more pads/day 1 year after RARP with a good accuracy. abstract_id: PUBMED:25428752 Severe postprostatectomy incontinence: Is there an association between preoperative urodynamic findings and outcome of incontinence surgery? Objective: This study investigated urodynamic findings in prostate cancer patients with self-reported persistent severe postprostatectomy urinary incontinence (PPI) as well as the outcome of incontinence surgery. The main aim was to evaluate the ability of preoperative urodynamic dysfunctions to predict the outcome of incontinence surgery. The hypothesis was that preoperative bladder dysfunction was predictive of an unsuccessful surgical outcome. Materials And Methods: Based on patient-reported adverse effects after radical prostatectomy (RP) in 2005-2009, 94 men with persistent severe PPI were invited in 2011 to a clinical examination including urodynamics. When indicated, surgery for PPI was offered, and by January 2014, 46 patients had been operated on with either an artificial urinary sphincter or a sling. The outcome of PPI surgery was evaluated with a follow-up questionnaire. Successful outcome was defined in two ways: patient-reported satisfaction and using one or fewer pads/day. Results: Of the 94 men with severe PPI more than 12 months after RP, 76 patients (81% response rate) met for clinical examination. 
Among them, 99% had intrinsic sphincter deficiency, in 67% coexisting with urodynamic bladder dysfunction. The presence of preoperative bladder dysfunction was not predictive of the outcome of PPI surgery. Preoperative use of fewer pads, less severe PPI, and a longer interval between RP and PPI surgery were associated with the successful outcome of one or fewer pads/day. Longer duration from RP to PPI surgery was the only preoperative factor associated with the successful outcome of satisfaction. Conclusions: The hypothesis was not supported as the presence of preoperative urodynamic bladder dysfunction was not predictive of the outcome of PPI surgery in this study. abstract_id: PUBMED:7725990 Value of urodynamic studies in the framework of total radical prostatectomy In our department, sixty patients underwent radical prostatectomy between 1987 and 1994. The last 35 candidates for radical prostatectomy underwent prospective urodynamic evaluation preoperatively and two months after surgery. Five patients presented with incontinence: two with total incontinence and three with moderate incontinence. In two of the patients there was detrusor instability, in two others urethral instability, and in the last patient postoperative urodynamics showed a reduction in functional bladder capacity. The question is therefore whether incontinence is caused only by sphincteric insufficiency or also by bladder or urethral dysfunction. Preoperative urodynamics are important for assessing the quality of the sphincter and of the bladder; the examination is also an important meeting between the patient and the surgeon. Postoperative urodynamics seem best reserved for patients who have problems, in order to seek an explanation for the incontinence. All our patients underwent rehabilitative physiotherapy, which seems to help them to recover their continence. abstract_id: PUBMED:30420112 Preoperative magnetic resonance imaging in predicting early continence recovery after robotic radical prostatectomy. Introduction And Aims: Urinary incontinence is a common complication after radical prostatectomy. The aim of our study was to describe the preoperative anatomical features using magnetic resonance imaging in order to predict early continence recovery after robotic radical prostatectomy. Material And Methods: 72 patients who underwent robotic radical prostatectomy were prospectively analysed. EPIC questionnaire (1, 6 and 12 mo) and first self-reported continence were used to assess functional outcomes. Membranous urethral length (MUL) and MUL-prostate axis angle (aMULP) were assessed preoperatively on T2 weighted sagittal images. Results: Continence rate was 67.2%, 92.6% and 95.2% at 1, 6 and 12 months, respectively. Early continence was achieved in patients with a lower aMULP. At 1 month, average aMULP in continent patients was 107.21° (IC 95% 90.3-124.6) vs. 118.5° (IC 95% 117.7-134) in incontinent ones (P=.014). At 6 months, differences in aMULP among groups were found: 114.24° (IC 95% 104.6-123.9) in continent vs. 142° (IC 95% 126.5-157.6) in incontinent patients (P=0.015). At 12 months, the continent group showed a significantly higher preoperative aMULP. aMULP was revealed as the only independent predictor of urinary continence at 6 mo in multivariate analysis, OR 0.007 (IC 95% 0.002-0.012), P=0.012. Conclusions: Assessment of preoperative anatomical parameters prior to surgery can help to identify those patients who will achieve early continence recovery and supports therapeutic decision making.
abstract_id: PUBMED:9859586 Urodynamics of urinary incontinence post radical prostatectomy Objective: To determine the urodynamic characteristics of urinary incontinence after radical retropubic prostatectomy. Methods: We conducted a clinical and urodynamic study on 25 patients who had undergone radical retropubic prostatectomy due to localized prostate adenocarcinoma. Results: 68% of the patients completely recovered urinary continence. The maximum rate of urinary continence recovery was found between the third and fourth month postprostatectomy. A relationship was demonstrated between preoperative hormone blockade, duration of the surgery and urinary continence. Videocystography demonstrated an incompetent proximal continence mechanism in all radical prostatectomy patients. All incontinent patients presented stress urinary incontinence. Denervation potentials of the periurethral sphincter was demonstrated by electromyography in 80% of incontinent patients. These potentials were not present in the continent patients. Reinnervation potentials were present in 50% of the continent patients and in 20% of the incontinent patients (significant differences). No relationship was found between other urodynamic data and post-radical prostatectomy urinary incontinence. Conclusions: The surgical difficulty influences the preservation of urinary continence. A high percentage of patients submitted to radical prostatectomy recover urinary continence with time. Urinary incontinence following radical prostatectomy is based on the distal sphincteric mechanism. Patients who remain incontinent four months postoperatively and with electromyographically demonstrated denervation potentials of the periurethral sphincter can be considered to be candidates for treatment of incontinence without waiting any further. Answer: Preoperative urodynamic examination has been studied to determine if it can predict the risk of incontinence after radical prostatectomy, but the results across various studies are not entirely consistent. Some studies have found a correlation between preoperative urodynamic findings and postoperative continence status, while others have not. For instance, one study found that preoperative detrusor instability was present in a high percentage of patients, but there was little correlation between this finding and postoperative incontinence (PUBMED:10370953). Another study reported a substantial difference in the incidence of urinary incontinence between patients with normal urodynamic findings and those with abnormalities, suggesting that identification of sphincteric and bladder dysfunction preoperatively may indicate a high risk of urinary incontinence after radical prostatectomy (PUBMED:7801419). A systematic review and meta-analysis indicated that preoperative urodynamic detrusor overactivity (DO) contributes to post-prostatectomy incontinence, with patients with preoperative DO being more likely to experience postoperative incontinence (PUBMED:26507516). Similarly, another study concluded that preoperative urodynamic evaluation allows for identifying patients with a high risk of postoperative urinary incontinence (PUBMED:12676411). However, a study that developed a prospective urodynamic model found that bladder compliance, maximum urethral closure pressure, and the development of bladder outlet obstruction were independent urodynamic factors correlating with urinary incontinence after robot-assisted radical prostatectomy (PUBMED:24334874). 
In contrast, another study found that the presence of preoperative bladder dysfunction was not predictive of the outcome of incontinence surgery (PUBMED:25428752). In summary, while some studies suggest that preoperative urodynamic examination can be helpful in predicting the risk of incontinence after radical prostatectomy, the predictive value is not definitive and may vary depending on the specific urodynamic parameters and the type of prostatectomy performed. The presence of certain preoperative urodynamic abnormalities, such as detrusor overactivity and sphincteric dysfunction, may indicate a higher risk, but other factors also play a role in postoperative continence outcomes.
Instruction: Decreased deformability of donor red blood cells after intrauterine transfusion in the human fetus: possible reason for their reduced life span? Abstracts: abstract_id: PUBMED:15265129 Decreased deformability of donor red blood cells after intrauterine transfusion in the human fetus: possible reason for their reduced life span? Background: The life span of donor red blood cells (RBCs) is reduced in the fetus with Rh hemolytic disease. This may have resulted from donor or recipient factors. Study Design And Methods: Studied in vitro was the effect of gamma irradiation on hemolysis, methemoglobin (metHb), and lipid peroxidation of donor RBCs and the ability of fetal and adult plasma to protect irradiated RBCs from induced lipid peroxidation. Also studied in vivo were the effects of the time that donor RBCs reside in the fetus, by measuring their lipid peroxidation, cholesterol-to-phospholipid ratios, and deformability. Results: Irradiation barely increased hemolysis and metHb formation and did not increase lipid peroxidation. Plasma samples of D+ fetuses inhibited induced oxidative stress less than plasma samples of adults. Nevertheless, in vivo lipid peroxidation of the donor RBC membrane had not increased, whereas the molar cholesterol-to-phospholipid ratio increased from 1.08 +/- 0.11 to 1.38 +/- 0.12. It became identical to that of the fetal RBCs (1.44 +/- 0.12). Before transfusion, the deformability of the adult RBCs (elongation index, 0.578 +/- 0.013) was better than that of the fetal cells (elongation index, 0.494 +/- 0.027), but decreased to fetal levels after transfusion (elongation index, 0.518 +/- 0.039). Conclusion: Irradiation of the RBCs and a reduced fetal antioxidant capacity do not lead to in vivo lipid peroxidation. The shorter life span of donor cells in the fetus probably results from a decreased deformability of the RBCs after transfusion, most likely owing to an increased cholesterol-to-phospholipid ratio. abstract_id: PUBMED:33015934 Inter-donor variability in deformability of red blood cells in blood units. Objective: This study aimed to examine the donor-to-donor variability in the deformability of red blood cells (RBCs) from freshly collected blood donations (F-RBC) and packed RBCs. Background: Packed RBCs are supplied for transfusion by the first-in-first-out (FIFO) criterion, assuming that their quality is the same for packed RBCs with equal storage duration. To challenge this notion, we determined the deformability of F-RBC and packed RBCs stored for different durations. Methods: Three RBC groups were employed: A. 79 samples of F-RBC; B. 76 samples of packed RBC units, randomly used for transfusion at different storage durations; and C. 65 samples of outdated packed RBCs stored for 35 to 37 days. All packed RBC units were non-leukofiltrated and stored in Citrate-phosphate-dextrose solution with adenine (CPDA-1). RBC deformability was determined using a computerised cell-flow properties analyser, which monitors the shape change of cells directly visualised in a narrow-gap flow chamber and provides the cells' deformability distribution in a large RBC population. Results: The F-RBC deformability exhibited a wide range of inter-donor variability. The cold storage of packed RBCs exerted a mild reduction of deformability, which became significant, compared to the initial inter-donor variability, only after 3 weeks of storage.
Conclusion: Packed RBCs are generally supplied for transfusion by the FIFO criterion based on the assumption that the storage duration is a key factor of RBC quality. This study demonstrates that the deformability of red blood cells is significantly different in donors, and substantial variability persists throughout the entire process of their storage. Therefore, the FIFO criterion is not sufficient for assessing the RBC deformability, which should, therefore, be specifically characterised for each unit. abstract_id: PUBMED:35846223 Degradation of red blood cell deformability during cold storage in blood bags. Red blood cells (RBCs) stored in blood bags develop a storage lesion that include structural, metabolic, and morphologic transformations resulting in a progressive loss of RBC deformability. The speed of RBC deformability loss is donor-dependent, which if properly characterized, could be used as a biomarker to select high-quality RBC units for sensitive recipients or to provide customized storage timelines depending on the donor. We used the microfluidic ratchet device to measure the deformability of red blood cells stored in blood bags every 14 days over a span of 56 days. We observed that storage in blood bags generally prevented RBC deformability loss over the current standard 42-day storage window. However, between 42 and 56 days, the deformability loss profile varied dramatically between donors. In particular, we observed accelerated RBC deformability loss for a majority of male donors, but for none of the female donors. Together, our results suggest that RBC deformability loss could be used to screen for donors who can provide stable RBCs for sensitive transfusion recipients or to identify donors capable of providing RBCs that could be stored for longer than the current 42-day expiration window. abstract_id: PUBMED:12660436 Antioxidant protection against free radicals is reduced in fetal plasma after intrauterine red blood cell transfusion. After intrauterine transfusion for red cell alloimmunization, a 2- to 20-fold increase of plasma Hb, a strong pro-oxidant, was observed. The increase of fetal plasma Hb after transfusion leads to a highly significant reduction of plasma antioxidant capacity, measured as the peroxyl radical trapping capacity. Consequently, the posttransfusion reduced antioxidant protection may enhance the peroxidation of lipids in e.g. donor erythrocyte membranes, leading to the shortened life span of these cells in the fetus. abstract_id: PUBMED:32690367 Unit-to-unit variability in the deformability of red blood cells. Background: In blood banking practice, the storage duration is used as the primary criterion for inventory management, and usually, the packed red blood cells (PRBC) units are supplied primarily according to first-in-first-out (FIFO) principle. However, the actual functionality of individual PRBC units is mostly ignored. One of the main features of the RBCs not accounted for under this approach is the deformability of the red cells, i.e., their ability to affect the recipients' blood flow. The objective of the study was to analyze unit-to-unit variability in the deformability of PRBCs during their cold storage. Methods: RBC samples were obtained from twenty leukoreduced PRBC units, stored in SAGM. The deformability of cells was monitored from the day of donation throughout 42 days. 
RBC deformability was determined using the computerized cell flow-properties analyzer (CFA) based on cell elongation under a shear stress of 3.0 Pa, expressed by the elongation-ratio (ER). The image analysis determines the ER for each cell and provides the ER distribution in the population of 3000-6000 cells. Results: The deformability of freshly-collected RBCs exhibited marked variability already on the day of donation. We also found that the aging curve of PRBC deformability varies significantly among donors. Significance: The present study has demonstrated that storage duration is only one of the factors, and seemingly not even the major one, affecting the PRBCs functionality. Therefore, the FIFO approach is not sufficient for assessing the potential transfusion outcome, and the PRBC functionality should be determined explicitly for each unit. abstract_id: PUBMED:23449853 Decreased erythrocyte deformability after transfusion and the effects of erythrocyte storage duration. Background: Erythrocyte cell membranes undergo morphologic changes during storage, but it is unclear whether these changes are reversible. We assessed erythrocyte cell membrane deformability in patients before and after transfusion to determine the effects of storage duration and whether changes in deformability are reversible after transfusion. Methods: Sixteen patients undergoing posterior spinal fusion surgery were studied. Erythrocyte deformability was compared between those who required moderate transfusion (≥ 5 units erythrocytes) and those who received minimal transfusion (0-4 units erythrocytes). Deformability was measured in samples drawn directly from the blood storage bags before transfusion and in samples drawn from patients before and after transfusion (over 3 postoperative days). In samples taken from the blood storage bags, we compared deformability of erythrocytes stored for a long duration (≥ 21 days), those stored for a shorter duration (<21 days), and cell-salvaged erythrocytes. Deformability was assessed quantitatively using the elongation index (EI) measured by ektacytometry, a method that determines the ability for the cell to elongate when exposed to shear stress. Results: Erythrocyte deformability was significantly decreased from the preoperative baseline in patients after moderate transfusion (EI decreased by 12% ± 4% to 20% ± 6%; P = 0.03) but not after minimal transfusion (EI decreased by 3% ± 1% to 4% ± 1%; P = 0.68). These changes did not reverse over 3 postoperative days. Deformability was significantly less in erythrocytes stored for ≥ 21 days (EI = 0.28 ± 0.02) than in those stored for <21 days (EI = 0.33 ± 0.02; P = 0.001) or those drawn from patients preoperatively (EI = 0.33 ± 0.02; P = 0.001). Cell-salvaged erythrocytes had intermediate deformability (EI = 0.30 ± 0.03) that was greater than that of erythrocytes stored ≥ 21 days (P = 0.047), but less than that of erythrocytes stored <21 days (P = 0.03). Conclusions: The findings demonstrate that increased duration of erythrocyte storage is associated with decreased cell membrane deformability and that these changes are not readily reversible after transfusion. abstract_id: PUBMED:31796943 Deformability based sorting of stored red blood cells reveals donor-dependent aging curves. A fundamental challenge in the transfusion of red blood cells (RBCs) is that a subset of donated RBC units may not provide optimal benefit to transfusion recipients. 
This variability stems from the inherent ability of donor RBCs to withstand the physical and chemical insults of cold storage, which ultimately dictate their survival in circulation. The loss of RBC deformability during cold storage is well-established and has been identified as a potential biomarker for the quality of donated RBCs. While RBC deformability has traditionally been indirectly inferred from rheological characteristics of the bulk suspension, there has been considerable interest in directly measuring the deformation of RBCs. Microfluidic technologies have enabled single cell measurement of RBC deformation but have not been able to consistently distinguish differences between RBCs from healthy donors. Using the microfluidic ratchet mechanism, we developed a method to sensitively and consistently analyze RBC deformability. We found that the aging curve of RBC deformability varies significantly across donors, but is consistent for each donor over multiple donations. Specifically, certain donors seem capable of providing RBCs that maintain their deformability during two weeks of cold storage in standard test tubes. The ability to distinguish between RBC units with different storage potential could provide a valuable opportunity to identify donors capable of providing RBCs that maintain their integrity, in order to reserve these units for sensitive transfusion recipients. abstract_id: PUBMED:36310401 Hypoxic storage of donor red cells preserves deformability after exposure to plasma from adults with sickle cell disease. Background: Red cell (RBC) transfusions are beneficial for patients with sickle cell disease (SCD), but ex vivo studies suggest that inflamed plasma from patients with SCD during crises may damage these RBCs, diminishing their potential efficacy. The hypoxic storage of RBCs may improve transfusion efficacy by minimizing the storage lesion. We tested the hypotheses that (1) donor RBCs exposed to the plasma of patients in crisis would have lower deformability and higher hemolysis than those exposed to non-crisis plasma, and (2) hypoxic storage, compared to standard storage, of donor RBCs could preserve deformability and reduce hemolysis. Study Design And Methods: 18 SCD plasma samples from patients who had severe acute-phase symptoms (A-plasma; n = 9) or were at a steady-state (S-plasma; n = 9) were incubated with 16 RBC samples from eight units that were stored either under conventional (CRBC) or hypoxic (HRBC) conditions. Hemolysis and microcapillary deformability assays of these RBCs were analyzed using linear mixed-effect models after each sample was incubated in patient plasma overnight at 37°C. Results: Relative deformability was 0.036 higher (p < 0.0001) in HRBC pairs compared to CRBC pairs regardless of plasma type. Mean donor RBC hemolysis was 0.33% higher after incubation with A-plasma compared to S-plasma either with HRBC or CRBC (p = 0.04). HRBCs incubated with steady-state patient plasma demonstrated the highest deformability and lowest hemolysis. Conclusion: Hypoxic storage significantly influenced RBC deformability. Patient condition significantly influenced post-incubation hemolysis. Together, HRBCs in steady-state plasma maximized donor red cell ex vivo function and survival. abstract_id: PUBMED:10718283 Effect of L-carnitine supplementation on red blood cells deformability in hemodialysis patients. Anemia is a serious problem in hemodialysis patients, the main cause of which is erythropoietin deficiency.
After the discovery of recombinant human erythropoietin (rHuEpo) at the end of the last decade, the hematological profile of hemodialysis patients improved significantly but at considerable expense. The deformability of red blood cells (RBC) influences their microcirculation and tissue oxygen delivery along with their life span. We investigated the deformability of RBCs in 15 hemodialysis patients before and after three months on L-carnitine supplementation (30 mg/kg body wt/dialysis session). We excluded from the study all patients who received blood transfusions three months before or during the study, patients who had hemorrhagic episodes, those with hyperparathyroidism or infections, and any who required surgical intervention during the study. The serum iron, folic acid and vitamin B-12 levels were kept normal for the duration of the study. The erythropoietin dose taken before the beginning of L-carnitine supplementation was not changed. The deformability of RBCs before and after dialysis, prior to and following three months on L-carnitine, was determined and compared to the deformability of RBCs from a control group. Hematocrit levels were measured before entry into the study and every month for three months. We found that the deformability of RBCs before the dialysis session was significantly greater than that found in the control group (t-test, p < 0.00001), and that there was a further increase after the end of the dialysis session. Three months following L-carnitine supplementation, we found a significant reduction in RBC deformability (paired t-test, p < 0.004), and a significant increase in the hematocrit (ANOVA, p < 0.0001). We concluded that abnormalities in the deformability of RBCs improved after L-carnitine and that this was responsible for the increase in the hematocrit. This may allow a substantial reduction in rHuEpo dose. abstract_id: PUBMED:26477408 Microfluidic deformability analysis of the red cell storage lesion. A key challenge in transfusion medicine research and clinical hematology is to develop a simple and non-destructive method to measure the quality of each blood unit prior to use. RBC deformability has long been proposed as an indicator of blood quality. We measured RBC deformability using the pressure required for single cells to transit through a micrometer-scale constriction to examine longitudinal changes in RBC deformability, as well as the variability in blood quality and storage capacity across donors. We used a microfluidic device to monitor deformability changes in RBCs stored in plastic tubes and in blood bags over 14 and 56 days respectively. We found consistent storage-based degradation of RBC deformability with statistically significant variability in both the initial RBC deformability and storage capacity among donors. Furthermore, all samples exhibited a transient recovery phenomenon. Deformability profiling of stored RBCs using transiting pressure showed significant donor variability in initial quality and storage capacity. This measurement approach shows promise as a rapid method to individually assess the quality of stored RBC units. Answer: The decreased deformability of donor red blood cells (RBCs) after intrauterine transfusion in the human fetus is indeed a possible reason for their reduced life span. Studies have shown that the life span of donor RBCs is reduced in fetuses with Rh hemolytic disease, and this could be attributed to changes in the RBCs after they are transfused into the fetal environment (PUBMED:15265129).
Specifically, the deformability of adult RBCs, which is initially better than that of fetal cells, decreases to fetal levels after transfusion. This change in deformability is likely due to an increased cholesterol-to-phospholipid ratio in the RBC membrane, which becomes identical to that of the fetal RBCs after transfusion (PUBMED:15265129). Furthermore, the reduced antioxidant capacity of the fetal plasma may contribute to the shortened life span of donor RBCs in the fetus. After intrauterine transfusion, there is a significant increase in plasma hemoglobin, which is a strong pro-oxidant, leading to a reduction in plasma antioxidant capacity. This diminished antioxidant protection could enhance the peroxidation of lipids in donor erythrocyte membranes, thereby contributing to the reduced life span of these cells in the fetus (PUBMED:12660436). It is also important to note that the deformability of RBCs exhibits significant inter-donor variability, and this variability persists throughout the storage process (PUBMED:33015934). The speed of RBC deformability loss during storage is donor-dependent, and characterizing this could be used as a biomarker to select high-quality RBC units for sensitive recipients or to provide customized storage timelines depending on the donor (PUBMED:35846223). Additionally, the aging curve of RBC deformability varies significantly among donors, which suggests that storage duration is only one of the factors affecting the functionality of packed RBCs (PUBMED:32690367). In summary, the decreased deformability of donor RBCs after intrauterine transfusion in the human fetus, potentially caused by changes in the RBC membrane composition and reduced antioxidant protection in the fetal environment, is a plausible explanation for their reduced life span. This phenomenon is further complicated by the inherent variability in RBC deformability among different donors (PUBMED:15265129; PUBMED:12660436; PUBMED:33015934; PUBMED:35846223; PUBMED:32690367).
Instruction: Demographic confounders in volumetric MRI analysis: is the posterior fossa really small in the adult Chiari 1 malformation? Abstracts: abstract_id: PUBMED:25794074 Demographic confounders in volumetric MRI analysis: is the posterior fossa really small in the adult Chiari 1 malformation? Objective: Measurement of posterior fossa volume has been proposed to have diagnostic utility and physiologic significance in the Chiari malformation type 1. This study evaluated the effects of demographics on posterior fossa volume and total intracranial volume in adult control subjects, adult patients with Chiari malformation type 1, and adult patients with idiopathic intracranial hypertension, who may share some imaging features of patients with Chiari malformation type 1. Materials And Methods: Twenty-eight patients with Chiari malformation type 1, 21 patients with idiopathic intracranial hypertension, and 113 control subjects underwent brain MRI including contrast-enhanced 3D gradient-recalled echo (GRE) T1-weighted imaging. Linear measurements of the posterior fossa and intracranial space were obtained. Manual segmentation of the posterior fossa and intracranial space was performed to yield posterior fossa volume and total intracranial volume. Age, sex, race, and body mass index (weight in kilograms divided by the square of height in meters; BMI) were controlled for when comparing cohorts. Results: Three of the 12 linear measurements significantly predicted total intracranial volume (accounting for 74% of variance), and four predicted posterior fossa volume (54% of variance). Age, race, sex, and BMI each statistically significantly influenced posterior fossa volume and total intracranial volume. No statistically significant differences in posterior fossa volume, total intracranial volume, or ratio of posterior fossa volume to total intracranial volume were seen between the Chiari malformation type 1 group and control group after controlling for demographics. Patients with idiopathic intracranial hypertension were more likely than control subjects to have smaller posterior fossa volumes (odds ratio [OR]=1.81; p=0.01) and larger total intracranial volumes (OR=1.24; p=0.06). Conclusion: Linear measurements of the posterior fossa are not strong predictors of posterior fossa volume. Age, race, sex, and BMI have statistically significant effects on intracranial measurements that must be considered, particularly with respect to posterior fossa volume in Chiari malformation type 1. Even when these demographic variables are appropriately accounted for, other similarly presenting diseases may show small posterior fossa volumes. abstract_id: PUBMED:36143119 The Small Posterior Cranial Fossa Syndrome and Chiari Malformation Type 0. Patients showing typical Chiari malformation type 1 (CM1) signs and symptoms frequently undergo cranial and cervical MRI. In some patients, MRI documents >5 mm of cerebellar tonsillar herniation (TH) and the diagnosis of CM1. Patients with 3−5 mm TH have “borderline” CM1. Patients with less than 3 mm of TH and an associated cervical syrinx are diagnosed with Chiari “zero” malformation (CM0). However, patients reporting CM1 symptoms are usually not diagnosed with CM if MRI shows less than 3−5 mm of TH and no syrinx. Recent MRI morphometric analysis of the posterior fossa and upper cervical spine detected anatomical abnormalities in and around the foramen magnum (FM) that explain these patients’ symptoms. 
The abnormalities include a reduced size of the posterior fossa, FM, and upper cervical spinal canal and extension of the cerebellar tonsils around the medulla rather than inferior to the foramen magnum, as in CM1. These morphometric findings lead some neurologists and neurosurgeons to diagnose CM0 in patients with typical CM1 signs and symptoms, with or without cervical syringes. This article reviews recent findings and controversies about CM0 diagnosis and updates current thinking about the clinical and radiological relationship between CM0, borderline CM1, and CM1. abstract_id: PUBMED:34448046 Posterior cranial fossa and cervical spine morphometric abnormalities in symptomatic Chiari type 0 and Chiari type 1 malformation patients with and without syringomyelia. Background: To better understand how anatomical features of Chiari malformation type 0 (CM0) result in the manifestation of Chiari malformation type 1 (CM1) signs and symptoms, we conducted a morphometric study of the posterior cranial fossa (PCF) and cervical canal in patients with CM1 and CM0. Methods: This retrospective study had a STROBE design and included 120 adult patients with MRI evidence of a small PCF (SPCF), typical clinical symptoms of CM1, and a diagnosis of CM1, CM0, or SPCF-TH0-only (SPCF with cerebellar ectopia less than 2 mm and without syringomyelia). Patients were divided by MRI findings into 4 groups: SPCF-TH0-only, SPCF-TH0-syr (CM0 with SPCF and syringomyelia), SPCF-CM1-only (SPCF with cerebellar ectopia 5 mm or more without syringomyelia), and SPCF-CM1-syr (CM1 with syringomyelia). Neurological examination data and MRI parameters were analyzed. Results: All patient cohorts had morphometric evidence of a small, flattened, and overcrowded PCF. The PCF phenotype of the SPCF-TH0-only group differed from that of other CM cohorts in that the length of clivus and supraocciput and the height of the PF were longer, the upper CSF spaces of PCF were taller, and the area of the foramen magnum was smaller. The SPCF-TH0 groups had a more significant narrowing of the superior cervical canal and a smaller decrease in PCF height than the SPCF-CM1 groups. Conclusions: Patients with SPCF-TH0 with and without syringomyelia developed Chiari 1 symptoms and signs. Patients with SPCF-TH0-syr (Chiari 0) had more constriction of their CSF pathways in and around the foramen magnum than patients with SPCF-TH0-only. abstract_id: PUBMED:676804 Size of posterior fossa in Chiari type 1 malformation in adults. The clinical and neuroradiological findings in five patients with Chiari Type 1 malformation presenting symptoms in adult life are presented. New posterior fossa ratio methods have been applied to this material. The posterior cranial fossae are demonstrated to be small, and the most significant deviation from normal gave the ratio h/Tw: posterior fossa height in proportion to Twining's line. The implications of these findings are discussed in relation to the static and dynamic factors that are operating in the development of neurological symptoms in Chiari type 1 malformation. A final discussion is given comparing the Chiari type 1 malformations to meningo-myelocele. The findings using posterior fossa ratio methods may strengthen the indications for suboccipital craniectomy in these patients. abstract_id: PUBMED:38455251 Volumetric assessment of Posterior cranial fossa in a West African population. Background And Objectives: Posterior fossa pathologies can have potentially devastating outcomes. 
The volumetric capacity of this fossa, known to have ethnoregional variations, can thus be critical in determining outcomes and intervention measures and approaches to pathologies involving this region. This study aimed to evaluate the normal posterior fossa volumes within the West-African subpopulation. Methods: This was a descriptive study of all patients presenting for a cranial imaging study at the study location within a two-year period using a 1.5T MRI of this cranial region. Obtained data included the transverse and anteroposterior diameters and the height of this fossa, and these data were analyzed. P values < 0.05 were considered statistically significant. Results: A total of 315 patients were recruited (165 males and 150 females). The average posterior fossa transverse diameter, anteroposterior diameter and height were 108.19 mm, 71.58 mm and 35.53 mm for males, and 105.7 mm, 66.48 mm, and 34.24 mm for females, respectively. The average posterior fossa volume for males (292.36 cm3) was significantly higher than for females (252.90 cm3) (p = 0.0038). The highest average posterior fossa volume was observed between 16 and 30 years of age for males and above 75 years for females. Conclusion: Posterior cranial fossa volumes for the West African population are significantly higher than those obtained for other regions. In addition to being beneficial in some posterior fossa space-occupying lesions, this larger volume can explain the relative rarity and sex predilection of some posterior fossa congenital abnormalities like Chiari-1 malformation amongst the West African population. abstract_id: PUBMED:34390378 CSF-space volumetric change following posterior fossa decompression in paediatric Chiari type-I malformation: a correlation with outcome. Objective: We have previously reported inferior post-operative clinical outcomes in younger children with Chiari type-I malformation (CIM). We sought to quantify the CSF volumetric changes pre- and post-decompression, in a paediatric cohort, to determine whether cisternal volume change is associated with clinical outcomes. Methods: In this retrospective clinical study, the CSF spaces of the posterior fossa (supracerebellar/quadrigeminal, prepontine, fourth ventricle, cisterna magna) were measured on magnetic resonance images pre- and post-operatively using a semi-automated method. Additionally, we describe a novel CSF space of the upper cervical canal incorporating the subarachnoid space from the foramen magnum to the inferior cortex of the C2 body, the FM-C2 cistern. Morphometric measurements included the pB-C2 distance, clivoaxial angle, clival length, clival angle and Boogard's angle. Volumetric and morphometric data were correlated with clinical outcomes at 4-12 months post-operatively as measured by the Chicago Chiari Outcome Scale (CCOS). Results: Of 59 adequate clinical cases, 57 and 36 patients had acceptable imaging for morphometric and volumetric analysis respectively. All CSF spaces measured had a significant increase in volume post-operatively (p < 0.05). There was no correlation between the change in volume or post-operative CSF volumes and CCOS. The pre-operative volume of the FM-C2 was positively correlated with total CCOS (Wald [Formula: see text], [Formula: see text]) and was significantly smaller in the 0-6-year age group (2.38 ± 1.27 ml vs. 3.67 ± 1.56 ml, p = 0.014). No morphometric measurement changed significantly after surgery or demonstrated a relationship with CCOS.
Conclusions: Volumetric changes in the CSF cisterns of the posterior cranial fossa and upper cervical canal do not correlate with the age-related differences in clinical outcomes in paediatric CIM. The pre-operative volume of the FM-C2 cistern may have a role in predicting the likelihood of a beneficial post-operative outcome in paediatric CIM. abstract_id: PUBMED:16327707 Chiari type 1 malformation and magnetic resonance imaging The Chiari type 1 malformation is common. Unlike the Chiari type 2 and 3 malformations, it may remain latent for a long time, becoming symptomatic only in adulthood. The introduction of MRI has resulted in an increased number of diagnoses of this malformation in pediatric patients. It appears to be related to underdevelopment of the posterior cranial fossa. It must be differentiated from acquired tonsillar herniation, particularly when herniation results from intracranial hypotension; these cases are sometimes reported as acquired Chiari I malformation with spontaneous resolution. Tonsillar ectopia may cause symptoms by its direct effect on any or all of the medulla and the cerebellar and upper spinal cord. The most suggestive of the oculomotor disturbances is oscillopsia with downbeat nystagmus. Dysphonia and dysphagia are common. Potentially serious autonomic disturbances are also frequent: sleep apnea, respiratory failure, syncope and even sudden death. Another risk is syrinx formation, resulting from obstruction of CSF circulation in the cisterna magna. Syringomyelia is detected in 32 to 74% of patients with Chiari I malformation. Treatment is surgical. Posterior fossa decompression is achieved by suboccipital craniectomy combined with laminectomy of the upper cervical segments. Surgical intervention is indicated when the malformation is symptomatic and there is no doubt that it is the cause of the symptoms. When a Chiari I malformation is identified fortuitously on MRI, long-term monitoring is essential. The risk of developing symptoms increases over time. Patients should be advised not to participate in contact sports. abstract_id: PUBMED:18352798 Volumetric analysis of the posterior cranial fossa in a family with four generations of the Chiari malformation Type I. Object: Many authors have concluded that the Chiari malformation Type I (CM-I) is due to a smaller than normal posterior cranial fossa. In order to establish this smaller geometry as the cause of hindbrain herniation in a family, the authors of this paper performed volumetric analysis in a family found to have this malformation documented in 4 generations. Methods: Members from this family found to have a CM-I by imaging underwent volumetric analysis of their posterior cranial fossa using the Cavalieri method. Results: No member of this family found to have CM-I on preoperative imaging had a posterior fossa that was significantly smaller than that of age-matched controls. Conclusions: The results of this study demonstrate that not all patients with a CM-I will have a reduced posterior cranial fossa volume. Although the mechanism for the development of hindbrain herniation in this cohort is unknown, this manifestation can be seen in multiple generations of a familial aggregation with normal posterior fossa capacity. abstract_id: PUBMED:30318076 Minimally-invasive approach to posterior fossa decompression: Initial experience in Adult Chiari Type 1 Malformation patients. 
We report our initial experience using a Minimally-Invasive Surgery (MIS) technique for Posterior Fossa Decompression (PFD) in Adult Chiari 1 Malformation (C1M) patients. Five subjects were treated with MIS PFD at our center and followed up over a 5-year period. Another nine subjects who were treated with Open PFD and followed up over the same period were used for comparison. This study suggests that there is little difference in efficacy and safety between MIS and Open PFD. Larger series and prospective randomized trials comparing the two methods would provide higher-quality evidence and clarify the role of either technique in the treatment of C1M. abstract_id: PUBMED:21800079 Neuroradiological diagnosis of Chiari malformations. Chiari malformations 1, 2, 3 represent different degrees of herniation of posterior fossa content into the cervical canal (Chiari 1 and 2), or through an upper-cervical meningocele (Chiari 3), whereas Chiari 4 anomaly consists of hypoplasia of the cerebellum. Chiari 1 malformation (CM1) is the commonest anomaly; it is probably related to a mesodermal defect that creates a congenitally small posterior fossa, subsequent overcrowding of its contents and herniation through the foramen magnum. The diagnosis of CM1 is based on the demonstration of the downward displacement and particular shape of the cerebellar tonsils into the upper cervical spinal canal associated with obliteration of the subarachnoid spaces at the level of the foramen magnum. MRI has a fundamental role in the correct identification of the anatomical aspects of this malformation and associated anomalies and represents the imaging modality of choice. Answer: The question of whether the posterior fossa is truly small in adult Chiari 1 malformation (CM1) has been explored in various studies, with some conflicting results. A study by Houston et al. (PUBMED:25794074) found that after controlling for demographic factors such as age, race, sex, and body mass index (BMI), there were no statistically significant differences in posterior fossa volume between the CM1 group and the control group. This suggests that demographic confounders can significantly influence intracranial measurements, and when these are accounted for, the previously assumed small posterior fossa in CM1 may not be as apparent. In contrast, other studies have reported that patients with CM1 do exhibit a small posterior cranial fossa (PCF). For instance, a study by Milhorat et al. (PUBMED:676804) applied new posterior fossa ratio methods and found that the posterior cranial fossae in CM1 patients were small, particularly when considering the ratio of posterior fossa height to Twining's line. This finding supports the notion that a small PCF may contribute to the development of neurological symptoms in CM1. Further complicating the picture, a study by Tubbs et al. (PUBMED:36143119) discussed the concept of Chiari malformation type 0 (CM0), where patients exhibit typical CM1 symptoms but do not meet the traditional MRI criteria for CM1, such as significant tonsillar herniation. This study suggests that anatomical abnormalities in and around the foramen magnum, including a reduced size of the PCF, may explain the symptoms in these patients, indicating that a small PCF can be associated with Chiari-like symptoms even in the absence of significant tonsillar herniation. Additionally, a study by Nwotchouang et al.
(PUBMED:38455251) found that posterior cranial fossa volumes in the West African population were significantly higher than those obtained for other regions, which could explain the relative rarity of CM1 among this population. In summary, while demographic factors can influence the measurement of posterior fossa volume and potentially confound the association with CM1, there is evidence from various studies to suggest that a small PCF may still be a relevant feature in some patients with CM1 or CM-like symptoms (PUBMED:25794074, PUBMED:676804, PUBMED:36143119). However, the relationship between PCF size and CM1 is complex and may not be straightforward in all cases. For example, a study by Tubbs et al. (PUBMED:34448046) found that all patient cohorts with CM0 and CM1 had morphometric evidence of a small, flattened, and overcrowded PCF. This supports the idea that a small PCF is associated with the symptoms of CM1 and CM0. Furthermore, the study by Milhorat et al. (PUBMED:676804) using posterior fossa ratio methods indicated that the small size of the PCF may be a factor in the development of neurological symptoms in CM1 patients. On the other hand, a study by Tubbs et al. (PUBMED:18352798) found that not all patients with CM1 had a reduced posterior fossa volume when compared to age-matched controls, suggesting that a small PCF is not a universal finding in CM1. In the context of treatment outcomes, a study by Thorne et al. (PUBMED:34390378) found that volumetric changes in the CSF cisterns of the posterior cranial fossa and upper cervical canal did not correlate with age-related differences in clinical outcomes in pediatric CIM, indicating that the relationship between PCF size and symptomatology may not be directly correlated with treatment success. Overall, while demographic factors must be considered when evaluating PCF size in CM1, the evidence suggests that a small PCF may be associated with CM1 and related conditions in some patients, but it is not a definitive feature in all cases. Further research, including larger series and prospective randomized trials, may help to clarify the role of PCF size in CM1 and guide the treatment of this condition (PUBMED:25794074, PUBMED:676804, PUBMED:36143119, PUBMED:34448046, PUBMED:18352798, PUBMED:34390378).
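For readers unfamiliar with the volumetric technique named in PUBMED:18352798, the Cavalieri method estimates the volume of a structure from parallel imaging sections; the formula below is the standard stereological estimator and is offered only as background, not as an equation or value reported in the cited studies:

\[ \hat{V} \approx T \sum_{i=1}^{n} A_i \]

where T is the distance between consecutive sections and A_i is the measured cross-sectional area of the posterior fossa on the i-th section.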
Instruction: Educating doctors in France and Canada: are the differences based on evidence or history? Abstracts: abstract_id: PUBMED:16313579 Educating doctors in France and Canada: are the differences based on evidence or history? Background: Despite many economic and political similarities between France and Canada, particularly in their health care systems, there are very significant differences in their systems of medical education. Aim: This work aims to highlight the sociohistorical values of each country that explain these differences by comparing the medical education systems of the 2 countries, including medical schools (teachers, funding), key processes (curriculum, student selection) and quality assurance methods. Discussion: In France, means and processes are standardised and defined at a national level. France has almost no national system of assessment of medical schools nor of students. By contrast, Canada leaves medical schools free to design their medical curricula, select students and appoint teachers using their own criteria. In order to guarantee the homogeneity and quality of graduates, the medical profession in Canada has created independent national organisations that are responsible for accreditation and certification processes. Each country has a set of founding values that partly explain the choices that have been made. In France these include equality and the right to receive free education. In Canada, these include equity, affirmative action and market-driven tuition. Conclusion: Many of the differences are more easily explained by history and national values than by a robust base of evidence. There is a constant tension between a vision of education promoted by medical educators, based on contextually non-specific ideas such as those found in the medical education literature, and the sociopolitical foundations and forces that are unique to each country. If we fail to consider such variables, we are likely to encounter significant resistance when implementing reforms. abstract_id: PUBMED:23362577 The teaching of evidence-based practice in allied healthcare training in France Evidence-Based Practice is a strategy which health professionals can use to guide their clinical decisions. However, the specific skills necessary for its implementation are still insufficiently developed in initial training.A survey of physiotherapy students confirms this observation and suggests that the teaching of Evidence-Based Practice needs to be reinforced. abstract_id: PUBMED:11460834 Evidence-based medicine: a critical analysis of the concept of evidence in medicine There is currently a lively debate involving scholars from diverse cultural background on the subject of evidence-based medicine. In order to set out the grounds of discussion this paper deals with the problem of "evidence", both the concept and meaning of this term. After a brief history of the idea of "evidence" from antiquity to the present day, it will be underlined how modern day thinking recognizes the existence of two types of "evidence": rational evidence supported first by René Descartes, and perceptible evidence, defended by Francis Bacon. In contemporary epistemology the idea of "evidence" has been conceived above all as perceptible and is closely linked to the idea of fact. 
The concept of "scientific fact" will therefore be analyzed and will reveal how in science, contrary to prevailing opinion, "pure facts" practically do not exist and how the "facts" which scientists talk about in their research are always "facts depending on theory". Subsequently the capacity of "facts" to prove scientific hypothesis will be examined. In the light of more recent epistemological reflection the thesis will be maintained according to which facts are not capable of definitively proving the truth of a theory. Such reflection leads to the conclusion that in medicine "evidence" is always dependent on the theoretical and practical context in which researchers work. In the final part of the paper the epistemological presumptions and ambitions of evidence-based medicine will be examined. This analysis concludes that the epistemology on which evidence-based medicine relies upon does not grasp the true complexity of the scientific methodology and can weaken in doctors the beneficial stimulus that is doubt. abstract_id: PUBMED:10427660 Evidence-based medicine: context and relevance Currently, evidence-based medicine is subject of a controversial discussion. This review is focused on the different types of evidence in clinical medicine and elaborates the history and context of evidence-based medicine. Evidence-based medicine insists that the theoretical plausibility of a therapeutical concept alone does not substitute for the empirical proof of its efficacy. Evidence-based medicine emphasizes the necessity to complement the individual physician's clinical experience with valid external evidence. This evidence has to be obtained by laborious systematic reviews rather than the traditional fragmentary study of the literature. The Cochrane Collaboration stands for this concept. The application of evidence-based medicine links the individual patient's problems with external evidence by asking defined answerable questions and supplies criteria for the critical evaluation of the evidence found. Evidence-based medicine, however, is not devoid of problems and contradictions. Above all, it has to be subjected to its own criteria and has to prove its efficacy. abstract_id: PUBMED:34076152 Educating for the implementation of evidence-based healthcare in Brazil: the JBI methodology. Objective: To present the JBI evidence implementation methodology and report the Brazilian experience in educating health professionals. Method: This is an experience report on professional education in the Brazilian context as per the JBI methodology. Results: In four years, 29 clinical fellows were trained in Brazil, with technical and scientific theoretical bases and tools for evidence-based practice, focusing on the transformation of the health contexts in which they are inserted. Conclusion: The JBI methodology offers systems and tools to evaluate existent practices; it also reinforces and disseminates evidence-based healthcare, potentializing the achievement of effective change in healthcare. abstract_id: PUBMED:38025287 A cross-sectional study evaluating the knowledge, attitude, and practice of evidence-based medicine among resident doctors of a health-care Institution of National Importance in India. Background And Objectives: Evidence-based medicine (EBM) promotes the integration of updated- best evidence with patient preferences and medical expertise for clinical decision-making. 
Despite the availability of high-quality evidence such as systematic review and meta-analysis, some clinicians manage their patients based on past experiences and expert opinion. Thus, this study was proposed to assess the knowledge, attitude, and practice of EBM among resident doctors at a tertiary care hospital in India. Participants And Methods: This cross-sectional questionnaire-based study was conducted among senior residents and final-year postgraduates (PGs) who were independently involved in clinical decision-making. By convenience sampling method, the participants were recruited, and the validated EBM Questionnaire (EBMQ) was distributed online for assessing the knowledge, attitude, and practice of EBM. Descriptive statistics were represented as frequency and proportions. Results: A total of 102 resident doctors participated with male preponderance (74.5%). Nearly, 96 (94.1%) participants were already practicing EBM and 21.6% had undergone EBM training. Textbooks (50%) were the most often referred sources for EBM information. Specific EBM databases such as MEDLINE and Cochrane were also utilized by 37.3% of participants. More than 70% of participants understood the terms such as a randomized controlled trial, case-control study, and P value. A higher proportion (80.4%) of participants showed a positive attitude about patient care improved by EBM. Conclusions: The majority of the resident doctors exhibited good knowledge and a positive attitude toward applying EBM in clinical decision-making. Periodic training through workshops or courses and integration of EBM with the PG curriculum would potentially enhance the EBM practice. abstract_id: PUBMED:18548408 The history of evidence-based psychiatry and evidence-based guideline development Evidence-based medicine, evidence-based psychiatry and evidence-based development of clinical guidelines are all flourishing today because of developments that have taken place in both science and society. These concepts will be explained and placed in their context. Medical treatment is based on the one hand on scientific knowledge and experience, on the other hand on compassion and consideration for others. Non-specific and relational factors play an important role, particularly in psychiatry and mental health care, but the scientific way of thinking remains as important as ever. abstract_id: PUBMED:15827845 History and development of evidence-based medicine. This article illustrates the timeline of the development of evidence-based medicine (EBM). The term "evidence-based medicine" is relatively new. In fact, as far as we can tell, investigators from McMaster's University began using the term during the 1990s. EBM was defined as "a systemic approach to analyze published research as the basis of clinical decision making." Then in 1996, the term was more formally defined by Sacket et al., who stated that EBM was "the conscientious and judicious use of current best evidence from clinical care research in the management of individual patients." Ancient era EBM consists of ancient historical or anecdotal accounts of what may be loosely termed EBM. This was followed by the development of the renaissance era of EBM, which began roughly during the seventeenth century. During this era personal journals were kept and textbooks began to become more prominent. This was followed by the 1900s, during an era we term the transitional era of EBM (1900-1970s). Knowledge during this era could be shared more easily in textbooks and eventually peer-reviewed journals. 
Finally, during the 1970s we enter the modern era of EBM. Technology has had a large role in the advancement of EBM. Computers and database software have allowed compilation of large amounts of data. The Index Medicus has become a medical dinosaur of the past that students of today likely do not recognize. The Internet has also allowed incredible access to masses of data and information. However, we must be careful with an overabundance of "unfiltered" data. As history has clearly shown us, evidence and data do not immediately translate into evidence-based practice. abstract_id: PUBMED:6399837 The Academy of Medicine of France: history and activities. Remarks and reflections The author presents the Academie de Médecine de France, its structure, its history and its fundamental activities. He evokes and proposes some ideas about the medical problems concerning Tunisia and the countries of the Maghreb. abstract_id: PUBMED:21161932 Lessons from evidence-based medicine: what healthcare designers can learn from the medical field. This paper explores the history of evidence-based medicine (EBM) in comparison to the relatively short history of evidence-based design (EBD). Throughout its development, EBM has encountered a number of challenges, including dealing with initial objections to the concept, creating safeguards to ensure rigor in research, supporting practitioners in their efforts to stay current with a growing body of research data, applying the literature to real-life situations, making decisions when literature to inform decision making is absent, and educating students and professionals to become EBM practitioners. The ways in which the field of EBM has handled these challenges provide a number of lessons for the relatively young field of EBD. Answer: The differences in educating doctors in France and Canada are more easily explained by history and national values than by a robust base of evidence (PUBMED:16313579). In France, medical education is standardized and defined at a national level with almost no national system of assessment of medical schools or students. By contrast, Canada allows medical schools to design their own curricula, select students, and appoint teachers using their own criteria, with independent national organizations responsible for accreditation and certification processes to ensure homogeneity and quality of graduates. The founding values that partly explain these choices include equality and the right to free education in France, and equity, affirmative action, and market-driven tuition in Canada (PUBMED:16313579).
Furthermore, the teaching of evidence-based practice in allied healthcare training in France has been identified as needing reinforcement, suggesting that the specific skills necessary for its implementation are still insufficiently developed in initial training (PUBMED:23362577). This indicates that the differences in medical education between France and Canada may not be primarily driven by evidence-based educational practices but rather by historical and cultural factors unique to each country.
Instruction: Preplacement nerve testing for carpal tunnel syndrome: is it cost effective? Abstracts: abstract_id: PUBMED:15247811 Preplacement nerve testing for carpal tunnel syndrome: is it cost effective? Unlabelled: Is not hiring otherwise-qualified workers who have an abnormal post-offer preplacement (POPP) median nerve test a cost-effective strategy to reduce workers' compensation expenses related to carpal tunnel syndrome (CTS)? We performed a retrospective dynamic cohort study based on 2150 workers hired at a company between January 1996 and December 2001 and who underwent POPP median nerve testing. Workers were followed until they left the company or until follow-up ended in May 2003. Results: Thirty-five cases of work-related CTS occurred during follow-up, and 9.13 cases could have been avoided. However, if the company had not hired workers with abnormal POPP nerve test results, it would have suffered a net loss of $357,353. Conclusion: Not hiring workers with abnormal POPP nerve tests to reduce costs of work-related CTS is not a cost-effective strategy for employers. abstract_id: PUBMED:37722903 Median nerve entrapments in the forearm - a case report of rare anterior interosseous nerve syndrome. Nowadays, median nerve entrapment is a frequent issue. Many physicians are familiar with the most common median entrapment, which is the carpal tunnel syndrome (CTS). By contrast, less frequent entrapments, historically called "pronator syndrome" are still misdiagnosed as overuse syndrome, flexor tendinitis or other conditions. This article is meant to introduce proximal median nerve entrapments, followed by a case report of the rarest example - anterior interosseous nerve syndrome (AIN syndrome). abstract_id: PUBMED:8691185 Carpal tunnel syndrome: a case of median nerve injection injury and a safe and effective method for injecting the carpal tunnel. The carpal tunnel syndrome is a compressive neuropathy of the median nerve at the wrist. The local injection of corticosteroid is an effective treatment modality in properly selected cases; however, this usually efficacious and safe procedure may result in serious complications if insufficient attention is paid to technique. A recent case of severe median nerve injury secondary to local steroid injection at the wrist prompted this report. We present a safe and effective method for injection of the carpal tunnel syndrome. abstract_id: PUBMED:20705778 Estimation and pharmacodynamic consequences of the minimum effective anesthetic volumes for median and ulnar nerve blocks: a randomized, double-blind, controlled comparison between ultrasound and nerve stimulation guidance. Background: Nerve stimulation and ultrasound guidance are the most popular techniques for peripheral nerve blocks. However, the minimum effective anesthetic volume (MEAV) in selected nerves for both techniques and the consequences of decreasing the local anesthetic volume on the pharmacodynamic characteristics of nerve block remain unstudied. We designed a randomized, double-blind controlled comparison between neurostimulation and ultrasound guidance to estimate the MEAV of 1.5% mepivacaine and pharmacodynamics in median and ulnar nerve blocks. Methods: Patients scheduled for carpal tunnel release were randomized to ultrasound guidance (UG) or neurostimulation (NS) groups. A step-up/step-down study model (Dixon method) was used to determine the MEAV with nonprobability sequential dosing based on the outcome of the previous patient. 
The starting dose of 1.5% mepivacaine was 13 and 11 mL for median and ulnar nerves at the humeral canal. Block success/failure resulted in a decrease/increase of 2 mL. A blinded physician assessed sensory blockade at 2-minute intervals for 20 minutes. Block onset time and duration were noted. Results: The MEAV50 (SD) of the median nerve was lower in the UG group 2 (0.1) mL (95% confidence interval [CI] = 1.96 to 2.04) than in the NS group 4 (3.8) mL (95% CI = 2.4 to 5.6) (P = 0.017). There was no difference for the ulnar nerve between UG group 2 (0.1) mL (95% CI = 1.96 to 2.04) and NS group 2.4 (0.6) mL (95% CI = 2.1 to 2.7). The duration of sensory blockade was significantly correlated to local anesthetic volume, but onset time was not modified. Conclusion: Ultrasound guidance selectively provided a 50% reduction in the MEAV of mepivacaine 1.5% for median nerve sensory blockade in comparison with neurostimulation. Decreasing the local anesthetic volume can decrease sensory block duration but not onset time. abstract_id: PUBMED:34969010 Neurodynamics Is an Effective Intervention for Carpal Tunnel Syndrome. Clinical Scenario: Carpal tunnel syndrome (CTS) is the most common entrapment neuropathy of the upper-extremity. Due to the involvement of the median nerve, long-term compression of this nerve can lead to hand dysfunction and disability that can impact work and daily life. As such, early treatment is warranted to prevent any long-term damage to the median nerve. Conservative management is utilized in those with mild to moderate CTS. Neural mobilizations can aid in the reduction of neural edema, neural mobility, and neural adhesion while improving nerve conduction. Clinical Question: Is neurodynamics effective in reducing pain and reported symptoms in those with CTS? Summary of Key Findings: Four studies were included, with 2 studies utilizing passive neural mobilizations, one study using active techniques, and one study using active neural mobilizations with splinting. All studies showed large effect size for pain, symptom severity, and physical function. Clinical Bottom Line: Neurodynamics is an effective treatment for CTS. Splinting is only effective when combined with neurodynamics. Strength of Recommendation: Level B evidence to support the use of neurodynamics for the treatment of CTS. abstract_id: PUBMED:26206729 Shocking therapy: Brief electrical stimulation for delayed nerve repair. This commentary provides perspective on a recent paper published in Experimental Neurology by Elzinga et al. where the authors investigated the effect of brief electrical stimulation (ES) on nerve regeneration after delayed nerve repair in a rodent model. Their results from a well controlled series of experiments indicated that brief ES promoted axonal outgrowth after chronic axotomy as well as after chronic Schwann cell and muscle denervation. ES also increased chronically axotomized neurons regenerating into chronically denervated stumps, which represent a true delayed repair. The authors conclude that brief ES promotion of nerve regeneration after delayed nerve repair is as effective as after immediate repair. Given the prior experimental evidence, and the prior clinical data from patients with carpal tunnel syndrome and digital nerve repair, the implication of this new work is to consider a well designed clinical trial for use of brief ES in nerve graft and nerve transfer repairs. abstract_id: PUBMED:8953289 High median nerve entrapments.
An obscure cause of upper-extremity pain. Compression of the median nerve at the wrist is the most common nerve entrapment syndrome but may be over- or misdiagnosed. With high (proximal) median nerve entrapment being uncommon and having an elusive diagnosis, proximal compression may be overlooked as a cause of the painful upper extremity. Recognition and diagnosis of this problem will help ensure timely and effective management of the more common pain syndromes. abstract_id: PUBMED:38472962 The Unpredictable Ulnar Nerve-Ulnar Nerve Entrapment from Anatomical, Pathophysiological, and Biopsychosocial Aspects. Peripheral nerves consist of delicate structures, including a rich microvascular system, that protect and nourish axons and associated Schwann cells. Nerves are sensitive to internal and external trauma, such as compression and stretching. Ulnar nerve entrapment, the second most prevalent nerve entrapment disorder after carpal tunnel syndrome, appears frequently at the elbow. Although often idiopathic, known risk factors, including obesity, smoking, diabetes, and vibration exposure, occur. It exists in all adult ages (mean age 40-50 years), but seldom affects individuals in their adolescence or younger. The patient population is heterogeneous with great co-morbidity, including other nerve entrapment disorders. Typical early symptoms are paresthesia and numbness in the ulnar fingers, followed by decreased sensory function and muscle weakness. Pre- and postoperative neuropathic pain is relatively common, independent of other symptom severity, with a risk for serious consequences. A multimodal treatment strategy is necessary. Mild to moderate symptoms are usually treated conservatively, while surgery is an option when conservative treatment fails or in severe cases. The decision to perform surgery might be difficult, and the outcome is unpredictable with the risk of complications. There is no consensus on the choice of surgical method, but simple decompression is relatively effective with a lower complication rate than transposition. abstract_id: PUBMED:15859485 Nerve entrapments of the wrist: early treatment preserves function. Nerve entrapment syndromes may affect as many as one in four office workers. Prompt diagnosis is critical to the selection of an appropriate and effective treatment. abstract_id: PUBMED:9135923 Nerve entrapment syndromes. Peripheral nerve entrapment syndromes may be caused by inflammation, trauma, or congenital deficits. Some entrapment neuropathies are overdiagnosed, particularly carpal tunnel syndrome. Other nerve entrapment syndromes, such as the thoracic outlet syndrome, posterior interosseous nerve syndrome, and anterior interosseous nerve syndrome, are rare. In most cases of an entrapment neuropathy, the diagnosis and management are straightforward, but much remains to be learned about the natural history, course, and prevention of these clinical conditions. Answer: Preplacement nerve testing for carpal tunnel syndrome (CTS) is not a cost-effective strategy for employers. A retrospective dynamic cohort study based on 2150 workers who underwent post-offer preplacement (POPP) median nerve testing showed that while 9.13 cases of work-related CTS could have been avoided by not hiring workers with abnormal POPP nerve test results, the company would have suffered a net loss of $357,353. Therefore, the conclusion is that not hiring workers with abnormal POPP nerve tests to reduce costs of work-related CTS is not cost-effective for employers (PUBMED:15247811).
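To make the cost-effectiveness logic of PUBMED:15247811 explicit, the employer's net saving from a screening-based hiring policy can be written in the general form below; the symbols and the cost decomposition are illustrative assumptions rather than the study's published model, since the abstract reports only the number of avoidable cases (9.13) and the net result (a loss of $357,353):

\[ \Delta = N_{\text{avoided}} \cdot C_{\text{claim}} \;-\; \left( C_{\text{testing}} + C_{\text{not hiring}} \right) \]

A screening policy is cost-effective only when \( \Delta > 0 \); in this cohort the estimated net figure was negative, which is why refusing to hire applicants with abnormal POPP results was judged not cost-effective.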
Instruction: Patient retention and replacement trends after saline breast implants: are deflations inflationary? Abstracts: abstract_id: PUBMED:18752020 Patient retention and replacement trends after saline breast implants: are deflations inflationary? Background: This study aimed to examine serial operative trends with patients who have experienced surgical implant deflation. In addition, the economic impact of deflation on practice caseload was analyzed. Methods: A retrospective review was conducted to examine patients who experienced deflation from 2000 to 2007. Patient demographics, implant data, and the presence of secondary (performed at explantation) or tertiary (performed later) procedures were examined. Financial information was tabulated to determine the economic multiplier effect (i.e. the expected value of revenue from secondary and tertiary procedures divided by explantation cost) of taking on deflation cases in a practice. Results: For this study, 285 patients with an average age of 38.4 years were identified. The average time to explantation was 50 months. Slightly more than half of the patients (55%) had both implants replaced at the time of explantation, whereas 59% switched to silicone implants and 41% continued with saline implants. A larger implant was chosen by 54% of the patients (average increase, 82 ml), whereas 18% underwent secondary procedures at the time of explantation including mastopexy (n = 22), facial rejuvenation (n = 8), liposuction (n = 7), or a combination of the two (n = 8). Tertiary procedures were performed for 31% of the patients after their explantation/reimplantation (average time frame, 13 months). The tertiary procedures included replacement with silicone (33.7%), liposuction (24.7%), abdominoplasty (11.2%), facial rejuvenation (13.5%), or nonsurgical rejuvenation using Botox, Restylane, or laser procedure (33.7%). Economic multiplier analysis showed that the financial impact of revenue derived from implant deflation on downstream practice revenue is 1.02. Conclusion: At the time of explantation, replacement with silicone after saline deflation is common (59% of patients). In this study, patients who chose replacement with saline had a significant tendency to replace with silicone (33%) as a tertiary procedure. Saline deflation represents a substantial opportunity for practice development. In particular, it has a positive impact on patient retention for additional aesthetic surgical or nonsurgical procedures. Economic multiplier analysis can be used to quantify the financial impact of saline deflation. abstract_id: PUBMED:19653062 Saline replacement after local resection of the breast: a simple technique with good control and improved patient satisfaction. Background: Here we report a study on the use of saline replacement after local resection of the breast. Local resection of breast tissue is widely used in the treatment of breast cancer due to the development of imaging analysis technology such as mammography, ultrasound sonography, computed tomography, and magnetic resonance. Preserving the size and shape of the breast after treatment is beneficial for the patient, and deformity of the residual breast can be a serious problem. Methods: Following resection, the subcutaneous tissue was sutured with 4-0 polydioaxanone, and a 20-G syringe was inserted through the skin into the hole. After suturing the skin with 4-0 nylon, an adequate volume of saline was injected. 
A conservative treatment group did not receive saline injections and was used as controls. We sent self-administered questionnaires to 60 patients who had undergone local resection of the breast and received 55 responses (92%). Results: Saline replacement was performed in 28 of the 55 patients, and conservative treatment (no saline replacement) was performed in 27 patients. The average volume of injected saline was 46 ml (range, 5-150 ml). There were no statistically significant differences in skin irritation, pain, fatigue, or daily activity between the two groups, but the saline replacement group was more satisfied with the surgical scar (p < 0.05) and shape of the breast (p = 0.05) after surgery. Conclusions: Saline replacement after local resection of the breast is a simple technique that provides benefits for patients undergoing conservative surgery of the breast. abstract_id: PUBMED:33936908 Breast Cancer in the Presence of Failed Saline Breast Implants. Breast augmentation has been gaining popularity over the last two decades to correct congenital breast asymmetry or increase breast size and projection. Augmentation options started with saline implants, then silicone implants, and, recently, autologous fat transfer. Unfortunately, breast implants are not without complications, some of which are common, like capsular contracture, implant failure and infection. Others are quite rare, such as Breast Implant-Associated Anaplastic Large Cell Lymphoma (BIA-ALCL). Most of these complications will eventually require explantation in most cases, as the patients' and implants' age and risk of complications increase. We present a 79-year-old patient who presented to our breast unit with a left breast lump with 50-year-old saline implants. A triple assessment revealed incidental right breast cancer treated with radiofrequency identification (RFID) tag-guided wide local excision, sentinel lymph node biopsy and bilateral explantation. abstract_id: PUBMED:36000008 Restoring Breast Volume in High BMI Patients: A Single-Center Review of Breast Reconstruction Using Hyperinflated Saline Implants. Background: Breast reconstruction in the obese patient is often fraught with poor patient satisfaction due to inadequate volume restoration. The off-label hyperinflation of saline implants is a direct yet controversial solution to this problem, with limited studies in the literature. This study sought to determine the safety and efficacy of this technique for breast reconstruction. Methods: A retrospective chart review was performed to identify all patients with a body mass index (BMI) greater than or equal to 30 kg/m2 who underwent breast reconstruction between the years 2013 to 2020 with saline implants filled beyond the manufacturer's maximum recommended volume. Results: The 21 patients identified had an average age of 49 years. The mean BMI was 39.5 kg/m2. A total of 42 implants were placed; 34 were 800 mL, 4 were 750 mL, and 4 were 700 mL. The average overfill volume was 302 mL (138%). Mean follow-up was 65.0 months. Of these, 1 (4.8%) patient with a history of chest wall radiotherapy underwent reoperation for unilateral implant exposure 27 days after the index procedure, no patient sustained spontaneous leak or rupture, and 1 patient had unilateral deflation following emergent central line and pacemaker placement 2 years after the implant was placed for an unrelated cardiovascular event. 
Conclusions: Hyperinflation of saline implants beyond the maximum recommended volume may be considered for volume replacement in obese patients undergoing implant-based breast reconstruction. This practice is well tolerated, has a complication rate comparable to using implants filled to the recommended volume, and has the potential to restore lost breast volume in the obese patient post mastectomy. abstract_id: PUBMED:30637599 Trends in mammography, hormone replacement therapy, and breast cancer incidence and mortality in Canadian women. Purpose: The purpose of the study is to examine relationships between long-term trends of region- and age-specific rates of mammography, hormone replacement therapy (HRT), and breast cancer incidence and mortality in Canadian women aged 35 years and older. Methods: Population-based complex surveys were used to estimate mammography use in the past 2 years and ever, and HRT use in the past month. National population-based administrative data were used to estimate breast cancer incidence and mortality. Joinpoint analyses were used to estimate trends in rates and years where trend changed. Results: No consistent relationship between mammography use and breast cancer incidence was observed across age groups. Opportunistic screening occurred prior to the establishment of organized screening programs in Canada and prior to substantial declines in breast cancer mortality observed around 1990. Women aged 35-39 years demonstrated a 62.8% relative decrease in breast cancer mortality between 1950 and 2015 despite lower rates of mammography use in the past 2 years (range 9.4-15.9%) reinforcing important treatment advances. A substantial proportion of women in their 40s report mammography use in the past 2 years (range 35.8-42.2%) and regional variation exists reflecting inconsistencies in guidelines across Canada. Conclusion: Rates of mammography use over time do not necessarily reflect national guideline releases or establishment of organized screening programs. abstract_id: PUBMED:22942113 Two-year outcomes with a novel, double-lumen, saline-filled breast implant. Background: A double-lumen, saline-filled breast implant with a baffle structure (IDEAL IMPLANT Saline-Filled Breast Implant; Ideal Implant Incorporated, Irving, Texas) was developed to overcome the limitations of single-lumen saline implants by controlling saline movement and providing internal support to the implant edge and upper pole. Objectives: The authors report 2-year data from a 10-year US clinical trial evaluating the safety and effectiveness of this investigational implant. Methods: Women seeking primary breast augmentation or replacement of existing augmentation implants were enrolled between February 2009 and February 2010 at 35 private practice sites, where the women underwent surgery to receive the new technology implant. Data collection included incidence and grade of capsular contracture (CC) and wrinkling as well as patient- and surgeon-reported satisfaction measures. All clinical data were reported as Kaplan-Meier risk rates of first occurrence, per patient, in each cohort. Results: Two-year follow-up visits were completed by 472 of 502 enrolled women (94.0%), 378 of whom had undergone primary breast augmentation and 94 of whom had received replacement augmentation. Patient-reported satisfaction with the outcome was 94.3% for primary augmentations and 92.3% for replacement augmentations; surgeon-reported satisfaction was also high (96.5% and 93.4%, respectively). 
Baker Grade III and IV CC rates were 3.8% (primary) and 8.2% (replacement), whereas moderate-to-severe wrinkling was 3.8% (primary) and 12.0% (replacement). Deflations occurred in 4.8% of primary augmentations and 3.3% of replacement augmentations. No deflations were caused by a shell fold flaw. Conclusions: Two-year data from 472 women indicate that this double-lumen saline implant containing a baffle structure has a low rate of wrinkling and a lower rate of CC at 2 years than was reported for current single-lumen saline implants at 1 year. abstract_id: PUBMED:10355020 Breast cancer and hormone replacement therapy There are more than 50 studies and at least 6 metaanalyses with varying results concerning breast cancer risk related to hormone replacement therapy. According to the literature the breast cancer risk in hormone replacement therapy may be slightly increased, especially after long-term treatment. After discontinuing hormone therapy the risk decreases and reaches baseline 5 years later. Since prospective randomised studies are lacking, it is still unclear whether hormone replacement therapy really promotes breast cancer or if the increase is due to methodological biases such as more frequent breast examination or mammography. According to the literature there is no evidence that under hormone replacement therapy the risk of relapse is elevated or the prognosis less favourable in women treated for breast cancer. Since prospective randomised studies with long-term follow-up are lacking, we recommend the use of the guidelines of the "Deutsche Gesellschaft für Senologie", based on oestrogen receptor levels and lymph node involvement. Risks and benefits of hormone replacement therapy should be discussed with the well-informed patient and alternative therapy modalities evaluated. abstract_id: PUBMED:30489499 Cost-Effectiveness Analysis of Silicone versus Saline Implant-Based Breast Reconstruction Using the BREAST-Q. Background: The most common type of breast reconstruction is implant-based breast reconstruction. Implant-based reconstruction has been reported to impact quality-of-life outcomes. Therefore, the authors sought to evaluate the cost-effectiveness of saline versus silicone implants. Methods: The authors retrospectively reviewed data from patients who underwent breast reconstruction with saline or silicone implants at their institution. This included type of procedure, acellular dermal matrix use, complications, and number of revisions. Costs were estimated using the Centers for Medicare and Medicaid Services physician fee schedule and hospital costs. Effectiveness was measured using BREAST-Q-adjusted life-years, a measure of years of perfect breast health, based on BREAST-Q data collected before mastectomy and reconstruction and at 12 months after final reconstruction. The incremental cost-effectiveness ratio was obtained for silicone and saline reconstruction. Results: The authors identified 134 women, among which 77 (57 percent) underwent silicone and 57 (43 percent) underwent saline breast reconstruction. The cost of saline reconstruction was $1288.23 less compared with silicone. BREAST-Q-adjusted life-years were 28.11 for saline and 23.57 for silicone, demonstrating higher cost-effectiveness for saline. The incremental cost-effectiveness ratio for saline was -$283.48, or $283.48 less per year of perfect breast-related health postreconstruction than silicone. 
Conclusions: The authors' results indicate that saline breast reconstruction may be more cost-effective compared with silicone at 12 months after final reconstruction. Silicone was both more expensive and less effective than saline. However, given the relatively small cost difference, surgeon and patient preference may be important in determining type of implant used. abstract_id: PUBMED:32212335 Forty-year trends in menopausal hormone therapy use and breast cancer incidence among postmenopausal black and white women. After reports from the Women's Health Initiative randomized trial evaluating estrogen plus progestin, there was a sudden, substantial, and sustained decrease in all categories of menopausal hormone therapy, and the first reduction in age-adjusted breast cancer incidence in more than 20 years was seen in 2003-2004 among US women 50 years of age or older. Subsequent trends in breast cancer incidence have been described, but most reports have not focused on the postmenopausal age group or fully engaged the potential influence of reduced hormone therapy on breast cancer incidence trends by race/ethnicity. To address this gap, this commentary examines trends for annual age-adjusted breast cancer incidence over a 40-year period from 1975 to 2015 for white and black women on the basis of findings from the Surveillance, Epidemiology, and End Results 9 registries. Overall, the sharp decline in breast cancer incidence seen in 2003-2004 was followed in the subsequent decade by a continued low breast cancer incidence plateau in white women that has largely persisted. In contrast, a new discordance between breast cancer incidence trends in black and white women has emerged. In the 2005-2015 decade, a sustained increase in breast cancer incidence in black women has resulted in annual incidence rates comparable, for the first time, to those in white women. This commentary explores the hypothesis that the over-decade-long and discordant changes in breast cancer incidence rates in postmenopausal black and white women are, to a large extent, associated with changes in hormone therapy use in these 2 groups. abstract_id: PUBMED:21136577 Patient satisfaction with postmastectomy breast reconstruction: a comparison of saline and silicone implants. Background: At a time when the safety and effectiveness of breast implants remains under close scrutiny, it is important to provide reliable and valid evidence regarding patient outcomes. In the setting of postmastectomy reconstruction, patient satisfaction and quality of life may be the most significant outcome variables when evaluating surgical success. The objective of the current study was to identify predictors of patient satisfaction with breast appearance, including implant type, in a large sample of women who underwent breast reconstruction surgery using implants. Methods: A multicenter, cross-sectional study design was used. A total of 672 women who had completed postmastectomy, implant-based reconstruction at 1 of 3 centers in North America were asked to complete the BREAST-Q (Reconstruction Module). Multivariate linear regression modeling was performed. Results: Completed questionnaire data were available for 482 of the 672 patients. In 176 women, silicone implants were placed and in 306, saline implants were used. The multivariate model confirmed that patients' satisfaction with their breasts was significantly higher in patients with silicone implants (P = .016). 
The receipt of postmastectomy radiotherapy was found to have a significant, negative effect on breast satisfaction (P<.000) in both silicone and saline implant recipients. In addition, for women who received either silicone or saline implants, satisfaction diminished over time (P = .017). Conclusions: In the setting of postmastectomy reconstruction, patients who received silicone breast implants reported significantly higher satisfaction with the results of reconstruction than those who received saline implants. This information can be used to optimize shared medical decision-making by providing patients with realistic postoperative expectations. Answer: The study on patient retention and replacement trends after saline breast implant deflation found that deflation cases can have a positive impact on practice development, with a significant number of patients opting for replacement with silicone implants after experiencing saline implant deflation. Specifically, 59% of patients in the study switched to silicone implants, and 33% of those who initially chose to replace with saline later opted for silicone as a tertiary procedure. The average time to explantation was 50 months, and at the time of explantation, 55% of patients had both implants replaced. Additionally, 54% of patients chose a larger implant size, with an average increase of 82 ml. Secondary procedures at the time of explantation were performed for 18% of patients, and tertiary procedures were performed for 31% of patients after their explantation/reimplantation, with an average time frame of 13 months. The economic multiplier analysis showed a financial impact of revenue derived from implant deflation on downstream practice revenue of 1.02, indicating that saline deflation represents a substantial opportunity for practice development and patient retention for additional aesthetic surgical or nonsurgical procedures (PUBMED:18752020).
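Two of the economic quantities cited above can be restated compactly. The economic multiplier defined in PUBMED:18752020 is the expected revenue from secondary and tertiary procedures divided by the cost of explantation,

\[ M = \frac{E[\text{revenue from secondary and tertiary procedures}]}{\text{cost of explantation}} = 1.02, \]

so downstream revenue roughly offsets the cost of taking on a deflation case. Similarly, the incremental cost-effectiveness ratio in PUBMED:30489499 follows from the reported figures (the arithmetic below is a restatement of those figures, not additional data):

\[ \text{ICER} = \frac{\Delta\text{Cost}}{\Delta\text{Effectiveness}} = \frac{-\$1288.23}{28.11 - 23.57} \approx -\$284 \]

per BREAST-Q-adjusted life-year, in line with the reported value of -$283.48 once rounding of the published inputs is taken into account.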
Instruction: Does medical specialty influence the treatment of asymptomatic carotid stenosis? Abstracts: abstract_id: PUBMED:21460764 Does medical specialty influence the treatment of asymptomatic carotid stenosis? a Belgian multidisciplinary survey. Aim: The aim of this study was to supplement the few data that exist regarding the potential effect of the referring medical specialty on the proposed treatment for asymptomatic carotid stenosis. Methods: In a web survey, we presented Belgian cardiologists, neurologists and vascular surgeons with two fairly uncomplicated case vignettes on asymptomatic carotid stenosis differing only in the degree of stenosis (70-80% in case 1 and >80% in case 2). Results: In both cases the suggested therapies were different per medical specialty (P<0.000002 and P<0.00002, respectively). Cardiologists were more conservative and vascular surgeons were more aggressive. Preferred therapies for both cases differed statistically significantly (odds ratio 8.63; 95% confidence interval 5.11-14.58). Suggesting a different therapy or not for case 1 and case 2 was also different per medical specialty (P<0.035). Cardiologists were most inclined to suggest a different therapy and vascular surgeons the least. Nobody switched to a more conservative treatment. Younger physicians suggested a more conservative approach (P<0.014). Conclusion: Different medical specialties prefer different treatments for asymptomatic carotid stenosis. Also, younger physicians seem more conservative. We elaborate on the different reasons that could explain these findings. abstract_id: PUBMED:25389280 Carotid endarterectomy may be required in addition to best medical treatment for some patient subgroups with asymptomatic carotid stenosis. Several guidelines recommend carotid endarterectomy for patients with severe asymptomatic carotid stenosis to reduce the risk of a future cerebrovascular event, as long as the perioperative stroke/death rate is <3%. Based on improvements in best medical treatment, it was argued that currently best medical treatment alone should comprise the treatment-of-choice for asymptomatic carotid stenosis patients and that no intervention is warranted in these individuals. While it is true that best medical treatment should be used for the management of all asymptomatic carotid stenosis patients, emerging evidence suggests that best medical treatment alone may not prevent disease progression and the development of symptoms in some asymptomatic carotid stenosis patient subgroups. This article analyzes the results of two recent independent studies demonstrating that medical therapy alone may not be adequate for stroke prevention in some asymptomatic carotid stenosis patient subgroups. These results suggest that besides best medical treatment, additional carotid endarterectomy should be considered for specific asymptomatic carotid stenosis patients. abstract_id: PUBMED:26721504 Definition of Best Medical Treatment in Asymptomatic and Symptomatic Carotid Artery Stenosis. Implementation of best medical treatment (BMT) is the cornerstone of the management of patients with either asymptomatic or symptomatic carotid artery stenosis. We review the literature to define the components of BMT. Smoking cessation, maintaining a healthy body weight, moderate exercise, and a Mediterranean diet are essential lifestyle measures. Moderate alcohol consumption may also be beneficial but recommending it to patients may be hazardous if they consume too much. 
The importance of lifestyle measures is largely underestimated by both physicians and patients. Blood pressure and diabetes control, antiplatelet agents, and lipid-lowering treatment with statins/ezetimibe comprise the pharmacological components of BMT. Initiation of an intensive regimen of BMT is a sine qua non for patients with carotid artery stenosis whether or not they are offered or undergo an invasive revascularization procedure. abstract_id: PUBMED:30873912 Angioplasty in asymptomatic carotid artery stenosis vs. endarterectomy compared to best medical treatment: One-year interim results of SPACE-2. Background: Treatment of individuals with asymptomatic carotid artery stenosis is still handled controversially. Recommendations for treatment of asymptomatic carotid stenosis with carotid endarterectomy (CEA) are based on trials having recruited patients more than 15 years ago. Registry data indicate that advances in best medical treatment (BMT) may lead to a markedly decreasing risk of stroke in asymptomatic carotid stenosis. The aim of the SPACE-2 trial (ISRCTN78592017) was to compare the stroke preventive effects of BMT alone with that of BMT in combination with CEA or carotid artery stenting (CAS), respectively, in patients with asymptomatic carotid artery stenosis of ≥70% European Carotid Surgery Trial (ECST) criteria. Methods: SPACE-2 is a randomized, controlled, multicenter, open study. A major secondary endpoint was the cumulative rate of any stroke (ischemic or hemorrhagic) or death from any cause within 30 days plus an ipsilateral ischemic stroke within one year of follow-up. Safety was assessed as the rate of any stroke and death from any cause within 30 days after CEA or CAS. Protocol changes had to be implemented. The results on the one-year period after treatment are reported. Findings: It was planned to enroll 3550 patients. Due to low recruitment, the enrollment of patients was stopped prematurely after randomization of 513 patients in 36 centers to CEA (n = 203), CAS (n = 197), or BMT (n = 113). The one-year rate of the major secondary endpoint did not significantly differ between groups (CEA 2.5%, CAS 3.0%, BMT 0.9%; p = 0.530) as well as rates of any stroke (CEA 3.9%, CAS 4.1%, BMT 0.9%; p = 0.256) and all-cause mortality (CEA 2.5%, CAS 1.0%, BMT 3.5%; p = 0.304). About half of all strokes occurred in the peri-interventional period. Higher albeit statistically non-significant rates of restenosis occurred in the stenting group (CEA 2.0% vs. CAS 5.6%; p = 0.068) without evidence of increased stroke rates. Interpretation: The low sample size of this prematurely stopped trial of 513 patients implies that its power is not sufficient to show that CEA or CAS is superior to a modern medical therapy (BMT) in the primary prevention of ischemic stroke in patients with an asymptomatic carotid stenosis up to one year after treatment. Also, no evidence for differences in safety between CAS and CEA during the first year after treatment could be derived. Follow-up will be performed up to five years. Data may be used for pooled analysis with ongoing trials. abstract_id: PUBMED:26711272 Management of Patients with an Asymptomatic Carotid Stenosis--Medical Management, Endovascular Treatment, or Carotid Endarterectomy? Patients with asymptomatic carotid stenosis all warrant intensive medical therapy; they are at a higher risk of myocardial infarction than of stroke. With modern intensive medical therapy, the annual risk of ipsilateral stroke is now ∼ 0.5%.
Justifying carotid intervention on the basis of the results of historic trials with a medical arm, extrapolated to modern trials with lower interventional risks but no medical arm is not legitimate. Most patients (∼ 90%) with asymptomatic carotid stenosis would be better served by intensive medical therapy than by either stenting or endarterectomy. The few (∼ 10%) who could benefit from intervention can be identified by microemboli on transcranial Doppler and other methods in development. Routine stenting of asymptomatic carotid stenosis, because it is more likely to harm than help the patient, may be considered unethical. abstract_id: PUBMED:31757218 Can risk modelling improve treatment decisions in asymptomatic carotid stenosis? Background: Carotid endarterectomy (CEA) is routinely performed for asymptomatic carotid stenosis, yet its average net benefit is small. Risk stratification may identify high risk patients that would clearly benefit from treatment. Methods: Retrospective cohort study using data from the Asymptomatic Carotid Atherosclerosis Study (ACAS). Risk factors for poor outcomes were included in backward and forward selection procedures to develop baseline risk models estimating the risk of non-perioperative ipsilateral stroke/TIA. Baseline risk was estimated for all ACAS participants and externally validated using data from the Atherosclerosis Risk in Communities (ARIC) study. Baseline risk was then included in a treatment risk model that explored the interaction of baseline risk and treatment status (CEA vs. medical management) on the patient-centered outcome of any stroke or death, including peri-operative events. Results: Three baseline risk factors (BMI, creatinine and degree of contralateral stenosis) were selected into our baseline risk model (c-statistic 0.59 [95% CI 0.54-0.65]). The model stratified absolute risk between the lowest and highest risk quintiles (5.1% vs. 12.5%). External validation in ARIC found similar predictiveness (c-statistic 0.58 [0.49-0.67]), but poor calibration across the risk spectrum. In the treatment risk model, CEA was superior to medical management across the spectrum of baseline risk and the magnitude of the treatment effect varied widely between the lowest and highest absolute risk quintiles (3.2% vs. 10.7%). Conclusion: Even modestly predictive risk stratification tools have the potential to meaningfully influence clinical decision making in asymptomatic carotid disease. However, our ACAS model requires target population recalibration prior to clinical application. abstract_id: PUBMED:30251204 Asymptomatic Carotid Stenosis: Intervention or Best Medical Therapy? Purpose Of Review: Provide a current overview regarding the optimal strategy for managing patients with asymptomatic carotid artery stenosis. Recent Findings: Carotid endarterectomy (CEA) and carotid artery stenting (CAS) reduce long-term stroke risk in asymptomatic patients. However, CAS is associated with a higher risk of peri-procedural stroke. Improvements in best medical therapy (BMT) have renewed uncertainty regarding the extent to which results from older randomised controlled trials (RCTs) comparing outcomes following carotid intervention can be generalised to modern medical practise. 'Average surgical risk' patients with an asymptomatic carotid artery stenosis of 60-99% and increased risk of late stroke should be considered for either CEA or CAS. In patients deemed 'high risk' for surgery, CAS is indicated. 
Use of an anti-platelet, anti-hypertensive and statin, with strict glycaemic control, is recommended. Results from ongoing large, multicentre RCTs comparing CEA, CAS and BMT will provide clarity regarding the optimal management of patients with asymptomatic carotid artery stenosis. abstract_id: PUBMED:28462683 Carotid revascularization and medical management for asymptomatic carotid stenosis: Protocol of the CREST-2 clinical trials. Rationale Trials conducted decades ago demonstrated that carotid endarterectomy by skilled surgeons reduced stroke risk in asymptomatic patients. Developments in carotid stenting and improvements in medical prevention of stroke caused by atherothrombotic disease challenge understanding of the benefits of revascularization. Aim Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trial (CREST-2) will test whether carotid endarterectomy or carotid stenting plus contemporary intensive medical therapy is superior to intensive medical therapy alone in the primary prevention of stroke in patients with high-grade asymptomatic carotid stenosis. Methods and design CREST-2 is two multicenter randomized trials of revascularization plus intensive medical therapy versus intensive medical therapy alone. One trial randomizes patients to carotid endarterectomy plus intensive medical therapy versus intensive medical therapy alone; the other, to carotid stenting plus intensive medical therapy versus intensive medical therapy alone. The risk factor targets of centrally directed intensive medical therapy are LDL cholesterol <70 mg/dl and systolic blood pressure <140 mmHg. Study outcomes The primary outcome is the composite of stroke and death within 44 days following randomization and stroke ipsilateral to the target vessel thereafter, up to four years. Change in cognition and differences in major and minor stroke are secondary outcomes. Sample size Enrollment of 1240 patients in each trial provides 85% power to detect a treatment difference if the event rate in the intensive medical therapy alone arm is 4.8% higher or 2.8% lower than an anticipated 3.6% rate in the revascularization arm. Discussion Management of asymptomatic carotid stenosis requires contemporary randomized trials to address whether carotid endarterectomy or carotid stenting plus intensive medical therapy is superior in preventing stroke beyond intensive medical therapy alone. Whether carotid endarterectomy or carotid stenting has favorable effects on cognition will also be tested. Trial registration United States National Institutes of Health Clinicaltrials.gov NCT02089217. abstract_id: PUBMED:23192761 The effect of medical treatments on stroke risk in asymptomatic carotid stenosis. Background And Purpose: Recent evidence suggests current best medical treatment may be sufficient to prevent stroke in patients with asymptomatic carotid stenosis. If this is the case, then it is important to determine risk reduction provided by treatments. Using Asymptomatic Carotid Emboli Study (ACES) prospective data, the effect of current treatment and risk factors on future stroke and transient ischemic attack risk were determined. Methods: Four-hundred seventy-seven patients with asymptomatic carotid stenosis were followed-up every 6 months for 2 years. Changes in risk factors and stroke prevention therapies were reviewed at each visit. 
Using time-dependent Cox regression, the relationship between current treatment over time was determined and presented as hazard ratios and 95% confidence intervals for risk of stroke, transient ischemic attack, and cardiovascular death end points. Results: On multivariate analysis, antiplatelets (P=0.001) and lower mean blood pressure (P=0.002) were independent predictors of reduced risk of ipsilateral stroke and transient ischemic attack. Antiplatelets (P<0.0001) and antihypertensives (P<0.0001) were independent predictors of a lower risk of any stroke or cardiovascular death. Conclusions: Antiplatelet therapy and blood pressure control are the most important factors in reducing short-term stroke and cardiovascular risk in patients with asymptomatic carotid stenosis. More prospective data are required for medical treatments in asymptomatic carotid stenosis, in particular for current statin usage. abstract_id: PUBMED:31820138 Management of Asymptomatic Carotid Artery Stenosis. Purpose Of Review: The goal of this paper is to provide the reader with a review of the evidence supporting the surgical and medical management of patients with asymptomatic internal carotid artery (ICA) stenosis. Recent Findings: Based on the results of earlier clinical trials, surgical intervention with carotid endarterectomy (CEA) has long been the preferred method of management for patients with asymptomatic severe carotid stenosis. Carotid artery stenting (CAS) is another less invasive surgical option that has similar outcomes over the long-term. However, more recent improvements in medical management have reduced the risk of stroke in this population to comparable rates seen with CEA. As a result, medical management alone is advocated as well for patients with asymptomatic carotid stenosis. In addition to stenosis severity, there are a number of features of plaque morphology associated with vulnerable plaque that predict future stroke risk. Rates of stroke in patients with asymptomatic severe carotid stenosis with modern surgical techniques, CEA and CAS, are similar to modern medical therapy alone. Both surgery and medical therapy are good treatment options but it is not known which treatment is superior. The Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trial (CREST-2) is an NIH-sponsored, multicenter, randomized trial that aims to address this important management decision. Answer: Yes, medical specialty does influence the treatment of asymptomatic carotid stenosis. A Belgian multidisciplinary survey revealed that the suggested therapies for asymptomatic carotid stenosis differed significantly among cardiologists, neurologists, and vascular surgeons. Cardiologists tended to be more conservative in their treatment approach, while vascular surgeons were more inclined towards aggressive treatments. The survey also found that younger physicians suggested a more conservative approach to treatment (PUBMED:21460764).
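The risk-modelling abstract above (PUBMED:31757218) summarises model discrimination with a c-statistic of 0.59; for a binary outcome, the c-statistic is simply the area under the ROC curve of the predicted risks. The following sketch is illustrative only (the toy data and variable names are assumptions, not the ACAS or ARIC datasets) and shows how such a baseline risk model and its c-statistic might be computed with scikit-learn.

```python
# Illustrative sketch: baseline risk model and c-statistic (AUC) on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(27, 4, n),     # BMI (toy values)
    rng.normal(1.0, 0.3, n),  # creatinine (toy values)
    rng.uniform(0, 99, n),    # degree of contralateral stenosis, % (toy values)
])
# Toy outcome weakly related to the covariates, mimicking modest discrimination.
logit = -3.0 + 0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]   # estimated baseline risk per patient
print(f"c-statistic: {roc_auc_score(y, risk):.2f}")  # AUC equals the c-statistic here
```

A value close to 0.5 indicates little discrimination, which is why the abstract stresses recalibration of the model before any clinical use.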
Instruction: Are there pollination syndromes in the Australian epacrids (Ericaceae: Styphelioideae)? Abstracts: abstract_id: PUBMED:23681546 Are there pollination syndromes in the Australian epacrids (Ericaceae: Styphelioideae)? A novel statistical method to identify key floral traits per syndrome. Background And Aims: Convergent floral traits hypothesized as attracting particular pollinators are known as pollination syndromes. Floral diversity suggests that the Australian epacrid flora may be adapted to pollinator type. Currently there are empirical data on the pollination systems for 87 species (approx. 15 % of Australian epacrids). This provides an opportunity to test for pollination syndromes and their important morphological traits in an iconic element of the Australian flora. Methods: Data on epacrid-pollinator relationships were obtained from published literature and field observation. A multivariate approach was used to test whether epacrid floral attributes related to pollinator profiles. Statistical classification was then used to rank floral attributes according to their predictive value. Data sets excluding mixed pollination systems were used to test the predictive power of statistical classification to identify pollination models. Key Results: Floral attributes are correlated with bird, fly and bee pollination. Using floral attributes identified as correlating with pollinator type, bird pollination is classified with 86 % accuracy, red flowers being the most important predictor. Fly and bee pollination are classified with 78 and 69 % accuracy, but have a lack of individually important floral predictors. Excluding mixed pollination systems improved the accuracy of the prediction of both bee and fly pollination systems. Conclusions: Although most epacrids have generalized pollination systems, a correlation between bird pollination and red, long-tubed epacrids is found. Statistical classification highlights the relative importance of each floral attribute in relation to pollinator type and proves useful in classifying epacrids to bird, fly and bee pollination systems. abstract_id: PUBMED:24602227 Evolutionary consequences of shifts to bird-pollination in the Australian pea-flowered legumes (Mirbelieae and Bossiaeeae). Background: Interactions with pollinators are proposed to be one of the major drivers of diversity in angiosperms. Specialised interactions with pollinators can lead to specialised floral traits, which collectively are known as a pollination syndrome. While it is thought that specialisation to a pollinator can lead to either an increase in diversity or in some cases a dead end, it is not well understood how transitions among specialised pollinators contribute to changes in diversity. Here, we use evolutionary trait reconstruction of bee-pollination and bird-pollination syndromes in Australian egg-and-bacon peas (Mirbelieae and Bossiaeeae) to test whether transitions between pollination syndromes is correlated with changes in species diversity. We also test for directionality in transitions that might be caused by selection by pollinators or by an evolutionary ratchet in which reversals to the original pollination syndrome are not possible. Results: Trait reconstructions of Australian egg-and-bacon peas suggest that bee-pollination syndrome is the ancestral form and that there has been replicated evolution of bird-pollination syndromes. Reconstructions indicate potential reversals from bird- to bee-pollination syndromes but this is not consistent with morphology. 
Species diversity of bird-pollination syndrome clades is lower than that of their bee-pollination syndrome sisters. We estimated the earliest transitions from bee- to bird-pollination syndrome occurred between 30.8 Ma and 10.4 Ma. Geographical structuring of pollination syndromes was found; there were fewer bird-pollination species in the Australian southeast temperate region compared to other regions of Australia. Conclusions: A consistent decrease in diversification rate coincident with switches to bird pollination might be explained if greater dispersal by bird pollinators results in higher levels of connectivity among populations and reduced chances of allopatric speciation. The earliest transitions overlap with the early diversification of Australian honeyeaters - the major lineage of pollinating birds in Australia. Our findings are consistent with the idea that environment and availability of pollinators are important in the evolution of pollination syndromes. Changes in flower traits as a result of transitions to bird-pollination syndrome might also limit reversals to a bee-pollination syndrome. abstract_id: PUBMED:24393294 A quantitative review of pollination syndromes: do floral traits predict effective pollinators? The idea of pollination syndromes has been largely discussed but no formal quantitative evaluation has yet been conducted across angiosperms. We present the first systematic review of pollination syndromes that quantitatively tests whether the most effective pollinators for a species can be inferred from suites of floral traits for 417 plant species. Our results support the syndrome concept, indicating that convergent floral evolution is driven by adaptation to the most effective pollinator group. The predictability of pollination syndromes is greater in pollinator-dependent species and in plants from tropical regions. Many plant species also have secondary pollinators that generally correspond to the ancestral pollinators documented in evolutionary studies. We discuss the utility and limitations of pollination syndromes and the role of secondary pollinators to understand floral ecology and evolution. abstract_id: PUBMED:30368819 Beyond buzz-pollination - departures from an adaptive plateau lead to new pollination syndromes. Pollination syndromes describe recurring adaptation to selection imposed by distinct pollinators. We tested for pollination syndromes in Merianieae (Melastomataceae), which contain bee- (buzz-), hummingbird-, flowerpiercer-, passerine-, bat- and rodent-pollinated species. Further, we explored trait changes correlated with the repeated shifts away from buzz-pollination, which represents an 'adaptive plateau' in Melastomataceae. We used random forest analyses to identify key traits associated with the different pollinators of 19 Merianieae species and estimated the pollination syndromes of 42 more species. We employed morphospace analyses to compare the morphological diversity (disparity) among syndromes. We identified three pollination syndromes ('buzz-bee', 'mixed-vertebrate' and 'passerine'), characterized by different pollen expulsion mechanisms and reward types, but not by traditional syndrome characters. Further, we found that 'efficiency' rather than 'attraction' traits were important for syndrome circumscription.
Contrary to syndrome theory, our study supports the pooling of different pollinators (hummingbirds, bats, rodents and flowerpiercers) into the 'mixed-vertebrate' syndrome, and we found that disparity was highest in the 'buzz-bee' syndrome. We conclude that the highly adaptive buzz-pollination system may have prevented shifts towards classical pollination syndromes, but provided the starting point for the evolution of a novel set of distinct syndromes, all having retained multifunctional stamens that provide pollen expulsion, reward and attraction. abstract_id: PUBMED:25115902 Pollination syndromes in a specialised plant-pollinator interaction: does floral morphology predict pollinators in Calceolaria? Pollination syndromes are defined as suites of floral traits evolved in response to selection imposed by a particular group of pollinators (e.g., butterflies, hummingbirds, bats). Although numerous studies demonstrated their occurrence in plants pollinated by radically different pollinators, it is less known whether it is possible to identify them within species pollinated by one functional pollinator group. In such a framework, we expect floral traits to evolve also in response to pollinator subgroups (e.g., species, genera) within that unique functional group. On this, specialised pollination systems represent appropriate case studies to test such expectations. Calceolaria is a highly diversified plant genus pollinated by oil-collecting bees in genera Centris and Chalepogenus. Variation in floral traits in Calceolaria has recently been suggested to reflect adaptations to pollinator types. However, to date no study has explicitly tested that observation. In this paper, we quantitatively test that hypothesis by evaluating the presence of pollination syndromes within the specialised pollination system formed by several Calceolaria and their insect pollinators. To do so, we use multivariate approaches and explore the structural matching between the morphology of 10 Calceolaria taxa and that of their principal pollinators. Our results identify morphological matching between floral traits related to access to the reward and insect traits involved in oil collection, confirming the presence of pollinator syndromes in Calceolaria. From a general perspective, our findings indicate that the pollination syndrome concept can be also extended to the intra-pollinator group level. abstract_id: PUBMED:37814841 Pollination syndromes and the origins of floral traits. Background: A general view in the study of pollination syndromes is that floral traits usually represent convergent floral adaptations to specific functional pollinator groups. However, the definition of convergence is elusive and contradictory in the literature. Is convergence the independent evolution of either the same trait or similar traits with the same function? A review of the concept of convergence in developmental biology and phylogenetic systematics may shed new light in studies of pollination syndromes. 
Scope: The aims of this article are (1) to explore the notion of convergence and other concepts (analogy, homoplasy and parallelism) within the theory and practice of developmental evolution and phylogenetic systematics; (2) to modify the definitions of syndromes in order to embrace the concepts of analogy and convergence; (3) to revisit the bat pollination syndrome in the context of angiosperm phylogeny, with focus on the showy 'petaloid' organs associated with the syndrome; (4) to revisit the genetic-developmental basis of flower colour; (5) to raise evolutionary hypotheses of floral evolution associated with the bat pollination syndrome; and (6) to highlight some of the current frontiers of research on the origin and evolution of flowers and its impact on pollination syndrome studies in the 21st century. Conclusions: The inclusion of the concepts of analogy and convergence within the concept of syndromes will constitute a new agenda of inquiry that integrates floral biology, phylogenetic systematics and developmental biology. Phyllostomid and pteropodid bat pollination syndrome traits in eudicots and monocots represent cases of analogous and convergent evolution. Pollination syndromes are a multivariate concept intrinsically related to the understanding of flower organogenesis and evolution. The formulation of hypotheses of pollination syndromes must consider the phylogenetic levels of universality for both plant and animal taxa, flower development, genetics, homology and evolution, and a clear definition of evolutionary concepts, including analogy, convergence, homoplasy and parallelism. abstract_id: PUBMED:29036172 A question of data quality-Testing pollination syndromes in Balsaminaceae. Pollination syndromes and their predictive power regarding actual plant-animal interactions have been controversially discussed in the past. We investigate pollination syndromes in Balsaminaceae, utilizing quantitative respectively categorical data sets of flower morphometry, signal and reward traits for 86 species to test for the effect of different types of data on the test patterns retrieved. Cluster Analyses of the floral traits are used in combination with independent pollinator observations. Based on quantitative data we retrieve seven clusters, six of them corresponding to plausible pollination syndromes and one additional, well-supported cluster comprising highly divergent floral architectures. This latter cluster represents a non-syndrome of flowers not segregated by the specific data set here used. Conversely, using categorical data we obtained only a rudimentary resolution of pollination syndromes, in line with several earlier studies. The results underscore that the use of functional, exactly quanitified trait data has the power to retrieve pollination syndromes circumscribed by the specific data used. Data quality can, however, not be replaced by sheer data volume. With this caveat, it is possible to identify pollination syndromes from large datasets and to reliably extrapolate them for taxa for which direct observations are unavailable. abstract_id: PUBMED:34716581 Floral traits differentiate pollination syndromes and species but fail to predict the identity of floral visitors to Castilleja. Premise: Animal pollination is critical to plant reproduction and may cause convergent evolution of pollination syndromes. Pollination syndromes in Castilleja have been distinguished based on floral traits and historical observations of floral visitors. 
Here we addressed these questions: (1) Can pollination syndromes be distinguished using floral morphological traits or volatile organic compound emissions? (2) Is there significant variation in floral traits within a pollination syndrome at the level of populations or species? (3) Do pollination syndromes predict the most frequent floral visitor to Castilleja? Methods: Floral traits and visitation were measured for five co-occurring Castilleja species (C. applegatei, C. linariifolia, C. miniata, C. nana, and C. peirsonii), representing three pollination syndromes (bee, fly, and hummingbird), at four sites in the Sierra Nevada Mountains. We used nonmetric multidimensional scaling (NMDS) and multiple linear regressions to address key questions in the differentiation of Castilleja and floral visitors. Results: Our analyses revealed that both morphological traits and floral VOCs can be used to distinguish between some pollination syndromes and Castilleja species. Morphological traits defined pollination syndromes reliably, but within the hummingbird syndrome, there was also significant variation among populations and species. Pollination syndrome was a poor predictor of visitors to Castilleja. Conclusions: Floral trait differentiation among Castilleja individuals reflects both taxonomy and pollination syndromes. Differentiation was generally more evident in morphological traits compared to VOCs. Furthermore, a priori notions of pollination syndromes in this system are overly simplistic and fail to predict which animals most frequently visit Castilleja in natural populations. abstract_id: PUBMED:31172529 Comparative analysis of pollen release biomechanics in Thalictrum: implications for evolutionary transitions between animal and wind pollination. Transitions from animal to wind pollination have occurred repeatedly in flowering plants, driven by structural and biomechanical modifications to flowers. But the initial changes promoting wind pollination are poorly understood, especially those required to release pollen into airflows - the critical first stage of wind pollination. Using a wind tunnel, we performed a comparative study of pollen release biomechanics in 36 species of animal- and wind-pollinated Thalictrum. We quantified pollination syndromes and stamen natural frequency (fn ), a key vibration parameter, to determine if floral traits reliably predicted pollen release probability. We then investigated if pollen release was caused by wind-induced resonance vibration of stamens. We detected wind-induced stamen resonance in 91% of species and a strong effect of stamen acceleration on pollen release, inversely driven by fn . However, unlike fn , pollination syndromes did not reliably predict the probability of pollen release among species. Our results directly link fn to the capacity of stamens to release pollen by wind and suggest that structural mechanisms reducing fn are likely to be important for initiating transitions from animal to wind pollination. Our inability to predict the probability of pollen release based on pollination syndromes suggests diverse phenotypic trajectories from animal to wind pollination. abstract_id: PUBMED:37551426 The evolution of Ericaceae flowers and their pollination syndromes at a global scale. Premise: Floral evolution in large clades is difficult to study not only because of the number of species involved, but also because they often are geographically widespread and include a diversity of outcrossing pollination systems. 
The cosmopolitan blueberry family (Ericaceae) is one such example, most notably pollinated by bees and multiple clades of nectarivorous birds. Methods: We combined data on floral traits, pollination ecology, and geography with a comprehensive phylogeny to examine the structuring of floral diversity across pollination systems and continents. We focused on ornithophilous systems to test the hypothesis that some Old World Ericaceae were pollinated by now-extinct hummingbirds. Results: Despite some support for floral differentiation at a continental scale, we found a large amount of variability within and among landmasses, due to both phylogenetic conservatism and parallel evolution. We found support for floral differentiation in anther and corolla traits across pollination systems, including among different ornithophilous systems. Corolla traits show inconclusive evidence that some Old World Ericaceae were pollinated by hummingbirds, while anther traits show stronger evidence. Some major shifts in floral traits are associated with changes in pollination system, but shifts within bee systems are likely also important. Conclusions: Studying the floral evolution of large, morphologically diverse, and widespread clades is feasible. We demonstrate that continent-specific radiations have led to widespread parallel evolution of floral morphology. We show that traits outside of the perianth may hold important clues to the ecological history of lineages. Answer: Yes, there are pollination syndromes in the Australian epacrids (Ericaceae: Styphelioideae). A study found a correlation between bird pollination and red, long-tubed epacrids; statistical classification identified bird pollination with 86% accuracy, red flowers being the most important predictor. Fly and bee pollination could also be classified with 78% and 69% accuracy, respectively, although these did not have individually important floral predictors. The study concluded that while most epacrids have generalized pollination systems, there is a clear correlation between certain floral attributes and pollinator types, particularly for bird pollination (PUBMED:23681546).
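Several abstracts in this record (e.g., PUBMED:23681546, PUBMED:30368819) rank floral traits by their predictive value for pollinator type using statistical classification such as random forests. A minimal sketch of that idea follows; the trait names, coding, and data are hypothetical placeholders rather than the published datasets, and the classifier choice is only one plausible option.

```python
# Illustrative sketch: classify pollinator type from floral traits and rank trait importance.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
traits = pd.DataFrame({
    "corolla_tube_length_mm": rng.gamma(4.0, 3.0, n),  # toy floral measurements
    "flower_is_red": rng.integers(0, 2, n),            # hypothetical colour coding
    "nectar_volume_ul": rng.gamma(2.0, 2.0, n),
})
# Toy labels: bird pollination made more likely by red, long-tubed flowers.
score = 0.1 * traits["corolla_tube_length_mm"] + 2.0 * traits["flower_is_red"]
pollinator = np.where(score + rng.normal(0, 1, n) > 3.5, "bird", "bee")

clf = RandomForestClassifier(n_estimators=200, random_state=0)
accuracy = cross_val_score(clf, traits, pollinator, cv=5).mean()
clf.fit(traits, pollinator)

# Rank traits by their contribution to the classification.
for name, importance in sorted(zip(traits.columns, clf.feature_importances_),
                               key=lambda t: t[1], reverse=True):
    print(f"{name}: {importance:.2f}")
print(f"cross-validated accuracy: {accuracy:.2f}")
```

The 86% accuracy for bird pollination and the prominence of red flowers cited in the Answer correspond to exactly this kind of output: a classification accuracy plus a ranking of trait importance.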
Instruction: Two ECGs and a history: a guide to early hospital discharge of patients with 'chest pain? Abstracts: abstract_id: PUBMED:9597626 Two ECGs and a history: a guide to early hospital discharge of patients with 'chest pain? cause'. Objectives: To shorten the stay in hospital of patients admitted with chest pain of uncertain origin, using clinical history and an unchanging ECG as a basis to inform patients on the post-take ward round of imminent discharge that same day (pending normal cardiac enzyme results), thereby facilitating actual discharge on the same day. Design: A prospective observational study over a two-month period of consecutive patients admitted with chest pain of uncertain origin. Setting: District general hospital in North-West England with a regional cardiothoracic centre on site. Results: Of the 115 patients enrolled in the study (15% of acute medical admissions), 43 (37%) were identified by the investigators as likely to have normal cardiac enzymes and only one of them had evidence of important cardiac ischaemia (median actual length of stay, 3 days). The specificity of the protocol was 98%, with a sensitivity of 89%. Conclusions: A careful clinical history taken by experienced junior staff together with two sequential ECGs, can identify patients who may be discharged within 24 hours of admission with chest pain of uncertain origin. Significant bed savings would result from the instigation of this practice, with minimal requirement for extra resources. abstract_id: PUBMED:37568502 Early Hospital Discharge Using Remote Monitoring for Patients Hospitalized for COVID-19, Regardless of Need for Home Oxygen Therapy: A Descriptive Study. Aim: Since beds are unavailable, we prospectively investigated whether early hospital discharge will be safe and useful in patients hospitalized for COVID-19, regardless of their need for home oxygen therapy. Population And Methods: Extending the initial inclusion criteria, 62 patients were included and 51 benefited from home telemonitoring, mainly assessing clinical parameters (blood pressure, heart rate, respiratory rate, dyspnea, temperature) and peripheral saturation (SpO2) at follow-up. Results: 47% of the patients were older than 65 years; 63% needed home oxygen therapy and/or presented with more than one comorbidity. At home, the mean time to dyspnea and tachypnea resolutions ranged from 21 to 24 days. The mean oxygen-weaning duration was 13.3 ± 10.4 days, and the mean SpO2 was 95.7 ± 1.6%. The nurses and/or doctors managed 1238 alerts. Two re-hospitalizations were required, related to transient chest pain or pulmonary embolism, but no death occurred. Patient satisfaction was good, and 743 potential days of hospitalization were saved for other patients. Conclusion: The remote monitoring of vital parameters and symptoms is safe, allowing for early hospital discharge in patients hospitalized for COVID-19, whether or not home oxygen therapy was required. Oxygen tapering outside the hospital allowed for a greater reduction in hospital stay. Randomized controlled trials are necessary to confirm this beneficial effect. abstract_id: PUBMED:33706253 A prospective analysis of time to screen protocol ECGs in adult Emergency Department triage patients. Background: Early identification of ST elevation MI (STEMI) in emergency departments (ED) via electrocardiogram (ECG) expedites intervention. 
While screening ECGs for all ED chest pain patients should be obtained within 10 minutes per the American Heart Association, 40% of all ECGs are software-analyzed as "Normal" or "Otherwise Normal." However, the reliability of this analysis and the time for confirmation read are uncertain. This study investigates the time necessary for Patient Care Technicians (PCTs) to deliver ECGs to ED attendings to confirm automated interpretation. Methods: A prospective cohort study was conducted at a single academic ED. All patients ≥18 years who had a triage ECG were included. ECGs were obtained within 10 min of arrival, time-stamped, delivered for ED attending review and time-stamped upon PCT return to triage. Data were entered into REDCap and analyzed using StatPlus. Results: During the 4-month study, 1768 ECGs were collected. Distribution of automated readings was: "Normal ECG" 33.7%; "Otherwise Normal ECG" 11.2%; and "borderline/abnormal" 55.1%. The median time necessary for PCTs to confirm a screening ECG was 2.8 min (IQR 2,4) with attending physicians interrupted an average of 14.6 times per day. Conclusion: Screening of triage ECGs is time-intensive and compounds the frequency of physician interruptions. Although findings are not generalizable, the impact of these interruptions on patient care and safety is paramount and universal. Future directions include validating the reliability of "Normal" and "Otherwise Normal" ECG automated readings to obviate the need to interrupt the ED physician for expedited screening confirmation. abstract_id: PUBMED:26013480 An accelerated diagnostic protocol for the early, safe discharge of low-risk chest pain patients. Clinical Question: Can an accelerated 2-hour diagnostic protocol using the cardiac troponin I (cTnI) measurement as the only biomarker be implemented to allow an earlier and safe discharge of low-risk chest pain patients? Article Chosen: Than M, Cullen L, Aldous S, et al. 2-Hour accelerated diagnostic protocol to assess patients with chest pain symptoms using contemporary troponins as the only biomarker: the ADAPT trial. J Am Coll Cardiol 2012;59(23):2091-8. Objective: To determine whether an accelerated diagnostic protocol (ADP) for possible cardiac chest pain could identify low-risk patients suitable for early discharge using cTnI as the sole biomarker.
Compared to historic controls, there was an increased on-scene time usage of 7 minutes, including transfer. The median time spent from arrival at the hospital to the start of invasive treatment was low, 22 minutes. Compared to historic controls, we found a total reduction in time spent of 72 minutes (p < 0.01). Discussion: The results indicate that it is possible to minimize the time delay to primary PCI treatment by direct transfer through consultation and use of wireless transmission of prehospitalisation-acquired ECGs to a handheld device carried by an attending cardiologist. abstract_id: PUBMED:19542407 Low-risk patients with chest pain in the emergency department: negative 64-MDCT coronary angiography may reduce length of stay and hospital charges. Objective: The current standard-of-care workup of low-risk patients with chest pain in an emergency department takes 12-36 hours and is expensive. We hypothesized that negative 64-MDCT coronary angiography early in the workup of such patients may enable a shorter length of stay and reduce charges. Materials And Methods: The standard-of-care evaluation consisted of serial cardiac enzyme tests, ECGs, and stress testing. After informed consent, we added cardiac CT early in the standard-of-care workup of 53 consecutive patients. Fifty patients had negative CT findings and were included in this series. The length of stay and charges were analyzed using actual patient data for all patients in the standard-of-care workup and for two earlier discharge scenarios based on negative cardiac CT results: First, CT plus serial enzyme tests and ECGs during an observation period followed by discharge if all were negative; and second, CT plus one set of enzyme tests and one ECG followed by discharge if all were negative. Comparisons were made using paired Student's t tests. Results: For standard of care and the two CT-based earlier discharge analyses, the mean lengths of stay were 25.4, 14.3, and 5.0 hours; mean charges were $7,597, $6,153, and $4,251. Length of stay and charges were both significantly less (p < 0.001) for the two CT-based analyses. Conclusion: In low-risk patients with chest pain, discharge from the emergency department based on negative cardiac CT, enzyme tests, and ECG may significantly decrease both length of stay and hospital charges compared with the standard of care. abstract_id: PUBMED:2753683 Early exercise testing and discharge of patients with acute chest pain syndromes. Forty-nine patients deemed to have a low in-hospital risk after 24 hours observation for suspected myocardial infarction underwent exercise testing 24-48 hours after admission. Thirty-six patients had adequate exercise tolerance and follow-up of this group at one year showed that no major cardiac event occurred. Of the 13 patients whose performance was inadequate, five were found to have critical lesions at angiography. Early exercise testing is a useful prognostic test, permitting confident early discharge of patients who do not have threatening coronary disease. If this approach was adopted throughout the country, the number of patients referred for angiographic assessment prior to loss of myocardium could be substantially, and justifiably, increased. abstract_id: PUBMED:2240745 Early detection of acute myocardial infarction in patients presenting with chest pain and nondiagnostic ECGs: serial CK-MB sampling in the emergency department.
Study Objectives: Patients presenting to the emergency department with chest discomfort are a difficult problem for emergency physicians. Nearly 50% of patients with acute myocardial infarction (AMI) will initially have nondiagnostic ECGs on ED presentation. The purpose of this study was to determine if patients with AMI having nondiagnostic ECGs could be identified using new immunochemical assays for serial CK-MB sampling in the ED. Design: Chest pain patients, more than 30 years old, with pain not caused by trauma or explained by radiographic findings, were eligible for the study. Serial serum samples were drawn on ED presentation (zero hours) and three hours after presentation, then analyzed for CK-MB using four immunochemical methods and electrophoresis. Standard World Health Organization criteria were used to establish the diagnosis of AMI, including new Q-wave formation or elevation of standard in-hospital serum cardiac enzyme markers. Setting: A tertiary cardiac care community hospital. Measurements And Main Results: The serum from 183 patients hospitalized for possible ischemic chest pain was collected and analyzed. Thirty-one of 183 patients (17%) were found to have AMI by standard in-hospital criteria. Sixteen of the 31 patients (52%) with AMI had nondiagnostic ECGs on presentation. Immunochemical determination of serial CK-MB levels provided a sensitive and specific method for detecting AMI in patients within three hours after ED presentation compared with standard electrophoresis. The four immunochemical methods demonstrated a range in sensitivity from 50% to 62.1% on ED presentation versus 92% to 96.7% three hours later. The immunochemical tests demonstrated specificities ranging from 83.0% to 96.4% at three hours, with three of the four tests having specificities of 92% or greater. Electrophoresis had a sensitivity of 34.5% on ED presentation, increasing to 76.9% at three hours, with a specificity of 98.6%. Conclusions: Immunochemical CK-MB methods allowed rapid, sensitive detection of AMI in the ED. Early detection of AMI offers many potential advantages to the emergency physician. Early detection of AMI, while the patient is in the ED, could direct disposition of this potentially unstable patient to an intensive care setting. Such information may prevent the ED discharge of patients with AMI having nondiagnostic ECGs. The diagnosis of AMI within a six-hour period after symptom onset may allow thrombolytic therapy to be given to patients with AMI not having diagnostic ECGs. This study served as a pilot trial for a multicenter study of the Emergency Medicine Cardiac Research Group, which is currently ongoing. abstract_id: PUBMED:23117012 Identifying patients for early discharge: performance of decision rules among patients with acute chest pain. Background: The HEART score and North American Chest Pain Rule (NACPR) are decision rules designed to identify acute chest pain patients for early discharge without stress testing or cardiac imaging. This study compares the clinical utility of these decision rules combined with serial troponin determinations. Methods And Results: A secondary analysis was conducted of 1005 participants in the Myeloperoxidase In the Diagnosis of Acute coronary syndromes Study (MIDAS). MIDAS is a prospective observational cohort of Emergency Department (ED) patients enrolled from 18 US sites with symptoms suggestive of acute coronary syndrome (ACS). 
The ability to identify participants for early discharge and the sensitivity for ACS at 30 days were compared among an unstructured assessment, NACPR, and HEART score, each combined with troponin measures at 0 and 3h. ACS, defined as cardiac death, acute myocardial infarction, or unstable angina, occurred in 22% of the cohort. The unstructured assessment identified 13.5% (95% CI 11.5-16%) of participants for early discharge with 98% (95% CI 95-99%) sensitivity for ACS. The NACPR identified 4.4% (95% CI 3-6%) for early discharge with 100% (95% CI 98-100%) sensitivity for ACS. The HEART score identified 20% (95% CI 18-23%) for early discharge with 99% (95% CI 97-100%) sensitivity for ACS. The HEART score had a net reclassification improvement of 10% (95% CI 8-12%) versus unstructured assessment and 19% (95% CI 17-21%) versus NACPR. Conclusions: The HEART score with 0 and 3 hour serial troponin measures identifies a substantial number of patients for early discharge while maintaining high sensitivity for ACS. abstract_id: PUBMED:29361856 Catheterization laboratory activations and time intervals for patients with pre-hospital ECGs. Objective: The use of pre-hospital ECGs (PH-ECG) reduces time to reperfusion for patients with ST-segment elevation myocardial infarction (STEMI). The feasibility of reperfusion therapy within 60 minutes for hospitals with 24/7 PCI capability has been questioned, and current guidelines have set time targets to 90 minutes. Our primary objective was to investigate the proportion of false-positive catheterization laboratory activations by PH-ECG. Our secondary objective was to describe the time intervals from first medical contact to reperfusion and to establish the proportion of patients receiving reperfusion within 60 minutes. Design: A retrospective cohort study; 4298 patients for whom a PH-ECG was transmitted to the investigating hospital, mainly due to chest pain, during 2013 were included.
The study concluded that a careful clinical history taken by experienced junior staff, along with two sequential ECGs, can identify patients who may be discharged early, resulting in significant bed savings with minimal requirement for extra resources.
Instruction: Are pelvic side-wall recurrences of cervical cancer biologically different from central relapses? Abstracts: abstract_id: PUBMED:8033044 Are pelvic side-wall recurrences of cervical cancer biologically different from central relapses? Background: By using the Combined Operative and Radiotherapeutic Treatment (CORT) procedure, pelvic side-wall recurrences of gynecologic malignancies arising in a previously irradiated pelvis may be locally controlled. Local control of central relapses may be achieved by exenteration alone. If, in cervical cancer, both relapse patterns are biologically different (as hypothesized by some investigators), distinct disease courses after local treatment may be expected. Methods: Since June, 1989, 32 pelvic recurrences of cervical cancer were treated for local control in this institution. The median size of the recurrent tumors was 5 cm (range, 2-9 cm); 84% of the patients had been extensively irradiated in the pelvis. Therapy of 14 centrally located recurrences was exenteration alone. In 18 patients with relapses fixed to the pelvic wall and histologically confirmed intralesional resection planes, the CORT procedure was applied. Results: After a median observation of 24 months (range, 5-48 months) 7 patients with central recurrences and 11 patients with pelvic wall recurrences had progressive disease in the pelvis and/or distantly. The site of recurrent tumor progression was similar in both groups. Stratified Kaplan-Meier and univariate Cox regression analysis identified recurrent tumor size, age, and recurrence-free intervals, but not relapse location as prognostic factors. Only size of the recurrent tumor significantly influenced survival in the multivariate Cox analysis. Conclusions: These results suggest that central and pelvic wall recurrences of cervical cancer do not exhibit pronounced biologic differences. Patients with large (> or = 5 cm) recurrences have a poor prognosis in spite of extended radical treatment irrespective of tumor location. Efforts should be made to detect isolated pelvic relapses at a smaller tumor size to enhance the chance for long term survival after local control by exenteration and CORT. abstract_id: PUBMED:9988791 Laterally extended endopelvic resection: surgical treatment of infrailiac pelvic wall recurrences of gynecologic malignancies. Objective: My purpose was to treat infrailiac pelvic wall recurrences of gynecologic malignancies with extended radical surgery. Study Design: On the basis of cadaver dissection studies, I developed the laterally extended endopelvic resection techniques. The new operations were offered to patients with infrailiac sidewall disease during a 3-year feasibility study. Results: Laterally extended endopelvic resections extending the lateral resection plane of pelvic exenteration to the medial aspects of the acetabulum, obturator membrane, sacrospinous ligament, and sacral plexus/piriformis muscle were performed in 18 consecutive patients. After this procedure, all patients had tumor-free intraoperative biopsy specimens taken from the remaining pelvic wall structures within the tumor bed area. The final histopathologic report confirmed clean margins in 6 patients and margins with microscopic tumor extensions only in 12 patients. Severe complications occurred in 4 patients (22%), without treatment-related deaths. 
Conclusion: Laterally extended endopelvic resection allows the complete surgical removal of infrailiac pelvic-side-wall tumors, the most frequent local recurrence of lower genital tract cancer, either with free margins or with potential microscopic residual tumor as a prerequisite for combined operative and radiation treatment. abstract_id: PUBMED:35079249 Value of tube combined with real-time ultrasound-guided accurate interstitial high-dose-rate brachytherapy for post-operative pelvic side-wall recurrences of cervical cancer. Purpose: Treatment of post-operative pelvic side-wall relapses of cervical cancer has always been a therapeutic challenge for radiation oncologists. Radiation dose boost to recurrent tumor by brachytherapy is necessary, but difficult to achieve. Our treatment center has successfully achieved precise transvaginal insertion of a pelvic side-wall mass, using a metal tube with real-time ultrasound guidance. This study investigates the efficacy and safety of image-guided high-dose-rate (HDR) interstitial brachytherapy (IBT) for patients with post-operative pelvic side-wall relapses. Material And Methods: Between 2018 and 2020, 36 post-operative pelvic side-wall relapses of cervical cancer patients receiving external beam radiotherapy (EBRT) combined with HDR-IBT were analyzed retrospectively. Doses per fraction ranged from 6.0 to 7.0 Gy, whereas cumulative equivalent doses in 2 Gy fractions ranged from 80 to 100 Gy. Effects of prognostic factors on local control (LC) and progression-free survival (PFS) were analyzed, and late toxicity data were evaluated. Results: A total of thirty-six patients were included, with a median follow-up of 19.3 months. The tumor response was obtained for all patients, with radiological complete remission in 20 (55.6%) patients. Two-years LC and PFS rates were 72.2% and 47.2%, respectively. Grade II rectal toxicity was observed in 5 patients (13.9%). Multivariate analyses for LC and PFS using proportional regression model were performed, in which shape of exophytic tumor was associated with a significantly better prognosis for both LC and PFS (HR = 0.071, 0.128, p = 0.005, 0.002). Clinical target volume (CTV) D90 remained associated with a significantly better prognosis for PFS (HR = 0.056, p = 0.000). Conclusions: A metal tube placed in the vagina, under the guidance of real-time ultrasound in transvaginal IBT for pelvis masses can be accurately achieved. The shape of exophytic tumor and CTV D90 were associated with a significantly better prognosis for PFS, and the shape of exophytic tumor was also associated with a better prognosis for LC. Therefore, radiation dose boost using IBT can improve the prognosis of patients with post-operative pelvic side-wall recurrences of cervical cancer. abstract_id: PUBMED:15777897 Clinical aspects and prognosis of pelvic recurrence of cervical carcinoma. Objective: To identify which patients with locally recurrent cervical carcinoma are potentially curable. Method: A total of 664 stage IB-IVA patients were examined following surgery or radiotherapy. Result: Among the 664 patients, 193 (29%) developed recurrence. Sixty-seven (35%) of these recurrences were located in the pelvis alone. Among these 67 recurrences, 24 (35%) were central recurrences and the remaining 43 (65%) were pelvic side-wall recurrences. Of the 24 patients with central recurrences, 8 were salvaged. Of these 8 patients, 3 underwent pelvic exenteration, and 5 received optimal radiotherapy. 
In these 5 survivors who received radiotherapy, the recurrent tumor had been small (<2 cm). All 43 patients with pelvic wall recurrence developed progressive disease. Conclusion: The following patients are potentially curable: patients with a resectable, centrally located tumor who are candidates for pelvic exenteration, and patients with a small central recurrence for whom complete radiation therapy is feasible. abstract_id: PUBMED:31322915 Pelvic side wall recurrence in locally advanced cervical carcinoma treated with definitive chemoradiation-clinical impact of pelvic wall dose. Objective: Pelvic side wall dose in locally advanced cervical carcinoma treated with definitive chemoradiation has been debated. The present study investigated the relationship of disease recurrence with dose for the pelvic side wall. It also attempted to identify the minimal dose that significantly reduced recurrence. Methods: Pelvic side wall recurrence at median 24 months was assessed clinically and radiologically across three groups of patients receiving variable pelvic wall doses using no parametrial boost, external beam or interstitial boost, or dose escalated combined external beam with interstitial boost. Results: At 24 months, recurrence occurred in 3/155 boost vs 40/130 no-boost patients (p < 0.0001). Receiver operating characteristic curve analysis demonstrated cut-off pelvic wall dose to be 58.9 Gy (p < 0.0001). Dose escalated combined boost showed no significant benefit compared to single modality parametrial boost (p = 0.553). Conclusion: Mean pelvic wall dose of at least 58.9 Gy offers clinically significant benefit in pelvic wall control. Doses recommended by guidelines should be adhered to in the patients' best interests. Advances In Knowledge: This preliminary study determined a relationship between recurrence rates and dose to the pelvic side wall and also a cut-off dose that significantly improved pelvic wall control in locally advanced cervical cancer. abstract_id: PUBMED:15099959 Pelvic relapses of uterine neoplasms: transvaginal sonographic and Doppler features. Objective: To describe the sonographic and power Doppler features of pelvic relapses in endometrial and cervical cancer. Methods: We retrospectively analyzed the preoperative transvaginal sonographic reports of 45 women with a histological diagnosis of pelvic relapse. The three diameters of the lesion were recorded; then the shape, margins, content (solid or cystic), and location were analyzed. A subjective assessment of the vascularization (vascular score) was obtained with power Doppler. Results: Twenty-six patients had pelvic recurrence from endometrial cancer and 19 from cervical cancer. In 36 (80%) patients, the recurrence was a central pelvic mass located on the vaginal apex, while in six cases (13%), it was diagnosed as a pelvic side-wall recurrence, and in three (7%), the recurrence occupied the whole pelvis. The recurrences had diameters ranging from 1.0 to 6.8 cm (mean diameter: 3.0 cm, standard deviation: 1.5). In 44/45 cases (98%), the recurrence appeared as a solid nodular mass, while in one case (2%), it was defined as a cystic mass. In 32 (71%) women, the mass showed a hypoechoic content with respect to the surrounding bowel, while in 13 (29%), it manifested a nonhomogeneous content. Vascularization of the mass was assessed in all patients before surgery; in all cases, it was possible to visualize randomly dispersed blood vessels in the context of the relapse.
The vascular score revealed scarce blood vessels in 19 relapses, moderate flow in 23, and abundant flow in three cases. Conclusions: Knowledge of the spectrum of ultrasonographic findings of pelvic relapses will help the physician in diagnosing a recurrent malignant disease at an early stage, when cure rates are high. The widespread availability and low cost of transvaginal sonography (TVS) support its use in routine surveillance of patients operated for uterine neoplasms. abstract_id: PUBMED:35571661 Multimodality treatment for multiple recurrences of cervical cancer after radiotherapy: a case report. Background: Despite the availability of multiple treatments, the prognosis of recurrent cervical cancer (RCC) remains poor. There are no reports of the entire treatment of cases with multiple recurrences, and clinicians have no guidelines in such cases. We share our experience and consider this complex case of multiple recurrences of cervical cancer. Case Description: We report our encounter with a challenging case of locally advanced cervical cancer with multiple recurrences after primary chemoradiotherapy. The first recurrence was a bulky lesion invading the posterior bladder wall and pelvic wall accompanied by severe haematuria and moderate anaemia. The patient was treated with various treatments during the course of the disease, including pelvic exenteration (PE), external beam radiotherapy (EBRT), radioactive seed implantation, and targeted therapy. These salvage treatments led to an overall survival (OS) of 47 months, a progression-free survival (PFS) after the last chemotherapy of 34 months, and a post-relapse survival of 13.5 months. However, the patient died from severe infection due to an intestinal fistula. Conclusions: This study reports on the experience of treatment after multiple relapses to provide a reference for clinicians. It suggests that PE should be considered for cervical cancer patients with central recurrence within the primary irradiated field when positive margins can be guaranteed, but it can be palliative even if the pelvic wall is invaded when no other treatments are available. Appropriate extension of resection or additional treatments such as intraoperative radiotherapy (IORT) and neoadjuvant chemotherapy can be considered for patients at high risk of re-recurrence. abstract_id: PUBMED:9699528 Tumor hypoxia in pelvic recurrences of cervical cancer. We have previously demonstrated in primary cancer of the uterine cervix that tumor hypoxia, as determined polarographically, is strongly associated with clinical malignant progression of the disease. Having applied a similar methodological approach to investigate loco-regional relapses, we found a pronounced shift to more hypoxic oxygenation profiles in the recurrent tumors than in the primary tumors. Median pO2 values in 53 pelvic recurrences were significantly lower than the median pO2 values of 117 primary tumors of comparable sizes (7.1 +/- 1.1 mmHg vs. 12.1 +/- 1.0 mmHg, p = 0.0013). The differences in tumor oxygenation between primary and recurrent tumors mirrored the differences in the patients' 5-year survival probabilities. In the cohort of patients with pelvic relapses, median tumor pO2 < 4 mmHg indicated a significantly shorter median survival time as compared to median tumor pO2 > or = 4 mmHg. Our results further support our thesis that in cervical cancer, tumor hypoxia and clinical aggressiveness in terms of resistance to therapy and tumor dissemination, are interrelated. 
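The hypoxia abstract above (PUBMED:9699528) rests on a two-group comparison of skewed oxygenation data (median pO2 7.1 mmHg in recurrences vs. 12.1 mmHg in primary tumors, p = 0.0013). Such comparisons are commonly made with a rank-based test; the sketch below uses invented pO2 values and SciPy's Mann-Whitney U test purely to illustrate the form of the comparison, not to reproduce the published analysis.

```python
# Illustrative sketch: rank-based comparison of tumor oxygenation between two groups.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
primary_po2 = rng.gamma(shape=2.0, scale=6.0, size=117)   # toy values, primary tumors
recurrent_po2 = rng.gamma(shape=2.0, scale=3.5, size=53)  # toy values, pelvic recurrences

stat, p_value = mannwhitneyu(primary_po2, recurrent_po2, alternative="two-sided")
print(f"median primary pO2:   {np.median(primary_po2):.1f} mmHg")
print(f"median recurrent pO2: {np.median(recurrent_po2):.1f} mmHg")
print(f"Mann-Whitney U p-value: {p_value:.4f}")
```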
abstract_id: PUBMED:14599868 Laterally extended endopelvic resection. Novel surgical treatment of locally recurrent cervical carcinoma involving the pelvic side wall. Objectives: To demonstrate the therapeutic potential of the laterally extended endopelvic resection (LEER), especially for patients with recurrent cervical carcinomas involving the side wall of an irradiated pelvis. These patients, suffering from the most common situation of local failure, have so far not been considered candidates for curative therapy. Methods: Extending the lateral resection plane of pelvic exenteration to the medial aspects of the lumbosacral plexus, sacrospinous ligament, acetabulum, and obturator membrane enables the complete removal, with free margins (R0), of a subset of locally advanced and recurrent tumors of the lower female genital tract fixed to the pelvic wall. Patients selected for LEER were accrued to a prospective outcome trial. Results: Thirty-six patients with recurrent (n = 29) or primary advanced (n = 7) gynecologic malignancies involving the side wall of the lesser pelvis underwent LEER from July 1996 until October 2002. The majority of the patients suffered from cervical carcinoma (n = 29) and had received previous pelvic irradiation (n = 24). Tumor-free (R0) lateral margins were obtained in 34 patients. Severe postoperative complications occurred in 14 patients, with one treatment-related death. Five-year survival probability was 49% for the whole group and 46% for those patients considered only for palliation with current treatment options. Most patients without evidence of disease at least 1 year after LEER achieved good quality of life. Conclusions: LEER can be offered as a novel surgical salvage therapy to a selected subset of patients with locally advanced and recurrent cervical carcinoma involving the pelvic wall. abstract_id: PUBMED:9614958 Pelvic wall recurrence of cervix carcinomas. Combined surgical-radio-chemotherapeutic procedure (CORCT). Purpose: The prognosis of patients with pelvic wall recurrences after primary therapy of cervical cancer is poor. In selected patients treated exclusively by surgery as primary therapy, the 5-year survival rate was between 5% and 25%. Additionally, the combination of operation and radiotherapy (CORT) has improved survival to date. We developed a new concept for the treatment of pelvic wall recurrences. This concept combines radical surgery, interstitial radiation, and chemotherapy (CORCT, combined operative and radiochemotherapy). Patients And Methods: After radical surgery, interstitial HDR (Ir-192) brachytherapy using the afterloading technique (2.5 Gy, 2 fractions/day over 5 days) was performed. Additionally, chemotherapy with cisplatin 25 mg/m2/day for 5 days and 5-fluorouracil 1000 mg/day for 5 days was administered. Results: After combined operative and radiotherapy, 3 of 3 patients died within a median of 8 months after treatment because of distant metastases. After additive radiochemotherapy, 3 of 4 patients had no evidence of disease (NED) after a follow-up period of 14 (12 to 30) months. Conclusion: The first treatment results of the newly designed combined operative and radiochemotherapy concept (CORCT) led us to expect an improvement in the prognosis of patients with recurrences of cervical cancer at the pelvic wall. Answer: The biological differences between pelvic side-wall recurrences and central relapses of cervical cancer have been a subject of investigation.
According to the study by PUBMED:8033044, central and pelvic wall recurrences of cervical cancer do not exhibit pronounced biologic differences. The study found that the site of recurrent tumor progression was similar in both groups, and prognostic factors such as recurrent tumor size, age, and recurrence-free intervals were identified, but not relapse location. The size of the recurrent tumor was the only factor that significantly influenced survival in multivariate analysis, suggesting that efforts should be made to detect isolated pelvic relapses at a smaller tumor size to enhance the chance for long-term survival after local control by exenteration and CORT. Furthermore, PUBMED:15777897 reported that among patients with pelvic recurrences, those with central recurrences had a better chance of being salvaged compared to those with pelvic wall recurrence, where all patients developed progressive disease. This suggests that while the biological behavior may not be significantly different, the outcomes and potential for salvage treatment may vary between central and pelvic side-wall recurrences. In summary, while the biological differences between pelvic side-wall recurrences and central relapses of cervical cancer may not be pronounced, the outcomes and treatment approaches can differ, with central recurrences potentially having a better prognosis and being more amenable to salvage treatments.
Instruction: Is glycated hemoglobin related to other dysmetabolic variables implicated in the increase of cardiovascular risk in polycystic ovary syndrome? Abstracts: abstract_id: PUBMED:24918529 Is glycated hemoglobin related to other dysmetabolic variables implicated in the increase of cardiovascular risk in polycystic ovary syndrome? A comparative study. Background: In non-PCOS patients, the concentration of glycated hemoglobin (HbA1C) has been employed to identify individuals at higher risk for impaired glucose tolerance (IGT) and diabetes mellitus. A few studies have examined the role of HbA1C in PCOS patients, and current results are controversial. Aim: To compare the strength of the association between glycated hemoglobin and other predictors of cardiovascular risk in polycystic ovary syndrome (PCOS). Methods: This cross-sectional study enrolled 197 PCOS patients and 72 non-PCOS women. Transvaginal ultrasound and biochemical and hormone measurements were performed. Glycated hemoglobin (HbA1C) was correlated with other variables related to dysmetabolic/vascular diseases. Results: The HbA1C levels were 6.0±1.4% and 4.9±0.4% in PCOS patients and non-PCOS controls, respectively (p<0.001). The HbA1C levels were ≥5.7% in 46.4% of PCOS patients and in none of the control subjects (OR=90.8). HbA1C was well correlated with several anthropometric, metabolic and endocrine parameters. Stepwise multiple regression including HbA1C and other known predictors of cardiovascular risk resulted in a significant model in which body mass index (BMI) and free testosterone exhibited the best correlation with HbA1C (adjusted R²=0.530; F=39.8; p<0.001). Conclusion: HbA1C was elevated and correlated with anthropometric, biochemical and endocrine variables of metabolic/vascular disease risk in PCOS patients. Combined HbA1C, BMI and free testosterone levels provided a significant model with potential use to evaluate metabolic/vascular disease in PCOS patients. abstract_id: PUBMED:22357766 Prevalence of elevated glycated hemoglobin in women with polycystic ovary syndrome. Background: Recently, the American Diabetes Association (ADA) has included glycated hemoglobin A1c (A1C) level as a component of the diagnostic criteria for 'diabetes' or 'increased risk for diabetes'. This study was conducted to examine the prevalence of and risk factors for 'elevated A1C' (≥5.7%) in women with polycystic ovary syndrome (PCOS). Methods: A1C was measured using an immunoturbidimetric assay, and was evaluated in 154 patients with PCOS and 469 age-matched controls (match ratio of 1:3). All subjects were categorized by BMI (non-obese <25 kg/m² and obese ≥25 kg/m²), and the prevalence of elevated A1C was also analyzed according to BMI. Results: One-third (31.2%) of the patients with PCOS had elevated A1C. The prevalence of elevated A1C (≥5.7%) was similar in obese women with PCOS and obese controls (23.5 and 20.0%, respectively, P = 1.0), but non-obese women with PCOS (mean age 29.8 ± 5.3 years) had a higher prevalence of elevated A1C than non-obese controls (31.2 versus 6.6%, respectively, P < 0.001). Logistic regression analysis of all subjects showed that the odds of a woman having elevated A1C were 6.7 times higher if she had PCOS (adjusted odds ratio 6.67, 95% confidence interval 3.50-12.70). Conclusions: The high prevalence of elevated A1C in non-obese patients with PCOS and the increased risk of elevated A1C associated with PCOS suggest that PCOS itself may be associated with abnormal A1C status.
Assessing A1C level in young, non-obese patients with PCOS may be a useful new approach to screening for diabetes. abstract_id: PUBMED:33413067 Efficacy and Cardiovascular Safety of Metformin. Metformin has been the cornerstone of the medical management of type 2 diabetes mellitus (T2DM) for over 6 decades since its discovery. Although there were initial concerns about its efficacy and safety, increasing evidence over the years proved it to be very effective and safe. It has stood the test of time, proving to be much safer than other drugs in the same class and garnering robust evidence of cardiovascular safety. In patients with T2DM, metformin lowers mean glycated hemoglobin (HbA1c) levels by 1.1-1.2% as monotherapy, by 0.6-0.83% as an add-on therapy to insulin, and by 0.9-0.95% as add-on therapy to other oral agents. Recently, its use has also been explored for unlicensed indications other than diabetes, including but not limited to obesity, polycystic ovarian syndrome (PCOS), and type 1 diabetes. Metformin is also currently under investigation for its role as a cancer adjuvant therapy. We aim to provide a comprehensive review of the available evidence for the safety and efficacy of metformin. abstract_id: PUBMED:21982282 Hemoglobin A1c as a tool for the diagnosis of type 2 diabetes in 208 premenopausal women with polycystic ovary syndrome. Objective: To study hemoglobin A1c (HbA1c) as a tool for diagnosing diabetes and to study HbA1c as a cardiovascular risk marker in patients with polycystic ovary syndrome (PCOS). Design: Retrospective observational study. Setting: Academic tertiary-care medical center. Patient(s): Two hundred eight premenopausal women with PCOS. Intervention(s): Patients underwent clinical evaluation (Ferriman-Gallwey score, body mass index, waist, blood pressure), hormone analyses (T, sex hormone-binding globulin, fasting lipids, insulin, glucose, HbA1c), transvaginal ultrasound, and 2-hour oral glucose tolerance tests (OGTT) measuring capillary blood glucose (BG) at 0 (BG 0) and 120 (BG 120) minutes, insulin, and C-peptide. Main Outcome Measure(s): Results of OGTT, HbA1c values. Result(s): Twenty patients were diagnosed with type 2 diabetes during OGTT. The sensitivity and specificity of HbA1c ≥6.5% for the diagnosis of diabetes were 35% and 99%, respectively, compared with the diagnosis established by OGTT. Hemoglobin A1c showed closer correlation with waist, body mass index, and lipid profile than BG 120, suggesting that HbA1c could be a cardiovascular risk marker. Conclusion(s): The clinical utility of HbA1c for diagnosing impaired glucose tolerance and type 2 diabetes in PCOS in daily practice is low. Long-term prospective studies are needed to determine whether HbA1c is superior to glucose levels as a cardiovascular risk marker in patients with PCOS. abstract_id: PUBMED:37828797 The role of glycated hemoglobin in the diagnosis of prediabetes and diabetes mellitus in young women with polycystic ovary syndrome. Objective: To compare glycated hemoglobin (HbA1c) and the oral glucose tolerance test (OGTT) for the diagnosis of prediabetes and diabetes in young women with polycystic ovary syndrome (PCOS). Patients And Design: This cross-sectional study included 154 women aged 20-40 years with a diagnosis of PCOS, who were screened for prediabetes and diabetes by the OGTT and HbA1c. Women with diabetes, hyperprolactinemia, thyroid or adrenal diseases, or anemia, and users of hormonal contraception or corticosteroids were excluded.
Measurements: Clinical, biochemical and ultrasound data were collected from the electronic medical records. The women were classified as having normal glucose metabolism, prediabetes, or diabetes based on the diagnostic tests. Sensitivity and specificity were calculated, and the kappa method was used to assess agreement between the two methods. Results: According to the OGTT and HbA1c values, 79.2% and 76% of the women were within the normal range, respectively; 16.8% and 19.5% had prediabetes, and 4% and 4.5% had diabetes (p > .05). The kappa coefficient of 0.41 (95% confidence interval: 0.24-0.58) indicated moderate agreement between methods. Considering the OGTT as the gold standard, the specificity of HbA1c was 89.5% and sensitivity was 85.7% in the diagnosis of prediabetes, and 100% and 66.7%, respectively, in the diagnosis of diabetes. Conclusion: The HbA1c, when compared to the OGTT, showed high sensitivity and specificity in the diagnosis of prediabetes in young women with PCOS. abstract_id: PUBMED:24883154 Prevalence of elevated glycated hemoglobin concentrations in the polycystic ovary syndrome: anthropometrical and metabolic relationship in amazonian women. Background: To determine the prevalence of elevated glycated hemoglobin (HbA1c) and to examine its relationship with other parameters of carbohydrate metabolism among Brazilian women with polycystic ovary syndrome (PCOS). Methods: A cross-sectional study including 288 PCOS patients was conducted. Anthropometrical, clinical, biochemical and endocrine parameters were evaluated. Results: The mean age was 26.92 ± 5.51 years. The mean HbA1c concentration was 5.83 ± 1.34%. In 38.54% of patients, HbA1c was ≥ 5.7%. HbA1c was positively correlated with body weight (r = 0.142, P = 0.017), body mass index (P = 0.000), waist:hip ratio (P = 0.000), fat mass (P = 0.000), conicity index (P = 0.000), triglyceride (P = 0.001), C-peptide (P = 0.000), total testosterone (P = 0.003), free testosterone (P = 0.000), free androgen index (P = 0.006) and fasting insulin (P = 0.025). Using the oral glucose tolerance test, HbA1c showed a positive correlation with glucose concentrations at any point in time (P < 0.05). Conclusions: HbA1c was elevated in nearly 40% of PCOS patients, and it showed a positive correlation with several anthropometric and metabolic factors and androgen levels. The current study provides further evidence that HbA1c is higher in PCOS patients and may have a potential role in the prediction of dysglycemic disease in these women. abstract_id: PUBMED:23529348 Utility of hemoglobin-A1C in nondiabetic women with polycystic ovary syndrome. Objective: Hemoglobin A1c (A1C) ≥5.7% is now accepted as a biomarker for identifying individuals at risk for diabetes. Compared to the general population, women with polycystic ovary syndrome (PCOS) have a higher risk for diabetes. Our goal was to determine what glucose homeostasis abnormalities can be identified by A1C ≥5.7% in women with PCOS. Methods: In a cross-sectional study, nondiabetic women with PCOS (according to the National Institutes of Health [NIH] criteria) were divided into 2 groups based on A1C (<5.7% [n = 23] and ≥5.7% [n = 25]). Oral glucose tolerance tests (OGTT) and frequently sampled intravenous glucose tolerance tests (FS-IVGTT) were conducted, and body composition, cardiovascular risk factors, and sex steroid levels were assessed. Results: Compared to women with A1C <5.7%, those with A1C ≥5.7% were older (35.1 ± 1.1 years vs.
31.1 ± 1.1 years; P = .04), had higher glucose levels at fasting and during OGTT, and had a lower insulin sensitivity index (SI: 2.0 ± 0.2 vs. 4.2 ± 0.6; P = .0195) and disposition index (DI: 1,014 ± 82 vs. 1,901 ± 217; P = .011) during FS-IVGTT. They also had higher triglycerides, high-sensitivity C-reactive protein (hs-CRP), and fatty acid-binding protein 4 (FABP4) levels. There was no difference in serum androgen levels. Conclusion: A1C ≥5.7% identified the subgroup of PCOS patients with higher insulin resistance, inadequate compensatory insulin response, impaired glucose disposition, and increased cardiovascular risk factors. Thus, A1C represents an inexpensive and informative biomarker to identify PCOS patients at risk for metabolic abnormalities. abstract_id: PUBMED:27593259 A Review on Glycosylated Hemoglobin in Polycystic Ovary Syndrome. Polycystic ovary syndrome (PCOS) is one of the most common reproductive endocrine disorders among women of reproductive age, with a variety of complications and consequences mostly due to hyperandrogenism and insulin resistance (IR). PCOS patients with IR are at risk for metabolic syndrome and diabetes mellitus (DM) along with its complications such as cardiovascular events. There are several methods for screening IR in patients with PCOS to predict DM and other complications. Fasting plasma glucose test, oral glucose tolerance test, and insulin and glycosylated hemoglobin (HbA1c) levels are some available screening tools for IR. The American Diabetes Association recommended HbA1c to screen for DM because HbA1c is not affected by day-to-day plasma glucose levels and reflects the plasma glucose status during 2-3 months before measurement. Some studies have evaluated the role of HbA1c as a screening method to predict DM in PCOS patients, however, there are still controversies in this matter. Also some studies reported that HbA1c has a correlation with complications of PCOS such as metabolic syndrome and cardiovascular events. We found that HbA1c could be a suitable screening test for IR in PCOS patients but more studies are recommended, omitting confounding factors that could affect IR in patients with PCOS, such as antihyperglycemic agents like metformin, or lifestyle modification, which can be effective in reducing IR in patients with PCOS. abstract_id: PUBMED:32819262 Efficacy and Cardiovascular Safety of DPP-4 Inhibitors. Dipeptidyl peptidase-4 (DPP-4) inhibitors or gliptins belong to the class of incretin mimetics. These drugs have been available on the market for the management of type 2 diabetes mellitus (T2DM) for over a decade. Sitagliptin, linagliptin, vildagliptin, saxagliptin and alogliptin are widely available globally, whilst anagliptin, gemigliptin and teneliptin are used mainly in the Asian countries. The glycemic control conferred by DPP-4 inhibitors varies among individual molecules with an average reduction of glycated hemoglobin (HbA1c) ranging between -0.5 to -1.0% with monotherapy. Additive effects on HbA1c reduction may result from combination therapy with other antidiabetics. Weak evidence from various studies suggests that DPP-4 inhibitors may be useful in treating nonalcoholic fatty liver disease (NAFLD) and polycystic ovary syndrome (PCOS). DPP-4 inhibitors safety is not established in pregnancy, and there is only meagre evidence of its use in T2DM among children. 
In line with the United States Food and Drug Administration (US FDA) recommendations, sitagliptin, linagliptin, saxagliptin and alogliptin have undergone rigorous cardiovascular outcome trials (CVOTs) in recent years, and the safety data for vildagliptin are available through retrospective meta-analyses of various studies. Data from small clinical trials and meta-analyses are available for the CV safety of the other DPP-4 inhibitors. In general, the CVOTs and other safety data do not reveal serious warning signals except for saxagliptin (higher risk of hospitalization from heart failure [hHF]), although there are no robust data on the risk of hHF among patients with moderate to severe HF at baseline treated with other DPP-4 inhibitors. This review critically appraises the efficacy and cardiovascular safety of DPP-4 inhibitors to empower clinicians to use this class of antidiabetic medications judiciously. abstract_id: PUBMED:29158232 Development and validation of QDiabetes-2018 risk prediction algorithm to estimate future risk of type 2 diabetes: cohort study. Objectives: To derive and validate updated QDiabetes-2018 prediction algorithms to estimate the 10 year risk of type 2 diabetes in men and women, taking account of potential new risk factors, and to compare their performance with current approaches. Design: Prospective open cohort study. Setting: Routinely collected data from 1457 general practices in England contributing to the QResearch database: 1094 were used to develop the scores and a separate set of 363 were used to validate the scores. Participants: 11.5 million people aged 25-84 and free of diabetes at baseline: 8.87 million in the derivation cohort and 2.63 million in the validation cohort. Methods: Cox proportional hazards models were used in the derivation cohort to derive separate risk equations in men and women for evaluation at 10 years. Risk factors considered included those already in QDiabetes (age, ethnicity, deprivation, body mass index, smoking, family history of diabetes in a first degree relative, cardiovascular disease, treated hypertension, and regular use of corticosteroids) and new risk factors: atypical antipsychotics, statins, schizophrenia or bipolar affective disorder, learning disability, gestational diabetes, and polycystic ovary syndrome. Additional models included fasting blood glucose and glycated haemoglobin (HBA1c). Measures of calibration and discrimination were determined in the validation cohort for men and women separately and for individual subgroups by age group, ethnicity, and baseline disease status. Main outcome measure: Incident type 2 diabetes recorded on the general practice record. Results: In the derivation cohort, 178 314 incident cases of type 2 diabetes were identified during follow-up arising from 42.72 million person years of observation. In the validation cohort, 62 326 incident cases of type 2 diabetes were identified from 14.32 million person years of observation. All new risk factors considered met our model inclusion criteria. Model A included age, ethnicity, deprivation, body mass index, smoking, family history of diabetes in a first degree relative, cardiovascular disease, treated hypertension, and regular use of corticosteroids, and new risk factors: atypical antipsychotics, statins, schizophrenia or bipolar affective disorder, learning disability, and gestational diabetes and polycystic ovary syndrome in women. Model B included the same variables as model A plus fasting blood glucose.
Model C included HBA1c instead of fasting blood glucose. All three models had good calibration and high levels of explained variation and discrimination. In women, model B explained 63.3% of the variation in time to diagnosis of type 2 diabetes (R2), the D statistic was 2.69 and the Harrell's C statistic value was 0.89. The corresponding values for men were 58.4%, 2.42, and 0.87. Model B also had the highest sensitivity compared with current recommended practice in the National Health Service based on bands of either fasting blood glucose or HBA1c. However, only 16% of patients had complete data for blood glucose measurements, smoking, and body mass index.Conclusions Three updated QDiabetes risk models to quantify the absolute risk of type 2 diabetes were developed and validated: model A does not require a blood test and can be used to identify patients for fasting blood glucose (model B) or HBA1c (model C) testing. Model B had the best performance for predicting 10 year risk of type 2 diabetes to identify those who need interventions and more intensive follow-up, improving on current approaches. Additional external validation of models B and C in datasets with more completely collected data on blood glucose would be valuable before the models are used in clinical practice. Answer: Yes, glycated hemoglobin (HbA1c) is related to other dysmetabolic variables implicated in the increase of cardiovascular risk in polycystic ovary syndrome (PCOS). A study found that HbA1c levels were elevated in PCOS patients and correlated with several anthropometric, metabolic, and endocrine parameters of metabolic/vascular disease risks. Specifically, body mass index (BMI) and free testosterone exhibited the best correlation with HbA1c, suggesting that combined HbA1c, BMI, and free testosterone levels could be significant in evaluating metabolic/vascular disease in PCOS patients (PUBMED:24918529). Furthermore, the prevalence of elevated HbA1c (≥5.7%) was found to be high in non-obese patients with PCOS, indicating that PCOS itself may be associated with abnormal HbA1c status. This suggests that assessing HbA1c levels in young, non-obese patients with PCOS could be a useful approach to screening for diabetes (PUBMED:22357766). Another study showed that HbA1c was elevated in nearly 40% of PCOS patients and positively correlated with several anthropometric and metabolic factors, androgen levels, and glucose concentrations at any point in time during the oral glucose tolerance test. This evidence supports the potential role of HbA1c in predicting dysglycemic disease in women with PCOS (PUBMED:24883154). Additionally, HbA1c ≥5.7% identified a subgroup of PCOS patients with higher insulin resistance, inadequate compensatory insulin response, impaired glucose disposition, and increased cardiovascular risk factors. Thus, HbA1c represents an informative biomarker to identify PCOS patients at risk for metabolic abnormalities (PUBMED:23529348). Overall, these findings suggest that HbA1c is indeed related to other dysmetabolic variables that contribute to increased cardiovascular risk in PCOS, and it could be a useful marker for assessing metabolic and cardiovascular health in these patients.
Instruction: Do physical activity level and body mass index predict recovery from persistent neck pain in men and women of working age? Abstracts: abstract_id: PUBMED:23653132 Do physical activity level and body mass index predict recovery from persistent neck pain in men and women of working age? A population-based cohort study. Purpose: The study sought to examine the gender-specific effects of physical activity level and body mass index on recovery from persistent neck pain (PNP) among citizens of working age in Stockholm, Sweden. Methods: A population-based cohort of 1,730 subjects (18-65) with PNP answered surveys in 2002 and 2007. Prognostic factors were self-reported body mass index (BMI) and physical activity level (PAL) at baseline. Analyses were performed with odds ratios (OR) with corresponding 95 % confidence intervals (95 % CI). Results: Women reporting higher physical activity level had higher odds of recovering from PNP than women with sedentary leisure time (OR of 1.5, 95 % CI 1.0-2.4), but no associations were found in men. No associations were found between BMI and recovery from PNP in any analyses. Conclusion: Physical activity seems to be associated with recovery from PNP in women and should therefore be encouraged. Future studies should continue investigating physical activity and lifestyle factors in relation to recovery from persistent neck pain, since these modifiable factors may be considered in interventions. abstract_id: PUBMED:21633119 Physical exercise, body mass index, and risk of chronic pain in the low back and neck/shoulders: longitudinal data from the Nord-Trondelag Health Study. Chronic musculoskeletal pain constitutes a large socioeconomic challenge, and preventive measures with documented effects are warranted. The authors' aim in this study was to prospectively investigate the association between physical exercise, body mass index (BMI), and risk of chronic pain in the low back and neck/shoulders. The study comprised data on approximately 30,000 women and men in the Nord-Trøndelag Health Study (Norway) who reported no pain or physical impairment at baseline in 1984-1986. Occurrence of chronic musculoskeletal pain was assessed at follow-up in 1995-1997. A generalized linear model was used to calculate adjusted risk ratios. For both females and males, hours of physical exercise per week were linearly and inversely associated with risk of chronic pain in the low back (women: P-trend = 0.02; men: P-trend < 0.001) and neck/shoulders (women: P-trend = 0.002; men: P-trend < 0.001). Obese women and men had an approximately 20% increased risk of chronic pain in both the low back and the neck/shoulders. Exercising for 1 or more hours per week compensated, to some extent, for the adverse effect of high BMI on risk of chronic pain. The authors conclude that physical inactivity and high BMI are associated with an increased risk of chronic pain in the low back and neck/shoulders in the general adult population. abstract_id: PUBMED:17289775 Association between body mass index and recovery from whiplash injuries: a cohort study. It is hypothesized that excess weight is a risk factor for delayed recovery from neck pain, such as from whiplash injuries. However, the association between obesity and recovery from whiplash injury has not been studied. The authors examined the association between body mass index and time to recovery from whiplash injuries in a population-based cohort study of traffic injuries in Saskatchewan, Canada. 
The cohort included 4,395 individuals who made an insurance claim to Saskatchewan Government Insurance and were treated for whiplash injury between July 1, 1994, and December 31, 1995. Of those, 87.7% had recovered by November 1, 1997. No association was found between baseline body mass index and time to recovery. Compared with individuals with normal weight, those who were underweight (hazard rate ratio = 0.88, 95% confidence interval: 0.73, 1.06), overweight (hazard rate ratio = 1.01, 95% confidence interval: 0.94, 1.09), and obese (hazard rate ratio = 0.99, 95% confidence interval: 0.90, 1.08) had similar rates of recovery, even after adjustment for other factors. The results do not support the hypothesis that individuals who are overweight or obese have a worse prognosis for whiplash. abstract_id: PUBMED:22208802 Predictors of change in pain and physical functioning among post-menopausal women with recurrent pain conditions in the women's health initiative observational cohort. Unlabelled: Pain complaints are commonly reported symptoms among postmenopausal women and can have significant effects on health-related quality of life. We sought to identify medical and psychosocial factors that predict changes in pain and overall physical functioning over a 3-year period among postmenopausal women with recurrent pain conditions. We examined data from postmenopausal women age 50 to 79 with recurrent pain conditions (low back pain, neck pain, headache or migraines, or joint pain or stiffness) over a 3-year period using the Women's Health Initiative Observational Study Cohort (N = 67,963). Multinomial logistic regression models controlling for demographic and clinical characteristics were used to identify baseline predictors of change in the SF-36 subscales for pain and physical functioning between baseline and 3-year follow-up. Body mass index (BMI) was associated with worsening of pain (OR [95% CI] 1.54 [1.45-1.63] for BMI ≥30) and physical functioning (1.83 [1.71-1.95] for BMI ≥30). A higher reported number of nonpain symptoms, higher medical comorbidity, and a positive screen for depression (1.13 [1.05-1.22] for worsened pain) were also associated with worsening of pain and physical functioning. Baseline prescription opioid use was also associated with lack of improvement in pain (OR .42, 95% CI .36-.49) and with worsened physical functioning (1.25 [1.04-1.51]). Perspective: This study presents prospective data on change in pain and physical functioning in postmenopausal women over a 3-year period. Our results suggest depression, nonpain physical symptoms, obesity, and possibly opioid treatment are associated with worse long-term pain outcomes in this population. abstract_id: PUBMED:36157900 Association between spinal disorders and different domains of physical activity among young adult men. Background: There is limited evidence of how physical activity (PA) associates with low back pain (LBP) and neck pain (NP). Particularly, the association between occupational or commuting PA and LBP/NP is unclear. The aim of the present cross-sectional study was to investigate the association between spinal disorders and different physical activity domains in young and healthy adult men. Methods: Self-reported leisure time, occupational and commuting PA, as well as LBP and NP, were studied using questionnaires among young Finnish males (n = 1,630). Logistic regression analysis was used to study the associations of PA domains, physical fitness, and spinal disorders. 
Regression models were adjusted for age, education, smoking, waist circumference, and the other PA domains. Results: There was a positive association between moderate leisure-time PA and both LBP (OR: 1.51, 95% CI: 1.18-1.95) and NP (OR: 1.29, 95% CI: 1.00-1.66) compared to low PA. Furthermore, the moderate (OR: 1.31, 95% CI: 1.00-1.71) and high (OR: 1.53, 95% CI: 1.15-2.02) leisure time PA groups had a higher likelihood of lumbago. The moderate (OR: 0.67, 95% CI: 0.51-0.90) and high (OR: 0.68, 95% CI: 0.48-0.95) occupational PA groups had a lower likelihood of radiating LBP, while high occupational PA (OR: 0.72, 95% CI: 0.52-0.99) had a lower likelihood of lumbago. Conclusions: The associations between physical activity and LBP or NP seem to vary between different domains of physical activity among young healthy men. Commuting and occupation-related PA appear not to be harmfully associated with LBP or NP, whereas moderate-level leisure time PA may be associated with increased LBP and NP, and the respective high level may be associated with an increase in the prevalence of lumbago. abstract_id: PUBMED:30341129 Influence of family history on prognosis of spinal pain and the role of leisure time physical activity and body mass index: a prospective study using family-linkage data from the Norwegian HUNT study. Objectives: To investigate the influence of parental chronic spinal pain on prognosis of chronic spinal pain in adult offspring, and whether offspring physical activity level and body mass index (BMI) modified this association. Design: Prospective cohort study. Setting: We used family-linked longitudinal data from the Norwegian HUNT study collected in HUNT2 (1995-1997) and HUNT3 (2006-2008). Participants: A total of 1529 offspring who reported spinal pain in HUNT2 were linked with parental data and followed up in HUNT3. Outcomes: We estimated relative risk (RR) with 95% CI for recovery from chronic spinal pain, and also from activity limiting spinal pain, in offspring related to chronic spinal pain in parents. We also investigated whether offspring leisure time physical activity and BMI modified these intergenerational associations in spinal pain. Results: A total of 540 (35%) offspring were defined as recovered after approximately 11 years of follow-up. Offspring with both parents reporting chronic spinal pain were less likely to recover from chronic spinal pain (RR 0.83, 95% CI 0.69 to 0.99) and activity limiting spinal pain (RR 0.71, 95% CI 0.54 to 0.94), compared with offspring of parents without chronic spinal pain. Analyses stratified by BMI and physical activity showed no strong evidence of effect modification on these associations. However, offspring who were overweight/obese and had both parents reporting chronic spinal pain had a particularly low probability of recovery from activity limiting spinal pain, compared with those who were normal weight and had parents without chronic spinal pain (RR 0.57, 95% CI 0.39 to 0.84). Conclusion: Offspring with chronic spinal pain are less likely to recover if they have parents with chronic spinal pain, particularly if the offspring are overweight/obese. abstract_id: PUBMED:26674164 Back and neck pain among school teachers in Poland and its correlations with physical activity. Background: Back pain represents one of the most common diseases across various populations of workers worldwide.
This study analyzes the prevalence and severity of back pain, based on selected demographic variables, and its relationship with physical activity among school teachers. Material And Methods: The study included 998 professionally active teachers (840 females and 158 males) from the southern part of Poland. The following validated psychometric tools were used: 1) for evaluation of disability due to back pain, a Polish version of the Oswestry Disability Index (ODI) and the Neck Disability Index (NDI); 2) for physical activity assessment, the Subjective Experience of Work Load (SEWL); as well as the authors' supplementary questionnaire addressing demographic and anthropometric variables. Results: There was no correlation between age and the NDI scores in females (r = 0.027), in contrast to males (r = 0.202; p ≤ 0.05). Statistically significant correlations (p ≤ 0.05) were reported between age and the ODI in both females (r = 0.219) and males (r = 0.180). No associations were found between body mass index (BMI) and the ODI or the NDI. In the case of women, disability related to low back pain (LBP) had a negative impact on the indicators of their activity during work, sports, and leisure time. In the case of men, the NDI and ODI scores did not differ based on activity indicators. Conclusions: Our findings confirm that back pain represents a serious concern among teachers. Age appears to be a prognostic factor, while no association between the BMI and LBP has been revealed. The limitation of physical activity leads to more frequent back pain. abstract_id: PUBMED:17563606 Physical activity, cardiorespiratory fitness, and body mass index in relationship to work productivity and sickness absence in computer workers with preexisting neck and upper limb symptoms. Objective: The purpose of this study was to investigate associations among three modifiable risk factors (i.e., physical activity, cardiorespiratory fitness, and body mass index), work productivity, and sickness absence in computer workers. Methods: All participants were computer workers with neck and upper limb symptoms in the preceding 6 months, the last 2 weeks, or both. Productivity and sickness absence were assessed with the Health and Performance Questionnaire (HPQ). Results: Physical activity and cardiorespiratory fitness were not associated with work productivity or sickness absence. Obese male workers reported lower absolute productivity than lean workers. Lean and overweight workers reported similar productivity levels. Body mass index was not associated with sickness absence. Conclusions: In a population of white-collar workers with neck and upper limb symptoms, obese male workers showed significantly lower productivity than did lean or overweight workers. abstract_id: PUBMED:33034558 Association Between Physical Activity and Odds of Chronic Conditions Among Workers in Spain. Introduction: Prevention of chronic conditions is a major public health challenge, and achieving minimum recommended levels of physical activity aids in reaching this objective. The aim of our study was to investigate whether levels of physical activity were associated with the prevalence of common chronic conditions among the Spanish workforce. Methods: We retrieved data from the Spanish National Health Survey 2017 (N = 9,695), in which the mean age of participants was 44.4 (standard deviation, 10.4 y), and 47.4% were women.
Workers self-reported a set of 6 chronic conditions (ie, chronic low-back pain, chronic neck pain, diabetes, hypertension, depression, and anxiety), and we used the International Physical Activity Questionnaire short form to estimate physical activity. We performed multivariable logistic regression adjusted for possible confounders to assess associations between physical activity and chronic conditions. Results: The final adjusted model showed that performing less than 600 metabolic equivalent-minutes per week of physical activity was associated with significantly increased odds for chronic conditions (adjusted odds ratio [aOR] = 1.18; 95% CI, 1.07-1.30). Of the sex and age subgroups analyzed, this association was significant in men aged 17 to 44 (aOR = 1.21; 95% CI, 1.00-1.46). Among chronic conditions, low-back pain and anxiety were associated with low levels of physical activity, whereas covariates such as body mass index, smoking habits, education level, and occupational class had an important influence on the association between physical activity and chronic conditions. Conclusion: Results suggest that achieving sufficient physical activity could reduce chronic conditions among Spanish workers. abstract_id: PUBMED:14609687 Neck and shoulder pain in 70- to 79-year-old men and women: findings from the Health, Aging and Body Composition Study. Background Context: Musculoskeletal pain in the cervicobrachial region is considered a major health problem among adults of working age, but little is known about the impact of this pain in the elderly. Purpose: Determine the prevalence of neck and shoulder pain in a well-functioning cohort, identify factors associated with this pain, assess the pattern of coexisting joint pain and evaluate the impact of this pain on physical functioning. Study Design: Cross-sectional study. Patient Sample: Black and white men and women, aged 70 to 79 years, participating in the Health, Aging and Body Composition (Health ABC) study. Outcome Measures: Not applicable. Methods: Between April 1997 and June 1998, 3,075 men and women participating in Health ABC study completed the initial home interview and clinical examination. Information was collected on musculoskeletal pain, medical history, depressive symptomatology and physical function. Physical performance measures were obtained. Results: A total of 11.9% of participants reported neck pain of one month or more in duration and 18.9% reported shoulder pain. White women had the highest prevalence of neck pain (15.4%) and black women the highest prevalence of shoulder pain (24.3%). The correlates of both neck and shoulder pain were female gender, no education beyond high school, poorer self-rated health, depressive symptomatology and a medical history of arthritis, heart attack, angina. Increasing severity of both neck and shoulder pain was associated with an increased prevalence of joint pain at other body sites and with poor functional capacity. Measures of physical performance involving the upper extremity were also decreased. Conclusions: Neck and shoulder pain, either alone or in conjunction with pain in other joints, has a substantial impact on the function and well-being of the older adults in this cohort. Answer: Physical activity level and body mass index (BMI) have been studied to determine their predictive value for recovery from persistent neck pain (PNP) in men and women of working age, with mixed results. 
A population-based cohort study in Stockholm, Sweden, found that women who reported higher physical activity levels had higher odds of recovering from PNP compared to women with sedentary leisure time (OR of 1.5, 95% CI 1.0-2.4). However, no associations were found in men, and no associations were found between BMI and recovery from PNP in any analyses (PUBMED:23653132). In contrast, a study from the Nord-Trøndelag Health Study in Norway showed that physical inactivity and high BMI were associated with an increased risk of chronic pain in the low back and neck/shoulders in the general adult population. Exercising for 1 or more hours per week compensated, to some extent, for the adverse effect of high BMI on the risk of chronic pain (PUBMED:21633119). Another study on the association between BMI and recovery from whiplash injuries found no association between baseline BMI and time to recovery, suggesting that individuals who are overweight or obese do not have a worse prognosis for whiplash, which often results in neck pain (PUBMED:17289775). Furthermore, a study on postmenopausal women with recurrent pain conditions found that a higher BMI was associated with worsening of pain and physical functioning (PUBMED:22208802). In young adult men, the associations between physical activity and low back pain (LBP) or neck pain (NP) varied between different domains of physical activity. Moderate-level leisure time physical activity may be associated with increased LBP and NP, while occupational physical activity appeared to have a protective effect against radiating LBP and lumbago (PUBMED:36157900). Lastly, a prospective study using family-linkage data from the Norwegian HUNT study indicated that offspring with chronic spinal pain are less likely to recover if they have parents with chronic spinal pain, particularly if the offspring are overweight/obese (PUBMED:30341129). In summary, physical activity level appears to be a positive predictor of recovery from persistent neck pain, particularly in women, while the role of BMI is less clear, with some studies showing no association and others indicating that higher BMI may be associated with worse outcomes.
Instruction: Do frequent moderate exacerbations contribute to progression of chronic obstructive pulmonary disease in patients who are ex-smokers? Abstracts: abstract_id: PUBMED:25792820 Do frequent moderate exacerbations contribute to progression of chronic obstructive pulmonary disease in patients who are ex-smokers? Background: In addition to smoking, acute exacerbations are considered to be a contributing factor to progression of chronic obstructive pulmonary disease (COPD). However, these findings come from studies including active smokers, while results in ex-smokers are scarce and contradictory. The purpose of this study was to evaluate whether frequent acute moderate exacerbations are associated with an accelerated decline in forced expiratory volume in one second (FEV1) and impairment of functional and clinical outcomes in ex-smoking COPD patients. Methods: A cohort of 100 ex-smoking patients recruited for a 2-year follow-up study was evaluated at inclusion and at 6-monthly scheduled visits while in a stable condition. Evaluation included anthropometry, spirometry, inspiratory capacity, peripheral capillary oxygen saturation, severity of dyspnea, a 6-minute walking test, BODE (Body mass index, airflow Obstruction, Dyspnea, Exercise performance) index, and quality of life (St George's Respiratory Questionnaire and Chronic Respiratory Disease Questionnaire). Severity of exacerbation was graded as moderate or severe according to health care utilization. Patients were classified as infrequent exacerbators if they had no or one acute exacerbation/year and frequent exacerbators if they had two or more acute exacerbations/year. Random effects modeling, within hierarchical linear modeling, was used for analysis. Results: During follow-up, 419 (96% moderate) acute exacerbations were registered. At baseline, frequent exacerbators had more severe disease than infrequent exacerbators according to their FEV1 and BODE index, and also showed greater impairment in inspiratory capacity, forced vital capacity, peripheral capillary oxygen saturation, 6-minute walking test, and quality of life. However, no significant difference in FEV1 decline over time was found between the two groups (54.7±13 mL/year versus 85.4±15.9 mL/year in frequent exacerbators and infrequent exacerbators, respectively). This was also the case for all other measurements. Conclusion: Our results suggest that frequent moderate exacerbations do not contribute to accelerated clinical and functional decline in COPD patients who are ex-smokers. abstract_id: PUBMED:17112715 Exacerbations and lung function decline in COPD: new insights in current and ex-smokers. Aim: To investigate whether there is a significant relationship between an increased frequency of exacerbations and the rate of forced expiratory volume in 1 s (FEV1) decline in COPD patients. Methods-Measurements: In total, 102 COPD patients (44 smokers, 58 ex-smokers) participated in a 3-year prospective study. Exacerbations were identified as a worsening of patients' respiratory symptoms as recorded on diary cards. Spirometry was performed every 6 months. The effect of frequent exacerbations on lung function was investigated using random effects models. Results: The median annual exacerbation rate was 2.85 (mean 3.1, 95% CI 2.7-3.6).
Patients with an annual exacerbation rate over the median rate had significantly lower baseline post-bronchodilation FEV1 (%pred), a higher MRC dyspnoea score, and chronic cough compared to patients whose annual exacerbation rate was below the median. The average annual rate of decline in FEV1 (%pred), adjusted for smoking (ΔFEV1), was significantly increased in frequent compared with infrequent exacerbators (P=0.017). The highest ΔFEV1 was observed in smokers who were frequent exacerbators, and a significant interaction between exacerbation frequency and ΔFEV1 was also observed in ex-smokers. Conclusions: Our findings suggest that an increased frequency of exacerbations is significantly associated with FEV1 decline even in ex-smokers. Thus, smoking and frequent exacerbations may both have a negative impact on lung function. Smoking cessation and prevention of exacerbations should be a major target in COPD. abstract_id: PUBMED:31188775 Stability of Frequent Exacerbator Phenotype in Patients with Chronic Obstructive Pulmonary Disease. Background: At present, there is little information in Bulgaria regarding the rate and stability of the frequent-exacerbation phenotype in COPD patients. Aim: To study the rate and stability of the frequent-exacerbation phenotype in COPD patients. Materials And Methods: We followed up 465 COPD patients for exacerbations over a 3-year period. Exacerbations were defined as events that resulted in treatment with antibiotics and/or corticosteroids (moderate), or that led to hospitalization (severe). Result: Approximately 10% of the patients had two or more exacerbations per year (frequent-exacerbation phenotype), and this proportion stayed stable over the study period. The exacerbation rate in the first year of follow-up was 0.33 for stage I COPD patients (according to GOLD stages), 0.49 for stage II, 0.69 for stage III, and 1.06 for stage IV. The frequent-exacerbation rate increased from stage I to stage IV: 4.35%, 9.17%, 10.79%, and 20.97%, respectively. A history of exacerbations in the previous year increased the risk of new exacerbations: OR 2.1820 (95% CI: 1.4018 to 3.3965, p = 0.0005) with a history of one exacerbation, and OR 4.6460 (95% CI: 2.3286 to 9.2696; p < 0.0001) with a history of two exacerbations. The frequent-exacerbation phenotype appeared to be unstable over the study period: up to 33% of those patients stayed in the phenotype for the next year. Conclusions: The exacerbation frequency and the rate of the frequent-exacerbation phenotype increase with COPD progression. A history of exacerbations in the previous year is a significant risk factor for exacerbations of COPD. The frequent-exacerbation phenotype appeared to be unstable over the study period. The phenotype of non-exacerbators was more likely to remain stable over time. abstract_id: PUBMED:32442114 Susceptibility to frequent exacerbation in COPD patients: Impact of the exacerbations history, vaccinations and comorbidities? Introduction: Exacerbations are key events in the natural history of COPD, but our understanding of their longitudinal determinants remains unclear. We used data from a large observational study to test the hypothesis that vaccination status and comorbidities could be associated with the exacerbation profile. Methods: Diagnosed COPD patients were included by their pulmonologists, with up to 3 years of follow-up.
Data were analyzed using the KmL method, designed to cluster longitudinal data, and receiver operating characteristic curve analysis to determine the best threshold for allocating patients to the identified clusters. Results: A total of 932 COPD patients were included from January 2014 onward; 446 patients (65.68% males, 35.59% current smokers) were followed over a period of 3 years with complete data. 239 (28.15%) patients reported two or more exacerbations in the year before enrolment (frequent exacerbations). Among them, 142 (16.68%) also had frequent exacerbations in the first year of the study, and 69 (8.10%) remained frequent exacerbators in the second year. Based on our hypothesis, we were able to determine four phenotypes: A (infrequent), B (frequent in underweight patients), C (transient), and D (frequent in obese patients). Frequent exacerbators had more airflow limitation and symptoms. Irrespective of the cut-offs set to define the optimal number of clusters, a history of exacerbations (OR: 3.72 [2.53-5.49]), presence of anxiety (OR: 2.03 [1.24-3.31]) and absence of the annual influenza vaccination (OR: 1.97 [1.20-3.24]) remained associated with the frequent exacerbator phenotypes. Conclusions: The most important determinants of frequent exacerbations are a history of exacerbations, anxiety, and lack of annual influenza vaccination. abstract_id: PUBMED:35911870 Increased mortality associated with frequent exacerbations in COPD patients with mild-to-moderate lung function impairment, and smokers with normal spirometry. Background: The burden of frequent respiratory exacerbations in COPD patients with mild-to-moderate spirometric impairment and smokers with preserved lung function is unknown. Methods: We categorized COPD participants in COPDGene with post-bronchodilator FEV1 % predicted ≥50% by the annual exacerbation frequency into three groups: i) frequent exacerbators (top 5%; n = 109), ii) exacerbators (>0 but less than frequent exacerbators; n = 1,009), and iii) no exacerbation (n = 981). Exacerbations were defined as respiratory episodes requiring antibiotics and/or systemic steroids. We performed a Cox proportional hazards regression analysis to examine the association with mortality. We repeated the same process in current/former smokers with preserved spirometry (FEV1 ≥80% predicted and FEV1/FVC ≥0.7). Results: Among 2,099 COPD participants, frequent exacerbators had ≥1.8 exacerbations/year and were responsible for 34.3% of the total exacerbations. There were 102 (10.4%) deaths in the group with no exacerbations, 119 (11.8%) in the exacerbator group, and 24 (22%) in the frequent exacerbators. Adjusted mortality in frequent exacerbators was higher relative to individuals with no exacerbations (hazard ratio (HR) = 1.98; 95% CI = 1.25-3.13). An increase in the frequency of exacerbations by one exacerbation/year was associated with increased mortality (HR = 1.40, 95% CI = 1.21-1.62). Among 3,143 participants with preserved spirometry, frequent exacerbators had ≥0.8 exacerbations/year and were responsible for more than half of the exacerbations. There were 93 (4.2%) deaths in the group with no exacerbations, 28 (3.8%) in the exacerbator group, and 14 (7.6%) in the frequent exacerbators. The adjusted mortality was increased in frequent exacerbators with preserved spirometry relative to those with no exacerbations (HR = 2.25; 95% CI = 1.26-4.01).
Conclusions: In COPD participants with mild-to-moderate spirometric impairment and smokers with preserved spirometry, the frequent exacerbator phenotype is responsible for a large proportion of total exacerbations and is associated with high mortality. abstract_id: PUBMED:24505290 Steroid resistance in COPD? Overlap and differential anti-inflammatory effects in smokers and ex-smokers. Background: Inhaled corticosteroids (ICS) reduce exacerbation rates and improve health status but can increase the risk of pneumonia in COPD. The GLUCOLD study, investigating patients with mild-to-moderate COPD, has shown that long-term (2.5-year) ICS therapy induces anti-inflammatory effects. The literature suggests that cigarette smoking causes ICS insensitivity. The aim of this study is to compare the anti-inflammatory effects of ICS in persistent smokers and persistent ex-smokers in a post-hoc analysis of the GLUCOLD study. Methods: Persistent smokers (n = 41) and persistent ex-smokers (n = 31) from the GLUCOLD cohort were investigated. Effects of ICS treatment compared with placebo were estimated by analysing changes in lung function, hyperresponsiveness, and inflammatory cells in sputum and bronchial biopsies during short-term (0-6 months) and long-term (6-30 months) treatment using multiple regression analyses. Results: Bronchial mast cells were reduced by short-term and long-term ICS treatment in both smokers and ex-smokers. In contrast, CD3⁺, CD4⁺, and CD8⁺ cells were reduced by short-term ICS treatment in smokers only. In addition, sputum neutrophils and lymphocytes, and bronchial CD8⁺ cells, were reduced after long-term treatment in ex-smokers only. No significant interactions existed between smoking and ICS treatment. Conclusion: Even in the presence of smoking, long-term ICS treatment may lead to anti-inflammatory effects in the lung. Some anti-inflammatory ICS effects are comparable in smokers and ex-smokers with COPD, while other effects are cell-specific. The clinical relevance of these findings, however, is uncertain. abstract_id: PUBMED:37928768 Analysis of Airway Thickening and Serum Cytokines in COPD Patients with Frequent Exacerbations: A Heart of the Matter. Background: Differences in lung function among patients with Chronic Obstructive Pulmonary Disease (COPD) cause bias in the findings when identifying causes related to the frequent exacerbator phenotype. The aim of this study was to determine whether computed tomographic (CT) biomarkers and circulating inflammatory biomarkers were associated with the COPD frequent exacerbator phenotype after eliminating the differences in lung function between a frequent exacerbator (FE) group and a non-frequent exacerbator (NFE) group. Methods: A total of 212 patients with stable COPD were divided into an FE group (n=106) and an NFE group (n=106) according to their exacerbation history. These patients were assessed by spirometry, quantitative CT measurements and blood sample measurements during their stable phase. Univariate and multivariate logistic regression were used to assess the association between airway thickening or serum cytokines and the COPD frequent exacerbator phenotype. Receiver operating characteristic (ROC) curves were calculated for Pi10, WA%, IL-1β and IL-4 to identify frequent exacerbators. Results: Compared with the NFE group, the FE group had a greater inner perimeter wall thickness of a 10 mm diameter bronchiole (Pi10), a greater airway wall area percentage (WA%), and higher concentrations of IL-1β and IL-4 (p<0.001).
After adjusting for sex, age, BMI, FEV1%pred and smoking pack-years, Pi10, WA%, IL-1β and IL-4 were independently associated with a frequent exacerbator phenotype (p<0.001). Additionally, there was an increase in the odds ratio of the frequent exacerbator phenotype with increasing Pi10, WA%, IL-4, and IL-1β (p for trend <0.001). The ROC curve demonstrated that IL-1β had a significantly larger calculated area under the curve (p < 0.05) than Pi10, WA% and IL-4. Conclusion: Pi10, WA%, IL-4, and IL-1β were independently associated with the frequent exacerbator phenotype among patients with stable COPD, suggesting that chronic airway and systemic inflammation contribute to the frequent exacerbator phenotype. Trial Registration: This trial was registered in the Chinese Clinical Trial Registry (https://www.chictr.org.cn). Its registration number is ChiCTR2000038700, and the date of registration is September 29, 2020. abstract_id: PUBMED:35820227 Frequent productive cough: Symptom burden and future exacerbation risk among patients with asthma and/or COPD in the NOVELTY study. Introduction: Persistent cough with sputum production is an important clinical trait in chronic obstructive pulmonary disease (COPD). We defined "frequent productive cough" based on 2 questions from the St George's Respiratory Questionnaire (SGRQ) and sought to determine its occurrence and associated outcomes in patients with physician-assigned asthma and/or COPD from the NOVELTY study. Methods: Frequent productive cough was defined as cough and sputum production most or several days/week for the past 3 months (scoring ≥3 for both SGRQ questions). Relationships with baseline disease characteristics and exacerbations over 12 months' follow-up were examined using logistic regression. Results: Baseline SGRQ data were available for 7125 patients, of whom 31.3% had frequent productive cough. It was more common in asthma+COPD (38.8%) and COPD (38.1%) than asthma (25.0%), increasing with physician-assessed severity, and in current versus former and never smokers. Patient-reported symptomatic worsening was more common in patients with versus without frequent productive cough. Reduced post-bronchodilator FEV1 (odds ratio [OR] per 10% decrement 1.14 [95% confidence interval 1.11-1.16]) and history of pollutant exposure at home/work (OR 1.50 [1.33-1.69]) were associated with frequent productive cough in all diagnoses. Patients with baseline frequent productive cough were more likely to have ≥1 exacerbation over the subsequent 12 months (OR 1.71 [1.52-1.93]), including exacerbations requiring hospital admission and those treated with oral corticosteroids. Conclusions: Frequent productive cough represents an important indicator of adverse clinical outcomes across asthma and/or COPD. Research into the underlying pathologic mechanisms is required to support targeted therapy development. ClinicalTrials.gov: NCT02760329. abstract_id: PUBMED:31759270 Factors influencing decline in quality of life in smokers without airflow obstruction: The COPDGene study. Introduction: Current and former smokers with normal spirometry and with Preserved Ratio Impaired Spirometry (PRISm) experience respiratory events similar to chronic obstructive pulmonary disease (COPD) exacerbations. Exacerbations significantly reduce quality of life (QoL) in COPD patients; however, the effect of respiratory exacerbations on QoL in these groups is unknown.
We hypothesized that exacerbations and change in exacerbation status would predict QoL decline among normal spirometry and PRISm participants in COPDGene. Methods: COPDGene is a multicenter, longitudinal study in the U.S. designed to identify genetic determinants of COPD. We enrolled study subjects in Phase 1 of COPDGene and performed multivariable logistic regression models to determine independent predictors of decline in quality of life [>4 points on the St George's Respiratory Questionnaire (SGRQ)]. Separate analyses were performed for current and former smokers with normal spirometry and PRISm. Frequent exacerbator status was defined by > 2 moderate or >1 severe exacerbations in the year prior to the baseline and year 5 follow-up visits. Results: Independent predictors of QoL deterioration included current smoking, higher exacerbation frequency, and a change from infrequent to frequent exacerbation status (REF: infrequent to infrequent exacerbation status) in both groups [PRISm (OR = 3.15,95%CI, 1.67-5.94), normal spirometry (OR = 4.72,95%CI, 3.25-6.86)]. A change from frequent to infrequent exacerbation status did not lower the odds of QoL decline in either cohort. Conclusion: Continued smoking and the onset of frequent exacerbations were predictors of QoL decline in smokers with normal spirometry and PRISm. Further studies are needed to identify modifiable factors associated with decline in QoL in smokers. abstract_id: PUBMED:32754944 Lung-Specific Risk Factors Associated With Incident Hip Fracture in Current and Former Smokers. Hip fractures are associated with significant morbidity and mortality in smokers with lung disease, but whether lung-specific factors are associated with fracture risk is unknown. Our goal was to determine whether lung-specific factors associate with incident hip fracture and improve risk discrimination of traditional fracture risk models in smokers. The analysis consisted of a convenience sample of 9187 current and former smokers (58,477 participant follow-up years) participating in the Genetic Epidemiology of chronic obstructive pulmonary disease (COPD) longitudinal observational cohort study. Participants were enrolled between 2008 and 2011 with follow-up data collection through July 2018. Traditional risk factors associated with incident hip fracture (n = 361) included age, female sex, osteoporosis, prevalent spine and hip fracture, rheumatoid arthritis, and diabetes. Lung-specific risk factors included post-bronchodilator percent forced expiratory volume in 1 s (FEV1 %) predicted (OR, 0.95; 95% CI, 0.92-0.99 for each 10% increase), Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification (OR, 1.09; 95% CI, 1.002-1.19 for each higher stage), presence of CT-determined emphysema (OR, 1.34; 95% CI, 1.06-1.69), symptom scores (OR, 1.10; 95% CI, 1.03-1.19 for each higher unit score), 6-min walk distance (OR, 0.92; 95% CI, 0.90-0.95 for each 30-m increase), body mass index, airflow obstruction, dyspnea, and exercise (BODE) index (OR, 1.07; 95% CI, 1.01-1.13 for each higher unit score), total exacerbations (OR, 1.13; 95% CI, 1.10-1.16 per exacerbation), and annual exacerbations (OR, 1.37; 95% CI, 1.21-1.55 per exacerbation). In multivariable modeling, age, black race, osteoporosis, prevalent hip and spine fracture, rheumatoid arthritis, and diabetes were associated with incident hip fracture. 
The presence of emphysema, 6-min walk distance, and total number of exacerbations added to traditional models improved risk discrimination (integrated discrimination improvement [IDI] values 0.001 [95% CI, 0.0003-0.002], 0.001 [95% CI, 0.0001-0.002], and 0.008 [95% CI, 0.003-0.013], corresponding to relative IDIs of 12.8%, 6.3%, and 34.6%, respectively). These findings suggest that the incorporation of lung-specific risk factors into fracture risk assessment tools may more accurately predict fracture risk in smokers. © 2020 American Society for Bone and Mineral Research. Answer: The relationship between frequent moderate exacerbations and the progression of chronic obstructive pulmonary disease (COPD) in ex-smokers has been studied, but the findings are not entirely consistent. One study found that frequent moderate exacerbations do not contribute to an accelerated clinical and functional decline in COPD patients who are ex-smokers. This conclusion was based on a 2-year follow-up study of 100 ex-smoking COPD patients, which showed no significant difference in the decline of forced expiratory volume in one second (FEV1) between frequent exacerbators and infrequent exacerbators (PUBMED:25792820). However, another study suggests that an increased frequency of exacerbations is significantly associated with FEV1 decline even in ex-smokers. This study involved 102 COPD patients (44 smokers, 58 ex-smokers) over a 3-year period and found that frequent exacerbators had a significantly increased average annual rate of FEV1 decline compared to infrequent exacerbators (PUBMED:17112715). Additionally, a study on the stability of the frequent-exacerbation phenotype in COPD patients indicated that the exacerbation frequency and the rate of the frequent-exacerbation phenotype increase with COPD progression. The study also found that a history of exacerbations in the previous year is a significant risk factor for exacerbations of COPD (PUBMED:31188775). In summary, while one study suggests that frequent moderate exacerbations do not contribute to accelerated decline in ex-smokers with COPD (PUBMED:25792820), other research indicates that increased exacerbation frequency is associated with FEV1 decline in this population (PUBMED:17112715), and that exacerbation history is a significant risk factor for future exacerbations (PUBMED:31188775). Therefore, the impact of frequent moderate exacerbations on the progression of COPD in ex-smokers may vary and could be influenced by other factors such as the overall severity of the disease and individual patient characteristics.
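The COPDGene abstract above (PUBMED:35911870) relates annual exacerbation frequency to mortality with Cox proportional hazards regression and reports the results as hazard ratios. As an illustrative sketch only, the following Python code (using the lifelines package) shows how such an analysis might be set up; the cohort, covariate names, and effect sizes below are synthetic and hypothetical, not data from any cited study.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 500

# Hypothetical cohort: exacerbation frequency, age, smoking status
exac_per_year = rng.poisson(1.0, n)
age = rng.normal(65, 8, n)
current_smoker = rng.integers(0, 2, n)

# Simulate survival times whose hazard rises with exacerbation frequency, age, and smoking
hazard = 0.02 * np.exp(0.3 * exac_per_year + 0.03 * (age - 65) + 0.2 * current_smoker)
time_to_death = rng.exponential(1.0 / hazard)
censor_time = rng.uniform(1, 10, n)                # administrative censoring (years)
df = pd.DataFrame({
    "time": np.minimum(time_to_death, censor_time),
    "death": (time_to_death <= censor_time).astype(int),
    "exac_per_year": exac_per_year,
    "age": age,
    "current_smoker": current_smoker,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()   # the exp(coef) column is the adjusted hazard ratio per unit increase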
Instruction: Is hepatic resection absolutely contraindicated for hepatocellular carcinoma in Child-Pugh class C cirrhotic patients? Abstracts: abstract_id: PUBMED:10370588 Is hepatic resection absolutely contraindicated for hepatocellular carcinoma in Child-Pugh class C cirrhotic patients? Background/aims: Liver resection for hepatocellular carcinoma (HCC) in Child-Pugh class C cirrhotic patients is considered to be high risk and even contraindicated. This study examined our results of hepatectomy for HCC in such cirrhotic patients. Methodology: The clinicopathological features, as well as the early and late resection results, of Child-Pugh class A (n = 181) and class C (n = 13) patients were retrospectively reviewed and compared. The extent of hepatectomy was based on the pre-operative liver function test and indocyanine-green (ICG) clearance rate. Results: The tumor size in class C patients was smaller than that in class A patients. There were no significant differences with regard to operative blood loss, amount of blood transfusion, operative morbidity or mortality. The surgical margins of class C patients were narrower (p = 0.003). The tumors of class C patients had higher incidences of well-formed capsules and absence of satellite nodules. The 5-year disease-free and actuarial survival rates of class A and C patients were 35.4% and 40.7% (p = 0.28), and 48% and 50% (p = 0.13), respectively. Conclusions: Not all HCCs in Child-Pugh class C cirrhotic patients are contraindicated for liver resection. In the absence of uncontrollable ascites, marked jaundice and encephalopathy, surgical resection is still justified in some selected cases, in spite of a narrow surgical margin. abstract_id: PUBMED:23274319 Surgical management of hepatocellular carcinoma in Child-Pugh class B cirrhotic patients: hepatic resection and/or microwave coagulation therapy versus living donor liver transplantation. Background: The surgical management of hepatocellular carcinoma (HCC) in cirrhotic patients with Child-Pugh class B remains controversial. The aim of this study was to compare the results of hepatic resection plus microwave coagulation therapy (MCT) versus living donor liver transplantation (LDLT) for HCC in cirrhotic patients with Child-Pugh class B. Material/methods: Between January 1998 and June 2008, 30 patients underwent hepatic resection plus MCT and 40 patients underwent LDLT for HCC with Child-Pugh class B. Univariate and multivariate Cox proportional hazard models were established. Kaplan-Meier survival curves were generated, and a log-rank test was performed to compare group survival status. Results: There was no difference in overall survival after hepatic resection plus MCT (1-, 3-, and 5-year: 86.7%, 70.4%, and 70.4%, respectively) compared with LDLT (1-, 3-, and 5-year: 92.5%, 81.5%, and 72.6%, respectively). Disease-free survival was significantly better after LDLT compared with hepatic resection plus MCT. On multivariate analyses, a des-gamma-carboxy prothrombin (DCP) level of more than 300 mAU/mL was an independent risk factor for overall survival and recurrence of HCC after LDLT. In patients who met the Milan criteria preoperatively, 5-year overall survival following LDLT was significantly better than that after hepatic resection plus MCT. Incidentally found hepatocellular carcinomas had a higher tendency to be well differentiated in the explant liver after LDLT.
Conclusions: In cirrhotic patients with Child-Pugh class B who met the Milan criteria preoperatively, LDLT was associated with longer disease-free and overall survival than hepatic resection plus MCT. LDLT could not be indicated in patients with a DCP level of more than 300 mAU/mL. abstract_id: PUBMED:38434920 Systemic Chemotherapy for Advanced Hepatocellular Carcinoma in Patients With Child-Pugh class B. Background/aim: Numerous agents, including immune checkpoint inhibitors, are now available for hepatocellular carcinoma (HCC) treatment. Most trials involving systemic chemotherapy have included patients with Child-Pugh class A, while excluding or minimally enrolling those with Child-Pugh class B, due to liver dysfunction-related mortality. This study aimed to identify prognostic factors for survival in Child-Pugh class B patients receiving sorafenib (SOR), lenvatinib (LEN), atezolizumab plus bevacizumab (ATZ+BEV), or hepatic arterial infusion chemotherapy (HAIC). Patients And Methods: From December 2003 to June 2023, 137 patients with advanced HCC receiving systemic chemotherapies (SOR: n=43, LEN: n=16, ATZ+BEV: n=18, HAIC: n=60) were enrolled. Results: Overall survival (OS) and response rates did not differ significantly across treatments (SOR: 8.3 months, LEN: 10.2 months, ATZ+BEV: 8.5 months, HAIC: 7.3 months). Patients on HAIC and LEN had a lower rate of discontinuing treatment within three months compared to those on ATZ+BEV and SOR. HAIC was associated with fewer changes in ALBI score and better preservation of liver function. Multivariate logistic regression identified serum α-fetoprotein >400 ng/ml [hazard ratio (HR)=1.94; p=0.001], tumor count >5 (HR=1.55; p=0.043), and Child-Pugh score (HR=2.53; p=0.002) as independent predictors of OS. Conclusion: OS and response rates were similar across systemic chemotherapies. Prognosis for HCC in Child-Pugh class B patients was associated with liver function, necessitating further research for optimal treatment. abstract_id: PUBMED:28406546 Role of hepatic resection in patients with intermediate-stage hepatocellular carcinoma: A multicenter study from Japan. Transarterial chemoembolization (TACE) is recommended for patients with intermediate-stage (Barcelona Clinic Liver Cancer criteria B [BCLC-B]) hepatocellular carcinoma (HCC). However, patients with BCLC-B HCC can differ in background factors related to hepatic function, as well as tumor size and number. In the present study, we clarified the role of hepatic resection in patients with BCLC-B HCC. A total of 489 BCLC-B HCC patients with Child-Pugh class A disease initially treated with hepatic resection or TACE were included. After propensity score matching (n = 264), hepatic resection (hazard ratio [HR], 0.56; 95% confidence interval [CI], 0.35-0.91) was independently associated with survival in the multivariate analysis. We then divided patients into two groups based on the results of statistical analysis. There were 170 patients treated with resection and 319 with TACE. Child-Pugh score and number of tumors (cut-off, three tumors) were independently associated with type of HCC treatment in the multivariate analysis. We then divided patients into Group A (Child-Pugh score of 5 and ≤3 tumors; n = 186) and Group B (Child-Pugh score of 6 or ≥4 tumors; n = 303). In Group A, cumulative survival was significantly higher in the hepatic resection group than in the TACE group (P = 0.014).
In Cox proportional hazards models, hepatic resection (HR, 0.38; 95% CI, 0.23-0.64) was independently associated with survival in Group A patients. In Group B, treatment status was not associated with overall survival. Hepatic resection should be considered in patients with a Child-Pugh score of 5 and ≤3 tumors, despite having BCLC-B HCC. abstract_id: PUBMED:32812315 Initial treatment response to transarterial chemoembolization as a predictive factor for Child-Pugh class deterioration prior to refractoriness in hepatocellular carcinoma. Aim: Repeated transarterial chemoembolization (TACE) for intermediate-stage hepatocellular carcinoma (HCC) eventually leads to either deteriorated hepatic reserve or TACE refractoriness. Switching to molecular targeted agents after TACE requires preservation of hepatic reserve. This study aimed to investigate the predictive factors associated with early deterioration of hepatic reserve by repeated TACE prior to refractoriness. Methods: Ninety-three patients with intermediate-stage HCC who underwent TACE as the first-line treatment and had a Child-Pugh class A hepatic reserve were retrospectively analyzed. The time to Child-Pugh class deterioration (TTCPD), defined as the duration from initial TACE to the diagnosis of Child-Pugh class B or C prior to TACE refractoriness, was assessed. Patients who progressed to TACE refractoriness prior to Child-Pugh class deterioration were censored at TACE refractoriness. Results: The radiological response to initial TACE was assessed as responders and non-responders in 59 (63.4%) and 34 (36.6%) patients, respectively. The median TTCPD was 40.6 months in all patients. The hepatic reserve in 31 (33.3%) patients deteriorated to Child-Pugh class B or C prior to TACE refractoriness. In the multivariate analysis, non-response to initial TACE, albumin-bilirubin grade 2, and non-selective TACE were identified as independent predictors associated with a shortened TTCPD. The TTCPD was significantly shorter in the non-responders than in the responders to initial TACE (median, 19.6 vs. 55.9 months; P < 0.001). Conclusions: Failure to respond to initial TACE was a predictive factor for early deterioration of hepatic reserve prior to TACE refractoriness in intermediate-stage HCC patients with repeated TACE. abstract_id: PUBMED:22171142 Hepatocellular carcinoma in cirrhotic patients with portal hypertension: is liver resection always contraindicated? Aim: To analyze the outcome of hepatocellular carcinoma (HCC) resection in cirrhotic patients in relation to the presence of portal hypertension (PH) and the extent of hepatectomy. Methods: A retrospective analysis was performed of 135 patients with HCC on a background of cirrhosis who were submitted to curative liver resection. Results: PH was present in 44 (32.5%) patients. Overall mortality and morbidity were 2.2% and 33.7%, respectively. Median survival time in patients with or without PH was 31.6 and 65.1 mo, respectively (P = 0.047); in the subgroup with Child-Pugh class A cirrhosis, median survival was 65.1 mo and 60.5 mo, respectively (P = 0.257). Survival for patients submitted to limited liver resection was not significantly different in the presence or absence of PH. Conversely, median survival for patients after resection of 2 or more segments with or without PH was 64.4 mo and 163.9 mo, respectively (P = 0.035). Conclusion: PH is not an absolute contraindication to liver resection in Child-Pugh class A cirrhotic patients, but resection of 2 or more segments should not be recommended in patients with PH.
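The surgical-management abstract above (PUBMED:23274319) compares survival between treatment groups with Kaplan-Meier curves and a log-rank test, and several of the other abstracts report median survival by group. The following Python sketch, again using the lifelines package, illustrates that approach on synthetic data; the group sizes, median survival figures, and censoring window are hypothetical and do not reproduce any cited study.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)

def simulate_group(n, median_months):
    # Exponential survival times with a chosen median, plus uniform administrative censoring
    scale = median_months / np.log(2)
    event_time = rng.exponential(scale, n)
    censor_time = rng.uniform(6, 60, n)            # months of follow-up
    return pd.DataFrame({
        "time": np.minimum(event_time, censor_time),
        "event": (event_time <= censor_time).astype(int),
    })

resection = simulate_group(170, median_months=64)   # hypothetical resection arm
tace = simulate_group(319, median_months=40)        # hypothetical TACE arm

kmf = KaplanMeierFitter()
kmf.fit(resection["time"], event_observed=resection["event"], label="resection")
print("resection median survival:", kmf.median_survival_time_)
kmf.fit(tace["time"], event_observed=tace["event"], label="TACE")
print("TACE median survival:", kmf.median_survival_time_)

result = logrank_test(resection["time"], tace["time"],
                      event_observed_A=resection["event"],
                      event_observed_B=tace["event"])
print("log-rank p-value:", round(result.p_value, 4))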
abstract_id: PUBMED:27315783 Response to chemotherapy improves hepatic reserve for patients with hepatocellular carcinoma and Child-Pugh B cirrhosis. There is no established treatment for patients with advanced hepatocellular carcinoma (HCC) with Child-Pugh class B cirrhosis. The aim of the present study was to assess the efficacy of hepatic arterial infusion chemotherapy (HAIC) according to Child-Pugh score (CPS) and to evaluate the correlation of a patient's response to HAIC with hepatic reserve and outcome. We retrospectively reviewed the medical records of 377 patients treated with HAIC between March 2003 and February 2015. Subjects included 179 with Child-Pugh class B. Median overall survival was 12.1 months for patients with CPS = 7 (n = 75) and 11.9 months for patients with CPS = 8 (n = 58), which were significantly longer compared with those of patients with CPS = 9 (n = 46, 6.3 months). The objective response rates of patients with CPS = 7, 8 and 9 were 26.7%, 27.6% and 6.5%, respectively. The CPS of responders improved significantly after HAIC, whereas those of nonresponders did not. A multivariate analysis demonstrated that improved CPS, responses to HAIC and absence of extrahepatic lesions were independent favorable prognostic factors. Patients with CPS = 7 or 8 tolerated HAIC, but nine (19.6%) of patients with CPS = 9 were unable to complete one course. HAIC is effective and safe for patients with a CPS = 7 or 8 and improved hepatic reserve of responders significantly. abstract_id: PUBMED:37577406 Clinical outcomes by Child-Pugh Class in patients with advanced hepatocellular carcinoma in a community oncology setting. Aim: Many pivotal trials in advanced hepatocellular carcinoma (HCC) require participants to have Child-Pugh A disease. However, many patients in real-world practice are Child-Pugh B or C. This study examined treatment patterns and clinical outcomes in patients with advanced HCC treated with first-line systemic therapy. Materials & Methods: In this retrospective study, patients with HCC treated with first-line systemic therapy (2010-2017) were identified from US Oncology Network records. Outcomes included overall survival and progression-free survival, by Child-Pugh Class and prior liver-directed therapy. Results: Of 352 patients, 78.7% were Child-Pugh A or B, 96.6% received first-line sorafenib, and 33.8% received first-line-prior liver-directed therapy. Survival outcomes were similar for Child-Pugh A or B, and longer after first-line prior liver-directed therapy. Conclusion: First-line systemic therapy is beneficial in patients with Child-Pugh A or B, and after first-line prior liver-directed therapy. These findings may help position systemic therapy in the community setting. abstract_id: PUBMED:29372372 Surgery for Hepatocellular Carcinoma in Patients with Child-Pugh B Cirrhosis: Hepatic Resection Versus Living Donor Liver Transplantation. Background: Liver transplantation has been established as the optimal treatment for hepatocellular carcinoma in cirrhotic patients, but hepatic resection is also effective in patients with well-preserved liver function. Determining the suitable surgical treatment for patients with Child-Pugh class B cirrhosis is a more difficult challenge. Methods: We retrospectively compared the results of hepatic resection and living donor liver transplantation for hepatocellular carcinoma in 137 patients with Child-Pugh class B cirrhosis. The procedures were performed at Kyushu University Hospital from April 2014 through October 2016. 
Results: Patients who underwent hepatic resection were significantly older and had better liver function, larger tumor size, smaller number of tumors, and less surgical stress compared with patients who underwent living donor liver transplantation. The overall survival rate and the recurrence-free survival rate in patients with transplantation were significantly better than that in patients with resection. The multivariate analysis showed that recurrent hepatocellular carcinoma and microvascular invasion were significant prognostic factors for both overall and recurrence-free survival in the hepatic resection group. In the group with protein induced by vitamin K absence or antagonist-II ≥300 mAU/mL, both the overall survival curve and the recurrence-free survival curve in patients with living donor liver transplantation were not significantly different from those in patients with hepatic resection. Conclusions: Living donor liver transplantation for hepatocellular carcinoma in patients with Child-Pugh class B cirrhosis was favorable under the condition of protein induced by vitamin K absence or antagonist-II <300 mAU/mL in selected recipients and donors. Hepatic resection for recurrent hepatocellular carcinoma and excessive blood loss should be avoided in patients with Child-Pugh class B cirrhosis. abstract_id: PUBMED:25312843 Prognostic role of Child-Pugh score 5 and 6 in hepatocellular carcinoma patients who underwent curative hepatic resection. Background: It is unclear whether Child-Pugh score discriminates a prognosis of the Child-Pugh A patients who underwent hepatic resection for hepatocellular carcinoma. Methods: Between April 2000 and March 2011, 361 patients with Child-Pugh A who underwent curative hepatectomy were divided into 2 groups: Child-Pugh score 5 points group (CPS5) and Child-Pugh score 6 points group (CPS6); both CPS5 (n = 274) and CPS6 (n = 87) groups were compared. Results: Overall survival rates (1/2/5 years of the CPS5 and CPS6 groups were 90.9%/82.5%/62.4% and 80.6%/68.0%/47.6%, respectively) and disease-free survival rates (67.6%/51.8%/30.1% and 36.9%/16.0%/5.9%, respectively) showed that the CPS5 group was significantly better than the CPS6 group. Multivariate analysis revealed that Child-Pugh score at overall survival (P = .0125) and disease-free survival (P = .0103) was a significant prognostic factor. Conclusions: The overall survival and disease-free survival in Child-Pugh A showed quite a difference between the CPS5 and CPS6 groups. However, CPS5 and CPS6 may be a useful prognostic marker of hepatocellular carcinoma patients with hepatic resection. Answer: Hepatic resection for hepatocellular carcinoma (HCC) in Child-Pugh class C cirrhotic patients is generally considered high risk and often contraindicated. However, a study examining the results of hepatectomy for HCC in Child-Pugh class C patients suggests that not all cases are contraindicated for liver resection. The study found that in the absence of uncontrollable ascites, marked jaundice, and encephalopathy, surgical resection could still be justified in some selected cases, despite a narrow surgical margin. The 5-year disease-free and actuarial survival rates of class A and C patients were not significantly different, indicating that with careful selection, hepatic resection may be a viable option for some Child-Pugh class C patients with HCC (PUBMED:10370588).
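One of the abstracts above (PUBMED:28406546) balances the resection and TACE groups with propensity score matching before comparing survival. The following Python sketch shows a simple 1:1 nearest-neighbour propensity-score match on synthetic data; the covariates, caliper value, and cohort size are hypothetical and are used only to illustrate the general technique.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 489
df = pd.DataFrame({
    "resection": rng.integers(0, 2, n),            # 1 = hepatic resection, 0 = TACE
    "child_pugh_score": rng.integers(5, 7, n),     # 5 or 6
    "tumor_count": rng.integers(1, 8, n),
    "age": rng.normal(68, 9, n),
})

# Propensity score: modelled probability of receiving resection given the covariates
covariates = ["child_pugh_score", "tumor_count", "age"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["resection"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["resection"] == 1]
control = df[df["resection"] == 0]

# Match each treated patient to the nearest control on the propensity score
# (matching with replacement; a control may be reused), then apply a caliper
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
distance, idx = nn.kneighbors(treated[["ps"]])
caliper = 0.05
keep = distance.ravel() <= caliper
matched = pd.concat([treated[keep], control.iloc[idx.ravel()[keep]]])
print(keep.sum(), "matched pairs retained out of", len(treated), "treated patients")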
Instruction: Inflammatory Morbidity due to Compound Mandibular Body Fractures: Does It Have a Relationship with Treatment Outcome? Abstracts: abstract_id: PUBMED:25791420 Inflammatory Morbidity due to Compound Mandibular Body Fractures: Does It Have a Relationship with Treatment Outcome? Objective: The aim of this study was to evaluate the relationship between the degree of preoperative pain and trismus and the development of complications following the repair of isolated unilateral compound mandibular body fractures using a closed reduction technique. Subjects And Methods: This was a 7-year prospective study carried out at the Dental and Maxillofacial Surgery Clinic of the University of Calabar Teaching Hospital, Calabar, Nigeria. Of a total of 97 patients, 83 (85.6%) subjects (66 males, 17 females, ratio 5:1) were preoperatively evaluated for trismus and pain in a blinded manner by a single examiner, and complications were recorded postoperatively. The data obtained were statistically analyzed with EPI Info 2008 software. Results: Of the 83 patients treated, 13 (15.7%) developed complications. The fractures were most common in the age range of 21-40 years (n = 45, 54.2%). The age (p = 0.02) and gender (p = 0.01) distribution of the subjects was significant. The more severe the limitation of mouth opening (p = 0.03) and pain (p = 0.04) before treatment, the more complications developed, and these significantly affected treatment outcome. Impaired mastication and facial asymmetry (n = 17, 41.5%) were the most common complications. Conclusion: This study showed that posttrauma pain and trismus due to unilateral mandibular body fractures may be associated with the development of complications. An adequately powered prospective study treating patients at 5 or 7 days is required in order to make the case for later intervention. abstract_id: PUBMED:37451679 Use of chin sling for conservative treatment of mandibular fracture in a guinea pig (Cavia porcellus). Objective: To describe a treatment for a mandibular fracture in a guinea pig (Cavia porcellus). Animal: A 6-month-old sexually intact male guinea pig referred for a 24-hour history of hyporexia. Clinical Presentation, Progression, And Procedures: Physical examination showed lateral displacement of the incisors and inflammation and crepitation during mandible lateralization. Imaging tests revealed a minimally displaced complete oblique fracture of the left mandible with fracture of the left mandibular first premolar tooth and incisor tooth. Treatment And Outcome: Conservative treatment was established by placing a chin sling (CS) to immobilize the jaw and a nasogastric tube for nutritional support. Imaging tests repeated 3 weeks later showed initial callus formation, and the conservative treatment was discontinued. Follow-up examinations showed appetite and progressive weight gain. Five months later, the clinical crown of the left mandibular incisor was absent and a resorptive lesion on the left mandibular first premolar tooth was detected. Complete ossification of the fracture without premolar and molar teeth elongation was observed on control imaging tests. Ten months after initial examination, the patient was reportedly healthy with no signs of pain or dental disease. Clinical Relevance: Jaw fracture treatments in guinea pigs are poorly described in the literature. Surgical treatment can be challenging in this species due to its skull and dental anatomy.
Although CS was originally employed to increase the congruency of premolar and molar teeth after coronal reduction, this device yielded satisfactory results as a noninvasive, inexpensive treatment of a mandibular fracture in this guinea pig. Alopecia on the back of the head was the only undesired adverse effect associated with the CS. abstract_id: PUBMED:32735787 Can Patients With Isolated Mandibular Fractures Be Treated as Outpatients? Purpose: Treating mandibular fractures urgently is controversial. The purpose of this study was to estimate and compare the rates of postoperative inflammatory complications (POICs) in patients with isolated mandibular fractures treated in a nonurgent manner by an outpatient protocol versus a traditional, urgent inpatient protocol. Patients And Methods: We implemented a retrospective cohort study and enrolled a sample of patients with isolated mandibular fractures treated with open reduction-internal fixation (ORIF). The primary predictor variable was the treatment protocol: outpatient (elective) or inpatient (urgent). The outpatient group was treated with closed reduction and intermaxillary fixation, discharged, and scheduled for definitive treatment as outpatients. The inpatient group was admitted to the hospital, and the fracture was treated with ORIF as soon as possible. The primary outcome variable was POIC (present or absent). Descriptive, bivariate, and multiple logistic regression statistics were computed to measure the association between the treatment protocol and POICs, with statistical significance set at P < .05. Results: The study sample was composed of 193 patients, with 82 in the outpatient group and 111 in the inpatient group. The frequency of POICs was 17.1% and 18.9% in the outpatient and inpatient groups, respectively (P = .13; relative risk, 0.80; 95% confidence interval [CI], 0.62 to 1.0). The time to ORIF was not significantly associated with POICs (P = .71). After adjustment for treatment group, fracture location, and time to fracture stabilization, smoking (P = .04, odds ratio, 2.3; 95% CI, 1.0 to 5.1) and intraoral incision with a transbuccal trocar (P = .02, odds ratio, 3.4; 95% CI, 1.2 to 9.8) were associated with an increased risk of POICs. Length of stay was 0.6 ± 0.8 days in the outpatient group compared with 2.7 ± 2.0 days in the inpatient group (P < .0001). Conclusions: An outpatient model to treat isolated mandibular fractures was not associated with an increased risk of POICs. This outpatient care model reduced the hospital length of stay without increasing the risk of POICs. abstract_id: PUBMED:14703483 Mandibular fracture fixed with PDLLA/nano-HA compound plates: an animal study. Objective: To investigate the applicability of PDLLA/nano-HA compound plates used in internal fixation of mandibular fractures. Methods: A rabbit mandibular fracture model was used in this study. Clinical findings, the amount of callus, and histological observations were studied and compared with those for PDLLA plates. Results: All fractures were fixed rigidly. One animal experienced local inflammation with subsequent fistula formation on the PDLLA plate side at 2 and 3 weeks postoperatively. No side effects were found on the compound plate side. During the early stage of bone healing, the amount of callus on the compound plate side was apparently greater than that on the other side, and histological study showed that the osteoblasts and fibroblasts were more active on the compound plate side in the early healing stage.
Conclusions: The PDLLA/nano-HA compound plate has an appropriate degradation time. Compared with PDLLA plates, it is more effective and safer when used for mandibular fracture fixation. abstract_id: PUBMED:24923155 Immediate and long-term results of surgical treatment of patients for traumatic mandibular fracture. The immediate and long-term results of surgical treatment were analyzed for 286 patients operated on for traumatic mandibular fractures in the Department of Oral and Maxillofacial Surgery, National O. O. Bogomolets Medical University. In 67% of patients the anatomical shape of the bone was adequately restored. The presence of residual displacements in other cases was determined by the fracture type and localization, the technical complexity of the surgical intervention, and a lack of fixation rigidity under certain functional load conditions. At long-term follow-up, infectious and inflammatory complications were observed in 13.4% of patients, delayed union and non-union of bone fragments in 4.7%, arthrosis with persistent dysfunction of the temporomandibular joint in 6.7%, and fibrous ankylosis in 1.3%. Secondary displacement of fragments was observed in 23.5% of patients due to insufficient stiffness and reliability of the bone-fixator system. Imprecise repositioning of fragments and secondary displacement in the early and late postoperative periods were the main causes of occlusal disturbances of various severities, noted in 28% of the operated patients, as well as limitation of mouth opening (10%), TMJ disorders and changes in the masticatory stereotype (33%), and pain and discomfort when chewing tough food (35%). A statistical analysis of the effectiveness of different osteosynthesis methods depending on the type and localization of the fracture was carried out, and recommendations for the use of fixation devices in clinical practice were given. abstract_id: PUBMED:24074502 Impairment of wound healing after operative treatment of mandibular fractures, and the influence of dexamethasone. Our aim was to clarify the incidence of impaired wound healing after open reduction and osteosynthesis of mandibular fractures, and to find out whether the use of dexamethasone during the operation increased the risk. Patients were drawn from a larger group of healthy adult dentate patients who had participated in a single-blind, randomised study, the aim of which was to clarify the benefits of operative dexamethasone after treatment of facial fractures. The present analysis comprised 41 patients who had had open reduction and fixation of mandibular fractures with titanium miniplates and monocortical screws through one or 2 intraoral approaches. The outcome variable was impaired healing of the wound. The primary predictive variable was the perioperative use of dexamethasone; other potential predictive variables were age, sex, smoking habit, type of fracture, delay in treatment, and duration of operation. Wound healing was impaired in 13/41 patients (32%) (13/53 of all fractures). The incidence among patients who were given dexamethasone and those who were not did not differ significantly. Only age over 25 was significantly associated with delayed healing (p=0.02). The use of dexamethasone 30 mg perioperatively did not significantly increase the risk of impaired wound healing in healthy patients with clinically uninfected mandibular fractures fixed with titanium miniplates through an intraoral approach.
Older age is a significant predictor of impaired healing, which emphasises the importance of thorough anti-infective care in these patients during and after the operation. abstract_id: PUBMED:26335408 Reporting of Postoperative Pain Management Protocols in Randomized Clinical Trials of Mandibular Fracture Repair: A Systematic Review. Importance: The control of pain associated with mandibular fractures is an important treatment outcome that affects function, adherence to treatment regimens, and patient comfort and satisfaction. Objective: To explore the pain management protocols reported in studies of mandibular fractures, including the reporting of quality-of-life measures. Evidence Review: PubMed/MEDLINE, EMBASE, Cochrane CENTRAL, and clinicaltrials.gov were searched for randomized clinical trials published from 1970 to July 2014. We followed PRISMA reporting standards to assess study eligibility and extract data. Studies of patients older than 16 years who underwent operative mandibular fracture management were included. The primary data collected included the type of analgesic prescribed, associated adverse effects of the analgesic, method of pain assessment, and use of quality-of-life measures. A pain attentiveness score was assigned to studies based on the comprehensiveness of the information reported. Several variables were reviewed to determine the factors that predict reporting of pain-related data. Assessments of risk for bias were performed using the Cochrane Collaboration's domain-based evaluation method. Findings: The initial search identified 111 articles, of which 38 met inclusion criteria. Among the 38 reviewed articles, there were 38 trials and 1808 unique patients represented. Among the 38 articles, the procedures reported included maxillomandibular fixation only in 6 (16%), open reduction with internal fixation only in 20 (53%), and both in 12 (32%). Specific analgesics prescribed were reported in only 5 of the 38 studies (13%), and 3 of these used a combination of nonsteroidal anti-inflammatory drugs and acetaminophen (paracetamol). Thirteen studies (34%) reported pain assessments and 5 (13%) included quality-of-life measures. Geographic region was the only variable that predicted pain attentiveness, with studies from Europe (3 of 11 studies [27%]) and Asia (6 of 16 studies [38%]) most likely to have a high pain attentiveness score. A low rating was least common in the United States (2 of 5 studies [40%]) (P = .047, Fisher exact test). Most of the studies had unclear (n = 27) or high (n = 6) risks for bias in the key domains assessed. Conclusions And Relevance: Pain management is a neglected outcome in randomized clinical trials of mandibular trauma; most studies did not describe the specific analgesics used. Many randomized clinical trials (13 [34%]) assessed pain levels among patients without providing information about the agents prescribed. The incorporation of validated pain measures and quality-of-life scores in future studies of mandibular trauma would focus attention on this key outcome measure. abstract_id: PUBMED:29199340 Analysis of relative factors associated with postoperative infection of mandibular fractures Purpose: To explore risk factors associated with postoperative inflammatory complications(POIC) after rigid internal fixation of mandibular fractures. 
Methods: A case-control study was designed and a sample of 447 patients treated with rigid internal fixation for mandibular fractures was selected from January 2009 to December 2015. Twenty-six patients who developed POIC after treatment were categorized into the experimental group. Twenty-six cases with no complications were enrolled into the control group. Data were collected, and bivariate and multiple logistic analyses were used to identify factors associated with POIC between the two groups using the SAS 9.2 software package. Results: In the multiple regression model, an increasing Mandibular Injury Severity Score (OR=2.509; 95%CI 1.083-5.814) and health condition below the baseline (OR=5.019; 95%CI 1.294-19.472) were significantly (P<0.05) associated with an increased risk for a POIC. However, in bivariate analyses, the use of antibiotics was associated with POIC (P<0.05). Conclusions: Postoperative inflammatory complications are associated with the Mandibular Injury Severity Score and the health condition of the patients. abstract_id: PUBMED:15583535 Comparative study of intra-articular irrigation and corticosteroid injection versus closed reduction with intermaxillary fixation for the management of mandibular condyle fractures. Objective: To evaluate the clinical outcome of a modified conservative treatment protocol involving intra-articular irrigation and corticosteroid injection into the superior joint compartment (SJC) of patients with fresh mandibular condyle fractures. Study Design: A total of 26 consecutive unilateral fresh condylar fractures in 26 patients were divided into the intra-articular irrigation (IR) group (14 patients, 14 joints) and the conventional conservative treatment (CC) group (12 patients, 12 joints). In the IR group, the SJCs of the fractured joints were irrigated with saline solution and injected with dexamethasone sodium, followed by conventional rehabilitation. In the CC group, patients were treated by closed reduction with intermaxillary fixation (IMF) for 2 weeks, followed by conventional rehabilitation. Clinical outcome at 1 month, 3 months, 6 months, and 1 year after injury was determined by clinical examination of jaw motion, joint pain, and occlusal changes. The posttreatment results were compared with the pretreatment baseline data. Between-group differences in clinical parameters were analyzed. Results: There were significant between-group differences in the range of mandibular motion at 1 month and 3 months after injury (1 month, P=.0022; 3 months, P=.0022). In the IR group, joint pain was well relieved from the early stage of treatment. Occlusal changes were found in 1 patient in the IR group and 4 patients in the CC group at 1 year after injury. Conclusions: The modified treatment protocol involving intra-articular irrigation and corticosteroid injection into the SJC is a more effective and quick-acting modality than conventional closed reduction with IMF for functional recovery and control of clinical symptoms of patients with unilateral fresh condylar fractures. abstract_id: PUBMED:34171226 Is Screw-Fracture Proximity or Residual Fracture Displacement Correlated to Outcomes After Open Reduction and Internal Fixation of Mandibular Angle Fractures? Purpose: Computed tomography (CT) imaging is commonly obtained following open reduction and internal fixation (ORIF) of mandibular fractures but the significance of common findings is unknown.
The purpose of this study is to examine the relationship of screw-to-fracture proximity and residual fracture displacement to post-operative complications following ORIF of mandibular angle fractures treated with monocortical fixation techniques. Methods: The authors conducted a retrospective cohort study consisting of all patients with mandibular angle fractures treated with ORIF using monocortical fixation at the lateral surface of the mandible from the dates August 1, 2015 to May 31, 2020. The predictor variables were the distance measurements between the 2 closest screws to the fracture and the fracture line and the amount of residual fracture displacement, both measured on postoperative CT. The primary outcome variable was the presence of postoperative inflammatory complications (POICs). Statistical analysis was performed using logistic regression and Bayesian variable selection to calculate posterior probability of importance for the variables of interest. Results: Of the 285 patients included in the study, 84.6% were men and the average age was 30.8 years. POICs occurred in 22.1% of the patients. Age, smoking, homelessness, noncompliance and drug use were associated with POICs. However, none of the screw-to-fracture distances were associated with POICs, including linear, dichotomous or polynomial transformations of these variables. Additionally, the residual fracture displacement distances and transformations of these distances were also not associated with POICs. Conclusion: The present study did not find any evidence to suggest that a closer screw-fracture distance or increased residual fracture displacement on postoperative CT imaging increased the risk of POICs for mandibular angle fractures treated with ORIF using monocortical fixation at the lateral border. Answer: The study "Inflammatory Morbidity due to Compound Mandibular Body Fractures: Does It Have a Relationship with Treatment Outcome?" aimed to evaluate the relationship between preoperative pain and trismus with the development of complications following the repair of isolated unilateral compound mandibular body fractures using a closed reduction technique. The study was a 7-year prospective study carried out at the Dental and Maxillofacial Surgery Clinic of the University of Calabar Teaching Hospital, Calabar, Nigeria, involving 83 subjects. The results indicated that the more severe the limitation of mouth opening (trismus) and pain before treatment, the more complications developed postoperatively. These complications significantly affected treatment outcomes, with impaired mastication and facial asymmetry being the most common complications. The study concluded that posttrauma pain and trismus due to unilateral mandibular body fractures might be associated with the development of complications, suggesting a potential relationship between inflammatory morbidity and treatment outcomes (PUBMED:25791420).
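Several of the mandibular fracture abstracts above report adjusted odds ratios for postoperative inflammatory complications (POICs) from multivariable logistic regression. The following Python sketch, using statsmodels on synthetic data, shows how such odds ratios and 95% confidence intervals might be computed; the predictors, sample size, and outcome below are hypothetical and are not taken from any cited study.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 285
df = pd.DataFrame({
    "age": rng.normal(31, 10, n),
    "smoker": rng.integers(0, 2, n),
    "injury_severity": rng.integers(0, 10, n),
    "poic": rng.integers(0, 2, n),                 # outcome: 1 = complication occurred
})

# Multivariable logistic regression of POIC on the candidate predictors
X = sm.add_constant(df[["age", "smoker", "injury_severity"]])
model = sm.Logit(df["poic"], X).fit(disp=False)

# Exponentiate coefficients and confidence limits to obtain adjusted odds ratios
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios.drop(index="const"))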
Instruction: Are patients with renal failure good candidates for percutaneous coronary revascularization in the new device era? Abstracts: abstract_id: PUBMED:11113047 Are patients with renal failure good candidates for percutaneous coronary revascularization in the new device era? Background: Patients with end-stage renal disease undergoing conventional balloon angioplasty have reduced procedural success and increased complication rates. This study was designed to determine the immediate and long-term outcomes of patients with varying degrees of renal failure undergoing percutaneous coronary intervention in the current device era. Methods And Results: We compared the immediate and long-term outcomes of 362 renal failure patients (creatinine >1.5 mg/dL) with those of 2972 patients with normal renal function who underwent percutaneous coronary intervention between 1994 and 1997. Patients with renal failure were older and had more associated comorbidities. They had reduced procedural success (89.5% versus 92.9%, P=0.007) and greater in-hospital combined major event (death, Q-wave myocardial infarction, emergent CABG; 10.8% versus 1.8%; P<0.0001) rates. Renal failure was an independent predictor of major adverse cardiac events (MACEs) (OR, 3.41; 95% CI, 1.84 to 6.22; P<0.00001). Logistic regression analysis identified shock, peripheral vascular disease, balloon angioplasty strategy, and unstable angina as independent predictors of in-hospital MACEs in the renal group. Compared with 362 age- and sex-matched patients selected from the control group, patients with renal failure had a higher mortality rate (27.7% versus 6.1%, P<0.0001) and a greater MACE rate (51% versus 33%, P<0.001) at long-term follow-up. Cox regression analysis identified age and PTCA strategy as independent predictors of long-term MACEs in the renal group. Finally, within the renal failure population, the dialysis and nondialysis patients experienced remarkably similar immediate and long-term outcomes. Conclusions: Although patients with renal failure can be treated with a high procedural success rate in the new device era, they have an increased rate of major events both in hospital and at long-term follow-up. Nevertheless, utilization of stenting and debulking techniques improves immediate and long-term outcomes. abstract_id: PUBMED:31376902 Coronary Revascularization in Patients With Advanced Chronic Kidney Disease. Patients with chronic kidney disease (CKD) have an increased risk of obstructive coronary artery disease (CAD), whereas patients with end stage renal disease who are receiving hemodialysis represent a population at particularly high risk of developing cardiac ischemic events. Patients with CKD and acute coronary syndromes should be treated the same way as acute coronary syndrome patients without kidney dysfunction. The benefit of revascularization in patients with advanced kidney failure and CAD is unknown. Observational studies suggest that revascularization might confer a survival benefit compared with medical therapy alone. Little evidence from randomized trials exists regarding the effectiveness of revascularization of CAD with either coronary artery bypass grafting or percutaneous coronary intervention versus medical therapy alone in patients with CKD. The risk of contrast-induced nephropathy is a major concern when percutaneous coronary intervention is performed in patients with CKD. Strict rehydration protocols and techniques to minimize contrast use are paramount to reduce this risk.
Finally, in CKD patients who are awaiting kidney transplantation, a noninvasive or invasive CAD screening approach according to the cardiovascular risk profile should be used. Revascularization should be performed in candidates with critical lesions. abstract_id: PUBMED:31500492 Hybrid Coronary Artery Revascularization: A Review and Current Evidence. Objective: The role of hybrid coronary revascularization (HCR), which utilizes the combination of minimally invasive surgical coronary artery bypass grafting of the left anterior descending artery and percutaneous coronary intervention (PCI) of non-left anterior descending vessels to treat multivessel coronary artery disease, is expanding. We set out to provide a review of this technology. Methods: We conducted a retrospective analysis of all minimally invasive hybrid operations performed at our institution from September 2004 to December 2018. An effective analysis comparing patients undergoing HCR vs off-pump or on-pump surgical coronary artery revascularization was undertaken using an adjusted analysis with inverse-probability weighting based on the propensity score. Outcomes that were assessed include death, myocardial infarction, stroke, atrial fibrillation, renal failure, requirement of blood transfusion, conversion to open procedure (in the hybrid group), length of stay in intensive care unit, and total length of stay in hospital. Intention-to-treat analysis was performed. An up-to-date literature review of HCR complements this study. Results: Since 2004 a total of 191 consecutive patients (61.4±11.1 years; 142 males and 49 females) underwent HCR (robotic-assisted coronary artery bypass graft of the left internal thoracic artery to the left anterior descending coronary artery (LAD) and PCI of a non-LAD vessel) in a single- or double-stage fashion. Successful HCR occurred in 183 of the 191 patients (8 patients required intraoperative conversion to conventional coronary bypass). From our comparative analysis and literature review we found no significant difference between HCR and coronary artery bypass grafting groups with respect to in-hospital and 1-year follow-up. Conclusions: Current evidences suggest that HCR is a feasible, safe, and effective coronary artery revascularization strategy in selected patients with multivessel coronary artery disease. abstract_id: PUBMED:32890452 Trends of repeat revascularization choice in patients with prior coronary artery bypass surgery. Objective: To examine rates and predictors repeat revascularization strategies (percutaneous coronary intervention [PCI] and coronary artery bypass grafting [CABG]) in patients with prior CABG. Methods: Using the National Inpatient Sample, patients with a history of CABG hospitalized for revascularization by PCI or CABG from January 2004 to September 2015 were included. Regression analyses were performed to examine predictors of receipt of either revascularization strategy as well as in-hospital outcomes. Results: The rate of redo CABG doubled between 2004 (5.3%) and 2015 (10.3%). Patients who underwent redo CABG were more comorbid and experienced significantly worse major adverse cardiovascular and cerebrovascular events (odds ratio [OR]: 5.36 95% CI 5.11-5.61), mortality (OR 2.84 95% CI 2.60,-3.11), bleeding (OR 5.97 95% CI 5.44-6.55) and stroke (OR 2.15 95% CI 1.92-2.41), but there was no difference in cardiac complications between groups. Thoracic complications were high in patients undergoing redo CABG (8%), especially in females. 
Factors favoring receipt of redo CABG compared to PCI included male sex, age < 80 years, and absence of diabetes and renal failure. Conclusion: Reoperation in patients with prior CABG has doubled in the United States over a 12-year period. Patients undergoing redo CABG are more complex and are associated with worse clinical outcomes than those receiving PCI. abstract_id: PUBMED:37517398 Left Main Coronary Artery Revascularization in Patients with Impaired Renal Function: Percutaneous Coronary Intervention versus Coronary Artery Bypass Grafting. Introduction: The evidence about the optimal revascularization strategy in patients with left main coronary artery (LMCA) disease and impaired renal function is limited. Thus, we aimed to compare the outcomes of LMCA disease revascularization (percutaneous coronary intervention [PCI] vs. coronary artery bypass grafting [CABG]) in patients with and without impaired renal function. Methods: This retrospective cohort study included 2,138 patients recruited from 14 centers between 2015 and 2019. We compared patients with impaired renal function who had PCI (n = 316) to those who had CABG (n = 121) and compared patients with normal renal function who had PCI (n = 906) to those who had CABG (n = 795). The study outcomes were in-hospital and follow-up major adverse cardiovascular and cerebrovascular events (MACCE). Results: Multivariable logistic regression analysis showed that the risk of in-hospital MACCE was significantly higher in CABG compared to PCI in patients with impaired renal function (odds ratio [OR]: 8.13 [95% CI: 4.19-15.76], p < 0.001) and normal renal function (OR: 2.59 [95% CI: 1.79-3.73]; p < 0.001). There were no differences in follow-up MACCE between CABG and PCI in patients with impaired renal function (HR: 1.14 [95% CI: 0.71-1.81], p = 0.585) and normal renal function (HR: 1.12 [0.90-1.39], p = 0.312). Conclusions: PCI could have an advantage over CABG in revascularization of LMCA disease in patients with impaired renal function regarding in-hospital MACCE. The follow-up MACCE was comparable between PCI and CABG in patients with impaired and normal renal function. abstract_id: PUBMED:31255335 Comparative Effectiveness of Medical Therapy, Percutaneous Revascularization, and Surgical Coronary Revascularization in Cardiovascular Risk Subgroups of Patients With CKD: A Retrospective Cohort Study of Medicare Beneficiaries. Rationale & Objective: Prior studies suggesting that medical therapy is inferior to percutaneous (percutaneous coronary intervention [PCI]) or surgical (coronary artery bypass grafting [CABG]) coronary revascularization in chronic kidney disease (CKD) have not adequately considered medication optimization or baseline cardiovascular risk and have infrequently evaluated progression to kidney failure. We compared, separately, the risks for kidney failure and death after treatment with PCI, CABG, or optimized medical therapy for coronary disease among patients with CKD stratified by cardiovascular disease risk. Study Design: Retrospective cohort study. Setting & Participants: 34,385 individuals with CKD identified from a national 20% Medicare sample who underwent angiography or diagnostic stress testing without (low risk) or with (medium risk) prior cardiovascular disease or who presented with acute coronary syndrome (high risk). Exposures: PCI, CABG, or optimized medical therapy (defined by the addition of cardiovascular medications in the absence of coronary revascularization).
Outcomes: Death, kidney failure, composite outcome of death or kidney failure. Analytical Approach: Adjusted relative rates of death, kidney failure, and the composite of death or kidney failure estimated from Cox proportional hazards models. Results: Among low-risk patients, 960 underwent PCI, 391 underwent CABG, and 6,426 received medical therapy alone; among medium-risk patients, 1,812 underwent PCI, 512 underwent CABG, and 9,984 received medical therapy alone; and among high-risk patients, 4,608 underwent PCI, 1,330 underwent CABG, and 8,362 received medical therapy alone. Among low- and medium-risk patients, neither CABG (HRs of 1.22 [95% CI, 0.96-1.53] and 1.08 [95% CI, 0.91-1.29] for low- and medium-risk patients, respectively) nor PCI (HRs of 1.14 [95% CI, 0.98-1.33] and 1.02 [95% CI, 0.93-1.12], respectively) were associated with reduced mortality compared with medical therapy, but in low-risk patients, CABG was associated with a higher rate of the composite, death or kidney failure (HR, 1.25; 95% CI, 1.02-1.53). In high-risk patients, CABG and PCI were associated with lower mortality (HRs of 0.57 [95% CI, 0.51-0.63] and 0.70 [95% CI, 0.66-0.74], respectively). Also, in high-risk patients, CABG was associated with a higher rate of kidney failure (HR, 1.40; 95% CI, 1.16-1.69). Limitations: Possible residual confounding; lack of data for coronary angiography or left ventricular ejection fraction; possible differences in decreased kidney function severity between therapy groups. Conclusions: Outcomes associated with cardiovascular therapies among patients with CKD differed by baseline cardiovascular risk. Coronary revascularization was not associated with improved survival in low-risk patients, but was associated with improved survival in high-risk patients despite a greater observed rate of kidney failure. These findings may inform clinical decision making in the care of patients with both CKD and cardiovascular disease. abstract_id: PUBMED:23011752 Role of arterial stiffness and impaired renal function in the progression of new coronary lesions after percutaneous coronary intervention. In the era of drug-eluting stents, revascularization of an initially non-target site owing to its progression as a new culprit lesion has emerged as a new therapeutic target of coronary artery disease. We aimed to clarify the prognostic factors for the progression of a previously non-significant coronary portion after prior percutaneous coronary intervention (PCI). We examined 275 patients who underwent PCI between February 2010 and January 2011 and had follow-up coronary angiography (CAG) after 6-12 months. Patients with target lesion revascularization were excluded. Finally, a total of 236 patients were included in this study. Thirty-three patients (14 %) underwent additional clinically driven PCI to treat previously non-significant lesions. There was no difference in background clinical characteristics between patients with and without additional PCI. The prevalence of chronic kidney disease (CKD; 61 vs. 31 %, p = 0.001) and multivessel disease (MVD; 55 vs. 35 %, p = 0.027), and the brachial-ankle pulse wave velocity (baPWV; 1,838 ± 371 vs. 1,589 ± 313 cm/s, p < 0.001) were significantly higher in patients with additional PCI than in those without. A multivariate analysis showed that CKD, MVD, higher baPWV, and lower high-density lipoprotein cholesterol at the follow-up CAG were independent determinants of the progression of new culprit coronary lesions. 
In conclusion, higher baPWV, CKD, and MVD are independent predictors of later additional PCI, suggesting an important role for arterial stiffness and impaired renal function in the progression of new culprit coronary artery lesions after PCI. abstract_id: PUBMED:27839805 Review of early hospitalisation after percutaneous coronary intervention. Background: Percutaneous coronary intervention (PCI) is the most common modality of revascularization in patients with coronary artery disease. Understanding the readmission rates and reasons for readmission after PCI is important because readmissions are a quality of care indicator, in addition to being a burden to patients and healthcare services. Methods: A literature review was performed. Relevant studies are described by narrative synthesis with the use of tables to summarize study results. Results: Data suggest that 30-day readmissions are not uncommon. The rate of readmission after PCI is highly influenced by the cohort and the healthcare system studied, with 30-day readmission rates reported to be between 4.7% and 15.6%. Studies consistently report that a majority of readmissions within 30 days are due to cardiac-related or complication-related disorders. Female sex, peripheral vascular disease, diabetes mellitus, renal failure and non-elective PCI are predictive of readmission. Studies also suggest that there is a greater risk of mortality among patients who are readmitted compared to those who are not readmitted. Conclusion: Readmission after PCI is common and its rate is highly influenced by the type of cohort studied. There is clear evidence that the majority of readmissions within 30 days are cardiac related. While there are many predictors of readmission following PCI, it is not known whether targeting patients with modifiable predictors could prevent or reduce the rates of readmission. abstract_id: PUBMED:22490630 Clinical outcome of patients undergoing emergent coronary artery bypass grafting following failed percutaneous coronary intervention in the stent era. Objective: To explore clinical outcomes of patients undergoing emergent coronary artery bypass grafting (CABG) following failed percutaneous coronary intervention (PCI) in the stent era. Methods: Eleven patients who underwent emergent CABG following failed PCI from January 2002 to December 2010 were enrolled. The in-hospital follow-up included cardiac deaths, Q-wave myocardial infarction, kidney failure, and cerebrovascular events. The clinical end-point of out-of-hospital follow-up was the major adverse cardiac events including death, myocardial infarction, and target lesion revascularization. Results: The patients were (61 ± 5) years old. Coronary angiography showed that 5 patients had triple-vessel lesions. There were 9 target lesions on the left anterior descending artery. There were 3 (27.3%) severely calcified lesions, 4 (36.4%) chronic total occlusions, and 4 (36.4%) diffuse long lesions. Reasons for emergent CABG were dissection (n = 5, 45.5%), perforation (n = 3, 27.3%), failure to achieve sufficient predilation (n = 1, 9.1%), acute closure (n = 1, 9.1%) and stent loss (n = 1, 9.1%). The average duration of follow-up was (47 ± 33) months. During in-hospital follow-up, there were 1 (9.1%) cardiac death and 2 (18.2%) Q-wave myocardial infarctions. During follow-up after hospital discharge, 1 patient (9.1%) died of kidney failure, and there was no rehospitalization due to cardiac events. Conclusions: Emergent CABG after failed PCI often occurred in patients with complex coronary lesions.
The long term outcome of patients requiring emergent CABG after failed PCI was favorable in this cohort. abstract_id: PUBMED:35287949 Outcomes with revascularization and medical therapy in patients with coronary disease and chronic kidney disease: A meta-analysis. Background And Aims: Chronic kidney disease (CKD) confers a high risk for poor cardiovascular outcomes. We conducted a systematic review and meta-analysis to estimate the effects of revascularization as the initial management strategy compared with medical therapy among patients with CKD and coronary artery disease. Methods: A Medline/PubMed literature research was conducted to identify randomized studies comparing early coronary revascularization with optimal medical therapy or medical therapy alone in patients with CKD (estimated glomerular filtration rate <60 mL/min/1.73 m2 or maintenance dialysis). The primary outcome was myocardial infarction. The secondary outcomes were all-cause mortality or progression to kidney failure. The risk ratio (RR) was estimated using a random-effects model. Results: Eleven randomized trials were included (3422 patients). Revascularization was associated with lower incidence of myocardial infarction compared with medical therapy in patients with CKD: RR 0.71 (95% confidence interval [CI] 0.54-0.94; p=0.02). This result was mainly driven from a significantly lower incidence of myocardial infarction with early revascularization among patients with stable coronary artery disease: RR 0.59; 95% CI 0.37-0.93. A similar incidence of all-cause mortality was observed with both treatment strategies: RR 0.88 (95% CI 0.72-1.08; p=0.22). A trend towards lower incidence of all-cause mortality was observed with revascularization in the subgroup of patients presenting with NSTE-ACS: RR 0.73 (95% CI 0.51-1.04; p=0.08) but not among patients with stable coronary disease. There was no difference in progression to kidney failure between the two strategies. Conclusions: Coronary revascularization may be superior to medical therapy among patients with CKD and coronary disease. Answer: Patients with renal failure, including those with end-stage renal disease, present a challenging population for percutaneous coronary revascularization (PCI) in the new device era. Studies have shown that these patients have reduced procedural success and increased complication rates compared to those with normal renal function (PUBMED:11113047). Despite the high procedural success rate that can be achieved with the utilization of stenting and debulking techniques, patients with renal failure still experience an increased rate of major events both in-hospital and at long-term follow-up (PUBMED:11113047). The benefit of revascularization in patients with advanced kidney failure and coronary artery disease (CAD) is not well established, and the risk of contrast-induced nephropathy is a major concern when performing PCI in patients with chronic kidney disease (CKD) (PUBMED:31376902). However, observational studies suggest that revascularization might confer a survival benefit compared to medical therapy alone (PUBMED:31376902). Hybrid coronary revascularization (HCR), which combines minimally invasive surgical coronary artery bypass grafting of the left anterior descending artery with PCI of non-left anterior descending vessels, has been suggested as a feasible, safe, and effective strategy in selected patients with multivessel coronary artery disease (PUBMED:31500492). 
In patients with prior coronary artery bypass surgery, the choice of repeat revascularization strategy (PCI vs. CABG) is influenced by factors such as sex, age, and the presence of diabetes and renal failure, with redo CABG being associated with worse clinical outcomes than PCI (PUBMED:32890452). For patients with left main coronary artery disease and impaired renal function, PCI could have an advantage over CABG regarding in-hospital major adverse cardiovascular and cerebrovascular events (MACCE), with follow-up MACCE being comparable between the two strategies in patients with impaired and normal renal function (PUBMED:37517398). In summary, while patients with renal failure can be treated with PCI in the new device era, they face increased risks and complications. The decision to proceed with PCI should be individualized, taking into account the patient's overall health, the severity of renal impairment, and the specific coronary anatomy. Alternative strategies such as HCR or medical therapy may be considered, and strict protocols to minimize contrast use and manage hydration are critical to reduce the risk of further renal damage (PUBMED:31376902).
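Methodological note: the hybrid revascularization review cited above (PUBMED:31500492) describes an adjusted comparison of HCR versus conventional surgery using inverse-probability weighting based on a propensity score. The sketch below is a minimal illustration of how such weights are typically built and applied; it is not the analysis code from that study, the data are synthetic, and all variable names (age, diabetes, treated, event) are invented for the example.

```python
# Minimal propensity-score / inverse-probability-weighting (IPW) sketch.
# Synthetic data only; illustrates the general technique, not any cited study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical baseline covariates.
age = rng.normal(65, 10, n)
diabetes = rng.binomial(1, 0.3, n)
X = np.column_stack([age, diabetes])

# Treatment assignment depends on the covariates (confounding by indication).
p_treat = 1 / (1 + np.exp(-(-4 + 0.05 * age + 0.5 * diabetes)))
treated = rng.binomial(1, p_treat)

# Binary outcome (e.g., an adverse event at follow-up).
p_event = 1 / (1 + np.exp(-(-6 + 0.06 * age + 0.8 * diabetes - 0.3 * treated)))
event = rng.binomial(1, p_event)

# 1) Estimate the propensity score P(treatment | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Inverse-probability weights: 1/ps for treated, 1/(1 - ps) for controls.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# 3) Weighted event rates approximate outcomes in a covariate-balanced pseudo-population.
rate_treated = np.average(event[treated == 1], weights=w[treated == 1])
rate_control = np.average(event[treated == 0], weights=w[treated == 0])
print(f"IPW-adjusted event rate, treated: {rate_treated:.3f}")
print(f"IPW-adjusted event rate, control: {rate_control:.3f}")
```

In a real analysis, this step would be followed by covariate-balance diagnostics and, often, weight truncation before outcomes are compared.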
Instruction: Does urinary incontinence affect middle-aged and older women's time use and activity patterns? Abstracts: abstract_id: PUBMED:15572498 Does urinary incontinence affect middle-aged and older women's time use and activity patterns? Objective: To investigate the relationship between urinary incontinence and women's levels and hours of participation in 31 activities. Methods: A subset of panel members from the Health and Retirement Study completed the self-administered Consumption and Activities Mail Survey questionnaire in 2001. These data were linked with Health and Retirement Study 2000 data. Analyses were limited to 2,190 female Consumption and Activities Mail Survey self-respondents born in 1947 or earlier. Logistic regression was used to predict activity participation. Linear regression was used to predict the number of hours of participation. Results: The hypothesis that urinary incontinence affects women's time use and activity patterns was supported. Compared with the continent women, the incontinent women were less likely to have house cleaned, shopped, physically shown affection, or attended religious services in the recent past; and were more likely to have watched television or made music by singing or playing an instrument. Compared with continent activity participants, incontinent participants reported significantly fewer hours spent walking, communicating with friends and family by telephone or e-mail, working for pay, using a computer, and engaging in personal grooming and hygiene. Conclusion: These findings substantiate prior work on the relationship between urinary incontinence and quality of life, and suggest a useful route for educating patients about the impact of urinary incontinence. Clinicians must be alert to opportunities for encouraging incontinent women to be active. It is also important to consider the implications for time use and activity patterns when advising patients about treatment and management options. Level Of Evidence: II-2. abstract_id: PUBMED:37301078 Diuretic use and risk of falls in older women with urinary incontinence. Objective: To determine the association between diuretic use and falls in community-dwelling older women with urinary incontinence (UI). Methods: We conducted an analytic cross-sectional study using patients' electronic medical records. Patients were women with UI, 65 years or older, seen at a urogynecology clinic between January 1, 2018 and September 30, 2019. We used logistic regression analysis to explore the associations between falls and diuretic use. Results: The study included 108 women with a mean age of 75.2 ± 7.5 years. Twenty-two (20%) reported one or more falls within the past year and 32 (30%) were diuretic users. Fall prevalence in diuretic users and non-users was 25% (8/32) and 18.4% (14/76), respectively. Diuretic use was not associated with falls (OR = 0.74, 95% CI = 0.22-2.52). Post-hoc analysis revealed an inadequate sample size. Conclusions: Diuretic use may not be a risk factor for falls in ambulatory older women with UI. A larger sample will be needed to confirm. abstract_id: PUBMED:38247352 Urinary incontinence in middle-aged and older women with hip and knee osteoarthritis: An outpatient study of frequency and risk factors. Aims: To investigate the frequency and the factors associated with urinary incontinence (UI) in a sample of middle-aged and older women with lower limb osteoarthritis (OA).
Methods: Women aged 50 years or older with clinical hip/knee OA diagnoses were recruited for this cross-sectional study. Self-reported UI and type, sociodemographic characteristics, medical conditions, physical activity level, anthropometric and body composition measurements, muscle strength, and physical function were assessed. Uni and multivariable logistic regression were used to investigate the factors associated with UI. Results: Among 100 middle-aged and older women (mean 67.27 ± 8.77 SD years), 67% reported UI. In the UI group, 33% reported stress UI, 36% reported urgency UI, and 31% reported mixed UI. In the univariate analysis, age, level of physical activity, pulmonary disease, number of medications, body mass index (BMI), number of deliveries, and activity limitation were significantly associated with UI. In the multivariable analysis, older age (60-69 years OR: 4.91, 95% CI: 1.25-19.36; ≥70 years OR: 8.06, 95% CI: 1.96-33.22), compared to 50-59 years, morbid obesity (OR: 14.10, 95% CI: 1.36-146.48), compared to BMI < 30 kg/m2 , and activity limitation (OR: 5.31, 95% CI: 1.61-17.54), assessed as short physical performance battery ≤8, remained significantly associated with UI. Conclusions: UI was highly frequent among middle-aged and older women with hip/knee OA. Older age, activity limitation, and morbid obesity were independently associated with UI. Interventions targeting physical function and weight management must be considered to prevent and treat UI in this population. abstract_id: PUBMED:35118038 Experiences of Urinary Incontinence Management in Older Women: A Qualitative Study. Introduction: Older women have various experiences regarding the management of urinary incontinence depending on the societies they live in and their cultural backgrounds. The present study aimed to determine older women's experiences in urinary incontinence management. Methods: The present qualitative study employed a conventional content analysis approach and was conducted in Iran from 2019 to 2020. In this research, the data were collected through face-to-face unstructured in-depth interviews with 22 older women suffering from urinary incontinence selected via purposeful sampling. The interviews were continued until reaching the data saturation point. Data analysis was performed simultaneously with data collection. The interviews were recorded, transcribed, and analyzed through Graneheim and Lundman style content analysis, and data management was done using the MAXQDA software. In order to achieve the accuracy and validity of the study, the Four-Dimensions Criteria (FDC) by Lincoln and Guba, namely credibility, dependability, conformability, and transformability, were considered and used. Results: This study was conducted on 22 older women suffering from urinary incontinence with the mean age of 66.54 ± 5.76 years. The acquired data were put in four main categories of "resilience" with three subcategories, "change in lifestyle" with six subcategories, "attempt for treatment of the condition" with three subcategories, and "receiving support" with two subcategories. Conclusion: The study results indicated that the older women suffering from urinary incontinence were resilient against the condition, had changed their lifestyles to manage the condition, and sought treatment. In addition, receiving support from the family and the society played a significant role in the follow-up and management of the condition. 
The present study findings can help healthcare team members focus on urinary incontinence, design care programs for older women with this condition, and improve their quality of life. Furthermore, focusing on young and middle-aged women's health, providing them with the necessary training for taking care of the genitourinary system, and raising their awareness for preventing urinary incontinence during old ages can be helpful. Moreover, increasing the healthcare team's sensitivity and following the patients up can help diagnose, manage, and treat the condition before exerting adverse impacts on their quality of life. abstract_id: PUBMED:25146458 The impact of multimorbidity on sexual function in middle-aged and older women: beyond the single disease perspective. Introduction: Little is known about sexual activity and function in women with multiple chronic health conditions. Aim: To examine the impact of multimorbidity on sexual activity and function in middle-aged and older women. Methods: Multiethnic cross-sectional cohort of 1,997 community-dwelling women (mean age of 60.2 [±9.5] years) in California. Structured questionnaires assessed prior diagnoses of common cardiometabolic, colorectal, neuropsychiatric, respiratory, musculoskeletal, and genitourinary conditions. Main Outcome Measures: Sexual desire, frequency of sexual activity, overall sexual satisfaction, and specific sexual problems (i.e., difficulty with arousal, lubrication, orgasm, and pain) were assessed by structured questionnaires. Results: Seventy-one percent of women had two or more diagnosed chronic conditions. Fifty-nine percent reported low sexual desire, 53% reported less than monthly sexual activity, and 47% reported low overall sexual satisfaction. Multimorbidity was associated with increased odds of reporting low sexual desire (OR = 1.11, 95% CI = 1.06-1.17, per each additional chronic condition), less than monthly sexual activity (OR = 1.11, 95% CI = 1.05-1.17 per each additional condition), and low sexual satisfaction (OR = 1.10, 95% CI = 1.04-1.16 per each additional condition), adjusting for age, race/ethnicity, and partner status. Depression and urinary incontinence were each independently associated with low desire (OR = 1.53, 95% CI = 1.19-1.97, and OR = 1.23, 95% CI = 1.00-1.52, respectively), less than monthly sexual activity (OR = 1.39, 95% CI = 1.06-1.83, and OR = 1.29, 95% CI = 1.02-1.62, respectively), and low sexual satisfaction (OR = 1.49, 95% CI = 1.14-1.93, and OR = 1.38, 95% CI = 1.11-1.73, respectively), adjusting for other types of conditions. After adjustment for total number of chronic conditions, age remained a significant predictor of low desire and less than monthly sexual activity, but not sexual satisfaction. Conclusions: Women with multiple chronic health conditions are at increased risk for decreased sexual function. Depression and incontinence may have particularly strong effects on sexual desire, frequency of activity, and satisfaction in women, independent of other comorbid conditions. Women's overall sexual satisfaction may be more strongly influenced by multimorbidity than age. abstract_id: PUBMED:25171279 Overcoming the challenges of characterizing normal urodynamic parameters in middle-aged and older women. Introduction: To describe urodynamic study (UDS) findings in middle-aged and older women with various lower urinary tract symptoms (LUTS) who were found to have a normal study interpretation. 
Materials And Methods: Following institutional review board approval, UDS tracings of non-neurogenic women who were tested for various LUTS and whose study was interpreted as normal were reviewed. Demographic data, indications for UDS, and UDS parameter findings were extracted. UDS was conducted according to an established protocol using a 6F dual-lumen catheter (ICS guidelines) with a Laborie system and interpreted with a pre-existing template to standardize each reading. The fill-void study was frequently repeated during the same UDS session to confirm normal findings. Study interpretation was done by a neutral reviewer with UDS expertise. Results: From 2000-2012, 42 middle-aged women, who had been coded as having a normal study, were retrospectively reviewed from a database of over 2200 studies. The majority were Caucasian, with mean age 63 (range 42-85), mean body mass index 24.5 (20-37), mean parity 2 (0-4), and 67% were post-menopausal. Of the 42 patients, 28 underwent a second fill-void study. UDS findings were reported based on clinical indication for UDS: 1) incontinence, 2) pelvic organ prolapse, or 3) other LUTS symptoms. UDS findings were consistent between first and second studies. Conclusions: UDS parameters from a cohort of middle-aged and older women with normal findings could serve as reference values when interpreting urodynamic studies or for designing an age-comparable nomogram. abstract_id: PUBMED:21404319 Green tea drinking is inversely associated with urinary incontinence in middle-aged and older women. Aims: To investigate the association between green tea consumption and urinary incontinence among middle-aged and older women. Methods: Three hundred community-dwelling women aged 40-75 years were recruited in middle and southern Japan. A validated food frequency questionnaire was administered face-to-face to obtain information on dietary intake and habitual tea and coffee consumption. Urinary incontinence status was ascertained using the International Consultation on Incontinence Questionnaire-Short Form. Results: The prevalence of urinary incontinence was 27.5% among 298 eligible participants. The 82 women who leaked urine tended to drink less green tea (daily mean intake 622, SD 554 ml) than others without the condition (daily mean intake 757, SD 571 ml), P = 0.04. Relative to non-drinkers, the adjusted odds ratios of urinary incontinence were 0.37 (95% CI 0.15-0.91) for drinking more than 700 ml and 0.34 (95% CI 0.13-0.88) for drinking 4 or more cups daily. The corresponding dose-response relationships for these quantity (P = 0.04) and frequency (P = 0.01) of intake were significant. However, association with black tea, oolong tea and coffee consumption was not evident. Conclusions: The findings suggested an inverse association between urinary incontinence and habitual green tea consumption in middle-aged and older women. abstract_id: PUBMED:31296215 Symptoms of urinary incontinence and pelvic organ prolapse and physical performance in middle-aged women from Northeast Brazil: a cross-sectional study. Background: Reproductive history and urogynecological disorders have been associated with limitations in physical function. However, little is known about the relationship between symptoms of urinary incontinence and pelvic organ prolapse, and physical performance. 
Therefore, the purpose of this study was to examine whether symptoms of urinary incontinence and pelvic organ prolapse are independently associated with indicators of lower physical performance in middle-aged women from Northeast Brazil. Methods: This is a cross-sectional study of 381 women between 40 and 65 years old living in Parnamirim, Northeast Brazil. Physical performance was assessed by gait speed, chair stand and standing balance tests. Urinary incontinence and pelvic organ prolapse were self-reported. Multiple linear regression analyses were performed to model the effect of self-reported urinary incontinence and pelvic organ prolapse on each physical performance measure, adjusted for covariates (age, family income, education, body mass index, parity). Results: In the analysis adjusted for confounders, women reporting urinary incontinence spent, on average, half a second longer to perform the chair stand test (β = 0.505, 95% CI: 0.034-0.976). Those reporting pelvic organ prolapse shortened the balance time with eyes open by 2.5 s on average (β = -2.556, 95% CI: -4.769 to -0.343). Conclusions: Symptoms of pelvic organ prolapse and urinary incontinence are associated with worse physical performance in middle-aged women. These seemingly small changes in physical performance levels are of clinical importance, since these conditions may influence women's physical ability, with implications for other tasks important to daily functioning, and should be addressed by health policies targeting women's health and functionality. abstract_id: PUBMED:18206951 Physical activity and incident urinary incontinence in middle-aged women. Purpose: While strenuous, high-impact activity may provoke urinary incontinence, little is known about relations with moderate physical activity. We examined recreational activity and incident urinary incontinence in middle-aged women. Materials And Methods: This is a prospective study of women 37 to 54 years old in the Nurses' Health Study II. Repeated physical activity reports from 1989 to 2001 were averaged to estimate long-term activity levels. From 2001 to 2003 we identified 4,081 incident cases with at least monthly urinary incontinence. Incontinence type was further determined among cases with at least weekly urinary incontinence. Multivariable logistic regression models were used to estimate adjusted relative risks of urinary incontinence across quantiles of physical activity. To determine whether relations were mediated by body mass index, separate models were constructed that excluded and included body mass index as a covariate. Results: The risk of at least monthly urinary incontinence decreased with increasing quintiles of moderate physical activity (RR 0.80, 95% CI 0.72-0.89 comparing extreme quintiles). For stress and urge urinary incontinence, women with the most physical activity had lower rates of incontinence than those with less activity. RRs were 0.75 (95% CI 0.59-0.96 for top vs bottom quartile) for stress urinary incontinence and 0.53 (95% CI 0.31-0.90) for urge urinary incontinence. After adjustment for body mass index, the overall association with at least monthly incontinence attenuated, but remained significant (RR 0.89, 95% CI 0.80-0.99 comparing extreme quintiles). Conclusions: Long-term, moderate physical activity was inversely associated with urinary incontinence. The role of exercise in weight maintenance may partly explain this association.
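Several of the abstracts above (for example, PUBMED:37301078, PUBMED:38247352 and PUBMED:18206951) report odds ratios or adjusted relative risks with 95% confidence intervals derived from logistic regression. As a rough illustration of how such estimates are produced, the sketch below fits a logistic model on synthetic data and exponentiates the coefficients and their confidence bounds; the variable names (age, bmi, activity_limitation, ui) and all numbers are hypothetical and are not taken from any cited study.

```python
# Illustrative logistic regression producing odds ratios with 95% CIs.
# Synthetic data; not a reproduction of any study's analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.normal(67, 9, n),
    "bmi": rng.normal(29, 5, n),
    "activity_limitation": rng.binomial(1, 0.35, n),
})
# Hypothetical binary outcome: urinary incontinence (1 = yes).
logit = -8 + 0.08 * df["age"] + 0.05 * df["bmi"] + 0.9 * df["activity_limitation"]
df["ui"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "bmi", "activity_limitation"]])
fit = sm.Logit(df["ui"], X).fit(disp=False)

# Exponentiating coefficients and confidence bounds yields ORs and 95% CIs.
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(or_table.drop(index="const").round(2))
```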
abstract_id: PUBMED:37886637 Association of overweight, obesity and risk of urinary incontinence in middle-aged and older women: a meta epidemiology study. Aims: The aim of this meta-analysis is to evaluate the potential association of overweight and obesity with vulnerability to urinary incontinence (UI) in middle-aged and older women. Methods: We searched PubMed, Cochrane Library, and Embase for observational studies published between the inception of the databases and April 25, 2023. A fixed-effects model was used when P > 0.1 and I2 ≤ 50%. In cases where I2 ≥ 50% (indicating significant heterogeneity), a random-effects model was applied. For the purpose of evaluating publication bias, a funnel plot and Egger's test were used. Stata 14.0 was used for all statistical analyses. Findings: This meta-analysis includes 16 observational studies, covering 29,618 individuals. The pooled analysis shows that middle-aged and elderly women who are overweight (25 kg/m2 ≤ BMI < 30 kg/m2) are more likely to develop UI (OR = 1.27; 95% CI: 1.17-1.37; I2 = 51.8%, P = 0.013). Middle-aged and elderly women with obesity (30 kg/m2 ≤ BMI < 35 kg/m2) are significantly more likely to develop UI (OR = 1.60; 95% CI: 1.42-1.81; I2 = 71.8%, P = 0.000). In addition, the results indicated a higher probability of UI in middle-aged and older women with obesity class II (BMI ≥ 35 kg/m2) (OR = 1.85; 95% CI: 1.59-2.16; I2 = 48.1%, P = 0.103). In subgroup analysis, there was no direct relationship between obesity in middle-aged and elderly women and an increased risk of stress urinary incontinence (SUI) (OR = 1.31; 95% CI: 0.99-1.74; I2 = 63.7%, P = 0.011). Middle-aged and elderly women with obesity are more likely to develop urgency urinary incontinence (UUI) (OR = 2.11; 95% CI: 1.54-2.89; I2 = 80.2%, P = 0.000). Conclusion: In this meta-analysis, overweight and obesity are associated with an increased risk of UI in middle-aged and elderly women. Obesity and overweight are independent risk factors for UI, as demonstrated by this study. Systematic Review Registration: https://www.crd.york.ac.uk/PROSPERO/, identifier CRD42023421986. Answer: Yes, urinary incontinence does affect middle-aged and older women's time use and activity patterns. A study found that compared with continent women, those with urinary incontinence were less likely to have house cleaned, shopped, physically shown affection, or attended religious services in the recent past. They were also more likely to have watched television or made music by singing or playing an instrument. Additionally, incontinent participants reported significantly fewer hours spent walking, communicating with friends and family by telephone or email, working for pay, using a computer, and engaging in personal grooming and hygiene (PUBMED:15572498). These findings suggest that urinary incontinence can have a substantial impact on the daily activities and social engagement of middle-aged and older women, potentially affecting their quality of life.
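The meta-analysis summarized above (PUBMED:37886637) chooses between pooling models using heterogeneity statistics: a fixed-effects model when P > 0.1 and I2 ≤ 50%, and a random-effects model when I2 ≥ 50%. The sketch below shows, on three invented studies, how that rule and the corresponding inverse-variance and DerSimonian-Laird pooling can be computed; it illustrates the general method only and does not reproduce the Stata analysis used in the study.

```python
# Fixed- vs random-effects pooling of odds ratios, chosen by the I^2 statistic.
# The three studies are invented for illustration.
import numpy as np

# Hypothetical per-study results: (OR, 95% CI lower, 95% CI upper).
studies = [(1.25, 1.05, 1.49), (1.62, 1.31, 2.00), (1.18, 0.92, 1.51)]

y = np.log([s[0] for s in studies])                      # log odds ratios
se = (np.log([s[2] for s in studies]) -
      np.log([s[1] for s in studies])) / (2 * 1.96)      # SE recovered from CI width
w = 1 / se**2                                            # inverse-variance (fixed) weights

pooled_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - pooled_fixed) ** 2)                  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0      # I^2 in percent

if i2 <= 50:
    w_used, label = w, "fixed-effects"
else:
    # DerSimonian-Laird estimate of the between-study variance tau^2.
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_used, label = 1 / (se**2 + tau2), "random-effects"

pooled = np.sum(w_used * y) / np.sum(w_used)
half_width = 1.96 / np.sqrt(np.sum(w_used))
print(f"{label}: pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - half_width):.2f}-{np.exp(pooled + half_width):.2f}), "
      f"I^2 = {i2:.0f}%")
```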
Instruction: Is "Internet Addiction" a disorder of its own? Abstracts: abstract_id: PUBMED:36200222 Problematic Internet use and Internet gaming disorder: Overlap and relationship with health-related quality of life in adolescents. General Problematic Internet Use (GPIU) is a dysfunctional, generalized pattern of Internet use and management. In contrast, Internet Gaming Disorder (IGD) is a specific behaviour linked to online videogames. Both problems are becoming common in adolescents, but they have hardly been studied simultaneously, and the joint relationship of the two constructs with Health-Related Quality of Life (HRQoL) is unknown. The general objective of this study is to analyse the relationship between GPIU and IGD and their association with HRQoL. The study is analytical and cross-sectional, with 2,024 participants (46.4% boys, n = 939) from 16 schools in 7 Spanish regions. The mean age and standard deviation were 14.20 ± 1.42 years, with a range of 11-18 years. The Spanish versions of the KIDSCREEN-10, the Revised Generalized and Problematic Internet Use Scale 2 and the Internet Gaming Disorder Scale were used. Of the total sample evaluated, 15.5% of the participants had high levels of GPIU, and 3.3% of video game players presented IGD. The dimensions of GPIU are strongly associated with those of IGD. HRQoL correlated significantly and negatively with all the dimensions of GPIU and IGD (p < .001). Participants who reported problems with GPIU or IGD, individually or conjointly, had significantly lower scores in HRQoL than those with no problems. abstract_id: PUBMED:34181742 Development of Internet Addiction Scale Based on DSM-5 Diagnostic Criteria: An Evaluation in Terms of Internet Gaming Disorder. Objective: Internet addiction (IA) has turned out to be a serious health problem in our country, as in the whole world, especially for young adults. To our knowledge, there are very few studies investigating IA and associated psychological variables in a Turkish sample. The purpose of this study is to develop the Internet Addiction Scale (IAS) on the basis of the diagnostic criteria for internet gaming disorder in the third section of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) by the American Psychiatric Association (APA). Method: The study included 331 university students consisting of 189 (57.1%) females and 142 (42.9%) males with a mean age of 21.6 (SD = 2.61) years. Data were acquired with a Demographic Information Form, the Internet Addiction Scale (IAS), the UCLA Loneliness Scale (UCLA-LS), the Impulsive Behaviour Scale (IBS) based on the Minnesota Multiphasic Personality Inventory (MMPI), the Self-Compassion Scale (SCS) and the Basic Psychological Needs Scale (BPNS). Results: Exploratory factor analysis revealed that the IAS has a 7-factor structure with an internal consistency coefficient of 0.97, in accordance with the DSM-5 diagnostic criteria. Conclusion: Results of the statistical analyses showed the psychometric properties of the IAS to be valid in a Turkish student sample, and it significantly correlated with loneliness, self-compassion, impulsivity and basic psychological needs. abstract_id: PUBMED:31631668 Validity, functional impairment and complications related to Internet gaming disorder in the DSM-5 and gaming disorder in the ICD-11.
Background: Many concerns have been raised regarding the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) criteria for Internet gaming disorder and International Classification of Diseases, 11th Revision (ICD-11) criteria for gaming disorder. Aims: In this study, we demonstrated the diagnostic validity of each criterion for Internet gaming disorder in the DSM-5 in terms of their intensity and frequency thresholds and evaluated functional impairments, unhealthy behaviors and complications among adults with Internet gaming disorder and gaming disorder. Methods: We recruited 69 subjects with Internet gaming disorder, 69 regular gamers and 69 controls without regular gaming based on diagnostic interviewing conducted by a psychiatrist according to the DSM-5 Internet gaming disorder criteria. Results: Except for the 'deceiving' and 'escapism' criteria, all criteria for Internet gaming disorder had a diagnostic accuracy ranging from 84.7% to 93.5% in differentiating between adults with Internet gaming disorder and regular gamers. A total of 44 participants with Internet gaming disorder (63.8%) fulfilled the gaming disorder criteria. In addition, 89% and 100% of the Internet gaming disorder and gaming disorder groups, respectively, had academic, occupational or social functional impairment. Both the Internet gaming disorder and gaming disorder groups had higher rates of delayed sleep phase syndrome and insomnia. The gaming disorder group also had a higher obesity proportion. Conclusion: The 'deceiving' and 'escapism' criteria had relatively lower diagnostic accuracy. Both the Internet gaming disorder and gaming disorder groups demonstrated functional impairments and unhealthy behaviors. They also exhibited complications, such as obesity and sleep disorders. These results support the utility of the DSM-5 Internet gaming disorder and ICD-11 gaming disorder criteria in identifying individuals who need treatment for both gaming addiction symptoms and complications resulting from the addiction. abstract_id: PUBMED:37682846 Relationships of internet gaming engagement, history, and maladaptive cognitions and adolescent internet gaming disorder: A cross-sectional study. Objective: This study tested the mediation effect of maladaptive cognition of internet gaming and moderation effect of internet gaming history in the relationship between internet gaming engagement and internet gaming disorder in adolescents. Method: A total of 2,902 secondary school students were surveyed in Hong Kong from February 2021 to December 2021. The proposed moderated mediation model was tested by PROCESS. Results: Internet gaming engagement, internet gaming history and maladaptive cognition were positively associated with internet gaming disorder symptoms. Maladaptive cognition significantly mediated the association between internet gaming engagement and internet gaming disorder symptoms in both males and females. In addition, a significant interaction between internet gaming engagement and internet gaming history was detected among females but not for males, namely, the positive relationships of internet gaming engagement with maladaptive cognition and internet gaming disorder symptoms were weaker with the increased years of internet gaming. Conclusions: Our study provides a better understanding of the underlying mechanism and boundary condition in the association between internet gaming engagement and internet gaming disorder among adolescents. 
Preventive interventions should aim to reduce maladaptive cognition and internet gaming engagement. Interventions targeting internet gaming engagement may be more effective among female gamers who are beginners and among all male gamers. abstract_id: PUBMED:29450225 Online-specific fear of missing out and Internet-use expectancies contribute to symptoms of Internet-communication disorder. Some of the most frequently used online applications are Facebook, WhatsApp, and Twitter. These applications allow individuals to communicate with other users, to share information or pictures, and to stay in contact with friends all over the world. However, a growing number of users suffer from negative consequences due to their excessive use of these applications, which can be referred to as Internet-communication disorder. The frequent use of and easy access to these applications may also trigger the individual's fear of missing out on content when not accessing these applications. Using a sample of 270 participants, a structural equation model was analyzed to investigate the role of psychopathological symptoms and the fear of missing out on expectancies towards Internet-communication applications in the development of symptoms of an Internet-communication disorder. The results suggest that psychopathological symptoms predict higher fear of missing out on the individual's Internet-communication applications and higher expectancies to use these applications as a helpful tool to escape from negative feelings. These specific cognitions mediate the effect of psychopathological symptoms on Internet-communication disorder. Our results are in line with the theoretical model by Brand et al. (2016), as they show how Internet-related cognitive bias mediates the relationship between a person's core characteristics (e.g., psychopathological symptoms) and Internet-communication disorder. However, further studies should investigate the role of the fear of missing out as a specific predisposition, as well as specific cognition in the online context. abstract_id: PUBMED:34456816 The Relationship Between Affective and Obsessive-Compulsive Symptoms in Internet Use Disorder. We investigated the relationships and diagnostic power of symptoms associated with affective disorders, obsessive-compulsive disorder, and drug addictions on Internet use disorder. Moreover, we tested whether Internet use disorder is characterized by a specific network of symptoms. One hundred and four young adults (78 women) were assessed in the laboratory using self-report measures of Internet addiction, alcohol use disorder, cannabis abuse, depression, anxiety, and stress symptoms, impulsiveness, and obsessive-compulsive symptoms. Only hoarding, obsessing, and depression symptoms were positively linked to Internet use disorder severity, with hoarding having greater power and accuracy than other obsessive-compulsive and affective symptoms. Only individuals with mild-to-moderate Internet use disorder were characterized by a network of strong and positive associations of affective and obsessive-compulsive symptoms. These findings may encourage future longitudinal studies aimed at identifying potential clinical criteria for the diagnosis of Internet use disorder and treatment targets. abstract_id: PUBMED:27672496 Mischievous responding in Internet Gaming Disorder research.
The most recent update to the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM-5) included Internet Gaming Disorder as a new potential psychiatric condition that merited further scientific study. The present research was conducted in response to the APA Substance-Related Disorders Working Group's research call to estimate the extent to which mischievous responding (a known problematic pattern of participant self-report responding in questionnaires) is relevant to Internet Gaming Disorder research. In line with a registered sampling and analysis plan, findings from two studies (total n = 11,908) provide clear evidence that mischievous responding is positively associated with the number of Internet Gaming Disorder indicators participants report. Results are discussed in the context of ongoing problem gaming research, and recommendations are provided for improving the quality of scientific practice in this area. abstract_id: PUBMED:37559920 Case report for an internet- and mobile-based intervention for internet use disorder. Background And Aims: Internet use disorder (IUD), characterized as the inability to control one's internet use, is associated with reduced quality of life and mental health comorbidities such as depression, substance abuse, or anxiety. Evidence-based treatment options are scarce due to the novelty of the diagnosis. Internet- and mobile-based interventions (IMI) may be an effective means to deliver psychological treatment to individuals with IUD, as they address affected individuals in their online setting. This study presents a newly developed IMI for IUD, reporting treatment satisfaction and preliminary effects through an illustrative case report. Methods: The case of a female participant with IUD, characterized by an excessive use of social media, is analyzed. The case report follows the CARE guidelines and presents qualitative and quantitative outcomes regarding potential symptom reduction measured by the Internet Addiction Test (IAT) and Compulsive Internet Use Scale (CIUS), treatment satisfaction measured by the Client Satisfaction Questionnaire (CSQ), and feasibility by analyzing the participant's written feedback during treatment. Results: The case report shows that internet- and mobile-based interventions may be feasible in supporting an individual in reducing symptoms of IUD as well as depressive symptoms, anxiety and procrastination behavior. Treatment satisfaction was reported as good. Discussion And Conclusions: This case report illustrates that IMIs can have the potential to be an easily accessible and possibly effective treatment option for IUD. Case studies on IMIs may provide insights into important mechanisms for symptom change. Further studies are needed to expand our understanding of this diverse disorder to provide adequate treatment. Clinical Trial Registration: https://clinicaltrials.gov/, DRKS00015314. abstract_id: PUBMED:38299404 Efficacy of Methylphenidate for Internet Gaming Disorder and Internet Addiction in Patients with Attention-Deficit/Hyperactivity Disorder. Background: Internet Gaming Disorder (IGD) and Internet Addiction (IA) are related clinical conditions often comorbid with Attention-Deficit/Hyperactivity Disorder (ADHD). Objective: We evaluated the efficacy of methylphenidate (MPH) for IGD/IA symptoms in ADHD patients. Methods: We enrolled 38 drug-naive patients diagnosed with ADHD (Attention-Deficit/Hyperactivity Disorder) and IGD/IA.
At baseline, all patients underwent a clinical assessment for IGD/IA symptoms and then received the most appropriate therapy according to their clinical profile. Twenty-one patients received MPH (methylphenidate) treatment, and 17 patients did not. Patients were re-evaluated after three months of treatment. Results: Findings revealed significant reductions in IGD/IA symptoms over time, while no significant effect of MPH on symptom reduction was found. Clinical predictors of symptom reduction were identified, including IQ (Intelligence Quotient) and comorbid anxiety. Conclusion: This longitudinal prospective study contributes to the understanding of IGD/IA treatment in ADHD patients and highlights the importance of considering individual clinical characteristics when predicting treatment response. However, MPH may not directly impact IGD/IA symptom reduction. abstract_id: PUBMED:33069109 Behavioral modification decreases approach bias in young adults with internet gaming disorder. Background: One of the key symptoms of Internet gaming disorder is the impulsive pursuit of Internet games, which causes individuals with this disorder to have a tendency to approach gaming cues. Dual processing theory states that this approach bias is caused by the suppression of reflective processing and the hyperactivity of impulsive processing. Therefore, reverse modification training for approach bias can suppress impulsive processing, thereby reducing or treating the disorder. Method: In this study, 48 individuals with Internet gaming disorder were selected and randomly assigned to the following two groups: the experimental group and the control group. The 24 participants in the experimental group received a 4-day stimulus-response compatibility (SRC) approach bias modification treatment, whereas the 24 participants in the control group received a corresponding SRC pseudomodification training. Results: The approach response time of both groups to the gaming cues was significantly shorter than that to the neutral cues at baseline. After the modification, the response time of the experimental group to the gaming cues significantly increased, whereas the scores for Internet gaming disorder severity, anxiety level and craving significantly decreased. However, the control group subjects did not show these effects. Conclusion: The current results imply that individuals with Internet gaming disorder show approach bias toward Internet gaming cues, and SRC reverse modification training can significantly correct this bias and therefore have therapeutic effects to some extent. Answer: The concept of "Internet Addiction" is a subject of ongoing debate and research within the field of mental health. The term "Internet Addiction" is often used to describe a problematic or pathological use of the internet that leads to significant impairment or distress. However, it is not officially recognized as a distinct disorder in the main diagnostic manuals used by mental health professionals, such as the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) by the American Psychiatric Association (APA) or the International Classification of Diseases (ICD-11) by the World Health Organization (WHO). The DSM-5 includes "Internet Gaming Disorder" (IGD) as a condition for further study, suggesting that it is a potential psychiatric condition that merits more scientific investigation (PUBMED:27672496). 
The ICD-11 has recognized "Gaming Disorder" as a diagnosable condition, which is characterized by a pattern of persistent or recurrent gaming behavior that may be online or offline (PUBMED:31631668). Research has shown that problematic internet use, including IGD, can be associated with negative outcomes such as reduced health-related quality of life (HRQoL) in adolescents (PUBMED:36200222), and that individuals with IGD may experience functional impairments and complications such as obesity and sleep disorders (PUBMED:31631668). Additionally, studies have developed scales to measure internet addiction based on DSM-5 criteria for IGD, indicating that the concept is being taken seriously within the research community (PUBMED:34181742). While "Internet Addiction" as a broad concept is not officially recognized as a distinct disorder, specific aspects of problematic internet use, such as IGD, are being studied and have been included in diagnostic manuals for further consideration. The research suggests that problematic internet use can have significant negative impacts on individuals' lives, warranting attention from mental health professionals and researchers.
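A methodological footnote on the mediation findings cited in this answer's source abstracts (PUBMED:37682846, PUBMED:29450225): both report that maladaptive or Internet-related cognitions mediate the link between a predictor and problematic-use symptoms. The sketch below shows a simple, unmoderated mediation analysis with a percentile-bootstrap confidence interval for the indirect effect, which is roughly what macro tools such as PROCESS compute; the data are synthetic, the variable names are hypothetical, and the cited studies used more elaborate moderated-mediation and structural equation models.

```python
# Simple mediation (X -> M -> Y) with a bootstrap CI for the indirect effect a*b.
# Synthetic data; an illustration of the general approach only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 800
engagement = rng.normal(0, 1, n)                                  # X: gaming engagement
cognition = 0.5 * engagement + rng.normal(0, 1, n)                # M: maladaptive cognition
igd = 0.3 * engagement + 0.6 * cognition + rng.normal(0, 1, n)    # Y: IGD symptoms

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]             # a-path: X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # b-path: M -> Y | X
    return a * b

point = indirect_effect(engagement, cognition, igd)

# Percentile bootstrap of the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(engagement[idx], cognition[idx], igd[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {point:.3f} (95% bootstrap CI {lo:.3f} to {hi:.3f})")
```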