Instruction: Can the distance fallen predict serious injury after a fall from a height? Abstracts: abstract_id: PUBMED:10372624 Can the distance fallen predict serious injury after a fall from a height? Background: After a fall, the distance fallen is sometimes used to predict the injury severity. We aimed to examine how distance fallen performs as a predictor of major injury. Method: A cohort of trauma victims attending our emergency department after having fallen from a height was identified retrospectively, and data were collected regarding the fall and injuries sustained. Performance of threshold heights, ranging from 2 meters (6.6 feet) to 10 meters (32.8 feet), as a diagnostic test for major injury was assessed. Results: Height fallen performed poorly over the range of thresholds used. At low thresholds, sensitivity was inadequate to rule out major trauma, whereas the low prevalence meant that, despite impressive specificity at higher thresholds, positive predictive value was poor. At the optimal threshold of 5 meters (16.4 feet), the positive predictive value was 0.17 and sensitivity was 0.33. Conclusion: Height of fall is a poor predictor of major injury. abstract_id: PUBMED:28643032 Fall from heights: does height really matter? Introduction: Fall from heights is high energy injuries and constitutes a fraction of all fall-related trauma evaluations while bearing an increase in morbidity and mortality. We hypothesize that despite advancements in trauma care, the overall survivability has not improved in this subset of trauma patients. Methods: All adult trauma patients treated after sustaining a fall from heights during a 40-month period were retrospectively reviewed. Admission demographics, clinical data, fall height (ft), injury patterns, ISS, GCS, length of stay, and mortality were reviewed. Results: 116 patients sustained a fall from heights, 90.4% accidental. A mean age of 37± 14.7 years, 86% male, and a fall height of 19 ± 10 ft were encountered. Admission GCS was 13 ± 2 with ISS 10 ± 11. Overall LOS was 6.6 ± 14.9 days and an ICU LOS of 2.8 ± 8.9 days. Falls ≥ 25 ft.(16%) had lower GCS 10.4 ± 5.8, increased ISS 22.6 ± 13.8, a fall height 37.9 ± 13.1 ft and associated increased mortality (p < 0.001). Mortality was 5.2%, a mean distance fallen of 39 ± 22 ft. and an ISS of 31.5 ±16.5. Brain injury was the leading cause of death, 50% with open skull fractures. Conclusion: Level of height fallen is a good predictor of overall outcome and survival. Despite advances in trauma care, death rates remain unchanged. Safety awareness and injury prevention programs are needed to reduce the risk of high-level falls. abstract_id: PUBMED:28144931 Analysis of the nature of injuries in victims of fall from height Aim Of Study: To assess the types and extent of injuries sustained by victims of fall from height depending on the height of fall. Material And Methods: The study included 338 bodies of victims of fatal falls from different heights (from the 1st to 10th floors) who were subjected to medico-legal autopsy at the Department of Forensic Medicine, Jagiellonian University Medical College, between 1995 and 2014. For each individual, selected data were collected including gender, age, body height, injury types and presence of alcohol or other intoxicants in blood. The analysis comprised injuries to the brain, thoracic and abdominal organs, fractures of the skull, extremities, ribs and spine, and fractures of the scapula, clavicle and sternum (considered together). 
The study focused on determining the frequency of occurrence of different injuries in relation to one another and depending on the height of fall. Results: The number and extent of injuries was found to increase along with the height of fall. Three injury types, including injuries to the mesentery and both kidneys and fractures of upper extremity small bones, were shown to occur from the threshold heights of the 3rd, 4th and 6th floors. Eleven injuries demonstrated a statistically significant correlation with the height of fall. The study also revealed a number of correlations between the frequencies of occurrence of different injuries. Conclusions: Injuries found from the threshold value may suggest the minimal height of fall. The presence of injuries which correlate with increasing height, and the overall number of injuries observed in victims of fall from height, may be useful for inferring the height of the fall. abstract_id: PUBMED:35733504 The Relationship Between Falling Distance and Trauma Severity Among Fall Injury Survivors Who Were Transported to a Trauma Center. Introduction Falls from >6 meters have been shown to cause multiple traumas and serious injuries. However, especially among fall survivors who were transported to trauma centers, the relationship between falling distance and trauma severity remains unclear. This study aimed to investigate the relationship between falling distance and trauma severity among fall injury survivors who were transported to a trauma center and clarify the characteristics of trauma among survivors of falls from high places from an orthopedic surgeon's perspective. Methods We retrospectively reviewed the medical records of 65 fall injury survivors who were transported to a trauma center for falling distance; whether the fall was a suicide attempt; abdominal, chest, and head trauma; the numbers of upper-limb, lower-limb, and spinal vertebral fractures; McCormack load-sharing classification score; unstable pelvic fracture; Frankel classification; injury severity score (ISS); and duration of intensive care unit (ICU) and hospital stay. We evaluated the correlations between falling distance and the other factors and compared all factors between those falling <6 meters and those falling >6 meters. Results Falling distance was weakly positively correlated with durations of ICU and hospital stay. The percentage of cases that were suicide attempts, the number of lower-limb fractures, the McCormack load-sharing classification score, and the durations of ICU and hospital stay were significantly higher among those falling from >6 meters than among those falling from <6 meters. Conversely, there were no significant differences in abdominal trauma, chest trauma, head trauma, number of upper-limb fractures, number of vertebral fractures, unstable pelvic fracture, or Frankel classification between the two groups. Conclusion The findings indicate that falling from a higher distance may increase lower-limb and vertebral fracture severity and may lead to longer ICU and hospital stays among fall injury survivors who are transported to trauma centers. abstract_id: PUBMED:33024375 An Elucidation of Pattern of Injuries in Patients with Fall from Height. Background: Fall from height (FFH) is the second most common cause of trauma presenting to the emergency department (ED). They account for majority of the polytrauma cases. This study was done to determine the pattern of injuries sustained due to FFH and outcome. 
Materials And Methods: This was a retrospective observational study of all patients with history of FFH presenting to the ED of a large tertiary care hospital in South India. Details of the incident, fall height, injuries, and outcome were noted and analyzed. Results: This study cohort included 861 patients with a mean age of 36.2 (SD 20.8) years. A male predominance (74%) was noted. The majority of the patients, i.e., 62%, were triaged as priority 2, depending on the hemodynamic stability. Approximately a quarter (26%) sustained injury to the lower limbs with 18% sustaining spinal cord injury (SCI). The patients suffering SCI (35%) were further categorized using the American Spinal cord Injury Association (ASIA) classification. New Injury Severity Score (NISS) was more than 8 in 47% of the total study population. The majority of the patients, i.e., 62%, were discharged stable from ED after primary care with a plan of follow-up in the outpatient department. One-third (30%) of the total patients required hospital admission and among them 20% of the patients had to undergo major surgical intervention. The rest were either discharged stable or left against medical advice (LAMA) after primary care. The in-hospital mortality rate was 1.04%. Conclusion: This study has described the pattern of injuries in patients with FFH. An alarmingly high number of young adults with significant lower limb and spinal injuries were noted. We observed that with increase in fall height there was a proportional increase in SCI and decrease in lower limb injuries. How To Cite This Article: Lohanathan A, Hazra D, Jyothirmayi CA, Kundavaram AP. An Elucidation of Pattern of Injuries in Patients with Fall from Height. Indian J Crit Care Med 2020;24(8):683-687. abstract_id: PUBMED:34070640 Falls from Great Heights: Risk to Sustain Severe Thoracic and Pelvic Injuries Increases with Height of the Fall. Falls from a height are a common cause of polytrauma care in Level I Trauma Centers worldwide. The expected injury consequences depend on the height of the fall and the associated acceleration, as well as the condition of the ground. In addition, we further hypothesize a correlation between the cause of the fall, the age of the patient, and the patient's outcome. A total of 178 trauma patients without age restriction who were treated in our hospital after a fall >3 m within a 5-year period were retrospectively analyzed. The primary objective was a clinically and radiologically quantifiable increase in the severity of injuries after falls from different relevant heights (>3 m, >6 m, and >9 m). The cause of the fall, either accidental or suicidal; age and duration of intensive care unit stay, including duration of ventilation; and total hospital stay were analyzed. Additionally, the frequency of urgent operations, such as external fixation of fractures or hemi-craniectomies; laboratory parameters; and clinical outcomes were also among the secondary objectives. The risk of sustaining thoracic trauma or pelvic fractures increases significantly with height, and vital parameters are significantly compromised. We also found significant differences in urgent pre- and in-hospital emergency interventions, as well as organ complications and outcome parameters depending on the fall's height.
abstract_id: PUBMED:31825330 Forensic medical assessment of fractures of long tubular bones caused by fall from height. Interpreting the signs of multiple injuries caused by a fall from height at postmortem examination, with the aim of establishing how the trauma was caused, is onerous, especially when diagnosing the mechanisms by which the various injuries were formed. The objective of the study was to develop expert diagnostic criteria for determining the mechanisms of fractures of long tubular bones caused by a fall from height, on the basis of fracture morphology analysis and information technologies. Prognostic models were developed that make it possible to predict fracture mechanisms based on the elaborated expert diagnostic criteria for fractures of long tubular bones in catatrauma. Together with linear regression analysis, the morphologic fracture criteria make it possible to reliably distinguish different variants of body landing and ranges of fall height. abstract_id: PUBMED:28453166 An analysis of the relationship between bodily injury severity and fall height in victims of fatal falls from height. Aim of the study: One of the basic issues discussed in forensic literature regarding falls from a height is determination of fall heights and differentiation between suicidal and accidental falls. The aim of the study was to verify the usefulness of the available methods for the purposes of forensic expert assessments. Material and methods: The study encompassed fatalities of falls from a height whose autopsies were performed in the Department of Forensic Medicine in Lublin. Results: As in other studies, the severity of injuries was assessed using the Abbreviated Injury Scale (AIS) and injury severity score (ISS). The study findings demonstrated a statistically significant correlation between the fall height and the severity of injuries according to ISS and a statistically significant difference in fall heights between the groups of accidents and suicides. abstract_id: PUBMED:25967553 Hollow-organ perforation following thoracolumbar spinal injuries of fall from height. Introduction: Spinal trauma is a cause of high mortality and morbidity, and a fall from height is a mechanism that can cause a wide variety of lesions, associated both with the direct impact on the ground and with deceleration. In such falls, greater heights are associated with higher mortality. Presentation Of Case: We report the successful management of life-threatening hollow-organ perforation following thoracolumbar spinal injury. Discussion: Perforation of a hollow organ in the setting of thoracolumbar trauma may be diagnosed late and can have devastating consequences. Conclusions: This case supports the recommendation that neurosurgeons managing thoracolumbar injury remain alert to hollow-organ perforation, which can have devastating consequences. It is vital to achieve an early diagnosis to improve the survival rate. abstract_id: PUBMED:35554593 Insomnia Diagnosis, Prescribed Hypnotic Medication Use, and Risk for Serious Fall Injuries in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) Study. Study Objectives: Insomnia is common among older adults and associated with an increased risk for falls. Determining if falls are more strongly associated with insomnia or prescribed hypnotic medications could be used to guide interventions to reduce falls risk.
Methods: We examined the prospective association of a diagnosis of insomnia and/or prescribed hypnotic medication use with the risk for serious fall injuries among 9087 Reasons for Geographic and Racial Differences in Stroke (REGARDS) study participants aged 65 years or older with Medicare fee-for-service health insurance at baseline (2003-2007). A diagnosis of insomnia was based on ICD-9 codes in Medicare claims and prescribed hypnotic medication use was determined through a pill bottle review. Serious fall injuries were identified by Medicare claims between baseline and December 31, 2018. Results: Over a median of 6.8 years, 1660 (18.3%) participants had a serious fall injury. The incidence rates for a serious fall injury per 1000 person-years were 24.8 (95%CI: 23.5, 26.1), 28.8 (95%CI: 18.6, 38.9), 32.6 (95%CI: 28.2, 37.0), and 46.6 (95%CI: 26.7, 66.5) for participants without insomnia or taking prescribed hypnotic medication (-insomnia/-hypnotics), with insomnia only (+insomnia/-hypnotics), taking prescribed hypnotic medication only (-insomnia/+hypnotics), and with insomnia and taking prescribed hypnotic medication (+insomnia/+hypnotics), respectively. Compared with the -insomnia/-hypnotic group, the multivariable-adjusted hazard ratios for a serious fall injury were 1.13 (95%CI: 0.79, 1.61), 1.29 (95%CI: 1.11, 1.50), and 1.60 (95%CI: 1.01, 2.56) for +insomnia/-hypnotics, -insomnia/+hypnotics, and +insomnia/+hypnotics, respectively. Conclusions: The risk for serious fall injuries was higher for those taking prescribed hypnotic medications but not with an insomnia diagnosis. Answer: The relationship between the distance fallen and the prediction of serious injury after a fall from a height is complex and varies across studies. Some research suggests that the height of the fall is not a reliable predictor of major injury. For instance, one study found that using height fallen as a diagnostic test for major injury performed poorly, with low sensitivity at lower thresholds and poor positive predictive value at higher thresholds, even at an optimal threshold of 5 meters (16.4 feet) (PUBMED:10372624). In contrast, other studies indicate that the level of height fallen can be a good predictor of overall outcome and survival, with falls from greater heights associated with lower Glasgow Coma Scale (GCS) scores, increased Injury Severity Score (ISS), and increased mortality (PUBMED:28643032). Additionally, the number and extent of injuries were found to increase with the height of the fall, with certain injury types occurring from threshold heights of the 3rd, 4th, and 6th floors (PUBMED:28144931). Another study found that falling from a higher distance may increase lower-limb and vertebral fracture severity and may lead to longer intensive care unit (ICU) and hospital stays among fall injury survivors who are transported to trauma centers (PUBMED:35733504). Similarly, an increase in fall height was associated with a proportional increase in spinal cord injuries and a decrease in lower limb injuries (PUBMED:33024375). Moreover, the risk of sustaining severe thoracic and pelvic injuries was found to increase with the height of the fall (PUBMED:34070640). However, it is important to note that the predictive value of fall height can be influenced by various factors, such as the nature of the surface impacted, the body part that first contacts the ground, and individual patient characteristics. 
Therefore, while there is evidence to suggest that fall height can be associated with injury severity, it should not be the sole factor used to predict serious injury after a fall from a height.
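To illustrate the prevalence argument made in PUBMED:10372624 (a height threshold with impressive specificity can still have a poor positive predictive value when major injury is uncommon in the cohort), the following minimal Python sketch works through the standard calculation. Only the reported sensitivity of 0.33 is taken from the abstract; the specificity and prevalence values are hypothetical and chosen purely for illustration.

    # Sketch: positive predictive value of a height threshold used as a
    # diagnostic test, given its sensitivity, its specificity and the
    # prevalence of major injury among patients who fell.
    def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Hypothetical specificity of 0.90; only the sensitivity of 0.33 comes from
    # the abstract. Even a fairly specific threshold yields a low PPV when the
    # prevalence of major injury is low.
    for prevalence in (0.05, 0.10, 0.20):
        print(f"prevalence={prevalence:.2f}  PPV={ppv(0.33, 0.90, prevalence):.2f}")

With a prevalence of 5%, for example, the sketch gives a PPV of roughly 0.15, the same order of magnitude as the value reported in the abstract.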
Instruction: Burning mouth syndrome and oral health-related quality of life: is there a change over time? Abstracts: abstract_id: PUBMED:35000220 Oral health-related quality of life in patients with Parkinson's disease. Background: Parkinson's disease (PD) is a neurodegenerative condition affecting the quality of life. Due to a worsening of oral health in PD patients with the progression of the disease, oral health-related quality of life (OHRQoL) could be impaired as well. Objectives: To assess whether PD patients in The Netherlands experience worse OHRQoL than historical controls, and to investigate which factors are associated with OHRQoL in PD patients. Materials & Methods: In total, 341 PD patients (65.5 ± 8.4 years) and 411 historical controls (62.6 ± 5.3 years) participated. Both groups completed a questionnaire. The PD patients were asked questions regarding demographics, PD, oral health, and OHRQoL. The historical controls filled in demographic information and questions regarding OHRQoL. The latter construct was assessed using the Dutch 14-item version of the Oral Health Impact Profile (OHIP-14). Data were analysed using independent samples t-tests and univariate and multivariate linear regression analysis. Results: The mean OHIP-14 score was higher in PD patients (19.1 ± 6.7) than in historical controls (16.5 ± 4.4) (t(239) = 6.5; p < .001). OHRQoL in PD patients was statistically significant associated with motor aspects of experiences of daily living (B = 0.31; t(315) = 7.03; p < .001), worsening of the oral environment during disease course (B = 3.39; t(315) = 4.21; p < .001), being dentate (B = -5.60; t(315) = -4.5; p < .001), tooth wear (B = 2.25; t(315) = 3.29; p = .001), and possible burning mouth syndrome (B = 5.87; t(315) = 2.87; p = .004). Conclusion: PD patients had a lower OHRQoL than historical controls. Besides, PD-related variables and oral health-related variables were associated with OHRQoL. abstract_id: PUBMED:21491763 Oral health impact profile. an instrument for measuring the impact of oral health on the quality of life A Dutch-language version of the Oral Health Impact Profile, a questionnaire by means of which the impact of oral health on the quality of life of patients can be determined, was developed and subsequently psychometrically tested among a group of patients with complaints concerning missing dentition or their dentures. In addition, a shortened version of this so-called OHIP-NL49, the OHIP-NL14, was psychometrically tested among a group of patients with temporomandibular disorders. The psychometrical characteristics of both the OHIP-NL49 and the OHIP-NL14 were very good: both the reliability and the validity were high. The conclusion was, that the OHIP-NL49 and the OHIP-NL14 are well suited for determining the impact of oral health on the quality of life. abstract_id: PUBMED:31332814 Assessment of oral health-related quality of life, measured by OHIP-14 and GOHAI, and psychological profiling in burning mouth syndrome: A case-control clinical study. Objectives: To evaluate the oral health-related quality of life (OHRQoL) of patients with burning mouth syndrome (BMS) by comparing the Oral Health Impact Profile-14 (OHIP-14) and Geriatric Oral Health Assessment Index (GOHAI) tests, assessing their dependence with pain, anxiety and depression and, secondly, to analyse the changes in time after treatment with psychotropic drugs. Methods: Twenty-six patients and 26 controls were included. 
The GOHAI, OHIP-14, visual analogue scale (VAS) and the Hamilton Rating Scales for Depression and Anxiety (HAM-D and HAM-A) were performed at baseline (time 0) and after 6 months of treatment (time 1). Descriptive statistics, the Mann-Whitney non-parametric test for two independent samples and the Wilcoxon non-parametric test for two paired samples were used. Results: The scores from all outcome measurements were statistically significantly different between the cases and controls (P < .001) with a strong correlation between the GOHAI and the OHIP-14 (P < .001). The BMS patients showed a statistically significant improvement in the VAS, HAM-D and HAM-A scores from time 0 to time 1 (P < 0.001), and in the OHIP-14 scores (P < .004) after the treatment, but no statistically significant difference in the GOHAI score (P = .464). Conclusions: Burning mouth syndrome patients showed poorer scores on all scales compared to the healthy subjects, with a lower OHRQoL. OHIP-14 gives a greater weight to psychological and behavioural outcomes in evaluating oral health than GOHAI, and therefore, it is a more effective questionnaire in terms of the evaluation of the treatment response. The management of BMS can improve pain, anxiety and depression and the OHRQoL. abstract_id: PUBMED:21707741 Oral health-related quality of life--associations with oral health and conditions in Swedish 70-year-old individuals. Objective: The aim of this study was to evaluate oral health-related quality of life (OHRQoL) in relation to different chronic oral conditions and perception of oral health among elderly individuals. Background: Health-related quality of life is important to assess and understand the impact of different diseases and conditions on individuals' well-being. Material And Methods: Randomly selected 70-year-old individuals (N = 561) were included in this analysis. They were clinically examined and answered a questionnaire. Variables used were perceived oral health, satisfaction with appearance of teeth, problems with chewing and dry mouth, burning mouth syndrome, dental anxiety and dental visiting habits. To measure the participants' perception of the impact of oral conditions on their well-being, the Oral Health Impact Profile (OHIP-14) was used. Results: The OHIP-14 distribution of sum of scores showed a floor effect but revealed acceptable psychometric properties with regard to reliability and validity. Multivariable logistic regression revealed three highly predictive variables for individuals with high OHIP-14 scores: using dentures, having chewing problems and being dissatisfied with appearance of teeth. Conclusion: Use of dentures, having chewing problems and being dissatisfied with appearance of teeth are associated with high OHIP-14 sums, indicating strong influence on OHRQoL. abstract_id: PUBMED:20412450 Burning mouth syndrome and oral health-related quality of life: is there a change over time? Background: The symptoms associated with burning mouth syndrome can be quite varied and can interfere with the everyday lives of patients. Management of the condition can be challenging for clinicians. Aims: To determine the oral health-related quality of life (OHRQOL) implications of BMS on patients over a period of time whilst undergoing treatment and to evaluate whether treatment interventions had a positive effect on OHRQOL. Materials And Methods: Thirty-two individuals (26 females, 6 males, mean age 61 years, range 38-83 years) were enrolled in this study.
Individuals were interviewed using Short-Form McGill Pain Questionnaire (SFMPQ), Visual Analogue Scale (VAS), the Hospital Anxiety and Depression Scale (HADS) and the Oral Health Impact Profile (OHIP-14), at weeks 0, 8 and 16. Results: Scores from all outcome measures used decreased over the 16 weeks of the study. Statistically significant differences were found between time points for VAS pain scores (P < 0.001), HADS depression scores (P = 0.029), SFMPQ sensory pain scores (P < 0.01) and total scores for OHIP-14 (P < 0.05). Conclusion: Burning mouth syndrome has a negative impact on OHRQOL; however, individually tailored management of the condition can result in an improvement in patient-reported outcome measures including quality of life. abstract_id: PUBMED:27770603 Prevalence and oral health-related quality of life of self-reported orofacial conditions in Sweden. Objectives: To (i) determine the prevalences of self-report in a Swedish adult population, of temporomandibular disorders, burning mouth syndrome, dry mouth, and bad breath and (ii) determine oral health-related quality-of-life impairment in subjects reporting these conditions. Subjects And Methods: A cross-sectional, randomized sample of the adult Swedish population (response rate: 46%, N = 1309 subjects) self-reported their condition from the preceding month to assess prevalences of self-report for the studied conditions together with comorbidity group of subjects who reported more than one condition. The 49-item Oral Health Impact Profile (OHIP) used to assess oral health-related quality of life. Results: The most prevalent condition was bad breath (39%), followed by dry mouth (22%), temporomandibular disorders (18%) and burning mouth syndrome (4%). High comorbidity of conditions occurred in 27% of the population. Quality-of-life impairment increased with the number of comorbid conditions. Among individual conditions, burning mouth syndrome and temporomandibular disorders (57% and 40% OHIP points) presented higher impairment than dry mouth and bad breath (32% and 26% OHIP points). Conclusions: Orofacial conditions were common and often coexist. The comorbidity group experienced the highest impact on oral health-related quality of life: the more the comorbid conditions, the greater the negative impact. abstract_id: PUBMED:33031643 Oral health-related quality of life and associated factors in patients with burning mouth syndrome. Objective: The purpose of this study was to assess oral health-related quality of life (OHRQoL) in patients with burning mouth syndrome (BMS) and to identify clinical factors associated with OHRQoL. Methods: Fifty-seven patients with BMS (56.4 ± 10.7 years) participated in the study. Patients underwent oral examination, laboratory tests, psychological evaluation, measurement of salivary flow rates and evaluation of clinical characteristics using a BMS questionnaire. The OHRQoL of patients was assessed using the Oral Health Impact Profile-14 (OHIP-14). Results: The OHIP-14 score for patients with BMS was 38.6 ± 12.8. Patients had higher scores for the psychological discomfort and physical pain dimensions of the OHIP-14. The intensity of taste disturbance or xerostomia symptoms (β = 0.464, P < .001), worry about symptoms (β = 0.307, P = .020), and results of psychological evaluation (β = 0.311, P = .026) were significantly associated with OHIP-14 score. 
Multiple linear regression showed that the intensity of taste disturbance or xerostomia symptoms was significantly associated with decreased OHRQoL (β = 0.637, P = .026). Conclusions: This study suggests that severity of taste disturbance or xerostomia is an important factor that influences OHRQoL in patients with BMS. abstract_id: PUBMED:32415993 The impact of oro-facial pain conditions on oral health-related quality of life: A systematic review. Pain in the oro-facial region is one of the most common reasons for patients to seek dental treatment. Oral health-related quality of life (OHRQoL) can be affected not only by pain, but also by other oral disorders. Four main dimensions, Oral Function, Orofacial Pain, Orofacial Appearance and Psychosocial Impact, have been suggested to cover different areas of OHRQoL. The aim of this systematic review was to map the impact of oro-facial pain conditions on the Orofacial Pain dimension of OHRQoL (PROSPERO registration: CRD42017064033). Studies were included if they reported Oral Health Impact Profile (OHIP) mean or median domain scores for patients with odontogenic pain, oral mucosal pain/burning mouth syndrome (BMS), third molar extractions or temporomandibular disorders (TMD). A search in PubMed (MEDLINE), EMBASE, Cochrane, CINAHL and PsycINFO on 8 June 2017, updated 14 January 2019, combined with a hand search identified 2104 articles. After screening of abstracts, 1607 articles were reviewed in full text and 36 articles were included that reported OHIP data for 44 patient populations including 5849 patients. Typical Orofacial Pain impact for all four conditions (odontogenic pain, oral mucosal pain/BMS, pain after third molar extractions and TMD) was between 2 and 3 on a 0-8 converted OHIP scale, with the highest reported impact for pain after third molar extractions. This review provides standardised information about OHRQoL impact from four oro-facial pain conditions as a model for the Orofacial Pain dimension. The results show moderate impact for the pain dimension of OHRQoL in patients with common oro-facial pain conditions. abstract_id: PUBMED:30587180 Psychological problems and quality of life of patients with oral mucosal diseases: a preliminary study in Chinese population. Background: Psychological problems might play important roles in oral mucosal diseases such as recurrent aphthous ulcers (RAU), oral lichen planus (OLP) and burning mouth syndrome (BMS), but the relevance to patients' quality of life remained controversial. The aim of this study was to investigate the psychological problems and oral health-related quality of life in patients with RAU, OLP, and BMS in China, to assess the relationship between psychological problems and quality of life. Method: Thirty-nine RAU patients, 45 OLP patients, 15 BMS patients and 45 healthy controls were enrolled in the study. The Hospital Anxiety and Depression Scale (HADS) was chosen to analyze the patients' psychological problems. The Oral Health Impact Profile (OHIP-14) was used to measure the OHRQoL. The scores of HADS and OHIP-14 were used to analyze the relationship between psychological problems and the quality of life of the oral mucosal disease patients. Results: The OHIP-14 and HADS scores in the RAU, OLP and BMS groups were each higher than in the control group, and the differences between the patient groups and the controls were statistically significant (P < 0.05). The OHIP-14 score was highest, and OHRQoL therefore lowest, in the RAU group among the three patient groups, a difference that was statistically significant (P < 0.05).
Positive correlations existed between the psychological problems and the quality of life of the three patient groups (rs > 0, P < 0.05), except for the depression of the BMS group (rs = 0.168, P = 0.395). Conclusion: Patients with oral mucosal diseases such as RAU, OLP, and BMS had higher levels of anxiety, depression, and lower quality of life. The patient's psychological problems were related to their quality of life, suggesting that the psychological state of patients with oral mucosal disease need more attention. abstract_id: PUBMED:22225834 Generic and oral quality of life is affected by oral mucosal diseases. Background: The generic and oral health-related quality of life (QoL) has provided opportunity for investigation of the interrelations among generic health, oral health, and related outcomes. The purpose of this study was to identify the generic and oral QoL in the patients with oral mucosal disease (OMD). Methods: Five hundred and thirty-eight OMDs were recruited in this study. The instruments applied were Chinese version of the 36-item short form health survey (SF-36) and the short-form of Oral Health Impact Profile (OHIP-14). Results: The mean score of sum OHIP-14 was significantly higher in the patients with OMD (10.81 ± 9.01) compared with those in the healthy subjects (HS) (6.55 ± 6.73) (p < 0.001, Mann-Whitney U test). 56.51% of the OMD patients and 12.94% of the HS reported at least one oral negative impact (p < 0.001, Chi-square test). The overall mean score of SF-36 was significantly lower in the patients with OMD (74.54 ± 12.77) compared with those in the HS (77.97 ± 12.39) (p = 0.021, t-test). Conclusions: Administration of specific and generic questionnaires of QoL can provide us a detailed picture of the impact of OMDs on patients, and both generic and oral QoL were impaired in the patients with OMD. Answer: Burning mouth syndrome (BMS) has been shown to have a significant negative impact on oral health-related quality of life (OHRQoL). Studies have demonstrated that BMS patients experience poorer scores on various scales compared to healthy subjects, indicating a lower OHRQoL (PUBMED:31332814, PUBMED:20412450, PUBMED:33031643). The Oral Health Impact Profile-14 (OHIP-14) is a commonly used instrument to assess OHRQoL and has been found to be sensitive to the psychological and behavioral outcomes in evaluating oral health, making it effective for evaluating treatment response in BMS patients (PUBMED:31332814). Over time, with individually tailored management and treatment interventions, there can be an improvement in patient-reported outcome measures, including quality of life (PUBMED:20412450). For instance, BMS patients showed statistically significant improvements in pain, anxiety, and depression scores after treatment with psychotropic drugs, as well as in OHIP-14 scores (PUBMED:31332814). This suggests that management of BMS can lead to positive changes in OHRQoL over time. However, the severity of symptoms such as taste disturbance or xerostomia, worry about symptoms, and psychological evaluation results are significantly associated with OHRQoL, indicating that these factors can influence the degree of quality of life impairment in BMS patients (PUBMED:33031643). Therefore, while there can be an improvement in OHRQoL over time with appropriate treatment, the extent of this improvement may be influenced by the severity and management of BMS symptoms and associated psychological factors.
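Because every abstract in this group relies on the OHIP-14, a brief sketch of its commonly used additive scoring may help readers interpret the totals quoted above: 14 items, each rated from 0 (never) to 4 (very often), are summed to a 0-56 total in which higher scores indicate worse OHRQoL. The function name and the example responses below are invented for illustration and are not drawn from any of the cited studies.

    # Sketch of additive OHIP-14 scoring: 14 Likert items (0-4) summed to 0-56.
    def ohip14_total(responses: list[int]) -> int:
        if len(responses) != 14:
            raise ValueError("the OHIP-14 has exactly 14 items")
        if any(not 0 <= r <= 4 for r in responses):
            raise ValueError("each item is rated from 0 (never) to 4 (very often)")
        return sum(responses)

    # Invented example responses for a single respondent.
    example = [2, 1, 3, 2, 0, 1, 2, 2, 1, 0, 3, 2, 1, 2]
    print(ohip14_total(example))  # 22, within the 0-56 range used in the abstracts above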
Instruction: Is digital subtraction angiography still needed for the follow-up of intracranial aneurysms treated by embolisation with detachable coils? Abstracts: abstract_id: PUBMED:18795274 Is digital subtraction angiography still needed for the follow-up of intracranial aneurysms treated by embolisation with detachable coils? Introduction: Follow-up of intracranial aneurysms treated by embolisation with detachable coils is mandatory to detect a possible recanalisation. The aim of this study was to compare contrast-enhanced magnetic resonance angiography (CE-MRA) with digital subtraction angiography (DSA) used to detect aneurysm recanalisation to determine if DSA is still needed during follow-up. Materials And Methods: From May 2006 to May 2007, 55 patients with 67 aneurysms were treated by endosaccular coiling with (n = 9) or without (n = 58) an adjunctive stent. The follow-up imaging protocol included MRA at 6 and 12 months and a DSA at 12 months or earlier if a major recanalisation was identified on the 6-month MRA. Two neuroradiologists independently reviewed MRA images (readers 1 and 2) and two others reviewed DSA images. Results: Follow-up DSA showed stability of the aneurysm occlusion in 52 cases, recanalisation in 14 cases, and further thrombosis in one. On CE-MRA, both readers identified all recanalisations but one (sensitivity of 93%) as they missed a major recanalisation in a 2-mm ruptured aneurysm. There were two false-positive evaluations by reader 1 and three for reader 2. Mean specificity of CE-MRA to detect aneurysm recanalisation was 95.5%. Conclusion: CE-MRA is accurate in detecting aneurysm recanalisation after embolisation with detachable coils. CE-MRA may be proposed as the first-intention imaging technique for their follow-up. However, its sensitivity and specificity remain inferior to those of DSA and major recurrences may be missed in very small aneurysms. Therefore, a single DSA remains mandatory during the imaging follow-up. abstract_id: PUBMED:14534768 Follow-up of intracranial aneurysms treated with detachable coils: comparison of plain radiographs, 3D time-of-flight MRA and digital subtraction angiography. All patients with aneurysms treated with Guglielmi detachable coils (GDC) undergo angiography to assess long-term stability of aneurysm exclusion or to show recurrence of the aneurysm sac, which may require further treatment. We prospectively compared the plain-film appearance of the coil-mass, 3D time-of-flight MR angiography (TOF MRA) and digital subtraction angiography (DSA) for the detection of aneurysm recanalisation during follow-up. We studied 60 patients with 74 intracranial aneurysms treated with Guglielmi detachable coils. We used the unsubtracted image of the angiograms performed at the completion of any embolisation procedure and at follow-up as the plain radiographs. Recanalisation was considered if loosening, compaction or reorientation of the coil mass was apparent. TOF MRA was performed to assess the presence and size of a neck remnant. DSA was regarded as the definitive investigation. Comparison of the techniques showed good agreement as regards aneurysm recanalisation. MRA was more accurate than plain radiography and could replace DSA for long-term follow-up. The initial follow-up examination should, however, include both modalities. In cases of contraindications or limitations to MRA, the interval between follow-up angiographic examinations could be increased if there is no change in the plain-film coil-mass appearances.
abstract_id: PUBMED:15956496 MR angiography at 3T versus digital subtraction angiography in the follow-up of intracranial aneurysms treated with detachable coils. Background And Purpose: Digital subtraction angiography (DSA) is used to follow-up intracranial aneurysms treated with detachable coils to identify recurrence and determine need for additional treatment. However, DSA is invasive and involves a small risk of neurologic complications. We assessed the feasibility and usefulness of 3D time-of-flight (TOF) MR angiography (MRA) performed at 3T compared with DSA for the follow-up of coil-treated intracranial aneurysms. Methods: In a prospective study, 20 consecutive patients with 21 intracranial aneurysms treated with coils underwent DSA and nonenhanced and enhanced multiple overlapping thin-slab acquisition 3D TOF MRA at 3T on the same day at a mean follow-up of 6 months (range, 4-14 months) after coil placement. MRA images were evaluated for presence of artifacts, presence and size of aneurysm remnants and recurrences, patency of parent and branch vessels, and added value of contrast material enhancement. MRA and DSA findings were compared. Results: Interobserver agreement of MRA was good, as was agreement between MRA and DSA. All three recurrences that needed additional treatment were detected with MRA. Minor disagreement occurred in four cases: three coil-treated aneurysms were scored on MRA images as having a small remnant, whereas on DSA images these aneurysms were occluded; the other aneurysm was scored on MRA images as having a small remnant, whereas on DSA images this was a small recurrence. Use of contrast material had no additional value. Coil-related MR imaging artifacts were minimal and did not interfere with evaluation of the occlusion status of the aneurysm. Conclusion: High-spatial-resolution 3D TOF MRA at 3T is feasible and useful in the follow-up of patients with intracranial aneurysms treated with coil placement. abstract_id: PUBMED:11274544 Follow-up of intracranial aneurysms treated with detachable coils: comparison of gadolinium-enhanced 3D time-of-flight MR angiography and digital subtraction angiography. Purpose: To compare three-dimensional (3D) time-of-flight magnetic resonance (MR) angiography with digital subtraction angiography (DSA) in the follow-up of intracranial aneurysms treated with selective endovascular placement of detachable coils. Materials And Methods: Sixty-eight consecutive patients with intracranial aneurysms were included in the prospective study. The goal was to evaluate 3D time-of-flight MR angiography versus DSA for the detection of a residual aneurysm neck or residual flow inside the coil mesh. Results: Eighty-one MR angiographic and 83 DSA examinations were performed; 15 patients were examined with both modalities twice. MR angiography was not possible in two patients. In another patient, the quality of MR angiography was not sufficient to assess the treated aneurysm. In 72 of the remaining 80 MR angiographic and DSA examinations, there was good correlation between the two modalities. In 54 cases, neither image type showed remnants or recurrence, but in 18, both showed residual aneurysm. In eight cases, the MR angiographic and DSA results differed. In one of these cases, MR angiography depicted residual aneurysm but DSA depicted an arterial loop. In seven cases, a small (<3-mm) remnant was not detected at MR angiography. 
Conclusion: Because very small aneurysm remnants or recurrences probably are not clinically important, MR angiography is an option for following up intracranial aneurysms treated with detachable coils and may partly replace DSA. abstract_id: PUBMED:16528000 Intracranial aneurysms treated with Guglielmi detachable coils: imaging follow-up with contrast-enhanced MR angiography. Background And Purpose: To compare the utility of contrast-enhanced MR Angiography (CE-MRA) with digital subtraction angiography (DSA) after endovascular treatment of intracranial aneurysms with Guglielmi detachable coils. Methods: From April 1999 to August 2002, 106 patients with 107 aneurysms treated by endovascular coiling using Guglielmi detachable coils underwent simultaneous DSA and CE-MRA at follow-up (mean: 12.9 range: 5 to 27 months). DSA was performed as the standard reference. MR angiographic images were analyzed independently by 2 senior radiologists (J.-Y.G., S.C.) and DSA by a third radiologist (X.L.). Findings were assigned to 1 of 3 categories: complete obliteration (class 1), residual neck (class 2) and residual aneurysm (class 3). Results: DSA at follow-up demonstrated 65 (60.6%) complete obliterations (group 1), 21 (19.7%) residual necks (group 2) and 21 (19.7%) residual aneurysms (group 3). One patient (0.9%) experienced aneurysm rebleed during the follow-up period. Among 101 assessable imaging comparisons, interobserver agreement was determined to be very good for CE-MRA (kappa=0.96) with only 4 discrepancies between both examiners. Comparison between CE-MRA and DSA showed an excellent agreement between techniques (kappa=0.93). Of the 21 with residual necks described on DSA, 20 were seen on CE-MRA. CE-MRA detected all 19 residual aneurysms. Conclusions: CE-MRA after selective embolization of intracranial aneurysm is useful and comparable to DSA in the assessment of aneurysmal recanalization either as residual neck or aneurysmal sac. abstract_id: PUBMED:15983772 Time-of-flight magnetic resonance angiography in the follow-up of intracranial aneurysms treated with Guglielmi detachable coils. The purpose of this study was to evaluate time-of-flight magnetic resonance angiography (MRA) in the follow-up of intracranial aneurysms treated with Guglielmi detachable coils (GDCs). From January 1998 to January 2002 27 MRA and intra-arterial digital subtraction angiography (IADSA) examinations were analyzed for residual aneurysms and arterial patency following GDC placement. A total number of 33 intracranial aneurysms was analyzed, including 18 located in the posterior circulation. The MRA analysis was based on source images in combination with maximum intensity projections. The IADSA was used as the reference standard. Two aneurysms were excluded from evaluation, because of susceptibility artefacts from other aneurysms, which were clipped. Sensitivity and positive predictive values of MRA in revealing residual aneurysms were, respectively, 89% and 80%. Specificity in ruling out remnant necks and residual flow around coils was, respectively, 91% and 97%, with a negative predictive value of, respectively, 95% and 100%. Specificity and negative predictive value of MRA for arterial occlusion were, respectively, 87% and 100% for the parent arteries and, respectively, 85% and 100% for the adjacent arteries. MRA is a reliable diagnostic tool in the follow-up of GDC treatment, and it may replace IADSA in excluding residual flow around coils and aneurysmal necks and in ruling out arterial occlusion. 
abstract_id: PUBMED:11972455 MR angiography in the follow-up of coiled cerebral aneurysms after treatment with Guglielmi detachable coils. Purpose: Intra-arterial digital subtraction angiography (DSA) has been considered the gold standard examination in the follow-up of patients treated with Guglielmi detachable coils (GDCs). However, DSA is an invasive and expensive investigation and results in exposure to ionising radiation to both patient and operator. The aim of this study was to compare MR angiography (MRA) with DSA with regard to patency of the occlusion of aneurysms following GDC treatment. Material And Methods: We performed 75 MRA and DSA examinations on 51 patients treated with GDCs. The examinations were performed 3-36 months after embolisation and the interval between MRA and DSA was less than 1 week. Hard copies of both studies were interpreted retrospectively and independently for residual flow within the aneurysm, residual aneurysmal neck, and parent and branch vessel flow. Results: Patency status of parent and branch vessel flow was correctly identified with MRA in all patients except 1. The sensitivity of MRA in revealing residual flow within the aneurysm was 97%. The specificity in ruling out residual flow within the aneurysm was 91%. Conclusion: MRA may replace DSA in the long-term follow-up of coiled cerebral aneurysms. The initial follow-up examination should, however, include both modalities. abstract_id: PUBMED:9726477 MR angiography with ultrashort echo time in cerebral aneurysms treated with Guglielmi detachable coils. We evaluated a time-of-flight three-dimensional MR angiographic sequence with an ultrashort echo time for its ability to characterize the perfusional state of cerebral aneurysms that had been treated with Guglielmi detachable coils and to depict adjacent cerebral arteries. The results were compared with findings at conventional MR angiography and digital subtraction angiography. Adjacent vessels were seen better in 36% of patients imaged with the new technique. Both MR angiographic methods detected residual cerebral aneurysmal perfusion with a tendency to overestimate the patent portion of the aneurysm. abstract_id: PUBMED:12811438 Angiographic follow-up of vertebrobasilar artery aneurysms treated with detachable coils. Endovascular treatment of ruptured vertebrobasilar artery aneurysms with Gugliemi detachable coils (GDC) has become an alternative to surgery. Mid-term angiographic follow-up can now be reported. Of 111 vertebrobasilar aneurysms in 110 patients we treated with GDC since 1992, 53 underwent angiography within 1 year and 59 after more than 18 months. We did not achieve complete occlusion on initial treatment of 23 aneurysms (21%). Complications were observed in 19 patients (17%), leading to permanent clinical disability in eight. Enlargement of the neck or reopening was seen in 12 (23%) of 53 aneurysms followed by angiography within 12 months. Documented recanalisation was treated in four (8%). Angiography was performed after 18-78 months in 59 patients, of whom nine, including three with initially incomplete occlusions, were retreated with GDC. Within the entire second observation period, three (5%) of the 59 patients had a further haemorrhage and were retreated with GDC. Rebleeding proved to be the only factor influencing the clinical outcome of retreated patients. 
abstract_id: PUBMED:10096328 Prospective evaluation of time-of-flight MR angiography in the follow-up of intracranial saccular aneurysms treated with Guglielmi detachable coils. Purpose: The purpose of our study was to prospectively evaluate 3D time-of-flight (TOF) MR angiography (MRA) in the follow-up of 27 intracranial aneurysms treated with Guglielmi detachable coils (GDCs). Method: From February 1997 to June 1998, 26 patients with 27 aneurysms were included in this prospective study. Aneurysms were located in the anterior circulation in 23 cases and in the posterior circulation in 4 cases. All patients underwent 3D TOF MRA and digital subtraction angiography (DSA) in the same week within 4 months after aneurysmal treatment with GDCs. No clinical events occurred during the follow-up. We analyzed residual flow within the coil mass and within the aneurysmal neck and the patency of the parent and adjacent arteries on MRA and DSA. MRA analysis was based upon MIPPED and source images. DSA was our gold standard. Results: In all cases, the quality of MRA was good enough to be informative. In aneurysmal analysis, the sensitivity, specificity, positive predictive value, and negative predictive value of MRA were, respectively, 80, 100, 100, and 96% to diagnose residual flow within the coil mass (one false-negative case) and 83, 100, 100, and 95.5% to diagnose residual flow within the aneurysmal neck (one false-negative case). In arterial analysis, sensitivity and positive predictive value of MRA were 89 and 100% to diagnose patency of the parent artery (three false-negative cases) and 83 and 100% to diagnose patency of adjacent arteries (seven false-negative cases). Conclusion: In the follow-up of intracranial aneurysms treated with GDCs, 3D TOF MRA could be used as a screening test to select patients that should undergo DSA and thus could improve patient follow-up in terms of risk-benefit. Answer: Digital subtraction angiography (DSA) has traditionally been the gold standard for the follow-up of intracranial aneurysms treated with embolization using detachable coils. However, several studies have investigated the efficacy of magnetic resonance angiography (MRA) as a non-invasive alternative to DSA for this purpose. One study found that contrast-enhanced MRA (CE-MRA) is accurate in detecting aneurysm recanalization after embolization with detachable coils, suggesting that CE-MRA could be proposed as a first-intention imaging technique for follow-up. However, the sensitivity and specificity of CE-MRA were still found to be inferior to DSA, and major recurrences might be missed, especially in very small aneurysms. Therefore, the study concluded that at least one DSA should be included during the imaging follow-up (PUBMED:18795274). Another study compared plain radiographs, 3D time-of-flight MRA (TOF MRA), and DSA for the detection of aneurysm recanalization. It found that MRA was more accurate than plain radiography and could potentially replace DSA for long-term follow-up, although the initial follow-up examination should include both MRA and DSA (PUBMED:14534768). A study assessing 3D TOF MRA at 3T versus DSA concluded that high-spatial-resolution 3D TOF MRA at 3T is feasible and useful in the follow-up of patients with intracranial aneurysms treated with coil placement, as all recurrences needing additional treatment were detected with MRA (PUBMED:15956496). 
Furthermore, a study comparing gadolinium-enhanced 3D TOF MRA and DSA found that MR angiography is an option for following up intracranial aneurysms treated with detachable coils and may partly replace DSA, especially since very small remnants or recurrences are likely not clinically significant (PUBMED:11274544). CE-MRA has also been shown to have excellent agreement with DSA in assessing aneurysmal recanalization, either as residual neck or aneurysmal sac, suggesting its utility and comparability to DSA (PUBMED:16528000).
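Several of the studies above summarise agreement, either between two CE-MRA readers or between MRA and DSA occlusion categories, as a kappa statistic (for example, kappa = 0.96 and 0.93 in PUBMED:16528000). The sketch below shows how Cohen's kappa is obtained from two raters' category assignments; the ratings are invented for illustration and do not come from any of the cited series.

    # Sketch: Cohen's kappa for two readers assigning each treated aneurysm to
    # one of three categories (C = complete occlusion, N = residual neck,
    # A = residual aneurysm).
    from collections import Counter

    def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a = Counter(rater_a)
        freq_b = Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Invented ratings for 10 aneurysms.
    mra = ["C", "C", "C", "N", "N", "A", "C", "C", "N", "A"]
    dsa = ["C", "C", "C", "N", "C", "A", "C", "C", "N", "A"]
    print(round(cohens_kappa(mra, dsa), 2))  # 0.83 with these invented ratings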
Instruction: Is There a Role for PET/CT Parameters to Characterize Benign, Malignant, and Metastatic Parotid Tumors? Abstracts: abstract_id: PUBMED:27276388 Is There a Role for PET/CT Parameters to Characterize Benign, Malignant, and Metastatic Parotid Tumors? Objective: Assessment of benign and malignant lesions of the parotid gland, including metastatic lesions, is challenging with current imaging methods. Fluorine-18 FDG PET/CT is a noninvasive imaging modality that provides both anatomic and metabolic information. Semiquantitative data obtained from PET/CT, also known as PET/CT parameters, are maximum, mean, or peak standardized uptake values (SUVs); metabolic tumor volume; total lesion glycolysis; standardized added metabolic activity; and normalized standardized added metabolic activity. Our aim was to determine whether FDG PET/CT parameters can differentiate benign, malignant, and metastatic parotid tumors. Materials And Methods: Thirty-four patients with parotid neoplasms underwent PET/CT before parotidectomy; maximum SUV, mean SUV, peak SUV, total lesion glycolysis, metabolic tumor volume, standardized added metabolic activity, and normalized standardized added metabolic activity were calculated on a dedicated workstation. Univariate analyses were performed. A ROC analysis was used to determine the ability of PET/CT parameters to predict pathologically proven benign, malignant, and metastatic parotid gland neoplasms. Results: Fourteen patients had a benign or malignant primary parotid tumor. Twenty had metastases to the parotid gland. When the specificity was set to at least 85% for each parameter to identify cut points, the corresponding sensitivities ranged from 15% to 40%. Assessment of benign versus malignant lesions of parotid tumors, as well as metastasis from squamous cell carcinoma versus other metastatic causes, revealed that none of the PET/CT parameters has enough power to differentiate among these groups. Conclusion: PET/CT parameters, including total lesion glycolysis, metabolic tumor volume, standardized added metabolic activity, and normalized standardized added metabolic activity, are not able to differentiate benign from malignant parotid tumors, primary parotid tumors from metastasis, or metastasis from squamous cell carcinoma and nonsquamous cell carcinoma metastasis. abstract_id: PUBMED:38352266 A case of metastatic lymphoepithelial carcinoma of parotid gland identified on 68gallium DOTA-[Tyr3] octreotate PET CT. The authors present the case of a 59-year-old lady diagnosed with lymphoepithelial carcinoma (LEC) of the left parotid gland. The primary tumour was identified using contrast-enhanced CT, and diagnosis was confirmed via fine needle aspiration cytology and immunohistochemistry. Staging using fluorine-18 fluorodeoxyglucose PET CT revealed regional nodal metastases, while no distant metastasis was evident. Following radical radiotherapy, a favourable locoregional response was observed on MRI, yet the patient's plasma Epstein-Barr virus load continued to rise. Given her primary tumour's somatostatin receptor type 2 (SSTR2) positivity, gallium-68 DOTA-[Tyr3] octreotate PET CT (68Ga-DOTATATE PET CT) was performed, revealing multiple distant metastases with DOTATATE avidity. Despite attempts at palliative chemotherapy and immunotherapy, disease progression led to the decision for the best supportive care. The unique presentation of metastatic LEC on 68Ga-DOTATATE PET CT suggests a potential role for SSTR2-targeted imaging in diagnosis and management. 
abstract_id: PUBMED:25888252 Clinical significance of parotid gland incidentalomas on (18)F-FDG PET/CT. The aim of this study was to evaluate the frequency and clinical significance of incidental focal uptake of (18)F-fluorodeoxyglucose (FDG) on positron emission tomography/computed tomography (PET/CT) in the parotid glands of cancer patients. A retrospective review of 7,252 oncologic PET/CT studies was done. FDG positive parotid incidentalomas occurred in 0.4% of PET/CT scans, of which only 4% were malignant. PET/CT was unable to differentiate benign from malignant parotid lesions based on SUVmax alone. (18)F-FDG positive parotid incidentalomas can be managed conservatively; however, patients with a prior history of lymphoma had a much higher risk of parotid malignancy and require further investigation. abstract_id: PUBMED:31989884 The Clinical Significance of Incidental Parotid Uptake in a PET/CT Study: A Diagnostic Algorithm. Background: There are insufficient clinical data to guide the management of incidental parotid lesions in patients diagnosed with cancer. We aimed to reveal the importance of incidental parotid lesions encountered during oncologic F-18 fluorodeoxyglucose positron emission tomography (FDG PET/CT) imaging in our clinical practice and the diagnostic algorithm for such lesions. Methods: We performed a database search of PET/CT records generated from 2009 to 2015 for "parotid" in reports of patients who underwent PET/CT examination for a known malignancy elsewhere, or cancer screening. Results: Incidental parotid FDG uptake on PET/CT had a prevalence of 1.1%. The incidence of parotid metastasis in our series was 36.4%, and 75% of these were malignant melanoma metastases. Of the 11 cases, 5 were Warthin tumours, and Warthin tumours showed stronger GLUT1 expression than metastatic parotid lesions. Conclusion: In patients with malignancy elsewhere, focal FDG uptake in the parotid gland, especially if malignant melanoma or SCC is absent, should not be considered metastatic disease without histopathologic confirmation. If parotid disease would change the patient's treatment plan and disease stage, the parotid lesion should be evaluated by additional methods, such as fine needle aspiration biopsy.
2-[18F]-fluoro-2-deoxy-d-glucose (FDG)-positron emission tomography (PET) has limited ability to differentiate between different types of salivary gland tumors because both Warthin's tumors and pleomorphic adenomas usually show increased FDG uptake, with no statistically significant difference in standardized uptake value (SUV) compared with malignant salivary gland tumors. Here, we discuss 4'-[methyl-11C]-thiothymidine (4DST) PET, which provides cell proliferation imaging capable of demonstrating intense uptake in parotid carcinoma and Warthin's tumor, but no uptake in parotid pleomorphic adenoma. This is the first report of the potential of proliferation PET/computed tomography (CT) imaging for characterizing salivary gland tumors based on the molecular pathogenesis of the tumor. abstract_id: PUBMED:36579280 Efficacy of the Systemic Immune Inflammation Index in Malignant and Benign Parotid Neoplasms. Objective Several studies have looked at systemic immune-inflammation index (SII) (neutrophil x platelet / lymphocyte) values, which have been shown to be useful in determining tumor aggressivity and prognosis, as well as predicting recurrence risk, particularly in cancer cases. The purpose of the current study was to determine SII values in patients with parotid masses and investigate their utility in distinguishing between malignant and benign parotid tumors. Methods This retrospective study included 237 adult patients-112 women and 125 men-who were followed up on and treated for parotid mass between 2015 and 2021. The SII values determined were compared between the groups. Results The difference between the two groups was statistically significant (p = 0.001). In addition, SII values were higher in malignant tumors with perineural and lymphovascular invasion compared to other malignant tumors, although the difference was not statistically significant. Conclusions Although SII values yielded significant results in differentiating malignant from benign parotid tumors, since no significant cut-off value was determined, we do not think that they represent an effective marker capable of being used to distinguish between these tumors in clinical practice. abstract_id: PUBMED:27738606 The Diagnostic Value of B-Mode Sonography in Differentiation of Malignant and Benign Tumors of the Parotid Gland. Introduction: Different imaging modalities are used to evaluate salivary gland diseases, including tumors. Ultrasonography (US) is the preferred method on account of its ease of use, affordability, safety profile, and good tolerance among patients. The aim of this study was to evaluate the role of US in differentiating malignant from benign parotid tumors, in the context of previous controversy in the literature on this subject. Materials And Methods: A cross-sectional study was performed in patients who presented to Qaem Medical Center with parotid masses and who were candidates for parotidectomy between June 2013 and January 2015. Patients were initially referred for a diagnostic US of the parotid. US examinations were performed and sonographic features were reported. The tumors were then classified as benign or malignant on the basis of literature descriptions of the US features of parotid tumors, and were next diagnosed pathologically. The sensitivity, specificity, positive predictive value, and negative predictive value of US for the purpose of differentiating malignant from benign tumors were then calculated. Results: Twenty-eight patients (aged 18-92 years) underwent US of parotid masses.
Twenty-three tumors were diagnosed as benign and five were diagnosed as malignant. The final histopathologic examination showed 21 benign and seven malignant tumors. The sensitivity, specificity, positive predictive value, and negative predictive value of US for differentiating malignant from benign tumors were calculated as 57%, 95%, 80%, and 87%, respectively. Conclusion: US has a high specificity in differentiating between malignant and benign tumors. However, fine needle aspiration or core needle biopsy is advocated for an exact diagnosis. abstract_id: PUBMED:26629288 Metastatic pleomorphic adenoma to the supraspinatus muscle: a case report and review of a rare aggressive clinical entity. We report a case of a 65-year-old female with a recurrent right parotid pleomorphic adenoma (PA) 24 years after initial surgical excision. Positron-emission tomography (PET) and computed tomography (CT) demonstrated an unusual suspicious FDG-avid erosive rim enhancing mass centered in the right supraspinatus muscle. Cytology from CT-guided aspiration of the mass was consistent with a histologically benign PA, and the patient was diagnosed with metastatic pleomorphic adenoma (MPA). The patient later developed diffuse pulmonary metastases and died within 3 months. MPA, although rare, is recognised as a potentially lethal malignant complication of recurrent or longstanding benign PA. As no biochemical or genetic parameters are predictive of malignant change, patients presenting with recurrent PA should be considered for screening for metastatic disease. abstract_id: PUBMED:16462726 Does 18F-FDG PET/CT play a role in the differential diagnosis of parotid masses. Aim: The aim of the present study was to assess the accuracy of a hybrid PET/CT scanner in the evaluation of newly diagnosed parotid masses, comparing the results with those reported in the literature using PET scanners only. Methods: The potential role of 18F-FDG PET/CT in distinguishing benign from malignant parotid masses in 14 consecutive patients was investigated. All patients were preoperatively evaluated by means of ultrasound (US), US-guided fine needle aspiration (FNA) cytology, computed tomography (CT) scan, magnetic resonance imaging (MRI) and 18F-FDG PET/CT. To interpret FDG PET findings, the right to left parotid (R/L) SUV max ratio was calculated in a group of 54 patients without evidence of parotideal disease (mean ± SD = 1 ± 0.2; range = 0.8-1.2); considering the R/L SUV max ratio, focal or diffuse uptakes <0.8 or >1.2 were considered as potentially pathological. Results: Imaging data were compared with surgical and histopathological findings. At FDG PET/CT, 9 false positive cases were found (8 Warthin's tumours, 1 pleomorphic adenoma), 1 false negative (acinar cell carcinoma), 4 true negative (1 Warthin's tumour, 1 pleomorphic adenoma, 1 lymphoepithelial cyst, 1 parotid inflammation) whereas there was no case of true positive. The global accuracy of FDG PET/CT was rather low, at 29%. Conclusions: In agreement with other preliminary reports in which the FDG PET without CT fusion imaging was used, in our experience 18F-FDG PET/CT did not prove to play a significant role in differential diagnosis (benign vs malignant) of parotid masses. Further studies collecting larger groups of patients are needed to further elucidate this observation.
Answer: Based on the provided abstracts, PET/CT parameters, including maximum SUV, mean SUV, peak SUV, total lesion glycolysis, metabolic tumor volume, standardized added metabolic activity, and normalized standardized added metabolic activity, do not have sufficient power to differentiate between benign, malignant, and metastatic parotid tumors (PUBMED:27276388). Additionally, 18F-FDG PET/CT has been shown to have limitations in differentiating benign from malignant parotid lesions based on SUVmax alone, with a low accuracy rate reported in one study (PUBMED:16462726). Furthermore, incidental focal uptake of FDG in the parotid glands on PET/CT scans in cancer patients often represents benign lesions, with a very low percentage being malignant (PUBMED:25888252). However, there are instances where PET/CT imaging can be useful in the management of parotid tumors. For example, 68Ga-DOTATATE PET/CT, which targets somatostatin receptor type 2 (SSTR2), was able to identify multiple distant metastases in a case of lymphoepithelial carcinoma of the parotid gland, suggesting a potential role for SSTR2-targeted imaging in the diagnosis and management of certain parotid tumors (PUBMED:38352266). Additionally, 18F-FDG PET/CT has been used for response assessment in a case of pediatric sebaceous carcinoma of the parotid gland, indicating its utility in monitoring treatment response (PUBMED:33172066). Moreover, proliferation PET/CT imaging with 4'-[methyl-11C]-thiothymidine (4DST) has shown potential in characterizing salivary gland tumors based on their molecular pathogenesis, demonstrating intense uptake in parotid carcinoma and Warthin's tumor, but no uptake in parotid pleomorphic adenoma (PUBMED:34829412). In conclusion, while conventional PET/CT parameters may not be reliable for characterizing benign, malignant, and metastatic parotid tumors, certain specialized PET/CT imaging techniques, such as 68Ga-DOTATATE PET/CT and proliferation PET/CT with 4DST, may have a role in specific cases or for monitoring treatment response. Additionally, other diagnostic tools, such as B-mode sonography, have been used to differentiate malignant from benign parotid tumors, with high specificity but limited sensitivity, suggesting that fine needle aspiration or core needle biopsy may be necessary for an exact diagnosis (PUBMED:27738606). Furthermore, the systemic immune-inflammation index (SII) has been investigated for its utility in distinguishing between malignant and benign parotid tumors, but no significant cut-off value was determined, indicating that SII may not be an effective marker for use in clinical practice for this purpose (PUBMED:36579280). In summary, while PET/CT parameters alone may not be sufficient to characterize benign, malignant, and metastatic parotid tumors, specialized PET/CT imaging techniques and other diagnostic methods can play a role in the management and diagnosis of these tumors. However, the utility of PET/CT in this context is still limited, and further studies are needed to improve its diagnostic accuracy and to establish its role in the clinical setting.
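To make the specificity/sensitivity trade-off described above concrete, here is a minimal, purely illustrative Python sketch of the kind of cut-point analysis reported in PUBMED:27276388, where specificity is fixed at 85% or higher and the corresponding sensitivity and predictive values are read off. The SUVmax values, pathology labels, and the diagnostic_metrics helper below are all invented for demonstration; they are not the study's data or code.

```python
def diagnostic_metrics(values, labels, cutoff):
    """Treat value >= cutoff as test-positive; labels use 1 = malignant, 0 = benign."""
    tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 1)
    fp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 0)
    fn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 0)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sens, spec, ppv, npv

# Invented SUVmax readings and pathology labels (not study data)
suv_max = [2.1, 3.4, 4.0, 5.2, 6.8, 7.5, 8.1, 9.3, 10.2, 12.0]
labels  = [0,   0,   0,   1,   0,   1,   0,   1,   1,    1]

# Mirror the study's approach: keep only cutoffs whose specificity is >= 85%,
# then report the sensitivity and predictive values they imply.
for cutoff in sorted(set(suv_max)):
    sens, spec, ppv, npv = diagnostic_metrics(suv_max, labels, cutoff)
    if spec >= 0.85:
        print(f"cutoff={cutoff:5.1f}  sens={sens:.2f}  spec={spec:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```

With these made-up numbers, the cutoffs that achieve high specificity only capture a minority of the malignant cases, which is the same pattern (sensitivities of 15% to 40% at specificity of at least 85%) that led the study authors to conclude the parameters lack discriminatory power.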
Instruction: Can changes in vital signs be used to predict the response to lumbar facet blocks and radiofrequency denervation? Abstracts: abstract_id: PUBMED:24918331 Can changes in vital signs be used to predict the response to lumbar facet blocks and radiofrequency denervation? A prospective, correlational study. Background: Facet joint radiofrequency (RF) ablation is characterized by a high failure rate, which is partly due to the fact that pain relief after diagnostic blocks is inherently subjective. An area that has yet to be explored is whether more objective measures, such as changes in vital signs after blocks, might be used to predict treatment outcomes. Methods: A multicenter, prospective study was performed in 223 patients who underwent diagnostic lumbar medial branch blocks, of whom 87 proceeded to RF denervation. Blood pressure (BP), heart rate (HR), and pain scores were recorded preblock and 20 minutes postblock. A positive vital sign response was designated as a decrease of less than 7.5 units in BP or HR, and a positive facet block as pain relief of 50% or greater based on 6-hour pain diary scores. Results: Overall, 125 subjects (56.1%; 95% confidence interval, 49.3%-62.6%) experienced a positive facet block, and 71 had 3-month follow-up information after denervation. Correlations between changes in NRS scores and HR (r = -0.01, P = 0.893), systolic BP (r = 0.05, P = 0.47), diastolic BP (DBP) (r = 0.08, P = 0.22), and mean arterial pressure (r = 0.08, P = 0.21) were weak and nonsignificant. No associations were found between facet block results and any vital sign. Six (85.7%) of 7 patients who experienced a decrease in DBP of greater than 7.5 mm Hg after facet block had a positive RF denervation outcome at 3 months, compared with 43.8% who did not (odds ratio, 7.52; 95% confidence interval, 0.84-363.8; P = 0.049). A classification tree based on significant decrease in DBP, pain duration, and baseline NRS pain score showed a 76.7% (range, 65.8%-86.3%) accuracy rate. Conclusions: Although a decrease in DBP of more than 7.5 mm Hg had 97.3% specificity and 85.7% positive predictive value for predicting positive RF ablation outcomes, the low negative predictive value (56.3%) precludes its use as a solitary screening tool. An algorithm based on age, baseline NRS pain score, and a significant decrease in DBP was able to predict 76.7% (range, 65.8%-86.3%) of RF denervation outcomes. abstract_id: PUBMED:35322977 Equivalent Outcomes of Lumbar Therapeutic Facet Joint Nerve Blocks and Radiofrequency Neurotomy: Comparative Evaluation of Clinical Outcomes and Cost Utility. Background: Chronic low back pain secondary to facet joint pathology is prevalent in 27% to 40% of selected populations using controlled comparative local anesthetic blocks. Lumbar facet joint nerve blocks and radiofrequency neurotomy are the most common interventional procedures for lower back pain. Nonetheless, questions remain regarding the effectiveness of each modality. Moreover, there is no agreement in reference to superiority or inferiority of lumbar facet joint nerve blocks when compared with radiofrequency neurotomy. Centers for Medicare and Medicaid Services (CMS) and almost all payers prefer radiofrequency ablation. Both procedures have been extensively studied with randomized controlled trials, systematic reviews, and cost utility analysis. 
Objective: To assess the clinical outcomes and cost utility of therapeutic lumbar facet joint nerve blocks (lumbar facet joint nerve blocks with L5 dorsal ramus block) compared with radiofrequency neurotomy in managing chronic low back pain of facet joint origin. Study Design: A retrospective, case-control, comparative evaluation of outcomes and cost utility. Setting: The study was conducted in an interventional pain management practice, a specialty referral center, a private practice setting in the United States. Methods: The study was performed utilizing the Strengthening the Reporting of Observational Studies in Epidemiology Analysis (STROBE) criteria. Only the patients meeting the diagnostic criteria of facet joint pain by means of comparative, controlled diagnostic local anesthetic blocks were included. The main outcome measure was pain relief measured by the Numeric Rating Scale (NRS) evaluated at 3, 6, and 12 months. Significant improvement was defined as at least 50% improvement in pain relief. Cost utility was calculated utilizing direct payment data for the procedures with the addition of estimated indirect costs over a period of one year based on highly regarded surgical literature and previously published interventional pain management literature. Results: A total of 326 patients met the inclusion criteria with 99 patients receiving lumbar facet joint nerve blocks (lumbar facet joint nerve blocks with L5 dorsal ramus block) and 227 receiving lumbar radiofrequency neurotomy. Forty-eight patients in the facet joint nerve block group and 148 patients in the radiofrequency group completed one-year follow-up. Patients experienced significant improvement in both groups from baseline to 12 months with significant pain relief (≥ 50%). Significant pain relief was recorded in 100%, 99%, and 79% of the patients in the facet joint nerve block group, whereas it was 100%, 74%, and 65% in the radiofrequency neurotomy group at the 3, 6, and 12 month follow-up, with a significant difference at 6 months. Cost utility analysis showed average costs for quality-adjusted life year (QALY) of $4,664 for lumbar facet joint nerve blocks and $5,446 for lumbar radiofrequency neurotomy. Twelve patients (12%) in the lumbar facet joint nerve block group and 79 patients (35%) in the lumbar radiofrequency group were converted to other treatments, either due to side effects or inadequate relief. Conclusion: This study shows similar outcomes of therapeutic lumbar facet joint nerve blocks when compared with radiofrequency neurotomy as indicated by significant pain relief and cost utility. abstract_id: PUBMED:29173659 Radiofrequency Denervation of the Cervical and Lumbar Spine. Facet or zygapophysial joint pain is commonly seen in the aging population. Interventional procedures, such as facet joint nerve blocks, facet intraarticular injections, and radiofrequency denervation, are used for the diagnosis and treatment of axial spinal chronic neck and low back pain. The focus of this article is to understand how radiofrequency denervation works in the cervical and lumbar spine and to be able to properly select appropriate patients who might benefit from this safe and effective procedure. abstract_id: PUBMED:32110398 Best practice in radiofrequency denervation of the lumbar facet joints: a consensus technique. Background: Radiofrequency denervation is used to treat selected people with low back pain. Recent trials have been criticised for using a sub-optimal intervention technique.
Objectives: To achieve consensus on a best practice technique for administering radiofrequency denervation of the lumbar facet joints to selected people with low back pain. Study Design: A consensus of expert professionals in the area of radiofrequency denervation of the lumbar facet joints. Methods: We invited a clinical member from the 30 most active UK departments in radiofrequency pain procedures and two overseas clinicians with specific expertise to a 1 day consensus meeting. Drawing on the known anatomy of the medial branch, the theoretical basis of radiofrequency procedures, a survey of current practice and collective expertise, delegates were facilitated to reach consensus on the best practice technique. Results: The day was attended by 24 UK and international clinical experts. Attendees agreed a best practice technique for the conduct of radiofrequency denervation of the lumbar facet joints. Limitations: This consensus was based on a 1 day meeting of 24 clinical experts who attended and took part in the discussions. The agreed technique has not been subject to input from a wider community of experts. Conclusions: Current best practice for radiofrequency denervation has been agreed for use in a UK trial. Group members intend immediate implementation in their respective trusts. We propose using this in a planned Randomised Controlled Trial (RCT) of radiofrequency denervation for selected people with low back pain. abstract_id: PUBMED:20556216 Comparison of Effectiveness of CT vs C-arm Guided Percutaneous Radiofrequency Lumbar Facet Rhizotomy. Background: Facet joints have been implicated as a source of chronic low back pain. Radiofrequency denervation has demonstrated the most solid evidence. To increase safety and efficacy of treatment, computed tomography (CT)-guided injection has been used in several diseases. The purpose of this study was to evaluate the efficacy of CT-guided radiofrequency rhizotomy in the treatment of facet joint pain. Methods: A total of 40 patients were randomized to undergo radiofrequency facet joint denervation under CT guidance or C-arm guidance. All patients were examined using the visual analogue scale (VAS) score before treatment, 1 month, and 3 months after treatment. Results: The VAS in both groups showed significant improvement over the 1-month interval. No significant difference in the VAS score among the groups was observed. Conclusions: In this study there was no significant difference between CT guidance lumbar rhizotomy and C-arm guidance lumbar rhizotomy. Therefore CT-guided radiofrequency denervation of the lumbar facet joint was a minimally invasive technique that appears effective. abstract_id: PUBMED:37143095 A comparison of pulsed radiofrequency and radiofrequency denervation for lumbar facet joint pain. Background: Lumbar facet joint pain is a common disorder. The main symptom is chronic lumbar pain, which can reduce quality of life. Radiofrequency has often been used to treat lumbar facet joint pain. However, the effectiveness of this technique has been controversial. This study was conducted to compare the effectiveness of pulsed radiofrequency (PRF) and radiofrequency denervation (RD) for lumbar facet joint pain. Methods: One hundred and forty-two patients with lumbar facet joint pain were allocated to two treatment groups: PRF group (N = 72) and RD group (N = 70).
Patients enrolled in the study were assessed using a visual analogue scale (VAS), Roland-Morris questionnaire (RMQ), Oswestry disability index (ODI) and Short-Form 36 (SF-36) questionnaire before therapy, 3 months and 12 months later. Results: There were no significant differences in VAS, RMQ score, ODI score and SF-36 score at 3 months (p > 0.05). Significant differences in pain control were observed in both groups at 12 months (3.09 ± 1.72 vs. 2.37 ± 1.22, p = 0.006). There was a significant difference in RMQ score (11.58 ± 3.58 vs. 8.17 ± 2.34, p < 0.001) and ODI score (43.65 ± 11.01 vs. 35.42 ± 11.32, p < 0.001) at 12 months. The total SF-36 score was higher in the RD group than in the PRF group at 12 months (58.45 ± 6.97 vs. 69.36 ± 6.43, p < 0.001). In terms of complications, skin numbness occurred in three patients. Mild pain, such as burning and pricking at the puncture site, occurred in two patients. One patient experienced a decrease in back muscle strength and back muscle fatigue. These complications disappeared in 3 weeks without any treatment. There were no serious adverse events in the PRF group. Conclusion: Radiofrequency is an effective and safe treatment option for patients with lumbar facet joint pain. RD could provide good and lasting pain relief, with significant improvement in lumbar function and quality of life at long-term follow-up. abstract_id: PUBMED:36189159 Radiofrequency denervation and cryoablation of the lumbar zygapophysial joints in the treatment of positive lumbar facet joint syndrome - a report of three cases. Radiofrequency denervation of the zygapophysial (facet) joints is a frequently performed procedure for chronic low back pain. However, cryoablation represents a novel therapeutic approach for this condition. We observed and analyzed 3 cases with confirmed positive lumbar facet joint syndrome. Our results show a significant improvement in the clinical state of the patients in the first and third months after the procedure. The 6-month follow-up examination demonstrated a recurrence of pain and a gradual deterioration in the quality of life with a lasting partial pain-relief effect. Thermal radiofrequency denervation and cryoablation of the lumbar zygapophysial joints represent an effective, albeit temporary treatment option for lumbar facet joint syndrome patients, resulting from the pathophysiology of sensory nerve regeneration after destructive procedures. This type of treatment can be used repeatedly in the case of a positive response. abstract_id: PUBMED:37103732 Delphi-Based Expert Consensus Statements for the Management of Percutaneous Radiofrequency Neurotomy in the Treatment of Lumbar Facet Joint Syndrome. Introduction: A modified Delphi strategy was implemented for obtaining recommendations that could be useful in the management of percutaneous radiofrequency treatment of lumbar facet joint syndrome, as the literature on the topic was of poor quality. Methods: An Italian research team conducted a comprehensive literature search, defined the investigation topics (diagnosis, treatment, and outcome evaluation), and developed an explorative semi-structured questionnaire. They also selected the members of the panel. After an online meeting with the participants, the board developed a structured questionnaire of 15 closed statements (round 1). A five-point Likert scale was used and the cut-off for consensus was established at a minimum of 70% of the number of respondents (level of agreement ≥ 4, agree or strongly agree).
The statements without consensus were rephrased (round 2). Results: Forty-one clinicians were included in the panel and responded in both rounds. After the first round, consensus (≥ 70%) was obtained in 9 out of 15 statements. In the second round, only one out of six statements reached the threshold. The lack of consensus was observed for statements concerning the use of imaging for a diagnosis [54%, median 4, interquartile range (IQR) 3-5], number of diagnostic blocks (37%, median 4, IQR 2-4), bilateral denervation (59%, median 4, IQR 2-4), technique and number of lesions (66%, median 4, IQR 3-5), and strategy after denervation failure (68%, median 4, IQR 3-4). Conclusion: Results of the Delphi investigations suggest that there is a need to define standardized protocols to address this clinical problem. This step is essential for designing high-quality studies and filling current gaps in scientific evidence. abstract_id: PUBMED:27618781 Percutaneous radiofrequency facet capsule denervation as an alternative target in lumbar facet syndrome. Objectives: Percutaneous radiofrequency denervation of the medial dorsal branch is often used in chronic low back pain of intervertebral facet etiology, which is sometimes difficult to perform and recurrence of pain often ensues. We theorized that shifting the target of RF coagulation to the facet joint capsule would provide an easier target and a longer-lived pain relieving response. Patients And Methods: A prospective randomized controlled trial in which 120 patients diagnosed with CLBP of a confirmed facet origin were randomly divided into three equal groups: the first was submitted to percutaneous radiofrequency coagulation of the facet joint capsule, the second underwent percutaneous denervation of the medial dorsal branch, and the third did not receive radiofrequency lesioning. All three groups received local injection of a mixture of local anesthetic and steroid. Cases were followed for up to 3 years. Results: 87 (72.5%) patients were females. By 3 months' post procedure, improvement in VAS was significantly better than pretreatment levels in all groups (p<0.05). The control group lost improvement by 1-year follow-up (p=0.017). At 2 years' follow-up, the joint capsule denervation group maintained significant improvement (p=0.033) whereas the medial branch denervation group lost its significant effect (p=0.479). By the end of the follow-up period, only the joint capsule denervation group retained significant improvement (p=0.026). Conclusion: In CLBP of facet origin, shifting the target of percutaneous radiofrequency to the facet joint capsule provides an easier technique with an extended period of pain relief compared to the medial dorsal branch of the facet joint. abstract_id: PUBMED:17277644 Clinical predictors of success and failure for lumbar facet radiofrequency denervation. Objective: To determine the clinical factors associated with the success and failure of radiofrequency denervation of the lumbar facet joints. Methods: Clinical data were garnered from 3 academic medical centers on 192 patients with low back pain who underwent radiofrequency denervation after a positive response to diagnostic blocks. Success was defined as ≥50% pain relief lasting at least 6 months. Factors evaluated for their association with outcome included duration of pain, opioid use, symptom location, paraspinal tenderness, pain exacerbated by extension/rotation (ie, facet loading), MRI abnormalities, diabetes, smoking, scoliosis, obesity, prior surgery and levels treated.
Results: The only factor associated with a successful outcome was paraspinal tenderness. Variables that correlated with treatment failure were 'facet loading,' long duration of pain, and previous back surgery. Conclusions: It is counterproductive to use 'facet loading' as the sole basis for choosing patients for facet interventions. In patients at high risk for treatment failure, taking additional steps to reduce the rate of false-positive screening blocks may improve outcomes. Answer: Changes in vital signs have been explored as potential predictors for the response to lumbar facet blocks and radiofrequency denervation. A prospective, correlational study found that although a decrease in diastolic blood pressure (DBP) of more than 7.5 mm Hg had a high specificity (97.3%) and positive predictive value (85.7%) for predicting positive radiofrequency ablation outcomes, the low negative predictive value (56.3%) limits its utility as a standalone screening tool. An algorithm incorporating age, baseline Numeric Rating Scale (NRS) pain score, and a significant decrease in DBP could predict 76.7% of radiofrequency denervation outcomes (PUBMED:24918331). However, the study concluded that changes in vital signs alone are not sufficient to predict treatment outcomes reliably. In general, the literature suggests that while there may be some associations between changes in vital signs and treatment outcomes, these changes are not robust enough to serve as reliable predictors. Clinical decision-making for lumbar facet blocks and radiofrequency denervation typically involves a combination of patient history, physical examination, diagnostic blocks, and other factors rather than relying solely on changes in vital signs.
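As a purely illustrative aside, the relationship between the specificity, positive predictive value, and negative predictive value figures quoted above (PUBMED:24918331) follows from Bayes' rule once a sensitivity and an outcome prevalence are assumed. The short Python sketch below uses placeholder assumptions for sensitivity and prevalence (only the specificity is taken from the abstract) to show how a highly specific but insensitive sign can yield a high PPV alongside a modest NPV; it is not the study's analysis code.

```python
def predictive_values(sensitivity, specificity, prevalence):
    # Expected proportions in each cell of the 2x2 table
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

# Placeholder assumptions: specificity 97.3% as reported; the sensitivity of the
# DBP criterion and the prevalence of a positive outcome are invented values.
ppv, npv = predictive_values(sensitivity=0.15, specificity=0.973, prevalence=0.50)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # high PPV despite very low sensitivity
```

Under these assumed inputs the PPV comes out high while the NPV stays near 0.5, which mirrors the reported pattern and explains why the authors rejected the DBP change as a solitary screening tool.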
Instruction: Does insurance status affect continuity of care for ambulatory patients with operative fractures? Abstracts: abstract_id: PUBMED:21471422 Does insurance status affect continuity of care for ambulatory patients with operative fractures? Background: We compared insurance status among three groups of ambulatory patients with an operatively treated fracture of the distal part of the radius or of the ankle, in order to determine if insurance status affected continuity of care. The patients were categorized as having received initial care at our institution, having received initial care elsewhere with an identifiable reason for transfer to a tertiary care center, or having received initial care elsewhere with no identifiable reason for transfer. Methods: We conducted a retrospective review of 697 patients with an operatively treated distal radial fracture or ankle fracture who had received their definitive treatment at a level-I trauma center. Demographic data, the mechanism of injury, the insurance type, and the location of the initial care were recorded. Results: The proportion of uninsured or underinsured patients in the group that had had their initial treatment at our trauma center was similar to that in the group that had had a specific reason to seek definitive care with us (64% and 63%, p < 0.832). However, the proportion of uninsured or underinsured patients was significantly larger in the group that had not received initial care from us and had no specific reason to receive definitive care from us (82% vs. 63%, p < 0.001). With other variables held constant, the odds of being underinsured or uninsured were 2.53 times greater for the patients initially treated elsewhere who had no specific reason to receive definitive treatment from us. Conclusions: These results suggest that nonmedical reasons play a role in determining where ambulatory patients with fractures requiring operative treatment are able to receive definitive care. Patients without specific medical or nonmedical reasons to receive definitive care at our center were significantly more likely to be uninsured or underinsured. abstract_id: PUBMED:33546990 Does Insurance Status Affect Access to Care Among Ankle Fracture Patients? An Institutional Retrospective Study. Delayed access to care for patients with ankle fractures may increase risk of complications, particularly if surgical management is warranted. Medicaid is a state and federal insurance program in place for those with low income, which has previously been associated with delayed access to care among patients with ACL tears and total hip arthroplasties. The purpose of this study is to assess whether patient insurance status affects access to care for ankle fracture patients, using data from a single institution. A retrospective cohort study (N = 311 patients) was performed on individuals that underwent open reduction and internal fixation for an ankle fracture between years 01/2008 and 12/2018. Patients with polytraumatic injuries, open injuries, Medicare, no insurance, indigent/charity insurance, self-pay, or whose insurance information was not available were excluded. Time from date of injury to date of surgery, injury to first visit, and first visit to surgery was compared between patients with private insurance and Medicaid. Average time from injury to first appointment was 1.2 days and 6.2 days for privately insured and Medicaid patients, respectively (p < .001).
Average time from injury to surgery was 8.3 days and 16.1 days for privately insured and Medicaid patients, respectively (p < .001). Patients enrolled in Medicaid have significantly delayed access to care compared to those with private insurance. For ankle fracture patients this is a critical healing time, and delayed care may result in increased costs, increased utilization of healthcare resources, higher complication rates, and poorer patient outcomes. abstract_id: PUBMED:26248479 The impact of public versus private insurance on trauma patients. Background: The socioeconomic status has been associated with disparities in the incidence and mortality of traumatic injuries. However, there is a lack of studies on the level of health insurance with regard to various epidemiologic data of traumatic injuries, which this study opted to clarify. Materials And Methods: All consecutive 6595 patients admitted to a level one trauma center in 2012 and 2013 were included in this retrospective cohort study. Patients were grouped according to their health insurance status (public versus private extended health care insurance) and compared with regard to several epidemiologic variables, that is, the type of injuries, inhospital outcome, and surgical procedures. Results: Public insurance coverage was significantly more common than private insurance (75% versus 25%). Public insurance was associated with younger age, male sex, transfers to another hospital or mental institution, head concussions, head fractures, and increased mortality. Contrarily, patients with private insurance were more often associated with longer hospital stay, discharge to a rehabilitation clinic, fractures of the proximal humerus, and shoulder dislocations. However, there were no significant differences for the remaining majority of studied variables. Conclusions: In a trauma setting, the level of insurance does not seem to play a crucial role in most types of injuries and surgical procedures in a country with a high level of obligatory health care coverage. Nonetheless, it appears that publicly insured patients are more commonly younger, males, transferred to another hospital more often, more prone to head trauma, and subject to increased mortality, whereas privately insured patients show longer hospital stays, increased transfers to rehabilitation clinics, and more fractures of the proximal humerus. abstract_id: PUBMED:28253856 Evaluation of Ambulatory Geriatric Rehabilitation (EAGER): study protocol of a matched cohort study based on claims data. Background: Ambulatory geriatric rehabilitation (AGR) is a community based outpatient intervention which aims to improve physical function, maintain independent living of geriatric patients, avoiding hospitalisation and institutionalisation. It should therefore reduce health care costs. The objective of our study is to evaluate the effectiveness of AGR for frail elderly patients insured by the statutory health insurance AOK Nordost compared to care as usual. Outcome variables are progression to higher nursing care levels, admission to nursing home, incident fractures, hospital admission, ambulatory care sensitive hospital admissions, days spent in hospital, and health care costs. Methods: This matched cohort study aims to estimate average treatment effects of AGR. For this purpose we will compare patients receiving AGR with matched patients receiving care as usual. 
Patients in the intervention group were treated between the years 2009 and 2013 from three centres in Mecklenburg-Vorpommern, Germany. Matching will be conducted using propensity score techniques. Claims data will be provided by the statutory health insurance AOK Nordost. The primary outcomes are the progression of nursing care levels, incident fractures, and admission to nursing home. Secondary outcomes are total and ambulatory care sensitive hospital admissions, and health care costs from the statutory health insurance perspective. Data will be analysed using appropriate regression models. Discussion: This study aims to quantify the effectiveness of AGR. Results will be important for providers of AGR, policy makers and stakeholders to make informed decisions on whether to continue, modify or extend AGR. Trial Registration: German Clinical Trials Register (DRKS) S00008926 , registered 29.07.2015. abstract_id: PUBMED:31764322 Is Insurance Status Associated with the Likelihood of Operative Treatment of Clavicle Fractures? Background: Most closed clavicle fractures are treated nonoperatively. Research during the past decade has reported differences in the treatment of clavicle fractures based on insurance status in the US and may highlight unmet needs in a vulnerable population, particularly because new data show that surgery may lead to improved outcomes in select populations. Large-scale, national data are needed to better inform this debate. Questions/purposes: (1) Does the likelihood of operative fixation of closed clavicle fractures vary among patients with different types of insurance? (2) What demographic and socioeconomic factors are associated with the likelihood of clavicle fracture surgery? (3) Has the proportion of operative fixation of clavicle fractures changed over time? Methods: A retrospective analysis of the Nationwide Inpatient Sample 2001-2013 database was performed. This database is the largest publicly available all-payer inpatient database in the US that provides pertinent socioeconomic data on a nationwide scale. Data were queried for patients with closed clavicle fractures using International Classification of Diseases, Ninth Revision (ICD-9) diagnostic codes, and surgery was determined using ICD-9 procedural codes. A total of 252,109 patients were included in the final analysis after 158,619 patients were excluded because of missing demographic or insurance data, ambiguous fracture location, or age younger than 19 years. Of the 252,109 included patients, 21,638 (9%) underwent surgical fixation of clavicle fractures. A chi-square analysis was performed to determine variables to be included in a multivariable analysis. A binary logistic regression analysis was used to examine demographic and other important variables, with a significance level of p < 0.01. Poisson's regression and a t-test were used to analyze trends over time. Results were recorded as odds ratios (OR) and incidence rate ratios. Results: After controlling for demographic and potentially relevant variables, such as the median income and fracture location, we found that patients with Medicare, Medicaid, and no insurance had a lower likelihood of undergoing operative fixation of clavicle fractures than did those with private insurance. Patients without insurance were the least likely to undergo surgery (OR, 0.63; 95% CI, 0.60-0.66; p < 0.001), followed by those with Medicare (OR, 0.73; 95% CI, 0.70-0.78; p < 0.001) and those with Medicaid (OR, 0.74; 95% CI, 0.69-0.78; p < 0.001). 
Women, black, and Hispanic patients were also less likely to undergo surgery than men and white patients (OR, 0.95; p = 0.003; OR = 0.67; p < 0.001; and OR = 0.82; p < 0.001, respectively). There was an increase in the overall proportion of patients undergoing surgery, from 5% in 2001 to 11% in 2013 (incidence rate ratio, 2.99; p < 0.001). Conclusions: We believe that the greater use of surgery among adult patients with clavicle fractures who have private insurance than among those with nonprivate or no insurance-as well as among men and white patients compared with women and patients of color-may be a manifestation of important health care disparities in the inpatient population. This may be owing to variable access to care or a difference in the likelihood that a surgeon will offer surgery based on a patient's insurance status. Because operative fixation of closed clavicle fractures increases in the adult population, future research should elucidate conscious and subconscious motivations of patients and surgeons to better inform the discussion of health care disparities in orthopaedics. Level Of Evidence: Level III, therapeutic study. abstract_id: PUBMED:12146513 Continuity of care and health decline associated with a hip fracture. Objectives: Identify determinants of health decline associated with hip fracture with the goal of designing interventions. Method: Prefracture and postfracture information was obtained from participants aged 65-plus years fracturing a hip between July 1996 and August 1997. Health utilization data were linked to the cohort data and to an age-gender matched cohort of nonfracture seniors. Results: Fracture patients were likely to have been hospitalized and have low continuity of care. Patients making frequent physician visits were at increased risk of both prefracture hospitalization and postfracture health decline. Prefracture hospitalization was less likely for patients with high physical function; patients of high mental status were less likely to experience postfracture health decline. Discussion: Health appears to be in decline prefracture. Patients may benefit from continuous physician care to prevent further health deterioration. Some hip fractures can be prevented by identifying high-risk seniors at an early stage and intervening to prevent falls. abstract_id: PUBMED:34652247 Does Insurance Status Affect Treatment of Children With Tibial Spine Fractures? Background: Previous studies have reported disparities in orthopaedic care resulting from demographic factors, including insurance status. However, the effect of insurance on pediatric tibial spine fractures (TSFs), an uncommon but significant injury, is unknown. Purpose: To assess the effect of insurance status on the evaluation and treatment of TSFs in children and adolescents. Study Design: Cross-sectional study; Level of evidence, 3. Methods: We performed a retrospective cohort study of TSFs treated at 10 institutions between 2000 and 2019. Demographic data were collected, as was information regarding pre-, intra-, and postoperative treatment, with attention to delays in management and differences in care. Surgical and nonsurgical fractures were included, but a separate analysis of surgical patients was performed. Univariate analysis was followed by purposeful entry multivariate regression to adjust for confounding factors. Results: Data were collected on 434 patients (mean ± SD age, 11.7 ± 3.0 years) of which 61.1% had private (commercial) insurance.
Magnetic resonance imaging (MRI) was obtained at similar rates for children with public and private insurance (41.4% vs 41.9%, respectively; P≥ .999). However, multivariate analysis revealed that those with MRI performed ≥21 days after injury were 5.3 times more likely to have public insurance (95% CI, 1.3-21.7; P = .02). Of the 434 patients included, 365 required surgery. Similar to the overall cohort, those in the surgical subgroup with MRI ≥21 days from injury were 4.8 times more likely to have public insurance (95% CI, 1.2-19.6; P = .03). Children who underwent surgery ≥21 days after injury were 2.5 times more likely to have public insurance (95% CI, 1.1-6.1; P = .04). However, there were no differences in the nature of the surgery or findings at surgery. Those who were publicly insured were 4.1 times more likely to be immobilized in a cast rather than a brace postoperatively (95% CI, 2.3-7.4; P < .001). Conclusion: Children with public insurance and a TSF were more likely to experience delays with MRI and surgical treatment than those with private insurance. However, there were no differences in the nature of the surgery or findings at surgery. Additionally, patients with public insurance were more likely to undergo postoperative casting rather than bracing. abstract_id: PUBMED:17717473 Pediatric orthopaedic patients presenting to a university emergency department after visiting another emergency department: demographics and health insurance status. Background: The disparity in access to and delivery of health care among children has become increasingly apparent. The purpose of our study was to analyze demographic information, including health insurance status, of children with extremity injuries seen at a University Hospital emergency department (UH ED) after visiting another ED for the same complaint. Methods: A database of pediatric orthopaedic consults requested for extremity injuries at UH ED was reviewed. Information regarding patients' age, ethnicity, orthopaedic diagnosis, type of health insurance, time from injury to presentation at the first ED and at UH ED, mode of transportation to UH ED, and orthopaedic treatment rendered was analyzed. All patients with Medicaid, health maintenance organization-Medicaid, no insurance, or charity care were classified as having public insurance, whereas those with commercial insurance, including health maintenance organization and preferred provider organization plans, were placed in the private insurance category. Results: Over a 30-month period, 125 children, of whom 18% had private health insurance, were noted to have recently visited another ED seeking treatment for an extremity injury. A closed fracture was diagnosed in 117 patients, 94% of whom were discharged from UH ED after cast application. There was no difference with regard to patients' age, sex, ethnicity, diagnosis, and time to presentation at the initial ED between private and public insurance groups. However, 52% of children with private insurance received orthopaedic care within 24 hours compared with 22% with public insurance (P = 0.013). Children with public insurance were more likely to have visited another health facility besides the initial ED before presenting to UH ED (P = 0.004). Moreover, 74% of privately insured patients presenting to UH ED arrived via ambulance compared with 34% with public insurance (P < 0.001). 
Conclusions: Most children with an extremity injury who received orthopaedic consultation at a tertiary-level ED after visiting another ED had an isolated fracture requiring cast treatment only. There was a trend for delay in receiving definitive orthopaedic care for patients with public health insurance compared with those with private insurance. abstract_id: PUBMED:35715560 Impact of continuity of care on risk for major osteoporotic fracture in patients with new onset rheumatoid arthritis. There is a clear relationship between rheumatoid arthritis (RA) and major osteoporotic fracture (MOF), although there is limited evidence on the effect of continuity of care (COC) on MOF in these patients. We investigated the association between COC and risk of MOF, including fractures of the lumbar spine and pelvis, forearm, and hip, among newly diagnosed RA patients aged ≥ 60 years. A total of 8715 incident RA patients from 2004 to 2010 were included from the Korean National Health Insurance Service-Senior cohort database. Participants were categorized into a good and bad COC group according to the COC index. The cumulative incidence of MOF was higher in RA patients with bad than in those with good COC (p < 0.001). The incidence rates of MOF were 4439 and 3275 cases per 100,000 person-years in patients with bad and good COC, respectively. RA patients with bad COC had an increased incidence of overall MOF (adjusted hazard ratio, 1.32; 95% confidence interval, 1.14-1.53), with the highest increase in risk being that of forearm fracture. An increased MOF risk in patients with bad COC was predominantly observed in females. This study suggested that interventions that can improve COC in patients with RA should be considered. abstract_id: PUBMED:30296321 The Effect of Insurance Type on Patient Access to Ankle Fracture Care Under the Affordable Care Act. The purpose of this study is to assess the effect of insurance type (Medicaid, Medicare, private insurance) on the ability for patients with operative ankle fractures to access orthopedic traumatologists. The research team called 245 board-certified orthopedic surgeons specializing in orthopedic trauma within 8 representative states. The caller requested an appointment for their fictitious mother in order to be evaluated for an ankle fracture which was previously evaluated by her primary care physician and believed to require surgery. Each office was called 3 times to assess the response for each insurance type. For each call, information was documented regarding whether the patient was able to receive an appointment and the barriers the patient confronted to receive an appointment. Overall, 35.7% of offices scheduled an appointment for a patient with Medicaid, in comparison to 81.4% and 88.6% for Medicare and BlueCross, respectively (P < .0001). Medicaid patients confronted more barriers for receiving appointments. There was no statistically significant difference in access for Medicaid patients in states that had expanded Medicaid eligibility vs states that had not expanded Medicaid. Medicaid reimbursement for open reduction and internal fixation of an ankle fracture did not significantly correlate with appointment success rates or wait times. Despite the passage of the Affordable Care Act, patients with Medicaid have reduced access to orthopedic surgeons and more complex barriers to receiving appointments. A more robust strategy for increasing care-access for patients with Medicaid would be more equitable.
Answer: Yes, insurance status does affect continuity of care for ambulatory patients with operative fractures. A study comparing insurance status among ambulatory patients with operatively treated fractures of the distal part of the radius or of the ankle found that patients without specific medical or nonmedical reasons to receive definitive care at a trauma center were significantly more likely to be uninsured or underinsured (PUBMED:21471422). Another study showed that patients enrolled in Medicaid had significantly delayed access to care compared to those with private insurance, which could result in increased costs, higher complication rates, and poorer patient outcomes (PUBMED:33546990). Additionally, a study on the impact of public versus private insurance on trauma patients indicated that publicly insured patients were more commonly younger, males, transferred to another hospital more often, more prone to head trauma, and subject to increased mortality (PUBMED:26248479). Furthermore, research on the likelihood of operative fixation of closed clavicle fractures found that patients with Medicare, Medicaid, and no insurance had a lower likelihood of undergoing operative fixation than those with private insurance (PUBMED:31764322). A study on pediatric orthopaedic patients presenting to a university emergency department after visiting another emergency department revealed that there was a trend for delay in receiving definitive orthopaedic care for patients with public health insurance compared with those with private insurance (PUBMED:17717473). Lastly, a study assessing the effect of insurance status on the evaluation and treatment of children with tibial spine fractures found that children with public insurance were more likely to experience delays with MRI and surgical treatment than those with private insurance (PUBMED:34652247). These findings collectively suggest that insurance status plays a significant role in determining the continuity and timeliness of care for patients with operative fractures, with uninsured or publicly insured patients facing more barriers and delays in receiving definitive care.
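For illustration only, the unadjusted version of the odds-ratio comparisons cited above (for example, the odds of being uninsured or underinsured by transfer group in PUBMED:21471422) can be computed from a 2x2 table as sketched below. The counts and the odds_ratio helper are hypothetical, and the published studies used adjusted regression models rather than this simple calculation.

```python
import math

def odds_ratio(a, b, c, d):
    """2x2 table: a,b = group 1 with/without the outcome; c,d = group 2 with/without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's method for the CI
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: uninsured/underinsured vs insured patients in two referral groups
or_, ci = odds_ratio(a=82, b=18, c=63, d=37)
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

The adjusted odds ratios quoted in the abstracts additionally control for demographic and clinical covariates, so a crude 2x2 calculation like this one should only be read as an intuition for what the reported effect sizes mean.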
Instruction: Does interhospital transfer improve outcome of acute myocardial infarction? Abstracts: abstract_id: PUBMED:37207003 Interhospital Transfer versus Direct Admission in Patients with Acute ST-Segment Elevation Myocardial Infarction. There is concern that patients with ST-segment elevation myocardial infarction (STEMI) admitted to a percutaneous coronary intervention (PCI) center by interhospital transfer may have longer reperfusion times than those admitted directly. We evaluated the reperfusion delays in patients with STEMI who were admitted to a primary PCI center through interhospital transfer or direct admission. We retrospectively analyzed 6,494 consecutive STEMI patients admitted between 2011 and 2019. Compared with direct admission (n = 4,121; 63%), interhospital transferred patients (n = 2,373) were younger (55 ± 10 vs. 56 ± 10 years, p < 0.001), had similar gender (85.6 vs. 86% male, p = 0.67), a greater proportion of off-hour admission (65.2 vs. 48.3%, p < 0.001), less diabetes mellitus (28 vs. 30.8%, p = 0.019), and received more primary PCI (70.5 vs. 48.7%, p < 0.001). Among patients who received primary PCI (n = 3,677) or fibrinolytic therapy (n = 238), interhospital transferred patients had longer symptom-to-PCI center admission time (median, 360 vs. 300 minutes, p < 0.001), shorter door-to-device (DTD) time for primary PCI (median, 74 vs. 87 minutes, p < 0.001), and longer total ischemic time (median, 465 vs. 414 minutes, p < 0.001). Logistic regression in interhospital transferred patients showed that delay in door-in-to-door-out (DI-DO) time at the first hospital was strongly associated with prolonged total ischemic time (adjusted odds ratio = 3.92; 95% confidence interval: 3.06-5.04, p < 0.001). This study suggests that although interhospital transferred patients received more primary PCI with a shorter DTD time, interhospital transfer results in a longer total ischemic time, which is associated with the delay in DI-DO time at the first hospital and should be improved. abstract_id: PUBMED:18782452 Does interhospital transfer improve outcome of acute myocardial infarction? A propensity score analysis from the Cardiovascular Cooperative Project. Background: Many patients suffering acute myocardial infarction (AMI) are transferred from one hospital to another during their hospitalization. There is little information about the outcomes related to interhospital transfer. The purpose of this study was to compare processes and outcomes of AMI care among patients undergoing interhospital transfer with special attention to the impact on mortality in rural hospitals. Methods: National sample of Medicare patients in the Cooperative Cardiovascular Study (n = 184,295). Retrospective structured medical record review of AMI hospitalizations. Descriptive study using a retrospective propensity score analysis of clinical and administrative data for 184,295 Medicare patients admitted with clinically confirmed AMI to 4,765 hospitals between February 1994 and July 1995. Main outcome measures included 30-day mortality, administration of aspirin, beta-blockers, ACE-inhibitors, and thrombolytic therapy. Results: Overall, 51,530 (28%) patients underwent interhospital transfer. Transferred patients were significantly younger, less critically ill, and had lower comorbidity than non-transferred patients. After propensity-matching, patients who underwent interhospital transfer had better quality of care and lower mortality than non-transferred patients.
Patients cared for in a rural hospital had similar mortality as patients cared for in an urban hospital. Conclusion: Transferred patients were vastly different than non-transferred patients. However, even after a rigorous propensity-score analysis, transferred patients had lower mortality than non-transferred patients. Mortality was similar in rural and urban hospitals. Identifying patients who derive the greatest benefit from transfer may help physicians faced with the complex decision of whether to transfer a patient suffering an acute MI. abstract_id: PUBMED:31784297 Impact on mortality of direct admission versus interhospital transfer in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention. Introduction: In a primary percutaneous coronary intervention (PCI) program, interhospital transfer of patients with ST-elevation myocardial infarction (STEMI) can increase ischemic time, compared to patients who are admitted directly to a catheterization laboratory. Objectives: To assess the impact of interhospital transfer in patients with STEMI undergoing primary PCI, in terms of time to reperfusion and one-year mortality. Methods: This was an observational, retrospective, longitudinal study of patients with STEMI admitted to Hospital de Braga between June 2011 and May 2016, who were treated successfully within 12 hours of symptom onset. A total of 1222 patients were included and divided into two groups according to admission to Hospital de Braga: direct or interhospital transfer. Results: In this study, 37.0% (n=452) of the population were admitted directly to Hospital de Braga and 63.0% (n=770) were transferred from other hospitals. Although timings (in min) until reperfusion were longer in interhospital transfer patients (symptom onset-first medical contact (median 76.5, IQR 40.3-150 vs. 91.0, IQR 50-180, p=0.002), first medical contact-reperfusion (median 87.5, IQR 69.0-114 vs. 145, IQR 115-199, p<0.001) and symptom onset-reperfusion (median 177, IQR 125-265 vs. 265, IQR 188-400, p<0.001)), one-year mortality did not differ significantly between the groups (53 [11.7%] vs. 71 [9.2%], p=0.193). In multivariate analysis, age, symptom onset-reperfusion time and especially Killip class IV at admission (HR 11.2, 95% CI 6.35-19.8, p<0.001) were the main independent predictors of one-year mortality. Conclusion: Interhospital transfer of patients with STEMI increased the time before PCI. No differences were detected between groups in one-year mortality. This may be related to the fact that the direct admission group had twice as many patients in Killip class IV as the interhospital transfer group. abstract_id: PUBMED:32700216 Identifying Racial/Ethnic Disparities in Interhospital Transfer: an Observational Study. Background: Interhospital transfer (IHT) is often performed to provide patients with specialized care. Racial/ethnic disparities in IHT have been suggested but are not well-characterized. Objective: To evaluate the association between race/ethnicity and IHT. Design: Cross-sectional analysis of 2016 National Inpatient Sample data. Patients: Patients aged ≥ 18 years old with common medical diagnoses at transfer, including acute myocardial infarction, congestive heart failure, arrhythmia, stroke, sepsis, pneumonia, and gastrointestinal bleed. 
Main Measures: We performed a series of logistic regression models to estimate adjusted odds of transfer by race/ethnicity controlling for patient demographics, clinical variables, and hospital characteristics and to identify potential mediators. In secondary analyses, we estimated adjusted odds of transfer among patients at community hospitals (those more likely to transfer patients) and performed subgroup analyses by region and primary medical diagnosis. Key Results: Of 5,774,175 weighted hospital admissions, 199,015 (4.5%) underwent IHT, including 4.7% of White patients, compared with 3.9% of Black patients and 3.8% of Hispanic patients. Black (OR 0.83, 95% CI 0.78-0.89) and Hispanic (OR 0.81, 95% CI 0.75-0.87) patients had lower crude odds of transfer compared with White patients, but this became non-significant after adjusting for hospital-level characteristics. In secondary analyses among patients hospitalized at community hospitals, Hispanic patients had lower adjusted odds of transfer (aOR 0.89, 95% CI 0.79-0.98). Disparities in IHT by race/ethnicity varied by region and medical diagnosis. Conclusions: Black and Hispanic patients had lower odds of IHT, largely explained by a higher likelihood of being hospitalized at urban teaching hospitals. Racial/ethnic disparities in transfer were demonstrated at community hospitals, in certain geographic regions and among patients with specific diseases. abstract_id: PUBMED:26087707 Achieving regionalization through rural interhospital transfer. Regionalization of emergency medical care aims to provide consistent and efficient high-quality care leading to optimal clinical outcomes by matching patient needs with appropriate resources at a network of hospitals. Regionalized care has been shown to improve outcomes in trauma, myocardial infarction, stroke, cardiac arrest, and acute respiratory distress syndrome. In rural areas, effective regionalization often requires interhospital transfer. The decision to transfer is complex and includes such factors as capabilities of the presenting hospital; capacity at the receiving hospital; and financial, geographic, and patient-preference considerations. Although transfer to a comprehensive center has proven benefits for some conditions, the transfer process is not without risk. These risks include clinical deterioration, limited resource availability during transport, vehicular crashes, time delays for time-sensitive care, poor communication between providers, and neglect of patient preferences. This article reviews the transfer decision, financial implications, risks, and considerations for patients undergoing rural interhospital transfer. We identify several strategies that should be considered for development of the regionalized emergency health care system of the future and identify areas where further research is necessary. abstract_id: PUBMED:28216215 Direct Admission Versus Interhospital Transfer for Primary Percutaneous Coronary Intervention in ST-Segment Elevation Myocardial Infarction. Objectives: This study sought to assess the influence of direct admission versus transfer via regional hospital to a percutaneous coronary intervention (PCI) center on time delays and 12-month mortality in ST-segment elevation myocardial infarction (STEMI) patients from a real-life perspective. Background: Reduction of delays to reperfusion is crucial in a STEMI system of care. 
However, it is still debated whether direct admission to a PCI center is superior to interhospital transfer in terms of long-term prognosis. The authors hypothesized that compared with interhospital transfer, direct admission shortens the total ischemic time, limits the loss of left ventricular systolic function, and finally, reduces 12-month mortality. Methods: Prospective nationwide registry data of STEMI patients admitted to PCI centers within 12 h of symptom onset and treated with PCI between 2006 and 2013 were analyzed. Patients admitted directly were compared with patients transferred to a PCI center via a regional non-PCI-capable facility in terms of time delays, left ventricular ejection fraction (LVEF), and 12-month mortality. Data were adjusted using propensity-matched and multivariate Cox analyses. Results: Of the 70,093 patients eligible for analysis, 39,144 (56%) were admitted directly to a PCI center. Direct admission was associated with a shorter median symptoms-to-admission time (by 44 min; p < 0.001) and total ischemic time (228 vs. 270 min; p < 0.001), higher LVEF (47.5% vs. 46.3%; p < 0.001), and lower propensity-matched 12-month mortality (9.6% vs. 10.4%; p < 0.001). In propensity-matched multivariate Cox analysis, direct admission (hazard ratio [HR]: 1.06, 95% confidence interval [CI]: 1.01 to 1.11) and shorter symptoms-to-admission time (HR: 1.03; 95% CI: 1.01 to 1.06) were significant predictors of lower 12-month mortality. Conclusions: In a large, community-based cohort of patients with STEMI treated by PCI, direct admission to a primary PCI center was associated with lower 12-month mortality and should be preferred to transfer via a regional non-PCI-capable facility. abstract_id: PUBMED:24222827 Interhospital transfer due to failed prehospital diagnosis for primary percutaneous coronary intervention: an observational study on incidence, predictors, and clinical impact. Background: For patients with ST-elevation myocardial infarction (STEMI), guidelines recommend prehospital triage and direct referral to a percutaneous coronary intervention (PCI)-capable centre in order to minimize ischemic time. However, few have studied failed prehospital diagnosis. We assessed the incidence, predictors, and clinical impact of interhospital transfer for primary PCI after initial referral to a non-PCI-capable centre due to a failed prehospital STEMI diagnosis. Methods: We studied 846 consecutive STEMI patients undergoing primary PCI between January 2008 and January 2010. Results: We found that 609 patients (72%) were directly admitted through prehospital triage and 127 patients (15%) required interhospital transfer after failed prehospital diagnosis. Median first medical contact to treatment time was 88 min in the prehospital diagnosis group and 155 min in the interhospital transfer group (p<0.001). In the interhospital transfer group, the first available electrocardiogram was diagnostic for STEMI in 77% of cases. Predictors of interhospital transfer were female gender, diabetes, prior myocardial infarction, and greater event location to PCI-capable centre distance. Interhospital transfer independently accounted for a 47% increase in ischemic time (95% CI 33 to 63%; p<0.001). One-year mortality was higher in the interhospital transfer group (10 vs. 5.3%; p=0.030). Conclusions: Despite an often-diagnostic electrocardiogram, interhospital transfer after failed prehospital diagnosis occurred in 15% of STEMI patients undergoing primary PCI. 
Interhospital transfer was a major predictor of ischemic time and 1-year mortality was significantly higher. Continuing efforts to optimize prehospital triage are warranted, especially among patients at higher risk of failed prehospital diagnosis. abstract_id: PUBMED:36370009 Emergency Interhospital Transfer of Patients With ST-Segment-Elevation Myocardial Infarction: Call 9-1-1-The American Heart Association Mission: Lifeline Program. The American Heart Association Mission: Lifeline program objectives are to improve the quality of care and outcomes for patients with ST-segment-elevation myocardial infarction. Every minute of delay in treatment adversely affects 1-year mortality. Transfer of patients safely and timely to hospitals with primary percutaneous coronary intervention capability is needed to improve outcomes. But treatment times continue to show delays, especially during interhospital transfers. A simple 3-step process of an interhospital "Call 9-1-1" protocol may expedite this process. This STAT TRANSFER process uses a systems approach that considers diverse ways in which patients access care, how EMS responds and determines destinations, how referring hospital transfers are performed, urban and rural differences, and how receiving hospitals prepare for an incoming patient with ST-segment-elevation myocardial infarction. This initiative suggests a strategy to reduce variability in interhospital transfer times using a STAT TRANSFER and a Call 9-1-1 process in a system of care that involves all stakeholders. abstract_id: PUBMED:22589297 Transfer times and outcomes in patients with ST-segment-elevation myocardial infarction undergoing interhospital transfer for primary percutaneous coronary intervention: APEX-AMI insights. Background: Transfer delays for primary percutaneous coronary intervention may increase mortality in patients with ST-segment-elevation myocardial infarction. We examined the association between door 1-to-door 2 (D1D2) time, a measure capturing the entire transfer process, and outcomes in patients undergoing interhospital transfer for primary percutaneous coronary intervention. Methods And Results: We evaluated the relationship between D1D2 time and the 90-day incidence of death, shock, and heart failure in the subset of 2075 (36.1%) of 5745 patients who underwent interhospital transfer for primary percutaneous coronary intervention in the Assessment of Pexelizumab in Acute Myocardial Infarction trial. There was no significant difference in the 90-day incidence of death, shock, and heart failure between the transferred and the nontransferred groups (10.3% versus 10.2%; P = 0.89). The median difference in symptom-to-balloon time between the 2 groups was 45 minutes (229 versus 184; P<0.001). The primary outcome per 30-minute delay was higher for patients with a D1D2 time ≤150 minutes (hazard ratio, 1.19: 95% confidence interval, 1.06 to 1.33; P = 0.004) but not for D1D2 times >150 minutes (hazard ratio, 0.99: 95% confidence interval, 0.96 to 1.02; P = 0.496). The association between longer D1D2 time and worsening outcome was no longer statistically significant after multivariable adjustment. Conclusions: Longer transfer times were associated with higher rate of death, shock, and heart failure among patients undergoing interhospital transfer from primary percutaneous coronary intervention, although this difference did not persist after adjusting for baseline characteristics. 
abstract_id: PUBMED:26926589 The first-door-to-balloon time delay in STEMI patients undergoing interhospital transfer. Background: Interhospital transfer delays for ST-elevation myocardial infarction (STEMI) patients requiring primary percutaneous coronary intervention (PCI) may be shortened by improved regional care systems. We evaluated the transfer process and first door-to-balloon (D1toB) time in STEMI patients who underwent interhospital transfer for primary PCI. Methods And Results: We evaluated the D1toB time in 1837 patients who underwent interhospital transfer for primary PCI from the Cardiovascular Disease Surveillance program in Korea. Only 29.3% of patients had a D1toB time less than 120 minutes, as recommended by the American College of Cardiology Foundation/American Heart Association guidelines for the management of STEMI. After adjusting for potential confounders, chest pain at presentation (adjusted odds ratio [AOR], 2.06; 95% confidence interval [CI], 1.18-3.83), transfer to a PCI center with an annual PCI volume greater than 200 (AOR, 1.35; 95% CI, 1.04-1.74), and higher urbanization level (AOR, 2.01 [95% CI, 1.40-2.91], for urban areas; AOR, 3.70 [95% CI, 2.59-3.83], for metropolitan areas) showed beneficial effects on reducing the D1toB time. The median length of stay in the referring hospital (D1LOS) and interhospital transport time were 50 (interquartile range [IQR], 30-100) minutes and 32 (IQR, 20-51) minutes, respectively. The median time interval from the door of the receiving hospital to balloon insertion was 55 (IQR, 40-79) minutes. Conclusions: Patients with STEMI undergoing interhospital transfer did not receive definite care within the recommended therapeutic time window. Delays in the transfer process (length of stay in the referring hospital and interhospital transport time) were major contributors to the delay in the D1toB time. Answer: Interhospital transfer for patients with acute myocardial infarction (AMI) can have varying impacts on outcomes, depending on several factors including the type of AMI, the timing of reperfusion therapy, and the characteristics of the hospitals involved. For patients with ST-segment elevation myocardial infarction (STEMI), interhospital transfer can lead to longer total ischemic times due to delays, particularly in the door-in-to-door-out (DI-DO) time at the first hospital, which is strongly associated with prolonged total ischemic time (PUBMED:37207003). However, transferred patients often receive primary percutaneous coronary intervention (PCI) with shorter door-to-device (DTD) times once they arrive at the PCI center (PUBMED:37207003). A retrospective propensity score analysis from the Cardiovascular Cooperative Project found that after adjusting for differences in patient characteristics, patients who underwent interhospital transfer had better quality of care and lower mortality than non-transferred patients, with similar mortality in rural and urban hospitals (PUBMED:18782452). Another study found that while interhospital transfer increased the time before PCI for STEMI patients, there were no significant differences in one-year mortality between directly admitted and transferred patients. This may be related to the fact that the direct admission group had more patients in a severe condition upon admission (PUBMED:31784297). 
Direct admission to a PCI center has been associated with shorter median symptoms-to-admission time, total ischemic time, higher left ventricular ejection fraction, and lower 12-month mortality compared to transfer via a regional non-PCI-capable facility (PUBMED:28216215). However, interhospital transfer after failed prehospital diagnosis occurred in a significant portion of STEMI patients undergoing primary PCI and was a major predictor of increased ischemic time and higher one-year mortality (PUBMED:24222827). In summary, while interhospital transfer can lead to delays and longer ischemic times, which are generally associated with worse outcomes, the actual impact on mortality and other outcomes may vary. In some cases, transferred patients receive better care and have lower mortality rates than non-transferred patients, especially when the transfer process is well-managed and timely (PUBMED:18782452, PUBMED:28216215).
Instruction: Is preoperative bowel preparation necessary for gynecological oncology surgery? Abstracts: abstract_id: PUBMED:27125402 Is preoperative bowel preparation necessary for gynecological oncology surgery? Objective: We investigated the necessity of preoperative bowel preparation for gynecological oncology surgery. Materials And Methods: We retrospectively reviewed the medical records of patients who underwent gynecological oncology surgery with simultaneous colon or rectal resection between April 2005 and September 2014 at the Tri-Service General Hospital, Taipei, Taiwan. Patients were divided into two groups based on whether preoperative mechanical bowel preparation (MBP) was performed. Patient characteristics, including duration of antibiotic treatment, surgical procedures, and occurrence of surgical and nonsurgical complications, were compared. Results: We enrolled 124 patients who underwent gynecological oncology surgery with simultaneous colon or rectal resection, of whom 76 received MBP and 48 did not receive mechanical bowel preparation. On comparison between the two groups, no significant differences were noted in the assessed patient characteristics, including mean age (p = 0.61), Federation of Gynecology and Obstetrics stage (p = 0.9), American Society of Anesthesiologists grade (p = 0.9), body mass index (p = 0.8), and residual tumor size (p = 0.86). Furthermore, duration of antibiotic treatment (p = 0.97), surgical procedures (p = 0.99), and total hospital days (p = 0.75), were not different between groups. The risk of surgical (p = 0.78) or nonsurgical (p = 1.0) complications was not significantly higher in the non-MBP group than in the MBP group. Conclusion: MBP provides no significant benefit during gynecological oncology surgery. Thus, preoperative MBP is not essential before gynecological oncology surgery and can be omitted. abstract_id: PUBMED:37224788 Effect of non-mechanical bowel preparation on postoperative gastrointestinal recovery following surgery on malignant gynecological tumors: A randomized controlled trial. Objective: To investigate the efficacy and safety of non-mechanical bowel preparation (non-MBP) in patients undergoing surgery for malignant gynecological tumors. Methods: Patients undergoing surgery for a gynecological malignancy (n = 105) were randomized to receive mechanical bowel preparation (MBP) or non-MBP. Parameters indicating postoperative gastrointestinal function recovery were the primary outcomes. The secondary outcomes included the number of postoperative complaints, the plasma levels of D-lactate and diamine oxidase (DAO), ease of visualization of the surgical field, involuntary defecation during surgery, operation time, wound healing, surgical site infection, length of hospital stay, and tolerance to MBP. Results: The participants in the non-MBP group exhibited shorter time intervals until the first postoperative bowel movement (27.87 vs. 29.48 h), first passage of flatus (50.96 vs. 55.08 h), and first passage of stool (75.94 vs. 98.50 h) compared with the MBP group, while they also exhibited fewer postoperative gastrointestinal symptoms, including nausea (18.9% vs. 38.5%), vomiting (26.4% vs. 51.9%), abdominal pain (34.0% vs. 78.9%), and bloating (3.8% vs.26.9%). The plasma D-lactate and DAO levels were significantly increased following bowel preparation compared with the baseline levels in the MBP group (2.93 vs. 5.68 nmol/mL and 20.46 vs. 54.49 ng/mL, respectively), but no such differences were observed in the non-MBP group. 
Compared with the MBP group, surgical field visualization was superior (92.45% vs. 78.85%), and the operation time was shorter (173.58 vs. 203.88 min) in the non-MBP group. The patients undergoing MBP complained of bloating (82.35%), an unpleasant taste (78.43%), sleep disturbance (70.59%), nausea (68.63%), abdominal pain (64.71%), vomiting (45.10%), polydipsia (33.33%), dizziness (25.49%), and headache (7.84%). Conclusions: The use of non-MBP in patients undergoing surgery for gynecological malignancies is more conducive to the postoperative recovery of gastrointestinal function. abstract_id: PUBMED:32740871 Controversies in preoperative bowel preparation in gynecologic and gynecologic oncology surgery: a review of the literature. Purpose: The purpose of this review is to assess the impact of mechanical and oral antibiotic bowel preparation on surgical performance and to investigate their role before gynecologic surgical procedures regarding infection rates. We also aim to study the updated evidence regarding the use of these different types of bowel preparation, as well as the current preoperative practice applied. Methods: An extensive search of the literature was conducted, with Medline/PubMed and the Cochrane Library Database of Systematic Reviews used for our primary search. Results: To date, due to the conflicting guidelines by the scientific societies, surgeons do not use a specific pattern of bowel preparation regimen. There is no strong evidence supporting mechanical bowel preparation; instead, in many cases, patients' adverse effects, both physiological and psychological, have been noted. On the other hand, the combined use of oral antibiotic and mechanical bowel preparation has been proven beneficial in colorectal surgery in reducing postoperative morbidities. Conclusion: Based on current literature, in gynecologic surgeries with minimal probability of intraluminal entry, a regimen without any bowel preparation should be applied. The combined administration of both mechanical and oral antibiotic bowel preparation, or even the use of the oral antibiotics alone, should be reserved for cases of increased complexity, where bowel involvement is highly anticipated, such as in gynecologic oncology, as stated in the ERAS protocols. Nonetheless, further research specific to gynecologic surgery is required. abstract_id: PUBMED:36423444 Association of bowel preparation with surgical-site infection in gynecologic oncology surgery: Post-hoc analysis of a randomized controlled trial. Objective: To determine the relationship between bowel preparation and surgical-site infection (SSI) incidence following colorectal resection during gynecologic oncology surgery. Methods: This post-hoc analysis used data from a randomized controlled trial of patients enrolled from 03/01/2016-08/20/2019 with presumed gynecologic malignancy investigating negative-pressure wound therapy among those requiring laparotomy. Patients were treated preoperatively without bowel preparation, oral antibiotic bowel preparation (OABP), or OABP plus mechanical bowel preparation (MBP) per surgeon preference. Univariate and multivariable analyses with stepwise model selection for SSI were performed for confirmed gynecologic malignancies requiring colorectal resection. Results: Of 161 cases, 15 (9%) had no preparation, 39 (24%) OABP only, and 107 (66%) OABP+MBP.
The overall SSI rate was 19% (n = 31): 53% (n = 8/15) in the no-preparation group, 21% (n = 8/39) in the OABP-alone group, and 14% (n = 15/107) in the OABP+MBP group (P = 0.003). The difference between OABP and OABP+MBP was non-significant (P = 0.44). The median length of stay was 9 (range, 6-12), 6 (range, 5-8), and 7 days (range, 6-10), respectively (P = 0.045). The overall complication rate (34%; n = 54) did not significantly vary by preparation type (P = 0.23). On univariate logistic regression analysis, OABP (OR, 0.23; 95% CI: 0.06-0.80) and OABP+MBP (OR, 0.14; 95% CI: 0.04-0.45) were associated with decreased SSI risk compared to no preparation (P = 0.004). On multivariate analysis, both methods of preparation retained a significant impact on SSI rates (P = 0.004). Conclusion: Bowel preparation is associated with reduced SSI incidence and is beneficial for patients undergoing gynecologic oncology surgery with anticipated colorectal resection. Further investigation is needed to determine whether OABP alone is sufficient. abstract_id: PUBMED:35589434 Mechanical and oral antibiotic bowel preparation in ovarian cancer debulking: Are we lowering or just trading surgical complications? Objectives: To examine postoperative complications associated with preoperative mechanical and oral antibiotic bowel preparation (MOABP) for patients with ovarian cancer who underwent bowel resection at cytoreductive surgery (CRS). Methods: This was a single-institution retrospective study of patients with ovarian cancer undergoing CRS from 01/2011-12/2020 using ICD-10 diagnoses and procedure codes. Patients were stratified by those who underwent bowel resection versus no resection. Bowel resection patients were further stratified by those who underwent MOABP versus no bowel preparation. Patient demographics, tumor data, and perioperative metrics were collected. Unadjusted and adjusted logistic regression evaluated odds of 30-day postoperative complications in patients with bowel resection versus no resection and those with MOABP versus no bowel preparation. Results: Of 919 patients identified, 215 (23.3%) required bowel resection, which included 81 (37.7%) who received MOABP. Patient characteristics, co-morbidities, and cancer data were similar between MOABP and no bowel preparation patients. MOABP patients underwent more interval CRS (34.6% versus 9.0%), more optimal surgical resections (96.3% versus 83.8%), fewer diverting ostomies (13.5% versus 33.5%), and shorter hospital stays (7.1 versus 9.4 days) than no bowel preparation patients. On adjusted analyses, MOABP patients experienced significantly lower odds of deep/organ-space surgical infections and 30-day readmissions but higher odds of unplanned intensive care unit (ICU) admissions and grade 3 or higher cardiac and gastrointestinal complications. Conclusions: Patients who underwent preoperative MOABP prior to ovarian cancer CRS with bowel resection had lower odds of deep/organ-space infections and readmissions, shorter hospital stays, fewer diverting ostomies, and more optimal resections. However, these patients also experienced higher odds of ICU admissions and grade 3 or higher cardiac and gastrointestinal complications. The positive and negative postoperative outcomes in this population should be considered in clinical practice. abstract_id: PUBMED:25880356 Rectal enema is an alternative to full mechanical bowel preparation for primary rectal cancer surgery.
Aim: According to the French GRECCAR III randomized trial, full mechanical bowel preparation (MBP) for rectal surgery decreases the rate of postoperative morbidity, in particular postoperative infectious complications, but MBP is not well tolerated by the patient. The aim of the present study was to determine whether a preoperative rectal enema (RE) might be an alternative to MBP. Methods: An analysis was performed of 96 matched cohort patients undergoing rectal resection with primary anastomosis and protective ileostomy at two different university teaching hospitals, whose rectal cancer management was comparable except for the choice of preoperative bowel preparation (MBP or RE). Prospective databases were retrospectively analysed. Results: Patients were well matched for age, gender, body mass index and Charlson index. The surgical approach and cancer characteristics (level above anal verge, stage and use of neoadjuvant therapy) were comparable between the two groups. Anastomotic leakage occurred in 10% of patients having MBP and in 8% having RE (P = 1.00). Pelvic abscess formation (6% vs 2%, P = 0.63) and wound infection (8% vs 15%, P = 0.55) were also comparable. Extra-abdominal infection (13% vs 13%, P = 1.00) and non-infectious abdominal complications such as ileus and bleeding (27% and 31%, P = 0.83) were not significantly different. Overall morbidity was comparable in the two groups (50% vs 54%, P = 0.83). Conclusion: A simple RE before rectal surgery seems not to be associated with more postoperative infectious complications nor a higher overall morbidity than MBP. abstract_id: PUBMED:21719323 Elimination of preoperative mechanical bowel preparation in patients undergoing cystectomy and urinary diversion. Introduction: The utility of a preoperative mechanical bowel preparation prior to bowel surgery has recently been questioned. The purpose of this study is to compare the perioperative outcomes between patients undergoing cystectomy with urinary diversion with or without preoperative mechanical bowel preparation. Methods: Seventy patients underwent radical cystectomy and urinary diversion between May 2008 and August 2009 for bladder cancer. The first cohort of patients (n = 37) underwent cystectomy and diversion during the period May 2008-December 2008 and underwent a preoperative mechanical bowel preparation including a clear liquid diet, magnesium citrate solution, and an enema before surgery. The second cohort of patients underwent surgery during the period of January 2009-August 2009 (n=33). These patients were given a regular diet before surgery and did not undergo a mechanical bowel preparation except for the enema before surgery was performed to decrease rectal/colonic distention. Outcome measures included gastrointestinal and overall complications, and perioperative outcomes including recovery of bowel function. Results: There were no differences with regard to recovery of bowel function, time to discharge, or overall complication rates between the 2 groups. More specifically, the rate of GI complications was not different in prepped patients vs. nonprepped patients (22% vs. 15%; P = 0.494). There were no occurrences of bowel anastomotic leak, fistula, abscess, peritonitis, or surgical site infection in either group. One perioperative death occurred in the nonprepped group secondary to cardiovascular complications. 
Conclusions: Preoperative mechanical bowel preparation prior to radical cystectomy with urinary diversion does not demonstrate any significant advantage in perioperative outcomes, including gastrointestinal complications. Further studies aimed at measuring patient satisfaction and larger randomized trials will be beneficial in evaluating the role of mechanical bowel preparation prior to urinary diversion. abstract_id: PUBMED:17480223 Preoperative mechanical preparation of the colon: the patient's experience. Background: Preoperative mechanical bowel preparation can be questioned as standard procedure in colon surgery, based on the result from several randomised trials. Methods: As part of a large multicenter trial, 105 patients planned for elective colon surgery for cancer, adenoma, or diverticulitis in three hospitals were asked to complete a questionnaire regarding perceived health including experience with bowel preparation. There were 39 questions, each having 3 - 10 answer alternatives, dealing with food intake, pain, discomfort, nausea/vomiting, gas distension, anxiety, tiredness, need of assistance with bowel preparation, and willingness to undergo the procedure again if necessary. Results: 60 patients received mechanical bowel preparation (MBP) and 45 patients did not (No-MBP). In the MBP group 52% needed assistance with bowel preparation and 30% would consider undergoing the same preoperative procedure again. In the No-MBP group 65 % of the patients were positive to no bowel preparation. There was no significant difference between the two groups with respect to postoperative pain and nausea. On Day 4 (but not on Days 1 and 7 postoperatively) patients in the No-MBP group perceived more discomfort than patients in the MBP group, p = 0.02. Time to intake of fluid and solid food did not differ between the two groups. Bowel emptying occurred significantly earlier in the No-MBP group than in the MBP group, p = 0.03. Conclusion: Mechanical bowel preparation is distressing for the patient and associated with a prolonged time to first bowel emptying. abstract_id: PUBMED:32112733 Use of bowel preparation does not reduce postoperative infectious morbidity following minimally invasive or open hysterectomies. Background: Literature on the use of bowel preparation in gynecologic surgery is scarce and limited to minimally invasive gynecologic surgery. The decision on the use of bowel preparation before benign or malignant hysterectomies is mostly driven by extrapolating data from the colorectal literature. Objective: Bowel preparation is a controversial element within enhanced recovery protocols, and literature investigating its efficacy in gynecologic surgery is scarce. Our aim was to determine if mechanical bowel preparation alone, oral antibiotics alone, or a combination are associated with decreased rates of surgical site infections or anastomotic leaks compared to no bowel preparation following benign or malignant hysterectomy. Study Design: We identified women who underwent hysterectomy between January 2006 and July 2017 using OptumLabs, a large US commercial health plan database. Inverse propensity score weighting was used separately for benign and malignant groups to balance baseline characteristics. Primary outcomes of 30-day surgical site infection, anastomotic leaks, and major morbidity were assessed using multivariate logistic regression that adjusted for race, census region, household income, diabetes, and other unbalanced variables following propensity score weighting. 
Results: A total of 224,687 hysterectomies (benign, 186,148; malignant, 38,539) were identified. Median age was 45 years for the benign and 54 years for the malignant cohort. Surgical approach was as follows: benign: laparoscopic/robotic, 27.2%; laparotomy, 32.6%; vaginal, 40.2%; malignant: laparoscopic/robotic, 28.8%; laparotomy, 47.7%; vaginal, 23.5%. Bowel resection was performed in 0.4% of the benign and 2.8% of the malignant cohort. Type of bowel preparation was as follows: benign: none, 93.8%; mechanical bowel preparation only, 4.6%; oral antibiotics only, 1.1%; mechanical bowel preparation with oral antibiotics, 0.5%; malignant: none, 87.2%; mechanical bowel preparation only, 9.6%; oral antibiotics only, 1.8%; mechanical bowel preparation with oral antibiotics, 1.4%. Use of bowel preparation did not decrease rates of surgical site infections, anastomotic leaks, or major morbidity following benign or malignant hysterectomy. Among malignant abdominal hysterectomies, there was no difference in the rates of infectious morbidity between mechanical bowel preparation alone, oral antibiotics alone, or mechanical bowel preparation with oral antibiotics, compared to no preparation. Conclusion: Bowel preparation does not protect against surgical site infections or major morbidity following benign or malignant hysterectomy, regardless of surgical approach, and may be safely omitted. abstract_id: PUBMED:38406026 Effect of Preoperative Oral Antibiotics and Mechanical Bowel Preparations on the Intestinal Flora of Patients Undergoing Laparoscopic Colorectal Cancer Surgery: A Single-Center Prospective Pilot Study. Introduction: In the last few decades, considerable progress has been made in controlling surgical site infections (SSIs) using a combination of mechanical and oral antibiotic bowel preparation. However, the number of bacteria present after bowel preparation has not been clarified. In this study, we investigated the bacterial cultures of intestinal fluid samples from patients undergoing laparoscopic surgery for colorectal cancer after preoperative bowel preparation. Methods: This prospective observational study was designed as a pilot study at a single center. We enrolled 25 consecutive patients who underwent laparoscopic surgery for colorectal cancer between March 2021 and February 2022 at our institution. Results: The rate of bacterial culture positivity was 56.0%. The most abundant bacterium was Escherichia coli (44.0%). The positivity rates for E. coli on the right and left sides were 54.5% and 35.7%, respectively (P = 0.60). Moreover, there was a significant relationship between a low American Society of Anesthesiologists Physical Status score and E. coli positivity on the right side (P = 0.031). In the left-sided group, female sex and large tumor size were significantly associated with E. coli positivity (P = 0.036 and 0.049, respectively). Superficial SSI occurred in the patient in the left-sided group, but E. coli was negative. Conclusion: This study emphasizes the importance of understanding intestinal fluid contamination and its relationship to infection risk. Future prospective multicenter studies should be conducted to determine the association between intestinal bacteria and different types of preoperative preparation. Answer: The necessity of preoperative bowel preparation for gynecological oncology surgery is a topic of debate. Some studies suggest that mechanical bowel preparation (MBP) may not be essential. 
For instance, a retrospective review found no significant benefit of MBP in terms of patient characteristics, duration of antibiotic treatment, surgical procedures, or the risk of surgical or nonsurgical complications, suggesting that preoperative MBP can be omitted (PUBMED:27125402). Similarly, a randomized controlled trial indicated that non-MBP was more conducive to postoperative recovery of gastrointestinal function, with patients experiencing fewer postoperative gastrointestinal symptoms and shorter times to first bowel movement, flatus, and stool passage (PUBMED:37224788). However, other research highlights the potential benefits of bowel preparation. A post-hoc analysis of a randomized controlled trial found that bowel preparation, particularly oral antibiotic bowel preparation (OABP) with or without MBP, was associated with reduced surgical-site infection (SSI) incidence following colorectal resection during gynecologic oncology surgery (PUBMED:36423444). Another study on ovarian cancer debulking suggested that mechanical and oral antibiotic bowel preparation (MOABP) led to lower odds of deep/organ-space infections and readmissions, although it was also associated with higher odds of ICU admissions and grade 3 or higher cardiac and gastrointestinal complications (PUBMED:35589434). A review of the literature indicates that there is no strong evidence supporting MBP alone, but the combined use of oral antibiotic and MBP has been beneficial in colorectal surgery in reducing postoperative morbidities (PUBMED:32740871). Additionally, a study on rectal cancer surgery found that a rectal enema might be an alternative to full MBP without increasing postoperative morbidity (PUBMED:25880356). In conclusion, while some studies suggest that MBP may not be necessary and could be omitted for gynecological oncology surgery (PUBMED:27125402), others highlight the benefits of bowel preparation, particularly when combined with oral antibiotics, in reducing postoperative infectious morbidity (PUBMED:36423444, PUBMED:35589434). The decision to use bowel preparation should be tailored to the individual patient and the complexity of the surgery, with consideration given to the potential benefits and risks. Further research specific to gynecologic surgery is required to establish definitive guidelines (PUBMED:32740871).
Instruction: Can loud noise cause acoustic neuroma? Abstracts: abstract_id: PUBMED:16357108 Exposure to loud noise and risk of acoustic neuroma. Exposure to occupational loud noise has been previously identified as a possible risk factor for acoustic neuroma in only one relatively small (n = 86 cases) case-control study of men. The goal of the present study was to further examine the role of loud noise in acoustic neuroma etiology. In their population-based case-control study of both sexes conducted from 1999 to 2002 in Sweden, the authors compared reports on type and duration of occupational and nonoccupational loud noise exposure of 146 acoustic neuroma cases and 564 controls. Controls were randomly selected from the study base and were frequency matched on age, sex, and residential area. The authors found that individuals reporting loud noise exposure from any source were at increased risk for acoustic neuroma (odds ratio (OR) = 1.55, 95% confidence interval (CI): 1.04, 2.30). Exposure to loud noise from machines, power tools, and/or construction increased the risk for acoustic neuroma (OR = 1.79, 95% CI: 1.11, 2.89), as did exposure to loud music (OR = 2.25, 95% CI: 1.20, 4.23). The odds ratio for a latency period of 13 or more years since the first loud noise exposure from any source was 2.12 (95% CI: 1.40, 3.20). The findings of an increased risk of acoustic neuroma with loud noise exposure support previous research. abstract_id: PUBMED:24786799 Loud noise exposure and acoustic neuroma. The results from studies of loud noise exposure and acoustic neuroma are conflicting. A population-based case-control study of 451 acoustic neuroma patients and 710 age-, sex-, and region-matched controls was conducted in Sweden between 2002 and 2007. Occupational exposure was based on historical measurements of occupational noise (321 job titles summarized by a job exposure matrix) and compared with self-reported occupational noise exposure. We also evaluated self-reported noise exposure during leisure activity. Conditional logistic regression was used to estimate odds ratios. There was no statistically significant association between acoustic neuroma and persistent occupational noise exposure, either with or without hearing protection. Exposure to loud noise from leisure activity without hearing protection was more common among acoustic neuroma cases (odds ratio = 1.47, 95% confidence interval: 1.06, 2.03). Statistically significant odds ratios were found for specific leisure activities including attending concerts/clubs/sporting events (odds ratio = 1.82, 95% confidence interval: 1.09, 3.04) and participating in workouts accompanied by loud music (odds ratio = 2.84, 95% confidence interval: 1.37, 5.89). Our findings do not support an association between occupational exposure to loud noise and acoustic neuroma. Although we report statistically significant associations between leisure-time exposures to loud noise without hearing protection and acoustic neuroma, especially among women, we cannot rule out recall bias as an alternative explanation. abstract_id: PUBMED:19289391 Can loud noise cause acoustic neuroma? Analysis of the INTERPHONE study in France. Objectives: To investigate possible associations between risk of acoustic neuroma and exposure to loud noise in leisure and occupational settings. Methods: A case-control study was conducted in France within the international INTERPHONE study. The cases were the 108 subjects diagnosed with acoustic neuroma between 1 June 2000 and 31 August 2003. 
Two controls per case were selected from the electoral rolls and individually matched for gender, age (5 years) and area (local authority district) of residence at the time of the case diagnosis. Multivariate analyses were conducted using conditional logistic regression. Adjustment was made for socioeconomic status. Results: Acoustic neuroma was found to be associated with loud noise exposure (odds ratio (OR) = 2.55; 95% CI 1.35 to 4.82), both in leisure settings, particularly when listening to loud music (OR = 3.88; 95% CI 1.48 to 10.17) and at work (OR = 2.26; 95% CI 1.08 to 4.72). This risk increased with exposure duration (>6 years' leisure exposure: OR = 3.15; 95% CI 1.07 to 9.24). Risk varied according to the type of noise (continuous or explosive vs intermittent). Conclusion: The present results agree with other recent reports implicating loud noise in the risk of acoustic neuroma. abstract_id: PUBMED:30614502 Exposure to loud noise and risk of vestibular schwannoma: results from the INTERPHONE international case‒control study. Objective Studies of loud noise exposure and vestibular schwannomas (VS) have shown conflicting results. The population-based INTERPHONE case‒control study was conducted in 13 countries during 2000-2004. In this paper, we report the results of analyses on the association between VS and self-reported loud noise exposure. Methods Self-reported noise exposure was analyzed in 1024 VS cases and 1984 matched controls. Life-long noise exposure was estimated through detailed questions. Odds ratios (OR) and 95% confidence intervals (CI) were estimated using adjusted conditional logistic regression for matched sets. Results The OR for total work and leisure noise exposure was 1.6 (95% CI 1.4-1.9). OR were 1.5 (95% CI 1.3-1.9) for only occupational noise, 1.9 (95% CI 1.4-2.6) for only leisure noise and 1.7 (95% CI 1.2-2.2) for exposure in both contexts. OR increased slightly with increasing lag-time. For occupational exposures, duration, time since exposure start and a metric combining lifetime duration and weekly exposure showed significant trends of increasing risk with increasing exposure. OR did not differ markedly by source or other characteristics of noise. Conclusion The consistent associations seen are likely to reflect either recall bias or a causal association, or potentially indicate a mixture of both. abstract_id: PUBMED:2736213 Noise trauma in the aetiology of acoustic neuromas in men in Los Angeles County, 1978-1985. The aim of this study was to investigate whether occupational and other suggested brain tumour risk factors relate to the development of acoustic neuromas (AN) in men. Responses to interviews were compared for 86 AN patients and 86 neighbourhood controls. During the period 10 or more years before the year of diagnosis of the case, more cases than controls had a job involving exposure to extremely loud noise; noise exposure was determined by a blinded review of job histories and linkage to the National Occupational Hazards Survey data base (odds ratio (OR) = 2.2, 95% confidence interval (CI) = 1.12, 4.67). A dose-response analysis showed an increase in risk related to number of years of job exposure to extremely loud noise (P for trend = 0.02) with an OR of 13.2 (CI = 2.01, 86.98) for exposure for 20 or more years during the period up to 10 years before diagnosis. We propose that the findings in this study which identify noise as a risk factor support the hypothesis that mechanical trauma may contribute to tumorigenesis. 
abstract_id: PUBMED:31028471 Validation of self-reported occupational noise exposure in participants of a French case-control study on acoustic neuroma. Objectives: To validate self-reported occupational loud noise exposure against expert evaluation of noise levels in a French case-control study on acoustic neuroma and to estimate the impact of exposure misclassification on risk estimation. Methods: Noise levels were evaluated in 1006 jobs held by 111 cases and 217 population controls by an expert. Case-control differences in self-reporting were analyzed with logistic models. Sensitivity, specificity, positive and negative predictive values, and observed agreement of the self-reports were computed relative to the expert evaluation. They were used to calibrate the odds ratio (OR) between lifetime ever occupational loud noise exposure and the risk of acoustic neuroma, without adjustment for measurement error of the expert assessments. Results: Cases reported noise levels in individual jobs closer to the expert assessment than controls, but the case-control difference was small for lifetime exposures. For expert-rated exposure of 80 dB(A), reporting of individual jobs by cases was more sensitive (54% in cases, 37% in controls), whereas specificity (91% in cases, 93% in controls) and observed agreement (82% in cases, 81% in controls) were similar. When lifetime exposure was considered, sensitivity increased (76% in cases, 65% in controls), while cases specificity decreased (84%). When these values were used to calibrate self-reports for exposure misclassification compared to expert evaluation at 80 dB(A), the crude OR of 1.7 was reduced to 1.3. Conclusions: Despite the relatively accurate reporting of loud noise, the impact of the calibration on the OR was non-negligible. abstract_id: PUBMED:36064622 Noise exposure and the risk of cancer: a comprehensive systematic review. The association between noise exposure and increased risk of cancer has received little attention in the field of research. Therefore, the goal of this study was to conduct a systematic review on the relationship between noise exposure and the incidence of cancer in humans. In this study, four electronic bibliographic databases including Scopus, PubMed, Web of Science, and Embase were systematically searched up to 21 April 2022. All types of noise exposure were considered, including environmental noise, occupational noise, and leisure or recreational noise. Furthermore, all types of cancers were studied, regardless of the organs involved. In total, 1836 articles were excluded on the basis of containing exclusion criteria or lacking inclusion criteria, leaving 19 articles retained for this study. Five of nine case-control studies showed a significant relationship between occupational or leisure noise exposure and acoustic neuroma. Moreover, four of five case-control and cohort studies indicated statistically significant relationships between environmental noise exposure and breast cancer. Of other cancer types, two case-control studies highlighted the risk of Hodgkin and non-Hodgkin lymphoma and two cohort studies identified an increased risk of colon cancer associated with environmental noise exposure. No relationship between road traffic and railway noise and the risk of prostate cancer was observed. In total, results showed that noise exposure, particularly prolonged and continuous exposure to loud noise, can lead to the incidence of some cancers. 
However, confirmation of this requires further epidemiological studies and exploration of the exact biological mechanism and pathway for these effects. abstract_id: PUBMED:16943265 Re: "Exposure to loud noise and risk of acoustic neuroma". N/A abstract_id: PUBMED:17008419 Exposure to loud noise and risk of acoustic neuroma. N/A abstract_id: PUBMED:37303697 Occupational and Environmental Noise Exposure and Extra-Auditory Effects on Humans: A Systematic Literature Review. Noise is a common harmful factor in our work and the environment. Most studies have investigated the auditory effects of noise exposure; however, few studies have focused on the extra-auditory effects of exposure to occupational or environmental noise. This study aimed to systematically review published studies on the extra-auditory effects of noise exposure. We reviewed literature from PubMed and Google Scholar databases up to July 2022, using the Patient, Intervention, Comparison, and Outcome criteria and Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to identify studies that reported extra-auditory effects of occupational or environmental noise exposure. Studies were evaluated utilizing validated reporting tools (CONSORT, STROBE) appropriate to study design. A total of 263 articles were identified, of which 36 were finally selected and reviewed. Upon conducting a review of the articles, exposure to noise can elicit a variety of extra-auditory effects on humans. These effects include circulatory effects linked to higher risk of cardiovascular disease and decreased endothelial function, nervous system effects correlated with sleep disturbance, cognitive impairment, and mental health problems, immunological and endocrinal effects connected to increased physiological stress response and metabolic disorders, oncological and respiratory effects associated with an elevated risk of acoustic neuroma and respiratory disorders, gastrointestinal effects linked to an increased risk of gastric or duodenal ulcer, and obstetric effects connected to the risk of preterm birth. Our review suggests that there are numerous extra-auditory effects of noise exposure on human, and further investigations are needed to fully understand these effects. Answer: The relationship between loud noise exposure and the risk of acoustic neuroma has been explored in various studies, with some evidence suggesting a potential causal link. A population-based case-control study in Sweden found that individuals reporting loud noise exposure from any source were at increased risk for acoustic neuroma, with an odds ratio (OR) of 1.55. Exposure to loud noise from machines, power tools, construction, and loud music increased the risk, with a higher OR observed for a latency period of 13 or more years since the first loud noise exposure (PUBMED:16357108). Another study within the INTERPHONE study in France reported an association between acoustic neuroma and loud noise exposure both in leisure settings, particularly when listening to loud music, and at work, with the risk increasing with exposure duration (PUBMED:19289391). However, conflicting results have also been reported. A separate Swedish study did not find a statistically significant association between acoustic neuroma and persistent occupational noise exposure, although it did find statistically significant associations between leisure-time exposures to loud noise without hearing protection and acoustic neuroma, particularly among women (PUBMED:24786799). 
The INTERPHONE international case-control study also reported consistent associations between vestibular schwannomas (a type of acoustic neuroma) and self-reported loud noise exposure in work and leisure contexts, but the study acknowledged the possibility of recall bias or a mixture of recall bias and a causal association (PUBMED:30614502). A study in Los Angeles County suggested that noise exposure could be a risk factor for acoustic neuroma, with a dose-response relationship observed for the number of years of job exposure to extremely loud noise (PUBMED:2736213). Additionally, a systematic review indicated that some case-control studies showed a significant relationship between occupational or leisure noise exposure and acoustic neuroma (PUBMED:36064622). In summary, while some studies suggest that loud noise exposure may increase the risk of acoustic neuroma, the evidence is not entirely consistent, and further research is needed to clarify the relationship. Potential issues such as recall bias and the need for more precise exposure assessment have been highlighted in the literature (PUBMED:31028471).
Instruction: Do we have to stop anticoagulant and platelet-inhibitor treatments during proctological surgery? Abstracts: abstract_id: PUBMED:10597382 Management of haemophilia in patients with high-titre inhibitors: focus on the evolution of activated prothrombin complex concentrate AUTOPLEX T. Numerous therapeutic strategies have been applied to the management of patients with inhibitors to factors VIII or IX. Different treatment approaches are analysed including prothrombin complex concentrates (PCCs), activated prothrombin complex concentrates (aPCCs), porcine factor VIII concentrate, inhibitor neutralization, immune tolerance therapy, immunosuppressive regimens and recombinant factor VIIa. Clinical data are reported in the analysis of several treatments. PCCs and aPCCs have gained widespread acceptance as the standard first-line approach for patients with inhibitors. The aPCC AUTOPLEX T has achieved a high response rate with a low level of thrombotic events. Four case studies are presented in which AUTOPLEX T has been used successfully. Administration of platelet concentrate or, in elective surgery, waiting for inhibitor levels to decline are useful adjuncts to some treatments. The optimal treatment depends on the patient's inhibitor status--low responder (minimal or no increase in inhibitor levels upon administration of replacement clotting factor) or high responder (replacement clotting factor generates inhibitor production). A suggested algorithm for treating high-responder inhibitor patients is presented. abstract_id: PUBMED:1894198 Some objective considerations for the neutralization of the anticoagulant actions of recombinant hirudin. Recombinant hirudin (r-hirudin) is currently under development as an anticoagulant for use in surgery, therapeutic anticoagulation, disseminated intravascular coagulation and other pathologic states involving the generation of thrombin. Circulating levels of r-hirudin as an antithrombotic agent range from 2 to 20 micrograms/ml (0.1-1.0 mg/kg) as determined in an animal model of stasis thrombosis. In order to establish a relationship between the r-hirudin circulating level and bleeding, we utilized a rabbit ear blood loss model. r-Hirudin did not produce any loss of blood at dosages up to 20 micrograms/ml i.v. (1.0 mg/kg). When the circulating levels were maintained at 20 micrograms/ml for periods of up to 3 h, no increase in blood loss was observed. At 50 and 100 micrograms/ml initial circulating levels (2.5 and 5.0 mg/kg) a dose-dependent increase in the blood loss was observed which was equivalent to that observed with 1.25 and 2.5 mg/kg i.v. heparin. Such levels of r-hirudin are not expected in clinical usage. In contrast to heparin, the anticoagulant actions of r-hirudin were not neutralized by protamine sulfate, platelet factor 4, other polycationic agents and heparinase. In our studies, the blood loss induced by greater than 2.0 mg/kg i.v. dosages of r-hirudin in an animal model was neutralized by the administration of an activated prothrombin complex concentrate at 25 U/kg. In a similar experimental setting, r-factor VIIa was also partially effective. These studies suggest that r-hirudin anticoagulation may not require neutralization, since bleeding effects are not observed at effective antithrombotic dosages in individuals with normal hemostatic status.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:15660984 Treatment of factor XI inhibitor using recombinant activated factor VIIa. 
A 30-year-old female with severe factor XI deficiency of 0-2% acquired a factor XI inhibitor following many infusions of fresh frozen plasma (FFP) for surgical procedures starting at 4 years of age. Seven months before this inhibitor was diagnosed, surgery was complicated by prolonged bleeding resistant to FFP, requiring epsilon aminocaproic acid (EACA) and surgical packing. The inhibitor was measured at 2.2 Bethesda units, 7 months since the last FFP. The inhibitor was confirmed as specific anti-XI and anti-XIa binding by patient's IgG to immobilized factor XI and factor XIa from whole plasma and purified IgG. For repair of a painful anterior cruciate ligament (ACL) defect she was given recombinant factor VIIa (rVIIa) at 90 μg kg(-1), starting one-half hour preoperatively and continued every 2 h for 8 h when haemostasis was complete. Thereafter the rVIIa was given every 3 h for two doses, and then every 4 h for four doses, at which time she was discharged on EACA, which was continued for 6 days. There was excellent haemostasis during and following the surgery. There was no evidence of consumptive coagulopathy, with no change in the fibrinogen, platelet count, or D-D dimer; and no increase of platelet factor 4, beta-thromboglobulin, or prothrombin fragment F 1.2. The thrombin-antithrombin complex increased over baseline after 24 h. There was no postoperative deep vein thrombosis or pulmonary embolus. In this patient with a factor XI inhibitor, the recombinant factor VIIa was effective and safe, ensuring adequate haemostasis with no thrombotic complications. This product, which was designed for patients with inhibitors to factor VIII or factor IX, and factor VII deficiency, has now been given successfully to four patients with factor XI inhibitors. abstract_id: PUBMED:16731183 Bivalirudin anticoagulation for a patient with hypercoagulable immune syndromes undergoing mitral valve surgery. Unfractionated heparin has been a near universal anticoagulant for cardiac surgery; however, it is contraindicated in heparin-induced thrombocytopenia type II. Alternative anticoagulants such as bivalirudin (a direct thrombin inhibitor) are being utilized. Bivalirudin was successfully used in an immunologically complex patient (diagnoses of heparin-induced thrombocytopenia type II, systemic lupus erythematosus, antiphospholipid syndrome, and dialysis-dependent renal failure) requiring cardiopulmonary bypass. Thrombotic events are common in antiphospholipid syndrome patients undergoing cardiac surgery utilizing high-dose heparin. This may represent unrecognized heparin-induced thrombocytopenia type II. Our patient did not experience perioperative thrombotic or bleeding complications. The possible cross-reactivity between heparin-induced thrombocytopenia type II and antiphospholipid syndrome has not been investigated. abstract_id: PUBMED:22329977 Recognition of heparin-induced thrombocytopenia and initiation of argatroban therapy after cardiothoracic surgery in the intensive care unit. Objective: Patients recovering from cardiothoracic surgery are known to be at increased risk of heparin-induced thrombocytopenia. Postoperatively, if heparin-induced thrombocytopenia is suspected, heparin is discontinued immediately and an alternative anticoagulant, such as the direct thrombin inhibitor argatroban, is administered. Current data regarding the safety and efficacy of argatroban in the postoperative cardiothoracic surgical patient in the intensive care setting are limited.
Methods: Data were collected retrospectively from January 1, 2007, to December 31, 2010, from patients tested for antiplatelet factor 4/heparin antibodies on clinical suspicion of heparin-induced thrombocytopenia after cardiothoracic surgery. We evaluated the use of argatroban as a therapeutic agent for the postoperative treatment of suspected heparin-induced thrombocytopenia by comparing thrombotic and bleeding events, platelet dynamics, antiplatelet factor 4/heparin antibody titer, and clinical probability score between patients who did and did not receive argatroban. Results: Eighty-seven patients were included; 47 patients (54%) were treated with argatroban, and 40 patients (46%) were not treated with argatroban. There was no association between argatroban therapy and bleeding, mortality, length of stay, or pretreatment thrombotic events. Among all patients, antiplatelet factor 4/heparin antibody titer and clinical probability score were higher in patients treated with argatroban. Conclusions: Clinical suspicion of heparin-induced thrombocytopenia as detected by clinical probability score and thrombotic complications should prompt immediate cessation of heparin and initiation of an alternative anticoagulant such as argatroban. The results from this study demonstrate that argatroban should be considered without increased risk for adverse events, including bleeding, in the cardiothoracic intensive care unit after surgery. abstract_id: PUBMED:21316987 Update in hematology: heparin-induced thrombocytopenia and bivalirudin. Heparin-induced thrombocytopenia (HIT) is important because it is common, and it significantly increases mortality after cardiac surgery. Although thrombocytopenia after cardiac surgery is common, it predicts serious adverse outcome when it is severe. Despite the high prevalence of heparin/platelet factor 4 antibodies in cardiac surgical patients, they typically do not indicate a higher perioperative risk. Recent evidence suggests, however, that when these antibodies are in the immunoglobulin M class, there is an increased risk of nonthrombotic adverse outcomes after cardiac surgery. According to the guidelines from the American College of Chest Physicians, patients with HIT require parenteral anticoagulation with a direct thrombin inhibitor such as lepirudin, argatroban, or bivalirudin. The transition to oral anticoagulation must be undertaken cautiously and only after the platelet count has recovered. Patients with a remote history of HIT can have cardiac surgery safely with unfractionated heparin. Patients with clinically active HIT who require cardiac surgery before the resolution of the HIT preferably should be anticoagulated with bivalirudin, dosed according to body weight and the goal-activated coagulation time. Given that bivalirudin is an established alternative to heparin as a thrombin inhibitor for cardiac surgery, it is likely that future trials will investigate which anticoagulant confers better outcomes after cardiac surgery, as is the case in percutaneous coronary intervention. abstract_id: PUBMED:20050998 Anti-PF4/heparin antibody formation postorthopedic surgery thromboprophylaxis: the role of non-drug risk factors and evidence for a stoichiometry-based model of immunization. Background: Heparin-induced thrombocytopenia is an antibody-mediated disorder exhibiting variable frequency in different clinical settings. Antibodies recognize PF4/heparin complexes formed at optimal stoichiometric molar ratios. 
Objective: To identify clinical factors influencing risk of anti-PF4/heparin immunization. Patients/methods: We performed observational studies and exploratory analyses of the frequency of anti-PF4/heparin antibody formation in 6324 patients who received enoxaparin or fondaparinux in four randomized controlled trials of postorthopedic surgery thromboprophylaxis. Variables included surgery type (knee vs. hip), timing of first anticoagulant dose (pre- vs. postsurgery), circumstances of surgery (elective vs. hip fracture), anticoagulant (enoxaparin vs. fondaparinux) and body-mass index (BMI). We applied a stoichiometry-based model that predicts immunization risk based on expected differences in PF4/anticoagulant ratios in different settings, and specifically used this model to predict the effect of increasing BMI quartiles upon relative risk (RR) of immunization for fondaparinux vs. enoxaparin. Results: Anti-PF4/heparin immunization was more frequent after knee vs. hip surgery (particularly for enoxaparin), and when enoxaparin was given post- rather than pre-elective surgery; however, the opposite occurred with hip fracture surgery, that is, antibody formation was more frequent when enoxaparin or fondaparinux was given presurgery. The RR of immunization for fondaparinux vs. enoxaparin decreased significantly for increasing BMI quartiles, an effect predominantly because of increasing immunization with enoxaparin at increasing BMI quartiles. Conclusions: Several non-drug factors--including type and circumstances of surgery, timing of first anticoagulant dose and BMI--influence risk of anti-PF4/heparin antibody formation, consistent with a stoichiometry-based immunization model of PF4 and anticoagulant ratios occurring during the early peri-operative period. abstract_id: PUBMED:17252251 Two surgical case reports showing atypical heparin-induced thrombocytopenia Heparin-induced thrombocytopenia (HIT) represents a serious side effect caused by an atypical immune response to platelet factor 4 leading to platelet activation and thrombin formation. These patients are at high risk of thromboembolism, with a rapid drop in platelet count between days 5 and 14 after the initiation of heparin treatment. In single cases, especially after major surgery, platelet count reduction might be absent or hidden by preceding thrombocytosis. Different clinical manifestations of HIT include unspecific skin reactions with potential necrosis at the site of heparin injection, mostly after the application of unfractionated heparin but also with low molecular weight heparin. In heparin-induced skin necrosis, administration of unfractionated or low molecular weight heparin is contraindicated and heparin therapy should be stopped immediately. Instead, an alternative anticoagulant in the form of a direct thrombin inhibitor such as argatroban, and respectively lepirudin, or danaparoid sodium must be administered. Due to frequent misinterpretations of heparin-induced unspecific skin reactions, especially in the absence of thrombocytopenia, we present two case reports which should increase the awareness of HIT's various clinical pictures. abstract_id: PUBMED:34511835 Early Suspicion of Heparin-Induced Thrombocytopenia for Successful Free Flap Salvage: Reports of Two Cases. Heparin-induced thrombocytopenia (HIT) is an underestimated complication of heparin treatment. 
Flap loss and related morbidity (even mortality) are caused by occlusion of the capillary vessels by a highly immunogenic complex formed by adherence of antiheparin antibodies to platelet factor 4. Early suspicion and effective treatment of HIT developing in two free flaps are described. We report on the management of two patients with HIT. Both patients were treated successfully by early suspicion and hematology consultation. Heparin products were discontinued; the patients were switched to a nonheparin anticoagulant. We emphasize the importance of early diagnosis, hematologist assessment, and a change to a nonheparin anticoagulant to prevent flap failure and possibly the catastrophic consequences of such failure. abstract_id: PUBMED:8938285 Restoration of the normal coagulation process: advances in therapies to antagonize heparin. A number of naturally occurring anticoagulants exist that preserve normal blood fluidity and limit blood clot formation to vascular injury sites, thus acting as regulators of hemostasis. The protein C/protein S pathway is one system that acts to modulate thrombin formation. The activation of protein C by thrombin is accelerated more than 1,000-fold at the endothelial surface by thrombomodulin localized on the endothelial cell. Activated protein C then binds to its co-factor, protein S, and the protein C/protein S complex exerts its antithrombotic function by inactivating the coagulation factors Va and VIIIa. Patients deficient in protein C and protein S may be particularly vulnerable to thrombotic events after cardiac surgery. In addition, several studies suggest that reductions in protein C and protein S concentrations, as well as thrombomodulin, occur during cardiopulmonary bypass (CPB). The possibility of a low anticoagulant potential when heparinization is reversed may be an important factor in the subsequent morbidity associated with thrombotic complications. Aprotinin is a serine protease inhibitor that in vitro binds competitively with the serine protease-activated protein C. However, aprotinin in the clinical setting has not been reported to alter levels of protein C in patients undergoing CPB. Reversal of the heparinization needed for CPB is almost universally performed with protamine. However, protamine has many deleterious effects. Recombinant platelet factor 4 (rPF4) has been proposed as an alternative to protamine. We investigated the effective heparin neutralization dose of rPF4 vs. the standard agent protamine in human blood activated through exposure to the CPB circuit. Activated clotting time (ACT) measurements suggested a 2:1 (w/w) reversal ratio for rPF4 and protamine. The first human open-label phase 1 trial of rPF4 reported no serious side effects and no important hemodynamic effects. Doses of 2.5 and 5.0 mg/kg were uniformly effective in reversing the anticoagulant effect of heparin and reducing the ACT to <200 s by 5 min after administration. Repeated monitoring of the ACT did not detect a rebound effect of heparin. Answer: The abstracts provided do not directly address the question of whether anticoagulant and platelet inhibitor treatments must be stopped during proctological surgery. However, they do discuss various aspects of managing anticoagulation in the context of surgery and the presence of inhibitors or anticoagulant-related complications. 
For instance, the management of haemophilia in patients with high-titre inhibitors involves different therapeutic strategies, including activated prothrombin complex concentrates (aPCCs) like AUTOPLEX T, which has been used successfully in some cases (PUBMED:10597382). The treatment of factor XI inhibitor using recombinant activated factor VIIa (rVIIa) is also mentioned, which was effective and safe in ensuring adequate haemostasis without thrombotic complications in a patient undergoing anterior cruciate ligament repair (PUBMED:15660984). In cases of heparin-induced thrombocytopenia (HIT), alternative anticoagulants like argatroban and bivalirudin are used (PUBMED:22329977, PUBMED:21316987). These direct thrombin inhibitors are administered when HIT is suspected, and heparin is discontinued immediately. The use of argatroban in the postoperative cardiothoracic surgical patient in the intensive care setting has been shown to be safe and effective without increased risk for adverse events, including bleeding (PUBMED:22329977). The abstracts suggest that the decision to stop anticoagulant and platelet inhibitor treatments during surgery, including proctological procedures, should be based on a careful assessment of the patient's coagulation status, the presence of any inhibitors, the risk of bleeding, and the potential for thrombotic complications. It is essential to consult with a hematologist and consider individual patient factors when making such decisions. The management of anticoagulation around the time of surgery is complex and must be individualized to balance the risks of bleeding against the risks of thrombosis (PUBMED:10597382, PUBMED:15660984, PUBMED:22329977, PUBMED:21316987).
Instruction: Should total number of lymph nodes be used as a quality of care measure for stage III colon cancer? Abstracts: abstract_id: PUBMED:19300237 Should total number of lymph nodes be used as a quality of care measure for stage III colon cancer? Objective: To assess whether TNODS is an independent prognostic factor after adjusting for the lymph node ratio (LNR). Summary Background Data: The medical literature has suggested that the TNODS is associated with better survival in stage II and III colon cancer. Thus TNODS was endorsed as a quality measure for patient care by American College of Surgeons, National Quality Forum. There is, however, little biologic rationale to support this linkage. Methods: A total of 24,477 stage III colon cancer patients were identified from Surveillance, Epidemiology, and End Results cancer registry and categorized into 4 groups, LNR1 to LNR4, according to LNR interval: <0.07, 0.07 to 0.25, 0.25 to 0.50, and >0.50. Patients were also stratified according to TNODS into high TNODS (≥12) and low TNODS (<12) groups. The method of Kaplan-Meier was used to estimate the 5-year survival and the log-rank test was used to test the survival difference among the different groups. Results: Patients with high TNODS have better survival compared with those with low TNODS (5-year survival 51.0% vs. 45.0%, P < 0.0001). However, after stratifying by LNR status, there was no significant survival difference between patients with high TNODS and those with low TNODS within strata LNR2 (5-year survival 56.3% vs. 56.0%, P = 0.26). Ironically, patients with high TNODS had significantly worse survival than those with low TNODS within strata LNR3 (5-year survival 41.2% vs. 47.4%, P = 0.0009) and LNR 4 (5-year survival 22.0% vs. 32.1%, P < 0.0001). Conclusions: The previously reported prognostic effect of TNODS on node-positive colon cancer was confounded by LNR. This observation calls into question the use of TNODS as a quality measure for colon cancer patients' care. abstract_id: PUBMED:34616683 Impact of Inadequate Number of Lymph Nodes Examined on Survival in Stage II Colon Cancer. Background: Inadequate number of lymph nodes examined was not uncommon. We aimed to assess the clinical role of inadequate number of lymph nodes examined in stage II colon cancer. Methods: The cancer data used in our study were obtained from the SEER (Surveillance, Epidemiology and End Results) program. Using the chi-square test, all the variables obtained in our study were compared based on whether patients had enough (≥12) lymph nodes examined. Kaplan-Meier analysis was used for overall survival (OS) analysis, and log-rank test was applied to compare different N stages with the total number of lymph nodes examined. Multivariate analysis was carried out by creating a Cox proportional hazard model to assess the prognostic roles of different variables. Results: In total, 80,296 stage II/III colon cancer patients were recruited for our study. N0 stage with <8 lymph nodes examined would present with a worse prognosis compared to N1 stage (5-year OS rates, 51.6% vs. 57.1%, p < 0.001). Multivariate analyses indicated that OS of N0 stage with <8 lymph nodes examined was similar to that of N1 stage after adjusting for other recognized prognostic factors [hazard ratios (HRs) = 1.051, 95% confidence intervals (CIs) = 1.014-1.090, p = 0.018].
Conclusions: N0 stage with less than eight lymph nodes examined in stage II colon cancer presented with no better OS compared to that of N1 stage. Stage II colon cancer with less than eight lymph nodes examined needed to be given greater emphasis in clinical practice. abstract_id: PUBMED:19228501 Survival in stage III colon cancer is independent of the total number of lymph nodes retrieved. Background: Retrieval of ≥12 lymph nodes has been set as a marker of quality for surgical resection for colon cancer. The aim of our study was to determine if increasing the number of lymph nodes recovered in stage III colon cancer results in improved survival and if it does represent a reasonable quality metric. Study Design: Data from patients with stage III colon cancer from 1996 to 2001 were analyzed. Outcomes after operation (cancer-specific survival, disease-free survival, and overall survival) with or without adjuvant therapy were evaluated in 3 categories: the entire cohort, patients with N1, and patients with N2 disease. These categories were then classified into subgroups by the number of nodes (≤12 versus >12) retrieved per specimen and whether they had 5-FU-based chemotherapy or not. Results: Three hundred twenty-nine patients, with a median followup of 5 years with stage III colon cancer, were identified. Five-year cancer-specific and disease-free survival was 67.2% and 59.7%, respectively. A positive correlation between number of positive lymph nodes and overall survival was found (p < 0.05). No significant association was observed between the total number (>12 versus ≤12) of lymph nodes removed either in the entire cohort or in patients with N1 (249 patients) and N2 (80 patients) disease. Conclusion: Accurate staging requires an appropriate operation and a concerted pathologic effort to identify lymph nodes in the colon specimen. The total number of lymph nodes analyzed for stage III colon cancer is not a prognostic indicator of cancer-specific and disease-free survival. abstract_id: PUBMED:29751794 Prognostic value of total number of lymph nodes retrieved differs between left-sided colon cancer and right-sided colon cancer in stage III patients with colon cancer. Background: The consensus is that a minimum of 12 lymph nodes should be analyzed at colectomy for colon cancer. However, right colon cancer and left colon cancer have different characteristics, and this threshold value for total number of lymph nodes retrieved may not be universally applicable. Methods: The data of 63,243 patients with colon cancer treated between 2004 and 2012 were retrieved from the National Cancer Institute's Surveillance, Epidemiology, and End Results database. Multivariate Cox regression analysis was used to determine the predictive value of total number of lymph nodes for survival after adjusting for lymph nodes ratio. The predictive value in left-sided colon cancer and right-sided colon cancer was compared. The optimal total number of lymph nodes cutoff value for prediction of overall survival was identified using the online tool Cutoff Finder. Survival of patients with high total number of lymph nodes (≥12) and low total number of lymph nodes (< 12) was compared by Kaplan-Meier analysis. Results: After stratifying by lymph nodes ratio status, total number of lymph nodes ≥12 remained an independent predictor of survival in the whole cohort and in right-sided colon cancer, but not in left-sided colon cancer.
The optimal cutoff value for total number of lymph nodes was determined to be 11. Low total number of lymph nodes (< 11) was associated with significantly poorer survival after adjusting for lymph nodes ratio in all subgroups except in the subgroup with high lymph nodes ratio (0.5-1.0). Conclusions: Previous reports of the prognostic significance of total number of lymph nodes on node-positive colon cancer were confounded by lymph nodes ratio. The 12-node standard for total number of lymph nodes may not be equally applicable in right-sided colon cancer and left-sided colon cancer. abstract_id: PUBMED:30293393 The association of metastasis-related indexes of lymph nodes and the prognosis of stage N2b colorectal cancer patients. Objective: To evaluate the prognostic value of lymph node metastasis-related indexes in patients with stage N2b colorectal cancer. Methods: Clinicopathologic data of 245 patients with stage N2b colorectal cancer who initially underwent radical operation in Cancer Hospital, Chinese Academy of Medical Sciences between January 2007 and December 2012 were retrospectively analyzed. The prognostic values of several indexes, including number of positive lymph nodes, number of negative lymph nodes, lymph node ratio (LNR) and log odds of positive lymph nodes (LODDS) were analyzed. Results: The 5-year overall survival rate of 245 patients with colorectal cancer was 54.0%, and the 5-year recurrence-free survival rate was 48.5%. Univariate analysis showed that perineural or blood vessel invasion, T stage, postoperative adjuvant therapy, number of positive lymph nodes, number of negative lymph nodes, LNR, and LODDS were significantly associated with the 5-year overall survival of colorectal cancer patients (P<0.05). Multivariate Cox regression analysis showed that, number of positive lymph nodes, number of negative lymph nodes, LNR, LODDS were all independent prognostic factors for stage N2b colorectal cancer patients (P<0.05). The areas under the receiver operating characteristic curve (ROC) curves of number of positive lymph node, number of negative lymph nodes, LNR and LODDS were 0.649, 0.667, 0.690 and 0.683, respectively, however, no statistical significance was observed between the number of negative lymph nodes (P=0.622), LNR (P=0.178) or LODDS (P=0.272) and the number of positive lymph nodes. Conclusion: The number of positive lymph nodes, number of negative lymph nodes, LNR and LODDS were all independent prognostic factors for patients with stage N2b colorectal cancer. abstract_id: PUBMED:30607613 Marked impact of tumor location on the appropriate cutoff values and the prognostic significance of the lymph node ratio in stage III colon cancer: a multi-institutional retrospective analysis. Background: The prognostic significance of lymph node ratio (LNR) is not constant among studies. Exploration of appropriate location-specific cutoffs might be necessary because the number of lymph nodes harvested is generally higher in right than in left colon cancer. We aimed to determine appropriate cutoff values of LNR in right and left colon cancer and to clarify its clinical significance. Methods: The clinicopathologic data of 5463 patients with stage III colon cancer were collected. The best cutoff for LNR as a prognostic indicator for patients with right and left colon cancer was studied separately.
We compared the prognostic impact between LNR and the number of lymph node metastasis using the Akaike information criterion (AIC), and evaluated the prognostic significance of LNR in each stage III subcategory. Results: The best performance was noted when LNR was categorized by cutoffs of 0.16 and 0.22 for right and left colon cancer, respectively. AIC scores were better with these categorizations than with subgrouping by number of positive nodes. LNR-low right colon cancer patients showed better cancer-specific survival than LNR-high in stage IIIA (95.7% vs. 89.3%), IIIB (86.7% vs. 77.2%), and IIIC (71.2% vs. 58.7%). The same results were obtained in left colon cancer patients with stage IIIB (88.3% vs. 80.7%) and IIIC (79.8% vs. 68.4%). Conclusions: We demonstrated the difference in the appropriate cutoffs of LNR between right and left colon cancer. Categorization by location-specific cutoff of LNR may be useful for risk stratification of patients with stage III cancer. abstract_id: PUBMED:19935398 Should total number of lymph nodes be used as a quality of care measure for stage III colon cancer? N/A abstract_id: PUBMED:22102737 Number of lymph nodes retrieved is an important determinant of survival of patients with stage II and stage III colorectal cancer. Objective: The number of lymph nodes retrieved is recognized to be a prognostic factor of Stage II colorectal cancer. However, the prognostic significance of the number of lymph nodes retrieved in Stage III colorectal cancer remains controversial. Methods: The relationship between the number of lymph nodes retrieved and clinical and pathological factors, and significance of the number of lymph nodes retrieved for prognosis of Stage II and III colorectal cancer were investigated. A total of 16 865 patients with T3/T4 colorectal cancer who had R0 resection were analysed. Results: The arithmetic mean of the number of lymph nodes retrieved of all cases was 20.0. The number of lymph nodes retrieved were varied according to several clinical and pathological variables with significant difference, and the greater difference was observed in scope of nodal dissection. Survival of Stages II and III was significantly associated with the number of lymph nodes retrieved. Five-year overall survival of the patients with ≤ 9 of the number of lymph nodes retrieved and those with &gt;27 differed by 6.4% for Stage II colon cancer, 8.8% for Stage III colon cancer, 12.5% for Stage II rectal cancer and 10.6% for Stage III rectal cancer. With one increase in the number of lymph nodes retrieved, the mortality risk was decreased by 2.1% for Stage II and by 0.8% for Stage III, respectively. The cut-off point of the number of lymph nodes retrieved was not obtained. Conclusions: The number of lymph nodes retrieved was shown to be an important prognostic variable not only in Stage II but also in Stage III colorectal cancer, and it was most prominently determined by the scope of nodal dissection. A cut-off value for the number of lymph nodes retrieved was not found, and it is necessary to carry out appropriate nodal dissection and examine as many lymph nodes as possible. abstract_id: PUBMED:24045143 Nodal status, number of lymph nodes examined, and lymph node ratio: what defines prognosis after resection of colon adenocarcinoma? Background: Lymph node ratio (LNR) has been proposed as an optimal staging variable for colorectal cancer. 
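As an aside on notation: the two ratio-type nodal indexes used across these abstracts are simple functions of the node counts. The lymph node ratio (LNR) is the number of metastatic nodes divided by the number examined; the log odds of positive lymph nodes (LODDS) compared in PUBMED:30293393 is usually computed on the positive-to-negative ratio with a small continuity correction. The sketch below is only a hedged illustration of those definitions; the 0.5 correction is the common convention in the literature, not something stated in the abstracts.

```python
import math

def lymph_node_ratio(positive: int, examined: int) -> float:
    """LNR: metastatic nodes divided by total nodes examined."""
    if examined <= 0:
        raise ValueError("at least one node must be examined")
    return positive / examined

def lodds(positive: int, examined: int, correction: float = 0.5) -> float:
    """Log odds of positive lymph nodes.

    Commonly defined as ln((positive + c) / (negative + c)) with a small
    continuity correction c so the ratio stays defined when either count is
    zero; the exact form used in PUBMED:30293393 is not given in the abstract.
    """
    negative = examined - positive
    return math.log((positive + correction) / (negative + correction))

# Hypothetical patient: 4 metastatic nodes out of 15 examined
print(lymph_node_ratio(4, 15))  # 0.27 -> the 0.25-0.50 stratum used in PUBMED:19300237
print(lodds(4, 15))             # about -0.94
```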
However, the interactive effect of total number of lymph nodes examined (TNLE) and the number of metastatic lymph nodes (NMLN) on survival has not been well characterized. Study Design: Patients operated on for colon cancer between 1998 and 2007 were identified from the Surveillance, Epidemiology, and End Results database (n = 154,208) and randomly divided into development (75%) and validation (25%) datasets. The association of the TNLE and NMLN on survival was assessed using the Cox proportional hazards model with terms for interaction and nonlinearity with restricted cubic spline functions. Findings were confirmed in the validation dataset. Results: Both TNLE and NMLN were nonlinearly associated with survival. Patients with no lymph node metastasis had a decrease in the risk of death for each lymph node examined up to approximately 25 lymph nodes, while the effect of TNLE was negligible after approximately 10 negative lymph nodes (NNLN) in those with lymph node metastasis. The hazard ratio varied considerably according to the TNLE for a given LNR when LNR ≥ 0.5, ranging from 2.88 to 7.16 in those with an LNR = 1. The independent effects of NMLN and NNLN on survival were summarized in a model-based score, the N score. When patients in the validation set were categorized according to the N stage, the LNR, and the N score, only the N score was unaffected by differences in the TNLE. Conclusions: The effect of the TNLE on survival does not have a unique, strong threshold (ie, 12 lymph nodes). The combined effect of NMLN and TNLE is complex and is not appropriately represented by the LNR. The N score may be an alternative to the N stage for prognostication of patients with colon cancer because it accounts for differences in nodal samples. abstract_id: PUBMED:34293461 Combining tumor deposits with the number of lymph node metastases to improve the prognostic accuracy in stage III colon cancer: a post hoc analysis of the CALGB/SWOG 80702 phase III study (Alliance)☆. Background: In colon cancer, tumor deposits (TD) are considered in assigning prognosis and staging only in the absence of lymph node metastasis (i.e. stage III pN1c tumors). We aimed to evaluate the prognostic value of the presence and the number of TD in patients with stage III, node-positive colon cancer. Patients And Methods: All participants from the CALGB/SWOG 80702 phase III trial were included in this post hoc analysis. Pathology reports were reviewed for the presence and the number of TD, lymphovascular and perineural invasion. Associations with disease-free survival (DFS) and overall survival (OS) were evaluated by multivariable Cox models adjusting for sex, treatment arm, T-stage, N-stage, lymphovascular invasion, perineural invasion and lymph node ratio. Results: Overall, 2028 patients were included with 524 (26%) TD-positive and 1504 (74%) TD-negative tumors. Of the TD-positive patients, 80 (15.4%) were node negative (i.e. pN1c), 239 (46.1%) were pN1a/b (&lt;4 positive lymph nodes) and 200 (38.5%) were pN2 (≥4 positive lymph nodes). The presence of TD was associated with poorer DFS [adjusted hazard ratio (aHR) = 1.63, 95% CI 1.33-1.98] and OS (aHR = 1.59, 95% CI 1.24-2.04). The negative effect of TD was observed for both pN1a/b and pN2 groups. Among TD-positive patients, the number of TD had a linear negative effect on DFS and OS. 
Combining TD and the number of lymph node metastases, 104 of 1470 (7.1%) pN1 patients were re-staged as pN2, with worse outcomes than patients confirmed as pN1 (3-year DFS rate: 65.4% versus 80.5%, P = 0.0003; 5-year OS rate: 87.9% versus 69.1%, P < 0.0001). DFS was not different between patients re-staged as pN2 and those initially staged as pN2 (3-year DFS rate: 65.4% versus 62.3%, P = 0.4895). Conclusion: Combining the number of TD and the number of lymph node metastases improved the prognostication accuracy of tumor-node-metastasis (TNM) staging. Answer: The use of the total number of lymph nodes (TNODS) as a quality of care measure for stage III colon cancer is a subject of debate. Some studies have suggested that a higher TNODS is associated with better survival in stage II and III colon cancer, leading organizations like the American College of Surgeons and the National Quality Forum to endorse TNODS as a quality measure for patient care (PUBMED:19300237). However, the biological rationale for this linkage is not well established, and recent evidence challenges the validity of TNODS as a standalone prognostic factor. For instance, a study found that after adjusting for the lymph node ratio (LNR), there was no significant survival difference between patients with high TNODS (≥12) and those with low TNODS (<12) within certain LNR strata. In fact, patients with high TNODS had significantly worse survival than those with low TNODS within strata LNR3 and LNR4, suggesting that the prognostic effect of TNODS on node-positive colon cancer was confounded by LNR (PUBMED:19300237). Another study indicated that the number of lymph nodes examined is an important determinant of survival in stage II and III colorectal cancer, but it did not establish a cut-off value for the number of lymph nodes retrieved, emphasizing the need for appropriate nodal dissection and examination of as many lymph nodes as possible (PUBMED:22102737). Moreover, the prognostic value of TNODS may differ between left-sided and right-sided colon cancer, with TNODS remaining an independent predictor of survival in right-sided colon cancer but not in left-sided colon cancer after stratifying by LNR status (PUBMED:29751794). In light of these findings, the appropriateness of using TNODS as a quality of care measure for stage III colon cancer is questionable. It appears that other factors, such as LNR and the location of the cancer, may be more relevant for prognosis and may need to be considered alongside TNODS when evaluating the quality of patient care (PUBMED:19300237; PUBMED:29751794; PUBMED:22102737).
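The stratified comparison at the heart of PUBMED:19300237 (group node-positive patients by LNR, then test high-TNODS versus low-TNODS survival within each stratum) can be sketched with the lifelines package. The data frame below is entirely made up and only illustrates the shape of the analysis; the LNR cut points are the ones quoted in the abstract, and the column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical registry extract: one row per node-positive patient
df = pd.DataFrame({
    "months":   [60, 45, 40, 18, 30, 60, 12, 60, 24, 60],   # follow-up time in months
    "died":     [0, 1, 1, 1, 1, 0, 1, 0, 1, 0],             # 1 = death observed
    "positive": [1, 2, 3, 1, 5, 4, 9, 1, 8, 2],             # metastatic nodes
    "examined": [14, 11, 20, 10, 11, 22, 12, 18, 10, 19],   # total nodes examined (TNODS)
})

# LNR strata from PUBMED:19300237: <0.07, 0.07-0.25, 0.25-0.50, >0.50
df["lnr"] = df["positive"] / df["examined"]
df["lnr_stratum"] = pd.cut(df["lnr"], bins=[0, 0.07, 0.25, 0.50, 1.0],
                           labels=["LNR1", "LNR2", "LNR3", "LNR4"])
df["high_tnods"] = df["examined"] >= 12

# Within one stratum, compare high- vs. low-TNODS survival with a log-rank test
stratum = df[df["lnr_stratum"] == "LNR2"]
hi, lo = stratum[stratum["high_tnods"]], stratum[~stratum["high_tnods"]]
result = logrank_test(hi["months"], lo["months"],
                      event_observed_A=hi["died"], event_observed_B=lo["died"])
print(result.p_value)

# Kaplan-Meier estimate of 5-year (60-month) survival for one group
kmf = KaplanMeierFitter().fit(hi["months"], event_observed=hi["died"])
print(kmf.survival_function_at_times(60))
```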
Instruction: Breast contrast-enhanced ultrasound: is a scoring system feasible? Abstracts: abstract_id: PUBMED:27358688 Predictive model for contrast-enhanced ultrasound of the breast: Is it feasible in malignant risk assessment of breast imaging reporting and data system 4 lesions? Aim: To build and evaluate predictive models for contrast-enhanced ultrasound (CEUS) of the breast to distinguish between benign and malignant lesions. Methods: A total of 235 breast imaging reporting and data system (BI-RADS) 4 solid breast lesions were imaged via CEUS before core needle biopsy or surgical resection. CEUS results were analyzed on 10 enhancing patterns to evaluate diagnostic performance of three benign and three malignant CEUS models, with pathological results used as the gold standard. A logistic regression model was developed basing on the CEUS results, and then evaluated with receiver operating curve (ROC). Results: Except in cases of enhanced homogeneity, the rest of the 9 enhancement appearances were statistically significant (P &lt; 0.05). These 9 enhancement patterns were selected in the final step of the logistic regression analysis, with diagnostic sensitivity and specificity of 84.4% and 82.7%, respectively, and the area under the ROC curve of 0.911. Diagnostic sensitivity, specificity, and accuracy of the malignant vs benign CEUS models were 84.38%, 87.77%, 86.38% and 86.46%, 81.29% and 83.40%, respectively. Conclusion: The breast CEUS models can predict risk of malignant breast lesions more accurately, decrease false-positive biopsy, and provide accurate BI-RADS classification. abstract_id: PUBMED:30766895 Comparison of contrast-enhanced digital mammography and contrast-enhanced digital breast tomosynthesis for lesion assessment. Contrast-enhanced digital mammography (CEDM) reveals neovasculature of breast lesions in a two-dimensional contrast enhancement map. Contrast-enhanced digital breast tomosynthesis (CEDBT) provides contrast enhancement in three dimensions, which may improve lesion characterization and localization. We aim to compare CEDM and CEDBT for lesion assessment. Women with breast imaging-reporting and data system 4 or 5 suspicious breast lesion(s) were recruited in our study and were imaged with CEDM and CEDBT in succession under one breast compression. Two radiologists assessed CEDM and CEDBT with both images displayed side-by-side and compared (1) contrast enhancement of lesions and (2) lesion margin using a five-point scale ranging from -2 (CEDM much better) to +2 (CEDBT much better). Biopsy identified 19 malignant lesions with contrast enhancement. Our results show that CEDBT provides better lesion margins than CEDM with limited reduction in contrast enhancement. CEDBT delivers less radiation dose compared to CEDM + DBT. Synthetic CEDM can be generated from CEDBT data and provides lesion contrast enhancement comparable to CEDM. CEDBT has potential for clinical applications, such as treatment response monitoring and guidance for biopsy. abstract_id: PUBMED:38237520 Contrast-enhanced breast imaging: Current status and future challenges. Background: Contrast-enhanced breast MRI and recently also contrast-enhanced mammography (CEM) are available for breast imaging. The aim of the current overview is to explore existing evidence and ongoing challenges of contrast-enhanced breast imaging. Methods: This narrative provides an introduction to the contrast-enhanced breast imaging modalities breast MRI and CEM. 
Underlying principle, techniques and BI-RADS reporting of both techniques are described and compared, and the following indications and ongoing challenges are discussed: problem-solving, high-risk screening, supplemental screening in women with extremely dense breast tissue, breast implants, neoadjuvant systemic therapy (NST) response monitoring, MRI-guided and CEM-guided biopsy. Results: Technique and reporting for breast MRI are standardised; for the newer CEM, standardisation is in progress. Similarly, compared to other modalities, breast MRI is well established as superior for problem-solving, screening women at high risk, screening women with extremely dense breast tissue or with implants; and for monitoring response to NST. Furthermore, MRI-guided biopsy is a reliable technique with low long-term false negative rates. For CEM, data is as yet either absent or limited, but existing results in these settings are promising. Conclusion: Contrast-enhanced breast imaging achieves highest diagnostic performance and should be considered essential. Of the two contrast-enhanced modalities, evidence of breast MRI superiority is ample, and preliminary results on CEM are promising, yet CEM warrants further study. abstract_id: PUBMED:36371851 Feasibility of contrast-enhanced mammography in women with breast implants. Contrast-enhanced mammography (CEM) may provide an alternative to magnetic resonance imaging as a diagnostic exam in women with known or suspected breast cancer or as a screening exam in women at increased risk of breast cancer. Women with breast augmentation, either for oncologic or cosmetic reasons, may fall into this increased risk population and need safe and effective screening and diagnostic imaging tools. Here, we present our clinical practice data in order to demonstrate the feasibility of CEM in women with breast implants. An institutional review board-approved, Health Insurance Portability and Accountability Act-compliant, retrospective review of our tertiary cancer center's database yielded 104 women with breast implants who underwent 198 CEM exams from November 2014 to March 2020. All 198/198 (100%) exams were successfully completed in 104 women. Exam indications included: 174/198 (88%) screening due to increased risk, 10/198 (5%) to evaluate a palpable abnormality, 9/198 (<5%) to evaluate disease extent following neoadjuvant chemotherapy for a known breast malignancy, and 5/198 (<3%) for a 6-month follow-up. 97/104 (93%) women had dense breasts. Routine and implant-displaced low-energy views were obtained with contrast-enhanced images obtained on displaced views for all patients. 197/198 (99.5%) exams yielded no complications. In one exam, the patient experienced mild vasovagal symptoms following the administration of contrast. In conclusion, it is feasible to utilize CEM in both diagnostic and screening capacities in women with breast implants. abstract_id: PUBMED:25133534 Breast contrast-enhanced ultrasound: is a scoring system feasible? A preliminary study in China. Objectives: Although many studies about breast contrast-enhanced ultrasound had been conducted, clear diagnostic criteria for evaluating enhancement patterns are still lacking. This study aims to identify significant indicators for breast contrast-enhanced ultrasound and to establish an initial scoring system. Materials And Methods: Totally 839 patients were included in the study. This study was divided into two parts. 364 patients were included in part 1 while 475 in part 2.
Conventional ultrasound and contrast-enhanced ultrasound were used to examine each lesion. Only the cases in part 2 were also examined by elastography. In part 1, Logistic regression analysis was performed to predict significant variables. A 5-point scoring system was developed based on the results. In part 2, the scoring system was used to evaluate all the breast lesions. To evaluate the diagnostic efficacy of the new scoring system, it was compared with the system established for elastography and conventional ultrasound (BI-RADS). Results: Three independent variables, namely, lesion scope, margin, and shape were selected in the final step of the logistic regression analysis in part 1. In part 2, the area under the ROC (receiver operating characteristic) curve for the contrast-enhanced scoring system was 0.912. The difference in the diagnostic capabilities of the contrast-enhanced scoring system and elastography was not statistically significant (P = 0.17). The difference in the diagnostic capabilities of the contrast-enhanced scoring system and BI-RADS was statistically significant (P&lt;0.001). Conclusions: The contrast-enhanced patterns of benign and malignant breast tumors are different. The application of a 5-point scoring system for contrast-enhanced ultrasound is clinically promising. abstract_id: PUBMED:33573122 Contrast-Enhanced Ultrasonography in the Diagnosis and Treatment Modulation of Breast Cancer. The aim of this paper is to highlight the role of contrast-enhanced ultrasound in breast cancer in terms of diagnosis, staging and follow-up of the post-treatment response. Contrast-enhanced ultrasound (CEUS) is successfully used to diagnose multiple pathologies and has also clinical relevance in breast cancer. CEUS has high accuracy in differentiating benign from malignant lesions by analyzing the enhancement characteristics and calculating the time-intensity curve's quantitative parameters. It also has a significant role in axillary staging, especially when the lymph nodes are not suspicious on clinical examination and have a normal appearance on gray-scale ultrasound. The most significant clinical impact consists of predicting the response to neoadjuvant chemotherapy, which offers the possibility of adjusting the therapy by dynamically evaluating the patient. CEUS is a high-performance, feasible, non-irradiating, accessible, easy-to-implement imaging method and has proven to be a valuable addition to breast ultrasound. abstract_id: PUBMED:33222993 Contrast-Enhanced Mammography Implementation, Performance, and Use for Supplemental Breast Cancer Screening. Contrast-enhanced mammography (CEM) is an emerging breast imaging technology that provides recombined contrast-enhanced images of the breast in addition to low-energy images analogous to a 2-dimensional full-field digital mammogram. Because most breast imaging centers do not use CEM at this time, a detailed overview of CEM implementation and performance is presented. Thereafter, the potential use of CEM for supplemental screening is discussed in detail, given the importance of this topic for the future of the CEM community. Diagnostic performance, safety, and cost considerations of CEM for dense breast tissue supplemental screening are discussed. abstract_id: PUBMED:36010240 Contrast-Enhanced Mammography versus Breast Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis. 
Background: Contrast-enhanced mammography (CEM) and contrast-enhanced magnetic resonance imaging (CE-MRI) are commonly used in the screening of breast cancer. The present systematic review aimed to summarize, critically analyse, and meta-analyse the available evidence regarding the role of CE-MRI and CEM in the early detection, diagnosis, and preoperative assessment of breast cancer. Methods: The search was performed on PubMed, Google Scholar, and Web of Science on 28 July 2021 using the following terms "breast cancer", "preoperative staging", "contrast-enhanced mammography", "contrast-enhanced spectral mammography", "contrast enhanced digital mammography", "contrast-enhanced breast magnetic resonance imaging", "CEM", "CESM", "CEDM", and "CE-MRI". We selected only those papers comparing the clinical efficacy of CEM and CE-MRI. The study quality was assessed using the QUADAS-2 criteria. The pooled sensitivities and specificity of CEM and CE-MRI were computed using a random-effects model directly from the STATA "metaprop" command. The between-study statistical heterogeneity was tested (I² statistics). Results: Nineteen studies were selected for this systematic review. Fifteen studies (1315 patients) were included in the meta-analysis. Both CEM and CE-MRI detect breast lesions with a high sensitivity, without a significant difference in performance (97% and 96%, respectively). Conclusions: Our findings confirm the potential of CEM as a supplemental screening imaging modality, even for intermediate-risk women, including females with dense breasts and a history of breast cancer. abstract_id: PUBMED:29710688 Application of contrast-enhanced ultrasound in the diagnosis of small breast lesions. Background: Breast cancer is the most common cancer in women worldwide. The purpose of the study was to observe the features of contrast-enhanced ultrasound (CEUS) and the combination with Breast Imaging-Reporting and Data System (BI-RADS) of conventional ultrasound for assessing small breast lesions. Objectives: The study was to explore the small breast lesions' features of contrast-enhanced ultrasound (CEUS) and the combination with Breast Imaging-Reporting and Data System (BI-RADS) of conventional ultrasound, in order to improve the diagnostic accuracy of early breast cancer. Methods: 105 lesions were subject to conventional US (ultrasound) and CEUS before operations or biopsies. Among 105 breast lesions, six patient diagnoses were established by thick core-needle biopsy, while the rest were all confirmed by surgery and pathology. Results: Significant differences were found between benign and malignant lesions in qualitative and quantitative indexes (peak) of CEUS (P < 0.05). The qualitative features of malignant small breast lesions were as follows: (1) enhanced intensity within the lesion was not uniform (61/61, 100%); (2) the speed of wash-in was earlier than the surrounding tissue (58/61, 95.1%); (3) lesion interior and the surrounding tissues had contrast vessel performance (61/61, 100%). Peak of malignant lesions (35.77±11.45) was higher than that of benign lesions (31.96±10.76) (P < 0.05). The diagnostic performance of BI-RADS-US plus qualitative indexes (method one) in terms of area under receiver operating characteristic curve (AUROC) were the highest (i.e., AUROC = 0.817), in comparison with other combined diagnostic methods. The associated sensitivity, specificity and accuracy were 78.69%, 84.09% and 80.95%, respectively.
Although method one had specificity similar to that of US-BI-RADS, 11 malignant breast lesions originally identified as benign on US-BI-RADS were assigned a higher BI-RADS classification by method one and classified into the malignant group. Conclusions: CEUS was useful to differentiate benign from malignant small breast lesions, and the combination of CEUS and BI-RADS-US can improve the early diagnosis of breast cancers. abstract_id: PUBMED:37493921 Contrast-enhanced ultrasound-guided biopsy of suspicious breast lesions on contrast-enhanced mammography and contrast-enhanced MRI: a case series. Purpose: To assess the effectiveness of contrast-enhanced ultrasound (CEUS) in guiding biopsies of breast lesions that were detected on contrast-enhanced mammography (CEM) or contrast-enhanced breast MRI (CE-MRI) but were not clearly visible on B-mode ultrasound (B-US). Methods: In this study, 23 lesions in 16 patients were selected for CEUS-guided biopsy due to poor visualization on B-US despite being detected on CEM (n = 20) or CE-MRI (n = 3). B-US, color Doppler ultrasound (CDUS), and CEUS were used to visualize the suspicious lesions, followed by a CEUS-guided core needle biopsy using Sonazoid as the contrast agent. The accuracy of the biopsy was assessed based on pathology-radiology concordance and 12-month imaging follow-up. The conspicuity scores for lesion visualization were evaluated using a 5-point conspicuity scale agreed upon by two breast radiologists. Results: The enhancing lesions detected on CEM/CE-MRI had an average size of 1.6 ± 1.3 cm and appeared as mass-enhancing (61%) or non-mass-enhancing (39%). The lesions had mean conspicuity scores of 2.30 on B-US, 2.78 on CDUS, and 4.61 on CEUS, with 96% of the lesions showing contrast enhancement on CEUS. CEUS-guided biopsy showed increased visibility in 96% and 91% of the lesions compared to B-US and CDUS, respectively. The overall accuracy of CEUS-guided biopsy was 100% based on concordance with histology and 12-month follow-up. Conclusions: CEUS enhances the visibility of suspicious CEM/CE-MRI lesions that are poorly visible on B-US during biopsy procedures. Answer: Yes, a scoring system for breast contrast-enhanced ultrasound (CEUS) is feasible. A preliminary study in China aimed to identify significant indicators for breast CEUS and establish an initial scoring system. The study included 839 patients and was divided into two parts. In the first part, logistic regression analysis was performed to predict significant variables, and a 5-point scoring system was developed based on the results. Three independent variables, namely lesion scope, margin, and shape, were selected in the final step of the logistic regression analysis. In the second part, the scoring system was used to evaluate all the breast lesions. The area under the ROC curve for the contrast-enhanced scoring system was 0.912, indicating high diagnostic efficacy. The difference in diagnostic capabilities between the contrast-enhanced scoring system and elastography was not statistically significant, while the difference between the contrast-enhanced scoring system and BI-RADS was statistically significant, suggesting that the contrast-enhanced patterns of benign and malignant breast tumors are different and that the application of a 5-point scoring system for CEUS is clinically promising (PUBMED:25133534).
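Both PUBMED:27358688 and PUBMED:25133534 follow the same recipe: code each CEUS enhancement feature as present or absent, fit a logistic regression against the pathology result, and judge the model by the area under the ROC curve (0.911 and 0.912, respectively). A minimal sketch of that workflow is shown below on synthetic data; the feature set and the simple one-point-per-feature score are stand-ins, since neither abstract publishes its exact point assignments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic lesions: three binary CEUS features (stand-ins for the lesion scope,
# margin and shape variables retained in PUBMED:25133534) and the pathology label.
n = 200
X = rng.integers(0, 2, size=(n, 3))             # 1 = suspicious appearance of that feature
logit = -2.0 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 1.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = malignant on pathology

# Logistic regression model and its ROC AUC (the abstracts report 0.911 / 0.912)
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Crude point score: one point per suspicious feature, thresholded at 2 of 3
score = X.sum(axis=1)
predicted_malignant = score >= 2
sensitivity = (predicted_malignant & (y == 1)).sum() / (y == 1).sum()
specificity = (~predicted_malignant & (y == 0)).sum() / (y == 0).sum()
print(f"model AUC={auc:.3f}  score sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```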
Instruction: Does residents' involvement in mastectomy cases increase operative cost? Abstracts: abstract_id: PUBMED:22964271 Does residents' involvement in mastectomy cases increase operative cost? If so, who should bear the cost? Introduction: It is a widely held belief that surgical residents' involvement in operative cases increases operative time and costs. However, there is little published data regarding the veracity of this belief, and there is poor documentation of the actual costs associated with involving surgical residents in operative cases. This report represents a retrospective analysis of the additional cost associated with involving surgical residents of all training years in mastectomy cases over a 6-y period. Methods: A total of 1063 mastectomy cases (with and without reconstruction) were performed over a 6-y period (2004-2010) at the Saint Barnabas Medical Center, Livingston, NJ. Data relating to the type of mastectomy performed, operative times, training level of residents involved, and the surgical attendings' case volume were collected. Two major groups (attendings-only and attendings with residents) and 19 mastectomy subcategories were formed. Differences in operating room (OR) times between the attendings-only and attendings with residents groups were multiplied by the hospital OR charges per minute of time ($15/min after the initial 30 min) to determine the additional operative costs associated with residents' involvement. The impact of postgraduate training year and the attendings' case volume on operative time was analyzed separately. Statistical analysis was conducted using IBM SPSS 18.0.1 Windows version (PASW Statistics for Windows, SPSS Inc, Chicago). Results: Twenty-six surgeons and 97 residents were involved in 1063 mastectomy cases. Eight hundred ninety-five cases involved residents, of which 405 cases had reconstruction and 490 cases did not. Among the 168 attending-only cases, 107 had reconstruction and 61 did not. The mean OR time was increased in nearly all cases involving surgical residents but statistically significant in only nine of the 19 mastectomy subcategories. Postgraduate year (PGY) 1 or PGY2 residents increased OR time for unilateral mastectomy, whereas the involvement of PGY1 through PGY3 residents increased OR time for bilateral mastectomy+reconstruction (P<0.05). No significant difference was observed when PGY4 and PGY5 were involved in mastectomy cases. The additional calculated time required to perform mastectomies involving surgical residents was 41,366 min, which corresponded to an increase in OR costs of $620,340 over 6 y. Conclusion: The involvement of surgical residents in mastectomy cases significantly increases operative times and cost. This difference is most apparent when junior-level residents are involved and disappears by the PGY4-PGY5. Increased operative costs and the lost opportunity costs for surgical attendings are significant issues associated with residency education. Centers for Medicare and Medicaid Services and Accreditation Council on Graduate Medical Education should consider such data when establishing appropriate reimbursements for graduate training programs. abstract_id: PUBMED:37035610 Quantifying the Opportunity Cost of Resident Involvement in Academic Orthopedic Shoulder Arthroplasty: A Matched-Pair Analysis. Background: There is minimal work defining the economic impact of resident participation in shoulder arthroplasty.
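The cost arithmetic behind PUBMED:22964271 above is direct: the extra operating-room minutes attributable to resident involvement are multiplied by the hospital's OR charge of $15 per minute beyond the first 30 minutes of a case. The helper below is an illustration of that calculation rather than the authors' own accounting; with the reported 41,366 additional minutes it comes out near the roughly $620,000 figure quoted in the abstract.

```python
OR_RATE_PER_MIN = 15.0   # hospital OR charge per minute beyond the first 30 min (PUBMED:22964271)

def additional_case_cost(minutes_with_resident: float, minutes_attending_only: float,
                         rate: float = OR_RATE_PER_MIN) -> float:
    """Extra OR cost of one case attributed to resident involvement.

    Assumes both versions of the case already run past the 30-minute threshold,
    so the billed rate applies to the full difference in operative time.
    """
    extra_minutes = max(0.0, minutes_with_resident - minutes_attending_only)
    return extra_minutes * rate

# Aggregate view: the abstract reports 41,366 additional minutes over six years
print(41_366 * OR_RATE_PER_MIN)        # about $620k, in line with the reported $620,340
print(additional_case_cost(187, 162))  # hypothetical single case: 25 extra minutes -> $375.0
```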
Thus, this study quantified the opportunity cost of resident participation in total shoulder arthroplasty (TSA) and hemiarthroplasty (HA) by determining differences in operative time, relative value units (RVUs)/hour, and RVUs/case. Methods: Shoulder arthroplasty procedures were retrospectively identified from the ACS-NSQIP database from 2006 to 2014 using CPT codes. Demographic, comorbidity, preoperative laboratory data and surgical procedure were used to develop matched cohorts. Mean differences in operative time, RVUs/case and RVUs/hour between attending-only (AO) cases and cases with resident involvement (RI) were examined. Cost analysis was performed to identify differences in RVUs generated per hour in dollars/case. Results: A total of 1786 AO and 1102 RI cases were identified. With the exception of PGY-3 and PGY-4 cases, RI cases had lower mean operative times compared to AO cases. The cost of RI was highest for PGY-3 ($199.87 per case) and PGY-4 ($9.29) residents, with all other postgraduate years providing a cost reduction. Discussion: Involvement of residents was associated with shorter operative times leading to a savings of $29.64 per case. Involvement of intermediate-level (PGY-3) residents was associated with increased costs that ultimately decreased as residents became more senior. abstract_id: PUBMED:37251986 Opportunity Cost to Attending Surgeons of Intraoperative Training for Residents in Cataract Surgery. Purpose: To estimate the opportunity cost to attending surgeons of teaching residents cataract surgery in the operating room. Patients And Methods: Operating room records at an academic teaching hospital from July 2016 to July 2020 were analyzed in this retrospective review of cases. Cases were identified using Current Procedural Terminology (CPT) codes 66982 and 66984 for cataract surgery. Outcomes measured include operative time and work relative value units (wRVUs). Cost analysis was performed using the generic 2021 Medicare Conversion Factor. Results: Of 8813 cases, 2906 (33.0%) included resident involvement. For CPT 66982 cases, median (interquartile range (IQR)) operative time was 47 (22) minutes with resident involvement and 28 (18) minutes without (p<0.001). For CPT 66984 cases, median (IQR) operative time was 34 (15) minutes with resident involvement and 20 (11) minutes without (p<0.001). Median wRVUs was 78.5 (20.9) with resident involvement and 61.0 (14.4) without (p<0.001), which converted to an opportunity cost (IQR) per case of $1393.72 ($1055.63). Among cases involving residents, median operative time was significantly higher during the first and second quarters (p<0.001) and for every quarter when compared to cases performed by attendings only (p<0.001). Conclusion: Teaching cataract surgery in the operating room is associated with a considerable opportunity cost for attending surgeons. abstract_id: PUBMED:34384279 Operative Confidence Among U.S. General Surgery Residents. Background: General surgery residents (GSRs) must develop operative autonomy skills to practice independently after graduation. We aim to investigate perceived confidence and operative autonomy of GSR physicians in order to identify and address influential factors. Methods: A 28-question anonymous online survey was distributed to 23 United States general surgery residency programs. Multivariable logistic regression was used for calculating the adjusted odds ratio (aOR) for binary outcomes.
Significance was defined as P-values ≤ .05 or 95% confidence intervals (CIs) >1 or <1. Results: There were 120/558 (21.5%) GSR respondents. General surgery residents with >200 overall operative case volume reported significantly higher confidence with minor cases (P = .05) and major cases (P = .02). General surgery residents that performed both minor and major surgeries reported higher confidence with minor cases at 85.7% compared to GSRs that performed mostly minor surgeries (64.7%) and mostly major surgeries (62.5%). General surgery residents who performed >50 minor surgeries during their PGY 1 and 2 were less confident with major cases than GSRs who performed <50 minor surgeries (aOR: 19.98, 95% CI: 1.26, 318). General surgery residents from community teaching hospitals reported higher confidence with major and minor cases than GSRs from university teaching hospitals and combined programs. Conclusion: Increased case volume, predominant case type, early surgical experience during PGY 1 and 2 years, and training at community teaching hospitals were identified as the most important factors that positively influence perception of operative confidence and autonomy among GSRs. These may have important implications in the development of future surgeons. abstract_id: PUBMED:15078701 The "cost" of operative training for surgical residents. Hypothesis: There is an increase in the amount of time required to perform an operation when the procedure involves training a surgical resident. This increased time does not translate into a financial burden for the hospital. Design: Retrospective review of prospectively collected data. During the study period, surgeons and residents were blinded to the study's intent. We compared the operative times of academic surgeons performing 4 common surgical procedures before and after the introduction of a postgraduate year 3 resident into a community teaching hospital. Between January 1, 2001, and June 30, 2002, 4 academic surgeons performed operations without a resident in a community hospital that was recently integrated into a tertiary medical center system. During that period, surgeons operated alone (hernia surgery) or assisted one another (laparoscopic cholecystectomy, colectomy, and carotid endarterectomy). From July 1, 2002, through March 31, 2003, these same 4 surgeons were assisted by a postgraduate year 3 resident on similar procedures. Setting: Community hospital recently integrated into a tertiary medical center system. Participants: Four experienced academic surgeons operating in the community setting and patients undergoing 1 of 4 surgical procedures (inguinal hernia repair, laparoscopic cholecystectomy, partial colectomy, or carotid endarterectomy) from January 1, 2001, through March 31, 2003. Intervention: The introduction of a postgraduate year 3 surgical resident rotation into a community hospital in which the same academic surgeons had been performing operations without a resident for 18 months. Main Outcome Measures: Mean operating time with and without a postgraduate year 3 resident participating in 4 common surgical procedures. Results: For the 4 procedures studied, there was a significant increase in the operative time required to complete such procedures. Conclusions: There is an increased time cost associated with the operative training of surgical residents. This "cost" primarily impacts the attending surgeon.
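The two opportunity-cost studies above (PUBMED:37035610 and PUBMED:37251986) both express surgeon productivity as work RVUs generated per hour of operating time and convert the shortfall on longer teaching cases to dollars with a Medicare conversion factor. The sketch below shows one plausible way to set up that conversion; the cost model and every number in the example are assumptions for illustration, not values taken from either study's dataset.

```python
def rvu_per_hour(work_rvu: float, operative_minutes: float) -> float:
    """Work RVUs generated per hour of operating time."""
    return work_rvu / (operative_minutes / 60.0)

def opportunity_cost(work_rvu: float, minutes_teaching: float, minutes_solo: float,
                     conversion_factor: float) -> float:
    """Dollar value of the slower RVU generation on a longer teaching case.

    Modeled here as (solo RVU/hour - teaching RVU/hour) * hours of the teaching
    case * dollars per RVU; this framing is an assumption for illustration, not
    the exact cost model of either study.
    """
    rate_gap = rvu_per_hour(work_rvu, minutes_solo) - rvu_per_hour(work_rvu, minutes_teaching)
    return rate_gap * (minutes_teaching / 60.0) * conversion_factor

# Hypothetical cataract case: identical wRVU credit, 34 min with a resident vs. 20 min without,
# valued with an illustrative conversion factor of $34.89 per RVU
print(opportunity_cost(work_rvu=7.35, minutes_teaching=34, minutes_solo=20,
                       conversion_factor=34.89))   # roughly $180 for this single case
```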
abstract_id: PUBMED:28818349 Fifteen-Year Decrease in General Surgery Resident Breast Operative Experience: Are We Training Proficient Breast Surgeons? Objective: The goal of the study was to evaluate trends in general surgery resident breast cases over the past 15 years. Study Design: The Accreditation Council for Graduate Medical Education (ACGME) Case Logs Statistics Reports from 2000 to 2015 were reviewed for average breast-specific case numbers and trends over time. ACGME data were available for all cases and breast-specific cases including the following: excisional biopsy/lumpectomy, simple mastectomy, modified radical mastectomy, and sentinel lymph node excision. Setting: The study evaluation was conducted at Cleveland Clinic, Cleveland, Ohio. Participants: No individuals directly participated in this project. However, all general surgery residents at ACGME-accredited programs are represented in this analysis by virtue of the ACGME Case Logs Statistics Reports. Results: Total residency case volume increased by 2% (2000-2015, p = 0.0159), with 2015 graduates logging 985.5 cases. In contrast, breast cases decreased by 17.1%. The largest drops were in modified radical mastectomy (61.5% decrease, p = 0.0001) and excisional biopsy/lumpectomy (25.8% decrease). Simple mastectomy increased from 6.0 to 10.8 cases (p = 0.0001). Sentinel lymph node excision fluctuated, but has been down-trending recently (67.3% decrease from 2010 to 2015, p = 0.0001). Decreased experience is occurring at both junior and senior resident levels. Conclusions: Breast case operative experience for general surgery residents decreased by 17% between 2000 and 2015, despite increase in overall operative volume. Residents have less experience in more advanced cases including axillary management, raising concern about the proficiency of graduating surgeons with respect to these procedures. It is reasonable to set national minimums for resident breast operative experience to ensure that individuals are appropriately trained to perform these cases in practice. abstract_id: PUBMED:37891061 Timing of mastectomy and the effect on the likelihood of outpatient surgery and cost savings in breast cancer patients. Background: Same-day discharge after mastectomy has potential patient- and hospital-level benefits; however, few data are available regarding factors affecting the likelihood of same-day discharge in order to address barriers. We sought to evaluate factors contributing to same-day discharge, focusing on the timing of mastectomy during the operative day. Methods: We conducted a single-institution retrospective review of patients who underwent mastectomies for malignancy over a 3-y time frame. Clinicopathologic variables were collected along with a binary variable for mastectomy start time (morning versus afternoon). Our primary endpoint was rate of same-day discharge. A multivariable logistic regression model was constructed from significant univariate variables to determine independent predictors of same-day discharge. A secondary endpoint was a cost-utility analysis for morning versus afternoon start time, using hospital cost data. Results: There were 451 patients included in the analysis. Factors associated with same-day discharge rate included the American Society of Anesthesiologists score, use of a preoperative regional anesthesia block, type of mastectomy performed, individual surgeon variation, and a morning start for the mastectomy. 
On multivariable analysis, morning start was a strong independent predictor of same-day discharge (odds ratio = 2.83; 95% CI, 1.75-4.60). The cost-utility analysis favored a morning start, with average cost savings of $550 per patient. Conclusion: Despite patient- and surgeon-specific variations, simple scheduling policies can improve same-day discharge rates after mastectomy, leading to improved hospital bed use and cost reduction. abstract_id: PUBMED:29068764 In Pursuit of the Most Cost-Effective Pediatric Laparoscopic Appendectomy: The Effect of Disposable Instrument Choice on Operative Time and Surgeon-Controllable Cost. Background: There is a movement toward cost savings in healthcare worldwide. Surgeons can affect two main cost variables in an operation (controllable cost): disposables and time. Our hypothesis is that increasing disposable costs does not change outcome or operative time, but simply increases controllable cost. Methods: We retrospectively reviewed patients younger than the age of 18 years undergoing laparoscopic appendectomies for nonperforated appendicitis from January 2013 to November 2016. Data obtained included demographic information in addition to intraoperative details, including disposables used and associated cost, resident participation, operative time, and final pathology. Patients were excluded if perforation was present as confirmed by operative findings or pathology (Kansas City definition). Patients were also excluded if concurrent procedures were performed during the appendectomy. Results: We reviewed 918 patients and excluded 288 for a total of 690. Disposable cost, operative time, and complications were compared between cases with a resident present and those without. Residents did not increase the use of disposables, but did increase operative time and therefore the total controllable cost. Transumbilical laparoscopic-assisted technique was significantly faster with lower controllable cost when compared with all other methods. Using disposable trocars with an endostapler was the second fastest and second lowest controllable cost and retained a significant difference when compared with most other methods. Endoloop methods did not show overall controllable cost savings versus the vast majority of methods. Conclusions: To maximize controllable cost savings, we recommend a transumbilical laparoscopic-assisted appendectomy or a standard three-port laparoscopic appendectomy, with disposable trocars and the endostapler. abstract_id: PUBMED:27350104 Effect of Resident Involvement on Operative Time and Operating Room Staffing Costs. Objective: The operating room (OR) is a major driver of hospital costs; therefore, operative time is an expensive resource. The training of surgical residents must include time spent in the OR, but that experience comes with a cost to the surgeon and hospital. The objective of this article is to determine the effect of surgical resident involvement in the OR on operative time and subsequent hospital labor costs. Design: The Kruskal-Wallis statistical test is used to determine whether or not there is a difference in operative times between 2 groups of cases (with residents and without residents). This difference leads to an increased cost in associated hospital labor costs for the group with the longer operative time. Setting: Cases were performed at Greenville Memorial Hospital.
Greenville Memorial Hospital is part of the larger healthcare system, Greenville Health System, located in Greenville, SC and is a level 1 trauma center with up to 33 staffed ORs. Participants: A total of 84,997 cases were performed at the partnering hospital between January 1st, 2011 and July 31st, 2015. Cases were only chosen for analysis if there was only one CPT code associated with the case and there were more than 5 observations for each group being studied. This article presents a comprehensive retrospective analysis of 29,134 cases covering 246 procedures. Results: The analysis shows that 45 procedures took significantly longer with a resident present in the room. The average increase in operative time was 4.8 minutes, and the cost of extra operative time was determined to be $9.57 per minute. OR labor costs at the partnering hospital were found to be $2,257,433, or $492,889 per year. Conclusions: Knowing the effect on operative time and OR costs allows managers to make smart decisions when considering alternative educational and training techniques. In addition, knowing the connection between residents in the room and surgical duration could help provide better estimates of surgical time in the future and increase the predictability of procedure duration. abstract_id: PUBMED:6416244 Considerations of cost and effectiveness in the choice of mastectomy. Operable breast cancer may be managed equally well by a number of different regimens of mastectomy and/or radiotherapy. This paper examines the financial and morbidity costs, together with the node sampling efficacy, of three commonly employed techniques: simple mastectomy; total mastectomy with node biopsy or limited node excision; and modified radical mastectomy. Adjuvant radiotherapy (and/or chemotherapy) can only be applied rationally if node sampling is effective. Simple mastectomy cannot achieve this. Total mastectomy, providing that a node sample is obtained, is effective, and is probably the cheapest option. Total axillary clearance runs the risk of increased morbidity (and financial cost), while not necessarily guaranteeing more effective node sampling. Much of the cost and morbidity of mastectomy may be reduced by a limited but rational radiation policy. Answer: Yes, residents' involvement in mastectomy cases does increase operative cost. A retrospective analysis of 1063 mastectomy cases over a 6-year period at the Saint Barnabas Medical Center showed that the mean operating room (OR) time was increased in nearly all cases involving surgical residents, with statistically significant increases in nine of the 19 mastectomy subcategories. The additional time required for mastectomies involving surgical residents was 41,366 minutes, corresponding to an increase in OR costs of $620,340 over 6 years. This difference in cost was most apparent when junior-level residents were involved and disappeared by the PGY4-PGY5 level (PUBMED:22964271).
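As a back-of-the-envelope illustration of the arithmetic behind the per-case and annual figures quoted above, the short Python sketch below reworks only the numbers reported in these abstracts; the per-minute rate for the mastectomy series is back-calculated from its reported totals rather than stated in the source.

```python
# Illustrative arithmetic only, using the figures quoted in the abstracts above;
# this is not a reanalysis of either study's data.

def extra_cost_per_case(extra_minutes: float, cost_per_minute: float) -> float:
    """Labor cost attributable to the additional operative time of one case."""
    return extra_minutes * cost_per_minute

# Greenville series: 4.8 extra minutes per case at $9.57 per minute of OR labor.
print(round(extra_cost_per_case(4.8, 9.57), 2))   # ~45.94 dollars per case

# Mastectomy series: per-minute rate implied by the reported totals
# (41,366 extra minutes and $620,340 of extra OR cost).
implied_rate = 620_340 / 41_366
print(round(implied_rate, 2))                      # ~15.0 dollars per minute
```

Either way, the totals scale linearly with whichever OR labor rate a hospital assumes, which is why the per-case extra minutes are the figure to watch.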
Instruction: Does small for gestational age worsen outcomes in gestational diabetics? Abstracts: abstract_id: PUBMED:27269646 Does small for gestational age worsen outcomes in gestational diabetics? Objective: Our goal was to determine whether pregnancy outcomes are worse in gestational diabetics with small for gestational age (SGA) than those without. Methods: This was a retrospective cohort study of 114,199 pregnancies with gestational diabetes mellitus (GDM) in California, 6446 of which were complicated by SGA. SGA was defined as birth weight < 10th percentile. Predictors included the presence or absence of SGA. Outcomes included respiratory distress syndrome (RDS), neonatal demise (NND), intrauterine fetal demise (IUFD), hypoglycemia and jaundice. The data were also stratified by gestational age, looking at 32+0 to 36+6 weeks and 37+0 to 41+6 weeks. Univariate and multivariate analyses were performed. Results: In the term 37+0 to 41+6 week GDM cohort the risk of RDS increased from 0.4% to 1.3%, the risk of neonatal demise from 0.02% to 0.09%, the risk of IUFD from 0.1% to 0.4%, the risk of hypoglycemia from 0.4% to 1.0% and the risk of jaundice from 18.0% to 23.3% (p < 0.001 for all). Similar relationships were seen in the 32+0 to 36+6 week group. The findings remained significant in multivariate analyses. Conclusions: The presence of SGA in a patient with gestational diabetes is associated with significantly increased risks of adverse outcomes compared to gestational diabetics without SGA, including increased risks of RDS, neonatal demise, IUFD, hypoglycemia and jaundice. abstract_id: PUBMED:32061817 Adverse maternal and neonatal outcomes in pregnant women with abnormal glucose metabolism. Aims: To assess the prevalence and risk of adverse perinatal outcomes in pregnant women with abnormal glucose metabolism. Methods: 3269 Chinese pregnant women with singleton delivery were studied, including 787 diagnosed with gestational diabetes mellitus (GDM), 115 with pregnancy with diabetes (PWD), and 2367 with normal glucose tolerance (NGT). The prevalence and risk of adverse maternal and fetal outcomes were compared and assessed among the three groups, and the glucose metabolism-related risk factors for adverse pregnancy outcomes were evaluated by binary logistic regression. Results: Compared to NGT, maternal GDM and PWD faced increased risk of adverse perinatal outcomes such as pregnancy-induced hypertension (odds ratio (OR) 1.78 [95% confidence interval (CI): 1.17-2.72]; 4.31 [95% CI: 2.32-7.98]) and low birth weight (OR 1.51 [95% CI: 1.01-2.28]; 4.05 [95% CI: 2.17-7.55]). The PWD group exhibited remarkably higher risk for preterm delivery (OR 2.88 [95% CI: 1.68-4.94]) and stillbirth (OR 7.78 [95% CI: 2.44-24.84]) than the other two groups. Increased fasting insulin and glycated hemoglobin A1c were successively independent risk factors for maternal and neonatal adverse outcomes. Conclusions: Gestational abnormal glucose metabolism is associated with a remarkably increased risk of adverse perinatal outcomes, and PWD carries a higher risk of adverse perinatal outcomes than GDM. abstract_id: PUBMED:36369083 What is an appropriate gestational weight gain for women with gestational diabetes mellitus: based on the adverse pregnancy outcomes of over 12 thousand participants? Background: Evidence showed possible benefits of a lower gestational weight gain (GWG) than the US Institute of Medicine (IOM) recommendation in gestational diabetes mellitus (GDM) pregnancy.
Here, we aimed to explore an appropriate GWG range in GDM women according to adverse pregnancy outcomes. Methods: We enrolled all the singleton GDM pregnant women (n = 14,213) from January 2015 to December 2018 in Xi'an, Northwest China. According to the pre-pregnancy body mass index (BMI), they were classified into the Underweight (< 18.5 kg/m2), Normal weight (18.5-24.9 kg/m2), Overweight (25.0-29.9 kg/m2) and Obesity (≥ 30.0 kg/m2) groups, respectively. Logistic regression analysis was used to calculate the odds ratio (OR) and 95% confidence intervals (95% CI). The appropriate ranges of GWG were determined based on a significant protective association (OR < 1). Results: In total, 12,712 participants were finally recruited. There were 1180 (9.3%), 9134 (71.9%), 2097 (16.5%), and 301 (2.4%) patients in the Underweight, Normal weight, Overweight, and Obesity groups, respectively. Adverse outcomes increased with the elevation of pre-pregnancy BMI. Among them, the risk of cesarean section was the highest, followed by large for gestational age (LGA), small for gestational age (SGA), preeclampsia, and gestational hypertension. Through the analysis of the risk of adverse outcomes in continuous GWG categories in each group, the ideal GWG ranges obtained in this study were as follows: 10-15.9 kg, 8-11.9 kg, 6-7.9 kg, and -5 to 3.9 kg for the Underweight, Normal weight, Overweight and Obesity groups, respectively. Furthermore, the ranges in this study were more protective for adverse outcomes than those from the IOM. Conclusions: Based on the adverse pregnancy outcomes of over 12 thousand participants, our findings showed a more stringent GWG range for GDM women than the IOM criteria recommendation. abstract_id: PUBMED:36017972 The effect of maternal gestational diabetes on maternal and neonatal outcomes in twin pregnancies: a systematic review and meta-analysis. Diabetes and twin pregnancies are known risk factors for poor perinatal and neonatal outcomes. However, the effects of these two entities occurring together are still unclear. PubMed, Scopus and Google Scholar databases were searched in a systematic manner to identify observational studies among twin pregnancies which documented the association of gestational diabetes with maternal and neonatal outcomes. All analyses were done using STATA software. The meta-analysis included 21 studies, of which the majority were based on retrospective data. Mothers with gestational diabetes had higher risks of hypertensive disorder in pregnancy, caesarean section, large for gestational age baby, NICU admission and neonatal hypoglycaemia compared to mothers without gestational diabetes. Diabetic mothers were at reduced risk of small for gestational age baby and low APGAR score. No statistically significant differences in the risk of low birth weight, mean birth weight, prematurity and neonatal death were noted. This meta-analysis observed increased risks of detrimental maternal, neonatal and perinatal outcomes in twin pregnancies complicated by gestational diabetes, underscoring the need for the early detection and management of gestational diabetes. abstract_id: PUBMED:36694793 Maternal and Perinatal Outcomes During the COVID-19 Epidemic in Pregnancies Complicated by Gestational Diabetes. Introduction: Gestational diabetes (GDM) is one of the most common complications in pregnancy, with a prevalence that continues to rise. At the time of the COVID-19 epidemic, immediate reorganisation and adjustment of the system was needed.
Telemedicine support was offered in order to provide high-quality treatment to pregnant women. However, the success of the treatment is unknown. We therefore aimed to evaluate COVID-19 epidemic effects on pregnancy outcomes in GDM. Methods: The maternal outcomes (insulin treatment, gestational weight gain, caesarean section, hypertensive disorders) and perinatal outcomes (rates of large and small for gestational age, preterm birth and a composite child outcome) of women visiting a university hospital diabetes clinic from March to December 2020 were compared with those treated in the same period in 2019. Results: Women diagnosed with GDM during the COVID-19 epidemic (n=417) were diagnosed earlier (23.9 [11.7-26.0] vs. 25.1 [21.8-26.7] gestational week), had higher fasting glucose (5.2 [5.0-5.4] vs. 5.1 [4.8-5.3] mmol/l) and earlier pharmacological therapy initiation, and had achieved lower HbA1c by the end of follow-up (5.1% (32.2 mmol/mol) [4.9% (30.1 mmol/mol)-5.4% (35.0 mmol/mol)] vs. 5.2% (33.3 mmol/mol) [5.0% (31.1 mmol/mol)-5.4% (35.5 mmol/mol)], p<0.001) compared to a year before (n=430). No significant differences in perinatal outcomes were found. Conclusions: Although GDM was diagnosed at an earlier gestational age and higher fasting glucose concentration was present at the time of diagnosis, the COVID-19 epidemic did not result in worse glucose control during pregnancy or worse pregnancy outcomes in Slovenia. abstract_id: PUBMED:37393013 Glycemic control and neonatal outcomes in twin pregnancies with gestational diabetes mellitus. Background: Preliminary data suggest that strict glycemic control in twin pregnancies with gestational diabetes mellitus may not improve outcomes but might increase the risk of fetal growth restriction. Objective: This study aimed to investigate the association of maternal glycemic control with the risk of gestational diabetes mellitus-related complications and small for gestational age in twin pregnancies complicated by gestational diabetes mellitus. Study Design: This was a retrospective cohort study of all patients with a twin pregnancy complicated by gestational diabetes mellitus in a single tertiary center between 2011 and 2020, and a matched control group of patients with a twin pregnancy without gestational diabetes mellitus in a 1:3 ratio. The exposure was the level of glycemic control, described as the proportion of fasting, postprandial, and overall glucose values within target. Good glycemic control was defined as a proportion of values within target above the 50th percentile. The first coprimary outcome was a composite variable of neonatal morbidity, defined as at least 1 of the following: birthweight >90th centile for gestational age, hypoglycemia requiring treatment, jaundice requiring phototherapy, birth trauma, or admission to the neonatal intensive care unit at term. A second coprimary outcome was small for gestational age, defined as birthweight <10th centile or <3rd centile for gestational age. Associations between the level of glycemic control and the study outcomes were estimated using logistic regression analysis and were expressed as adjusted odds ratio with 95% confidence interval. Results: A total of 105 patients with gestational diabetes mellitus in a twin pregnancy met the study criteria. The overall rate of the primary outcome was 32.4% (34/105), and the overall proportion of pregnancies with a small for gestational age newborn at birth was 43.8% (46/105).
Good glycemic control was not associated with a reduction in the risk of composite neonatal morbidity when compared with suboptimal glycemic control (32.1% vs 32.7%; adjusted odds ratio, 2.06 [95% confidence interval, 0.77-5.49]). However, good glycemic control was associated with higher odds of small for gestational age compared with nongestational diabetes mellitus pregnancies, especially in the subgroup of diet-treated gestational diabetes mellitus (65.5% vs 34.0%, respectively; adjusted odds ratio, 4.17 [95% confidence interval, 1.74-10.01] for small for gestational age <10th centile; and 24.1% vs 7.0%, respectively; adjusted odds ratio, 3.97 [95% confidence interval, 1.42-11.10] for small for gestational age <3rd centile). In contrast, the rate of small for gestational age in gestational diabetes mellitus pregnancies with suboptimal control was not considerably different when compared with non-gestational diabetes mellitus pregnancies. In addition, in cases of diet-treated gestational diabetes mellitus, good glycemic control was associated with a left-shift of the distribution of birthweight centiles, whereas the distribution of birthweight centiles among gestational diabetes mellitus pregnancies with suboptimal control was similar to that of nongestational diabetes mellitus pregnancies. Conclusion: In patients with gestational diabetes mellitus in a twin pregnancy, good glycemic control is not associated with a reduction in the risk of gestational diabetes mellitus-related complications but may increase the risk of a small for gestational age newborn in the subgroup of patients with mild (diet-treated) gestational diabetes mellitus. These findings further question whether the gestational diabetes mellitus glycemic targets used in singleton pregnancies also apply to twin pregnancies and support the concern that applying the same diagnostic criteria and glycemic targets in twin pregnancies may result in overdiagnosis and overtreatment of gestational diabetes mellitus and potential neonatal harm. abstract_id: PUBMED:27371343 Neonatal outcomes according to different therapies for gestational diabetes mellitus. Objectives: To compare different neonatal outcomes according to the different types of treatments used in the management of gestational diabetes mellitus. Methods: This was a retrospective cohort study. The study population comprised pregnant women with gestational diabetes treated at a public maternity hospital from July 2010 to August 2014. The study included women aged at least 18 years, with a singleton pregnancy, who met the criteria for gestational diabetes mellitus. Blood glucose levels, fetal abdominal circumference, body mass index and gestational age were considered for treatment decision-making. The evaluated neonatal outcomes were: type of delivery, prematurity, weight in relation to gestational age, Apgar at 1 and 5 min, and need for intensive care unit admission. Results: The sample consisted of 705 pregnant women. The neonatal outcomes were analyzed based on the treatment received. Women treated with metformin were less likely to have children who were small for gestational age (95% CI: 0.09-0.66) and more likely to have a newborn adequate for gestational age (95% CI: 1.12-3.94). Those women treated with insulin had a lower chance of having a preterm child (95% CI: 0.02-0.78).
The combined treatment with insulin and metformin resulted in a higher chance for a neonate to be born large for gestational age (95% CI: 1.14-11.15) and a lower chance to be born preterm (95% CI: 0.01-0.71). The type of treatment did not affect the mode of delivery, Apgar score, or intensive care unit admission. Conclusions: The pediatrician in the delivery room can expect different outcomes for diabetic mothers based on the treatment received. abstract_id: PUBMED:37742804 Different subtypes of gestational diabetes mellitus are associated with distinct perinatal outcomes in twin pregnancies. Aims: To determine whether different gestational diabetes mellitus (GDM) subtypes are associated with distinct perinatal outcomes in twin pregnancies. Materials: This retrospective cohort study enrolled women with twin pregnancies who gave birth at a tertiary hospital between January 2017 and December 2022. GDM was diagnosed by the IADPSG diagnostic criteria. Three subtypes of GDM were defined as only abnormal fasting glucose (OAFG) values, only abnormal post-load glucose (OAPG) values and abnormal combined fasting and post-load glucose (ACFPG) values. Logistic regression or generalized estimation equation models were used to test the correlation between subtypes of GDM and perinatal outcomes. Results: GDM with OAPG had a slightly higher risk for preterm delivery (PTD) at <37 gestational weeks (aOR 1.22, 95% CI 1.01-1.47) and neonatal intensive care unit (NICU) admission (aOR 1.31, 95% CI 1.09-1.57). GDM with ACFPG was associated with PTD at <37 gestational weeks (aOR 1.42, 95% CI 1.06-1.89) and PTD at <34 gestational weeks (aOR 1.65, 95% CI 1.14-2.39). GDM with OAFG had a lower risk of being small-for-gestational-age (SGA) (aOR 0.48, 95% CI 0.26-0.92). Conclusions: Different subtypes of GDM are associated with distinct perinatal outcomes. Only abnormal fasting glucose levels may be responsible for the reduced risk of SGA neonates. abstract_id: PUBMED:30221320 The risk stratification of adverse neonatal outcomes in women with gestational diabetes (STRONG) study. Aims: To assess the risk of adverse neonatal outcomes in women with gestational diabetes (GDM) by identifying subgroups of women at higher risk to recognize the characteristics most associated with an excess of risk. Methods: Observational, retrospective, multicenter study involving consecutive women with GDM. To identify distinct and homogeneous subgroups of women at a higher risk, the RECursive Partitioning and AMalgamation (RECPAM) method was used. Overall, 2736 pregnancies complicated by GDM were analyzed. The main outcome measure was the occurrence of adverse neonatal outcomes in pregnancies complicated by GDM. Results: Among study participants (median age 36.8 years, pre-gestational BMI 24.8 kg/m2), six miscarriages and one neonatal death, but no maternal deaths, were recorded. The occurrence of the cumulative adverse outcome (OR 2.48, 95% CI 1.59-3.87), large for gestational age (OR 3.99, 95% CI 2.40-6.63), fetal malformation (OR 2.66, 95% CI 1.00-7.18), and respiratory distress (OR 4.33, 95% CI 1.33-14.12) was associated with previous macrosomia. Large for gestational age was also associated with obesity (OR 1.46, 95% CI 1.00-2.15). Small for gestational age was associated with first trimester glucose levels (OR 1.96, 95% CI 1.04-3.69). Neonatal hypoglycemia was associated with overweight (OR 1.52, 95% CI 1.02-2.27) and obesity (OR 1.62, 95% CI 1.04-2.51).
The RECPAM analysis identified high-risk subgroups mainly characterized by high pre-pregnancy BMI (OR 1.68, 95% CI 1.21-2.33 for obese; OR 1.38 95% CI 1.03-1.87 for overweight). Conclusions: A deep investigation on the factors associated with adverse neonatal outcomes requires a risk stratification. In particular, great attention must be paid to the prevention and treatment of obesity. abstract_id: PUBMED:37548502 Relationship between gestational weight gain during different phases and maternal complications or neonatal outcomes. Objectives: Improper gestational weight gain (GWG) causes many adverse obstetrical and neonatal outcomes. This study evaluates the relationship between weight gain in different phases and maternal outcomes or neonatal outcomes. Material And Methods: Finally, this study recruited 2,608 women delivered at Fujian Provincial Maternity and Child Health, affiliated hospital of Fujian Medical University from December 2017 to January 2019. To evaluate the relationship between maternal outcome and neonatal outcome, the participants were divided into four groups based on their baseline BMI and weight gain in the second/third trimester of pregnancy. Results: This study demonstrated that neonate weight, small-for-gestational-age infants, macrosomia, neonatal death, cesarean delivery, and GDM significantly differed across the baseline BMI, weight gain in the second and third trimester. The umbilical cord's abnormality, bulging membrane, abruptio placentae, and postpartum hemorrhage were significantly related to baseline BMI. Furthermore, gestational hypertension and pre-eclampsia/eclampsia were significantly correlated with baseline BMI and weight gain in the second trimester. The maternal and infant outcomes are different, and the GWG curves are significantly different. Finally, multivariate regression analysis showed that baseline BMI and weight gain in the second/third trimester were the independent risk factors for GDM and macrosomia. Also, baseline BMI and weight gain in the third trimester were the independent risk factors for developing gestational hypertension and pre-eclampsia/eclampsia, respectively. Conclusions: The baseline BMI and weight gain in the second/third trimester are significant with maternal outcomes and neonatal outcomes to a varying degree. Thus, maintaining appropriate baseline BMI and weight gain in different phases are essential in preventing pregnancy complications and maternal and neonatal prognosis. Answer: Yes, small for gestational age (SGA) does worsen outcomes in gestational diabetics. A retrospective cohort study of 114,199 pregnancies with gestational diabetes mellitus (GDM) in California found that the presence of SGA in a patient with GDM is associated with significantly increased risks of adverse outcomes compared to gestational diabetics without SGA. These increased risks include respiratory distress syndrome (RDS), neonatal demise (NND), intrauterine fetal demise (IUFD), hypoglycemia, and jaundice (PUBMED:27269646). Additionally, another study showed that different subtypes of GDM are associated with distinct perinatal outcomes in twin pregnancies. Specifically, GDM with only abnormal fasting glucose (OAFG) had a lower risk of SGA neonates (PUBMED:37742804). This suggests that the subtype of GDM may also play a role in the risk of SGA and subsequent outcomes. 
Furthermore, a study on twin pregnancies with GDM indicated that strict glycemic control did not reduce the risk of GDM-related complications but might increase the risk of fetal growth restriction, leading to SGA (PUBMED:37393013). This highlights the complexity of managing GDM and the potential impact on fetal growth. In summary, SGA is a significant factor that worsens outcomes in gestational diabetics, and careful management of GDM is required to mitigate these risks.
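As a quick illustration of the statistics quoted above, the unadjusted odds ratio behind a risk rising from 0.4% to 1.3% can be recovered from the proportions alone; the sketch below shows only that arithmetic and does not reproduce the study's adjusted, multivariate estimates.

```python
# Sketch of the unadjusted odds-ratio arithmetic implied by the proportions
# quoted above (term GDM cohort: RDS risk 0.4% without SGA vs. 1.3% with SGA).
# The study's adjusted, multivariate estimates are not reproduced here.

def odds(p: float) -> float:
    """Convert a risk (proportion) into odds."""
    return p / (1.0 - p)

def unadjusted_odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    return odds(p_exposed) / odds(p_unexposed)

print(round(unadjusted_odds_ratio(0.013, 0.004), 2))  # ~3.28: roughly tripled odds of RDS
```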
Instruction: Does individualism help explain differences in employers' stigmatizing attitudes toward disability across Chinese and American cities? Abstracts: abstract_id: PUBMED:21171794 Does individualism help explain differences in employers' stigmatizing attitudes toward disability across Chinese and American cities? Purpose: Stigmatizing attitudes toward people with disabilities can jeopardize such individuals' well-being and recovery through denial of employment and community isolation. By shaping social norms that define group membership, the construct of individualism may partially explain differences in stigmatizing attitudes across cultures. Further, widespread globalization has brought intensely individualistic social practices to certain segments of non-Western cultures. This paper examines whether the construct of individualism can help to explain cross-cultural differences in stigmatizing attitudes observed between American and Chinese employers. Design: Employers (N = 879) from Beijing, Hong Kong, and Chicago provided information on their attitudes toward hiring people with disabilities, and path analyses were conducted to examine potential mediating relationships. Results: Path analyses indicated that vertical individualism, along with perceived responsibility for acquiring a condition, partially mediated the relationship between culture and employers' negative attitudes about job candidates with disabilities. Conclusion: These results suggested that greater espousal of competitive and individualist values may drive stigmatizing attitudes across cultures. abstract_id: PUBMED:34777076 The Role of Collectivism-Individualism in Attitudes Toward Compliance and Psychological Responses During the COVID-19 Pandemic. This study examined the role of individual differences in horizontal and vertical individualism and collectivism, trust and worries, and concerns about COVID-19 in predicting the attitudes toward compliance of health advice and psychological responses during the COVID-19 pandemic. Chinese university students (N=384, 324 female) completed measures of individualism and collectivism, trust, attitudes toward compliance, and psychological responses to the pandemic. Results showed that not only vertical collectivist orientation but also horizontal individualist orientation significantly predicted higher willingness to comply, whereas vertical individualist orientation significantly predicted lower willingness to comply. Vertical individualist and vertical collectivist orientations predicted higher psychological response in terms of distress, anxiety, and depression, while horizontal collectivistic orientation significantly predicted less psychological problems. Implications of the effect of individual-level cultural orientations on attitudes toward public health compliance and psychological well-being during global health crises are discussed. abstract_id: PUBMED:34276121 Does Trust Always Help Gender Role Attitudes? The Role of Individualism and Collectivism. Social trust has a complex interrelationship with attitudes toward gender equality. Social trust has its origins in exchange relationships in preindustrial societies, lowering uncertainty in transactions and easing interpersonal exchanges. The degree to which this trust was extended to opportunities for women in commercial and societal roles, however, differed across cultures. 
Prior literature finds attitudes toward individualism and collectivism have significant implications for gender equality and patriarchal attitudes. We combine these ideas, arguing that the degree to which social trust fosters gender equality depends upon the degrees of individualism and collectivism. Employing World Values Survey data across countries over time, we find that with low levels of individualism and high degrees of collectivism, higher levels of trust are not effective in reducing the prevalence of gendered patriarchal attitudes; in fact, trust entrenches them further, worsening gender equality. However, as individualism rises and collectivism falls, higher levels of trust become effective in reducing the prevalence of patriarchal beliefs. Thus, collectivistic beliefs stand as a barrier to future improvements in women's equality and economic rights in many societies, preventing social trust from being extended beyond traditional gendered roles. abstract_id: PUBMED:25852049 Employers' attitudes on hiring workers with intellectual disabilities in small and medium enterprises: an Italian research. Employers play a significant role in the process of hiring workers with intellectual disability. Through in-depth interviews, this research aims to investigate the attitudes of 30 representatives of small and medium-sized Italian companies involved in a process of recruitment. The data were analyzed using a phenomenological approach. The results show that attitudes toward the disabled employee are influenced by three areas, namely, personal characteristics of employers, selection process, and concerns and opinions of employers. abstract_id: PUBMED:38433746 Improving Medical Student Knowledge and Reducing Stigmatizing Attitudes Toward Treating Patients With Opioid Use Disorder. Objectives: Stigma and lack of knowledge are barriers to clinicians when caring for individuals with opioid use disorder (OUD). In 2018, only about 15 out of 180 American medical schools had comprehensive addiction programs. The AAMC reports that institutions are increasingly incorporating competencies to address the OUD and opioid epidemic. There have been few evaluated curricula focused on reducing stigmatizing attitudes. This study evaluated whether a 4-hour case-based curriculum focused on improving stigmatizing attitudes toward patients with OUD could reduce medical student perceptions around viewing addiction as a punitive condition and other substitution-based misconceptions around opioid agonist-based medication. Methods: Medical students completed a 4-hour curricular workshop which included learning objectives focusing on barriers to healthcare/stigmatizing attitudes, effective behavioral therapy options, and appropriate use of opioid medications. We measured changes in knowledge and attitudes using validated scales on stigma. Non-parametric repeated-measures tests determined statistically significant differences between pre- and post-assessments for OUD-related perceptions and a control condition (diabetes). Results: Of 135 eligible participants, 99 (76%) students completed both pre- and post-surveys. Mean scores across knowledge questions improved (60%-81%, P < .001) and stigmatizing attitudes regarding perceived violence of people with OUD decreased (2.04-1.82, P = .016).
There was significant improvement in mean scores for OUD-related opinions, including desire to work with and effectively treat patients with OUD (3.58-3.88, P < .001), while no significant concurrent change was observed in mean opinion scores for a non-OUD comparator, diabetes (3.88-3.97, P = .201). Conclusions: Results indicate that the workshop was associated with measurable changes in knowledge and attitudinal forms of OUD stigma. With recent policy changes eliminating the X-waiver, healthcare institutions are eager to design curricula around OUD management and treatment. This study provides a blueprint for an effective curriculum that improves clinician knowledge and reduces stigmatizing attitudes. abstract_id: PUBMED:30636469 Employers' attitudes toward hiring individuals with visual impairments. Purpose: This study examines: (a) the attitudes of 196 private sector employers toward hiring individuals with visual impairments and (b) the impact of the employers' individual characteristics (age, gender, and educational level), attitudes toward visual impairment, social contact with them, and the form of business entity on their attitudes toward hiring such individuals. Materials and methods: The research tool used is a questionnaire of 15 closed-type questions. Results: The results of this research verify that the examined variables are significant individual predictors of the employers' attitudes that are expressed in 7 out of 15 questions. Conclusions: The majority of the participants provided negative or neutral answers for most of the questions and exhibited the same attitudes with regard to the employment of individuals with visual impairments. Only two variables from the examined ones - "frequency of social contact" and "attitudes toward visual impairment" - appear to affect the employers' intentions to hire people with visual impairments. Implications for rehabilitation: The target should be more favorable employer attitudes toward individuals with visual impairments. Employers should be better informed about individuals with visual impairments and their skills and capabilities. Vocational rehabilitation professionals should be aware that employers' individual factors such as age, gender and educational level affect their attitudes toward the vocational integration of people with disabilities. Informative seminars can be based on the variables and factors shown to affect employers' attitudes. abstract_id: PUBMED:35681951 Stigmatizing Attitudes toward People Living with HIV among Young Women Migrant Workers in Vietnam. Despite intensive HIV education and prevention efforts in the past few years, stigmatizing attitudes toward people living with HIV (PLWH) remain a major barrier to HIV prevention and treatment efforts in Vietnam. The purpose of this study was to examine the prevalence of stigmatizing attitudes regarding HIV and identify correlative factors that impact the perceptions of PLWH among a heretofore overlooked demographic in Vietnamese society: women who are migrant workers in designated industrial zones (IZs). A cross-sectional study was conducted among 1061 women migrant workers aged 18 to 29 from January 2020 to November 2020 in Hanoi, Vietnam. Stigmatizing attitudes toward PLWH were measured using a four-item scale. Multiple logistic regression was conducted to examine the factors associated with stigmatizing attitudes.
Our findings indicate both substantial levels of stigma persisting among this demographic group as well as the influence of important mitigating factors on the expression of HIV-related stigma. Over seventy-six percent (76.2%) of the participants reported having at least one of the four stigmatizing attitudes. Greater levels of stigmatizing attitudes toward PLWH were significantly associated with lower HIV knowledge, lower levels of education, and identifying as Kinh (the ethnic majority in Vietnam). Additionally, this study found that questions framing HIV infection through a familial lens were significantly associated with lower rates of stigmatizing responses. The high overall levels of stigmatizing attitudes toward PLWH among the study participants suggests that there is an urgent need for the development of culturally appropriate interventions and outreach education activities to reduce stigmatizing attitudes toward PLWH among women who are migrant workers working in the IZs in Vietnam. This study adds to both the existing literature and current efforts and policies around HIV in Vietnam by empirically suggesting that familial-based messaging may be a powerful potential narrative for interventions addressing HIV-related issues such as stigma. abstract_id: PUBMED:36400148 Attitudes toward seeking professional psychological help among Chinese pregnant women: A cross-sectional study. Background: There exists a gap between the high prevalence of mental health problems and negative attitudes toward seeking professional psychological help among Chinese pregnant women. This study aimed to investigate the attitudes and factors toward seeking professional psychological help among Chinese pregnant women to provide some suggestions for improving pregnant women's psychological help-seeking behaviors. Methods: A total of 1024 pregnant women were recruited. The Chinese version of the Attitude Toward Seeking Professional Psychological Help Scale (ATSPPH) was used to assess the attitudes toward professional psychological help-seeking. Demographic and socio-psychological factors were collected by self-developed questionnaire, Perceived Social Support Scale, Simplified Coping Style Questionnaire, and Patient Health Questionnaire-9 Scale. Results: 1006 pregnant women were included in the final analysis. Only 3.7 % of participants chose professionals as their first choice for psychological help-seeking. The attitudes toward seeking professional psychological help were relatively positive, with the highest score of stigma tolerance among four subscales in ATSPPH. Education level, perceived social support, and positive coping style had a positive predictive effect on the four subscales of ATSPPH. Women with prenatal depressive symptoms had a lower score in the subscales of recognition of need for psychotherapeutic help and confidence in mental health practitioner; women with higher scores of negative coping style had a lower score in the subscales of stigma tolerance and interpersonal openness. History of professional psychological help-seeking behaviors was only associated with the subscale of recognition of need for psychotherapeutic help; Maternal age was only associated with the subscale of interpersonal openness. Conclusion: The attitudes toward seeking professional psychological help were relatively positive among Chinese pregnant women, and stigma tolerance was the significant barrier to professional psychological help-seeking. 
More attention should be paid to women of older age, women with a negative coping style, and women with prenatal depressive symptoms. abstract_id: PUBMED:31256568 Political Ideology and Stigmatizing Attitudes Toward Depression: The Swedish Case. Background: Stigmatizing attitudes toward persons with mental disorders are a well-established and global phenomenon often leading to discrimination and social exclusion. Although previous research in the United States showed that conservative ideology has been related to stigmatizing attitudes toward mental disorders, there is reason to believe that this mechanism plays a different role in the context of a universal welfare state with a multi-party system such as Sweden. Furthermore, "mental disorders" may signify severe psychotic disorders, which may evoke more negative attitudes. This suggests the importance of specific studies focusing on the more common phenomenon of depression. This paper investigates the relationship between political ideology and stigmatizing attitudes toward depression in Sweden. Methods: This study is part of the New Ways research program. Data were collected by the Laboratory of Opinion Research (LORE) at the University of Gothenburg in 2014 (N = 3246). Independent variables were political ideology and party affiliation. The dependent variable was the Depression Stigma Scale (DSS). Data were analyzed with linear regression analyses and analyses of variance. Results: More conservative ideology (B = 0.68, standard error [SE] = 0.04, P<.001) and more conservative party affiliation (F(8, 2920) = 38.45, P<.001) showed more stigmatizing attitudes toward depression. Item-level analyses revealed a difference where the supporters of the conservative party differed (P<.05) from supporters of the liberal party, with a higher proportion agreeing that "people could snap out of" depression if they wanted to; the populist right-wing party differed from the conservative party with a higher proportion agreeing on items displaying people with depression as "dangerous" and "unpredictable." Even self-stigma was highest among the populist right-wing party, with 22.3% agreeing that "if I had depression I wouldn't tell…." Conclusion: Political ideology was associated with stigmatizing attitudes toward depression in Sweden. The results also confirm the need to distinguish between different forms of conservatism by observing social distance as being a more important driver among voters for the populist right-wing party compared with personal agency and responsibility among voters for the more traditional conservative party. abstract_id: PUBMED:33948140 Stigmatizing attitudes toward people living with HIV/AIDS (PLWHA) among primary health care providers in Kinta District, Perak. Background: Stigmatizing attitudes expressed by health care providers prevent some members of at-risk populations from accessing human immunodeficiency virus (HIV) screening and care. These attitudes contribute to continued dissemination of the infection within the community, which has an impact on healthcare services and on the curtailment of the global HIV/acquired immunodeficiency syndrome (AIDS) pandemic. Objective: This study was conducted to identify stigmatizing attitudes toward people living with HIV/AIDS (PLWHA) and their determinants among primary health care providers in Kinta District, Perak. Methodology: A cross-sectional study was conducted in 36 primary care clinics in Kinta District, Perak.
Using stratified random sampling, 365 primary health care providers were recruited into the study. A validated self-administered questionnaire was used to obtain sociodemographic data as well as information on the healthcare experiences of healthcare providers, their knowledge of HIV/AIDS, and attitudes toward PLWHA. Determinants were identified using multiple linear regression. Results: More than half of the respondents (54.1%) had never provided care to HIV/AIDS patients. A minority (29.9%) had received training on HIV/AIDS. This study shows that doctors (Coef.= -9.50, 95% CI: -18.93, -0.07, p= 0.048), respondents with HIV-positive relatives, (Coef.= -5.61, 95% CI: -10.57, -0.65, p= 0.027), those who had provided care to HIV/AIDS patients (Coef.= -2.38, 95% CI: -4.31, -0.45, p= 0.016), and those with a higher knowledge score on HIV/AIDS (Coef.= -0.86, 95% CI: -1.59, -0.13, p= 0.021) were less likely to show stigmatizing attitudes toward PLWHA. Conclusion: The issue of stigmatizing attitudes toward PLWHA among primary health care providers needs to be addressed. This study finds that knowledge, profession, experiences with caring for PLWHA, gender, and having HIV-positive relatives are significant predictors of stigmatizing attitudes toward PLWHA among primary health care providers in Kinta District, Perak. Interventional programs to improve knowledge and awareness, as well as decrease stigma toward PLWHA, should be implemented among all health care providers, especially those who have no opportunity to provide direct care. Answer: Yes, individualism does help explain differences in employers' stigmatizing attitudes toward disability across Chinese and American cities. A study examining employers' attitudes toward hiring people with disabilities in Beijing, Hong Kong, and Chicago found that vertical individualism, along with perceived responsibility for acquiring a condition, partially mediated the relationship between culture and employers' negative attitudes about job candidates with disabilities. This suggests that a greater espousal of competitive and individualist values may drive stigmatizing attitudes across cultures (PUBMED:21171794).
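The central method in the cited employer study is a mediation (path) analysis in which culture acts on hiring attitudes partly through vertical individualism. The sketch below shows a generic product-of-coefficients mediation fitted with ordinary least squares on synthetic data; the variable coding and effect sizes are hypothetical placeholders, not the published model or its estimates.

```python
# Generic single-mediator path analysis (product of coefficients) on synthetic
# data, illustrating the kind of model described above. All coefficients and the
# 0/1 city coding are hypothetical; only the sample size (N = 879) is borrowed
# from the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 879
culture = rng.integers(0, 2, n).astype(float)                 # hypothetical city indicator
vertical_ind = 0.5 * culture + rng.normal(0.0, 1.0, n)        # mediator
attitude = 0.4 * vertical_ind + 0.2 * culture + rng.normal(0.0, 1.0, n)  # outcome

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(vertical_ind, culture)[1]                      # path a: culture -> mediator
_, b, c_direct = ols(attitude, vertical_ind, culture)  # path b and direct effect c'
c_total = ols(attitude, culture)[1]                    # total effect c
print(f"total={c_total:.3f}  direct={c_direct:.3f}  indirect={a * b:.3f}")
# Partial mediation shows up as a non-zero indirect effect (a*b) together with a
# direct effect c' that is smaller than the total effect c.
```

In practice such models are usually reported with bootstrapped confidence intervals for the indirect effect rather than a point estimate alone.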
Instruction: Is there a correlation between body proportion and choice of profession? Abstracts: abstract_id: PUBMED:25894636 Is there a correlation between body proportion and choice of profession? Objective: We made anthropometric measurements to compare body proportions of university students and to investigate the tendency of students with more developed bodies in choosing a profession for university education. Methods: A total of 204 male students [group 1: students of the School of Physical Education and Sports (SPESp) (n=88), group 2: students of the Faculty of Fine Arts (FFArts) (n=59), and group 3: students of the Faculty of Medicine (FMed) (n=57)] were included, and anthropometric measurements were performed. Results: "Height", "chest circumference", and "difference of inspiration and expiration of chest circumference" values of the SPESp students were higher than those of the others (FFArts and FMed). In the SPESp students, chest circumference and chest expansion capacity by inspiration developed more. This shows the positive impact of sport on respiratory functions. Additionally, the legs are longer in students engaged in regular sports. Conclusion: Exercise is important for all subjects in terms of body development, growth, and oxygenation and to decrease cardiovascular risk factors. For the development of ideal anthropometric body proportions, for obesity prevention, and for the growth of healthy generations, governments should give importance to sports and take incentive measures to increase children's and young people's interest in sport. abstract_id: PUBMED:37662069 The completion of trial apprenticeships is followed by the choice of an apprenticeship profession: Predicting the decision to pursue a trial apprenticeship profession in the career choice process. This article investigated the factors that lead adolescents to pursue a trial apprenticeship profession in their career choice process. Gottfredson's Theory of Circumscription, Compromise, and Self-Creation (TCCSC) served as the theoretical basis for the analyses. A longitudinal sample of 128 female and 202 male adolescents who completed standardized questionnaires in seventh and ninth grades and who indicated that they would begin vocational education and training after compulsory schooling was analyzed separately by gender. Regression models showed that the number of apprenticeship positions has the strongest effect for both female and male adolescents (significant positive effect). Only for female adolescents, the extent of enterprising requirements of the trial apprenticeship profession was shown to have a negative effect on the decision to pursue a trial apprenticeship profession, controlling for the number of apprenticeship positions. The results support the accessibility hypothesis postulated in the TCCSC, according to which individuals are more likely to pursue professions they perceive as more accessible, i.e., with a large number of apprenticeship positions. In light of the compatibility idea contained in the TCCSC, a possible explanation for the negative effect of enterprising requirements of trial apprenticeship professions among female adolescents is that they may assess their enterprising skills as too low. As a result, they perceive professions with high enterprising requirements as incompatible and turn away from these professions. This study was able to address a gap in career choice research by examining the role of trial apprenticeship professions and offers important reference points for future research.
abstract_id: PUBMED:29221560 Commitment at the heart of the nursing profession. Although it is a free choice and appears paradoxical, the choice to be a nurse is binding. In keeping with its Code of Ethics, the nursing profession demands an ethical, social, political and collective commitment that we cannot shy away from, or we will fail to fulfil our obligations. abstract_id: PUBMED:10335540 The adolescent and the choice of profession. It is easy to understand that, in countries where there is no organized professional help for career guidance, the choice of profession is not based on recognition of the aptitudes necessary for a particular job or profession, but is instead the result of very different circumstances: youthful fancies, parents' mistakes, influences of the surroundings, etc. This situation, found in other countries and in Italy, would become a serious problem if not overcome by sound judgment in career orientation. Average intelligence accompanied by sufficient good will is needed to overcome the most common difficulties met in a job or profession. abstract_id: PUBMED:37908122 What motivates young Arab Muslim women to choose nursing as a profession: A cross-sectional study. Aim: This study examines the impact of nurse uniforms, the coronavirus pandemic, and the image of nurses and the nursing profession on the career choice of young Muslim Arab women in nursing, with a focus on policy implications. Background: The global nursing shortage poses significant challenges, including in Israel. Attracting young Muslim Arab women to nursing careers is vital. Understanding how nurse uniforms, the pandemic, and perceptions of nurses and the nursing profession shape their career decisions can inform policy interventions. Methods: A cross-sectional study utilized digital questionnaires administered to 200 Muslim Arab women aged 20-30 in Israel. The study adhered to the EQUATOR checklist for cross-sectional studies. Results: Only 29.2% of participants expressed interest in studying nursing. Findings revealed a positive correlation between the pandemic's impact and intention to pursue nursing. A linear regression analysis showed the impact of the white uniform, the image of nurses, and the image of the profession on the choice of a career in nursing. Discussion: The study emphasizes the significance of nurse uniforms, the image of nurses, and the image of the nursing profession in shaping young Muslim Arab women's career choices in nursing. Policy interventions should raise awareness about nursing roles, expand responsibilities, and highlight nurses' contributions to society. Conclusion: Strategies to attract young Muslim Arab women to nursing careers should address nurse uniforms, the image of nurses, and the image of the nursing profession. Implications For Policy: Policy interventions should prioritize diversity, promote positive images of nurses and the nursing profession, and raise awareness of nursing roles and contributions among young Muslim Arab women.
In such situations, the body becomes an external object that is self-alienated. This experience induces experiences such as solitude and violence. However, it also holds the potential to expose the original morality of the body. Additionally, this article discusses popular tools used in clinical ethics such as principalism and virtual-based ethics, which are based on moral reasoning and moral feeling. In contrast to these, body ethics seek a more profound and humble level of sensibility that is able to implant authenticity into the ethics. Finally, we offer some suggestions related to Watson's caring theory. abstract_id: PUBMED:27644123 Choice and perception of the nursing profession from the perspective of Polish nursing students: a focus group study. Background: Although previous quantitative studies provide important information on the factors which influence the choice of nursing as a career, qualitative analysis makes it possible to study the subject more thoroughly. The purpose of this study was to conduct an in-depth analysis of the reasons why Polish students choose nursing as a profession and their later perception of the job based on experiences acquired during the nursing course. Methods: A qualitative descriptive study was designed. We organized 8 focus group discussions with third-year nursing students. A total of 76 students participated in the study. Results: Several reasons why students had chosen the nursing profession were identified: desire to help others, family tradition, desire to work abroad, failure to get into another course, pure chance, and low admission requirements (relative to medical studies). The participants' views of the nursing profession were based on their own personal experiences or observations of nurses at work. Often these observations were superficial, concerning only selected fragments of nursing work. The participants also identified reasons for there being low regard for the nursing profession. Conclusion: The decision about choosing nursing is mainly determined by practical aspects, e.g., the opportunity for employment. Although young people are aware of the low prestige of the nursing profession in Poland, they believe it is possible to improve its image and enhance its prestige. abstract_id: PUBMED:29641873 On Choice of Profession of Pharmacist by Students of Moscow Universities The development of personality strongly depends on correspondence of chosen profession to capacities and interests of specialist, labor market demand and possibility of career making. The particular dynamics is observed in alteration of conception of professions and professional activities because information revolution resulted in shifting from mono-professionalism to mastering of several professions at once. The profession of pharmacist also passed through several alterations following development of pharmaceutical industry. The results of survey of opinions of students of pharmaceutical faculties concerning choice of profession of pharmacist demonstrate an actual situation at industry market and give an opportunity of forecasting its development in Moscow for nearest years. abstract_id: PUBMED:15002308 Choice of the profession and work satisfaction among nursing students from the Medical University of Gdansk The choice of the future profession is very important to every young person. The influence of this choice on future occupational satisfaction is significant. 
The paper presents the connection between the choice of the nursing profession and subsequent occupational satisfaction. The material was based on information gathered through anonymous questionnaires. The whole group of second-year students of the Faculty of Nursing of the Medical University of Gdańsk was interviewed with the questionnaire. For 60% of them the choice of the profession was not accidental. The motives for selecting the nursing profession and their influence on future occupational satisfaction were evaluated. The tendency to change profession depending on the nature of the work was also analyzed. The results indicate differences in achieved occupational satisfaction and in the tendency to change profession depending on the motives for taking up the studies. The results do not allow the quality of work of particular groups to be assessed, but they can serve as an introduction to more thorough research. abstract_id: PUBMED:30244884 Probing Sensory Readout via Combined Choice-Correlation Measures and Microstimulation Perturbation. It is controversial whether covariation between neuronal activity and perceptual choice (i.e., choice correlation) reflects the functional readout of sensory signals. Here, we combined choice-correlation measures and electrical microstimulation on a site-to-site basis in the medial superior temporal area (MST), middle temporal area (MT), and ventral intraparietal area (VIP) when macaques discriminated between motion directions in both fine and coarse tasks. Microstimulation generated comparable effects between tasks but heterogeneous effects across and within brain regions. Within the MST and MT, microstimulation significantly biased an animal's choice toward the sensory preference instead of choice-related signals of the stimulated units. This was particularly evident for sites with conflicting preferences of sensory and choice-related signals. In the VIP, microstimulation failed to produce significant effects in either task despite strong choice correlations present in this area. Our results suggest that sensory readout may not be inferred from choice-related signals during perceptual decision-making tasks. Answer: Yes, there is evidence suggesting a correlation between body proportion and the choice of profession. A study comparing anthropometric measurements of university students from different faculties found that students from the School of Physical Education and Sports had higher values for height, chest circumference, and the difference between inspiration and expiration of chest circumference compared to students from the Faculty of Fine Arts and Faculty of Medicine. This indicates that students with more developed bodies, particularly in terms of chest size and respiratory function, may be inclined to choose professions related to physical education and sports (PUBMED:25894636).
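As a brief illustration of the group comparison summarized in this answer (PUBMED:25894636), the sketch below runs a one-way ANOVA and a pairwise t-test on simulated height data. Group sizes match those in the abstract (88/59/57), but the measurement values are invented placeholders, not the study's data.

```python
# Hedged sketch: comparing a body measurement across three student groups
# (SPESp, FFArts, FMed). Heights below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
spesp = rng.normal(180, 6, 88)   # sports students (hypothetical heights, cm)
ffarts = rng.normal(176, 6, 59)  # fine arts students (hypothetical)
fmed = rng.normal(176, 6, 57)    # medical students (hypothetical)

f_stat, p_value = stats.f_oneway(spesp, ffarts, fmed)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up comparison, as such studies typically report
t_stat, p_pair = stats.ttest_ind(spesp, fmed)
print(f"SPESp vs FMed t-test: t = {t_stat:.2f}, p = {p_pair:.4f}")
```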
Instruction: Does medical insurance type (private vs public) influence the physician's decision to perform Caesarean delivery? Abstracts: abstract_id: PUBMED:22562949 Does medical insurance type (private vs public) influence the physician's decision to perform Caesarean delivery? Introduction: US data reveal a Caesarean rate discrepancy between insured and uninsured patients, with the C-section rate highest among the privately insured. The data have prompted concern that financial incentives associated with insurance status might influence American physicians' decisions to perform Caesarean deliveries. Objective: To determine whether differences in medical risk factors account for the apparent Caesarean rate discrepancy between Medicaid and privately insured patients in Michigan, USA. Method: A retrospective review was performed of 617 269 live birth deliveries in Michigan hospitals during 2004-8. All live birth records that were able to be linked to their mothers' hospital discharge records were utilised. Diagnosis-related group codes from the hospitalisation records were used to identify Caesarean deliveries. Regression models determined Caesarean probability for the time period under study, adjusted for insurance type, maternal age, race, maternal medical conditions, multiple births, prematurity and birth weight. Results: From 2004 to 2008, Caesarean rates were 33% for privately insured patients and 29% for Medicaid patients. The probability of Caesarean delivery was significantly greater for privately insured than Medicaid patients on univariate analysis (OR 1.2, 95% CI 1.19 to 1.22) but not on multivariate analysis (adjusted OR 1.01, 95% CI 0.99 to 1.02). Conclusion: No significant disparity was found in the odds of Caesarean delivery between privately insured and Medicaid patients in Michigan after adjusting for other Caesarean risk factors. A positive disparity would have provided de facto evidence that financial incentives play a role in physician decision-making regarding Caesarean delivery. abstract_id: PUBMED:28827257 Caesarean sections and private insurance: systematic review and meta-analysis. Objective: Financial incentives associated with private insurance may encourage healthcare providers to perform more caesarean sections. We therefore sought to determine the association of private insurance and odds of caesarean section. Design: Systematic review and meta-analysis. Data Sources: MEDLINE, Embase and The Cochrane Library from the first year of records through August 2016. Eligibility Criteria: We included studies that reported data to allow the calculation of OR of caesarean section of privately insured as compared with publicly insured women. Outcomes: The prespecified primary outcome was the adjusted OR of births delivered by caesarean section of women covered with private insurance as compared with women covered with public insurance. The prespecified secondary outcome was the crude OR of births delivered by caesarean section of women covered with private insurance as compared with women covered with public insurance. Results: Eighteen articles describing 21 separate studies in 12.9 million women were included in this study. In a meta-analysis of 13 studies, the adjusted odds of delivery by caesarean section was 1.13 higher among privately insured women as compared with women with public insurance coverage (95% CI 1.07 to 1.18) with no relevant heterogeneity between studies (τ2=0.006). 
The meta-analysis of crude estimates from 12 studies revealed a somewhat more pronounced association (pooled OR 1.35, 95% CI 1.27 to 1.44) with no relevant heterogeneity between studies (τ2=0.011). Conclusions: Caesarean sections are more likely to be performed in privately insured women as compared with women using public health insurance coverage. Although this effect is small on average and variable in its magnitude, it is present in all analyses we performed. abstract_id: PUBMED:34636088 Supplementary private health insurance: The impact of physician financial incentives on medical practice. In the French diagnosis-related group (DRG)-based payment system, both private and public hospitals are financed by a public single payer. Public hospitals are overcrowded and have no direct financial incentives to choose one procedure over another. If a patient has a strong preference, they can switch to a private hospital. In private hospitals, the preference does come into play, but the patient has to pay for the additional cost, for which they are reimbursed if they have supplementary private health insurance. Do financial incentives from the fees received by physicians for different procedures drive their behavior? Using French exhaustive data on delivery, we find that private hospitals perform significantly more cesarean deliveries than public hospitals. However, for patients without private health insurance, the two sectors differ much less in terms of cesareans rate. We determine the impact of the financial incentive for patients who can afford the additional cost. Affordability is mainly ensured by the reimbursement of costs by private health insurance. These findings can be interpreted as evidence that, in healthcare systems where a public single payer offers universal coverage, the presence of supplementary private insurance can contribute to creating incentives on the supply side and lead to practices and an allocation of resources that are not optimal from a social welfare perspective. abstract_id: PUBMED:31015268 Publicly insured caesarean sections in private hospitals: a repeated cross-sectional analysis in Chile. Objective: To measure the likelihood of delivery by caesarean section (C-section) for publicly insured births as compared with privately insured births, across all hospitals and within private hospitals. Design: Repeated cross-sectional analysis. Setting: The universe of hospital births in 15 regions of Chile. Participants: 2 405 082 singleton births between 2001 and 2014. Outcome Measures: C-section rates by type of hospital and type of insurance; contribution to overall C-section rates of subgroups by type of insurance and type of hospital; adjusted OR of privately insured births delivered by C-section compared with publicly insured births, across all hospitals and within private hospitals; percentage of discharges related to maternal morbidity and mortality across groups; length of stay after delivery. Results: An increasing percentage of publicly insured births occur in private facilities each year. Approximately three out of four publicly insured births in private hospitals are delivered by C-section. The adjusted odd of C-section delivery in a private maternity unit is lower for those privately insured than for those with public insurance: OR 0.6, 95% CI 0.56 to 0.64. There is no evidence that these women would have been more likely to have a C-section out of medical necessity. 
Conclusions: We find an association between high C-section rates and publicly insured women delivering at private institutions in Chile, and show that this group is driving the overall high and growing rates. There is a need for a more informed surveillance on the part of the public insurance system of its private providers' C-section practices. abstract_id: PUBMED:27896623 Effect of the new maternity insurance scheme on medical expenditures for caesarean delivery in Wuxi, China: a retrospective pre/post-reform case study. Aiming to control rising medical expenditures and help improve China's healthcare systems, this study examined whether a cap-based medical insurance scheme with shared financial interest between the insurance and healthcare providers is effective in containing hospitals' C-section medical expenditures. We used 6547 caesarean delivery case records from a teaching tertiary-level general public hospital located in Wuxi, China (2004-2013), and used the Chow test to investigate the possibility of significant variation in mean medical expenditures for caesarean deliveries pre- and post-reform. We also used paired sample t-tests and linear regression models to compare the mean medical expenditures between insured and uninsured women undergoing caesarean delivery during the post-reform period. After the scheme's implementation, medical expenditures for caesarean deliveries declined and the medical expenditures of women covered by the scheme were significantly lower than those of uninsured patients. These findings indicated the scheme's effectiveness in minimizing caesarean delivery expenditures. The cap-based medical insurance scheme with shared financial interest between insurance and healthcare providers would likely steer healthcare providers' behaviors in a more cost-effective direction. abstract_id: PUBMED:35439390 The development of mechanism of participation of private clinics in the mandatory medical insurance system The article considers actual aspects of participation of private clinics in implementation of territorial program of mandatory medical insurance from position of development of mechanism of interaction at micro-level. The principles of public-private partnership are gaining more popularity in social sphere, especially in health care. In Russia, private clinics developed their own motives to participate in implementation of territorial program of mandatory medical insurance, but mechanism itself is not always elaborated considering regional characteristics. Therefore, medical organizations encounter problems concerning implementation of mechanism of public-private partnership: application deadline, normative legal base, positioning of executive and regulatory authorities, etc. The situation is aggravated by lacking of municipal health care departments in region that could inform private business about its corresponding rights and obligations. The problems of implementing public-private partnership are analyzed and mechanism of participation of private clinics in implementation of program of mandatory medical insurance (as exemplified by the Krasnodar Kray). The study permitted to determine detailed algorithm of participation of private medical organizations in program of mandatory medical insurance. The study results permitted to scope regional features of development of interaction between state and business. 
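The Wuxi study above (PUBMED:27896623) used the Chow test to check for a structural break in mean caesarean-delivery expenditures before and after the insurance reform. The following is a minimal, hypothetical sketch of that test; the expenditure figures are simulated, and the intercept-only model (a pure shift in the mean) is an assumption made for illustration.

```python
# Hedged sketch of a Chow test for a pre/post-reform break in mean expenditures.
# The expenditure series below is simulated for illustration only.
import numpy as np
from scipy import stats

def rss(y, X):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

rng = np.random.default_rng(2)
pre = 9000 + rng.normal(0, 800, 120)    # pre-reform expenditures (hypothetical units)
post = 8300 + rng.normal(0, 800, 150)   # post-reform expenditures (hypothetical)

k = 1                                    # intercept-only model: testing a shift in the mean
X_pre, X_post = np.ones((len(pre), 1)), np.ones((len(post), 1))
pooled = np.concatenate([pre, post])
X_pool = np.ones((len(pooled), 1))

rss_pool = rss(pooled, X_pool)
rss_split = rss(pre, X_pre) + rss(post, X_post)
df2 = len(pre) + len(post) - 2 * k
F = ((rss_pool - rss_split) / k) / (rss_split / df2)
p = stats.f.sf(F, k, df2)
print(f"Chow F = {F:.2f}, p = {p:.4f}")
```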
abstract_id: PUBMED:24277646 A retrospective cohort study of mode of delivery among public and private patients in an integrated maternity hospital setting. Objective: To examine the associations between mode of delivery and public versus privately funded obstetric care within the same hospital setting. Design: Retrospective cohort study. Setting: Urban maternity hospital in Ireland. Population: A total of 30 053 women with singleton pregnancies who delivered between 2008 and 2011. Methods: The study population was divided into those who booked for obstetric care within the public (n=24 574) or private clinics (n=5479). Logistic regression analyses were performed to examine the associations between operative delivery and type of care, adjusting for potential confounding factors. Main Outcome Measures: Caesarean section (scheduled or emergency), operative vaginal delivery (vacuum or forceps), indication for caesarean section as classified by the operator. Results: Compared with public patients, private patients were more likely to be delivered by caesarean section (34.4% vs 22.5%, OR 1.81; 95% CI 1.70 to 1.93) or operative vaginal delivery (20.1% vs 16.5%, OR 1.28; 95% CI 1.19 to 1.38). The greatest disparity was for scheduled caesarean sections; differences persisted for nulliparous and parous women after controlling for medical and social differences between the groups (nulliparous 11.9% vs 4.6%, adjusted (adj) OR 1.82; 95% CI 1.49 to 2.24 and parous 26% vs 12.2%, adj OR 2.08; 95% CI 1.86 to 2.32). Scheduled repeat caesarean section accounted for most of the disparity among parous patients. Maternal request per se was an uncommonly reported indication for caesarean section (35 in each group, p<0.000). Conclusions: Privately funded obstetric care is associated with higher rates of operative deliveries that are not fully accounted for by medical or obstetric risk differences. abstract_id: PUBMED:11858789 The role of private insurance in public health care systems: conceptual framework and policies The structure of the health care system and specifically the type and amount of the public and private mix is not a closed issue. This article provides an update of the arguments that justify public intervention in health, and emphasizes the failures of the private insurance market that call for mandatory universal health insurance, although that does not necessarily mean that the state has to be the insurer. The relationship between both sectors and the variables determining the relative level of expenditure in both are also analyzed. Following the literature on the public provision of private goods, the level of expenditure in a democracy is seen to depend on the preferences of the median voter, where private insurance usually tops up public insurance. The key variable determining the decision to buy additional private insurance is the difference in quality, defined broadly, between both sectors. Concerning policies, the appropriateness of fiscal incentives to promote the uptake of private insurance is discussed and it is concluded that there is no clear evidence of its suitability. Also, it is argued that models in which the public and private sectors appear totally segregated or totally integrated are preferable to intermediate models, in which both sectors appear combined. Medical coverage bought by an informed agent in exchange for a capitation payment seems a better way to integrate the private sector than through a system of vouchers.
abstract_id: PUBMED:32526109 The public-private partnership in Russian health care: the regional aspect The health care problems are considered in detail both in the program and non-program activities of the sector state regulatory bodies. The public-private partnership, in particular, participation of private clinics in the system of mandatory health insurance can become one of the effective tools to cope with many problems. The article analyzes participation of private clinics in implementing territorial program of mandatory health insurance as exemplified by the Krasnodar Krai. This is to provide guaranteed free medical care by private clinics of municipalities in the Krai. The regional features of public-private partnership in health care are considered in fairly prosperous Russian region that allow to develop positive scenario. Out of the total number of medical organizations, potential participants of the mandatory health insurance system the private clinics were identified. The analysis of participation of private medical organizations in the mandatory medical insurance system was carried out in context of two main classification criteria: geographical location and the type of medical services. According to the study results, the structure of private clinics of the Krasnodar Krai participating in mandatory health insurance program on a territorial basis is determined. The types of medical care that are not carried out by the selected companies, as well as the types of medical services that are in demand by private clinics are established. The list of private clinics participating in territorial mandatory health insurance program with the widest range of services is determined. The matrix of participation of private clinics in mandatory health insurance system was developed on the example of the Krasnodar Krai. abstract_id: PUBMED:11711406 Unwanted caesarean sections among public and private patients in Brazil: prospective study. Objective: To assess and compare the preferences of pregnant women in the public and private sector regarding delivery in Brazil. Design: Face to face structured interviews with women who were interviewed early in pregnancy, about one month before the due date, and about one month post partum. Setting: Four cities in Brazil. Participants: 1612 pregnant women: 1093 public patients and 519 private patients. Main Outcome Measures: Rates of delivery by caesarean section in public and private institutions; women's preferences for delivery; timing of decision to perform caesarean section. Results: 1136 women completed all three interviews; 476 women were lost to follow up (376 public patients and 100 private patients). Despite large differences in the rates of caesarean section in the two sectors (222/717 (31%) among public patients and 302/419 (72%) among private patients) there were no significant differences in preferences between the two groups. In both antenatal interviews, 70-80% in both sectors said they would prefer to deliver vaginally. In a large proportion of cases (237/502) caesarean delivery was decided on before admission: 48/207 (23%) in women in the public sector and 189/295 (64%) in women in the private sector. Conclusions: The large difference in the rates of caesarean sections in women in the public and private sectors is due to more unwanted caesarean sections among private patients rather than to a difference in preferences for delivery. High or rising rates of caesarean sections do not necessarily reflect demand for surgical delivery. 
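For the Brazilian study directly above (PUBMED:11711406), a crude odds ratio can be computed from the reported counts (302/419 caesareans among private patients vs 222/717 among public patients). The sketch below is illustrative arithmetic using the standard log-odds-ratio (Woolf) method, not a reproduction of the authors' analysis.

```python
# Hedged sketch: crude odds ratio with a 95% CI from the counts reported in PUBMED:11711406.
import numpy as np

cs_priv, n_priv = 302, 419
cs_pub, n_pub = 222, 717
a, b = cs_priv, n_priv - cs_priv   # private: caesarean / vaginal
c, d = cs_pub, n_pub - cs_pub      # public: caesarean / vaginal

or_crude = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)        # Woolf standard error of log(OR)
lo, hi = np.exp(np.log(or_crude) + np.array([-1.96, 1.96]) * se_log_or)
print(f"crude OR = {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```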
Answer: The medical insurance type (private vs public) appears to influence the physician's decision to perform Caesarean delivery, although the extent and direction of this influence can vary by region and healthcare system. In the United States, a study in Michigan found that after adjusting for other risk factors, there was no significant disparity in the odds of Caesarean delivery between privately insured and Medicaid patients, suggesting that financial incentives may not play a role in physician decision-making regarding Caesarean delivery in this context (PUBMED:22562949). However, a systematic review and meta-analysis that included studies from various countries showed that privately insured women had higher odds of delivery by Caesarean section compared with women with public insurance coverage, indicating that financial incentives associated with private insurance may encourage healthcare providers to perform more Caesarean sections (PUBMED:28827257). In France, where both private and public hospitals are financed by a public single payer, private hospitals perform significantly more Caesarean deliveries than public hospitals, especially for patients with private health insurance, suggesting that financial incentives from fees received by physicians for different procedures may influence their behavior (PUBMED:34636088). In Chile, publicly insured women delivering at private institutions had high C-section rates, and this group was driving the overall high and growing rates, indicating an association between insurance type and the likelihood of Caesarean delivery (PUBMED:31015268). In contrast, a study in Wuxi, China, found that after implementing a cap-based medical insurance scheme with shared financial interest between the insurance and healthcare providers, medical expenditures for Caesarean deliveries declined, and the expenditures of women covered by the scheme were significantly lower than those of uninsured patients, suggesting that the scheme was effective in minimizing Caesarean delivery expenditures (PUBMED:27896623). Overall, while there is evidence that private insurance can be associated with higher rates of Caesarean sections, the relationship between insurance type and the decision to perform a Caesarean delivery is complex and influenced by various factors, including the healthcare system's structure, financial incentives, and patient preferences.
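The pooled estimate cited in this answer (PUBMED:28827257) comes from inverse-variance meta-analysis of study-level odds ratios. The sketch below shows the general random-effects (DerSimonian-Laird) calculation; the per-study ORs and confidence intervals are invented placeholders rather than the actual studies in that review.

```python
# Hedged sketch: inverse-variance random-effects pooling of odds ratios.
# The study-level ORs and CIs below are hypothetical placeholders.
import numpy as np

ors = np.array([1.05, 1.20, 1.10, 1.35, 1.08])
ci_low = np.array([0.95, 1.05, 0.98, 1.10, 0.90])
ci_high = np.array([1.16, 1.37, 1.23, 1.66, 1.30])

y = np.log(ors)                                    # log odds ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2                                      # fixed-effect (inverse-variance) weights

# DerSimonian-Laird between-study variance (tau^2)
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

w_re = 1 / (se**2 + tau2)                          # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.3f}")
```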
Instruction: Prognosis of metastatic breast cancer: are there differences between patients with de novo and recurrent metastatic breast cancer? Abstracts: abstract_id: PUBMED:29475920 The Difference in Prognostic Outcomes Between De Novo Stage IV and Recurrent Metastatic Patients with Hormone Receptor-positive, HER2-negative Breast Cancer. Background/aim: Patients with de novo stage IV and recurrent metastatic breast cancer are often treated with the same strategies, although the difference in prognostic outcomes remains unclear. The objective of this retrospective chart review study was to compare the prognostic outcomes between two types of patients with hormone receptor-positive (HR+), HER2-negative (HER2-) breast cancer. Patients And Methods: We estimated overall survival of the two groups and evaluated the progressive course of the disease using disease-free interval (DFI) and interval from the end of adjuvant treatment to the first recurrence (AFI). Results: We studied 172 patients with HR+/HER2- breast cancer, of which 65 were de novo and 107 were recurrent. Median OS between de novo and recurrent BC was 4.85 and 3.45 years, respectively (p=0.046). Recurrent patients with a DFI<2 years were found to have a significantly poorer prognosis than recurrent patients with a DFI≥2 years (p=0.016) and de novo patients (p=0.002). Similarly, recurrent patients with an AFI<1 year had a significantly poorer prognosis compared to de novo patients (p=0.026). Conclusion: De novo patients had better prognoses than recurrent patients with DFI<2 years or AFI<1 year, likely due to their therapy-naïve status or lower resistance to systemic treatment. abstract_id: PUBMED:32043771 De Novo Versus Recurrent HER2-Positive Metastatic Breast Cancer: Patient Characteristics, Treatment, and Survival from the SystHERs Registry. Background: Limited data exist describing real-world treatment of de novo and recurrent HER2-positive metastatic breast cancer (MBC). Materials And Methods: The Systemic Therapies for HER2-Positive Metastatic Breast Cancer Study (SystHERs) was a fully enrolled (2012-2016), observational, prospective registry of patients with HER2-positive MBC. Patients aged ≥18 years and ≤6 months from HER2-positive MBC diagnosis were treated and assessed per their physician's standard practice. The primary endpoint was to characterize treatment patterns by de novo versus recurrent MBC status, compared descriptively. Secondary endpoints included patient characteristics, progression-free and overall survival (PFS and OS, by Kaplan-Meier method; hazard ratio [HR] and 95% confidence interval [CI] by Cox regression), and patient-reported outcomes. Results: Among 977 eligible patients, 49.8% (n = 487) had de novo and 50.2% (n = 490) had recurrent disease. A higher proportion of de novo patients had hormone receptor-negative disease (34.9% vs. 24.9%), bone metastasis (57.1% vs. 45.9%), and/or liver metastasis (41.9% vs. 33.1%), and a lower proportion had central nervous system metastasis (4.3% vs. 13.5%). De novo patients received first-line regimens containing chemotherapy (89.7%), trastuzumab (95.7%), and pertuzumab (77.8%) more commonly than recurrent patients (80.0%, 85.9%, and 68.6%, respectively). De novo patients had longer median PFS (17.7 vs. 11.9 months; HR, 0.69; 95% CI, 0.59-0.80; p < .0001) and OS (not estimable vs. 44.5 months; HR, 0.55; 95% CI, 0.44-0.69; p < .0001).
Conclusion: Patients with de novo versus recurrent HER2-positive MBC exhibit different disease characteristics and survival durations, suggesting these groups have distinct outcomes. These differences may affect future clinical trial design. Clinical trial identification number: NCT01615068 (clinicaltrials.gov). Implications For Practice: SystHERs was an observational registry of patients with HER2-positive metastatic breast cancer (MBC), which is a large, modern, real-world data set for this population and, thereby, provides a unique opportunity to study patients with de novo and recurrent HER2-positive MBC. In SystHERs, patients with de novo disease had different baseline demographics and disease characteristics, had superior clinical outcomes, and more commonly received first-line chemotherapy and/or trastuzumab versus those with recurrent disease. Data from this and other studies suggest that de novo and recurrent MBC have distinct outcomes, which may have implications for disease management strategies and future clinical study design. abstract_id: PUBMED:36008651 Clinical subtype, treatment response, and survival in De Novo and recurrent metastatic breast cancer. Purpose: This study evaluated whether patients with de novo metastatic breast cancer (MBC) have superior outcomes compared to those with recurrent MBC in a contemporary treatment era and examined factors related to outcome differentials. Methods: Using an institutional database, we examined patient and tumor characteristics, treatment response, and outcome among 232 patients with de novo and 612 patients with recurrent MBC diagnosed between 2011 and 2017. Results: De novo MBC had 9-month (m) longer overall survival (OS) than recurrent MBC (36.4 vs 27.4 m, p < 0.001). Contributions to this difference included nearly twofold more HER2-positive (29.3% vs 15.2%) and significantly fewer triple-negative breast cancers (20.3% vs 32.4%, both p < 0.001) in de novo compared with recurrent MBC cohorts. Stratified by clinical subtype, progression-free survival (PFS) on first-line therapy was significantly longer in de novo MBC in all but the triple-negative subtype, 25.5 vs 11.6 m (p < 0.001) among 390 patients with hormone receptor-positive, HER2-negative, 11.4 vs 5.4 m (p = 0.002) among 142 patients with HER2-positive, and 4.0 vs 3.0 m (p = 0.121) among 162 with triple-negative MBC. In multivariable analysis, de novo status remained independently associated with improved OS (hazard ratio 0.63, 95% CI 0.49-0.80), regardless of subtype and other features. Conclusion: Patients with de novo MBC have better outcomes than those with recurrent MBC. Differences in clinical subtype and response to therapy in the metastatic setting contribute to, but do not fully explain, this difference. Longer PFS to first-line therapy in de novo MBC suggests biologic differences compared to recurrent MBC, which may be intrinsic or due to acquired resistance from treatment for prior localized breast cancer in recurrent disease. abstract_id: PUBMED:30017062 Incidence and survival rate of de novo tumors in liver transplants. Introduction: The greater survival of transplanted patients is accompanied by an increase in the rate of de novo malignancies (NM), which are the most frequent late-onset complication. We can distinguish between non-melanoma skin cancers (NMSC), post-transplant lymphoproliferative disorders (PTLD) and solid organ cancers (SOC).
Our objective is to determine the incidence of the different types of NM, the time elapsed until diagnosis and survival rates in our setting. Methods: We conducted a retrospective study of 1071 liver transplant patients from 1990 to 2015 at our center. We analyzed the demographic variables, incidence of NM and survival. Results: 184 NM developed in 1071 transplant patients (17%), specifically 19% of the males and 13% of the females (P=.004). The most frequent NM were NMSC (29%), lung (18%), head and neck (16%), PTLD (10%) and gastrointestinal (8%). The median time of diagnosis was 7.9 years in NMSC, 3.9 years in PTLD and 9.8 years in SOC. Patients with NMSC had significantly better survival than those with PTLD or SOC. The incidence of de novo tumors (excluding NMSC) was 1889/100,000 transplants/year. By gender, lung cancer was the most common SOC in men and breast cancer in women. Conclusion: In our setting, excluding NMSC, the incidence is 8.8 times greater than estimations for the general population, with a high rate of lung cancer, so we should implement preventive and diagnostic strategies. abstract_id: PUBMED:35117877 Radiation therapy for primary tumor of de novo stage IV breast cancer. Despite recent advances in multimodality treatments such as endocrine therapy, chemotherapy, molecularly targeted therapy, and radiation therapy, it is still very difficult to cure de novo stage IV breast cancer patients completely. The traditional role of radiation therapy for these patients has been a palliative treatment strategy that aims to control tumor progression and suppress tumor-related symptoms. Recently, several non-randomized retrospective studies on de novo stage IV breast cancer have revealed that locoregional radiation therapy (LRRT) might confer a survival benefit. However, there is no high level evidence to support the impact of LRRT on survival among patients with de novo metastatic disease so far. This article aimed to summarize the literature and to discuss whether treating the primary lesion with radiation therapy could improve clinical survival outcomes among de novo stage IV breast cancer patients. The issue of patient selection will be discussed because not all de novo stage IV breast cancer patients could benefit from LRRT. This article also explores the clinical evidence regarding LRRT for de novo metastatic disease across various cancers such as prostate, uterine cervical, non-small-cell lung, and head and neck cancers. Many retrospective trials have shown the impact of locoregional treatment (LRT) on survival in de novo metastatic breast cancer. However, since the backgrounds of patients treated with LRRT are quite different from those of patients who did not receive LRRT and the treatment consists of surgery and/or radiation therapy, the role of radiation therapy alone remains unclear. Several reports investigated prognostic factors to detect the benefits of LRRT, but the findings remain conflicting and no consensus exists. However, selected patients with de novo metastatic disease with better performance status, low tumor burden, and estrogen receptor positivity should be considered for the addition of radiation therapy delivered to the primary site. To explore proper decision-making regarding LRRT, further prospective randomized trials are eagerly awaited. abstract_id: PUBMED:35728342 Occurrence and characteristics of patients with de novo advanced breast cancer according to patient and tumor characteristics - A retrospective analysis of a real world registry.
Background: Patients with de novo metastatic breast cancer (dnMBC) may have different clinical and pathological characteristics. In studies concerned with first-line metastatic patients, the proportion of these patients without secondary resistance mechanisms may have a large influence on the study results. The aim of this study was to identify patient and tumor characteristics that are associated with dnMBC vs. recurrent MBC (rMBC). Methods: This is a retrospective analysis of data prospectively collected in the PRAEGNANT metastatic breast cancer registry (NCT02338167). First-line treated patients were eligible. Patient and tumor characteristics were compared with common disease and tumor characteristics relative to de novo metastatic status, as well as early and late recurrences after primary disease without metastases. Results: Among the 947 patients identified, 355 were included with de novo metastatic disease (37.5%). Older age and HER2-positive disease were significantly associated with a higher frequency of dnMBC. Patients younger than 50 years, aged 50-69 years, or aged 70 years or older had dnMBC frequencies of 22.7%, 44.0%, and 57.6%, respectively. HER2-positive patients had dnMBC at initial presentation in 49.1% of cases, in comparison with 21.9%, 35.5%, and 37.6% in patients with triple-negative, luminal A-like and luminal B-like breast cancer, respectively. Conclusion: Age and breast cancer subtype are associated with the frequency of first-line MBC patients. Inclusion criteria concerning age or breast cancer subtype can influence the frequency of these patients in a selected patient population and can therefore modify the number of patients with secondary resistance to specific therapies in clinical trials. abstract_id: PUBMED:33044617 Characteristics and survival of de novo and recurrent metastatic breast cancer in New Zealand. Background: We aim to examine the characteristics and survival of patients with de novo metastatic breast cancer (dnMBC) and recurrent metastatic breast cancer (rMBC) in New Zealand. Methods: This study included women diagnosed with dnMBC and women who developed rMBC between 2010 and 2017. The Kaplan-Meier method was used to examine cancer-specific survival. Cox proportional hazards regression was used to estimate the adjusted hazard ratio (HR) of cancer-specific mortality by ethnicity, age, year of diagnosis, socioeconomic deprivation, site of metastases, number of metastatic sites, biomarker subtype and MBC subgroup. Results: We included 2177 MBC patients (667 dnMBC and 1510 rMBC).
The median survival of dnMBC patients was 26 months compared to 18 months for rMBC. There were no differences in breast-cancer specific mortality by ethnicity or socioeconomic deprivation. The adjusted HR for patients with visceral metastases compared to patients with non-visceral metastases was 1.41, and the adjusted HR for triple negative disease compared to Luminal A disease was 2.24. Compared to dnMBC, the adjusted HRs for rMBC patients with a metastatic-free interval of <2 years, 2-4 years, 5-7 years and 8+ years were 1.81, 1.47, 1.08 and 0.82, respectively. Conclusions: The survival for patients with MBC in New Zealand is very similar to other developed countries. Patients with dnMBC had a much better prognosis than those with recurrent disease. Patients with triple negative disease or non-luminal HER2 positive disease had the worst prognosis. The prognosis for patients with rMBC improved the longer the time from diagnosis to the development of metastases. abstract_id: PUBMED:38327599 Survival nomogram for patients with de novo metastatic breast cancer based on the SEER database and an external validation cohort. Background: On average, 5-10% of patients are diagnosed with metastatic breast cancer (MBC) at the initial diagnosis. This study aimed to develop a nomogram to predict the overall survival (OS) in these patients. Methods: The nomogram was based on a retrospective study of 9435 patients with de novo MBC from the Surveillance, Epidemiology, and End Results (SEER) database. The predictive accuracy and discriminative ability of the nomogram were determined using the concordance index (C-index), area under the time-dependent receiver operating characteristic curve (AUC), and calibration curve. Decision curve analysis (DCA) was employed to evaluate the benefits and advantages of our new predicting model over the 8th edition of the American Joint Committee on Cancer (AJCC) Tumor Node Metastasis (TNM) staging system. The results were validated in a retrospective study of 103 patients with de novo MBC from January 2013 to June 2022 at an institution in northwest China. Results: Multivariate analysis of the primary cohort revealed that independent factors for survival were age at diagnosis, pathological type, histological grade, T stage, N stage, molecular subtype, bone metastasis, brain metastasis, liver metastasis, lung metastasis, surgery, chemotherapy, and radiotherapy. The nomogram achieved a C-index of 0.688 (95% confidence interval [CI], 0.682-0.694) in the training cohort and 0.875 (95% CI, 0.816-0.934) in the validation cohort. The AUC of the nomogram indicated good specificity and sensitivity in the training and validation cohorts, respectively. Calibration curves showed favorable consistency between the predicted and actual survival probabilities. Additionally, the DCA curve produced higher net gains than the AJCC-TNM staging system. Finally, risk stratification can accurately identify groups of patients with de novo MBC at different risk levels. Conclusions: The nomogram showed favorable predictive and discriminative abilities for OS in patients with de novo MBC. Other populations from different countries or prospective studies are needed to further validate the nomogram. abstract_id: PUBMED:33808259 Expanding Roles of De Novo Lipogenesis in Breast Cancer. In recent years, lipid metabolism has gained greater attention in several diseases including cancer. Dysregulation of fatty acid metabolism is a key component in breast cancer malignant transformation.
In particular, de novo lipogenesis provides the substrate required by the proliferating tumor cells to maintain their membrane composition and energetic functions during enhanced growth. However, it appears that not all breast cancer subtypes depend on de novo lipogenesis for fatty acid replenishment. Indeed, while breast cancer luminal subtypes rely on de novo lipogenesis, the basal-like receptor-negative subtype overexpresses genes involved in the utilization of exogenous-derived fatty acids, in the synthesis of triacylglycerols and lipid droplets, and fatty acid oxidation. These metabolic differences are specifically associated with genomic and proteomic changes that can perturb lipogenic enzymes and related pathways. This behavior is further supported by the observation that breast cancer patients can be stratified according to their molecular profiles. Moreover, the discovery that extracellular vesicles act as a vehicle of metabolic enzymes and oncometabolites may provide the opportunity to noninvasively define tumor metabolic signature. Here, we focus on de novo lipogenesis and the specific differences exhibited by breast cancer subtypes and examine the functional contribution of lipogenic enzymes and associated transcription factors in the regulation of tumorigenic processes. Answer: Yes, there are differences in prognosis between patients with de novo and recurrent metastatic breast cancer. Studies have shown that patients with de novo metastatic breast cancer generally have better prognostic outcomes compared to those with recurrent metastatic breast cancer. For instance, a retrospective chart review study found that patients with de novo hormone receptor-positive (HR+), HER2-negative (HER2-) breast cancer had a median overall survival (OS) of 4.85 years, which was significantly longer than the 3.45 years observed in patients with recurrent breast cancer. Furthermore, recurrent patients with a disease-free interval (DFI) of less than 2 years or an interval from the end of adjuvant treatment to the first recurrence (AFI) of less than 1 year had a significantly poorer prognosis compared to de novo patients (PUBMED:29475920). Similarly, in the SystHERs registry, patients with de novo HER2-positive metastatic breast cancer had longer median progression-free survival (PFS) and OS compared to those with recurrent disease. De novo patients also exhibited different disease characteristics and received first-line regimens containing chemotherapy, trastuzumab, and pertuzumab more commonly than recurrent patients (PUBMED:32043771). Another study using an institutional database confirmed that de novo metastatic breast cancer patients had a 9-month longer OS than recurrent metastatic breast cancer patients. The study also found that de novo status remained independently associated with improved OS, regardless of clinical subtype and other features (PUBMED:36008651). In New Zealand, a study showed that the median survival of patients with de novo metastatic breast cancer was 26 months compared to 18 months for those with recurrent metastatic breast cancer. The prognosis for patients with recurrent metastatic breast cancer improved the longer the time from diagnosis to the development of metastases (PUBMED:33044617). 
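The median overall-survival figures cited above (for example, 26 vs 18 months) are derived from Kaplan-Meier estimates. As a hedged illustration, the sketch below implements the estimator by hand on a tiny hypothetical cohort; the times and censoring indicators are not data from any of the cited studies.

```python
# Hedged sketch: a hand-rolled Kaplan-Meier estimator and its median survival time.
import numpy as np

def km_median(times, events):
    """Smallest time at which the Kaplan-Meier survival estimate drops to 0.5 or below."""
    times, events = np.asarray(times, dtype=float), np.asarray(events, dtype=int)
    order = np.lexsort((1 - events, times))   # sort by time; deaths before censorings at ties
    surv, at_risk = 1.0, len(times)
    for t, d in zip(times[order], events[order]):
        if d == 1:                            # death observed at time t
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1                          # subject leaves the risk set (event or censoring)
        if surv <= 0.5:
            return t
    return None                               # median not reached

# Tiny hypothetical cohort (months; 1 = died, 0 = censored) -- not data from the cited studies
t_obs = [30, 26, 41, 12, 55, 33, 28, 60, 18, 47]
e_obs = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
print("Kaplan-Meier median OS (hypothetical):", km_median(t_obs, e_obs), "months")
```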
These findings suggest that de novo metastatic breast cancer patients tend to have better outcomes than those with recurrent disease, which may be due to differences in clinical subtype, treatment response, and possibly intrinsic biological differences or acquired resistance from treatment for prior localized breast cancer in recurrent disease.
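Discrimination of the SEER nomogram described above (PUBMED:38327599) was reported as a concordance index (C-index 0.688). The sketch below computes Harrell's C-index by brute force on hypothetical risk scores and survival times; it is illustrative only and ignores the edge case of tied event times.

```python
# Hedged sketch: Harrell's concordance index for a survival risk score (hypothetical data).
import numpy as np

def c_index(risk, time, event):
    """Proportion of usable pairs in which the higher-risk subject dies earlier."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i had the event and a shorter observed time than j.
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5        # ties in predicted risk count as half-concordant
    return concordant / usable

time = [5, 12, 20, 26, 33, 40, 48, 60]                 # months (hypothetical)
event = [1, 1, 1, 0, 1, 0, 1, 0]                       # 1 = death observed, 0 = censored
risk = [0.9, 0.8, 0.6, 0.7, 0.4, 0.5, 0.3, 0.1]        # higher = predicted worse (hypothetical)
print(f"C-index = {c_index(risk, time, event):.3f}")
```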
Instruction: Routine day-case laparoscopic pyeloplasty: a paradigm shift? Abstracts: abstract_id: PUBMED:26730057 Randomised Controlled Trial of Day-Case Laparoscopic Cholecystectomy vs Routine Laparoscopic Cholecystectomy. Many randomised controlled trials conducted worldwide favour day-case laparoscopic cholecystectomy, but questions have been raised regarding its application in a developing country like ours. Hence, considering it high time to review current practices, we conducted this trial to report our experience with day-case laparoscopic cholecystectomy and to assess its feasibility and safety in our set-up. Data from 65 patients with symptomatic gallstones were randomised to undergo laparoscopic cholecystectomy either as a day-case procedure or as a routine (conventional) procedure. Complications, quality of life, satisfaction, post-operative nausea and vomiting and pain were assessed. Ninety-seven per cent (31/32) of day-case laparoscopic cholecystectomy patients were successfully discharged with a mean duration of 8.9 ± 4.54 h, compared with 3.33 ± 1.45 days (72.92 ± 34.8 h) in the routine (conventional) laparoscopic cholecystectomy group. There was no significant difference in complications, quality of life, satisfaction, post-operative nausea and vomiting and pain between the two groups. Day-case laparoscopic cholecystectomy is a safe, feasible and beneficial procedure in our set-up. Patient acceptance in terms of quality of life and satisfaction was similar to that of routine laparoscopic cholecystectomy. abstract_id: PUBMED:24027412 Intent at day case laparoscopic cholecystectomy in Owerri, Nigeria: initial experiences. Background And Objective: Laparoscopic cholecystectomy has been the default operation for cholelithiasis at Federal Medical Centre, Owerri for the past 2 years and the outcomes have been good. The duration of post operative stay has been decreasing. We therefore initiated a preliminary 2-year prospective study in May 2010 to determine the feasibility of carrying out day case laparoscopic cholecystectomy in our hospital. Materials And Methods: Patients undergoing laparoscopic cholecystectomy were included in the study if they satisfied the following criteria: age < 65 years, body mass index < 35 kg/m², American Society of Anaesthesiology physical status class I and II, patient residence within a 20 km radius of the hospital, patient acceptance of the procedure and absence of previous complicated upper abdominal surgery. Results: Twelve patients (10 females, 2 males) were worked up with the intent of achieving same-day discharge of the patients. Five of the patients (41.7%) were discharged on the day of operation. The reasons for overnight stay included inadequate pain control, insertion of a drain and patient wishes. There was no conversion to open surgery, no major complications and no case of readmission to the hospital. Conclusions: Day case laparoscopic cholecystectomy in our environment could be safely promoted but will depend on improved facilities and patient enlightenment. abstract_id: PUBMED:35677494 Quality of recovery assessment of day case and multiday stay patients undergoing elective laparoscopic cholecystectomy. Objectives: Many laparoscopic cholecystectomy operations are performed with at least overnight admission. Current research shows that laparoscopic cholecystectomy is safe and feasible to do as a day case. Patient centred outcomes are less well understood.
Material And Methods: Elective laparoscopic cholecystectomy patients at a single metropolitan hospital in Melbourne, Australia were surveyed 24 hours after surgery using the 15-question Quality of Recovery (QoR-15) survey. A comparison was made between day case surgeries and multi-day surgeries. Results: One hundred and eight patients were recruited, consisting of 34 day case and 74 multi-day patients. Patient groups did not differ in terms of age, sex or postoperative morbidity. The multi-day group had a higher proportion of comorbid patients (p-value = 0.03). There was no significant difference in overall QoR-15 score between the two groups, although there was an observed trend towards a higher score in the day case group (132.0 vs 127.9, p = 0.147). QoR-15 individual question results showed that day cases rated significantly better for sleep quality and for fewer feelings of anxiety or worry. The differences narrowed when comparing patient groups as they were booked (intention-to-treat). There were no identified sub-groups that had a significantly higher score if admitted multi-day. Conclusion: Quality of recovery following day case laparoscopic cholecystectomy is just as good, if not better, than multi-day cases. Laparoscopic cholecystectomy as a day case is both safe and economically superior to multi-day management. This gives further weight to current recommendations suggesting that the majority of laparoscopic cholecystectomy operations could be performed as day cases. abstract_id: PUBMED:19088930 An audit of ambulatory laparoscopic cholecystectomy in a Singapore institution: are we ready for day-case laparoscopic cholecystectomy? Background: Day-case laparoscopic cholecystectomy (LC) is practiced in many countries. However, this has yet to be widely accepted in Singapore. This study aims to determine the potential success rate of day-case LC in our institution. Patient And Methods: We retrospectively assessed the proportion of our Ambulatory Surgery 23 hour (AS23) LC patients that met discharge criteria. Our proposed same-day discharge criteria include minimal pain, ability to tolerate feeds, ambulate independently and void spontaneously after 6-8 hours of monitoring. Results: From January 2005 to December 2006, of 405 patients listed for elective LC, 84% of patients were admitted to our AS23 ward. Patients with previous biliary sepsis or pancreatitis or who needed laparoscopic common bile duct exploration (LCBDE) were included. The other 66 were admitted as inpatients. Forty-one of them were admitted due to conversion. A history of cholecystitis or cholangitis was a significant predictor of conversion to open surgery (OR=5.73 and 5.74 respectively, p<0.001). Of the 339 patients, 66% of them fulfilled all four criteria within eight hours of monitoring. Therefore, based on an intention-to-treat analysis, 51.2% fulfilled all four criteria and could potentially be discharged the same day. No predictor for failure was identified, including presence of co-morbidities, duration of operation, surgeon's grade and additional procedures like LCBDE. Conclusion: Using our current inclusion criteria, we projected a success rate of at least 50% with the implementation of day-case LC. With the attendant advantages of cost savings and reduced resource utilization, it is therefore worthwhile to start it in Singapore. abstract_id: PUBMED:24665156 Routine haemoglobin electrophoresis screening in day case herniotomy in Nigerian children: Is it evidence-based?
Background: To determine the prevalence of haemoglobinopathies in children who require day case herniotomy in our centre and ascertain if routine screening is necessary in all patients who require herniotomy. Materials And Methods: A 12-month retrospective analysis of patients requiring herniotomy in our centre. Data including age, sex, diagnosis, haemoglobin electrophoresis status, surgical outcome and hospital stay were analysed. Results: Ninety-five patients had complete records. There were 84 boys and 11 girls. M:F ratio: 7.6:1. The mean age was 3.2 ± 0.6 years. Fifty-five point eight per cent of the patients had right inguinal hernias while 35.8% had left inguinal hernias. Eight patients (8.4%) had bilateral inguinal hernias. Twenty-six patients (27.4%) had haemoglobinopathies while 69 patients (72.6%) had homozygous Haemoglobin A. The Sickle Cell trait (HbAS) was found in 22 patients (23.2%) while the HbAC was found in three patients (3.2%). One patient (1.1%) had Sickle Cell disease (Haemoglobin SS). He had had blood transfusion and previous history of jaundice. All patients survived and all patients were discharged on the day of surgery (mean hospital stay: 4 hrs (range: 2.5 hrs-12 hrs)) except the patient with Sickle Cell disease who was admitted a day before surgery and discharged a day after the operation. Conclusion: One in four children coming for day case herniotomy in our centre had the Sickle Cell trait while only 1% had the Sickle Cell disease. These findings are in keeping with the prevalence in the Nigerian population. Routine screening may not be necessary for all patients coming for herniotomy in our centre. Clear indication(s) should be outlined for screening. abstract_id: PUBMED:33544661 Day case laparoscopic cholecystectomy: Identifying patients for a 'COVID-Cold' isolated day-case unit during the pandemic. Background: The UK practice of laparoscopic cholecystectomy has reduced during the COVID-19 pandemic due to cancellation of non-urgent operations. Isolated day-case units have been recommended as 'COVID-cold' operating sites to resume surgical procedures. This study aims to identify patients suitable for day case laparoscopic cholecystectomy (DCLC) at isolated units by investigating patient factors and unexpected admission. Method: Retrospective analysis of 327 patients undergoing DCLC between January and December 2018 at Ysbyty Gwynedd (District General Hospital; YG) and Llandudno General Hospital (isolated unit; LLGH), North Wales, UK. Results: The results showed that 100% of DCLCs in LLGH were successful; 71.4% of elective DCLCs were successful at YG. Increasing age (p = 0.004), BMI (p = 0.01), ASA Score (p = 0.006), previous ERCP (p = 0.05), imaging suggesting cholecystitis (p = 0.003) and thick-walled gallbladder (p = 0.04) were significantly associated with failed DCLC on univariate analysis. Factors retaining significance (OR, 95% CI) after multiple regression include BMI (1.82, 1.05-3.16; p = 0.034), imaging suggesting cholecystitis (4.42, 1.72-11.38; p = 0.002) and previous ERCP (5.25, 1.53-18.00; p = 0.008). Postoperative complications are comparable in BMI <35 kg/m² and 35-39.9 kg/m². Conclusions: Current patient selection for isolated day unit is effective in ensuring safe discharge and could be further developed with greater consideration for patients with BMI 35-39.9 kg/m². As surgical services return, this helps identify patients suitable for laparoscopic cholecystectomy at isolated COVID-free day units.
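The multivariable analysis reported in the isolated day-unit study above (PUBMED:33544661) modelled failed day-case discharge against BMI, imaging suggesting cholecystitis, and previous ERCP. The sketch below fits the same kind of logistic regression to simulated data; the dataset, coefficients, and resulting odds ratios are placeholders, not the study's results.

```python
# Hedged sketch: multivariable logistic regression for failed day-case discharge.
# The cohort is simulated; assumed "true" effects are chosen only to generate data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 327
bmi = rng.normal(29, 5, n)
cholecystitis = rng.binomial(1, 0.15, n)     # imaging suggesting cholecystitis
prev_ercp = rng.binomial(1, 0.08, n)

linpred = -6.0 + 0.12 * bmi + 1.5 * cholecystitis + 1.7 * prev_ercp   # assumed effects
failed = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([bmi, cholecystitis, prev_ercp]))
fit = sm.Logit(failed, X).fit(disp=0)
ors = np.exp(fit.params)
ci = np.exp(fit.conf_int())                  # 95% CIs on the odds-ratio scale
for name, o, (lo, hi) in zip(["intercept", "BMI", "cholecystitis", "previous ERCP"], ors, ci):
    print(f"{name}: OR {o:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```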
abstract_id: PUBMED:32875414 Day case laparoscopic cholecystectomy at Kilimanjaro Christian Medical Centre, Tanzania. Introduction: The Lancet Commission on Global Surgery has promoted the case for safe, affordable surgical care in low- and middle-income countries (LMICs). In 2017, Kilimanjaro Christian Medical Centre (KCMC) in Tanzania introduced a day case laparoscopic cholecystectomy (DCLC) service, the first of its kind in Sub-Saharan Africa (SSA). We aimed to evaluate this novel service in terms of safety, feasibility and acceptability by patients and staff. Methods: This study used mixed methods and was split into two stages. In stage 1, we reviewed records of all laparoscopic cholecystectomies (LCs) comparing day cases and admissions. These patients were followed up with a telephone questionnaire to investigate complication rates and receive service feedback. Stage 2 consisted of semi-structured interviews with staff exploring the challenges KCMC faced in implementing DCLC. Results: 147 laparoscopic cholecystectomies were completed: 109 were planned for DCLC, 82 (75.2%) of which were successful, whilst 27 (24.8%) patients were admitted. No variables significantly predicted unplanned admission, the commonest causes for which were pain and nausea. In the DCLC group there was 1 readmission. 62 patients answered the follow up questionnaire, 60 (97%) of whom were satisfied with the service. Stage 2 interviews suggested staff to be motivated for DCLC but revealed poor organisation of the day case pathway. Conclusion: High rates of DCLC combined with low rates of complications and readmission suggest DCLC is feasible at KCMC. However, staff interviews alluded to administrative problems preventing KCMC from reaching its full DCLC potential. A dedicated day case surgery unit would address most of these problems. abstract_id: PUBMED:23493831 Ambulatory laparoscopic cholecystectomy: An audit of day case vs overnight surgery at a community hospital in Japan. Aim: To evaluate the applicability and safety of ambulatory laparoscopic cholecystectomy (LC) and to compare day case and overnight stay LC. Methods: Data were collected retrospectively and consecutively for day case and overnight stay LC patients from July 1, 2009 to April 30, 2011. Outcomes were analyzed for patient demographics, operation time, blood loss during operation and frequency and reasons for unexpected or prolonged hospitalization in each group. Results: There was no hospital mortality and no patient was readmitted with serious morbidity after discharge. 50 patients received a day case LC and 19 had an overnight stay LC. There was a significant difference in age between both groups (P < 0.02). There were no significant differences between the day case LC performed (n = 41) and failed (n = 9) groups and between the day case LC performed and the one night stay LC (n = 12) groups. There was a significant difference in age between the one night stay and more nights stay LC groups (P < 0.05). Thus, elderly patients tended to prefer staying in hospital overnight rather than being treated as day cases. The proportion of unexpected or prolonged hospitalization was not significantly different between the day case and overnight stay LC groups, when the patient's request was excluded. Conclusion: Day case LC can be performed with a low rate of complications. Many overnight stay patients could have been managed safely as day cases. Moreover, we need to take special care to treat elderly patients.
abstract_id: PUBMED:14991631 Routine day-case laparoscopic cholecystectomy. Background: A prospective study was carried out to assess the feasibility of performing true day-case laparoscopic surgery in a district general hospital. Methods: All patients admitted consecutively under the care of one surgeon for laparoscopic cholecystectomy were included in the study. Selection criteria for a day-case procedure included an American Society of Anesthesiologists grade of I or II and the availability of a responsible carer at home. Patients were discharged 4-6 h after surgery with a standard analgesia pack and a contact number for advice. All patients were contacted by telephone on the day after discharge. A postal questionnaire was sent to the first 100 patients to assess satisfaction with the day-case process. Results: Of 357 patients admitted for laparoscopic cholecystectomy over a 24-month period, 154 (43.1 per cent) were operated on as day cases on a morning theatre list. Twenty-two patients required an overnight stay (14.3 per cent), three because of conversion to an open procedure. One patient was readmitted for neck pain. Eighty-two (92.1 per cent) of 89 patients were either satisfied or very satisfied with the day-case procedure. Conclusion: This study has demonstrated a low rate of overnight stay (14.3 per cent) and readmission (1.9 per cent), and a high degree of patient satisfaction for day-case laparoscopic cholecystectomy. abstract_id: PUBMED:21505573 Day case laparoscopic nephrectomy: initial experience. Rationale: Laparoscopic nephrectomy is becoming the new gold standard surgical technique in a selected population (non-functioning kidney, localised renal cell carcinoma). Day surgery is a popular pathway of care, and procedures of ever-increasing complexity are being considered. Objective: The aim of the study was to report the postoperative complications of day case laparoscopic nephrectomy, according to the Clavien system, and to assess the feasibility of the procedure performed as a day case. Material And Results: This study included all the patients considered for day case transperitoneal laparoscopic nephrectomy between May 2008 and November 2009. Sixteen consecutive patients were enrolled in this retrospective study. There were ten procedures on the left hand-side and six on the right hand-side. Ages ranged from 22 to 77 years. Male to female ratio was 9:7. The preoperative diagnosis was non-functioning kidney in 9 cases and kidney tumour in the other 7 cases. All but two patients were discharged on the same day (87.5%). The readmission rate was 12.5%. One wheelchair-bound patient was readmitted four days after the procedure, because of adynamic ileus, and another one three days later because of wound infection. There were two grade I and one grade IV complications (Clavien system). The patient readmitted with the grade IV complication, wheelchair-bound because of cerebral palsy, was not a typical day surgery patient. Discussion: The vast majority of complications were minor and resulted in no residual disability. In our small series, day case laparoscopic nephrectomy was feasible and safe. Answer: The provided abstracts do not specifically address the concept of "routine day case laparoscopic pyeloplasty." Instead, they focus on laparoscopic cholecystectomy and nephrectomy as day case surgeries. However, the principles and findings from these studies could potentially be extrapolated to consider the feasibility of day case laparoscopic pyeloplasty.
The abstracts demonstrate a growing trend towards performing laparoscopic surgeries, such as cholecystectomy and nephrectomy, on a day case basis, which involves discharging patients on the same day as their surgery. These studies report on the safety, feasibility, patient satisfaction, and economic benefits of day case surgeries in various settings, including developed and developing countries (PUBMED:26730057, PUBMED:24027412, PUBMED:35677494, PUBMED:19088930, PUBMED:24665156, PUBMED:33544661, PUBMED:32875414, PUBMED:23493831, PUBMED:14991631, PUBMED:21505573). For instance, one study found that day case laparoscopic cholecystectomy is safe and feasible with high patient satisfaction and similar quality of life outcomes compared to routine (conventional) laparoscopic cholecystectomy (PUBMED:26730057). Another study highlighted that day case laparoscopic cholecystectomy could be safely promoted with improved facilities and patient enlightenment (PUBMED:24027412). Additionally, the quality of recovery for day case patients was found to be as good as, if not better than, for multi-day cases (PUBMED:35677494). The success of day case surgeries in these studies suggests that a similar approach could potentially be applied to laparoscopic pyeloplasty. However, it is important to note that each type of surgery has its own specific considerations and patient selection criteria that must be taken into account. The studies emphasize the importance of patient selection, postoperative care, and the availability of resources and support systems for successful day case surgeries (PUBMED:33544661, PUBMED:32875414). In conclusion, while the abstracts do not directly address day case laparoscopic pyeloplasty, the positive outcomes and experiences with other laparoscopic procedures suggest that with careful patient selection and appropriate resources, a paradigm shift towards routine day case laparoscopic pyeloplasty could be considered. Further research and trials specific to laparoscopic pyeloplasty would be necessary to fully assess its viability as a day case procedure.
Instruction: Is there a distinct form of developmental dyslexia in children with specific language impairment? Abstracts: abstract_id: PUBMED:19057171 Is there a distinct form of developmental dyslexia in children with specific language impairment? Findings from an orthographically regular language. Objectives: The aim of this study was to identify quantitative and qualitative differences between the reading and writing skills of children with developmental dyslexia and those of dyslexic children with a specific language impairment (SLI). Background: It is suggested that although the etiology of developmental dyslexia and SLI may be diverse, dyslexic children with SLI and their language-intact peers are comparable on a behavioral level. Methods: Three groups of second-grade children were compared on reading and writing tests with single words and nonwords: 15 dyslexic children with a history of SLI (SLI group), 15 dyslexic children with a typical pattern of language development (non-SLI group), and a control group of 30 children with no clinical history of learning disabilities or communication disorders. Results: Analysis of the results revealed the performances of both SLI and non-SLI dyslexic groups to be comparable in terms of speed, accuracy, and error typology. Conclusions: This study confirms that there are parallels between dyslexic children with language disorders and their dyslexic peers with intact language skills, at least in terms of their performance on reading and writing tests. abstract_id: PUBMED:16478378 Are specific language impairment and dyslexia distinct disorders? Purpose: The purpose of this study was to determine whether specific language impairment (SLI) and dyslexia are distinct developmental disorders. Method: Study 1 investigated the overlap between SLI identified in kindergarten and dyslexia identified in 2nd, 4th, or 8th grades in a representative sample of 527 children. Study 2 examined phonological processing in a subsample of participants, including 21 children with dyslexia only, 43 children with SLI only, 18 children with SLI and dyslexia, and 165 children with typical language/reading development. Measures of phonological awareness and nonword repetition were considered. Results: Study 1 showed limited but statistically significant overlap between SLI and dyslexia. Study 2 found that children with dyslexia or a combination of dyslexia and SLI performed significantly less well on measures of phonological processing than did children with SLI only and those with typical development. Children with SLI only showed only mild deficits in phonological processing compared with typical children. Conclusions: These results support the view that SLI and dyslexia are distinct but potentially comorbid developmental language disorders. A deficit in phonological processing is closely associated with dyslexia but not with SLI when it occurs in the absence of dyslexia. abstract_id: PUBMED:25703047 'MetaTaal': enhancing complex syntax in children with specific language impairment--a metalinguistic and multimodal approach. Background: Currently, most research on the effective treatment of morphosyntax in children with specific language impairment (SLI) pertains to younger children. In the last two decades, several studies have provided evidence that intervention for older school-age children with SLI can be effective. These metalinguistic intervention approaches teach grammatical rules explicitly and use shapes and colours as two-dimensional visual support. 
Reading or writing activities form a substantial part of these interventions. However, some children with SLI are poor readers and might benefit more from an approach that is less dependent on literacy skills. Aims: To examine the effectiveness of a combined metalinguistic and multimodal approach in older school-age children with SLI. The intervention was adapted to suit poor readers and targeted the improvement of relative clause production, because relative clauses still pose difficulties for older children with SLI. Methods & Procedures: Participants were 12 monolingual Dutch children with SLI (mean age 11;2). All children visited a special school for children with speech and language disorders in the Netherlands. A quasi-experimental multiple-baseline design was chosen to evaluate the effectiveness of the intervention. A set of tasks was constructed to test relative clause production and comprehension. Two balanced versions were alternated in order to suppress a possible learning effect from multiple presentations of the tasks. After 3 monthly baseline measurements, the children received individual treatment with a protocolled intervention programme twice a week during 5 weeks. The tests were repeated directly post-therapy and at a retention measurement 3 months later. During the intervention programme, the speech therapist delivering the treatment remained blind to the test results. Outcomes & Results: No significant changes were found during the baseline measurements. However, measurement directly post-therapy showed that 5 h of intervention produced significant improvement on the relative clause production tasks, but not on the relative clause comprehension task. The gains were also maintained 3 months later. Conclusions & Implications: The motor and tactile/kinesthetic dimensions of the 'MetaTaal' metalinguistic intervention approach are a valuable addition to the existing metalinguistic approaches. This study supports the evidence that grammatical skills in older school-age children with SLI can be remediated with direct intervention using a metalinguistic approach. The current tendency to diminish direct intervention for older children with SLI should be reconsidered. abstract_id: PUBMED:26079274 The attentional blink in typically developing and reading-disabled children. This study's research question was whether selective visual attention, and specifically the attentional blink (AB) as operationalized by a dual target rapid serial visual presentation (RSVP) task, can explain individual differences in word reading (WR) and reading-related phonological performances in typically developing children and reading-disabled subgroups. A total of 407 Dutch school children (Grades 3-6) were classified either as typically developing (n = 302) or as belonging to one of three reading-disabled subgroups: reading disabilities only (RD-only, n = 69), both RD and attention problems (RD+ADHD, n = 16), or both RD and a specific language impairment (RD+SLI, n = 20). The RSVP task employed alphanumeric stimuli that were presented in two blocks. Standardized Dutch tests were used to measure WR, phonemic awareness (PA), and alphanumeric rapid naming (RAN). Results indicate that, controlling for PA and RAN performance, general RSVP task performance contributes significant unique variance to the prediction of WR. 
Specifically, consistent group main effects for the parameter of AB(minimum) were found, whereas there were no AB-specific effects (i.e., AB(width) and AB(amplitude)) except for the RD+SLI group. Finally, there was a group by measurement interaction, indicating that the RD-only and comorbid groups are differentially sensitive for prolonged testing sessions. These results suggest that more general factors involved in RSVP processing may explain the group differences found. abstract_id: PUBMED:22876769 Is weak oral language associated with poor spelling in school-age children with specific language impairment, dyslexia or both? The purpose of this study was to test the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in school-age children. We compared fourth grade spelling accuracy in children with specific language impairment (SLI), dyslexia or both (SLI/dyslexia) to their typically developing grade-matched peers. Results of the study revealed that children with SLI performed similarly to their typically developing peers on a single-word spelling task. Alternatively, those with dyslexia and SLI/dyslexia evidenced poor spelling accuracy. Errors made by both those with dyslexia and SLI/dyslexia were characterized by numerous phonologic, orthographic and semantic errors. Cumulative results support the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in typically developing school-age children and their peers with SLI and dyslexia. Findings are provided as further support for the notion that SLI and dyslexia are distinct, yet co-morbid, developmental disorders. abstract_id: PUBMED:34185581 Spoken Word Learning in Children With Developmental Language Disorder or Dyslexia. Purpose Word learning difficulties have been documented in multiple studies involving children with dyslexia and developmental language disorder (DLD; see also specific language impairment). However, no previous studies have directly contrasted word learning in these two frequently co-occurring disorders. We examined word learning in second-grade students with DLD-only and dyslexia-only as compared to each other, peers with both disorders (DLD + dyslexia), and peers with typical development. We hypothesized that children with dyslexia-only and DLD-only would show differences in word learning due to differences in their core language strengths and weaknesses. Method Children (N = 244) were taught eight novel pseudowords paired with unfamiliar objects. The teaching script included multiple exposures to the phonological form, the pictured object, a verbal semantic description of the object, and spaced retrieval practice opportunities. Word learning was assessed immediately after instruction with tasks requiring recall or recognition of the phonological and semantic information. Results Children with dyslexia-only performed significantly better on existing vocabulary measures than their peers with DLD-only. On experimental word learning measures, children in the dyslexia-only and DLD + dyslexia groups showed significantly poorer performance than typically developing children on all word learning tasks. Children with DLD-only differed significantly from the TD group on a single word learning task assessing verbal semantic recall. 
Conclusions Overall, results indicated that children with dyslexia display broad word learning difficulties extending beyond the phonological domain; however, this contrasted with their relatively strong performance on measures of existing vocabulary knowledge. More research is needed to understand relations between word learning abilities and overall vocabulary knowledge and how to close vocabulary gaps for children with both disorders. Supplemental Material https://doi.org/10.23641/asha.14832717. abstract_id: PUBMED:17365085 Reading comprehension in children with specific language impairment: an examination of two subgroups. Background: In reading research, children with specific language impairment (SLI) have tended to be included in groups of children expected to have difficulties with both decoding and reading comprehension (generally poor readers). This is because generally children with specific language impairment display difficulties with phonology as well as syntax and/or semantics. However, children with specific language impairment are a heterogeneous group. Many children with specific language impairment have oral comprehension difficulties that are likely to limit reading comprehension. A subgroup of these children may exhibit intact phonological and decoding skills. If so, they would resemble the children with specific reading comprehension difficulties (poor comprehenders) reported in the literature. Aims: This study sought to identify a group of children with a poor comprehender reading profile amongst children with specific language impairment. It then compared the phonological and oral comprehension skills of the group of 15 poor comprehenders with a group of 15 generally poor readers with specific language impairment, to identify any differences in language skills. Secondarily, the study wanted to determine which of the language tasks best predicted group membership. Methods & Procedures: The study was carried out in two phases. In Phase 1, children with specific language impairment were assessed on the Woodcock Word Attack to identify a group with adequate decoding skills. These children had poor reading comprehension on the Neale Analysis of Reading Ability. From the poor decoders on the Word Attack, a second group of children, matched for age and gender, was selected to form the generally poor reader group. In Phase 2, the participants were assessed on a battery of phonological and oral comprehension tasks. Outcomes & Results: A group of children exhibiting a poor comprehender reading profile was found to exist amongst children with specific language impairment. As expected, the poor comprehenders performed significantly better than the generally poor readers on phonological awareness tasks. On the oral comprehension tasks, the two groups did not differ at the word and sentence level; however, the poor comprehenders had significantly weaker oral comprehension skills at the paragraph level. Conclusions: This study found that children with specific language impairment, who have equally poor reading comprehension but which differ in their decoding ability, differ not only in their performance on phonological tasks, but also on oral comprehension at the paragraph level. This indicates a need for paragraph-level oral comprehension to be included in assessment. In addition, educational and clinical intervention programmes for children with specific language impairment should ensure that they are meeting individual needs. 
abstract_id: PUBMED:24139661 Children with specific language impairment are not impaired in the acquisition and retention of Pavlovian delay and trace conditioning of the eyeblink response. Three converging lines of evidence have suggested that cerebellar abnormality is implicated in developmental language and literacy problems. First, some brain imaging studies have linked abnormalities in cerebellar grey matter to dyslexia and specific language impairment (SLI). Second, theoretical accounts of both dyslexia and SLI have postulated impairments of procedural learning and automatisation of skills, functions that are known to be mediated by the cerebellum. Third, motor learning has been shown to be abnormal in some studies of both disorders. We assessed the integrity of face-related regions of the cerebellum using Pavlovian eyeblink conditioning in 7- to 11-year-old children with SLI. We found no relationship between oral language or literacy skills and either delay or trace conditioning in the children. We conclude that this elementary form of associative learning is intact in children with impaired language or literacy development. abstract_id: PUBMED:20563782 Is dyslexia a form of specific language impairment? A comparison of dyslexic and language impaired children as adolescents. Two groups of adolescents with a childhood history of language impairment were compared with a group of developmentally dyslexic young people of the same age and nonverbal ability. The study also included two comparison groups of typically developing children, one of the same age as those in the clinical groups, and a younger comparison group of similar reading level to the dyslexic students. Tests of spoken and written language skills revealed that the adolescents with dyslexia were indistinguishable from those with resolved language impairments on spoken language tasks, and both groups performed at age-expected levels. However, both dyslexic readers and those with resolved specific language impairments showed deficits in phonological awareness. On written language tasks, a different pattern of performance was apparent. In reading and spelling, adolescents with dyslexia performed only as well as those with persistent oral language impairments and younger controls. However, their reading comprehension was better. The theoretical and educational implications of these findings are discussed.
In addition, if impairment in ACC3 stems from phonology or working memory difficulties, we expect ACC3 performance in both clinical groups to relate to performance on non-word repetition or forward/backward digit spans. We studied 2 groups of 21 children and adolescents, with DD and SLI (7-15 years) and age-matched TD controls. Results reveal significant weaknesses with ACC3 in DD and SLI groups compared to TD controls, but no difficulty for homophonous definite determiners, confirming a deficit relating specifically to syntactic complexity. As for links to phonology and working memory, a single correlation emerged between ACC3 and the backward digit span in SLI, but not in DD, suggesting different underlying sources for syntactic deficits in these populations. Clinical implications of these results are discussed. Answer: The research presented in the abstracts suggests that developmental dyslexia and specific language impairment (SLI) are distinct but potentially comorbid developmental language disorders. One study found that children with dyslexia or a combination of dyslexia and SLI performed significantly less well on measures of phonological processing than did children with SLI only and those with typical development, indicating that a deficit in phonological processing is closely associated with dyslexia but not with SLI when it occurs in the absence of dyslexia (PUBMED:16478378). Another study confirmed that there are parallels between dyslexic children with language disorders and their dyslexic peers with intact language skills, at least in terms of their performance on reading and writing tests (PUBMED:19057171). Furthermore, a study comparing dyslexic and language-impaired children as adolescents found that the adolescents with dyslexia were indistinguishable from those with resolved language impairments on spoken language tasks, and both groups performed at age-expected levels. However, both dyslexic readers and those with resolved specific language impairments showed deficits in phonological awareness (PUBMED:20563782). Additionally, a study examining syntactic profiles in children with developmental dyslexia (DD) and SLI found significant weaknesses with complex syntactic structures in both DD and SLI groups compared to typically developing controls, but no difficulty for homophonous definite determiners, confirming a deficit relating specifically to syntactic complexity (PUBMED:29775094). In summary, while there is evidence of overlap between developmental dyslexia and SLI, particularly in terms of phonological processing deficits, the research suggests that they are distinct disorders that can co-occur. Children with SLI may have mild deficits in phonological processing compared to typical children, but these are not as pronounced as in children with dyslexia. The studies also highlight the importance of considering the heterogeneity within the SLI population, as some children with SLI may have intact phonological and decoding skills but poor reading comprehension, resembling the profile of poor comprehenders (PUBMED:17365085).
Instruction: Do the physical characteristics of a virtual reality device contraindicate its use for balance assessment? Abstracts: abstract_id: PUBMED:31695345 Balance training using virtual reality improves balance and physical performance in older adults at high risk of falls. Purpose: Exercise programs designed for falls prevention have been proven effective in reducing falls by approximately 21%. Virtual reality may provide a viable alternative intervention for falls prevention. This study compared the effects of virtual reality training using the Balance Rehabilitation Unit (BRU) versus exercise using a modified Otago Exercise Programme (EX) on improving balance and physical performance in the short-term restorative care setting of the Gait and Balance Gym (Gabagym). Patients And Methods: This was a pre- and post-intervention study of 195 participants (median age 78 years, IQR 73-84; 67% female) who presented with a risk and/or history of falls. Participants were assigned to either EX (n=82) or BRU (n=63). Supervised sessions occurred twice a week for 6 weeks. Participants receiving interventions were compared to a separate group (n=50) with similar characteristics who did not receive any intervention. Balance and physical performance were assessed at initial and final attendance and included the 5 Times Sit to Stand (5STS) test, Timed Up and Go (TUG), gait speed and posturography assessment using the BRU. Fear of falling was assessed using the Falls Efficacy Scale. Handgrip strength and adherence were also monitored. Results: Post-intervention, EX and BRU groups achieved similar improvements and reported similar adherence rates (71% vs 72%, respectively). Both intervention groups improved in balance and physical performance measures. Both interventions showed significantly better improvement than the non-intervention group in TUG (p<0.001), gait speed (p=0.021), limits of stability in posturography assessment (p=0.008), FES-I score (p=0.013) and handgrip strength (p=0.021). Only the BRU group improved control of static posture in the eyes closed (p=0.002) and foam eyes closed (p=0.006) tasks. Conclusion: This study highlights the potential use of virtual reality as a practical alternative to improve outcomes of balance training for reduction of falls risk in older adults. abstract_id: PUBMED:18270385 Do the physical characteristics of a virtual reality device contraindicate its use for balance assessment? Context: Virtual reality environments may allow researchers to investigate functional balance performance without risks associated with testing in the real world. Objective: To investigate the effects of the mass of a head-mounted display (HMD) on balance performance. Design: Counterbalanced pretest-posttest. Setting: Virtual reality laboratory. Participants: 20 healthy college students. Intervention(s): Balance Error Scoring System (BESS) with a tracker-only headband and again with tracker plus HMD was performed. Main Outcome Measures: BESS error scores, elliptical sway area, and center of pressure travel distance were recorded. Results: No effect of the HMD mass on balance performance was observed. A significant stance by surface interaction was present but was negated when the HMD conditions were included in the model. Conclusions: The mass of a HMD has not been proven to adversely affect balance performance. These data suggest the HMD mass is not a contraindication to the use of immersive virtual environments in future concussion research involving balance.
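The HMD study above (PUBMED:18270385) is a within-subject comparison: each participant completes the BESS once with the tracker-only headband and once with the tracker plus HMD. A paired test is the simplest way to ask whether the added display mass shifts error scores. The sketch below only illustrates that kind of analysis on simulated data; the variable names and distributions are assumptions, not values from the study.

import numpy as np
from scipy import stats

# Hypothetical BESS error totals for 20 participants, one score per
# headgear condition (tracker-only headband vs. tracker plus HMD).
rng = np.random.default_rng(0)
bess_headband = rng.poisson(10, 20).astype(float)
bess_hmd = bess_headband + rng.normal(0.0, 1.5, 20)

# Paired t-test: does adding the HMD mass change error scores for the
# same participants?
t_stat, p_value = stats.ttest_rel(bess_headband, bess_hmd)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

# BESS errors are counts and often non-normal, so the Wilcoxon
# signed-rank test is a common non-parametric alternative.
w_stat, w_p = stats.wilcoxon(bess_headband, bess_hmd)
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.3f}")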
abstract_id: PUBMED:31946613 An Exploratory Study on the use of Virtual Reality in Balance Rehabilitation. Studies have shown the potential of Virtual Reality and motion tracking devices in physical rehabilitation. This paper addresses the topic of using non-immersive Virtual Reality therapeutic games with motion tracking in physical rehabilitation and describes an exploratory study performed in collaboration with a national public Rehabilitation Center about their use to motivate patients to perform exercises relevant for balance rehabilitation. The work involved developing and adapting mini-games to track patients' posture; tests with patients recovering from Spinal Cord Injury suggest that this type of game can be helpful in the recovery process, namely in patients' motivation for performing the therapeutic gestures. abstract_id: PUBMED:36298254 Virtual Reality-A Supplement to Posturography or a Novel Balance Assessment Tool? Virtual reality (VR) is a well-established technology in medicine. Head-mounted displays (HMDs) have made VR more accessible in many branches of medical research. However, its application in balance evaluation has been vague, and comprehensive literature on possible applications of VR in posture measurement is scarce. The aim of this review is to conduct a literature search on the application of immersive VR delivered using a head-mounted display in posturographic measurements. A systematic search of two databases, PubMed and Scopus, using the keywords "virtual reality" and "posturography," was performed following PRISMA guidelines for systematic reviews. Initial search results returned 89 non-duplicate records. Two reviewers independently screened the abstracts. Sixteen papers fulfilled the inclusion criteria and none of the exclusion criteria and were selected for complete text retrieval. An additional 16 records were identified from citation searching. Ultimately, 21 studies were included in this review. Virtual reality is often used as additional visual stimuli in static and dynamic posturography evaluation. Only one study has attempted to evaluate a VR environment in a head-mounted display as an independent method in the assessment of posture. Further research should be conducted to assess HMD VR as a standalone posturography replacement. abstract_id: PUBMED:26527045 Advantages and limitations of virtual reality for balance assessment and rehabilitation. Virtual reality (VR) is now commonly used in many domains because of its ability to provide a standardized, reproducible and controllable environment. In balance assessment, it can be used to control stimuli presented to patients and thus accurately evaluate their progression or compare them to different populations in standardized situations. In balance rehabilitation, VR allows the creation of new generation tools and at the same time the means to assess the efficiency of each parameter of these tools in order to optimize them. Moreover, with the development of low-cost devices, this rehabilitation can be continued at home, making access to these tools much easier, in addition to their entertaining and thus motivating properties. Nevertheless, and even more with low-cost systems, VR has limits that can alter the results of the studies that use it: the latency of the system (the delay cumulated on each step of the process from data acquisition on the patients to multimodal outputs); and distance perception, which tends to be underestimated in VR.
After having described why VR is an essential tool for balance assessment and rehabilitation and illustrated this statement with a case study, this review discusses the previous works in the domain with regard to the technological limits of VR. abstract_id: PUBMED:34311059 Effects of virtual reality versus conventional balance training on balance of the elderly. Objective: The world's aging population is growing, and the reduction in physical function caused by aging is an important issue, particularly because it causes balance disorders and an increased risk of falling. This study aimed to compare the effects of virtual reality training (VRT) and conventional balance training (CBT) on the balance of the elderly. Methods: The present study was conducted on 36 elderly people (men and women) living in nursing homes. Participants were randomly divided into three groups: virtual reality training (6 males, 6 females; age = 66.5 ± 3.8 years), conventional balance training (6 males, 6 females; age = 67.5 ± 3.1 years), and control (5 males, 7 females; age = 66.7 ± 3.2 years). Each group participated in a 60-min session, 3 times per week, for 9 weeks. To assess the participants' balance, the balance tests used were single-leg stance (SLS) with open and closed eyes, the Functional Reach Test (FRT), the Timed Up and Go Test (TUG), and the Fullerton Advanced Balance Scale (FABS). Data analysis was done using paired t-test and analysis of covariance in SPSS software version 24 at a significance level of P = 0.05. Results: In both groups (VRT, CBT), SLS with open and closed eyes, FRT, TUG, and FABS were significantly improved (P < 0.05). After the intervention, changes in both groups were similar (P > 0.05), which indicates that neither the VRT nor the CBT training method was superior to the other. Conclusion: According to the results of this study, it seems that a virtual reality training program can be used as a new training method to improve the elderly's balance in daily programs of nursing homes. abstract_id: PUBMED:38255047 Effectiveness of Virtual Reality Therapy on Balance and Gait in the Elderly: A Systematic Review. Virtual reality (VR) therapies are presently utilized to treat physical and cognitive impairments among elderly people. This systematic review aims to collect the most recent evidence on the effectiveness of VR in improving balance and gait among healthy elderly individuals, in comparison with other therapies. A literature search was conducted using the PubMed, SCOPUS, PEDro, and WoS databases, by selecting randomized clinical trials that evaluated balance, both static and dynamic, as well as gait in a population of healthy older adults who underwent virtual reality therapy. The methodological quality of the studies was assessed using the PEDro scale. After eligibility criteria were applied and duplicates were removed, 20 studies were selected out of 1705 initially identified. The present systematic review concludes that virtual reality therapy is more effective than minimal intervention or usual care in enhancing static balance, dynamic balance, and gait in healthy elderly individuals. Moreover, virtual reality therapy yields better outcomes compared to traditional balance training and physical exercise in improving balance and gait in this demographic. However, both methods have shown effectiveness. abstract_id: PUBMED:24764645 The Effects of Virtual Reality-based Balance Training on Balance of the Elderly.
[Purpose] The objective of this study was to determine the effects of virtual reality-based balance training on balance of the elderly. [Methods] The subjects were 32 healthy elderly people aged between 65 and 80, who were divided into a VR (virtual reality) training group (n=17) and a control group (n=15). The VR training group engaged in a 30-minute exercise session using Wii Fit three times a week for eight weeks, while the control group received no intervention. The balance of the two groups was measured before and after the intervention. [Results] According to the Romberg Test conducted to examine the effects of the training on balance, both the area covered by the body's center of pressure movement, and movement distances per unit area of the body's center of pressure envelope significantly decreased in the VR training group. Moreover, the two groups showed significant differences in balance. [Conclusion] Virtual reality training is effective at improving the balance of the healthy elderly. Thus, virtual reality training can be proposed as a form of fall prevention exercise for the elderly. abstract_id: PUBMED:26656425 A New Approach to Improve Cognition, Muscle Strength, and Postural Balance in Community-Dwelling Elderly with a 3-D Virtual Reality Kayak Program. Aging is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity. Recently, intervention methods with virtual reality have been introduced, providing an enjoyable therapy for the elderly. The aim of this study was to investigate whether a 3-D virtual reality kayak program could improve the cognitive function, muscle strength, and balance of community-dwelling elderly. Importantly, kayaking involves most of the upper body musculature and requires balance control. Seventy-two participants were randomly allocated into the kayak program group (n = 36) and the control group (n = 36). The two groups were well matched with respect to general characteristics at baseline. The participants in both groups performed a conventional exercise program for 30 min, and then the 3-D virtual reality kayak program was performed in the kayak program group for 20 min, two times a week for 6 weeks. Cognitive function was measured using the Montreal Cognitive Assessment. Muscle strength was measured using the arm curl and handgrip strength tests. Standing and sitting balance was measured using the Good Balance system. The post-test was performed in the same manner as the pre-test; the overall outcomes such as cognitive function (p < 0.05), muscle strength (p < 0.05), and balance (standing and sitting balance, p < 0.05) were significantly improved in the kayak program group compared to the control group. We propose that the 3-D virtual reality kayak program is a promising intervention method for improving the cognitive function, muscle strength, and balance of the elderly. abstract_id: PUBMED:32598189 The Use of Virtual Reality Through Head-Mounted Display on Balance and Gait in Older Adults: A Scoping Review. Introduction: Falls are the most prominent cause of injury and injury-related deaths among older adults (OAs). Virtual reality has been utilized as a method of improving balance and gait in OAs. However, the use of virtual reality through a head-mounted display (VR-HMD) in this area is limited.
Objective: The objective of this scoping review was to identify research that used VR-HMD in relation to balance and gait in OAs and to evaluate how VR-HMD is being used with this population. Materials and Methods: A systematic search of the literature was carried out from June 2019 to July 2019 through the following databases: Scopus, Web of Science, PUBMED, and PsycInfo. Eligible studies involved the use of VR-HMD to assess or intervene in balance or gait outcomes of OAs (≥65 years). Articles were not limited to any specific study design or by the year of publication. Results: Our search identified 306 possible articles, of which eight citations met the eligibility criteria. Four studies utilized VR-HMD as an assessment tool or to perturb the balance, while the other four used VR-HMD in their interventions. Conclusions: Currently, it is not clear whether VR-HMD alone is an effective tool for improving balance and gait. However, this review suggests that it is feasible to use VR-HMD with OAs to affect balance and gait. More research is needed in this area, although there appears to be great potential in utilizing VR-HMD with OAs to improve balance outcomes. Answer: The physical characteristics of a virtual reality device, specifically the mass of a head-mounted display (HMD), do not contraindicate its use for balance assessment. A study investigating the effects of the mass of an HMD on balance performance found no adverse effect on balance performance, suggesting that the mass of an HMD is not a contraindication to the use of immersive virtual environments in balance-related research (PUBMED:18270385). Additionally, virtual reality (VR) has been shown to be a viable tool for balance assessment and rehabilitation in older adults, with various studies demonstrating its effectiveness in improving balance and physical performance (PUBMED:31695345, PUBMED:34311059, PUBMED:38255047, PUBMED:24764645, PUBMED:26656425). While there are advantages and limitations to using VR for balance assessment and rehabilitation, the technological limits, such as system latency and distance perception, do not appear to significantly hinder its use in this context (PUBMED:26527045). Moreover, a scoping review on the use of VR through HMD in older adults suggests that it is feasible to use VR-HMD to affect balance and gait, although more research is needed to determine its effectiveness as a standalone tool (PUBMED:32598189). Overall, the current evidence does not indicate that the physical characteristics of VR devices are a contraindication for their use in balance assessment.
Instruction: Should studies of risk factors for musculoskeletal disorders be stratified by gender? Abstracts: abstract_id: PUBMED:19305934 Should studies of risk factors for musculoskeletal disorders be stratified by gender? Lessons from the 1998 Québec Health and Social Survey. Objectives: Several studies have reported male-female differences in the prevalence of symptoms of work-related musculoskeletal disorders (MSD), some arising from workplace exposure differences. The objective of this paper was to compare two strategies analyzing a single dataset for the relationships between risk factors and MSD in a population-based sample with a wide range of exposures. Methods: The 1998 Québec Health and Social Survey surveyed 11 735 respondents in paid work and reported "significant" musculoskeletal pain in 11 body regions during the previous 12 months and a range of personal, physical, and psychosocial risk factors. Five studies concerning risk factors for four musculoskeletal outcomes were carried out on these data. Each included analyses with multiple logistic regression (MLR) performed separately for women, men, and the total study population. The results from these gender-stratified and unstratified analyses were compared. Results: In the unstratified MLR models, gender was significantly associated with musculoskeletal pain in the neck and lower extremities, but not with low-back pain. The gender-stratified MLR models identified significant associations between each specific musculoskeletal outcome and a variety of personal characteristics and physical and psychosocial workplace exposures for each gender. Most of the associations, if present for one gender, were also found in the total population. But several risk factors present for only one gender could be detected only in a stratified analysis, whereas the unstratified analysis added little information. Conclusions: Stratifying analyses by gender is necessary if a full range of associations between exposures and MSD is to be detected and understood. abstract_id: PUBMED:15458009 Gender differences in the relations between work-related physical and psychosocial risk factors and musculoskeletal complaints. Gender differences in the prevalence of musculoskeletal complaints might be explained by differences in the effect of exposure to work-related physical and psychosocial risk factors. A systematic review was conducted to examine gender differences in the relations between these risk factors and musculoskeletal complaints. Several electronic databases were searched. The strength of the evidence was determined on the basis of the methodological quality and consistency of the study results. For lifting, strong evidence was found that men have a higher risk of back complaints than women. The same was found for the relation between hand-arm vibration and neck-shoulder complaints. For arm posture, strong evidence was found that women have a higher risk of neck-shoulder complaints than men. For social support, no evidence of a gender difference was found for either neck-shoulder or back complaints. For hand-wrist and lower-extremity complaints, inconclusive evidence was found due to a lack of high-quality studies. abstract_id: PUBMED:19337673 Is there a gender difference in the effect of work-related physical and psychosocial risk factors on musculoskeletal symptoms and related sickness absence? 
Objectives: The objective of this study was to determine whether there are gender differences in the effect of exposure to work-related physical and psychosocial risk factors on low back, neck, shoulder, or hand-arm symptoms and related sickness absence. Methods: Data from a prospective cohort (study on musculoskeletal disorders, absenteeism, stress and health) with a follow-up period of three years were used. Questionnaires were used to assess exposure to risk factors and musculoskeletal symptoms. Sickness absence was registered continuously. Female-to-male gender ratios (GR) were calculated to determine whether there were any differences in the effect. A GR value >1.33 or <0.75 was regarded as relevant. Results: Except for the effect of bending the wrist and the neck backwards (GR 1.52-2.55), men generally had a higher risk of symptoms (GR range 0.50-0.68) with equal exposure. For sickness absence, a GR value of >1.33 was found for twisting the upper body, working in uncomfortable postures, twisting the wrist, bending the neck backwards, and coworker and supervisor support (GR range 1.66-2.63). For driving vehicles, hand-arm vibration, squeezing, working above shoulder level or below knee level, reaching, twisting the neck, job demands, and skill discretion, the GR value was <0.75. For job satisfaction, a GR value of 0.50 was found for absence due to back symptoms, while the GR value was 1.78 for sickness absence due to neck, shoulder, or hand-arm symptoms. Conclusions: Although women are expected to be more vulnerable to exposure to work-related risk factors, the results of this study show that, in many cases, men are more vulnerable. This study could not explain the gender difference in musculoskeletal symptoms among workers. abstract_id: PUBMED:31670156 Identification of gender differences in the factors influencing shoulders, neck and upper limb MSD by means of multivariate adaptive regression splines (MARS). In the present research, models based on multivariate adaptive regression splines (MARS) are proposed to study the influence of gender in the factors affecting the development of shoulders, neck and upper limb MSD. Two different MARS models, corresponding to men and women, are constructed to identify variables with the strongest effect on the target MSD. Both models are able to successfully predict the occurrence of the studied disorders. Men seem to be more vulnerable to physical risk factors and some other working conditions, whereas women appear to be more affected by psychosocial risk factors and activities carried out outside their working hours. According to the results, gender needs to be considered to ensure the success and effectiveness of ergonomic interventions on the whole working population. abstract_id: PUBMED:37669536 Development and evaluation of the gender-specific CONSTANCES job exposure matrix for physical risk factors in France. Objectives: This study aimed to construct and evaluate a gender-specific job exposure matrix (JEM) for 27 physical work exposures, based on self-report. Methods: We constructed a JEM using questionnaire data on current physical exposures from 29 381 male and 35 900 female asymptomatic workers aged 18-69 years in the French CONSTANCES cohort study. We excluded workers with musculoskeletal pain to reduce potential reporting bias. We grouped 27 self-reported physical exposures using the French national job codes and stratified by gender.
We compared individual and group-based exposures using the performance indicators Cohen's kappa (κ), sensitivity, specificity, and area under the receiver operating curve (AUC). Results: JEM validation showed fair-to-moderate agreement (κ 0.21-0.60) for most physical exposures for both genders except for 'reach behind' (poor), 'bend neck' (poor), 'finger pinch' (poor), 'standing' (good), 'use computer screen' (good), and 'use keyboard or scanner' (good). We found the highest AUC for 'standing' (men 0.85/ women 0.87), 'kneel/squat' (men 0.80/women 0.81), 'use computer screen' (men/women 0.81), and 'use keyboard or scanner' (men 0.82/ women 0.84). The AUC was <0.60 for only three exposures: 'bend neck' (men 0.58/women 0.57), 'finger pinch' (men 0.56/ women 0.55), and 'reach behind' (men 0.54/ women 0.51). Conclusion: The constructed JEM validation measures were comparable for men and women for all exposures. Further research will examine the predictive ability of this gender-specific JEM for musculoskeletal disorders and the relevance of gender stratification in this process, given the known accuracy of each exposure. abstract_id: PUBMED:33987772 Risk factors for musculoskeletal injuries in military personnel: a systematic review with meta-analysis. Objective: To investigate the risk factors for musculoskeletal injuries in military personnel. Methods: A systematic literature search was carried out in August 2019 and updated in July 2020 without language or time filters. The inclusion criteria were prospective studies that investigated the risk factors for injuries in military personnel. Only risk factors analyzed by at least ten studies were selected for the meta-analysis. For data analysis, the RevMan5.3 program was used to compare the number of participants with injuries between high- or low-risk groups. The measurement of dichotomous variables was one of the selected parameters for the analysis, as well as the Mantel-Haenszel statistical method, random-effects model, and analysis with a relative risk (RR) and 95% confidence interval for the included studies. Results: A total of 2,629 studies were identified through databases. Thirty-four studies met the inclusion criteria. The groups considered at risk were the oldest [RR = 1.22; (95% CI 1.06-1.41)], with overweight or obesity [RR = 1.27; (95% CI 1.08-1.48)], with previous injuries [RR = 1.15; (95% CI 1.01-1.30)], and with the worst performance in running tests of 1,600-3,200 m [RR = 1.87; (95% CI 1.28-2.71)]. Gender, ethnicity, and smoking were not associated with injuries. However, a subgroup analysis showed that among studies with a follow-up of fewer than 12 months, women were more likely to develop injuries (RR = 2.44, 95% CI 1.65-3.60). Conclusion: Age, overweight or obesity, previous injuries, and performance in the 1600-3200 m run are associated with an increased risk of injury in the military. abstract_id: PUBMED:19753591 Risk factors for work-related musculoskeletal disorders: A systematic review of recent longitudinal studies. Objective: This systematic review was designed and conducted in an effort to evaluate the evidence currently available for the many suggested risk factors for work-related musculoskeletal disorders. Methods: To identify pertinent literature we searched four electronic databases (Cinahl, Embase, Medline, and The Cochrane Library). The search strategies combined terms for musculoskeletal disorders, work, and risk factors. Only case-control or cohort studies were included.
Results: A total of 1,761 non-duplicated articles were identified and screened, and 63 studies were reviewed and integrated in this article. The risk factors identified for the development of work-related musculoskeletal disorders were divided and organized according to the affected body part, type of risk factor (biomechanical, psychosocial, or individual) and level of evidence (strong, reasonable, or insufficient evidence). Conclusions: Risk factors with at least reasonable evidence of a causal relationship for the development of work-related musculoskeletal disorders include: heavy physical work, smoking, high body mass index, high psychosocial work demands, and the presence of co-morbidities. The most commonly reported biomechanical risk factors with at least reasonable evidence for causing WMSD include excessive repetition, awkward postures, and heavy lifting. Additional high methodological quality studies are needed to further understand and provide stronger evidence of the causal relationship between risk factors and work-related musculoskeletal disorders. The information provided in this article may be useful to healthcare providers, researchers, and ergonomists interested in risk identification and design of interventions to reduce the rates of work-related musculoskeletal disorders. abstract_id: PUBMED:19294319 Gender adjustment or stratification in discerning upper extremity musculoskeletal disorder risk? Objectives: The aim was to explore whether "adjustment" for gender masks important exposure differences between men and women in a study of rotator cuff syndrome (RCS) and carpal tunnel syndrome (CTS) and work exposures. Methods: This cross-sectional study of 733 subjects in 12 health care and manufacturing workplaces used detailed individual health and work exposure assessment methods. Multiple logistic regression analysis was used to compare gender stratified and adjusted models. Results: Prevalence of RCS and CTS among women was 7.1% and 11.3% respectively, and among men 7.8% and 6.4%. In adjusted (gender, age, body mass index) multivariate analyses of RCS and CTS, gender was not statistically significantly different. For RCS, upper arm flexion ≥45 degrees and forceful pinch increased the odds in the gender-adjusted model (OR 2.66, 95% CI 1.26-5.59) but primarily among women in the stratified analysis (OR 6.68, 95% CI 1.81-24.66 versus OR 1.45, 95% CI 0.53-4.00). For CTS, with wrist radial/ulnar deviation ≥4% time and lifting ≥4.5kg >3% time, the adjusted OR was higher for women (OR 4.85, 95% CI 2.12-11.11) and in the gender stratified analyses, the odds were increased for both genders (women OR 5.18, 95% CI 1.70-15.81 and men OR 3.63, 95% CI 1.08-12.18). Conclusions: Gender differences in response to physical work exposures may reflect gender segregation in work and potential differences in pinch and lifting capacity. Reduction in these exposures may reduce prevalence of upper extremity disorders for all workers. abstract_id: PUBMED:9215436 Challenges in assessing risk factors in epidemiologic studies on back disorders. In epidemiologic studies on musculoskeletal disorders, some risk factors, especially physical load, cannot be determined independently from the worker. Posture, movement and external load are the result both of physical work requirements forced on the worker and of the worker's capacity to adopt particular techniques. Risk factors are also adjusted in relation to the worker's health.
This paper presents a dynamic model that links exposure to risk factors for back pain and disability. Its aim is to help identify core elements in exposure assessment strategies for epidemiologic studies on back disorders. In this dynamic model, risk factors are determined relative to health status in order to distinguish between etiological and prognostic factors. Measurement techniques for various risk factors are classified into self-reports, observations, and direct instrumentation. Features of commonly used techniques are discussed with respect to feasibility, accuracy, and precision. In addition, consideration is given to the optimum allocation of measurements taking into account the effects of random and systematic variation in exposure due to tasks, workplaces, and workers. abstract_id: PUBMED:10633243 The prevalence of musculoskeletal complaints among women in Tijuana, Mexico: sociodemographic and occupational risk factors. The authors interviewed an age- and occupation-stratified sample of 466 women, aged 18-40, from 12 Tijuana neighborhoods, about sociodemographic characteristics, work and reproductive history, and musculoskeletal complaints. A total of 29.8% reported experiencing aches or pain in the low back, 38.3% in the upper back, 26.4% in the neck/shoulders, 18.2% in the hand/wrist, and 28.3% in the legs in the preceding year. Both sociodemographic and occupational factors were associated with these complaints. Very low educational attainment, having substandard housing, being the head of household, and being a migrant were each associated with an increased prevalence of one or more musculoskeletal complaints. In general, working outside the home increased the risk of musculoskeletal complaints. Compared with women who had not worked in the preceding 30-month period, those working in the maquiladora had 40-90% higher risks of upper back, neck/shoulder, and hand/wrist pain. Compared with women working outside the maquiladora, maquiladora women workers had 20% higher risks of low back, upper back, and neck/shoulder complaints. More detailed studies of the incidences of musculoskeletal disorders and of specific etiologic risk factors within the maquiladora industry are warranted. Future studies should concurrently evaluate sociodemographic risk factors. Answer: Yes, studies of risk factors for musculoskeletal disorders (MSDs) should be stratified by gender. Evidence suggests that there are significant differences in the prevalence of MSD symptoms and the associations between risk factors and MSD outcomes between men and women. Stratifying analyses by gender allows for the detection and understanding of a full range of associations between exposures and MSDs that may be present for one gender but not the other (PUBMED:19305934). For example, a systematic review found strong evidence that men have a higher risk of back complaints from lifting and neck-shoulder complaints from hand-arm vibration, while women have a higher risk of neck-shoulder complaints from arm posture. However, no evidence of a gender difference was found for social support in relation to neck-shoulder or back complaints (PUBMED:15458009). Another study showed that men generally had a higher risk of symptoms with equal exposure to certain work-related physical and psychosocial risk factors, but women had a higher risk of related sickness absence for some factors (PUBMED:19337673). 
Furthermore, gender differences were identified in the factors influencing shoulders, neck, and upper limb MSD, with men being more vulnerable to physical risk factors and women more affected by psychosocial risk factors (PUBMED:31670156). A gender-specific job exposure matrix (JEM) for physical work exposures also supported the relevance of gender stratification, showing comparable validation measures for men and women for all exposures (PUBMED:37669536). In military personnel, gender was not associated with injuries in general, but a subgroup analysis indicated that women were more likely to develop injuries in studies with a follow-up of fewer than 12 months (PUBMED:33987772). Additionally, gender adjustment in a study of rotator cuff syndrome (RCS) and carpal tunnel syndrome (CTS) revealed that gender differences in response to physical work exposures might reflect gender segregation in work and potential differences in capacity (PUBMED:19294319). In conclusion, stratifying by gender is necessary to accurately identify and address the specific risk factors for MSDs in men and women, which can differ due to biological, psychosocial, and occupational factors.
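The methodological point above, that gender adjustment can mask exposure effects which stratification reveals, can be illustrated with a minimal analysis sketch. This is not code from any of the cited studies: the synthetic data, variable names, and effect sizes are invented purely for illustration, and a statsmodels logistic regression is used only as one convenient way to contrast a pooled, gender-adjusted odds ratio with gender-stratified odds ratios.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic toy data (purely illustrative; not data from any cited study).
rng = np.random.default_rng(0)
n = 2000
sex = rng.choice(["F", "M"], size=n)
exposure = rng.integers(0, 2, size=n)
age = rng.normal(40, 10, size=n)
bmi = rng.normal(26, 4, size=n)
# Simulate an outcome whose exposure effect is stronger in women than in men.
linpred = -3 + 0.05 * (age - 40) + 0.04 * (bmi - 26) + np.where(sex == "F", 1.2, 0.3) * exposure
outcome = rng.binomial(1, 1 / (1 + np.exp(-linpred)))
df = pd.DataFrame({"outcome": outcome, "exposure": exposure, "age": age, "bmi": bmi, "sex": sex})

# Gender-adjusted model: one pooled exposure effect, with sex entered only as a covariate.
adjusted = smf.logit("outcome ~ exposure + age + bmi + C(sex)", data=df).fit(disp=False)
print("adjusted OR:", round(float(np.exp(adjusted.params["exposure"])), 2))

# Gender-stratified models: a separate exposure effect per gender, which can
# surface an association that the pooled model averages away.
for s, sub in df.groupby("sex"):
    m = smf.logit("outcome ~ exposure + age + bmi", data=sub).fit(disp=False)
    lo, hi = m.conf_int().loc["exposure"]
    print(f"{s}: OR = {np.exp(m.params['exposure']):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")

Markedly different stratified odds ratios, as in the rotator cuff example above, argue for reporting gender-specific estimates rather than a single adjusted one.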
Instruction: Is bacterial colonisation of the tonsillar fossa a factor in post-tonsillectomy haemorrhage? Abstracts: abstract_id: PUBMED:17445306 Is bacterial colonisation of the tonsillar fossa a factor in post-tonsillectomy haemorrhage? Objectives: To identify if there is a link between bacterial colonisation of the tonsillar fossa and post-tonsillectomy haemorrhage. Study Design And Setting: Prospective non-interventional study of 105 patients who underwent tonsillectomy during a seven-month period. The study took place in a secondary care centre, the West Middlesex University Hospital. Participants: The participants were 105 patients who consecutively underwent tonsillectomy. The exclusion criteria were any patients with suspected or known malignancy, or known bleeding dyscrasias. The participants underwent microbiological sampling of the tonsil pre-operatively. Main Outcome Measures: The outcome measures were primary or secondary bleeding, defined as any evidence of haemorrhage in the tonsillar fossae. Results: Twenty-four per cent of patients undergoing tonsillectomy had positive cultures from their tonsils pre-operatively. Patients with bacterial colonisation of the tonsillar fossa pre-operatively had an increased rate of post-tonsillectomy haemorrhage (odds ratio 3.8, 95 per cent confidence interval 1.1-12.1, p = 0.04). Conclusion: This prospective study has found a relationship between bacterial colonisation of the tonsillar fossa and post-tonsillectomy haemorrhage. This suggests that there may be an argument for the use of antibiotics in those cases with positive pre-operative cultures. In view of the types of pathogens isolated, we feel that the management of a post-tonsillectomy bleed should include a beta-lactamase-inhibiting antibiotic. abstract_id: PUBMED:37024450 Associations Between Post-tonsillectomy Hemorrhage and Anatomical Subsites of the Tonsillar Fossa. Objective: This study presents a novel classification of the anatomical subsites of the tonsillar fossa and discusses their associations with post-tonsillectomy hemorrhage (PTH) after extracapsular tonsillectomy. Materials And Methods: Coblation tonsillectomy was performed on three adult cadavers, and the anatomical subsites of the tonsillar fossa were defined based on the distribution of the tonsillar feeding artery: the upper pole (subsite A), most of the tonsil body (subsite B), the inferior tonsil body (subsite C), and components of the lower pole (subsites D and E). Extracapsular tonsillectomy was prospectively performed using various surgical techniques and PTH was evaluated. Results: The cadaveric study revealed that the intra- and extra-capsular vessel topographies were essentially identical. Although the demarcation lines varied either up or down by a few millimeters, the arterial vascular network was particularly dense at subsites D and E, and the vessel diameter at these subsites, especially subsite E, was significantly greater than at subsite C. Of 680 patients who underwent tonsillectomy, PTH developed early in 13 (31.7%) and late in 28 (68.3%). Surgical interventions were required by 29/41 patients (70.7%). Subsites D and E were the most common subsites of late PTH and of PTH that required intervention. Such intervention was rarely necessary when PTH developed at subsite A or B.
Conclusions: The new classification of the anatomical subsites of the tonsillar fossa aids inexperienced surgeons and provides an anatomical rationale for variation in surgical technique that minimizes vascular injury, thus improving safety. abstract_id: PUBMED:38039804 Tonsil microbiome in pediatric patients with post tonsillectomy hemorrhage for tonsillar hypertrophy. Objective: This study aimed to compare the tonsillar microbiota between post tonsillectomy patients with bleeding and without bleeding, and to investigate the potential role of tonsillar microbiota in the development of post-tonsillectomy hemorrhage (PTH). Methods: Nineteen tonsillar tissues from PTH patients and 21 tissues from control patients were collected. Metagenomic sequencing was used to compare the microbiota in PTH and control groups. Alpha diversity indices were used to compare the richness and evenness of the microbiota between the two groups. PCoA and NMDS analyses were used to evaluate beta diversity. LDA analysis was conducted to identify significantly abundant genera. Results: No significant difference in alpha diversity indices was found between PTH and control patients. The dominant bacteria in the tonsillar microbiota were Haemophilus, Streptococcus, and Fusobacterium. PCoA and NMDS analyses showed significant differences in beta diversity between PTH and control patients. PTH patients had a significantly higher relative abundance of Neisseria, Capnocytophaga, and Veillonella. Capnocytophaga was also identified as a significantly abundant genus by LDA analysis. Conclusion: This study demonstrates that there is a difference in the tonsillar microbiota between PTH and control patients. The results suggest that Neisseria, Capnocytophaga, and Veillonella may be associated with the development of PTH. These findings provide new insights into the potential role of the tonsillar microbiota in the development of PTH, and may help to develop new strategies for preventing and treating this potentially life-threatening complication. abstract_id: PUBMED:24508083 Effect of cold-water cooling of tonsillar fossa and pharyngeal mucosa on post-tonsillectomy pain. Purpose: Post-tonsillectomy pain is a notable concern and thermal injury produced by electric surgical devices is considered a main cause. Intraoperative cooling of the tonsillar fossa and pharyngeal mucosa with cold water has effectively reduced postoperative pain, but no studies have fully evaluated the effects of this technique with a proper study design. We assessed mucosal cooling in two groups of patients undergoing the same surgical technique by a single surgeon, with one group receiving cold-water cooling and the other group as a control. Methods: Forty patients who underwent monopolar electrocautery tonsillectomy were randomly assigned to two groups (n = 20 for each group). Group 1 received cooling of the tonsillar fossa and pharyngeal mucosa with 4 °C saline just after removal of each tonsil whereas Group 2 did not receive cooling. Postoperative pain was recorded on operation day and 1, 2, 4, 7, and 10 days postoperatively. Post-tonsillectomy pain, return to normal diet, and incidence of postoperative bleeding were compared between the groups. Results: Post-tonsillectomy pain on 6 selected days and overall pain during the 10-day follow-up period were significantly lower in Group 1. However, return to normal diet and incidence of postoperative bleeding did not differ significantly between the groups. 
Conclusions: Intraoperative application of cold water after tonsillectomy significantly reduced postoperative pain. We recommend cooling the tonsillar fossa and pharyngeal mucosa with cold water during tonsillectomy to easily and effectively reduce post-tonsillectomy pain. abstract_id: PUBMED:32232479 Trigeminal Nerve Blockade in the Pterygopalatine Fossa for the Management of Postoperative Pain in Three Adults Undergoing Tonsillectomy: A Proof-of-Concept Report. Setting: Post-tonsillectomy pain in adults can be severe and is often poorly controlled. Pain can lead to decreased oral intake, bleeding, longer hospital stays, emergency department visits, dehydration, and weight loss. Due to persistent pain despite scheduled medications, other methods for pain control are needed. Local/regional anesthetic options have been previously studied in this population. Unfortunately, neither the injection of local anesthetics into the tonsillar fossa nor the postoperative topical application of local anesthetics to the tonsillar bed has demonstrated efficacy in large systematic reviews. Patients: Here we report on the post-tonsillectomy pain experience of three patients who were treated with perioperative nerve blocks placed in the pterygopalatine fossa. This represents an as-yet unexplored option for post-tonsillectomy pain control. Intervention: After induction of general anesthesia, before surgical incision, a 25-gauge spinal needle was advanced into the pterygopalatine fossa using a suprazygomatic, ultrasound-guided approach. Ropivacaine and dexamethasone were deposited into the pterygopalatine fossa. Results: All three patients experienced excellent pain control for the duration of their recovery and required ≤10 mg of oxycodone over the two weeks after surgery. Conclusions: Our case series of three patients provides proof of concept that use of nerve blocks in the pterygopalatine fossa can be useful for the control of post-tonsillectomy pain. Further study is needed to confirm these initial results. abstract_id: PUBMED:23021465 Effect of tonsillar fossa closure on postoperative pain and bleeding risk after tonsillectomy. Objective: To determine if closing the tonsil fossa after tonsillectomy leads to less pain and bleeding. Study Design: Interventional, Randomized, Single Blind, Active Control, Single Group Assignment, Safety/Efficacy Study. Follow-up: 2 months. Methods: Generalized Estimating Equations (GEE) analyzed effects of tonsillar pillar closure, surgeon experience and interaction on outcomes. Setting: Primarily academic tertiary care referral center, institutional practice, primarily children, both ambulatory and hospitalized care. Subjects: 763 subjects (age 8-264 months) undergoing tonsillectomy. Exclusions: suspected malignancy or active peritonsillar abscess. At the discretion of the attending surgeon, patients undergoing tonsillectomy during the 4-year study period were offered participation. A computer selected the side closed. 131 subjects withdrew (complete lack of follow-up information) after the first 72 h. Intervention: 3-0 chromic sutures on tapered needles to close one tonsillar fossa. The subject was not told which side was closed. Main Outcome Measures: postoperative bleeding (at any time) and pain reports were sought on days 1, 7, 14, 21, and 28. Results: Closure of the tonsillar fossa did not change the risk of bleeding. Closing the tonsillar fossa was associated with a 40% increase in the odds of postoperative pain.
In tonsillar fossa sides left open, greater surgeon experience decreased the risk of bleeding. In closed sides, greater surgeon experience increased the risk of bleeding (p < 0.05). Conclusions: Suture closure of the tonsillar fossa after tonsillectomy does not reduce the risk of bleeding. Additionally, closing the tonsillar fossa increased postoperative pain. Level Of Evidence: 1b (individual randomized controlled trial). abstract_id: PUBMED:20379405 Spontaneous tonsillar hemorrhage and post-tonsillectomy hemorrhage. Spontaneous tonsillar hemorrhage from infectious causes is extremely rare and post-tonsillectomy hemorrhage, although also relatively rare, is an unavoidable complication of the procedure. Hemorrhage in association with tonsillitis or tonsillectomy is potentially dangerous and can be life threatening. We report here the presentation and management of a 42-yr-old man with severe spontaneous hemorrhage from infected tonsils and post-tonsillectomy hemorrhage. We suggest that if attempts to control the bleeding are not successful, or if severe spontaneous tonsillar hemorrhage occurs repeatedly or a malignancy is suspected, tonsillectomy and close postoperative follow-up are recommended. abstract_id: PUBMED:33728170 Endovascular Embolization for Control of Post-Tonsillectomy Hemorrhage. Post-operative hemorrhage is a potentially life-threatening complication of tonsillectomy. While standard surgical maneuvers including the use of electrocautery, application of topical hemostatic agents, direct pressure, and suturing of the tonsillar pillars have traditionally been used for the treatment of severe bleeding, endovascular approaches are an important adjunct when other techniques are unsuccessful. Here, we describe the case of a 10-year-old female who presented with severe bleeding four days after tonsillectomy and adenoidectomy for chronic tonsillitis. She was taken emergently to the operating room where pulsatile bleeding was noted from the right inferior tonsillar pole. Hemostasis could not be achieved using electrocautery despite multiple attempts. The patient was taken for emergent angiography, which demonstrated an irregularity of the right tonsillar artery consistent with arterial vasospasm, and which corresponded to the intraoral site of bleeding localized by the surgeon. Coil embolization of the tonsillar artery was successfully performed, and the patient experienced no further bleeding. We conclude that endovascular embolization of branches of the external carotid artery is an effective treatment for severe post-tonsillectomy hemorrhage in children and should be considered when attempts at surgical control are ineffective. This procedure requires exceptional collaboration between the surgical, radiology, and anesthesia teams. abstract_id: PUBMED:36200961 Tonsillar asymmetry in adults: review of the literature and management algorithm Tonsillar asymmetry is both a common clinical finding and a potential sign of cancer. The diagnosis of the latter requires tonsillectomy, which is associated with two weeks off work and a post-operative risk of bleeding that occurs in 1.5-15% of cases. Thus, it is crucial to determine which patients can be followed clinically and which ones will need a diagnostic tonsillectomy. This article provides a review of the literature on tonsillar asymmetry in the adult population and an algorithm for its management. abstract_id: PUBMED:33730604 Predictive clinical exam findings in post-tonsillectomy hemorrhage.
Objectives: Post-tonsillectomy hemorrhage (PTH) is a common complication of tonsillectomy. Our objectives were to: 1) Examine the postoperative course of patients presenting to St. Christopher's Hospital for Children (SCHC) with PTH; 2) Compare patients with and without a blood clot visualized in the tonsillar fossa at time of presentation to determine if outcomes regarding return to the operating room (OR) differ. Methods: This was a retrospective chart review conducted at an academic, tertiary, pediatric hospital in an urban setting. Pediatric patients who underwent a tonsillectomy with concurrent adenoidectomy and were admitted for observation following secondary post-tonsillectomy hemorrhage were reviewed. The effects of age, gender, indication, and clinical exam findings on admission on the rate of eventual return to the OR for control of hemorrhage were also analyzed. Chi-square analysis and Fisher's exact test were used to compare the significance of categorical frequencies. Results: The rate of blood clot presence in our cohort was 50.9% (28/55). Return to OR rates were defined as patients who began actively hemorrhaging following admission for observation, further stratified by presence or absence of clot on admission physical exam. There was a significantly higher rate of return to OR in patients who presented with a clot (46.6%) on clinical exam versus no clot (18.5%) after resolved post-tonsillectomy hemorrhage (p < 0.027). Furthermore, patients with a blood clot present were significantly more likely to require OR sooner (21.31 h from admission) than those without a clot (100.75 h from admission) (p < 0.012). There was no significantly higher rate of blood clot presence or rate of return to OR in groups based on age, gender, or indication. Discussion: Pediatric patients presenting after resolved secondary PTH with a blood clot visualized in the tonsillar fossa are more likely to require return to the OR for hemostasis and cautery than are those without a blood clot, and this is more likely to occur within 24 h of admission. Thus, patients with a blood clot on initial presentation may benefit from admission for a 24-h observation period, while a similar observation period may be unproductive for patients without a blood clot. Conclusions: Patients who present with a resolved secondary PTH and a blood clot present on clinical exam require return to the OR more often than patients presenting without a blood clot. While previously controversial, we feel that this demonstrates that a 24-h observation of a patient with a clot on exam is reasonable. Answer: Yes, bacterial colonization of the tonsillar fossa is a factor in post-tonsillectomy hemorrhage. A prospective non-interventional study of 105 patients who underwent tonsillectomy found that patients with bacterial colonization of the tonsillar fossa pre-operatively had an increased rate of post-tonsillectomy hemorrhage (odds ratio 3.8, 95 percent confidence interval 1.1-12.1, p = 0.04). This suggests that there may be an argument for the use of antibiotics in those cases with positive pre-operative cultures, and the management of a post-tonsillectomy bleed should include a beta-lactamase-inhibiting antibiotic (PUBMED:17445306). Additionally, another study comparing the tonsillar microbiota between post-tonsillectomy patients with bleeding and without bleeding found significant differences in the microbiota composition.
PTH patients had a significantly higher relative abundance of Neisseria, Capnocytophaga, and Veillonella, suggesting that these bacteria may be associated with the development of post-tonsillectomy hemorrhage (PUBMED:38039804). These findings indicate that bacterial colonization in the tonsillar fossa is indeed a factor that can influence the risk of hemorrhage following a tonsillectomy.
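Since the answer above rests on an odds ratio of 3.8 with a 95 per cent confidence interval of 1.1-12.1, a short worked sketch of how such a figure is derived from a 2x2 table may help. The counts below are hypothetical and are not the actual data from PUBMED:17445306; the Woolf (log) confidence interval shown is only one common choice.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed cases, b = exposed non-cases,
    #            c = unexposed cases, d = unexposed non-cases.
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, chosen only to illustrate the arithmetic.
print(odds_ratio_ci(a=8, b=17, c=10, d=70))

With these made-up counts the function returns an odds ratio of about 3.3 with a wide confidence interval, the same qualitative pattern (an elevated but imprecisely estimated odds of bleeding) that the cited study reports.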
Instruction: Do we need femoral derotation osteotomy in DDH of early walking age group? Abstracts: abstract_id: PUBMED:20012070 Do we need femoral derotation osteotomy in DDH of early walking age group? A clinico-radiological correlation study. Introduction: The occurrence of exaggerated femoral anteversion and the role of femoral derotation osteotomy in developmental dysplasia of the hip, especially in the early walking age group, are controversial. Method: We evaluated femoral anteversion, acetabular anteversion, and acetabular index in 15 dislocated hips and 11 normal hips in cases of unilateral dislocation of the hip in DDH in the age group 12-48 months. We correlated this femoral anteversion with the intra-operative "test of stability" described by Zadeh et al. We found that there was no statistically significant difference in femoral anteversion between dislocated and normal hips. In all 15 cases we did open reduction by the anterior approach (Somerville approach) and evaluated the position for maximum stability. In 3 cases we were unable to perform the test of stability as they needed femoral shortening for reduction of the joint. Results: In the remaining 12 hips, 10 were stable in flexion and abduction while 2 were stable in flexion. None of the hips required internal rotation for stability. Hence we did Salter's osteotomy in all the hips and femoral shortening through a lateral approach in 3 cases. At a minimum follow-up of 18 months all the hips were clinically stable and none of them dislocated up to the final follow-up. The mean correction of acetabular index was 15.4 degrees and the outcome was excellent in 8 hips and good in 7 hips as per the modified McKay's criteria. Hence we recommend that femoral derotation osteotomy is not needed in DDH of the early walking age group. Conclusion: As the surgical treatment of DDH involves complex osteotomies around the hip and these surgeries have an effect on long-term outcome, MRI evaluation of femoral anteversion as a part of pre-operative evaluation is advised. Also, as the intra-operative evaluation needs enough clinical experience and it cannot be performed in cases requiring femoral shortening for reduction, we consider pre-operative evaluation of femoral anteversion by MRI as essential rather than adjunctive. abstract_id: PUBMED:29152500 Application of 3D technology and printing for femoral derotation osteotomy: case and technical report. In some surgical techniques like femoral derotation osteotomy, accuracy is a key factor that often is not optimal because of the lack of appropriate technology. 3D printing is emerging in many professional areas and its use in the medical field may enhance the results of certain surgeries. This case describes a patient who underwent intramedullary nail fixation to treat a femoral shaft fracture. After nine months, the patient presented with hip pain and "in-toe" walking caused by a malrotation produced during the surgery. To address the consequent femoral derotation osteotomy, 3D technology was used throughout the whole process. A 3D model of the patient's femur was created to conduct a real and accurate assessment of femoral anteversion. Then, a customized surgical guide was designed and printed to ensure proper alignment during surgery. Given the success of this surgery, 3D printing can be considered a quick and inexpensive tool to improve surgical results. abstract_id: PUBMED:34881171 Short-term outcomes of treatment in children presenting with DDH in walking age - An analysis of 84 hips.
Background: Developmental Dysplasia of the Hip (DDH) presenting at walking age is not uncommon, particularly in developing countries. The available treatment modalities in this age group are closed reduction (CR), open reduction (OR), and OR with additional femoral and/or pelvic osteotomy. This study was done in patients who presented between 12 and 36 months of age to assess the following: 1) the percentage of hips amenable to successful CR, 2) the failure rate after CR and OR in walking age DDH, and 3) the need for secondary procedures to address subluxation and residual dysplasia within the first 2 years. Methodology: After IRB approval, the institutional database was searched for patients admitted with a diagnosis of DDH from January 2009 to January 2019. We identified 142 patients, of whom 65 patients with 84 hips formed the study cohort after applying inclusion and exclusion criteria. Demographic details, details of the interventions, brace wear, revision procedures and radiological data were collected from the Hospital Information System. We divided the patients into three groups: Group I - CR, Group II - OR, and Group III - OR with an additional bony procedure in the form of femoral and/or pelvic osteotomy. Results: The mean age at presentation was 20.1 months. We had 10 (11.9%) hips in Group I, 39 (46.4%) hips in Group II and 35 (41.6%) hips in Group III. The mean follow-up was 44.8 months (24-132 months). In Group I, 5 (50%) had re-dislocation and 2 (20%) needed revision intervention for residual dysplasia. In Group II, 4 (10%) had re-dislocation and 4 (10%) needed revision intervention. In Group III, 5 (14.2%) hips needed revision intervention for residual dysplasia. The mean final AI was 24.6° in Group I, 28.2° in Group II and 26.3° in Group III. There was no significant difference in the final AI between the groups (p > 0.05). Conclusions: An attempted closed reduction has a 50% failure rate, and we recommend a low threshold for open reduction. There is a 10% rate of re-dislocation following open reduction with or without an additional bony procedure. About 50% of the dysplastic hips treated without pelvic osteotomy at the time of the index procedure fail to remodel and have residual acetabular dysplasia. abstract_id: PUBMED:28419537 Electromagnetic tracking for femoral derotation osteotomy-an in vivo study. Femoral derotation osteotomy delivers good to excellent results in the treatment of rotational gait abnormalities and especially in internal rotation gait. The outcome of the procedure has been evaluated in numerous short- and long-term studies. Although reasons for recurrence and over-/under-correction have been unveiled in earlier studies, the mechanisms are still not fully understood. False intra-operative assessment of the derotation angle may contribute to imprecise outcomes. In a recent saw-bone study we evaluated an electromagnetic tracking system in comparison to conventional goniometer measurement and a CT reference measurement and found it to be extremely accurate, whereas the use of a conventional goniometer for derotation measurement showed a high inter- and intra-rater variability. The current study evaluates the electromagnetic tracking system for continuous intra-operative derotation control under real OR conditions. Adults (age: 18-40 years) with the diagnosis of internal rotation gait, independent of the underlying pathology, undergoing a supracondylar derotation osteotomy were included.
A rotational CT scan was conducted before and shortly after surgery, and the difference served as the reference for the electromagnetic tracking results. The results showed a mean deviation of 2.6° (1.2-5.5°) in comparison to the reference measurement of the pre- and post-operative CT scans. The system proved to be stable under OR conditions with good usability and a small technical footprint. Electromagnetic tracking delivers a precise, reliable, and independent assessment of intra-operative derotation angles in femoral derotation osteotomies. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:2652-2657, 2017. abstract_id: PUBMED:24294309 The effect of Dega acetabuloplasty and Salter innominate osteotomy on acetabular remodeling monitored by the acetabular index in walking DDH patients between 2 and 6 years of age: short- to middle-term follow-up. Introduction: The surgical management of neglected developmental dysplasia of the hip (DDH) in walking children has always been a challenge to orthopedic surgeons. The aim of this study was to evaluate the short- to middle-term clinical and radiographic results of the management of DDH. Patients And Methods: Patients less than 6 years old were treated using two of the most commonly used osteotomies, namely Salter innominate osteotomy and the Dega acetabuloplasty. Special attention was paid to acetabular remodeling after concentric reduction, which was monitored by the acetabular index, which, in turn, was measured preoperatively, immediately postoperatively, every 6 months, and at the final follow-up examination. Results: The final overall clinical end results were favorable (excellent or good) in 93 hips (85.3%). There was a marked improvement of the acetabular coverage during the follow-up period, which proved the good remodeling potential of the acetabulum for this particular age group after concentric reduction was achieved and maintained. Conclusion: Both osteotomy types were found to be adequate for the management of neglected walking DDH patients under the age of 6 years. abstract_id: PUBMED:26468922 Femoral Derotation in Children with Cerebral Palsy - Does the Result Depend on the Age at Operation and the Kind of Surgery? Introduction: Patients with spastic cerebral palsy GMFCS I-III often develop gait dysfunctions. One of the most prevalent gait dysfunctions is the intoeing gait. Femoral derotation osteotomy is the common treatment for internal rotation gait in cerebral palsy. We now present 3D-gait analysis data of the hip rotation in gait before and after femoral derotation osteotomy. We analysed the influence of the age at the index operation on the risk of recurrence and the surgical technique. Methods: We included 48 patients treated with femoral derotation osteotomy during a single event of multi-level surgery. Mean hip rotation in standing was measured before and after femoral derotation osteotomy (FDO). The patients were divided into two groups of different age and, in a second analysis, into two groups with the osteotomy in different locations, either inter-trochanteric (DO proximal) or supracondylar (DO distal). Results: Age at FDO and surgical technique had no influence on the results. However, the variance of the results was very high. Differences were found in the walking speed between the DO proximal and DO distal groups. The walking speed in the group of distal femoral osteotomy was higher. This difference was not significant, but there was a trend toward proximal osteotomy in slower walkers.
Significant improvements in IRG after FDO were found in our investigation. Our results indicate that FDO as a part of single-event multilevel surgery (SEMLS) provides a satisfactory mean overall correction of IRG. The results were independent of the age at the index operation and the location of the osteotomy. abstract_id: PUBMED:37065526 Open Reduction with Femoral Shortening, Derotation Osteotomy, and Acetabuloplasty in a Walking Age Developmental Dysplasia of Hip - A Surgical Video Technique. Introduction: It is very common for a pediatric orthopedic surgeon to encounter developmental dysplasia of the hip (DDH) at walking age, especially in developing countries. The conservative options of management are largely exhausted by this age and most cases require open reduction (OR) with various adjunct procedures. The most preferred approach for OR in this age group is the anterior Smith-Peterson approach to the hip joint. These neglected cases also require femoral shortening ± derotation osteotomy and acetabuloplasty. Case Report: In this surgical video technique, we demonstrate OR + femoral shortening and derotation osteotomy and acetabuloplasty, step by step, in a neglected, walking age DDH in a 3-year-old child. We hope that the detailed demonstration and tricks at various surgical steps will benefit our readers and viewers. Conclusion: Step-wise surgical execution as per the demonstrated technique makes the procedure easily reproducible with fairly good outcomes. In this case example, with the demonstrated surgical technique, we were able to achieve a good outcome at short-term follow-up. abstract_id: PUBMED:27325569 Electromagnetic bone segment tracking to control femoral derotation osteotomy-A saw bone study. Correction of rotational gait abnormalities is common practice in pediatric orthopaedics, such as in children with cerebral palsy. Femoral derotation osteotomy is established as a standard treatment; however, different authors have reported substantial variability in outcomes following surgery, with patients showing over- or under-correction. Only 60% of the applied correction is observed postoperatively, which strongly suggests intraoperative measurement error or loss of correction during surgery. This study was conducted to verify the impact of error sources in the derotation procedure and assess the utility of a newly developed, instrumented measurement system based on electromagnetic tracking aiming to improve the accuracy of rotational correction. A supracondylar derotation osteotomy was performed in 21 artificial femur sawbones and the amount of derotation was quantified during the procedure by the tracking system and by nine raters using a conventional goniometer. Accuracy of both measurement devices was determined by repeated computed tomography scans. Average derotation measured by the tracking system differed by 0.1° ± 1.6° from the defined reference measurement. In contrast, a high inter-rater variability was found in goniometric measurements (range: 10.8° ± 6.9°, mean interquartile distance: 6.6°). During fixation of the osteosynthesis, the tracking system reliably detected unintentional manipulation of the correction angle with a mean absolute change of 4.0° ± 3.2°. Our findings show that conventional control of femoral derotation is subject to relevant observer bias, whereas instrumental tracking yields accuracy better than ±2°. The tracking system is a step towards more reliable and safe implementation of femoral correction, promising substantial improvements of patient safety in the future.
© 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:1106-1112, 2017. abstract_id: PUBMED:25217494 Superior functional outcome after femoral derotation osteotomy according to gait analysis in cerebral palsy. The femoral derotation osteotomy (FDO) is seen as the gold standard treatment in children with cerebral palsy and internal rotation gait. Variable outcomes with cases of over- and undercorrection, mainly in the less involved patients, have been reported. The determination of the amount of derotation is still inconsistent. 138 patients (age: 11 (± 3.3) years) with cerebral palsy and internal rotation gait were examined pre- and 1 year postoperatively after distal or proximal FDO, using standardized clinical examination and 3D gait analysis. Three groups were defined retrospectively depending on the amount of derotation in relation to the mean hip rotation in stance (MHR) during gait analysis: Group A (derotation angle > MHR + 10°), Group B (derotation angle = MHR ± 10°), Group C (derotation angle < MHR - 10°), and compared according to their postoperative mean hip rotation. ANOVA with Bonferroni post hoc test was used for statistics (p < 0.05). Group B had the greatest benefit with the highest rate (86%) of good results (postoperative MHR = ± 15°). In contrast, there were 14% cases of overcorrection and 5% cases of deterioration in Group A with only 81% good results, and only 79% good results in Group C. It can be concluded that it is less likely to have unsatisfactory outcomes if the amount of FDO is defined according to the findings of gait analysis compared with clinical examination. abstract_id: PUBMED:23162148 Outcome of one-stage treatment of developmental dysplasia of hip in older children. Background: The principles of treatment of congenital dislocation of the hip in older children are different from those in infants and neonates. The purpose of this study is to evaluate the radiographic and functional results of one-stage treatment (open reduction, femoral shortening derotation, and Salter's osteotomy) of DDH in older children. Materials And Methods: Between January 2005 and June 2010, 25 patients (30 hips) underwent a one-stage triple procedure of open reduction, femoral shortening derotation, and Salter's osteotomy for the treatment of DDH. Preoperatively, they were classified according to the Tönnis class. Clinical outcomes were assessed using the modified McKay's criteria to measure pain symptoms, gait pattern, Trendelenburg sign status, and the range of hip joint movement. Radiographic assessment was made using Severin's scoring method to measure the centre-edge angle and dysplasia. Results: The mean age at the time of operation was 3.9 years (range 1.6-8 years), and the average duration of follow-up was 4.1 years (range 2-7.6 years). The McKay's score was excellent in 13 hips, good in 14 hips, fair in 2, and poor in 1 hip. Severin's class I and II was found in 25 (83.3%) hips at the time of final evaluation as compared to none at the time of presentation. Conclusions: Young children having DDH can safely be treated with an extensive one-stage triple procedure of open reduction, femoral shortening derotation, and Salter's osteotomy, without increasing the risk of AVN. Early diagnosis and intervention are key to the successful treatment of patients suffering from DDH.
Answer: Based on the provided abstracts, the necessity of femoral derotation osteotomy in developmental dysplasia of the hip (DDH) for the early walking age group appears to be a subject of clinical judgment and varies depending on individual cases. In one study, it was found that there was no statistically significant difference in femoral anteversion between dislocated and normal hips in children aged 12-48 months with unilateral DDH. The study concluded that femoral derotation osteotomy was not needed in DDH of the early walking age group, as none of the hips required internal rotation for stability during open reduction. They recommended Salter's osteotomy instead and suggested that preoperative evaluation of femoral anteversion by MRI is essential (PUBMED:20012070). Another study that analyzed short-term outcomes in children presenting with DDH at walking age found that an attempted closed reduction has a 50% failure rate, and there is a 10% rate of re-dislocation following open reduction with or without additional bony procedure. This study did not specifically address the need for femoral derotation osteotomy but indicated that about 50% of the dysplastic hips treated without pelvic osteotomy at the time of the index procedure fail to remodel and have residual acetabular dysplasia (PUBMED:34881171). A different study highlighted the use of 3D technology and printing to improve the accuracy of femoral derotation osteotomy, suggesting that precision in the surgical technique is crucial and can be enhanced with modern technology (PUBMED:29152500). In summary, while some studies suggest that femoral derotation osteotomy may not be necessary in all cases of DDH in the early walking age group (PUBMED:20012070), others do not provide a definitive answer but emphasize the importance of accurate surgical techniques and the potential need for additional procedures to address residual dysplasia (PUBMED:34881171, PUBMED:29152500). Therefore, the decision to perform femoral derotation osteotomy should be based on individual patient assessment, including clinical and radiological evaluation, and possibly aided by advanced technologies for surgical planning and execution.
Instruction: Are reciprocal changes a consequence of "ischemia at a distance" or merely a benign electrical phenomenon? Abstracts: abstract_id: PUBMED:14516286 Are reciprocal changes a consequence of "ischemia at a distance" or merely a benign electrical phenomenon? A pulsed-wave tissue Doppler echocardiographic study. Objectives: The aim of the present study was to investigate whether ST segment depression in precordial leads at the time of acute inferior myocardial infarction represents a reciprocal change rather than concurrent anterior wall ischemia on the surface electrocardiography. Background: The mechanism of reciprocal ST segment depression during acute myocardial infarction is controversial. "Ischemia at a distance" or a benign electrical phenomenon has been implicated in numerous reports. Pulsed-wave tissue Doppler (PWTD) echocardiography can be used to examine the regional diastolic motion of the left ventricular myocardial wall and may allow the detection of ischemic segments. Methods: We evaluated regional myocardial ischemia using PWTD echocardiography in 48 patients with a first inferior wall myocardial infarction. The left ventricle was divided into 16 segments. PWTD echocardiographic velocities were obtained from each left ventricular segment. Results: Reciprocal ST segment depression was present in 35 patients (Group 1) but not in the remaining 13 patients (Group 2). There were no significant differences between Groups 1 and 2 with respect to systolic (S) (7.4 +/- 1.1 vs 6.8 +/- 0.9 cm/s; P > 0.05), early (E) (10.5 +/- 2 vs 9.4 +/- 1.2 cm/s; P > 0.05), and late (A) (9.5 +/- 3.2 vs 8.5 +/- 2.3 cm/s; P > 0.05) diastolic wave peak velocities, E/A ratio (1.1 +/- 0.2 vs 1.1 +/- 0.1; P > 0.05), E wave deceleration time (DT) (92 +/- 17 vs 101 +/- 16 ms; P > 0.05) and regional relaxation time (RT) (82 +/- 19 vs 93 +/- 21 ms; P > 0.05) in the anterior wall (basal levels), which corresponds to reciprocal ST segment depression on electrocardiography. According to the E/A ratio detected by PWTD echocardiography in the anterior wall and anterior septum, patients with reciprocal ST segment depression were also divided into two groups: Group A, with E/A ratio > 1; Group B, with E/A ratio < 1. Among the 35 patients with reciprocal ST segment depression, anterior wall ischemia was present in 10 patients and absent in 25 patients, whereas anterior septal ischemia was present in 12 patients and absent in 23 patients. Conclusions: Reciprocal ST segment depression during the early phases of inferior infarction is an electrical reflection of primary ST segment elevation in the area of infarction. abstract_id: PUBMED:29406044 Reciprocal ST-Segment Changes in Myocardial Infarction: Ischemia at Distance Versus Mirror Reflection of ST-Elevation. Background: Reciprocal ST-depression in the electrocardiograms (ECGs) of patients with ST-elevation myocardial infarction (STEMI) results from either true ischemia at a distance via collateral circulation diverting blood to the infarcted region or an electrical phenomenon that results from a mirror reflection of ST-elevation. We aimed to identify the role of reciprocal ECG changes in predicting collateral circulation to the infarcted area determined angiographically. Methods: In a retrospective study, ECG and angiography of 53 STEMI patients admitted to SUNY Upstate Medical University in 2014 were reviewed independently by experts blinded to the results of ECG and coronary angiography.
Results: Reciprocal changes (RC) in ECG were present in 41 patients (77%) and, on angiography, 14 patients (26%) exhibited collateral vessels to the ischemic areas. No correlation was found between the presence of RC and collateral circulation (P = 0.384), or between the depth of reciprocal ST-depression and the degree of the collateral circulation (P = 0.195). However, 84% of patients without collaterals exhibited resolution of RC after successful percutaneous coronary intervention (PCI) (P = 0.036), suggesting that the ST depressions that resolved after reperfusion were directly caused by the culprit vessel. Patients without RC presented late after symptom onset (9.25 versus 3.83 hours, P = 0.004), also suggesting time-related resolution. Conclusions: RC had no relation to or predictive value for collaterals on angiography. Among late presenting patients, RC were less frequent. Thus, reciprocal ST-depression may represent subendocardial ischemia from the primary coronary event or simply an electrical phenomenon, rather than ischemia at distance from impaired collateral circulation. abstract_id: PUBMED:8322696 Are reciprocal changes a consequence of "ischemia at a distance" or merely a benign electric phenomenon? A PTCA study. The significance of reciprocal ST-segment depression during acute occlusion of an epicardial coronary artery is still actively debated. "Ischemia at a distance" has been implicated in numerous reports. To determine the prevalence and mechanism of reciprocal changes, we recorded 12-lead electrocardiograms (ECG) during balloon inflation in 66 patients undergoing 79 coronary angioplasty (PTCA) procedures. The 38 men and 28 women had a mean age of 59 +/- 12 years. Twenty-nine PTCAs were of the dominant right coronary artery (RCA), 24 were of the proximal left anterior descending artery (LAD), 24 of the left circumflex artery (LCF), and 2 of the diagonal branch of the LAD. Primary ST elevation (≥1 mm) occurred in 56 (71%) PTCAs, 49 (88%) of which showed reciprocal (≥1 mm) ST depression. Reciprocal changes occurred in 15 of 21, 19 of 20, and 14 of 14 PTCAs of the LAD, RCA, and LCF, respectively (p value not significant [NS]) and were common in patients with collateral vessels supplying the arterial bed distal to the site of balloon occlusion (60%). They were equally prevalent in PTCAs of patients with single-vessel disease and patients with multivessel disease (90% vs 82%, p = NS). We conclude that reciprocal changes occur in the majority regardless of the vessel involved or the extent of coronary artery disease, that they usually represent electric phenomena and not remote ischemia, and that "ischemia at a distance" is not a diagnosis that can be made by ECG. abstract_id: PUBMED:32247072 Relation of collateral circulation with reciprocal changes in patients with acute ST-elevation myocardial infarction. Objective: Acute STEMI is often accompanied by reciprocal ST-segment depression (RC) occurring in opposite leads, whose significance has been debated for decades. The possible role of collateral circulation in promoting RC in acute STEMI has not been identified. So our aim was to find the relationship between collateral circulation and RC in STEMI patients treated with primary percutaneous intervention (PPCI). Methods: The study included 112 patients with acute STEMI who underwent PPCI. The patients were divided into 2 groups: Group (A): 66 patients with RC; Group (B): 46 patients without RC.
All patients were subjected to history taking, ECG [localization of infarction & RC], CKMB level, transthoracic echo [LVEF%], and coronary angiography & PPCI to the culprit artery, with assessment of the number of diseased vessels, site of occlusion, collaterals, and TIMI flow pre and post PCI. Results: Patients in group A with RC had a shorter time to door, P < 0.001; more frequent inferior infarctions, P < 0.001; a higher CKMB level, P < 0.001; higher LVEDD, P < 0.001; higher LVESD, P < 0.001; lower LVEF, P = 0.004; more multivessel disease, P = 0.02; and an increased incidence of RCA as a culprit artery, P < 0.001, compared to patients with no RC. Patients with RC had a significantly higher incidence of proximal LAD occlusion, distal RCA and distal LCX compared to patients without RC. The percentage of change was 61.2 ± 12.35% for ST elevation and 50.5 ± 10.87% for reciprocal ST depression post PCI, with a significant difference between them, t = 3.035, P = 0.0023. There was no significant correlation between collateral circulation and RC. We found four significant independent predictors of RC. They were inferior infarction (P = 0.024), RCA as a culprit vessel (P = 0.034), low EF (P = 0.007) and multi-vessel disease (P = 0.022). Conclusion: There is no correlation between concomitant RC and the presence of collateral vessels in acute STEMI patients. So the pathogenesis of reciprocal ST-segment changes results from an interplay of ischemia at distance due to multi-vessel CAD and benign mirror electrical changes not caused by collateral circulation diverting blood to the ischemic area from a non-diseased artery. abstract_id: PUBMED:4014035 Reciprocal electrocardiographic changes in acute myocardial infarction. If reciprocal electrocardiographic changes during acute myocardial infarction (AMI) are a result of ischemia of the wall opposite the AMI, a stress test is expected to induce similar changes in the corresponding electrocardiographic leads. Right atrial pacing was used as a myocardial stress method in 137 consecutive patients recovering from a transmural AMI, and the appearance of pacing-provoked ischemia before hospital discharge was correlated to the presence or absence of ST depression in the opposite wall during the initial 48 hours. Of the 137 patients, 83 (61%) had reciprocal changes; they were more common in inferior (87%) than in anterior (37%) AMI (p less than 0.01). Of 54 patients without reciprocal changes, only 5 (9%) had ST depression during predischarge pacing; however, of the 83 patients with reciprocal changes, 41 had pacing-induced ischemia (p less than 0.01) and 42 did not, indicating that in half of this group the reciprocal changes represent ischemia of the opposite wall. In the other half of the group, without ST depression during pacing, these changes may be a "mirror image" phenomenon. Follow-up showed that angina pectoris, positive treadmill test response 6 months later, or recurrent AMI, all consequences of impaired myocardial blood supply, were significantly more frequent in patients with reciprocal changes. This group could be further separated according to the results of right atrial pacing, because angina pectoris or recurrent AMI were infrequent among those with reciprocal changes and negative pacing responses, but were frequent among those with reciprocal changes and positive pacing responses. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:27917635 Raynaud's phenomenon Raynaud's phenomenon (RP) is a very common sign which can usually be seen across all medical specialties.
It is characterized by episodic color changes of acral parts of the body (pallor, cyanosis, rubor) lasting from a few minutes to hours, which are usually triggered by cold temperature and/or stress. Primary RP occurs alone, without concomitant diseases, is usually benign, and has a favorable prognosis. Secondary RP occurs in a variety of diseases with a very variable progression and prognosis, mostly an unfavorable one due to the development of ischemic tissue necrosis and gangrene. This work provides a comprehensive overview of the history, current knowledge about the epidemiology and pathogenesis, and the recommended evaluation and treatment of RP. abstract_id: PUBMED:28164311 Flow and Diameter Changes of Forearm Arteries During Temporary Unilateral Reciprocal Occlusion: A Prospective Observational Study. Purpose: Transradial and transulnar artery approaches are commonly used for percutaneous interventions, with a considerably low risk of developing hand ischemia, for reasons that have not been fully understood. The aim of this study was to evaluate the changes in the diameter, peak systolic flow velocity (PSV), and volume flow (VF) of the radial (RA) and ulnar artery (UA) during unilateral reciprocal temporary occlusion. Methods: A total of 204 extremity arteries of 102 consecutive patients were evaluated using Doppler ultrasonography. At the level of the wrist, the RA and UA were consecutively compressed for 60 seconds. The diameter, PSV, and VF parameters of both arteries were evaluated during unilateral reciprocal compression. Results: Compared with the baseline values, the median (interquartile range) increases in diameter, PSV, and VF were 0.2 (0.1-0.3) mm, 6.8 (1.7-17.5) cm/s, and 1.9 (0.8-10) ml/min, respectively, for the RA, and 0.1 (0.1-0.3) mm, 9.4 (2.6-18.0) cm/s, and 10.0 (0.0-20.0) ml/min, respectively, for the UA during reciprocal compression (p < 0.001 for all). Conclusions: Reciprocal compression significantly increased the diameter, PSV, and VF of both arteries. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 45:197-203, 2017. abstract_id: PUBMED:8461218 Significance of reciprocal ST depression in acute myocardial infarction: a study of 258 patients treated by thrombolysis. Objective: To investigate the clinical significance of reciprocal ST depression on the presenting electrocardiogram in patients with acute myocardial infarction treated by thrombolysis. Design: A prospective cohort analytical study. Setting: A London district general hospital. Subjects: Two hundred and fifty-eight consecutive patients with acute myocardial infarction treated with thrombolysis. Interventions: All patients underwent treadmill stress testing after a mean (SEM) of 10 (3) days; 200 patients (78%) were referred for coronary arteriography at 30 (16) days. Main Outcome Measures: Relation between reciprocal ST depression at presentation and several endpoints: time from start of chest pain to hospital presentation, electrocardiographic changes during early treadmill stress testing, presence of multivessel coronary disease, and clinical outcome in terms of recurrent ischaemic events (death, reinfarction, and unstable angina) during a 10 (range six to 12) month follow up. Results: Presentation was generally early, but in this group of patients reciprocal ST depression was significantly related to the time from the start of symptoms, those with reciprocal change presenting on average one hour earlier than those without.
Although reciprocal change on the presenting electrocardiogram was weakly associated with ST depression on treadmill stress testing, it was not indicative of remote ischaemia as a result of multivessel coronary disease or high grade collateralisation of the infarct related artery. There was no association between reciprocal change and the incidence of recurrent ischaemic events. Conclusion: Reciprocal ST depression on the presenting electrocardiogram seems to be a benign electrical phenomenon related to the time from the start of symptoms. It does not necessarily predict an adverse prognosis in patients treated by thrombolysis. abstract_id: PUBMED:26491507 Head Loss As an Explanation of the Steal Phenomenon in Microvascular Surgery. Vascular steal has been cited to help explain end-organ ischemia after microvascular reconstruction. Attempts to clarify a mechanism of vascular steal have been made by modeling blood circulation after a simple electrical circuit, suggesting that the free flap provides a path of least resistance for blood flow and thereby compromises end-organ perfusion. We present a case of a posterior medial thigh perforator flap for the reconstruction of a diabetic foot ulcer in a patient with a single vessel providing inflow to the foot. In the context of this case, we provide a novel explanation for the steal phenomenon using the Hagen-Poiseuille law and the property of head loss in fluid dynamics and discuss how the vessel size of the free flap may contribute to a steal phenomenon. abstract_id: PUBMED:6691279 Angiographic evidence that reciprocal ST-segment depression during acute myocardial infarction does not indicate remote ischemia: analysis of 23 patients. Although reciprocal ST-segment depression from the remote noninfarcting ventricular wall during acute myocardial infarction (MI) is a common clinical finding, the significance of this electrocardiographic pattern is unclear. Previous retrospective studies have suggested that these findings may reflect either remote wall ischemia, multivessel coronary artery disease (CAD), extensive MI or a benign electrical phenomenon. Prior studies have lacked angiographic data obtained at the time of these acute electrocardiographic changes. In this study we prospectively evaluated 23 patients with acute MI. Left ventricular wall motion, coronary anatomy and the ECG were all assessed over a short period during the acute phase of the MI. Segmental wall motion was used as a sensitive indicator of ischemia. Seventeen patients had acute anterior MI, of whom 47% had reciprocal ST-segment depression; 6 patients had inferior MI, with 3 showing reciprocal ST depression. The mean degree of ST-segment elevation from the infarcting wall tended to be greater in patients with reciprocal ST-segment depression than in those without such reciprocal ST depression (2.8 +/- 0.4 vs 1.9 +/- 0.3 mm, p = 0.06). Patients with and without reciprocal ST-segment depression had similar degrees of segmental dysfunction in the infarct wall. However, no abnormalities in segmental wall motion in the remote wall were seen regardless of the presence or absence of remote wall ST-segment depression. In addition, the presence or absence of ST-segment depression did not predict the extent or degree of CAD.(ABSTRACT TRUNCATED AT 250 WORDS) Answer: The question of whether reciprocal changes in electrocardiograms during acute myocardial infarction are a consequence of "ischemia at a distance" or merely a benign electrical phenomenon has been the subject of debate. 
Several studies have investigated this issue, with varying conclusions. One study using pulsed-wave tissue Doppler echocardiography in patients with acute inferior myocardial infarction found that reciprocal ST segment depression is an electrical reflection of primary ST segment elevation in the area of infarction, rather than indicating concurrent anterior wall ischemia (PUBMED:14516286). Another study found no correlation between reciprocal changes and collateral circulation on angiography, suggesting that reciprocal ST depression may represent subendocardial ischemia from the primary coronary event or simply an electrical phenomenon, rather than ischemia at a distance from impaired collateral circulation (PUBMED:29406044). A PTCA study concluded that reciprocal changes usually represent electric phenomena and not remote ischemia, and that "ischemia at a distance" is not a diagnosis that can be made by ECG (PUBMED:8322696). Similarly, another study found no correlation between concomitant reciprocal changes and the presence of collateral vessels in acute STEMI patients, suggesting that the pathogenesis of reciprocal ST-segment changes results from an interplay of ischemia at distance due to multi-vessel CAD and benign mirror electrical changes not caused by collateral circulation (PUBMED:32247072). In contrast, a study using right atrial pacing as a myocardial stress method in patients recovering from a transmural AMI found that reciprocal changes during AMI may represent ischemia of the opposite wall in half of the cases, while in the other half, these changes may be a "mirror image" phenomenon (PUBMED:4014035). Another study concluded that reciprocal ST depression on the presenting electrocardiogram seems to be a benign electrical phenomenon related to the time from the start of symptoms and does not necessarily predict an adverse prognosis in patients treated by thrombolysis (PUBMED:8461218). Lastly, an angiographic study of 23 patients with acute MI found no abnormalities in segmental wall motion in the remote wall regardless of the presence or absence of remote wall ST-segment depression, suggesting that reciprocal ST depression does not indicate remote ischemia (PUBMED:6691279).
Instruction: Is posttransplant lymphoproliferative disorder (PTLD) caused by any specific immunosuppressive drug or by the transplantation per se? Abstracts: abstract_id: PUBMED:14508366 Is posttransplant lymphoproliferative disorder (PTLD) caused by any specific immunosuppressive drug or by the transplantation per se? Background: An association between posttransplant lymphoproliferative disorder (PTLD) and cyclosporine A (CsA) and OKT3 has often been postulated on the basis of retrospective studies, although a randomized study with PTLD as the endpoint will probably never be performed. Because focus on PTLD coincided with the use of these drugs, a bias could be suspected. Methods: In a retrospective, nonrandomized study, we reevaluated all lymphoma-like lesions arising in kidney-transplant patients grafted at our center during 1969 to 1998 and observed up to 2002. Case pathology was reviewed, and an association with Epstein-Barr virus (EBV) infection (and latency pattern) was assessed. Results: We did not find any significant difference in the incidence of PTLDs when comparing the prednisolone/azathioprine, and CsA eras (P=0.89), the periods before or after OKT3 (P=0.61), and those before or after antilymphocyte globulin (ALG) (P=0.22). Occurrence time was shorter in the CsA (P=0.059), OKT3 (P=0.007), and ALG (P=0.007) eras. In the OKT3 era, 182 patients received, and 224 did not receive, OKT3; after the same observation time, there had been eight and five PTLDs, respectively (P=0.34). The use of mycophenolate mofetil (MMF) was associated with a reduction in the number of PTLDs (P=0.01). EBV was detected in 16 of 21 (76%) cases. Conclusions: We found no evidence to implicate any one drug regime preferentially in the development of PTLDs. The risk of developing PTLD seems to be a result of the whole transplantation process, which includes the antigenicity of the foreign graft, the immunosuppression resulting in inadequate cytotoxic T-cell activity, and the result of EBV infection. An important minority of cases are EBV negative. abstract_id: PUBMED:19759356 Generation of Epstein-Barr virus-specific cytotoxic T lymphocytes resistant to the immunosuppressive drug tacrolimus (FK506). Adoptive transfer of autologous Epstein-Barr virus-specific cytotoxic T lymphocytes (EBV-CTLs) to solid organ transplant (SOT) recipients has been shown safe and effective for the treatment of EBV-associated posttransplantation lymphoproliferative disorders (PTLDs). SOT recipients, however, require the continuous administration of immunosuppressive drugs to prevent graft rejection, and these agents may significantly limit the long-term persistence of transferred EBV-CTLs, precluding their use as prophylaxis. Tacrolimus (FK506) is one of the most widely used immunosuppressive agents in SOT recipients, and its immunosuppressive effects are largely dependent on its interaction with the 12-kDa FK506-binding protein (FKBP12). We have knocked down the expression of FKBP12 in EBV-CTLs using a specific small interfering RNA (siRNA) stably expressed from a retroviral vector and found that FKBP12-silenced EBV-CTLs are FK506 resistant. These cells continue to expand in the presence of the drug without measurable impairment of their antigen specificity or cytotoxic activity. We confirmed their FK506 resistance and anti-PTLD activity in vivo using a xenogenic mouse model, suggesting that the proposed strategy may be of value to enhance EBV-specific immune surveillance in patients at high risk of PTLD after transplantation. 
abstract_id: PUBMED:37609612 Posttransplant lymphoproliferative disorder in a heart transplant recipient: a case report. Malignancy in heart transplant recipients is a grave complication. Post-transplant lymphoproliferative disorder (PTLD) is the second most common tumour in adults and commonest in children. The incidence varies with the transplanted organ from 1 to 2% following kidney transplantation to as high as 10% following thoracic organ transplantation due to different immunosuppression intensity. PTLD include a wide spectrum of diseases ranging from benign proliferation of lymphoid tissue to frank malignancy with aggressive behaviour (lymphoma). Epstein-Barr virus (EBV) infection and prolonged immunosuppressant therapy are implicated in the pathogenesis of PTLD. The incidence of PTLD varies from 2.6% at 1 year to 28% at 10 years post-transplant. Seronegativity for EBV in recipients with seropositive donors increases the risk of PTLD in recipients. The majority of early-onset PTLDs (85%) are of B-cell origin and associated with EBV. Timely and accurate diagnosis with histological examination of lymphoid tissue is essential for early intervention. Reduction of immunosuppressive therapy (IST) and rituximab usually are effective in remission of PTLD. In resistant cases, chemotherapy is given with or without rituximab. Adoptive T-cell transfer represents a promising therapeutic approach. Early PTLD respond well to lowering immunosuppression and has a favourable prognosis compared to late PTLD. Five-year survival is 30% for high-grade lymphomas. The prognosis of EBV-negative lymphomas is worse. One out of 40 heart transplant recipients followed up in our centre developed PTLD. He was treated to remission and we describe this case here. abstract_id: PUBMED:31240394 Biomarkers for PTLD diagnosis and therapies. Post-transplant lymphoproliferative disorder (PTLD) represents a spectrum of lymphoproliferative disorders and is a serious complication of pediatric transplantation. The majority of PTLD are associated with Epstein Barr virus (EBV) and the characteristic EBV+ B cell lymphomas are the leading post-transplant malignancy in children. EBV+ PTLD remains a formidable issue in pediatric transplantation and is thought to result from impaired immunity to EBV as a result of immunosuppression. However, the key viral and immune factors that determine whether EBV+ PTLD develops remain unknown. Recently, there has been much interest in developing biomarkers in order to improve and achieve more personalized approaches, in the clinical diagnosis, management, and treatment of EBV+ PTLD. Here, we review the status of immune-, viral-, and B cell lymphoma-derived candidates for biomarkers of EBV+ PTLD. abstract_id: PUBMED:12815484 High incidence of PTLD after non-T-cell-depleted allogeneic haematopoietic stem cell transplantation as a consequence of intensive immunosuppressive treatment. The occurrence of post-transplant lymphoproliferative disorder (PTLD) in relation to immunosuppressive treatment was determined in 257 patients treated with non-T-cell-depleted allogeneic stem cell transplantation from an HLA-matched sibling (173 patients) or unrelated donor (84 patients). The conditioning consisted of total body irradiation and cyclophosphamide (myeloablative conditioning, 250 patients), or fludarabine combined with cyclophosphamide or a single 2 Gy dose of TBI (nonmyeloablative conditioning, seven patients). 
In transplantations from an unrelated donor, the patients also received antithymocyte globulin (ATG). The prophylaxis against graft-versus-host disease (GVHD) consisted of cyclosporine A, methotrexate, and methylprednisolone. The autopsy reports of deceased patients were systematically reviewed, and the autopsy materials of cases suggestive of PTLD were re-examined histologically for Epstein-Barr virus (EBV). Nineteen patients with EBV-positive PTLD were identified, of whom six had been transplanted from a sibling donor and 13 from an unrelated donor. All the patients who developed PTLD had been given ATG either for the treatment of steroid-resistant acute GVHD (all PTLD patients with a sibling donor and one with an unrelated donor), or as part of the conditioning (all patients with an unrelated donor). In conclusion, in transplantations from an HLA-identical donor with a non-T-cell-depleted graft, the risk of PTLD correlated strongly with the intensity of the immunosuppressive treatment. abstract_id: PUBMED:37005883 Non-invasive monitoring associated with B lymphoma cells in post-transplant lymphoproliferative disorder (PTLD) patients: Systematic review. Background: One of the most severe side effects of solid-organ transplantation is posttransplant lymphoproliferative disease (PTLD). People with human immunodeficiency virus infection (HIV), an immunosuppressive disease comparable to HIV, have a higher chance of developing lymphoma when their peripheral blood contains elevated levels of the immunoglobulins kappa and lambda free light chains (FLCs). Methods: This systematic review's objective was to monitor associated B lymphoma cells in PTLD patients. In order to find relevant studies published between 1/1/2000 and 1/9/2022, two independent researchers conducted searches (MT, AJ). A literature search of English language publications was conducted using MEDLINE through PubMed, EMBASETM through Ovid, the Cochrane Library, and Trip. In addition to Magiran and SID, we searched KoreaMed and LILACS for literature published in other languages. sFLC or PTLD, transplant, or Electrophoresis are terms used in the search strategy. Results: A total of 174 studies were selected. After analyzing their correspondence with the required criteria, a final review of five studies was conducted. The manuscript presents current findings on the potential benefits of the clinical applicability of sFLCs in PTLD. While the preliminary results appear promising, the only consistent result is that early-onset PTLD is predicted within the first two years after transplant, a biomarker that could be used to diagnose the condition. Conclusions: Therefore, PTLD has been predicted by using the sFLCs. There have been contradictory results to date. Future research could include assessing the quantity of sFLCs and their quality in transplant recipients. In addition to PTLD and complications after transplantation, sFLCs may provide insight into other diseases. To confirm the validity of sFLCs, more studies are needed. abstract_id: PUBMED:31269329 PD-L1 and PD1 expression in post-transplantation lymphoproliferative disease (PTLD) of childhood and adolescence: An inter- and intra-individual descriptive study covering the whole spectrum of PTLD categories. Therapy of children with post-transplantation lymphoproliferative disorder (PTLD) after hematopoietic stem cell (HSCT) and solid organ transplantation (SOT) can be challenging. 
In this retrospective study, we investigated PD-L1 and PD1 expression in all PTLD categories of childhood and adolescence to see whether checkpoint inhibition with PD-L1/PD1 inhibitors may serve as a therapy option. We included 21 patients aged 19 years or younger (at date of transplant) with PTLD following SOT or HSCT having adequate tumor samples available (n = 29). Using immunohistochemistry, we evaluated PD-L1/PD1 expression on both tumor cells and cells of the microenvironment in all samples. Availability of consecutively matched tumor samples during 6 of 21 patients' disease courses also allowed an intra-individual assessment of PD-L1/PD1 expression. We observed lower PD-L1 and higher PD1 expression in non-destructive lesions, and higher PD-L1 and lower PD1 expression in polymorphic and, in particular, in monomorphic PTLD, mostly diffuse large B-cell lymphomas (DLBCL, n = 10/21). The amount of PD-L1- and PD1-positive cells changed in the opposite way in sequential biopsies of the same individual correlating well with the PTLD category. This is the first comprehensive pediatric study assessing PD-L1 and PD1 expression on tumor cells and in the microenvironment of PTLD including not only monomorphic, but also non-destructive early lesions. PD-L1 expression of the tumor cells inversely correlated with PD1 expression in surrounding tissues, with the highest expression in DLBCL. Since PTLD can be therapeutically challenging, our results indicate a potential efficacy of checkpoint inhibitors if standard immune- and/or chemotherapy fail or are impossible. We therefore recommend routine staining of PD-L1 and PD1 in all PTLD categories. abstract_id: PUBMED:24168387 Very late relapse of PTLD 10 yr after allogeneic HSCT and nine yr after stopping immunosuppressive therapy. We present a very late onset relapse of PTLD 10 yr after allogeneic HSCT in a patient in third remission for ALL, nine yr after the first episode of PTLD. The recipient was conditioned with fractionated TBI 12 Gy, cyclophosphamide, and horse ATG. The first episode of PTLD with a large retroperitoneal tumor occurred one yr after transplantation; a residual tumor infiltrating spleen and colon was resected one yr later. Due to continual pathological signals in liver and lungs, persistent fever, and an M-component in peripheral blood, a new course of four rituximab doses was given, after which the fever settled, the PET scan normalized, and the M-component disappeared. Without any ongoing immunosuppressive therapy, PTLD relapsed nine yr later with large intra-abdominal lymph node masses causing ureteric obstruction with bilateral hydronephrosis. Pathological features were identical to the primary PTLD tumor: EBV related, of donor origin, positive for CD138 and CD79 alpha, but negative for CD20 and CD19. The transcription factor PAX5 was negative but BOB1 and OCT2 were positive, consistent with plasmablastic lymphoma. The relapse was successfully treated with a combination of low dose chemotherapy and rituximab. Five yr after end of treatment, the girl has moderately reduced renal function but otherwise remains well without evidence of disease. abstract_id: PUBMED:37749317 Role of brentuximab vedotin plus sirolimus in the treatment of classical Hodgkin lymphoma type post-transplant lymphoproliferative disorder: a case-based review. Post-transplant lymphoproliferative disorder (PTLD) is a common secondary malignancy after transplantation, which has been recognized as a life-threatening complication. 
Hodgkin lymphoma (HL)-type PTLD is the rarest of four subtypes of PTLD, which has no treatment guideline due to its rarity. HL-type PTLD includes classical HL-type PTLD (cHL-PTLD) and HL-like PTLD. In our study, we reported the case of successful treatment using brentuximab vedotin (BV) plus sirolimus for a patient with classical HL-type PTLD in detail. Lymph node biopsy showed a picture of classical HL with mixed cellularity subtype, and immunophenotyping suggested CD30 strong positivity. Due to his impaired physical condition, we decided against intensive chemotherapy and started BV treatment with immunosuppressive agents switched to sirolimus. The 66-year-old patient with cHL-PTLD had achieved a durable complete remission for over a 1-year follow-up period. Additionally, we analyzed the clinical profile and outcomes in PTLD patients who used BV monotherapy or combined therapy by literature review. In summary, this case-based review might provide clues that treatment of cHL-PTLD with new modalities such as BV monotherapy or combination therapy, together with improvements in the immunosuppressive regimens like sirolimus, might be a feasible and chemotherapy-free approach, but warrants further evaluation in a larger patient cohort. abstract_id: PUBMED:26687013 Successful management of EBV-PTLD in allogeneic bone marrow transplant recipient by virological-immunological monitoring of EBV infection, prompt diagnosis and early treatment. Epstein-Barr virus-related post-transplant lymphoproliferative disorder (EBV-PTLD) is an uncommon, but frequently fatal, complication after allogeneic hematopoietic stem cell transplant. Prospective post-transplant virological and immunological monitoring allowed to successfully manage a patient who developed both polymorphic and monomorphic, "diffuse large B-cell lymphoma like", as an EBV-PTLD, 65days after allogeneic bone marrow transplant. Early detection of significant increase in EBV DNA level in patient's peripheral blood (peak of viral load equal to 119,039copies/mL whole blood, +56day after transplant) led to administration of pre-emptive anti-CD20 monoclonal antibody (rituximab) and close clinical monitoring. After one week, physical exam revealed laterocervical adenopathy. Histopathologic features, immunohistochemical characterization and in situ hybridization study allowed to establish a diagnosis of EBV-related PTLD. Immunological monitoring showed no EBV-specific T-cell responses during EBV replication, thus potentially explaining the occurrence of high EBV load with subsequent PTLD development. A total of four doses of anti-CD20 monoclonal antibody were administered and at the end of the treatment, EBV infection was cleared and imaging technique showed complete disease remission. In conclusion, the early use of anti-CD20 monoclonal antibody proved to be a safe and effective treatment strategy for EBV-PTLD. Moreover, combined virological-immunological monitoring of EBV infection may more accurately assess patients at higher risk for EBV-PTLD. Answer: Posttransplant lymphoproliferative disorder (PTLD) is not caused by any specific immunosuppressive drug but seems to be a result of the whole transplantation process. A retrospective study did not find any significant difference in the incidence of PTLDs when comparing different drug regimes, including prednisolone/azathioprine, cyclosporine A (CsA), and the periods before or after the use of OKT3 and antilymphocyte globulin (ALG). 
The study concluded that the risk of developing PTLD is associated with the antigenicity of the foreign graft, the immunosuppression resulting in inadequate cytotoxic T-cell activity, and the result of Epstein-Barr virus (EBV) infection, with an important minority of cases being EBV negative (PUBMED:14508366). The development of PTLD is also influenced by the intensity of immunosuppressive treatment, as seen in patients who received non-T-cell-depleted allogeneic stem cell transplantation. The risk of PTLD correlated strongly with the intensity of the immunosuppressive treatment, particularly in patients who received antithymocyte globulin (ATG) either for the treatment of steroid-resistant acute graft-versus-host disease (GVHD) or as part of the conditioning (PUBMED:12815484). In summary, PTLD is not attributed to a specific immunosuppressive drug but rather to the overall immunosuppressive state and other factors related to the transplantation process itself.
Instruction: Is fat content of human milk decreased by infusion? Abstracts: abstract_id: PUBMED:24847514 Is fat content of human milk decreased by infusion? Background: Human milk (HM) is the optimum nutrition for preterm infants. Previous studies showed that tube infusion decreased the fat content in thawed HM. The aim of this study was to determine if freezing–thawing is the main reason for decrease of fat content. In neonatal intensive care units, thawed HM is used in general, therefore the aim of this study was to investigate fat loss during tube infusion with regard to changes in tube size, material, and infusion rate. Methods: First, pre-infusion and post-infusion fat content was measured in 15 fresh HM, 10 thawed HM and 6 formula samples. We compared post-infusion and pre-infusion fat content as well as the percent decrease in fat concentration among fresh HM, thawed HM and formula samples. Second, we measured the fat content of 160 thawed HM samples infused via four different diameters (3–6 Fr), two types of material (DEHP-free and PVC-free), and two infusion rates (30 or 60 min). We compared the percent decrease in fat concentration among four different tube sizes, between DEHP-free and PVC-free tubes, and between 30 and 60 min infusion durations. Results: Post-infusion fat content was significantly decreased compared to before infusion in thawed HM and fresh HM but not in formula. Given that thawed HM resulted in larger decrease in fat content, we performed a second experiment and found no difference regarding differing size, materials or infusion rate. Conclusions: There was a far greater decrease in the post-infusion fat content of thawed HM compared to fresh HM under all test conditions. abstract_id: PUBMED:25122692 Type of homogenization and fat loss during continuous infusion of human milk. Background: Substantial fat loss may occur during continuous feeding of human milk (HM). A decrease of fat loss has been described following homogenization. Well-established methods of homogenization of HM for routine use in the neonatal intensive care unit (NICU) would be desirable. Objective: We compared the loss of fat based on the use of 3 different methods for homogenizing thawed HM during continuous feeding. Methods: Sixteen frozen donor HM samples were thawed, homogenized with ultrasound and separated into 3 aliquots ("baseline agitation," "hourly agitation," and "ultrasound"), and then frozen for 48 hours. Aliquots were thawed again and a baseline agitation was applied. Subsequently, aliquots baseline agitation and hourly agitation were drawn into a syringe, while ultrasound was applied to aliquot ultrasound before it was drawn into a syringe. The syringes were loaded into a pump (2 mL/h; 4 hours). At hourly intervals the hourly agitation infusion was stopped, the syringe was disconnected and gently shaken. During infusion, samples from the 3 groups were collected hourly for analysis of fat and caloric content. Results: The 3 groups of homogenization showed similar fat content at the beginning of the infusion. For fat, mean (SD) hourly changes of -0.03 (0.01), -0.09 (0.01), and -0.09 (0.01) g/dL were observed for the hourly agitation, baseline agitation, and ultrasound groups, respectively. The decrease was smaller for the hourly agitation group (P < .001). Conclusion: When thawed HM is continuously infused, a smaller fat loss is observed when syringes are agitated hourly versus when ultrasound or a baseline homogenization is used.
abstract_id: PUBMED:27664665 Comparison of gravimetric, creamatocrit and esterified fatty acid methods for determination of total fat content in human milk. The gravimetric method is considered the gold standard for measuring the fat content of human milk. However, it is labor intensive and requires large volumes of human milk. Other methods, such as creamatocrit and esterified fatty acid assay (EFA), have also been used widely in fat analysis. However, these methods have not been compared concurrently with the gravimetric method. Comparison of the three methods was conducted with human milk of varying fat content. Correlations between these methods were high (r² = 0.99). Statistical differences (P < 0.001) were observed in the overall fat measurements and within each group (low, medium and high fat milk) using the three methods. Overall, stronger correlation with lower mean (4.73 g/L) and percentage differences (5.16%) was observed with the creamatocrit than the EFA method when compared to the gravimetric method. Furthermore, the ease of operation and real-time analysis make the creamatocrit method preferable. abstract_id: PUBMED:38043286 Is milk fat globule size correlated with milk fat content in Ruminants? In milk, fat exists in the form of milk fat globules (MFGs). The average size (average fat globules of different particle sizes) is the most common parameter when describing MFG size. There are different views on whether there is a correlation between MFG size and milk fat content. Is the MFG size correlated with milk fat content in ruminants? To address this question, we conducted two experiments. In experiment Ⅰ, dairy cows (n = 40) and dairy goats (n = 30) were each divided into a normal group and a low-fat group according to the milk fat content. In experiment Ⅱ, dairy cows (n = 16) and dairy goats (n = 12) were each divided into a normal group and a conjugated linoleic acid (CLA)-induced low-fat group. The normal groups were fed a basal diet, and the CLA-induced low-fat groups were fed the basal diet + 300 g/d CLA (cows) or the basal diet + 90 g/d CLA (goats). In both experiments, we determined the correlation between MFG size and milk composition and MFG distribution. The results showed that in the normal and low-fat groups of cows and goats, MFG size was not correlated with milk fat, protein, or lactose content or fat-to-protein ratio. Additionally, there was no difference in the distribution of large, medium, and small MFGs (P > 0.05). However, in the CLA-induced low-fat groups, we found a correlation between MFG size and milk fat content and fat-to-protein ratio (R² > 0.3). Moreover, there was a significant change in the size distribution of MFGs. Therefore, in natural milk, MFG size was not correlated with milk fat content. Following CLA supplementation, MFG size was correlated with milk fat content. Our findings revealed that CLA and not milk fat affects MFG distribution and size. abstract_id: PUBMED:24690583 Freezing and thawing effects on fat, protein, and lactose levels of human natural milk administered by gavage and continuous infusion. Objectives: to analyze the changes in human milk macronutrients: fat, protein, and lactose in natural human milk (raw), frozen and thawed, after administration simulation by gavage and continuous infusion. Method: an experimental study was performed with 34 human milk samples.
Infrared spectrophotometry, using the MilkoScan Minor® infrared analyzer (Foss, Denmark), was used to analyze the macronutrients in human milk during the study phases. The analyses were performed in natural (raw) samples and after freezing and fast thawing, following two steps: gavage and continuous infusion. The non-parametric Wilcoxon test for paired samples was used for the statistical analysis. Results: the fat content was significantly reduced after administration by continuous infusion (p < 0.001) for both raw and thawed samples. No changes in protein and lactose content were observed between the two forms of infusion. However, the thawing process significantly increased the levels of lactose and milk protein. Conclusion: the route of administration by continuous infusion showed the greatest influence on fat loss among all the processes required for human milk administration. abstract_id: PUBMED:33805759 Correlations of Fat Content in Human Milk with Fat Droplet Size and Phospholipid Species. Fat globule size and phospholipid (PL) content in human milk (HM) were investigated. HM was classified into three groups depending on fat content (A < B < C). PL content (mg/100 g HM) was significantly higher in the C group (p < 0.05), indicating its positive relationship with HM fat content. When the PL content was normalized (mg/g fat), that of group A was significantly higher (p < 0.05) and fat droplet size in group C was slightly larger, suggesting that HM fat content is affected by fat droplet numbers to a larger extent than by fat droplet size. A correlation between PC and SM content in HM was observed regardless of fat content, while the correlation between PE and either PC or SM increased in the order of C > B > A; hence the composition and content of PL species in HM varied according to its fat content. abstract_id: PUBMED:30174084 Determination of fat content in UHT milk by electroanalytical method. An electroanalytical method was proposed to determine fat content in ultrahigh-temperature (UHT) processed milk by magnetoelectric induction. The technique involves the induction of a controllable voltage (potential difference, Ubc) in UHT milk, with differential magnetic fluxes as the stimulus. Results indicated that Ubc increased with an increase in the ratio difference between primary coil 1 and primary coil 2. Variation of fat content in the UHT milk induced a change in Ubc. A reduction in fat content led to an increase in λ-value (ratio of Uab to excitation voltage) under specific conditions. For calibration and verification, a high linear correlation coefficient (R² = 0.9428) and a low root-mean-square error of cross-validation (0.342 g/100 g) were observed between milk fat and λ-value at 20 V and 700 Hz under an open circuit at a coil ratio of 14:56. Measurement sensitivity was improved with the increase of the sample coils. abstract_id: PUBMED:31529507 Macronutrient Analysis of Modified-Fat Breast Milk Produced by 3 Methods of Fat Removal. Background: Infants with chylothorax after congenital heart disease surgery are commonly treated using modified-fat breast milk. The effect of fat removal on breast milk macronutrients remains unclear. We compared macronutrient content of breast milk with breast milk skimmed using 3 methods, including a novel device, a cream separator.
Methods: Thawed frozen breast milk samples from 30 women were defatted using refrigerated centrifuge, cream separator, and manual separation after refrigeration. We used standard assays to measure energy, protein, and fat content of breast milk samples. Results: All fat removal methods yielded skimmed breast milk with substantially lower fat and energy content. Mean energy content in breast milk skimmed by centrifuge (36.7 [SD 3.6] kcal/100 mL) was similar to that from cream separator (38.8 [3.5] kcal/100 mL). Both centrifuge and cream separator methods removed almost all fat and substantially more fat than the manual fat removal method. For unprocessed milk, energy and fat content estimated by creamatocrit was similar to reference method measurements; in skimmed milk, the creamatocrit significantly overestimated fat content. Mean protein content of skimmed breast milk was similar to unprocessed breast milk (mean 1.25 [0.31] g/100 mL). Conclusion: Breast milk fat removal did not significantly alter protein levels. In skimmed breast milk, the overestimation of fat content using creamatocrit method suggests a need for more accurate bedside methods to assess macronutrient content. The similar macronutrient composition of breast milk skimmed by cream separator and centrifuge suggests the potential for cream separator use as a new, portable defatting method for hospitals and families. abstract_id: PUBMED:36230008 Effect of Solid Fat Content in Fat Droplets on Creamy Mouthfeel of Acid Milk Gels. Previous studies have shown that emulsions with higher solid fat content (SFC) are related to a higher in-mouth coalescence level and fat-related perception. However, the effect of SFC in fat droplets on the fat-related attributes of emulsion-filled gels has not been fully elucidated. In this study, the effect of SFC on the creamy mouthfeel of acid milk gel was investigated. Five kinds of blended milk fats with SFC values ranging from 10.61% to 85.87% were prepared. All crystals in the blended milk fats were needle-like, but the onset melting temperature varied widely. Blended milk fats were then mixed with skim milk to prepare acid milk gels (EG10–EG85, fat content 3.0%). After simulated oral processing, the particle size distribution and confocal images of the gel bolus showed that the degree of droplet coalescence in descending order was EG40 > EG20 > EG60 > EG10 ≥ EG85. There was no significant difference in apparent viscosity measured at a shear rate of 50/s between bolus gels, but the friction coefficients measured at 20 mm/s by a tribological method were negatively correlated with the coalescence result. Furthermore, quantitative descriptive analysis and temporal dominance of sensations analysis showed that SFC significantly affected the ratings of melting, mouth coating, smoothness and overall creaminess, as well as the perceived sequence and the duration of melting, smoothness and mouth coating of acid milk gels. Overall, our study highlights the role of intermediate SFC in fat droplets on the creamy mouthfeel of acid milk gels, which may contribute to the development of low-fat foods with desirable sensory perception. abstract_id: PUBMED:34971400 Investigation of the Effect of Human Milk Expression Techniques on Milk Content: A Systematic Review. Background: If the mother and infant cannot meet after birth, it is recommended to express milk and give it to the infant.
There was evidence indicating that there might be decrease in essential nutrient values in human milk content depending on the expression technique in literature. The goal of this systematic review was to investigate the effect of human milk expression techniques on the macronutrient milk content and establish an evidence base for future studies. Methods: Studies investigating the effect of human milk expression techniques on milk content were reviewed without year limitations. A literature review was conducted in six electronic databases (MEDLINE, Web of Science, PubMed, ScienceDirect, CINAHL and Cochrane) until 30 May 2021, using the keywords of breast milk expression techniques, milk content and breast milk pumping. Results: From 258 articles initially screened, we included 6 articles in the systematic review. The fat, protein and lactose content of human milk was analyzed in the studies reviewed. It was concluded that there was no significant effect on the protein (9.7-9.8 g/dl and 2.1-2.1 g/dl, respectively) and lactose (6.50-6.53% and 6.1-6.1 g/dl, respectively) content of milk. However, the fat (58.30, 48.81g/l; 2.6-2.2 g/dl) content was affected. Conclusions: This study investigated the effect of milk expression techniques on the macronutrient content of human milk, and it was concluded that there was no significant effect on the protein and carbohydrate content of milk. However, the fat content was affected. Limitation of this study is that some factors that might affect the content of human milk were not standardized sufficiently in the included studies. Answer: Yes, the fat content of human milk is decreased by infusion. Studies have shown that post-infusion fat content is significantly decreased compared to before infusion in both thawed and fresh human milk, but not in formula. Thawed human milk, in particular, resulted in a larger decrease in fat content compared to fresh human milk under all test conditions, regardless of differing tube sizes, materials, or infusion rates (PUBMED:24847514). Additionally, when thawed human milk is continuously infused, a smaller fat loss is observed when syringes are agitated hourly versus when ultrasound or a baseline homogenization is used (PUBMED:25122692). Furthermore, the route of administration by continuous infusion showed the greatest influence on fat loss among all the processes required for human milk administration (PUBMED:24690583).
Instruction: Do the personalities of international and U.S. medical graduates in psychiatry differ? Abstracts: abstract_id: PUBMED:16609126 Do the personalities of international and U.S. medical graduates in psychiatry differ? A preliminary study. Objective: To compare personality traits of psychiatry residents with various characteristics. Method: The authors administered Cloninger's personality inventory to residents at two schools. Results: There were no trait differences between international medical graduates (IMGs) and U.S. medical graduates (USMGs) or those for whom psychiatry was a first or second choice. Conclusion: Perceived differences between IMG and USMG psychiatry residents appear unrelated to personality. abstract_id: PUBMED:35486365 International Medical Graduates in the United States Psychiatry Workforce. Objective: This study describes the supply, distribution, and characteristics of international medical graduate (IMG) psychiatrists who provide services in the USA. Methods: Cross-sectional study design, using descriptive statistics based on combined data from the American Medical Association (2020 Physician Masterfile) and the Educational Commission for Foreign Medical Graduates. Results: International medical graduates continue to make significant contributions to the US physician workforce. As a group, they represent 29% of active psychiatrists in the USA, compared to 23% in all other medical specialties. Many IMG psychiatrists were US citizens who obtained their medical degrees outside the USA or Canada, often in the Caribbean. In some states (i.e., Florida, New Jersey), over 40% of active psychiatrists are IMGs. Over 30% of IMG psychiatrists graduated from medical schools in India and Pakistan. Conclusions: This study provides an overview of the psychiatric workforce in the USA, quantifying the specific contribution of IMGs. Several factors, including immigration policies, continued expansion of US medical schools, and the number of available residency positions, could impact the flow of IMGs to the US. Longitudinal studies are needed to better understand the implications for workforce composition and distribution, and their potential impact on the care of psychiatric patients. abstract_id: PUBMED:35910699 Challenges for International Medical Graduates in the US Graduate Medical Education and Health Care System Environment: A Narrative Review. International medical graduates (IMGs) have become a vital part of the US graduate medical education (GME) and health care system (HCS) workforce; they contribute to essential diversity that relieves cultural and linguistic barriers to health care. The number of IMGs looking for medical training in the United States. has constantly been increasing in the last decades. The challenges they meet begin long before residency application, continue during their transition to residency programs, through early medical training, and eventually subside in senior years. IMGs' hurdles permeate the themes of navigating the US GME and HCS, adaptation to the US culture, communication skills, racial discrimination, emotional distress, and finances. This article aims to comprehensively review available information concerning the challenges encountered by IMGs in their transition to the US GME and HCS environments. abstract_id: PUBMED:22851028 A roadmap for observership programs in psychiatry for international medical graduates. 
Objective: International medical graduates (IMGs) constitute a significant proportion of the psychiatric workforce in the United States. Observership programs serve an important role in preparing IMGs for U.S. residency positions; yet there are limited resources with information available on establishing these observerships, and none specific to psychiatry. In this article, the authors present a roadmap for observership programs in psychiatry for IMGs. Method: This article draws on the experience of the IMG committee of the Group for Advancement of Psychiatry in establishing observership programs. Results: The authors highlight the benefits of observership programs to IMGs, psychiatry departments, and the U.S. medical system as a whole. The different components of an observership program are presented, along with core competencies that need to be acquired. The authors discuss challenges that observership programs may encounter as well as recommendations for overcoming them. Conclusion: Observership programs provide a unique opportunity to integrate IMGs into the U.S. medical system. This article provides a framework for establishing such programs in a way that will optimize their benefits and avoid potential pitfalls. abstract_id: PUBMED:35588212 Foreign medical graduates and American psychiatry. Graduates from foreign medical schools (FMGs) began to staff US state psychiatric hospitals after World War II, and became increasingly associated with the poor quality of those institutions. Public and professional commentary on FMGs criticized their skills and suitability for the US healthcare system in the 1970s, at the same time that state hospitals were under increasing attack. By the 1980s and 1990s, the association between international medical graduates (as they became known) and underserved populations became an argument in favour of easing restrictions on these graduates. The role of foreign-trained psychiatrists in the US public sector became a way for American psychiatry leaders to manage the problems of the seriously mentally ill, first with blame and then with neglect. abstract_id: PUBMED:14754850 Recent trends in psychiatry residency workforce with special reference to international medical graduates. Objective: This study examines trends in the supply, distribution, and demographics of psychiatry residents during the 1990s. It evaluates the extent to which the predicted downsizing of psychiatry residency training programs actually occurred and how it affected training programs of different sizes and locations. Method: Data for this study were obtained from the American Medical Association's (AMA) Annual Survey of Graduate Medical Education (GME) Programs, the AMA GME directory, and the APA Graduate Medical Census. The study compares the roles played by international medical graduates (IMGs) in contrast to U.S. medical graduates (USMGs) in these trends. Results: There was a significant decline in the number of residents during the years studied. The median training program size also decreased. International medical graduates found broad acceptance in training programs of all locations and sizes, including medical school based programs. Implications of the findings are discussed regarding the impact of current graduate medical education (GME) and immigration policies on future workforce patterns. Conclusion: The field will have to decide whether it can afford any more residency downsizing in light of emerging evidence of a shortage of psychiatrists.
abstract_id: PUBMED:35098550 Disaggregating inequalities in the career outcomes of international medical graduates in the United States. Although research finds that international medical graduates (IMGs) fill gaps in US health care left by US medical graduates (USMGs), the extent to which IMGs' career outcomes are stratified along the lines of their country of medical education remains understudied. Using data from the 2019 American Medical Association Physician Masterfile (n = 19,985), I find IMGs from developed countries chart less marginalised paths in their US careers relative to IMGs from developing countries; they are more likely to practise in more competitive and popular medical specialities; to attend prestigious residency programmes; and to practise in less disadvantaged counties that employ more USMGs relative to IMGs. These findings suggest IMGs experience divergent outcomes in the United States based on their place of medical education, with IMGs from developing countries experiencing more constraints in their careers relative to IMGs from developed countries. This understudied axis of stratification in medicine has important implications for our understanding of how nativism and racism may intersect to generate inequalities in the medical profession and in US health care more broadly. abstract_id: PUBMED:20034389 Specialty choice in UK junior doctors: is psychiatry the least popular specialty for UK and international medical graduates? Background: In the UK and many other countries, many specialties have had longstanding problems with recruitment and have increasingly relied on international medical graduates to fill junior and senior posts. We aimed to determine what specialties were the most popular and desirable among candidates for training posts, and whether this differed by country of undergraduate training. Methods: We conducted a database analysis of applications to Modernising Medical Careers for all training posts in England in 2008. Total number of applications (as an index of popularity) and applications per vacancy (as an index of desirability) were analysed for ten different specialties. We tested whether mean consultant incomes correlated with specialty choice. Results: In 2008, there were 80,949 applications for specialty training in England, of which 31,434 (39%) were from UK graduates. Among UK medical graduates, psychiatry was the sixth most popular specialty (999 applicants) out of 10 specialty groups, while it was fourth for international graduates (5,953 applicants). Among UK graduates, surgery (9.4 applicants per vacancy) and radiology (8.0) had the highest number of applicants per vacancy and paediatrics (1.2) and psychiatry (1.1) the lowest. Among international medical graduates, psychiatry had the fourth highest number of applicants per place (6.3). Specialty popularity for UK graduates was correlated with predicted income (p = 0.006). Conclusion: Based on the number of applicants per place, there was some consistency in the most popular specialties for both UK and international medical graduates, but there were differences in the popularity of psychiatry. With anticipated decreases in the number of new international medical graduates training in the UK, university departments and professional associations may need to review strategies to attract more UK medical graduates into certain specialties, particularly psychiatry and paediatrics.
abstract_id: PUBMED:37801062 Why Does the US Overly Rely on International Medical Graduates in Its Geriatric Psychiatric Workforce? International medical graduates (IMGs) tend to choose careers in geriatric psychiatry likely because high-quality training and robust mentorship opportunities from other IMGs are available. Geriatric psychiatry offers stable career prospects and opportunities to express humanitarian impulses by working closely with elders. IMGs currently constitute almost a quarter of the psychiatry workforce and approximately 53% of the geriatric psychiatry work force in the United States. However, the number of IMGs entering psychiatry residencies-and, subsequently, geriatric psychiatry fellowships-has recently declined. Overreliance on IMGs in the US geriatric psychiatric workforce means these shortages will further burden an already-limited geriatric mental health care workforce and further compromise US health system capacity to meet its elders' needs. abstract_id: PUBMED:25830581 International medical graduates in the US physician workforce. International medical graduates (IMGs) play a vital role in the US health care system. These graduates represent 26% of physicians in practice and 24% of residents in specialty programs. All IMGs go through US medical licensing examination and credentialing verification to receive certification from the Educational Commission for Foreign Medical Graduates to become eligible to enter the US graduate medical education process. Compared with US graduates, IMGs tend to practice in primary care specialties and in underserved and rural areas. The author summarizes available data regarding IMGs in training and in practice to convey the role IMGs play in the US health care system. Answer: Based on the information provided in the abstracts, there were no trait differences between international medical graduates (IMGs) and U.S. medical graduates (USMGs) in psychiatry. This finding comes from a study where Cloninger's personality inventory was administered to residents at two schools, and the results indicated that perceived differences between IMG and USMG psychiatry residents appear unrelated to personality (PUBMED:16609126).
Instruction: The differential effects of surgical harm in elderly populations. Does the adage: "they tolerate the operation, but not the complications" hold true? Abstracts: abstract_id: PUBMED:24929708 The differential effects of surgical harm in elderly populations. Does the adage: "they tolerate the operation, but not the complications" hold true? Background: Elderly patients are thought to tolerate surgical complications poorly because of low physiologic reserve. The purpose of the study was to evaluate the differential effects of surgical harm in patients over 80 years old. Methods: Three years of data from a harm-reduction campaign were used to identify inpatient surgeries performed on patients older than 50. The rates of harm, death, cost, and length of stay (LOS) were analyzed using SPSS 21 (IBM, New York, NY). Results: A total of 22,710 patients were identified. Rates of harm and mortality increased with increasing age. Harmed patients over age 80 had increased mortality (9.5% vs 7%), but lower cost, intensive care unit days, and LOS versus those aged 50 to 80. Linear regression showed increased cost with harm ($24,000) and decreased cost with age above 80 (-$7,000). Conclusions: In the elderly surgical population, there is more harm, and harm events are associated with higher mortality rates but less additional cost and LOS. Differing goals or aggressiveness of care may explain cost avoidance in the elderly. abstract_id: PUBMED:34396795 The Surgical Apgar Score Predicts Postoperative Complications in Elderly Patients After Surgery for Colorectal Cancer. Objective: The surgical Apgar score (SAS) has been validated as a risk assessment tool for postoperative complications. However, the utility of the SAS in elderly patients with colorectal cancer remains unclear. In this study, we evaluated the utility of the SAS for predicting severe complications in elderly patients with colorectal cancer. Methods: We retrospectively analyzed 295 elderly patients (≥75 years old) who underwent radical surgery for colorectal cancer. The SAS was calculated based on 3 intraoperative parameters: estimated blood loss (EBL), lowest mean arterial pressure, and lowest heart rate. Severe complications were defined as Clavien-Dindo classification grade ≥ IIIa. We divided all patients into 2 groups according to the presence or absence of severe complications. The optimal cut-off value of the SAS for severe complications was determined by receiver operating characteristic curve analysis. Predictors of severe complications were analyzed by logistic regression modeling. Results: Severe complications were observed in 57 patients (19.3%). Male sex, rectal cancer, operation time (>240 minutes), EBL (≥120 mL), and a low SAS (≤6) were significantly associated with severe complications in univariate analysis. A multivariate analysis revealed that male sex, rectal cancer, and a low SAS (≤6) were independent predictors of severe complications. Conclusions: A low SAS (≤6) was associated with severe complications after colorectal cancer surgery in elderly patients. The SAS is a valuable predictor of severe complications in elderly patients with colorectal cancer. abstract_id: PUBMED:11482220 The complications in surgical treatment of gastric cancer in elderly and senile patients The causes of perioperative complications in 736 elderly and senile patients operated on for gastric cancer were studied.
It was established that the frequency of complications and mortality depended on the presence of concomitant diseases, on complications of the main disease, and on the duration and volume of the operation, but not on the patients' age. Refinement of the operative technique and of anesthesiological support would permit wider indications for radical surgical intervention and improve the efficacy of treatment. abstract_id: PUBMED:29402555 The development and validation of a novel model for predicting surgical complications in colorectal cancer of elderly patients: Results from 1008 cases. Objective: To establish predicting models of surgical complications in elderly colorectal cancer patients. Background: Surgical complications are usually critical and lethal in elderly patients. However, none of the current models are specifically designed to predict surgical complications in elderly colorectal cancer patients. Methods: Details of 1008 cases of elderly colorectal cancer patients (age ≥ 65) were collected retrospectively from January 1998 to December 2013. Seventy-six clinicopathological variables which might affect postoperative complications in elderly patients were recorded. Multivariate stepwise logistic regression analysis was used to develop the risk model equations. The performance of the developed model was evaluated by measures of calibration (Hosmer-Lemeshow test) and discrimination (the area under the receiver-operator characteristic curve, AUC). Results: The AUC of our established Surgical Complication Score for Elderly Colorectal Cancer patients (SCSECC) model was 0.743 (sensitivity, 82.1%; specificity, 78.3%). There was no significant discrepancy between observed and predicted incidence rates of surgical complications (AUC, 0.820; P = .812). The Surgical Site Infection Score for Elderly Colorectal Cancer patients (SSISECC) model showed significantly better prediction power compared to the National Nosocomial Infections Surveillance index (NNIS) (AUC, 0.732; P < 0.001) and Efficacy of Nosocomial Infection Control index (SENIC) (AUC, 0.686; P < 0.001) models. Conclusions: The SCSECC and SSISECC models show good prediction power for postoperative surgical complication morbidity and surgical site infection in elderly colorectal cancer patients. abstract_id: PUBMED:8590296 Risk factors in elderly surgical patients. A prospective study 250 patients aged 65 and more were operated on in a general surgical unit and were studied prospectively on admission. Sixteen parameters were studied, trying to determine the factors which might induce a risk of post-operative complications. These complications, minor or major, were found in 25% of the cases (62), of whom 11 (4%) died. As expected, this study shows that with the increase in the importance or the number of unfavorable factors, the complication rate rises. The statistical analysis shows that some factors are of low significance, such as the degree of emergency of the surgical operation or hyperglycemia. Other factors are highly significant for predicting complications: increased dependence, poor mental score, anemia, increased prothrombin time, hyponatremia, or the presence of pathological states other than those studied. The importance and the type of the surgical operation, particularly bone traumatology, vascular surgery, thoracic and visceral surgery, involve a high rate of post-operative complications. The same is true when malnutrition or hypoalbuminemia are present.
Advancing age represents per se an augmentation of these risks, because elderly patients are not able to meet the increased metabolic demand. This loss of reserve capacity is the most important factor that decreases the elderly patient's ability to tolerate operations. The risk factors mentioned above are particularly important because some pathological states can be corrected, and other preventive measures can be taken before surgery, as long as one is not dealing with an emergency. abstract_id: PUBMED:17960350 Rare complication after STARR operation Longo's STARR operation is a new surgical technique for the management of obstructive defecation syndrome. A major advantage is the repair of rectocele and intussusception in a single transanal approach. Complications such as bleeding, infection, anal stenosis, and fistulas are known. In the described patient we detected a cavity in the spatium rectovaginale which was lined by rectal mucosa and connected to the rectum as a result of an incomplete resection of the rectal wall and dilatation of the stapler suture. Recurrent stool incrustation in the cavity led to the formation of coproliths. We performed a modified Rehn-Delorme operation to resect the cavity. The STARR operation is a technique which should be used carefully and performed only by surgeons with experience and full knowledge of the potential complications. abstract_id: PUBMED:27215514 Therapy of both surgical and non-surgical related complication of gastric cancer for the elderly Gastric cancer is one of the most common digestive malignant tumors. More and more elderly gastric cancer patients are diagnosed and need to undergo surgical treatment as the population ages. Since elderly patients have decreased organ function and more internal diseases, their tolerance of anesthesia and surgery is poor. As a result, the incidence of surgical and postoperative complications is obviously higher. Complications can be divided into surgical complications and non-surgical related complications. Surgical complications consist mainly of hemorrhage, anastomotic leakage, anastomotic dehiscence and intestinal obstruction, while non-surgical related complications include deep venous thrombosis, pulmonary infection, anesthesia-related complications, abdominal infection, urinary infection, incision infection, poor wound healing, gastroparesis, gastroesophageal reflux disease, dumping syndrome and so on. Hence, we should give more consideration to the elderly patient's physical condition than to the extent of radical operation. To reduce complications, we should evaluate organ function and actively treat underlying diseases before the operation. Meanwhile, high-quality nursing, effective analgesia, anti-inflammation measures, maintaining water-electrolyte balance and nutrition support are also required postoperatively. Moreover, laparoscopic surgery and enhanced recovery after surgery (ERAS) can reduce postoperative complications in elderly patients with gastric cancer as well. Further prospective randomized controlled trials on elderly gastric cancer should be carried out in the future, which can provide higher-level evidence for treatment. abstract_id: PUBMED:29114304 Thrombosed Giant "True" Posterior Communicating Artery Aneurysm Treated by Trapping and Thrombectomy. Giant "true" posterior communicating artery (PCOM) aneurysms are rare and the best surgical treatment for them is unclear.
We present the case of an 85-year-old woman with this type of lesion, 35 mm in diameter, successfully treated by trapping and thrombectomy via a pterional approach without complications. There were no perforating arteries originating from the aneurysmal wall. The patient had an uneventful postoperative course. The key to successful treatment of such lesions is preservation of perforators, as trapping may result in ischemic complications. However, our case indicates that trapping and thrombectomy might carry relatively low risks of such complications, supposing that the thrombosis within the giant "true" PCOM aneurysm induced spontaneous obliteration of perforators arising from the aneurysmal dome, and that collateral flow from the posterior cerebral artery already compensated the corresponding territories. abstract_id: PUBMED:37452304 Perioperative risk factors related to complications of lumbar spine fusion surgery in elderly patients. Purpose: To analyze the perioperative risk factors related to lumbar spine fusion surgery in elderly patients. Methods: 202 elderly patients (age range 77-92 years) who underwent lumbar spinal fusion surgery between January 2019 and June 2021 were retrospectively investigated. Information on age, sex, comorbidity, fixation segments, operation time, surgical blood loss and perioperative complications during hospitalization was collected. Risk factors for complications were analyzed. Student's t-test, chi-square test, Mann-Whitney U-test and multivariate generalized linear models were used. Results: In this study of elderly patients with an average age of 79.1 years, 31 patients (15.3%) presented with complications, including 1 patient with an intraoperative complication and 30 patients with postoperative complications; 2 of the 31 patients (1%) died. The elderly patients were divided into group A (24 patients) with major postoperative complications and group B (178 patients) without major postoperative complications. Major postoperative complications were significantly associated with age (univariate analysis, t = 3.92, P < 0.001; multivariate analysis, OR = 1.323, 95%CI 1.126-1.554, P = 0.001), but not significantly associated with the other factors tested (sex, comorbidity, fixation segments, operation time, surgical blood loss). Then 173 patients (range 77-81 years) were selected and the rate of major postoperative complications at each age from 78 to 81 years was compared with that of 77-year-old patients, respectively. We found that the ratios of complications at 80 years (OR = 10.000, P = 0.019) and 81 years (OR = 10.000, P = 0.009) were higher than the ratio at 77 years. Conclusions: Despite great progress in medical technology, increasing age remains an independent risk factor for major postoperative complications in elderly patients undergoing lumbar spinal fusion surgery. The incidence of major postoperative complications in 80- and 81-year-old patients was 10-fold higher than that in 77-year-old patients, reminding us to pay more attention to patients aged 80 years and older. abstract_id: PUBMED:25263474 Risks and complications of urological surgical procedures in elderly patients Introduction: Surgical procedures in elderly patients are associated with a higher risk of complications. The main goal of this study was to evaluate the results and complications of urological surgeries in elderly patients.
Material And Methods: A retrospective study was completed, evaluating the perioperative and early postoperative complications in patients over the age of 75. In total, 221 patients who underwent urological surgery between January 2011 and December 2012 were evaluated. The mean age of the patients was 78. Results: The patient cohort that was evaluated experienced 48 (22%) complications; there was one death (<0.5%), and 4 patients (<2%) underwent repeat surgery due to serious complications. Infection was the most common complication: sepsis or surgical site infection. Other complications included respiratory insufficiency, delirium, bleeding, etc. Conclusion: Urological surgery in patients above the age of 75 is safe, provided all pre-op procedures are executed correctly and prevention of possible complications is dealt with appropriately. Based on our results, the risk of complications falls within an acceptable range. Answer: The adage "they tolerate the operation, but not the complications" seems to be supported by the evidence presented in the abstracts. Elderly patients are indeed at a higher risk for complications following surgery, and these complications can have significant impacts on outcomes. For instance, a study found that elderly patients over the age of 80 who experienced harm had increased mortality rates compared to younger cohorts aged 50 to 80, although they incurred lower additional costs and had shorter lengths of stay in the hospital. This suggests that while the elderly may survive the initial surgery, they are more vulnerable to the consequences of complications that arise thereafter (PUBMED:24929708).
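Several of the abstracts above describe complication-risk models built with multivariate logistic regression and evaluated by discrimination (AUC) and calibration. As a minimal, purely illustrative sketch of that general workflow, the Python fragment below fits such a model on simulated data; the predictor names and coefficients are hypothetical assumptions, and this is not the authors' actual SCSECC or SSISECC model.

    # Illustrative sketch only: a logistic-regression complication-risk model fitted on
    # simulated data and scored by AUC. All predictors and coefficients are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.normal(75, 6, n),    # age in years (hypothetical predictor)
        rng.normal(38, 5, n),    # serum albumin, g/L (hypothetical predictor)
        rng.integers(0, 2, n),   # anemia, 0/1 (hypothetical predictor)
        rng.normal(180, 60, n),  # operative time, minutes (hypothetical predictor)
    ])
    # Simulated "true" risk, used only to generate outcome labels for the demo.
    logit = -8 + 0.06 * X[:, 0] - 0.05 * X[:, 1] + 0.7 * X[:, 2] + 0.01 * X[:, 3]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    p_te = model.predict_proba(X_te)[:, 1]
    print("held-out AUC:", round(roc_auc_score(y_te, p_te), 3))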
Instruction: Is impaired outcome following a first manic episode due to mood-incongruent psychosis? Abstracts: abstract_id: PUBMED:11099745 Is impaired outcome following a first manic episode due to mood-incongruent psychosis? Background: Mood-incongruent psychosis during the course of bipolar disorder has been associated with poor outcome. However, it remains unknown whether this is secondary to persistent affective or psychotic symptoms or both. Method: Fifty patients with bipolar disorder between the ages of 16 and 45 years were recruited during their first psychiatric hospitalization for mania. These patients were evaluated using structured and semi-structured clinical instruments then followed longitudinally. Outcomes during the first eight months of follow-up were compared between patients with mood-incongruent psychosis and those without (i.e., patients with mood-congruent psychosis or no psychosis) during the index manic episode. Specifically, ratings of the percent of weeks during follow-up with psychosis and affective syndromes and symptoms, as well as ratings of global outcome (GAF), were compared. Results: Patients with mood-incongruent psychosis at the index hospitalization exhibited significantly more weeks during follow-up with both mood-incongruent and mood-congruent psychotic symptoms than patients without mood-incongruent psychosis. Mood-incongruent psychosis was also associated with poorer overall functioning during the outcome interval. The groups did not differ in the percent of weeks with affective syndromes or symptoms. Treatment during follow-up did not differ between groups and was not associated with outcome variables in general. Conclusion: Mood-incongruent psychosis that occurs during the first manic episode appears to predict an increased likelihood of persistent psychotic symptoms during the subsequent eight months. This persistence of psychosis is associated with a worse overall course of illness as compared to patients without mood-incongruent psychosis. Limitations: These results apply to a relatively short outcome period and are from a single center. abstract_id: PUBMED:35468481 A two-year follow-up study on the first manic episode due to mood-incongruent psychosis. Background: Mood-incongruent psychosis during bipolar disorder has been associated with poor outcomes. However, it remains unknown whether this is secondary to persistent affective or psychotic symptoms or both. Method: Sixty-eight patients with bipolar disorder between the ages of 16 and 45 were recruited during their first psychiatric hospitalization for mania. These patients were evaluated using structured and semi-structured clinical interview then followed longitudinally. Outcomes during the first twenty-four months of follow-up were compared between patients with mood-incongruent psychosis and those without (i.e., patients with mood-congruent psychosis or no psychosis) during the index manic episode. Specifically, ratings of the percent of weeks during follow-up with the duration of mood incongruent psychotic symptom, any psychotic symptom, affective syndromes, and scores of global outcomes were compared. Results: Comparing the 24-month follow-up results between the two groups, patients with mood incongruent psychotic symptoms had a lower global functional rating scale, efficacy index, while the duration of mood incongruent psychotic symptom, any psychotic symptom, and complete affective symptom showed statistically significant differences between the two groups. 
There were also statistically significant differences between the two groups in the duration of use of mood stabilizers, antidepressants, typical antipsychotics, and atypical antipsychotics. Partial correlation analysis revealed that Global Assessment of Functioning (GAF) scale scores after 24 months showed a significant negative correlation with the length of time of incongruent psychotic symptoms. Still, the correlation was intermediate (correlation coefficient less than 0.5, r2 = -0.471, P < 0.001). Conclusion: Mood-incongruent psychosis that occurs during the first manic episode appears to predict an increased likelihood of persistent psychotic symptoms during the subsequent twenty-four months. This persistence of psychosis is associated with a worse overall course of illness than that of patients without mood-incongruent psychosis. Limitations: These results apply to a relatively short outcome period, and the sample size is relatively small. abstract_id: PUBMED:8682975 Mood-congruent versus mood-incongruent psychotic symptoms in first-admission patients with affective disorder. The distribution of mood-congruent and mood-incongruent symptoms in 49 first-admission DSM-III-R psychotic bipolar and 35 psychotic depressed patients is presented. Most patients had mood-incongruent symptoms (77.4%). 73% of mood-incongruent bipolars and 32% of incongruent depressives had a combination of mood-congruent and mood-incongruent symptoms. Demographic and clinical variables were unrelated to incongruence. The only 24-month clinical outcome predicted by mood incongruence was a poorer GAF rating. 15 of the 16 patients whose diagnosis was changed at follow-up from affective to nonaffective psychosis had mood-incongruent features initially. The findings raise questions about the general prognostic utility of mood congruence. abstract_id: PUBMED:27762161 Structural brain changes in first episode mania with and without psychosis: Data from the Systematic Treatment Optimization Program for Early Mania (STOP-EM). Objectives: The neurobiological underpinnings of bipolar I disorder are not yet understood. Previous structural neuroimaging studies of bipolar disorder have produced rather conflicting results. We hypothesise that clinical sub-phenotypes of bipolar I disorder defined by their psychotic symptoms, especially those with mood-incongruent psychotic features, may have more extensive structural brain abnormalities. Methods: We investigated structural brain alterations in patients with first-episode mania (n = 55) with mood-congruent (n = 16) and mood-incongruent (n = 32) psychotic features, as well as those without psychotic symptoms (n = 7), relative to healthy subjects (n = 56). Results: Total intracranial volume was significantly reduced in patients with mood-incongruent psychosis compared to healthy subjects while cerebrospinal fluid (CSF) volume was significantly increased. Patients with mood-congruent psychosis showed significant reduction in total white matter volume and significant CSF volume increase. Patients with psychosis had significant volume reduction in anterior cingulate and medial prefrontal cortices. Relative to mood-congruent psychotic features, mood-incongruent psychotic features were associated with volume reduction in the left middle temporal gyrus, right inferior parietal gyrus, right fusiform gyrus, left middle orbitofrontal gyrus and cerebellum.
Conclusions: While preliminary, our findings suggest that the presence and type of psychosis in first-episode mania may be phenotypic markers of underlying biological variants of bipolar disorder. abstract_id: PUBMED:3976922 Major depression with mood-congruent or mood-incongruent psychotic features: outcome after 40 years. Using cross-sectional evaluations 40 years after index admissions, the authors compared depressed patients with mood-congruent and those with mood-incongruent psychotic features. These patients were then compared with patients with nonpsychotic major depression, schizophreniform disorder, or schizophrenia. Outcome in the mood-congruent group resembled that in the nonpsychotic group and was significantly better than that in the mood-incongruent group. Patients in this latter group, however, had significantly better follow-up scores than did schizophrenic patients. These findings are consistent with a short-term outcome and family history study and suggest that patients with major depression and mood-incongruent psychotic features constitute a more diagnostically heterogeneous group than do those with mood-congruent psychotic features. abstract_id: PUBMED:36801515 Interactions between mood and paranoid symptoms affect suicidality in first-episode affective psychoses. Background: Suicide prevention is a major challenge in the treatment of first-episode affective psychoses. The literature reports that combinations of manic, depressive and paranoid symptoms, which may interact, are associated with an increased risk of suicide. The present study investigated whether interactions between manic, depressive and paranoid symptoms affected suicidality in first-episode affective psychoses. Methods: We prospectively studied 380 first-episode psychosis patients enrolled in an early intervention programme and diagnosed with affective or non-affective psychoses. We compared intensity and presence of suicidal thoughts and occurrence of suicide attempts over a three-year follow-up period and investigated the impact of interactions between manic, depressive and paranoid symptoms on level of suicidality. Results: At 12 months follow-up, we observed a higher level of suicidal thoughts and higher occurrence of suicide attempts among the affective psychoses patients compared to non-affective psychoses patients. Combined presence of either depressive and paranoid symptoms, or manic and paranoid symptoms, was significantly associated with increased suicidal thoughts. However, the combination of depressive and manic symptoms showed a significant negative association with suicidal thoughts. Conclusions: This study suggests that paranoid symptoms combined with either manic or depressive symptoms are associated with an increased risk of suicide in first-episode affective psychoses. Detailed assessment of these dimensions is therefore warranted in first-episode affective patients and integrated treatment should be adapted to increased suicidal risk, even if patients do not display full-blown depressive or manic syndromes. abstract_id: PUBMED:15337330 Schneiderian first rank symptoms predict poor outcome within first episode manic psychosis. Background: The validity of a sub-classification of affective psychosis according to the mood congruence of psychotic features has been questioned in the literature. While some authors have found a correlation between such symptoms and outcome, their predictive value was rather limited in these studies. 
Method: Prospective study of 108 subjects presenting with a first DSM-III-R manic episode with psychotic features to determine the frequency of different types of psychotic symptoms and to measure the predictive utility of mood incongruent psychotic symptoms (MIPS) and first-rank Schneiderian symptoms (FRSS) during the first episode for a 12-month outcome. Outcome was measured by the level of positive, negative, and depressive symptoms, and psychosocial functioning. Duration of affective and psychotic symptoms was also assessed. Results: Patients presented with a wide variety of psychotic symptoms. The presence of MIPS at baseline was significantly correlated with a longer persistence of psychotic symptoms, but not with poorer outcome at 12 months. By contrast, the presence of FRSS at baseline was significantly associated with earlier onset of psychosis as well as increased severity of negative symptoms and poorer psychosocial functioning after 12 months. Conclusion: The presence of FRSS during a first manic episode with psychotic features identifies a sub-group of patients with more severe presentation and poorer short-term outcome. These results question the prognostic utility of MIPS. Limitations: Despite the relatively large number of subjects compared with other studies, the statistical power to detect all but large effect sizes is limited by the sample size. abstract_id: PUBMED:11059995 Ten-year outcome: patients with schizoaffective disorders, schizophrenia, affective disorders and mood-incongruent psychotic symptoms. Background: It is unclear whether outcome in schizoaffective disorders is more similar to schizophrenia or affective disorders. Aims: To provide longitudinal data on clinical course and outcome in schizoaffective disorders versus schizophrenia and affective disorders, and determine whether mood-incongruent psychotic symptoms have negative prognostic implications. Method: A total of 210 patients with schizoaffective disorders, schizophrenia, bipolar manic disorders and depression were assessed at hospitalisation and then followed up four times over 10 years. Results: At all four follow-ups, fewer patients with schizoaffective disorders than with schizophrenia showed uniformly poor outcome. Patients with mood-incongruent psychotic symptoms during index hospitalisation showed significantly poorer subsequent outcome (P < 0.05). Conclusions: Schizoaffective outcome was better than schizophrenic outcome and poorer than outcome for psychotic affective disorders. Mood-incongruent psychotic symptoms have negative prognostic implications. The results could fit a symptom dimension view of schizoaffective course. abstract_id: PUBMED:27107261 Three-year clinical and functional outcome comparison between first-episode mania with psychotic features and first-episode schizophrenia. Background: The early course of first-episode mania with psychotic features (FEMP) is under-studied. Accumulating evidence suggests that FEMP is associated with substantial functional impairment. Very few studies were conducted to directly compare clinical and functional outcomes between FEMP and first-episode schizophrenia (FES). Methods: Four-hundred-twenty patients aged 15-25 years who presented with FEMP or FES to a territory-wide early intervention service in Hong Kong from July 2001 to August 2003 and completed 3-year follow-up were studied. Baseline and follow-up variables were collected via systematic medical file review.
Functional remission was operationalized as attaining sustained employment and a Social and Occupational Functioning Assessment Scale (SOFAS) score >60 in the last 12 months of follow-up. Results: At baseline, FEMP patients were younger, more likely to be hospitalized, and had a shorter duration of untreated psychosis, more severe positive symptoms and a lower SOFAS score than FES patients. By the end of the 3-year follow-up, FEMP patients had significantly milder positive symptom severity, a higher SOFAS score, and higher rates of sustained employment (45.7%) and functional remission (36.9%) than FES patients. Regression analyses showed that diagnostic group membership of FEMP (vs. FES) independently predicted better clinical and functional outcomes. Conclusion: Our results indicate that FEMP patients had better clinical and functional outcomes than FES patients in the initial 3 years of treatment. Yet, only approximately 37% of FEMP patients attained functional remission at 3 years. This underscores the need to develop specialized early intervention for FEMP populations to promote functional recovery in the early stage of illness. abstract_id: PUBMED:31708055 Affective Disorders with Psychosis in Youth: An Update. Mood disorders, including major depression and mania, can present with psychotic features. In youth, psychotic-like phenomena such as "seeing faces in the dark" or "hearing noises" are fairly common. Rates of lifetime psychotic symptoms are much higher than rates of psychosis during a "current" episode of mania or depression in youth. Psychotic phenomena can be mood congruent or incongruent. A detailed mental status examination and clinical history include questioning to ensure the informants understand the questions being asked. There are interviews that structure how questions are asked, and rating scales that help anchor the severity and quality of the mood episode. Answer: The available research suggests that impaired outcome following a first manic episode may indeed be associated with mood-incongruent psychosis. Studies have shown that patients with mood-incongruent psychosis during their first manic episode are more likely to experience persistent psychotic symptoms during follow-up periods, which is associated with poorer overall functioning and a worse course of illness compared to patients without mood-incongruent psychosis (PUBMED:11099745; PUBMED:35468481). For instance, a study with an eight-month follow-up period found that patients with mood-incongruent psychosis exhibited significantly more weeks with both mood-incongruent and mood-congruent psychotic symptoms than patients without mood-incongruent psychosis, and this was associated with poorer global functioning (PUBMED:11099745).
Additionally, mood-incongruent psychotic symptoms have been found to have negative prognostic implications in other studies as well, indicating that they may contribute to a more diagnostically heterogeneous group with a poorer long-term outcome (PUBMED:3976922; PUBMED:11059995). In summary, the evidence points towards mood-incongruent psychosis during a first manic episode as a predictor of an increased likelihood of persistent psychotic symptoms and a worse overall course of illness, which can be considered an impaired outcome.
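One abstract above (PUBMED:35468481) reports a partial correlation between 24-month GAF scores and the duration of mood-incongruent psychotic symptoms. The short Python sketch below is a hedged illustration of what such an analysis involves, computed on simulated data; the variable names, the choice of covariate and all numbers are hypothetical and are not taken from that study.

    # Illustrative sketch only: partial correlation between an outcome score and a
    # symptom-duration variable, adjusting for one covariate, on simulated data.
    import numpy as np

    def partial_corr(x, y, controls):
        # Correlate the residuals of x and y after linear adjustment for the controls.
        Z = np.column_stack([np.ones(len(x)), controls])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(1)
    n = 68
    psychosis_weeks = rng.gamma(2.0, 10.0, n)   # hypothetical symptom duration (weeks)
    age = rng.normal(28, 7, n)                  # hypothetical covariate
    gaf_24m = 75 - 0.4 * psychosis_weeks - 0.1 * age + rng.normal(0, 8, n)

    print("partial r:", round(partial_corr(psychosis_weeks, gaf_24m, age), 3))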
Instruction: Identifying pneumonia outbreaks of public health importance: can emergency department data assist in earlier identification? Abstracts: abstract_id: PUBMED:18782400 Identifying pneumonia outbreaks of public health importance: can emergency department data assist in earlier identification? Objective: To retrospectively review the performance of a near real-time Emergency Department (ED) Syndromic Surveillance System operating in New South Wales for identifying pneumonia outbreaks of public health importance. Methods: Retrospective data were obtained from the NSW Emergency Department data collection for a rural hospital that had experienced a cluster of pneumonia diagnoses among teenage males in August 2006. ED standard reports were examined for signals in the overall count for each respiratory syndrome, and for elevated counts in individual subgroups including age, sex and admission-to-hospital status. Results: Using the current thresholds, the ED syndromic surveillance system would have triggered a signal for pneumonia syndrome in children aged 5-16 years four days earlier than the notification by a paediatrician, and this signal was maintained for 14 days. Conclusion: If the ED syndromic surveillance system had been operating, it could have identified the outbreak earlier than the paediatrician's notification. This may have permitted an earlier public health response. Implications: By understanding the behaviour of syndromes during outbreaks of public health importance, response protocols could be developed to facilitate earlier implementation of control measures. abstract_id: PUBMED:35682493 Infectious Diseases Associated with Desert Dust Outbreaks: A Systematic Review. Background: Desert dust outbreaks and dust storms are the major source of particulate matter globally and pose a major threat to human health. We investigated the microorganisms transported with desert dust particles and evaluated their potential impact on human health. Methods: A systematic review of all reports on the association between non-anthropogenic desert dust pollution, dust microorganisms and human health was conducted. Results: In total, 51 articles were included in this review. The affected regions studied were Asia (32/51, 62.7%) followed by Europe (9/51, 17.6%), America (6/51, 11.8%), Africa (4/51, 7.8%) and Australia (1/51, 2.0%). The Sahara Desert was the most frequent source of dust, followed by Asian and American deserts. In 39/51 studies the dust-related microbiome was analyzed, while in 12/51 reports the association of desert dust with infectious disease outbreaks was examined. Pathogenic and opportunistic agents were isolated from dust in 24/39 (61.5%) and 29/39 (74.4%) of the studies, respectively. A significant association of dust events with infectious disease outbreaks was found in 10/12 (83.3%) reports. The infectious diseases most often investigated in relation to dust outbreaks were pneumonia, respiratory tract infections, COVID-19, pulmonary tuberculosis and coccidioidomycosis. Conclusions: Desert dust outbreaks are vehicles of a significant number of pathogenic or opportunistic microorganisms, and limited data indicate an association between dust events and infectious disease outbreaks. Further research is required to strengthen the correlation between dust events and infectious diseases and subsequently guide preventive public health measures.
abstract_id: PUBMED:33239568 Responding to the Pandemic: Challenges With Public Health Surveillance Systems and Development of a COVID-19 National Surveillance Case Definition to Support Case-Based Morbidity Surveillance During the Early Response. Responding to introductions of diseases and conditions of unknown etiology is a critical public health function. In late December 2019, investigation of a cluster of pneumonia cases of unknown origin in Wuhan, China, resulted in the identification of a novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Multiple public health surveillance actions were rapidly implemented to detect introduction of the virus into the United States and track its spread, including establishment of a national surveillance case definition and addition of the disease, coronavirus disease 2019, to the list of nationally notifiable conditions. Challenges in conducting effective case-based surveillance and the public health data supply chain and infrastructure are discussed. abstract_id: PUBMED:34014907 Outbreaks Associated with Treated Recreational Water - United States, 2015-2019. Outbreaks associated with treated recreational water can be caused by pathogens or chemicals in aquatic venues such as pools, hot tubs, water playgrounds, or other artificially constructed structures that are intended for recreational or therapeutic purposes. For the period 2015-2019, public health officials from 36 states and the District of Columbia (DC) voluntarily reported 208 outbreaks associated with treated recreational water. Almost all (199; 96%) of the outbreaks were associated with public (nonbackyard) pools, hot tubs, or water playgrounds. These outbreaks resulted in at least 3,646 cases of illness, 286 hospitalizations, and 13 deaths. Among the 155 (75%) outbreaks with a confirmed infectious etiology, 76 (49%) were caused by Cryptosporidium (which causes cryptosporidiosis, a gastrointestinal illness) and 65 (42%) by Legionella (which causes Legionnaires' disease, a severe pneumonia, and Pontiac fever, a milder illness with flu-like symptoms). Cryptosporidium accounted for 2,492 (84%) of 2,953 cases resulting from the 155 outbreaks with a confirmed etiology. All 13 deaths occurred in persons affected by a Legionnaires' disease outbreak. Among the 208 outbreaks, 71 (34%) were associated with a hotel (i.e., hotel, motel, lodge, or inn) or a resort, and 107 (51%) started during June-August. Implementing recommendations in CDC's Model Aquatic Health Code (MAHC) (1) can help prevent outbreaks associated with treated recreational water in public aquatic venues. abstract_id: PUBMED:32515882 Occupational health responses to COVID-19: What lessons can we learn from SARS? On 31 December 2019, the World Health Organization (WHO) received reports of pneumonia cases of unknown etiology in the city of Wuhan in Hubei Province, China. The agent responsible was subsequently identified as a coronavirus, SARS-CoV-2. The WHO declared this disease a Public Health Emergency of International Concern at the end of January 2020. This event evoked a sense of déjà vu, as it has many similarities to the outbreak of severe acute respiratory syndrome (SARS) of 2002-2003. Both illnesses were caused by a zoonotic novel coronavirus, both originated during winter in China and both spread rapidly all over the world. However, the case-fatality rate of SARS (9.6%) is higher than that of COVID-19 (<4%).
Another zoonotic novel coronavirus, MERS-CoV, was responsible for the Middle East respiratory syndrome, which had a case-fatality rate of 34%. Our experiences in coping with the previous coronavirus outbreaks have better equipped us to face the challenges posed by COVID-19, especially in the health care setting. Among the insights gained from the past outbreaks were: outbreaks caused by viruses are hazardous to healthcare workers; the impact of the disease extends beyond the infection; general principles of prevention and control are effective in containing the disease; the disease poses both a public health as well as an occupational health threat; and emerging infectious diseases pose a continuing threat to the world. Given the perspectives gained and lessons learnt from these past events, we should be better prepared to face the current COVID-19 outbreak. abstract_id: PUBMED:32805703 A New Era of Epidemiology: Digital Epidemiology for Investigating the COVID-19 Outbreak in China. A novel pneumonia-like coronavirus disease (COVID-19) caused by a novel coronavirus named SARS-CoV-2 has swept across China and the world. Public health measures that were effective in previous infection outbreaks (eg, wearing a face mask, quarantining) were implemented in this outbreak. Available multidimensional social network data that take advantage of the recent rapid development of information and communication technologies allow for an exploration of disease spread and control via a modernized epidemiological approach. By using spatiotemporal data and real-time information, we can provide more accurate estimates of disease spread patterns related to human activities and enable more efficient responses to the outbreak. Two real cases during the COVID-19 outbreak demonstrated the application of emerging technologies and digital data in monitoring human movements related to disease spread. Although the ethical issues related to using digital epidemiology are still under debate, the cases reported in this article may enable the identification of more effective public health measures, as well as future applications of such digitally directed epidemiological approaches in controlling infectious disease outbreaks, which offer an alternative and modern outlook on addressing the long-standing challenges in population health. abstract_id: PUBMED:32540844 Mapping of Health Literacy and Social Panic Via Web Search Data During the COVID-19 Public Health Emergency: Infodemiological Study. Background: Coronavirus disease (COVID-19) is a type of pneumonia caused by a novel coronavirus that was discovered in 2019. As of May 6, 2020, 84,407 cases and 4643 deaths have been confirmed in China. The Chinese population has expressed great concern since the COVID-19 outbreak. Meanwhile, an average of 1 billion people per day are using the Baidu search engine to find COVID-19-related health information. Objective: The aim of this paper is to analyze web search data volumes related to COVID-19 in China. Methods: We conducted an infodemiological study to analyze web search data volumes related to COVID-19. Using Baidu Index data, we assessed the search frequencies of specific search terms in Baidu to describe the impact of COVID-19 on public health, psychology, behaviors, lifestyles, and social policies (from February 11, 2020, to March 17, 2020). Results: The search frequency related to COVID-19 has increased significantly since February 11th. 
Our heat maps demonstrate that citizens in Wuhan, Hubei Province, express more concern about COVID-19 than citizens from other cities since the outbreak first occurred in Wuhan. Wuhan citizens frequently searched for content related to "medical help," "protective materials," and "pandemic progress." Web searches for "return to work" and "go back to school" have increased eight-fold compared to the previous month. Searches for content related to "closed community and remote office" have continued to rise, and searches for "remote office demand" have risen by 663% from the previous quarter. Employees who have returned to work have mainly engaged in the following web searches: "return to work and prevention measures," "return to work guarantee policy," and "time to return to work." Provinces with large, educated populations (eg, Henan, Hebei, and Shandong) have been focusing on "online education" whereas medium-sized cities have been paying more attention to "online medical care." Conclusions: Our findings suggest that web search data may reflect changes in health literacy, social panic, and prevention and control policies in response to COVID-19. abstract_id: PUBMED:29771872 Outbreaks Associated with Treated Recreational Water - United States, 2000-2014. Outbreaks associated with exposure to treated recreational water can be caused by pathogens or chemicals in venues such as pools, hot tubs/spas, and interactive water play venues (i.e., water playgrounds). During 2000-2014, public health officials from 46 states and Puerto Rico reported 493 outbreaks associated with treated recreational water. These outbreaks resulted in at least 27,219 cases and eight deaths. Among the 363 outbreaks with a confirmed infectious etiology, 212 (58%) were caused by Cryptosporidium (which causes predominantly gastrointestinal illness), 57 (16%) by Legionella (which causes Legionnaires' disease, a severe pneumonia, and Pontiac fever, a milder illness with flu-like symptoms), and 47 (13%) by Pseudomonas (which causes folliculitis ["hot tub rash"] and otitis externa ["swimmers' ear"]). Investigations of the 363 outbreaks identified 24,453 cases; 21,766 (89%) were caused by Cryptosporidium, 920 (4%) by Pseudomonas, and 624 (3%) by Legionella. At least six of the eight reported deaths occurred in persons affected by outbreaks caused by Legionella. Hotels were the leading setting, associated with 157 (32%) of the 493 outbreaks. Overall, the outbreaks had a bimodal temporal distribution: 275 (56%) outbreaks started during June-August and 46 (9%) in March. Assessment of trends in the annual counts of outbreaks caused by Cryptosporidium, Legionella, or Pseudomonas indicate mixed progress in preventing transmission. Pathogens able to evade chlorine inactivation have become leading outbreak etiologies. The consequent outbreak and case counts and mortality underscore the utility of CDC's Model Aquatic Health Code (https://www.cdc.gov/mahc) to prevent outbreaks associated with treated recreational water. abstract_id: PUBMED:25211122 Strategies for controlling non-transmissible infection outbreaks using a large human movement data set. Prediction and control of the spread of infectious disease in human populations benefits greatly from our growing capacity to quantify human movement behavior. Here we develop a mathematical model for non-transmissible infections contracted from a localized environmental source, informed by a detailed description of movement patterns of the population of Great Britain. 
The model is applied to outbreaks of Legionnaires' disease, a potentially life-threatening form of pneumonia caused by the bacterium Legionella pneumophila. We use case-report data from three recent outbreaks that have occurred in Great Britain where the source has already been identified by public health agencies. We first demonstrate that the amount of individual-level heterogeneity incorporated in the movement data greatly influences our ability to predict the source location. The most accurate predictions were obtained using reported travel histories to describe movements of infected individuals, but using detailed simulation models to estimate movement patterns offers an effective fast alternative. Secondly, once the source is identified, we show that our model can be used to accurately determine the population likely to have been exposed to the pathogen, and hence predict the residential locations of infected individuals. The results give rise to an effective control strategy that can be implemented rapidly in response to an outbreak. abstract_id: PUBMED:33794098 All Hands on Deck: Local Public Health Agencies Leveraging the Incident Command System During Crises. The COVID-19 pandemic has stretched limited public health resources beyond measure, particularly at the local level. What started as an interesting report of pneumonia of unknown etiology in late December 2019 in Wuhan, China, bloomed into an international crisis by mid-January 2020. However, it was not until late January, when the first case was reported in the United States, that a new reality took shape for US public health agencies. After all, severe acute respiratory syndrome never made it to this country, and the only 2 cases of Middle East respiratory syndrome here were imported and never spread. Local public health agencies are notoriously short-staffed and underfunded. Therefore, when a crisis looms, personnel from a multitude of areas within the agencies are called upon to help out. Under its innovative and forward-thinking leadership, the St. Louis County Department of Health internally implemented the Incident Command System, a component of the National Incident Management System. While reassignment of individuals to new responsibilities under a new and temporary reporting structure did not always go perfectly, the Incident Command System kept its promise to be adaptable to a fast-evolving situation, to clearly outline needed areas of responsibility, and to provide scaffolding that kept the Department of Health functional in chaotic times. It was able to be implemented quickly within hours of the first confirmed COVID-19 case in St. Louis County and enhanced the quality and timeliness of the public health response. This experience is being shared to provide a model of how organizations with limited personnel can use the Incident Command System to reorganize and meet unexpected challenges with increased success. Answer: Yes, emergency department data can assist in earlier identification of pneumonia outbreaks of public health importance. A retrospective review of the performance of a near real-time Emergency Department (ED) Syndromic Surveillance System operating in New South Wales demonstrated that the system could have triggered a signal for pneumonia syndrome in children aged 5-16 years four days earlier than the notification by a pediatrician.
This signal was maintained for 14 days, suggesting that if the ED syndromic surveillance system had been operating, it could have identified the outbreak earlier than the pediatrician's notification, potentially allowing for an earlier public health response (PUBMED:18782400). Understanding the behavior of syndromes during outbreaks can help develop response protocols to facilitate earlier implementation of control measures, improving the public health response to such events.
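The answer above rests on a syndromic surveillance system raising a signal when syndrome counts exceed a threshold. As a hedged, purely illustrative sketch, the Python fragment below implements a very simplified count-threshold detector; the baseline window, the z-score rule and the daily counts are all invented for demonstration and do not describe the NSW system's actual algorithm.

    # Illustrative sketch only: flag days where a syndrome count exceeds the trailing
    # baseline mean plus z standard deviations. Parameters and data are hypothetical.
    from statistics import mean, stdev

    def detect_signals(daily_counts, baseline_days=28, z=3.0):
        signals = []
        for day in range(baseline_days, len(daily_counts)):
            baseline = daily_counts[day - baseline_days:day]
            threshold = mean(baseline) + z * (stdev(baseline) or 1.0)
            if daily_counts[day] > threshold:
                signals.append(day)
        return signals

    # Hypothetical daily ED pneumonia-syndrome counts with a small cluster at the end.
    counts = [1, 0, 2, 1, 1, 0, 1, 2, 1, 0, 1, 1, 2, 0, 1, 1, 0, 2, 1, 1,
              0, 1, 2, 1, 1, 0, 1, 1, 6, 7, 8, 5]
    print("signal on days:", detect_signals(counts))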
Instruction: Failed pyeloplasty in children: Is robot-assisted laparoscopic reoperative repair feasible? Abstracts: abstract_id: PUBMED:25791423 Failed pyeloplasty in children: Is robot-assisted laparoscopic reoperative repair feasible? Objective: In this study we aim to demonstrate that robot-assisted laparoscopic (RAL) reoperative repair is safe and effective and even less technically demanding than open repair for recurrent ureteropelvic junction obstruction (UPJO). Study Design: A retrospective study was conducted of all patients with failed open pyeloplasties who underwent RAL reoperative repair at our institution between January 2010 and December 2013. The general surgical procedure was the same as we previously described for robot-assisted laparoscopic pyeloplasty. Success was defined as: improvement in the degree of hydronephrosis at ultrasound, improvement of diuretic washout time at postoperative diuretic renogram (<15 min), improvement or at least stable differential renal function, and absence of symptoms. These radiographic and symptomatic criteria of success were considered the primary outcomes. Secondary outcomes included complications and length of hospital stay. Results: Between 2000 and 2013 a total of 153 patients underwent open Anderson-Hynes dismembered pyeloplasty. Of these, 9 (6%) had recurrent UPJO. Four patients underwent open redo pyeloplasty. As a result, our study population comprised 5 children who underwent reoperative RALP repair. Patient characteristics and outcomes are summarized in the table below. Our success rate was 100%. Discussion: Due to the low failure rate of open dismembered pyeloplasty there is no consensus on the best surgical approach for recurrent obstruction. While endoscopic approaches have been favored in adults, children have shown better success rates with repeat pyeloplasty. Laparoscopic salvage pyeloplasty for failed open procedures has become more popular and has been shown to result in excellent outcomes while providing the advantages of minimally invasive surgery. To date, the literature regarding the use of RALP for failed open procedures in the pediatric population is scarce. Only 2 pediatric series of robotic reoperative pyeloplasty have been reported, by Helmal et al. (9 patients) and Lindgren et al. (16 patients), with success rates of 100% and 88%, respectively. Although this is one of the first published studies about robot-assisted laparoscopic reoperative repair for failed open pyeloplasty in pediatric patients, we acknowledge the limitations of our study due to the small number of patients, its retrospective nature and limited follow-up time. Conclusions: The incidence of failed open pyeloplasty is as low as 5% and management remains controversial. As reported by other authors, we believe that crossing vessels play a particularly important role in secondary obstruction and adversely impact the outcome. Redo pyeloplasty, open or minimally invasive, is associated with high success rates (80-100%) and is therefore considered the treatment of choice by the majority of authors nowadays. Additionally, RALP for secondary procedures has been demonstrated to be safe and even less technically demanding when compared to the open approach, providing the advantages of minimally invasive surgery. abstract_id: PUBMED:30395839 Robot-Assisted Laparoscopic Reoperative Repair for Failed Pyeloplasty in Children: An Updated Series. Purpose: Reoperative pyeloplasty is commonly used in children with recurrent obstruction after pyeloplasty.
We previously reported on reoperative robot-assisted laparoscopic repair for failed pyeloplasty in 16 children and concluded that short-term and intermediate outcomes were comparable to open reoperative repair. In this updated series we describe longer term outcomes from an extended study. Materials And Methods: We retrospectively reviewed outcomes of consecutive children with prior failed primary pyeloplasty who underwent robot-assisted laparoscopic reoperative repair at a single institution from January 2008 to June 2018. Results: Overall, 36 children 0.6 to 15.2 years old (median 3.7) underwent robot-assisted laparoscopic reoperative repair (pyeloplasty in 31, ureterocalicostomy in 5) at a median of 24.3 months (range 3.9 to 136.7) after primary repair. Median reoperative time was 285.0 minutes (range 207 to 556) and median length of stay was 1 day (1 to 8). Crossing vessels were present in 8 of 30 children (26.7%) with prior open repair and in 0 of 6 with prior minimally invasive repair. Clavien-Dindo grade 1 to 2 perioperative complications occurred in 4 children (11.1%) and grade 3 to 5 complications in 2 (5.6%). Median followup was 35.3 months (range 1.4 to 108.3), with 18 children (50.0%) being followed for more than 3 years. Postoperative ultrasound in 34 children revealed improvement in 31 (91.2%), stability in 2 (5.9%) and worsening hydronephrosis in 1 (2.9%). All 11 children undergoing preoperative and postoperative diuretic renography demonstrated stable or improved differential renal function. All children were symptom-free at last followup. Conclusions: To our knowledge, this is the largest series of robot-assisted laparoscopic reoperative repair for failed pyeloplasty in children. Our results indicate the feasibility, efficacy, safety and durability of this procedure. abstract_id: PUBMED:22819409 Robot-assisted laparoscopic reoperative repair for failed pyeloplasty in children: a safe and highly effective treatment option. Purpose: Failed pyeloplasty represents a management dilemma, with treatment options including balloon dilation, endopyelotomy and reoperative pyeloplasty. We review our experience with robot-assisted laparoscopic reoperative repair of recurrent/persistent ureteropelvic junction obstruction in children and compare this method to other approaches. Materials And Methods: We reviewed in detail all cases of failed prior ureteropelvic junction procedures, either open or laparoscopic, managed by robot-assisted laparoscopic reoperative repair between 2006 and July 2011. Results: Robot-assisted laparoscopic repair was performed in 16 cases for persistent or recurrent ureteropelvic junction obstruction following a prior procedure involving the ureteropelvic junction (12 open pyeloplasties, 4 robot-assisted laparoscopic repairs). Additional interventions had been performed in 12 patients. Reoperative robot-assisted laparoscopic pyeloplasty was performed in 13 patients and reoperative robot-assisted laparoscopic ureterocalycostomy in 3. Patient age ranged from 12 months to 15.3 years (mean 6.1 years). Mean operative time and length of stay were 303 minutes and 1.6 days, respectively. Mean followup was 14.9 months. All symptomatic patients had resolution of symptoms postoperatively. A total of 14 patients (88%) had improved radiological findings. One patient underwent transfusion and conversion to an open procedure due to bleeding. 
Conclusions: Robot-assisted laparoscopic reoperative repair of persistent/recurrent ureteropelvic junction obstruction is a safe, highly effective procedure even in the setting of multiple prior procedures. In our series all patients improved symptomatically, 88% improved radiographically and none have required further surgical intervention. Success is greater than with endopyelotomy and comparable to open reoperative repair for this challenging condition during short-term and intermediate followup. abstract_id: PUBMED:29641368 From Laparoscopic Pyeloplasty to Robot-Assisted Laparoscopic Pyeloplasty in Primary and Reoperative Repairs for Ureteropelvic Junction Obstruction in Children. Purpose: Robot-assisted laparoscopic pyeloplasty (RALP) and laparoscopic pyeloplasty (LP) are both minimally invasive surgical options to correct ureteropelvic junction obstruction in children. There are limited data reporting surgeons' experience of switching from LP to RALP and comparing the outcomes of the two procedures. Methods: We conducted a retrospective study on 63 consecutive children who underwent either LP or RALP by the same surgeon in a tertiary center from January 2008 to November 2016. LP had been the standard practice until January 2014, when it was replaced by RALP after a short transition. Results: Thirty-seven LP and 26 RALP were successfully performed in children aged 2 months to 16 years. There was no difference between the two groups in age, body weight, laterality, clinical presentations, preoperative imaging, or primary/reoperative repairs. The overall success rates were 34/37 (91.9%) and 25/26 (96.2%) for LP and RALP, respectively (P > .05). RALP was associated with a shorter length of stay (3.1 days versus 4.0 days; P = .03). Surgeon-in-training participation was greater in the RALP group (P < .001). There was no difference in operative time, complication rates, and analgesic requirement between the two groups. Conclusions: Surgeons proficient in LP can adapt quickly to RALP, achieving comparable outcomes. Robotic technology may facilitate training on minimally invasive pyeloplasty. abstract_id: PUBMED:33240828 Redo Laparoscopic Pyeloplasty in Infants and Children: Feasible and Effective. Purpose: To determine the feasibility and effectiveness of redo laparoscopic pyeloplasty among patients with failed previous pyeloplasty, specifically examining rates of success and complications. Materials and Methods: We retrospectively reviewed the charts of all patients who underwent redo laparoscopic pyeloplasty from 2006 to 2017. This included patients who underwent primary pyeloplasty at our institution and those referred for failures. Analysis included demographics, operative time, complications, length of hospital stay, and success. Success was defined as improvement of symptoms and hydronephrosis and/or improvement in drainage demonstrated by diuretic renogram, especially in those with persistent hydronephrosis. Descriptive statistics are presented. Results: We identified 22 patients who underwent redo laparoscopic pyeloplasty. All had the Anderson-Hynes technique except two cases in which ureterocalicostomy was performed. Median (IQR) follow-up was 29 (2-120) months, and the median time between primary pyeloplasty and redo laparoscopic pyeloplasty was 12 (7-49) months. The median operative time was 200 (50-250) min, and the median length of hospital stay was 3 (2-10) days. The procedure was feasible in all cases without conversion.
During follow-up, all but two patients demonstrated an improvement in symptoms and in the degree of hydronephrosis. Ninety-one percent of patients experienced success and no major complications were noted. Conclusions: Redo laparoscopic pyeloplasty is feasible and effective with a high success rate and a low complication rate. abstract_id: PUBMED:26675936 An innovative technique of robotic-assisted/laparoscopic re-pyeloplasty in horseshoe kidney in patients with failed previous pyeloplasty for ureteropelvic junction obstruction. Objectives: To describe an innovative laparoscopic/robotic-assisted re-pyeloplasty technique in patients with recurrent ureteropelvic junction obstruction (UPJO) in horseshoe kidneys. Patients And Methods: Data from five patients (37-65, median 54 years) with symptomatic recurrence of UPJO in a horseshoe kidney who underwent laparoscopic/robotic-assisted re-pyeloplasty at our institution since 2004 were evaluated retrospectively. Resection of the upper ureter together with wedge resection of the pelvis at the lower calyx was performed. The ureter is spatulated to beyond the isthmus and anastomosed to the lower calyx. A rotational renal pelvis flap is used for reconstruction and conisation of the pelvis. Results: Median operative time was 137 min (92-180) with negligible blood loss. There was no need for conversion or revisions. Perioperative periods were uneventful. The intraoperatively inserted JJ stent was left for 6-8 weeks. Median postoperative differential function of the affected kidney at 3 months (MAG III) was 38% (26-42%), unchanged from 35% (21-41%) preoperatively. This was stable in three patients and higher in two (5 and 7%). There were no obstructive elements indicating anatomical ureteric obstruction. After the convalescence period, three patients returned to work at 5 weeks, and all by 8 weeks. All patients remained asymptomatic and have not required any further interventions during the whole follow-up. Conclusions: The described technique of laparoscopic/robotic-assisted re-pyeloplasty in horseshoe kidneys is technically feasible, safe and effective, with high patient satisfaction and early convalescence. Its success rate is comparable with the results after primary pyeloplasty in horseshoe and heterotopic kidneys. Larger series may be required to allow for more accurate comparison. abstract_id: PUBMED:35840791 Outcome of reoperative surgery for late failure of postcholecystectomy bile duct injury repair. The aim of the study is to report the outcomes of reoperative surgery for late failure of postcholecystectomy bile duct injury (BDI) repair. All patients who underwent reoperative surgery for late failure of postcholecystectomy BDI repair at our institution between August 2007 and July 2020 were retrospectively reviewed. Of the total of 262 patients undergoing BDI repair, 66 underwent reoperative surgery for late failure. The median duration between the last attempted repair and the onset of recurrent symptoms was 18 months. Eighty-five percent of patients with failed repair became symptomatic within 5 years of attempted repair. The most common type of BDI was E3. All the patients underwent Roux-en-Y hepaticojejunostomy. Twenty-nine postoperative complications developed in 23 (35%) patients. Postoperative mortality was 1.5%. Median postoperative hospital stay was 9 (5-61) days. Over a median follow-up of 80 (12-150) months, 5.2% (3/58) of patients developed clinically relevant anastomotic stricture.
Three patients with secondary biliary cirrhosis died in the follow-up period due to decompensated liver disease. Overall, excellent or good long-term outcome was achieved in 83% (48/58) of patients which was significantly less satisfactory than primary repair patients (82.8% vs 92.7%, p = 0.039). Reoperative surgery is safe in patients with failed repair after postcholecystectomy BDI and good long-term clinical success can be achieved in most of the patients. The long-term results were less satisfactory in failed-repair group than those who underwent primary repair at our institution. Early referral to a specialized unit for BDI repair may improve long-term outcome. abstract_id: PUBMED:27628631 Robotic-assisted laparoscopic pyeloplasty: initial Australasian experience. Laparoscopic dismembered pyeloplasty has a success rate in excess of 90% for the treatment of uretero-pelvic junction (UPJ) obstruction. Laparoscopic intracorporeal suturing, however, remains technically challenging and may lead to prolonged operating times. Robotic-assisted suturing using the da Vinci(®) surgical system (Intuitive Surgical, CA, USA) may reduce the difficulty associated with intra-corporeal suturing. The da Vinci(®) surgical system was used to facilitate intra-corporeal suturing in adults undergoing trans-peritoneal robotic-assisted laparoscopic pyeloplasty (RALPY) at our institution. Initially, the robot was only docked for the anastomosis, but in the later part of the series the robot was used for all parts of the dissection and reconstruction. Peri-operative and outcome data were recorded prospectively. Twenty-four patients underwent RALPY over a 4-year period. The mean age was 46.6 (range 18-76) years. The mean total operative time was 211 min (range 150-317 min) with an anastomotic time of 44 min (range 30-55 min). The mean estimated blood loss was 56 ml (10-150 ml) and there was one temporary urine leak managed by 24 h of urethral catheterization. The median length of stay was 4 (2-10) days. Patients underwent diuretic renography at 6 months post surgery, and satisfactory renal drainage was demonstrated in all cases. RALPY is a feasible and safe option for the management of UPJ obstruction. This technology may reduce the difficulty associated with complex laparoscopic suturing and facilitate shorter operative times with excellent outcomes. This is now our preferred approach for all patients opting for surgical management of UPJ obstruction. abstract_id: PUBMED:35611058 A Comparison Between Laparoscopic and Robot-Assisted Laparoscopic Pyeloplasty in Patients with Ureteropelvic Junction Obstruction. Background: The aim of this study is to compare the results of laparoscopic pyeloplasty and robot-assisted laparoscopic pyeloplasty in patients with ureteropelvic junction obstruction. Methods: Between March 2008 and May 2019, the patients who underwent retroperitoneal laparoscopic or robotic-assisted laparoscopic pyeloplasty in our institution were retrospectively reviewed. Results: Thirteen patients underwent laparoscopically, and 12 patients underwent robotic surgery. The significant difference was found in median operative time between laparoscopic group (296 minutes) and robotic group (199 minutes) (P = 0.001). The median time for drain removal in laparoscopic group was longer than robotic group (3 vs. 2 days, respectively, P = 0.029). Conclusion: Laparoscopic and robot-assisted laparoscopic pyeloplasty is safe and excellent success rates in patients with ureteropelvic junction obstruction. 
However, our experience suggested that robotic surgery shortens total operative time, decreases drain removal time and causes less intraoperative blood loss than the laparoscopic approach. abstract_id: PUBMED:30923575 Robot-assisted laparoscopic pyeloplasty versus laparoscopic pyeloplasty for pelvi-ureteric junction obstruction in the paediatric population: a systematic review and meta-analysis. Background: Owing to the improved vision and instrument manipulation in robot-assisted procedures, we sought to evaluate the comparative outcomes of robot-assisted laparoscopic pyeloplasty (RALP) and laparoscopic pyeloplasty (LP) in paediatric patients with pelvi-ureteric junction obstruction (PUJO). Methods: We conducted a systematic literature search of online sources, including PubMed, MEDLINE, EMBASE and Cochrane Central Register of Controlled Trials, and respective bibliographic reference lists. Success rate, operative time, hospital length of stay, postoperative complication rate and re-intervention rate were our primary outcomes. Combined overall effect sizes were calculated using fixed-effect or random-effects models. Results: We identified 14 observational studies reporting a total of 2254 paediatric patients with PUJO, who underwent LP (n = 1021) or RALP (n = 1233). Our analysis demonstrated that RALP was associated with a significantly higher success rate [odds ratio (OR) 2.51; 95% confidence interval (CI) 1.08-5.83, p = 0.03] and shorter length of hospital stay [mean difference (MD) -1.49; 95% CI -2.22 to -0.77; p < 0.0001] compared with LP. Moreover, nonsignificant reductions in postoperative complications (OR 0.61; 95% CI 0.36-1.02; p = 0.06) and re-intervention (OR 0.43; 95% CI 0.15-1.21; p = 0.11) were found in favour of RALP. There was no difference in procedure time between the two approaches (MD -0.15; 95% CI -30.22 to 29.93, p = 0.99). Conclusions: Our meta-analysis of observational studies demonstrated that RALP is safe and may have a higher success rate compared with the more traditional laparoscopic approach in a paediatric population. Moreover, it may be associated with lower postoperative complications and re-intervention rates. Evidence from randomized trials is required. Answer: Yes, robot-assisted laparoscopic reoperative repair (RALP) is feasible for failed pyeloplasty in children. Several studies have demonstrated the safety, efficacy, and feasibility of RALP for recurrent ureteropelvic junction obstruction (UPJO) after a failed pyeloplasty. One study reported a 100% success rate in a small cohort of children who underwent RALP after failed open pyeloplasty, suggesting that RALP is not only feasible but also potentially less technically demanding than open repair (PUBMED:25791423). Another updated series with a larger cohort of 36 children also supported the feasibility, efficacy, safety, and durability of RALP for failed pyeloplasty, with most children being symptom-free at the last follow-up (PUBMED:30395839). Further evidence from a study that reviewed cases between 2006 and July 2011 found that RALP for persistent or recurrent UPJO is a safe and highly effective procedure, with all patients improving symptomatically and 88% showing improved radiological findings (PUBMED:22819409). Additionally, a retrospective study on 63 consecutive children who underwent either laparoscopic pyeloplasty (LP) or RALP showed comparable outcomes between the two minimally invasive techniques, with RALP associated with shorter hospital stays (PUBMED:29641368).
Moreover, a study focusing on redo laparoscopic pyeloplasty in infants and children found it to be feasible and effective with a high success rate and low complication rate (PUBMED:33240828). An innovative technique of robotic-assisted/laparoscopic re-pyeloplasty in horseshoe kidneys also demonstrated the procedure's technical feasibility, safety, and effectiveness with high patient satisfaction (PUBMED:26675936). In summary, the evidence suggests that RALP is a feasible and effective option for reoperative repair in children with failed pyeloplasty, offering the benefits of minimally invasive surgery with high success rates and low complication rates.
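Note on the pooled estimates cited above: the meta-analysis (PUBMED:30923575) combines study-level effect sizes with fixed-effect or random-effects models, which is how summary figures such as "OR 2.51; 95% CI 1.08-5.83" are obtained. The sketch below is a minimal, illustrative inverse-variance (fixed-effect) pooling of log odds ratios; the per-study counts are hypothetical and are not data from the review.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling of study-level
# odds ratios, the kind of calculation behind summary estimates such as
# "OR 2.51; 95% CI 1.08-5.83". The three studies below use hypothetical
# counts for illustration only; they are not data from the cited review.
import math

# (success_RALP, failure_RALP, success_LP, failure_LP) per hypothetical study
studies = [
    (118, 4, 110, 9),
    (230, 6, 190, 12),
    (95, 2, 88, 5),
]

weights, weighted_log_ors = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))      # log odds ratio for one study
    var = 1 / a + 1 / b + 1 / c + 1 / d       # Woolf variance of the log OR
    w = 1 / var                               # inverse-variance weight
    weights.append(w)
    weighted_log_ors.append(w * log_or)

pooled_log_or = sum(weighted_log_ors) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = pooled_log_or - 1.96 * se, pooled_log_or + 1.96 * se

print(f"pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

A random-effects pooling (as the review also allowed for) would additionally estimate between-study heterogeneity and widen the interval accordingly.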
Instruction: Does intergenerational social mobility among men affect cardiovascular mortality? Abstracts: abstract_id: PUBMED:18775818 Does intergenerational social mobility among men affect cardiovascular mortality? A population-based register study from Sweden. Background: Socioeconomic inequalities in cardiovascular mortality are well documented. The aim here is to examine the relation between childhood and adulthood class as well as the role of unique intergenerational social mobility trajectories in such mortality. Methods: Data were obtained from Swedish registries. Childhood and adulthood information were from the 1960 and 1990 censuses. Men born 1945-59 (809,199) were followed-up for four cardiovascular mortality outcomes 1990-2002 (5533 deaths) by means of Cox regressions. Three different approaches were applied to study mobility between four main classes. Results: In mutually adjusted models, the effect of a manual adulthood class (compared with non-manuals) was clearly larger (hazard ratios (HR) were 1.56 for MI, 1.70 for stroke, 1.64 for other cardiovascular disease (CVD), 1.62 for all CVD) as for a manual childhood class (1.38, 1.17, 1.24 and 1.28, respectively). This also applied to unclassifiable, while there were few systematic findings for self-employed. When adjusting for education level, childhood class was still significant for MI, other and all CVD, but adulthood class was significant for all outcomes. Trajectory-specific analyses revealed that mobile men from non-manual to manual had significantly higher mortality than mobile from manual to non-manual and stable non-manuals, but not significantly lower than stable manuals. Conclusions: Cardiovascular mortality was clearly structured by adulthood class, but not as consistently structured by childhood class. The mediating role of education suggests that a major part of life-course disadvantages or advantages in relation to CVD was due to achieved education. abstract_id: PUBMED:31422827 Educational expansion and trends in intergenerational social mobility among Korean men. Scholars have explored three channels through which educational expansion contributes to increased intergenerational social mobility: the compositional effect, educational equalization, and class returns to education. Existing literature on impacts of educational expansion on intergenerational social mobility is primarily based on experiences of European societies and the United States. We expand the existing literature by investigating the relationship between educational expansion and intergenerational mobility in Korea showing an exceptional degree of educational expansion over the last few decades. Log-linear models show that social fluidity has increased across birth cohorts of Korean men born between 1950 and 1984, with the recent cohorts experiencing it considerably. Utilizing a counterfactual decomposition method, our study shows that educational expansion has played a crucial role in promoting social fluidity mainly through educational equalization for earlier cohorts and through the compositional effect for more recent cohorts. The role played by the class returns to education was minor. abstract_id: PUBMED:30368028 Intergenerational relations and social mobility: Social inequality in physical function in old age. 
Background: The concept of social foreground describes how adult offspring's socioeconomic resources may influence older adults' health and several studies have shown an association between socioeconomic position of adult offspring and the health of their older parents. However, little is known about the factors that generate these associations. We study 1) how adult offspring's social class is associated with physical function (PF) among older adults, 2) whether geographical closeness and contact frequency with offspring modify the association, and 3) whether intergenerational social mobility of offspring is associated with PF of older adults. Method: Data are obtained from the 2002 (n = 621) and 2011 (n = 931) waves of the Swedish Panel Study of Living Conditions of the Oldest Old. Multivariable linear regression models were employed and adjusted for own and partner's prior social class and offspring's age and gender. Results: Compared to offspring with non-manual occupation, offsprings with manual occupation was associated with poorer PF in older adults (-0.14, CI95%:-0.28;0.00). In stratified analyses, offspring's social class was only associated with older adults' PF among those who lived geographically close. Contact frequency between the offspring and the older adults did not modify the associations. Older adults whose offspring experienced downward intergenerational social mobility were associated with the poorest PF. Conclusion: This study supports evidence of a relationship between social foreground and older adults' PF where geographical closeness and social mobility are important components. abstract_id: PUBMED:19091450 Intergenerational class mobility and cardiovascular mortality among Swedish women: a population-based register study. Class inequalities in cardiovascular disease (CVD) mortality are well documented, but the impact of intergenerational class mobility on CVD mortality among women has not been studied thoroughly. We examined whether women's mobility trajectories might contribute to CVD mortality beyond what could be expected from their childhood and adult social class position. The Swedish Work and Mortality Data Base provided childhood (1960) and adulthood (1990) social indicators. Women born 1945-59 (N=791 846) were followed up for CVD mortality 1990-2002 (2019 deaths) by means of logistic regression analysis. CVD mortality risks were estimated for 16 mobility trajectories. Gross and net impact of four childhood and four adult classes, based on occupation, were analysed for mortality in ischemic heart disease (IHD), stroke, other CVD, - and all CVD. Differences between the two most extreme trajectories were 10-fold, but the common trajectory of moving from manual to non-manual position was linked to only a slight excess mortality (OR=1.26) compared to the equally common trajectory of maintaining a stable non-manual position (reference category). Moving into adult manual class resulted in an elevated CVD mortality whatever the childhood position (ORs varied between 1.42 and 2.24). After adjustment for adult class, childhood class had some effect, in particular there was a low risk of coming from a self-employed childhood class on all outcomes (all ORs around=0.80). A woman's own education had a stronger influence on the mortality estimates than did household income. Social mobility trajectories among Swedish women are linked to their CVD mortality risk. 
Educational achievement seems to be a key factor for intergenerational continuity and discontinuity in class-related risk of CVD mortality among Swedish women. However, on mutual adjustment, adult class was much more closely related to CVD mortality than was class in childhood. abstract_id: PUBMED:9723857 Social mobility and 21 year mortality in a cohort of Scottish men. The objective of this prospective cohort study was to determine the contribution of changes in social class within and between generations to mortality risk and to socioeconomic differentials in mortality. In 27 workplaces in the west of Scotland. 5567 men aged 35-64 years when screened, provided information on their father's occupation, their own first occupation and their occupation at screening. Mortality risk, from broad causes of death by intergenerational and intragenerational social mobility groups, was measured after 21 years. For all or some of the 3 routes of mobility (childhood to screening, labour market entry to screening and childhood to labour market entry), increasing values were seen across the 4 groups (stable non manual, upwardly mobile, downwardly mobile and stable manual) for diastolic blood pressure, body mass index, current smoking, early school leaving, angina, bronchitis, severe chest pain, and proportion living in deprived areas. Decreasing values were seen for serum cholesterol, height, FEV1, exercise, never and ex-smokers, wine drinkers and car users. For mobility between childhood and screening and between childhood and labour market entry, mortality risk was highest for the stable manual group and lowest for the stable non manual group for all cause, cardiovascular disease and other causes of death. The upwardly and downwardly mobile groups had intermediate risks. For cancer mortality, the stable manual group had the highest risk with the other groups having lower and similar risks. For mobility between labour market entry and screening, the highest risk was for the downwardly mobile group for all cause and cardiovascular mortality. For cancer mortality, the risk was higher for men in manual social classes at all occasions. Adjustment for risk factors attenuated but could not completely explain the differentials in mortality risk. Overall, major differences in mortality risk were seen between the stable non manual and the stable manual groups, to which social mobility does not contribute. With the exception of the small intragenerationally downwardly mobile group there was little evidence that social mobility itself was associated with mortality outcomes different from those expected on the basis of lifetime socioeconomic experience. This is consistent with the suggestion that the main influence of socioeconomic position on mortality risk is an accumulative one, acting across the lifecourse. abstract_id: PUBMED:8091160 Social mobility and health in a prospective study of middle-aged men. Many social factors and social phenomena and their influence on health are now of interest for epidemiological research and also for health policy. On social phenomenon of interest is mobility in the social structure of society. In this prospective study of a sample of 50-year-old men born in 1913 in Gothenburg, Sweden, it was possible to study intergenerational social mobility and different measures of ill-health, like health complaints, incidence of myocardial infarction and mortality. 
The results of the study indicate that individuals who find themselves in a lower social group than that of their father at the age of 50 exhibit a somewhat poorer subjective state of health and report more complaints and symptoms at the age of 60. They also tended to be slightly more at risk with regard to myocardial infarction, but there were no mortality differences in a long-term follow-up. abstract_id: PUBMED:22906525 Familial risk of premature cardiovascular mortality and the impact of intergenerational occupational class mobility. The negative impact of low social class on cardiovascular disease (CVD) and mortality has been consistently documented. However, less scientific consistency exists in terms of whether a unique health effect of social mobility from childhood to adulthood prevails. This study explored how childhood and adult social class and the transition between them (social mobility), are related to premature CVD mortality when familial aggregation of CVD among siblings is also considered. The study includes nearly 1.9 million Swedish residents born 1939-1959 distributed over 1,044,725 families, of whom 14,667 died prematurely from CVD in 1990-2003. Information on parental class (1960) and own mid-life occupational class (1990) was retrieved from the respective censuses. Odds ratios for premature CVD mortality according to trajectory-specific social mobility, along with pairwise mean odds ratios for sibling resemblance of premature CVD mortality, were calculated by means of alternating logistic regression. This model calculates the remaining dependency of CVD mortality within sibships after accounting for available risk factors (like parental and adult social class) in the population mean model. Results showed that premature CVD mortality was associated with both parental and own adult social class. A clear tendency for the downwardly mobile to have increased, and for the upwardly mobile to experience a decreased risk of premature CVD mortality was found, as well as a corresponding unique effect of social mobility per se among the manual and non-manual classes. This effect was verified for men, but not for women, when they were analysed separately. The pairwise mean odds ratios for premature CVD mortality among full siblings were 1.78 (95% CI: 1.52-2.08), and were independent of parental CVD mortality and parental or adult occupational class. abstract_id: PUBMED:36688006 Heterogeneous Effects of Intergenerational Social Mobility: An Improved Method and New Evidence. Intergenerational social mobility has immense implications for individuals' well-being, attitudes, and behaviors. However, previous methods may be unreliable for estimating heterogeneous mobility effects, especially in the presence of moderate- or large-scale intergenerational mobility. We propose an improved method, called the "mobility contrast model" (MCM). Using simulation evidence, we demonstrated that the MCM is more flexible and reliable for estimating and testing heterogeneous mobility effects, and the results are robust to the scale of intergenerational mobility. We revisited the debate about the effect of mobility on fertility and analyzed data from the 1962 Occupational Changes in a Generation Study (OCG-1) and more recent data from the 1974 through 2018 General Social Survey (GSS) using both previous models and the MCM. 
The MCM suggested a small association between fertility and occupational mobility in the GSS data but substantial and heterogeneous educational mobility effects on fertility in the OCG-1 and the GSS. Such effects were difficult to pinpoint using previous methods because mobility effects of different magnitudes and opposite directions among mobility groups may cancel out. The new method can be extended to investigate the effect of intergenerational mobility across multiple generations and other research areas including immigrant assimilation and heterogamy. abstract_id: PUBMED:36695147 Intergenerational social mobility and health in Russia: Mind over matter? The consequences of changing one's socio-economic status over the life course-i.e. social mobility-for individual health are not well understood. Theories of the health implications of social mobility draw on the human perception of one's changing conditions, but empirical studies mostly examine the health implications of moving from objectively defined indicators of parental socio-economic position such as education, occupation, or income, to own socio-economic position in adult life. Little is known about the consequences of individuals' own assessment of changes in socio-economic position for health outcomes. In this study, we examine the association of social mobility and health in a unique sample of the Russian population after the transition to a market society. We take a broad perspective on social mobility, putting emphasis on subjectively perceived social mobility. Results show that individuals' objective characteristics only partially explain the variation in their subjective perceptions of intergenerational mobility. Net of social origin and destination variables, subjective social mobility is associated with individuals' health outcomes, as measured by the 12-Item Short Form Health Survey. Those who perceive being upwardly mobile report better health, and downward mobility is associated with poorer health. The association holds for mental and physical health, for perceived downward and upward social mobility, and for a general subjective measure of mobility and a subjective measure prompting respondents to only think of mobility in terms of occupation. These findings are robust to controlling for a rich set of socio-demographic predictors on childhood adversity, contemporaneous material wellbeing, and family-related circumstances. We conclude that a conventional focus on single socio-economic status dimensions such as occupation might be too narrow to capture the health consequences of social mobility. abstract_id: PUBMED:10475669 The roles of social class of origin, achieved social class and intergenerational social mobility in explaining social-class inequalities in alcoholism among young men. The aim of this study was to investigate the role of intergenerational health-related mobility in explaining social-class inequalities in alcoholism among young men. Data on social class of origin and on risk factors in childhood and adolescence, e.g. risk use of alcohol, were collected for 49,323 men, born 1949-51, at enlistment for compulsory military training in 1969/70. Information on achieved socioeconomic class was obtained from Sweden's 1975 census. Data on alcoholism diagnoses were collected from the national in-patient care register 1976-83. 
Risk indicators for alcoholism established in adolescence were found to be more common among downwardly mobile individuals, and also among stable manual workers, than among those who ended up as non-manual employees. Downwardly mobile individuals, and also stable manual workers, were also found to have an increased risk of alcoholism diagnosis. The increased relative risk could, to a considerable extent, be attributed to factors from childhood/adolescence. In this longitudinal study, it is shown that intergenerational social mobility associated with health-related factors, albeit not with illness itself, made a major contribution to explaining differences in alcoholism between social classes. Factors established in adolescence were important with regard to differences in alcoholism between social classes among young adults. But such adverse conditions did not seem to be well reflected by social class of origin. Answer: Intergenerational social mobility among men does affect cardiovascular mortality. A population-based register study from Sweden found that cardiovascular mortality was clearly structured by adulthood class, but not as consistently by childhood class. Men who experienced mobility from a non-manual to a manual class had significantly higher mortality than those who moved from manual to non-manual and stable non-manuals, but not significantly lower than stable manuals. Education played a mediating role, suggesting that life-course disadvantages or advantages in relation to cardiovascular disease (CVD) were largely due to achieved education (PUBMED:18775818). Another study on Swedish women also found that social mobility trajectories were linked to their CVD mortality risk, with educational achievement being a key factor for intergenerational continuity and discontinuity in class-related risk of CVD mortality (PUBMED:19091450). Moreover, a study on familial risk of premature cardiovascular mortality highlighted that both childhood and adult social class, as well as the transition between them (social mobility), are related to premature CVD mortality. The study found a tendency for the downwardly mobile to have increased risk, and for the upwardly mobile to experience a decreased risk of premature CVD mortality. This effect was confirmed for men but not for women when analyzed separately (PUBMED:22906525). In summary, intergenerational social mobility among men does have an impact on cardiovascular mortality, with adult social class being a more significant factor than childhood class. Education appears to be a significant mediator in this relationship, and the direction of mobility (upward or downward) also influences the risk of CVD mortality.
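Note on the trajectory analyses cited above: the register studies (e.g., PUBMED:18775818, PUBMED:19091450) first code each subject into a mobility trajectory from origin (childhood/father's) class and destination (adult) class, then estimate mortality risks per trajectory against a stable non-manual reference. The sketch below illustrates only that coding step and a crude odds ratio on made-up records; the actual studies used Cox or logistic regression on full registry data.

```python
# Illustrative sketch of how intergenerational mobility trajectories are
# coded from origin (father's) and destination (own) class before mortality
# risks are estimated. The records and the crude odds ratio are hypothetical;
# the cited studies fitted Cox or logistic regression models, not this
# simplified two-group comparison.
import math

def trajectory(origin, destination):
    """Classify a subject into one of four mobility groups."""
    if origin == destination:
        return f"stable {origin}"
    if origin == "manual" and destination == "non-manual":
        return "upwardly mobile"
    return "downwardly mobile"

# (father's class, own class, died of CVD during follow-up) -- made-up rows
subjects = [
    ("manual", "manual", True), ("manual", "non-manual", False),
    ("non-manual", "manual", True), ("non-manual", "non-manual", False),
    ("manual", "manual", False), ("non-manual", "manual", False),
]

counts = {}
for origin, dest, died in subjects:
    group = trajectory(origin, dest)
    dead, alive = counts.get(group, (0, 0))
    counts[group] = (dead + died, alive + (not died))

# Crude odds ratio: downwardly mobile vs. stable non-manual (reference group)
a, b = counts["downwardly mobile"]
c, d = counts["stable non-manual"]
# add 0.5 to each cell to avoid division by zero in this tiny toy example
or_crude = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
print(f"crude OR, downward vs. stable non-manual: {or_crude:.2f}")
```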
Instruction: C2 nerve root transection during C1 lateral mass screw fixation: does it affect functionality and quality of life? Abstracts: abstract_id: PUBMED:3829929 Formation of IgG antibodies to C1 inhibitor as the cause of life-threatening angioedema A clinical picture with recurrent (in some cases potentially fatal) edema of skin and internal organs based not on a hereditary C1 inhibitor deficiency, but an acquired loss of C1 inhibitor activity due to antibodies is described for the first time in two patients. The clinical symptoms commenced in middle age patients between 40 and 46 years old. Anti C1 antibodies of the IgG were found in both patients. Quantitatively, these C1 inhibitor protein was in the lower range of normal, whereas no inhibitor activity could be demonstrated functionally. The function of the complement components C1, C2 and C4 was greatly reduced. The therapeutic use of C1 inhibitor concentrate at a high doses (6 X 500 U) as well as administration of high-dose corticosteroids in several emergency situations was unsuccessful. abstract_id: PUBMED:99872 The presence of active C1 (C-1) on peripheral human lymphocytes. We have shown that the first component of complement C1 is present in an active form on the surface of washed human peripheral lymphocytes but not on platelets or erythrocytes. This active C1 (C-1) was detected by its ability to transfer to sensitized cells carrying C4, i.e., EAC4, forming EAC-1,4. Active C1 was also able to consume C4. Treatment of these lymphocytes with 0.02 M EDTA removed C-1. EDTA-treated lymphocytes were able to bind exogenous purified human C-1. Comparative studies with sentized erythrocytes (EA) and EDTA treated lymphocytes showed that although fewer molecules of exogenous C1 could bind to the EDTA-treated lymphocytes than to EA, the consumption of C4 by C-1 bound to lymphocytes was significantly higher than that observed with EAC-1. When lymphocytes obtained from 2 patients with chronic lymphocytic leukemia and hypocomplementemia were tested, the release of C1, the C4 consumption and the binding of C-1 to EDTA-treated cells were highly inefficient. abstract_id: PUBMED:7204976 Activation of first component of complement (C1) in guinea pig serum by a polysaccharide is prevented by C1 inhibitor. A polysaccharide (PS) purified from venom of the ant Pseudomyrmex sp. causes the activation of the classical complement (C) pathway in normal serum, but not in guinea pig serum. To investigate why C was not activated in guinea pig serum, we partially purified guinea pig C1 in the presence of the protease inhibitor p-nitrophenyl, p'-guanidinobenzoate (NPGB). This C1 preparation was activated (mu = 0.15, pH 7.5) by the PS in a dose-dependent reaction after NPGB was eliminated by dilution. The PS decreased the action of the C1 inhibitor for C1 in diluted guinea pig serum, and it also inhibited the activity of highly purified guinea pig C1 inhibitor for C1. There was a direct correlation between the concentration of the guinea pig C1 inhibitor and the loss of ability of the PS to activate C1 in mixtures of constant concentrations of purified guinea pig C1 and purified venom PS, and increasing concentrations of purified guinea pig C1 inhibitor. The activity of the human C1 inhibitor, either in diluted serum or highly purified, was not decreased by the PS. These results show that the PS does not activate guinea pig C1 in serum because its action is blocked by the C1 inhibitor. 
abstract_id: PUBMED:1583391 Angioneurotic edema with acquired C1- inhibitor deficiency and autoantibody to C1- inhibitor: response to plasmapheresis and cytotoxic therapy. A patient with severe acquired angioneurotic edema had essentially no C1- inhibitor activity in his serum and nearly died of cardiopulmonary arrest during an acute episode of facial, oral, and pharyngeal edema. This patient had an antibody directed against C1- inhibitor and C1- inhibitor-anti-C1- inhibitor complexes in his serum. The antibody required a normal residue (Arg) in the reactive center of the inhibitor for its optimal interaction with the inhibitor. Plasmapheresis with 5% human serum albumin replacement relieved him of his antibody load and the edema; additional treatment with pulsed cyclophosphamide has provided a sustained remission. The 5% albumin solution that was used contained functional C1- inhibitor; other lots that were tested contained only traces or none. No underlying disease has yet been identified. During this acute episode of edema, the C1- inhibitor in the patient's plasma was a 92 kd component, and on recovery, a 105 kd component reappeared. C1- inhibitor isolated from the patient's plasma, which was obtained before pheresis, was mainly in lower molecular weight forms (56 kd and 45 kd). The antibody in the patient's serum appeared to render C1- inhibitor susceptible to proteolysis, for when purified antibody was added to normal serum, a cleaved form of C1- inhibitor was generated. abstract_id: PUBMED:3485688 Control of C1 activation by nascent C3b and C4b: a mechanism of feedback inhibition. We have demonstrated that immune complexes turn over C1, i.e., limiting quantities of immune complexes activate an excess of C1. This was readily apparent in a system of purified C1 and C1-inhibitor (C1-In) but not in normal human serum (NHS). The following results indicate that C3 and C4 are the serum factors responsible for the inhibition of C1 turnover by immune complexes. 1) In a purified protein system composed of C1 and C1-In at pH 7.5, ionic strength 0.14 M, doses of immune complexes that activated all the C1 in 60 min at 37 degrees C yielded no detectable C1 activation when C2, C3, and C4 were also present. All proteins were at their physiologic concentrations. Activation was quantified by SDS-PAGE analysis and hemolytic titration 2) In order to inactivate C3 and C4, NHS was treated with 50 mM methylamine (MeAm) for 15 min at 37 degrees C, after which the MeAm was removed by dialysis. The activities of C1, C2, and C1-In were unaffected by this treatment. Doses of immune complexes that consumed no C1 in NHS, consumed all the C1 in MeAm-treated NHS (MeAm-NHS). 3) Reconstitution of MeAm-NHS with physiologic concentrations of C3 and C4 rendered the serum again resistant to excessive C1 consumption by immune complexes. Immune complexes used in these studies included EA-IgG, EA-IgM, tetanus-human anti-tetanus, and aggregated human IgG. There appeared to be specificity to the inhibition reaction since C4 by itself could inhibit C1 consumption by EA-IgM, whereas the presence of C3 was also required to control EA-IgG. Finally, N-acetyl-L-tyrosine was added to NHS at a final concentration of 30 mM. This nucleophile did not interact with native C3 or C4, nor did it directly activate C1. However, upon the addition of low doses of immune complexes, acetyl tyrosine did yield uncontrolled C1 activation, presumably by binding nascent C3b and C4b and thereby blocking their attachment to the immune complexes. 
We conclude that in NHS there is a mechanism of feedback inhibition by which nascent C3b and C4b inhibit C1 turnover by immune complexes. This mechanism of control might be physiologically important in that it prevents excessive complement activation by low concentrations of immune complexes. abstract_id: PUBMED:2049932 C1 subcomponent complexes and C2 cleavage in active systemic lupus erythematosus. We studied the activation and C1 inactivator-dependent dissociation of the first component of complement, the C1q(C1r-C1s)2 complex, in relation to recruitment of the classical activation pathway in the circulation of 24 patients with systemic lupus erythematosus (SLE). The patients were divided into three groups on a clinical basis, and were investigated during flares of disease activity. Group I had mild symptoms, group II major extrarenal manifestations, and group III manifest renal disease. High serum concentrations of trimer complexes containing C1 inactivator, activated C1r and zymogen C1s (C1 IA-C1r-C1s) were found in the majority of the patients. Some patients with high C1 IA-C1r-C1s concentrations showed no evidence of classical pathway activation, indicating that C1 activation was controlled by the action of C1 IA at the C1r level. By contrast, formation in serum of tetramer complexes in which C1 IA was firmly bound to both C1r and C1s (C1 IA-C1r-C1s-C1 IA) was associated with C2 and C3 cleavage in EDTA plasma, and with manifest hypocomplementemia. Low C1 IA-C1r-C1s-C1 IA values were observed in conjunction with substantial C2 cleavage in a few patients. Thus, C1 IA-C1r-C1s-C1 IA may not always be a sensitive indicator of classical pathway activation. Efficient recruitment of the classical pathway was related to disease severity, with some overlap between the clinical groups. In conclusion, C1 dissociation with formation of C1 IA-containing complexes was consistently found in patients with active SLE. The results suggested that C1 IA-dependent control of C1 activation was of biological significance in the disease. abstract_id: PUBMED:977945 Leukocyte-derived complement inhibitor. IV. The functional properties of C1 bound to erythrocytes pretreated with leukocyte culture supernatant. E, pretreated with leukocyte culture supernatant (ES), binds C1 through C1q; ES and EIgM that bind the same amount of C1 as measured in a hemolytic assay have the same uptake of 125I-C1q; ESC1q and EIgMC1q, carrying the same number of molecules of C1q per cell, have the same uptake of C1r and C1s; soluble immune complexes prevent the binding of C1 and C1q to ES. The activity of C1 bound to ES is impaired; ESC1 can react with C4 but not with C2. The C4 turnover and the C1 INH turnover by ESC1 are reduced so that ES-bound C1 is protected from destruction by C1 INH. These modifications are fully reversed when C1 is transferred from ES to EA: C1 recovers its ability to react with C2, and C1 INH. Thus the C1s activity can be modulated inside the C1 molecular complex upon binding of C1q to a lymphocyte product. In addition, the 125I-C1q uptake is proportional to the amount of IgM hemolysin used to sensitize E; it has, however, an exponential relationship to the amount of IgG or S used to sensitize E. The ratio of 125I-C1q uptake to whole C1 uptake measured in a hemolytic assay is lower than 2.
This indicates that one molecule of IgM is sufficient to bind one molecule of C1q on E, that several molecules of IgG or S are required to bind one molecule of C1q, and that one molecule of C1q is sufficient to create a lytic site on E. abstract_id: PUBMED:3029223 Antibody-independent activation of C1. II. Evidence for two classes of nonimmune activators of the classical pathway of complement. Nonimmune activation of the first component of complement (C1) by cardiolipin (CL) vesicles present specific features which were not demonstrated on immune complexes. CL vesicles which activate C1 in the presence of C1-inhibitor (C1-INH) were found to bind C1s in the absence of C1r, and to induce a specific C1r-independent cleavage of C1q-bound C1s. Therefore, several known natural nonimmune activators were analyzed by comparing their ability to activate C1 in the presence of C1-INH and to mediate a C1r-independent cleavage of C1s. Freshly isolated human heart mitochondria (HHM) activated C1 only in the absence of C1-INH. However, mitoplasts derived from HHM (HHMP) activated C1 regardless of the presence of C1-INH, and induced a specific cleavage of C1q-bound C1s. The same pattern was observed in the case of smooth E. coli and a semi-rough E. coli strain. DNA, known to activate C1 only in the absence of C1-INH, does not induce C1s cleavage in the absence of C1r. Thus, nonimmune activators can be classified into two distinct categories. "Strong" activators, such as CL vesicles, HHMP, or the semi-rough E. coli strain J5 can activate C1 in the presence of C1-INH. By using C1qs2 as a probe, they exhibit a specific, C1r-independent cleavage of C1s. C1s-binding to C1q is a critical factor for the activation process in this group. In the case of "weak" activators, such as E. coli smooth strains, DNA, or HHM, no C1s-binding to activator-bound C1q was detected, and C1r-independent C1s cleavage and C1 activation in the presence of C1-INH were not observed. As in the case of immune complexes, C1r activation appears to play a key role in the C1 activation by "weak" activators. abstract_id: PUBMED:8144914 Unique C1 inhibitor dysfunction in a kindred without angioedema. I. A mutant C1 INH that inhibits C1-s but not C1-r. We have described hereditary incomplete deficiency of the fourth component of complement (C4) in 10 members of a large kindred. C4 deficiency in this kindred is not linked to C4 loci in the HLA region. C4 synthesis is decreased, and C4 catabolism is normal in kindred members with low serum C4 levels. We have discovered a uniquely dysfunctional C1 inhibitor in all C4-deficient members of this kindred. C1 inhibitor dysfunction is revealed by incubating sera of affected members with EDTA, which destroys all C4 activity in these sera, but not in normal sera or sera from individuals with partial C4 deficiencies. The M(r) of C1 inhibitor purified from affected members is normal, but approximately 50% of this C1 inhibitor resists cleavage by trypsin (0.14 microM) at arg444, suggesting a substitution at this position. Moderate increases in trypsin, however, result in cleavage of the resistant molecules, which would not be expected if arg444 were the site of the mutation. All molecules in C1 inhibitor purified from affected members' plasma bind to activated C1s (C1-s), but approximately 50% of molecules in these preparations do not bind to activated C1r (C1r). These findings show that affected kindred members have a unique mutation in C1 inhibitor. 
The mutant C1 inhibitor does not prevent the activation of C1s by C1-r when serum Ca2+ is chelated by EDTA, but its inhibition of C1-s is normal in vivo, as shown by normal C2 levels, normal C4 catabolism, and absence of angioedema in C4-deficient members. The nature of the mutation, its selective failure to inhibit C1-r, and its relationship to decreased C4 synthesis remain to be defined. abstract_id: PUBMED:6724636 Formation of EAC142 and EAC1423 with macrophage culture supernatant containing the secreted complement components C1 to C3. Culture supernatants of thioglycollate-elicited guinea pig peritoneal macrophages contained hemolytic C1, C4, C2 and C3, whereas hemolytic C5, C6, C7, C8 or C9 were not detected. Activity of C1, C2 and C3 increased up to a 48 h culture period, whereas C4 activity already declined in 2 day old cultures. After secretion, the hemolytic activity of C1 was least stable in culture supernatant. Sensitized sheep erythrocytes (EA) when incubated with culture supernatant initiated activation and functional cooperation of secreted C1 to C3 as indicated by formation of EAC142 and EA1423 intermediates. Decay and regeneration with purified C2 was shown for EAC142 and deposition of C3 fragments on EAC1423 was demonstrated with anti-C3. On an average, supernatants of 2 day old macrophage cultures were most suitable for formation of EAC142 and EAC1423 . The rate of EAC142 and EAC1423 formation, and also of C2 and C3 inactivation, during incubation of EA with culture supernatant was slow; addition of purified C1 to culture supernatant, however, greatly enhanced the same reactions of EA with supernatant which indicated that C1 was the rate limiting factor. Local secretion of hemolytic C1, C4, C2 and C3 by macrophages may have an important role in antimicrobial defense mechanisms due to the well-known functional cooperation between macrophages and activated C3. Answer: The provided abstracts do not contain information related to C2 nerve root transection during C1 lateral mass screw fixation or its effects on functionality and quality of life. The abstracts focus on the biochemistry and immunology of the complement system, specifically the C1 inhibitor and its role in conditions such as angioedema and systemic lupus erythematosus. They do not address surgical procedures or outcomes related to spinal surgery or neurosurgical interventions. Therefore, based on the given abstracts, it is not possible to answer the question about the impact of C2 nerve root transection during C1 lateral mass screw fixation on functionality and quality of life.
Instruction: Can the tomographic aspect characteristics of patients presenting with acute respiratory distress syndrome predict improvement in oxygenation-related response to the prone position? Abstracts: abstract_id: PUBMED:12218526 Can the tomographic aspect characteristics of patients presenting with acute respiratory distress syndrome predict improvement in oxygenation-related response to the prone position? Background: In some patients with acute respiratory distress syndrome, the prone position is able to improve oxygenation, whereas in others it is not. It could be hypothesized that the more opacities that are present in dependent regions of the lung when the patient is in the supine position, the better the improvement in oxygenation is observed when the patients are turned prone. Therefore, we conducted a prospective study to identify computed tomographic scan aspects that could accurately predict who will respond to the prone position. Methods: We included 46 patients with acute respiratory distress syndrome (31 responders and 15 nonresponders). Computed tomographic scan was performed in the 6-h period preceding prone position. Blood gas analyses were performed before and at the end of the first 6-h period of prone position. Results: Arterial oxygen partial pressure/fraction of inspired oxygen increased from 117 +/- 42 (mean +/- SD) in the supine position to 200 +/- 76 mmHg in the prone position (P &lt; 0.001). There were 31 responders and 15 nonresponders. There was a vertebral predominance of the opacities (P &lt; 0.0001). However, there was no difference between responders and nonresponders. When only the amount of consolidated lung located under the heart was evaluated, there was more consolidated tissue under the heart relative to total lung area in nonresponders than in responders (P = 0.01). Conclusions: There are no distinctive morphologic features in the pattern of lung disease measured by computed tomographic scanning performed with the patient in the supine position that can predict response to the prone position. abstract_id: PUBMED:27324241 Lung ultrasonography for assessment of oxygenation response to prone position ventilation in ARDS. Purpose: Prone position (PP) improves oxygenation and outcome of acute respiratory distress syndrome (ARDS) patients with a PaO2/FiO2 ratio &lt;150 mmHg. Regional changes in lung aeration can be assessed by lung ultrasound (LUS). Our aim was to predict the magnitude of oxygenation response after PP using bedside LUS. Methods: We conducted a prospective multicenter study that included adult patients with severe and moderate ARDS. LUS data were collected at four time points: 1 h before (baseline) and 1 h after turning the patient to PP, 1 h before and 1 h after turning the patient back to the supine position. Regional lung aeration changes and ultrasound reaeration scores were assessed at each time. Overdistension was not assessed. Results: Fifty-one patients were included. Oxygenation response after PP was not correlated with a specific LUS pattern. The patients with focal and non-focal ARDS showed no difference in global reaeration score. With regard to the entire PP session, the patients with non-focal ARDS had an improved aeration gain in the anterior areas. Oxygenation response was not associated with aeration changes. No difference in PaCO2 change was found according to oxygenation response or lung morphology. 
Conclusions: In ARDS patients with a PaO2/FiO2 ratio ≤150 mmHg, bedside LUS cannot predict oxygenation response after the first PP session. At the bedside, LUS enables monitoring of aeration changes during PP. abstract_id: PUBMED:29790144 Prone Position and Nursing Care Acute respiratory distress syndrome (ARDS) is a life-threatening disease, as acute inflammation in the lungs typically leads to hypoxia and symptoms of dyspnea. The treatment modalities of ARDS include mechanical ventilation, corticosteroids, extracorporeal membrane oxygenation, inhaled nitric oxide, and the prone position. Among these, the prone position is supported by evidence showing significantly reduced mortality in patients that adopt this modality. Lying in the prone position reduces atelectatic lung volumes by recruitment of dependent parts and facilitates normal regulation of alveolar ventilation, giving better ventilation-perfusion matching and, thus, improved oxygenation. Lying in the prone position should be initiated as early as possible and may be implemented when there is limited improvement after 12-24 hours under mechanical ventilation and when the PaO2/FiO2 ratio is less than 150 mmHg. However, the prone position may not be appropriate for patients with increased intracranial pressure, hemodynamic instability, an open abdominal wound, or pregnancy. Prior to setting a patient in the prone position, fixation of all tubing should be confirmed, and the tubing should be reexamined after each adjustment in position in order to ensure that it is free of twists and functions well. In addition, caution should be given to the skin at pressure points to avoid pressure sores, and foam dressings may be applied in advance for protection. This article summarizes the pathophysiology of ARDS, the principle of applying the prone position, and related indications, complications, and nursing care in order to give nurses more confidence in caring for patients using the prone position. abstract_id: PUBMED:26806842 Can lung ultrasonography predict prone positioning response in acute respiratory distress syndrome patients? Purpose: The purpose was to assess whether lung ultrasonography (L-US) is a useful tool in prediction of prone positioning (PP) oxygenation response in patients with acute respiratory distress syndrome (ARDS). Methods: In a prospective study, 19 ARDS patients were included for assessment of PP oxygenation response. The latter was assessed for at least 12 hours. Six different ultrasonography windows were performed on each hemithorax before prone positioning (H0), at H2, at H12 (before return to supine) and at H14 (2 hours after return to supine). Patients were classified into 2 groups (responders / non responders) according to their oxygenation response to PP. Ultrasonography videos were blindly evaluated by 3 expert clinicians to classify lung regions as "normal", "moderate loss of aeration," "severe loss of aeration," or "lung consolidation." Oxygenation parameters were collected at H0, H2, and H14. Results: Association of each lung region aspect to PP oxygenation response was compared between the 2 groups. The normal aspect of the anterobasal regions was significantly associated with the oxygenation response (P = .0436), with a positive predictive value equal to or near 100%. Discussion: Our results demonstrated that a simple and short L-US examination could be a useful tool in prediction of PP oxygenation response in ARDS patients.
A normal L-US pattern of both anterobasal lung regions in supine position may predict a significant PaO2/FIO2 ratio improvement. abstract_id: PUBMED:10051075 Improvement in oxygenation by prone position and nitric oxide in patients with acute respiratory distress syndrome. Objective: Inhaled nitric oxide (NO) and prone position improve arterial oxygenation in patients with the acute respiratory distress syndrome. This study was undertaken to assess the combined effects of NO and prone position in these patients. Design: Prospective clinical study. Setting: General intensive care service in a community teaching hospital. Patients: 14 mechanically ventilated adult patients with the acute respiratory distress syndrome (mean lung injury score 3.23+/-0.27). Measurements And Results: We measured hemodynamic and oxygenation parameters in the supine position and 2 h later in the prone position, before and during inhalation of 10 ppm NO. A positive response in oxygenation was defined as a &gt; or =20% increment in the arterial oxygen tension/fractional inspired oxygen ratio (PaO2/FIO2). In the prone position PaO2/FIO2 increased significantly (from 110+/-55 to 161+/-89 mm Hg, p&lt;0.01) and venous admixture decreased (from 38+/-12 to 30+/-7%, p&lt;0.01) compared to the supine position. Ten of the 14 patients were responders in the prone position. In the supine position, inhalation of NO improved oxygenation to a lesser extent, increasing PaO2/FIO2 to 134+/-64 mm Hg (p&lt;0.01) and decreasing venous admixture to 35+/-12%, (p&lt;0.01). Five of the 14 patients responded to NO inhalation supine and 8 of 14 responded prone (p = 0.22). The combination of NO therapy and prone positioning was additive in increasing PaO2/FIO2 (197+/-92 mm Hg) and decreasing venous admixture (27+/-8%) (p&lt;0.01). This combination also showed a positive oxygenation response on compared to the supine value without NO in 13 of the 14 patients (93 %). NO-induced changes in PaO2/FIO2 were correlated to changes in pulmonary vascular resistance only in the prone position. Conclusions: In patients with the acute respiratory distress syndrome, the combination of NO and prone position is a valuable adjunct to mechanical ventilation. abstract_id: PUBMED:36972632 Aging-related predictive factors for oxygenation improvement and mortality in COVID-19 and acute respiratory distress syndrome (ARDS) patients exposed to prone position: A multicenter cohort study. Background: Elderly patients are more susceptible to Coronavirus Disease-2019 (COVID-19) and are more likely to develop it in severe forms, (e.g., Acute Respiratory Distress Syndrome [ARDS]). Prone positioning is a treatment strategy for severe ARDS; however, its response in the elderly population remains poorly understood. The main objective was to evaluate the predictive response and mortality of elderly patients exposed to prone positioning due to ARDS-COVID-19. Methods: This retrospective multicenter cohort study involved 223 patients aged ≥ 65 years, who received prone position sessions for severe ARDS due to COVID-19, using invasive mechanical ventilation. The PaO2/FiO2 ratio was used to assess the oxygenation response. The 20-point improvement in PaO2/FiO2 after the first prone session was considered for good response. Data were collected from electronic medical records, including demographic data, laboratory/image exams, complications, comorbidities, SAPS III and SOFA scores, use of anticoagulants and vasopressors, ventilator settings, and respiratory system mechanics. 
Mortality was defined as deaths that occurred until hospital discharge. Results: Most patients were male, with arterial hypertension and diabetes mellitus as the most prevalent comorbidities. The non-responders group had higher SAPS III and SOFA scores, and a higher incidence of complications. There was no difference in mortality rate. A lower SAPS III score was a predictor of oxygenation response, and the male sex was a risk predictor of mortality. Conclusion: The present study suggests the oxygenation response to prone positioning in elderly patients with severe COVID-19-ARDS correlates with the SAPS III score. Furthermore, the male sex is a risk predictor of mortality. abstract_id: PUBMED:35710365 Effect of prone position in patients with acute respiratory distress syndrome supported by venovenous extracorporeal membrane oxygenation: a retrospective cohort study. Background: The application of prone position (PP) in acute respiratory distress syndrome (ARDS) supported by venovenous extracorporeal membrane oxygenation (VV-ECMO) is controversial. Objectives: To evaluate the safety and efficacy of application of PP during VV-ECMO in patients with ARDS. Methods: This was a single-center, retrospective study of patients who met the Berlin definition of ARDS, and were supported with VV-ECMO. We divided the patients into two groups. The prone group included patients who were supported by VV-ECMO, and experienced at least one period of PP, while those without PP during VV-ECMO were defined as the supine group. Propensity score matching (PSM) at a ratio of 1:1 was introduced to minimize potential confounders. The primary outcomes were the complications of PP and the change of arterial oxygen pressure/fraction of the inspiration (PaO2/FiO2) ratio after PP. The secondary outcomes were hospital survival, ICU survival, and ECMO weaning rate. Results: From April 2013 to October 2020, a total of 91 patients met the diagnostic criteria of ARDS who were supported with ECMO. 38 patients (41.8%) received at least one period of PP during ECMO, while 53 patients (58.2%) were maintained in supine position during ECMO. 22 minor complications were reported in the prone group and major complications were not found. The other ECMO-related complications were similar between two groups. The PaO2/FiO2 ratio significantly improved after PP compared with before (174.50 (132.40-228.25) mmHg vs. 158.00 (122.93-210.33) mmHg, p &lt; 0.001). PSM selected 25 pairs of patients with similar characteristics. Hospital survival or ICU survival did not differ between the two groups (40% vs. 28%, p = 0.370; 40% vs. 32%, p = 0.556). Significant difference of ECMO weaning rate between two groups was not found (56% vs. 32%, p = 0.087). Conclusions: PP during VV-ECMO was safe and could improve oxygenation. A large-scale and well-designed RCT is needed in the future. abstract_id: PUBMED:37693003 Construction and application of prone position ventilation management scheme for severe COVID-19 patients. Background: Prone position ventilation (PPV) can significantly improve oxygenation index and blood oxygen saturation in most (70%-80%) patients with acute respiratory distress syndrome. However, although PPV is not an invasive procedure, there are many potential PPV-related complications, such as nerve compression, crush injury, venous stasis (e.g., facial oedema), pressure sores, retinal damage, vomiting, and arrhythmia, with an incidence of up to 56.9%. 
Nursing managers have focused on reducing the occurrence of PPV-related complications and improving safety. Objective: To construct a prone ventilation management scheme for patients with severe coronavirus disease 2019 (COVID-19) and analyse its application effect. Methods: Based on a previous evidence-based study combined with the COVID-19 Diagnosis and Treatment Protocol (Trial Edition 9), a prone ventilation management protocol for severe COVID-19 was formulated and applied to COVID-19 patients in the intensive care unit of a designated hospital. A prospective self-control study was used to compare changes in the oxygenation index and other outcome indicators before and after the intervention. Results: The oxygenation index of patients after intervention (321.22 ± 19.77 mmHg) was significantly higher (p &lt; 0.05) than before intervention (151.59 ± 35.49 mmHg). The difference in oxygenation index in different prone position ventilation durations was statistically significant (p &lt; 0.05). Nursing quality evaluation indicators showed that the implementation rate of gastric residual volume assessment was 100% and the incidence of occupational exposure and cross-infection was 0%; the incidences of pressure ulcers, drug extravasation, and facial oedema were 13.64% (3/22), 4.54% (1/22), and 4.54% (1/22), respectively. The incidence of unplanned extubation, aspiration, and falls/falls was 0%. abstract_id: PUBMED:12130960 Prone-position ventilation induces sustained improvement in oxygenation in patients with acute respiratory distress syndrome who have a large shunt. Objectives: Prone-position ventilation (PPV) induces acute improvement in oxygenation in many patients with acute respiratory distress syndrome (ARDS), with some maintaining their oxygenation even after they were returned to the supine position, but it is unclear what clinical factors determine the sustained oxygenation benefit. We hypothesized that patients with ARDS who have a larger shunt would have a better acute and sustained oxygenation response to PPV. Design: Prospective, nonrandomized interventional study. Setting: Medical and surgical intensive care units, university tertiary care center. Patients: Twenty-two consecutive patients, with ARDS with an average PaO2/FiO2 of 94, were administered PPV for 12 hrs followed by supine-position ventilation for 2 hrs. Measurements: Hemodynamic and gas exchange variables were monitored. The shunt was measured as venous admixture at an FiO2 of 1.0, and compliances of the respiratory system, lung, and chest wall were measured by the esophageal balloon technique before PPV, during PPV, and during subsequent supine-position ventilation. Main Results: Fourteen patients (64%) responded to PPV, with PaO2/FiO2 increasing by &gt; or =20. These changes were associated with a decrease in chest wall compliance. Responders had significantly shorter time from ARDS to PPV, a lower baseline PaO2/FiO2, and a higher venous admixture. All responders maintained the improvement in oxygenation and had a greater respiratory system compliance after returning to the supine position. Time from ARDS to PPV and baseline lung injury score were negatively associated, whereas chest wall compliance, heart rate, and PaCO2 were positively associated with sustained improvement in oxygenation. Conclusions: PPV induced acute and sustained improvement in oxygenation in many patients with ARDS. 
The sustained improvement is more significant if PPV is administered early to patients with a larger shunt and a more compliant chest wall. Measuring venous admixture and chest wall compliance before PPV may help select a subgroup of patients with ARDS who may benefit the most from PPV. abstract_id: PUBMED:18570829 Prone position in the adult respiratory distress syndrome. Acute respiratory distress syndrome (ARDS) has been a subject of research in recent decades, since it was defined in the seventies. ARDS is now known to be triggered by pulmonary and non-pulmonary insults which allow chemical agents to be released in the lungs. The resulting damage to the alveolar-capillary membrane causes impaired ventilation and oxygenation. The aim of the present bibliographic review is to describe all the theories that have been proposed since 1974 to explain how the prone position enhances oxygenation in ARDS patients. Current thought amongst researchers includes how the prone position could improve the functional residual capacity, the ventilation/perfusion ratio, the mechanics of lung, diaphragmatic and thoracic movement, the ability to clear secretions, and the release of the lungs from the weight of the mediastinal structures. The literature suggests using the prone position in the early stages of ARDS; however, it has not been established how long the prone position should be maintained in order to achieve the best oxygenation levels. The results of the studies reviewed revealed that 50-80% of the patients experienced significant improvement of oxygenation with the prone position. However, no study has shown that the prone position significantly reduces mortality. It is concluded that further investigations are necessary to understand how the prone position affects the lungs, which groups of patients respond to this treatment, what duration and frequency of proning is most beneficial to reduce mortality, and how to effectively nurse patients who are in the prone position. Answer: The tomographic characteristics of patients presenting with acute respiratory distress syndrome (ARDS) do not appear to reliably predict the oxygenation response to the prone position. A prospective study that included 46 patients with ARDS found no distinctive morphologic features in the pattern of lung disease measured by computed tomographic scanning performed with the patient in the supine position that could predict response to the prone position (PUBMED:12218526). Similarly, another study involving lung ultrasonography (LUS) in ARDS patients with a PaO2/FiO2 ratio ≤150 mmHg concluded that bedside LUS cannot predict the oxygenation response after the first prone position (PP) session (PUBMED:27324241). However, a separate study suggested that a normal lung ultrasonography pattern of both anterobasal lung regions in the supine position may predict a significant PaO2/FIO2 ratio improvement when patients are placed in the prone position (PUBMED:26806842). Despite these findings, the overall evidence suggests that imaging characteristics alone may not be sufficient to accurately predict which patients will benefit from prone positioning in terms of oxygenation improvement.
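One of the abstracts in the record above (PUBMED:12130960) defines prone-position "responders" by a PaO2/FiO2 increase of at least 20 and then compares baseline venous admixture between responders and non-responders. The sketch below illustrates that style of analysis; the cohort size, random seed and simulated values are assumptions for illustration only, not the study's data or code.

```python
# Hedged illustration only - synthetic data, not taken from any of the cited studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 22                                        # hypothetical cohort size
pf_supine = rng.normal(94, 15, n)             # PaO2/FiO2 before prone positioning
pf_prone = pf_supine + rng.normal(25, 20, n)  # PaO2/FiO2 during prone positioning
shunt = rng.normal(0.35, 0.08, n)             # baseline venous admixture (fraction)

responder = (pf_prone - pf_supine) >= 20      # responder definition used in the abstract
u_stat, p_value = stats.mannwhitneyu(shunt[responder], shunt[~responder])
print(f"responders: {responder.sum()}/{n}; "
      f"p-value for baseline shunt difference: {p_value:.3f}")
```

With real data, the same comparison could be run on chest wall compliance or on time from ARDS onset to PPV, the other predictors highlighted by that abstract.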
Instruction: Genetic knowledge and counselling skills of Dutch cardiologists: sufficient for the genomics era? Abstracts: abstract_id: PUBMED:12643889 Genetic knowledge and counselling skills of Dutch cardiologists: sufficient for the genomics era? Aims: Genetic scientific knowledge is growing rapidly but how this affects clinical practice is unclear. We investigated the levels of knowledge, practical skills and clinical genetic practices of Dutch cardiologists. Methods And Results: A survey was designed to assess cardiologists' experience with genetic aspects of hypertrophic cardiomyopathy, self-reported genetic knowledge, and genetic skills in general and aimed at this disease. In addition, cardiologists' opinions on five possible measures for improvement were obtained. Study Population: all Dutch cardiologists (n=643). Median number of patients suffering from hypertrophic cardiomyopathy per cardiologist is five. Forty-one percent of respondents do not give information about genetics to all their patients. Cardiologists rarely initiate DNA tests for hypertrophic cardiomyopathy. Only 38% refer patients to clinical geneticists. Self-reported knowledge levels are low (average score 3.3-5.1, 0-10 scale). Cardiologists with an established working relationship with a clinical geneticist report significantly higher levels of knowledge. Clinical guidelines, education and improved collaboration with clinical geneticists are preferred. Conclusion: Dutch cardiologists' genetic knowledge and clinical genetic practice levels are insufficient. As a result, clinical genetic care for patients with hypertrophic cardiomyopathy is poor. Improvements proposed include advancement of knowledge (education, professional guidelines) and structural measures (working relationships, multidisciplinary outpatient clinics). Collaboration of cardiologists and clinical geneticists is urgently needed to optimise cardiogenetic patient care. abstract_id: PUBMED:32557217 Re-examining the Ethics of Genetic Counselling in the Genomic Era. Respect for patient autonomy has served as the dominant ethical principle of genetic counselling, but as we move into a genomic era, it is time to actively re-examine the role that this principle plays in genetic counselling practice. In this paper, we argue that the field of genetic counselling should move away from its emphasis on patient autonomy and toward the incorporation of a more balanced set of principles that allows counsellors to offer clear guidance about how best to obtain or use genetic information. We begin with a brief history of how respect for patient autonomy gained such emphasis in the field and how it has taken on various manifestations over time, including the problematic concept of nondirectiveness. After acknowledging the field's preliminary move away from nondirectiveness, we turn to a series of arguments about why the continued dominance of patient autonomy has become untenable given the arrival of the genomic era. To conclude, we describe how a more complete set of bioethical principles can be adapted and used by genetic counsellors to strengthen their practice without undermining patient autonomy. abstract_id: PUBMED:27924139 Outcome of a four-hour smoking cessation counselling workshop for medical students. Background: Lack of smoking cessation education in undergraduate medical training hinders healthcare professionals in providing adequate tobacco cessation counselling. 
We developed a comprehensive 4-h smoking cessation counselling course for medical students that is easy to incorporate into a medical school curriculum, and assessed its short-term outcome for knowledge, skills, and attitudes. Methods: Eighty-eight medical students (53 f, 35 m) were educated by a doctoral student in five identical 4-h courses. A 45-min theoretical introduction was followed by patient-physician role-playing by student pairs. Knowledge, skills, and attitude were assessed before and 4 weeks after the course by questionnaires, and by blinded analysis of pre- and post-course videos of a five-minute standardized patient situation. Results: Knowledge: Before the course, 10.6 (mean, SD: 2.7) questions out of 29 were answered correctly, and this increased to 19.2 (3.6) after the course (p < 0.0005). Major features of the students' counselling skills improved. Significant and highly relevant attitude changes reflected increased motivation to counsel smokers. Conclusion: Implementing a four-hour smoking intervention workshop into a medical curriculum was highly effective in improving students' knowledge, skills and attitudes towards smoking counselling, as well as providing them with additional clinical competencies. abstract_id: PUBMED:34334262 Structured online training for university students to deliver peer-led addiction counselling for young drug abusers in China: Effect on improving knowledge, attitude, confidence, and skills. Objectives: To measure the effect of a structured online training programme on improving the knowledge, attitude, confidence, and skills of university students to deliver peer-led addiction counselling to young drug abusers. Methods: A structured training programme was designed for university students with a background in health-related disciplines. Pre- and post-training evaluations were conducted for 245 university students with health-related backgrounds who completed the training programme. The participants' knowledge of and attitudes towards drug abuse and their confidence in delivering addiction counselling were evaluated and compared before and after the training. Counselling skills were assessed using a simulated standardised case. Results: A paired test indicated that after attending the training programme, the participants' knowledge of and attitudes towards drug abuse and their confidence in delivering addiction counselling were significantly improved. Most of the trained participants demonstrated satisfactory counselling skills. Conclusion: Participation in a one-day structured addiction training programme significantly improved students' knowledge of and attitudes towards drug abuse and their confidence and skills to deliver addiction counselling. Practice Implication: Future research should elucidate the implementation of the counselling skills in terms of counselling outcomes and successful referral. The clients' health outcomes should be assessed to reflect the peer counsellors' skills. abstract_id: PUBMED:21407930 A questionnaire survey of psychiatrists' attitudes towards genetic counselling. Genetic counselling in psychiatry aims at facilitating mentally ill patients and their relatives to make informed choices after understanding what is known about the genetics of a given mental disorder. This area of practice is a recent development in mental health care. This study surveyed the attitudes and practices of Indian psychiatrists towards genetic counselling.
Fifty-nine out of 150 randomly selected psychiatrists completed a semi-structured questionnaire assessing various aspects of genetic counselling in practice. The results show that there is a felt need for genetic counselling among patients. Most of the respondents provide information regarding the hereditary nature and risk of inheritance of mental illnesses in single sessions of less than 30 minutes' duration in clinical situations. Most psychiatrists (76%) felt that genetic counselling is feasible in practice, but 24% cited inadequate knowledge, illiteracy among patients and time constraints as hindering factors. Genetic counselling was considered by most respondents to be important in planning management for patients. The need to improve their knowledge and skills in the genetic understanding of mental illnesses was also reported. These results have implications for the future practice of genetic counselling in the Indian setting. abstract_id: PUBMED:26590592 Non-genetic health professionals' attitude towards, knowledge of and skills in discussing and ordering genetic testing for hereditary cancer. Non-genetic health professionals (NGHPs) have insufficient knowledge of cancer genetics, express educational needs and are unprepared to counsel their patients regarding their genetic test results. So far, it is unclear how NGHPs perceive their own communication skills. This study was undertaken to gain insight into their perceptions, attitudes and knowledge. Two publicly accessible databases were used to invite NGHPs providing cancer genetic services to complete a questionnaire. The survey assessed: sociodemographic attributes, experience in ordering hereditary cancer genetic testing, attitude, knowledge, perception of communication skills (e.g. information giving, decision-making) and educational needs. Of all respondents (N = 49, response rate 11%), most have a positive view of their own information giving (mean = 53.91, range 13-65) and decision making skills (64-77% depending on topic). NGHPs feel responsible for enabling disease- and treatment-related behavior (89-91%). However, 20-30% reported difficulties managing patients' emotions and did not see management of long-term emotions as their responsibility. Correct answers on knowledge questions ranged between 41 and 96%. Higher knowledge was associated with more confidence in NGHPs' own communication skills (rs = .33, p = 0.03). Although NGHPs have a positive view of their communication skills, they perceive more difficulties managing emotions. The association between less confidence in communication skills and lower knowledge level suggests that awareness of knowledge gaps affects confidence. NGHPs might benefit from education about managing client emotions. Further research using observation of actual counselling consultations is needed to investigate the skills of this specific group of providers. abstract_id: PUBMED:37927209 Genetic counselling considerations with genetic/genomic testing in Neonatal and Pediatric Intensive Care Units: A scoping review. Genetic and genomic technologies can effectively diagnose numerous genetic disorders. Patients benefit when genetic counselling accompanies genetic testing, and international guidelines recommend pre- and post-test genetic counselling with genome-wide sequencing.
However, there is a gap in knowledge regarding the unique genetic counselling considerations with different types of genetic testing in the Neonatal Intensive Care Unit (NICU) and the Pediatric Intensive Care Unit (PICU). This scoping review was conducted to identify the gaps in care with respect to genetic counselling for infants/pediatric patients undergoing genetic and genomic testing in NICUs and PICUs and to understand areas in need of improvement in order to optimize clinical care for patients, caregivers, and healthcare providers. Five databases (MEDLINE [Ovid], Embase [Ovid], PsycINFO [Ebsco], CENTRAL [Ovid], and CINAHL [Ebsco]) and grey literature were searched. A total of 170 studies were included and used for data extraction and analysis. This scoping review includes descriptive analysis, followed by a narrative account of the extracted data. Results were divided into three groups: pre-test, post-test, and comprehensive (both pre- and post-test) genetic counselling considerations based on indication for testing. More studies were conducted in the NICU than the PICU. Comprehensive genetic counselling was discussed in only 31% of all the included studies, demonstrating the need for both pre-test and post-test genetic counselling for different clinical indications, in addition to the need to account for different cultural aspects based on ethnicity and geographic factors. abstract_id: PUBMED:29234904 Post genomics era for orchid research. Among the 300,000 species of angiosperms, Orchidaceae, containing 30,000 species, is one of the largest families. Orchids have successfully colonized almost every habitat on earth, which indicates that they are among the plants of significant ecological and evolutionary importance. So far, four orchid genomes have been sequenced, including Phalaenopsis equestris, Dendrobium catenatum, Dendrobium officinale, and Apostasia shenzhenica. Here, we review the current progress and the direction of orchid research in the post genomics era. These include orchid genome evolution, genome mapping (genome-wide association analysis, genetic map, physical map), comparative genomics (especially receptor-like kinase and terpene synthase), secondary metabolomics, and genome editing. abstract_id: PUBMED:30496830 Developing a short-form of the Genetic Counselling Outcome Scale: The Genomics Outcome Scale. The Genetic Counselling Outcome Scale (GCOS-24) is a 24-item patient-reported outcome measure for use in evaluations of genetic counselling and testing services. The aim of this study was to develop a short form of GCOS-24. The study comprised three phases. Phase I: Cognitive interviews were used to explore the interpretability of GCOS-24 items and which GCOS-24 items were most valued by the target population. Phase II: The Graded Response Model was used to analyse an existing set of GCOS-24 responses (n = 395) to examine item discrimination. Phase III: Item Selection. Three principles guided the approach to item selection: (i) items with poor discriminative properties were not selected; (ii) to avoid redundancy, items capturing a similar outcome were not selected together; item information curves and cognitive interview findings were used to establish superior items; (iii) Rasch analysis was then used to determine the optimal scale. In Phase I, ten cognitive interviews were conducted with individuals affected by or at risk for a genetic condition, recruited from patient support groups.
Analysis of interview transcripts identified twelve GCOS-24 items which were highly valued by participants. In Phase II, Graded Response Model item characteristic curves and item information curves were produced. In Phase III, findings from Phases I and II were used to select ten highly valued items that perform well. Finally, items were iteratively removed and permuted to establish optimal fit statistics under the Rasch model. A six-item questionnaire with a five-point Likert scale was produced (the Genomics Outcome Scale (GOS)). The correlation between GCOS-24 and GOS scores is high (r = 0.838 at 99% confidence), suggesting that the GOS maintains the ability of GCOS-24 to capture empowerment, whilst providing a less burdensome scale for respondents. This study represents the first step in developing a preference-based measure which could be used in the evaluation of technologies and services used in genomic medicine. abstract_id: PUBMED:38452709 Cancer genetic counselling for hereditary breast cancer in the era of precision oncology. A relevant percentage of breast cancers (BCs) are tied to pathogenic (P)/likely pathogenic (LP) variants in predisposing genes. Knowledge of P/LP variants is an essential element in the management of BC patients from the first diagnosis onwards because it influences surgery and subsequent oncological treatments and follow-up. Moreover, patients with metastatic BCs can benefit from personalized treatment if they carry P/LP variants in the BRCA1/2 genes. Multigene panels allow the identification of other predisposing genes with an impact on management. Cascade genetic testing for healthy family members allows personalized preventive strategies. Here, we review the advances and the challenges of Cancer Genetic Counseling (CGC). We focus on hereditary BC management in oncology, describing the particular way CGC is conducted and how it changes over time. The authors describe the impact of genetic testing by a targeted or universal approach on the management of BC according to the stage at diagnosis. Moreover, they describe the burden of CGC and testing, and future perspectives for offering testing more widely. A new perspective is needed for models of service delivery of CGC and testing, beyond formal genetic counselling. A broader genetic test could quickly become usable in clinical practice for comprehensive BC management and personalized prevention in the era of precision oncology. Answer: The genetic knowledge and counselling skills of Dutch cardiologists appear to be insufficient for the genomics era. A survey conducted among Dutch cardiologists revealed that their self-reported genetic knowledge levels are low, with average scores ranging from 3.3 to 5.1 on a 0-10 scale. Additionally, only 38% of cardiologists refer patients to clinical geneticists, and 41% do not provide genetic information to all their patients with hypertrophic cardiomyopathy. The cardiologists rarely initiate DNA tests for this condition, indicating a gap in the practical application of genetic knowledge in clinical practice. Those cardiologists who have an established working relationship with a clinical geneticist report significantly higher levels of knowledge, suggesting that collaboration between cardiologists and clinical geneticists could be a key factor in improving genetic care for patients.
The study concludes that there is an urgent need for improvements in education, professional guidelines, and structural measures such as working relationships and multidisciplinary outpatient clinics to optimize cardiogenetic patient care (PUBMED:12643889).
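One abstract in the record above (PUBMED:30496830) justifies the six-item Genomics Outcome Scale partly by its high correlation with the full GCOS-24 total (r = 0.838). The sketch below shows how such a short-form check can be run on simulated Likert responses; the data, scoring, and the choice of which six items form the short form are assumptions for illustration, not the authors' dataset or analysis.

```python
# Hedged illustration - simulated Likert data, not the GCOS-24 validation dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_respondents, n_items = 395, 24            # sizes mirror the abstract; the data are fake
latent = rng.normal(0, 1, n_respondents)    # underlying "empowerment" per respondent
noise = rng.normal(0, 1, (n_respondents, n_items))
items = np.clip(np.round(4 + latent[:, None] + noise), 1, 7).astype(int)  # 7-point items

full_total = items.sum(axis=1)              # GCOS-24-style total score
short_total = items[:, :6].sum(axis=1)      # hypothetical 6-item short form
r, p = stats.pearsonr(full_total, short_total)
print(f"full vs short-form correlation: r = {r:.3f} (p = {p:.1e})")
```

In practice the short-form columns would be the items retained through the item-information and Rasch steps described in the abstract, not simply the first six.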
Instruction: Can smartphones and tablets improve the management of childhood illness in Tanzania? Abstracts: abstract_id: PUBMED:25890078 Can smartphones and tablets improve the management of childhood illness in Tanzania? A qualitative study from a primary health care worker's perspective. Background: The impact of the Integrated Management of Childhood Illness (IMCI) strategy has been less than anticipated because of poor uptake. Electronic algorithms have the potential to improve the quality of health care in children. However, feasibility studies about the use of electronic protocols on mobile devices over time are limited. This study investigated constraining as well as facilitating factors that influence the uptake of a new electronic Algorithm for Management of Childhood Illness (ALMANACH) among primary health workers in Dar es Salaam, Tanzania. Methods: A qualitative approach was applied using in-depth interviews and focus group discussions with a total of 40 primary health care workers from 6 public primary health facilities in the three municipalities of Dar es Salaam, Tanzania. Health workers' perceptions related to factors facilitating or constraining the uptake of the electronic ALMANACH were identified. Results: In general, the ALMANACH was assessed positively. The majority of the respondents felt comfortable using the devices and stated that patients' trust was not affected. Most health workers said that the ALMANACH simplified their work, reduced antibiotic prescription and gave correct classification and treatment for common causes of childhood illnesses. A few HWs reported technical challenges using the devices and complained of difficulties in typing. The majority of the respondents stated that the devices increased the consultation duration compared to routine practice. In addition, health system barriers such as lack of staff, lack of medicine and lack of financial motivation were identified as key reasons for the low uptake of the devices. Conclusions: The ALMANACH built on electronic devices was perceived to be a powerful and useful tool. However, health system challenges influenced the uptake of the devices in the selected health facilities. abstract_id: PUBMED:28077751 Performance of Health Workers Using an Electronic Algorithm for the Management of Childhood Illness in Tanzania: A Pilot Implementation Study. In low-resource settings, where qualified health workers (HWs) are scarce and childhood mortality high, rational antimicrobial prescription for childhood illnesses is a challenge. To assess whether smartphones running guidelines, as compared with paper support, improve the consultation process and rational use of medicines for children, a pilot cluster-randomized controlled study was conducted in Tanzania. Nine primary health-care facilities (HFs) were randomized into three arms: 1) paper algorithm, 2) electronic algorithm on a smartphone, and 3) control. All HWs attending children aged 2-59 months for acute illness in intervention HFs were trained on a new clinical algorithm for management of childhood illness (ALMANACH) on either 1) paper or 2) electronic support; 4 months after training, consultations were observed. An expert consultation was the reference for classification and treatment. Main outcomes were the proportion of children checked for danger signs and the antibiotic prescription rate. A total of 504 consultations (166, 171, and 167 in control, paper, and phone arms, respectively) were observed.
The use of smartphones versus paper was associated with a significant increase in children checked for danger signs (41% versus 74%, P = 0.04). The antibiotic prescription rate dropped from 70% in the control arm to 26% and 25% in the paper and electronic arms, respectively. The HWs-expert agreement on pneumonia classification remained low (the expert's pneumonia diagnosis was identified by HWs in 26%, 30%, and 39% of patients, respectively). Mobile technology in low-income countries is implementable and has the potential to improve HWs' performance. Additional point-of-care diagnostic tests are needed to ensure appropriate management. Improving the rational use of antimicrobials is a challenge that ALMANACH can help to take up. abstract_id: PUBMED:32784181 Use of Tablets and Smartphones to Support Medical Decision Making in US Adults: Cross-Sectional Study. Background: Tablet and smartphone ownership have increased among US adults over the past decade. However, the degree to which people use mobile devices to help them make medical decisions remains unclear. Objective: The objective of this study is to explore factors associated with self-reported use of tablets or smartphones to support medical decision making in a nationally representative sample of US adults. Methods: Cross-sectional data from participants in the 2018 Health Information National Trends Survey (HINTS 5, Cycle 2) were evaluated. There were 3504 responses in the full HINTS 5 Cycle 2 data set; 2321 remained after eliminating respondents who did not have complete data for all the variables of interest. The primary outcome was use of a tablet or smartphone to help make a decision about how to treat an illness or condition. Sociodemographic factors including gender, race/ethnicity, and education were evaluated. Additionally, mobile health (mHealth)- and electronic health (eHealth)-related factors were evaluated, including (1) the presence of health and wellness apps on a tablet or smartphone, (2) use of electronic devices other than tablets and smartphones to monitor health (eg, Fitbit, blood glucose monitor, and blood pressure monitor), and (3) whether people shared health information from an electronic monitoring device or smartphone with a health professional within the last 12 months. Descriptive and inferential statistics were conducted using SAS version 9.4. Weighted population estimates and standard errors, univariate odds ratios, and 95% CIs were calculated, comparing respondents who used tablets or smartphones to help make medical decisions (n=944) with those who did not (n=1377), separately for each factor. Factors of interest with a P value of <.10 were included in a subsequent multivariable logistic regression model. Results: Compared with women, men had lower odds of reporting that a tablet or smartphone helped them make a medical decision. Respondents aged 75 and older also had lower odds of using a tablet or smartphone compared with younger respondents aged 18-34. By contrast, those who had health and wellness apps on tablets or smartphones, used other electronic devices to monitor health, and shared information from devices or smartphones with health care professionals had higher odds of reporting that tablets or smartphones helped them make a medical decision, compared with those who did not. Conclusions: A limitation of this research is that information was not available regarding the specific health condition for which a tablet or smartphone helped people make a decision or the type of decision made (eg, surgery, medication changes).
In US adults, mHealth and eHealth use, as well as certain sociodemographic factors, are associated with using tablets or smartphones to support medical decision making. Findings from this study may inform future mHealth and other digital health interventions designed to support medical decision making. abstract_id: PUBMED:23981292 Using electronic technology to improve clinical care - results from a before-after cluster trial to evaluate assessment and classification of sick children according to Integrated Management of Childhood Illness (IMCI) protocol in Tanzania. Background: Poor adherence to the Integrated Management of Childhood Illness (IMCI) protocol reduces the potential impact on under-five morbidity and mortality. Electronic technology could improve adherence; however, there are few studies demonstrating the benefits of such technology in resource-poor settings. This study estimates the impact of electronic technology on adherence to the IMCI protocols as compared to the current paper-based protocols in Tanzania. Methods: In four districts in Tanzania, 18 clinics were randomly selected for inclusion. At each site, observers documented critical parts of the clinical assessment of children aged 2 months to 5 years. The first set of observations occurred during examination of children using paper-based IMCI (pIMCI) and the next set of observations occurred during examination using the electronic IMCI (eIMCI). Children were re-examined by an IMCI expert and the diagnoses were compared. A total of 1221 children (671 paper, 550 electronic) were observed. Results: For all ten critical IMCI items included in both systems, adherence to the protocol was greater for eIMCI than for pIMCI. The proportion assessed under pIMCI ranged from 61% to 98% compared to 92% to 100% under eIMCI (p < 0.05 for each of the ten assessment items). Conclusions: Use of electronic systems improved the completeness of assessment of children with acute illness in Tanzania. With the before-after nature of the design, potential for temporal confounding is the primary limitation. However, the data collection for both phases occurred over a short period (one month), so temporal confounding was expected to be minimal. The results suggest that the use of electronic IMCI protocols can improve the completeness and consistency of clinical assessments, and future studies will examine the long-term health and health systems impact of eIMCI. abstract_id: PUBMED:24666561 Factors influencing the implementation of integrated management of childhood illness (IMCI) by healthcare workers at public health centers & dispensaries in Mwanza, Tanzania. Background: Integrated Management of Childhood Illness (IMCI) was developed by the World Health Organization (WHO) and the United Nations International Children's Fund (UNICEF) and aims at reducing childhood morbidity and mortality in resource-limited settings, including Tanzania. It was introduced in 1996 and has been scaled up in all districts in the country. The purpose of this study was to identify factors influencing the implementation of IMCI in the health facilities in Mwanza, Tanzania, since reports indicate that the guidelines are not fully adhered to by the healthcare workers. Methods: A cross-sectional study design was used, and a sample of 95 healthcare workers drawn from health centers and dispensaries within Mwanza city was interviewed using self-administered questionnaires.
A structured interview was also used to get views from the city IMCI focal person and the 2 facilitators. Data were analyzed using SPSS and presented using figures and tables. Results: Only 51% of the healthcare workers interviewed had been trained. 69% of trained healthcare workers expressed understanding of the IMCI approach. Most of the respondents (77%) had a positive attitude, regarding IMCI as a better approach to managing common childhood illnesses, especially given the reality of resource constraints in the health facilities. The main challenges identified in the implementation of IMCI are low initial training coverage among health care workers, lack of essential drugs and supplies, lack of onsite mentoring, and lack of refresher courses and regular supportive supervision. Supporting the healthcare workers through training, onsite mentoring and supportive supervision, and strengthening the healthcare system through increasing access to essential medicines and vaccines, strengthening supply chain management, increasing healthcare financing, and improving leadership & management were the major interventions that could assist in IMCI implementation. Conclusions: The healthcare workers can implement IMCI better through the collaboration of supervisors, the IMCI focal person, Council Health Management Teams (CHMT) and other stakeholders interested in child health. However, significant barriers impede a sustainable IMCI implementation. Recommendations have been made relating to supportive supervision and healthcare system strengthening, among others. abstract_id: PUBMED:16306072 Improving quality and efficiency of facility-based child health care through Integrated Management of Childhood Illness in Tanzania. Objectives: To assess the effect of Integrated Management of Childhood Illness (IMCI) relative to routine care on the quality and efficiency of providing care for sick children in first-level health facilities in Tanzania, and to disseminate the results for use in health sector decision-making. Design: Non-randomized controlled trial to compare child health care quality and economic costs in two intervention (>90% of health care workers trained in IMCI) and two comparison districts in rural Tanzania. Participants: For quality measures, all sick children presenting for care at random samples of first-level health facilities; for costs, all national, district, facility and household costs associated with child health care, taking a societal perspective. Results: IMCI training is associated with significantly better child health care in facilities at no additional cost to districts. The cost per child visit managed correctly was lower in IMCI than in routine care settings: $4.02 versus $25.70, respectively, in 1999 US dollars and after standardization for variations in population size. Conclusion: IMCI improved the quality and efficiency of child health care relative to routine child health care in the study districts. Previous study results indicated that the introduction of IMCI in these Tanzanian districts was associated with mortality levels that were 13% lower than in comparison districts. We can therefore conclude that IMCI is also more cost-effective than routine care for improving child health outcomes. The dissemination strategy for these results led to adoption of IMCI for nationwide implementation within 12 months of study completion. abstract_id: PUBMED:14679280 The effect of Integrated Management of Childhood Illness on observed quality of care of under-fives in rural Tanzania.
Integrated Management of Childhood Illness (IMCI) has been adopted by over 80 countries as a strategy for reducing child mortality and improving child health and development. It includes complementary interventions designed to address the major causes of child mortality at community, health facility, and health system levels. The Multi-Country Evaluation of IMCI Effectiveness, Cost and Impact (IMCI-MCE) is a global evaluation to determine the impact of IMCI on health outcomes and its cost-effectiveness. The MCE is coordinated by the Department of Child and Adolescent Health and Development of the World Health Organization. MCE studies are under way in Bangladesh, Brazil, Peru, Tanzania and Uganda. In Tanzania, the IMCI-MCE study uses a non-randomized observational design comparing four neighbouring districts, two of which have been implementing IMCI in conjunction with evidence-based planning and expenditure mapping at district level since 1997, and two of which began IMCI implementation in 2002. In these four districts, child health and child survival are documented at household level through cross-sectional, before-and-after surveys and through longitudinal demographic surveillance respectively. Here we present results of a survey conducted in August 2000 in stratified random samples of government health facilities to compare the quality of case-management and health systems support in IMCI and comparison districts. The results indicate that children in IMCI districts received better care than children in comparison districts: their health problems were more thoroughly assessed, they were more likely to be diagnosed and treated correctly as determined through a gold-standard re-examination, and the caretakers of the children were more likely to receive appropriate counselling and reported higher levels of knowledge about how to care for their sick children. There were few differences between IMCI and comparison districts in the level of health system support for child health services at facility level. This study suggests that IMCI, in the presence of a decentralized health system with practical health system planning tools, is feasible for implementation in resource-poor countries and can lead to rapid gains in the quality of case-management. IMCI is therefore likely to lead to rapid gains in child survival, health and development if adequate coverage levels can be achieved and maintained. abstract_id: PUBMED:15519628 Effectiveness and cost of facility-based Integrated Management of Childhood Illness (IMCI) in Tanzania. Background: The Integrated Management of Childhood Illness (IMCI) strategy is designed to address major causes of child mortality at the levels of community, health facility, and health system. We assessed the effectiveness of facility-based IMCI in rural Tanzania. Methods: We compared two districts with facility-based IMCI and two neighbouring comparison districts without IMCI, from 1997 to 2002, in a non-randomised study. We assessed quality of case-management for children's illness, drug and vaccine availability, and supervision involving case-management, through a health-facility survey in 2000. Household surveys were used to assess child-health indicators in 1999 and 2002. Survival of children was tracked through demographic surveillance over a predefined 2-year period from mid 2000. Further information on contextual factors was gathered through interviews and record review. 
The economic cost of health care for children in IMCI and comparison districts was estimated through interviews and record review at national, district, facility, and household levels. Findings: During the IMCI phase-in period, mortality rates in children under 5 years old were almost identical in IMCI and comparison districts. Over the next 2 years, the mortality rate was 13% lower in IMCI than in comparison districts (95% CI -7 to 30 or 5 to 21, depending on how adjustment is made for district-level clustering), with a rate difference of 3.8 fewer deaths per 1000 child-years. Contextual factors, such as use of mosquito nets, all favoured the comparison districts. Costs of children's health care with IMCI were similar to or lower than those for case-management without IMCI. Interpretation: Our findings indicate that facility-based IMCI is good value for money, and support widespread implementation in the context of health-sector reform, basket funding, good facility access, and high utilisation of health facilities. abstract_id: PUBMED:15976878 Does the Integrated Management of Childhood Illness cost more than routine care? Results from the United Republic of Tanzania. Objective: The Integrated Management of Childhood Illness (IMCI) strategy is designed to address the five leading causes of childhood mortality, which together account for 70% of the 10 million deaths occurring among children worldwide annually. Although IMCI is associated with improved quality of care, which is a key determinant of better health outcomes, it has not yet been widely adopted, partly because it is assumed to be more expensive than routine care. Here we report the cost of IMCI compared with routine care in four districts in the United Republic of Tanzania. Methods: Total district costs of child care were estimated from the societal perspective as the sum of child health-care costs incurred in a district at the household level, primary health-facility level and hospital level. We also included administrative and support costs incurred by national and district administrations. The incremental cost of IMCI is the difference in costs of child health-care between districts with and without IMCI, after standardization for population size. Findings: The annual cost per child of caring for children less than five years old in districts with IMCI was USD 11.19, 44% lower than the cost in the districts without IMCI (USD 16.09). Much of the difference was due to higher rates of hospitalization of children less than 5 years old in the districts without IMCI. Not all of this difference can be attributed to IMCI but even when differences in hospitalization rates are excluded, the cost per child was still 6% lower in IMCI districts. Conclusion: IMCI was not associated with higher costs than routine child health-care in the four study districts in the United Republic of Tanzania. Given the evidence of improved quality of care in the IMCI districts, the results suggest that cost should not be a barrier to the adoption and scaling up of IMCI. abstract_id: PUBMED:16306073 Impact of Integrated Management of Childhood Illness on inequalities in child health in rural Tanzania. We examined the impact of the Integrated Management of Childhood Illness (IMCI) strategy on the equality of health outcomes and access across socioeconomic gradients in rural Tanzania, by comparing changes in inequities between 1999 and 2002 in two districts with IMCI (Morogoro Rural and Rufiji) and two without (Kilombero and Ulanga). 
Equity differentials for six child health indicators (underweight, stunting, measles immunization, access to treated and untreated nets, treatment of fever with antimalarials) improved significantly in IMCI districts compared with comparison districts (p<0.05), while four indicators (wasting, DPT coverage, caretakers' knowledge of danger signs and appropriate careseeking) improved significantly in comparison districts compared with IMCI districts (p<0.05). The largest improvements were observed for stunting among children between 24 and 59 months of age. The concentration index improved from -0.102 in 1999 to -0.032 in 2002 for IMCI, while it remained almost unchanged (-0.122 to -0.133) in comparison districts. IMCI was associated with improved equity for measles vaccine coverage, whereas the opposite was observed for DPT antigens. This study has shown how equity assessments can be incorporated into impact evaluation at relatively little additional cost, and how this may point to specific interventions that need to be reinforced. The introduction of IMCI led to improvements in child health that did not occur at the expense of equity. Answer: Yes, smartphones and tablets can improve the management of childhood illness in Tanzania. Studies have shown that the use of electronic algorithms on mobile devices, such as the electronic Algorithm for Management of Childhood Illness (ALMANACH), has been positively assessed by health workers. They reported that ALMANACH simplified their work, reduced antibiotic prescription, and provided correct classification and treatment for common causes of childhood illnesses (PUBMED:25890078). Additionally, a pilot cluster-randomized controlled study found that the use of smartphones running guidelines improved the consultation process and rational use of medicines for children, with a significant increase in children checked for danger signs and a drop in antibiotic prescription rates (PUBMED:28077751). Furthermore, the use of electronic technology, such as electronic IMCI (eIMCI), has been shown to improve adherence to IMCI protocols compared to paper-based protocols, with higher adherence rates for all ten critical IMCI items included in both systems (PUBMED:23981292). The implementation of IMCI by healthcare workers has also been associated with improved quality and efficiency of facility-based child health care at no additional cost to districts (PUBMED:16306072; PUBMED:15519628). Moreover, the cost of IMCI was found to be lower than that of routine care, suggesting that cost should not be a barrier to the adoption and scaling up of IMCI (PUBMED:15976878). In the context of the United States, factors associated with the use of tablets or smartphones to support medical decision-making included having health and wellness apps on the devices, using other electronic devices to monitor health, and sharing information from devices with health care professionals (PUBMED:32784181). While this study was conducted in the US, it indicates a broader trend of mobile technology being used to support medical decision-making, which could be applicable to settings like Tanzania. Overall, the evidence suggests that mobile technology, including smartphones and tablets, can be an effective tool to improve the management of childhood illness in Tanzania by enhancing the quality of care, adherence to treatment guidelines, and rational use of medicines, without incurring higher costs.
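The ALMANACH findings cited in the answer above (PUBMED:28077751) rest on comparisons of proportions between study arms, for example danger-sign checks in 41% versus 74% of consultations. The sketch below shows a naive version of that comparison; the counts are back-calculated from the reported percentages and arm sizes, and the assignment of 41% to the paper arm and 74% to the phone arm is an assumption, so treat it as an illustration rather than the published analysis.

```python
# Hedged illustration - counts approximated from percentages in the abstract, not raw data.
from statsmodels.stats.proportion import proportions_ztest

n_paper, n_phone = 171, 167                               # consultations per arm (from the abstract)
checked = [round(0.41 * n_paper), round(0.74 * n_phone)]  # children checked for danger signs
z_stat, p_value = proportions_ztest(checked, [n_paper, n_phone])
print(f"paper vs phone arm: z = {z_stat:.2f}, p = {p_value:.4f}")
```

This naive test ignores the clustering of consultations within health facilities; the published P value (0.04) is larger than what the sketch produces, presumably because the trial's analysis accounted for its cluster-randomized design.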
Instruction: Reasons for cannabis use in first-episode psychosis: does strength of endorsement change over 12 months? Abstracts: abstract_id: PUBMED:25541346 Reasons for cannabis use in first-episode psychosis: does strength of endorsement change over 12 months? Background: Why patients with psychosis use cannabis remains debated. The self-medication hypothesis has received some support but other evidence points towards an alleviation of dysphoria model. This study investigated the reasons for cannabis use in first-episode psychosis (FEP) and whether strength in their endorsement changed over time. Methods: FEP inpatients and outpatients at the South London and Maudsley, Oxleas and Sussex NHS Trusts UK, who used cannabis, rated their motives at baseline (n=69), 3 months (n=29) and 12 months (n=36). A random intercept model was used to test the change in strength of endorsement over the 12 months. Paired-sample t-tests assessed the differences in mean scores between the five subscales on the Reasons for Use Scale (enhancement, social motive, coping with unpleasant affect, conformity and acceptance and relief of positive symptoms and side effects), at each time-point. Results: Time had a significant effect on scores when controlling for reason; average scores on each subscale were higher at baseline than at 3 months and 12 months. At each time-point, patients endorsed 'enhancement' followed by 'coping with unpleasant affect' and 'social motive' more highly for their cannabis use than any other reason. 'Conformity and acceptance' followed closely. 'Relief of positive symptoms and side effects' was the least endorsed motive. Conclusions: Patients endorsed their reasons for use at 3 months and 12 months less strongly than at baseline. Little support for the self-medication or alleviation of dysphoria models was found. Rather, patients rated 'enhancement' most highly for their cannabis use. abstract_id: PUBMED:26235479 Relationship between cannabis and psychosis: Reasons for use and associated clinical variables. The mechanism underneath the relationship between cannabis and psychosis remains controversial, for which several hypotheses have been proposed, including cannabis as self-medication and cannabis as a risk for the development of psychosis. The aim of this work was to study the relationship between cannabis and psychosis in first-episode psychosis cannabis users and non-users, and non-psychotic cannabis users. The age at the first psychotic episode, duration of untreated psychosis, psychopathology and reasons for cannabis use were assessed. First-episode psychosis cannabis users showed an earlier age at psychosis onset than non-user patients. No significant differences in symptomatology were found. The distinguishing reasons to use cannabis for patients with first-episode psychosis with respect to non-psychotic users were to arrange their thoughts and deal with hallucinations and suspiciousness. These findings are in agreement with both hypotheses: self-medication and secondary psychosis hypothesis. However, longitudinal prospective cohort studies assessing reasons for cannabis use are needed to investigate both hypotheses and their complementarity. abstract_id: PUBMED:38091900 Endorsement of psychotic-like experiences and problematic cannabis use associated with worse executive functioning performance in undergraduates. 
Background: Emerging adults who endorse more positive psychotic-like experiences (PLEs; bizarre experiences, delusional ideations) may experience greater cannabis-related impairments in executive function. Negative and depressive PLEs are also associated with cannabis use, however, less is known about their relation to executive functioning. Here, we hypothesize that high positive PLEs and cannabis use are associated with worse performance on computerized versions of the Iowa Gambling Task (IGT) and the Card Sorting Task (CST); exploratory analyses are conducted with negative and depressive PLEs. Methods: We recruited university students (N = 543) who completed an online study consisting of self-report measures of problematic cannabis use (Cannabis Use Disorder Identification Test; CUDIT-R) and PLEs (Community Assessment of Psychotic Experiences; CAPE). Of these, n=270 completed the CST and n=251 completed the IGT. Results: Problematic cannabis use and high endorsement of positive PLEs related to significantly worse performance on the IGT and greater perseverative errors on the CST. In addition, people who endorsed high levels of positive PLEs were also significantly more likely to complete the IGT with less money relative to those who endorsed fewer PLEs, regardless of cannabis use. Further analyses based on negative PLEs revealed a similar pattern for perseverative errors on the CST; depressive PLEs were not related to task performance. Conclusion: Findings highlight that problematic cannabis use and more frequent and distressing positive PLEs are associated with poorer executive functioning. Thus, executive functioning may have implications for intervention among those high on both attributes, who are at high risk of onset of psychosis. abstract_id: PUBMED:21172414 Why do patients with psychosis use cannabis and are they ready to change their use? Numerous studies have shown that patients with psychosis are more likely to use illicit drugs than the general population, with cannabis being the most popular. There exists overwhelming evidence that cannabis use can contribute to the onset of schizophrenia and poor outcome in patients with established psychosis. Therefore, understanding why patients use cannabis and whether they are motivated to change their habits is important. The evidence is that patients with psychosis use cannabis for the same reasons the general population does, to 'get high', relax and have fun. There is little support for the 'self-medication' hypothesis, while the literature points more towards an 'alleviation of dysphoria' model. There is a lack of research reporting on whether psychotic patients are ready to change their use of cannabis, which has obvious implications for identifying which treatment strategies are likely to be effective. abstract_id: PUBMED:17073176 Cannabis and psychosis Cannabis is abused by a progressively larger and younger proportion of our population. For the clinician, this can raise the question of what the relationship between cannabis and psychosis is. For the patient who is already psychotic, this relationship is most certainly adverse; cannabis worsens the symptoms and prognosis of a psychosis. What may be of even greater concern is the growing evidence that cannabis may cause psychosis in healthy individuals. Many studies now show a robust and consistent association between cannabis consumption and the ulterior development of psychosis. 
Furthermore, our better understanding of cannabis biology allows the proposal of a plausible hypothetical model, based notably on possible interactions between cannabis and dopaminergic neurotransmission. abstract_id: PUBMED:24274357 Reasons for cannabis use among youths at ultra high risk for psychosis. Aim: Cannabis use is prevalent in schizophrenia and its risk states, despite its association with anxiety and positive symptoms. While schizophrenia patients report using cannabis for mood enhancement and social motives, it is not known what motivates clinical high risk (CHR) patients to use cannabis. Methods: Among 102 CHR patients, 24 (23%) endorsed cannabis use, and were queried as to reasons for use, using a scale previously administered in schizophrenia patients. We hypothesized a primary motivation for mood enhancement related to anhedonia. We evaluated the 'self-medication' hypothesis by examining if motivation for symptom relief was associated with concurrent severity of symptoms. Results: The rank order of reasons for use in CHR patients was similar to that previously reported by schizophrenia patients, with mood enhancement and social motives as primary reasons for use, and the motivation to use cannabis for symptom relief comparatively less common. Motivation for mood enhancement had a trend association with anhedonia. Motivation for symptom relief was entirely unrelated to concurrent severity of positive and anxiety symptoms. Conclusion: As in schizophrenia, CHR patients primarily use cannabis for mood enhancement, especially in the context of decreased motivation to seek pleasure otherwise. Negative symptoms may drive cannabis use in schizophrenia and its risk states, which may exacerbate positive symptoms. By contrast, CHR patients do not report using cannabis to 'self-medicate' emergent positive symptoms. The understanding of motives for cannabis use among CHR patients may be informative for treatments aimed at reducing use, such as motivational interviewing. abstract_id: PUBMED:35020045 Prevalence and self-reported reasons of cannabis use for medical purposes in USA and Canada. Rationale: There has been increasing attention on cannabis use for medical purposes, but there is currently a lack of data on its epidemiology. Objectives: To examine the epidemiology of self-reported cannabis use for medical purposes by (1) estimating its prevalence, (2) comparing gender and age differences, and (3) investigating what reasons they were used to manage. Methods: Participants included 27,169 respondents (aged 16-65) who completed Wave 1 of The International Cannabis Policy Study (ICPS) conducted across Canada and the USA in 2018 via online surveys. Cannabis policy conditions were "US legal-recreational" (legal for both recreational and medical uses), "US legal-medical only", "US illegal", and "Canada-medical only". Results: The overall prevalence of self-reported ever cannabis use for medical purposes was 27%, with similar rates by sex and the highest prevalence in young adults. Prevalence was higher in US legal-recreational states (34%) than US illegal states (23%), US legal-medical only states (25%), and Canada (25%). The most common physical health reasons include use to manage pain (53%), sleep (46%), headaches/migraines (35%), appetite (22%), and nausea/vomiting (21%). For mental health reasons, the most common were for anxiety (52%), depression (40%), and PTSD/trauma (17%). There were 11% who reported using cannabis for managing other drug or alcohol use and 4% for psychosis. 
Conclusions: A substantial proportion of the North American population self-reported cannabis use for medical purposes for a variety of medical reasons, including respondents living in jurisdictions without legal markets. Further research is needed to understand the safety and efficacy of these forms of medical cannabis use. abstract_id: PUBMED:37391686 Scoping Review of Cannabis-Reduction Psychosocial Interventions and Reasons for Use among Young Adults with Psychosis. Objective: No evidence-based intervention effectively reduces cannabis use in young adults with psychosis (YAP). To generate hypotheses about why, a scoping review was conducted to synthesize evidence about motivations for cannabis use and reduction/cessation for YAP and the psychosocial interventions trialed, to identify possible gaps between motivations and interventive strategies. Methods: A systematic literature search was conducted in December 2022. Reviews of titles and abstracts (N = 3,216) and full-texts (n = 136) resulted in 46 articles. Results: YAP use cannabis for pleasure, to reduce dysphoria, and for social and recreational reasons; motivations for cessation include insight about cannabis-psychosis interactions, incompatibility with goals and social roles, and support from social networks. Interventions with at least minimal evidence of efficacy include motivational interviewing, cognitive-behavioral strategies, and family skills training. Conclusions: The authors recommend additional research on mechanisms of change and on motivational enhancement therapy, behavioral activation, and family-based skills interventions matched to YAP motivations for use/cessation. abstract_id: PUBMED:27982748 Self-Reported Reasons for Smoking: Predicting Abstinence and Implications for Smoking Cessation Treatments Among Those With a Psychotic Disorder. Objectives: People living with a psychotic illness have higher rates of cigarette smoking and face unique barriers to quitting compared to the general population. We examined whether self-reported reasons for smoking are useful predictors of successful quit attempts among people with psychosis. Methods: As part of a randomized controlled trial addressing smoking and cardiovascular disease risk behaviors among people with psychosis, self-reported reasons for smoking were assessed at baseline (n = 235), 15 weeks (n = 151), and 12 months (n = 139). Three factors from the Reasons for Smoking Questionnaire (Coping, Physiological, and Stimulation/Activation) were entered into a model to predict short- and long-term abstinence. The relationship between these factors and mental health symptoms was also assessed. Results: Participants scoring higher on the Stimulation/Activation factor (control of weight, enjoyment, concentration, and "peps me up") at baseline were just less than half as likely to be abstinent at 15 weeks. Female participants were five times more likely to be abstinent at 15 weeks, and those with higher global functioning at baseline were 5% more likely to be abstinent. There was a positive correlation between changes over time in the Stimulation/Activation factor from baseline to 12-month follow-up and the Brief Psychiatric Rating Scale total score at 12-month follow-up. This indicates that increasingly higher endorsement of the factor was associated with more psychological symptoms.
There was also a negative correlation between the change over time in the Stimulation/Activation factor and global functioning at 12 months, indicating that increasingly higher endorsement of the factor was associated with a lower global assessment of functioning. Conclusions: The Stimulation/Activation factor may be particularly important to assess and address among smokers with psychosis. It is recommended that further research use the Reasons for Smoking Questionnaire among smokers with psychosis as a clinical tool to identify specific quit barriers. Further investigation of why females have higher smoking cessation rates in the short term, and of relapse prevention interventions, also appears warranted. abstract_id: PUBMED:32877036 Cannabis use and somatic consequences. Cannabis is the most frequently used illicit psychoactive substance in the world. It is perceived as a low-risk drug because it is a plant, although many warnings in the medical literature have underlined the increased complications of cannabis use. Acute and chronic cannabis use is known to be harmful, inducing psychiatric and addictive effects. An increase in the potency of cannabis, as defined by a high ratio between its two most important components, tetrahydrocannabinol and cannabidiol, has been observed for years and leads to more serious complications. Evidence indicates that both acute and chronic consumption of cannabis can be detrimental to both mental and physical health. Effects of cannabis use include mood disorders, exacerbation of psychotic disorders in vulnerable people, cannabis use disorders, withdrawal syndrome, neurocognitive impairments, and cardiovascular, respiratory, and other diseases. Synthetic cannabinoids have spread rapidly over the last few years; they are chemical substances inducing psychoactive effects similar to those of cannabis. Although cannabis users call these substances "cannabis", synthetic cannabinoids have different pharmacological properties, which make them dangerous substances leading to more serious complications. Physicians dealing with cannabis users should be aware of these differences between vegetal cannabis and synthetic cannabinoids. Answer: The study investigating the reasons for cannabis use in first-episode psychosis (FEP) patients found that the strength of endorsement of the reasons for cannabis use did change over 12 months. At baseline, patients rated their motives for using cannabis, and these motives were reassessed at 3 months and 12 months. The results showed that the average scores on each subscale of the Reasons for Use Scale were higher at baseline than at 3 months and 12 months, indicating that patients endorsed their reasons for use less strongly over time (PUBMED:25541346). The reasons for cannabis use among FEP patients were primarily 'enhancement,' followed by 'coping with unpleasant affect' and 'social motive.' 'Conformity and acceptance' were also endorsed, while 'relief of positive symptoms and side effects' was the least endorsed motive. This pattern was consistent at each time-point, suggesting that the self-medication or alleviation of dysphoria models received little support. Instead, the primary reason for cannabis use was for enhancement purposes (PUBMED:25541346).
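Several of the associations summarized above (for example, between change in the Stimulation/Activation factor and symptom or functioning scores) are reported as Pearson correlation coefficients. As a point of reference, the sketch below shows how such a coefficient is computed; the data values are invented for illustration only and are not taken from any of the cited studies.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical example: change in factor score vs. symptom score for 6 patients.
factor_change = [0.2, 0.5, -0.1, 0.8, 0.3, 0.6]
symptom_score = [30, 42, 28, 55, 35, 47]
print(round(pearson_r(factor_change, symptom_score), 3))  # close to +1: larger change, more symptoms
```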
Instruction: Are surname telephone oversamples an efficient way to better understand the health and healthcare of minority group members? Abstracts: abstract_id: PUBMED:18049351 Are surname telephone oversamples an efficient way to better understand the health and healthcare of minority group members? Objectives: Surname oversamples are commonly used in health research to increase the number of persons from minority racial and ethnic groups represented in general population surveys. This article considers the sample design efficiency in the use of Hispanic, Hmong, and Asian surname samples. Methods: The study uses 3 state surveys (Alabama, Missouri, and Minnesota) that used surname oversamples to increase the proportion of Hispanic, Hmong, or Asian respondents included in the studies. We examine whether surname oversamples lead to more completed surveys with the targeted minority groups than would have been achieved if surname oversamples had not been used. We also assess gains in terms of effective sample sizes from the use of surname oversampling. Results: The sensitivities of the Hispanic surname list ranged from 46% to 63% across the 3 surveys. The sensitivity of the Asian surname list was 34%, and that of the Hmong surname list was 38%. Although the use of surname lists increased the number of targeted minority group members in the final study, the increased number had a very minimal impact on the effective sample size of the minority populations for the key survey estimates of interest in the 3 health surveys. Conclusions: The use of surname samples achieved the goal of having more persons who identify as Hispanic, Hmong, or Asian in the final sample. However, the use of surname oversamples is inefficient when considering the statistical power gained for minority group estimates. abstract_id: PUBMED:38391800 Mapping the Dynamic Complexity of Sexual and Gender Minority Healthcare Disparities: A Systems Thinking Approach. Sexual and gender minority (SGM) populations experience extensive health disparities compared to their straight and cisgender counterparts. The importance of addressing these disparities is paramount, as SGM groups often encounter significant barriers to accessing comprehensive healthcare, including societal stigma, provider bias, and financial constraints. This study utilizes a community-based system dynamics approach to understand and visualize the barriers to and facilitators of healthcare engagement for SGM groups across their life course. It aims to identify core constructs, relationships, and dynamic feedback mechanisms related to the experiences of connection/disconnection with physical, mental, and dental healthcare of SGM individuals. Barriers to access, such as discriminatory practices and the limited availability of SGM-informed healthcare professionals, exacerbate these disparities, underscoring the urgency of developing targeted interventions. System dynamics, a complex systems science (CSS) methodology, was used for this research. Group model building sessions were conducted with diverse SGM groups, including youth, older adults, and trans and gender-expansive community members. Causal loop diagrams were developed according to an iterative process, and a meta-model of their collective experiences was created. The study revealed extensive, dynamic, and shifting structural barriers for SGM community members accessing healthcare. Societal and structural stigma, provider bias, and pathologization were identified as significant barriers throughout their life course.
Community-led interventions and SGM-focused holistic healthcare were identified as critical facilitators of SGM healthcare connection. The findings highlight the need for SGM-affirming and culturally responsive healthcare settings. This paper calls for a concerted effort from SGM health researchers to use CSS in developing interventions to reduce SGM health disparities. abstract_id: PUBMED:31794349 Mental health and structural harm: a qualitative study of sexual minority men's experiences of mental healthcare in Toronto, Canada. Compared to the general population, sexual minority men report poorer mental health outcomes and higher mental healthcare utilisation. However, they also report more unmet mental health needs. To better understand this phenomenon, we conducted qualitative interviews with 24 sexual minority men to explore the structural factors shaping their encounters with mental healthcare in Toronto, Canada. Interviews were analysed using grounded theory. Many participants struggled to access mental healthcare and felt more marginalised and distressed because of two interrelated sets of barriers. The first were general barriers, hurdles to mental healthcare not exclusive to sexual minorities. These included financial and logistical obstacles, the prominence of psychiatry and the biomedical model, and unsatisfactory provider encounters. The second were sexual minority barriers, obstacles explicitly rooted in heterosexism and homophobia sometimes intersecting with other forms of marginality. These included experiencing discrimination and distrust, and limited sexual minority affirming options. Discussions of general barriers outweighed those of sexual minority barriers, demonstrating the health consequences of structural harms in the absence of overt structural stigma. Healthcare inaccessibility, income insecurity and the high cost of living are fostering poor mental health among sexual minority men. Research must consider the upstream policy changes necessary to counteract these harms. abstract_id: PUBMED:36958894 Using the National Institute on Minority Health and Health Disparities framework to better understand disparities in major amputations. Recently, the United States experienced its first resurgence of major amputations in more than 20 years. Compounding this rise is a longstanding history of disparities. Patients identifying as non-Hispanic Black are twice as likely to lose a limb as those identifying as non-Hispanic White. Those identifying as Latino face a 30% increase. Rural patients are also more likely to undergo major amputations, and the rural-urban disparity is widening. We used the National Institute on Minority Health and Health Disparities framework to better understand these disparities and identify common factors contributing to them. Common factors were abundant and included increased prevalence of diabetes, possible lower rates of foot self-care, transportation barriers to medical appointments, living in disadvantaged neighborhoods, and lack of insurance. Solutions within and outside the health care realm are needed. Health care-specific interventions that embed preventative and ambulatory care services within communities may be particularly high yield. abstract_id: PUBMED:34464395 The perspectives of health professionals and patients on racism in healthcare: A qualitative systematic review. 
Objective: To understand racial bias in clinical settings from the perspectives of minority patients and healthcare providers to inspire changes in the way healthcare providers interact with their patients. Methods: Articles on racial bias were searched on Medline, CINAHL, PsycINFO, Web of Science. Full text review and quality appraisal was conducted, before data was synthesized and analytically themed using the Thomas and Harden methodology. Results: 23 articles were included, involving 1,006 participants. From minority patients' perspectives, two themes were generated: 1) alienation of minorities due to racial supremacism and lack of empathy, resulting in inadequate medical treatment; 2) labelling of minority patients who were stereotyped as belonging to a lower socio-economic class and having negative behaviors. From providers' perspectives, one theme recurred: the perpetuation of racial fault lines by providers. However, some patients and providers denied racism in the healthcare setting. Conclusion: Implicit racial bias is pervasive and manifests in patient-provider interactions, exacerbating health disparities in minorities. Beyond targeted anti-racism measures in healthcare settings, wider national measures to reduce housing, education and income inequality may mitigate racism in healthcare and improve minority patient care. abstract_id: PUBMED:36458212 Healthcare Mistreatment, State-Level Policy Protections, and Healthcare Avoidance Among Gender Minority People. Introduction: This study examined whether past experiences of mistreatment in healthcare were associated with greater healthcare avoidance due to anticipated mistreatment among gender minority (GM) people. We evaluated whether state-level healthcare policy protections moderated this relationship. Methods: Data from the 2018 Annual Questionnaire of The PRIDE Study, a national longitudinal study on sexual and gender minority people's health, were used in these analyses. Logistic regression modeling tested relationships between lifetime healthcare mistreatment due to gender identity or expression and past-year healthcare avoidance due to anticipated mistreatment among GM participants. Interactions between lifetime healthcare mistreatment and state-level healthcare policy protections and their relationship with past-year healthcare avoidance were tested. Results: Participants reporting any lifetime healthcare mistreatment had greater odds of past-year healthcare avoidance due to anticipated mistreatment among gender expansive people (n = 1290, OR = 4.71 [CI]: 3.57-6.20), transfeminine people (n = 263, OR = 10.32 [CI]: 4.72-22.59), and transmasculine people (n = 471, OR = 3.90 [CI]: 2.50-6.13). Presence of state-level healthcare policy protections did not moderate this relationship in any study groups. Conclusions: For GM people, reporting lifetime healthcare mistreatment was associated with healthcare avoidance due to anticipated mistreatment. State-level healthcare policy protections were not a moderating factor in this relationship. Efforts to evaluate the implementation and enforcement of state-level policies are needed. Continued efforts to understand instances of and to diminish healthcare mistreatment of GM people are recommended. Supplementary Information: The online version contains supplementary material available at 10.1007/s13178-022-00748-1. abstract_id: PUBMED:21565822 Recruitment and retention of minority participants in the health and retirement study. 
Purpose: Minority oversamples of African Americans and Hispanics have been a key feature of the Health and Retirement Study (HRS) design from its origins in 1992. The objective of this article was to assess the quality of the HRS with respect to the recruitment and retention of minority respondents. Design And Methods: To evaluate minority recruitment efforts, we examine baseline response rates for the early baby boom cohort that was added in the 2004 wave and the representativeness of this cohort with regard to demographic, socioeconomic, and health characteristics. To evaluate retention, we focus on minority differentials in 2008 interview, nonresponse, and mortality outcomes for the full HRS sample. We also examine minority differentials in participation in supplemental components of the HRS. Results: Minority response rates at baseline and in longitudinal follow-ups for the main HRS interview have been equal to or better than those of majority Whites. Conversely, response rates to some specific supplemental components have been lower for minority sample members. Implications: The oversample strategies that the HRS has employed have been successful at identifying and recruiting minority participants at response rates very comparable to those of Whites and others. Minority differentials in participation in supplemental components have been overcome to some extent through interviewer training and targeted follow-up strategies. The HRS experience suggests that well-trained interviewers can overcome most if not all of whatever race and ethnic differentials exist in willingness to participate in surveys, including those involving biological data collection. abstract_id: PUBMED:32028955 Improving health equity among the African ethnic minority through health system strengthening: a narrative review of the New Zealand healthcare system. Background: In New Zealand, health equity is a pressing concern, and reaching disadvantaged populations has become the goal in closing the inequity gap. Building and strengthening health systems is one way to secure better outcomes. However, the discourse to date has predominantly focussed on inequities in health outcomes for Māori. This study focuses on the African ethnic minority community in New Zealand. It undertakes a narrative review of the New Zealand health system that aims to identify literature on the attainment of health equity for the African minority by: (i) providing a critical overview of the healthcare delivery system using the World Health Organization's six inter-related building blocks of health system strengthening; (ii) developing a summary and discussion of the research results; and (iii) identifying priorities and recommendations for future research. Method: A narrative review was conducted of 27 articles published between January 2010 and June 2019, selected from CINAHL, PubMed, Scopus, and Google Scholar. Grey literature also informed the review. Excluded were studies that: (i) were not related to New Zealand; (ii) had no focus on equity for ethnic minorities in the delivery of healthcare; or (iii) had no full text available. Findings: Literature on African health outcomes was scarce across the six building blocks. However, the findings show inequities in the accessibility of health services, a health workforce that is not ethnically inclusive, leadership and governance that lack political will on migrant health, and, as a result, an under-performing health information system that influences resource allocation.
Recommendation And Conclusion: An improved and well-functioning health information system is pivotal to capturing the unmet needs of the African population. There is a need for research and political will to invest in African minority health and in a diverse workforce that understands the background of the African population, and for action to address structural and institutional racism and white privilege as root causes of inadequate access and care processes for ethnic minorities. abstract_id: PUBMED:37546324 Diversity awareness, diversity competency and access to healthcare for minority groups: perspectives of healthcare professionals in Croatia, Germany, Poland, and Slovenia. Introduction: Due to cultural, language, or legal barriers, members of social minority groups face challenges in access to healthcare. Equality of healthcare provision can be achieved through raised diversity awareness and diversity competency of healthcare professionals. The aim of this research was to explore the experiences and attitudes of healthcare professionals toward the issue of social diversity and equal access to healthcare in Croatia, Germany, Poland, and Slovenia. Methods: The data reported come from semi-structured interviews with n = 39 healthcare professionals. The interviews were analyzed using the methods of content analysis and thematic analysis. Results: Respondents in all four countries acknowledged that socioeconomic factors and membership in a minority group have an impact on access to healthcare services, but the scope of this impact varies depending on the country. Underfunding of healthcare, language barriers, inadequate cultural training or lack of interpersonal competencies, and lack of institutional support were presented as major challenges in the provision of diversity-responsive healthcare. The majority of interviewees did not perceive direct systemic exclusion of minority groups; however, they reported cases of individual discrimination through the presence of homophobia or racism. Discussion: To improve the situation, systemic interventions are needed that encompass all levels of healthcare systems - from policies to addressing existing challenges at the healthcare facility level to improving the attitudes and skills of individual healthcare providers. abstract_id: PUBMED:37635440 Interpreters as Translation Machines: Telephone Interpreting Challenges as Awareness Problems. Telehealth has been widely adopted during the COVID-19 pandemic, and this article examines challenges faced by telephone interpreters in working with healthcare providers in the context of the Australian healthcare system. Based on one-on-one interviews with 67 healthcare interpreters in Australia, it explores various elements which affect communication processes in telephone interpreting and interpreters' views on healthcare providers' abilities to collaborate with interpreters. Data analysis indicates that telephone interpreting is often affected by a lack of briefing, poor acoustics and the absence of visual cues. While these factors pose significant challenges to telephone interpreters, a provider's tendency to see interpreters as 'translation machines' was perceived as a deeper underlying problem by the interpreters. The mechanistic approaches to interpreting among healthcare providers pose barriers to interpreter-provider collaboration and exacerbate communication problems caused by the external elements in telephone-interpreted encounters.
The article calls for an urgent effort to raise awareness of interpreting among healthcare providers as a key to ensuring desirable health outcomes for patients from minority backgrounds. Answer: Surname telephone oversamples are commonly used in health research to increase the representation of minority racial and ethnic groups in general population surveys. However, the study using state surveys in Alabama, Missouri, and Minnesota found that while the use of Hispanic, Hmong, and Asian surname samples did increase the number of targeted minority group members in the final study, the impact on the effective sample size of the minority populations for key survey estimates was minimal. The sensitivities of the surname lists ranged from 34% to 63%, indicating that a significant proportion of the targeted minority populations were not captured by this method. Therefore, the use of surname oversamples is inefficient when the statistical power gained for minority group estimates is taken into account (PUBMED:18049351). In conclusion, while surname oversamples may achieve the goal of including more individuals from certain minority groups in research samples, they do not significantly enhance the efficiency or statistical power for analyzing health and healthcare outcomes within these groups.
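The efficiency argument above turns on two quantities: the sensitivity of a surname list (the share of self-identified minority respondents the list actually captures) and the effective sample size once the unequal selection probabilities introduced by oversampling are weighted out. The sketch below illustrates both calculations; the counts and weights are hypothetical and are not taken from the three state surveys.

```python
def list_sensitivity(captured_by_list, total_minority_respondents):
    """Share of minority respondents that the surname list would have identified."""
    return captured_by_list / total_minority_respondents

def kish_effective_n(weights):
    """Kish's effective sample size: (sum w)^2 / sum(w^2).
    Unequal weights from oversampling shrink the effective n below the raw n."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Hypothetical survey: 120 Hispanic respondents in total, 60 reachable via the surname list.
print(list_sensitivity(60, 120))            # 0.5 -> half the group is missed by the list

# Hypothetical weights: 60 oversampled cases down-weighted, 60 RDD cases up-weighted.
weights = [0.5] * 60 + [1.5] * 60
print(round(kish_effective_n(weights), 1))  # 96.0 effective cases from 120 interviews
```

The second print illustrates the paper's point: extra interviews obtained through an oversample buy less statistical power than the raw count suggests once the design weights are applied.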
Instruction: Is laparoscopic radical prostatectomy better than traditional retropubic radical prostatectomy? Abstracts: abstract_id: PUBMED:23172996 Radical retropubic prostatectomy: comparison of the open and robotic approaches for treatment of prostate cancer. Radical prostatectomy represents the standard of care for surgical treatment of clinically localized prostate cancer. First described in 1904, the operation became widely performed only after advances in diagnostic and surgical techniques occurred later in the century. Over time, open retropubic radical prostatectomy (RRP) became the most common operation for prostate cancer, and excellent long-term survival outcomes have been reported. More recently, minimally invasive techniques such as the robotic-assisted laparoscopic radical prostatectomy (RALRP) were introduced. Despite a lack of prospectively collected, long-term data supporting its use, RALRP has overtaken RRP as the most frequently performed prostate cancer operation in the United States. This article uses currently available data to compare oncologic, functional, and quality-of-life outcomes associated with both the open and robotic approaches to radical prostatectomy. abstract_id: PUBMED:27638375 A multi-institutional comparison of radical retropubic prostatectomy, radical perineal prostatectomy, and robot-assisted laparoscopic prostatectomy for treatment of localized prostate cancer. To evaluate the pathological stage and margin status of patients undergoing radical retropubic prostatectomy (RRP), radical perineal prostatectomy (RPP) and robot-assisted laparoscopic prostatectomy (RALP). We performed a retrospective analysis of 196 patients who underwent RRP, RPP, and RALP as part of our multi-institution program. Fifty-seven patients underwent RRP, 41 RPP, and 98 RALP. Patient age, preoperative prostate specific antigen (PSA), preoperative Gleason score, preoperative clinical stage, pathological stage, postoperative Gleason score, and margin status were reviewed. The three groups had similar preoperative characteristics, except for PSA (8.4, 6.5, and 6.2 ng/ml) for the retropubic, robotic, and perineal approaches. Margins were positive in 12, 24, and 36% of the specimens from RALP, RRP, and RPP, respectively (P = 0.004). The positive margin rates in patients with pT2 tumors were 4, 14, and 19% in the RALP, RRP, and the RPP groups, respectively (P = 0.03). Controlling for age and pre-operative PSA and Gleason score, the rate of positive margins was statistically lower in the RALP versus both the RRP (P = 0.046) and the RPP groups (P = 0.02). In the patients with pT3 tumors, positive margins were observed in 36% of patients undergoing the RALP and 53 and 90% of those patients undergoing the RRP and RPP, respectively (P = 0.015). Controlling for the same factors, the rate of positive margins was statistically lower in the RALP versus the RPP (P = 0.01) but not compared with the RRP patients (P = 0.32). The percentage of positive margins was lower in RALP than in RPP for both pT2 and pT3 tumors. RRP had a higher percentage of positive margins than RALP in the pT2 tumors but not in the pT3 tumors. abstract_id: PUBMED:32681540 Obese men undergoing radical prostatectomy: Is robotic or retropubic better to limit positive surgical margins? Results from SEARCH. Objectives: To evaluate the association between obesity and positive surgical margins in patients undergoing retropubic radical prostatectomy versus robotic-assisted laparoscopic prostatectomy. 
Methods: We retrospectively reviewed the data of 3141 men undergoing retropubic radical prostatectomy and 1625 undergoing robotic-assisted laparoscopic prostatectomy between 1988 and 2017 at eight Veterans Health Administration hospitals. The positive surgical margin location (peripheral, apical, bladder neck, overall) was determined from pathology reports. We adjusted for age, race, prostate-specific antigen, surgery year, prostate weight, pathological grade group, extracapsular extension, seminal vesicle invasion, hospital surgical volume and surgical method (in analyses not stratified by surgical method). Interactions between body mass index and surgical approach were tested. Results: Among all patients, higher body mass index was associated with increased odds of overall, peripheral and apical positive surgical margins (OR 1.02-1.03, P ≤ 0.02). Although not statistically significant, there was a trend between higher body mass index and increased odds of bladder neck positive surgical margins (OR 1.03, P = 0.09). Interactions between body mass index and surgical method were significant for peripheral positive surgical margins only (P = 0.024). Specifically, there was an association between body mass index and peripheral positive surgical margins among men undergoing retropubic radical prostatectomy (OR 1.04, P < 0.001), but not robotic-assisted laparoscopic prostatectomy (OR 1.00, P = 0.98). Limitations include the lack of individual surgeon data and of central pathology review. Conclusions: In this multicenter cohort, higher body mass index was associated with increased odds of positive surgical margins at all locations except the bladder neck. Furthermore, there was a significant association between obesity and peripheral positive surgical margins in men undergoing retropubic radical prostatectomy, but not robotic-assisted laparoscopic prostatectomy. Long-term clinical significance requires further study. abstract_id: PUBMED:19680527 Status of radical prostatectomy in 2009: is there medical evidence to justify the robotic approach? This article presents the evolution of open radical retropubic prostatectomy (ORRP) into a minimally invasive procedure and reviews the literature to provide a legitimate comparison between ORRP and robotic-assisted laparoscopic radical retropubic prostatectomy (RALRP). The article is limited to manuscripts cited in the peer-reviewed literature, and an effort was made to identify those articles that fulfilled the highest level of medical evidence. In centers of excellence, ORRP is performed with no mortality, extraordinarily low technical and medical complications (1%), the rare need for blood transfusions, 1- to 2-day hospital stays, urinary catheters that are routinely removed in a week, the majority of men returning to work in 2 weeks, and up to 97% of men regaining urinary continence. Return of potency remains a challenge, especially for older men with marginal erections. RALRP is now the most common approach for the surgical removal of the malignant prostate. A critical review of the literature fails to support the marketing claims that RALRP is associated with shorter hospitalization, less pain, better cosmetics, shorter catheter time, lower transfusion rates, or improved continence and potency rates. The highest level of medical evidence suggests that RALRP may significantly compromise oncologic outcomes and that men undergoing this approach have higher regret rates than men undergoing ORRP.
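The SEARCH abstract above reports its BMI associations as odds ratios per one-unit increase in body mass index (for example, OR 1.04 for peripheral margins after retropubic surgery). The sketch below shows how such a per-unit odds ratio translates into odds and probabilities over a larger BMI difference; the baseline risk and the coefficient are illustrative assumptions, not values taken from the study.

```python
import math

def prob_from_logodds(log_odds):
    """Convert log-odds to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Assumed inputs for illustration only.
or_per_bmi_unit = 1.04   # odds ratio per +1 kg/m^2 (as reported for RRP)
baseline_risk = 0.20     # assumed margin-positivity risk at the reference BMI
bmi_difference = 10      # compare a patient 10 kg/m^2 above the reference

beta = math.log(or_per_bmi_unit)                # logistic regression coefficient
or_10_units = math.exp(beta * bmi_difference)   # OR over a 10-unit BMI difference
baseline_logodds = math.log(baseline_risk / (1 - baseline_risk))
risk_at_higher_bmi = prob_from_logodds(baseline_logodds + beta * bmi_difference)

print(round(or_10_units, 2))         # ~1.48
print(round(risk_at_higher_bmi, 3))  # ~0.27 vs. the assumed 0.20 baseline
```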
abstract_id: PUBMED:28878598 Bladder Neck Contracture Following Radical Retropubic versus Robotic-Assisted Laparoscopic Prostatectomy. Introduction: Radical retropubic prostatectomy (RRP) and robotic-assisted laparoscopic prostatectomy (RALP) are co-standard surgical therapies for localized prostatic adenocarcinoma. These surgical modalities offer similar outcomes; however, a lower rate of bladder neck contracture (BNC) is amongst the touted benefits of RALP. The differences between approaches are largely elucidated through multiple-surgeon comparisons, which can be biased by differential experience and practice patterns. We aimed to eliminate inter-surgeon bias through this single-surgeon comparison of BNC rates following RRP and RALP. Materials And Methods: We retrospectively reviewed all RRPs and RALPs performed by one surgeon over 4 years. We compared clinical characteristics and intraoperative and postoperative outcomes. Results: RRP patients had more advanced cancer and a higher biochemical recurrence rate. No significant differences were noted between groups in rates of anastomotic leakage, BNC, or 12-month postoperative pad-free continence. Conclusion: RRP offers similar outcomes to RALP with regard to postoperative urinary extravasation, urinary continence, and BNC. abstract_id: PUBMED:24912809 Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins. Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margin frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy. Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients. Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback.
Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins. abstract_id: PUBMED:21206663 Laparoscopic radical prostatectomy. Millions of men are diagnosed annually with prostate cancer worldwide. With the advent of PSA screening, there has been a shift in the detection of early prostate cancer, and there are increased numbers of men with asymptomatic, organ-confined disease. Laparoscopic radical prostatectomy is the latest well-accepted treatment that patients can select. We review the surgical technique and the oncologic and functional outcomes of the most current, large series of laparoscopic radical prostatectomy published in English. Positive margin rates range from 2.1-6.9% for pT2a, 9.9-20.6% for pT2b, 24.5-42.3% for pT3a, and 22.6-54.5% for pT3b. Potency rates after bilateral nerve sparing laparoscopic radical prostatectomy range from 47.1 to 67%. Continence rates at 12 months range from 83.6 to 92%. abstract_id: PUBMED:31436793 Patient-reported outcomes after open radical prostatectomy, laparoscopic radical prostatectomy and permanent prostate brachytherapy. Objective: To assess patient-reported outcomes after open radical prostatectomy, laparoscopic radical prostatectomy and permanent prostate brachytherapy. Methods: Patient-reported outcomes were evaluated using Expanded Prostate Cancer Index Composite scores at baseline and at 1, 3, 6, 12 and 36 months after treatment, using differences from baseline scores. Results: Urinary function was the same in the three groups at baseline, but worse after surgery than after permanent prostate brachytherapy until 12 months, and similar after open radical prostatectomy and permanent prostate brachytherapy and better than after laparoscopic radical prostatectomy at 36 months. Urinary bother was significantly worse at 1 month after surgery, but better after open radical prostatectomy than after permanent prostate brachytherapy and laparoscopic radical prostatectomy at 3 months, after which symptoms improved gradually in all groups. Obstructive/irritative symptoms were worse after permanent prostate brachytherapy than after open radical prostatectomy at 36 months, and worse after laparoscopic radical prostatectomy until 6 months. Urinary incontinence was worse after surgery, particularly after 1 month. This symptom returned to the baseline level at 12 months after open radical prostatectomy, but recovery after laparoscopic radical prostatectomy was slower. Bowel function after permanent prostate brachytherapy was significantly worse than after surgery at 1 month, and this continued until 6 months. Bowel bother was slightly worse at 3 and 6 months after permanent prostate brachytherapy compared to these time points after surgery. Conclusion: Urinary function and bother were worst after laparoscopic radical prostatectomy, especially in the early postoperative phase, whereas urinary obstructive/irritative symptoms, bowel function and bother were worse after permanent prostate brachytherapy. These findings are useful and informative for the treatment of patients with prostate cancer. abstract_id: PUBMED:31061797 Open and robotic radical prostatectomy. Open retropubic radical prostatectomy has been the "gold standard" treatment for locally confined prostate cancer (PCa), but in recent years minimally invasive techniques such as laparoscopy and robot-assisted prostatectomy have become widely available.
The trifecta of the surgical treatment of PCa is cancer control, the preservation of continence, and erectile potency. Over the years, the complication rates of radical prostatectomy have become very limited, with improved cancer control and better functional results. We review the indications and the surgical technique of radical prostatectomy, whether open or laparoscopic and possibly robot-assisted, as well as the pre- and postoperative measures and the surgery-related consequences. abstract_id: PUBMED:27011560 Robot-Assisted Radical Prostatectomy vs. Open Retropubic Radical Prostatectomy for Prostate Cancer: A Systematic Review and Meta-analysis. Open retropubic radical prostatectomy (ORP) remains the "gold standard" for surgical treatment of clinically localized prostate cancer (PCa). Robot-assisted radical prostatectomy (RARP) is a robotic surgery used worldwide. The aim of this study is to collect the data available in the literature on RARP and ORP, and further evaluate the overall safety and efficacy of RARP vs. ORP for the treatment of clinically localized PCa. A literature search was performed using electronic databases between January 2009 and October 2013. Clinical data such as operation duration, transfusion rate, positive surgical margins (PSM), nerve sparing, 3- and 12-month urinary continence, and potency were pooled to carry out meta-analysis. Six studies were enrolled for this meta-analysis. The operation duration of the RARP group was longer than that of the ORP group (weighted mean difference = 64.84). There was no statistically significant difference in the transfusion rate or PSM rate between RARP and ORP (transfusion rate, OR = 0.30; PSM rate, OR = 0.94). No significant difference was seen in 3- and 12-month urinary continence recovery (3 months, OR = 1.32; 12 months, OR = 1.30). There was a statistically significant difference in potency between the 3- and 12-month groups (3 months, OR = 2.80; 12 months, OR = 1.70). RARP is a safe and feasible surgical technique for the treatment of clinically localized PCa owing to the advantages of fewer perioperative complications and quicker potency recovery. Answer: The comparison between laparoscopic radical prostatectomy (LRP) and traditional retropubic radical prostatectomy (RRP) has been the subject of various studies, and the outcomes can vary depending on the specific metrics being compared. One study found that positive margin rates for LRP ranged from 2.1-6.9% for pT2a, 9.9-20.6% for pT2b, 24.5-42.3% for pT3a, and 22.6-54.5% for pT3b tumors. Potency rates after bilateral nerve-sparing LRP ranged from 47.1 to 67%, and continence rates at 12 months ranged from 83.6 to 92% (PUBMED:21206663). These results suggest that LRP can offer favorable oncologic and functional outcomes. However, patient-reported outcomes after LRP indicated that urinary function and bother were worst after LRP, especially in the early postoperative phase, compared to open radical prostatectomy and permanent prostate brachytherapy (PUBMED:31436793). This suggests that the recovery experience for patients may be more challenging after LRP in the short term. A systematic review and meta-analysis comparing robot-assisted radical prostatectomy (RARP), which is a form of laparoscopic surgery, with ORP found no statistically significant difference in transfusion rate, positive surgical margins rate, and 3- and 12-month urinary continence recovery between the two methods.
However, there was a statistically significant difference in potency between the 3- and 12-month groups, with RARP showing quicker potency recovery (PUBMED:27011560). This indicates that while LRP may have some advantages in terms of recovery of potency, other outcomes are comparable to ORP. It is important to note that the highest level of medical evidence does not support claims that RARP (and by extension, LRP) is associated with shorter hospitalization, less pain, better cosmetics, shorter catheter time, lower transfusion rates, or improved continence and potency rates compared to ORRP (PUBMED:19680527). In conclusion, whether LRP is better than traditional RRP is not a straightforward question and depends on the specific outcomes being considered.
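The meta-analytic odds ratios cited above (PUBMED:27011560) come from pooling individual study estimates. A common fixed-effect approach is inverse-variance weighting of the log odds ratios, sketched below; the three study values are hypothetical placeholders used only to show the arithmetic, not data from the cited review.

```python
import math

def pooled_odds_ratio(study_ors, study_ses):
    """Fixed-effect inverse-variance pooling of log odds ratios.
    study_ors: per-study odds ratios; study_ses: standard errors of the log ORs."""
    weights = [1.0 / se ** 2 for se in study_ses]
    pooled_log_or = sum(w * math.log(o) for w, o in zip(weights, study_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
    ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
    return math.exp(pooled_log_or), (ci_low, ci_high)

# Hypothetical studies comparing 12-month potency recovery, RARP vs. ORP.
ors = [1.9, 1.5, 1.7]
ses = [0.30, 0.25, 0.40]
or_pooled, ci = pooled_odds_ratio(ors, ses)
print(round(or_pooled, 2), tuple(round(x, 2) for x in ci))
```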
Instruction: Can Bladder Wall Thickness Predict Videourodynamic Findings in Children with Spina Bifida? Abstracts: abstract_id: PUBMED:25776909 Can Bladder Wall Thickness Predict Videourodynamic Findings in Children with Spina Bifida? Purpose: We investigated whether bladder wall thickness measured at specifically defined bladder volumes could predict videourodynamic findings in children with spina bifida. Materials And Methods: We prospectively investigated patients with spina bifida on intermittent catheterization who underwent ultrasound examination simultaneously with videourodynamics. We evaluated the association between bladder wall thickness measured at maximum cystometric capacity and parameters including age, maximum detrusor pressure during filling or at leak and bladder compliance. Differences in bladder wall thickness measured at each percent maximum cystometric capacity were compared between patients with and without unfavorable videourodynamic findings. Maximum detrusor pressure 40 cm H2O or greater during filling or at leak, bladder compliance less than 10 ml/cm H2O, detrusor overactivity, bladder trabeculation and vesicoureteral reflux were defined as unfavorable videourodynamic findings. Results: A total of 23 males and 30 females with spina bifida (median age 7.8 years) underwent measurement of bladder wall thickness at maximum cystometric capacity. Mean ± SD bladder wall thickness measured at maximum cystometric capacity was 1.7 ± 0.5 mm. Only age had a weak correlation with bladder wall thickness measured at maximum cystometric capacity (p < 0.05). In 31 patients bladder wall thickness was measured at each percent maximum cystometric capacity. Regarding unfavorable videourodynamic findings, there were no significant differences in bladder wall thickness measured at each percent maximum cystometric capacity, except for bladder trabeculation. Conclusions: Even if bladder wall thickness is measured at specifically defined bladder volumes, it cannot predict videourodynamic findings other than bladder trabeculation in children with spina bifida. abstract_id: PUBMED:38057255 Effect of bladder volume and compliance on ultrasonographic measurement of bladder wall thickness in children with neurogenic bladder dysfunction. Background: Spina bifida is the most common cause of neurogenic bladder dysfunction in children. Measurement of bladder wall thickness (BWT) is non-invasive, low-cost, and may be complementary to urodynamics in demonstrating bladder condition. The wall thickness response to volume change in bladders with different compliance states is still unknown. Objective: To determine the most appropriate measurement technique by revealing the differences in measurements performed under different compliance and volume conditions in children with neurogenic bladder. Study Design: In this prospective study, conducted in 2022, patients with spina bifida who continued their urological follow-up in our spina bifida center were included. Patients with a diagnosis of neurogenic bladder who performed clean intermittent catheterization at least 4-6 times a day and had a recent urodynamic result in the last 6 months were included in the study. According to urodynamic results, patients were divided into two groups: normocompliant (NC) and low-compliant (LC) bladders. BWT measurements were made from the anterior, posterior, right lateral, and left lateral walls of the bladder three times: with the bladder full, at half volume, and empty.
Results: The study included 50 patients (NC group, n = 21; LC group, n = 29). The mean age of the 24 female and 26 male patients was 6.45 ± 4.07 years. In all BWT measurements, a significant increase in wall thickness was observed with decreasing bladder volume. The best correlation between bladder volume and BWT was found in the anterior wall, with a weak negative correlation (p = 0.049, r = -0.280). However, lateral wall measurements were significantly higher in low-compliant patients compared to normocompliant patients when the bladder was full. The mean right lateral wall thickness was 1.58 ± 0.68 mm in the NC group and 2.18 ± 1.35 mm in the LC group (p = 0.044). Left lateral wall thickness was 1.45 ± 0.44 mm in the NC group and 2.02 ± 1.4 mm in the LC group (p = 0.033). Discussion: Although attempts have been made to standardize BWT measurements, the studies in the literature show that there is no uniformity in practice. Conclusion: Lateral wall measurements were found to be significantly higher in LC patients than in normocompliant patients when the bladder was full, suggesting that more accurate BWT follow-up can be performed with lateral wall measurements in neurogenic bladder patients if bladder compliance is low. The present study seems to be the first study in the literature in which bladder compliance and ultrasonographic bladder wall thickness measurements were evaluated together in children with neurogenic bladder dysfunction. abstract_id: PUBMED:26889485 Bladder wall thickness in the assessment of neurogenic bladder: a translational discussion of current clinical applications. The prospective trial by Kim et al., "Can Bladder Wall Thickness Predict Videourodynamic Findings in Children with Spina Bifida?", published in the Journal of Urology, investigated the measurement of bladder wall thickness (BWT) as a non-invasive assessment tool for lower urinary tract changes in neurogenic bladder (NGB). In this study, no significant association was observed between BWT and high-risk urodynamic parameters. This editorial discusses the basic science of bladder wall thickening as well as prior studies relating wall thickness to clinical parameters. Although Kim et al. provide a unique literature contribution in terms of assessment of BWT at defined percent cystometric capacity, specific aspects of study methodology and population may have contributed to a lack of correlation with high-risk urodynamic findings. The application of non-invasive modalities to lower urinary tract assessment of NGB remains a promising and relevant area of future research to prevent progression to end-stage lower urinary tract changes for all individuals with spina bifida. abstract_id: PUBMED:35460290 Ultrasound-Estimated Bladder Weight Correlates With Videourodynamic Studies in Neurogenic Bladder Dysfunction. Purpose: This retrospective study was designed to evaluate which lower urinary tract ultrasound parameter(s) could predict the results of invasive urodynamic testing, which is the current reference standard in the evaluation of bladder dysfunction in children with spina bifida. Materials And Methods: Fifty-eight children with spina bifida undergoing video urodynamic evaluation and a renal bladder ultrasound as their standard of care were evaluated. Quantitative and qualitative ultrasound parameters were then correlated with the videourodynamic study results, which served as the reference standard.
Results: For bladders with ending storage pressures above 15 cm H2O, there were increases in these ultrasound-measured parameters: 1) bladder mass (P = .00019), 2) bladder/body mass ratio (P = .0059), and 3) wall thickness (P = .01). We defined the storage cost as the final storage pressure divided by the percentage of expected bladder capacity attained. These data were analyzed to compute receiver operating characteristic curves assuming end storage pressure cutoff points of 15, 20, 30, and 40 cm H2O. The optimal area under the curve was found for a bladder weight of 65 g and a pressure cutoff of 30 cm H2O, with a sensitivity of 75% and a specificity of 84%. Conclusion: Bladder weight is independent of luminal volume, can be normalized to body weight, and may serve as a clinically valuable tool for noninvasive screening to define a subset of patients with neurogenic bladder with a higher likelihood of having abnormal videourodynamic results. abstract_id: PUBMED:34332835 Can ultrasound measurement of bladder wall thickness be a useful adjunct for regular urodynamics in children with spina bifida? Introduction: Spina bifida is a major cause of neurogenic bladder dysfunction among children. The goal of neurogenic bladder treatment is to preserve renal function. Close follow-up is essential, as lower urinary tract functions can change with patient growth. Presently, invasive urodynamics is the gold standard for precisely assessing lower urinary tract function. Ultrasound is a low-cost, non-invasive, uncomplicated examination that can be easily repeated. Bladder wall thickness (BWT) measurement by ultrasound has been proposed as a non-invasive alternative for identifying lower urinary tract dysfunctions. Objective: Currently there are few reports on BWT in children with spina bifida, and BWT assessment methodology has yet to be defined. The aim of the present study was to investigate whether BWT could be a useful adjunct for regular urodynamics in children with spina bifida. We especially focused on the precise bladder volume during BWT measurements that were performed simultaneously with urodynamics. Study Design: This prospective observational study investigated 33 patients with spina bifida who underwent video urodynamics. We assessed BWT measurements using ultrasound performed simultaneously with video urodynamics. BWT was calculated for the ventral and dorsal walls at 0%, 20%, 40%, 60%, 80%, and 100% of the expected bladder capacity. Results: Median bladder capacity was 240 mL, and bladder compliance was 19.2 mL/cmH2O. Detrusor overactivity was present in 66.7% and vesicoureteral reflux was present in 27.3% of the patients. BWT of the ventral wall was significantly lower than that of the dorsal wall. During increases in bladder volume, both the ventral and dorsal walls exhibited proportional thinning (p < 0.05). There were no significant differences in BWT between males and females. Although BWT tended to be higher in patients with detrusor overactivity than in those without (p = 0.085), there were no significant differences found between patients with and without detrusor overactivity. Discussion: This is the first report in which multiple BWT measurement points were obtained simultaneously with video urodynamics. Selection of bladder volumes for BWT measurements is critical. Our current study measured six points for each patient during urodynamics. However, the available data were not sufficient for detecting bladder function.
Until now, there has been no valid standard condition defined for measuring BWT, and thus the lack of a standardized method has resulted in discrepancies among studies. Conclusion: Our measurement conditions showed that BWT may not correlate with the degree of bladder detrusor dysfunction. As BWT ultrasound cannot identify bladder dysfunction in children with spina bifida, it cannot be used as a substitute for invasive urodynamics. abstract_id: PUBMED:36245719 The correlation between the transverse rectal diameter and urodynamic findings in children with neurogenic bowel and bladder dysfunction. Background: The aim of this study was to examine the correlation between the transverse rectal diameter and urodynamic findings in children with neurogenic bowel and bladder dysfunction. Methods: Between 2014 and 2022, we prospectively evaluated 81 consecutive spina bifida children with neurogenic bowel and bladder dysfunction (35 boys and 46 girls, mean age 9.5 ± 3.4 years). All patients underwent echosonographic measurement of transverse rectal diameter and urodynamic studies. Results: We found a strong negative correlation between transverse rectal diameter and maximum bladder capacity (r = -0.682, p < 0.001) and compliance (r = -0.690, p < 0.001). There was also a strong positive correlation between transverse rectal diameter and maximal detrusor pressure (r = 0.650, p < 0.001), leak point pressure (r = 0.793, p < 0.001), and PVR (r = 0.762, p < 0.001). In ROC analysis, transverse rectal diameter demonstrated good performance for distinguishing children with upper urinary tract deterioration, with an AUC of 0.857 (95% CI 0.761-0.953). A transverse rectal diameter ≥40 mm was 83.3% sensitive and 100% specific for the diagnosis of unfavorable urodynamic patterns. Conclusion: There is a correlation between the transverse rectal diameter and urodynamic findings in children with neurogenic bowel and bladder dysfunction. An ultrasonographically assessed transverse rectal diameter of ≥40 mm may be used as a risk factor for upper urinary tract deterioration (unfavorable urodynamic findings). We suggest using echosonographic measurement of the transverse rectal diameter as an integral part of the diagnostic approach in children with neurogenic bowel and bladder dysfunction, as it can help decision-making while waiting for urodynamic testing. abstract_id: PUBMED:25656013 Efficacy of combination therapy with mirabegron for anticholinergic-resistant neurogenic bladder: videourodynamic evaluation. Using a videourodynamic study, we examined the efficacy of combination therapy with mirabegron for anticholinergic-resistant neurogenic bladder. We retrospectively studied 7 patients with neurogenic bladder (5 males and 2 females) who had detrusor overactivity (DO) or low compliance bladder (<10 ml/cmH2O) despite taking anticholinergic medication. Bladder deformity was categorized from G0 to G3 by Ogawa's classification. Mean age of the study patients was 51 years (25-76). Underlying diseases were spinal cord injury in 3 patients, spina bifida in 2, spinal cord infarction in 1, and post-radical hysterectomy in 1. Preceding anticholinergic medication was solifenacin 5 mg in 1 patient, solifenacin 10 mg in 5, and tolterodine 4 mg in 1. Before mirabegron, bladder deformity was G1 in 4 patients, G2 in 1 and G3 in 2, and vesicoureteral reflux (VUR) was detected in 3 patients. Five and 4 patients had detrusor overactivity and low compliance bladder, respectively.
The videourodynamic study was reevaluated at a mean of 7 months (2-12 months) after mirabegron. After mirabegron, urinary incontinence was improved in all patients. G3 bladder deformity was improved to G2 and G1 in one patient each, and VUR disappeared in all 3 patients. DO disappeared in 2 of the 5 patients, and bladder compliance was improved in all 4 patients with low compliance bladder. In conclusion, combination therapy with mirabegron is effective and beneficial for anticholinergic-resistant neurogenic bladder. abstract_id: PUBMED:32376289 The influence of bowel management on urodynamic findings in spina bifida children with detrusor overactivity and detrusor sphincter dyssynergia. Objective: To examine the effects of bowel management on urodynamic findings in spina bifida children with detrusor overactivity (DO) and detrusor sphincter dyssynergia (DSD). Material And Methods: Between 2014 and 2019 we prospectively evaluated 39 consecutive spina bifida children with DO and DSD (18 (46.2%) boys and 21 (53.8%) girls, aged 4-16 years; mean age 9.5 ± 3.7 years) who received bowel management; we treated their bowel dysfunction with the aim of alleviating the symptoms of constipation, preventing constant overdistension of the rectosigmoid, and providing regular emptying of the colon and faecal continence. Bowel management included daily enema, laxative application and a special diet, and was performed for 12 months. All children had undergone urodynamic studies before and after starting bowel management, with no changes in their urological treatment. Results: Bowel management caused an increase in maximum bladder capacity from 183.0 (112.0-234.0) to 237.0 (165.0-298.0) (p < 0.001). When maximal bladder capacity was adjusted for age, there was also a significant increase (adjusted before 0.54 (0.47-0.64), adjusted after 0.75 (0.70-0.82), p < 0.001). In addition, we observed a decrease in maximal detrusor pressure from 64.3 (49.0-77.0) to 46.4 (32.0-59.0) cm H2O (p < 0.001) and an increase in bladder compliance from 3.0 (2.0-3.3) to 5.6 (3.9-6.6) ml/cm H2O (p < 0.001). There were also significant reductions in leak point pressure from 62.0 (48.0-69.0) to 39.0 (30.0-43.0) cm H2O (p = 0.001), and significant reductions in post-void residual volume (PVR) from 165.0 (128.0-187.0) to 98.0 (68.0-136.0) ml in our 13 patients who could achieve spontaneous voiding (p = 0.001). Conclusion: Administering bowel management may be useful for bladder function and urodynamic findings in spina bifida children with DO and DSD. Therefore, bowel management should form an integral part of the treatment in spina bifida children with DO and DSD. abstract_id: PUBMED:35735950 Bladder Wall Segmentation and Characterization on MR Images: Computer-Aided Spina Bifida Diagnosis. (1) Background: Segmentation of the bladder wall's inner and outer boundaries on Magnetic Resonance Images (MRI) is a crucial step for the diagnosis and the characterization of the bladder state and function. This paper proposes an optimized system for the segmentation and the classification of the bladder wall. (2) Methods: For each image of our data set, the region of interest corresponding to the bladder wall was extracted using LevelSet contour-based segmentation. Several features were computed from the extracted wall on T2 MRI images. After an automatic selection of the sub-vector containing the most discriminant features, two supervised learning algorithms were tested using a bio-inspired optimization algorithm.
(3) Results: The proposed system based on the improved LevelSet algorithm proved its efficiency in bladder wall segmentation. Experiments also showed that Support Vector Machine (SVM) classifier, optimized by Gray Wolf Optimizer (GWO) and using Radial Basis Function (RBF) kernel outperforms the Random Forest classification algorithm with a set of selected features. (4) Conclusions: A computer-aided optimized system based on segmentation and characterization, of bladder wall on MRI images for classification purposes is proposed. It can significantly be helpful for radiologists as a part of spina bifida study. abstract_id: PUBMED:36999336 Risk determination for upper urinary tract damage in children with neuropathic bladder. Aim: Neuropathic bladder dysfunction (NBD) or neurogenic lower urinary tract dysfunction occurs as a result of a lesion at any level of central nervous system. The most common aetiology of NBD in children is abnormal development of spinal column. These defects cause neurogenic detrusor overactivity which contributes to detrusor-sphincter dysfunction and results in lower urinary tract symptoms such as incontinence. One of the more progressive and insidious, at the same time preventable result of neuropathic bladder is upper urinary tract deterioration. It is vital to aim reduction in bladder pressures and the minimisation of urine stasis in order to prevent or at least attenuate renal disease. Despite world-wide strategies for prevention of neural tube defects currently, we will still be involved in the care of spina bifida patients born every year with a neuropathic bladder and at risk of long-term renal damage. This study was planned for evaluation of results and detection of possible risk factors for upper urinary tract deterioration during routine visits of neuropathic bladder population. Methods: The electronic medical records of the patients who were followed up for at least 1 year with the diagnosis of neuropathic bladder in Pediatric Urology and Nephrology units of Adana City Training and Research Hospital were reviewed retrospectively. A total of 117 patients, whose blood, urine, imaging and urodynamic study required for the evaluation of nephrological and urological status were completed and included in the study. Patients under the age of one were excluded from the study. Demographic characteristics, medical history, laboratory and imaging results were recorded. All statistical analyses were analysed by SPSS version 21 software package and analysed by descriptive statistical. Results: Of the 117 patients who participated in the study, 73 (62.4%) were female and 44 (37.6%) were male. Mean age of the patients was 6.7 ± 4.9 years. The leading cause of neuropathic bladder was neuro-spinal dysraphism with 103 (88.1%) patients. Urinary tract ultrasound imaging revealed hydronephrosis in 44 patients (35.9%), parenchymal thinning in 20 patients (17.1%), increased parenchymal echo in 20 patients (17.1%) and trabeculation or increase in wall thickness of the bladder in 51 patients (43.6%). Voiding cystogram revealed the presence of vesicoureteral reflux in a total of 37 patients (31.6%), with 28 patients being unilateral and 9 patients bilateral. More than half of the patients had abnormal bladder findings (52.1%). In the Tc 99 m DMSA scan of the patients, 24 patients (20.5%) had unilateral renal scars and 15 (12.8%) had bilateral. Loss of renal function was detected in 27 (23.1%) of the patients. 
Urodynamic study revealed decreased bladder capacity in 65 patients (55.6%) and increased detrusor leakage pressure was found in 60 patients (51.3%). The mean leak point pressure of the patients was 36 ± 26 cm H2 O, and the mean leakage volume was determined as 157 ± 118 mL. Conclusion: Findings obtained from imaging and urodynamic studies during routine investigation of neuropathic bladder patients can be a guide for the upper urinary tract. According to our results, age, bladder changes in ultrasound and voiding cystogram, high leak point pressure obtained during urodynamic studies are thought to have strong correlation with upper urinary tract damage. The prevalence of progressive chronic kidney disease in children and adults with spina bifida is remarkable and completely preventable. The strategies for prevention of renal disease in this patient population should be planned by coordinated work of urologist with nephrologist and requires family cooperation. Answer: Based on the provided abstracts, bladder wall thickness (BWT) has been investigated as a potential non-invasive predictor of videourodynamic findings in children with spina bifida, but the results are not conclusive. One study found that bladder wall thickness measured at specifically defined bladder volumes could not predict videourodynamic findings other than bladder trabeculation in children with spina bifida (PUBMED:25776909). This suggests that while BWT may be associated with some bladder changes, it is not a reliable predictor of most videourodynamic findings in this population. Another study indicated that lateral wall measurements of BWT were significantly higher in low compliant bladders compared to normocompliant bladders when the bladder was full, suggesting that BWT measurements, particularly of the lateral walls, might be more accurate in predicting bladder compliance states in children with neurogenic bladder dysfunction (PUBMED:38057255). However, this study also acknowledges that there is no unity in practice for BWT measurements, indicating a lack of standardization in the field. An editorial discussing the study by Kim et al. (PUBMED:26889485) highlighted that while BWT is a promising non-invasive modality for assessing lower urinary tract changes in neurogenic bladder, the lack of significant association between BWT and high-risk urodynamic parameters in their study could be due to specific aspects of study methodology and population. A retrospective study found that ultrasound-estimated bladder weight, which includes wall thickness, correlated with videourodynamic studies in neurogenic bladder dysfunction, suggesting that bladder weight might serve as a non-invasive screening tool for identifying abnormal videourodynamic results (PUBMED:35460290). Another study concluded that BWT measurements may not correlate with the degree of bladder detrusor dysfunction, indicating that BWT ultrasound cannot identify bladder dysfunction in children with spina bifida and cannot be used as a substitute for invasive urodynamics (PUBMED:34332835). In summary, while there is some evidence that BWT, particularly lateral wall measurements, may correlate with certain aspects of bladder compliance in children with neurogenic bladder dysfunction, it is not a consistent or reliable predictor of videourodynamic findings in children with spina bifida. Further research and standardization of measurement techniques are needed to clarify the role of BWT in this context.
Instruction: Surgical management of supravalvular aortic stenosis: does Brom three-patch technique provide superior results? Abstracts: abstract_id: PUBMED:19632417 Surgical management of supravalvular aortic stenosis: does Brom three-patch technique provide superior results? Background: Various surgical techniques have been proposed to repair supravalvular aortic stenosis. However, the optimal approach remains to be determined. The present study was undertaken to evaluate the results of surgery for supravalvular aortic stenosis and to compare the symmetric three-patch repair (Brom technique) with other surgical reconstructions. Methods: Between 1995 and 2007, 34 patients with supravalvular aortic stenosis underwent surgery. Fourteen patients (41%) had Williams syndrome. Supravalvular stenosis was discrete in 24 patients (71%) and diffuse in 10 (29%). Eight patients (23%) underwent one-patch repair; 3 patients (9%) had inverted bifurcated patch enlargement (Doty repair) and 23 patients (68%) had symmetric repair (Brom three-patch technique). Follow-up was complete and ranged from 6 months to 12 years (mean, 5.8 years). Results: There was one early and no late deaths. One patient (with one-patch repair) underwent reoperation for residual obstruction. At last follow-up, left ventricle to aorta peak gradient was 45 +/- 28 mm Hg after one-patch repair, 30 +/- 9 mm Hg after Doty operation, and 11 +/- 18 mm Hg after symmetric Brom procedure. Brom repair was associated with a low incidence of residual obstruction (peak gradient &gt; or = 40 mm Hg) (2 of 22; 9.1%) and moderate aortic insufficiency (1 of 22; 4.5%). Conclusions: Brom three-patch repair provides symmetric reconstruction of the aortic root in patients with supravalvular aortic stenosis. This may lead to improved midterm results in terms of relief of the obstruction and incidence of aortic insufficiency. abstract_id: PUBMED:29544407 Surgical Correction of Supravalvar Aortic Stenosis: 52 Years' Experience. Objectives: Supravalvar aortic stenosis (SVAS) is a rare congenital anomaly. The "single-patch technique," "'two sinus augmentation with an inverted Y-patch" (both nonsymmetrical corrections), "three-patch technique," and the "slide aortoplasty" (both symmetrical corrections) are the techniques implemented by the majority of surgeons for the correction of SVAS. In the few studies that compared these techniques, no technique was shown to be superior over another. The aim of the present study is to review the 52-year experience with the surgical correction of SVAS in two of four congenital cardiothoracic surgical centers in the Netherlands. Methods: We retrospectively reviewed all patient files of those who underwent an operation to correct their SVAS, between 1962 and 2014 in our centers. Patients were divided according to their operating technique. These groups were compared using the end points freedom from reoperation and mortality. Results: A total of 49 patients were included, 23 (46.9%) patients in the nonsymmetrical group and 26 (53.1%) patients in the symmetrical group. Survival after 20 years in the nonsymmetrical group was 80% (standard error [SE]: 0.091) and in the symmetrical group was 85% (SE: 0.085; P = .163). Freedom from reoperation after 20 years in the nonsymmetrical group was 88% (SE: 0.079) and in the symmetrical group was 71% (SE: 0.107; P = 0.313). 
Conclusion: In this patient group, there is no significant difference in survival and freedom from reoperation between the different surgical techniques for SVAS repair. Compared to the survival in the general population, the survival of SVAS patients is remarkably low. Apparently, SVAS is not a benign disease and probably patients should be followed more closely for the rest of their lives. abstract_id: PUBMED:24415669 Surgery for supravalvular aortic stenosis - the three-patch technique. The objective of this paper is to describe the three-patch technique for repair of supravalvular aortic stenosis (SVAS). Supravalvular aortic stenosis is a rare malformation as a result of an abnormal thickening of the aortic wall. SVAS may present in two forms: a localized form (affecting only the aortic sinotubular junction) and a diffuse form, where the aortic arch and its side branches are also affected. Since 1960, multiple surgical techniques have been described with the aim of relieving the aortic narrowing and restoring the aortic root. We present the three-patch technique as originally developed by Brom. After transection of the aorta at the sinotubular junction, three longitudinal incisions are made into the three sinuses. The aortic root geometry is then restored by placement of three separate patches of autologous pericardium in the opened sinuses. Brom's technique provides a complete and symmetric restoration of the aortic anatomy. The technique is illustrated by angiographies, surgical drawings, videos and a review of the literature. The results of the three-patch technique are good and our long-term experience will be described. abstract_id: PUBMED:30189261 Three-Patch Aortic Root Reconstruction With Extended Left Main Coronary Artery Patch Augmentation in Neonates and Infants. Left main coronary artery (LMCA) stenosis is present in approximately 5% of patients with congenital supravalvular aortic stenosis (SVAS) (Fig. 1)1 and is associated with an increased risk of sudden cardiac death.2 However, patients undergoing coronary artery intervention at the time of SVAS repair are at the highest risk of experiencing major adverse cardiac events.3 Literature reports of surgical techniques and outcomes of concomitant coronary artery repair in these high-risk patients are diverse and inconsistently described. We have recently adopted a standardized surgical technique for management of this complex pathology by combining extended LMCA patch augmentation with a 3-patch aortic root reconstruction (Brom's technique). In this report, we describe our contemporary surgical technique of 3-patch aortic root reconstruction with extended LMCA patch augmentation for patients with congenital SVAS with ostial LMCA stenosis and bilateral outflow tract obstruction. Institutional review board approval was obtained for retrospective review of patient charts. abstract_id: PUBMED:25282210 An unusual complication of three-patch repair of supravalvar aortic stenosis. Three-patch repair of supravalvar aortic stenosis is a widely accepted surgical approach for this congenital heart lesion. We describe an unusual complication of this approach, which resulted in ischemia in the left anterior coronary artery distribution. Subtle oversizing of the left sinus of Valsalva patch led to kinking of the origin of the left anterior descending artery; the circumflex artery was not affected. Sinus of Valsalva reconstruction and reimplantation of the left coronary button restored normal coronary perfusion. 
abstract_id: PUBMED:36902518 Clinical Results of a Modified Doty's Technique for Supravalvular Aortic Stenosis. This study aimed to assess the early and mid-term results of the modified Doty's technique compared with the traditional Doty's technique in patients with congenital supravalvular aortic stenosis (SVAS). We retrospectively included 73 consecutive SVAS patients in Beijing and Yunnan Fuwai Hospitals between 2014 and 2021. Patients were divided into the modified technique (n = 9) and the traditional technique group (n = 64). The modified technique involves altering the right head of the symmetrical inverted pantaloon-shaped patch into an asymmetrical triangular form to prevent compression of the right coronary artery ostium. The primary safety outcome was in-hospital surgery-related complications and the primary effectiveness outcome was re-operation at follow-up. The Mann-Whitney U test and Fisher's exact test were used to test the group difference. The median age at operation was 50 months (IQR 27.0-96.0). Twenty-two (30.1%) of the patients were female. The median follow-up was 23.5 months (IQR 3.0-46.0). No in-hospital surgery-related complications and follow-up re-operation occurred in the modified technique group, but the traditional technique group had 14 (21.8%) surgery-related complications and 5 (7.9%) re-operation. Patients with the modified technique had a well-developed aortic root and no aortic regurgitation occurred. A modified technique could be considered for patients with poor aortic root development to reduce postoperative surgery-related complications. abstract_id: PUBMED:14650592 Successful repair of supravalvular aortic stenosis with ostial stenosis of the left coronary artery using Brom's three patch technique. Stenotic lesion of the left coronary artery is an unnoticed but complicating feature of supravalvular aortic stenosis (SAS). We present successful repair of SAS with left coronary ostial stenosis. A 9-year-old girl was diagnosed as Williams syndrome associated with SAS. She had no symptoms of angina but cardiac catheterization revealed severe stenosis of the left coronary artery ostium. We adopted Brom's three patch technique, which could enlarge the aortic root and ostial lesion of left coronary artery inclusively. This method is also ideal regarding restoration of the aortic root geometry. abstract_id: PUBMED:20519303 Extended three-patch technique for congenital supravalvular aortic stenosis. We describe a modified extended 3-patch surgical procedure to relieve supravalvular aortic stenosis. This simple technique provides effective and lasting enlargement of the ascending aorta in patients with severe congenital supravalvular aortic stenosis. abstract_id: PUBMED:27785133 Application of the CardioCel bovine pericardial patch - a preliminary report. Introduction: Animal pericardial patches are widely used in adult and pediatric cardiac surgery. A search is ongoing for a new material with optimal surgical properties that will reduce intraoperative bleeding and the occurrence of restenosis, calcification, and pseudoaneurysms in long-term observation. One product of interest is the CardioCel bovine pericardial patch. Aim: Evaluation of the short-term results of CardioCel bovine pericardial patch implantation during pediatric cardiac surgery. Material And Methods: The study included 8 patients who underwent surgical correction of congenital cardiac defects between January 2015 and February 2016. 
Pericardial patches were used to repair supravalvular aortic stenosis and reconstruct the aortic arch and pulmonary arteries. The age of the patients ranged from 10 days to 14 years. Results: There were no hospital deaths. The new material exhibited satisfactory durability and elasticity during surgery, facilitating optimal adaptation of the patch to the patient's tissues. No significant bleeding was reported from the suture site. The median duration of follow-up was 58 days. During the follow-up, there were no symptoms of pseudoaneurysm formation, patch thickening, or calcification in the areas where the pericardial patches were implanted. No clinical or laboratory symptoms of infection were observed in locations where the new material was applied. Conclusions: Satisfactory surgical properties of the patch were observed intraoperatively. Positive results using the new pericardial patch were obtained in short-term follow-up. abstract_id: PUBMED:10534693 Forty-one years of surgical experience with congenital supravalvular aortic stenosis. Objective: Several techniques for symmetric reconstruction of the aortic root in congenital supravalvular aortic stenosis have been developed, but it remains unclear whether these prove superior to patch enlargement of the noncoronary sinus alone. We reviewed our experience with surgical treatment of supravalvular aortic stenosis and investigated the impact of the surgical technique on long-term results. Methods And Results: Seventy-five patients underwent operations to treat congenital supravalvular aortic stenosis at our institution between 1957 and 1998. Surgical procedures included patch enlargement of the noncoronary sinus only (n = 34), inverted bifurcated patch plasty (n = 35), and 3-sinus reconstruction of the aortic root (n = 6). There were 7 early deaths. Among those who survived the operation, 100% were alive at 5 years, 96% were alive at 10 years, and 77% were alive at 20 years. According to time-related analysis, diffuse stenosis of the ascending aorta proved a risk factor for both survival and reoperation (P < .01 for each). Patients with multiple-sinus reconstructions of the aortic root accounted for only 2 of the 14 reoperations and none of the late deaths (both P < .001). Residual gradients were lower after multiple-sinus reconstruction of the aortic root (median 10 mm Hg vs 20 mm Hg for patch enlargement of the noncoronary sinus only, P = .008), as was the prevalence of moderate aortic regurgitation at follow-up (3% vs 22%, P = .05). Conclusions: Results of operations for supravalvular aortic stenosis improved greatly after the introduction of more symmetric reconstructions of the aortic root. Multiple-sinus reconstructions (inverted bifurcated patch plasty and 3-sinus reconstruction) resulted in superior hemodynamics and were associated with reductions in both mortality rate and need for reoperation. Answer: The surgical management of supravalvular aortic stenosis (SVAS) has been approached with various techniques, and the Brom three-patch technique is one of the methods used for this condition. According to the available literature, the Brom three-patch technique provides a symmetric reconstruction of the aortic root and may lead to improved midterm results in terms of relief of the obstruction and incidence of aortic insufficiency (PUBMED:19632417).
This technique involves transecting the aorta at the sinotubular junction and placing three separate patches of autologous pericardium in the opened sinuses, which is believed to restore the aortic root geometry completely and symmetrically (PUBMED:24415669). However, when comparing different surgical techniques for SVAS repair, including symmetrical and nonsymmetrical corrections, no significant difference in survival and freedom from reoperation was observed in a study that reviewed a 52-year experience with SVAS surgical correction (PUBMED:29544407). This suggests that while the Brom technique may offer certain advantages, it may not necessarily be superior in terms of long-term outcomes such as survival and reoperation rates. Additionally, the Brom three-patch technique has been successfully used in cases with left coronary ostial stenosis, providing enlargement of the aortic root and ostial lesion inclusively (PUBMED:14650592). An extended version of the three-patch technique has also been described, which provides effective and lasting enlargement of the ascending aorta in patients with severe congenital SVAS (PUBMED:20519303). Despite the potential benefits, complications can arise from the Brom three-patch technique, such as ischemia in the left anterior coronary artery distribution due to oversizing of the left sinus of Valsalva patch (PUBMED:25282210). Therefore, while the Brom three-patch technique may offer a symmetric reconstruction with good midterm results, it is not without risks, and the literature does not conclusively establish its superiority over other techniques. The choice of surgical technique may depend on the individual patient's anatomy and the surgeon's experience and preference.
Instruction: Is frontloaded sputum microscopy an option in active tuberculosis case finding? Abstracts: abstract_id: PUBMED:25519797 Is frontloaded sputum microscopy an option in active tuberculosis case finding? Setting: Active tuberculosis (TB) case finding (ACF) in Phnom Penh, Cambodia using light-emitting diode fluorescence microscopy (FM). Objective: To evaluate the smear-positive yield of frontloaded (same-day) smear microscopy in ACF. Design: All presumptive TB cases screened through ACF were asked to provide three sputum specimens: two spot specimens on Day 1 and a morning specimen on Day 2 (spot-spot-morning, SSM). Laboratory technicians blinded to previous results read the smears using FM. We considered only SSM series with at least one positive smear to calculate the proportion of TB cases missed and to determine the difference between the spot-spot (SS) and spot-morning (SM) approach. Results: Of 4616 presumptive TB patients enrolled, 3306 provided three sputum samples. Of 2957 (89.4%) who followed the SSM approach, 188 (6.4%) were smear-positive: 177 on SM and 160 on SS. The incremental yield of the second sputum sample was 18.1% for SM vs. 9.4% for SS. Relative to any smear-positive case detected by SSM, 28/188 (14.9%, 95%CI 10.1-20.8) TB cases would be missed by SS vs. 11/188 (5.9%, 95%CI 3.0-10.2) by SM. The difference in the proportion of missed TB patients was 9.0% (P = 0.006). Conclusion: ACF frontloaded sputum microscopy is inferior in terms of smear-positive yield: the SS approach would have missed a significant proportion of smear-positive TB. abstract_id: PUBMED:38380873 Pooled sputum testing by Xpert® MTB/RIF Ultra for active tuberculosis case finding among high-risk groups in a low-incidence area: a prospective study. Background: Early detection and treatment of tuberculosis (TB) are of great importance to stop its spread. However, optimising the active case finding strategy is critical to improving its feasibility in regions where TB is epidemic. Method: The different pooled ratios between TB-positive and TB-negative sputum specimens were evaluated and a pooling ratio of 5:1 was used for the active case finding screening by Xpert MTB/RIF Ultra among high-risk groups in Beijing. Results: The sensitivity of the 5:1 pooling ratio was 97.5% (39/40). Between October 2022 and March 2023, among 17,681 participants, 1729 met the active case finding criteria and were screened in 350 5:1 sputum pools by Xpert MTB/RIF Ultra. Four pools (1.1%) tested positive and were further confirmed as definite active TB cases. In our study population with high TB incidence (231/100,000), the cost for detection of individual patients was reduced by 77.4% at a 5:1 pooling ratio. Conclusions: Pooled sputum testing at a suitable ratio using Xpert MTB/RIF Ultra provides a rapid, efficient, and cost-effective method for active TB case finding among high-risk groups in a low-incidence area. abstract_id: PUBMED:23772012 Tuberculosis: active case finding survey in an urban area of India, in 2012. Background: Tuberculosis (TB) continues to pose a major global health problem and thus intensive action is needed to control and ultimately eliminate the disease. Methods: An active case finding (ACF) survey was conducted in the urban slum areas of the R-South Municipal Ward of Mumbai City for a period of two months from June to July 2012. Micro-planning for the survey was done by the District Tuberculosis Officer and the Medical Officer of Health of R-South ward.
The entire health post staff of R-South ward was trained to perform the survey. TB suspects were identified by trained community health volunteers during their home-to-home visits. These suspects were referred to the designated microscopy centers (DMCs) for sputum examination and those diagnosed with TB were put on anti-TB treatment. Results: A total of 278 TB suspects were identified by enquiring about the presence of symptoms suggestive of TB. Of these, 221 (79.5%) underwent sputum examination. Sputum-positive TB was diagnosed in 29 suspects and the sputum positivity rate was 13.1%, which was slightly higher than the passive case finding norm of 10% as prescribed under the Revised National TB Control Program. Conclusion: Active case finding for tuberculosis in the general community was discouraged for several decades because of high costs of implementation. However, results of the survey suggest that periodic ACF should be incorporated in populations wherever tuberculosis incidence/prevalence is high, as there was a definite improvement in the case detection rate. abstract_id: PUBMED:28033717 Active Community-Based Case Finding for Tuberculosis With Limited Resources. Papua New Guinea is one of the 14 highest-burden countries for tuberculosis (TB) infection, but few community-based studies exist. We evaluated a low-cost method of active community case finding in Kabwum and Wasu in Morobe Province, Papua New Guinea. Over 3 months we visited 26 villages and screened adults and children for symptoms and signs of TB. Sputum samples were examined using smear microscopy. A total of 1700 people had chronic symptoms, of which 267 were suspicious for TB on further examination. Sputum from 230 symptomatic adults yielded 97 samples that were positive for acid-fast bacilli. In addition, 15 cases of extrapulmonary TB in adults and 17 cases of TB in children were identified. One hundred and thirty people were identified with active TB disease among the source population of approximately 17 000, giving an estimated prevalence of 765 per 100 000. One hundred and six (82%) cases were not previously diagnosed. The cost per case identified was US$146. It is feasible to conduct active community-based case finding and treatment initiation for TB with limited resources and in remote areas, and in Papua New Guinea the yield was high. Active case finding and follow-up of treatment in villages is needed to address the hidden burden of TB in Papua New Guinea and other high-burden Asia Pacific countries. abstract_id: PUBMED:28571575 Active case-finding for tuberculosis by mobile teams in Myanmar: yield and treatment outcomes. Background: Since 2005, the Myanmar National Tuberculosis Programme (NTP) has been implementing active case finding (ACF) activities involving mobile teams in hard-to-reach areas. This study revealed the contribution of mobile team activities to total tuberculosis (TB) case detection, characteristics of TB patients detected by mobile teams and their treatment outcomes. Methods: This was a descriptive study using routine programme data between October 2014 and December 2014. Mobile team activities were a one-stop service and included portable digital chest radiography (CXR) and microscopy of two sputum samples. The algorithm of the case detection included screening patients by symptoms, then by CXR followed by sputum microscopy for confirmation. Diagnosed patients were started on treatment and followed until a final outcome was ascertained.
Results: A total of 9 349 people with symptoms suggestive of TB were screened by CXR, with an uptake of 96.6%. Of those who were meant to undergo sputum smear microscopy, 51.4% had sputum examinations. Finally, 504 TB patients were identified by the mobile teams and the overall contribution to total TB case detection in the respective townships was 25.3%. Among all cases examined by microscopy, 6.4% were sputum smear-positive. The treatment success rate was as high as 91.8% in the study townships, compared with the national rate of 85% (2014 cohort). Conclusions: This study confirmed the feasibility and acceptability of ACF by mobile teams in hard-to-reach contexts, especially when equipped with portable, digital CXR machines that provided immediate results. However, the follow-up process of sputum examination created a significant barrier to confirmation of the diagnosis. In order to optimize ACF through mobile team activity, future ACF activities need to be strengthened into a one-stop service, including molecular diagnostics or provision of sputum cups to all presumptive TB cases prior to CXR, with testing if the CXR is suggestive of TB. abstract_id: PUBMED:27865243 Lessons learnt from active tuberculosis case finding in an urban slum setting of Agra city, India. Active case finding (ACF) is recognized as one of the key strategies to reach the missing 3 million cases in high tuberculosis (TB) burden countries. In India, we conducted ACF as a pilot project to assess its operational feasibility in four slums of Agra city in 2012 and covered 3940 households (in 14 wards) with a population of 21,870. Trained community volunteers visited households with an intention to provide information on TB and refer those with cough ≥2 weeks for sputum smear examination. Volunteers identified 8 persons with cough of ≥2 weeks by asking the first or the main respondent of the household. However, by directly asking (or probing) all available members of the household, they identified 374 persons with cough of ≥2 weeks. All 382 persons with cough of ≥2 weeks were referred for sputum smear examination. While 40% of those referred reached health facilities for sputum examination on their own, 60% had to be accompanied by the community volunteers to the health facility for sputum smear examination by the Ziehl-Neelsen staining method. Eventually, seven persons were found to be sputum smear positive. This study highlighted important aspects for implementing ACF: first, all household members have to be asked about TB symptoms, and second, mere referral for sputum examination is not enough and there is a need to support people to reach the health facility for sputum smear examination. abstract_id: PUBMED:29732982 Prevalence of Tuberculosis in a Prison in Tehran by Active Case Finding. Background: Tuberculosis (TB) is one of the most important infectious diseases among prisoners. TB screening plays an important role in prevention among prisoners and their visitors, as well as in better care of the patients. Active case finding is a screening method that is a relatively easy way to find prisoners with suspected TB. Therefore, we aimed to assess the prevalence of TB in male prisoners in Tehran, Iran. Methods: This study was conducted among male prisoners in Great Tehran Prison to screen and diagnose TB patients from October 2013 to May 2014.
We used active case finding (ACF) as a screening method to find prisoners with suspected TB: healthcare staff explained TB symptoms to prisoners, and individuals with those symptoms were referred to the clinic for further workup (sputum sample tests). Results: Of a total of 6900 prisoners, 448 (6.5%) were identified as TB suspects by ACF and only nine patients were diagnosed with tuberculosis, indicating a prevalence of 0.13%. Conclusion: The relatively low TB prevalence among prisoners, together with the reliance on common, nonspecific symptoms in ACF, may result in many false positive cases during screening (6.5% vs. 0.13%). Nevertheless, ACF is a fast, easy and affordable method of finding prisoners with suspected TB. In addition, during this active method prisoners are educated about TB symptoms, so it has a more prolonged effect towards screening goals. abstract_id: PUBMED:27051605 Experience of active tuberculosis case finding in nearly 5 million households in India. In India, to increase tuberculosis (TB) case detection under the National Tuberculosis Programme, active case finding (ACF) was implemented by the Global Fund-supported Project Axshya, among high-risk groups in 300 districts. Between April 2013 and December 2014, 4.9 million households covering ~20 million people were visited. Of 350 047 presumptive pulmonary TB cases (cough of ⩾2 weeks) identified, 187 586 (54%) underwent sputum smear examination and 14 447 (8%) were found to be smear-positive. ACF resulted in the detection of a large number of persons with presumptive pulmonary TB and smear-positive TB. Ensuring sputum examination of all those with presumptive TB was a major challenge. abstract_id: PUBMED:23501083 Increased case finding of tuberculosis from sputum and sputum deposits after magnetic bead concentration of mycobacteria. Concentration of mycobacteria from sputum by centrifugation prior to acid-fast microscopy increases case finding compared to direct microscopy of the sputum (direct smear). However, centrifugation has to be performed outside the safety cabinet and many laboratories do not have access to a centrifuge. Magnetic bead extraction of the mycobacteria is an alternative method that can be performed in a cabinet with just a magnet. Magnetic TB-Bead (Microsens Medtech Ltd) extraction of mycobacteria from sputum prior to microscopy was compared to direct smear on 78 sputum samples. Microscopy of the TB-Bead extracts identified all 26 of the direct smear-positive samples, either with the same microscopy score or, in 19/27 of samples, with an increased microscopy score which aided microscopy detection. In addition, microscopy of the TB-Bead extracts identified 10 additional positive samples compared to direct smear, which represents a statistically significant increase in case finding of 38% (p = 0.002) compared to direct smear. In a separate study, TB-Beads enabled a further 4 positive samples to be detected from 30 centrifuged pellets that were originally smear negative; two of these were subsequently found to be positive when the original deposits were reinvestigated by smear microscopy. By concentrating mycobacteria from sputum and sputum deposits, TB-Beads have been demonstrated to increase the number of positive sputum samples, which could increase case-finding. The TB-Bead method is simple and rapid and compatible with use within a safety cabinet. abstract_id: PUBMED:34388976 Comparison of yield and relative costs of different screening algorithms for tuberculosis in active case-finding: a cross-section study.
Background: Some tuberculosis (TB) patients are missed if symptomatic screening is based only on the main TB likely symptoms. This study was conducted to compare the yield and relative costs of different TB screening algorithms in active case-finding in the whole population in China. Methods: The study population was screened based on the TB likely symptoms through face-to-face interviews in 27 selected communities from 10 counties of 10 provinces in China. If the individuals had any of the enhanced TB likely symptoms, both chest X-ray and sputum tests were then carried out for them. We used the McNemar test to analyze the difference in TB detection among four algorithms in active case-finding. Of the four algorithms, two were from WHO recommendations (1a/1c), one from the China National Tuberculosis Program, and one from this study with the enhanced TB likely symptoms. Furthermore, a two-way ANOVA analysis was performed to analyze the cost difference in the performance of active case-finding adjusted by different demographic and health characteristics among different algorithms. Results: The algorithm with the enhanced TB likely symptoms defined in this study could increase the yield of TB detection in active case-finding, compared with algorithms recommended by WHO (p < 0.01, Kappa 95% CI: 0.93-0.99) and China NTP (p = 0.03, Kappa 95% CI: 0.96-1.00). There was a significant difference in the total costs among the three algorithms WHO 1c/2/3 (F = 59.13, p < 0.01). No significant difference in the average costs for one active TB case screened and diagnosed through the process among Algorithms 1c/2/3 was evident (F = 2.78, p = 0.07). The average cost for one bacteriologically positive case through algorithm WHO 1a was about twice the cost for one active TB case through algorithms WHO 1c/2/3. Conclusions: Active case-finding based on the enhanced symptom screening is meaningful for TB case-finding and it could identify more active TB cases in time. The findings indicated that this enhanced screening approach cost more compared to the algorithms recommended by WHO and China NTP, but the increased yield resulted in comparable costs per patient. Screening only for smear/bacteriologically positive TB cases in active case-finding cost much more. Answer: Frontloaded sputum microscopy, which involves examining two sputum specimens collected on the same day, has been evaluated as an option in active tuberculosis (TB) case finding. However, a study conducted in Phnom Penh, Cambodia, found that frontloaded (spot-spot) sputum microscopy using light-emitting diode fluorescence microscopy (FM) is inferior in smear-positive yield to the conventional spot-morning collection within the spot-spot-morning (SSM) series. Relative to all smear-positive cases detected by the full SSM series, the spot-spot (SS) approach would have missed a significant proportion (14.9%) of cases, compared with 5.9% missed by the spot-morning (SM) approach (PUBMED:25519797). Therefore, based on this study, frontloaded sputum microscopy may not be the most effective option in active TB case finding, as it could result in missing a considerable number of TB cases.
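As a quick, illustrative cross-check of the figures quoted above (the counts come directly from the PUBMED:25519797 abstract; the Python snippet and its variable names are mine and not part of the original study), the missed-case proportions can be reproduced as follows:

# Counts reported for the spot-spot-morning (SSM) series with at least one positive smear
total_positive = 188       # smear-positive cases detected by the full SSM series
found_spot_spot = 160      # detected by the two same-day (spot-spot) smears
found_spot_morning = 177   # detected by the spot + next-morning (spot-morning) smears

missed_ss = total_positive - found_spot_spot       # 28 cases
missed_sm = total_positive - found_spot_morning    # 11 cases
print(f"missed by SS: {missed_ss / total_positive:.1%}")                 # 14.9%
print(f"missed by SM: {missed_sm / total_positive:.1%}")                 # 5.9%
print(f"difference:   {(missed_ss - missed_sm) / total_positive:.1%}")   # 9.0%

Running this reproduces the 14.9%, 5.9%, and 9.0% figures cited in the answer.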
Instruction: Effects of graft quality on non-urgent liver retransplantation survival: should we avoid high-risk donors? Abstracts: abstract_id: PUBMED:22968536 Effects of graft quality on non-urgent liver retransplantation survival: should we avoid high-risk donors? Background: Few studies have examined the effects of graft quality on non-urgent liver retransplantation (ReLT) outcomes. We aimed to analyze graft characteristics and survival in non-urgent ReLT and the effect of using grafts with extended criteria on survival. Methods: Eighty non-urgent ReLTs were performed from June 1988 to June 2010. The whole series was divided into two identical time periods to study time-related effects. We assessed graft quality with donor risk index (DRI) and Briceño scores and recipient status with the Model for End-stage Liver Disease and Rosen scores. Low- and high-risk grafts were defined by a DRI cutoff of 1.8. Results: Graft survival was similar in both periods (1-, 5-, and 10-year graft survivals: 73.5, 46.9, and 40.8 versus 71, 47.7, and 47.7%, p=0.935) although donor quality was worse in the second period (DRI: 1.35±0.32 vs. 1.66±0.34, p<0.001). In the first period, high-risk grafts did worse than low-risk grafts (5-year survival: 0 vs. 54.5%, p=0.002) while in the second period outcomes were similar (5-year survival: 48.6 vs. 56.7%, p=0.660). Donor age was the only independent donor factor for graft survival, with lower survival when using grafts from donors over 60 years old. Conclusions: Graft quality in ReLT has worsened with time, mainly because of older donors, but nowadays the use of high-risk grafts in non-urgent ReLT is not associated with worse graft survival because of better perioperative management. In addition to being selective about recipient conditions, care should be taken when using grafts from donors over 60 years old for non-urgent ReLT. abstract_id: PUBMED:24550160 Retransplantation using living-donor right-liver grafts. Background: This study reviews the outcomes of retransplantation using living-donor right-liver grafts. Methods: A retrospective study of liver retransplants performed between 1996 and 2013 was conducted. The retransplants were divided into the DD group (with deceased donors) and the LD group (with living donors). Survival outcomes were analyzed. Results: The DD group contained 23 patients and 27 retransplants using whole-liver grafts and the LD group contained 11 patients and 11 retransplants using right-liver grafts. Vascular and biliary complications were the main indications for retransplantation in both groups. The LD group had significantly younger donors, lighter grafts, shorter cold ischemia and longer operations. The two groups were comparable in age, preoperative liver function, warm ischemia, blood loss, transfusion, intensive care unit stay, hospital stay, hospital mortality, complication and graft loss. The 1-year, 3-year and 5-year patient survival rates were 78.3%, 73.7% and 63.8%, respectively, in the DD group. The LD group had the corresponding rates all at 90.9% (P = 0.246). The 1-year, 3-year and 5-year graft survival rates were 74.1%, 65.8% and 61.5%, respectively, in the DD group. The LD group had the corresponding rates all at 90.9% (P = 0.132). Conclusion: Excellent long-term survival after retransplantation using living-donor right-liver grafts can be achieved. abstract_id: PUBMED:24112236 Early liver retransplantation in adults. Up to 23% of liver allografts fail post-transplant.
Retransplantation is the only recourse but remains controversial due to inferior outcomes. The objective of our study was to identify high-risk periods for retransplantation and then compare survival outcomes and risk factors. We performed an analysis of United Network for Organ Sharing (UNOS) data for all adult liver recipients from 2002 through 2011. We analyzed the records of 49,288 recipients; of those, 2714 (5.5%) recipients were retransplanted. Our analysis included multivariate regression with the outcome of retransplantation. The highest retransplantation rates were within the first week (19% of all retransplantations, day 0-7), month (20%, day 8-30), and year (33%, day 31-365). Only retransplantation within the first year (day 0-365) had below standard outcomes. The most significant risk factors were as follows: within the first week, cold ischemia time >16 h [odds ratio (OR) 3.6]; within the first month, use of split allografts (OR 2.9); and within the first year, use of a liver donated after cardiac death (OR 4.9). Each of the three high-risk periods within the first year had distinct causes of graft failure, risk factors for retransplantation, and survival rates after retransplantation. abstract_id: PUBMED:16908320 Results of urgent liver retransplantation in the state of São Paulo, Brazil. The treatment of end-stage liver disease includes transplantation as a life-saving procedure, although it has serious complications of hepatic artery thrombosis, liver dysfunction, or primary nonfunction, which frequently lead to the need for retransplantation. According to various reports, the incidence of retransplantation is around 10%. Given the critical organ shortage, the chance for a second transplant remains controversial on medical, ethical, and economic grounds because patient and graft survival rates after retransplantation are lower than those for primary transplantations. We retrospectively reviewed all of the urgent liver retransplants from October 2001 to February 2005 (52 months) by analyzing the number of retransplants, blood group, time between first and second liver transplantation, age, sex, and mortality. Data were obtained from the Transplantation System, State of Sao Paulo Health Secretariat. Among 1252 liver transplants performed during this period, 98 (7.82%) were urgent retransplantations. The primary procedure employed 955 (76.28%) deceased donors and 297 (23.72%) living donors. All 98 retransplants were performed using an organ from the pool of deceased donors. The retransplant rate was acceptable according to the literature, although we observed high rates of early mortality (<60 days), leading to a discussion of which patients had a better chance of survival and the best time to perform the second transplantation to use this scarce and precious resource in the best possible way. abstract_id: PUBMED:37558742 Abrogation of greater graft failure risk of female-to-male liver transplantation with donors older than 40 years or graft macrosteatosis greater than 5%. Greater graft failure risk of female-to-male liver transplantation (LT) is thought to be due to an acute decrease in hepatic estrogen signaling. Our previous research found evidence that female hepatic estrogen signaling decreases after 40 years of age or with macrosteatosis. Thus, we hypothesized that the inferiority of female-to-male LT changes according to donor age and macrosteatosis.
We stratified 780 recipients of grafts from living donors into four subgroups by donor age and macrosteatosis and compared graft failure risk between female-to-male LT and other LTs within each subgroup using a Cox model. In recipients with non-macrosteatotic donors aged ≤ 40 years, graft failure risk was significantly greater in female-to-male LT than in others (HR 2.03 [1.18-3.49], P = 0.011). Within the subgroup of recipients without hepatocellular carcinoma, the inferiority of female-to-male LT became greater (HR 4.75 [2.02-11.21], P < 0.001). Despite good graft quality, the 1-year graft failure probability was 37.9% (23.1%-57.9%) in female-to-male LT within this subgroup, while such an exceptionally high probability was not seen in any other subgroup, even with worse graft quality. When the donor was > 40 years old or macrosteatotic, graft failure risk was not significantly different between female-to-male LT and others (P > 0.60). These results were in agreement with the estrogen receptor immunohistochemistry evaluation of the donor livers. In conclusion, we found that the inferiority of female-to-male LT was present only when the donor was ≤ 40 years old and non-macrosteatotic. Abrogation of the inferiority when the donor was > 40 years old or macrosteatotic suggests the presence of dominant contributors to post-transplant graft failure other than graft quality/quantity and supports the role of hepatic estrogen signaling mismatch in graft failure after female-to-male LT. abstract_id: PUBMED:17617863 Donor factors predicting recipient survival after liver retransplantation: the retransplant donor risk index. The use of extended criteria liver donors (ECD) is controversial, especially in the setting of retransplantation. The aims of this study are to investigate the effects of ECD grafts on retransplantation and to develop a predictive mortality index in liver retransplantation based on the previously established donor risk index. The United Network for Organ Sharing (UNOS) liver transplant dataset was analyzed for all adult, non-status 1, liver retransplantations occurring in the United States since February 2002. All donors were categorized for multiple characteristics of ECD, and using multivariate survival models a retransplant donor risk index (ReTxDRI) was developed. A total of 1327 retransplants were analyzed. There were 611 (46%) recipients who received livers with at least one ECD criterion. The use of ECD grafts in recipients with HCV did not incur worse survival than the non-ECD grafts. The addition of the cause of recipient graft failure to the donor risk index formed the ReTxDRI. After adjusting for multiple recipient factors, the ReTxDRI was predictive of overall recipient survival and was a strongly independent predictor of death after retransplantation (HR 2.49, 95% CI 1.89-3.27, p < 0.0001). The use of the ReTxDRI can improve recipient and donor matching and help to optimize posttransplant survival in liver retransplantation. abstract_id: PUBMED:36406331 Assessing Donor Liver Quality and Restoring Graft Function in the Era of Extended Criteria Donors. Liver transplantation (LT) is the final treatment option for patients with end-stage liver disease. The increasing donor shortage results in the wide usage of grafts from extended criteria donors across the world. Using such grafts is associated with elevated incidences of post-transplant complications, including initial nonfunction and ischemic biliary tract diseases, which significantly reduce recipient survival.
Although several clinical factors have been demonstrated to impact donor liver quality, accurate, comprehensive, and effective assessment systems to guide decision-making for organ usage, restoration or discard are lacking. In addition, the development of biochemical technologies and bioinformatic analysis in recent years helps us better understand graft injury during the perioperative period and find potential ways to restore graft function. Moreover, such advances reveal the molecular profiles of grafts or perfusate that are susceptible to poor graft function and provide insight into finding novel biomarkers for graft quality assessment. Focusing on donors and grafts, we updated potential biomarkers in donor blood, liver tissue, or perfusates that predict graft quality following LT, and summarized strategies for restoring graft function in the era of extended criteria donors. In this review, we also discuss the advantages and drawbacks of these potential biomarkers and offer suggestions for future research. abstract_id: PUBMED:28434081 The outcomes of pediatric liver retransplantation from a living donor: a 17-year single-center experience. Purpose: Liver retransplantation is the only therapeutic option for patients with graft failure after liver transplantation. The aim of this study is to evaluate the outcomes of pediatric retransplantation from a living donor at a single center. Methods: Between December 1998 and August 2015, retransplantation from a living donor was performed for 14 children (<18 years of age) at Kumamoto University Hospital. The characteristics of the retransplantation recipients and the clinicopathological factors between primary transplantation and retransplantation were analyzed to detect the prognostic factors. Results: In retransplantation, the operative time was longer and the amount of blood loss was greater in comparison to primary transplantation. The 1-, 3-, and 5-year survival rates from the date of retransplantation were 85.7, 85.7, and 78.6%, respectively. The rates of re-laparotomy after primary transplantation, bile leakage and postoperative bleeding after retransplantation were higher than after primary transplantation. Among the three patients who died after retransplantation, the operative time, the rate of re-laparotomy after primary transplantation and the incidence of gastrointestinal complications were higher in comparison to the surviving patients. Conclusion: Pediatric retransplantation from a living donor is an acceptable procedure that could save the lives of recipients with failing allografts when organs from deceased donors are scarce. To ensure good results, it is essential to make an appropriate assessment of the cardiopulmonary function and the infectious state of the patients before Re-LDLT. abstract_id: PUBMED:34160850 The Liver Retransplantation Risk Score: a prognostic model for survival after adult liver retransplantation. High-risk combinations of recipient and graft characteristics are poorly defined for liver retransplantation (reLT) in the current era. We aimed to develop a risk model for survival after reLT using data from the European Liver Transplantation Registry, followed by internal and external validation. From 2006 to 2016, 85 067 liver transplants were recorded, including 5581 reLTs (6.6%).
The final model included seven predictors of graft survival: recipient age, model for end-stage liver disease score, indication for reLT, recipient hospitalization, time between primary liver transplantation and reLT, donor age, and cold ischemia time. By assigning points to each variable in proportion to their hazard ratios, a simplified risk score was created ranging from 0 to 10. Low-risk (0-3), medium-risk (4-5), and high-risk (6-10) groups were identified with significantly different 5-year survival rates of 56.9% (95% CI 52.8-60.7%), 46.3% (95% CI 41.1-51.4%), and 32.1% (95% CI 23.5-41.0%), respectively (P < 0.001). External validation showed that the expected survival rates were closely aligned with the observed mortality probabilities. The Retransplantation Risk Score identifies high-risk combinations of recipient- and graft-related factors prognostic for long-term graft survival after reLT. This tool may serve as guidance for clinical decision-making on liver acceptance for reLT. abstract_id: PUBMED:37973525 Emergency Liver Retransplantation With Marginal Donors. Background: The goal of the present study is to determine if using marginal donors negatively impacts the outcomes of emergency liver retransplantation. Methods: A retrospective case-control study was performed, including all emergency liver retransplantations done in our center between 1990 and 2021. Recipients from the control group received the second grafts from "ideal donors", and patients from the case group received them from marginal donors. Analyzed variables included demographics of recipients and donors, complications, and survival rates. Results: 38 emergency retransplantations were performed. 23 recipients were included in the control group, and the remaining 15 were in the case group. The second donors from the case group were significantly older (mean age 58 vs 71 years old, P < 0.0001). In contrast, there were no differences between groups regarding the mean age of recipients, comorbidities, Model for End-Stage Liver Disease scores, or causes of retransplantation (the most common was hepatic artery thrombosis). No differences were found in early perioperative death rates (control group 26.1% vs case group 20%, P = 1) and, although the case group seemed to have slightly poorer outcomes in long-term survival (control group 70%, 61%, and 55% vs case group 73%, 59%, and 39%, respectively, at 1, 5, and 10 years), the differences were not statistically significant (log-rank = 0.808). Conclusions: The use of marginal donors for emergency liver retransplantation proved safe in our study, as there were no differences in complications or in short- or mid-term survival rates. Answer: The effects of graft quality on non-urgent liver retransplantation (ReLT) survival have been examined in several studies, with varying conclusions on whether high-risk donors should be avoided. One study found that graft quality in ReLT has worsened over time, mainly due to the use of older donors. However, the use of high-risk grafts, as defined by a donor risk index (DRI) cutoff of 1.8, is not currently associated with worse graft survival due to improved perioperative management. The study suggests that care should be taken when using grafts from donors over 60 years old for non-urgent ReLT (PUBMED:22968536). Another study focusing on retransplantation using living-donor right-liver grafts reported excellent long-term survival, suggesting that living donors can be a viable option for ReLT (PUBMED:24550160).
An analysis of United Network for Organ Sharing (UNOS) data identified high-risk periods for retransplantation within the first year post-transplant, with distinct causes of graft failure and survival rates after ReLT. The study suggests that different risk factors are associated with different periods post-transplant (PUBMED:24112236). A study from São Paulo, Brazil, discussed the high rates of early mortality after urgent liver retransplantation, raising questions about patient selection and timing for the second transplantation to optimize the use of scarce resources (PUBMED:16908320). Research on the impact of donor gender and age on graft failure risk in female-to-male liver transplantation (LT) found that the inferiority of female-to-male LT was only present when the donor was ≤ 40 years and non-macrosteatotic. This suggests that other factors may dominate post-transplant graft failure risk when the donor is older than 40 years or the graft has macrosteatosis (PUBMED:37558742). The development of the retransplant donor risk index (ReTxDRI) helps predict mortality based on extended criteria liver donors (ECD) and recipient graft failure causes, indicating that ECD grafts do not necessarily incur worse survival (PUBMED:17617863). In summary, while there is evidence that graft quality can impact ReLT survival, the decision to use high-risk donors should be made on a case-by-case basis, considering improved perioperative management and other factors such as donor age and the presence of extended criteria.
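For readers who want a concrete view of how the published risk stratification behaves, the short sketch below uses only the cut-offs and 5-year survival rates reported in the PUBMED:34160850 abstract; the function name and structure are illustrative assumptions of mine, and the underlying assignment of points to the seven predictors is not reproduced here.

def retransplant_risk_group(score: int) -> tuple[str, float]:
    # Map a 0-10 Liver Retransplantation Risk Score to its published risk group
    # and the reported 5-year survival rate (%).
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 3:
        return "low risk", 56.9      # reported 5-year survival 56.9%
    if score <= 5:
        return "medium risk", 46.3   # reported 5-year survival 46.3%
    return "high risk", 32.1         # reported 5-year survival 32.1%

print(retransplant_risk_group(2))   # ('low risk', 56.9)
print(retransplant_risk_group(7))   # ('high risk', 32.1)

Such a mapping only summarizes the published groups; it is not a substitute for the full scoring model.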
Instruction: Are selective serotonin reuptake inhibitors cardiac teratogens? Abstracts: abstract_id: PUBMED:9443704 Selective serotonin reuptake inhibitors: infrequent medical adverse effects. The selective serotonin reuptake inhibitors have a very favorable safety profile. However, because of the wide use of this drug group, even low-frequency adverse effects become more prevalent. This article reviews 5 of these relatively infrequent adverse medical effects associated with the use of selective serotonin reuptake inhibitors: the syndrome of inappropriate antidiuretic hormone secretion, extrapyramidal effects, bleeding complications, cardiac arrhythmias, and the serotonin syndrome. abstract_id: PUBMED:26448056 Cardiac Toxicity in Selective Serotonin Reuptake Inhibitor Users. Several classes of recreational and prescription drugs have been associated with an increased risk of cardiovascular disease and the occurrence of arrhythmias, which may be involved in sudden deaths in chronic users even at therapeutic doses. The study presented herein focuses on pathological changes involving the heart, which may be caused by selective serotonin reuptake inhibitor use and their possible role in the occurrence of sudden cardiac death. A total of 40 cases were included in the study and were divided evenly into 2 groups: 20 cases of patients treated with selective serotonin reuptake inhibitors and 20 cases of sudden deaths involving patients void of any drug treatment. The first group included 16 patients treated with citalopram and 4 with sertraline. Autopsies, histology, biochemistry, and toxicology were performed in all cases. Pathological changes in selective serotonin reuptake inhibitor users consisted of various degrees of interstitial and perivascular fibrosis as well as a small degree of perineural fibrosis within the myocardium of the left ventricle. Within the limits of the small number of investigated cases, the results of this study seem to confirm former observations on this topic, suggesting that selective serotonin reuptake inhibitors may play a potential, causative role in the pathogenesis of sudden deaths in chronic users even at therapeutic concentrations. abstract_id: PUBMED:19691085 Are selective serotonin reuptake inhibitors cardiac teratogens? Echocardiographic screening of newborns with persistent heart murmur. Background: Selective serotonin reuptake inhibitors (SSRIs) have been suspected of cardiac teratogenicity, but reports have been inconsistent. Our aim was to investigate the rate of nonsyndromic congenital heart defects in newborns exposed in utero to SSRIs compared with unexposed controls. Methods: This prospective study of women who gave birth at our tertiary center from 2000 to 2007 yielded 235 women who reported first-trimester SSRI use during pregnancy. All newborns born during the study period and found to have a persistent cardiac murmur on day 2 or 3 of life were referred for examination by a pediatric cardiologist and by echocardiography. The findings were compared between the newborns who were exposed to SSRIs and those who were not. Results: Nonsyndromic congenital heart defects were identified by echocardiography in 8 of 235 (3.40%) newborns exposed in utero to SSRIs and in 1083 of 67,636 (1.60%) non-exposed newborns. The difference in prevalence between the two groups was significant (relative risk, 2.17; 95% confidence interval, 1.07-4.39). The prevalence rates for paroxetine and fluoxetine exposure were 4.3% and 3.0%, respectively. 
All cardiac defects in the study group were mild: ventricular septal defect (6), bicuspid aortic valve (1) and right superior vena cava to coronary sinus (1). Conclusions: Newborns exposed in utero to SSRIs, have a twofold higher risk of mild nonsyndromic heart defects than unexposed infants. The data suggest that women who require SSRI treatment during pregnancy can be reassured that the fetal risk is low and possible cardiac malformations will probably be mild. Late-targeted ultrasound and fetal echocardiography at 22 to 23 weeks' gestation are recommended in this patient group. abstract_id: PUBMED:29674777 The Diagnostic clinical Interview for Drug Withdrawal 1 (DID-W1) – New Symptoms of Selective Serotonin Reuptake Inhibitors (SSRI) or Serotonin Norepinephrine Reuptake Inhibitors (SNRI): inter-rater reliability Aim: A wide range of clinical phenomena have been reported with dose reduction or drug discontinuation of Selective Serotonin Reuptake Inhibitors (SSRIs) or Serotonin Norepinephrine Reuptake Inhibitors (SNRIs). In 2015, a new classification of SRIs/SNRIs withdrawal (i.e., new withdrawal symptoms, rebound symptoms withdrawal, persistent post-withdrawal disorders) was outlined on the basis of the literature and clinical observations. A semistructured clinical interview, the Diagnostic clinical Interview for Drug Withdrawal 1 - New Symptoms of SSRI and SNRI (DID-W1), was developed for identifying and differentiating such syndromes. Its inter-rater reliability has been tested. Methods: Seventeen consecutive outpatients with a history of SSRI or SNRI dose reduction or discontinuation were assessed independently by 2 clinicians at different times during the same day. Percent agreement, Cohen’s kappa, and the squared correlation coefficient were used to measure inter-rater reliability. Results: The percent agreement for the whole interview was 97.06%, the Cohen’s kappa 0.85 (95% CI of 0.61-1.08), the squared correlation coefficient 0.72. Discussion And Conclusions: The kappa values indicated excellent inter-rater agreement. Validity evaluation and comparison with other instruments need to be performed. The DID-W1 may help diagnosing the clinical phenomena related to SSRI and SNRI discontinuation, their differentiation from relapse, and the potential iatrogenic origin of psychiatric symptoms in clinical practice. abstract_id: PUBMED:27430585 Selective serotonin reuptake inhibitors for depression in pregnancy. Perinatal depression is associated with a high risk of morbidity and mortality and may have long-term consequences on child development. The US Preventive Services Task Force has recently recognized the importance of identifying and treating women with depression in the perinatal period. However, screening and accessing appropriate treatment come with logistical challenges. In many areas, there may not be sufficient access to psychiatric care, and, until these resources develop, the burden may inadvertently fall on obstetricians. As a result, understanding the risks of perinatal depression in comparison with the risks of treatment is important. Many studies of selective serotonin reuptake inhibitors in pregnancy fail to control for underlying depressive illness, which can lead to misinterpretation of selective serotonin reuptake inhibitor risk by clinicians. This review discusses the risks and benefits of selective serotonin reuptake inhibitor treatment in pregnancy within the context of perinatal depression. 
Whereas selective serotonin reuptake inhibitors may be associated with certain risks, the absolute risks are low and may be outweighed by the risks of untreated depression for many women and their offspring. abstract_id: PUBMED:29557689 Selective serotonin reuptake inhibitors' passage into human milk of lactating women. Objective: Selective serotonin reuptake inhibitors are commonly used for the treatment of pregnancy-related and postnatal depression. However, only a few studies have evaluated the passage of these drugs into human milk, often with conflicting results. Here, we sought to evaluate the passage of selective serotonin reuptake inhibitors into human milk in the first days after delivery and their potential association with neonatal outcomes. Study design: The passage of selective serotonin reuptake inhibitors into human milk was expressed both as percentage of milk-to-plasma ratio of drug concentrations and as the relative infant dose (RID). Selective serotonin reuptake inhibitors were quantified by high-performance liquid chromatography combined with mass spectrometry. Results: Nineteen women treated with selective serotonin reuptake inhibitors during the third trimester of pregnancy and lactation were considered. Human milk-to-plasma ratios ranged from 51.1% to 703.4%. The patients had a median RID of 1.5%, with differences among the selective serotonin reuptake inhibitors. All newborns had been breastfed from birth up to day three of life. At 1 week follow up, 58% of infants were breastfed, 37% were complementary fed, and 5% were formula fed. No side effects due to passage of selective serotonin reuptake inhibitors into human milk were found. Conclusions: Selective serotonin reuptake inhibitors were detected in human milk, with milk-to-plasma ratios which in some cases exceeded 100%. Given the need for maternal therapy and the low incidence of neonatal adverse events, it is advisable not to preclude breastfeeding a priori but recommend it with careful follow-up. abstract_id: PUBMED:35544938 Effects of tricyclic antidepressants, selective serotonin reuptake inhibitors, and selective serotonin-norepinephrine reuptake inhibitors on the ocular surface. Purpose: This study aimed to investigate the effects of tricyclic antidepressants, selective serotonin reuptake inhibitors, and selective serotonin noradrenaline reuptake inhibitors on the ocular surface. Methods: The study included 330 eyes of 165 patients using antidepressants and 202 eyes of 101 controls. Tear fluid breakup time, Schirmer I test, and Ocular Surface Disease Index (OSDI) questionnaire were administered. Beck Depression Inventory and Beck Anxiety Inventory were applied to record drug use, dosages, psychiatric disease duration, and remission time. Results: Mean tear fluid breakup time was 14.29 ± 4.81 (4-26) sec, and Schirmer I test value was 16.05 ± 5.89 (2-28) mm in the study group. Tear fluid breakup time was 18.16 ± 2.12 (15-24) sec and Schirmer I test value was 16.64 ± 2.31 (15-24) mm in the control group (p<0.001 and p=0.005, respectively). In the study group, 38.18% (n=63) of patients had dry eye, and 17% (n=18) of patients in the control group had dry eye (p<0.001). The mean OSDI score was 82.56 ± 16.21 (66-100) in the tricyclic antidepressants Group, 60.02 ± 29.18 (10-100) in the serotonin reuptake inhibitors Group, and 22.30 ± 20.87 (0-75) in the serotonin-noradrenaline reuptake inhibitors Group (p<0.001).
Mean tear fluid breakup time was 14.36 ± 3.35 (10-20) sec in the tricyclic antidepressants Group, 13.94 ± 5.81 (4-26) sec in the serotonin reuptake inhibitors Group, and 14.93 ± 4.20 (6-20) sec in the serotonin-noradrenaline reuptake inhibitors Group (p=0.730). The mean Schirmer I test value was 9.90 ± 7.22 (2-30) mm in the tricyclic antidepressants Group, 15.55 ± 5.15 (2-25) mm in the serotonin reuptake inhibitors Group and 17.71 ± 4.21 (10-30) mm in the serotonin-noradrenaline reuptake inhibitors Group (p<0.001). There was no statistically significant difference between OSDI score, tear fluid breakup time, and Schirmer I test values in the serotonin reuptake inhibitors and serotonin-noradrenaline reuptake inhibitors subgroups. Conclusions: Dry eye is common in antidepressant users, but considering the ocular surface, serotonin-noradrenaline reuptake inhibitors may be more reliable than other antidepressants. Patients using serotonin-noradrenaline reuptake inhibitors have lower OSDI scores. Serotonin-noradrenaline reuptake inhibitors, which are useful in chronic pain syndromes, may also have a corrective effect on dry eye symptoms. abstract_id: PUBMED:30170040 Prenatal exposure to selective serotonin reuptake inhibitors and serotonin norepinephrine reuptake inhibitors and risk for persistent pulmonary hypertension of the newborn: a systematic review, meta-analysis, and network meta-analysis. Background: There is a marked increase in the use of selective serotonin reuptake inhibitors and serotonin norepinephrine reuptake inhibitors in the last decade. Many newborns are likely to be exposed during pregnancy and labor. Objective: We aimed to evaluate the association between exposure to selective serotonin reuptake inhibitors and serotonin norepinephrine reuptake inhibitors during pregnancy and the risk for persistent pulmonary hypertension of the newborn. We sought to compare the risk for persistent pulmonary hypertension of the newborn between specific selective serotonin reuptake inhibitor agents. Study Design: MEDLINE, Embase, and Cochrane were searched up to July 2017. No language restrictions were applied. Search key words included: "SSRI," "SNRI," "pregnancy," "risk," "new-born," and "pulmonary hypertension." Retrospective cohort studies and case-control studies reporting the risk for persistent pulmonary hypertension of the newborn in the offspring of women exposed to selective serotonin reuptake inhibitors or serotonin norepinephrine reuptake inhibitors during pregnancy were extracted. Two independent researchers identified relevant data. Random effects meta-analysis was used to pool results. Odds ratios were calculated with subsequent 95% confidence intervals. Network meta-analysis was conducted, incorporating direct and indirect comparisons among different selective serotonin reuptake inhibitors. The primary outcome was risk for persistent pulmonary hypertension of the newborn after exposure to selective serotonin reuptake inhibitors or serotonin norepinephrine reuptake inhibitors during pregnancy. Results: A total of 11 studies were identified. A total of 156,978 women and their offspring were exposed to selective serotonin reuptake inhibitors or serotonin norepinephrine reuptake inhibitors during pregnancy. Persistent pulmonary hypertension of the newborn was detected among 452 exposed offspring, representing an incidence rate of 2.9 cases per 1000 live births and a number needed to harm of 1000.
The risk for persistent pulmonary hypertension of the newborn was significantly increased in the analysis of exposure to selective serotonin reuptake inhibitor/serotonin norepinephrine reuptake inhibitor in any trimester (odds ratio, 1.82; 95% confidence interval, 1.31-2.54; I2 = 72%), as well as in analysis restricted to exposure week >20 (odds ratio, 2.08; 95% confidence interval, 1.44-3.01; I2 = 76%). In network meta-analysis, sertraline was ranked most likely to have the lowest risk for persistent pulmonary hypertension of the newborn among the different selective serotonin reuptake inhibitors (P = .83). Conclusion: Exposure to selective serotonin reuptake inhibitors or serotonin norepinephrine reuptake inhibitors during pregnancy is associated with an increased risk for persistent pulmonary hypertension of the newborn. According to our findings, sertraline ranked as most likely to have the lowest risk for persistent pulmonary hypertension of the newborn compared to other selective serotonin reuptake inhibitors, suggesting it may have the best safety profile for use in pregnancy in this regard. Further studies are needed to fully establish these results. abstract_id: PUBMED:16580283 Selective serotonin reuptake inhibitors and adverse pregnancy outcomes. Objective: The purpose of this study was to assess the safety of the use of selective serotonin reuptake inhibitors in pregnancy. Study Design: We carried out a retrospective cohort study of 972 pregnant women who had been given at least 1 selective serotonin reuptake inhibitor prescription in the year before delivery and 3878 pregnant women who did not receive selective serotonin reuptake inhibitors and who were matched by the year of the infant's birth, the type of institute at birth, and the mother's postal code from 1990 to 2000 in the Canadian province of Saskatchewan. Results: The risks of low birth weight (adjusted odds ratio, 1.58; 95% CI, 1.19, 2.11), preterm birth (adjusted odds ratio, 1.57; 95% CI, 1.28, 1.92), fetal death (adjusted odds ratio, 2.23; 95% CI, 1.01, 4.93), and seizures (adjusted odds ratio, 3.87; 95% CI, 1.00, 14.99) were increased in infants who were born to mothers who had received selective serotonin reuptake inhibitor therapy. Conclusion: The use of selective serotonin reuptake inhibitors in pregnancy may increase the risks of low birth weight, preterm birth, fetal death, and seizures. abstract_id: PUBMED:15613191 Selective serotonin reuptake inhibitors for migraine prophylaxis. The objective of this study was to assess the efficacy of sertraline in migraine prophylaxis. Other selective serotonin reuptake inhibitors have been studied for migraine prophylaxis, but this is the first report with sertraline. Twenty-seven subjects were enrolled and baseline migraine frequency and severity were measured over a 4-week period. Subjects were then randomized to receive placebo or sertraline in a double-blind fashion with headache frequency and severity measured over an 8-week period. Subjects completed a daily diary reporting the occurrence, severity, and degree of impairment associated with migraine. Headache index scores, a composite measure of migraine frequency and severity, did not significantly improve between assessments at baseline (20.8 ± 14.88), 8 weeks (17.6 ± 12.27), and 12 weeks (16.7 ± 6.38) in the treatment group (n=6) (P=0.956). This finding is compared to other studies with the selective serotonin reuptake inhibitors fluoxetine, fluvoxamine, and paroxetine.
The authors believe that the selective serotonin reuptake inhibitors are not as effective as conventional migraine prophylaxis medications such as beta-blockers, tricyclic antidepressants, or divalproex sodium, but that in patients with comorbid depression who have failed conventional therapy selective serotonin reuptake inhibitors may be effective. Answer: Selective serotonin reuptake inhibitors (SSRIs) have been suspected of cardiac teratogenicity, but reports have been inconsistent. A study aimed to investigate the rate of nonsyndromic congenital heart defects in newborns exposed in utero to SSRIs compared with unexposed controls found that newborns exposed to SSRIs had a twofold higher risk of mild nonsyndromic heart defects than unexposed infants. The prevalence rates for paroxetine and fluoxetine exposure were 4.3% and 3.0%, respectively. All cardiac defects in the study group were mild, including ventricular septal defect, bicuspid aortic valve, and right superior vena cava to coronary sinus. The data suggest that while there is an increased risk, the fetal risk is low and possible cardiac malformations will probably be mild. Late-targeted ultrasound and fetal echocardiography at 22 to 23 weeks' gestation are recommended in this patient group (PUBMED:19691085). Therefore, while SSRIs may pose some risk as cardiac teratogens, the overall risk appears to be low, and the potential cardiac malformations are likely to be mild. It is important for women who require SSRI treatment during pregnancy to be reassured that the fetal risk is low, but they should also be monitored with appropriate fetal echocardiography.
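The screening study cited in the answer above (PUBMED:19691085) reports a relative risk of 2.17 (95% CI 1.07-4.39) based on 8 of 235 exposed versus 1083 of 67,636 unexposed newborns. As a worked illustration, the sketch below computes an unadjusted relative risk and its log-scale 95% confidence interval from those counts; it reproduces the published figures only approximately, since the original estimate may reflect rounding or additional adjustment.

```python
# Unadjusted relative risk with a log-scale 95% CI from 2x2 counts.
# Counts are taken from the abstract (8/235 exposed, 1083/67,636 unexposed);
# the result is close to, but not exactly, the published RR of 2.17.
import math

def relative_risk(a, n1, c, n0, z=1.96):
    """a/n1 = events/total in exposed, c/n0 = events/total in unexposed."""
    rr = (a / n1) / (c / n0)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

rr, lo, hi = relative_risk(8, 235, 1083, 67636)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # roughly RR 2.13 (1.07-4.21)
```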
Instruction: Does revealing contextual knowledge of the patient's intention help nurses' handling of nurse calls? Abstracts: abstract_id: PUBMED:26725689 Does revealing contextual knowledge of the patient's intention help nurses' handling of nurse calls? Objectives: An inherent part of nurses' work is to handle nurse calls that often cause challenging interruptions to ongoing activities. In situations when nurses are interrupted by a nurse call, they need to decide whether to continue focusing on the task at hand or to abort and respond to the nurse call. The difficult decision is often influenced by a number of factors and can have implications for patient safety and quality of care. The study investigates how technology could be designed to support nurses' handling of nurse calls by allowing patients to communicate a more contextualised message revealing their intention to the nurse when issuing a nurse call. Methods: Through a qualitative methodology employing a scenario-based design approach, three different nurse call system concepts are evaluated by nurses from different departments of a Norwegian university hospital. Results: Nurses find the uncertainty of not knowing the reason behind a nurse call stressful in situations where they are required to prioritise either the calling patient or a patient they are currently nursing. Providing information about a patient's intention behind a nurse call influences the nurse's decision to various degrees depending on the situation in which they find themselves and the information that is communicated. The nurses' reflections suggested that the message communicated should be designed to contain neither too little nor too much information about the patient's needs. Conclusions: A nurse call system that allows nurses to discern the reason behind a nurse call allows them to make a more accurate decision and relieves stress. In particular, the information communicated would reduce uncertainty and lessen nurses' dependence on other factors in their decision. The design of such a system should, however, carefully consider the needs of the department in which it is deployed. abstract_id: PUBMED:33784891 Relationship between workplace ostracism and turnover intention among nurses: the sequential mediating effects of emotional labor and nurse-patient relationship. This study aimed to elaborate on the mechanism by which workplace ostracism influences turnover intention through exploring the sequential mediation effects of emotional labour and nurse-patient relationship. Using a sample of 379 nurses collected from a time-lag survey in a tertiary public hospital in China, we applied structural equation modelling techniques to test our hypothesized model. Our findings revealed that workplace ostracism positively influenced surface acting and deep acting. Workplace ostracism influenced turnover intention through the sequential mediation of surface acting and nurse-patient relationship. The findings of this study imply that nurses should receive education and training in emotional management skills to deal with workplace ostracism. Besides, fostering positive nurse-patient relationships may help reduce nurses' turnover intention. abstract_id: PUBMED:31795420 Patient-Nurse Ratio is Related to Nurses' Intention to Leave Their Job through Mediating Factors of Burnout and Job Dissatisfaction. In healthcare settings, nurses' workload, burnout, and job satisfaction are associated with the patient-nurse ratio.
Whether this ratio also affects their intention to leave the nursing profession, along with the underlying stress pathway, remains unclear. This study aimed to investigate the effects of the patient-nurse ratio on nurses' intention to leave, considering the mediating roles of burnout and job dissatisfaction. The study analyzed the data of two pooled cross-sectional surveys collected in 2013 and 2014. Measures were obtained by a structured questionnaire, which queried the average daily patient-nurse ratio (ADPNR), nurses' personal burnout, client-related burnout, job dissatisfaction, intention to leave, and other demographics. ADPNRs were standardized according to hospital levels. Multiple regression models examined mediation hypotheses, and a percentile bootstrap confidence interval was applied to determine the significance of indirect effects. A total of 1409 full-time registered nurses in medical and surgical wards of 24 secondary or tertiary hospitals in Taiwan completed self-administered questionnaires. Most of the participants were female (97.2%), and the mean age was 29.9 years. The association between the standardized ADPNR and intention to leave their job was significantly mediated by personal burnout, client-related burnout, and job dissatisfaction. Higher standardized ADPNRs predicted higher levels of personal burnout, client-related burnout, and job dissatisfaction, each of which resulted in higher levels of intention to leave the current job. The results highlight that appropriate patient-nurse ratio standards may be further discussed by selecting personal burnout, client-related burnout, and job dissatisfaction as indicators. abstract_id: PUBMED:38191960 Emotional labour and turnover intention among nurses in China: Mediating effects of nurse-patient relationship and self-rated health. Aim: This study tested the mediating role of the nurse-patient relationship and self-rated health in the effect of emotional labour on turnover intention among nurses in China. Background: The underlying mechanism behind the effect of emotional labour on turnover intention remains inadequately understood. Introduction: Nurses with a high level of emotional labour are predisposed to experiencing poor health and tension in their relationships with patients, which may increase turnover intention. Methods: A cross-sectional survey of 527 nurses in a public tertiary hospital in Qiqihar, located in China's Heilongjiang province, was conducted. Emotional labour and turnover intention were assessed using existing validated scales containing multiple items, while the nurse-patient relationship and self-rated health were assessed using single items, respectively. Baron and Kenny's causal steps and the Karlson/Holm/Breen method were adopted to test the mediating effects of the nurse-patient relationship and self-rated health in the association between emotional labour and turnover intention after adjusting for variations in sociodemographic and job characteristics. Results: Emotional labour was positively associated with turnover intention. Self-rated poor health and a disharmonious nurse-patient relationship partially mediated the positive effect of emotional labour on turnover intention. Conclusions: Emotional labour significantly affects the turnover intention of nurses working in public tertiary hospitals in China, and this effect is partially mediated by self-rated health and the nurse-patient relationship.
Implications For Nursing Practice And Nursing Policy: Giving more attention to nurses' negative emotions and work attitudes is crucial. Comprehensive strategies should be developed to enhance nurses' emotional management ability, promote their physical and psychological well-being, and improve the nurse-patient relationship in order to reduce nurse turnover. abstract_id: PUBMED:38460274 Nurses' intention to leave, nurse workload and in-hospital patient mortality in Italy: A descriptive and regression study. Higher nurse-to-patient ratios are associated with poor patient care and adverse nurse outcomes, including emotional exhaustion and intention to leave. We examined the effect of nurses' intention to leave and nurse-patient workload on in-hospital patient mortality in Italy. This was a multicentered descriptive and regression study using clinical data of patients aged 50 years or older with a hospital stay of at least two days admitted to surgical wards, linked with nurse variables including workload and education levels, work environment, job satisfaction, intention to leave, nurses' perception of quality and safety of care, and emotional exhaustion. The final dataset included 15 hospitals, 1046 nurses, and 37,494 patients. A 10% increase in intention to leave and an increase of one unit in nurse-patient workload increased the likelihood of inpatient hospital mortality by 14% (odds ratio 1.14; 95% CI 1.02-1.27) and 3.4% (odds ratio 1.03; 95% CI 1.00-1.06), respectively. No other studies have reported a significant association between intention to leave and patient mortality. To improve patient outcomes, the healthcare system in Italy needs to implement policies on safe human resources policy stewardship, leadership, and governance to ensure nurse wellbeing, higher levels of safety, and quality nursing care. abstract_id: PUBMED:32819885 Investigating emergency nurses' beliefs and experiences with patient handling in the emergency department. Background: Patient handling policy intends to decrease the risk of musculoskeletal injury for nurses. Many factors influence nurses' adherence to patient handling policy, including the context in which the activities take place. The aim of this study was to investigate emergency nurses' beliefs and experiences with patient handling in the emergency department. Methods: A phenomenological approach was used to explore the participants' experience of patient handling in the ED. Focus group interviews were held in a Victorian emergency department. The interviews were audio-recorded, transcribed, and the data were analysed using thematic analysis. Results: Five interviews were held with 40 nurse participants. Four themes were identified that described participants' beliefs and experiences of patient handling: 'Putting the patient first' describes participants' prioritisation of patient safety over their own; 'Patient-related challenges' describes the patient factors (e.g. language, mobility, size) that make patient handling more difficult; 'Staff knowledge' of policy and procedure; and 'Inadequate resources', which describes the physical and human resource limitations that made patient handling more difficult. Conclusions: Issues with equipment, education and patient handling culture are widespread, and this study reaffirms the importance of considering context in developing interventions to improve practice.
Introduction of a Safe Patient Handling Program in the ED, that addresses multiple barriers simultaneously, may improve adherence to policy, and reduce the risk of musculoskeletal injury in emergency nurses. abstract_id: PUBMED:25735553 Nurse managers' role in older nurses' intention to stay. Purpose: The purpose of this paper is to propose and test a model of the underlying mechanisms linking perceived availability of human resource (HR) practices relevant to older nurses and older nurses' intentions to stay with their hospitals. Design/methodology/approach: Quantitative data were collected from randomly selected older registered nurses (N=660) engaged in direct patient care in hospitals in Canada. Structural equation modelling was used to test the hypothesized model. Findings: The relationship between perceptions of HR practices (performance evaluation, recognition/respect) and intentions to stay was mediated by the perceived fairness with which nurse managers managed these HR practices and nurse manager satisfaction. When nurse managers were perceived to administer the HR practices fairly (high perceived procedural justice), older nurses were more satisfied with their nurse manager and, in turn, more likely to intend to stay. Research Limitations/implications: The cross-sectional research design does not allow determination of causality. Practical Implications: It is important that nurse managers receive training to increase their awareness of the needs of older nurses and that nurse managers be educated on how to manage HR practices relevant to older nurses in a fair manner. Equally important is that hospital administrators and HR managers recognize the importance of providing such HR practices and supporting nurse managers in managing these practices. Originality/value: The findings increase the understanding of how HR practices tailored to older nurses are related to the intentions of these nurses to remain with their hospital, and especially the crucial role that first-line nurse managers play in this process. abstract_id: PUBMED:37370217 Psychological safety, communication openness, nurse job outcomes, and patient safety in hospital nurses. Promoting psychological safety in a workplace is known to contribute to improved job outcomes across a wide variety of industries. This study aimed to examine the relationships between psychological safety at work and job outcomes (i.e., job satisfaction and intention to leave), and patient safety among hospital nurses; and to determine the mediating effect of communication openness on these relationships. This cross-sectional study used survey data from 867 hospital nurses working in the United States. Multiple logistic regression models and Hayes' PROCESS macro in SPSS were used. Nurses who worked in an environment with higher psychological safety levels were more likely to be satisfied in their current job, less likely to intend to leave their current job within the next year, and more likely to report favorable patient safety ratings. Communication openness mediated these relationships. When nurses feel psychologically safe at work, they are more likely to engage in open communication, which in turn can lead to greater job satisfaction, decreased turnover intention, and improved patient safety. Nurse managers should strive to develop leadership strategies that promote psychological safety in the work environment and support open communication among nurses. 
abstract_id: PUBMED:32293228 Leadership Styles of Nurse Managers and Turnover Intention. The shortage of experienced nurses is a concern in health organizations. This study investigated the impact of nurse managers' leadership styles on turnover intention among nurses in hospitals. A descriptive correlational research design was used. Samples of 250 nurses working in five hospitals were selected to complete a self-administered questionnaire. Findings show that participatory and transformational leadership styles are predominantly practiced. Correlation analysis revealed that participative and transformational leadership styles decrease turnover intention while autocratic and laissez-faire leadership styles increase turnover intention. Therefore, leadership styles of nurse managers are determinants of nurses' turnover intentions. abstract_id: PUBMED:38095274 Intention to stay of pediatric nurses: Does pediatric nurse-parent partnership and perceived stress matter? Aims: To explore intentions to stay of pediatric nurses and to determine how pediatric nurse-parent partnership and perceived stress differ among the subgroups. Background And Introduction: The shortage of nurses, particularly pediatric nurses, has become a global problem. Intention to stay has been an established predictor of nurse retention. Less is known about the heterogeneity of intention to stay and the specific characteristics, which may restrict the effectiveness of prevention and interventions for pediatric nurse retention. Methods: This was a cross-sectional online survey. A total of 603 pediatric nurses were surveyed on their pediatric nurse-parent partnership, perceived stress, and intention to stay. Latent class analysis, multinomial logistic regression, and univariate analysis were used to analyze the related factors. We followed the STROBE checklist. Results: Three latent classes of intention to stay were identified: low level, moderate level, and high level. Being aged between 31 and 35, higher perceived stress, being a contract nurse, and having no spouse were associated with the low level of intention to stay. Good sensitivity in the pediatric nurse-parent partnership was associated with a high level of intention to stay. Having a spouse was associated with a moderate level compared with a low level of intention to stay. Discussion And Conclusion: Our study was one of the first to show three different classes of intention to stay of pediatric nurses, and the specific sociodemographic information, perceived stress, and pediatric nurse-patient partnership associated with different classes of intention to stay were reported. Implications For Nursing And/or Health Policy: Our results informed the development of pediatric nurse support strategies. Nurse managers could consider the specific factors affecting the intention to stay and make great efforts in developing strategies and projects for different classes of intention to stay that can maximize nurse intention to stay. Answer: Yes, revealing contextual knowledge of the patient's intention does help nurses' handling of nurse calls. According to a study investigating how technology could be designed to support nurses' handling of nurse calls, nurses found the uncertainty of not knowing the reason behind a nurse call stressful. They often had to decide whether to prioritize the calling patient or the patient they were currently attending to.
Providing information about a patient's intention behind a nurse call influenced the nurse's decision to various degrees depending on the situation and the information communicated. The study concluded that a nurse call system that allows nurses to discern the reason behind a nurse call enables them to make a more accurate decision and relieves stress. It also suggested that the message communicated should be designed to contain neither too little nor too much information about the patient's needs (PUBMED:26725689).
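Several of the quantitative abstracts in this entry (for example, the patient-nurse ratio study, PUBMED:31795420) test mediation by estimating an indirect effect and judging its significance with a percentile bootstrap confidence interval. The sketch below illustrates that generic procedure on simulated data; the variable names, effect sizes, and sample size are invented for illustration and are not taken from the cited studies.

```python
# Percentile bootstrap CI for an indirect (mediated) effect, on synthetic data.
# Exposure -> mediator -> outcome; the indirect effect is a*b, where
# a = slope of mediator on exposure and b = slope of outcome on mediator
# (adjusting for exposure). All data here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 500
exposure = rng.normal(size=n)                   # e.g. standardized workload
mediator = 0.5 * exposure + rng.normal(size=n)  # e.g. burnout
outcome = 0.4 * mediator + 0.1 * exposure + rng.normal(size=n)  # e.g. intention to leave

def indirect_effect(x, m, y):
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

point = indirect_effect(exposure, mediator, outcome)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(exposure[idx], mediator[idx], outcome[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {point:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```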
Instruction: Aortic valve replacement with different types of prosthesis: are there differences in the outcomes during hospital phase? Abstracts: abstract_id: PUBMED:31753170 Transcatheter Aortic Valve Replacement: Procedure and Outcomes. Initially, transcatheter aortic valve replacement (TAVR) was only used in patients with severe symptomatic aortic stenosis and prohibitive risk for surgical aortic valve replacement. Subsequently, TAVR was extended to patients with high and intermediate surgical risk. Recently, the results of randomized trials in low-surgical-risk patients showed superiority or noninferiority of TAVR versus surgical aortic valve replacement in this population. Procedural outcomes have been improved. Long-term durability of transcatheter heart valves remains to be confirmed. This article presents the evolution and current status of TAVR, with respect to the different types of devices and procedures as well as its outcomes. abstract_id: PUBMED:19229427 Aortic valve replacement with different types of prosthesis: are there differences in the outcomes during hospital phase? Objective: To analyze intraoperative data and possible differences in clinical evolution during postoperative hospital phase for aortic valve replacement surgery using different types of prosthesis. Methods: Analysis of 60 patients divided into three groups. Valve replacement with bioprosthesis (20), mechanical prosthesis (20) and homologous valve (20). The mean age was 51.1 years, 60% of patients were male and 40% female; 86.7% were in NYHA II or III; 63.3% presented arterial hypertension and 18.3% had diabetes. Aetiology of valve disease was degenerative for 39%, rheumatic for 36% and endocarditis for 15%. Results: The hospital mortality was 5%; there were no differences in the incidence of septic or cardiogenic shock, acute renal failure, or rhythm disorders during surgery or intensive care, nor for total time in intensive care and mechanical ventilation. However, there were statistically significant differences in the cardiopulmonary bypass total time (P=0.02) and the aortic clamping time (P<0.0001), unfavorable to the homograft valve group. The ward admission time was greater for the mechanical valve group (P=0.05) as well as for total admission time, but without statistical significance. It was observed that patients with preoperative hematocrit smaller than 38.1% used 2.73 units of blood components, and with postoperative hematocrit smaller than 32% used 1.79 units of blood components. Echocardiography control showed minimal evolutional differences. Conclusion: The use of different types of prosthesis for this study does not cause differences in the results of postoperative hospital phase. The use of homograft valve is a feasible option with good clinical applicability. abstract_id: PUBMED:37915291 Comparable Outcomes of Bicuspid Aortic Valves for Rapid-Deployment Aortic Valve Replacement. Background: Edwards Intuity is recognized as a relatively contraindicated bioprosthesis for bicuspid aortic valve disease. This study compared the early echocardiographic and clinical outcomes of rapid-deployment aortic valve replacement for bicuspid versus tricuspid aortic valves. Methods: Of 278 patients who underwent rapid-deployment aortic valve replacement using Intuity at Seoul National University Hospital, 252 patients were enrolled after excluding those with pure aortic regurgitation, prosthetic valve failure, endocarditis, and quadricuspid valves.
The bicuspid and tricuspid groups included 147 and 105 patients, respectively. Early outcomes and the incidence of paravalvular leak were compared between the groups. A subgroup analysis compared the outcomes for type 0 versus type 1 or 2 bicuspid valves. Results: The bicuspid group had more male and younger patients. Comorbidities, including diabetes mellitus, hypertension, chronic kidney disease, and coronary artery disease, were less prevalent in the bicuspid group. Early echocardiographic evaluations demonstrated that the incidence of ≥mild paravalvular leak did not differ significantly between the groups (5.5% vs. 1.0% in the bicuspid vs. tricuspid groups, p=0.09), and the early clinical outcomes were also comparable between the groups. In the subgroup analysis between type 0 and type 1 or 2 bicuspid valves, the incidence of mild or greater paravalvular leak (2.4% vs. 6.7% in type 0 vs. type 1 or 2, p=0.34) and clinical outcomes were comparable. Conclusion: Rapid-deployment aortic valve replacement for bicuspid aortic valves demonstrated comparable early echocardiographic and clinical outcomes to those for tricuspid aortic valves, and the outcomes were also satisfactory for type 0 bicuspid aortic valves. abstract_id: PUBMED:33985812 Influence of surgical volume on outcomes in low-risk patients undergoing isolated surgical aortic valve replacement. Background: Intermediate- and high-risk patients undergoing isolated surgical aortic valve replacement have superior outcomes at higher-volume compared with lower-volume hospitals. This study examines the impact of hospital volume on outcomes in low-risk patients undergoing aortic valve replacement. Methods: Using data from our 29 North Texas hospital collaborative, we examined 2066 low-risk cases (Society of Thoracic Surgeons Predicted Risk of Mortality of ≤3%) undergoing aortic valve replacement surgery between January 1, 2012, and December 31, 2017. Cases from 3 high-volume hospitals (≥200 aortic valve replacement cases total) and 26 low-volume hospitals (<200 cases total) were examined. Results: Some 53.5% of the cases were at high-volume hospitals. No statistical differences were seen in the Predicted Risk of Mortality or pertinent demographics between the high-volume and low-volume hospitals. Significant differences in outcomes were seen between the 2 groups in operative mortality (2.28% vs 1.19%, P = .007), renal failure (2.14% vs 1.00%, P = .003), and readmission within 30 days (9.72% vs 6.25%, P < .001) when comparing low-volume with high-volume hospitals. In patients undergoing aortic valve replacement, undergoing the procedure in a low-volume hospital was an independent predictor for operative mortality (odds ratio, 1.95; 95% confidence interval, 1.20-3.20; P = .008) and renal failure (odds ratio, 2.25; 95% confidence interval, 1.33-3.81; P = .003). Conclusions: Operative mortality in low-risk patients undergoing aortic valve replacement is higher in low-volume compared with high-volume hospitals. The probability of a major complication, such as renal failure, is also greater in low-volume hospitals. The relationship between volume and outcomes in low-risk patients will require more research to understand the causes and design appropriate interventions.
Objectives: To compare the in-hospital outcomes of transcatheter aortic valve replacement (TAVR) versus surgical aortic valve replacement (SAVR) in nonagenarians. Background: Data comparing the outcomes of TAVR versus SAVR in nonagenarians are limited. Methods: Using the National Inpatient Sample years 2012-2014, hospitalization data were retrieved for subjects aged ≥90 years who underwent TAVR or SAVR for severe aortic stenosis. The incidence of in-hospital mortality and peri-procedural outcomes were compared using unadjusted, multivariate logistic regression, and propensity score matched analyses. Results: The final cohort included 6,680 records of nonagenarians undergoing aortic valve replacement, among which 5,840 (87.4%) underwent TAVR. There was no difference in the incidence of in-hospital mortality between both groups in the unadjusted (5.8% versus 6.0%, P = 0.95), multivariate (odds ratio [OR] 0.78, 95% confidence interval [CI] 0.35-1.74), and propensity score matched (OR 1.07, 95% CI 0.75-1.51) analyses. In the propensity-matched analysis, TAVR was associated with a lower incidence of acute kidney injury (OR 0.58, 95% CI 0.47-0.72), post-operative blood transfusion (OR 0.51, 95% CI 0.43-0.61), a higher likelihood of discharge to home (OR 4.71, 95% CI 3.44-5.06), and a similar incidence of pacemaker placement (OR 1.16, 95% CI 0.89-1.53) and stroke (OR 1.34, 95% CI 0.90-1.99). Conclusions: In this nationwide analysis, TAVR was associated with an overall similar incidence of in-hospital mortality and less morbidity compared with SAVR. These findings suggest that TAVR is effective and safe in nonagenarians. abstract_id: PUBMED:32669528 The Association of In-Hospital Transcatheter Aortic Valve Replacement Availability on Outcomes of Surgical Aortic Valve Replacement in Elderly Patients. Background: Transcatheter aortic valve replacement (TAVR) has been performed more and more frequently in elderly patients with aortic stenosis. We investigated the association of in-hospital availability of TAVR on outcomes of surgical aortic valve replacement (SAVR) in the era of TAVR. Methods and Results: We utilized data from the Japan Adult Cardiovascular Surgery Database. Between October 2013 and December 2016, 9,330 patients aged ≥80 years underwent isolated SAVR or SAVR with coronary artery bypass grafting in 557 centers in Japan. We assessed the associations of in-hospital TAVR availability with operative mortality and composite complications adjusting for each patient's characteristics, JapanSCORE-predicted risk scores, and hospital volumes of SAVR using generalized estimating equation methods. Observed operative mortality rates were 3.4% in all centers, 2.0% in TAVR centers and 4.0% in non-TAVR centers. The multivariable analyses showed that TAVR centers had statistically significantly lower operative mortality compared with non-TAVR centers among all patients (odds ratio 0.60, 95% confidence interval 0.41-0.89, P=0.01) and among intermediate/high-risk patients (odds ratio 0.52, 95% confidence interval 0.32-0.85, P<0.01) but not among low-risk patients (odds ratio 0.82, 95% confidence interval 0.44-1.51, P=0.52). Conclusions: In-hospital TAVR availability was associated with better outcomes of SAVR among elderly patients. This association was statistically significant among intermediate/high-risk patients but not significant among low-risk patients.
Background: The prognostic impact of tricuspid regurgitation (TR) following transcatheter aortic valve replacement (TAVR) is uncertain, and the management of patients with severe aortic stenosis and significant TR undergoing TAVR is unclear. Methods: Retrospective study investigating the role of TR severity on hospital outcomes in high risk patients with severe aortic stenosis undergoing TAVR. Results: A total of 174 participants were included in the present study. The median age was 84 years and 48% were women. The median (IR) STS score was 7.3 (4.7-13.6). The pre-procedural mean (SD) aortic valve area (AVA) was 0.69 (0.2) cm2 and the average (SD) peak and mean gradients were 71 [23]/42 [15] mmHg. Pre TAVR, 28.7% of patients had significant (moderate or severe) TR. Significant TR pre-TAVR increased the risk of in-hospital cardiovascular (CV) and all-cause and mortality [adjusted relative risk (RR) (95% CI): 14.67 (1.35-159.51) and 5.09 (1.14-22.72), respectively], and those with severe TR post-TAVR had longer hospital stay [median (IR): 9.9 (2.9-17.0) days]. No improvement or worsened TR (greater than mild) post-TAVR was associated with higher CV and all-cause mortality [adjusted RR (95% CI): 21.5 (1.81-255.96) and 8.19 (1.67-40.29), respectively]. Right ventricular systolic pressure (RVSP) was independently associated with TR severity pre and post TAVR. Conclusions: Significant TR was common among patients undergoing high risk TAVR, and is associated with increased in hospital mortality and longer hospital stay. Patients with elevated RVSP and persistent moderate or severe TR after TAVR are at higher risk of in hospital death. abstract_id: PUBMED:31980400 Network Analysis of Outcomes in Patients Undergoing Transcatheter Aortic Valve Replacement for Stenotic Bicuspid Aortic Valves According to Valve Type. Background: It is currently unknown if outcomes after transcatheter aortic valve replacement (TAVR) differ according to the prosthetic valve deployed in patients with bicuspid aortic valves (BAV). Objectives: This study evaluated valve-specific outcomes post-TAVR in patients with BAV. Methods: Literature search was performed using the Cochrane databases, PubMed, ClinicalTrials, SCOPUS and EMBASE databases from inception until July 2018. We computed risk ratios and their 95% confidence intervals for all outcomes of interest. For each outcome, the data were pooled using a multivariate random-effects meta-analysis including multiple treatment as well as direct and indirect comparisons. Results: Ten studies enrolling a total of 1547 BAV patients undergoing TAVR using 6 different prosthetic valve types were analyzed. There were no significant differences in 30-day all-cause mortality, life-threatening bleeding and device success among the diverse prosthetic valve types implanted. However, 2nd generation balloon-expandable valves had consistently lower risk of moderate-to-severe prosthetic valve regurgitation. Conclusion: In patients with BAV, there were no significant differences in 30-day all-cause mortality after TAVR among the various prosthetic valve types. abstract_id: PUBMED:28623173 Outcomes of Transcatheter and Surgical Aortic Valve Replacement in Patients on Maintenance Dialysis. Background: The introduction of transcatheter aortic valve replacement (TAVR) expanded definitive therapy of aortic stenosis to many high-risk patients, but it has not been fully evaluated in the dialysis population. 
We aimed to evaluate the current trend and in-hospital outcome of surgical aortic valve replacement (SAVR) and TAVR in the dialysis population. Methods: Severe aortic stenosis patients on maintenance dialysis who underwent SAVR or TAVR in the Nationwide Inpatient Sample database from January 1, 2005, through December 31, 2014, were included in our comparative analysis. The trends of SAVR and TAVR were assessed. In-hospital mortality, rates of major adverse events, hospital length of stay, cost of care, and intermediate care facility utilization were compared between the 2 groups using both unadjusted and propensity-matched data. Results: Utilization of aortic valve replacement in dialysis patients increased 3-fold; a total of 2531 dialysis patients who underwent either SAVR (n = 2264) or TAVR (n = 267) between 2005 and 2014 were identified. Propensity score matching yielded 197 matched pairs. After matching, a 2-fold increase in in-hospital mortality was found with SAVR compared with TAVR (13.7% vs 6.1%, P = .021). Patients who underwent TAVR had more permanent pacemaker implantation (13.2% vs 5.6%, P = .012) but less blood transfusion (43.7% vs 56.8%, P = .02). Rates of other key morbidities were similar. Hospital length of stay (19 ± 16 vs 11 ± 11 days, P < .001) and non-home discharges (44.7% vs 31.5%, P = .002) were significantly higher with SAVR. Cost of hospitalization was 25% less with TAVR. Conclusion: In patients on maintenance dialysis, TAVR is associated with lower hospital mortality, resource utilization, and cost in comparison with SAVR. abstract_id: PUBMED:26892076 Early outcomes after isolated aortic valve replacement with rapid deployment aortic valve. Objective: Minimal access aortic valve replacement is associated with favorable clinical outcomes; however, several meta-analyses have reported significantly longer crossclamp times compared with a full sternotomy. We examined the procedural and early safety outcomes after isolated rapid deployment aortic valve replacement by surgical approach in patients enrolled in the Surgical Treatment of Aortic Stenosis With a Next Generation Surgical Aortic Valve trial. Methods: The Surgical Treatment of Aortic Stenosis With a Next Generation Surgical Aortic Valve trial was a prospective, multicenter, single-arm study, with successful implants in 287 patients with aortic valve stenosis who underwent rapid deployment aortic valve replacement using the EDWARDS INTUITY Valve System (Edwards Lifesciences, Irvine, Calif). Patients were evaluated perioperatively for procedural times and technical success rates; at discharge, for hospital length of stay; and, at 30 days, for early adverse events. Results: A total of 158 patients underwent isolated aortic valve replacement through a full sternotomy (n = 71), upper hemisternotomy (n = 77), or right anterior thoracotomy (n = 10). Mean age at baseline was 75.7 ± 7.2 years. Mean aortic crossclamp and cardiopulmonary bypass times (minutes) were similar for full sternotomy and upper hemisternotomy, 43.5 ± 32.5/71.6 ± 41.8 and 43.1 ± 13.1/69.6 ± 19.1, respectively, and significantly longer for right anterior thoracotomy, 88.3 ± 18.6/122.2 ± 22.1 (P < .000). Early adverse event rates were similar, and in-hospital mortality rates were low regardless of surgical approach. Conclusions: These data suggest that isolated rapid deployment aortic valve replacement through an upper hemisternotomy can lead to shorter crossclamp times than has been reported historically in the literature.
This may facilitate minimal access aortic valve replacement by eliminating the issue of prolonged crossclamp times. Further, low in-hospital mortality and new permanent pacemaker implant rates were observed regardless of surgical approach. Answer: The outcomes during the hospital phase after aortic valve replacement do not differ significantly between the types of prosthesis used. A study analyzing 60 patients divided into three groups (bioprosthesis, mechanical prosthesis, and homologous valve) found that while there were statistically significant differences in cardiopulmonary bypass total time and aortic clamping time, with longer times unfavorable to the homograft valve group, the overall hospital mortality was 5%, with no differences in the incidence of septic or cardiogenic shock, acute renal failure, or rhythm disorders during surgery or intensive care, nor in total time in intensive care and mechanical ventilation. The ward admission time was greater for the mechanical valve group, but this did not reach statistical significance for total admission time. Echocardiography control showed minimal evolutional differences, suggesting that the type of prosthesis used does not cause significant differences in the results of the postoperative hospital phase, and the use of homograft valve is a feasible option with good clinical applicability (PUBMED:19229427). Therefore, based on the available abstract, it can be concluded that while there may be some procedural differences, the choice of prosthesis type does not significantly impact the immediate outcomes during the hospital phase following aortic valve replacement.
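Several comparisons in this entry reduce to a 2x2 table; for example, the propensity-matched dialysis cohort (PUBMED:28623173) reports in-hospital mortality of 13.7% with SAVR versus 6.1% with TAVR across 197 matched pairs. The sketch below shows the usual odds ratio and log-scale 95% confidence interval calculation for such a table; the event counts are reconstructed from the reported percentages, so they are approximate and the published test statistic may differ.

```python
# Odds ratio with a log-scale 95% CI from a 2x2 table.
# Counts are reconstructed from the reported percentages in the
# propensity-matched dialysis cohort (197 pairs, 13.7% vs 6.1% mortality),
# so they are approximate illustrations rather than the published data.
import math

def odds_ratio(a, b, c, d, z=1.96):
    """a,b = events/non-events in group 1; c,d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

savr_deaths = round(0.137 * 197)   # ~27
tavr_deaths = round(0.061 * 197)   # ~12
or_, lo, hi = odds_ratio(savr_deaths, 197 - savr_deaths,
                         tavr_deaths, 197 - tavr_deaths)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # roughly OR 2.45 (1.20-4.99)
```

This is consistent with the roughly 2-fold increase in mortality with SAVR described in the abstract.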
Instruction: Does the interval from primary surgery to chemotherapy influence progression-free survival in ovarian cancer? Abstracts: abstract_id: PUBMED:12217760 Does the interval from primary surgery to chemotherapy influence progression-free survival in ovarian cancer? Objectives: The objective of this study was to determine whether the length of the interval from primary surgery to commencement of chemotherapy has any direct effect on progression-free survival in ovarian cancer. Methods: The progression-free survival of 472 patients enrolled in four trials who had all received platinum-containing chemotherapy (either in combination with a taxane or cyclophosphamide) was subjected to univariate analysis. Dividing subjects into those above and below the median interval from surgery to chemotherapy formed two groups for analysis. The analysis was stratified by study and arm/cohort within study to remove any possible influence of the different studies and study doses. Multivariate analysis was then performed including stage, bulk of residual disease, and performance status as well as interval to starting chemotherapy. Results: The median interval from surgery to chemotherapy was 22 days (range 7-100). Univariate analysis of the above median and below median groups showed worse progression-free survival for those with earlier treatment (hazard ratio 0.84, P = 0.14, 95% CI 0.67-1.06); however, those treated earlier tended to have bulkier residual disease (>2 cm; P = 0.006). When multivariate analysis was performed incorporating residual disease status, FIGO stage, and performance status, the hazard rate ratio for interval to surgery was 0.99 (P = 0.91, 95% CI 0.79-1.24). Conclusions: This study suggests that the interval from surgery to commencement of chemotherapy is not an independent prognostic factor for progression-free survival. abstract_id: PUBMED:16800244 Ovarian cancer stage IIIC. Consequences of treatment level on overall and progression-free survival. Background: Maximum cytoreduction at primary surgery has been found to be one of the strongest prognostic factors for survival of ovarian cancer. The aim of the study was to investigate the influence of hospital level (primary vs secondary care centre), number and timing of surgery and chemotherapy on how radical the surgery was at primary treatment of epithelial ovarian cancer Stage IIIC. Material And Methods: A retrospective study based on record information from all patients with epithelial ovarian cancer Stage IIIC treated at the Norwegian Radium Hospital (NRH) 1985-2000, in total 776, subdivided into four groups: 1) Local primary surgery, no direct re-operation at NRH, no interval debulking; 2) local primary surgery, no direct re-operation, but interval debulking after 3-4 courses of chemotherapy at NRH; 3) local primary surgery, direct re-operation at NRH, no interval debulking; 4) primary surgery at NRH. Lymph node biopsies at re-operation in early stages and upgrading of stage where necessary were registered. Results: Whether surgery was radical or not was an independent prognostic factor for overall and progression-free survival. The treatment group was an independent prognostic factor for overall, but not for progression-free survival. Group 3 had significantly the best overall and progression-free survival (p = 0.01 and 0.05). For macroscopically radical surgery both overall and progression-free survival were found significantly better for groups 3, 4 and 1 than for group 2.
Most lymph node biopsies were performed during the last period and 28% were upgraded from Stage I and II to IIIC. More patients were referred for primary surgery at NRH during the last 5-year period during which overall survival and time to progression were significantly better. Interpretation: Whether primary surgery is radical or not is a significant prognostic factor for survival and primary surgery is best performed by specialists in gynaecological oncology. abstract_id: PUBMED:32522770 Delays from neoadjuvant chemotherapy to interval debulking surgery and survival in ovarian cancer. Introduction: Delays from primary surgery to chemotherapy are associated with worse survival in ovarian cancer, however the impact of delays from neoadjuvant chemotherapy to interval debulking surgery is unknown. We sought to evaluate the association of delays from neoadjuvant chemotherapy to interval debulking with survival. Methods: Patients with a diagnosis of stage III/IV ovarian cancer receiving neoadjuvant chemotherapy from July 2015 to December 2017 were included in our analysis. Delays from neoadjuvant chemotherapy to interval debulking were defined as time from last preoperative carboplatin to interval debulking >6 weeks. Fisher's exact/Wilcoxon rank sum tests were used to compare clinical characteristics. The Kaplan-Meier method, log-rank test, and multivariate Cox Proportional-Hazards models were used to estimate progression-free and overall survival and examine differences by delay groups, adjusting for covariates. Results: Of the 224 women, 159 (71%) underwent interval debulking and 34 (21%) of these experienced delays from neoadjuvant chemotherapy to interval debulking. These women were older (median 68 vs 65 years, P=0.05) and received more preoperative chemotherapy cycles (median 6 vs 4, P=0.003). Delays from neoadjuvant chemotherapy to interval debulking were associated with worse overall survival (HR 2.4 95% CI 1.2 to 4.8, P=0.01), however survival was not significantly shortened after adjusting for age, stage, and complete gross resection, HR 1.66 95% CI 0.8 to 3.4, P=0.17. Delays from neoadjuvant chemotherapy to interval debulking were not associated with worse progression-free survival (HR 1.55 95% CI 0.97 to 2.5, P=0.062). Increase in number of preoperative cycles (P=0.005) and lack of complete gross resection (P<0.001) were the only variables predictive of worse progression-free survival. Discussion: Delays from neoadjuvant chemotherapy to interval debulking were not associated with worse overall survival after adjustment for age, stage, and complete gross resection. abstract_id: PUBMED:29790149 Survival after secondary cytoreductive surgery and chemotherapy compared with chemotherapy alone for first recurrence in patients with platinum-sensitive epithelial ovarian cancer and no residuals after primary treatment. A registry-based study. Introduction: The aim of this study was to investigate whether secondary cytoreductive surgery and platinum-based chemotherapy improved survival among patients with recurrent, platinum-sensitive epithelial ovarian cancer compared with those who received platinum-based chemotherapy alone, and to identify possible predictors for selection to secondary cytoreductive surgery.
Material And Methods: We included 397 patients who had a primary diagnosis of FIGO stage I-IV epithelial ovarian cancer recorded in the Cancer Registry of Norway between 1 January 2002 and 31 December 2012, received primary surgery with no residuals followed by platinum-based chemotherapy, had first recurrence six or more months after completion of primary platinum-based chemotherapy, and received secondary treatment with either secondary cytoreductive surgery and platinum-based chemotherapy (secondary cytoreductive surgery+platinum-based chemotherapy group) or platinum-based chemotherapy alone (platinum-based chemotherapy group). Outcomes were progression-free survival to second recurrence or death and overall survival. Hazard ratios were estimated using multivariable Cox regression. Results: There were 75 patients in the secondary cytoreductive surgery+platinum-based chemotherapy group, in whom complete resection was achieved in 60 (80%), and 322 patients in the platinum-based chemotherapy group. Both progression-free survival (hazard ratio 0.45, 95% confidence interval 0.32-0.62) and overall survival (hazard ratio 0.50, 95% confidence interval 0.32-0.70) were improved in the secondary cytoreductive surgery+platinum-based chemotherapy group compared with the platinum-based chemotherapy group. A survival benefit was only seen in patients with no residuals at secondary cytoreductive surgery. Conclusions: In selected epithelial ovarian cancer patients with no residuals after primary surgery and a recurrent, platinum-sensitive tumor, the complete resection of recurrent tumor at secondary cytoreductive surgery improves progression-free survival and overall survival. Our results suggest that a long treatment-free interval and non-disseminated lesions (three or fewer lesions) on radiological images could be useful predictors for complete resection at secondary cytoreductive surgery. abstract_id: PUBMED:37743060 Optimal number of neoadjuvant chemotherapy cycles prior to interval debulking surgery in advanced epithelial ovarian cancer: a systematic review and meta-analysis of progression-free survival and overall survival. Objective: Neoadjuvant chemotherapy (NACT) represents a treatment option in patients with advanced epithelial ovarian cancer (AEOC) who are not good candidates for primary debulking surgery. Usually, 3 cycles of chemotherapy before surgery have been considered the best option for patient survival, although quite often some patients receive more than 3 cycles. The aim of this systematic review and meta-analysis was to identify the optimal number of NACT cycles associated with better survival in AEOC patients. Methods: PubMed, Cochrane Library, and Scopus were searched for original articles that analyzed the relationship between the number of chemotherapy cycles and clinical outcomes in AEOC patients before interval debulking surgery (IDS). The main outcomes were progression-free survival (PFS) and overall survival (OS). Results: A total of 22 studies comprising 7,005 patients diagnosed with AEOC were included in our analysis. In terms of survival, the reviewed studies dividing the patients into ≤3 vs. >3 NACT cycles showed a trend for a decrease in PFS and a significant reduction in OS with an increasing number of cycles, while a difference in both PFS and OS was revealed if early IDS included patients with 4 NACT cycles. These results should be interpreted with caution due to the complex characteristics of AEOC patients.
Conclusion: Our review and meta-analysis revealed that there is not enough evidence to determine the optimal number of NACT treatments before surgery. Further research in the form of well-designed randomized controlled trials is necessary to address this issue. Trial Registration: PROSPERO Identifier: CRD42022334959. abstract_id: PUBMED:37664032 Impact of the time interval between primary or interval surgery and adjuvant chemotherapy in ovarian cancer patients. Introduction: Primary debulking surgery (PDS), interval debulking surgery (IDS), and platinum-based chemotherapy are the current standard treatments for advanced ovarian cancer (OC). The time to initiation of adjuvant chemotherapy (TTC) could influence patient outcomes. Methods: We conducted a multicenter retrospective cohort study of advanced (International Federation of Gynecology and Obstetrics (FIGO) stage III or IV) OC treated between 2014 and 2018 to assess progression-free survival (PFS) and overall survival (OS) in relation to TTC. All patients underwent a germline multigene panel for BRCA1/2 evaluation. Results: Among the 83 patients who underwent PDS, a TTC ≥ 60 days was associated with a shorter PFS (hazard ratio (HR) 2.02, 95% confidence interval (CI) 1.04-3.93, p = 0.038), although this association lost statistical significance when adjusting for residual disease (HR 1.52, 95% CI 0.75-3.06, p = 0.244, for TTC and HR 2.73, 95% CI 1.50-4.96, p = 0.001, for residual disease). Among 52 IDS patients, we found no evidence of an association between TTC and clinical outcomes. Ascites, type of chemotherapy, or germline BRCA1/2 mutational status did not influence TTC and were not associated with clinical outcomes in PDS or IDS patients. Discussion: In conclusion, longer TTC seems to negatively affect prognosis in patients undergoing PDS, especially those with residual disease. abstract_id: PUBMED:20099745 Neo-adjuvant chemotherapy followed by interval debulking surgery in advanced ovarian cancer treatment--a retrospective study Background: Primary debulking surgery and chemotherapy (paclitaxel and carboplatin) remain the standard treatment for advanced ovarian cancer. The size of the residual tumour after primary debulking surgery has proved to be an important prognostic determinant. Complete tumour debulking without any macroscopic residual disease is considered the optimal primary debulking surgery. It is not possible to perform such an aggressive operation in patients with advanced ovarian cancer due to poor performance status and extensive disease. Neo-adjuvant chemotherapy and interval debulking surgery seem to be an effective treatment strategy in this group of patients. Material And Methods: The retrospective analysis evaluated the efficiency of interval debulking surgery in correlation with progression-free and overall survival in patients with advanced ovarian cancer. 38 patients were treated with standard chemotherapy: paclitaxel 175 mg/m2 and carboplatin 5-6 AUC every three weeks. According to the clinical response, surgical debulking was considered, after which postoperative chemotherapy was given. Ineligible patients for interval debulking were treated with 2nd line chemotherapy. Results: After neo-adjuvant chemotherapy, 24 patients of the group of 38 achieved partial remission and interval debulking surgery was indicated. Optimal interval debulking surgery was performed in 12 patients, suboptimal debulking surgery in 12 patients.
Of the entire group, 14 patients did not show an adequate response to the primary treatment; interval debulking surgery was not indicated in these patients, and they were treated with 2nd line chemotherapy. Progression-free survival in patients after optimal debulking was 11 months, and median overall survival was not reached (OS > 42.5 months). Progression-free survival in patients after suboptimal debulking was 6 months and median overall survival was 33 months. Median overall survival in patients without surgical treatment was 21.5 months. Conclusion: The results of the study confirm that neo-adjuvant chemotherapy with subsequent interval debulking surgery is a suitable therapeutic approach in primarily inoperable patients with advanced ovarian cancer. abstract_id: PUBMED:33643924 Comparison of Survival Between Primary Debulking Surgery Versus Neoadjuvant Chemotherapy for Ovarian Cancers in a Personalized Treatment Cohort. Objective: To compare survival between primary debulking surgery (PDS) and neoadjuvant chemotherapy (NACT) for the treatment of ovarian cancer patients per our selective protocol. Methods: Between Sep 1st, 2015, and Aug 31st, 2017, 161 patients were enrolled in our prospective cohort. All of the patients received preoperative clinic-radiological assessments, according to the Suidan criteria for R0 resection. Patients with a score of 0-2 received PDS. Patients with a score of ≥3 were counseled on the choices of PDS, NACT, or an optional staging laparoscopy, according to the Fagotti criteria. Clinic-pathological data were prospectively collected until May 1st, 2020, and the impacts of different treatment strategies on progression-free survival (PFS) and overall survival (OS) were analyzed. Results: 110 patients underwent PDS, and 51 patients received NACT with subsequent interval debulking surgery. The R0 resection rate was 57.8%. All but one of the patients received platinum-based chemotherapy, and 105 (65.2%) patients were platinum-sensitive. Based on the univariate analysis, the PDS group exhibited prolonged PFS compared with the NACT group (P=0.029). The subgroup analysis showed that patients receiving NACT with residual disease (RD) exhibited the worst PFS (P=0.001). Based on the multivariate analysis, NACT with RD was still an independent factor for impaired PFS (P=0.04). However, NACT did not affect OS in the univariate or multivariate analyses. Conclusion: In our prospective cohort, patients treated with NACT exhibited inferior PFS and noninferior OS compared with PDS patients. Given our selective protocol, NACT cannot be arbitrarily denied while appropriate PDS is still a priority. abstract_id: PUBMED:29704010 Timing is everything: intraperitoneal chemotherapy after primary or interval debulking surgery for advanced ovarian cancer. Purpose: To evaluate the outcomes of intraperitoneal chemotherapy (IP) compared with those of intravenous chemotherapy (IV) in patients with advanced ovarian cancer after neoadjuvant chemotherapy (NACT) and interval debulking surgery (IDS) or primary debulking surgery (PDS). Methods: Patients with advanced epithelial ovarian carcinoma treated with PDS or NACT and IDS from 2006 to 2015 were identified. Comparative statistics were used to evaluate covariates, and survival rates were calculated using the Kaplan-Meier method and compared with log-rank tests. Results: Sixty-six patients received NACT followed by IDS with residual disease of ≤ 1 cm; 42 of these patients (63.6%) received IP therapy; and 24 patients (36.3%) had IV therapy only after IDS.
The median progression-free survival (PFS) was 16.0 months in the IP group and 13.5 months in the IV group (p = 0.13). The estimated median overall survival (OS) was 64.0 months with IP and 50.0 months with IV (p = 0.44). During the same study period, 149 patients underwent optimal PDS after which 93 patients (62.4%) received IP and 56 patients (37.6%) were given IV chemotherapy. Patients after IP demonstrated improved survival outcomes when compared to patients after IV therapy. The median PFS was 28.0 months after IP and 16.5 months after IV (p = 0.0006), and the median OS was not reached for IP and 50.0 months after IV (p < 0.0001). Conclusions: Although IP chemotherapy after PDS is associated with improved survival, IP therapy after NACT and IDS, despite high rates of completion, may not have the same degree of survival advantage over IV therapy. abstract_id: PUBMED:29174056 Impact of the time interval from completion of neoadjuvant chemotherapy to initiation of postoperative adjuvant chemotherapy on the survival of patients with advanced ovarian cancer. Objective: To investigate the relationship of the time interval from the completion of neoadjuvant chemotherapy (NAC) to the initiation of postoperative adjuvant chemotherapy (POAC) with the survival outcomes in patients with ovarian cancer. Methods: We retrospectively investigated 220 patients with pathologically confirmed epithelial ovarian cancer who received NAC at Yonsei Cancer Hospital between 2006 and 2016. The time interval was defined as the period from the completion of NAC, spanning interval debulking surgery (IDS), to the initiation of POAC. Results: The median time interval was 42 (range 16-178) days; 103 patients (53.1%) received POAC within 42 days after NAC while 91 patients (46.9%) received it after 42 days. There were no significant differences in patient characteristics between these 2 groups. Kaplan-Meier analysis showed that patients with longer time intervals (>42 days) had poorer progression-free survival and overall survival (P=0.039 and 0.005, respectively). In the multivariate analysis, patients with longer time intervals had poorer progression-free survival (hazard ratio, 1.41; 95% confidence interval, 0.98-2.03; not significant) and significantly poorer overall survival (hazard ratio, 2.03; 95% confidence interval, 1.16-3.54). When the patients were categorized according to time interval quartiles (≤37, 38-42, 43-50, and >50 days), longer time intervals were associated with higher risks of recurrence and death (P for trend: 0.006 and <0.001, respectively). Conclusion: The time interval from the completion of NAC to the initiation of POAC appears to influence survival. Efforts to reduce the time interval might improve the outcomes in ovarian cancer patients undergoing NAC. Answer: The interval from primary surgery to chemotherapy does not appear to be an independent prognostic factor for progression-free survival in ovarian cancer. A study involving 472 patients who received platinum-containing chemotherapy showed that the median interval from surgery to chemotherapy was 22 days, and univariate analysis indicated worse progression-free survival for those treated earlier. However, after multivariate analysis incorporating residual disease status, FIGO stage, and performance status, the interval to starting chemotherapy did not significantly affect progression-free survival (hazard rate ratio 0.99, P = 0.91) (PUBMED:12217760).
In contrast, another study found that delays from neoadjuvant chemotherapy to interval debulking surgery were associated with worse overall survival in unadjusted analysis, although this association was no longer significant after adjusting for age, stage, and complete gross resection, and delays were not associated with worse progression-free survival (PUBMED:32522770). Additionally, a multicenter retrospective cohort study indicated that a longer time to initiation of adjuvant chemotherapy (TTC) after primary debulking surgery (PDS) seemed to negatively affect prognosis, especially in patients with residual disease (PUBMED:37664032). Furthermore, a systematic review and meta-analysis did not find enough evidence to determine the optimal number of neoadjuvant chemotherapy treatments before surgery, suggesting that the relationship between chemotherapy timing and survival outcomes may be complex and not yet fully understood (PUBMED:37743060). Overall, while some studies suggest that timing may have an impact on survival outcomes, the evidence does not consistently support the interval from primary surgery to chemotherapy as a significant factor for progression-free survival in ovarian cancer.
Instruction: Is tamoxifene porphyrinogenic? Abstracts: abstract_id: PUBMED:15945486 Is tamoxifene porphyrinogenic? Background: As reported in previous studies, porphyria cutanea tarda (PCT) developed in several tamoxifene-treated patients with breast cancer. We studied a group of breast cancer patients who received only tamoxifene therapy after the initial surgery. We evaluated their clinical and laboratory results and compared them with the results of a group of patients who also had breast tumors but were treated after surgery with other systemic therapies, mostly chemotherapy. Methods And Results: 20 patients were studied comprehensively, 10 of them with only tamoxifene therapy, and 10 without it. Diagnosis of the breast tumor was histologically confirmed in all of them. Using laboratory methods we examined their urinary excretion of diagnostically important porphyrins (uro- and coproporphyrin), as well as total blood count, liver function tests (ALT and AST), blood sugar, cholesterol, serum iron and ferritin, and also performed urinalysis and detection of possible anti-HCV antibodies. The laboratory examination was repeated in the patient subgroup after three months, and urinary uro- and coproporphyrin excretion was also re-examined in the control group, so that the dynamics of the laboratory changes could be followed. All the patients were examined during their regular laboratory check-ups so that they would not be burdened with repeated additional phlebotomies. In our breast tumor patients we did not confirm the results of other authors suggesting a connection between tamoxifene therapy and the development of porphyria cutanea tarda. Conclusions: Isolated cases of PCT can be induced through the effect of various hepatotoxic factors. However, the influence of common porphyrinogenic noxious substances (alcohol, HCV virus or iron overload as a result of HFE gene mutations) was not found in our patients. abstract_id: PUBMED:25386744 Cytochrome P450 expression in mouse brain: specific isoenzymes involved in Phase I metabolizing system of porphyrinogenic agents in both microsomes and mitochondria. Brain cytochrome P450 (CYP) metabolizes a variety of drugs to produce their pharmacological effects within the brain. We have previously observed that porphyrinogenic agents altered CYP levels in brain. The aim of this work was to further study the involvement of the mouse brain mitochondrial and microsomal Phase I drug-metabolizing system when porphyrinogenic agents, such as Enflurane, Isoflurane, allylisopropylacetamide, veronal, ethanol, and Griseofulvin were administered. To this end, CYP2E1, CYP2B1, and CYP3A4 expression were measured. NADPH cytochrome P450 reductase (CPR) expression was also determined. Western blots were performed in microsomes and mitochondria of whole brain. Some of the drugs studied altered expression mainly in microsomes. Chronic Isoflurane augmented the mitochondrial isoform, although this anaesthetic diminished microsomal expression. Ethanol and topical Griseofulvin affected expression in microsomes but not in mitochondria. CYP2E1 mitochondrial activity was induced by acute Enflurane, while the activity of the microsomal protein was enhanced in alcoholised animals. Ethanol also induced CYP2E1 expression in microsomes, although Isoflurane provoked opposite effects in mitochondria and microsomes. Expression of CPR was also induced.
Several reports support an emergent role of CYP enzymes in the pathogenesis of neurological disorders, so the CYP response in brain could be one of the multiple factors influencing acute porphyria attacks. abstract_id: PUBMED:35059624 Comparative differential cytotoxicity of clinically used SERMs in human cancer lines of different origin and its predictive molecular docking studies of key target genes involved in cancer progression and treatment responses. SERMs like Tamoxifene, 5-hydroxy tamoxifene, raloxifene and endoxifene have been used for the treatment of hormonal imbalances and hormone-dependent cancers owing to their action via estrogen receptors, as in the treatment of estrogen-sensitive breast cancers. Due to their adverse side effects, modification and development of existing or newer SERMs have always been of immense interest. Ormeloxifene is a SERM molecule manufactured by HLL Lifecare Ltd, India as a birth control agent under the trade names Saheli, Novex, and Novex-DS, and it has also been investigated against mastalgia, fibro-adenoma and abnormal uterine bleeding. Anti-cancer effects have been reported in estrogen-dependent and estrogen-independent cancers, which shows its wide scope for implementation in cancer therapy. The current investigation is a comprehensive effort to determine the cytotoxic potential of Ormeloxifene in comparison with four clinically used SERMs in twenty-six cancer cell lines of different origin, using Adriamycin as a positive control. Computational studies were also performed on selected target/ligand pairs related to tumor progression, development, treatment responses and apoptosis. The studies showed effective cytotoxicity of Ormeloxifene against cancer cell lines, with lower TGI, GI50 and LC50 values that were significantly comparable. The in silico studies also showed that the docking scores of the compound suggest interactions that could tightly regulate key target genes controlling cancer, such as ER, EGFR kinase, EGFR-cSRC, HDAC-2, PARP-1 and BRAF. This study brings out the superior efficacy of Ormeloxifene compared to other SERMs and, given its proven safety profile, supports its repositioning as an anti-cancer drug to treat diverse cancer types. abstract_id: PUBMED:29476795 Effects of volatile anaesthetics on heme metabolism in a murine genetic model of Acute Intermittent Porphyria. A comparative study with other porphyrinogenic drugs. Background: Acute Intermittent Porphyria (AIP) is an inherited disease produced by a deficiency of Porphobilinogen deaminase (PBG-D). The aim of this work was to evaluate the effects of Isoflurane and Sevoflurane on heme metabolism in a mouse genetic model of AIP to further support our previous proposal for avoiding their use in porphyric patients. A comparative study was performed administering the porphyrinogenic drugs allylisopropylacetamide (AIA), barbital and ethanol, and also between sex and mutation using AIP (PBG-D activity 70% reduced) and T1 (PBG-D activity 50% diminished) mice. Methods: The activities of 5-Aminolevulinic synthetase (ALA-S), PBG-D, Heme oxygenase (HO) and CYP2E1, the expression of ALA-S, and the levels of 5-aminolevulinic acid (ALA) were measured in different tissues of mice treated with the drugs mentioned. Results: Isoflurane increased liver, kidney and brain ALA-S activity in AIP females but only affected the kidney in AIP males. Sevoflurane induced ALA-S activity in the kidney and brain of the female AIP group. PBG-D activity was further reduced by Isoflurane in the liver of male T1 mice; in AIP male mice activity remained at its low basal levels.
Ethanol and barbital also caused biochemical alterations. Only AIA triggered neurological signs similar to those observed during human acute attacks in male AIP mice, with symptoms less pronounced in females although ALA-S induction was greater. Heme degradation was affected. Discussion: Biochemical alterations caused by the porphyrinogenic drugs assayed were different in male and female mice and also between T1 and AIP mice, with females of the AIP group being the most affected. General Significance: This is the first study using volatile anaesthetics in an AIP genetic model confirming Isoflurane and Sevoflurane porphyrinogenicity. abstract_id: PUBMED:28990424 An Inducible Promoter Responsive to Different Porphyrinogenic Stimuli Improves Gene Therapy Vectors for Acute Intermittent Porphyria. Porphobilinogen deaminase (PBGD) gene therapy represents a promising therapeutic option for acute intermittent porphyria (AIP) patients suffering recurrent acute attacks. A first-in-human Phase I clinical trial confirmed the safety and tolerability of adeno-associated virus (AAV)-AAT-PBGD gene therapy, but higher doses and/or more efficient vectors are needed to achieve therapeutic expression of the transgene. This study assayed the insertion into the promoter of a short enhancer element able to induce transgene expression during exposure to endogenous and exogenous stimuli related to the pathology of the disease. The inclusion in tandem of two elements of the minimal functional sequence of the human δ-aminolevulinic acid synthase drug-responsive enhancing sequence (ADRES) positioned upstream of the promoter strongly induced transgene expression in the presence of estrogens, starvation, and certain drugs known to trigger attacks in porphyria patients. The inclusion of two ADRES motifs in an AAV vector improved therapeutic efficacy, reducing the effective dose 10-fold in AIP mice. In conclusion, the inclusion of specific enhancer elements in the promoter of gene therapy vectors for AIP was able to overexpress the therapeutic transgene when it is most needed, at the time when porphyrinogenic factors increase the demand for hepatic heme and precipitate acute porphyria attacks. abstract_id: PUBMED:9250538 Evaluation of the porphyrinogenic risk of antineoplastics. The use of antineoplastics is common in cancer therapy, and some of them have been associated with the development of porphyria in patients with cancer. However, knowledge of their effects on the haeme metabolic pathway is at present scarce and unclear. Therefore, the present study evaluates the porphyrinogenic ability of nine antineoplastics (both alkylating and non-alkylating). These were tested either alone or in conjunction with 3,5-diethoxycarbonyl-1,4-dihydrocollidine (latent porphyria model) in chick embryos and in mice. The results obtained suggest that the use of cyclophosphamide, azathioprine, 5-fluorouracil, busulphan, procarbazine and hexamethylmelamine be avoided in the treatment of porphyric patients. On the other hand, dacarbazine, chlorambucil and melphalan are non-porphyrinogenic. We also provide evidence showing that neither the presence of the mustard group in the structure of the antineoplastic nor alterations in ferrochelatase or protoporphyrinogen oxidase activities are responsible for the porphyrinogenic ability of cyclophosphamide.
abstract_id: PUBMED:2429498 Evaluation of the porphyrinogenic effect of lindane in rats. In order to study the porphyrinogenic ability of lindane in mammals, rats were treated with the pesticide, suspended with the aid of Tween or dissolved in oil, for about 3 months. The urinary excretion of porphyrins and their precursors, delta-aminolaevulinate (ALA) and porphobilinogen (PBG), as well as the faecal excretion of coproporphyrin (COPRO) and protoporphyrin (PROTO), were determined weekly. At the end of the treatment the hepatic activities of ALA Synthase (ALA-S), the first and rate-limiting enzyme of the haem pathway, and porphyrinogen carboxy-lyase (PCL), the enzyme which sequentially decarboxylates uroporphyrinogen (8 COOH) to coproporphyrinogen (4 COOH), were assayed. Lindane moderately increased the urinary excretion of porphyrins and their precursors, with the former being the mainly affected parameter. The faecal excretion of COPRO and PROTO was also increased. However, the hepatic activity of ALA-S was not altered. This would suggest that the regulatory haem pool was not affected. Nor was PCL activity altered, in spite of being the key enzyme attacked by other chlorinated compounds. Although hexachlorobenzene (HCB), a very well known porphyrinogenic drug, and lindane are chemically related and generate similar metabolites, the latter produces a small and qualitatively different alteration of haem biosynthesis. This may be related to the absent or scarce formation of the reactive metabolite that accounts for the porphyrinogenic ability of HCB. abstract_id: PUBMED:20187079 Mass spectrometric characterization of tamoxifene metabolites in human urine utilizing different scan parameters on liquid chromatography/tandem mass spectrometry. Different liquid chromatographic/tandem mass spectrometric (LC/MS/MS) scanning techniques were considered for the characterization of tamoxifene metabolites in human urine for anti-doping purposes. Five different LC/MS/MS scanning methods based on precursor ion scan (precursor ion scan of m/z 166, 152 and 129) and neutral loss scan (neutral loss of 72 Da and 58 Da) in positive ion mode were assessed to recognize common ions or common losses of tamoxifene metabolites. The applicability of these methods was checked first by infusion and then by the injection of a solution of a mixture of reference standards of four tamoxifene metabolites available in our laboratory. The data obtained by the analyses of the mixture of the reference standards showed that the five methods used exhibited satisfactory results for all tamoxifene metabolites considered at a concentration level of 100 ng/mL, whereas the analysis of blank urine samples spiked with the same tamoxifene metabolites at the same concentration showed that the neutral loss scan of 58 Da lacked sufficient specificity and sensitivity. The limit of detection in urine of the compounds studied was in the concentration range 10-100 ng/mL, depending on the compound structure and on the selected product ion. The suitability of these approaches was checked by the analysis of urine samples collected after the administration of a single dose of 20 mg of tamoxifene. Six metabolites were detected: 4-hydroxytamoxifene, 3,4-dihydroxytamoxifene, 3-hydroxy-4-methoxytamoxifene, N-demethyl-4-hydroxytamoxifene, tamoxifene-N-oxide and N-demethyl-3-hydroxy-4-methoxytamoxifene, which is in conformity with our previous work using a time-of-flight (TOF) mass spectrometer in full scan acquisition mode.
abstract_id: PUBMED:11929041 The effects of some porphyrinogenic drugs on the brain cholinergic system. In the central nervous system, acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE) hydrolyse acetylcholine. Diminished cholinesterase activity is known to alter several mental and psychomotor functions. The symptoms of cholinergic crisis and those observed during acute attacks of acute intermittent porphyria are very similar. The aim of this study was to investigate if there could be a link between the action of some porphyrinogenic drugs on the brain and the alteration of the cholinergic system. To this end, AChE and BuChE activities were assayed in whole brain and in different brain areas. Muscarinic acetylcholine receptor (mAChR) levels were also measured. The results obtained indicate that the porphyrinogenic drugs tested affect central cholinergic transmission. Quantification of mAChR gave quite different levels depending on the xenobiotic. Veronal administration inhibited BuChE activity by 50% in whole brain, cortex and hippocampus; concomitantly, cortex mAChR was 30% reduced. Acute and chronic isoflurane anaesthesia diminished BuChE activity by 70-90% in whole brain, whereas cerebellum and hippocampus mAChR levels were only altered by chronic enflurane anaesthesia. Differential inhibition of cholinesterases in the brain regions and their consequent effects may be of importance for understanding the mechanisms of neurotoxicity of porphyrinogenic drugs. abstract_id: PUBMED:7543189 Up-regulation of CYP2A5 expression by porphyrinogenic agents in mouse liver. Coumarin 7-hydroxylase (COH) activity is catalyzed by the Cyp2a-5 gene product (CYP2A5 enzyme) in mice. Mouse hepatic CYP2A5 expression is often increased in conditions in which other P450 forms are repressed, e.g. after the administration of heavy metals and other toxic agents known to affect cellular heme balance. In this study, the effect of various porphyrinogenic chemicals on the expression of CYP2A5 and the key enzymes in heme metabolism was studied. Administration of single doses of griseofulvin (1000 mg/kg), thioacetamide (10 mg/kg) and aminotriazole (1000 mg/kg) to DBA/2 and C57BL/6 mice produced up to 10-fold increases in hepatic COH catalytic activity. Dramatic, up to 130-fold increases in response to the inducers were observed in the amount of CYP2A5 steady-state mRNA. The mRNA contents of aminolevulinate synthase, ferrochelatase and heme oxygenase were also increased to a variable extent, possibly reflecting feed-back regulatory mechanisms. In D2 mice the CYP2A5 inducing effect of aminotriazole and thioacetamide, but not that of griseofulvin, pyrazole and phenobarbital, was abolished by exogenously administered heme arginate. In the B6 strain heme arginate treatment increased CYP2A5 expression but it did not affect the induction caused by porphyrinogenic agents. These results show that porphyrinogenic agents act as efficient inducers of CYP2A5, and suggest that regulation of the transcription of the Cyp2a-5 gene could in some instances involve heme-sensitive factors. Answer: Based on the provided abstracts, tamoxifene does not appear to be porphyrinogenic. In a study that specifically investigated the connection between tamoxifene therapy and the development of porphyria cutanea tarda (PCT) in patients with breast cancer, no confirmation was found to suggest that tamoxifene treatment led to PCT.
The study involved patients who had only received tamoxifene therapy after initial surgery and compared their clinical and laboratory results with those of patients treated with other systemic therapies, mostly chemotherapy. The results did not support the suggestion by other authors of a link between tamoxifene therapy and the development of PCT (PUBMED:15945486). Therefore, based on the evidence from this study, tamoxifene is not considered to be porphyrinogenic.
Instruction: Is there an association between deprivation and pre-operative disease severity? Abstracts: abstract_id: PUBMED:19689987 Is there an association between deprivation and pre-operative disease severity? A cross-sectional study of patient-reported health status. Objective: Differences in access to elective surgery may contribute to socioeconomic differences in health. We studied the associations between pre-operative health status (as an indicator of clinical need) and deprivation. Design: Cross-sectional study with risk-adjusted comparison of preoperative patient-reported health status and deprivation scores. Setting: Thirteen NHS hospitals, two independent sector treatment centres and one private hospital in England and Wales. Participants: A total of 1160 NHS-funded patients undergoing hip replacement, knee replacement or varicose vein surgery. Intervention(s): None. Main Outcome Measure(s): General health status (EQ-5D), disease-specific health status (Oxford hip score, Oxford knee score and Aberdeen varicose vein symptom severity score) and area deprivation score. Results: Patients from more deprived areas reported worse EQ-5D scores. Differences in crude mean disease-specific health status scores between the least and most deprived fifths were small: hip score 3.5; knee score 6.8; varicose vein score 4.8. When risk adjusted, the strength of the association fell by about half for hip (0.176-0.083) and knee (0.214-0.117) and one-third for varicose vein surgery (0.215-0.140), although the coefficients remained statistically significant (P ≤ 0.01). Conclusions: Deprivation was associated with worse pre-operative general health status. However, given that the variation in pre-operative disease-specific health status by deprivation score was of small clinical significance and the limited power of the risk adjustment model, there is little evidence of socioeconomic inequity in access to three common elective surgical procedures. abstract_id: PUBMED:30976423 Pre-operative total parenteral nutrition improves post-operative outcomes in a subset of Crohn's disease patients undergoing major abdominal surgery. Background: Despite major advances in the medical management of Crohn's disease (CD), a significant proportion of patients will require surgery within 5 years of diagnosis. Malnutrition is an independent risk factor for adverse post-operative outcomes following gastrointestinal surgery. Data on the value of pre-operative total parenteral nutrition (TPN) in CD patients are mixed and there is a paucity of data in the biologic era. We aimed to define the role of pre-operative TPN in this population. Methods: This was a retrospective cohort study conducted at a tertiary referral center. CD patients who underwent major abdominal surgery were identified. Patients receiving pre-operative TPN were compared to controls. We compared the incidence of 30-day infectious and non-infectious post-operative complications between the two groups. Results: A total of 144 CD patients who underwent major abdominal surgery between March 2007 and March 2017 were included. Fifty-five patients who received pre-operative TPN were compared to 89 controls. Twenty-one (14.6%) patients developed infectious complications (18.2% in TPN group vs 12.3% in non-TPN group, P = 0.34) and 23 (15.9%) developed non-infectious complications (14.5% in TPN group vs 16.9% in non-TPN group, P = 0.71).
In a multivariate analysis, controlling for differences in baseline disease severity and malnutrition between groups, patients receiving pre-operative TPN for ≥60 days had significantly lower odds of developing non-infectious complications (odds ratio 0.07, 95% confidence interval: 0.01-0.80, P = 0.03). Weight loss of >10% in the past 6 months was a significant predictor of post-operative complications. Conclusions: In a subset of malnourished CD patients, TPN is safe and allows comparable operative outcomes to controls. Pre-operative TPN for ≥60 days reduced post-operative non-infectious complications without an associated increase in infectious complications. abstract_id: PUBMED:32626573 The association between low pre-operative step count and adverse post-operative outcomes in older patients undergoing colorectal cancer surgery. Background: Multiple tools exist for estimating perioperative risk. With an ageing surgical demographic, frailty is becoming an increasingly important concept in perioperative medicine due to its association with adverse post-operative outcomes. Reduced physical activity is a hallmark of frailty, and we postulate that a low pre-operative step count may be an objective measure of frailty. This study aimed to determine the association between low pre-operative step count and post-operative outcomes in patients undergoing elective colorectal cancer surgery. Methods: A prospective analysis of 85 older patients undergoing major elective colorectal surgery was performed at a tertiary centre between October 2017 and October 2018. Patients aged 65 years and over who met inclusion criteria were provided with an activity tracker to wear for 14 days prior to planned surgery. Their median daily step count was measured and a cut-off of < 2500 steps/day was used to define a reduced step count. Primary outcomes included length of stay and 30-day post-operative complication rate. Multivariable logistic regression analyses were used to analyze the influence of low pre-operative step count and other preoperative variables on post-operative outcomes including mortality, prolonged hospital admission, and complication rates. Results: Of 85 patients, 17 (20%) were identified as having a low pre-operative step count. A low pre-operative step count was associated with a significantly increased length of stay (14 vs. 6 days, IRR 2.09, 95% CI 1.55-2.83, p ≤ 0.01) and rate of major post-operative complications (29.4% vs. 8.8%, OR 3.34, 95% CI 1.03-14.3, p = 0.04). It was also associated with significantly increased rates of discharge to care facilities (p < 0.01) and requiring support on discharge (p = 0.03). Conclusion: Low pre-operative step count (< 2500 steps/day) is predictive of an increased risk of post-operative morbidity in patients undergoing elective colorectal surgery. Accurate preoperative identification may allow for treatment modification and tailored perioperative care. The possibility of using a wearable activity tracker as a simple but powerful pre-habilitation tool is raised as an important avenue for future study. Trial Registration: Australian New Zealand Clinical Trials Registry (ACTRN12618000045213). abstract_id: PUBMED:31701527 The association of pre-operative anaemia with survival after orthotopic liver transplantation. Anaemia is common in patients with end-stage liver disease. Pre-operative anaemia is associated with greater mortality after major surgery.
We analysed the association of pre-operative anaemia (World Health Organization classification) with survival and complications after orthotopic liver transplantation using Cox and logistic regression models. We included patients undergoing their first orthotopic liver transplantation between 2004 and 2016. Out of 599 included patients, 455 (76%) were anaemic before transplantation. Pre-operative anaemia was not associated with the survival of 485/599 (81%) patients to 1 year after liver transplantation, OR (95%CI) 1.04 (0.64-1.68), p = 0.88. Pre-operative anaemia was associated with higher rates of intra-operative blood transfusions and acute postoperative kidney injury on multivariable analysis, OR (95%CI) 1.70 (0.82-2.59) and 1.72 (1.11-2.67), respectively, p < 0.001 for both. Postoperative renal replacement therapy was associated with pre-operative anaemia on univariate analysis, OR (95%CI) 1.87 (1.11-3.15), p = 0.018. abstract_id: PUBMED:27785551 The effect of pre-operative optimization on post-operative outcome in Crohn's disease resections. Background: The timing of surgical intervention in Crohn's disease (CD) may depend on pre-operative optimization (PO), which includes different interventions to decrease the risk for unfavourable post-operative outcome. The objective of this study was to investigate the effect of multi-modal PO on the post-operative outcome in CD. Method: This is a multicentre retrospective cohort study. The primary outcome was 30-day post-operative complications. Secondary outcomes were intra-abdominal septic complications, surgical site infection (SSI), re-operation, length of post-operative stay in a hospital and re-admission. PO included nutritional support, discontinuation of medications, pre-operative antibiotic course and thrombosis prophylaxis. Results: Two hundred and thirty-seven CD elective bowel resections were included. Mean age was 39.9 years SD 14.25, 144 (60.8 %) were female and 129 (54.4 %) had one or more types of medical treatment pre-operatively. Seventy-seven patients (32.5 %) were optimized by at least nutritional support or change in pre-operative medications. PO patients were more likely to have penetrating disease phenotype (p = 0.034), lower albumin (p = 0.015) and haemoglobin (p = 0.021) compared to the non-optimized. Multivariate analyses showed that treatment with anti-TNF alpha agents OR 2.058 CI [1.043-4.064] and low haemoglobin OR 0.741 CI [0.572-0.961] increased the risk of overall post-operative complications. Co-morbidity increased the risk of SSI OR 2.567 CI [1.182-5.576] while low haemoglobin was a risk factor for re-admission OR 0.613 CI [0.405-0.926]. Low pre-operative albumin correlated with longer stay in hospital. Conclusions: PO did not change post-operative outcome most likely due to selection bias. Anti-TNF alpha agents, low haemoglobin, low albumin and co-morbidity were associated with unfavourable outcome. abstract_id: PUBMED:36420099 Comparing pre-operative versus post-operative single and multi-fraction stereotactic radiotherapy for patients with resectable brain metastases. Background: The standard treatment for patients with large brain metastases and limited intracranial disease is surgical resection and post-operative stereotactic radiosurgery (SRS). However, post-operative SRS still has elevated rates of local failure (LF) and is complicated by radiation necrosis (RN), and meningeal disease (MD).
Pre-operative SRS may reduce the risk of RN and MD, while fractionated therapy may improve local control through delivering a higher biological effective dose. We hypothesize that pre-operative fractionated stereotactic radiation therapy (FSRT) will have less toxicity compared to patients who receive post-operative SRS or FSRT. Methods: A multi-institutional analysis was conducted and included patients who had surgical resection and stereotactic radiation therapy to treat at least one brain metastasis. Pertinent demographic, clinical, radiation, surgical, and follow up data were collected for each patient. The primary outcome was a composite endpoint defined as patients with one of the following adverse events: 1) LF, 2) MD, and/or 3) Grade 2 or higher (symptomatic) RN. Results: 279 patients were eligible for analysis. The median follow-up time was 9 months. 87 % of patients received fractionated treatment. 29 % of patients received pre-operative treatment. The composite endpoint incidences for post-operative SRS (n = 10), post-operative FSRT (n = 189), pre-operative SRS (n = 27), and pre-operative FSRT (n = 53) were 0 %, 17 %, 15 %, and 7.5 %, respectively. Conclusions: In our study, the composite endpoint of 7.5% for pre-operative FSRT compares favorably to our post-operative FSRT rate of 17%. Pre-operative FSRT was observed to have low rates of LF, MD, and RN. Prospective validation is needed. abstract_id: PUBMED:34959941 Current Use of EEN in Pre-Operative Optimisation in Crohn's Disease. Despite the increasing array of medications available for the treatment of Crohn's disease and a focus on mucosal healing, approximately 35% of patients with Crohn's disease undergo bowel surgery at some stage. The importance of nutritional optimisation before Crohn's surgery is well-highlighted by surgical, nutritional, and gastroenterological societies with the aim of reducing complications and enhancing recovery. Surgical procedures are frequently undertaken when other treatment options have been unsuccessful, and, thus, patients may have lost weight and/or required steroids, and are therefore at higher risk of post-operative complications. EEN is used extensively in the paediatric population to induce remission, but is not routinely used in the induction of remission of adult Crohn's disease or in pre-operative optimisation. Large prospective studies regarding the role of pre-operative EEN are lacking. In this review, we evaluate the current literature on the use of EEN in pre-operative settings and its impact on patient outcomes. abstract_id: PUBMED:20110524 Disease severity in ankylosing spondylitis: variation by region and local area deprivation. Objective: To investigate whether patient disease severity in ankylosing spondylitis (AS) varies among regions or by local area social deprivation. Methods: Eight hundred patients with AS from 8 specialist rheumatology centers across England were invited to participate in a cross-sectional survey. Sociodemographic and disease-related variables were collected [pain (numerical rating scale), disease activity (Bath AS Disease Activity Index), and physical function (Bath AS Functional Index)]. Deprivation was measured using the Index of Multiple Deprivation 2004. Results: Of the 800 patients invited, 468 responded (adjusted response rate 62.8%). Most were male (72.9%), with a mean age of 50.2 years (SD 12.1), and a mean diagnosed disease duration of 17 years (SD 11.4). 
Across all centers, those living in more deprived areas demonstrated significantly greater disease severity and poorer psychological health. After controlling for age, gender, disease duration, and region, greater deprivation was significantly associated with greater disease activity (OR 3.39; 95% CI 1.65, 6.98) and poorer function (OR 4.46; 95% CI 2.11, 9.44). There was a nonsignificant trend toward more pain (OR 1.98; 95% CI 0.97, 4.07). There was also a significant independent association between region and disease severity. Conclusion: The need for healthcare is greatest for patients with AS who are living in more socially deprived areas. With the growing use of interventional therapies, these findings have important implications if health service resources are to be allocated equitably; particularly as deprived patients are known to access healthcare less frequently. abstract_id: PUBMED:38143976 Correlation of Serum Cancer Antigen-125 (CA-125) Levels with Severity of Pre-eclampsia. Background: Pre-eclampsia is often associated with unfavourable feto-maternal outcomes. There is a lacuna in its pathophysiology, which emphasizes the need to search for tests that can predict or correlate with the severity of pre-eclampsia. Cancer antigen-125 (CA-125) is a simple, readily available biomarker with evidence of its secretion at the choriodecidual unit and may have a possible role. This study compared serum CA-125 levels between normal pregnant and pre-eclamptic women and determined its clinical usefulness in correlating with the severity of pre-eclampsia. Methods: A case-control study was conducted enrolling 58 women with pre-eclampsia, further divided into severe and non-severe groups, and 62 gestational age-matched healthy, pregnant controls. Serum CA-125 levels were compared between the two groups. Results: The mean serum CA-125 in the controls was 16.44 ± 8.28 IU/ml, 13.82 ± 9.18 IU/ml in the non-severe and 23.55 ± 30.55 IU/ml in the severe pre-eclampsia group (p = 0.134). Serum CA-125 had a significant association with systolic blood pressure (SBP) (p = 0.002), diastolic blood pressure (DBP) (p = 0.026), foetal growth restriction (p = 0.025), pre-term birth (p = 0.039) and a highly significant association with 24-h urinary protein, liver enzymes, placental abruption, need of maternal intensive care as well as with poor neonatal outcome including stillbirth and neonatal mortality (p < 0.001). Conclusion: Serum CA-125 levels were found to be higher in the severe pre-eclampsia group as compared to the non-severe pre-eclampsia and normotensive groups, but the difference was not statistically significant. More studies on a larger scale are required to prove the usefulness of this marker with respect to maternal and perinatal outcome as well as its association with pre-eclampsia and its severity. abstract_id: PUBMED:34643797 Risk of post-operative cardiovascular event in elderly patients with pre-existing cardiovascular disease who are undergoing hip fracture surgery. Purpose: To evaluate the association between pre-existing cardiovascular disease (CVD) and the risk of developing post-operative cardiovascular events among elderly patients who underwent hip fracture surgery. Methods: We performed an observational study among patients with acute hip fracture aged at least 65 years and who received surgical intervention. Hip fracture patients with pre-existing CVD were matched for age, gender, fracture type, and year of admission with patients without pre-existing CVD.
The primary endpoint was post-operative cardiovascular events, and patients were followed until discharge from hospital. Conditional logistic regression was used to determine the association between pre-existing CVD and post-operative cardiovascular event after adjusting for potential confounders including age, body mass index, time from fracture to surgery, pre-existing comorbidities, and the Charlson Comorbidity Index (CCI). Results: The study matched 858 pairs of patients with and without pre-existing CVD. Post-operative cardiovascular events developed in 40 and 14 patients with and without pre-existing CVD (44.6 versus 16.3 per 1000 persons), respectively. Compared to patients without pre-existing CVD, patients with any pre-existing CVD were more likely to develop post-operative cardiovascular events, with a crude odds ratio (OR) of 2.857 [95% confidence interval (CI), 1.554 to 5.251] and multivariable adjusted OR of 2.850 (95% CI, 1.318 to 7.139), respectively. Conclusion: In elderly patients who received hip fracture surgery, patients with pre-existing CVD are at a higher risk of developing post-operative cardiovascular events. Appropriate screening for this vulnerable population is recommended to prevent the risk of post-operative complications. Answer: Yes, there is an association between deprivation and pre-operative disease severity. A cross-sectional study found that patients from more deprived areas reported worse pre-operative general health status as measured by the EQ-5D scores. However, the differences in crude mean disease-specific health status scores between the least and most deprived fifths were small for hip, knee, and varicose vein surgeries. After risk adjustment, the strength of the association decreased, although the coefficients remained statistically significant, suggesting that deprivation was associated with worse pre-operative general health status. Despite this, the variation in pre-operative disease-specific health status by deprivation score was of small clinical significance, and the risk adjustment model had limited power, leading to the conclusion that there is little evidence of socioeconomic inequity in access to three common elective surgical procedures (PUBMED:19689987).
Instruction: Subcortical gray matter volume abnormalities in healthy bipolar offspring: potential neuroanatomical risk marker for bipolar disorder? Abstracts: abstract_id: PUBMED:18356765 Subcortical gray matter volume abnormalities in healthy bipolar offspring: potential neuroanatomical risk marker for bipolar disorder? Objective: A growing number of structural neuroimaging studies have shown that bipolar disorder (BD) is associated with gray matter (GM) volume abnormalities in brain regions known to support affect regulation. The goal of this study was to examine whole-brain regional GM volume in healthy bipolar offspring (HBO) relative to age-matched controls to identify possible structural abnormalities that may be associated with risk for BD. Method: Participants were 20 youths (8-17 years old) with at least one parent diagnosed with BD, and 22 age-matched healthy individuals. All of them were free of Axis I diagnoses. High-resolution magnetic resonance imaging structural images were acquired using a 3-T Siemens scanner. Voxel-based morphometric analyses were conducted using SPM5. Results: Relative to controls, HBO had significantly increased GM volume in left parahippocampal/hippocampal gyrus (p < .05 corrected), following whole-brain analyses. This increase was correlated with puberty but not age in HBO. Region-of-interest analyses on the amygdala and orbitomedial prefrontal cortex did not yield any significant group differences after conducting small volume correction. Conclusions: The pattern of increased GM volume in parahippocampal/hippocampal gyrus in HBO suggests a potential marker for risk for BD. It can also be considered as a potential neuroprotective marker for the disorder because HBO were free of current psychopathology. Prospective studies examining the relationship between changes in GM volume in these regions and subsequent development of BD in HBO will allow us to elucidate further the role of this region in either conferring risk for or protecting against the development of BD. abstract_id: PUBMED:36526112 Aging of the brain in bipolar disorder: Illness- and onset-related effects in cortical thickness and subcortical gray matter volume. Background: Older adults with bipolar disorder (BD) have received little study, although they often have severe symptoms, treatment resistance and high suicide risk. Furthermore, a subset develops cognitive dysfunction for unknown reasons. Methods: Here, cortical thickness and subcortical gray matter volume were compared across individuals ages 40-79y: 103 with BD ("later-onset" at ages ≥25y, n = 21; "early-onset" < 25y, n = 82) and healthy controls (HCs, n = 98). Results: Overall, those with BD showed lower prefrontal, cingulate, sensorimotor, parahippocampal, insula, temporal, parietal, and occipital cortical thickness (Cohen's d: 0.4 to 0.8) and hippocampal, amygdalar, thalamic, and striatal gray matter volume (d: 0.6 to 0.8). Later-onset BD showed negative relationships between age and parahippocampal, insular, temporal, parietal, and occipital cortical thickness, and hippocampal, thalamic and striatal volume (r: -0.7 to -0.4). Suicide attempt history was associated with lower dorsolateral prefrontal cortical thickness (d = 0.5). Limitations: The study used a cross-sectional design and the sample of those with a later-onset of BD was relatively modest.
Conclusions: Results support widespread gray matter decreases in older adults with BD, and also suggest a separable later-onset phenotype characterized by age-related gray matter reductions in regions subserving cognitive, emotional and perceptual processes. Moreover, the results are the first to demonstrate structural brain differences associated with a history of suicide attempts in older adults with BD. abstract_id: PUBMED:26233321 Neuroanatomical correlates of genetic risk for bipolar disorder: A voxel-based morphometry study in bipolar type I patients and healthy first degree relatives. Background: Bipolar disorder (BD) is a highly heritable mental illness which is associated with neuroanatomical abnormalities. Investigating healthy individuals at high genetic risk for bipolar disorder may help to identify neuroanatomical markers of risk and resilience without the confounding effects of burden of illness or medication. Methods: Structural magnetic resonance imaging scans were acquired from 30 euthymic patients with BD-I (BP), 28 healthy first degree relatives of BD-I patients (HR), and 30 healthy controls (HC). Data was analyzed using DARTEL for voxel based morphometry in SPM8. Results: Whole-brain analysis revealed a significant main effect of group in the gray matter volume in bilateral inferior frontal gyrus, left parahippocampal gyrus, left lingual gyrus and cerebellum, posterior cingulate gyrus, and supramarginal gyrus (alphasim corrected (≤0.05 FWE)). Post-hoc t-tests showed that inferior frontal gyrus volumes were bilaterally larger both in BP and HR than in HC. BP and HR also had smaller cerebellar volume compared with HC. In addition, BP had smaller left lingual gyrus volume, whereas HR had larger left parahippocampal and supramarginal gyrus volume compared with HC. Limitations: This study was cross-sectional and the sample size was not large. All bipolar patients were on medication, therefore we were not able to exclude medication effects in bipolar group in this study. Conclusions: Our findings suggest that increased inferior frontal gyrus and decreased cerebellar volumes might be associated with genetic predisposition for bipolar disorder. Longitudinal studies are needed to better understand the predictive and prognostic value of structural changes in these regions. abstract_id: PUBMED:36714682 Bipolar spectrum disorders are associated with increased gray matter volume in the medial orbitofrontal cortex and nucleus accumbens. Objective: Elevated sensitivity to rewards prospectively predicts Bipolar Spectrum Disorder (BSD) onset; however, it is unclear whether volumetric abnormalities also reflect BSD risk. BSDs emerge when critical neurodevelopment in frontal and striatal regions occurs in sex-specific ways. The current paper examined the volume of frontal and striatal brain regions in both individuals with and at risk for a BSD with exploratory analyses examining sex-specificity. Methods: One hundred fourteen medication-free individuals ages 18-27 at low-risk for BSD (moderate-reward sensitivity; N = 37), at high-risk without a BSD (high-reward sensitivity; N = 47), or with a BSD (N = 30) completed a structural MRI scan of the brain. We examined group differences in gray matter volume in a priori medial orbitofrontal cortex (mOFC) and nucleus accumbens (NAcc) regions-of-interest. Results: The BSD group had enlarged frontostriatal volumes (mOFC, NAcc) compared to low individuals (d = 1.01). 
The mOFC volume in BSD was larger than low-risk (d = 1.01) and the high-risk groups (d = 0.74). This effect was driven by males with a BSD, who showed an enlarged mOFC compared to low (d = 1.01) and high-risk males (d = 0.74). Males with a BSD also showed a greater NAcc volume compared to males at low-risk (d = 0.49), but not high-risk males. Conclusions: An enlarged frontostriatal volume (averaged mOFC, NAcc) is associated with the presence of a BSD, while subvolumes (mOFC vs. NAcc) showed unique patterning in relation to risk. We report preliminary evidence that sex moderates frontostriatal volume in BSD, highlighting the need for larger longitudinal risk studies examining the role of sex-specific neurodevelopmental trajectories in emerging BSDs. abstract_id: PUBMED:34078485 Childhood trauma is associated with reduced frontal gray matter volume: a large transdiagnostic structural MRI study. Background: Childhood trauma increases risk for psychopathology and cognitive impairment. Prior research mainly focused on the hippocampus and amygdala in single diagnostic categories. However, other brain regions may be impacted by trauma as well, and effects may be independent of diagnosis. This cross-sectional study investigated cortical and subcortical gray matter volume in relation to childhood trauma severity. Methods: We included 554 participants: 250 bipolar-I patients, 84 schizophrenia-spectrum patients and 220 healthy individuals without a psychiatric history. Participants filled in the Childhood Trauma Questionnaire. Anatomical T1 MRI scans were acquired at 3T, regional brain morphology was assessed using Freesurfer. Results: In the total sample, trauma-related gray matter reductions were found in the frontal lobe (β = -0.049, p = 0.008; q = 0.048), this effect was driven by the right medial orbitofrontal, paracentral, superior frontal regions and the left precentral region. No trauma-related volume reductions were observed in any other (sub)cortical lobes nor the hippocampus or amygdala, trauma-by-group (i.e. both patient groups and healthy subjects) interaction effects were absent. A categorical approach confirmed a pattern of more pronounced frontal gray matter reductions in individuals reporting multiple forms of trauma and across quartiles of cumulative trauma scores. Similar dose-response patterns were revealed within the bipolar and healthy subgroups, but did not reach significance in schizophrenia-spectrum patients. Conclusions: Findings show that childhood trauma is linked to frontal gray matter reductions, independent of psychiatric morbidity. Our results indicate that childhood trauma importantly contributes to the neurobiological changes commonly observed across psychiatric disorders. Frontal volume alterations may underpin affective and cognitive disturbances observed in trauma-exposed individuals. abstract_id: PUBMED:16176432 Neuroanatomical abnormalities as risk factors for bipolar disorder. Objective: Neuroimaging studies show structural brain abnormalities in bipolar patients. Some of the abnormalities may represent biological risk factors conveying vulnerability for the disease. This paper aims to identify neuroanatomical risk factors for bipolar disorder (BD). Methods: We reviewed magnetic resonance imaging (MRI) findings in populations in which the effects of the disease or treatment are minimal or where the chances of finding genetically coded risk factors shared within the families are increased. 
Such populations include unaffected relatives of bipolar patients, first-episode patients, children or adolescents with BD and patients with familial BD. Results: MEDLINE search revealed 30 relevant scientific papers. Abnormalities in the volume of the striatum, left hemispheric white matter, thalamus and anterior cingulate as well as quantitative MRI signal hyperintensities were identified already in unaffected relatives of bipolar patients. Subjects in the early stages of the disease showed volume changes of the ventricles, white matter, caudate, putamen, amygdala, hippocampus and the subgenual prefrontal cortex. Reduction in the subgenual prefrontal cortex volume was replicated in three of four studies in patients with familial BD. Conclusions: Possible candidates for neuroanatomical risk factors for BD are volumetric abnormalities of the subgenual prefrontal cortex, striatum, white matter, and probably also the hippocampus and amygdala. Qualitative finding of white matter hyperintensities was already utilized as an endophenotype. abstract_id: PUBMED:23796123 Overlapping and distinct gray and white matter abnormalities in schizophrenia and bipolar I disorder. Objectives: Schizophrenia and bipolar disorder may share common neurobiological mechanisms, but few studies have directly compared gray and white matter structure in these disorders. We used diffusion-weighted magnetic resonance imaging and a region of interest based analysis to identify overlapping and distinct gray and white matter abnormalities in 35 patients with schizophrenia and 20 patients with bipolar I disorder in comparison to 56 healthy volunteers. Methods: We examined fractional anisotropy within the white matter and mean diffusivity within the gray matter in 42 regions of interest defined on a probabilistic atlas following non-linear registration of the images to atlas space. Results: Patients with schizophrenia had significantly lower fractional anisotropy in temporal (superior temporal and parahippocampal) and occipital (superior and middle occipital) white matter compared to patients with bipolar disorder and healthy volunteers. By contrast, both patient groups demonstrated significantly higher mean diffusivity in frontal (inferior frontal and lateral orbitofrontal) and temporal (superior temporal and parahippocampal) gray matter compared to healthy volunteers, but did not differ from each other. Conclusions: Our study implicates overlapping gray matter frontal and temporal lobe structural alterations in the neurobiology of schizophrenia and bipolar I disorder, but suggests that temporal and occipital lobe white matter deficits may be an additional risk factor for schizophrenia. Our findings may have relevance for future diagnostic classification systems and the identification of susceptibility genes for these disorders. abstract_id: PUBMED:33519204 Exploring the Effects of Temperament on Gray Matter Volume of Frontal Cortex in Patients with Mood Disorders. Background: Patients with bipolar disorder (BD) and patients with major depressive disorder (MDD) have relatively specific temperament and structural abnormalities of brain regions related to emotion and cognition. However, the effects of temperament factors on the structure of frontal and temporal cortex is still unclear. The aims of this study were to explore the differences and relationships between temperament characteristics and the gray matter volume of frontal and temporal cortex in patients with BD or MDD. 
Methods: T1-weighted magnetic resonance imaging (MRI) data, demographic and clinical information were obtained from 279 depressed patients (90 patients with BD, 189 patients with MDD) and 162 healthy controls (HC). Temperament was assessed with the Chinese short version of Temperament Evaluation of Memphis, Pisa and San Diego - Auto questionnaire (TEMPS-A). The Desikan-Killiany atlas was used for yielding gray matter volume by FreeSurfer 6.0 software suite. A total of 22 frontal and temporal regions were chosen as regions of interest (ROIs). Results: Compared with patients with MDD, patients with BD had higher TEMPS-A total scores and scores on cyclothymic, irritable and hyperthymic subscales. The gray matter volume in bilateral rostral middle frontal gyrus (RMFG), left temporal pole and right superior frontal gyrus were reduced in patients with BD. Patients with MDD only had lower gray matter volume in bilateral temporal pole. In the pooled patients, there were negative associations between hyperthymia and gray matter volume in right RMFG. Conclusion: Patients with BD and MDD had different temperament characteristics. The prominent temperament subscales in patients with BD were cyclothymia, irritable and hyperthymia. Patients with greater hyperthymia had lower gray matter volume in right frontal gyrus. Temperament may reflect an endophenotype in patients with mood disorders, especially in BD. abstract_id: PUBMED:35691059 Gray matter abnormalities and associated familial risk endophenotype in individuals with first-episode bipolar disorder: Evidence from whole-brain voxel-wise meta-analysis. Gray matter abnormalities have been widely reported in individuals with and at familial risk for bipolar disorder (BD). However, inconsistent findings were reported, and whether shared abnormalities exist between at-risk individuals and patients which can represent an endophenotype remained unclear. This meta-analysis aimed at identifying robust patterns of gray matter changes in patients with first-episode BD (FEBD) and associated risk endophenotype of BD. A systematic literature search was performed to identify eligible voxel-based morphometry studies comparing FEBD patients and healthy controls. Findings of included studies were integrated using the Seed-based d Mapping toolbox. Common and distinct patterns of gray matter abnormalities between FEBD patients and unaffected at-risk individuals were explored. A total of 16 VBM studies comparing 411 FEBD patients and 521 controls were included. FEBD patients showed increased gray matter volume in the cerebellum, posterior cingulate cortex and striatum, and decreased gray matter volume in the medial superior frontal gyrus and gyrus rectus. No common abnormalities were identified between FEBD patients and unaffected at-risk individuals. More gray matter loss in the medial superior frontal gyrus and insula were found in FEBD patients relative to unaffected at-risk individuals. These findings revealed robust gray matter abnormalities in the cortico-striato-cerebellar and default mode network regions in FEBD, and implicated that gray matter deficits may not represent a familial risk endophenotype of BD. abstract_id: PUBMED:23864160 Gray matter abnormalities as brain structural vulnerability factors for bipolar disorder: A review of neuroimaging studies of individuals at high genetic risk for bipolar disorder. 
Objective: Cortical and subcortical gray matter abnormalities have been reported in individuals at high genetic risk for bipolar disorder, but the findings are inconsistent. The aim of this study was to review the available literature to identify common findings that could represent brain structural vulnerability factors for bipolar disorder and to discuss challenges for the advancement of the field. Method: A systematic search was conducted using the PubMed database to identify all original articles investigating cortical or subcortical gray matter abnormalities in first-degree relatives of bipolar disorder patients. Results: Very few findings were replicated, with the exception of larger insular cortex volumes in adult first-degree relatives and larger right inferior frontal gyrus in offspring of probands with bipolar disorder, both when compared with healthy controls. Isolated findings included decreased gray matter density in the left thalamus, decreased gray matter volumes in the left hippocampus and parahippocampal gyrus, and thicker right hippocampus in unaffected first-degree relatives. Genetic liability for bipolar disorder was associated with gray matter volumes in regions of the anterior cingulate cortex, ventral striatum, medial frontal gyrus, right precentral gyrus, right insular cortex, and medial orbital gyrus. Some studies found no evidence for gray matter abnormalities in first-degree relatives of bipolar disorder patients. Conclusions: Possible reasons for the discrepancies of findings across studies include small samples sizes, small effect size of susceptibility genes, the phenotypic heterogeneity of bipolar disorder, and the possible confounding effect of other Axis I psychopathologies among the relatives of patients. Future multisite, prospective, large studies with more homogeneous samples would be a key strategy to advance the field. The ultimate benefit would be an understanding of how to use brain imaging tools to identify individuals at increased risk for bipolar disorder and develop preventive strategies for that population. Answer: The study of subcortical gray matter volume abnormalities in healthy bipolar offspring (HBO) has revealed potential neuroanatomical markers that may be associated with the risk for bipolar disorder (BD). In a study involving youths with at least one parent diagnosed with BD, HBO showed significantly increased gray matter volume in the left parahippocampal/hippocampal gyrus compared to age-matched healthy individuals. This increase was correlated with puberty but not age in HBO. The findings suggest that increased gray matter volume in the parahippocampal/hippocampal gyrus could be a potential marker for risk for BD, or alternatively, a neuroprotective marker since the HBO were free of current psychopathology (PUBMED:18356765). Further research has identified gray matter volume abnormalities in various brain regions in individuals with BD, including the medial orbitofrontal cortex and nucleus accumbens, which are associated with the presence of a BSD (PUBMED:36714682). Additionally, a voxel-based morphometry study in bipolar type I patients and healthy first-degree relatives found increased inferior frontal gyrus volumes and decreased cerebellar volumes in both groups compared to healthy controls, suggesting these may be associated with genetic predisposition for BD (PUBMED:26233321). 
Moreover, a review of neuroimaging studies of individuals at high genetic risk for BD found larger insular cortex volumes in adult first-degree relatives and larger right inferior frontal gyrus in offspring of probands with BD, both when compared with healthy controls. Other isolated findings included decreased gray matter density in the left thalamus, decreased gray matter volumes in the left hippocampus and parahippocampal gyrus, and thicker right hippocampus in unaffected first-degree relatives (PUBMED:23864160). These studies collectively indicate that certain subcortical gray matter volume abnormalities may serve as neuroanatomical risk markers for BD, although further research is needed to confirm these findings and understand their predictive and prognostic value.
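Several of the bipolar-disorder abstracts above express group differences as Cohen's d (for example, d = 1.01 for the enlarged frontostriatal volume in the bipolar spectrum group). As a brief illustration of what such an effect size means, the sketch below computes Cohen's d from hypothetical group means, standard deviations, and sample sizes using the pooled standard deviation; the numbers are made up for illustration and are not taken from any of the studies.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Illustrative (made-up) regional volumes for a patient group and a control group.
d = cohens_d(mean1=8.4, sd1=1.0, n1=30, mean2=7.4, sd2=1.0, n2=37)
print(f"Cohen's d = {d:.2f}")  # ~1.0: group means differ by about one pooled SD
```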
Instruction: Prescribing behaviour after the introduction of decentralized drug budgets: is there an association with employer and type of care facility? Abstracts: abstract_id: PUBMED:19291589 Prescribing behaviour after the introduction of decentralized drug budgets: is there an association with employer and type of care facility? Objective: To analyse whether prescribing patterns changed after introduction of drug budgets and whether there is an association between drug prescribing patterns and the type of employer and care facility. Methods: Data analysed encompassed information on dispensed medicines, by workplaces, prescribed in the Region Vastra Gotaland, Sweden, for the years 2003 and 2006. Workplaces (n = 969) were categorized according to type of employer and type of care facility. Five prescribing indicators reflecting goals for cost-containing prescribing in Region Vastra Gotaland were assessed. Changes over time and differences between different types of employer and care facility were analysed by Mann-Whitney tests. Results: In 2003, workplaces with a public employer had a significantly higher adherence to three of the prescribing indicators compared with private practitioners. Two of these differences remained in 2006. In 2003, none of the prescribing indicators differed between primary care and other care facilities. Three years later workplaces in primary care had a significantly higher adherence to three of the prescribing indicators than other care facilities. There was a statistically significant difference in change between 2003 and 2006 between primary care and other care facilities; there were no differences in change between workplaces with public and private employers. Conclusions: Adherence to three of the prescribing indicators increased after the introduction of decentralized drug budgets. Workplaces with a public employer showed greater adherence to two of the prescribing indicators than private sector workplaces. abstract_id: PUBMED:15861754 Introduction of electronic prescribing in an aged care facility. Residents of aged care facilities are at high risk of adverse drug events. Electronic prescribing may reduce some of these risks. This article reports on the introduction of electronic prescribing software to a residential aged care facility with the aim of improved medication safety. abstract_id: PUBMED:36093419 Evaluation of prescription completeness, rational drug-use patterns using WHO prescribing, patient-care and facility indicators in Debre Tabor Comprehensive Specialized Hospital, Ethiopia: A cross-sectional study. Objective: Irrational medicine use results in diseases worsening, disability, death, and wastage of limited resources. This study was predominantly aimed to evaluate the rationality of medicine use patterns by assessing prescribing, patient-care, and facility indicators at Debre Tabor Comprehensive Specialized Hospital by using World Health Organization core drug use indicators. Methods: A cross-sectional design was conducted to collect prescribing indicator data from 780 prescriptions retrospectively from 1 July 2020 to 30 June 2021 at outpatient pharmacy of the hospital and exit interview was held prospectively on 150 patients to evaluate patient-care indicators. In addition, indicators for health-facility were assessed by actual observation of the presence of different guidelines. Descriptive analysis of the collected data was computed using SPSS version 25. Result: Totally, 1521 drugs were prescribed to 780 encounters. 
On average, 1.95 drugs were prescribed per prescription. The percentages of drugs prescribed by generic name, encounters with an antibiotic, encounters with an injection, and drugs prescribed from the essential drug list were 99.9%, 35.4%, 15.3%, and 100%, respectively. From healthcare provider-related information, the names of prescribers and dispensers were correctly written in 93.59% and 77.69% of prescriptions, respectively. Frequency (97.18%) and route (86.15%) of drug administration were well stated in the reviewed encounters, whereas the dosage form was poorly stated (7.44%). Adequate labeling of dispensed medicines was observed for only 10% of patients, while patient knowledge of the correct dose was 94%. Availability of facility indicators was 95%. Conclusion: From the results of this study, it is possible to conclude that the prescribing, patient-care, and facility indicators did not fulfill many World Health Organization standards. Therefore, it is important that all prescribers, dispensers, patients, and the governing bodies take action to improve the prescribing and patient-care indicators for the benefit of end users. abstract_id: PUBMED:37727328 Evaluation of Rational Drug Use Based on WHO/INRUD Core Drug Use Indicators in a Secondary Care Hospital: A Cross-Sectional Study in Western Uganda. Purpose: Rational drug use (RDU) promotes safe, efficient, and cost-effective utilization of medicines in hospital settings. The aim of this study was to assess rational drug use based on the World Health Organization (WHO) and the International Network for Rational Use of Drugs (INRUD) core drug use indicators. Patients And Methods: This prospective, descriptive, hospital-based cross-sectional study was conducted among patients attending the Outpatient Department of a secondary care hospital located in the Sheema District of Western Uganda. A total of 450 prescriptions were prospectively collected from eligible patients and evaluated using the WHO/INRUD core drug use indicators (prescribing, patient-care, and health-facility indicators). Results: The average number of drugs prescribed per encounter was found to be 3.2 (optimal value=1.6-1.8). The percentages of drugs prescribed by their generic name (90.48%) and from the Essential Medicine List (96.23%) were close to the WHO reference (100%). The percentages of antibiotics (66.22%) and injections (25.22%) per encounter exceeded the WHO standards (antibiotics=20.0-26.8; injections=13.4-24.1). Among the patient-care indicators, the average consultation time (5.41 minutes), average dispensing time (131.03 seconds), percentage of medicines dispensed (76.11%), percentage of medicines adequately labeled (59.74%), and percentage of patients with dosage knowledge (49.50%) did not meet the WHO reference. Facility indicators such as the percentage of key medicines available in stock (66.67%) did not conform to the WHO optimal value. The hospital made the EML fully (100%) available to all practitioners. Conclusion: The study concludes that the prescribing, patient-care, and health-facility indicators at Sheema District Secondary Care Hospital deviate from the optimal values recommended by the WHO. Therefore, this study indicates a need for improvement on these indicators and for ongoing educational initiatives focused on rational drug prescribing, dispensing, and patient use in order to comply with the standards set by the WHO.
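Both abstracts above report the standard WHO/INRUD prescribing indicators, which are simple ratios tallied over a sample of prescriptions. The sketch below shows one way such indicators could be computed from encounter-level records; the data structure and field names are illustrative assumptions, not taken from either study.

```python
# Minimal sketch of WHO/INRUD prescribing-indicator tallies.
# Each encounter is a dict; field names are illustrative assumptions.
encounters = [
    {"drugs": ["amoxicillin", "paracetamol"], "generic": 2, "antibiotic": True,
     "injection": False, "on_edl": 2},
    {"drugs": ["ibuprofen"], "generic": 1, "antibiotic": False,
     "injection": False, "on_edl": 1},
    {"drugs": ["ceftriaxone inj", "metronidazole", "oral rehydration salts"],
     "generic": 3, "antibiotic": True, "injection": True, "on_edl": 3},
]

n_enc = len(encounters)
n_drugs = sum(len(e["drugs"]) for e in encounters)

avg_drugs_per_encounter = n_drugs / n_enc
pct_generic = 100 * sum(e["generic"] for e in encounters) / n_drugs
pct_antibiotic_encounters = 100 * sum(e["antibiotic"] for e in encounters) / n_enc
pct_injection_encounters = 100 * sum(e["injection"] for e in encounters) / n_enc
pct_edl = 100 * sum(e["on_edl"] for e in encounters) / n_drugs

print(f"average drugs per encounter: {avg_drugs_per_encounter:.2f}")
print(f"% drugs prescribed by generic name: {pct_generic:.1f}")
print(f"% encounters with an antibiotic: {pct_antibiotic_encounters:.1f}")
print(f"% encounters with an injection: {pct_injection_encounters:.1f}")
print(f"% drugs from the essential medicines list: {pct_edl:.1f}")
```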
abstract_id: PUBMED:31157059 Pharmacist prescribing in critical care: an evaluation of the introduction of pharmacist prescribing in a single large UK teaching hospital. Objectives: To evaluate the introduction of pharmacist independent prescribing activity across three general critical care units within a single large UK teaching hospital. To identify the prescribing demographics including total of all prescriptions, number prescribed by pharmacists, reason for pharmacist prescription, range of medications prescribed, pharmacist prescribing error rate and the extent of pharmacist second 'clinical check'. Methods: Retrospective evaluation of e-prescribing across all general critical care units of a single large UK teaching hospital. All prescribing data were downloaded over a 1-month period (May to June 2016) with analysis of pharmacist prescribing activity including rate, indication, therapeutic class and error rate. Results: In total, 5374 medicines were prescribed in 193 patients during the evaluated period. Prescribing pharmacists were available on the units on 60.4% (58/96) of days, during their working hours and accounted for 576/5374 (10.7%) of medicines prescribed in 65.2% (126/193) of patients. The majority (342/576) of pharmacist prescriptions were for new medicines. Infections, central nervous system, and nutrition/blood were the top three British National Formulary (BNF) therapeutic categories, accounting for 63.4% (349/576) of all pharmacist prescriptions. The critical care pharmacist prescribing error rate was 0.18% (1/550). Conclusions: Pharmacist independent prescribers demonstrated a high degree and wide-ranging scope of prescribing activity in general critical care patients. Pharmacists contributed a significant proportion of total prescribing, despite less than full service coverage. Prescribing activity was also safe with a very low error rate recorded. abstract_id: PUBMED:30741736 Burden and Cost of Outpatient Hemorrhoids in the United States Employer-Insured Population, 2014. Introduction: Although hemorrhoids are a common indication for seeking health care, there are no contemporary estimates of burden and cost. We examined data from an administrative claims database to estimate health care use and aggregate costs. Methods: We conducted a cross-sectional study using the MarketScan Commercial Claims and Encounters Database for 2014. The analysis included 18.9 million individuals who were aged 18-64 and continuously enrolled with prescription coverage. Outpatient hemorrhoid claims were captured using the International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes in the first position, as well as Common Procedural Terminology codes. Prescription medications were identified using National Drug Codes. Annual prevalence and costs were determined by summing gross payments for prescription medications, physician encounters, and facility costs. We used validated weights to standardize annual cost estimates to the US employer-insured population. Results: In 2014, we identified 227,638 individuals with at least one outpatient hemorrhoid-related claim (annual prevalence, 1.2%). Among those, 119,120 had prescription medication claims, 136,125 had physician claims, and 28,663 had facility claims. After standardizing, we estimated that 1.4 million individuals in the US employer-insured population sought care for hemorrhoids in 2014 for a total annual cost of $770 million. 
This included $322 million in physician claims, $361 million in outpatient facility claims, and $88 million in prescription medication claims. Conclusions: The estimated economic burden of hemorrhoids in the employer-insured population approaches $800 million annually. Given the substantial and rising burden and cost, expanded research attention should be directed to hemorrhoidal etiology, prevention, and treatment. abstract_id: PUBMED:37408477 Person-centred, integrated non-communicable disease and HIV decentralized drug distribution in Eswatini and South Africa: outcomes and challenges. Introduction: Non-communicable diseases (NCDs) are highly prevalent in people living with HIV above 50 years of age and account for increasing mortality. There is little published evidence supporting person-centred, integrated models of HIV care, hypertension and diabetes treatment in southern Africa, and no data demonstrating mortality reduction. Where clinical visits for NCDs and HIV cannot be combined, integrated medication delivery presents an opportunity to streamline care and reduce patient costs. We present experiences of integrated HIV and NCD medication delivery in Eswatini and South Africa, focusing on programme successes and implementation challenges. Programmatic data from Eswatini's Community Health Commodities Distribution (CHCD) from April 2020 to December 2021 and South Africa's Central Chronic Medicines Dispensing and Distribution (CCMDD) from January 2016 to December 2021 were provided by programme managers and are summarized here. Discussion: Launched in 2020, Eswatini's CHCD provides over 28,000 people with and without HIV with integrated services, including HIV testing, CD4 cell count testing, antiretroviral therapy refills, viral load monitoring and pre-exposure prophylaxis alongside NCD services, including blood pressure and glucose monitoring and hypertension and diabetes medication refills. Communities designate neighbourhood care points and central gathering places for person-centred medication dispensing. This programme reported fewer missed medication refill appointments among clients in community settings compared to facility-based settings. South Africa's CCMDD utilizes decentralized drug distribution to provide medications for over 2.9 million people, including those living with HIV, hypertension and diabetes. CCMDD incorporates community-based pickup points, facility "fast lanes" and adherence clubs with public sector health facilities and private sector medication collection units. There are no out-of-pocket payments for medications or testing commodities. Wait-times for medication refills are lower at CCMDD sites than facility-based sites. Innovations to reduce stigma include uniformly labelled medication packages for NCD and HIV medications. Conclusions: Eswatini and South Africa demonstrate person-centred models for HIV and NCD integration through decentralized drug distribution. This approach adapts medication delivery to serve individual needs and decongest centralized health facilities while efficiently delivering NCD care. To bolster programme uptake, additional reporting of integrated decentralized drug distribution models should include HIV and NCD outcomes and mortality trends. abstract_id: PUBMED:24648505 Participatory action research in antimicrobial stewardship: a novel approach to improving antimicrobial prescribing in hospitals and long-term care facilities. It is challenging to change physicians' antimicrobial prescribing behaviour. 
Although antimicrobial prescribing is determined by contextual (e.g. a lack of guidelines), cultural (e.g. peer practice) and behavioural (e.g. perceived decision making autonomy) factors, most antimicrobial stewardship programmes fail to consider these factors in their approach. This may lead to suboptimal intervention effectiveness. We present a new approach in antimicrobial stewardship programme development that addresses relevant determinants of antimicrobial prescribing: participatory action research (PAR). PAR is a collaborative process that aims to bring about change in social situations by producing practical knowledge that is useful in local practice. It requires substantial involvement of relevant stakeholders to address determinants of the studied behaviour and to facilitate empowerment. PAR is well suited for complex problems in multidisciplinary settings as it adapts to local needs, delivering a tailored approach to improving local practice. We describe how PAR can be applied to antimicrobial stewardship, and describe the PAR design of two ongoing multicentre antimicrobial stewardship projects, in the acute care setting and the long-term care setting, respectively. abstract_id: PUBMED:27688887 Assessment of WHO/INRUD core drug use indicators in two tertiary care hospitals of Bahawalpur, Punjab, Pakistan. Background: Medicines are a main therapeutic intervention provided within hospitals and their proper use in the outpatient setting is important for patients and the community. The objective of this study was to evaluate drug use patterns in the outpatient departments (OPDs) of two tertiary care hospitals (Bahawal Victoria Hospital and Civil Hospital) in the Bahawalpur district of the Punjab province of Pakistan by employing the standard World Health Organization/International Network of Rational Use of Drugs (WHO/INRUD) drug use indicators. Methods: A descriptive, cross-sectional study design was employed. For assessing the prescribing indicators a sample of 2400 prescriptions were systematically reviewed out of a total of 1,560,000 prescriptions written from 1st April 2014 to 31st March 2015. A total of 600 randomly selected patients and all pharmacy personnel were observed and interviewed to investigate the patient-care and facility-specific indicators. We used the published ideal standards for each of the WHO/INRUD indicators for comparison purposes. Results: Among the prescribing indicators, the average number of drugs per prescription was 2.8 (SD = 1.3), the drugs prescribed by generic name were 56.6 %, the encounters with an antibiotic prescribed were 51.5 %, no injections were prescribed and 98.8 % of the drugs prescribed were from the Essential Drugs List (EDL). Among the patient-care indicators, the average consultation time was 1.2 min (SD = 0.8), the average dispensing time was 8.7 s (SD = 4.9), the percentage of drugs actually dispensed was 97.3 %, the percentage of drugs adequately labeled was 100 % and the patients' knowledge of correct dosage schedule was 61.6 %. Among the facility-specific indicators, all OPDs had a copy of the EDL and 72.4 % of the key drugs were available in stock. Conclusion: Irrational use of drugs was observed in both OPDs. Polypharmacy, brand prescribing, over-prescribing of antibiotics, short consultation and dispensing times, lack of patients' knowledge about prescribed medicines and unavailability of all key drugs in stock were the major issues that need attention of the healthcare authorities. 
This study necessitates the requirement to implement the relevant WHO recommended core interventions to promote rational use of medicines in these hospital-based OPDs. abstract_id: PUBMED:30611211 Attitudes and behaviour towards psychotropic drug prescribing in Swedish primary care: a questionnaire study. Background: The prescribing of psychotropic drugs, i.e. antidepressants, sedatives (anxiolytics, hypnotics), and antipsychotics is considerable and a large proportion is prescribed by general practitioners (GPs). There are concerns about dependency and medicalisation, and treatment decisions in psychiatry may appear arbitrary. Increased knowledge of GPs' opinions on the prescribing of psychotropics may lead to more rational use of these drugs. We aimed to quantify GPs' attitudes, beliefs and behaviour towards various aspects of psychotropic drug prescribing. Methods: A questionnaire was distributed to physicians in all 199 GP practices in Region Västra Götaland, Sweden. The questions concerned determinants of psychotropic drug prescribing that had been identified in a previous, qualitative study. Results: Questionnaires from 516 physicians (64% of whom were specialists in family medicine, 21% interns in family medicine, 15% others) at 152 GP practices (59% of which were state owned, 72% in an urban area, with a median of 7808 registered patients) were returned (estimated response rate: 48%). A majority - 62% - of GPs found it easier to start prescribing psychotropic drugs than to stop (95% confidence interval, 57%, 66%) vs. 8% (6%, 10%). Most GPs considered psychotherapy more suitable than psychotropic drugs in cases of mild psychiatric disease: 81% (77%, 84%) vs. 4% (3%, 6%). The problems treated with psychotropic drugs were considered to be mostly socioeconomic, or mostly medical, by similar proportions of physicians: 38% (34%, 42%) vs. 40% (36%, 45%). GPs were on average satisfied with their levels of antidepressant and sedative prescribing in relation to medical needs. More GPs regarded their prescribing of antipsychotics as being too low rather than too high: 33% (28%, 39%) vs. 7% (4%, 10%). Conclusions: This study illustrates the complexities of psychiatric drug treatment in primary care and identifies potential drivers of increased prescribing of psychotropics. The manifold factors, medical and non-medical, that affect prescribing decisions may explain a sense of arbitrariness surrounding psychotropic drug treatment. This notwithstanding, GPs seem mostly content with their prescribing. Answer: After the introduction of decentralized drug budgets, there was an observed change in prescribing patterns. Workplaces with a public employer had a significantly higher adherence to three of the prescribing indicators compared with private practitioners in 2003, and two of these differences remained in 2006. In 2003, there were no differences in prescribing indicators between primary care and other care facilities, but by 2006, primary care workplaces had a significantly higher adherence to three of the prescribing indicators than other care facilities. There was a statistically significant difference in change between 2003 and 2006 between primary care and other care facilities; however, there were no differences in change between workplaces with public and private employers (PUBMED:19291589). This suggests that the type of employer and care facility can be associated with different prescribing behaviors following the introduction of decentralized drug budgets.
Instruction: Histological examination of circular stapled 'doughnuts': questionable routine practice? Abstracts: abstract_id: PUBMED:16623161 Histological examination of circular stapled 'doughnuts': questionable routine practice? Background: Left-sided colonic resections are often anastomosed by the use of the circular stapling gun. Most surgeons routinely submit the resulting set of 'doughnuts' for histological examination. Aim: The aim of this study is to question the need for this practice by providing our own experience of the impact of 'doughnut' submission on patient management. Patients And Methods: Patients who had undergone a stapled anastomosis for colorectal cancer resection (1998-2004) were identified from the department cancer database and clinical records and histopathological reports were reviewed for all cases. Results: From a consecutive series of 100 sets of doughnuts only two showed histological abnormality (inflammatory change and a metaplastic polyp). Three patients had local recurrence over the follow-up period: 12, 14 and 36 months after surgery. Histological review of the 'doughnuts' in these patients did not show any abnormality. None of the above findings had any influence on subsequent management. Conclusion: Histological examination of the 'doughnuts' has a considerable impact in terms of time and resource use. This study has shown no benefit in performing routine histological examination of the 'doughnuts'. abstract_id: PUBMED:24798632 Routine histological sampling of doughnuts post oncologic anterior resection is not necessary. Background: A stapled end-to-end colo-rectal anastomosis is increasingly adopted following a proctectomy for various colorectal conditions. The ensuing doughnuts of colonic tissue created from the stapling device have been routinely sent for histological analysis. However, its efficacy remains debatable. This study aims to determine if there is a role in sending colonic doughnuts for histological examination following a stapled end-to-end colo-rectal anastomosis following proctectomy for colorectal cancers. Methods: A retrospective review of all histological reports of the doughnuts from patients who underwent a proctectomy followed by a stapled end-to-end colo-rectal anastomosis for colorectal cancer in the National University Hospital, Singapore, from 2004-2010 was performed. Results: A total of 336 patients who underwent a stapled end-to-end anastomosis following proctectomy for colorectal cancer from 2004-2010 had available histological reports. The specimen margins and colonic doughnuts were all negative for malignancy. Clinical decisions were not altered based on doughnut histology results. A total cost of USD$61,017.60 was incurred for the histological processing of these doughnuts. Conclusion: There appears to be no additional oncological or cost benefits in sending doughnuts after a stapled end-to-end colo-rectal anastomosis following proctectomy for colorectal cancers except in selected clinico-pathological conditions whereby the risk of a positive margin is deemed higher. abstract_id: PUBMED:28187868 No clinical benefit from routine histologic examination of stapler doughnuts at low anterior resection for rectal cancer. Background: The aim of this study was to evaluate the clinical utility and cost-effectiveness of routine histologic examination of the doughnuts from stapled anastomoses in patients undergoing a low anterior resection for rectal cancer. 
Methods: We performed a retrospective review of 486 patients who underwent a low anterior resection with stapled anastomosis for rectal cancer between 2002 and 2015 at 3 institutions. Pathologic findings in the doughnuts and their impact on patient management were recorded. Tumor characteristics that may influence how often doughnuts were included in the pathology report were analyzed. An approximate cost of histologic examination of doughnuts was also calculated. Results: A total of 412 patients (85%) had doughnuts included in their pathology reports. Two patients had cancer cells in their doughnuts, and both patients had a positive distal margin in their primary tumor specimen; 33 patients had benign findings in their doughnuts. Pathologic examination of the doughnut did not change clinical management in any patient. Patients with rectosigmoid tumors were less likely to have their doughnuts included in the pathology report compared to patients with low tumors (P = .003). Doughnuts were not bundled with the primary tumor specimen in 374 (77%) of our patients; in these patients, pathologic analysis of the doughnut added an additional cost of approximately $643 per specimen. Conclusion: This study demonstrates no clinical benefit in sending anastomotic doughnuts for histopathologic evaluation after performing a low anterior resection with a stapled anastomosis for rectal cancer. Overall cost may be decreased if doughnuts are not analyzed or if they are bundled with the primary tumor specimen. abstract_id: PUBMED:23248448 Efficacy of the modified anvil grasper for laparoscopic intra-corporeal circular stapled anastomosis. The traditional anvil grasper may be difficult to use for connecting the stem of an anvil with the centre rod of a circular stapler because the grasper holds the anvil completely still. In addition, the head angle is fixed and cannot handle the anvil head delicately in a tight pelvic space. Many surgeons use a grasper designed for holding the bowel or a dissector for holding the anvil during intra-corporeal circular stapled anastomosis during low anterior resection, sigmoidectomy, left hemi colectomy and know that it is difficult to connect segments with these instruments due to slipping. A new modified anvil grasper was developed with curved blades that can easily grasp the stem of an anvil and smoothly connect it with the centre rod of the circular stapler. This grasper should be useful for surgeons performing laparoscopic intra-corporeal circular stapled anastomoses, which are the most challenging part of laparoscopic colorectal surgery. abstract_id: PUBMED:12790989 Is routine histological reporting of doughnuts justified after anterior resection for colorectal cancer? Objective: 'Doughnuts' of colonic tissue which remain on a circular stapler after firing are routinely examined histologically. However, Royal College of Pathologists' guidelines state that this is not necessary. The aim of this study was to examine current practice among pathologists in one region in the UK and to determine the incidence of clinically significant pathology within colonic doughnuts. Methods: Current practice in all pathology Departments within the South-west Region was examined and 100 doughnuts from anterior resection specimens were reviewed for the presence of abnormal pathology. Results: Practice varied between hospitals with only 15% following guidelines for examination of doughnuts and 38% following guidelines for reporting of resection margins. 
Review of pathology specimens demonstrated no incidence of colonic adenocarcinoma in a doughnut. Benign pathology was present in 9% and dysplasia in 3% but in no case was patient management altered as a result of the pathology report. Conclusion: Application of recommended guidelines is inconsistent. We have demonstrated a low incidence of clinically significant pathology in doughnuts and more economical guidelines could therefore be safely applied without detriment to patient care. abstract_id: PUBMED:28694899 Comparison of circular- and linear-stapled gastrojejunostomy in laparoscopic Roux-en-Y gastric bypass: a multicenter study. Introduction: Laparoscopic Roux-en-Y gastric bypass (LRYGB) is a common, well-established procedure, but no consensus regarding selection of the gastrojejunostomy (GJ) technique has been reached, and standardization of this precise technique is far from being achieved. Aim: To compare circular-stapled and linear-stapled GJ in LRYGB in terms of operative time and postoperative complications. Material And Methods: This retrospective case-control study compared the perioperative and postoperative outcomes of LRYGB with a circular-stapled (LRYGB-CS) versus linear-stapled (LRYGB-LS) gastrojejunostomy. All patients, operated on in two academic referral care centers for bariatric surgery, were enrolled from April 2013 to June 2016. 457 patients were included (255 and 202 respectively in the LRYGB-CS and LRYGB-LS groups). After matching the groups for age, sex, body mass index, arterial hypertension, and presence of type 2 diabetes in a 1 : 1 ratio, 99 patients were enrolled in each. Results: The total operative time was longer in the LRYGB-LS group (140 vs. 85 min, p &lt; 0.001). The postoperative hemorrhage and wound infection rates were lower in the LRYGB-LS group (2.1% vs. 10.3%, p = 0.021, and 1.0% vs. 9.3%, p = 0.011). The readmission rates were comparable (8.2% vs. 6.1%, p = 0.593). There was no significant difference in the incidence of gastrojejunostomy leakage, stricture, port-site hernia, or marginal ulcer. Conclusions: Both anastomosis types for LRYGB are safe and have low and comparable risks of postoperative complications. After LRYGB-CS, postoperative bleeding and wound infections are slightly more frequent; however, the operative time is shorter. abstract_id: PUBMED:33506342 The Circular Stapled Esophagogastric Anastomosis in Esophagectomy: No Differences in Anastomotic Insufficiency and Stricture Rates Between the 25 mm and 28 mm Circular Stapler. Background: For patients undergoing an Ivor Lewis esophagectomy with a circular stapled anastomosis, the optimal diameter of the used circular stapler to restore continuity is unknown. The aim of this study was to compare the 25 mm stapled versus the 28 mm stapled esophagogastric anastomosis after Ivor Lewis esophagectomy, focusing on anastomotic insufficiency and postoperative anastomotic strictures. Methods: Between February 2008 and June 2019, 349 consecutive patients underwent Ivor Lewis esophagectomy with gastric conduit reconstruction and circular stapled anastomosis. Patient characteristics and postoperative results, such as anastomotic insufficiency rates, postoperative anastomotic stricture rates, time to anastomotic stricture rate, and the number of dilatations, were recorded in a prospective database and analyzed. Results: In 222 patients (64%), the 25 mm circular stapler was used and in 127 patients (36%) the 28 mm circular stapler was used. There were no differences in baseline characteristics. 
Anastomotic insufficiency rates were comparable between the 25 mm (12%) and the 28 mm groups (11%) (p = 0.751). There were no differences between postoperative anastomotic strictures in the 25 mm (14%) and the 28 mm groups (14%) (p = 0.863). Within patients with postoperative anastomotic strictures, a median number of 2 dilatations were observed in each group (p = 0.573) without differences in the time to first diagnosis (p = 0.412). Conclusion: There were no differences in anastomotic insufficiency and postoperative anastomotic stricture rates between the 25 mm and the 28 mm circular stapled esophagogastric anastomosis after Ivor Lewis esophagectomy. Both the 25 mm and 28 mm stapler can be safely used to create a circular stapled esophagogastric anastomosis to restore continuity after esophagectomy. abstract_id: PUBMED:35965865 Double purse-string suture technique for circular-stapled anastomosis during robotic Ivor Lewis esophagectomy. Background: With the advantage of the robotic suturing capacity, the purse-string suture is technically simple and convenient. This study aimed to present our technical aspects and initial results of robotic Ivor Lewis esophagectomy using two purse-string sutures for circular-stapled anastomosis. Methods: After stomach mobilization, gastric conduit formation, esophagus mobilization and two-field lymphadenectomy, the first robotic hand-sewn purse-string suture was applied to the esophageal muscular layer with an adequate margin above the tumor. A longitudinal incision in the anterior wall of the esophagus was made and the circular stapler anvil was inserted. The esophagus was transected by scissors 1 cm caudal to the first purse-string suture and the purse-string tied to secure the anvil. Then the second robotic hand-sewn purse-string suture was applied to the whole-layer of the proximal end of the esophagus and tied. Finally, the anvil was connected to the body of the stapler and fired. Results: The clinical data of ten patients who underwent robotic Ivor Lewis esophagectomy with an intrathoracic circular-stapled end-to-side anastomosis from February 2022 to April 2022 were collected. There were seven male and three female patients and had a mean age of 63.2 ± 7.6 years. Tissue donuts were complete in all cases and all operations were successfully performed without conversions. The mean overall operative time was 358.2 ± 40.3 min. The mean estimated blood loss was 83.2 ± 15.6 ml. The median length of hospital stay was 11.5 ± 4.1 days. All the patients had an uneventful postoperative period. Conclusion: Two purse-string sutures are necessary to obtain a tight seal of the esophageal tissue around the anvil to avoid potential anastomotic leak and are an essential process for the safety of circular-stapled anastomosis during robotic Ivor Lewis esophagectomy. abstract_id: PUBMED:31915339 The Role of Histological Assessment of Distal Doughnut in Low Anterior Resection for Low Rectal Cancer. Anterior resection (AR), especially low anterior resection (LAR), for low rectal cancer and colorectal anastomosis is a technical challenge to surgeons. But by using circular stapling devices now it is possible make more LARs technically feasible. A stapled end-to-end colorectal anastomosis is increasingly adopted following a low anterior resection for low rectal cancer. This descriptive cross-sectional study was carried out in the department of Surgery, Bangabandhu Sheikh Mujib Medical University, Dhaka, Bangladesh from December 2015 to December 2016. 
The ensuing doughnuts created from the stapling device are routinely sent for histological analysis. However, its efficacy remains debatable. This study aims to determine the role of sending distal doughnut for histological examination following a stapled end-to-end colorectal anastomosis done in low anterior resection for low rectal cancers. abstract_id: PUBMED:27705608 Simple Technique of Circular Stapled Anastomosis in Ivor Lewis Esophagectomy. Background: Advent of minimally-invasive esophagectomy necessitated the incorporation of stapled anastomotic techniques especially for intrathoracic anastomosis. We present our approach to the Ivor Lewis esophagectomy highlighting a simple modification in the anastomotic technique and review our experience with anastomotic outcomes. Methods: With IRB approval, patients who underwent Ivor Lewis esophagectomy with circular-stapled end-to-end anastomosis (EEA) were identified, divided into three equal sequential cohorts (A, B, and C), and compared for perioperative outcome. Cohorts were divided in a chronological order to have equal number of patients in each group. Results: Seventy-five patients underwent Ivor Lewis esophagectomy with circular stapled (EEA-25/28) anastomosis. Group A had longer median postoperative hospital stay and median postoperative ICU stay compared to Groups B and C. Ten patients (13%) had anastomotic leak-one patient required redo-anastomosis and other patients were managed with endoscopic interventions. There was significant decrease in rate of anastomotic leak with experience (8 versus 1 versus 1, P = .004). There were two perioperative deaths, one each in Groups A and C, including one death due to anastomotic leak (Group A). Conclusion: Use of simple modifications to stapled EEA, as described here, has led to decrease in anastomotic leaks following Ivor Lewis esophagectomy. Answer: The routine practice of histological examination of circular stapled 'doughnuts' following colorectal anastomosis has been called into question by several studies. These studies have found that the histological examination of 'doughnuts' has little to no impact on patient management and does not provide additional oncological or cost benefits. A study examining patients who underwent a stapled anastomosis for colorectal cancer resection found that out of 100 sets of 'doughnuts', only two showed histological abnormalities, neither of which influenced subsequent patient management. The study concluded that routine histological examination of 'doughnuts' is not beneficial and has a significant impact on time and resource use (PUBMED:16623161). Similarly, a retrospective review in Singapore of patients who underwent a proctectomy followed by a stapled end-to-end colo-rectal anastomosis for colorectal cancer revealed that none of the specimen margins or colonic 'doughnuts' were positive for malignancy, and clinical decisions were not altered based on the histology results of the 'doughnuts'. The study suggested that routine sending of 'doughnuts' for histological examination is not necessary except in selected cases where the risk of a positive margin is higher (PUBMED:24798632). Another retrospective review of 486 patients who underwent a low anterior resection with stapled anastomosis for rectal cancer found that pathologic examination of the 'doughnuts' did not change clinical management in any patient. 
The study also highlighted the additional cost incurred by separate analysis of the 'doughnuts' when not bundled with the primary tumor specimen, suggesting that overall costs could be decreased if 'doughnuts' are not analyzed or are bundled with the primary tumor specimen (PUBMED:28187868). In the UK, a study found that the practice of examining 'doughnuts' varied between hospitals, with only a small percentage following the Royal College of Pathologists' guidelines. The study reported no incidence of colonic adenocarcinoma in a 'doughnut' and concluded that more economical guidelines could be safely applied without detriment to patient care (PUBMED:12790989). In summary, the evidence suggests that routine histological examination of circular stapled 'doughnuts' following colorectal anastomosis is of questionable value and may not be necessary in most cases, as it does not typically affect patient management and incurs additional costs.
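A recurring observation in the doughnut studies summarized above is a zero numerator: for example, no malignant doughnut among 336 specimens in the Singapore series. A useful way to frame such findings is the upper bound of the confidence interval for the true event rate; the sketch below computes the one-sided 95% upper bound exactly and with the common "rule of three" approximation (3/n). The sample size of 336 is taken from the cited series, but the calculation itself is an illustration added here, not an analysis performed in any of the studies.

```python
def zero_event_upper_bound(n, alpha=0.05):
    """One-sided 95% upper bound on the true event rate when 0 events are
    observed in n trials, plus the 'rule of three' approximation (3/n)."""
    exact = 1 - alpha ** (1 / n)  # solves (1 - p)^n = alpha for p
    return 3 / n, exact

approx, exact = zero_event_upper_bound(n=336)
print(f"rule of three: {approx:.2%}, exact: {exact:.2%}")
# With 0 malignant doughnuts in 336 specimens, the true rate could still be
# close to 0.9% at the upper end of the one-sided 95% interval.
```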
Instruction: Does smoking reduce the progression of osteoarthritis? Abstracts: abstract_id: PUBMED:23335563 Does smoking reduce the progression of osteoarthritis? Meta-analysis of observational studies. Objective: To determine whether smoking reduces the progression of osteoarthritis (OA). Methods: Observational studies examining smoking and progression of OA were systematically searched through Medline (1948-), EMBase (1980-), Web of Science, PubMed, and Google and relevant references. The search was last updated in May 2012. Odds ratios (ORs) and 95% confidence intervals (95% CIs) were directly retrieved or calculated. Current standards for reporting meta-analyses of observational studies (Meta-Analysis of Observational Studies in Epidemiology) were followed. Quality-related aspects such as study design, setting, sample selection, definition of progression, and confounding bias were recorded. Stratified and meta-regression analyses were undertaken to examine the covariates. Results: Sixteen studies (976,564 participants) were identified from the literature. Overall, there was no significant association between smoking and progression of OA (OR 0.92; 95% CI 0.83, 1.02). There was moderate heterogeneity of results (I² = 57.3%, P = 0.0024). Subgroup analyses showed some associations of marginal significance; however, meta-regression did not confirm any significant results. Conclusion: There is no compelling evidence that smoking has a protective effect on the progression of OA. The results concur with a previous meta-analysis published by this group that showed no association between smoking and incidence of OA. Taken together, smoking does not appear to reduce either the incidence or progression of OA. abstract_id: PUBMED:26432984 The relationship between smoking and knee osteoarthritis in the Osteoarthritis Initiative. Objective: To estimate the extent that smoking history is associated with symptoms and disease progression among individuals with radiographically confirmed knee osteoarthritis (OA). Method: Both cross-sectional (baseline) and longitudinal studies employed data from the Osteoarthritis Initiative (OAI) (n = 2250 participants). Smoking history was assessed at baseline with 44% current or former smokers. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) was used to measure knee pain, stiffness, and physical function. Disease progression was measured using joint space width (JSW). We used adjusted multivariable linear models to examine the relationship between smoking status and exposure in pack years (PY) with symptoms and JSW at baseline. Changes in symptoms and JSW over time were further assessed. Results: In cross-sectional analyses, compared to never-smokers, high PY (≥15 PY) was associated with slightly greater pain (beta 0.36, 95% CI: 0.01-0.71) and stiffness (beta 0.20, 95% CI: 0.03-0.37); and low PY (<15 PY) was associated with better JSW (beta 0.15, 95% CI: 0.02-0.28). Current smoking was associated with greater pain (beta 0.59, 95% CI: 0.04-1.15) compared to never-smokers. These associations were not confirmed in the longitudinal study. Longitudinally, no associations were found between high or low PY or baseline smoking status with changes in symptoms (at 72 months) or JSW (at 48 months). Conclusion: Cross-sectional findings are likely due to residual confounding. The more robust longitudinal analysis found no associations between smoking status and symptoms or JSW.
Long-term smoking provides no benefits to knee OA patients while exposing them to other well-documented serious health risks. abstract_id: PUBMED:28711879 The Prevalence, Incidence, and Progression of Hand Osteoarthritis in Relation to Body Mass Index, Smoking, and Alcohol Consumption. Objective: To estimate the extent that overweight/obesity, smoking, and alcohol are associated with prevalence and longitudinal changes of radiographic hand osteoarthritis (OA). Methods: Participants from the Osteoarthritis Initiative (n = 1232) were included, of whom 994 had 4-year followup data. In analyses on incident hand OA, only persons without hand OA at baseline were included (n = 406). Our exposure variables were overweight/obesity [body mass index (BMI), waist circumference], smoking (current/former, smoking pack-yrs), and alcohol consumption (drinks/week). Using linear and logistic regression analyses, we analyzed possible associations between baseline exposure variables and radiographic hand OA severity, erosive hand OA, incidence of hand OA, and radiographic changes. Analyses were adjusted for age, sex, and education. Results: Neither overweight nor obesity were associated with hand OA. Current smoking was associated with less hand OA in cross-sectional analyses, whereas longitudinal analyses suggested higher odds of incident hand OA in current smokers (OR 2.20, 95% CI 1.02-4.77). Moderate alcohol consumption was associated with higher Kellgren-Lawrence sum score at baseline (1-3 drinks: 1.55, 95% CI 0.43-2.67) and increasing sum score during 4-year followup (4-7 drinks: 0.33, 95% CI 0.01-0.64). Moderate alcohol consumption (1-7 drinks/week) was associated with 2-fold higher odds of erosive hand OA, which was statistically significant. Additional adjustment for BMI gave similar strengths of associations. Conclusion: Overweight/obesity were not associated with hand OA. Contrasting results were observed for smoking and hand OA, suggesting lack of association. Moderate alcohol consumption was associated with hand OA severity, radiographic changes, and erosive hand OA, warranting further investigation. abstract_id: PUBMED:8877927 Risk factors for progression to new sites of radiographically defined osteoarthritis in women. Objective: To describe the association between hormonally related risk factors and the progression to new sites of radiographically defined full body (generalized) osteoarthritis (OA) in a cohort of older women. Methods: A retrospective cohort design was used to study former radium dial painters over the age of 40 years who had minimal radium exposure. At study entry and at varying followup times, clinical examinations were conducted and full body radiographs were taken. Two followup groups were defined: women with a followup radiograph 1-9 years after baseline (n = 75) and 10-19 years after baseline (n = 53). Fifty-five joints (10 joint groups) were independently graded at baseline and followup for OA by the method of Kellgren and Lawrence, and provided the basis for summary full body OA progression scores. Progression was defined as an increase in the number of sites with OA and in separate analyses as an increase in the number of joint groups with OA. Results: Increasing length of followup and lower baseline OA score were associated with greater OA progression, while age at baseline examination showed no clear relation to progression. 
Beyond these variables, increasing height and having ever smoked were inversely associated with OA progression, while body mass index (BMI) showed a weak positive association. In multivariable modeling for followup 1-9 years, only lower baseline OA score predicted greater OA progression to new sites (partial r2 = 0.13, p = 0.0009). In followup 10-19 years, baseline OA score (partial r2 = 0.12, p = 0.0011), height (partial r2 = 0.057, p = 0.033), and smoking status (partial r2 = 0.09, p = 0.035) were independent predictors of OA progression to new sites, while greater BMI was a positive, weak, and nonsignificant predictor (partial r2 = 0.031, p = 0.29). History of prior cholecystectomy, hysterectomy, dilation and curetage, number of pregnancies, and change in BMI were not significantly related to progression of OA to new sites. Similar results were found for predictors of OA progression to new joint groups. Conclusion: Lower baseline level of OA is associated with greater OA progression to new sites or joint groups independent of age, suggesting a "burnout" phenomenon. In addition, shorter height and having never smoked appear to be independent risk factors that predict the progression of radiographic OA to new sites or joint groups. abstract_id: PUBMED:26680278 Racial differences in associations between baseline patterns of radiographic osteoarthritis and multiple definitions of progression of hip osteoarthritis: the Johnston County Osteoarthritis Project. Background: To identify baseline radiographic features that predict hip osteoarthritis (HOA) progression, and to explore differences in these associations by race. Methods: Radiographs from the community-based Johnston County OA Project were scored using Kellgren-Lawrence (KL) grade and for presence and location of joint space narrowing (JSN), osteophytes, and subchondral changes. Associations between these features and HOA progression (increase of at least 1 KL grade, interval hip replacement, range of motion [ROM, a reduction of ≥10° in internal rotation], or disability [increase of ≥0.2 in Health Assessment Questionnaire scores], or Any of these) were assessed using logistic regression, adjusting for age, gender, race, hip injury, BMI, education, smoking and follow-up time, accounting for multiple comparisons. Race interactions were assessed and analyses stratified as indicated. Results: The sample (n = 1,422) included 40 % men and 26 % African American (AA) participants, with mean age 61 years and BMI 29 kg/m(2). The baseline frequency of radiographic hip OA (RHOA) between Caucasians and AAs was similar (23 %), although some radiographic features differed. AAs were more likely to have progression defined by ROM or disability or Any progression; Caucasians were more likely to have RHOA progression. JSN, subchondral sclerosis, and medial osteophytes were associated with increased RHOA progression overall; JSN was associated with disability progression only in AAs, while lateral osteophytes were associated with ROM progression only in Caucasians. Conclusions: AAs and Caucasians exhibited differences in the radiographic presentation and progression patterns of HOA, with AAs reporting progressive pain and disability, while Caucasians had more RHOA progression. abstract_id: PUBMED:20129202 What are the best markers for disease progression in osteoarthritis (OA)? To be able to prevent progression of osteoarthritis, the knowledge of prognostic factors of this progression is important. 
If certain prognostic factors are modifiable, they may enhance our ability to reduce osteoarthritis progression. Even if these prognostic factors are not modifiable, they can still be used to identify high-risk groups, which may have implications for patient information and the perspective of medical treatment. Prognostic factors of progression are reviewed here, mainly for hip and knee osteoarthritis as most data available concern these localisations. Areas of further research are highlighted. abstract_id: PUBMED:36890196 Association of smoking and osteoarthritis in US (NHANES 1999-2018). Little is currently known about the effect of smoking on osteoarthritis (OA). This study aimed to investigate the relationship between smoking and OA in the United States (US) general population. Cross-sectional study. Level of evidence, 3. 40,201 eligible participants from the National Health and Nutrition Examination Survey 1999-2018 were included and divided into OA and non-arthritis groups. Participants' demographics and characteristics were compared between the two groups. Then the participants were divided into non-smokers, former smokers, and current smokers based on their smoking status, and demographics and characteristics among the three groups were also compared. Multivariable logistic regression was used to determine the relationship between smoking and OA. The current and former smoking rate in the OA group (53.0%) was significantly higher than that in the non-arthritis group (42.5%; p < 0.001). Multivariable regression analysis including body mass index (BMI), age, sex, race, education level, hypertension, diabetes, asthma and cardiovascular disease showed that smoking was associated with OA. This large national study highlights a positive association between smoking and OA prevalence in the general US population. It is necessary to further study the relationship between smoking and OA in order to determine the specific mechanism of smoking on OA. abstract_id: PUBMED:32950705 Impact of smoking on femorotibial and hip osteoarthritis progression: 3-year follow-up data from the KHOALA cohort. Objectives: To evaluate the clinical and structural impact of smoking on knee and hip osteoarthritis at baseline and after 3 years. Methods: Observational data on the progressive effects of smoking at baseline and after 3 years were collected from The Knee and Hip Osteoarthritis Long-Term Assessment cohort comprising a French population of patients aged 40-75 years with symptomatic lower limb osteoarthritis. Clinical (the Western Ontario and McMaster Universities Arthritis Index and Harris scores) and structural (radiography for osteophyte detection and joint-space narrowing assessment) evaluations were conducted. The tobacco usage categories were 'never smoker', 'former smoker', and 'current smoker'. Results: Of the 873 subjects included, 215 (25%) were former smokers and 119 (14%) were current smokers. Multivariate analyses revealed that former and current smokers had fewer knee osteophytes in the medial compartment at baseline (odds ratio [OR]=0.64 [0.41-0.99] and 0.63 [0.36-1.11], respectively), lower osteophyte development in the lateral condyle after 3 years (OR=0.11 [0.03-0.45] and 0.15 [0.03-0.97]), and lower osteophyte development in the lateral tibial plateau after 3 years (OR=0.22 [0.06-0.75] and 0.68 [0.14-3.35]). Higher tobacco consumption and longer duration of consumption were significantly associated with fewer knee osteophytes at baseline and lower osteophyte development at 3 years.
Conclusion: Although cigarette smoking did not influence knee function, pain, or the need for replacement surgery, current and former smokers developed fewer osteophytes. This relationship may be linked to the quantity and duration of consumption. Our results provide further insight into the smoking-related pathophysiology of osteoarthritis. abstract_id: PUBMED:33385863 Incidence and progression of ankle osteoarthritis: The johnston county osteoarthritis project. Objective: To determine the incidence and progression of ankle osteoarthritis (OA) and associated risk factors in a community-based cohort of African Americans and whites. Methods: Data were from 541 participants who had standardized lateral and mortise radiography of the ankles in weight bearing at baseline (2013-2015) and follow-up (2017-2018). Incident radiographic ankle OA (rAOA) was defined as a Kellgren-Lawrence grade (KLG) ≥ 1 at follow-up among ankles with baseline KLG &lt; 1; progressive rAOA was a ≥ 1 KLG increase at follow-up among ankles with KLG ≥ 1 at baseline. Symptoms were assessed using self-reported pain, aching, and stiffness (PAS) on most days and the Foot and Ankle Outcome Score (FAOS) symptoms subscale. Ankle-level logistic regression models were used to assess associations of ankle outcomes with covariates (age, sex, race, body mass index [BMI], smoking, number of symptomatic joints, comorbidities, prior ankle injury, and knee or foot OA). Results: Among ankles without rAOA at baseline, 28% developed incident rAOA, 37% had worsening FAOS symptoms, and 7% had worsening PAS. Incident rAOA and worsening ankle symptoms were associated with higher BMI and symptoms in other joints. Among ankles with baseline rAOA, 4% had progressive rAOA, 35% had worsening of FAOS symptoms, and 9% had worsening PAS. rAOA progression was associated with ankle injury and concomitant knee or foot OA; worsening of symptoms was associated with higher BMI and other symptomatic joints. Conclusions: Not all ankle OA is post-traumatic. Smoking prevention/cessation, a healthy weight, and injury prevention may be methods for reducing the incidence and progression of rAOA. abstract_id: PUBMED:33609692 The cholinergic system in joint health and osteoarthritis: a narrative-review. Osteoarthritis (OA) poses a major health and economic burden worldwide due to an increasing number of patients and the unavailability of disease-modifying drugs. In this review, the latest understanding of the involvement of the cholinergic system in joint homeostasis and OA will be outlined. First of all, the current evidence on the presence of the cholinergic system in the normal and OA joint will be described. Cholinergic innervation as well as the non-neuronal cholinergic system are detected. In a variety of inflammatory diseases, the classic cholinergic anti-inflammatory pathway lately received a lot of attention as via this pathway cholinergic agonists can reduce inflammation. The role of this cholinergic anti-inflammatory pathway in the context of OA will be discussed. Activation of this pathway improved the progression of the disease. Secondly, chondrocyte hypertrophy plays a pivotal role in osteophyte formation and OA development; the impact of the cholinergic system on hypertrophic chondroblasts and endochondral ossification will be evaluated. Cholinergic stimulation increased chondrocyte proliferation, delayed chondrocyte differentiation and caused early mineralisation. 
Moreover, acetylcholinesterase and butyrylcholinesterase affect the endochondral ossification via an acetylcholine-independent pathway. Thirdly, subchondral bone is critical for cartilage homeostasis and metabolism; the cholinergic system in subchondral bone homeostasis and disorders will be explored. An increase in osteoblast proliferation and osteoclast apoptosis is observed. Lastly, current therapeutic strategies for OA are limited to symptom relief; here the impact of smoking on disease progression and the potential of acetylcholinesterase inhibitors as candidate disease-modifying drugs for OA will be discussed. Answer: The evidence regarding the impact of smoking on the progression of osteoarthritis (OA) is mixed and does not conclusively support the notion that smoking reduces OA progression. A meta-analysis of observational studies found no significant association between smoking and the progression of OA, with moderate heterogeneity in results and no compelling evidence to suggest a protective effect of smoking on OA progression (PUBMED:23335563). Similarly, longitudinal analysis from the Osteoarthritis Initiative indicated no associations between smoking status and symptoms or joint space width changes in knee OA, suggesting that long-term smoking does not provide benefits to knee OA patients (PUBMED:26432984). In contrast, some studies have reported an inverse association between smoking and OA progression. For instance, a study on the progression of OA to new sites in women found that having ever smoked was inversely associated with OA progression (PUBMED:8877927). Additionally, data from the KHOALA cohort indicated that current and former smokers developed fewer osteophytes, which may suggest a potential relationship between smoking and certain aspects of OA pathophysiology (PUBMED:32950705). However, other studies have shown different associations. For example, current smoking was associated with higher odds of incident hand OA in longitudinal analyses (PUBMED:28711879), and a large national study from the US highlighted a positive association between smoking and OA prevalence (PUBMED:36890196). Furthermore, the Johnston County Osteoarthritis Project found that smoking prevention/cessation, maintaining a healthy weight, and injury prevention may be methods for reducing the incidence and progression of radiographic ankle OA (PUBMED:33385863). In summary, while some studies suggest a potential inverse relationship between smoking and certain OA outcomes, the overall body of evidence does not support the idea that smoking reduces the progression of OA. Instead, the more robust longitudinal analyses and larger population studies do not find a protective effect of smoking on OA, and smoking is generally associated with other serious health risks (PUBMED:26432984; PUBMED:36890196).
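To make the statistical reading in this answer concrete, the sketch below shows how an odds ratio and its 95% confidence interval are typically computed from a 2x2 table, and why an interval that spans 1.0, like the pooled OR of 0.92 (95% CI 0.83-1.02) reported in PUBMED:23335563, is interpreted as no significant association. The cell counts are hypothetical and purely illustrative; they are not taken from any of the cited studies.

    import math

    # Hypothetical 2x2 table (illustrative only, not from the cited studies):
    # rows = exposure (smokers / never-smokers), columns = outcome (OA progressed / did not progress)
    a, b = 120, 380   # smokers: progressed, not progressed
    c, d = 130, 370   # never-smokers: progressed, not progressed

    odds_ratio = (a * d) / (b * c)

    # Wald-type 95% confidence interval on the log-odds-ratio scale
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(odds_ratio)
    ci_low = math.exp(log_or - 1.96 * se_log_or)
    ci_high = math.exp(log_or + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
    # If the interval contains 1.0, the association is not statistically significant at the
    # 5% level; this is the same reading applied to the pooled OR of 0.92 (0.83-1.02).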
Instruction: The natural history of atrial fibrillation in patients with permanent pacemakers: is atrial fibrillation a progressive disease? Abstracts: abstract_id: PUBMED:26139311 The natural history of atrial fibrillation in patients with permanent pacemakers: is atrial fibrillation a progressive disease? Objectives: Atrial fibrillation (AF) is thought to be a progressive arrhythmia, starting with short paroxysmal episodes, until eventually, it becomes permanent. Evidence for this is limited to studies with short follow-up or with minimal cardiac rhythm monitoring. We utilised the continuous rhythm monitoring capabilities of implanted pacemakers to define better the natural history of AF. Methods: The study included 356 patients with pacemaker devices capable of continuous atrial rhythm monitoring (186 male, mean age (± SD) 79.5 ± 8.9 years). All clinical records, including history/physical examination reports, laboratory results, ECGs and Holter monitoring data were reviewed. Patients were included if AF episodes &gt;30 s were documented. Permanent pacemaker diagnostic data were reviewed at least every 12 months. ACC/AHA/ESC guidelines were used to define AF episodes as paroxysmal, persistent or long-standing persistent/permanent. Results: Study follow-up period (± SD) was 7.2 ± 3.1 years. Over the study period, 179 of 356 patients (50.3 %) had at least one episode of persistent AF. Of the 356 patients, 314 (88.2 %) had paroxysmal AF and 42 (11.8 %) had persistent AF at the time of diagnosis. The predominant AF subtype, at latest follow-up, was paroxysmal for 192 patients (53.9 %), persistent for 77 (21.6 %) and long-standing persistent/permanent for 87 (24.4 %). Univariable predictors of progression to persistent AF were (1) male gender, (2) increasing left atrial diameter (LAD), (3) reduced atrial pacing (AP) and (4) increasing ventricular pacing. Conclusions: Although many patients with AF will have persistent episodes, long-term continuous pacemaker follow-up demonstrates that the majority will have a paroxysmal, as opposed to persistent, form of the arrhythmia. abstract_id: PUBMED:7123283 Natural history of sinus node dysfunction after permanent pacemaker implantation. To explore the natural history of symptomatic sinus node dysfunction after permanent pacemaker implantation, we followed up 71 patients (27 with sinus arrest, 27 with unexplained sinus bradycardia, 14 with the bradycardia/tachycardia syndrome, and three with sinoatrial block) for four to 14 years after placement of a permanent ventricular pacemaker. Survival rates at one, five, and ten years were 88%, 75%, and 70%, respectively. Overall mortality was 31%. Patients with congestive heart failure had significantly lower survival rates than those without (P less than .03). Survival rates were significantly lower in patients with coronary artery disease than in those with conduction system disease only (P less than .01). Fourteen of the 19 patients whose cause of death was known died of cardiovascular disease, ten of complications of coronary artery disease. Thus, survival after pacemaker implantation is adversely and profoundly influenced by underlying cardiovascular disease, particularly coronary artery disease and its complications. abstract_id: PUBMED:36673543 Prevalence of Cardio-Embolic Brain Complications in Permanent and Paroxysmal Atrial Fibrillation Patients. Background: Atrial fibrillation (AF) is the most frequent of all cardiac arrhythmias, with an increasing prevalence in the last 20 years. 
Cardio-embolic brain complications (CEBC) related to AF often occur or recur, even following appropriate treatment. Method: We conducted a retrospective study and analyzed the presence of stroke, dementia, and Parkinson's disease (PD) in both paroxysmal and permanent AF patients. The records of 1111 consecutively admitted patients with a primary diagnosis of AF at the Municipal Emergency University Hospital, Timisoara, between 2015 and 2016 were examined. Statistical analysis was performed on the patients included in the study based on the inclusion and exclusion criteria. Results: A significant statistical difference was noted among the permanent AF group for stroke (48.75% vs. 26.74%, p < 0.001) and dementia (10.25% vs. 3.86%, p < 0.001) compared to paroxysmal AF patients. Permanent AF patients presented a higher risk of developing stroke, dementia, and PD compared to patients with paroxysmal AF. Meanwhile, male gender and an increase in age showed an increase in the odds of having cardio-embolic brain complications in patients with paroxysmal AF. Conclusion: Based on the results obtained, it can be concluded that the risk of cardio-cerebral embolic complications is greater in permanent AF patients compared to paroxysmal AF cases. Ischemic stroke and dementia are more frequent in the permanent AF group, but analyzing the data regarding the age of onset of paroxysmal AF is critical due to the fact that it involves a younger population. Prompt diagnosis and treatment can help significantly in saving stroke patients. abstract_id: PUBMED:324160 Natural history in patients with mitral- and aortic-valve disease (author's transl) Knowledge and due consideration of the natural history of valvular heart disease are prerequisites for their operative therapy. Presumptive mortality and morbidity of the surgical intervention must be weighed against the expected prognosis under medical treatment alone. The timing of the operation depends on these considerations. Mitral stenosis and the chronic forms of mitral and aortic incompetence have similar natural histories, and for both, signs and symptoms are good indicators for an eventual progression of the condition. The length of the period during which the patient is free of complaints may be quite variable, but a critical change in the natural history comes about once the disease causes signs and symptoms. Surgical repair is indicated when the patient reaches stage III according to the NYHA classification. The prognosis is worst for aortic stenosis, in particular due to the danger of sudden death. Patients with high pressure gradients are at particularly high risk; this holds true even for those patients who are not yet suffering from any complaints. The prognosis becomes even more serious when signs such as dyspnea, anginal pain, or syncopal attacks occur. Prognosis and indication for surgical intervention cannot be evaluated reliably by considering only the clinical signs without knowledge of hemodynamic parameters. Acute mitral and aortic incompetence, in particular when they occur during bacterial endocarditis, must be observed very closely because of their most serious prognosis; if necessary, emergency surgery must be carried out in these cases. abstract_id: PUBMED:528172 Natural history of sick sinus syndrome following permanent pacemaker implantation. Forty-four patients with sick sinus syndrome (SSS) who had been treated with a permanent pacemaker were followed for an average period of 39 months.
Fifteen patients had bradyarrhythmia alone (group 1) and 29 had bradycardia-tachycardia syndrome (group 2). Eight patients, all from group 2, died within a short period following pacemaker implantation. They all had ischemic heart disease, congestive heart failure and a short history of the symptomatic dysrhythmia. Eleven patients developed stable chronic atrial fibrillation, which terminated the clinical syndrome. In the remaining 25 patients, all without evidence of ischemic heart disease, the dysrhythmia persisted although symptoms were successfully controlled following pacemaker therapy. Based on these observations and data obtained from other surveys, we delineated three courses of SSS: 1) a subacute course, characterized by a short-term survival; 2) a transient, self-limited course in which conversion to stable atrial fibrillation occurs; and 3) a chronic course, in which the dysrhythmia persists and permanent pacemaker therapy is indicated. abstract_id: PUBMED:32190503 Delays in Temporary and Permanent Pacemakers: Causes and In-Hospital Outcomes. Background Temporary pacemakers frequently serve as a bridge to permanent pacemakers, but placement of the latter may be delayed. This study assessed the causes and in-hospital outcomes of patients with delayed placement of permanent pacemakers. Methods This retrospective study included all patients admitted to the Emergency Department who underwent temporary transvenous pacing in the Department of Cardiology, Lady Reading Hospital, Peshawar, Pakistan. The duration of hospitalization and the time from temporary to permanent placement were calculated in days. Asystole, infections, cardiac arrest, and death were recorded during the waiting period. Results Of the 260 patients who underwent temporary transvenous pacing, 136 (52.3%) were males and 124 (47.7%) were females, with an age range of 46-78 years. Coronary artery disease was prevalent in 34% of the patients. Only 5% of the patients were on arteriovenous (AV) nodal blocking agents, 44% had complete AV block, 22% had sinus node disease, and 14% had slow atrial fibrillation. The cause of high-degree AV block could not be determined in most patients. Most patients with ischemia- and hyperkalemia-induced AV block recovered. AV blocks induced by ischemia and with no known cause were not reversible, with most of these patients receiving permanent pacemakers. Of the 260 patients with high-degree AV block, 165 (63.5%) recovered. The mean waiting time for permanent pacemaker implantation was 8.7 ± 5.4 days. The waiting time was associated with increased infections and adverse hospital course. Conclusion A longer waiting period between permanent pacemaker indication and implantation is dangerous, as it is associated with an increased risk of adverse events such as infections, syncope, asystole, malignant arrhythmias, cardiac arrest, and death. abstract_id: PUBMED:30879900 Permanent cardiac pacing for patients with iatrogenic or potentially reversible bradyarrhythmia. Introduction: In iatrogenic or potentially reversible bradyarrhythmia, drug discontinuation or metabolic correction is recommended before permanent cardiac pacemaker (PM) implantation. These patients often have conduction system disease and there are few data on recurrence or the need for a permanent PM. Objective: To analyze the need for PM implantation in patients with iatrogenic bradyarrhythmia or bradyarrhythmia associated with other potentially reversible causes. 
Methods: We assessed consecutive symptomatic patients admitted to the emergency department with a primary diagnosis of bradyarrhythmia (atrioventricular [AV] node disease - complete or second-degree AV block (AVB) [CAVB: 2nd-degree AVB - 2:1], sinus bradycardia [SB] and atrial fibrillation [AF] with slow ventricular response [SVR]) in the context of iatrogenic causes or metabolic abnormalities. We determined the percentage of patients who required PM implantation. Results: We studied 153 patients (47% male) admitted for iatrogenic or potentially reversible bradyarrhythmia. Diagnoses were SB 16%, CAVB 63%, second-degree AVB 12%, and AF with SVR 10%. Eighty-five percent of patients were under negative chronotropic therapy, 3% had hyperkalemia and 12% had a combined etiology. After correction of the cause, 55% of patients (n=84) needed a PM. In these patients the most common type of bradyarrhythmia was CAVB, in 77% (n=65) patients. Conclusion: In a high percentage of patients with bradyarrhythmia associated with a potentially reversible cause, the arrhythmia recurs or does not resolve during follow-up. Patients with AV node disease constitute a subgroup with a higher risk of recurrence who require greater vigilance during follow-up and should be considered for PM implantation after the first episode. abstract_id: PUBMED:1512343 Natural history of sinus node disease treated with atrial pacing in 213 patients: implications for selection of stimulation mode. Objectives: This study was designed to analyze the incidence and determinants of complications and long-term survival in sinus node disease treated with atrial pacing. Background: Knowledge of the natural history of sinus node disease treated with different pacing modes is imperfect, and controversy exists regarding the optimal pacemaker therapy. Methods: A consecutive series of 213 patients with sinus node disease initially treated with atrial pacing was studied for a median follow-up period of 60 months. The end points studied were permanent atrial fibrillation, high grade atrioventricular (AV) block, P wave undersensing, pacing mode change, reoperation and death. Several prognostic factors were evaluated statistically and the survival rate was compared with that of a matched general population. Results: The incidence rate of permanent atrial fibrillation during follow-up was 7% (1.4%/year). The risk of this arrhythmia increased substantially with age greater than or equal to 70 years at pacemaker implantation. Only 2 of the 15 patients who developed permanent atrial fibrillation required ventricular pacing. High grade AV block occurred in 8.5% (1.8%/year) and its incidence was much greater in patients with complete bundle branch block or bifascicular block (35%) than in patients without such conduction disturbances (6%). A change to ventricular or dual-chamber stimulation was necessary in 14% of all patients, primarily because of early lead dislodgment or high grade AV block. Surgical intervention with maintenance of atrial pacing was required in 7% of patients. The survival rates of 97% at 1 year, 89% at 5 years and 72% at 10 years did not differ significantly from those of a matched general population. Conclusions: In sinus node disease, atrial pacing can be successfully applied during long-term follow-up. Patients with complete bundle branch or bifascicular block in addition to sinus node disease should initially receive a dual-chamber pacemaker, but routine application of dual-chamber stimulation does not appear to be warranted. 
abstract_id: PUBMED:37918562 Prevalence of metabolic syndrome and its association with atrial arrhythmias in patients with implanted permanent pacemaker for cardiac conduction abnormality. The objective of the study was to find the prevalence of metabolic syndrome along with identifying the atrial arrhythmias, QTC interval, and coronary artery disease among these patients during follow-ups. Among 171 subjects who were implanted with permanent pacemakers, metabolic syndrome was present in 90 (52.6 %). Prevalence of Arrhythmias was 49 (28.7 %), atrial tachycardia (AT)/atrial fibrillation (AF) was seen in 29 (17 %) patients. Our study showed that there is a strong association between metabolic syndrome and atrial arrhythmias. Metabolic syndrome, age, coronary artery disease and Systolic blood pressure were good independent predictors of atrial arrhythmias among patients with pacemaker implantation. abstract_id: PUBMED:24499313 The prevalence of sinus rhythm in patients with "permanent" atrial fibrillation. Background: The term "permanent" atrial fibrillation (AF) is generally used to describe the rhythm status of patients for whom cardioversion has failed or attempts to restore normal sinus rhythm (NSR) have ceased. However, the rhythm status of such patients is typically assessed by symptoms or intermittent monitoring, and therefore categorization may be imprecise. Methods: We evaluated the presence of NSR among patients who were identified by their physicians as having permanent AF and who underwent prior insertion of a cardiac rhythm management device in the OMNI study. Patients with a dual- or triple-chamber device (pacemaker, implantable cardiac defibrillator, or cardiac resynchronization therapy) and ≥30 days of device data were studied. We tabulated the percentage of follow-up days spent entirely in NSR, entirely in AF, or in both NSR and AF. Results: A total of 69 patients met inclusion criteria and were followed for 767 ± 479 days. More than 73% of patients experienced ≥1 entire day in NSR. On average, 38.2% of days were spent entirely in NSR, 11.8% of days were spent in a combination of NSR and AF, and only 50.0% of days were spent entirely in AF. The median daily AF burden during follow-up was 14.6 [1.1-23.7] hours/day. Conclusions: NSR is common in many device patients thought to have permanent AF, suggesting that continuous arrhythmia monitoring could be useful in identifying permanent AF patients who may benefit from renewed rhythm control efforts. Alternatively, some permanent AF patients undergoing atrioventricular nodal ablation may benefit from dual-chamber devices due to likely periods of NSR. Answer: The natural history of atrial fibrillation (AF) in patients with permanent pacemakers suggests that AF may not always be a progressive disease. A study that included 356 patients with pacemaker devices capable of continuous atrial rhythm monitoring found that over a follow-up period of 7.2 ± 3.1 years, 50.3% of patients had at least one episode of persistent AF. However, the majority of patients (53.9%) had a paroxysmal form of the arrhythmia at the latest follow-up, with 21.6% having persistent AF and 24.4% having long-standing persistent/permanent AF. Univariable predictors of progression to persistent AF included male gender, increasing left atrial diameter (LAD), reduced atrial pacing (AP), and increasing ventricular pacing (PUBMED:26139311). 
This indicates that while many patients with AF will experience persistent episodes, the majority will maintain a paroxysmal form of the arrhythmia over time. Therefore, AF may not necessarily follow a linear progression from paroxysmal to persistent to permanent in all patients with permanent pacemakers. The study's findings challenge the notion that AF is always a progressive disease and highlight the variability in its natural history. It is important to note that individual patient characteristics and the presence of underlying cardiovascular conditions can influence the progression and management of AF.
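As a quick arithmetic check on the percentages quoted in this answer, the sketch below recomputes them from the patient counts reported in the continuous-monitoring study (PUBMED:26139311): 356 monitored patients, 179 with at least one persistent episode, and 192, 77, and 87 patients whose predominant subtype at latest follow-up was paroxysmal, persistent, and long-standing persistent/permanent, respectively.

    # Counts taken directly from the abstract of PUBMED:26139311
    total_patients = 356
    persistent_ever = 179
    subtype_counts = {
        "paroxysmal": 192,
        "persistent": 77,
        "long-standing persistent/permanent": 87,
    }

    print(f"At least one persistent episode: {100 * persistent_ever / total_patients:.1f}%")  # 50.3%
    for subtype, n in subtype_counts.items():
        print(f"{subtype}: {100 * n / total_patients:.1f}%")  # 53.9%, 21.6%, 24.4%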
Instruction: Loss of autonomy of hospitalized elderly patients: does hospitalization increase disability? Abstracts: abstract_id: PUBMED:24858036 Loss of autonomy of hospitalized elderly patients: does hospitalization increase disability? Aim: The study of the determinants of loss of autonomy during hospitalization may be valuable in the identification of the most effective interventions and to achieve better outcomes. The aim of this study was to describe changes in the level of autonomy of the elderly admitted to the hospital at the entrance and at discharge in relation to a rehabilitation program. Methods: Prospective observational study conducted at the INRCA Geriatric Hospital of Ancona. The study included patients aged 65 years and over, admitted daily to the INRCA Hospital of Ancona between September and December 2010. Criteria for inclusion were age ≥ 65 years, length of stay > 24 hours and signed informed consent. Patients admitted for less than 24 hours or in day hospital or day surgery were excluded from the beginning. A total of 1266 elderly patients were recruited in the period. From this sample, 74 people who died during hospitalization were excluded. At the time of hospitalization (within 24 hours) and at discharge, patients were evaluated with the Barthel Index (BI), the Rankin scale, and a short assessment of cognitive status derived from the Mini Mental State Examination (MMSE). Results: Referring to 1192 subjects who participated in the study, the mean age was 82.13 years ±7.39, age range between 65 and 100 years. The average BI was 56.6±36.16 (SD) (median value = 60) at admission and 63.84±34.7 (SD) (median value = 70) at discharge. The average Rankin score at admission was 2.63±1.5 (SD) (median value = 3). Conclusion: Patients presented a better BI score at discharge, and this improvement was associated with the implementation of a rehabilitation treatment. Hospitalization of the elderly patient in a suitable environment, such as a geriatric hospital, contrary to some theories highlighting only the negative aspects of removal from the living environment, can be a measure of benefit for the reduction of disability and the recovery of compromised activities during and after the acute event. The collection of data on the level of autonomy of the subjects before and after hospitalization can be a useful element for clinical evaluation in a geriatric hospital. abstract_id: PUBMED:25271030 The medical autonomy of elderly in Taiwan The elderly population is increasing rapidly in Taiwan. With the average life expectancy on the rise, the elderly have become major consumers of healthcare products and services. Factors that influence respect for autonomy, a core value of medical ethics, may be related to family, society, and the medical culture. Especially in patients who are already elderly, aging causes declines in physical, mental and societal capacities. Practicing respect for patient autonomy is particularly challenging for healthcare professionals in Taiwan due to the unique cultural background of elderly Taiwanese patients. This article reviews and integrates the literature related to the issue of patient autonomy and elaborates on medical decision-making among elderly patients in Taiwan in the contexts of: the disadvantages faced by the elderly, the background of Chinese culture, and the current medical decision-making environment. A few suggestions are proposed to help preserve the medical-decision-making autonomy of elderly patients in Taiwan.
abstract_id: PUBMED:23426191 Taste loss in hospitalized multimorbid elderly subjects. Background: Loss of the sense of taste is common among older people. Morbidities and polypharmacy may contribute to the age-related decline in gustatory function. The aims of the present study were to investigate taste perception in elderly hospitalized patients by comparing their taste recognition thresholds with those of healthy, free-living elderly individuals and to identify potential determinants of taste loss. Methods: The participants in this observational study were 55 elderly patients hospitalized in the acute geriatric section of the Department of Medical and Surgical Sciences at Padova University and 41 free-living individuals aged older than 65 years, randomly recruited from elderly people attending mild fitness programs at public gymnasiums in Padova. Data were collected on nutrition, health, cognitive, and functional status for all participants. Gustatory capabilities were assessed using aqueous solutions of sucrose, sodium chloride, citric acid, and quinine hydrochloride (representing sweet, salty, sour, and bitter stimuli, respectively), and taste recognition thresholds were measured in both groups. Results: In comparison with the free-living elderly subjects, those in hospital were significantly less able to recognize the taste of citric acid (P < 0.05). Low citric acid sensitivity was independently associated with advanced age (≥75 years; odds ratio [OR] 3.01, 95% confidence interval [CI] 1.01-9.82), polypharmacy (number of prescribed drugs ≥ 4; OR 2.74, 95% CI 1.01-7.72), and poor nutritional status (as assessed by Mini Nutritional Assessment score < 23.5; OR 5.08, 95% CI 1.76-14.6). Conclusion: Because gustatory impairment may reduce a person's appetite and lead to inadequate dietary intake, compensatory nutritional measures, such as the use of flavor-enhanced foods, should be strongly encouraged, particularly in the hospital setting. abstract_id: PUBMED:29569564 Pilot study: effectiveness of a training program about autonomy in elderly patients hospitalized after an acute episode. Patients hospitalized in rehabilitation after an acute episode are at high risk of functional impairment and require active measures to prevent loss of autonomy. Resistance training has been proven beneficial for improving the physical performance of aged subjects in the community. However, the feasibility and effects of training programs have never been assessed among hospitalized patients after an acute medical episode. In this pilot study, we aimed to evaluate a personalized training program, delivered by an adapted physical education teacher and combining muscular reinforcement and endurance training, among patients over 65 years old hospitalized in a geriatric rehabilitation care setting. The principal outcome considered was the evolution of the SPPB (short physical performance battery) score from baseline, after the 3-week period of training. Thirty-eight patients (mean age > 80 years) participated in the program. The mean SPPB score at baseline was 6.37 points (standard deviation: 2.17). After the intervention, the SPPB showed a significant improvement of 2.50 points, to reach 8.87 points (standard deviation: ±1.60; p=0.001). These results confirm the feasibility of resistance-training programs among hospitalized elderly patients. The observed improvements in terms of physical performance also appear promising with regard to the prevention of loss of autonomy during rehabilitation after acute medical episodes.
abstract_id: PUBMED:14694738 Participation and autonomy of mothers in the care of the hospitalized child This is a qualitative study whose purpose was to check with mothers the meaning and significance of remaining in the hospital and their participation and autonomy in the care of the hospitalized child. The field research was carried out through participant observation and through interviews with mothers. The inclusion of the mother in the care of the child has been taking place in a subtle way, and the mother has been taking on procedures which nurses were formerly in charge of, concentrating information about the therapeutic process. Nurses establish a hierarchical relation of domination-subordination with the mothers, who create some autonomy breaks in everyday situations. The lack of a collective therapeutic project in a perspective of care focused on the family is one of the obstacles to the involvement of the mother in the assistance. abstract_id: PUBMED:31980381 Short-term autonomy and survival after hepatectomy in the elderly. Aim: To study morbi-mortality, survival after hepatectomy in elderly patients, and influence on their short-term autonomy. Patients And Methods: This is a retrospective study conducted between 2002 and 2017 comparing patients less than 65 years old (controls) to those more than 65 years old (cases) from a prospective database, with retrospective collection of geriatric data. Cases were divided into three sub-groups (65-70 years, 70-80 years and&gt;80 years). Results: Four hundred and eighty-two patients were included. There was no age difference in number of major hepatectomies (P=0.5506), length of stay (P=0.3215), mortality at 90 days (P=0.3915), and surgical complications (P=0.1467). There were more Grade 1 Clavien medical complications among the patients aged over 65 years (P=0.1737). There was no difference in overall survival (P=0.460) or disease-free survival (P=0.108) according to age after adjustment for type of disease and hepatectomy. One-third of patients had geriatric complications. The "home discharge" rate decreased significantly with age from 92% to 68% (P=0.0001). Early loss of autonomy after hospitalization increased with age, 16% between 65 and 70 years, 23% between 70 and 80 years and 36% after 80 years (P=0.10). We identified four independent predictors of loss of autonomy: age&gt;70 years, cholangiocarcinoma, length of stay&gt;10 days, and metachronous colorectal cancer. Conclusions: Elderly patients had the same management as young patients, with no difference in surgery or survival, but with an increase in early loss of autonomy. abstract_id: PUBMED:22460501 Reflections on the care of nursing and the autonomy of the human being under the condition of hospitalized elderly There has been an increase of chronic diseases with the subsequent need for care and hospitalization, due to the acceleration of the elderly demographic growth. The objective of the present paper is to reflect on the care of nursing and the autonomy of the human being under the condition of elderly hospitalization. Nursing professionals, caring in the hospital ambient, can contribute significantly for the construction of the elderly autonomy and participation on the decision making about their need for attention or health care. The ethics redeeming, under a worthy aging perspective, permeates several discussions in the aging society and, specifically, among health professionals who act in hospital institutions. 
abstract_id: PUBMED:23506006 Impact of diabetes on cognitive impairment and disability in elderly hospitalized patients with heart failure. Aim: Heart failure (HF) and diabetes mellitus (DM) are each associated with cognitive impairment and disability. The aim of the present study was to evaluate the impact of DM on cognitive impairment and functional status in elderly hospitalized patients affected by HF. Methods: A total of 79 elderly hospitalized patients with HF were enrolled in the present study. They underwent physical and instrumental examination, and geriatric multidimensional assessment including Mini-Mental State Examination (MMSE), Geriatric Depression Scale (GDS), activities of daily living (ADL) and instrumental activities of daily living (IADL). Differences between groups were established by t-test, Spearman's correlation coefficient was searched to examine the relation between variables. All results were considered significant if P was &lt;0.05. Results: HF and DM coexisted in 43 patients (54.4% of cases); when they occurred together patients showed, compared with non diabetic patients, a greater clinical severity of HF (44.2% were in New York Heart Association class IV vs 16.7%, P = 0.017), a lower MMSE score (20.4 ± 3.6 vs 23 ± 3.8, P = 0.004), and a lower number of preserved functions in ADL (3 ± 1.6 vs 4 ± 1.8, P = 0.008) and in IADL (3.2 ± 1.7 vs 4.6 ± 2.3, P = 0.003). The correlation between DM and cognitive impairment, and disability was confirmed by multivariate and univariate analysis. Conclusions: We confirm that DM is frequent in elderly hospitalized patients with HF, and we report that it has a negative impact on cognitive functions and functional status, worsening cognitive impairment, and disability observed in these patients. Comprehensive geriatric assessment is necessary for older adults with HF, especially when DM coexists. abstract_id: PUBMED:23493461 An investigation on sleep behaviors of the elderly hospitalized in Zahedan. Background: Sleep is an essential need in every individual's life. A disorder in the natural sleep can cause physical and mental problems. The elderly are usually faced with more sleep problems. Therefore, the present study aimed to define sleep behavior among the elderly hospitalized in Zahedan. Materials And Methods: This is a descriptive analytical study conducted on 300 elderly people aged 60 years and over who were hospitalized in Zahedan. In this research convenience sampling method was used and the research tool was a questionnaire. The data were collected through interviews. Descriptive (frequency distribution) and inferential (X(2)) statistical tests were employed to analyze the data. Findings: The results showed that 62% of the hospitalized elderly people (total of 300) had sleep disorder. About 44.7%took sleep medication in order to sleep, and only 16.7% did not take sleeping medications. About 44.7% had no special schedule for the time of their sleeping and waking up, and 4.3% were involved in drug abuse and smoking, and had a big dinner before sleeping. There was a significant association between sleep disorder, gender, education, living in urban or rural areas, the cause of hospitalization, and suffering from a chronic disease. Conclusions: Sleep disorder and inappropriate sleep related behaviors had a high prevalence among the elderly. 
With regard to the important role of sleep in the quality of life of the elderly, detection of the reasons of sleep disorder, motivating them to practice an appropriate sleep behavior, and preventing them from having inappropriate sleep related behaviors are crucial issues. abstract_id: PUBMED:19646099 Autonomy in relation to health among people with intellectual disability: a literature review. Background: Since the 1990s, individualisation, participation, normalisation and inclusion have been the main principles of care for people with intellectual disability (ID). Autonomy has become an important issue for these people. This review of the literature tried to answer the question: how do people with ID exercise autonomy in relation to health? Method: Searches in Cochrane, Medline and PsycINFO were based on the following aspects of autonomy: self-determination, independence, self-regulation and self-realisation. Results: Thirty-nine of 791 articles met our criteria, including 14 on self-determination, seven on independence, 15 on self-regulation and three on self-realisation. Conclusions: In spite of decades of promoting autonomy, the exercise of autonomy in relation to health has so far rarely been an issue in the literature. Answer: The question of whether hospitalization increases disability in elderly patients is addressed in several studies. According to the prospective observational study conducted at the INRCA Geriatric Hospital of Ancona, elderly patients showed an improvement in their level of autonomy at discharge compared to admission, as measured by the Barthel Index (BI) and the Rankin scale. This improvement was associated with the implementation of a rehabilitation treatment, suggesting that hospitalization in a suitable environment like a geriatric hospital can be beneficial for reducing disability and aiding recovery after an acute event (PUBMED:24858036). However, the impact of hospitalization on elderly patients' autonomy can be multifaceted. A pilot study evaluating a personalized training program for patients over 65 years old hospitalized in a geriatric rehabilitation care setting found significant improvements in physical performance as measured by the Short Physical Performance Battery (SPPB) score after a 3-week period of training. This indicates that resistance-training programs can be feasible and potentially effective in preventing loss of autonomy during rehabilitation after acute medical episodes (PUBMED:29569564). On the other hand, a retrospective study on the short-term autonomy and survival after hepatectomy in the elderly found that while there was no difference in surgery or survival between younger and older patients, there was an increase in early loss of autonomy with age. The study identified age over 70 years, among other factors, as an independent predictor of loss of autonomy (PUBMED:31980381). In the context of Taiwan, factors influencing respect for autonomy in elderly patients may be related to family, society, and medical culture, and practicing respect for patient autonomy can be challenging due to the unique cultural background of elderly Taiwanese patients (PUBMED:25271030). Overall, while some studies suggest that hospitalization with appropriate rehabilitation can improve autonomy and reduce disability in elderly patients, other factors such as age, cultural context, and specific health conditions can influence the degree of autonomy and potential for disability following hospitalization.
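For a concrete sense of the functional gains summarized in this answer, the sketch below simply restates the published group means from the two cited abstracts (PUBMED:24858036 and PUBMED:29569564) and derives the corresponding changes; no patient-level data are involved.

    # Mean Barthel Index (BI) in the INRCA cohort (PUBMED:24858036)
    bi_admission, bi_discharge = 56.6, 63.84
    print(f"Mean BI change during hospitalization: {bi_discharge - bi_admission:+.2f} points")  # +7.24

    # Mean SPPB score in the rehabilitation training pilot (PUBMED:29569564)
    sppb_baseline, sppb_gain = 6.37, 2.50
    print(f"SPPB after the 3-week program: {sppb_baseline + sppb_gain:.2f} points")  # 8.87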
Instruction: Does disability predict attempted suicide in the elderly? Abstracts: abstract_id: PUBMED:25891986 Does disability predict attempted suicide in the elderly? A community-based study of elderly residents in Shanghai, China. Objectives: Disability, which has been proved to be associated with suicide and suicidal ideation, has received little attention in relation to suicidal attempts among the elderly. The aim of this study was to explore the influence of disability on attempted suicide within this demographic. Method: A multi-stage cluster sample of 8399 residents aged 60 or more was investigated from 15 communities in Shanghai, China. Disability was measured using the Lawton instrumental activities of daily living (IADL) scale. Results: The prevalence of attempted suicide in the elderly was 0.75%. Specific IADL disabilities, including shopping (OR = 3.01, 95% CI = 1.56-5.81), preparing meals (OR = 4.12, 95% CI = 2.12-8.00), housekeeping (OR = 2.48, 95% CI = 1.01-6.06), doing laundry (OR = 2.82, 95% CI = 1.09-7.35), using transport (OR = 3.10, 95% CI = 1.36-6.99) and medical care (OR = 4.41, 95% CI = 2.10-9.17), were significantly and independently associated with attempted suicide in the elderly. The presence of at least one such disability was associated with an almost threefold increase in the attempted suicide rate, and the presence of five or more IADL disabilities was associated with an approximate fivefold increase in the attempted suicide rate. Conclusion: Specific IADL disabilities, such as preparing meals or dealing with medical care, may be significant predictive factors for risk of suicidal attempts among the elderly. Therefore, elderly people with certain disabilities should be considered for suicide prevention interventions and should be supported in IADL as much as possible. abstract_id: PUBMED:31728099 Non-fatal Home Injuries among the Elderly in Tamil Nadu, India. Background: Injuries are considered as an emerging public health problem in India. Globally every year, injuries kill more than 5 million people, and by 2020, injuries will be the third-leading cause of death and disability worldwide. The study aimed to assess the prevalence and characteristics of nonfatal home injuries among the elderly in Tamil Nadu. Methodology: A cross-sectional analytical study was conducted among 1139 elderly (60 years and above), sampled from three districts of Tamil Nadu, India. Probability proportional to size sampling technique was used for sampling; a pretested questionnaire was used to collect the data. Results: Prevalence of nonfatal home injuries among elderly within the past 1 year was 14.6% (12.5-16.7 at 95% confidence interval), (n = 1003), among those injured, 94.5% were unintentionally injured and 5.5% were of intentional nature. When classified according to the types of injuries, majority of them had fall injuries (6.7%) followed by minor domestic injuries (5.4%), animal-related injury (0.2%), burn injuries (1.1%), road traffic injury (0.4%), and suicide attempt (0.8%). Majority of the respondents were in the age group of young-old, 60-69 years of age (84%), and there was more number of males (55%) in the study. Conclusion: The study reveals that nonfatal home injuries among elderly are an emerging public health problem, unintentional injuries contribute to the majority of the injuries, fall was the single largest contributor for all injuries among elderly. abstract_id: PUBMED:9065674 Suicidal thoughts and associated factors in an elderly population. 
This study examined the prevalence of suicidal thoughts and associated factors in an elderly population. Data for 969 elderly subjects from a population-based study were used, and examinations by physicians, including psychiatric examination and informants' interview, were assessed. In total, 13.3% of the subjects had had suicidal thoughts during the last 2 weeks (10.8% fleetingly and 2.5% frequently). Of those who had had fleeting suicidal thoughts, 26.7% also had major depression, while 50% of those who had frequent suicidal thoughts were depressed. In addition, suicidal thoughts were associated with increased disability in daily living, institutionalization, visual problems and the use of psychotropic drugs. It is concluded that a careful psychiatric assessment is necessary when suicidal thoughts are expressed by an elderly person. abstract_id: PUBMED:16874117 Mood disorders in intellectual disability. Purpose Of Review: This article examines reviews and research on the diagnosis and treatment of mood disorders in people with intellectual disability published from September 2004 to December 2005. Recent Findings: Patients with intellectual disability have limitations in verbal ability, and with increasing levels of disability may have an atypical clinical presentation. Thus, methods to diagnose mood disorders were a major research focus. Informant-rating scales and two self-report instruments provided data on thought patterns, aberrant behavior, appetite, and suicidality. Behavioral symptoms such as aggression were frequently associated with mood disorders. Pharmacotherapy and electroconvulsive therapy were found to be effective treatments. Mood disorders were frequently identified in people with intellectual disability, although suicide was still quite rare. Summary: Patients with milder levels of disability can use self-report measures and can be diagnosed using standard criteria with little modification. For those with more severe disability, diagnosis is challenging and often requires the use of residual categories. Atypical clinical presentation, including maladaptive behaviors, lent support for 'behavioral equivalent' substitutes of standard criteria. Typical pharmacological agents were effective for depression and electroconvulsive therapy for treatment-resistant bipolar disorder. abstract_id: PUBMED:23925963 Pathways into the criminal justice system for individuals with intellectual disability. Background: Studies focusing on pathways in the criminal justice system for individuals with intellectual disability are limited in that they only study individuals once they are involved in the system and do not consider the pathways into it. The purpose of this study is to examine predisposing factors that lead to various outcomes for individuals with intellectual disability when police are called to respond to their behavioural crises. Method: The current study examined the outcome of police response to 138 individuals with intellectual disability in crisis. Following police intervention, 15 individuals were arrested, 76 were taken to the emergency department and 47 received on-scene resolution. Comparisons between the three groups were conducted. Results: The three groups differed in terms of residence at the time of crisis, history of forensic involvement and type of crisis. Police intervention with adults with intellectual disability can happen for different reasons. Both individual and situational predictors explained this outcome. 
abstract_id: PUBMED:16155686 Adolescents with intellectual disability and suicidal behavior. It has been assumed that impaired intellectual capacity could act as a buffer to suicidality in the population of children and adolescents with intellectual disability. The few studies that have been conducted contest this assumption and in fact the findings showed that the characteristics of suicidality in the population of children and adolescents with intellectual disability are very similar to other adolescents without intellectual disability. This paper reviews the few studies conducted and describe the symptomatology in this population. abstract_id: PUBMED:24946241 Suicide and the elderly. Impending demographic changes mean that the proportion of the elderly in the population will increase. Suicide rates increase with age. The literature on elderly suicide, with emphasis on methodology, cross-national rates and trends, correlates, cultural factors, location and method, attempted suicides and prevention, is critically reviewed in this paper. abstract_id: PUBMED:26574117 Suicide in the Elderly Introduction: Suicide is a public health problem worldwide, with multiple features and risk factors. It has some common and unique trends in each phase across the lifespan. Objective: To review the medical literature related to suicide in the elderly, in order to determine the current status of this problem in the world, and especially in Colombia. Method: Literature review. Results: There is a high volume of articles about suicide in general, even in Colombia, with many papers describing the problem in a comprehensive manner, but there is a need for more studies and publications on the scope of this problem in the elderly. abstract_id: PUBMED:38069494 Factors Associated With Experiences of Harassment or Abuse Among Lesbian, Gay, Bisexual, Trans, Queer, and Asexual Young People With Disability in Australia. Lesbian, gay, bisexual, trans, queer, and asexual (LGBTQA+) young people with disability are known to experience higher rates of harassment or abuse than LGBTQA+ young people without disability. This study focused on participants in Australia and identified factors associated with harassment or abuse among LGBTQA+ adolescents and young adults who reported a disability as well as associations with mental health outcomes. Analyses were conducted from a national survey that included 2,500 LGBTQA+ people who reported a disability and were aged 14 to 21 years. Measures included experiences in the past 12 months of verbal and physical harassment or abuse due to one's sexual orientation or gender identity, sexual harassment or abuse, mental health, suicidality, and sociodemographic traits. Overall, 48.4% of participants with disability reported experiencing verbal harassment or abuse, 12.4% physical harassment or abuse, and 29.7% sexual assault or harassment. In multivariable regression analyses, verbal harassment or abuse was significantly more likely among trans men, participants with an intellectual disability, and those who were "out" to most or all of their family. Physical harassment or abuse was significantly more likely among participants with a physical or sensory disability. Sexual harassment or abuse was significantly more likely among trans women and participants with a physical or sensory disability. Participants who experienced harassment or abuse were also significantly more likely to have attempted suicide in the past 12 months. 
These findings will assist policymakers and practitioners in identifying contexts linked to a heightened risk of abuse among LGBTQA+ young people with disability and further underscore an immediate need to address and prevent harm in this population. abstract_id: PUBMED:16155687 Suicide behavior in persons with intellectual disability. Suicide is today in the Western world one of the leading causes of death and most people have had suicidal ideation at some time during their life. In the population of persons with intellectual disability some researchers have thought that impaired intellectual capacity could act as a buffer to suicidal behavior, but the fact is that the few studies conducted in that population contest this assumption and showed that the characteristics of suicidality in this population are very similar to persons without intellectual disability. This paper reviews the studies conducted and describe the symptomatology in this population. Professionals working with this population should therefore be aware of and assess for this behavior. Sadness or depression are symptoms that could indicate later suicidal behavior. Answer: Yes, disability does predict attempted suicide in the elderly. A community-based study in Shanghai, China, found that specific instrumental activities of daily living (IADL) disabilities, such as shopping, preparing meals, housekeeping, doing laundry, using transport, and managing medical care, were significantly and independently associated with attempted suicide in the elderly. The presence of at least one such disability was associated with an almost threefold increase in the attempted suicide rate, and the presence of five or more IADL disabilities was associated with an approximate fivefold increase in the attempted suicide rate (PUBMED:25891986). This suggests that elderly people with certain disabilities should be considered for suicide prevention interventions and should be supported in IADL as much as possible.
Instruction: Are colonoscopy and bowel ultrasound useful for assessing response to short-term therapy and predicting disease outcome of moderate-to-severe forms of ulcerative colitis? Abstracts: abstract_id: PUBMED:19997096 Are colonoscopy and bowel ultrasound useful for assessing response to short-term therapy and predicting disease outcome of moderate-to-severe forms of ulcerative colitis?: a prospective study. Objectives: Mucosal healing has been proposed as an important sign of the efficacy of medical treatment of inflammatory bowel disease; however, direct evidence in ulcerative colitis (UC) is scarce. We evaluated the usefulness of colonoscopy and bowel ultrasound (US) as indexes of response to short-term therapy and as predictors of subsequent outcome in UC. Methods: A total of 83 patients with moderate-to-severe UC were recruited; endoscopic and US severity was graded 0-3 at entry according to validated scores. Of the recruited patients, 74, who were clinically responsive to steroids, were followed up with repeated colonoscopy and bowel US at 3, 9, and 15 months from recruitment. Concordance between clinical, endoscopic, and US scores at various visits was determined by kappa statistics. Multiple unconditional logistic regression models were used to assess the predictivity of clinical, endoscopic, and US scores measured at 3 and 9 months on the development of endoscopic UC relapse within 15 months. Results: A variable concordance was found over time between endoscopic and clinical score (weighted kappa between 0.38 and 0.95), with high and consistent concordance between endoscopic and US scores (weighted kappa between 0.76 and 0.90). On logistic regression analysis, moderate-to-severe endoscopic and US scores at 3 months were associated with a high risk of endoscopic activity at 15 months (odds ratio (OR): 5.2; 95% confidence interval (CI): 1.6-17.6 and OR: 9.1; 95% CI: 2.5-33.5, respectively). Conclusions: Bowel US may be used as a surrogate of colonoscopy in assessing the short-term response of severe forms of UC to therapy. Both US score and endoscopic score after 3 months of steroid therapy predict outcome of disease at 15 months. abstract_id: PUBMED:33132642 Comparative study between bowel ultrasound and magnetic resonance enterography among Egyptian inflammatory bowel disease patients. Background: Bowel ultrasound and magnetic resonance enterography (MRE) are decisive medical imaging modalities for diagnosing and locating bowel lesions with its extramural extent and complications. They assess the degree of activity, help clinicians to identify patients in need of surgery, and can be used for patient follow-up. Aim: To compare the role of MRE and bowel ultrasound in diagnosis and follow-up of inflammatory bowel disease (IBD) patients in Egypt. Methods: The study was conducted on 40 patients with IBD. All patients were subjected to clinical assessment, laboratory investigations, bowel ultrasound, MRE, and colonoscopy up to the terminal ileum with biopsies for histopathological examination. Results: This study was conducted on 14 patients (35%) with ulcerative colitis and 26 patients (65%) with Crohn's disease; 34 (85%) of these patients had active disease. Bowel ultrasound detected different bowel lesions with the following accuracies: ileum (85%), large bowel (70%), fistula (95%), stricture and proximal dilatation (95%) and abscesses (100%). 
Also, bowel ultrasound showed statistically significant performance in differentiating between remission and activity of IBD in comparison to MRE and colonoscopy. Conclusion: In comparison to MRE, bowel ultrasound is a useful, non-invasive, and feasible bedside imaging tool for the detection of inflammation, detection of complications, and follow-up of IBD patients when performed by the attending physician. abstract_id: PUBMED:37250995 Computerized-adaptive testing versus short forms for pediatric inflammatory bowel disease patient-reported outcome assessment. Introduction: Computerized-adaptive testing (CAT) may increase reliability or reduce respondent burden for assessing patient-reported outcomes compared with static short forms (SFs). We compared CAT versus SF administration of the Patient-Reported Outcomes Measurement Information System® (PROMIS®) Pediatric measures in pediatric inflammatory bowel disease (IBD). Methods: Participants completed 4-item CAT, 5- or 6-item CAT, and 4-item SF versions of the PROMIS Pediatric measures. We compared average T-scores, intra-class correlations (ICCs), floor and ceiling effects, and standard error of measurement (SEM) across forms, along with mean effect sizes between active versus quiescent IBD disease activity groups. Results: Average PROMIS T-scores across forms were <3 points (minimally important difference) of each other. All forms correlated highly with each other (ICCs ≥0.90) and had similar ceiling effects, but the CAT-5/6 had lower floor effects. The CAT-5/6 had lower SEM than the CAT-4 and SF-4, and the CAT-4 had a lower SEM than the SF-4. Mean effect sizes were similar across forms when contrasting disease activity groups. Conclusions: The CAT and SF forms produced similar score results, but the CAT had better precision and lower floor effects. Researchers should consider PROMIS pediatric CAT if they anticipate that their sample will skew toward symptom extremes. abstract_id: PUBMED:35979252 Intestinal Ultrasound in Inflammatory Bowel Disease: A Valuable and Increasingly Important Tool. Background: Intestinal ultrasound is emerging as a non-invasive tool for monitoring disease activity in inflammatory bowel disease patients due to its low cost, excellent safety profile, and availability. Herein, we comprehensively review the role of intestinal ultrasound in the management of these patients. Summary: Intestinal ultrasound has a good accuracy in the diagnosis of Crohn's disease, as well as in the assessment of disease activity, extent, and evaluating disease-related complications, namely strictures, fistulae, and abscesses. Even though not fully validated, several scores have been developed to assess disease activity using ultrasound. Importantly, intestinal ultrasound can also be used to assess response to treatment. Changes in ultrasonographic parameters are observed as early as 4 weeks after treatment initiation and persist during short- and long-term follow-up. Additionally, Crohn's disease patients with no ultrasound improvement seem to be at a higher risk of therapy intensification, need for steroids, hospitalisation, or even surgery. Similarly to Crohn's disease, intestinal ultrasound has a good performance in the diagnosis, activity, and disease extent assessment in ulcerative colitis patients. In fact, in patients with severe acute colitis, higher bowel wall thickness at admission is associated with the need for salvage therapy and the absence of a significant decrease in this parameter may predict the need for colectomy.
Short-term data also evidence the role of intestinal ultrasound in evaluating therapy response, with ultrasound changes observed after 2 weeks of treatment and significant improvement after 12 weeks of follow-up in ulcerative colitis. Key Messages: Intestinal ultrasound is a valuable tool to assess disease activity and complications, and to monitor response to therapy. Even though longer prospective data are warranted, intestinal ultrasound may lead to a change in the paradigm of inflammatory bowel disease management as it can be used in a point-of-care setting, enabling earlier intervention if needed. abstract_id: PUBMED:29972402 THE IMPORTANCE OF COLONOSCOPY IN INFLAMMATORY BOWEL DISEASES. Introduction: Endoscopic evaluation, particularly the macroscopic mucosal and histological results of ileocolic biopsies, is essential for the management of inflammatory bowel disease. Endoscopic appearance is not always sufficient to differentiate Crohn's disease and ulcerative colitis, but there are some characteristics that favor one or another diagnosis. Both diseases have an increased incidence of colorectal carcinoma; so, surveillance colonoscopy is important for detecting early neoplastic lesions. Objective: To update the importance of endoscopy in the evaluation, diagnosis and prognosis of inflammatory bowel disease. Method: Search was done in the scientific literature of the TRIP database, chosen from clinical questions (PICO) with the following descriptors: "inflammatory bowel disease", "endoscopy/colonoscopy", "Crohn's disease", "ulcerative colitis" and "diagnosis/treatment". Results: Endoscopic investigation in patients with chronic colitis is quite accurate for the differential diagnosis between ulcerative colitis and Crohn's disease. Endoscopy is indicated for ulcerative colitis during severe crisis due to its prognostic value. Another accepted indication for endoscopy in inflammatory bowel disease is its use in the screening for dysplastic lesion. Conclusion: Ileocolonoscopy allows an accurate diagnosis of Crohn's disease or ulcerative colitis in up to 90% of cases. The healing of the mucosa assessed by endoscopy after treatments despite not being consensus is still the gold-standard in the evaluation of remission of the disease. Colonoscopy is essential for long-term cancer surveillance and in the future the implementation of Confocal Laser Endomicroscopy seems to be very promising in assessing the initial dysplasia. abstract_id: PUBMED:33349199 Milan ultrasound criteria are accurate in assessing disease activity in ulcerative colitis: external validation. Introduction: The aim of this study was to provide an external validation of bowel ultrasound (US) predictors of activity in ulcerative colitis (UC) and quantitative Milan Ultrasound Criteria (MUC). Methods: Forty-three consecutive patients with UC (16 in endoscopic remission and 27 with endoscopic activity) underwent bowel US and colonoscopy in a tertiary referral inflammatory bowel disease unit. Results: An MUC score >6.2 discriminated patients with active versus non-active UC with a sensitivity of 0.85 (95% confidence interval (CI) 0.66-0.96), specificity of 0.94 (95% CI 0.70-0.99) and an area under the curve of 0.902 (95% CI 0.772-0.971) in complete agreement with the derivation study. Conclusion: The external validation of MUC confirms that it is an accurate tool for assessing disease activity in patients with UC. abstract_id: PUBMED:37996771 Intestinal ultrasound in inflammatory bowel disease: New kid on the block.
Intestinal ultrasound is an emerging technique for diagnosing and monitoring patients with inflammatory bowel disease (IBD). It is a simple, non-invasive, inexpensive, safe and reliable tool for monitoring patients with IBD. This technique has good diagnostic accuracy in the assessment of the extent and severity of IBD and its complications. The most commonly used parameters are bowel wall thickness, color Doppler flow, bowel wall stratification and peri-bowel inflammation. Various scoring systems have been developed utilizing the above parameters to monitor patients with IBD. It is a good tool to monitor response to therapy and follow-up for post-operative recurrence. Early response on intestinal ultrasound (IUS) predicts long-term clinical remission and mucosal healing in patients with Crohn's disease. In patients with ulcerative colitis (UC), the response to IUS can be assessed as early as two weeks. Recent data has emerged to predict the response to corticosteroids and colectomy in patients with acute severe UC. Point of care IUS in the outpatient clinic is an excellent tool to follow-up patients and guide clinical decision-making and has good acceptability among patients. It is an underutilized technique in spite of its appeal and the availability of evidence. Underutilization can be attributed to the lack of awareness, expertise and training centres. This review discusses the technical details and the evidence to support the use of IUS in IBD. We aim to increase awareness and use of intestinal ultrasound and build local expertise and data. abstract_id: PUBMED:35132795 Ultrasonography in inflammatory bowel disease - So far we are? Intestinal ultrasound (IUS) is a non-invasive and accurate tool to assess inflammatory bowel disease. The ECCO-ESGAR guideline recommends the use of IUS for the assessment of disease activity and complications in Crohn's disease (CD). In addition, an increasing body of evidence suggests the use of IUS as alternative to colonoscopy in monitoring CD and in assessing disease activity of ulcerative colitis. Early responsiveness of IUS findings by the first weeks of treatment and the advantage to perform the procedure in real time may dramatically change frequency of the assessment of treatment response in the future and speed up the clinical decision-making process. Development of validated and reproducible sonographic scores to measure disease activity and therapeutic response and spread of knowledge of IUS remain relevant issues for the future in which current researchers and the International Bowel Ultrasound (IBUS) Group are actively engaged. abstract_id: PUBMED:35695823 Early Intestinal Ultrasound Predicts Intravenous Corticosteroid Response in Hospitalised Patients With Severe Ulcerative Colitis. Background And Aims: Our aim was to determine if transabdominal intestinal ultrasound changes after 48 ± 24 h of intravenous corticosteroids can predict treatment outcomes in hospitalised patients with severe ulcerative colitis. Methods: We performed a blinded observational multicentre study. Ultrasound parameters were assessed before treatment initiation, after 48 ± 24 h, and 6 ± 1 days. Treatment response was determined within 7 days by two outcome measures: 1] partial Mayo score reduction; 2] no administration of rescue therapy. Results: Out of 69 recruited patients, 56 were included in the final analysis, with 37 responders. The colon segment with the highest baseline bowel wall thickness was analysed, being the sigmoid in all patients. 
There was no difference in baseline bowel wall thickness between responders and non-responders in the partial Mayo score outcome. At 48 ± 24 h, a significant difference between responders and non-responders was identified in both absolute bowel wall thickness [median 3.1 mm vs 4.9 mm; p <0.0001], absolute reduction [-1.9 mm vs -0.2 mm; p <0.001], and relative reduction [-35.9% vs -4.1%; p <0.0001]. A ≤20% reduction had a sensitivity of 84.2% (95% confidence interval [CI] 60.4, 96.6%) and a specificity of 78.4% [61.8, 90.2%] for determining non-response [area under the curve 0.85]. In the multivariable analysis, a >20% reduction had the highest odds ratio (22.6 [4.2, 201.2]; p = 0.001) for determining response. Similar results were seen for the rescue therapy outcome. Conclusions: Changes in bowel wall thickness, after 48 ± 24 h following intravenous corticosteroid treatment in hospitalised patients with severe ulcerative colitis, identify responders with high accuracy and might be used as an early marker to guide accelerated rescue therapy. abstract_id: PUBMED:19786753 Bowel ultrasound and mucosal healing in ulcerative colitis. Background And Aim: Mucosal healing (MH) after short-term medical treatment is being considered as an important step in the therapeutic work-up of inflammatory bowel disorder (IBD) patients due to the potential prognostic role of MH in predicting disease outcome. However, IBD patients are reluctant to be re-endoscoped during follow-up; therefore, there is a need for non-invasive alternative index of MH which can replace endoscopy in clinical practice. We evaluated bowel ultrasound (US) as a surrogate of colonoscopy in a series of consecutive patients with active ulcerative colitis (UC). Patients And Methods: 83 patients with moderate to severe UC requiring high-dose steroids were initially recruited; endoscopic severity of UC was graded 0-3 according to Baron score, and US severity was also graded 0-3 according to the colonic wall thickening and the presence of vascular signal at power Doppler. 74 patients responsive to steroids and then maintained on 5-ASA compounds were followed up with repeated colonoscopy and bowel US at 3, 9 and 15 months from entry. Concordance between clinical, endoscopic and US scores at various visits was determined by kappa statistics. Multiple unconditional logistic regression models were used to assess the predictivity of Truelove, Baron and US scores measured at 3 and 9 months on the development of a UC relapse (Baron score 2-3) at 15 months. Results: An inconsistent concordance was found over time between 0 and I Baron scores and Truelove score (weighted kappa between 0.38 and 0.94), with high and consistent concordance between 0 and I Baron scores and US scores (weighted kappa between 0.76 and 0.90). On logistic regression analysis, a moderate/severe Baron score, regardless of their Truelove score, at 3 months was associated with a high risk of endoscopic activity at 15 months (OR 5.2; 95% CI: 1.6-17.6); similarly, patients with severe US scores (2-3) at 3 months had a high risk of severe endoscopic activity at 15 months (OR 9.1; 95% CI: 2.5-33.5). Discussion: In expert hands bowel US may be used as a surrogate of colonoscopy in evaluating the response to high-dose steroids in severe forms of UC. US score after 3 months of steroid therapy accurately predicts clinical outcome of disease at 15 months.
Answer: Yes, both colonoscopy and bowel ultrasound are useful for assessing response to short-term therapy and predicting disease outcome in moderate-to-severe forms of ulcerative colitis (UC). A study found that bowel ultrasound (US) could be used as a surrogate for colonoscopy in assessing the short-term response of severe forms of UC to therapy, with both US score and endoscopic score after 3 months of steroid therapy predicting the outcome of the disease at 15 months (PUBMED:19997096). Another study confirmed that bowel ultrasound is a useful, non-invasive, and feasible bedside imaging tool for the detection of inflammation, detection of complications, and follow-up of IBD patients when performed by the attending physician (PUBMED:33132642). Additionally, the Milan Ultrasound Criteria (MUC) were externally validated as accurate tools for assessing disease activity in patients with UC (PUBMED:33349199). Furthermore, intestinal ultrasound has been recognized as a valuable tool to assess disease activity and complications, and to monitor response to therapy in IBD management. It can be used in a point-of-care setting, enabling earlier intervention if needed (PUBMED:35979252, PUBMED:37996771). Early response on intestinal ultrasound predicts long-term clinical remission and mucosal healing in patients with Crohn's disease and can assess response as early as two weeks in patients with UC (PUBMED:37996771). Changes in ultrasonographic parameters are observed as early as 4 weeks after treatment initiation and persist during short- and long-term follow-up (PUBMED:35979252). Moreover, a study demonstrated that changes in bowel wall thickness after 48 ± 24 hours following intravenous corticosteroid treatment in hospitalized patients with severe UC could identify responders with high accuracy and might be used as an early marker to guide accelerated rescue therapy (PUBMED:35695823). Lastly, bowel ultrasound has been evaluated as a non-invasive alternative index of mucosal healing, which could replace endoscopy in clinical practice, showing high concordance with endoscopic scores (PUBMED:19786753).
Instruction: Is the growth in laparoscopic surgery reproducible with more complex procedures? Abstracts: abstract_id: PUBMED:19628097 Is the growth in laparoscopic surgery reproducible with more complex procedures? Background: Laparoscopic (LAP) surgery has experienced significant growth since the early 1990s and is now considered the standard of care for many procedures like cholecystectomy. Increased expertise, training, and technological advancements have allowed the development of more complex LAP procedures including the removal of solid organs. Unlike LAP cholecystectomy, it is unclear whether complex LAP procedures are being performed with the same growth today. Methods: Using the Nationwide Inpatient Sample (NIS) from 1998 to 2006, patients who underwent elective LAP or open colectomy (n = 220,839), gastrectomy (n = 17,289), splenectomy (n = 9,174), nephrectomy (n = 64,171), or adrenalectomy (n = 5,556) were identified. The Elixhauser index was used to adjust for patient comorbidities. To account for patient selection and referral bias, a matched analysis was performed using propensity scores. The main endpoints were adjusted for in-hospital mortality and prolonged length of stay (LOS). Results: Complex LAP procedures account for a small percentage of total elective procedures (colectomy, 3.8%; splenectomy, 8.8%; gastrectomy, 2.4%; nephrectomy, 7.0%; and adrenalectomy, 14.2%). These procedures have been performed primarily at urban (94%) and teaching (64%) centers. Although all LAP procedures trended up, the growth was greatest in LAP colectomy and nephrectomy (P < .001). In a case-controlled analysis, there was a mortality benefit only for LAP colectomy (hazard ratio [HR] = 0.53; 95% confidence interval [CI] = 0.34-0.82) when compared with their respective open procedures. All LAP procedures except gastrectomy had a lower prolonged LOS compared with their open counterparts. Conclusion: Despite the significant benefits of complex LAP procedures as measured by LOS and in-hospital mortality, the growth of these operations has been slow unlike the rapid acceptance of LAP cholecystectomy. Future studies to identify the possible causes of this slow growth should consider current training paradigms, technical capabilities, economic disincentive, and surgical specialization. abstract_id: PUBMED:21966136 Combined procedures with laparoscopic cholecystectomy. With advancement in laparoscopic surgery a number of surgical procedures can be performed combined with laparoscopic cholecystectomy in a single surgery. We evaluate the safety & efficacy of such surgeries. A retrospective review of all patients who had undergone combined procedures with laparoscopic cholecystectomy during January 2005 to June 2009 was performed. 3144 laparoscopic cholecystectomies were performed in the period from January 2005 to June 2009. Of these, 401 cases were combined with another procedure. The mean operative time was 80 min (range 50-270 min). The mean hospital stay was 3.2 days (range 1-5 days). The mean number of days injectable analgesics were required was 2 days (range 1 day-4 days). Combined procedures provide patients with all the benefits of minimal invasive surgery and also give the benefit of single time anaesthesia without adding to post operative morbidity & hospital stay. abstract_id: PUBMED:30411215 A new affordable and easy-to-make pelvic model for training in complex urogynecological laparoscopic procedures.
Introduction And Hypothesis: Our aim was to introduce a new affordable and easy-to-make pelvic model for training in complex urogynecological laparoscopic procedures. Methods: We modified a commercial female pelvic model consisting of sacrum, coccyx, two hip bones, the pubic symphysis, the fifth lumbar vertebra with intervertebral disc, and certain pelvic ligaments. We used sponge foam paper, felt fabric pieces, chenille stems, foam, plastic ties, fabric glue, and a thick, coated wire to create pelvic floor, uterus/vaginal cuff, bladder, both ureters, and anterior longitudinal and pectineal ligaments. Results: We created two different pelvic models: one with the uterus and one with the vaginal cuff. They enable training for laparoscopic pectopexy and hysteropexy/sacrocolpopexy. Trainees can practice proper mesh placement and suture the mesh to the corresponding anatomical structures. Because of the wire inserted in the uterus/vaginal cuff, it is possible to move the uterus/vaginal cuff in the anterior-posterior direction, thus mimicking the use of the manipulator during surgery. Besides the basic pelvis, all other parts of the model can be easily replaced when necessary. Conclusions: We believe that our pelvic model could provide a valuable tool for training complex urogynecological laparoscopic procedures and help to reduce the long learning curve of these procedures. abstract_id: PUBMED:33692922 Combined Retroperitoneal and Transperitoneal Laparoscopic Procedures by a Single Surgeon: Boon to Economically and Medically Backward Areas. Introduction This study was done to evaluate our experience of combining a retroperitoneal laparoscopic urological operation with other transperitoneal laparoscopic operations. Materials and methods We present a retrospective study of a series of 20 cases of retroperitoneal laparoscopic urological surgeries combined with at least one transperitoneal laparoscopic procedure, performed by a senior minimally invasive surgeon, between March 2013 and August 2020. We have excluded three patients where either of the procedures required conversion to open surgery. We retrospectively reviewed all the data regarding the patient's demographics, combined surgical procedures done, operative time taken, blood loss, intraoperative and postoperative complications, and days of hospital stay. Results A total of 20 patients had undergone simultaneous retroperitoneal and transperitoneal laparoscopic procedures. A total of nine (45%) cases comprised a combination of retroperitoneal laparoscopic ureterolithotomy and laparoscopic cholecystectomy. Two patients had undergone a combination of three laparoscopic procedures in the same operation. The mean hospital stay was 3.6 days. Blood loss was minimal to moderate in all the patients, none needed any perioperative blood transfusion. No major complications were noted in any patients. Conclusion Combining a retroperitoneal laparoscopic urological procedure with another transperitoneal laparoscopic surgery is very much feasible. It becomes even easier and relevant provided both the procedures are performed by a single, experienced laparoscopic surgeon. abstract_id: PUBMED:31135773 Laparoscopic procedures experience in Mexico during 2015: ¿Where are we? Background: 26 years have passed since the first laparoscopic cholecystectomy in Mexico. Since then, laparoscopy has been adopted in a variable way and has extended into different specialties.
Objective: To identify the place that laparoscopic approaches occupy at the present time in Mexico. Method: We searched the codes that had the word laparoscopy or laparoscopic in the records of the Automated System of Hospital Discharges in 2015. Based on the information obtained, a descriptive and retrospective study was carried out. Results: We found 55 different procedures in a total of 30,174. Of them, 79.7% were in women and 20.3% in men. The most common age was between 25 and 29 years. The first ten were cholecystectomy, appendectomy, total abdominal laparoscopic hysterectomy, procedures to create esophago-gastric sphincter competence, unilateral salpingo-oophorectomy, partial cholecystectomy, ovarian resection, umbilical hernia repair, incidental appendectomy and unilateral oophorectomy (94.6%). The States with the highest numbers are Mexico City, State of Mexico, Jalisco, Guanajuato and Sonora. Conclusions: Laparoscopic procedures have increased in the national health systems, and the most common ones are concordant with international statistics. However, it is necessary to diversify them and reduce hospital stay times. Their adoption is still in progress and still faces challenges related to the availability of organizational elements, equipment, infrastructure and training, although there are different ways to overcome them. abstract_id: PUBMED:36922428 SAGES masters program: the top 10 seminal articles for the laparoscopic left and sigmoid colectomy pathway for complex disease. Background: The SAGES University Colorectal Masters Program is a structured educational curriculum that is designed to aid practicing surgeons develop and maintain knowledge and technical skills for laparoscopic colorectal surgery. The Colorectal Pathway is based on three anchoring procedures (laparoscopic right colectomy, laparoscopic left and sigmoid colectomy for uncomplicated and complex disease, and intracorporeal anastomosis for minimally invasive right colectomy) corresponding to three levels of performance (competency, proficiency and mastery). This manuscript presents focused summaries of the top 10 seminal articles selected for laparoscopic left and sigmoid colectomy for complex benign and malignant disease. Methods: A systematic literature search of Web of Science for the most cited articles on the topic of laparoscopic complex left/sigmoid colectomy yielded 30 citations. These articles were reviewed and ranked by the SAGES Colorectal Task Force and invited subject experts according to their citation index. The top 10 ranked articles were then reviewed and summarized, with emphasis on relevance and impact in the field, study findings, strength and limitations and conclusions. Results: The top 10 seminal articles selected for the laparoscopic left/sigmoid colectomy for complex disease anchoring procedure include advanced procedures such as minimally invasive splenic flexure mobilization techniques, laparoscopic surgery for complicated and/or diverticulitis, splenic flexure tumors, complete mesocolic excision, and other techniques (e.g., Deloyers or colonic transposition in cases with limited colonic reach after extended left-sided resection). Conclusions: The SAGES Colorectal Masters Program top 10 seminal articles selected for laparoscopic left and sigmoid colectomy for complex benign and malignant disease anchoring procedure are presented.
These procedures were the most essential in the armamentarium of practicing surgeons that perform minimally invasive surgery for complex left and sigmoid colon pathology. abstract_id: PUBMED:30460381 Use of observed ureteric catheter movement to facilitate laparoscopic identification and dissection of the ureter in complex gynaecological procedures. Introduction And Hypothesis: Ureteric injury is a serious complication in gynaecological surgery. Identification and dissection of the ureter is important to minimise the risk of injury. However, some pelvic pathologies and previous pelvic surgery can render laparoscopic identification and dissection of the ureter difficult and risky. This video demonstrates the use of observing ureteric catheter movement to facilitate identification and dissection of the ureter in complicated laparoscopic gynaecological procedures. Materials: A 42-year-old woman with previous history of total abdominal hysterectomy underwent laparoscopic removal of her right ovary, which was plastered over the course of the ureter. A 5-F ureteric catheter was inserted, and observing its movement facilitated identification and dissection of the ureter from the ovary. Results: The procedure was completed safely, and the patient was discharged the same day. Conclusions: This video demonstrates that moving the ureteric catheter laparoscopically to facilitate identification and dissection of the ureter in complex gynaecological procedures may be useful for urogynecologists when assisting ureteric dissection in complex cases. abstract_id: PUBMED:26877605 Laparoscopic and robot-assisted laparoscopic digestive surgery: Present and future directions. Laparoscopic surgery is applied today worldwide to most digestive procedures. In some of them, such as cholecystectomy, Nissen's fundoplication or obesity surgery, laparoscopy has become the standard in practice. In others, such as colon or gastric resection, the laparoscopic approach is frequently used and its usefulness is unquestionable. More complex procedures, such as esophageal, liver or pancreatic resections are, however, more infrequently performed, due to the high grade of skill necessary. As a result, there is less clinical evidence to support its implementation. In the recent years, robot-assisted laparoscopic surgery has been increasingly applied, again with little evidence for comparison with the conventional laparoscopic approach. This review will focus on the complex digestive procedures as well as those whose use in standard practice could be more controversial. Also novel robot-assisted procedures will be updated. abstract_id: PUBMED:11030641 Laparoscopic procedures. Laparoscopic procedures are increasingly used in clinical surgery because of significantly faster convalescence than occurs with open surgery. These advances have been facilitated not least by modern gastroenterological procedures and the co-operation between surgeons and gastroenterologists. Laparoscopic cholecystectomy is the most evident example of such surgery. The widespread use of this minimally invasive approach was essentially made possible by the availability of endoscopic retrograde cholangiography (ERC) and ERT, since the laparoscopic removal of bile duct stones is still difficult and not always reliable. Further instances of so-called 'therapeutic splitting' are palliative procedures and the combined endoscopic/endoluminal and laparoscopic/intracavitary approach in cases of early cancer. 
Additionally, laparoscopic procedures are described that might be a surgical alternative to medical treatment (e.g. fundoplication and cardiomyotomy). abstract_id: PUBMED:31994494 Laparoscopic and robot-assisted procedures in patients with gastrointestinal stromal tumors (GIST) of stomach Objective: To analyze the results of minimally invasive surgical procedures in patients with gastric GIST. Material And Methods: The study included 30 patients aged 52.2±9.8 years. ASA grade II (44%), III (26%) and I (23%) were predominant. Lesion of stomach body was noted in 17 (56%) patients, antrum - in 8 (27%), fundus - in 2 (7%) and cardia - in 3 (10%) patients. GIST was located on the anterior wall of stomach in 56% of patients. Exophytic growth was observed in 16 (53%) patients, endophytic - in 12 (40%), transmural growth - in 2 (7%) cases. T2 grade of tumor was noted in 67% of cases (TNM 8). All patients underwent laparoscopic or robot-assisted ('daVinci Si') partial resection of the stomach. Results: Radical resection (R0) without injury of pseudocapsule of tumor was made in all patients. Laparoscopic procedures were performed in 25 (83%) patients, robot-assisted - in 5 (17%). Early postoperative complications (Clavien-Dindo II) were observed in 3 patients. Spindle cell structure of tumors (93%) with low mitotic activity (63%) prevailed. Tumor stage I was observed in 56% of patients. Patients with high mitotic tumor index (44%) were directed to targeted therapy. Recurrence and/or metastasis in long-term period were absent in 26 (87%) patients. Conclusion: Laparoscopic surgery for gastric GIST is safe and characterized by the absence of significant postoperative complications and long-term recurrence-free period. The use of robotic surgical system is effective and justified for anatomically difficult localization of GIST. Answer: The growth in laparoscopic surgery has been significant for certain procedures, but it is not uniformly reproducible across all complex procedures. While laparoscopic surgery has become the standard of care for many procedures, such as cholecystectomy, the adoption of more complex laparoscopic procedures has been slower (PUBMED:19628097). Complex laparoscopic procedures, such as colectomy, gastrectomy, splenectomy, nephrectomy, and adrenalectomy, account for a small percentage of total elective procedures and are primarily performed at urban and teaching centers. Although there has been an upward trend in all laparoscopic procedures, the growth has been greatest in laparoscopic colectomy and nephrectomy (PUBMED:19628097). The slow growth in complex laparoscopic procedures may be due to factors such as current training paradigms, technical capabilities, economic disincentives, and surgical specialization (PUBMED:19628097). However, there is evidence that combined procedures with laparoscopic cholecystectomy are safe and effective, providing patients with the benefits of minimally invasive surgery without adding to postoperative morbidity and hospital stay (PUBMED:21966136). Additionally, training models have been developed to help reduce the long learning curve associated with complex urogynecological laparoscopic procedures (PUBMED:30411215). In economically and medically backward areas, combining retroperitoneal laparoscopic urological operations with other transperitoneal laparoscopic operations has been shown to be feasible and beneficial when performed by an experienced laparoscopic surgeon (PUBMED:33692922). 
In Mexico, laparoscopic procedures have increased in the national health systems, and there is a need to diversify them and reduce hospital stay times (PUBMED:31135773). The SAGES Colorectal Masters Program has identified top seminal articles for laparoscopic left and sigmoid colectomy for complex disease, indicating a structured approach to mastering these complex procedures (PUBMED:36922428). In summary, while the growth in laparoscopic surgery for more complex procedures has not been as rapid as for simpler procedures like cholecystectomy, there is a trend towards increased adoption, and efforts are being made to improve training and outcomes for these more challenging surgeries.
Instruction: Can motor nerve conduction velocity predict foot problems in diabetic subjects over a 6-year outcome period? Abstracts: abstract_id: PUBMED:10444029 Glomerular ultrafiltration of IGF-I may contribute to increased renal sodium retention in diabetic nephropathy. Insulin-like growth factor-I (IGF-I) is found in plasma at relatively high levels (approximately 40 nmol/L) but <1% is present in the free form and >99% is bound to specific binding proteins to form high-molecular-weight complexes of approximately 50 and approximately 150 kd. We hypothesized that in rats with diabetic nephropathy but not in normal animals, IGF-I-containing binding protein complexes undergo glomerular ultrafiltration, allowing the peptide to interact with IGF-I receptors in apical tubular membranes. By this route, ultrafiltered IGF-I may increase tubular epithelial cell sodium absorption in overt diabetic nephropathy. In serum samples from diabetic rats, IGF-I levels (227 +/- 34 ng/mL) were reduced as compared with control levels (319 +/- 33 ng/mL, P = .05), and IGF-binding protein-2 (IGFBP-2) is increased about 2-fold. In diabetic rats, IGF-I undergoes glomerular ultrafiltration and is present in proximal tubular fluid that was collected by nephron micropuncture at 2.54 +/- 0.54 nmol/L but is below the detection limit in tubular fluid from normal rats. IGFBP-1, IGFBP-2, IGFBP-3, and IGFBP-4 are all present in diabetic rat glomerular ultrafiltrate, but IGFBP-2 levels are greater than those of each of the other three IGFBPs. Neither recombinant human IGF-I (1 nmol/L) nor diabetic rat glomerular ultrafiltrate affect sodium transport in cultured mouse proximal tubular cells. In contrast, rhIGF-I and diabetic rat glomerular ultrafiltrate increase the apical-to-basolateral transport of 22Na+ in distal tubule-like A6 cells through mechanisms involving apical IGF-I receptors. In normal rats, luminal infusion with rhIGF-I or with diabetic rat glomerular ultrafiltrate into late proximal tubules increases distal tubular Na+ absorption. These findings indicate that diabetic glomerular sclerosis causes glomerular ultrafiltration of IGF-I, and they suggest that tubular fluid IGF-I may contribute to sodium (and fluid) retention that is commonly observed in patients with severe diabetic nephropathy. abstract_id: PUBMED:9987027 Cerebellar granule-cell-specific GABAA receptors attenuate benzodiazepine-induced ataxia: evidence from alpha 6-subunit-deficient mice. Benzodiazepine- and alcohol-induced ataxias in rodents have been proposed to be affected by the gamma-aminobutyric acid type A (GABAA) receptor alpha 6 subunit, which contributes to receptors specifically expressed in cerebellar granule cells. We have studied an alpha 6 -/- mouse line for motor performance and drug sensitivity. These mice, as a result of a specific genetic lesion, carry a precise impairment at their Golgi-granule cell synapses. On motor performance tests (rotarod, horizontal wire, pole descending, staircase and swimming tests) there were no robust baseline differences in motor function or motor learning between alpha 6 -/- and alpha 6 +/+ mice. On the rotarod test, however, the mutant mice were significantly more impaired by diazepam (5-20 mg/kg, i.p.), when compared with alpha 6 +/+ control and background C57BL/6J and 129/SvJ mouse lines. Ethanol (2.0-2.5 g/kg, i.p.) produced similar impairment in the alpha 6 -/- and alpha 6 +/+ mice.
Diazepam-induced ataxia in alpha 6 -/- mice could be reversed by the benzodiazepine site antagonist flumazenil, indicating the involvement of the remaining alpha 1 beta 2/3 gamma 2 GABAA receptors of the granule cells. The level of activity in this synapse is crucial in regulating the execution of motor tasks. We conclude that GABAA receptor alpha 6 subunit-dependent actions in the cerebellar cortex can be compensated by other receptor subtypes; but if not for the alpha 6 subunit, patients on benzodiazepine medication would suffer considerably from ataxic side-effects. abstract_id: PUBMED:26283658 Recruitment of dorsal midbrain catecholaminergic pathways in the recovery from nerve injury evoked disabilities. Background: The periaqueductal gray region (PAG) is one of several brain areas identified to be vulnerable to structural and functional change following peripheral nerve injury. Sciatic nerve constriction injury (CCI) triggers neuropathic pain and three distinct profiles of changes in complex behaviours, which include altered social and sleep-wake behaviours as well as changes in endocrine function. The PAG encompasses subgroups of the A10 dopaminergic and A6 noradrenergic cell groups; the origins of significant ascending projections to hypothalamic and forebrain regions, which regulate sleep, complex behaviours and endocrine function. We used RT-PCR, western blots and immunohistochemistry for tyrosine hydroxylase to determine whether (1) tyrosine hydroxylase increased in the A10/A6 cells and/or; (2) de novo synthesis of tyrosine hydroxylase, in a 'TH-naïve' population of ventral PAG neurons characterized rats with distinct patterns of behavioural and endocrine change co-morbid with CCI evoked-pain. Results: Evidence for increased tyrosine hydroxylase transcription and translation in the constitutive A10/A6 cells was found in the midbrain of rats that showed an initial 2-3 day post-CCI, behavioural and endocrine change, which recovered by days 5-6 post-CCI. Furthermore these rats showed significant increases in the density of TH-IR fibres in the vPAG. Conclusions: Our data provide evidence for: (1) potential increases in dopamine and noradrenaline synthesis in vPAG cells; and (2) increased catecholaminergic drive on vPAG neurons in rats in which transient changes in social behavior are seen following CCI. The data suggests a role for dopaminergic and noradrenergic outputs, and catecholaminergic inputs of the vPAG in the expression of one of the profiles of behavioural and endocrine change triggered by nerve injury. abstract_id: PUBMED:16374711 The alpha6 integrin subunit in the developing mouse olfactory bulb. Integrins are heterodimeric cell surface receptors that mediate developmental events by binding extracellular matrix ligands. Several lines of evidence suggest a role for integrins, specifically the alpha 6 subunit, in neuronal migration, neurite outgrowth, and axon guidance during olfactory development. Therefore, we undertook an analysis of the expression of the alpha 6 subunit in the olfactory system of the embryonic and early postnatal mouse to understand the role it may play during neural development. In addition, as a functional assay we examined the developmental effects of the loss of this subunit on olfactory development by analyzing an alpha 6 knockout (alpha 6-/-). 
Immunohistochemical analyses and confocal microscopy were used to examine alpha 6 expression in the CD-1 embryonic and early postnatal olfactory system and also to examine the organization of the olfactory system in the alpha 6-/- mouse. In CD-1 mice from E13 to E17, alpha 6 localizes in radial patterns extending from the core of the olfactory bulb to the nerve layer and colocalizes with RC2, an antibody specific for radial glia. By the day of birth (P0; approximately E19), expression is limited to the external plexiform layer and the olfactory nerve layer, where it colocalizes with laminin and p75. In the alpha 6-/- mouse, areas of ectopic granule cells were observed in the mitral cell layer of the olfactory bulb. These ectopias coincided with areas of disorganization of the radial glial processes and breaks in the mitral cell layer. These observations suggest a role for alpha 6 integrin in neural migration during olfactory development, likely secondary to organization of the radial glial scaffold. abstract_id: PUBMED:27520081 GABA-A receptor activity in the noradrenergic locus coeruleus drives trigeminal neuropathic pain in the rat; contribution of NAα1 receptors in the medial prefrontal cortex. Trigeminal neuropathic pain is described as constant excruciating facial pain. The study goal was to investigate the role of nucleus locus coeruleus (LC) in a model of chronic orofacial neuropathic pain (CCI-ION). The study examines LC's relationship to both the medullary dorsal horn receiving trigeminal nerve sensory innervation and the medial prefrontal cortex (mPFC). LC is a major source of CNS noradrenaline (NA) and a primary nucleus involved in pain modulation. Although descending inhibition of acute pain by LC is well established, contribution of the LC to facilitation of chronic neuropathic pain is also reported. In the present study, a rat orofacial pain model of trigeminal neuropathy was induced by chronic constrictive injury of the infraorbital nerve (CCI-ION). Orofacial neuropathic pain was indicated by development of whisker pad mechanical hypersensitivity. Hypersensitivity was alleviated by selective elimination of NA neurons, including LC (A6 cell group), with the neurotoxin anti-dopamine-β-hydroxylase saporin (anti-DβH-saporin) microinjected either intracerebroventricularly (i.c.v.) or into trigeminal spinal nucleus caudalis (spVc). The GABAA receptor antagonist, bicuculline, administered directly into LC (week 8) inhibited hypersensitivity. This indicates a valence shift in which increased GABAA signaling ongoing in LC after trigeminal nerve injury paradoxically produces excitatory facilitation of the chronic pain state. Microinjection of NAα1 receptor antagonist, benoxathian, into mPFC attenuated whisker pad hypersensitivity, while NAα2 receptor antagonist, idazoxan, was ineffective. Thus, GABAA-mediated activation of NA neurons during CCI-ION can facilitate hypersensitivity through NAα1 receptors in the mPFC. These data indicate LC is a chronic pain generator. abstract_id: PUBMED:11102514 Expression and partial characterization of kinesin-related proteins in differentiating and adult skeletal muscle. Using pan-kinesin antibodies to screen a differentiating C2C12 cell library, we identified the kinesin proteins KIF3A, KIF3B, and conventional kinesin heavy chain to be present in differentiating skeletal muscle. 
We compared the expression and subcellular localization characteristics of these kinesins in myogenic cells to others previously identified in muscle, neuronal, and mitotic systems (KIF1C, KIF3C, and mitotic-centromere-associated kinesin). Because members of the KIF3 subfamily of kinesin-related proteins showed altered subcellular fractionation characteristics in differentiating cells, we focused our study of kinesins in muscle on the function of kinesin-II. Kinesin-II is a motor complex comprised of dimerized KIF3A and KIF3B proteins and a tail-associated protein, KAP. The Xenopus homologue of KIF3B, Xklp3, is predominantly localized to the region of the Golgi apparatus, and overexpression of motorless-Xklp3 in Xenopus A6 cells causes mislocalization of Golgi components (). In C2C12 myoblasts and myotubes, KIF3B is diffuse and punctate, and not primarily associated with the Golgi. Overexpression of motorless-KIF3B does not perturb localization of Golgi components in myogenic cells, and myofibrillogenesis is normal. In adult skeletal muscle, KIF3B colocalizes with the excitation-contraction-coupling membranes. We propose that these membranes, consisting of the transverse-tubules and sarcoplasmic reticulum, are dynamic structures in which kinesin-II may function to actively assemble and maintain in myogenic cells. abstract_id: PUBMED:10713972 Ventilatory and central neurochemical reorganisation of O2 chemoreflex after carotid sinus nerve transection in rat. 1. The first step of this study was to determine the early time course and pattern of hypoxic ventilatory response (HVR) recovery following irreversible bilateral carotid sinus nerve transection (CSNT). The second step was to find out if HVR recovery was associated with changes in the neurochemical activity of the medullary catecholaminergic cell groups involved in the O2 chemoreflex pathway. 2. The breathing response to acute hypoxia (10% O2) was measured in awake rats 2, 6, 10, 45 and 90 days after CSNT. In a control group of sham-operated rats, the ventilatory response to hypoxia was principally due to increased respiratory frequency. There was a large reduction in HVR in the CSNT compared to the sham-operated rats (-65%, 2 days after surgery). Within the weeks following denervation, the CSNT rats progressively recovered a HVR level similar to the sham-operated rats (-37% at 6 days, -27% at 10 days, and no difference at 45 or 90 days). After recovery, the CSNT rats exhibited a higher tidal volume (+38%) than the sham-operated rats in response to hypoxia, but not a complete recovery of respiratory frequency. 3. Fifteen days after CSNT, in vivo tyrosine hydroxylase (TH) activity had decreased in caudal A2C2 (-35%) and A6 cells (-35%). After 90 days, the CSNT rats displayed higher TH activity than the sham-operated animals in caudal A1C1 (+51%), caudal A2C2 (+129%), A5 (+216%) and A6 cells (+79%). 4. It is concluded that HVR following CSNT is associated with a profound functional reorganisation of the central O2 chemoreflex pathway, including changes in ventilatory pattern and medullary catecholaminergic activity. abstract_id: PUBMED:20149363 Single and twin ongoing pregnancies in two cases of previous ART failure after ICSI performed with sperm sorted using annexin V microbeads. Objective: To treat couples with intracytoplasmic sperm injection (ICSI) after annexin V sperm sorting. Design: Two case reports. Setting: Department of Reproductive Medicine at a private medical institute. 
Patient(s): Couples on infertility treatment, donor oocytes. Intervention(s): Sperm sorted with annexin V magnetic microbeads before ICSI, day 3 embryo transfer; case 1: ovum donation; case 2: patient oocytes. Main Outcome Measure(s): 1) Sperm DNA fragmentation (terminal deoxynucleotide transferase-mediated dUTP nick-end labeling [TUNEL]) and active caspase-3 (immunocytochemistry); 2) fertilization rate, embryonic quality, blastocyst development of nontransferred embryos, and pregnancy outcome after ICSI of sorted sperm. Result(s): Case 1: Premature ovarian failure patient with previous fertilization failures: asthenoteratozoospermia, abnormal DNA fragmentation (TUNEL 30%; normal <20%). ICSI with annexin V-treated sperm done on six donated metaphase II (MII) oocytes; four fertilized, and a 5-cell/grade-2 and a 6-cell/grade-2-3 embryo were transferred. A day 5 blastocyst was cryopreserved. The patient was in the last trimester of gestation. Case 2: Couple with >4 years of primary infertility and recent ICSI failure. Semen with teratozoospermia (5% normal forms [Kruger]) and abnormal active caspase-3 (16%; normal <11%). ICSI with annexin V-treated sperm done on 9 MII oocytes. All fertilized; a 7-cell/grade-1 and an 8-cell/grade-1-2 embryo were transferred. A day 5 expanded blastocyst was cryopreserved. The patient was in the second trimester of a twin normal pregnancy. Conclusion(s): Sperm sorting with annexin V columns was effective in the treatment of two cases of ICSI failure, resulting in a single and a twin pregnancy after transfer of two embryos in each case. abstract_id: PUBMED:8944647 Biosynthesis and cell surface delivery of the NHE1 isoform of Na+/H+ exchanger in A6 cells. The Na+/H+ exchanger isoform NHE1 is localized to the basolateral membrane of renal and intestinal epithelia. We examined the plasma membrane distribution, biosynthesis, and cell surface delivery of NHE1 in A6 epithelia. NHE1 was localized to the basolateral membrane. Studies of NHE1 biosynthesis with a pulse-chase protocol demonstrated that a core glycosylated, endoglycosidase H-sensitive, 90-kDa NHE1 was present 0-5 h into the chase period and that mature 110-kDa NHE1 was present 1-24 h into the chase period. Studies of plasma membrane delivery of newly synthesized NHE1 demonstrated that the 90-kDa NHE1 was detected at both apical and basolateral membranes 2-5 h into the chase period. The 110-kDa NHE1 was observed at the basolateral membrane 5-24 h into the chase period. These results suggest that NHE1 is expressed primarily at the basolateral membrane of A6 cells, that core glycosylated NHE1 is delivered to the plasma membrane in a nonpolarized manner, and that mature 110-kDa NHE1 is delivered to the basolateral membrane. abstract_id: PUBMED:20434498 Interactions between orexin-immunoreactive fibers and adrenaline or noradrenaline-expressing neurons of the lower brainstem in rats and mice. Orexins are expressed in neurons of the dorsolateral hypothalamus and their axons widely distribute throughout the central nervous system. The noradrenergic cell groups of the lower brainstem belong to the targets of these orexin projections. Double immunostainings for orexin and phenylethanolamine N-methyltransferase (PNMT), as well as orexin and tyrosine hydroxylase (TH) were applied to demonstrate the orexinergic innervation of catecholamine cell groups in the lower brainstem of the mouse and the rat.
In various densities, networks of orexin-positive fibers and terminals were present on neurons of each adrenaline (C1, C2, C3) and noradrenaline (locus coeruleus, A1, A2, A4, A5 and A7) cell groups. The most dense networks of orexin fibers and terminals were detected in the locus coeruleus, the subcoeruleus area, and in the nucleus of the solitary tract. By using confocal microscope to analyze triple immunostainings we could detect that about two-third of the orexin-PNMT or orexin-TH immunopositive close contacts contained synaptophysin (a presynapse-specific protein) in the C1, C2 and C3 adrenaline, or in the A1, A2 noradrenaline cell groups, respectively. Orexin-immunopositive axons in the C1, C2, as well as A1, A2 and A6 cell groups have been examined by an electron microscope. Relatively few asymmetrical (excitatory) synaptic contacts could be demonstrated between PNMT- or TH-positive dendrites and orexin terminals, although the vast majority of orexin-positive axons was located in juxtaposition to PNMT- or TH-positive neurons. Answer: The provided abstracts do not contain information directly related to the prediction of foot problems in diabetic subjects over a 6-year outcome period using motor nerve conduction velocity. The abstracts discuss various topics such as the role of IGF-I in diabetic nephropathy and sodium retention (PUBMED:10444029), the impact of GABAA receptor alpha 6 subunit on benzodiazepine-induced ataxia (PUBMED:9987027), the involvement of dorsal midbrain catecholaminergic pathways in the recovery from nerve injury (PUBMED:26283658), the role of the alpha6 integrin subunit in the developing mouse olfactory bulb (PUBMED:16374711), the activity of GABA-A receptors in the locus coeruleus in trigeminal neuropathic pain (PUBMED:27520081), the expression of kinesin-related proteins in skeletal muscle (PUBMED:11102514), neurochemical reorganization after carotid sinus nerve transection (PUBMED:10713972), ICSI with annexin V microbeads (PUBMED:20149363), biosynthesis of the NHE1 isoform of the Na+/H+ exchanger in A6 cells (PUBMED:8944647), and interactions between orexin-immunoreactive fibers and adrenaline or noradrenaline-expressing neurons in the lower brainstem (PUBMED:20434498). To answer the question about motor nerve conduction velocity predicting foot problems in diabetic subjects, one would need to look for studies that specifically investigate the relationship between diabetic neuropathy, nerve conduction studies, and the development of diabetic foot complications over a long-term period. Such studies would typically involve the longitudinal monitoring of diabetic patients, assessment of their nerve conduction velocities, and tracking the incidence of foot problems such as ulcers, infections, or the need for amputations. None of the provided abstracts address this specific question.
Instruction: Epidemiology of gout: is the incidence rising? Abstracts: abstract_id: PUBMED:12415600 Epidemiology of gout: is the incidence rising? Objective: To determine whether the incidence of gout is higher in 1995-1996 compared to 1977-1978. Methods: Using the Rochester Epidemiology Project computerized medical record system, all potential cases of acute gout in the city of Rochester, Minnesota during the time intervals of 1977-1978 and 1995-1996 were identified. The complete medical records of all potential cases were screened and all who fulfilled the 1977 American College of Rheumatology proposed criteria for gout were included as incidence cases. Demographic data, body mass index, clinical presentation, and associated comorbid conditions were abstracted. The overall and age-gender adjusted incidence rates from the 2 cohorts were calculated and compared. Results: A total of 39 new cases of acute gout were identified during the 2 year interval 1977-1978 representing an age and sex-adjusted annual incidence rate of 45.0/100,000 (95% CI: 30.7, 59.3). For the interval 1995-1996, 81 cases were diagnosed, representing an annual incidence rate of 62.3/100,000 (95% CI: 48.4, 76.2). There was a greater than 2-fold increase in the rate of primary gout (i.e., no history of diuretic exposure) in the recent compared to the older time periods (p = 0.002). The incidence of secondary, diuretic related gout did not increase over time (p = 0.140). Conclusion: Our results indicate that the incidence of primary gout has increased significantly over the past 20 years. While this increase might be a result of improved ascertainment of atypical gout, it may also be related to other, as yet unidentified, risk factors. abstract_id: PUBMED:33150696 Rising Global Burden of Gout: Time to Act. N/A abstract_id: PUBMED:31523046 Rising Incidence of Acute Hospital Admissions due to Gout. Objective: To describe trends in acute hospital admissions due to gout in England, with rheumatoid arthritis (RA) as a comparator, alongside prescribing trends for common gout medications. Methods: An ecological study was performed using UK National Health Service (NHS) Digital Hospital Episode Statistics data to calculate the incidence of unplanned admissions with primary diagnoses of gout or RA in adults in England between April 2006 and March 2017. NHS Digital Community Prescription data for allopurinol, febuxostat, and colchicine were considered over a similar period. Results: The incidence of unplanned gout admissions increased by 58.4% over the study period, from 7.9 admissions per 100,000 population in 2006/07 to 12.5 admissions per 100,000 population in 2016/17 (p < 0.0001). Gout admissions increased as a proportion of all hospital admissions, and accounted for 349,768 bed-days cumulatively. Unplanned RA admissions halved over the study period, from 8.6 admissions per 100,000 population in 2006/07 to 4.3 admissions per 100,000 population in 2016/17 (p < 0.0001). Community prescriptions dispensed for allopurinol and colchicine have increased by 71.4% and 165.6%, respectively, since 2006 (p < 0.0001). Febuxostat prescriptions have increased 20-fold since 2010 (p < 0.0001), when prescription data became available. Conclusion: Acute gout admissions in England increased between 2006 and 2017, accompanied by increasing prescription of gout therapies. Acute admissions due to RA halved over the same time period. These data call for aggressive target-driven therapy for this highly treatable disease.
abstract_id: PUBMED:24431399 Rising burden of gout in the UK but continuing suboptimal management: a nationwide population study. Objectives: To describe trends in the epidemiology of gout and patterns of urate-lowering treatment (ULT) in the UK general population from 1997 to 2012. Methods: We used the Clinical Practice Research Datalink to estimate the prevalence and incidence of gout for each calendar year from 1997 to 2012. We also investigated the pattern of gout management for both prevalent and incident gout patients. Results: In 2012, the prevalence of gout was 2.49% (95% CI 2.48% to 2.51%) and the incidence was 1.77 (95% CI 1.73 to 1.81) per 1000 person-years. Prevalence and incidence both were significantly higher in 2012 than in 1997, with a 63.9% increase in prevalence and 29.6% increase in incidence over this period. Regions with highest prevalence and incidence were the North East and Wales. Among prevalent gout patients in 2012, only 48.48% (95% CI 48.08% to 48.89%) were being consulted specifically for gout or treated with ULT and of these 37.63% (95% CI 37.28% to 38.99%) received ULT. In addition, only 18.6% (95% CI 17.6% to 19.6%) of incident gout patients received ULT within 6 months and 27.3% (95% CI 26.1% to 28.5%) within 12 months of diagnosis. The management of prevalent and incident gout patients remained essentially the same during the study period, although the percentage of adherent patients improved from 28.28% (95% CI 27.33% to 29.26%) in 1997 to 39.66% (95% CI 39.11% to 40.22%) in 2012. Conclusions: In recent years, both the prevalence and incidence of gout have increased significantly in the UK. Suboptimal use of ULT has not changed between 1997 and 2012. Patient adherence has improved during the study period, but it remains poor. abstract_id: PUBMED:16046775 Troponin for prediction of cardiovascular collapse in acute colchicine overdose. The use of colchicine, a treatment for acute gout and familial Mediterranean fever, is limited by its toxicity. A relatively low dose of colchicine may be fatal. After a colchicine overdose, monitoring should include 6-12 hourly serum troponin measurements. A rising troponin level predicts cardiovascular collapse and is an indication for more intensive management. abstract_id: PUBMED:29247151 The Rising Incidence of Gout and the Increasing Burden of Comorbidities: A Population-based Study over 20 Years. Objective: To examine the incidence of gout over the last 20 years and to evaluate possible changes in associated comorbid conditions. Methods: The medical records were reviewed of all adults with a diagnosis of incident gout in Olmsted County, Minnesota, USA, during 2 time periods (January 1, 1989-December 31, 1992, and January 1, 2009-December 31, 2010). Incident cases had to fulfill at least 1 of 3 criteria: the American Rheumatism Association 1977 preliminary criteria for gout, the Rome criteria, or the New York criteria. Results: A total of 158 patients with new-onset gout were identified during 1989-1992 and 271 patients during 2009-2010, yielding age- and sex-adjusted incidence rates of 66.6/100,000 (95% CI 55.9-77.4) in 1989-1992 and 136.7/100,000 (95% CI 120.4-153.1) in 2009-2010. The incidence rate ratio was 2.62 (95% CI 1.80-3.83). 
At the time of their first gout flare, patients diagnosed with gout in 2009-2010 had higher prevalence of comorbid conditions compared with 1989-1992, including hypertension (69% vs 54%), diabetes mellitus (25% vs 6%), renal disease (28% vs 11%), hyperlipidemia (61% vs 21%), and morbid obesity (body mass index ≥ 35 kg/m2; 29% vs 10%). Conclusion: The incidence of gout has more than doubled over the recent 20 years. This increase together with the more frequent occurrence of comorbid conditions and cardiovascular risk factors represents a significant public health challenge. abstract_id: PUBMED:27286837 Hyperuricemia and gout : New aspects of an old disease Gouty arthritis is one of the most common arthritides. Due to increasing life expectancy and changing life style, a rising incidence and prevalence of gout can be expected. Because of associations with the diseases metabolic syndrome and cardiovascular morbidity, gout patients often suffer from significant morbidity. Besides the consequent usage of conventional therapeutics, new treatments for gout attacks and for lowering urate levels are available even for patients refractory to conventional therapy. abstract_id: PUBMED:28040245 The rising prevalence and incidence of gout in British Columbia, Canada: Population-based trends from 2000 to 2012. Objectives: Gout is increasingly recognized as the most common form of inflammatory arthritis worldwide; however, no Canadian data on the disease burden of gout are available. We estimated the prevalence, incidence, prescription patterns, and comorbidity burden of gout in an entire Canadian province [British Columbia (BC)] over the last decade. Methods: We utilized PopulationData BC, a province-wide database, to estimate temporal trends in the prevalence and incidence of gout from 2000 to 2012, as well as according to age category. Annual estimates were age-sex-standardized using 2012 as the reference. We also examined annual trends in prescription patterns of common gout medications and assessed the comorbidity burden among gout patients in 2012. Results: The 2012 prevalence of gout was 3.8% among the overall population, and the incidence rate was 2.9 per 1000 person-years. Both gout prevalence and incidence increased substantially over the study period. This burden additionally increased according to age category, affecting over 8% of those ages 60-69 years in 2012. Approximately 22% of gout patients received a prescription for urate-lowering therapy (ULT), which remained stable over the study period, while colchicine and oral glucocorticoid use both increased modestly. By 2012, 72%, 52%, and 18% of prevalent gout patients had been diagnosed with hypertension, hyperlipidemia, and diabetes, respectively. Conclusions: The burden of gout in BC, Canada, is substantial, and both the prevalence and incidence have increased over the past decade, while prescription of ULT remains low. These data support the need to improve gout prevention and care. abstract_id: PUBMED:32541923 Global epidemiology of gout: prevalence, incidence, treatment patterns and risk factors. Gout is the most common inflammatory arthritis and occurs when hyperuricaemia, sustained elevation of serum urate levels resulting in supersaturation of body tissues with urate, leads to the formation and deposition of monosodium urate crystals in and around the joints. 
Recent reports of the prevalence and incidence of gout vary widely according to the population studied and methods employed but range from a prevalence of <1% to 6.8% and an incidence of 0.58-2.89 per 1,000 person-years. Gout is more prevalent in men than in women, with increasing age, and in some ethnic groups. Despite rising prevalence and incidence, suboptimal management of gout continues in many countries. Typically, only a third to half of patients with gout receive urate-lowering therapy, which is a definitive, curative treatment, and fewer than a half of patients adhere to treatment. Many gout risk factors exist, including obesity, dietary factors and comorbid conditions. As well as a firmly established increased risk of cardiovascular disease and chronic kidney disease in those with gout, novel associations of gout with other comorbidities have been reported, including erectile dysfunction, atrial fibrillation, obstructive sleep apnoea, osteoporosis and venous thromboembolism. Discrete patterns of comorbidity clustering in individuals with gout have been described. Increasing prevalence and incidence of obesity and comorbidities are likely to contribute substantially to the rising burden of gout. abstract_id: PUBMED:29393108 Gout: A Disease of Kings. As a disease of kings, and the king of diseases, gout is one of the oldest joint diseases known to humans. First described as far back as 2640 B.C., gout is still the most common form of inflammatory arthritis haunting humans in the 21st century. The disease is caused by the chronic elevation of serum uric acid levels above the saturation point for monosodium urate crystal formation. Its incidence is progressively rising even today, but there are also regional and ethnic variations. Finally, the role of genetics is only beginning to be unraveled. Answer: Yes, the incidence of gout is rising. Several studies have reported an increase in the incidence and prevalence of gout over time. A study using the Rochester Epidemiology Project data indicated that the incidence of primary gout (i.e., no history of diuretic exposure) increased significantly over a 20-year period, with a greater than 2-fold increase in the rate of primary gout from 1977-1978 to 1995-1996 (PUBMED:12415600). Another study found that the incidence of gout in Olmsted County, Minnesota, more than doubled over 20 years, with a significant rise in associated comorbid conditions such as hypertension, diabetes mellitus, renal disease, hyperlipidemia, and morbid obesity (PUBMED:29247151). In England, the incidence of unplanned hospital admissions due to gout increased by 58.4% from 2006/07 to 2016/17, while admissions for rheumatoid arthritis halved over the same period (PUBMED:31523046). In the UK, both the prevalence and incidence of gout increased significantly from 1997 to 2012, with a 63.9% increase in prevalence and a 29.6% increase in incidence (PUBMED:24431399). Similarly, in British Columbia, Canada, the prevalence and incidence of gout increased substantially from 2000 to 2012 (PUBMED:28040245). The global epidemiology of gout also reflects a rising trend, with reports of prevalence ranging from less than 1% to 6.8% and an incidence of 0.58-2.89 per 1,000 person-years. The disease is more prevalent in men, with increasing age, and in certain ethnic groups. Despite the increasing prevalence and incidence, the management of gout remains suboptimal in many countries (PUBMED:32541923).
Overall, the evidence suggests that the incidence of gout is indeed rising, and this trend is accompanied by an increasing burden of comorbidities and a need for improved prevention and care (PUBMED:33150696; PUBMED:27286837; PUBMED:29393108).
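Two of the abstracts above (PUBMED:12415600 and PUBMED:29247151) report age- and sex-adjusted incidence rates per 100,000 with 95% confidence intervals. As a rough illustration of how a directly standardized rate of that kind can be computed, the following Python sketch uses invented stratum counts and standard-population weights; it is not code from either study, and the exact adjustment and confidence-interval methods used by the authors may differ.

```python
import numpy as np

# Illustrative stratum-level data (all numbers are invented placeholders).
cases = np.array([4, 11, 17, 9])                          # incident cases per age-sex stratum
person_years = np.array([52_000, 41_000, 23_000, 9_000])  # person-years at risk per stratum
weights = np.array([0.35, 0.30, 0.25, 0.10])              # standard-population weights (sum to 1)

rates = cases / person_years                      # stratum-specific rates
adj_rate = float(np.sum(weights * rates)) * 1e5   # directly standardized rate per 100,000

# Rough 95% CI treating stratum counts as independent Poisson variables.
variance = float(np.sum(weights**2 * cases / person_years**2)) * 1e5**2
half_width = 1.96 * variance**0.5
print(f"adjusted incidence: {adj_rate:.1f} per 100,000 "
      f"(95% CI {adj_rate - half_width:.1f} to {adj_rate + half_width:.1f})")
```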
Instruction: Is Inferior Oblique Overaction Associated With Astigmatism? Abstracts: abstract_id: PUBMED:26098543 Is Inferior Oblique Overaction Associated With Astigmatism? Purpose: To study the relationship between inferior oblique muscle overaction and astigmatism, and to explore any change in astigmatism after surgery. Methods: This was a retrospective chart review of patients undergoing strabismus surgery. Demographic data collected included age, diagnosis, preoperative refraction, presence of inferior oblique overaction, surgery performed, and postoperative refraction. Patients were divided into three groups: those having only horizontal strabismus without inferior oblique overaction (no inferior oblique group) as a control group, those having bilateral inferior oblique overaction (bilateral inferior oblique group), and those having unilateral inferior oblique overaction (unilateral inferior oblique group). Right eyes of the bilateral inferior oblique and no inferior oblique groups were chosen for analysis, whereas the eye with inferior oblique overaction was included in the unilateral cases. Refractions were converted to power vector coordinates for comparison. Statistical analysis was performed to compare the power vectors among groups using the no inferior oblique group as controls and to explore any postoperative change in astigmatism. Results: One hundred eighteen patients undergoing strabismus surgery were included (60 males and 58 females; mean age: 11.31 ± 10.59 years). Patients were divided into three groups: those having only horizontal strabismus without inferior oblique overaction (no inferior oblique group; 60 patients) as controls, those having bilateral inferior oblique overaction (bilateral inferior oblique group; 41 patients), and those having unilateral inferior oblique overaction (unilateral inferior oblique group; 17 patients). Preoperatively, there were no differences in astigmatism when comparing eyes with and without inferior oblique overaction. Postoperatively, both the no inferior oblique group and the bilateral inferior oblique group equally showed an increase in with-the-rule cylinder power (P = .02 and .01, respectively). Conclusions: Inferior oblique muscle overaction did not result in an increased prevalence of astigmatism along the axis of the overacting muscle. abstract_id: PUBMED:27980365 Comparison of Astigmatism Induced by Combined Inferior Oblique Anterior Transposition Procedure and Lateral Rectus Recession Alone. Purpose: The purpose of this study is to compare the magnitude and axis of astigmatism induced by a combined inferior oblique (IO) anterior transposition procedure with lateral rectus (LR) recession versus LR recession alone. Methods: Forty-six patients were retrospectively analyzed. The subjects were divided into two groups: those having concurrent inferior oblique muscle overaction (IOOA) and intermittent exotropia (group 1, 20 patients) and those having only intermittent exotropia as a control (group 2, 26 patients). Group 1 underwent combined anterior transposition of IO with LR recession and group 2 underwent LR recession alone. Induced astigmatism was defined as the difference between preoperative and postoperative astigmatism using double-angle vector analysis. Cylinder power, axis of induced astigmatism, and spherical equivalent were analyzed at 1 week, 1 month, and 3 months after surgery. 
Results: Larger changes in the axis of induced astigmatism were observed in group 1, with 4.5° incyclotorsion, than in group 2 at 1 week after surgery (axis, 84.5° vs. 91°; p < 0.001). However, there was no statistically significant inter-group difference thereafter. Relaxation and rapid regression in the incyclotorsion of induced astigmatism were observed over time. Spherical equivalent significantly decreased postoperatively at 1 month in both groups, indicating a myopic shift (p = 0.011 for group 1 and p = 0.019 for group 2) but did not show significant differences at 3 months after surgery (p = 0.107 for group 1 and p = 0.760 for group 2). Conclusions: Combined IO anterior transposition procedures caused an increased change in the axis of induced astigmatism, including temporary incyclotorsion, during the first week after surgery. However, this significant difference was not maintained thereafter. Thus, combined IO surgery with LR recession does not seem to produce a sustained astigmatic change, which can be a potential risk factor of postoperative amblyopia or diplopia compared with LR recession alone. abstract_id: PUBMED:27138808 Changes in the axis of astigmatism and in fundus torsion following inferior oblique muscle weakening. Purpose: To study the changes in fundus torsion and in the axis of astigmatism following inferior oblique (IO) myectomy in patients with inferior oblique overaction (IOOA). Methods: The degree of fundus torsion and corneal astigmatism were prospectively evaluated before and 3 months after IO myectomy in patients with IOOA grade +2 or more in one or both eyes and an astigmatic error of ≥1 D. Fundus torsion was evaluated by measuring the disk foveal angle (DFA) using fundus photography. The axis of astigmatism was identified from the anterior sagittal map using Pentacam corneal imaging. Results: A total of 54 eyes of 27 patients were included. Patients were divided into two groups: those with esotropia and those with exotropia. All patients had a preoperative DFA >8°, which decreased postoperatively in both groups (P < 0.01). Postoperatively, there was incyclorotation of the axis of astigmatism by >5° in 80% of the esotropic group and 75% of the exotropic group (P < 0.01). Conclusions: Incyclorotation of the axis of astigmatism occurs after IO myectomy. Measurement of the change in the axis of astigmatism can be used to assess the torsional changes after IO myectomy in patients with IOOA. abstract_id: PUBMED:28281302 Compensation of corneal oblique astigmatism by internal optics: a theoretical analysis. Purpose: Oblique astigmatism is a prominent optical aberration of peripheral vision caused by oblique incidence of rays striking the refracting surfaces of the cornea and crystalline lens. We inquired whether oblique astigmatism from these two sources should be expected, theoretically, to have the same or opposite signs across the visual field at various states of accommodation. Methods: Oblique astigmatism was computed across the central visual field for a rotationally-symmetric schematic-eye using optical design software. Accommodative state was varied by altering the apical radius of curvature and separation of the biconvex lens's two aspheric surfaces in a manner consistent with published biometry. Oblique astigmatism was evaluated separately for the whole eye, the cornea, and the isolated lens over a wide range of surface curvatures and asphericity values associated with the accommodating lens.
We also computed internal oblique astigmatism by subtracting corneal oblique astigmatism from whole-eye oblique astigmatism. Results: A visual field map of oblique astigmatism for the cornea in the Navarro model follows the classic, textbook description of radially-oriented axes everywhere in the field. Despite large changes in surface properties during accommodation, intrinsic astigmatism of the isolated human lens for collimated light is also radially oriented and nearly independent of accommodation both in theory and in real eyes. However, the magnitude of ocular oblique astigmatism is smaller than that of the cornea alone, indicating partial compensation by the internal optics. This implies internal oblique astigmatism (which includes wavefront propagation from the posterior surface of the cornea to the anterior surface of the lens and intrinsic lens astigmatism) must have tangentially-oriented axes. This non-classical pattern of tangential axes for internal astigmatism was traced to the influence of corneal power on the angles of incidence of rays striking the internal lens. Conclusions: Partial compensation of corneal astigmatism by internal optics is due mainly to the highly converging nature of wavefronts incident upon the lens resulting from corneal refraction. The degree of compensation is quadratically dependent on eccentricity but is expected to diminish as the eye accommodates. Neutralising the cornea by index-matching defeats internal compensation, revealing classical, radially-oriented oblique astigmatism in the isolated lens. abstract_id: PUBMED:3454369 The effect of oblique muscle surgery on the axis of astigmatism. A series of 22 highly astigmatic eyes in 14 patients underwent surgery, either in the form of recessing the inferior oblique, tenotomizing the superior oblique, or tightening the superior oblique. The refractive errors obtained in a masked manner six months postoperative were compared to those preoperative. Weakening the inferior oblique or tightening the superior oblique produced a long-term incyclorotation (clockwise for right eyes and counterclockwise for left eyes) of the axis of astigmatism of approximately 10 degrees. Tenotomizing the superior oblique produced a long-term excyclorotation (counterclockwise for right eyes and clockwise for left eyes) of the axis of astigmatism of about 10 degrees. abstract_id: PUBMED:29118489 Minimal duration cataract surgery with oblique limbal stab incision technique. Purpose: Small incision cataract surgery (SICS) and phacoemulsification with oblique limbal stab incision technique were studied and compared. Setting: The study was conducted in the Department of Ophthalmology, Indira Gandhi Medical College, Shimla, for 1 year. Design: This was a prospective randomized study. Methods: One hundred patients undergoing SICS (6-7 mm) were compared with another age- and sex-matched 100 patients undergoing phaco (2.8 mm) surgery with oblique limbal stab incision technique. These two groups were further subdivided into two groups of 50 each. Patients with traumatic cataract, corneal diseases, and preoperative astigmatism >1.5 D were excluded from the study. The patients were subjected to standard preoperative evaluation. Follow-up was done at 1, 2, 4, 6, and 12 weeks. Results: Mean surgically induced astigmatism at 12 weeks was <1 D in both groups (+0.62 D ± 0.34 in Group A and +0.46 D ± 0.39 in Group B) (statistically significant P < 0.0010).
Mean surgical duration was 690.09 s in SICS and 792.29 s in phacoemulsification (statistically significant, P < 0.0010). Visual outcome was between 6/6 and 6/9 in 86% of the patients in Group A and 97% of patients in Group B at 12 weeks. Conclusion: SICS and phacoemulsification with this technique yield lesser astigmatism than clear corneal and scleral incisions, with the advantage of extension without suturing in complicated cases of phaco and in patients with rigid intraocular lens; phacoemulsification gives better BCVA in a larger proportion of patients at 12 weeks. abstract_id: PUBMED:25701740 Lack of oblique astigmatism in the chicken eye. Primate eyes display considerable oblique off-axis astigmatism which could provide information on the sign of defocus that is needed for emmetropization. The pattern of peripheral astigmatism is not known in the chicken eye, a common model of myopia. Peripheral astigmatism was mapped out over the horizontal visual field in three chickens, 43 days old, and in three near emmetropic human subjects, average age 34.7 years, using infrared photoretinoscopy. There were no differences in astigmatism between humans and chickens in the central visual field (chicks -0.35D, humans -0.65D, n.s.) but large differences in the periphery (i.e. astigmatism at 40° in the temporal visual field: humans -4.21D, chicks -0.63D, p<0.001, unpaired t-test). The lack of peripheral astigmatism in chicks was not due to differences in corneal shape. Perhaps related to their superior peripheral optics, we found that chickens had excellent visual performance also in the far periphery. Using an automated optokinetic nystagmus paradigm, no difference was observed in spatial visual performance with vision restricted to either the central 67° of the visual field or to the periphery beyond 67°. Accommodation was elicited by stimuli presented far out in the visual field. Transscleral images of single infrared LEDs showed no sign of peripheral astigmatism. The chick may be the first terrestrial vertebrate described to lack oblique astigmatism. Since corneal shape cannot account for the difference in astigmatism in humans and chicks, it must trace back to the design of the crystalline lens. The lack of peripheral astigmatism in chicks also excludes a role in emmetropization. abstract_id: PUBMED:33506320 Influence of posterior corneal astigmatism on the outcomes of toric intraocular lens implantation in eyes with oblique astigmatism. Purpose: To assess whether the outcomes of toric intraocular lens (IOL) implantation in eyes with oblique astigmatism can be improved by direct measurements of posterior corneal astigmatism using anterior segment optical coherence tomography (AS-OCT) instead of by using anterior corneal measurements alone. Study Design: Retrospective case series. Methods: Two toric IOL power calculation methods were compared: anterior corneal astigmatism was used in the keratometry group, whilst total corneal astigmatism determined by ray tracing through the measured anterior and posterior corneal surfaces was used in the AS-OCT group. In a total of 279 eyes of 232 patients, subgroup analysis was conducted for with-the-rule (WTR) (85 eyes in the keratometry group and 34 eyes in the AS-OCT group), against-the-rule (ATR) (73/29 eyes), and oblique (26/32 eyes) astigmatism.
Results: In the WTR and ATR astigmatism groups, the uncorrected distance visual acuity (UDVA) was significantly better in the AS-OCT group than in the keratometry group (P = 0.012 and P < 0.001, Mann-Whitney test), and the residual astigmatism was significantly smaller in the AS-OCT group than in the keratometry group (P = 0.037 and P < 0.001). In eyes with oblique astigmatism, the UDVA (P = 0.299) and residual astigmatism (P = 0.373) of the keratometry and AS-OCT groups did not differ. Conclusion: Incorporation of posterior corneal astigmatism measured with AS-OCT can significantly improve the outcomes of toric IOL implantation in eyes with WTR and ATR astigmatism, but not in eyes with oblique astigmatism. abstract_id: PUBMED:2330948 Ocular and systemic findings in the Aarskog (facial-digital-genital) syndrome. The Aarskog (facial-digital-genital) syndrome is an X-linked disorder in which short stature is accompanied by hypertelorism, digital anomalies, and shawl scrotum. Except for hypertelorism and blepharoptosis, ophthalmic abnormalities have been rarely noted in this condition. We examined four patients who had Aarskog syndrome and unilaterally or bilaterally decreased vision on initial examination. Three family members had V-pattern esotropia, latent nystagmus, inferior oblique overaction, and amblyopia. A fourth patient had bilateral blepharoptosis and severe astigmatism. Other ocular features included hyperopia, anisometropia, deficient ocular elevation, blue sclerae, and posterior embryotoxon. These findings underscore the need for ophthalmic examination in asymptomatic patients with Aarskog syndrome to rule out treatable causes of visual loss. abstract_id: PUBMED:37697822 Systematic ophthalmologic evaluation in cardio-facio-cutaneous syndrome: A genotype-endophenotype correlation. Cardio-facio-cutaneous syndrome (CFCS) is a rare genetic disorder belonging to the RASopathies, a group of developmental syndromes caused by upregulated RAS/MAPK signaling. Pathogenic variants affecting four genes, KRAS, BRAF, MAP2K1 and MAP2K2, encoding core signal transducers of the pathway, underlie the condition. Major clinical features include a distinctive facies, ectodermal and cardiac anomalies, reduced postnatal growth, intellectual disability, and musculoskeletal abnormalities. Similar to other RASopathies, reports of visual impairment, high refractive error, optic nerve pallor, and other ocular abnormalities have been anecdotally reported in the literature. The aim of our study is to report the prevalence of ophthalmologic abnormalities in a large monocentric cohort of individuals affected by CFCS and explore the occurrence of genotype-endophenotype correlations in this series of patients. We observed that BRAF mutations are associated with a higher prevalence of anisometropia >3D (11.8% vs. 0%) and high astigmatism (29.4% vs. 0%; both p < 0.001) while patients with mutations in other genes had a significantly higher prevalence of myopia >6 D (60% vs. 5.9%; p = 0.012). Pale optic disc was associated with higher prevalence of inferior oblique muscle (IO) overaction (33.3% vs. 0%) and lower prevalence of ptosis (0% vs. 11.8%; both p < 0.001). Combined exotropia, IO overaction and nystagmus were frequent in patients with pale optic nerve. Our findings might suggest the need for earlier ophthalmologic referral for CFCS patients due to high risk of amblyopia, especially those expressing BRAF mutations.
Answer: Based on the information provided in the abstracts, inferior oblique overaction (IOOA) does not appear to be associated with an increased prevalence of astigmatism along the axis of the overacting muscle. A retrospective chart review of patients undergoing strabismus surgery, which included patients with and without inferior oblique overaction, found no preoperative differences in astigmatism between eyes with and without inferior oblique overaction (PUBMED:26098543). Additionally, a study on the changes in the axis of astigmatism and fundus torsion following inferior oblique muscle weakening showed that incyclorotation of the axis of astigmatism occurs after IO myectomy, suggesting that the torsional changes can be assessed by measuring the change in the axis of astigmatism in patients with IOOA (PUBMED:27138808). However, it is important to note that surgery involving the inferior oblique muscle, such as combined inferior oblique anterior transposition procedures, can cause changes in the axis of induced astigmatism, including temporary incyclotorsion, during the first week after surgery. This significant difference was not maintained thereafter, indicating that combined IO surgery with lateral rectus recession does not produce a sustained astigmatic change (PUBMED:27980365). In the context of ocular and systemic syndromes, such as the Aarskog syndrome and cardio-facio-cutaneous syndrome, inferior oblique overaction has been observed alongside other ocular abnormalities, including astigmatism, but the abstracts do not establish a direct causal relationship between IOOA and astigmatism (PUBMED:2330948, PUBMED:37697822). In summary, while inferior oblique overaction may be associated with changes in the axis of astigmatism following surgical intervention, it does not inherently result in an increased prevalence of astigmatism.
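The first abstract in this set (PUBMED:26098543) converts refractions to power vector coordinates before comparison, and PUBMED:27980365 analyzes induced astigmatism with double-angle vector methods. The sketch below illustrates the standard sphere/cylinder/axis-to-power-vector conversion that such analyses typically rely on; it is a generic example with invented refraction values, not code or data from either study, and sign conventions can differ between reports.

```python
import math

def to_power_vector(sphere, cylinder, axis_deg):
    """Return (M, J0, J45) for a sphero-cylindrical refraction.

    M is the spherical equivalent; J0 and J45 are the Jackson
    cross-cylinder components (Thibos-style convention assumed here).
    """
    a = math.radians(axis_deg)
    m = sphere + cylinder / 2.0
    j0 = -(cylinder / 2.0) * math.cos(2.0 * a)
    j45 = -(cylinder / 2.0) * math.sin(2.0 * a)
    return m, j0, j45

# Invented example refractions; induced astigmatism is taken here as the
# component-wise difference between postoperative and preoperative vectors.
pre = to_power_vector(+1.00, -1.50, 180)
post = to_power_vector(+0.75, -2.00, 170)
induced = tuple(p2 - p1 for p1, p2 in zip(pre, post))
print("pre:", pre)
print("post:", post)
print("induced (dM, dJ0, dJ45):", induced)
```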
Instruction: Do centenarians die healthy? Abstracts: abstract_id: PUBMED:34139339 Centenarians as models of healthy aging: Example of REST. Centenarians are a group of individuals exhibiting extreme longevity, who are characterized by a remarkable compression of morbidity. Therefore, centenarians have been postulated as a model of healthy aging. Different approaches have been used to decipher the biology and genetics of centenarians in order to identify key anti-aging pathways. The majority of studies have taken advantage of blood samples to perform their analysis. Besides, a recent study in human brain samples deciphered the transcription factor REST (Repressor Element-1 Silencing Transcription Factor) as an important player of extreme longevity and cognitive activity. This study goes from human to animal models and reveals that REST acts as an epigenetic regulator of neuronal homeostasis, to control aging and longevity. The aim of this viewpoint is to summarize recent literature describing genetic and epigenetic factors, as well as molecular pathways associated with centenarians and the biology of aging. We will pay particular attention to the impact of REST in centenarians and longevity. abstract_id: PUBMED:38185288 A metagenomics study reveals the gut microbiome as a sex-specific modulator of healthy aging in Hainan centenarians. Background: Sex differences in health status and life expectancy are widely accepted to exist. The mechanisms underlying it are still poorly understood. In this study, we aimed to clarify the influences and contributions of sex on the gut microbiome in healthy centenarians and to explore the different roles played by the gut microbiome in healthy aging between the sexes. Results: Taking covariates of different dimensions into account (social demographics, anthropometry, the activities of daily living, dietary structure, mental state, blood tests, lifestyle and disease history), our data showed that sex was one of the most significant covariates affecting the gut microbiome of healthy centenarians at both the species and Kyoto Encyclopedia of Genes and Genomes Orthology (KO) levels. The beta diversity between the sexes was significantly different (Adonis test: p = 0.011, R2 = 0.031), and the male centenarians had a greater alpha diversity than the females (Simpson and Shannon test: P<0.05). At the species level, we identified 31 species enriched in males and 7 species enriched in females. The composition and function patterns of the microbiome varied between the sexes. Further functional analysis showed that males' gut microbiome exhibited greater resistance to oxidative stress compared to females. In contrast to men, the species associated with healthy aging dominated among healthy female centenarians, while the species associated with unhealthy aging were relatively rare. Conclusions: The present study reveals that the gut microbiome structure and resistance to oxidative stress in healthy centenarians differ between the sexes and provides new insights into the possible sex-specific role of the gut microbiome in healthy aging. abstract_id: PUBMED:31035790 Where and How Centenarians Die? The Role of Hospice Care. The effect of hospice care on place of death among centenarians remained unexplored. Using data obtained from National Health Insurance Research Database (2002-2010), we compared the differences in place and cause of death between centenarians and noncentenarians.
These data were stratified into centenarian (n = 2495) and noncentenarian (n = 820 563) death. Data in place and cause of death and hospice care interventions were retrieved. Poisson regression models were used to evaluate factors associated with the centenarians' place of death. Time series models were used to predict the number of centenarian deaths until 2025. Most (63.8%) of the centenarians died at their own homes, followed by 30.5% who died in hospital. Hospice home care was involved in only 0.3% of the centenarian deaths but in 1.8% of the noncentenarian deaths. The leading causes of death among centenarians were respiratory diseases (16.6%), circulatory diseases (15.2%), and pneumonia (14.8%). Among the centenarians, those who died of circulatory disease, old age, and respiratory diseases were more likely to die at their own homes. We forecasted the number of annual centenarian deaths to reach 800 in 2025. Therefore, an increase in the provision of advanced care planning and earlier home hospice care intervention may enable centenarians to die at their own residence. abstract_id: PUBMED:30037350 Neuropathology and cognitive performance in self-reported cognitively healthy centenarians. With aging, the incidence of neuropathological hallmarks of neurodegenerative diseases increases in the brains of cognitively healthy individuals. It is currently unclear to what extent these hallmarks associate with symptoms of disease at extreme ages. Forty centenarians from the 100-plus Study cohort donated their brain. Centenarians self-reported to be cognitively healthy at baseline, which was confirmed by a proxy. Objective ante-mortem measurements of cognitive performance were associated with the prevalence, distribution and quantity of age- and AD-related neuropathological hallmarks. Despite self-reported cognitive health, objective neuropsychological testing suggested varying levels of ante-mortem cognitive functioning. Post-mortem, we found that neuropathological hallmarks related to age and neurodegenerative diseases, such as Aβ and Tau pathology, as well as atherosclerosis, were abundantly present in most or all centenarians, whereas Lewy body and pTDP-43 pathology were scarce. We observed that increased pathology loads correlated across pathology subtypes, and an overall trend of higher pathology loads to associate with a lower cognitive test performance. This trend was carried especially by the presence of neurofibrillary tangles (NFTs) and granulovacuolar degeneration (GVD) and to a lesser extent by Aβ-associated pathologies. Cerebral Amyloid Angiopathy (CAA) specifically associated with lower executive functioning in the centenarians. In conclusion, we find that while the centenarians in this cohort escaped or delayed cognitive impairment until extreme ages, their brains reveal varying levels of disease-associated neuropathological hallmarks, some of which associate with cognitive performance. abstract_id: PUBMED:33517370 Decoding the role of long noncoding RNAs in the healthy aging of centenarians. Aging is the largest risk factor of major human diseases. Long noncoding RNAs (lncRNAs) as the key regulatory elements have shown a strong impact on multiple biological processes as well as human disease mechanisms. However, the roles of lncRNAs in aging/healthy aging processes remain largely unknown. Centenarians are good models for healthy aging studies due to avoiding major chronic diseases and disabilities. 
To illustrate their ubiquitous nature in the genome and the 'secrets' of healthy aging regulation from the perspective of lncRNAs, peripheral blood samples from two regions consisting 76 centenarians (CENs), 54 centenarian-children (F1) and 41 spouses of centenarian-children (F1SP) were collected for deep RNA-seq. We identified 11 CEN-specific lncRNAs that is particularly expressed in longevous individuals. By kmers clustering, hundreds of human lncRNAs show similarities with CEN-specific lncRNAs, especially with ENST00000521663 and ENST00000444998. Using F1SP as normal elder controls (age: 59.9 ± 6.6 years), eight lncRNAs that are differentially expressed in longevous elders (CEN group, age: 102.2 ± 2.4 years) were identified as candidate aging/health aging-related lncRNAs (car-lncs). We found that the expression of eight car-lncs in human diploid fibroblasts displayed dynamic changes during cell passage and/or H2O2/rapamycin treatment; of which, overexpression either of THBS1-IT1 and THBS1-AS1, two lncRNAs that highly expressed in CENs, can remarkably decrease p16, p21 and the activity of senescent related β-galactosidase, suggesting that THBS1-IT1 and THBS1-AS1 can inhibit cellular senescence. We provided the first comprehensive analysis of lncRNA expression in longevous populations, and our results hinted that dysregulated lncRNAs in CENs are potential protective factors in healthy aging process. abstract_id: PUBMED:30592018 Neuropsychological Test Performance of Cognitively Healthy Centenarians: Normative Data From the Dutch 100-Plus Study. Objectives: The fraction of the population that reaches the age of 100 years is growing. At this age, dementia incidence is high and cognitive functioning is highly variable across individuals. Normative data for neuropsychological tests are lacking in centenarians, which hampers the ability to evaluate their cognitive functioning for both research and clinical practice. Here, we generated norms for neuropsychological tests in a sample of cognitively healthy centenarians while taking sensory impairments into account. Design: Cross-sectional cohort study. Setting: Centenarians who participate in the prospective 100-plus Study. Participants: A total of 235 centenarians (71.5% female), who self-reported to be cognitively healthy, which was confirmed by an informant and a trained researcher. Measurements: We generated normative data for 15 cognitive tests, measuring global cognition (Mini-Mental State Examination [MMSE]), premorbid intelligence, attention, language, memory, executive function, and visuospatial function by multiple linear regressions and/or by reporting percentiles. Results: Normative data for global cognition resulted in a mean MMSE score of 25.6 ± 3.1 (range = 17-30; interquartile range = 24-28). Vision problems and fatigue often complicated the ability to complete tests, and these problems explained 41% and 22% of the missing test scores, respectively. In contrast, hearing problems (4%) and task incomprehension (6%) rarely complicated test performance. While educational level was associated with performance on the majority of the tests, sex and age were only weakly associated with test performance. Conclusions: We generated normative data for 15 common neuropsychological tests in a large sample of cognitively healthy centenarians, while taking age-related sensory impairments into consideration. These normative data allow the detection of deficits across a wide range of cognitive domains. 
Our results suggest that, next to education level, vision ability and the level of fatigue should be taken into account when evaluating cognitive functioning in centenarians. abstract_id: PUBMED:16079208 Do centenarians die healthy? An autopsy study. Background: Our goal was to assess the prevalence of common causes of death and the demographic variables in a selected population of centenarians. Methods: The autopsy reports and medical histories of all individuals ≥100 years, dying unexpectedly out of hospital, were gathered from 42,398 consecutive autopsies, performed over a period of 18 years at the Institute of Forensic Medicine, Vienna. These records were evaluated with regard to age and sex, circumstances of death, season, time and the cause of death, as well as the presence of any other comorbidity. Results: Forty centenarians (11 men, 29 women) were identified with a median age of 102 ± 2.0 (range: 100-108) years. Sixty percent were described as having been healthy before death. However, an acute organic failure causing death was found in 100%, including cardiovascular diseases in 68%, respiratory illnesses in 25%, gastrointestinal disorders in 5%, and cerebrovascular disease in 2%. Additionally, centenarians suffered from several comorbidities (cardiac antecedents, neurologic disorders, liver diseases, cholecystolithiasis), which were not judged to be the cause of death. Conclusions: Centenarians, though perceived to have been healthy just prior to death, succumbed to diseases in 100% of the cases examined. They did not die merely "of old age." The 100% post mortem diagnosis of death as a result of acute organic failure justifies autopsy as a legal requirement for this clinically difficult age group. abstract_id: PUBMED:30348092 A comparison of perceived uselessness between centenarians and non-centenarians in China. Background: Self-perceived uselessness is associated with poorer health in older adults. However, it is unclear whether there is a difference in self-perceived uselessness between centenarians and non-centenarians, and if so, which factors contributed to the difference. Methods: We used four waves of a nationwide longitudinal dataset from 2005 to 2014 in China to investigate these research goals. We first performed multinomial logit regression models to examine the risk of the high or moderate frequency of self-perceived uselessness relative to the low frequency among centenarians (5778 persons) in comparison with non-centenarians aged 65-99 (20,846 persons). We then conducted a cohort analysis for those born in 1906-1913, examining differences in self-perceived uselessness between those centenarians and those died between ages 91 and 99 during 2005-2014. Results: Compared to persons aged 65-79, centenarians had 84% (relative risk ratio (RRR) = 1.84, 95% CI:1.69-2.01) and 35% (RRR = 1.35, 95% CI: 1.25-1.46) higher risk to have the high frequency and the moderate frequency of feeling useless versus low frequency, respectively, when only demographic factors were controlled for. However, centenarians had 31% (RRR = 0.69, 95% CI: 0.54-0.88), 43% (RRR = 0.57, 95% CI: 0.49-0.68), and 25% (RRR = 0.75, 95% CI: 0.67-0.83) lower risk, respectively, to have the high frequency of self-perceived uselessness relative to the low frequency when a wide set of study covariates were controlled for.
In the case of the moderate versus the low frequency of self-perceived uselessness, the corresponding figures were 18% (RRR = 0.82, 95% CI: 0.66-1.02), 22% (RRR = 0.78, 95% CI: 0.67-0.90), and 13% (RRR = 0.87, 95% CI: 0.79-0.96), respectively. The cohort analysis further indicates that those who became centenarians were 36-39% less likely than those died at ages 91-94 to report the high and the moderate frequencies of self-perceived uselessness versus the low frequency; no difference was found between centenarians and those died at ages 95-99. In both period and cohort analyses, behavioral and health-related factors affected the perception substantially. Conclusions: Overall, centenarians were less likely to perceive themselves as useless compared to non-centenarians of younger birth cohorts when a wide set of covariates were considered and non-centenarians of the same birth cohort. How centenarians manage to do so remains an open question. Our findings may help improve our understanding about the longevity secrets of centenarians. abstract_id: PUBMED:28461103 Demographic, genetic and phenotypic characteristics of centenarians in Italy: Focus on gender differences. An impressive and coherent series of epidemiological data from different populations (New England Americans, Mormons, Ashkenazi Jewish, Icelandic, Okinawan Japanese, Italians) suggests that long-lived subjects able to reach the extreme limits of human life, such as centenarians and supercentenarians, represent an extraordinary and informative model to identify the mechanisms responsible for healthy aging and human longevity. In most studies, genetic, demographic and phenotypic characteristics of longevity are discussed separately. However, longevity is a very complex trait due to the complicated interactions of numerous genetic and environmental factors. It is therefore necessary to analyse centenarians with a multidimensional approach, trying to consider different aspects simultaneously. In this review we will focus on Italian centenarians, who have been extensively studied for many years with different approaches, in order to show their peculiarities and the emerging data from the studies carried out on this exceptional population. abstract_id: PUBMED:30229699 Clinical Phenomenology and Characteristics of Centenarians in Hospice. Objectives: Little is known about the hospice experience of centenarians. As the population of centenarians is projected to increase, understanding their unique end-of-life needs will be important to inform delivery of quality end-of-life care. Our objective was to characterize the hospice experience of centenarians. Methods: A retrospective single-institution cohort study of centenarians enrolled in hospice from January 1, 2015, to December 31, 2017, was conducted to collect demographic and clinical information. Results: Seventeen centenarians, who comprised 1.4% of hospice admissions, had an average age of 102 years, were mostly female (71%) and widowed (76%), and all Caucasian. Upon hospice admission, centenarians resided in nursing (8, 47%) and assisted living (4, 24%) residencies as well as at home (4, 24%) and in senior independent living (1, 6%). Sixty percent of centenarians died in a nursing home. The most common hospice admission diagnosis was dementia (35%). Median length of stay on hospice was 41 days (range: 16-85) for 15 persons who died or were discharged alive. Conclusions: In this group of centenarians, dementia was the most common condition for hospice enrollment.
Slightly less than half resided in nursing homes on admission, although death occurred most frequently in a nursing home. Centenarians were generally able to remain out of the hospital at their time of death. Answer: The question of whether centenarians die healthy is complex and multifaceted. Autopsy studies have shown that centenarians, despite being perceived as healthy just prior to death, succumbed to diseases in 100% of the cases examined. They did not die merely "of old age," and the post-mortem diagnosis revealed death as a result of acute organic failure, including cardiovascular diseases, respiratory illnesses, gastrointestinal disorders, and cerebrovascular disease (PUBMED:16079208). This suggests that while centenarians may live to an advanced age, they are not immune to the diseases that commonly affect the elderly population. Moreover, centenarians have been found to have a high prevalence of neuropathological hallmarks related to age and neurodegenerative diseases, such as Aβ and Tau pathology, as well as atherosclerosis. Despite self-reported cognitive health, objective neuropsychological testing suggested varying levels of ante-mortem cognitive functioning among centenarians, with higher pathology loads associated with lower cognitive test performance (PUBMED:30037350). However, it is also important to note that centenarians are a model of healthy aging, as they exhibit a remarkable compression of morbidity and have been postulated to avoid major chronic diseases and disabilities (PUBMED:34139339; PUBMED:33517370). Studies have also shown that centenarians were less likely to perceive themselves as useless compared to non-centenarians, which may contribute to their sense of well-being and healthy aging (PUBMED:30348092). In summary, while centenarians do not die "healthy" in the sense of being free from disease, they often exhibit a compression of morbidity and may have a better quality of life compared to non-centenarians in their final years. Autopsy results clearly indicate that centenarians die from acute organic failures and are not exempt from the diseases that affect the elderly, but their longevity and relatively better well-being suggest that they may have certain protective factors that contribute to their healthy aging process.
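One abstract above (PUBMED:38185288) compares alpha diversity between male and female centenarians using Shannon and Simpson indices. The minimal sketch below shows how those two indices are commonly computed from species counts; it is only an illustration of the textbook formulas with a fabricated sample, not the study's actual metagenomic pipeline.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log(p)))

def simpson_index(counts):
    """Gini-Simpson diversity 1 - sum(p_i^2); some tools report sum(p_i^2) instead."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - np.sum(p**2))

# Fabricated species counts for a single stool sample.
sample = [120, 45, 30, 8, 2]
print("Shannon:", round(shannon_index(sample), 3))
print("Simpson:", round(simpson_index(sample), 3))
```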
Instruction: Pediatric first time non-febrile seizure with focal manifestations: is emergent imaging indicated? Abstracts: abstract_id: PUBMED:24970739 Pediatric first time non-febrile seizure with focal manifestations: is emergent imaging indicated? Purpose: To assess the prevalence of clinically urgent intra-cranial pathology among children who had imaging for a first episode of non-febrile seizure with focal manifestations. Methods: We performed a cross-sectional study of all children aged 1 month to 18 years evaluated for a first episode of non-febrile seizure with focal manifestations and having neuroimaging performed within 24 h of presentation at a single pediatric ED between 1995 and 2012. We excluded intubated patients, those with known structural brain abnormality and trauma. A single neuro-radiologist reviewed all cranial computed tomography and/or magnetic resonance imaging performed. We defined clinically urgent intracranial pathology as any finding resulting in a change of initial patient management. We performed univariate analysis using χ² analysis for categorical data and the Mann-Whitney U test for continuous data. Results: We identified 319 patients having a median age of 4.6 years [IQR 1.8-9.4], of whom 45% were female. Two hundred sixty-two children had a CT scan, 15 had an MR and 42 had both. Clinically urgent intra-cranial pathology was identified on imaging of 13 patients (4.1%; 95% CI: 2.2, 7.0). Infarction, hemorrhage and thrombosis were most common (9/13). Twelve of 13 were evident on CT scan. Persistent Todd's paresis and age ≤ 18 months were predictors of clinically urgent intracranial pathology. Absence of secondary generalization and multiple seizures on presentation were not predictive. Conclusions: Four percent of children imaged with first time, afebrile focal seizures have findings important to initial management. Children aged ≤ 18 months are at increased risk. abstract_id: PUBMED:36866989 Clinically important intracranial abnormalities in children presenting with first focal seizure. Background: Management of pediatric patients presenting with a first seizure is challenging, especially with regard to emergent neuroimaging. The rate of abnormal neuroimaging findings is known to be higher in focal seizures than in generalized seizures, but those intracranial abnormalities are not always clinically emergent. In this study, we aimed to determine the rate and indicators for clinically important intracranial abnormalities that change acute management in children presenting with a first focal seizure to the pediatric emergency department (PED). Methods: This study was conducted retrospectively in the PED at a University Children's Hospital setting. The study population consisted of patients aged between 30 days and 18 years with a first focal seizure who had emergent neuroimaging at the PED between the years 2001 and 2012. Results: There were 65 eligible patients meeting the study criteria. Clinically important intracranial abnormalities requiring emergent neurosurgical or medical intervention were detected in 18 patients (27.7%) at the PED. Four patients (6.1%) underwent emergent surgical procedures. Seizure recurrence and the need for acute seizure treatment in the PED were significantly associated with clinically important intracranial abnormalities. Conclusions: A neuroimaging yield of 27.7% shows that a first focal seizure must be evaluated meticulously.
From the emergency department's point of view, we suggest that first focal seizures in children should be evaluated with emergent neuroimaging, if possible with magnetic resonance imaging. In particular, patients with recurrent seizures at presentation require more careful evaluation. abstract_id: PUBMED:37907727 Yield of non-invasive imaging in MRI-negative focal epilepsy. Objective: The absence of an MRI lesion considerably reduces the probability of having an excellent outcome (International League Against Epilepsy [ILAE] class I-II) after epilepsy surgery. Surgical success in magnetic-resonance imaging (MRI)-negative cases therefore relies mainly on non-invasive techniques such as positron-emission tomography (PET), subtraction ictal/inter-ictal single-photon-emission-computed-tomography co-registered to MRI (SISCOM), electric source imaging (ESI) and morphometric MRI analysis (MAP). We were interested in identifying the optimal imaging technique or combination to achieve post-operative class I-II in patients with MRI-negative focal epilepsy. Methods: We identified 168 epileptic patients without MRI lesion. Thirty-three (19.6%) were diagnosed with unifocal epilepsy, underwent surgical resection and follow-up ≥ 2 years. Sensitivity, specificity, predictive values, and diagnostic odds ratio (OR) were calculated for each technique individually and in combination (after co-registration). Results: 23/33 (70%) were free of disabling seizures (75.0% with temporal and 61.5% extratemporal lobe epilepsy). None of the individual modalities presented an OR > 1.5, except ESI if only patients with interictal epileptiform discharges (IEDs) were considered (OR 3.2). On a dual combination, SISCOM with ESI presented the highest outcome (OR = 6). MAP contributed to detecting indistinguishable focal cortical dysplasia in particular in extratemporal epilepsies with a sensitivity of 75%. Concordance of PET, ESI on interictal epileptic discharges, and SISCOM was associated with the highest chance for post-operative seizure control (OR = 11). Conclusion: If MRI is negative, the chances of benefiting from epilepsy surgery are almost as high as in lesional epilepsy, provided that multiple established non-invasive imaging tools are rigorously applied and co-registered together. abstract_id: PUBMED:30685877 Voxel-based morphometric magnetic resonance imaging postprocessing in non-lesional pediatric epilepsy patients using pediatric normal databases. Background And Purpose: Pre-surgical evaluation of pediatric patients with drug-resistant focal epilepsy and negative (non-lesional) magnetic resonance imaging (MRI) is particularly challenging. Focal cortical dysplasia (FCD), a frequent pathological substrate in such a setting, may be subtle on MRI and evade detection. The aim of this study was to use voxel-based MRI postprocessing to improve the detection of subtle FCD in pediatric surgical candidates. Methods: A consecutive cohort of pediatric patients undergoing pre-surgical evaluation with a negative MRI by visual analysis was included. MRI postprocessing was performed using a voxel-based morphometric analysis program (MAP) on T1-weighted volumetric MRI, with comparison to an age-specific normal pediatric database. The pertinence of MAP-positive areas was confirmed by surgical outcome and pathology. Results: A total of 78 patients were included. Forty-four patients (56%) had positive MAP regions.
Complete resection of the MAP-positive regions was positively associated with seizure-free outcome compared with the no/partial resection group (P < 0.001). Patients with no/partial resection of the MAP-positive regions had worse seizure outcomes than the MAP-negative group (P = 0.002). The MAP-positive rate was 100%, 77%, 63% and 40% in the 3-5, 5-10, 10-15 and 15-21 year age groups, respectively. MAP-positive rates were 45% in patients with temporal resection and 63% in patients with extratemporal resection. Complete resection of the MAP-positive regions was positively associated with seizure-free outcome in the extratemporal group (P = 0.001) but not in the temporal group (P = 0.070). Conclusion: Our data suggest the importance of using MRI postprocessing in the pre-surgical evaluation process of pediatric epilepsy patients with apparently normal MRI. abstract_id: PUBMED:35433936 Cost-effectiveness of perampanel as an adjunctive treatment for uncontrolled focal seizures in pediatric patients: a Chinese perspective. Background: The incidence of epilepsy is 41-187 per 100,000 person-years in children. The health care costs for children with uncontrolled epilepsy are a huge burden. Perampanel (PER) was effective, safe, and well-tolerated as add-on therapy in pediatric patients aged 4 to <12 years with uncontrolled focal seizures. However, there is still limited evidence on the cost-effectiveness of PER in pediatric patients. We aimed to evaluate the cost-effectiveness of PER as an add-on therapy for pediatric patients with uncontrolled focal seizures. Methods: A Markov model was established to conduct an analysis from the perspective of the Chinese health system and society. The incremental cost-effectiveness ratio (ICER) for patients using PER and conventional therapy versus patients using conventional therapy alone was estimated and compared. The transition probability of the response level, health state utility values, and costs were derived from clinical trials and the literature. Costs, including medical, drug, transportation and indirect costs, were calculated. We performed 1-way sensitivity analyses and probabilistic sensitivity analyses. A subgroup analysis of different ages was also conducted. Results: The base-case analysis indicated that compared to maintaining conventional therapy, adding PER as an adjuvant drug therapy had an increased cost of $3,449.85 over 5 years, with an incremental quality-adjusted life years (QALY) value of 0.40, resulting in an ICER of $8,582.58 per additional QALY. The health state utility value had the greatest effect on the ICER. The probabilistic sensitivity analyses showed that the probability of PER being cost-effective was 76.72% at a willingness-to-pay of $11,293/QALY. The ICER of the subgroup ranged from $7,167.95/QALY to $19,710.96/QALY. Conclusions: Our study demonstrated that PER is a cost-effective add-on therapy for pediatric patients. abstract_id: PUBMED:34619650 Non-parametric combination of multimodal MRI for lesion detection in focal epilepsy. One third of patients with medically refractory focal epilepsy have normal-appearing MRI scans. This poses a problem as identification of the epileptogenic region is required for surgical treatment. This study performs a multimodal voxel-based analysis (VBA) to identify brain abnormalities in MRI-negative focal epilepsy. Data were collected from 69 focal epilepsy patients (42 with discrete lesions on MRI scans, 27 with no visible findings on scans), and 62 healthy controls.
MR images comprised T1-weighted, fluid-attenuated inversion recovery (FLAIR), fractional anisotropy (FA) and mean diffusivity (MD) from diffusion tensor imaging, and neurite density index (NDI) from neurite orientation dispersion and density imaging. These multimodal images were coregistered to T1-weighted scans, normalized to a standard space, and smoothed with 8 mm FWHM. Initial analysis performed voxel-wise one-tailed t-tests separately on grey matter concentration (GMC), FLAIR, FA, MD, and NDI, comparing patients with epilepsy to controls. A multimodal non-parametric combination (NPC) analysis was also performed simultaneously on FLAIR, FA, MD, and NDI. Resulting p-maps were family-wise error rate corrected, threshold-free cluster enhanced, and thresholded at p &lt; 0.05. Sensitivity was established through visual comparison of results to manually drawn lesion masks or seizure onset zone (SOZ) from stereoelectroencephalography. A leave-one-out cross-validation with the same analysis protocols was performed on controls to determine specificity. NDI was the best performing individual modality, detecting focal abnormalities in 38% of patients with normal MRI and conclusive SOZ. GMC demonstrated the lowest sensitivity at 19%. NPC provided superior performance to univariate analyses with 50% sensitivity. Specificity in controls ranged between 96 and 100% for all analyses. This study demonstrated the utility of a multimodal VBA utilizing NPC for detecting epileptogenic lesions in MRI-negative focal epilepsy. Future work will apply this approach to datasets from other centres and will experiment with different combinations of MR sequences. abstract_id: PUBMED:36062078 High density electric source imaging in childhood-onset epilepsy due to focal cortical dysplasia. Objective: The goal of this study was to investigate the diagnostic utility of electric source imaging (ESI) in the presurgical evaluation of children with focal cortical dysplasia (FCD) and to compare it with other imaging techniques. Methods: Twenty patients with epilepsy onset before 18 years, surgically treated focal epilepsy with a minimal follow-up of 2 years, and histologically proven FCD were retrospectively selected. All patients underwent MRI, positron emission tomography (PET), and 16 patients also had ictal single-photon emission computed tomography (iSPECT). ESI, using EEG with 64 electrodes or more (HD-ESI), was performed in all 20 patients. We determined sensitivity, specificity and accuracy of ESI, and compared its yield to that of other imaging techniques. Results: Twelve patients were seizure-free post-operatively (60%). Among all patients, highest localization accuracy (80%) was obtained with ESI, followed by PET and iSPECT (75%). When results from ESI and SPECT were concordant 100% of patients achieved Engel I outcome. If ESI and PET showed concordant localization, 90% of patients achieved postoperative seizure freedom. Conclusions: Our findings demonstrate that HD-ESI allows accurate localization of the epileptogenic zone in patients with FCD. Significance: In combination with other imaging modalities, ESI helps with planning a more accurate surgery and therefore, the chances of postoperative seizure control are higher. Since it is based on EEG recordings, it does not require sedation, which is particularly interesting in pediatric patients. ESI represents an important imaging tool in focal epilepsies due to cortical dysplasia, which might be difficult to detect on standard imaging. 
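Several of the abstracts above (PUBMED:37907727 and PUBMED:36062078 in particular) summarize localization techniques in terms of sensitivity, specificity, accuracy and diagnostic odds ratios. For orientation only, the sketch below shows how such metrics are derived from a 2x2 confusion table; the counts are invented for illustration and are not taken from the cited studies.

```python
# Illustrative only: diagnostic test metrics from a 2x2 table
# (counts are hypothetical, not taken from the cited studies).
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)                    # true positive rate
    specificity = tn / (tn + fp)                    # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)                            # positive predictive value
    npv = tn / (tn + fn)                            # negative predictive value
    diagnostic_or = (tp * tn) / (fp * fn) if fp and fn else float("inf")
    return dict(sensitivity=sensitivity, specificity=specificity,
                accuracy=accuracy, ppv=ppv, npv=npv, diagnostic_or=diagnostic_or)

print(diagnostic_metrics(tp=12, fp=3, fn=4, tn=14))
```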
abstract_id: PUBMED:37141669 Awake Craniotomy Language Mapping in Children With Drug-Resistant Epilepsy due to Focal Cortical Dysplasia. Background: Language mapping during awake craniotomy can allow for precise resection of epileptogenic lesions, while reducing the risk of damage to eloquent cortex. There are few reports in the literature of language mapping during awake craniotomy in children with epilepsy. Some centers may avoid awake craniotomy in the pediatric age group due to concerns that children are unable to cooperate with such procedures. Methods: We reviewed pediatric patients from our center with drug-resistant focal epilepsy who underwent language mapping during awake craniotomy and subsequent resection of the epileptogenic lesion. Results: Two patients were identified, both female, aged 17 years and 11 years at the time of surgery. Both patients had frequent and disabling focal seizures despite trials of multiple antiseizure medications. Both patients had resection of their epileptogenic lesions with the aid of intraoperative language mapping; in both cases pathology was consistent with focal cortical dysplasia. Both patients had transient language difficulties in the immediate postoperative period but no deficits at six-month follow-up. Both patients are now seizure-free. Conclusions: Awake craniotomy should be considered in pediatric patients with drug-resistant epilepsy in whom the suspected epileptogenic lesion is in close proximity to cortical language areas. abstract_id: PUBMED:29375005 Everyday executive function in focal onset pediatric epilepsy on the parent-report BRIEF2. Executive function (EF) difficulties are a core neuropsychological feature of pediatric epilepsy, and parent-report measures of EF concerns are an important complement to task-based EF measures. The Behavior Rating Inventory of Executive Function (BRIEF) has shown sensitivity to parent-reported EF concerns in epilepsy and other pediatric populations. We compared profiles of parent-reported EF concerns using the BRIEF and its revision, the BRIEF2, in 117 pediatric patients with focal onset epilepsy to examine the clinical utility of the revised scale. We then compared BRIEF2 profiles between patients and age- and gender-matched healthy controls. Among patients, profiles on the BRIEF did not globally differ from the BRIEF2, and agreement was very good across scales. Patients and controls differed significantly on the BRIEF2, with patients showing higher EF difficulties reported by parents across most scales. High rates of clinical elevation among patients emerged on the Task Monitor, Plan/Organize, Working Memory, and Shift scales. Younger age of epilepsy onset, chronic epilepsy, and right hemisphere seizure focus were associated with higher parent-reported EF concerns. Findings suggest that the BRIEF2 demonstrates similar performance to the BRIEF among pediatric patients with focal onset epilepsy who are most at risk in the areas of task monitoring, working memory, planning and organization, and flexibility. These findings are informative when comparing literature across versions and provide additional insight into the nature of parent-reported EF difficulties among children with focal onset epilepsy. abstract_id: PUBMED:35245763 Cognitive features of adult focal epilepsy with unknown etiology revealed by the trail making test. Purpose: The purpose of this study was to investigate whether the Trail Making Test (TMT) can clarify cognitive dysfunction in focal epilepsy with unknown etiology. 
Methods: Trail Making Test data were obtained from patients with focal epilepsy with no structural abnormalities on magnetic resonance imaging, history or coexistence of central nerve system diseases, intellectual disability, psychiatric disorders, or medications that might interfere with cognitive function. We performed multiple regression analyses with TMT scores as dependent variables and clinical features as independent variables. Results: We enrolled 125 patients in the study. The statistical analyses revealed that taking fewer antiseizure medications, having a longer duration of education, exhibiting left non-temporal epileptic discharge, and exhibiting right temporal epileptic discharge were associated with shorter time to complete the TMT-A and TMT-B. Older age at the time of last seizure was associated with longer time to complete the TMT-B. In addition, a longer active seizure period was associated with longer time to complete the TMT-A subtracted from time to complete the TMT-B. Conclusions: This study indicated that the TMT can be used for assessing the cumulative effects of seizures and the effects of polypharmacy on cognitive function in patients with focal epilepsy. Furthermore, our results indicated that the visuospatial cognitive ability associated with the TMT may depend on the site of epileptic focus of non-lesional focal epilepsy. Answer: The question of whether emergent imaging is indicated for pediatric first-time non-febrile seizures with focal manifestations is addressed in the provided abstracts. According to the study in abstract PUBMED:24970739, 4.1% of children who underwent imaging for a first episode of non-febrile seizure with focal manifestations had findings on imaging that were important to initial management. The study found that children younger than or equal to 18 months were at increased risk for clinically urgent intracranial pathology. This suggests that emergent imaging may be particularly important in this age group. Another study, abstract PUBMED:36866989, found that 27.7% of children presenting with a first focal seizure had clinically important intracranial abnormalities that required emergent neurosurgical or medical intervention. This high rate of significant findings supports the use of emergent neuroimaging in the evaluation of first focal seizures in children. The other abstracts discuss various imaging techniques and their effectiveness in detecting intracranial abnormalities in cases of focal epilepsy, but they do not directly address the question of emergent imaging for first-time seizures. For instance, abstract PUBMED:37907727 discusses the yield of non-invasive imaging techniques in MRI-negative focal epilepsy, and abstract PUBMED:30685877 highlights the use of voxel-based morphometric MRI postprocessing in non-lesional pediatric epilepsy patients. Abstract PUBMED:35433936 evaluates the cost-effectiveness of perampanel as an adjunctive treatment for uncontrolled focal seizures in pediatric patients, which is not directly related to the question of emergent imaging. Abstract PUBMED:34619650 discusses the use of multimodal MRI for lesion detection in focal epilepsy, and abstract PUBMED:36062078 examines the diagnostic utility of electric source imaging (ESI) in the presurgical evaluation of children with focal cortical dysplasia. 
Lastly, abstract PUBMED:37141669 describes the use of awake craniotomy language mapping in children with drug-resistant epilepsy due to focal cortical dysplasia, and abstracts PUBMED:29375005 and PUBMED:35245763 discuss everyday executive function in pediatric patients and cognitive features in adult patients with focal epilepsy, respectively. In summary, the evidence from the studies in abstracts PUBMED:24970739 and PUBMED:36866989 suggests that emergent imaging may be indicated for pediatric patients presenting with a first-time non-febrile seizure with focal manifestations, especially in certain high-risk groups such as very young children.
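The 4.1% prevalence with 95% CI 2.2-7.0 cited from PUBMED:24970739 corresponds to 13 clinically urgent findings among 319 imaged children. As a hedged illustration of where such an interval comes from (not part of the original analysis), an exact Clopper-Pearson binomial interval gives a very similar range:

```python
# Sketch: reproducing a proportion and a 95% CI of the kind reported above
# (13 clinically urgent findings among 319 imaged children; illustration only,
# the published paper may have used a different interval method).
from statsmodels.stats.proportion import proportion_confint

k, n = 13, 319
low, high = proportion_confint(count=k, nobs=n, alpha=0.05, method="beta")  # Clopper-Pearson
print(f"prevalence = {k / n:.1%}, 95% CI: {low:.1%} to {high:.1%}")
```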
Instruction: Do surveillance intervals in patients with more than five adenomas at index colonoscopy be shorter than those in patients with three to four adenomas? Abstracts: abstract_id: PUBMED:27862272 Do surveillance intervals in patients with more than five adenomas at index colonoscopy be shorter than those in patients with three to four adenomas? A Korean Association for the Study of Intestinal Disease study. Background And Aim: There is controversy about the surveillance interval after colonoscopy when 5-10 adenomas have been found on index colonoscopy. This study aimed to investigate the risk of colorectal neoplasm (CRN) according to the number of adenomas at index colonoscopy. Methods: A retrospective, multicenter study was conducted at 10 university hospitals in Korea. We included 1394 patients with ≥ 3 adenomas at index colonoscopy. The risk of advanced CRN was compared according to the number of adenomas (intermediate risk group: 3-4 small adenomas or at least one ≥ 10 mm; high risk group: ≥ 5 small adenomas or ≥ 3 adenomas with at least one ≥ 10 mm). Results: Overall, 164 (11.8%) developed an advanced CRN after a mean of 4.0 years from baseline colonoscopy. The 3-year and 5-year risk of advanced CRN was 2.1% (95% CI 2.09-2.11) and 14.4% (95% CI 14.36-14.44) in the intermediate risk group and 3.2% (95% CI 3.19-3.21) and 23.3% (95% CI 19.15-19.25) in the high risk group (P = 0.01). Having ≥ 5 adenomas (OR = 1.57, 95% CI 1.11-2.23, P = 0.01) detected at index colonoscopy was a significant risk factor for developing advanced CRN. Conclusions: Although risk of advanced CRN in patients with 5-10 adenomas was significantly higher than that in patients with 3-4 adenomas, the cumulative risk at 3 years was low at 3.2%. Thus, we suggest that a 3-year surveillance interval might be appropriate for the patients with 5-10 adenomas, and further prospective studies are needed to investigate whether more intensive surveillance is needed in this group. abstract_id: PUBMED:36763451 The Impacts of Computer-Aided Detection of Colorectal Polyps on Subsequent Colonoscopy Surveillance Intervals: Simulation Study. Background: Computer-aided detection (CADe) of colorectal polyps has been shown to increase adenoma detection rates, which would potentially shorten subsequent surveillance intervals. Objective: The purpose of this study is to simulate the potential changes in subsequent colonoscopy surveillance intervals after the application of CADe in a large cohort of patients. Methods: We simulated the projected increase in polyp and adenoma detection by universal CADe application in our patients who had undergone colonoscopy with complete endoscopic and histological findings between 2016 and 2020. The simulation was based on bootstrapping the published performance of CADe. The corresponding changes in surveillance intervals for each patient, as recommended by the US Multi-Society Task Force on Colorectal Cancer (USMSTF) or the European Society of Gastrointestinal Endoscopy (ESGE), were determined after the simulated CADe application. Results: A total of 3735 patients who had undergone colonoscopy were included. Based on the simulated CADe effect, the application of CADe would result in 19.1% (n=714) and 1.9% (n=71) of patients having shorter surveillance intervals, according to the USMSTF and ESGE guidelines, respectively.
In particular, all (or 2.7% (n=101) of the total) patients who were originally scheduled to have 3-5 years of surveillance would have their surveillance intervals shortened to 3 years, following the USMSTF guidelines. The changes in this group of patients were largely attributed to an increase in the number of adenomas (n=75, 74%) rather than serrated lesions being detected. Conclusions: Widespread adoption of CADe would inevitably increase the demand for surveillance colonoscopies with the shortening of original surveillance intervals, particularly following the current USMSTF guideline. abstract_id: PUBMED:28219690 Factors Associated With Shorter Colonoscopy Surveillance Intervals for Patients With Low-Risk Colorectal Adenomas and Effects on Outcome. Background & Aims: Endoscopists do not routinely follow guidelines to survey individuals with low-risk adenomas (LRAs; 1-2 small tubular adenomas, &lt; 1 cm) every 5-10 years for colorectal cancer; many recommend shorter surveillance intervals for these individuals. We aimed to identify the reasons that endoscopists recommend shorter surveillance intervals for some individuals with LRAs and determine whether timing affects outcomes at follow-up examinations. Methods: We collected data from 1560 individuals (45-75 years old) who participated in a prospective chemoprevention trial (of vitamin D and calcium) from 2004 through 2008. Participants in the trial had at least 1 adenoma, detected at their index colonoscopy, and were recommended to receive follow-up colonoscopy examinations at 3 or 5 years after adenoma identification, as recommended by the endoscopist. For this analysis we collected data from only participants with LRAs. These data included characteristics of participants and endoscopists and findings from index and follow-up colonoscopies. Primary endpoints were frequency of recommending shorter (3-year) vs longer (5-year) surveillance intervals, factors associated with these recommendations, and effect on outcome, determined at the follow-up colonoscopy. Results: A 3-year surveillance interval was recommended for 594 of the subjects (38.1%). Factors most significantly associated with recommendation of 3-year vs a 5-year surveillance interval included African American race (relative risk [RR] to white, 1.41; 95% confidence interval [CI], 1.14-1.75), Asian/Pacific Islander ethnicity (RR to white, 1.7; 95% CI, 1.22-2.43), detection of 2 adenomas at the index examination (RR vs 1 adenoma, 1.47; 95% CI, 1.27-1.71), more than 3 serrated polyps at the index examination (RR=2.16, 95% CI, 1.59-2.93), or index examination with fair or poor quality bowel preparation (RR vs excellent quality, 2.16; 95% CI, 1.66-2.83). Other factors that had a significant association with recommendation for a 3-year surveillance interval included family history of colorectal cancer and detection of 1-2 serrated polyps at the index examination. In comparisons of outcomes, we found no significant differences between the 3-year vs 5-year recommendation groups in proportions of subjects found to have 1 or more adenomas (38.8% vs 41.7% respectively; P = .27), advanced adenomas (7.7% vs 8.2%; P = .73) or clinically significant serrated polyps (10.0% vs 10.3%; P = .82) at the follow-up colonoscopy. 
Conclusions: Possibly influenced by patients' family history, race, quality of bowel preparation, or number or size of polyps, endoscopists frequently recommend 3-year surveillance intervals instead of guideline-recommended intervals of 5 years or longer for individuals with LRAs. However, at the follow-up colonoscopy, similar proportions of participants have 1 or more adenomas, advanced adenomas, or serrated polyps. These findings support the current guideline recommendations of performing follow-up examinations of individuals with LRAs at least 5 years after the index colonoscopy. abstract_id: PUBMED:32338818 Association of second surveillance colonoscopy findings with index and first surveillance colonoscopy results. Objective: Although there have been established guidelines for first surveillance colonoscopy (FSC) after a polypectomy, there is no consensus on performing a second surveillance colonoscopy (SSC), especially in Asian countries. This study aimed to investigate the association of SSC findings with index total colonoscopy (TCS) and FSC results. Methods: This was a single-center retrospective cohort study involving 1928 consecutive Japanese patients who had received three or more colonoscopies. High-risk colonoscopic findings were defined as advanced adenoma (≥10 mm in size, with a villous histology or high-grade dysplasia) or more than three adenomas, whereas low-risk findings were defined as one to two non-advanced adenomas. On the basis of index TCS results, the patients were divided into three groups: no adenomas (NA) (n = 888), low-risk (LR) (n = 476), and high-risk (HR) (n = 564) groups, respectively. Results: In the NA group, the rate of high-risk findings on SSC was significantly higher in patients with high-risk or low-risk findings on FSC than in those with no adenoma (7.7% and 7.9% vs 2.2%, P < 0.05). Patients in the LR and HR groups with high-risk findings on FSC had a significantly higher risk on SSC than those with low-risk findings or no adenoma on FSC (LR group: 28.6%, 9.4%, and 5.9%, respectively, P < 0.01; HR group: 34.5%, 18.8%, and 7.9%, respectively, P < 0.01). Conclusions: Index TCS and especially FSC findings were predictive of SSC results. The study results may be useful for determining appropriate intervals for surveillance colonoscopy in Asian countries. abstract_id: PUBMED:26048253 Yield of second surveillance colonoscopy to predict adenomas with high-risk characteristics. Background And Aims: The yield of surveillance colonoscopies for patients with a history of polyps is well established for first surveillance, but limited for second surveillance. The aim of this study was to evaluate the proportion of high-risk adenomas at second surveillance colonoscopy based on findings of previous colonoscopies. Methods: This retrospective cohort study was conducted in a tertiary hospital and patients who had undergone three colonoscopies were included. Based on the findings at index colonoscopy, patients were categorized into three groups: high-risk adenoma (n=252), low-risk adenoma (n=158) or no-adenoma (n=318). Findings of subsequent high-risk adenoma, low-risk adenoma and no adenoma at surveillance colonoscopies were documented in each group.
Results: Among patients with high-risk adenoma at index and first surveillance colonoscopies, significantly higher rates of high-risk findings were found at second surveillance, compared with patients who had low-risk or no-adenoma at index colonoscopy and high-risk adenoma at first surveillance colonoscopy (58%, 33% and 10%, respectively, p < 0.001). Conclusions: Both index colonoscopy and first surveillance high-risk adenoma have an impact on the incidence of high-risk findings at second surveillance colonoscopy and these subjects need close surveillance. abstract_id: PUBMED:33280217 Outcomes of patients with delayed surveillance colonoscopy. Background: Surveillance colonoscopy has been shown to be an effective tool for prevention of colorectal cancer (CRC) in high-risk populations, if adhered to. Aim: To discover the sequelae of late surveillance in a cohort of patients overdue for colonoscopy. Methods: We conducted a retrospective cohort study on all patients from the Bay of Plenty District Health Board region, New Zealand, placed on the colonoscopy surveillance waitlist from 2006 onwards who had their procedure completed between 1 November 2016 and 31 January 2018. Patients with overdue surveillance, defined as done later than 90 days after the recommended due date, were compared with patients who were done either early or on time. Results: A total of 786 patients were recorded as overdue for surveillance colonoscopy and 386 were completed early or on time. The median time overdue was 22 months. Three (0.4%) cases of CRC were found in overdue patients compared with four (1%) cases for those done on time (adjusted P = 0.24). There were 86 (11%) advanced adenomas (AA) in patients overdue compared with 27 (7%) in those not overdue (odds ratio (OR) 1.6; 95% confidence interval (CI) 1.0-2.5; P = 0.04). Surveillance of 180 high-risk post-polypectomy patients identified 2 CRC and 8/43 AA in those overdue compared with no CRC and 9/137 AA (18.6% vs 6.6%; OR 1.79; 95% CI 1.07-2.0; unadjusted P = 0.03) in those done on time. Conclusion: While overdue surveillance is not predictive of increased CRC, it is associated with an increase in expected number of AA, particularly in patients having surveillance for previous high-risk polypectomy. abstract_id: PUBMED:26048021 Impact of Bowel Preparation Quality on Adenoma Identification During Colonoscopy and Optimal Timing of Surveillance. Background: All present guidelines regarding surveillance intervals after index colonoscopy are based on optimal bowel preparation. However, the appropriate timing of repeat colonoscopy after suboptimal bowel preparation is not clear. Aims: To determine the appropriate timing of repeat colonoscopy following index colonoscopy with suboptimal bowel preparation. Methods: The medical records of patients who underwent colonoscopy over 5 years were retrospectively analyzed. Index colonoscopy was defined as the first colonoscopy in patients who underwent the procedure at least twice during the study period. Bowel preparation quality was classified as optimal, fair, or poor. Results: The overall adenoma detection rate was 39.1% (95% confidence interval [CI], 38.0-40.1%), but the detection rate depended significantly on bowel preparation quality (p < 0.001). The adenoma miss rate (AMR) was significantly higher after poor (69.6%) than after optimal (27.3%) and fair (48.1%) preparation (p < 0.001).
At surveillance intervals ≤2 years, the odds ratio (OR) for AMR was significantly higher for poor (OR 6.25; 95% CI, 3.76-11.83) and fair (OR 3.67; 95% CI, 2.19-6.16) preparation relative to optimal preparation; however, no difference was observed at surveillance intervals &gt;2 years. Conclusions: Bowel preparation quality significantly affects AMR. Colonoscopy should be repeated within 2 years in patients with suboptimal bowel preparation at index colonoscopy. abstract_id: PUBMED:31559358 Colonoscopy in Patients Aged 85 Years or Older: An Observational Study. Objectives: Colonoscopy is the first-line modality to examine the colon even in the very elderly but may have an increased risk of complications. This study aimed to evaluate the efficacy and safety of colonoscopy in the very elderly. Methods: Patients ≥85y old, who underwent colonoscopy between September 2010 and August 2012 in two tertiary-care hospitals in Japan were enrolled. Main outcome measures were cecal intubation rate, detection rate of adenomas and cancers, treatment, adverse events, and long-term outcomes. Results: A total of 207 colonoscopies were performed in 177 patients (females 72, males 105; maximum age 95 years). Of these, 202 attempted to reach the cecum, with success in 92%. Excluding patients with known colorectal neoplasms, invasive cancers were detected in 12%, including T1 lesions in 2% and T2 or deeper in 9%. No cancers were detected in patients referred for surveillance or mild abdominal symptoms. Cancers were found in 25% of patients with positive fecal immunochemical tests, 22% with altered bowel habits, 21% with anemia, and 18% with hematochezia. Treatment of 29 patients with cancer included surgery in 22, endoscopic resection in two and no treatment (due to comorbidities) in five. There were no complications. During 730 days (mean) of follow up, 27 patients died but only three died from recurrent colorectal cancer. Conclusions: Colonoscopy for patients aged ≥85 years is safe. A relatively high detection rate of cancers was found, and most were treatable and even curable. (UMIN000018575). abstract_id: PUBMED:22741132 Korean guidelines for postpolypectomy colonoscopy surveillance. Postpolypectomy surveillance has become a major indication for colonoscopy as a result of increased use of screening colonoscopy in Korea. In this report, a careful analytic approach was used to address all available evidences to delineate the predictors for advanced neoplasia at surveillance colonoscopy and we elucidated the high risk findings of the index colonoscopy as follows: 3 or more adenomas, any adenoma larger than 10 mm, any tubulovillous or villous adenoma, any adenoma with high-grade dysplasia, and any serrated polyps larger than 10 mm. Surveillance colonoscopy should be performed five years after the index colonoscopy for those without any high-risk findings and three years after the index colonoscopy for those with one or more high risk findings. However, the surveillance interval can be shortened considering the quality of the index colonoscopy, the completeness of polypectomy, the patient's general condition, and family and medical history. abstract_id: PUBMED:19598329 Colonic adenoma patients have higher incidence of hyperplastic polyps on surveillance colonoscopy. Objectives: To determine the incidence of hyperplastic polyps in patients undergoing surveillance colonoscopy and to compare with the prevalence in individuals undergoing index colonoscopy. 
Patients And Methods: This prospective observational study included patients with index colonoscopy findings of adenomas, adenoma with concomitant hyperplastic polyps and advanced neoplasia who subsequently underwent surveillance colonoscopy. Patient characteristics, indications for colonoscopy, interval between examinations and findings were recorded. These findings were compared with our control group, which comprised patients who underwent index colonoscopy during the same period. Results: The study group consisted of 108 patients (median age 61.5 years) who underwent index colonoscopy for indications including weight loss, rectal bleeding, altered bowel habits and anaemia. On index colonoscopy, adenomas, adenoma with concomitant hyperplastic polyps and advanced neoplasia were present in 61, 35 and 12 patients, respectively. Surveillance colonoscopy was performed after a mean follow-up period of 29 ± 17 months. Findings on surveillance examination included hyperplastic polyps in 35 and 57% of patients with past adenomas and adenoma with concomitant hyperplastic polyps, respectively. The control group included 3852 symptomatic patients who had index colonoscopy (median age, 65 years). Hyperplastic polyps, adenomas and advanced neoplasia were found in 155 (4%), 388 (10%) and 60 (1.5%) of patients, respectively. Hyperplastic polyps and adenomas were significantly higher in the study group as compared with the control group (P > 0.5). Conclusion: Incidence of hyperplastic polyps is significantly higher on surveillance colonoscopy as compared with the prevalence on index colonoscopy. This may signify a continuous spectrum of biological evolution between hyperplastic polyps and adenomas. Answer: The surveillance intervals for patients with more than five adenomas at index colonoscopy should indeed be shorter than those for patients with three to four adenomas. This is supported by a study that found a significantly higher risk of advanced colorectal neoplasm (CRN) in patients with 5-10 adenomas compared to those with 3-4 adenomas at index colonoscopy (PUBMED:27862272). The study suggested that a 3-year surveillance interval might be appropriate for patients with 5-10 adenomas, although the cumulative risk at 3 years was relatively low at 3.2%. However, the authors also noted that further prospective studies are needed to investigate whether more intensive surveillance is needed for this group. Additionally, Korean guidelines for postpolypectomy colonoscopy surveillance recommend that surveillance colonoscopy should be performed three years after the index colonoscopy for those with one or more high-risk findings, which include having three or more adenomas (PUBMED:22741132).
Instruction: Is the oral health impact profile measuring up? Abstracts: abstract_id: PUBMED:21491763 Oral health impact profile. an instrument for measuring the impact of oral health on the quality of life A Dutch-language version of the Oral Health Impact Profile, a questionnaire by means of which the impact of oral health on the quality of life of patients can be determined, was developed and subsequently psychometrically tested among a group of patients with complaints concerning missing dentition or their dentures. In addition, a shortened version of this so-called OHIP-NL49, the OHIP-NL14, was psychometrically tested among a group of patients with temporomandibular disorders. The psychometrical characteristics of both the OHIP-NL49 and the OHIP-NL14 were very good: both the reliability and the validity were high. The conclusion was, that the OHIP-NL49 and the OHIP-NL14 are well suited for determining the impact of oral health on the quality of life. abstract_id: PUBMED:15473385 Reference values in oral health-related quality of life for the abbreviated version of the Oral Health Impact Profile Questionnaires measuring oral health-related quality of life (OHRQoL) need reference values for their interpretation. Population-based norms for a questionnaire allow a comparison of the level of OHRQoL observed for an individual or a group of subjects with the level of that construct in the general population. The aims of this study were to establish normative values for abbreviated instruments of the German version of the Oral Health Impact Profile (OHIP-G). Data of the long OHIP-G (53 items) were collected in a national survey (N=2050, age: 16-79 years). Three short instruments with 5 (OHIP-G5), 14 (OHIP-G14) and 21 items (OHIP-G21) were derived from the long OHIP. Norms were computed for the OHIP-G summary score (simple sum of all item responses). The OHIP-G summary score distribution was described by empirical cumulative distribution functions. Because denture usage was strongly associated with OHIP-G summary scores, percentile-based norms were stratified by this variable. Among subjects without dentures, 50% of the population had OHIP-G5 or OHIP-G14 scores 0 and OHIP-G21 scores &lt; or = 2. Summary scores for the three abbreviated OHIPs were &lt; or = 2, &lt; or = 4, and &lt; or = 7 for subjects with removable dentures and &lt; or = 3, &lt; or = 6, and &lt; or = 8 for subjects with complete dentures. Among subjects without dentures, 90% of the population had &lt; or = 4 OHIP-G5, &lt; or = 11 OHIP-G14, and &lt; or = 16 OHIP-G21 points. For subjects with removable dentures, the figures were &lt; or = 7, &lt; or = 17, and &lt; or = 26 points for OHIP-G5, OHIP-G14 and OHIP-G21, respectively; and for subjects with complete dentures, &lt; or = 10, &lt; or = 25, and &lt; or = 38 points, respectively. The presented norms will provide a frame of reference in future cross-sectional studies in the general population or samples with specific oral conditions when OHRQoL is measured by OHIP-G5, OHIP-G14 or OHIP-G21. abstract_id: PUBMED:23939370 Self-rating of oral health according to the Oral Health Impact Profile and associated factors: a systematic review Objective: To systematically evaluate the literature to investigate associations between social, demographic, economic, psychosocial, and behavioral factors and the self-perception of oral health measured using the Oral Health Impact Profile (OHIP). 
Methods: In this systematic review of the literature, the Preferred Reporting Items for Systematic Reviews (PRISMA) were adapted for the performance of a qualitative metasummary, without meta-analysis. Articles about oral health and associated factors with implications for quality of life were selected, with a focus on the tool for self-rating of the oral health-disease process, the OHIP. Pubmed/National Library of Medicine (NLM) and the Virtual Health Library (Biblioteca Virtual em Saúde - BVS/BIREME) were searched. Articles published between 2001 and 2011 were included. The following Medical Subject Headings (MeSH) were employed: oral health, quality of life, sickness impact profile, and socioeconomic factors. Results: Of 57 articles identified, 20 met the inclusion criteria. The metasummary revealed that a poor self-perception of oral health was associated with unfavorable social, economic, demographic, and psychosocial factors, as well as with undesirable habits and poor clinical oral conditions. Conclusions: There is consensus in the literature about the influence of the investigated factors on the self-perception of oral health and on quality of life. The OHIP is an important aid for determining oral health needs and for developing strategies to control/reduce disease and promote oral health, with a consequent positive impact on quality of life. abstract_id: PUBMED:24909881 Exploratory factor analysis of the Oral Health Impact Profile. Although oral health-related quality of life (OHRQoL) as measured by the Oral Health Impact Profile (OHIP) is thought to be multidimensional, the nature of these dimensions is not known. The aim of this report was to explore the dimensionality of the OHIP using the Dimensions of OHRQoL (DOQ) Project, an international study of general population subjects and prosthodontic patients. Using the project's Learning Sample (n = 5173), we conducted an exploratory factor analysis on the 46 OHIP items not specifically referring to dentures for 5146 subjects with sufficiently complete data. The first eigenvalue (27·0) of the polychoric correlation matrix was more than ten times larger than the second eigenvalue (2·6), suggesting the presence of a dominant, higher-order general factor. Follow-up analyses with Horn's parallel analysis revealed a viable second-order, four-factor solution. An oblique rotation of this solution revealed four highly correlated factors that we named Oral Function, Oro-facial Pain, Oro-facial Appearance and Psychosocial Impact. These four dimensions and the strong general factor are two viable hypotheses for the factor structure of the OHIP. abstract_id: PUBMED:24909797 Confirmatory factor analysis of the Oral Health Impact Profile. Previous exploratory analyses suggest that the Oral Health Impact Profile (OHIP) consists of four correlated dimensions and that individual differences in OHIP total scores reflect an underlying higher-order factor. The aim of this report is to corroborate these findings in the Dimensions of Oral Health-Related Quality of Life (DOQ) Project, an international study of general population subjects and prosthodontic patients. Using the project's Validation Sample (n = 5022), we conducted confirmatory factor analyses in a sample of 4993 subjects with sufficiently complete data. In particular, we compared the psychometric performance of three models: a unidimensional model, a four-factor model and a bifactor model that included one general factor and four group factors. 
Using model-fit criteria and factor interpretability as guides, the four-factor model was deemed best in terms of strong item loadings, model fit (RMSEA = 0·05, CFI = 0·99) and interpretability. These results corroborate our previous findings that four highly correlated factors - which we have named Oral Function, Oro-facial Pain, Oro-facial Appearance and Psychosocial Impact - can be reliably extracted from the OHIP item pool. However, the good fit of the unidimensional model and the high interfactor correlations in the four-factor solution suggest that OHRQoL can also be sufficiently described with one score. abstract_id: PUBMED:26471978 Cross-cultural equivalence in translations of the oral health impact profile. The Oral Health Impact Profile (OHIP) has been translated for comparisons across cultural boundaries. This report on a systematic search of literature published between 1994 and 2014 aims to identify an acceptable method of translating psychometric instruments for cross-cultural equivalence, and how they were used to translate the OHIP. An electronic search used the keywords 'cultural adaptation', 'validation', 'Oral Health Impact Profile' and 'OHIP' in MEDLINE and EMBASE databases supplemented by reference links and grey literature. It included papers on methods of cross-cultural translation and translations of the OHIP for dentulous adults and adolescents, and excluded papers without translational details or limited to specific disorders. The search identified eight steps to cross-cultural equivalence, and 36 (plus three supplemental) translations of the OHIP. The steps involve assessment of (i) forward/backward translation by committee, (ii) constructs, (iii) item interpretations, (iv) interval scales, (v) convergent validity, (vi) discriminant validity, (vii) responsiveness to clinical change and (viii) pilot tests. Most (&gt;60%) of the translations involved forward/backward translation by committee, item interpretations, interval scales, convergence, discrimination and pilot tests, but fewer assessed the underlying theory (47%) or responsiveness to clinical change (28%). An acceptable method for translating quality of life-related psychometric instruments for cross-cultural equivalence has eight procedural steps, and most of the 36 OHIP translations involved at least five of the steps. Only translations to Saudi Arabian Arabic, Chinese Mandarin, German and Japanese used all eight steps to claim cultural equivalence with the original OHIP. abstract_id: PUBMED:18786135 Is the oral health impact profile measuring up? Investigating the scale's construct validity using structural equation modelling. Objectives: The aim of the study was to provide an empirical test of the construct validity of the Oral Health Impact Profile as a measure of Locker's conceptual model of oral health. Methods: A secondary analysis of data from the Ontario Study of Older Adults was carried out using structural equation modelling to assess the degree to which scale items measured the construct they were supposed to measure (within-construct validity) and whether relations between constructs were as hypothesized by Locker's model (between-construct validity). Results: The findings indicated that the Oral Health Impact Profile as currently conceived does not have adequate within-construct validity. 
Scale items did not always measure the construct they were supposed to measure, some items within a construct were redundant, many measured more than one construct, and the scale did not represent seven separate constructs of oral health as originally devised. Following reconceptualization of the scale, the revised six-factor, 22-item version was a better fit to the data. However, the scale did not have adequate between-construct validity. Conclusion: The present findings do not provide support for the conceptual basis of the Oral Health Impact Profile as a measure of Locker's model of oral health. The need for further conceptual development of the scale, and of Locker's model, is discussed. abstract_id: PUBMED:31723667 Investigation of the impact of dental fear on Child Oral Health Impact Profile scores. Background: The Child Oral Health Impact Profile (COHIP) is a measure of oral health-related quality of life (OHRQoL) in children and adolescents. This study examined the impact of dental fear on the OHRQoL by comparing the COHIP scores of children with and without dental fear. Methods: The OHRQoL in children and adolescents was measured using the Korean version of the COHIP. In total, 102 students (49 boys and 53 girls) filled in a questionnaire designed to evaluate dental fear and the OHRQoL in 2012 and 2014. Results: In 2012, the group without dental fear showed higher COHIP scores than the group with dental fear; the difference between the two groups was statistically significant. In 2014, the same pattern was observed, but the difference was non-significant for all COHIP items other than those pertaining to social-emotional well-being. Comparison of COHIP scores according to changes in fear showed that the group with continuous dental fear showed significantly lower overall COHIP, negative COHIP, and low social-emotional wellbeing scores than the group without continuous fear in 2012 and 2014. Conclusion: We expected children with dental fear to have poor oral health, affecting their OHRQoL; however, dental fear did not affect the OHRQoL. abstract_id: PUBMED:36707172 Cross-cultural adaptations of the oral health impact profile - An assessment of global availability of 4-dimensional oral health impact characterization. Objective: The 4-dimensional (4D) structure of oral health-related quality of life (OHRQoL), comprising the dimensions Oral Function, Orofacial Pain, Orofacial Appearance, and Psychosocial Impact, is clinically plausible and psychometrically solid. The original Oral Health Impact Profile (OHIP) instrument and its short-form versions have been proven to lend themselves well to the assessment of these 4 OHRQoL dimensions. However, whether this 4-dimensional approach to oral health impact characterization can be performed on a global scale, that is, for most of the world's population, is not known. The purpose of this study was to perform a systematic review to identify all cross-cultural adaptations of OHIP versions with 49, 20/19, 14, and 5 items. The global availability of 4D oral health impact characterization was investigated. Methods: We performed searches of electronic databases (Scopus, Pubmed, Web of Science), along with hand searching, in June 2022 to identify all cross-cultural language adaptations of the different OHIP versions available in the literature.
Whether the 4D oral health impact assessment can be considered a global approach was judged on the criterion of whether 4D psychometric information was available for at least 75% of the most widely spoken languages with an OHIP version. Results: We identified 82 studies with a total of 90 individual OHIP language versions for 45 languages. Among the top 20 languages with the most first-language (native) speakers, 16 (80%) had OHIP versions. Among the top 20 languages with the most first- and second-language speakers, 16 (80%) also had OHIP versions. Of these 16 OHIP versions, across both language categories, 13 versions (81%) allowed for 4D oral health impact characterization. Conclusion: Four-dimensional oral health impact assessment using the dimensions Oral Function, Orofacial Pain, Orofacial Appearance, and Psychosocial Impact can be considered a globally available approach given that OHIP versions with 4D information are readily available for most widely spoken languages. Thus, psychometrically sound, practical, and internationally comparable oral health impact characterization can be easily performed to study population oral health and determine oral disease impact and treatment efficacy for dental patients. abstract_id: PUBMED:26879212 Cross-cultural equivalence in translation of the oral health impact profile: how to interpret the final score? The oral health impact profile (OHIP) is one of the most widely known oral health-related quality of life instruments. In Community Dentistry and Oral Epidemiology, MacEntee and Brondani report the results of a systematic review to identify acceptable methods for translating psychometric instruments for cross-cultural equivalence of the OHIP scale. But in no study has unidimensionality, one aspect of the validity of the internal structure of the scale, been verified, whereas it is a major psychometric step. In the absence of the study of unidimensionality, it is difficult to interpret the final score. The methodology of transcultural validation of the OHIP could be improved, and the study of unidimensionality is a psychometrically necessary step for the interpretation of the final score. Answer: The Oral Health Impact Profile (OHIP) is a widely recognized instrument for measuring the impact of oral health on quality of life. It has been translated into various languages and adapted for different cultures, aiming to maintain cross-cultural equivalence (PUBMED:26471978, PUBMED:26879212). The OHIP has been psychometrically tested and found to have high reliability and validity in its full-length version (OHIP-NL49) and its shortened version (OHIP-NL14) (PUBMED:21491763). Normative values for abbreviated versions of the OHIP have been established, allowing for comparisons of oral health-related quality of life (OHRQoL) with general population levels (PUBMED:15473385). The OHIP has been used to investigate associations between various factors, such as social, demographic, economic, psychosocial, and behavioral factors, and self-perception of oral health (PUBMED:23939370). However, the construct validity of the OHIP has been questioned. One study using structural equation modeling suggested that the OHIP does not adequately measure the constructs it is supposed to measure, indicating a need for further conceptual development of the scale (PUBMED:18786135). Exploratory and confirmatory factor analyses have been conducted to explore the dimensionality of the OHIP.
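The exploratory analysis summarized in PUBMED:24909881 rests on comparing eigenvalues of the item correlation matrix with those expected from random data (Horn's parallel analysis). The sketch below illustrates that logic on simulated data using NumPy only; it uses an ordinary Pearson correlation matrix rather than the polychoric matrix used in the study, and it is not a re-analysis of OHIP data.

```python
# Illustration of Horn's parallel analysis on simulated data (NumPy only).
# The OHIP study used polychoric correlations of 46 ordinal items; here fabricated
# continuous data with one shared factor are used purely to show the idea.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_items = 500, 20
data = rng.normal(size=(n_subjects, n_items))
data += rng.normal(size=(n_subjects, 1))          # add a shared factor so one eigenvalue dominates

observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

# Eigenvalues expected under pure noise: 95th percentile over random data sets of the same shape.
n_sims = 200
random_eigs = np.empty((n_sims, n_items))
for i in range(n_sims):
    noise = rng.normal(size=(n_subjects, n_items))
    random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
threshold = np.percentile(random_eigs, 95, axis=0)

n_factors = int(np.sum(observed > threshold))     # factors retained by parallel analysis
print("retained factors:", n_factors)
```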
These analyses suggest that the OHIP may consist of four correlated dimensions—Oral Function, Oro-facial Pain, Oro-facial Appearance, and Psychosocial Impact—alongside a dominant higher-order general factor (PUBMED:24909881, PUBMED:24909797). However, the unidimensionality of the scale, which is important for interpreting the final score, has not been verified in all studies, raising concerns about the interpretation of the OHIP scores (PUBMED:26879212). In summary, while the OHIP is a well-established and widely used tool for assessing the impact of oral health on quality of life, there are ongoing discussions and investigations into its construct validity and dimensionality. The scale's ability to measure up to its intended purpose is supported by its high reliability and validity, but further research is needed to clarify its factor structure and ensure accurate interpretation of its scores.
Instruction: Is bone loss linked to chronic inflammation in antiretroviral-naive HIV-infected adults? Abstracts: abstract_id: PUBMED:24871454 Is bone loss linked to chronic inflammation in antiretroviral-naive HIV-infected adults? A 48-week matched cohort study. Objective: Antiretroviral therapy (ART) has been implicated in bone loss in HIV. The role of inflammation and vitamin D is unclear and better investigated in ART-naive individuals. Design And Methods: This is a 48-week, prospective cohort study to compare baseline and change in hip and spine bone mineral density (BMD) measured by dual-energy X-ray absorptiometry in HIV-infected, ART-naive adults and healthy controls matched by age, sex, and race. We also studied associations between bone loss and inflammation markers and plasma 25-hydroxyvitamin D [25(OH)D] using logistic regression. Results: Forty-seven HIV-infected adults and 41 controls were included. Baseline 25(OH)D, BMD at total hip, trochanter, and spine, and prevalence of osteopenia and osteoporosis were similar between groups. In the HIV-infected group, total hip and trochanter, but not spine, BMD decreased over 48 weeks [hip -0.005 (-0.026-0.008) g/cm², P = 0.02 within group; trochanter -0.013 (-0.03-0.003), P &lt; 0.01]. BMD did not change at any site within controls. The HIV-infected group was more likely to have bone loss at the trochanter (P = 0.03). This risk persisted after adjustment for age, sex, race, BMI, smoking, and hepatitis C (odds ratio 4, 95% confidence interval 1.2-15.8). In the HIV-infected group, higher interleukin-6 concentrations (P = 0.04) and Caucasian race (P &lt; 0.01) were independently associated with progression to osteopenia or osteoporosis, but not 25(OH)D levels. Conclusion: BMD at the total hip and trochanter sites decreased in the HIV-infected, ART-naive adults, but not controls, over this 48-week study. Higher serum interleukin-6 concentrations were associated with progression to osteopenia or osteoporosis status in the HIV-infected group. abstract_id: PUBMED:36151508 Growing old with antiretroviral therapy or elderly people in antiretroviral therapy: two different profiles of comorbidity? Background: In persons living with HIV (PLWH), the burden of non-communicable chronic diseases increased over time, because of aging associated with chronic inflammation, systemic immune activation, and long-term exposure to the combination antiretroviral therapy (ART). Methods: To explore the association of chronological age, age at first ART, and exposure to ART with non-communicable chronic diseases, we performed a cross-sectional analysis to evaluate the prevalence of comorbidities in patients enrolled in the SCOLTA Project, stratified by groups of chronological age (50-59 and 60-69 years) and by years of antiretroviral treatment (ART, ≤ 3 or &gt; 3 years). Results: In 1394 subjects (23.8% women), mean age at enrollment was 57.4 (SD 6.5) years, and at first ART 45.3 (SD 10.7). Men were older than women both at enrollment (57.6 vs 56.8, p = 0.06) and at first ART (45.8 vs 43.6, p = 0.0009). ART duration was longer in women (13.1 vs 11.7 years, p = 0.01). The age- and sex-adjusted rate ratios (aRRs, and 95% confidence interval, CI) showed that longer ART exposure was associated with dyslipidemia (aRR 1.35, 95% CI 1.20-1.52), hypertension (aRR 1.52, 95% CI 1.22-1.89), liver disease (aRR 1.78, 95% CI 1.32-2.41), osteopenia/osteoporosis (aRR 2.88, 95% CI 1.65-5.03) and multimorbidity (aRR 1.36, 95% CI 1.21-1.54). 
These findings were confirmed in strata of age, adjusting for sex. Conclusions: Our data suggest that longer ART exposure was associated with increased risk of dyslipidemia, hypertension, and osteopenia/osteoporosis, hence the presence of multimorbidity, possibly due to the exposition to more toxic antiretrovirals. We observed different comorbidities, according to ART exposure and age. abstract_id: PUBMED:24089217 Bone disease in the setting of HIV infection: update and review of the literature. The advent of highly active antiretroviral therapy (HAART) in the mid-1990s has transformed Human Immunodeficiency Virus (HIV) infection into a chronic disease. HIV-infected patients are living longer and are facing several non-AIDS-associated morbidities related with aging, including diabetes mellitus, cardiovascular disease, osteoporosis, osteopenia and fragility fractures. The prevalence of bone disease is higher among HIV-infected subjects. In addition to traditional risk factors, HAART, chronic inflammation and the virus itself have been suggested to contribute to bone loss in the setting of HIV infection. In the present review, we summarize the current knowledge about risk factors for low bone mineral density in HIV-positive patients as well as current recommendations for fracture screening and treatment in this specific population. abstract_id: PUBMED:32206852 People living with HIV and fracture risk. PLHIV have an increased risk of osteoporosis and fractures when compared with people of the same age and sex. In this review, we address the epidemiology and the pathophysiology of bone disease and fractures in PLHIV. The assessment of fracture risk and fracture prevention in these subjects is also discussed. The spectrum of HIV-associated disease has changed dramatically since the introduction of potent antiretroviral drugs. Today, the survival of people living with HIV (PLHIV) is close to that of the general population. However, the longer life-span in PLHIV is accompanied by an increased prevalence of chronic diseases. Detrimental effects on bone health are well recognised, with an increased risk of osteoporosis and fractures, including vertebral fractures, compared to the general population. The causes of bone disease in PLHIV are not fully understood, but include HIV-specific risk factors such as use of antiretrovirals and the presence of chronic inflammation, as well as traditional risk factors for fracture. Current guidelines recommend the use of FRAX to assess fracture probability in PLHIV age ≥ 40 years and measurement of bone mineral density in those at increased fracture risk. Vitamin D deficiency, if present, should be treated. Bisphosphonates have been shown to increase bone density in PLHIV although fracture outcomes are not available. abstract_id: PUBMED:29165684 Can Biomarkers Advance HIV Research and Care in the Antiretroviral Therapy Era? Despite achieving human immunodeficiency virus type 1 (HIV-1) RNA suppression below levels of detection and, for most, improved CD4+ T-cell counts, those aging with HIV experience excess low-level inflammation, hypercoagulability, and immune dysfunction (chronic inflammation), compared with demographically and behaviorally similar uninfected individuals. A host of biomarkers that are linked to chronic inflammation are also associated with HIV-associated non-AIDS-defining events, including cardiovascular disease, many forms of cancer, liver disease, renal disease, neurocognitive decline, and osteoporosis. 
Furthermore, chronic HIV infection may interact with long-term treatment toxicity and weight gain after ART initiation. These observations suggest that future biomarker-guided discovery and treatment may require attention to multiple biomarkers and, possibly, weighted indices. We are clinical trialists, epidemiologists, pragmatic trialists, and translational scientists. Together, we offer an operational definition of a biomarker and consider how biomarkers might facilitate progress along the translational pathway from therapeutic discovery to intervention trials and clinical management among people aging with or without HIV infection. abstract_id: PUBMED:36217985 Frailty and prefrailty in people living with HIV, with focus on women living with HIV. With the increasing lifespan of people living with HIV (PLWH), frailty and prefrailty are becoming topics which require more attention. The reciprocal interactions between chronic inflammation, comorbidities and frailty demonstrate the complex pathophysiology of frailty and its consequences. Female sex, HIV infection without antiretroviral treatment, reduced CD4 cell count, depression and cardiovascular disease are some of the risk factors for frailty among PLWH. Frailty predisposes to falls and can therefore lead to more frequent fractures, hospitalization and death, especially in women with osteoporosis. Continuous antiretroviral treatment, prevention of comorbidities such as depression and diagnosis of prefrailty are crucial interventions to slow the development of frailty. This review summarizes the literature on frailty in people living with HIV and discusses frailty management strategies in order to improve the health outcomes in women living with HIV. abstract_id: PUBMED:22364956 HIV infection and chronic inflammation: is the bacterial translocation the underlying cause? Currently, non-AIDS comorbidities (cardiovascular disease, non-AIDS-related cancers, liver disease, osteoporosis, etc.) have become an important cause of morbimortality in patients with human immunodeficiency virus type 1 (HIV-1) infection. The elevation of plasma markers of inflammation has been associated with the development of cardiovascular disease and death from all causes. Therefore, there is great interest in elucidating the underlying causes responsible for this persistent inflammatory status. The intestinal barrier disruption associated with HIV-1 infection may favor the passage of gut microbial products into the blood, resulting in immune stimulation. In this article we review the pathogenesis of bacterial translocation and its relevance to HIV-1 infection. abstract_id: PUBMED:23100110 HIV infection and osteoporosis: pathophysiology, diagnosis, and treatment options. As the population with HIV continues to age, specialists in HIV care are increasingly encountering chronic health conditions, which now include osteoporosis, osteopenia, and fragility fractures. The pathophysiology of the bone effects of HIV infection is complex and includes traditional risk factors for bone loss as well as specific effects due to the virus itself, chronic inflammation, and HAART. Examining risk factors for low bone density and screening of certain patients is suggested, and consideration should be given to treatment for those considered high risk for fracture. abstract_id: PUBMED:30915506 CD4 T cell count is inversely associated with lumbar spine bone mass in HIV-infected men under the age of 50 years. 
HIV-infected men under the age of 50 years had a lower bone mass compared to that of HIV-uninfected men. Lower CD4 T cell counts, independent of whether antiretroviral therapy (ART) was used, were associated with lower BMD. HIV-infected patients with low CD4 T cell counts may need follow-up and intervention regarding bone health, including younger patients. Introduction: HIV-infected patients have a low bone mineral density (BMD) owing to multifactorial interaction between common osteoporosis risk factors and HIV-related factors, including chronic inflammation and ART. Although HIV infection and ART might affect bone metabolism, little data is available for patients aged under 50 years. We aimed to investigate the association of HIV infection-induced low CD4 T cell counts and ART with BMD in men aged under 50 years. Methods: We performed an age- and body mass index-matched case-control study. BMD values of HIV-infected and HIV-uninfected men (< 50 years) were compared, and HIV-infected men were stratified by CD4 T cell counts and ART use. Results: After adjusting confounders, HIV-infected men with CD4 T cell counts ≥ 500 cells/μL (n = 28) and < 500 cells/μL (n = 139) had lower BMD at the femoral neck (FN, p < 0.001) and total hip (TH, p < 0.001) than HIV-uninfected men (n = 167). HIV-infected men with CD4 T cell counts < 500/μL had lower BMD at the lumbar spine (LS, p = 0.034) than those with counts of ≥ 500 cells/μL, but not at FN and TH. The CD4 T cell count (γ = 0.169, p = 0.031) was positively correlated with BMD at LS. There was no significant difference in the BMD (p = 0.499 to >0.999) between the ART-naïve (n = 75) and ART-user group (n = 92). Conclusions: Despite their relatively younger age, HIV-infected men had a lower BMD than HIV-uninfected men. Lower CD4 T cell counts, irrespective of ART, might result in lower bone mass. abstract_id: PUBMED:26414081 Bone loss in HIV: a contemporary review. Purpose Of Review: Because of antiretroviral therapy (ART), people are living with HIV infection longer than ever before. As this patient group ages, it is expected that medical comorbidities such as osteoporosis and fragility fractures will increase. The purpose of this review is to address the epidemiology and what is known regarding the pathogenesis of bone loss in people living with HIV infection with a focus on recently published literature. Recent Findings: HIV-infected individuals are at increased risk for low bone mineral density and bone fractures. The cause of bone loss in HIV is multifactorial including traditional risk factors some of which disproportionately affect HIV-infected individuals and alterations in bone metabolism due to ART, HIV viral proteins and chronic inflammation. Lifestyle modification, changing ART, calcium and vitamin D supplementation and pharmacologic treatment for osteoporosis may all be employed to abrogate bone loss in this patient group. Summary: Clinicians should be aware of the contributors to bone loss in people living with HIV in order to recognize high-risk individuals and to take appropriate steps to address modifiable risk factors to prevent future fracture. Answer: Yes, bone loss is linked to chronic inflammation in antiretroviral-naive HIV-infected adults. A study found that bone mineral density (BMD) at the total hip and trochanter sites decreased in HIV-infected, ART-naive adults over a 48-week period.
Higher serum interleukin-6 concentrations were independently associated with progression to osteopenia or osteoporosis in the HIV-infected group, suggesting a link between inflammation and bone loss (PUBMED:24871454).
Instruction: Is traditional rural lifestyle a barrier for quality of life assessment? Abstracts: abstract_id: PUBMED:20013158 Is traditional rural lifestyle a barrier for quality of life assessment? A case study using the Short Form 36 in a rural Chinese population. Purpose: The majority of existing quality of life measures are based on urban-living environments. This study aimed at exploring the validity of using an urban-lifestyle-based health questionnaire with individuals living a traditional rural lifestyle. Methods: The Short Form-36 (SF-36) interview was administered to 1603 rural Chinese residents. Semantic ambiguity of the items was investigated using tests of internal consistency, test-retest reliability, exploratory factor analysis, and clustering and ordering of item mean scores. The self-explanations from the respondents were adopted to interpret the implications of the changes in meanings of the items. Results: Cronbach's alpha reliability coefficients were high, whereas test-retest reliabilities were low. Consistent with the original factor structure, eight factors were extracted using exploratory factor analysis. However, the composition of these eight factors was not in full accordance with the priori assignment of items to scales. Seven items violated the clustering and ordering of item mean scores. The association between the identified problems in validity and the change in semantic meanings in the context of the rural lifestyle was established. Conclusions: Quality of life assessment instruments based on urban-living arrangements may not be reliably used with individuals living in rural environments. abstract_id: PUBMED:28482849 Quality of life of depressed and suicidal patients seeking services from traditional and faith healers in rural Kenya. Background: In rural Kenya, traditional and faith healers provide an alternative pathway to health care, including mental health care. However, not much is known about the characteristics of the populations they serve. The purpose of this study was to determine the relationship between depression, suicidal ideation, and socio-demographic variables with Quality of Life (QoL) indicators in a sample seeking mental health services from traditional and faith healers in rural Kenya. Understanding QoL in this sample can help develop mental health policy and training to improve the well-being of this population. Method: This was a cross-sectional epidemiological survey (n = 443) conducted over a period of 3 months among adult patients seeking care from traditional and faith healers in rural Kenya. Data were collected using the Beck Depression Inventory II (BDI-II), Beck Scale for Suicide Ideation (BSS) and WHO Quality of Life Survey- BREF (WHOQOL-BREF), and analyzed using correlation analyses, parametric tests, and regression analyses. Results: Increasing levels of depression were associated with lower QoL among patients seeking care from traditional and faith healers. BSS scores were significantly negatively correlated with overall, physical, psychological, and environmental QoL, p &lt; .05. There was a statistically significant difference between mean scores for overall QoL between depressed (M = 2.35, SD = 0.76) and non-depressed participants (M = 3.03, SD = 0.67), t(441) = 8.899, p &lt; .001. Overall life satisfaction for depressed participants (M = 2.23, SD = 0.69) was significantly lower than non-depressed participants. 
Regression analyses indicated that depression, suicidal ideation, and being married predicted lower overall QoL controlling for other variables. Post hoc tests and subgroup analysis by gender revealed significant differences for females only. Depression, and older age predicted lower life satisfaction whereas being self-employed predicted higher life satisfaction, when controlling for other variables. Conclusion: This study sheds light on correlates of QoL in depressed and non-depressed patients in rural Kenya. Evidence suggests that traditional and faith healers treat patients with a variety of QoL issues. Further research should focus on understanding how these issues tie into QoL, and how these healers can target these to improve care. abstract_id: PUBMED:32724272 LIFESTYLE AND QUALITY OF LIFE IN WORKING-AGE PEOPLE AFTER STROKE. Recommendations for changing one's lifestyle in the aspect of factors that increase the risk of another stroke are often included in the plan of caring for patients after stroke. The style of life is connected to the quality of life and can be formed not only by socialization but also by conscious work on its health-promoting aspect. Lifestyle is a unique configuration of everyday behavior depending mostly on the quality of life available. The aim of the research was to identify the correlation between lifestyle and quality of life in people of working age after stroke. There were 279 patients after first-ever ischemic or hemorrhagic stroke, including 131 women and 148 men. Abbreviated version of the World Health Organization Questionnaire and the Sickness Impact Profile scale were used to examine the quality of life. For assessment of the quality of life, the following indicators were created: lifestyle before stroke and lifestyle after stroke. Less healthy lifestyle before stroke resulted in lower quality of life in the psychological and environmental sphere of life in these people after stroke, especially those having suffered stroke six months to two years before. Better quality of life in people after stroke was found to be connected to a pro-health lifestyle. abstract_id: PUBMED:32477025 Quality of Life: The Primary Goal of Lifestyle Intervention. In much of lifestyle medicine, health-related quality of life (HRQoL) is a primary focus with the assumption that symptom reduction increases quality of life (QoL) in a more global sense. Lifestyle medicine research has shown that reducing symptoms increases the likelihood that QoL is improved. However, little information is available as to the impact of interventions when they are not effective in creating the desired healthy outcomes. It is possible that some lifestyle interventions have a negative impact on QoL, especially when a patient is not "successful" in reducing their symptomatology. Considering QoL from a broader perspective as an outcome in combination with traditional health outcomes may improve provider-patient rapport and empower patients to provide feedback on treatment, which, in turn, may improve overall treatment. abstract_id: PUBMED:30260173 Lifestyle of the elderly living in rural and urban areas measured by the FANTASTIC Life Inventory. Introduction: A healthy lifestyle can help older people maintain their functional ability and, therefore, satisfaction with life. An important aspect of health promotion is the assessment of lifestyle factors over which patients have some control. 
Objective: The aim of this study was to analyze the lifestyle of the elderly in urban and rural areas using the Fantastic Life Inventory (FLI). Material And Methods: The research group comprised 138 people aged 65-94 years (M = 72.41, SD = 6.90). Participants were recruited from urban 78 (56.5%) and rural 60 (43.5%) areas. The FLI used in this study has 25 closed-ended questions that explore nine domains, including physical, psychological, and social lifestyle components. Results: The overall lifestyle of most seniors was 'excellent' (45.7%) or 'very good' (41.3%); none of the respondents scored in the lowest category - 'needs improvement'. The domains that mostly demonstrated the need for a change were related to activity, nutrition, insight, sleep, and stress. Participants from rural areas tended to have lower scores in the activity (p = 0.017) and nutrition (p = 0.015) domains. The lifestyle of the elderly is determined by several factors, including age, gender, health status, level of education, and the place of residence. Conclusions: The majority of older persons demonstrated an 'excellent' or 'very good' lifestyle with healthy habits. The adoption of healthy lifestyle patterns, in particular a regular balanced diet and proper physical activity, can help prevent functional limitations among the elderly in rural areas. abstract_id: PUBMED:37357469 Determination of Quality of Life and Related Factors in Overweight Women Living in Rural Areas. Background: Obesity is more common among women living in rural areas of Türkiye and being overweight affects the quality of life of women. Aim: Through this study, we aimed to determine the quality of life and related factors of overweight women living in rural areas. Materials And Methods: This is a descriptive correlational study. The data were collected using the Personal Information Form and the Scale of Quality-of-Life Overweight Women (SMQLOW) between April and June 2020. The study sample comprised women with a body mass index (BMI) of 25 and above. Results: The study included 201 women; 41.8% of the participants were in the 36-51 age group and 50.2% had a BMI between 30 and 49.2 (obese). It was determined that the quality of life of women was at a moderate level. The differences in the total SMQLOW score with respect to age, educational status, economic status, occupation of the spouse, general health status, and BMI were found to be statistically significant (P < 0.05). We examined the correlation of some variables with the total scale score of overweight women. A significant positive and weak correlation was found between weight and the SMQLOW total score (r = 0.390, P = 0.001). There was a weak but highly significant positive correlation between waist circumference and SMQLOW total score (r = 0.277, P < 0.001). Conclusion: Preventing obesity may improve the quality of life among rural women in the study population. abstract_id: PUBMED:34931879 Factors Affecting Quality of Life among Older Adults with Hypertension in Urban and Rural Areas in Thailand: A Cross-Sectional Study. This study explored factors affecting quality of life in older adults with hypertension by comparing those living in urban and rural areas. A cross-sectional study was conducted on 420 older adults living in urban and rural areas in Thailand. Data were collected using the WHOQOL-OLD and Health-Promoting Lifestyle Profile-II tools, which measured quality of life and health-promoting behaviors among the participants.
Older adults in urban areas had higher quality of life scores than those in rural locations. Health-promoting behaviors significantly predicted higher quality of life for all residents. A high perceived health status predicted an increase in quality of life in urban residents, whereas the presence of comorbidity effects decreased quality of life. A longer hypertension duration predicted higher quality of life in rural residents. These findings suggest that healthy behaviors and self-management interventions are critical to improve quality of life in older Thai adults with hypertension. abstract_id: PUBMED:37248592 Quality of Life and Its Influencing Factors Among Schizophrenia Patients Living in Urban and Rural Areas. Objective: To investigate the status quo of the quality of life of schizophrenia patients in a city in Sichuan Province and to explore, thereof, the urban-rural differences in the factors influencing their quality of life. Methods: A total of 824 schizophrenia patients were selected for the study through a multistage stratified cluster random sampling method. All the subjects were selected from a pool of patients covered by the Sichuan Provincial Information System for the Comprehensive Management of Severe Mental Disorders. Questionnaire surveys were conducted with the Schizophrenia Quality of Life Scale (SQLS), the Social Support Rating Scale (SSRS), the general circumstance questionnaire, and the lifestyle questionnaire. In addition, univariate and multiple linear regression models were used to analyze the influencing factors of quality of life among schizophrenia patients living in urban areas and those in rural areas. Results: Rural patients had poorer quality of life than urban patients did in all measurement domains (P < 0.05). Marital status, vocational skills, physical exercise, and social support were influencing factors of the quality of life among urban patients (P < 0.05). Age, marital status, annual household income, vocational skills, participation in community rehabilitation activities, and the time required to walk to the nearest medical institution were influencing factors of the quality of life among rural patients (P < 0.05). Conclusion: Targeted measures for the enhancement of the quality of life of schizophrenia patients should be formulated on the basis of urban and rural characteristics in terms of economic support, vocational skills training, input in mental health services, community rehabilitation services, and social support. abstract_id: PUBMED:37210076 Healthier Energy Balance Behaviors Most Important for Health-Related Quality of Life in Rural Cancer Survivors in Central Pennsylvania. Background: Rural cancer survivors face a greater number of health disparities, including poorer health-related quality of life (HRQoL), than urban cancer survivors. Engagement in healthy lifestyle behaviors also varies between rural and urban cancer survivors. Lifestyle behaviors can improve HRQoL; however, the combination of behaviors most important for HRQoL in rural survivors is unclear. This study examined clusters of lifestyle behaviors in rural cancer survivors, and differences in HRQoL between behavioral clusters. Methods: Rural cancer survivors in the United States (N = 219) completed a cross-sectional survey.
Lifestyle behaviors were classified into unhealthy/healthy binary categories (inactive/active, longer/shorter sedentary time, excessive/acceptable fat intake, very low/higher fruit and vegetable intake, some/no alcohol consumption, and poor/good sleep quality). Behavioral clusters were identified by latent class analysis. HRQoL differences between behavioral clusters were assessed by ordinary least squares regression. Results: The 2-class model demonstrated the best fit and interpretability. The "mostly unhealthy behaviors" class (38.5% of sample) had higher probabilities of all unhealthy behaviors, except alcohol consumption. The "healthier energy balance" class (61.5% of sample) had higher probabilities of active, shorter sedentary, higher fruit and vegetable consumption, excessive fat intake, some alcohol consumption, and poor sleep categories, and reported better HRQoL. Conclusions: Healthier energy balance behaviors were particularly relevant for HRQoL in rural cancer survivors. Multiple behavior change interventions to improve HRQoL in rural cancer survivors should focus on supporting energy balance behaviors. Many rural cancer survivors may lead very unhealthy lifestyles, placing them at high risk of adverse outcomes. This subpopulation should be prioritized to help alleviate cancer health disparities. abstract_id: PUBMED:34407158 Work conditions and their repercussions on the quality of life of rural workers. Objective: To assess the context of work and analyze their repercussions on the quality of life of rural workers in soybean agribusiness. Method: Cross-sectional study, with an intentional sample of rural workers, conducted between the months of October and December 2019, with the application of the instruments World Health Organization Quality Life-bref and Scales of Assessment of Context and Human Cost of Work. The data were analyzed through bivariate and multivariate descriptive statistics. Results: The participants amounted to 299 rural workers. The scores of Work Conditions and Socioprofessional Relations were satisfactory and Organization of Work was critical. The scores of Physical and Cognitive costs were considered critical and of the Affective Cost was satisfactory. The Socioprofessional Relations have negatively impacted the overall quality of life and the Physical and Psychological, the Social Relations and the Environmental Domains, whereas Cognitive Cost positively impacted the Physical and the Environmental Domain and the Affective Cost positively impacted the Social Relations Domain. Conclusion: Although the Work Context and Human Cost indicated adverse conditions, only the Socioprofessional Relations exerted negative influences on the quality of life of rural workers. Answer: Yes, traditional rural lifestyle can be a barrier for quality of life assessment. The study using the Short Form-36 (SF-36) in a rural Chinese population found that quality of life assessment instruments based on urban-living arrangements may not be reliably used with individuals living in rural environments. The study revealed issues with the validity of the SF-36 when used in a rural context, including low test-retest reliabilities and a factor structure that did not fully accord with the a priori assignment of items to scales. The semantic meanings of items changed in the context of the rural lifestyle, which affected the validity of the assessment (PUBMED:20013158). 
Moreover, the study on the quality of life of depressed and suicidal patients seeking services from traditional and faith healers in rural Kenya highlighted that traditional and faith healers treat patients with a variety of quality of life issues, suggesting that the assessment of quality of life in rural settings may require consideration of different factors and perhaps different measurement tools (PUBMED:28482849). Additionally, lifestyle interventions aimed at improving quality of life need to consider the unique lifestyle configurations and everyday behaviors of individuals, which are influenced by the quality of life available to them. This is particularly relevant for working-age people after stroke, where a less healthy lifestyle before stroke was associated with lower quality of life in the psychological and environmental spheres after stroke (PUBMED:32724272). In summary, traditional rural lifestyle can pose challenges to the assessment of quality of life due to differences in the context, meaning, and lifestyle behaviors compared to urban environments. This necessitates careful consideration and potential adaptation of quality of life assessment tools for use in rural populations.
Instruction: Is there any difference in recurrence rates in laparoscopic ileocolic resection for Crohn's disease compared with conventional surgery? Abstracts: abstract_id: PUBMED:12907912 Comparison of conventional and laparoscopic ileocolic resection for Crohn's disease. Purpose: The purpose of this study was to compare open with laparoscopic ileocolic resection in selected patients with refractory Crohn's disease confined to terminal ileum and cecum in terms of small-bowel obstruction and recurrence rates at a follow-up of five years. Methods: Thirty-nine patients who underwent laparoscopic ileocolic resection during a four-year period were compared with 53 patients who had previously undergone open ileocolic resection by the same surgeons at the same institution. Small-bowel obstruction was any clinical evidence of obstruction requiring hospital admission, regardless of recurrent disease or the need for surgery. Recurrence was defined as histologically proven Crohn's disease requiring reoperation and assessed by the actuarial method. Data were presented as medians and ranges and analyzed with Fisher's exact test, Student's t-test, Wilcoxon's rank-sum test, and log-rank test. Results: Open ileocolic resection and laparoscopic ileocolic resection patients were well matched for age, gender, body mass index, American Society of Anesthesiologists grade, smoking, length of time from diagnosis to index operation, site of disease, preoperative medical treatment, previous abdominal surgery, indications for surgery, anastomosis configuration, microscopical involvement of resection margins, and postoperative complications (9.4 vs. 10.2 percent). Operating time was longer for laparoscopic ileocolic resection patients (105 (60-120) vs. 185 (130-210) min, P &lt; 0.001). Hospital stay was shorter for laparoscopic ileocolic resection patients (11.2 (5-20) vs. 5.6 (3-11) days, P &lt; 0.001). At five-year follow-up, 90.5 percent of open ileocolic resection patients and 92.3 percent of laparoscopic ileocolic resection patients were available. Five-year small-bowel obstruction rates were 35.4 and 11.1 percent, respectively, in open ileocolic resection and laparoscopic ileocolic resection patients (17 vs. 4 patients, P = 0.02). Five-year-recurrence rates were 29.1 (14/48) and 27.7 percent (10/36). Median time to recurrence was 48 and 56 months, respectively, following open ileocolic resection and laparoscopic ileocolic resection (P = 0.9104). Conclusion: When compared with open ileocolic resection, laparoscopic ileocolic resection led to lower five-year small-bowel obstruction rates in selected patients with ileocecal Crohn's disease. Five-year recurrence rates did not differ. abstract_id: PUBMED:16328612 Is there any difference in recurrence rates in laparoscopic ileocolic resection for Crohn's disease compared with conventional surgery? A long-term, follow-up study. Purpose: The long-term outcome of laparoscopic ileocolic resection in patients with Crohn's disease is not well defined. This study was designed to define the surgical recurrence rate after laparoscopic ileocolic resection for Crohn's disease and to compare it with that seen after open ileocolic resection. Methods: A retrospective review of 113 records of patients who underwent index ileocolic resection for terminal ileal Crohn's disease was performed (1987-2003). Recurrence was defined as development of new preanastomotic Crohn's disease requiring surgical intervention. 
Details of recurrence and use of chemoprophylaxis was determined by phone interview and chart review. Results: Sixty-three patients (26 males; mean age, 35.2 years) underwent laparoscopic ileocolic resection and 50 had open ileocolic resection (17 males; mean age, 37.1 years). Surgical recurrence developed in 6 of 63 patients (9.5 percent) in the laparoscopic ileocolic resection group (mean follow-up, 62.9 months) and in 12 of 50 patients (24 percent) in the open ileocolic resection group (mean follow-up, 81.8 months). Rates of chemoprophylaxis were similar between groups (laparoscopic ileocolic resection, 39 percent; open ileocolic resection, 54 percent; P = not significant). Median times to recurrence after laparoscopic ileocolic resection and open ileocolic resection were 60 (range, 36-72) months and 62 (range, 12-180) months, respectively. Fifty percent of the recurrences in the laparoscopic ileocolic resection group and 4 of 12 in the open ileocolic resection group were able to be retreated laparoscopically. Re-recurrence occurred in 4 of 12 open ileocolic resection patients (33 percent) at a mean of 63.6 months, and one patient had a third recurrence at 28 months. Conclusions: In this study, the long-term outcome after laparoscopic ileocolic resection was not shown to be statistically different from that of open ileocolic resection. The relatively low recurrence rates in both groups may be explained by our aggressive use of chemoprophylaxis. abstract_id: PUBMED:19404070 Laparoscopic intracorporeal ileocolic resection for Crohn's disease: is it safe? Purpose: The aim of this study was to assess the impact of laparoscopic ileocolic resection with intracorporeal vascular division and anastomosis on the outcome of patients with terminal ileal Crohn's disease. Methods: Prospective data on patients undergoing laparoscopic ileocolic resection for Crohn's disease confined to terminal ileum and cecum with or without fistulas were reviewed. Exclusion criteria were frozen abdomen, recurrent Crohn's disease following resection, and perforated Crohn's disease. Laparoscopic ileocolic resection involved a lateral-to-medial approach encompassing ten sequential steps. Values were medians (range). Results: From January 1992 to June 2006, 80 laparoscopic ileocolic resections were attempted with a 1.2 percent conversion rate. Sixty-two women and 18 men, age 40 (19-55) years, had a body mass index of 26 (18-37) and an American Society of Anesthesiologists' score of 1 (1-3), and 23.7 percent had previously undergone abdominal surgery. Operating time was 155 (130-210) minutes. Estimated blood loss was 250 (50-600) ml. Length of the skin incision at the specimen extraction site was 35 (30-44) mm. The complication/reoperation rate was 7.5 percent. The readmission rate was 3.7 percent. Except for smoking (P &lt; 0.005), there were no significant differences between patients with and those without complications. The recurrence rate was 30 percent (24 of 80). The median time to recurrence was 64 months. Conclusion: Laparoscopic ileocolic resection with intracorporeal vascular division and anastomosis resulted in a favorable outcome in selected patients with refractory terminal ileal Crohn's disease. abstract_id: PUBMED:21087388 Laparoscopic surgery for recurrent ileocolic Crohn's disease is as safe and effective as primary resection. Aim: The safety and short-term outcome of laparoscopic surgery for recurrent ileocolic Crohn's disease was compared with the outcome following primary resection. 
Method: Between June 2002 and June 2010, 59 consecutive unselected patients (30 of whom had recurrent disease) underwent laparoscopic ileocolic resection. Four primary resections and one revision were performed as a single incision laparoscopic surgery (SILS) procedure. Results: There was no difference between the two groups in terms of age, body mass index, American Society of Anesthesiology (ASA) grade or the presence or absence of fistulating disease. The median operating time was significantly longer for the revision group (125 min vs 85 min; P &lt; 0.001). The rate of conversion was 8.5%, morbidity was 20% and mortality was 0% (P = not significant between groups). Risk factors for conversion included a complex fistula, fibrosis and the need to carry out multiple stricturoplasty. Patients in whom surgery was converted had a longer hospital stay and a higher morbidity (40%). The median hospital stay was 3 days, the return to theatre rate was 5% and the re-admission rate was 5% (P = not significant between groups). Conclusion: Laparoscopic surgery for recurrent ileocolic Crohn's disease is safe and can lead to significant short-term benefit, including earlier discharge. Conversion increases the length of stay in hospital and the overall morbidity. abstract_id: PUBMED:33710102 Totally Laparoscopic Ileocolic Resection for Complex Enterovisceral Fistulas in Crohn's Disease: A Comparative Study. Introduction: In primary Crohn's disease (CD), laparoscopic ileocolic resection has been shown to be both feasible and safe, and is associated with improved outcomes in terms of postoperative morbidity and length of hospital stay. However, it is unclear whether the laparoscopic approach can be routinely proposed as a safe procedure for patients with complex enterovisceral fistulas.The aim of this study is to assess the feasibility and safety of laparoscopic surgery for complex enterovisceral fistulas, and compare it with CD patients who underwent primary laparoscopic ileocolic resection. Patients And Methods: All patients who underwent laparoscopic primary ileocolic resection (LICR) for complex enterovisceral fistulas between July 2006 and July 2017 were included. They were compared with all consecutive patients who underwent LICR for nonfistulizing CD in the same period of time. Patients with previous bowel resections or recurrent disease were excluded. Results: Nineteen patients with 20 enterovisceral fistulas (group I) were compared with 61 patients who underwent LICR for nonfistulizing disease (group II). There were no differences between the groups in age, sex, preoperative body mass index, nutritional status, and American Society of Anesthesiology score. There was no conversion to open surgery in both groups.There were no significant differences between groups in terms of operative time [120 (range: 65 to 232) vs. 117 (range: 62 to 217) min, P=0.7], hospital stay [6 (5 to 8) vs. 7 (5 to 65) days, P=0.56], overall morbidity 26.3% versus 16.4% (P=0.33), major morbidity (Clavien-Dindo &gt;3) 15.7% versus 10% (P=0.66) and reoperation rates 5.3% versus 4.9% (P=0.9). There was no mortality in both groups. Conclusions: Our experience shows that the laparoscopic approach for complex enterovisceral fistulas in selected CD patients is both feasible and safe in the hands of experienced inflammatory bowel disease surgeons with extensive expertise in laparoscopic surgery. Larger study cohorts are needed to confirm these findings. 
abstract_id: PUBMED:34977147 Surgical Strategies to Reduce Postoperative Recurrence of Crohn's Disease After Ileocolic Resection. Postoperative recurrence after ileocaecal resection for fibrostenotic terminal ileal Crohn's disease is a significant issue for patients as it can result in symptom recurrence and requirement for further surgery. There are very few modifiable factors, aside from smoking cessation, that can reduce the risk of postoperative recurrence. Until relatively recently, the surgical technique used for resection and anastomosis had little or no impact on postoperative recurrence rates. Novel surgical techniques such as the Kono-S anastomosis and extended mesenteric excision have shown promise as ways to reduce postoperative recurrence rates. This manuscript will review and discuss the evidence regarding a range of surgical techniques and their potential role in reducing disease recurrence. Some of the techniques have been shown to be associated with significant benefits for patients and have already been integrated into the routine clinical practice of some surgeons, while other techniques remain under investigation. Current techniques such as resection of the mesentery close to the intestine and stapled side to side anastomosis are being challenged. It is looking more likely that surgeons will have a major role to play when it comes to reducing recurrence rates for patients undergoing ileocaecal resection for Crohn's disease. abstract_id: PUBMED:32085793 Mesenteric excision surgery or conservative limited resection in Crohn's disease: study protocol for an international, multicenter, randomized controlled trial. Background: The structures of the mesentery including adipose tissue, nerves, and lymphatics play an important role in the pathogenesis and disease progression of Crohn's disease (CD). Conventional surgical resection for CD usually does not involve resecting the mesentery en bloc with the specimen. This contrasts with complete mesocolic excision (CME) in colorectal cancer, which involves radical resection of the mesentery. Preliminary evidence from smaller studies suggests that applying the principle of mesocolic excision to CD surgery may reduce the risk of postoperative recurrence. This randomized controlled trial is designed to test whether applying the principles of mesocolic excision to CD results in reduced postoperative recurrence. It also aims to evaluate intra- and postoperative morbidity between the two approaches. Methods: This international, multicenter, randomized controlled trial will randomize patients (n = 116) scheduled to undergo primary ileocolic resection to either receive extensive mesenteric excision (EME) or conventional ileocolic resection with limited mesenteric excision (LME). Five sites will recruit patients in three countries. In the EME group, the mesentery is resected following CME, while avoiding the root region, i.e., 1 cm from the root of the ileocolic artery and vein. In the LME group, the mesentery is retained, i.e., "close shave" or &lt; 3 cm from the border of bowel. The primary end point will be surgical recurrence after surgery. The secondary end points will be the postoperative endoscopic and clinical recurrence, and intra- and postoperative morbidity. Demographics, risk factors, laboratory investigations, endoscopy, postoperative prophylaxis and imaging examination will be assessed. Analysis of the primary outcome will be on an intention-to-treat basis. 
Discussion: If mesocolic excision in CD reduces postoperative disease recurrence and does not increase morbidity, this trial has the potential to change practice and reduce recurrence of CD after surgical resection. Trial Registration: ClinicalTrials.gov, ID: NCT03769922. Registered on February 27, 2019. abstract_id: PUBMED:34420112 Surgical features to reduce anastomotic recurrence of Crohn's disease that requires reoperation: a systematic review. Anastomotic recurrence after intestinal resection is one of the main embarrassing problems encountered during Crohn's disease (CD) management. This complication is often associated with an expected consequence, mainly a multiple intestinal resection. This systematic review evaluates published evidence on surgical features to reduce surgical recurrence after bowel resection and provide surgeons with recommendations based on published evidence. We conducted bibliographic research on September 05, 2020, through PubMed, the Cochrane database, and Google Scholar. We retained meta-analyses, randomized clinical trials, and controlled clinical trials. The strength of clinical data and subsequent recommendations were graded according to the Oxford Centre for Evidence-Based Medicine. Fourteen articles were retained. Early resection reduces surgical relapse and the need for additional medical treatment. There was no difference between conventional and laparoscopic bowel resection. Mesenteric excision seems to reduce surgical recurrence. Stapled side-to-side anastomosis reduces surgical recurrence. However, no difference was observed for the Kono-S anastomosis, which ensured at least a similar anastomotic recurrence rate compared to conventional anastomosis. Surgical recurrence was reduced after bowel resection when compared to stricturoplasty. There was no difference between one- and two-step resections. Several surgical features have been investigated. Some of them were unanimously found to be effective in lengthening the disease-free interval. However, others are still controversial. abstract_id: PUBMED:33355687 Single-incision laparoscopic surgery (SILS) for the treatment of ileocolonic Crohn's disease: a propensity score-matched analysis. Introduction: Single-incision laparoscopic surgery (SILS) aims to minimize the surgical access trauma by reducing the number of abdominal incisions to a single site, potentially offering better cosmetic results and decreased postoperative pain. In this study, we compare the results of SILS ileocolic resection for Crohn's disease (CD) to conventional laparoscopy and open surgery using a propensity score-matched analysis in a retrospective national multicentre study. Methods: All consecutive patients undergoing elective SILS ileocaecal or redo ileocolic resection for primary and recurrent CD from 1 June 2018 to 31 May 2019 were included. Patients were matched 1:1:1 with laparoscopy and open surgery according to perianal disease, recurrent disease, penetrating phenotype of CD, history of previous abdominal surgery, preoperative medical treatment with steroids and anti-TNF. Postoperative morbidity within 30 days of surgery was the primary endpoint. Results: Fifty-eight patients were included in each group, for a total of 174 patients. The conversion rate for SILS and laparoscopy was 10.3% and 12%, respectively, with no difference in the incidence of postoperative complications (13.8% and 12%, p = 0.77), whilst open surgery demonstrated a worse morbidity profile, with a complication rate of 25.9% (p < 0.0001).
Median length of hospital stay following SILS ileocolic resection was 5 days, significantly shorter compared to 7 days for laparoscopy and 9 for open surgery (p < 0.0001). Conclusions: SILS ileocolonic resection for CD demonstrated a comparable morbidity profile compared to laparoscopy in selected patients, with a reduced length of postoperative hospital stay. abstract_id: PUBMED:37635292 Long-term outcomes following ileocolic resection for Crohn's disease: does earlier elective resection affect disease recurrence rates? Background: Surgical resection in Crohn's disease is sometimes the only alternative for treating disease complications or refractory disease. The implications of early resection on disease course are still debatable. We aimed to assess the influence of preoperative disease duration on long-term postoperative disease course. Methods: A retrospective analysis of all Crohn's disease patients who underwent an elective primary ileocolic resection between 2010 and 2021 in a single tertiary medical center. The cohort was divided based on disease duration: Group A (47 patients) had a disease duration shorter than 3 years (median of 1 year) and Group B (139 patients) had a disease duration longer than 3 years (median of 11 years). Results: Surgeries were less complex among Group A, as noted by higher rates of laparoscopic-assisted procedures (68.1% vs. 45.3%, P = 0.006), shorter surgery duration (134 vs. 167 min, P < 0.0001), less estimated blood loss (72.5 vs. 333 mL, P = 0.016) and faster return of bowel function (3 vs. 4 days, P = 0.011). However, propensity score matching nullified all the differences.
Furthermore, a study comparing laparoscopic surgery for recurrent ileocolic Crohn's disease to primary resection found that laparoscopic surgery is safe and effective for recurrent disease, with similar short-term outcomes to primary resection (PUBMED:21087388). Another study found that laparoscopic surgery for complex enterovisceral fistulas in Crohn's disease is feasible and safe, with no significant differences in operative time, hospital stay, overall morbidity, major morbidity, and reoperation rates when compared to primary laparoscopic ileocolic resection for nonfistulizing disease (PUBMED:33710102). In summary, the recurrence rates for laparoscopic ileocolic resection in Crohn's disease are comparable to those of conventional open surgery, and laparoscopic surgery is considered a safe and effective option for both primary and recurrent disease.
Instruction: Location, location, location: does early cancer in Barrett's esophagus have a preference? Abstracts: abstract_id: PUBMED:23622975 Location, location, location: does early cancer in Barrett's esophagus have a preference? Background: Early cancer (high-grade dysplasia [HGD] and intramucosal carcinoma [ImCa]) associated with Barrett's esophagus (BE) may have a circumferential spatial predilection. Objective: To describe the esophageal circumferential location of early cancer in BE. Design And Setting: Retrospective study, single tertiary referral center. Patients And Intervention: One hundred nineteen patients were referred for endoscopic eradication therapy for early cancer associated with BE. Endoscopic images and reports and pathology were reviewed. Main Outcome Measurements: Circumferential location designation of early cancer in BE by using a clock-face orientation. Results: One hundred nineteen of 131 patients referred for endoscopic eradication therapy had a location designation for their advanced histology (91.9%). There were a total of 57 patients (47.9%) with HGD and 62 patients (52.1%) with ImCa. There was a significantly higher rate of early cancer (HGD or ImCa) in the right hemisphere (12 to 6 o'clock location) compared with the left hemisphere (84.9% vs 15.1%, P < .0001). The highest percentage of early cancer was found in the 12 to 3 o'clock quadrant (64.7%); 71.9% of HGD and 58.1% of ImCa lesions were located in the 12 to 3 o'clock quadrant. Limitations: Retrospective design, single center. Conclusions: Early cancer associated with BE is far more commonly found in the right hemisphere of the esophagus (12 to 6 o'clock) with the highest rate in the 12 to 3 o'clock quadrant. These findings support enhanced scrutiny of the right hemisphere of the esophagus during surveillance and endoscopic treatment of patients with BE. abstract_id: PUBMED:9492855 Racial differences in the histology, location, and risk factors of esophageal cancer. Although esophageal cancer is uncommon in the United States, its high mortality rate and recent increased incidence make it an important malignancy. Because there appears to be significant racial variation in the types of esophageal cancer, we examined a group of black patients with esophageal cancer and compared their risk factors, histology, and location with those of a cohort of white patients with esophageal cancer seen during the same period. We retrospectively reviewed patients with esophageal cancer seen at three major hospitals in Atlanta, Georgia from January 1990 to April 1996. Patients of races other than white or black were excluded from this study. The esophagus was separated into upper, middle, and lower thirds by defined criteria. Of the eligible 234 patients, 129 were black and 105 were white. In blacks with esophageal cancer, squamous cell cancer was the predominant type (92%), and adenocarcinoma was infrequent. In whites, adenocarcinoma was more common than squamous cell cancer (66% vs. 32%). Although Barrett's esophagus was distinctly uncommon, smoking and alcohol consumption were significantly more common in blacks. Only 43% of the patients with adenocarcinoma had evidence of Barrett's esophagus. All adenocarcinomas were located in the lower third of the esophagus. There appear to be significant racial differences in the types, risk factors, and location of esophageal cancer. Adenocarcinoma and Barrett's esophagus are uncommon in blacks.
abstract_id: PUBMED:22381529 Circumferential location predicts the risk of high-grade dysplasia and early adenocarcinoma in short-segment Barrett's esophagus. Background: Whether early Barrett's neoplasia has a predilection for particular spatial locations in shorter segment disease is currently unknown. Anatomic factors may play a role in lesion location because of differing levels of mucosal acid exposure. Objective: To identify high-risk lesion locations, which has important implications for surveillance strategies. Design: We interrogated a prospectively maintained database of patients who underwent endoscopic resection (ER) for Barrett's neoplasia at 2 Australian tertiary centers. Lesions targeted for ER were characterized and their location in the distal esophagus was noted as on a clock face. A Z test of proportions was used to test for deviation from uniformity in the distribution of lesions. Setting: Two Australian tertiary centers. Patients: Patients who underwent ER for Barrett's neoplasia. Main Outcome Measurements: Lesion location in the distal oesophagus, resected specimen histology. Results: A total of 146 consecutive patients had ER for biopsy-proven high-grade dysplasia or esophageal adenocarcinoma. A total of 75 patients had Barrett's segment length of 5 cm or less and a visible lesion. Five patients had 2 visible lesions giving a total of 80 lesions. ER of 66 lesions (82.5%) led to the identification of advanced pathology: 37 high-grade dysplasia (46%), 24 mucosal adenocarcinoma (30%), 5 submucosal adenocarcinoma (6%). Of a total of 80 lesions, 43 (53.8%) (95% CI, 42.9%-64.7%) were centered within the 2- to 5-o'clock arc, comprising 25% of the circumference. This area also accounted for 36 (54.5%) of the 66 lesions with advanced histology (95% CI, 42.5%-66.5%). All confidence intervals lie wholly above the 25% expected in a uniform circular distribution (P &lt; .05). Limitations: Observational study in a tertiary center. Conclusions: In Barrett's maximal segments of 5 cm or less, the 2- to 5-o'clock arc, accounts for approximately 50% of macroscopically visible lesions and associated early neoplasia. This finding has important implications for surveillance strategies. abstract_id: PUBMED:34362152 Accuracy of Preoperative Endoscopy in Determining Tumor Location Required for Surgical Planning for Esophagogastric Junction Cancer. Purpose: The surgical strategy for esophagogastric junction (EGJ) cancer depends on the tumor location as measured relative to the EGJ line. The purpose of this study was to clarify the accuracy of diagnostic endoscopy in different clinicopathological backgrounds. Methods: Subjects were 74 consecutive patients with abdominal esophagus to upper gastric cancer who underwent surgical resection. Image-enhanced endoscopy with narrow-band imaging (NBI) was used to determine the EGJ line, prioritizing the presence of palisade vessels, followed by the upper end of gastric folds, as a landmark. The relative positional relationship between the tumor epicenter and the EGJ line was classified into six categories, and the agreement between endoscopic and pathologic diagnoses was examined to evaluate prediction accuracy. Results: The concordance rate of 69 eligible cases was 87% with a kappa coefficient (K) of 0.81. The palisade vessels were observed in 62/69 patients (89.9%). 
Of the 37 pathological EGJ cancers centered within 2 cm above and below the EGJ line, Barrett's esophagus was found to be a significant risk factor for discordance (risk ratio, 4.40; p = 0.042); the concordance rate of 60% (K = 0.50) in the Barrett's esophagus group was lower than the rate of 91% (K = 0.84) in the non-Barrett's esophagus group. In five of six discordant cases, the EGJ line was estimated to be proximal to the actual line. Conclusion: Diagnostic endoscopy is beneficial for estimating the location of EGJ cancer, with a risk of underestimating esophageal invasion length in patients with Barrett's esophagus. abstract_id: PUBMED:17611912 Prospective evaluation of the macroscopic types and location of early Barrett's neoplasia in 380 lesions. Background And Study Aims: The macroscopic appearance of early gastric cancers, classified according to the Japanese criteria, has been shown to be an important prognostic factor for local endoscopic therapy. No prospective data about the distribution of macroscopic types and their location in early Barrett's neoplasia are available, however. The present study was conducted to evaluate the clinical applicability of this macroscopic classification and to analyze the relative proportions of the different gross types in early Barrett's neoplasms and the correlation between the macroscopic classification and the stage or grade of differentiation. Patients And Methods: A total of 344 patients with 380 Barrett's neoplastic lesions who were referred between October 1996 and September 2005 for endoscopic therapy of early Barrett's high-grade intraepithelial neoplasia and carcinoma were prospectively included in the study. Routine endoscopy prior to endoscopic resection in our center included assessment of the macroscopic type (according to the Japanese classification) and documentation of the radial location of the neoplastic lesions. Images were recorded which were later assessed by six independent reviewers; intra- and interobserver agreement for the assessment of the macroscopic type were calculated using kappa statistics. Results: The distribution of the lesions by gross type was as follows: type I, n = 49 (13 %); type IIa, n = 139 (37 %); type IIb, n = 106 (28 %); type IIc, n = 17 (4 %); type IIa + c, n = 62 (16 %); type III, n = 7 (2 %). Type IIb lesions seem to be the most favorable type with regard to differentiation and T category ( P &lt; 0.05). The mean kappa value for the interobserver agreement was 0.86 and the mean kappa value for the intraobserver agreement was 0.89. Most lesions were found at the 12 o'clock and 3 o'clock positions. Conclusions: Assessment of the macroscopic type may provide important information about the possibility of endoscopic treatment. The harder-to-detect flat lesions are by far the most frequent macroscopic type of neoplastic lesion in Barrett's esophagus. abstract_id: PUBMED:29936984 Exposure to bile acids alters the intracellular location and function of MnSOD in Barrett's esophagus. Background: Oxidative stress secondary to bile-acid exposure has been associated with metaplastic degeneration of normal esophageal mucosa into Barrett's esophagus (BE) cells and eventually esophageal adenocarcinoma. We previously reported that the macromolecular response of BE cells to this stress was largely regulated by the expression of manganese-dependent mitochondrial superoxide dismutase (MnSOD). 
As the mitochondrion plays a vital role in MnSOD activation, this study sought to determine the location and activity of MnSOD within BE cells after exposure to oxidative stress. Methods: A human BE cell line, BAR-T cell, was exposed 0.4 mM concentrations of taurocholic acid (Tau) or a 0.4 mM 1:1 mixture of bile salts for 4 h. Cell viability was performed with 3-(4, 5-dimthyl-thiazol-2-yl)-2, 5-diphenyltetrazolium bromide assays. Proteins were extracted and separated into mitochondrial, nuclear, and cytoplasmic fractions followed by analysis by a western blot and enzymatic activities. Results: BAR-T cell showed resistance to the bile-salt insults. Expression of MnSOD was significantly increased in the cells exposed to a mixture of bile acids and Tau versus control. Mitochondria MnSOD is abundant and highly active. Nuclear fraction displayed presence of both MnSOD and Cu/zinc superoxide dismutase secondary to bile-acid exposure; however, the MnSOD was inactive in nuclear fraction. Conclusions: This is the first study to specifically evaluate cellular fraction MnSOD expression, increased in BE cells in response to the oxidative stress of bile exposure. Mitochondrial MnSOD contributes to resistance of BAR-T cells to the bile-salt insults. Further investigation is required to determine the potential correlation between bile exposure and BE to adenocarcinoma progression via MnSOD-mediated cell signaling. abstract_id: PUBMED:30635408 Timeline and location of recurrence following successful ablation in Barrett's oesophagus: an international multicentre study. Objective: Surveillance interval protocols after complete remission of intestinal metaplasia (CRIM) post radiofrequency ablation (RFA) in Barrett's oesophagus (BE) are currently empiric and not based on substantial evidence. We aimed to assess the timeline, location and patterns of recurrence following CRIM to inform these guidelines. Design: Data on patients undergoing RFA for BE were obtained from prospectively maintained databases of five (three USA and two UK) tertiary referral centres. RFA was performed until CRIM was confirmed on two consecutive endoscopies. Results: 594 patients achieved CRIM as of 1 May 2017. 151 subjects developed recurrent BE over a median (IQR) follow-up of 2.8 (1.4-4.4) years. There was 19% cumulative recurrence risk of any BE within 2 years and an additional 49% risk over the next 8.6 years. There was no evidence of a clinically meaningful change in the recurrence hazard rate of any BE, dysplastic BE or high-grade dysplasia/cancer over the duration of follow-up, with an estimated 2% (95% CI -7% to 12%) change in recurrence rate of any BE in a doubling of follow-up time. 74% of BE recurrences developed at the gastro-oesophageal junction (GOJ) (24.1% were dysplastic) and 26% in the tubular oesophagus. The yield of random biopsies from the tubular oesophagus, in the absence of visible lesions, was 1% (BE) and 0.2% (dysplasia). Conclusions: BE recurrence risk following CRIM remained constant over time, suggesting that lengthening of follow-up intervals, at least in the first 5 years after CRIM, may not be advisable. Sampling the GOJ is critical to detecting recurrence. The requirement for random biopsies of the neosquamous epithelium in the absence of visible lesions may need to be re-evaluated. abstract_id: PUBMED:37301519 Location and appearance of dysplastic Barrett's esophagus recurrence after endoscopic eradication therapy: no additional yield from random biopsy sampling neosquamous mucosa. 
Background And Aims: Surveillance after complete remission of intestinal metaplasia (CRIM) is essential. Current recommendations are to sample visible lesions first, followed by random 4-quadrant biopsy sampling of the original Barrett's esophagus (BE) length. To inform post-CRIM surveillance protocols, we aimed to identify the anatomic location, appearance, and histology of BE recurrences. Methods: We performed an analysis of 216 patients who achieved CRIM after endoscopic eradication therapy for dysplastic BE at a Barrett's Referral Unit between 2008 and 2021. The anatomic location, recurrence histology, and endoscopic appearance of dysplastic recurrences were evaluated. Results: After a median of 5.5 years (interquartile range, 2.9-7.2) of follow-up after CRIM, 57 patients (26.4%) developed nondysplastic BE (NDBE) recurrence and 18 patients (8.3%) developed dysplastic recurrence. From 8158 routine surveillance biopsy samplings of normal-appearing tubular esophageal neosquamous epithelium, the yield for recurrent NDBE or dysplasia was 0%. One hundred percent of dysplastic tubular esophageal recurrences were visible and in BE islands, whereas 77.8% of gastroesophageal junction dysplastic recurrences were nonvisible. Four distinct endoscopic features suspicious for recurrent advanced dysplasia or neoplasia were identified: buried or subsquamous BE, irregular mucosal pattern, loss of vascular pattern, and nodularity or depression. Conclusions: The yield of routine surveillance biopsy sampling of normal-appearing tubular esophageal neosquamous epithelium was zero. BE islands with indistinct mucosal or loss of vascular pattern, nodularity or depression, and/or signs of buried BE should raise clinician suspicion for advanced dysplasia or neoplasia recurrence. We suggest a new surveillance biopsy sampling protocol with a focus on meticulous inspection, followed by targeted biopsy sampling of visible lesions and random 4-quadrant biopsy sampling of the gastroesophageal junction. abstract_id: PUBMED:27436487 Barrett's oesophagus and associated dysplasia are not equally distributed within the esophageal circumference. Background: A careful endoscopic surveillance of Barrett's oesophagus is warranted to prevent esophageal cancer. Aim: To identify the preferred location of non-circumferential Barrett's oesophagus and associated dysplasia within the esophageal circumference. Methods: We retrospectively reviewed a prospectively maintained database of patients with non-circumferential lesions. The location of metaplastic lesions and dysplastic lesions within the esophageal circumference was identified as on a clock face, and their distribution in the 4 quadrants was compared. Results: Of overall 443 patients with Barrett's oesophagus, 192 (43%) were eligible for our study. Multiple lesions were diagnosed in 110 (57%) of them, for a total amount of 352 metaplastic areas. Barrett's oesophagus lesions were located significantly more in the posterior wall of the oesophagus (38.4%), rather than in the right wall (28.8%), the anterior wall (22.6%), or the left wall (10.2%) (P&lt;0.0001). Among all metaplastic lesions, 28 were associated with dysplasia (7.9%), and one with adenocarcinoma (0.3%). Dysplastic lesions were significantly more common in the posterior wall (39.3%) than, respectively, in the anterior wall (35.8%), the right wall (21.4%) or the left wall (3.5%) (P=0.03). 
Conclusion: Our results show that the posterior wall of the oesophagus is the preferential location of both Barrett's oesophagus and associated dysplasia. abstract_id: PUBMED:10379537 Therapeutic strategy for adenocarcinoma in Barrett's esophagus: a study based on a comparison with squamous cell carcinoma The therapeutic strategy for adenocarcinoma in Barrett's esophagus is discussed based on a comparison with squamous cell carcinoma. The pattern and range of lymph node metastasis of adenocarcinoma in Barrett's esophagus is similar to that of squamous cell carcinoma, as is the pattern of recurrence. Chemotherapy is less effective, except for some reports on paclitaxel, but chemoradiation therapy is comparable in effect. The general tendency of a better prognosis for patients with carcinoma in Barrett's esophagus is the result of several factors, such as earlier detection of the disease, lower probability of lymph node metastasis, and distal location which permits less radical curative surgical procedures without cervical and superior mediastinal lymph node dissection. In principal the therapeutic strategy for adenocarcinoma in Barrett's esophagus should be determined like that for squamous cell carcinoma, taking into account the location of the lesion and the depth of invasion. Answer: Yes, early cancer in Barrett's esophagus (BE) does appear to have a preference for certain locations within the esophagus. Studies have shown that early cancer associated with BE, including high-grade dysplasia (HGD) and intramucosal carcinoma (ImCa), is more commonly found in the right hemisphere of the esophagus, specifically in the 12 to 6 o'clock location, with the highest rate in the 12 to 3 o'clock quadrant (PUBMED:23622975). This finding supports the need for enhanced scrutiny of the right hemisphere of the esophagus during surveillance and endoscopic treatment of patients with BE. Additionally, in short-segment Barrett's esophagus, the 2- to 5-o'clock arc accounts for approximately 50% of macroscopically visible lesions and associated early neoplasia (PUBMED:22381529). This has important implications for surveillance strategies, suggesting that particular attention should be paid to this area during endoscopic examinations. Furthermore, the posterior wall of the esophagus has been identified as the preferential location for both Barrett's esophagus lesions and associated dysplasia, with dysplastic lesions being significantly more common in the posterior wall compared to other walls of the esophagus (PUBMED:27436487). These findings highlight the importance of targeted surveillance and biopsy strategies that focus on the areas of the esophagus where early cancer and dysplasia are most likely to occur in patients with Barrett's esophagus.
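The abstracts above report the circumferential clustering only as proportions and a "Z test of proportions" (PUBMED:22381529), without giving the calculation itself. As a rough illustration of that arithmetic, the Python sketch below applies a standard one-sample proportion z-test to the figure quoted in the abstract (43 of 80 lesions centered in the 2- to 5-o'clock arc, tested against the 25% of the circumference that the arc covers). The exact procedure the authors used is not stated, so this is only an assumed standard implementation, and the function names are editorial.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def arc_uniformity_test(k: int, n: int, p0: float = 0.25):
    """One-sample z test of an observed proportion against the share of the
    circumference (p0) the arc would hold under a uniform distribution."""
    p_hat = k / n
    z = (p_hat - p0) / math.sqrt(p0 * (1.0 - p0) / n)   # standard error under the null
    p_value = 2.0 * (1.0 - norm_cdf(abs(z)))            # two-sided p value
    se_obs = math.sqrt(p_hat * (1.0 - p_hat) / n)       # Wald 95% CI around the observed share
    ci = (p_hat - 1.96 * se_obs, p_hat + 1.96 * se_obs)
    return p_hat, z, p_value, ci

# 43 of 80 visible lesions centred in the 2- to 5-o'clock arc (25% of the clock face)
p_hat, z, p, ci = arc_uniformity_test(43, 80)
print(f"observed {p_hat:.1%}, z = {z:.2f}, p = {p:.1e}, 95% CI {ci[0]:.1%}-{ci[1]:.1%}")
```

Run as-is, the Wald interval comes out at roughly 42.9%-64.7%, matching the confidence interval quoted in the abstract, and the z statistic is far above the 1.96 needed for P < .05.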
Instruction: Can cardiovascular risk be predicted by newborn, childhood, and adolescent body size? Abstracts: abstract_id: PUBMED:9470007 Can cardiovascular risk be predicted by newborn, childhood, and adolescent body size? An examination of longitudinal data in urban African Americans. Objective: Recent retrospective studies of older adults have demonstrated a correlation between lower birth weight and hypertension and insulin resistance. We tested this finding in our sample of urban African Americans with prospective data on growth and blood pressure and also tested other variables (in addition to birth weight) for their relationship to adult cardiovascular risk. Study Design: A prospective study of birth weight, growth, and blood pressure (Philadelphia Perinatal Collaborative Project) followed a sample of 137 African Americans, with nine examinations from birth through 28.0 +/- 2.7 years. Metabolic measurements (oral glucose tolerance testing, euglycemic hyperinsulinemic clamp, and plasma lipid concentration) were performed on the subjects as adults. Bivariate correlations among parameters were computed using the Pearson r. The chi-squared statistic was used to determine associations of outcomes with birth weight. Stepwise multiple linear regressions were computed using newborn, early childhood, adolescent, and young adult parameters to predict adult outcomes. Results: Birth weight and blood pressure at age 28 years are not correlated (Pearson r = 0.06). Birth weight is also unrelated to adult obesity. However, weight at 0.3 years and after and body mass index at 7 years and after are correlated with adult weight. Furthermore, weight at age 14 years is significantly negatively correlated with measures of insulin-stimulated glucose use, indicating that obese adolescents may be at greater risk than nonobese adolescents for development of non-insulin-dependent diabetes in adulthood. Conclusions: We found no relationship between birth weight and adult outcomes pertaining to cardiovascular risk in this sample of adult African Americans. However, we did find evidence that somatic growth (body weight and body mass index) is significantly related to obesity and attenuated insulin-stimulated glucose utilization in adulthood. These findings indicate that the origins of adult cardiovascular disease are related to somatic growth, but not intrauterine growth, and are evident during childhood. abstract_id: PUBMED:30167798 The Impact of Childhood and Adolescent Obesity on Cardiovascular Risk in Adulthood: a Systematic Review. Purpose Of Review: Adult obesity and cardiovascular diseases are closely linked. Yet, the relationship of childhood and adolescent obesities with cardiovascular diseases in adulthood requires additional evidence. The goal of the review is to examine the relationship between increased body mass index (BMI) in childhood and adolescence and cardiovascular risk factors and fatal and non-fatal cardiovascular diseases in adulthood. Recent Findings: Cardiovascular diseases in adulthood are linked by most of the studies to childhood and adolescent obesities. Studies showed that childhood and adolescent obesities increased the incidence of cardiovascular disease risk factors and were linked to a higher risk of cardiovascular morbidity and mortality in adulthood.
Childhood and adolescent obesities were also associated, likely causally, with an increased likelihood of various cardiovascular morbidities, including ischemic heart disease and stroke, as well as non-ischemic heart disease-related cardiac pathologies. abstract_id: PUBMED:17653070 Newborn size, infant and childhood growth, and body composition and cardiovascular disease risk factors at the age of 6 years: the Pune Maternal Nutrition Study. Objective: To study associations of size and body proportions at birth, and growth during infancy and childhood, to body composition and cardiovascular disease (CVD) risk factors at the age of 6 years. Design: The Pune Maternal Nutrition Study, a prospective population-based study of maternal nutrition and CVD risk in rural Indian children. Methods: Body composition and CVD risk factors measured in 698 children at 6 years were related to body proportions and growth from birth. Measurements: Anthropometry was performed every 6 months from birth. At 6 years, fat and lean mass (dual X-ray absorptiometry) and CVD risk factors (insulin resistance, blood pressure, glucose tolerance, plasma lipids) were measured. Results: Compared with international references (NCHS, WHO), the children were short, light and thin (mean weight <-1.0 s.d. at all ages). Larger size and faster growth in all body measurements from birth to 6 years predicted higher lean and fat mass at 6 years. Weight and height predicted lean mass more strongly than fat mass; mid-upper arm circumference (MUAC) predicted them both approximately equally, and skinfolds predicted only fat mass. Neither birthweight nor the 'thin-fat' newborn phenotype was related to CVD risk factors. Smaller MUAC at 6 months predicted higher insulin resistance (P<0.001) but larger MUAC at 1 year predicted higher systolic blood pressure (P<0.001). After infancy, higher weight, height, MUAC and skinfolds, and faster growth of all these parameters were associated with increased CVD risk factors. Conclusions: Slower muscle growth in infancy may increase insulin resistance but reduce blood pressure. After infancy, larger size and faster growth of all body measurements are associated with a more adverse childhood CVD risk factor profile. These rural Indian children are growing below international 'norms' for body size and studies are required in other populations to determine the generalizability of the findings. abstract_id: PUBMED:34465205 Mendelian Randomization Analyses Suggest Childhood Body Size Indirectly Influences End Points From Across the Cardiovascular Disease Spectrum Through Adult Body Size. Background Obesity is associated with long-term health consequences including cardiovascular disease. Separating the independent effects of childhood and adulthood obesity on cardiovascular disease risk is challenging as children with obesity typically remain overweight throughout the lifecourse. Methods and Results This study used 2-sample univariable and multivariable Mendelian randomization to estimate the effect of childhood body size both independently and after accounting for adult body size on 12 endpoints across the cardiovascular disease spectrum. Univariable analyses identified strong evidence of a total effect between genetically predicted childhood body size and increased risk of atherosclerosis, atrial fibrillation, coronary artery disease, heart failure, hypertension, myocardial infarction, peripheral artery disease, and varicose veins.
However, evidence of a direct effect was weak after accounting for adult body size using multivariable Mendelian randomization, suggesting that childhood body size indirectly increases risk of these 8 disease outcomes via the pathway involving adult body size. Conclusions These findings suggest that the effect of genetically predicted childhood body size on the cardiovascular disease outcomes analyzed in this study is a result of larger body size persisting into adulthood. Further research is necessary to ascertain the critical timepoints where, if ever, the detrimental impact of obesity initiated in early life begins to become immutable. abstract_id: PUBMED:28747248 Pathways between childhood/adolescent adversity, adolescent socioeconomic status, and long-term cardiovascular disease risk in young adulthood. Objective: The current study investigated mediators between childhood/adolescent adversities (e.g., dating violence, maltreatment, homelessness, and parental death), low socioeconomic status (SES) during adolescence, and cardiovascular disease (CVD) risk in young adulthood. The purpose of these analyses was to understand whether SES during adolescence and childhood/adolescent adversities affect CVD risk through similar pathways, including maternal relationship quality, health behaviors, financial stress, medical/dental care, educational attainment, sleep problems, and depressive symptoms. Methods: Using the National Longitudinal Study of Adolescent to Adult Health (N = 14,493), which has followed US adolescents (Wave 1; M = 15.9 years) through early adulthood (Wave 4; M = 28.9 years), associations of childhood/adolescent adversity and SES with 30-year CVD risk in young adulthood were examined. The outcome was a Framingham-based prediction model of CVD risk that included age, sex, body mass index, smoking, systolic blood pressure, diabetes, and antihypertensive medication use at Wave 4. Path analysis was used to examine paths through the adolescent maternal relationship to young adult mediators of CVD risk. Results: Childhood/adolescent adversity significantly predicted greater adult CVD risk through the following pathways: maternal relationship, health behaviors, financial stress, lack of medical/dental care, and educational attainment; but not through depressive symptoms or sleep problems. Lower SES during adolescence significantly predicted greater adult CVD risk through the following pathways: health behaviors, financial stress, lack of medical/dental care, and educational attainment, but not maternal relationship, depressive symptoms, or sleep problems. Conclusions: Childhood/adolescent adversities and SES affected CVD risk in young adulthood through both similar and unique pathways that may inform interventions. abstract_id: PUBMED:21911827 Body size across the life course, mammographic density, and risk of breast cancer. Adult body mass index (BMI) is inversely associated with premenopausal breast cancer risk, and childhood and adolescent body size is inversely associated with breast cancer risk in pre- and postmenopausal women. Breast density is inversely related to body size and may play a role in the association of body size with breast cancer risk. The authors conducted a nested case-control study including 1,528 cases and 2,844 controls from the Nurses' Health Study (1989-2004) and Nurses' Health Study II (1996-2003). Prior to breast cancer diagnosis, participants reported their body fatness during childhood and adolescence, BMI at age 18 years, and current BMI.
Mammographic density was measured by using a computer-assisted thresholding method. The inverse association between adult BMI and premenopausal breast cancer (for BMI ≥30 vs. BMI 20-22.4, odds ratio = 0.64, 95% confidence interval: 0.38, 1.06) (P(trend) = 0.36) became positive after adjustment for mammographic density (odds ratio = 1.28, 95% confidence interval: 0.72, 2.30) (P(trend) = 0.07). Conversely, the inverse association between childhood and adolescent body size and breast cancer risk remained after adjustment for mammographic density. The inverse association between adult BMI and premenopausal breast cancer risk may be partially due to negative confounding by mammographic density. Conversely, mammographic density does not appear to explain the inverse association between childhood and adolescent body fatness and breast cancer risk. abstract_id: PUBMED:31945595 Childhood adversity and cardiovascular disease risk: An appraisal of recall methods with a focus on stress-buffering processes in childhood and adulthood. Introduction: Associations between childhood/adolescent adversity and poor adult physical health have been reported in past work. Much of this work has relied on either retrospective or prospective measures of childhood experiences. However, the effect of different assessment methods on potential stress buffering processes remains largely unknown. Objective: We first examined the extent to which long-term cardiovascular disease (CVD) risk was predicted by reports obtained from 10-year-old youth regarding adversity experienced in the prior year and those obtained from the same individuals as adults (age 29) regarding their experience of childhood adversity from ages 0-10, focusing in each case on similar types of adversity. To test stress-buffering perspectives, we examined the effects of parental emotional support on the association between each measure of childhood adversities and cardiovascular health. Methods: We used data from a longitudinal sample of 454 African Americans enrolled in the Family and Community Health Study. The outcome variable was a 30-year CVD risk score computed from the Framingham algorithm. The hypotheses were tested with beta regression models. Results: The findings revealed a link between childhood adversity and adult CVD risk at age 29, for both measures of adversity. Consistent with the stress-buffering hypothesis, prospectively assessed parental emotional support in adolescence, but not adulthood, buffered effects on cardiovascular risk for each type of assessment of childhood adversity. Conclusions: Prospective and retrospective measures correlated in a manner similar to prior reports (i.e. significantly, but poorly). Further, in line with the stress-buffering hypothesis, parental emotional support received at age 10 yielded different buffering effects than parental emotional support received at age 29. The study's findings suggest that theoretically consistent patterns of stress-buffering are detectable using either type of assessment of childhood adversity and provide useful information in the prediction of adult CVD risk. abstract_id: PUBMED:24267463 Identification of obesity and cardiovascular risk factors in childhood and adolescence. Obesity in the child and adolescent population is a growing problem. This article discusses the identification of obesity and overweight youth using body mass index and anthropometry and reviews literature to show the relationship between obesity and premature cardiovascular disease.
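The screening abstract above (PUBMED:24267463) rests on body mass index and anthropometry but does not spell out the calculation. As a minimal sketch, assuming the usual definition of BMI (weight in kilograms divided by height in metres squared), the conventional WHO adult cut-points, and the commonly used BMI-for-age percentile cut-points for youth (e.g., CDC growth charts), the classification could look like the following. The percentile itself has to come from reference growth data, which is not reproduced here, and the function names and example values are illustrative only.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify_adult(bmi_value: float) -> str:
    """Conventional adult cut-points (WHO): 25 overweight, 30 obesity."""
    if bmi_value >= 30:
        return "obesity"
    if bmi_value >= 25:
        return "overweight"
    if bmi_value >= 18.5:
        return "normal weight"
    return "underweight"

def classify_youth(bmi_for_age_percentile: float) -> str:
    """Paediatric classification by BMI-for-age percentile against a growth
    reference (e.g., CDC charts): >=85th overweight, >=95th obesity. The
    percentile must be looked up from reference data supplied by the caller."""
    if bmi_for_age_percentile >= 95:
        return "obesity"
    if bmi_for_age_percentile >= 85:
        return "overweight"
    if bmi_for_age_percentile >= 5:
        return "normal weight"
    return "underweight"

# Illustrative values only
print(round(bmi(75.0, 1.60), 1), classify_adult(bmi(75.0, 1.60)))   # 29.3 overweight
print(classify_youth(96.0))                                         # obesity
```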
abstract_id: PUBMED:26810380 Relationship between early growth and CVD risk factors in adolescents. Low birth weight and a rapid weight gain in early childhood may lead to an increased risk for developing cardiovascular disease later in life, such as hypertension and dyslipidaemia. In this study, we examined the associations of size at birth and relative weight gain in infancy and childhood with specific cardiovascular disease risk factors in early adulthood. Adolescents (n=1935) from the Birth to Twenty plus (BT20+) cohort were included in the analysis. The following were treated as exposure variables: weight at birth, and relative conditional weight gain (CW), independent of height, between ages 0-24 months and 24-48 months. Outcomes were serum lipids and body composition variables at age 18 years. After adjusting for sex and other confounders, early life exposures were not associated with adolescent lipid profile. Following adjustment for sex and height (body size), birth weight [β=0.704 (0.40, 1.01)], CW 0-24 [β=1.918 (1.56, 2.28)] and CW24-48 [β=1.485 (1.14, 1.82)] accounted for 48% of the variance in fat mass. However, birth weight [β=0.773 (0.54, 1.01)], CW 0-24 [β=1.523 (1.24, 1.80)] and CW24-48 [β=1.226 (0.97, 1.49)] were also positive predictors, accounting for 71% of the variance in fat mass in adolescence (P<0.05). Our data suggest that birth weight and weight gain during infancy and early childhood independent of linear growth are related to adolescent body composition but not blood lipid profiles in an urban African population. abstract_id: PUBMED:32488216 The predictive value of repeated blood pressure measurements in childhood for cardiovascular risk in adults: the Hanzhong Adolescent Hypertension Study. There is currently a lack of strong evidence linking childhood elevated blood pressure to long-term cardiovascular risk in adulthood. Repeated observations of abnormal blood pressure in childhood may enhance the prediction of cardiovascular risk in adulthood compared with a single observation. The study included 1738 individuals in rural areas of Hanzhong City, Shaanxi, who had been followed for 30 years since baseline (1987, at which time participants were aged 6-15 years). According to four independent measurements of blood pressure in 1987, 1989, 1992, and 1995, childhood elevated blood pressure was defined as 2 in-person examinations with blood pressure values above the 90th percentile. Arterial stiffness and left ventricular hypertrophy in adulthood were assessed by brachial-ankle pulse wave velocity and the Cornell product index, respectively.
Childhood elevated blood pressure was associated with an increased risk of adult hypertension (OR, 2.01; 95% CI, 1.53-2.65), arterial stiffness (OR, 1.69; 95% CI, 1.32-2.16) and left ventricular hypertrophy (OR, 1.86; 95% CI, 1.13-3.05) (all P < 0.05). Cardiovascular risk in adults increased with increasing childhood blood pressure levels. In addition, two abnormal childhood blood pressure observations predicted an increased likelihood of hypertension in adulthood (0.77 for 2 versus 0.70 for 1 observation, P < 0.001). Our study provides strong evidence that elevated blood pressure in childhood predicts cardiovascular risk in adults. The prediction was enhanced by two observations of abnormal blood pressure in childhood compared with a single measurement. We emphasize the importance of childhood blood pressure monitoring and control in the prevention of cardiovascular diseases. Answer: Yes, cardiovascular risk can be predicted by newborn, childhood, and adolescent body size. Several studies have found associations between body size during these early life stages and cardiovascular risk factors or diseases in adulthood. For instance, a study on urban African Americans did not find a correlation between birth weight and adult outcomes related to cardiovascular risk, but it did find that weight and body mass index (BMI) during childhood and adolescence were significantly related to obesity and reduced insulin-stimulated glucose utilization in adulthood, which are risk factors for cardiovascular disease (PUBMED:9470007). A systematic review highlighted that childhood and adolescent obesities were linked to higher risk of cardiovascular morbidity and mortality in adulthood (PUBMED:30167798). Furthermore, the Pune Maternal Nutrition Study found that larger size and faster growth in body measurements from birth to 6 years were associated with increased cardiovascular disease risk factors at the age of 6 years (PUBMED:17653070). Mendelian randomization analyses suggested that childhood body size indirectly influences cardiovascular disease endpoints through adult body size, indicating that the effects of childhood obesity on cardiovascular outcomes are largely due to obesity persisting into adulthood (PUBMED:34465205). Additionally, childhood and adolescent adversities, as well as low socioeconomic status during adolescence, have been shown to affect cardiovascular disease risk in young adulthood through various pathways, including health behaviors and educational attainment (PUBMED:28747248). The relationship between early growth and cardiovascular disease risk factors in adolescents has also been examined, with findings suggesting that birth weight and weight gain during infancy and early childhood are related to adolescent body composition but not blood lipid profiles in an urban African population (PUBMED:26810380). Lastly, the Hanzhong Adolescent Hypertension Study provided strong evidence that elevated blood pressure in childhood predicts cardiovascular risk in adults, with the prediction enhanced by repeated observations of abnormal blood pressure in childhood (PUBMED:32488216). In summary, while birth weight alone may not be a strong predictor of adult cardiovascular risk, body size and growth patterns during childhood and adolescence, as well as childhood blood pressure, are important predictors of future cardiovascular health.
Instruction: Is there a link between components of health-related functioning and incident impaired glucose metabolism and type 2 diabetes? Abstracts: abstract_id: PUBMED:20007943 Is there a link between components of health-related functioning and incident impaired glucose metabolism and type 2 diabetes? The Australian Diabetes Obesity and Lifestyle (AusDiab) study. Objective: To determine the longitudinal association of components of health-related functioning (HRF) with incident impaired glucose metabolism and type 2 diabetes. Research Design And Methods: The Australian Diabetes Obesity and Lifestyle (AusDiab) study is a national, longitudinal study of adults aged ≥25 years from 42 randomly selected areas of Australia. Diabetes status was defined using the World Health Organization criteria, and HRF was assessed using the SF-36 questionnaire in 1999-2000 and 2004-2005. Results: Incident impaired fasting glucose (IFG), impaired glucose tolerance (IGT), and newly diagnosed type 2 diabetes were associated with increased bodily pain at baseline compared with those with normal glucose tolerance (NGT) (IFG P = 0.005, IGT P < 0.004, and newly diagnosed type 2 diabetes P = 0.005), after adjustment. In addition, those with incident IGT and newly diagnosed type 2 diabetes had significantly reduced physical functioning, general health, mental health, and vitality at baseline compared with those with NGT. After we controlled for factors associated with incident diabetes, those in the lowest quartile of the physical component summary scale at baseline had at least a 50% higher risk of progression to impaired glucose metabolism and diabetes 5 years later. Conclusions: These findings show that incident IFG, IGT, and newly diagnosed type 2 diabetes are associated with reduced HRF independent of cardiovascular disease and that this is evident before the onset of these conditions. If future health promotion campaigns are to effectively target those at high risk of developing diabetes, an understanding of the process of declining health before onset of the disease is essential. abstract_id: PUBMED:23639608 Impaired glucose metabolism and health related quality of life. Aims: We aimed to investigate whether different categories of glucose tolerance have any effect on a person's HRQoL. Methods: Population-based cross-sectional study conducted in a community sample of apparently healthy middle-aged individuals living in Western Finland. The subjects of the study, 1383 individuals, aged 45-70 years, had at least one cardiovascular risk factor but no previous diagnoses of either diabetes or cardiovascular disease. They completed a health-related quality of life (HRQoL) questionnaire before the oral glucose tolerance test (OGTT) was performed to diagnose the categories of glucose tolerance. Results: Persons with newly diagnosed type 2 diabetes (NDM) had lower scores for physical functioning, general health and emotional role than subjects with normal glucose tolerance. Conclusion: The results of the HRQoL questionnaire demonstrated that NDM is negatively associated with HRQoL, but prediabetes - IFG or IGT - does not. abstract_id: PUBMED:34448864 Association of Trimethylamine N-Oxide and Related Metabolites in Plasma and Incident Type 2 Diabetes: The Cardiovascular Health Study. Importance: Although rodent studies suggest that trimethylamine N-oxide (TMAO) influences glucose homeostasis and risk of type 2 diabetes, evidence in humans is limited.
Objective: To examine the associations of serial measures of plasma TMAO and related metabolite concentrations with incident type 2 diabetes, fasting plasma insulin and glucose levels, and the Gutt insulin sensitivity index (ISI). Design, Setting, And Participants: This prospective cohort design assessed the association of plasma TMAO and related metabolite concentrations with diabetes outcome, whereas a cross-sectional design assessed the association with insulin and glucose levels and Gutt ISI. The participants were a cohort of older US adults from the Cardiovascular Health Study (CHS). Data from June 1989 to May 1990, from November 1992 to June 1993, and from June 1995 to June 1997 were included, with follow-up through June 2010. Levels of TMAO and related metabolites were measured in CHS plasma samples. Data were analyzed from July 2019 to September 2020. Exposures: Plasma concentrations of TMAO, carnitine, betaine, choline, crotonobetaine, and γ-butyrobetaine, measured by high-performance liquid chromatography and mass spectrometry. Main Outcomes And Measures: Linear regression for associations of TMAO and related metabolites with insulin and glucose levels and Gutt ISI, and proportional hazards regression for associations with diabetes. Results: The study included 4442 participants without diabetes at baseline (mean [SD] age, 73 [6] years at entry; 2710 [61%] women). In multivariable analyses, plasma TMAO, carnitine, crotonobetaine, and γ-butyrobetaine concentrations were positively associated with fasting insulin level (insulin mean geometric ratio comparing fifth with first quintiles of metabolite concentration: 1.07 [95% CI, 1.04-1.10] for TMAO; 1.07 [95% CI, 1.03-1.10] for carnitine; 1.05 [95% CI, 1.02-1.08] for crotonobetaine; and 1.06 [95% CI, 1.02-1.09] for γ-butyrobetaine). In contrast, betaine and choline concentrations were associated with greater insulin sensitivity (mean difference in Gutt ISI comparing fifth with first quintiles: 6.46 [95% CI, 4.32-8.60] and 2.27 [95% CI, 0.16-4.38], respectively). Incident diabetes was identified in 661 participants during a median 12.1 (interquartile range, 6.9-17.1) years of follow-up. In multivariable analyses, TMAO and metabolites were not significantly associated with type 2 diabetes risk (hazard ratios of diabetes comparing fifth with first quintile: 1.20 [95% CI, 0.94-1.55] for TMAO; 0.96 [95% CI, 0.74-1.24] for choline; 0.88 [95% CI, 0.67-1.15] for betaine; 1.07 [95% CI, 0.83-1.37] for carnitine; 0.79 [95% CI, 0.60-1.04] for γ-butyrobetaine; and 1.06 [95% CI, 0.83-1.35] for crotonobetaine). Conclusions And Relevance: Plasma TMAO and related metabolites were not significantly associated with type 2 diabetes among older adults. The metabolites TMAO, carnitine, γ-butyrobetaine, and crotonobetaine may be associated with insulin resistance, and betaine and choline may be associated with greater insulin sensitivity, but temporality of the associations was not established. abstract_id: PUBMED:20740271 Health behaviours, socioeconomic status and diabetes incidence: the Australian Diabetes Obesity and Lifestyle Study (AusDiab). Aims/hypothesis: To identify the impact of socioeconomic status on incident impaired glucose metabolism and type 2 diabetes and to investigate the mediating role of health behaviours on this relationship using national, population-based data. Methods: The Australian Diabetes Obesity and Lifestyle (AusDiab) Study is a national, population-based, longitudinal study of adults aged 25 years and above. 
A total sample of 4,405 people provided complete baseline (1999-2000) and 5 year follow-up (2004-2005) data relevant for these analyses. Fasting plasma glucose and 2 h plasma glucose were obtained from an OGTT, and demographic, socioeconomic and behavioural data were collected by interview and questionnaire. Multinomial logistic regression examined the role of socioeconomic position in the development of diabetes and mediation analyses tested the contribution of health behaviours in this relationship. Results: Highest level of education was a stronger predictor of incident impaired glucose tolerance and type 2 diabetes (p = 0.002), compared with household income (p = 0.103), and occupational grade (p = 0.202). Education remained a significant independent predictor of diabetes in fully adjusted models. However, the relationship was attenuated by the health behaviours (smoking and physical activity). Mediation analyses indicated that these behaviours were partial mediators (explaining 27%) of the socioeconomic status-diabetes relationship. Conclusion/interpretation: Smoking and physical activity partly mediate the relationship between low education and type 2 diabetes. Identification of these modifiable behavioural mediators should facilitate the development of effective health promotion campaigns to target those at high risk of developing type 2 diabetes. abstract_id: PUBMED:29671081 Type 2 Diabetes Mellitus, the Metabolic Syndrome, and Its Components in Adult Survivors of Acute Lymphoblastic Leukemia and Hematopoietic Stem Cell Transplantations. Purpose Of Review: A growing number of pediatric acute lymphoblastic leukemia (ALL) and hematopoietic stem cell transplantation (HSCT) survivors reach adulthood and face long-term health-related problems. We review risk factors and the prevalence of the metabolic syndrome (MetS), a cluster of obesity-related comorbidities, including abdominal obesity, atherogenic dyslipidemia, elevated blood pressure, impaired glucose metabolism, and type 2 diabetes in ALL and HSCT survivors. Recent Findings: Components of the MetS are already detected during the first year of ALL maintenance therapy and significantly worsen over time. The prevalence of MetS increases at a faster rate in this setting than in the general population. Factors found to be of the greatest potential risk to the development of the MetS are central obesity, increased BMI, irradiation therapy, older age, poor diet, and low level of physical activity. The early onset of MetS and its components among ALL and HSCT survivors calls for early and continuous screening to identify those at risk and to implement preventive measures. abstract_id: PUBMED:24026549 Hepatic insulin clearance is closely related to metabolic syndrome components. Objective: Insulin clearance is decreased in type 2 diabetes mellitus (T2DM) for unknown reasons. Subjects with metabolic syndrome are hyperinsulinemic and have an increased risk of T2DM. We aimed to investigate the relationship between hepatic insulin clearance (HIC) and different components of metabolic syndrome and tested the hypothesis that HIC may predict the risk of metabolic syndrome. Research Design And Methods: Individuals without diabetes from the Metabolic Syndrome Berlin Brandenburg (MeSyBePo) study (800 subjects with the baseline examination and 189 subjects from the MeSyBePo recall study) underwent an oral glucose tolerance test (OGTT) with assessment of insulin secretion (insulin secretion rate [ISR]) and insulin sensitivity. 
Two indices of HIC were calculated. Results: Both HIC indices showed lower values in subjects with metabolic syndrome (P < 0.001) at baseline. HIC indices correlate inversely with waist circumference, diastolic blood pressure, fasting glucose, triglycerides, and OGTT-derived insulin secretion index. During a mean follow-up of 5.1 ± 0.9 years, 47 individuals developed metabolic syndrome and 33 subjects progressed to impaired glucose metabolism. Both indices of HIC showed a trend of an association with increased risk of metabolic syndrome (HIC(C-peptide) odds ratio 1.13 [95% CI 0.97-1.31], P = 0.12, and HIC(ISR) 1.38 [0.88-2.17], P = 0.16) and impaired glucose metabolism (HIC(C-peptide) 1.12 [0.92-1.36], P = 0.26, and HIC(ISR) 1.31 [0.74-2.33], P = 0.36), although point estimates reached no statistical significance. Conclusions: HIC was associated with different components of metabolic syndrome and markers of insulin secretion and insulin sensitivity. Decreased HIC may represent a novel pathophysiological mechanism of the metabolic syndrome, which may be used additionally for early identification of high-risk subjects. abstract_id: PUBMED:26160705 Health-Related Quality of Life after Bariatric Surgery and its Correlation with Glycaemic Status in Hong Kong Chinese Adults. Background: To evaluate health-related quality of life (HRQOL) following bariatric surgery and its correlation with different glycaemic status in Hong Kong Chinese adults. Materials And Methods: In 2002-2008, obese Chinese adults were recruited for bariatric surgery, undergoing laparoscopic adjustable gastric banding or laparoscopic sleeve gastrectomy. Patients were invited to complete the Chinese Hong Kong Medical Outcomes Study Short-Form Health Survey (SF-36) at baseline and at 1-year post operation.
Results: Sixty patients (60% female) completed baseline and 1-year follow-up HRQOL assessments. Mean age was 38 years and mean BMI was 41.6 kg/m2. At baseline, 30.0% of patients had diabetes and 31.7% prediabetes. Mean absolute weight reduction 1 year after bariatric surgery was 19.8 kg. Statistically significant improvements in SF-36 scores were demonstrated in all physical domains and in three of the four psychological domains. Greater body weight reduction was associated with greater improvements in certain physical domains postoperatively. After adjusting for co-variables, abnormal glucose tolerance was associated with greater improvements in five of the eight HRQOL domains. Conclusions: Bariatric surgery resulted in significant gains in HRQOL as well as significant reductions in body weight in obese Chinese adults. This study suggests that bariatric surgery offers greater HRQOL improvements in patients with prediabetes and diabetes compared with normoglycaemic individuals. abstract_id: PUBMED:36057662 The link between gestational diabetes and cardiovascular diseases: potential role of extracellular vesicles. Extracellular vesicles are critical mediators of cell communication. They encapsulate a variety of molecular cargo such as proteins, lipids, and nucleic acids including miRNAs, lncRNAs, circular RNAs, and mRNAs, and through transfer of these molecular signals can alter the metabolic phenotype in recipient cells. Emerging studies show the important role of extracellular vesicle signaling in the development and progression of cardiovascular diseases and associated risk factors such as type 2 diabetes and obesity.
Results from intervention studies did not show an effect of vitamin D supplementation on disease occurrence, including colorectal cancer. In 34 intervention studies including 2805 individuals with mean 25(OH)D concentration lower than 50 nmol/L at baseline supplementation with 50 μg per day or more did not show better results. Supplementation in elderly people (mainly women) with 20 μg vitamin D per day seemed to slightly reduce all-cause mortality. The discrepancy between observational and intervention studies suggests that low 25(OH)D is a marker of ill health. Inflammatory processes involved in disease occurrence and clinical course would reduce 25(OH)D, which would explain why low vitamin D status is reported in a wide range of disorders. In elderly people, restoration of vitamin D deficits due to ageing and lifestyle changes induced by ill health could explain why low-dose supplementation leads to slight gains in survival. Answer: Yes, there is a link between components of health-related functioning (HRF) and incident impaired glucose metabolism and type 2 diabetes. The Australian Diabetes Obesity and Lifestyle (AusDiab) study found that incident impaired fasting glucose (IFG), impaired glucose tolerance (IGT), and newly diagnosed type 2 diabetes were associated with increased bodily pain at baseline compared with those with normal glucose tolerance (NGT). Additionally, those with incident IGT and newly diagnosed type 2 diabetes had significantly reduced physical functioning, general health, mental health, and vitality at baseline compared with those with NGT. The study concluded that these conditions are associated with reduced HRF independent of cardiovascular disease and that this is evident before the onset of these conditions (PUBMED:20007943). Furthermore, a population-based cross-sectional study conducted in Western Finland found that persons with newly diagnosed type 2 diabetes (NDM) had lower scores for physical functioning, general health, and emotional role than subjects with NGT. The study concluded that NDM is negatively associated with health-related quality of life (HRQoL), but prediabetes (IFG or IGT) does not have the same association (PUBMED:23639608). These findings suggest that there is a significant relationship between reduced health-related functioning and the development of impaired glucose metabolism and type 2 diabetes, indicating that HRF components could potentially serve as early indicators for the risk of developing these conditions.
Instruction: Does crossing the legs decrease arterial pressure in diabetic patients with peripheral vascular disease? Abstracts: abstract_id: PUBMED:8269797 Does crossing the legs decrease arterial pressure in diabetic patients with peripheral vascular disease? Objective: To evaluate the effect of crossing the legs at the knee and the ankle on peripheral arterial pressures. Research Design And Methods: A prospective study of 6 diabetic patients with known peripheral vascular disease and 5 nondiabetic control subjects without peripheral vascular disease was conducted. Peripheral arterial pressures were taken at the ankle and at the great toe before and after crossing the legs at the knees and ankles. Comparisons were made of measurements obtained in the supine and sitting positions. All crossed leg measurements were taken in the sitting position. Results: Ankle arm indexes and digital arm indexes pressures taken in the sitting position were equal to or higher than supine pressures, with the exception of one subject, GB. In this patient, ankle arm indexes and digital arm indexes on the right extremity were lower in the sitting position, but increased with the legs crossed at the knees and ankles compared with the uncrossed sitting position. In all patients, lower extremity pressures that decreased slightly with crossing the legs remained higher than pressures obtained in the supine position. Statistical analyses showed no significant differences. Wave forms did not change even when there was a slight decrease in ankle arm indexes or digital arm indexes. Control subjects without peripheral vascular disease showed no change in pressures with crossing the legs. Conclusions: Crossing the legs at the knees and ankles does not result in a significant decrease in peripheral arterial pressures in diabetic patients with peripheral vascular disease. abstract_id: PUBMED:14566089 Pulse wave velocity in lower-limb arteries among diabetic patients with peripheral arterial disease. Objective: Patients with type 2 diabetes mellitus are at an increased risk of atherosclerosis including peripheral arterial disease (PAD). The purpose of this study was to examine the possible alteration in pulse wave velocity (PWV) in lower-limb arteries among diabetic patients with PAD. Methods: We measured brachial-ankle PWV (baPWV) using an automatic device in 101 healthy control subjects and 102 type 2 diabetic patients including those with PAD. Results: Diabetic patients without PAD showed a higher baPWV than the healthy control subjects. There was no significant difference in baPWV between the right and left legs in these groups. In contrast, among diabetic patients with PAD, baPWV was significantly lower in the affected legs than in the non-affected legs, and the reduction in baPWV was greater in those with lower ankle-brachial pressure index (ABI). In the patients with PAD who received percutaneous transluminal angioplasty, both baPWV and ABI were increased following successful vessel dilatation. Conclusions: These results suggest that baPWV is increased in diabetic patients, whereas it is decreased in the affected legs in diabetic patients with PAD. Widening of the right-left difference in baPWV may be a novel marker of PAD. 
abstract_id: PUBMED:18482527 Peripheral arterial disease in diabetic patients: utility of the toe-brachial index Background And Objective: To evaluate the influence of peripheral neuropathy, autonomic neuropathy and arterial wall calcification on the concordance of ankle-brachial index (ABI) and toe-brachial index (TBI) in diabetic patients. Patients And Method: We evaluated in 221 diabetic patients (73.8% type 2; 66.6% male) the presence of: peripheral arterial disease, peripheral neuropathy, autonomic neuropathy and arterial wall calcification. Results: The mean (standard deviation) difference between ABI and TBI in our population was 0.33 (0.25). Patients with arterial wall calcification had a higher mean difference between ABI and TBI than patients without arterial wall calcification -0.87 (0.39) vs 0.30 (0.21) (p < 0.005)-. Mean difference between ABI and TBI was similar for patients with and without autonomic neuropathy -0.36 (0.26) vs 0.30 (0.24) (p = 0.057)- and for patients with and without peripheral neuropathy -0.34 (0.28) vs 0.31 (0.21) (p = 0.423)-. Among all patients there was a significant linear association (r = 0.506; p < 0.001) between ABI and TBI. For patients with arterial wall calcification there was no correlation between both indices (r = -0.070; p = 0.857). Conclusions: Our findings indicate that in diabetic patients, TBI is the method of choice to evaluate lower limb perfusion in the presence of overt arterial wall calcification. abstract_id: PUBMED:11473082 Peripheral arterial disease in diabetic and nondiabetic patients: a comparison of severity and outcome. Objective: The aim of this study was to quantify the distribution of peripheral arterial disease in the diabetic and nondiabetic population attending for angiography and to compare severity and outcome between both groups of patients. Research Design And Methods: Randomly selected lower-extremity angiograms were examined according to the Bollinger system. Patient demographics and medical history were recorded and case notes were examined to determine which patients later underwent a revascularization procedure or amputation and which patients had died. Results: A total of 136 arteriograms obtained between 1992 and 1996 were analyzed. The age (mean +/- SD) of the patients was 64.7 +/- 10.8 years. Diabetic patients (43%) and nondiabetic patients were of similar age (63.9 +/- 10.4 vs. 65.3 +/- 11.1 years, P = 0.43), with a similar history of smoking (81.0 vs. 76.9%, P = 0.26), ischemic heart disease (41.4 vs. 37.2%, P = 0.54), and hypercholesterolemia (24.4 vs. 30.8%, P = 0.48). However, there was a greater proportion of hypertensive patients in the diabetic group (63.8 vs. 39.7%, P = 0.006). Diabetic patients had greater severity of arterial disease in the profunda femoris and all arterial segments below the knee (P = 0.02). A greater number of amputations occurred in the diabetic group: diabetic patients were five times more likely to have an amputation (41.4 vs. 11.5%, odds ratio [OR] 5.4, P < 0.0001). Mortality was higher in the diabetic group (51.7 vs. 25.6%, OR 3.1, P = 0.002), and diabetic patients who died were younger at presentation than nondiabetic patients (64.7 +/- 11.4 vs. 71.1 +/- 8.7 years, P = 0.04). Conclusions: In patients with peripheral arterial disease, diabetic patients have worse arterial disease and a poorer outcome than nondiabetic patients.
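The two indices compared in the ABI/TBI abstract above (PUBMED:18482527) are simple pressure ratios, and the ABI-TBI gap is what flags a falsely elevated ABI from medial arterial calcification. The sketch below illustrates that arithmetic only; the pressure readings are hypothetical, the 0.90 PAD cutoff is the conventional threshold also used elsewhere in this record, and the 0.33 gap used to flag calcification simply reuses the cohort's mean ABI-TBI difference as an illustrative trigger, not a threshold the study prescribes.

    # Minimal sketch: ankle-brachial (ABI) and toe-brachial (TBI) indices from systolic
    # pressures. All numeric values below are hypothetical, not data from the study.
    def ankle_brachial_index(ankle_systolic_mmhg: float, brachial_systolic_mmhg: float) -> float:
        return ankle_systolic_mmhg / brachial_systolic_mmhg

    def toe_brachial_index(toe_systolic_mmhg: float, brachial_systolic_mmhg: float) -> float:
        return toe_systolic_mmhg / brachial_systolic_mmhg

    brachial, ankle, toe = 130.0, 110.0, 75.0       # hypothetical single-limb readings, mmHg
    abi = ankle_brachial_index(ankle, brachial)     # ~0.85
    tbi = toe_brachial_index(toe, brachial)         # ~0.58

    pad_suspected = abi < 0.90                      # conventional PAD cutoff
    # A wide ABI-TBI gap suggests a stiff, calcified ankle artery inflating the ABI,
    # in which case the TBI is the more trustworthy index (the abstract's conclusion).
    calcification_suspected = (abi - tbi) > 0.33    # illustrative trigger (cohort mean gap)
    print(abi, tbi, pad_suspected, calcification_suspected)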
abstract_id: PUBMED:19366974 Automated ankle-brachial pressure index measurement by clinical staff for peripheral arterial disease diagnosis in nondiabetic and diabetic patients. Objective: Peripheral arterial disease (PAD) is a prognostic marker in cardiovascular disease. The use of Doppler-measured ankle-brachial pressure index (Dop-ABI) for PAD diagnosis is limited because of time, required training, and costs. We assessed automated oscillometric measurement of the ankle-brachial pressure index (Osc-ABI) by nurses and clinical staff. Research Design And Methods: Clinical staff obtained Osc-ABI with an automated oscillometric device in 146 patients (83 with diabetes) at the time of Dop-ABI measurement and ultrasound evaluation. Results: Measurements were obtained in most legs (Dop-ABI 98%; Osc-ABI 95.5%). Dop- and Osc-ABI were significantly related in diabetic and nondiabetic patients with good agreement over a wide range of values. When Dop-ABI ≤0.90 was used as the gold standard for PAD, receiver operating characteristic curve analysis showed that PAD was accurately diagnosed with Osc-ABI in diabetic patients. When ultrasound was used to define PAD, Dop-ABI had better diagnostic performance than Osc-ABI in the whole population and in diabetic patients (P = 0.026). Both methods gave similar results in nondiabetic patients. The cutoff values for the highest sensitivity and specificity for PAD screening were between 1.0 and 1.1. Estimation of cost with the French medical care system fees showed a potential reduction by three of the screening procedures. Conclusions: PAD screening could be improved by using Osc-ABI measured by clinical staff with the benefit of greater cost-effectiveness but at the risk of lower diagnostic performance in diabetic patients. abstract_id: PUBMED:14631325 Relationship between peripheral vascular disease and high plantar pressures in diabetic neuro-ischaemic patients. Background: It has been shown that high foot pressure in diabetic patients plays a crucial role in plantar ulcer development. The purpose of the study is to analyze the relationship between foot arterial pressures and plantar pressures in diabetic patients with both peripheral neuropathy and vascular disease. Methods: We have evaluated the relationship between foot arterial pressures and plantar pressure parameters (Peak Plantar Pressure, Foot-Floor Contact and Plantar Pressure Integral) in eleven diabetic patients with both peripheral neuropathy and peripheral vascular disease. Peripheral neuropathy was defined as a tuning fork score <4/8 measured at the great toe and internal malleolus with a Tuning fork (Rydel-Seiffer 128 Hz), the absence of both patellar and ankle reflexes and with a temperature discrimination more than +5 degrees C (Thermocross). The peripheral vascular disease (PVD) was evaluated by Doppler technique. Peak Plantar Pressure (PPP) and Foot-Floor Contact (FFC) were measured by Force-Sensing Resistive (FSR 174) sensors under the 1st, 3rd and 5th metatarsal heads as well as under the heel and big toe of both feet. The Plantar Pressure Integral (PPI) was defined by the integral of the pressure over the time. Results: We have found a significant relationship between plantar pressure parameters (PPP, FFC, and PPI) under the first metatarsal heads and Doppler arterial pressures of both tibial posterior and dorsalis pedis artery.
However, there was no relationship between Doppler arterial pressures and plantar pressure parameters (PPP, FFC, PPI) under 3rd and 5th metatarsal heads or under both the heel and the big toe. Conclusion: According to our results, the peripheral vascular disease could contribute to the elevation of plantar pressures and to the prolonged duration of foot floor contact at each step in diabetic patients with both peripheral neuropathy and peripheral vascular disease. In such patients, severe ischaemia could lead to an increased risk of foot ulceration and consecutive lower extremity amputation. abstract_id: PUBMED:15147987 Pressure and the diabetic foot: clinical science and offloading techniques. Diabetic foot ulceration is a common, yet in many cases an eminently preventable, complication that affects 1 in 20 patients with diabetes. Risk factors for ulceration include insensitivity (secondary to somatic neuropathy), high foot pressures, callus formation (a consequence of sympathetic neuropathy and high foot pressures), deformities (such as claw feet, prominent metatarsal heads, etc.), peripheral vascular disease, and most importantly, a past history of ulceration. None of these factors alone causes ulceration; thus, early identification and amelioration of these factors is a primary aim in foot ulcer prevention. A number of therapeutic approaches may help reduce ulcer incidence: these include therapeutic footwear, hosiery, and, potentially, liquid silicone injected under high-pressure areas. In the management of neuropathic ulcers, pressure relief is of the utmost importance, and total contact casting remains the "gold standard" means of achieving such pressure redistribution. The successful management of diabetic foot ulceration depends on a team approach, remembering that ulcers should heal if (1) the arterial circulation is intact, (2) pressure relief is achieved and maintained over the ulcer, and (3) infection is appropriately treated. abstract_id: PUBMED:18639951 Analysis of traditional and nontraditional risk factors for peripheral arterial disease in elderly type 2 diabetic patients in Taiwan. Aims: Diagnosing peripheral arterial disease (PAD) and recognizing its associated risk factors in diabetes is important due to high cardiovascular disease and limb loss risk. However, both traditional and nontraditional risk factors have seldom been analyzed in the same diabetic cohort. The aim of this study was to examine the traditional and nontraditional risk factors for PAD in elderly type 2 diabetic patients. Methods: Five hundred and eighty type 2 diabetic subjects aged ≥60 years were cross-sectionally studied. Diagnosis of PAD was by ankle-brachial index (ABI) <0.90 on either leg. The association between traditional and nontraditional risk factors of PAD was analyzed. Results: Among the confounders, age, diabetes duration, HDL cholesterol, albuminuria, CKD (chronic kidney disease), hsCRP and insulin use differed between patients with and without PAD. Multiple logistic regression revealed that only CKD, insulin use, albuminuria, elevated hsCRP level (>3 mg/l) and low HDL cholesterol were independent risk factors. Conclusions: The findings of this study highlight the importance of monitoring nontraditional risk factors of PAD in diabetes. Implementing effective interventions to improve management of these risk factors may lower the risk for PAD. abstract_id: PUBMED:11318823 Is oscillometric ankle pressure reliable in leg vascular disease?
The objective of the study was to evaluate the validity of oscillometric systolic ankle pressure in symptomatic leg arterial occlusive disease. Ankle pressure measurements using oscillometric curves obtained using a standard 12-cm cuff with a specially designed device for signal processing were validated against the continuous wave (CW) Doppler technique. Thirty-four subjects without signs or symptoms of peripheral vascular disease (68 legs) and 47 patients with leg ischaemia (85 legs) varying from moderate claudication to critical ischaemia were examined. The oscillometric curves were analysed using several algorithms reported in the literature, based on the assumption that maximum oscillations are recorded near mean arterial pressure. In normals, reasonable agreement between CW Doppler and oscillometric methods was seen. When an algorithm that determined the lowest cuff pressure at which maximum oscillations occurred, and a characteristic ratio for systolic pressure of 0.52 was used, the mean difference between CW Doppler and oscillometry was 1.7 mmHg [range -19 to +27, limits of agreement (2 SD) 21.1 mmHg]. In ischaemic legs, oscillometry overestimated systolic ankle pressure by a mean of 28.8 mmHg [range -126 to +65, limits of agreement 82.8 mmHg]. The difference was more pronounced among patients with critical ischaemia compared with claudicants, and also more evident among diabetics. The error of oscillometric pressure determination in subjects with leg arterial disease inversely increased with CW Doppler ankle pressure. In 39% of the recordings in legs with a CW Doppler systolic pressure below 100 mmHg, the oscillometric mean arterial pressure was higher than the recorded CW Doppler systolic pressure. In conclusion, the oscillometric method to determine systolic ankle pressure, based on the concept of maximum cuff oscillations occurring near mean arterial pressure, is not reliable in leg arterial disease, usually overestimating ankle pressure. abstract_id: PUBMED:11403862 Comparison of peripheral arterial reconstruction in diabetic and non-diabetic patients: a prospective clinic-based study. Unlabelled: To assess the efficacy and safety of lower extremity arterial reconstruction in diabetic and non-diabetic subjects during a 3-year period. A prospective clinic-based study between 1994-1999 in Area 7, Madrid, with a population of 569307 and an estimated diabetic population of 37932 (15505 men and 22427 women). The level of arterial reconstruction and associated risk factors were ascertained. Results: A total of 588 peripheral revascularization surgical procedures were performed in 481 patients. The diabetic patients (n=174, 36.2%) underwent 222 surgical procedures (including 48 follow-on operations, 21.6%), and 307 non-diabetic subjects underwent 366 surgical procedures (59 follow-on operations, 16.1%). The numbers of surgical procedures per 100000 people at risk and year were 18.8 and 1.8 for non-diabetic men and women, respectively, and 145.1 and 29.0 for men and women with diabetes mellitus (7.7- and 16.2-fold, respectively). Age at reconstruction surgery was 2 and 5 years earlier in non-diabetic than in diabetic men and women, respectively. Diabetic patients had a higher neuropathy score (P<0.05) and were less frequently smokers (P<0.05) than non-diabetic subjects. Diabetic subjects more frequently had distal reconstruction while proximal arterial reconstruction was more often performed in non-diabetic subjects.
Between 64.6 and 80.4% of people with diabetes and 82.3 and 88.9% of non-diabetic subjects had no complications during their in-hospital stay. Distal amputation simultaneous to arterial reconstruction was the most frequent morbidity of people with diabetes during the study (P<0.05). Despite a graft occlusion rate after femoropopliteal revascularization significantly higher than in non-diabetic people (P<0.05), diabetic people more often required lower extremity amputations (LEAs) for the same level of bypass (P<0.01). Cumulative limb salvage rates were lower in diabetic patients than in non-diabetic subjects at femoropopliteal (49.2 vs. 89.7%; P<0.001), femorodistal (73.5 vs. 95.2%; P<0.01), and distal reverse (77.9 vs. 87.3%; P<0.05) arterial reconstruction, at the end of the third year, but similar after aorto-iliac reconstruction (93.1 vs. 97.5%). A higher neuropathy score and the presence of foot ulcers were associated with significantly lower limb salvage in diabetic patients (P<0.05), but not in non-diabetic people. Survival rates after 3 years were similar between diabetic and non-diabetic populations after aorto-iliac (93.1 vs. 97.5%), femoropopliteal (97.2 vs. 90.3%), and distal reverse (93.2 vs. 98.1%) revascularization, and slightly lower in diabetic compared to non-diabetic patients after femorodistal revascularization (82.1 vs. 96.3%; P<0.05). Conclusion: Although limb salvage after arterial reconstruction is lower in diabetic than in non-diabetic subjects, particularly in those with a higher neuropathy score, this surgical approach can be applied in both diabetic and non-diabetic subjects with otherwise similar outcome. Answer: No, crossing the legs at the knees and ankles does not result in a significant decrease in peripheral arterial pressures in diabetic patients with peripheral vascular disease. This was determined through a prospective study that measured peripheral arterial pressures at the ankle and at the great toe before and after crossing the legs at the knees and ankles in diabetic patients with known peripheral vascular disease and nondiabetic control subjects. The study found that ankle-arm index and digital-arm index pressures taken in the sitting position were equal to or higher than supine pressures, and crossing the legs did not significantly change these pressures. Control subjects without peripheral vascular disease also showed no change in pressures with crossing the legs (PUBMED:8269797).
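One abstract in the record above (PUBMED:11318823) describes the oscillometric algorithm it validated: take the lowest cuff pressure at which the oscillation amplitude is maximal as the mean-pressure estimate, then read systolic pressure where the high-pressure side of the envelope falls to a characteristic ratio (0.52) of that maximum. The sketch below is a minimal illustration of that fixed-ratio idea under stated assumptions; the envelope is synthetic, and a real monitor would supply the cuff pressures and oscillation amplitudes recorded during deflation.

    import numpy as np

    # Minimal sketch of the fixed-ratio oscillometric estimate described in PUBMED:11318823.
    # The synthetic envelope below stands in for real cuff-deflation data.
    def oscillometric_systolic(cuff_pressure_mmhg, envelope, systolic_ratio=0.52):
        cuff_pressure_mmhg = np.asarray(cuff_pressure_mmhg, dtype=float)
        envelope = np.asarray(envelope, dtype=float)
        peak = envelope.max()
        # mean pressure estimate: lowest cuff pressure at which maximum oscillations occur
        map_est = cuff_pressure_mmhg[envelope == peak].min()
        # systolic estimate: on the high-pressure limb, where the envelope reaches 0.52 * peak
        high_side = cuff_pressure_mmhg > map_est
        idx = np.argmin(np.abs(envelope[high_side] - systolic_ratio * peak))
        return cuff_pressure_mmhg[high_side][idx], map_est

    cuff = np.arange(180.0, 40.0, -2.0)                      # simulated deflation, mmHg
    env = np.exp(-((cuff - 95.0) ** 2) / (2 * 25.0 ** 2))    # synthetic bell-shaped envelope
    systolic_est, map_est = oscillometric_systolic(cuff, env)
    print(systolic_est, map_est)                             # roughly 124 and 94 mmHg here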
Instruction: Does empirical Clostridium difficile infection (CDI) therapy result in false-negative CDI diagnostic test results? Abstracts: abstract_id: PUBMED:23645849 Does empirical Clostridium difficile infection (CDI) therapy result in false-negative CDI diagnostic test results? Background: Patients with suspected Clostridium difficile infection (CDI) often receive empirical therapy prior to collection of stool specimens for diagnostic testing. The likelihood that such empirical therapy might result in false-negative CDI test results is unknown. Methods: We conducted a prospective study of CDI patients to determine the time to conversion of CDI test results, including polymerase chain reaction (PCR) for toxin B genes, glutamate dehydrogenase, and toxigenic culture, from positive to negative during CDI therapy. We evaluated the frequency of and risk factors for persistence of positive CDI tests. For patients receiving empirical therapy, we assessed the frequency of conversion of positive CDI test results at the time of the test order to negative by the time clinical staff collected stool specimens for testing. Results: For 51 CDI patients, PCR, glutamate dehydrogenase, and toxigenic culture results converted to negative at similar rates. For PCR, 14%, 35%, and 45% of positive CDI tests converted to negative after 1, 2, and 3 days of treatment, respectively. Increased age and infection with North American pulsed-field gel electrophoresis strains were associated with persistent positive PCR results. For CDI patients diagnosed at the time of the test order, conversion to negative PCR results by the time clinical stool specimens were collected occurred in 4 of 9 (44%) patients who were prescribed empirical CDI therapy versus 0 of 23 (0%) who were not (P = .004). Conclusions: Empirical treatment for suspected CDI cases may result in false-negative PCR results if there are delays in stool specimen collection. abstract_id: PUBMED:27543102 False Negative Results in Clostridium difficile Testing. Background: Accurate diagnosis of Clostridium difficile infection (CDI) is paramount for patient management. The wrong diagnosis places patients at risk, delays treatment, and/ or contributes to transmission of infection in the healthcare setting. Although amplification of the toxin B gene by polymerase chain reaction (PCR) is a sensitive method for detecting toxigenic C. difficile, false negative results still occur and could impact the diagnosis and treatment of this infection. Methods: This study investigated 48 patients that tested negative for toxigenic C. difficile via GeneXpert C. difficile epi test, while simultaneously testing positive for toxigenic C. difficile via stool culture. Fifty discrepant samples were collected over a 15-month period and all C. difficile isolates were characterized by ribotype. Patient charts were reviewed to assess whether discrepant results impacted the treatment course or clinical outcome of affected patients. Results: Fifty samples of a total of 2308 samples tested in an acute healthcare facility over a 15-month period had negative PCR and positive stool culture for toxigenic C. difficile. C. difficile isolated from the discrepant samples resulted in diverse ribotyping patterns suggesting they were derived from different strains. The samples belonged to patients who were distributed evenly between age groups and wards in the hospital. In the majority of cases, the false negative C. difficile test results did not seem to impact the clinical outcome in these patients. 
Conclusions: The PCR limit of detection may impact the results of molecular methods for C. difficile detection. Both clinical and analytical sensitivity of C. difficile tests should be considered when deciding which diagnostic assay to use, and clinical correlates should be examined carefully before excluding CDI as a cause of disease. abstract_id: PUBMED:34058377 A cost of illness comparison for toxigenic Clostridioides difficile diagnosis algorithms in developing countries. Background: Availability of several commercial tests with different Clostridioides difficile targets contributes to uncertainty and controversies around the optimal diagnostic algorithm. While numerous studies have estimated the financial impact of C. difficile infection, models to guide testing strategies decisions in developing countries, where economic value significantly impacts clinical practice, are currently not available. Aim: To determine the cost of illness of different C. difficile infection (CDI) diagnostic strategies in developing countries. Methods: Cost-comparison analysis was performed to compare eleven different algorithms of CDI diagnosis. The basis of calculation was a hypothetical cohort of 1000 adult inpatients suspected of CDI. We analyzed turnaround time of test results (i.e., time from taking sample to results emission), test performance (i.e., sensitivity and specificity) and testing costs. Patients were divided in true positive, false positive, true negative and false negative in order to integrate test performance and economics effects. Additional medical costs were calculated: costs of hygiene, medication, length of stay and intensive care unit costs, based on a Brazilian University Hospital costs. CDI prevalence was considered 22.64%. Findings: From laboratory-assisted tests, simultaneous glutamate dehydrogenase (GDH) and toxin A/B rapid immunoassay arbitrated by nucleic acid amplification test (NAAT) presented the lowest cost of illness (450,038.70 USD), whereas standalone NAAT had the highest (523,709.55 USD). Empirical diagnosis only presented the highest overall cost (809,605.44 USD). Conclusion: The two-step algorithm with simultaneous GDH and toxin A/B rapid immunoassay arbitrated by NAAT seems to be the best strategy for CDI diagnosis in developing countries. abstract_id: PUBMED:30243015 Determining qualitative effect size ratings using a likelihood ratio scatter matrix in diagnostic test accuracy systematic reviews. Background Diagnostic test accuracy (DTA) systematic reviews (SRs) characterize a test's potential for diagnostic quality and safety. However, interpreting DTA measures in the context of SRs is challenging. Further, some evidence grading methods (e.g. Centers for Disease Control and Prevention, Division of Laboratory Systems Laboratory Medicine Best Practices method) require determination of qualitative effect size ratings as a contributor to practice recommendations. This paper describes a recently developed effect size rating approach for assessing a DTA evidence base. Methods A likelihood ratio scatter matrix will plot positive and negative likelihood ratio pairings for DTA studies. Pairings are graphed as single point estimates with confidence intervals, positioned in one of four quadrants derived from established thresholds for test clinical validity. These quadrants support defensible judgments on "substantial", "moderate", or "minimal" effect size ratings for each plotted study. 
The approach is flexible in relation to a priori determinations of the relative clinical importance of false positive and false negative test results. Results and conclusions This qualitative effect size rating approach was operationalized in a recent SR that assessed effectiveness of test practices for the diagnosis of Clostridium difficile. Relevance of this approach to other methods of grading evidence, and efforts to measure diagnostic quality and safety are described. Limitations of the approach arise from understanding that a diagnostic test is not an isolated element in the diagnostic process, but provides information in clinical context towards diagnostic quality and safety. abstract_id: PUBMED:25636938 Comparing the economic and health benefits of different approaches to diagnosing Clostridium difficile infection. Accurate diagnosis of Clostridium difficile infection (CDI) is essential to effectively managing patients and preventing transmission. Despite the availability of several diagnostic tests, the optimal strategy is debatable and their economic values are unknown. We modified our previously existing C. difficile simulation model to determine the economic value of different CDI diagnostic approaches from the hospital perspective. We evaluated four diagnostic methods for a patient suspected of having CDI: 1) toxin A/B enzyme immunoassay, 2) glutamate dehydrogenase (GDH) antigen/toxin AB combined in one test, 3) nucleic acid amplification test (NAAT), and 4) GDH antigen/toxin AB combination test with NAAT confirmation of indeterminate results. Sensitivity analysis varied the proportion of those tested with clinically significant diarrhoea, the probability of CDI, NAAT cost and CDI treatment delay resulting from a false-negative test, length of stay and diagnostic sensitivity and specificity. The GDH/toxin AB plus NAAT approach leads to the timeliest treatment with the fewest unnecessary treatments given, resulted in the best bed management and generated the lowest cost. The NAAT-alone approach also leads to timely treatment. The GDH/toxin AB diagnostic (without NAAT confirmation) approach resulted in a large number of delayed treatments, but results in the fewest secondary colonisations. Results were robust to the sensitivity analysis. Choosing the right diagnostic approach is a matter of cost and test accuracy. GDH/toxin AB plus NAAT diagnosis led to the timeliest treatment and was the least costly. abstract_id: PUBMED:22238444 Clinical and laboratory characteristics of Clostridium difficile infection in patients with discordant diagnostic test results. The aim of this study was to compare the clinical and laboratory characteristics of Clostridium difficile infection (CDI) in patients with discordant test results for the cytotoxin assay (CYT) and PCR assays. A retrospective study from May to August 2008 and March to May 2010 was performed. CDI was diagnosed in 128 patients. PCR increased the yield of C. difficile cases by 2-fold compared to that of the CYT assay. Fifty-six cases (44%) were detected by PCR only (CYT negative). Forty-nine percent of patients with non-NAP1 strains were detected by PCR only, compared to 28% of those infected with NAP1 strains (P < 0.05). No significant differences were found in the clinical severity of illness and outcome among patients that tested positive for CDI by both tests (CYT and PCR) compared to those that tested positive by PCR only. abstract_id: PUBMED:27124907 Repeat Clostridium difficile PCR Testing After a Negative Result.
N/A abstract_id: PUBMED:29290473 Effective utilization of C. difficile PCR and identification of clinicopathologic factors associated with conversion to a positive result in symptomatic patients. Objectives: We assess the diagnostic yield of repeat testing for C. difficile using molecular methods within 7 days of a negative test and identify specific factors associated with conversion from negative to positive test result within a 7-day period to aid in selective test utilization. Methods: A retrospective chart review of 20,866 laboratory test orders for C. difficile PCR was conducted. The test result, clinicopathologic patient features, and previous test results were recorded. Univariate and multivariate analysis was conducted to compare patients with initial and repeat negative results (n=248) to a group of patients with conversion from negative to positive results within 7 days. Results: Univariate analysis demonstrated a history of C. difficile infection, receipt of antibiotics within 14 days, and duration of hospital stay as factors significantly different between patients with repeat negative and conversion to positive C. difficile test result. Only history of C. difficile infection was significantly different upon multivariate analysis. Conclusions: Identification of prior C. difficile infection as the only factor significantly correlated with conversion from negative to positive C. difficile test result within 7 days aids in selective test utilization and reduces the costs associated with unnecessary laboratory testing. abstract_id: PUBMED:25122866 Clinical characteristics of patients who test positive for Clostridium difficile by repeat PCR. The high sensitivity of PCR assays for diagnosing Clostridium difficile infection (CDI) has greatly reduced the need for repeat testing after a negative result. Nevertheless, a small subset of patients do test positive within 7 days of a negative test. The aim of this study was to evaluate the clinical characteristics of these patients to determine when repeat testing may be appropriate. The results of all Xpert C. difficile PCR (Cepheid, Sunnyvale CA) tests performed in the clinical microbiology laboratory at New York-Presbyterian Hospital, Columbia University Medical Center (NYPH/CUMC) from 1 May 2011 through 6 September 2013, were reviewed. A retrospective case-control study was performed, comparing patients who tested positive within 7 days of a negative test result to a random selection of 50 controls who tested negative within 7 days of a negative test result. During the study period, a total of 14,875 tests were performed, of which 1,066 were repeat tests (7.2%). Eleven of these repeat test results were positive (1.0%). The only risk factor independently associated with repeat testing positive was history of a prior CDI (odds ratio [OR], 19.6 [95% confidence interval {CI}, 4.0 to 19.5], P < 0.001). We found that patients who test positive for C. difficile by PCR within 7 days of a negative test are more likely to have a history of CDI than are patients who test negative with repeat PCR. This finding may be due to the high rate of disease relapse or the increased likelihood of empirical therapy leading to false-negative results in these patients. abstract_id: PUBMED:20372142 A 10% false-negative rate for Clostridium difficile infections is too high. N/A Answer: Yes, empirical therapy for suspected Clostridium difficile infection (CDI) can result in false-negative CDI diagnostic test results.
A study found that for patients who were prescribed empirical CDI therapy, 44% had conversion to negative polymerase chain reaction (PCR) results by the time clinical stool specimens were collected, compared to 0% of those who were not prescribed empirical therapy (PUBMED:23645849). This suggests that empirical treatment may lead to false-negative results if there are delays in stool specimen collection. Additionally, the high sensitivity of PCR assays for diagnosing CDI has greatly reduced the need for repeat testing after a negative result, but empirical therapy leading to false-negative results in patients with a history of CDI has been identified as a potential issue (PUBMED:25122866).
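The cost-of-illness analysis summarized in this record (PUBMED:34058377) works by splitting a hypothetical cohort of 1000 suspected cases into true and false positives and negatives, then attaching costs to each group. The sketch below reproduces only that bookkeeping; the sensitivity, specificity, and unit costs are invented placeholders (the abstract does not report them), so the output is illustrative and will not match the published totals.

    # Minimal sketch: expected confusion-matrix counts and a simple cost roll-up for one
    # CDI testing strategy. Cohort size and prevalence follow the abstract (1000 suspected
    # cases, 22.64% prevalence); sensitivity, specificity, and unit costs are hypothetical.
    def expected_counts(n, prevalence, sensitivity, specificity):
        diseased = n * prevalence
        healthy = n - diseased
        tp = diseased * sensitivity
        fn = diseased - tp
        tn = healthy * specificity
        fp = healthy - tn
        return tp, fp, tn, fn

    def strategy_cost(tp, fp, tn, fn, test_cost, treat_cost, missed_case_cost):
        testing = (tp + fp + tn + fn) * test_cost
        treating = (tp + fp) * treat_cost          # false positives are treated unnecessarily
        missed = fn * missed_case_cost             # delayed treatment, extra stay, transmission
        return testing + treating + missed

    tp, fp, tn, fn = expected_counts(n=1000, prevalence=0.2264, sensitivity=0.95, specificity=0.98)
    total = strategy_cost(tp, fp, tn, fn, test_cost=40.0, treat_cost=150.0, missed_case_cost=2000.0)
    print(round(tp), round(fp), round(tn), round(fn), round(total, 2))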
Instruction: Does spasticity contribute to walking dysfunction after stroke? Abstracts: abstract_id: PUBMED:9598679 Does spasticity contribute to walking dysfunction after stroke? Objectives: Clinically, it is assumed that spasticity of the calf muscles interferes with walking after stroke. The aim was to examine this assumption by evaluating the contribution of spasticity in the gastrocnemius muscle to walking dysfunction in an ambulant stroke population several months after stroke. Methods: Fourteen stroke patients who were able to walk independently and 15 neurologically normal control subjects were recruited. Both resting and action stretch reflexes of the gastrocnemius muscle were investigated under conditions that simulated walking. Resting tonic stretch reflexes were measured to assess spasticity whereas action tonic stretch reflexes were measured to assess the possible contribution of spasticity to gait dysfunction. Results: Two thirds of the stroke patients exhibited resting tonic stretch reflexes which indicate spasticity, whereas none of the control subjects did. However, the stroke patients exhibited action tonic stretch reflexes that were of similar magnitude to the control subjects, suggesting that their reflex activity during walking was not different from that of control subjects. Furthermore, there was no evidence that the action stretch reflex in the stroke patients contributed a higher resistance to stretch than the control subjects. Conclusions: Whereas most of the stroke patients exhibited spasticity when measured both clinically and physiologically, they did not exhibit an increase in resistance to dorsiflexion due to exaggerated action tonic stretch reflexes. It is concluded that it is unlikely that spasticity causes problems in walking after stroke in ambulant patients. Therefore, it seems inappropriate to routinely reduce or inhibit the reflex response to improve functional movement in stroke rehabilitation. Factors other than spasticity should be considered when analysing walking after stroke, so that appropriate treatment is provided to patients. abstract_id: PUBMED:37860215 Effect of spasticity of the ankle plantar flexors on the walking speed of hemiplegic stroke patients after maximum walking speed exercises. Yamada T, Ohta M, Tamari M. Effect of spasticity of the ankle plantar flexors on the walking speed of hemiplegic stroke patients after maximum walking speed exercises. Jpn J Compr Rehabil Sci 2021; 12: 64-69. Objective: This study examined the effect of ankle plantar flexor spasticity on the walking speed of hemiplegic stroke patients immediately following maximum walking speed exercises. Methods: A total of 23 hemiplegic stroke patients were divided into two groups based on the presence (n = 13) or absence (n = 10) of ankle plantar flexor spasticity on the paralyzed side. Gait speed, propulsive force during pre-swing, paretic side ankle plantar flexion movement during pre-swing, paretic side ankle dorsiflexion angle during the stance phase, angular velocity of paretic side dorsiflexion during the stance phase, paretic side trailing limb angle in the terminal stance, paretic side plantar flexion angle in the terminal stance, and the timing of maximum dorsiflexion of the ankle joint on the paretic side were measured before and after the maximum walking speed exercises, using a three-dimensional motion analyzer. Results: In the spasticity group, no significant improvement was observed in any of the categories. 
In contrast, in the non-spasticity group, significant improvement was observed in all categories, except for the paretic side ankle dorsiflexion angle. Conclusion: This study showed that maximum walking speed exercises immediately improved walking speed in hemiplegic stroke patients without ankle plantar flexor spasticity. abstract_id: PUBMED:34408174 Characterization and clinical implications of ankle impedance during walking in chronic stroke. Individuals post-stroke experience persisting gait deficits due to altered joint mechanics, known clinically as spasticity, hypertonia, and paresis. In engineering, these concepts are described as stiffness and damping, or collectively as joint mechanical impedance, when considered with limb inertia. Typical clinical assessments of these properties are obtained while the patient is at rest using qualitative measures, and the link between the assessments and functional outcomes and mobility is unclear. In this study we quantify ankle mechanical impedance dynamically during walking in individuals post-stroke and in age-speed matched control subjects, and examine the relationships between mechanical impedance and clinical measures of mobility and impairment. Perturbations were applied to the ankle joint during the stance phase of walking, and least-squares system identification techniques were used to estimate mechanical impedance. Stiffness of the paretic ankle was decreased during mid-stance when compared to the non-paretic side; a change independent of muscle activity. Inter-limb differences in ankle joint damping, but not joint stiffness or passive clinical assessments, strongly predicted walking speed and distance. This work provides the first insights into how stroke alters joint mechanical impedance during walking, as well as how these changes relate to existing outcome measures. Our results inform clinical care, suggesting a focus on correcting stance phase mechanics could potentially improve mobility of chronic stroke survivors. abstract_id: PUBMED:38160498 Defining characteristics of independent walking persons after stroke presenting with different arm swing coordination patterns. Background: Persons after stroke present with an altered arm swing during walking. Given the known influence of the arm swing on gait, it is important to identify the characteristics of persons with stroke with different arm-to-leg coordination patterns during walking. Methods: Twenty-five persons after stroke walked on a self-paced treadmill at comfortable walking speed. The frequency of shoulder movements per stride was detected by Fast Fourier transform analysis on the kinematic data for hemiplegic shoulder movements in the sagittal plane. An independent-sample t-test or Mann-Whitney U test was used to compare clinical and biomechanical parameters between identified subgroups. Results: Two earlier described subgroups based on the number of shoulder flexion-extension movements during one stride could be confirmed. Participants in the 1:1 ratio subgroup (one arm swing during one stride, N = 15) presented with less upper limb impairment and less spasticity of the elbow extensors (p = 0.012) than the participants in the 2:1 ratio subgroup (two arm swings during one stride, N = 9). Although not significant, the participants in the 1:1 subgroup also seemed to have less spasticity of the shoulder internal rotators (p = 0.06) and less walking variability based on the standard deviation of the step width.
Further research on a greater sample should confirm these findings. Conclusion: Fast Fourier transform analysis was used to identify subgroups based on sagittal shoulder kinematics during walking. The clinical and gait related differences between the identified subgroups can be taken into account in future research investigating post-stroke gait interventions aiming to improve the arm swing. abstract_id: PUBMED:36498486 Effects of Combination of Functional Electric Stimulation and Robotic Leg Movement Using Dynamic Tilt Table on Walking Characteristics in Post-Stroke Patients with Spastic Hemiplegia: A Randomized Crossover-Controlled Trial. Background: Spastic hemiplegia causes slow and unstable walking in post-stroke patients. Dynamic tilt table with robotic leg movement (DTTRLM) is safe and effective in improving walking. Functional electric stimulation (FES) improves walking speed in post-stroke patients with spastic hemiplegia. The aim of this study was to determine the effects of combined DTTRLM + FES on walking speed compared with DTTRLM alone. Methods: Twenty post-stroke patients were randomly assigned to receive either a single session of stepping + FES treatment or a single session of stepping alone treatment. After a one-week washout period, the same two groups underwent a single session of the other treatment, and the same measurements were taken. We measured walking speed, cadence, and the number of steps in a 10 m walking test (10MWT) and assessed Modified Ashworth Scale (MAS), Fugl-Meyer Assessment (FMA), and range of motion (ROM) before and after the intervention. Results: Stepping + FES significantly improved walking speed, number of steps, and ankle inversion ROM, compared with stepping alone. Adverse events were not observed in any subject. Conclusions: Robotic stepping therapy combined with FES significantly improved 10 m walking speed (10MWS) compared with stepping only in patients with post-stroke and spastic hemiplegia. Further studies are needed to determine the long-term effects of the combination treatment. abstract_id: PUBMED:33886640 Movement smoothness in chronic post-stroke individuals walking in an outdoor environment-A cross-sectional study using IMU sensors. Background: Walking speed is often used in the clinic to assess the level of gait impairment following stroke. Nonetheless, post-stroke individuals may employ the same walking speed but at a distinct movement quality. The main objective of this study was to explore a novel movement quality metric, the estimation of gait smoothness by the spectral arc length (SPARC), in individuals with a chronic stroke displaying mild/moderate or severe motor impairment while walking in an outdoor environment. Also, to quantify the correlation between SPARC, gait speed, motor impairment, and lower limb spasticity focused on understanding the relationship between the movement smoothness metric and common clinical assessments. Methods: Thirty-two individuals with a chronic stroke and 32 control subjects participated in this study. The 10 meters walking test (10 MWT) was performed at the self-selected speed in an outdoor environment. The 10 MWT was instrumented with an inertial measurement unit system (IMU), which afforded the extraction of trunk angular velocities (yaw, roll, and pitch) and subsequent SPARC calculation. Results: Movement smoothness was not influenced by gait speed in the control group, indicating that SPARC may constitute an additional and independent metric in the gait assessment. 
Individuals with a chronic stroke displayed reduced smoothness in the yaw and roll angular velocities (lower SPARC) compared with the control group. Also, severely impaired participants presented greater variability in smoothness along the 10 MWT. In the stroke group, a smoother gait in the pitch angular velocity was correlated with lower limb spasticity, likely indicating adaptive use of spasticity to maintain the pendular walking mechanics. Conversely, reduced smoothness in the roll angular velocity was related to pronounced spasticity. Conclusions: Individuals with a chronic stroke displayed reduced smoothness in the yaw and roll angular velocities while walking in an outdoor environment. The quantification of gait smoothness using the SPARC metric may represent an additional outcome in clinical assessments of gait in individuals with a chronic stroke. abstract_id: PUBMED:10206604 Treadmill walking with partial body weight support versus floor walking in hemiparetic subjects. Objective: To compare the gait of hemiparetic subjects walking on a treadmill with various body weight supports and walking on the floor. Design: Hemiparetic subjects walked on a treadmill, secured in a harness, with no body weight support and with 15% and 30% body weight relief, and walked on a floor. Setting: Kinematic laboratory of a department of rehabilitation. Subjects: Eighteen hemiparetic stroke patients. Main Outcome Measures: Gait cycle parameters and kinesiologic electromyogram of six muscles of the affected side and of two muscles of the nonaffected side. Results: On the treadmill, patients walked more slowly because of a reduced cadence, with a longer single stance period of the paretic limb, more symmetrically, and with a larger hip extension (multivariate profile analysis, p<.05). The mean functional activities of the gastrocnemius muscle and of the first crest of the erector spinae of the paretic side were smaller on the treadmill (univariate test, p<.05). Further, the premature activity of the gastrocnemius muscle, indicating spasticity, was less on the treadmill (univariate test, p<.05); correspondingly the qualitative muscle pattern analysis revealed less co-contraction between the gastrocnemius and tibialis anterior muscles in 11 of the 18 subjects. Conclusions: Treadmill training with partial body weight support in hemiparetic subjects allows them to practice a favorable gait characterized by a greater stimulus for balance training because of the prolonged single stance period of the affected limb, a higher symmetry, less plantar flexor spasticity, and a more regular activation pattern of the shank muscles as compared with floor walking. abstract_id: PUBMED:24525883 Input-output properties of the soleus stretch reflex in spastic stroke patients and healthy subjects during walking. The input-output properties of the soleus stretch reflex during walking and sitting were examined in 11 spastic stroke patients and 10 healthy subjects. In the early swing phase, the threshold of the input-output relation was significantly lower in the patient group - on average 108°/s compared to 309°/s in the control group (P=0.02). The slope of the input-output properties was unchanged in patients and in control subjects (P=0.39). In stroke patients we found a minor, but significant, difference in threshold with 108°/s in the early swing phase compared to 74°/s in sitting position at matched TA EMG activity (P=0.01).
Furthermore, we found an unchanged slope of 0.25μVs/° during walking and 0.28μVs/° in the sitting position (P=0.21). In contrast, control subjects showed a large difference in the threshold in the early swing phase during walking (309°/s) compared with findings in the sitting position at a matched TA EMG activity (71°/s, P=0.004). The slope was unchanged during walking and when the subject was sitting (P=0.22). There was a significant correlation between clinical spasticity score and stretch reflex threshold in the early swing phase (ρ=-0.61, P=0.04) and between clinical spasticity score and the slope in the early swing phase (ρ=0.72, P=0.009). It is concluded that in the early swing phase, the markedly reduced soleus stretch reflex threshold was preventing the stroke patients from making fast dorsiflexion of the foot at the ankle joint and thereby impairing the walking speed. abstract_id: PUBMED:17453990 The effect of spasticity, sense and walking aids in falls of people after chronic stroke. Objective: To study the effects of spasticity, sensory impairment, and type of walking aid on falls in community dwellers with chronic stroke. Methods: Functional Independence Measure (FIM) Instrument, Joint Position Sense Evaluation (JPS), the Rivermead motor assessment scale (RMA), Ashworth Scale, Tinetti Assessment Tool were used to assess 100 cases. Results: Fifty-three of the cases were grouped as nonfallers, 36 as one-time fallers and 11 as repeat fallers. These 3 groups were found to be different from each other in respect to FIM, Tinetti test and RMA (p < 0.001). In respect to knee JPS, nonfallers and one-time faller groups were found to be different from repeat fallers (p = 0.001). There is a difference among the groups in respect to Ashworth assessment (p < 0.001), use of walking aid (p = 0.01) and type of walking aid (p = 0.01). Some 43% of the cases use a walking aid (58.1% cane, 41.9% high cane). According to Ordinal logistic regression analysis, it was found that the possibility of fall increased (p < 0.01), as the value of spasticity increased while the possibility of the fall of the individuals with stroke decreased (p < 0.00 - 0.01) as Tinetti, RMA and FIM variables increased. Conclusions: In respect to falls, spasticity is also an indicator for chronic stroke patients, as is motor impairment, functional situation, impairment of balance and walking. Sensory impairment, using a walking aid and the type were found to be ineffective. abstract_id: PUBMED:20434604 Functional electrical stimulation to dorsiflexors and plantar flexors during gait to improve walking in adults with chronic hemiplegia. Objective: To determine whether functional electrical stimulation (FES) timed to activate the dorsiflexors and plantar flexors during gait improves the walking of adults with hemiplegia. Design: Randomized crossover trial. Setting: Outpatient rehabilitation clinic. Participants: Adults with hemiplegia (N=28) with a mean age +/- SD of 60+/-10.9 years and 4.9+/-3.8 years postincident. Interventions: Intervention "A" included 3 months of wearing the FES system, which activated automatically during walking for 6 to 8h/d, 7d/wk, plus walking 1h/d, 6d/wk. Intervention "B" included 3 months of walking 1h/d, 6d/wk without FES. Of the 28 patients who completed the study, 15 were randomly assigned to group A-B, 13 to group B-A. Crossover occurred at 3 months. Main Outcome Measures: Variables were measured at pretreatment, 3 months, and 6 months.
Three primary outcomes were selected a priori and included 2 functional variables, the 6-minute walk test and the Emory Functional Ambulatory Profile, and 1 participation variable, the Stroke Impact Scale. Secondary impairment measures included muscle strength and spasticity. Assessments were done without electrical stimulation. Results: In phase 1, patients who received treatment A (A-B group) showed improvement compared with patients who received treatment B (B-A group) on the 6-minute walk test (P=.02), Emory Functional Ambulatory Profile (P=.08), and Stroke Impact Scale (P=.03). In phase 2, the A-B group maintained improvement in all 3 primary outcomes even without FES. Both groups improved significantly on all primary outcome measures, comparing 6-month to initial measures (P≤.05). Conclusions: An FES system that stimulates dorsiflexors and plantar flexors similar to the timing of typical adult gait, combined with daily walking, can improve the walking ability of adults with hemiplegia. Answer: The relationship between spasticity and walking dysfunction after stroke is complex and varies across studies. Some research suggests that spasticity may not significantly contribute to walking dysfunction in ambulant stroke patients. For instance, a study found that while most stroke patients exhibited spasticity, they did not show an increase in resistance to dorsiflexion due to exaggerated action tonic stretch reflexes during walking, leading to the conclusion that spasticity is unlikely to cause problems in walking after stroke in ambulant patients (PUBMED:9598679). This suggests that factors other than spasticity should be considered when analyzing walking dysfunction after stroke. However, other studies indicate that spasticity can affect walking speed and gait. For example, one study showed that maximum walking speed exercises immediately improved walking speed in hemiplegic stroke patients without ankle plantar flexor spasticity, but not in those with spasticity (PUBMED:37860215). This suggests that spasticity of the ankle plantar flexors may have a negative impact on the walking speed of stroke patients. Further research has explored the role of joint mechanical impedance, which includes stiffness and damping, in walking dysfunction. It was found that inter-limb differences in ankle joint damping, but not joint stiffness or passive clinical assessments, strongly predicted walking speed and distance, suggesting that altered joint mechanics post-stroke, which include spasticity, can affect mobility (PUBMED:34408174). Additionally, interventions such as functional electrical stimulation (FES) combined with robotic leg movement have been shown to improve walking characteristics in post-stroke patients with spastic hemiplegia (PUBMED:36498486), indicating that addressing spasticity can have positive effects on walking function. In summary, while spasticity may not be the sole factor contributing to walking dysfunction after stroke, it can have an impact on walking speed and gait, and interventions targeting spasticity can potentially improve walking function in stroke patients.
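One abstract in this record (PUBMED:33886640) scores gait smoothness with the spectral arc length (SPARC) of trunk angular velocity recorded by an IMU. The sketch below follows the commonly published SPARC formulation; the 10 Hz cutoff, 0.05 amplitude threshold, zero-padding level, and 100 Hz sampling rate are conventional defaults assumed here, not parameters reported in the abstract, and the input traces are synthetic.

    import numpy as np

    # Minimal sketch of the spectral arc length (SPARC) smoothness metric applied to an
    # angular-velocity signal, following the commonly published formulation. Cutoff (10 Hz),
    # amplitude threshold (0.05) and padding are conventional defaults, not study parameters.
    def sparc(signal, fs, padlevel=4, fc=10.0, amp_th=0.05):
        nfft = int(2 ** (np.ceil(np.log2(len(signal))) + padlevel))
        freq = np.arange(0, fs, fs / nfft)
        mag = np.abs(np.fft.fft(signal, nfft))
        mag = mag / mag.max()                      # normalized magnitude spectrum
        sel = freq <= fc                           # keep the band below the cutoff
        freq_sel, mag_sel = freq[sel], mag[sel]
        keep = np.nonzero(mag_sel >= amp_th)[0]    # adaptive cutoff: drop the negligible tail
        freq_sel = freq_sel[: keep[-1] + 1]
        mag_sel = mag_sel[: keep[-1] + 1]
        # arc length of the normalized spectrum; more negative means less smooth
        df = np.diff(freq_sel) / (freq_sel[-1] - freq_sel[0])
        dm = np.diff(mag_sel)
        return -np.sum(np.sqrt(df ** 2 + dm ** 2))

    fs = 100.0                                     # Hz, assumed IMU sampling rate
    t = np.arange(0, 10, 1 / fs)
    smooth_trace = np.sin(2 * np.pi * 1.0 * t)     # clean 1 Hz oscillation
    jerky_trace = smooth_trace + 0.3 * np.sin(2 * np.pi * 6.0 * t)
    print(sparc(smooth_trace, fs), sparc(jerky_trace, fs))   # jerky trace gives a more negative SPARC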
Instruction: Vitamin D Deficiency and Hashimoto's Thyroiditis in Children and Adolescents: a Critical Vitamin D Level for This Association? Abstracts: abstract_id: PUBMED:26316435 Vitamin D Deficiency and Hashimoto's Thyroiditis in Children and Adolescents: a Critical Vitamin D Level for This Association? Objective: Vitamin D has been suggested to be active as an immunomodulator in autoimmune diseases such as Hashimoto's thyroiditis (HT). The goal of the present study was to investigate the vitamin D status in HT patients. Methods: This prevalence case-control study was conducted on 90 patients with HT (of ages 12.32 ± 2.87 years) and 79 age-matched healthy controls (11.85 ± 2.28 years). Serum 25-hydroxyvitamin D3 [25(OH)D3] levels were measured in all 169 subjects. Results: The prevalence of vitamin D deficiency in HT patients (64 of 90; 71.1%) was significantly higher than that in the control group (41 of 79; 51.9%) (p=0.025). Mean serum 25(OH)D3 level in the HT group was significantly lower compared to the control group (16.67 ± 11.65 vs. 20.99 ± 9.86 ng/mL, p=0.001). HT was observed 2.28 times more frequently in individuals with 25(OH)D3 levels <20 ng/mL (OR: 2.28, CI: 1.21-4.3). Conclusion: Vitamin D deficiency is associated with HT in children and adolescents. Levels lower than 20 ng/mL seem to be critical. The mechanism for this association is not clear. abstract_id: PUBMED:22876540 Vitamin D status in children with Hashimoto thyroiditis. Objective: To investigate vitamin D status in children with Hashimoto thyroiditis. Subjects And Methods: The study group consisted of 78 children recently diagnosed as Hashimoto thyroiditis and 74 subjects as the control group. Parameters of calcium metabolism, thyroid function tests, and 25-hydroxyvitamin D [25(OH)D] levels were measured. Results: Vitamin D deficiency rate was significantly higher in the Hashimoto group compared with the control subjects (73.1% vs. 17.6%, p < 0.0001). In the Hashimoto group, mean 25(OH)D levels were significantly lower compared with the control group (31.2 +/- 11.5 versus 57.9 +/- 19.7 nmol/L, p < 0.001) and was inversely correlated with the anti-thyroid peroxidase (anti-TPO) levels (r = -0.30, p = 0.007). Conclusion: The higher vitamin D deficiency rates besides lower vitamin D levels in the Hashimoto group together with the inverse correlation between vitamin D and anti-TPO suggest that vitamin D deficiency may have a role in the autoimmune process in Hashimoto thyroiditis in children. abstract_id: PUBMED:25854833 Meta-analysis of the association between vitamin D and autoimmune thyroid disease. Although emerging evidence suggests that low levels of vitamin D may contribute to the development of autoimmune disease, the relationship between vitamin D reduction and autoimmune thyroid disease (AITD), which includes Graves' disease (GD) and Hashimoto thyroiditis (HT), is still controversial. The aim was to evaluate the association between vitamin D levels and AITD through systematic literature review. We identified all studies that assessed the association between vitamin D and AITD from PubMed, Embase, CENTRAL, and China National Knowledge Infrastructure (CNKI) databases. We included studies that compared vitamin D levels between AITD cases and controls as well as those that measured the odds of vitamin D deficiency by AITD status. We combined the standardized mean differences (SMD) or the odds ratios (OR) in a random effects model.
Twenty case-control studies provided data for a quantitative meta-analysis. Compared to controls, AITD patients had lower levels of 25(OH)D (SMD: -0.99, 95% CI: -1.31, -0.66) and were more likely to be deficient in 25(OH)D (OR 2.99, 95% CI: 1.88, 4.74). Furthermore, subgroup analysis results showed that GD and HT patients also had lower 25(OH)D levels and were more likely to have a 25(OH)D deficiency, suggesting that low levels of serum 25(OH)D were related to AITD. abstract_id: PUBMED:27734704 Vitamin D and autoimmune thyroid diseases From the recent literature data it may be concluded that vitamin D deficiency is associated with increased risk of thyroid autoimmunity development and thus should be considered as an additional important risk factor for both chronic autoimmune thyroiditis (including postpartum thyroiditis) and Graves' disease. A higher risk of Graves' disease development is also associated with several polymorphisms in the gene encoding for vitamin D binding protein and for the specific receptor of active form of vitamin D - 1,25-(OH)2D3 in the respective target cells. Whether careful supplementation with vitamin D aimed to normalize low 25(OH)D levels brings preventive or therapeutic effect is subject to further research. Key words: autoimmune thyroiditis - D vitamin deficiency - D vitamin supplementation - Graves' disease. abstract_id: PUBMED:32781825 Vitamin D in Hypothyroid Patients and Association between Vitamin D and Anti-TPO in Autoimmune Hypothyroidism. Background: To find out the incidence of vitamin D deficiency in autoimmune hypothyroidism and the correlation between vitamin D and anti-TPO antibodies. Methods: This observational study was undertaken in the Biochemistry Department of ESI-PGIMSR & Model Hospital Andheri Mumbai from March 2015 to September 2015. Vitamin D and anti-TPO levels of 101 known cases of hypothyroidism were measured on a Cobas e411. Patients included were between 20 and 50 years old with 63 females and 38 males. Results: Twenty-two of 101 patients were anti-TPO positive with a mean vitamin D level 10.86 ± 5.91 and mean anti-TPO 59.45 ± 12.46. A negative correlation of r = -0.4949 was found between vitamin D and anti-TPO in patients with anti-TPO positive hypothyroidism. Conclusions: Low vitamin D levels were found in hypothyroid patients with severe deficiency seen in anti-TPO positive cases. abstract_id: PUBMED:26768630 Assessment of Vitamin D Level in Autoimmune Thyroiditis Patients and a Control Group in the Polish Population. Background: Vitamin D, known for its role in calcium-phosphorus homeostasis, is also a significant immunomodulatory factor. Vitamin D deficiency has been reported in some autoimmune disorders. Recently, vitamin D level in autoimmune thyroiditis (HT - Hashimoto's thyroiditis) has become the subject of researchers' interest. Objectives: This study aims to assess vitamin 25-OH-D3 levels in HT patients in comparison to a control group in the Polish population. This would be the first attempt conducted in this region with such poor sunlight exposure. Material And Methods: The group we studied consisted of 62 subjects diagnosed with HT (mean age 49.15±15.51) and 32 healthy controls matched with age and sex (mean age 46.09±14.32). All blood samples were collected in the first quarter of the year to minimize the impact of seasonal fluctuations of vitamin D concentrations. Results: In the HT group the mean vitamin D level was 20.09 nmol/L (SD±12.66), compared to 30.31 nmol/L (SD±19.49) in the controls, p=0.014.
None of the patients or controls was vitamin D sufficient (75-125 nmol/L). The deficiency (<50 nmol/L) was significantly more common among HT patients compared to the controls (61/62, 98.4% vs. 27/32, 84.4%; p=0.029). Conclusions: In conclusion, we found that serum vitamin D concentration is significantly lower in HT patients in comparison to the control group. This suggests vitamin D deficit as one of the risk factors for HT development. Observed vitamin D level was also low in the control group, therefore wider supplementation in the general population should be recommended. abstract_id: PUBMED:35836431 Association Between Vitamin D Deficiency and Autoimmune Thyroid Disorder: A Systematic Review. Despite recent evidence that low serum 25-hydroxyvitamin D (25(OH)D) levels and deficits may influence the emergence of autoimmune thyroid disorders (AITD), the relationship between vitamin D deficiency and Graves' disease (GD) and Hashimoto's thyroiditis (HT), which comprise AITD, remains unclear. We retrieved studies that described vitamin D association with HT and GD from PubMed/Medline, Google Scholar, and the Cochrane Library. We included research studies that compared vitamin D levels and deficiency or sufficiency between AITD cases such as HT and GD cases and control subjects. The final assessment comprised 11 studies that recruited 1952 AITD cases (HT and GD) that were published between 2011 and 2021; these were included in the final review. All the included studies were observational, and more precisely, case-control studies that recruited healthy subjects as well as controls. The majority of the studies reviewed indicated that HT and GD patients have a greater prevalence of vitamin D deficiency or low serum 25(OH)D levels. Two studies failed to establish an association between vitamin D deficiency and HT and GD disease. In conclusion, vitamin D deficiency or insufficiency can increase the rate of autoimmune diseases such as HT and GD. Randomized controlled trials with a longer follow-up period are needed to confirm the causal relationship between autoimmune thyroid disorder and vitamin D and to provide more reliable insights into the relevance of treatment effects of vitamin D therapy or supplementation. abstract_id: PUBMED:27086659 Regulatory T Cells and Vitamin D Status in Children with Chronic Autoimmune Thyroiditis. Objective: It is suggested that vitamin D is one of the factors that can regulate the function of Treg cells. In this study, the relationships between Treg cells and vitamin D levels were investigated in pediatric chronic autoimmune thyroiditis (CAT) patients. Methods: Thirty-two children with CAT and 24 healthy subjects were studied. FOXP3 expressing CD4+CD25+high Foxp3+ T cells were identified as Treg cells. At diagnosis, 25-hydroxycholecalciferol (25OHD3) levels were determined in all patients. FOXP3 expression was measured before and after vitamin D replacement therapy in patients having low levels of 25OHD3. Results: In the CAT patients, Treg cell levels did not differ from the control group, while the frequency of vitamin D deficiency was higher and FOXP3 molecule expression was lower. FOXP3 molecule expression significantly increased in CAT patients having vitamin D deficiency who were given vitamin D replacement. Conclusion: FOXP3 expression is decreased in pediatric CAT patients. This reduction seems to be associated with vitamin D levels. Vitamin D can play a role in enhancing natural Treg cell functions.
abstract_id: PUBMED:25308530 Is vitamin D a player or not in the pathophysiology of autoimmune thyroid diseases? 1,25-Dihydroxyvitamin D is a steroid hormone derived from vitamin D, playing an important role in maintaining an adequate serum level of calcium and phosphorus. It is now clear that vitamin D exerts an endocrine action on the cells of the immune system, generating anti-inflammatory and immunoregulatory effects. The mechanisms underlying the role of vitamin D in autoimmunity are not completely understood. Lower vitamin D levels have been found in several autoimmune diseases, such as rheumatoid arthritis, systemic lupus erythematosus, systemic sclerosis, type 1 diabetes mellitus, multiple sclerosis, inflammatory bowel diseases, autoimmune thyroid diseases (i.e. Hashimoto's thyroiditis and Graves' disease) and autoimmune gastritis. Several genetic studies have demonstrated an association between thyroid autoimmunity susceptibility and gene polymorphisms of vitamin D receptor, vitamin D binding protein, 1-alpha-hydroxylase and 25-hydroxylase. Of note, some papers do not confirm this connection. With regard to the role of vitamin D in autoimmune thyroid diseases, available data remain controversial. Only few reports have analyzed the supposed association between autoimmune thyroid diseases and vitamin D concentration with inconclusive results. In our experience, low serum levels of vitamin D do not correlate either with Hashimoto's thyroiditis or with Graves' disease. The inability to achieve an unambiguous conclusion is in part due to the limitations in study design. In fact, most of the studies are cross-sectional surveys with a small number of subjects. In addition, the heterogeneity of the study population, seasonal variation of blood sampling, inter-method analytical variability of vitamin D assays and different definitions of vitamin D deficiency/insufficiency contribute to contradicting results. Therefore, further randomized, controlled, prospective trials are needed in order to demonstrate the causality of vitD in AITD and consequently the role of vitamin D supplementation in prevention or improvement of AITD, providing also information on the best formulation, dose and timing of supplementation. abstract_id: PUBMED:28092021 Sunshine vitamin and thyroid. Vitamin D exerts its canonical roles on the musculoskeletal system and in the calcium/phosphorus homeostasis. In the last years, increasing evidences suggested several extra-skeletal actions of this hormone, indicating that vitamin D may produce effects in almost all the body tissues. These are mediated by the presence of vitamin D receptor (VDR) and thanks to the presence of the 1-α-hydroxylase, the protein that converts the 25-hydroxyvitamin (calcidiol) to the active form 1,25-dihydroxyvitamin (calcitriol). Several studies evaluated the possible role of vitamin D in the pathogenesis of thyroid diseases, and this review will focus on the available data of the literature evaluating the association between vitamin D and thyroid function, vitamin D and autoimmune thyroid diseases, including Hashimoto's thyroiditis, Graves' disease and post-partum thyroiditis, and vitamin D and thyroid cancer. Answer: The association between vitamin D deficiency and Hashimoto's Thyroiditis (HT) in children and adolescents has been investigated in several studies. The critical vitamin D level for this association appears to be less than 20 ng/mL. 
A study conducted on 90 patients with HT and 79 age-matched healthy controls found that the prevalence of vitamin D deficiency in HT patients was significantly higher (71.1%) compared to the control group (51.9%), and HT was observed 2.28 times more frequently in individuals with 25(OH)D3 levels below 20 ng/mL (PUBMED:26316435). Another study with 78 children diagnosed with HT and 74 control subjects reported that vitamin D deficiency rates were significantly higher in the Hashimoto group (73.1% vs. 17.6%), and mean 25(OH)D levels were significantly lower in the Hashimoto group. There was also an inverse correlation between vitamin D and anti-thyroid peroxidase (anti-TPO) levels (PUBMED:22876540). A meta-analysis of twenty case-control studies found that AITD patients, including those with HT, had lower levels of 25(OH)D and were more likely to be deficient in vitamin D compared to controls (PUBMED:25854833). Furthermore, literature data suggest that vitamin D deficiency is associated with an increased risk of thyroid autoimmunity development and should be considered an important risk factor for autoimmune thyroid diseases (PUBMED:27734704). An observational study also found a negative correlation between vitamin D and anti-TPO in patients with anti-TPO positive hypothyroidism, indicating low vitamin D levels in these patients (PUBMED:32781825). In the Polish population, HT patients had significantly lower serum vitamin D concentrations compared to controls, suggesting vitamin D deficit as a risk factor for HT development (PUBMED:26768630). In conclusion, vitamin D deficiency is associated with HT in children and adolescents, and levels lower than 20 ng/mL seem to be critical for this association.
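The 2.28-fold figure cited above can be reproduced from the published 2x2 counts (deficiency in 64 of 90 HT patients vs 41 of 79 controls); the short Python check below recovers the reported OR 2.28 with 95% CI 1.21-4.3 up to rounding. It is a back-of-the-envelope verification, not part of the original analysis.

import math

a, b = 64, 90 - 64   # HT patients: deficient / not deficient
c, d = 41, 79 - 41   # controls: deficient / not deficient

or_ = (a * d) / (b * c)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = (math.exp(math.log(or_) + z * se_log) for z in (-1.96, 1.96))
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # OR = 2.28 (95% CI 1.21-4.30)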
Instruction: Is the PCOS diagnosis solved by ESHRE/ASRM 2003 consensus or could it include ultrasound examination of the ovarian stroma? Abstracts: abstract_id: PUBMED:17053004 Is the PCOS diagnosis solved by ESHRE/ASRM 2003 consensus or could it include ultrasound examination of the ovarian stroma? Background: The clinical heterogeneity of polycystic ovary syndrome (PCOS) is mirrored by the unceasing debate on the most appropriate diagnostic criteria. Methods And Results: To highlight differences and inconsistencies between NIH and ESHRE/ASRM criteria, we applied them to 375 patients with oligo/amenorrhoea and signs of hyperandrogenism. Among them, we identified 273 women with PCOS according to NIH, whereas up to 345 patients fulfilled ESHRE/ASRM criteria. The 72 patients, constituting the gap between the two classifications, exhibited a lower expression of clinical signs compared with the 273 patients matching both criteria. To the whole group, we then applied the ESHRE/ASRM criteria modified to include an easily reproducible ultrasound examination of the ovarian stroma (UCSC criteria). In this way, we identified 30 women who were healthy according to all criteria, 37 affected by PCOS according only to the ESHRE/ASRM Consensus, 35 affected according only to the UCSC and ESHRE/ASRM criteria and 273 who were considered to have PCOS by all criteria. These groups showed a progressively increasing expression of PCOS features. Conclusion: In the grey area between NIH and ESHRE/ASRM classifications, UCSC criteria could identify a subgroup of women, missed by NIH criteria, with more pronounced stigmas than those identified by ESHRE/ASRM criteria alone, and who may profit more from a targeted therapy. abstract_id: PUBMED:22153789 Consensus on women's health aspects of polycystic ovary syndrome (PCOS): the Amsterdam ESHRE/ASRM-Sponsored 3rd PCOS Consensus Workshop Group. Polycystic ovary syndrome (PCOS) is the most common endocrine disorder in females, with a high prevalence. The etiology of this heterogeneous condition remains obscure, and its phenotype expression varies. Two widely cited previous ESHRE/ASRM sponsored PCOS consensus workshops focused on diagnosis (published in 2004) and infertility management (published in 2008), respectively. The present third PCOS consensus report summarizes current knowledge and identifies knowledge gaps regarding various women's health aspects of PCOS. Relevant topics addressed-all dealt with in a systematic fashion-include adolescence, hirsutism and acne, contraception, menstrual cycle abnormalities, quality of life, ethnicity, pregnancy complications, long-term metabolic and cardiovascular health, and finally cancer risk. Additional, comprehensive background information is provided separately in an extended online publication. abstract_id: PUBMED:17425838 The ESHRE/ASRM consensus on polycystic ovary syndrome (PCOS)--an extended critical analysis. The 'ESHRE/ASRM consensus on diagnosis, nomenclature and long-term health risks of polycystic ovarian syndrome (PCOS)' (conference in Rotterdam, Netherlands, March, 2003), which was criticized shortly after its oral presentation in 2003, is still being debated following its publication in 2004. Therefore, an extended, intensive analysis of all parameters and their combinations has been performed, differentiating between inclusion and exclusion criteria. By doing this, the profound significance of the endocrinological part of the syndrome is highlighted. 
The nomenclature, semantic meaning of terms, definitions, structural balancing, specificity, practicability, flexibility, limitations and reproducibility for both individualization, as well as grouping from the clinical and scientific perspective, have been examined in detail. This analysis supports previous critical comments, and reaches the conclusion that the recommendations of the consensus lack accuracy and specificity, a weakness that results in the risk of overdiagnosis and in a lack of reproducibility. Altogether, they fail to fulfil the criteria of universal guidelines on several levels. Instead, a profound paradigm shift is demanded, from an inconsistent perception to a more systematic, balanced and comprehensive view. A short indication on an alternative sweeping concept of a strictly defined, therapy-targeted classification and diagnostic procedure of female functional androgenization is presented. abstract_id: PUBMED:26918000 The Thessaloniki ESHRE/ESGE consensus on diagnosis of female genital anomalies. What is the recommended diagnostic work-up of female genital anomalies according to the European Society of Human Reproduction and Embryology (ESHRE)/European Society for Gynaecological Endoscopy (ESGE) system? The ESHRE/ESGE consensus for the diagnosis of female genital anomalies is presented. Accurate diagnosis of congenital anomalies still remains a clinical challenge due to the drawbacks of the previous classification systems and the non-systematic use of diagnostic methods with varying accuracy, with some of them quite inaccurate. Currently, a wide range of non-invasive diagnostic procedures are available, enriching the opportunity to accurately detect the anatomical status of the female genital tract, as well as a new objective and comprehensive classification system with well-described classes and sub-classes. The ESHRE/ESGE Congenital Uterine Anomalies (CONUTA) Working Group established an initiative with the goal of developing a consensus for the diagnosis of female genital anomalies. The CONUTA working group and imaging experts in the field have been appointed to run the project. The consensus is developed based on (1) evaluation of the currently available diagnostic methods and, more specifically, of their characteristics with the use of the experts panel consensus method and of their diagnostic accuracy performing a systematic review of evidence and (2) consensus for (a) the definition of where and how to measure uterine wall thickness and (b) the recommendations for the diagnostic work-up of female genital anomalies, based on the results of the previous evaluation procedure, with the use of the experts panel consensus method. Uterine wall thickness is defined as the distance between interostial line and external uterine profile at the midcoronal plane of the uterus; alternatively, if a coronal plane is not available, the mean anterior and posterior uterine wall thickness at the longitudinal plane could be used. Gynaecological examination and two-dimensional ultrasound (2D US) are recommended for the evaluation of asymptomatic women. Three-dimensional ultrasound (3D US) is recommended for the diagnosis of female genital anomalies in "symptomatic" patients belonging to high-risk groups for the presence of a female genital anomaly and in any asymptomatic woman suspected to have an anomaly from routine evaluation.
Magnetic resonance imaging (MRI) and endoscopic evaluation are recommended for the sub-group of patients with suspected complex anomalies or in diagnostic dilemmas. Adolescents with symptoms suggestive for the presence of a female genital anomaly should be thoroughly evaluated with 2D US, 3D US, MRI and endoscopy. The various diagnostic methods should be used in a proper way and evaluated by experts to avoid mis-, over- and underdiagnosis. The role of a combined ultrasound examination and outpatient hysteroscopy should be prospectively evaluated. It is a challenge for further research, based on diagnosis, to objectively evaluate the clinical consequences related to various degrees of uterine deformity. abstract_id: PUBMED:33354626 Terminology for describing normally sited and ectopic pregnancies on ultrasound: ESHRE recommendations for good practice. Study Question: What recommendations can be provided to improve terminology for normal and ectopic pregnancy description on ultrasound? Summary Answer: The present ESHRE document provides 17 consensus recommendations on how to describe normally sited and different types of ectopic pregnancies on ultrasound. What Is Known Already: Current diagnostic criteria stipulate that each type of ectopic pregnancy can be defined by clear anatomical landmarks which facilitates reaching a correct diagnosis. However, a clear definition of normally sited pregnancies and a comprehensive classification of ectopic pregnancies are still lacking. Study Design Size Duration: A working group of members of the ESHRE Special Interest Group in Implantation and Early Pregnancy (SIG-IEP) and selected experts in ultrasound was formed in order to write recommendations on the classification of ectopic pregnancies. Participants/materials Setting Methods: The working group included nine members of different nationalities with internationally recognised experience in ultrasound and diagnosis of ectopic pregnancies on ultrasound. This document is developed according to the manual for development of ESHRE recommendations for good practice. The recommendations were discussed until consensus by the working group, supported by a survey among the members of the ESHRE SIG-IEP. Main Results And The Role Of Chance: A clear definition of normally sited pregnancy on ultrasound scan is important to avoid misdiagnosis of uterine ectopic pregnancies. A comprehensive classification of ectopic pregnancy must include definitions and descriptions of each type of ectopic pregnancy. Only a classification which provides descriptions and diagnostic criteria for all possible locations of ectopic pregnancy would be fit for use in routine clinical practice. The working group formulated 17 recommendations on the diagnosis of the different types of ectopic pregnancies on ultrasound. In addition, for each of the types of ectopic pregnancy, a schematic representation and examples on 2D and 3D ultrasound are provided. Limitations Reasons For Caution: Owing to the limited evidence available, recommendations are mostly based on clinical and technical expertise. Wider Implications Of The Findings: This document is expected to have a significant impact on clinical practice in ultrasound for early pregnancy. The development of this terminology will help to reduce the risk of misdiagnosis and inappropriate treatment. Study Funding/competing Interests: The meetings of the working group were funded by ESHRE. T.T. declares speakers' fees from GE Healthcare. 
The other authors declare that they have no conflict of interest. Trial Registration Number: N/A. Disclaimer: This Good Practice Recommendations (GPR) document represents the views of ESHRE, which are the result of consensus between the relevant ESHRE stakeholders and where relevant based on the scientific evidence available at the time of preparation. ESHRE's GPRs should be used for informational and educational purposes. They should not be interpreted as setting a standard of care or be deemed inclusive of all proper methods of care nor exclusive of other methods of care reasonably directed to obtaining the same results. They do not replace the need for application of clinical judgement to each individual presentation, nor variations based on locality and facility type. Furthermore, ESHRE's GPRs do not constitute or imply the endorsement, recommendation or favouring of any of the included technologies by ESHRE. abstract_id: PUBMED:14714587 Ultrasound assessment of the polycystic ovary: international consensus definitions. The polycystic ovary syndrome (PCOS) is a heterogeneous condition, the pathophysiology of which appears to be both multifactorial and polygenic. The definition of the syndrome has been much debated. Key features include menstrual cycle disturbance, hyperandrogenism and obesity. There are many extra-ovarian aspects to the pathophysiology of PCOS, yet ovarian dysfunction is central. At a recent joint ASRM/ESHRE consensus meeting, a refined definition of the PCOS was agreed, encompassing a description of the morphology of the polycystic ovary (PCO). According to the available literature, the criteria fulfilling sufficient specificity and sensitivity to define the PCO should have at least one of the following: either 12 or more follicles measuring 2-9 mm in diameter, or increased ovarian volume (> 10 cm3). If there is a follicle > 10 mm in diameter, the scan should be repeated at a time of ovarian quiescence in order to calculate volume and area. The presence of a single PCO is sufficient to provide the diagnosis. The distribution of follicles and a description of the stroma are not required in the diagnosis. Increased stromal echogenicity and/or stromal volume are specific to PCO, but it has been shown that the measurement of ovarian volume (or area) is a good surrogate for quantification of the stroma in clinical practice. A woman having PCO in the absence of an ovulation disorder or hyperandrogenism ('asymptomatic PCO') should not be considered as having PCOS, until more is known about this situation. Three-dimensional and Doppler ultrasound studies may be useful research tools but are not required in the definition of PCO. This review outlines evidence for the current ultrasound definition of the polycystic ovary and technical specifications. abstract_id: PUBMED:22147920 Consensus on women's health aspects of polycystic ovary syndrome (PCOS). Polycystic ovary syndrome (PCOS) is the most common endocrine disorder in females with a high prevalence. The etiology of this heterogeneous condition remains obscure and its phenotype expression varies. Two, widely cited, previous ESHRE/ASRM-sponsored PCOS consensus workshops focused on diagnosis (published in 2004) and infertility management (published in 2008). The present third PCOS consensus paper summarizes current knowledge and identifies knowledge gaps regarding various women's health aspects of PCOS.
Relevant topics addressed-all dealt with in a systematic fashion-include adolescence, hirsutism and acne, contraception, menstrual cycle abnormalities, quality of life, ethnicity, pregnancy complications, long-term metabolic and cardiovascular health and finally cancer risk. Additional, comprehensive background information is provided separately in an extended online publication. abstract_id: PUBMED:20381789 Which morphological investigations and how to interpret them to make the diagnosis of PCOS? The need for a calibrated imaging of polycystic ovaries (PCO) is now stronger than ever since the consensus conference held in Rotterdam in 2003. However, imaging PCO is not an easy procedure and it requires a thorough technical and medical background. The two-dimensional (2-D) ultrasonography (U/S) remains the standard for imaging PCO and the current consensus definition of PCO determined at the joint ASRM/ESHRE consensus meeting on PCOS rests on this technique: either 12 or more follicles measuring 2 to 9 mm in diameter and/or increased ovarian volume (>10 cm3). However, these thresholds need to be revisited with the use of the new machines that have better spatial resolution and with the advent of the 3-D U/S. Doppler study and magnetic resonance imaging (MRI) are seldom useful for diagnosis but may be interesting for clinical research. abstract_id: PUBMED:14973405 Diagnosis of polycystic ovary syndrome: from NIH criteria to ESHRE-ASRM guidelines. In the past, the diagnosis of polycystic ovary syndrome (PCOS) was based on National Institute of Health (NIH) criteria (hyperandrogenism and chronic anovulation) or on sonographic findings of polycystic ovaries. Differences in diagnosis criteria made it difficult to compare the data of studies coming from different countries. Moreover, there was criticism of both the methods used. In 2003, at a joint meeting of the European Society for Human Reproduction (ESHRE) and the American Society of Reproductive Medicine (ASRM), new guidelines for the diagnosis of PCOS were suggested. According to these guidelines, it is possible to reach a diagnosis of PCOS when at least 2 of these 3 elements are present: hyperandrogenism, chronic anovulation and polycystic ovaries. New criteria for the echographic diagnosis of polycystic ovaries have been suggested, too. These diagnostic guidelines represent important progress because they are more flexible and permit us to make the diagnosis in patients who were previously excluded by the syndrome (such as ovulatory hyperandrogenic women with polycystic ovaries or anovulatory normoandrogenic women with polycystic ovaries). However, doubts still exist and regard some borderline group of patients such as hirsute ovulatory normoandrogenic women with polycystic ovaries. A new classification of PCOS syndrome is suggested on the basis of new guidelines. abstract_id: PUBMED:18277355 Evidence-based approach for the use of ultrasound in the management of polycystic ovary syndrome. Polycystic ovary syndrome (PCOS) is a complex endocrinopathy, affecting 5-10% of women in the reproductive age group, with a wide spectrum of phenotypes. The cardinal features are hyperandrogenism, ovulatory dysfunction and/or polycystic ovary (PCO) appearance.
Three major diagnostic criteria for PCOS have been proposed by the National Institute of Health (NIH 1990), the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine sponsored PCOS Consensus Workshop Group (ESHRE/ASRM 2003) and the recent task force of the Androgen Excess Society (AES 2006). The PCO appearance is not a prerequisite for the diagnosis of PCOS. The aim of this review was to assess the available evidence in order to highlight the role of ultrasound in the diagnosis and management of PCOS. The evidence supports positive correlation between ultrasound features and biochemical indices of PCOS suggesting that ultrasound could play a role in prediction of PCOS severity and prognosis. Recent advances in ultrasonography resulted in a change of emphasis on the relative importance of total ovarian volume, follicle number and ovarian stromal changes in the diagnosis of PCO. In particular, ovarian stromal area/total area (S/A) ratio seems to have the most efficient diagnostic performance for hyperandrogenism. Three-dimensional (3D) ultrasound is a relatively new imaging modality that has the potential to improve the sensitivity and specificity of ultrasound without adding much to the practical management of the syndrome. In addition to its role in the definition of PCO, ultrasound is helpful to predict fertility outcome in patients undergoing treatment. Answer: The diagnosis of polycystic ovary syndrome (PCOS) is not entirely resolved by the ESHRE/ASRM 2003 consensus. While the consensus provided a set of criteria for diagnosing PCOS, there is still debate and ongoing research regarding the most appropriate diagnostic criteria due to the clinical heterogeneity of PCOS (PUBMED:17053004, PUBMED:17425838, PUBMED:14714587, PUBMED:14973405, PUBMED:18277355). The ESHRE/ASRM 2003 consensus criteria allowed for the diagnosis of PCOS when at least two of the following three elements are present: hyperandrogenism, chronic anovulation, and polycystic ovaries (PUBMED:14973405). However, the criteria have been criticized for potentially leading to overdiagnosis and lacking specificity (PUBMED:17425838). The inclusion of ultrasound examination of the ovarian stroma has been suggested as a way to refine the diagnosis. A study applying the ESHRE/ASRM criteria modified to include an easily reproducible ultrasound examination of the ovarian stroma (UCSC criteria) identified a subgroup of women with more pronounced PCOS features than those identified by ESHRE/ASRM criteria alone (PUBMED:17053004). This suggests that ultrasound examination of the ovarian stroma could potentially improve the specificity of PCOS diagnosis and help identify women who may benefit more from targeted therapy. Furthermore, ultrasound features have been shown to correlate with biochemical indices of PCOS, suggesting that ultrasound could play a role in predicting the severity and prognosis of PCOS (PUBMED:18277355). The ovarian stromal area/total area (S/A) ratio, in particular, seems to have the most efficient diagnostic performance for hyperandrogenism (PUBMED:18277355). In conclusion, while the ESHRE/ASRM 2003 consensus provided a framework for diagnosing PCOS, the inclusion of ultrasound examination of the ovarian stroma could enhance the diagnostic process by identifying women with more pronounced features of PCOS, thus potentially leading to more targeted and effective treatment strategies.
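For readers who want the decision logic in one place, the following minimal Python sketch encodes the 2003 consensus PCO morphology thresholds (12 or more follicles of 2-9 mm and/or ovarian volume above 10 cm3) and the Rotterdam two-of-three rule discussed above. The function names and inputs are illustrative only, and the sketch deliberately ignores the exclusion of related disorders that the consensus requires before a diagnosis can be made.

def polycystic_ovary(follicles_2_9mm: int, ovarian_volume_cm3: float) -> bool:
    # 2003 consensus morphology: >=12 follicles of 2-9 mm and/or volume > 10 cm3
    return follicles_2_9mm >= 12 or ovarian_volume_cm3 > 10.0

def rotterdam_pcos(hyperandrogenism: bool, oligo_anovulation: bool, pco_on_ultrasound: bool) -> bool:
    # ESHRE/ASRM 2003: any 2 of the 3 features, after exclusion of related disorders
    return sum([hyperandrogenism, oligo_anovulation, pco_on_ultrasound]) >= 2

# Example: an ovulatory, hyperandrogenic woman with 14 small follicles and 8.5 cm3 volume
print(rotterdam_pcos(True, False, polycystic_ovary(14, 8.5)))   # True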
Instruction: Does a Combined Fascial Sling - Burch Colposuspension Display Advantages over a Fascial Sling alone for Treatment of Urinary Stress Incontinence in Females? Abstracts: abstract_id: PUBMED:14566688 Does a Combined Fascial Sling - Burch Colposuspension Display Advantages over a Fascial Sling alone for Treatment of Urinary Stress Incontinence in Females? Purpose: Despite the development of new surgical techniques, the fascial sling procedure remains an important surgical technique for the treatment of female urinary stress incontinence. An advantage of combining it with an additional Burch colposuspension has been suggested. The objective of our study was to evaluate retrospectively selected patients who had undergone a fascial sling procedure with and without Burch colposuspension. Materials And Methods: Of a total of 390 females who underwent an incontinence operation at our department between 1990 and 1999, 56 patients had had a fascial sling plasty. A total of 50 patients (89 %) were followed for a median of 59.5 months. The median age was 60 years. 56 % of the patients displayed recurrent stress incontinence. The previous operations had been performed via a vaginal approach in 42.9 % and an abdominal approach in 57.1 %. The sling procedure used was that of Narik and Palmrich. Of the 50 patients, 14 had an additional Burch colposuspension. Results: The continence rates (no pads) were 63.9 % for patients with a fascial sling procedure alone and 64.4 % for the combination of both operations. An improvement (1-3 pads) was seen in 27.8 % and 21.4 %, respectively. No changes were seen in 5.6 % and 7.1 % and impairment was seen in 2.7 % and 7.1 %, respectively. After a five-year follow-up, the total patient satisfaction rate was 78 %. Conclusions: The fascial sling is an effective operative technique for treating female urinary stress incontinence, especially in severe and type III incontinence and in patients who had undergone previous operations for incontinence. The operation is safe and is the only technique that offers controlled overcorrection in desperate cases. An advantage of adding a Burch colposuspension to the fascial sling procedure was not detected in our patient group. abstract_id: PUBMED:26563492 Salvage autologous fascial sling after failed synthetic midurethral sling: Greater than 3-year outcomes. Objective: To determine long-term surgical outcomes of salvage autologous fascial sling placement after a failed synthetic midurethral sling. Methods: Women who had undergone autologous fascial sling placement without concomitant pelvic surgery for a failed synthetic midurethral sling utilizing mesh with a minimum follow up of 36 months were identified. Charts were reviewed, and patients were contacted by telephone. Success was determined by the Patient Global Impression of Improvement. Secondary measures included the Incontinence Severity Index questionnaire, patient recommendation of the autologous fascial sling and need for further incontinence surgery. Results: A total of 35 patients met the criteria, and 21 were successfully contacted. Of those contacted, the median age at surgery was 67 years (range 53-81 years) and at the time of the survey was 75 years (range 63-84 years) with median follow up of 74 months (range 36-127 months). Preoperatively, 12 patients (57.1%) had urethral hypermobility and 13 patients (61.9%) had mixed urinary incontinence.
Eight patients (38.1%) had concomitant sling excision with five of those combined with urethrolysis at the time of the salvage operation. Patient Global Impression of Improvement success was noted in 16 patients (76.2%). A total of 11 patients (52.4%) were dry or had slight incontinence by the Incontinence Severity Index. One patient required additional anti-incontinence surgery (4.8%). A total of 18 patients (85.7%) recommended the autologous fascial sling. No statistical impact was noted with sling excision (P = 0.62), mixed urinary incontinence (P = 0.61), age at surgery (P = 0.23), age at follow up (P = 0.15), length of follow up (P = 0.71) or first surgery type (transobturator tape vs retropubic; P = 1.00). Conclusions: Autologous fascial sling provides reasonable long-term success as a salvage operation for failed midurethral slings. abstract_id: PUBMED:30466701 Surgery for Stress Urinary Incontinence: Autologous Fascial Sling. This article describes the operative technique of autologous fascial pubovaginal sling (AFPVS) surgery, examines the senior author's outcomes with AFPVS, compares these outcomes with those of other large studies and meta-analyses, and compares the safety and efficacy of AFPVS with those of the synthetic midurethral sling (SMUS). Recently, the SMUS has become the treatment of choice for most surgeons. The efficacy of the SMUS remains unchallenged and comparable with that of AFPVS, but SMUS are associated with more severe complications. In the author's opinion, the AFPVS should remain the gold standard for treating SUI. abstract_id: PUBMED:31407861 The autologous pubovaginal fascial sling: An update in 2019. The contemporary management of stress urinary incontinence (SUI) has seen renewed interest in the use of autologous fascia for the formation of a retropubic suburethral sling. Traditionally, it has been used in only the most severe of incontinence cases, such as those women with intrinsic sphincter deficiency, or in patients requiring concomitant reconstructive procedures. In the current climate surrounding transvaginal mesh, many doctors and patients are shying away from the less morbid synthetic midurethral sling. International literature has demonstrated that the fascial sling is a safe and efficacious procedure for all patients with SUI, adequately treating both urethral hypermobility and intrinsic sphincter deficiency. This paper will discuss the indications, technique, and outcomes of autologous fascial slings. We will explore the use of fascial slings following failed synthetic slings and also the troubleshooting of commonly encountered issues intra- and postoperatively. abstract_id: PUBMED:17517855 Burch colposuspension versus fascial sling to reduce urinary stress incontinence. Background: Many surgical procedures are available for women with urinary stress incontinence, yet few randomized clinical trials have been conducted to provide a basis for treatment recommendations. Methods: We performed a multicenter, randomized clinical trial comparing two procedures--the pubovaginal sling, using autologous rectus fascia, and the Burch colposuspension--among women with stress incontinence. Women were eligible for the study if they had predominant symptoms associated with the condition, a positive stress test, and urethral hypermobility. 
The primary outcomes were success in terms of overall urinary-incontinence measures, which required a negative pad test, no urinary incontinence (as recorded in a 3-day diary), a negative cough and Valsalva stress test, no self-reported symptoms, and no retreatment for the condition, and success in terms of measures of stress incontinence specifically, which required only the latter three criteria. We also assessed postoperative urge incontinence, voiding dysfunction, and adverse events. Results: A total of 655 women were randomly assigned to study groups: 326 to undergo the sling procedure and 329 to undergo the Burch procedure; 520 women (79%) completed the outcome assessment. At 24 months, success rates were higher for women who underwent the sling procedure than for those who underwent the Burch procedure, for both the overall category of success (47% vs. 38%, P=0.01) and the category specific to stress incontinence (66% vs. 49%, P<0.001). However, more women who underwent the sling procedure had urinary tract infections, difficulty voiding, and postoperative urge incontinence. Conclusions: The autologous fascial sling results in a higher rate of successful treatment of stress incontinence but also greater morbidity than the Burch colposuspension. (ClinicalTrials.gov number, NCT00064662.) abstract_id: PUBMED:33316277 Long-term effectiveness and safety of open Burch colposuspension vs retropubic midurethral sling for stress urinary incontinence-results from a large comparative study. Background: There are few adequately powered long-term trials comparing midurethral sling and Burch colposuspension. Recent concerns about synthetic mesh with new stringent clinical and research governance support the need for evidence to facilitate shared decision making. Objective: This study aimed to compare long-term outcomes of open Burch colposuspension with the retropubic midurethral sling. Study Design: A matched cohort study of 1344 women with urodynamic stress incontinence (without intrinsic sphincter deficiency) who underwent surgery for stress urinary incontinence. Women had either open Burch colposuspension or the retropubic midurethral sling, from January 2000 to June 2018, in a tertiary center. Follow-up was by chart review and one-time phone follow-up until 2019, using a dedicated database. Primary outcomes were the presence or absence of stress urinary incontinence on follow-up, the success of index surgery based on response to validated questionnaires of patient-reported outcomes, and retreatment rates. Secondary outcomes are described below. Matching (1:3) was done at baseline to avoid confounding. Results: The study included 1344 women who had either Burch colposuspension (336) or retropubic midurethral sling (1008). Mean follow-up was 13.1 years for Burch colposuspension and 10.1 years for retropubic midurethral sling. In the Burch colposuspension group, 83.0% of patients (279 of 336) reported no ongoing stress urinary incontinence at the time of the latest follow-up vs 85.0% (857 of 1008) in the retropubic midurethral sling group (P=.38).
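As a quick arithmetic check of the long-term comparison just quoted (279 of 336, 83.0%, dry after Burch vs 857 of 1008, 85.0%, after the retropubic sling), the pooled two-proportion z-test below reproduces the reported P = .38. This is an illustration of the calculation only, not the matched analysis actually used in the study.

import math

x1, n1 = 279, 336     # Burch colposuspension
x2, n2 = 857, 1008    # retropubic midurethral sling
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)
se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # 2 * (1 - Phi(|z|))
print(f"z = {z:.2f}, P = {p_two_sided:.2f}")     # z of about -0.87, P of about 0.38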
Success in terms of the latest reported International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (defined as International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form score of ≤6) where these data were available were similar within both groups: 76.0% (158 of 208 where this was available) in Burch colposuspension vs 72.1% (437 of 606 where this was available) in retropubic midurethral sling (P=.32). Where this information was available, success defined by a Patient Global Impression of Improvement of "very much improved" and "much improved" was similar between Burch colposuspension and retropubic midurethral sling groups (84.1% [243 of 289] vs 82.0% [651 of 794]; P=.88). Where data were available, 88.1% of women (178 of 202) in the Burch colposuspension group said they were very likely to recommend the surgery to family or a friend vs 85.0% (580 of 682) in retropubic midurethral sling (P=.30). Overall, 3.6% needed repeat incontinence procedures (13 in Burch colposuspension group [3.8%] vs 35 in retropubic midurethral sling group [3.5%]; P=.73). The incidence of mesh exposure was 1.0%. Notably, 1 Burch colposuspension patient had a suture in the bladder during follow-up; 5 patients have reported long-standing pain across the study population. Overall, 51 women reported new-onset overactive bladder symptoms on follow-up: 10 of 336 (3.0%) had Burch colposuspension and 41 of 1008 (4.1%) had retropubic midurethral sling (P=.41). The need for future prolapse surgery per index procedure was 3.3% after Burch colposuspension vs 1.1% post retropubic midurethral sling (P=.01). Moreover, 9 of the 11 patients who needed a prolapse repair after Burch colposuspension required a posterior repair. The incidence of long-term severe voiding difficulty needing self-catheterization was similar in both groups (0.3% in Burch colposuspension and 0.5% in retropubic midurethral sling group; P=1.00). Conclusion: This study shows no difference in success, patient satisfaction, or complications between Burch colposuspension and retropubic midurethral sling, although the risk of posterior compartment prolapse operations after Burch colposuspension is increased. Reoperation rates for incontinence were similar in both groups. Chronic pain was a rare outcome. abstract_id: PUBMED:30620096 Burch colposuspension. Aims: To evaluate the historic and pathophysiologic issues which led to the development of Burch colposuspension, to describe anatomic and technical aspects of the operation and to provide an update on current evidence. Methods: We have performed a focused literature review and have searched the current available literature about historic dimension, technical descriptions, and efficacy of Burch colposuspension. Results: Burch colposuspension, performed either by an open or a laparoscopic approach, is an effective surgical treatment for stress urinary incontinence. Conclusions: In current recommendations, Burch colposuspension remains an option for secondary treatment. Because midurethral slings have recently come under scrutiny, it may return as a first-line treatment procedure. Both open and laparoscopic Burch colposuspension should therefore nowadays be provided in fellowship programs worldwide. abstract_id: PUBMED:34259895 Modified autologous fascial sling technique ('sling on a string') for stress incontinence.
Introduction And Hypothesis: Describe the modified autologous fascial sling procedure that has been employed in the largest randomized controlled trial comparing autologous slings, mesh slings and xenografts. Methods: The video aims to demonstrate the modified Aldridge technique. The surgical procedure is demonstrated. A 6-cm suprapubic incision is made to harvest the rectus sheath fascia. Loop-0-PDS sutures are attached on either end of the sling. A marking suture is placed in the middle of the graft to facilitate tension-free adjustment. A vaginal incision is made at the mid-urethra. Paraurethral dissection is performed to create a tunnel for the fascial graft to be passed through (in the same manner as with transvaginal mesh slings). The ends of the graft PDS sutures are passed through the paraurethral tunnel. One hand is placed abdominally below the rectus muscles to palpate the pelvic floor from above. The graft sutures are passed through the pelvic floor with control on either side. A cystoscopy is performed to check the bladder integrity. The graft placement is adjusted to be tension-free. The incisions are closed. The short- and long-term outcomes of this technique have been investigated and published. Results: The cure rates and complication rates were no different in the mesh and autologous slings. The xenograft had inferior outcomes. Conclusion: Autologous fascial slings can be used in the surgical management of urodynamic stress incontinence. The technique demonstrated in this video is the technique employed in the largest randomized controlled trial investigating the efficacy of autologous fascial slings compared with xenografts and tapes. abstract_id: PUBMED:19296969 Complications in women undergoing Burch colposuspension versus autologous rectus fascial sling for stress urinary incontinence. Purpose: We determined the clinicodemographic factors associated with complications of continence procedures, the impact of concomitant surgery on the complication rate and the relationship between the incidence of cystitis and the method of postoperative bladder drainage. Materials And Methods: We reviewed serious adverse events and adverse events in the Stress Incontinence Surgical Efficacy Trial, a randomized trial comparing Burch colposuspension to the autologous rectus fascial sling. Clinicodemographic variables were analyzed to determine those associated with adverse events using logistic regression analysis. Complications were stratified based on the presence or absence of concomitant surgery. Differences in complication rates (controlling for concomitant surgery) and cystitis rates (controlling for the bladder emptying method) were compared using Fisher's exact test. Results: Blood loss (p = 0.0002) and operative time (p <0.0001) were significantly associated with an adverse event. Patients who underwent concomitant surgery had a significantly higher serious adverse event rate (14.2% vs 7.3%, p = 0.01) and adverse event rate (60.5% vs 48%, p <0.01) than patients who underwent continence surgery alone. Cystitis rates were higher in the sling vs the Burch group up to 6 weeks postoperatively regardless of concomitant surgery status (p <0.01). Intermittent self-catheterization increased the cystitis rate by 17% and 23% in the Burch and sling groups, respectively. Conclusions: Concomitant surgery at continence surgery increased the risk of complications. Sling surgery was associated with a higher risk of cystitis within the first 6 weeks postoperatively.
Intermittent self-catheterization increased the risk of cystitis in each group. Complications were associated with surgical factors and not with patient-related factors. abstract_id: PUBMED:21996105 Changes in urodynamic measures two years after Burch colposuspension or autologous sling surgery. Objective: To characterize the urodynamic (UDS) changes in subjects 24 months after Burch urethropexy and autologous fascial sling surgery for stress urinary incontinence. Methods: In the Stress Incontinence Surgical Treatment Efficacy Trial (SISTEr), 655 women underwent standardized UDSs before and 2 years after Burch or sling surgery. Paired t tests were used to compare the pre- and postoperative UDS measures by treatment group. Analysis of variance models were fit predicting the change in UDS measures, controlling for the treatment group. Results: The noninstrumented maximal flow rate decreased 3.6 mL/s in the Burch group and 4.7 mL/s in the sling group (P = .42). The average flow rates also decreased (2.4 mL/s in the Burch group and 3.8 mL/s in the sling group, P = .039). No difference was found in the increases in first sensation between the Burch and sling groups (23.3 and 29.3 mL, respectively, P = .61). Also, no differences were found in the reduction in the pressure flow study maximal flow rates (2.3 mL/s in the Burch group and 4.4 mL/s in the sling group, P = .11). An increased detrusor pressure at maximal flow rate (11.4 cm H2O, P < .001) was seen only after the sling procedure. Increases in the bladder outlet obstruction index occurred after both procedures, with greater increases seen after sling surgery (change, Burch +6.27 vs sling +20.12, P = .001). Conclusion: The Burch colposuspension and autologous fascial sling procedures were associated with similar decreases in noninstrumented flow rates, and the sling was associated with greater increases in the detrusor pressure at maximal flow rate and bladder outlet obstruction index. These changes suggest that both procedures are effective, in part, because of increased outlet resistance. However, the sling procedure might be more obstructive. Answer: Based on the available abstracts, a combined fascial sling and Burch colposuspension does not display significant advantages over a fascial sling alone for the treatment of urinary stress incontinence in females. A retrospective study evaluating patients who underwent a fascial sling procedure with and without Burch colposuspension found that the continence rates were similar for both groups. Specifically, patients with a fascial sling procedure alone had a continence rate of 63.9%, while the combination of both operations had a slightly higher rate of 64.4%. The total patient satisfaction rate after a five-year follow-up was 78%, and the study concluded that adding a Burch colposuspension to the fascial sling procedure did not show an advantage in their patient group (PUBMED:14566688). Furthermore, other studies have shown that autologous fascial sling provides reasonable long-term success as a salvage operation for failed midurethral slings (PUBMED:26563492), and the autologous fascial sling is considered a safe and efficacious procedure for all patients with stress urinary incontinence (PUBMED:31407861). The autologous fascial sling is also suggested to remain the gold standard for treating stress urinary incontinence (SUI) (PUBMED:30466701).
In a randomized clinical trial comparing the pubovaginal sling using autologous rectus fascia and the Burch colposuspension, the autologous fascial sling resulted in a higher rate of successful treatment of stress incontinence but also greater morbidity than the Burch colposuspension (PUBMED:17517855). Another study comparing long-term outcomes of open Burch colposuspension with the retropubic midurethral sling found no difference in success, patient satisfaction, or complications between the two procedures (PUBMED:33316277). In conclusion, the evidence does not support a clear advantage of combining a fascial sling with Burch colposuspension over a fascial sling alone for the treatment of urinary stress incontinence in females. The fascial sling alone remains an effective and important surgical technique for this condition.
Instruction: Could sustained monomorphic ventricular tachycardia in the early phase of a prime acute myocardial infarction affect patient outcome? Abstracts: abstract_id: PUBMED:17070829 Could sustained monomorphic ventricular tachycardia in the early phase of a prime acute myocardial infarction affect patient outcome? Objectives: Sustained monomorphic ventricular tachycardia (SMVT) in the course of a prime acute myocardial infarction is not a common arrhythmia and its prognostic significance has not been specifically elucidated. The aim of the study was to estimate the prognostic implications of the occurrence of sustained monomorphic ventricular tachycardia in the early phase (<72 h) of a prime acute myocardial infarction. Methods: We studied 690 consecutive patients admitted to the coronary care unit with a diagnosis of a prime myocardial infarction. SMVT was observed in 18 (2.6%) patients and we followed these patients for establishing the prognostic value of the arrhythmia according to the clinical characteristics. Results: Patients with SMVT had a more extensive myocardial infarction based on the peak of the CK-MB isoenzyme activity (480+/-290 IU/L vs 270+/-190 IU/L, P < .01), and higher mortality rate (40% vs 9%, P < .001). The independent predictors of SMVT were CK-MB (odds ratio [OR] 12.4), presence of complex ventricular arrhythmias (OR = 5.7), a wide QRS complex ≥130 milliseconds (OR = 4.8) and Killip class (OR = 4.8). The SMVT was itself an independent predictor of mortality (OR = 5.0). Compared with patients with ventricular fibrillation or polymorphic ventricular tachycardia, those with SMVT had a higher CK-MB activity, higher rate of wide QRS ≥130 milliseconds (33% vs 8%, P < .002), had a worse hemodynamic condition (Killip class >I: 58% vs 23%, P < .04) and higher recurrence rate of ischemic events (68% vs 16%, P < .05). During the one year follow-up period, 4 patients (36.3%) of the 11 survivors from those with SMVT died of cardiac-related causes. Conclusions: SMVT during the first 72 h of a prime myocardial infarction is an index of a larger healing myocardium with acute, very complex electrophysiological changes, and it is an independent predictor of in-hospital mortality and a prognostic factor of a poor one year outcome. abstract_id: PUBMED:8962550 Predisposing factors and prognostic value of sustained monomorphic ventricular tachycardia in the early phase of acute myocardial infarction. Objectives: The purpose of the study was to analyze the factors that favor the occurrence of sustained monomorphic ventricular tachycardia in the early phase (< 48 h) of acute myocardial infarction and to establish its prognostic implications. Background: Sustained monomorphic ventricular tachycardia early in the course of an acute myocardial infarction is an uncommon arrhythmia, and its significance has not been specifically studied. Methods: The clinical characteristics and prognosis of sustained monomorphic ventricular tachycardia were studied in 21 (1.9%) of 1,120 consecutive patients admitted to the coronary care unit with a diagnosis of myocardial infarction. Results: Patients with sustained monomorphic ventricular tachycardia had a larger infarct on the basis of peak creatine kinase, MB fraction (CK-MB) isoenzyme activity (435 +/- 253 IU/liter vs. 168 +/- 145 IU/liter, p < 0.001) and higher mortality rate (43% vs. 11%, p < 0.001).
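For orientation, the crude (unadjusted) association implied by those figures can be back-calculated as follows. The counts are approximate reconstructions from the reported percentages (43% of the 21 SMVT patients vs 11% of the remaining 1,099 patients), and the resulting crude odds ratio of roughly 6 is not the adjusted OR of 5.0 that the authors report after multivariable analysis.

import math

smvt_n = 21
smvt_deaths = round(0.43 * smvt_n)                 # about 9 deaths
other_n = 1120 - smvt_n
other_deaths = round(0.11 * other_n)               # about 121 deaths

a, b = smvt_deaths, smvt_n - smvt_deaths
c, d = other_deaths, other_n - other_deaths
crude_or = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = (math.exp(math.log(crude_or) + z * se) for z in (-1.96, 1.96))
print(f"crude OR ~ {crude_or:.1f} (95% CI {lo:.1f}-{hi:.1f})")   # roughly 6.1 (2.5-14.7)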
By logistic regression analysis, independent predictors of sustained monomorphic ventricular tachycardia were CK-MB (odds ratio [OR] 11.8), Killip class (OR 4.0) and bifascicular bundle branch block (OR 3.1). Moreover, sustained monomorphic ventricular tachycardia was itself an independent predictor of mortality (OR 5.0). Compared with patients with ventricular fibrillation, those with sustained monomorphic ventricular tachycardia had a worse Killip class (Killip class > I: 63% vs. 30%, p < 0.05), higher CK-MB activity (430 +/- 260 IU/liter vs. 242 +/- 176 IU/liter, p < 0.01) and higher arrhythmia recurrence rate (31% vs. 4%, p < 0.01). During the follow-up period, 5 (42%) of 12 survivors in the sustained monomorphic ventricular tachycardia group died of cardiac-related causes. Recurrence of ventricular tachycardia was seen in two patients (17%). Conclusions: Sustained monomorphic ventricular tachycardia during the first 48 h of myocardial infarction is a sign of extensive myocardial damage and an independent predictor of in-hospital mortality. abstract_id: PUBMED:29454138 Prognostic implications of early monomorphic and non-monomorphic tachyarrhythmias in patients discharged with acute coronary syndrome. Background: The prognostic implication of early ventricular tachyarrhythmias (VTs) after acute coronary syndrome (ACS) remains unclear. Objective: We sought to investigate the clinical outcomes of early monomorphic and non-monomorphic VTs that occur within 48 hours in patients after ACS. Methods: We retrospectively reviewed the clinical outcomes of 2033 [mean age 67.0 ± 13.4 years; 1486 (73.1%) men] consecutive patients who presented with ACS from 2004 to 2015. Results: A total of 67 (3.3%) and 90 (4.4%) patients developed early monomorphic or non-monomorphic VT, respectively. Killip class IV (odds ratio [OR] 3.05; 95% confidence interval [CI] 1.47-6.36; P < .01), creatine kinase level (OR 1.01; 95% CI 1.00-1.02 per 100 IU/L; P = .01), and left ventricular ejection fraction (OR 0.96; 95% CI 0.94-0.99; P < .01) were independently associated with early monomorphic VT, whereas age (OR 0.98; 95% CI 0.97-0.99; P = .04), ST elevated myocardial infarction (OR 3.53; 95% CI 1.71-7.27; P < .01), Killip class IV (OR 4.91; 95% CI 2.76-8.74; P < .01), diabetes mellitus (OR 0.48; 95% CI 0.28-0.81; P < .01), and left ventricular ejection fraction (OR 0.97; 95% CI 0.95-0.99; P < .01) were independently associated with early non-monomorphic VT. More patients with early monomorphic VT (n = 22 [32.8%]) died in hospital than those with non-monomorphic VT (n = 16 [17.8%]) or without early VT (n = 133 [7.1%]; P < .01). After a mean follow-up of 67.8 ± 43.2 months, 21 patients with early monomorphic VT (46.7%), 22 patients with early non-monomorphic VT (29.7%), and 552 patients without early VT (31.7%) died. Both early monomorphic and non-monomorphic VTs were associated with a long-term increase in sudden arrhythmic deaths and recurrent VTs. Nevertheless, only early monomorphic VT was shown to independently predict overall survival (hazard ratio 1.62; 95% CI 1.03-2.55; P = .04). Conclusion: Early monomorphic VT, but not early non-monomorphic VT, independently predicted all-cause mortality in patients with ACS who survived to hospital discharge. abstract_id: PUBMED:2754119 Inducibility of sustained monomorphic ventricular tachycardia as a prognostic indicator in survivors of recent myocardial infarction: a prospective evaluation in relation to other prognostic variables.
The prognostic significance of sustained monomorphic ventricular tachycardia inducible with up to three extrastimuli was assessed in relation to other prognostic markers, including clinical assessment, signal-average electrocardiogram (ECG), Holter monitoring, ejection fraction measurement and exercise testing, in 75 patients after recent myocardial infarction. Among eight patients with inducible sustained monomorphic ventricular tachycardia, six suffered arrhythmic events during a median follow-up period of 16 months. No patient without inducible sustained monomorphic ventricular tachycardia suffered an arrhythmic event. Multivariate analysis showed that of all the variables examined, inducible sustained monomorphic ventricular tachycardia was the only independent predictor of arrhythmic events during the follow-up period. The sensitivity for predicting arrhythmic events by this response was 100%, the specificity 97% and the positive predictive accuracy 75%. Individually, the other prognostic variables were less sensitive and much less accurate predictors of arrhythmic events, but the combination of the occurrence of acute phase complications or frequent ectopic activity with an abnormal signal-averaged ECG approached the sensitivity and accuracy of inducible sustained monomorphic ventricular tachycardia. The prognostic utility of programmed ventricular stimulation in patients with recent myocardial infarction is limited because comparable information can be obtained less invasively. However, the test may have a role in selecting therapy in patients judged to be at risk from arrhythmias on the basis of noninvasive assessment. abstract_id: PUBMED:23110246 Low doses of intravenous epinephrine for refractory sustained monomorphic ventricular tachycardia. We report three cases of sustained monomorphic ventricular tachycardia (VT) in the setting of coronary artery disease, resistant to beta-blockers in two patients and to amiodarone in all, successfully terminated by low doses of intravenous (IV) epinephrine. VT was the first manifestation of coronary artery disease in one patient, whereas the other two patients had a previous history of myocardial infarction and were recipients of an implantable cardioverter-defibrillator (ICD). One of these two patients experienced an arrhythmic storm. All had hemodynamic instability at the time of epinephrine administration. A single slow administration of IV epinephrine (0.5 to 1 mg administered over 30 to 60 s) restored sinus rhythm after 30-90 s with only minor side effects. In the ICD patient with recurrent VT and several cardioversions due to transformation of VT to ventricular fibrillation, epinephrine injection led to the avoidance of further shocks. Although potentially harmful, low doses of IV epinephrine used alone or in combination with beta-blocker treatment and electrical cardioversion may be an alternative effective therapy for sustained monomorphic VT refractory to amiodarone. The role of epinephrine in the termination of VT should be studied further, especially in patients pre-treated with amiodarone in combination with beta-blockers. abstract_id: PUBMED:2330862 Inducible monomorphic sustained ventricular tachycardia in the conscious pig. Sustained monomorphic ventricular tachycardia (VT) is of clinical importance but has not been readily modeled in conscious animals. Eleven pigs had myocardial infarction induced by pulling snares previously placed around the left anterior descending (LAD) coronary artery. 
Six days after occlusion, bipolar pacing catheters were inserted in the right ventricular apex for induction of VT. Testing was repeated in conscious pigs on 6 out of 8 to 19 days after infarction. Monomorphic VT was induced in each animal during each session, using three to four extrastimuli. VT was terminated by burst pacing in 74% of trials; average VT rate was 362 +/- 26 beats/min. VT was prevented in four of eight animals by procainamide and in five of eight animals by magnesium, but was not prevented by lidocaine or metoprolol. The model may be useful in the study of potentially malignant ventricular tachyarrhythmias, important prodromes for sudden death. abstract_id: PUBMED:34131550 Electrical Storm With Incessant Ventricular Tachycardia in a COVID-19 Patient: Review of Current Evidence. Coronavirus disease 2019 (COVID-19) is associated with various cardiovascular manifestations, including myocarditis, myocardial infarction, and arrhythmias. A prothrombotic state is the primary underlying pathogenic mechanism. While cardiac arrhythmias manifest more commonly amongst critically ill COVID-19 populations, ventricular arrhythmias have been reported only in few cases. This report describes a case of a 95-year-old African American man with COVID-19, who developed sustained monomorphic ventricular tachycardia, which progressed to an electrical storm. The case highlights the importance of high clinical suspicion, early recognition of electrical abnormalities in patients with active COVID-19 infection, and its ability to precipitate fatal ventricular arrhythmia. Also, we provide a literature review on the electrical storm in COVID-19 patients, highlighting the pathophysiologic mechanisms and the management of this deadly arrhythmia. abstract_id: PUBMED:36993916 Prognostic value of early sustained ventricular arrhythmias in ST-segment elevation myocardial infarction treated by primary percutaneous coronary intervention: A substudy of VALIDATE-SWEDEHEART trial. Background: Prognostic assessment of ventricular tachycardia (VT) or ventricular fibrillation (VF) in ST-segment elevation myocardial infarction (STEMI) is based mainly on distinguishing between early (<48 hours) and late arrhythmias, and does not take into account its time distribution with regard to reperfusion, or type of arrhythmia. Objective: We analyzed the prognostic value of early ventricular arrhythmias (VAs) in STEMI with regard to their type and timing. Methods: The prespecified analysis of the multicenter prospective Bivalirudin versus Heparin in ST-Segment and Non-ST-Segment Elevation Myocardial Infarction in Patients on Modern Antiplatelet Therapy in the Swedish Web System for Enhancement and Development of Evidence-based Care in Heart Disease evaluated according to Recommended Therapies Registry Trial included 2886 STEMI patients undergoing primary percutaneous coronary intervention (PCI). VA episodes were characterized regarding their type and timing. Survival status at 180 days was assessed through the population registry. Results: Nonmonomorphic VT or VF was observed in 97 (3.4%) and monomorphic VT in 16 (0.5%) patients. Only 3 (2.7%) early VA episodes occurred after 24 hours from symptom onset. VA was associated with higher risk of death (hazard ratio 3.59; 95% confidence interval [CI] 2.01-6.42) after adjustment for age, sex, and STEMI localization. VA after PCI was associated with an increased mortality compared with VA before PCI (hazard ratio 6.68; 95% CI 2.90-15.41). 
Early VA was associated with in-hospital mortality (odds ratio 7.39; 95% CI 3.68-14.83) but not with long-term prognosis in patients discharged alive. The type of VA was not associated with mortality. Conclusion: VA after PCI was associated with an increased mortality compared with VA before PCI. Long-term prognosis did not differ between patients with monomorphic VT and nonmonomorphic VT or VF, but events were few. VA incidence during 24 to 48 hours of STEMI is negligibly low, thus precluding assessment of its prognostic importance. abstract_id: PUBMED:2330890 Prognostic significance of sustained monomorphic ventricular tachycardia induced by programmed ventricular stimulation using up to triple extrastimuli in survivors of acute myocardial infarction. The prognostic significance of sustained monomorphic ventricular tachycardia (VT) induced by programmed ventricular stimulation using up to 3 extrastimuli was evaluated in 133 consecutive survivors of acute myocardial infarction (AMI) at a mean interval of 1.8 +/- 1.1 months after onset. This was compared with hemodynamic and angiographic abnormalities shown by cardiac catheterization and ventricular ectopic activity detected by Holter monitoring. Sustained monomorphic VT was induced in 25 (19%) patients, sustained polymorphic VT in 11 (8%) patients, nonsustained monomorphic VT (greater than or equal to 10 beats) in 12 patients (9%) and nonsustained polymorphic VT in 9 patients (7%). Multivariate logistic regression analysis of clinical, angiographic, hemodynamic and electrocardiographic variables showed that the presence of a left ventricular aneurysm (p = 0.005) and Lown grade 4B ventricular ectopic activity (p less than 0.001) were independent predictors of inducibility of sustained monomorphic VT. During a mean follow-up of 21 +/- 13 months, there were 8 (6%) sudden cardiac deaths and 3 (2.3%) spontaneous occurrences of life-threatening sustained VT. The 2-year probability of freedom from sudden cardiac death or sustained ventricular tachyarrhythmias was 53 +/- 13% for patients with inducible sustained monomorphic VT, 70 +/- 10% for those with a left ventricular ejection fraction less than 40% and 58 +/- 13% for those with Lown grade 4B ventricular ectopic activity.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:2106850 Electrophysiology in evaluating the treatment of sustained monomorphic ventricular tachycardia: criteria for efficacy The authors studied the influence on recurrence and mortality of induced ventricular arrhythmias during electrophysiological studies performed to assess the efficacy of treatment of sustained monomorphic ventricular tachycardia. One hundred and twenty-six consecutive patients investigated from 1981 to 1988 were included. The underlying pathology was chronic myocardial infarction (N = 56), dilated cardiomyopathy (N = 24), right ventricular dysplasia (N = 31) and there were 15 idiopathic cases. All these tachycardias could be induced during the control study. A second test was performed after instituting treatment. This was maintained whatever the result of the electrophysiological study except in patients in whom the tachycardia rate was over 130/mn and/or poorly tolerated. Recurrences were defined as the observation of tachycardia with the same morphology and/or the occurrence of sudden death. Follow-up averaged 29 +/- 21 months. The absence of recurrence and survival were assessed by the Kaplan-Meier method and Logrank's test. 
It was not possible to induce any arrhythmia after treatment in 52 patients (41%). The prevalence of absence of recurrence in this group was 0.863. If the induction of ventricular fibrillation, doublets or short runs of VT (N less than 6 with no recurrences) are included, the value increased to 0.877. These patients were considered to be non-inducible. The prevalence of absence of recurrence of arrhythmia in patients in whom it was possible to induce sustained ventricular tachycardia of the same morphology as the clinical arrhythmia was 0.512 (p = 0.001).(ABSTRACT TRUNCATED AT 250 WORDS) Answer: Yes, sustained monomorphic ventricular tachycardia (SMVT) in the early phase of a prime acute myocardial infarction can affect patient outcomes. Studies have shown that patients with SMVT had a more extensive myocardial infarction, as indicated by higher peak levels of the CK-MB isoenzyme, and a higher mortality rate compared to those without SMVT (PUBMED:17070829; PUBMED:8962550). SMVT has been identified as an independent predictor of in-hospital mortality and is considered a prognostic factor for poor one-year outcomes (PUBMED:17070829). Additionally, patients with SMVT had a higher rate of wide QRS complex, worse hemodynamic condition, and higher recurrence rate of ischemic events compared to those with other types of ventricular arrhythmias (PUBMED:17070829; PUBMED:8962550). Furthermore, early monomorphic VT, but not early non-monomorphic VT, independently predicted all-cause mortality in patients with acute coronary syndrome who survived to hospital discharge (PUBMED:29454138). The occurrence of SMVT during the first 48 hours of myocardial infarction is a sign of extensive myocardial damage and is an independent predictor of in-hospital mortality (PUBMED:8962550). In summary, the presence of SMVT in the early phase of a prime acute myocardial infarction is associated with a larger infarct size, higher mortality rates, and a poor prognosis, making it an important factor in the assessment of patient outcomes following a myocardial infarction.
Instruction: Should the elderly be resuscitated following out-of-hospital cardiac arrest? Abstracts: abstract_id: PUBMED:2913781 Should the elderly be resuscitated following out-of-hospital cardiac arrest? Purpose: Elderly and younger patients who were successfully resuscitated and hospitalized following out-of-hospital cardiac arrest were studied to determine if there was a significant difference in hospital course and long-term survival between the two groups. Patients And Methods: The study consisted of 214 consecutive patients, divided into two age groups: elderly (more than 70 years, n = 112) and younger (less than 70 years, n = 102). Hospital charts and paramedic run data were retrospectively reviewed for each patient and findings were compared between the two age groups. Results: Prior to cardiac arrest, 47 of 112 (42 percent) elderly patients had a history of heart failure, compared with 19 of 102 (18 percent) younger patients, and were more commonly taking digitalis (51 percent versus 29 percent) and diuretics (47 percent versus 26 percent). Younger patients, however, more often had an acute myocardial infarction at the time of the cardiac arrest (33 percent versus 16 percent). At the time of cardiac arrest, 83 percent of younger patients demonstrated ventricular fibrillation, compared with 71 percent of the elderly. In contrast, electromechanical dissociation was five times more common in the elderly patients. Although hospital deaths were more common in the elderly (71 percent versus 53 percent), the length of hospitalization and stay in intensive care units were not significantly different between the age groups. The number of neurologic deaths was similar in both age groups, as were residual neurologic impairments. Only five elderly patients and six younger patients required placement in extended-care facilities. Calculated long-term survival curves demonstrated similar survival in both age groups, with approximately 65 percent of hospital survivors alive at 24 months after hospital discharge. Conclusion: Resuscitation of elderly patients in whom out-of-hospital cardiac arrest occurs is reasonable and appropriate, according to the findings of this study. Even though elderly patients are more likely than younger patients to die during hospitalization, the hospital stay of the elderly is not longer, the elderly do not have more residual neurologic impairments, and survival after hospital discharge is similar to that in younger patients. abstract_id: PUBMED:37874467 Indications for Cardiac Catheterization and Percutaneous Coronary Intervention in Patients with Resuscitated Out-of-Hospital Cardiac Arrest. Purpose Of Review: The role of emergent cardiac catheterization after resuscitated out-of-hospital cardiac arrest (OHCA) has evolved based on recent randomized evidence. This review aims to discuss the latest evidence and current indications for emergent coronary angiography (CAG) and mechanical circulatory support (MCS) use following OHCA. Recent Findings: In contrast to previous observational data, recent RCTs evaluating early CAG in resuscitated OHCA patients without ST elevation have uniformly demonstrated a lack of benefit in terms of survival or neurological outcome. There is currently no randomized evidence supporting MCS use specifically in patients with resuscitated OHCA and cardiogenic shock. 
Urgent CAG should be considered in all patients with ST elevation, recurrent electrical or hemodynamic instability, those who are awake following resuscitated OHCA, and those receiving extracorporeal cardiopulmonary resuscitation (ECPR). Recent evidence suggests that CAG may be safely delayed in hemodynamically stable patients without ST-segment elevation following resuscitated OHCA. abstract_id: PUBMED:8222679 Survival in the elderly after out-of-hospital cardiac arrest. Objectives: To evaluate the survival prognosis for the elderly (> or = 70 yrs of age) after out-of-hospital cardiac arrest in a large urban center, and to identify any specific differences in survival factors relative to those adults < 70 yrs of age. Design: The study was a prospective, inception cohort study. Setting: An urban population of approximately 2,000,000, served by one centralized municipal emergency medical services system. Patients: All 986 adult victims (367 elderly and 619 younger patients) of primary cardiac arrest attended by the emergency medical services system over a 12-month period. Interventions: Not applicable. Measurements And Main Results: All victims of out-of-hospital cardiac arrest occurring within a single, large, urban municipality were studied over a 12-month period. Each event was analyzed for age, sex, witnesses, bystander cardiopulmonary resuscitation, presenting electrocardiographic rhythm, paramedic response time, scene time, return of spontaneous circulation (pulses), and electrocardiographic rhythm on hospital arrival. Outcomes evaluated included inhospital admission (resuscitation) and successful discharge from the hospital (survival). Patients were followed until death or discharge from the hospital. Of 367 elderly cardiac arrest victims, 81 (22%) patients were successfully resuscitated and 24 (7%) patients survived. However, of 119 (32% of all elderly patients) patients who presented with ventricular fibrillation/tachycardia, 48 (40%) patients were resuscitated and 17 (14%) patients survived. These 17 patients with ventricular fibrillation/tachycardia accounted for 71% of all elderly survivors. During the same study period, there were 619 adult primary cardiac arrest victims < 70 yrs of age, 160 (26%) of whom were resuscitated and 73 (12%) of whom survived. Among the younger patients, 296 (48%) patients presented with ventricular fibrillation/tachycardia, of whom 110 (37%) were resuscitated and of whom 60 patients (20%) survived. Within the context of this study, survival rates for younger and older ventricular fibrillation/tachycardia patients were not significantly different. Also, among survivors, there were no other major differences in terms of established survival determinants. Conclusions: Survival chances for the elderly after out-of-hospital cardiac arrest are not bleak, and are reasonable if ventricular fibrillation/tachycardia is the presenting rhythm. Survival determinants are similar for younger and older adults. abstract_id: PUBMED:25307009 Care of patient resuscitated from cardiac arrest. Background And Objectives: Out-of-hospital cardiac arrest is a major cause of death with survival rates as low as 5% to 35%. A large number of patients who survive resuscitation will face significant neurological damage, as a result of the ischemia that occurs both during cardiac arrest and reperfusion. However understanding of the mechanisms responsible for brain damage has not resulted in prognostic improvement. 
Therapeutic hypothermia after resuscitation may be a valid option associated to reduction of neurological damage. The purpose of this study was to review scientific evidence related to a therapy for patients resuscitated from cardiac arrest. Contents: Description and analysis of the main risk factors associated with neurological damage after resuscitation from cardiac arrest as well as prognostic criteria was carried out. A non-systematic search was conducted in the PubMed data base for papers on a therapeutic approach for patients resuscitated from cardiac arrest. Bibliographic references of reviewed papers were also analyzed. Practical rules were drafted for such an approach. Conclusions: Patients resuscitated from cardiac arrest face a high level of risk of neurological damage. Therapeutic hypothermia and control of physiological parameters to optimise brain perfusion, may improve prognosis. abstract_id: PUBMED:35753507 Targeted plasma metabolomics in resuscitated comatose out-of-hospital cardiac arrest patients. Background: Out-of-hospital cardiac arrest (OHCA) is a leading cause of death. Even if successfully resuscitated, mortality remains high due to ischemic and reperfusion injury (I/R). The oxygen deprivation leads to a metabolic derangement amplified upon reperfusion resulting in an uncontrolled generation of reactive oxygen species in the mitochondria triggering cell death mechanisms. The understanding of I/R injury in humans following OHCA remains sparse, with no existing treatment to attenuate the reperfusion injury. Aim: To describe metabolic derangement in patients following resuscitated OHCA. Methods: Plasma from consecutive resuscitated unconscious OHCA patients drawn at hospital admission were analyzed using ultra-performance-liquid-mass-spectrometry. Sixty-one metabolites were prespecified for quantification and studied. Results: In total, 163 patients were included, of which 143 (88%) were men, and the median age was 62 years (53-68). All measured metabolites from the tricarboxylic acid (TCA) cycle were significantly higher in non-survivors vs. survivors (180-days survival). Hierarchical clustering identified four clusters (A-D) of patients with distinct metabolic profiles. Cluster A and B had higher levels of TCA metabolites, amino acids and acylcarnitine species compared to C and D. The mortality was significantly higher in cluster A and B (A:62% and B:59% vs. C:21% and D:24%, p < 0.001). Cluster A and B had longer time to return of spontaneous circulation (A:33 min (21-43), B:27 min (24-35), C:18 min (13-28), and D:18 min (12-25), p < 0.001). Conclusion: Circulating levels of metabolites from the TCA cycle best described the variance between survivors and non-survivors. Four different metabolic phenotypes with significantly different mortality were identified. abstract_id: PUBMED:29309882 Patient, health service factors and variation in mortality following resuscitated out-of-hospital cardiac arrest in acute coronary syndrome: Analysis of the Myocardial Ischaemia National Audit Project. Aims: To determine patient and health service factors associated with variation in hospital mortality among resuscitated cases of out-of-hospital cardiac arrest (OHCA) with acute coronary syndrome (ACS). Methods: In this cohort study, we used the Myocardial Ischaemia National Audit Project database to study outcomes in patients hospitalised with resuscitated OHCA due to ACS between 2003 and 2015 in the United Kingdom. 
We analysed variation in inter-hospital mortality and used hierarchical multivariable regression models to examine the association between patient and health service factors with hospital mortality. Results: We included 17604 patients across 239 hospitals. Overall hospital mortality was 28.7%. In 94 hospitals that contributed at least 60 cases, mortality by hospital ranged from 10.7% to 66.3% (median 28.6%, IQR 23.2% to 39.1%). Patient and health service factors explained 36.1% of this variation. After adjustment for covariates, factors associated with higher hospital mortality included increasing serum glucose, ST-Elevation myocardial infarction (STEMI) diagnosis, and initial admission to a primary percutaneous coronary intervention (pPCI) capable hospital. Hospital OHCA volume was not associated with mortality. The key modifiable factor associated with lower mortality was early reperfusion therapy in STEMI patients. Conclusion: There was wide variation in inter-hospital mortality following resuscitated OHCA due to ACS that was only partially explained by patient and health service factors. Hospital OHCA volume and pPCI capability were not associated with lower mortality. Early reperfusion therapy was associated with lower mortality in STEMI patients. abstract_id: PUBMED:32914452 Osborn waves during therapeutic hypothermia and recurrence of fatal arrhythmia in patients resuscitated following sudden cardiac arrest. Background: This study investigates the impact of the occurrence of Osborn waves during therapeutic hypothermia (TH) on the recurrence of future fatal arrhythmias in patients resuscitated after sudden cardiac arrest (SCA). Methods: Of all survivors of out-of-hospital SCA, 100 consecutive patients (mean age, 52 ± 15 years; 80% men) who received TH were included in this study. Results: The most common first documented arrhythmia was ventricular fibrillation (VF) (77%), and ischemic heart disease (44%) and idiopathic VF (22%) were the most common causes of SCA in resuscitated patients. During TH, Osborn waves developed in 29 patients (29%). Osborn waves occurred more frequently in patients with Brugada syndrome. Patients with Osborn waves had lower in-hospital (10.3% vs 26.8%; P = .072) and 1-year death rates (20.7% vs 39.4%; P = .073) and better cerebral function (cerebral performance category scale, 2.0 ± 1.5 vs 2.7 ± 1.8; P = .053) than those without Osborn waves, although there was no statistical significance. Among 78 in-hospital survivors, 31 (40%) underwent implantable cardioverter-defibrillator (ICD) implantation. Appropriate ICD shocks from fatal arrhythmias were more frequent in patients who had Osborn waves than in those without Osborn waves (43% vs 6%; P = .032). Conclusions: Osborn waves during TH had no significant effect on the survival and cerebral function of patients resuscitated from SCA. However, appropriate ICD shocks due to the recurrence of VF were more frequent in patients with Osborn waves during long-term follow-up. abstract_id: PUBMED:25192593 Differentiating between comatose patients resuscitated from acute coronary syndrome-associated and subarachnoid hemorrhage-associated out-of-hospital cardiac arrest. Background: Upon initial evaluation in the emergency department (ED), it is often difficult to differentiate between comatose patients resuscitated following acute coronary syndrome (ACS)-associated and subarachnoid hemorrhage (SAH)-associated out-of-hospital cardiac arrest (OHCA). 
We assessed the clinical differences between resuscitated comatose ACS-OHCA and SAH-OHCA patients during initial evaluation in the ED. Methods: Data of 1259 consecutive OHCA patients were analyzed retrospectively. Of these, 23 resuscitated comatose ACS-OHCA patients and 20 resuscitated comatose SAH-OHCA patients were included in the final analysis. Clinical data obtained during initial evaluation in the ED were compared between groups. Results: Pulseless electrical activity (PEA) or asystole as the initial cardiac rhythm, female gender, and preserved left ventricular ejection fraction (≥50%) on the echocardiogram were significantly more common in the SAH-OHCA group (p<0.05 each). Although ST-T abnormalities suggesting myocardial damage (ST elevation and/or ST depression) were noted in most patients in both groups via 12-lead electrocardiogram (95%, ACS-OHCA group; 85%, SAH-OHCA group, p=0.50), reciprocal ST depression was significantly more often absent in the SAH-OHCA group (p=0.025). Initial PEA/asystole and presence of 1 other factor was sufficient to differentiate SAH-OHCA patients from ACS-OHCA patients (100% sensitivity, 91% specificity, 95% accuracy). Conclusions: Initial ED evaluation is sufficient to differentiate between comatose ACS-OHCA and SAH-OHCA patients prior to further diagnostic work-up (e.g. emergent coronary angiography and head computed tomography). abstract_id: PUBMED:26126505 Outcomes of patients resuscitated from cardiac arrest in the setting of drug overdose. Objectives: To compare the attributes and clinical outcomes of patients with cardiac arrest in the setting of drug overdose (OD) to patients with cardiac arrest from non-drug related etiologies. Methods: We utilized a US inpatient cardiac arrest registry used to study targeted temperature management (TTM) to identify patients with cardiac arrest in the setting of drug overdose between 2005 and 2013. Data regarding the cardiac arrest, resuscitation interventions, use of post-arrest TTM, urine drug screen, survival, and neurologic outcome were examined. These results were compared to patients suffering cardiac arrest from other causes during the same time period using Wilcoxon rank-sum tests for continuous variables and chi-square tests on categorical variables. Results: Approximately 2.5% (64/2584) of cardiac arrests occurred in the setting of drug overdose. Patients in the OD cohort were younger, more likely to be male, and more likely to have an out-of-hospital cardiac arrest that was unwitnessed with no bystander CPR and from a non-shockable rhythm. However, the patients in the OD cohort had similar rates of survival and good neurologic outcomes (Cerebral Performance Category 1-2) compared to non-OD patients. A fraction of initially resuscitated patients in each group (8% in OD cohort vs. 15% in non-OD cohort, p=ns) did not receive post-arrest TTM due to prompt awakening following resuscitation. Conclusions: Patients resuscitated from cardiac arrest in the setting of drug OD have neurologic and survival outcomes comparable to non-OD patients despite lower rates of bystander CPR, shockable rhythms, and witnessed arrest. abstract_id: PUBMED:20049976 Emergent percutaneous coronary intervention for resuscitated victims of out-of-hospital cardiac arrest. Long-term survival rates even after successful resuscitation from out-of-hospital cardiac arrest are dismal. Most of those initially resuscitated expired during their hospitalization. 
Recent reports have suggested that a more aggressive approach to postresuscitation care is the key to better outcome. Waiting for the evidence of neurological recovery before acting can result in missed opportunity to improve such recovery. Immediate induction of mild therapeutic hypothermia for all resuscitated victims who remain comatose offers the best hope for neurological recovery. Numerous reports suggest that early coronary angiography and PCI also improve outcome among those resuscitated from cardiac arrest whose postresuscitation ECG show evidence of ST elevation myocardial infarctions. Most promising is combining these two postresuscitation therapies, namely immediate induction of hypothermia and early coronary angiography and PCI. Combining these therapies has resulted in long-term survival rates of 70% with more than 80% of all such survivors neurologically functional. Even those without ST elevation on their postresuscitation ECG can greatly benefit from timely induction of hypothermia and early angiography/PCI. Answer: Yes, the elderly should be resuscitated following out-of-hospital cardiac arrest. A study comparing elderly (more than 70 years) and younger (less than 70 years) patients who were successfully resuscitated and hospitalized following out-of-hospital cardiac arrest found that even though hospital deaths were more common in the elderly, the length of hospitalization and stay in intensive care units were not significantly different between the age groups. The number of neurologic deaths was similar in both age groups, as were residual neurologic impairments. Long-term survival curves demonstrated similar survival in both age groups, with approximately 65 percent of hospital survivors alive at 24 months after hospital discharge. The study concluded that resuscitation of elderly patients in whom out-of-hospital cardiac arrest occurs is reasonable and appropriate (PUBMED:2913781). Another study evaluating the survival prognosis for the elderly after out-of-hospital cardiac arrest in a large urban center found that survival chances for the elderly are not bleak, and are reasonable if ventricular fibrillation/tachycardia is the presenting rhythm. Survival determinants were similar for younger and older adults (PUBMED:8222679). These findings suggest that age alone should not be a determining factor in the decision to resuscitate elderly patients following out-of-hospital cardiac arrest, as the outcomes can be comparable to those of younger patients.
Instruction: Does it matter whom you see? Abstracts: abstract_id: PUBMED:28775658 Representation of [Formula: see text]-Bernstein polynomials in terms of [Formula: see text]-Jacobi polynomials. A representation of [Formula: see text]-Bernstein polynomials in terms of [Formula: see text]-Jacobi polynomials is obtained. abstract_id: PUBMED:27026912 Convergence in [Formula: see text]-quasicontinuous posets. In this paper, we present one way to generalize [Formula: see text]-convergence and [Formula: see text]-convergence of nets for arbitrary posets by use of the cut operator instead of joins. Some convergence theoretical characterizations of [Formula: see text]-continuity and [Formula: see text]-quasicontinuity of posets are given. The main results are: (1) a poset P is [Formula: see text]-continuous if and only if the [Formula: see text]-convergence in P is topological; (2) P is [Formula: see text]-quasicontinuous if and only if the [Formula: see text]-convergence in P is topological. abstract_id: PUBMED:29670323 A note on [Formula: see text]-Bernstein polynomials and their applications based on [Formula: see text]-calculus. Nowadays [Formula: see text]-Bernstein polynomials have been studied in many different fields such as operator theory, CAGD, and number theory. In order to obtain the fundamental properties and results of Bernstein polynomials by using [Formula: see text]-calculus, we give basic definitions and results related to [Formula: see text]-calculus. The main purpose of this study is to investigate a generating function for [Formula: see text]-Bernstein polynomials. By using an approach similar to that of Goldman et al. in (SIAM J. Discrete Math. 28(3):1009-1025, 2014), we derive some new identities, relations, and formulas for the [Formula: see text]-Bernstein polynomials. Also, we give a plot generating function of [Formula: see text]-Bernstein polynomials for some selected p and q values. abstract_id: PUBMED:26594617 Detection of demyelination in multiple sclerosis by analysis of [Formula: see text] relaxation at 7 T. Multiple sclerosis (MS) is a relatively common cause of inflammatory demyelinating lesions of the central nervous system. In an attempt to detect and characterize ongoing demyelination in MS patient brains, we used a novel magnetic resonance imaging (MRI) technique, involving the fitting of a three-component model to the [Formula: see text] relaxation behavior at high-field (7 T). This model allowed estimation of the amount of myelin water (and thus indirectly myelin content), axonal water, and interstitial water. In this study, 25 relapsing-remitting MS patients underwent a 7 T MRI from which 12 gadolinium-enhancing lesions, 61 non-enhancing lesions, and their corresponding contralateral normal appearing white matter (NAWM) regions were analyzed. In both enhancing and non-enhancing lesions, the amplitude of myelin water was significantly decreased, and interstitial and axonal water were increased relative to the contralateral NAWM. Longer relaxation time [Formula: see text] of interstitial and axonal water, and lower frequency shift of axonal water, were also observed in both enhancing and non-enhancing lesions when compared to the contralateral NAWM. No significant difference was found between enhancing lesions and non-enhancing lesions. These findings suggest that the fitting of a three-component model to the [Formula: see text] decay curve in MS lesions may help to quantify myelin loss. 
abstract_id: PUBMED:38104615 Experimental perspectives on the matter-antimatter asymmetry puzzle: developments in electron EDM and [Formula: see text] experiments. In the search for clues to the matter-antimatter puzzle, experiments with atoms or molecules play a particular role. These systems allow measurements with very high precision, as demonstrated by the unprecedented limits down to [Formula: see text] e cm on electron EDM using molecular ions, and relative measurements at the level of [Formula: see text] in spectroscopy of antihydrogen atoms. Building on these impressive measurements, new experimental directions offer potential for drastic improvements. We review here some of the new perspectives in those fields and their associated prospects for new physics searches. This article is part of the theme issue 'The particle-gravity frontier'. abstract_id: PUBMED:28824262 Approximation properties of Chlodowsky variant of [Formula: see text] Bernstein-Stancu-Schurer operators. In the present paper, we introduce the Chlodowsky variant of [Formula: see text] Bernstein-Stancu-Schurer operators which is a generalization of [Formula: see text] Bernstein-Stancu-Schurer operators. We also discuss its Korovkin-type approximation properties and rate of convergence. abstract_id: PUBMED:23776306 Spectrum of the [Formula: see text]-Neumann Laplacian on the Fock space. The spectrum of the [Formula: see text]-Neumann Laplacian on the Fock space [Formula: see text] is explicitly computed. It turns out that it consists of positive integer eigenvalues, each of which is of infinite multiplicity. Spectral analysis of the [Formula: see text]-Neumann Laplacian on the Fock space is closely related to Schrödinger operators with magnetic fields and to the complex Witten Laplacian. abstract_id: PUBMED:29290666 k-fractional integral trapezium-like inequalities through [Formula: see text]-convex and [Formula: see text]-convex mappings. In this paper, a new general identity for differentiable mappings via k-fractional integrals is derived. By using the concept of [Formula: see text]-convexity, [Formula: see text]-convexity and the obtained equation, some new trapezium-like integral inequalities are established. The results presented provide extensions of those given in earlier works. abstract_id: PUBMED:28989258 The closure property of [Formula: see text]-tensors under the Hadamard product. In this paper, we investigate the closure property of [Formula: see text]-tensors under the Hadamard product. It is shown that the Hadamard products of Hadamard powers of strong [Formula: see text]-tensors are still strong [Formula: see text]-tensors. We then bound the minimal real eigenvalues of the comparison tensors of the Hadamard products involving strong [Formula: see text]-tensors. Finally, we show how to attain the bounds by characterizing these [Formula: see text]-tensors. abstract_id: PUBMED:28680243 A basic problem of [Formula: see text]-Bernstein-type operators. In this note, we give an elaboration of a basic problem on convergence theorem of [Formula: see text]-analogue of Bernstein-type operators. By some classical analysis techniques, we derive an exact class of [Formula: see text]-integer satisfying [Formula: see text] with [Formula: see text] and [Formula: see text] under [Formula: see text]. Our results provide an erratum to corresponding results on [Formula: see text]-analogue of Bernstein-type operators that appeared in recent literature. 
Answer: The provided abstracts do not contain any information related to the social or psychological implications of whom one might see, such as in a healthcare or personal context. Instead, they focus on mathematical and scientific concepts, such as [Formula: see text]-Bernstein polynomials, [Formula: see text]-quasicontinuous posets, [Formula: see text] relaxation in MRI for multiple sclerosis, matter-antimatter asymmetry experiments, Chlodowsky variant of [Formula: see text] Bernstein-Stancu-Schurer operators, the spectrum of the [Formula: see text]-Neumann Laplacian on the Fock space, k-fractional integral trapezium-like inequalities, the closure property of [Formula: see text]-tensors under the Hadamard product, and [Formula: see text]-Bernstein-type operators (PUBMED:28775658, PUBMED:27026912, PUBMED:29670323, PUBMED:26594617, PUBMED:38104615, PUBMED:28824262, PUBMED:23776306, PUBMED:29290666, PUBMED:28989258, PUBMED:28680243). Therefore, based on the abstracts provided, the question "Does it matter whom you see?" cannot be answered as it is not relevant to the mathematical and scientific content discussed in these abstracts.
Instruction: Tryptase levels in patients after acute coronary syndromes: the potential new marker of an unstable plaque? Abstracts: abstract_id: PUBMED:12918638 Tryptase levels in patients after acute coronary syndromes: the potential new marker of an unstable plaque? Background: Mast cells (MCs) are multifunctional immune cells that produce a number of vasoactive or thromboactive mediators. Elevated numbers of human heart MCs are observed in the shoulder regions of coronary atherosclerotic plaques, suggesting that they play a role in plaque rupture. Cardiac MC degranulation after myocardial ischemia has been documented in animal models. Cardiac MCs are highly profibrinolytic cells and release tryptase, their specific protease, after ischemic events. Hypothesis: Mast cell activation and release of tryptase may differentiate among patients with acute coronary syndromes (ACS), potentially determining the clinical course of ACS. Tryptase levels may indirectly reflect the fibrinolytic status of patients. Methods: Mast cell activation after ACS was estimated in 10 controls and 52 patients by measuring the serum levels of tryptase in the acute phase, at 2 weeks, and at 3 months after the ACS episode. Total tryptase levels were determined by using the UniCAP system and analyzed with respect to the patients' clinical types of ACS on admission (ACS with persistent ST-segment elevation on electrocardiogram or with ST-segment depression). Results: Significant differences in serum tryptase levels between the groups were found, with higher serum tryptase concentrations in the ST-segment depression group in the acute phase, and at follow-up. Conclusions: Serum tryptase concentration differences among patients with distinct types of ACS may indicate a more important role of human heart MCs in ACS with ST-segment depression pathogenesis. To our knowledge, this is the first report indicating that serum tryptase levels may differentiate patients with distinct types of ACS. abstract_id: PUBMED:26537210 Feasibility of diagnosing unstable plaque in patients with acute coronary syndrome using iMap-IVUS. Objective: To compare the plaque composition between stable and unstable plaques, characterize unstable plaque by using iMap-intravascular ultrasound (IVUS), and quantify the diagnostic criteria for unstable plaque. Methods: Thirty-three acute coronary syndrome (ACS) patients who had undergone coronary angiography and IVUS from February 19, 2014 to December 19, 2014 at Peking University People's Hospital were enrolled in the study. Baseline data were collected. The patients were divided into two groups according to their gray-scale IVUS imaging, stable plaque and unstable plaque. A difference-in-difference evaluation was performed using the baseline data and off-line iMap imaging results between the two groups. A receiver operating characteristic (ROC) curve was constructed to obtain the optimal cut-off value to diagnose unstable plaque. Results: Percentages of fibrotic and necrotic tissues, absolute values of lipidic, necrotic, and calcified tissues, and plaque burden were independent predictors for unstable plaque. Absolute necrotic area was the best predictor and exhibited the highest diagnostic value for plaque vulnerability (area under the curve (AUC)=0.806, P=0.000, 95% CI (0.718, 0.894)). The cut-off score for predicting unstable plaque was 4.0 mm(2). 
Conclusions: This study attempted to propose a cut-off value based on absolute necrotic area using iMap-IVUS to predict plaque vulnerability in patients with ACS. This score might provide a valuable reference for diagnosing unstable plaque. abstract_id: PUBMED:33011223 Identification of hub genes in unstable atherosclerotic plaque by conjoint analysis of bioinformatics. Aims: Unstable atherosclerotic plaque is the main pathological basis of acute coronary syndrome, which is the leading cause of death and disability worldwide. Therefore, we combined multiple bioinformatics tools to identify key genes related to unstable plaque. Main Methods: GSE94605 contained 7 plasma sample pools of 175 healthy and 6 sample pools of 150 unstable angina pectoris (UAP) patients, and detected with miRNA array while GSE60993 collected peripheral blood from 7 normal and 9 UAP, and detected with mRNA array. GSE120521 collected carotid plaques from 4 patients and dissected in stable and unstable regions, then detected with RNA-seq. Differentially expressed miRNAs (DEMs) and genes (DEGs) in UAP were re-analyzed. Gene Ontology (GO)/Kyoto Encyclopedia of Genes and Genomes (KEGG) and Protein-protein interaction (PPI) network were applied on top 10 up-regulated or down-regulated DEMs targets, and whole DEGs. MiRNAs-mRNAs network was constructed with these DEMs and DEGs, and the expression profile of genes within the network was finally validated in GSE120521. Key Findings: Totally, 263 up-regulated and 201 down-regulated DEMs were identified in GSE94605, and 78 up-regulated and 29 down-regulated DEGs were identified in GSE60993. Subsequently, a miRNAs-mRNAs network was constructed with 6 up-regulated miRNAs targeted to 12 down-regulated genes, and 4 down-regulated miRNAs targeted to 8 up-regulated genes. Finally, MORF4L2, RAB3IL1 and MMP9 within the network were considered as hub genes in unstable plaque progression after being validated in GSE120521. Significance: These 3 genes may provide new targets for diagnosis and therapy of unstable atherosclerotic plaque. abstract_id: PUBMED:9471327 Unstable atherosclerotic plaque and acute coronary syndromes Acute coronary syndromes (unstable angina pectoris, acute myocardial infarction, sudden cardiac death) participate significantly in cardiovascular and general morbidities and mortalities. Their common pathogenetic mechanism resides in the disturbance of the integrity of atherosclerotic plaque by a fissure, rupture, or ulceration and the origin of unstable atherosclerotic plaque by the formation of thrombi, which together with vasoconstriction, causes a varying degree of the dynamic obstruction of the coronary artery. Thrombogenesis takes place in coincidence with the factors of vascular wall, rheologic, thrombotic (proaggregatory and procoagulatory), and antithrombotic (antiaggregatory and anticoagulatory-fibrinolytic) factors. The formation of unstable atherosclerotic plaque is a critical point of the dissociation of both stable and unstable myocardial ischaemiae. The prevention and therapy of atherosclerosis must be complex, namely antiatherogenic, however most of all endothelium-protective, or cellulo-protective, antilipidogenic and antithrombogenic. They cannot be alternative; one therapy will not substitute another. Regarding the importance of even residual thrombosis and thrombin, new antithrombotic substances are being intensively investigated. abstract_id: PUBMED:16168805 Serum tryptase levels in acute coronary syndromes. 
Background: Mast cell accumulation and activation have been demonstrated in the vulnerable shoulder regions of atherosclerotic plaques and at the actual sites of plaque erosion and rupture. When activated and degranulated, mast cells release tryptase, a neutral protease, capable of activating matrix metalloproteinases and predisposing to plaque rupture. We tested the hypothesis that in acute coronary syndromes the levels of serum tryptase would reflect mast cell activation. Methods And Results: The study population consisted of 183 patients admitted to the emergency room of 3 general hospitals because of acute chest pain of ischemic origin. Of these patients, 64 suffered from exertional angina presenting with acute chest pain, 60 had unstable angina, and 59 had acute myocardial infarction. Serum tryptase levels were analyzed from samples drawn, on average at 7 h, and also at 24 h after the onset of the chest pain. As controls served 41 patients admitted for surgical treatment of inguinal hernia or varicose veins. Serum tryptase levels remained stable within the observation period, and no differences were detected between the patient groups and controls. On the other hand, the differences in C-reactive protein levels reflected the extent of myocardial injury. Conclusions: In ACS, serum tryptase levels are normal and remain stable. Our results do not exclude the possibility of local activation of coronary mast cells, but suggest that the excess quantity of tryptase acutely released from mast cells in ACS, if any, is not sufficient to be detected by measuring tryptase concentration in the systemic circulation. abstract_id: PUBMED:12024333 Pathology of the unstable plaque. The term "vulnerable" in the context of human coronary plaques was originally intended to provide a morphological description consistent with lesions prone to rupture. Coronary thrombosis is now recognized as a diverse process arising from plaque rupture, erosion, or calcified nodules. These findings have prompted the search for more definitive terminology to describe the precursor lesion of rupture, now referred to as "thin-cap fibroatheromas." This review provides a focused discussion of the thin-cap fibroatheroma as a specific cause of acute coronary syndromes. The pathology of the unstable plaque is presented in reference to unstable angina, acute myocardial infarction, and sudden coronary death. The influence of risk factors for coronary artery disease on culprit lesion morphology is also discussed. Finally, the value of coronary calcification, as a predictor of unstable plaques in the clinical setting, is explored. abstract_id: PUBMED:23973042 Targeting the unstable plaque in acute coronary syndromes. Background: Rupture or erosion of an unstable atherosclerotic plaque is the typical pathology and usual cause of acute coronary syndromes. Despite detailed understanding of the processes of lipid accumulation, thinning of the fibrous cap, and inflammation leading to plaque instability, there are no strategies in clinical use that uniquely target the unstable plaque. Objective: A critical review of recent publications on potential therapies that could be used to stabilize unstable plaque. Methods: We searched PubMed, other literature databases, drug development sites, and clinical trial registries to retrieve clinical studies on anti-inflammatory and lipid-modulating therapies that could be used to stabilize unstable atherosclerotic plaque. 
Results: Multiple experimental targets involving lipid and inflammatory pathways have the potential to stabilize the plaque and expand the armamentarium against coronary artery disease. Randomized clinical trials of darapladib, methotrexate, canakinumab, and colchicine are well advanced to establish if plaque stabilization is feasible and effective in patients with acute coronary syndromes. Conclusions: Although there are still no agents in clinical use for plaque stabilization, there are important advances in understanding plaque instability and several encouraging approaches are being evaluated in Phase III clinical trials. abstract_id: PUBMED:37595697 The microenvironment of the atheroma expresses phenotypes of plaque instability. Data from histopathology studies of human atherosclerotic tissue specimens and from vascular imaging studies support the concept that the local arterial microenvironment of a stable atheroma promotes destabilizing conditions that result in the transition to an unstable atheroma. Destabilization is characterized by several different plaque phenotypes that cause major clinical events such as acute coronary syndrome and cerebrovascular strokes. There are several rupture-associated phenotypes causing thrombotic vascular occlusion including simple fibrous cap rupture of an atheroma, fibrous cap rupture at site of previous rupture-and-repair of an atheroma, and nodular calcification with rupture. Endothelial erosion without rupture has more recently been shown to be a common phenotype to promote thrombosis as well. Microenvironment features that are linked to these phenotypes of plaque instability are neovascularization arising from the vasa vasorum network leading to necrotic core expansion, intraplaque hemorrhage, and cap rupture; activation of adventitial and perivascular adipose tissue cells leading to secretion of cytokines, growth factors, adipokines in the outer artery wall that destabilize plaque structure; and vascular smooth muscle cell phenotypic switching through transdifferentiation and stem/progenitor cell activation resulting in the promotion of inflammation, calcification, and secretion of extracellular matrix, altering fibrous cap structure, and necrotic core growth. As the technology evolves, studies using noninvasive vascular imaging will be able to investigate the transition of stable to unstable atheromas in real time. A limitation in the field, however, is that reliable and predictable experimental models of spontaneous plaque rupture and/or erosion are not currently available to study the cell and molecular mechanisms that regulate the conversion of the stable atheroma to an unstable plaque. abstract_id: PUBMED:24661698 Measurement of β-tryptase in postmortem serum in cardiac deaths. Mast cells are well known for their role in hypersensitivity reactions. However, there is increasing evidence that they might also participate in both developing and weakening atherosclerotic plaques, potentially causing plaque instability. Some clinical studies have therefore postulated the existence of relationships between blood β-tryptase levels and acute coronary syndromes. In this study, we investigated postmortem serum β-tryptase levels in a series of 90 autopsy cases with various degrees of coronary atherosclerosis that had undergone medico-legal investigations. β-tryptase concentrations in these cases were compared to levels observed in 6 fatal anaphylaxis cases following contrast material administration. 
Postmortem serum β-tryptase concentrations in the anaphylactic deaths ranged from 146 to 979 ng/ml. In 9 out of 90 cases of cardiac deaths, β-tryptase levels were higher than clinical reference values of 11.4 ng/ml and ranged from 21 to 65 ng/ml. These results indicate that increased postmortem serum β-tryptase levels can be observed, though not systematically, in cardiac deaths with varying degrees of coronary atherosclerosis disease, thereby suggesting that mast cell activation in this disease cannot be ascertained by postmortem serum β-tryptase measurements. abstract_id: PUBMED:11581149 Unstable coronary plaque and its relation to coronary calcium. Coronary calcium is intimately associated with coronary atherosclerotic plaque development. The use of electron-beam computed tomography (EBCT) for accurate quantitative measurements has led to an increased interest in understanding the clinical importance of coronary calcium, particularly in terms of the ability to identify unstable coronary plaques that underlie the clinical acute coronary syndromes. Histopathologic studies have demonstrated that calcium is a frequent feature of ruptured plaques, but the presence or absence of calcium does not allow for reliable distinction between unstable versus stable plaques. This issue is complicated by the lack of a prospective definition for "unstable." Plaque rupture is sometimes found in apparently healthy subjects and in patients with clinically stable disease. Coronary atherosclerosis is a coronary systemic disease process. Imaging of coronary calcium, although unable to identify a localized unstable plaque, potentially can identify the more clinically pertinent "unstable patient." Almost all patients with a recent acute coronary syndrome have measurable coronary calcium because moderate-to-advanced coronary plaque disease is already present, although obstructive disease frequently is not. Prospective studies have demonstrated that extensive coronary calcium detected by EBCT is associated with a significantly increased incidence of subsequent myocardial infarction, need for revascularization, and coronary death. The incremental prognostic value of coronary calcium compared with that of risk factor assessment remains to be fully defined. The occurrence of an acute coronary syndrome is determined by many factors apart from the extent of atherosclerotic plaque disease. Large prospective trials in the general population are needed to define the subgroups that will benefit most from quantitative assessment of coronary calcium. Answer: Tryptase, a protease released by mast cells, has been investigated as a potential marker for unstable plaque in patients with acute coronary syndromes (ACS). Mast cells are known to accumulate and become activated in the shoulder regions of atherosclerotic plaques, which are prone to rupture, and their degranulation releases tryptase among other mediators (PUBMED:12918638). Elevated numbers of mast cells in these regions suggest a role in plaque rupture, and cardiac mast cells are considered highly profibrinolytic (PUBMED:12918638). A study measuring serum tryptase levels in patients with ACS found significant differences between groups with distinct types of ACS, particularly higher concentrations in patients with ST-segment depression in the acute phase and at follow-up (PUBMED:12918638). 
This suggests that serum tryptase levels may differentiate patients with distinct types of ACS and indicates a more important role of human heart mast cells in the pathogenesis of ACS with ST-segment depression (PUBMED:12918638). However, another study reported that in ACS, serum tryptase levels are normal and remain stable, suggesting that the quantity of tryptase released from mast cells in ACS may not be sufficient to be detected in the systemic circulation (PUBMED:16168805). This indicates that while local activation of coronary mast cells may occur, it might not translate into detectable systemic changes in tryptase levels. In contrast, a study investigating postmortem serum β-tryptase levels in cardiac deaths with varying degrees of coronary atherosclerosis found that increased levels can be observed, though not systematically, suggesting that mast cell activation in coronary atherosclerosis disease cannot be ascertained by postmortem serum β-tryptase measurements (PUBMED:24661698). In summary, while there is some evidence to suggest that tryptase could be a marker of unstable plaque in ACS, the findings are not consistent across studies. Further research is needed to clarify the role of tryptase as a reliable biomarker for plaque instability in the context of ACS.
Instruction: Can quality of life assessments differentiate heterogeneous cancer patients? Abstracts: abstract_id: PUBMED:24919068 Can quality of life assessments differentiate heterogeneous cancer patients? Purpose: This research conducted a face validation study of patient responses to the application of an HRQOL assessment research tool in a comprehensive community cancer program setting across a heterogeneous cohort of cancer patients throughout the natural history of diagnosed malignant disease, many of whom would not be considered candidates for clinical research trial participation. Methods: Cancer registries at two regional cancer treatment centers identified 11072 cancer patients over a period of nine years. The EORTC QLQ-C30 was administered to patients at the time of their initial clinical presentation to these centers. To determine the significance of differences between patient subgroups, two analytic criteria were used. The Mann-Whitney test was used to determine statistical significance; clinical relevance defined a range of point differences that could be perceived by patients with different health states. Results: Univariate analyses were conducted across stratification variables for population, disease severity and demographic characteristics. The largest differences were associated with cancer diagnosis and recurrence of disease. Large differences were also found for site of origin, mortality and stage; minimal differences were observed for gender and age. Consistently sensitive QoL scales were appetite loss, fatigue and pain symptoms, and role (work-related), social and physical functions. Conclusions: 1) The EORTC QLQ-C30 collected meaningful patient health assessments in the context of non-research based clinical care, 2) patient assessment differences are manifested disparately across 15 QoL domains, and 3) in addition to indicating how a patient may feel at a point in time, QoL indicators may also reveal information about underlying biological responses to disease progression, treatments, and prospective survival. abstract_id: PUBMED:31561134 Current state of quality of life and patient-reported outcomes research. The 5th EORTC Quality of Life in Cancer Clinical Trials Conference presented the current state of quality of life and other patient-reported outcomes (PROs) research from the perspectives of researchers, regulators, industry representatives, patients and patient advocates and health care professionals. A major theme was the assessment of the burden of cancer treatments, and this was discussed in terms of regulatory challenges in using PRO assessments in clinical trials, patients' experiences in cancer clinical trials, innovative methods and standardisation in cancer research, innovative methods across the disease sites or populations and cancer survivorship. Conferees demonstrated that PROs are becoming more accepted and major efforts are ongoing internationally to standardise PROs measurement, analysis and reporting in trials. Regulators are keen to collaborate with all stakeholders to ensure that the right questions are asked and the right answers are communicated. Improved technology and increased flexibility of measurement instruments are making PROs data more robust. Patients are being encouraged to be patient partners. International collaborations are essential, because this work cannot be accomplished on a national level. 
abstract_id: PUBMED:16547776 Predictors of the extent of agreement for quality of life assessments between terminally ill cancer patients and their primary family caregivers in Taiwan. The use of family caregivers to report patients' dying experiences has been suggested as a way of resolving the problem of non-response bias and missing data in end-of-life research. There is a dearth of information in the literature about the predictors of the extent of agreement for quality of life (QOL) assessments between patients and family informants outside of Western countries. In addition, to date there has been little progress made in deciphering meaningful influencing factors on the levels of agreement between patient and family QOL assessments. The purpose of this study was to identify the impact of the demographics and disease characteristics of patients and families, relationships of the patient to family caregiver, and caregiving burden on the extent of agreement for QOL assessments between Taiwanese terminally ill cancer patients and their family caregivers. Results from assessments by 114 dyads of Taiwanese terminally ill cancer patients and their family caregivers indicated that the extent of agreement between patient and family assessments of patient QOL was negatively influenced by the caregiving burden (amount of care needed and the impact of caregiving on caregiver's health), positively influenced by the patient's poorer health status, and there were a mixed effect of the demographics and the relationship between the patient and the family. Providing clinical care tailored at empowering families and aimed at reducing their unmet caregiving needs may improve family caregivers' abilities to assess patient QOL sensitively. abstract_id: PUBMED:12546523 Focus on the individual--quality of life assessments in oncology. In this review we investigate how assessments of quality of life (QoL) can be used in daily clinical practice. The focus is on individualized QoL assessments, but computerized-adaptive testing and interpretation of QoL scores are also considered. Quality of life (QoL) measurements have been used relatively infrequently in clinical practice, although individualized care planning and follow-up based on QoL information may lead to better outcomes of treatment and informed and autonomous decision-making by patients. A recent approach in assessing QoL is by individualized measures, which tap QoL as defined by the individual patient. Individualized QoL instruments, such as the Schedule for the Evaluation of Individual Quality of Life (SEIQoL) and the Patient-Generated Index (PGI), have obvious appeal for use in clinical practice, since they incorporate topics of greatest concern to the individual patient, while also capturing their ratings and weightings. However, before general use in daily clinical practice can be recommended, some aspects of these instruments require further research, particularly issues concerning reliability, feasibility, interpretation and effectiveness. abstract_id: PUBMED:34394479 Predictors of quality of life among older patients with cancer during treatment. Background: Improving quality of life in older patients with cancer has become an important goal of healthcare providers. Aims: The purpose of this study was to identify the predictors of quality of life among older patients with cancer, aged 60 years and over during the treatment period. Methods: A descriptive correlational study was conducted among 150 patients. 
The Functional Assessment of Cancer Therapy Scale, Herth Hope Index and Hospital Anxiety and Depression Scale were used. Results: The results showed that the total quality-of-life mean score was 58.50 (SD = 7.44), indicating low overall quality of life. The social-family well-being subscale had the highest mean (20.50, SD = 3.79) among all subscales of quality of life, while the emotional well-being subscale had the lowest mean (8.06, SD = 4.23). Hope and educational level had statistically significant positive relationships with all subscales of quality of life. However, anxiety was associated negatively with physical, social-family and functional well-being subscales, but positively with the emotional well-being subscale. Anxiety, income, marital status, health insurance, duration of treatment, educational level, gender and hope were identified as predictors of quality-of-life subscales. Conclusions: The results could help to develop specific programmes that may improve quality of life among older patients with cancer during treatment. abstract_id: PUBMED:17328030 Computerised assessment of quality of life in oncology patients and carers. Objectives: To examine the feasibility of a computerised assessment and to compare resulting quality of life scores among oncology carers and patients. Methods: Six hundred and twenty-two subjects (carers n = 234, patients n = 388) attending an oncology or haematology clinic completed the WHO-Quality of Life-BREF (WHOQOL-BREF) and demographics on a personal computer. Age- and gender-standardised scores for carers and patients were compared to community norms. Carers were compared with patients using six three-way ANOVAs. Results: In total, 60.3% (234/388) of the patients had an accompanying carer attempt the survey and 93.6% (219 of 234) completed all items. The four domains of the WHOQOL-BREF had acceptable internal consistency (Cronbach alpha coefficients). Carers had lower quality of life on three domains (physical, psychological and environmental), and patients on two domains (physical and psychological), compared with the general community. Carers had higher overall satisfaction with their health than patients (standardised difference of 0.52 units-moderate effect). Conclusion: A computerised system for assessing quality of life was feasible and found that carers and patients had significant impairments in quality of life. Clinicians and services interested in quality of life for carers and patients may use computerised assessments to identify those with poorer quality of life. abstract_id: PUBMED:26161285 Preoperative Quality of Life in Patients with Gastric Cancer. Purpose: We evaluated the socio-personal and clinical factors that can affect preoperative quality of life to determine how to improve preoperative quality of life in patients with gastric cancer. Materials And Methods: The preoperative quality of life data of 200 patients (68 females and 132 males; mean age 58.9±12.6 years) with gastric cancer were analyzed according to socio-personal and clinical factors. The Korean versions of the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire Core (QLQ) 30 and the EORTC QLQ-STO22, a gastric cancer-specific module, were used to assess quality of life. Patients were asked to complete the questionnaire preoperatively by themselves. Results: Patients with a higher academic background and stage I disease tended to have higher global health status scores. 
Highly educated younger men had better physical functioning scores. Highly educated and well-nourished patients with stage I cancer had higher role functioning scores. Married patients had better emotional scores. The symptom scales were affected by sex, age, education level, nutrition, and cancer stage. Conclusions: Preoperative quality of life in patients with gastric cancer can be improved by nutritional support and treatment of symptoms caused by disease progression. Psychological support may be helpful for patients with a poor quality of life. abstract_id: PUBMED:30830292 Presurgical assessments For the estimation of perioperative risks and mortality in the context of radical urological tumor surgery in elderly patients, the guidelines meanwhile require the use of geriatric assessments. The aim of this work is therefore to explain frequently used geriatric assessments and to give an overview of their predictive significance in radical urological tumor surgery. Comprehensive geriatric assessments provide a good description of the patient's state of health, but are hardly feasible in clinical routine due to their complexity. It is more reasonable to use screening tools with subsequent targeted examination of high-risk patients. Special tools allow the standardized assessment of functional status, mobility, cognition, mood, nutrition, frailty, comorbidities and polypharmacy and have different prognostic significance. Evidence on the predictive value of assessments prior to radical urological tumor surgery is mainly described for the systematic classification of comorbidities. In cystectomy, the Charlson Comorbidity Index (CCI) and the American Society of Anesthesiologists (ASA) score allow an estimation of the risk of complications and mortality. The focus of assessments prior to prostatectomy is to identify patients with sufficient life expectancy to benefit from radical surgery. CCI and ASA scores as well as the Eastern Co-operative Oncology Group (ECOG) score can help to assess the risk of perioperative complications in kidney tumor surgery. abstract_id: PUBMED:8398476 Is there congruence in hospice nurses' view of their patients' quality of life and their hospice patients' view of their own quality of life? Since the development of the hospice movement in the United States, care of the terminally ill has undergone vital change. This change involves the alternative of home care for the terminally ill. It allows the patients, the primary caregiver, and the rest of the family to be in control of care. As stated by Masters, hospice is unique in that the patient and family controls the environment, and lifestyle, including treatment modalities, more than in any other concept in health care. Hospice was developed to give people with a terminal illness an option to traditional medicine. Hospice is a concept of care that has been established to improve the quality of life of the terminally ill. Therefore, the ultimate goal of hospice care is to enhance the quality of life of patients with terminal illness, mainly cancer. In the hospice setting, the nurse applies a holistic approach when planning and providing primary care. The nurse becomes increasingly more knowledgeable about the problems and strengths of the patient and family to whom he/she is caring for. The nurse must make the transition in her philosophy from cure-to-care in hospice, and symptom control being the nurses' priority in order to enhance the patients quality of life. 
Curtis and Fernsler explain that hospice care focuses on helping patients to communicate effectively and to share life agendas with their primary caregiver and the hospice nurse. These researchers continue to state that these efforts promote congruence between patient, primary caregiver, and the hospice nurses' view of the patients quality of life.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:12600219 Health-related quality-of-life assessments in patients with advanced cancer of the prostate. Health-related quality of life (HR-QOL) research, in general, is developing. It is becoming increasingly recognised as being important and knowledge on how to assess HR-QOL is spreading. Modern HR-QOL research should assess prevalence and severity of symptoms or functions, as well as the bother that symptoms or changes in functions inflict and whether this bother affects the overall well-being of the patient. In the prostate cancer field, HR-QOL assessments have evolved rapidly during the last decade. The importance of HR-QOL and associated research in prostate cancer is being more and more appreciated. Most published studies concern patients with early prostate cancer; studies in patients with late advanced stages are less frequent. In early stages, patients often have no or few symptoms from the disease. HR-QOL aspects for these patients are thus often associated with anxiety and treatment adverse effects such as changes in sexual, urinary and bowel functions. Changes in these functions also often affect HR-QOL in patients with more advanced local stages of prostate cancer or metastatic disease. Moreover, HR-QOL in these patients is also affected by endocrine treatment aiming to castrate the patient. Monotherapy with antiandrogens as endocrine treatment may be the treatment alternative that affects HR-QOL the least. This alternative may, however, not be as effective as castration. In the palliative phase, patients with prostate cancer have problems common to patients dying of other cancers. HR-QOL research should be encouraged and the knowledge on how HR-QOL can be assessed needs to be disseminated. HR-QOL assessments should be as natural in outcomes research as survival assessments. The technology of assessing HR-QOL needs to be further investigated and improved particularly with respect to the understanding of differences in performance between instruments using summary scores and item-specific assessments. Also, we need an increased understanding of the mechanisms behind patients' trade-offs and of how information should be communicated to patients in order to give the best possibilities for making informed treatment decisions. Answer: Yes, quality of life assessments can differentiate heterogeneous cancer patients. The research conducted using the EORTC QLQ-C30 tool in a comprehensive community cancer program setting demonstrated that patient health assessments collected meaningful data in the context of non-research based clinical care. The study found that patient assessment differences are manifested disparately across 15 quality of life (QoL) domains, with the largest differences associated with cancer diagnosis and recurrence of disease. Other significant differences were found for site of origin, mortality, and stage, while minimal differences were observed for gender and age. Sensitive QoL scales included appetite loss, fatigue, pain symptoms, and role (work-related), social, and physical functions. 
These findings suggest that QoL indicators can reveal information about underlying biological responses to disease progression, treatments, and prospective survival (PUBMED:24919068). Furthermore, other studies have identified various predictors of quality of life in cancer patients, such as caregiving burden, patient's poorer health status, demographics, and the relationship between the patient and the family caregiver (PUBMED:16547776). Individualized QoL assessments, such as the Schedule for the Evaluation of Individual Quality of Life (SEIQoL) and the Patient-Generated Index (PGI), have been suggested for use in clinical practice as they incorporate topics of greatest concern to the individual patient (PUBMED:12546523). Additionally, predictors of quality of life among older patients with cancer during treatment have been identified, including hope, educational level, anxiety, income, marital status, health insurance, duration of treatment, gender, and hope (PUBMED:34394479). In summary, quality of life assessments are capable of differentiating between heterogeneous cancer patients by capturing a wide range of patient experiences and health states, which are influenced by various factors including disease characteristics, demographics, and personal circumstances.
Instruction: Should embolectomy be performed in late acute lower extremity arterial occlusions? Abstracts: abstract_id: PUBMED:19688102 Should embolectomy be performed in late acute lower extremity arterial occlusions? Background: We analyzed the embolectomy results and complications of patients who were operated on after a diagnosis of late acute arterial occlusion of lower extremities. Methods: A total of 122 patients operated on in our clinic between 2004 and 2009 for late acute arterial occlusion were included in the study. Late arterial occlusion was defined as occlusion occuring 72 hours after initial manifestation of the patient complaints related to the affected lower extremity. Results: Average age of the 122 patients (71 male, 51 female) was 54.2 +/- 16.8 years. In this cohort, 64.75% of patients had cardiac pathologies, while 28.68% had extracardiac causes; 1.64% patients had cathetherization, 0.81% patient had malignancy, and 2.46% patients had a history of trauma. In 1.64% of the cases, no reason for thromboembolysis could be found. Thirty-one patients (25.40%) had additional surgical operations, 14 (11.47%) had fasciotomy, and 9 (7.37%) had amputation. Re-embolectomy was performed on 37 patients (30.32%) who had ongoing ischemia after an operation. Additional surgical operations were performed on 31 patients (25.40%) with ongoing ischemia. In 14 of these cases (11.47%), patients were treated with fasciotomy due to development of compartment syndrome. Amputation was performed on a total of 9 patients. Early in the postoperative phase, mortality was observed in 11 patients (9.01%). Conclusion: We believe that late embolectomies of acute late leg ischemia increases blood flow in the extremity and reduces the number of amputations required. abstract_id: PUBMED:19819793 Accompanying embolectomy and closed fasciotomy in the same session for the delayed arterial thromboembolic occlusions in lower extremity Objective: We aimed to compare the postoperative complications in patient with acute lower extremity arterial occlusion who admitted in the late period in whom accompanying embolectomy and fasciotomy in the same session and fasciotomy following embolectomy were applied, because of development of compartment syndrome. Methods: A total of 36 patients (13 female, 23 male) with acute lower extremity arterial occlusion, who admitted to our clinic at least 12 hours after onset of symptoms were enrolled to the study and investigated retrospectively. While embolectomy was performed in all cases, the cases in which mini fasciotomy was performed after development of compartment syndrome constitute the group 1 (n=21) and the ones in whom accompanying embolectomy and closed mini fasciotomy in the same session constitute the group 2 (n=15), groups were formed randomly. Groups were compared in regard of extremity amputation, renal failure, hypoesthesia in lower extremity, paresthesia in fingers in their clinical course. In statistical analysis t test, Chi-square test and Mann Whitney U test were used for comparison of groups, and a p&lt;0.05 was considered as significant. Results: While cardiac originating embolus was the most frequent etiologic factor, femoropopliteal artery was the most frequent involved area in both groups. Morbidity rates were 66.6% in group 1 and 13.3% in group 2 (p&lt;0.002). Amputation below the knee was performed in two patients in group 1. 
Conclusion: In order to reduce the unfavorable effects of reperfusion injury, accompanying fasciotomy and embolectomy in the same session, may have an important role in decreasing the postoperative mortality rate in patients who admitted in the late stage of lower extremity acute arterial occlusions. abstract_id: PUBMED:35856365 Admission monocyte to high density lipoprotein cholesterol ratio predicts amputation in patients undergoing embolectomy for acute lower extremity ischemia. Objective: Acute limb ischemia is a common clinical manifestation of embolism or thrombosis, which can lead to amputation. Increasing evidence suggests that various biomarkers can predict amputation at the time of admission. Identifying an easily obtainable and inexpensive indicator has always been a major objective. The aim of this study was to determine the predictive value of the admission monocyte count to the HDL-C ratio for a lower extremity amputation in patients undergoing embolectomy for acute limb ischemia. Patients And Methods: This retrospective, single-center study included 269 patients who underwent an emergent embolectomy. The study population was divided into two groups according to early amputation: the non-amputation group (n = 220) and the amputation group (n = 49). Two groups were compared based on various data. Results: According to the multivariate regression analysis, patients with a higher CRP and MHR have a significantly higher amputation rate (HR: 1.148; CI: 1.075-1.225; p &lt; 0.001 and HR: 1.547; CI: 1.003-2.387; p = 0.04, respectively). Patients with arterial back bleeding have a significantly lower amputation rate (HR: 0.106; CI: 0.02-0.558; p = 0.008). Conclusions: Our study demonstrated that preoperative CRP, MHR, and no arterial back bleeding after surgery were found to be independent predictors of amputation as a poor prognostic factor within 30 days after an embolectomy. abstract_id: PUBMED:27863366 Early versus late thrombolysis in acute arterial occlusion of lower extremity. Background: Acute arterial occlusion in lower extremity is an urgent condition which occurs when there is an abrupt interruption of blood flow into an extremity. Reperfusion through early intervention can increase limb salvage and decrease mortality. There was no common agreement when is the best to start thrombolysis in treating acute arterial occlusion. This study was designed to study the efficacy of an early thrombolysis compared with a late thrombolysis. Method: We identified all patients discharged from the Gwangju Veterans hospital with a diagnosis of acute arterial occlusion between 2006 and 2014. 72 patients were eligible, and every patient had treated with catheter-directed thrombolysis on the day or 1day after admission. Among them, 42 patients had undergone an early thrombolysis (less than 7days after the onset of symptoms) and the other 30 patients had undergone a late thrombolysis (more than 7days after the onset of symptoms). The primary outcome was amputation rate at 6months. The secondary outcomes were all cause mortality at 6months and increase of ankle brachial index (ABI). Results: Amputation rate at 180days in the early thrombolysis group was 7.1% as compared with 30% in the late thrombolysis group. All cause mortality at 6months and increase of ABI were not different between two groups. In multivariable Cox-regression analysis, late thrombolysis was independent predictor of amputation at 6months. 
Conclusion: Early thrombolysis was superior in preventing amputation than late thrombolysis. abstract_id: PUBMED:12525809 Microtibial embolectomy. Background: microtibial embolectomy is an important technique in cases of limb threatening acute arterial occlusion affecting native crural and pedal vessels. It is particularly useful when thrombolysis is contraindicated or ineffective as in "trash foot". Methods: in order to evaluate the efficacy of this technique, a retrospective case note review was carried out for patients undergoing microtibial embolectomy from 1990 to 1999. Data collected included the causes and degree of ischaemia, additional procedures required, vessel patency, limb salvage and complications encountered. Results: twenty-two limbs underwent exploration of the crural/pedal vessels with ankle level arteriotomies under local anaesthetic in 12 cases, general anaesthetic in nine and epidural in one. The causes of ischaemia were cardiac emboli (8), "trash foot" (7), emboli from aortic and popliteal aneurysms (3) and thrombotic occlusion of crural vessels (4). The vessel patency rate was 69% and limb salvage rate 62% (13/21) up to 5-years follow-up. Six of the seven cases with "trash foot" were salvaged while one required an amputation at 3-months post-operatively. The 30-day mortality was 22% (5/22). Conclusions: microtibial embolectomy is effective in acute occlusion of the crural/pedal arteries including cases of "trash foot", offering limb salvage to a worthwhile proportion of cases. abstract_id: PUBMED:22606056 Analysis of risk factors for amputation in 822 cases with acute arterial emboli. Background: We retrospectively examined the records of 822 patients who underwent a total of 901 operations for acute peripheral arterial occlusion of the upper or lower extremities between 1999 and 2009. We analyzed the effects of atherosclerotic structure, the time of admission to hospital, and re-embolectomies on amputation in the early postoperative period. Methods: There were 466 (56.7%) men and 356 (43.3%) women. The time of admission to hospital was in the range of 58 hours. There were lower extremity emboli in 683 (83%). Bypass procedures were done in 27 (3.3%) patients. Fasciotomy, patchplasty, and endarterectomy were made in 19 (2.3%), 9 (1.1%), and 7 (0.8%) patients, respectively. Results: Early revision (re-embolectomy) was performed in 77 (9.3%) patients. Amputation was performed in 112 (13.6%) patients. Delay after six hours from the onset of complaints and re-embolectomies increased the risk of amputation and rates. Conclusion: If the embolectomy, which is a rapid and easy technique for treatment of acute arterial emboli, is performed by experienced surgeons without delay, the complications associated with the emboli may be prevented. Otherwise, delayed operation and repeated re-embolizations in acute arterial play important roles in morbidity. abstract_id: PUBMED:30488304 Percutaneous Mechanical Thromboembolectomy in Acute Lower Limb Ischemia. Purpose: To analyze the immediate outcome of percutaneous mechanical thromboembolectomy in acute infrainguinal leg ischemia in a consecutive cohort of patients with acute lower limb ischemia. Material And Method: We retrospectively analyzed the data of 156 acute infrainguinal ischemic events in 148 patients. Patients presented with acute limb ischemia Rutherford category I in 68 cases (44%), Rutherford category II A in 64 instances (41%) and Rutherford II B in 24 instances (15%). 
In 62 cases (39.7%), the occlusion site started below the knee joint level, in 94 (60.3%) cases above. As a basic technique, an intervention was started by manual aspiration but if aspiration failed, an additional device was added. Most frequently, a rotational thrombectomy device (Rotarex, Straub Medical, Wangs, Switzerland) was used. An antegrade access to the femoral artery was the preferred access to the limb (154/156). Results: In 145 of 156 incidents, a technical success was achieved (93%). Aspiration was used in 153 cases (98%). Rotational thrombectomy by use of the Rotarex catheter was added in 60 cases (38%). Directional atherectomy was applied in a total of five patients. As main technical complications, a downward embolization occurred (n = 11). There were four surgical groin revisions. Five patients died during the early follow-up with four not related to the intervention. Clinically, 135 patients (86.5%) showed an improvement in their clinical situation. Conclusion: Acute lower limb ischemia can be successfully treated by mechanical thromboembolectomy only by combining aspiration embolectomy with rotational thrombectomy in most cases but manual aspiration alone will frequently fail especially above the knee joint level. abstract_id: PUBMED:37152341 Acute Upper Extremity Arterial Occlusion Diagnosed on POCUS in the Emergency Department. Upper extremity acute limb ischemia (ALI) is a limb-threatening and potentially lethal pathology that is most commonly caused by vascular embolization. Outcomes of limb ischemia are time-sensitive due to the correlation between a longer time from symptom onset to intervention with a vastly higher risk of amputation. In this report, point of care ultrasound (POCUS) was utilized to rapidly diagnose a patient with a proximal right brachial artery embolic occlusion, prompting expedited surgical consultation and successful embolectomy. POCUS can provide a focused vascular examination of the limbs to expedite diagnosis of time-sensitive ALI and facilitate timely medical intervention and surgical consultation. abstract_id: PUBMED:37401427 Clinical Characteristics of Acute Lower Extremity Ischemia Due to Left Atrial Myxoma: A Rare Case Report with Review of Literature. Emboli caused by cardiac myxomas mostly occur in the cardiovascular or cerebrovascular systems and rarely in the lower extremity vasculature. We introduce the rare case of a patient with left atrial myxoma (LAM) whose right lower extremity (RLE) suffered from acute ischemia due to tumor fragments, along with a review of the relevant literature, and highlight the clinical characteristics of LAM. An 81-year-old female presented with acute ischemia of RLE. Color Doppler ultrasound showed no blood flow signal far from the RLE femoral artery. Computed tomography angiography showed an occlusion of the right common femoral artery. A transthoracic echocardiogram revealed a left atrial mass. Femoral artery embolectomy was performed under local anesthesia, followed by thoracotomy with tumor resection under general anesthesia on postoperative day seven. The tumor was pathologically confirmed as an atrial myxoma. A literature search of the PubMed database returned 58 cases of limb ischemia due to LAM, and the conclusions drawn from the statistical analysis were that emboli from LAM occurred most commonly in the aortoiliac and bilateral lower limb vasculature and were rarely associated with upper extremity and atrial fibrillation. Multisystem embolism is characteristic of cardiac myxoma. 
The removed embolus should be examined pathologically for signs of a cardiac myxoma. Lower-limb embolisms should be promptly diagnosed and treated to avoid osteofascial compartment syndrome. abstract_id: PUBMED:22087211 Emergency microsurgical embolectomy for the treatment of acute intracranial artery occlusion: report of two cases. Background: The main treatment for acute arterial ischemic stroke is intravenous or intra-arterial thrombolysis within a particular time window. Endovascular mechanical embolectomy is another treatment option in the case of major artery occlusion. Endovascular mechanical embolectomy is a useful technique for restoring blood flow in patients with large-vessel occlusion, and especially in those who are contraindicated for thrombolytics or in whom thrombolytic therapy has failed. Case Report: We report herein two cases of emergency microsurgical embolectomy for the treatment of acute middle cerebral artery and internal carotid artery occlusion as an alternative treatment for major artery occlusion. Conclusions: Emergency microsurgical mechanical embolectomy may be an alternative treatment option for restoring blood flow in selected patients with large-vessel acute ischemic stroke. Answer: Embolectomy can be considered for late acute lower extremity arterial occlusions. Studies have shown that late embolectomies can increase blood flow in the extremity and reduce the number of amputations required (PUBMED:19688102). Additionally, performing an accompanying fasciotomy and embolectomy in the same session may have an important role in decreasing the postoperative mortality rate in patients who are admitted in the late stage of lower extremity acute arterial occlusions (PUBMED:19819793). Moreover, certain biomarkers, such as the admission monocyte count to the HDL-C ratio, have been found to predict amputation rates in patients undergoing embolectomy for acute limb ischemia, which could help in making clinical decisions (PUBMED:35856365). It is also noted that early thrombolysis is superior in preventing amputation compared to late thrombolysis (PUBMED:27863366), suggesting that timely intervention is crucial. Microtibial embolectomy has been shown to be effective in acute occlusion of the crural/pedal arteries, offering limb salvage to a worthwhile proportion of cases (PUBMED:12525809). Furthermore, the risk of amputation increases with delays in treatment beyond six hours from the onset of symptoms and with the need for re-embolectomies (PUBMED:22606056). Percutaneous mechanical thromboembolectomy has also been successful in treating acute lower limb ischemia, especially when combining aspiration embolectomy with rotational thrombectomy (PUBMED:30488304). In conclusion, while the timing of the intervention is critical, and earlier treatment is generally associated with better outcomes, embolectomy can still be beneficial in late acute lower extremity arterial occlusions, particularly when combined with other procedures like fasciotomy and when guided by predictive biomarkers. However, the decision should be individualized based on the patient's condition, the presence of viable tissue, and the risk of complications.
Instruction: Is it possible to identify dengue in children on the basis of Ministry of Health criteria for suspected dengue cases? Abstracts: abstract_id: PUBMED:15951905 Is it possible to identify dengue in children on the basis of Ministry of Health criteria for suspected dengue cases? Objectives: To identify clinical characteristics indicative of dengue and to evaluate the applicability to children of the Health Ministry criteria for suspected cases. Methods: A cross-sectional study undertaken at the General Pediatrics Center of the Fundação Hospitalar de Minas Gerais. Children were enrolled if presenting acute febrile conditions with no definite etiology, lasting &gt; 24 hours and &lt; or = 7 days and if resident in the Metropolitan Region of Belo Horizonte. Clinical variables were investigated, specific tests were performed and aspartate-aminotransferase assayed, during a period considered both endemic and epidemic for the disease. The subset of children who did have dengue was compared with the subset of nonspecific acute febrile diseases. The Health Ministry criteria for suspected cases was evaluated. Results: Dengue was diagnosed in 50.4% of the 117 children studied. There were no statistically significant associations between the disease and the majority of the symptoms analyzed. Only exanthema was more often associated with dengue (Prevalence Ratio = 1.49; 95% CI: 1.05-2.11). The criteria for suspected cases of dengue had a sensitivity of just 50.8% and a positive predictive value of 62.5%. These values were greater among schoolchildren and during the period of greater disease incidence. Conclusions: Dengue is common among febrile diseases of childhood, with prevalence that varies according to the epidemiological situation. The clinical status of children with dengue was very similar to that of children with other nonspecific diseases. The Health Ministry criteria for suspected cases was shown to be of little use, particularly with smaller children and during periods of reduced incidence. abstract_id: PUBMED:17892564 Health seeking and access to care for children with suspected dengue in Cambodia: an ethnographic study. Background: The continuing contribution of dengue fever to the hospitalization and deaths in hospital of infants and small children in Cambodia is associated with delays in presentation for medical attention, diagnosis and appropriate care. It is important to identify the reasons that influence these delays, in order to develop appropriate interventions to redress the impact of dengue. Methods: Data on health seeking were collected during an ethnographic study conducted in two villages in the eastern province of Kampong Cham, Cambodia in 2004. Interviews were conducted with mothers whose children had been infected with suspected dengue fever, or who had been sick for other reasons, in 2003 and 2004. Results: Women selected a therapeutic option based on perceptions of the severity of the child's condition, confidence in the particular modality, service or practitioner, and affordability of the therapy. While they knew what type of health care was required, poverty in combination with limited availability and perceptions of the poor quality of care at village health centers and public referral hospitals deterred them from doing so. Women initially used home remedies, then sought advice from public and private providers, shifting from one sector to another in a pragmatic response to the child's illness. 
Conclusion: The lack of availability of financial resources for poor people and their continuing lack of confidence in the care provided by government centres combine to delay help seeking and inappropriate treatment of children sick with dengue. abstract_id: PUBMED:35387590 Infectious diseases prevention and control using an integrated health big data system in China. Background: The Yinzhou Center for Disease Prevention and Control (CDC) in China implemented an integrated health big data platform (IHBDP) that pooled health data from healthcare providers to combat the spread of infectious diseases, such as dengue fever and pulmonary tuberculosis (TB), and to identify gaps in vaccination uptake among migrant children. Methods: IHBDP is composed of medical data from clinics, electronic health records, residents' annual medical checkup and immunization records, as well as administrative data, such as student registries. We programmed IHBDP to automatically scan for and detect dengue and TB carriers, as well as identify migrant children with incomplete immunization according to a comprehensive set of screening criteria developed by public health and medical experts. We compared the effectiveness of the big data screening with existing traditional screening methods. Results: IHBDP successfully identified six cases of dengue out of a pool of 3972 suspected cases, whereas the traditional method only identified four cases (which were also detected by IHBDP). For TB, IHBDP identified 288 suspected cases from a total of 43,521 university students, in which three cases were eventually confirmed to be TB carriers through subsequent follow up CT or T-SPOT.TB tests. As for immunization screenings, IHBDP identified 240 migrant children with incomplete immunization, but the traditional door-to-door screening method only identified 20 ones. Conclusions: Our study has demonstrated the effectiveness of using IHBDP to detect both acute and chronic infectious disease patients and identify children with incomplete immunization as compared to traditional screening methods. abstract_id: PUBMED:33970118 Development of an Index for the Inspection of Aedes aegypti Breeding Sites in Brazil: Multi-criteria Analysis. Background: Aedes aegypti is a vector for the transmission of diseases such as dengue fever, chikungunya, Zika fever, and yellow fever. In 2016, over 1 million cases of these diseases were reported in Brazil, which is an alarming public health issue. One of the ways of controlling this disease is by inspecting and neutralizing the places where A. aegypti lays its eggs. The Ministry of Planning, Development, and Administration of Brazil maintains the inspection statistics. Objective: We propose a multi-criteria analysis to create an index for A. aegypti inspections reported through the Ministry of Planning, Development, and Administration system of Brazil. Methods: Based on the repository from urban cleaning services combined with data on inspections conducted by government agencies in several Brazilian cities and municipalities, we selected and combined metrics, which we further ranked using the analytic hierarchy process methodology. We also developed risk maps based on the analytic hierarchy process ranking of the A. aegypti breeding sites. 
Results: Based on our analysis and the available data, the priority for inspections should consider the number of sick people (weight 0.350), medical evaluations (weight 0.239), inspections (weight 0.201), mosquito breeding sites (weight 0.126), and days of absence from work (weight 0.096). Conclusions: The proposed index could aid public health practitioners in preventing the appearance of new A. aegypti breeding sites. This information technology application can help solve such public health challenges. abstract_id: PUBMED:26153080 Hantavirus infection among children hospitalized for febrile illness suspected to be dengue in Barbados. Emerging picture of hantavirus infection in the South America is characterized by greater proportion of childhood infection and wider spectrum of disease from mild asymptomatic to lethal cardiopulmonary disease. Barbados is endemic for dengue and leptospirosis, both of which share clinical features with hantavirus infection and in many cases neither of these diagnosis could be confirmed. We investigate whether some of the children hospitalized with suspected dengue could indeed have been hantavirus infections. In this prospective study children hospitalized with suspected dengue were tested for hantavirus infection using ELISA for the IgM antibodies. Thirty-eight children tested positive for hantavirus infection. They presented with fever, headache and mild respiratory and gastrointestinal symptoms and signs. None of them had features suggestive of hantavirus cardiopulmonary syndrome. Blood count values ranged from low to normal to high for their age. There were no deaths. Hantavirus infection is prevalent in this Caribbean country. It predominantly presents with milder disease and is responsible for some of the nonspecific febrile illnesses in children. abstract_id: PUBMED:31926105 Pakistan: country report on children's environmental health. Pakistan is a developing country with an estimated population of 220 million, and among them 34% are children. Due to the lack of a proper health care system, particularly in rural areas, children are prone to many environmental hazards. This paper reports about the different environmental health issues faced by children in Pakistan and was presented at the Regional Meeting on Children's Environmental Health at the Chulabhorn Research Institute, Bangkok, Thailand in May 2019. Pakistan is among those countries where polio is still present due to misconceptions, hard-to-reach areas, war zones, political uncertainty, etc. Extensively drug-resistant typhoid is broadly spreading in Pakistan, and this is a very serious concern over antibiotic failure around the world. Pakistan was long considered a region with a low prevalence of human immunodeficiency virus, but now the disease is spreading at an alarming rate. Globally, Pakistan has the third highest burden of maternal, fetal and child mortality. In the last 10 years or so, around 10,000 cases of dengue have been reported with around 800 deaths, including children. Other issues related to children's environmental health in Pakistan include cholera because of poor sanitation and hygiene, respiratory disease due to pollution, exposure to pesticides, diarrhea, occupational hazards as a result of child labor, etc. abstract_id: PUBMED:25120572 Clinical, Molecular, and Epidemiological Analysis of Dengue Cases during a Major Outbreak in the Midwest Region of Minas Gerais, Brazil. 
This study aims to perform the first molecular and clinical-epidemiological analysis of dengue cases in Divinopolis, MG, Brazil. Data from 4,110 cases of dengue were accessed and 190 clinical samples were collected for molecular analyses. In this study, 2.7% of the men and 3.0% of the women were admitted to hospital. There was no association between gender and hospital admission. The symptoms observed in this study are according to the Health Ministry, but fever was present in 82.2% and not in 100% of cases. The chance of hospital admission was 1.55 higher in patients with any kind of bleeding (334) and 2.4% of individuals without bleeding were also hospitalized due to other warning signs. In the molecular analyses, 23% of the samples were positive for DENV. DENV-2 and DENV-3 were identified in 2010, DENV-3 in 2011, DENV-1 in 2012, and DENV-1 and DENV-4 in 2013. DENV detection was possible in samples with only one day of symptoms. This first report of dengue data in Divinópolis provided more insight into the viral types and effects of disease in the city, confirming the need for caution in assessing cases of suspected dengue and for revision of the criteria proposed by the Health Ministry to classify cases of the disease. abstract_id: PUBMED:32165128 Experience in diagnosing congenital Zika syndrome in Brazilian children born to asymptomatic mothers. Introduction: Typical symptoms of primary Zika virus infection are not specific and share similarities with other arbovirus infections such as dengue fever and chikungunya. As acute infection can be asymptomatic in up to 73% of cases, infants with microcephaly represent a diagnostic challenge for pediatricians. We describe the frequency of congenital Zika syndrome (CZS) in Brazilian children born to asymptomatic pregnant mothers and its differential diagnosis. Methods: This longitudinal, observational study was conducted on children with suspected CZS whose mothers did not report rash during pregnancy, referred to the reference hospital in a metropolitan area of ​ Rio de Janeiro, Brazil. The diagnosis of suspected CZS was based on Brazilian Ministry of Health protocol. Results: Forty-three (17%) of 246 referred children were born to mothers without rash history during pregnancy. Thirteen (30%) of 43 children met the Brazilian Ministry of Health criteria for CZS, all with microcephaly (two post-natal). The other children included 11 cases with post-natal microcephaly due to hypoxic-ischemic encephalopathy (6), non-progressive encephalopathy of unknown etiology (2), microcephaly under investigation (2) and congenital toxoplasmosis (1); 17 children were misdiagnosed with microcephaly and progressed with normal head circumference during the follow-up period; one child was included because of epidemiological link and one was loss to follow-up. All children who underwent laboratory investigation for ZIKV infection during neonatal period had negative RT-qPCR tests. Conclusion: We emphasize the increasing importance of CZS in differential diagnosis of microcephaly at birth or post-natal period. Detailed clinical investigation assisted by neuroimaging tests may clarify the diagnosis of CZS when laboratory tests are not available during the acute phase of the disease. abstract_id: PUBMED:37663092 The Effect of Age on Dengue Presentation and the Diagnostic Accuracy of the 2015 Pan American Health Organization Case Criteria in a Puerto Rican Cohort. 
Background: We evaluated dengue presentation by age, the performance of the 2015 Pan American Health Organization (PAHO) case criteria in identifying dengue cases, and variables to improve specificity. Methods: Patients with fever ≤7 days (N = 10 408) were recruited from 2 emergency departments from May 2012 through December 2015. Serum samples were tested for dengue, chikungunya, and nasopharyngeal swabs for respiratory viruses. Smoothing splines assessed differences in the frequencies of signs/symptoms by age. Least absolute shrinkage and selection operator regressions identified the variables that best predicted dengue. Results: Among 985 dengue cases, children aged &lt;5 years were least likely to have leukopenia, but most likely to have rash and petechiae. Adults had the highest odds of aches/pains and headaches/retro-orbital pain. The 2015 PAHO criteria had sensitivity of 93% and specificity of 25%. Specificity could be improved by requiring at least 2 of the following criteria: vomiting/nausea, petechiae, rash, or leukopenia (specificity 68%, sensitivity 71%) or by using 2015 PAHO criteria plus either (1) aspartate aminotransferase &gt;50 IU/L or platelet count &lt;100 000 platelets/μL (specificity 81%, sensitivity 56%) or (2) itchy skin or absence of rhinorrhea or cough (specificity 51%, sensitivity 82%). Conclusions: The 2015 PAHO dengue case criteria had excellent sensitivity but poor specificity. This can be improved by adding signs/symptoms associated with dengue diagnosis. abstract_id: PUBMED:22224116 What health professionals should know about the health effects of air pollution and climate change on children and pregnant mothers. Background: Health professionals face the adverse health effects of climate change and air pollution in their practices. This review underscores the effects of these environmental factors on maternal and children's health, as the most vulnerable groups to climate change and air pollution. Methods: We reviewed electronic databases for a search of the literature to find relevant studies published in English from 1990 to 2011. Results: Environmental factors, notably climate change and air pollution influence children's health before conception and continue during pregnancy, childhood, and adolescence. Experts have suggested that such health hazards may represent the greatest public health challenge that humanity has faced. The accumulation of greenhouse gases such as carbon dioxide, primarily from burning fossil fuels, results in warming which has an impact on air pollution particularly on levels of ozone and particulates. Heat-related health effects include increased rates of pregnancy complications, pre-eclampsia, eclampsia, low birth weight, renal effects, vector-borne diseases as malaria and dengue, increased diarrheal and respiratory disease, food insecurity, decreased quality of foods (notably grains), malnutrition, water scarcity, exposures to toxic chemicals, worsened poverty, natural disasters and population displacement. Air pollution has many adverse health effects for mothers and children. 
In addition to short-term effects like premature labour, intrauterine growth retardation, neonatal and infant mortality rate, malignancies (notably leukaemia and Hodgkin lymphoma), respiratory diseases, allergic disorders and anaemia, exposure to criteria air pollutants from early life might be associated with increase in stress oxidative, inflammation and endothelial dysfunction which in turn might have long-term effects on chronic non-communicable diseases. Conclusions: Health professionals have an exclusive capability to help prevent and reduce the harmful effects of environmental factors for high-risk groups, and should consider this capacity in their usual practice. Answer: Based on the information provided in the abstracts, it appears that identifying dengue in children solely on the basis of Ministry of Health criteria for suspected dengue cases may be challenging. A study conducted at the General Pediatrics Center of the Fundação Hospitalar de Minas Gerais found that the majority of symptoms analyzed were not statistically significantly associated with dengue, with only exanthema being more often associated with the disease. The Health Ministry criteria for suspected cases of dengue had a sensitivity of just 50.8% and a positive predictive value of 62.5%, indicating that these criteria may not be very effective, particularly with smaller children and during periods of reduced disease incidence (PUBMED:15951905). Furthermore, another study that evaluated the 2015 Pan American Health Organization (PAHO) case criteria, which is likely similar to the Ministry of Health criteria, found that while the criteria had excellent sensitivity, the specificity was poor. However, specificity could be improved by adding certain signs/symptoms associated with dengue diagnosis (PUBMED:37663092). These findings suggest that while the Ministry of Health criteria can be a starting point for suspecting dengue cases in children, additional clinical, epidemiological, and laboratory considerations are necessary to improve the accuracy of dengue diagnosis. Health professionals may need to consider a combination of symptoms, local epidemiology, and laboratory tests to more reliably identify dengue in pediatric patients.
Instruction: Digital imaging: a diagnostic screening tool? Abstracts: abstract_id: PUBMED:33574738 Digital health screening tool for identification of elder mistreatment. Purpose: It is estimated that 15.7% of people aged 60 years and older were subjected to some form of Elder Mistreatment (EM) globally (Yon et al., 2017). In the USA, as many as 1 in 24 EM cases are left unidentified by professionals, with a 300% increased mortality risk for older adults who do not receive help (National Center on Elder Abuse, n.d.; Dong, 2009). Current methods of screening tend to miss less obvious signs of EM and may discourage older adults from disclosing EM, due to either a lack of understanding of what constitutes mistreatment or fear of retaliation from the perpetrator. Method: Our approach shifts the focus of EM identification to the older adults themselves through an automated tablet-based tool. The Virtual cOaching in making Informed Choices on Elder Mistreatment Self-Disclosure (VOICES) tool includes various multimedia components such as videos, audio, and animations designed to educate and enhance screening. Patients screened as positive are guided through a Brief Negotiated Interview (BNI) utilizing motivational interviewing to assist in self-identification (recognize that they are experiencing elder mistreatment) or self-disclosure (inform others about their elder mistreatment experiences). During tool development, we conducted a qualitative study to evaluate the perceived value and likelihood of adopting a tablet-based approach to facilitate screening and self-disclosure of EM in the ED. We held 3 focus groups with stakeholders, including 24 adults 60 years or over, 2 social workers, 2 caregivers, and 2 ED clinicians. We used the findings from the focus groups and User-Centered Design approach (UCD) to develop the tablet-based screening tool. Once the tool was ready, we tested its usability and acceptability with 14 older adults. Results And Discussion: Focus group participants supported use of a tablet-based tool to screen for EM, indicating that digital screening benefits from feelings of privacy and anonymity. On a 7-point Likert scale ranging from "1=Very Comfortable" to "7=Very Uncomfortable", older adults scored 2.8 on average for whether they would feel comfortable using a tablet device to screen for EM. Prominent suggestions made by older adults included using a female voice for the tool narrator, larger font size, more multimedia, headphones for privacy; and having someone available during screening for assistance if needed. Participants indicated that it is difficult for older adults experiencing EM to ask for help and that any type of mistreatment screening would be helpful. They also highlighted the need to explain community resources available to older adults once EM is disclosed, especially resources offering help to the caregiver. Participants of the usability evaluation rated the tool a mean score of 86.6 (median= 88.8, iQR =18.1) on the System Usability Scale (SUS), far above the benchmark SUS score of 68, which indicates that the system is "good" or "acceptable" (Bangor et al., 2008). Shifting the focus from the provider to the older adult may encourage self-disclosure of EM by addressing major barriers to traditional screening processes. In summary, this study supported the use of self-administered automated tablet-based screening for EM. 
Participants generally believed that the use of digital health tools to facilitate the screening process would be beneficial in the ED setting. abstract_id: PUBMED:36287601 A Digital Smartphone-Based Self-administered Tool (R+ Dietitian) for Nutritional Risk Screening and Dietary Assessment in Hospitalized Patients With Cancer: Evaluation and Diagnostic Accuracy Study. Background: Malnutrition is a common and severe problem in patients with cancer that directly increases the incidence of complications and significantly deteriorates quality of life. Nutritional risk screening and dietary assessment are critical because they are the basis for providing personalized nutritional support. No digital smartphone-based self-administered tool for nutritional risk screening and dietary assessment among hospitalized patients with cancer has been developed and evaluated. Objective: This study aims to develop a digital smartphone-based self-administered mini program for nutritional risk screening and dietary assessment for hospitalized patients with cancer and to evaluate the validity of the mini program. Methods: We have developed the R+ Dietitian mini program, which consists of 3 parts: (1) collection of basic information of patients, (2) nutritional risk screening, and (3) dietary energy and protein assessment. The face-to-face paper-based Nutritional Risk Screening (NRS-2002), the Patient-Generated Subjective Global Assessment Short Form (PG-SGA-SF), and 3 days of 24-hour dietary recall (3d-24HRs) questionnaires were administered according to standard procedure by 2 trained dietitians as the reference methods. Sensitivity, specificity, positive predictive value, negative predictive value, κ value, and correlation coefficients (CCs) of nutritional risk screened in R+ Dietitian against the reference methods, as well as the difference and CCs of estimated dietary energy and protein intakes between R+ Dietitian and 3d-24HRs were calculated to evaluate the validity of R+ Dietitian. Results: A total of 244 hospitalized patients with cancer were recruited to evaluate the validity of R+ Dietitian. The NRS-2002 and PG-SGA-SF tools in R+ Dietitian showed high accuracy, sensitivity, and specificity (77.5%, 81.0%, and 76.7% and 69.3%, 84.5%, and 64.5%, respectively), and fair agreement (κ=0.42 and 0.37, respectively; CC 0.62 and 0.56, respectively) with the NRS-2002 and PG-SGA-SF tools administered by dietitians. The estimated intakes of dietary energy and protein were significantly higher (P < .001 for both) in R+ Dietitian (mean difference of energy intake: 144.2 kcal, SD 454.8; median difference of protein intake: 10.7 g, IQR 9.5-39.8), and showed fair agreement (CC 0.59 and 0.47, respectively), compared with 3d-24HRs performed by dietitians. Conclusions: The identified nutritional risk and assessment of dietary intakes of energy and protein in R+ Dietitian displayed a fair agreement with the screening and assessment conducted by dietitians. R+ Dietitian has the potential to be a tool for nutritional risk screening and dietary intake assessment among hospitalized patients with cancer. Trial Registration: Chinese Clinical Trial Registry ChiCTR1900026324; https://www.chictr.org.cn/showprojen.aspx?proj=41528. abstract_id: PUBMED:31760365 Introducing Multiple Screener: An unsupervised digital screening tool for cognitive deficits in MS. Background: Cognitive deficits affect up to 70% of all patients with Multiple Sclerosis (MS) and have a significant impact on quality of life.
Cognitive assessments need to be performed by a neuropsychologist and are often time-consuming, hampering timely identification and adequate monitoring of cognitive decline in MS. Objective: To develop a time-efficient, unsupervised, digital tool to screen for cognitive deficits in MS. Methods: A digital (adjusted) version of the Brief International Cognitive Assessment for MS, including the Symbol Digit Modalities Test (SDMT, information processing speed), the California Verbal Learning Test (CVLT-II, verbal memory) and the Spatial Recall Test (SPART, visuospatial memory) was developed: Multiple Screener (intellectual property of Sanofi Genzyme). Firstly, the clarity and feasibility of the tool were confirmed by 16 patients with MS (mean age 50.9 years, SD 9.4, range 37-68). Next, in 60 healthy controls (HCs, mean age 44.5 years, SD 14.0, range 18-67), intraclass correlation coefficients (ICC) were calculated to describe how strongly the digital version resembled the paper and pencil-based assessment. Finally, 236 HCs (mean age 42.8 years, SD 12.8, range 18-69) were included to obtain norm scores for each test. Results: ICCs between digital and paper and pencil-based assessment were excellent to good (SDMT (ICC 0.79, confidence interval (CI) 0.67-0.87); CVLT-II (ICC 0.77, CI 0.64-0.85); SPART (ICC 0.61, CI 0.42-0.75)). For each test, a regression-based correction for the effect of age was applied on the raw scores before converting them to norm Z-scores. Additionally, the SDMT scores needed correction for education and the CVLT-II for education and sex (subgroups were created). Conclusions: Performance on an adjusted, digital version of the BICAMS correlates highly with the standard paper-and-pencil based test scores in HCs. Multiple Screener is an unsupervised, digital tool, with available norm scores, ultimately allowing for easy monitoring of cognitive decline in patients with MS. abstract_id: PUBMED:30651894 Validation of a digital identification tool for individuals at risk for hereditary cancer syndromes. Background: The number of individuals meeting criteria for genetic counseling and testing for hereditary cancer syndromes (HCS) is far less than the number that actually receive it. To facilitate identification of patients at risk for HCS, Counsyl developed a digital identification tool (digital ID tool) to match personal and family cancer history to National Comprehensive Cancer Network (NCCN) BRCA-related Hereditary Breast and Ovarian Cancer (HBOC), Lynch syndrome, and polyposis testing criteria in one-to-one, automated fashion. The purpose of this study was to validate the ability of the digital ID tool to accurately identify histories that do and do not meet NCCN testing criteria. Methods: Third-party recorded three-generation pedigrees were retrospectively reviewed by a certified genetic counselor (CGC) to determine if independent events included in pedigree histories met NCCN guidelines, and were then sorted into groups: high risk events (meets criteria) and low risk events (does not meet criteria). Events were entered into the digital ID tool to determine the extent of its concordance with events sorted by CGC review. Statistical tests of accuracy were calculated at a 95% confidence interval (CI). Results: One hundred ninety-seven pedigrees were reviewed consecutively representing 765 independent events for analysis across groups.
382/382 (100%) high risk events identified by the digital ID tool and 381/383 (99.47%) low risk events identified by the digital ID tool were concordant with CGC sorting. The digital ID tool had a sensitivity of 100% (99.04-100% CI) and specificity of 99.48% (98.13-99.94% CI). The overall accuracy of the digital ID tool was estimated to be 99.74% (99.06-99.97% CI), reflecting the rate at which the digital ID tool reached the same conclusion as that of CGC review of pedigree events for the recommendation of genetic testing for individuals at risk for HCS. Conclusions: The digital ID tool accurately matches NCCN criteria in one-to-one fashion to identify at-risk individuals for HCS and may be useful in clinical practice, specifically for BRCA-related HBOC and Lynch Syndrome. abstract_id: PUBMED:30446072 "I couldn't find it your honour, it mustn't be there!" - Tool errors, tool limitations and user error in digital forensics. The field of digital forensics maintains significant reliance on the software it uses to acquire and investigate forms of digital evidence. Without these tools, analysis of digital devices would often not be possible. Despite such levels of reliance, techniques for validating digital forensic software are sparse and research is limited in both volume and depth. As practitioners pursue the goal of producing robust evidence, they face the onerous task of both ensuring the accuracy of their tools and their effective use. Whilst tool errors provide one issue, establishing a tool's limitations also provides an investigatory challenge, leading to the potential for practitioner user-error and ultimately a grey area of accountability. This article debates the problems surrounding digital forensic tool usage, evidential reliability and validation. abstract_id: PUBMED:35696607 Digital Screening and Automated Resource Identification System to Address COVID-19-Related Behavioral Health Disparities: Feasibility Study. Background: Digital mental health (DMH) tools use technology (eg, websites and mobile apps) to conveniently deliver mental health resources to users in real time, reducing access barriers. Underserved communities facing health care provider shortages and limited mental health resources may benefit from DMH tools, as these tools can help improve access to resources. Objective: This study described the development and feasibility evaluation of the Emotional Needs Evaluation and Resource Guide for You (ENERGY) System, a DMH tool to meet the mental health and resource needs of youth and their families developed in the context of the COVID-19 pandemic. The ENERGY System offers a brief assessment of resource needs; problem-solving capabilities; and symptoms of depression, anxiety, trauma, and alcohol and substance use followed by automated, personalized feedback based on the participant's responses. Methods: Individuals aged ≥15 years were recruited through community partners, community events, targeted electronic health record messages, and social media. Participants completed screening questions to establish eligibility, entered demographic information, and completed the ENERGY System assessment. Based on the participant's responses, the ENERGY System immediately delivered digital resources tailored to their identified areas of need (eg, relaxation).
A subset of participants also voluntarily completed the following: COVID-19 Exposure and Family Impact Survey (CEFIS) or COVID-19 Exposure and Family Impact Survey Adolescent and Young Adult Version (CEFIS-AYA); resource needs assessment; and feedback on their experience using the ENERGY System. If resource needs (eg, housing and food insecurity) were endorsed, lists of local resources were provided. Results: A total of 212 individuals accessed the ENERGY System link, of which 96 (45.3%) completed the screening tool and 86 (40.6%) received resources. Participant responses on the mental health screening questions triggered on average 2.04 (SD 1.94) intervention domains. Behavioral Activation/Increasing Activities was the most frequently launched intervention domain (56%, 54/96), and domains related to alcohol or substance use were the least frequent (4%, 4/96). The most frequently requested support areas were finances (33%, 32/96), transportation (26%, 25/96), and food (24%, 23/96). The CEFIS and CEFIS-AYA indicated higher than average impacts from the pandemic (ie, average scores > 2.5). Participants were satisfied with the ENERGY System overall (65%, 39/60) as well as the length of time it took to answer the questions (90%, 54/60), which they found easy to answer (87%, 52/60). Conclusions: This study provided initial support for the feasibility of the ENERGY System, a DMH tool capable of screening for resource and mental health needs and providing automated, personalized, and free resources and techniques to meet the identified needs. Future studies should seek direct feedback from community members to further improve the ENERGY System and its dissemination to encourage use. abstract_id: PUBMED:26496565 Baseline Screening Mammography: Performance of Full-Field Digital Mammography Versus Digital Breast Tomosynthesis. Objective: Baseline mammography studies have significantly higher recall rates than mammography studies with available comparison examinations. Digital breast tomosynthesis reduces recalls when compared with digital mammographic screening alone, but many sites operate in a hybrid environment. To maximize the effect of screening digital breast tomosynthesis with limited resources, choosing which patient populations will benefit most is critical. This study evaluates digital breast tomosynthesis in the baseline screening population. Materials And Methods: Outcomes were compared for 10,728 women who underwent digital mammography screening, including 1204 (11.2%) baseline studies, and 15,571 women who underwent digital breast tomosynthesis screening, including 1859 (11.9%) baseline studies. Recall rates, cancer detection rates, and positive predictive values were calculated. Logistic regression estimated the odds ratios of recall for digital mammography versus digital breast tomosynthesis for patients undergoing baseline screening and previously screened patients, adjusted for age, race, and breast density. Results: In the baseline subgroup, recall rates for digital mammography and digital breast tomosynthesis screening were 20.5% and 16.0%, respectively (p = 0.002); digital breast tomosynthesis screening in the baseline subgroup resulted in a 22% reduction in recall compared with digital mammography, or 45 fewer patients recalled per 1000 patients screened. Digital breast tomosynthesis screening in the previously screened patients resulted in recall reduction of 14.3% (p < 0.001; p for interaction = 0.21).
The recall rate reduction for baseline screening was especially pronounced in women younger than 50 years (p = 0.005). DBT implementation resulted in an increase in cancer detection in the baseline subgroup of 40.5% versus an increase in the previously screened subgroup of 17.4%. DBT implementation resulted in an increase in PPV1 in the baseline subgroup of 85% versus 35.3% in the previously screened subgroup, although the p-interaction was not significant. Conclusion: If resources are limited, women younger than 50 years who are undergoing baseline screening or do not have prior available mammograms may benefit more from digital breast tomosynthesis than from digital mammography alone. abstract_id: PUBMED:36238178 Digital Breast Tomosynthesis as a Breast Cancer Screening Tool for Women with Gynecologic Cancer. Purpose: The purpose of our study was to evaluate digital breast tomosynthesis as a breast cancer screening modality for women with gynecologic cancer. Materials And Methods: This retrospective study included patients with underlying gynecologic malignancies who underwent screening digital breast tomosynthesis for breast cancer. The cancer detection rate, recall rate, sensitivity, specificity, and positive predictive value (PPV) were calculated. PPV1 was defined as the percentage of all positive screening exams that have a tissue diagnosis of cancer within a year. PPV2 was defined as the percentage of all diagnostic exams (and Breast Imaging Reporting and Data System category 4, 5 from screening setting) with a recommendation for tissue diagnosis that have cancer within a year. PPV3 was defined as the percentage of all known biopsies actually performed that resulted in a tissue diagnosis of cancer within the year. For each case of screen-detected cancer, we analyzed the age, type of underlying gynecologic malignancy, breast density, imaging features, final Breast Imaging Reporting and Data System assessment, histologic type, T and N stages, molecular subtype, and Ki-67 index. Results: Among 508 patients, 7 with breast cancer were identified after a positive result. The cancer detection rate was 13.8 per 1000 screening exams, and the recall rate was 17.9%. The sensitivity was 100%, and the specificity was 83.2%. The false negative rate was 0 per 1000 exams. The PPV1, PPV2, and PPV3 were 7.7, 31.8, and 31.8, respectively. Conclusion: Digital breast tomosynthesis may be a promising breast cancer screening modality for women with gynecologic cancer, based on the high cancer detection rate, high sensitivity, high PPV, and high detection rate of early-stage cancer observed in our study. abstract_id: PUBMED:37203731 The Demonstration of a Tool for Self-Estimating Digital Competence. This study presents the results from a demonstration of a tool for self-estimation of digital competence for nurses and assistant nurses. The data was gathered from twelve participants working as leaders of older care homes. The results show that digital competence is of importance in health and social care, that the dimension of motivation is of utmost importance and that the presentation of the survey results should be flexible. abstract_id: PUBMED:37883134 A Digital Tool for the Self-Assessment of Homes to Increase Age-Friendliness: Validity Study. Background: Age-friendly environments in homes and communities play an important role in optimizing the health and well-being of society. Older people have strong preferences for remaining at home as they age. 
Home environment assessment tools that enable older people to assess their homes and prepare for aging in place may be beneficial. Objective: This study aims to establish the validity of a digital self-assessment tool by assessing it against the current gold standard, an occupational therapy home assessment. Methods: A cohort of adults aged ≥60 years living in metropolitan Adelaide, South Australia, Australia, assessed their homes using a digital self-assessment tool with 89 questions simultaneously with an occupational therapist. Adults who were living within their homes and did not have significant levels of disabilities were recruited. Cohen κ and Gwet AC1 were used to assess validity. Results: A total of 61 participants (age: mean 71.2, SD 7.03 years) self-assessed their own homes using the digital self-assessment tool. The overall levels of agreement were high, supporting the validity of the tool in identifying potential hazards. Lower levels of agreement were found in the following domains: steps (77% agreement, Gwet AC1=0.56), toilets (56% agreement, κ=0.10), bathrooms (64% agreement, κ=0.46), and backyards (55% agreement, κ=0.24). Conclusions: Older people were able to self-assess their homes using a digital self-assessment tool. Digital health tools enable older people to start thinking about their future housing needs. Innovative tools that can identify problems and generate solutions may improve the age-friendliness of the home environment. Answer: Digital imaging technologies have become increasingly important as diagnostic screening tools in various medical fields. For instance, digital breast tomosynthesis (DBT) has shown promise as a breast cancer screening modality, particularly for women undergoing baseline screening or those without prior mammograms. Studies have found that DBT can reduce recall rates and increase cancer detection, especially in women younger than 50 years (PUBMED:26496565). Additionally, DBT has been evaluated as a screening tool for women with gynecologic cancer, demonstrating a high cancer detection rate, high sensitivity, high positive predictive value (PPV), and a high detection rate of early-stage cancer (PUBMED:36238178). Beyond cancer screening, digital tools have also been developed for other diagnostic purposes. For example, the Multiple Screener is an unsupervised digital tool designed to screen for cognitive deficits in patients with Multiple Sclerosis (MS), showing high correlation with standard paper-and-pencil based test scores (PUBMED:31760365). Similarly, a digital identification tool has been validated for accurately identifying individuals at risk for hereditary cancer syndromes (HCS), matching personal and family cancer history to National Comprehensive Cancer Network (NCCN) criteria (PUBMED:30651894). In the context of elder mistreatment (EM), a digital health screening tool called VOICES has been developed to facilitate self-disclosure of EM in older adults, with focus group participants supporting its use due to the privacy and anonymity it offers (PUBMED:33574738). Another digital tool, the R+ Dietitian mini program, has been created for nutritional risk screening and dietary assessment among hospitalized patients with cancer, showing fair agreement with assessments conducted by dietitians (PUBMED:36287601).
Moreover, the Emotional Needs Evaluation and Resource Guide for You (ENERGY) System is a digital mental health tool developed to screen for mental health and resource needs, providing automated, personalized feedback and resources based on user responses (PUBMED:35696607). In summary, digital imaging and other digital tools have become valuable diagnostic screening tools across various medical specialties, offering advantages such as improved detection rates, reduced recall rates, and personalized assessments that can enhance patient care and outcomes.
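As a rough companion to the screening figures summarized in this answer, the sketch below computes the recall rate, the cancer detection rate per 1,000 screens, and PPV1 (the share of recalled patients found to have cancer), following the definitions given in the tomosynthesis abstracts. The counts are hypothetical and chosen only to be of the same order of magnitude as the reported values.

```python
# Minimal sketch of the screening-performance metrics quoted in the answer
# (recall rate, cancer detection rate, PPV1). The counts are hypothetical;
# none of the abstracts publish patient-level data here.

def screening_summary(n_screened: int, n_recalled: int, n_cancers: int) -> dict:
    recall_rate = n_recalled / n_screened                  # fraction recalled for workup
    detection_rate_per_1000 = 1000 * n_cancers / n_screened
    ppv1 = n_cancers / n_recalled                          # cancers per positive screen
    return {"recall_rate": recall_rate,
            "cancer_detection_rate_per_1000": detection_rate_per_1000,
            "ppv1": ppv1}

if __name__ == "__main__":
    # Hypothetical cohort: 1,000 screens, 160 recalls, 12 cancers detected,
    # i.e. numbers of the same order as the tomosynthesis figures cited above.
    print(screening_summary(n_screened=1000, n_recalled=160, n_cancers=12))
```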
Instruction: Does improved functional performance help to reduce urinary incontinence in institutionalized older women? Abstracts: abstract_id: PUBMED:22953994 Does improved functional performance help to reduce urinary incontinence in institutionalized older women? A multicenter randomized clinical trial. Background: Urinary incontinence (UI) is a major problem in older women. Management is usually restricted to dealing with the consequences instead of treating underlying causes such as bladder dysfunction or reduced mobility. The aim of this multicenter randomized controlled trial was to compare a group-based behavioral exercise program to prevent or reduce UI, with usual care. The exercise program aimed to improve functional performance of pelvic floor muscle (PFM), bladder and physical performance of women living in homes for the elderly. Methods: Twenty participating Dutch homes were matched and randomized into intervention or control homes using a random number generator. Homes recruited 6-10 older women, with or without UI, with sufficient cognitive and physical function to participate in the program comprising behavioral aspects of continence and physical exercises to improve PFM, bladder and physical performance. The program consisted of a weekly group training session and homework exercises and ran for 6 months during which time the control group participants received care as usual. Primary outcome measures after 6 months were presence or absence of UI, frequency of episodes (measured by participants and caregivers (not blinded) using a 3-day bladder diary) and the Physical Performance Test (blinded). Linear and logistic regression analysis based on the Intention to Treat (ITT) principle using an imputed data set and per protocol analysis including all participants who completed the study and intervention (minimal attendance of 14 sessions). Results: 102 participants were allocated to the program and 90 to care as usual. ITT analysis (n = 85 intervention, n = 70 control) showed improvement of physical performance (intervention +8%; control -7%) and no differences on other primary and secondary outcome measures. Per protocol analysis (n = 51 intervention, n = 60 control) showed a reduction of participants with UI (intervention -40%; control -28%) and in frequency of episodes (intervention -51%; control -42%) in both groups; improvement of physical performance (intervention + 13%; control -4%) was related to participation in the exercise program. Conclusions: This study shows that improving physical performance is feasible in institutionalized older women by exercise. Observed reductions in UI were not related to the intervention. [Current Controlled Trials ISRCTN63368283]. abstract_id: PUBMED:33201269 Physical performance measures in older women with urinary incontinence: pelvic floor disorder or geriatric syndrome? Introduction And Hypothesis: To evaluate physical performance measures of mobility and functional impairments and assess their association with urinary incontinence (UI) severity and impact on quality of life among older women with UI. Methods: In a cross-sectional pilot study, 20 women aged ≥ 70 years with UI completed UI questionnaires (Global Impression of Severity, Incontinence Impact Questionnaire [IIQ-7]) and functional status evaluation. Functional status evaluation included the Modified Physical Performance Test (MPPT; range 0-36), Short Physical Performance Battery (SPPB; range 0-12), and other physical performance measures (e.g., Timed Up and Go [TUG]).
MPPT and SPPB scores < 32 and < 10, respectively, indicated impaired mobility and function. Descriptive statistics and Spearman correlation coefficients evaluated study variables and associations between UI and physical performance measures. Results: Women were 76.6 ± 4.7 years old with mean body mass index 33.5 ± 9.0 kg/m2. Mixed UI was the most prevalent (n = 17; 85%), and 14 (70%) participants rated their UI as moderate or severe. Low MPPT (< 32) and SPPB (< 10) scores were present in 65% (n = 13) and 35% (n = 7) of participants, respectively. Lower MPPT score (r = -0.46; P = 0.04) and worse TUG performance (r = 0.50; P = 0.03) were associated with greater UI impact on quality of life based on IIQ-7. SPPB did not correlate (P > 0.05) with UI measures. Conclusions: Mobility and functional impairments are common among older women with UI. Associations between MPPT score and TUG performance with UI impact on quality of life suggest these physical performance measures could be markers of mobility and functional impairments in future research on UI in older women. abstract_id: PUBMED:25335630 Promoting urinary continence in older women. Continence promotion involves informing and educating the public and healthcare professionals that urinary incontinence is not an inevitable part of ageing, and can be treated or at least made more manageable. While awareness of urinary continence is improving slowly, the taboo around discussing incontinence remains. Women are at increased risk of developing urinary incontinence as they grow older because of physiological, functional and cognitive changes. Healthcare professionals can identify women with bladder symptoms by routinely asking trigger questions and can promote continence through education about lifestyle choices that aggravate or ameliorate urinary incontinence. This article discusses the main risk factors associated with urinary incontinence in older women and the ways in which healthcare professionals can help to identify those with symptoms of urinary incontinence. abstract_id: PUBMED:24000147 Fecal incontinence epidemiology and help seeking among older people in Taiwan. Aims: This study assesses the prevalence, comorbidity, and risk factors of fecal incontinence (FI), and explores help seeking patterns among older FI people. Methods: The data were obtained from the 2005 National Health Interview Survey, in which 1,345 men and 1,370 women aged above 65 years responded to questions concerning FI occurrence, and 218 FI-affected elderly people responded to the question about FI help seeking. The chi-square test, Student's t-test, and multiple logistic regression were used. Results: The weighted FI prevalence of older Taiwanese people was 6.9% for men and 9.3% for women. Urinary incontinence, diabetes mellitus, dementia, and asthma significantly increased the risk of FI among old men, but being overweight appeared to be FI-protective (OR 0.37, 95% CI = 0.17-0.80). In women, urinary incontinence, stroke, transient ischemic attack, dementia, chronic hepatitis, being underweight (BMI < 18.5), and greater parity were significant FI risk factors. Of those with FI, 49.1% had sought medical help.
Using Andersen's model, data analysis showed that living in an urban area (OR 2.36, CI = 1.19-4.68), mucous stool incontinence (OR 3.56, 95% CI = 1.35-8.32), and one or more of the following FI-related problems, namely, anxiety, families interaction, social life, work life, and sexual life, significantly predicted FI sufferers seeking medical attention. Conclusion: Self-reported FI is a prevalent condition with various etiologies among older people in Taiwan. Understanding the epidemiology and comorbidity of FI, and investigating the factors associated with help seeking, is beneficial in devising and implementing prevention and management strategies. abstract_id: PUBMED:35118038 Experiences of Urinary Incontinence Management in Older Women: A Qualitative Study. Introduction: Older women have various experiences regarding the management of urinary incontinence depending on the societies they live in and their cultural backgrounds. The present study aimed to determine older women's experiences in urinary incontinence management. Methods: The present qualitative study employed a conventional content analysis approach and was conducted in Iran from 2019 to 2020. In this research, the data were collected through face-to-face unstructured in-depth interviews with 22 older women suffering from urinary incontinence selected via purposeful sampling. The interviews were continued until reaching the data saturation point. Data analysis was performed simultaneously with data collection. The interviews were recorded, transcribed, and analyzed through Graneheim and Lundman style content analysis, and data management was done using the MAXQDA software. In order to achieve the accuracy and validity of the study, the Four-Dimensions Criteria (FDC) by Lincoln and Guba, namely credibility, dependability, conformability, and transformability, were considered and used. Results: This study was conducted on 22 older women suffering from urinary incontinence with the mean age of 66.54 ± 5.76 years. The acquired data were put in four main categories of "resilience" with three subcategories, "change in lifestyle" with six subcategories, "attempt for treatment of the condition" with three subcategories, and "receiving support" with two subcategories. Conclusion: The study results indicated that the older women suffering from urinary incontinence were resilient against the condition, had changed their lifestyles to manage the condition, and sought treatment. In addition, receiving support from the family and the society played a significant role in the follow-up and management of the condition. The present study findings can help healthcare team members focus on urinary incontinence, design care programs for older women with this condition, and improve their quality of life. Furthermore, focusing on young and middle-aged women's health, providing them with the necessary training for taking care of the genitourinary system, and raising their awareness for preventing urinary incontinence during old ages can be helpful. Moreover, increasing the healthcare team's sensitivity and following the patients up can help diagnose, manage, and treat the condition before exerting adverse impacts on their quality of life. abstract_id: PUBMED:30203699 Urinary Incontinence Is Associated With Physical Performance Decline in Community-Dwelling Older Women: Results From the International Mobility in Aging Study. Objective: The objective of this study is to evaluate the influence of urinary incontinence (UI) on physical performance. 
Method: In prospective analyses from the International Mobility in Aging Study (IMIAS), 915 women (65-74 years) from Canada, Colombia, Albania, and Brazil were evaluated in relation to self-reported UI (past week) and physical performance (Short Physical Performance Battery [SPPB]), with reevaluation after 2 years. Linear mixed models examined the influence of UI on SPPB, adjusted by covariates (age, study site, education, income sufficiency, body mass index [BMI] and parity). Results: Women reporting some UI presented lower SPPB mean (β = -0.41, p = .009) and a greater reduction (β = -0.53, p = .001) over 2 years than those reporting no UI. Discussion: Compared with no reported UI, some UI was associated with worse and more pronounced declines in physical performance over 2 years. This study highlights the importance of practices to reduce UI to contribute to healthier aging. abstract_id: PUBMED:37870774 Prevalence and Influencing Factors of Help-Seeking Behavior Among Women with Urinary Incontinence: A Systematic Review and Meta-Analysis. Purpose: This systematic review and meta-analysis aimed to investigate the mean (weighted) prevalence of help-seeking behavior among women with urinary incontinence (UI) in relevant subgroups and the related influencing factors. Materials and Methods: Six English and four Chinese databases were systematically searched between 1996 and July 10, 2022. Two researchers independently screened the literature, extracted data, and evaluated the quality of the included studies. All statistical analyses were conducted using RevMan 5.4. Results: The mean (weighted) prevalence of help-seeking behavior based on the 41 included studies, including a total of 32,640 women with UI, was 28% (95% confidence interval [CI]: 22%-34%). We performed a subgroup analysis based on UI type, population, region, publication time, case definition of help-seeking, and use of validated tools to determine UI. The results of the subgroup analysis showed that the pooled prevalence of help-seeking behavior was 23% (95% CI: 14%-32%) among pregnant and maternity women, 27% (95% CI: 19%-35%) among menopausal women, 24% (95% CI: 14%-35%) among 20- to 50-year-old women, 31% (95% CI: 25%-36%) among those older than 50 years, 24% (95% CI: 17%-30%) in Asia, and 33% (95% CI: 22%-44%) in Europe. Meanwhile, the pooled odds ratio showed that education level, UI type, UI severity, and quality of life significantly influenced help-seeking behavior among women with UI. Conclusions: This systematic review revealed that the pooled prevalence of help-seeking behaviors among women with UI is 28%. These influencing factors have important implications for adapting to health care and social care systems. abstract_id: PUBMED:37300566 Help-seeking behavior for nonsevere stress urinary incontinence among elderly women in communities, Beijing, China. Introduction And Hypothesis: Stress urinary incontinence (SUI) is a common health problem and influences women's quality of life significantly. In order to enhance health education according to a specific situation, it is essential to identify barriers to seeking help among elderly women with nonsevere SUI. The objectives were to investigate reasons for (not) seeking help for nonsevere SUI among women aged ≥60 years, and to analyze factors affecting help-seeking behavior. Methods: We enrolled 368 women aged ≥60 years with nonsevere SUI from communities. 
They were asked to fill out sociodemographic information, International Consultation on Incontinence Questionnaire Short Form (ICIQ-SF), Incontinence Quality of Life (I-QOL), and self-constructed questions on help-seeking behavior. Mann-Whitney U tests were used to analyze the different factors between the seeking group and the nonseeking group. Results: Only 28 women (7.61%) had ever sought help from health professionals for SUI. The most frequent reason for seeking help was urine-soaked clothes (67.86%, 19 out of 28). The most frequent reason for not seeking help was that women thought it was normal (67.35%, 229 out of 340). Compared with the nonseeking group, the seeking group had higher total ICIQ-SF scores and lower total I-QOL scores. Conclusion: Among elderly women with nonsevere SUI, the rate of seeking help was low. Lack of correct perception about SUI kept women from doctor visits. Women who were bothered by more severe SUI and lower quality of life were more likely to seek help. abstract_id: PUBMED:27860034 Understanding help-seeking in older people with urinary incontinence: an interview study. The prevalence of urinary incontinence (UI) increases with age and can negatively affect quality of life. However, relatively few older people with UI seek treatment. The aim of this study was to explore the views of older people with UI on the process of seeking help. Older people with UI were recruited to the study from three continence services in the north of England: a geriatrician-led hospital outpatient clinic (n = 18), a community-based nurse-led service (n = 22) and a consultant gynaecologist-led service specialising in surgical treatment (n = 10). Participants took part in semi-structured interviews, which were transcribed and underwent thematic content analysis. Three main themes emerged: Being brushed aside, in which participants expressed the feeling that general practitioners did not prioritise or recognise their concerns; Putting up with it, in which participants delayed seeking help for their UI due to various reasons including embarrassment, the development of coping mechanisms, perceiving UI as a normal part of the ageing process, or being unaware that help was available; and Something has to be done, in which help-seeking was prompted by the recognition that their UI was a serious problem, whether as a result of experiencing UI in public, the remark of a relative, the belief that they had a serious illness or the detection of UI during comprehensive geriatric assessment. Greater awareness that UI is a treatable condition and not a normal part of ageing is needed in the population and among health professionals. Comprehensive geriatric assessment appeared an important trigger for referral and treatment in our participants. Screening questions by healthcare professionals could be a means to identify, assess and treat older people with UI. abstract_id: PUBMED:26882204 Sexual function and help seeking for urinary incontinence in postmenopausal women. Urinary incontinence (UI) is a common condition, especially in middle-aged and older women. UI is known to affect sexual function. Many women with UI do not consult a doctor about their condition. The aim of this study was to determine the relationship of sexual function and help seeking in postmenopausal women with urinary incontinence. This cross-sectional correlation study took place from March to May 2012. The subjects were selected by a clustered sampling method from various zones of Rasht (North of Iran).
The data were collected using personal data forms, Questionnaire for Urinary Incontinence Diagnosis, Incontinence Severity Index, and Incontinence Quality of Life questionnaire. Data were analyzed by SPSS 17 at the significance level of P < .05 and then were compared by parametric and nonparametric tests. A total of 313 menopausal women aged 45 to 60 years (mean 52.9) were recruited for the study. The mean sexual function score was 31.07 ± 7.52. Only 27.3% of subjects sought care for urinary incontinence. There was a significant correlation between sexual function and help seeking. The results of this study indicate that there is a significant correlation between sexual function and help seeking in postmenopausal women who participated in the present study. Health-care professionals should pay more attention to sexual symptoms of UI and make patients aware of available treatments. Answer: The multicenter randomized clinical trial described in abstract PUBMED:22953994 aimed to determine if a group-based behavioral exercise program designed to improve functional performance of pelvic floor muscle (PFM), bladder, and physical performance could prevent or reduce urinary incontinence (UI) in institutionalized older women. The study found that while physical performance improved in the intervention group, the observed reductions in UI were not related to the intervention. The intention-to-treat (ITT) analysis showed no differences in UI presence or frequency of episodes between the intervention and control groups, although the per protocol analysis indicated a reduction in participants with UI and frequency of episodes in both groups. This suggests that while improving physical performance is feasible, it may not directly correlate with a reduction in UI in this population. Additionally, the cross-sectional pilot study in abstract PUBMED:33201269 found that mobility and functional impairments are common among older women with UI, and that lower scores on physical performance measures were associated with a greater impact of UI on quality of life. This suggests that while physical performance measures could be markers of mobility and functional impairments, they may not necessarily indicate a direct improvement in UI symptoms. The literature overall suggests that while physical performance and functional mobility are important aspects of health and quality of life for older women with UI, improved functional performance alone may not be sufficient to reduce UI. Other factors, such as help-seeking behavior, awareness, and treatment of underlying causes, are also important in managing UI in this population (PUBMED:25335630, PUBMED:35118038, PUBMED:30203699, PUBMED:37870774, PUBMED:37300566, PUBMED:27860034, PUBMED:26882204).
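As an illustrative companion to the correlation findings summarized above (for example, the negative association between MPPT scores and IIQ-7 impact reported in PUBMED:33201269), the sketch below shows how a Spearman rank correlation of that kind can be computed. The paired scores are invented for demonstration; the abstract reports only the resulting coefficients, not the raw data.

```python
# Illustrative Spearman correlation between a physical performance score and
# a UI impact score. The vectors below are hypothetical; only the summary
# coefficients (e.g., r = -0.46 for MPPT vs. IIQ-7) appear in the abstract.
from scipy.stats import spearmanr

# Hypothetical paired observations for ten participants.
mppt_scores = [36, 30, 28, 25, 22, 20, 18, 33, 27, 24]   # higher = better function
iiq7_scores = [10, 25, 30, 45, 60, 70, 80, 15, 40, 55]   # higher = greater UI impact

rho, p_value = spearmanr(mppt_scores, iiq7_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")      # expect a negative rho here
```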
Instruction: Does the system of Papanicolaou test nomenclature affect the rate of referral for colposcopy? Abstracts: abstract_id: PUBMED:8252144 Does the system of Papanicolaou test nomenclature affect the rate of referral for colposcopy? A survey of family physicians. Objective: To determine whether a new system of Papanicolaou test nomenclature (the Bethesda system) or other physician variables influence recommendations for colposcopy and biopsy for women with borderline to moderately abnormal Papanicolaou test results. We hypothesized that physician demographic and practice variables, in addition to Papanicolaou test nomenclature, would influence recommendations for colposcopy. Design: A survey was mailed to a random sample of 510 active members of the American Academy of Family Physicians. Participants: Three hundred thirty-five (66%) of the eligible physicians responded, representing all 50 states. Of those in active practice, 78% were in private practice, with a mean age of 44 years and a mean time in practice of 10 years. Ninety-three percent of respondents in active practice performed Papanicolaou tests. Main Outcome Measure: Rates of recommendation for colposcopy and biopsy in response to abnormal Papanicolaou test reports framed by a single clinical scenario. Results: Physicians recommended colposcopy more often when the Bethesda nomenclature system was used to describe the results of the Papanicolaou test. These differences were significant for four specific Papanicolaou smear pairs. Inclusion of recommendations for further evaluation strongly influenced physicians to recommend colposcopy. In multivariable analyses, demographic and practice variables were not associated with recommendations for colposcopy. Conclusions: The Bethesda system of nomenclature, when compared with a traditional descriptive nomenclature system, influenced family physicians to recommend colposcopy and biopsy more often for abnormal Papanicolaou test results presented in a clinical scenario. Greater utilization of technology and higher medical care costs may result from use of the Bethesda system. Guidelines for evaluation of abnormal Papanicolaou test results are needed for use in conjunction with the Bethesda system guidelines for Papanicolaou test reports. abstract_id: PUBMED:8387595 Use of speculoscopy in the evaluation of women with atypical Papanicolaou smears. Improved cost effectiveness by selective colposcopy. Since up to 45% of patients with atypical Papanicolaou smears have been shown to have significant pathology, women with persistent atypia are usually referred for colposcopy. This study evaluated the use of a new adjunctive screening test, speculoscopy, in selecting women with atypical Papanicolaou smears who would most benefit from referral for colposcopy. Both screening and referral patients were evaluated with the Papanicolaou smear, speculoscopy and colposcopy at 10 study centers. Biopsies were obtained from most women with positive colposcopy. The results in patients with atypical smears were used to perform a cost-benefit analysis of each of three management protocols. Using the results of speculoscopy to select women with atypical Papanicolaou smears for colposcopy provided a cost-effective alternative to performing colposcopy either on all women or on those with persistent atypia following treatment. Even when all women undergo speculoscopy at the time of screening, this protocol provides a cost savings of up to 24% and no significant loss of diagnostic accuracy. 
These data suggest that speculoscopy performed at the time of initial screening can accurately select women with atypical Papanicolaou smears who require colposcopy for diagnostic biopsy in a cost-effective manner. abstract_id: PUBMED:7827598 Colposcopy in women with a normal Papanicolaou smear. Background: Colposcopy is usually done when the Papanicolaou smear is abnormal so that the necessity for cone biopsy is reduced. However, there is often a strong clinical suspicion of cervical neoplasia even though the smear is normal and this problem of the false negative Papanicolaou smear has not, we feel, been adequately addressed. We, therefore, examined the usefulness of colposcopy in detecting cervical neoplasia in women in whom the Papanicolaou smear was normal. Methods: A cross-sectional design was used which included 80 patients in whom cervical neoplasia was suspected and who had undergone colposcopy in one gynaecology unit at the Christian Medical College Hospital, Vellore. The Papanicolaou smear result of these patients was then related to their diagnosis on colposcopy. Results: The Papanicolaou smears were normal in 56 patients and abnormal in 24. In the normal Papanicolaou smear group there were 18 women with grade I cervical intra-epithelial neoplasia and 3 with grade II lesions. In the abnormal smear group there were 8 women with grade I cervical intra-epithelial neoplasia, 9 with grade II cervical intra-epithelial neoplasia and 1 had microinvasive carcinoma. Patients with abnormal Papanicolaou smears had a relative risk of 7.8 (95% CI 2.4 to 25.8) of having a lesion more advanced than grade I cervical intra-epithelial neoplasia compared with those in whom cervical neoplasia was only suspected clinically. Conclusion: When the clinician suspects cervical neoplasia but the Papanicolaou smear is negative, colposcopy reveals at least grade II cervical intra-epithelial neoplasia in 5% of patients. Such patients, therefore, need to be followed up closely or else have a colposcopy and directed biopsy. abstract_id: PUBMED:1755347 Experience with colposcopy in a family practice center. The experience of a family practice center that performed colposcopy for one year was reviewed to provide insight into the appropriateness of this procedure to a family practice clinic. Almost three thousand Papanicolaou smears were performed, which prompted 97 colposcopy clinic appointments involving 68 patients. Dysplastic Pap smears accounted for 1.82% of the total. Atypical Pap smears occurred in 3.91% of the tests, and 16.1% of a sample of these demonstrated dysplasia. Controversy concerning the atypical Pap smear is discussed. The colposcopy service was convenient for the patients and was at least revenue neutral while providing a new professional opportunity for selected family practitioners. A referral base of at least 1,000 Pap smears might justify a colposcopy program, depending on referral strategies. abstract_id: PUBMED:11547646 The colposcopic impression. Is it influenced by the colposcopist's knowledge of the findings on the referral Papanicolaou smear? Objective: To determine whether the colposcopic impression is influenced by the colposcopist's knowledge of the referral Papanicolaou smear. 
Study Design: Using a community hospital database, the accuracy of the colposcopic impression (accuracy = proportion of women with histology greater than cervical intraepithelial neoplasia [CIN] 2 that have colposcopic impressions of greater than CIN 2) when referral smears were atypical squamous cells of uncertain significance (ASCUS), atypical glandular cells of uncertain significance (AGUS) or low grade squamous intraepithelial lesion (LSIL) was compared to that when smears showed high grade squamous intraepithelial lesion (HSIL) or cancer. The analysis was repeated with a screening study database in which colposcopic impression was assigned without knowledge of the Papanicolaou smear. Univariate and logistic regression analysis of the second database determined the relative importance of size and grade of lesion and Papanicolaou result to the accuracy of the colposcopic impression. Results: In the community database, colposcopic accuracy was 60/510 (12%) when smears were ASCUS, AGUS or LSIL and 77/132 (58%) when smears were HSIL or cancer (P < .001); in the second database, it was 2/19 (11%) when smears subsequently were reported as negative, ASCUS, AGUS or LSIL and 33/65 (54%) when smears were HSIL or cancer (P < .005). An accurate colposcopic impression was seen in 5/39 (13%) women with one-quadrant lesions, 8/18 (44%) with two-quadrant lesions and 23/27 (85%) with three- or four-quadrant lesions (P < .005). None of 19 women with smears reported as negative, ASCUS, AGUS or LSIL had lesions involving three or four quadrants of the cervix, while 27/65 (42%) women with smears reported as HSIL or cancer had such lesions (P < .005). With logistic regression, the more quadrants of the cervix involved, the more accurate the colposcopic impression. Once controlled for lesion size, there was no improvement when worst histologic grade or Papanicolaou smear result was considered. Conclusion: Though lesions greater than CIN 2 were more often overlooked when referral smears were negative, ASCUS, AGUS or LSIL than when they were HSIL or cancer, the real reason that the lesions were not detected by colposcopy was that they were small. abstract_id: PUBMED:3106869 Comparative utility of repeat Papanicolaou smears, cervicography, and colposcopy in the evaluation of atypical Papanicolaou smears. In an attempt to establish the significance and management of the atypical Papanicolaou smear, 97 patients with atypical Papanicolaou smears were each evaluated with a repeat Papanicolaou smear, cervicography, and colposcopy. In the detection of significant lesions, cervicography was more sensitive than a repeat smear, but less so than colposcopy. Forty-two percent of the colposcopically detected lesions would have gone undetected by repeat Papanicolaou smears, compared with 11% by cervicography. However, Papanicolaou smears were more specific than cervicography (55 versus 29%). The cost per case detected using cervicography for triage was equal to that using follow-up Papanicolaou smears, but was a third higher than referring all patients directly to colposcopy. Merely using repeat smears in patients with atypical Papanicolaou smears may result in nondetection of many significant lesions, especially in populations where follow-up is poor. abstract_id: PUBMED:2246766 Aggressive evaluation for atypical squamous cells in Papanicolaou smears. A retrospective study was done on women who had atypical Papanicolaou smears and were referred for immediate colposcopy.
The smears were obtained during January 1985 to March 1989 at Edwards Air Force Base, California. Excluded from the evaluation were abnormal Papanicolaou smears with hyperkeratosis, parakeratosis and koilocytotic atypia suggestive of human papillomavirus (HPV) infection. The evaluation included colposcopy, colposcopically directed biopsies, endocervical curettage and repeat Papanicolaou smears. A total of 101 patients were included in the study. Cervical intraepithelial neoplasia (CIN) was seen in 29.7% (30 patients): 12.9% (13) CIN I, 12.9% (13) CIN II and 3.9% (4) CIN III. Carcinoma was seen in 3.9% (4) of the patients: 2.9% (3) was carcinoma in situ, and 0.99% (1) was invasive squamous cell carcinoma, stage IIb. HPV and dysplastic lesions were seen together in 19.8% (20) of the patients. HPV was seen alone in 45% (46). Twenty-one patients (20.8%) had no apparent lesions on colposcopy, although one developed microinvasive keratinizing squamous cell carcinoma within 36 months of colposcopy. Many significant lesions can go undetected for extended periods of time in women with atypical Papanicolaou smears, resulting in delayed management. Referral for immediate colposcopy is advocated strongly. abstract_id: PUBMED:26312394 Waiting time for the first colposcopic examination in women with abnormal Papanicolaou test Purpose: To evaluate the waiting times before obtaining the first colposcopic examination for women with abnormal Papanicolaou smears. Methods: Retrospective cohort study conducted on patients who required a colposcopic examination to clarify an abnormal Pap test, between January 2002 and August 2008, in a metropolitan region of Brazil. The waiting times were defined as: Total Waiting Time (interval between the date of the pap test result and the date of the first colposcopic examination); Partial A Waiting Time (interval between the date of the pap test result and the date of referral); Partial B Waiting Time (interval between the date of referral and the date of the first colposcopic examination). Means, medians, relative and absolute frequencies were calculated. The Kruskal-Wallis test and Pearson's chi-square test were used to determine statistical significance. Results: A total of 1,544 women with a mean age of 34 years (SD=12.6 years) were analyzed. Most of them had access to colposcopic examination within 30 days (65.8%) or 60 days (92.8%) from referral. Mean Total Waiting Time, Partial A Waiting Time, and Partial B Waiting Time were 94.5 days (SD=96.8 days), 67.8 days (SD=95.3 days) and 29.2 days (SD=35.1 days), respectively. Conclusion: A large part of the women studied had access to colposcopic examination within 60 days after referral, but Total Waiting Time was long. Measures to reduce the waiting time for obtaining the first colposcopic examination can help to improve the quality of care in the context of cervical cancer control in the region, and ought to be addressed at the phase between the date of the pap test results and the date of referral to the teaching hospital. abstract_id: PUBMED:1323087 An evaluation of human papillomavirus testing as part of referral to colposcopy clinics. Objective: To determine the usefulness of human papillomavirus (HPV) testing as a triage method for predicting which women referred to a colposcopy clinic were most likely to have histologically confirmed cervical intraepithelial neoplasia (CIN).
Methods: Papanicolaou tests, ViraPap tests for HPV infection, and colposcopically directed biopsies were performed concurrently on 482 women referred to a student health colposcopy clinic. Results: The results demonstrated that HPV positivity was associated with a greatly increased likelihood of histologic confirmation of CIN, especially among women with concurrent cytologic findings that were negative or showed only atypical squamous cells of undetermined significance. Conclusions: Testing for HPV appears to have a role in the triage of students now being referred to our colposcopy clinic. A combination of HPV testing and repeated cytologic screening would provide reasonably sensitive screening for cervical neoplasia while limiting the use of colposcopic services, which are currently overburdened. The eventual usefulness of HPV testing will depend on the cost and availability of colposcopy services, the cost of Papanicolaou tests, the cost and accuracy of HPV tests, and the predictive value of HPV detection in the population being screened. abstract_id: PUBMED:21180872 Agreement between cytology, colposcopy and cervical histopathology Purpose: to evaluate the agreement between conventional cytology using the Papanicolaou test, repeated at the time of colposcopy, with colposcopic and histopathological findings. Methods: the study was carried out at the central public health laboratory of the state of Pernambuco between January and July, 2008, involving 397 women referred for colposcopic evaluation following an abnormal cervical smear test. Cytology was repeated at the time of colposcopy using conventional method, with particular attention being paid to the presence of abnormal colposcopic findings. The nomenclature used for cytology was the 2001 Bethesda system terminology, while that used for histology was the World Health Organization 1994 classification. Cytology performed at the time of colposcopy was compared with colposcopy and with histopathology obtained by colposcopy-directed biopsy. The Kappa coefficient was used to evaluate the agreement between methods, as well as the χ2 test, with the level of significance set at 5%. Results: poor agreement was found between cytology performed at the time of colposcopy and colposcopic findings (K=0.33; 95%CI=0.21-0.45) and between colposcopy and histopathology (K=0.35; 95%CI=0.39-0.51). Cytology performed at the time of colposcopy compared with histopathology revealed a Kappa of 0.41 (95%CI=0.29-0.530), which was considered to reflect moderate agreement. Conclusions: agreement was better between cytology and histopathology than between colposcopy and cytology or between colposcopy and histopathology. Answer: Yes, the system of Papanicolaou test nomenclature does affect the rate of referral for colposcopy. A study found that physicians recommended colposcopy more often when the Bethesda nomenclature system was used to describe the results of the Papanicolaou test, compared with a traditional descriptive nomenclature system. These differences were significant for four specific Papanicolaou smear pairs. The study concluded that the Bethesda system influenced family physicians to recommend colposcopy and biopsy more often for abnormal Papanicolaou test results presented in a clinical scenario (PUBMED:8252144).
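For readers unfamiliar with the agreement statistic used in the last abstract above (PUBMED:21180872), the sketch below computes Cohen's kappa from a simple 2x2 agreement table. The counts are hypothetical; the abstract reports only the resulting kappa values and their confidence intervals.

```python
# Minimal sketch of Cohen's kappa, the agreement statistic used to compare
# cytology, colposcopy and histopathology above. The 2x2 table is hypothetical.

def cohens_kappa(table: list[list[int]]) -> float:
    """Cohen's kappa for a square agreement table (rows: rater A, cols: rater B)."""
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / total
    expected = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # Hypothetical agreement between cytology (rows) and histopathology (columns),
    # dichotomised as abnormal / normal.
    table = [[70, 30],   # cytology abnormal
             [30, 90]]   # cytology normal
    print(f"kappa = {cohens_kappa(table):.2f}")  # about 0.45: moderate agreement here
```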
Instruction: Quiet please! Drug round tabards: are they effective and accepted? Abstracts: abstract_id: PUBMED:24930500 Quiet please! Drug round tabards: are they effective and accepted? A mixed method study. Background: The use of drug round tabards is a widespread intervention that is implemented to reduce the number of interruptions and medication administration errors (MAEs) by nurses; however, evidence for their effectiveness is scarce. Purpose: Evaluation of the effect of drug round tabards on the frequency and type of interruptions, MAEs, the linearity between interruptions and MAEs, as well as to explore nurses' experiences with the tabards. Study Design: A mixed methods before-after study, with three observation periods on three wards of a Dutch university hospital, combined with personal inquiry and a focus group with nurses. Methods: In one pre-implementation period and two post-implementation periods at 2 weeks and 4 months, interruptions and MAEs were observed during drug rounds. Descriptive statistics and univariable linear regression were used to determine the effects of the tabard, combined with personal inquiry and a focus group to find out experiences with the tabard. Findings: A total of 313 medication administrations were observed. Significant reductions in both interruptions and MAEs were found after implementation of the tabards. In the third period, a decrease of 75% in interruptions and 66% in MAEs was found. Linear regression analysis revealed a model R2 of 10.4%. The implementation topics that emerged can be classified into three themes: personal considerations, patient perceptions, and considerations regarding tabard effectiveness. Conclusions: Our study indicates that this intervention contributes to a reduction in interruptions and MAEs. However, the reduction in MAEs cannot be fully explained by the decrease in interruptions alone; other factors may have also influenced the effect on MAEs. We advocate for further research on complementary interventions that contribute to a further reduction of MAEs. Clinical Relevance: We can conclude that drug round tabards are effective to improve medication safety and are therefore important for the quality of nursing care and the reduction of MAEs. abstract_id: PUBMED:37006745 A quiet place: The impact of the word "quiet" on clinical workload. Purpose: This study aimed to determine the impact of uttering the word "quiet" on clinical workload during the overnight otolaryngology call shift and understand the factors contributing to resident busyness. Materials And Methods: A multicenter, single-blind, randomized-controlled trial was conducted. A total of 80 overnight call shifts covered by a pool of 10 residents were randomized to the quiet or to the control group. At the start of shift, residents were asked to state aloud, "Today will be a quiet night" (quiet group) or "Today will be a good night" (control group). Clinical workload, as measured by number of consults, was the primary outcome. Secondary measures included number of sign-out tasks, unplanned inpatient and operating room visits, number of phone calls and hours of sleep, and self-perceived busyness. Results: There was no difference in the number of total (P = 0.23), nonurgent (P = 0.18), and urgent (P = 0.18) consults. Tasks at signout, total phone calls, unplanned inpatient visits, and unplanned operating room visits did not differ between the control and quiet groups. 
While there were more unplanned operating room visits in the quiet group (29, 80.6%) compared to the control group (34, 94.4%), this was not found to be significant (P = 0.07). The majority of residents reported feeling "not busy" during control nights (18, 50.0%) compared to feeling "somewhat busy" during quiet nights (17, 47.2%; P = 0.42). Conclusion: Contrary to popular belief, there is no clear evidence that uttering the word "quiet" significantly increases clinical workload. abstract_id: PUBMED:33794643 Development of novel drug carrier via round window membrane As isolated anatomical position, limited labyrinthine artery supply, and blood-labyrinth barrier hampers systemic drug delivery to the inner ear. The efficient concentration of drug treatment is unsatisfactory and there's possible side effects after systemic administration. Intratympanic injection of drug can bypass the blood-labyrinth and permeated to the hair cells or synaptic area via the round-or oval window of the cochlea. Efficacy and safety of pharmacotherapy has become increasingly relied on the inner ear delivery carrier system. The goal of this review focus on the anatomical barrier that need to be overcome in the intratympanic applications, the improvement of drug retention and specific targets, and the safety of novel drug carriers, these emerging strategies of local drug delivery promise novel and better guidance for the clinical application. abstract_id: PUBMED:36059574 Quiet Ego Intervention Enhances Flourishing by Increasing Quiet Ego Characteristics and Trait Emotional Intelligence: A Randomized Experiment. The quiet ego-a personality construct characterized by empathy, inclusivity, non-defensiveness, and growth-mindedness in self-other relations-correlates positively with varied health markers. There is also emerging evidence that quiet-ego-based interventions may have a positive impact on health-related outcomes. However, no research has examined whether such interventions promote psychological flourishing and through what mechanisms. We addressed this gap with a randomized longitudinal experiment, hypothesizing that a quiet ego contemplation would improve participants' flourishing and that the link between the intervention and flourishing would be mediated by higher trait emotional intelligence (EI). Using Amazon MTurk, we randomly assigned 75 participants to a 3-session intervention or control condition. As hypothesized, participants in the intervention condition reported higher trait EI scores that, in turn, elevated their flourishing. Results extend the causal benefits of brief quiet ego interventions to psychological flourishing. Given the study's context during the COVID-19 pandemic, the findings may have implications for mitigating the negative impact of the pandemic. Supplementary Information: The online version contains supplementary material available at 10.1007/s10902-022-00560-z. abstract_id: PUBMED:27242757 A Drug Combination Screen Identifies Drugs Active against Amoxicillin-Induced Round Bodies of In Vitro Borrelia burgdorferi Persisters from an FDA Drug Library. Although currently recommended antibiotics for Lyme disease such as doxycycline or amoxicillin cure the majority of the patients, about 10-20% of patients treated for Lyme disease may experience lingering symptoms including fatigue, pain, or joint and muscle aches. 
Under experimental stress conditions such as starvation or antibiotic exposure, Borrelia burgdorferi can develop round body forms, which are a type of persister bacteria that appear resistant in vitro to customary first-line antibiotics for Lyme disease. To identify more effective drugs with activity against the round body form of B. burgdorferi, we established a round body persister model induced by exposure to amoxicillin (50 μg/ml) and then screened the Food and Drug Administration drug library consisting of 1581 drug compounds and also 22 drug combinations using the SYBR Green I/propidium iodide viability assay. We identified 23 drug candidates that have higher activity against the round bodies of B. burgdorferi than either amoxicillin or doxycycline. Eleven individual drugs scored better than metronidazole and tinidazole which have been previously described to be active against round bodies. In this amoxicillin-induced round body model, some drug candidates such as daptomycin and clofazimine also displayed enhanced activity which was similar to a previous screen against stationary phase B. burgdorferi persisters not exposure to amoxicillin. Additional candidate drugs active against round bodies identified include artemisinin, ciprofloxacin, nifuroxime, fosfomycin, chlortetracycline, sulfacetamide, sulfamethoxypyridazine and sulfathiozole. Two triple drug combinations had the highest activity against amoxicillin-induced round bodies and stationary phase B. burgdorferi persisters: artemisinin/cefoperazone/doxycycline and sulfachlorpyridazine/daptomycin/doxycycline. These findings confirm and extend previous findings that certain drug combinations have superior activity against B. burgdorferi persisters in vitro, even when pre-treated with amoxicillin. These findings may have implications for improved treatment of Lyme disease. abstract_id: PUBMED:33474538 Radiologic anatomy of the round window relevant to cochlear implantation and inner ear drug delivery. Objective: To determine anatomic relationships and variation of the round window membrane to bony surgical landmarks on computed tomography. Study Design: Retrospective imaging review. Methods: 100 temporal bone images were evaluated. Direct measurements were obtained for membrane position. Vector distances and angulation from umbo and bony annulus were calculated from image viewer software coordinates. Results: The angle of round window membrane at junction with cochlear basal turn was (42.1 ± 8.6)°. The membrane's position relative to plane of the facial nerve through facial recess was (14.7 ± 5.2)° posterior from a reference line drawn through facial recess to carotid canal. Regarding transtympanic drug delivery, the round window membrane was directed 4.1 mm superiorly from the inferior annulus and 5.4 mm anteriorly from the posterior annulus. The round window membrane on average was angled superiorly from the inferior annulus (77.1 ± 27.9)° and slightly anteriorly from the posterior annulus (19.1 ± 11.1°). The mean distance of round window membrane from umbo was 4 mm and posteriorly rotated 30° clockwise from a perpendicular drawn from umbo to inferior annulus towards posterior annulus. Together, these measurements approximate the round window membrane in the tympanic membrane's posteroinferior quadrant. 
Conclusions: These radiologic measurements demonstrate normal variations seen in round window anatomy relative to facial recess approach and bony tympanic annulus, providing a baseline to assess round window insertion for cochlear implantation and outlines anatomic factors affecting transtympanic drug delivery. abstract_id: PUBMED:27274613 BUY QUIET INITIATIVE IN THE USA. Noise-induced hearing loss is still considered one of the most common work-related illnesses in the United States of America. The U.S. National Institute for Occupational Safety and Health launched a national Buy Quiet campaign to raise awareness of the importance of purchasing quieter equipment. Buy Quiet encourages companies to seek out and demand quieter equipment thus driving the market to design and create quieter products. In the long run, investment in noise controls should be more prevalent as the market demands quieter products. This paradigm occurs as the market for quieter products expands both from the supply side (manufacturers) and the demand side (tool and equipment purchasers). The key to experiencing the reduced costs and increased benefits of Buy Quiet will be to develop partnerships between manufacturers and consumers. To this end, the U.S. National Institute for Occupational Safety and Health continues to work with partners to educate stakeholders about the risks and true costs of noise-induced hearing loss, as well as the economic benefits of buying quieter equipment. abstract_id: PUBMED:34176371 Drug distribution along the cochlea is strongly enhanced by low-frequency round window micro vibrations. The cochlea's inaccessibility and complex nature provide significant challenges to delivering drugs and other agents uniformly, safely and efficiently, along the entire cochlear spiral. Large drug concentration gradients are formed along the cochlea when drugs are administered to the middle ear. This undermines the major goal of attaining therapeutic drug concentration windows along the whole cochlea. Here, utilizing a well-known physiological effect of salicylate, we demonstrate a proof of concept in which drug distribution along the entire cochlea is enhanced by applying round window membrane low-frequency micro vibrations with a probe that only partially covers the round window. We provide evidence of enhanced drug influx into the cochlea and cochlear apical drug distribution without breaching cochlear boundaries. It is further suggested that ossicular functionality is not required for the effective drug distribution we report. The novel method presented here of local drug delivery to the cochlea could be implemented when ossicular functionality is absent or impeded and can be incorporated in clinically approved auditory protheses for patients who suffer with conductive, sensorineural or mixed hearing loss. abstract_id: PUBMED:37591357 Discrete cortical control during quiet stance revealed by desynchronization and rebound of beta oscillations. Postural sway during quiet stance often exhibits a repetition of micro-fall and the subsequent micro-recovery. The classical view -that the quiet bipedal stance is stabilized by the ankle joint stiffness- has been challenged by paradoxical non-spring-like behaviors of calf muscles: gastrocnemius muscles are shortened and then lengthened, respectively, during the micro-fall and the micro-recovery. 
Here, we examined EEG based brain activity during quiet stance, and identified desynchronization and synchronization of beta oscillations that were associated, respectively, with the micro-fall and the micro-recovery. Based on a widely accepted scenario for beta-band desynchronization during movement and post-movement rebound in the control of discrete voluntary movement, our results reveal that the beta rebound can be considered as a manifestation of stop command to punctuate the motor control for every fall-recovery cycle. Namely, cortical interventions to the automatic postural control are discrete, rather than continuous modulations. The finding is highly compatible with the intermittent control model, rather than the stiffness control model. abstract_id: PUBMED:30621011 Implementation of Quiet Areas in Sweden. The notion of quiet areas has received increasing attention within the EU in recent years. The EU Environmental Noise Directive (END) of 2002 stipulates that member states should map existing quiet areas and formulate strategies to keep these quiet. Quiet areas could play an important role in balancing densified urban development by ensuring access to relative quietness and associated health benefits. This paper reports on a recent study investigating how the notion of quiet areas has been implemented in Sweden. The study, initiated by the Sound Environment Center in 2017, was carried out in two phases. In phase one, an overview of the current situation was obtained by scrutinizing regional and municipal mapping initiatives, aided by a short digital questionnaire sent out to all 290 municipalities in Sweden. This provided a general understanding and highlighted initiatives for further study in phase two. The results revealed that 41% (n = 118) of Sweden's municipalities include quiet areas in their general plans, but that significantly fewer of these have sophisticated strategies for implementation (n = 16; 6%). Moreover, the interest in quiet areas in municipalities does not seem to be directly related to the END, but is instead inspired by previous regional initiatives in Sweden. The study highlights a number of considerations and examples of how quiet areas are approached in Sweden today. In general, Sweden has come a long way in terms of identifying and mapping quiet areas, but more progress is needed in developing strategies to protect, maintain, and publicize quiet areas. Answer: The study "Quiet please! Drug round tabards: are they effective and accepted?" (PUBMED:24930500) aimed to evaluate the effect of drug round tabards on the frequency and type of interruptions, medication administration errors (MAEs), and nurses' experiences with the tabards. The study was a mixed methods before-after study conducted in a Dutch university hospital, which included observations during drug rounds before and after the implementation of the tabards, as well as personal inquiry and a focus group with nurses. The findings from the study indicated that there was a significant reduction in both interruptions and MAEs after the implementation of the tabards. Specifically, in the third period of observation, there was a decrease of 75% in interruptions and 66% in MAEs. However, the reduction in MAEs could not be fully explained by the decrease in interruptions alone, suggesting that other factors may have also influenced the effect on MAEs. 
The study also identified three themes from the implementation topics: personal considerations, patient perceptions, and considerations regarding tabard effectiveness. The study concluded that drug round tabards are effective in improving medication safety and are therefore important for the quality of nursing care and the reduction of MAEs. The authors advocate for further research on complementary interventions that could contribute to a further reduction of MAEs. Overall, the study suggests that drug round tabards are both effective and accepted as an intervention to reduce interruptions and MAEs during medication administration rounds.
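Two of the quantities behind this answer are easy to misread: the 75% and 66% figures are simple before-and-after percent reductions, and the univariable linear-regression model R2 of 10.4% reported in the tabard study (PUBMED:24930500) is the share of MAE variance explained by interruptions. The Python sketch below uses made-up per-round counts, not the study data, to show how both quantities are computed.

# Before-after percent reduction and the R^2 of a univariable least-squares fit.
# All counts are hypothetical; the study data are not reproduced here.
def percent_reduction(before, after):
    return 100.0 * (before - after) / before

def r_squared(x, y):
    # R^2 of a simple linear fit of y on x (the squared Pearson correlation)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

interruptions = [5, 3, 4, 1, 0, 2, 6, 1]   # hypothetical counts per drug round
maes = [2, 1, 3, 1, 0, 0, 2, 1]            # hypothetical MAEs per drug round
print(percent_reduction(before=16, after=4))     # 75.0 (% reduction)
print(round(r_squared(interruptions, maes), 2))  # R^2 for this toy data only

An R2 of 0.104, as reported, suggests interruptions accounted for only about a tenth of the variation in MAEs, which is why the answer stresses that other factors must have contributed to the drop in errors.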
Instruction: Is absence related to work stress? Abstracts: abstract_id: PUBMED:34754488 Association between split shift work and work-related injury and disease absence. Background: Shift work (particularly split shifts) has been noted among the working conditions that hinder sustainable work. However, little is known regarding the effects of split shifts on health. This study aimed to investigate the association between split shift work and work-related injury and disease absence. Methods: This study used data from the fifth Korean Working Conditions Survey and included 4,042 paid shift workers. Shift work type and self-reported work-related injury and disease absence were investigated using a questionnaire. Logistic regression analysis was performed to investigate the association between split shift work and work-related absence with rotating shift worker as the reference group. Results: Of the 4,042 shift workers, 980 (24.3%) were split shift workers. The adjusted odds ratio (aOR) of split shift for absence due to work-related injury was high at 2.94 (95% confidence interval [CI]: 1.85-4.68) and the aOR of split shift for absence due to work-related disease was also high at 1.58 (95% CI: 1.09-2.29) compared to rotating shift. Conclusions: Split shift work leads to an increased risk of absences due to work-related injury and disease. abstract_id: PUBMED:27515971 Reporting and prediction of work-related sickness absence by general practitioners. Background: Information on sickness absence (SA) duration in general practice is difficult to record. The duration of absence certified by general practitioners (GPs) can be viewed as a prognosis for return to work. The Health and Occupation Research network in General Practice (THOR-GP) collects SA information from GPs associated with cases of work-related ill-health. A sample of these cases is followed up 1 year retrospectively to gather information on the duration of absence. Aims: To examine the extent of the underestimation of SA in routinely reported data and to investigate how well GPs predict patients' return to work. Methods: THOR-GPs submit case and SA information using a web-based form. GPs who submitted selected cases were asked about the total number of days of SA and whether the patient had returned to work. Results: THOR-GPs' routine SA data collection underestimated absence duration by 61%. According to the retrospective data, a much larger proportion of periods of absence due to work-related mental ill-health developed into long-term SA (60%) than episodes attributed to musculoskeletal disorders (32%). In over half the reported cases, the return to work was longer than the GP initially predicted. Conclusions: THOR-GP prospectively reported SA data underestimated the total length of absence; however, these data can examine the episodic rates of absence within different groups. More accurate longitudinal data can be collected retrospectively. GPs' ability to predict the length of time a patient will be away from work is important to enable treatment and rehabilitation planning in order to decrease the likelihood of a patient falling into long-term SA. abstract_id: PUBMED:26647948 Selected aspects of absence at work and work-related health problems in Polish enterprises. Workers' working conditions, work-related health problems and sickness absence are interdependent factors. 
Both workers' health problems and their absence are adverse events which generate significant costs for both Poland's Social Insurance Institution (ZUS) and employers. Despite the related burdens, it is difficult to assess the number of workers who experience work-related health problems, to indicate the share of those workers who have been unfit for work owing to such disorders and to indicate the types of workers' disorders which are caused by factors the workers are exposed to in the working environment. This article presents the findings of surveys carried out in selected production and service-providing companies, assessing the scale and nature of work-related health problems and their links with workers' sickness absence. abstract_id: PUBMED:36070001 Exposure to work-related violence and/or threats of violence as a predictor of certified sickness absence due to mental disorders: a prospective cohort study of 16,339 Swedish men and women in paid work. Objectives: The aim of this prospective cohort study was to investigate if exposure to work-related violence and/or threats of violence predict certified sickness absence due to mental disorders. Methods: Information on work-related exposure to violence and/or threats of violence were derived from the biannual Swedish Longitudinal Occupational Survey of Health (SLOSH) study 2012-2016, including individuals in paid work across Sweden and from different occupations/sectors (n = 16,339). Certified sickness absence due to mental disorders were ascertained from register data from the Swedish Social Insurance Agency. Odds ratios of certified sickness absence due to mental disorders according to exposure to work-related violence were estimated using multiple logistic regression. Several potential confounding variables, such as demographic and socio-economic factors, age, sex, cohabitation, children living at home, socio-economic status, educational level, as well as other types of psychosocial work environmental factors, were adjusted for in the analyses. Results: In the total study sample, 9% reported exposure to violence and/or threats of violence and the prevalence of sickness absence due to mental disorders was 5%. Exposure to work-related violence and/or threats of violence was associated prospectively with certified sickness absence due to mental disorders (odds ratio 1.46, 95% confidence interval 1.17-1.82, p < 0.01). Analysis of possible interaction showed no difference in association when comparing women to men and different age groups. Conclusions: Exposure to work-related violence and/or threats of violence appear to increase the odds of certified sickness absence due to mental disorders. Preventive measures aiming to lower the risk of exposure is thus of great importance. abstract_id: PUBMED:23638695 Predictors of return to work and duration of absence following work-related hand injury. The aims of the study were to explore the situation and the potential determinants of return-to-work (RTW) and the absence duration following work-related hand injury, and to provide evidence for the future intervention strategy of improving RTW. A prospective cohort of workers with work-related hand injury from three selected hospitals in East China was followed up on the outcomes of RTW up to 8 months after discharge.
Demographic and clinical data were collected during admission; economic factors, psychological factors and RTW outcomes were, respectively, investigated using a structured questionnaire via phone call after discharge from the hospitals in 0.5 month, 2 months, 4 months and 8 months. Univariate analysis and Cox regression model were used to examine the associations between potential determinants and outcomes of the RTW. Out of the 246 cases, 192 (78.1%) eventually returned to work with the median duration of the absence of 44.0 days during the 8-month follow-up. Factors from demographic, clinical, economic and psychological domains affected RTW in the univariate analyses. Receiving timely treatment at outpatient clinics, less serious injury, no tendon trauma and no skin loss were found to be significantly beneficial to RTW, while workers with the decreased monthly salary during absence and lower pre-injury salary were likely to take longer sick leave. Most of the workers successfully achieved RTW after work-related hand injury. Proper clinical treatment and rehabilitation, as well as economic and social support seem to have played vital roles in prompting RTW that should be prioritised for the intervention strategy. abstract_id: PUBMED:26171332 Which Stress Does Influence Returning to Work in Japan, Inside or Outside the Workplace? Background: In this cohort study, we investigated the background factors promoting and inhibiting a return to work after long-term absence from work due to sickness among psychiatric outpatients. Methods: We surveyed 73 psychiatric outpatients who were absent from work for a long time (POAWs), and 42 POAWs who were followed up until the 2-year time point. GHQ-30, NEO-FFI, MPS, RSS and questionnaires in-quiring about background factors, including relationships with others, were used, and the data were compared those who had returned to the work by the 2-year time point with those who had not. Results: Factors promoting a return to work were "extroversion (NEO-FFI)", "organization (MPS)", and "neuroticism (NEO-FFI)", whereas "concern over mistakes (MPS)" was an inhibitory factor. Period of absence from work was markedly associated with psychological stress outside the workplace, while depression, anxiety, and even psychological stress inside the workplace were not. Conclusions: POAWs left from work by the result of psychiatric problems like depression, anxiety and so on. These were the result of their background factors, their characteristics, and psychological stress in/outside the workplace. After two-year psychiatric treatment, their psychological stress outside the workplace remained as an essential matter. These were inescapable because they were set in private place, while stress at the workplace could be left at the workplace. abstract_id: PUBMED:27014490 Effects of Work-Related Stress on Work Ability Index among Iranian Workers. Background: Work Ability Index (WAI) is a well-known and valid self-report tool that has been widely used in various studies to identify and avoid early retirement and work-related disability. Nevertheless, very few studies have been carried out to evaluate work ability in Iran. We aimed to investigate the WAI and the effect of work-related stress on it among Iranian workers. Methods: A cross-sectional, descriptive and analytic study was carried out among 449 workers from five working sectors in three big cities of Iran. 
Work ability and work-related stress were measured using the Persian version of WAI and the Persian version of Health and Safety Executive Stress Indicator Tool. Results: More than a third of the workers surveyed (34.70%) did not have an appropriate level of work ability (WAI < 37). There was a significant correlation between subscales of work-related stress and the mean score of WAI. Furthermore, the variables of body mass index, sleep quality, exercise activity, job tenure, and three subscales of work-related stress including demands, supervisor support, and role were significant predictors of WAI. Conclusion: According to the results of this study, the interventional programs must be focused on improving supervisors support, eliminating ambiguity and conflicts in the role of workers in their job and organization, reducing job demands, improving sleep quality, and increasing exercise activity. abstract_id: PUBMED:37097739 Work-Focused Versus Generic Internet-Based Interventions for Employees With Stress-Related Disorders: Randomized Controlled Trial. Background: In recent decades, stress-related disorders have received more attention, with an increasing prevalence, especially within the working population. The internet provides new options for broad dissemination, and a growing body of evidence suggests that web-based interventions for stress might be effective. However, few studies have examined the efficacy of interventions in clinical samples and work-related outcomes. Objective: The aim of this study was to evaluate the efficacy of an internet-based cognitive behavioral intervention for stress-related disorders integrating work-related aspects (work-focused and internet-based cognitive behavioral therapy [W-iCBT]), compared with a generic internet-based cognitive behavioral therapy (iCBT) group and a waitlist control (WLC) group. Methods: In this trial, 182 employees, mainly employed in the health care, IT, or educational sector, who fulfilled the criteria for a stress-related disorder, were randomized to a 10-week W-iCBT (n=61, 33.5%), generic iCBT (n=61, 33.5%), or WLC (n=60, 33%). Self-rated questionnaires on perceived stress, burnout, exhaustion, and other mental health- and work-related outcomes were administered before and after the treatment and at 6- and 12-month follow-ups. Results: Compared with the WLC group, participants of the W-iCBT and iCBT groups showed an equal and significant reduction in the primary outcome (Shirom-Melamed Burnout Questionnaire [SMBQ]) from pretreatment to posttreatment assessment (Cohen d=1.00 and 0.83, respectively) and at the 6-month follow-up (Cohen d=0.74 and 0.74, respectively). Significant moderate-to-large effect sizes were also found in the secondary health- and work-related outcomes. The W-iCBT was the only group that exhibited significant effects on work ability and short-term sickness absence. Short-term sickness absence was 445 days lower than the WLC group and 324 days lower than the iCBT intervention group. However, no significant differences were found in terms of work experience or long-term sick leave. Conclusions: The work-focused and generic iCBT interventions proved to be superior compared with the control condition in reducing chronic stress and several other mental health-related symptoms. Interestingly, effects on work ability and short-term sickness absence were only seen between the W-iCBT intervention and the WLC groups.
These preliminary results are promising, indicating that treatments that include work aspects may have the potential to accelerate recovery and reduce short-term sickness absence because of stress-related disorders. Trial Registration: ClinicalTrials.gov NCT05240495; https://clinicaltrials.gov/ct2/show/NCT05240495 (retrospectively registered). abstract_id: PUBMED:37151906 Work Stress, Work-Related Rumination, and Depressive Symptoms in University Teachers: Buffering Effect of Self-Compassion. Purpose: The association between work stress and depressive symptoms has been well documented. More research is still needed to further investigate the underlying explanatory process mechanism and the potential protective function of personal resource in attenuating the negative effect of work stress on mental health. To address this gap and informed by the stress process model and emotion regulation theory,the present study examined the indirect role of work-related rumination and the buffering effect of self-compassion in university teachers, a population with increased vulnerabilities for both work stress and mental health issues. Methods: This study collected data from 727 university teachers (Mage = 37.65 ± 7.77 years, 61.5% women) through an online survey. Indirect effect analysis and moderation effect analysis were implemented using structural equation modeling via Mplus. Results: Work stress was directly and indirectly associated with depressive symptoms through work-related rumination, and self-compassion moderated the associations between (a) work stress and depressive symptoms and (b) work-related rumination and depressive symptoms. The direct and indirect effects of work stress on depressive symptoms were weaker in university teachers with high levels of self-compassion than in those with low levels of self-compassion. Conclusion: Work stress was associated with depressive symptoms through a cognitive pathway. The findings deepen our understanding of the link between work stress and mental health outcomes by revealing the indirect role of cognitive processes, the protective function of emotion regulation strategies, and their interplay. Reducing work-related rumination and strengthening self-compassion may constitute effective interventions to alleviate the negative effects of work stress on mental health. abstract_id: PUBMED:33808679 Salivary Biomarkers and Work-Related Stress in Night Shift Workers. Work organization, such as shifts and night work, can interfere with the perception of work-related stress and therefore on the development of pathological conditions. Night shift work, particularly, can have a negative impact on workers' wellbeing by interfering with the biological sphere. The aim of this study is to evaluate the associations between work activities, shift work effects and stress-related responses in 106 dock workers enrolled in southeast Italy. Dock workers' tasks consist of complex activities that seemed to affect more sleep quality than work-related stress. An analysis of salivary biomarkers such as cortisol, α-amylase, melatonin and lysozyme was performed along with validated psycho-diagnostic questionnaires. Alpha-amylase showed a significant negative correlation with the effort/reward imbalance score; thus, the measurement of salivary α-amylase is proposed as a sensitive and non-invasive biomarker of work-related stress. This study may provide new insights into developing strategies for the management of night shift work. 
Salivary biomarkers should be further investigated in the future in order to develop simple and effective tools for the early diagnosis of work-related stress or its outcomes. Answer: Yes, absence is related to work stress. Several studies have found associations between work-related stress and absence from work. For instance, a study on work-related sickness absence by general practitioners found that the duration of absence certified by GPs can be viewed as a prognosis for return to work, and that over half the reported cases had a longer return to work than initially predicted by the GP, indicating that work-related stress can lead to longer periods of sickness absence (PUBMED:27515971). Another study on work-related health problems in Polish enterprises highlighted the interdependence between workers' health problems and their absence, suggesting that work-related health issues, which can be exacerbated by stress, contribute to sickness absence (PUBMED:26647948). Furthermore, a prospective cohort study in Sweden showed that exposure to work-related violence and/or threats of violence, which can be a source of significant stress, was associated with an increased likelihood of certified sickness absence due to mental disorders (PUBMED:36070001). Similarly, a study on university teachers found that work stress was directly and indirectly associated with depressive symptoms through work-related rumination, and that high levels of self-compassion could buffer the negative effects of work stress on mental health, implying that stress can lead to mental health issues that may result in absence from work (PUBMED:37151906). Additionally, a study on night shift workers suggested that work organization, such as shifts and night work, can affect the perception of work-related stress and influence the development of pathological conditions that may lead to absence (PUBMED:33808679). Lastly, a study on work-related hand injury found that economic and psychological factors influenced the duration of absence following injury, indicating that stress-related factors can affect the length of time workers are absent due to work-related injuries (PUBMED:23638695).
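The associations cited in this answer are reported as odds ratios with 95% confidence intervals from logistic regression, for example an adjusted OR of 2.94 for split-shift work and injury absence and an OR of 1.46 for exposure to violence and sickness absence due to mental disorders. As a minimal sketch only, the Python below computes an unadjusted odds ratio with a Woolf (log-scale) confidence interval from a hypothetical exposure-by-absence 2x2 table; none of the counts come from the cited studies, and the published estimates are additionally adjusted for confounders, which this sketch is not.

import math

# Unadjusted odds ratio and Woolf 95% CI from a 2x2 table (hypothetical counts).
# a, b = exposed workers with / without absence; c, d = unexposed with / without.
def odds_ratio_ci(a, b, c, d, z=1.96):
    point = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(point) - z * se_log)
    high = math.exp(math.log(point) + z * se_log)
    return point, low, high

point, low, high = odds_ratio_ci(a=40, b=160, c=90, d=900)
print(round(point, 2), round(low, 2), round(high, 2))  # 2.5 1.66 3.76

A confidence interval that stays above 1.0, as in both cited studies, is what supports reading the exposure as associated with higher odds of absence.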
Instruction: Is there an alternative to centralization for pancreatic resection in New Zealand? Abstracts: abstract_id: PUBMED:24846497 Is there an alternative to centralization for pancreatic resection in New Zealand? Background: Internationally pancreatic surgery has become increasingly centralized; however, geographical and population distribution within New Zealand (NZ) limits the practicalities of such an approach. The aim of this study was to review the short-term outcomes of patients undergoing pancreatic surgery by a single hepato-pancreato-biliary trained surgeon in a centre that would meet the minimum criteria set by the NZ National Standards but not necessarily the definition of a high-volume surgeon/centre. Methods: A retrospective review of consecutive patients undergoing pancreatic resection within an enhanced recovery programme by a single surgeon between March 2005 and April 2013. Primary outcomes were 30-day morbidity and 90-day mortality. Results: A total of 156 patients who underwent a pancreatic resection were included. Eighty-two (53%) patients underwent a pancreaticoduodenectomy. Forty-seven (30%) underwent a left pancreatectomy. Overall, 30-day morbidity was 64% and overall 90-day mortality was 2.6%. Overall median length of stay was 11 (3-140) days. Conclusions: Acceptable outcomes have been achieved for patients undergoing pancreatic resection within a centre that meets the criteria proposed by the NZ National Standards for treatment of pancreatic cancer. abstract_id: PUBMED:31037503 Centralization of Pancreatic Surgery in Europe. Background: The objective of this article is a review and an analysis of the current state of centralization of pancreatic surgery in Europe. Numerous recent publications demonstrate higher postoperative in-hospital mortality rates in low-volume clinics after pancreatic resection than previously assumed due to their not publishing significantly worse outcomes when compared to high-volume centres. Although the benefits of centralization of pancreatic surgery in high-volume centres have been demonstrated in many studies, numerous countries have so far failed to establish centralization in their respective health care systems. Methods: A systematic literature search of the Medline database for studies concerning centralization of pancreatic surgery in Europe was conducted. The studies were reviewed independently for previously defined inclusion and exclusion criteria. We included 14 studies with a total of 117,634 patients. All data were extracted from or provided by health insurance company or governmental registry databases. Results: Thirteen out of the 14 studies demonstrate an improvement in their respective outcome related to volume. Twelve studies showed a significantly lower postoperative mortality rate in the highest annual volume group in comparison to overall postoperative mortality rate in the whole patient cohort. Conclusion: As the available data indicate, most European countries have so far failed to establish centralization of pancreatic surgery to high-volume centres due to numerous reasons. Considering a plateau in survival rates of patients undergoing treatment for pancreatic cancer in Europe during the last 15 years, this review enforces the worldwide plea for centralization to lower post-operative mortality after pancreatic surgery. abstract_id: PUBMED:28503843 The volume, cost and outcomes of pancreatic resection in a regional centre in New Zealand. 
Background: The frequency, costs and outcome of pancreatic resection (both pancreaticoduodenectomy and distal pancreatectomy) were reviewed in our own institution and correlated with regional population growth as well as national resection rates and locations. Methods: Demographic, pathological and outcome data on pancreaticoduodenectomy and distal pancreatectomy were obtained from a prospectively maintained database for the years 2005-2009 and 2010-2014. During this period, the catchment population grew from 460 000 to 567 000. Costing information was obtained from the hospital-independent costing and coding committee, and the locations and rates of pancreatic resection were obtained by interrogating the national minimum dataset. Results: A total of 41 pancreatectomies (29 pancreaticoduodenectomy, 12 distal pancreatectomy) were performed between 2005 and 2009, increasing to 84 pancreatectomies (55 pancreaticoduodenectomies, 27 distal pancreatectomies and two total pancreatectomies) between 2010 and 2014. This constituted one sixth of the national volume of pancreatic resections. There was no difference in patient demographics or indications for resection between the two time periods; however, portal vein resection was used more frequently in the second period. Margin positivity rate decreased (7 of 41 versus 8 of 84) and lymph node harvest increased (median 8 nodes versus median 15 nodes) between the two time periods. Overall 30-day mortality was 1.6%. Conclusion: In New Zealand, regional rates of pancreatic resection reflect regional population demands, and institutional growth is driven by local population requirements. Institutional growth can be achieved with the maintenance of internationally accepted outcomes and quality indicators. abstract_id: PUBMED:27586576 Re: Is there an alternative to centralization for pancreatic resection in New Zealand? N/A abstract_id: PUBMED:24844590 Impact of centralization of pancreatic cancer surgery on resection rates and survival. Background: Centralization of pancreatic surgery has been shown to reduce postoperative mortality. It is unknown whether resection rates and survival have also improved. The aim of this study was to analyse the impact of nationwide centralization of pancreatic surgery on resection rates and long-term survival. Methods: All patients diagnosed in the Netherlands between 2000 and 2009 with cancer of the pancreatic head were identified in the Netherlands Cancer Registry. Changes in referral pattern, resection rates and survival after pancreatoduodenectomy were analysed. Multivariable regression analysis was used to assess the impact of hospital volume (20 or more procedures per year) on survival after resection. Results: Between 2000 and 2009, 11,160 patients were diagnosed with cancer of the pancreatic head. The resection rate increased from 10.7 per cent in 2000-2004 to 15.3 per cent in 2005-2009 (P < 0.001). No significant difference in survival after resection was observed between the two intervals (P = 0.135), although survival was significantly better in high-volume hospitals (median survival 18 months versus 16 months in low/medium-volume hospitals; P = 0.017). After adjustment for patient and tumour characteristics, high hospital volume remained associated with better overall survival after resection (hazard ratio 0.70, 95 per cent confidence interval 0.58 to 0.84; P < 0.001). Conclusion: Centralization of pancreatic cancer surgery led to increased resection rates.
High-volume centres had significantly better survival rates. Centralization improves patient outcomes and should be encouraged. abstract_id: PUBMED:33738473 Effect of centralization and regionalization of pancreatic surgery on resection rates and survival. Background: Centralization of pancreatic surgery in the Netherlands has been ongoing since 2011. The aim of this study was to assess how centralization has affected the likelihood of resection and survival of patients with non-metastatic pancreatic head and periampullary cancer, diagnosed in hospitals with and without pancreatic surgery services. Methods: An observational cohort study was performed on nationwide data from the Netherlands Cancer Registry (2009-2017), including patients diagnosed with non-metastatic pancreatic head or periampullary cancer. The period of diagnosis was divided into three time intervals: 2009-2011, 2012-2014 and 2015-2017. Hospital of diagnosis was classified as a pancreatic or non-pancreatic surgery centre. Analyses were performed using multivariable logistic and Cox regression models. Results: In total, 10 079 patients were included, of whom 3114 (30.9 per cent) were diagnosed in pancreatic surgery centres. Between 2009-2011 and 2015-2017, the number of patients undergoing resection increased from 1267 of 3169 (40.0 per cent) to 1705 of 3566 (47.8 per cent) (P for trend < 0.001). In multivariable analysis, in 2015-2017, unlike the previous periods, patients diagnosed in pancreatic and non-pancreatic surgery centres had a similar likelihood of resection (odds ratio 1.08, 95 per cent c.i. 0.90 to 1.28; P = 0.422). In this period, however, overall survival was higher in patients diagnosed in pancreatic surgery than in those diagnosed in non-pancreatic surgery centres (hazard ratio 0.92, 95 per cent c.i. 0.85 to 0.99; P = 0.047). Conclusion: After centralization of pancreatic surgery, the resection rate for patients with pancreatic head and periampullary cancer diagnosed in non-pancreatic surgery centres increased and became similar to that in pancreatic surgery centres. Overall survival remained higher in patients diagnosed in pancreatic surgery centres. abstract_id: PUBMED:34607651 Centralization of pancreatic cancer treatment within an integrated healthcare system improves overall survival. Background: Higher-volume centers for pancreatic cancer surgeries have been shown to have improved outcomes such as length of stay. We examined how centralization of pancreatic cancer care within a regional integrated healthcare system improves overall survival. Methods: We conducted a retrospective study of 1621 patients treated for pancreatic cancer from February 2010 to December 2018. Care was consolidated into 4 Centers of Excellence (COE) in surgery, medical oncology, and other specialties. Descriptive statistics, bivariate analysis, Chi-square tests, and Kaplan-Meier analysis were performed. Results: Neoadjuvant chemotherapy use rose from 10% to 31% (p < .001). The median overall survival (OS) improved by 3 months after centralization (p < .001), but this did not reach significance on multivariate analysis. Conclusions: Our results suggest that in a large integrated healthcare system, centralization improves overall survival and neoadjuvant therapy utilization for pancreatic cancer patients. abstract_id: PUBMED:12622425 Ethnic disparity of pancreatic cancer in New Zealand. Background: The etiology of pancreatic cancer remains elusive.
Identification of high-risk groups may enable targeted study to identify new markers and risk factors. Aim: To review the epidemiology of pancreatic cancer in New Zealand and identify any differences between ethnic groups. Methods: All cases notified with pancreatic cancer between 1988 and 1997 were identified from the New Zealand Cancer Registry. Age-specific and age-standardized incidence rates (ASR) of pancreatic cancer were calculated for the total sample and the ethnic subgroups (Maori, Pacific, and Other, which was predominantly European). Data on the site, morphology, stage of tumors, and survival times were also reviewed. Results: There were 3004 cases over the 10-yr period. Ethnic ASR comparisons demonstrated higher rates in Maori (7.3/100,000) compared with Pacific (6.4/100,000) and the Other (5.6/100,000) ethnic group. Males had higher incidence rates than females in Pacific and Other, but not in Maori because of the unusually high rate among Maori women (7.2/100,000). The most commonly identified site, morphological type, and stage at presentation were the head of the pancreas (80.9% of site-specified cases), adenocarcinoma (44.3% of histologically confirmed cases), and diffuse +/- metastases (69.2% of staged cases), respectively. The median survival time was 92 d, and this did not differ significantly for the Maori and Other ethnic groups. Conclusion: The Maori have higher rates of pancreatic cancer than other ethnic groups in New Zealand, and do not show the expected male predominance. Maori women currently have one of the highest reported female rates in the world, and are a population that should be further investigated for disease markers and modifiable risk factors. abstract_id: PUBMED:31420860 National Trends in Centralization of Surgical Care and Multimodality Therapy for Pancreatic Adenocarcinoma. Background: Studies have demonstrated that multimodality therapy and surgery at high volume centers are associated with a longer survival. However, it is unknown if these data have translated into national changes in care delivery. Methods: Patients with stages I-III pancreatic adenocarcinomas who underwent resections between 2004 and 2010 were identified from the National Cancer Data Base. The primary outcome was a 3-year overall survival. Temporal trends in survival outcomes and treatment variables were measured. A mediation analysis using the Lin method was used to discern the relative contribution of changes in treatment variables towards improvements in survival over time. Results: A total of 22,196 patients were identified. Between 2004 and 2010, a 90-day peri-operative mortality remained unchanged (8.5 % to 8.4 %, p = 0.488), 3-year overall survival improved from 26 to 30% (p < 0.001), use of adjuvant/neoadjuvant chemotherapy increased (51 % to 61 %, p < 0.001), and more cases shifted to high volume centers (46 % at institutions performing > 10 cases/year in 2004 vs. 65 % in 2010, p < 0.001). On multivariable analysis, 32 % of the improvement in survival over time was attributable to receipt of chemotherapy, while 12 % was due to the shift of patients towards high volume centers (p < 0.001). Conclusions: Over the period from 2004 to 2010, a 3-year survival increased for patients undergoing resection for pancreatic cancer. This survival improvement can be partially attributed to the increasing utilization of chemotherapy and centralization of surgical care at high volume centers.
A continued emphasis on these factors will likely result in further prolongation of a survival following resection. abstract_id: PUBMED:30522826 Management of postoperative complications may favour the centralization of distal pancreatectomies. Nationwide data on pancreatic distal resections in Finland 2012-2014. Background: Centralization of pancreatic surgery has proceeded in the last few years in many countries. However, information on the effect of hospital volume specifically on distal pancreatic resections (DP) is lacking. Aim: To investigate the effect of hospital volume on postoperative complications in DP patients in Finland. Methods: All DP performed in Finland during the period 2012-2014 were analyzed, information having been retrieved from the appropriate national registers. Hospital volumes, postoperative pancreatic fistulae (POPF) and overall complications were graded. High volume centre (HVC) was defined as performing > 10 DPs, median volume centre (MVC) 4-9 DPs and low volume centre (LVC) fewer than 4 DP annually. Results: A total of 194 DPs were performed at 18 different hospitals. Of these 42% (81) were performed in HVCs (2 hospitals), 43% (84) in MVCs (6 hospitals) and the remaining 15% (29) in LVCs (10 hospitals). Patient demographics did not differ between the hospital volume groups. The overall rate of clinically relevant POPF, Clavien-Dindo grade 3-5 complications, and 90-day mortality showed no significant differences between the different hospital volumes. Grade C POPF was found more often in LVCs, being 1.2% in HVCs, 0% in MVCs and 6.9% in LVCs, p = 0.030. More reoperations were performed in LVCs (10.3%) than in HVCs (1.2%) or MVCs (1.2%); p = 0.025. Conclusions: Even though the rate of postoperative complications after DP is not affected by hospital volume, reoperations were performed ten times more often in the low-volume centres. Optimal management of postoperative complications may favour centralization not only of PD, but also of DP. Answer: The study conducted by a single hepato-pancreato-biliary trained surgeon in New Zealand suggests that acceptable outcomes for pancreatic resection can be achieved within a center that meets the criteria proposed by the NZ National Standards for treatment of pancreatic cancer, even if it does not necessarily meet the definition of a high-volume surgeon/center. The study reported a 30-day morbidity of 64% and a 90-day mortality of 2.6%, with an overall median length of stay of 11 days (PUBMED:24846497). This indicates that with appropriate standards and training, centralization may not be the only pathway to achieving satisfactory short-term outcomes for pancreatic resections in New Zealand. However, it is important to note that centralization of pancreatic surgery has been associated with improved outcomes in other studies. For instance, in Europe, centralization has been shown to lower post-operative mortality after pancreatic surgery (PUBMED:31037503), and in the Netherlands, centralization led to increased resection rates and significantly better survival rates in high-volume centers (PUBMED:24844590). Moreover, after centralization in the Netherlands, the resection rate for patients with pancreatic head and periampullary cancer diagnosed in non-pancreatic surgery centers increased and became similar to that in pancreatic surgery centers, although overall survival remained higher in patients diagnosed in pancreatic surgery centers (PUBMED:33738473).
In the context of New Zealand, regional rates of pancreatic resection reflect regional population demands, and institutional growth can be achieved with the maintenance of internationally accepted outcomes and quality indicators (PUBMED:28503843). This suggests that while centralization may offer certain benefits, regional centers that adhere to high standards of care and have experienced surgeons may provide a viable alternative to centralization, particularly in a country like New Zealand where geographical and population distribution may limit the practicality of centralizing all pancreatic surgeries.
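The volume-outcome effect in the Dutch registry data above is expressed as a Cox hazard ratio (0.70 for high-volume hospitals) alongside median survival of 18 versus 16 months. As a rough interpretive aid only, and under a constant-hazard (exponential) simplification that the registry analysis itself does not make, a hazard ratio scales median survival inversely, as the short Python sketch below shows.

# Under an exponential (constant-hazard) simplification, median survival scales
# inversely with the hazard ratio: implied_median = reference_median / HR.
# This is a reading aid only; the cited study used a Cox model, not this model.
def implied_median(reference_median_months, hazard_ratio):
    return reference_median_months / hazard_ratio

print(round(implied_median(16, 0.70), 1))  # about 22.9 months implied for high-volume care

The registry itself reports 18 versus 16 months, so the exponential shortcut overstates the gap; the adjusted hazard ratio and the crude medians answer slightly different questions, which is worth keeping in mind when weighing the centralization data against the New Zealand single-centre results.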
Instruction: Delayed emesis induced by moderately emetogenic chemotherapy: do we need to treat all patients? Abstracts: abstract_id: PUBMED:28365889 Antiemetic therapy for non-anthracycline and cyclophosphamide moderately emetogenic chemotherapy. Although antiemetic management in cancer therapy has improved, chemotherapy-induced nausea and vomiting remain common and troubling adverse events. Chemotherapeutic agents are classified based on their emetogenic effects, and appropriate antiemetics are recommended according to this categorization. Chemotherapy categorized as moderately emetogenic is associated with a wide spectrum of emetic risks. Combined anthracycline and cyclophosphamide regimens have been recently reclassified as highly emetogenic chemotherapy regimens. This review focuses on antiemetic pharmacotherapy in patients receiving non-anthracycline and cyclophosphamide-based moderately emetogenic chemotherapy regimens. Combination therapy with a 5-hydroxytryptamine-3 receptor antagonist, preferably palonosetron, and dexamethasone is the standard therapy in moderately emetogenic chemotherapy, although triple therapy with add-on neurokinin-1 receptor antagonist is used as an alternative treatment strategy. Among moderately emetogenic chemotherapy regimens, carboplatin-containing chemotherapy has considerable emetic potential, particularly during the delayed phase. However, the addition of a neurokinin-1 receptor antagonist to the standard antiemetic therapy prevents carboplatin-induced nausea and vomiting. For regimens including oxaliplatin, the benefit of adding a neurokinin-1 receptor antagonist requires further clarification. abstract_id: PUBMED:29790666 Rolapitant for the prevention of nausea in patients receiving highly or moderately emetogenic chemotherapy. Most patients receiving highly or moderately emetogenic chemotherapy experience chemotherapy-induced nausea and vomiting without antiemetic prophylaxis. While neurokinin-1 receptor antagonists (NK-1RAs) effectively prevent emesis, their ability to prevent nausea has not been established. We evaluated the efficacy of the long-acting NK-1RA rolapitant in preventing chemotherapy-induced nausea using post hoc analyses of data from 3 phase 3 trials. Patients were randomized to receive 180 mg oral rolapitant or placebo approximately 1-2 hours before chemotherapy in combination with a 5-hydroxytryptamine type 3 RA and dexamethasone. Nausea was assessed by visual analog scale during the acute (≤24 hours), delayed (>24-120 hours), and overall (0-120 hours) phases. Post hoc analyses by treatment group (rolapitant vs control) were performed on pooled data within patient subgroups receiving cisplatin-based, carboplatin-based, or anthracycline/cyclophosphamide (AC)-based chemotherapy. In the cisplatin-based chemotherapy group, significantly more patients receiving rolapitant than control reported no nausea (NN) in the overall (52.3% vs 41.7% [P < .001]; absolute benefit [AB] = 10.6%), delayed (55.7% vs 44.3% [P < .001]; AB = 11.4%), and acute (70.5% vs 64.3% [P = .030]; AB = 6.2%) phases. Similar results were observed in the carboplatin-based chemotherapy group, with significantly more patients receiving rolapitant than control reporting NN in the overall (62.5% vs 51.2% [P = .023]; AB = 11.3%) and delayed (64.1% vs 53.6% [P = .034]; AB = 10.5%) phases. In the AC-based chemotherapy group, patients receiving rolapitant or control reported similar NN rates during the overall and delayed phases.
Rolapitant effectively prevents nausea during the overall and delayed phases in patients receiving cisplatin- or carboplatin-based chemotherapy. abstract_id: PUBMED:25839011 Open-label observational study to assess the efficacy and safety of aprepitant for chemotherapy-induced nausea and vomiting prophylaxis in Indian patients receiving chemotherapy with highly emetogenic chemotherapy/moderately emetogenic chemotherapy regimens. Context: Currently, there are limited data on the prevention of chemotherapy-induced nausea and vomiting (CINV) in the Indian population with aprepitant-containing regimens. Aims: The aim was to assess the efficacy and safety of aprepitant for the prevention of nausea and vomiting associated with highly emetogenic chemotherapy/moderately emetogenic chemotherapy (HEC/MEC) regimens. Settings And Design: Investigator-initiated, multicentric, open-label, prospective, noncomparative, observational trial. Subjects And Methods: A triple drug regimen with aprepitant, palonosetron, and dexamethasone administration was assessed for the prevention of CINV during the acute, delayed, and overall phase (OP) for HEC/MEC regimens. The primary endpoint was complete response (CR; no emesis and no use of rescue medication) and the key secondary endpoint was complete control (CC; no emesis, no rescue medication and no more than mild nausea) during the OP. Statistical Analysis Used: Per-protocol efficacy was analyzed for the first cycle with results represented in terms of CR/CC rates using descriptive statistics. Results: Seventy-five patients were included in the study with a median age of 49.7 years and 89.7% being females. The CR rates (OP) for patients administered HEC or MEC regimens during the first cycle were 92% and 90.9%, respectively. Similarly, the CC rates (OP) were 75% and 90% for these regimens, respectively. 7 (9.2%) patients reported adverse drug reactions that were mild and transient with no reports of any serious adverse events. Conclusions: Use of an aprepitant-containing regimen for patients receiving HEC/MEC regimens resulted in significantly high CR and CC response rates, which further consolidates its potential role in improving patient quality of life and compliance to disease management. abstract_id: PUBMED:28260945 Differential pharmacology and clinical utility of rolapitant in chemotherapy-induced nausea and vomiting. Chemotherapy-induced nausea and vomiting (CINV) is a debilitating side effect of many cytotoxic chemotherapy regimens. CINV typically manifests during two well-defined time periods (acute and delayed phases). The acute phase is the first 24 hours after chemotherapy and is largely managed with 5-hydroxytryptamine 3 receptor antagonists. The delayed phase, a 5-day at-risk period during which patients are not often in direct contact with their health care provider, remains a significant unmet medical need. Neurokinin-1 (NK-1) receptor antagonists have demonstrated protection against acute and delayed CINV in patients treated with highly emetogenic chemotherapy and moderately emetogenic chemotherapy when used in combination with a 5-hydroxytryptamine 3 receptor antagonist and dexamethasone. Furthermore, recent data indicate that this protection is maintained over multiple treatment cycles. Rolapitant, a selective and long-acting NK-1 receptor antagonist, is approved as an oral formulation for the prevention of delayed CINV in adults.
This review discusses the differential pharmacology and clinical utility of rolapitant in preventing CINV compared with other NK-1 receptor antagonists. abstract_id: PUBMED:32735029 Metoclopramide, Dexamethasone, or Palonosetron for Prevention of Delayed Chemotherapy-Induced Nausea and Vomiting After Moderately Emetogenic Chemotherapy (MEDEA): A Randomized, Phase III, Noninferiority Trial. Background: For the prevention of chemotherapy-induced nausea and vomiting (CINV) during the delayed phase (24-120 hours) after moderately emetogenic chemotherapy (MEC), the use of 3-day dexamethasone (DEX) is often recommended. This study compared the efficacy and safety of two DEX-sparing regimens with 3-day DEX, focusing on delayed nausea. Patients And Methods: This open-label, randomized, phase III study was designed to demonstrate noninferiority of two DEX-sparing regimens: ondansetron + DEX on day 1 + metoclopramide on days 2-3 (MCP arm), and palonosetron + DEX on day 1 (PAL arm) versus ondansetron on day 1 + DEX on days 1-3 (DEX arm) in chemotherapy-naïve patients receiving MEC. Primary efficacy endpoint was total control (TC; no emetic episodes, no use of rescue medication, no nausea) in the delayed phase. Noninferiority was defined as a lower 95% CI greater than the noninferiority margin set at -20%. Secondary endpoints included no vomiting, no rescue medication, no (significant) nausea, impact of CINV on quality of life, and antiemetics-associated side effects. Results: Treatment arms were comparable for 189 patients analyzed: predominantly male (55.7%), median age 65.0 years, colorectal cancer (85.7%), and oxaliplatin-based chemotherapy (81.5%). MCP demonstrated noninferiority to DEX for delayed TC (MCP 56.1% vs. DEX 50.0%; 95% CI, -11.3%, 23.5%). PAL also demonstrated noninferiority to DEX (PAL 55.6% vs. DEX 50.0%; 95% CI, -12.0%, 23.2%). There were no statistically significant differences for all secondary endpoints between treatment arms. Conclusion: This study showed that DEX-sparing regimens are noninferior to multiple-day DEX in terms of delayed TC rate in patients undergoing MEC. ClinicalTrials.gov identifier. NCT02135510. Implications For Practice: Chemotherapy-induced nausea and vomiting (CINV) in the delayed phase (24-120 hours after chemotherapy) remains one of the most troublesome adverse effects associated with cancer treatment. In particular, delayed nausea is often poorly controlled. The role of dexamethasone (DEX) in the prevention of delayed nausea after moderately emetogenic chemotherapy (MEC) is controversial. This study is the first to include nausea assessment as a part of the primary study outcome to better gauge the effectiveness of CINV control and patients' experience. Results show that a DEX-sparing strategy does not result in any significant loss of overall antiemetic control: DEX-sparing strategies incorporating palonosetron or multiple-day metoclopramide are safe and at least as effective as standard treatment with a 3-day DEX regimen with ondansetron in controlling delayed CINV-and nausea in particular-following MEC. abstract_id: PUBMED:31379219 Netupitant plus palonosetron is a cost-effective treatment for the prophylaxis of chemotherapy-induced nausea and vomiting in highly and moderately emetogenic cancer treatment. 
Introduction: The analysis was conducted to assess the cost-efficacy of new antiemetic drugs (netupitant plus palonosetron (NEPA)) for the prophylaxis of chemotherapy-induced nausea and vomiting (CINV) in highly and moderately emetogenic chemotherapy for cancer treatment. Areas covered: The present evaluation was restricted to pivotal phase III randomized controlled trials (RCTs) of NEPA versus (vs.) palonosetron for the prophylaxis of CINV. We calculated the pharmacological costs necessary to get the benefit in complete response (CR), for each trial. Our analysis evaluated 2 RCTs, including 1720 patients. Referring to both highly and moderately emetogenic chemotherapy, NEPA plus DEX was economically superior to palonosetron (PALO) plus DEX, with 13 312 € and 7885 € gain in medical costs for every 100 patients treated, respectively. The cost-effectiveness ratios (CERs) (€/CR) in highly emetogenic risk were 1.24 and 13.23 for the NEPA and PALO group, respectively, and 1.49 and 15.20 for the same groups in moderately emetogenic risk. The incremental cost-effectiveness ratio (ICER) between the groups was 1016.18 €/CR and 1024.03 €/CR in highly and moderately emetogenic risk, respectively. Expert opinion: The combination of NEPA plus DEX is cost-effective for preventing CINV in highly and moderately (AC-based) emetogenic cancer treatment. abstract_id: PUBMED:27510316 2016 updated MASCC/ESMO consensus recommendations: Prevention of nausea and vomiting following moderately emetogenic chemotherapy. Purpose: An update of the recommendations for the prophylaxis of acute and delayed emesis induced by moderately emetogenic chemotherapy published after the last MASCC/ESMO antiemetic consensus conference in 2009 has been carried out. Methods: A systematic literature search using PubMed from January 1, 2009 to January 6, 2015 with a restriction to papers in English was conducted. Results: Overall, two randomized phase II and seven randomized phase III studies plus the results of three subgroup analyses of large phase III trials and those of a pilot study have been included. Conclusions: In carboplatin-treated patients, a moderate benefit from adding an NK1 receptor antagonist to dexamethasone and a 5-HT3 receptor antagonist has been shown. However, in oxaliplatin-treated patients, contrasting results about the role of NK1 receptor antagonists have been obtained. At present, it is not possible to suggest a specific 5-HT3 receptor antagonist to use for the prevention of acute emesis in these patients. No routine prophylaxis for delayed emesis is recommended but in patients receiving moderately emetogenic chemotherapy with known potential for delayed emesis (e.g., oxaliplatin, doxorubicin, cyclophosphamide) the use of dexamethasone for days 2-3 can be considered. abstract_id: PUBMED:28018788 Efficacy of triple antiemetic therapy (palonosetron, dexamethasone, aprepitant) for chemotherapy-induced nausea and vomiting in patients receiving carboplatin-based, moderately emetogenic chemotherapy. Background: Chemotherapy-induced nausea and vomiting (CINV) is a major adverse toxicity of cancer chemotherapy. Recommended treatments for prevention of CINV vary among published guidelines, and optimal care for CINV caused by moderately emetogenic chemotherapy has not been established. This study assessed the efficacy and safety of triple antiemetic therapy comprising palonosetron, dexamethasone and aprepitant for carboplatin-based chemotherapy.
Chemotherapy-naïve patients with lung cancer scheduled for a first course of a carboplatin-containing regimen formed the study cohort. Patients were pretreated with antiemetic therapy comprising palonosetron (0.75 mg, i.v.) and dexamethasone (9.9 mg, i.v.) on day 1, and aprepitant (125 mg, p.o.) on day 1 followed by 80 mg on days 2 and 3. Primary endpoint was the proportion of patients who did not experience vomiting and did not require rescue medication [complete response (CR)] in the acute phase (0-24 h), late phase (24-168 h) and overall. Secondary endpoint was the proportion of patients who experienced no vomiting episodes and no more than mild nausea without the need for rescue medication [complete control (CC)]. Results: Prevalence of a CR during the acute phase, delayed phase, and overall was 100, 91.9 and 91.9%, whereas that of CC was 100, 84.4 and 84.4%, respectively. The most common adverse event was mild constipation; severe adverse events related to antiemetic treatment were not observed. Conclusion: Triple antiemetic therapy comprising palonosetron, dexamethasone and aprepitant shows excellent effects in the prevention of CINV in patients receiving a carboplatin-containing regimen. abstract_id: PUBMED:30499739 Hydration requirements in patients receiving highly emetogenic chemotherapy. Aim: Chemotherapy-induced nausea and vomiting diminishes quality of life and increases healthcare resource use. This retrospective medical records analysis evaluated hydration requirements with emetogenic chemotherapy. Patients & Methods: Cancer patients received moderately emetogenic chemotherapy (MEC) or highly emetogenic chemotherapy (HEC), and antiemetics palonosetron or granisetron extended-release subcutaneous (GERSC), neurokinin 1 receptor antagonist and dexamethasone. Unscheduled hydration event rates were determined. Results: For 186 patients (92 palonosetron, 94 GERSC) overall, mean hydration rate was significantly higher with palonosetron (0.6 vs 0.2; p = 0.0005). Proportion of patients with ≥1 hydration event was significantly higher with palonosetron overall (54 vs 33%; p = 0.0033) and in cycles 2-4 and the HEC subgroup. Conclusion: GERSC within a three-drug antiemetic regimen may reduce unscheduled hydration requirements with MEC or HEC. abstract_id: PUBMED:28194109 Delayed Chemotherapy-Induced Nausea and Vomiting: Pathogenesis, Incidence, and Current Management. Even when chemotherapy-induced nausea and vomiting (CINV) can be effectively controlled in the acute phase, it may still occur in the delayed phase. Identifying at-risk patients is complex and requires consideration of clinical, personal, demographic, and behavioral factors. Delayed CINV has a significant detrimental effect on patients' daily life and is responsible for significant healthcare resource utilization. Patients who do not experience acute CINV are not necessarily exempt from delayed CINV, and healthcare professionals have been shown to underestimate the incidence of delayed CINV. Failure to protect against CINV during the first cycle of chemotherapy is the most significant independent risk factor for delayed CINV during subsequent cycles. Addition of a neurokinin-1 receptor antagonist to antiemetic prophylactic regimens involving a 5-hydroxytryptamine type 3 receptor antagonist and a corticosteroid helps to ameliorate delayed CINV, particularly vomiting. 
Netupitant and rolapitant are second-generation neurokinin-1 receptor antagonists that provide effective prophylaxis against delayed chemotherapy-induced vomiting and also have an antinausea benefit. All of the neurokinin-1 receptor antagonists with the exception of rolapitant inhibit or induce cytochrome P450 3A4 (CYP3A4), and a reduced dose of dexamethasone (a CYP3A4 substrate) should be administered with aprepitant or netupitant; by contrast, this is not necessary with rolapitant. Here we review specific challenges associated with delayed CINV, its pathophysiology, epidemiology, treatment, and outcomes relative to acute CINV, and its management within the larger context of overall CINV. Answer: The management of delayed emesis induced by moderately emetogenic chemotherapy (MEC) is a nuanced issue, and the decision to treat all patients prophylactically is not straightforward. The literature suggests that not all patients will experience delayed chemotherapy-induced nausea and vomiting (CINV), and the risk can vary based on individual patient factors and the specific chemotherapeutic agents used (PUBMED:28194109). For instance, in patients receiving MEC with known potential for delayed emesis, such as oxaliplatin, doxorubicin, or cyclophosphamide, the use of dexamethasone for days 2-3 can be considered (PUBMED:27510316). However, the 2016 updated MASCC/ESMO consensus recommendations do not recommend routine prophylaxis for delayed emesis but suggest considering dexamethasone in specific cases (PUBMED:27510316). Moreover, the efficacy of different antiemetic regimens in preventing delayed CINV varies. For example, a study showed that triple antiemetic therapy comprising palonosetron, dexamethasone, and aprepitant was highly effective in preventing CINV in patients receiving carboplatin-based MEC (PUBMED:28018788). Another study demonstrated that dexamethasone-sparing regimens were noninferior to multiple-day dexamethasone in controlling delayed CINV and nausea in particular, following MEC (PUBMED:32735029). The use of neurokinin-1 receptor antagonists (NK-1 RAs) like rolapitant and aprepitant has also been shown to be effective in preventing delayed CINV, particularly vomiting, and they have an antinausea benefit as well (PUBMED:28260945; PUBMED:28194109). However, the benefit of adding NK-1 RAs to standard antiemetic therapy for regimens including oxaliplatin requires further clarification (PUBMED:28365889). In summary, while some patients may benefit from prophylactic treatment for delayed emesis following MEC, it may not be necessary to treat all patients. The decision should be individualized based on the emetogenic potential of the chemotherapy regimen, patient risk factors, and preferences, as well as the efficacy and safety profiles of available antiemetic therapies.
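The NEPA cost-effectiveness abstract above (PUBMED:31379219) reports cost-effectiveness ratios (CERs, € per complete response) and an incremental cost-effectiveness ratio (ICER) between regimens. A minimal sketch of that arithmetic is given below; the helper names, costs, and response rates are hypothetical placeholders for illustration, not the trial's actual figures or analysis code.

def cer(cost_per_patient: float, complete_response_rate: float) -> float:
    # Cost-effectiveness ratio: cost spent per complete response achieved.
    return cost_per_patient / complete_response_rate

def icer(cost_new: float, cr_new: float, cost_old: float, cr_old: float) -> float:
    # Incremental cost-effectiveness ratio: extra cost per additional complete response.
    return (cost_new - cost_old) / (cr_new - cr_old)

# Hypothetical example: regimen A costs 120 EUR per patient with a 90% complete
# response rate; regimen B costs 60 EUR per patient with an 80% rate.
print(cer(120.0, 0.90))               # EUR per complete response, regimen A
print(cer(60.0, 0.80))                # EUR per complete response, regimen B
print(icer(120.0, 0.90, 60.0, 0.80))  # extra EUR per additional complete response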
Instruction: Journal Club: incidental thyroid nodules detected at imaging: can diagnostic workup be reduced by use of the Society of Radiologists in Ultrasound recommendations and the three-tiered system? Abstracts: abstract_id: PUBMED:24370125 Journal Club: incidental thyroid nodules detected at imaging: can diagnostic workup be reduced by use of the Society of Radiologists in Ultrasound recommendations and the three-tiered system? Objective: The purpose of this study was to determine the number of thyroid nodule workups that could be eliminated and the number of malignant tumors that would be missed if the Society of Radiologists in Ultrasound (SRU) recommendations and the three-tiered system were applied to incidental thyroid nodules (ITN) detected at imaging. Materials And Methods: This retrospective study included ITN in 390 consecutively registered patients who underwent ultrasound-guided fine-needle aspiration of one or more thyroid nodules from July 2010 to June 2011. Images were reviewed, and nodules were categorized according to two workup criteria: ITN seen on ultrasound images were categorized according to SRU recommendations, and those seen on CT, MR, or PET/CT images were classified according to the three-tiered risk-categorization system. Results: In this study 114 of 390 (29%) patients had nodules first detected incidentally during imaging studies, and 107 patients met the inclusion criteria. These patients had 47 ITN seen at ultrasound and 60 ITN seen at either CT, MRI, or PET/CT. If the SRU recommendations had been applied, 14 of 47 (30%) patients with ITN on ultrasound images would not have received fine-needle aspiration and one of four cases of cancer would have been missed. The missed malignant tumor was a 14-mm localized papillary carcinoma. If the three-tiered system had been applied, 21 of 60 (35%) patients with ITN on CT, MR, or PET/CT images would not have received fine-needle aspiration, but none of the three malignancies would have been missed. Overall, 35 of 107 (33%) patients with ITN did not meet the SRU recommendations or the three-tiered criteria. Conclusion: Use of the SRU recommendations and three-tiered system can reduce the workup of ITN by one third compared with current practice without specific guidelines. One case of localized papillary carcinoma was missed when the SRU recommendations were used. abstract_id: PUBMED:25310642 What to do with incidental thyroid nodules identified on imaging studies? Review of current evidence and recommendations. Purpose Of Review: To discuss the problem of incidental thyroid nodules (ITN) detected on imaging; summarize the literature for workup methods; and provide recommendations based on current evidence. Recent Findings: ITN are a common problem, seen in 40-50% of ultrasound and 16% of computed tomography (CT) and MRI studies that include the thyroid. The personal and financial costs of workup frequently outweigh the benefits when considering that the majority of ITN are benign; 25-41% of patients undergo surgery after biopsy, of which more than half ultimately result in a benign diagnosis, and small thyroid cancers have an indolent course. Workup should consider reduction in unnecessary workup in addition to cancer diagnosis. The Society of Radiologists in Ultrasound recommendations have been proposed for ITN detected on ultrasound and found to reduce workup by 30%. For ITN detected on CT, MRI, or PET/CT, a three-tiered system categorization method reduces workup of ITN by 35-46%.
Summary: The ideal approach to selecting ITN detected on imaging for workup would not be to diagnose all cancers, but to diagnose cancers that have reached clinical significance, while avoiding unnecessary tests and surgery in patients with benign nodules, especially those who have limited life expectancy. The three-tiered system and the Society of Radiologists in Ultrasound recommendations are supported by existing studies and focus on reducing unnecessary biopsy. abstract_id: PUBMED:24555597 Applying the Society of Radiologists in Ultrasound recommendations for fine-needle aspiration of thyroid nodules: effect on workup and malignancy detection. Objective: The Society of Radiologists in Ultrasound (SRU) recommendations on thyroid nodules are intended to "diagnose thyroid cancers that have reached clinical significance, while avoiding unnecessary tests and surgery in patients with benign nodules." The aim of our study was to determine the proportion of thyroid nodules undergoing ultrasound-guided fine-needle aspiration (FNA) that do not meet SRU recommendations. Materials And Methods: This study is a retrospective study of 400 consecutive ultrasound-guided thyroid FNA encounters from July 2010 through June 2011. An encounter was defined as presentation to the department of radiology on a given date for FNA of one or more thyroid nodules. The criteria for performing biopsy of a nodule were determined by the referring clinicians. Nodules were categorized on the basis of sonographic findings as meeting SRU recommendations for biopsy, which we refer to as "SRU-positive," or not, which we refer to as "SRU-negative." Patients without a definitive pathology diagnosis of Bethesda class benign or malignant nodules were excluded. The characteristics of malignancies were compared for SRU-positive and SRU-negative encounters. Results: The final study group consisted of 360 biopsy encounters for 350 patients and 29 malignancies (8%). Of the 360 biopsy encounters, 86 (24%) were SRU-negative encounters. Malignancy rates in SRU-positive and SRU-negative encounters were 9% (24/274) and 6% (5/86), respectively, and were not significantly different (p=0.5). Eighteen malignancies (75%) in the SRU-positive group were localized, whereas the others had nodal metastases (4/24) or distant metastases (2/24). SRU-positive encounters included medullary carcinoma, anaplastic carcinoma, and melanoma metastasis in addition to papillary carcinoma. All SRU-negative malignancies were localized papillary carcinomas. Conclusion: One in four thyroid biopsy encounters at our institution did not meet SRU recommendations for biopsy. The application of SRU recommendations reduces the number of benign nodules that undergo workup. Potentially missed malignancies in SRU-negative nodules are less aggressive by histologic type and stage compared with SRU-positive malignancies. abstract_id: PUBMED:25203387 Using the 3-tiered system for categorizing workup of incidental thyroid nodules detected on CT, MRI, or PET/CT: how many cancers would be missed? Background: A 3-tiered system has been proposed by radiologists for the reporting and workup of incidental thyroid nodules (ITN) detected on computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography/computed tomography (PET/CT). It has been shown to reduce the workup rate, but there remains concern about missed malignancies. 
This study aims at estimating the proportion of incidental cancers that would be missed relative to all thyroid cancers if the 3-tiered system were applied to ITN seen on CT, MRI, or PET/CT. We also aim to characterize these missed incidental cancers by histology and tumor stage. Methods: A retrospective review was performed of 680 consecutive patients with thyroid cancer who underwent surgery between January 2003 and December 2012. Medical records were reviewed to identify incidental thyroid cancers detected on imaging. Patients with incidental cancers detected on CT, MRI, or PET/CT were categorized according to the system as 3-tiered system-positive and 3-tiered system-negative. The system recommends that only 3-tiered system-positive ITN undergo further workup with ultrasound. Three-tiered system-negative cancers were the cancers that would have been missed if the system were used in practice. These tumors are described by tumor type, size, and stage to determine the potential impact if the 3-tiered system were adopted. Results: One hundred and one patients had imaging-detected incidental cancers, of whom 64 met the inclusion criteria and were originally detected on CT, MRI, or PET/CT. Eight were 3-tiered system-negative, which represents 13% of the 64 incidental cancers that could be categorized and 1.2% of all thyroid cancers treated at our institution in the 10-year period. Three-tiered system-negative tumors were all papillary in histology and had a median size of 12 mm (interquartile range 10-12 mm). Six tumors (75%) were American Joint Committee on Cancer (AJCC) stage I, one was AJCC stage II, and one was AJCC stage III at diagnosis. Conclusions: Based on thyroid cancers diagnosed during a decade, incidental malignancies missed by the 3-tiered system represent 1.2% of all thyroid malignancies. Three-tiered system-negative incidental cancers were all small papillary cancers. Given that few cancers would be missed and most are less aggressive, we propose that the 3-tiered system could be adopted in clinical practice to guide the workup of ITN identified on CT, MRI, and PET/CT. abstract_id: PUBMED:35466880 A Comparison of the Performances of Artificial Intelligence System and Radiologists in the Ultrasound Diagnosis of Thyroid Nodules. Aims: The purpose of this paper is to prospectively evaluate the performance of an artificial intelligence (AI) system in diagnosing thyroid nodules and to assess its potential value in comparison with the performance of radiologists with different levels of experience, as well as the factors affecting its diagnostic accuracy. Background: In recent years, medical imaging diagnosis using AI has become a popular topic in clinical application research. Objective: This study aimed to evaluate the performance of an AI system in diagnosing thyroid nodules and compare it with the performance levels of different radiologists. Methods: This study involved 426 patients screened for thyroid nodules at the First Affiliated Hospital of Guangzhou Medical University between July 2017 and March 2019. All of the nodules were evaluated by radiologists with various levels of experience and an AI system. The diagnostic performances of two junior and two senior radiologists, an AI system, and an AI-assisted junior radiologist were compared, as were their diagnostic results with respect to nodules of different sizes. Results: The senior radiologists, the AI system, and the AI-assisted junior radiologist performed better than the junior radiologist (p < 0.05).
The areas under the curve of the AI system and the AI-assisted junior radiologist were similar to that of the senior radiologists (p > 0.05). The diagnostic results concerning the two nodule sizes showed that the diagnostic error rates of the AI system, junior radiologists, and senior radiologists for nodules with a maximum diameter of ≤1 cm (Dmax ≤ 1 cm) were higher than those for nodules with a maximum diameter greater than 1 cm (Dmax > 1 cm) (23.4% vs. 12.1%, p = 0.002; 26.6% vs. 7.3%, p < 0.001; and 38.3% vs. 14.6%, p < 0.001). Conclusion: The AI system is a decision-making tool that could potentially improve the diagnostic efficiency of junior radiologists. Micronodules with Dmax ≤ 1 cm were significantly correlated with diagnostic accuracy; accordingly, more micronodules of this size, in particular, should be added to the AI system as training samples. Other: The system could be a potential decision-making tool for effectively improving the diagnostic efficiency of junior radiologists in the community. abstract_id: PUBMED:35848013 Ultrasound-based computer-aided diagnosis for cytologically indeterminate thyroid nodules with different radiologists. Purpose: To evaluate a computer-aided diagnosis (CAD) technique in predicting malignancy for cytologically indeterminate thyroid nodules (TNs) as compared with different experienced radiologists. Method: 436 patients with 436 cytologically indeterminate TNs on fine-needle aspiration cytology (FNAC) were included and all were confirmed by surgical pathology. They were retrospectively analyzed with respect to ultrasound (US) characteristics using a commercially available CAD system (AmCAD-UT; AmCad BioMed, Taiwan, China) and reviewed by one junior and one senior radiologist. The CAD system and the different experienced radiologists stratified the risk of malignancy using the ACR TI-RADS category. The diagnostic performances of the different experienced radiologists independently and after consulting the CAD (different experienced radiologists + CAD) and of the CAD alone were compared. Results: The different experienced radiologists showed significantly higher specificities than the CAD system alone. The combination of radiologist and CAD system showed improved diagnostic performance with an AUC (area under the curve) of 0.740 in the senior radiologist and 0.677 in the junior radiologist, as compared with CAD (AUC: 0.585) alone (all P < 0.05). The combination of senior radiologist and CAD system had the highest diagnostic performance (AUC: 0.740) and specificity (68.9%) compared to the others (all P < 0.05). Conclusion: The CAD system may play a potential role as a decision-making assistant alongside radiologists for differential diagnosis of TNs with indeterminate cytology.
The recommendations in this consensus statement, which are based on analysis of the current literature and common practice strategies, are thought to represent a reasonable approach to thyroid nodular disease. abstract_id: PUBMED:16304103 Management of thyroid nodules detected at US: Society of Radiologists in Ultrasound consensus conference statement. The Society of Radiologists in Ultrasound convened a panel of specialists from a variety of medical disciplines to come to a consensus on the management of thyroid nodules identified with thyroid ultrasonography (US), with particular focus on which nodules should be subjected to US-guided fine needle aspiration and which thyroid nodules need not be subjected to fine-needle aspiration. The panel met in Washington, DC, October 26-27, 2004, and created this consensus statement. The recommendations in this consensus statement, which are based on analysis of the current literature and common practice strategies, are thought to represent a reasonable approach to thyroid nodular disease. abstract_id: PUBMED:34855708 Interobserver Variability of Ultrasound Features Based on American College of Radiology Thyroid Imaging Reporting and Data System Lexicon in American College of Radiology Thyroid Imaging Reporting and Data System System: A Single-Center Study With Radiologists and Radiology Residents. Abstract: The aim of this study is to evaluate the variability of selecting the ultrasound features used in American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS) and in assigning the ACR-TIRADS level in a single center among radiologists and radiology residents. The study cohort consisted of 108 thyroid nodules in 102 patients who had definite cytology results after thyroid fine needle aspiration biopsy (Bethesda category II, VI) or surgery. Seven observers including 3 radiologists and 4 radiology residents evaluated the nodules according to 5 ultrasound feature categories. The evaluation process was performed after a joint meeting session, in which the "white papers" of the ACR-TIRADS committee were discussed regarding the thyroid ultrasound reporting lexicon, and final TIRADS system. Variability of ultrasound features and assigning ACR-TIRADS level was measured using Fleiss kappa statistics. Agreement for ultrasound features was "substantial" to "almost perfect" among all observers, with composition (κ = 0.86), macrocalcification (κ = 0.89) and peripheral calcification (κ = 0.92) at the highest level of agreement. The level of agreement for large comet tail artifacts and punctate echogenic foci was "moderate" in residents, whereas in radiologists, that level was "substantial." The agreement for assigning ACR-TIRADS level was moderate in resident as well as in radiologist subgroup. Agreement of thyroid ultrasound features was "substantial to almost perfect" among all observers. Although the level of agreement among resident group decreased to "moderate" level, ACR-TIRADS is a useful system in thyroid nodule management. abstract_id: PUBMED:24475865 Thyroid cancers incidentally detected at imaging in a 10-year period: how many cancers would be missed with use of the recommendations from the Society of Radiologists in Ultrasound? Purpose: To estimate the prevalence of incidental thyroid cancer (ITC) among patients undergoing thyroid surgery and to apply the Society of Radiologists in Ultrasound (SRU) guidelines to ITC. 
Materials And Methods: This HIPAA-compliant study was approved by the institutional review board, with waiver of the need to obtain informed consent. A retrospective review of data in patients who underwent thyroid surgery between January 1, 2003, and December 31, 2012, was performed. Imaging studies and reports were reviewed for ITCs that were first detected at either ultrasonography (US) or a different imaging modality and that included US as part of the work-up. ITCs were categorized by using the SRU guidelines to determine the characteristics of SRU criteria-positive and SRU criteria-negative malignancies. Patient demographic data, tumor histologic findings, tumor size, and tumor stage were compared for the SRU criteria-positive and SRU criteria-negative cancers by using the unpaired t test and the χ(2) test. Results: Among 2090 patients who underwent thyroid surgery, 680 had thyroid cancer; of these patients, 101 (15%) had imaging-detected ITC. The SRU recommendations were applied to the findings in 90 of the 101 patients who had undergone US with images or had reports available for review. Sixteen (18%) of the 90 patients had SRU criteria-negative tumors, which represented 2% (16 of 680) of all thyroid cancers. SRU criteria-negative tumors were smaller than SRU criteria-positive tumors (mean, 1.1 cm [range, 0.9-1.4 cm] vs mean, 2.5 cm [range, 1.0-7.6 cm]; P < .001) and were more likely to be stage I (15 [94%] of 16 vs 47 [64%] of 74; P = .02). Conclusion: Imaging-detected ITCs are uncommon. Two percent (16 of 680) of malignancies would not undergo fine-needle aspiration biopsy or surgery if the SRU guidelines were used for work-up of incidental thyroid nodules. SRU criteria-negative tumors are lower in stage than SRU criteria-positive tumors. Answer: Yes, the diagnostic workup for incidental thyroid nodules (ITN) detected at imaging can be reduced by using the Society of Radiologists in Ultrasound (SRU) recommendations and the three-tiered system. A study that included ITN in 390 patients found that if the SRU recommendations had been applied, 30% of patients with ITN on ultrasound images would not have received fine-needle aspiration (FNA), and one case of cancer would have been missed. Similarly, if the three-tiered system had been applied, 35% of patients with ITN on CT, MR, or PET/CT images would not have received FNA, but none of the three malignancies would have been missed. Overall, 33% of patients with ITN did not meet the SRU recommendations or the three-tiered criteria, suggesting that the use of these guidelines can reduce the workup of ITN by one third compared with current practice without specific guidelines (PUBMED:24370125). This finding is supported by a review of the current evidence, which concluded that the ideal approach to selecting ITN for workup would not be to diagnose all cancers, but to diagnose cancers that have reached clinical significance while avoiding unnecessary tests and surgery in patients with benign nodules. The three-tiered system and the SRU recommendations are supported by existing studies and focus on reducing unnecessary biopsy (PUBMED:25310642). Moreover, a retrospective study of 400 consecutive ultrasound-guided thyroid FNA encounters found that 24% of the biopsy encounters did not meet SRU recommendations for biopsy.
The application of SRU recommendations reduces the number of benign nodules that undergo workup, and potentially missed malignancies in SRU-negative nodules are less aggressive by histologic type and stage compared with SRU-positive malignancies (PUBMED:24555597). Lastly, a study that aimed to estimate the proportion of incidental cancers that would be missed relative to all thyroid cancers if the three-tiered system were applied to ITN seen on CT, MRI, or PET/CT found that incidental malignancies missed by the three-tiered system represent 1.2% of all thyroid malignancies. The missed tumors were all small papillary cancers, suggesting that few cancers would be missed and most are less aggressive, supporting the adoption of the three-tiered system in clinical practice to guide the workup of ITN identified on CT, MRI, and PET/CT (PUBMED:25203387).
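For reference, the workup-reduction figures quoted in this answer are simple proportions of the counts reported in PUBMED:24370125 (criteria-negative nodules over all incidental nodules, and missed cancers over all cancers in each imaging group). A small sketch of that arithmetic, using the counts from that abstract, is shown below; the helper function itself is only illustrative.

def summarize(criteria_negative: int, total_patients: int, cancers_missed: int, total_cancers: int) -> str:
    # Share of patients whose workup would have been avoided, and share of cancers missed.
    avoided = criteria_negative / total_patients
    missed = cancers_missed / total_cancers
    return f"workup avoided in {avoided:.0%} of patients, {missed:.0%} of cancers missed"

print("SRU (ultrasound-detected ITN):    " + summarize(14, 47, 1, 4))
print("Three-tiered (CT/MRI/PET-CT ITN): " + summarize(21, 60, 0, 3))
print("Both criteria combined:           " + summarize(35, 107, 1, 7))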
Instruction: Do muscle mass, muscle density, strength, and physical function similarly influence risk of hospitalization in older adults? Abstracts: abstract_id: PUBMED:19682143 Do muscle mass, muscle density, strength, and physical function similarly influence risk of hospitalization in older adults? Objectives: To examine the association between strength, function, lean mass, muscle density, and risk of hospitalization. Design: Prospective cohort study. Setting: Two U.S. clinical centers. Participants: Adults aged 70 to 80 (N=3,011) from the Health, Aging and Body Composition Study. Measurements: Measurements were of grip strength, knee extension strength, lean mass, walking speed, and chair stand pace. Thigh computed tomography scans assessed muscle area and density (a proxy for muscle fat infiltration). Hospitalizations were confirmed by local review of medical records. Negative binomial regression models estimated incident rate ratios (IRRs) of hospitalization for race- and sex-specific quartiles of each muscle and function parameter separately. Multivariate models adjusted for age, body mass index, health status, and coexisting medical conditions. Results: During an average 4.7 years of follow-up, 1,678 (55.7%) participants experienced one or more hospitalizations. Participants in the lowest quartile of muscle density were more likely to be subsequently hospitalized (multivariate IRR=1.47, 95% confidence interval (CI)=1.24-1.73) than those in the highest quartile. Similarly, participants with the weakest grip strength were at greater risk of hospitalization (multivariate IRR=1.52, 95% CI=1.30-1.78, Q1 vs. Q4). Comparable results were seen for knee strength, walking pace, and chair stands pace. Lean mass and muscle area were not associated with risk of hospitalization. Conclusion: Weak strength, poor function, and low muscle density, but not muscle size or lean mass, were associated with greater risk of hospitalization. Interventions to reduce the disease burden associated with sarcopenia should focus on increasing muscle strength and improving physical function rather than simply increasing lean mass. abstract_id: PUBMED:38033420 Relationship between muscle mass and muscle strength with physical performance in older adults: A systematic review. Background: Declining muscle mass is not always accompanied by declining muscle strength in older adults, challenging the notion that low muscle mass is the sole criterion for diagnosing sarcopenia. Objective: This review aims to examine the relationships of muscle mass and muscle strength with physical performance in older adults. Design: This article was a systematic review using Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines. Data Sources And Methods: We conducted a systematic search of observational studies published between 2013 and August 2023 in PubMed, ScienceDirect, Sage journal, Tripdatabase, Cochrane Library, Embase, and CINAHL. Two reviewers selected and extracted data independently, and an association measure was recorded from included studies. Results: The review analyzed 17 observational studies conducted between 2013 and September 2023. The findings suggest that while declining muscle mass is often associated with sarcopenia, it may not always correspond to declining muscle strength in older individuals. The most common method used to measure muscle mass was bioelectrical impedance analysis, while handgrip strength was the predominant measure of muscle strength.
Tests such as timed up and go and gait speed were used to assess physical performance. Conclusions: Physical performance in older adults is significantly related to muscle strength, whereas the relationship between muscle mass and physical performance is either weak or negligible. Therefore, when evaluating physical performance in older individuals, focusing on muscle strength is more important than muscle mass alone. abstract_id: PUBMED:38161587 Correlates of physical activity levels, muscle strength, working memory, and cognitive function in older adults. Objective: To explore the relationship between physical activity level, muscle strength, working memory and cognitive function in older adults. Methods: A cross-sectional research design was employed to recruit 120 older adults aged 70 and above. Participants were asked to complete the International Physical Activity Questionnaire-Short Form and the Montreal Cognitive Assessment Scale. Data on variables such as grip strength and performance on the N-back task were collected. Data analysis involved the use of independent samples t-tests, χ2 tests, linear regression analysis, Pearson correlation analysis, and one-way analysis of variance (ANOVA). Results: The detection rate of cognitive dysfunction in older adults was 53.211%; 1-back correct rate had an explanatory power of 11.6% for the cognitive function scores of older adults (R2 = 0.116, p < 0.001); grip strength showed a significant positive correlation with 1-back correct rate (r = 0.417, p < 0.001) and was significantly negatively correlated with 0-back response time (r = -0.478) and 1-back response time (r = -0.441) (both p < 0.001); physical activity level was significantly positively correlated with grip strength (r = 0.559, p < 0.001), and the difference in grip strength among older adults with different physical activity levels was statistically significant (F = 19.685, p < 0.001). Conclusion: Physical activity level, muscle strength, working memory, and cognitive function are closely related in older adults, and the relational pathway of physical activity → muscle strength → working memory → cognitive function may serve as a useful addition to promote the field of cognitive research in older adults. To identify and prevent cognitive decline in older adults, physical activity questionnaires, grip strength tests, and 1-back task tests can be extended to nursing homes and communities. abstract_id: PUBMED:32163796 Relationship of low muscle mass and obesity with physical function in community dwelling older adults: Results from the Nagahama study. Aim: Relationships of low muscle mass and obesity with physical function were investigated in older adults. Methods: The participants were 1922 community dwelling persons aged 60 years or older (67.7 ± 5.2 years old, 643 men and 1279 women). One-leg standing time, knee extensor muscle strength, and maximum walking speed were assessed. Muscle mass was evaluated using the bioelectrical impedance analysis (BIA) method, and skeletal mass index (SMI) was determined. For the obesity index, waist circumference with a stronger association with visceral fat was used. Participants were classified into the following four groups based on standard values of waist circumference and SMI: non-obese low-SMI, obese low-SMI, non-obese normal-SMI, and obese normal-SMI.
Results: By two-way analysis of variance (obesity × SMI), the main effects of waist circumference and muscle mass were noted in the one-leg standing time. The maximum walking speed was higher in the non-obese than the obese group, and in the normal than the low SMI group. For muscle strength, a main effect was noted only for muscle mass in women. An interaction was noted in men and significant differences were detected between all combinations except between the non-obese low SMI and obese low SMI groups. Conclusion: Low muscle mass and obesity negatively influence balance and walking abilities. However, the influence of low muscle mass and obesity on muscle strength was different between the sexes. In men, the relationship between obesity and muscle strength would be different depending on whether muscle mass is retained. abstract_id: PUBMED:33428891 Longitudinal Changes in Muscle Mass, Muscle Strength, and Physical Performance in Acutely Hospitalized Older Adults. Objectives: Acute hospitalization may lead to a decrease in muscle measures, but few studies have reported on the changes after discharge. The aim of this study was to determine longitudinal changes in muscle mass, muscle strength, and physical performance in acutely hospitalized older adults from admission up to 3 months post-discharge. Design: A prospective observational cohort study was conducted. Setting And Participants: This study included 401 participants aged ≥70 years who were acutely hospitalized in 6 hospitals. All variables were assessed at hospital admission, discharge, and 1 and 3 months post-discharge. Methods: Muscle mass in kilograms was assessed by multifrequency Bio-electrical Impedance Analysis (MF-BIA) (Bodystat; Quadscan 4000) and muscle strength by handgrip strength (JAMAR). Chair stand and gait speed test were assessed as part of the Short Physical Performance Battery (SPPB). Norm values were based on the consensus statement of the European Working Group on Sarcopenia in Older People. Results: A total of 343 acutely hospitalized older adults were included in the analyses with a mean (SD) age of 79.3 (6.6) years; 49.3% were women. From admission up to 3 months post-discharge, muscle mass (-0.1 kg/m2; P = .03) decreased significantly and muscle strength (-0.5 kg; P = .08) decreased nonsignificantly. The chair stand (+0.7 points; P < .001) and gait speed test (+0.9 points; P < .001) improved significantly up to 3 months post-discharge. At 3 months post-discharge, 80%, 18%, and 43% of the older adults scored below the cutoff points for muscle mass, muscle strength, and physical performance, respectively. Conclusions And Implications: Physical performance improved during and after acute hospitalization, although muscle mass decreased, and muscle strength did not change. At 3 months post-discharge, muscle mass, muscle strength, and physical performance did not reach normative levels on a population level. Further research is needed to examine the role of exercise interventions for improving muscle measures and physical performance after hospitalization.
The purpose of this study was to determine the relationships between peripheral quantitative computed tomography (pQCT)-derived measures of body composition and measures of physical performance in older adults. Methods: Muscle density, muscle area, and fat area of 168 patients aged 65 years and older (76.3±6.5) were measured with pQCT at the distal forearm in addition to a clinical assessment consisting of medical history, physical examination and physical assessment including hand grip strength, gait speed and chair rise tests. Regression analyses assessed associations between patients' physical performance and pQCT-derived data. Results: Among the three pQCT parameters, muscle density in particular was significantly correlated with all of the three measures of physical performance even after adjusting for sex, age, BMI, vitamin D serum level and the level of physical activity. The same analysis for muscle area reached significance only for handgrip strength, but not for gait speed or chair rise time. Fat area was significantly correlated only with gait speed after adjusting for sex and age. The association of muscle density with physical performance held up in an additional subanalysis stratified by body mass index. Conclusion: Muscle density, a proxy for muscle fat infiltration, seems to be better than muscle area or fat area at assessing muscle quality and physical performance in older adults. This association seems to be independent of the body mass index. abstract_id: PUBMED:35805870 Efficacy of Exercise on Muscle Function and Physical Performance in Older Adults with Sarcopenia: An Updated Systematic Review and Meta-Analysis. This study aimed to analyze the efficacy of exercise interventions on muscle strength, muscle mass, and physical performance in older adults with sarcopenia. Randomized controlled studies assessing exercise effects on sarcopenia were searched in Web of Science, PubMed, Cochrane Library, ProQuest, EBSCOhost, Scopus, EMBASE, VIP, and CNKI up to 31 March 2022. Data were expressed as weighted/standardized mean difference (MD/SMD) with 95% confidence intervals (CI). I2 index was employed for heterogeneity. The initial search identified 5379 studies, and 23 studies involving 1252 participants met the inclusion criteria for further analysis. Results revealed that exercise interventions can significantly improve grip strength (MD = 2.38, 95%CI = 1.33-3.43), knee extension strength (SMD = 0.50, 95%CI = 0.36-0.64), muscle mass of lower extremities (MD = 0.28, 95%CI = 0.01-0.56), walking speed (SMD = 0.88, 95%CI = 0.49-1.27), and functional mobility (MD = -1.77, 95%CI = -2.11 to -1.42) among older adults with sarcopenia. No significant exercise effects were found on fat-free muscle mass, appendicular muscle mass, skeletal muscle mass, and muscle mass of the upper extremities. The results of subgroup analysis indicated that both resistance training and multicomponent exercise could significantly increase muscle strength, while aerobic exercise did not. The findings suggest that exercise intervention can effectively improve muscle function and physical performance in older adults with sarcopenia, but has limited effects on the muscle mass of the upper extremities. In addition, it is highly recommended to apply group-based and supervised resistance training and multicomponent exercise in the prevention and treatment of sarcopenia among the older population.
abstract_id: PUBMED:28817825 Muscle Strength and Muscle Mass in Older Patients during Hospitalization: The EMPOWER Study. Background: Low muscle strength and muscle mass are associated with an increased length of hospital stay and higher mortality rate in inpatients. To what extent hospitalization affects muscle strength and muscle mass is unclear. Objective: We aimed to assess muscle strength and muscle mass at admission and during hospitalization in older patients and their relation with being at risk of geriatric conditions. Methods: The EMPOWER study included patients aged 70 years and older, admitted to 4 wards of the VU University Medical Center in the Netherlands between April and December 2015. At admission, patients were screened for being at risk of 4 geriatric conditions: delirium, falls, malnutrition, and functional disability. At admission and at discharge, muscle strength and muscle mass were assessed. Results: A total of 373 patients (mean age, standard deviation [SD]: 79.6, 6.38 years) were included at admission, and 224 patients (mean age, SD: 80.1, 6.32 years) at discharge. At admission, lower muscle strength in both female and male patients and low muscle mass in male patients were associated with being at risk of a higher cumulative number of geriatric conditions. Muscle strength increased during hospitalization, but no change in muscle mass was observed. Changes in muscle measures were not associated with being at risk of geriatric conditions. Discussion: Older patients with lower muscle strength and muscle mass at admission were at risk of a higher cumulative number of geriatric conditions. However, being at risk of geriatric conditions did not forecast further decrease in muscle strength and muscle mass during hospitalization. abstract_id: PUBMED:26017097 Muscle strength: A better index of low physical performance than muscle mass in older adults. Aim: The most appropriate muscle index for the definition of sarcopenia has not been agreed on. We aimed to investigate the associations of muscle mass and strength with 5-year mortality and low physical performance. Methods: We included 560 participants aged 65 years or older in the analysis. Muscle and fat mass were assessed by dual-energy X-ray absorptiometry. Leg muscle and grip strength were measured using dynamometers. The clinical outcomes were 5-year mortality and low physical performance (Short Physical Performance Battery score < 9) in 5 years. Associations between muscle indices and clinical outcomes were analyzed. A Cox proportional hazard model for mortality and a logistic regression model for physical performance were used. Results: Decreases in leg muscle and grip strength were significantly associated with 5-year mortality and low physical performance in both sexes. Total muscle mass in men and appendicular skeletal mass in both sexes were associated with mortality, but not with low physical performance. Lower leg muscle strength (OR 0.107; P = 0.020) was an independent predictor of low physical performance in women after adjusting for age, fat, cognition, and depression. Lower leg muscle (OR 0.123; P = 0.031) and lower grip strength (OR 0.950; P = 0.012) were independent predictors of low physical performance in men. Conclusions: Muscle strength is a better indicator of 5-year adverse clinical outcomes of mortality and low physical performance than muscle mass. Muscle strength was an independent predictor of low physical performance in 5 years. Geriatr Gerontol Int 2016; 16: 577-585.
abstract_id: PUBMED:36186132 Effect Of Exercise on Muscle Mass, Fat Mass, Bone Mass, Muscular Strength and Physical Performance in Community Dwelling Older Adults: Systematic Review and Meta-Analysis. The demographic transition makes it necessary to establish new recommendations about the components that are most affected by aging, such as: muscle mass, fat mass, bone mass, muscle strength, and physical performance. Exercise has been identified as a factor that improves those conditions. The aim of this review is to synthesize and analyze the results of exercise interventions on muscle mass, fat mass, bone mass, strength and physical performance in community dwelling older adults (OA). A systematic search was carried out in eleven databases, using validated terms such as "aged", "exercise" and others. For the meta-analysis, we differentiated by type of exercise and outcome. Twenty-nine randomized clinical trials were obtained for the review and 24 clinical trials for meta-analysis. This study identified an increase of 1.0 kg (95% Confidence Interval [CI] 0.3-1.7) in total muscle mass (TMM) and 0.4 kg (95%CI 0.0, 0.7) in appendicular muscle mass (AMM); a decrease of -3.7 kg (95% CI -5.8, -1.5) in total fat mass and -3.7% (95%CI -5.8, -1.5) in fat percentage after the resistance exercise intervention by 2-3 times per week. A -3.0% (95%CI -4.6, -1.3) decrease was observed in fat percentage after the aerobic exercise intervention. The quality of the evidence was ranked from high to very low; the most common risks of bias were performance bias and other bias. This study suggests that resistance exercise is the intervention that shows a positive effect on muscle mass, fat mass, and bone mass. More research is needed for other exercise interventions. Answer: Muscle mass, muscle density, strength, and physical function do not all similarly influence the risk of hospitalization in older adults. According to a prospective cohort study, weak strength, poor function, and low muscle density, but not muscle size or lean mass, were associated with a greater risk of hospitalization. Specifically, participants in the lowest quartile of muscle density and those with the weakest grip strength were at a higher risk of hospitalization. Comparable results were seen for knee strength, walking pace, and chair stands pace, indicating that interventions should focus on increasing muscle strength and improving physical function rather than simply increasing lean mass (PUBMED:19682143). This finding is supported by a systematic review which concluded that physical performance in older adults is significantly related to muscle strength, whereas the relationship between muscle mass and physical performance is either weak or negligible. Therefore, when evaluating physical performance in older individuals, focusing on muscle strength is more important than muscle mass alone (PUBMED:38033420). Moreover, another study found that physical activity level, muscle strength, working memory, and cognitive function are closely related in older adults. This suggests that interventions aimed at increasing physical activity and muscle strength may have a beneficial impact on cognitive function as well (PUBMED:38161587). In summary, while muscle mass may not be a strong predictor of hospitalization risk, muscle density and strength, along with physical function, are more closely associated with the risk of hospitalization in older adults.
These factors should be the focus of interventions aimed at reducing the disease burden associated with sarcopenia and preventing hospitalizations in this population.
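The quartile-based hazard ratios quoted above (e.g., HR 0.51 for the highest grip strength quartile) can be illustrated with a brief Python sketch; the input table and column names are hypothetical and the code is not taken from the cited studies.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("lls_like_cohort.csv")   # hypothetical per-participant table

# Split grip strength into quartiles and flag membership in the highest quartile.
df["grip_quartile"] = pd.qcut(df["grip_strength"], 4, labels=[1, 2, 3, 4])
df["highest_grip_quartile"] = (df["grip_quartile"] == 4).astype(int)

# Cox model comparing the highest quartile with the rest, adjusted for age.
cph = CoxPHFitter()
cph.fit(df[["highest_grip_quartile", "age", "followup_years", "died"]],
        duration_col="followup_years", event_col="died")
print(cph.hazard_ratios_["highest_grip_quartile"])   # e.g., ~0.5 if strength is protective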
Instruction: Do anxiety-disordered children need to come into the clinic for efficacious treatment? Abstracts: abstract_id: PUBMED:22545740 Do anxiety-disordered children need to come into the clinic for efficacious treatment? Objective: This study compared 3 experimental conditions: wait-list, therapist-supported bibliotherapy, and individual therapy, in the treatment of child anxiety. Method: Participants were 55 children (25 girls and 30 boys), aged 7 to 14 years diagnosed with an anxiety disorder, and their parents. Families were assigned using a modified random assignment process to 1 of the 3 conditions. The intervention evaluated in the 2 active treatment conditions was a family-focused, cognitive-behavioral program. Results: At posttreatment, participants in both treatment conditions had improved significantly on both diagnostic and questionnaire outcome measures compared with participants in the wait-list condition, with no differences demonstrated between the treatment conditions. Thus, at posttreatment, 0% of children in the wait-list condition were anxiety diagnosis free, compared with 95% in the therapist-supported bibliotherapy condition and 78.3% in the individual therapy condition. There was no significant difference between diagnostic status at posttreatment between the 2 treatment conditions. Participants assigned to a treatment condition were reassessed at 3-month and 6-month follow-up. Treatment gains were maintained in both conditions across the follow-up period. Conclusion: In light of the fact that more than 80% of anxiety-disordered children never receive treatment, these data suggest that therapist-supported bibliotherapy represents a cost-effective means of reaching a greater number of anxious children. abstract_id: PUBMED:23616708 Neurocognitive performance and behavior before and after treatment for sleep-disordered breathing in children. Neurocognitive and behavioral problems are increasingly reported in children with sleep-disordered breathing (SDB). The impact of treatment for SDB on neurocognition and behavior is, therefore, an issue of increasing importance. To date, there has been little consideration given to the quality of studies when reviewing associated neurocognitive and behavioral problems in children with SDB, and furthermore, there has been little systematic review of treatment outcomes. The aim of this review was to provide an up-to-date and critical review of the current literature. Findings indicate a specific pattern of neurocognitive problems in children with SDB; however, the pattern of behavioral problems is less clear. Very few studies were found to provide a rigorous investigation of posttreatment neurocognitive and behavior outcomes. Despite this, relatively consistent improvements in global intelligence, attention, and visual spatial ability are shown; however, persistent deficits in other domains are also evident. For behavior, problems of hyperactivity, aggression or conduct problems, and somatic complaints improve following treatment. In contrast, symptoms of anxiety and social problems less consistently improve. These findings should aid in the development of more targeted investigations and well-designed studies exploring both the causative mechanisms and the treatment response for neurocognitive and behavior problems in children with SDB. abstract_id: PUBMED:36109824 Increased behavioral problems in children with sleep-disordered breathing. 
Background: Healthy sleep is essential for the cognitive, behavioral and emotional development of children. Therefore, this study aimed to assess the behavioral consequences of sleep disturbances by examining children with sleep-disordered breathing compared with control participants. Methods: Seventy-eight children with SDB (average age: 6.7 years (SD = 1.83); 61 had OSA and 17 had primary snoring) and 156 control subjects (average age: 6.57 years (SD = 1.46)) participated in the study. We matched the groups in age (t(232) = 0.578, p = 0.564) and gender (χ²(1) = 2.192, p = 0.139). In the SDB group, the average Apnea-Hypopnea Index was 3.44 events/h (SD = 4.00), and the average desaturation level was 87.37% (SD = 6.91). Parent-report rating scales were used to measure the children's daytime behavior, including the Attention Deficit Hyperactivity Disorder Rating Scale, the Strengths and Difficulties Questionnaire, and the Child Behavior Checklist. Results: Our results showed that children with SDB exhibited a higher level of inattentiveness and hyperactive behavior. Furthermore, the SDB group demonstrated more internalizing (anxiety, depression, somatic complaints, social problems) (p < 0.001) and externalizing (aggressive and rule-breaking behavior) problems compared with children without SDB, irrespective of severity. Conclusions: Based on our findings, we suggest that snoring and mild OSA carry a risk of developing behavioral and emotional dysfunction comparable to that of moderate-to-severe OSA. Therefore, clinical research and practice need to focus more on the accurate assessment and treatment of sleep disturbances in childhood, particularly primary snoring and mild obstructive sleep apnea. abstract_id: PUBMED:33593387 Disordered eating among adolescents with chronic pain: the experience of a pediatric rheumatology subspecialty pain clinic. Background: Disordered eating and chronic pain often co-occur in adolescents, but the relationship between these conditions is not well understood. We aimed to determine the prevalence of and to identify the clinical characteristics associated with the presence of disordered eating among adolescents with chronic musculoskeletal pain (CMP) presenting to a pediatric rheumatology subspecialty pain clinic. Methods: This was a retrospective cohort study of pediatric patients presenting to a pediatric rheumatology subspecialty pain clinic for an initial consultation from March 2018 to March 2019. We complemented data from an existing patient registry with secondary chart review for patients identified with disordered eating. We compared patient characteristics based on the presence or absence of disordered eating among adolescents with CMP. Logistic regression modeling was used to determine factors associated with disordered eating. Results: Of the 228 patients who were seen for an initial consultation in the pain clinic in 1 year, 51 (22.4%) had disordered eating. Only eight (15.7%) of the 51 patients identified with disordered eating had a previously documented formal eating disorder diagnosis. Through multivariate logistic regression modeling, we found that disordered eating was associated with older age, higher functional disability, presence of abdominal pain, presence of gastrointestinal comorbidities, and presence of anxiety (all p < 0.05).
Conclusions: Adolescents with chronic pain, especially those who experience gastrointestinal issues, anxiety, and greater functional disability, should be evaluated for disordered eating by the treating clinician in order to ensure timely and appropriate treatment. abstract_id: PUBMED:15721572 Disclosure of distress among anxiety-disordered youth: differences in treatment outcome. The present study evaluated treatment outcome differences in anxiety-disordered youth who differed in their disclosure of internal distress as measured in a structured diagnostic interview. One hundred and seventy-one clinic-referred, anxiety-disordered children served as participants. Participants' primary diagnoses were one of three anxiety disorders: separation anxiety, generalized anxiety/overanxious, or social phobia/avoidance. At a pretreatment assessment, children and their parents were interviewed separately using the Anxiety Disorders Interview Schedule (ADIS) to determine the child's diagnosis. The child's status as a discloser of high distress or discloser of low distress was determined by the parents' endorsement of an anxiety disorder and the child's endorsement or lack of endorsement of an anxiety disorder, respectively. Parents, teachers, and children also completed measures assessing the child's psychopathology (e.g., Revised Children's Manifest Anxiety Scale, Child Behavior Checklist). In general, findings indicated that the level of distress reported by the children moderated treatment outcome. Although both groups benefited from treatment, the children disclosing high distress experienced greater treatment gains than the children disclosing low distress. abstract_id: PUBMED:28344934 EFFICACY AND SAFETY OF ORAL KETAMINE PREMEDICATION IN CHILDREN UNDERGOING DAY CASE SURGERY. Background: Oral premedication for paediatric age group is an uncommon practice amongst anaesthetists in Nigeria. Both parents and the child suffer some form of emotional or psychological distress. Aim: To determine the efficacy and safety of oral formulated ketamine for premedication in children scheduled for ambulatory surgeries. Methods: Seventy three children aged 1 - 6 years with American Society of Anesthesiologists (ASA) physical status I-II were prospectively studied. They were assigned randomly to receive either 5 mg/kg (Group A), 10 mg/kg (Group B), or no ketamine (Group C).The children were observed for acceptance of premedication, sedation and anxiolysis at 10, 20 and 30 minutes after drug administration. Behavior/response of each child at the time of separation from parents, intravenous access, and acceptance of facemask for induction, postanaesthetic arousal state and complications were also recorded. Results: There were 73 children in this study with a mean age of 37.4±18.0 months. The groups were comparable in age. The studied agent was tolerated by both groups that received premedication with no significant difference (P 0.73). Adequate sedation and anxiolysis were observed in groups A and B, (52%, 84%) and (68%, 88%) respectively. However, more children in group B (82.6%) had satisfactory behaviour at separation from parents and a better acceptance of anaesthetic face mask (64%) at induction than those in groups A and C (33.3%, 21.7%, respectively). No side effect was recorded in either of the premedication groups or the control group. Conclusion: Oral ketamine is acceptable and safe premedication for children. 
It provided good sedation, relieved anxiety and had no side effect in the children at the studied doses. abstract_id: PUBMED:26206412 Oral health-related quality-of-life among children in Swedish dental care: The impact from malocclusions or orthodontic treatment need. Objectives: To describe oral health-related quality-of-life (OHRQoL) and the impact of malocclusions or orthodontic treatment need in a cohort of children in Swedish dental care, using the Swedish version of the Child Perceptions Questionnaire-Impact Short Form (CPQ11-14-ISF:16). Subjects And Methods: Two hundred and fifty-seven children (mean age = 11.5 years, SD = 0.8, range = 9.8-13.5 years) completed the CPQ11-14-ISF:16 in conjunction with a clinical examination. In addition to malocclusions and orthodontic treatment need (based on the Index of Orthodontic Treatment Need-Dental Health Component), possible confounders (caries, enamel defects, dental trauma, headache and socio-economic markers) were recorded. Children also rated their own dental fear on the Children's Fear Survey Schedule-Dental Sub-scale (CFSS-DS). Results: The mean total CPQ11-14-ISF:16 score was 9.31. The logistic regression analyses revealed an impact of orthodontic treatment need on OHRQoL (CPQ), but no clear association between higher severity and higher impact on OHRQoL was seen. Dental fear and headache appeared to discriminate for poorer OHRQoL. No impact from caries, enamel defects, dental trauma, or socio-economic markers was revealed. Conclusions: This cohort of children reported good self-perceived OHRQoL. Effects on OHRQoL from malocclusions or orthodontic treatment need were limited and inconsistent. Dental fear and headache were found to be more distinct impact factors on OHRQoL than were malocclusions or orthodontic treatment need. abstract_id: PUBMED:34874788 Impact of Tonsillectomy on the Life Quality of Parents to Children With Obstructive Sleep-Disordered Breathing. Objective: Parents of children with any chronic illness may experience increased anxiety and reduced health-related quality of life (QoL). Our objective was to evaluate the change in parental QoL before vs after tonsillectomy. Our hypothesis was that pediatric tonsil surgery with or without adenoidectomy would improve parental QoL. Study Design: A prospective cohort study. Setting: An otolaryngology department in a tertiary academic center. Methods: We enrolled parents of 79 children &lt;5 years old. Adenotonsillectomy due to an obstructive airway indication was performed in 45 children. A group of 34 parents to healthy children served as a control group. Initially, we validated a modified version of the PAR-ENT-QoL questionnaire; then, we analyzed QoL parameters among parents of children with obstructive sleep-disordered breathing before and after surgical treatment. Results: We found significant differences between the QoL score before and after surgery (P = .003). The QoL score after surgery significantly improved and was even lower than that of the control group (P &lt; .001). Conclusions: These results highlight the importance of timely diagnosis and treatment of children with obstructive sleep-disordered breathing, as this condition may affect not only the children themselves but also their caregivers. abstract_id: PUBMED:28641122 Working with parents to treat anxiety-disordered children: A proof of concept RCT evaluating Fear-less Triple P. 
Little is known about the efficacy of parent-only interventions and the maintenance of gains over time with anxiety-disordered children and adolescents. The current study aimed to evaluate the efficacy of a 6-session parent-focused intervention (Fear-less Triple P) in reducing children's anxiety symptomatology. The parents of 61 anxiety-disordered children (7-14 years) were randomly assigned to either the 6-session parent-only group CBT intervention or a wait-list control (WL) group. Diagnostic and questionnaire measures were administered at post-treatment; as well as 3-, 6- and 12 months following the completion of treatment. Families in the WL group were re-assessed after 6 weeks (the duration of the active intervention) and were then offered the intervention. The parent-only intervention produced superior outcomes for children on diagnostic and questionnaire measures. The percentages of children free of any anxiety diagnosis following the intervention were 38.7% (post-treatment); 58.6% (3-mth); 69.2% (6-mth); and 84% (12-mth). At the post-treatment assessment point, 3.4% of children in the WL group were free of any anxiety diagnosis. Mother and child questionnaire measures demonstrated gains from pre to post-treatment that were maintained over time. This proof of concept study suggests that the brief, parent-only intervention evaluated is an efficacious treatment approach for child anxiety disorders. A parent-only, group CBT intervention such as the one described here offers a cost-effective, low intensity alternative to traditional child-focused interventions. abstract_id: PUBMED:33201740 Psychological characteristics of parents in a Paediatric Outpatient during the SARS-CoV-2 Epidemic. With the outbreak and spread of the novel coronavirus (SARS-CoV-2), there has been a significant increase in the number of children infected, and some severe infection cases and neonatal cases have been reported. The parents or other family members who come to our paediatric clinic inevitably experience panic, tension and anxiety. The generation of these emotions has seriously affected the normal order of outpatient treatment and has led to many children not receiving an accurate diagnosis or proper treatment. This situation is not conducive to the control of the epidemic or the children's physical and mental health. Through summarizing parents' behaviours and emotional characteristics during the epidemic period, we hope to develop relevant coping and nursing strategies to ensure better control of the epidemic and to protect the physical and mental health of children. Answer: Based on the provided abstracts, it appears that anxiety-disordered children do not necessarily need to come into the clinic for efficacious treatment. The study in abstract PUBMED:22545740 compared therapist-supported bibliotherapy with individual therapy and found that both treatment conditions led to significant improvements in child anxiety compared to a wait-list condition. Notably, 95% of children in the therapist-supported bibliotherapy condition were anxiety diagnosis free at posttreatment, which was comparable to the 78.3% in the individual therapy condition. These results suggest that therapist-supported bibliotherapy, which does not require children to come into the clinic, can be an effective and cost-efficient means of treating child anxiety. 
Furthermore, the abstract PUBMED:36109824 highlights the importance of addressing sleep disturbances in children, which can lead to behavioral and emotional dysfunctions, suggesting that treatments targeting sleep issues could also be beneficial for anxiety-disordered children without necessarily requiring in-clinic visits. Additionally, the abstract PUBMED:33593387 indicates that adolescents with chronic pain, particularly those with gastrointestinal issues and anxiety, should be evaluated for disordered eating, which implies that treatments may also need to address co-occurring conditions that can be managed outside of a clinic setting. Lastly, the abstract PUBMED:28641122 presents a proof of concept RCT evaluating a parent-focused intervention (Fear-less Triple P) for anxiety-disordered children, which demonstrated efficacy in reducing children's anxiety symptomatology. This parent-only intervention did not require children to come into the clinic and still resulted in a significant percentage of children being free of any anxiety diagnosis over time. In summary, the evidence from these abstracts suggests that there are efficacious treatment options for anxiety-disordered children that do not require in-clinic visits, such as therapist-supported bibliotherapy and parent-focused interventions.
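The posttreatment response rates summarised above (0% diagnosis-free on the wait-list versus 95% with therapist-supported bibliotherapy and 78.3% with individual therapy) are the kind of proportions typically compared with a chi-square or Fisher exact test. The sketch below uses illustrative counts because the exact per-group denominators are not restated here; it is not the analysis reported in the trial.

from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 table: children free of an anxiety diagnosis vs. still diagnosed,
# bibliotherapy vs. individual therapy. Counts are hypothetical, chosen only to
# approximate the percentages quoted above (95% vs. 78.3%).
table = [[19, 1],    # therapist-supported bibliotherapy
         [18, 5]]    # individual therapy
chi2, p, dof, expected = chi2_contingency(table)
odds_ratio, p_exact = fisher_exact(table)
print(f"chi-square p = {p:.3f}, Fisher exact p = {p_exact:.3f}")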
Instruction: Are UK otorhinolaryngologists maintaining their research output? Abstracts: abstract_id: PUBMED:23631921 Are UK otorhinolaryngologists maintaining their research output? Background: In the general surgical and anaesthetic literature, there has been a decline in research output originating from the UK. This study analysed the 10 globally leading and 2 UK leading otorhinolaryngology journals to determine whether this trend was also reflected within otorhinolaryngology. Methods: Citable research output was analysed from 4 individual years, over a 10-year period (2000-2010), to determine absolute output, geographical mix and article type. Results: The proportion of research output from the UK and Ireland grew 22.8 per cent among the leading global otorhinolaryngology journals, but fell 28.6 per cent among the leading two UK otorhinolaryngology journals. The converse trend was true for the USA and Canada. Output from European and the rest of the world grew among both sets of journals, while Japanese output fell. 'Research' articles remained the most prevalent type. Conclusion: These results are encouraging as they refute the fall in UK research output observed by other authors. In the face of growing challenges, it is important to maintain published output so that the fate that has befallen other specialties is not mirrored within UK otorhinolaryngology. abstract_id: PUBMED:32777931 Are UK and Ireland trauma and orthopaedic surgeons maintaining their research output? Introduction: Healthcare faces growing challenges. With reports of diminishing research output from the UK and Ireland in the leading surgical journals, this study aimed to ascertain whether this trend had been echoed in the trauma and orthopaedic literature. Materials And Methods: Citable research output from the 10 globally leading trauma and orthopaedic journals was analysed from five individual years, over a 20-year period, to ascertain trends in absolute output, geographical mix, and level of evidence. Results: The overall number of published articles fell by 14.5%. North America saw the greatest decline (-8.0%), followed by Japan (-5.6%) and Europe (-3.3%). The UK and Ireland (+2.9%) and the rest of the world (+13.9%) saw rising output. A decline in lower (levels IV and V) and a rise in higher (levels I, II and III) quality evidence was observed. The UK and Ireland had a greater proportion of higher-quality studies than North America and Japan, but lower than Europe and the rest of the world. The impact factor of the leading journal rose from 4.47 to 7.01. Discussion: The research landscape has evolved, with leading journals placing greater emphasis on higher-quality evidence. UK and Irish research output remains stable, contributing 14% of the most highly cited publications in 2018, and challenging North America's dominance with a greater proportion of level I and II evidence in the leading journals. Conclusion: With the growing emergence of Europe and the rest of the world, UK and Irish authors must build upon the region's output despite political challenges such as Brexit. Increasing international collaboration will continue to play an important role. abstract_id: PUBMED:37206764 Knowledge, Beliefs, and Practices Towards Cochlear Implantations Among Otorhinolaryngologists in India. The outcome of the cochlear implant is dependent highly on the knowledge, belief and practice of cochlear implant in otolaryngologists who are among the important team members. 
The study explored the knowledge, beliefs, and practices towards cochlear implantations among otorhinolaryngologists in India. An online cross-sectional survey study was carried out using convenient sampling among otorhinolaryngologists in India. Phase-I involved developing and validating of a questionnaire to study the knowledge, beliefs, and practices towards cochlear implants among otorhinolaryngologists in India while phase II involved administration of the questionnaire and analysis. Data collection was conducted using Google Forms. A total of 106 otorhinolaryngologists participated across 24-65 years of age and with experience ranging from 1 to 42 years. The participating otorhinolaryngologists reported having good knowledge about the candidacy for a cochlear implant but having limited knowledge of the recent developments and governmental schemes. The otorhinolaryngologists displayed positive beliefs regarding cochlear implantation. Most recommended a battery of tests to determine the candidacy and gave a lot of importance to rehabilitation (96.2%) and surgery for implantation (83%). The respondents also practiced giving importance to a team approach involving multiple team members. High costs and financial burden emerged to be the major challenges for cochlear implantation in India. The findings of the survey indicate an overall positive belief and practices towards cochlear implantation by otorhinolaryngologists in India. However, there is a need to spread more awareness among them about the recent advances and schemes that would further improve their service delivery. abstract_id: PUBMED:34392970 UK paediatric surgical academic output (2005-2020): A cause for concern? Background: The publication record can be regarded as a key metric of the academic output of a craft surgical speciality with an almost exponential increase in the number of such publications worldwide over the past 20 years (Ashfaq et al. J Surg Res 2018;229:10-11). We aimed to examine and explore if this was the experience within UK paediatric surgery centres. Methods: The academic search engine Scopus™ (Elsevier) was used to track every paediatric surgeon's (NHS or University) publication history between Jan. 2005 - Dec. 2020. This was validated by an algorithmic search of PubMed™. The h-index (citations/publication), considered a validated metric of career academic output, was also calculated for each individual surgeon. A Field-Weighted Citation Index (Scopus™) (FWCI) was used to assess impact of individual publications. Textbooks, book chapters, abstracts, duplications ("double dipping") and output attributed to UK BAPS-CASS national studies were excluded. Some output(s), not considered as relevant to "paediatric surgery", was edited. Data are quoted as median(range). Results: During this 16-year period, there were 3838 publications identified from 26 centres with a "top ten" listing of those paediatric surgical units contributing over half the output (n = 2189, 57%). To look for evidence of trend(s) we analysed the output from these surgical centres in two 5-year periods (2005-9 and 2015-19) and showed an overall fall in output(s) - [730 (53.4%) to 645 (46.4%)] with 6/10 (60%) ' top ten ' centres here recording a reduction in publications. The median h-index of the 232 contributing paediatric surgical consultants was 12 (range 1-56). The best performing publication from the "top ten" centres had 96.5(51-442) citations with the FWCI being 4.5 (2.2 - 30.2). 
Conclusions: This study highlights current paediatric surgery publication output metrics in UK centres. There is evidence of a relative reduction in outputs overall which is a cause for concern for the future, although individual publications from the 10 most active units in the UK remain highly cited. These findings may serve purpose in several ways: (i) UK paediatric surgical centre rankings may be helpful for guiding residency / trainee application; (ii) surgical research funding for the top performing units may be better facilitated and finally (iii) UK centres showing a ' fertile ground ' for nurturing and training paediatric surgeons with academic aspirations could be useful for future workforce planning. abstract_id: PUBMED:33489852 An Experience of Otorhinolaryngologists as Frontline Worker with Novel Coronavirus: A Qualitative Analysis. During this COVID pandemic healthcare worker systems were overwhelmed. Doctors especially otorhinolaryngologists in addition to doing their specialty duty were also recruited to provide care to COVID-19 patients. To our knowledge, no studies about their experiences regarding COVID-19 have been published. Present study aimed to describe the experiences of Otorhinolaryngologist during this pandemic. This is a qualitative study using an empirical phenomenological approach. 30 ENT doctors were recruited from government and private sector. They participated in semi-structured, in-depth interviews by telephone in a period of one month. Interviews were recorded by consent and data was analyzed. After analysis four themes were revealed namely; 'nature of duty during Covid-19 pandemic', 'modification made to adjust the duty', 'conflict between professional duty and family responsibilities', and 'fear of pandemic ill effect on their health'. The extreme work and fatigue drained ours ENT doctor's physically as well as emotionally. But our doctors showed their resilience and the spirit of professional dedication to overcome difficulties. Comprehensive support should be provided to safeguard the wellbeing of health-care providers mainly doctors. In the meantime timely as well as intensive training for all healthcare worker is highly recommended to promote preparedness and improve efficacy during the pandemic. abstract_id: PUBMED:34505794 Impact of clinical research coordinators on research output in orthopedic surgery. Aim: To describe the efficacy of clinical research coordinators (CRCs) in terms of research output among faculty, residents and medical students. Materials &amp; methods: We analyzed and reviewed departmental research output from the 3 years prior to the hire of 3 CRCs compared with the 3 years following. Results: From 2014 to 2016, the department output was a total of 27 peer-reviewed publications, 16 oral presentations and 33 poster presentations. From 2017-2019 there was a 52% increase in publications (n = 41), 131% increase in oral presentations (n = 37) and a 61% increase in poster presentations (n = 53). Conclusion: The implementation of CRCs has markedly increased tangible research output and trainee involvement. This role may serve as a valuable model for other academic departments. abstract_id: PUBMED:24705885 Postdoctoral researchers in the UK: a snapshot at factors affecting their research output. Postdoctoral training is a typical step in the course of an academic career, but very little is known about postdoctoral researchers (PDRs) working in the UK. 
This study used an online survey to explore, for the first time, relevant environmental factors which may be linked to the research output of PDRs in terms of the number of peer-reviewed articles per year of PDR employment. The findings showed reliable links between the research output and research institutions, time spent as PDR, and parental education, whereas no clear links were observed between PDRs' output and research area, nationality, gender, number of siblings, or work environment. PDRs based in universities tended to publish, on average, more than the ones based in research centres. PDRs with children tended to stay longer in postdoctoral employment than PDRs without children. Moreover, research output tended to be higher in PDRs with fathers educated at secondary or higher level. The work environment did not affect output directly, but about 1/5 of PDRs were not satisfied with their job or institutional support and about 2/3 of them perceived their job prospects as "difficult". The results from this exploratory study raise important questions, which need to be addressed in large-scale studies in order to understand (and monitor) how PDRs' family and work environment interact with their research output-an essential step given the crucial role of PDRs in research and development in the country. abstract_id: PUBMED:21836716 Mutual learning and research messages: India, UK, and Europe. India and UK have had a long history together, since the times of the British Raj. Most of what Indian psychiatry is today, finds its roots in ancient Indian texts and medicine systems as much as it is influenced by the European system. Psychiatric research in India is growing. It is being influenced by research in the UK and Europe and is influencing them at the same time. In addition to the sharing of ideas and the know-how, there has also been a good amount of sharing of mental health professionals and research samples in the form of immigrants from India to the UK. The Indian mental health professionals based in UK have done a good amount of research with a focus on these Indian immigrants, giving an insight into cross-cultural aspects of some major psychiatric disorders. This article discusses the impact that research in these countries has had on each other and the contributions that have resulted from it. abstract_id: PUBMED:12625882 Current practices of cocaine administration by UK otorhinolaryngologists. The aim of this study was to determine which anaesthetic and vasoconstrictor preparations UK Otorhinolaryngologists use for rhinological surgery, with particular reference to cocaine and adrenaline. The incidence and types of adverse reactions to cocaine were also recorded. A postal survey of all BAO-HNS consultant members was performed. Of the 360 consultant surgeons included in the survey, the majority still use peri-operative cocaine on a regular basis, 66 per cent use cocaine and adrenaline together and more than 40 per cent use cocaine in paediatric patients. Sixteen per cent of respondents did not use cocaine. Only 11 per cent of surgeons had experienced cocaine toxicity in their patients, with only one recorded case of mortality. Most surgeons in the UK use cocaine because of the superior operative field it provides and because they consider it to be safe even with adrenaline. The actual incidence of adverse reactions to cocaine is low, with serious complications being less common than the risks from general anaesthesia. 
Cocaine remains a valuable agent in the armamentarium of the rhinologist. abstract_id: PUBMED:22470241 A scientometric analysis of Indian research output in medicine during 1999-2008. Objective: This study analyzes the research activities of India in medicine during 1999-2008, based on the total publication output, its growth rate, quality of papers published and rank of India in the global context. Patterns of international collaborative research output and the major partner countries of India are also discussed. This study also evaluates the research performance of different types of Indian medical colleges, hospitals, research institutes, universities and research foundations and the characteristics of published literature in Indian and foreign journals. It also analyzes the medical research output by disease and organs. Materials And Methods: The publication data on medicine has been retrieved by using SCOPUS database. Results: India holds 12th rank among the productive countries in medicine research consisting of 65,745 papers with a global publication share of 1.59% and registering a growth rate of 76.68% for the papers published during 1999-2003 to 2004-2008. Conclusion: High quality research in India is grossly inadequate and requires strategic planning, investment and resource support. There is also a need to improve the existing medical education system, which should foster research culture. Answer: UK otorhinolaryngologists have shown an encouraging trend in maintaining their research output. According to a study analyzing the 10 globally leading and 2 UK leading otorhinolaryngology journals, the proportion of research output from the UK and Ireland grew by 22.8 percent among the leading global otorhinolaryngology journals over a 10-year period (2000-2010). However, there was a decline of 28.6 percent among the leading two UK otorhinolaryngology journals. This suggests that while the global presence of UK otorhinolaryngology research has increased, there has been a decrease in output within the leading UK-specific journals (PUBMED:23631921). The study also noted that the output from European and the rest of the world grew among both sets of journals, while Japanese output fell. Despite the mixed trends, the overall results refute the fall in UK research output observed in other specialties, indicating that UK otorhinolaryngology is maintaining its research output in the face of growing challenges. It is important for the specialty to continue this trend to avoid the decline seen in other areas (PUBMED:23631921).
Instruction: Association of glucose measures with total and coronary heart disease mortality: does the effect change with time? Abstracts: abstract_id: PUBMED:19671481 Association of glucose measures with total and coronary heart disease mortality: does the effect change with time? The Rancho Bernardo Study. Aims: To compare the associations of three glucose measures with coronary heart disease (CHD) and total mortality and to examine how these associations changed over time. Methods: Prospective study of 1774 adults (median age 68 years, 56% female). Fasting plasma glucose (FPG), 2h post-challenge glucose (2hPG), and glycohemoglobin (GHb) were obtained in 1984-1987. Mortality data was obtained for all participants. Multivariable Cox models examined the association of baseline glucose measures with mortality during sequential periods of follow-up (0-6, 7-12, and 13-18 years), adjusting for age, sex, blood pressure, LDL-cholesterol, smoking, exercise, and aspirin use. Results: 854 (48%) participants died during follow-up. In adjusted models, only GHb was associated with total mortality over the entire 18 years (p=0.007). In analyses of mortality in successive six year time intervals, the association of GHb and total mortality was only significant in years 0-6. For each 1% increase in GHb, the hazard ratio for death in years 0-6 was 1.14 (95% CI 1.01-1.30, p=0.04) and the hazard ratio for CHD death was 1.26 (95% CI 1.03-1.55, p=0.02). Stratification by sex and exclusion of participants with diabetes did not change our results. Conclusions: Higher levels of GHb were associated with increased total and CHD mortality within the first six years independent of cardiac risk factors. Though further research is needed, this supports the hypothesis that early glycemic control may affect mortality outcomes. abstract_id: PUBMED:8686692 Increase in fasting insulin and glucose over seven years with increasing weight and inactivity of young adults. The CARDIA Study. Coronary Artery Risk Development in Young Adults. To characterize 7-year changes in fasting serum insulin and glucose concentrations, the authors analyzed population-based data on 3,095 nondiabetic black and white men and women who were initially aged 18-30 years in the Coronary Artery Risk Development in Young Adults (CARDIA) Study. Measurements were made of fasting insulin (by an assay with little cross-reactivity to proinsulin) and fasting glucose on frozen samples from baseline and Year 7 examinations. Over the 7-year period, mean fasting insulin increased 10-25%, mean fasting glucose increased 7-10%, and mean body mass increased 7-12% across the four race-, sex-groups. The strongest predictor of both insulin increase and glucose increase was an increase in body mass over the 7 years. Adjusted for age and examination time period in race-, sex-specific repeated measures analyses, fasting insulin increased longitudinally by approximately 5 microU/mL per 5 kg/m2 increase in body mass index (p &lt; 0.05). Adjusted for age and time period, fasting insulin increased over the 7 years by approximately 2.5 microU/mL per 0.08 unit increase in waist/hip ratio (p &lt; 0.05), although this association was much stronger cross-sectionally. In a similar model, each 100 unit decrease in physical activity longitudinally predicted a 0.1-0.2 microU/mL increase in fasting insulin (p &lt; 0.05 in black men only); this association was stronger and statistically significant in all race-, sex-groups cross-sectionally. 
Fasting insulin was not associated with energy intake either cross-sectionally or longitudinally, but age- and time-adjusted associations of insulin change with change in several nutrients (e.g., energy from fat) were statistically significant in whites. The authors conclude that marked increased in weight in young adulthood adversely alters glucose and insulin metabolism, and that, if not reversed, this may lead to harmful health consequences in later life. abstract_id: PUBMED:24657509 Changes in lipid measures and incident coronary heart disease: Tehran Lipid &amp; Glucose Study. Background: Data on the impact of changes in lipid measures on subsequent coronary heart disease (CHD) outcomes are not consistent. Methods: Study was conducted in 4459 adults, aged ≥30 years, free of cardiovascular disease at baseline who attended two consecutive examinations first in 1999-2001 and second in 2001-2003, and were followed up until March 31, 2010. Multivariate Cox proportional hazard regression adjusted for baseline lipid measures and other risk factors was calculated for a 1 standard deviation (SD) change in total cholesterol (TC), log-transformed triglyceride (TG), high density lipoprotein cholesterol (HDL-C), low density lipoprotein cholesterol (LDL-C) (calculated using modified Friedewald formula), non-HDL-C, TC/HDL-C and log-transformed TG/HDL-C. Effect of change in dyslipidemia (TC ≥6.21 mmol/L or TG ≥2.26 mmol/L or HDL-C &lt;1.03 mmol/L or non-HDL-C ≥4.91 mmol/L) on incident CHD was examined, considering those with no dyslipidemia at baseline and follow-up as the reference group. Results: During a mean follow-up of 9.5 years, 303 cases of CHD occurred. A 1-SD increase in TC, TG, non-HDL-C, TC/HDL-C and TG/HDL-C was associated with 14, 20, 19, 16 and 14% increase in risk of CHD event, respectively (all p values &lt;0.05); the corresponding risk for LDL-C was [1.12 (0.99-1.27), P=0.07]. Participants with maintained dyslipidemia during follow-up had a significant risk for incident CHD [HR: 1.67(1.21-2.49)] compared to those with no dyslipidemia at baseline or follow-up. Conclusion: Changes in TC, TG, and non-HDL-C, TC/HDL-C, TG/HDL-C were independent predictors of CHD events. Furthermore, maintained dyslipidemia was a strong predictor for CHD events. abstract_id: PUBMED:32228577 Change in glucose intolerance status and risk of incident cardiovascular disease: Tehran Lipid and Glucose Study. Background: To assess the impact of changes in different glucose tolerance states on risk of incident cardiovascular disease (CVD)/coronary heart disease (CHD). Methods: A total of 4094 Iranians (43.9% men) aged ≥ 30 years, without diabetes and CVD at enrolment were included. The following categories were defined both at baseline visit and 3 years later (second visit): normal fasting glucose (NFG), normal glucose tolerance (NGT), NFG and NGT (NFG/NGT), impaired fasting glucose (IFG), impaired glucose tolerance (IGT) and IFG and/or IGT (IFG/IGT). Changes in the categories, i.e. regression to normoglycemia, remaining in previous status and progression to diabetes were assessed. We used Cox's proportional hazard models adjusted for traditional risk factors and their changes, to estimate the hazard ratio (HR) with 95% confidence interval (CI) of different changing categories for incident CVD/CHD. Results: During a median follow-up of 12.42 years, 428 subjects (men = 265) experienced CVD. 
Considering persistent NFG/NGT as reference, participants who shifted from NFG/NGT to IFG/IGT showed a lower hazard of CVD in the fully adjusted model, HR 0.72 [95% CI 0.52-0.996, P = 0.048]. Moreover, subjects who shifted from IFG, IGT and IFG/IGT to diabetes had an increased risk of CVD/CHD. The risk however, was only statistically significant for those with IFG/IGT, 1.61 [(1.03-2.51), P = 0.04] for CVD and 1.75 [(1.10-2.78), P = 0.02] for CHD; considering IFG/IGT at both visits as reference. Furthermore, those who regressed from IFG/IGT to normoglycemia were at the same risk as those remained in IFG/IGT state, 1.12 [(0.79-1.60), P = 0.52] for CVD and 1.04 [(0.70-1.53), P = 0.85] for CHD. Among a subgroup of population with insulin data (n = 803) those with insulin resistance (IR) that converted to diabetes showed a higher risk for CVD, 3.68 [(1.49-9.06), P = 0.01] and CHD, 2.76 [(1.00-7.60), P = 0.05] events in the fully adjusted model. Conclusions: Among participants with IFG, IGT or IFG/IGT at baseline, only those who developed diabetes had a higher risk of developing CVD/CHD. Persistent IFG/IGT was not associated with higher risk, compared with those reverted to normoglycemia. Moreover, subjects who converted from NFG/NGT to incident IFG/IGT showed a signal for lower risk of CVD/CHD. abstract_id: PUBMED:12145776 Do coronary heart disease risk factors change over time? The stability over a 12-year period of several coronary heart disease (CHD) risk factors was evaluated in 348 individuals who had remained healthy following baseline measurements made of the same variables in 1981. CHD risk factors evaluated were fasting and post-glucose challenge (120-minute) plasma glucose and insulin concentrations, plasma triglyceride (TG), low-density lipoprotein (LDL) and high-density lipoprotein (HDL) concentrations, and the ratio of LDL/HDL cholesterol concentrations. Approximately 40% to 60% of individuals in the highest CHD risk quartile (or lowest in the case of HDL cholesterol concentrations) in 1981 were still at highest risk in 1993. A similar proportion of individuals at lowest risk in 1981 were still in that category in 1993. At least 50% of the participants in this prospective analysis experienced a change by 1 quartile or more in each of the metabolic CHD risk factors measured, and these differences were highly statistically significant for all variables measured with the exception of the TG and HDL cholesterol concentrations. These results demonstrate that the implicit assumption in epidemiological studies that CHD risk factors at baseline remain stable may require examination. abstract_id: PUBMED:34988972 The impact of weight change and measures of physical functioning on mortality. Introduction: Lower grip strength and measures of physical functioning are associated with all-cause mortality. Relationships among long-term weight loss, physical functioning, and mortality in older women are understudied. Methods: Participants were 5039 women who were part of the Long Life Study (LLS) ancillary study to the Woman's Health Initiative (WHI). Average age was 78.76 ± 6.92. We defined long-term weight loss or gain as a decrease or increase of 5% or more of baseline body weight. Our primary outcome was all-cause mortality and our secondary outcomes were vascular death, and coronary heart disease (CHD). The mean follow-up time was 5.4 years. Cox regression modeling was performed for each outcome of interest. 
Variables of interest were weight change, grip strength, and functional status as measured by the Short Physical Performance Battery (SPPB), controlling for multiple potential confounders. Results: Weight loss of 5% or more of body weight was associated with a hazard ratio of 1.66 (1.37-2.01) for all-cause mortality. Weight gain was not related to mortality or cardiovascular outcomes. Those in the highest grip strength quartile had a hazard ratio of 0.51 (0.39-0.66) for all-cause mortality. For the SPPB the hazard ratio was 0.29 (0.21-0.40), adjusting for changes in weight, race, smoking, history of CHD, and diabetes. Higher grip strength and SPPB were associated with lower risks for vascular death and CHD, independently of weight change. Conclusions: Weight loss was associated with increased mortality. Stronger grip strength and higher SPPB scores were associated with lower mortality risk independent of weight change. abstract_id: PUBMED:19167774 Impact of glucose metabolism and birth size on cognitive performance in elderly subjects. Aims: We aimed to investigate the impact of diabetes and impaired glucose tolerance on cognitive performance and to explore the association between birth weight and cognitive performance among diabetic subjects. Methods: We performed a standard oral glucose tolerance test and a computerised test for assessment of cognitive performance (CogState) in 1243 subjects; 173 of them had type 2 diabetes. At the time of cognitive testing the mean age of the subjects was 64 years. Subjects with type 1 diabetes or a history of stroke were excluded. Results: Subjects with known diabetes performed significantly poorer in cognitive tasks measuring visual attention, working memory and episodic learning than subjects with normal glucose tolerance. Subjects with newly diagnosed diabetes or milder impairments in glucose regulation did not differ from the normoglycaemic group. A low birth weight enhanced the association between diabetes and poor performance in the working memory and episodic learning tasks. Conclusions: Poorer cognitive performance was associated with known type 2 diabetes but not with newly diagnosed diabetes or milder impairments in glucose regulation. Low birth weight was found to be an additional vulnerability factor enhancing cognitive decline in diabetic subjects. abstract_id: PUBMED:1425271 Carbohydrate and lipid metabolism following heart bypass operations. The effect of the intravenous hypocaloric administration of glucose versus glucose-xylitol (1:1). The effect of glucose-xylitol infusion on carbohydrate and lipid metabolism was investigated in 18 metabolically normal men (mean age 56.1 [35-65] years) with coronary heart disease after they had undergone a coronary artery bypass operation. During the first postoperative hours, group I (n = 6) received glucose only (2 mg/kg.min), group II (n = 6) glucose+xylitol (1 mg/kg.min each), and group III a glucose-containing electrolyte solution (0.83 mg/kg.min glucose). Blood glucose and insulin concentrations during the infusion period were significantly (P < 0.05) lower in groups II and III than in group I (glucose after 6 h: group I 21.5 [15.3-26.8] mmol/l; group II 14.2 [11.2-18.1] mmol/l; group III 12.6 [6.8-16.0] mmol/l). The highest lactate concentrations were reached in group I, 6 hours after the operation. Concentrations of palmitic and stearic acid, as well as of oleic and linoleic acid, were significantly lower 12 hours postoperatively in group I than in groups II and III (P < 0.05).
These data indicate that energy-ineffective high glucose concentrations were avoided and endogenous lactate production reduced by the postoperative infusion of glucose+xylitol. In addition, it achieved a higher supply of free fatty acids as an energy source to the myocardium without reaching toxic concentrations in the postischaemic myocardium. abstract_id: PUBMED:37454147 High triglyceride-glucose index and stress hyperglycemia ratio as predictors of adverse cardiac events in patients with coronary chronic total occlusion: a large-scale prospective cohort study. Background: The triglyceride-glucose (TyG) index and the stress hyperglycaemia ratio (SHR) are both positively associated with cardiovascular (CV) risk in patients with coronary heart disease. However, the prognostic value of these two biomarkers has not been well elucidated in patients with chronic total occlusion (CTO). Therefore, this study aims to evaluate the association of the TyG index and the SHR with long-term prognosis in patients with CTO. Methods: This prospective cohort study consecutively included 2740 angina patients with CTO from January 2017 to December 2018 at Fuwai Hospital. The outcomes are a composite of CV death and target vessel myocardial infarction (TVMI) and major CV cerebrovascular adverse events (MACCEs, including all-cause death, nonfatal MI, ischaemia-driven target vessel revascularization, and stroke). The association between biomarkers and prognosis was analysed by multivariable Cox proportional hazard models, and the predictive value was determined by a receiver-operating characteristic (ROC) curve. Results: During the follow-up with a median time of 3 years, 179 (6.5%) cases of MACCEs and 47 (1.7%) cases of CV death or TVMI were recorded. Patients with a high TyG index (> 9.10) and a high SHR (> 0.87) showed a significantly increased risk of CV death/TVMI (TyG index: HR 4.23, 95% CI 1.58-11.37; SHR: HR 5.14, 95% CI 1.89-13.98) and MACCEs (TyG index: HR 2.47, 95% CI 1.54-3.97; SHR: HR 2.91, 95% CI 1.84-4.60) compared with those with a low TyG index and a low SHR (TyG < 8.56, SHR < 0.76). The area under the curve (AUC) values were 0.623 (TyG index) and 0.589 (SHR) for CV death/TVMI and 0.659 (TyG index) and 0.624 (SHR) for MACCEs. Furthermore, patients with both a high TyG index and a high SHR showed the highest risk of clinical outcomes among patients with different levels of these two biomarkers, and the AUC for the TyG-SHR combination was larger than that for the TyG index alone in predicting MACCE risk. Conclusions: The study revealed that a high TyG index and a high SHR were significantly correlated with poor prognosis in patients with CTO and suggested that these two biomarkers are reliable in predicting long-term prognosis in CTO patients. abstract_id: PUBMED:12810406 Association of food patterns, central obesity measures and metabolic risk factors for coronary heart disease (CHD) in middle aged Bengalee Hindu men, Calcutta, India. The association of central obesity measures and food patterns with metabolic risk factors for coronary heart disease (CHD) was studied among middle aged (≥30 years) Bengalee Hindu men of Calcutta, India. CHD risk factors included total cholesterol (TC), fasting triglyceride (FTG), fasting plasma glucose (FPG), high density lipoprotein cholesterol (HDL-c), low density lipoprotein cholesterol (LDL-c) and very low density lipoprotein cholesterol (VLDL-c). The total sample size in the study was 212 male individuals.
Anthropometric measurements, metabolic and food pattern variables were collected from each participant. The relative role of central obesity measures and food pattern variables in explaining metabolic risk factors of CHD were also made in this study. The results revealed that body mass index (BMI) had no significant relation with any of the metabolic risk factors of CHD. Whereas almost all-central obesity measures, namely waist circumference (WC), waist-hip ratio (WHR), and conicity index (CI) were significantly and positively related with TC, FTG, FPG and VLDL-c. Of the food pattern variables, only the frequency of egg, fried snacks and Bengalee sweets consumption were positively and significantly related with all central obesity measures. In contrast, frequency of chicken and fish consumption was negatively associated with central obesity measures. Conicity index (CI) was found to be the most consistent in explaining metabolic variables of CHD. Percent of variance explained by central obesity measures and food patterns were TC (10%), FPG (16%), FTG (6.6%) and VLDL-c (6.7%). Significant negative association of chicken and fish consumption with central obesity measures indicates the beneficial effect of both these items in this population. Answer: The association of glucose measures with total and coronary heart disease (CHD) mortality does appear to change over time. In the Rancho Bernardo Study, glycohemoglobin (GHb) was the only glucose measure associated with total mortality over an 18-year period, and this association was significant only in the first six years of follow-up. For each 1% increase in GHb, the hazard ratio for death in years 0-6 was 1.14 for total mortality and 1.26 for CHD death. These associations were independent of other cardiac risk factors, suggesting that early glycemic control may affect mortality outcomes (PUBMED:19671481). In contrast, the Tehran Lipid and Glucose Study found that changes in glucose intolerance status had an impact on the risk of incident cardiovascular disease (CVD)/CHD. Participants who shifted from normal fasting glucose/normal glucose tolerance (NFG/NGT) to impaired fasting glucose/impaired glucose tolerance (IFG/IGT) showed a lower hazard of CVD, while those who progressed from IFG/IGT to diabetes had an increased risk of CVD/CHD. However, those who regressed from IFG/IGT to normoglycemia had the same risk as those who remained in the IFG/IGT state (PUBMED:32228577). These findings indicate that the relationship between glucose measures and mortality due to total and CHD may be stronger in the earlier years following the measurement of glucose levels. Over time, the impact of glucose measures on mortality risk may diminish, but the progression to diabetes remains a significant risk factor for CVD/CHD. Therefore, maintaining normoglycemia and preventing the progression to diabetes are important for reducing the risk of CVD/CHD over time.
Instruction: Are apathy and depression independently associated with longitudinal trajectories of cortical atrophy in mild cognitive impairment? Abstracts: abstract_id: PUBMED:23636003 Are apathy and depression independently associated with longitudinal trajectories of cortical atrophy in mild cognitive impairment? Objectives: To examine whether depression and apathy are independently associated with longitudinal trajectories of cortical atrophy in the entorhinal cortex compared with frontal subregions previously implicated in late-life mood disturbance. Methods: Data from 334 participants classified as having mild cognitive impairment in the Alzheimer's Disease Neuroimaging Initiative (ADNI) were analyzed by using multilevel models for change adjusted for age, global cognitive status, and total intracranial volume at enrollment. Participants in ADNI were recruited from &gt;50 clinical research sites in the United States and Canada. Depression and apathy were identified by informants using the Neuropsychiatric Inventory Questionnaire. Serial magnetic resonance imaging was performed on 1.5-Tesla scanners according to the standardized ADNI-1 protocol on an average of 5 occasions over an average of 30.5 months. Regional cortical thickness values were derived from longitudinal data processing in FreeSurfer version 4.4. Results: Depression was associated with reduced cortical thickness in the entorhinal cortex at baseline and accelerated atrophy in the anterior cingulate cortex. Similar relationships between depression and the orbitofrontal cortex and between apathy and the anterior cingulate cortex were not significant. Conclusions: In mild cognitive impairment, depression signs are a better marker of longitudinal cortical atrophy than apathy. Results are consistent with hypotheses that depression is an early sign of a more aggressive neurodegenerative process or that depression lowers brain reserve capacity, allowing for more rapid progression of Alzheimer disease neuropathology. abstract_id: PUBMED:37381880 Regional gray matter volume correlates with anxiety, apathy, and resilience in geriatric depression. Objectives: Geriatric depression (GD) is associated with significant medical comorbidity, cognitive impairment, brain atrophy, premature mortality, and suboptimal treatment response. While apathy and anxiety are common comorbidities, resilience is a protective factor. Understanding the relationships between brain morphometry, depression, and resilience in GD could inform clinical treatment. Only few studies have addressed gray matter volume (GMV) associations with mood and resilience. Participants: Forty-nine adults aged &gt;60 years (38 women) with major depressive disorder undergoing concurrent antidepressant treatment participated in the study. Measurements: Anatomical T1-weighted scans, apathy, anxiety, and resilience data were collected. Freesurfer 6.0 was used to preprocess T1-weighted images and qdec to perform voxel-wise whole-brain analyses. Partial Spearman correlations controlling for age and sex tested the associations between clinical scores, and general linear models identified clusters of associations between GMV and clinical scores, with age and sex as covariates. Cluster correction and Monte-Carlo simulations were applied (corrected alpha = 0.05). Results: Greater depression severity was associated with greater anxiety (r = 0.53, p = 0.0001), lower resilience (r = -0.33, p = 0.03), and greater apathy (r = 0.39, p = 0.01). 
Greater GMV in widespread, partially overlapping clusters across the brain was associated with reduced anxiety and apathy, as well as increased resilience. Conclusion: Our results suggest that greater GMV in extended brain regions is a potential marker for resilience in GD, while GMV in more focal and overlapping regions may be markers for depression and anxiety. Interventions focused on improving symptoms in GD may seek to examine their effects on these brain regions. abstract_id: PUBMED:33316852 Apathy in presymptomatic genetic frontotemporal dementia predicts cognitive decline and is driven by structural brain changes. Introduction: Apathy adversely affects prognosis and survival of patients with frontotemporal dementia (FTD). We test whether apathy develops in presymptomatic genetic FTD, and is associated with cognitive decline and brain atrophy. Methods: Presymptomatic carriers of MAPT, GRN or C9orf72 mutations (N = 304), and relatives without mutations (N = 296) underwent clinical assessments and MRI at baseline, and annually for 2 years. Longitudinal changes in apathy, cognition, gray matter volumes, and their relationships were analyzed with latent growth curve modeling. Results: Apathy severity increased over time in presymptomatic carriers, but not in non-carriers. In presymptomatic carriers, baseline apathy predicted cognitive decline over two years, but not vice versa. Apathy progression was associated with baseline low gray matter volume in frontal and cingulate regions. Discussion: Apathy is an early marker of FTD-related changes and predicts a subsequent subclinical deterioration of cognition before dementia onset. Apathy may be a modifiable factor in those at risk of FTD. abstract_id: PUBMED:29142089 Motor, cognitive, and functional declines contribute to a single progressive factor in early HD. Objective: To identify an improved measure of clinical progression in early Huntington disease (HD) using data from prospective observational cohort studies and placebo group data from randomized double-blind clinical trials. Methods: We studied Unified Huntington Disease Rating Scale (UHDRS) and non-UHDRS clinical measures and brain measures of progressive atrophy in 1,668 individuals with early HD followed up prospectively for up to 30 to 36 months of longitudinal clinical follow-up. Results: The results demonstrated that a composite measure of motor, cognitive, and global functional decline best characterized clinical progression and was most strongly associated with brain measures of progressive corticostriatal atrophy. Conclusions: Use of a composite motor, cognitive, and global functional clinical outcome measure in HD provides an improved measure of clinical progression more related to measures of progressive brain atrophy and provides an opportunity for enhanced clinical trial efficiency relative to currently used individual motor, cognitive, and functional outcome measures. abstract_id: PUBMED:35764422 The association of depression and apathy with Alzheimer's disease biomarkers in a cross-cultural sample. Cross-cultural differences in the association between neuropsychiatric symptoms and Alzheimer's disease (AD) biomarkers are not well understood. 
This study aimed to (1) compare depressive symptoms and frequency of reported apathy across diagnostic groups of participants with normal cognition (CN), mild cognitive impairment (MCI), and dementia, as well as ethnic groups of Hispanic Americans (HA) and European Americans (EA); (2) evaluate the relationship between depression and apathy with Aβ deposition and brain atrophy. Statistical analyses included ANCOVAs, chi-squared, nonparametric tests, correlations, and logistic regressions. Higher scores on the Geriatric Depression Scale (GDS-15) were reported in the MCI and dementia cohorts, while older age corresponded with lower GDS-15 scores. The frequency of apathy differed across diagnoses within each ethnicity, but not when comparing ethnic groups. Reduced volume in the rostral anterior cingulate cortex (ACC) significantly correlated with and predicted apathy for the total sample after applying false discovery rate corrections (FDR), controlling for covariates. The EA group separately demonstrated a significant negative relationship between apathy and superior frontal volume, while for HA, there was a relationship between rostral ACC volume and apathy. Apathy corresponded with higher Aβ levels for the total sample and for the CN and HA groups. abstract_id: PUBMED:31735378 Apathy and anxiety are early markers of Alzheimer's disease. In this study, we investigated associations between neuropsychiatric symptoms (i.e., apathy, anxiety, and depression) and cerebral atrophy, white matter lesions (WML), beta-amyloid (Aβ) deposition, and cognitive decline in a nondemented sample. 104 cognitively unimpaired and 53 subjects with mild cognitive impairment were followed for up to 4 years within the Swedish BioFINDER study. Neuropsychiatric assessments included the Hospital Anxiety and Depression Scale and the Apathy Evaluation Scale. Magnetic resonance imaging and 18F-flutemetamol-positron emission tomography quantified brain atrophy, WML, and Aβ deposition. Mini-Mental State Examination assessed longitudinal global cognition. Regression analyses were used to test for associations. Apathy and anxiety were shown related to Aβ deposition and predicted cognitive decline. Anxiety also interacted with amyloid status to predict faster cognitive deterioration. Apathy was further related to frontotemporal and subcortical atrophy, as well as WML. To conclude, the associations between apathy and anxiety with Aβ deposition and cognitive decline point to these symptoms as early clinical manifestations of Alzheimer's disease. abstract_id: PUBMED:33393797 Functional reserve: The residual variance in instrumental activities of daily living not explained by brain structure, cognition, and demographics. Objective: Cognitive reserve is a concept that explains individual differences in resilience to brain pathology and susceptibility to poor late-life cognitive outcomes. We evaluate the analogous concept of "Functional Reserve," defined as the difference between observed functional abilities and those predicted by brain structure, cognitive performance, and demographics. This study aims to validate the construct of functional reserve by testing its utility in predicting clinical outcomes and exploring its predictors. Method: Longitudinal data collected annually for up to 7 years from 1,084 older adults (n_dementia = 163; n_MCI = 333; n_CN = 523) were analyzed.
Functional reserve was operationalized as the residual variance in the Lawton-Brody Instrumental Activities of Daily Living (IADL) Scale after accounting for demographics (sex/gender, race, ethnicity, education), neuropathology (gray matter, hippocampal, and white matter hyperintensity volumes), and cognition (executive function, verbal episodic memory, semantic memory, and spatial function). Structural equation models estimated (a) functional reserve's associations with 7-year changes in clinical diagnosis and disease severity and (b) predictors of functional reserve. Results: Functional reserve was lower in dementia versus cognitively normal individuals. Higher baseline functional reserve was associated with lower concurrent dementia severity and slower clinical progression and attenuated the association of cognition with concurrent dementia severity. Physical function and apathy were the strongest predictors of functional reserve. Conclusions: Results provide preliminary validation of functional reserve for explaining individual differences in susceptibility to IADL dysfunction independent of neuropathology, cognition, and demographics. Physical functioning and apathy are promising modifiable intervention targets to enhance functional reserve in the context of brain atrophy and cognitive decline. (PsycInfo Database Record (c) 2021 APA, all rights reserved). abstract_id: PUBMED:34924392 Cerebral Volumetric Correlates of Apathy in Alzheimer's Disease and Cognitively Normal Older Adults: Meta-Analysis, Label-Based Review, and Study of an Independent Cohort. Background: Affecting nearly half of the patients with Alzheimer's disease (AD), apathy is associated with higher morbidity and reduced quality of life. Basal ganglia and cortical atrophy have been implicated in apathy. However, the findings have varied across studies and left unclear whether subdomains of apathy may involve distinct neuroanatomical correlates. Objective: To identify neuroanatomical correlates of AD-associated apathy. Methods: We performed a meta-analysis and label-based review of the literature. Further, following published routines of voxel-based morphometry, we aimed to confirm the findings in an independent cohort of 19 patients with AD/mild cognitive impairment and 25 healthy controls assessed with the Apathy Evaluation Scale. Results: Meta-analysis of 167 AD and 56 healthy controls showed convergence toward smaller basal ganglia gray matter volume (GMV) in apathy. Label-based review showed anterior cingulate, putamen, insula, inferior frontal gyrus (IFG) and middle temporal gyrus (MTG) atrophy in AD apathy. In the independent cohort, with small-volume-correction, right putamen and MTG showed GMVs in negative correlation with Apathy Evaluation Scale total, behavioral, and emotional scores, and right IFG with emotional score (p < 0.05 family-wise error (FWE)-corrected), controlling for age, education, intracranial volume, and depression. With the Mini-Mental State Examination scores included as an additional covariate, the correlation of right putamen GMV with behavioral and emotional score, right MTG GMV with total and emotional score, and right IFG GMV with emotional score were significant. Conclusion: The findings implicate putamen, MTG and IFG atrophy in AD associated apathy, potentially independent of cognitive impairment and depression, and suggest potentially distinct volumetric correlates of apathy.
abstract_id: PUBMED:37719694 It is not depression: A case report of a 37-year-old firefighter with frontotemporal dementia. A 37-year-old married, right-handed active firefighter presented to the primary care clinic with a self-report of difficulties in memorising and recalling information associated with declining work performance. Collaborative history-taking revealed that he also experienced emotional and social withdrawal, apathy, irritability and distractibility. He was initially diagnosed with major depressive disorder but showed no improvement with an antidepressant. This young man with no history of hereditary neurodegenerative disorder had further deterioration in cognitive function, predominantly executive behaviours, with progressive aphasia. Brain magnetic resonance imaging revealed cerebral atrophy predominant over the frontotemporal lobe. Positron emission tomography showed fluorodeoxyglucose hypometabolism at the bifrontal and left parietal and temporal cortices, consistent with frontotemporal dementia. He required institutionalisation with full nursing care less than 24 months after the onset of the symptoms. This case highlights the need for diagnostic consideration of dementia in young individuals presenting with cognitive impairment among other symptoms. It also emphasises the importance of obtaining collateral information from close relatives to avoid misdiagnosis and addresses the psychosocial impact of dementia at a young age. abstract_id: PUBMED:24953008 Apathy as a feature of prodromal Alzheimer's disease: an FDG-PET ADNI study. Objective: The goal of this study is to evaluate brain metabolism in mild cognitive impairment (MCI) patients with and without apathy (as determined by the Neuropsychiatric Inventory Questionnaire). Methods: Baseline data from 65 MCI participants (11 with apathy and 54 without) from the Alzheimer's Disease (AD) Neuroimaging Initiative study were analyzed. All participants underwent a comprehensive cognitive and neuropsychiatric assessment, volumetric MRI and measures of cerebral glucose metabolism applying (18)F-fluorodeoxyglucose positron emission tomography at baseline. The presence of apathy at baseline was determined by the Neuropsychiatric Inventory Questionnaire. Results: There was no difference between apathy and apathy-free MCI patients regarding cognitive assessment and neuropsychiatric measures when apathy-specific items were removed. Cerebrovascular disease load and cerebral atrophy were equivalent in both groups. Compared with the apathy-free MCI patients, MCI patients with apathy had significantly decreased metabolism in the posterior cingulate cortex. Conclusion: The presence of apathy in MCI patients is associated with AD-specific pattern of brain metabolic defect. These results could suggest that apathy belongs to the spectrum of prodromal AD symptoms. Answer: Yes, apathy and depression are independently associated with longitudinal trajectories of cortical atrophy in mild cognitive impairment (MCI). Depression has been found to be associated with reduced cortical thickness in the entorhinal cortex at baseline and accelerated atrophy in the anterior cingulate cortex (PUBMED:23636003). Although similar relationships between depression and the orbitofrontal cortex and between apathy and the anterior cingulate cortex were not significant in this study, the findings suggest that depression signs are a better marker of longitudinal cortical atrophy than apathy in MCI. 
Additionally, other studies have shown that apathy is an early marker of neurodegenerative changes and predicts a subsequent subclinical deterioration of cognition before dementia onset, with apathy progression being associated with baseline low gray matter volume in frontal and cingulate regions (PUBMED:33316852). Furthermore, apathy and anxiety have been related to beta-amyloid (Aβ) deposition and predicted cognitive decline, with apathy also being related to frontotemporal and subcortical atrophy, as well as white matter lesions (PUBMED:31735378). In summary, both apathy and depression are associated with cortical atrophy in MCI, although they may affect different brain regions and may have distinct relationships with the progression of neurodegenerative changes.
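The functional reserve construct cited among the abstracts above (PUBMED:33393797) is defined as the residual of observed IADL scores after accounting for demographics, brain structure, and cognition. The sketch below illustrates that residualization idea with ordinary least squares standing in for the study's structural equation models; the file name and column names are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch of a residual-based "functional reserve" score: regress IADL on
# demographics, brain measures, and cognition, then take observed minus predicted.
# OLS is used here as a stand-in for the structural equation models in the study;
# the CSV and column names are hypothetical placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("cohort.csv")  # hypothetical dataset
predictors = ["age", "education_years", "gray_matter_vol",
              "hippocampal_vol", "wmh_vol", "exec_fn", "memory"]
X = np.column_stack([np.ones(len(df))] + [df[c].to_numpy() for c in predictors])
y = df["iadl_total"].to_numpy()

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficients
df["functional_reserve"] = y - X @ beta       # residual: observed - predicted IADL
# More positive residuals = better daily functioning than brain and cognition predict.
```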
Instruction: Situation of pertussis in Valencian Community: an epidemic revival? Abstracts: abstract_id: PUBMED:23332293 Situation of pertussis in Valencian Community: an epidemic revival? Introduction: Pertussis is a highly contagious vaccine-preventable disease. An increasing incidence has been reported in several countries around the world in the last few years. The aim of this study is to analyze the situation of pertussis in the Valencian Community (Spain), with the purpose of verifying the increased incidence of the illness. Methods: A descriptive analysis of probable and confirmed cases detected during 2011 was conducted. Methods used for Bordetella pertussis detection from clinical samples were: culture isolation, polymerase chain reaction (PCR)-based detection of bacterial DNA, and/or detection of specific IgM antibodies. The disease incidence and other epidemiological variables were estimated. These results were compared to data collected in previous years (2008-2010). The Epidemiological Surveillance Analysis and Microbiological Surveillance Network integrated systems of the regional Department of Health were used as sources of information. Results: In 2011, 249 cases of pertussis were detected (incidence rate of 4.89 per 100,000 inhabitants). This rate is statistically significantly higher than those reported in 2008 (0.73 per 100,000 inhabitants), 2009 (0.53 per 100,000 inhabitants), and 2010 (0.53 per 100,000 inhabitants). The highest incidence rate was observed in cases younger than one year old (252.97 per 100,000 inhabitants), with marked differences compared to the rest of age groups. More than two-thirds (69%) of reported cases were confirmed by a laboratory test. Detection of specific serum IgM antibodies was positive in 10% of cases, culture isolation was positive in 17%, and PCR-based detection of bacterial DNA in 35% of cases. Conclusions: The results of this study show a clear increase of pertussis incidence in the Valencian Community during 2011. abstract_id: PUBMED:24508373 Situation of pertussis in the community of Valencia: an epidemic revival? N/A abstract_id: PUBMED:24636961 Reply to "Situation of pertussis in the community of Valencia: an epidemic revival?" N/A abstract_id: PUBMED:28598324 A case-control study to assess the effectiveness of pertussis vaccination during pregnancy on newborns, Valencian community, Spain, 1 March 2015 to 29 February 2016. In the Valencian Community (Spain), the programme of maternal pertussis vaccination during pregnancy started in January 2015. The objective of this study was to estimate in this region the vaccine effectiveness (VE) in protecting newborns against laboratory-confirmed pertussis infection. A matched case-control study was undertaken in the period between 1 March 2015 and 29 February 2016. Twenty-two cases and 66 controls (± 15 days of age difference) were included in the study. Cases were non-vaccinated infants < 3 months of age at disease onset testing positive for pertussis by real-time PCR. For every case three unvaccinated controls were selected. Odds ratios (OR) were calculated by multiple conditional logistic regression for association between maternal vaccination and infant pertussis. Other children in the household, as well as mother- and environmental covariates were taken into account. The VE was calculated as 1 - OR. Mothers of five cases (23%) and of 41 controls (62%) were vaccinated during pregnancy. The adjusted VE was 90.9% (95% confidence interval (CI): 56.6 to 98.1).
The only covariate in the final model was breastfeeding (protective effect). Our study provides evidence in favour of pertussis vaccination programmes for pregnant women in order to prevent whooping cough in infants aged less than 3 months. abstract_id: PUBMED:14094762 EPIDEMIOLOGICAL SITUATION IN FRANCE IN 1961 N/A abstract_id: PUBMED:26016353 The epidemiological situation in the Republic of Abkhazia in 2013-2014 and Rospotrebnadzor participation in activities for its stabilization Official statistics of Republican SES on infectious diseases are used. The characteristics of the current epidemiological situation in the Republic of Abkhazia are given. The analysis of infectious diseases from 2012 to 2014 is presented. It was found that the most widespread infectious diseases are acute respiratory viral and acute intestinal infection. During the analyzed period in the country outbreaks of dysentery, whooping cough and measles were reported. An epidemic threat is the presence of active natural foci of infectious diseases and permanently disadvantaged anthrax points on the territory of Abkhazia. Activities carried out by the Ministry of Health of the Republic of Abkhazia with the help and active participation of Rospotrebnadzor contributed to the stabilization of the epidemiological situation. abstract_id: PUBMED:11285747 The status of vaccine preventable diseases in Germany The European members of the World Health Assembly (WHO) adopted the goal of eliminating poliomyelitis by the year 2000 (certification 2003), tetanus of the newborn by 2005 and measles by 2007 (certification 2010). Regarding the reduction by 2010 diphtheria, hepatitis B, pertussis and rubella syndrome are in the foreground of discussions. As a WHO member, Germany pursues these aims with growing acceptance, too. The current situation of the specific target illnesses is differentiated in the eastern and western part of Germany. While the final stage of certification for a polio-free region in the whole of Germany has been reached and there have only been single illnesses of diphtheria and tetanus for years, there are reported more than 5,000 illnesses of hepatitis B every year with an estimated number of unknown cases of at least 15,000 clinically manifested illnesses and a high age specific incidence rate for people who are 20-40 years old. The incidence rate in the eastern federal states is lower than in the western federal states owing to a smaller portion of endangered risk groups in the population. The trend on the whole is declining. Useful epidemiological data of measles and pertussis are currently only available in the eastern federal states of reunited Germany. To control measles a national intervention programme "measles, mumps, rubella" was started. The aim of this programme is to reduce the illnesses of measles in Germany from currently 50 to at first 5 illnesses per 100,000 inhabitants. In the eastern German federal states the situation is still better than in the western federal states. However, a permanent lower incidence rate of less than 1/100,000 inhabitants has not been reached after 1990. Pertussis is an example for the consequences of different vaccination strategies in East and West. The estimated illness rates in the western federal states are at 80-100/100,000 inhabitants. In the eastern federal states a continuous increase of incidence rates of more than 5 per 100,000 inhabitants has been noticed since 1991.
For a permanent reduction of incidence rates of infectious diseases which are preventable by vaccination, provable high vaccination coverage and an effective epidemiological control are necessary. abstract_id: PUBMED:19053212 Pertussis in Latin America: current situation and future vaccination challenges. Bordetella pertussis remains in fifth place in the list of deaths attributed to vaccine-preventable diseases in children under 5 years of age around the world. It causes significant morbidity and mortality in both developing and developed nations. Although the epidemiology of the disease has been studied in detail in many developed countries, the current situation and burden of pertussis in Central and South America and the Caribbean is unknown, and there are no review articles describing data from Latin America. This review summarizes and highlights the most important recent published literature concerning pertussis in Latin America and addresses critical issues, such as disease awareness, epidemiological surveillance and future vaccination challenges, among others. abstract_id: PUBMED:31403888 Current situation, causes, and countermeasures to NIP vaccine shortages in Guangzhou, China. From 2017 to 2018, Guangzhou experienced a shortage in 3-6 types of National Immunization Program (NIP) vaccines. To evaluate the current situation and causes of the NIP vaccine shortage, we analysed the duration, intensity and causes of the shortage from data in the vaccine information system of the Guangzhou Center for Disease Control and Prevention; we also proposed countermeasures to resolve the shortage. In 2017, there were shortages of three types of NIP vaccines in Guangzhou. The most affected vaccines were inactivated poliovirus (IPV) vaccine and meningococcal group AC (MenAC) vaccine, accounting for 39.5% and 16.5% of the reported shortages, respectively. In 2018, the shortage extended to six vaccine types; the most affected were diphtheria, tetanus and pertussis (DTP) vaccine, MenAC vaccine, and Bacille Calmette Guerin (BCG) vaccine. The four main causes for the shortage were: 1) an insufficient production capacity; 2) a delay in batch issuance; 3) vaccine-related events; and 4) an extended bidding procurement cycle. Four solutions are proposed: 1) expand production output; 2) exempt creditworthy enterprises from batch inspections; 3) establish alternative enterprises and emergency use authorizations; and 4) establish public health funds and stockpile storage systems. abstract_id: PUBMED:19522244 Epidemiological situation of infectious diseases in Lvov and Cracow during and after World War I (until the year 1922) In Lvov and Cracow at the times of World War I, the infectious diseases such as tuberculosis followed by typhus fever, typhoid fever, dysentery, as well as scarlet fever, diphtheria, measles, whooping cough, smallpox, cholera and venereal diseases (particularly syphilis) posed one of the most significant and dangerous problems for inhabitants. Their increased prevalence was the result of deteriorating sanitary and living conditions of the city population. The spread of epidemic infectious diseases was enhanced by marching troops, migration of civilians and war prisoners, return of large groups of displaced people and demobilized soldiers after regaining independence in November 1918. Additionally, unfavorable epidemiological situation in Lvov deteriorated at the time of the war with Ukrainians (November 1918-April 1919) and Bolsheviks (July-August 1920). 
The control of infectious diseases was in the hands of regional local physicians who referred patients to hospitals, isolated homes, bath and disinfection institutions, and conducted vaccinations against smallpox. A decrease in infectious diseases prevalence and deaths to the prewar levels occurred in 1922. Answer: The situation of pertussis in the Valencian Community has shown a significant increase in incidence, suggesting a potential epidemic revival. In 2011, there were 249 cases of pertussis detected, which corresponds to an incidence rate of 4.89 per 100,000 inhabitants. This rate is statistically significantly higher than the rates reported in the previous three years: 2008 (0.73 per 100,000 inhabitants), 2009 (0.53 per 100,000 inhabitants), and 2010 (0.53 per 100,000 inhabitants). The highest incidence rate was observed in cases younger than one year old, with 252.97 per 100,000 inhabitants, which is markedly different compared to other age groups. More than two-thirds (69%) of reported cases were confirmed by a laboratory test, with detection methods including culture isolation, PCR-based detection of bacterial DNA, and detection of specific serum IgM antibodies (PUBMED:23332293). To address this increase in pertussis cases, the Valencian Community implemented a programme of maternal pertussis vaccination during pregnancy starting in January 2015. A case-control study conducted between 1 March 2015 and 29 February 2016 assessed the effectiveness of this vaccination programme. The study included 22 cases of non-vaccinated infants under 3 months of age who tested positive for pertussis by real-time PCR and 66 controls. The adjusted vaccine effectiveness (VE) was calculated to be 90.9%, indicating that maternal vaccination during pregnancy is highly effective in protecting newborns against laboratory-confirmed pertussis infection (PUBMED:28598324). These findings underscore the importance of vaccination programmes for pregnant women to prevent whooping cough in infants and suggest that the increased incidence of pertussis in the Valencian Community may be mitigated through such preventative measures.
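Two simple calculations recur in this entry: an incidence rate per 100,000 derived from a case count and a population denominator (PUBMED:23332293), and vaccine effectiveness computed as VE = 1 - OR from a matched case-control design (PUBMED:28598324). The sketch below reproduces both; the population denominator is a hypothetical illustration, since the abstracts report rates rather than the denominator itself.

```python
# Sketch of the arithmetic behind the figures quoted above. The population
# denominator is a hypothetical illustration; the study used official regional
# population data that the abstract does not report directly.
cases_2011 = 249
population = 5_090_000                      # hypothetical denominator
incidence_per_100k = cases_2011 / population * 100_000
print(f"Incidence: {incidence_per_100k:.2f} per 100,000")  # ~4.89 with this denominator

# Vaccine effectiveness from an odds ratio: VE = 1 - OR. The adjusted OR in the
# study came from multiple conditional logistic regression; 0.091 is simply the
# value consistent with the reported adjusted VE of 90.9%.
adjusted_or = 0.091
print(f"Adjusted VE: {1 - adjusted_or:.1%}")  # 90.9%
```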
Instruction: Is there really a pot of gold at the end of the rainbow? Abstracts: abstract_id: PUBMED:32629526 Is there a Pot of Gold at the End of the Spectrum? Full spectrum flow cytometry: Is there a pot of gold at the end of the spectrum? abstract_id: PUBMED:30463357 One-Pot Synthesis of Multi-Branch Gold Nanoparticles and Investigation of Their SERS Performance. Gold nanoparticles with multiple branches have attracted intensive study for their application in sensing of trace amounts of molecules. Many of the merits of gold nanoparticles in these applications are attributed to the strong localized surface plasmon resonance excited by the incident radiation. However, a facile and flexible way of synthesizing multi-branch gold nanoparticles with tunable localized surface plasmon resonance frequency is still a challenge for the plasmonic research field. Herein, we report an efficient one-pot method for the synthesis of multi-branch gold nanoparticles that resembles a seed-mediated approach while using no further chemicals except chloroauric acid, ascorbic acid and 2-[4-(2-hydroxyethyl)-1-piperazinyl]-ethanesulfonic acid. By controlling the volume of ascorbic acid in the reaction mixture, the morphology and the localized surface plasmon resonance frequency of the synthesized multi-branch gold nanoparticles can be manipulated conveniently. Moreover, using 4-mercaptobenzoic acid as the Raman reporter, the multi-branch gold nanoparticles show superior surface-enhanced Raman spectroscopy characteristics that can potentially be used in chemical and biological sensing. abstract_id: PUBMED:26367015 Gold Nanoparticles Supported on a Layered Double Hydroxide as Efficient Catalysts for the One-Pot Synthesis of Flavones. Flavones are a class of natural products with diverse biological activities and have frequently been synthesized by step-by-step procedures using stoichiometric amounts of reagents. Herein, a catalytic one-pot procedure for the synthesis of flavone and its derivatives is developed. In the presence of gold nanoparticles supported on a Mg-Al layered double hydroxide (Au/LDH), various kinds of flavones can be synthesized starting from 2'-hydroxyacetophenones and benzaldehydes (or benzyl alcohols). The present one-pot procedure consists of a sequence of several reactions, and Au/LDH can catalyze all these different types of reactions. The catalysis is shown to be truly heterogeneous, and Au/LDH can be readily recovered and reused. abstract_id: PUBMED:19331651 Gene therapy: the end of the rainbow? The increased understanding of the molecular basis of oral cancer has led to expectations that correction of the genetic defects will lead to improved treatments. Nevertheless, the first clinical trials for gene therapy of oral cancer occurred 20 years ago, and routine treatment is still not available. The major difficulty is that genes are usually delivered by virus vectors whose effects are weak and temporary. Viruses that replicate would be better, and the field includes many approaches in that direction. If any of these are effective in patients, then gene therapy will become available in the next few years. Without significant advances, however, the treatment of oral cancer by gene therapy will remain as remote as the legendary pot of gold at the end of the rainbow. abstract_id: PUBMED:28106364 Novel Strategy for One-Pot Synthesis of Gold Nanoplates on Carbon Nanotube Sheet As an Effective Flexible SERS Substrate.
In this work, we demonstrate a novel route for one-pot synthesis of two-dimensional gold nanoplates (2-D AuNPLs) on carbon nanotube (CNT) sheet. Well-defined AuNPLs are grafted onto CNT sheet via a facile hydrothermal reduction process, during which bromine ions are employed as the surfactant for gold anisotropic growth. Scanning electron microscopy (SEM) shows large-scale AuNPLs with micrometer-scaled length and sub-100 nm thickness are deposited uniformly on the CNT sheet. Transmission electron microscopy (TEM) and X-ray diffraction (XRD) results confirm the synthesized AuNPLs are single-crystalline with preferential {111} orientation. Based on the CNT sheet/AuNPLs hybrid, we have fabricated a flexible surface-enhanced Raman scattering (SERS) substrate, which can effectively detect the analyte Rhodamine 6G (Rh6G) at the concentration as low as 1 × 10⁻⁷ M. The excellent SERS performance of this novel flexible substrate is mainly attributed to nanoscaled gaps between the neighbors, large surface area with roughness, and their sharp edges and corners. abstract_id: PUBMED:35121507 Ready-to-use room temperature one-pot synthesis of surface-decorated gold nanoparticles with targeting attributes. Gold nanoparticles (AuNPs) can be used in diagnostic and therapeutic applications. The development of facile and fast synthetic approaches is accordingly desirable towards ready-to-use biomedical materials. We report a practical one-pot method for the synthesis in aqueous media and room temperature of surface-decorated AuNPs with enhanced biological responses. The gold ions could be reduced using only polyethyleneimine (PEI) derivatives containing sugar and/or alkyl chains acting simultaneously as reducing and stabilizing agent, without the aid of any other mediator. The process is possibly potentialized by the presence of the amino groups in the polymer chains which further confer colloidal stability. The kinetics of AuNPs nucleation and growth depends on the chemical nature of the polymer chains. Particularly, the presence of lactose moieties conjugated to the PEI chains conducted to surface-decorated AuNPs with low cytotoxicity that are remarkably faster uptaken by HepG2 cells. These cells overexpress asialoglycoprotein (ASGP-R), a galactose receptor. These findings may kick off significant advances towards the practical and ready-to-use manufacturing of functionalized AuNPs towards cell-targeting since the methodology is applicable for a large variety of other ligands that can be conjugated to the same polymer chains. abstract_id: PUBMED:31536763 One-pot synthesis and characterization of ovalbumin-conjugated gold nanoparticles: A comparative study of adjuvanticity against the physical mixture of ovalbumin and gold nanoparticles. Only few adjuvants are licensed for use in humans and there is a need to develop safe and improved vaccine adjuvants. In this study, we report the one-pot synthesis of antigen ovalbumin (OVA)-conjugated gold nanoparticles (OVA@GNPs). A systematical study was performed by comparing OVA@GNPs with the simple mixture of OVA and gold nanoparticles (OVA+GNPs), including their physiochemical properties through spectrometric and electrophoretic analysis, in vitro stability, cytotoxicity and cellular uptake, and in vivo humoral immune responses following subcutaneous and transcutaneous immunization in mice.
The results demonstrate a much stronger interaction between protein and GNPs in OVA@GNPs than OVA+GNPs, which makes OVA@GNPs more stable under in vitro conditions than OVA+GNPs with the ability to induce 4 times higher OVA-specific serum IgG titers following subcutaneous immunization. We also show the dose sparing of OVA@GNPs, as the dosage for aluminum adjuvant required to reach to an equivalent OVA-specific antibody titer was almost five times higher than OVA@GNPs. However, we found that the co-administration of small-sized GNPs had a limited ability for the transcutaneous delivery of OVA. These results demonstrate the potential application of one-pot synthesis approach for producing antigen protein-conjugated gold nanoparticles for vaccine delivery. abstract_id: PUBMED:34714952 Divergent and Modular Synthesis of Terpenoid Scaffolds via a AuI Catalyzed One-Pot Cascade. A one-pot cascade sequence to generate synthetically challenging polycyclic scaffolds is reported utilizing a novel Lewis acid gold catalyst for the key cyclization step, enabling the divergent synthesis of both 6,6,5-tricyclic and 6,6,6,5-tetracyclic cores through both ligand and reaction condition control. We have combined the intrinsic complexity and stereoselectivity of cycloadditions with the electronic and steric properties of gold complexes to selectively generate complex polycyclic scaffolds in a single operation. abstract_id: PUBMED:26696774 End-to-end self-assembly of gold nanorods in isopropanol solution: experimental and theoretical studies. Abstract: We describe here a modification of properties of colloidal gold nanorods (NRs) resulting from the chemical treatment used to carry out their transfer into isopropanol (IPA) solution. The NRs acquire a tendency to attach one to another by their ends (end-to-end assembly). We focus on the investigation of the change in position and shape of the longitudinal surface plasmon (l-SPR) band after self-assembly. The experimental results are supported by a theoretical calculation, which rationalizes the dramatic change in optical properties when the NRs are positioned end-to-end at short distances. The detailed spectroscopic characterization performed at the consecutive stages of transfer of the NRs from water into IPA solution revealed the features of the interaction between the polymers used as ligands and their contribution to the final stage, when the NRs were dispersed in IPA solution. The efficient method of aligning the NRs detailed here may facilitate applications of the self-assembled NRs as building blocks for optical materials and biological sensing. Graphical Abstract: abstract_id: PUBMED:13786512 A pot of gold at the end of a rainbow. N/A Answer: No, the concept of a pot of gold at the end of the rainbow is a myth and part of folklore, not a physical reality. The phrase is often used metaphorically to represent an elusive or unattainable goal. In the context of the provided abstracts, the term "pot of gold" is used metaphorically to discuss the potential benefits and advancements in various scientific fields, such as full spectrum flow cytometry (PUBMED:32629526), gene therapy for oral cancer (PUBMED:19331651), and the synthesis of gold nanoparticles for various applications (PUBMED:30463357, PUBMED:26367015, PUBMED:28106364, PUBMED:35121507, PUBMED:31536763, PUBMED:34714952, PUBMED:26696774). None of these abstracts provide evidence of a literal pot of gold at the end of a rainbow.
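For a rough sense of scale for the SERS detection limit quoted in the abstracts above (PUBMED:28106364), the sketch below converts the 1 × 10⁻⁷ M concentration into a count of analyte molecules in a small probe volume; the 1 µL droplet volume is a hypothetical illustration, not a figure from the paper.

```python
# Back-of-the-envelope molecule count at the quoted SERS detection limit.
# The droplet volume is a hypothetical illustration, not from the paper.
AVOGADRO = 6.022e23             # molecules per mole
concentration_mol_per_l = 1e-7  # 1 x 10^-7 M detection limit for Rh6G
droplet_volume_l = 1e-6         # 1 microliter, expressed in liters
molecules = concentration_mol_per_l * droplet_volume_l * AVOGADRO
print(f"{molecules:.2e} molecules")  # ~6.0e10 molecules in the droplet
```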